en/1719.html.txt

Emotions are biological states associated with the nervous system[1][2][3] brought on by neurophysiological changes variously associated with thoughts, feelings, behavioural responses, and a degree of pleasure or displeasure.[4][5] There is currently no scientific consensus on a definition. Emotions are often intertwined with mood, temperament, personality, disposition, creativity[6][7] and motivation.[8]

Research on emotion has increased significantly over the past two decades with many fields contributing including psychology, neuroscience, affective neuroscience, endocrinology, medicine, history, sociology of emotions, and computer science. The numerous theories that attempt to explain the origin, neurobiology, experience, and function of emotions have only fostered more intense research on this topic. Current areas of research in the concept of emotion include the development of materials that stimulate and elicit emotion. In addition, PET scans and fMRI scans help study the affective picture processes in the brain.[9]

From a purely mechanistic perspective, "Emotions can be defined as a positive or negative experience that is associated with a particular pattern of physiological activity." Emotions produce different physiological, behavioral and cognitive changes. The original role of emotions was to motivate adaptive behaviors that in the past would have contributed to the passing on of genes through survival, reproduction, and kin selection.[10][11]

In some theories, cognition is an important aspect of emotion. Those who act primarily on their emotions may assume that they are not thinking, but mental processes involving cognition are still essential, particularly in the interpretation of events. For example, realizing that we believe we are in a dangerous situation, together with the subsequent arousal of our body's nervous system (rapid heartbeat and breathing, sweating, muscle tension), is integral to the experience of feeling afraid. Other theories, however, claim that emotion is separate from and can precede cognition. Consciously experiencing an emotion is exhibiting a mental representation of that emotion from a past or hypothetical experience, which is linked back to a content state of pleasure or displeasure.[12] The content states are established by verbal explanations of experiences, describing an internal state.[13]

Emotions are complex. According to some theories, they are states of feeling that result in physical and psychological changes that influence our behavior.[5] The physiology of emotion is closely linked to arousal of the nervous system with various states and strengths of arousal relating, apparently, to particular emotions. Emotion is also linked to behavioral tendency. Extroverted people are more likely to be social and express their emotions, while introverted people are more likely to be more socially withdrawn and conceal their emotions. Emotion is often the driving force behind motivation, positive or negative.[14] According to other theories, emotions are not causal forces but simply syndromes of components, which might include motivation, feeling, behavior, and physiological changes, but no one of these components is the emotion. Nor is the emotion an entity that causes these components.[15]

Emotions involve different components, such as subjective experience, cognitive processes, expressive behavior, psychophysiological changes, and instrumental behavior. At one time, academics attempted to identify the emotion with one of the components: William James with a subjective experience, behaviorists with instrumental behavior, psychophysiologists with physiological changes, and so on. More recently, emotion is said to consist of all the components. The different components of emotion are categorized somewhat differently depending on the academic discipline. In psychology and philosophy, emotion typically includes a subjective, conscious experience characterized primarily by psychophysiological expressions, biological reactions, and mental states. A similar multicomponential description of emotion is found in sociology. For example, Peggy Thoits[16] described emotions as involving physiological components, cultural or emotional labels (anger, surprise, etc.), expressive body actions, and the appraisal of situations and contexts.

Human nature and its accompanying bodily sensations have always been of interest to thinkers and philosophers, and this interest has been pursued extensively in both Western and Eastern societies. Emotional states have been associated with the divine and with the enlightenment of the human mind and body.[17] The ever-changing actions of individuals and the variations in their moods have been of great importance to most Western philosophers (Aristotle, Plato, Descartes, Aquinas, Hobbes), leading them to propose vast, often competing theories that sought to explain emotion, the motivators of human action, and the consequences of that action.

In the Age of Enlightenment, the Scottish thinker David Hume[18] proposed a revolutionary argument that sought to explain the main motivators of human action and conduct. He proposed that actions are motivated by "fears, desires, and passions". As he wrote in A Treatise of Human Nature (1739): "Reason alone can never be a motive to any action of the will… it can never oppose passion in the direction of the will… Reason is, and ought to be the slave of the passions, and can never pretend to any other office than to serve and obey them".[19] With these lines Hume sought to convey that reason and subsequent action are subordinate to the desires and experience of the self. Later thinkers would propose that actions and emotions are deeply interrelated with the social, political, historical, and cultural aspects of reality, which would also come to be associated with sophisticated neurological and physiological research on the brain and other parts of the physical body.

The word "emotion" dates back to 1579, when it was adapted from the French word émouvoir, which means "to stir up". The term emotion was introduced into academic discussion as a catch-all term for passions, sentiments and affections.[20] The word "emotion" was coined in the early 1800s by Thomas Brown, and it is around the 1830s that the modern concept of emotion first emerged in the English language.[21] "No one felt emotions before about 1830. Instead they felt other things - "passions", "accidents of the soul", "moral sentiments" - and explained them very differently from how we understand emotions today."[21]

Some cross-cultural studies indicate that the categorization of "emotion" and the classification of basic emotions such as "anger" and "sadness" are not universal, and that the boundaries and domains of these concepts are categorized differently across cultures.[22] However, others argue that there are some universal bases of emotions (see Section 6.1).[23] In psychiatry and psychology, an inability to express or perceive emotion is sometimes referred to as alexithymia.[24]

The Oxford Dictionaries definition of emotion is "A strong feeling deriving from one's circumstances, mood, or relationships with others."[25] Emotions are responses to significant internal and external events.[26]

Emotions can be occurrences (e.g., panic) or dispositions (e.g., hostility), and short-lived (e.g., anger) or long-lived (e.g., grief).[27] Psychotherapist Michael C. Graham describes all emotions as existing on a continuum of intensity.[28] Thus fear might range from mild concern to terror, while shame might range from simple embarrassment to toxic shame.[29] Emotions have been described as consisting of a coordinated set of responses, which may include verbal, physiological, behavioral, and neural mechanisms.[30]

Emotions have been categorized, and some relationships between emotions, as well as some direct opposites, have been identified. Graham differentiates emotions as functional or dysfunctional and argues that all functional emotions have benefits.[31]

In some uses of the word, emotions are intense feelings that are directed at someone or something.[32] On the other hand, emotion can be used to refer to states that are mild (as in annoyed or content) and to states that are not directed at anything (as in anxiety and depression). One line of research looks at the meaning of the word emotion in everyday language and finds that this usage is rather different from that in academic discourse.[33]

In practical terms, Joseph LeDoux has defined emotions as the result of a cognitive and conscious process which occurs in response to a body system response to a trigger.[34]

According to Scherer's Component Process Model (CPM) of emotion,[35] there are five crucial elements of emotion. From the component process perspective, emotional experience requires that all of these processes become coordinated and synchronized for a short period of time, driven by appraisal processes. Although the inclusion of cognitive appraisal as one of the elements is slightly controversial, since some theorists make the assumption that emotion and cognition are separate but interacting systems, the CPM provides a sequence of events that effectively describes the coordination involved during an emotional episode.

Emotion can be differentiated from a number of similar constructs within the field of affective neuroscience.[30]

One view is that emotions facilitate adaptive responses to environmental challenges. Emotions have been described as a result of evolution because they provided good solutions to ancient and recurring problems faced by our ancestors.[37] Emotions can function as a way to communicate what is important to us, such as values and ethics.[38] However, some emotions, such as some forms of anxiety, are sometimes regarded as part of a mental illness and thus possibly of negative value.[39]

A distinction can be made between emotional episodes and emotional dispositions. Emotional dispositions are also comparable to character traits, where someone may be said to be generally disposed to experience certain emotions. For example, an irritable person is generally disposed to feel irritation more easily or quickly than others do. Finally, some theorists place emotions within a more general category of "affective states" where affective states can also include emotion-related phenomena such as pleasure and pain, motivational states (for example, hunger or curiosity), moods, dispositions and traits.[40]

For more than 40 years, Paul Ekman has supported the view that emotions are discrete, measurable, and physiologically distinct. Ekman's most influential work revolved around the finding that certain emotions appeared to be universally recognized, even in cultures that were preliterate and could not have learned associations for facial expressions through media. Another classic study found that when participants contorted their facial muscles into distinct facial expressions (for example, disgust), they reported subjective and physiological experiences that matched the distinct facial expressions. Ekman's facial-expression research examined six basic emotions: anger, disgust, fear, happiness, sadness and surprise.[41] Later in his career,[42] Ekman theorized that other universal emotions may exist beyond these six. In light of this, recent cross-cultural studies led by Daniel Cordaro and Dacher Keltner, both former students of Ekman, extended the list of universal emotions. In addition to the original six, these studies provided evidence for amusement, awe, contentment, desire, embarrassment, pain, relief, and sympathy in both facial and vocal expressions. They also found evidence for boredom, confusion, interest, pride, and shame facial expressions, as well as contempt, interest, relief, and triumph vocal expressions.[43][44][45]

Robert Plutchik agreed with Ekman's biologically driven perspective but developed the "wheel of emotions", suggesting eight primary emotions grouped on a positive or negative basis: joy versus sadness; anger versus fear; trust versus disgust; and surprise versus anticipation.[46] Some basic emotions can be modified to form complex emotions. The complex emotions could arise from cultural conditioning or association combined with the basic emotions. Alternatively, similar to the way primary colors combine, primary emotions could blend to form the full spectrum of human emotional experience. For example, interpersonal anger and disgust could blend to form contempt. Relationships exist between basic emotions, resulting in positive or negative influences.[47]
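
The pairing and blending structure just described can be made concrete with a small sketch. The following Python fragment is purely illustrative: only the anger-plus-disgust blend into contempt is stated in the text above, while the other blend entries and all identifiers (OPPOSITES, BLENDS, blend) are assumptions added for demonstration.

    # Illustrative sketch of Plutchik's wheel: eight primary emotions
    # arranged as four opposing pairs, plus a lookup for pairwise blends.
    OPPOSITES = {
        "joy": "sadness", "sadness": "joy",
        "anger": "fear", "fear": "anger",
        "trust": "disgust", "disgust": "trust",
        "surprise": "anticipation", "anticipation": "surprise",
    }

    # Blends of two primaries into a complex emotion. Only
    # anger + disgust -> contempt comes from the text; the other
    # entries are commonly cited examples, included as assumptions.
    BLENDS = {
        frozenset(["anger", "disgust"]): "contempt",
        frozenset(["joy", "trust"]): "love",
        frozenset(["fear", "surprise"]): "awe",
    }

    def blend(a: str, b: str) -> str:
        """Return the complex emotion formed by two primaries, if known."""
        return BLENDS.get(frozenset([a, b]), "unknown blend")

    print(OPPOSITES["joy"])           # sadness
    print(blend("anger", "disgust"))  # contempt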

Psychologists have used methods such as factor analysis to attempt to map emotion-related responses onto a more limited number of dimensions. Such methods attempt to boil emotions down to underlying dimensions that capture the similarities and differences between experiences.[49] Often, the first two dimensions uncovered by factor analysis are valence (how negative or positive the experience feels) and arousal (how energized or enervated the experience feels). These two dimensions can be depicted on a 2D coordinate map.[50] This two-dimensional map has been theorized to capture one important component of emotion called core affect.[51][52] Core affect is not theorized to be the only component to emotion, but to give the emotion its hedonic and felt energy.
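
A minimal sketch can show how such a two-dimensional valence-arousal map is used in practice. The coordinates and labels below are assumptions invented for illustration, not values from any published study; the point is only that each emotion word becomes a point whose quadrant summarizes its hedonic tone and felt energy.

    # Illustrative core-affect map: each emotion gets (valence, arousal)
    # coordinates in [-1, 1]. The specific numbers are assumptions chosen
    # only to show how a 2D coordinate map of core affect can be read.
    CORE_AFFECT = {
        "excited": (0.8, 0.7),
        "content": (0.7, -0.4),
        "sad": (-0.7, -0.5),
        "angry": (-0.6, 0.8),
    }

    def quadrant(valence: float, arousal: float) -> str:
        """Label the quadrant of the valence-arousal plane."""
        v = "positive" if valence >= 0 else "negative"
        a = "high-arousal" if arousal >= 0 else "low-arousal"
        return f"{v}, {a}"

    for name, (v, a) in CORE_AFFECT.items():
        print(f"{name}: {quadrant(v, a)}")  # e.g. excited: positive, high-arousal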

In Stoic theories, emotion was seen as a hindrance to reason and therefore a hindrance to virtue. Aristotle, by contrast, believed that emotions were an essential component of virtue.[53] In the Aristotelian view all emotions (called passions) corresponded to appetites or capacities. During the Middle Ages, the Aristotelian view was adopted and further developed by scholasticism and by Thomas Aquinas[54] in particular.

In Chinese antiquity, excessive emotion was believed to cause damage to qi, which, in turn, damages the vital organs.[55] The four humours theory made popular by Hippocrates contributed to the study of emotion in the same way that it did for medicine.

In the early 11th century, Avicenna theorized about the influence of emotions on health and behaviors, suggesting the need to manage emotions.[56]

Early modern views on emotion are developed in the works of philosophers such as René Descartes, Niccolò Machiavelli, Baruch Spinoza,[57] Thomas Hobbes[58] and David Hume. In the 19th century emotions were considered adaptive and were studied more frequently from an empiricist psychiatric perspective.

The Christian perspective on emotion presupposes a theistic origin of humanity: God, who created humans, gave them the ability to feel emotion and to interact emotionally. Biblical content expresses that God is a person who feels and expresses emotion. Though a somatic view would place the locus of emotions in the physical body, a Christian theory of emotions would view the body more as a platform for the sensing and expression of emotions; emotions themselves therefore arise from the person, from that which is the imago Dei, or image of God, in humans. In Christian thought, emotions have the potential to be controlled through reasoned reflection, which itself mimics God, who made the mind. The purpose of emotions in human life is therefore summarized in God's call to enjoy Him and creation: humans are to enjoy emotions, benefit from them, and use them to energize behavior.

Perspectives on emotions from evolutionary theory were initiated during the mid- to late 19th century with Charles Darwin's 1872 book The Expression of the Emotions in Man and Animals.[59] Surprisingly, Darwin argued that emotions served no evolved purpose for humans, neither in communication, nor in aiding survival.[60] Darwin largely argued that emotions evolved via the inheritance of acquired characters.[61] He pioneered various methods for studying non-verbal expressions, from which he concluded that some expressions had cross-cultural universality. Darwin also detailed homologous expressions of emotions that occur in animals. This led the way for animal research on emotions and the eventual determination of the neural underpinnings of emotion.

More contemporary views along the evolutionary psychology spectrum posit that both basic emotions and social emotions evolved to motivate (social) behaviors that were adaptive in the ancestral environment.[14] Emotion is an essential part of any human decision-making and planning, and the famous distinction made between reason and emotion is not as clear as it seems.[62] Paul D. MacLean claims that emotion competes with even more instinctive responses, on one hand, and the more abstract reasoning, on the other hand. The increased potential in neuroimaging has also allowed investigation into evolutionarily ancient parts of the brain. Important neurological advances were derived from these perspectives in the 1990s by Joseph E. LeDoux and António Damásio.

Research on social emotion also focuses on the physical displays of emotion including body language of animals and humans (see affect display). For example, spite seems to work against the individual but it can establish an individual's reputation as someone to be feared.[14] Shame and pride can motivate behaviors that help one maintain one's standing in a community, and self-esteem is one's estimate of one's status.[14][63]

Somatic theories of emotion claim that bodily responses, rather than cognitive interpretations, are essential to emotions. The first modern version of such theories came from William James in the 1880s. The theory lost favor in the 20th century, but has regained popularity more recently due largely to theorists such as John Cacioppo,[64] António Damásio,[65] Joseph E. LeDoux[66] and Robert Zajonc[67] who are able to appeal to neurological evidence.[68]

In his 1884 article[69] William James argued that feelings and emotions were secondary to physiological phenomena. In his theory, James proposed that the perception of what he called an "exciting fact" directly led to a physiological response, known as "emotion."[70] To account for different types of emotional experiences, James proposed that stimuli trigger activity in the autonomic nervous system, which in turn produces an emotional experience in the brain. The Danish psychologist Carl Lange also proposed a similar theory at around the same time, and therefore this theory became known as the James–Lange theory. As James wrote, "the perception of bodily changes, as they occur, is the emotion." James further claims that "we feel sad because we cry, angry because we strike, afraid because we tremble, and either we cry, strike, or tremble because we are sorry, angry, or fearful, as the case may be."[69]

An example of this theory in action would be as follows: an emotion-evoking stimulus (snake) triggers a pattern of physiological response (increased heart rate, faster breathing, etc.), which is interpreted as a particular emotion (fear). This theory is supported by experiments in which manipulating the bodily state induces a desired emotional state.[71] Some people may believe that emotions give rise to emotion-specific actions, for example, "I'm crying because I'm sad," or "I ran away because I was scared." The issue with the James–Lange theory is that of causation (bodily states causing emotions and being a priori), not that of the bodily influences on emotional experience (which can be argued and is still quite prevalent today in biofeedback studies and embodiment theory).[72]

Although the theory has mostly been abandoned in its original form, Tim Dalgleish argues that most contemporary neuroscientists have embraced the components of the James–Lange theory of emotions.[73]

The James–Lange theory has remained influential. Its main contribution is the emphasis it places on the embodiment of emotions, especially the argument that changes in the bodily concomitants of emotions can alter their experienced intensity. Most contemporary neuroscientists would endorse a modified James–Lange view in which bodily feedback modulates the experience of emotion. (p. 583)

Walter Bradford Cannon agreed that physiological responses played a crucial role in emotions, but did not believe that physiological responses alone could explain subjective emotional experiences. He argued that physiological responses were too slow and often imperceptible, and that this could not account for the relatively rapid and intense subjective awareness of emotion.[74] He also believed that the richness, variety, and temporal course of emotional experiences could not stem from physiological reactions, which reflected fairly undifferentiated fight-or-flight responses.[75][76] An example of this theory in action is as follows: an emotion-evoking event (snake) simultaneously triggers both a physiological response and a conscious experience of an emotion.

Philip Bard contributed to the theory with his work on animals. Bard found that sensory, motor, and physiological information all had to pass through the diencephalon (particularly the thalamus) before being subjected to any further processing. Therefore, Cannon also argued that it was not anatomically possible for sensory events to trigger a physiological response prior to triggering conscious awareness, and that emotional stimuli had to trigger both the physiological and the experiential aspects of emotion simultaneously.[75]

Stanley Schachter formulated his theory on the earlier work of a Spanish physician, Gregorio Marañón, who injected patients with epinephrine and subsequently asked them how they felt. Marañón found that most of these patients felt something, but in the absence of an actual emotion-evoking stimulus, they were unable to interpret their physiological arousal as an experienced emotion. Schachter agreed that physiological reactions played a big role in emotions. He suggested that physiological reactions contributed to emotional experience by facilitating a focused cognitive appraisal of a given physiologically arousing event, and that this appraisal was what defined the subjective emotional experience. Emotions were thus the result of a two-stage process: general physiological arousal, and the experience of emotion. For example, physiological arousal (a pounding heart) arises in response to an evoking stimulus, such as the sight of a bear in the kitchen; the brain then quickly scans the area to explain the pounding, notices the bear, and consequently interprets the pounding heart as being the result of fearing the bear.[77] With his student, Jerome Singer, Schachter demonstrated that subjects can have different emotional reactions despite being placed into the same physiological state with an injection of epinephrine. Subjects were observed to express either anger or amusement depending on whether another person in the situation (a confederate) displayed that emotion. Hence, the combination of the appraisal of the situation (cognitive) and the participants' receipt of adrenaline or a placebo together determined the response. This experiment has been criticized in Jesse Prinz's (2004) Gut Reactions.[78]

With the two-factor theory now incorporating cognition, several theories began to argue that cognitive activity in the form of judgments, evaluations, or thoughts was entirely necessary for an emotion to occur. One of the main proponents of this view was Richard Lazarus, who argued that emotions must have some cognitive intentionality. The cognitive activity involved in the interpretation of an emotional context may be conscious or unconscious and may or may not take the form of conceptual processing.

Lazarus' theory is very influential; emotion is a disturbance that occurs in the following order: first a cognitive appraisal of the event, then physiological changes, and finally action.

For example: Jenny sees a snake. She appraises it as dangerous (cognitive appraisal), her heart rate and breathing quicken (physiological changes), and she moves away from it (action).

Lazarus stressed that the quality and intensity of emotions are controlled through cognitive processes. These processes underlie coping strategies that form the emotional reaction by altering the relationship between the person and the environment.

George Mandler provided an extensive theoretical and empirical discussion of emotion as influenced by cognition, consciousness, and the autonomic nervous system in two books (Mind and Emotion, 1975,[79] and Mind and Body: Psychology of Emotion and Stress, 1984[80]).

Some theories of emotion argue that cognitive activity in the form of judgments, evaluations, or thoughts is necessary in order for an emotion to occur. A prominent philosophical exponent is Robert C. Solomon (for example, The Passions, Emotions and the Meaning of Life, 1993[81]). Solomon claims that emotions are judgments. He has put forward a more nuanced view which responds to what he has called the 'standard objection' to cognitivism: the idea that a judgment that something is fearsome can occur with or without emotion, so judgment cannot be identified with emotion. The theory proposed by Nico Frijda, in which appraisal leads to action tendencies, is another example.

It has also been suggested that emotions (affect heuristics, feelings and gut-feeling reactions) are often used as shortcuts to process information and influence behavior.[82] The affect infusion model (AIM) is a theoretical model developed by Joseph Forgas in the early 1990s that attempts to explain how emotion and mood interact with one's ability to process information.

Theories dealing with perception use one or multiple perceptions in order to find an emotion.[83] A recent hybrid of the somatic and cognitive theories of emotion is the perceptual theory. This theory is neo-Jamesian in arguing that bodily responses are central to emotions, yet it emphasizes the meaningfulness of emotions, or the idea that emotions are about something, as is recognized by cognitive theories. The novel claim of this theory is that conceptually-based cognition is unnecessary for such meaning. Rather, the bodily changes themselves perceive the meaningful content of the emotion because they are causally triggered by certain situations. In this respect, emotions are held to be analogous to faculties such as vision or touch, which provide information about the relation between the subject and the world in various ways. A sophisticated defense of this view is found in philosopher Jesse Prinz's book Gut Reactions[78] and psychologist James Laird's book Feelings.[71]

Affective events theory is a communication-based theory developed by Howard M. Weiss and Russell Cropanzano (1996)[84] that looks at the causes, structures, and consequences of emotional experience (especially in work contexts). This theory suggests that emotions are influenced and caused by events, which in turn influence attitudes and behaviors. This theoretical frame also emphasizes time, in that human beings experience what they call emotion episodes – a "series of emotional states extended over time and organized around an underlying theme." This theory has been utilized by numerous researchers to better understand emotion from a communicative lens, and was reviewed further by Howard M. Weiss and Daniel J. Beal in their article "Reflections on Affective Events Theory", published in Research on Emotion in Organizations in 2005.[85]

A situated perspective on emotion, developed by Paul E. Griffiths and Andrea Scarantino, emphasizes the importance of external factors in the development and communication of emotion, drawing upon the situationism approach in psychology.[86] This theory is markedly different from both cognitivist and neo-Jamesian theories of emotion, both of which see emotion as a purely internal process, with the environment only acting as a stimulus to the emotion. In contrast, a situationist perspective on emotion views emotion as the product of an organism investigating its environment, and observing the responses of other organisms. Emotion stimulates the evolution of social relationships, acting as a signal to mediate the behavior of other organisms. In some contexts, the expression of emotion (both voluntary and involuntary) could be seen as strategic moves in the transactions between different organisms. The situated perspective on emotion states that conceptual thought is not an inherent part of emotion, since emotion is an action-oriented form of skillful engagement with the world. Griffiths and Scarantino suggested that this perspective on emotion could be helpful in understanding phobias, as well as the emotions of infants and animals.

Emotions can motivate social interactions and relationships and are therefore directly related to basic physiology, particularly to the stress systems. This is important because emotions are related to the anti-stress complex, with an oxytocin-attachment system, which plays a major role in bonding. Emotional phenotype temperaments affect social connectedness and fitness in complex social systems.[87] These characteristics are shared with other species and taxa and are due to the effects of genes and their continuous transmission. Information that is encoded in the DNA sequences provides the blueprint for assembling proteins that make up our cells. Zygotes require genetic information from their parental germ cells, and at every speciation event, heritable traits that have enabled its ancestor to survive and reproduce successfully are passed down along with new traits that could be potentially beneficial to the offspring.

In the five million years since the lineages leading to modern humans and chimpanzees split, only about 1.2% of their genetic material has been modified. This suggests that everything that separates us from chimpanzees must be encoded in that very small amount of DNA, including our behaviors. Students of animal behavior have only identified intraspecific examples of gene-dependent behavioral phenotypes. In voles (Microtus spp.), minor genetic differences have been identified in a vasopressin receptor gene that correspond to major species differences in social organization and the mating system.[88] Another potential example of behavioral differences is the FOXP2 gene, which is involved in the neural circuitry handling speech and language.[89] Its present form in humans differs from that of the chimpanzees by only a few mutations and has been present for about 200,000 years, coinciding with the beginning of modern humans.[90] Speech, language, and social organization are all part of the basis for emotions.

Based on discoveries made through neural mapping of the limbic system, the neurobiological explanation of human emotion is that emotion is a pleasant or unpleasant mental state organized in the limbic system of the mammalian brain. If distinguished from reactive responses of reptiles, emotions would then be mammalian elaborations of general vertebrate arousal patterns, in which neurochemicals (for example, dopamine, noradrenaline, and serotonin) step up or step down the brain's activity level, as visible in body movements, gestures and postures. Emotions can likely be mediated by pheromones (see fear).[36]

For example, the emotion of love is proposed to be the expression of paleocircuits of the mammalian brain (specifically, modules of the cingulate gyrus) which facilitate the care, feeding, and grooming of offspring. Paleocircuits are neural platforms for bodily expression configured before the advent of cortical circuits for speech. They consist of pre-configured pathways or networks of nerve cells in the forebrain, brain stem and spinal cord.

Other emotions, like fear and anxiety, were long thought to be generated exclusively by the most primitive parts of the brain (the brain stem) and to be associated with fight-or-flight behavior; they have also been characterized as adaptive expressions of defensive behavior whenever a threat is encountered. Although defensive behaviors are present in a wide variety of species, Blanchard et al. (2001) found that given stimuli and situations produced a similar pattern of defensive behavior towards a threat in both human and non-human mammals.[91]

Whenever potentially dangerous stimuli are presented, additional brain structures beyond those previously implicated (hippocampus, thalamus, etc.) are activated. This gives the amygdala an important role in coordinating the ensuing behavioral response based on the neurotransmitters that respond to threat stimuli. These biological functions of the amygdala are not limited to "fear conditioning" and the "processing of aversive stimuli" but extend to other components of the amygdala. The amygdala can therefore be regarded as a key structure for understanding potential behavioral responses to dangerous situations in human and non-human mammals.[92]

The motor centers of reptiles react to sensory cues of vision, sound, touch, chemical, gravity, and motion with pre-set body movements and programmed postures. With the arrival of night-active mammals, smell replaced vision as the dominant sense, and a different way of responding arose from the olfactory sense, which is proposed to have developed into mammalian emotion and emotional memory. The mammalian brain invested heavily in olfaction to succeed at night as reptiles slept – one explanation for why olfactory lobes in mammalian brains are proportionally larger than in the reptiles. These odor pathways gradually formed the neural blueprint for what was later to become our limbic brain.[36]

Emotions are thought to be related to certain activities in brain areas that direct our attention, motivate our behavior, and determine the significance of what is going on around us. Pioneering work by Paul Broca (1878),[93] James Papez (1937),[94] and Paul D. MacLean (1952)[95] suggested that emotion is related to a group of structures in the center of the brain called the limbic system, which includes the hypothalamus, cingulate cortex, hippocampi, and other structures. More recent research has shown that some of these limbic structures are not as directly related to emotion as others are while some non-limbic structures have been found to be of greater emotional relevance.

There is ample evidence that the left prefrontal cortex is activated by stimuli that cause positive approach.[96] If attractive stimuli can selectively activate a region of the brain, then logically the converse should hold, that selective activation of that region of the brain should cause a stimulus to be judged more positively. This was demonstrated for moderately attractive visual stimuli[97] and replicated and extended to include negative stimuli.[98]

Two neurobiological models of emotion in the prefrontal cortex made opposing predictions. The valence model predicted that anger, a negative emotion, would activate the right prefrontal cortex. The direction model predicted that anger, an approach emotion, would activate the left prefrontal cortex. The second model was supported.[99]

This still left open the question of whether the opposite of approach in the prefrontal cortex is better described as moving away (direction model), as unmoving but with strength and resistance (movement model), or as unmoving with passive yielding (action tendency model). Support for the action tendency model (passivity related to right prefrontal activity) comes from research on shyness[100] and research on behavioral inhibition.[101] Research that tested the competing hypotheses generated by all four models also supported the action tendency model.[102][103]

Another neurological approach proposed by Bud Craig in 2003 distinguishes two classes of emotion: "classical" emotions such as love, anger and fear that are evoked by environmental stimuli, and "homeostatic emotions" – attention-demanding feelings evoked by body states, such as pain, hunger and fatigue, that motivate behavior (withdrawal, eating or resting in these examples) aimed at maintaining the body's internal milieu at its ideal state.[104]

Derek Denton calls the latter "primordial emotions" and defines them as "the subjective element of the instincts, which are the genetically programmed behavior patterns which contrive homeostasis. They include thirst, hunger for air, hunger for food, pain and hunger for specific minerals etc. There are two constituents of a primordial emotion--the specific sensation which when severe may be imperious, and the compelling intention for gratification by a consummatory act."[105]

Joseph LeDoux differentiates between the human defence system, which has evolved over time, and emotions such as fear and anxiety. He has said that the amygdala may release hormones due to a trigger (such as an innate reaction to seeing a snake), but "then we elaborate it through cognitive and conscious processes."[106]

Lisa Feldman Barrett highlights differences in emotions between different cultures,[107] and says that emotions (such as anxiety) "are not triggered; you create them. They emerge as a combination of the physical properties of your body, a flexible brain that wires itself to whatever environment it develops in, and your culture and upbringing, which provide that environment."[108] She has termed this approach the theory of constructed emotion.

Many different disciplines have produced work on the emotions. The human sciences study the role of emotions in mental processes, disorders, and neural mechanisms. In psychiatry, emotions are examined as part of the discipline's study and treatment of mental disorders in humans. Nursing studies emotions as part of its approach to the provision of holistic health care. Psychology examines emotions from a scientific perspective by treating them as mental processes and behavior, and it explores the underlying physiological and neurological processes. In neuroscience sub-fields such as social neuroscience and affective neuroscience, scientists study the neural mechanisms of emotion by combining neuroscience with the psychological study of personality, emotion, and mood. In linguistics, the expression of emotion may change the meaning of sounds. In education, the role of emotions in relation to learning is examined.

Social sciences often examine emotion for the role that it plays in human culture and social interactions. In sociology, emotions are examined for the role they play in human society, social patterns and interactions, and culture. In anthropology, the study of humanity, scholars use ethnography to undertake contextual analyses and cross-cultural comparisons of a range of human activities. Some anthropology studies examine the role of emotions in human activities. In the field of communication sciences, critical organizational scholars have examined the role of emotions in organizations, from the perspectives of managers, employees, and even customers. A focus on emotions in organizations can be credited to Arlie Russell Hochschild's concept of emotional labor. The University of Queensland hosts EmoNet,[109] an e-mail distribution list representing a network of academics that facilitates scholarly discussion of all matters relating to the study of emotion in organizational settings. The list was established in January 1997 and has over 700 members from across the globe.

In economics, the social science that studies the production, distribution, and consumption of goods and services, emotions are analyzed in some sub-fields of microeconomics, in order to assess the role of emotions on purchase decision-making and risk perception. In criminology, a social science approach to the study of crime, scholars often draw on behavioral sciences, sociology, and psychology; emotions are examined in criminology issues such as anomie theory and studies of "toughness," aggressive behavior, and hooliganism. In law, which underpins civil obedience, politics, economics and society, evidence about people's emotions is often raised in tort law claims for compensation and in criminal law prosecutions against alleged lawbreakers (as evidence of the defendant's state of mind during trials, sentencing, and parole hearings). In political science, emotions are examined in a number of sub-fields, such as the analysis of voter decision-making.

In philosophy, emotions are studied in sub-fields such as ethics, the philosophy of art (for example, sensory–emotional values, and matters of taste and sentimentality), and the philosophy of music (see also Music and emotion). In history, scholars examine documents and other sources to interpret and analyze past activities; speculation on the emotional state of the authors of historical documents is one of the tools of interpretation. In literature and film-making, the expression of emotion is the cornerstone of genres such as drama, melodrama, and romance. In communication studies, scholars study the role that emotion plays in the dissemination of ideas and messages. Emotion is also studied in non-human animals in ethology, a branch of zoology which focuses on the scientific study of animal behavior. Ethology is a combination of laboratory and field science, with strong ties to ecology and evolution. Ethologists often study one type of behavior (for example, aggression) in a number of unrelated animals.

The history of emotions has become an increasingly popular topic recently, with some scholars arguing that it is an essential category of analysis, not unlike class, race, or gender. Historians, like other social scientists, assume that emotions, feelings and their expressions are regulated in different ways by both different cultures and different historical times, and the constructivist school of history even claims that some sentiments and meta-emotions, for example Schadenfreude, are learnt and not only regulated by culture. Historians of emotion trace and analyse the changing norms and rules of feeling, while examining emotional regimes, codes, and lexicons from social, cultural, or political history perspectives. Others focus on the history of medicine, science, or psychology. What somebody can and may feel (and show) in a given situation, towards certain people or things, depends on social norms and rules, and is thus historically variable and open to change.[110] Several research centers have opened in the past few years in Germany, England, Spain,[111] Sweden, and Australia.

Furthermore, research on historical trauma suggests that some traumatic emotions can be passed down from parents to offspring, and even to the second and third generations, presented as examples of transgenerational trauma.

A common way in which emotions are conceptualized in sociology is in terms of their multidimensional characteristics, including cultural or emotional labels (for example, anger, pride, fear, happiness), physiological changes (for example, increased perspiration, changes in pulse rate), expressive facial and body movements (for example, smiling, frowning, baring teeth), and appraisals of situational cues.[16] One comprehensive theory of emotional arousal in humans has been developed by Jonathan Turner (2007; 2009).[112][113] Two of the key eliciting factors for the arousal of emotions within this theory are expectation states and sanctions. When people enter a situation or encounter with certain expectations for how the encounter should unfold, they will experience different emotions depending on the extent to which expectations for Self, other and situation are met or not met. People can also provide positive or negative sanctions directed at Self or other, which also trigger different emotional experiences in individuals. Turner analyzed a wide range of emotion theories across different fields of research including sociology, psychology, evolutionary science, and neuroscience. Based on this analysis, he identified four emotions that all researchers consider to be founded on human neurology: assertive-anger, aversion-fear, satisfaction-happiness, and disappointment-sadness. These four categories are called primary emotions, and there is some agreement amongst researchers that these primary emotions become combined to produce more elaborate and complex emotional experiences. These more elaborate emotions are called first-order elaborations in Turner's theory, and they include sentiments such as pride, triumph, and awe. Emotions can also be experienced at different levels of intensity, so that feelings of concern are a low-intensity variation of the primary emotion aversion-fear, whereas depression is a higher-intensity variant.

Attempts are frequently made to regulate emotion according to the conventions of the society and of the situation, based on many (sometimes conflicting) demands and expectations which originate from various entities. The expression of anger is in many cultures discouraged in girls and women to a greater extent than in boys and men (the notion being that an angry man has a valid complaint that needs to be rectified, while an angry woman is hysterical or oversensitive, and her anger is somehow invalid), while the expression of sadness or fear is discouraged in boys and men relative to girls and women (attitudes implicit in phrases like "man up" or "don't be a sissy").[114][115] Expectations attached to social roles, such as "acting as a man" rather than as a woman, and the accompanying "feeling rules" contribute to the differences in expression of certain emotions. Some cultures encourage or discourage happiness, sadness, or jealousy, and the free expression of the emotion of disgust is considered socially unacceptable in most cultures. Some social institutions are seen as based on certain emotions, such as love in the case of the contemporary institution of marriage. In advertising, such as health campaigns and political messages, emotional appeals are commonly found. Recent examples include no-smoking health campaigns and political campaigns emphasizing the fear of terrorism.[116]

Sociological attention to emotion has varied over time. Émile Durkheim (1915/1965)[117] wrote about the collective effervescence, or emotional energy, that was experienced by members of totemic rituals in Australian Aboriginal society. He explained how the heightened state of emotional energy achieved during totemic rituals transported individuals above themselves, giving them the sense that they were in the presence of a higher power, a force, that was embedded in the sacred objects that were worshipped. These feelings of exaltation, he argued, ultimately led people to believe that there were forces that governed sacred objects.

In the 1990s, sociologists focused on different aspects of specific emotions and how these emotions were socially relevant. For Cooley (1992),[118] pride and shame were the most important emotions that drive people to take various social actions. During every encounter, he proposed that we monitor ourselves through the "looking glass" that the gestures and reactions of others provide. Depending on these reactions, we either experience pride or shame, and this results in particular paths of action. Retzinger (1991)[119] conducted studies of married couples who experienced cycles of rage and shame. Drawing predominantly on Goffman and Cooley's work, Scheff (1990)[120] developed a microsociological theory of the social bond. The formation or disruption of social bonds is dependent on the emotions that people experience during interactions.

Subsequent to these developments, Randall Collins (2004)[121] formulated his interaction ritual theory by drawing on Durkheim's work on totemic rituals, which was extended by Goffman (1964/2013; 1967)[122][123] into everyday focused encounters. Based on interaction ritual theory, we experience different levels or intensities of emotional energy during face-to-face interactions. Emotional energy is considered to be a feeling of confidence to take action and a boldness that one experiences when charged up from the collective effervescence generated during group gatherings that reach high levels of intensity.

There is a growing body of research applying the sociology of emotion to understanding the learning experiences of students during classroom interactions with teachers and other students (for example, Milne & Otieno, 2007;[124] Olitsky, 2007;[125] Tobin, et al., 2013;[126] Zembylas, 2002[127]). These studies show that learning subjects like science can be understood in terms of classroom interaction rituals that generate emotional energy and collective states of emotional arousal like emotional climate.

Apart from interaction ritual traditions of the sociology of emotion, other approaches have been classed into one of six other categories.[113]

These traditions in the sociology of emotion sometimes conceptualise emotion in different ways and at other times in complementary ways. Many of these different approaches were synthesized by Turner (2007) in his sociological theory of human emotions, in an attempt to produce one comprehensive sociological account that draws on developments from many of the above traditions.[112]

Emotion regulation refers to the cognitive and behavioral strategies people use to influence their own emotional experience.[128] For example, one behavioral strategy is avoiding a situation in order to avoid the unwanted emotions it provokes (trying not to think about the situation, doing distracting activities, etc.).[129] Different schools of psychotherapy approach the regulation of emotion differently, depending on whether their general emphasis is on the cognitive components of emotion, on the discharge of physical energy, or on the symbolic movement and facial expression components of emotion. Cognitively oriented schools, such as rational emotive behavior therapy, approach emotions via their cognitive components. Yet others approach emotions via their symbolic movement and facial expression components (as in contemporary Gestalt therapy).[130]

Research on emotions reveals the strong presence of cross-cultural differences in emotional reactions, and that emotional reactions are likely to be culture-specific.[131] In strategic settings, cross-cultural research on emotions is required for understanding the psychological situation of a given population or of specific actors. This implies the need to comprehend the current emotional state, mental disposition or other behavioral motivation of a target audience located in a different culture, founded on its national political, social, economic, and psychological peculiarities but also subject to the influence of circumstances and events.[132] There are many cultural variations in emotions. Trnka et al. (2018) proposed a framework which conceptually distinguishes five main components of cultural complexity relating to emotions: "1) emotion language, 2) conceptual knowledge about emotions, 3) emotion-related values, 4) feelings rules, i.e. norms for subjective experience, and 5) display rules, i.e. norms for emotional expression."[133]

In the 2000s, research in computer science, engineering, psychology and neuroscience has been aimed at developing devices that recognize human affect display and model emotions.[134] In computer science, affective computing is a branch of the study and development of artificial intelligence that deals with the design of systems and devices that can recognize, interpret, and process human emotions. It is an interdisciplinary field spanning computer sciences, psychology, and cognitive science.[135] While the origins of the field may be traced as far back as to early philosophical enquiries into emotion,[69] the more modern branch of computer science originated with Rosalind Picard's 1995 paper[136] on affective computing.[137][138] Detecting emotional information begins with passive sensors which capture data about the user's physical state or behavior without interpreting the input. The data gathered is analogous to the cues humans use to perceive emotions in others. Another area within affective computing is the design of computational devices proposed to exhibit either innate emotional capabilities or that are capable of convincingly simulating emotions. Emotional speech processing recognizes the user's emotional state by analyzing speech patterns. The detection and processing of facial expression or body gestures is achieved through detectors and sensors.
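
A toy example can suggest what "analyzing speech patterns" involves at the lowest level. The sketch below is a simplification under stated assumptions, not any deployed system: it computes two classic acoustic features (root-mean-square energy and zero-crossing rate) from a synthetic waveform and applies an invented threshold rule to guess arousal.

    import numpy as np

    # Toy emotional-speech feature extraction. The features are standard
    # in speech processing, but the thresholds and the synthetic "speech"
    # signals below are illustrative assumptions only.
    def rms_energy(signal: np.ndarray) -> float:
        """Root-mean-square energy (loudness proxy) of the signal."""
        return float(np.sqrt(np.mean(signal ** 2)))

    def zero_crossing_rate(signal: np.ndarray) -> float:
        """Fraction of adjacent samples that change sign (pitch proxy)."""
        return float(np.mean(np.abs(np.diff(np.sign(signal))) > 0))

    def arousal_guess(signal: np.ndarray) -> str:
        """Crude rule: loud, rapidly varying speech suggests high arousal."""
        if rms_energy(signal) > 0.3 and zero_crossing_rate(signal) > 0.05:
            return "high arousal"
        return "low arousal"

    # Synthetic stand-ins for one second of speech sampled at 16 kHz.
    t = np.linspace(0, 1, 16000)
    excited = 0.8 * np.sin(2 * np.pi * 440 * t)  # loud, higher frequency
    calm = 0.1 * np.sin(2 * np.pi * 110 * t)     # quiet, lower frequency
    print(arousal_guess(excited))  # high arousal
    print(arousal_guess(calm))     # low arousal

Real systems replace the threshold rule with statistical or neural classifiers trained on labeled speech, but the pipeline shape (signal, features, decision) is the same.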

Emotion affects the way autobiographical memories are encoded and retrieved. Emotional memories are reactivated more often, are remembered better, and have more attention devoted to them.[139] Through remembering our past achievements and failures, autobiographical memories affect how we perceive and feel about ourselves.[140]

In the late 19th century, the most influential theorists were William James (1842–1910) and Carl Lange (1834–1900). James was an American psychologist and philosopher who wrote about educational psychology, psychology of religious experience/mysticism, and the philosophy of pragmatism. Lange was a Danish physician and psychologist. Working independently, they developed the James–Lange theory, a hypothesis on the origin and nature of emotions. The theory states that within human beings, as a response to experiences in the world, the autonomic nervous system creates physiological events such as muscular tension, a rise in heart rate, perspiration, and dryness of the mouth. Emotions, then, are feelings which come about as a result of these physiological changes, rather than being their cause.[141]

Silvan Tomkins (1911–1991) developed affect theory and script theory. Affect theory introduced the concept of basic emotions, and was based on the idea that the dominance of affect, which he called the affect system, was the motivating force in human life.[142]

Some of the most influential deceased theorists on emotion from the 20th century include Magda B. Arnold (1903–2002), an American psychologist who developed the appraisal theory of emotions;[143] Richard Lazarus (1922–2002), an American psychologist who specialized in emotion and stress, especially in relation to cognition; Herbert A. Simon (1916–2001), who included emotions into decision making and artificial intelligence; Robert Plutchik (1928–2006), an American psychologist who developed a psychoevolutionary theory of emotion;[144] Robert Zajonc (1923–2008), a Polish–American social psychologist who specialized in social and cognitive processes such as social facilitation; Robert C. Solomon (1942–2007), an American philosopher who contributed to the theories on the philosophy of emotions with books such as What Is An Emotion?: Classic and Contemporary Readings (2003);[145] Peter Goldie (1946–2011), a British philosopher who specialized in ethics, aesthetics, emotion, mood and character; Nico Frijda (1927–2015), a Dutch psychologist who advanced the theory that human emotions serve to promote a tendency to undertake actions that are appropriate in the circumstances, detailed in his book The Emotions (1986);[146] and Jaak Panksepp (1943–2017), an Estonian-born American psychologist, psychobiologist, neuroscientist and pioneer in affective neuroscience.

Influential theorists who are still active include psychologists, neurologists, philosophers, and sociologists.

en/172.html.txt
+
+
+ The International Phonetic Alphabet (IPA) is an alphabetic system of phonetic notation based primarily on the Latin alphabet. It was devised by the International Phonetic Association in the late 19th century as a standardized representation of the sounds of spoken language.[1] The IPA is used by lexicographers, foreign language students and teachers, linguists, speech-language pathologists, singers, actors, constructed language creators and translators.[2][3]
+
+ The IPA is designed to represent only those qualities of speech that are part of oral language: phones, phonemes, intonation and the separation of words and syllables.[1] To represent additional qualities of speech, such as tooth gnashing, lisping, and sounds made with a cleft lip and cleft palate, an extended set of symbols, the extensions to the International Phonetic Alphabet, may be used.[2]
+
+ IPA symbols are composed of one or more elements of two basic types, letters and diacritics. For example, the sound of the English letter ⟨t⟩ may be transcribed in IPA with a single letter, [t], or with a letter plus diacritics, [t̺ʰ], depending on how precise one wishes to be.[note 1] Often, slashes are used to signal broad or phonemic transcription; thus, /t/ is less specific than, and could refer to, either [t̺ʰ] or [t], depending on the context and language.
+
+ Occasionally letters or diacritics are added, removed or modified by the International Phonetic Association. As of the most recent change in 2005,[4] there are 107 letters, 52 diacritics and four prosodic marks in the IPA. These are shown in the current IPA chart, also posted below in this article and at the website of the IPA.[5]
+
+ In 1886, a group of French and British language teachers, led by the French linguist Paul Passy, formed what would come to be known from 1897 onwards as the International Phonetic Association (in French, l'Association phonétique internationale).[6] Their original alphabet was based on a spelling reform for English known as the Romic alphabet, but in order to make it usable for other languages, the values of the symbols were allowed to vary from language to language.[7] For example, the sound [ʃ] (the sh in shoe) was originally represented with the letter ⟨c⟩ in English, but with the digraph ⟨ch⟩ in French.[6] However, in 1888, the alphabet was revised so as to be uniform across languages, thus providing the base for all future revisions.[6][8] The idea of making the IPA was first suggested by Otto Jespersen in a letter to Paul Passy. It was developed by Alexander John Ellis, Henry Sweet, Daniel Jones, and Passy.[9]
+
+ Since its creation, the IPA has undergone a number of revisions. After revisions and expansions from the 1890s to the 1940s, the IPA remained primarily unchanged until the Kiel Convention in 1989. A minor revision took place in 1993 with the addition of four letters for mid central vowels[2] and the removal of letters for voiceless implosives.[10] The alphabet was last revised in May 2005 with the addition of a letter for a labiodental flap.[11] Apart from the addition and removal of symbols, changes to the IPA have consisted largely of renaming symbols and categories and in modifying typefaces.[2]
+
+ Extensions to the International Phonetic Alphabet for speech pathology were created in 1990 and officially adopted by the International Clinical Phonetics and Linguistics Association in 1994.[12]
+
+ The general principle of the IPA is to provide one letter for each distinctive sound (speech segment), although this practice is not followed if the sound itself is complex.[13] This means that:
+
+ The alphabet is designed for transcribing sounds (phones), not phonemes, though it is used for phonemic transcription as well. A few letters that did not indicate specific sounds have been retired (⟨ˇ⟩, once used for the 'compound' tone of Swedish and Norwegian, and ⟨ƞ⟩, once used for the moraic nasal of Japanese), though one remains: ⟨ɧ⟩, used for the sj-sound of Swedish. When the IPA is used for phonemic transcription, the letter–sound correspondence can be rather loose. For example, ⟨c⟩ and ⟨ɟ⟩ are used in the IPA Handbook for /t͡ʃ/ and /d͡ʒ/.
+
+ Among the symbols of the IPA, 107 letters represent consonants and vowels, 31 diacritics are used to modify these, and 19 additional signs indicate suprasegmental qualities such as length, tone, stress, and intonation.[note 3] These are organized into a chart; the chart displayed here is the official chart as posted at the website of the IPA.
+
+ The letters chosen for the IPA are meant to harmonize with the Latin alphabet.[note 4] For this reason, most letters are either Latin or Greek, or modifications thereof. Some letters are neither: for example, the letter denoting the glottal stop, ⟨ʔ⟩, has the form of a dotless question mark, and derives originally from an apostrophe. A few letters, such as that of the voiced pharyngeal fricative, ⟨ʕ⟩, were inspired by other writing systems (in this case, the Arabic letter ﻉ‎ ʿayn).[10]
+
+ Despite its preference for harmonizing with the Latin script, the International Phonetic Association has occasionally admitted other letters. For example, before 1989, the IPA letters for click consonants were ⟨ʘ⟩, ⟨ʇ⟩, ⟨ʗ⟩, and ⟨ʖ⟩, all of which were derived either from existing IPA letters, or from Latin and Greek letters. However, except for ⟨ʘ⟩, none of these letters were widely used among Khoisanists or Bantuists, and as a result they were replaced by the more widespread symbols ⟨ʘ⟩, ⟨ǀ⟩, ⟨ǃ⟩, ⟨ǂ⟩, and ⟨ǁ⟩ at the IPA Kiel Convention in 1989.[14]
+
+ Although the IPA diacritics are fully featural, there is little systemicity in the letter forms. A retroflex articulation is consistently indicated with a right-swinging tail, as in ⟨ɖ ɳ ʂ⟩, and implosion by a top hook, ⟨ɠ ɗ ɓ⟩, but other pseudo-featural elements are due to haphazard derivation and coincidence. For example, all nasal consonants but uvular ⟨ɴ⟩ are based on the form ⟨n⟩: ⟨m ɱ n ɳ ɲ ŋ⟩. However, the similarity between ⟨m⟩ and ⟨n⟩ is a historical accident; ⟨ɲ⟩ and ⟨ŋ⟩ are derived from ligatures of gn and ng, and ⟨ɱ⟩ is an ad hoc imitation of ⟨ŋ⟩.
+
+ Some of the new letters were ordinary Latin letters turned 180 degrees, such as ɐ ɔ ə ɟ ɥ ɯ ɹ ʇ ʌ ʍ ʎ (turned a c e f h m r t v w y). This was easily done in the era of mechanical typesetting, and had the advantage of not requiring the casting of special type for IPA symbols.
+
+ Full capital letters are not used as IPA symbols. They are, however, often used for archiphonemes and for natural classes of phonemes (that is, as wildcards). Such usage is not part of the IPA or even standardized, and may be ambiguous between authors, but it is commonly used in conjunction with the IPA. (The extIPA chart, for example, uses wildcards in its illustrations.) Capital letters are also basic to the Voice Quality Symbols sometimes used in conjunction with the IPA.
+
+ As wildcards, ⟨C⟩ for {consonant} and ⟨V⟩ for {vowel} are ubiquitous. Other common capital-letter symbols are ⟨T⟩ for {tone/accent} (tonicity), ⟨N⟩ for {nasal}, ⟨P⟩ for {plosive}, ⟨F⟩ for {fricative}, ⟨S⟩ for {sibilant},[15] ⟨G⟩ for {glide/approximant}, ⟨L⟩ for {liquid}, ⟨R⟩ for {rhotic} or {resonant} (sonorant), ⟨Ʞ⟩ for {click}, ⟨A, E, O, Ʉ⟩ for {open, front, back, close vowel} and ⟨B, D, J (or Ɉ), K, Q, Φ, H⟩ for {labial, alveolar, post-alveolar/palatal, velar, uvular, pharyngeal, glottal consonant}, respectively, and ⟨X⟩ for any sound. For example, the possible syllable shapes of Mandarin can be abstracted as ranging from /V/ (an atonic vowel) to /CGVNᵀ/ (a consonant-glide-vowel-nasal syllable with tone). The letters can be modified with IPA diacritics, for example ⟨Cʼ⟩ for {ejective}, ⟨Ƈ⟩ for {implosive}, ⟨N͡C⟩ or ⟨ᴺC⟩ for {prenasalized consonant}, ⟨Ṽ⟩ for {nasal vowel}, ⟨S̬⟩ for {voiced sibilant}, ⟨N̥⟩ for {voiceless nasal}, ⟨P͡F⟩ or ⟨PF⟩ for {affricate}, ⟨Cʲ⟩ for {palatalized consonant} and ⟨D̪⟩ for {dental consonant}. In speech pathology, capital letters represent indeterminate sounds, and may be superscripted to indicate they are weakly articulated: e.g. [ᴰ] is a weak indeterminate alveolar, [ᴷ] a weak indeterminate velar.[16]
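+
+ To make the wildcard notation concrete, the sketch below reduces a segmented transcription to C/G/V/N classes and tests it against the Mandarin syllable template mentioned above (tone ignored). The class membership sets are illustrative assumptions, not a real inventory.
+
+ import re
+
+ # Illustrative (incomplete) segment classes.
+ CLASSES = {**{c: "C" for c in "pbtdkgmnszfl"},
+            **{g: "G" for g in "jwɥ"},
+            **{v: "V" for v in "aeiouəɤɛ"}}
+
+ def shape(segments):
+     cls = [CLASSES.get(s, "?") for s in segments]
+     if segments and segments[-1] in "nŋ":
+         cls[-1] = "N"                     # a syllable-final nasal is the coda N
+     return "".join(cls)
+
+ MANDARIN = re.compile(r"^C?G?VN?$")       # /V/ up to /CGVN/
+
+ for syll in ["a", "ma", "mjɛn", "wan"]:
+     s = shape(list(syll))
+     print(syll, s, bool(MANDARIN.match(s)))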
+
+ Typical examples of archiphonemic use of capital letters are ⟨I⟩ for the Turkish harmonic vowel set {i y ɯ u},[17] ⟨D⟩ for the conflated flapped middle consonant of American English writer and rider, and ⟨N⟩ for the homorganic syllable-coda nasal of languages such as Spanish (essentially equivalent to the wild-card usage of the letter).
+
+ ⟨V⟩, ⟨F⟩ and ⟨C⟩ have different meanings as Voice Quality Symbols, where they stand for "voice" (generally meaning secondary articulation rather than phonetic voicing), "falsetto" and "creak". They may take diacritics that indicate what kind of voice quality an utterance has, and may be used to extract a suprasegmental feature that occurs on all susceptible segments in a stretch of IPA. For instance, the transcription of Scottish Gaelic [kʷʰuˣʷt̪ʷs̟ʷ] 'cat' and [kʷʰʉˣʷt͜ʃʷ] 'cats' (Islay dialect) can be made more economical by extracting the suprasegmental labialization of the words: Vʷ[kʰuˣt̪s̟] and Vʷ[kʰʉˣt͜ʃ].[18]
+
+ The International Phonetic Alphabet is based on the Latin alphabet, using as few non-Latin forms as possible.[6] The Association created the IPA so that the sound values of most consonant letters taken from the Latin alphabet would correspond to "international usage".[6] Hence, the letters ⟨b⟩, ⟨d⟩, ⟨f⟩, (hard) ⟨ɡ⟩, (non-silent) ⟨h⟩, (unaspirated) ⟨k⟩, ⟨l⟩, ⟨m⟩, ⟨n⟩, (unaspirated) ⟨p⟩, (voiceless) ⟨s⟩, (unaspirated) ⟨t⟩, ⟨v⟩, ⟨w⟩, and ⟨z⟩ have the values used in English; and the vowel letters from the Latin alphabet (⟨a⟩, ⟨e⟩, ⟨i⟩, ⟨o⟩, ⟨u⟩) correspond to the (long) sound values of Latin: [i] is like the vowel in machine, [u] is as in rule, etc. Other letters may differ from English, but are used with these values in other European languages, such as ⟨j⟩, ⟨r⟩, and ⟨y⟩.
+
+ This inventory was extended by using small-capital and cursive forms, diacritics and rotation. There are also several symbols derived or taken from the Greek alphabet, though the sound values may differ. For example, ⟨ʋ⟩ is a vowel in Greek, but an only indirectly related consonant in the IPA. For most of these, subtly different glyph shapes have been devised for the IPA, namely ⟨ɑ⟩, ⟨ꞵ⟩, ⟨ɣ⟩, ⟨ɛ⟩, ⟨ɸ⟩, ⟨ꭓ⟩, and ⟨ʋ⟩, which are encoded in Unicode separately from their parent Greek letters, though one of them – ⟨θ⟩ – is not, while Greek ⟨β⟩ and ⟨χ⟩ are generally used for Latin ⟨ꞵ⟩ and ⟨ꭓ⟩.[19]
+
+ The sound values of modified Latin letters can often be derived from those of the original letters.[20] For example, letters with a rightward-facing hook at the bottom represent retroflex consonants; and small capital letters usually represent uvular consonants. Apart from the fact that certain kinds of modification to the shape of a letter generally correspond to certain kinds of modification to the sound represented, there is no way to deduce the sound represented by a symbol from its shape (as for example in Visible Speech) nor even any systematic relation between signs and the sounds they represent (as in Hangul).
+
+ Beyond the letters themselves, there are a variety of secondary symbols which aid in transcription. Diacritic marks can be combined with IPA letters to transcribe modified phonetic values or secondary articulations. There are also special symbols for suprasegmental features such as stress and tone that are often employed.
+
+ There are two principal types of brackets used to set off IPA transcriptions: square brackets [ ] for phonetic transcription and slashes / / for phonemic transcription.
+
+ Other conventions are less commonly seen:
+
+ All three of the above are provided by the IPA Handbook. The following are not, but may be seen in IPA transcription:
+
+ IPA letters have cursive forms designed for use in manuscripts and when taking field notes.
+
+ In the early stages of the alphabet, the typographic variants of g, opentail ⟨ɡ⟩ and looptail ⟨g⟩, represented different values, but are now regarded as equivalents. Opentail ⟨ɡ⟩ has always represented a voiced velar plosive, while looptail ⟨g⟩ was distinguished from ⟨ɡ⟩ and represented a voiced velar fricative from 1895 to 1900.[29][30] Subsequently, ⟨ǥ⟩ represented the fricative, until 1931 when it was replaced again by ⟨ɣ⟩.[31]
+
+ In 1948, the Council of the Association recognized ⟨ɡ⟩ and ⟨g⟩ as typographic equivalents,[32] and this decision was reaffirmed in 1993.[33] While the 1949 Principles of the International Phonetic Association recommended the use of ⟨g⟩ for a velar plosive and ⟨ɡ⟩ for an advanced one for languages where it is preferable to distinguish the two, such as Russian,[34] this practice never caught on.[35] The 1999 Handbook of the International Phonetic Association, the successor to the Principles, abandoned the recommendation and acknowledged both shapes as acceptable variants.[36]
+
+ The International Phonetic Alphabet is occasionally modified by the Association. After each modification, the Association provides an updated simplified presentation of the alphabet in the form of a chart. (See History of the IPA.) Not all aspects of the alphabet can be accommodated in a chart of the size published by the IPA. The alveolo-palatal and epiglottal consonants, for example, are not included in the consonant chart for reasons of space rather than of theory (two additional columns would be required, one between the retroflex and palatal columns and the other between the pharyngeal and glottal columns), and the lateral flap would require an additional row for that single consonant, so they are listed instead under the catchall block of "other symbols".[37] The indefinitely large number of tone letters would make a full accounting impractical even on a larger page, and only a few examples are shown.
+
+ The procedure for modifying the alphabet or the chart is to propose the change in the Journal of the IPA. (See, for example, August 2008 on an open central unrounded vowel and August 2011 on central approximants.)[38] Reactions to the proposal may be published in the same or subsequent issues of the Journal (as in August 2009 on the open central vowel).[39] A formal proposal is then put to the Council of the IPA[40] – which is elected by the membership[41] – for further discussion and a formal vote.[42][43]
+
+ Only changes to the alphabet or chart that have been approved by the Council can be considered part of the official IPA. Nonetheless, many users of the alphabet, including the leadership of the Association itself, make personal changes or additions in their own practice, either for convenience in the broad phonetic or phonemic transcription of a particular language (see "Illustrations of the IPA" for individual languages in the Handbook, which for example may use ⟨/c/⟩ as a phonemic symbol for what is phonetically realized as [tʃ]),[44] or because they object to some aspect of the official version.
+
+ Although the IPA offers over 160 symbols for transcribing speech, only a relatively small subset of these will be used to transcribe any one language. It is possible to transcribe speech with various levels of precision. A precise phonetic transcription, in which sounds are described in a great deal of detail, is known as a narrow transcription. A coarser transcription which ignores some of this detail is called a broad transcription. Both are relative terms, and both are generally enclosed in square brackets.[1] Broad phonetic transcriptions may restrict themselves to easily heard details, or only to details that are relevant to the discussion at hand, and may differ little if at all from phonemic transcriptions, but they make no theoretical claim that all the distinctions transcribed are necessarily meaningful in the language.
+
+ For example, the English word little may be transcribed broadly using the IPA as /ˈlɪtəl/, and this broad (imprecise) transcription is a more or less accurate description of many pronunciations. A narrower transcription may focus on individual or dialectal details: [ˈɫɪɾɫ] in General American, [ˈlɪʔo] in Cockney, or [ˈɫɪːɫ] in Southern US English.
+
+ It is customary to use simpler letters, without many diacritics, in phonemic transcriptions. The choice of IPA letters may reflect the theoretical claims of the author, or merely be a convenience for typesetting. For instance, in English, either the vowel of pick or the vowel of peak may be transcribed as /i/ (for the pairs /pik, piːk/ or /pɪk, pik/), and neither is identical to the vowel of the French word pique which is also generally transcribed /i/. That is, letters between slashes do not have absolute values, something true of broader phonetic approximations as well. A narrow transcription may, however, be used to distinguish them: [pʰɪk], [pʰiːk], [pikʲ].
+
+ Although IPA is popular for transcription by linguists, American linguists often alternate use of the IPA with Americanist phonetic notation or use the IPA together with some nonstandard symbols, for reasons including reducing the error rate on reading handwritten transcriptions or avoiding perceived awkwardness of IPA in some situations. The exact practice may vary somewhat between languages and even individual researchers, so authors are generally encouraged to include a chart or other explanation of their choices.[45]
+
+ Some language study programs use the IPA to teach pronunciation. For example, in Russia (and earlier in the Soviet Union) and mainland China, textbooks for children[46] and adults[47] for studying English and French consistently use the IPA. English teachers and textbooks in Taiwan tend to use the Kenyon and Knott system, a slight typographical variant of the IPA first used in the 1944 Pronouncing Dictionary of American English.
+
+ Many British dictionaries, including the Oxford English Dictionary and some learner's dictionaries such as the Oxford Advanced Learner's Dictionary and the Cambridge Advanced Learner's Dictionary, now use the International Phonetic Alphabet to represent the pronunciation of words.[48] However, most American (and some British) volumes use one of a variety of pronunciation respelling systems, intended to be more comfortable for readers of English. For example, the respelling systems in many American dictionaries (such as Merriam-Webster) use ⟨y⟩ for IPA [j] and ⟨sh⟩ for IPA [ʃ], reflecting common representations of those sounds in written English,[49] using only letters of the English Roman alphabet and variations of them. (In IPA, [y] represents the sound of the French ⟨u⟩ (as in tu), and [sh] represents the pair of sounds in grasshopper.)
+
+ The IPA is also not universal among dictionaries in languages other than English. Monolingual dictionaries of languages with generally phonemic orthographies generally do not bother with indicating the pronunciation of most words, and tend to use respelling systems for words with unexpected pronunciations. Dictionaries produced in Israel use the IPA rarely and sometimes use the Hebrew alphabet for transcription of foreign words. Monolingual Hebrew dictionaries use pronunciation respelling for words with unusual spelling; for example, the Even-Shoshan Dictionary respells תָּכְנִית as תּוֹכְנִית because this word uses kamatz katan. Bilingual dictionaries that translate from foreign languages into Russian usually employ the IPA, but monolingual Russian dictionaries occasionally use pronunciation respelling for foreign words; for example, Sergey Ozhegov's dictionary adds нэ́ in brackets for the French word пенсне (pince-nez) to indicate that the final е does not iotate the preceding н.
+
+ The IPA is more common in bilingual dictionaries, but there are exceptions here too. Mass-market bilingual Czech dictionaries, for instance, tend to use the IPA only for sounds not found in the Czech language.[50]
+
+ IPA letters have been incorporated into the alphabets of various languages, notably via the Africa Alphabet in many sub-Saharan languages such as Hausa, Fula, Akan, Gbe languages, Manding languages, Lingala, etc. This has created the need for capital variants. For example, Kabiyè of northern Togo has Ɖ ɖ, Ŋ ŋ, Ɣ ɣ, Ɔ ɔ, Ɛ ɛ, Ʋ ʋ. These, and others, are supported by Unicode, but appear in Latin ranges other than the IPA extensions.
+
+ In the IPA itself, however, only lower-case letters are used. The 1949 edition of the IPA handbook indicated that an asterisk ⟨*⟩ may be prefixed to indicate that a word is a proper name,[51] but this convention was not included in the 1999 Handbook.
+
+ IPA has widespread use among classical singers during preparation, as they are frequently required to sing in a variety of foreign languages; it is also taught by vocal coaches, who use it to perfect their students' diction and to improve tone quality and tuning overall.[52] Opera librettos are authoritatively transcribed in IPA, such as Nico Castel's volumes[53] and Timothy Cheek's book Singing in Czech.[54] Opera singers' ability to read IPA was used by the site Visual Thesaurus, which employed several opera singers "to make recordings for the 150,000 words and phrases in VT's lexical database ... for their vocal stamina, attention to the details of enunciation, and most of all, knowledge of IPA".[55]
+
+ The International Phonetic Association organizes the letters of the IPA into three categories: pulmonic consonants, non-pulmonic consonants, and vowels.[56][57]
+
+ Pulmonic consonant letters are arranged singly or in pairs of voiceless (tenuis) and voiced sounds, with these then grouped in columns from front (labial) sounds on the left to back (glottal) sounds on the right. In official publications by the IPA, two columns are omitted to save space, with the letters listed among 'other symbols',[58] and with the remaining consonants arranged in rows from full closure (occlusives: stops and nasals), to brief closure (vibrants: trills and taps), to partial closure (fricatives) and minimal closure (approximants), again with a row left out to save space. In the table below, a slightly different arrangement is made: All pulmonic consonants are included in the pulmonic-consonant table, and the vibrants and laterals are separated out so that the rows reflect the common lenition pathway of stop → fricative → approximant, as well as the fact that several letters pull double duty as both fricative and approximant; affricates may be created by joining stops and fricatives from adjacent cells. Shaded cells represent articulations that are judged to be impossible.
+
+ Vowel letters are also grouped in pairs—of unrounded and rounded vowel sounds—with these pairs also arranged from front on the left to back on the right, and from maximal closure at top to minimal closure at bottom. No vowel letters are omitted from the chart, though in the past some of the mid central vowels were listed among the 'other symbols'.
+
+ Each character is assigned a number, to prevent confusion between similar characters (such as ɵ and θ, ɤ and ɣ, or ʃ and ʄ) in such situations as the printing of manuscripts. The categories of sounds are assigned different ranges of numbers.[59]
+
+ The numbers are assigned to sounds and to symbols, e.g. 304 is the open front unrounded vowel, 415 is the centralization diacritic. Together, they form a symbol that represents the open central unrounded vowel, [ä].
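+
+ The combination can be reproduced directly with Unicode combining characters, as in this small sketch; note that the IPA numbers (304, 415) are catalogue numbers in the IPA's own scheme, not Unicode code points.
+
+ import unicodedata
+
+ # IPA number 304 (open front unrounded vowel) combined with IPA number
+ # 415 (centralization diacritic) yields the open central unrounded [ä].
+ base = "a"                 # IPA 304
+ centralization = "\u0308"  # combining diaeresis, IPA 415
+
+ symbol = base + centralization
+ print(symbol)              # ä
+
+ for ch in symbol:
+     print(f"U+{ord(ch):04X}", unicodedata.name(ch))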
+
+ A pulmonic consonant is a consonant made by obstructing the glottis (the space between the vocal cords) or oral cavity (the mouth) and either simultaneously or subsequently letting out air from the lungs. Pulmonic consonants make up the majority of consonants in the IPA, as well as in human language. All consonants in the English language fall into this category.[60]
+
+ The pulmonic consonant table, which includes most consonants, is arranged in rows that designate manner of articulation, meaning how the consonant is produced, and columns that designate place of articulation, meaning where in the vocal tract the consonant is produced. The main chart includes only consonants with a single place of articulation.
+
+ Non-pulmonic consonants are sounds whose airflow is not dependent on the lungs. These include clicks (found in the Khoisan languages of Africa), implosives (found in languages such as Sindhi, Saraiki, Swahili and Vietnamese), and ejectives (found in many Amerindian and Caucasian languages).
+
+ Affricates and co-articulated stops are represented by two letters joined by a tie bar, either above or below the letters.[64] The six most common affricates are optionally represented by ligatures, though this is no longer official IPA usage,[1] because a great number of ligatures would be required to represent all affricates this way. Alternatively, a superscript notation for a consonant release is sometimes used to transcribe affricates, for example tˢ for t͡s, paralleling kˣ ~ k͡x. The letters for the palatal plosives c and ɟ are often used as a convenience for t͡ʃ and d͡ʒ or similar affricates, even in official IPA publications, so they must be interpreted with care.
+
+ Co-articulated consonants are sounds that involve two simultaneous places of articulation (are pronounced using two parts of the vocal tract). In English, the [w] in "went" is a coarticulated consonant, being pronounced by rounding the lips and raising the back of the tongue. Similar sounds are [ʍ] and [ɥ].
+
+ The IPA defines a vowel as a sound which occurs at a syllable center.[66] Below is a chart depicting the vowels of the IPA. The IPA maps the vowels according to the position of the tongue.
+
+ The vertical axis of the chart is mapped by vowel height. Vowels pronounced with the tongue lowered are at the bottom, and vowels pronounced with the tongue raised are at the top. For example, [ɑ] (the first vowel in father) is at the bottom because the tongue is lowered in this position. However, [i] (the vowel in "meet") is at the top because the sound is said with the tongue raised to the roof of the mouth.
+
+ In a similar fashion, the horizontal axis of the chart is determined by vowel backness. Vowels with the tongue moved towards the front of the mouth (such as [ɛ], the vowel in "met") are to the left in the chart, while those in which it is moved to the back (such as [ʌ], the vowel in "but") are placed to the right in the chart.
+
+ In places where vowels are paired, the right represents a rounded vowel (in which the lips are rounded) while the left is its unrounded counterpart.
+
+ Diphthongs are typically specified with a non-syllabic diacritic, as in ⟨uɪ̯⟩ or ⟨u̯ɪ⟩, or with a superscript for the on- or off-glide, as in ⟨uᶦ⟩ or ⟨ᵘɪ⟩. Sometimes a tie bar is used, especially if it is difficult to tell if the diphthong is characterized by an on-glide, an off-glide or is variable: ⟨u͡ɪ⟩.
+
+ Diacritics are used for phonetic detail. They are added to IPA letters to indicate a modification or specification of that letter's normal pronunciation.[67]
+
+ By being made superscript, any IPA letter may function as a diacritic, conferring elements of its articulation to the base letter. (See secondary articulation for a list of superscript IPA letters supported by Unicode.) Those superscript letters listed below are specifically provided for by the IPA; others include ⟨tˢ⟩ ([t] with fricative release), ⟨ᵗs⟩ ([s] with affricate onset), ⟨ⁿd⟩ (prenasalized [d]), ⟨bʱ⟩ ([b] with breathy voice), ⟨mˀ⟩ (glottalized [m]), ⟨sᶴ⟩ ([s] with a flavor of [ʃ]), ⟨oᶷ⟩ ([o] with diphthongization), ⟨ɯᵝ⟩ (compressed [ɯ]). Superscript diacritics placed after a letter are ambiguous between simultaneous modification of the sound and phonetic detail at the end of the sound. For example, labialized ⟨kʷ⟩ may mean either simultaneous [k] and [w] or else [k] with a labialized release. Superscript diacritics placed before a letter, on the other hand, normally indicate a modification of the onset of the sound (⟨mˀ⟩ glottalized [m], ⟨ˀm⟩ [m] with a glottal onset).
+
+ Subdiacritics (diacritics normally placed below a letter) may be moved above a letter to avoid conflict with a descender, as in voiceless ⟨ŋ̊⟩.[67] The raising and lowering diacritics have optional forms ⟨˔⟩, ⟨˕⟩ that avoid descenders.
+
+ The state of the glottis can be finely transcribed with diacritics. A series of alveolar plosives ranging from an open to a closed glottis phonation are:
+
+ Additional diacritics are provided by the Extensions to the IPA for speech pathology.
+
+ These symbols describe the features of a language above the level of individual consonants and vowels, such as prosody, tone, length, and stress, which often operate on syllables, words, or phrases: that is, elements such as the intensity, pitch, and gemination of the sounds of a language, as well as the rhythm and intonation of speech.[68] Although most of these symbols indicate distinctions that are phonemic at the word level, symbols also exist for intonation on a level greater than that of the word.[68] Various ligatures of tone letters are used in the IPA Handbook despite not being found on the simplified official IPA chart.
+
+ * The IPA provides six transcriptional conventions for tone letters: with or without a stave, facing left or facing right from a stave, and placed before or after the word or syllable. That is, an [e] with extra-high tone may be transcribed ⟨˥e⟩, ⟨꜒e⟩, ⟨¯e⟩, ⟨e˥⟩, ⟨e꜒⟩, ⟨e¯⟩.[70] Only left-facing staved letters are shown on the Chart, and in practice it is currently more common for tone letters to occur after the syllable/word than before, though historically they came before, as the stress marks still do. As of 2020, the old staveless letters do not have full Unicode support.
+
+ Finer distinctions of tone may be indicated by combining the tone diacritics and tone letters shown above, though not all IPA fonts support this. The four additional rising and falling tones supported by diacritics are high/mid rising ɔ᷄, ɔ˧˥, low rising ɔ᷅, ɔ˩˧, high falling ɔ᷇, ɔ˥˧, and low/mid falling ɔ᷆, ɔ˧˩. That is, tone diacritics only support contour tones across three levels (high, mid, low), despite supporting five levels for register tones. For other contour tones, tone letters must be used: ɔ˨˦, ɔ˥˦, etc. For more complex (peaking and dipping, etc.) tones, one may combine three or four tone diacritics in any permutation,[70] though in practice only generic peaking ɔ᷈ and dipping ɔ᷉ combinations are used. For finer detail, tone letters are again required (ɔ˧˥˧, ɔ˩˨˩, ɔ˦˩˧, ɔ˨˩˦, etc.) The correspondence between tone diacritics and tone letters is therefore only approximate.
+
+ A work-around for diacritics sometimes seen when a language has more than one rising or falling tone, and the author wishes to avoid the poorly legible diacritics ɔ᷄, ɔ᷅, ɔ᷇, ɔ᷆ but does not wish to completely abandon the IPA, is to restrict generic rising ɔ̌ and falling ɔ̂ to the higher-pitched of the rising and falling tones, say ɔ˥˧ and ɔ˧˥, and to use the old (retired) IPA subscript diacritics ɔ̗ and ɔ̖ for the lower-pitched rising and falling tones, say ɔ˩˧ and ɔ˧˩. When a language has four or six level tones, the two mid tones are sometimes transcribed as high-mid ɔ̍ (non-standard) and low-mid ɔ̄.
+
+ A stress mark typically appears before the stressed syllable, and thus marks the syllable break as well as stress. However, occasionally the stress mark is placed immediately before the stressed vowel, after any consonantal syllable onset.[71] In such transcriptions, the stress mark does not function as a mark of the syllable boundary.
+
+ Tone letters generally appear after each syllable, for a language with syllable tone (⟨a˧vɔ˥˩⟩), or after the phonological word, for a language with word tone (⟨avɔ˧˥˩⟩). However, in older versions of the IPA, ad hoc tone marks were placed before the syllable, the same position as used to mark stress, and this convention is still sometimes seen (⟨˧a˥˩vɔ⟩, ⟨˧˥˩avɔ⟩).
+
+ There are three boundary markers, ⟨.⟩ for a syllable break, ⟨|⟩ for a minor prosodic break and ⟨‖⟩ for a major prosodic break. The tags 'minor' and 'major' are intentionally ambiguous. Depending on need, 'minor' may vary from a foot break to a continuing–prosodic-unit boundary (equivalent to a comma), and while 'major' is often any intonation break, it may be restricted to a final–prosodic-unit boundary (equivalent to a period). Although not part of the IPA, the following boundary symbols are often used in conjunction with the IPA: ⟨μ⟩ for a mora or mora boundary, ⟨σ⟩ for a syllable or syllable boundary, ⟨#⟩ for a word boundary, ⟨$⟩ for a phrase or intermediate boundary and ⟨%⟩ for a prosodic boundary. For example, C# is a word-final consonant, %V a post-pausa vowel, and T% an IU-final tone (edge tone).
+
+ IPA diacritics may be doubled to indicate an extra degree of the feature indicated. This is a productive process, but apart from extra-high and extra-low tones ⟨ə̋, ə̏⟩ being marked by doubled high- and low-tone diacritics, and the major prosodic break ⟨‖⟩ being marked as a double minor break ⟨|⟩, it is not specifically regulated by the IPA. (Note that transcription marks are similar: double slashes indicate extra (morpho)-phonemic, double square brackets especially precise, and double parentheses especially unintelligible.)[citation needed]
+
+ For example, the stress mark may be doubled to indicate an extra degree of stress, such as prosodic stress in English.[72] An example in French, with a single stress mark for normal prosodic stress at the end of each prosodic unit (marked as a minor prosodic break), and a double stress mark for contrastive/emphatic stress: [ˈˈɑ̃ːˈtre | məˈsjø ‖ ˈˈvwala maˈdam ‖] Entrez monsieur, voilà madame.[73] Similarly, a doubled secondary stress mark ⟨ˌˌ⟩ is commonly used for tertiary (extra-light) stress.[74][full citation needed]
+
+ Length is commonly extended by repeating the length mark, as in English shhh! [ʃːːː], or for "overlong" segments in Estonian:
+
+ (Normally additional degrees of length are handled by the extra-short or half-long diacritics, but in the Estonian examples, the first two cases are analyzed as simply short and long.)
+
+ Occasionally other diacritics are doubled:
+
+ The IPA once had parallel symbols from alternative proposals, but in most cases eventually settled on one for each sound. The rejected symbols are now considered obsolete. An example is the vowel letter ⟨ɷ⟩, rejected in favor of ⟨ʊ⟩. Letters for affricates and sounds with inherent secondary articulation have also been mostly rejected, with the idea that such features should be indicated with tie bars or diacritics: ⟨ƍ⟩ for [zʷ] is one. In addition, the rare voiceless implosives, ⟨ƥ ƭ ƈ ƙ ʠ⟩, have been dropped and are now usually written ⟨ɓ̥ ɗ̥ ʄ̊ ɠ̊ ʛ̥⟩. A retired set of click letters, ⟨ʇ, ʗ, ʖ⟩, is still sometimes seen, as the official pipe letters ⟨ǀ, ǃ, ǁ⟩ may cause problems with legibility, especially when used with brackets ([ ] or / /), the letter ⟨l⟩, or the prosodic marks ⟨|, ‖⟩ (for this reason, some publications which use the current IPA pipe letters disallow IPA brackets).[81]
+
+ Individual non-IPA letters may find their way into publications that otherwise use the standard IPA. This is especially common with:
+
+ In addition, there are typewriter substitutions for when IPA support is not available, such as capital ⟨I, E, U, O, A⟩ for [ɪ, ɛ, ʊ, ɔ, ɑ].
+
+ The "Extensions to the IPA", often abbreviated as "extIPA" and sometimes called "Extended IPA", are symbols whose original purpose was to accurately transcribe disordered speech. At the Kiel Convention in 1989, a group of linguists drew up the initial extensions,[82] which were based on the previous work of the PRDS (Phonetic Representation of Disordered Speech) Group in the early 1980s.[83] The extensions were first published in 1990, then modified, and published again in 1994 in the Journal of the International Phonetic Association, when they were officially adopted by the ICPLA.[84] While the original purpose was to transcribe disordered speech, linguists have used the extensions to designate a number of unique sounds within standard communication, such as hushing, gnashing teeth, and smacking lips.[2]
+
+ In addition to the Extensions to the IPA, there are the conventions of the Voice Quality Symbols, which, besides voice quality in the phonetic sense, include a number of symbols for additional airstream mechanisms and secondary articulations.
+
+ The blank cells on the IPA chart can be filled without too much difficulty if the need arises. Some ad hoc letters have appeared in the literature for the retroflex lateral flap and the retroflex clicks (having the expected forms of ⟨ɺ⟩ and ⟨ǃ⟩ plus a retroflex tail; the analogous ⟨ᶑ⟩ for a retroflex implosive is even mentioned in the IPA Handbook), the voiceless lateral fricatives (now provided for by the extIPA), the epiglottal trill (arguably covered by the generally-trilled epiglottal "fricatives" ⟨ʜ ʢ⟩), the labiodental plosives (⟨ȹ ȸ⟩ in some old Bantuist texts) and the near-close central vowels (⟨ᵻ ᵿ⟩ in some publications). Diacritics can duplicate some of those, such as ⟨ɭ̆⟩ for the lateral flap, ⟨p̪ b̪⟩ for the labiodental plosives and ⟨ɪ̈ ʊ̈⟩ for the central vowels, and are able to fill in most of the remainder of the charts.[85] If a sound cannot be transcribed, an asterisk ⟨*⟩ may be used, either as a letter or as a diacritic (as in ⟨k*⟩ sometimes seen for the Korean "fortis" velar).
+
+ Representations of consonant sounds outside of the core set are created by adding diacritics to letters with similar sound values. The Spanish bilabial and dental approximants are commonly written as lowered fricatives, [β̞] and [ð̞] respectively.[86] Similarly, voiced lateral fricatives would be written as raised lateral approximants, [ɭ˔ ʎ̝ ʟ̝]. A few languages such as Banda have a bilabial flap as the preferred allophone of what is elsewhere a labiodental flap. It has been suggested that this be written with the labiodental flap letter and the advanced diacritic, [ⱱ̟].[87]
+
+ Similarly, a labiodental trill would be written [ʙ̪] (bilabial trill and the dental sign), and labiodental stops [p̪ b̪] rather than with the ad hoc letters sometimes found in the literature. Other taps can be written as extra-short plosives or laterals, e.g. [ɟ̆ ɢ̆ ʟ̆], though in some cases the diacritic would need to be written below the letter. A retroflex trill can be written as a retracted [r̠], just as non-subapical retroflex fricatives sometimes are. The remaining consonants, the uvular laterals (ʟ̠ etc.) and the palatal trill, while not strictly impossible, are very difficult to pronounce and are unlikely to occur even as allophones in the world's languages.
+
+ The vowels are similarly manageable by using diacritics for raising, lowering, fronting, backing, centering, and mid-centering.[88] For example, the unrounded equivalent of [ʊ] can be transcribed as mid-centered [ɯ̽], and the rounded equivalent of [æ] as raised [ɶ̝] or lowered [œ̞] (though for those who conceive of vowel space as a triangle, simple [ɶ] already is the rounded equivalent of [æ]). True mid vowels are lowered [e̞ ø̞ ɘ̞ ɵ̞ ɤ̞ o̞] or raised [ɛ̝ œ̝ ɜ̝ ɞ̝ ʌ̝ ɔ̝], while centered [ɪ̈ ʊ̈] and [ä] (or, less commonly, [ɑ̈]) are near-close and open central vowels, respectively. The only known vowels that cannot be represented in this scheme are vowels with unexpected roundedness, which would require a dedicated diacritic, such as protruded ⟨ʏʷ⟩ and compressed ⟨uᵝ⟩ (or ⟨ɪʷ⟩ and ⟨ɯᶹ⟩).
+
+ An IPA symbol is often distinguished from the sound it is intended to represent, since there is not necessarily a one-to-one correspondence between letter and sound in broad transcription, making articulatory descriptions such as "mid front rounded vowel" or "voiced velar stop" unreliable. While the Handbook of the International Phonetic Association states that no official names exist for its symbols, it admits the presence of one or two common names for each.[89] The symbols also have nonce names in the Unicode standard. In some cases, the Unicode names and the IPA names do not agree. For example, IPA calls ɛ "epsilon", but Unicode calls it "small letter open E".
+
+ The traditional names of the Latin and Greek letters are usually used for unmodified letters.[note 5] Letters which are not directly derived from these alphabets, such as [ʕ], may have a variety of names, sometimes based on the appearance of the symbol or on the sound that it represents. In Unicode, some of the letters of Greek origin have Latin forms for use in IPA; the others use the letters from the Greek section.
+
+ For diacritics, there are two methods of naming. For traditional diacritics, the IPA notes the name in a well known language; for example, é is acute, based on the name of the diacritic in English and French. Non-traditional diacritics are often named after objects they resemble, so d̪ is called bridge.
+
+ Geoffrey Pullum and William Ladusaw list a variety of names in use for IPA symbols, both current and retired, in addition to names of many other non-IPA phonetic symbols in their Phonetic Symbol Guide.[10]
+
+ IPA typeface support is increasing, and is now included in several typefaces such as the Times New Roman versions that come with various recent computer operating systems. Diacritics are not always properly rendered, however.
+
+ IPA typefaces that are freely available online include:
+
+ Some commercial IPA-compatible typefaces include:
+
+ These all include several ranges of characters in addition to the IPA. Modern Web browsers generally do not need any configuration to display these symbols, provided that a typeface capable of doing so is available to the operating system.
+
+ Several systems have been developed that map the IPA symbols to ASCII characters. Notable systems include SAMPA and X-SAMPA. The usage of mapping systems in on-line text has to some extent been adopted in the context of input methods, allowing convenient keying of IPA characters that would otherwise be unavailable on standard keyboard layouts.
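+
+ A minimal sketch of how such a mapping works is shown below: a greedy, longest-match-first substitution from X-SAMPA to IPA. Only a handful of real X-SAMPA correspondences are included; the full table is far larger.
+
+ # Greedy longest-match X-SAMPA -> IPA conversion (tiny excerpt of the table).
+ XSAMPA = {
+     "tS": "t\u0361\u0283",  # t͡ʃ
+     "dZ": "d\u0361\u0292",  # d͡ʒ
+     "S": "\u0283",          # ʃ
+     "Z": "\u0292",          # ʒ
+     "T": "\u03b8",          # θ
+     "D": "\u00f0",          # ð
+     "N": "\u014b",          # ŋ
+     "@": "\u0259",          # ə
+     "I": "\u026a",          # ɪ
+     "U": "\u028a",          # ʊ
+     "E": "\u025b",          # ɛ
+     '"': "\u02c8",          # ˈ (primary stress)
+     ":": "\u02d0",          # ː (length)
+ }
+
+ def xsampa_to_ipa(text: str) -> str:
+     keys = sorted(XSAMPA, key=len, reverse=True)  # longest matches first
+     out, i = [], 0
+     while i < len(text):
+         for k in keys:
+             if text.startswith(k, i):
+                 out.append(XSAMPA[k])
+                 i += len(k)
+                 break
+         else:
+             out.append(text[i])  # pass through unmapped characters
+             i += 1
+     return "".join(out)
+
+ print(xsampa_to_ipa('"tSIli'))  # -> ˈt͡ʃɪli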
+
+ Online IPA keyboard utilities[90] are available, and they cover the complete range of IPA symbols and diacritics. In April 2019, Google's Gboard for Android and iOS added an IPA keyboard to its platform.[91][92]
+
+ Symbols to the right in a cell are voiced, to the left are voiceless. Shaded areas denote articulations judged impossible.
+
+ Vowels beside dots are: unrounded • rounded
en/1720.html.txt ADDED
@@ -0,0 +1,252 @@
+
+
+ Augustus (Imperator Caesar divi filius Augustus; 23 September 63 BC – 19 August AD 14) was a Roman statesman and military leader who became the first emperor of the Roman Empire, reigning from 27 BC until his death in AD 14.[nb 1] He was the first ruler of the Julio-Claudian dynasty. His status as the founder of the Roman Principate has consolidated an enduring legacy as one of the most effective and controversial leaders in human history.[1][2] The reign of Augustus initiated an era of relative peace known as the Pax Romana. The Roman world was largely free from large-scale conflict for more than two centuries, despite continuous wars of imperial expansion on the Empire's frontiers and the year-long civil war known as the "Year of the Four Emperors" over the imperial succession.
+
+ Augustus was born Gaius Octavius into an old and wealthy equestrian branch of the plebeian gens Octavia. His maternal great-uncle Julius Caesar was assassinated in 44 BC, and Octavius was named in Caesar's will as his adopted son and heir, taking the name Octavian (Latin: Gaius Julius Caesar Octavianus). Along with Mark Antony and Marcus Lepidus, he formed the Second Triumvirate to defeat the assassins of Caesar. Following their victory at the Battle of Philippi, the Triumvirate divided the Roman Republic among themselves and ruled as de facto dictators. The Triumvirate was eventually torn apart by the competing ambitions of its members. Lepidus was driven into exile and stripped of his position, and Antony committed suicide following his defeat at the Battle of Actium by Octavian in 31 BC.
+
+ After the demise of the Second Triumvirate, Augustus restored the outward façade of the free Republic, with governmental power vested in the Roman Senate, the executive magistrates, and the legislative assemblies. In reality, however, he retained his autocratic power over the Republic. By law, Augustus held a collection of powers granted to him for life by the Senate, including supreme military command, and those of tribune and censor. It took several years for Augustus to develop the framework within which a formally republican state could be led under his sole rule. He rejected monarchical titles, and instead called himself Princeps Civitatis ("First Citizen"). The resulting constitutional framework became known as the Principate, the first phase of the Roman Empire.
+
+ Augustus dramatically enlarged the Empire, annexing Egypt, Dalmatia, Pannonia, Noricum, and Raetia, expanding possessions in Africa, and completing the conquest of Hispania, but suffered a major setback in Germania. Beyond the frontiers, he secured the Empire with a buffer region of client states and made peace with the Parthian Empire through diplomacy. He reformed the Roman system of taxation, developed networks of roads with an official courier system, established a standing army, established the Praetorian Guard, created official police and fire-fighting services for Rome, and rebuilt much of the city during his reign. Augustus died in AD 14 at the age of 75, probably from natural causes. However, there were unconfirmed rumors that his wife Livia poisoned him. He was succeeded as emperor by his adopted son Tiberius (also stepson and former son-in-law).
+
+ As a consequence of Roman customs, society, and personal preference, Augustus (/ɔːˈɡʌstəs, əˈ-/ aw-GUST-əs, ə-, Latin: [au̯ˈɡʊstʊs]) was known by many names throughout his life:
+
+ While his paternal family was from the Volscian town of Velletri, approximately 40 kilometres (25 mi) to the south-east of Rome, Augustus was born in the city of Rome on 23 September 63 BC.[12] He was born at Ox Head, a small property on the Palatine Hill, very close to the Roman Forum. He was given the name Gaius Octavius Thurinus, his cognomen possibly commemorating his father's victory at Thurii over a rebellious band of slaves which occurred a few years after his birth.[13][14] Suetonius wrote: "There are many indications that the Octavian family was in days of old a distinguished one at Velitrae; for not only was a street in the most frequented part of town long ago called Octavian, but an altar was shown there besides, consecrated by an Octavius. This man was leader in a war with a neighbouring town ..."[15]
+
+ Due to the crowded nature of Rome at the time, Octavius was taken to his father's home village at Velletri to be raised. Octavius mentions his father's equestrian family only briefly in his memoirs. His paternal great-grandfather Gaius Octavius was a military tribune in Sicily during the Second Punic War. His grandfather had served in several local political offices. His father, also named Gaius Octavius, had been governor of Macedonia. His mother, Atia, was the niece of Julius Caesar.[16][17]
+
+ In 59 BC, when he was four years old, his father died.[18] His mother married a former governor of Syria, Lucius Marcius Philippus.[19] Philippus claimed descent from Alexander the Great, and was elected consul in 56 BC. Philippus never had much of an interest in young Octavius. Because of this, Octavius was raised by his grandmother, Julia, the sister of Julius Caesar. Julia died in 52 or 51 BC, and Octavius delivered the funeral oration for his grandmother.[20][21] From this point, his mother and stepfather took a more active role in raising him. He donned the toga virilis four years later,[22] and was elected to the College of Pontiffs in 47 BC.[23][24] The following year he was put in charge of the Greek games that were staged in honor of the Temple of Venus Genetrix, built by Julius Caesar.[24] According to Nicolaus of Damascus, Octavius wished to join Caesar's staff for his campaign in Africa, but gave way when his mother protested.[25] In 46 BC, she consented for him to join Caesar in Hispania, where he planned to fight the forces of Pompey, Caesar's late enemy, but Octavius fell ill and was unable to travel.
+
+ When he had recovered, he sailed to the front, but was shipwrecked; after coming ashore with a handful of companions, he crossed hostile territory to Caesar's camp, which impressed his great-uncle considerably.[22] Velleius Paterculus reports that after that time, Caesar allowed the young man to share his carriage.[26] When back in Rome, Caesar deposited a new will with the Vestal Virgins, naming Octavius as the prime beneficiary.[27]
+
+ Octavius was studying and undergoing military training in Apollonia, Illyria, when Julius Caesar was killed on the Ides of March (15 March) 44 BC. He rejected the advice of some army officers to take refuge with the troops in Macedonia and sailed to Italy to ascertain whether he had any potential political fortunes or security.[28] Caesar had no living legitimate children under Roman law,[nb 3] and so had adopted Octavius, his grand-nephew, making him his primary heir.[29] Mark Antony later charged that Octavian had earned his adoption by Caesar through sexual favours, though Suetonius describes Antony's accusation as political slander.[30] This form of slander was popular during this time in the Roman Republic to demean and discredit political opponents by accusing them of having an inappropriate sexual affair.[31][32] After landing at Lupiae near Brundisium, Octavius learned the contents of Caesar's will, and only then did he decide to become Caesar's political heir as well as heir to two-thirds of his estate.[24][28][33]
+
+ Upon his adoption, Octavius assumed his great-uncle's name Gaius Julius Caesar. Roman citizens adopted into a new family usually retained their old nomen in cognomen form (e.g., Octavianus for one who had been an Octavius, Aemilianus for one who had been an Aemilius, etc.). However, though some of his contemporaries did,[34] there is no evidence that Octavius ever himself officially used the name Octavianus, as it would have made his modest origins too obvious.[35][36][37] Historians usually refer to the new Caesar as Octavian during the time between his adoption and his assumption of the name Augustus in 27 BC in order to avoid confusing the dead dictator with his heir.[38]
+
+ Octavian could not rely on his limited funds to make a successful entry into the upper echelons of the Roman political hierarchy.[39] After a warm welcome by Caesar's soldiers at Brundisium,[40] Octavian demanded a portion of the funds that were allotted by Caesar for the intended war against the Parthian Empire in the Middle East.[39] This amounted to 700 million sesterces stored at Brundisium, the staging ground in Italy for military operations in the east.[41]
+
+ A later senatorial investigation into the disappearance of the public funds took no action against Octavian, since he subsequently used that money to raise troops against the Senate's arch enemy Mark Antony.[40] Octavian made another bold move in 44 BC when, without official permission, he appropriated the annual tribute that had been sent from Rome's Near Eastern province to Italy.[36][42]
+
+ Octavian began to bolster his personal forces with Caesar's veteran legionaries and with troops designated for the Parthian war, gathering support by emphasizing his status as heir to Caesar.[28][43] On his march to Rome through Italy, Octavian's presence and newly acquired funds attracted many, winning over Caesar's former veterans stationed in Campania.[36] By June, he had gathered an army of 3,000 loyal veterans, paying each a salary of 500 denarii.[44][45][46]
+
+ Arriving in Rome on 6 May 44 BC, Octavian found consul Mark Antony, Caesar's former colleague, in an uneasy truce with the dictator's assassins. They had been granted a general amnesty on 17 March, yet Antony had succeeded in driving most of them out of Rome with an inflammatory eulogy at Caesar's funeral, mounting public opinion against the assassins.[36]
+
+ Mark Antony was amassing political support, but Octavian still had opportunity to rival him as the leading member of the faction supporting Caesar. Mark Antony had lost the support of many Romans and supporters of Caesar when he initially opposed the motion to elevate Caesar to divine status.[47] Octavian failed to persuade Antony to relinquish Caesar's money to him. During the summer, he managed to win support from Caesarian sympathizers and also made common cause with the Optimates, the former enemies of Caesar, who saw him as the lesser evil and hoped to manipulate him.[48] In September, the leading Optimate orator Marcus Tullius Cicero began to attack Antony in a series of speeches portraying him as a threat to the Republican order.[49][50]
+
+ With opinion in Rome turning against him and his year of consular power nearing its end, Antony attempted to pass laws that would assign him the province of Cisalpine Gaul.[51][52] Octavian meanwhile built up a private army in Italy by recruiting Caesarian veterans and, on 28 November, he won over two of Antony's legions with the enticing offer of monetary gain.[53][54][55]
+
+ In the face of Octavian's large and capable force, Antony saw the danger of staying in Rome and, to the relief of the Senate, he left Rome for Cisalpine Gaul, which was to be handed to him on 1 January.[55] However, the province had earlier been assigned to Decimus Junius Brutus Albinus, one of Caesar's assassins, who now refused to yield to Antony. Antony besieged him at Mutina[56] and rejected the resolutions passed by the Senate to stop the fighting. The Senate had no army to enforce their resolutions. This provided an opportunity for Octavian, who already was known to have armed forces.[54] Cicero also defended Octavian against Antony's taunts about Octavian's lack of noble lineage and aping of Julius Caesar's name, stating "we have no more brilliant example of traditional piety among our youth."[57]
+
+ At the urging of Cicero, the Senate inducted Octavian as senator on 1 January 43 BC, yet he also was given the power to vote alongside the former consuls.[54][55] In addition, Octavian was granted propraetor imperium (commanding power) which legalized his command of troops, sending him to relieve the siege along with Hirtius and Pansa (the consuls for 43 BC).[54][58] In April 43 BC, Antony's forces were defeated at the battles of Forum Gallorum and Mutina, forcing Antony to retreat to Transalpine Gaul. Both consuls were killed, however, leaving Octavian in sole command of their armies.[59][60]
+
The Senate heaped many more rewards on Decimus Brutus than on Octavian for defeating Antony, then attempted to give command of the consular legions to Decimus Brutus.[61] In response, Octavian stayed in the Po Valley and refused to aid any further offensive against Antony.[62] In July, an embassy of centurions sent by Octavian entered Rome and demanded that he receive the consulship left vacant by Hirtius and Pansa,[63] and that the decree declaring Antony a public enemy be rescinded.[62] When this was refused, he marched on the city with eight legions.[62] He encountered no military opposition in Rome, and on 19 August 43 BC was elected consul with his relative Quintus Pedius as co-consul.[64][65] Meanwhile, Antony formed an alliance with Marcus Aemilius Lepidus, another leading Caesarian.[66]

In a meeting near Bologna in October 43 BC, Octavian, Antony, and Lepidus formed the Second Triumvirate.[68] This explicit arrogation of special powers lasting five years was then legalized by a law passed by the plebs, unlike the unofficial First Triumvirate formed by Pompey, Julius Caesar, and Marcus Licinius Crassus.[68][69] The triumvirs then set in motion proscriptions, in which between 130 and 300 senators[nb 4] and 2,000 equites were branded as outlaws and deprived of their property and, for those who failed to escape, their lives.[71] This decree issued by the triumvirate was motivated in part by a need to raise money to pay the salaries of their troops for the upcoming conflict against Caesar's assassins, Marcus Junius Brutus and Gaius Cassius Longinus.[72] Rewards for their arrest gave incentive for Romans to capture those proscribed, while the assets and properties of those arrested were seized by the triumvirs.[71]

Contemporary Roman historians provide conflicting reports as to which triumvir was most responsible for the proscriptions and killings. However, the sources agree that enacting the proscriptions was a means for all three factions to eliminate political enemies.[73] Marcus Velleius Paterculus asserted that Octavian tried to avoid proscribing officials whereas Lepidus and Antony were to blame for initiating them. Cassius Dio defended Octavian as trying to spare as many as possible, whereas Antony and Lepidus, being older and involved in politics longer, had many more enemies to deal with.[74]

This claim was rejected by Appian, who maintained that Octavian shared an equal interest with Lepidus and Antony in eradicating his enemies.[75] Suetonius said that Octavian was reluctant to proscribe officials, but did pursue his enemies with more vigor than the other triumvirs.[73] Plutarch described the proscriptions as a ruthless and cutthroat swapping of friends and family among Antony, Lepidus, and Octavian. For example, Octavian allowed the proscription of his ally Cicero, Antony the proscription of his maternal uncle Lucius Julius Caesar (the consul of 64 BC), and Lepidus his brother Paullus.[74]

On 1 January 42 BC, the Senate posthumously recognized Julius Caesar as a divinity of the Roman state, Divus Iulius. Octavian was able to further his cause by emphasizing the fact that he was Divi filius, "Son of the Divine".[76] Antony and Octavian then sent 28 legions by sea to face the armies of Brutus and Cassius, who had built their base of power in Greece.[77] After two battles at Philippi in Macedonia in October 42 BC, the Caesarian army was victorious and Brutus and Cassius committed suicide. Mark Antony later used the examples of these battles as a means to belittle Octavian, as both battles were decisively won with the use of Antony's forces. In addition to claiming responsibility for both victories, Antony also branded Octavian as a coward for handing over his direct military control to Marcus Vipsanius Agrippa instead.[78]

After Philippi, a new territorial arrangement was made among the members of the Second Triumvirate. Gaul and the province of Hispania were placed in the hands of Octavian. Antony traveled east to Egypt where he allied himself with Queen Cleopatra VII, the former lover of Julius Caesar and mother of Caesar's infant son Caesarion. Lepidus was left with the province of Africa, stymied by Antony, who conceded Hispania to Octavian instead.[79]

Octavian was left to decide where in Italy to settle the tens of thousands of veterans of the Macedonian campaign, whom the triumvirs had promised to discharge. The tens of thousands who had fought on the republican side with Brutus and Cassius could easily ally with a political opponent of Octavian if not appeased, and they also required land.[79] There was no more government-controlled land to allot as settlements for the soldiers, so Octavian had to choose one of two options: alienating many Roman citizens by confiscating their land, or alienating many Roman soldiers who could mount a considerable opposition against him in the Roman heartland. Octavian chose the former.[80] There were as many as eighteen Roman towns affected by the new settlements, with entire populations driven out or at least partially evicted.[81]

There was widespread dissatisfaction with Octavian over these settlements of his soldiers, and this encouraged many to rally at the side of Lucius Antonius, the brother of Mark Antony, who was supported by a majority in the Senate. Meanwhile, Octavian asked for a divorce from Clodia Pulchra, the daughter of Fulvia (Mark Antony's wife) and her first husband Publius Clodius Pulcher. He returned Clodia to her mother, claiming that their marriage had never been consummated. Fulvia decided to take action. Together with Lucius Antonius, she raised an army in Italy to fight for Antony's rights against Octavian. Lucius and Fulvia took a political and martial gamble in opposing Octavian, however, since the Roman army still depended on the triumvirs for their salaries. Lucius and his allies ended up in a defensive siege at Perusia (modern Perugia), where Octavian forced them into surrender in early 40 BC.[81]

Lucius and his army were spared, due to his kinship with Antony, the strongman of the East, while Fulvia was exiled to Sicyon.[82] Octavian showed no mercy, however, for the mass of allies loyal to Lucius; on 15 March, the anniversary of Julius Caesar's assassination, he had 300 Roman senators and equestrians executed for allying with Lucius.[83] Perusia also was pillaged and burned as a warning for others.[82] This bloody event sullied Octavian's reputation and was criticized by many, such as the Augustan poet Sextus Propertius.[83]

Sextus Pompeius, the son of Pompey and still a renegade general following Julius Caesar's victory over his father, had established himself in Sicily and Sardinia as part of an agreement reached with the Second Triumvirate in 39 BC.[84] Both Antony and Octavian were vying for an alliance with Pompeius. Octavian succeeded in a temporary alliance in 40 BC when he married Scribonia, a sister or daughter of Pompeius's father-in-law Lucius Scribonius Libo. Scribonia gave birth to Octavian's only natural child, Julia, the same day that he divorced her to marry Livia Drusilla, little more than a year after their marriage.[83]

While in Egypt, Antony had been engaged in an affair with Cleopatra and had fathered three children with her.[nb 5] Aware of his deteriorating relationship with Octavian, Antony left Cleopatra; he sailed to Italy in 40 BC with a large force to oppose Octavian, laying siege to Brundisium. This new conflict proved untenable for both Octavian and Antony, however. Their centurions, who had become important figures politically, refused to fight one another because of their shared Caesarian cause, and the legions under their command followed suit. Meanwhile, in Sicyon, Antony's wife Fulvia died of a sudden illness while Antony was en route to meet her. Fulvia's death and the mutiny of their centurions allowed the two remaining triumvirs to effect a reconciliation.[85][86]

In the autumn of 40 BC, Octavian and Antony approved the Treaty of Brundisium, by which Lepidus would remain in Africa, Antony in the East, and Octavian in the West. The Italian Peninsula was left open to all for the recruitment of soldiers, but in reality, this provision was useless for Antony in the East. To further cement the alliance with Mark Antony, Octavian gave his sister, Octavia Minor, in marriage to Antony in late 40 BC.[85]

Sextus Pompeius threatened Octavian in Italy by denying shipments of grain through the Mediterranean Sea to the peninsula. Pompeius's own son was put in charge as naval commander in the effort to cause widespread famine in Italy.[86] Pompeius's control over the sea prompted him to take on the name Neptuni filius, "son of Neptune".[87] A temporary peace agreement was reached in 39 BC with the Treaty of Misenum; the blockade on Italy was lifted once Octavian granted Pompeius Sardinia, Corsica, Sicily, and the Peloponnese, and ensured him a future position as consul for 35 BC.[86][87]

The territorial agreement between the triumvirate and Sextus Pompeius began to crumble once Octavian divorced Scribonia and married Livia on 17 January 38 BC.[88] One of Pompeius's naval commanders betrayed him and handed over Corsica and Sardinia to Octavian. Octavian lacked the resources to confront Pompeius alone, however, so an agreement was reached with the Second Triumvirate's extension for another five-year period beginning in 37 BC.[69][89]

In supporting Octavian, Antony expected to gain support for his own campaign against the Parthian Empire, desiring to avenge Rome's defeat at Carrhae in 53 BC.[89] In an agreement reached at Tarentum, Antony provided 120 ships for Octavian to use against Pompeius, while Octavian was to send 20,000 legionaries to Antony for use against Parthia. Octavian sent only a tenth of those promised, however, which Antony viewed as an intentional provocation.[90]

Octavian and Lepidus launched a joint operation against Sextus in Sicily in 36 BC.[91] Despite setbacks for Octavian, the naval fleet of Sextus Pompeius was almost entirely destroyed on 3 September by General Agrippa at the naval Battle of Naulochus. Sextus fled to the east with his remaining forces, where he was captured and executed in Miletus by one of Antony's generals the following year. As Lepidus and Octavian accepted the surrender of Pompeius's troops, Lepidus attempted to claim Sicily for himself, ordering Octavian to leave. Lepidus's troops deserted him, however, and defected to Octavian since they were weary of fighting and were enticed by Octavian's promises of money.[92]

Lepidus surrendered to Octavian and was permitted to retain the office of Pontifex Maximus (head of the college of priests), but was ejected from the Triumvirate, his public career at an end, and effectively was exiled to a villa at Cape Circei in Italy.[72][92] The Roman dominions were now divided between Octavian in the West and Antony in the East. Octavian assured Rome's citizens of their rights to property in order to maintain peace and stability in his portion of the Empire. This time, he settled his discharged soldiers outside of Italy, while also returning 30,000 slaves to their former Roman owners—slaves who had fled to join Pompeius's army and navy.[93] Octavian had the Senate grant him, his wife, and his sister tribunician immunity, or sacrosanctitas, in order to ensure his own safety and that of Livia and Octavia once he returned to Rome.[94]

Meanwhile, Antony's campaign against Parthia turned disastrous, tarnishing his image as a leader, and the mere 2,000 legionaries sent by Octavian to Antony were hardly enough to replenish his forces.[95] On the other hand, Cleopatra could restore his army to full strength; he already was engaged in a romantic affair with her, so he decided to send Octavia back to Rome.[96] Octavian used this to spread propaganda implying that Antony was becoming less than Roman because he rejected a legitimate Roman spouse for an "Oriental paramour".[97] In 36 BC, Octavian used a political ploy to make himself look less autocratic and Antony more the villain by proclaiming that the civil wars were coming to an end, and that he would step down as triumvir—if only Antony would do the same. Antony refused.[98]

Roman troops captured the Kingdom of Armenia in 34 BC, and Antony made his son Alexander Helios the ruler of Armenia. He also awarded the title "Queen of Kings" to Cleopatra, acts that Octavian used to convince the Roman Senate that Antony had ambitions to diminish the preeminence of Rome.[97] Octavian became consul once again on 1 January 33 BC, and he opened the following session in the Senate with a vehement attack on Antony's grants of titles and territories to his relatives and to his queen.[99]

The breach between Antony and Octavian prompted a large portion of the senators, as well as both of that year's consuls, to leave Rome and defect to Antony. However, Octavian received two key deserters from Antony in the autumn of 32 BC: Munatius Plancus and Marcus Titius.[100] These defectors gave Octavian the information that he needed to confirm with the Senate all the accusations that he made against Antony.[101]

Octavian forcibly entered the temple of the Vestal Virgins and seized Antony's secret will, which he promptly publicized. The will would have given away Roman-conquered territories as kingdoms for his sons to rule, and designated Alexandria as the site for a tomb for him and his queen.[102][103] In late 32 BC, the Senate officially revoked Antony's powers as consul and declared war on Cleopatra's regime in Egypt.[104][105]

In early 31 BC, Antony and Cleopatra were temporarily stationed in Greece when Octavian gained a preliminary victory: the navy successfully ferried troops across the Adriatic Sea under the command of Agrippa. Agrippa cut off Antony and Cleopatra's main force from their supply routes at sea, while Octavian landed on the mainland opposite the island of Corcyra (modern Corfu) and marched south. Trapped on land and sea, deserters of Antony's army fled to Octavian's side daily while Octavian's forces were comfortable enough to make preparations.[108]

Antony's fleet sailed through the bay of Actium on the western coast of Greece in a desperate attempt to break free of the naval blockade. It was there that Antony's fleet faced the much larger fleet of smaller, more maneuverable ships under commanders Agrippa and Gaius Sosius in the Battle of Actium on 2 September 31 BC.[109] Antony and his remaining forces were spared only due to a last-ditch effort by Cleopatra's fleet that had been waiting nearby.[110]

Octavian pursued them and defeated their forces in Alexandria on 1 August 30 BC—after which Antony and Cleopatra committed suicide. Antony fell on his own sword and was taken by his soldiers back to Alexandria where he died in Cleopatra's arms. Cleopatra died soon after, reputedly by the venomous bite of an asp or by poison.[111] Octavian had exploited his position as Caesar's heir to further his own political career, and he was well aware of the dangers in allowing another person to do the same. He therefore followed the advice of Arius Didymus that "two Caesars are one too many", ordering Caesarion, Julius Caesar's son by Cleopatra, killed, while sparing Cleopatra's children by Antony, with the exception of Antony's older son.[112][113] Octavian had previously shown little mercy to surrendered enemies and acted in ways that had proven unpopular with the Roman people, yet he was given credit for pardoning many of his opponents after the Battle of Actium.[114]

After Actium and the defeat of Antony and Cleopatra, Octavian was in a position to rule the entire Republic under an unofficial principate[115]—but he had to achieve this through incremental power gains. He did so by courting the Senate and the people while upholding the republican traditions of Rome, giving the appearance that he was not aspiring to dictatorship or monarchy.[116][117] Marching into Rome, Octavian and Marcus Agrippa were elected as consuls by the Senate.[118]

Years of civil war had left Rome in a state of near lawlessness, but the Republic was not prepared to accept the control of Octavian as a despot. At the same time, Octavian could not simply give up his authority without risking further civil wars among the Roman generals and, even if he desired no position of authority whatsoever, his position demanded that he look to the well-being of the city of Rome and the Roman provinces. Octavian's aims from this point forward were to return Rome to a state of stability, traditional legality, and civility by lifting the overt political pressure imposed on the courts of law and ensuring free elections—in name at least.[119]

In 27 BC, Octavian made a show of returning full power to the Roman Senate and relinquishing his control of the Roman provinces and their armies. Under his consulship, however, the Senate had little power in initiating legislation by introducing bills for senatorial debate. Octavian was no longer in direct control of the provinces and their armies, but he retained the loyalty of active duty soldiers and veterans alike. The careers of many clients and adherents depended on his patronage, as his financial power was unrivaled in the Roman Republic.[118] Historian Werner Eck states:

The sum of his power derived first of all from various powers of office delegated to him by the Senate and people, secondly from his immense private fortune, and thirdly from numerous patron-client relationships he established with individuals and groups throughout the Empire. All of them taken together formed the basis of his auctoritas, which he himself emphasized as the foundation of his political actions.[120]

To a large extent, the public was aware of the vast financial resources that Octavian commanded. When he failed to encourage enough senators to finance the building and maintenance of networks of roads in Italy in 20 BC, he undertook direct responsibility for them himself. This was publicized on the Roman currency issued in 16 BC, after he donated vast amounts of money to the aerarium Saturni, the public treasury.[121]

According to H. H. Scullard, however, Octavian's power was based on the exercise of "a predominant military power and ... the ultimate sanction of his authority was force, however much the fact was disguised."[122] The Senate proposed to Octavian, the victor of Rome's civil wars, that he once again assume command of the provinces. The Senate's proposal was a ratification of Octavian's extra-constitutional power. Through the Senate, Octavian was able to continue the appearance of a still-functional constitution. Feigning reluctance, he accepted a ten-year responsibility of overseeing provinces that were considered chaotic.[123][124]

The provinces ceded to Augustus for that ten-year period comprised much of the conquered Roman world, including all of Hispania and Gaul, Syria, Cilicia, Cyprus, and Egypt.[123][125] Moreover, command of these provinces provided Octavian with control over the majority of Rome's legions.[125][126]

While Octavian acted as consul in Rome, he dispatched senators to the provinces under his command as his representatives to manage provincial affairs and ensure that his orders were carried out. The provinces not under Octavian's control were overseen by governors chosen by the Roman Senate.[126] Octavian became the most powerful political figure in the city of Rome and in most of its provinces, but he did not have a monopoly on political and martial power.[127]

The Senate still controlled North Africa, an important regional producer of grain, as well as Illyria and Macedonia, two strategic regions with several legions.[127] However, the Senate had control of only five or six legions distributed among three senatorial proconsuls, compared to the twenty legions under the control of Octavian, and their control of these regions did not amount to any political or military challenge to Octavian.[116][122] The Senate's control over some of the Roman provinces helped maintain a republican façade for the autocratic Principate. Also, Octavian's control of entire provinces followed Republican-era precedents for the objective of securing peace and creating stability, in which such prominent Romans as Pompey had been granted similar military powers in times of crisis and instability.[116]

On 16 January 27 BC the Senate gave Octavian the new titles of Augustus and Princeps.[128] Augustus is from the Latin word augere (meaning "to increase") and can be translated as "the illustrious one". It was a title of religious rather than political authority. The new title was also more favorable than Romulus, a title he had earlier considered for himself in reference to the story of the legendary founder of Rome, which would have symbolized a second founding of Rome.[114] The title of Romulus was associated too strongly with notions of monarchy and kingship, an image that Octavian tried to avoid.[129] The title princeps senatus originally meant the member of the Senate with the highest precedence,[130] but in the case of Augustus, it became an almost regnal title for a leader who was first in charge.[131] Augustus also styled himself as Imperator Caesar divi filius, "Commander Caesar son of the deified one". With this title, he boasted his familial link to deified Julius Caesar, and the use of Imperator signified a permanent link to the Roman tradition of victory. He transformed Caesar, a cognomen for one branch of the Julian family, into a new family line that began with him.[128]

Augustus was granted the right to hang the corona civica above his door, the "civic crown" made from oak, and to have laurels drape his doorposts.[127] However, he renounced flaunting insignia of power such as holding a scepter, wearing a diadem, or wearing the golden crown and purple toga of his predecessor Julius Caesar.[132] If he refused to symbolize his power by donning and bearing these items on his person, the Senate nonetheless awarded him a golden shield displayed in the meeting hall of the Curia, bearing the inscription virtus, pietas, clementia, iustitia—"valor, piety, clemency, and justice."[127][133]

By 23 BC, some of the un-Republican implications of the settlement of 27 BC were becoming apparent. Augustus's retention of an annual consulate drew attention to his de facto dominance over the Roman political system, and cut in half the opportunities for others to achieve what was still nominally the preeminent position in the Roman state.[134] Further, he was causing political problems by desiring to have his nephew Marcus Claudius Marcellus follow in his footsteps and eventually assume the Principate in his turn,[nb 6] alienating his three greatest supporters – Agrippa, Maecenas, and Livia.[135] He appointed the noted Republican Calpurnius Piso (who had fought against Julius Caesar and supported Cassius and Brutus[136]) as co-consul in 23 BC, after his choice Aulus Terentius Varro Murena died unexpectedly.[137]

In the late spring Augustus suffered a severe illness, and on his supposed deathbed made arrangements that would ensure the continuation of the Principate in some form,[138] while allaying senators' suspicions of his anti-republicanism. Augustus prepared to hand down his signet ring to his favored general Agrippa. However, Augustus handed over to his co-consul Piso all of his official documents, an account of public finances, and authority over listed troops in the provinces while Augustus's supposedly favored nephew Marcellus came away empty-handed.[139][140] This was a surprise to many who believed Augustus would have named an heir to his position as an unofficial emperor.[141]

Augustus bestowed only properties and possessions on his designated heirs, as an obvious system of institutionalized imperial inheritance would have provoked resistance and hostility among the republican-minded Romans fearful of monarchy.[117] With regards to the Principate, it was obvious to Augustus that Marcellus was not ready to take on his position;[142] nonetheless, by giving his signet ring to Agrippa, Augustus intended to signal to the legions that Agrippa was to be his successor, and that constitutional procedure notwithstanding, they should continue to obey Agrippa.[143]

Soon after his bout of illness subsided, Augustus gave up his consulship. The only other times Augustus would serve as consul would be in the years 5 and 2 BC,[140][144] both times to introduce his grandsons into public life.[136] This was a clever ploy by Augustus; ceasing to serve as one of two annually elected consuls allowed aspiring senators a better chance to attain the consular position, while allowing Augustus to exercise wider patronage within the senatorial class.[145] Although Augustus had resigned as consul, he desired to retain his consular imperium not just in his provinces but throughout the empire. This desire, as well as the Marcus Primus Affair, led to a second compromise between him and the Senate known as the Second Settlement.[146]

The primary reasons for the Second Settlement were as follows. First, after Augustus relinquished the annual consulship, he was no longer in an official position to rule the state, yet his dominant position remained unchanged over his Roman, 'imperial' provinces where he was still a proconsul.[140][147] When he annually held the office of consul, he had the power to intervene in the affairs of the other provincial proconsuls appointed by the Senate throughout the empire, whenever he deemed it necessary.[148]

A second problem later arose showing the need for the Second Settlement in what became known as the "Marcus Primus Affair".[149] In late 24 or early 23 BC, charges were brought against Marcus Primus, the former proconsul (governor) of Macedonia, for waging a war on the Odrysian kingdom of Thrace, whose king was a Roman ally, without prior approval of the Senate.[150] He was defended by Lucius Licinius Varro Murena, who argued at the trial that his client had received specific instructions from Augustus, ordering him to attack the client state.[151] Later, Primus testified that the orders came from the recently deceased Marcellus.[152]

Such orders, had they been given, would have been considered a breach of the Senate's prerogative under the Constitutional settlement of 27 BC and its aftermath—i.e., before Augustus was granted imperium proconsulare maius—as Macedonia was a Senatorial province under the Senate's jurisdiction, not an imperial province under the authority of Augustus. Such an action would have ripped away the veneer of Republican restoration as promoted by Augustus, and exposed the fraud of his claim to be merely the first citizen, a first among equals.[151] Even worse, the involvement of Marcellus provided some measure of proof that Augustus's policy was to have the youth take his place as Princeps, instituting a form of monarchy – accusations that had already been leveled.[142]

The situation was so serious that Augustus himself appeared at the trial, even though he had not been called as a witness. Under oath, Augustus declared that he gave no such order.[153] Murena disbelieved Augustus's testimony and resented his attempt to subvert the trial by using his auctoritas. He rudely demanded to know why Augustus had turned up to a trial to which he had not been called; Augustus replied that he came in the public interest.[154] Although Primus was found guilty, some jurors voted to acquit, meaning that not everybody believed Augustus's testimony, an insult to the 'August One'.[155]

The Second Constitutional Settlement was completed in part to allay confusion and formalize Augustus's legal authority to intervene in Senatorial provinces. The Senate granted Augustus a form of general imperium proconsulare, or proconsular imperium (power) that applied throughout the empire, not solely to his provinces. Moreover, the Senate augmented Augustus's proconsular imperium into imperium proconsulare maius, or proconsular imperium applicable throughout the empire that was greater (maius) than that held by the other proconsuls. This in effect gave Augustus constitutional power superior to all other proconsuls in the empire.[146] Augustus stayed in Rome during the renewal process and provided veterans with lavish donations to gain their support, thereby ensuring that his status of proconsular imperium maius was renewed in 13 BC.[144]

During the second settlement, Augustus was also granted the power of a tribune (tribunicia potestas) for life, though not the official title of tribune.[146] For some years, Augustus had been awarded tribunicia sacrosanctitas, the immunity given to a Tribune of the Plebs. Now he decided to assume the full powers of the magistracy, renewed annually, in perpetuity. Legally, the office was closed to patricians, a status that Augustus had acquired some years earlier when adopted by Julius Caesar.[145]

This power allowed him to convene the Senate and people at will and lay business before them, to veto the actions of either the Assembly or the Senate, to preside over elections, and to speak first at any meeting.[144][156] Also included in Augustus's tribunician authority were powers usually reserved for the Roman censor; these included the right to supervise public morals and scrutinize laws to ensure that they were in the public interest, as well as the ability to hold a census and determine the membership of the Senate.[157]

With the powers of a censor, Augustus appealed to virtues of Roman patriotism by banning all attire but the classic toga while entering the Forum.[158] There was no precedent within the Roman system for combining the powers of the tribune and the censor into a single position, nor was Augustus ever elected to the office of censor.[159] Julius Caesar had been granted similar powers, wherein he was charged with supervising the morals of the state. However, this position did not extend to the censor's ability to hold a census and determine the Senate's roster. The office of the tribunus plebis began to lose its prestige due to Augustus's amassing of tribunician powers, so he revived its importance by making it a mandatory appointment for any plebeian desiring the praetorship.[160]

Augustus was granted sole imperium within the city of Rome itself, in addition to being granted proconsular imperium maius and tribunician authority for life. Traditionally, proconsuls (Roman province governors) lost their proconsular "imperium" when they crossed the Pomerium – the sacred boundary of Rome – and entered the city. In these situations, Augustus would have power as part of his tribunician authority but his constitutional imperium within the Pomerium would be less than that of a serving consul. That would mean that, when he was in the city, he might not be the constitutional magistrate with the most authority. Thanks to his prestige or auctoritas, his wishes would usually be obeyed, but there might be some difficulty. To fill this power vacuum, the Senate voted that Augustus's imperium proconsulare maius (superior proconsular power) should not lapse when he was inside the city walls. All armed forces in the city had formerly been under the control of the urban praetors and consuls, but this situation now placed them under the sole authority of Augustus.[161]

In addition, credit was given to Augustus for each subsequent Roman military victory after this time, because the majority of Rome's armies were stationed in imperial provinces commanded by Augustus through his legati, who were deputies of the princeps in the provinces. Moreover, if a battle was fought in a Senatorial province, Augustus's proconsular imperium maius allowed him to take command of (or credit for) any major military victory. This meant that Augustus was the only individual able to receive a triumph, a tradition that began with Romulus, Rome's first King and first triumphant general. Lucius Cornelius Balbus was the last man outside Augustus's family to receive this award, in 19 BC.[162] Tiberius, Augustus's eldest stepson by Livia, was the only other general to receive a triumph—for victories in Germania in 7 BC.[163]

Many of the political subtleties of the Second Settlement seem to have evaded the comprehension of the plebeian class, who were Augustus's greatest supporters and clientele. This caused them to insist upon Augustus's participation in imperial affairs from time to time. Augustus failed to stand for election as consul in 22 BC, and fears arose once again that he was being forced from power by the aristocratic Senate. In 22, 21, and 19 BC, the people rioted in response, and only allowed a single consul to be elected for each of those years, ostensibly to leave the other position open for Augustus.[164]

Likewise, there was a food shortage in Rome in 22 BC which sparked panic, while many urban plebs called for Augustus to take on dictatorial powers to personally oversee the crisis. After a theatrical display of refusal before the Senate, Augustus finally accepted authority over Rome's grain supply "by virtue of his proconsular imperium", and ended the crisis almost immediately.[144] It was not until AD 8 that a food crisis of this sort prompted Augustus to establish a praefectus annonae, a permanent prefect who was in charge of procuring food supplies for Rome.[165]

There were some who were concerned by the expansion of powers granted to Augustus by the Second Settlement, and this came to a head with the apparent conspiracy of Fannius Caepio.[149] Some time prior to 1 September 22 BC, a certain Castricius provided Augustus with information about a conspiracy led by Fannius Caepio.[166] Murena, the outspoken Consul who defended Primus in the Marcus Primus Affair, was named among the conspirators. The conspirators were tried in absentia with Tiberius acting as prosecutor; the jury found them guilty, but it was not a unanimous verdict.[167] All the accused were sentenced to death for treason and executed as soon as they were captured—without ever giving testimony in their defense.[168] Augustus ensured that the facade of Republican government continued with an effective cover-up of the events.[169]

In 19 BC, the Senate granted Augustus a form of 'general consular imperium', which was probably 'imperium consulare maius', like the proconsular powers that he received in 23 BC. Like his tribunician authority, the consular powers were another instance of gaining power from offices that he did not actually hold.[170] In addition, Augustus was allowed to wear the consul's insignia in public and before the Senate,[161] as well as to sit in the symbolic chair between the two consuls and hold the fasces, an emblem of consular authority.[170] This seems to have assuaged the populace; regardless of whether or not Augustus was a consul, the importance was that he both appeared as one before the people and could exercise consular power if necessary. On 6 March 12 BC, after the death of Lepidus, he additionally took up the position of pontifex maximus, the high priest of the college of the Pontiffs, the most important position in Roman religion.[171][172] On 5 February 2 BC, Augustus was also given the title pater patriae, or "father of the country".[173][174]

A final reason for the Second Settlement was to give the Principate constitutional stability and staying power in case something happened to Princeps Augustus. His illness of early 23 BC and the Caepio conspiracy showed that the regime's existence hung by the thin thread of the life of one man, Augustus himself, who suffered from several severe and dangerous illnesses throughout his life.[175] If he were to die from natural causes or fall victim to assassination, Rome could be subjected to another round of civil war. The memories of Pharsalus, the Ides of March, the proscriptions, Philippi, and Actium, barely twenty-five years distant, were still vivid in the minds of many citizens. Proconsular imperium was conferred upon Agrippa for five years, similar to Augustus's power, in order to accomplish this constitutional stability. The exact nature of the grant is uncertain but it probably covered Augustus's imperial provinces, east and west, perhaps lacking authority over the provinces of the Senate. That came later, as did the jealously guarded tribunicia potestas.[176] Augustus's accumulation of powers was now complete. In fact, he dated his 'reign' from the completion of the Second Settlement, 1 July 23 BC.[177]

Augustus chose Imperator ("victorious commander") to be his first name, since he wanted to make an emphatically clear connection between himself and the notion of victory, and consequently became known as Imperator Caesar Divi Filius Augustus. By AD 13, Augustus boasted 21 occasions on which his troops proclaimed "imperator" as his title after a successful battle. Almost the entire fourth chapter in his publicly released memoirs of achievements known as the Res Gestae was devoted to his military victories and honors.[178]

Augustus also promoted the ideal of a superior Roman civilization with a task of ruling the world (to the extent to which the Romans knew it), a sentiment embodied in words that the contemporary poet Virgil attributes to a legendary ancestor of Augustus: tu regere imperio populos, Romane, memento—"Roman, remember by your strength to rule the Earth's peoples!"[158] The impulse for expansionism was apparently prominent among all classes at Rome, and it is accorded divine sanction by Virgil's Jupiter in Book 1 of the Aeneid, where Jupiter promises Rome imperium sine fine, "sovereignty without end".[179]

By the end of his reign, the armies of Augustus had conquered northern Hispania (modern Spain and Portugal) and the Alpine regions of Raetia and Noricum (modern Switzerland, Bavaria, Austria, Slovenia), Illyricum and Pannonia (modern Albania, Croatia, Hungary, Serbia, etc.), and had extended the borders of the Africa Province to the east and south. Judea was added to the province of Syria when Augustus deposed Herod Archelaus, successor to client king Herod the Great (73–4 BC). Judea (like Egypt after Antony) was governed by a high prefect of the equestrian class rather than by a proconsul or legate of Augustus.[180]

Similarly, no military effort was needed in 25 BC when Galatia (modern Turkey) was converted to a Roman province shortly after Amyntas of Galatia was killed by an avenging widow of a slain prince from Homonada.[180] The rebellious tribes of Asturias and Cantabria in modern-day Spain were finally quelled in 19 BC, and the territory fell under the provinces of Hispania and Lusitania. This region proved to be a major asset in funding Augustus's future military campaigns, as it was rich in mineral deposits that could be exploited in Roman mining projects, especially the very rich gold deposits at Las Medulas.[181]

Conquering the peoples of the Alps in 16 BC was another important victory for Rome, since it provided a large territorial buffer between the Roman citizens of Italy and Rome's enemies in Germania to the north.[182] Horace dedicated an ode to the victory, while the monumental Trophy of Augustus near Monaco was built to honor the occasion.[183] The capture of the Alpine region also served the next offensive in 12 BC, when Tiberius began the offensive against the Pannonian tribes of Illyricum, and his brother Nero Claudius Drusus moved against the Germanic tribes of the eastern Rhineland. Both campaigns were successful, as Drusus's forces reached the Elbe River by 9 BC—though he died shortly thereafter from a fall off his horse.[184] It was recorded that the pious Tiberius walked in front of his brother's body all the way back to Rome.[185]

To protect Rome's eastern territories from the Parthian Empire, Augustus relied on the client states of the east to act as territorial buffers and areas that could raise their own troops for defense. To ensure security of the Empire's eastern flank, Augustus stationed a Roman army in Syria, while his skilled stepson Tiberius negotiated with the Parthians as Rome's diplomat to the East.[186] Tiberius was responsible for restoring Tigranes V to the throne of the Kingdom of Armenia.[185]

Yet arguably his greatest diplomatic achievement was negotiating with Phraates IV of Parthia (37–2 BC) in 20 BC for the return of the battle standards lost by Crassus in the Battle of Carrhae, a symbolic victory and great boost of morale for Rome.[185][186][187] Werner Eck claims that this was a great disappointment for Romans seeking to avenge Crassus's defeat by military means.[188] However, Maria Brosius explains that Augustus used the return of the standards as propaganda symbolizing the submission of Parthia to Rome. The event was celebrated in art such as the breastplate design on the statue Augustus of Prima Porta and in monuments such as the Temple of Mars Ultor ('Mars the Avenger') built to house the standards.[189]

Parthia had always posed a threat to Rome in the east, but the real battlefront was along the Rhine and Danube rivers.[186] Before the final fight with Antony, Octavian's campaigns against the tribes in Dalmatia were the first step in expanding Roman dominions to the Danube.[190] Victory in battle was not always a permanent success, as newly conquered territories were constantly retaken by Rome's enemies in Germania.[186]

A prime example of Roman loss in battle was the Battle of Teutoburg Forest in AD 9, where three entire legions led by Publius Quinctilius Varus were destroyed by Arminius, leader of the Cherusci, an apparent Roman ally.[191] Augustus retaliated by dispatching Tiberius and Drusus to the Rhineland to pacify it; the campaign had some success, although the battle of AD 9 brought an end to Roman expansion into Germany.[192] The Roman general Germanicus took advantage of a Cherusci civil war between Arminius and Segestes; Germanicus defeated Arminius, who fled the Battle of Idistaviso in AD 16 but was killed later, in AD 21, through treachery.[193]

The illness of Augustus in 23 BC brought the problem of succession to the forefront of political issues and the public. To ensure stability, he needed to designate an heir to his unique position in Roman society and government. This was to be achieved in small, undramatic, and incremental ways that did not stir senatorial fears of monarchy. If someone was to succeed to Augustus's unofficial position of power, he would have to earn it through his own publicly proven merits.[194]

Some Augustan historians argue that indications pointed toward his sister's son Marcellus, who had been quickly married to Augustus's daughter Julia the Elder.[195] Other historians dispute this due to Augustus's will being read aloud to the Senate while he was seriously ill in 23 BC,[196] instead indicating a preference for Marcus Agrippa, who was Augustus's second in charge and arguably the only one of his associates who could have controlled the legions and held the Empire together.[197]

After the death of Marcellus in 23 BC, Augustus married his daughter to Agrippa. This union produced five children, three sons and two daughters: Gaius Caesar, Lucius Caesar, Vipsania Julia, Agrippina the Elder, and Postumus Agrippa, so named because he was born after Marcus Agrippa died. Shortly after the Second Settlement, Agrippa was granted a five-year term of administering the eastern half of the Empire with the imperium of a proconsul and the same tribunicia potestas granted to Augustus (although not trumping Augustus's authority), his seat of governance stationed at Samos in the eastern Aegean.[197][198] This granting of power showed Augustus's favor for Agrippa, but it was also a measure to please members of his Caesarian party by allowing one of their members to share a considerable amount of power with him.[198]

Augustus's intent to make Gaius and Lucius Caesar his heirs became apparent when he adopted them as his own children.[199] He took the consulship in 5 and 2 BC so that he could personally usher them into their political careers,[200] and they were nominated for the consulships of AD 1 and 4.[201] Augustus also showed favor to his stepsons, Livia's children from her first marriage Nero Claudius Drusus Germanicus (henceforth referred to as Drusus) and Tiberius Claudius (henceforth Tiberius), granting them military commands and public office, though seeming to favor Drusus. After Agrippa died in 12 BC, Tiberius was ordered to divorce his own wife Vipsania Agrippina and marry Agrippa's widow, Augustus's daughter Julia—as soon as a period of mourning for Agrippa had ended.[202] Drusus's marriage to Augustus's niece Antonia was considered an unbreakable affair, whereas Vipsania was "only" the daughter of the late Agrippa from his first marriage.[202]

Tiberius shared in Augustus's tribunician powers as of 6 BC, but shortly thereafter went into retirement, reportedly wanting no further role in politics while he exiled himself to Rhodes.[163][203] No specific reason is known for his departure, though it could have been a combination of reasons, including a failing marriage with Julia,[163][203] as well as a sense of envy and exclusion over Augustus's apparent favouring of his young grandchildren-turned-sons Gaius and Lucius. (Gaius and Lucius joined the college of priests at an early age, were presented to spectators in a more favorable light, and were introduced to the army in Gaul.)[204][205]

After the early deaths of both Lucius and Gaius in AD 2 and 4 respectively, and the earlier death of his brother Drusus (9 BC), Tiberius was recalled to Rome in June AD 4, where he was adopted by Augustus on the condition that he, in turn, adopt his nephew Germanicus.[206] This continued the tradition of presenting at least two generations of heirs.[202] In that year, Tiberius was also granted the powers of a tribune and proconsul; emissaries from foreign kings had to pay their respects to him; and by AD 13 he was awarded his second triumph and a level of imperium equal to that of Augustus.[207]

The only other possible claimant as heir was Postumus Agrippa, who had been exiled by Augustus in AD 7; his banishment was made permanent by senatorial decree, and Augustus officially disowned him. He certainly fell out of Augustus's favor as an heir; the historian Erich S. Gruen notes various contemporary sources that state Postumus Agrippa was a "vulgar young man, brutal and brutish, and of depraved character".[208]

On 19 August AD 14, Augustus died while visiting Nola where his father had died. Both Tacitus and Cassius Dio wrote that Livia was rumored to have brought about Augustus's death by poisoning fresh figs.[209][210] This element features in many modern works of historical fiction pertaining to Augustus's life, but some historians view it as likely to have been a salacious fabrication made by those who had favored Postumus as heir, or by other political enemies of Tiberius. Livia had long been the target of similar rumors of poisoning on behalf of her son, most or all of which are unlikely to have been true.[211]

Alternatively, it is possible that Livia did supply a poisoned fig (she did cultivate a variety of fig named for her that Augustus is said to have enjoyed), but did so as a means of assisted suicide rather than murder. Augustus's health had been in decline in the months immediately before his death, and he had made significant preparations for a smooth transition in power, having at last reluctantly settled on Tiberius as his choice of heir.[212] It is likely that Augustus was not expected to return alive from Nola, but it seems that his health improved once there; it has therefore been speculated that Augustus and Livia conspired to end his life at the anticipated time, having committed all political process to accepting Tiberius, in order to not endanger that transition.[211]

Augustus's famous last words were, "Have I played the part well? Then applaud as I exit"—referring to the play-acting and regal authority that he had put on as emperor. Publicly, though, his last words were, "Behold, I found Rome of clay, and leave her to you of marble." An enormous funerary procession of mourners traveled with Augustus's body from Nola to Rome, and on the day of his burial all public and private businesses closed for the day.[212] Tiberius and his son Drusus delivered the eulogy while standing atop two rostra. Augustus's body was placed in a coffin and cremated on a pyre close to his mausoleum. It was proclaimed that Augustus joined the company of the gods as a member of the Roman pantheon.[213]

Historian D. C. A. Shotter states that Augustus's policy of favoring the Julian family line over the Claudian might have afforded Tiberius sufficient cause to show open disdain for Augustus after the latter's death; instead, Tiberius was always quick to rebuke those who criticized Augustus.[214] Shotter suggests that Augustus's deification obliged Tiberius to suppress any open resentment that he might have harbored, coupled with Tiberius's "extremely conservative" attitude towards religion.[215] Also, historian R. Shaw-Smith points to letters of Augustus to Tiberius which display affection towards Tiberius and high regard for his military merits.[216] Shotter states that Tiberius focused his anger and criticism on Gaius Asinius Gallus (for marrying Vipsania after Augustus forced Tiberius to divorce her), as well as toward the two young Caesars, Gaius and Lucius—instead of Augustus, the real architect of his divorce and imperial demotion.[215]

Augustus's reign laid the foundations of a regime that lasted, in one form or another, for nearly fifteen hundred years through the ultimate decline of the Western Roman Empire and until the Fall of Constantinople in 1453. Both his adoptive surname, Caesar, and his title Augustus became the permanent titles of the rulers of the Roman Empire for fourteen centuries after his death, in use both at Old Rome and at New Rome. In many languages, Caesar became the word for Emperor, as in the German Kaiser and in the Bulgarian and subsequently Russian Tsar (sometimes Csar or Czar). The cult of Divus Augustus continued until the state religion of the Empire was changed to Christianity in 391 by Theodosius I. Consequently, there are many excellent statues and busts of the first emperor. He had composed an account of his achievements, the Res Gestae Divi Augusti, to be inscribed in bronze in front of his mausoleum.[218] Copies of the text were inscribed throughout the Empire upon his death.[219] The Latin inscriptions featured Greek translations beside them, and were inscribed on many public edifices, such as the temple in Ankara dubbed the Monumentum Ancyranum, called the "queen of inscriptions" by historian Theodor Mommsen.[220]

The Res Gestae is the only such work to have survived from antiquity, though Augustus is also known to have composed poems entitled Sicily, Epiphanus, and Ajax, an autobiography of 13 books, a philosophical treatise, and a written rebuttal to Brutus's Eulogy of Cato.[221] Historians are able to analyze excerpts of letters penned by Augustus to others, preserved in other works, for additional facts or clues about his personal life.[216][222]

Many consider Augustus to be Rome's greatest emperor; his policies certainly extended the Empire's life span and initiated the celebrated Pax Romana or Pax Augusta. The Roman Senate wished subsequent emperors to "be more fortunate than Augustus and better than Trajan". Augustus was intelligent, decisive, and a shrewd politician, but he was not perhaps as charismatic as Julius Caesar and was influenced on occasion by Livia (sometimes for the worse). Nevertheless, his legacy proved more enduring. The city of Rome was utterly transformed under Augustus, with Rome's first institutionalized police force, firefighting force, and the establishment of the municipal prefect as a permanent office. The police force was divided into cohorts of 500 men each, while the units of firemen ranged from 500 to 1,000 men each, with 7 units assigned to the 14 sectors into which the city was divided.[223]

A praefectus vigilum, or "Prefect of the Watch" was put in charge of the vigiles, Rome's fire brigade and police.[224] With Rome's civil wars at an end, Augustus was also able to create a standing army for the Roman Empire, fixed at a size of 28 legions of about 170,000 soldiers.[225] This was supported by numerous auxiliary units of 500 non-citizen soldiers each, often recruited from recently conquered areas.[226]

With his finances securing the maintenance of roads throughout Italy, Augustus also installed an official courier system of relay stations overseen by a military officer known as the praefectus vehiculorum.[227] Besides the advent of swifter communication among Italian polities, his extensive building of roads throughout Italy also allowed Rome's armies to march swiftly and at an unprecedented pace across the country.[228] In AD 6 Augustus established the aerarium militare, donating 170 million sesterces to the new military treasury that provided for both active and retired soldiers.[229]

One of the most enduring institutions of Augustus was the establishment of the Praetorian Guard in 27 BC, originally a personal bodyguard unit on the battlefield that evolved into an imperial guard as well as an important political force in Rome.[230] They had the power to intimidate the Senate, install new emperors, and depose ones they disliked; the last emperor they served was Maxentius, as it was Constantine I who disbanded them in the early 4th century and destroyed their barracks, the Castra Praetoria.[231]

Although the most powerful individual in the Roman Empire, Augustus wished to embody the spirit of Republican virtue and norms. He also wanted to relate to and connect with the concerns of the plebs and lay people. He achieved this through various means of generosity and a cutting back of lavish excess. In 29 BC, Augustus gave 400 sesterces (equal to 1/10 of a Roman pound of gold) each to 250,000 citizens, 1,000 sesterces each to 120,000 veterans in the colonies, and spent 700 million sesterces in purchasing land for his soldiers to settle upon.[232] He also restored 82 different temples to display his care for the Roman pantheon of deities.[232] In 28 BC, he melted down 80 silver statues erected in his likeness and in honor of him, in an attempt to appear frugal and modest.[232]

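For a sense of scale, the figures quoted above can be totted up directly; the following is a minimal worked check, using only the numbers given in this paragraph (the stated equivalence of 400 sesterces to 1/10 of a Roman pound of gold is likewise taken from the text, not an independent estimate):

\begin{align*}
400 \times 250{,}000 &= 100{,}000{,}000 \text{ sesterces to citizens} \\
1{,}000 \times 120{,}000 &= 120{,}000{,}000 \text{ sesterces to veterans} \\
100 + 120 + 700 &= 920 \text{ million sesterces in all, once the land purchases are included} \\
400 \times 10 &= 4{,}000 \text{ sesterces to one Roman pound of gold}
\end{align*}
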
The longevity of Augustus's reign and its legacy to the Roman world should not be overlooked as a key factor in its success. As Tacitus wrote, the younger generations alive in AD 14 had never known any form of government other than the Principate.[233] Had Augustus died earlier (in 23 BC, for instance), matters might have turned out differently. The attrition of the civil wars on the old Republican oligarchy and the longevity of Augustus, therefore, must be seen as major contributing factors in the transformation of the Roman state into a de facto monarchy in these years. Augustus's own experience, his patience, his tact, and his political acumen also played their parts. He directed the future of the Empire down many lasting paths, from the existence of a standing professional army stationed at or near the frontiers, to the dynastic principle so often employed in the imperial succession, to the embellishment of the capital at the emperor's expense. Augustus's ultimate legacy was the peace and prosperity the Empire enjoyed for the next two centuries under the system he initiated. His memory was enshrined in the political ethos of the Imperial age as a paradigm of the good emperor. Every Emperor of Rome adopted his name, Caesar Augustus, which gradually lost its character as a name and eventually became a title.[213] The Augustan era poets Virgil and Horace praised Augustus as a defender of Rome, an upholder of moral justice, and an individual who bore the brunt of responsibility in maintaining the empire.[234]

However, for his rule of Rome and establishing the principate, Augustus has also been subjected to criticism throughout the ages. The contemporary Roman jurist Marcus Antistius Labeo (d. AD 10/11), fond of the days of pre-Augustan republican liberty in which he had been born, openly criticized the Augustan regime. At the beginning of his Annals, the Roman historian Tacitus (c. 56 – c. 117) wrote that Augustus had cunningly subverted Republican Rome into a position of slavery. He continued to say that, with Augustus's death and swearing of loyalty to Tiberius, the people of Rome simply traded one slaveholder for another.[235] Tacitus, however, records two contradictory but common views of Augustus:

+ Intelligent people praised or criticized him in varying ways. One opinion was as follows. Filial duty and a national emergency, in which there was no place for law-abiding conduct, had driven him to civil war—and this can neither be initiated nor maintained by decent methods. He had made many concessions to Anthony and to Lepidus for the sake of vengeance on his father's murderers. When Lepidus grew old and lazy, and Anthony's self-indulgence got the better of him, the only possible cure for the distracted country had been government by one man. However, Augustus had put the state in order not by making himself king or dictator, but by creating the Principate. The Empire's frontiers were on the ocean, or distant rivers. Armies, provinces, fleets, the whole system was interrelated. Roman citizens were protected by the law. Provincials were decently treated. Rome itself had been lavishly beautified. Force had been sparingly used—merely to preserve peace for the majority.[236]

According to the second opposing opinion:

filial duty and national crisis had been merely pretexts. In actual fact, the motive of Octavian, the future Augustus, was lust for power ... There had certainly been peace, but it was a blood-stained peace of disasters and assassinations.[237]

In his 2006 biography of Augustus, Anthony Everitt asserts that through the centuries, judgments on Augustus's reign have oscillated between these two extremes, but stresses that:

Opposites do not have to be mutually exclusive, and we are not obliged to choose one or the other. The story of his career shows that Augustus was indeed ruthless, cruel, and ambitious for himself. This was only in part a personal trait, for upper-class Romans were educated to compete with one another and to excel. However, he combined an overriding concern for his personal interests with a deep-seated patriotism, based on a nostalgia of Rome's antique virtues. In his capacity as princeps, selfishness and selflessness coexisted in his mind. While fighting for dominance, he paid little attention to legality or to the normal civilities of political life. He was devious, untrustworthy, and bloodthirsty. But once he had established his authority, he governed efficiently and justly, generally allowed freedom of speech, and promoted the rule of law. He was immensely hardworking and tried as hard as any democratic parliamentarian to treat his senatorial colleagues with respect and sensitivity. He suffered from no delusions of grandeur.[238]

Tacitus was of the belief that Nerva (r. 96–98) successfully "mingled two formerly alien ideas, principate and liberty".[239] The 3rd-century historian Cassius Dio acknowledged Augustus as a benign, moderate ruler, yet like most other historians after the death of Augustus, Dio viewed Augustus as an autocrat.[235] The poet Marcus Annaeus Lucanus (AD 39–65) was of the opinion that Caesar's victory over Pompey and the fall of Cato the Younger (95 BC–46 BC) marked the end of traditional liberty in Rome; the historian Chester G. Starr, Jr. writes of Lucan's avoidance of criticizing Augustus: "perhaps Augustus was too sacred a figure to accuse directly."[239]

The Anglo-Irish writer Jonathan Swift (1667–1745), in his Discourse on the Contests and Dissentions in Athens and Rome, criticized Augustus for installing tyranny over Rome, and likened what he believed to be Great Britain's virtuous constitutional monarchy to Rome's moral Republic of the 2nd century BC. In his criticism of Augustus, the admiral and historian Thomas Gordon (1658–1741) compared Augustus to the puritanical tyrant Oliver Cromwell (1599–1658).[240] Thomas Gordon and the French political philosopher Montesquieu (1689–1755) both remarked that Augustus was a coward in battle.[241] In his Memoirs of the Court of Augustus, the Scottish scholar Thomas Blackwell (1701–1757) deemed Augustus a Machiavellian ruler, "a bloodthirsty vindicative usurper", "wicked and worthless", "a mean spirit", and a "tyrant".[241]

Augustus's public revenue reforms had a great impact on the subsequent success of the Empire. Augustus brought a far greater portion of the Empire's expanded land base under consistent, direct taxation from Rome, instead of exacting varying, intermittent, and somewhat arbitrary tributes from each local province as Augustus's predecessors had done. This reform greatly increased Rome's net revenue from its territorial acquisitions, stabilized its flow, and regularized the financial relationship between Rome and the provinces, rather than provoking fresh resentments with each new arbitrary exaction of tribute.[242]

The measures of taxation in the reign of Augustus were determined by population census, with fixed quotas for each province. Citizens of Rome and Italy paid indirect taxes, while direct taxes were exacted from the provinces. Indirect taxes included a 4% tax on the price of slaves, a 1% tax on goods sold at auction, and a 5% tax on the inheritance of estates valued at over 100,000 sesterces by persons other than the next of kin.[243]
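
As a worked illustration of the inheritance levy (the estate value and heir here are hypothetical, and it is assumed that the 5% rate applied to the estate's full value rather than only to the amount above the threshold):

\[
0.05 \times 200{,}000 = 10{,}000 ,
\]

so an estate of 200,000 sesterces passing to someone other than the next of kin would have owed 10,000 sesterces in tax, while the same estate left to next of kin, or any estate under 100,000 sesterces, would have owed nothing.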

An equally important reform was the abolition of private tax farming, which was replaced with collection by salaried civil servants. Private contractors who collected taxes for the State had been the norm in the Republican era, and some of them were powerful enough to influence the number of votes for men running for office in Rome. These tax farmers, called publicans, were infamous for their depredations, great private wealth, and the right to tax local areas.[242]

The use of Egypt's immense land rents to finance the Empire's operations resulted from Augustus's conquest of Egypt and the shift to a Roman form of government.[244] As it was effectively considered Augustus's private property rather than a province of the Empire, it became part of each succeeding emperor's patrimonium.[245] Instead of a legate or proconsul, Augustus installed a prefect from the equestrian class to administer Egypt and maintain its lucrative seaports; this position became the highest political achievement for any equestrian besides becoming Prefect of the Praetorian Guard.[246] The highly productive agricultural land of Egypt yielded enormous revenues that were available to Augustus and his successors to pay for public works and military expeditions.[244] During his reign, the circus games resulted in the killing of 3,500 elephants.[247]

The month of August (Latin: Augustus) is named after Augustus; until his time it was called Sextilis (so named because it had been the sixth month of the original Roman calendar, and the Latin word for six is sex). Commonly repeated lore has it that August has 31 days because Augustus wanted his month to match the length of Julius Caesar's July, but this is an invention of the 13th-century scholar Johannes de Sacrobosco. Sextilis in fact had 31 days before it was renamed, and it was not chosen for its length (see Julian calendar). According to a senatus consultum quoted by Macrobius, Sextilis was renamed to honor Augustus because several of the most significant events in his rise to power, culminating in the fall of Alexandria, fell in that month.[248]

On his deathbed, Augustus boasted "I found a Rome of bricks; I leave to you one of marble." Although there is some truth in the literal meaning of this, Cassius Dio asserts that it was a metaphor for the Empire's strength.[249] Marble could be found in buildings of Rome before Augustus, but it was not extensively used as a building material until his reign.[250]

Although this did not apply to the Subura slums, which were still as rickety and fire-prone as ever, he did leave a mark on the monumental topography of the centre and of the Campus Martius, with the Ara Pacis (Altar of Peace) and monumental sundial, whose central gnomon was an obelisk taken from Egypt.[251] The relief sculptures decorating the Ara Pacis visually augmented the written record of Augustus's triumphs in the Res Gestae. Its reliefs depicted the imperial pageants of the praetorians, the Vestals, and the citizenry of Rome.[252]

He also built the Temple of Caesar, the Baths of Agrippa, and the Forum of Augustus with its Temple of Mars Ultor.[253] Other projects were either encouraged by him, such as the Theatre of Balbus and Agrippa's construction of the Pantheon, or funded by him in the name of others, often relations (e.g. Portico of Octavia, Theatre of Marcellus). Even his own Mausoleum of Augustus was built before his death to house members of his family.[254] To celebrate his victory at the Battle of Actium, the Arch of Augustus was built in 29 BC near the entrance of the Temple of Castor and Pollux, and widened in 19 BC to include a triple-arch design.[250]

After the death of Agrippa in 12 BC, a solution had to be found for maintaining Rome's water supply system, which Agrippa had overseen when he served as aedile and had even funded afterwards, at his own expense, as a private citizen. In that year, Augustus arranged a system whereby the Senate designated three of its members as prime commissioners in charge of the water supply, to ensure that Rome's aqueducts did not fall into disrepair.[223]

In the late Augustan era, the commission of five senators called the curatores locorum publicorum iudicandorum (translated as "Supervisors of Public Property") was put in charge of maintaining public buildings and temples of the state cult.[223] Augustus created the senatorial group of the curatores viarum (translated as "Supervisors for Roads") for the upkeep of roads; this senatorial commission worked with local officials and contractors to organize regular repairs.[227]

The Corinthian order of architectural style, originating from ancient Greece, was the dominant architectural style in the age of Augustus and the imperial phase of Rome. Suetonius once commented that Rome was unworthy of its status as an imperial capital, yet Augustus and Agrippa set out to dispel this sentiment by transforming the appearance of Rome upon the classical Greek model.[250]

His biographer Suetonius, writing about a century after Augustus's death, described his appearance as: "... unusually handsome and exceedingly graceful at all periods of his life, though he cared nothing for personal adornment. He was so far from being particular about the dressing of his hair, that he would have several barbers working in a hurry at the same time, and as for his beard he now had it clipped and now shaved, while at the very same time he would either be reading or writing something ... He had clear, bright eyes ... His teeth were wide apart, small, and ill-kept; his hair was slightly curly and inclined to golden; his eyebrows met. His ears were of moderate size, and his nose projected a little at the top and then bent ever so slightly inward. His complexion was between dark and fair. He was short of stature, although Julius Marathus, his freedman and keeper of his records, says that he was five feet and nine inches (just under 5 ft. 7 in., or 1.70 meters, in modern height measurements), but this was concealed by the fine proportion and symmetry of his figure, and was noticeable only by comparison with some taller person standing beside him ...",[255] adding that "his shoes [were] somewhat high-soled, to make him look taller than he really was".[256] Scientific analysis of traces of paint found in his official statues shows that he most likely had light brown hair and eyes (his hair and eyes were depicted as the same color).[257]

His official images were very tightly controlled and idealized, drawing from a tradition of Hellenistic royal portraiture rather than the tradition of realism in Roman portraiture. He first appeared on coins at the age of 19, and from about 29 BC "the explosion in the number of Augustan portraits attests a concerted propaganda campaign aimed at dominating all aspects of civil, religious, economic and military life with Augustus's person."[258] The early images did indeed depict a young man, but although there were gradual changes, his images remained youthful until he died in his seventies, by which time they had "a distanced air of ageless majesty".[259] Among the best known of many surviving portraits are the Augustus of Prima Porta, the image on the Ara Pacis, and the Via Labicana Augustus, which shows him as a priest. Among his cameo portraits are the Blacas Cameo and the Gemma Augustea.
en/1721.html.txt ADDED
The diff for this file is too large to render. See raw diff
 
en/1722.html.txt ADDED
@@ -0,0 +1,201 @@

An emperor (from Latin: imperator, via Old French: empereor)[1] is a monarch, and usually the sovereign ruler of an empire or another type of imperial realm. Empress, the female equivalent, may indicate an emperor's wife (empress consort), mother (empress dowager), or a woman who rules in her own right (empress regnant). Emperors are generally recognized to be of a higher honour and rank than kings. In Europe, the title of Emperor has been used since the Middle Ages, considered in those times equal or almost equal in dignity to that of Pope due to the latter's position as visible head of the Church and spiritual leader of the Catholic part of Western Europe. The Emperor of Japan is the only currently reigning monarch whose title is translated into English as "Emperor."[2]

Both emperors and kings are monarchs, but emperor and empress are considered the higher monarchical titles. Inasmuch as there is a strict definition of emperor, it is that an emperor has no relations implying the superiority of any other ruler and typically rules over more than one nation. Therefore a king might be obliged to pay tribute to another ruler,[3] or be restrained in his actions in some unequal fashion, but an emperor should in theory be completely free of such restraints. However, monarchs heading empires have not always used the title in all contexts—the British sovereign did not assume the title Empress of the British Empire even during the incorporation of India, though she was declared Empress of India.

In Western Europe, the title of Emperor was used exclusively by the Holy Roman Emperor, whose imperial authority was derived from the concept of translatio imperii, i.e. they claimed succession to the authority of the Western Roman Emperors, thus linking themselves to Roman institutions and traditions as part of state ideology. Although initially ruling much of Central Europe and northern Italy, by the 19th century the Emperor exercised little power beyond the German-speaking states.

Although technically an elective title, by the late 16th century the imperial title had in practice come to be inherited by the Habsburg Archdukes of Austria, and following the Thirty Years' War their control over the states outside the Habsburg Monarchy (i.e. Austria, Bohemia, and various territories outside the empire) had become nearly non-existent. However, Napoleon Bonaparte was crowned Emperor of the French in 1804 and was shortly followed by Francis II, Holy Roman Emperor, who declared himself Emperor of Austria in the same year. The position of Holy Roman Emperor nonetheless continued until Francis II abdicated that position in 1806. In Eastern Europe, the monarchs of Russia also used translatio imperii to wield imperial authority as successors to the Eastern Roman Empire. Their status was officially recognised by the Holy Roman Emperor in 1514, although the title was not officially used by the Russian monarchs until 1547. However, the Russian emperors are better known by their Russian-language title of Tsar even after Peter the Great adopted the title of Emperor of All Russia in 1721.

Historians have liberally used emperor and empire anachronistically and out of their Roman and European context to describe any large state from the past or the present. Such pre-Roman titles as Great King or King of Kings, used by the Kings of Persia and others, are often considered equivalent. Sometimes this reference has even extended to non-monarchically ruled states and their spheres of influence, such as the Athenian Empire of the late 5th century BC, the Angevin Empire of the Plantagenets, and the Soviet and American "empires" of the Cold War era. However, such "empires" did not need to be headed by an "emperor". By the mid-18th century, empire had become identified with vast territorial holdings rather than with the title of its ruler.

For purposes of protocol, emperors were once given precedence over kings in international diplomatic relations, but currently precedence amongst heads of state who are sovereigns—whether they be kings, queens, emperors, empresses, princes, princesses and, to a lesser degree, presidents—is determined by the duration of time that each one has been continuously in office. Outside the European context, emperor was the translation given to holders of titles who were accorded the same precedence as European emperors in diplomatic terms. In reciprocity, these rulers might accredit equal titles in their native languages to their European peers. Through centuries of international convention, this has become the dominant rule for identifying an emperor in the modern era.

When Republican Rome turned into a de facto monarchy in the second half of the 1st century BC, at first there was no name for the title of the new type of monarch. Ancient Romans abhorred the name Rex ("king"), and it was critical to the political order to maintain the forms and pretenses of republican rule. Julius Caesar had been Dictator, an acknowledged and traditional office in Republican Rome. Caesar was not the first to hold it, but following his assassination the term was abhorred in Rome[citation needed].

Augustus, considered the first Roman emperor, established his hegemony by collecting on himself offices, titles, and honours of Republican Rome that had traditionally been distributed to different people, thus concentrating previously divided power in one man. One of these offices was princeps senatus ("first man of the Senate"), which was transformed into Augustus's chief honorific, princeps civitatis ("first citizen"), from which the modern English word and title prince is descended. The first period of the Roman Empire, from 27 BC to AD 284, is called the principate for this reason. However, it was the informal descriptive Imperator ("commander") that became the title increasingly favored by his successors. Previously bestowed on high officials and military commanders who had imperium, Augustus reserved it exclusively to himself as the ultimate holder of all imperium. (Imperium is Latin for the authority to command, one of various types of authority delineated in Roman political thought.)

Beginning with Augustus, Imperator appeared in the title of all Roman monarchs through the extinction of the Empire in 1453. After the reign of Augustus's immediate successor Tiberius, being proclaimed imperator was transformed into the act of accession to the head of state. Other honorifics used by the Roman Emperors have also come to be synonyms for Emperor.

After the turbulent Year of the Four Emperors in 69, the Flavian Dynasty reigned for three decades. The succeeding Nervan-Antonian Dynasty, ruling for most of the 2nd century, stabilised the Empire. This epoch became known as the era of the Five Good Emperors, and was followed by the short-lived Severan Dynasty.

During the Crisis of the 3rd century, Barracks Emperors succeeded one another at short intervals. Three short-lived secessionist attempts had their own emperors: the Gallic Empire, the Britannic Empire, and the Palmyrene Empire, though the last used rex more regularly.

The Principate (27 BC – 284 AD) was succeeded by what is known as the Dominate (284 AD – 527 AD), during which Emperor Diocletian tried to put the Empire on a more formal footing. Diocletian sought to address the challenges of the Empire's now vast geography and the instability caused by the informality of succession by creating co-emperors and junior emperors. At one point, there were as many as five sharers of the imperium (see: Tetrarchy). In 325 AD Constantine I defeated his rivals and restored single-emperor rule, but following his death the empire was divided among his sons. For a time the concept was of one empire ruled by multiple emperors with varying territory under their control; however, following the death of Theodosius I, rule was divided between his two sons, and the two halves increasingly became separate entities. The areas administered from Rome are referred to by historians as the Western Roman Empire, and those under the immediate authority of Constantinople are called the Eastern Roman Empire or (after the Battle of Yarmouk in 636 AD) the Later Roman or Byzantine Empire. The subdivisions and co-emperor system were formally abolished by Emperor Zeno in 480 AD, following the death of Julius Nepos, the last Western Emperor, and the ascension of Odoacer as the de facto King of Italy in 476 AD.

Historians generally refer to the continuing Roman Empire in the east as the Byzantine Empire, after Byzantium, the original name of the town that Constantine I would elevate to the Imperial capital as New Rome in AD 330. (The city is more commonly called Constantinople and is today named Istanbul). Although the empire was again subdivided and a co-emperor sent to Italy at the end of the fourth century, the office became unitary again only 95 years later, at the request of the Roman Senate and following the death of Julius Nepos, the last Western Emperor. This change was a recognition of the reality that little remained of Imperial authority in the areas that had been the Western Empire, with even Rome and Italy itself now ruled by the essentially autonomous Odoacer.

These Later Roman "Byzantine" Emperors completed the transition from the idea of the Emperor as a semi-republican official to the Emperor as an absolute monarch. Of particular note was the translation of the Latin Imperator into the Greek Basileus, after Emperor Heraclius changed the official language of the empire from Latin to Greek in AD 620. Basileus, a title which had long been used for Alexander the Great, was already in common usage as the Greek word for the Roman emperor, but its definition and sense was "King" in Greek, essentially equivalent to the Latin Rex. Byzantine period emperors also used the Greek word "autokrator", meaning "one who rules himself", or "monarch", which was traditionally used by Greek writers to translate the Latin dictator. Essentially, the Greek language did not incorporate the nuances of the Ancient Roman concepts that distinguished imperium from other forms of political power.

In general usage, the Byzantine imperial title evolved from simply "emperor" (basileus), to "emperor of the Romans" (basileus tōn Rōmaiōn) in the 9th century, to "emperor and autocrat of the Romans" (basileus kai autokratōr tōn Rōmaiōn) in the 10th.[4] In fact, none of these (and other) additional epithets and titles was ever completely discarded.

One important distinction between the post-Constantine I (reigned AD 306–337) emperors and their pagan predecessors was caesaropapism, the assertion that the Emperor (or other head of state) is also the head of the Church. Although this principle was held by all emperors after Constantine, it met with increasing resistance and ultimately rejection by bishops in the west after the effective end of Imperial power there. This concept became a key element of the meaning of "emperor" in the Byzantine and Orthodox east, but went out of favor in the west with the rise of Roman Catholicism.

The Byzantine Empire also produced three women who effectively governed the state: the Empress Irene and the Empresses Zoe and Theodora.

In 1204 Constantinople fell to the Venetians and the Franks in the Fourth Crusade. Following the horrific sacking of the city, the conquerors declared a new "Empire of Romania", known to historians as the Latin Empire of Constantinople, installing Baldwin IX, Count of Flanders, as Emperor. However, Byzantine resistance to the new empire meant that it was in constant struggle to establish itself. The Byzantine Emperor Michael VIII Palaiologos succeeded in recapturing Constantinople in 1261. The Principality of Achaea, a vassal state the empire had created in Morea (Greece), intermittently continued to recognize the authority of the crusader emperors for another half century. Pretenders to the title continued among the European nobility until circa 1383.

With Constantinople occupied, claimants to the imperial succession styled themselves as emperor in the chief centers of resistance: the Laskarid dynasty in the Empire of Nicaea, the Komnenid dynasty in the Empire of Trebizond and the Doukid dynasty in the Despotate of Epirus. In 1248, Epirus recognized the Nicaean Emperors, who subsequently recaptured Constantinople in 1261. The Trapezuntine emperor formally submitted in Constantinople in 1281,[5] but his successors frequently flouted convention by styling themselves emperor back in Trebizond thereafter.

Ottoman rulers held several titles denoting their Imperial status. These included:[citation needed] Sultan, Khan, Sovereign of the Imperial House of Osman, Sultan of Sultans, Khan of Khans, Commander of the Faithful and Successor of the Prophet of the Lord of the Universe, Protector of the Holy Cities of Mecca, Medina and Jerusalem, and Emperor of the Three Cities of Constantinople, Adrianople and Bursa, as well as many other cities and countries.[citation needed]

After the Ottoman capture of Constantinople in 1453, the Ottoman sultans began to style themselves Kaysar-i Rum (Emperor of the Romans), as they asserted themselves to be the heirs to the Roman Empire by right of conquest. The title was of such importance to them that it led them to eliminate the various Byzantine successor states — and therefore rival claimants — over the next eight years. Though the term "emperor" was rarely used by Westerners of the Ottoman sultan, it was generally accepted by Westerners that he had imperial status.

The title Emperor of the Romans reflected the translatio imperii (transfer of rule) principle that regarded the Holy Roman Emperors as the inheritors of the title of Emperor of the Western Roman Empire, despite the continued existence of the Roman Empire in the east, hence the problem of two emperors.

From the time of Otto the Great onward, much of the former Carolingian kingdom of Eastern Francia became the Holy Roman Empire. The prince-electors elected one of their peers as King of the Romans and King of Italy before he was crowned by the Pope. The Emperor could also pursue the election of his heir (usually a son) as King, who would then succeed him after his death. This junior King then bore the title of Roman King (King of the Romans). Although technically already ruling, after the election he would be crowned as emperor by the Pope. The last emperor to be crowned by the pope was Charles V; all emperors after him were technically emperors-elect, but were universally referred to as Emperor.

The first Austrian Emperor was the last Holy Roman Emperor, Francis II. In the face of aggressions by Napoleon, Francis feared for the future of the Holy Roman Empire. He wished to maintain his and his family's Imperial status in the event that the Holy Roman Empire should be dissolved, as it indeed was in 1806, after an Austrian-led army suffered a humiliating defeat at the Battle of Austerlitz. The victorious Napoleon proceeded to dismantle the old Reich by severing a good portion from the empire and turning it into a separate Confederation of the Rhine. With the size of his imperial realm significantly reduced, Francis II, Holy Roman Emperor, became Francis I, Emperor of Austria. The new imperial title may have sounded less prestigious than the old one, but Francis's dynasty continued to rule from Austria, and a Habsburg monarch was still an emperor (Kaiser), not merely a king (König), in name.

The title lasted just a little over one century, until 1918, but it was never clear what territory constituted the "Empire of Austria". When Francis took the title in 1804, the Habsburg lands as a whole were dubbed the Kaisertum Österreich. Kaisertum might literally be translated as "emperordom" (on analogy with "kingdom") or "emperor-ship"; the term denotes specifically "the territory ruled by an emperor", and is thus somewhat more general than Reich, which in 1804 carried connotations of universal rule. Austria proper (as opposed to the complex of Habsburg lands as a whole) had been an Archduchy since the 15th century, and most of the other territories of the Empire had their own institutions and territorial history, although there were some attempts at centralization, especially during the reigns of Maria Theresa and her son Joseph II, and these were then finalized in the early 19th century. When Hungary was given self-government in 1867, the non-Hungarian portions were called the Empire of Austria and were officially known as the "Kingdoms and Lands Represented in the Imperial Council (Reichsrat)". The title of Emperor of Austria and the associated Empire were both abolished at the end of the First World War in 1918, when German Austria became a republic and the other kingdoms and lands represented in the Imperial Council established their independence or adhesion to other states.

Byzantium's close cultural and political interaction with its Balkan neighbors Bulgaria and Serbia, and with Russia (Kievan Rus', then Muscovy), led to the adoption of Byzantine imperial traditions in all of these countries.

In 913, Simeon I of Bulgaria was crowned Emperor (Tsar) by the Patriarch of Constantinople and Imperial regent Nicholas Mystikos outside the Byzantine capital. In its final simplified form, the title read "Emperor and Autocrat of all Bulgarians and Greeks" (Tsar i samodarzhets na vsichki balgari i gartsi in the modern vernacular). The Greek component in the Bulgarian imperial title indicated both rulership over Greek speakers and the derivation of the imperial tradition from the Romans; however, this component was never recognised by the Byzantine court.

Byzantine recognition of Simeon's imperial title was revoked by the succeeding Byzantine government. The decade 914–924 was spent in destructive warfare between Byzantium and Bulgaria over this and other matters of conflict. The Bulgarian monarch, who had further irritated his Byzantine counterpart by claiming the title "Emperor of the Romans" (basileus tōn Rōmaiōn), was eventually recognized as "Emperor of the Bulgarians" (basileus tōn Boulgarōn) by the Byzantine Emperor Romanos I Lakapenos in 924. Byzantine recognition of the imperial dignity of the Bulgarian monarch and the patriarchal dignity of the Bulgarian patriarch was again confirmed at the conclusion of a permanent peace and a Bulgarian-Byzantine dynastic marriage in 927. In the meantime, the Bulgarian imperial title may also have been confirmed by the pope. The Bulgarian imperial title "tsar" was adopted by all Bulgarian monarchs up to the fall of Bulgaria under Ottoman rule. 14th-century Bulgarian literary compositions clearly denote the Bulgarian capital (Tarnovo) as a successor of Rome and Constantinople, in effect, the "Third Rome".

After Bulgaria obtained full independence from the Ottoman Empire in 1908, its monarch, who had previously been styled Knyaz (prince), took the traditional title of Tsar (king) and was recognized internationally as such.[by whom?]

The kings of the Ancien Régime and the July Monarchy used the title Empereur de France in diplomatic correspondence and treaties with the Ottoman emperor from at least 1673 onwards. The Ottomans insisted on this elevated style while refusing to recognize the Holy Roman Emperors or the Russian tsars because of their rival claims to the Roman crown. In short, it was an indirect insult by the Ottomans to the Holy Roman Emperors and the Russians. The French kings also used it for Morocco (1682) and Persia (1715).

Napoleon Bonaparte, who was already First Consul of the French Republic (Premier Consul de la République française) for life, declared himself Emperor of the French (Empereur des Français) on 18 May 1804, thus creating the French Empire (Empire Français).

Napoleon relinquished the title of Emperor of the French on 6 April and again on 11 April 1814. Napoleon's infant son, Napoleon II, was recognized by the Council of Peers as Emperor from the moment of his father's abdication, and therefore reigned (as opposed to ruled) as Emperor for fifteen days, 22 June to 7 July 1815.

On 3 May 1814, the Sovereign Principality of Elba was created as a miniature non-hereditary monarchy under the exiled French Emperor Napoleon I. Napoleon I was allowed, by the treaty of Fontainebleau (27 April), to enjoy the imperial title for life. The islands were not restyled an empire.

On 26 February 1815, Napoleon abandoned Elba for France, reviving the French Empire for a Hundred Days; the Allies declared an end to Napoleon's sovereignty over Elba on 25 March 1815, and on 31 March 1815 Elba was ceded to the restored Grand Duchy of Tuscany by the Congress of Vienna. After his final defeat, Napoleon was treated as a general by the British authorities during his second exile, to the Atlantic isle of St. Helena. His title was a matter of dispute with the governor of St Helena, who insisted on addressing him as "General Bonaparte", despite the "historical reality that he had been an emperor" and therefore retained the title.[9][10][11]

Napoleon I's nephew, Napoleon III, resurrected the title of emperor on 2 December 1852, after establishing the Second French Empire in a presidential coup, subsequently approved by a plebiscite. His reign was marked by large-scale public works, the development of social policy, and the extension of France's influence throughout the world. During his reign, he also set about creating the Second Mexican Empire (headed by his choice of Maximilian I of Mexico, a member of the House of Habsburg), to regain France's hold in the Americas and to achieve greatness for the 'Latin' race.[12] Napoleon III was deposed on 4 September 1870, after France's defeat in the Franco-Prussian War. The Third Republic followed; after the death of his son Napoleon (IV) in 1879, during the Zulu War, the Bonapartist movement split, and the Third Republic was to last until 1940.

The origin of the title Imperator totius Hispaniae (Latin for Emperor of All Spain[note 2]) is murky. It was associated with the Leonese monarchy perhaps as far back as Alfonso the Great (r. 866–910). The last two kings of its Astur-Leonese dynasty were called emperors in a contemporary source.

King Sancho III of Navarre conquered Leon in 1034 and began using the title. His son, Ferdinand I of Castile, also took the title in 1039. Ferdinand's son, Alfonso VI of León and Castile, took the title in 1077. It then passed to his son-in-law, Alfonso I of Aragon, in 1109. His stepson and Alfonso VI's grandson, Alfonso VII, was the only one who actually had an imperial coronation, in 1135.

The title was not exactly hereditary but self-proclaimed by those who had, wholly or partially, united the Christian northern part of the Iberian Peninsula, often at the cost of killing rival siblings. The popes and Holy Roman emperors protested against the usage of the imperial title as a usurpation of leadership in western Christendom. After Alfonso VII's death in 1157, the title was abandoned, and the kings who used it are not commonly mentioned as having been "emperors", in Spanish or other historiography.

After the fall of the Byzantine Empire, the legitimate heir to the throne, Andreas Palaiologos, willed away his claim to Ferdinand and Isabella in 1503.

After Prince Pedro proclaimed the independence of the Empire of Brazil from the Kingdom of Portugal and became Emperor in 1822, his father, King John VI of Portugal, briefly held the honorific style of Titular Emperor of Brazil and the treatment of His Imperial and Royal Majesty under the 1825 Treaty of Rio de Janeiro, by which Portugal recognized the independence of Brazil. The style of Titular Emperor was a life title, and became extinct upon the holder's demise. John VI held the imperial title for a few months only, from the ratification of the Treaty in November 1825 until his death in March 1826. During those months, however, John's imperial title was purely honorific, while his son, Pedro I, remained the sole monarch of the Brazilian Empire.

In the late 3rd century, by the end of the epoch of the barracks emperors in Rome, there were two Britannic Emperors, reigning for about a decade. After the end of Roman rule in Britain, the Imperator Cunedda forged the Kingdom of Gwynedd in northern Wales, but all his successors were titled kings and princes.

There was no consistent title for the king of England before 1066, and monarchs chose to style themselves as they pleased. Imperial titles were used inconsistently, beginning with Athelstan in 930 and ending with the Norman conquest of England. Empress Matilda (1102–1167) is the only English monarch commonly referred to as "emperor" or "empress", but she acquired her title through her marriage to Henry V, Holy Roman Emperor.

During the rule of Henry VIII, the Statute in Restraint of Appeals declared that 'this realm of England is an Empire...governed by one Supreme Head and King having the dignity and royal estate of the imperial Crown of the same'. This was in the context of the divorce of Catherine of Aragon and the English Reformation, to emphasize that England had never accepted the quasi-imperial claims of the papacy. Hence England and, by extension, its modern successor state, the United Kingdom of Great Britain and Northern Ireland, is according to English law an Empire ruled by a King endowed with the imperial dignity. However, this has not led to the creation of the title of Emperor in England, nor in Great Britain, nor in the United Kingdom.

In 1801, George III rejected the title of Emperor when it was offered. The only period when British monarchs held the title of Emperor in a dynastic succession started when the title Empress of India was created for Queen Victoria. The government led by Prime Minister Benjamin Disraeli conferred the additional title upon her by an Act of Parliament, reputedly to assuage the monarch's irritation at being, as a mere Queen, notionally inferior to her own daughter (Princess Victoria, who was the wife of the reigning German Emperor); the Indian Imperial designation was also formally justified as the expression of Britain succeeding the former Mughal Emperor as suzerain over hundreds of princely states. The Indian Independence Act 1947 provided for the abolition of the use of the title "Emperor of India" by the British monarch, but this was not executed by King George VI until a royal proclamation on 22 June 1948. Despite this, George VI continued as king of India until 1950 and as king of Pakistan until his death in 1952.

The last Empress of India was George VI's wife, Queen Elizabeth The Queen Mother.

Under the guise of idealism giving way to realism, German nationalism rapidly shifted from its liberal and democratic character in 1848 to Prussian prime minister Otto von Bismarck's authoritarian Realpolitik. Bismarck wanted to unify the rival German states to achieve his aim of a conservative, Prussian-dominated Germany. Three wars led to military successes and helped to convince the German people to do this: the Second Schleswig War against Denmark in 1864, the Austro-Prussian War against Austria in 1866, and the Franco-Prussian War against the Second French Empire in 1870–71. During the Siege of Paris in 1871, the North German Confederation, supported by its allies from southern Germany, formed the German Empire with the proclamation of the Prussian king Wilhelm I as German Emperor in the Hall of Mirrors at the Palace of Versailles, to the humiliation of the French, who ceased to resist only days later.

After his death, Wilhelm I was succeeded by his son Frederick III, who was emperor for only 99 days. In the same year, Frederick's son Wilhelm II became the third emperor within a year. He was the last German emperor. After the empire's defeat in World War I, the state, called Reich in German, had a president as head of state instead of an emperor. The use of the word Reich was abandoned after the Second World War.

In 1472, the niece of the last Byzantine emperor, Sophia Palaiologina, married Ivan III, grand prince of Moscow, who began championing the idea of Russia being the successor to the Byzantine Empire. This idea was represented more emphatically in the composition the monk Filofej addressed to their son Vasili III. After ending Muscovy's dependence on its Mongol overlords in 1480, Ivan III began the usage of the titles Tsar and Autocrat (samoderzhets). His insistence, from 1489, on recognition as such by the emperor of the Holy Roman Empire resulted in the granting of this recognition in 1514 by Emperor Maximilian I to Vasili III. His son Ivan IV emphatically crowned himself Tsar of Russia on 16 January 1547. The word "Tsar" derives from the Latin Caesar, but this title was used in Russia as equivalent to "King"; the error occurred when medieval Russian clerics referred to the biblical Jewish kings with the same title that was used to designate Roman and Byzantine rulers — "Caesar".

On 31 October 1721, Peter I was proclaimed Emperor by the Senate. The title used was the Latin "Imperator", a westernizing form equivalent to the traditional Slavic title "Tsar". He based his claim partially upon a letter discovered in 1717, written in 1514 from Maximilian I to Vasili III, in which the Holy Roman Emperor used the term in referring to Vasili.

A formal address to the ruling Russian monarch adopted thereafter was 'Your Imperial Majesty'. The crown prince was addressed as 'Your Imperial Highness'.

The title has not been used in Russia since the abdication of Emperor Nicholas II on 15 March 1917.

Imperial Russia produced four reigning Empresses, all in the eighteenth century.

In 1345, the Serbian King Stefan Uroš IV Dušan proclaimed himself Emperor (Tsar) and was crowned as such at Skopje on Easter 1346 by the newly created Serbian Patriarch, and by the Patriarch of Bulgaria and the autocephalous Archbishop of Ohrid. His imperial title was recognized by Bulgaria and various other neighbors and trading partners but not by the Byzantine Empire. In its final simplified form, the Serbian imperial title read "Emperor of Serbs and Greeks" (цар Срба и Грка in modern Serbian). It was only employed by Stefan Uroš IV Dušan and his son Stefan Uroš V in Serbia (until his death in 1371), after which it became extinct. A half-brother of Dušan, Simeon Uroš, and then his son Jovan Uroš, claimed the same title, until the latter's abdication in 1373, while ruling as dynasts in Thessaly. The "Greek" component in the Serbian imperial title indicates both rulership over Greeks and the derivation of the imperial tradition from the Romans.

The Aztec and Inca traditions are unrelated to one another. Both were conquered under the reign of King Charles I of Spain, who was simultaneously emperor-elect of the Holy Roman Empire during the fall of the Aztecs and fully emperor during the fall of the Incas. Incidentally, by being king of Spain, he was also Roman (Byzantine) emperor in pretence through Andreas Palaiologos. The translations of their titles were provided by the Spanish.

The only pre-Columbian North American rulers to be commonly called emperors were the Hueyi Tlatoani of the Aztec Empire (1375–1521). It was an elected monarchy chosen by the elite. In the Aztec Empire, there were three emperors: those of Tenochtitlan, Tlacopan and Texcoco. The Emperors of Tenochtitlan and Texcoco were nominally equals, each receiving two-fifths of the tribute from the vassal kingdoms, whereas the Emperor of Tlacopan was a junior member and received only one-fifth of the tribute, because Tlacopan was a newcomer to the alliance. Despite the nominal equality, Tenochtitlan eventually assumed a de facto dominant role in the Empire, to the point that even the Emperors of Tlacopan and Texcoco would acknowledge Tenochtitlan's effective supremacy. The Spanish conquistador Hernán Cortés slew Emperor Cuauhtémoc and installed puppet rulers who became vassals of Spain.

The only pre-Columbian South American rulers to be commonly called emperors were the Sapa Inca of the Inca Empire (1438–1533). The Spanish conquistador Francisco Pizarro conquered the Inca for Spain, killed Emperor Atahualpa, and installed puppets as well. Atahualpa may actually be considered a usurper, as he had achieved power by killing his half-brother and had not performed the required coronation with the imperial crown mascaipacha by the Huillaq Uma (high priest).

When Napoleon I ordered the invasion of Portugal in 1807 because it refused to join the Continental System, the Portuguese Braganzas moved their capital to Rio de Janeiro to avoid the fate of the Spanish Bourbons (Napoleon I arrested them and made his brother Joseph king). When the French general Jean-Andoche Junot arrived in Lisbon, the Portuguese fleet had already left with all the local elite.

In 1808, under a British naval escort, the fleet arrived in Brazil. Later, in 1815, the Portuguese Prince Regent (from 1816 King João VI) proclaimed the United Kingdom of Portugal, Brazil and the Algarves as a union of three kingdoms, lifting Brazil from its colonial status.

After the fall of Napoleon I and the Liberal revolution in Portugal, the Portuguese royal family returned to Europe (1821). Prince Pedro of Braganza (King João's older son) stayed in South America acting as regent of the local kingdom, but two years later, in 1822, he proclaimed himself Pedro I, first Emperor of Brazil. He did, however, recognize his father, João VI, as Titular Emperor of Brazil — a purely honorific title — until João VI's death in 1826.

The empire came to an end in 1889, with the overthrow of Emperor Pedro II (Pedro I's son and successor), when the Brazilian republic was proclaimed.

Haiti was declared an empire by its ruler, Jean-Jacques Dessalines, who made himself Jacques I on 20 May 1805. He was assassinated the next year. Haiti again became an empire from 1849 to 1859, under Faustin Soulouque.

In Mexico, the First Mexican Empire was the first of two empires created. After the declaration of independence on 15 September 1821, it was the intention of the Mexican parliament to establish a commonwealth whereby the King of Spain, Ferdinand VII, would also be Emperor of Mexico, but in which both countries were to be governed by separate laws and with their own legislative offices. Should the king refuse the position, the law provided for a member of the House of Bourbon to accede to the Mexican throne.

Ferdinand VII, however, did not recognize the independence and said that Spain would not allow any other European prince to take the throne of Mexico. By request of Parliament, the president of the regency, Agustín de Iturbide, was proclaimed Emperor of Mexico on 12 July 1822 as Agustín I. Agustín de Iturbide was the general who helped secure Mexican independence from Spanish rule, but he was overthrown by the Plan of Casa Mata.

In 1863, the invading French, under Napoleon III (see above), in alliance with Mexican conservatives and nobility, helped create the Second Mexican Empire and invited Archduke Maximilian of the House of Habsburg-Lorraine, younger brother of the Austrian Emperor Franz Josef I, to become Emperor Maximilian I of Mexico. The childless Maximilian and his consort, Empress Carlota of Mexico, daughter of Leopold I of Belgium, adopted Agustín's grandsons Agustin and Salvador as their heirs to bolster Maximilian's claim to the throne of Mexico. Maximilian and Carlota made Chapultepec Castle their home, which has been the only palace in North America to house sovereigns. After the withdrawal of French protection in 1867, Maximilian was captured and executed by the liberal forces of Benito Juárez.

This empire led to French influence in Mexican culture and also to immigration from France, Belgium, and Switzerland to Mexico.

In Persia, from the time of Darius the Great, Persian rulers used the title "King of Kings" (Shahanshah in Persian), since they had dominion over peoples from the borders of India to the borders of Greece and Egypt. Alexander probably crowned himself shahanshah after conquering Persia[citation needed], bringing the phrase basileus ton basileon into Greek. It is also known that Tigranes the Great, king of Armenia, was named king of kings when he built his empire after defeating the Parthians. The Georgian title "mephet'mephe" has the same meaning.

The last shahanshah (Mohammad Reza Pahlavi) was ousted in 1979, following the Iranian Revolution. Shahanshah is usually translated as king of kings, or simply king, for ancient rulers of the Achaemenid, Arsacid, and Sassanid dynasties, and often shortened to shah for rulers since the Safavid dynasty in the 16th century. Iranian rulers were typically regarded in the West as emperors.

The Sanskrit word for emperor is Samrāj or Samraat or Chakravartin. This word has been used as an epithet of various Vedic deities, like Varuna, and has been attested in the Rig-Veda. Chakravartin refers to the king of kings. A Chakravartin is not only a sovereign ruler but also has feudatories.

Typically, in the later Vedic age, a Hindu high king (Maharaja) was only called Samraaṭ after performing the Vedic Rajasuya sacrifice, enabling him by religious tradition to claim superiority over the other kings and princes. Another word for emperor is sārvabhaumā. The title of Samraaṭ has been used by many rulers of the Indian subcontinent, as claimed by the Hindu mythologies. In recorded history, most historians call Chandragupta Maurya the first samraaṭ (emperor) of the Indian subcontinent, because of the huge empire he ruled. The most famous emperor was his grandson Ashoka the Great. Other dynasties that are considered imperial by historians are the Kushanas, Guptas, Vijayanagara, Kakatiya, Hoysala and the Cholas.

Rudhramadevi (1259–1289) was one of the most prominent rulers of the Kakatiya dynasty on the Deccan Plateau, and one of the few ruling queens in Indian history.

After India was invaded by the Mongol Khans and Turkic Muslims, the rulers of their major states on the subcontinent were titled Sultān or Badshah or Shahanshah. In this manner, the only empress regnant ever to have actually sat on the throne of Delhi was Razia Sultan. The Mughal Emperors were the only Indian rulers for whom the term was consistently used by Western contemporaries. The emperors of the Maratha Empire were called Chhatrapati. From 1877 to 1947, the monarch of the United Kingdom adopted the additional title of Emperor/Empress of India (Kaisar-i-Hind).

From 1270 the Solomonic dynasty of Ethiopia used the title Nəgusä Nägäst, literally "King of Kings". The use of the king of kings style began a millennium earlier in this region, however, with the title being used by the Kings of Aksum, beginning with Sembrouthes in the 3rd century.

Another title used by this dynasty was Itegue Zetopia. Itegue translates as Empress, and was used by the only reigning Empress, Zauditu, along with the official title Negiste Negest ("Queen of Kings").

In 1936, the Italian king Victor Emmanuel III claimed the title of Emperor of Ethiopia after Ethiopia was occupied by Italy during the Second Italo-Abyssinian War. After the defeat of the Italians by the British and the Ethiopians in 1941, Haile Selassie was restored to the throne, but Victor Emmanuel did not relinquish his claim to the title until 1943.[13]

In 1976, President Jean-Bédel Bokassa of the Central African Republic proclaimed the country to be an autocratic Central African Empire, and made himself Emperor as Bokassa I. The expenses of his coronation ceremony actually bankrupted the country. He was overthrown three years later and the republic was restored.[14]

The rulers of China and (once Westerners became aware of the role) Japan were always accepted in the West as emperors, and referred to as such. The claims of other East Asian monarchies to the title may have been accepted for diplomatic purposes, but the title was not necessarily used in more general contexts.

The East Asian tradition is different from the Roman tradition, having arisen separately. What links them together is the use of the Chinese logographs 皇 (huáng) and 帝 (dì), which together or individually are imperial. Because of the cultural influence of China, China's neighbors adopted these titles or had their native titles conform in hanzi. Anyone who spoke to the emperor was to address him as bìxià (陛下, lit. the "Bottom of the Steps"), corresponding to "Imperial Majesty"; shèngshàng (聖上, lit. Holy Highness); or wànsuì (萬歲, lit. "You, of Ten Thousand Years").

In 221 BC, Ying Zheng, who was king of Qin at the time, proclaimed himself Shi Huangdi (始皇帝), which translates as "first emperor". Huangdi is composed of huang ("august one", 皇) and di ("sage-king", 帝), and referred to legendary/mythological sage-emperors living several millennia earlier, of whom three were huang and five were di. Thus Zheng became Qin Shi Huang, abolishing the system where the huang/di titles were reserved for dead and/or mythological rulers. Since then, the title "king" became a lower-ranked title, later divided into two grades. Although not as popular, the title 王 wang (king or prince) was still used by many monarchs and dynasties in China up to the Taipings in the 19th century. 王 is pronounced vương in Vietnamese, ō in Japanese, and wang in Korean.

The imperial title continued in China until the Qing Dynasty was overthrown in 1912. The title was briefly revived from 12 December 1915 to 22 March 1916 by President Yuan Shikai, and again in early July 1917 when General Zhang Xun attempted to restore the last Qing emperor, Puyi, to the throne. Puyi retained the title and attributes of a foreign emperor, as a personal status, until 1924. After the Japanese occupied Manchuria in 1931, they proclaimed it to be the Empire of Manchukuo, and Puyi became emperor of Manchukuo. This empire ceased to exist when it was occupied by the Soviet Red Army in 1945.[15]

In general, an emperor would have one empress (Huanghou, 皇后) at a time, although posthumous entitlement of a concubine as empress was not uncommon. The earliest known usage of huanghou was in the Han Dynasty. The emperor would generally select the empress from his concubines. In subsequent dynasties, when the distinction between wife and concubine became more accentuated, the crown prince would have chosen an empress-designate before his reign. Imperial China produced only one reigning empress, Wu Zetian, who used the same Chinese title as an emperor (Huangdi, 皇帝). Wu Zetian reigned for about 15 years (690–705 AD).

The earliest Emperor recorded in the Kojiki and Nihon Shoki is Emperor Jimmu, who is said to be a descendant of Amaterasu's grandson Ninigi, who descended from Heaven (Tenson kōrin). If one believes what is written in the Nihon Shoki, the Emperors have an unbroken direct male lineage that goes back more than 2,600 years.

In ancient Japan, the earliest titles for the sovereign were either ヤマト大王/大君 (yamato ōkimi, Grand King of Yamato), 倭王/倭国王 (waō/wakokuō, King of Wa, used externally), or 治天下大王 (amenoshita shiroshimesu ōkimi, Grand King who rules all under heaven, used internally).

In 607, Empress Suiko sent a diplomatic document to China in which she wrote "the emperor of the land of the rising sun (日出處天子) sends a document to the emperor of the land of the setting sun (日沒處天子)", and thereby began to use the title emperor externally.[16] As early as the 7th century, the word 天皇 (which can be read either as sumera no mikoto, divine order, or as tennō, Heavenly Emperor, the latter being derived from a Tang Chinese term referring to the Pole star around which all other stars revolve) began to be used. The earliest use of this term is found on a wooden slat, or mokkan, unearthed in Asuka-mura, Nara Prefecture in 1998. The slat dated back to the reign of Emperor Tenmu and Empress Jitō.[17] The reading 'Tennō' has remained the standard title for the Japanese sovereign up to the present age. The term 帝 (mikado, Emperor) is also found in literary sources.
169
+
170
+ In the Japanese language, the word tennō is restricted to Japan's own monarch; kōtei (皇帝) is used for foreign emperors. Historically, retired emperors often kept power over a child-emperor as de facto regents. For a long time, a shōgun (formally the imperial military dictator, but made hereditary) or an imperial regent wielded actual political power. In fact, through much of Japanese history, the emperor has been little more than a figurehead. The Meiji Restoration restored practical power to the emperor and rebuilt the political system under Emperor Meiji.[18] The last shogun, Tokugawa Yoshinobu, resigned in 1868.
+
+ After World War II, all claims of divinity were dropped (see Ningen-sengen). The Diet acquired all prerogative powers of the Crown, reverting the latter to a ceremonial role.[19] By the end of the 20th century, Japan was the only country with an emperor on the throne.
+
+ As of the early 21st century, Japan's succession law prohibits a female from ascending the throne. With the birth of a daughter as the first child of the then-Crown Prince Naruhito, Japan considered abandoning that rule. However, shortly after the announcement that Princess Kiko was pregnant with her third child, the proposal to alter the Imperial Household Law was suspended by then-Prime Minister Junichiro Koizumi. On 3 January 2007, as the child turned out to be a son, Prime Minister Shinzō Abe announced that he would drop the proposal.[20]
+
+ Emperor Naruhito is the 126th monarch according to Japan's traditional order of succession. The second and third in line of succession are Fumihito, Prince Akishino and Prince Hisahito. Historically, Japan has had eight reigning empresses who used the genderless title Tennō, rather than the female consort title kōgō (皇后) or chūgū (中宮). There is ongoing discussion of the Japanese Imperial succession controversy.
+ Although current Japanese law prohibits female succession, all Japanese emperors claim to trace their lineage to Amaterasu, the Sun Goddess of the Shintō religion. On the strength of this descent, the Emperor is thought to be the highest authority of the Shinto religion, and one of his duties is to perform Shinto rituals for the people of Japan.
+
+ Some rulers of Goguryeo (37 BC–AD 668) used the title of Taewang (태왕; 太王), literally translated as "Greatest King". The title of Taewang was also used by some rulers of Silla (57 BC–AD 935), including Beopheung and Jinheung.
+
+ The rulers of Balhae (698–926) internally called themselves Seongwang (성왕; 聖王; lit. "Holy King").[21]
+
+ The rulers of Goryeo (918–1392) used the titles of emperor and Son of Heaven of the East of the Ocean (해동천자; 海東天子). Goryeo's imperial system ended in 1270 with capitulation to the Mongol Empire.[22]
+
+ In 1897, Gojong, the King of Joseon, proclaimed the founding of the Korean Empire (1897–1910), becoming the Emperor of Korea. He declared the era name of "Gwangmu" (광무; 光武), meaning "Bright and Martial". The Korean Empire lasted until 1910, when it was annexed by the Empire of Japan.
+
+ The title Khagan (khan of khans or grand khan) was held by Genghis Khan, founder of the Mongol Empire in 1206; he also formally took the Chinese title huangdi, as "Genghis Emperor" (成吉思皇帝; Chéngjísī Huángdì). Only the Khagans from Genghis Khan to the fall of the Yuan dynasty in 1368 are normally referred to as Emperors in English.
+
+ Ngô Quyền, the first ruler of Đại Việt as an independent state, used the title Vương (王, King). After his death, however, the country was plunged into a civil war known as the Anarchy of the 12 Warlords that lasted for over 20 years. In the end, Đinh Bộ Lĩnh unified the country after defeating all the warlords and became the first ruler of Đại Việt to use the title Hoàng Đế (皇帝, Emperor), in 968. Succeeding rulers in Vietnam continued to use this Emperor title until 1806, when it fell out of use for a century.[citation needed]
+
+ Đinh Bộ Lĩnh was not the first to claim the title of Đế (帝, Emperor). Before him, Lý Bí and Mai Thúc Loan also claimed this title, but their reigns were short-lived.[citation needed]
+
+ The Vietnamese emperors also gave this title to their ancestors who were lords or influential figures in the previous dynasty, as did the Chinese emperors. This practice was one of the many indications that Vietnam considered itself an equal of China, an attitude that remained intact up to the twentieth century.[23]
+
+ In 1802 the newly established Nguyễn dynasty requested investiture from the Chinese Jiaqing Emperor and received the title Quốc Vương (國王, King of a State) and the name An Nam (安南) for the country instead of Đại Việt (大越). To avoid unnecessary armed conflict, the Vietnamese rulers accepted this in diplomatic relations and used the title Emperor only domestically. However, Vietnamese rulers never accepted a vassalage relationship with China and always refused to come to Chinese courts to pay homage to Chinese rulers (a sign of accepting vassalage). China waged a number of wars against Vietnam throughout history, and after each failure settled for the tributary relationship. The Yuan dynasty under Kublai Khan waged three wars against Vietnam to force it into a vassalage relationship, but after successive failures, Kublai Khan's successor, Temür Khan, finally settled for a tributary relationship with Vietnam. Vietnam sent tributary missions to China once every three years (with some periods of disruption) until the 19th century, when, following the Sino-French War, France replaced China in control of northern Vietnam.[citation needed]
+
+ The emperors of the last dynasty of Vietnam continued to hold this title until the French conquered Vietnam. The emperor, however, was by then a puppet figure only and could easily be deposed by the French in favour of a more pro-French figure. Japan took Vietnam from France, and Axis-occupied Vietnam was declared an empire by the Japanese in March 1945. The line of emperors came to an end with Bảo Đại, who was deposed after the war, although he later served as head of state of South Vietnam from 1949 to 1955.[citation needed]
+
+ The lone holders of the imperial title in Oceania were the heads of the semi-mythical Tuʻi Tonga Empire.
+
+ There have been many fictional emperors in movies and books. To see a list of these emperors, see Category of fictional emperors and empresses.
en/1723.html.txt ADDED
@@ -0,0 +1,133 @@
+
+
+ Justinian I (/dʒʌˈstɪniən/; Latin: Flavius Petrus Sabbatius Iustinianus; Byzantine Greek: Ἰουστινιανός Αʹ ὁ Μέγας, romanized: Ioustinianós I ho Mégas; c. 482 – 14 November 565), also known as Justinian the Great, was the Eastern Roman emperor from 527 to 565.
+
+ His reign is marked by the ambitious but only partly realized renovatio imperii, or "restoration of the Empire".[2] Because of his restoration activities, Justinian has sometimes been known as the "Last Roman" in mid-20th century historiography.[3] This ambition was expressed by the partial recovery of the territories of the defunct Western Roman Empire.[4] His general, Belisarius, swiftly conquered the Vandal Kingdom in North Africa. Subsequently, Belisarius, Narses, and other generals conquered the Ostrogothic kingdom, restoring Dalmatia, Sicily, Italy, and Rome to the empire after more than half a century of rule by the Ostrogoths. The prefect Liberius reclaimed the south of the Iberian peninsula, establishing the province of Spania. These campaigns re-established Roman control over the western Mediterranean, increasing the Empire's annual revenue by over a million solidi.[5] During his reign, Justinian also subdued the Tzani, a people on the east coast of the Black Sea that had never been under Roman rule before.[6] He engaged the Sasanian Empire in the east during Kavad I's reign, and later again during Khosrow I's; this second conflict was partially initiated due to his ambitions in the west.
+
+ A still more resonant aspect of his legacy was the uniform rewriting of Roman law, the Corpus Juris Civilis, which is still the basis of civil law in many modern states.[7] His reign also marked a blossoming of Byzantine culture, and his building program yielded works such as the Hagia Sophia. He is called "Saint Justinian the Emperor" in the Eastern Orthodox Church.[8]
+
+ Justinian was born in Tauresium,[9] Dardania,[10] around 482. A native speaker of Latin (possibly the last Roman emperor to be one[11]), he came from a peasant family believed to have been of Illyro-Roman[12][13][14] or Thraco-Roman origins.[15][16][17]
+ The cognomen Iustinianus, which he took later, is indicative of adoption by his uncle Justin.[18] During his reign, he founded Justiniana Prima not far from his birthplace.[19][20][21] His mother was Vigilantia, the sister of Justin. Justin, who was in the imperial guard (the Excubitors) before he became emperor,[22] adopted Justinian, brought him to Constantinople, and ensured the boy's education.[22] As a result, Justinian was well educated in jurisprudence, theology and Roman history.[22] Justinian served for some time with the Excubitors, but the details of his early career are unknown.[22] The chronicler John Malalas, who lived during the reign of Justinian, describes him as short, fair-skinned, curly-haired, round-faced, and handsome. Another contemporary chronicler, Procopius, compares Justinian's appearance to that of the tyrannical Emperor Domitian, although this is probably slander.[23]
+
+ When Emperor Anastasius died in 518, Justin was proclaimed the new emperor, with significant help from Justinian.[22] During Justin's reign (518–527), Justinian was the emperor's close confidant. Justinian showed much ambition, and it has been thought that he was functioning as virtual regent long before Justin made him associate emperor on 1 April 527, although there is no conclusive evidence of this.[24] As Justin became senile near the end of his reign, Justinian became the de facto ruler.[22] Following the assassination of the general Vitalian, presumed to have been orchestrated by Justinian or Justin, Justinian was appointed consul in 521 and later commander of the army of the east.[22][25] Upon Justin's death on 1 August 527, Justinian became the sole sovereign.[22]
+
+ As a ruler, Justinian showed great energy. He was known as "the emperor who never sleeps" on account of his work habits. Nevertheless, he seems to have been amiable and easy to approach.[26] Around 525, he married his mistress, Theodora, in Constantinople. She was by profession an actress and some twenty years his junior. In earlier times, Justinian could not have married her owing to her class, but his uncle, Emperor Justin I, had passed a law lifting restrictions on marriages with ex-actresses.[27][28] Though the marriage caused a scandal, Theodora would become very influential in the politics of the Empire. Other talented individuals included Tribonian, his legal adviser; Peter the Patrician, the diplomat and longtime head of the palace bureaucracy; Justinian's finance ministers John the Cappadocian and Peter Barsymes, who managed to collect taxes more efficiently than any before, thereby funding Justinian's wars; and finally, his prodigiously talented generals, Belisarius and Narses.
+
+ Justinian's rule was not universally popular; early in his reign he nearly lost his throne during the Nika riots, and a conspiracy against the emperor's life by dissatisfied businessmen was discovered as late as 562.[29] Justinian was struck by the plague in the early 540s but recovered. Theodora died in 548[30] at a relatively young age, possibly of cancer; Justinian outlived her by nearly twenty years. Justinian, who had always had a keen interest in theological matters and actively participated in debates on Christian doctrine,[31] became even more devoted to religion during the later years of his life. When he died on 14 November 565, he left no children. He was succeeded by Justin II, who was the son of his sister Vigilantia and married to Sophia, the niece of Theodora. Justinian's body was entombed in a specially built mausoleum in the Church of the Holy Apostles until it was desecrated and robbed during the pillage of the city in 1204 by the Latin States of the Fourth Crusade.[32]
+
+ Justinian achieved lasting fame through his judicial reforms, particularly through the complete revision of all Roman law,[33] something that had not previously been attempted. The total of Justinian's legislation is known today as the Corpus juris civilis. It consists of the Codex Justinianeus, the Digesta or Pandectae, the Institutiones, and the Novellae.
+
+ Early in his reign, Justinian appointed the quaestor Tribonian to oversee this task. The first draft of the Codex Justinianeus, a codification of imperial constitutions from the 2nd century onward, was issued on 7 April 529. (The final version appeared in 534.) It was followed by the Digesta (or Pandectae), a compilation of older legal texts, in 533, and by the Institutiones, a textbook explaining the principles of law. The Novellae, a collection of new laws issued during Justinian's reign, supplements the Corpus. As opposed to the rest of the corpus, the Novellae appeared in Greek, the common language of the Eastern Empire.
+
+ The Corpus forms the basis of Latin jurisprudence (including ecclesiastical Canon Law) and, for historians, provides a valuable insight into the concerns and activities of the later Roman Empire. As a collection it gathers together the many sources in which the leges (laws) and the other rules were expressed or published: proper laws, senatorial consults (senatusconsulta), imperial decrees, case law, and jurists' opinions and interpretations (responsa prudentum).
+ Tribonian's code ensured the survival of Roman law. It formed the basis of later Byzantine law, as expressed in the Basilika of Basil I and Leo VI the Wise. The only western province where the Justinian code was introduced was Italy (after the conquest by the so-called Pragmatic Sanction of 554),[34] from where it was to pass to Western Europe in the 12th century and become the basis of much European law. It eventually passed to Eastern Europe, where it appeared in Slavic editions, and it also passed on to Russia.[35] It remains influential to this day.
+
+ He passed laws to protect prostitutes from exploitation and women from being forced into prostitution. Rapists were treated severely. Further, under his policies, women charged with major crimes were to be guarded by other women to prevent sexual abuse; if a woman was widowed, her dowry was to be returned; and a husband could not take on a major debt without his wife giving her consent twice.[36]
+
+ Justinian discontinued the appointment of consuls beginning in 541.[37] The consulship was revived in 566 by his successor Justin II, who simply appointed himself to the position.
+
+ Justinian's habit of choosing efficient, but unpopular advisers nearly cost him his throne early in his reign. In January 532, partisans of the chariot racing factions in Constantinople, normally rivals, united against Justinian in a revolt that has become known as the Nika riots. They forced him to dismiss Tribonian and two of his other ministers, and then attempted to overthrow Justinian himself and replace him with the senator Hypatius, who was a nephew of the late emperor Anastasius. While the crowd was rioting in the streets, Justinian considered fleeing the capital by sea, but eventually decided to stay, apparently on the prompting of Theodora, who refused to leave. In the next two days, he ordered the brutal suppression of the riots by his generals Belisarius and Mundus. Procopius relates that 30,000[38] unarmed civilians were killed in the Hippodrome. On Theodora's insistence, and apparently against his own judgment,[39] Justinian had Anastasius' nephews executed.[40]
+
+ The destruction that took place during the revolt provided Justinian with an opportunity to tie his name to a series of splendid new buildings, most notably the architectural innovation of the domed Hagia Sophia.
+
+ One of the most spectacular features of Justinian's reign was the recovery of large stretches of land around the Western Mediterranean basin that had slipped out of Imperial control in the 5th century.[41] As a Christian Roman emperor, Justinian considered it his divine duty to restore the Roman Empire to its ancient boundaries. Although he never personally took part in military campaigns, he boasted of his successes in the prefaces to his laws and had them commemorated in art.[42] The re-conquests were in large part carried out by his general Belisarius.[43]
+
+ From his uncle, Justinian inherited ongoing hostilities with the Sassanid Empire.[44] In 530 the Persian forces suffered a double defeat at Dara and Satala, but the next year saw the defeat of Roman forces under Belisarius near Callinicum.[45] Justinian then tried to make an alliance with the Axumites of Ethiopia and the Himyarites of Yemen against the Persians, but this failed.[46] When King Kavadh I of Persia died (September 531), Justinian concluded an "Eternal Peace" (which cost him 11,000 pounds of gold)[45] with his successor Khosrau I (532). Having thus secured his eastern frontier, Justinian turned his attention to the West, where Germanic kingdoms had been established in the territories of the former Western Roman Empire.
+
+ The first of the western kingdoms Justinian attacked was that of the Vandals in North Africa. King Hilderic, who had maintained good relations with Justinian and the North African Catholic clergy, had been overthrown by his cousin Gelimer in 530 A.D. Imprisoned, the deposed king appealed to Justinian.
+
+ In 533, Belisarius sailed to Africa with a fleet of 92 dromons, escorting 500 transports carrying an army of about 15,000 men, as well as a number of barbarian troops. They landed at Caput Vada (modern Ras Kaboudia) in modern Tunisia. They defeated the Vandals, who were caught completely off guard, at Ad Decimum on 14 September 533 and Tricamarum in December; Belisarius took Carthage. King Gelimer fled to Mount Pappua in Numidia, but surrendered the next spring. He was taken to Constantinople, where he was paraded in a triumph. Sardinia and Corsica, the Balearic Islands, and the stronghold Septem Fratres near Gibraltar were recovered in the same campaign.[47]
+
+ Writing of this war, the contemporary historian Procopius remarks that Africa was so entirely depopulated that a person might travel several days without meeting a human being, and he adds, "it is no exaggeration to say, that in the course of the war 5,000,000 perished by the sword, and famine, and pestilence."
+
+ An African prefecture, centered in Carthage, was established in April 534,[48] but it would teeter on the brink of collapse during the next 15 years, amidst warfare with the Moors and military mutinies. The area was not completely pacified until 548,[49] but remained peaceful thereafter and enjoyed a measure of prosperity. The recovery of Africa cost the empire about 100,000 pounds of gold.[50]
+
+ As in Africa, dynastic struggles in Ostrogothic Italy provided an opportunity for intervention. The young king Athalaric had died on 2 October 534, and a usurper, Theodahad, had imprisoned queen Amalasuntha, Theodoric's daughter and mother of Athalaric, on the island of Martana in Lake Bolsena, where he had her assassinated in 535. Thereupon Belisarius, with 7,500 men,[51] invaded Sicily (535) and advanced into Italy, sacking Naples and capturing Rome on 9 December 536. By that time Theodahad had been deposed by the Ostrogothic army, who had elected Vitigis as their new king. He gathered a large army and besieged Rome from February 537 to March 538 without being able to retake the city.
+
+ Justinian sent another general, Narses, to Italy, but tensions between Narses and Belisarius hampered the progress of the campaign. Milan was taken, but was soon recaptured and razed by the Ostrogoths. Justinian recalled Narses in 539. By then the military situation had turned in favour of the Romans, and in 540 Belisarius reached the Ostrogothic capital Ravenna. There he was offered the title of Western Roman Emperor by the Ostrogoths at the same time that envoys of Justinian were arriving to negotiate a peace that would leave the region north of the Po River in Gothic hands. Belisarius feigned acceptance of the offer, entered the city in May 540, and reclaimed it for the Empire.[52] Then, having been recalled by Justinian, Belisarius returned to Constantinople, taking the captured Vitigis and his wife Matasuntha with him.
+
+ Belisarius had been recalled in the face of renewed hostilities by the Persians. Following a revolt against the Empire in Armenia in the late 530s and possibly motivated by the pleas of Ostrogothic ambassadors, King Khosrau I broke the "Eternal Peace" and invaded Roman territory in the spring of 540.[53] He first sacked Beroea and then Antioch (allowing the garrison of 6,000 men to leave the city),[54] besieged Daras, and then went on to attack the small but strategically significant satellite kingdom of Lazica near the Black Sea, exacting tribute from the towns he passed along his way. He forced Justinian I to pay him 5,000 pounds of gold, plus 500 pounds of gold more each year.[54]
+
+ Belisarius arrived in the East in 541, but after some success, was again recalled to Constantinople in 542. The reasons for his withdrawal are not known, but it may have been instigated by rumours of his disloyalty reaching the court.[55]
+ The outbreak of the plague caused a lull in the fighting during the year 543. The following year Khosrau defeated a Byzantine army of 30,000 men,[56] but unsuccessfully besieged the major city of Edessa. Both parties made little headway, and in 545 a truce was agreed upon for the southern part of the Roman-Persian frontier. After that the Lazic War in the North continued for several years, until a second truce in 557, followed by a Fifty Years' Peace in 562. Under its terms, the Persians agreed to abandon Lazica in exchange for an annual tribute of 400 or 500 pounds of gold (30,000 solidi) to be paid by the Romans.[57]
+
+ While military efforts were directed to the East, the situation in Italy took a turn for the worse. Under their respective kings Ildibad and Eraric (both murdered in 541) and especially Totila, the Ostrogoths made quick gains. After a victory at Faenza in 542, they reconquered the major cities of Southern Italy and soon held almost the entire Italian peninsula. Belisarius was sent back to Italy late in 544 but lacked sufficient troops and supplies. Making no headway, he was relieved of his command in 548. Belisarius succeeded in defeating a Gothic fleet of 200 ships.[citation needed] During this period the city of Rome changed hands three more times, first taken and depopulated by the Ostrogoths in December 546, then reconquered by the Byzantines in 547, and then again by the Goths in January 550. Totila also plundered Sicily and attacked Greek coastlines.
+
+ Finally, Justinian dispatched a force of approximately 35,000 men (2,000 men were detached and sent to invade southern Visigothic Hispania) under the command of Narses.[58] The army reached Ravenna in June 552 and defeated the Ostrogoths decisively within a month at the battle of Busta Gallorum in the Apennines, where Totila was slain. After a second battle at Mons Lactarius in October that year, the resistance of the Ostrogoths was finally broken. In 554, a large-scale Frankish invasion was defeated at Casilinum, and Italy was secured for the Empire, though it would take Narses several years to reduce the remaining Gothic strongholds. At the end of the war, Italy was garrisoned with an army of 16,000 men.[59] The recovery of Italy cost the empire about 300,000 pounds of gold.[50] Procopius estimated "the loss of the Goths at 15,000,000."[60]
+
+ In addition to the other conquests, the Empire established a presence in Visigothic Hispania, when the usurper Athanagild requested assistance in his rebellion against King Agila I. In 552, Justinian dispatched a force of 2,000 men; according to the historian Jordanes, this army was led by the octogenarian Liberius.[61] The Byzantines took Cartagena and other cities on the southeastern coast and founded the new province of Spania before being checked by their former ally Athanagild, who had by now become king. This campaign marked the apogee of Byzantine expansion.
+
+ During Justinian's reign, the Balkans suffered from several incursions by the Turkic and Slavic peoples who lived north of the Danube. Here, Justinian resorted mainly to a combination of diplomacy and a system of defensive works. In 559 a particularly dangerous invasion of Sklavinoi and Kutrigurs under their khan Zabergan threatened Constantinople, but they were repulsed by the aged general Belisarius.
+
+ Justinian's ambition to restore the Roman Empire to its former glory was only partly realized. In the West, the brilliant early military successes of the 530s were followed by years of stagnation. The dragging war with the Goths was a disaster for Italy, even though its long-lasting effects may have been less severe than is sometimes thought.[62] The heavy taxes that the administration imposed upon its population were deeply resented. The final victory in Italy and the conquest of Africa and the coast of southern Hispania significantly enlarged the area over which the Empire could project its power and eliminated all naval threats to the empire. Despite losing much of Italy soon after Justinian's death, the empire retained several important cities, including Rome, Naples, and Ravenna, leaving the Lombards as a regional threat. The newly founded province of Spania kept the Visigoths as a threat to Hispania alone and not to the western Mediterranean and Africa.
+ Events of the later years of the reign showed that Constantinople itself was not safe from barbarian incursions from the north, and even the relatively benevolent historian Menander Protector felt the need to attribute the Emperor's failure to protect the capital to the weakness of his body in his old age.[63] In his efforts to renew the Roman Empire, Justinian dangerously stretched its resources while failing to take into account the changed realities of 6th-century Europe.[64]
+
+ Justinian saw the orthodoxy of his empire threatened by diverging religious currents, especially Monophysitism, which had many adherents in the eastern provinces of Syria and Egypt. Monophysite doctrine, which maintains that Jesus Christ had one divine nature or a synthesis of a divine and human nature, had been condemned as a heresy by the Council of Chalcedon in 451, and the tolerant policies towards Monophysitism of Zeno and Anastasius I had been a source of tension in the relationship with the bishops of Rome. Justin reversed this trend and confirmed the Chalcedonian doctrine, openly condemning the Monophysites. Justinian, who continued this policy, tried to impose religious unity on his subjects by forcing them to accept doctrinal compromises that might appeal to all parties, a policy that proved unsuccessful as he satisfied none of them.[66]
+
+ Near the end of his life, Justinian became ever more inclined towards the Monophysite doctrine, especially in the form of Aphthartodocetism, but he died before being able to issue any legislation. The empress Theodora sympathized with the Monophysites and is said to have been a constant source of pro-Monophysite intrigues at the court in Constantinople in the earlier years. In the course of his reign, Justinian, who had a genuine interest in matters of theology, authored a small number of theological treatises.[67]
+
+ As in his secular administration, despotism appeared also in the Emperor's ecclesiastical policy. He regulated everything, both in religion and in law.
+
+ At the very beginning of his reign, he deemed it proper to promulgate by law the Church's belief in the Trinity and the Incarnation, and to threaten all heretics with the appropriate penalties,[68] whereas he subsequently declared that he intended to deprive all disturbers of orthodoxy of the opportunity for such offense by due process of law.[69] He made the Nicaeno-Constantinopolitan creed the sole symbol of the Church[70] and accorded legal force to the canons of the four ecumenical councils.[71] The bishops in attendance at the Second Council of Constantinople in 553 recognized that nothing could be done in the Church contrary to the emperor's will and command,[72] while, on his side, the emperor, in the case of the Patriarch Anthimus, reinforced the ban of the Church with temporal proscription.[73] Justinian protected the purity of the church by suppressing heretics. He neglected no opportunity to secure the rights of the Church and clergy, and to protect and extend monasticism. He granted the monks the right to inherit property from private citizens and the right to receive solemnia, or annual gifts, from the Imperial treasury or from the taxes of certain provinces and he prohibited lay confiscation of monastic estates.
+
+ Although the despotic character of his measures is contrary to modern sensibilities, he was indeed a "nursing father" of the Church. Both the Codex and the Novellae contain many enactments regarding donations, foundations, and the administration of ecclesiastical property; election and rights of bishops, priests and abbots; monastic life, residential obligations of the clergy, conduct of divine service, episcopal jurisdiction, etc. Justinian also rebuilt the Church of Hagia Sophia (which cost 20,000 pounds of gold),[74] the original site having been destroyed during the Nika riots. The new Hagia Sophia, with its numerous chapels and shrines, gilded octagonal dome, and mosaics, became the centre and most visible monument of Eastern Orthodoxy in Constantinople.
+
+ From the middle of the 5th century onward, increasingly arduous tasks confronted the emperors of the East in ecclesiastical matters.
+ Justinian entered the arena of ecclesiastical statecraft shortly after his uncle's accession in 518, and put an end to the Acacian schism. Previous Emperors had tried to alleviate theological conflicts by declarations that deemphasized the Council of Chalcedon, which had condemned Monophysitism, which had strongholds in Egypt and Syria, and by tolerating the appointment of Monophysites to church offices. The Popes reacted by severing ties with the Patriarch of Constantinople who supported these policies. Emperors Justin I (and later Justinian himself) rescinded these policies and reestablished the union between Constantinople and Rome.[75] After this, Justinian also felt entitled to settle disputes in papal elections, as he did when he favoured Vigilius and had his rival Silverius deported.
+
+ This new-found unity between East and West did not, however, solve the ongoing disputes in the east. Justinian's policies switched between attempts to force Monophysites to accept the Chalcedonian creed by persecuting their bishops and monks – thereby embittering their sympathizers in Egypt and other provinces – and attempts at a compromise that would win over the Monophysites without surrendering the Chalcedonian faith. Such an approach was supported by the Empress Theodora, who favoured the Monophysites unreservedly. In the condemnation of the Three Chapters, three theologians who had opposed Monophysitism before and after the Council of Chalcedon, Justinian tried to win over the opposition. At the Fifth Ecumenical Council, most of the Eastern church yielded to the Emperor's demands, and Pope Vigilius, who was forcibly brought to Constantinople and besieged at a chapel, finally also gave his assent. However, the condemnation was received unfavourably in the west, where it led to a new (albeit temporary) schism, and failed to reach its goal in the east, as the Monophysites remained unsatisfied; this was all the more bitter for him because during his last years he took an even greater interest in theological matters.
+
+ Justinian's religious policy reflected the Imperial conviction that the unity of the Empire presupposed unity of faith, and it appeared to him obvious that this faith could only be the orthodox (Nicaean). Those of a different belief were subjected to persecution, which imperial legislation had effected from the time of Constantius II and which would now vigorously continue. The Codex contained two statutes[76] that decreed the total destruction of paganism, even in private life; these provisions were zealously enforced. Contemporary sources (John Malalas, Theophanes, and John of Ephesus) tell of severe persecutions, even of men in high position.[dubious – discuss]
+
+ The original Academy of Plato had been destroyed by the Roman dictator Sulla in 86 BC. Several centuries later, in 410 AD, a Neoplatonic Academy was established that had no institutional continuity with Plato's Academy, and which served as a center for Neoplatonism and mysticism. It persisted until 529 AD when it was finally closed by Justinian I. Other schools in Constantinople, Antioch, and Alexandria, which were the centers of Justinian's empire, continued.[77]
+
+ In Asia Minor alone, John of Ephesus was reported to have converted 70,000 pagans, which was probably an exaggerated number.[78] Other peoples also accepted Christianity: the Heruli,[79] the Huns dwelling near the Don,[80] the Abasgi,[81] and the Tzanni in Caucasia.[82]
+
+ The worship of Amun at the oasis of Awjila in the Libyan desert was abolished,[83] and so were the remnants of the worship of Isis on the island of Philae, at the first cataract of the Nile.[84] The Presbyter Julian[85] and the Bishop Longinus[86] conducted a mission among the Nabataeans, and Justinian attempted to strengthen Christianity in Yemen by dispatching a bishop from Egypt.[87]
+
+ The civil rights of Jews were restricted[88] and their religious privileges threatened.[89] Justinian also interfered in the internal affairs of the synagogue[90] and encouraged the Jews to use the Greek Septuagint in their synagogues in Constantinople.[91]
+
+ The Emperor faced significant opposition from the Samaritans, who resisted conversion to Christianity and were repeatedly in insurrection. He persecuted them with rigorous edicts, but could not prevent reprisals towards Christians from taking place in Samaria toward the close of his reign. The consistency of Justinian's policy meant that the Manicheans too suffered persecution, experiencing both exile and threat of capital punishment.[92] At Constantinople, on one occasion, not a few Manicheans, after strict inquisition, were executed in the emperor's very presence: some by burning, others by drowning.[93]
+
+ Justinian was a prolific builder; the historian Procopius bears witness to his activities in this area.[94] Under Justinian's reign, the San Vitale in Ravenna, which features two famous mosaics representing Justinian and Theodora, was completed under the sponsorship of Julius Argentarius.[22] Most notably, he had the Hagia Sophia, originally a basilica-style church that had been burnt down during the Nika riots, splendidly rebuilt according to a completely different ground plan, under the architectural supervision of Isidore of Miletus and Anthemius of Tralles. According to Pseudo-Codinus, Justinian stated at the completion of this edifice, "Solomon, I have outdone thee" (in reference to the first Jewish temple). This new cathedral, with its magnificent dome filled with mosaics, remained the centre of eastern Christianity for centuries.
+
+ Another prominent church in the capital, the Church of the Holy Apostles, which had been in a very poor state near the end of the 5th century, was likewise rebuilt.[95] The Church of Saints Sergius and Bacchus, later re-named Little Hagia Sophia, was also built between 532 and 536 by the imperial couple.[96] Works of embellishment were not confined to churches alone: excavations at the site of the Great Palace of Constantinople have yielded several high-quality mosaics dating from Justinian's reign, and a column topped by a bronze statue of Justinian on horseback and dressed in a military costume was erected in the Augustaeum in Constantinople in 543.[97] Rivalry with other, more established patrons from the Constantinopolitan and exiled Roman aristocracy may have spurred Justinian's building activities in the capital as a means of strengthening his dynasty's prestige.[98]
+
+ Justinian also strengthened the borders of the Empire from Africa to the East through the construction of fortifications and secured Constantinople's water supply through the construction of underground cisterns (see Basilica Cistern). To prevent floods from damaging the strategically important border town of Dara, an advanced arch dam was built. During his reign the large Sangarius Bridge was built in Bithynia, securing a major military supply route to the east. Furthermore, Justinian restored cities damaged by earthquake or war and built a new city near his place of birth called Justiniana Prima, which was intended to replace Thessalonica as the political and religious centre of Illyricum.
+
+ In Justinian's reign, and partly under his patronage, Byzantine culture produced noteworthy historians, including Procopius and Agathias, and poets such as Paul the Silentiary and Romanus the Melodist flourished. On the other hand, centres of learning such as the Neoplatonic Academy in Athens and the famous Law School of Beirut[99] lost their importance during his reign.
+
+ As was the case under Justinian's predecessors, the Empire's economic health rested primarily on agriculture. In addition, long-distance trade flourished, reaching as far north as Cornwall where tin was exchanged for Roman wheat.[100] Within the Empire, convoys sailing from Alexandria provided Constantinople with wheat and grains. Justinian made the traffic more efficient by building a large granary on the island of Tenedos for storage and further transport to Constantinople.[101] Justinian also tried to find new routes for the eastern trade, which was suffering badly from the wars with the Persians.
+
+ One important luxury product was silk, which was imported and then processed in the Empire. In order to protect the manufacture of silk products, Justinian granted a monopoly to the imperial factories in 541.[102] In order to bypass the Persian land route, Justinian established friendly relations with the Abyssinians, whom he wanted to act as trade mediators by transporting Indian silk to the Empire; the Abyssinians, however, were unable to compete with the Persian merchants in India.[103] Then, in the early 550s, two monks succeeded in smuggling silkworm eggs from Central Asia back to Constantinople,[104] and silk became an indigenous product.
+
+ Gold and silver were mined in the Balkans, Anatolia, Armenia, Cyprus, Egypt and Nubia.[105]
+
+ At the start of Justinian I's reign he had inherited a surplus of 28,800,000 solidi (400,000 pounds of gold) in the imperial treasury from Anastasius I and Justin I.[50] Under Justinian's rule, measures were taken to counter corruption in the provinces and to make tax collection more efficient. Greater administrative power was given to both the leaders of the prefectures and of the provinces, while power was taken away from the vicariates of the dioceses, of which a number were abolished. The overall trend was towards a simplification of administrative infrastructure.[106] According to Brown (1971), the increased professionalization of tax collection did much to destroy the traditional structures of provincial life, as it weakened the autonomy of the town councils in the Greek towns.[107] It has been estimated that before Justinian I's reconquests the state had an annual revenue of 5,000,000 solidi in AD 530, but after his reconquests, the annual revenue was increased to 6,000,000 solidi in AD 550.[50]
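+
+ As a rough consistency check on these figures (the solidus was struck at 72 to the Roman pound of gold): 28,800,000 solidi ÷ 72 = 400,000 pounds, matching the stated treasury surplus, and the rise in annual revenue from 5,000,000 to 6,000,000 solidi works out to roughly 14,000 pounds of gold, an increase of one-fifth.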
+
+ Throughout Justinian's reign, the cities and villages of the East prospered, although Antioch was struck by two earthquakes (526, 528) and sacked and evacuated by the Persians (540). Justinian had the city rebuilt, but on a slightly smaller scale.[108]
+
+ Despite all these measures, the Empire suffered several major setbacks in the course of the 6th century. The first one was the plague, which lasted from 541 to 543 and, by decimating the Empire's population, probably created a scarcity of labor and a rise in wages.[109] The lack of manpower also led to a significant increase in the number of "barbarians" in the Byzantine armies after the early 540s.[110] The protracted war in Italy and the wars with the Persians themselves laid a heavy burden on the Empire's resources, and Justinian was criticized for curtailing the government-run post service, which he limited to only one eastern route of military importance.[111]
+
+ During the 530s, it seemed to many that God had abandoned the Christian Roman Empire. There were noxious fumes in the air and the Sun, while still providing daylight, refused to give much heat. This caused famine unlike anything those of the time had seen before, affecting both Europe and the Middle East.
+
+ The causes of these disasters are not precisely known, but volcanoes at the Rabaul caldera, Lake Ilopango, Krakatoa, or, according to a recent finding, in Iceland[112] are suspected, as is an air burst event from a comet fragment.[citation needed]
+
+ Seven years later, in 542, a devastating outbreak of bubonic plague, known as the Plague of Justinian and second only to the Black Death of the 14th century, killed tens of millions. Justinian and members of his court, physically unaffected by the previous 535–536 famine, were afflicted, with Justinian himself contracting and surviving the pestilence. The impact of this outbreak of plague has recently been disputed, since evidence for tens of millions dying is uncertain.[113][114]
+
+ In July 551, the eastern Mediterranean was rocked by the 551 Beirut earthquake, which triggered a tsunami. The combined fatalities of both events likely exceeded 30,000, with tremors felt from Antioch to Alexandria.
+
+ In the Paradiso section of the Divine Comedy by Dante Alighieri, Justinian I is prominently featured as a spirit residing on the sphere of Mercury, which holds the ambitious souls of Heaven. His legacy is elaborated on, and he is portrayed as a defender of the Christian faith and the restorer of Rome to the Empire. However, Justinian confesses that he was partially motivated by fame rather than duty to God, which tainted the justice of his rule in spite of his proud accomplishments. In his introduction, "Cesare fui e son Iustinïano" ("Caesar I was, and am Justinian"[116]), his mortal title is contrasted with his immortal soul, to emphasize that "glory in life is ephemeral, while contributing to God's glory is eternal", according to Dorothy L. Sayers.[117] Dante also uses Justinian to criticize the factious politics of his 14th-century Italy, in contrast to the unified Italy of the Roman Empire.
+
+ Justinian is a major character in the 1938 novel Count Belisarius, by Robert Graves. He is depicted as a jealous and conniving Emperor obsessed with creating and maintaining his own historical legacy.
+
+ Justinian appears as a character in the 1939 time travel novel Lest Darkness Fall, by L. Sprague de Camp. The Glittering Horn: Secret Memoirs of the Court of Justinian was a novel written by Pierson Dixon in 1958 about the court of Justinian.
+
+ Justinian occasionally appears in the comic strip Prince Valiant, usually as a nemesis of the title character.
+
+ Procopius provides the primary source for the history of Justinian's reign. He became very bitter towards Justinian and his empress, Theodora.[118] The Syriac chronicle of John of Ephesus, which survives partially, was used as a source for later chronicles, contributing many additional details of value. Other sources include the writings of John Malalas, Agathias, John the Lydian, Menander Protector, the Paschal Chronicle, Evagrius Scholasticus, Pseudo-Zacharias Rhetor, Jordanes, the chronicles of Marcellinus Comes and Victor of Tunnuna. Justinian is widely regarded as a saint by Orthodox Christians, and is also commemorated by some Lutheran churches on 14 November.[119]
+
+
+
en/1724.html.txt ADDED
@@ -0,0 +1,259 @@
+
+
+ The German Empire or the Imperial State of Germany,[a][4][5][6][7] also referred to as Imperial Germany,[8] was the German nation state[9] that existed from the unification of Germany in 1871 until the abdication of Emperor Wilhelm II in 1918.
+
+ It was founded on 1 January 1871, when the south German states, except for Austria, joined the North German Confederation and the new constitution came into force, changing the name of the federal state to the German Empire and introducing the title of German Emperor for Wilhelm I, King of Prussia from the House of Hohenzollern.[10] Berlin remained its capital, and Otto von Bismarck, Minister-President of Prussia, became Chancellor, the head of government. As these events occurred, the Prussian-led North German Confederation and its southern German allies were still engaged in the Franco-Prussian War.
+
+ The German Empire consisted of 26 states, most of them ruled by royal families. They included four kingdoms, six grand duchies, five duchies (six before 1876), seven principalities, three free Hanseatic cities, and one imperial territory. Although Prussia was one of four kingdoms in the realm, it contained about two thirds of Germany's population and territory. Prussian dominance had also been established constitutionally.
+
+ After 1850, the states of Germany had rapidly become industrialized, with particular strengths in coal, iron (and later steel), chemicals, and railways. In 1871, Germany had a population of 41 million people; by 1913, this had increased to 68 million. A heavily rural collection of states in 1815, the now united Germany became predominantly urban.[11] During its 47 years of existence, the German Empire was an industrial, technological, and scientific giant, gaining more Nobel Prizes in science than any other country.[12] Between 1901 and 1918, the Germans won 4 Nobel Prizes in Medicine, 6 Prizes in Physics, 7 Prizes in Chemistry and 3 Prizes in Literature. By 1913, Germany was the largest economy in Continental Europe, surpassing the United Kingdom (excluding its Empire and Dominions), as well as the third-largest in the world, only behind the United States and the British Empire.[13]
+
+ Otto von Bismarck's tenure as the first and, to this day, longest-serving Chancellor was marked by relative liberalism from 1867 to 1878/79, but became more conservative afterwards. Broad reforms and the Kulturkampf marked his period in office. Late in Bismarck's chancellorship, and in spite of his personal opposition, Germany became involved in colonialism. Claiming much of the territory still unclaimed in the Scramble for Africa, it managed to build the third-largest colonial empire of the time, after the British and the French ones.[14] As a colonial state, it sometimes clashed with other European powers, especially the British Empire.
+
+ Germany became a great power, boasting a rapidly developing rail network, the world's strongest army,[15] and a fast-growing industrial base.[16] Starting very small in 1871, in a decade, the navy became second only to Britain's Royal Navy. After the removal of Otto von Bismarck by Wilhelm II in 1890, the Empire embarked on Weltpolitik – a bellicose new course that ultimately contributed to the outbreak of World War I. In addition, Bismarck's successors were incapable of maintaining their predecessor's complex, shifting, and overlapping alliances which had kept Germany from being diplomatically isolated. This period was marked by various factors influencing the Emperor's decisions, which were often perceived as contradictory or unpredictable by the public. In 1879, the German Empire consolidated the Dual Alliance with Austria-Hungary, followed by the Triple Alliance with Italy in 1882. It also retained strong diplomatic ties to the Ottoman Empire. When the great crisis of 1914 arrived, Italy left the alliance and the Ottoman Empire formally allied with Germany.
+
+ In the First World War, German plans to capture Paris quickly in the autumn of 1914 failed. The war on the Western Front became a stalemate. The Allied naval blockade caused severe shortages of food. However, Imperial Germany had success on the Eastern Front; it occupied a large amount of territory to its east following the Treaty of Brest-Litovsk. The German declaration of unrestricted submarine warfare in early 1917 contributed to bringing the United States into the war.
+
+ The high command under Paul von Hindenburg and Erich Ludendorff increasingly controlled the country, but by October 1918, after the failed offensive of that spring, the German armies were in retreat, its allies Austria-Hungary and the Ottoman Empire had collapsed, and Bulgaria had surrendered. The Empire collapsed in the November 1918 Revolution with the abdications of its monarchs. This left a post-war federal republic and a devastated and dissatisfied populace, faced with post-war reparation costs of nearly 270 billion dollars,[17] conditions that ultimately contributed to the rise of Adolf Hitler and Nazism.[18]
+
+ The German Confederation had been created by an act of the Congress of Vienna on 8 June 1815 as a result of the Napoleonic Wars, after being alluded to in Article 6 of the 1814 Treaty of Paris.[19]
+
+ The bourgeois revolutions of 1848, associated with the highly educated middle class, were crushed, leaving the field to the peasants, the artisans, and Otto von Bismarck's pragmatic Realpolitik.[20] Bismarck sought to extend Hohenzollern hegemony throughout the German states; to do so meant unification of the German states and the exclusion of Prussia's main German rival, Austria, from the subsequent German Empire. He envisioned a conservative, Prussian-dominated Germany. Three wars led to military successes and helped to persuade the German people to accept this: the Second Schleswig War against Denmark in 1864, the Austro-Prussian War in 1866, and the Franco-Prussian War in 1870–1871.
+
+ The German Confederation ended as a result of the Austro-Prussian War of 1866 between the constituent Confederation entities of the Austrian Empire and its allies on one side and Prussia and its allies on the other. The war resulted in the partial replacement of the Confederation in 1867 by a North German Confederation comprising the 22 states north of the Main River. The patriotic fervour generated by the Franco-Prussian War overwhelmed the remaining opposition to a unified Germany (aside from Austria) in the four states south of the Main, and during November 1870 they joined the North German Confederation by treaty.[21]
+
+ On 10 December 1870, the North German Confederation Reichstag renamed the Confederation the "German Empire" and gave the title of German Emperor to William I, the King of Prussia, as Bundespräsidium of the Confederation.[22] The new constitution (Constitution of the German Confederation) and the title Emperor came into effect on 1 January 1871. During the Siege of Paris on 18 January 1871, William agreed to be proclaimed Emperor in the Hall of Mirrors at the Palace of Versailles.[23]
+
+ The second German Constitution, adopted by the Reichstag on 14 April 1871 and proclaimed by the Emperor on 16 April,[23] was substantially based upon Bismarck's North German Constitution. The political system remained the same. The empire had a parliament called the Reichstag, which was elected by universal male suffrage. However, the original constituencies drawn in 1871 were never redrawn to reflect the growth of urban areas. As a result, by the time of the great expansion of German cities in the 1890s and 1900s, rural areas were grossly over-represented.
+
+ Legislation also required the consent of the Bundesrat, the federal council of deputies from the 27 states. Executive power was vested in the emperor, or Kaiser, who was assisted by a Chancellor responsible only to him. The emperor was given extensive powers by the constitution. He alone appointed and dismissed the chancellor (so in practice the emperor ruled the empire through the chancellor), was supreme commander-in-chief of the armed forces, and final arbiter of all foreign affairs, and could also disband the Reichstag to call for new elections. Officially, the chancellor was a one-man cabinet and was responsible for the conduct of all state affairs; in practice, the State Secretaries (bureaucratic top officials in charge of such fields as finance, war, foreign affairs, etc.) functioned much like ministers in other monarchies. The Reichstag had the power to pass, amend, or reject bills and to initiate legislation. However, as mentioned above, in practice the real power was vested in the emperor, who exercised it through his chancellor.
+
+ Although nominally a federal empire and league of equals, in practice, the empire was dominated by the largest and most powerful state, Prussia. Prussia stretched across the northern two-thirds of the new Reich and contained three-fifths of its population. The imperial crown was hereditary in the ruling house of Prussia, the House of Hohenzollern. With the exception of 1872–1873 and 1892–1894, the chancellor was always simultaneously the prime minister of Prussia. With 17 out of 58 votes in the Bundesrat, Berlin needed only a few votes from the smaller states to exercise effective control.
+
+ The other states retained their own governments, but had only limited aspects of sovereignty. For example, both postage stamps and currency were issued for the empire as a whole. Coins through one mark were also minted in the name of the empire, while higher-valued pieces were issued by the states. However, these larger gold and silver issues were virtually commemorative coins and had limited circulation.
+
+ While the states issued their own decorations and some had their own armies, the military forces of the smaller ones were put under Prussian control. Those of the larger states, such as the Kingdoms of Bavaria and Saxony, were coordinated along Prussian principles and would in wartime be controlled by the federal government.
+
+ The evolution of the German Empire is somewhat in line with parallel developments in Italy, which became a united nation-state a decade earlier. Some key elements of the German Empire's authoritarian political structure were also the basis for conservative modernization in Imperial Japan under Meiji and the preservation of an authoritarian political structure under the tsars in the Russian Empire.
+
+ One factor in the social anatomy of these governments was the retention of a very substantial share of political power by the landed elite, the Junkers, which resulted from the absence of a revolutionary breakthrough by the peasants in combination with the urban population.
+
+ Although authoritarian in many respects, the empire had some democratic features. Besides universal suffrage, it permitted the development of political parties. Bismarck's intention was to create a constitutional façade which would mask the continuation of authoritarian policies. In the process, he created a system with a serious flaw. There was a significant disparity between the Prussian and German electoral systems. Prussia used a highly restrictive three-class voting system in which the richest third of the population could choose 85% of the legislature, all but assuring a conservative majority. As mentioned above, the king and (with two exceptions) the prime minister of Prussia were also the emperor and chancellor of the empire – meaning that the same rulers had to seek majorities from legislatures elected from completely different franchises. Universal suffrage was significantly diluted by gross over-representation of rural areas from the 1890s onward. By the turn of the century, the urban-rural population balance was completely reversed from 1871; more than two-thirds of the empire's people lived in cities and towns.

Bismarck's domestic policies played an important role in forging the authoritarian political culture of the Kaiserreich. Less preoccupied with continental power politics following unification in 1871, Germany's semi-parliamentary government carried out a relatively smooth economic and political revolution from above that pushed the country along the way towards becoming the world's leading industrial power of the time.

Bismarck's "revolutionary conservatism" was a conservative state-building strategy designed to make ordinary Germans—not just the Junker elite—more loyal to throne and empire. According to Kees van Kersbergen and Barbara Vis, his strategy was:

granting social rights to enhance the integration of a hierarchical society, to forge a bond between workers and the state so as to strengthen the latter, to maintain traditional relations of authority between social and status groups, and to provide a countervailing power against the modernist forces of liberalism and socialism.[24]

Bismarck created the modern welfare state in Germany in the 1880s and enacted universal male suffrage in 1871.[25] He became a great hero to German conservatives, who erected many monuments to his memory and tried to emulate his policies.[26]

Bismarck's post-1871 foreign policy was conservative and sought to preserve the balance of power in Europe. British historian Eric Hobsbawm concludes that he "remained undisputed world champion at the game of multilateral diplomatic chess for almost twenty years after 1871, [devoting] himself exclusively, and successfully, to maintaining peace between the powers".[27] This was a departure from his adventurous foreign policy for Prussia, where he favored strength and expansion, punctuating this by saying "The great questions of the age are not settled by speeches and majority votes – this was the error of 1848–49 – but by iron and blood."[28]

Bismarck's chief concern was that France would plot revenge after its defeat in the Franco-Prussian War. As the French lacked the strength to defeat Germany by themselves, they sought an alliance with Russia, which would trap Germany between the two in a war (as would ultimately happen in 1914). Bismarck wanted to prevent this at all costs and maintain friendly relations with the Russians, and thereby formed an alliance with them and Austria-Hungary, the Dreikaiserbund (League of Three Emperors), in 1881. The alliance was further cemented by a separate non-aggression pact with Russia, the Reinsurance Treaty, signed in 1887.[29] During this period, individuals within the German military were advocating a preemptive strike against Russia, but Bismarck knew that such ideas were foolhardy. He once wrote that "the most brilliant victories would not avail against the Russian nation, because of its climate, its desert, and its frugality, and having but one frontier to defend", and because it would leave Germany with another bitter, resentful neighbour.

Meanwhile, the chancellor remained wary of any foreign policy developments that looked even remotely warlike. In 1886, he moved to stop an attempted sale of horses to France on the grounds that they might be used for cavalry, and also ordered an investigation into large Russian purchases of medicine from a German chemical works. Bismarck stubbornly refused to listen to Georg Herbert zu Münster (ambassador to France), who reported back that the French were not seeking a revanchist war and were in fact desperate for peace at all costs.

Bismarck and most of his contemporaries were conservative-minded and focused their foreign policy attention on Germany's neighbouring states. In 1914, 60% of German foreign investment was in Europe, as opposed to just 5% of British investment. Most of the money went to developing nations such as Russia that lacked the capital or technical knowledge to industrialize on their own. The construction of the Baghdad Railway, financed by German banks, was designed to eventually connect Germany with the Ottoman Empire and the Persian Gulf, but it also collided with British and Russian geopolitical interests. Conflict over the Baghdad Railway was resolved in June 1914.

Many consider Bismarck's foreign policy to have been a coherent system that was partly responsible for the preservation of Europe's stability.[30] It was also marked by the need to balance circumspect defensiveness against the desire to be free from the constraints of Germany's position as a major European power.[30] Bismarck's successors, however, did not pursue his foreign policy legacy. For instance, Kaiser Wilhelm II, who dismissed the chancellor in 1890, let the treaty with Russia lapse in favor of Germany's alliance with Austria, which finally led to stronger coalition-building between Russia and France.[29]

Bismarck secured a number of German colonial possessions during the 1880s in Africa and the Pacific, but he never considered an overseas colonial empire valuable, given the fierce resistance to German colonial rule from the natives. Thus, Germany's colonies remained badly undeveloped.[31] However, they excited the interest of the religious-minded, who supported an extensive network of missionaries.

Germans had dreamed of colonial imperialism since 1848.[32] Bismarck began the process, and by 1884 had acquired German New Guinea.[33] By the 1890s, German colonial expansion in Asia and the Pacific (Kiautschou and Tientsin in China, the Marianas, the Caroline Islands, Samoa) led to frictions with the UK, Russia, Japan, and the US. The largest colonial enterprises were in Africa,[34] where the Herero Wars in what is now Namibia in 1904–1907 resulted in the Herero and Namaqua genocide.[35]

By 1900, Germany had become the largest economy in continental Europe and the third-largest in the world, behind the United States and the British Empire. Germany's main economic rivals were Great Britain and the United States. Throughout its existence, the Empire experienced economic growth and modernization led by heavy industry. In 1871, it had a largely rural population of 41 million; by 1913, this had increased to a predominantly urban population of 68 million.

For 30 years, Germany struggled against Britain to be Europe's leading industrial power. Representative of Germany's industry was the steel giant Krupp, whose first factory was built in Essen. By 1902, the factory alone had become "a great city with its own streets, its own police force, fire department and traffic laws. There are 150 kilometres of rail, 60 different factory buildings, 8,500 machine tools, seven electrical stations, 140 kilometres of underground cable and 46 overhead."[36]

Under Bismarck, Germany was a world innovator in building the welfare state. German workers enjoyed health, accident and maternity benefits, canteens, changing rooms and a national pension scheme.[37]

Lacking a technological base at first, the Germans imported their engineering and hardware from Britain, but quickly learned the skills needed to operate and expand the railways. In many cities, the new railway shops were the centres of technological awareness and training, so that by 1850, Germany was self-sufficient in meeting the demands of railroad construction, and the railways were a major impetus for the growth of the new steel industry. However, German unification in 1870 stimulated consolidation, nationalisation into state-owned companies, and further rapid growth. Unlike the situation in France, the goal was support of industrialisation, and so heavy lines crisscrossed the Ruhr and other industrial districts, and provided good connections to the major ports of Hamburg and Bremen. By 1880, Germany had 9,400 locomotives pulling 43,000 passengers and 30,000 tons of freight, and forged ahead of France.[38] The total length of German railroad tracks expanded from 21,000 kilometres in 1871 to 63,000 kilometres by 1913, establishing the largest rail network in the world after the United States, and effectively surpassing the 32,000 kilometres of rail that connected Britain in the same year.[39]

Industrialisation progressed dynamically in Germany, and German manufacturers began to capture domestic markets from British imports, and also to compete with British industry abroad, particularly in the U.S. The German textile and metal industries had by 1870 surpassed those of Britain in organisation and technical efficiency and superseded British manufacturers in the domestic market. Germany became the dominant economic power on the continent and was the second largest exporting nation after Britain.

Technological progress during German industrialisation occurred in four waves: the railway wave (1877–1886), the dye wave (1887–1896), the chemical wave (1897–1902), and the wave of electrical engineering (1903–1918).[40] Since Germany industrialised later than Britain, it was able to model its factories after those of Britain, thus making more efficient use of its capital and avoiding outdated methods in its leap to the technological frontier. Germany invested more heavily than the British in research, especially in chemistry, motors and electricity. Germany's dominance in physics and chemistry was such that one-third of all Nobel Prizes went to German inventors and researchers.

The German cartel system (known as Konzerne), being significantly concentrated, was able to make more efficient use of capital. Germany was not weighed down with an expensive worldwide empire that needed defense. By annexing Alsace-Lorraine in 1871, Germany also absorbed parts of what had been France's industrial base.[41]

By 1900, the German chemical industry dominated the world market for synthetic dyes.[42] The three major firms BASF,[43] Bayer and Hoechst produced several hundred different dyes, along with five smaller firms. In 1913, these eight firms produced almost 90% of the world supply of dyestuffs and sold about 80% of their production abroad. The three major firms had also integrated upstream into the production of essential raw materials, and they began to expand into other areas of chemistry such as pharmaceuticals, photographic film, agricultural chemicals and electrochemicals. Top-level decision-making was in the hands of professional salaried managers, leading the business historian Alfred Chandler to call the German dye companies "the world's first truly managerial industrial enterprises".[44] There were many spinoffs from research, such as the pharmaceutical industry, which emerged from chemical research.[45]

By the start of World War I (1914–1918), German industry switched to war production. The heaviest demands were on coal and steel for artillery and shell production, and on chemicals for the synthesis of materials that were subject to import restrictions and for chemical weapons and war supplies.

The creation of the Empire under Prussian leadership was a victory for the concept of Kleindeutschland (Smaller Germany) over the Großdeutschland concept. This meant that Austria-Hungary, a multi-ethnic Empire with a considerable German-speaking population, would remain outside of the German nation state. Bismarck's policy was to pursue a solution diplomatically. The effective alliance between Germany and Austria played a major role in Germany's decision to enter World War I in 1914.

Bismarck announced there would be no more territorial additions to Germany in Europe, and his diplomacy after 1871 was focused on stabilizing the European system and preventing any wars. He succeeded, and only after his departure from office in 1890 did the diplomatic tensions start rising again.[46]

After achieving formal unification in 1871, Bismarck devoted much of his attention to the cause of national unity. He opposed Catholic civil rights and emancipation, especially the influence of the Vatican under Pope Pius IX, and working class radicalism, represented by the emerging Social Democratic Party.

Prussia in 1871 included 16,000,000 Protestants, both Reformed and Lutheran, and 8,000,000 Catholics. Most people were generally segregated into their own religious worlds, living in rural districts or city neighbourhoods that were overwhelmingly of the same religion, and sending their children to separate public schools where their religion was taught. There was little interaction or intermarriage. On the whole, the Protestants had a higher social status, and the Catholics were more likely to be peasant farmers or unskilled or semiskilled industrial workers. In 1870, the Catholics formed their own political party, the Centre Party, which generally supported unification and most of Bismarck's policies. However, Bismarck distrusted parliamentary democracy in general and opposition parties in particular, especially when the Centre Party showed signs of gaining support among dissident elements such as the Polish Catholics in Silesia. A powerful intellectual force of the time was anti-Catholicism, led by the liberal intellectuals who formed a vital part of Bismarck's coalition. They saw the Catholic Church as a powerful force of reaction and anti-modernity, especially after the proclamation of papal infallibility in 1870, and the tightening control of the Vatican over the local bishops.[47]

The Kulturkampf launched by Bismarck in 1871–1880 affected Prussia; although there were similar movements in Baden and Hesse, the rest of Germany was not affected. According to the new imperial constitution, the states were in charge of religious and educational affairs; they funded the Protestant and Catholic schools. In July 1871, Bismarck abolished the Catholic section of the Prussian Ministry of Ecclesiastical and Educational Affairs, depriving Catholics of their voice at the highest level. The system of strict government supervision of schools was applied only in Catholic areas; the Protestant schools were left alone.[48]

Much more serious were the May laws of 1873. One made the appointment of any priest dependent on his attendance at a German university, as opposed to the seminaries that the Catholics typically used. Furthermore, all candidates for the ministry had to pass an examination in German culture before a state board which weeded out intransigent Catholics. Another provision gave the government a veto power over most church activities. A second law abolished the jurisdiction of the Vatican over the Catholic Church in Prussia; its authority was transferred to a government body controlled by Protestants.[49]

Nearly all German bishops, clergy, and laymen rejected the legality of the new laws and were defiant in the face of heavier and heavier penalties and imprisonments imposed by Bismarck's government. By 1876, all the Prussian bishops were imprisoned or in exile, and a third of the Catholic parishes were without a priest. In the face of systematic defiance, the Bismarck government increased the penalties and its attacks, and was challenged in 1875 when a papal encyclical declared the whole ecclesiastical legislation of Prussia invalid and threatened to excommunicate any Catholic who obeyed it. There was no violence, but the Catholics mobilized their support, set up numerous civic organizations, raised money to pay fines, and rallied behind their church and the Centre Party. The "Old Catholic Church", which rejected the First Vatican Council, attracted only a few thousand members. Bismarck, a devout pietistic Protestant, realized his Kulturkampf was backfiring when secular and socialist elements used the opportunity to attack all religion. In the long run, the most significant result was the mobilization of the Catholic voters and their insistence on protecting their religious identity. In the elections of 1874, the Centre Party doubled its popular vote and became the second-largest party in the national parliament; it remained a powerful force for the next 60 years, so that after Bismarck it became difficult to form a government without its support.[50][51]

Bismarck built on a tradition of welfare programs in Prussia and Saxony that began as early as the 1840s. In the 1880s he introduced old-age pensions, accident insurance, medical care and unemployment insurance that formed the basis of the modern European welfare state. He came to realize that this sort of policy was very appealing, since it bound workers to the state, and also fit in very well with his authoritarian nature. The social security systems installed by Bismarck (health care in 1883, accident insurance in 1884, invalidity and old-age insurance in 1889) were at the time the largest in the world and, to a degree, still exist in Germany today.

Bismarck's paternalistic programs won the support of German industry because their goals were to win the working classes over to the Empire and to reduce the outflow of emigrants to America, where wages were higher but welfare did not exist.[37][52] Bismarck further won the support of both industry and skilled workers through his high tariff policies, which protected profits and wages from American competition, although they alienated the liberal intellectuals who wanted free trade.[53]

One of the effects of the unification policies was the gradually increasing tendency to eliminate the use of non-German languages in public life, schools and academic settings with the intent of pressuring the non-German population to abandon their national identity in what was called "Germanisation". These policies often had the reverse effect of stimulating resistance, usually in the form of home schooling and tighter unity in the minority groups, especially the Poles.[54]

The Germanisation policies were targeted particularly against the significant Polish minority of the empire, gained by Prussia in the partitions of Poland. Poles were treated as an ethnic minority even where they made up the majority, as in the Province of Posen, where a series of anti-Polish measures was enforced.[55] Even so, the numerous anti-Polish laws had no great effect: in the Province of Posen, the German-speaking population actually dropped from 42.8% in 1871 to 38.1% in 1905, despite all efforts.[56]

Antisemitism was endemic in Germany during the period. Before Napoleon's decrees ended the ghettos in Germany, it had been religiously motivated, but by the 19th century, it was a factor in German nationalism. The last legal barriers on Jews in Prussia were lifted by the 1860s, and within 20 years, Jews were over-represented in the white-collar professions and much of academia. In the popular mind, Jews became a symbol of capitalism and wealth. On the other hand, the constitution and legal system protected the rights of Jews as German citizens. Antisemitic parties were formed but soon collapsed.[57]

Bismarck's efforts also initiated the levelling of the enormous differences between the German states, which had been independent in their evolution for centuries, especially in legislation. The completely different legal histories and judicial systems posed enormous complications, especially for national trade. While a common trade code had already been introduced by the Confederation in 1861 (which was adapted for the Empire and, with great modifications, is still in effect today), there was little similarity in laws otherwise.

In 1871, a common Criminal Code (Reichsstrafgesetzbuch) was introduced; in 1877, common court procedures were established through the court system act (Gerichtsverfassungsgesetz) and the codes of civil procedure (Zivilprozessordnung) and criminal procedure (Strafprozessordnung). In 1873, the constitution was amended to allow the Empire to replace the various and greatly differing civil codes of the states (if they existed at all; for example, parts of Germany formerly occupied by Napoleon's France had adopted the French Civil Code, while in Prussia the Allgemeines Preußisches Landrecht of 1794 was still in effect). In 1881, a first commission was established to produce a common civil code for all of the Empire, an enormous effort that would produce the Bürgerliches Gesetzbuch (BGB), possibly one of the most impressive legal works in the world; it was eventually put into effect on 1 January 1900. All of these codifications are, albeit with many amendments, still in effect today.

Different legal systems in Germany prior to 1900

Fields of law in the German Empire

On 9 March 1888, Wilhelm I died shortly before his 91st birthday, leaving his son Frederick III as the new emperor. Frederick was a liberal and an admirer of the British constitution,[58] and his links to Britain were strengthened further by his marriage to Princess Victoria, eldest child of Queen Victoria. With his ascent to the throne, many hoped that Frederick's reign would lead to a liberalisation of the Reich and an increase of parliament's influence on the political process. The dismissal of Robert von Puttkamer, the highly conservative Prussian interior minister, on 8 June was a sign of the expected direction and a blow to Bismarck's administration.

By the time of his accession, however, Frederick had developed incurable laryngeal cancer, which had been diagnosed in 1887. He died on the 99th day of his rule, on 15 June 1888. His son Wilhelm II became emperor.

Wilhelm II wanted to reassert his ruling prerogatives at a time when other monarchs in Europe were being transformed into constitutional figureheads. This decision led the ambitious Kaiser into conflict with Bismarck. The old chancellor had hoped to guide Wilhelm as he had guided his grandfather, but the emperor wanted to be the master in his own house and had many sycophants telling him that Frederick the Great would not have been great with a Bismarck at his side.[59] A key difference between Wilhelm II and Bismarck was their approaches to handling political crises, especially in 1889, when German coal miners went on strike in Upper Silesia. Bismarck demanded that the German Army be sent in to crush the strike, but Wilhelm II rejected this authoritarian measure, responding "I do not wish to stain my reign with the blood of my subjects."[60] Instead of condoning repression, Wilhelm had the government negotiate with a delegation from the coal miners, which brought the strike to an end without violence.[59] The fractious relationship ended in March 1890, after Wilhelm II and Bismarck quarrelled, and the chancellor resigned days later.[59] Bismarck's last few years had seen power slip from his hands as he grew older, more irritable, more authoritarian, and less focused.

With Bismarck's departure, Wilhelm II became the dominant ruler of Germany. Unlike his grandfather, Wilhelm I, who had been largely content to leave government affairs to the chancellor, Wilhelm II wanted to be fully informed and actively involved in running Germany, not an ornamental figurehead, although most Germans found his claims of divine right to rule amusing.[61] Wilhelm allowed the politician Walther Rathenau to tutor him in European economics and in industrial and financial realities.[61]

As Hull (2004) notes, Bismarckian foreign policy "was too sedate for the reckless Kaiser".[62] Wilhelm became internationally notorious for his aggressive stance on foreign policy and his strategic blunders (such as the Tangier Crisis), which pushed the German Empire into growing political isolation and eventually helped to cause World War I.

Under Wilhelm II, Germany no longer had long-ruling strong chancellors like Bismarck. The new chancellors had difficulty performing their roles, especially the additional role as Prime Minister of Prussia assigned to them by the German Constitution. The reforms of Chancellor Leo von Caprivi, which liberalized trade and so reduced unemployment, were supported by the Kaiser and most Germans, except for Prussian landowners, who feared loss of land and power and launched several campaigns against the reforms.[63]

While Prussian aristocrats challenged the demands of a united German state, in the 1890s several organizations were set up to challenge the authoritarian conservative Prussian militarism which was being imposed on the country. Educators opposed to the German state-run schools, which emphasized military education, set up their own independent liberal schools, which encouraged individuality and freedom.[64] However, nearly all the schools in Imperial Germany had a very high standard and kept abreast of modern developments in knowledge.[65]

Artists began experimental art in opposition to Kaiser Wilhelm's support for traditional art, to which Wilhelm responded "art which transgresses the laws and limits laid down by me can no longer be called art".[66] It was largely thanks to Wilhelm's influence that most printed material in Germany used blackletter instead of the Roman type used in the rest of Western Europe. At the same time, a new generation of cultural creators emerged.[67]

From the 1890s onwards, the most effective opposition to the monarchy came from the newly formed Social Democratic Party of Germany (SPD), whose radicals advocated Marxism. The threat of the SPD to the German monarchy and industrialists caused the state both to crack down on the party's supporters and to implement its own programme of social reform to soothe discontent. Germany's large industries provided significant social welfare programmes and good care to their employees, as long as they were not identified as socialists or trade-union members. The larger industrial firms provided pensions, sickness benefits and even housing to their employees.[64]

Having learned from the failure of Bismarck's Kulturkampf, Wilhelm II maintained good relations with the Roman Catholic Church and concentrated on opposing socialism.[68] This policy failed when the Social Democrats won a third of the votes in the 1912 elections to the Reichstag, and became the largest political party in Germany. The government remained in the hands of a succession of conservative coalitions supported by right-wing liberals or Catholic clerics and heavily dependent on the Kaiser's favour. The rising militarism under Wilhelm II caused many Germans to emigrate to the U.S. and the British colonies to escape mandatory military service.

During World War I, the Kaiser increasingly devolved his powers to the leaders of the German High Command, particularly the future President of Germany, Field Marshal Paul von Hindenburg, and Generalquartiermeister Erich Ludendorff. Hindenburg took over the role of commander-in-chief from the Kaiser, while Ludendorff became de facto general chief of staff. By 1916, Germany was effectively a military dictatorship run by Hindenburg and Ludendorff, with the Kaiser reduced to a mere figurehead.[69]

Wilhelm II wanted Germany to have her "place in the sun", like Britain, which he constantly wished to emulate or rival.[70] With German traders and merchants already active worldwide, he encouraged colonial efforts in Africa and the Pacific ("new imperialism"), causing the German Empire to vie with other European powers for remaining "unclaimed" territories. With the encouragement or at least the acquiescence of Britain, which at this stage saw Germany as a counterweight to her old rival France, Germany acquired German Southwest Africa (modern Namibia), German Kamerun (modern Cameroon), Togoland (modern Togo) and German East Africa (modern Rwanda, Burundi, and the mainland part of current Tanzania). Islands were gained in the Pacific through purchase and treaties, as well as a 99-year lease for the territory of Kiautschou in northeast China. But of these German colonies, only Togoland and German Samoa (after 1908) became self-sufficient and profitable; all the others required subsidies from the Berlin treasury for building infrastructure, school systems, hospitals and other institutions.

Bismarck had originally dismissed the agitation for colonies with contempt; he favoured a Eurocentric foreign policy, as the treaty arrangements made during his tenure in office show. As a latecomer to colonization, Germany repeatedly came into conflict with the established colonial powers and also with the United States, which opposed German attempts at colonial expansion in both the Caribbean and the Pacific. Native insurrections in German territories received prominent coverage in other countries, especially in Britain; the established powers had dealt with such uprisings decades earlier, often brutally, and had secured firm control of their colonies by then. The Boxer Rising in China, which the Chinese government eventually sponsored, began in Shandong province, in part because Germany, as colonizer at Kiautschou, was an untested power and had only been active there for two years. Eight western nations, including the United States, mounted a joint relief force to rescue westerners caught up in the rebellion. During the departure ceremonies for the German contingent, Wilhelm II urged them to behave like the Hun invaders of continental Europe – an unfortunate remark that would later be resurrected by British propagandists to paint Germans as barbarians during World War I and World War II. On two occasions, a French-German conflict over the fate of Morocco seemed inevitable.

After Germany acquired Southwest Africa, German settlers were encouraged to cultivate land held by the Herero and Nama. Herero and Nama tribal lands were used for a variety of exploitative purposes (much as the British had done earlier in Rhodesia), including farming, ranching, and mining for minerals and diamonds. In 1904, the Herero and the Nama revolted against the colonists in Southwest Africa, killing farm families, their laborers and servants. In response to the attacks, troops were dispatched to quell the uprising, which then resulted in the Herero and Namaqua Genocide. In total, some 65,000 Herero (80% of the total Herero population) and 10,000 Nama (50% of the total Nama population) perished. The commander of the punitive expedition, General Lothar von Trotha, was eventually relieved and reprimanded for exceeding his orders and for the cruelties he inflicted. These events were sometimes referred to as "the first genocide of the 20th century" and were officially condemned by the United Nations in 1985. In 2004, a formal apology by a government minister of the Federal Republic of Germany followed.

Bismarck and Wilhelm II after him sought closer economic ties with the Ottoman Empire. Under Wilhelm II, with the financial backing of the Deutsche Bank, the Baghdad Railway was begun in 1900, although by 1914 it was still 500 km (310 mi) short of its destination in Baghdad.[71] In an interview with Wilhelm in 1899, Cecil Rhodes had tried "to convince the Kaiser that the future of the German empire abroad lay in the Middle East" and not in Africa; with a grand Middle-Eastern empire, Germany could afford to allow Britain the unhindered completion of the Cape-to-Cairo railway that Rhodes favoured.[72] Britain initially supported the Baghdad Railway, but by 1911 British statesmen had come to fear it might be extended to Basra on the Persian Gulf, threatening Britain's naval supremacy in the Indian Ocean. Accordingly, they asked to have construction halted, to which Germany and the Ottoman Empire acquiesced.

Wilhelm II and his advisers committed a fatal diplomatic error when they allowed the "Reinsurance Treaty" that Bismarck had negotiated with Tsarist Russia to lapse. Germany was left with no firm ally but Austria-Hungary, and its support for Austria-Hungary's annexation of Bosnia and Herzegovina in 1908 further soured relations with Russia.[73] Wilhelm missed the opportunity to secure an alliance with Britain in the 1890s when it was involved in colonial rivalries with France, and he alienated British statesmen further by openly supporting the Boers in the South African War and by building a navy to rival Britain's. By 1911, Wilhelm had completely picked apart the careful power balance established by Bismarck, and Britain had turned to France in the Entente Cordiale. Germany's only other ally besides Austria was the Kingdom of Italy, but it remained an ally only pro forma. When war came, Italy saw more benefit in an alliance with Britain, France, and Russia, which, in the secret Treaty of London in 1915, promised it the frontier districts of Austria where Italians formed the majority of the population, as well as colonial concessions. Germany did acquire a second ally in 1914 when the Ottoman Empire entered the war on its side, but in the long run, supporting the Ottoman war effort only drained away German resources from the main fronts.

Following the assassination of the Austro-Hungarian Archduke Franz Ferdinand by a Bosnian Serb, the Kaiser offered Emperor Franz Joseph full support for Austro-Hungarian plans to invade the Kingdom of Serbia, which Austria-Hungary blamed for the assassination. This unconditional support for Austria-Hungary was called a "blank cheque" by historians, including the German historian Fritz Fischer. Subsequent interpretation – for example at the Versailles Peace Conference – was that this "blank cheque" licensed Austro-Hungarian aggression regardless of the diplomatic consequences, and that Germany therefore bore responsibility for starting the war, or at least for provoking a wider conflict.

Germany began the war by targeting its chief rival, France. Germany saw France as its principal danger on the European continent, as it could mobilize much faster than Russia and bordered Germany's industrial core in the Rhineland. Unlike Britain and Russia, the French entered the war mainly for revenge against Germany, in particular for France's loss of Alsace-Lorraine to Germany in 1871. The German high command knew that France would muster its forces to go into Alsace-Lorraine. Aside from the very unofficial Septemberprogramm, the Germans never stated a clear list of goals that they wanted out of the war.[74]

Germany did not want to risk lengthy battles along the Franco-German border and instead adopted the Schlieffen Plan, a military strategy designed to cripple France by invading Belgium and Luxembourg, sweeping down to encircle and crush both Paris and the French forces along the Franco-German border in a quick victory. After defeating France, Germany would turn to attack Russia. The plan required violating the official neutrality of Belgium and Luxembourg, which Britain had guaranteed by treaty. However, the Germans had calculated that Britain would enter the war regardless of whether they had formal justification to do so. At first the attack was successful: the German Army swept down from Belgium and Luxembourg and advanced on Paris as far as the nearby River Marne. However, the evolution of weapons over the previous century had come to heavily favor defense over offense, especially thanks to the machine gun, so that proportionally more offensive force was needed to overcome a defensive position. This forced the attacking German lines to contract in order to keep up the offensive timetable, while the French lines correspondingly extended. In addition, some German units originally slotted for the German far right were transferred to the Eastern Front in reaction to Russia mobilizing far faster than anticipated. The combined effect was that the German right flank swept down in front of Paris instead of behind it, exposing it to the extending French lines and to attack from the strategic French reserves stationed in Paris. Attacking the exposed German right flank, the French and British armies put up a strong resistance in defense of Paris at the First Battle of the Marne, forcing the German Army to retreat to defensive positions along the river Aisne. The subsequent Race to the Sea resulted in a long-held stalemate between the German Army and the Allies in dug-in trench warfare positions from Alsace to Flanders.

German attempts to break through failed at the two battles of Ypres (1st/2nd) with huge casualties. A series of Allied offensives in 1915 against German positions in Artois and Champagne resulted in huge Allied casualties and little territorial change. German Chief of Staff Erich von Falkenhayn decided to exploit the defensive advantages that had shown themselves in the 1915 Allied offensives by attempting to goad France into attacking strong defensive positions near the ancient city of Verdun. Verdun had been one of the last cities to hold out against the German Army in 1870, and Falkenhayn predicted that, as a matter of national pride, the French would do anything to ensure that it was not taken. He expected that he could take strong defensive positions in the hills overlooking Verdun on the east bank of the River Meuse to threaten the city, and that the French would launch desperate attacks against these positions. He predicted that French losses would be greater than those of the Germans and that continued French commitment of troops to Verdun would "bleed the French Army white". In 1916, the Battle of Verdun began, with the French positions under constant shelling and poison gas attack and taking large casualties under the assault of overwhelmingly large German forces. However, Falkenhayn's prediction of a greater ratio of French dead proved wrong, as both sides took heavy casualties. Falkenhayn was replaced by Paul von Hindenburg and Erich Ludendorff, and with no success in sight, the German Army pulled out of Verdun in December 1916 and the battle ended.

While the Western Front was a stalemate for the German Army, the Eastern Front eventually proved to be a great success. Despite initial setbacks due to the unexpectedly rapid mobilisation of the Russian army, which resulted in a Russian invasion of East Prussia and Austrian Galicia, the badly organised and supplied Russian Army faltered and the German and Austro-Hungarian armies thereafter steadily advanced eastward. The Germans benefited from political instability in Russia and its population's desire to end the war. In 1917 the German government allowed Russia's communist Bolshevik leader Vladimir Lenin to travel through Germany from Switzerland into Russia. Germany believed that if Lenin could create further political unrest, Russia would no longer be able to continue its war with Germany, allowing the German Army to focus on the Western Front.

In March 1917, the Tsar was ousted from the Russian throne, and in November a Bolshevik government came to power under the leadership of Lenin. Facing political opposition at home, Lenin decided to end Russia's campaign against Germany, Austria-Hungary, the Ottoman Empire and Bulgaria in order to redirect Bolshevik energy to eliminating internal dissent. In March 1918, by the Treaty of Brest-Litovsk, the Bolshevik government gave Germany and the Ottoman Empire enormous territorial and economic concessions in exchange for an end to the war on the Eastern Front. All of the modern-day Baltic states (Estonia, Latvia and Lithuania) were given over to the German occupation authority Ober Ost, along with Belarus and Ukraine. Thus Germany had at last achieved its long-desired dominance of "Mitteleuropa" (Central Europe) and could now focus fully on defeating the Allies on the Western Front. In practice, however, the forces that were needed to garrison and secure the new territories were a drain on the German war effort.

Germany quickly lost almost all its colonies. However, in German East Africa, an impressive guerrilla campaign was waged by the colonial army leader there, General Paul Emil von Lettow-Vorbeck. Using Germans and native Askaris, Lettow-Vorbeck launched multiple guerrilla raids against British forces in Kenya and Rhodesia. He also invaded Portuguese Mozambique to gain supplies for his forces and to pick up more Askari recruits. His force was still active at war's end.[75]

The defeat of Russia in 1917 enabled Germany to transfer hundreds of thousands of troops from the Eastern to the Western Front, giving it a numerical advantage over the Allies. By retraining the soldiers in new stormtrooper tactics, the Germans expected to unfreeze the battlefield and win a decisive victory before the army of the United States, which had now entered the war on the side of the Allies, arrived in strength.[76] However, the repeated German offensives in the spring of 1918 all failed, as the Allies fell back and regrouped and the Germans lacked the reserves needed to consolidate their gains. Meanwhile, soldiers had become radicalised by the Russian Revolution and were less willing to continue fighting. The war effort sparked civil unrest in Germany, while the troops, who had been constantly in the field without relief, grew exhausted and lost all hope of victory. In the summer of 1918, with the British Army at its peak strength of as many as 4.5 million men on the Western Front and 4,000 tanks for the Hundred Days Offensive, with the Americans arriving at the rate of 10,000 a day, with Germany's allies facing collapse and the German Empire's manpower exhausted, it was only a matter of time before multiple Allied offensives destroyed the German army.[77]

The concept of "total war" meant that supplies had to be redirected towards the armed forces and, with German commerce being stopped by the Allied naval blockade, German civilians were forced to live in increasingly meagre conditions. First food prices were controlled, then rationing was introduced. During the war about 750,000 German civilians died from malnutrition.[78]

Towards the end of the war, conditions deteriorated rapidly on the home front, with severe food shortages reported in all urban areas. The causes included the transfer of many farmers and food workers into the military, combined with the overburdened railway system, shortages of coal, and the British blockade. The winter of 1916–1917 was known as the "turnip winter", because the people had to survive on a vegetable more commonly reserved for livestock as a substitute for potatoes and meat, which were increasingly scarce. Thousands of soup kitchens were opened to feed the hungry, who grumbled that the farmers were keeping the food for themselves. Even the army had to cut the soldiers' rations.[79] The morale of both civilians and soldiers continued to sink.

Many Germans wanted an end to the war, and increasing numbers began to associate with the political left, such as the Social Democratic Party and the more radical Independent Social Democratic Party, which demanded an end to the war. The entry of the U.S. into the war in April 1917 tipped the long-run balance of power even more in favour of the Allies.

At the end of October 1918, the German Revolution of 1918–1919 began in Kiel, in northern Germany. Units of the German Navy refused to set sail for a last, large-scale operation in a war they regarded as good as lost, initiating the uprising. On 3 November, the revolt spread to other cities and states of the country, in many of which workers' and soldiers' councils were established. Meanwhile, Hindenburg and the senior generals lost confidence in the Kaiser and his government.

Bulgaria signed the Armistice of Salonica on 29 September 1918. The Ottoman Empire signed the Armistice of Mudros on 30 October 1918. Between 24 October and 3 November 1918, Italy defeated Austria-Hungary in the Battle of Vittorio Veneto, which forced Austria-Hungary to sign the Armistice of Villa Giusti on 3 November 1918. Thus, in November 1918, with internal revolution underway, the Allies advancing toward Germany on the Western Front, Austria-Hungary falling apart from multiple ethnic tensions, its other allies out of the war, and pressure from the German high command, the Kaiser and all German ruling kings, dukes, and princes abdicated, and the German nobility was abolished. On 9 November, the Social Democrat Philipp Scheidemann proclaimed a republic. The new government, led by the German Social Democrats, called for and received an armistice on 11 November. The Empire was succeeded by the Weimar Republic.[80] Those opposed, including disaffected veterans, joined a diverse set of paramilitary and underground political groups such as the Freikorps, the Organisation Consul, and the Communists.

The Empire's legislation rested on two organs: the Bundesrat and the Reichstag (parliament). There was universal male suffrage for the Reichstag; however, legislation had to pass both houses. The Bundesrat contained representatives of the states.

Before unification, German territory (excluding Austria and Switzerland) was made up of 27 constituent states. These states consisted of kingdoms, grand duchies, duchies, principalities, free Hanseatic cities and one imperial territory. The free cities had a republican form of government on the state level, even though the Empire at large, like most of the states, was constituted as a monarchy. Prussia was the largest of the constituent states, covering two-thirds of the empire's territory.

Several of these states had gained sovereignty following the dissolution of the Holy Roman Empire, and had been de facto sovereign from the mid-1600s onward. Others were created as sovereign states after the Congress of Vienna in 1815. Territories were not necessarily contiguous—many existed in several parts, as a result of historical acquisitions, or, in several cases, divisions of the ruling families. Some of the initially existing states, in particular Hanover, were abolished and annexed by Prussia as a result of the war of 1866.

Each component of the German Empire sent representatives to the Federal Council (Bundesrat) and, via single-member districts, to the Imperial Diet (Reichstag). Relations between the Imperial centre and the Empire's components were somewhat fluid and were developed on an ongoing basis. The extent to which the German Emperor could, for example, intervene on occasions of disputed or unclear succession was much debated, as in the inheritance crisis of Lippe-Detmold.

Unusually for a federation and/or a nation-state, the German states maintained limited autonomy over foreign affairs and continued to exchange ambassadors and other diplomats (both with each other and directly with foreign nations) for the Empire's entire existence. Shortly after the Empire was proclaimed, Bismarck implemented a convention in which his sovereign would only send and receive envoys to and from other German states as the King of Prussia, while envoys from Berlin sent to foreign nations always received credentials from the monarch in his capacity as German Emperor. In this way, the Prussian foreign ministry was largely tasked with managing relations with the other German states while the Imperial foreign ministry managed Germany's external relations.

Administrative map

Population density (c. 1885)

Election constituencies for the Reichstag

Detailed map in 1893 with cities and larger towns

About 92% of the population spoke German as their first language. The only minority language with a significant number of speakers (5.4%) was Polish (a figure that rises to over 6% when including the related Kashubian and Masurian languages).

The non-German Germanic languages (0.5%), like Danish, Dutch and Frisian, were located in the north and northwest of the empire, near the borders with Denmark, the Netherlands, Belgium, and Luxembourg. Low German was spoken throughout northern Germany and, though linguistically as distinct from High German (Hochdeutsch) as from Dutch and English, is considered "German", hence also its name. Danish and Frisian were spoken predominantly in the north of the Prussian province of Schleswig-Holstein and Dutch in the western border areas of Prussia (Hanover, Westphalia, and the Rhine Province).

Polish and other Slavic languages (6.28%) were spoken chiefly in the east.[b]

A few (0.5%) spoke French, the vast majority of these in the Reichsland Elsass-Lothringen where francophones formed 11.6% of the total population.

Danish
Dutch
Frisian
Polish
Czech (and Moravian)
Masurian
Kashubian
Sorbian
French
Walloon
Italian
Lithuanian
non-German

In general, the religious demography inherited from the early modern period had hardly changed. There were still almost entirely Catholic areas (Lower and Upper Bavaria, northern Westphalia, Upper Silesia, etc.) and almost entirely Protestant areas (Schleswig-Holstein, Pomerania, Saxony, etc.). Confessional prejudices, especially towards mixed marriages, were still common. Bit by bit, through internal migration, religious blending became more and more common. In the eastern territories, confession was almost uniquely perceived to be connected to one's ethnicity, and the equation "Protestant = German, Catholic = Polish" was held to be valid. In areas affected by immigration, such as the Ruhr area and Westphalia, as well as in some large cities, the religious landscape changed substantially. This was especially true in largely Catholic areas of Westphalia, which changed through Protestant immigration from the eastern provinces.

Politically, the confessional division of Germany had considerable consequences. In Catholic areas, the Centre Party had a large electorate. On the other hand, the Social Democrats and Free Trade Unions usually received hardly any votes in the Catholic areas of the Ruhr. This began to change with the growing secularization of the last decades of the German Empire.

In Germany's overseas colonial empire, millions of subjects practiced various Indigenous religions alongside the Christianity of missionaries and colonists. Over two million Muslims also lived under German colonial rule, primarily in German East Africa.[82]

Distribution of Protestants and Catholics in Imperial Germany

Distribution of Protestants, Catholics and Jews in Imperial Germany (Meyers Konversationslexikon)

Distribution of Jews in Imperial Germany

Greater Imperial coat of arms of Germany

Middle Imperial coat of arms of Germany

Lesser Imperial coat of arms of Germany

The defeat and aftermath of the First World War and the penalties imposed by the Treaty of Versailles shaped the positive memory of the Empire, especially among Germans who distrusted and despised the Weimar Republic. Conservatives, liberals, socialists, nationalists, Catholics and Protestants all had their own interpretations, which led to a fractious political and social climate in Germany in the aftermath of the empire's collapse.

Under Bismarck, a united German state had finally been achieved, but it remained a Prussian-dominated state and did not include German Austria as Pan-German nationalists had desired. The influence of Prussian militarism, the Empire's colonial efforts and its vigorous, competitive industrial prowess all gained it the dislike and envy of other nations. The German Empire enacted a number of progressive reforms, such as Europe's first social welfare system and freedom of the press. There was also a modern system for electing the federal parliament, the Reichstag, in which every adult man had one vote. This enabled the Socialists and the Catholic Centre Party to play considerable roles in the empire's political life despite the continued hostility of the Prussian aristocrats.

The era of the German Empire is well remembered in Germany as one of great cultural and intellectual vigour. Thomas Mann published his novel Buddenbrooks in 1901. Theodor Mommsen received the Nobel Prize in Literature a year later for his Roman history. Painters' groups like Der Blaue Reiter and Die Brücke made significant contributions to modern art. The AEG turbine factory in Berlin by Peter Behrens from 1909 was a milestone in classic modern architecture and an outstanding example of emerging functionalism. The social, economic, and scientific successes of this Gründerzeit, or founding epoch, have sometimes led the Wilhelmine era to be regarded as a golden age.

In the field of economics, the "Kaiserzeit" laid the foundation of Germany's status as one of the world's leading economic powers. The iron and coal industries of the Ruhr, the Saar and Upper Silesia especially contributed to that process. The first motorcar was built by Karl Benz in 1886. The enormous growth of industrial production and industrial potential also led to a rapid urbanisation of Germany, which turned the Germans into a nation of city dwellers. More than 5 million people left Germany for the United States during the 19th century.[83]

Many historians have emphasized the central importance of a German Sonderweg or "special path" (or "exceptionalism") as the root of Nazism and the German catastrophe in the 20th century. According to Kocka (1988), the process of nation-building from above had very grievous long-term implications. In terms of parliamentary democracy, Parliament was kept weak, the parties were fragmented, and there was a high level of mutual distrust. The Nazis built on the illiberal, anti-pluralist elements of Weimar's political culture. The Junker elites (the large landowners in the east) and senior civil servants used their great power and influence well into the twentieth century to frustrate any movement toward democracy. They played an especially negative role in the crisis of 1930–1933. Bismarck's emphasis on military force amplified the voice of the officer corps, which combined advanced modernisation of military technology with reactionary politics. The rising upper-middle-class elites in the business, financial and professional worlds tended to accept the values of the old traditional elites. The German Empire was for Hans-Ulrich Wehler a strange mixture of highly successful capitalist industrialisation and socio-economic modernisation on the one hand, and of surviving pre-industrial institutions, power relations and traditional cultures on the other. Wehler argues that it produced a high degree of internal tension, which led on the one hand to the suppression of socialists, Catholics and reformers, and on the other hand to a highly aggressive foreign policy. For these reasons, Fritz Fischer and his students emphasised Germany's primary guilt for causing the First World War.[84]

Hans-Ulrich Wehler, a leader of the Bielefeld School of social history, places the origins of Germany's path to disaster in the 1860s–1870s, when economic modernisation took place but political modernisation did not, and the old Prussian rural elite remained in firm control of the army, diplomacy and the civil service. Traditional, aristocratic, premodern society battled an emerging capitalist, bourgeois, modernising society. Recognising the importance of modernising forces in industry and the economy and in the cultural realm, Wehler argues that reactionary traditionalism dominated the political hierarchy of power in Germany, as well as social mentalities and class relations (Klassenhabitus). The catastrophic German politics between 1914 and 1945 are interpreted in terms of a delayed modernisation of its political structures. At the core of Wehler's interpretation is his treatment of "the middle class" and "revolution", each of which was instrumental in shaping the 20th century. Wehler's examination of Nazi rule is shaped by his concept of "charismatic domination", which focuses heavily on Hitler.[85]
254
+
255
+ The historiographical concept of a German Sonderweg has had a turbulent history. 19th century scholars who emphasised a separate German path to modernity saw it as a positive factor that differentiated Germany from the "western path" typified by Great Britain. They stressed the strong bureaucratic state, reforms initiated by Bismarck and other strong leaders, the Prussian service ethos, the high culture of philosophy and music, and Germany's pioneering of a social welfare state. In the 1950s, historians in West Germany argued that the Sonderweg led Germany to the disaster of 1933–1945. The special circumstances of German historical structures and experiences, were interpreted as preconditions that, while not directly causing National Socialism, did hamper the development of a liberal democracy and facilitate the rise of fascism. The Sonderweg paradigm has provided the impetus for at least three strands of research in German historiography: the "long 19th century", the history of the bourgeoisie, and comparisons with the West. After 1990, increased attention to cultural dimensions and to comparative and relational history moved German historiography to different topics, with much less attention paid to the Sonderweg. While some historians have abandoned the Sonderweg thesis, they have not provided a generally accepted alternative interpretation.[86]
256
+
257
+ The German Empire had two armed forces: the Imperial Army (Deutsches Heer) and the Imperial Navy (Kaiserliche Marine).
258
+
259
+ In addition to present-day Germany, large parts of what comprised the German Empire now belong to several other modern European countries.
en/1725.html.txt ADDED
The diff for this file is too large to render. See raw diff
 
en/1726.html.txt ADDED
The diff for this file is too large to render. See raw diff
 
en/1727.html.txt ADDED
@@ -0,0 +1,247 @@
1
+
2
+
3
+ Japan (Japanese: 日本, Nippon [ɲippoꜜɴ] (listen) or Nihon [ɲihoꜜɴ] (listen)) is an island country of East Asia in the northwest Pacific Ocean. It borders the Sea of Japan to the west and extends from the Sea of Okhotsk in the north to the East China Sea and Taiwan in the south. Japan is part of the Pacific Ring of Fire and comprises an archipelago of 6,852 islands covering 377,975 square kilometers (145,937 sq mi); its five main islands, from north to south, are Hokkaido, Honshu, Shikoku, Kyushu, and Okinawa. Tokyo is the country's capital and largest city; other major cities include Osaka and Nagoya.
4
+
5
+ Japan is the 11th most populous country in the world, as well as one of the most densely populated and urbanized. About three-fourths of the country's terrain is mountainous, concentrating its population of 126.2 million on narrow coastal plains. Japan is administratively divided into 47 prefectures and traditionally divided into eight regions. The Greater Tokyo Area is the most populous metropolitan area in the world, with more than 37.4 million residents.
6
+
7
+ The islands of Japan were inhabited as early as the Upper Paleolithic period, though the first mentions of the archipelago appear in Chinese chronicles from the 1st century AD. Between the 4th and 9th centuries, the kingdoms of Japan became unified under an emperor and imperial court based in Heian-kyō. Starting in the 12th century, however, political power was held by a series of military dictators (shōgun), feudal lords (daimyō), and a class of warrior nobility (samurai). After a century-long period of civil war, the country was reunified in 1603 under the Tokugawa shogunate, which enacted a foreign policy of isolation. In 1854, a United States fleet forced Japan to open trade to the West, leading to the end of the shogunate and the restoration of imperial power in 1868. In the Meiji era, the Empire of Japan adopted a Western-style constitution and pursued industrialization and modernization. Japan invaded China in 1937; in 1941, it entered World War II as an Axis power. After suffering defeat in the Pacific War and two atomic bombings, Japan surrendered in 1945 and came under an Allied occupation, during which it adopted a post-war constitution. It has since maintained a unitary parliamentary constitutional monarchy with an elected legislature known as the National Diet.
8
+
9
+ Japan is a great power and a member of numerous international organizations, including the United Nations (since 1956), the OECD, and the G7. Although it has renounced its right to declare war, the country maintains a modern military ranked as the world's fourth most powerful. Following World War II, Japan experienced record economic growth, becoming the second-largest economy in the world by 1990. As of 2019, the country's economy is the third-largest by nominal GDP and fourth-largest by purchasing power parity. Japan is a global leader in the automotive and electronics industries and has made significant contributions to science and technology. Ranked "very high" on the Human Development Index, Japan has the world's second-highest life expectancy, though it is currently experiencing a decline in population. Culturally, Japan is renowned for its art, cuisine, music, and popular culture, including its prominent animation and video game industries.
10
+
11
+ The name for Japan in Japanese is written using the kanji 日本 and pronounced Nippon or Nihon.[8] Before it was adopted in the early 8th century, the country was known in China as Wa (倭) and in Japan by the endonym Yamato.[9] Nippon, the original Sino-Japanese reading of the characters, is favored today for official uses, including on banknotes and postage stamps.[8] Nihon is typically used in everyday speech and reflects shifts in Japanese phonology during the Edo period.[9] The characters 日本 mean "sun origin", in reference to Japan's relatively eastern location.[8] It is the source of the popular Western epithet "Land of the Rising Sun".[10]
12
+
13
+ The name Japan is based on the Chinese pronunciation and was introduced to European languages through early trade. In the 13th century, Marco Polo recorded the early Mandarin or Wu Chinese pronunciation of the characters 日本國 as Cipangu.[11] The old Malay name for Japan, Japang or Japun, was borrowed from a southern coastal Chinese dialect and encountered by Portuguese traders in Southeast Asia, who brought the word to Europe in the early 16th century.[12] The first version of the name in English appears in a book published in 1577, which spelled the name as Giapan in a translation of a 1565 Portuguese letter.[13][14]
14
+
15
+ A Paleolithic culture from around 30,000 BC constitutes the first known habitation of the islands of Japan.[15] This was followed from around 14,500 BC (the start of the Jōmon period) by a Mesolithic to Neolithic semi-sedentary hunter-gatherer culture characterized by pit dwelling and rudimentary agriculture.[16] Decorated clay vessels from the period are among the oldest surviving examples of pottery.[17] From around 1000 BC, Yayoi people began to enter the archipelago from Kyushu, intermingling with the Jōmon;[18] the Yayoi period saw the introduction of practices including wet-rice farming,[19] a new style of pottery,[20] and metallurgy from China and Korea.[21]
16
+
17
+ Japan first appears in written history in the Chinese Book of Han, completed in 111 AD.[22] The Records of the Three Kingdoms records that the most powerful state on the archipelago in the 3rd century was Yamato; according to legend, the kingdom was founded in 660 BC by Emperor Jimmu. Buddhism was introduced to Japan from Baekje (a Korean kingdom) in 552, but the subsequent development of Japanese Buddhism was primarily influenced by China.[23] Despite early resistance, Buddhism was promoted by the ruling class, including figures like Prince Shōtoku, and gained widespread acceptance beginning in the Asuka period (592–710).[24]
18
+
19
+ After defeat in the Battle of Baekgang by the Chinese Tang dynasty, the Japanese government devised and implemented the far-reaching Taika Reforms. It nationalized all land in Japan, to be distributed equally among cultivators, and ordered the compilation of a household registry as the basis for a new system of taxation.[25] The Jinshin War of 672, a bloody conflict between Prince Ōama and his nephew Prince Ōtomo, became a major catalyst for further administrative reforms.[26] These reforms culminated with the promulgation of the Taihō Code, which consolidated existing statutes and established the structure of the central and subordinate local governments.[25] These legal reforms created the ritsuryō state, a system of Chinese-style centralized government that remained in place for half a millennium.[26]
20
+
21
+ The Nara period (710–784) marked an emergence of a Japanese state centered on the Imperial Court in Heijō-kyō (modern Nara). The period is characterized by the appearance of a nascent literary culture with the completion of the Kojiki (712) and Nihon Shoki (720), as well as the development of Buddhist-inspired artwork and architecture.[27] A smallpox epidemic in 735–737 is believed to have killed as much as one-third of Japan's population.[28] In 784, Emperor Kanmu moved the capital from Nara to Nagaoka-kyō, then to Heian-kyō (modern Kyoto) in 794. This marked the beginning of the Heian period (794–1185), during which a distinctly indigenous Japanese culture emerged. Murasaki Shikibu's The Tale of Genji and the lyrics of Japan's national anthem "Kimigayo" were written during this time.[29]
22
+
23
+ Japan's feudal era was characterized by the emergence and dominance of a ruling class of warriors, the samurai. In 1185, following the defeat of the Taira clan in the Genpei War, samurai Minamoto no Yoritomo was appointed shōgun and established a military government at Kamakura.[30] After Yoritomo's death, the Hōjō clan came to power as regents for the shōguns. The Zen school of Buddhism was introduced from China in the Kamakura period (1185–1333) and became popular among the samurai class.[31] The Kamakura shogunate repelled Mongol invasions in 1274 and 1281 but was eventually overthrown by Emperor Go-Daigo. Go-Daigo was defeated by Ashikaga Takauji in 1336, beginning the Muromachi period (1336–1573). However, the succeeding Ashikaga shogunate failed to control the feudal warlords (daimyōs) and a civil war began in 1467, opening the century-long Sengoku period ("Warring States").[32]
24
+
25
+ During the 16th century, Portuguese traders and Jesuit missionaries reached Japan for the first time, initiating direct commercial and cultural exchange between Japan and the West. Oda Nobunaga used European technology and firearms to conquer many other daimyōs; his consolidation of power began what was known as the Azuchi–Momoyama period (1573–1603). After Nobunaga was assassinated in 1582 by Akechi Mitsuhide, his successor Toyotomi Hideyoshi unified the nation in 1590 and launched two unsuccessful invasions of Korea in 1592 and 1597.
26
+
27
+ Tokugawa Ieyasu served as regent for Hideyoshi's son Toyotomi Hideyori and used his position to gain political and military support. When open war broke out, Ieyasu defeated rival clans in the Battle of Sekigahara in 1600. He was appointed shōgun by Emperor Go-Yōzei in 1603 and established the Tokugawa shogunate at Edo (modern Tokyo).[33] The shogunate enacted measures including buke shohatto, as a code of conduct to control the autonomous daimyōs,[34] and in 1639 the isolationist sakoku ("closed country") policy that spanned the two and a half centuries of tenuous political unity known as the Edo period (1603–1868).[35] Modern Japan's economic growth began in this period, resulting in roads and water transportation routes, as well as financial instruments such as futures contracts, banking and insurance of the Osaka rice brokers.[36] The study of Western sciences (rangaku) continued through contact with the Dutch enclave at Dejima in Nagasaki. The Edo period also gave rise to kokugaku ("national studies"), the study of Japan by the Japanese.[37]
28
+
29
+ In 1854, Commodore Matthew Perry and the "Black Ships" of the United States Navy forced the opening of Japan to the outside world with the Convention of Kanagawa. Similar treaties with Western countries in the Bakumatsu period brought economic and political crises. The resignation of the shōgun led to the Boshin War and the establishment of a centralized state nominally unified under the emperor (the Meiji Restoration).[38] Adopting Western political, judicial, and military institutions, the Cabinet organized the Privy Council, introduced the Meiji Constitution, and assembled the Imperial Diet. During the Meiji era (1868–1912), the Empire of Japan emerged as the most developed nation in Asia and as an industrialized world power that pursued military conflict to expand its sphere of influence.[39][40][41] After victories in the First Sino-Japanese War (1894–1895) and the Russo-Japanese War (1904–1905), Japan gained control of Taiwan, Korea and the southern half of Sakhalin.[42] The Japanese population doubled from 35 million in 1873 to 70 million by 1935.[43]
30
+
31
+ The early 20th century saw a period of Taishō democracy (1912–1926) overshadowed by increasing expansionism and militarization. World War I allowed Japan, which joined the side of the victorious Allies, to capture German possessions in the Pacific and in China. The 1920s saw a political shift towards statism, the passing of laws against political dissent, and a series of attempted coups. This process accelerated during the 1930s, spawning a number of radical nationalist groups that shared a hostility to liberal democracy and a dedication to expansion in Asia. In 1931, Japan invaded and occupied Manchuria; following international condemnation of the occupation, it resigned from the League of Nations two years later. In 1936, Japan signed the Anti-Comintern Pact with Nazi Germany; the 1940 Tripartite Pact made it one of the Axis Powers.
32
+
33
+ The Empire of Japan invaded other parts of China in 1937, precipitating the Second Sino-Japanese War (1937–1945). In 1940, the Empire invaded French Indochina, after which the United States placed an oil embargo on Japan.[44] On December 7–8, 1941, Japanese forces carried out surprise attacks on Pearl Harbor, as well as on British forces in Malaya, Singapore, and Hong Kong, and declared war on the United States and the British Empire, beginning World War II in the Pacific. After Allied victories during the next four years, which culminated in the Soviet invasion of Manchuria and the atomic bombings of Hiroshima and Nagasaki in 1945, Japan agreed to an unconditional surrender.[45] The war cost Japan its colonies, cost China and the war's other combatants tens of millions of lives, and left much of Japan's industry and infrastructure destroyed. The Allies (led by the United States) repatriated millions of ethnic Japanese from colonies and military camps throughout Asia, largely eliminating the Japanese empire and its influence over its conquered territories.[46] The Allies also convened the International Military Tribunal for the Far East to prosecute Japanese leaders for war crimes.
34
+
35
+ In 1947, Japan adopted a new constitution emphasizing liberal democratic practices. The Allied occupation ended with the Treaty of San Francisco in 1952,[47] and Japan was granted membership in the United Nations in 1956. A period of record growth propelled Japan to become the second-largest economy in the world; this ended in the mid-1990s after the popping of an asset price bubble, beginning the "Lost Decade". In the 21st century, positive growth has signaled a gradual economic recovery.[48] On March 11, 2011, Japan suffered one of the largest earthquakes in its recorded history, triggering the Fukushima Daiichi nuclear disaster.[49] On May 1, 2019, after the historic abdication of Emperor Akihito, his son Naruhito became the new emperor, beginning the Reiwa era.[50]
36
+
37
+ Japan comprises 6,852 islands extending along the Pacific coast of Asia. It stretches over 3,000 km (1,900 mi) northeast–southwest from the Sea of Okhotsk to the East China and Philippine Seas.[51] The country's five main islands, from north to south, are Hokkaido, Honshu, Shikoku, Kyushu and Okinawa.[52] The Ryukyu Islands, which include Okinawa, are a chain to the south of Kyushu. The Nanpō Islands are south and east of the main islands of Japan. Together they are often known as the Japanese archipelago.[53] As of 2019[update], Japan's territory is 377,975.24 km2 (145,937.06 sq mi).[2] Japan has the sixth longest coastline in the world (29,751 km (18,486 mi)). Because of its many far-flung outlying islands, Japan has the eighth largest Exclusive Economic Zone in the world, covering 4,470,000 km2 (1,730,000 sq mi).[54]
38
+
39
+ About 73 percent of Japan is forested, mountainous and unsuitable for agricultural, industrial or residential use.[55][56] As a result, the habitable zones, mainly in coastal areas, have extremely high population densities: Japan is one of the most densely populated countries.[57] Approximately 0.5% of Japan's total area is reclaimed land (umetatechi). Late 20th and early 21st century projects include artificial islands such as Chubu Centrair International Airport in Ise Bay, Kansai International Airport in the middle of Osaka Bay, Yokohama Hakkeijima Sea Paradise and Wakayama Marina City.[58]
40
+
41
+ Japan is substantially prone to earthquakes, tsunami and volcanic eruptions because of its location along the Pacific Ring of Fire.[59] It has the 15th highest natural disaster risk as measured in the 2013 World Risk Index.[60] Japan has 108 active volcanoes, which are primarily the result of large oceanic movements occurring from the mid-Silurian to the Pleistocene as a result of the subduction of the Philippine Sea Plate beneath the continental Amurian Plate and Okinawa Plate to the south, and subduction of the Pacific Plate under the Okhotsk Plate to the north. Japan was originally attached to the Eurasian continent; the subducting plates opened the Sea of Japan around 15 million years ago.[61] During the twentieth century several new volcanoes emerged, including Shōwa-shinzan on Hokkaido and Myōjin-shō off the Bayonnaise Rocks. Destructive earthquakes, often resulting in tsunami, occur several times each century.[62] The 1923 Tokyo earthquake killed over 140,000 people.[63] More recent major quakes are the 1995 Great Hanshin earthquake and the 2011 Tōhoku earthquake, which triggered a large tsunami.[49]
42
+
43
+ The climate of Japan is predominantly temperate but varies greatly from north to south. Japan's geographical features divide it into six principal climatic zones: Hokkaido, Sea of Japan, Central Highland, Seto Inland Sea, Pacific Ocean, and Ryukyu Islands. The northernmost zone, Hokkaido, has a humid continental climate with long, cold winters and very warm to cool summers. Precipitation is not heavy, but the islands usually develop deep snowbanks in the winter.[64] In the Sea of Japan zone on Honshu's west coast, northwest winter winds bring heavy snowfall. In the summer, the region is cooler than the Pacific area, though it sometimes experiences extremely hot temperatures because of the foehn. The Central Highland has a typical inland humid continental climate, with large temperature differences between summer and winter, as well as large diurnal variation; precipitation is light, though winters are usually snowy. The mountains of the Chūgoku and Shikoku regions shelter the Seto Inland Sea from seasonal winds, bringing mild weather year-round.[64] The Pacific coast features a humid subtropical climate that experiences milder winters with occasional snowfall and hot, humid summers because of the southeast seasonal wind. The Ryukyu and Nanpō Islands have a subtropical climate, with warm winters and hot summers. Precipitation is very heavy, especially during the rainy season.[64]
44
+
45
+ The average winter temperature in Japan is 5.1 °C (41.2 °F) and the average summer temperature is 25.2 °C (77.4 °F).[65] The highest temperature ever measured in Japan, 41.1 °C (106.0 °F), was recorded on July 23, 2018.[66] The main rainy season begins in early May in Okinawa, and the rain front gradually moves north until reaching Hokkaido in late July. In late summer and early autumn, typhoons often bring heavy rain.[65]
46
+
47
+ Japan has nine forest ecoregions which reflect the climate and geography of the islands. They range from subtropical moist broadleaf forests in the Ryūkyū and Bonin Islands, to temperate broadleaf and mixed forests in the mild climate regions of the main islands, to temperate coniferous forests in the cold winter portions of the northern islands.[67] Japan has over 90,000 species of wildlife, including the brown bear, the Japanese macaque, the Japanese raccoon dog, the large Japanese field mouse, and the Japanese giant salamander.[68] A large network of national parks has been established to protect important areas of flora and fauna, as well as 37 Ramsar wetland sites.[69][70] Four sites have been inscribed on the UNESCO World Heritage List for their outstanding natural value.[71]
48
+
49
+ In the period of rapid economic growth after World War II, environmental policies were downplayed by the government and industrial corporations; as a result, environmental pollution was widespread in the 1950s and 1960s. Responding to rising concern, the government introduced several environmental protection laws in 1970.[72] The oil crisis in 1973 also encouraged the efficient use of energy because of Japan's lack of natural resources.[73]
50
+
51
+ As of 2015[update], more than 40 coal-fired power plants are planned or under construction in Japan, following the shutdown of the country's nuclear fleet after the 2011 Fukushima nuclear disaster. Prior to this incident, Japan's emissions had been on the decline, largely because its nuclear power plants created no emissions. Japan ranks 20th in the 2018 Environmental Performance Index, which measures a nation's commitment to environmental sustainability.[74] As the host and signatory of the 1997 Kyoto Protocol, Japan is under treaty obligation to reduce its carbon dioxide emissions and to take other steps to curb climate change.[75] Current environmental issues include urban air pollution (NOx, suspended particulate matter, and toxics), waste management, water eutrophication, nature conservation, climate change, chemical management and international co-operation for conservation.[76]
52
+
53
+ Japan is a unitary state and constitutional monarchy in which the power of the Emperor is limited to a ceremonial role. He is defined in the Constitution as "the symbol of the state and of the unity of the people". Executive power is instead wielded by the Prime Minister of Japan and his Cabinet, while sovereignty is vested in the Japanese people.[77] Naruhito is the current Emperor of Japan, having succeeded his father Akihito upon his accession to the Chrysanthemum Throne on May 1, 2019.
54
+
55
+ Japan's legislative organ is the National Diet, a bicameral parliament. It consists of a lower House of Representatives with 465 seats, elected by popular vote every four years or when dissolved, and an upper House of Councillors with 245 seats, whose popularly-elected members serve six-year terms. There is universal suffrage for adults over 18 years of age,[78] with a secret ballot for all elected offices.[77] The Diet is currently dominated by the conservative Liberal Democratic Party (LDP), which has enjoyed near-continuous electoral success since 1955. The prime minister is the head of government and is appointed by the emperor after being designated from among the members of the Diet. As the head of the Cabinet, the prime minister has the power to appoint and dismiss Ministers of State. Following the LDP victory in the 2012 general election, Shinzō Abe replaced Yoshihiko Noda as the prime minister.[79]
56
+
57
+ Historically influenced by Chinese law, the Japanese legal system developed independently during the Edo period through texts such as Kujikata Osadamegaki.[80] However, since the late 19th century, the judicial system has been largely based on the civil law of Europe, notably Germany. In 1896, Japan established a civil code based on the German Bürgerliches Gesetzbuch, which remains in effect with post–World War II modifications.[81] The Constitution of Japan, adopted in 1947, is the oldest unamended constitution in the world.[82] Statutory law originates in the legislature, and the constitution requires that the emperor promulgate legislation passed by the Diet without giving him the power to oppose legislation. The main body of Japanese statutory law is called the Six Codes.[83] Japan's court system is divided into four basic tiers: the Supreme Court and three levels of lower courts.[84]
58
+
59
+ Japan is divided into 47 prefectures, each overseen by an elected governor, legislature, and administrative bureaucracy.[85] Each prefecture is further divided into cities, towns and villages.[86] In the following list, the prefectures are grouped by region:
60
+
61
+ Hokkaido:
+ 1. Hokkaido
62
+
63
+ Tōhoku:
+ 2. Aomori
64
+ 3. Iwate
65
+ 4. Miyagi
66
+ 5. Akita
67
+ 6. Yamagata
68
+ 7. Fukushima
69
+
70
+ Kantō:
+ 8. Ibaraki
71
+ 9. Tochigi
72
+ 10. Gunma
73
+ 11. Saitama
74
+ 12. Chiba
75
+ 13. Tokyo
76
+ 14. Kanagawa
77
+
78
+ Chūbu:
+ 15. Niigata
79
+ 16. Toyama
80
+ 17. Ishikawa
81
+ 18. Fukui
82
+ 19. Yamanashi
83
+ 20. Nagano
84
+ 21. Gifu
85
+ 22. Shizuoka
86
+ 23. Aichi
87
+
88
+ Kansai:
+ 24. Mie
89
+ 25. Shiga
90
+ 26. Kyoto
91
+ 27. Osaka
92
+ 28. Hyōgo
93
+ 29. Nara
94
+ 30. Wakayama
95
+
96
+ Chūgoku:
+ 31. Tottori
97
+ 32. Shimane
98
+ 33. Okayama
99
+ 34. Hiroshima
100
+ 35. Yamaguchi
101
+
102
+ Shikoku:
+ 36. Tokushima
103
+ 37. Kagawa
104
+ 38. Ehime
105
+ 39. Kōchi
106
+
107
+ Kyushu (including Okinawa):
+ 40. Fukuoka
108
+ 41. Saga
109
+ 42. Nagasaki
110
+ 43. Kumamoto
111
+ 44. Ōita
112
+ 45. Miyazaki
113
+ 46. Kagoshima
114
+ 47. Okinawa
115
+
116
+ A member state of the United Nations since 1956, Japan has served as a non-permanent Security Council member for a total of 22 years. It is one of the G4 nations seeking permanent membership in the Security Council.[87] Japan is a member of the G7, APEC, and "ASEAN Plus Three", and is a participant in the East Asia Summit. Japan signed a security pact with Australia in March 2007[88] and with India in October 2008.[89] It is the world's fifth largest donor of official development assistance, donating US$9.2 billion in 2014.[90] In 2017, Japan had the fifth largest diplomatic network in the world.[91]
117
+
118
+ Japan has close economic and military relations with the United States; the US-Japan security alliance acts as the cornerstone of the nation's foreign policy.[92] The United States is a major market for Japanese exports and the primary source of Japanese imports, and is committed to defending the country, maintaining military bases in Japan partially for that purpose.[93]
119
+
120
+ Japan's relationship with South Korea has been strained because of Japan's treatment of Koreans during Japanese colonial rule, particularly over the issue of comfort women.[94] In December 2015, Japan agreed to settle the comfort women dispute with South Korea by issuing a formal apology and paying money to the surviving comfort women. Today, South Korea and Japan have a stronger and more economically-driven relationship. Since the 1990s, the Korean Wave has created a large fanbase in East Asia: Japan is the number one importer of Korean music (K-pop), television (K-dramas), and films.[95] Most recently, South Korean President Moon Jae-in met with Japanese Prime Minister Shinzo Abe at the 2017 G20 Summit to discuss the future of their relationship and specifically how to cooperate on finding solutions for North Korean aggression in the region.[96]
121
+
122
+ Japan is engaged in several territorial disputes with its neighbors. Japan contests Russia's control of the Southern Kuril Islands, which were occupied by the Soviet Union in 1945.[97] Japan acknowledges, but does not accept, South Korea's control of the Liancourt Rocks, which Japan also claims.[98] Japan has strained relations with China and Taiwan over the Senkaku Islands[99] and the status of Okinotorishima.
123
+
124
+ Japan maintains one of the largest military budgets of any country in the world.[100] The country's military (the Japan Self-Defense Forces) is restricted by Article 9 of the Japanese Constitution, which renounces Japan's right to declare war or use military force in international disputes. Japan is the highest-ranked Asian country in the Global Peace Index.
125
+
126
+ The military is governed by the Ministry of Defense, and primarily consists of the Japan Ground Self-Defense Force, the Japan Maritime Self-Defense Force, and the Japan Air Self-Defense Force. The Maritime Self-Defense Force is a regular participant in RIMPAC maritime exercises.[101] The forces have been recently used in peacekeeping operations; the deployment of troops to Iraq marked the first overseas use of Japan's military since World War II.[102] The Japan Business Federation has called on the government to lift the ban on arms exports so that Japan can join multinational projects such as the Joint Strike Fighter.[103]
127
+
128
+ The Government of Japan has been making changes to its security policy which include the establishment of the National Security Council, the adoption of the National Security Strategy, and the development of the National Defense Program Guidelines.[104] In May 2014, Prime Minister Shinzō Abe said Japan wanted to shed the passiveness it has maintained since the end of World War II and take more responsibility for regional security.[105] Recent tensions, particularly with North Korea,[106] have reignited the debate over the status of the JSDF and its relation to Japanese society.[107]
129
+
130
+ Domestic security in Japan is provided mainly by the prefectural police departments, which are overseen by the National Police Agency[108] and supervised by its Criminal Affairs Bureau.[109] As the central coordinating body for the prefectural police departments, the National Police Agency is administered by the National Public Safety Commission.[110] The Special Assault Team comprises national-level counter-terrorism tactical units that cooperate with territorial-level Anti-Firearms Squads and Counter-NBC Terrorism Squads.[111]
131
+
132
+ Additionally, the Japan Coast Guard guards territorial waters. It patrols the sea surrounding Japan and uses surveillance and control countermeasures against smuggling, marine environmental crime, poaching, piracy, spy ships, unauthorized foreign fishing vessels, and illegal immigration.[112]
133
+
134
+ The Firearm and Sword Possession Control Law strictly regulates the civilian ownership of guns, swords and other weaponry.[113][114] According to the United Nations Office on Drugs and Crime, among the member states of the UN that report statistics, the incidence rate of violent crimes such as murder, abduction, forced sexual intercourse and robbery is very low in Japan.[115][116][117][118][119]
135
+
136
+ Japan is the third largest national economy in the world, after the United States and China, in terms of nominal GDP,[120] and the fourth largest national economy in the world, after the United States, China and India, in terms of purchasing power parity. As of 2017[update], Japan's public debt was estimated at more than 230 percent of its annual gross domestic product, the largest of any nation in the world.[121] The service sector accounts for three quarters of the gross domestic product.[122]
137
+
138
+ As of 2017[update], Japan's labor force consisted of some 65 million workers.[55] Japan has a low unemployment rate of around three percent. Around 16 percent of the population were below the poverty line in 2013.[123] Housing in Japan is characterized by limited land supply in urban areas.[124]
139
+
140
+ Japan's exports amounted to US$5,430 per capita in 2017. As of 2017[update], Japan's main export markets were the United States (19.4 percent), China (19 percent), South Korea (7.6 percent), Hong Kong (5.1 percent) and Thailand (4.2 percent). Its main exports are transportation equipment, motor vehicles, iron and steel products, semiconductors and auto parts.[55] Japan's main sources of imports as of 2017[update] were China (24.5 percent), the United States (11 percent), Australia (5.8 percent), South Korea (4.2 percent), and Saudi Arabia (4.1 percent).[55] Japan's main imports are machinery and equipment, fossil fuels, foodstuffs (in particular beef), chemicals, textiles and raw materials for its industries. By market share measures, domestic markets are the least open of any OECD country.[125]
141
+
142
+ Japan ranks 34th of 190 countries in the 2018 ease of doing business index and has one of the smallest tax revenues of the developed world. The Japanese variant of capitalism has many distinct features: keiretsu enterprises are influential, and lifetime employment and seniority-based career advancement are relatively common in the Japanese work environment.[125][126] Japanese companies are known for management methods like "The Toyota Way", and shareholder activism is rare.[127] Japan also has a large cooperative sector, with three of the ten largest cooperatives in the world, including the largest consumer cooperative and the largest agricultural cooperative in the world.[128]
143
+
144
+ Japan ranks highly for competitiveness and economic freedom. It is ranked sixth in the Global Competitiveness Report for 2015–2016.[129][130]
145
+
146
+ The Japanese agricultural sector accounts for about 1.4% of the country's total GDP.[131] Only 12% of Japan's land is suitable for cultivation.[132][133] Because of this lack of arable land, a system of terraces is used to farm in small areas.[134] This results in one of the world's highest levels of crop yields per unit area, with an overall agricultural self-sufficiency rate of about 50% on fewer than 56,000 square kilometers (14,000,000 acres) cultivated. Japan's small agricultural sector, however, is also highly subsidized and protected, with government regulations that favor small-scale cultivation instead of large-scale agriculture.[132] Rice, the most protected crop, is subject to tariffs of 777.7%.[133][135] There is growing concern about farming, as current farmers are aging and have difficulty finding successors.[136]
147
+
148
+ In 1996, Japan ranked fourth in the world in tonnage of fish caught.[137] Japan ranked seventh and captured 3,167,610 metric tons of fish in 2016, down from an annual average of 4,000,000 tons over the previous decade.[138] In 2010, Japan's total fisheries production was 4,762,469 metric tons.[139] Japan maintains one of the world's largest fishing fleets and accounts for nearly 15% of the global catch,[140] prompting some claims that Japan's fishing is leading to depletion in fish stocks such as tuna.[141] Japan has also sparked controversy by supporting quasi-commercial whaling.[142]
149
+
150
+ Japan has a large industrial capacity and is home to some of the largest and most technologically advanced producers of motor vehicles, machine tools, steel and nonferrous metals, ships, chemical substances, textiles, and processed foods. Japan's industrial sector makes up approximately 27.5% of its GDP.[140] Some major Japanese industrial companies include Canon Inc., Toshiba and Nippon Steel.[140][144] The country's manufacturing output is the third highest in the world.[145]
151
+
152
+ Japan is the third largest automobile producer in the world and is home to Toyota, the world's largest automobile company.[143][146] Despite facing competition from South Korea and China, the Japanese shipbuilding industry is expected to remain strong through an increased focus on specialized, high-tech designs.[147]
153
+
154
+ Japan's service sector accounts for about three-quarters of its total economic output.[131] Banking, insurance, real estate, retailing, transportation, and telecommunications are all major industries, with companies such as Mitsubishi UFJ, Mizuho, NTT, TEPCO, Nomura, Mitsubishi Estate, ÆON, Mitsui Sumitomo, Softbank, JR East, Seven & I, KDDI and Japan Airlines listed as some of the largest in the world.[148][149] Four of the five most circulated newspapers in the world are Japanese newspapers.[150] The six major keiretsu are the Mitsubishi, Sumitomo, Fuyo, Mitsui, Dai-Ichi Kangyo and Sanwa Groups.[151]
155
+
156
+ Japan attracted 19.73 million international tourists in 2015,[152] a figure that grew by 21.8% to 24.03 million in 2016.[153][154][155] In 2008, the Japanese government set up the Japan Tourism Agency and set the initial goal of increasing foreign visitors to 20 million by 2020. In 2016, having met the 20 million target, the government revised its target up to 40 million by 2020 and 60 million by 2030.[156][157] For inbound tourism, Japan was ranked 16th in the world in 2015.[158] Japan is one of the least visited countries in the OECD on a per capita basis,[159] and it was by far the least visited country in the G7 until 2014.[160]
157
+
158
+ Japan is a leading nation in scientific research, particularly in the natural sciences and engineering. The country ranks second among the most innovative countries in the Bloomberg Innovation Index.[161][162] Nearly 700,000 researchers share a US$130 billion research and development budget,[163] which relative to gross domestic product is the third highest budget in the world.[164] The country is a world leader in fundamental scientific research, having produced twenty-two Nobel laureates in either physics, chemistry or medicine[165] and three Fields medalists.[166]
159
+
160
+ Japanese scientists and engineers have contributed to the advancement of agricultural sciences, electronics, industrial robotics, optics, chemicals, semiconductors, life sciences and various fields of engineering. Japan leads the world in robotics production and use, possessing more than 20% of the world's industrial robots as of 2013[update].[needs update][167] Japan boasts the third highest number of scientists, technicians, and engineers per capita in the world with 83 per 10,000 employees.[168][169][170]
161
+
162
+ The Japanese consumer electronics industry, once considered the strongest in the world, is currently in a state of decline as competition arises in countries like South Korea, the United States and China.[171][172] However, video gaming in Japan remains a major industry. Japan became a major exporter of video games during the golden age of arcade video games, an era that began with the release of Taito's Space Invaders in 1978 and ended around the mid-1980s.[173][174][175] Japanese-made video game consoles have been popular since the 1980s,[176] and Japan dominated the industry until Microsoft's Xbox consoles began challenging Sony and Nintendo in the 2000s.[177][178][179] As of 2009[update], $6 billion of Japan's $20 billion gaming market is generated from arcades, which represent the largest sector of the Japanese video game market, followed by home console games and mobile games at $3.5 billion and $2 billion, respectively.[needs update][180] Japan is now the world's largest market for mobile games;[181] in 2014, Japan's consumer video game market grossed $9.6 billion, with $5.8 billion coming from mobile gaming.[182]
163
+
164
+ The Japan Aerospace Exploration Agency is Japan's national space agency; it conducts space, planetary, and aviation research, and leads development of rockets and satellites. It is a participant in the International Space Station: the Japanese Experiment Module (Kibō) was added to the station during Space Shuttle assembly flights in 2008.[183] The space probe Akatsuki was launched in 2010 and achieved orbit around Venus in 2015. Japan's plans in space exploration include building a moon base by 2030.[184] In 2007, it launched lunar explorer SELENE (Selenological and Engineering Explorer) from Tanegashima Space Center. The largest lunar mission since the Apollo program, its purpose was to gather data on the moon's origin and evolution. It entered a lunar orbit on October 4, 2007,[185][186] and was deliberately crashed into the Moon on June 11, 2009.[187]
165
+
166
+ Japan's road spending has been extensive.[188] Its 1.2 million kilometers (0.75 million miles) of paved road are the main means of transportation.[189] As of 2012[update], Japan has approximately 1,215,000 kilometers (755,000 miles) of roads made up of 1,022,000 kilometers (635,000 miles) of city, town and village roads, 129,000 kilometers (80,000 miles) of prefectural roads, 55,000 kilometers (34,000 miles) of general national highways and 8,050 kilometers (5,000 miles) of national expressways.[190][191] A single network of high-speed, divided, limited-access toll roads connects major cities on Honshu, Shikoku and Kyushu (Hokkaido has a separate network). Cars are inexpensive; car ownership fees and fuel levies are used to promote energy efficiency. However, at just 50 percent of all distance traveled, car usage is the lowest of all G8 countries.[192]
167
+
168
+ Since privatization in 1987, dozens of Japanese railway companies compete in regional and local passenger transportation markets; major companies include seven JR enterprises, Kintetsu, Seibu Railway and Keio Corporation. Some 250 high-speed Shinkansen trains connect major cities and Japanese trains are known for their safety and punctuality.[193][194] A new Maglev line called the Chūō Shinkansen is being constructed between Tokyo and Nagoya. It is due to be completed in 2027.[195]
169
+
170
+ There are 175 airports in Japan;[55] the largest domestic airport, Haneda Airport in Tokyo, is Asia's second-busiest airport.[196] The largest international gateways are Narita International Airport, Kansai International Airport and Chūbu Centrair International Airport.[197] Nagoya Port is the country's largest and busiest port, accounting for 10 percent of Japan's trade value.[198]
171
+
172
+ As of 2017[update], 39% of energy in Japan was produced from petroleum, 25% from coal, 23% from natural gas, 3.5% from hydropower and 1.5% from nuclear power. Nuclear power was down from 11.2 percent in 2010.[199] By May 2012 all of the country's nuclear power plants had been taken offline because of ongoing public opposition following the Fukushima Daiichi nuclear disaster in March 2011, though government officials continued to try to sway public opinion in favor of returning at least some to service.[200] The Sendai Nuclear Power Plant restarted in 2015,[201] and since then several other nuclear power plants have been restarted. Japan lacks significant domestic reserves and so has a heavy dependence on imported energy.[202] Japan has therefore aimed to diversify its sources and maintain high levels of energy efficiency.[203]
173
+
174
+ Responsibility for regulating the water and sanitation sector is shared between the Ministry of Health, Labor and Welfare, in charge of water supply for domestic use; the Ministry of Land, Infrastructure, Transport and Tourism, in charge of water resources development as well as sanitation; the Ministry of the Environment, in charge of ambient water quality and environmental preservation; and the Ministry of Internal Affairs and Communications, in charge of performance benchmarking of utilities.[204] Access to an improved water source is universal in Japan: 97% of the population receives piped water supply from public utilities and 3% receive water from their own wells or unregulated small systems, mainly in rural areas.[205]
175
+
176
+ Japan has a population of 126.3 million,[206] of which 124.8 million are Japanese nationals (2019).[207] Honshū is the world's second most populous island and has 80% of Japan's population. In 2010, 90.7% of the total Japanese population lived in cities.[208] The capital city Tokyo has a population of 13.8 million (2018).[209] It is part of the Greater Tokyo Area, the biggest metropolitan area in the world with 38,140,000 people (2016).[210][211]
177
+
178
+ Japanese society is linguistically, ethnically and culturally homogeneous,[212][213] composed of 98.1% ethnic Japanese,[55] with small populations of foreign workers.[212] The most dominant native ethnic group is the Yamato people; primary minority groups include the indigenous Ainu[214] and Ryukyuan people, as well as social minority groups like the burakumin.[215] Zainichi Koreans,[216] Chinese, Filipinos, Brazilians mostly of Japanese descent,[217] Peruvians mostly of Japanese descent, and Americans are among the small minority groups in Japan.[218] In 2003, there were about 134,700 Western expatriates of non-Latin American origin (not including more than 33,000 American military personnel and their dependents) and 345,500 Latin American expatriates, 274,700 of whom were Brazilians,[217] the largest community of Westerners.[219]
179
+
180
+ Japan has the second longest overall life expectancy at birth of any country in the world: 83.5 years for persons born in the period 2010–2015.[220][221] The Japanese population is rapidly aging as a result of a post–World War II baby boom followed by a decrease in birth rates. In 2012, about 24.1 percent of the population was over 65, and the proportion is projected to rise to almost 40 percent by 2050.[222] On September 15, 2018, for the first time, one in five Japanese residents was aged 70 or older: 26.18 million people, or 20.7 percent of the population, were 70 or older. Elderly women crossed the 20 million line at 20.12 million, substantially outnumbering the nation's 15.45 million elderly men.[223] The changes in demographic structure have created a number of social issues, particularly a potential decline in the workforce population and an increase in the cost of social security benefits.[224] A growing number of younger Japanese are not marrying or remain childless.[225] Japan's population is expected to drop to 95 million by 2050.[222][226]
181
+
182
+ Immigration and birth incentives are sometimes suggested as a solution to provide younger workers to support the nation's aging population.[227][228] Japan accepts an average flow of 9,500 new naturalized citizens per year.[229] On April 1, 2019, Japan's revised immigration law was enacted, protecting the rights of foreign workers to help reduce labor shortages in certain sectors.[230]
183
+
184
+ Japan has full religious freedom based on its constitution. Upper estimates suggest that 84–96 percent of the Japanese population subscribe to Shinto, the country's indigenous religion (with 50 to 80 percent of these practicing some degree of syncretism with Buddhism, shinbutsu-shūgō).[231][232] However, these estimates are based on people affiliated with a temple, rather than the number of true believers. Many Japanese people practice both Shinto and Buddhism;[233] they can either identify with both religions or describe themselves as non-religious or spiritual,[234] despite participating in religious ceremonies as a cultural tradition. As a result, religious statistics are often under-reported in Japan. Other studies have suggested that only 30 percent of the population identify themselves as belonging to a religion.[235] Nevertheless, the level of participation remains high, especially during festivals and occasions such as the first shrine visit of the New Year. Taoism and Confucianism from China have also influenced Japanese beliefs and customs.[236]
185
+
186
+ Christianity was first introduced into Japan by Jesuit missions starting in 1549.[237] Today, estimates of the Christian population range from fewer than 1%[238][239][240] to 2.3%,[b] with most Christians living in the western part of the country. As of 2007[update], there were 32,036 Christian priests and pastors in Japan.[242] Over the past century, some Western customs originally related to Christianity (including Western-style weddings, Valentine's Day and Christmas) have become popular as secular customs among many Japanese.[243]
187
+
188
+ The Muslim population in Japan is estimated to consist of 80–90% foreign-born migrants and their children, primarily from Indonesia, Pakistan, Bangladesh, and Iran.[244] Many of the ethnic Japanese Muslims are those who convert upon marrying immigrant Muslims.[245] The Pew Research Center estimated that there were 185,000 Muslims in Japan in 2010.[246]
189
+
190
+ Other minority religions include Hinduism, Sikhism, Judaism, and the Bahá'í Faith;[247] since the mid-19th century numerous new religious movements have emerged in Japan.[248]
191
+
192
+ More than 99 percent of the population speaks Japanese as their first language.[55] Japanese writing uses kanji (Chinese characters) and two sets of kana (syllabaries derived from cursive forms and components of kanji), as well as the Latin alphabet and Arabic numerals.[249] Public and private schools generally require students to take Japanese language classes as well as English language courses.[250]
193
+
194
+ Besides Japanese, the Ryukyuan languages (Amami, Kunigami, Okinawan, Miyako, Yaeyama, Yonaguni), also part of the Japonic language family, are spoken in the Ryukyu Islands chain. Few children learn these languages,[251] but in recent years local governments have sought to increase awareness of the traditional languages. The Okinawan Japanese dialect is also spoken in the region. The Ainu language, which is a language isolate, is moribund, with only a few elderly native speakers remaining in Hokkaido.[252]
195
+
196
+ Primary schools, secondary schools and universities were introduced in 1872 as a result of the Meiji Restoration.[253] Since 1947, compulsory education in Japan comprises elementary and junior high school, which together last for nine years (from age 6 to age 15). Almost all children continue their education at a three-year senior high school. The two top-ranking universities in Japan are the University of Tokyo and Kyoto University.[254] Japan's education system played a central part in the country's recovery after World War II when the Fundamental Law of Education and the School Education Law were enacted. The latter law defined the standard school system. Starting in April 2016, various schools began the academic year with elementary school and junior high school integrated into one nine-year compulsory schooling program; MEXT plans for this approach to be adopted nationwide.[255]
197
+
198
+ The Programme for International Student Assessment coordinated by the OECD currently ranks the overall knowledge and skills of Japanese 15-year-olds as the third best in the world.[256] Japan is one of the top-performing OECD countries in reading literacy, math and sciences, with the average student scoring 529, and has one of the most highly educated labor forces among OECD countries.[257][256][258] In 2015, Japan's public spending on education amounted to just 4.1 percent of its GDP, below the OECD average of 5.0 percent.[259] The country's large pool of highly educated and skilled individuals is largely responsible for ushering in Japan's post-war economic growth.[260] In 2017, the country ranked third for the percentage of 25 to 64 year-olds that have attained tertiary education, with 51 percent.[260] In addition, 60.4 percent of Japanese aged 25 to 34 have some form of tertiary education qualification, and bachelor's degrees are held by 30.4 percent of Japanese aged 25 to 64, the second most in the OECD after South Korea.[260]
199
+
200
+ Health care is provided by national and local governments. Payment for personal medical services is offered through a universal health insurance system that provides relative equality of access, with fees set by a government committee. People without insurance through employers can participate in a national health insurance program administered by local governments. Since 1973, all elderly persons have been covered by government-sponsored insurance.[261] Japan has a high suicide rate;[262][263] suicide is the leading cause of death for people under 30.[264] Another significant public health issue is smoking. Japan has the lowest rate of heart disease in the OECD, and the lowest level of dementia in the developed world.[265]
201
+
202
+ Contemporary Japanese culture combines influences from Asia, Europe and North America.[266] Traditional Japanese arts include crafts such as ceramics, textiles, lacquerware, swords and dolls; performances of bunraku, kabuki, noh, dance, and rakugo; and other practices such as the tea ceremony, ikebana, martial arts, calligraphy, origami, onsen, geisha and games. Japan has a developed system for the protection and promotion of both tangible and intangible Cultural Properties and National Treasures.[267] Twenty-two sites have been inscribed on the UNESCO World Heritage List, eighteen of which are of cultural significance.[71]
203
+
204
+
205
+
206
+ Japanese sculpture, largely of wood, and Japanese painting are among the oldest of the Japanese arts, with early figurative paintings dating to at least 300 BC. The history of Japanese painting exhibits synthesis and competition between native Japanese esthetics and imported ideas.[268] The interaction between Japanese and European art has been significant: for example ukiyo-e prints, which began to be exported in the 19th century in the movement known as Japonism, had a significant influence on the development of modern art in the West, most notably on post-Impressionism.[268] Japanese manga developed in the 20th century and have become popular worldwide.[269]
207
+
208
+ Japanese architecture is a combination of local and foreign influences. It has traditionally been typified by wooden structures, elevated slightly off the ground, with tiled or thatched roofs. The Shrines of Ise have been celebrated as the prototype of Japanese architecture.[270] Largely of wood, traditional housing and many temple buildings make use of tatami mats and sliding doors that break down the distinction between rooms and between indoor and outdoor space.[271] Since the 19th century, however, Japan has incorporated much of Western, modern, and post-modern architecture into construction and design. Architects returning from study with Western architects introduced the International Style of modernism into Japan. However, it was not until after World War II that Japanese architects made an impression on the international scene, first with the work of architects like Kenzō Tange and then with movements like Metabolism.
209
+
210
+ The earliest works of Japanese literature include the Kojiki and Nihon Shoki chronicles and the Man'yōshū poetry anthology, all from the 8th century and written in Chinese characters.[272][273] In the early Heian period, the system of phonograms known as kana (hiragana and katakana) was developed. The Tale of the Bamboo Cutter is considered the oldest Japanese narrative.[274] An account of court life is given in The Pillow Book by Sei Shōnagon, while The Tale of Genji by Murasaki Shikibu is often described as the world's first novel.[275][276]
211
+
212
+ During the Edo period, the chōnin ("townspeople") overtook the samurai aristocracy as producers and consumers of literature. The popularity of the works of Saikaku, for example, reveals this change in readership and authorship, while Bashō revivified the poetic tradition of the Kokinshū with his haikai (haiku) and wrote the poetic travelogue Oku no Hosomichi.[277] The Meiji era saw the decline of traditional literary forms as Japanese literature integrated Western influences. Natsume Sōseki and Mori Ōgai were the first "modern" novelists of Japan, followed by Ryūnosuke Akutagawa, Jun'ichirō Tanizaki, Yukio Mishima and, more recently, Haruki Murakami. Japan has two Nobel Prize-winning authors – Yasunari Kawabata (1968) and Kenzaburō Ōe (1994).[274]
213
+
214
+ Japanese philosophy has historically been a fusion of both foreign, particularly Chinese and Western, and uniquely Japanese elements. In its literary forms, Japanese philosophy began about fourteen centuries ago. Confucian ideals are still evident today in the Japanese concept of society and the self, and in the organization of the government and the structure of society.[278] Buddhism has profoundly impacted Japanese psychology, metaphysics, and esthetics.[279]
215
+
216
+ Japanese music is eclectic and diverse. Many instruments, such as the koto, were introduced in the 9th and 10th centuries. The popular folk music, with the guitar-like shamisen, dates from the 16th century.[280] Western classical music, introduced in the late 19th century, now forms an integral part of Japanese culture. The imperial court ensemble Gagaku has influenced the work of some modern Western composers.[281] Notable classical composers from Japan include Toru Takemitsu and Rentarō Taki. Popular music in post-war Japan has been heavily influenced by American and European trends, which has led to the evolution of J-pop.[282] Karaoke is the most widely practiced cultural activity in Japan.[283]
217
+
218
+ The four traditional forms of Japanese theater are noh, kyōgen, kabuki, and bunraku. The noh and kyōgen traditions are among the oldest continuous theater traditions in the world.
219
+
220
+ Ishin-denshin (以心伝心) is a Japanese idiom which denotes a form of interpersonal communication through unspoken mutual understanding.[284] Isagiyosa (潔さ) is the virtue of being able to accept death with composure. Cherry blossoms are a symbol of isagiyosa in the sense of embracing the transience of the world.[285] Hansei (反省) is a central idea in Japanese culture, meaning to acknowledge one's own mistake and to pledge improvement. Kotodama (言霊) refers to the Japanese belief that mystical powers dwell in words and names.[286] Japan has many annual festivals, called matsuri (祭) in Japanese. There are no festival days common to all of Japan; dates vary from area to area, and even within a specific area, but festival days tend to cluster around traditional holidays such as Setsubun or Obon.
+
+ Officially, Japan has 16 national, government-recognized holidays. Public holidays in Japan are regulated by the Public Holiday Law (国民の祝日に関する法律, Kokumin no Shukujitsu ni Kansuru Hōritsu) of 1948.[287] Beginning in 2000, Japan implemented the Happy Monday System, which moved a number of national holidays to Monday in order to obtain long weekends. The national holidays in Japan are New Year's Day on January 1, Coming of Age Day on the second Monday of January, National Foundation Day on February 11, The Emperor's Birthday on February 23, Vernal Equinox Day on March 20 or 21, Shōwa Day on April 29, Constitution Memorial Day on May 3, Greenery Day on May 4, Children's Day on May 5, Marine Day on the third Monday of July, Mountain Day on August 11, Respect for the Aged Day on the third Monday of September, Autumnal Equinox Day on September 23 or 24, Health and Sports Day on the second Monday of October, Culture Day on November 3, and Labor Thanksgiving Day on November 23.[288]
+
+ Japanese cuisine is known for its emphasis on seasonality of food, quality of ingredients and presentation. It offers a vast array of regional specialties that use traditional recipes and local ingredients. Seafood and Japanese rice or noodles are traditional staples of Japanese cuisine, typically seasoned with a combination of dashi, soy sauce, mirin, vinegar, sugar, and salt. Dishes inspired by foreign food—in particular Chinese food—like ramen and gyōza, as well as foods like spaghetti, curry, and hamburgers, have been adapted with variants for Japanese tastes and ingredients. Japanese curry, since its introduction to Japan from British India, is so widely consumed that it can be called a national dish.[289] Traditional Japanese sweets are known as wagashi;[290] ingredients such as red bean paste and mochi are used. More modern-day tastes include green tea ice cream.[291]
+
+ Popular Japanese beverages include sake, a brewed beverage that typically contains 14–17% alcohol and is made by the multiple fermentation of rice.[292] Beer has been brewed in Japan since the late 17th century.[293] Green tea is produced in Japan and prepared in various forms such as matcha, used in the Japanese tea ceremony.[294]
+
+ Television and newspapers play an important role in Japanese mass media, though radio and magazines also play a part.[295][296] Over the 1990s, television surpassed newspapers as Japan's main information and entertainment medium.[297] There are six nationwide television networks: NHK (public broadcasting), Nippon Television (NTV), Tokyo Broadcasting System (TBS), Fuji Network System (FNS), TV Asahi (EX) and TV Tokyo Network (TXN).[296] Television networks were mostly established based on capital investments by existing radio networks. Variety shows, serial dramas, and news constitute a large percentage of Japanese television shows. According to the 2015 NHK survey on television viewing in Japan, 79 percent of Japanese watch television daily.[298]
+
+ Japanese readers have a choice of approximately 120 daily newspapers, with an average subscription rate of 1.13 newspapers per household.[299] The main newspapers are the Yomiuri Shimbun, Asahi Shimbun, Mainichi Shimbun, Nikkei Shimbun and Sankei Shimbun. According to a survey conducted by the Japanese Newspaper Association in 1999, 85.4 percent of men and 75 percent of women read a newspaper every day.[297]
+
+ Japan has one of the oldest and largest film industries in the world; movies have been produced in Japan since 1897.[300] Ishirō Honda's Godzilla became an international icon of Japan and spawned an entire subgenre of kaiju films, as well as the longest-running film franchise in history. Japan has won the Academy Award for the Best Foreign Language Film four times, more than any other Asian country. Japanese animated films and television series, known as anime, were largely influenced by Japanese manga and have been extensively popular in the West. Japan is a world-renowned powerhouse of animation.[301]
+
+ Traditionally, sumo is considered Japan's national sport.[302] Japanese martial arts such as judo, karate and kendo are also widely practiced and enjoyed by spectators in the country. After the Meiji Restoration, many Western sports were introduced.[303] Baseball is currently the most popular spectator sport in the country. Japan's top professional league, now known as Nippon Professional Baseball, was established in 1936[304] and is widely considered to be the highest level of professional baseball in the world outside of the North American Major Leagues. Since the establishment of the Japan Professional Football League in 1992, association football has also gained a wide following.[305] Japan was a venue of the Intercontinental Cup from 1981 to 2004 and co-hosted the 2002 FIFA World Cup with South Korea.[306] Japan has one of the most successful football teams in Asia, winning the Asian Cup four times,[307] and the FIFA Women's World Cup in 2011.[308] Golf is also popular in Japan.[309]
+
+ Japan has significant involvement in motorsport. Japanese automotive manufacturers have been successful in multiple different categories, with titles and victories in series such as Formula One, MotoGP, IndyCar, World Rally Championship, World Endurance Championship, World Touring Car Championship, British Touring Car Championship and the IMSA SportsCar Championship.[310][311][312] Three Japanese drivers have achieved podium finishes in Formula One, and drivers from Japan also have victories at the Indianapolis 500 and the 24 Hours of Le Mans, in addition to success in domestic championships.[313][314] Super GT is the most popular national series in Japan, while Super Formula is the top level domestic open-wheel series.[315] The country also hosts major races such as the Japanese Grand Prix, Japanese motorcycle Grand Prix, Suzuka 10 Hours, 6 Hours of Fuji, FIA WTCC Race of Japan and the Indy Japan 300.
+
+ Japan hosted the Summer Olympics in Tokyo in 1964 and the Winter Olympics in Sapporo in 1972 and Nagano in 1998.[316] Further, the country hosted the official 2006 Basketball World Championship.[317] Tokyo will host the 2020 Summer Olympics, making Tokyo the first Asian city to host the Olympics twice.[318] The country gained the hosting rights for the official Women's Volleyball World Championship on five occasions, more than any other nation.[319] Japan is the most successful Asian Rugby Union country, winning the Asian Five Nations a record six times and winning the newly formed IRB Pacific Nations Cup in 2011. Japan also hosted the 2019 IRB Rugby World Cup.[320]
+
+ Coordinates: 36°N 138°E
+
en/1728.html.txt ADDED
@@ -0,0 +1,19 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+
+
+ A chine (/ˈtʃaɪn/) is a steep-sided coastal gorge where a river flows to the sea through, typically, soft eroding cliffs of sandstone or clays. The word is still in use in central Southern England—notably in East Devon, Dorset, Hampshire and the Isle of Wight—to describe such topographical features. The term 'bunny' is sometimes used to describe a chine in Hampshire. The term chine is also used in some Vancouver suburbs in Canada to describe similar features.
+
+ Chines appear at the outlet of small river valleys when a particular combination of geology, stream volume, and coastal recession rate creates a knickpoint, usually starting at a waterfall at the cliff edge, that initiates rapid erosion and deepening of the stream bed into a gully leading down to the sea.[1]
+
+ All chines are in a state of constant change due to erosion. The Blackgang Chine on the Isle of Wight, for example, has been destroyed by landslides and coastal erosion during the 20th century. As the walls of the chines and cliffs are so unstable and erode continually, particularly those of the south coast of the Isle of Wight, the strata are clearly visible. Chines are, therefore, very important for their fossil records, their archaeology and the unique flora and fauna, such as invertebrates and rare insects, for which they provide shelter.[2]
+
+ In Devon, Sherbrooke Chine is west of Budleigh Salterton,[3] and Seaton Chine is at the western end of the West Walk esplanade, Seaton. In Dorset, west of Bournemouth are found Flaghead Chine, Branksome Chine, Alum Chine, Middle Chine and Durley Chine, and east towards Boscombe, Boscombe Chine and Honeycombe Chine. Bournemouth town centre itself is built in the former Bourne Chine (the Pleasure Gardens being the original valley floor), although urban development since the late 19th century has altered the topography somewhat. Becton Bunny and Chewton Bunny are other examples of chines near Barton on Sea, Hampshire ("Bunny" being the New Forest equivalent to "Chine").[4][5]
+
+ A rare example of the use of 'Chine' in a non-coastal setting is Chineham, a civil parish near Basingstoke.
+
+ There are twenty chines on the Isle of Wight, to which fascinating folklore is attached because of their history with local smuggling, fishing and shipwrecks. The popular tourist attraction of Shanklin Chine is also famous for its involvement in the Second World War, when it was used to carry one of the Operation Pluto pipelines and as a training area for the 40 Royal Marine Commando battalion before the 1942 Dieppe Raid.[6]
+
+ Geologically, the chines in Alum Bay, in Totland (Widdick Chine), and the three in Colwell Bay (Colwell Chine, Brambles Chine and Linstone Chine) are in Tertiary rocks. The remainder on the island's south coast are in Cretaceous rocks.
+
+ An inventory of chines on the Isle of Wight follows, listing chines clockwise from Cowes:[2]
+
+ The Vancouver suburb of Coquitlam has a neighbourhood called Harbour Chines that was built in the 1950s, along with the adjoining neighbourhood of Chineside to the east. Both are situated upon the tops of cliffs that overlook a large number of streams flowing down to the adjoining suburb of Port Moody's Chines Park, from where they flow to Burrard Inlet and onwards to the Georgia Strait of the Salish Sea and the Pacific Ocean.[7]
en/1729.html.txt ADDED
@@ -0,0 +1,184 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+
+
+ The Inca Empire (Quechua: Tawantinsuyu, lit. "The Four Regions"[4]), also known as the Incan Empire and the Inka Empire, was the largest empire in pre-Columbian America.[5] The administrative, political and military center of the empire was located in the city of Cusco. The Inca civilization arose from the Peruvian highlands sometime in the early 13th century. Its last stronghold was conquered by the Spanish in 1572.
+
+ From 1438 to 1533, the Incas incorporated a large portion of western South America, centered on the Andean Mountains, using conquest and peaceful assimilation, among other methods. At its largest, the empire joined Peru, western Ecuador, western and south central Bolivia, northwest Argentina, a large portion of what is today Chile, and the southwesternmost tip of Colombia into a state comparable to the historical empires of Eurasia. Its official language was Quechua.[6] Many local forms of worship persisted in the empire, most of them concerning local sacred Huacas, but the Inca leadership encouraged the sun worship of Inti – their sun god – and imposed its sovereignty above other cults such as that of Pachamama.[7] The Incas considered their king, the Sapa Inca, to be the "son of the sun."[8]
+
+ The Inca Empire was unusual in that it lacked many features associated with civilization in the Old World. Anthropologist Gordon McEwan wrote that:[9]
+
+ The Incas lacked the use of wheeled vehicles. They lacked animals to ride and draft animals that could pull wagons and plows... [They] lacked the knowledge of iron and steel... Above all, they lacked a system of writing... Despite these supposed handicaps, the Incas were still able to construct one of the greatest imperial states in human history.
+
+ Notable features of the Inca Empire include its monumental architecture, especially stonework, extensive road network reaching all corners of the empire, finely-woven textiles, use of knotted strings (quipu) for record keeping and communication, agricultural innovations in a difficult environment, and the organization and management fostered or imposed on its people and their labor.
+
+ The Incan economy has been described in contradictory ways by scholars:[10]
+
+ ... feudal, slave, socialist (here one may choose between socialist paradise or socialist tyranny)
+
+ The Inca Empire functioned largely without money and without markets. Instead, exchange of goods and services was based on reciprocity between individuals and among individuals, groups, and Inca rulers. "Taxes" consisted of a labour obligation of a person to the Empire. The Inca rulers (who theoretically owned all the means of production) reciprocated by granting access to land and goods and providing food and drink in celebratory feasts for their subjects.[11]
+
+ The Inca referred to their empire as Tawantinsuyu,[4] "the four suyu". In Quechua, tawa is four and -ntin is a suffix naming a group, so that a tawantin is a quartet, a group of four things taken together, in this case the four suyu ("regions" or "provinces") whose corners met at the capital. The four suyu were: Chinchaysuyu (north), Antisuyu (east; the Amazon jungle), Qullasuyu (south) and Kuntisuyu (west). The name Tawantinsuyu was, therefore, a descriptive term indicating a union of provinces. The Spanish transliterated the name as Tahuatinsuyo or Tahuatinsuyu.
+
+ The term Inka means "ruler" or "lord" in Quechua and was used to refer to the ruling class or the ruling family.[12] The Incas were a very small percentage of the total population of the empire, probably numbering only 15,000 to 40,000, but ruling a population of around 10 million people.[13] The Spanish adopted the term (transliterated as Inca in Spanish) as an ethnic term referring to all subjects of the empire rather than simply the ruling class. As such, the name Imperio inca ("Inca Empire") referred to the nation that they encountered and subsequently conquered.
+
+ The Inca Empire was the last chapter of thousands of years of Andean civilizations. The Andean civilization was one of five civilizations in the world deemed by scholars to be "pristine", that is, indigenous and not derivative of other civilizations.[14]
+
+ The Inca Empire was preceded by two large-scale empires in the Andes: the Tiwanaku (c. 300–1100 AD), based around Lake Titicaca, and the Wari or Huari (c. 600–1100 AD), centered near the city of Ayacucho. The Wari occupied the Cuzco area for about 400 years. Thus, many of the characteristics of the Inca Empire derived from earlier multi-ethnic and expansive Andean cultures.[15]
+
+ Carl Troll has argued that the development of the Inca state in the central Andes was aided by conditions that allow for the elaboration of the staple food chuño. Chuño, which can be stored for long periods, is made of potato dried at the freezing temperatures that are common at nighttime in the southern Peruvian highlands. Such a link between the Inca state and chuño may be questioned, as potatoes and other crops such as maize can also be dried with only sunlight.[16] Troll also argued that llamas, the Inca's pack animal, can be found in their largest numbers in this very same region.[16] It is worth noting that the maximum extent of the Inca Empire roughly coincided with the greatest distribution of llamas and alpacas in Pre-Hispanic America.[17] The link between the Andean biomes of puna and páramo, pastoralism and the Inca state is a matter of research.[18] As a third point, Troll pointed to irrigation technology as advantageous to Inca state-building.[18] While Troll theorized environmental influences on the Inca Empire, he opposed environmental determinism, arguing that culture lay at the core of the Inca civilization.[18]
+
+ The Inca people were a pastoral tribe in the Cusco area around the 12th century. Incan oral history tells an origin story of three caves. The center cave at Tampu T'uqu (Tambo Tocco) was named Qhapaq T'uqu ("principal niche", also spelled Capac Tocco). The other caves were Maras T'uqu (Maras Tocco) and Sutiq T'uqu (Sutic Tocco).[19] Four brothers and four sisters stepped out of the middle cave. They were: Ayar Manco, Ayar Cachi, Ayar Awqa (Ayar Auca) and Ayar Uchu; and Mama Ocllo, Mama Raua, Mama Huaco and Mama Qura (Mama Cora). Out of the side caves came the people who were to be the ancestors of all the Inca clans.
+
+ Ayar Manco carried a magic staff made of the finest gold. Where this staff landed, the people would live. They traveled for a long time. On the way, Ayar Cachi boasted about his strength and power. His siblings tricked him into returning to the cave to get a sacred llama. When he went into the cave, they trapped him inside to get rid of him.
+
+ Ayar Uchu decided to stay on the top of the cave to look over the Inca people. The minute he proclaimed that, he turned to stone. They built a shrine around the stone and it became a sacred object. Ayar Auca grew tired of all this and decided to travel alone. Only Ayar Manco and his four sisters remained.
+
+ Finally, they reached Cusco. The staff sank into the ground. Before they arrived, Mama Ocllo had already borne Ayar Manco a child, Sinchi Roca. The people who were already living in Cusco fought hard to keep their land, but Mama Huaca was a good fighter. When the enemy attacked, she threw her bolas (several stones tied together that spun through the air when thrown) at a soldier (gualla) and killed him instantly. The other people became afraid and ran away.
+
+ After that, Ayar Manco became known as Manco Cápac, the founder of the Inca. It is said that he and his sisters built the first Inca homes in the valley with their own hands. When the time came, Manco Cápac turned to stone like his brothers before him. His son, Sinchi Roca, became the second emperor of the Inca.[20]
+
+ Under the leadership of Manco Cápac, the Inca formed the small city-state Kingdom of Cusco (Quechua Qusqu', Qosqo). In 1438, they began a far-reaching expansion under the command of Sapa Inca (paramount leader) Pachacuti-Cusi Yupanqui, whose name literally meant "earth-shaker". The name of Pachacuti was given to him after he conquered the Tribe of Chancas (modern Apurímac). During his reign, he and his son Tupac Yupanqui brought much of the modern-day territory of Peru under Inca control.[21]
+
+ Pachacuti reorganized the kingdom of Cusco into the Tahuantinsuyu, which consisted of a central government with the Inca at its head and four provincial governments with strong leaders: Chinchasuyu (NW), Antisuyu (NE), Kuntisuyu (SW) and Qullasuyu (SE).[22] Pachacuti is thought to have built Machu Picchu, either as a family home or summer retreat, although it may have been an agricultural station.[23]
+
+ Pachacuti sent spies to regions he wanted in his empire and they brought to him reports on political organization, military strength and wealth. He then sent messages to their leaders extolling the benefits of joining his empire, offering them presents of luxury goods such as high quality textiles and promising that they would be materially richer as his subjects.
+
+ Most accepted the rule of the Inca as a fait accompli and acquiesced peacefully. Refusal to accept Inca rule resulted in military conquest. Following conquest the local rulers were executed. The rulers' children were brought to Cusco to learn about Inca administration systems, and then returned to rule their native lands. This allowed the Inca to indoctrinate them into the Inca nobility and, with luck, marry their daughters into families at various corners of the empire.
+
+ Traditionally the son of the Inca ruler led the army. Pachacuti's son Túpac Inca Yupanqui began conquests to the north in 1463 and continued them as Inca ruler after Pachacuti's death in 1471. Túpac Inca's most important conquest was the Kingdom of Chimor, the Inca's only serious rival for the Peruvian coast. Túpac Inca's empire then stretched north into modern-day Ecuador and Colombia.
+
+ Túpac Inca's son Huayna Cápac added a small portion of land to the north in modern-day Ecuador. At its height, the Inca Empire included Peru, western and south central Bolivia, southwest Ecuador and a large portion of what is today Chile, north of the Maule River. Traditional historiography claims the advance south halted after the Battle of the Maule where they met determined resistance from the Mapuche.[24] This view is challenged by historian Osvaldo Silva who argues instead that it was the social and political framework of the Mapuche that posed the main difficulty in imposing imperial rule.[24] Silva does accept that the battle of the Maule was a stalemate, but argues the Incas lacked incentives for conquest they had had when fighting more complex societies such as the Chimú Empire.[24] Silva also disputes the date given by traditional historiography for the battle: the late 15th century during the reign of Topa Inca Yupanqui (1471–93).[24] Instead, he places it in 1532 during the Inca Civil War.[24] Nevertheless, Silva agrees on the claim that the bulk of the Incan conquests were made during the late 15th century.[24] At the time of the Incan Civil War an Inca army was, according to Diego de Rosales, subduing a revolt among the Diaguitas of Copiapó and Coquimbo.[24]
+
+ The empire's push into the Amazon Basin near the Chinchipe River was stopped by the Shuar in 1527.[25] The empire extended into corners of Argentina and Colombia. However, most of the southern portion of the Inca empire, the portion denominated as Qullasuyu, was located in the Altiplano.
+
+ The Inca Empire was an amalgamation of languages, cultures and peoples. The components of the empire were not all uniformly loyal, nor were the local cultures all fully integrated. The Inca empire as a whole had an economy based on exchange and taxation of luxury goods and labour. The following quote describes a method of taxation:
+
+ For as is well known to all, not a single village of the highlands or the plains failed to pay the tribute levied on it by those who were in charge of these matters. There were even provinces where, when the natives alleged that they were unable to pay their tribute, the Inca ordered that each inhabitant should be obliged to turn in every four months a large quill full of live lice, which was the Inca's way of teaching and accustoming them to pay tribute.[26]
+
+ Spanish conquistadors led by Francisco Pizarro and his brothers explored south from what is today Panama, reaching Inca territory by 1526.[27] It was clear that they had reached a wealthy land with prospects of great treasure, and after another expedition in 1529 Pizarro traveled to Spain and received royal approval to conquer the region and be its viceroy. This approval was received as detailed in the following quote: "In July 1529 the Queen of Spain signed a charter allowing Pizarro to conquer the Incas. Pizarro was named governor and captain of all conquests in Peru, or New Castile, as the Spanish now called the land."[28]
+
+ When the conquistadors returned to Peru in 1532, a war of succession between the sons of Sapa Inca Huayna Capac, Huáscar and Atahualpa, and unrest among newly conquered territories weakened the empire. Perhaps more importantly, smallpox, influenza, typhus and measles had spread from Central America.
+
+ The forces led by Pizarro consisted of 168 men, one cannon, and 27 horses. The conquistadors carried lances, arquebuses, steel armor and long swords. In contrast, the Inca used weapons made out of wood, stone, copper and bronze, and wore armor made of alpaca fiber, putting them at a significant technological disadvantage: none of their weapons could pierce the Spanish steel armor. In addition, because of the absence of horses in the Americas, the Inca had not developed tactics to fight cavalry. However, the Inca were still effective warriors, able to fight the Mapuche, who would later strategically defeat the Spanish as they expanded further south.
+
+ The first engagement between the Inca and the Spanish was the Battle of Puná, near present-day Guayaquil, Ecuador, on the Pacific Coast; Pizarro then founded the city of Piura in July 1532. Hernando de Soto was sent inland to explore the interior and returned with an invitation to meet the Inca, Atahualpa, who had defeated his brother in the civil war and was resting at Cajamarca with his army of 80,000 troops, who were at the time armed only with hunting tools (knives and lassos for hunting llamas).
+
+ Pizarro and some of his men, most notably a friar named Vincente de Valverde, met with the Inca, who had brought only a small retinue. The Inca offered them ceremonial chicha in a golden cup, which the Spanish rejected. The Spanish interpreter, Friar Vincente, read the "Requerimiento" that demanded that he and his empire accept the rule of King Charles I of Spain and convert to Christianity. Atahualpa dismissed the message and asked them to leave. After this, the Spanish began their attack against the mostly unarmed Inca, captured Atahualpa as hostage, and forced the Inca to collaborate.
+
+ Atahualpa offered the Spaniards enough gold to fill the room he was imprisoned in and twice that amount of silver. The Inca fulfilled this ransom, but Pizarro deceived them, refusing to release the Inca afterwards. During Atahualpa's imprisonment Huáscar was assassinated elsewhere. The Spaniards maintained that this was at Atahualpa's orders; this was used as one of the charges against Atahualpa when the Spaniards finally executed him, in August 1533.[29]
+
+ Although "defeat" often implies an unwanted loss in battle, much of the Inca elite "actually welcomed the Spanish invaders as liberators and willingly settled down with them to share rule of Andean farmers and miners."[30]
+
+ The Spanish installed Atahualpa's brother Manco Inca Yupanqui in power; for some time Manco cooperated with the Spanish while they fought to put down resistance in the north. Meanwhile, an associate of Pizarro, Diego de Almagro, attempted to claim Cusco. Manco tried to use this intra-Spanish feud to his advantage, recapturing Cusco in 1536, but the Spanish retook the city afterwards. Manco Inca then retreated to the mountains of Vilcabamba and established the small Neo-Inca State, where he and his successors ruled for another 36 years, sometimes raiding the Spanish or inciting revolts against them. In 1572 the last Inca stronghold was conquered and the last ruler, Túpac Amaru, Manco's son, was captured and executed.[31] This ended resistance to the Spanish conquest under the political authority of the Inca state.
+
+ After the fall of the Inca Empire many aspects of Inca culture were systematically destroyed, including their sophisticated farming system, known as the vertical archipelago model of agriculture.[32] Spanish colonial officials used the Inca mita corvée labor system for colonial aims, sometimes brutally. One member of each family was forced to work in the gold and silver mines, the foremost of which was the titanic silver mine at Potosí. When a family member died, which would usually happen within a year or two, the family was required to send a replacement.[citation needed]
+
+ The effects of smallpox on the Inca empire were even more devastating. Beginning in Colombia, smallpox spread rapidly before the Spanish invaders first arrived in the empire. The spread was probably aided by the efficient Inca road system. Smallpox was only the first epidemic.[33] Other diseases, including a probable typhus outbreak in 1546, influenza and smallpox together in 1558, smallpox again in 1589, diphtheria in 1614, and measles in 1618, all ravaged the Inca people.
+
+ The number of people inhabiting Tawantinsuyu at its peak is uncertain, with estimates ranging from 4–37 million. Most population estimates are in the range of 6 to 14 million. In spite of the fact that the Inca kept excellent census records using their quipus, knowledge of how to read them was lost as almost all fell into disuse and disintegrated over time or were destroyed by the Spaniards.[34]
+
+ The empire was extremely linguistically diverse. Some of the most important languages were Quechua, Aymara, Puquina and Mochica, respectively mainly spoken in the Central Andes, the Altiplano or (Qullasuyu), the south Peruvian coast (Kuntisuyu), and the area of the north Peruvian coast (Chinchaysuyu) around Chan Chan, today Trujillo. Other languages included Quignam, Jaqaru, Leco, Uru-Chipaya languages, Kunza, Humahuaca, Cacán, Mapudungun, Culle, Chachapoya, Catacao languages, Manta, and Barbacoan languages, as well as numerous Amazonian languages on the frontier regions. The exact linguistic topography of the pre-Columbian and early colonial Andes remains incompletely understood, owing to the extinction of several languages and the loss of historical records.
+
+ In order to manage this diversity, the Inca lords promoted the usage of Quechua, especially the variety of modern-day Lima,[35] as the Qhapaq Runasimi ("great language of the people"), or the official language/lingua franca. Defined by mutual intelligibility, Quechua is actually a family of languages rather than one single language, parallel to the Romance or Slavic languages in Europe. Most communities within the empire, even those resistant to Inca rule, learned to speak a variety of Quechua (forming new regional varieties with distinct phonetics) in order to communicate with the Inca lords and mitma colonists, as well as the wider integrating society, but largely retained their native languages as well. The Incas also had their own ethnic language, referred to as Qhapaq simi ("royal language"), which is thought to have been closely related to or a dialect of Puquina, which appears to have been the official language of the former Tiwanaku Empire, from which the Incas claimed descent, making Qhapaq simi a source of prestige for them. The split between Qhapaq simi and Qhapaq Runasimi also exemplifies the larger split between hanan and hurin (high and low) society in general.
+
+ There are several common misconceptions about the history of Quechua, as it is frequently identified as the "Inca language". Quechua did not originate with the Incas, had been a lingua franca in multiple areas before the Inca expansions, was diverse before the rise of the Incas, and it was not the native or original language of the Incas. In addition, the main official language of the Inca Empire was the coastal Quechua variety, native to modern Lima, not the Cusco dialect. The pre-Inca Chincha Kingdom, with whom the Incas struck an alliance, had made this variety into a local prestige language by their extensive trading activities. The Peruvian coast was also the most populous and economically active region of the Inca Empire, and employing coastal Quechua offered an alternative to neighboring Mochica, the language of the rival state of Chimu. Trade had also been spreading Quechua northwards before the Inca expansions, towards Cajamarca and Ecuador, and was likely the official language of the older Wari Empire. However, the Incas have left an impressive linguistic legacy, in that they introduced Quechua to many areas where it is still widely spoken today, including Ecuador, southern Bolivia, southern Colombia, and parts of the Amazon basin. The Spanish conquerors continued the official usage of Quechua during the early colonial period, and transformed it into a literary language.[36]
+
+ The Incas are not known to have developed a written form of language; however, they visually recorded narratives through paintings on vases and cups (qirus).[37] These paintings are usually accompanied by geometric patterns known as toqapu, which are also found in textiles. Researchers have speculated that toqapu patterns could have served as a form of written communication (e.g., heraldry or glyphs), but this remains unclear.[38] The Incas also kept records by using quipus.
+
+ Because of the high infant mortality rates that plagued the Inca Empire, all newborn infants were given the term ‘wawa’ at birth. Most families did not invest very much in their child until the child reached the age of two or three years old. Once the child reached the age of three, a "coming of age" ceremony occurred, called the rutuchikuy. For the Incas, this ceremony indicated that the child had entered the stage of "ignorance". During this ceremony, the family would invite all relatives to their house for food and dance, and then each member of the family would receive a lock of hair from the child. After each family member had received a lock, the father would shave the child's head. This stage of life was characterized as a stage of "ignorance, inexperience, and lack of reason, a condition that the child would overcome with time."[39] For Incan society, in order to advance from the stage of ignorance to development the child had to learn the roles associated with their gender.
+
+ The next important ritual was to celebrate the maturity of a child. Unlike the coming of age ceremony, the celebration of maturity signified the child's sexual potency. This celebration of puberty was called warachikuy for boys and qikuchikuy for girls. The warachikuy ceremony included dancing, fasting, tasks to display strength, and family ceremonies. The boy would also be given new clothes and taught how to act as an unmarried man. The qikuchikuy signified the onset of menstruation, upon which the girl would go into the forest alone and return only once the bleeding had ended. In the forest she would fast, and, once returned, the girl would be given a new name, adult clothing, and advice. This "folly" stage of life was the time young adults were allowed to have sex without being a parent.[39]
+
+ Between the ages of 20 and 30, people were considered young adults, "ripe for serious thought and labor."[39] Young adults were able to retain their youthful status by living at home and assisting in their home community. Young adults only reached full maturity and independence once they had married.
+
+ At the end of life, the terms for men and women denote loss of sexual vitality and humanity. Specifically, the "decrepitude" stage signifies the loss of mental well-being and further physical decline.
+
+ In the Incan Empire, the age of marriage differed for men and women: men typically married at the age of 20, while women usually got married about four years earlier, at the age of 16.[40] Men who were highly ranked in society could have multiple wives, but those lower in the ranks could only take a single wife.[41] Marriages were typically within classes and resembled a more business-like agreement. Once married, the women were expected to cook, collect food and watch over the children and livestock.[40] Girls and mothers would also work around the house to keep it orderly to please the public inspectors.[42] These duties remained the same even after wives became pregnant, with the added responsibility of praying and making offerings to Kanopa, the god of pregnancy.[40] It was typical for marriages to begin on a trial basis, with both men and women having a say in the longevity of the marriage. If the man felt that it wouldn't work out, or if the woman wanted to return to her parents’ home, the marriage would end. Once the marriage was final, the only way the two could be divorced was if they did not have a child together.[40] Marriage within the Empire was crucial for survival. A family was considered disadvantaged if there was not a married couple at the center, because everyday life centered around the balance of male and female tasks.[43]
+
+ According to some historians, such as Terence N. D'Altroy, male and female roles were considered equal in Inca society. The "indigenous cultures saw the two genders as complementary parts of a whole."[43] In other words, there was not a hierarchical structure in the domestic sphere for the Incas. Within the domestic sphere, women were known as the weavers. Women's everyday tasks included: spinning, watching the children, weaving cloth, cooking, brewing chicha, preparing fields for cultivation, planting seeds, bearing children, harvesting, weeding, hoeing, herding, and carrying water.[44] Men, on the other hand, "weeded, plowed, participated in combat, helped in the harvest, carried firewood, built houses, herded llama and alpaca, and spun and wove when necessary".[44] This relationship between the genders may have been complementary. Unsurprisingly, onlooking Spaniards believed women were treated like slaves, because women did not work in Spanish society to the same extent, and certainly did not work in fields.[45] Women were sometimes allowed to own land and herds, because inheritance was passed down from both the mother's and father's side of the family.[46] Kinship within Inca society followed a parallel line of descent. In other words, women descended from women and men descended from men. Due to the parallel descent, a woman had access to land and other necessities through her mother.[44]
+
+ Inca myths were transmitted orally until early Spanish colonists recorded them; however, some scholars claim that they were recorded on quipus, Andean knotted string records.[47]
+
+ The Inca believed in reincarnation.[48] After death, the passage to the next world was fraught with difficulties. The spirit of the dead, camaquen, would need to follow a long road and during the trip the assistance of a black dog that could see in the dark was required. Most Incas imagined the after world to be like an earthly paradise with flower-covered fields and snow-capped mountains.
+
+ It was important to the Inca that they not die as a result of burning or that the body of the deceased not be incinerated. Burning would cause their vital force to disappear and threaten their passage to the after world. Those who obeyed the Inca moral code – ama suwa, ama llulla, ama quella (do not steal, do not lie, do not be lazy) – "went to live in the Sun's warmth while others spent their eternal days in the cold earth".[49] The Inca nobility practiced cranial deformation.[50] They wrapped tight cloth straps around the heads of newborns to shape their soft skulls into a more conical form, thus distinguishing the nobility from other social classes.
+
+ The Incas made human sacrifices. As many as 4,000 servants, court officials, favorites and concubines were killed upon the death of the Inca Huayna Capac in 1527.[51] The Incas performed child sacrifices around important events, such as the death of the Sapa Inca or during a famine. These sacrifices were known as qhapaq hucha.[52]
+
+ The Incas were polytheists who worshipped many gods, among them the sun god Inti and the earth goddess Pachamama.
+
+ The Inca Empire employed central planning. The Inca Empire traded with outside regions, although they did not operate a substantial internal market economy. While axe-monies were used along the northern coast, presumably by the provincial mindaláe trading class,[53] most households in the empire lived in a traditional economy in which households were required to pay taxes, usually in the form of the mit'a corvée labor, and military obligations,[54] though barter (or trueque) was present in some areas.[55] In return, the state provided security, food in times of hardship through the supply of emergency resources, agricultural projects (e.g. aqueducts and terraces) to increase productivity and occasional feasts. While mit'a was used by the state to obtain labor, individual villages had a pre-inca system of communal work, known as mink'a. This system survives to the modern day, known as mink'a or faena. The economy rested on the material foundations of the vertical archipelago, a system of ecological complementarity in accessing resources[56] and the cultural foundation of ayni, or reciprocal exchange.[57][58]
+
+ The Sapa Inca was conceptualized as divine and was effectively head of the state religion. The Willaq Umu (or Chief Priest) was second to the emperor. Local religious traditions continued and in some cases, such as the Oracle at Pachacamac on the Peruvian coast, were officially venerated. Following Pachacuti, the Sapa Inca claimed descent from Inti, a claim that placed a high value on imperial blood; by the end of the empire, it was common for the ruler to incestuously wed his sister. He was the "son of the sun," and his people the intip churin, or "children of the sun," and both his right to rule and his mission to conquer derived from his holy ancestor. The Sapa Inca also presided over ideologically important festivals, notably the Inti Raymi, or "Sunfest", attended by soldiers, mummified rulers, nobles, clerics and the general population of Cusco, beginning on the June solstice and culminating nine days later with the Inca's ritual breaking of the earth using a foot plow. Moreover, Cusco was considered cosmologically central, loaded as it was with huacas and radiating ceque lines, and was the geographic center of the Four-Quarters; Inca Garcilaso de la Vega called it "the navel of the universe".[59][60][61][62]
+
+ The Inca Empire was a federalist system consisting of a central government with the Inca at its head and four-quarters, or suyu: Chinchay Suyu (NW), Anti Suyu (NE), Kunti Suyu (SW) and Qulla Suyu (SE). The four corners of these quarters met at the center, Cusco. These suyu were likely created around 1460 during the reign of Pachacuti before the empire reached its largest territorial extent. At the time the suyu were established they were roughly of equal size and only later changed their proportions as the empire expanded north and south along the Andes.[63]
+
+ Cusco was likely not organized as a wamani, or province. Rather, it was probably somewhat akin to a modern federal district, like Washington, DC or Mexico City. The city sat at the center of the four suyu and served as the preeminent center of politics and religion. While Cusco was essentially governed by the Sapa Inca, his relatives and the royal panaqa lineages, each suyu was governed by an Apu, a term of esteem used for men of high status and for venerated mountains. Both Cusco as a district and the four suyu as administrative regions were grouped into upper hanan and lower hurin divisions. As the Inca did not have written records, it is impossible to exhaustively list the constituent wamani. However, colonial records allow us to reconstruct a partial list. There were likely more than 86 wamani, with more than 48 in the highlands and more than 38 on the coast.[64][65][66]
+
+ The most populous suyu was Chinchaysuyu, which encompassed the former Chimu empire and much of the northern Andes. At its largest extent, it extended through much of modern Ecuador and into modern Colombia.
+
+ The largest suyu by area was Qullasuyu, named after the Aymara-speaking Qulla people. It encompassed the Bolivian Altiplano and much of the southern Andes, reaching Argentina and as far south as the Maipo or Maule river in Central Chile.[67] Historian José Bengoa singled out Quillota as likely being the foremost Inca settlement in Chile.[68]
+
+ The second smallest suyu, Antisuyu, was northwest of Cusco in the high Andes. Its name is the root of the word "Andes."[69]
+
+ Kuntisuyu was the smallest suyu, located along the southern coast of modern Peru, extending into the highlands towards Cusco.[70]
+
+ The Inca state had no separate judiciary or codified laws. Customs, expectations and traditional local power holders governed behavior. The state had legal force, such as through tokoyrikoq (lit. "he who sees all"), or inspectors. The highest such inspector, typically a blood relative to the Sapa Inca, acted independently of the conventional hierarchy, providing a point of view for the Sapa Inca free of bureaucratic influence.[71]
+
+ The Inca had three moral precepts that governed their behavior: ama suwa, ama llulla, ama quella (do not steal, do not lie, do not be lazy).
+
+ Colonial sources are not entirely clear or in agreement about Inca government structure, such as exact duties and functions of government positions. But the basic structure can be broadly described. The top was the Sapa Inca. Below that may have been the Willaq Umu, literally the "priest who recounts", the High Priest of the Sun.[72] However, beneath the Sapa Inca also sat the Inkap rantin, who was a confidant and assistant to the Sapa Inca, perhaps similar to a Prime Minister.[73] Starting with Topa Inca Yupanqui, a "Council of the Realm" was composed of 16 nobles: 2 from hanan Cusco; 2 from hurin Cusco; 4 from Chinchaysuyu; 2 from Cuntisuyu; 4 from Collasuyu; and 2 from Antisuyu. This weighting of representation balanced the hanan and hurin divisions of the empire, both within Cusco and within the Quarters (hanan suyukuna and hurin suyukuna).[74]
+
+ While provincial bureaucracy and government varied greatly, the basic organization was decimal. Taxpayers – male heads of household of a certain age range – were organized into corvée labor units (often doubling as military units) that formed the state's muscle as part of mit'a service. Each unit of more than 100 tax-payers was headed by a kuraka, while smaller units were headed by a kamayuq, a lower, non-hereditary status. However, while kuraka status was hereditary and typically served for life, the position of a kuraka in the hierarchy was subject to change based on the privileges of superiors in the hierarchy; a pachaka kuraka could be appointed to the position by a waranqa kuraka. Furthermore, one kuraka in each decimal level could serve as the head of one of the nine groups at a lower level, so that a pachaka kuraka might also be a waranqa kuraka, in effect directly responsible for one unit of 100 tax-payers and less directly responsible for nine other such units.[75][76][77]
+
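+ As a worked example of the nesting just described, a waranqa kuraka who also headed one of the constituent pachaka units would answer for
+
+ \[ 100 + 9 \times 100 = 1{,}000 \ \text{taxpayers in total.} \]
+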
+ Architecture was the most important of the Incan arts, with textiles reflecting architectural motifs. The most notable example is Machu Picchu, which was constructed by Inca engineers. The principal Inca structures were made of stone blocks that fit together so well that a knife could not be fitted through the stonework. These constructions have survived for centuries, with no mortar used to sustain them.
+
+ This process was first used on a large scale by the Pucara (c. 300 BC–AD 300) peoples to the south, in the Lake Titicaca region, and later in the city of Tiwanaku (c. AD 400–1100) in present-day Bolivia. The rocks were sculpted to fit together exactly by repeatedly lowering one rock onto another and carving away any sections of the lower rock where the dust was compressed. The tight fit and the concavity of the lower rocks made them extraordinarily stable, despite the ongoing challenge of earthquakes and volcanic activity.
+
+ Physical measures used by the Inca were based on human body parts. Units included fingers, the distance from thumb to forefinger, palms, cubits and wingspans. The most basic distance unit was thatkiy or thatki, or one pace. The next largest unit was reported by Cobo to be the topo or tupu, measuring 6,000 thatkiys, or about 7.7 km (4.8 mi); careful study has shown that a range of 4.0 to 6.3 km (2.5 to 3.9 mi) is likely. Next was the wamani, composed of 30 topos (roughly 232 km or 144 mi). To measure area, 25 by 50 wingspans were used, reckoned in topos (roughly 3,280 km2 or 1,270 sq mi). It seems likely that distance was often interpreted as one day's walk; the distance between tambo way-stations varies widely in terms of distance, but far less in terms of time to walk that distance.[80][81]
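+
+ As a rough illustration of these units, here is a minimal sketch in Python, assuming Cobo's figure of 6,000 thatkiys to a topo of about 7.7 km (the constant names below are illustrative, not attested):
+
+ # Illustrative conversions for the Inca distance units described above.
+ # The topo length is uncertain (careful study suggests 4.0-6.3 km), so
+ # Cobo's ~7.7 km figure is used here only as a working assumption.
+ THATKIYS_PER_TOPO = 6_000
+ TOPO_KM = 7.7
+ TOPOS_PER_WAMANI = 30
+
+ thatkiy_m = TOPO_KM * 1000 / THATKIYS_PER_TOPO  # ~1.3 m per pace
+ wamani_km = TOPOS_PER_WAMANI * TOPO_KM          # ~231 km, matching the ~232 km above
+ print(f"1 thatkiy ~ {thatkiy_m:.2f} m; 1 wamani ~ {wamani_km:.0f} km")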
+
+ Inca calendars were strongly tied to astronomy. Inca astronomers understood equinoxes, solstices and zenith passages, along with the Venus cycle. They could not, however, predict eclipses. The Inca calendar was essentially lunisolar, as two calendars were maintained in parallel, one solar and one lunar. As 12 lunar months fall 11 days short of a full 365-day solar year, those in charge of the calendar had to adjust every winter solstice. Each lunar month was marked with festivals and rituals.[82] Apparently, the days of the week were not named and days were not grouped into weeks. Similarly, months were not grouped into seasons. Time during a day was not measured in hours or minutes, but in terms of how far the sun had travelled or in how long it had taken to perform a task.[83]
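+
+ The roughly 11-day shortfall follows from simple arithmetic, taking the modern mean synodic month of about 29.53 days as an assumed value:
+
+ \[ 365.25 - 12 \times 29.53 \approx 365.25 - 354.4 \approx 11\ \text{days per year}. \]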
+
+ The sophistication of Inca administration, calendrics and engineering required facility with numbers. Numerical information was stored in the knots of quipu strings, allowing for compact storage of large numbers.[84][85] These numbers were stored in base-10 digits, the same base used by the Quechua language[86] and in administrative and military units.[76] These numbers, stored in quipu, could be calculated on yupanas, grids with squares of positionally varying mathematical values, perhaps functioning as an abacus.[87] Calculation was facilitated by moving piles of tokens, seeds or pebbles between compartments of the yupana. It is likely that Inca mathematics at least allowed division of integers into integers or fractions and multiplication of integers and fractions.[88]
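+
+ A minimal sketch of this positional, base-10 reading, assuming each cord's knot clusters have already been counted (the function name is hypothetical; real khipu conventions of knot type and spacing are richer than this):
+
+ def quipu_to_int(knot_clusters):
+     """Read one quipu cord as a base-10 number.
+
+     knot_clusters lists the knot count of each positional cluster,
+     most significant first; a cluster with no knots encodes zero.
+     """
+     value = 0
+     for digit in knot_clusters:
+         value = value * 10 + digit
+     return value
+
+ assert quipu_to_int([4, 0, 5]) == 405  # three clusters read as 405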
+
+ According to mid-17th-century Jesuit chronicler Bernabé Cobo,[89] the Inca designated officials to perform accounting-related tasks. These officials were called quipo camayos. Study of khipu sample VA 42527 (Museum für Völkerkunde, Berlin)[90] revealed that the numbers arranged in calendrically significant patterns were used for agricultural purposes in the "farm account books" kept by the khipukamayuq (accountant or warehouse keeper) to facilitate the closing of accounting books.[91]
+
+ Ceramics were painted using the polychrome technique portraying numerous motifs including animals, birds, waves, felines (popular in the Chavin culture) and geometric patterns found in the Nazca style of ceramics. In a culture without a written language, ceramics portrayed the basic scenes of everyday life, including the smelting of metals, relationships and scenes of tribal warfare. The most distinctive Inca ceramic objects are the Cusco bottles or "aryballos".[92] Many of these pieces are on display in Lima in the Larco Archaeological Museum and the National Museum of Archaeology, Anthropology and History.
+
+ Almost all of the gold and silver work of the Incan empire was melted down by the conquistadors, and shipped back to Spain.[93]
+
+ The Inca recorded information on assemblages of knotted strings, known as Quipu, although they can no longer be decoded. Originally it was thought that Quipu were used only as mnemonic devices or to record numerical data. Quipus are also believed to record history and literature.[94]
+
+ The Inca made many discoveries in medicine.[95] They performed successful skull surgery (trepanation), cutting holes in the skull to alleviate fluid buildup and inflammation caused by head wounds. Survival rates for these operations were 80–90%, compared to about 30% before Inca times.[96]
+
+ The Incas revered the coca plant as sacred/magical. Its leaves were used in moderate amounts to lessen hunger and pain during work, but were mostly used for religious and health purposes.[97] The Spaniards took advantage of the effects of chewing coca leaves.[97] The Chasqui, messengers who ran throughout the empire to deliver messages, chewed coca leaves for extra energy. Coca leaves were also used as an anaesthetic during surgeries.
+
+ The Inca army was the most powerful at that time, because any ordinary villager or farmer could be recruited as a soldier as part of the mit'a system of mandatory public service. Every able-bodied male Inca of fighting age had to take part in war in some capacity at least once and to prepare for warfare again when needed. By the time the empire reached its largest size, every section of the empire contributed to setting up an army for war.
+
+ The Incas had no iron or steel and their weapons were not much more effective than those of their opponents so they often defeated opponents by sheer force of numbers, or else by persuading them to surrender beforehand by offering generous terms.[98] Inca weaponry included "hardwood spears launched using throwers, arrows, javelins, slings, the bolas, clubs, and maces with star-shaped heads made of copper or bronze."[98][99] Rolling rocks downhill onto the enemy was a common strategy, taking advantage of the hilly terrain.[100] Fighting was sometimes accompanied by drums and trumpets made of wood, shell or bone.[101][102] Armor included:[98][103]
+
+ Roads allowed quick movement (on foot) for the Inca army and shelters called tambo and storage silos called qullqas were built one day's travelling distance from each other, so that an army on campaign could always be fed and rested. This can be seen in names of ruins such as Ollantay Tambo, or My Lord's Storehouse. These were set up so the Inca and his entourage would always have supplies (and possibly shelter) ready as they traveled.
+
+ Chronicles and references from the 16th and 17th centuries support the idea of a banner. However, it represented the Inca (emperor), not the empire.
+
+ Francisco López de Jerez[106] wrote in 1534:
+
+ ... todos venían repartidos en sus escuadras con sus banderas y capitanes que los mandan, con tanto concierto como turcos. (... all of them came distributed into squads, with their flags and captains commanding them, as well-ordered as Turks.)
+
+ Chronicler Bernabé Cobo wrote:
+
+ The royal standard or banner was a small square flag, ten or twelve spans around, made of cotton or wool cloth, placed on the end of a long staff, stretched and stiff such that it did not wave in the air and on it each king painted his arms and emblems, for each one chose different ones, though the sign of the Incas was the rainbow and two parallel snakes along the width with the tassel as a crown, which each king used to add for a badge or blazon those preferred, like a lion, an eagle and other figures.
+ (... el guión o estandarte real era una banderilla cuadrada y pequeña, de diez o doce palmos de ruedo, hecha de lienzo de algodón o de lana, iba puesta en el remate de una asta larga, tendida y tiesa, sin que ondease al aire, y en ella pintaba cada rey sus armas y divisas, porque cada uno las escogía diferentes, aunque las generales de los Incas eran el arco celeste y dos culebras tendidas a lo largo paralelas con la borda que le servía de corona, a las cuales solía añadir por divisa y blasón cada rey las que le parecía, como un león, un águila y otras figuras.) – Bernabé Cobo, Historia del Nuevo Mundo (1653)
+
+ Guaman Poma's 1615 book, El primer nueva corónica y buen gobierno, shows numerous line drawings of Inca flags.[107] In his 1847 book A History of the Conquest of Peru, "William H. Prescott ... says that in the Inca army each company had its particular banner and that the imperial standard, high above all, displayed the glittering device of the rainbow, the armorial ensign of the Incas."[108] A 1917 world flags book says the Inca "heir-apparent ... was entitled to display the royal standard of the rainbow in his military campaigns."[109]
+
+ In modern times the rainbow flag has been wrongly associated with the Tawantinsuyu and displayed as a symbol of Inca heritage by some groups in Peru and Bolivia. The city of Cusco also flies the Rainbow Flag, but as an official flag of the city. The Peruvian president Alejandro Toledo (2001–2006) flew the Rainbow Flag in Lima's presidential palace. However, according to Peruvian historiography, the Inca Empire never had a flag. Peruvian historian María Rostworowski said, "I bet my life, the Inca never had that flag, it never existed, no chronicler mentioned it".[110] Also, to the Peruvian newspaper El Comercio, the flag dates to the first decades of the 20th century,[111] and even the Congress of the Republic of Peru has determined that flag is a fake by citing the conclusion of National Academy of Peruvian History:
177
+
178
+ "The official use of the wrongly called 'Tawantinsuyu flag' is a mistake. In the Pre-Hispanic Andean World there did not exist the concept of a flag, it did not belong to their historic context".[111]
179
+ National Academy of Peruvian History
180
+
181
+ Incas were able to adapt to their high-altitude living through successful acclimatization, which is characterized by increasing oxygen supply to the blood tissues. For the native Inca living in the Andean highlands, this was achieved through the development of a larger lung capacity, and an increase in red blood cell counts, hemoglobin concentration, and capillary beds.[112]
182
+
183
+ Compared to other humans, the Incas had slower heart rates, almost one-third larger lung capacity, about 2 L (4 pints) more blood volume and double the amount of hemoglobin, which transfers oxygen from the lungs to the rest of the body. While the Conquistadors may have been slightly taller, the Inca had the advantage of coping with the extraordinary altitude.
184
+
en/173.html.txt ADDED
@@ -0,0 +1,104 @@
Coordinates: 14h 29m 42.9487s, −62° 40′ 46.141″

Proxima Centauri is a small, low-mass star located 4.244 light-years (1.301 pc) away from the Sun in the southern constellation of Centaurus. Its Latin name means the "nearest [star] of Centaurus". It was discovered in 1915 by Robert Innes and is the nearest-known star to the Sun. With a quiescent apparent magnitude of 11.13, it is too faint to be seen with the naked eye. Proxima Centauri is a member of the Alpha Centauri system, identified as component Alpha Centauri C, and lies 2.18° to the southwest of the Alpha Centauri AB pair. It is currently 12,950 AU (1.94 trillion km) from AB, which it orbits with a period of about 550,000 years.

Proxima Centauri is a red dwarf star with a mass about an eighth of the Sun's mass (M☉) and an average density about 33 times that of the Sun. Because of Proxima Centauri's proximity to Earth, its angular diameter can be measured directly; its actual diameter is about one-seventh the diameter of the Sun. Although it has a very low average luminosity, Proxima is a flare star that undergoes random dramatic increases in brightness because of magnetic activity. The star's magnetic field is created by convection throughout the stellar body, and the resulting flare activity generates a total X-ray emission similar to that produced by the Sun. The mixing of the fuel at Proxima Centauri's core through convection, and its relatively low energy-production rate, mean that it will remain a main-sequence star for another four trillion years.

In 2016, astronomers announced the discovery of Proxima Centauri b, a planet orbiting the star at a distance of roughly 0.05 AU (7.5 million km) with an orbital period of approximately 11.2 Earth days. According to updated measurements by the ESPRESSO spectrograph, its estimated mass is at least 1.17 times that of the Earth.[19] The equilibrium temperature of Proxima b is estimated to be within the range where water could exist as liquid on its surface, thus placing it within the habitable zone of Proxima Centauri; however, because Proxima Centauri is a red dwarf and a flare star, whether it could support life is disputed.

In 1915, the Scottish astronomer Robert Innes, Director of the Union Observatory in Johannesburg, South Africa, discovered a star that had the same proper motion as Alpha Centauri.[20][21][22][23] He suggested that it be named Proxima Centauri[24] (actually Proxima Centaurus).[25] In 1917, at the Royal Observatory at the Cape of Good Hope, the Dutch astronomer Joan Voûte measured the star's trigonometric parallax at 0.755″±0.028″ and determined that Proxima Centauri was approximately the same distance from the Sun as Alpha Centauri. It was also found to be the lowest-luminosity star known at the time.[26] An equally accurate parallax determination of Proxima Centauri was made by American astronomer Harold L. Alden in 1928, who confirmed Innes's view that it is closer, with a parallax of 0.783″±0.005″.[21][24]

In 1951, American astronomer Harlow Shapley announced that Proxima Centauri is a flare star. Examination of past photographic records showed that the star displayed a measurable increase in magnitude on about 8% of the images, making it the most active flare star then known.[27][28] The proximity of the star allows for detailed observation of its flare activity. In 1980, the Einstein Observatory produced a detailed X-ray energy curve of a stellar flare on Proxima Centauri. Further observations of flare activity were made with the EXOSAT and ROSAT satellites, and the X-ray emissions of smaller, solar-like flares were observed by the Japanese ASCA satellite in 1995.[29] Proxima Centauri has since been the subject of study by most X-ray observatories, including XMM-Newton and Chandra.[30]

In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars.[31] The WGSN approved the name Proxima Centauri for this star on August 21, 2016, and it is now included in the List of IAU approved Star Names.[32]

Because of Proxima Centauri's southern declination, it can only be viewed from latitudes south of 27° N.[nb 3] Red dwarfs such as Proxima Centauri are too faint to be seen with the naked eye. Even from Alpha Centauri A or B, Proxima would only be seen as a fifth-magnitude star.[33][34] It has an apparent visual magnitude of 11, so a telescope with an aperture of at least 8 cm (3.1 in) is needed to observe it, even under ideal viewing conditions (clear, dark skies with Proxima Centauri well above the horizon).[35]

In 2018, a superflare was observed from Proxima Centauri, the strongest flare ever seen from the star. Its optical brightness increased by a factor of 68, to approximately magnitude 6.8. It is estimated that similar flares occur around five times every year, but they are of such short duration, just a few minutes, that they have never been observed before.[36]
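
As a rough consistency check, a brightness change by a factor F corresponds to a magnitude change of 2.5 log10(F); a minimal sketch in Python, assuming the quiescent magnitude of 11.13 as the baseline (the exact figure depends on the photometric band used):

    import math

    factor = 68                          # reported brightness increase
    delta_m = 2.5 * math.log10(factor)   # flux ratio -> magnitude difference
    print(round(delta_m, 2))             # 4.58 magnitudes
    print(round(11.13 - delta_m, 2))     # 6.55, near the quoted magnitude of roughly 6.8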

On April 22 and 23, 2020, the New Horizons spacecraft took images of two of the nearest stars, Proxima Centauri and Wolf 359. When combined with Earth-based images, these will yield a record-setting parallax measurement.[37]

Proxima Centauri is a red dwarf: it belongs to the main sequence on the Hertzsprung–Russell diagram and is of spectral class M5.5, which places it at the low-mass end of M-type stars.[17] Its absolute visual magnitude, or its visual magnitude as viewed from a distance of 10 parsecs (33 ly), is 15.5.[38] Its total luminosity over all wavelengths is 0.17% that of the Sun,[11] although when observed in the wavelengths of visible light the eye is most sensitive to, it is only 0.0056% as luminous as the Sun.[39] More than 85% of its radiated power is at infrared wavelengths.[40] It has a regular activity cycle of starspots.[41]
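
The quoted absolute magnitude follows from the standard distance-modulus relation, M = m − 5 log10(d/10 pc); a minimal sketch:

    import math

    m = 11.13      # quiescent apparent visual magnitude
    d_pc = 1.301   # distance in parsecs
    M = m - 5 * math.log10(d_pc / 10)
    print(round(M, 2))   # 15.56, matching the quoted absolute magnitude of 15.5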

In 2002, optical interferometry with the Very Large Telescope (VLTI) found that the angular diameter of Proxima Centauri is 1.02±0.08 mas. Because its distance is known, the actual diameter of Proxima Centauri can be calculated to be about 1/7 that of the Sun, or 1.5 times that of Jupiter. The star's mass, estimated from stellar theory, is 12.2% M☉, or 129 Jupiter masses (MJ).[42] The mass has been calculated directly, although with less precision, from observations of microlensing events to be 0.150 (+0.062, −0.051) M☉.[43]
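
A sketch of that calculation under the small-angle approximation (diameter = angular diameter × distance), using standard values for the parsec and the solar and Jovian diameters, which are not given in this article:

    theta_rad = 1.02e-3 / 206265      # 1.02 mas -> arcseconds -> radians
    d_km = 1.301 * 3.0857e13          # 1.301 pc in kilometres
    D_km = theta_rad * d_km
    print(round(D_km))                # about 198,500 km
    print(round(D_km / 1.391e6, 2))   # 0.14 of the Sun's diameter, about 1/7
    print(round(D_km / 1.398e5, 2))   # 1.42 Jupiter diameters, near the quoted 1.5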

Lower-mass main-sequence stars have higher mean densities than higher-mass ones,[44] and Proxima Centauri is no exception: it has a mean density of 47.1×10³ kg/m³ (47.1 g/cm³), compared with the Sun's mean density of 1.411×10³ kg/m³ (1.411 g/cm³).[nb 4]

A 1998 study of photometric variations indicates that Proxima Centauri rotates once every 83.5 days.[45] A subsequent time-series analysis of chromospheric indicators in 2002 suggests a longer rotation period of 116.6±0.7 days.[46] This was subsequently ruled out in favor of a rotation period of 82.6±0.1 days.[16]

Because of its low mass, the interior of the star is completely convective,[47] causing energy to be transferred to the exterior by the physical movement of plasma rather than through radiative processes. This convection means that the helium ash left over from the thermonuclear fusion of hydrogen does not accumulate at the core, but is instead circulated throughout the star. Unlike the Sun, which will only burn through about 10% of its total hydrogen supply before leaving the main sequence, Proxima Centauri will consume nearly all of its fuel before the fusion of hydrogen comes to an end after about four trillion years.[48]

Convection is associated with the generation and persistence of a magnetic field. The magnetic energy from this field is released at the surface through stellar flares that briefly increase the overall luminosity of the star. These flares can grow as large as the star and reach temperatures measured as high as 27 million K,[30] hot enough to radiate X-rays.[49] Proxima Centauri's quiescent X-ray luminosity, approximately (4–16) × 10²⁶ erg/s ((4–16) × 10¹⁹ W), is roughly equal to that of the much larger Sun. The peak X-ray luminosity of the largest flares can reach 10²⁸ erg/s (10²¹ W).[30]

Proxima Centauri's chromosphere is active, and its spectrum displays a strong emission line of singly ionized magnesium at a wavelength of 280 nm.[50] About 88% of the surface of Proxima Centauri may be active, a percentage that is much higher than that of the Sun even at the peak of the solar cycle. Even during quiescent periods with few or no flares, this activity increases the corona temperature of Proxima Centauri to 3.5 million K, compared to the 2 million K of the Sun's corona,[51] and its total X-ray emission is comparable to the Sun's.[52] Proxima Centauri's overall activity level is considered low compared to other red dwarfs,[52] which is consistent with the star's estimated age of 4.85 × 10⁹ years,[17] since the activity level of a red dwarf is expected to steadily wane over billions of years as its stellar rotation rate decreases.[53] The activity level also appears to vary with a period of roughly 442 days, which is shorter than the solar cycle of 11 years.[54]

Proxima Centauri has a relatively weak stellar wind, no more than 20% of the mass loss rate of the solar wind. Because the star is much smaller than the Sun, the mass loss per unit surface area from Proxima Centauri may be eight times that from the solar surface.[55]

A red dwarf with the mass of Proxima Centauri will remain on the main sequence for about four trillion years. As the proportion of helium increases because of hydrogen fusion, the star will become smaller and hotter, gradually transforming into a so-called "blue dwarf". Near the end of this period it will become significantly more luminous, reaching 2.5% of the Sun's luminosity (L☉) and warming up any orbiting bodies for a period of several billion years. When the hydrogen fuel is exhausted, Proxima Centauri will then evolve into a white dwarf (without passing through the red giant phase) and steadily lose any remaining heat energy.[48]

Based on a parallax of 768.5004±0.2030 mas, published in 2018 in Gaia Data Release 2, Proxima Centauri is about 4.244 light-years (1.301 pc; 268,400 AU) from the Sun.[9] Previously published parallaxes include: 768.13±1.04 mas in 2014 by the Research Consortium On Nearby Stars;[56] 772.33±2.42 mas in the original Hipparcos Catalogue in 1997;[57] 771.64±2.60 mas in the Hipparcos New Reduction in 2007;[3] and 768.77±0.37 mas using the Hubble Space Telescope's Fine Guidance Sensors in 1999.[10] From Earth's vantage point, Proxima is separated from Alpha Centauri by 2.18 degrees,[58] or four times the angular diameter of the full Moon.[59] Proxima also has a relatively large proper motion, moving 3.85 arcseconds per year across the sky.[60] It has a radial velocity toward the Sun of 22.2 km/s.[8]
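
The distance figures follow directly from inverting the parallax (d in parsecs = 1 / p in arcseconds); a minimal check:

    p = 0.7685004                     # Gaia DR2 parallax in arcseconds
    d_pc = 1 / p
    print(round(d_pc, 3))             # 1.301 pc
    print(round(d_pc * 3.26156, 3))   # 4.244 light-years
    print(round(206265 / p))          # 268,399, i.e. about 268,400 AU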

Among the known stars, Proxima Centauri has been the closest star to the Sun for about 32,000 years and will be so for about another 25,000 years, after which Alpha Centauri A and Alpha Centauri B will alternate approximately every 79.91 years as the closest star to the Sun. In 2001, J. García-Sánchez et al. predicted that Proxima will make its closest approach to the Sun in approximately 26,700 years, coming within 3.11 ly (0.95 pc).[61] A 2010 study by V. V. Bobylev predicted a closest approach distance of 2.90 ly (0.89 pc) in about 27,400 years,[62] followed by a 2014 study by C. A. L. Bailer-Jones predicting a perihelion approach of 3.07 ly (0.94 pc) in roughly 26,710 years.[63] Proxima Centauri is orbiting through the Milky Way at a distance from the Galactic Centre that varies from 27 to 31 kly (8.3 to 9.5 kpc), with an orbital eccentricity of 0.07.[64]

Ever since its discovery, Proxima has been suspected to be a true companion of the Alpha Centauri binary star system. Data from the Hipparcos satellite, combined with ground-based observations, were consistent with the hypothesis that the three stars are a bound system. For this reason, Proxima is sometimes referred to as Alpha Centauri C. Kervella et al. (2017) used high-precision radial velocity measurements to determine with a high degree of confidence that Proxima and Alpha Centauri are gravitationally bound.[8] Proxima's orbital period around the Alpha Centauri AB barycenter is 547,000 (+6,600, −4,000) years, with an eccentricity of 0.5±0.08; it approaches Alpha Centauri to 4,300 (+1,100, −900) AU at periastron and retreats to 13,000 (+300, −100) AU at apastron.[8] At present, Proxima is 12,947±260 AU (1.94±0.04 trillion km) from the Alpha Centauri AB barycenter, nearly at the farthest point in its orbit.[8]
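
The quoted period is roughly what Kepler's third law gives for such an orbit. A sketch, assuming approximate component masses of 1.1, 0.9 and 0.12 M☉ for Alpha Centauri A, B and Proxima (values not stated in this article):

    import math

    a_au = (4300 + 13000) / 2        # semi-major axis from peri- and apastron
    m_total = 1.1 + 0.9 + 0.12       # assumed total mass in solar masses
    p_yr = math.sqrt(a_au**3 / m_total)   # P^2 = a^3 / M in solar units
    print(round(p_yr, -3))           # ~553,000 yr, close to the quoted 547,000 yr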

Such a triple system can form naturally through a low-mass star being dynamically captured by a more massive binary of 1.5–2 M☉ within their embedded star cluster before the cluster disperses.[65] However, more accurate measurements of the radial velocity are needed to confirm this hypothesis.[66] If Proxima was bound to the Alpha Centauri system during its formation, the stars are likely to share the same elemental composition. The gravitational influence of Proxima might also have stirred up the Alpha Centauri protoplanetary disks. This would have increased the delivery of volatiles such as water to the dry inner regions, so possibly enriching any terrestrial planets in the system with this material.[66] Alternatively, Proxima may have been captured at a later date during an encounter, resulting in a highly eccentric orbit that was then stabilized by the galactic tide and additional stellar encounters. Such a scenario may mean that Proxima's planetary companion has had a much lower chance for orbital disruption by Alpha Centauri.[15]

Six single stars, two binary star systems, and a triple star share a common motion through space with Proxima Centauri and the Alpha Centauri system. The space velocities of these stars are all within 10 km/s of Alpha Centauri's peculiar motion. Thus, they may form a moving group of stars, which would indicate a common point of origin,[67] such as in a star cluster.

Proxima Centauri b, or Alpha Centauri Cb, is a planet orbiting the star at a distance of roughly 0.05 AU (7.5 million km) with an orbital period of approximately 11.2 Earth days. Its estimated mass is at least 1.3 times that of the Earth. Moreover, the equilibrium temperature of Proxima b is estimated to be within the range where water could exist as liquid on its surface, thus placing it within the habitable zone of Proxima Centauri.[68][75][76]

The first indications of the exoplanet Proxima Centauri b were found in 2013 by Mikko Tuomi of the University of Hertfordshire from archival observation data.[77][78] To confirm the possible discovery, a team of astronomers launched the Pale Red Dot[nb 6] project in January 2016.[79] On August 24, 2016, a team of 31 scientists from around the world,[80] led by Guillem Anglada-Escudé of Queen Mary University of London, confirmed the existence of Proxima Centauri b[81] through a peer-reviewed article published in Nature.[68][82] The measurements were performed using two spectrographs: HARPS on the ESO 3.6 m Telescope at La Silla Observatory and UVES on the 8 m Very Large Telescope at Paranal Observatory.[68] Several attempts to detect a transit of this planet across the face of Proxima Centauri have been made. A transit-like signal appearing on September 8, 2016 was tentatively identified using the Bright Star Survey Telescope at the Zhongshan Station in Antarctica.[83]

Prior to this discovery, multiple measurements of the star's radial velocity constrained the maximum mass that a detectable companion to Proxima Centauri could possess.[10][84] The activity level of the star adds noise to the radial velocity measurements, complicating detection of a companion using this method.[85] In 1998, an examination of Proxima Centauri using the Faint Object Spectrograph on board the Hubble Space Telescope appeared to show evidence of a companion orbiting at a distance of about 0.5 AU.[86] A subsequent search using the Wide Field Planetary Camera 2 failed to locate any companions.[87] Astrometric measurements at the Cerro Tololo Inter-American Observatory appear to rule out a Jupiter-sized planet with an orbital period of 2–12 years.[88] A second signal in the range of 60 to 500 days was also detected, but its nature is still unclear due to stellar activity.[68]

Proxima c is a super-Earth about seven times as massive as Earth, orbiting at roughly 1.5 astronomical units (220,000,000 km) every 1,900 days (5.2 yr).[89] Due to its large distance from Proxima Centauri, the exoplanet is unlikely to be habitable, with a low equilibrium temperature of around 39 K.[90] The planet was first reported by Italian astrophysicist Mario Damasso and his colleagues in April 2019.[90][89] Damasso's team had noticed minor movements of Proxima Centauri in the radial velocity data from the ESO's HARPS instrument, indicating a possible additional planet orbiting Proxima Centauri.[90] In 2020, the planet's existence was reportedly confirmed by precovery images from Hubble taken around 1995, which would make it the closest planet ever to be directly imaged.[91] Based on differences in its brightness over the course of its orbit, the planet has been speculated to have a ring system.

In 2017, a team of astronomers using the Atacama Large Millimeter/submillimeter Array reported detecting a belt of cold dust orbiting Proxima Centauri at a range of 1–4 AU from the star. This dust has a temperature of around 40 K and a total estimated mass of 1% of the planet Earth. They also tentatively detected two additional features: a cold belt with a temperature of 10 K orbiting around 30 AU, and a compact emission source about 1.2 arcseconds from the star. There was also a hint of an additional warm dust belt at a distance of 0.4 AU from the star.[92] However, upon further analysis, these emissions were determined to be most likely the result of a large flare emitted by the star in March 2017; the presence of dust is not needed to model the observations.[93][94]

In 2019, a team of astronomers revisited the ESPRESSO data on Proxima b to refine its mass. While doing so, the team found another radial-velocity signal with a periodicity of 5.15 days. They estimated that, if it were caused by a planetary companion, the companion's mass would be at least 0.29 Earth masses.[19]

Prior to the discovery of Proxima Centauri b, the TV documentary Alien Worlds hypothesized that a life-sustaining planet could exist in orbit around Proxima Centauri or other red dwarfs. Such a planet would lie within the habitable zone of Proxima Centauri, about 0.023–0.054 AU (3.4–8.1 million km) from the star, and would have an orbital period of 3.6–14 days.[95] A planet orbiting within this zone may experience tidal locking to the star. If the orbital eccentricity of this hypothetical planet is low, Proxima Centauri would move little in the planet's sky, and most of the surface would experience either day or night perpetually. The presence of an atmosphere could serve to redistribute the energy from the star-lit side to the far side of the planet.[96]
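
Those orbital periods are consistent with Kepler's third law in solar units (P in years squared = a in AU cubed, divided by the stellar mass in M☉), using the mass quoted earlier in this article; a minimal sketch:

    import math

    m_star = 0.122                       # Proxima's mass in solar masses
    for a_au in (0.023, 0.054):          # habitable-zone edges in AU
        p_days = math.sqrt(a_au**3 / m_star) * 365.25
        print(round(p_days, 1))          # 3.6 and 13.1 days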

Proxima Centauri's flare outbursts could erode the atmosphere of any planet in its habitable zone, but the documentary's scientists thought that this obstacle could be overcome. Gibor Basri of the University of California, Berkeley, mentioned that "no one [has] found any showstoppers to habitability". For example, one concern was that the torrents of charged particles from the star's flares could strip the atmosphere off any nearby planet. If the planet had a strong magnetic field, the field would deflect the particles from the atmosphere; even the slow rotation of a tidally locked planet that spins once for every time it orbits its star would be enough to generate a magnetic field, as long as part of the planet's interior remained molten.[97]

Other scientists, especially proponents of the rare-Earth hypothesis,[98] disagree that red dwarfs can sustain life. Any exoplanet in this star's habitable zone would likely be tidally locked, resulting in a relatively weak planetary magnetic moment, leading to strong atmospheric erosion by coronal mass ejections from Proxima Centauri.[99]

Because of the star's proximity to Earth, Proxima Centauri has been proposed as a flyby destination for interstellar travel.[100] Proxima currently moves toward Earth at a rate of 22.2 km/s.[8] After 26,700 years, when it will come within 3.11 light-years, it will begin to move farther away.[61]

If non-nuclear, conventional propulsion technologies are used, the flight of a spacecraft to a planet orbiting Proxima Centauri would probably require thousands of years.[101] For example, Voyager 1, which is now travelling at 17 km/s (38,000 mph)[102] relative to the Sun, would reach Proxima in 73,775 years, were the spacecraft travelling in the direction of that star. A slow-moving probe would have only several tens of thousands of years to catch Proxima Centauri near its closest approach, and could end up watching it recede into the distance.[103]
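
A back-of-the-envelope version of the Voyager figure, assuming the star stays put (the quoted 73,775 years corresponds to a slightly higher speed of about 17.3 km/s):

    d_km = 4.244 * 9.4607e12     # 4.244 light-years in kilometres
    v_km_s = 17.0                # Voyager 1's speed relative to the Sun
    seconds_per_year = 3.156e7
    print(round(d_km / v_km_s / seconds_per_year, -3))   # ~75,000 years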

Nuclear pulse propulsion might enable such interstellar travel with a trip timescale of a century, inspiring several studies such as Project Orion, Project Daedalus, and Project Longshot.[103]

Project Breakthrough Starshot aims to reach the Alpha Centauri system within the first half of the 21st century, with microprobes travelling at 20% of the speed of light propelled by around 100 gigawatts of Earth-based lasers.[104] The probes would perform a fly-by of Proxima Centauri to take photos and collect data on its planets' atmospheric compositions. It would take 4.22 years for the information collected to be sent back to Earth.[105]

From Proxima Centauri, the Sun would appear as a bright 0.4-magnitude star in the constellation Cassiopeia, similar in brightness to Achernar as seen from Earth.[nb 7]
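
This follows from the distance modulus applied in reverse, assuming the Sun's absolute visual magnitude of 4.83 (a standard value not given in this article):

    import math

    M_sun = 4.83                 # assumed absolute visual magnitude of the Sun
    d_pc = 1.301                 # Proxima's distance, from which the Sun is viewed
    m = M_sun + 5 * math.log10(d_pc / 10)
    print(round(m, 2))           # 0.40, matching the quoted 0.4-magnitude star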

The mean density can be estimated from the mass and radius:

ρ ≈ (M/M⊙) / (R/R⊙)³ · ρ⊙,

where ρ⊙ is the average solar density.
See:
en/1730.html.txt ADDED
The diff for this file is too large to render. See raw diff
 
en/1731.html.txt ADDED
The diff for this file is too large to render. See raw diff
 
en/1732.html.txt ADDED
The diff for this file is too large to render. See raw diff
 
en/1733.html.txt ADDED
@@ -0,0 +1,197 @@


The Empire State Building is a 102-story[c] Art Deco skyscraper in Midtown Manhattan in New York City. It was designed by Shreve, Lamb & Harmon and built from 1930 to 1931. Its name is derived from "Empire State", the nickname of the state of New York. The building has a roof height of 1,250 feet (380 m) and stands a total of 1,454 feet (443.2 m) tall, including its antenna. The Empire State Building stood as the world's tallest building until the construction of the World Trade Center in 1970; following the latter's collapse in the September 11, 2001 attacks, the Empire State Building was again the city's tallest skyscraper until 2012. As of 2020, the building is the seventh-tallest building in New York City, the ninth-tallest completed skyscraper in the United States, the 48th-tallest in the world, and the fifth-tallest freestanding structure in the Americas.

The site of the Empire State Building, located in Midtown South on the west side of Fifth Avenue between West 33rd and 34th Streets, was originally part of an early 18th-century farm. It was developed in 1893 as the site of the Waldorf–Astoria Hotel. In 1929, Empire State Inc. acquired the site and devised plans for a skyscraper there. The design for the Empire State Building was changed fifteen times to ensure it would be the world's tallest building. Construction started on March 17, 1930, and the building opened thirteen and a half months afterward, on May 1, 1931. Despite favorable publicity related to the building's construction, because of the Great Depression and World War II its owners did not make a profit until the early 1950s.

The building's Art Deco architecture, height, and observation decks have made it a popular attraction. Around 4 million tourists from around the world annually visit the building's 86th- and 102nd-floor observatories; an additional indoor observatory on the 80th floor opened in 2019. The Empire State Building is an American cultural icon: it has been featured in more than 250 TV shows and movies since the film King Kong was released in 1933. A symbol of New York City, the tower has been named one of the Seven Wonders of the Modern World by the American Society of Civil Engineers. It was ranked first on the American Institute of Architects' List of America's Favorite Architecture in 2007. Additionally, the Empire State Building and its ground-floor interior were designated city landmarks by the New York City Landmarks Preservation Commission in 1980, and were added to the National Register of Historic Places as a National Historic Landmark in 1986.

The Empire State Building is located on the west side of Fifth Avenue in Manhattan, between 33rd Street to the south and 34th Street to the north.[14] Tenants enter the building through the Art Deco lobby located at 350 Fifth Avenue. Visitors to the observatories use an entrance at 20 West 34th Street; prior to August 2018, visitors entered through the Fifth Avenue lobby.[1] Although physically located in South Midtown,[15] a mixed residential and commercial area,[16] the building is so large that it was assigned its own ZIP Code, 10118;[17][18] as of 2012, it is one of 43 buildings in New York City that have their own ZIP codes.[19][b]

The areas surrounding the Empire State Building are home to other major points of interest, including Macy's at Herald Square on Sixth Avenue and 34th Street,[22] Koreatown on 32nd Street between Fifth and Sixth Avenues,[22][23] Penn Station and Madison Square Garden on Seventh Avenue between 32nd and 34th Streets,[22] and the Flower District on 28th Street between Sixth and Seventh Avenues.[24] The nearest New York City Subway stations are 34th Street–Penn Station at Seventh Avenue, two blocks west; 34th Street–Herald Square, one block west; and 33rd Street at Park Avenue, two blocks east.[d] There is also a PATH station at 33rd Street and Sixth Avenue.[25]

To the east of the Empire State Building is Murray Hill,[25] a neighborhood with a mix of residential, commercial, and entertainment activity.[26] One block east of the Empire State Building, on Madison Avenue at 34th Street, is the New York Public Library's Science, Industry and Business Library, which is located on the same block as the City University of New York's Graduate Center. Bryant Park and the New York Public Library Main Branch are six blocks north of the Empire State Building, on the block bounded by Fifth Avenue, Sixth Avenue, 40th Street, and 42nd Street. Grand Central Terminal is two blocks east of the library's Main Branch, at Park Avenue and 42nd Street.[25]

The tract was originally part of Mary and John Murray's farm on Murray Hill.[27][28] The earliest recorded major action on the site was during the American Revolutionary War, when General George Washington's troops retreated from the British following the Battle of Kip's Bay.[28] In 1799, John Thompson (or Thomson; accounts vary)[28] bought a 20-acre (8 ha) tract of land roughly bounded by present-day Madison Avenue, 36th Street, Sixth Avenue, and 33rd Street, immediately north of the Caspar Samler farm. He paid a total of 482 British pounds for the parcel, equivalent to roughly $2,400 at the time, or about £39,203 ($47,565) today.[29][e] Thompson was said to have sold the farm to Charles Lawton for $10,000 (equal to $275,287 today) on September 24, 1825.[31] The full details of this sale are unclear, as parts of the deed that certified the sale were later lost.[28] In 1826, John Jacob Astor of the prominent Astor family bought the land from Lawton for $20,500.[32][33][f] The Astors also purchased a parcel from the Murrays.[32] John Jacob's son William Backhouse Astor Sr. bought a half interest in the properties for $20,500 on July 28, 1827,[30][31][36] securing a tract of land on Fifth Avenue from 32nd to 35th Streets.[35]

On March 13, 1893, John Jacob Astor Sr.'s grandson William Waldorf Astor opened the Waldorf Hotel on the site[37][38] with the help of hotelier George Boldt.[39] On November 1, 1897, Waldorf's cousin, John Jacob Astor IV, opened the 16-story Astoria Hotel on an adjacent site.[40][37][41] Together, the combined hotels had a total of 1,300 bedrooms, making them the largest hotel in the world at the time.[42] After Boldt died in early 1918, the hotel lease was purchased by Thomas Coleman du Pont.[43][44] By the 1920s, the hotel was becoming dated, and the elegant social life of New York had moved much farther north than 34th Street.[45] The Astor family decided to build a replacement hotel further uptown,[37] and sold the hotel to Bethlehem Engineering Corporation in 1928 for $14–16 million.[45][30] The hotel on the site of today's Empire State Building closed on May 3, 1929.[40]

Bethlehem Engineering Corporation originally intended to build a 25-story office building on the Waldorf–Astoria site. The company's president, Floyd De L. Brown, paid $100,000 of the $1 million down payment required to start construction on the tower, with the promise that the difference would be paid later.[37] Brown borrowed $900,000 from a bank, but then defaulted on the loan.[46][47]

The land was then resold to Empire State Inc., a group of wealthy investors that included Louis G. Kaufman, Ellis P. Earle, John J. Raskob, Coleman du Pont, and Pierre S. du Pont.[46][47][48] The name came from the state nickname for New York.[49] Alfred E. Smith, a former Governor of New York and U.S. presidential candidate whose 1928 campaign had been managed by Raskob,[50] was appointed head of the company.[46][47][30] The group also purchased nearby land so they would have the 2 acres (1 ha) needed for the tower's base, with the combined plot measuring 425 feet (130 m) wide by 200 feet (61 m) long.[51] The Empire State Inc. consortium was announced to the public in August 1929.[52][53][51]

Empire State Inc. contracted William F. Lamb, of the architectural firm Shreve, Lamb and Harmon, to create the building's design.[2][54] Lamb produced the building drawings in just two weeks, using the firm's earlier designs for the Reynolds Building in Winston-Salem, North Carolina as a basis.[49] Concurrently, Lamb's partner Richmond Shreve created "bug diagrams" of the project requirements.[55] The 1916 Zoning Act forced Lamb to design a structure that incorporated setbacks, resulting in the lower floors being larger than the upper floors.[g] Consequently, the tower was designed from the top down,[56] giving it a "pencil"-like shape.[57]

The original plan for the building was 50 stories,[58] but this was later increased to 60 and then 80 stories.[51] Height restrictions were placed on nearby buildings[51] to ensure that the top fifty floors of the planned 80-story, 1,000-foot-tall (300 m) building[59][60] would have unobstructed views of the city.[51] The New York Times lauded the site's proximity to mass transit, with the Brooklyn–Manhattan Transit's 34th Street station and the Hudson and Manhattan Railroad's 33rd Street terminal one block away, as well as Penn Station two blocks away and Grand Central Terminal nine blocks away at its closest. It also praised the 3,000,000 square feet (280,000 m²) of proposed floor space near "one of the busiest sections in the world".[51]

While plans for the Empire State Building were being finalized, an intense competition in New York for the title of "world's tallest building" was underway. 40 Wall Street (then the Bank of Manhattan Building) and the Chrysler Building in Manhattan both vied for this distinction and were already under construction when work began on the Empire State Building.[59] The "Race into the Sky", as popular media called it at the time, was representative of the country's optimism in the 1920s, fueled by the building boom in major cities.[61] The 40 Wall Street tower was revised in April 1929 from 840 feet (260 m) to 925 feet (282 m), making it the world's tallest.[62] The Chrysler Building added its 185-foot (56 m) steel tip to its roof in October 1929, thus bringing it to a height of 1,046 feet (319 m) and greatly exceeding the height of 40 Wall Street.[59] The Chrysler Building's developer, Walter Chrysler, realized that his tower's height would exceed the Empire State Building's as well, having instructed his architect, William Van Alen, to change the Chrysler's original roof from a stubby Romanesque dome to a narrow steel spire.[62] Raskob, wishing the Empire State Building to be the world's tallest, reviewed the plans and had five floors added as well as a spire; however, the new floors would need to be set back because of projected wind pressure on the extension.[63] On November 18, 1929, Smith acquired a lot at 27–31 West 33rd Street, adding 75 feet (23 m) to the width of the proposed office building's site.[64][65] Two days later, Smith announced the updated plans for the skyscraper. The plans included an observation deck on the 86th-floor roof at a height of 1,050 feet (320 m), higher than the Chrysler's 71st-floor observation deck.[63][66]

The 1,050-foot Empire State Building would only be 4 feet (1.2 m) taller than the Chrysler Building,[63][67][68] and Raskob was afraid that Chrysler might try to "pull a trick like hiding a rod in the spire and then sticking it up at the last minute."[58][69][67] The plans were revised one last time in December 1929, to include a 16-story, 200-foot (61 m) metal "crown" and an additional 222-foot (68 m) mooring mast intended for dirigibles. The roof height was now 1,250 feet (380 m), making it the tallest building in the world by far, even without the antenna.[70][58][71] The addition of the dirigible station meant that another floor, the now-enclosed 86th floor, would have to be built below the crown;[71] however, unlike the Chrysler's spire, the Empire State's mast would serve a practical purpose.[69] The final plan was announced to the public on January 8, 1930, just before the start of construction. The New York Times reported that the spire was facing some "technical problems", but they were "no greater than might be expected under such a novel plan."[72] By this time the blueprints for the building had gone through up to fifteen versions before they were approved.[58][73][74] Lamb described the other specifications he was given for the final, approved plan:

The program was short enough—a fixed budget, no space more than 28 feet from window to corridor, as many stories of such space as possible, an exterior of limestone, and completion date of [May 1], 1931, which meant a year and six months from the beginning of sketches.[75][58]

The contractors were Starrett Brothers and Eken, the firm of Paul and William A. Starrett and Andrew J. Eken,[76] who would later construct other New York City buildings such as Stuyvesant Town, Starrett City and Trump Tower.[77] The project was financed primarily by Raskob and Pierre du Pont,[78] while James Farley's General Builders Supply Corporation supplied the building materials.[2] John W. Bowser was the construction superintendent of the project,[79] and the structural engineer of the building was Homer G. Balcom.[54][80] The tight completion schedule necessitated the commencement of construction even though the design had yet to be finalized.[81]

Demolition of the old Waldorf–Astoria began on October 1, 1929.[82] Stripping the building down was an arduous process, as the hotel had been constructed using more rigid material than earlier buildings had been. Furthermore, the old hotel's granite, wood chips, and "'precious' metals such as lead, brass, and zinc" were not in high demand, resulting in issues with disposal.[83] Most of the wood was deposited into a woodpile on nearby 30th Street or was burned in a swamp elsewhere. Much of the other material that made up the old hotel, including the granite and bronze, was dumped into the Atlantic Ocean near Sandy Hook, New Jersey.[84][85]

By the time the hotel's demolition started, Raskob had secured the required funding for the construction of the building.[86] The plan was to start construction later that year but, on October 24, the New York Stock Exchange suffered a sudden crash marking the beginning of the decade-long Great Depression. Despite the economic downturn, Raskob refused to cancel the project because of the progress that had been made up to that point.[52] Neither Raskob, who had ceased speculation in the stock market the previous year, nor Smith, who had no stock investments, suffered financially in the crash.[86] However, most of the investors were affected, and as a result, in December 1929, Empire State Inc. obtained a $27.5 million loan from Metropolitan Life Insurance Company so construction could begin.[87] Although the stock market crash left no demand for new office space, Raskob and Smith nonetheless started construction,[88] as canceling the project would have resulted in greater losses for the investors.[52]

A structural steel contract was awarded on January 12, 1930,[89] with excavation of the site beginning ten days later, on January 22,[90] before the old hotel had been completely demolished.[91] Two twelve-hour shifts, consisting of 300 men each, worked continuously to dig the 55-foot (17 m) foundation.[90] Small pier holes were sunk into the ground to house the concrete footings that would support the steelwork.[92] Excavation was nearly complete by early March,[93] and construction on the building itself started on March 17,[94][2] with the builders placing the first steel columns on the completed footings before the rest of the footings had been finished.[95] Around this time, Lamb held a press conference on the building plans. He described the reflective steel panels parallel to the windows, the large-block Indiana Limestone facade that was slightly more expensive than smaller bricks, and the tower's lines and rise.[70] Four colossal columns, intended for installation in the center of the building site, were delivered; they would support a combined 10,000,000 pounds (4,500,000 kg) when the building was finished.[96]

The structural steel was pre-ordered and pre-fabricated in anticipation of a revision to the city's building code that would allow the Empire State Building's structural steel to carry 18,000 pounds per square inch (120,000 kPa), up from 16,000 pounds per square inch (110,000 kPa), thus reducing the amount of steel needed for the building. Although the 18,000-psi regulation had been safely enacted in other cities, Mayor Jimmy Walker did not sign the new codes into law until March 26, 1930, just before construction was due to commence.[94][97] The first steel framework was installed on April 1, 1930.[98] From there, construction proceeded at a rapid pace; during one stretch of 10 working days, the builders erected fourteen floors.[99][2] This was made possible through precise coordination of the building's planning, as well as the mass production of common materials such as windows and spandrels.[100] On one occasion, when a supplier could not provide timely delivery of dark Hauteville marble, Starrett switched to using Rose Famosa marble from a German quarry that was purchased specifically to provide the project with sufficient marble.[92]

The scale of the project was massive, with trucks carrying "16,000 partition tiles, 5,000 bags of cement, 450 cubic yards [340 m³] of sand and 300 bags of lime" arriving at the construction site every day.[101] There were also cafes and concession stands on five of the incomplete floors, so workers did not have to descend to the ground level to eat lunch.[3][102] Temporary water taps were also built, so workers did not waste time buying bottled water at the ground level.[3][103] Additionally, carts running on a small railway system transported materials from the basement storage[3] to elevators that brought the carts to the desired floors, where they would then be distributed throughout that level using another set of tracks.[101][104][102] The 57,480 short tons (51,320 long tons) of steel ordered for the project was the largest single order of steel at the time, comprising more steel than was ordered for the Chrysler Building and 40 Wall Street combined.[105][106] According to historian John Tauranac, building materials were sourced from numerous, and distant, sources, with "limestone from Indiana, steel girders from Pittsburgh, cement and mortar from upper New York State, marble from Italy, France, and England, wood from northern and Pacific Coast forests, [and] hardware from New England."[99] The facade, too, used a variety of materials, most prominently Indiana limestone but also Swedish black granite, terracotta, and brick.[107]

By June 20, the skyscraper's supporting steel structure had risen to the 26th floor, and by July 27, half of the steel structure had been completed.[101] Starrett Bros. and Eken endeavored to build one floor a day in order to speed up construction, a goal that they almost reached with their pace of 4½ stories per week;[108][109] prior to this, the fastest pace of construction for a building of similar height had been 3½ stories per week.[108] While construction progressed, the final designs for the floors were being made from the ground up (as opposed to the general design, which had been from the roof down). Some of the levels were still undergoing final approval, with several orders placed within an hour of a plan being finalized.[108] On September 10, as steelwork was nearing completion, Smith laid the building's cornerstone during a ceremony attended by thousands. The stone contained a box with contemporary artifacts, including the previous day's New York Times, a U.S. currency set containing all denominations of notes and coins minted in 1930, a history of the site and building, and photographs of the people involved in construction.[110][111] The steel structure was topped out at 1,048 feet (319 m) on September 19, twelve days ahead of schedule and 23 weeks after the start of construction.[112] Workers raised a flag atop the 86th floor to signify this milestone.[108][113]

Afterward, work on the building's interior and crowning mast commenced.[113] The mooring mast topped out on November 21, two months after the steelwork had been completed.[111][114] Meanwhile, work on the walls and interior progressed at a quick pace, with exterior walls built up to the 75th floor by the time steelwork had reached the 95th floor.[115] The majority of the facade was already finished by the middle of November.[3] Because of the building's height, it was deemed infeasible to have many elevators or large elevator cabins, so the builders contracted with the Otis Elevator Company to make 66 cars that could travel at 1,200 feet per minute (366 m/min), which represented the largest-ever elevator order at the time.[116]

In addition to the time constraint builders faced, there were also space limitations, because construction materials had to be delivered quickly and trucks needed to drop off these materials without congesting traffic. This was solved by creating a temporary driveway for the trucks between 33rd and 34th Streets, and then storing the materials in the building's first floor and basements. Concrete mixers, brick hoppers, and stone hoists inside the building ensured that materials would be able to ascend quickly and without endangering or inconveniencing the public.[115] At one point, over 200 trucks made material deliveries at the building site every day.[3] A series of relay and erection derricks, placed on platforms erected near the building, lifted the steel from the trucks below and installed the beams at the appropriate locations.[117] The Empire State Building was structurally completed on April 11, 1931, twelve days ahead of schedule and 410 days after construction commenced.[3] Al Smith shot the final rivet, which was made of solid gold.[118]

The project involved more than 3,500 workers at its peak,[2] including 3,439 on a single day, August 14, 1930.[119] Many of the workers were Irish and Italian immigrants,[120] with a sizable minority of Mohawk ironworkers from the Kahnawake reserve near Montreal.[120][121][122] According to official accounts, five workers died during the construction,[123][124] although the New York Daily News gave reports of 14 deaths[3] and a headline in the socialist magazine The New Masses spread unfounded rumors of up to 42 deaths.[125][124] The Empire State Building cost $40,948,900 to build, including demolition of the Waldorf–Astoria (equivalent to $554,644,100 in 2018). This was lower than the $60 million budgeted for construction.[5]

Lewis Hine captured many photographs of the construction, documenting not only the work itself but also providing insight into the daily life of workers in that era.[90][126][127] Hine's images were used extensively by the media to publish daily press releases.[128] According to the writer Jim Rasenberger, Hine "climbed out onto the steel with the ironworkers and dangled from a derrick cable hundreds of feet above the city to capture, as no one ever had before (or has since), the dizzy work of building skyscrapers". In Rasenberger's words, Hine turned what might have been an assignment of "corporate flak" into "exhilarating art".[129] These images were later organized into their own collection.[130] Onlookers were enraptured by the sheer height at which the steelworkers operated. New York magazine wrote of the steelworkers: "Like little spiders they toiled, spinning a fabric of steel against the sky".[117]

The Empire State Building officially opened on May 1, 1931, forty-five days ahead of its projected opening date, and eighteen months after the start of construction.[2][131][132] The opening was marked with an event featuring United States President Herbert Hoover, who turned on the building's lights with a ceremonial button push from Washington, D.C.[133][134][4] Over 350 guests attended the opening ceremony and a luncheon that followed on the 86th floor, including Jimmy Walker, Governor Franklin D. Roosevelt, and Al Smith.[4] An account from that day stated that the view from the luncheon was obscured by fog, with other landmarks such as the Statue of Liberty being "lost in the mist" enveloping New York City.[135] The building opened to the public the next day.[135][79] Advertisements for the building's observatories were placed in local newspapers, while nearby hotels also capitalized on the event by releasing advertisements that lauded their proximity to the newly opened tower.[136]

According to The New York Times, builders and real estate speculators predicted that the 1,250-foot-tall (380 m) Empire State Building would be the world's tallest building "for many years", thus ending the great New York City skyscraper rivalry. At the time, most engineers agreed that it would be difficult to build a building taller than 1,200 feet (370 m), even with the hardy Manhattan bedrock as a foundation.[137] Technically, it was believed possible to build a tower of up to 2,000 feet (610 m), but it was deemed uneconomical to do so, especially during the Great Depression.[104][138] As the tallest building in the world at that time, and the first one to exceed 100 floors, the Empire State Building became an icon of the city and, ultimately, of the nation.[139]

In 1932, the Fifth Avenue Association gave the tower its 1931 "gold medal" for architectural excellence, signifying that the Empire State had been the best-designed building on Fifth Avenue to open in 1931.[140] A year later, on March 2, 1933, the movie King Kong was released. The movie, which depicted a large stop-motion ape named Kong climbing the Empire State Building, made the still-new building into a cinematic icon.[141][142]

The Empire State Building's opening coincided with the Great Depression in the United States, and as a result much of its office space was vacant from its opening.[130] In the first year, only 23% of the available space was rented,[143][144] compared to the early 1920s, when the average building would have 52% occupancy upon opening and be 90% rented within five years.[145] The lack of renters led New Yorkers to deride the building as the "Empty State Building".[130][146]

Jack Brod, one of the building's longest-resident tenants,[147][148] co-established the Empire Diamond Corporation with his father in the building in mid-1931[149] and rented space in the building until he died in 2008.[149] Brod recalled that there were only about 20 tenants at the time of opening, including him,[148] and that Al Smith was the only real tenant in the space above his seventh-floor offices.[147] Generally, during the early 1930s, it was rare for more than a single office space to be rented in the building, despite Smith's and Raskob's aggressive marketing efforts in the newspapers and to anyone they knew.[150] The building's lights were continuously left on, even in the unrented spaces, to give the impression of occupancy. This was exacerbated by competition from Rockefeller Center[143] as well as from buildings on 42nd Street, which, when combined with the Empire State Building, resulted in a surplus of office space in a slow market during the 1930s.[151]

Aggressive marketing efforts served to reinforce the Empire State Building's status as the world's tallest.[152] The observatory was advertised in local newspapers as well as on railroad tickets.[153] The building became a popular tourist attraction, with one million people each paying one dollar to ride elevators to the observation decks in 1931.[154] In its first year of operation, the observation deck made approximately $2 million in revenue, as much as its owners made in rent that year.[143][130] By 1936, the observation deck was crowded on a daily basis, with food and drink available for purchase at the top,[155] and by 1944 the tower had received its five millionth visitor.[156] In 1931, NBC took up tenancy, leasing space on the 85th floor for radio broadcasts.[157][158] From the outset the building was in debt, losing $1 million per year by 1935. Real estate developer Seymour Durst recalled that the building was so underused in 1936 that there was no elevator service above the 45th floor, as the building above the 41st floor was empty except for the NBC offices and the Raskob/Du Pont offices on the 81st floor.[159]

Per the original plans, the Empire State Building's spire was intended to be an airship docking station. Raskob and Smith had proposed dirigible ticketing offices and passenger waiting rooms on the 86th floor, while the airships themselves would be tied to the spire at the equivalent of the building's 106th floor.[160][161] An elevator would ferry passengers from the 86th to the 101st floor[h] after they had checked in on the 86th floor,[163] after which passengers would have climbed steep ladders to board the airship.[160] The idea, however, was impractical and dangerous due to powerful updrafts caused by the building itself,[164] the wind currents across Manhattan,[160] and the spires of nearby skyscrapers.[165] Furthermore, even if the airship were to successfully navigate all these obstacles, its crew would have to jettison some ballast by releasing water onto the streets below in order to maintain stability, and then tie the craft's nose to the spire with no mooring lines securing the tail end of the craft.[13][160][165] On September 15, 1931, in the first and only instance of an airship using the building's mast, a small commercial airship circled the building 25 times in 45-mile-per-hour (72 km/h) winds.[166] The airship then attempted to dock at the mast, but its ballast spilled and the craft was rocked by unpredictable eddies.[167][168] The near-disaster scuttled plans to turn the building's spire into an airship terminal, although one blimp did manage to make a single newspaper delivery afterward.[160]

On July 28, 1945, a B-25 Mitchell bomber crashed into the north side of the Empire State Building, between the 79th and 80th floors.[169] One engine completely penetrated the building and landed in a neighboring block, while the other engine and part of the landing gear plummeted down an elevator shaft. Fourteen people were killed in the incident,[170][74] but the building escaped severe damage and was reopened two days later.[170][171]

The Empire State Building only started becoming profitable in the 1950s, when it was finally able to break even for the first time.[130][172] At the time, mass transit options in the building's vicinity were limited compared to the present day. Despite this challenge, the Empire State Building began to attract renters due to its reputation.[173] A 222-foot (68 m) radio antenna was erected on top of the tower starting in 1950,[174] allowing the area's television stations to broadcast from the building.[175]

However, despite the turnaround in the building's fortunes, Raskob put the tower up for sale in 1951,[176] with a minimum asking price of $50 million.[177] The property was purchased by business partners Roger L. Stevens, Henry Crown, Alfred R. Glancy and Ben Tobin.[178][179][180] The sale was brokered by the Charles F. Noyes Company, a prominent real estate firm in upper Manhattan,[177] for $51 million, the highest price paid for a single structure at the time.[181] By this time, the Empire State had been fully leased for several years, with a waiting list of parties looking to lease space in the building, according to the Cortland Standard.[182] That same year, six news companies formed a partnership to pay a combined annual fee of $600,000 to use the tower's antenna,[177] which was completed in 1953.[175] Crown bought out his partners' ownership stakes in 1954, becoming the sole owner.[183] The following year, the American Society of Civil Engineers named the building one of the "Seven Modern Civil Engineering Wonders".[184][185]

In 1961, Lawrence A. Wien signed a contract to purchase the Empire State Building for $65 million, with Harry B. Helmsley acting as a partner in the building's operating lease.[178][186] This became the new highest price for a single structure.[186] Over 3,000 people paid $10,000 for one share each in a company called Empire State Building Associates. The company in turn subleased the building to another company headed by Helmsley and Wien, raising $33 million of the funds needed to pay the purchase price.[178][186] In a separate transaction,[186] the land underneath the building was sold to Prudential Insurance for $29 million.[178][187] Helmsley, Wien, and Peter Malkin quickly started a program of minor improvement projects, including the first-ever full-building facade refurbishment and window-washing in 1962,[188][189] the installation of new floodlights on the 72nd floor in 1964,[190][191] and the replacement of the manually operated elevators with automatic units in 1966.[192] The little-used western end of the second floor served as storage space until 1964, at which point it received escalators to the first floor as part of its conversion into a highly sought-after retail area.[193][194]
76
+
77
+ In 1961, the same year that Helmsley, Wien, and Malkin had purchased the Empire State Building, the Port Authority of New York and New Jersey formally backed plans for a new World Trade Center in Lower Manhattan.[197] The plan originally included 66-story twin towers with column-free open spaces. The Empire State's owners and real estate speculators were worried that the twin towers' 7.6 million square feet (710,000 m2) of office space would create a glut of rentable space in Manhattan and lure lessees away from the Empire State Building.[198] A revision to the World Trade Center's plan brought each of the twin towers to 1,370 feet (420 m), or 110 stories, taller than the Empire State.[199] Opponents of the new project included prominent real-estate developer Robert Tishman, as well as Wien's Committee for a Reasonable World Trade Center.[199] In response to Wien's opposition, Port Authority executive director Austin J. Tobin said that Wien was opposing the project only because it would overshadow his Empire State Building as the world's tallest building.[200]
+
+ The World Trade Center's twin towers started construction in 1966.[201] The following year, the Ostankino Tower succeeded the Empire State Building as the tallest freestanding structure in the world.[202] In 1970, the Empire State surrendered its position as the world's tallest building[203] when the World Trade Center's still-under-construction North Tower surpassed it on October 19;[195][196] the North Tower was topped out on December 23, 1970.[196][204]
+
+ In December 1975, the observation deck was opened on the 110th floor of the Twin Towers, significantly higher than the 86th floor observatory on the Empire State Building.[74] The latter was also losing revenue during this period, particularly as a number of broadcast stations had moved to the World Trade Center in 1971, although the Port Authority continued to pay the broadcasting leases for the Empire State until 1984.[205]
+
+ By 1980, there were nearly two million annual visitors,[154] although a building official had previously estimated between 1.5 million and 1.75 million annual visitors.[206] The building received its own ZIP code in May 1980 as part of a rollout of 63 new postal codes in Manhattan; at the time, the tower's tenants collectively received 35,000 pieces of mail daily.[21] The Empire State Building celebrated its 50th anniversary on May 1, 1981, with a much-publicized but poorly received laser light show,[207] as well as an "Empire State Building Week" that ran through May 8.[208][209]
+
+ The New York City Landmarks Preservation Commission voted to make the lobby a city landmark on May 19, 1981, citing the historic nature of the first and second floors, as well as "the fixtures and interior components" of the upper floors.[210] The building became a National Historic Landmark in 1986,[10] in close alignment with the New York City Landmarks report.[211] The Empire State Building was added to the National Register of Historic Places the following year due to its architectural significance.[212]
+
+ Capital improvements were made to the Empire State Building during the early to mid-1990s at a cost of $55 million.[213] These improvements entailed replacing alarm systems, elevators, windows, and air conditioning; making the observation deck compliant with the Americans with Disabilities Act of 1990 (ADA); and refurbishing the limestone facade.[214] The observatory renovation was added after disability rights groups and the United States Department of Justice filed a lawsuit against the building in 1992, in what was the first lawsuit filed by an organization under the new law.[215] A settlement was reached in 1994, in which the Empire State Building Associates agreed to add ADA-compliant elements, such as new elevators, ramps, and automatic doors, during its ongoing renovation.[216]
+
+ Prudential sold the land under the building in 1991 for $42 million to a buyer representing hotelier Hideki Yokoi, who was imprisoned at the time in connection with a deadly fire at the Hotel New Japan in Tokyo.[217] In 1994, Donald Trump entered into a joint-venture agreement with Yokoi, with a shared goal of breaking the Empire State Building's lease on the land so that, if successful, the two could reap the potential profits of merging the ownership of the building with that of the land beneath it.[218] Having secured a half-ownership of the land, Trump devised plans to take ownership of the building itself so he could renovate it, even though Helmsley and Malkin had already started their refurbishment project.[213] He sued Empire State Building Associates in February 1995, claiming that the latter had caused the building to become a "high-rise slum"[178] and a "second-rate, rodent-infested" office tower.[219] Trump had intended to have Empire State Building Associates evicted for violating the terms of their lease,[219] but was denied.[220] This led to Helmsley's companies countersuing Trump in May.[221] A series of lawsuits and countersuits followed that lasted several years,[178] arising partly from Trump's desire to obtain the building's master lease by taking it from Empire State Building Associates.[214] Upon Harry Helmsley's death in 1997, the Malkins sued Helmsley's widow, Leona Helmsley, for control of the building.[222]
+
+ Following the destruction of the World Trade Center during the September 11 attacks in 2001, the Empire State Building again became the tallest building in New York City, but was only the second-tallest building in the Americas after the Sears Tower (now Willis Tower) in Chicago.[202][223][224] As a result of the attacks, transmissions from nearly all of the city's commercial television and FM radio stations were again broadcast from the Empire State Building.[225] The attacks also led to an increase in security due to persistent terror threats against New York City landmarks.[226]
+
+ In 2002, Trump and Yokoi sold their land claim to the Empire State Building Associates, now headed by Malkin, in a $57.5 million sale.[178][227] This action merged the building's title and lease for the first time in half a century.[227] Despite the lingering threat posed by the 9/11 attacks, the Empire State Building remained popular with 3.5 million visitors to the observatories in 2004, compared to about 2.8 million in 2003.[228]
+
+ Leona Helmsley handed over day-to-day operations of the building in 2006 to Peter Malkin's company, even though her stake in the building remained, through her estate, until the post-consolidation IPO in October 2013.[178][229] In 2008, the building was temporarily "stolen" by the New York Daily News to show how easy it was to transfer the deed on a property, since city clerks were not required to validate the submitted information, as well as to demonstrate how fraudulent deeds could be used to obtain large mortgages before the perpetrators disappeared with the money. The paperwork submitted to the city included the names of Fay Wray, the famous star of King Kong, and Willie Sutton, a notorious New York bank robber. The newspaper then transferred the deed back to the legitimate owners, who at that time were Empire State Land Associates.[230]
+
+ Starting in 2009, the building's public areas received a $550 million renovation, with improvements to the air conditioning and waterproofing, renovations to the observation deck and main lobby,[231] and relocation of the gift shop to the 80th floor.[232][233] About $120 million was spent on improving the energy efficiency of the building, with the goal of reducing energy emissions by 38% within five years.[233][234] For example, all of the windows were refurbished onsite into film-coated "superwindows" which block heat but pass light.[234][235][236] Air conditioning operating costs on hot days were reduced, saving $17 million of the project's capital cost immediately and partially funding some of the other retrofits.[235] The Empire State Building won the Leadership in Energy and Environmental Design (LEED) Gold for Existing Buildings rating in September 2011, as well as the World Federation of Great Towers' Excellence in Environment Award for 2010.[236] For the LEED Gold certification, the building's energy reduction was considered, as was a large purchase of carbon offsets. Other factors included low-flow bathroom fixtures, green cleaning supplies, and use of recycled paper products.[237]
+
+ On April 30, 2012, One World Trade Center topped out, taking the Empire State Building's record as the tallest building in the city.[238] By 2014, the building was owned by the Empire State Realty Trust (ESRT), with Anthony Malkin as chairman, CEO, and president.[239] The ESRT was a public company, having begun trading publicly on the New York Stock Exchange the previous year.[240] In August 2016, the Qatar Investment Authority (QIA) was issued new fully diluted shares equivalent to 9.9% of the trust; this investment gave it partial ownership of the entirety of the ESRT's portfolio, and as a result, partial ownership of the Empire State Building.[241] The trust's president, John Kessler, called it an "endorsement of the company's irreplaceable assets".[242] The investment has been described by the real-estate magazine The Real Deal as "an unusual move for a sovereign wealth fund", as these funds typically buy direct stakes in buildings rather than in real estate companies.[243] Other foreign entities with a stake in the ESRT include investors from Norway, Japan, and Australia.[242]
+
+ A renovation of the Empire State Building began in the 2010s to further improve energy efficiency, public areas, and amenities.[1] In August 2018, to improve the flow of visitor traffic, the main visitor entrance was shifted to 20 West 34th Street as part of a major renovation of the observatory lobby.[244] The new lobby includes several technological features, including large LED panels, digital ticket kiosks in nine languages, and a two-story architectural model of the building surrounded by two metal staircases.[1][244] The first phase of the renovation, completed in 2019, features an updated exterior lighting system and digital hosts.[244] The new lobby also offers free Wi-Fi for those waiting.[1][245] A 10,000-square-foot (930 m2) exhibit with nine galleries opened in July 2019.[246][247] The 102nd floor observatory, the third phase of the redesign, re-opened to the public on October 12, 2019.[248][249] That portion of the project included outfitting the space with floor-to-ceiling glass windows and a brand-new glass elevator.[250] The final portion of the renovations to be completed was a new observatory on the 80th floor, which opened on December 2, 2019. In total, the renovation cost $165 million and took four years to finish.[251][252]
+
+ The Empire State Building rises 1,250 ft (381 m) to its 102nd floor, or 1,453 feet 8 9⁄16 inches (443.092 m) including its 203-foot (61.9 m) pinnacle.[59] The building has 85 stories of commercial and office space representing a total of 2.158 million square feet (200,500 m2) of rentable space. It has an indoor and outdoor observation deck on the 86th floor, the highest floor within the actual tower.[59] The remaining 16 stories are part of the Art Deco spire, which is capped by an observatory on the 102nd floor; the spire is hollow, with no floors between levels 86 and 102.[59] Atop the tower is the 203 ft (61.9 m) pinnacle, much of which is covered by broadcast antennas and surmounted by a lightning rod.[169]
+
+ According to the official fact sheets, the building rises 1,860 steps from the first to the 102nd floor, weighs 365,000 short tons (331,122 t), has an internal volume of 37 million cubic feet (1,000,000 m3), and has an exterior with 200,000 cubic feet (5,700 m3) of limestone and granite. Construction of the tower's exterior required ten million bricks and 730 short tons (650 long tons) of aluminum and stainless steel,[253] and the interior required 1,172 miles (1,886 km) of elevator cable and 2 million feet (609,600 m) of electrical wires.[254] The building has a capacity for 20,000 tenants and 15,000 visitors.[255]
+
+ The building has been named one of the Seven Wonders of the Modern World by the American Society of Civil Engineers.[256] The building and its street floor interior were designated landmarks by the New York City Landmarks Preservation Commission, a designation confirmed by the New York City Board of Estimate.[257] It was designated a National Historic Landmark in 1986.[10][211][258] In 2007, it was ranked number one on the AIA's List of America's Favorite Architecture.[259]
+
+ The Empire State Building's Art Deco design is typical of pre–World War II architecture in New York. The modernistic, stainless steel canopies of the entrances on 33rd and 34th Streets lead to two-story-high corridors around the elevator core, crossed by stainless steel and glass-enclosed bridges at the second-floor level.[257] The riveted steel frame of the building was originally designed to handle all of the building's gravitational stresses and wind loads.[260] The exterior of the building is clad in Indiana limestone panels sourced from the Empire Mill in Sanders, Indiana,[261] which give the building its signature blonde color.[49] The amount of material used in the building's construction resulted in a very stiff structure compared to other skyscrapers, with a structural stiffness of 42 pounds per square foot (2.0 kPa) versus the Willis Tower's 33 pounds per square foot (1.6 kPa) and the John Hancock Center's 26 pounds per square foot (1.2 kPa).[262] A December 1930 feature in Popular Mechanics estimated that a building with the Empire State's dimensions would still stand even if hit with an impact of 50 short tons (45 long tons).[255]
+
+ The Empire State Building has a tiered, setback design: one major setback and several smaller ones reduce the floor dimensions as the height increases, making the upper 81 floors much smaller than the lowest five. This design allows sunlight to illuminate the interiors of the top floors and positions those floors away from the noisy streets below.[57][263] The setbacks were mandated by the 1916 Zoning Resolution, which was also intended to allow sunlight to reach the streets.[g] Normally, a building of the Empire State's dimensions would have been permitted to rise up to 12 stories on the Fifth Avenue side, and up to 17 stories on the 33rd/34th Streets side, before it would have had to utilize setbacks.[72] However, the setbacks were arranged so that the largest one was on the sixth floor, above the five-floor "base",[72] giving the rest of the building above the sixth floor a facade of uniform shape.[255][269][58]
+
+ The Empire State Building was the first building to have more than 100 floors.[139] It has 6,514 windows;[270] 73 elevators in all, including service elevators;[234][260] a total floor area of 2,768,591 sq ft (257,211 m2); and a base covering 2 acres (1 ha).[271] Its original 64 elevators, built by the Otis Elevator Company,[271] are located in a central core and are of varying heights, with the longest of these elevators reaching from the lobby to the 80th floor.[72][272] As originally built, there were four "express" elevators that connected the lobby, the 80th floor, and several landings in between; the other 60 "local" elevators connected the landings with the floors above these intermediate landings.[269] Of the 64 total elevators, 58 were for passenger use (comprising the four express elevators and 54 local elevators), and six were for freight deliveries.[58] The elevators were designed to move at 1,200 feet per minute (366 m/min); at the time of the skyscraper's construction, their practical speed was limited to 700 feet per minute (213 m/min) under city law, but this limit was removed shortly after the building opened.[271][58] Additional elevators connect the 80th floor to the six floors above it, as those six extra floors were built after the original 80 stories were approved.[59][273] The elevators were mechanically operated until 2011, when they were replaced with digital units during the $550 million renovation of the building.[274]
+
+ Utilities are grouped in a central shaft.[72] On each floor between levels 6 and 86, the central shaft is surrounded by a main corridor on all four sides.[58] As per the final specifications of the building, the corridor is surrounded in turn by office space 28 feet (8.5 m) deep.[75] Each of the floors has 210 structural columns passing through it, which provide structural stability but limit the amount of open space on these floors.[58] However, the relative dearth of stone in the building allows for more space overall, with a 1:200 stone-to-building ratio in the Empire State compared to a 1:50 ratio in similar buildings.[104]
+
+ The original main lobby is accessed from Fifth Avenue, on the building's east side, and contains an entrance with one set of double doors between a pair of revolving doors. At the top of each doorway is a bronze motif depicting one of three "crafts or industries" used in the building's construction—Electricity, Masonry, and Heating.[275] The lobby contains two tiers of marble, a lighter marble on the top, above the storefronts, and a darker marble on the bottom, flush with the storefronts. There is a pattern of zigzagging terrazzo tiles on the lobby floor, which leads from the entrance on the east to the aluminum relief on the west.[276] The chapel-like three-story-high lobby, which runs parallel to 33rd and 34th Streets, contains storefronts on both its northern and southern sides.[277] These storefronts are framed on each side by tubes of dark "modernistically rounded marble", according to the New York City Landmarks Preservation Commission, and above by a vertical band of grooves set into the marble.[276] Immediately inside the lobby is an airport-style security checkpoint.[278]
+
+ The walls on both the northern and southern sides of the lobby house storefronts and escalators to a mezzanine level.[276][i] At the west end of the lobby is an aluminum relief of the skyscraper as it was originally built (i.e. without the antenna).[279] The relief, which was intended to provide a welcoming effect,[280] contains an embossing of the building's outline, accompanied by what the Landmarks Preservation Commission describes as "the rays of an aluminum sun shining out behind [the tower] and mingling with aluminum rays emanating from the spire of the Empire State Building". In the background is a state map of New York with the building's location marked by a "medallion" in the very southeast portion of the outline. A compass is located in the bottom right and a plaque to the tower's major developers is on the bottom left.[281]
+
+ The plaque at the western end of the lobby is located on the eastern interior wall of a one-story-high rectangular corridor that surrounds the banks of escalators, with a similar design to the lobby.[282] The rectangular corridor consists of two long hallways on the northern and southern sides of the rectangle,[283] as well as a shorter hallway on the eastern side and another long hallway on the western side.[282] At both ends of the northern and southern corridors, there is a bank of four low-rise elevators in between the corridors.[209] The western side of the rectangular elevator-bank corridor extends north to the 34th Street entrance and south to the 33rd Street entrance. It borders three large storefronts and leads to escalators that go both to the second floor and to the basement. Going from west to east, there are secondary entrances to 34th and 33rd Streets from the northern and southern corridors, respectively, at approximately the two-thirds point of each corridor.[276][i]
+
+ Until the 1960s, an Art Deco mural, inspired by both the sky and the Machine Age, was installed in the lobby ceiling.[279] Subsequent damage to the mural, designed by artist Leif Neandross, resulted in reproductions being installed. Renovations to the lobby in 2009, such as replacing the clock over the information desk in the Fifth Avenue lobby with an anemometer and installing two chandeliers intended to be part of the building when it originally opened, revived much of its original grandeur.[231] The north corridor contained eight illuminated panels, created in 1963 by Roy Sparkia and Renée Nemorov in time for the 1964 World's Fair, depicting the building as the Eighth Wonder of the World alongside the traditional seven.[209][284] The building's owners installed a series of paintings by the New York artist Kysa Johnson in the concourse level; in January 2014, Johnson filed a federal lawsuit under the Visual Artists Rights Act, alleging the negligent destruction of the paintings and damage to her reputation as an artist.[285] As part of the building's 2010 renovation, Denise Amses commissioned a work consisting of 15,000 stars and 5,000 circles, superimposed on a 13-by-5-foot (4.0 by 1.5 m) etched-glass installation, in the lobby.[286]
+
+ The final stage of the building's construction was the installation of a hollow mast, a 158-foot (48 m) steel shaft fitted with elevators and utilities, above the 86th floor. At the top would be a conical roof and the 102nd-floor docking station.[287] The elevators would ascend 167 feet (51 m) from the 86th floor ticket offices to a 33-foot-wide (10 m) 101st-floor[h] waiting room.[163][160] From there, stairs would lead to the 102nd floor,[h] where passengers would board the airships.[287] The airships would have been moored to the spire at the equivalent of the building's 106th floor.[160][161]
+
+ On the 102nd floor of the Empire State Building (formerly the 101st floor), there is a door with stairs ascending to the 103rd floor (formerly the 102nd).[h] This was built as a disembarkation floor for airships tethered to the building's spire, and has a circular balcony outside.[13] It is now an access point to reach the spire for maintenance. The room now contains electrical equipment, but celebrities and dignitaries may also be given permission to take pictures there.[288][289] Above the 103rd floor, there is a set of stairs and a ladder to reach the spire for maintenance work.[288] The mast's 480 windows were all replaced in 2015.[290]
+
+ Broadcasting began at the Empire State Building on December 22, 1931, when NBC and RCA began transmitting experimental television broadcasts from a small antenna erected atop the spire, with two separate transmitters for the visual and audio data. They leased the 85th floor and built a laboratory there.[158] In 1934, RCA was joined by Edwin Howard Armstrong in a cooperative venture to test his FM system from the building's antenna.[291][292] This setup, which entailed the installation of the world's first FM transmitter,[292] continued only until October of the next year due to disputes between RCA and Armstrong.[158][291] Specifically, NBC wanted to install more TV equipment in the room where Armstrong's transmitter was located.[292]
+
+ After some time, the 85th floor became home to RCA's New York television operations, initially as experimental station W2XBS, channel 1, and then, from 1941, as commercial station WNBT, channel 1 (now WNBC, channel 4). NBC's FM station, W2XDG, began transmitting from the antenna in 1940.[158][293] NBC retained exclusive use of the top of the building until 1950, when the Federal Communications Commission (FCC) ordered that the exclusive deal be terminated. The FCC directive was based on consumer complaints that a common transmission location was necessary for the seven extant New York-area television stations, so that receiving antennas would not have to be constantly adjusted. Other television broadcasters later joined RCA at the building, on the 81st through 83rd floors, often along with sister FM stations.[158] Construction of a dedicated broadcast tower began on July 27, 1950,[174] with TV and FM transmissions starting in 1951; the broadcast tower was completed by 1953.[49][73][175] From 1951, six broadcasters agreed to pay a combined $600,000 per year for the use of the antenna.[177] In 1965, a separate set of FM antennas was constructed ringing the 103rd floor observation area, to act as a master antenna.[158]
+
+ The placement of the stations in the Empire State Building became a major issue with the construction of the World Trade Center Twin Towers in the late 1960s and early 1970s. The greater height of the Twin Towers would reflect radio waves broadcast from the Empire State Building, eventually leading some broadcasters to relocate to the newer towers instead of suing the developer, the Port Authority of New York and New Jersey.[294] Even though the nine stations then broadcasting from the Empire State Building had leases on their broadcast space running until 1984, most of them moved to the World Trade Center as soon as it was completed in 1971. The broadcasters obtained a court order stipulating that the Port Authority had to build a mast and transmission equipment in the North Tower, as well as pay the broadcasters' leases in the Empire State Building until 1984.[205] Only a few broadcasters renewed their leases in the Empire State Building.[295]
+
+ The September 11 attacks in 2001 destroyed the World Trade Center and the broadcast centers atop it, leaving most of the city's stations without a transmission site for ten days until a temporary tower was built in Alpine, New Jersey.[296] By October 2001, nearly all of the city's commercial broadcast stations (both television and FM radio) were again transmitting from the top of the Empire State Building. A congressionally commissioned report on the transition from analog to digital television described the placement of broadcast stations in the Empire State Building as "problematic" due to interference from other nearby towers; by comparison, the report noted, the former Twin Towers had very few buildings of comparable height nearby, so signals suffered little interference.[225] In 2003, a few FM stations were relocated to the nearby Condé Nast Building to reduce the number of broadcast stations using the Empire State Building.[297] Eleven television stations and twenty-two FM stations had signed 15-year leases in the building by May 2003. A taller broadcast tower in Bayonne, New Jersey, or on Governors Island was expected to be built in the meantime, with the Empire State Building serving as a "backup", since signal transmissions from the building were generally of poorer quality.[298] Following the construction of One World Trade Center in the late 2000s and early 2010s, some TV stations began moving their transmitting facilities there.[299]
+
+ As of 2018, the Empire State Building is home to the following stations:[300]
+
+ The 80th, 86th, and 102nd floors contain observatories.[301][279][252] The latter two saw a combined average of 4 million visitors per year as of 2010.[109][302][303] Since opening, the observatories have been more popular than similar observatories at 30 Rockefeller Plaza, the Chrysler Building, the first One World Trade Center, or the Woolworth Building, despite being more expensive.[302] There are variable charges to enter the observatories: one ticket allows visitors to go as high as the 86th floor, and there is an additional charge to visit the 102nd floor. Other ticket options include scheduled access to view the sunrise from the observatory, a "premium" guided tour with VIP access, and the "AM/PM" package, which allows for two visits in the same day.[304]
+
+ The 86th floor observatory contains both an enclosed viewing gallery and an open-air outdoor viewing gallery, allowing it to remain open 365 days a year regardless of the weather. The 102nd floor observatory is completely enclosed and much smaller; it was closed to the public from the late 1990s to 2005 due to limited viewing capacity and long lines.[305][306] The observation decks were redesigned in mid-1979.[206] The 102nd floor was again redesigned in a project completed in 2019.[248][249] An observatory on the 80th floor, opened in 2019, includes various exhibits as well as a mural of the skyline drawn by British artist Stephen Wiltshire.[251][252]
+
+ According to a 2010 report by Concierge.com, the five lines to enter the observation decks are "as legendary as the building itself": the sidewalk line, the lobby elevator line, the ticket purchase line, the second elevator line, and the line to get off the elevator and onto the observation deck.[307] In 2016, however, New York City's official tourism website, NYCgo.com, made note of only three lines: the security check line, the ticket purchase line, and the second elevator line.[308] Following renovations completed in 2019, designed to streamline queuing and reduce wait times, guests enter from a single entrance on 34th Street and make their way through a 10,000-square-foot (930 m2) exhibit on their way up to the observatories. Guests are offered a variety of ticket packages, including one that enables them to skip the lines throughout the duration of their stay.[249] The Empire State Building garners significant revenue from ticket sales for its observation decks, in some years making more money from ticket sales than from renting office space.[302][309]
+
+ In early 1994, a motion simulator attraction was built on the 2nd floor as a complement to the observation deck.[310][311] The original cinematic presentation lasted approximately 25 minutes, while the simulation itself lasted about eight minutes.[312]
+
+ The ride had two incarnations. The original version, which ran from 1994 until around 2002, featured James Doohan, Star Trek's Scotty, as the airplane's pilot who humorously tried to keep the flight under control during a storm.[313][314] After the World Trade Center terrorist attacks on September 11, 2001, the ride was closed.[311] An updated version debuted in mid-2002, featuring actor Kevin Bacon as the pilot, with the new flight also going haywire.[315] This new version served a more informative goal, as opposed to the old version's main purpose of entertainment, and contained details about the 9/11 attacks.[316] The simulator received mixed reviews, with assessments of the ride ranging from "great" to "satisfactory" to "corny".[317]
+
+ The building was originally equipped with white searchlights atop the tower. They saw their first use in November 1932, when they lit up to signal Roosevelt's victory over Hoover in that year's presidential election.[318] These were later swapped for four "Freedom Lights" in 1956.[318] In February 1964, floodlights were added on the 72nd floor[190] to illuminate the top of the building at night, so that the building could be seen from the World's Fair later that year.[191] The lights were shut off from November 1973 to July 1974 because of the energy crisis at the time.[40] In 1976, the businessman Douglas Leigh suggested that Wien and Helmsley install 204 metal-halide lights, which were four times as bright as the 1,000 incandescent lights they were to replace.[319] New red, white, and blue metal-halide lights were installed in time for the country's bicentennial that July.[40][320] After the bicentennial, Helmsley retained the new lights due to their reduced maintenance cost, about $116 a year.[319]
+
+ Since 1976, the spire has been lit in colors chosen to match seasonal events and holidays. Organizations are allowed to make requests through the building's website.[321] The building is also lit in the colors of New York-based sports teams on nights when they host games: for example, orange, blue, and white for the New York Knicks; red, white, and blue for the New York Rangers.[322] It was twice lit in scarlet to support New Jersey's Rutgers University, once for a football game against the University of Louisville on November 9, 2006, and again on April 3, 2007, when the women's basketball team played in the national championship game.[323]
+
+ There have been special occasions when the lights have been modified from the usual schedule. The structure was lit in red, white, and blue for several months after the destruction of the World Trade Center in September 2001, then reverted to the standard schedule.[324] On June 4, 2002, the Empire State Building was lit in purple and gold (the royal colors of Elizabeth II), in thanks for the United Kingdom playing "The Star-Spangled Banner" during the Changing of the Guard at Buckingham Palace on the day after the September 2001 attacks.[325] On January 13, 2012, the building was lit in red, orange, and yellow to honor the 60th anniversary of the NBC program The Today Show.[326] During June 1–3, 2012, the building was lit in blue and white, the colors of the Israeli flag, in honor of the 49th annual Celebrate Israel Parade.[327]
+
+ The building has also been lit to commemorate the deaths of notable personalities. After the eightieth birthday, and later the 1998 death, of Frank Sinatra, for example, the building was bathed in blue light to represent the singer's nickname "Ol' Blue Eyes".[328] After actress Fay Wray, who starred in King Kong, died in September 2004, the building's lights were extinguished for 15 minutes.[329] Following retired basketball player Kobe Bryant's death in January 2020, the building was lit in purple and gold, the colors of his former team, the Los Angeles Lakers.[330]
+
+ In 2012, the building's four hundred metal-halide lamps and floodlights were replaced with 1,200 LED fixtures, increasing the available colors from nine to over 16 million.[331] The computer-controlled system allows the building to be illuminated in ways that were not previously possible with plastic gels.[332] For instance, on November 6, 2012, CNN used the top of the Empire State Building as a scoreboard for the 2012 United States presidential election: when incumbent president Barack Obama reached the 270 electoral votes necessary to win re-election, the lights turned blue, representing the color of Obama's Democratic Party. Had Republican challenger Mitt Romney won, the building would have been lit red, the color of the Republican Party.[333] On November 26, 2012, the building had its first synchronized light show, using music from recording artist Alicia Keys.[334] Artists such as Eminem and OneRepublic have been featured in later shows, including the building's annual Holiday Music-to-Lights Show.[335] The building's owners adhere to strict standards in using the lights; for instance, they do not use the lights to play advertisements.[332]
+
+ The longest world record held by the Empire State Building was for the tallest skyscraper (to structural height), which it held for 39 years until it was surpassed by the North Tower of the World Trade Center in October 1970.[202][223][336] The Empire State Building was also the tallest man-made structure in the world before it was surpassed by the Griffin Television Tower Oklahoma (KWTV Mast) in 1954,[337] and the tallest freestanding structure in the world until the completion of the Ostankino Tower in 1967.[202] An early-1970s proposal to dismantle the spire and replace it with an additional 11 floors, which would have brought the building's height to 1,494 feet (455 m) and made it once again the world's tallest at the time, was considered but ultimately rejected.[338]
+
+ With the destruction of the World Trade Center in the September 11 attacks, the Empire State Building again became the tallest building in New York City, and the second-tallest building in the Americas, surpassed only by the Sears Tower (now Willis Tower) in Chicago. The Empire State Building remained the tallest building in New York until the new One World Trade Center reached a greater height in April 2012.[202][223][224][339] As of July 2018, it is the fourth-tallest building in New York City, after One World Trade Center, 432 Park Avenue, and 30 Hudson Yards, and the fifth-tallest completed skyscraper in the United States, behind the two other tallest buildings in New York City as well as the Willis Tower and the Trump International Hotel and Tower in Chicago.[340] The Empire State Building is the 28th-tallest building in the world as of October 2017, the tallest being the Burj Khalifa in Dubai.[341] It is also the sixth-tallest freestanding structure in the Americas, behind the five tallest buildings and the CN Tower.[342]
+
+ As of 2013, the building houses around 1,000 businesses.[343] Current tenants include:
+
+ Former tenants include:
+
+ At 9:40 am on July 28, 1945, a B-25 Mitchell bomber, piloted in thick fog by Lieutenant Colonel William Franklin Smith Jr.,[371] crashed into the north side of the Empire State Building between the 79th and 80th floors, where the offices of the National Catholic Welfare Council were located.[169] One engine completely penetrated the building, landing on the roof of a nearby building, where it started a fire that destroyed a penthouse.[365][372] The other engine and part of the landing gear plummeted down an elevator shaft, causing a fire that was extinguished in 40 minutes. Fourteen people were killed in the incident.[170][74] Elevator operator Betty Lou Oliver survived a plunge of 75 stories inside an elevator, which still stands as the Guinness World Record for the longest survived elevator fall.[373]
+
+ Despite the damage and loss of life, the building was open for business on many floors two days later.[170][171] The crash helped spur the passage of the long-pending Federal Tort Claims Act of 1946, as well as the insertion of retroactive provisions into the law, allowing people to sue the government for the incident.[374] Also as a result of the crash, the Civil Aeronautics Administration enacted strict regulations regarding flying over New York City, setting a minimum flying altitude of 2,500 feet (760 m) above sea level regardless of the weather conditions.[375][170]
+
+ A year later, on July 24, 1946, another aircraft narrowly missed striking the building. The unidentified twin-engine plane scraped past the observation deck, scaring the tourists there.[376]
+
+ On January 24, 2000, an elevator in the building suddenly descended 40 stories after a cable that controlled the cabin's maximum speed was severed.[377] The elevator fell from the 44th floor to the fourth floor, where a narrowed elevator shaft provided a second safety system. Despite the 40-floor fall, both of the passengers in the cabin at the time were only slightly injured.[378] Since that elevator had no fourth-floor doors, the passengers were rescued by an adjacent elevator.[379] After the fall, building inspectors reviewed all of the building's elevators.[378]
+
+ Because of the building's iconic status, it and other Midtown landmarks are popular locations for suicide attempts.[380] More than 30 people have attempted suicide over the years by jumping from the upper parts of the building, with most attempts being successful.[381][382]
+
+ The first suicide from the building occurred on April 7, 1931, before the tower was even completed, when a carpenter who had been laid off went to the 58th floor and jumped.[383] The first suicide after the building's opening occurred from the 86th floor observatory in February 1935, when Irma P. Eberhardt fell 1,029 feet (314 m) onto a marquee sign.[384] On December 16, 1943, William Lloyd Rambo jumped to his death from the 86th floor, landing amidst Christmas shoppers on the street below.[385] In the early morning of September 27, 1946, shell-shocked Marine Douglas W. Brashear Jr. jumped from the 76th-floor window of the Grant Advertising Agency; police found his shoes 50 feet (15 m) from his body.[386]
+
+ On May 1, 1947, Evelyn McHale leapt to her death from the 86th floor observation deck and landed on a limousine parked at the curb. Photography student Robert Wiles took a photo of McHale's oddly intact corpse a few minutes after her death. The police found a suicide note among the possessions she left on the observation deck: "He is much better off without me.... I wouldn't make a good wife for anybody". The photo ran in the May 12, 1947, edition of Life magazine[387] and is often referred to as "The Most Beautiful Suicide". It was later used by visual artist Andy Warhol in one of his prints, entitled Suicide (Fallen Body).[388] A 7-foot (2.1 m) mesh fence was put up around the 86th floor terrace in December 1947, after five people tried to jump during a three-week span in October and November of that year.[389][390] By then, sixteen people had died from suicide jumps.[389]
+
+ Only one person has jumped from the upper observatory. Frederick Eckert of Astoria ran past a guard in the enclosed 102nd floor gallery on November 3, 1932, and jumped a gate leading to an outdoor catwalk intended for dirigible passengers. He landed and died on the roof of the 86th floor observation promenade.[391]
+
+ Two people have survived falls from the building because they fell no more than a single story. On December 2, 1979, Elvita Adams jumped from the 86th floor, only to be blown back onto a ledge on the 85th floor by a gust of wind, leaving her with a broken hip.[392][393][394] On April 25, 2013, a man fell from the 86th floor observation deck but landed alive, with minor injuries, on an 85th-floor ledge; security guards brought him inside, and paramedics transferred him to a hospital for a psychiatric evaluation.[395]
+
+ Two fatal shootings have occurred in the immediate vicinity of the Empire State Building. Abu Kamal, a 69-year-old Palestinian teacher, shot seven people on the 86th floor observation deck during the afternoon of February 23, 1997, killing one person and wounding six others before committing suicide.[396] Kamal reportedly committed the shooting in response to events in Palestine and Israel.[397]
+
+ On the morning of August 24, 2012, 58-year-old Jeffrey T. Johnson shot and killed a former co-worker on the building's Fifth Avenue sidewalk. He had been laid off from his job in 2011. Two police officers confronted the gunman, and he aimed his firearm at them. They responded by firing 16 shots, killing him but also wounding nine bystanders. Most of the injured were hit by bullet fragments, although three took direct hits from bullets.[12][398]
+
+ As the tallest building in the world and the first one to exceed 100 floors, the Empire State Building immediately became an icon of the city and of the nation.[130][139] In 2013, Time magazine noted that the Empire State Building "seems to completely embody the city it has become synonymous with".[399] The historian John Tauranac calls the tower "'the' twentieth-century New York building", despite the existence of taller and more modernist buildings.[400] Early in the building's history, travel companies such as Short Line Motor Coach Service and New York Central Railroad used the building as an icon to symbolize the city.[401] After the construction of the first World Trade Center, architect Paul Goldberger noted that the Empire State Building "is famous for being tall, but it is good enough to be famous for being good."[206]
+
+ As an icon of the United States, it is also very popular among Americans. In a 2007 survey, the American Institute of Architects found that the Empire State Building was "America's favorite building".[402] The building was originally a symbol of hope in a country devastated by the Depression, as well as a work of accomplishment by newer immigrants.[130] The writer Benjamin Flowers states that the Empire State was "a building intended to celebrate a new America, built by men (both clients and construction workers) who were themselves new Americans."[125] The architectural critic Jonathan Glancey refers to the building as an "icon of American design".[343]
+
+ The Empire State Building has been hailed as an example of a "wonder of the world" due to the massive effort expended during construction. The Washington Star listed it as part of one of the "seven wonders of the modern world" in 1931, while Holiday magazine wrote in 1958 that the Empire State's height exceeded the combined heights of the Eiffel Tower and the Great Pyramid of Giza.[400] The American Society of Civil Engineers also declared the building "A Modern Civil Engineering Wonder of the United States" in 1958, and one of the Seven Wonders of the Modern World in 1994.[185] Ron Miller, in a 2010 book, also described the Empire State Building as one of the "seven wonders of engineering".[403] It has often been called the Eighth Wonder of the World as well, an appellation it has held since shortly after opening.[73][167][404] The panels installed in the lobby in 1963 reflected this, showing the seven original wonders alongside the Empire State Building.[284]
+
+ As an icon of New York City, the Empire State Building has been featured in various films, books, TV shows, and video games. According to the building's official website, more than 250 movies contain depictions of the Empire State Building.[405] In his book about the building, John Tauranac writes that its first documented appearance in popular culture was in Swiss Family Manhattan, a 1932 children's story by Christopher Morley.[406] A year later, the film King Kong depicted Kong, a giant stop-motion ape, climbing the Empire State Building,[141][142][278] bringing the building into the popular imagination.[278] Later movies such as An Affair to Remember (1957), Sleepless in Seattle (1993), and Independence Day (1996) also featured the building.[407][405] The building has also appeared in other works, such as "Daleks in Manhattan", a 2007 episode of the TV series Doctor Who,[407] and Empire, an eight-hour black-and-white silent film by Andy Warhol[407] that was later added to the Library of Congress's National Film Registry.[408]
+
+ The Empire State Building Run-Up, a foot race from ground level to the 86th-floor observation deck, has been held annually since 1978. Its participants are referred to both as runners and as climbers, and are often tower-running enthusiasts. The race covers a vertical distance of 1,050 ft (320 m) and climbs 1,576 steps. The record time is 9 minutes and 33 seconds, achieved by Australian professional cyclist Paul Crake in 2003, at a climbing rate of 6,593 ft (2,010 m) per hour.[409][410]
en/1734.html.txt ADDED
@@ -0,0 +1,212 @@
+
+
+ Enceladus (/ɛnˈsɛlədəs/) is the sixth-largest moon of Saturn. It is about 500 kilometers (310 mi) in diameter,[5] about a tenth the diameter of Saturn's largest moon, Titan. Enceladus is mostly covered by fresh, clean ice, making it one of the most reflective bodies of the Solar System. Consequently, its surface temperature at noon reaches only −198 °C (−324 °F), far colder than that of a light-absorbing body. Despite its small size, Enceladus has a wide range of surface features, ranging from old, heavily cratered regions to young, tectonically deformed terrains.
+
+ Enceladus was discovered on August 28, 1789, by William Herschel,[1][17][18] but little was known about it until the two Voyager spacecraft, Voyager 1 and Voyager 2, passed nearby in 1980 and 1981.[19] In 2005, the Cassini spacecraft started multiple close flybys of Enceladus, revealing its surface and environment in greater detail. In particular, Cassini discovered water-rich plumes venting from the south polar region.[20] Cryovolcanoes near the south pole shoot geyser-like jets of water vapor, molecular hydrogen, other volatiles, and solid material, including sodium chloride crystals and ice particles, into space, totaling about 200 kg (440 lb) per second.[16][19][21] Over 100 geysers have been identified.[22] Some of the water vapor falls back as "snow"; the rest escapes, and supplies most of the material making up Saturn's E ring.[23][24] According to NASA scientists, the plumes are similar in composition to comets.[25] In 2014, NASA reported that Cassini found evidence for a large south polar subsurface ocean of liquid water with a thickness of around 10 km (6 mi).[26][27][28]
+
+ These geyser observations, along with the finding of escaping internal heat and very few (if any) impact craters in the south polar region, show that Enceladus is currently geologically active. Like many other satellites in the extensive systems of the giant planets, Enceladus is trapped in an orbital resonance. Its resonance with Dione excites its orbital eccentricity, which is damped by tidal forces, tidally heating its interior and driving the geological activity.[29]
+
+ On June 27, 2018, scientists reported the detection of complex macromolecular organics in Enceladus's jet plumes, as sampled by the Cassini orbiter.[30][31]
+
+ Enceladus was discovered by William Herschel on August 28, 1789, during the first use of his new 1.2 m (47 in) 40-foot telescope, then the largest in the world, at Observatory House in Slough, England.[18][32] Its faint apparent magnitude (HV = +11.7) and its proximity to the much brighter Saturn and Saturn's rings make Enceladus difficult to observe from Earth with smaller telescopes. Like many satellites of Saturn discovered prior to the Space Age, Enceladus was first observed during a Saturnian equinox, when Earth is within the ring plane; at such times, the reduction in glare from the rings makes the moons easier to observe.[33] Prior to the Voyager missions, the view of Enceladus improved little from the dot first observed by Herschel: only its orbital characteristics were known, along with estimates of its mass, density, and albedo.
+
+ Enceladus is named after the giant Enceladus of Greek mythology.[1] The name, like the names of each of the first seven satellites of Saturn to be discovered, was suggested by William Herschel's son John Herschel in his 1847 publication Results of Astronomical Observations made at the Cape of Good Hope.[34] He chose these names because Saturn, known in Greek mythology as Cronus, was the leader of the Titans.
+
+ Features on Enceladus are named by the International Astronomical Union (IAU) after characters and places from Burton's translation of The Book of One Thousand and One Nights.[35] Impact craters are named after characters, whereas other feature types, such as fossae (long, narrow depressions), dorsa (ridges), planitiae (plains), sulci (long parallel grooves), and rupes (cliffs) are named after places. The IAU has officially named 85 features on Enceladus, most recently Samaria Rupes, formerly called Samaria Fossa.[36][37]
+
+ Enceladus is one of the major inner satellites of Saturn, along with Dione, Tethys, and Mimas. It orbits at 238,000 km from Saturn's center and 180,000 km from its cloud tops, between the orbits of Mimas and Tethys. It orbits Saturn every 32.9 hours, fast enough for its motion to be observed over a single night. Enceladus is currently in a 2:1 mean-motion orbital resonance with Dione, completing two orbits around Saturn for every one orbit completed by Dione. This resonance maintains Enceladus's orbital eccentricity (0.0047), which is known as a forced eccentricity. The non-zero eccentricity results in tidal deformation of Enceladus, and the heat dissipated by this deformation is the main heating source for Enceladus's geologic activity.[6] Enceladus orbits within the densest part of Saturn's E ring, the outermost of its major rings, and is the main source of the ring's material.[38]
+
+ Like most of Saturn's larger satellites, Enceladus rotates synchronously with its orbital period, keeping one face pointed toward Saturn. Unlike Earth's Moon, Enceladus does not appear to librate more than 1.5° about its spin axis. However, analysis of the shape of Enceladus suggests that at some point it was in a 1:4 forced secondary spin–orbit libration.[6] This libration could have provided Enceladus with an additional heat source.[29][39][40]
+
+ Plumes from Enceladus, which are similar in composition to comets,[25] have been shown to be the source of the material in Saturn's E ring.[23] The E ring is the widest and outermost ring of Saturn (except for the tenuous Phoebe ring). It is an extremely wide but diffuse disk of microscopic icy or dusty material distributed between the orbits of Mimas and Titan.[41]
+
+ Mathematical models show that the E ring is unstable, with a lifespan between 10,000 and 1,000,000 years; therefore, particles composing it must be constantly replenished.[42] Enceladus orbits inside the ring, at its narrowest but highest-density point. In the 1980s, some scientists suspected that Enceladus was the main source of particles for the ring.[43][44][45][46] This hypothesis was confirmed by Cassini's first two close flybys in 2005.[47][48]
+
+ Cassini's Cosmic Dust Analyzer (CDA) "detected a large increase in the number of particles near Enceladus", confirming Enceladus as the primary source for the E ring.[47] Analysis of the CDA and Ion and Neutral Mass Spectrometer (INMS) data suggests that the gas cloud Cassini flew through during the July encounter, and observed from a distance with its magnetometer and Ultraviolet Imaging Spectrograph (UVIS), was actually a water-rich cryovolcanic plume originating from vents near the south pole.[49]
+ Visual confirmation of venting came in November 2005, when the Imaging Science Subsystem (ISS) imaged geyser-like jets of icy particles rising from Enceladus's south polar region.[6][24] (Although the plume was imaged before, in January and February 2005, additional studies of the camera's response at high phase angles, when the Sun is almost behind Enceladus, and comparison with equivalent high-phase-angle images taken of other Saturnian satellites, were required before this could be confirmed.[50])
+
+ Enceladus orbiting within Saturn's E ring
+
+ Enceladus geyser tendrils - comparison of images ("a", "c") with computer simulations
+
+ Enceladus south polar region - locations of most active tendril-producing geysers
+
+ Voyager 2 was the first spacecraft to observe Enceladus's surface in detail, in August 1981. Examination of the resulting highest-resolution imagery revealed at least five different types of terrain, including several regions of cratered terrain, regions of smooth (young) terrain, and lanes of ridged terrain often bordering the smooth areas.[51] In addition, extensive linear cracks[52] and scarps were observed. Given the relative lack of craters on the smooth plains, these regions are probably less than a few hundred million years old. Accordingly, Enceladus must have been recently active with "water volcanism" or other processes that renew the surface.[53] The fresh, clean ice that dominates its surface gives Enceladus the most reflective surface of any body in the Solar System, with a visual geometric albedo of 1.38[10] and bolometric Bond albedo of 0.81±0.04.[11] Because it reflects so much sunlight, its surface only reaches a mean noon temperature of −198 °C (−324 °F), somewhat colder than other Saturnian satellites.[12]
+
+ Observations during three flybys by Cassini on February 17, March 9, and July 14, 2005, revealed Enceladus's surface features in much greater detail than the Voyager 2 observations. The smooth plains, which Voyager 2 had observed, resolved into relatively crater-free regions filled with numerous small ridges and scarps. Numerous fractures were found within the older, cratered terrain, suggesting that the surface has been subjected to extensive deformation since the craters were formed.[54] Some areas contain no craters, indicating major resurfacing events in the geologically recent past. There are fissures, plains, corrugated terrain and other crustal deformations. Several additional regions of young terrain were discovered in areas not well-imaged by either Voyager spacecraft, such as the bizarre terrain near the south pole.[6] All of this indicates that Enceladus's interior is liquid today, even though it should have been frozen long ago.[53]
39
+
40
+ Impact cratering is a common occurrence on many Solar System bodies. Much of Enceladus's surface is covered with craters at various densities and levels of degradation.[55] This subdivision of cratered terrains on the basis of crater density (and thus surface age) suggests that Enceladus has been resurfaced in multiple stages.[53]
41
+
42
+ Cassini observations provided a much closer look at the crater distribution and size, showing that many of Enceladus's craters are heavily degraded through viscous relaxation and fracturing.[56] Viscous relaxation allows gravity, over geologic time scales, to deform craters and other topographic features formed in water ice, reducing the amount of topography over time. The rate at which this occurs is dependent on the temperature of the ice: warmer ice is easier to deform than colder, stiffer ice. Viscously relaxed craters tend to have domed floors, or are recognized as craters only by a raised, circular rim. Dunyazad crater is a prime example of a viscously relaxed crater on Enceladus, with a prominent domed floor.[57]
43
+
44
+ Voyager 2 found several types of tectonic features on Enceladus, including troughs, scarps, and belts of grooves and ridges.[51] Results from Cassini suggest that tectonics is the dominant mode of deformation on Enceladus, including rifts, one of the more dramatic types of tectonic features that were noted. These canyons can be up to 200 km long, 5–10 km wide, and 1 km deep. Such features are geologically young, because they cut across other tectonic features and have sharp topographic relief with prominent outcrops along the cliff faces.[58]
45
+
46
+ Evidence of tectonics on Enceladus is also derived from grooved terrain, consisting of lanes of curvilinear grooves and ridges. These bands, first discovered by Voyager 2, often separate smooth plains from cratered regions.[51] Grooved terrains such as the Samarkand Sulci are reminiscent of grooved terrain on Ganymede. However, unlike those seen on Ganymede, grooved topography on Enceladus is generally more complex. Rather than parallel sets of grooves, these lanes often appear as bands of crudely aligned, chevron-shaped features. In other areas, these bands bow upwards with fractures and ridges running the length of the feature. Cassini observations of the Samarkand Sulci have revealed dark spots (125 and 750 m wide) located parallel to the narrow fractures. Currently, these spots are interpreted as collapse pits within these ridged plain belts.[56]
47
+
48
+ In addition to deep fractures and grooved lanes, Enceladus has several other types of tectonic terrain. Many of these fractures are found in bands cutting across cratered terrain. These fractures probably propagate down only a few hundred meters into the crust. Many have probably been influenced during their formation by the weakened regolith produced by impact craters, often changing the strike of the propagating fracture.[56][59] Other examples of tectonic features on Enceladus are the linear grooves first found by Voyager 2 and seen at a much higher resolution by Cassini. These linear grooves can be seen cutting across other terrain types, like the groove and ridge belts. Like the deep rifts, they are among the youngest features on Enceladus. However, some linear grooves have been softened like the craters nearby, suggesting that they are older. Ridges have also been observed on Enceladus, though not nearly to the extent of those seen on Europa. These ridges are relatively limited in extent and are up to one kilometer tall. One-kilometer-high domes have also been observed.[56] Given the level of resurfacing found on Enceladus, it is clear that tectonic movement has been an important driver of geology for much of its history.[58]
49
+
50
+ Two regions of smooth plains were observed by Voyager 2. They generally have low relief and far fewer craters than the cratered terrains, indicating a relatively young surface age.[55] In one of the smooth plain regions, Sarandib Planitia, no impact craters were visible down to the limit of resolution. Another region of smooth plains to the southwest of Sarandib is criss-crossed by several troughs and scarps. Cassini has since viewed these smooth plains regions, like Sarandib Planitia and Diyar Planitia, at much higher resolution. Cassini images show these regions filled with low-relief ridges and fractures, probably caused by shear deformation.[56] The high-resolution images of Sarandib Planitia revealed a number of small impact craters, which allow for an estimate of the surface age, either 170 million years or 3.7 billion years, depending on the assumed impactor population.[6][b]
51
+
52
+ The expanded surface coverage provided by Cassini has allowed for the identification of additional regions of smooth plains, particularly on Enceladus's leading hemisphere (the side of Enceladus that faces the direction of motion as it orbits Saturn). Rather than being covered in low-relief ridges, this region is covered in numerous criss-crossing sets of troughs and ridges, similar to the deformation seen in the south polar region. This area is on the opposite side of Enceladus from Sarandib and Diyar Planitiae, suggesting that the placement of these regions is influenced by Saturn's tides on Enceladus.[60]
53
+
54
+ Images taken by Cassini during the flyby on July 14, 2005, revealed a distinctive, tectonically deformed region surrounding Enceladus's south pole. This area, reaching as far north as 60° south latitude, is covered in tectonic fractures and ridges.[6][61] The area has few sizable impact craters, suggesting that it is the youngest surface on Enceladus and on any of the mid-sized icy satellites; modeling of the cratering rate suggests that some regions of the south polar terrain are possibly as young as 500,000 years or less.[6] Near the center of this terrain are four fractures bounded by ridges, unofficially called "tiger stripes".[62] They appear to be the youngest features in this region and are surrounded by mint-green-colored (in false color, UV–green–near IR images), coarse-grained water ice, seen elsewhere on the surface within outcrops and fracture walls.[61] Here the "blue" ice is on a flat surface, indicating that the region is young enough not to have been coated by fine-grained water ice from the E ring. Results from the visual and infrared spectrometer (VIMS) instrument suggest that the green-colored material surrounding the tiger stripes is chemically distinct from the rest of the surface of Enceladus. VIMS detected crystalline water ice in the stripes, suggesting that they are quite young (likely less than 1,000 years old) or the surface ice has been thermally altered in the recent past.[63] VIMS also detected simple organic (carbon-containing) compounds in the tiger stripes, chemistry not found anywhere else on Enceladus thus far.[64]
55
+
56
+ One of these areas of "blue" ice in the south polar region was observed at high resolution during the July 14, 2005 flyby, revealing an area of extreme tectonic deformation and blocky terrain, with some areas covered in boulders 10–100 m across.[65]
57
+
58
+ The boundary of the south polar region is marked by a pattern of parallel, Y- and V-shaped ridges and valleys. The shape, orientation, and location of these features suggest they are caused by changes in the overall shape of Enceladus. As of 2006 there were two theories for what could cause such a shift in shape: the orbit of Enceladus may have migrated inward, increasing Enceladus's rotation rate and leading to a more oblate shape;[6] or a rising mass of warm, low-density material in Enceladus's interior may have led to a shift in the position of the current south polar terrain from Enceladus's southern mid-latitudes to its south pole.[60] Consequently, the moon's ellipsoid shape would have adjusted to match the new orientation. One problem with the polar flattening hypothesis is that both polar regions should have similar tectonic deformation histories.[6] However, the north polar region is densely cratered, and has a much older surface age than the south pole.[55] Variations in the thickness of Enceladus's lithosphere are one explanation for this discrepancy. Variations in lithospheric thickness are supported by the correlation between the Y-shaped discontinuities and the V-shaped cusps along the south polar terrain margin and the relative surface age of the adjacent non-south polar terrain regions. The Y-shaped discontinuities, and the north–south trending tension fractures into which they lead, are correlated with younger terrain with presumably thinner lithospheres. The V-shaped cusps are adjacent to older, more heavily cratered terrains.[6]
59
+
60
+ Following Voyager's encounters with Enceladus in the early 1980s, scientists postulated it to be geologically active based on its young, reflective surface and location near the core of the E ring.[51] Based on the connection between Enceladus and the E ring, scientists suspected that Enceladus was the source of material in the E ring, perhaps through venting of water vapor.[43][44] Readings from Cassini's 2005 passage suggested that cryovolcanism, where water and other volatiles are the materials erupted instead of silicate rock, had been discovered on Enceladus. The first Cassini sighting of a plume of icy particles above Enceladus's south pole came from the Imaging Science Subsystem (ISS) images taken in January and February 2005,[6] though the possibility of a camera artifact delayed an official announcement. Data from the magnetometer instrument during the February 17, 2005, encounter provided evidence for a planetary atmosphere. The magnetometer observed a deflection or "draping" of the magnetic field, consistent with local ionization of neutral gas. In addition, an increase in the power of ion cyclotron waves near the orbit of Enceladus was observed, which was further evidence of the ionization of neutral gas. These waves are produced by the interaction of ionized particles and magnetic fields, and the waves' frequency is close to the gyrofrequency of the freshly produced ions, in this case water vapor.[15] During the two following encounters, the magnetometer team determined that gases in Enceladus's atmosphere are concentrated over the south polar region, with atmospheric density away from the pole being much lower.[15] The Ultraviolet Imaging Spectrograph (UVIS) confirmed this result by observing two stellar occultations during the February 17 and July 14 encounters. Unlike the magnetometer, UVIS failed to detect an atmosphere above Enceladus during the February encounter when it looked over the equatorial region, but did detect water vapor during an occultation over the south polar region during the July encounter.[16]
61
+
62
+ Cassini flew through this gas cloud on a few encounters, allowing instruments such as the ion and neutral mass spectrometer (INMS) and the cosmic dust analyzer (CDA) to directly sample the plume. (See 'Composition' section.) The November 2005 images showed the plume's fine structure, revealing numerous jets (perhaps issuing from numerous distinct vents) within a larger, faint component extending out nearly 500 km from the surface.[49] The particles have a bulk velocity of 1.25 ±0.1 km/s,[66] and a maximum velocity of 3.40 km/s.[67] Cassini's UVIS later observed gas jets coinciding with the dust jets seen by ISS during a non-targeted encounter with Enceladus in October 2007.
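As a plausibility check (not from the source), the quoted particle speeds can be compared with Enceladus's surface escape velocity. The Python sketch below assumes a mass of ~1.08 × 10²⁰ kg and a mean radius of ~252 km for Enceladus; neither figure is given in this passage.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.08e20     # assumed mass of Enceladus, kg
r = 252e3       # assumed mean radius of Enceladus, m

# Surface escape velocity: v_esc = sqrt(2GM/r)
v_esc = math.sqrt(2 * G * M / r)

bulk_speed = 1.25e3   # bulk plume particle velocity quoted above, m/s
max_speed = 3.40e3    # maximum plume particle velocity quoted above, m/s

print(f"escape velocity ~ {v_esc:.0f} m/s")                # ~240 m/s
print(f"bulk speed / v_esc ~ {bulk_speed / v_esc:.1f}x")   # ~5x
print(f"max speed  / v_esc ~ {max_speed / v_esc:.1f}x")    # ~14x
```

Since both quoted speeds exceed the ~0.24 km/s escape velocity several times over, much of the ejected material can leave Enceladus entirely, consistent with the plume feeding the E ring.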
63
+
64
+ The combined analysis of imaging, mass spectrometry, and magnetospheric data suggests that the observed south polar plume emanates from pressurized subsurface chambers, similar to Earth's geysers or fumaroles.[6] Fumaroles are probably the closer analogy, since periodic or episodic emission is an inherent property of geysers. The plumes of Enceladus were observed to be continuous to within a factor of a few. The mechanism that drives and sustains the eruptions is thought to be tidal heating.[68] The intensity of the eruption of the south polar jets varies significantly as a function of the position of Enceladus in its orbit. The plumes are about four times brighter when Enceladus is at apoapsis (the point in its orbit most distant from Saturn) than when it is at periapsis.[69][70][71] This is consistent with geophysical calculations which predict the south polar fissures are under compression near periapsis, pushing them shut, and under tension near apoapsis, pulling them open.[72]
65
+
66
+ Much of the plume activity consists of broad curtain-like eruptions. Optical illusions from a combination of viewing direction and local fracture geometry previously made the plumes look like discrete jets.[73][74][75]
67
+
68
+ The extent to which cryovolcanism really occurs is a subject of some debate, as water, being denser than ice by about 8%, has difficulty erupting under normal circumstances. At Enceladus, it appears that cryovolcanism occurs because water-filled cracks are periodically exposed to vacuum, the cracks being opened and closed by tidal stresses.[76][77][78]
69
+
70
+ Enceladus – plume animation (00:48)
71
+
72
+ Enceladus and south polar jets (April 13, 2017).
73
+
74
+ Plumes above the limb of Enceladus feeding the E ring
75
+
76
+ A false-color Cassini image of the jets
77
+
78
+ Before the Cassini mission, little was known about the interior of Enceladus. However, flybys by Cassini provided information for models of Enceladus's interior, including a better determination of the mass and shape, high-resolution observations of the surface, and new insights on the interior.[79][80]
79
+
80
+ Mass estimates from the Voyager program missions suggested that Enceladus was composed almost entirely of water ice.[51] However, based on the effects of Enceladus's gravity on Cassini, its mass was determined to be much higher than previously thought, yielding a density of 1.61 g/cm³.[6] This density is higher than that of Saturn's other mid-sized icy satellites, indicating that Enceladus contains a greater percentage of silicates and iron.
81
+
82
+ Castillo et al. (2005) suggested that Iapetus and the other icy satellites of Saturn formed relatively quickly after the formation of the Saturnian subnebula, and thus were rich in short-lived radionuclides.[81][82] These radionuclides, like aluminium-26 and iron-60, have short half-lives and would produce interior heating relatively quickly. Without the short-lived variety, Enceladus's complement of long-lived radionuclides would not have been enough to prevent rapid freezing of the interior, even with Enceladus's comparatively high rock–mass fraction, given its small size.[83] Given Enceladus's relatively high rock–mass fraction, the proposed enhancement in ²⁶Al and ⁶⁰Fe would result in a differentiated body, with an icy mantle and a rocky core.[84][82] Subsequent radioactive and tidal heating would raise the temperature of the core to 1,000 K, enough to melt the inner mantle. However, for Enceladus to still be active, part of the core must have also melted, forming magma chambers that would flex under the strain of Saturn's tides. Tidal heating, such as from the resonance with Dione or from libration, would then have sustained these hot spots in the core and would power the current geological activity.[40][85]
83
+
84
+ In addition to its mass and modeled geochemistry, researchers have also examined Enceladus's shape to determine if it is differentiated. Porco et al. (2006) used limb measurements to determine that its shape, assuming hydrostatic equilibrium, is consistent with an undifferentiated interior, in contradiction to the geological and geochemical evidence.[6] However, the current shape also supports the possibility that Enceladus is not in hydrostatic equilibrium, and may have rotated faster at some point in the recent past (with a differentiated interior).[84] Gravity measurements by Cassini show that the density of the core is low, indicating that the core contains water in addition to silicates.[86]
85
+
86
+ Evidence of liquid water on Enceladus began to accumulate in 2005, when scientists observed plumes containing water vapor spewing from its south polar surface,[6][87] with jets moving 250 kg of water vapor every second[87] at up to 2,189 km/h (1,360 mph) into space.[88] Soon after, in 2006 it was determined that Enceladus's plumes are the source of Saturn's E Ring.[6][47] The sources of salty particles are uniformly distributed along the tiger stripes, whereas sources of "fresh" particles are closely related to the high-speed gas jets. The "salty" particles are heavier and mostly fall back to the surface, whereas the fast "fresh" particles escape to the E ring, explaining its salt-poor composition of 0.5–2% of sodium salts by mass.[89]
87
+
88
+ Gravimetric data from Cassini's December 2010 flybys showed that Enceladus likely has a liquid water ocean beneath its frozen surface, but at the time it was thought the subsurface ocean was limited to the south pole.[26][27][28][90] The top of the ocean probably lies beneath an ice shell 30 to 40 kilometers (19 to 25 mi) thick. The ocean may be 10 kilometers (6.2 mi) deep at the south pole.[26][91]
89
+
90
+ Measurements of Enceladus's "wobble" as it orbits Saturn—called libration—suggest that the entire icy crust is detached from the rocky core and therefore that a global ocean is present beneath the surface.[92] The amount of libration (0.120° ± 0.014°) implies that this global ocean is about 26 to 31 kilometers (16 to 19 mi) deep.[93][94][95][96] For comparison, Earth's ocean has an average depth of 3.7 kilometers.[95]
91
+
92
+ The Cassini spacecraft flew through the southern plumes on several occasions to sample and analyze their composition. As of 2019, the data gathered is still being analyzed and interpreted. The plumes' salty composition (Na⁺, Cl⁻, CO₃²⁻) indicates that the source is a salty subsurface ocean.[97]
93
+
94
+ The INMS instrument detected mostly water vapor, as well as traces of molecular nitrogen, carbon dioxide,[14] and trace amounts of simple hydrocarbons such as methane, propane, acetylene and formaldehyde.[98][99] The plumes' composition, as measured by the INMS, is similar to that seen at most comets.[99] Cassini also found traces of simple organic compounds in some dust grains,[89][100] as well as larger organics such as benzene (C₆H₆),[101] and complex macromolecular organics as large as 200 atomic mass units,[30][31] and at least 15 carbon atoms in size.[102]
95
+
96
+ The mass spectrometer detected molecular hydrogen (H₂), which was in "thermodynamic disequilibrium" with the other components,[103] and found traces of ammonia (NH₃).[104]
97
+
98
+ A model suggests that Enceladus's salty ocean (Na⁺, Cl⁻, CO₃²⁻) has an alkaline pH of 11 to 12.[105][106] The high pH is interpreted to be a consequence of serpentinization of chondritic rock that leads to the generation of H₂, a geochemical source of energy that could support both abiotic and biological synthesis of organic molecules such as those that have been detected in Enceladus's plumes.[105][107]
99
+
100
+ In 2019, further analysis was carried out on the spectral characteristics of ice grains in Enceladus's erupting plumes. The study found that nitrogen-bearing and oxygen-bearing amines were likely present, with significant implications for the availability of amino acids in the internal ocean. The researchers suggested that the compounds on Enceladus could be precursors for "biologically relevant organic compounds".[108][109]
101
+
102
+ During the flyby of July 14, 2005, the Composite Infrared Spectrometer (CIRS) found a warm region near the south pole. Temperatures in this region ranged from 85 to 90 K, with small areas showing temperatures as high as 157 K (−116 °C), much too warm to be explained by solar heating, indicating that parts of the south polar region are heated from the interior of Enceladus.[12] The presence of a subsurface ocean under the south polar region is now accepted,[110] but it cannot explain the source of the heat, with an estimated heat flux of 200 mW/m², which is about 10 times higher than that from radiogenic heating alone.[111]
103
+
104
+ Several explanations for the observed elevated temperatures and the resulting plumes have been proposed, including venting from a subsurface reservoir of liquid water, sublimation of ice,[112] decompression and dissociation of clathrates, and shear heating,[113] but a complete explanation of all the heat sources causing the observed thermal power output of Enceladus has not yet been settled.
105
+
106
+ Heating in Enceladus has occurred through various mechanisms ever since its formation. Radioactive decay in its core may have initially heated it,[114] giving it a warm core and a subsurface ocean, which is now kept above freezing through an unidentified mechanism. Geophysical models indicate that tidal heating is a main heat source, perhaps aided by radioactive decay and some heat-producing chemical reactions.[115][116][117][118] A 2007 study predicted that the internal heat of Enceladus, if generated by tidal forces, could be no greater than 1.1 gigawatts,[119] but data from Cassini's infrared spectrometer of the south polar terrain over 16 months indicate that the internally generated heating power is about 4.7 gigawatts,[119] and suggest that it is in thermal equilibrium.[12][63][120]
107
+
108
+ The observed power output of 4.7 gigawatts is challenging to explain from tidal heating alone, so the main source of heat remains a mystery.[6][115] Most scientists think the observed heat flux of Enceladus is not enough to maintain the subsurface ocean, and therefore any subsurface ocean must be a remnant of a period of higher eccentricity and tidal heating, or the heat is produced through another mechanism.[121][122]
109
+
110
+ Tidal heating occurs through the tidal friction processes: orbital and rotational energy are dissipated as heat in the crust of an object. In addition, to the extent that tides produce heat along fractures, libration may affect the magnitude and distribution of such tidal shear heating.[40] Tidal dissipation of Enceladus's ice crust is significant because Enceladus has a subsurface ocean. A computer simulation that used data from Cassini was published in November 2017, and it indicates that friction heat from the sliding rock fragments within the permeable and fragmented core of Enceladus could keep its underground ocean warm for up to billions of years.[123][124][125] It is thought that if Enceladus had a more eccentric orbit in the past, the enhanced tidal forces could be sufficient to maintain a subsurface ocean, such that a periodic enhancement in eccentricity could maintain a subsurface ocean that periodically changes in size.[122] A more recent analysis claimed that "a model of the tiger stripes as tidally flexed slots that puncture the ice shell can simultaneously explain the persistence of the eruptions through the tidal cycle, the phase lag, and the total power output of the tiger stripe terrain, while suggesting that eruptions are maintained over geological timescales."[68] Previous models suggest that resonant perturbations of Dione could provide the necessary periodic eccentricity changes to maintain the subsurface ocean of Enceladus, if the ocean contains a substantial amount of ammonia.[6] The surface of Enceladus indicates that the entire moon has experienced periods of enhanced heat flux in the past.[126]
111
+
112
+ The "hot start" model of heating suggests Enceladus began as ice and rock that contained rapidly decaying short-lived radioactive isotopes of aluminium, iron and manganese. Enormous amounts of heat were then produced as these isotopes decayed for about 7 million years, resulting in the consolidation of rocky material at the core surrounded by a shell of ice. Although the heat from radioactivity would decrease over time, the combination of radioactivity and tidal forces from Saturn's gravitational tug could prevent the subsurface ocean from freezing.[114] The present-day radiogenic heating rate is 3.2 × 1015 ergs/s (or 0.32 gigawatts), assuming Enceladus has a composition of ice, iron and silicate materials.[6] Heating from long-lived radioactive isotopes uranium-238, uranium-235, thorium-232 and potassium-40 inside Enceladus would add 0.3 gigawatts to the observed heat flux.[115] The presence of Enceladus's regionally thick subsurface ocean suggests a heat flux ~10 times higher than that from radiogenic heating in the silicate core.[66]
113
+
114
+ Because INMS and UVIS initially found no ammonia, which could act as an antifreeze, in the vented material, it was thought such a heated, pressurized chamber would consist of nearly pure liquid water with a temperature of at least 270 K (−3 °C), because pure water requires more energy to melt.
115
+
116
+ In July 2009 it was announced that traces of ammonia had been found in the plumes during flybys in July and October 2008.[104][127] Reducing the freezing point of water with ammonia would also allow for outgassing and higher gas pressure,[128] and would reduce the heat required to power the water plumes.[129] The subsurface layer heating the surface water ice could be an ammonia–water slurry at temperatures as low as 170 K (−103 °C), and thus less energy is required to produce the plume activity. However, the observed heat flux of 4.7 gigawatts is enough to power the cryovolcanism without the presence of ammonia.[119][129]
117
+
118
+ Enceladus is a relatively small satellite composed of ice and rock.[130] It is a scalene ellipsoid in shape; its diameters, calculated from images taken by Cassini's ISS (Imaging Science Subsystem) instrument, are 513 km between the sub- and anti-Saturnian poles, 503 km between the leading and trailing hemispheres, and 497 km between the north and south poles.[6] Enceladus is only one-seventh the diameter of Earth's Moon. It ranks sixth in both mass and size among the satellites of Saturn, after Titan (5,150 km), Rhea (1,530 km), Iapetus (1,440 km), Dione (1,120 km) and Tethys (1,050 km).[131][132]
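As a consistency check (a sketch, not from the source), the triaxial diameters quoted here can be combined with the density of 1.61 g/cm³ given earlier in the article to recover Enceladus's mass; the resulting ~1.1 × 10²⁰ kg agrees with the published value.

```python
import math

a, b, c = 513e3 / 2, 503e3 / 2, 497e3 / 2   # semi-axes in meters, from the quoted diameters
density = 1610                               # kg/m^3, from the article's 1.61 g/cm^3

volume = 4.0 / 3.0 * math.pi * a * b * c     # volume of a triaxial ellipsoid
mass = density * volume

print(f"volume ~ {volume:.2e} m^3")          # ~6.7e16 m^3
print(f"mass   ~ {mass:.2e} kg")             # ~1.1e20 kg
```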
119
+
120
+ Enceladus transiting the moon Titan
121
+
122
+ Size comparison of Earth, the Moon, and Enceladus
123
+
124
+ A size comparison of Enceladus against the North Sea
125
+
126
+ Mimas, the innermost of the round moons of Saturn and directly interior to Enceladus, is a geologically dead body, even though it should experience stronger tidal forces than Enceladus. This apparent paradox can be explained in part by temperature-dependent properties of water ice (the main constituent of the interiors of Mimas and Enceladus). The tidal heating per unit mass is given by the formula
127
+ {\displaystyle q_{tid}={\frac {63\rho n^{5}r^{4}e^{2}}{38\mu Q}}}
+
+ where ρ is the (mass) density of the satellite, n is its mean orbital motion, r is the satellite's radius, e is the orbital eccentricity of the satellite, μ is the shear modulus and Q is the dimensionless dissipation factor. For a same-temperature approximation, the expected value of q_tid for Mimas is about 40 times that of Enceladus. However, the material parameters μ and Q are temperature dependent. At high temperatures (close to the melting point), μ and Q are low, so tidal heating is high. Modeling suggests that for Enceladus, both a 'basic' low-energy thermal state with little internal temperature gradient, and an 'excited' high-energy thermal state with a significant temperature gradient, and consequent convection (endogenic geologic activity), once established, would be stable. For Mimas, only a low-energy state is expected to be stable, despite its being closer to Saturn. So the model predicts a low-internal-temperature state for Mimas (values of μ and Q are high) but a possible higher-temperature state for Enceladus (values of μ and Q are low).[133] Additional historical information is needed to explain how Enceladus first entered the high-energy state (e.g. more radiogenic heating or a more eccentric orbit in the past).[134]
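The "about 40 times" figure can be reproduced to order of magnitude with a short sketch, since μQ cancels in a same-temperature ratio. The orbital and physical parameters below (density, orbital period, radius and eccentricity for each moon) are representative published values and are assumptions, not quoted in this passage.

```python
import math

def mean_motion(period_days):
    """Mean orbital motion n in rad/s, from the orbital period."""
    return 2 * math.pi / (period_days * 86400.0)

def q_tid_times_muQ(rho, period_days, radius_m, ecc):
    """Returns q_tid multiplied by mu*Q (63*rho*n^5*r^4*e^2/38);
    mu*Q cancels when taking the Mimas/Enceladus ratio."""
    n = mean_motion(period_days)
    return 63.0 * rho * n**5 * radius_m**4 * ecc**2 / 38.0

mimas     = q_tid_times_muQ(rho=1150, period_days=0.942, radius_m=198e3, ecc=0.0196)
enceladus = q_tid_times_muQ(rho=1610, period_days=1.370, radius_m=252e3, ecc=0.0047)

print(f"q_tid(Mimas) / q_tid(Enceladus) ~ {mimas / enceladus:.0f}")  # a few tens
```

With these inputs the ratio comes out at roughly 30, i.e. a few tens, consistent with the quoted factor of about 40; the exact value depends on the adopted parameters.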
175
+
176
+ The significantly higher density of Enceladus relative to Mimas (1.61 vs. 1.15 g/cm³), implying a larger content of rock and more radiogenic heating in its early history, has also been cited as an important factor in resolving the Mimas paradox.[135]
177
+
178
+ It has been suggested that for an icy satellite the size of Mimas or Enceladus to enter an 'excited state' of tidal heating and convection, it would need to enter an orbital resonance before it lost too much of its primordial internal heat. Because Mimas, being smaller, would cool more rapidly than Enceladus, its window of opportunity for initiating orbital resonance-driven convection would have been considerably shorter.[136]
179
+
180
+ Enceladus is losing mass at a rate of 200 kg/second. If mass loss at this rate continued for 4.5 Gyr, the satellite would have lost approximately 30% of its initial mass. A similar value is obtained by assuming that the initial densities of Enceladus and Mimas were equal.[136] This suggests that tectonics in the south polar region is probably mainly related to subsidence and associated subduction caused by this mass loss.[137]
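A back-of-envelope check of the 30% figure (a sketch; the present-day mass of ~1.08 × 10²⁰ kg is an assumed value, not quoted in this paragraph):

```python
SECONDS_PER_YEAR = 3.156e7

lost = 200.0 * 4.5e9 * SECONDS_PER_YEAR   # 200 kg/s over 4.5 Gyr -> ~2.8e19 kg
present_mass = 1.08e20                     # assumed present mass of Enceladus, kg
initial_mass = present_mass + lost

print(f"mass lost ~ {lost:.2e} kg")
print(f"fraction of initial mass ~ {lost / initial_mass:.0%}")   # ~20%
```

This simple accounting gives a loss of order 20–30% of the initial mass, the same order as the figure quoted above; the exact fraction depends on the assumed mass history.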
181
+
182
+ In 2016, a study of how the orbits of Saturn's moons should have changed due to tidal effects suggested that all of Saturn's satellites inward of Titan, including Enceladus (whose geologic activity was used to derive the strength of tidal effects on Saturn's satellites), may have formed as little as 100 million years ago.[138]
183
+
184
+ Enceladus ejects plumes of salty water laced with grains of silica-rich sand,[139] nitrogen (in ammonia),[140] and organic molecules, including trace amounts of simple hydrocarbons such as methane (CH₄), propane (C₃H₈), acetylene (C₂H₂) and formaldehyde (CH₂O), which are carbon-bearing molecules.[98][99][141] This indicates that hydrothermal activity—an energy source—may be at work in Enceladus's subsurface ocean.[139][142] In addition, models indicate[143] that the large rocky core is porous, allowing water to flow through it, transferring heat and chemicals. This has been confirmed by observations and other research.[144][145][146] Molecular hydrogen (H₂), a geochemical source of energy that can be metabolized by methanogen microbes to provide energy for life, could be present if, as models suggest, Enceladus's salty ocean has an alkaline pH from serpentinization of chondritic rock.[105][106][107]
186
+
187
+ The presence of an internal global salty ocean with an aquatic environment supported by global ocean circulation patterns,[144] with an energy source and complex organic compounds[30] in contact with Enceladus's rocky core,[27][28][147] may advance the study of astrobiology and the study of potentially habitable environments for microbial extraterrestrial life.[26][90][91][148][149][150] The presence of a wide range of organic compounds and ammonia indicates their source may be similar to the water/rock reactions known to occur on Earth and that are known to support life.[151] Therefore, several robotic missions have been proposed to further explore Enceladus and assess its habitability; some of the proposed missions are: Journey to Enceladus and Titan (JET), Enceladus Explorer (En-Ex), Enceladus Life Finder (ELF), Life Investigation For Enceladus (LIFE), and Enceladus Life Signatures and Habitability (ELSAH).
188
+
189
+ On April 13, 2017, NASA announced the discovery of possible hydrothermal activity on Enceladus's sub-surface ocean floor. In 2015, the Cassini probe made a close flyby of Enceladus's south pole, flying within 48.3 km (30 mi) of the surface and passing through a plume in the process. A mass spectrometer on the craft detected molecular hydrogen (H₂) from the plume, and after months of analysis, it was concluded that the hydrogen was most likely the result of hydrothermal activity beneath the surface.[153] It has been speculated that such activity could be a potential oasis of habitability.[154][155][156]
190
+
191
+ The presence of ample hydrogen in Enceladus's ocean means that microbes – if any exist there – could use it to obtain energy by combining the hydrogen with carbon dioxide dissolved in the water. The chemical reaction is known as "methanogenesis" because it produces methane as a byproduct, and is at the root of the tree of life on Earth, the birthplace of all life that is known to exist.[157][158]
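The reaction referred to here is the standard hydrogenotrophic methanogenesis pathway; written out as a balanced equation (standard chemistry, not quoted in the source):

```latex
\mathrm{CO_2 + 4\,H_2 \;\longrightarrow\; CH_4 + 2\,H_2O}
```

Each mole of carbon dioxide consumes four moles of hydrogen and releases methane and water, which is why ample H₂ in the ocean would be a usable energy source for such microbes.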
192
+
193
+ The two Voyager spacecraft made the first close-up images of Enceladus. Voyager 1 was the first to fly past Enceladus, at a distance of 202,000 km on November 12, 1980.[159] Images acquired from this distance had very poor spatial resolution, but revealed a highly reflective surface devoid of impact craters, indicating a youthful surface.[160] Voyager 1 also confirmed that Enceladus was embedded in the densest part of Saturn's diffuse E ring. Combined with the apparent youthful appearance of the surface, Voyager scientists suggested that the E ring consisted of particles vented from Enceladus's surface.[160]
194
+
195
+ Voyager 2 passed closer to Enceladus (87,010 km) on August 26, 1981, allowing higher-resolution images to be obtained.[159] These images showed a young surface with regions of vastly different surface ages: a heavily cratered mid- to high-northern-latitude region, and a lightly cratered region closer to the equator.[51] This geologic diversity contrasts with the ancient, heavily cratered surface of Mimas, another moon of Saturn slightly smaller than Enceladus. The geologically youthful terrains came as a great surprise to the scientific community, because no theory was then able to predict that such a small (and cold, compared to Jupiter's highly active moon Io) celestial body could bear signs of such activity.
196
+
197
+ The answers to many remaining mysteries of Enceladus had to wait until the arrival of the Cassini spacecraft on July 1, 2004, when it entered orbit around Saturn. Given the results from the Voyager 2 images, Enceladus was considered a priority target by the Cassini mission planners, and several targeted flybys within 1,500 km of the surface were planned, as well as numerous "non-targeted" opportunities within 100,000 km of Enceladus. The flybys have yielded significant information concerning Enceladus's surface, as well as the discovery of water vapor with traces of simple hydrocarbons venting from the geologically active south polar region. These discoveries prompted the adjustment of Cassini's flight plan to allow closer flybys of Enceladus, including an encounter in March 2008 that took it to within 48 km of the surface.[164] Cassini's extended mission included seven close flybys of Enceladus between July 2008 and July 2010, including two passes at only 50 km in the latter half of 2008.[165] Cassini performed a flyby on October 28, 2015, passing as close as 49 km (30 mi) and through a plume.[166] Confirmation of molecular hydrogen (H₂) would be an independent line of evidence that hydrothermal activity is taking place on the Enceladus seafloor, increasing its habitability.[107]
198
+
199
+ Cassini has provided strong evidence that Enceladus has an ocean with an energy source, nutrients and organic molecules, making Enceladus one of the best places for the study of potentially habitable environments for extraterrestrial life.[167][168] By contrast, the water thought to be on Jupiter's moon Europa is located under a much thicker layer of ice.[169]
200
+
201
+ The discoveries Cassini made at Enceladus have prompted studies into follow-up mission concepts, including a probe flyby (Journey to Enceladus and Titan or JET) to analyze plume contents in-situ,[170][171] a lander by the German Aerospace Center to study the habitability potential of its subsurface ocean (Enceladus Explorer),[172][173][174] and two astrobiology-oriented mission concepts (the Enceladus Life Finder[175][176] and Life Investigation For Enceladus (LIFE)).[140][167][177][178]
202
+
203
+ The European Space Agency (ESA) was assessing concepts in 2008 to send a probe to Enceladus in a mission to be combined with studies of Titan: the Titan Saturn System Mission (TSSM).[179] TSSM was a joint NASA/ESA flagship-class proposal for the exploration of Saturn's moons, with a focus on Enceladus, and it was competing against the Europa Jupiter System Mission (EJSM) proposal for funding. In February 2009, it was announced that NASA/ESA had given the EJSM mission priority ahead of TSSM,[180] although TSSM would continue to be studied and evaluated.
204
+
205
+ In November 2017, Russian billionaire Yuri Milner expressed interest in funding a "low-cost, privately funded mission to Enceladus which can be launched relatively soon."[181][182] In September 2018, NASA and the Breakthrough Initiatives, founded by Milner, signed a cooperation agreement for the mission's initial concept phase.[183] The spacecraft would be low-cost and low-mass, and would be launched at high speed on an affordable rocket. The spacecraft would be directed to perform a single flyby through Enceladus's plumes in order to sample and analyze their content for biosignatures.[184][185] NASA was to provide scientific and technical expertise through various reviews, from March 2019 to December 2019.[186]
206
+
207
+ Informational notes
208
+
209
+ Citations
210
+
211
+ Further reading
212
+
en/1735.html.txt ADDED
@@ -0,0 +1,212 @@
1
+
2
+
3
+ Enceladus (/ɛnˈsɛlədəs/) is the sixth-largest moon of Saturn. It is about 500 kilometers (310 mi) in diameter,[5] about a tenth of that of Saturn's largest moon, Titan. Enceladus is mostly covered by fresh, clean ice, making it one of the most reflective bodies of the Solar System. Consequently, its surface temperature at noon only reaches −198 °C (−324 °F), far colder than a light-absorbing body would be. Despite its small size, Enceladus has a wide range of surface features, ranging from old, heavily cratered regions to young, tectonically deformed terrains.
4
+
5
+ Enceladus was discovered on August 28, 1789, by William Herschel,[1][17][18] but little was known about it until the two Voyager spacecraft, Voyager 1 and Voyager 2, passed nearby in 1980 and 1981.[19] In 2005, the Cassini spacecraft started multiple close flybys of Enceladus, revealing its surface and environment in greater detail. In particular, Cassini discovered water-rich plumes venting from the south polar region.[20] Cryovolcanoes near the south pole shoot geyser-like jets of water vapor, molecular hydrogen, other volatiles, and solid material, including sodium chloride crystals and ice particles, into space, totaling about 200 kg (440 lb) per second.[16][19][21] Over 100 geysers have been identified.[22] Some of the water vapor falls back as "snow"; the rest escapes, and supplies most of the material making up Saturn's E ring.[23][24] According to NASA scientists, the plumes are similar in composition to comets.[25] In 2014, NASA reported that Cassini found evidence for a large south polar subsurface ocean of liquid water with a thickness of around 10 km (6 mi).[26][27][28]
6
+
7
+ These geyser observations, along with the finding of escaping internal heat and very few (if any) impact craters in the south polar region, show that Enceladus is currently geologically active. Like many other satellites in the extensive systems of the giant planets, Enceladus is trapped in an orbital resonance. Its resonance with Dione excites its orbital eccentricity, which is damped by tidal forces, tidally heating its interior and driving the geological activity.[29]
8
+
9
+ On June 27, 2018, scientists reported the detection of complex macromolecular organics in Enceladus's jet plumes, as sampled by the Cassini orbiter.[30][31]
10
+
11
+ Enceladus was discovered by William Herschel on August 28, 1789, during the first use of his new 1.2 m (47 in) 40-foot telescope, then the largest in the world, at Observatory House in Slough, England.[18][32] Its faint apparent magnitude (H_V = +11.7) and its proximity to the much brighter Saturn and Saturn's rings make Enceladus difficult to observe from Earth with smaller telescopes. Like many satellites of Saturn discovered prior to the Space Age, Enceladus was first observed during a Saturnian equinox, when Earth is within the ring plane. At such times, the reduction in glare from the rings makes the moons easier to observe.[33] Prior to the Voyager missions the view of Enceladus improved little from the dot first observed by Herschel. Only its orbital characteristics were known, with estimations of its mass, density and albedo.
12
+
13
+ Enceladus is named after the giant Enceladus of Greek mythology.[1] The name, like the names of each of the first seven satellites of Saturn to be discovered, was suggested by William Herschel's son John Herschel in his 1847 publication Results of Astronomical Observations made at the Cape of Good Hope.[34] He chose these names because Saturn, known in Greek mythology as Cronus, was the leader of the Titans.
14
+
15
+ Features on Enceladus are named by the International Astronomical Union (IAU) after characters and places from Burton's translation of The Book of One Thousand and One Nights.[35] Impact craters are named after characters, whereas other feature types, such as fossae (long, narrow depressions), dorsa (ridges), planitiae (plains), sulci (long parallel grooves), and rupes (cliffs) are named after places. The IAU has officially named 85 features on Enceladus, most recently Samaria Rupes, formerly called Samaria Fossa.[36][37]
16
+
17
+ Enceladus is one of the major inner satellites of Saturn along with Dione, Tethys, and Mimas. It orbits at 238,000 km from Saturn's center and 180,000 km from its cloud tops, between the orbits of Mimas and Tethys. It orbits Saturn every 32.9 hours, fast enough for its motion to be observed over a single night of observation. Enceladus is currently in a 2:1 mean-motion orbital resonance with Dione, completing two orbits around Saturn for every one orbit completed by Dione. This resonance maintains Enceladus's orbital eccentricity (0.0047), which is known as a forced eccentricity. This non-zero eccentricity results in tidal deformation of Enceladus. The dissipated heat resulting from this deformation is the main heating source for Enceladus's geologic activity.[6] Enceladus orbits within the densest part of Saturn's E ring, the outermost of its major rings, and is the main source of the ring's material composition.[38]
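The quoted 32.9-hour period follows from Kepler's third law given the quoted orbital distance; a minimal sketch (Saturn's gravitational parameter GM ≈ 3.793 × 10¹⁶ m³/s² is an assumed standard value, not from this passage):

```python
import math

GM_SATURN = 3.793e16   # assumed gravitational parameter of Saturn, m^3/s^2
a = 238000e3           # orbital distance from Saturn's center quoted above, m

# Kepler's third law: T = 2*pi*sqrt(a^3 / GM)
period_s = 2 * math.pi * math.sqrt(a**3 / GM_SATURN)
print(f"orbital period ~ {period_s / 3600:.1f} h")   # ~32.9 h
```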
18
+
19
+ Like most of Saturn's larger satellites, Enceladus rotates synchronously with its orbital period, keeping one face pointed toward Saturn. Unlike Earth's Moon, Enceladus does not appear to librate more than 1.5° about its spin axis. However, analysis of the shape of Enceladus suggests that at some point it was in a 1:4 forced secondary spin–orbit libration.[6] This libration could have provided Enceladus with an additional heat source.[29][39][40]
22
+
23
+ Plumes from Enceladus, which are similar in composition to comets,[25] have been shown to be the source of the material in Saturn's E ring.[23] The E ring is the widest and outermost ring of Saturn (except for the tenuous Phoebe ring). It is an extremely wide but diffuse disk of microscopic icy or dusty material distributed between the orbits of Mimas and Titan.[41]
24
+
25
+ Mathematical models show that the E ring is unstable, with a lifespan between 10,000 and 1,000,000 years; therefore, particles composing it must be constantly replenished.[42] Enceladus orbits inside the ring, at its narrowest but densest point. In the 1980s some scientists suspected that Enceladus was the main source of particles for the ring.[43][44][45][46] This hypothesis was confirmed by Cassini's first two close flybys in 2005.[47][48]
26
+
27
+ The CDA "detected a large increase in the number of particles near Enceladus", confirming Enceladus as the primary source for the E ring.[47] Analysis of the CDA and INMS data suggest that the gas cloud Cassini flew through during the July encounter, and observed from a distance with its magnetometer and UVIS, was actually a water-rich cryovolcanic plume, originating from vents near the south pole.[49]
28
+ Visual confirmation of venting came in November 2005, when ISS imaged geyser-like jets of icy particles rising from Enceladus's south polar region.[6][24] (Although the plume was imaged before, in January and February 2005, additional studies of the camera's response at high phase angles, when the Sun is almost behind Enceladus, and comparison with equivalent high-phase-angle images taken of other Saturnian satellites, were required before this could be confirmed.[50])
29
+
30
+ Enceladus orbiting within Saturn's E ring
31
+
32
+ Enceladus geyser tendrils - comparison of images ("a";"c") with computer simulations
33
+
34
+ Enceladus south polar region - locations of most active tendril-producing geysers
35
+
36
+ Voyager 2 was the first spacecraft to observe Enceladus's surface in detail, in August 1981. Examination of the resulting highest-resolution imagery revealed at least five different types of terrain, including several regions of cratered terrain, regions of smooth (young) terrain, and lanes of ridged terrain often bordering the smooth areas.[51] In addition, extensive linear cracks[52] and scarps were observed. Given the relative lack of craters on the smooth plains, these regions are probably less than a few hundred million years old. Accordingly, Enceladus must have been recently active with "water volcanism" or other processes that renew the surface.[53] The fresh, clean ice that dominates its surface gives Enceladus the most reflective surface of any body in the Solar System, with a visual geometric albedo of 1.38[10] and bolometric Bond albedo of 0.81±0.04.[11] Because it reflects so much sunlight, its surface only reaches a mean noon temperature of −198 °C (−324 °F), somewhat colder than other Saturnian satellites.[12]
37
+
38
+ Observations during three flybys by Cassini on February 17, March 9, and July 14, 2005, revealed Enceladus's surface features in much greater detail than the Voyager 2 observations. The smooth plains, which Voyager 2 had observed, resolved into relatively crater-free regions filled with numerous small ridges and scarps. Numerous fractures were found within the older, cratered terrain, suggesting that the surface has been subjected to extensive deformation since the craters were formed.[54] Some areas contain no craters, indicating major resurfacing events in the geologically recent past. There are fissures, plains, corrugated terrain and other crustal deformations. Several additional regions of young terrain were discovered in areas not well-imaged by either Voyager spacecraft, such as the bizarre terrain near the south pole.[6] All of this indicates that Enceladus's interior is liquid today, even though it should have been frozen long ago.[53]
39
+
40
+ Impact cratering is a common occurrence on many Solar System bodies. Much of Enceladus's surface is covered with craters at various densities and levels of degradation.[55] This subdivision of cratered terrains on the basis of crater density (and thus surface age) suggests that Enceladus has been resurfaced in multiple stages.[53]
41
+
42
+ Cassini observations provided a much closer look at the crater distribution and size, showing that many of Enceladus's craters are heavily degraded through viscous relaxation and fracturing.[56] Viscous relaxation allows gravity, over geologic time scales, to deform craters and other topographic features formed in water ice, reducing the amount of topography over time. The rate at which this occurs is dependent on the temperature of the ice: warmer ice is easier to deform than colder, stiffer ice. Viscously relaxed craters tend to have domed floors, or are recognized as craters only by a raised, circular rim. Dunyazad crater is a prime example of a viscously relaxed crater on Enceladus, with a prominent domed floor.[57]
43
+
44
+ Voyager 2 found several types of tectonic features on Enceladus, including troughs, scarps, and belts of grooves and ridges.[51] Results from Cassini suggest that tectonics is the dominant mode of deformation on Enceladus, including rifts, one of the more dramatic types of tectonic features that were noted. These canyons can be up to 200 km long, 5–10 km wide, and 1 km deep. Such features are geologically young, because they cut across other tectonic features and have sharp topographic relief with prominent outcrops along the cliff faces.[58]
45
+
46
+ Evidence of tectonics on Enceladus is also derived from grooved terrain, consisting of lanes of curvilinear grooves and ridges. These bands, first discovered by Voyager 2, often separate smooth plains from cratered regions.[51] Grooved terrains such as the Samarkand Sulci are reminiscent of grooved terrain on Ganymede. However, unlike those seen on Ganymede, grooved topography on Enceladus is generally more complex. Rather than parallel sets of grooves, these lanes often appear as bands of crudely aligned, chevron-shaped features. In other areas, these bands bow upwards with fractures and ridges running the length of the feature. Cassini observations of the Samarkand Sulci have revealed dark spots (125 and 750 m wide) located parallel to the narrow fractures. Currently, these spots are interpreted as collapse pits within these ridged plain belts.[56]
47
+
48
+ In addition to deep fractures and grooved lanes, Enceladus has several other types of tectonic terrain. Many of these fractures are found in bands cutting across cratered terrain. These fractures probably propagate down only a few hundred meters into the crust. Many have probably been influenced during their formation by the weakened regolith produced by impact craters, often changing the strike of the propagating fracture.[56][59] Other examples of tectonic features on Enceladus are the linear grooves first found by Voyager 2 and seen at a much higher resolution by Cassini. These linear grooves can be seen cutting across other terrain types, like the groove and ridge belts. Like the deep rifts, they are among the youngest features on Enceladus. However, some linear grooves have been softened like the craters nearby, suggesting that they are older. Ridges have also been observed on Enceladus, though not nearly to the extent of those seen on Europa. These ridges are relatively limited in extent and are up to one kilometer tall. One-kilometer-high domes have also been observed.[56] Given the level of resurfacing found on Enceladus, it is clear that tectonic movement has been an important driver of geology for much of its history.[58]
49
+
50
+ Two regions of smooth plains were observed by Voyager 2. They generally have low relief and far fewer craters than the cratered terrains, indicating a relatively young surface age.[55] In one of the smooth plain regions, Sarandib Planitia, no impact craters were visible down to the limit of resolution. Another region of smooth plains to the southwest of Sarandib is criss-crossed by several troughs and scarps. Cassini has since viewed these smooth plains regions, like Sarandib Planitia and Diyar Planitia, at much higher resolution. Cassini images show these regions filled with low-relief ridges and fractures, probably caused by shear deformation.[56] The high-resolution images of Sarandib Planitia revealed a number of small impact craters, which allow for an estimate of the surface age, either 170 million years or 3.7 billion years, depending on the assumed impactor population.[6][b]
51
+
52
+ The expanded surface coverage provided by Cassini has allowed for the identification of additional regions of smooth plains, particularly on Enceladus's leading hemisphere (the side of Enceladus that faces the direction of motion as it orbits Saturn). Rather than being covered in low-relief ridges, this region is covered in numerous criss-crossing sets of troughs and ridges, similar to the deformation seen in the south polar region. This area is on the opposite side of Enceladus from Sarandib and Diyar Planitiae, suggesting that the placement of these regions is influenced by Saturn's tides on Enceladus.[60]
53
+
54
+ Images taken by Cassini during the flyby on July 14, 2005, revealed a distinctive, tectonically deformed region surrounding Enceladus's south pole. This area, reaching as far north as 60° south latitude, is covered in tectonic fractures and ridges.[6][61] The area has few sizable impact craters, suggesting that it is the youngest surface on Enceladus and on any of the mid-sized icy satellites; modeling of the cratering rate suggests that some regions of the south polar terrain are possibly as young as 500,000 years or less.[6] Near the center of this terrain are four fractures bounded by ridges, unofficially called "tiger stripes".[62] They appear to be the youngest features in this region and are surrounded by mint-green-colored (in false color, UV–green–near IR images), coarse-grained water ice, seen elsewhere on the surface within outcrops and fracture walls.[61] Here the "blue" ice is on a flat surface, indicating that the region is young enough not to have been coated by fine-grained water ice from the E ring. Results from the visual and infrared spectrometer (VIMS) instrument suggest that the green-colored material surrounding the tiger stripes is chemically distinct from the rest of the surface of Enceladus. VIMS detected crystalline water ice in the stripes, suggesting that they are quite young (likely less than 1,000 years old) or the surface ice has been thermally altered in the recent past.[63] VIMS also detected simple organic (carbon-containing) compounds in the tiger stripes, chemistry not found anywhere else on Enceladus thus far.[64]
55
+
56
+ One of these areas of "blue" ice in the south polar region was observed at high resolution during the July 14, 2005 flyby, revealing an area of extreme tectonic deformation and blocky terrain, with some areas covered in boulders 10–100 m across.[65]
57
+
58
+ The boundary of the south polar region is marked by a pattern of parallel, Y- and V-shaped ridges and valleys. The shape, orientation, and location of these features suggest they are caused by changes in the overall shape of Enceladus. As of 2006 there were two theories for what could cause such a shift in shape: the orbit of Enceladus may have migrated inward, increasing Enceladus's rotation rate and leading to a more oblate shape;[6] or a rising mass of warm, low-density material in Enceladus's interior may have led to a shift in the position of the current south polar terrain from Enceladus's southern mid-latitudes to its south pole.[60] Consequently, the moon's ellipsoid shape would have adjusted to match the new orientation. One problem with the polar flattening hypothesis is that both polar regions should have similar tectonic deformation histories.[6] However, the north polar region is densely cratered, and has a much older surface age than the south pole.[55] Variations in the thickness of Enceladus's lithosphere are one explanation for this discrepancy. Variations in lithospheric thickness are supported by the correlation between the Y-shaped discontinuities and the V-shaped cusps along the south polar terrain margin and the relative surface age of the adjacent non-south polar terrain regions. The Y-shaped discontinuities, and the north–south trending tension fractures into which they lead, are correlated with younger terrain with presumably thinner lithospheres. The V-shaped cusps are adjacent to older, more heavily cratered terrains.[6]
59
+
60
+ Following Voyager's encounters with Enceladus in the early 1980s, scientists postulated it to be geologically active based on its young, reflective surface and location near the core of the E ring.[51] Based on the connection between Enceladus and the E ring, scientists suspected that Enceladus was the source of material in the E ring, perhaps through venting of water vapor.[43][44] Readings from Cassini's 2005 passage suggested that cryovolcanism, where water and other volatiles are the materials erupted instead of silicate rock, had been discovered on Enceladus. The first Cassini sighting of a plume of icy particles above Enceladus's south pole came from the Imaging Science Subsystem (ISS) images taken in January and February 2005,[6] though the possibility of a camera artifact delayed an official announcement. Data from the magnetometer instrument during the February 17, 2005, encounter provided evidence for a planetary atmosphere. The magnetometer observed a deflection or "draping" of the magnetic field, consistent with local ionization of neutral gas. In addition, an increase in the power of ion cyclotron waves near the orbit of Enceladus was observed, which was further evidence of the ionization of neutral gas. These waves are produced by the interaction of ionized particles and magnetic fields, and the waves' frequency is close to the gyrofrequency of the freshly produced ions, in this case water vapor.[15] During the two following encounters, the magnetometer team determined that gases in Enceladus's atmosphere are concentrated over the south polar region, with atmospheric density away from the pole being much lower.[15] The Ultraviolet Imaging Spectrograph (UVIS) confirmed this result by observing two stellar occultations during the February 17 and July 14 encounters. Unlike the magnetometer, UVIS failed to detect an atmosphere above Enceladus during the February encounter when it looked over the equatorial region, but did detect water vapor during an occultation over the south polar region during the July encounter.[16]
61
+
62
+ Cassini flew through this gas cloud on a few encounters, allowing instruments such as the ion and neutral mass spectrometer (INMS) and the cosmic dust analyzer (CDA) to directly sample the plume. (See 'Composition' section.) The November 2005 images showed the plume's fine structure, revealing numerous jets (perhaps issuing from numerous distinct vents) within a larger, faint component extending out nearly 500 km from the surface.[49] The particles have a bulk velocity of 1.25 ± 0.1 km/s,[66] and a maximum velocity of 3.40 km/s.[67] Cassini's UVIS later observed gas jets coinciding with the dust jets seen by ISS during a non-targeted encounter with Enceladus in October 2007.
63
+
64
+ The combined analysis of imaging, mass spectrometry, and magnetospheric data suggests that the observed south polar plume emanates from pressurized subsurface chambers, similar to Earth's geysers or fumaroles.[6] Fumaroles are probably the closer analogy, since periodic or episodic emission is an inherent property of geysers. The plumes of Enceladus were observed to be continuous to within a factor of a few. The mechanism that drives and sustains the eruptions is thought to be tidal heating.[68] The intensity of the eruption of the south polar jets varies significantly as a function of the position of Enceladus in its orbit. The plumes are about four times brighter when Enceladus is at apoapsis (the point in its orbit most distant from Saturn) than when it is at periapsis.[69][70][71] This is consistent with geophysical calculations which predict the south polar fissures are under compression near periapsis, pushing them shut, and under tension near apoapsis, pulling them open.[72]
65
+
66
+ Much of the plume activity consists of broad curtain-like eruptions. Optical illusions from a combination of viewing direction and local fracture geometry previously made the plumes look like discrete jets.[73][74][75]
67
+
68
+ The extent to which cryovolcanism really occurs is a subject of some debate, as water, being denser than ice by about 8%, has difficulty erupting under normal circumstances. At Enceladus, it appears that cryovolcanism occurs because water-filled cracks are periodically exposed to vacuum, the cracks being opened and closed by tidal stresses.[76][77][78]
69
+
70
+ Enceladus – plume animation (00:48)
71
+
72
+ Enceladus and south polar jets (April 13, 2017).
73
+
74
+ Plumes above the limb of Enceladus feeding the E ring
75
+
76
+ A false-color Cassini image of the jets
77
+
78
+ Before the Cassini mission, little was known about the interior of Enceladus. However, flybys by Cassini provided information for models of Enceladus's interior, including a better determination of the mass and shape, high-resolution observations of the surface, and new insights on the interior.[79][80]
79
+
80
+ Mass estimates from the Voyager program missions suggested that Enceladus was composed almost entirely of water ice.[51] However, based on the effects of Enceladus's gravity on Cassini, its mass was determined to be much higher than previously thought, yielding a density of 1.61 g/cm3.[6] This density is higher than that of Saturn's other mid-sized icy satellites, indicating that Enceladus contains a greater percentage of silicates and iron.
81
+
82
+ Castillo et al. (2005) suggested that Iapetus and the other icy satellites of Saturn formed relatively quickly after the formation of the Saturnian subnebula, and thus were rich in short-lived radionuclides.[81][82] These radionuclides, like aluminium-26 and iron-60, have short half-lives and would produce interior heating relatively quickly. Without the short-lived variety, Enceladus's complement of long-lived radionuclides would not have been enough to prevent rapid freezing of the interior, even with Enceladus's comparatively high rock–mass fraction, given its small size.[83] Given Enceladus's relatively high rock–mass fraction, the proposed enhancement in 26Al and 60Fe would result in a differentiated body, with an icy mantle and a rocky core.[84][82] Subsequent radioactive and tidal heating would raise the temperature of the core to 1,000 K, enough to melt the inner mantle. However, for Enceladus to still be active, part of the core must have also melted, forming magma chambers that would flex under the strain of Saturn's tides. Tidal heating, such as from the resonance with Dione or from libration, would then have sustained these hot spots in the core and would power the current geological activity.[40][85]
83
+
84
+ In addition to its mass and modeled geochemistry, researchers have also examined Enceladus's shape to determine if it is differentiated. Porco et al. (2006) used limb measurements to determine that its shape, assuming hydrostatic equilibrium, is consistent with an undifferentiated interior, in contradiction to the geological and geochemical evidence.[6] However, the current shape also supports the possibility that Enceladus is not in hydrostatic equilibrium, and may have rotated faster at some point in the recent past (with a differentiated interior).[84] Gravity measurements by Cassini show that the density of the core is low, indicating that the core contains water in addition to silicates.[86]
85
+
86
+ Evidence of liquid water on Enceladus began to accumulate in 2005, when scientists observed plumes containing water vapor spewing from its south polar surface,[6][87] with jets moving 250 kg of water vapor every second[87] at up to 2,189 km/h (1,360 mph) into space.[88] Soon after, in 2006 it was determined that Enceladus's plumes are the source of Saturn's E Ring.[6][47] The sources of salty particles are uniformly distributed along the tiger stripes, whereas sources of "fresh" particles are closely related to the high-speed gas jets. The "salty" particles are heavier and mostly fall back to the surface, whereas the fast "fresh" particles escape to the E ring, explaining its salt-poor composition of 0.5–2% of sodium salts by mass.[89]
87
+
88
+ Gravimetric data from Cassini's December 2010 flybys showed that Enceladus likely has a liquid water ocean beneath its frozen surface, but at the time it was thought the subsurface ocean was limited to the south pole.[26][27][28][90] The top of the ocean probably lies beneath an ice shelf 30 to 40 kilometers (19 to 25 mi) thick. The ocean may be 10 kilometers (6.2 mi) deep at the south pole.[26][91]
89
+
90
+ Measurements of Enceladus's "wobble" as it orbits Saturn—called libration—suggest that the entire icy crust is detached from the rocky core and therefore that a global ocean is present beneath the surface.[92] The amount of libration (0.120° ± 0.014°) implies that this global ocean is about 26 to 31 kilometers (16 to 19 mi) deep.[93][94][95][96] For comparison, Earth's ocean has an average depth of 3.7 kilometers.[95]
91
+
92
+ The Cassini spacecraft flew through the southern plumes on several occasions to sample and analyze their composition. As of 2019, the data gathered is still being analyzed and interpreted. The plumes' salty composition (Na⁺, Cl⁻, CO₃²⁻ ions) indicates that the source is a salty subsurface ocean.[97]
93
+
94
+ The INMS instrument detected mostly water vapor, as well as traces of molecular nitrogen, carbon dioxide,[14] and trace amounts of simple hydrocarbons such as methane, propane, acetylene and formaldehyde.[98][99] The plumes' composition, as measured by the INMS, is similar to that seen at most comets.[99] Cassini also found traces of simple organic compounds in some dust grains,[89][100] as well as larger organics such as benzene (C6H6),[101] and complex macromolecular organics as large as 200 atomic mass units[30][31] and at least 15 carbon atoms in size.[102]
95
+
96
+ The mass spectrometer detected molecular hydrogen (H2) which was in "thermodynamic disequilibrium" with the other components,[103] and found traces of ammonia (NH3).[104]
97
+
98
+ A model suggests that Enceladus's salty ocean (containing Na⁺, Cl⁻ and CO₃²⁻ ions) has an alkaline pH of 11 to 12.[105][106] The high pH is interpreted to be a consequence of serpentinization of chondritic rock that leads to the generation of H2, a geochemical source of energy that could support both abiotic and biological synthesis of organic molecules such as those that have been detected in Enceladus's plumes.[105][107]
99
+
100
+ In 2019, further analysis was made of the spectral characteristics of ice grains in Enceladus's erupting plumes. The study found that nitrogen-bearing and oxygen-bearing amines were likely present, with significant implications for the availability of amino acids in the internal ocean. The researchers suggested that the compounds on Enceladus could be precursors for "biologically relevant organic compounds".[108][109]
101
+
102
+ During the flyby of July 14, 2005, the Composite Infrared Spectrometer (CIRS) found a warm region near the south pole. Temperatures in this region ranged from 85 to 90 K, with small areas reaching as high as 157 K (−116 °C), much too warm to be explained by solar heating, indicating that parts of the south polar region are heated from the interior of Enceladus.[12] The presence of a subsurface ocean under the south polar region is now accepted,[110] but it cannot explain the source of the heat, with an estimated heat flux of 200 mW/m2, which is about 10 times higher than that from radiogenic heating alone.[111]
103
+
104
+ Several explanations for the observed elevated temperatures and the resulting plumes have been proposed, including venting from a subsurface reservoir of liquid water, sublimation of ice,[112] decompression and dissociation of clathrates, and shear heating,[113] but a complete explanation of all the heat sources causing the observed thermal power output of Enceladus has not yet been settled.
105
+
106
+ Heating in Enceladus has occurred through various mechanisms ever since its formation. Radioactive decay in its core may have heated it initially,[114] giving it a warm core and a subsurface ocean, which is now kept above freezing through an unidentified mechanism. Geophysical models indicate that tidal heating is a main heat source, perhaps aided by radioactive decay and some heat-producing chemical reactions.[115][116][117][118] A 2007 study predicted that the internal heat of Enceladus, if generated by tidal forces, could be no greater than 1.1 gigawatts,[119] but 16 months of data from Cassini's infrared spectrometer of the south polar terrain indicate that the internally generated heat output is about 4.7 gigawatts,[119] and suggest that Enceladus is in thermal equilibrium.[12][63][120]
107
+
108
+ The observed power output of 4.7 gigawatts is challenging to explain from tidal heating alone, so the main source of heat remains a mystery.[6][115] Most scientists think the observed heat flux of Enceladus is not enough to maintain the subsurface ocean, and therefore any subsurface ocean must be a remnant of a period of higher eccentricity and tidal heating, or the heat is produced through another mechanism.[121][122]
109
+
110
+ Tidal heating occurs through tidal friction processes: orbital and rotational energy are dissipated as heat in the crust of an object. In addition, to the extent that tides produce heat along fractures, libration may affect the magnitude and distribution of such tidal shear heating.[40] Tidal dissipation of Enceladus's ice crust is significant because Enceladus has a subsurface ocean. A computer simulation that used data from Cassini was published in November 2017, and it indicates that friction heat from sliding rock fragments within the permeable and fragmented core of Enceladus could keep its underground ocean warm for billions of years.[123][124][125] It is thought that if Enceladus had a more eccentric orbit in the past, the enhanced tidal forces could be sufficient to maintain a subsurface ocean, such that a periodic enhancement in eccentricity could maintain a subsurface ocean that periodically changes in size.[122] A more recent analysis claimed that "a model of the tiger stripes as tidally flexed slots that puncture the ice shell can simultaneously explain the persistence of the eruptions through the tidal cycle, the phase lag, and the total power output of the tiger stripe terrain, while suggesting that eruptions are maintained over geological timescales."[68] Previous models suggest that resonant perturbations of Dione could provide the necessary periodic eccentricity changes to maintain the subsurface ocean of Enceladus, if the ocean contains a substantial amount of ammonia.[6] The surface of Enceladus indicates that the entire moon has experienced periods of enhanced heat flux in the past.[126]
111
+
112
+ The "hot start" model of heating suggests Enceladus began as ice and rock that contained rapidly decaying short-lived radioactive isotopes of aluminium, iron and manganese. Enormous amounts of heat were then produced as these isotopes decayed for about 7 million years, resulting in the consolidation of rocky material at the core surrounded by a shell of ice. Although the heat from radioactivity would decrease over time, the combination of radioactivity and tidal forces from Saturn's gravitational tug could prevent the subsurface ocean from freezing.[114] The present-day radiogenic heating rate is 3.2 × 1015 ergs/s (or 0.32 gigawatts), assuming Enceladus has a composition of ice, iron and silicate materials.[6] Heating from long-lived radioactive isotopes uranium-238, uranium-235, thorium-232 and potassium-40 inside Enceladus would add 0.3 gigawatts to the observed heat flux.[115] The presence of Enceladus's regionally thick subsurface ocean suggests a heat flux ~10 times higher than that from radiogenic heating in the silicate core.[66]
113
+
114
+ Because INMS and UVIS initially found no ammonia, which could act as an antifreeze, in the vented material, it was thought such a heated, pressurized chamber would consist of nearly pure liquid water with a temperature of at least 270 K (−3 °C), because pure water requires more energy to melt.
115
+
116
+ In July 2009 it was announced that traces of ammonia had been found in the plumes during flybys in July and October 2008.[104][127] Reducing the freezing point of water with ammonia would also allow for outgassing and higher gas pressure,[128] and less heat required to power the water plumes.[129] The subsurface layer heating the surface water ice could be an ammonia–water slurry at temperatures as low as 170 K (−103 °C), and thus less energy is required to produce the plume activity. However, the observed 4.7 gigawatts heat flux is enough to power the cryovolcanism without the presence of ammonia.[119][129]
117
+
118
+ Enceladus is a relatively small satellite composed of ice and rock.[130] It is a scalene ellipsoid in shape; its diameters, calculated from images taken by Cassini's ISS (Imaging Science Subsystem) instrument, are 513 km between the sub- and anti-Saturnian poles, 503 km between the leading and trailing hemispheres, and 497 km between the north and south poles.[6] Enceladus is only one-seventh the diameter of Earth's Moon. It ranks sixth in both mass and size among the satellites of Saturn, after Titan (5,150 km), Rhea (1,530 km), Iapetus (1,440 km), Dione (1,120 km) and Tethys (1,050 km).[131][132]
119
+
120
+ Enceladus transiting the moon Titan
121
+
122
+ Size comparison of Earth, the Moon, and Enceladus
123
+
124
+ A size comparison of Enceladus against the North Sea
125
+
126
+ Mimas, the innermost of the round moons of Saturn and directly interior to Enceladus, is a geologically dead body, even though it should experience stronger tidal forces than Enceladus. This apparent paradox can be explained in part by temperature-dependent properties of water ice (the main constituent of the interiors of Mimas and Enceladus). The tidal heating per unit mass is given by the formula
127
+ {\displaystyle q_{tid}={\frac {63\rho n^{5}r^{4}e^{2}}{38\mu Q}}}
128
+
129
+ where ρ is the (mass) density of the satellite, n is its mean orbital motion, r is the satellite's radius, e is the orbital eccentricity of the satellite, μ is the shear modulus and Q is the dimensionless dissipation factor. For a same-temperature approximation, the expected value of q_tid for Mimas is about 40 times that of Enceladus. However, the material parameters μ and Q are temperature dependent. At high temperatures (close to the melting point), μ and Q are low, so tidal heating is high. Modeling suggests that for Enceladus, both a 'basic' low-energy thermal state with little internal temperature gradient, and an 'excited' high-energy thermal state with a significant temperature gradient and consequent convection (endogenic geologic activity), once established, would be stable. For Mimas, only a low-energy state is expected to be stable, despite its being closer to Saturn. So the model predicts a low-internal-temperature state for Mimas (values of μ and Q are high) but a possible higher-temperature state for Enceladus (values of μ and Q are low).[133] Additional historical information is needed to explain how Enceladus first entered the high-energy state (e.g. more radiogenic heating or a more eccentric orbit in the past).[134]
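+ For illustration, the Mimas-to-Enceladus ratio can be reproduced to order of magnitude by evaluating the formula with rough published values for each moon. The parameter values below are approximations chosen for this sketch, not taken from this article, and μQ is assumed equal for both bodies so that it cancels out of the ratio:
+
+ import math
+
+ # Approximate parameters (illustrative): density [kg/m^3], orbital period [days],
+ # radius [km], orbital eccentricity.
+ mimas     = dict(rho=1150, period_days=0.942, radius_km=198, e=0.0196)
+ enceladus = dict(rho=1610, period_days=1.370, radius_km=252, e=0.0047)
+
+ def q_tid_relative(m):
+     """Tidal heating per unit mass, up to the common factor 63/(38*mu*Q)."""
+     n = 2 * math.pi / (m["period_days"] * 86400)  # mean motion [rad/s]
+     r = m["radius_km"] * 1e3                      # radius [m]
+     return m["rho"] * n**5 * r**4 * m["e"]**2
+
+ ratio = q_tid_relative(mimas) / q_tid_relative(enceladus)
+ print(round(ratio))  # -> ~31 with these inputs; same order as the quoted factor of ~40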
175
+
176
+ The significantly higher density of Enceladus relative to Mimas (1.61 vs. 1.15 g/cm3), implying a larger content of rock and more radiogenic heating in its early history, has also been cited as an important factor in resolving the Mimas paradox.[135]
177
+
178
+ It has been suggested that for an icy satellite the size of Mimas or Enceladus to enter an 'excited state' of tidal heating and convection, it would need to enter an orbital resonance before it lost too much of its primordial internal heat. Because Mimas, being smaller, would cool more rapidly than Enceladus, its window of opportunity for initiating orbital resonance-driven convection would have been considerably shorter.[136]
179
+
180
+ Enceladus is losing mass at a rate of 200 kg/second. If mass loss at this rate continued for 4.5 Gyr, the satellite would have lost approximately 30% of its initial mass. A similar value is obtained by assuming that the initial densities of Enceladus and Mimas were equal.[136] This suggests that tectonics in the south polar region are probably mainly related to subsidence and associated subduction caused by this process of mass loss.[137]
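+ A back-of-the-envelope check of that estimate (a sketch; the present-day mass of roughly 1.08 × 10²⁰ kg is an assumed outside value, not stated in this article):
+
+ SECONDS_PER_YEAR = 3.156e7
+ lost_kg = 200 * 4.5e9 * SECONDS_PER_YEAR   # about 2.8e19 kg lost over 4.5 Gyr
+ present_mass_kg = 1.08e20                  # assumed approximate current mass
+ initial_mass_kg = present_mass_kg + lost_kg
+ print(round(lost_kg / initial_mass_kg, 2)) # -> ~0.21, the same order as the ~30% above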
181
+
182
+ In 2016, a study of how the orbits of Saturn's moons should have changed due to tidal effects suggested that all of Saturn's satellites inward of Titan, including Enceladus (whose geologic activity was used to derive the strength of tidal effects on Saturn's satellites), may have formed as little as 100 million years ago.[138]
183
+
184
+ Enceladus ejects plumes of salted water laced with grains of silica-rich sand,[139] nitrogen (in ammonia),[140] and organic molecules, including trace amounts of simple hydrocarbons such as methane (CH4), propane (C3H8), acetylene (C2H2) and formaldehyde (CH2O), which are carbon-bearing molecules.[98][99][141] This indicates that hydrothermal activity, an energy source, may be at work in Enceladus's subsurface ocean.[139][142] In addition, models indicate[143] that the large rocky core is porous, allowing water to flow through it, transferring heat and chemicals; this has been confirmed by observations and other research.[144][145][146] Molecular hydrogen (H2), a geochemical source of energy that can be metabolized by methanogen microbes to provide energy for life, could be present if, as models suggest, Enceladus's salty ocean has an alkaline pH from serpentinization of chondritic rock.[105][106][107]
186
+
187
+ The presence of an internal global salty ocean with an aquatic environment supported by global ocean circulation patterns,[144] with an energy source and complex organic compounds[30] in contact with Enceladus's rocky core,[27][28][147] may advance the study of astrobiology and the study of potentially habitable environments for microbial extraterrestrial life.[26][90][91][148][149][150] The presence of a wide range of organic compounds and ammonia indicates their source may be similar to the water/rock reactions known to occur on Earth and that are known to support life.[151] Therefore, several robotic missions have been proposed to further explore Enceladus and assess its habitability; some of the proposed missions are: Journey to Enceladus and Titan (JET), Enceladus Explorer (En-Ex), Enceladus Life Finder (ELF), Life Investigation For Enceladus (LIFE), and Enceladus Life Signatures and Habitability (ELSAH).
188
+
189
+ On April 13, 2017, NASA announced the discovery of possible hydrothermal activity on Enceladus's sub-surface ocean floor. In 2015, the Cassini probe made a close flyby of Enceladus's south pole, flying within 48.3 km (30 mi) of the surface and passing through a plume in the process. A mass spectrometer on the craft detected molecular hydrogen (H2) from the plume, and after months of analysis, it was concluded that the hydrogen was most likely the result of hydrothermal activity beneath the surface.[153] It has been speculated that such activity could be a potential oasis of habitability.[154][155][156]
190
+
191
+ The presence of ample hydrogen in Enceladus's ocean means that microbes – if any exist there – could use it to obtain energy by combining the hydrogen with carbon dioxide dissolved in the water. The chemical reaction is known as "methanogenesis" because it produces methane as a byproduct; it is at the root of the tree of life on Earth, the birthplace of all life that is known to exist.[157][158]
192
+
193
+ The two Voyager spacecraft obtained the first close-up images of Enceladus. Voyager 1 was the first to fly past Enceladus, at a distance of 202,000 km on November 12, 1980.[159] Images acquired from this distance had very poor spatial resolution, but revealed a highly reflective surface devoid of impact craters, indicating a youthful surface.[160] Voyager 1 also confirmed that Enceladus was embedded in the densest part of Saturn's diffuse E ring. Combined with the apparently youthful appearance of the surface, Voyager scientists suggested that the E ring consisted of particles vented from Enceladus's surface.[160]
194
+
195
+ Voyager 2 passed closer to Enceladus (87,010 km) on August 26, 1981, allowing higher-resolution images to be obtained.[159] These images showed a young surface[51] and revealed regions with vastly different surface ages, with a heavily cratered mid- to high-northern-latitude region and a lightly cratered region closer to the equator. This geologic diversity contrasts with the ancient, heavily cratered surface of Mimas, another moon of Saturn slightly smaller than Enceladus. The geologically youthful terrains came as a great surprise to the scientific community, because no theory was then able to predict that such a small (and cold, compared to Jupiter's highly active moon Io) celestial body could bear signs of such activity.
196
+
197
+ The answers to many remaining mysteries of Enceladus had to wait until the arrival of the Cassini spacecraft on July 1, 2004, when it entered orbit around Saturn. Given the results from the Voyager 2 images, Enceladus was considered a priority target by the Cassini mission planners, and several targeted flybys within 1,500 km of the surface were planned, as well as numerous "non-targeted" opportunities within 100,000 km of Enceladus. The flybys have yielded significant information concerning Enceladus's surface, as well as the discovery of water vapor with traces of simple hydrocarbons venting from the geologically active south polar region. These discoveries prompted the adjustment of Cassini's flight plan to allow closer flybys of Enceladus, including an encounter in March 2008 that took it to within 48 km of the surface.[164] Cassini's extended mission included seven close flybys of Enceladus between July 2008 and July 2010, including two passes at only 50 km in the latter half of 2008.[165] Cassini performed a flyby on October 28, 2015, passing as close as 49 km (30 mi) and through a plume.[166] Confirmation of molecular hydrogen (H2) would be an independent line of evidence that hydrothermal activity is taking place at the Enceladus seafloor, increasing its habitability.[107]
198
+
199
+ Cassini has provided strong evidence that Enceladus has an ocean with an energy source, nutrients and organic molecules, making Enceladus one of the best places for the study of potentially habitable environments for extraterrestrial life.[167][168] By contrast, the water thought to be on Jupiter's moon Europa is located under a much thicker layer of ice.[169]
200
+
201
+ The discoveries Cassini made at Enceladus have prompted studies into follow-up mission concepts, including a probe flyby (Journey to Enceladus and Titan or JET) to analyze plume contents in-situ,[170][171] a lander by the German Aerospace Center to study the habitability potential of its subsurface ocean (Enceladus Explorer),[172][173][174] and two astrobiology-oriented mission concepts (the Enceladus Life Finder[175][176] and Life Investigation For Enceladus (LIFE)).[140][167][177][178]
202
+
203
+ The European Space Agency (ESA) was assessing concepts in 2008 to send a probe to Enceladus in a mission to be combined with studies of Titan: Titan Saturn System Mission (TSSM).[179] TSSM was a joint NASA/ESA flagship-class proposal for exploration of Saturn's moons, with a focus on Enceladus, and it was competing against the Europa Jupiter System Mission (EJSM) proposal for funding. In February 2009, it was announced that NASA/ESA had given the EJSM mission priority ahead of TSSM,[180] although TSSM will continue to be studied and evaluated.
204
+
205
+ In November 2017, Russian billionaire Yuri Milner expressed interest in funding a "low-cost, privately funded mission to Enceladus which can be launched relatively soon."[181][182] In September 2018, NASA and the Breakthrough Initiatives, founded by Milner, signed a cooperation agreement for the mission's initial concept phase.[183] The spacecraft would be low-cost and low-mass, and would be launched at high speed on an affordable rocket. It would be directed to perform a single flyby through Enceladus's plumes in order to sample and analyze their content for biosignatures.[184][185] NASA will be providing scientific and technical expertise through various reviews from March 2019 to December 2019.[186]
206
+
207
+ Informational notes
208
+
209
+ Citations
210
+
211
+ Further reading
212
+
en/1736.html.txt ADDED
@@ -0,0 +1,115 @@
1
+ An auction is usually a process of buying and selling goods or services by offering them up for bid, taking bids, and then selling the item to the highest bidder or buying the item from the lowest bidder. Some exceptions to this definition exist and are described in the section about different types. The branch of economic theory dealing with auction types and participants' behavior in auctions is called auction theory.
2
+
3
+ The open ascending price auction is arguably the most common form of auction in use throughout history.[1] Participants bid openly against one another, with each subsequent bid required to be higher than the previous bid.[2] An auctioneer may announce prices, bidders may call out their bids themselves or have a proxy call out a bid on their behalf, or bids may be submitted electronically with the highest current bid publicly displayed.[2]
4
+
5
+ Auctions have been, and still are, applied to trade in diverse contexts: antiques, paintings, rare collectibles, expensive wines, commodities, livestock, radio spectrum, used cars, even emission trading, and many more.
6
+
7
+ The word "auction" is derived from the Latin auctum, the supine of augeō, "I increase".[1] For most of history, auctions have been a relatively uncommon way to negotiate the exchange of goods and commodities. In practice, both haggling and sale by set-price have been significantly more common.[3] Indeed, before the 17th century only a few sporadic auctions were held.[4]
8
+
9
+ Nonetheless, auctions have a long history, having been recorded as early as 500 BC.[5] According to Herodotus, in Babylon auctions of women for marriage were held annually. The auctions began with the woman the auctioneer considered to be the most beautiful and progressed to the least. It was considered illegal to allow a daughter to be sold outside of the auction method.[4] Attractive maidens were offered in a forward auction to determine the price to be paid by a swain, while for maidens lacking attractiveness a reverse auction was needed to determine the price to be paid to a swain.[6]
10
+
11
+ During the Roman Empire, after a military victory, Roman soldiers would often drive a spear into the ground around which the spoils of war were left, to be auctioned off. Later slaves, often captured as the "spoils of war", were auctioned in the Forum under the sign of the spear, with the proceeds of sale going towards the war effort.[4]
12
+
13
+ The Romans also used auctions to liquidate the assets of debtors whose property had been confiscated.[7] For example, Marcus Aurelius sold household furniture to pay off debts, the sales lasting for months.[8] One of the most significant historical auctions occurred in the year 193 AD when the entire Roman Empire was put on the auction block by the Praetorian Guard. On 28 March 193, the Praetorian Guard first killed emperor Pertinax, then offered the empire to the highest bidder. Didius Julianus outbid everyone else for the price of 6,250 drachmas per guard,[9][10][11] an act that initiated a brief civil war. Didius was then beheaded two months later when Septimius Severus conquered Rome.[7]
14
+
15
+ From the end of the Roman Empire to the 18th century, auctions lost favor in Europe,[7] while they had never been widespread in Asia.[4] In China, the personal belongings of deceased Buddhist monks were sold at auction as early as the seventh century AD.[6]
16
+
17
+ The first mention of "auction" in the Oxford English Dictionary dates to 1595.[6] In some parts of England during the seventeenth and eighteenth centuries, auctions by candle began to be used for the sale of goods and leaseholds.[14] In a candle auction, the end of the auction was signaled by the expiration of a candle flame, which was intended to ensure that no one could know exactly when the auction would end and make a last-second bid. Sometimes, other unpredictable events, such as a footrace, were used in place of the expiration of a candle. This type of auction was first mentioned in 1641 in the records of the House of Lords.[15] The practice rapidly became popular, and in 1660 Samuel Pepys' diary recorded two occasions when the Admiralty sold surplus ships "by an inch of candle". Pepys also relates a hint from a highly successful bidder, who had observed that, just before expiring, a candle-wick always flares up slightly: on seeing this, he would shout his final - and winning - bid.
18
+
19
+ The London Gazette began reporting on the auctioning of artwork in the coffeehouses and taverns of London in the late 17th century.
20
+ The first known auction house in the world was Stockholm Auction House, Sweden (Stockholms Auktionsverk), founded by Baron Claes Rålamb in 1674.[16][17] Sotheby's, currently the world's second-largest auction house,[16] was founded in London on 11 March 1744, when Samuel Baker presided over the disposal of "several hundred scarce and valuable" books from the library of an acquaintance. Christie's, now the world's largest auction house,[16] was founded by James Christie in 1766 in London[18] and published its first auction catalog in that year, although newspaper advertisements of Christie's sales dating from 1759 have been found.[19]
21
+
22
+ Other early auction houses that are still in operation include Göteborgs Auktionsverk (1681), Dorotheum (1707), Uppsala auktionskammare (1731), Mallams (1788), Bonhams (1793), Phillips de Pury & Company (1796), Freeman's (1805) and Lyon & Turnbull (1826).[20]
23
+
24
+ By the end of the 18th century, auctions of art works were commonly held in taverns and coffeehouses. These auctions were held daily, and auction catalogs were printed to announce available items. In some cases these catalogs were elaborate works of art themselves, containing considerable detail about the items being auctioned. At this time, Christie's established a reputation as a leading auction house, taking advantage of London's status as the major centre of the international art trade after the French Revolution. The Great Slave Auction took place in 1859 and is recorded as the largest single sale of enslaved people in U.S. history — 436 men, women and children.[21] During the American Civil War, goods seized by armies were sold at auction by the Colonel of the division. Thus some of today's auctioneers in the U.S. carry the unofficial title of "colonel".[8] Tobacco auctioneers in the southern United States in the late 19th century had a style that mixed traditions of 17th century England with chants of slaves from Africa.[22]
25
+
26
+ The development of the internet has led to a significant rise in the use of auctions as auctioneers can solicit bids via the internet from a wide range of buyers in a much wider range of commodities than was previously practical.[3]
27
+ In the 1990s, the multi-attribute auction was invented to negotiate extensive conditions of construction and electricity contracts via auction.[23][24] Also in the 1990s, OnSale.com developed the Yankee auction as its trademark.[25] In the early 2000s, the Brazilian auction was invented as a new auction type to trade gas via electronic auctions for Linde plc in Brazil.[26][27]
28
+
29
+ In 2008, the US National Auctioneers Association reported that the gross revenue of the auction industry for that year was approximately $268.4 billion, with the fastest growing sectors being agricultural, machinery, and equipment auctions and residential real estate auctions.[28]
30
+
31
+ Auctions come in a variety of types and categories, which are sometimes not mutually exclusive. Auction types share features, which can be summarized as follows. The typification of auctions is considered to be a part of auction theory.[29]
32
+
33
+ Auctions can differ in the number and type of participants. One should distinguish a buyer from a seller. A buyer pays to acquire a certain good or service, while a seller offers goods or services for money or barter exchange. There can be single or multiple buyers and single or multiple sellers in an auction. If just one seller and one buyer are participating, the transaction is not considered to be an auction.[30][31][32]
34
+
35
+ The forward auction is the most common case: a seller offers an item or items for sale and expects the highest price. The reverse auction is a type of auction in which the roles of the buyer and the seller are reversed, with the primary objective of driving purchase prices downward.[33] While ordinary auctions provide suppliers the opportunity to find the best price among interested buyers, reverse auctions (also called buyer-determined auctions) give buyers a chance to find the lowest-price supplier. During a reverse auction, suppliers may submit multiple offers, usually in response to competing suppliers' offers, bidding down the price of a good or service to the lowest price they are willing to receive. A reverse auction is not necessarily descending-price: the reverse Dutch auction is an ascending-price auction, because forward Dutch auctions are descending.[34] By revealing the competing bids in real time to every participating supplier, reverse auctions promote "information transparency". This, coupled with the dynamic bidding process, improves the chances of reaching the fair market value of the item.[35]
36
+
37
+ A double auction is a combination of both forward and reverse auctions. A Walrasian auction, or Walrasian tâtonnement, is a double auction in which the auctioneer takes bids from both buyers and sellers in a market of multiple goods.[36] The auctioneer progressively either raises or drops the current proposed price depending on the bids of both buyers and sellers, the auction concluding when supply and demand exactly balance.[37] As a high price tends to dampen demand while a low price tends to increase demand, in theory there is a particular price somewhere in the middle where supply and demand will match.[36] A barter double auction is an auction in which every participant has a demand and an offer consisting of multiple attributes, and no money is involved.[38] For the mathematical modelling of the satisfaction level, the Euclidean distance between the offer and demand vectors is used, as sketched below.
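+ A minimal sketch of that distance-based matching (the attributes and numbers are invented for illustration and are not from the cited model):
+
+ import math
+
+ # Offer and demand as attribute vectors, e.g. (quantity, quality grade, delivery days)
+ offer  = (100, 3, 7)
+ demand = (90, 4, 5)
+
+ # A smaller Euclidean distance means a better offer/demand match.
+ mismatch = math.dist(offer, demand)
+ print(round(mismatch, 2))  # -> 10.25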
38
+
39
+ Auctions can be categorized into three types of procedure, depending on whether a price development[32] occurs during an auction run and on its causes.
40
+
41
+ Multiunit auctions sell more than one identical item at the same time, rather than having separate auctions for each. This type can be further classified as either a uniform price auction or a discriminatory price auction. An example is the spectrum auction.
42
+
43
+ Combinatorial auction is any auction for the simultaneous sale of more than one item where bidders can place bids on an "all-or-nothing" basis on "packages" rather than just individual items. That is, a bidder can specify that he or she will pay for items A and B, but only if he or she gets both.[47] In combinatorial auctions, determining the winning bidder(s) can be a complex process where even the bidder with the highest individual bid is not guaranteed to win.[47] For example, in an auction with four items (W, X, Y and Z), if Bidder A offers $50 for items W & Y, Bidder B offers $30 for items W & X, Bidder C offers $5 for items X & Z and Bidder D offers $30 for items Y & Z, the winners will be Bidders B & D, while Bidder A misses out because the combined bids of Bidders B & D are higher ($60) than those of Bidders A and C ($55). Deferred-acceptance auction is a special case of a combinatorial auction.[48]
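+ The winner determination in the example above can be reproduced with a brute-force search over sets of bids with pairwise-disjoint items (a sketch only; practical combinatorial auctions use specialized solvers, as the general problem is NP-hard):
+
+ from itertools import combinations
+
+ # The example bids: bidder -> (set of items, offered price)
+ bids = {"A": ({"W", "Y"}, 50), "B": ({"W", "X"}, 30),
+         "C": ({"X", "Z"}, 5),  "D": ({"Y", "Z"}, 30)}
+
+ def best_allocation(bids):
+     """Try every subset of bids whose item sets do not overlap; keep the
+     revenue-maximizing one. Exponential in the number of bids."""
+     best, best_value = (), 0
+     names = list(bids)
+     for k in range(1, len(names) + 1):
+         for combo in combinations(names, k):
+             item_sets = [bids[n][0] for n in combo]
+             if sum(len(s) for s in item_sets) == len(set().union(*item_sets)):
+                 value = sum(bids[n][1] for n in combo)
+                 if value > best_value:
+                     best, best_value = combo, value
+     return best, best_value
+
+ print(best_allocation(bids))  # -> (('B', 'D'), 60)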
44
+
45
+ Generalized first-price auctions and generalized second-price auctions offer slots for multiple bidders instead of making a single deal. The bidders get the slots according to the ranking of their bids. The second-price ruling is derived from the Vickrey auction and means that the final deal for the top-ranked bidder is based on the second bidder's price.
46
+
47
+ No-reserve auction (NR), also known as an absolute auction, is an auction in which the item for sale will be sold regardless of price.[49][50] From the seller's perspective, advertising an auction as having no reserve price can be desirable because it potentially attracts a greater number of bidders due to the possibility of a bargain.[49] If more bidders attend the auction, a higher price might ultimately be achieved because of heightened competition from bidders.[50] This contrasts with a reserve auction, where the item for sale may not be sold if the final bid is not high enough to satisfy the seller. In practice, an auction advertised as "absolute" or "no-reserve" may nonetheless still not sell to the highest bidder on the day, for example, if the seller withdraws the item from the auction or extends the auction period indefinitely,[51] although these practices may be restricted by law in some jurisdictions or under the terms of sale available from the auctioneer.
48
+
49
+ Reserve auction is an auction where the item for sale may not be sold if the final bid is not high enough to satisfy the seller; that is, the seller reserves the right to accept or reject the highest bid.[50] In these cases a set 'reserve' price known to the auctioneer, but not necessarily to the bidders, may have been set, below which the item may not be sold.[49] If the seller announces to the bidders the reserve price, it is a public reserve price auction.[52] In contrast, if the seller does not announce the reserve price before the sale but only after the sale, it is a secret reserve price auction.[53] The reserve price may be fixed or discretionary. In the latter case, the decision to accept a bid is deferred to the auctioneer, who may accept a bid that is marginally below it. A reserve auction is safer for the seller than a no-reserve auction as they are not required to accept a low bid, but this could result in a lower final price if less interest is generated in the sale.[50]
50
+
51
+ All-pay auction is an auction in which all bidders must pay their bids regardless of whether they win. The highest bidder wins the item. All-pay auctions are primarily of academic interest, and may be used to model lobbying or bribery (bids are political contributions) or competitions such as a running race.[54] The bidding fee auction, a variation of the all-pay auction also known as a penny auction, often requires each participant to pay a fixed fee to place each bid, with each bid typically raising the price one penny (hence the name) above the current bid. When an auction's time expires, the highest bidder wins the item and must pay a final bid price.[55] Unlike in a conventional auction, the final price is typically much lower than the value of the item, but all bidders (not just the winner) will have paid for each bid placed; the winner will buy the item at a very low price (plus the price of the rights-to-bid used), all the losers will have paid, and the seller will typically receive significantly more than the value of the item.[56] The senior auction is a variation on the all-pay auction, and has a defined loser in addition to the winner. The top two bidders must pay their full final bid amounts, and only the highest wins the auction. The intent is to make the high bidders bid above their upper limits. In the final rounds of bidding, when the current losing party has hit their maximum bid, they are encouraged to bid over their maximum (seen as a small loss) to avoid losing their maximum bid with no return (a very large loss). Another variation of the all-pay auction, the top-up auction, is primarily used for charity events. Losing bidders must pay the difference between their bid and the next lowest bid. The winning bidder pays the amount bid for the item, without top-up. In a Chinese auction, bidders pay for sealed bids in advance, and their probability of winning grows with the relative size of their bids.[57]
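+ The seller's economics described above can be made concrete with hypothetical numbers (every value below is invented for illustration):
+
+ bid_fee = 0.60       # fee paid to place a single bid
+ increment = 0.01     # each bid raises the price by one penny
+ final_price = 25.00  # price when the auction timer expired
+
+ bids_placed = round(final_price / increment)       # 2,500 bids in total
+ seller_revenue = final_price + bids_placed * bid_fee
+ print(seller_revenue)  # -> 1525.0, far above a typical retail value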
52
+
53
+ In usual auctions, like the English one, bids are prices. In Dutch and Japanese auctions, the bids are confirmations. In a version of the Brazilian auction, bids are the numbers of units being traded. The structural elements of a bid are called attributes. If a bid is one number, like a price, it is a single-attribute auction. If bids consist of multiple attributes, it is a multi-attribute auction.[58][59]
54
+
55
+ A Yankee auction is a single-attribute multiunit auction running like a Dutch auction, where the bids are portions of a total amount of identical units.[60][61][62] Unlike in a Brazilian auction, the amount of auctioned items is fixed in a Yankee auction. The portions of the total amount that bidders can bid for are limited to numbers lower than the total amount. Therefore, only a portion of the total amount will be traded at the best price, and the rest at suboptimal prices.
56
+
57
+ In an English auction, all bids are visible to all bidders, whereas in a sealed-bid auction, bidders only learn whether their bid was the best. Best/not best auctions are sealed-bid auctions with multiple bids, where bidders submit their prices as in an English auction and receive responses about whether their bid is leading.[63] The rank auction is an extension of the best/not best auction, in which bidders also see the rank of their bids.[64] The traffic-light auction shows traffic lights to bidders as a response to their bids.[65] These traffic lights depend on the position of the last bid in the distribution of all bids.
58
+
59
+ Buyout auction is an auction with an additional set price (the 'buyout' price) that any bidder can accept at any time during the auction, thereby immediately ending the auction and winning the item.[66] If no bidder chooses to utilize the buyout option before the end of bidding the highest bidder wins and pays their bid.[66] Buyout options can be either temporary or permanent.[66] In a temporary-buyout auction the option to buy out the auction is not available after the first bid is placed.[66] In a permanent-buyout auction the buyout option remains available throughout the entire auction until the close of bidding.[66] The buyout price can either remain the same throughout the entire auction, or vary throughout according to rules or simply as decided by the seller.[66]
60
+
61
+ The winner selection in most auctions selects the best bid. Unique bid auctions offer a special winner selection:[67] the winner is the bidder with the lowest unique bid. The Chinese auction, as already mentioned, selects the winner partly at random.[57] The Swedish auction leaves the winner selection to the auctioneer.
62
+
63
+ The final deal with the selected winner is not always conducted according to his or her final bid. In the case of the second-price ruling, as in a Vickrey auction, the final deal for the winner is based on the second bidder's price. Proxy bidding, used by eBay, is a special case of the second-price ruling in which a predefined increment is added to the second price.
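+ A minimal sketch of the second-price ruling with a proxy increment (the flat one-unit increment is an assumption for illustration; real increment schedules vary with the price level):
+
+ def second_price_with_increment(bids, increment=1.00):
+     """Winner pays the runner-up's bid plus an increment, capped at the winner's own bid."""
+     top, runner_up = sorted(bids, reverse=True)[:2]
+     return min(top, runner_up + increment)
+
+ print(second_price_with_increment([120.00, 95.50, 80.00]))  # -> 96.5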
64
+
65
+ Auctions with more than one winner are called multi-winner auctions.[68] Multiunit auction, Combinatorial auction, Generalized first-price auction and Generalized second-price auction are multi-winner auctions.
66
+
67
+ Auctions can be cascaded, one after the other. For instance, the Amsterdam auction is a type of premium auction which begins as an English auction. Once only two bidders remain, each submits a sealed bid. The higher bidder wins, paying either the first or second price. Both finalists receive a premium: a proportion of the excess of the second price over the third price (at which the English auction ended).[69] The Anglo-Dutch auction starts as an English or Japanese auction and then continues as a Dutch auction with a reduced number of bidders.[70][71] The French auction is a preliminary sealed-bid auction held before the actual auction, whose reserve price it determines. A sequential auction is an auction in which the bidders can participate in a sequence of auctions. The simultaneous ascending auction is the opposite of a sequential auction: the auctions are run in parallel.[72]
68
+
69
+ Silent auction is a variant of the English auction in which bids are written on a sheet of paper. At the predetermined end of the auction, the highest listed bidder wins the item.[73] This auction is often used in charity events, with many items auctioned simultaneously and "closed" at a common finish time.[73][74] The auction is "silent" in that there is no auctioneer selling individual items,[73] the bidders writing their bids on a bidding sheet often left on a table near the item.[75] At charity auctions, bid sheets usually have a fixed starting amount, predetermined bid increments, and a "guaranteed bid" amount which works the same as a "buy now" amount. Other variations of this type of auction may include sealed bids.[73] The highest bidder pays the price he or she submitted.[73]
70
+
71
+ In private value auctions, every bidder has his or her own valuation of the auctioned good.[76] The common value auction is the opposite: the valuation of the auctioned good is identical among the bidders.
72
+
73
+ The range of auctions' contexts is extremely wide and one can buy almost anything, from a house to an endowment policy and everything in-between. Some of the recent developments have been the use of the Internet both as a means of disseminating information about various auctions and as a vehicle for hosting auctions themselves.
74
+
75
+ As already mentioned in the history section, auctions have been used to trade commodified people from the very first. Auctions have been used in slave markets throughout history, up until modern times in today's Libya.[77][78][79] The word for a slave auction in the Atlantic slave trade was "scramble". A child auction was a Swedish and Finnish historical practice of selling children into slavery-like conditions by authorities, using a descending English auction. The trade of wives by auction was also a common practice throughout history. For instance, in the old English custom of wife selling, a wife was divorced by selling her in a public auction for the highest bid. A virginity auction is the voluntary practice of individuals seeking to sell their own virginity to the highest bidder.
76
+
77
+ In some countries, such as Australia, auction is a common method for the sale of real estate. Used as an alternative to the private sale/treaty method, where a price is disclosed and offers can be made to purchase the property at that price, auctions were traditionally used to sell property that, due to its unique characteristics, was difficult to price. The law does not require a vendor to disclose their reserve price prior to the auction. During the 1990s and 2000s, auction became the primary method for the sale of real estate in the two largest cities, Melbourne and Sydney. This was largely because in a private sale the vendor has disclosed the price that they want, and potential purchasers would attempt to low-ball the price, whereas in an auction purchasers do not know what the vendor wants, and thus need to keep lifting the price until the reserve price is reached.
78
+
79
+ The method has been the subject of increased controversy during the twenty-first century as house prices sky-rocketed. The rapidly rising housing market saw many homes, especially in Victoria and New South Wales, selling for significantly more than both the vendors' reserve price and the advertised price range. Subsequently, the auction systems' lack of transparency about the value of the property was brought into question, with estate agents and their vendor clients being accused of "under-quoting". Significant attention was given to the matter by the Australian media, with the government in Victoria eventually bowing to pressure and implementing changes to legislation in an effort to increase transparency.[80]
80
+
81
+ A government auction is simply an auction held on behalf of a government body, generally at a general sale. Items for sale are often surplus that needs to be liquidated. Auctions ordered by estate executors sell the assets of individuals who have perhaps died intestate (those who have died without leaving a will), or in debt. Forced auctions occur in legal contexts, as when one's farm or house is sold at auction on the courthouse steps. Property seized for non-payment of property taxes, or under foreclosure, is sold in this manner. Police auctions are generally held at general auctions, although some forces use online sites, including eBay, to dispose of lost-and-found and seized goods. In debt auctions, governments sell debt instruments, such as bonds, to investors. The auction is usually sealed, and the uniform price paid by the investors is typically the best non-winning bid. In most cases, investors can also place so-called non-competitive bids, which indicate interest in purchasing the debt instrument at the resulting price, whatever it may be. Some states use courts to run such auctions. In spectrum auctions conducted by the government, companies purchase licenses to use portions of the electromagnetic spectrum for communications (e.g., mobile phone networks). In certain jurisdictions, if a storage facility's tenant fails to pay rent, the contents of their locker(s) may be sold at a public auction. Several television shows focus on such auctions, including Storage Wars and Auction Hunters.
82
+
83
+ Auctions are used to trade commodities; for example, fish wholesale auctions. In wool auctions, wool is traded in the international market.[81] The wine auction business offers serious collectors an opportunity to gain access to rare bottles and mature vintages, not typically available through retail channels. In livestock auctions, sheep, cattle, pigs and other livestock are sold. Sometimes very large numbers of stock are auctioned, such as the regular sales of 50,000 or more sheep during a day in New South Wales.[82] In timber auctions, companies purchase licenses to log on government land. In timber allocation auctions, companies purchase timber directly from the government.[83] In electricity auctions, large-scale generators and distributors of electricity bid on generating contracts. Produce auctions link growers to localized wholesale buyers (buyers who are interested in acquiring large quantities of locally grown produce).[84]
84
+
85
+ Websites like eBay provide a potential audience of millions to sellers. Established auction houses, as well as specialist internet auctions, sell everything from antiques and collectibles to holidays, air travel, brand new computers, and household equipment. Private electronic markets use combinatorial auction techniques to continuously sell commodities (coal, iron ore, grain, water...) online to a pre-qualified group of buyers (based on price and non-price factors).
86
+
87
+ Katehakis and Puranam provided the first model[96]
88
+ for the problem of optimal bidding for a firm that in each period procures items to meet a random demand by participating in a finite sequence of auctions. In this model an item valuation derives from the sale of the acquired items via their demand distribution, sale price, acquisition cost, salvage value and lost sales. They established monotonicity properties for the value function and the optimal dynamic bid policy. They also provided a model[97]
89
+ for the case in which the buyer must acquire a fixed number of items either at a fixed buy-it-now price in the open market or by participating in a sequence of auctions. The objective of the buyer is to minimize his expected total cost for acquiring the fixed number of items.
90
+
91
+ Bid shading is placing a bid which is below the bidder's actual value for the item. Such a strategy risks losing the auction, but has the possibility of winning at a low price. Bid shading can also be a strategy to avoid the winner's curse.
92
+
93
+ This is the practice, especially by high-end art auctioneers,[98] of raising false bids at crucial times in the bidding in order to create the appearance of greater demand or to extend bidding momentum for a work on offer. To call out these nonexistent bids auctioneers might fix their gaze at a point in the auction room that is difficult for the audience to pin down.[99] The practice is frowned upon in the industry.[99] In the United States, chandelier bidding is not illegal. In fact, an auctioneer may bid up the price of an item to the reserve price, which is a threshold below which the consignor may refuse to sell the item. However, the auction house is required to disclose this information.
94
+
95
+ In the United Kingdom this practice is legal on property auctions up to but not including the reserve price, and is also known as off-the-wall bidding.[100]
96
+
97
+ Whenever bidders at an auction are aware of the identity of the other bidders there is a risk that they will form a "ring" or "pool" and thus manipulate the auction result, a practice known as collusion. By agreeing to bid only against outsiders, never against members of the "ring", competition becomes weaker, which may dramatically affect the final price level. After the end of the official auction an unofficial auction may take place among the "ring" members. The difference in price between the two auctions could then be split among the members. This form of a ring was used as a central plot device in the opening episode of the 1979 British television series The House of Caradus, 'For Love or Money', uncovered by Helena Caradus on her return from Paris.
98
+
99
+ A ring can also be used to increase the price of an auction lot, in which the owner of the object being auctioned may increase competition by taking part in the bidding him or herself, but drop out of the bidding just before the final bid. In Britain and many other countries, rings and other forms of bidding on one's own object are illegal. This form of a ring was used as a central plot device in an episode of the British television series Lovejoy (series 4, episode 3), in which the price of a watercolour by the (fictional) Jessie Webb is inflated so that others by the same artist could be sold for more than their purchase price.
+
+ In an English auction, a dummy bid is a bid made by a dummy bidder acting in collusion with the auctioneer or vendor, designed to deceive genuine bidders into paying more. In a first-price auction, a dummy bid is an unfavourable bid designed so as not to become the winning bid. (The bidder does not want to win this auction, but he or she wants to make sure to be invited to the next auction).
+
+ In Australia, a dummy bid (shill, schill) is a criminal offence, but a vendor bid or a co-owner bid below the reserve price is permitted if clearly declared as such by the auctioneer. These are all official legal terms in Australia but may have other meanings elsewhere. A co-owner is one of two or several owners (who disagree among themselves).
+
+ In Sweden and many other countries there are no legal restrictions, but it will severely hurt the reputation of an auction house that knowingly permits bids other than genuine ones. If the reserve is not reached, this should be clearly declared.
+
+ In South Africa auctioneers can use their staff or any bidder to raise the price as long as it is disclosed before the auction sale. The Auction Alliance[101] controversy focused on vendor bidding, which was found to be legal and acceptable in terms of South African consumer law.
+
+ There will usually be an estimate of what price the lot will fetch. In an ascending open auction it is considered important to get at least a 50-percent increase in the bids from start to finish. To accomplish this, the auctioneer must start the auction by announcing a suggested opening bid (SOB) that is low enough to be immediately accepted by one of the bidders. Once there is an opening bid, there will quickly be several other, higher bids submitted. Experienced auctioneers will often select an SOB that is about 45 percent of the (lowest) estimate. Thus there is a certain margin of safety to ensure that there will indeed be a lively auction with many bids submitted. Several observations indicate that the lower the SOB, the higher the final winning bid. This is due to the increase in the number of bidders attracted by the low SOB.
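+
+ A quick sketch of the rule-of-thumb arithmetic above, with an illustrative estimate (the figures are assumptions, not data):
+
+ low_estimate = 1000.0        # illustrative lowest estimate for the lot
+ sob = 0.45 * low_estimate    # suggested opening bid at about 45 percent
+ lively_final = 1.5 * sob     # at least a 50 percent rise start to finish
+ print(f"SOB: {sob:.0f}; a lively auction should finish at {lively_final:.0f} or more")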
+
+ The distribution of bids tends to resemble a chi-squared distribution: many low bids but few high bids. Bids "show up together"; without several low bids there will not be any high bids.
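+
+ One quick way to see this shape is to sample a chi-squared distribution and histogram it; the degrees of freedom and sample size here are illustrative, not fitted to real auction data.
+
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ bids = rng.chisquare(df=3, size=10_000)   # toy "bid amounts"
+ counts, _ = np.histogram(bids, bins=10)
+ print(counts)                             # many low bids, few high bids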
+
+ Another approach to choosing an SOB: The auctioneer may achieve good success by asking the expected final sales price for the item, as this method suggests to the potential buyers the item's particular value. For instance, say an auctioneer is about to sell a $1,000 car at a sale. Instead of asking $100, hoping to entice wide interest (for who wouldn't want a $1,000 car for $100?), the auctioneer may suggest an opening bid of $1,000; although the first bidder may begin bidding at a mere $100, the final bid may more likely approach $1,000.
+
+ The Journal of Economic Literature (JEL) classification code for auctions is D44.[106]
en/1737.html.txt ADDED
@@ -0,0 +1,80 @@
+
+
+ An encyclopedia or encyclopaedia (British English) is a reference work or compendium providing summaries of knowledge either from all branches or from a particular field or discipline.[1]
+ Encyclopedias are divided into articles or entries that are often arranged alphabetically by article name[2] and sometimes by thematic categories. Encyclopedia entries are longer and more detailed than those in most dictionaries.[2] Generally speaking, unlike dictionary entries—which focus on linguistic information about words, such as their etymology, meaning, pronunciation, use, and grammatical forms—encyclopedia articles focus on factual information concerning the subject named in the article's title.[3][4][5][6]
+
+ Encyclopedias have existed for around 2,000 years and have evolved considerably during that time with regard to language (written in a major international or a vernacular language), size (few or many volumes), intent (presentation of a global or a limited range of knowledge), cultural perspective (authoritative, ideological, didactic, utilitarian), authorship (qualifications, style), readership (education level, background, interests, capabilities), and the technologies available for their production and distribution (hand-written manuscripts, small or large print runs, Internet). As a valued source of reliable information compiled by experts, printed versions found a prominent place in libraries, schools and other educational institutions.
+
+ The appearance of digital and open-source versions in the 21st century has vastly expanded the accessibility, authorship, readership, and variety of encyclopedia entries.
+
+ The word encyclopedia (encyclo|pedia) comes from the Koine Greek ἐγκύκλιος παιδεία,[8] transliterated enkyklios paedia, meaning "general education" from enkyklios (ἐγκύκλιος), meaning "circular, recurrent, required regularly, general"[9] and paedia (παιδεία), meaning "education, rearing of a child"; together, the phrase literally translates as "complete instruction" or "complete knowledge".[10] However, the two separate words were reduced to a single word due to a scribal error[11] by copyists of a Latin manuscript edition of Quintilian in 1470.[12] The copyists took this phrase to be a single Greek word, enkyklopaedia, with the same meaning, and this spurious Greek word became the New Latin word "encyclopaedia", which in turn came into English. Because of this compounded word, readers from the fifteenth century onward have often, and incorrectly, thought that the Roman authors Quintilian and Pliny described an ancient genre.[13]
+
+ In the sixteenth century there was a level of ambiguity as to how to use this new word. As several titles illustrate, there was not a settled notion about its spelling nor its status as a noun. For example: Jacobus Philomusus's Margarita philosophica encyclopaediam exhibens (1508); Johannes Aventinus's Encyclopedia orbisque doctrinarum, hoc est omnium artium, scientiarum, ipsius philosophiae index ac divisio; Joachimus Fortius Ringelbergius's Lucubrationes vel potius absolutissima kyklopaideia (1538, 1541); Paul Skalich's Encyclopaediæ, seu orbis disciplinarum, tam sacrarum quam prophanarum, epistemon (1559); Gregor Reisch's Margarita philosophica (1503, retitled Encyclopaedia in 1583); and Samuel Eisenmenger's Cyclopaedia Paracelsica (1585).[15]
+
+ Two examples of the earliest vernacular use of the compounded word are known. In approximately 1490, Franciscus Puccius wrote a letter to Politianus thanking him for his Miscellanea, calling it an encyclopedia.[16] More commonly, François Rabelais is cited for his use of the term in Pantagruel (1532).[17][18]
+
+ Several encyclopedias have names that include the suffix -p(a)edia, to mark the text as belonging to the genre of encyclopedias. An example is Banglapedia (on matters relevant for Bangladesh).
+
+ Today in English, the word is most commonly spelled encyclopedia, though encyclopaedia (from encyclopædia) is also used in Britain.[19]
+
+ The modern encyclopedia was developed from the dictionary in the 18th century. Historically, both encyclopedias and dictionaries have been researched and written by well-educated, well-informed content experts, but they are significantly different in structure. A dictionary is a linguistic work which primarily focuses on alphabetical listing of words and their definitions. Synonymous words and those related by the subject matter are to be found scattered around the dictionary, giving no obvious place for in-depth treatment. Thus, a dictionary typically provides limited information, analysis or background for the word defined. While it may offer a definition, it may leave the reader lacking in understanding the meaning, significance or limitations of a term, and how the term relates to a broader field of knowledge. An encyclopedia is, theoretically, not written in order to convince, although one of its goals is indeed to convince its reader of its own veracity.
+
+ To address those needs, an encyclopedia article is typically not limited to simple definitions, and is not limited to defining an individual word, but provides a more extensive meaning for a subject or discipline. In addition to defining and listing synonymous terms for the topic, the article is able to treat the topic's more extensive meaning in more depth and convey the most relevant accumulated knowledge on that subject. An encyclopedia article also often includes many maps and illustrations, as well as bibliography and statistics.
+
+ Four major elements define an encyclopedia: its subject matter, its scope, its method of organization, and its method of production.
+
+ Some works entitled "dictionaries" are actually similar to encyclopedias, especially those concerned with a particular field (such as the Dictionary of the Middle Ages, the Dictionary of American Naval Fighting Ships, and Black's Law Dictionary). The Macquarie Dictionary, Australia's national dictionary, became an encyclopedic dictionary after its first edition in recognition of the use of proper nouns in common communication, and the words derived from such proper nouns.
+
+ There are some broad differences between encyclopedias and dictionaries. Most noticeably, encyclopedia articles are longer, fuller and more thorough than entries in most general-purpose dictionaries.[2][21] There are differences in content as well. Generally speaking, dictionaries provide linguistic information about words themselves, while encyclopedias focus more on the thing for which those words stand.[3][4][5][6] Thus, while dictionary entries are inextricably fixed to the word described, encyclopedia articles can be given a different entry name. As such, dictionary entries are not fully translatable into other languages, but encyclopedia articles can be.[3]
+
+ In practice, however, the distinction is not concrete, as there is no clear-cut difference between factual, "encyclopedic" information and linguistic information such as appears in dictionaries.[5][21][22] Thus encyclopedias may contain material that is also found in dictionaries, and vice versa.[22] In particular, dictionary entries often contain factual information about the thing named by the word.[21][22]
+
+ Information in traditional encyclopedias can be assessed by measures related to such quality dimensions as authority, completeness, format, objectivity, style, timeliness and uniqueness.[20]
+
+ Encyclopedias have progressed from written form in antiquity, to print in modern times. Today they can also be distributed and displayed electronically.
+
+ One of the earliest encyclopedic works to have survived to modern times is the Naturalis Historia of Pliny the Elder, a Roman statesman living in the first century AD. He compiled a work of 37 chapters covering natural history, architecture, medicine, geography, geology, and other aspects of the world around him. He stated in the preface that he had compiled 20,000 facts from 2,000 works by over 200 authors, and added many others from his own experience. The work was published around AD 77–79, although Pliny probably never finished editing it before his death in the eruption of Vesuvius in AD 79.[23]
+
+ Isidore of Seville, one of the greatest scholars of the early Middle Ages, is widely recognized for writing the first encyclopedia of the Middle Ages, the Etymologiae (The Etymologies) or Origines (around 630), in which he compiled a sizable portion of the learning available at his time, both ancient and contemporary. The work has 448 chapters in 20 volumes, and is valuable because of the quotes and fragments of texts by other authors that would have been lost had he not collected them.
+
+ The most popular encyclopedia of the Carolingian Age was the De universo or De rerum naturis by Rabanus Maurus, written about 830; it was based on Etymologiae.[24]
+
+ The Suda, a massive 10th-century Byzantine encyclopedia, had 30,000 entries, many drawing from ancient sources that have since been lost, and often derived from medieval Christian compilers. The text was arranged alphabetically with some slight deviations from common vowel order and place in the Greek alphabet.
+
+ The early Muslim compilations of knowledge in the Middle Ages included many comprehensive works. Around the year 960, the Brethren of Purity of Basra were engaged in their Encyclopedia of the Brethren of Purity.[25] Notable works include Abu Bakr al-Razi's encyclopedia of science, the Mutazilite Al-Kindi's prolific output of 270 books, and Ibn Sina's medical encyclopedia, which was a standard reference work for centuries. Also notable are works of universal history (or sociology) from the Asharites, al-Tabari, al-Masudi, Tabari's History of the Prophets and Kings, Ibn Rustah, al-Athir, and Ibn Khaldun, whose Muqaddimah contains cautions regarding trust in written records that remain wholly applicable today.
+
+ The enormous encyclopedic work in China of the Four Great Books of Song, compiled by the 11th century during the early Song dynasty (960–1279), was a massive literary undertaking for the time. The last encyclopedia of the four, the Prime Tortoise of the Record Bureau, amounted to 9.4 million Chinese characters in 1,000 written volumes. The 'period of the encyclopedists' spanned from the tenth to seventeenth centuries, during which the government of China employed hundreds of scholars to assemble massive encyclopedias.[26] The largest of these was the Yongle Encyclopedia, completed in 1408, which consisted of almost 23,000 folio volumes in manuscript form.[26]
+
+ In late medieval Europe, several authors had the ambition of compiling the sum of human knowledge in a certain field or overall, for example Bartholomew of England, Vincent of Beauvais, Radulfus Ardens, Sydrac, Brunetto Latini, Giovanni da Sangiminiano, Pierre Bersuire. Some were women, like Hildegard of Bingen and Herrad of Landsberg. The most successful of those publications were the Speculum maius (Great Mirror) of Vincent of Beauvais and the De proprietatibus rerum (On the Properties of Things) by Bartholomew of England. The latter was translated (or adapted) into French, Provençal, Italian, English, Flemish, Anglo-Norman, Spanish, and German during the Middle Ages. Both were written in the middle of the 13th century. No medieval encyclopedia bore the title Encyclopaedia – they were often called On nature (De natura, De naturis rerum), Mirror (Speculum maius, Speculum universale), Treasure (Trésor).[27]
+
+ Medieval encyclopedias were all hand-copied and thus available mostly to wealthy patrons or monastic men of learning; they were expensive, and usually written for those extending knowledge rather than those using it.[28]
+
+ During the Renaissance, the advent of printing allowed a wider diffusion of encyclopedias, and every scholar could have his or her own copy. The De expetendis et fugiendis rebus by Giorgio Valla was posthumously printed in 1501 by Aldo Manuzio in Venice. This work followed the traditional scheme of the liberal arts. However, Valla added the translation of ancient Greek works on mathematics (firstly by Archimedes), newly discovered and translated. The Margarita Philosophica by Gregor Reisch, printed in 1503, was a complete encyclopedia explaining the seven liberal arts.
+
+ The term encyclopaedia was coined by 16th-century humanists who misread copies of their texts of Pliny[29] and Quintilian,[30] and combined the two Greek words "enkyklios paedia" into one word, έγκυκλοπαιδεία.[31] The phrase enkyklios paedia (ἐγκύκλιος παιδεία) was used by Plutarch and the Latin word encyclopaedia came from him.
+
+ The first work titled in this way was the Encyclopedia orbisque doctrinarum, hoc est omnium artium, scientiarum, ipsius philosophiae index ac divisio written by Johannes Aventinus in 1517.[citation needed]
+
+ The English physician and philosopher Sir Thomas Browne used the word 'encyclopaedia' in 1646 in the preface to the reader to describe his Pseudodoxia Epidemica, a major work of the 17th-century scientific revolution. Browne structured his encyclopaedia upon the time-honoured scheme of the Renaissance, the so-called 'scale of creation', which ascends through the mineral, vegetable, animal, human, planetary, and cosmological worlds. Pseudodoxia Epidemica was a European best-seller, translated into French, Dutch, and German as well as Latin. It went through no fewer than five editions, each revised and augmented, the last edition appearing in 1672.
+
+ Financial, commercial, legal, and intellectual factors changed the size of encyclopedias. During the Renaissance, the middle classes had more time to read, and encyclopedias helped them to learn more. Publishers wanted to increase their output, so some countries, such as Germany, began selling books missing alphabetical sections in order to publish faster. Also, publishers could not afford all the resources by themselves, so several publishers would pool their resources to create better encyclopedias. When publishing at the same rate became financially impossible, they turned to subscriptions and serial publications. This was risky for publishers because they had to find people who would pay entirely upfront or make payments. When this worked, capital would rise and there would be a steady income for encyclopedias. Later, rivalry grew, and copyright emerged in response to weak, underdeveloped laws. Some publishers would copy another publisher's work to produce an encyclopedia faster and more cheaply, so consumers paid less and sales grew. Encyclopedias thus allowed middle-class citizens to have, in effect, a small library in their own houses. Europeans were becoming more curious about the society around them, a curiosity that fed revolt against their governments.[32]
+
+ The beginnings of the modern idea of the general-purpose, widely distributed printed encyclopedia precede the 18th century encyclopedists. However, Chambers' Cyclopaedia, or Universal Dictionary of Arts and Sciences (1728), and the Encyclopédie of Denis Diderot and Jean le Rond d'Alembert (1751 onwards), as well as Encyclopædia Britannica and the Conversations-Lexikon, were the first to realize the form we would recognize today, with a comprehensive scope of topics, discussed in depth and organized in an accessible, systematic method. Chambers, in 1728, followed the earlier lead of John Harris's Lexicon Technicum of 1704 and later editions (see also below); this work was by its title and content "A Universal English Dictionary of Arts and Sciences: Explaining not only the Terms of Art, but the Arts Themselves".
+
+ Popular and affordable encyclopedias such as Harmsworth's Universal Encyclopaedia and the Children's Encyclopaedia appeared in the early 1920s.
+
+ In the United States, the 1950s and 1960s saw the introduction of several large popular encyclopedias, often sold on installment plans. The best known of these were World Book and Funk and Wagnalls. As many as 90% were sold door to door. Jack Lynch says in his book You Could Look It Up that encyclopedia salespeople were so common that they became the butt of jokes. He describes their sales pitch saying, "They were selling not books but a lifestyle, a future, a promise of social mobility." A 1961 World Book ad said, "You are holding your family's future in your hands right now," while showing a feminine hand holding an order form.[33]
+
+ The second half of the 20th century also saw the proliferation of specialized encyclopedias that compiled topics in specific fields, mainly to support specific industries and professionals. This trend has continued. Encyclopedias of at least one volume in size now exist for most if not all academic disciplines, including such narrow topics as bioethics.
+
+ By the late 20th century, encyclopedias were being published on CD-ROMs for use with personal computers. Microsoft's Encarta, published between 1993 and 2009, was a landmark example as it had no printed equivalent. Articles were supplemented with both video and audio files as well as numerous high-quality images.[34]
+
+ Digital technologies and online crowdsourcing allowed encyclopedias to break away from traditional limitations in both breadth and depth of topics covered. Wikipedia, a crowd-sourced, multilingual, openly licensed, free online encyclopedia supported by the non-profit Wikimedia Foundation and the open-source MediaWiki software, was launched in 2001. Unlike commercial online encyclopedias such as Encyclopædia Britannica Online, which are written by experts, Wikipedia is collaboratively created and maintained by volunteer editors, organized by collaboratively agreed guidelines and user roles. Most contributors use pseudonyms and stay anonymous. Content is therefore reviewed, checked, kept or removed based on its own intrinsic value and the external sources supporting it.
+
+ The reliability of traditional encyclopedias, for its part, rests on authorship and associated professional expertise. Many academics, teachers, and journalists rejected and continue to reject open, crowd-sourced encyclopedias, especially Wikipedia, as a reliable source of information, and Wikipedia is itself not a reliable source according to its own standards because of its openly editable and anonymous crowdsourcing model.[35] A study by Nature in 2005 found that Wikipedia's science articles were roughly comparable in accuracy to those of Encyclopædia Britannica, containing the same number of serious errors and about 1/3 more minor factual inaccuracies, but that Wikipedia's writing tended to be confusing and less readable.[36] Encyclopædia Britannica rejected the study's conclusions, deeming the study fatally flawed.[37] As of February 2014, Wikipedia had 18 billion page views and nearly 500 million unique visitors each month.[38] Critics argue Wikipedia exhibits systemic bias.[39][40]
+
+ There are several much smaller, usually more specialized, encyclopedias on various themes, sometimes dedicated to a specific geographic region or time period.[41] One example is the Stanford Encyclopedia of Philosophy.
+
+ As of the early 2020s, the largest encyclopedias are the Chinese Baidu Baike (16 million articles) and Hudong Baike (13 million), followed by Wikipedias for English (6 million), German (+2 million) and French (+2 million).[42] More than a dozen other Wikipedias have 1 million articles or more, of variable quality and length.[42] Measuring an encyclopedia's size by its number of articles is an ambiguous method, since the online Chinese encyclopedias cited above allow multiple articles on the same topic, while Wikipedias accept only a single common article per topic but allow the automated creation of nearly empty articles.
en/1739.html.txt ADDED
@@ -0,0 +1,70 @@
+ In Greco-Roman mythology, Aeneas (/ɪˈniːəs/;[1] Greek: Αἰνείας, Aineías, possibly derived from Greek αἰνή meaning "praised") was a Trojan hero, the son of the prince Anchises and the goddess Aphrodite (Venus). His father was a first cousin of King Priam of Troy (both being grandsons of Ilus, founder of Troy), making Aeneas a second cousin to Priam's children (such as Hector and Paris). He is a character in Greek mythology and is mentioned in Homer's Iliad. Aeneas receives full treatment in Roman mythology, most extensively in Virgil's Aeneid, where he is cast as an ancestor of Romulus and Remus. He became the first true hero of Rome. Snorri Sturluson identifies him with the Norse god Vidarr of the Æsir.[2]
+
+ Aeneas is the Romanization of the Greek Αἰνείας (Aineías). Aineías is first introduced in the Homeric Hymn to Aphrodite, when Aphrodite gives him his name from the adjective αἰνόν (ainon, "terrible"), for the "terrible grief" (αἰνὸν ἄχος) he has caused her.[a][3] It is a popular etymology for the name, apparently exploited by Homer in the Iliad.[4] Later in the medieval period there were writers who held that, because the Aeneid was written by a philosopher, it is meant to be read philosophically.[5] As such, in the "natural order", the meaning of Aeneas' name combines Greek ennos ("dweller") with demas ("body"), which becomes ennaios or "in-dweller"—i.e. as a god inhabiting a mortal body.[6] However, there is no certainty regarding the origin of his name.
+
+ In imitation of the Iliad, Virgil borrows epithets of Homer, including: Anchisiades, magnanimum, magnus, heros, and bonus. Though he borrows many, Virgil gives Aeneas two epithets of his own in the Aeneid: pater and pius. The epithets applied by Virgil are an example of an attitude different from that of Homer, for whilst Odysseus is poikilios ("wily"), Aeneas is described as pius ("pious"), which conveys a strong moral tone. The purpose of these epithets seems to enforce the notion of Aeneas' divine hand as father and founder of the Roman race, and their use seems circumstantial: when Aeneas is praying he refers to himself as pius, and is referred to as such by the author only when the character is acting on behalf of the gods to fulfill his divine mission. Likewise, Aeneas is called pater when acting in the interest of his men.[7]
+
+ The story of the birth of Aeneas is told in the "Hymn to Aphrodite", one of the major Homeric Hymns. Aphrodite has caused Zeus to fall in love with mortal women. In retaliation, Zeus puts desire in her heart for Anchises, who is tending his cattle among the hills near Mount Ida. When Aphrodite sees him she is smitten. She adorns herself as if for a wedding among the gods and appears before him. He is overcome by her beauty, believing that she is a goddess, but Aphrodite identifies herself as a Phrygian princess. After they make love, Aphrodite reveals her true identity to him and Anchises fears what might happen to him as a result of their liaison. Aphrodite assures him that he will be protected, and tells him that she will bear him a son to be called Aeneas. However, she warns him that he must never tell anyone that he has lain with a goddess. When Aeneas is born, Aphrodite takes him to the nymphs of Mount Ida. She directs them to raise the child to age five, then take him to Anchises.[3] According to other sources, Anchises later brags about his encounter with Aphrodite, and as a result is struck in the foot with a thunderbolt by Zeus. Thereafter he is lame in that foot, so that Aeneas has to carry him from the flames of Troy.[8]
+
+ Aeneas is a minor character in the Iliad, where he is twice saved from death by the gods as if for an as-yet-unknown destiny, but is an honorable warrior in his own right. Having held back from the fighting, aggrieved with Priam because in spite of his brave deeds he was not given his due share of honour, he leads an attack against Idomeneus to recover the body of his brother-in-law Alcathous at the urging of Deiphobus.[9] He is the leader of the Trojans' Dardanian allies, as well as a second cousin and principal lieutenant of Hector, son of the Trojan king Priam. Aeneas's mother Aphrodite frequently comes to his aid on the battlefield, and he is a favorite of Apollo. Aphrodite and Apollo rescue Aeneas from combat with Diomedes of Argos, who nearly kills him, and carry him away to Pergamos for healing. Even Poseidon, who normally favors the Greeks, comes to Aeneas's rescue after he falls under the assault of Achilles, noting that Aeneas, though from a junior branch of the royal family, is destined to become king of the Trojan people. Bruce Louden presents Aeneas as a "type" in the tradition of Utnapishtim, Baucis and Philemon, and Lot; the just man spared the general destruction.[10] Apollodorus explains that "...the Greeks let him alone on account of his piety."[11]
+
+ The Roman mythographer Gaius Julius Hyginus (c. 64 BCE – CE 17) in his Fabulae[12] credits Aeneas with killing 28 enemies in the Trojan War. Aeneas also appears in the Trojan narratives attributed to Dares Phrygius and Dictys of Crete.
+
+ The history of Aeneas was continued by Roman authors. One influential source was the account of Rome's founding in Cato the Elder's Origines.[13] The Aeneas legend was well known in Virgil's day and appeared in various historical works, including the Roman Antiquities of the Greek historian Dionysius of Halicarnassus (relying on Marcus Terentius Varro), Ab Urbe Condita by Livy (probably dependent on Quintus Fabius Pictor, fl. 200 BCE), and Gnaeus Pompeius Trogus (now extant only in an epitome by Justin).
+
+ The Aeneid explains that Aeneas is one of the few Trojans who were not killed or enslaved when Troy fell. Aeneas, after being commanded by the gods to flee, gathered a group, collectively known as the Aeneads, who then traveled to Italy and became progenitors of Romans. The Aeneads included Aeneas's trumpeter Misenus, his father Anchises, his friends Achates, Sergestus, and Acmon, the healer Iapyx, the helmsman Palinurus, and his son Ascanius (also known as Iulus, Julus, or Ascanius Julius). He carried with him the Lares and Penates, the statues of the household gods of Troy, and transplanted them to Italy.
+
+ Several attempts to find a new home failed; one such stop was on Sicily, where in Drepanum, on the island's western coast, his father, Anchises, died peacefully.
+
+ After a brief but fierce storm sent up against the group at Juno's request, Aeneas and his fleet made landfall at Carthage after six years of wanderings. Aeneas had a year-long affair with the Carthaginian queen Dido (also known as Elissa), who proposed that the Trojans settle in her land and that she and Aeneas reign jointly over their peoples. A marriage of sorts was arranged between Dido and Aeneas at the instigation of Juno, who was told that her favorite city would eventually be defeated by the Trojans' descendants. Aeneas's mother Venus (the Roman adaptation of Aphrodite) realized that her son and his company needed a temporary respite to reinforce themselves for the journey to come. However, the messenger god Mercury was sent by Jupiter and Venus to remind Aeneas of his journey and his purpose, compelling him to leave secretly. When Dido learned of this, she uttered a curse that would forever pit Carthage against Rome, an enmity that would culminate in the Punic Wars. She then committed suicide by stabbing herself with the same sword she gave Aeneas when they first met.
+
+ After the sojourn in Carthage, the Trojans returned to Sicily where Aeneas organized funeral games to honor his father, who had died a year before. The company traveled on and landed on the western coast of Italy. Aeneas descended into the underworld where he met Dido (who turned away from him to return to her husband) and his father, who showed him the future of his descendants and thus the history of Rome.
+
+ Latinus, king of the Latins, welcomed Aeneas's army of exiled Trojans and let them reorganize their lives in Latium. His daughter Lavinia had been promised to Turnus, king of the Rutuli, but Latinus received a prophecy that Lavinia would be betrothed to one from another land – namely, Aeneas. Latinus heeded the prophecy, and Turnus consequently declared war on Aeneas at the urging of Juno, who was aligned with King Mezentius of the Etruscans and Queen Amata of the Latins. Aeneas's forces prevailed. Turnus was killed, and Virgil's account ends abruptly.
+
+ The rest of Aeneas's biography is gleaned from other ancient sources, including Livy and Ovid's Metamorphoses. According to Livy, Aeneas was victorious but Latinus died in the war. Aeneas founded the city of Lavinium, named after his wife. He later welcomed Dido's sister, Anna Perenna, who then committed suicide after learning of Lavinia's jealousy. After Aeneas's death, Venus asked Jupiter to make her son immortal. Jupiter agreed. The river god Numicus cleansed Aeneas of all his mortal parts and Venus anointed him with ambrosia and nectar, making him a god. Aeneas was recognized as the god Jupiter Indiges.[14]
+
+ Snorri Sturluson, in the Prologue of the Prose Edda, tells of the world as parted in three continents: Africa, Asia and the third part called Europe or Enea.[2][15] Snorri also tells of a Trojan named Munon or Menon, who marries the daughter of the High King (Yfirkonungr) Priam called Troan and travels to distant lands, marries the Sibyl and has a son, Tror, who, as Snorri tells, is identical to Thor. This tale resembles some episodes of the Aeneid.[16]
+ Continuations of Trojan matter in the Middle Ages had their effects on the character of Aeneas as well. The 12th-century French Roman d'Enéas addresses Aeneas's sexuality. Though Virgil appears to deflect all homoeroticism onto Nisus and Euryalus, making his Aeneas a purely heterosexual character, in the Middle Ages there was at least a suspicion of homoeroticism in Aeneas. The Roman d'Enéas addresses that charge, when Queen Amata opposes Aeneas's marrying Lavinia, claiming that Aeneas loved boys.[17]
+
+ Medieval interpretations of Aeneas were greatly influenced by both Virgil and other Latin sources. Specifically, the accounts by Dares and Dictys, which were reworked by 13th-century Italian writer Guido delle Colonne (in Historia destructionis Troiae), colored many later readings. From Guido, for instance, the Pearl Poet and other English writers get the suggestion[18] that Aeneas's safe departure from Troy with his possessions and family was a reward for treason, for which he was chastised by Hecuba.[19] In Sir Gawain and the Green Knight (late 14th century) the Pearl Poet, like many other English writers, employed Aeneas to establish a genealogy for the foundation of Britain,[18] and explains that Aeneas was "impeached for his perfidy, proven most true" (line 4).[20]
+
+ Aeneas had an extensive family tree. His wet-nurse was Caieta,[21] and he is the father of Ascanius with Creusa, and of Silvius with Lavinia. Ascanius, also known as Iulus (or Julius),[22] founded Alba Longa and was the first in a long series of kings. According to the mythology outlined by Virgil in the Aeneid, Romulus and Remus were both descendants of Aeneas through their mother Rhea Silvia, making Aeneas the progenitor of the Roman people.[23] Some early sources call him their father or grandfather,[24] but considering the commonly accepted dates of the fall of Troy (1184 BCE) and the founding of Rome (753 BCE), this seems unlikely. The Julian family of Rome, most notably Julius Cæsar and Augustus, traced their lineage to Ascanius and Aeneas,[25] thus to the goddess Venus. Through the Julians, the Palemonids make this claim. The legendary kings of Britain – including King Arthur – trace their family through a grandson of Aeneas, Brutus.[26]
+
+ Aeneas's consistent epithet in Virgil and other Latin authors is pius, a term that connotes reverence toward the gods and familial dutifulness.
+
+ In the Aeneid, Aeneas is described as strong and handsome, but neither his hair colour nor his complexion is described.[27] In late antiquity, however, sources add further physical descriptions. The De excidio Troiae of Dares Phrygius describes Aeneas as "auburn-haired, stocky, eloquent, courteous, prudent, pious, and charming".[28] There is also a brief physical description found in the 6th century AD Chronographia of John Malalas: "Aeneas: short, fat, with a good chest, powerful, with a ruddy complexion, a broad face, a good nose, fair skin, bald on the forehead, a good beard, grey eyes."[29]
+
+ Aeneas appears as a character in William Shakespeare's play Troilus and Cressida, set during the Trojan War.
+
+ Aeneas and Dido are the main characters of a 17th-century broadside ballad called "The Wandering Prince of Troy". The ballad ultimately alters Aeneas's fate from traveling on years after Dido's death to joining her as a spirit soon after her suicide.[30]
+
+ In modern literature, Aeneas is the speaker in two poems by Allen Tate, "Aeneas at Washington" and "Aeneas at New York". He is a main character in Ursula K. Le Guin's Lavinia, a re-telling of the last six books of the Aeneid told from the point of view of Lavinia, daughter of King Latinus of Latium.
46
+
47
+ Aeneas appears in David Gemmell's Troy series as a main heroic character who goes by the name Helikaon.
48
+
49
+ In Rick Riordan's book series, The Heroes of Olympus, Aeneas is regarded as the first Roman demigod, son of Venus rather than Aphrodite.
50
+
51
+ Will Adams' novel City of the Lost assumes that much of the information provided by Virgil is mistaken, and that the true Aeneas and Dido did not meet and love in Carthage but in a Phoenician colony at Cyprus, on the site of the modern Famagusta. Their tale is interspersed with that of modern activists who, while striving to stop an ambitious Turkish Army general
52
+ trying to stage a coup, accidentally discover the hidden ruins of Dido's palace.
53
+
54
+ Aeneas is a title character in Henry Purcell's opera Dido and Aeneas (c. 1688) and Jakob Greber's Enea in Cartagine (Aeneas in Carthage) (1711), and one of the principal roles in Hector Berlioz's opera Les Troyens (c. 1857), as well as in Metastasio's immensely popular[31] opera libretto Didone abbandonata. Canadian composer James Rolfe composed his opera Aeneas and Dido (2007; to a libretto by André Alexis) as a companion piece to Purcell's opera.
55
+
56
+ Despite its many dramatic elements, Aeneas's story has generated little interest from the film industry. Ronald Lewis portrayed Aeneas as a supporting character in Helen of Troy, directed by Robert Wise; there he is a member of the Trojan royal family and a close and loyal friend to Paris, and he escapes at the end of the film. Portrayed by Steve Reeves, he was the main character in the 1961 sword and sandal film Guerra di Troia (The Trojan War). Reeves reprised the role the following year in the film The Avenger, about Aeneas's arrival in Latium and his conflicts with local tribes as he tries to settle his fellow Trojan refugees there.
57
+
58
+ Giulio Brogi portrayed Aeneas in the 1971 Italian TV miniseries Eneide, which gives the whole story of the Aeneid, from Aeneas's escape from Troy, to his meeting with Dido, his arrival in Italy, and his duel with Turnus.[32]
59
+
60
+ The most recent cinematic portrayal of Aeneas was in the film Troy, in which he appears as a youth charged by Paris to protect the Trojan refugees, and to continue the ideals of the city and its people. Paris gives Aeneas Priam's sword, in order to give legitimacy and continuity to the royal line of Troy – and lay the foundations of Roman culture. In this film, he is not a member of the royal family and does not appear to fight in the war.
61
+
62
+ In the role-playing game Vampire: The Requiem by White Wolf Game Studios, Aeneas figures as one of the mythical founders of the Ventrue Clan.
63
+
64
+ In the action game Warriors: Legends of Troy, Aeneas is a playable character. The game ends with him and the Aeneans fleeing Troy's destruction; spurred on by the words of a prophetess thought crazed, he sails to a new country (Italy), where he will found an empire greater than Greece and Troy combined, one that shall rule the world for 1000 years, never to be outdone in the tale of men (the Roman Empire).
65
+
66
+ In the 2018 TV miniseries Troy: Fall of a City, Aeneas is portrayed by Alfred Enoch.[33]
67
+
68
+ Scenes depicting Aeneas, especially from the Aeneid, have been the focus of study for centuries. They have been the frequent subject of art and literature since their debut in the 1st century BC.
69
+
70
+ The artist Giovanni Battista Tiepolo was commissioned by Gaetano Valmarana in 1757 to fresco several rooms in the Villa Valmarana, the family villa situated outside Vicenza. Tiepolo decorated the palazzina with scenes from epics such as Homer's Iliad and Virgil's Aeneid.[34]
en/174.html.txt ADDED
@@ -0,0 +1,199 @@
1
+
2
+
3
+ The Alps[a] are the highest and most extensive mountain range system that lies entirely in Europe,[2][b] and stretch approximately 1,200 kilometres (750 mi) across eight Alpine countries (from west to east): France, Switzerland, Monaco, Italy, Liechtenstein, Austria, Germany, and Slovenia.[3] The Alpine arch generally extends from Nice on the western Mediterranean to Trieste on the Adriatic and Vienna at the beginning of the Pannonian basin. The mountains were formed over tens of millions of years as the African and Eurasian tectonic plates collided. Extreme shortening caused by the event resulted in marine sedimentary rocks rising by thrusting and folding into high mountain peaks such as Mont Blanc and the Matterhorn. Mont Blanc spans the French–Italian border, and at 4,809 m (15,778 ft) is the highest mountain in the Alps. The Alpine region contains about a hundred peaks higher than 4,000 metres (13,000 ft).
4
+
5
+ The altitude and size of the range affect the climate in Europe; in the mountains, precipitation levels vary greatly and climatic conditions consist of distinct zones. Wildlife such as ibex live in the higher peaks to elevations of 3,400 m (11,155 ft), and plants such as Edelweiss grow in rocky areas at lower elevations as well as at higher ones. Evidence of human habitation in the Alps goes back to the Palaeolithic era. A mummified man, determined to be 5,000 years old, was discovered on a glacier at the Austrian–Italian border in 1991.
6
+
7
+ By the 6th century BC, the Celtic La Tène culture was well established. Hannibal famously crossed the Alps with a herd of elephants, and the Romans had settlements in the region. In 1800, Napoleon crossed one of the mountain passes with an army of 40,000. The 18th and 19th centuries saw an influx of naturalists, writers, and artists, in particular, the Romantics, followed by the golden age of alpinism as mountaineers began to ascend the peaks.
8
+
9
+ The Alpine region has a strong cultural identity. The traditional culture of farming, cheesemaking, and woodworking still exists in Alpine villages, although the tourist industry began to grow early in the 20th century and expanded greatly after World War II to become the dominant industry by the end of the century. The Winter Olympic Games have been hosted in the Swiss, French, Italian, Austrian and German Alps. At present, the region is home to 14 million people and has 120 million annual visitors.[4]
10
+
11
+ The English word Alps derives from the Latin Alpes (through French).
12
+
13
+ The Latin word Alpes could possibly come from the adjective albus[5] ("white"), or from the Greek goddess Alphito, whose name is related to alphita, the "white flour"; alphos, a dull white leprosy; and finally the Proto-Indo-European word alphos. Similarly, the river god Alpheus also derives from the Greek alphos and means whitish.
14
+
15
+ In his commentary on the Aeneid of Vergil, the late fourth-century grammarian Maurus Servius Honoratus says that all high mountains are called Alpes by Celts.[6] The term may be common to Italo-Celtic, because the Celtic languages have terms for high mountains derived from alp.[citation needed]
16
+
17
+ This may be consistent with the theory that in Greek Alpes is a name of non-Indo-European origin (which is common for prominent mountains and mountain ranges in the Mediterranean region). According to the Oxford English Dictionary, the Latin Alpes might possibly derive from a pre-Indo-European word *alb "hill"; "Albania" is a related derivation. Albania, a name not native to the region known as the country of Albania, has been used as a name for a number of mountainous areas across Europe. In Roman times, "Albania" was a name for the eastern Caucasus, while in the English language "Albania" (or "Albany") was occasionally used as a name for Scotland,[7] although it is more likely derived from the Latin word albus,[5] the color white.
18
+
19
+ In modern languages the term alp, alm, albe or alpe refers to grazing pastures in the alpine regions below the glaciers, not to the peaks.[8] An alp refers to a high mountain pasture where cows are taken to be grazed during the summer months and where hay barns can be found, so the term "the Alps", referring to the mountains, is a misnomer.[9][10] The term for the mountain peaks varies by nation and language: words such as Horn, Kogel, Kopf, Gipfel, Spitze, Stock, and Berg are used in German-speaking regions; Mont, Pic, Tête, Pointe, Dent, Roche, and Aiguille in French-speaking regions; and Monte, Picco, Corno, Punta, Pizzo, or Cima in Italian-speaking regions.[11]
20
+
21
+ The Alps are a crescent-shaped geographic feature of central Europe that ranges in an 800 km (500 mi) arc from east to west and is 200 km (120 mi) in width. The mean height of the mountain peaks is 2.5 km (1.6 mi).[12] The range stretches from the Mediterranean Sea north above the Po basin, extending through France from Grenoble, and stretching eastward through mid and southern Switzerland. The range continues onward toward Vienna, Austria, and east to the Adriatic Sea and Slovenia.[13][14][15] To the south it dips into northern Italy and to the north extends to the southern border of Bavaria in Germany.[15] In areas like Chiasso, Switzerland, and Allgäu, Bavaria, the demarcation between the mountain range and the flatlands is clear; in other places such as Geneva, the demarcation is less clear. The countries with the greatest alpine territory are Austria (28.7% of the total area), Italy (27.2%), France (21.4%) and Switzerland (13.2%).[16]
22
+
23
+ The highest portion of the range is divided by the glacial trough of the Rhône valley, from Mont Blanc to the Matterhorn and Monte Rosa on the southern side, and the Bernese Alps on the northern. The peaks in the easterly portion of the range, in Austria and Slovenia, are smaller than those in the central and western portions.[15]
24
+
25
+ The variances in nomenclature in the region spanned by the Alps make classification of the mountains and subregions difficult, but a general classification is that of the Eastern Alps and Western Alps, with the divide between the two occurring in eastern Switzerland, near the Splügen Pass, according to geologist Stefan Schmid.[8]
26
+
27
+ The highest peaks of the Western Alps and Eastern Alps, respectively, are Mont Blanc, at 4,810 m (15,780 ft),[17] and Piz Bernina, at 4,049 metres (13,284 ft). The second-highest major peaks are Monte Rosa, at 4,634 m (15,200 ft), and Ortler,[18] at 3,905 m (12,810 ft), respectively.
28
+
29
+ A series of lower mountain ranges runs parallel to the main chain of the Alps, including the French Prealps in France and the Jura Mountains in Switzerland and France. The main chain of the Alps follows the watershed from the Mediterranean Sea to the Wienerwald, passing over many of the highest and best-known peaks in the Alps. From the Colle di Cadibona to the Col de Tende it runs westwards, before turning to the northwest and then, near the Colle della Maddalena, to the north. Upon reaching the Swiss border, the line of the main chain heads approximately east-northeast, a heading it follows until its end near Vienna.[19]
30
+
31
+ The northeastern end of the Alpine arc, directly on the Danube, which flows into the Black Sea, is the Leopoldsberg near Vienna. In contrast, the southeastern part of the Alps ends on the Adriatic Sea in the area around Trieste, towards Duino and Barcola.[20]
32
+
33
+ The Alps have been crossed for war and commerce, and by pilgrims, students and tourists. Crossing routes by road, train or foot are known as passes, and usually consist of depressions in the mountains in which a valley leads from the plains and hilly pre-mountainous zones.[21] In the medieval period hospices were established by religious orders at the summits of many of the main passes.[10] The most important passes are the Col de l'Iseran (the highest), the Col Agnel, the Brenner Pass, the Mont-Cenis, the Great St. Bernard Pass, the Col de Tende, the Gotthard Pass, the Semmering Pass, the Simplon Pass, and the Stelvio Pass.[22]
34
+ Crossing the Italian-Austrian border, the Brenner Pass separates the Ötztal Alps and Zillertal Alps and has been in use as a trading route since the 14th century. The lowest of the Alpine passes at 985 m (3,232 ft), the Semmering crosses from Lower Austria to Styria; since the 12th century when a hospice was built there, it has seen continuous use. A railroad with a tunnel 1.6 kilometres (1 mi) long was built along the route of the pass in the mid-19th century. With a summit of 2,469 m (8,100 ft), the Great St. Bernard Pass is one of the highest in the Alps, crossing the Italian-Swiss border east of the Pennine Alps along the flanks of Mont Blanc. The pass was used by Napoleon Bonaparte to cross 40,000 troops in 1800.
35
+
36
+ The Mont Cenis pass has been a major commercial and military road between Western Europe and Italy. The pass was crossed by many troops on their way to the Italian peninsula, from Constantine I, Pepin the Short and Charlemagne to Henry IV, Napoléon and, more recently, the German Gebirgsjäger during World War II.
37
+ The pass has now been supplanted by the Fréjus Highway Tunnel (opened 1980) and Rail Tunnel (opened 1871).
38
+
39
+ The Saint Gotthard Pass crosses from Central Switzerland to Ticino; in 1882 the 15-kilometre-long (9.3 mi) Saint Gotthard Railway Tunnel was opened, connecting Lucerne in Switzerland with Milan in Italy. Ninety-eight years later followed the Gotthard Road Tunnel (16.9 km or 10.5 mi long), connecting the A2 motorway in Göschenen on the German-speaking side with Airolo on the Italian-speaking side, along the same route as the railway tunnel. On June 1, 2016 the world's longest railway tunnel, the Gotthard Base Tunnel, was opened, connecting Erstfeld in the canton of Uri with Bodio in the canton of Ticino by two single tubes of 57.1 kilometres (35.5 mi).[23] It is the first tunnel that traverses the Alps on a flat route.[24] Since December 11, 2016 it has been part of the regular railway timetable, used hourly as the standard way to travel between Basel/Lucerne/Zurich and Bellinzona/Lugano/Milano.[25]
40
+
41
+ The highest pass in the Alps is the Col de l'Iseran in Savoy (France) at 2,770 m (9,088 ft), followed by the Stelvio Pass in northern Italy at 2,756 m (9,042 ft); the road was built in the 1820s.[22]
42
+
43
+ Important geological concepts were established as naturalists began studying the rock formations of the Alps in the 18th century. In the mid-19th century the now defunct theory of geosynclines was used to explain the presence of "folded" mountain chains, but by the mid-20th century the theory of plate tectonics had become widely accepted.[26]
44
+
45
+ The formation of the Alps (the Alpine orogeny) was an episodic process that began about 300 million years ago.[28] In the Paleozoic Era the Pangaean supercontinent consisted of a single tectonic plate; it broke into separate plates during the Mesozoic Era and the Tethys sea developed between Laurasia and Gondwana during the Jurassic Period.[26] The Tethys was later squeezed between colliding plates causing the formation of mountain ranges called the Alpide belt, from Gibraltar through the Himalayas to Indonesia—a process that began at the end of the Mesozoic and continues into the present. The formation of the Alps was a segment of this orogenic process,[26] caused by the collision between the African and the Eurasian plates[29] that began in the late Cretaceous Period.[30]
46
+
47
+ Under extreme compressive stresses and pressure, marine sedimentary rocks were uplifted, creating characteristic recumbent folds, or nappes, and thrust faults.[31] As the rising peaks underwent erosion, a layer of marine flysch sediments was deposited in the foreland basin, and the sediments became involved in younger nappes (folds) as the orogeny progressed. Coarse sediments from the continual uplift and erosion were later deposited in foreland areas as molasse.[29] The molasse regions in Switzerland and Bavaria were well-developed and saw further upthrusting of flysch.[32]
48
+
49
+ The Alpine orogeny occurred in ongoing cycles through to the Paleogene causing differences in nappe structures, with a late-stage orogeny causing the development of the Jura Mountains.[33] A series of tectonic events in the Triassic, Jurassic and Cretaceous periods caused different paleogeographic regions.[33] The Alps are subdivided by different lithology (rock composition) and nappe structure according to the orogenic events that affected them.[8] The geological subdivision differentiates the Western, Eastern Alps and Southern Alps: the Helveticum in the north, the Penninicum and Austroalpine system in the centre and, south of the Periadriatic Seam, the Southern Alpine system.[34]
50
+
51
+ According to geologist Stefan Schmid, because the Western Alps underwent a metamorphic event in the Cenozoic Era while the Austroalpine peaks underwent an event in the Cretaceous Period, the two areas show distinct differences in nappe formations.[33] Flysch deposits in the Southern Alps of Lombardy probably occurred in the Cretaceous or later.[33]
52
+
53
+ Peaks in France, Italy and Switzerland lie in the "Houillière zone", which consists of basement with sediments from the Mesozoic Era.[34] High "massifs" with external sedimentary cover are more common in the Western Alps and were affected by Neogene Period thin-skinned thrusting, whereas the Eastern Alps have comparatively few high-peaked massifs.[32] Similarly the peaks in eastern Switzerland extending to western Austria (Helvetic nappes) consist of thin-skinned sedimentary folding that detached from former basement rock.[35]
54
+
55
+ In simple terms the structure of the Alps consists of layers of rock of European, African and oceanic (Tethyan) origin.[36] The bottom nappe structure is of continental European origin, above which are stacked marine sediment nappes, topped off by nappes derived from the African plate.[37] The Matterhorn is an example of the ongoing orogeny and shows evidence of great folding. The tip of the mountain consists of gneisses from the African plate; the base of the peak, below the glaciated area, consists of European basement rock. The sequence of Tethyan marine sediments and their oceanic basement is sandwiched between rock derived from the African and European plates.[27]
56
+
57
+ The core regions of the Alpine orogenic belt have been folded and fractured in such a manner that erosion created the characteristic steep vertical peaks of the Swiss Alps that rise seemingly straight out of the foreland areas.[30] Peaks such as Mont Blanc, the Matterhorn, and high peaks in the Pennine Alps, the Briançonnais, and Hohe Tauern consist of layers of rock from the various orogenies including exposures of basement rock.[38]
58
+
59
+ Due to the ever-present geologic instability, earthquakes continue in the Alps to this day.[39] Typically, the largest earthquakes in the Alps have been between magnitude 6 and 7 on the Richter scale.[40]
60
+
61
+ The Union Internationale des Associations d'Alpinisme (UIAA) has defined a list of 82 "official" Alpine summits that reach at least 4,000 m (13,123 ft).[41] The list includes not only mountains, but also subpeaks with little prominence that are considered important mountaineering objectives. Below are listed the 22 "four-thousanders" with at least 500 m (1,640 ft) of prominence.
62
+
63
+ While Mont Blanc was first climbed in 1786, most of the Alpine four-thousanders were climbed during the second half of the 19th century; the ascent of the Matterhorn in 1865 marked the end of the golden age of alpinism. Karl Blodig (1859–1956) was among the first to successfully climb all the major 4,000 m peaks. He completed his series of ascents in 1911.[42]
64
+
65
+ The first British Mont Blanc ascent was in 1788; the first female ascent in 1819. By the mid-1850s Swiss mountaineers had ascended most of the peaks and were eagerly sought as mountain guides. Edward Whymper reached the top of the Matterhorn in 1865 (after seven attempts), and in 1938 the last of the six great north faces of the Alps was climbed with the first ascent of the Eiger Nordwand (north face of the Eiger).[43]
66
+
67
+ The Alps are a source of minerals that have been mined for thousands of years. In the 8th to 6th centuries BC, during the Hallstatt culture, Celtic tribes mined copper; later the Romans mined gold for coins in the Bad Gastein area. Erzberg in Styria furnishes high-quality iron ore for the steel industry. Crystals such as cinnabar, amethyst, and quartz are found throughout much of the Alpine region. The cinnabar deposits in Slovenia are a notable source of cinnabar pigments.[45]
68
+
69
+ Alpine crystals have been studied and collected for hundreds of years, and began to be classified in the 18th century. Leonhard Euler studied the shapes of crystals, and by the 19th century crystal hunting was common in Alpine regions. David Friedrich Wiser amassed a collection of 8000 crystals that he studied and documented. In the 20th century Robert Parker wrote a well-known work about the rock crystals of the Swiss Alps; at the same period a commission was established to control and standardize the naming of Alpine minerals.[46]
70
+
71
+ In the Miocene Epoch the mountains underwent severe erosion because of glaciation,[30] which was noted in the mid-19th century by naturalist Louis Agassiz, who presented a paper proclaiming the Alps were covered in ice at various intervals – a theory he formed when studying rocks near his Neuchâtel home, which he believed originated to the west in the Bernese Oberland. Because of his work he came to be known as the "father of the ice-age concept", although other naturalists before him had put forth similar ideas.[47]
72
+
73
+ Agassiz studied glacier movement in the 1840s at the Unteraar Glacier where he found the glacier moved 100 m (328 ft) per year, more rapidly in the middle than at the edges. His work was continued by other scientists and now a permanent laboratory exists inside a glacier under the Jungfraujoch, devoted exclusively to the study of Alpine glaciers.[47]
74
+
75
+ Glaciers pick up rocks and sediment as they flow. This causes erosion and the formation of valleys over time. The Inn valley is an example of a valley carved by glaciers during the ice ages, with a typical terraced structure caused by erosion. Eroded rocks from the most recent ice age lie at the bottom of the valley, while the top of the valley consists of erosion from earlier ice ages.[47] Glacial valleys have characteristically steep walls (reliefs); valleys with lower reliefs and talus slopes are remnants of glacial troughs or previously infilled valleys.[48] Moraines, piles of rock picked up during the movement of the glacier, accumulate at the edges, centre and terminus of glaciers.[47]
76
+
77
+ Alpine glaciers can be straight rivers of ice, long sweeping rivers, spread in a fan-like shape (Piedmont glaciers), and curtains of ice that hang from vertical slopes of the mountain peaks. The stress of the movement causes the ice to break and crack loudly, perhaps explaining why the mountains were believed to be home to dragons in the medieval period. The cracking creates unpredictable and dangerous crevasses, often invisible under new snowfall, which cause the greatest danger to mountaineers.[49]
78
+
79
+ Glaciers end in ice caves (the Rhône Glacier), by trailing into a lake or river, or by shedding snowmelt on a meadow. Sometimes a piece of glacier will detach or break, resulting in flooding, property damage and loss of life.[49]
80
+
81
+ High levels of precipitation cause the glaciers to descend to permafrost levels in some areas, whereas in other, more arid regions, glaciers remain above about the 3,500 m (11,483 ft) level.[50] The 1,817 square kilometres (702 sq mi) of the Alps covered by glaciers in 1876 had shrunk to 1,342 km2 (518 sq mi) by 1973, resulting in decreased river run-off levels.[51] Forty percent of the glaciation in Austria has disappeared since 1850, and 30% of that in Switzerland.[52]
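+
+ As a quick sanity check of the figures just quoted, the 1876–1973 loss works out to roughly a quarter of the glaciated area; a minimal Python sketch of the arithmetic (the variable names are illustrative only):
+
+ area_1876_km2 = 1817  # glaciated area in 1876, from the paragraph above
+ area_1973_km2 = 1342  # glaciated area in 1973
+ loss = (area_1876_km2 - area_1973_km2) / area_1876_km2
+ print(f"fractional loss 1876-1973: {loss:.1%}")  # -> 26.1%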
82
+
83
+ The Alps provide lowland Europe with drinking water, irrigation, and hydroelectric power.[54] Although the area is only about 11 percent of the surface area of Europe, the Alps provide up to 90 percent of water to lowland Europe, particularly to arid areas and during the summer months. Cities such as Milan depend on Alpine runoff for 80 percent of their water.[13][55][56] Water from the rivers is used in over 500 hydroelectricity power plants, generating as much as 2900 GWh[clarification needed] of electricity.[4]
84
+
85
+ Major European rivers flow from the Alps, such as the Rhine, the Rhône, the Inn, and the Po, all of which have headwaters in the Alps and flow into neighbouring countries, finally emptying into the North Sea, the Mediterranean Sea, the Adriatic Sea and the Black Sea. Other rivers such as the Danube have major tributaries flowing into them that originate in the Alps.[13] The Rhône is second to the Nile as a freshwater source to the Mediterranean Sea; the river begins as glacial meltwater, flows into Lake Geneva, and from there to France where one of its uses is to cool nuclear power plants.[57] The Rhine originates in a 30-square-kilometre (12 sq mi) area in Switzerland and represents almost 60 percent of water exported from the country.[57] Tributary valleys, some of which are complicated, channel water to the main valleys which can experience flooding during the snow melt season when rapid runoff causes debris torrents and swollen rivers.[58]
86
+
87
+ The rivers form lakes, such as Lake Geneva, a crescent-shaped lake crossing the Swiss border, with Lausanne on the Swiss side and the town of Evian-les-Bains on the French side. In Germany, the medieval St. Bartholomew's chapel was built on the south side of the Königssee, accessible only by boat or by climbing over the abutting peaks.[59]
88
+
89
+ Additionally, the Alps have led to the creation of large lakes in Italy. For instance, the Sarca, the primary inflow of Lake Garda, originates in the Italian Alps.[60]
90
+
91
+ Scientists have been studying the impact of climate change and water use. For example, each year more water is diverted from rivers for snowmaking in the ski resorts, the effect of which is as yet unknown. Furthermore, the decrease of glaciated areas, combined with a succession of winters with lower-than-expected precipitation, may have a future impact on the rivers in the Alps as well as on the water availability to the lowlands.[55][61]
92
+
93
+ The Alps are a classic example of what happens when a temperate area at lower altitude gives way to higher-elevation terrain. Elevations around the world that have cold climates similar to those of the polar regions have been called Alpine. A rise from sea level into the upper regions of the atmosphere causes the temperature to decrease (see adiabatic lapse rate). The effect of mountain chains on prevailing winds is to carry warm air belonging to the lower region into an upper zone, where it expands in volume at the cost of a proportionate loss of temperature, often accompanied by precipitation in the form of snow or rain.[62] The height of the Alps is sufficient to divide the weather patterns in Europe into a wet north and a dry south because moisture is sucked from the air as it flows over the high peaks.[63]
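+
+ A rough way to see the lapse-rate effect in numbers: assuming the standard-atmosphere average of about 6.5 °C of cooling per 1,000 m of ascent (a figure from general meteorology, not from this article), a short Python sketch gives:
+
+ LAPSE_RATE_C_PER_M = 6.5 / 1000  # assumed standard-atmosphere average
+
+ def temperature_at(altitude_m, sea_level_temp_c):
+     # Approximate near-surface air temperature (deg C) at a given altitude.
+     return sea_level_temp_c - LAPSE_RATE_C_PER_M * altitude_m
+
+ print(temperature_at(3000, 20.0))  # a 20 deg C day at sea level -> 0.5 deg C at 3,000 m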
94
+
95
+ The severe weather in the Alps has been studied since the 18th century, particularly the weather patterns such as the seasonal foehn wind. Numerous weather stations were placed in the mountains early in the 20th century, providing continuous data for climatologists.[12] Some of the valleys are quite arid, such as the Aosta valley in Italy, the Maurienne in France, the Valais in Switzerland, and northern Tyrol.[12]
96
+
97
+ The areas that are not arid and receive high precipitation experience periodic flooding from rapid snowmelt and runoff.[58] The mean precipitation in the Alps ranges from a low of 2,600 mm (100 in) per year to 3,600 mm (140 in) per year, with the higher levels occurring at high altitudes. At altitudes between 1,000 and 3,000 m (3,300 and 9,800 ft), snowfall begins in November and accumulates through to April or May when the melt begins. Snow lines vary from 2,400 to 3,000 m (7,900 to 9,800 ft), above which the snow is permanent and the temperatures hover around the freezing point even during July and August. High-water levels in streams and rivers peak in June and July when the snow is still melting at the higher altitudes.[64]
98
+
99
+ The Alps are split into five climatic zones, each with different vegetation. The climate, plant life and animal life vary among the different sections or zones of the mountains. The lowest zone is the colline zone, which exists between 500 and 1,000 m (1,600 and 3,300 ft), depending on the location. The montane zone extends from 800 to 1,700 m (2,600 to 5,600 ft), followed by the sub-Alpine zone from 1,600 to 2,400 m (5,200 to 7,900 ft). The Alpine zone, extending from tree line to snow line, is followed by the glacial zone, which covers the glaciated areas of the mountain. Climatic conditions show variances within the same zones; for example, weather conditions at the head of a mountain valley, extending directly from the peaks, are colder and more severe than those at the mouth of a valley, which tend to be less severe and receive less snowfall.[65]
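+
+ Because the zone boundaries quoted above overlap and shift with location, any fixed mapping is a simplification; the following Python sketch uses single illustrative cutoffs (assumptions, not values prescribed by the article):
+
+ def alpine_zone(elevation_m):
+     # Rough zone lookup using simplified cutoffs from the ranges above.
+     if elevation_m < 1000:
+         return "colline"
+     if elevation_m < 1700:
+         return "montane"
+     if elevation_m < 2400:
+         return "sub-Alpine"
+     if elevation_m < 3000:  # approximate snow line
+         return "Alpine"
+     return "glacial"
+
+ print(alpine_zone(1500))  # -> montane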
100
+
101
+ Various models of climate change have been projected into the 22nd century for the Alps, with an expectation that a trend toward increased temperatures will have an effect on snowfall, snowpack, glaciation, and river runoff.[66] Significant changes, of both natural and anthropogenic origins, have already been diagnosed from observations.[67][68][69]
102
+
103
+ Thirteen thousand species of plants have been identified in the Alpine regions.[4] Alpine plants are grouped by habitat and soil type, which can be limestone or non-calcareous. The habitats range from meadows, bogs and woodland (deciduous and coniferous) to soil-less scree and moraines, and rock faces and ridges.[9] A natural vegetation limit with altitude is given by the presence of the chief deciduous trees – oak, beech, ash and sycamore maple. These do not reach exactly to the same elevation, nor are they often found growing together; but their upper limit corresponds accurately enough to the change from a temperate to a colder climate that is further proved by a change in the presence of wild herbaceous vegetation.[70] This limit usually lies about 1,200 m (3,900 ft) above the sea on the north side of the Alps, but on the southern slopes it often rises to 1,500 m (4,900 ft), sometimes even to 1,700 m (5,600 ft).[71]
104
+
105
+ Above the forest, there is often a band of short pine trees (Pinus mugo), which is in turn superseded by Alpenrosen, dwarf shrubs, typically Rhododendron ferrugineum (on acid soils) or Rhododendron hirsutum (on alkaline soils).[72] Although the Alpenrose prefers acidic soil, the plants are found throughout the region.[9] Above the tree line is the area defined as "alpine", where plants of the alpine meadow are found that have adapted well to harsh conditions of cold temperatures, aridity, and high altitudes. The alpine area fluctuates greatly because of regional fluctuations in tree lines.[73]
106
+
107
+ Alpine plants such as the Alpine gentian grow in abundance in areas such as the meadows above the Lauterbrunnental. Gentians are named after the Illyrian king Gentius, and 40 species of the early-spring blooming flower grow in the Alps, in a range of 1,500 to 2,400 m (4,900 to 7,900 ft).[74] Writing about the gentians in Switzerland D. H. Lawrence described them as "darkening the day-time, torch-like with the smoking blueness of Pluto's gloom."[75] Gentians tend to "appear" repeatedly as the spring blooming takes place at progressively later dates, moving from the lower altitude to the higher altitude meadows where the snow melts much later than in the valleys. On the highest rocky ledges the spring flowers bloom in the summer.[9]
108
+
109
+ At these higher altitudes, the plants tend to form isolated cushions. In the Alps, several species of flowering plants have been recorded above 4,000 m (13,000 ft), including Ranunculus glacialis, Androsace alpina and Saxifraga biflora. Eritrichium nanum, commonly known as the King of the Alps, is the most elusive of the alpine flowers, growing on rocky ridges at 2,600 to 3,750 m (8,530 to 12,300 ft).[76] Perhaps the best known of the alpine plants is Edelweiss which grows in rocky areas and can be found at altitudes as low as 1,200 m (3,900 ft) and as high as 3,400 m (11,200 ft).[9] The plants that grow at the highest altitudes have adapted to conditions by specialization such as growing in rock screes that give protection from winds.[77]
110
+
111
+ The extreme and stressful climatic conditions give way to the growth of plant species with secondary metabolites important for medicinal purposes. Origanum vulgare, Prunella vulgaris, Solanum nigrum and Urtica dioica are some of the more useful medicinal species found in the Alps.[78]
112
+
113
+ Human interference has nearly exterminated the trees in many areas, and, except for the beech forests of the Austrian Alps, forests of deciduous trees are rarely found after the extreme deforestation between the 17th and 19th centuries.[79] The vegetation has changed since the second half of the 20th century, as the high alpine meadows cease to be harvested for hay or used for grazing which eventually might result in a regrowth of forest. In some areas the modern practice of building ski runs by mechanical means has destroyed the underlying tundra from which the plant life cannot recover during the non-skiing months, whereas areas that still practice a natural piste type of ski slope building preserve the fragile underlayers.[77]
114
+
115
+ The Alps are a habitat for 30,000 species of wildlife, ranging from the tiniest snow fleas to brown bears, many of which have made adaptations to the harsh cold conditions and high altitudes to the point that some only survive in specific micro-climates either directly above or below the snow line.[4][80]
116
+
117
+ The largest mammals to live at the highest altitudes are alpine ibex, which have been sighted as high as 3,000 m (9,800 ft). The ibex live in caves and descend to eat the succulent alpine grasses.[81] Classified as antelopes,[9] chamois are smaller than ibex and are found throughout the Alps, living above the tree line; they are common in the entire alpine range.[82] Areas of the eastern Alps are still home to brown bears. In Switzerland the canton of Bern was named for the bears, but the last bear there is recorded as having been killed in 1792 above Kleine Scheidegg by three hunters from Grindelwald.[83]
118
+
119
+ Many rodents, such as voles, live underground. Marmots live almost exclusively above the tree line, as high as 2,700 m (8,900 ft). They hibernate in large groups to provide warmth,[84] and can be found in all areas of the Alps, in large colonies they build beneath the alpine pastures.[9] Golden eagles and bearded vultures are the largest birds to be found in the Alps; they nest high on rocky ledges and can be found at altitudes of 2,400 m (7,900 ft). The most common bird is the alpine chough, which can be found scavenging at climbers' huts or at the Jungfraujoch, a high-altitude tourist destination.[85]
120
+
121
+ Reptiles such as adders and vipers live up to the snow line; because they cannot bear the cold temperatures they hibernate underground and soak up the warmth on rocky ledges.[86] The high-altitude Alpine salamanders have adapted to living above the snow line by giving birth to fully developed young rather than laying eggs. Brown trout can be found in the streams up to the snow line.[86] Molluscs such as the wood snail live up to the snow line. Popularly gathered as food, the snails are now protected.[87]
122
+
123
+ A number of species of moths live in the Alps, some of which are believed to have evolved in the same habitat up to 120 million years ago, long before the Alps were created. Blue butterflies can commonly be seen drinking from the snow melt; some species of blues fly as high as 1,800 m (5,900 ft).[88] The butterflies tend to be large, such as those from the swallowtail genus Parnassius, with a habitat that ranges to 1,800 m (5,900 ft). Twelve species of beetles have habitats up to the snow line; the most beautiful, Rosalia alpina, formerly collected for its colours, is now protected.[89] Spiders, such as the large wolf spider, live above the snow line and can be seen as high as 400 m (1,300 ft). Scorpions can be found in the Italian Alps.[87]
124
+
125
+ Some of the species of moths and insects show evidence of having been indigenous to the area from as long ago as the Alpine orogeny. In Emosson in Valais, Switzerland, dinosaur tracks were found in the 1970s, dating probably from the Triassic Period.[90]
126
+
127
+ About 10,000 years ago, when the ice melted after the Würm glaciation, late Palaeolithic communities were established along the lake shores and in cave systems. Evidence of human habitation has been found in caves near Vercors, close to Grenoble; in Austria the Mondsee culture shows evidence of houses built on piles to keep them dry. Standing stones have been found in Alpine areas of France and Italy. The Rock Drawings in Valcamonica are more than 5000 years old; more than 200,000 drawings and etchings have been identified at the site.[91]
128
+
129
+ In 1991 a mummy of a neolithic body, known as Ötzi the Iceman, was discovered by hikers on the Similaun glacier. His clothing and gear indicate that he lived in an alpine farming community, while the location and manner of his death – an arrowhead was discovered in his shoulder – suggest he was travelling from one place to another.[92] Analysis of the mitochondrial DNA of Ötzi has shown that he belongs to the K1 subclade, which cannot be categorized into any of the three modern branches of that subclade. The new subclade has provisionally been named K1ö for Ötzi.[93]
130
+
131
+ Celtic tribes settled in Switzerland between 1500 and 1000 BC. The Raetians lived in the eastern regions, the west was occupied by the Helvetii, and the Allobrogi settled in the Rhône valley and in Savoy. The Ligurians and Adriatic Veneti lived in north-west Italy and Triveneto respectively. Among the many substances Celtic tribes mined was salt, in areas such as Salzburg in Austria, where evidence of the Hallstatt culture was found by a mine manager in the 19th century.[91] By the 6th century BC the La Tène culture was well established in the region,[94] and became known for high-quality decorated weapons and jewellery.[95] The Celts were the most widespread of the mountain tribes – they had warriors that were strong, tall and fair-skinned, and skilled with iron weapons, which gave them an advantage in warfare.[96]
132
+
133
+ During the Second Punic War in 218 BC, the Carthaginian general Hannibal probably crossed the Alps with an army numbering 38,000 infantry, 8,000 cavalry, and 37 war elephants. This was one of the most celebrated achievements of any military force in ancient warfare,[97] although no evidence exists of the actual crossing or the place of crossing. The Romans, however, had built roads along the mountain passes, which continued to be used through the medieval period to cross the mountains, and Roman road markers can still be found on the mountain passes.[98]
134
+
135
+ The Roman expansion brought the defeat of the Allobrogi in 121 BC, and during the Gallic Wars in 58 BC Julius Caesar overcame the Helvetii. The Rhaetians continued to resist but were eventually conquered when the Romans turned northward to the Danube valley in Austria and defeated the Brigantes.[99] The Romans built settlements in the Alps; towns such as Aosta (named for Augustus) in Italy, Martigny and Lausanne in Switzerland, and Partenkirchen in Bavaria show remains of Roman baths, villas, arenas and temples.[100] Much of the Alpine region was gradually settled by Germanic tribes (Lombards, Alemanni, Bavarii, and Franks) from the 6th to the 13th centuries, mixing with the local Celtic tribes.[101]
136
+
137
+ Christianity was established in the region by the Romans, and monasteries and churches were subsequently established in the high regions. The Frankish expansion of the Carolingian Empire and the Bavarian expansion in the eastern Alps introduced feudalism and the building of castles to support the growing number of dukedoms and kingdoms. Castello del Buonconsiglio in Trento, Italy, still has intricate frescoes, excellent examples of Gothic art, in a tower room. In Switzerland, Château de Chillon is preserved as an example of medieval architecture.[102]
138
+
139
+ Much of the medieval period was a time of power struggles between competing dynasties such as the House of Savoy, the Visconti in northern Italy and the House of Habsburg in Austria and Slovenia.[103] In 1291, to protect themselves from incursions by the Habsburgs, four cantons in the middle of Switzerland drew up a charter that is considered to be a declaration of independence from neighbouring kingdoms. After a series of battles fought in the 13th, 14th and 15th centuries, more cantons joined the confederacy, and by the 16th century Switzerland was well established as a separate state.[104]
140
+
141
+ During the Napoleonic Wars in the late 18th century and early 19th century, Napoleon annexed territory formerly controlled by the Habsburgs and Savoys. In 1798 he established the Helvetic Republic in Switzerland; two years later he led an army across the St. Bernard pass and conquered almost all of the Alpine regions.[105]
142
+
143
+ After the fall of Napoléon, many alpine countries developed heavy protections to prevent any new invasion. Thus, Savoy built a series of fortifications in the Maurienne valley in order to protect the major alpine passes, such as the Col du Mont-Cenis, which had been crossed by Charlemagne and his father to defeat the Lombards. The latter became very popular after the construction of a paved road ordered by Napoléon Bonaparte.
144
+ The Barrière de l'Esseillon is a series of forts with heavy batteries, built on a cliff with a perfect view of the valley, a gorge on one side and steep mountains on the other.
145
+
146
+ In the 19th century, the monasteries built in the high Alps during the medieval period to shelter travellers and as places of pilgrimage became tourist destinations. The Benedictines had built monasteries in Lucerne, Switzerland, and Oberammergau; the Cistercians in the Tyrol and at Lake Constance; and the Augustinians had abbeys in the Savoy and one in the centre of Interlaken, Switzerland.[106] The Great St Bernard Hospice, built in the 9th or 10th century at the summit of the Great Saint Bernard Pass, was a shelter for travellers and a place of pilgrimage since its inception; by the 19th century it had become a tourist attraction, with notable visitors such as author Charles Dickens and mountaineer Edward Whymper.[107]
147
+
148
+ Radiocarbon-dated charcoal placed around 50,000 years ago was found in the Drachenloch (Dragon's Hole) cave above the village of Vättis in the canton of St. Gallen, proving that the high peaks were visited by prehistoric people. Seven bear skulls from the cave may have been buried by the same prehistoric people.[108] The peaks, however, were mostly ignored except for a few notable examples, and were long left to the exclusive attention of the people of the adjoining valleys.[109][110] The mountain peaks were seen as terrifying, the abode of dragons and demons, to the point that people blindfolded themselves to cross the Alpine passes.[111] The glaciers remained a mystery and many still believed the highest areas to be inhabited by dragons.[112]
149
+
150
+ Charles VII of France ordered his chamberlain to climb Mont Aiguille in 1356. The knight reached the summit of Rocciamelone where he left a bronze triptych of three crosses, a feat which he conducted with the use of ladders to traverse the ice.[113] In 1492 Antoine de Ville climbed Mont Aiguille, without reaching the summit, an experience he described as "horrifying and terrifying."[110] Leonardo da Vinci was fascinated by variations of light in the higher altitudes, and climbed a mountain—scholars are uncertain which one; some believe it may have been Monte Rosa. From his description of a "blue like that of a gentian" sky it is thought that he reached a significantly high altitude.[114] In the 18th century four Chamonix men almost made the summit of Mont Blanc but were overcome by altitude sickness and snowblindness.[115]
151
+
152
+ Conrad Gessner was the first naturalist to ascend the mountains in the 16th century, to study them, writing that in the mountains he found the "theatre of the Lord".[116] By the 19th century more naturalists began to arrive to explore, study and conquer the high peaks.[117] Two men who first explored the regions of ice and snow were Horace-Bénédict de Saussure (1740–1799) in the Pennine Alps,[118] and the Benedictine monk of Disentis Placidus a Spescha (1752–1833).[117] Born in Geneva, Saussure was enamoured with the mountains from an early age; he left a law career to become a naturalist and spent many years trekking through the Bernese Oberland, the Savoy, the Piedmont and Valais, studying the glaciers and the geology, as he became an early proponent of the theory of rock upheaval.[119] Saussure, in 1787, was a member of the third ascent of Mont Blanc—today the summits of all the peaks have been climbed.[43]
153
+
154
+ Albrecht von Haller's poem Die Alpen (1732) described the mountains as an area of mythical purity.[120] Jean-Jacques Rousseau was another writer who presented the Alps as a place of allure and beauty, in his novel Julie, or the New Heloise (1761). Later the first wave of Romantics such as Goethe and Turner came to admire the scenery;[citation needed] Wordsworth visited the area in 1790, writing of his experiences in The Prelude (1799). Schiller later wrote the play William Tell (1804), which tells the story of the legendary Swiss marksman William Tell as part of the greater Swiss struggle for independence from the Habsburg Empire in the early 14th century. At the end of the Napoleonic Wars, the Alpine countries began to see an influx of poets, artists, and musicians,[121] as visitors came to experience the sublime effects of monumental nature.[122]
155
+
156
+ In 1816 Byron, Percy Bysshe Shelley and his wife Mary Shelley visited Geneva and all three were inspired by the scenery in their writings.[121] During these visits Shelley wrote the poem "Mont Blanc", Byron wrote "The Prisoner of Chillon" and the dramatic poem Manfred, and Mary Shelley, who found the scenery overwhelming, conceived the idea for the novel Frankenstein in her villa on the shores of Lake Geneva in the midst of a thunderstorm. When Coleridge travelled to Chamonix, he declaimed, in defiance of Shelley, who had signed himself "Atheos" in the guestbook of the Hotel de Londres near Montenvers,[123] "Who would be, who could be an atheist in this valley of wonders".[124]
157
+
158
+ By the mid-19th century scientists began to arrive en masse to study the geology and ecology of the region.[125]
159
+
160
+ Austrian-born Adolf Hitler had a lifelong romantic fascination with the Alps and by the 1930s established a home at Berghof, in the Obersalzberg region outside of Berchtesgaden. His first visit to the area was in 1923 and he maintained a strong tie there until the end of his life. At the end of World War II the US Army occupied Obersalzberg, to prevent Hitler from retreating with the Wehrmacht into the mountains.[126]
161
+
162
+ By 1940 many of the Alpine countries were under the control of the Axis powers. Austria underwent a political coup that made it part of the Third Reich; France had been invaded and Italy was a fascist regime. Switzerland and Liechtenstein were the only countries to avoid Axis takeover.[127] The Swiss Confederation mobilized its troops – the country follows the doctrine of "armed neutrality" with all males required to have military training – a number that General Eisenhower estimated at about 850,000. The Swiss commanders wired the infrastructure leading into the country with explosives and threatened to destroy bridges, railway tunnels and roads across passes in the event of a Nazi invasion; had there been an invasion, the Swiss army would have retreated to the heart of the mountain peaks, where conditions were harsher and a military invasion would have involved difficult and protracted battles.[128]
163
+
164
+ German ski troops were trained for the war, and battles were waged in mountainous areas, such as the battle at Riva Ridge in Italy, where the American 10th Mountain Division encountered heavy resistance in February 1945.[129] At the end of the war, a substantial amount of Nazi plunder was found stored in Austria, where Hitler had hoped to retreat as the war drew to a close. The salt mines surrounding the Altaussee area, where American troops found 75 kilograms (165 lb) of gold coins stored in a single mine, were used to store looted art, jewels, and currency; vast quantities of looted art were found and returned to the owners.[130]
165
+
166
+ The largest city within the Alps is Grenoble, in France. Other large and important cities within the Alps with over 100,000 inhabitants are, in Tyrol, Bolzano (Italy), Trento (Italy) and Innsbruck (Austria). Larger cities outside the Alps are Milan, Verona, Turin (Italy), Munich (Germany), Vienna, Salzburg (Austria), Zurich, Geneva (Switzerland) and Lyon (France).
167
+
168
169
+
170
+ The population of the region is 14 million, spread across eight countries.[4] On the rim of the mountains, on the plateaus and the plains, the economy consists of manufacturing and service jobs, whereas in the higher altitudes and in the mountains farming is still essential to the economy.[131] Farming and forestry continue to be mainstays of Alpine culture, industries that provide for export to the cities and maintain the mountain ecology.[132]
171
+
172
+ Much of the Alpine culture is unchanged since the medieval period when skills that guaranteed survival in the mountain valleys and in the highest villages became mainstays, leading to strong traditions of carpentry, woodcarving, baking and pastry-making, and cheesemaking.[133]
173
+
174
+ Farming had been a traditional occupation for centuries, although it became less dominant in the 20th century with the advent of tourism. Grazing and pasture land are limited because of the steep and rocky topography of the Alps. In mid-June cows are moved to the highest pastures close to the snowline, where they are watched by herdsmen who stay in the high altitudes often living in stone huts or wooden barns during the summers.[133] Villagers celebrate the day the cows are herded up to the pastures and again when they return in mid-September. The Almabtrieb, Alpabzug, Alpabfahrt, Désalpes ("coming down from the alps") is celebrated by decorating the cows with garlands and enormous cowbells while the farmers dress in traditional costumes.[133]
175
+
176
+ Cheesemaking is an ancient tradition in most Alpine countries. A wheel of cheese from the Emmental in Switzerland can weigh up to 45 kg (100 lb), and the Beaufort in Savoy can weigh up to 70 kg (150 lb). Owners of the cows traditionally receive from the cheesemakers a portion of the cheese proportional to the amount of milk their cows supplied during the summer months in the high alps. Haymaking is an important farming activity in mountain villages, one that has become somewhat mechanized in recent years, although the slopes are so steep that scythes are usually necessary to cut the grass. Hay is normally brought in twice a year, often also on festival days.[133] Alpine festivals vary from country to country and often include the display of local costumes such as dirndl and trachten, the playing of Alpenhorns, wrestling matches, some pagan traditions such as Walpurgis Night and, in many areas, Carnival is celebrated before Lent.[134]
177
+
178
+ In the high villages people live in homes built according to medieval designs that withstand cold winters. The kitchen is separated from the living area (called the stube, the area of the home heated by a stove), and second-floor bedrooms benefit from rising heat. The typical Swiss chalet originated in the Bernese Oberland. Chalets often face south or downhill, and are built of solid wood, with a steeply gabled roof to allow accumulated snow to slide off easily. Stairs leading to upper levels are sometimes built on the outside, and balconies are sometimes enclosed.[133][135]
179
+
180
+ Food is passed from the kitchen to the stube, where the dining room table is placed. Some meals are communal, such as fondue, where a pot is set in the middle of the table for each person to dip into. Other meals are still served in a traditional manner on carved wooden plates. Furniture has traditionally been elaborately carved, and in many Alpine countries carpentry skills are passed from generation to generation.
181
+
182
+ Roofs are traditionally constructed from Alpine rocks such as pieces of schist, gneiss or slate.[136] Such chalets are typically found in the higher parts of the valleys, as in the Maurienne valley in Savoy, where the amount of snow during the cold months is important. The inclination of the roof cannot exceed 40%, allowing the snow to stay on top, thereby functioning as insulation from the cold.[137] In the lower areas where the forests are widespread, wooden tiles are traditionally used. Commonly made of Norway spruce, they are called "tavaillon".
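+
+ For readers used to roof pitch in degrees: if the 40% figure above is read as a rise-over-run slope (the usual convention, though the text does not say so explicitly), the equivalent angle follows from the arctangent, as this Python sketch shows:
+
+ import math
+ max_slope = 0.40  # 40% inclination from the paragraph above
+ print(round(math.degrees(math.atan(max_slope)), 1))  # -> 21.8 degrees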
183
+ The Alpine regions are multicultural and linguistically diverse. Dialects are common, and vary from valley to valley and region to region. In the Slavic Alps alone 19 dialects have been identified. Some of the French dialects spoken in the French, Swiss and Italian Alps of the Aosta Valley derive from Arpitan, while the southern part of the western range is related to Old Provençal; the German dialects derive from Germanic tribal languages.[138] Romansh, spoken by two percent of the population in southeast Switzerland, is an ancient Rhaeto-Romanic language derived from Latin, remnants of ancient Celtic languages and perhaps Etruscan.[138]
184
+
185
+ The Alps are one of the more popular tourist destinations in the world, with many resorts such as Oberstdorf in Bavaria, Saalbach in Austria, Davos in Switzerland, Chamonix in France, and Cortina d'Ampezzo in Italy recording more than a million annual visitors. With over 120 million visitors a year, tourism is integral to the Alpine economy, with much of it coming from winter sports, although summer visitors are also an important component.[139]
186
+
187
+ The tourism industry began in the early 19th century when foreigners visited the Alps, travelled to the bases of the mountains to enjoy the scenery, and stayed at the spa-resorts. Large hotels were built during the Belle Époque; cog-railways, built early in the 20th century, brought tourists to ever higher elevations, with the Jungfraubahn terminating at the Jungfraujoch, well above the eternal snow-line, after going through a tunnel in the Eiger. During this period winter sports were slowly introduced: in 1882 the first figure skating championship was held in St. Moritz, and downhill skiing became a popular sport with English visitors early in the 20th century,[139] as the first ski-lift was installed in 1908 above Grindelwald.[140]
188
+
189
+ In the first half of the 20th century the Olympic Winter Games were held three times in Alpine venues: the 1924 Winter Olympics in Chamonix, France; the 1928 Winter Olympics in St. Moritz, Switzerland; and the 1936 Winter Olympics in Garmisch-Partenkirchen, Germany. During World War II the winter games were cancelled, but after that time the Winter Games have been held in St. Moritz (1948), Cortina d'Ampezzo (1956), Innsbruck, Austria (1964 and 1976), Grenoble, France (1968), Albertville, France (1992), and Torino (2006).[141] In 1930 the Lauberhorn Rennen (Lauberhorn Race) was run for the first time on the Lauberhorn above Wengen;[142] the equally demanding Hahnenkamm was first run in the same year in Kitzbühel, Austria.[143] Both races continue to be held each January on successive weekends. The Lauberhorn is the more strenuous downhill race at 4.5 km (2.8 mi) and poses danger to racers who reach 130 km/h (81 mph) within seconds of leaving the start gate.[144]
190
+
191
+ During the post-World War I period, ski-lifts were built in Swiss and Austrian towns to accommodate winter visitors, but summer tourism continued to be important; by the mid-20th century the popularity of downhill skiing increased greatly as it became more accessible, and in the 1970s several new villages devoted almost exclusively to skiing, such as Les Menuires, were built in France. Until this point Austria and Switzerland had been the traditional and more popular destinations for winter sports, but by the end of the 20th century and into the early 21st century, France, Italy and the Tyrol began to see increases in winter visitors.[139] From 1980 to the present, ski-lifts have been modernized and snow-making machines installed at many resorts, leading to concerns regarding the loss of traditional Alpine culture and questions regarding sustainable development as the winter ski industry continues to develop quickly and the number of summer tourists declines.[139]
192
+
193
+ In the 17th century about 2500 people were killed by an avalanche in a village on the French-Italian border; in the 19th century 120 homes in a village near Zermatt were destroyed by an avalanche.[145]
194
+
195
+ The region is serviced by 4,200 km (2,600 mi) of roads used by six million vehicles per year.[4] Train travel is well established in the Alps, with, for instance, 120 km (75 mi) of track for every 1,000 km2 (390 sq mi) in a country such as Switzerland.[146] Most of Europe's highest railways are located there. In 2007 the new 34.57-kilometre-long (21.48 mi) Lötschberg Base Tunnel was opened, circumventing the Lötschberg Tunnel built 100 years earlier. The 57.1-kilometre-long (35.5 mi) Gotthard Base Tunnel, opened on June 1, 2016, bypasses the 19th-century Gotthard Tunnel and provides the first flat route through the Alps.[147]
196
+
197
+ Some high mountain villages are car-free, either because of inaccessibility or by choice. Wengen and Zermatt (in Switzerland) are accessible only by cable car or cog-rail trains. Avoriaz (in France) is car-free, and other Alpine villages are considering becoming car-free zones or limiting the number of cars for reasons of sustainability of the fragile Alpine terrain.[148]
198
+
199
+ The lower regions and larger towns of the Alps are well-served by motorways and main roads, but higher mountain passes and byroads, which are amongst the highest in Europe, can be treacherous even in summer due to steep slopes. Many passes are closed in winter. A number of airports around the Alps (and some within), as well as long-distance rail links from all neighbouring countries, afford large numbers of travellers easy access.[4]
en/1740.html.txt ADDED
@@ -0,0 +1,70 @@
1
+ In Greco-Roman mythology, Aeneas (/ɪˈniːəs/;[1] Greek: Αἰνείας, Aineías, possibly derived from Greek αἰνή meaning "praised") was a Trojan hero, the son of the prince Anchises and the goddess Aphrodite (Venus). His father was a first cousin of King Priam of Troy (both being grandsons of Ilus, founder of Troy), making Aeneas a second cousin to Priam's children (such as Hector and Paris). He is a character in Greek mythology and is mentioned in Homer's Iliad. Aeneas receives full treatment in Roman mythology, most extensively in Virgil's Aeneid, where he is cast as an ancestor of Romulus and Remus. He became the first true hero of Rome. Snorri Sturluson identifies him with the Norse god Vidarr of the Æsir.[2]
2
+
3
+ Aeneas is the Romanization of the Greek Αἰνείας (Aineías). Aineías is first introduced in the Homeric Hymn to Aphrodite, when Aphrodite gives him his name from the adjective αἰνόν (ainon, "terrible"), for the "terrible grief" (αἰνὸν ἄχος) he has caused her.[a][3] It is a popular etymology for the name, apparently exploited by Homer in the Iliad.[4] Later, in the Medieval period, there were writers who held that, because the Aeneid was written by a philosopher, it is meant to be read philosophically.[5] As such, in the "natural order", the meaning of Aeneas' name combines Greek ennos ("dweller") with demas ("body"), which becomes ennaios or "in-dweller", i.e. a god inhabiting a mortal body.[6] However, there is no certainty regarding the origin of his name.
7
+
8
+ In imitation of the Iliad, Virgil borrows epithets of Homer, including: Anchisiades, magnanimum, magnus, heros, and bonus. Though he borrows many, Virgil gives Aeneas two epithets of his own in the Aeneid: pater and pius. The epithets applied by Virgil are an example of an attitude different from that of Homer, for whilst Odysseus is poikilios ("wily"), Aeneas is described as pius ("pious"), which conveys a strong moral tone. The purpose of these epithets seems to enforce the notion of Aeneas' divine hand as father and founder of the Roman race, and their use seems circumstantial: when Aeneas is praying he refers to himself as pius, and is referred to as such by the author only when the character is acting on behalf of the gods to fulfill his divine mission. Likewise, Aeneas is called pater when acting in the interest of his men.[7]
9
+
10
+ The story of the birth of Aeneas is told in the "Hymn to Aphrodite", one of the major Homeric Hymns. Aphrodite has caused Zeus to fall in love with mortal women. In retaliation, Zeus puts desire in her heart for Anchises, who is tending his cattle among the hills near Mount Ida. When Aphrodite sees him she is smitten. She adorns herself as if for a wedding among the gods and appears before him. He is overcome by her beauty, believing that she is a goddess, but Aphrodite identifies herself as a Phrygian princess. After they make love, Aphrodite reveals her true identity to him and Anchises fears what might happen to him as a result of their liaison. Aphrodite assures him that he will be protected, and tells him that she will bear him a son to be called Aeneas. However, she warns him that he must never tell anyone that he has lain with a goddess. When Aeneas is born, Aphrodite takes him to the nymphs of Mount Ida. She directs them to raise the child to age five, then take him to Anchises.[3] According to other sources, Anchises later brags about his encounter with Aphrodite, and as a result is struck in the foot with a thunderbolt by Zeus. Thereafter he is lame in that foot, so that Aeneas has to carry him from the flames of Troy.[8]
11
+
12
+ Aeneas is a minor character in the Iliad, where he is twice saved from death by the gods as if for an as-yet-unknown destiny, but is an honorable warrior in his own right. Having held back from the fighting, aggrieved with Priam because in spite of his brave deeds he was not given his due share of honour, he leads an attack against Idomeneus to recover the body of his brother-in-law Alcathous at the urging of Deiphobus.[9] He is the leader of the Trojans' Dardanian allies, as well as a second cousin and principal lieutenant of Hector, son of the Trojan king Priam. Aeneas's mother Aphrodite frequently comes to his aid on the battlefield, and he is a favorite of Apollo. Aphrodite and Apollo rescue Aeneas from combat with Diomedes of Argos, who nearly kills him, and carry him away to Pergamos for healing. Even Poseidon, who normally favors the Greeks, comes to Aeneas's rescue after he falls under the assault of Achilles, noting that Aeneas, though from a junior branch of the royal family, is destined to become king of the Trojan people. Bruce Louden presents Aeneas as a "type" in the tradition of Utnapishtim, Baucis and Philemon, and Lot; the just man spared the general destruction.[10] Apollodorus explains that "...the Greeks let him alone on account of his piety."[11]
13
+
14
+ The Roman mythographer Gaius Julius Hyginus (c. 64 BCE – CE 17) in his Fabulae[12] credits Aeneas with killing 28 enemies in the Trojan War. Aeneas also appears in the Trojan narratives attributed to Dares Phrygius and Dictys of Crete.
15
+
16
+ The history of Aeneas was continued by Roman authors. One influential source was the account of Rome's founding in Cato the Elder's Origines.[13] The Aeneas legend was well known in Virgil's day and appeared in various historical works, including the Roman Antiquities of the Greek historian Dionysius of Halicarnassus (relying on Marcus Terentius Varro), Ab Urbe Condita by Livy (probably dependent on Quintus Fabius Pictor, fl. 200 BCE), and Gnaeus Pompeius Trogus (now extant only in an epitome by Justin).
17
+
18
+ The Aeneid explains that Aeneas is one of the few Trojans who were not killed or enslaved when Troy fell. Aeneas, after being commanded by the gods to flee, gathered a group, collectively known as the Aeneads, who then traveled to Italy and became progenitors of Romans. The Aeneads included Aeneas's trumpeter Misenus, his father Anchises, his friends Achates, Sergestus, and Acmon, the healer Iapyx, the helmsman Palinurus, and his son Ascanius (also known as Iulus, Julus, or Ascanius Julius). He carried with him the Lares and Penates, the statues of the household gods of Troy, and transplanted them to Italy.
19
+
20
+ Several attempts to find a new home failed; one such stop was on Sicily, where in Drepanum, on the island's western coast, his father, Anchises, died peacefully.
21
+
22
+ After a brief but fierce storm sent up against the group at Juno's request, Aeneas and his fleet made landfall at Carthage after six years of wanderings. Aeneas had a year-long affair with the Carthaginian queen Dido (also known as Elissa), who proposed that the Trojans settle in her land and that she and Aeneas reign jointly over their peoples. A marriage of sorts was arranged between Dido and Aeneas at the instigation of Juno, who was told that her favorite city would eventually be defeated by the Trojans' descendants. Aeneas's mother Venus (the Roman adaptation of Aphrodite) realized that her son and his company needed a temporary respite to reinforce themselves for the journey to come. However, the messenger god Mercury was sent by Jupiter and Venus to remind Aeneas of his journey and his purpose, compelling him to leave secretly. When Dido learned of this, she uttered a curse that would forever pit Carthage against Rome, an enmity that would culminate in the Punic Wars. She then committed suicide by stabbing herself with the same sword she gave Aeneas when they first met.
23
+
24
+ After the sojourn in Carthage, the Trojans returned to Sicily where Aeneas organized funeral games to honor his father, who had died a year before. The company traveled on and landed on the western coast of Italy. Aeneas descended into the underworld where he met Dido (who turned away from him to return to her husband) and his father, who showed him the future of his descendants and thus the history of Rome.
25
+
26
+ Latinus, king of the Latins, welcomed Aeneas's army of exiled Trojans and let them reorganize their lives in Latium. His daughter Lavinia had been promised to Turnus, king of the Rutuli, but Latinus received a prophecy that Lavinia would be betrothed to one from another land – namely, Aeneas. Latinus heeded the prophecy, and Turnus consequently declared war on Aeneas at the urging of Juno, who was aligned with King Mezentius of the Etruscans and Queen Amata of the Latins. Aeneas's forces prevailed. Turnus was killed, and Virgil's account ends abruptly.
27
+
28
+ The rest of Aeneas's biography is gleaned from other ancient sources, including Livy and Ovid's Metamorphoses. According to Livy, Aeneas was victorious but Latinus died in the war. Aeneas founded the city of Lavinium, named after his wife. He later welcomed Dido's sister, Anna Perenna, who then committed suicide after learning of Lavinia's jealousy. After Aeneas's death, Venus asked Jupiter to make her son immortal. Jupiter agreed. The river god Numicus cleansed Aeneas of all his mortal parts and Venus anointed him with ambrosia and nectar, making him a god. Aeneas was recognized as the god Jupiter Indiges.[14]
29
+
30
+ Snorri Sturluson, in the Prologue of the Prose Edda, tells of the world as parted into three continents: Africa, Asia, and the third part called Europe or Enea.[2][15] Snorri also tells of a Trojan named Munon or Menon, who marries Troan, the daughter of the High King (Yfirkonungr) Priam, travels to distant lands, marries the Sibyl and has a son, Tror, who, as Snorri tells, is identical to Thor. This tale resembles some episodes of the Aeneid.[16]
31
+ Continuations of Trojan matter in the Middle Ages had their effects on the character of Aeneas as well. The 12th-century French Roman d'Enéas addresses Aeneas's sexuality. Though Virgil appears to deflect all homoeroticism onto Nisus and Euryalus, making his Aeneas a purely heterosexual character, in the Middle Ages there was at least a suspicion of homoeroticism in Aeneas. The Roman d'Enéas addresses that charge, when Queen Amata opposes Aeneas's marrying Lavinia, claiming that Aeneas loved boys.[17]
32
+
33
+ Medieval interpretations of Aeneas were greatly influenced by both Virgil and other Latin sources. Specifically, the accounts by Dares and Dictys, which were reworked by 13th-century Italian writer Guido delle Colonne (in Historia destructionis Troiae), colored many later readings. From Guido, for instance, the Pearl Poet and other English writers get the suggestion[18] that Aeneas's safe departure from Troy with his possessions and family was a reward for treason, for which he was chastised by Hecuba.[19] In Sir Gawain and the Green Knight (late 14th century) the Pearl Poet, like many other English writers, employed Aeneas to establish a genealogy for the foundation of Britain,[18] and explains that Aeneas was "impeached for his perfidy, proven most true" (line 4).[20]
34
+
35
+ Aeneas had an extensive family tree. His wet-nurse was Caieta,[21] and he is the father of Ascanius with Creusa, and of Silvius with Lavinia. Ascanius, also known as Iulus (or Julius),[22] founded Alba Longa and was the first in a long series of kings. According to the mythology outlined by Virgil in the Aeneid, Romulus and Remus were both descendants of Aeneas through their mother Rhea Silvia, making Aeneas the progenitor of the Roman people.[23] Some early sources call him their father or grandfather,[24] but considering the commonly accepted dates of the fall of Troy (1184 BCE) and the founding of Rome (753 BCE), this seems unlikely. The Julian family of Rome, most notably Julius Cæsar and Augustus, traced their lineage to Ascanius and Aeneas,[25] thus to the goddess Venus. Through the Julians, the Palemonids make this claim. The legendary kings of Britain – including King Arthur – trace their family through a grandson of Aeneas, Brutus.[26]
36
+
37
+ Aeneas's consistent epithet in Virgil and other Latin authors is pius, a term that connotes reverence toward the gods and familial dutifulness.
38
+
39
+ In the Aeneid, Aeneas is described as strong and handsome, but neither his hair colour nor his complexion is described.[27] In late antiquity, however, sources add further physical descriptions. The De excidio Troiae of Dares Phrygius describes Aeneas as "auburn-haired, stocky, eloquent, courteous, prudent, pious, and charming".[28] There is also a brief physical description found in the 6th century AD Chronographia of John Malalas: "Aeneas: short, fat, with a good chest, powerful, with a ruddy complexion, a broad face, a good nose, fair skin, bald on the forehead, a good beard, grey eyes."[29]
40
+
41
+ Aeneas appears as a character in William Shakespeare's play Troilus and Cressida, set during the Trojan War.
42
+
43
+ Aeneas and Dido are the main characters of a 17th-century broadside ballad called "The Wandering Prince of Troy". The ballad ultimately alters Aeneas's fate from traveling on years after Dido's death to joining her as a spirit soon after her suicide.[30]
44
+
45
+ In modern literature, Aeneas is the speaker in two poems by Allen Tate, "Aeneas at Washington" and "Aeneas at New York". He is a main character in Ursula K. Le Guin's Lavinia, a re-telling of the last six books of the Aeneid told from the point of view of Lavinia, daughter of King Latinus of Latium.
46
+
47
+ Aeneas appears in David Gemmell's Troy series as a main heroic character who goes by the name Helikaon.
48
+
49
+ In Rick Riordan's book series, The Heroes of Olympus, Aeneas is regarded as the first Roman demigod, son of Venus rather than Aphrodite.
50
+
51
+ Will Adams' novel City of the Lost assumes that much of the information provided by Virgil is mistaken, and that the true Aeneas and Dido did not meet and love in Carthage but in a Phoenician colony at Cyprus, on the site of the modern Famagusta. Their tale is interspersed with that of modern activists who, while striving to stop an ambitious Turkish Army general trying to stage a coup, accidentally discover the hidden ruins of Dido's palace.
53
+
54
+ Aeneas is a title character in Henry Purcell's opera Dido and Aeneas (c. 1688), and Jakob Greber's Enea in Cartagine (Aeneas in Carthage) (1711), and one of the principal roles in Hector Berlioz' opera Les Troyens (c. 1857), as well as in Metastasio's immensely popular[31] opera libretto Didone abbandonata. Canadian composer James Rolfe composed his opera Aeneas and Dido (2007; to a libretto by André Alexis) as a companion piece to Purcell's opera.
55
+
56
+ Despite its many dramatic elements, Aeneas's story has generated little interest from the film industry. Ronald Lewis portrayed Aeneas in Helen of Troy, directed by Robert Wise, as a supporting character: a member of the Trojan royal family and a close and loyal friend to Paris, who escapes at the end of the film. Portrayed by Steve Reeves, he was the main character in the 1961 sword and sandal film Guerra di Troia (The Trojan War). Reeves reprised the role the following year in The Avenger, about Aeneas's arrival in Latium and his conflicts with local tribes as he tries to settle his fellow Trojan refugees there.
57
+
58
+ Giulio Brogi portrayed Aeneas in the 1971 Italian TV miniseries Eneide, which gives the whole story of the Aeneid, from Aeneas's escape from Troy, to his meeting with Dido, his arrival in Italy, and his duel with Turnus.[32]
59
+
60
+ The most recent cinematic portrayal of Aeneas was in the film Troy, in which he appears as a youth charged by Paris to protect the Trojan refugees, and to continue the ideals of the city and its people. Paris gives Aeneas Priam's sword, in order to give legitimacy and continuity to the royal line of Troy – and lay the foundations of Roman culture. In this film, he is not a member of the royal family and does not appear to fight in the war.
61
+
62
+ In the role-playing game Vampire: The Requiem by White Wolf Game Studios, Aeneas figures as one of the mythical founders of the Ventrue Clan.
63
+
64
+ In the action game Warriors: Legends of Troy, Aeneas is a playable character. The game ends with him and the Aeneans fleeing Troy's destruction; spurred on by the words of a prophetess thought crazed, he sails to a new country (Italy), where he will found an empire greater than Greece and Troy combined, one that shall rule the world for 1000 years, never to be outdone in the tale of men (the Roman Empire).
65
+
66
+ In the 2018 TV miniseries Troy: Fall of a City, Aeneas is portrayed by Alfred Enoch.[33]
67
+
68
+ Scenes depicting Aeneas, especially from the Aeneid, have been the focus of study for centuries. They have been the frequent subject of art and literature since their debut in the 1st century.
69
+
70
+ The artist Giovanni Battista Tiepolo was commissioned by Gaetano Valmarana in 1757 to fresco several rooms in the Villa Valmarana, the family villa situated outside Vicenza. Tiepolo decorated the palazzina with scenes from epics such as Homer's Iliad and Virgil's Aeneid.[34]
en/1742.html.txt ADDED
@@ -0,0 +1,428 @@
1
+
2
+
3
+
4
+
5
+ In physics, energy is the quantitative property that must be transferred to an object in order to perform work on, or to heat, the object.[note 1] Energy is a conserved quantity; the law of conservation of energy states that energy can be converted in form, but not created or destroyed. The SI unit of energy is the joule, which is the energy transferred to an object by the work of moving it a distance of 1 metre against a force of 1 newton.
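+ As a worked form of this definition: moving an object one metre against a one-newton force transfers
+
+ $1\ \mathrm{J} = 1\ \mathrm{N} \times 1\ \mathrm{m} = 1\ \mathrm{kg\,m^{2}\,s^{-2}}.$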
6
+
7
+ Common forms of energy include the kinetic energy of a moving object, the potential energy stored by an object's position in a force field (gravitational, electric or magnetic), the elastic energy stored by stretching solid objects, the chemical energy released when a fuel burns, the radiant energy carried by light, and the thermal energy due to an object's temperature.
8
+
9
+ Mass and energy are closely related. Due to mass–energy equivalence, any object that has mass when stationary (called rest mass) also has an equivalent amount of energy whose form is called rest energy, and any additional energy (of any form) acquired by the object above that rest energy will increase the object's total mass just as it increases its total energy. For example, after heating an object, its increase in energy could be measured as a small increase in mass, with a sensitive enough scale.
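+ As a rough worked example of this equivalence (the numbers are illustrative, not from the source): heating one kilogram of water by 10 °C adds about $\Delta E = 4.2\times10^{4}\ \mathrm{J}$, which corresponds to a mass increase of
+
+ $\Delta m = \frac{\Delta E}{c^{2}} = \frac{4.2\times10^{4}\ \mathrm{J}}{(3.0\times10^{8}\ \mathrm{m/s})^{2}} \approx 4.7\times10^{-13}\ \mathrm{kg},$
+
+ far below what any practical balance can resolve, which is why the effect goes unnoticed in everyday heating.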
10
+
11
+ Living organisms require energy to stay alive, such as the energy humans get from food. Human civilization requires energy to function, which it gets from energy resources such as fossil fuels, nuclear fuel, or renewable energy. The processes of Earth's climate and ecosystem are driven by the radiant energy Earth receives from the sun and the geothermal energy contained within the earth.
12
+
13
+ The total energy of a system can be subdivided and classified into potential energy, kinetic energy, or combinations of the two in various ways. Kinetic energy is determined by the movement of an object – or the composite motion of the components of an object – and potential energy reflects the potential of an object to have motion, and generally is a function of the position of an object within a field or may be stored in the field itself.
14
+
15
+ While these two categories are sufficient to describe all forms of energy, it is often convenient to refer to particular combinations of potential and kinetic energy as a form of its own. For example, macroscopic mechanical energy is the sum of the translational and rotational kinetic and potential energy in a system (neglecting the kinetic energy due to temperature), and nuclear energy combines the potentials from the nuclear force and the weak force, among others.[citation needed]
16
+
17
+
18
+
19
+ The word energy derives from the Ancient Greek: ἐνέργεια, romanized: energeia, lit. 'activity, operation',[1] which possibly appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure.
20
+
21
+ In the late 17th century, Gottfried Leibniz proposed the idea of the Latin: vis viva, or living force, which he defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the random motion of the constituent parts of matter, although it would be more than a century until this was generally accepted. The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two.
22
+
23
+ In 1807, Thomas Young was possibly the first to use the term "energy" instead of vis viva, in its modern sense.[2] Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy". The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat.
24
+
25
+ These developments led to the theory of conservation of energy, formalized largely by William Thomson (Lord Kelvin) as the field of thermodynamics. Thermodynamics aided the rapid development of explanations of chemical processes by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time.[3] Thus, since 1918, theorists have understood that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time.
26
+
27
+ In 1843, Joule independently discovered the mechanical equivalent of heat in a series of experiments. The most famous of them used the "Joule apparatus": a descending weight, attached to a string, caused rotation of a paddle immersed in water, practically insulated from heat transfer. It showed that the gravitational potential energy lost by the weight in descending was equal to the internal energy gained by the water through friction with the paddle.
28
+
29
+ In the International System of Units (SI), the unit of energy is the joule, named after James Prescott Joule. It is a derived unit. It is equal to the energy expended (or work done) in applying a force of one newton through a distance of one metre. However, energy is also expressed in many other units not part of the SI, such as ergs, calories, British thermal units, kilowatt-hours and kilocalories, which require a conversion factor when expressed in SI units.
30
+
31
+ The SI unit of energy rate (energy per unit time) is the watt, which is a joule per second. Thus, one joule is one watt-second, and 3600 joules equal one watt-hour. The CGS energy unit is the erg and the imperial and US customary unit is the foot pound. Other energy units such as the electronvolt, food calorie or thermodynamic kcal (based on the temperature change of water in a heating process), and BTU are used in specific areas of science and commerce.
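+ A minimal Python sketch of these conversion factors (the factor values are the standard definitions; the helper name is illustrative):
+
+ # Conversion factors from common energy units to joules.
+ TO_JOULES = {
+     "J": 1.0,
+     "erg": 1e-7,              # CGS unit
+     "cal": 4.184,             # thermochemical calorie
+     "kcal": 4184.0,           # food Calorie
+     "BTU": 1055.06,           # British thermal unit
+     "kWh": 3.6e6,             # kilowatt-hour
+     "eV": 1.602176634e-19,    # electronvolt
+     "ft_lb": 1.3558179483,    # foot-pound
+ }
+
+ def convert(value, from_unit, to_unit):
+     """Convert an energy value between units by passing through joules."""
+     return value * TO_JOULES[from_unit] / TO_JOULES[to_unit]
+
+ print(convert(1, "kWh", "J"))   # 3600000.0, i.e. one watt-hour is 3600 J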
32
+
33
+ In classical mechanics, energy is a conceptually and mathematically useful property, as it is a conserved quantity. Several formulations of mechanics have been developed using energy as a core concept.
34
+
35
+ Work, a function of energy, is force times distance.
36
+
37
+ This says that the work ($W$) is equal to the line integral of the force F along a path C:
+
+ $W = \int_{C} \mathbf{F} \cdot \mathrm{d}\mathbf{s}$
+
+ For details see the mechanical work article. Work and thus energy is frame dependent. For example, consider a ball being hit by a bat. In the center-of-mass reference frame, the bat does no work on the ball. But, in the reference frame of the person swinging the bat, considerable work is done on the ball.
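+ A small numeric sketch of this line integral in Python (assuming NumPy; the force and path are made up for illustration):
+
+ import numpy as np
+
+ # Approximate W = ∫_C F · ds as a sum of dot products of the force with
+ # small displacement segments along the path.
+ def work(force, path):
+     segments = np.diff(path, axis=0)           # ds for each step
+     midpoints = (path[:-1] + path[1:]) / 2     # evaluate F mid-segment
+     return sum(np.dot(force(p), ds) for p, ds in zip(midpoints, segments))
+
+ F = lambda p: np.array([2.0, 0.0])             # constant 2 N force along x
+ C = np.linspace([0.0, 0.0], [3.0, 0.0], 100)   # straight 3 m path along x
+ print(work(F, C))                              # ≈ 6.0 J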
47
+
48
+ The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems. These classical equations have remarkably direct analogs in nonrelativistic quantum mechanics.[4]
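+ To make this concrete, here is a minimal sketch (a one-dimensional harmonic oscillator, not an example from the text) of Hamilton's equations integrated with a symplectic Euler step:
+
+ # H(q, p) = p**2/(2m) + k*q**2/2, so Hamilton's equations read
+ #   dq/dt = ∂H/∂p = p/m   and   dp/dt = -∂H/∂q = -k*q
+ m, k, dt = 1.0, 1.0, 0.001
+ q, p = 1.0, 0.0
+ for _ in range(10_000):
+     p -= k * q * dt         # dp/dt = -∂H/∂q
+     q += (p / m) * dt       # dq/dt = ∂H/∂p
+ print(p**2 / (2 * m) + k * q**2 / 2)   # stays near the initial energy 0.5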
49
+
50
+ Another energy-related concept is called the Lagrangian, after Joseph-Louis Lagrange. This formalism is as fundamental as the Hamiltonian, and both can be used to derive the equations of motion or be derived from them. It was invented in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction).
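+ For the same oscillator, the Lagrangian and the equation of motion it yields through the Euler–Lagrange equation are
+
+ $L = T - V = \tfrac{1}{2} m \dot{q}^{2} - \tfrac{1}{2} k q^{2}, \qquad \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = m\ddot{q} + kq = 0.$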
51
+
52
+ Noether's theorem (1918) states that any differentiable symmetry of the action of a physical system has a corresponding conservation law. Noether's theorem has become a fundamental tool of modern theoretical physics and the calculus of variations. A generalisation of the seminal formulations on constants of motion in Lagrangian and Hamiltonian mechanics (1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian; for example, dissipative systems with continuous symmetries need not have a corresponding conservation law.
53
+
54
+ In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structure, it is invariably accompanied by an increase or decrease of energy of the substances involved. Some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light; thus the products of a reaction may have more or less energy than the reactants. A reaction is said to be exothermic or exergonic if the final state is lower on the energy scale than the initial state; in the case of endothermic reactions the situation is the reverse. Chemical reactions are almost invariably not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at given temperature T) is related to the activation energy E by the Boltzmann population factor e−E/kT – that is, the probability that a molecule has energy greater than or equal to E at the given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction can be provided in the form of thermal energy.
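+ A short Python sketch of this population factor (Boltzmann's constant is the standard value; the activation energy is an arbitrary illustration of roughly 50 kJ/mol):
+
+ import math
+
+ k_B = 1.380649e-23    # Boltzmann constant, J/K
+ E_a = 8.0e-20         # illustrative activation energy per molecule, J
+
+ def boltzmann_factor(E, T):
+     """Probability factor e**(-E/kT) of having energy >= E at temperature T."""
+     return math.exp(-E / (k_B * T))
+
+ # Raising T from 300 K to 310 K multiplies the rate by roughly this ratio:
+ print(boltzmann_factor(E_a, 310) / boltzmann_factor(E_a, 300))   # ≈ 1.9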
55
+
56
+ In biology, energy is an attribute of all biological systems from the biosphere to the smallest living organism. Within an organism it is responsible for growth and development of a biological cell or an organelle of a biological organism. Energy used in respiration is mostly stored in molecular oxygen [5] and can be unlocked by reactions with molecules of substances such as carbohydrates (including sugars), lipids, and proteins stored by cells. In human terms, the human equivalent (H-e) (Human energy conversion) indicates, for a given amount of energy expenditure, the relative quantity of energy needed for human metabolism, assuming an average human energy expenditure of 12,500 kJ per day and a basal metabolic rate of 80 watts. For example, if our bodies run (on average) at 80 watts, then a light bulb running at 100 watts is running at 1.25 human equivalents (100 ÷ 80) i.e. 1.25 H-e. For a difficult task of only a few seconds' duration, a person can put out thousands of watts, many times the 746 watts in one official horsepower. For tasks lasting a few minutes, a fit human can generate perhaps 1,000 watts. For an activity that must be sustained for an hour, output drops to around 300; for an activity kept up all day, 150 watts is about the maximum.[6] The human equivalent assists understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a "feel" for the use of a given amount of energy.[7]
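+ The human-equivalent conversion described here reduces to a one-line division (the 80 W basal rate is taken from the text; the function name is illustrative):
+
+ def human_equivalents(power_watts, basal_rate=80.0):
+     """Express a power as a multiple of an average human's 80 W output."""
+     return power_watts / basal_rate
+
+ print(human_equivalents(100))   # 1.25 H-e, the light-bulb example above
+ print(human_equivalents(746))   # ≈ 9.3 H-e for one official horsepower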
57
+
58
+ Sunlight's radiant energy is also captured by plants as chemical potential energy in photosynthesis, when carbon dioxide and water (two low-energy compounds) are converted into carbohydrates, lipids, and proteins and high-energy compounds like oxygen [5] and ATP. Carbohydrates, lipids, and proteins can release the energy of oxygen, which is utilized by living organisms as an electron acceptor. Release of the energy stored during photosynthesis as heat or light may be triggered suddenly by a spark, in a forest fire, or it may be made available more slowly for animal or human metabolism, when organic molecules are ingested, and catabolism is triggered by enzyme action.
59
+
60
+ Any living organism relies on an external source of energy – radiant energy from the Sun in the case of green plants, chemical energy in some form in the case of animals – to be able to grow and reproduce. The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as a combination of oxygen and food molecules, the latter mostly carbohydrates and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples. The food molecules are oxidised to carbon dioxide and water in the mitochondria, and some of the energy is used to convert ADP into ATP.
63
+
64
+ The rest of the chemical energy in O2[8] and the carbohydrate or fat is converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains is used for other metabolism when ATP reacts with OH groups and eventually splits into ADP and phosphate (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work:[note 2]
65
+
66
+ It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical or radiant energy), and it is true that most real machines manage higher efficiencies. In growing organisms the energy that is converted to heat serves a vital purpose, as it allows the organism's tissues to be highly ordered with regard to the molecules they are built from. The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe ("the surroundings").[note 3] Simpler organisms can achieve higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren. The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology: to take just the first step in the food chain, of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants,[9] i.e. reconverted into carbon dioxide and heat.
67
+
68
+ In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior,[10] while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations brought about by solar energy on the atmosphere of the planet Earth.
69
+
70
+ Sunlight may be stored as gravitational potential energy after it strikes the Earth, as (for example) water evaporates from oceans and is deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbines or generators to produce electricity). Sunlight also drives many weather phenomena, save those generated by volcanic events. An example of a solar-mediated weather event is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, give up some of their thermal energy suddenly to power a few days of violent air movement.
71
+
72
+ In a slower process, radioactive decay of atoms in the core of the Earth releases heat. This thermal energy drives plate tectonics and may lift mountains, via orogenesis. This slow lifting represents a kind of gravitational potential energy storage of the thermal energy, which may later be released as active kinetic energy in landslides, after a triggering event. Earthquakes also release stored elastic potential energy in rocks, a store that has been produced ultimately from the same radioactive heat sources. Thus, according to present understanding, familiar events such as landslides and earthquakes release energy that has been stored as potential energy in the Earth's gravitational field or elastic strain (mechanical potential energy) in rocks. Prior to this, they represent release of energy that has been stored in heavy atoms since the collapse of long-destroyed supernova stars created these atoms.
73
+
74
+ In cosmology and astronomy the phenomena of stars, novae, supernovae, quasars and gamma-ray bursts are the universe's highest-output energy transformations of matter. All stellar phenomena (including solar activity) are driven by various kinds of energy transformations. Energy in such transformations is either from gravitational collapse of matter (usually molecular hydrogen) into various classes of astronomical objects (stars, black holes, etc.), or from nuclear fusion (of lighter elements, primarily hydrogen). The nuclear fusion of hydrogen in the Sun also releases another store of potential energy which was created at the time of the Big Bang. At that time, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store of potential energy that can be released by fusion. Such a fusion process is triggered by heat and pressure generated from gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into sunlight.
75
+
76
+
77
+
78
+ In quantum mechanics, energy is defined in terms of the energy operator as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of a slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level) which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by Planck's relation:
80
+
81
+ $E = h\nu$
+
+ (where $h$ is Planck's constant and $\nu$ the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons.
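+ A quick numeric check of Planck's relation in Python (the constants are standard values; the wavelength is an arbitrary example):
+
+ h = 6.62607015e-34    # Planck's constant, J·s
+ c = 2.99792458e8      # speed of light, m/s
+
+ def photon_energy(wavelength_m):
+     """E = h·ν with ν = c/λ."""
+     return h * c / wavelength_m
+
+ E = photon_energy(500e-9)            # green light, λ = 500 nm
+ print(E)                             # ≈ 3.97e-19 J
+ print(E / 1.602176634e-19)           # ≈ 2.48 eV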
110
+
111
+ When calculating kinetic energy (work to accelerate a massive body from zero speed to some finite speed) relativistically – using Lorentz transformations instead of Newtonian mechanics – Einstein discovered an unexpected by-product of these calculations to be an energy term which does not vanish at zero speed. He called it rest energy: energy which every massive body must possess even when being at rest. The amount of energy is directly proportional to the mass of the body:
+
+ E₀ = mc²
+
+ where E₀ is the rest energy of the body, m is its rest mass, and c is the speed of light in a vacuum.
+
115
+ For example, consider electron–positron annihilation, in which the rest energy of these two individual particles (equivalent to their rest mass) is converted to the radiant energy of the photons produced in the process. In this system the matter and antimatter (electrons and positrons) are destroyed and changed to non-matter (the photons). However, the total mass and total energy do not change during this interaction. The photons each have no rest mass but nonetheless have radiant energy which exhibits the same inertia as did the two original particles. This is a reversible process – the inverse process is called pair creation – in which the rest mass of particles is created from the radiant energy of two (or more) annihilating photons.
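+
+ A rough numeric sketch of the annihilation example above, using the standard electron rest mass (the variable names are ad hoc):
+
+ # Rest energy of one electron via E0 = m*c^2
+ m_e = 9.109e-31  # electron rest mass, kg
+ c = 2.998e8      # speed of light, m/s
+ E0 = m_e * c**2  # ~8.19e-14 J
+ print(f"E0 = {E0:.3e} J = {E0 / 1.602e-19 / 1e3:.0f} keV")  # ~511 keV
+ # In electron-positron annihilation at rest, each of the two
+ # photons produced carries this much radiant energy (~511 keV).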
116
+
117
+ In general relativity, the stress–energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation.[11]
118
+
119
+ Energy and mass are manifestations of one and the same underlying physical property of a system. This property is responsible for the inertia and strength of gravitational interaction of the system ("mass manifestations"), and is also responsible for the potential ability of the system to perform work or heating ("energy manifestations"), subject to the limitations of other physical laws.
120
+
121
+ In classical physics, energy is a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy–momentum 4-vector).[11] In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of space-time (boosts).
122
+
123
+
124
+
125
+ Energy may be transformed between different forms at various efficiencies. Items that transform between these forms are called transducers. Examples of transducers include a battery (from chemical energy to electric energy), a dam (gravitational potential energy to kinetic energy of moving water and the blades of a turbine, and ultimately to electric energy through an electric generator), and a heat engine (from heat to work).
126
+
127
+ Examples of energy transformation include generating electric energy from heat energy via a steam turbine, or lifting an object against gravity using electrical energy driving a crane motor. Lifting against gravity performs mechanical work on the object and stores gravitational potential energy in the object. If the object falls to the ground, gravity does mechanical work on the object which transforms the potential energy in the gravitational field to the kinetic energy released as heat on impact with the ground. Our Sun transforms nuclear potential energy to other forms of energy; its total mass does not decrease because of that in itself (since it still contains the same total energy, even if in different forms), but its mass does decrease when the energy escapes out to its surroundings, largely as radiant energy.
128
+
129
+ There are strict limits to how efficiently heat can be converted into work in a cyclic process, e.g. in a heat engine, as described by Carnot's theorem and the second law of thermodynamics. However, some energy transformations can be quite efficient. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often determined by entropy (equal energy spread among all available degrees of freedom) considerations. In practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces.
130
+
131
+ Energy transformations in the universe over time are characterized by various kinds of potential energy, available since the Big Bang, later being "released" (transformed to more active types of energy such as kinetic or radiant energy) when a triggering mechanism is available. A familiar example of such a process is nuclear decay, which releases energy that was originally "stored" in heavy isotopes (such as uranium and thorium) by nucleosynthesis, a process that ultimately used the gravitational potential energy released from the gravitational collapse of supernovae to store energy in the creation of these heavy elements before they were incorporated into the solar system and the Earth. This energy is triggered and released in nuclear fission bombs or in civil nuclear power generation. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic energy and thermal energy in a very short time. Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at maximum. At its lowest point the kinetic energy is at maximum and is equal to the decrease of potential energy. If one (unrealistically) assumes that there is no friction or other losses, the conversion of energy between these processes would be perfect, and the pendulum would continue swinging forever.
132
+
133
+ Energy is also transferred from potential energy (E_p) to kinetic energy (E_k) and then back to potential energy constantly. This is referred to as conservation of energy. In this closed system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other. This can be demonstrated by the following:
+
+ E_p,initial + E_k,initial = E_p,final + E_k,final      (4)
172
+
173
+ The equation can then be simplified further since E_p = mgh (mass times acceleration due to gravity times the height) and E_k = ½mv² (half mass times velocity squared). Then the total amount of energy can be found by adding E_p + E_k = E_total.
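+
+ A small numeric check of this bookkeeping for an idealised frictionless swing or fall (the mass and height are invented for illustration):
+
+ # Conservation of energy: E_total = E_p + E_k = m*g*h + 0.5*m*v**2
+ m, g = 1.0, 9.81             # mass (kg), gravitational acceleration (m/s^2)
+ h0 = 2.0                     # release height above the lowest point, m
+ E_total = m * g * h0         # at the top: all potential, no kinetic
+ v_bottom = (2 * g * h0)**0.5
+ E_k = 0.5 * m * v_bottom**2  # at the bottom: all kinetic
+ print(f"{E_total:.2f} {E_k:.2f}")  # both ~19.62 J, as equation (4) requires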
252
+
253
+ Energy gives rise to weight when it is trapped in a system with zero momentum, where it can be weighed. It is also equivalent to mass, and this mass is always associated with it. Mass is also equivalent to a certain amount of energy, and likewise always appears associated with it, as described in mass-energy equivalence. The formula E = mc², derived by Albert Einstein (1905), quantifies the relationship between rest mass and rest energy within the concept of special relativity. In different theoretical frameworks, similar formulas were derived by J.J. Thomson (1881), Henri Poincaré (1900), Friedrich Hasenöhrl (1904) and others (see Mass-energy equivalence#History for further information).
254
+
255
+ Part of the rest energy (equivalent to rest mass) of matter may be converted to other forms of energy (still exhibiting mass), but neither energy nor mass can be destroyed; rather, both remain constant during any process. However, since c² is extremely large relative to ordinary human scales, the conversion of an everyday amount of rest mass (for example, 1 kg) from rest energy to other forms of energy (such as kinetic energy, thermal energy, or the radiant energy carried by light and other radiation) can liberate tremendous amounts of energy (~9×10¹⁶ joules = 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons. Conversely, the mass equivalent of an everyday amount of energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to measure on a weighing scale, unless the energy loss is very large. Examples of large transformations between rest energy (of matter) and other forms of energy (e.g., kinetic energy into particles with rest mass) are found in nuclear physics and particle physics.
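+
+ That figure is a direct consequence of E = mc²; a one-line check (the TNT conversion, 1 megaton ≈ 4.184×10¹⁵ J, is a standard value):
+
+ # Rest energy of 1 kg of mass and its TNT equivalent
+ c = 2.998e8              # speed of light, m/s
+ E = 1.0 * c**2           # ~8.99e16 J
+ megatons = E / 4.184e15  # ~21.5 megatons of TNT
+ print(f"E = {E:.3e} J = {megatons:.1f} Mt TNT")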
286
+
287
+ Thermodynamics divides energy transformation into two kinds: reversible processes and irreversible processes. An irreversible process is one in which energy is dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states) without degradation of even more energy. A reversible process is one in which this sort of dissipation does not happen. For example, conversion of energy from one type of potential field to another is reversible, as in the pendulum system described above. In processes where heat is generated, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered to be converted with 100% efficiency into other forms of energy. In this case, the energy must partly stay as heat and cannot be completely recovered as usable energy, except at the price of some other heat-like increase in disorder in quantum states in the universe (such as an expansion of matter, or a randomisation in a crystal).
288
+
289
+ As the universe evolves in time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or other kinds of increases in disorder). This has been referred to as the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does not change, but the fraction of energy which is available to do work through a heat engine, or be transformed to other usable forms of energy (through the use of generators attached to heat engines), grows less and less.
290
+
291
+ The fact that energy can be neither created nor destroyed is called the law of conservation of energy. In the form of the first law of thermodynamics, this states that a closed system's energy is constant unless energy is transferred in or out by work or heat, and that no energy is lost in transfer. The total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system. Whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant.[12]
292
+
293
+ While heat can always be fully converted into work in a reversible isothermal expansion of an ideal gas, for cyclic processes of practical interest in heat engines the second law of thermodynamics states that the system doing work always loses some energy as waste heat. This creates a limit to the amount of heat energy that can do work in a cyclic process, a limit called the available energy. Mechanical and other forms of energy can be transformed in the other direction into thermal energy without such limitations.[13] The total energy of a system can be calculated by adding up all forms of energy in the system.
294
+
295
+ Richard Feynman said during a 1961 lecture:[14]
296
+
297
+ There is a fact, or if you wish, a law, governing all natural phenomena that are known to date. There is no known exception to this law – it is exact so far as we know. The law is called the conservation of energy. It states that there is a certain quantity, which we call energy, that does not change in manifold changes which nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says that there is a numerical quantity which does not change when something happens. It is not a description of a mechanism, or anything concrete; it is just a strange fact that we can calculate some number and when we finish watching nature go through her tricks and calculate the number again, it is the same.
298
+
299
+ Most kinds of energy (with gravitational energy being a notable exception)[15] are subject to strict local conservation laws as well. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa.[13][14]
300
+
301
+ This law is a fundamental principle of physics. As shown rigorously by Noether's theorem, the conservation of energy is a mathematical consequence of translational symmetry of time,[16] a property of most phenomena below the cosmic scale that makes them independent of their locations on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable. This is because energy is the quantity which is canonically conjugate to time. This mathematical entanglement of energy and time also results in the uncertainty principle: it is impossible to define the exact amount of energy during any definite time interval. The uncertainty principle should not be confused with energy conservation; rather, it provides mathematical limits to which energy can in principle be defined and measured.
302
+
303
+ Each of the basic forces of nature is associated with a different type of potential energy, and all types of potential energy (like all other types of energy) appear as system mass whenever present. For example, a compressed spring will be slightly more massive than before it was compressed. Likewise, whenever energy is transferred between systems by any mechanism, an associated mass is transferred with it.
304
+
305
+ In quantum mechanics, energy is expressed using the Hamiltonian operator. On any time scale, the uncertainty in the energy is given by
+
+ ΔE Δt ≥ ħ/2
+
+ which is similar in form to the Heisenberg uncertainty principle (but not really mathematically equivalent thereto, since H and t are not dynamically conjugate variables, neither in classical nor in quantum mechanics).
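+
+ A quick numeric sketch of this bound (the one-femtosecond interval is an arbitrary illustrative choice):
+
+ # Minimum energy uncertainty over a time interval: dE >= hbar / (2*dt)
+ hbar = 1.055e-34  # reduced Planck constant, J*s
+ dt = 1e-15        # time interval, s (1 femtosecond)
+ dE_min = hbar / (2 * dt)
+ print(f"dE >= {dE_min:.2e} J")  # ~5.3e-20 J, about 0.33 eV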
308
+
309
+ In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum and whose exchange with and between real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions). Virtual photons (which are simply the lowest quantum mechanical energy state of photons) are also responsible for the electrostatic interaction between electric charges (which results in Coulomb's law), for spontaneous radiative decay of excited atomic and nuclear states, for the Casimir force, for van der Waals forces and some other observable phenomena.
310
+
311
+ Energy transfer can be considered for the special case of systems which are closed to transfers of matter. The portion of the energy which is transferred by conservative forces over a distance is measured as the work the source system does on the receiving system. The portion of the energy which does not do work during the transfer is called heat.[note 4] Energy can be transferred between systems in a variety of ways. Examples include the transmission of electromagnetic energy via photons, physical collisions which transfer kinetic energy,[note 5] and the conductive transfer of thermal energy.
312
+
313
+ Energy is strictly conserved and is also locally conserved wherever it can be defined. In thermodynamics, for closed systems, the process of energy transfer is described by the first law:[note 6]
314
+
+ ΔE = W + Q      (1)
324
+
325
+ where ΔE is the amount of energy transferred, W represents the work done on the system, and Q represents the heat flow into the system. As a simplification, the heat term, Q, is sometimes ignored, especially when the thermal efficiency of the transfer is high.
+
+ ΔE = W      (2)
372
+
373
+ This simplified equation is the one used to define the joule, for example.
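+
+ A tiny bookkeeping sketch of the sign conventions in equation (1) (the numbers are invented for illustration):
+
+ # First-law energy accounting: dE = W + Q
+ W = 150.0   # work done on the system, J (positive: energy flows in)
+ Q = -40.0   # heat flow into the system, J (negative: heat leaves)
+ dE = W + Q  # net change in the system's energy
+ print(f"dE = {dE} J")  # 110.0 J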
374
+
375
+ Beyond the constraints of closed systems, open systems can gain or lose energy in association with matter transfer (both of these processes are illustrated by fueling an auto, a system which gains in energy thereby, without addition of either work or heat). Denoting this energy by E, one may write
+
+ ΔE = W + Q + E      (3)
395
+
396
+ Internal energy is the sum of all microscopic forms of energy of a system. It is the energy needed to create the system. It is related to the potential energy, e.g., molecular structure, crystal structure, and other geometric aspects, as well as the motion of the particles, in form of kinetic energy. Thermodynamics is chiefly concerned with changes in internal energy and not its absolute value, which is impossible to determine with thermodynamics alone.[17]
397
+
398
+ The first law of thermodynamics asserts that energy (but not necessarily thermodynamic free energy) is always conserved[18] and that heat flow is a form of energy transfer. For homogeneous systems, with a well-defined temperature and pressure, a commonly used corollary of the first law is that, for a system subject only to pressure forces and heat transfer (e.g., a cylinder-full of gas) without chemical changes, the differential change in the internal energy of the system (with a gain in energy signified by a positive quantity) is given as
+
+ dU = T dS − P dV
+
+ where the first term on the right is the heat transferred into the system, expressed in terms of temperature T and entropy S (in which entropy increases and the change dS is positive when the system is heated), and the last term on the right hand side is identified as work done on the system, where pressure is P and volume V (the negative sign results since compression of the system requires work to be done on it and so the volume change, dV, is negative when work is done on the system).
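+
+ A minimal numeric sketch of this differential form for one small quasi-static step (all values illustrative):
+
+ # dU = T*dS - P*dV for a small reversible change
+ T = 300.0     # temperature, K
+ P = 1.0e5     # pressure, Pa
+ dS = 0.02     # small entropy increase, J/K (the system is heated)
+ dV = -1.0e-4  # small compression, m^3 (work is done on the system)
+ dU = T * dS - P * dV
+ print(f"dU = {dU:.1f} J")  # 6.0 + 10.0 = 16.0 J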
401
+
402
+ This equation is highly specific, ignoring all chemical, electrical, nuclear, and gravitational forces, effects such as advection of any form of energy other than heat and pV-work. The general formulation of the first law (i.e., conservation of energy) is valid even in situations in which the system is not homogeneous. For these cases the change in internal energy of a closed system is expressed in a general form by
+
+ dE = δQ + δW
+
+ where δQ is the heat supplied to the system and δW is the work applied to the system.
425
+
426
+ The energy of a mechanical harmonic oscillator (a mass on a spring) is alternatively kinetic and potential energy. At two points in the oscillation cycle it is entirely kinetic, and at two points it is entirely potential. Over the whole cycle, or over many cycles, net energy is thus equally split between kinetic and potential. This is called equipartition principle; total energy of a system with many degrees of freedom is equally split among all available degrees of freedom.
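+
+ A short numeric check of this equal time-average split for a mass on a spring (a sketch with arbitrary parameters):
+
+ import math
+
+ # Time-averaged kinetic and potential energy of x(t) = A*cos(w*t)
+ m, k, A = 1.0, 4.0, 0.5  # mass (kg), spring constant (N/m), amplitude (m)
+ w = math.sqrt(k / m)     # angular frequency, rad/s
+ N = 100_000
+ period = 2 * math.pi / w
+ ke = pe = 0.0
+ for i in range(N):
+     t = period * i / N
+     v = -A * w * math.sin(w * t)
+     x = A * math.cos(w * t)
+     ke += 0.5 * m * v**2
+     pe += 0.5 * k * x**2
+ print(f"{ke/N:.4f} {pe/N:.4f}")  # both ~0.2500 J: an even split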
427
+
428
+ This principle is vitally important to understanding the behaviour of a quantity closely related to energy, called entropy. Entropy is a measure of evenness of a distribution of energy between parts of a system. When an isolated system is given more degrees of freedom (i.e., given new available energy states that are the same as existing states), then total energy spreads over all available degrees equally without distinction between "new" and "old" degrees. This mathematical result is called the second law of thermodynamics. The second law of thermodynamics is valid only for systems which are near or in an equilibrium state. For non-equilibrium systems, the laws governing the system's behaviour are still debated. One of the guiding principles for these systems is the principle of maximum entropy production.[19][20] It states that nonequilibrium systems behave in such a way as to maximize their entropy production.[21]
en/1743.html.txt ADDED
@@ -0,0 +1,146 @@
1
+
2
+
3
+ Electricity is the set of physical phenomena associated with the presence and motion of matter that has the property of electric charge. Electricity is related to magnetism, both being part of the phenomenon of electromagnetism, as described by Maxwell's equations. Various common phenomena are related to electricity, including lightning, static electricity, electric heating, electric discharges and many others.
4
+
5
+ The presence of an electric charge, which can be either positive or negative, produces an electric field. The movement of electric charges is an electric current and produces a magnetic field.
6
+
7
+ When a charge is placed in a location with a non-zero electric field, a force will act on it. The magnitude of this force is given by Coulomb's law. If the charge moves, the electric field does work on the electric charge. Thus we can speak of electric potential at a certain point in space, which is equal to the work done by an external agent in carrying a unit of positive charge from an arbitrarily chosen reference point to that point without any acceleration and is typically measured in volts.
8
+
9
+ Electricity is at the heart of many modern technologies, being used for:
10
+
11
+ Electrical phenomena have been studied since antiquity, though progress in theoretical understanding remained slow until the seventeenth and eighteenth centuries. The theory of electromagnetism was developed in the 19th century, and by the end of that century electricity was being put to industrial and residential use by electrical engineers. The rapid expansion in electrical technology at this time transformed industry and society, becoming a driving force for the Second Industrial Revolution. Electricity's extraordinary versatility means it can be put to an almost limitless set of applications which include transport, heating, lighting, communications, and computation. Electrical power is now the backbone of modern industrial society.[1]
12
+
13
+ Long before any knowledge of electricity existed, people were aware of shocks from electric fish. Ancient Egyptian texts dating from 2750 BCE referred to these fish as the "Thunderer of the Nile", and described them as the "protectors" of all other fish. Electric fish were again reported millennia later by ancient Greek, Roman and Arabic naturalists and physicians.[2] Several ancient writers, such as Pliny the Elder and Scribonius Largus, attested to the numbing effect of electric shocks delivered by electric catfish and electric rays, and knew that such shocks could travel along conducting objects.[3] Patients suffering from ailments such as gout or headache were directed to touch electric fish in the hope that the powerful jolt might cure them.[4]
14
+
15
+ Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers. Thales of Miletus made a series of observations on static electricity around 600 BCE, from which he believed that friction rendered amber magnetic, in contrast to minerals such as magnetite, which needed no rubbing.[5][6][7][8] Thales was incorrect in believing the attraction was due to a magnetic effect, but later science would prove a link between magnetism and electricity. According to a controversial theory, the Parthians may have had knowledge of electroplating, based on the 1936 discovery of the Baghdad Battery, which resembles a galvanic cell, though it is uncertain whether the artifact was electrical in nature.[9]
16
+
17
+ Electricity would remain little more than an intellectual curiosity for millennia until 1600, when the English scientist William Gilbert wrote De Magnete, in which he made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber.[5] He coined the New Latin word electricus ("of amber" or "like amber", from ἤλεκτρον, elektron, the Greek word for "amber") to refer to the property of attracting small objects after being rubbed.[10] This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646.[11]
18
+
19
+ Further work was conducted in the 17th and early 18th centuries by Otto von Guericke, Robert Boyle, Stephen Gray and C. F. du Fay.[12] Later in the 18th century, Benjamin Franklin conducted extensive research in electricity, selling his possessions to fund his work. In June 1752 he is reputed to have attached a metal key to the bottom of a dampened kite string and flown the kite in a storm-threatened sky.[13] A succession of sparks jumping from the key to the back of his hand showed that lightning was indeed electrical in nature.[14] He also explained the apparently paradoxical behavior[15] of the Leyden jar as a device for storing large amounts of electrical charge in terms of electricity consisting of both positive and negative charges.[12]
20
+
21
+ In 1791, Luigi Galvani published his discovery of bioelectromagnetics, demonstrating that electricity was the medium by which neurons passed signals to the muscles.[16][17][12] Alessandro Volta's battery, or voltaic pile, of 1800, made from alternating layers of zinc and copper, provided scientists with a more reliable source of electrical energy than the electrostatic machines previously used.[16][17] The recognition of electromagnetism, the unity of electric and magnetic phenomena, is due to Hans Christian Ørsted and André-Marie Ampère in 1819–1820. Michael Faraday invented the electric motor in 1821, and Georg Ohm mathematically analysed the electrical circuit in 1827.[17] Electricity and magnetism (and light) were definitively linked by James Clerk Maxwell, in particular in his "On Physical Lines of Force" in 1861 and 1862.[18]
22
+
23
+ While the early 19th century had seen rapid progress in electrical science, the late 19th century would see the greatest progress in electrical engineering. Through such people as Alexander Graham Bell, Ottó Bláthy, Thomas Edison, Galileo Ferraris, Oliver Heaviside, Ányos Jedlik, William Thomson, 1st Baron Kelvin, Charles Algernon Parsons, Werner von Siemens, Joseph Swan, Reginald Fessenden, Nikola Tesla and George Westinghouse, electricity turned from a scientific curiosity into an essential tool for modern life.
24
+
25
+ In 1887, Heinrich Hertz[19]:843–44[20] discovered that electrodes illuminated with ultraviolet light create electric sparks more easily. In 1905, Albert Einstein published a paper that explained experimental data from the photoelectric effect as being the result of light energy being carried in discrete quantized packets, energising electrons. This discovery led to the quantum revolution. Einstein was awarded the Nobel Prize in Physics in 1921 for "his discovery of the law of the photoelectric effect".[21] The photoelectric effect is also employed in photocells, such as those found in solar panels, which are frequently used to generate electricity commercially.
26
+
27
+ The first solid-state device was the "cat's-whisker detector" first used in the 1900s in radio receivers. A whisker-like wire is placed lightly in contact with a solid crystal (such as a germanium crystal) to detect a radio signal by the contact junction effect.[22] In a solid-state component, the current is confined to solid elements and compounds engineered specifically to switch and amplify it. Current flow can be understood in two forms: as negatively charged electrons, and as positively charged electron deficiencies called holes. These charges and holes are understood in terms of quantum physics. The building material is most often a crystalline semiconductor.[23][24]
28
+
29
+ Solid-state electronics came into its own with the emergence of transistor technology. The first working transistor, a germanium-based point-contact transistor, was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947,[25] followed by the bipolar junction transistor in 1948.[26] These early transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis.[27]:168 They were followed by the silicon-based MOSFET (metal-oxide-semiconductor field-effect transistor, or MOS transistor), invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959.[28][29][30] It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses,[27]:165,179 leading to the silicon revolution.[31] Solid-state devices started becoming prevalent from the 1960s, with the transition from vacuum tubes to semiconductor diodes, transistors, integrated circuit (IC) chips, MOSFETs, and light-emitting diode (LED) technology.
30
+
31
+ The most common electronic device is the MOSFET,[29][32] which has become the most widely manufactured device in history.[33] Common solid-state MOS devices include microprocessor chips[34] and semiconductor memory.[35][36] A special type of semiconductor memory is flash memory, which is used in USB flash drives and mobile devices, as well as solid-state drive (SSD) technology to replace mechanically rotating magnetic disc hard disk drive (HDD) technology.
32
+
33
+ The presence of charge gives rise to an electrostatic force: charges exert a force on each other, an effect that was known, though not understood, in antiquity.[19]:457 A lightweight ball suspended from a string can be charged by touching it with a glass rod that has itself been charged by rubbing with a cloth. If a similar ball is charged by the same glass rod, it is found to repel the first: the charge acts to force the two balls apart. Two balls that are charged with a rubbed amber rod also repel each other. However, if one ball is charged by the glass rod, and the other by an amber rod, the two balls are found to attract each other. These phenomena were investigated in the late eighteenth century by Charles-Augustin de Coulomb, who deduced that charge manifests itself in two opposing forms. This discovery led to the well-known axiom: like-charged objects repel and opposite-charged objects attract.[19]
34
+
35
+ The force acts on the charged particles themselves, hence charge has a tendency to spread itself as evenly as possible over a conducting surface. The magnitude of the electromagnetic force, whether attractive or repulsive, is given by Coulomb's law, which relates the force to the product of the charges and has an inverse-square relation to the distance between them.[37][38]:35 The electromagnetic force is very strong, second only in strength to the strong interaction,[39] but unlike that force it operates over all distances.[40] In comparison with the much weaker gravitational force, the electromagnetic force pushing two electrons apart is 10⁴² times that of the gravitational attraction pulling them together.[41]
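+
+ A numeric sketch of both claims in this paragraph, using standard constants (the separation cancels out of the ratio, so its value is immaterial):
+
+ # Coulomb force vs. gravitational force between two electrons
+ k = 8.988e9      # Coulomb constant, N*m^2/C^2
+ G = 6.674e-11    # gravitational constant, N*m^2/kg^2
+ e = 1.602e-19    # elementary charge, C
+ m_e = 9.109e-31  # electron mass, kg
+ r = 1.0          # separation, m
+ F_coulomb = k * e**2 / r**2
+ F_gravity = G * m_e**2 / r**2
+ print(f"ratio = {F_coulomb / F_gravity:.2e}")  # ~4.2e+42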
36
+
37
+ Study has shown that the origin of charge is from certain types of subatomic particles which have the property of electric charge. Electric charge gives rise to and interacts with the electromagnetic force, one of the four fundamental forces of nature. The most familiar carriers of electrical charge are the electron and proton. Experiment has shown charge to be a conserved quantity, that is, the net charge within an electrically isolated system will always remain constant regardless of any changes taking place within that system.[42] Within the system, charge may be transferred between bodies, either by direct contact, or by passing along a conducting material, such as a wire.[38]:2–5 The informal term static electricity refers to the net presence (or 'imbalance') of charge on a body, usually caused when dissimilar materials are rubbed together, transferring charge from one to the other.
38
+
39
+ The charge on electrons and protons is opposite in sign, hence an amount of charge may be expressed as being either negative or positive. By convention, the charge carried by electrons is deemed negative, and that by protons positive, a custom that originated with the work of Benjamin Franklin.[43] The amount of charge is usually given the symbol Q and expressed in coulombs;[44] each electron carries the same charge of approximately −1.6022×10⁻¹⁹ coulomb. The proton has a charge that is equal and opposite, and thus +1.6022×10⁻¹⁹ coulomb. Charge is possessed not just by matter, but also by antimatter, each antiparticle bearing an equal and opposite charge to its corresponding particle.[45]
40
+
41
+ Charge can be measured by a number of means, an early instrument being the gold-leaf electroscope, which although still in use for classroom demonstrations, has been superseded by the electronic electrometer.[38]:2–5
42
+
43
+ The movement of electric charge is known as an electric current, the intensity of which is usually measured in amperes. Current can consist of any moving charged particles; most commonly these are electrons, but any charge in motion constitutes a current. Electric current can flow through some materials, called electrical conductors, but will not flow through an electrical insulator.[46]
44
+
45
+ By historical convention, a positive current is defined as having the same direction of flow as any positive charge it contains, or to flow from the most positive part of a circuit to the most negative part. Current defined in this manner is called conventional current. The motion of negatively charged electrons around an electric circuit, one of the most familiar forms of current, is thus deemed positive in the opposite direction to that of the electrons.[47] However, depending on the conditions, an electric current can consist of a flow of charged particles in either direction, or even in both directions at once. The positive-to-negative convention is widely used to simplify this situation.
46
+
47
+ The process by which electric current passes through a material is termed electrical conduction, and its nature varies with that of the charged particles and the material through which they are travelling. Examples of electric currents include metallic conduction, where electrons flow through a conductor such as metal, and electrolysis, where ions (charged atoms) flow through liquids, or through plasmas such as electrical sparks. While the particles themselves can move quite slowly, sometimes with an average drift velocity only fractions of a millimetre per second,[38]:17 the electric field that drives them itself propagates at close to the speed of light, enabling electrical signals to pass rapidly along wires.[48]
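+
+ A numeric sketch of just how slow that drift is, for a household-scale current in copper wire (the current and wire diameter are illustrative; the free-electron density is the standard value for copper):
+
+ import math
+
+ # Drift velocity from I = n * A * v * q
+ I = 1.0        # current, A
+ n = 8.5e28     # free-electron density of copper, m^-3
+ q = 1.602e-19  # elementary charge, C
+ d = 2.0e-3     # wire diameter, m (2 mm, illustrative)
+ A = math.pi * (d / 2)**2
+ v = I / (n * A * q)
+ print(f"v = {v * 1000:.4f} mm/s")  # ~0.02 mm/s: a tiny fraction of a mm/s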
48
+
49
+ Current causes several observable effects, which historically were the means of recognising its presence. That water could be decomposed by the current from a voltaic pile was discovered by Nicholson and Carlisle in 1800, a process now known as electrolysis. Their work was greatly expanded upon by Michael Faraday in 1833. Current through a resistance causes localised heating, an effect James Prescott Joule studied mathematically in 1840.[38]:23–24 One of the most important discoveries relating to current was made accidentally by Hans Christian Ørsted in 1820, when, while preparing a lecture, he witnessed the current in a wire disturbing the needle of a magnetic compass.[49] He had discovered electromagnetism, a fundamental interaction between electricity and magnetics. The level of electromagnetic emissions generated by electric arcing is high enough to produce electromagnetic interference, which can be detrimental to the workings of adjacent equipment.[50]
50
+
51
+ In engineering or household applications, current is often described as being either direct current (DC) or alternating current (AC). These terms refer to how the current varies in time. Direct current, as produced for example by a battery and required by most electronic devices, is a unidirectional flow from the positive part of a circuit to the negative.[51]:11 If, as is most common, this flow is carried by electrons, they will be travelling in the opposite direction. Alternating current is any current that reverses direction repeatedly; almost always this takes the form of a sine wave.[51]:206–07 Alternating current thus pulses back and forth within a conductor without the charge moving any net distance over time. The time-averaged value of an alternating current is zero, but it delivers energy in first one direction, and then the reverse. Alternating current is affected by electrical properties that are not observed under steady-state direct current, such as inductance and capacitance.[51]:223–25 These properties however can become important when circuitry is subjected to transients, such as when first energised.
52
+
53
+ The concept of the electric field was introduced by Michael Faraday. An electric field is created by a charged body in the space that surrounds it, and results in a force exerted on any other charges placed within the field. The electric field acts between two charges in a similar manner to the way that the gravitational field acts between two masses, and like it, extends towards infinity and shows an inverse square relationship with distance.[40] However, there is an important difference. Gravity always acts in attraction, drawing two masses together, while the electric field can result in either attraction or repulsion. Since large bodies such as planets generally carry no net charge, the electric field at a distance is usually zero. Thus gravity is the dominant force at distance in the universe, despite being much weaker.[41]
54
+
55
+ An electric field generally varies in space,[52] and its strength at any one point is defined as the force (per unit charge) that would be felt by a stationary, negligible charge if placed at that point.[19]:469–70 The conceptual charge, termed a 'test charge', must be vanishingly small to prevent its own electric field disturbing the main field and must also be stationary to prevent the effect of magnetic fields. As the electric field is defined in terms of force, and force is a vector, having both magnitude and direction, so it follows that an electric field is a vector field.[19]:469–70
56
+
57
+ The study of electric fields created by stationary charges is called electrostatics. The field may be visualised by a set of imaginary lines whose direction at any point is the same as that of the field. This concept was introduced by Faraday,[53] whose term 'lines of force' still sometimes sees use. The field lines are the paths that a point positive charge would follow as it is forced to move within the field; they are however an imaginary concept with no physical existence, and the field permeates all the intervening space between the lines.[53] Field lines emanating from stationary charges have several key properties: first, that they originate at positive charges and terminate at negative charges; second, that they must enter any good conductor at right angles, and third, that they may never cross nor close in on themselves.[19]:479
58
+
59
+ A hollow conducting body carries all its charge on its outer surface. The field is therefore zero at all places inside the body.[38]:88 This is the operating principle of the Faraday cage, a conducting metal shell which isolates its interior from outside electrical effects.
60
+
61
+ The principles of electrostatics are important when designing items of high-voltage equipment. There is a finite limit to the electric field strength that may be withstood by any medium. Beyond this point, electrical breakdown occurs and an electric arc causes flashover between the charged parts. Air, for example, tends to arc across small gaps at electric field strengths which exceed 30 kV per centimetre. Over larger gaps, its breakdown strength is weaker, perhaps 1 kV per centimetre.[54] The most visible natural occurrence of this is lightning, caused when charge becomes separated in the clouds by rising columns of air, and raises the electric field in the air to greater than it can withstand. The voltage of a large lightning cloud may be as high as 100 MV and have discharge energies as great as 250 kWh.[55]
62
+
63
+ The field strength is greatly affected by nearby conducting objects, and it is particularly intense when it is forced to curve around sharply pointed objects. This principle is exploited in the lightning conductor, the sharp spike of which acts to encourage the lightning stroke to develop there, rather than to the building it serves to protect.[56]:155
64
+
65
+ The concept of electric potential is closely linked to that of the electric field. A small charge placed within an electric field experiences a force, and to have brought that charge to that point against the force requires work. The electric potential at any point is defined as the energy required to bring a unit test charge from an infinite distance slowly to that point. It is usually measured in volts, and one volt is the potential for which one joule of work must be expended to bring a charge of one coulomb from infinity.[19]:494–98 This definition of potential, while formal, has little practical application, and a more useful concept is that of electric potential difference: the energy required to move a unit charge between two specified points. An electric field has the special property that it is conservative, which means that the path taken by the test charge is irrelevant: all paths between two specified points expend the same energy, and thus a unique value for potential difference may be stated.[19]:494–98 The volt is so strongly identified as the unit of choice for measurement and description of electric potential difference that the term voltage sees greater everyday usage.
66
+
67
+ For practical purposes, it is useful to define a common reference point to which potentials may be expressed and compared. While this could be at infinity, a much more useful reference is the Earth itself, which is assumed to be at the same potential everywhere. This reference point naturally takes the name earth or ground. Earth is assumed to be an infinite source of equal amounts of positive and negative charge, and is therefore electrically uncharged—and unchargeable.[57]
68
+
69
+ Electric potential is a scalar quantity, that is, it has only magnitude and not direction. It may be viewed as analogous to height: just as a released object will fall through a difference in heights caused by a gravitational field, so a charge will 'fall' across the voltage caused by an electric field.[58] As relief maps show contour lines marking points of equal height, a set of lines marking points of equal potential (known as equipotentials) may be drawn around an electrostatically charged object. The equipotentials cross all lines of force at right angles. They must also lie parallel to a conductor's surface, since otherwise there would be a force moving the charge carriers so as to even out the potential of the surface.
70
+
71
+ The electric field was formally defined as the force exerted per unit charge, but the concept of potential allows for a more useful and equivalent definition: the electric field is the negative of the local gradient of the electric potential. Usually expressed in volts per metre, the field points along the direction of greatest decrease of potential, and it is strongest where the equipotentials lie closest together.[38]:60
72
+
73
+ Ørsted's discovery in 1820 that a magnetic field existed around all sides of a wire carrying an electric current indicated that there was a direct relationship between electricity and magnetism. Moreover, the interaction seemed different from gravitational and electrostatic forces, the two forces of nature then known. The force on the compass needle did not direct it to or away from the current-carrying wire, but acted at right angles to it.[49] Ørsted's words were that "the electric conflict acts in a revolving manner." The force also depended on the direction of the current, for if the flow was reversed, then the force did too.[59]
74
+
75
+ Ørsted did not fully understand his discovery, but he observed the effect was reciprocal: a current exerts a force on a magnet, and a magnetic field exerts a force on a current. The phenomenon was further investigated by Ampère, who discovered that two parallel current-carrying wires exerted a force upon each other: two wires conducting currents in the same direction are attracted to each other, while wires containing currents in opposite directions are forced apart.[60] The interaction is mediated by the magnetic field each current produces and forms the basis for the international definition of the ampere.[60]
76
+
77
+ This relationship between magnetic fields and currents is extremely important, for it led to Michael Faraday's invention of the electric motor in 1821. Faraday's homopolar motor consisted of a permanent magnet sitting in a pool of mercury. A current was allowed through a wire suspended from a pivot above the magnet and dipped into the mercury. The magnet exerted a tangential force on the wire, making it circle around the magnet for as long as the current was maintained.[61]
78
+
79
+ Experimentation by Faraday in 1831 revealed that a wire moving perpendicular to a magnetic field developed a potential difference between its ends. Further analysis of this process, known as electromagnetic induction, enabled him to state the principle, now known as Faraday's law of induction, that the potential difference induced in a closed circuit is proportional to the rate of change of magnetic flux through the loop. Exploitation of this discovery enabled him to invent the first electrical generator in 1831, in which he converted the mechanical energy of a rotating copper disc to electrical energy.[61] Faraday's disc was inefficient and of no use as a practical generator, but it showed the possibility of generating electric power using magnetism, a possibility that would be taken up by those that followed on from his work.
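+
+ A minimal numeric sketch of Faraday's law of induction, EMF = −dΦ/dt per turn (the multi-turn coil and all numbers are invented for illustration):
+
+ # Induced EMF from the rate of change of magnetic flux
+ N_turns = 100  # turns in the coil
+ dPhi = -0.02   # change of flux through one turn, Wb
+ dt = 0.1       # time over which the change happens, s
+ emf = -N_turns * dPhi / dt
+ print(f"EMF = {emf} V")  # 20.0 V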
80
+
81
+ The ability of chemical reactions to produce electricity, and conversely the ability of electricity to drive chemical reactions has a wide array of uses.
82
+
83
+ Electrochemistry has always been an important part of electricity. From the initial invention of the Voltaic pile, electrochemical cells have evolved into the many different types of batteries, electroplating and electrolysis cells. Aluminium is produced in vast quantities this way, and many portable devices are electrically powered using rechargeable cells.
84
+
85
+ An electric circuit is an interconnection of electric components such that electric charge is made to flow along a closed path (a circuit), usually to perform some useful task.
86
+
87
+ The components in an electric circuit can take many forms, which can include elements such as resistors, capacitors, switches, transformers and electronics. Electronic circuits contain active components, usually semiconductors, and typically exhibit non-linear behaviour, requiring complex analysis. The simplest electric components are those that are termed passive and linear: while they may temporarily store energy, they contain no sources of it, and exhibit linear responses to stimuli.[62]:15–16
88
+
89
+ The resistor is perhaps the simplest of passive circuit elements: as its name suggests, it resists the current through it, dissipating its energy as heat. The resistance is a consequence of the motion of charge through a conductor: in metals, for example, resistance is primarily due to collisions between electrons and ions. Ohm's law is a basic law of circuit theory, stating that the current passing through a resistance is directly proportional to the potential difference across it. The resistance of most materials is relatively constant over a range of temperatures and currents; materials under these conditions are known as 'ohmic'. The ohm, the unit of resistance, was named in honour of Georg Ohm, and is symbolised by the Greek letter Ω. 1 Ω is the resistance that will produce a potential difference of one volt in response to a current of one amp.[62]:30–35
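+
+ A one-line sketch of Ohm's law, plus the Joule heating it implies (component values are arbitrary):
+
+ # Ohm's law V = I*R, and power dissipated as heat P = I^2 * R
+ V = 12.0      # potential difference, V
+ R = 6.0       # resistance, ohms
+ I = V / R     # current: 2.0 A
+ P = I**2 * R  # dissipated power: 24.0 W
+ print(I, P)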
90
+
91
+ The capacitor is a development of the Leyden jar and is a device that can store charge, and thereby storing electrical energy in the resulting field. It consists of two conducting plates separated by a thin insulating dielectric layer; in practice, thin metal foils are coiled together, increasing the surface area per unit volume and therefore the capacitance. The unit of capacitance is the farad, named after Michael Faraday, and given the symbol F: one farad is the capacitance that develops a potential difference of one volt when it stores a charge of one coulomb. A capacitor connected to a voltage supply initially causes a current as it accumulates charge; this current will however decay in time as the capacitor fills, eventually falling to zero. A capacitor will therefore not permit a steady state current, but instead blocks it.[62]:216–20
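+
+ A small sketch of the charging behaviour described above for a capacitor fed through a resistance (values are illustrative; the exponential decay is the standard RC solution, not stated in the text):
+
+ import math
+
+ # Charging current decays as i(t) = (V/R) * exp(-t / (R*C))
+ V, R, C = 5.0, 1000.0, 1e-6  # supply (V), resistance (ohm), capacitance (F)
+ tau = R * C                  # time constant, s
+ for t in (0.0, tau, 5 * tau):
+     i = (V / R) * math.exp(-t / tau)
+     print(f"t = {t:.4f} s, i = {i * 1000:.3f} mA")  # falls toward zero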
92
+
93
+ The inductor is a conductor, usually a coil of wire, that stores energy in a magnetic field in response to the current through it. When the current changes, the magnetic field does too, inducing a voltage between the ends of the conductor. The induced voltage is proportional to the time rate of change of the current. The constant of proportionality is termed the inductance. The unit of inductance is the henry, named after Joseph Henry, a contemporary of Faraday. One henry is the inductance that will induce a potential difference of one volt if the current through it changes at a rate of one ampere per second. The inductor's behaviour is in some regards converse to that of the capacitor: it will freely allow an unchanging current, but opposes a rapidly changing one.[62]:226–29
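+
+ And the corresponding sketch for the henry's definition (numbers again illustrative):
+
+ # Induced voltage across an inductor: v = L * dI/dt
+ L = 0.5          # inductance, H
+ dI = 2.0         # change in current, A
+ dt = 0.01        # over this time, s
+ v = L * dI / dt  # 100.0 V, opposing the rapid change
+ print(v)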
94
+
95
+ Electric power is the rate at which electric energy is transferred by an electric circuit. The SI unit of power is the watt, one joule per second.
96
+
97
+ Electric power, like mechanical power, is the rate of doing work, measured in watts, and represented by the letter P. The term wattage is used colloquially to mean "electric power in watts." The electric power in watts produced by an electric current I consisting of a charge of Q coulombs every t seconds passing through an electric potential (voltage) difference of V is
+
+ P = work done per unit time = QV/t = IV
+
+ where Q is electric charge in coulombs, t is time in seconds, I is electric current in amperes, and V is electric potential (voltage) in volts.
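+
+ For instance (an invented household-scale example):
+
+ # Electric power P = I*V
+ V = 230.0  # mains voltage, V
+ I = 8.7    # current drawn, A
+ P = I * V  # ~2000 W, i.e. about 2 kW
+ print(f"P = {P:.0f} W")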
100
+
101
+ Electricity generation is often done with electric generators, but can also be supplied by chemical sources such as electric batteries or by other means from a wide variety of sources of energy. Electric power is generally supplied to businesses and homes by the electric power industry. Electricity is usually sold by the kilowatt hour (3.6 MJ) which is the product of power in kilowatts multiplied by running time in hours. Electric utilities measure power using electricity meters, which keep a running total of the electric energy delivered to a customer. Unlike fossil fuels, electricity is a low entropy form of energy and can be converted into motion or many other forms of energy with high efficiency.[63]
102
+
103
+ Electronics deals with electrical circuits that involve active electrical components such as vacuum tubes, transistors, diodes, optoelectronics, sensors and integrated circuits, and associated passive interconnection technologies. The nonlinear behaviour of active components and their ability to control electron flows makes amplification of weak signals possible and electronics is widely used in information processing, telecommunications, and signal processing. The ability of electronic devices to act as switches makes digital information processing possible. Interconnection technologies such as circuit boards, electronics packaging technology, and other varied forms of communication infrastructure complete circuit functionality and transform the mixed components into a regular working system.
104
+
105
+ Today, most electronic devices use semiconductor components to perform electron control. The study of semiconductor devices and related technology is considered a branch of solid state physics, whereas the design and construction of electronic circuits to solve practical problems come under electronics engineering.
106
+
107
+ Faraday's and Ampère's work showed that a time-varying magnetic field acted as a source of an electric field, and a time-varying electric field was a source of a magnetic field. Thus, when either field is changing in time, a field of the other is necessarily induced.[19]:696–700 Such a phenomenon has the properties of a wave, and is naturally referred to as an electromagnetic wave. Electromagnetic waves were analysed theoretically by James Clerk Maxwell in 1864. Maxwell developed a set of equations that could unambiguously describe the interrelationship between electric field, magnetic field, electric charge, and electric current. He could moreover prove that such a wave would necessarily travel at the speed of light, and thus light itself was a form of electromagnetic radiation. Maxwell's Laws, which unify light, fields, and charge, are one of the great milestones of theoretical physics.[19]:696–700
108
+
109
+ Thus, the work of many researchers enabled the use of electronics to convert signals into high frequency oscillating currents, and via suitably shaped conductors, electricity permits the transmission and reception of these signals via radio waves over very long distances.
110
+
111
+ In the 6th century BC, the Greek philosopher Thales of Miletus experimented with amber rods and these experiments were the first studies into the production of electrical energy. While this method, now known as the triboelectric effect, can lift light objects and generate sparks, it is extremely inefficient.[64] It was not until the invention of the voltaic pile in the eighteenth century that a viable source of electricity became available. The voltaic pile, and its modern descendant, the electrical battery, store energy chemically and make it available on demand in the form of electrical energy.[64] The battery is a versatile and very common power source which is ideally suited to many applications, but its energy storage is finite, and once discharged it must be disposed of or recharged. For large electrical demands electrical energy must be generated and transmitted continuously over conductive transmission lines.
112
+
113
+ Electrical power is usually generated by electro-mechanical generators driven by steam produced from fossil fuel combustion, or the heat released from nuclear reactions; or from other sources such as kinetic energy extracted from wind or flowing water. The modern steam turbine invented by Sir Charles Parsons in 1884 today generates about 80 percent of the electric power in the world using a variety of heat sources. Such generators bear no resemblance to Faraday's homopolar disc generator of 1831, but they still rely on his electromagnetic principle that a conductor linking a changing magnetic field induces a potential difference across its ends.[65] The invention in the late nineteenth century of the transformer meant that electrical power could be transmitted more efficiently at a higher voltage but lower current. Efficient electrical transmission meant in turn that electricity could be generated at centralised power stations, where it benefited from economies of scale, and then be despatched relatively long distances to where it was needed.[66][67]
114
+
115
+ Since electrical energy cannot easily be stored in quantities large enough to meet demands on a national scale, at all times exactly as much must be produced as is required.[66] This requires electricity utilities to make careful predictions of their electrical loads, and maintain constant co-ordination with their power stations. A certain amount of generation must always be held in reserve to cushion an electrical grid against inevitable disturbances and losses.
116
+
117
+ Demand for electricity grows with great rapidity as a nation modernises and its economy develops. The United States showed a 12% increase in demand during each year of the first three decades of the twentieth century,[68] a rate of growth that is now being experienced by emerging economies such as those of India or China.[69][70] Historically, the growth rate for electricity demand has outstripped that for other forms of energy.[71]:16
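+ As a rough arithmetic check of that figure (an illustration, not a number from the source): 12% compounded annually for three decades gives 1.12^30 ≈ 30, i.e. demand grew roughly thirtyfold over the period.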
118
+
119
+ Environmental concerns with electricity generation have led to an increased focus on generation from renewable sources, in particular from wind and solar. While debate can be expected to continue over the environmental impact of different means of electricity production, the final form of the energy delivered, electricity itself, is relatively clean at the point of use.[71]:89
120
+
121
+ Electricity is a very convenient way to transfer energy, and it has been adapted to a huge, and growing, number of uses.[72] The invention of a practical incandescent light bulb in the 1870s led to lighting becoming one of the first publicly available applications of electrical power. Although electrification brought with it its own dangers, replacing the naked flames of gas lighting greatly reduced fire hazards within homes and factories.[73] Public utilities were set up in many cities targeting the burgeoning market for electrical lighting. Since the late 20th century, the trend in the electrical power sector has been towards deregulation.[74]
122
+
123
+ The resistive Joule heating effect employed in filament light bulbs also sees more direct use in electric heating. While this is versatile and controllable, it can be seen as wasteful, since most electrical generation has already required the production of heat at a power station.[75] A number of countries, such as Denmark, have issued legislation restricting or banning the use of resistive electric heating in new buildings.[76] Electricity is however still a highly practical energy source for heating and refrigeration,[77] with air conditioning/heat pumps representing a growing sector for electricity demand for heating and cooling, the effects of which electricity utilities are increasingly obliged to accommodate.[78]
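+ The contrast with heat pumps can be made concrete through the coefficient of performance (COP): resistive heating delivers at most one unit of heat per unit of electricity, while a heat pump moves additional ambient heat indoors. A minimal sketch, with the COP value assumed for illustration:
+ # Sketch: heat delivered per unit of electricity, resistive vs heat pump.
+ def heat_delivered_kwh(electricity_kwh, cop):
+     return electricity_kwh * cop
+ 
+ print(heat_delivered_kwh(10, 1.0))  # resistive element: 10.0 kWh of heat
+ print(heat_delivered_kwh(10, 3.0))  # heat pump with assumed COP of 3: 30.0 kWh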
124
+
125
+ Electricity is used within telecommunications, and indeed the electrical telegraph, demonstrated commercially in 1837 by Cooke and Wheatstone, was one of its earliest applications. With the construction of first transcontinental, and then transatlantic, telegraph systems in the 1860s, electricity had enabled communications in minutes across the globe. Optical fibre and satellite communication have taken a share of the market for communications systems, but electricity can be expected to remain an essential part of the process.
126
+
127
+ The effects of electromagnetism are most visibly employed in the electric motor, which provides a clean and efficient means of motive power. A stationary motor such as a winch is easily provided with a supply of power, but a motor that moves with its application, such as an electric vehicle, is obliged to either carry along a power source such as a battery, or to collect current from a sliding contact such as a pantograph. Electrically powered vehicles are used in public transportation, such as electric buses and trains,[79] and an increasing number of battery-powered electric cars in private ownership.
128
+
129
+ Electronic devices make use of the transistor, perhaps one of the most important inventions of the twentieth century,[80] and a fundamental building block of all modern circuitry. A modern integrated circuit may contain several billion miniaturised transistors in a region only a few centimetres square.[81]
130
+
131
+ A voltage applied to a human body causes an electric current through the tissues, and although the relationship is non-linear, the greater the voltage, the greater the current.[82] The threshold for perception varies with the supply frequency and with the path of the current, but is about 0.1 mA to 1 mA for mains-frequency electricity, though a current as low as a microamp can be detected as an electrovibration effect under certain conditions.[83] If the current is sufficiently high, it will cause muscle contraction, fibrillation of the heart, and tissue burns.[82] The lack of any visible sign that a conductor is electrified makes electricity a particular hazard. The pain caused by an electric shock can be intense, leading electricity at times to be employed as a method of torture. Death caused by an electric shock is referred to as electrocution. Electrocution is still the means of judicial execution in some jurisdictions, though its use has become rarer in recent times.[84]
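+ A crude Ohm's-law estimate illustrates why mains voltages are hazardous; the body resistance used below is an assumed round figure, since, as noted above, real body impedance is non-linear and varies with skin condition, contact area and current path:
+ # Rough illustration only; not a safety calculation.
+ def body_current_ma(volts, body_resistance_ohms=2000):  # resistance assumed
+     return volts / body_resistance_ohms * 1000
+ 
+ print(body_current_ma(50))   # 25.0 mA -- far above the ~1 mA perception level
+ print(body_current_ma(230))  # 115.0 mA -- in the range that risks fibrillation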
132
+
133
+ Electricity is not a human invention, and may be observed in several forms in nature, a prominent manifestation of which is lightning. Many interactions familiar at the macroscopic level, such as touch, friction or chemical bonding, are due to interactions between electric fields on the atomic scale. The Earth's magnetic field is thought to arise from a natural dynamo of circulating currents in the planet's core.[85] Certain crystals, such as quartz, or even sugar, generate a potential difference across their faces when subjected to external pressure.[86] This phenomenon is known as piezoelectricity, from the Greek piezein (πιέζειν), meaning to press, and was discovered in 1880 by Pierre and Jacques Curie. The effect is reciprocal, and when a piezoelectric material is subjected to an electric field, a small change in physical dimensions takes place.[86]
134
+
135
+ Bioelectrogenesis in microbial life is a prominent phenomenon in soils and sediment ecology resulting from anaerobic respiration. The microbial fuel cell mimics this ubiquitous natural phenomenon.
136
+
137
+ Some organisms, such as sharks, are able to detect and respond to changes in electric fields, an ability known as electroreception,[87] while others, termed electrogenic, are able to generate voltages themselves to serve as a predatory or defensive weapon.[3] Members of the order Gymnotiformes, of which the best known example is the electric eel, detect or stun their prey via high voltages generated from modified muscle cells called electrocytes.[3][4] All animals transmit information along their cell membranes with voltage pulses called action potentials, whose functions include communication by the nervous system between neurons and muscles.[88] An electric shock stimulates this system, and causes muscles to contract.[89] Action potentials are also responsible for coordinating activities in certain plants.[88]
138
+
139
+ In 1850, William Gladstone asked the scientist Michael Faraday why electricity was valuable. Faraday answered, “One day sir, you may tax it.”[90]
140
+
141
+ In the 19th and early 20th century, electricity was not part of the everyday life of many people, even in the industrialised Western world. The popular culture of the time accordingly often depicted it as a mysterious, quasi-magical force that could slay the living, revive the dead or otherwise bend the laws of nature.[91] This attitude began with the 1771 experiments of Luigi Galvani in which the legs of dead frogs were shown to twitch on application of animal electricity. "Revitalization" or resuscitation of apparently dead or drowned persons was reported in the medical literature shortly after Galvani's work. These results were known to Mary Shelley when she authored Frankenstein (1818), although she does not name the method of revitalization of the monster. The revitalization of monsters with electricity later became a stock theme in horror films.
142
+
143
+ As the public familiarity with electricity as the lifeblood of the Second Industrial Revolution grew, its wielders were more often cast in a positive light,[92] such as the workers who "finger death at their gloves' end as they piece and repiece the living wires" in Rudyard Kipling's 1907 poem The Sons of Martha.[92] Electrically powered vehicles of every sort featured large in adventure stories such as those of Jules Verne and the Tom Swift books.[92] The masters of electricity, whether fictional or real—including scientists such as Thomas Edison, Charles Steinmetz or Nikola Tesla—were popularly conceived of as having wizard-like powers.[92]
144
+
145
+ With electricity ceasing to be a novelty and becoming a necessity of everyday life in the latter half of the 20th century, it has tended to attract particular attention in popular culture only when it stops flowing,[92] an event that usually signals disaster.[92] The people who keep it flowing, such as the nameless hero of Jimmy Webb’s song "Wichita Lineman" (1968),[92] are still often cast as heroic, wizard-like figures.[92]
146
+
en/1744.html.txt ADDED
@@ -0,0 +1,145 @@
1
+
2
+
3
+ A wind turbine, also referred to as a wind energy converter, is a device that converts the wind's kinetic energy into electrical energy.
4
+
5
+ Wind turbines are manufactured in a wide range of sizes, with either vertical or horizontal axes. The smallest turbines are used for applications such as battery charging for auxiliary power for boats or caravans or to power traffic warning signs. Larger turbines can be used for making contributions to a domestic power supply while selling unused power back to the utility supplier via the electrical grid. Arrays of large turbines, known as wind farms, are becoming an increasingly important source of intermittent renewable energy and are used by many countries as part of a strategy to reduce their reliance on fossil fuels. One assessment claimed that, as of 2009[update], wind had the "lowest relative greenhouse gas emissions, the least water consumption demands and... the most favourable social impacts" compared to photovoltaic, hydro, geothermal, coal and gas.[1]
6
+
7
+ The windwheel of Hero of Alexandria (10 AD – 70 AD) marks one of the first recorded instances of wind powering a machine in history.[2][3] However, the first known practical wind power plants were built in Sistan, an Eastern province of Persia (now Iran), from the 7th century. These "Panemone" were vertical axle windmills, which had long vertical drive shafts with rectangular blades.[4] Made of six to twelve sails covered in reed matting or cloth material, these windmills were used to grind grain or draw up water, and were used in the gristmilling and sugarcane industries.[5]
8
+
9
+ Wind power first appeared in Europe during the Middle Ages. The first historical records of windmill use in England date to the 11th or 12th centuries; there are also reports of German crusaders taking their windmill-making skills to Syria around 1190.[6] By the 14th century, Dutch windmills were in use to drain areas of the Rhine delta. Advanced wind turbines were described by Croatian inventor Fausto Veranzio. In his book Machinae Novae (1595) he described vertical axis wind turbines with curved or V-shaped blades.
10
+
11
+ The first electricity-generating wind turbine was a battery charging machine installed in July 1887 by Scottish academic James Blyth to light his holiday home in Marykirk, Scotland.[7] Some months later, American inventor Charles F. Brush built the first automatically operated wind turbine for electricity production in Cleveland, Ohio.[7] Although Blyth's turbine was considered uneconomical in the United Kingdom,[7] electricity generation by wind turbines was more cost effective in countries with widely scattered populations.[6]
12
+
13
+ In Denmark by 1900, there were about 2500 windmills for mechanical loads such as pumps and mills, producing an estimated combined peak power of about 30 MW. The largest machines were on 24-meter (79 ft) towers with four-bladed 23-meter (75 ft) diameter rotors. By 1908, there were 72 wind-driven electric generators operating in the United States, with capacities from 5 kW to 25 kW. Around the time of World War I, American windmill makers were producing 100,000 farm windmills each year, mostly for water-pumping.[9]
14
+
15
+ By the 1930s, wind generators for electricity were common on farms, mostly in the United States where distribution systems had not yet been installed. In this period, high-tensile steel was cheap, and the generators were placed atop prefabricated open steel lattice towers.
16
+
17
+ A forerunner of modern horizontal-axis wind generators was in service at Yalta, USSR in 1931. This was a 100 kW generator on a 30-meter (98 ft) tower, connected to the local 6.3 kV distribution system. It was reported to have an annual capacity factor of 32 percent, not much different from current wind machines.[10][11]
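+ As a worked illustration of that figure: a 32 percent capacity factor means the 100 kW Yalta machine would have averaged roughly 100 kW × 8,760 h/yr × 0.32 ≈ 280 MWh per year, against the 876 MWh it would have produced running continuously at rated power.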
18
+
19
+ In the autumn of 1941, the first megawatt-class wind turbine was synchronized to a utility grid in Vermont. The Smith–Putnam wind turbine only ran for 1,100 hours before suffering a critical failure. The unit was not repaired, because of a shortage of materials during the war.
20
+
21
+ The first utility grid-connected wind turbine to operate in the UK was built by John Brown & Company in 1951 in the Orkney Islands.[7][12]
22
+
23
+ Despite these diverse developments, advances in fossil fuel systems almost entirely eliminated wind turbine systems larger than supermicro size. In the early 1970s, however, anti-nuclear protests in Denmark spurred artisan mechanics to develop microturbines of 22 kW. Organizing owners into associations and co-operatives led to lobbying of the government and utilities, and provided incentives for larger turbines throughout the 1980s and later. Local activists in Germany, nascent turbine manufacturers in Spain, and large investors in the United States in the early 1990s then lobbied for policies that stimulated the industry in those countries.
24
+
25
+ It has been argued that expanding use of wind power will lead to increasing geopolitical competition over critical materials for wind turbines such as rare earth elements neodymium, praseodymium, and dysprosium. But this perspective has been criticised for failing to recognise that most wind turbines do not use permanent magnets and for underestimating the power of economic incentives for expanded production of these minerals.[13]
26
+
27
+ Wind Power Density (WPD) is a quantitative measure of wind energy available at any location. It is the mean annual power available per square meter of swept area of a turbine, and is calculated for different heights above ground. Calculation of wind power density includes the effect of wind velocity and air density.[14]
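+ A minimal sketch of the calculation (sample speeds and air density are assumed for illustration): because available power goes as the cube of wind speed, WPD is the mean of ½ρv³ over the samples, which is larger than the value obtained from the mean wind speed alone:
+ # Wind power density in W/m^2 from a series of wind-speed samples.
+ def wind_power_density(speeds_m_s, rho=1.225):  # rho: sea-level air density
+     return sum(0.5 * rho * v ** 3 for v in speeds_m_s) / len(speeds_m_s)
+ 
+ print(wind_power_density([4.0, 7.0, 10.0, 6.0]))  # ~248.5 W/m^2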
28
+
29
+ Wind turbines are classified by the wind speed they are designed for, from class I to class III, with A to C referring to the turbulence intensity of the wind.[15]
30
+
31
+ Conservation of mass requires that the amount of air entering and exiting a turbine must be equal. Accordingly, Betz's law gives the maximal achievable extraction of wind power by a wind turbine as 16/27 (59.3%) of the rate at which the kinetic energy of the air arrives at the turbine.[16]
32
+
33
+ The maximum theoretical power output of a wind machine is thus 16/27 times the rate at which kinetic energy of the air arrives at the effective disk area of the machine. If the effective area of the disk is A, and the wind velocity v, the maximum theoretical power output P is:
34
+ P = (16/27) × ½ ρ A v³ = (8/27) ρ A v³
35
+ where ρ is the air density.
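+ For example (rotor size and wind speed assumed for illustration): a rotor of radius 40 m sweeps A ≈ 5,027 m², so in a 12 m/s wind with ρ = 1.225 kg/m³ the Betz-limited output is P = (8/27) × 1.225 × 5,027 × 12³ ≈ 3.2 MW.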
36
+
37
+ Wind-to-rotor efficiency (including rotor blade friction and drag) is among the factors affecting the final price of wind power.[17]
38
+ Further inefficiencies, such as gearbox losses, generator and converter losses, reduce the power delivered by a wind turbine. To protect components from undue wear, extracted power is held constant above the rated operating speed even as theoretical power increases with the cube of wind speed, further reducing theoretical efficiency. In 2001, commercial utility-connected turbines delivered 75% to 80% of the Betz limit of power extractable from the wind, at rated operating speed.[18][19][needs update]
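+ A simplified power curve makes this clipping behaviour concrete; the power coefficient, rotor area and cut-in/cut-out speeds below are assumed values, not manufacturer data:
+ # Idealised power curve: cubic growth up to rated power, then held constant.
+ def power_output_w(v, rho=1.225, area=5027.0, cp=0.45,
+                    rated_w=3.0e6, cut_in=3.0, cut_out=25.0):
+     if v < cut_in or v > cut_out:
+         return 0.0  # turbine parked or feathered
+     return min(cp * 0.5 * rho * area * v ** 3, rated_w)
+ 
+ for v in (2.0, 6.0, 12.0, 20.0):
+     print(v, power_output_w(v))  # 0, ~0.3 MW, ~2.4 MW, 3.0 MW (held at rated)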
39
+
40
+ Efficiency can decrease slightly over time, one of the main reasons being dust and insect carcasses on the blades, which alter the aerodynamic profile and essentially reduce the lift-to-drag ratio of the airfoil. Analysis of 3128 wind turbines older than 10 years in Denmark showed that half of the turbines had no decrease, while the other half saw a production decrease of 1.2% per year.[20] Ice accretion on turbine blades has also been found to greatly reduce the efficiency of wind turbines, which is a common challenge in cold climates where in-cloud icing and freezing rain events occur.[21] Vertical turbine designs have much lower efficiency than standard horizontal designs.[22]
41
+
42
+ In general, more stable and constant weather conditions (most notably wind speed) result in an average of 15% greater efficiency than unstable weather conditions, and can correspond to an increase of up to 7% in wind speed under stable conditions. This is due to a faster recovery wake and greater flow entrainment that occur in conditions of higher atmospheric stability. However, wind turbine wakes have been found to recover faster under unstable atmospheric conditions as opposed to a stable environment.[23]
43
+
44
+ Different materials have been found to have varying effects on the efficiency of wind turbines. In an Ege University experiment, three wind turbines (each with three blades of one-meter diameter) were constructed with blades made of different materials: a glass and glass/carbon epoxy, glass/carbon, and glass/polyester. When tested, the results showed that the materials with higher overall masses had a greater friction moment and thus a lower power coefficient.[24]
45
+
46
+ Wind turbines can rotate about either a horizontal or a vertical axis, the former being both older and more common.[25] They can also include blades, or be bladeless.[26] Vertical designs produce less power and are less common.[27]
47
+
48
+ Large three-bladed horizontal-axis wind turbines (HAWT) with the blades upwind of the tower produce the overwhelming majority of wind power in the world today. These turbines have the main rotor shaft and electrical generator at the top of a tower, and must be pointed into the wind. Small turbines are pointed by a simple wind vane, while large turbines generally use a wind sensor coupled with a yaw system. Most have a gearbox, which turns the slow rotation of the blades into a quicker rotation that is more suitable to drive an electrical generator.[28] Some turbines use a different type of generator suited to slower rotational speed input. These don't need a gearbox and are called direct-drive, meaning the rotor is coupled directly to the generator. While permanent magnet direct-drive generators can be more costly due to the rare earth materials required, these gearless turbines are sometimes preferred over gearbox generators because they "eliminate the gear-speed increaser, which is susceptible to significant accumulated fatigue torque loading, related reliability issues, and maintenance costs."[29] There is also the pseudo direct drive mechanism, which has some advantages over the permanent magnet direct drive mechanism.[30][31]
49
+
50
+ Most horizontal axis turbines have their rotors upwind of the supporting tower. Downwind machines have been built, because they don't need an additional mechanism to keep them in line with the wind. In high winds, their blades can also be allowed to bend, which reduces their swept area and thus their wind resistance. Despite these advantages, upwind designs are preferred, because in a downwind machine the change in loading from the wind as each blade passes behind the supporting tower can damage the turbine.
51
+
52
+ Turbines used in wind farms for commercial production of electric power are usually three-bladed. These have low torque ripple, which contributes to good reliability. The blades are usually colored white for daytime visibility by aircraft and range in length from 20 to 80 meters (66 to 262 ft). The size and height of turbines increase year by year. Offshore wind turbines are built up to 8 MW today and have a blade length up to 80 meters (260 ft). Designs with 10 to 12 MW are in preparation.[32] Typical multi-megawatt turbines have tubular steel towers with a height of 70 m to 120 m, and in extreme cases up to 160 m.
53
+
54
+ Vertical-axis wind turbines (or VAWTs) have the main rotor shaft arranged vertically. One advantage of this arrangement is that the turbine does not need to be pointed into the wind to be effective, which is an advantage on a site where the wind direction is highly variable. It is also an advantage when the turbine is integrated into a building because it is inherently less steerable. Also, the generator and gearbox can be placed near the ground, using a direct drive from the rotor assembly to the ground-based gearbox, improving accessibility for maintenance. However, these designs produce much less energy averaged over time, which is a major drawback.[27][33]
55
+
56
+ The key disadvantages include the relatively low rotational speed with the consequential higher torque and hence higher cost of the drive train, the inherently lower power coefficient, the 360-degree rotation of the aerofoil within the wind flow during each cycle and hence the highly dynamic loading on the blade, the pulsating torque generated by some rotor designs on the drive train, and the difficulty of modelling the wind flow accurately and hence the challenges of analysing and designing the rotor prior to fabricating a prototype.[34]
57
+
58
+ When a turbine is mounted on a rooftop the building generally redirects wind over the roof and this can double the wind speed at the turbine. If the height of a rooftop mounted turbine tower is approximately 50% of the building height it is near the optimum for maximum wind energy and minimum wind turbulence. While wind speeds within the built environment are generally much lower than at exposed rural sites,[35][36] noise may be a concern and an existing structure may not adequately resist the additional stress.
59
+
60
+ Subtypes of the vertical axis design include:
61
+
62
+ "Eggbeater" turbines, or Darrieus turbines, were named after the French inventor, Georges Darrieus.[37] They have good efficiency, but produce large torque ripple and cyclical stress on the tower, which contributes to poor reliability. They also generally require some external power source, or an additional Savonius rotor to start turning, because the starting torque is very low. The torque ripple is reduced by using three or more blades, which results in greater solidity of the rotor. Solidity is measured by blade area divided by the rotor area. Newer Darrieus type turbines are not held up by guy-wires but have an external superstructure connected to the top bearing.[38]
63
+
64
+ The giromill is a subtype of Darrieus turbine with straight, as opposed to curved, blades. The cycloturbine variety has variable pitch to reduce the torque pulsation and is self-starting.[39] The advantages of variable pitch are: high starting torque; a wide, relatively flat torque curve; a higher coefficient of performance; more efficient operation in turbulent winds; and a lower blade speed ratio which lowers blade bending stresses. Straight, V, or curved blades may be used.[40]
65
+
66
+ Savonius turbines are drag-type devices with two (or more) scoops that are used in anemometers, Flettner vents (commonly seen on bus and van roofs), and in some high-reliability low-efficiency power turbines. They are always self-starting if there are at least three scoops.
67
+
68
+ The twisted Savonius is a modified Savonius with long helical scoops to provide smooth torque. It is often used as a rooftop wind turbine and has even been adapted for ships.[41]
69
+
70
+ The parallel turbine is similar to the crossflow fan or centrifugal fan. It uses the ground effect. Vertical axis turbines of this type have been tried for many years: a unit producing 10 kW was built by Israeli wind pioneer Bruce Brill in the 1980s.[42][unreliable source?]
71
+
72
+ Wind turbine design is a careful balance of cost, energy output, and fatigue life.
73
+
74
+ Wind turbines convert wind energy to electrical energy for distribution. Conventional horizontal axis turbines can be divided into three components: the rotor, the nacelle housing the generator and drivetrain, and the supporting tower and foundation.
75
+
76
+ A 1.5 (MW) wind turbine of a type frequently seen in the United States has a tower 80 meters (260 ft) high. The rotor assembly (blades and hub) weighs 22,000 kilograms (48,000 lb). The nacelle, which contains the generator, weighs 52,000 kilograms (115,000 lb). The concrete base for the tower is constructed using 26,000 kilograms (58,000 lb) of reinforcing steel and contains 190 cubic meters (250 cu yd) of concrete. The base is 15 meters (50 ft) in diameter and 2.4 meters (8 ft) thick near the center.[48]
77
+
78
+ Due to data transmission problems, structural health monitoring of wind turbines is usually performed using several accelerometers and strain gages attached to the nacelle to monitor the gearbox and equipment. Currently, digital image correlation and stereophotogrammetry are used to measure dynamics of wind turbine blades. These methods usually measure displacement and strain to identify location of defects. Dynamic characteristics of non-rotating wind turbines have been measured using digital image correlation and photogrammetry.[49] Three dimensional point tracking has also been used to measure rotating dynamics of wind turbines.[50]
79
+
80
+ Wind turbine rotor blades are being made longer to increase efficiency. This requires them to be stiff, strong, light and resistant to fatigue.[51] Materials with these properties are composites such as polyester and epoxy, while glass fiber and carbon fiber have been used for the reinforcing.[52] Construction may use manual layup or injection molding.
81
+
82
+ Companies seek ways to draw greater efficiency from their designs. A predominant way has been to increase blade length and thus rotor diameter. Retrofitting existing turbines with larger blades reduces the work and risks of redesigning the system. The current longest blade is 88.4 m (from LM Wind Power), but by 2021 offshore turbines are expected to reach 10 MW with 100 m blades. Longer blades need to be stiffer to avoid deflection, which requires materials with a higher stiffness-to-weight ratio. Because the blades need to function over 100 million load cycles over a period of 20–25 years, the fatigue of the blade materials is also critical.
83
+
84
+ Materials commonly used in wind turbine blades are described below.
85
+
86
+ The stiffness of composites is determined by the stiffness of the fibers and their volume content. Typically, E-glass fibers are used as the main reinforcement, and the glass/epoxy composites for wind turbine blades contain up to 75% glass by weight. This increases the stiffness, tensile strength and compression strength. A promising composite material is glass fiber with modified compositions like S-glass, R-glass etc. Other glass fibers developed by Owens Corning are ECRGLAS, Advantex and WindStrand.[53]
87
+
88
+ Carbon fiber has more tensile strength, higher stiffness and lower density than glass fiber. These properties make it an ideal candidate for the spar cap, a structural element of a blade which experiences high tensile loading.[52] A 100-m glass fiber blade could weigh up to 50 metric tons, while using carbon fiber in the spar saves 20% to 30% of the weight, about 15 metric tons.[54] However, because carbon fiber is ten times more expensive, glass fiber is still dominant.
89
+
90
+ Instead of making wind turbine blade reinforcements from pure glass or pure carbon, hybrid designs trade weight for cost. For example, for an 8 m blade, a full replacement by carbon fiber would save 80% of weight but increase costs by 150%, while a 30% replacement would save 50% of weight and increase costs by 90%. Hybrid reinforcement materials include E-glass/carbon and E-glass/aramid. The current longest blade by LM Wind Power is made of carbon/glass hybrid composites. More research is needed on the optimal composition of materials.[55]
91
+
92
+ Addition of a small amount (0.5% by weight) of nanoreinforcement (carbon nanotubes or nanoclay) to the polymer matrix of composites, fiber sizing or interlaminar layers can improve the fatigue resistance, shear or compressive strength, and fracture toughness of the composites by 30% to 80%. Research has also shown that incorporating small amounts of carbon nanotubes (CNT) can increase the lifetime up to 1500%.
93
+
94
+ As of 2019[update], a wind turbine may cost around $1 million per megawatt.[56]
95
+
96
+ For the wind turbine blades, while the material cost is much higher for hybrid glass/carbon fiber blades than all-glass fiber blades, labor costs can be lower. Using carbon fiber allows simpler designs that use less raw material. The chief manufacturing process in blade fabrication is the layering of plies. Thinner blades allow reducing the number of layers and so the labor, and in some cases, equate to the cost of labor for glass fiber blades.[57]
97
+
98
+ Wind turbine parts other than the rotor blades (including the rotor hub, gearbox, frame, and tower) are largely made of steel. Smaller turbines (as well as megawatt-scale Enercon turbines) have begun using aluminum alloys for these components to make turbines lighter and more efficient. This trend may grow if fatigue and strength properties can be improved.
99
+ Pre-stressed concrete has been increasingly used for the material of the tower, but still requires much reinforcing steel to meet the strength requirement of the turbine. Additionally, step-up gearboxes are being increasingly replaced with variable speed generators, which require magnetic materials.[51] In particular, this would require a greater supply of the rare earth metal neodymium.
100
+
101
+ Modern turbines use a couple of tons of copper for generators, cables and such.[58] As of 2018[update], global production of wind turbines uses 450,000 tonnes of copper per year.[59]
102
+
103
+ A study of the material consumption trends and requirements for wind energy in Europe found that bigger turbines have a higher consumption of precious metals but a lower material input per kW generated. The current material consumption and stock was compared to input materials for various onshore system sizes. In all EU countries, the estimates for 2020 doubled the values consumed in 2009. These countries would need to expand their resources to meet the estimated demand for 2020. For example, the EU currently has 3% of the world supply of fluorspar and will require 14% by 2020. Globally, the main exporting countries are South Africa, Mexico and China. The situation is similar for other critical and valuable materials required for energy systems, such as magnesium, silver and indium. The levels of recycling of these materials are very low, and a focus on recycling could alleviate supply pressure. Because most of these valuable materials are also used in other emerging technologies, like light emitting diodes (LEDs), photovoltaics (PV) and liquid crystal displays (LCDs), their demand is expected to grow.[60]
104
+
105
+ A study by the United States Geological Survey estimated the resources required to fulfill the US commitment to supplying 20% of its electricity from wind power by 2030. It did not consider requirements for small turbines or offshore turbines, because those were not common in 2008 when the study was done. Consumption of common materials such as cast iron, steel and concrete would increase by 2%–3% compared to 2008. Between 110,000 and 115,000 metric tons of fiber glass would be required per year, a 14% increase. Rare metal use would not increase much compared to available supply; however, rare metals that are also used in other technologies with growing global demand, such as batteries, need to be taken into account. Land required would be 50,000 square kilometers onshore and 11,000 offshore. This would not be a problem in the US due to its vast area, and because the same land can be used for farming. A greater challenge would be the variability and transmission to areas of high demand.[61]
106
+
107
+ Permanent magnets for wind turbine generators contain rare metals such as neodymium (Nd), praseodymium (Pr), terbium (Tb) and dysprosium (Dy). Systems that use magnetic direct drive turbines require greater amounts of rare metals. Therefore, an increase in wind turbine manufacture would increase the demand for these resources. By 2035, the demand for Nd is estimated to increase by 4,000 to 18,000 tons and for Dy by 200 to 1200 tons. These values are a quarter to half of current production. However, these estimates are very uncertain because technologies are developing rapidly.[62]
108
+
109
+ Reliance on rare earth minerals for components has brought risks of expense and price volatility, as China has been the main producer of rare earth minerals (96% in 2009) and was reducing its export quotas.[63] However, in recent years other producers have increased production and China has increased export quotas, leading to a higher supply and lower cost, and a greater viability of large scale use of variable-speed generators.[64]
110
+
111
+ Glass fiber is the most common material for reinforcement. Its demand has grown due to growth in construction, transportation and wind turbines. Its global market might reach US$17.4 billion by 2024, compared to US$8.5 billion in 2014. In 2014, Asia Pacific produced more than 45% of the market; now China is the largest producer. The industry receives subsidies from the Chinese government allowing it to export cheaper to the US and Europe. However, price wars have led to anti-dumping measures such as tariffs on Chinese glass fiber.[65]
112
+
113
+ Interest in recycling blades varies in different markets and depends on the waste legislation and local economics. A challenge in recycling blades is related to the composite material, which is made of a thermosetting matrix and glass fibers or a combination of glass and carbon fibers. A thermosetting matrix cannot be remolded to form new composites, so the options are either to send the blade to landfill, to reuse the blade and the composite material elements found in the blade, or to transform the composite material into a new source of material. In Germany, wind turbine blades are commercially recycled as part of an alternative fuel mix for a cement factory. In the USA, the town of Casper, Wyoming has buried 1,000 non-recyclable blades in its landfill site, earning $675,000 for the town; it pointed out that wind farm waste is less toxic than other garbage. Wind turbine blades represent a “vanishingly small fraction” of overall waste in the US, according to the American Wind Energy Association.[66]
114
+
115
+ A few localities have exploited the attention-getting nature of wind turbines by placing them on public display, either with visitor centers around their bases, or with viewing areas farther away.[67] The wind turbines are generally of conventional horizontal-axis, three-bladed design, and generate power to feed electrical grids, but they also serve the unconventional roles of technology demonstration, public relations, and education.
116
+
117
+ Small wind turbines may be used for a variety of applications including on- or off-grid residences, telecom towers, offshore platforms, rural schools and clinics, remote monitoring and other purposes that require energy where there is no electric grid, or where the grid is unstable. Small wind turbines may be as small as a fifty-watt generator for boat or caravan use. Hybrid solar and wind powered units are increasingly being used for traffic signage, particularly in rural locations, as they avoid the need to lay long cables from the nearest mains connection point.[68] The U.S. Department of Energy's National Renewable Energy Laboratory (NREL) defines small wind turbines as those smaller than or equal to 100 kilowatts.[69] Small units often have direct drive generators, direct current output, aeroelastic blades, lifetime bearings and use a vane to point into the wind.
118
+
119
+ Larger, more costly turbines generally have geared power trains, alternating current output, and flaps, and are actively pointed into the wind. Direct drive generators and aeroelastic blades for large wind turbines are being researched.
120
+
121
+ On most horizontal wind turbine farms, a spacing of about 6–10 times the rotor diameter is often upheld. However, for large wind farms distances of about 15 rotor diameters should be more economical, taking into account typical wind turbine and land costs. This conclusion has been reached by research[70] conducted by Charles Meneveau of Johns Hopkins University[71] and Johan Meyers of Leuven University in Belgium, based on computer simulations[72] that take into account the detailed interactions among wind turbines (wakes) as well as with the entire turbulent atmospheric boundary layer.
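+ In concrete terms (rotor size assumed for illustration): for machines with 90 m rotors, a 6–10 diameter spacing puts turbines 540–900 m apart, while the 15-diameter figure implies roughly 1.35 km.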
122
+
123
+ Recent research by John Dabiri of Caltech suggests that vertical wind turbines may be placed much more closely together so long as an alternating pattern of rotation is created allowing blades of neighbouring turbines to move in the same direction as they approach one another.[73]
124
+
125
+ Wind turbines need regular maintenance to stay reliable and available. In the best case turbines are available to generate energy 98% of the time.[74][75]
126
+
127
+ Modern turbines usually have a small onboard crane for hoisting maintenance tools and minor components. However, large, heavy components like generator, gearbox, blades, and so on are rarely replaced, and a heavy lift external crane is needed in those cases. If the turbine has a difficult access road, a containerized crane can be lifted up by the internal crane to provide heavier lifting.[76]
128
+
129
+ Installation of new wind turbines can be controversial. An alternative is repowering, where existing wind turbines are replaced with bigger, more powerful ones, sometimes in smaller numbers while keeping or increasing capacity.
130
+
131
+ In some early cases, older turbines were not required to be removed when they reached the end of their life. Some still stand, waiting to be recycled or repowered.[77][78]
132
+
133
+ A demolition industry is developing to recycle offshore turbines, at a cost of DKK 2–4 million per megawatt (MW), to be guaranteed by the owner.[79]
134
+
135
+ Wind turbines produce electricity at between two and six cents per kilowatt hour, which is one of the lowest-priced renewable energy sources.[80][81] As technology needed for wind turbines continued to improve, the prices decreased as well. In addition, there is currently no competitive market for wind energy, because wind is a freely available natural resource, most of which is untapped.[80] The main cost of small wind turbines is the purchase and installation process, which averages between $48,000 and $65,000 per installation. The energy harvested from the turbine will offset the installation cost, as well as provide virtually free energy for years.[82]
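+ A purely illustrative payback sketch under assumed numbers: a $52,000 small-turbine installation producing 20,000 kWh per year offsets about $2,400 annually at a retail rate of $0.12/kWh, implying a payback period on the order of twenty years before any subsidies or incentives.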
136
+
137
+ Wind turbines provide a clean energy source, use little water,[1] and emit no greenhouse gases or waste products during operation. Over 1,500 tons of carbon dioxide per year can be eliminated by using a one-megawatt turbine instead of one megawatt of energy from a fossil fuel.[83]
138
+
139
+ Wind turbines can be very large, reaching over 140 m (460 ft) tall and with blades 55 m (180 ft) long,[84] and people have often complained about their visual impact.
140
+
141
+ The environmental impact of wind power includes effects on wildlife, but these can be mitigated if proper monitoring and mitigation strategies are implemented.[85] Thousands of birds, including rare species, have been killed by the blades of wind turbines,[86] though wind turbines contribute relatively insignificantly to anthropogenic avian mortality. Wind farms and nuclear power stations are responsible for between 0.3 and 0.4 bird deaths per gigawatt-hour (GWh) of electricity while fossil fueled power stations are responsible for about 5.2 fatalities per GWh. In 2009, for every bird killed by a wind turbine in the US, nearly 500,000 were killed by cats and another 500,000 by buildings.[87] In comparison, conventional coal fired generators contribute significantly more to bird mortality, by incineration when caught in updrafts of smoke stacks and by poisoning with emissions byproducts (including particulates and heavy metals downwind of flue gases). Further, marine life is affected by water intakes of steam turbine cooling towers (heat exchangers) for nuclear and fossil fuel generators, by coal dust deposits in marine ecosystems (e.g. damaging Australia's Great Barrier Reef) and by water acidification from combustion byproducts.
142
+
143
+ Energy harnessed by wind turbines is intermittent, and is not a "dispatchable" source of power; its availability is based on whether the wind is blowing, not whether electricity is needed. Turbines can be placed on ridges or bluffs to maximize their access to wind, but this also limits the locations where they can be placed.[80] In this way, wind energy is not a particularly reliable source of energy. However, it can form part of the energy mix, which also includes power from other sources. Notably, the relative available output from wind and solar sources is often inversely proportional (balancing)[citation needed]. Technology is also being developed to store excess energy, which can then make up for any deficits in supplies.
144
+
145
+ See also List of most powerful wind turbines
en/1745.html.txt ADDED
@@ -0,0 +1,89 @@
1
+
2
+
3
+
4
+
5
+ A fossil fuel is a fuel formed by natural processes, such as anaerobic decomposition of buried dead organisms, containing organic molecules originating in ancient photosynthesis[1] that release energy in combustion.[2]
6
+ Such organisms and their resulting fossil fuels typically have an age of millions of years, and sometimes more than 650 million years.[3]
7
+ Fossil fuels contain high percentages of carbon and include petroleum, coal, and natural gas.[4] Peat is also sometimes considered a fossil fuel.[5]
8
+ Commonly used derivatives of fossil fuels include kerosene and propane.
9
+ Fossil fuels range from volatile materials with low carbon-to-hydrogen ratios (like methane), to liquids (like petroleum), to nonvolatile materials composed of almost pure carbon, like anthracite coal.
10
+ Methane can be found in hydrocarbon fields alone, associated with oil, or in the form of methane clathrates.
11
+
12
+ As of 2018, the world's main primary energy sources consisted of petroleum (34%), coal (27%), and natural gas (24%), amounting to an 85% share for fossil fuels in primary energy consumption in the world.
13
+ Non-fossil sources included nuclear (4.4%), hydroelectric (6.8%), and other renewables (4.0%, including geothermal, solar, tidal, wind, wood, and waste).[6]
14
+ The share of renewables (including traditional biomass) in the world's total final energy consumption was 18% in 2018.[7] Compared with 2017, world energy consumption grew at a rate of 2.9%, almost double its 10-year average of 1.5% per year, and the fastest since 2010.[8]
15
+
16
+ Although fossil fuels are continually formed by natural processes, they are generally classified as non-renewable resources because they take millions of years to form and known viable reserves are being depleted much faster than new ones are generated.[9][10]
17
+
18
+ Most air pollution deaths are due to fossil fuel combustion products; the damage is estimated to cost over 3% of global GDP,[11] and fossil fuel phase-out would save 3.6 million lives each year.[12]
19
+
20
+ The use of fossil fuels raises serious environmental concerns.
21
+ The burning of fossil fuels produces around 35 billion tonnes (35 gigatonnes) of carbon dioxide (CO2) per year.[13]
22
+ It is estimated that natural processes can only absorb a small part of that amount, so there is a net increase of many billion tonnes of atmospheric carbon dioxide per year.[14]
23
+ CO2 is a greenhouse gas that increases radiative forcing and contributes to global warming and ocean acidification.
24
+ A global movement towards the generation of low-carbon renewable energy is underway to help reduce global greenhouse-gas emissions.
25
+
26
+ The theory that fossil fuels formed from the fossilized remains of dead plants by exposure to heat and pressure in the Earth's crust over millions of years was first introduced by Andreas Libavius "in his 1597 Alchemia [Alchymia]" and later by Mikhail Lomonosov "as early as 1757 and certainly by 1763".[16] The first use of the term "fossil fuel" occurs in the work of the German chemist Caspar Neumann, in English translation in 1759.[17] The Oxford English Dictionary notes that in the phrase "fossil fuel" the adjective "fossil" means "[o]btained by digging; found buried in the earth", which dates to at least 1652,[18] before the English noun "fossil" came to refer primarily to long-dead organisms in the early 18th century.[19]
27
+
28
+ Aquatic phytoplankton and zooplankton that died and sedimented in large quantities under anoxic conditions millions of years ago began forming petroleum and natural gas as a result of anaerobic decomposition. Over geological time this organic matter, mixed with mud, became buried under further heavy layers of inorganic sediment. The resulting high temperature and pressure caused the organic matter to chemically alter, first into a waxy material known as kerogen, which is found in oil shales, and then with more heat into liquid and gaseous hydrocarbons in a process known as catagenesis. Despite these heat-driven transformations (which increase the energy density compared to typical organic matter by removal of oxygen atoms),[2] the energy released in combustion is still photosynthetic in origin.[1]
29
+
30
+ Terrestrial plants, on the other hand, tended to form coal and methane. Many of the coal fields date to the Carboniferous period of Earth's history. Terrestrial plants also form type III kerogen, a source of natural gas.
31
+
32
+ There is a wide range of organic compounds in any given fuel. The specific mixture of hydrocarbons gives a fuel its characteristic properties, such as density, viscosity, boiling point, melting point, etc. Some fuels like natural gas, for instance, contain only very low boiling, gaseous components. Others such as gasoline or diesel contain much higher boiling components.
33
+
34
+ Fossil fuels are of great importance because they can be burned (oxidized to carbon dioxide and water), producing significant amounts of energy per unit mass. The use of coal as a fuel predates recorded history. Coal was used to run furnaces for the smelting of metal ore. While semi-solid hydrocarbons from seeps were also burned in ancient times,[20] they were mostly used for waterproofing and embalming.[21]
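+ The energy release comes from oxidising the fuel's carbon and hydrogen; for the simplest case, methane, the reaction is CH4 + 2 O2 → CO2 + 2 H2O + heat. Heavier fuels follow the same pattern with more carbon per molecule, which is why coal emits more CO2 per unit of energy released than natural gas.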
35
+
36
+ Commercial exploitation of petroleum began in the 19th century, largely to replace oils from animal sources (notably whale oil) for use in oil lamps.[22]
37
+
38
+ Natural gas, once flared-off as an unneeded byproduct of petroleum production, is now considered a very valuable resource.[23] Natural gas deposits are also the main source of helium.
39
+
40
+ Heavy crude oil, which is much more viscous than conventional crude oil, and oil sands, where bitumen is found mixed with sand and clay, began to become more important as sources of fossil fuel in the early 2000s.[24] Oil shale and similar materials are sedimentary rocks containing kerogen, a complex mixture of high-molecular weight organic compounds, which yield synthetic crude oil when heated (pyrolyzed). With additional processing, they can be employed in lieu of other established fossil fuels. More recently, there has been disinvestment from exploitation of such resources due to their high carbon cost relative to more easily processed reserves.[25]
41
+
42
+ Prior to the latter half of the 18th century, windmills and watermills provided the energy needed for industry such as milling flour, sawing wood or pumping water, while burning wood or peat provided domestic heat. The wide-scale use of fossil fuels, coal at first and petroleum later, in steam engines enabled the Industrial Revolution. At the same time, gas lights using natural gas or coal gas were coming into wide use. The invention of the internal combustion engine and its use in automobiles and trucks greatly increased the demand for gasoline and diesel oil, both made from fossil fuels. Other forms of transportation, railways and aircraft, also require fossil fuels. The other major use for fossil fuels is in generating electricity and as feedstock for the petrochemical industry. Tar, a leftover of petroleum extraction, is used in construction of roads.
43
+
44
+ Levels of primary energy sources are the reserves in the ground; flows are the production of fossil fuels from these reserves. The most important primary energy sources are carbon-based fossil energy sources.
45
+
46
+ P. E. Hodgson, a senior research fellow emeritus in physics at Corpus Christi College, Oxford, expected world energy use to double every fourteen years, with the need increasing faster still, and he insisted in 2008 that world oil production, a main resource of fossil fuel, was expected to peak in ten years and thereafter fall.[26]
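+ As a quick check of what that claim implies: doubling every fourteen years corresponds to a compound growth rate of 2^(1/14) − 1 ≈ 5.1% per year.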
47
+
48
+ The principle of supply and demand holds that as hydrocarbon supplies diminish, prices will rise. Therefore, higher prices will lead to increased alternative, renewable energy supplies as previously uneconomic sources become sufficiently economical to exploit. Artificial gasolines and other renewable energy sources currently require more expensive production and processing technologies than conventional petroleum reserves, but may become economically viable in the near future.
49
+ Different alternative sources of energy include nuclear, hydroelectric, solar, wind, and geothermal.
50
+
51
+ One of the more promising energy alternatives is the use of inedible feed stocks and biomass for carbon dioxide capture as well as biofuel production. While these processes are not without problems, they are currently in practice around the world. Biodiesels are being produced by several companies and are the subject of research at several universities. Processes for converting renewable lipids into usable fuels include hydrotreating and decarboxylation.
52
+
53
+ The United States holds less than 5% of the world's population, but due to large houses and private cars, uses more than 25% of the world's supply of fossil fuels.[27] As the largest source of U.S. greenhouse gas emissions, CO2 from fossil fuel combustion accounted for 80 percent of weighted emissions in 1998.[28] Combustion of fossil fuels also produces other air pollutants, such as nitrogen oxides, sulfur dioxide, volatile organic compounds and heavy metals.
54
+
55
+ According to Environment Canada:
56
+
57
+ "The electricity sector is unique among industrial sectors in its very large contribution to emissions associated with nearly all air issues. Electricity generation produces a large share of Canadian nitrogen oxides and sulphur dioxide emissions, which contribute to smog and acid rain and the formation of fine particulate matter. It is the largest uncontrolled industrial source of mercury emissions in Canada. Fossil fuel-fired electric power plants also emit carbon dioxide, which may contribute to climate change. In addition, the sector has significant impacts on water and habitat and species. In particular, hydropower dams and transmission lines have significant effects on water and biodiversity."[29]
58
+
59
+ According to U.S. scientist Jerry Mahlman, who crafted the IPCC language used to define levels of scientific certainty, the new report will blame fossil fuels for global warming with "virtual certainty," meaning 99% sure. That's a significant jump from "likely," or 66% sure, in the group's last report in 2001. More than 1,600 pages of research went into the new assessment.[30]
60
+
61
+ Combustion of fossil fuels generates sulfuric and nitric acids, which fall to Earth as acid rain, impacting both natural areas and the built environment. Monuments and sculptures made from marble and limestone are particularly vulnerable, as the acids dissolve calcium carbonate.
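+ For limestone and marble the underlying chemistry is the acid–carbonate reaction; with sulfuric acid, for example, CaCO3 + H2SO4 → CaSO4 + H2O + CO2, converting the stone's calcium carbonate into softer, more soluble calcium sulfate.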
62
+
63
+ Fossil fuels also contain radioactive materials, mainly uranium and thorium, which are released into the atmosphere. In 2000, about 12,000 tonnes of thorium and 5,000 tonnes of uranium were released worldwide from burning coal.[31] It is estimated that during 1982, US coal burning released 155 times as much radioactivity into the atmosphere as the Three Mile Island accident.[32]
64
+
65
+ Burning coal also generates large amounts of bottom ash and fly ash. These materials are used in a wide variety of applications that utilize, for example, about 40% of US production.[33]
66
+
67
+ Harvesting, processing, and distributing fossil fuels can also create environmental concerns. Coal mining methods, particularly mountaintop removal and strip mining, have negative environmental impacts, and offshore oil drilling poses a hazard to aquatic organisms. Fossil fuel wells can contribute to methane release via fugitive gas emissions. Oil refineries also have negative environmental impacts, including air and water pollution. Transportation of coal requires the use of diesel-powered locomotives, while crude oil is typically transported by tanker ships, requiring the combustion of additional fossil fuels.
68
+
69
+ Environmental regulation uses a variety of approaches to limit these emissions, such as command-and-control (which mandates the amount of pollution or the technology used), economic incentives, or voluntary programs.
70
+
71
+ An example of such regulation in the USA is the EPA's implementation of policies to reduce airborne mercury emissions: under regulations issued in 2005, coal-fired power plants will need to reduce their emissions by 70 percent by 2018.[34]
72
+
73
+ In economic terms, pollution from fossil fuels is regarded as a negative externality. Taxation is considered as one way to make societal costs explicit, in order to 'internalize' the cost of pollution. This aims to make fossil fuels more expensive, thereby reducing their use and the amount of associated pollution, along with raising the funds necessary to counteract these effects.[citation needed]
74
+
75
+ According to Rodman D. Griffin, "The burning of coal and oil have saved inestimable amounts of time and labor while substantially raising living standards around the world".[35] Although the use of fossil fuels may seem beneficial to our lives, it contributes to global warming and is considered dangerous for the future.[35]
76
+
77
+ Moreover, this environmental pollution impacts humans because particulates and other air pollution from fossil fuel combustion cause illness and death when inhaled by people. These health effects include premature death, acute respiratory illness, aggravated asthma, chronic bronchitis and decreased lung function. The poor, undernourished, very young and very old, and people with preexisting respiratory disease and other ill health, are more at risk.[36]
78
+
79
+ In 2014, the global energy industry revenue was about US$8 trillion,[37] with about 84% fossil fuel, 4% nuclear, and 12% renewable (including hydroelectric).[38]
80
+
81
+ In 2014, there were 1,469 oil and gas firms listed on stock exchanges around the world, with a combined market capitalization of US$4.65 trillion.[39] In 2019, Saudi Aramco was listed and it touched a US$2 trillion valuation on its second day of trading,[40] after the world's largest initial public offering.[41]
82
+
83
+ Air pollution from fossil fuels in 2018 has been estimated to cost US$2.9 trillion, or 3.3% of global GDP.[11]
84
+
85
+ The International Energy Agency estimated 2017 global government fossil fuel subsidies to have been $300 billion.[42]
86
+
87
+ A 2015 report studied 20 fossil fuel companies and found that, while highly profitable, the hidden economic cost to society was also large.[43][44] The report spans the period 2008–2012 and notes that: "For all companies and all years, the economic cost to society of their CO2 emissions was greater than their after‐tax profit, with the single exception of ExxonMobil in 2008."[43]:4 Pure coal companies fare even worse: "the economic cost to society exceeds total revenue in all years, with this cost varying between nearly $2 and nearly $9 per $1 of revenue."[43]:5 In this case, total revenue includes "employment, taxes, supply purchases, and indirect employment."[43]:4
88
+
89
+ Fossil fuel prices are generally below their actual costs, or their "efficient prices," when economic externalities, such as the costs of air pollution and climate damage, are taken into account. Fossil fuels were subsidized by $4.7 trillion in 2015, equivalent to 6.3% of that year's global GDP, and subsidies were estimated to have grown to $5.2 trillion in 2017, equivalent to 6.5% of global GDP. The five largest subsidizers in 2015 were China with $1.4 trillion in fossil fuel subsidies, the United States with $649 billion, Russia with $551 billion, the European Union with $289 billion, and India with $209 billion. Had there been no subsidies for fossil fuels, global carbon emissions would have been an estimated 28% lower in 2015, air-pollution-related deaths 46% lower, and government revenue $2.8 trillion (3.8% of GDP) higher.[45]
en/1746.html.txt ADDED
@@ -0,0 +1,89 @@
1
+
2
+
3
+
4
+
5
+ A fossil fuel is a fuel formed by natural processes, such as anaerobic decomposition of buried dead organisms, containing organic molecules originating in ancient photosynthesis[1] that release energy in combustion.[2]
6
+ Such organisms and their resulting fossil fuels typically have an age of millions of years, and sometimes more than 650 million years.[3]
7
+ Fossil fuels contain high percentages of carbon and include petroleum, coal, and natural gas.[4] Peat is also sometimes considered a fossil fuel.[5]
8
+ Commonly used derivatives of fossil fuels include kerosene and propane.
9
+ Fossil fuels range from volatile materials with low carbon-to-hydrogen ratios (like methane), to liquids (like petroleum), to nonvolatile materials composed of almost pure carbon, like anthracite coal.
10
+ Methane can be found in hydrocarbon fields alone, associated with oil, or in the form of methane clathrates.
11
+
12
+ As of 2018, the world's main primary energy sources consisted of petroleum (34%), coal (27%), and natural gas (24%), giving fossil fuels an 85% share of the world's primary energy consumption.
13
+ Non-fossil sources included nuclear (4.4%), hydroelectric (6.8%), and other renewables (4.0%, including geothermal, solar, tidal, wind, wood, and waste).[6]
14
+ The share of renewables (including traditional biomass) in the world's total final energy consumption was 18% in 2018.[7] Compared with 2017, world energy consumption grew at a rate of 2.9%, almost double its 10-year average of 1.5% per year, and the fastest since 2010.[8]
15
+
16
+ Although fossil fuels are continually formed by natural processes, they are generally classified as non-renewable resources because they take millions of years to form and known viable reserves are being depleted much faster than new ones are generated.[9][10]
17
+
18
+ Most air pollution deaths are due to fossil fuel combustion products, which are estimated to cost over 3% of global GDP;[11] a fossil fuel phase-out would save 3.6 million lives each year.[12]
19
+
20
+ The use of fossil fuels raises serious environmental concerns.
21
+ The burning of fossil fuels produces around 35 billion tonnes (35 gigatonnes) of carbon dioxide (CO2) per year.[13]
22
+ It is estimated that natural processes can only absorb a small part of that amount, so there is a net increase of many billion tonnes of atmospheric carbon dioxide per year.[14]
23
+ CO2 is a greenhouse gas that increases radiative forcing and contributes to global warming and ocean acidification.
24
+ A global movement towards the generation of low-carbon renewable energy is underway to help reduce global greenhouse-gas emissions.
25
+
26
+ The theory that fossil fuels formed from the fossilized remains of dead plants by exposure to heat and pressure in the Earth's crust over millions of years was first introduced by Andreas Libavius "in his 1597 Alchemia [Alchymia]" and later by Mikhail Lomonosov "as early as 1757 and certainly by 1763".[16] The first use of the term "fossil fuel" occurs in the work of the German chemist Caspar Neumann, in English translation in 1759.[17] The Oxford English Dictionary notes that in the phrase "fossil fuel" the adjective "fossil" means "[o]btained by digging; found buried in the earth", which dates to at least 1652,[18] before the English noun "fossil" came to refer primarily to long-dead organisms in the early 18th century.[19]
27
+
28
+ Aquatic phytoplankton and zooplankton that died and sedimented in large quantities under anoxic conditions millions of years ago began forming petroleum and natural gas as a result of anaerobic decomposition. Over geological time this organic matter, mixed with mud, became buried under further heavy layers of inorganic sediment. The resulting high temperature and pressure caused the organic matter to chemically alter, first into a waxy material known as kerogen, which is found in oil shales, and then with more heat into liquid and gaseous hydrocarbons in a process known as catagenesis. Despite these heat-driven transformations (which increase the energy density compared to typical organic matter by removal of oxygen atoms),[2] the energy released in combustion is still photosynthetic in origin.[1]
29
+
30
+ Terrestrial plants, on the other hand, tended to form coal and methane. Many of the coal fields date to the Carboniferous period of Earth's history. Terrestrial plants also form type III kerogen, a source of natural gas.
31
+
32
+ There is a wide range of organic compounds in any given fuel. The specific mixture of hydrocarbons gives a fuel its characteristic properties, such as density, viscosity, boiling point, and melting point. Some fuels, like natural gas, contain only very low-boiling gaseous components; others, such as gasoline or diesel, contain much higher-boiling components.
33
+
34
+ Fossil fuels are of great importance because they can be burned (oxidized to carbon dioxide and water), producing significant amounts of energy per unit mass. The use of coal as a fuel predates recorded history. Coal was used to run furnaces for the smelting of metal ore. While semi-solid hydrocarbons from seeps were also burned in ancient times,[20] they were mostly used for waterproofing and embalming.[21]
35
+
36
+ Commercial exploitation of petroleum began in the 19th century, largely to replace oils from animal sources (notably whale oil) for use in oil lamps.[22]
37
+
38
+ Natural gas, once flared-off as an unneeded byproduct of petroleum production, is now considered a very valuable resource.[23] Natural gas deposits are also the main source of helium.
39
+
40
+ Heavy crude oil, which is much more viscous than conventional crude oil, and oil sands, where bitumen is found mixed with sand and clay, began to become more important as sources of fossil fuel in the early 2000s.[24] Oil shale and similar materials are sedimentary rocks containing kerogen, a complex mixture of high-molecular weight organic compounds, which yield synthetic crude oil when heated (pyrolyzed). With additional processing, they can be employed in lieu of other established fossil fuels. More recently, there has been disinvestment from exploitation of such resources due to their high carbon cost relative to more easily processed reserves.[25]
41
+
42
+ Prior to the latter half of the 18th century, windmills and watermills provided the energy needed for industry such as milling flour, sawing wood or pumping water, while burning wood or peat provided domestic heat. The wide-scale use of fossil fuels, coal at first and petroleum later, in steam engines enabled the Industrial Revolution. At the same time, gas lights using natural gas or coal gas were coming into wide use. The invention of the internal combustion engine and its use in automobiles and trucks greatly increased the demand for gasoline and diesel oil, both made from fossil fuels. Other forms of transportation, railways and aircraft, also require fossil fuels. The other major use for fossil fuels is in generating electricity and as feedstock for the petrochemical industry. Tar, a leftover of petroleum extraction, is used in construction of roads.
43
+
44
+ Levels of primary energy sources are the reserves in the ground; flows are the production of fossil fuels from these reserves. The most important primary energy sources are carbon-based fossil energy sources.
45
+
46
+ P. E. Hodgson, a senior research fellow emeritus in physics at Corpus Christi College, Oxford, expected world energy use to double every fourteen years, and the need to increase faster still; he maintained in 2008 that world oil production, a main fossil fuel resource, was expected to peak within ten years and thereafter fall.[26]
47
+
48
+ The principle of supply and demand holds that as hydrocarbon supplies diminish, prices will rise. Therefore, higher prices will lead to increased alternative, renewable energy supplies as previously uneconomic sources become sufficiently economical to exploit. Artificial gasolines and other renewable energy sources currently require more expensive production and processing technologies than conventional petroleum reserves, but may become economically viable in the near future.
49
+ Different alternative sources of energy include nuclear, hydroelectric, solar, wind, and geothermal.
50
+
51
+ One of the more promising energy alternatives is the use of inedible feedstocks and biomass for carbon dioxide capture as well as biofuel production. While these processes are not without problems, they are currently in practice around the world. Biodiesels are being produced by several companies and are the subject of research at several universities. Processes for converting renewable lipids into usable fuels include hydrotreating and decarboxylation.
52
+
53
+ The United States has less than 5% of the world's population but, owing partly to large houses and private cars, uses more than 25% of the world's supply of fossil fuels.[27] As the largest source of U.S. greenhouse gas emissions, CO2 from fossil fuel combustion accounted for 80 percent of weighted emissions in 1998.[28] Combustion of fossil fuels also produces other air pollutants, such as nitrogen oxides, sulfur dioxide, volatile organic compounds and heavy metals.
54
+
55
+ According to Environment Canada:
56
+
57
+ "The electricity sector is unique among industrial sectors in its very large contribution to emissions associated with nearly all air issues. Electricity generation produces a large share of Canadian nitrogen oxides and sulphur dioxide emissions, which contribute to smog and acid rain and the formation of fine particulate matter. It is the largest uncontrolled industrial source of mercury emissions in Canada. Fossil fuel-fired electric power plants also emit carbon dioxide, which may contribute to climate change. In addition, the sector has significant impacts on water and habitat and species. In particular, hydropower dams and transmission lines have significant effects on water and biodiversity."[29]
58
+
59
+ According to U.S. scientist Jerry Mahlman, who crafted the IPCC language used to define levels of scientific certainty, the then-forthcoming IPCC report would blame fossil fuels for global warming with "virtual certainty," meaning 99% sure, a significant jump from "likely," or 66% sure, in the group's previous report in 2001. More than 1,600 pages of research went into the new assessment.[30]
60
+
61
+ Combustion of fossil fuels generates sulfuric and nitric acids, which fall to Earth as acid rain, impacting both natural areas and the built environment. Monuments and sculptures made from marble and limestone are particularly vulnerable, as the acids dissolve calcium carbonate.
62
+
63
+ Fossil fuels also contain radioactive materials, mainly uranium and thorium, which are released into the atmosphere. In 2000, about 12,000 tonnes of thorium and 5,000 tonnes of uranium were released worldwide from burning coal.[31] It is estimated that during 1982, US coal burning released 155 times as much radioactivity into the atmosphere as the Three Mile Island accident.[32]
64
+
65
+ Burning coal also generates large amounts of bottom ash and fly ash. These materials are used in a wide variety of applications that consume, for example, about 40% of US production.[33]
66
+
67
+ Harvesting, processing, and distributing fossil fuels can also create environmental concerns. Coal mining methods, particularly mountaintop removal and strip mining, have negative environmental impacts, and offshore oil drilling poses a hazard to aquatic organisms. Fossil fuel wells can contribute to methane release via fugitive gas emissions. Oil refineries also have negative environmental impacts, including air and water pollution. Transportation of coal requires the use of diesel-powered locomotives, while crude oil is typically transported by tanker ships, requiring the combustion of additional fossil fuels.
68
+
69
+ Environmental regulation uses a variety of approaches to limit these emissions, such as command-and-control (which mandates the amount of pollution or the technology used), economic incentives, or voluntary programs.
70
+
71
+ An example of such regulation in the USA: "EPA is implementing policies to reduce airborne mercury emissions. Under regulations issued in 2005, coal-fired power plants will need to reduce their emissions by 70 percent by 2018."[34]
72
+
73
+ In economic terms, pollution from fossil fuels is regarded as a negative externality. Taxation is considered one way to make societal costs explicit, in order to 'internalize' the cost of pollution. The aim is to make fossil fuels more expensive, thereby reducing their use and the amount of associated pollution, while raising the funds necessary to counteract these effects.[citation needed]
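+
+ A minimal sketch of this idea in Python (all prices and damage costs below are hypothetical, chosen only to illustrate how a Pigouvian tax folds the societal cost into the market price):
+
+ # Illustrative Pigouvian tax: all numbers are hypothetical.
+ market_price = 60.0   # private cost per barrel paid by the buyer ($)
+ external_cost = 25.0  # societal damage per barrel not in the price ($)
+ tax = external_cost   # tax set equal to the marginal external damage
+ effective_price = market_price + tax
+ print(f"price rises from ${market_price:.0f} to ${effective_price:.0f}, "
+       f"internalizing the ${external_cost:.0f} societal cost")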
74
+
75
+ According to Rodman D. Griffin, "The burning of coal and oil have saved inestimable amounts of time and labor while substantially raising living standards around the world".[35] Although the use of fossil fuels may seem beneficial to our lives, it contributes to global warming and poses dangers for the future.[35]
76
+
77
+ Moreover, this environmental pollution impacts humans because particulates and other air pollution from fossil fuel combustion cause illness and death when inhaled by people. These health effects include premature death, acute respiratory illness, aggravated asthma, chronic bronchitis and decreased lung function. The poor, undernourished, very young and very old, and people with preexisting respiratory disease and other ill health, are more at risk.[36]
78
+
79
+ In 2014, the global energy industry revenue was about US$8 trillion,[37] with about 84% fossil fuel, 4% nuclear, and 12% renewable (including hydroelectric).[38]
80
+
81
+ In 2014, there were 1,469 oil and gas firms listed on stock exchanges around the world, with a combined market capitalization of US$4.65 trillion.[39] In 2019, Saudi Aramco was listed and it touched a US$2 trillion valuation on its second day of trading,[40] after the world's largest initial public offering.[41]
82
+
83
+ Air pollution from fossil fuels in 2018 has been estimated to cost US$2.9 trillion, or 3.3% of global GDP.[11]
84
+
85
+ The International Energy Agency estimated 2017 global government fossil fuel subsidies to have been $300 billion.[42]
86
+
87
+ A 2015 report studied 20 fossil fuel companies and found that, while highly profitable, the hidden economic cost to society was also large.[43][44] The report spans the period 2008–2012 and notes that: "For all companies and all years, the economic cost to society of their CO2 emissions was greater than their after‐tax profit, with the single exception of ExxonMobil in 2008."[43]:4 Pure coal companies fare even worse: "the economic cost to society exceeds total revenue in all years, with this cost varying between nearly $2 and nearly $9 per $1 of revenue."[43]:5 In this case, total revenue includes "employment, taxes, supply purchases, and indirect employment."[43]:4
88
+
89
+ Fossil fuel prices are generally below their actual costs, or their "efficient prices," when economic externalities, such as the costs of air pollution and climate damage, are taken into account. Fossil fuels were subsidized by $4.7 trillion in 2015, equivalent to 6.3% of that year's global GDP, and subsidies were estimated to have grown to $5.2 trillion in 2017, equivalent to 6.5% of global GDP. The five largest subsidizers in 2015 were China with $1.4 trillion in fossil fuel subsidies, the United States with $649 billion, Russia with $551 billion, the European Union with $289 billion, and India with $209 billion. Had there been no subsidies for fossil fuels, global carbon emissions would have been an estimated 28% lower in 2015, air-pollution-related deaths 46% lower, and government revenue $2.8 trillion (3.8% of GDP) higher.[45]
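+
+ The GDP shares quoted above imply a global GDP figure that can be recovered with two lines of arithmetic; the following Python check uses only numbers from this paragraph:
+
+ # Back out the global GDP implied by the subsidy figures above.
+ subsidy_2015, share_2015 = 4.7e12, 0.063  # $4.7 trillion = 6.3% of GDP
+ subsidy_2017, share_2017 = 5.2e12, 0.065  # $5.2 trillion = 6.5% of GDP
+ print(f"implied 2015 GDP: ${subsidy_2015 / share_2015 / 1e12:.1f} trillion")
+ print(f"implied 2017 GDP: ${subsidy_2017 / share_2017 / 1e12:.1f} trillion")
+ top5 = 1.4e12 + 649e9 + 551e9 + 289e9 + 209e9
+ print(f"top five subsidizers: {top5 / subsidy_2015:.0%} of the 2015 total")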
en/1747.html.txt ADDED
@@ -0,0 +1,428 @@
1
+
2
+
3
+
4
+
5
+ In physics, energy is the quantitative property that must be transferred to an object in order to perform work on, or to heat, the object.[note 1] Energy is a conserved quantity; the law of conservation of energy states that energy can be converted in form, but not created or destroyed. The SI unit of energy is the joule, which is the energy transferred to an object by the work of moving it a distance of 1 metre against a force of 1 newton.
6
+
7
+ Common forms of energy include the kinetic energy of a moving object, the potential energy stored by an object's position in a force field (gravitational, electric or magnetic), the elastic energy stored by stretching solid objects, the chemical energy released when a fuel burns, the radiant energy carried by light, and the thermal energy due to an object's temperature.
8
+
9
+ Mass and energy are closely related. Due to mass–energy equivalence, any object that has mass when stationary (called rest mass) also has an equivalent amount of energy whose form is called rest energy, and any additional energy (of any form) acquired by the object above that rest energy will increase the object's total mass just as it increases its total energy. For example, after heating an object, its increase in energy could be measured as a small increase in mass, with a sensitive enough scale.
10
+
11
+ Living organisms require energy to stay alive, such as the energy humans get from food. Human civilization requires energy to function, which it gets from energy resources such as fossil fuels, nuclear fuel, or renewable energy. The processes of Earth's climate and ecosystem are driven by the radiant energy Earth receives from the sun and the geothermal energy contained within the earth.
12
+
13
+ The total energy of a system can be subdivided and classified into potential energy, kinetic energy, or combinations of the two in various ways. Kinetic energy is determined by the movement of an object – or the composite motion of the components of an object – and potential energy reflects the potential of an object to have motion, and generally is a function of the position of an object within a field or may be stored in the field itself.
14
+
15
+ While these two categories are sufficient to describe all forms of energy, it is often convenient to refer to particular combinations of potential and kinetic energy as forms in their own right. For example, macroscopic mechanical energy is the sum of translational and rotational kinetic and potential energy in a system, neglecting the kinetic energy due to temperature; nuclear energy combines the potentials from the nuclear force and the weak force, among others.[citation needed]
16
+
17
+
18
+
19
+ The word energy derives from the Ancient Greek: ἐνέργεια, romanized: energeia, lit. 'activity, operation',[1] which possibly appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure.
20
+
21
+ In the late 17th century, Gottfried Leibniz proposed the idea of the Latin: vis viva, or living force, which he defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the random motion of the constituent parts of matter, although it would be more than a century until this was generally accepted. The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two.
22
+
23
+ In 1807, Thomas Young was possibly the first to use the term "energy" instead of vis viva, in its modern sense.[2] Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy". The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat.
24
+
25
+ These developments led to the theory of conservation of energy, formalized largely by William Thomson (Lord Kelvin) as the field of thermodynamics. Thermodynamics aided the rapid development of explanations of chemical processes by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time.[3] Thus, since 1918, theorists have understood that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time.
26
+
27
+ In 1843, Joule independently discovered the mechanical equivalent of heat in a series of experiments. The most famous of them used the "Joule apparatus": a descending weight, attached to a string, caused rotation of a paddle immersed in water, practically insulated from heat transfer. It showed that the gravitational potential energy lost by the weight in descending was equal to the internal energy gained by the water through friction with the paddle.
28
+
29
+ In the International System of Units (SI), the unit of energy is the joule, named after James Prescott Joule. It is a derived unit. It is equal to the energy expended (or work done) in applying a force of one newton through a distance of one metre. However, energy is also expressed in many other units not part of the SI, such as ergs, calories, British Thermal Units, kilowatt-hours and kilocalories, which require a conversion factor when expressed in SI units.
30
+
31
+ The SI unit of energy rate (energy per unit time) is the watt, which is a joule per second. Thus, one joule is one watt-second, and 3600 joules equal one watt-hour. The CGS energy unit is the erg and the imperial and US customary unit is the foot pound. Other energy units such as the electronvolt, food calorie or thermodynamic kcal (based on the temperature change of water in a heating process), and BTU are used in specific areas of science and commerce.
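+
+ Because these are all units of the same quantity, converting between them is a matter of fixed factors; a short Python sketch using standard conversion factors:
+
+ # Express one kilowatt-hour in several other energy units.
+ J_PER_KWH = 3.6e6     # 1 kWh = 1000 W x 3600 s
+ J_PER_CAL = 4.184     # thermochemical calorie
+ J_PER_BTU = 1055.06   # British thermal unit
+ J_PER_ERG = 1e-7      # CGS erg
+ e = 1 * J_PER_KWH
+ print(f"1 kWh = {e:.2e} J = {e / J_PER_CAL:.2e} cal "
+       f"= {e / J_PER_BTU:.0f} BTU = {e / J_PER_ERG:.2e} erg")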
32
+
33
+ In classical mechanics, energy is a conceptually and mathematically useful property, as it is a conserved quantity. Several formulations of mechanics have been developed using energy as a core concept.
34
+
35
+ Work, a function of energy, is force times distance.
36
+
37
+ This says that the work (W) is equal to the line integral of the force F along a path C; for details see the mechanical work article. Work, and thus energy, is frame dependent. For example, consider a ball being hit by a bat. In the center-of-mass reference frame, the bat does no work on the ball. But, in the reference frame of the person swinging the bat, considerable work is done on the ball.
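+
+ A numerical sketch of the line integral W = ∫C F · dr, here in Python for a constant force along a straight path, so the exact answer F · d is easy to check (the force and endpoints are arbitrary illustrative values):
+
+ # Numerically evaluate W = sum of F . dr along a straight path.
+ import numpy as np
+ F = np.array([3.0, 0.0])                      # constant force, N
+ start, end = np.array([0.0, 0.0]), np.array([4.0, 3.0])
+ ts = np.linspace(0.0, 1.0, 1001)              # parameterize r(t)
+ path = start + np.outer(ts, end - start)
+ dr = np.diff(path, axis=0)                    # segment displacements
+ W = float(np.sum(dr @ F))                     # sum of F . dr
+ print(f"numerical work: {W:.3f} J (exact: {F @ (end - start):.3f} J)")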
47
+
48
+ The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems. These classical equations have remarkably direct analogs in nonrelativistic quantum mechanics.[4]
49
+
50
+ Another energy-related concept is called the Lagrangian, after Joseph-Louis Lagrange. This formalism is as fundamental as the Hamiltonian, and both can be used to derive the equations of motion or be derived from them. It was invented in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction).
51
+
52
+ Noether's theorem (1918) states that any differentiable symmetry of the action of a physical system has a corresponding conservation law. Noether's theorem has become a fundamental tool of modern theoretical physics and the calculus of variations. A generalisation of the seminal formulations on constants of motion in Lagrangian and Hamiltonian mechanics (1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian; for example, dissipative systems with continuous symmetries need not have a corresponding conservation law.
53
+
54
+ In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structure, it is invariably accompanied by an increase or decrease of energy of the substances involved. Some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light; thus the products of a reaction may have more or less energy than the reactants. A reaction is said to be exothermic or exergonic if the final state is lower on the energy scale than the initial state; in the case of endothermic reactions the situation is the reverse. Chemical reactions are almost invariably not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by the Boltzmann population factor e−E/kT, that is, the probability that a molecule has energy greater than or equal to E at the given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction can be provided in the form of thermal energy.
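+
+ The strength of this temperature dependence is easy to see numerically; the sketch below evaluates the Boltzmann factor e−E/kT in Python for an arbitrarily chosen activation energy of 0.5 eV:
+
+ # Boltzmann population factor exp(-E/kT) at a few temperatures.
+ import math
+ K_B = 1.380649e-23           # Boltzmann constant, J/K
+ E_A = 0.5 * 1.602176634e-19  # 0.5 eV activation energy (illustrative)
+ for T in (300.0, 350.0, 400.0):
+     print(f"T = {T:.0f} K: exp(-E/kT) = {math.exp(-E_A / (K_B * T)):.3e}")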
55
+
56
+ In biology, energy is an attribute of all biological systems from the biosphere to the smallest living organism. Within an organism it is responsible for growth and development of a biological cell or an organelle of a biological organism. Energy used in respiration is mostly stored in molecular oxygen [5] and can be unlocked by reactions with molecules of substances such as carbohydrates (including sugars), lipids, and proteins stored by cells. In human terms, the human equivalent (H-e) (Human energy conversion) indicates, for a given amount of energy expenditure, the relative quantity of energy needed for human metabolism, assuming an average human energy expenditure of 12,500 kJ per day and a basal metabolic rate of 80 watts. For example, if our bodies run (on average) at 80 watts, then a light bulb running at 100 watts is running at 1.25 human equivalents (100 ÷ 80) i.e. 1.25 H-e. For a difficult task of only a few seconds' duration, a person can put out thousands of watts, many times the 746 watts in one official horsepower. For tasks lasting a few minutes, a fit human can generate perhaps 1,000 watts. For an activity that must be sustained for an hour, output drops to around 300; for an activity kept up all day, 150 watts is about the maximum.[6] The human equivalent assists understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a "feel" for the use of a given amount of energy.[7]
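+
+ The H-e figure is a single division by the 80-watt average human output quoted above; in Python (the appliance wattages besides the 100 W bulb are made-up round numbers):
+
+ # Express appliance power draws in human equivalents (H-e).
+ HUMAN_WATTS = 80.0
+ for name, watts in [("light bulb", 100.0), ("laptop", 50.0), ("kettle", 2000.0)]:
+     print(f"{name:10s} {watts:6.0f} W = {watts / HUMAN_WATTS:5.2f} H-e")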
57
+
58
+ Sunlight's radiant energy is also captured by plants as chemical potential energy in photosynthesis, when carbon dioxide and water (two low-energy compounds) are converted into carbohydrates, lipids, and proteins and high-energy compounds like oxygen [5] and ATP. Carbohydrates, lipids, and proteins can release the energy of oxygen, which is utilized by living organisms as an electron acceptor. Release of the energy stored during photosynthesis as heat or light may be triggered suddenly by a spark, in a forest fire, or it may be made available more slowly for animal or human metabolism, when organic molecules are ingested, and catabolism is triggered by enzyme action.
59
+
60
+ Any living organism relies on an external source of energy – radiant energy from the Sun in the case of green plants, chemical energy in some form in the case of animals – to be able to grow and reproduce. The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as a combination of oxygen and food molecules, the latter mostly carbohydrates and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples. The food molecules are oxidised to carbon dioxide and water in the mitochondria
61
+ C6H12O6 + 6 O2 → 6 CO2 + 6 H2O
62
+ and some of the energy is used to convert ADP into ATP.
63
+
64
+ The rest of the chemical energy in O2[8] and the carbohydrate or fat is converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains is used for other metabolism when ATP reacts with OH groups and eventually splits into ADP and phosphate (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work:[note 2]
65
+
66
+ It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical or radiant energy), and it is true that most real machines manage higher efficiencies. In growing organisms the energy that is converted to heat serves a vital purpose, as it allows the organism tissue to be highly ordered with regard to the molecules it is built from. The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe ("the surroundings").[note 3] Simpler organisms can achieve higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren. The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology: to take just the first step in the food chain, of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants,[9] i.e. reconverted into carbon dioxide and heat.
67
+
68
+ In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior,[10] while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations brought about by solar energy on the atmosphere of the planet Earth.
69
+
70
+ Sunlight may be stored as gravitational potential energy after it strikes the Earth, as (for example) water evaporates from oceans and is deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbines or generators to produce electricity). Sunlight also drives many weather phenomena, save those generated by volcanic events. An example of a solar-mediated weather event is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, give up some of their thermal energy suddenly to power a few days of violent air movement.
71
+
72
+ In a slower process, radioactive decay of atoms in the core of the Earth releases heat. This thermal energy drives plate tectonics and may lift mountains, via orogenesis. This slow lifting represents a kind of gravitational potential energy storage of the thermal energy, which may be later released to active kinetic energy in landslides, after a triggering event. Earthquakes also release stored elastic potential energy in rocks, a store that has been produced ultimately from the same radioactive heat sources. Thus, according to present understanding, familiar events such as landslides and earthquakes release energy that has been stored as potential energy in the Earth's gravitational field or elastic strain (mechanical potential energy) in rocks. Prior to this, they represent release of energy that has been stored in heavy atoms since the collapse of long-destroyed supernova stars created these atoms.
73
+
74
+ In cosmology and astronomy the phenomena of stars, nova, supernova, quasars and gamma-ray bursts are the universe's highest-output energy transformations of matter. All stellar phenomena (including solar activity) are driven by various kinds of energy transformations. Energy in such transformations is either from gravitational collapse of matter (usually molecular hydrogen) into various classes of astronomical objects (stars, black holes, etc.), or from nuclear fusion (of lighter elements, primarily hydrogen). The nuclear fusion of hydrogen in the Sun also releases another store of potential energy which was created at the time of the Big Bang. At that time, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store of potential energy that can be released by fusion. Such a fusion process is triggered by heat and pressure generated from gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into sunlight.
75
+
76
+
77
+
78
+ In quantum mechanics, energy is defined in terms of the energy operator as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of a slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level) which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by Planck's relation:
80
+
81
+
82
+
83
+ E = hν (where h is Planck's constant and ν the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons.
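+
+ As a numeric illustration of Planck's relation in Python, the energy of a photon of green light (taking a frequency of roughly 5.6 × 10^14 Hz):
+
+ # Photon energy E = h * nu.
+ h = 6.62607015e-34   # Planck's constant, J s
+ nu = 5.6e14          # approximate frequency of green light, Hz
+ E = h * nu
+ print(f"E = {E:.3e} J = {E / 1.602176634e-19:.2f} eV")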
110
+
111
+ When calculating kinetic energy (work to accelerate a massive body from zero speed to some finite speed) relativistically – using Lorentz transformations instead of Newtonian mechanics – Einstein discovered an unexpected by-product of these calculations to be an energy term which does not vanish at zero speed. He called it rest energy: energy which every massive body must possess even when being at rest. The amount of energy is directly proportional to the mass of the body:
112
+ E0 = m0c²
113
+ where E0 is the rest energy, m0 the rest mass of the body, and c the speed of light in a vacuum.
114
+
115
+ For example, consider electron–positron annihilation, in which the rest energy of these two individual particles (equivalent to their rest mass) is converted to the radiant energy of the photons produced in the process. In this system the matter and antimatter (electrons and positrons) are destroyed and changed to non-matter (the photons). However, the total mass and total energy do not change during this interaction. The photons each have no rest mass but nonetheless have radiant energy which exhibits the same inertia as did the two original particles. This is a reversible process – the inverse process is called pair creation – in which the rest mass of particles is created from the radiant energy of two (or more) annihilating photons.
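+
+ A quick Python check of the well-known figure for this process: when an electron and positron annihilate at rest, each particle's rest energy m0c² reappears as one 511 keV photon:
+
+ # Rest energy of the electron, E0 = m0 * c**2.
+ m_e = 9.1093837015e-31   # electron rest mass, kg
+ c = 2.99792458e8         # speed of light, m/s
+ E0 = m_e * c**2
+ print(f"E0 = {E0:.3e} J = {E0 / 1.602176634e-19 / 1e3:.0f} keV per photon")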
116
+
117
+ In general relativity, the stress–energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation.[11]
118
+
119
+ Energy and mass are manifestations of one and the same underlying physical property of a system. This property is responsible for the inertia and strength of gravitational interaction of the system ("mass manifestations"), and is also responsible for the potential ability of the system to perform work or heating ("energy manifestations"), subject to the limitations of other physical laws.
120
+
121
+ In classical physics, energy is a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy–momentum 4-vector).[11] In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of space-time (= boosts).
122
+
123
+
124
+
125
+ Energy may be transformed between different forms at various efficiencies. Items that transform between these forms are called transducers. Examples of transducers include a battery (from chemical energy to electric energy); a dam (gravitational potential energy to kinetic energy of moving water and the blades of a turbine, and ultimately to electric energy through an electric generator); and a heat engine (from heat to work).
126
+
127
+ Examples of energy transformation include generating electric energy from heat energy via a steam turbine, or lifting an object against gravity using electrical energy driving a crane motor. Lifting against gravity performs mechanical work on the object and stores gravitational potential energy in the object. If the object falls to the ground, gravity does mechanical work on the object which transforms the potential energy in the gravitational field to the kinetic energy released as heat on impact with the ground. Our Sun transforms nuclear potential energy to other forms of energy; its total mass does not decrease due to that in itself (since it still contains the same total energy even if in different forms), but its mass does decrease when the energy escapes out to its surroundings, largely as radiant energy.
128
+
129
+ There are strict limits to how efficiently heat can be converted into work in a cyclic process, e.g. in a heat engine, as described by Carnot's theorem and the second law of thermodynamics. However, some energy transformations can be quite efficient. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often determined by entropy (equal energy spread among all available degrees of freedom) considerations. In practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces.
130
+
131
+ Energy transformations in the universe over time are characterized by various kinds of potential energy that has been available since the Big Bang later being "released" (transformed to more active types of energy such as kinetic or radiant energy) when a triggering mechanism is available. Familiar examples of such processes include nuclear decay, in which energy is released that was originally "stored" in heavy isotopes (such as uranium and thorium), by nucleosynthesis, a process ultimately using the gravitational potential energy released from the gravitational collapse of supernovae, to store energy in the creation of these heavy elements before they were incorporated into the solar system and the Earth. This energy is triggered and released in nuclear fission bombs or in civil nuclear power generation. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic energy and thermal energy in a very short time. Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at maximum. At its lowest point the kinetic energy is at maximum and is equal to the decrease of potential energy. If one (unrealistically) assumes that there is no friction or other losses, the conversion of energy between these processes would be perfect, and the pendulum would continue swinging forever.
132
+
133
+ Energy is also transferred from potential energy (Ep) to kinetic energy (Ek) and then back to potential energy constantly. This is referred to as conservation of energy. In this closed system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other. This can be demonstrated by the following:
162
+
163
+ Ep,initial + Ek,initial = Ep,final + Ek,final        (4)
172
+
173
+ The equation can then be simplified further since Ep = mgh (mass times acceleration due to gravity times the height) and Ek = (1/2)mv² (half mass times velocity squared). Then the total amount of energy can be found by adding Ep + Ek = Etotal.
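+
+ A minimal Python sketch verifying this for an object in free fall (mass, height, and sample times are arbitrary illustrative values; friction is neglected):
+
+ # Check that Ep + Ek stays constant during free fall.
+ m, g, h0 = 2.0, 9.81, 10.0        # kg, m/s^2, initial height in m
+ for t in (0.0, 0.5, 1.0, 1.4):    # times before impact (~1.43 s)
+     v = g * t                     # speed after falling for time t
+     h = h0 - 0.5 * g * t**2       # remaining height
+     ep, ek = m * g * h, 0.5 * m * v**2
+     print(f"t={t:.1f} s  Ep={ep:6.2f} J  Ek={ek:6.2f} J  total={ep + ek:.2f} J")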
252
+
253
+ Energy gives rise to weight when it is trapped in a system with zero momentum, where it can be weighed. It is also equivalent to mass, and this mass is always associated with it. Mass is also equivalent to a certain amount of energy, and likewise always appears associated with it, as described in mass-energy equivalence. The formula E = mc², derived by Albert Einstein (1905) quantifies the relationship between rest-mass and rest-energy within the concept of special relativity. In different theoretical frameworks, similar formulas were derived by J.J. Thomson (1881), Henri Poincaré (1900), Friedrich Hasenöhrl (1904) and others (see Mass-energy equivalence#History for further information).
254
+
255
+ Part of the rest energy (equivalent to rest mass) of matter may be converted to other forms of energy (still exhibiting mass), but neither energy nor mass can be destroyed; rather, both remain constant during any process. However, since c² is extremely large relative to ordinary human scales, the conversion of an everyday amount of rest mass (for example, 1 kg) from rest energy to other forms of energy (such as kinetic energy, thermal energy, or the radiant energy carried by light and other radiation) can liberate tremendous amounts of energy (~9×10^16 joules = 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons. Conversely, the mass equivalent of an everyday amount of energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to measure on a weighing scale, unless the energy loss is very large. Examples of large transformations between rest energy (of matter) and other forms of energy (e.g., kinetic energy into particles with rest mass) are found in nuclear physics and particle physics.
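+
+ The "21 megatons" figure can be checked in one line of Python, taking the conventional 4.184 × 10^15 J per megaton of TNT:
+
+ # Rest energy of 1 kg of matter in megatons of TNT.
+ c = 2.99792458e8          # speed of light, m/s
+ J_PER_MEGATON = 4.184e15  # standard TNT equivalent
+ print(f"E = {1.0 * c**2:.2e} J = {1.0 * c**2 / J_PER_MEGATON:.1f} Mt of TNT")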
286
+
287
+ Thermodynamics divides energy transformation into two kinds: reversible processes and irreversible processes. An irreversible process is one in which energy is dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states), without degradation of even more energy. A reversible process is one in which this sort of dissipation does not happen. For example, conversion of energy from one type of potential field to another is reversible, as in the pendulum system described above. In processes where heat is generated, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered, in order to be converted with 100% efficiency into other forms of energy. In this case, the energy must partly stay as heat, and cannot be completely recovered as usable energy, except at the price of some other heat-like increase in disorder in quantum states in the universe (such as an expansion of matter, or a randomisation in a crystal).
288
+
289
+ As the universe evolves in time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or other kinds of increases in disorder). This has been referred to as the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does not change, but the fraction of energy which is available to do work through a heat engine, or be transformed to other usable forms of energy (through the use of generators attached to heat engines), grows less and less.
290
+
291
+ The fact that energy can be neither created nor destroyed is called the law of conservation of energy. In the form of the first law of thermodynamics, this states that a closed system's energy is constant unless energy is transferred in or out by work or heat, and that no energy is lost in transfer. The total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system. Whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant.[12]
292
+
293
+ While heat can always be fully converted into work in a reversible isothermal expansion of an ideal gas, for cyclic processes of practical interest in heat engines the second law of thermodynamics states that the system doing work always loses some energy as waste heat. This creates a limit to the amount of heat energy that can do work in a cyclic process, a limit called the available energy. Mechanical and other forms of energy can be transformed in the other direction into thermal energy without such limitations.[13] The total energy of a system can be calculated by adding up all forms of energy in the system.
294
+
295
+ Richard Feynman said during a 1961 lecture:[14]
296
+
297
+ There is a fact, or if you wish, a law, governing all natural phenomena that are known to date. There is no known exception to this law – it is exact so far as we know. The law is called the conservation of energy. It states that there is a certain quantity, which we call energy, that does not change in manifold changes which nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says that there is a numerical quantity which does not change when something happens. It is not a description of a mechanism, or anything concrete; it is just a strange fact that we can calculate some number and when we finish watching nature go through her tricks and calculate the number again, it is the same.
298
+
299
+ Most kinds of energy (with gravitational energy being a notable exception)[15] are subject to strict local conservation laws as well. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa.[13][14]
300
+
301
+ This law is a fundamental principle of physics. As shown rigorously by Noether's theorem, the conservation of energy is a mathematical consequence of translational symmetry of time,[16] a property of most phenomena below the cosmic scale that makes them independent of their locations on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable. This is because energy is the quantity which is canonically conjugate to time. This mathematical entanglement of energy and time also results in the uncertainty principle: it is impossible to define the exact amount of energy during any definite time interval. The uncertainty principle should not be confused with energy conservation; rather, it provides mathematical limits to which energy can in principle be defined and measured.
302
+
303
+ Each of the basic forces of nature is associated with a different type of potential energy, and all types of potential energy (like all other types of energy) appear as system mass, whenever present. For example, a compressed spring will be slightly more massive than before it was compressed. Likewise, whenever energy is transferred between systems by any mechanism, an associated mass is transferred with it.
304
+
305
+ In quantum mechanics energy is expressed using the Hamiltonian operator. On any time scale, the uncertainty in the energy is given by ΔE ≥ ħ/(2Δt), which is similar in form to the Heisenberg Uncertainty Principle (but not really mathematically equivalent thereto, since H and t are not dynamically conjugate variables, neither in classical nor in quantum mechanics).
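+
+ A numeric illustration in Python: the minimum energy uncertainty for a state observed over one femtosecond (the time interval is an arbitrary example):
+
+ # Minimum energy uncertainty over an interval dt: dE >= hbar / (2 dt).
+ hbar = 1.054571817e-34  # reduced Planck constant, J s
+ dt = 1e-15              # one femtosecond
+ dE = hbar / (2 * dt)
+ print(f"dE >= {dE:.3e} J = {dE / 1.602176634e-19:.2f} eV")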
308
+
309
+ In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum; their exchange with and among real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions). Virtual photons (which are simply the lowest quantum mechanical energy state of photons) are also responsible for the electrostatic interaction between electric charges (which results in Coulomb's law), for spontaneous radiative decay of excited atomic and nuclear states, for the Casimir force, for van der Waals bond forces and some other observable phenomena.
310
+
311
+ Energy transfer can be considered for the special case of systems which are closed to transfers of matter. The portion of the energy which is transferred by conservative forces over a distance is measured as the work the source system does on the receiving system. The portion of the energy which does not do work during the transfer is called heat.[note 4] Energy can be transferred between systems in a variety of ways. Examples include the transmission of electromagnetic energy via photons, physical collisions which transfer kinetic energy,[note 5] and the conductive transfer of thermal energy.
312
+
313
+ Energy is strictly conserved and is also locally conserved wherever it can be defined. In thermodynamics, for closed systems, the process of energy transfer is described by the first law:[note 6]
314
+ E = W + Q        (1)
324
+
325
+ where E is the amount of energy transferred, W represents the work done on the system, and Q represents the heat flow into the system. As a simplification, the heat term, Q, is sometimes ignored, especially when the thermal efficiency of the transfer is high.
362
+
363
+ E = W        (2)
372
+
373
+ This simplified equation is the one used to define the joule, for example.
374
+
375
+ Beyond the constraints of closed systems, open systems can gain or lose energy in association with matter transfer (both of these processes are illustrated by fueling an auto, a system which gains in energy thereby, without addition of either work or heat). Denoting this matter-borne energy by Ematter, one may write
376
+ E = W + Q + Ematter        (3)
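+
+ A minimal bookkeeping sketch of equations (1)–(3) in Python (the numeric values are invented for illustration):
+
+ # First-law bookkeeping: E = W + Q (+ matter-borne energy for open systems).
+ def energy_change(work_on_system, heat_in, matter_energy_in=0.0):
+     """Net energy transferred to a system by work, heat, and matter."""
+     return work_on_system + heat_in + matter_energy_in
+
+ print(energy_change(work_on_system=150.0, heat_in=-40.0))    # closed: 110.0 J
+ print(energy_change(0.0, 0.0, matter_energy_in=2.0e6))       # fueling: 2.0e6 J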
395
+
396
+ Internal energy is the sum of all microscopic forms of energy of a system. It is the energy needed to create the system. It is related to the potential energy, e.g., molecular structure, crystal structure, and other geometric aspects, as well as the motion of the particles, in form of kinetic energy. Thermodynamics is chiefly concerned with changes in internal energy and not its absolute value, which is impossible to determine with thermodynamics alone.[17]
397
+
398
+ The first law of thermodynamics asserts that energy (but not necessarily thermodynamic free energy) is always conserved[18] and that heat flow is a form of energy transfer. For homogeneous systems, with a well-defined temperature and pressure, a commonly used corollary of the first law is that, for a system subject only to pressure forces and heat transfer (e.g., a cylinder-full of gas) without chemical changes, the differential change in the internal energy of the system (with a gain in energy signified by a positive quantity) is given as
399
+ dU = T dS − P dV
400
+ where the first term on the right is the heat transferred into the system, expressed in terms of temperature T and entropy S (in which entropy increases and the change dS is positive when the system is heated), and the last term on the right hand side is identified as work done on the system, where pressure is P and volume V (the negative sign results since compression of the system requires work to be done on it and so the volume change, dV, is negative when work is done on the system).
+
+ This equation is highly specific, ignoring all chemical, electrical, nuclear, and gravitational forces, as well as effects such as advection of any form of energy other than heat and pV-work. The general formulation of the first law (i.e., conservation of energy) is valid even in situations in which the system is not homogeneous. For these cases the change in internal energy of a closed system is expressed in a general form by
+
+ dE = δQ + δW,
+
+ where δQ is the heat supplied to the system and δW is the work applied to the system.
+
+ The energy of a mechanical harmonic oscillator (a mass on a spring) is alternately kinetic and potential energy. At two points in the oscillation cycle it is entirely kinetic, and at two points it is entirely potential. Over the whole cycle, or over many cycles, the net energy is thus equally split between kinetic and potential. This is called the equipartition principle: the total energy of a system with many degrees of freedom is equally split among all available degrees of freedom.
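+
+ The equal time-averaged split can be checked numerically; the following illustrative Python sketch uses an arbitrary mass, spring constant and amplitude:
+
+ import math
+
+ m, k, A = 1.0, 4.0, 0.5        # mass (kg), spring constant (N/m), amplitude (m)
+ w = math.sqrt(k / m)           # angular frequency of the oscillator
+ n = 100000
+ period = 2 * math.pi / w
+ ts = [i * period / n for i in range(n)]
+ ke = sum(0.5 * m * (A * w * math.sin(w * t)) ** 2 for t in ts) / n
+ pe = sum(0.5 * k * (A * math.cos(w * t)) ** 2 for t in ts) / n
+ print(ke, pe)   # both ~0.25 J, i.e. half of the total energy 0.5*k*A**2 = 0.5 J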
+
+ This principle is vitally important to understanding the behaviour of a quantity closely related to energy, called entropy. Entropy is a measure of evenness of a distribution of energy between parts of a system. When an isolated system is given more degrees of freedom (i.e., given new available energy states that are the same as existing states), then total energy spreads over all available degrees equally without distinction between "new" and "old" degrees. This mathematical result is called the second law of thermodynamics. The second law of thermodynamics is valid only for systems which are near or in an equilibrium state. For non-equilibrium systems, the laws governing the system's behavior are still debated. One of the guiding principles for these systems is the principle of maximum entropy production.[19][20] It states that nonequilibrium systems behave in such a way as to maximize their entropy production.[21]
en/1748.html.txt ADDED
@@ -0,0 +1,89 @@
+
+
+
+
+ A fossil fuel is a fuel formed by natural processes, such as anaerobic decomposition of buried dead organisms, containing organic molecules originating in ancient photosynthesis[1] that release energy in combustion.[2]
+ Such organisms and their resulting fossil fuels typically have an age of millions of years, and sometimes more than 650 million years.[3]
+ Fossil fuels contain high percentages of carbon and include petroleum, coal, and natural gas.[4] Peat is also sometimes considered a fossil fuel.[5]
+ Commonly used derivatives of fossil fuels include kerosene and propane.
+ Fossil fuels range from volatile materials with low carbon-to-hydrogen ratios (like methane), to liquids (like petroleum), to nonvolatile materials composed of almost pure carbon, like anthracite coal.
+ Methane can be found in hydrocarbon fields alone, associated with oil, or in the form of methane clathrates.
+
+ As of 2018, the world's main primary energy sources consisted of petroleum (34%), coal (27%), and natural gas (24%), amounting to an 85% share for fossil fuels in primary energy consumption in the world.
+ Non-fossil sources included nuclear (4.4%), hydroelectric (6.8%), and other renewables (4.0%, including geothermal, solar, tidal, wind, wood, and waste).[6]
+ The share of renewables (including traditional biomass) in the world's total final energy consumption was 18% in 2018.[7] Compared with 2017, world energy consumption grew at a rate of 2.9%, almost double its 10-year average of 1.5% per year, and the fastest since 2010.[8]
+
+ Although fossil fuels are continually formed by natural processes, they are generally classified as non-renewable resources because they take millions of years to form and known viable reserves are being depleted much faster than new ones are generated.[9][10]
+
+ Most air pollution deaths are due to fossil fuel combustion products; this pollution is estimated to cost over 3% of global GDP,[11] and fossil fuel phase-out would save 3.6 million lives each year.[12]
+
+ The use of fossil fuels raises serious environmental concerns.
+ The burning of fossil fuels produces around 35 billion tonnes (35 gigatonnes) of carbon dioxide (CO2) per year.[13]
+ It is estimated that natural processes can only absorb a small part of that amount, so there is a net increase of many billion tonnes of atmospheric carbon dioxide per year.[14]
+ CO2 is a greenhouse gas that increases radiative forcing and contributes to global warming and ocean acidification.
+ A global movement towards the generation of low-carbon renewable energy is underway to help reduce global greenhouse-gas emissions.
+
+ The theory that fossil fuels formed from the fossilized remains of dead plants by exposure to heat and pressure in the Earth's crust over millions of years was first introduced by Andreas Libavius "in his 1597 Alchemia [Alchymia]" and later by Mikhail Lomonosov "as early as 1757 and certainly by 1763".[16] The first use of the term "fossil fuel" occurs in the work of the German chemist Caspar Neumann, in English translation in 1759.[17] The Oxford English Dictionary notes that in the phrase "fossil fuel" the adjective "fossil" means "[o]btained by digging; found buried in the earth", which dates to at least 1652,[18] before the English noun "fossil" came to refer primarily to long-dead organisms in the early 18th century.[19]
+
+ Aquatic phytoplankton and zooplankton that died and sedimented in large quantities under anoxic conditions millions of years ago began forming petroleum and natural gas as a result of anaerobic decomposition. Over geological time this organic matter, mixed with mud, became buried under further heavy layers of inorganic sediment. The resulting high temperature and pressure caused the organic matter to chemically alter, first into a waxy material known as kerogen, which is found in oil shales, and then with more heat into liquid and gaseous hydrocarbons in a process known as catagenesis. Despite these heat-driven transformations (which increase the energy density compared to typical organic matter by removal of oxygen atoms),[2] the energy released in combustion is still photosynthetic in origin.[1]
+
+ Terrestrial plants, on the other hand, tended to form coal and methane. Many of the coal fields date to the Carboniferous period of Earth's history. Terrestrial plants also form type III kerogen, a source of natural gas.
+
+ There is a wide range of organic compounds in any given fuel. The specific mixture of hydrocarbons gives a fuel its characteristic properties, such as density, viscosity, boiling point, and melting point. Some fuels, like natural gas, contain only very low-boiling gaseous components; others, such as gasoline or diesel, contain much higher-boiling components.
+
+ Fossil fuels are of great importance because they can be burned (oxidized to carbon dioxide and water), producing significant amounts of energy per unit mass. The use of coal as a fuel predates recorded history. Coal was used to run furnaces for the smelting of metal ore. While semi-solid hydrocarbons from seeps were also burned in ancient times,[20] they were mostly used for waterproofing and embalming.[21]
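+
+ The energy and carbon bookkeeping of combustion can be illustrated with a short Python sketch for methane; the heating value used is a standard textbook figure, not a number from this article:
+
+ # Complete combustion of methane: CH4 + 2 O2 -> CO2 + 2 H2O
+ M_CH4, M_CO2 = 16.04, 44.01       # molar masses, g/mol
+ lhv_ch4 = 50.0                    # lower heating value of methane, ~50 MJ/kg
+
+ co2_per_kg_fuel = M_CO2 / M_CH4   # kg of CO2 emitted per kg of CH4 burned
+ print(round(co2_per_kg_fuel, 2))  # ~2.74
+ print(round(lhv_ch4 / co2_per_kg_fuel, 1))  # ~18.2 MJ released per kg of CO2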
+
+ Commercial exploitation of petroleum began in the 19th century, largely to replace oils from animal sources (notably whale oil) for use in oil lamps.[22]
+
+ Natural gas, once flared-off as an unneeded byproduct of petroleum production, is now considered a very valuable resource.[23] Natural gas deposits are also the main source of helium.
+
+ Heavy crude oil, which is much more viscous than conventional crude oil, and oil sands, where bitumen is found mixed with sand and clay, began to become more important as sources of fossil fuel in the early 2000s.[24] Oil shale and similar materials are sedimentary rocks containing kerogen, a complex mixture of high-molecular weight organic compounds, which yield synthetic crude oil when heated (pyrolyzed). With additional processing, they can be employed in lieu of other established fossil fuels. More recently, there has been disinvestment from exploitation of such resources due to their high carbon cost relative to more easily processed reserves.[25]
+
+ Prior to the latter half of the 18th century, windmills and watermills provided the energy needed for industry such as milling flour, sawing wood or pumping water, while burning wood or peat provided domestic heat. The wide-scale use of fossil fuels, coal at first and petroleum later, in steam engines enabled the Industrial Revolution. At the same time, gas lights using natural gas or coal gas were coming into wide use. The invention of the internal combustion engine and its use in automobiles and trucks greatly increased the demand for gasoline and diesel oil, both made from fossil fuels. Other forms of transportation, railways and aircraft, also require fossil fuels. The other major use for fossil fuels is in generating electricity and as feedstock for the petrochemical industry. Tar, a leftover of petroleum extraction, is used in construction of roads.
+
+ Levels of primary energy sources are the reserves in the ground; flows are the production of fossil fuels from these reserves. The most important primary energy sources are carbon-based fossil energy sources.
+
+ P. E. Hodgson, a senior research fellow emeritus in physics at Corpus Christi College, Oxford, expected world energy use to double every fourteen years, and the need for it to increase faster still; he insisted in 2008 that world oil production, a main resource of fossil fuel, was expected to peak in ten years and thereafter fall.[26]
+
+ The principle of supply and demand holds that as hydrocarbon supplies diminish, prices will rise. Therefore, higher prices will lead to increased alternative, renewable energy supplies as previously uneconomic sources become sufficiently economical to exploit. Artificial gasolines and other renewable energy sources currently require more expensive production and processing technologies than conventional petroleum reserves, but may become economically viable in the near future.
+ Different alternative sources of energy include nuclear, hydroelectric, solar, wind, and geothermal.
+
+ One of the more promising energy alternatives is the use of inedible feed stocks and biomass for carbon dioxide capture as well as biofuel production. While these processes are not without problems, they are currently in practice around the world. Biodiesels are being produced by several companies and are the subject of research at several universities. Processes for converting renewable lipids into usable fuels include hydrotreating and decarboxylation.
+
+ The United States has less than 5% of the world's population, but due to large houses and private cars, uses more than 25% of the world's supply of fossil fuels.[27] As the largest source of U.S. greenhouse gas emissions, CO2 from fossil fuel combustion accounted for 80 percent of weighted emissions in 1998.[28] Combustion of fossil fuels also produces other air pollutants, such as nitrogen oxides, sulfur dioxide, volatile organic compounds and heavy metals.
+
+ According to Environment Canada:
+
+ "The electricity sector is unique among industrial sectors in its very large contribution to emissions associated with nearly all air issues. Electricity generation produces a large share of Canadian nitrogen oxides and sulphur dioxide emissions, which contribute to smog and acid rain and the formation of fine particulate matter. It is the largest uncontrolled industrial source of mercury emissions in Canada. Fossil fuel-fired electric power plants also emit carbon dioxide, which may contribute to climate change. In addition, the sector has significant impacts on water and habitat and species. In particular, hydropower dams and transmission lines have significant effects on water and biodiversity."[29]
+
+ According to U.S. scientist Jerry Mahlman, who crafted the IPCC language used to define levels of scientific certainty, the new report will blame fossil fuels for global warming with "virtual certainty," meaning 99% sure. That's a significant jump from "likely," or 66% sure, in the group's last report in 2001. More than 1,600 pages of research went into the new assessment.[30]
+
+ Combustion of fossil fuels generates sulfuric and nitric acids, which fall to Earth as acid rain, impacting both natural areas and the built environment. Monuments and sculptures made from marble and limestone are particularly vulnerable, as the acids dissolve calcium carbonate.
+
+ Fossil fuels also contain radioactive materials, mainly uranium and thorium, which are released into the atmosphere. In 2000, about 12,000 tonnes of thorium and 5,000 tonnes of uranium were released worldwide from burning coal.[31] It is estimated that during 1982, US coal burning released 155 times as much radioactivity into the atmosphere as the Three Mile Island accident.[32]
+
+ Burning coal also generates large amounts of bottom ash and fly ash. These materials are used in a wide variety of applications, which utilize, for example, about 40% of US production.[33]
+
+ Harvesting, processing, and distributing fossil fuels can also create environmental concerns. Coal mining methods, particularly mountaintop removal and strip mining, have negative environmental impacts, and offshore oil drilling poses a hazard to aquatic organisms. Fossil fuel wells can contribute to methane release via fugitive gas emissions. Oil refineries also have negative environmental impacts, including air and water pollution. Transportation of coal requires the use of diesel-powered locomotives, while crude oil is typically transported by tanker ships, requiring the combustion of additional fossil fuels.
+
+ Environmental regulation uses a variety of approaches to limit these emissions, such as command-and-control (which mandates the amount of pollution or the technology used), economic incentives, or voluntary programs.
+
+ An example of such regulation in the USA: the "EPA is implementing policies to reduce airborne mercury emissions. Under regulations issued in 2005, coal-fired power plants will need to reduce their emissions by 70 percent by 2018."[34]
+
+ In economic terms, pollution from fossil fuels is regarded as a negative externality. Taxation is considered as one way to make societal costs explicit, in order to 'internalize' the cost of pollution. This aims to make fossil fuels more expensive, thereby reducing their use and the amount of associated pollution, along with raising the funds necessary to counteract these effects.[citation needed]
+
+ According to Rodman D. Griffin, "The burning of coal and oil have saved inestimable amounts of time and labor while substantially raising living standards around the world".[35] Although the use of fossil fuels may seem beneficial to our lives, it contributes to global warming and is considered dangerous for the future.[35]
+
+ Moreover, this environmental pollution impacts humans because particulates and other air pollution from fossil fuel combustion cause illness and death when inhaled by people. These health effects include premature death, acute respiratory illness, aggravated asthma, chronic bronchitis and decreased lung function. The poor, undernourished, very young and very old, and people with preexisting respiratory disease and other ill health, are more at risk.[36]
+
+ In 2014, the global energy industry revenue was about US$8 trillion,[37] with about 84% fossil fuel, 4% nuclear, and 12% renewable (including hydroelectric).[38]
+
+ In 2014, there were 1,469 oil and gas firms listed on stock exchanges around the world, with a combined market capitalization of US$4.65 trillion.[39] In 2019, Saudi Aramco was listed and it touched a US$2 trillion valuation on its second day of trading,[40] after the world's largest initial public offering.[41]
+
+ Air pollution from fossil fuels in 2018 has been estimated to cost US$2.9 trillion, or 3.3% of global GDP.[11]
+
+ The International Energy Agency estimated 2017 global government fossil fuel subsidies to have been $300 billion.[42]
+
+ A 2015 report studied 20 fossil fuel companies and found that, while highly profitable, the hidden economic cost to society was also large.[43][44] The report spans the period 2008–2012 and notes that: "For all companies and all years, the economic cost to society of their CO2 emissions was greater than their after‐tax profit, with the single exception of ExxonMobil in 2008."[43]:4 Pure coal companies fare even worse: "the economic cost to society exceeds total revenue in all years, with this cost varying between nearly $2 and nearly $9 per $1 of revenue."[43]:5 In this case, total revenue includes "employment, taxes, supply purchases, and indirect employment."[43]:4
+
+ Fossil fuel prices are generally below their actual costs, or their "efficient prices," when economic externalities, such as the costs of air pollution and global climate destruction, are taken into account. Fossil fuel subsidies amounted to $4.7 trillion in 2015, equivalent to 6.3% of 2015 global GDP, and were estimated to grow to $5.2 trillion in 2017, equivalent to 6.5% of global GDP. The largest five subsidizers in 2015 were the following: China with $1.4 trillion in fossil fuel subsidies, the United States with $649 billion, Russia with $551 billion, the European Union with $289 billion, and India with $209 billion. Had there been no subsidies for fossil fuels, global carbon emissions would have been lowered by an estimated 28% in 2015, air-pollution-related deaths reduced by 46%, and government revenue increased by $2.8 trillion, or 3.8% of GDP.[45]
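+
+ As a quick arithmetic check of those percentages (an illustrative Python snippet; the implied GDP figures are back-calculated, not quoted by the source):
+
+ subsidy_2015, share_2015 = 4.7, 0.063   # $ trillion, fraction of global GDP
+ subsidy_2017, share_2017 = 5.2, 0.065
+
+ print(round(subsidy_2015 / share_2015, 1))   # implied 2015 global GDP: ~74.6 $tn
+ print(round(subsidy_2017 / share_2017, 1))   # implied 2017 global GDP: ~80.0 $tn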
en/1749.html.txt ADDED
@@ -0,0 +1,229 @@
+
+
+
+
+ World electricity generation by source in 2017. Total generation was 26 PWh.[1]
+
+ Renewable energy is energy that is collected from renewable resources, which are naturally replenished on a human timescale, such as sunlight, wind, rain, tides, waves, and geothermal heat.[3] Renewable energy often provides energy in four important areas: electricity generation, air and water heating/cooling, transportation, and rural (off-grid) energy services.[4]
+
+ Based on REN21's 2017 report, renewables contributed 19.3% to humans' global energy consumption and 24.5% to their generation of electricity in 2015 and 2016, respectively. This energy consumption is divided as 8.9% coming from traditional biomass, 4.2% as heat energy (modern biomass, geothermal and solar heat), 3.9% from hydroelectricity and the remaining 2.2% is electricity from wind, solar, geothermal, and other forms of biomass. Worldwide investments in renewable technologies amounted to more than US$286 billion in 2015.[5] In 2017, worldwide investments in renewable energy amounted to US$279.8 billion with China accounting for US$126.6 billion or 45% of the global investments, the United States for US$40.5 billion and Europe for US$40.9 billion.[6] Globally there are an estimated 7.7 million jobs associated with the renewable energy industries, with solar photovoltaics being the largest renewable employer.[7] Renewable energy systems are rapidly becoming more efficient and cheaper and their share of total energy consumption is increasing.[8] As of 2019, more than two-thirds of worldwide newly installed electricity capacity was renewable.[9] Growth in consumption of coal and oil could end by 2020 due to increased uptake of renewables and natural gas.[10][11]
+
+ At the national level, at least 30 nations around the world already have renewable energy contributing more than 20 percent of energy supply. National renewable energy markets are projected to continue to grow strongly in the coming decade and beyond.[12]
+ Some places and at least two countries, Iceland and Norway, already generate all their electricity using renewable energy, and many other countries have set a goal to reach 100% renewable energy in the future.[13]
+ At least 47 nations around the world already have over 50 percent of electricity from renewable resources.[14][15][16] Renewable energy resources exist over wide geographical areas, in contrast to fossil fuels, which are concentrated in a limited number of countries. Rapid deployment of renewable energy and energy efficiency technologies is resulting in significant energy security, climate change mitigation, and economic benefits.[17] In international public opinion surveys there is strong support for promoting renewable sources such as solar power and wind power.[18][19]
+
+ While many renewable energy projects are large-scale, renewable technologies are also suited to rural and remote areas and developing countries, where energy is often crucial in human development.[20] As most renewable energy technologies provide electricity, renewable energy deployment is often applied in conjunction with further electrification, which has several benefits: electricity can be converted to heat (where necessary generating higher temperatures than fossil fuels), can be converted into mechanical energy with high efficiency, and is clean at the point of consumption.[21][22] In addition, electrification with renewable energy is more efficient and therefore leads to significant reductions in primary energy requirements.[23]
+
+ Renewable energy flows involve natural phenomena such as sunlight, wind, tides, plant growth, and geothermal heat, as the International Energy Agency explains:[24]
+
+ Renewable energy is derived from natural processes that are replenished constantly. In its various forms, it derives directly from the sun, or from heat generated deep within the earth. Included in the definition is electricity and heat generated from solar, wind, ocean, hydropower, biomass, geothermal resources, and biofuels and hydrogen derived from renewable resources.
+
+ Renewable energy resources and significant opportunities for energy efficiency exist over wide geographical areas, in contrast to other energy sources, which are concentrated in a limited number of countries. Rapid deployment of renewable energy and energy efficiency, and technological diversification of energy sources, would result in significant energy security and economic benefits.[17] It would also reduce environmental pollution such as air pollution caused by burning of fossil fuels and improve public health, reduce premature mortalities due to pollution and save associated health costs that amount to several hundred billion dollars annually in the United States alone.[25] Renewable energy sources that derive their energy from the sun, either directly or indirectly, such as hydro and wind, are expected to be capable of supplying humanity with energy for almost another 1 billion years, at which point the predicted increase in heat from the Sun is expected to make the surface of the Earth too hot for liquid water to exist.[26][27][28]
+
+ Climate change and global warming concerns, coupled with the continuing fall in the costs of some renewable energy equipment, such as wind turbines and solar panels, are driving increased use of renewables.[18] New government spending, regulation and policies helped the industry weather the global financial crisis better than many other sectors.[29] As of 2019[update], however, according to the International Renewable Energy Agency, renewables overall share in the energy mix (including power, heat and transport) needs to grow six times faster, in order to keep the rise in average global temperatures "well below" 2.0 °C (3.6 °F) during the present century, compared to pre-industrial levels.[30]
+
+ As of 2011, small solar PV systems provide electricity to a few million households, and micro-hydro configured into mini-grids serves many more. Over 44 million households use biogas made in household-scale digesters for lighting and/or cooking, and more than 166 million households rely on a new generation of more-efficient biomass cookstoves.[31] [needs update] United Nations' eighth Secretary-General Ban Ki-moon has said that renewable energy has the ability to lift the poorest nations to new levels of prosperity.[32] At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of energy supply. National renewable energy markets are projected to continue to grow strongly in the coming decade and beyond, and some 120 countries have various policy targets for longer-term shares of renewable energy, including a 20% target of all electricity generated for the European Union by 2020. Some countries have much higher long-term policy targets of up to 100% renewables. Outside Europe, a diverse group of 20 or more other countries target renewable energy shares in the 2020–2030 time frame that range from 10% to 50%.[12]
+
+ Renewable energy often displaces conventional fuels in four areas: electricity generation, hot water/space heating, transportation, and rural (off-grid) energy services.[4]
+
+ Prior to the development of coal in the mid 19th century, nearly all energy used was renewable. Almost without a doubt the oldest known use of renewable energy, in the form of traditional biomass to fuel fires, dates from more than a million years ago. Use of biomass for fire did not become commonplace until many hundreds of thousands of years later.[37] Probably the second oldest usage of renewable energy is harnessing the wind in order to drive ships over water. This practice can be traced back some 7000 years, to ships in the Persian Gulf and on the Nile.[38] From hot springs, geothermal energy has been used for bathing since Paleolithic times and for space heating since ancient Roman times.[39] Moving into the time of recorded history, the primary sources of traditional renewable energy were human labor, animal power, water power, wind (in grain-crushing windmills), and firewood (a traditional biomass).
+
+ In the 1860s and 1870s there were already fears that civilization would run out of fossil fuels and the need was felt for a better source. In 1873 Professor Augustin Mouchot wrote:
+
+ The time will arrive when the industry of Europe will cease to find those natural resources, so necessary for it. Petroleum springs and coal mines are not inexhaustible but are rapidly diminishing in many places. Will man, then, return to the power of water and wind? Or will he emigrate where the most powerful source of heat sends its rays to all? History will show what will come.[40]
+
+ In 1885, Werner von Siemens, commenting on the discovery of the photovoltaic effect in the solid state, wrote:
+
+ In conclusion, I would say that however great the scientific importance of this discovery may be, its practical value will be no less obvious when we reflect that the supply of solar energy is both without limit and without cost, and that it will continue to pour down upon us for countless ages after all the coal deposits of the earth have been exhausted and forgotten.[41]
+
+ Max Weber mentioned the end of fossil fuel in the concluding paragraphs of his Die protestantische Ethik und der Geist des Kapitalismus (The Protestant Ethic and the Spirit of Capitalism), published in 1905.[42] Development of solar engines continued until the outbreak of World War I. The importance of solar energy was recognized in a 1911 Scientific American article: "in the far distant future, natural fuels having been exhausted [solar power] will remain as the only means of existence of the human race".[43]
+
+ The theory of peak oil was published in 1956.[44] In the 1970s environmentalists promoted the development of renewable energy both as a replacement for the eventual depletion of oil, as well as for an escape from dependence on oil, and the first electricity-generating wind turbines appeared. Solar had long been used for heating and cooling, but solar panels were too costly to build solar farms until 1980.[45]
+
+ In 2018, worldwide installed capacity of wind power was 564 GW.[47]
+
+ Air flow can be used to run wind turbines. Modern utility-scale wind turbines range from around 600 kW to 9 MW of rated power. The power available from the wind is a function of the cube of the wind speed, so as wind speed increases, power output increases up to the maximum output for the particular turbine.[48] Areas where winds are stronger and more constant, such as offshore and high-altitude sites, are preferred locations for wind farms. Typically, full load hours of wind turbines vary between 16 and 57 percent annually, but might be higher in particularly favorable offshore sites.[49]
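+
+ The cubic dependence of wind power on wind speed can be made concrete with an illustrative Python sketch; the rotor size, power coefficient and wind speeds below are assumed values:
+
+ import math
+
+ rho = 1.225                  # air density at sea level, kg/m^3
+ rotor_diameter = 100.0       # m, typical of a multi-megawatt turbine (assumed)
+ cp = 0.40                    # assumed power coefficient (the Betz limit is ~0.593)
+ area = math.pi * (rotor_diameter / 2) ** 2
+
+ def power_mw(v):             # power extracted at wind speed v (m/s), in MW
+     return 0.5 * rho * area * v ** 3 * cp / 1e6
+
+ print(round(power_mw(5.0), 2))    # ~0.24 MW
+ print(round(power_mw(10.0), 2))   # ~1.92 MW: doubling the wind speed gives 8x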
+
+ Wind-generated electricity met nearly 4% of global electricity demand in 2015, with nearly 63 GW of new wind power capacity installed. Wind energy was the leading source of new capacity in Europe, the US and Canada, and the second largest in China. In Denmark, wind energy met more than 40% of its electricity demand while Ireland, Portugal and Spain each met nearly 20%.
+
+ Globally, the long-term technical potential of wind energy is believed to be five times total current global energy production, or 40 times current electricity demand, assuming all practical barriers were overcome. This would require wind turbines to be installed over large areas, particularly in areas of higher wind resources, such as offshore. As offshore wind speeds average ~90% greater than those on land, offshore resources can contribute substantially more energy than land-based turbines.[50]
+
+ In 2017, worldwide renewable hydropower capacity was 1,154 GW.[15]
+
+ Since water is about 800 times denser than air, even a slow flowing stream of water, or moderate sea swell, can yield considerable amounts of energy. There are many forms of water energy.
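+
+ How much power a flow of water carries follows from P = η·ρ·g·Q·h; the flow rates, heads and efficiency in this illustrative Python sketch are assumed values:
+
+ rho_water = 1000.0   # kg/m^3, roughly 800 times the density of air
+ g = 9.81             # m/s^2
+ eta = 0.90           # assumed turbine and generator efficiency
+
+ def hydro_power_kw(flow_m3_s, head_m):
+     return eta * rho_water * g * flow_m3_s * head_m / 1e3
+
+ print(round(hydro_power_kw(2.0, 10.0)))      # a small stream: ~177 kW
+ print(round(hydro_power_kw(2000.0, 80.0)))   # a large dam: ~1.4 million kW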
+
+ Hydropower is produced in 150 countries, with the Asia-Pacific region generating 32 percent of global hydropower in 2010. Of the countries with the largest percentage of electricity from renewables, the top 50 are primarily hydroelectric. China is the largest hydroelectricity producer, with 721 terawatt-hours of production in 2010, representing around 17 percent of domestic electricity use. There are now three hydroelectricity stations larger than 10 GW: the Three Gorges Dam in China, the Itaipu Dam across the Brazil/Paraguay border, and the Guri Dam in Venezuela.[54]
+
+ Wave power, which captures the energy of ocean surface waves, and tidal power, converting the energy of tides, are two forms of hydropower with future potential; however, they are not yet widely employed commercially. A demonstration project operated by the Ocean Renewable Power Company on the coast of Maine, and connected to the grid, harnesses tidal power from the Bay of Fundy, location of the world's highest tidal flow. Ocean thermal energy conversion, which uses the temperature difference between cooler deep and warmer surface waters, is currently not economically feasible.[55][56]
+
+ In 2017, global installed solar capacity was 390 GW.[15]
+
+ Solar energy, radiant light and heat from the sun, is harnessed using a range of ever-evolving technologies such as solar heating, photovoltaics, concentrated solar power (CSP), concentrator photovoltaics (CPV), solar architecture and artificial photosynthesis.[58][59] Solar technologies are broadly characterized as either passive solar or active solar depending on the way they capture, convert, and distribute solar energy. Passive solar techniques include orienting a building to the Sun, selecting materials with favorable thermal mass or light dispersing properties, and designing spaces that naturally circulate air. Active solar technologies encompass solar thermal energy, using solar collectors for heating, and solar power, converting sunlight into electricity either directly using photovoltaics (PV), or indirectly using concentrated solar power (CSP).
+
+ A photovoltaic system converts light into electrical direct current (DC) by taking advantage of the photovoltaic effect.[60] Solar PV has turned into a multi-billion-dollar, fast-growing industry, continues to improve its cost-effectiveness, and has the most potential of any renewable technology together with CSP.[61][62] Concentrated solar power (CSP) systems use lenses or mirrors and tracking systems to focus a large area of sunlight into a small beam. Commercial concentrated solar power plants were first developed in the 1980s. CSP-Stirling has by far the highest efficiency among all solar energy technologies.
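+
+ A common back-of-the-envelope estimate of annual PV output is E = A × r × H × PR; all four figures in this illustrative Python sketch are assumptions, since they vary by site and module:
+
+ area_m2 = 10.0       # panel area
+ efficiency = 0.20    # assumed module efficiency (20%)
+ insolation = 1500.0  # annual solar irradiation on the panels, kWh/m^2/year
+ pr = 0.75            # performance ratio: inverter, wiring and soiling losses
+
+ annual_kwh = area_m2 * efficiency * insolation * pr
+ print(annual_kwh)    # 2250.0 kWh/year from a roughly 2 kW array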
+
+ In 2011, the International Energy Agency said that "the development of affordable, inexhaustible and clean solar energy technologies will have huge longer-term benefits. It will increase countries' energy security through reliance on an indigenous, inexhaustible and mostly import-independent resource, enhance sustainability, reduce pollution, lower the costs of mitigating climate change, and keep fossil fuel prices lower than otherwise. These advantages are global. Hence the additional costs of the incentives for early deployment should be considered learning investments; they must be wisely spent and need to be widely shared".[58] Italy has the largest proportion of solar electricity in the world; in 2015, solar supplied 7.7% of electricity demand in Italy.[63] In 2017, after another year of rapid growth, solar generated approximately 2% of global power, or 460 TWh.[64]
+
+ Global geothermal capacity in 2017 was 12.9 GW.[15]
+
+ High temperature geothermal energy is from thermal energy generated and stored in the Earth. Thermal energy is the energy that determines the temperature of matter. Earth's geothermal energy originates from the original formation of the planet and from radioactive decay of minerals (in currently uncertain[65] but possibly roughly equal[66] proportions). The geothermal gradient, which is the difference in temperature between the core of the planet and its surface, drives a continuous conduction of thermal energy in the form of heat from the core to the surface. The adjective geothermal originates from the Greek roots geo, meaning earth, and thermos, meaning heat.
+
+ The heat that is used for geothermal energy can be from deep within the Earth, all the way down to Earth's core – 4,000 miles (6,400 km) down. At the core, temperatures may reach over 9,000 °F (5,000 °C). Heat conducts from the core to surrounding rock. Extremely high temperature and pressure cause some rock to melt, which is commonly known as magma. Magma convects upward since it is lighter than the solid rock. This magma then heats rock and water in the crust, sometimes up to 700 °F (371 °C).[67]
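+
+ The conductive heat flow driven by the geothermal gradient can be estimated with Fourier's law, q = k · dT/dz; the conductivity and gradient below are typical crustal values assumed for illustration:
+
+ k = 2.5           # thermal conductivity of crustal rock, W/(m*K) (assumed)
+ gradient = 0.025  # geothermal gradient, K/m (about 25 degrees C per km)
+
+ q = k * gradient
+ print(q)          # 0.0625 W/m^2, close to the mean continental heat flux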
+
+ Low temperature geothermal[35] refers to the use of the outer crust of the Earth as a thermal battery to facilitate renewable thermal energy for heating and cooling buildings, and other refrigeration and industrial uses. In this form of geothermal, a geothermal heat pump and ground-coupled heat exchanger are used together to move heat energy into the Earth (for cooling) and out of the Earth (for heating) on a varying seasonal basis. Low temperature geothermal (generally referred to as "GHP") is an increasingly important renewable technology because it both reduces total annual energy loads associated with heating and cooling and flattens the electric demand curve, eliminating the extreme summer and winter peaks in electric supply requirements. Thus low temperature geothermal/GHP is becoming an increasing national priority with multiple tax credit support[68] and focus as part of the ongoing movement toward net zero energy.[36]
+
+ Bioenergy global capacity in 2017 was 109 GW.[15]
+
+ Biomass is biological material derived from living, or recently living organisms. It most often refers to plants or plant-derived materials which are specifically called lignocellulosic biomass.[69] As an energy source, biomass can either be used directly via combustion to produce heat, or indirectly after converting it to various forms of biofuel. Conversion of biomass to biofuel can be achieved by different methods which are broadly classified into: thermal, chemical, and biochemical methods. Wood remains the largest biomass energy source today;[70] examples include forest residues – such as dead trees, branches and tree stumps –, yard clippings, wood chips and even municipal solid waste. In the second sense, biomass includes plant or animal matter that can be converted into fibers or other industrial chemicals, including biofuels. Industrial biomass can be grown from numerous types of plants, including miscanthus, switchgrass, hemp, corn, poplar, willow, sorghum, sugarcane, bamboo,[71] and a variety of tree species, ranging from eucalyptus to oil palm (palm oil).
+
+ Plant energy is produced by crops specifically grown for use as fuel that offer high biomass output per hectare with low input energy.[72] The grain can be used for liquid transportation fuels while the straw can be burned to produce heat or electricity. Plant biomass can also be degraded from cellulose to glucose through a series of chemical treatments, and the resulting sugar can then be used as a first generation biofuel.
+
+ Biomass can be converted to other usable forms of energy such as methane gas[73] or transportation fuels such as ethanol and biodiesel. Rotting garbage, and agricultural and human waste, all release methane gas – also called landfill gas or biogas. Crops, such as corn and sugarcane, can be fermented to produce the transportation fuel, ethanol. Biodiesel, another transportation fuel, can be produced from left-over food products such as vegetable oils and animal fats.[74] Also, biomass to liquids (BTLs) and cellulosic ethanol are still under research.[75][76] There is a great deal of research involving algal fuel or algae-derived biomass, because it is a non-food resource and can be produced at rates 5 to 10 times those of other types of land-based agriculture, such as corn and soy. Once harvested, it can be fermented to produce biofuels such as ethanol, butanol, and methane, as well as biodiesel and hydrogen. The biomass used for electricity generation varies by region. Forest by-products, such as wood residues, are common in the United States. Agricultural waste is common in Mauritius (sugar cane residue) and Southeast Asia (rice husks). Animal husbandry residues, such as poultry litter, are common in the United Kingdom.[77]
+
+ Biofuels include a wide range of fuels which are derived from biomass. The term covers solid, liquid, and gaseous fuels.[78] Liquid biofuels include bioalcohols, such as bioethanol, and oils, such as biodiesel. Gaseous biofuels include biogas, landfill gas and synthetic gas. Bioethanol is an alcohol made by fermenting the sugar components of plant materials and it is made mostly from sugar and starch crops. These include maize, sugarcane and, more recently, sweet sorghum. The latter crop is particularly suitable for growing in dryland conditions, and is being investigated by International Crops Research Institute for the Semi-Arid Tropics for its potential to provide fuel, along with food and animal feed, in arid parts of Asia and Africa.[79]
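+
+ The theoretical ethanol yield of fermentation follows from the stoichiometry C6H12O6 -> 2 C2H5OH + 2 CO2; a brief illustrative check in Python:
+
+ M_glucose, M_ethanol = 180.16, 46.07    # molar masses, g/mol
+
+ max_yield = 2 * M_ethanol / M_glucose   # kg of ethanol per kg of glucose
+ print(round(max_yield, 3))              # ~0.511, the theoretical maximum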
+
+ With advanced technology being developed, cellulosic biomass, such as trees and grasses, is also used as a feedstock for ethanol production. Ethanol can be used as a fuel for vehicles in its pure form, but it is usually used as a gasoline additive to increase octane and improve vehicle emissions. Bioethanol is widely used in the United States and in Brazil. The energy costs for producing bio-ethanol are almost equal to the energy yields from bio-ethanol. However, according to the European Environment Agency, biofuels do not address global warming concerns.[80] Biodiesel is made from vegetable oils, animal fats or recycled greases. It can be used as a fuel for vehicles in its pure form, or more commonly as a diesel additive to reduce levels of particulates, carbon monoxide, and hydrocarbons from diesel-powered vehicles. Biodiesel is produced from oils or fats using transesterification and is the most common biofuel in Europe. Biofuels provided 2.7% of the world's transport fuel in 2010.[81]
+
+ Biomass, biogas and biofuels are burned to produce heat/power and in doing so harm the environment. Pollutants such as sulphur oxides (SOx), nitrogen oxides (NOx), and particulate matter (PM) are produced from the combustion of biomass; the World Health Organisation estimates that 7 million premature deaths are caused each year by air pollution.[82] Biomass combustion is a major contributor.[82][83][84]
+
+ Renewable energy production from some sources such as wind and solar is more variable and more geographically spread than technology based on fossil fuels and nuclear. While integrating it into the wider energy system is feasible, it does lead to some additional challenges. In order for the energy system to remain stable, a set of measures can be taken. Implementation of energy storage, using a wide variety of renewable energy technologies, and implementing a smart grid in which energy is automatically used at the moment it is produced can reduce risks and costs of renewable energy implementation.[85] In some locations, individual households can opt to purchase renewable energy through a consumer green energy program.
+
+ Electrical energy storage is a collection of methods used to store electrical energy. Electrical energy is stored during times when production (especially from intermittent sources such as wind power, tidal power, solar power) exceeds consumption, and returned to the grid when production falls below consumption. Pumped-storage hydroelectricity accounts for more than 90% of all grid power storage. Costs of lithium-ion batteries are dropping rapidly, and they are increasingly being deployed for grid ancillary services and for domestic storage.
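+
+ The energy stored by pumped hydro is simply gravitational potential energy, E = m·g·h; the reservoir volume, head and efficiency in this illustrative Python sketch are assumed:
+
+ g = 9.81
+ volume_m3 = 1.0e6   # assumed upper-reservoir volume: one million m^3 of water
+ head_m = 300.0      # assumed height difference between the two reservoirs
+ efficiency = 0.80   # typical round-trip efficiency of pumped storage
+
+ joules = 1000.0 * volume_m3 * g * head_m * efficiency
+ print(round(joules / 3.6e9))   # ~654 MWh recoverable from one full cycle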
+
+ Renewable power has been more effective in creating jobs than coal or oil in the United States.[86] In 2016, employment in the sector increased 6 percent in the United States, while employment in the non-renewable energy sector decreased 18 percent. Worldwide, renewables employed about 8.1 million people as of 2016.[87]
+
+ From the end of 2004, worldwide renewable energy capacity grew at rates of 10–60% annually for many technologies. In 2015 global investment in renewables rose 5% to $285.9 billion, breaking the previous record of $278.5 billion in 2011. 2015 was also the first year that saw renewables, excluding large hydro, account for the majority of all new power capacity (134 GW, making up 53.6% of the total). Of the renewables total, wind accounted for 72 GW and solar photovoltaics 56 GW; both record-breaking numbers and sharply up from 2014 figures (49 GW and 45 GW respectively). In financial terms, solar made up 56% of total new investment and wind accounted for 38%.
+
+ In 2014 global wind power capacity expanded 16% to 369,553 MW.[90] Yearly wind energy production is also growing rapidly and has reached around 4% of worldwide electricity usage,[91] 11.4% in the EU,[92] and it is widely used in Asia, and the United States. In 2015, worldwide installed photovoltaics capacity increased to 227 gigawatts (GW), sufficient to supply 1 percent of global electricity demands.[93] Solar thermal energy stations operate in the United States and Spain, and as of 2016, the largest of these is the 392 MW Ivanpah Solar Electric Generating System in California.[94][95] The world's largest geothermal power installation is The Geysers in California, with a rated capacity of 750 MW. Brazil has one of the largest renewable energy programs in the world, involving production of ethanol fuel from sugar cane, and ethanol now provides 18% of the country's automotive fuel. Ethanol fuel is also widely available in the United States.
+
+ In 2017, investments in renewable energy amounted to US$279.8 billion worldwide, with China accounting for US$126.6 billion or 45% of the global investments, the US for US$40.5 billion, and Europe for US$40.9 billion.[6] The results of a recent review of the literature concluded that as greenhouse gas (GHG) emitters begin to be held liable for damages resulting from GHG emissions resulting in climate change, a high value for liability mitigation would provide powerful incentives for deployment of renewable energy technologies.[96]
+
+ Renewable energy technologies are getting cheaper, through technological change and through the benefits of mass production and market competition. A 2018 report from the International Renewable Energy Agency (IRENA) found that the cost of renewable energy is falling quickly, and will likely be equal to or less than the cost of non-renewables such as fossil fuels by 2020. The report found that solar power costs have dropped 73% since 2010 and onshore wind costs have dropped by 23% in that same timeframe.[106]
+
+ Current projections concerning the future cost of renewables vary, however. The EIA has predicted that almost two thirds of net additions to power capacity will come from renewables by 2020, due to the combined policy benefits of reduced local pollution, decarbonisation and energy diversification.
+
+ According to a 2018 report by Bloomberg New Energy Finance, wind and solar power are expected to generate roughly 50% of the world's energy needs by 2050, while coal powered electricity plants are expected to drop to just 11%.[107]
+ Hydro-electricity and geothermal electricity produced at favourable sites are now the cheapest way to generate electricity. Renewable energy costs continue to drop, and the levelised cost of electricity (LCOE) is declining for wind power, solar photovoltaic (PV), concentrated solar power (CSP) and some biomass technologies.[108] Renewable energy is also the most economic solution for new grid-connected capacity in areas with good resources. As the cost of renewable power falls, the scope of economically viable applications increases. Renewable technologies are now often the most economic solution for new generating capacity. Where "oil-fired generation is the predominant power generation source (e.g. on islands, off-grid and in some countries) a lower-cost renewable solution almost always exists today".[108] A series of studies by the US National Renewable Energy Laboratory modeled the "grid in the Western US under a number of different scenarios where intermittent renewables accounted for 33 percent of the total power." In the models, inefficiencies in cycling the fossil fuel plants to compensate for the variation in solar and wind energy resulted in an additional cost of "between $0.47 and $1.28 to each MegaWatt hour generated"; however, the savings in the cost of the fuels saved "adds up to $7 billion, meaning the added costs are, at most, two percent of the savings."[109]
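+
+ For readers unfamiliar with the term, the levelised cost of electricity is the discounted lifetime cost of a plant divided by its discounted lifetime generation; a minimal illustrative Python sketch, with every plant figure assumed:
+
+ def lcoe(capex, annual_opex, annual_mwh, years=25, rate=0.05):
+     """Simplified LCOE in $/MWh, assuming constant output and O&M costs."""
+     disc = sum((1 + rate) ** -t for t in range(1, years + 1))
+     return (capex + annual_opex * disc) / (annual_mwh * disc)
+
+ # Assumed: $120m wind farm (100 MW at $1,200/kW), $4m/yr O&M, 35% capacity factor
+ print(round(lcoe(120e6, 4e6, 100 * 8760 * 0.35), 2))   # ~40.82 $/MWh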
+
+ In 2017 the world renewable hydropower capacity was 1,154 GW.[15] Only a quarter of the world's estimated hydroelectric potential of 14,000 TWh/year has been developed. The regional potentials for the growth of hydropower around the world are: 71% in Europe, 75% in North America, 79% in South America, 95% in Africa, 95% in the Middle East, and 82% in Asia Pacific. However, the political realities of new reservoirs in western countries, economic limitations in the third world and the lack of a transmission system in undeveloped areas result in the possibility of developing 25% of the remaining potential before 2050, with the bulk of that being in the Asia Pacific area.[110] There is slow growth taking place in Western countries,[citation needed] but not in the conventional dam and reservoir style of the past. New projects take the form of run-of-the-river and small hydro, neither using large reservoirs. It is popular to repower old dams, thereby increasing their efficiency and capacity as well as quicker responsiveness on the grid.[111] Where circumstances permit, existing dams such as the Russell Dam built in 1985 may be updated with "pump back" facilities for pumped storage, which is useful for peak loads or to support intermittent wind and solar power. Countries with large hydroelectric developments such as Canada and Norway are spending billions to expand their grids to trade with neighboring countries having limited hydro.[112]
+
+ Wind power is widely used in Europe, China, and the United States. From 2004 to 2017, worldwide installed capacity of wind power grew from 47 GW to 514 GW, a more than tenfold increase within 13 years.[15] As of the end of 2014, China, the United States and Germany combined accounted for half of total global capacity.[90] Several other countries have achieved relatively high levels of wind power penetration, such as 21% of stationary electricity production in Denmark, 18% in Portugal, 16% in Spain, and 14% in Ireland in 2010, and have since continued to expand their installed capacity.[113][114] More than 80 countries around the world are using wind power on a commercial basis.[81]
+
+ Wind turbines are increasing in power, with some commercially deployed models generating over 8 MW per turbine.[115][116][117] More powerful models are in development; see the list of most powerful wind turbines.
+
+ Solar thermal energy capacity has increased from 1.3 GW in 2012 to 5.0 GW in 2017.[15]
+
+ Spain is the world leader in solar thermal power deployment, with 2.3 GW deployed.[15] The United States has 1.8 GW,[15] most of it in California, where 1.4 GW of solar thermal power projects are operational.[121] Several power plants have been constructed in the Mojave Desert, Southwestern United States. As of 2017, only four other countries have deployments above 100 MW:[15] South Africa (300 MW), India (229 MW), Morocco (180 MW), and the United Arab Emirates (100 MW).
+
+ The United States conducted much early research in photovoltaics and concentrated solar power. The U.S. is among the top countries in the world in electricity generated by the Sun and several of the world's largest utility-scale installations are located in the desert Southwest.
+
+ The oldest solar thermal power plant in the world is the 354 megawatt (MW) SEGS thermal power plant, in California.[122] The Ivanpah Solar Electric Generating System is a solar thermal power project in the California Mojave Desert, 40 miles (64 km) southwest of Las Vegas, with a gross capacity of 377 MW.[123] The 280 MW Solana Generating Station is a solar power plant near Gila Bend, Arizona, about 70 miles (110 km) southwest of Phoenix, completed in 2013. When commissioned it was the largest parabolic trough plant in the world and the first U.S. solar plant with molten salt thermal energy storage.[124]
+
+ In developing countries, three World Bank projects for integrated solar thermal/combined-cycle gas-turbine power plants in Egypt, Mexico, and Morocco have been approved.[125]
+
+ Worldwide growth of PV capacity grouped by region in MW (2006–2014)
+
+ Photovoltaics (PV) is growing rapidly, with global capacity increasing from 177 GW at the end of 2014 to 385 GW in 2017.[15]
+
+ PV uses solar cells assembled into solar panels to convert sunlight into electricity. PV systems range from small residential and commercial rooftop or building-integrated installations to large utility-scale photovoltaic power stations. The predominant PV technology is crystalline silicon, while thin-film solar cell technology accounts for about 10 percent of global photovoltaic deployment. In recent years, PV technology has improved its electricity generating efficiency, reduced the installation cost per watt as well as its energy payback time, and reached grid parity in at least 30 different markets by 2014.[126]
+ Building-integrated photovoltaics or "onsite" PV systems use existing land and structures and generate power close to where it is consumed.[127]
+
+ Photovoltaics grew fastest in China, followed by Japan and the United States. Italy meets 7.9 percent of its electricity demands with photovoltaic power—the highest share worldwide.[128] Solar power is forecasted to become the world's largest source of electricity by 2050, with solar photovoltaics and concentrated solar power contributing 16% and 11%, respectively. This requires an increase of installed PV capacity to 4,600 GW, of which more than half is expected to be deployed in China and India.[129]
+
+ Commercial concentrated solar power plants were first developed in the 1980s. As the cost of solar electricity has fallen, the number of grid-connected solar PV systems has grown into the millions and utility-scale solar power stations with hundreds of megawatts are being built. Many solar photovoltaic power stations have been built, mainly in Europe, China and the United States.[130] The 1.5 GW Tengger Desert Solar Park, in China is the world's largest PV power station. Many of these plants are integrated with agriculture and some use tracking systems that follow the sun's daily path across the sky to generate more electricity than fixed-mounted systems.
+
+ Bioenergy global capacity in 2017 was 109 GW.[15]
+ Biofuels provided 3% of the world's transport fuel in 2017.[131]
+
+ Mandates for blending biofuels exist in 31 countries at the national level and in 29 states/provinces.[81] According to the International Energy Agency, biofuels have the potential to meet more than a quarter of world demand for transportation fuels by 2050.[132]
+
+ Since the 1970s, Brazil has had an ethanol fuel program which has allowed the country to become the world's second largest producer of ethanol (after the United States) and the world's largest exporter.[133] Brazil's ethanol fuel program uses modern equipment and cheap sugarcane as feedstock, and the residual cane-waste (bagasse) is used to produce heat and power.[134] There are no longer light vehicles in Brazil running on pure gasoline. By the end of 2008 there were 35,000 filling stations throughout Brazil with at least one ethanol pump.[135] However, Operation Car Wash has seriously eroded public trust in oil companies and has implicated several high-ranking Brazilian officials.
+
+ Nearly all the gasoline sold in the United States today is mixed with 10% ethanol,[136] and motor vehicle manufacturers already produce vehicles designed to run on much higher ethanol blends. Ford, Daimler AG, and GM are among the automobile companies that sell "flexible-fuel" cars, trucks, and minivans that can use gasoline and ethanol blends ranging from pure gasoline up to 85% ethanol. By mid-2006, there were approximately 6 million ethanol compatible vehicles on U.S. roads.[137]
+
+ Global geothermal capacity in 2017 was 12.9 GW.[15]
+
+ Geothermal power is cost effective, reliable, sustainable, and environmentally friendly,[138] but has historically been limited to areas near tectonic plate boundaries. Recent technological advances have expanded the range and size of viable resources, especially for applications such as home heating, opening a potential for widespread exploitation. Geothermal wells release greenhouse gases trapped deep within the earth, but these emissions are usually much lower per energy unit than those of fossil fuels. As a result, geothermal power has the potential to help mitigate global warming if widely deployed in place of fossil fuels.
+
+ In 2017, the United States led the world in geothermal electricity production.[15] The largest group of geothermal power plants in the world is located at The Geysers, a geothermal field in California.[139] The Philippines follows the US as the second highest producer of geothermal power in the world, with 1.9 GW of capacity online.[15]
149
+
150
+ Renewable energy technology has sometimes been seen by critics as a costly luxury item, affordable only in the affluent developed world. This erroneous view has persisted for many years; between 2016 and 2017, however, investments in renewable energy were higher in developing countries than in developed countries, with China leading global investment with a record 126.6 billion dollars. Many Latin American and African countries increased their investments significantly as well.[140]
151
+ Renewable energy can be particularly suitable for developing countries. In rural and remote areas, transmission and distribution of energy generated from fossil fuels can be difficult and expensive. Producing renewable energy locally can offer a viable alternative.[141]
152
+
153
+ Technology advances are opening up a huge new market for solar power: the approximately 1.3 billion people around the world who do not have access to grid electricity. Even though they are typically very poor, these people have to pay far more for lighting than people in rich countries because they use inefficient kerosene lamps. Solar power costs half as much as lighting with kerosene.[142] As of 2010, an estimated 3 million households get power from small solar PV systems.[143] Kenya is the world leader in the number of solar power systems installed per capita. More than 30,000 very small solar panels, each producing 12 to 30 watts,[144] are sold in Kenya annually. Some Small Island Developing States (SIDS) are also turning to solar power to reduce their costs and increase their sustainability.
154
+
155
+ Micro-hydro configured into mini-grids also provides power. Over 44 million households use biogas made in household-scale digesters for lighting and/or cooking, and more than 166 million households rely on a new generation of more-efficient biomass cookstoves.[31] Clean liquid fuels sourced from renewable feedstocks are used for cooking and lighting in energy-poor areas of the developing world. Alcohol fuels (ethanol and methanol) can be produced sustainably from non-food sugary, starchy, and cellulosic feedstocks. Project Gaia, Inc. and CleanStar Mozambique are implementing clean cooking programs with liquid ethanol stoves in Ethiopia, Kenya, Nigeria and Mozambique.[145]
156
+
157
+ Renewable energy projects in many developing countries have demonstrated that renewable energy can directly contribute to poverty reduction by providing the energy needed for creating businesses and employment. Renewable energy technologies can also make indirect contributions to alleviating poverty by providing energy for cooking, space heating, and lighting. Renewable energy can also contribute to education, by providing electricity to schools.[146]
158
+
159
+ Policies to support renewable energy have been vital to its expansion. Where Europe dominated in establishing energy policy in the early 2000s, most countries around the world now have some form of energy policy.[147]
160
+
161
+ The International Renewable Energy Agency (IRENA) is an intergovernmental organization for promoting the adoption of renewable energy worldwide. It aims to provide concrete policy advice and facilitate capacity building and technology transfer. IRENA was formed in 2009 by 75 countries signing its charter.[149] As of April 2019, IRENA has 160 member states.[150] The then United Nations Secretary-General, Ban Ki-moon, said that renewable energy has the ability to lift the poorest nations to new levels of prosperity,[32] and in September 2011 he launched the UN Sustainable Energy for All initiative to improve energy access, efficiency and the deployment of renewable energy.[151]
162
+
163
+ The 2015 Paris Agreement on climate change motivated many countries to develop or improve renewable energy policies.[12] By 2017, a total of 121 countries had adopted some form of renewable energy policy.[147] National targets that year existed in 176 countries.[12] In addition, there is also a wide range of policies at state/provincial and local levels.[81] Some public utilities help plan or install residential energy upgrades. Under President Barack Obama, United States policy encouraged the uptake of renewable energy in line with commitments to the Paris Agreement. Even though President Trump has abandoned these goals, renewable investment is still on the rise.[152]
164
+
165
+ Many national, state, and local governments have created green banks. A green bank is a quasi-public financial institution that uses public capital to leverage private investment in clean energy technologies.[153] Green banks use a variety of financial tools to bridge market gaps that hinder the deployment of clean energy. The US military has also focused on the use of renewable fuels for military vehicles. Unlike fossil fuels, renewable fuels can be produced in any country, creating a strategic advantage. The US military has already committed itself to having 50% of its energy consumption come from alternative sources.[154]
166
+
167
+ The drive to use 100% renewable energy, for electricity, transport, or even total primary energy supply globally, has been motivated by global warming and other ecological as well as economic concerns. The Intergovernmental Panel on Climate Change has said that there are few fundamental technological limits to integrating a portfolio of renewable energy technologies to meet most of total global energy demand. Renewable energy use has grown much faster than even advocates anticipated.[155] At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of energy supply. Professors S. Pacala and Robert H. Socolow have also developed a series of "stabilization wedges" that can allow us to maintain our quality of life while avoiding catastrophic climate change, and "renewable energy sources," in aggregate, constitute the largest number of their "wedges".[156]
168
+
169
+ Using 100% renewable energy was first suggested in a Science paper published in 1975 by Danish physicist Bent Sørensen.[157] It was followed by several other proposals, until in 1998 the first detailed analysis of scenarios with very high shares of renewables was published. These were followed by the first detailed 100% scenarios. In 2006, Czisch published a PhD thesis showing that, in a 100% renewable scenario, energy supply could match demand in every hour of the year in Europe and North Africa. In the same year, Danish energy professor Henrik Lund published a first paper[158] addressing the optimal combination of renewables, which was followed by several other papers on the transition to 100% renewable energy in Denmark. Lund has since published several more papers on 100% renewable energy. After 2009, publications began to rise steeply, covering 100% scenarios for countries in Europe, America, Australia and other parts of the world.[159]
170
+
171
+ In 2011, Mark Z. Jacobson, professor of civil and environmental engineering at Stanford University, and Mark Delucchi published a study on 100% renewable global energy supply in the journal Energy Policy. They found that producing all new energy with wind power, solar power, and hydropower by 2030 is feasible, and that existing energy supply arrangements could be replaced by 2050. Barriers to implementing the renewable energy plan are seen to be "primarily social and political, not technological or economic".[160] They also found that energy costs with a wind, solar, water system should be similar to today's energy costs.[161]
172
+
173
+ Similarly, in the United States, the independent National Research Council has noted that "sufficient domestic renewable resources exist to allow renewable electricity to play a significant role in future electricity generation and thus help confront issues related to climate change, energy security, and the escalation of energy costs … Renewable energy is an attractive option because renewable resources available in the United States, taken collectively, can supply significantly greater amounts of electricity than the total current or projected domestic demand."[162]
174
+
175
+ The most significant barriers to the widespread implementation of large-scale renewable energy and low carbon energy strategies are primarily political and not technological.[163][164] According to the 2013 Post Carbon Pathways report, which reviewed many international studies, the key roadblocks are: climate change denial, the fossil fuels lobby, political inaction, unsustainable energy consumption, outdated energy infrastructure, and financial constraints.[165]
176
+
177
+ According to the World Bank, the "below 2°C" climate scenario requires 3 billion tonnes of metals and minerals by 2050. The supply of mined resources such as zinc, molybdenum, silver, nickel and copper must increase by up to 500%.[166] A 2018 analysis estimated that the required increases in the stock of metals used by various sectors range from 1,000% (wind power) to 87,000% (personal vehicle batteries).[167]
178
+
179
+ Other renewable energy technologies are still under development, and include cellulosic ethanol, hot-dry-rock geothermal power, and marine energy.[168] These technologies are not yet widely demonstrated or have limited commercialization. Many are on the horizon and may have potential comparable to other renewable energy technologies, but still depend on attracting sufficient attention and research, development and demonstration (RD&D) funding.[168]
180
+
181
+ There are numerous organizations within the academic, federal, and commercial sectors conducting large scale advanced research in the field of renewable energy. This research spans several areas of focus across the renewable energy spectrum. Most of the research is targeted at improving efficiency and increasing overall energy yields.[169]
182
+ Multiple federally supported research organizations have focused on renewable energy in recent years. Two of the most prominent of these labs are Sandia National Laboratories and the National Renewable Energy Laboratory (NREL), both of which are funded by the United States Department of Energy and supported by various corporate partners.[170] Sandia has a total budget of $2.4 billion[171] while NREL has a budget of $375 million.[172]
183
+
184
+ Collection of static electricity charges from water droplets on metal surfaces is an experimental technology that would be especially useful in low-income countries with relative humidity over 60%.[203]
185
+
186
+ Renewable electricity production from sources such as wind power and solar power is intermittent, which results in reduced capacity factors and requires either energy storage with a capacity equal to total output, or base-load power sources based on fossil fuels or nuclear power.
187
+
188
+ Since the power density per land area of renewable energy sources is at best three orders of magnitude smaller than that of fossil or nuclear power,[204] renewable power plants tend to occupy thousands of hectares, causing environmental concerns and opposition from local residents, especially in densely populated countries. Solar power plants compete with arable land and nature reserves,[205] while on-shore wind farms face opposition due to aesthetic concerns and noise, which affect both humans and wildlife.[206][207][208][209] In the United States, the Massachusetts Cape Wind project was delayed for years partly because of aesthetic concerns. However, residents in other areas have been more positive. According to a town councilor, the overwhelming majority of locals believe that the Ardrossan Wind Farm in Scotland has enhanced the area.[210] These concerns, when directed against renewable energy, are sometimes described as the "not in my back yard" (NIMBY) attitude.
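+
+ To make the scale concrete, a rough back-of-the-envelope comparison can be made. The surface power densities assumed below (about 2 W/m² for a wind farm and on the order of 1,000 W/m² for a nuclear plant site) are typical order-of-magnitude figures from the energy literature, not values taken from the cited source. The land area A needed to deliver an average power P at a surface power density \rho is
+
+ A = \frac{P}{\rho}
+
+ so a 1 GW average output would require roughly 10^9 / 2 = 5 \times 10^8 m² (about 500 km², or 50,000 hectares) for wind, versus roughly 10^9 / 1,000 = 10^6 m² (about 1 km²) for a nuclear site, a gap of between two and three orders of magnitude.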
189
+
190
+ A recent[when?] UK Government document states that "projects are generally more likely to succeed if they have broad public support and the consent of local communities. This means giving communities both a say and a stake".[211] In countries such as Germany and Denmark many renewable projects are owned by communities, particularly through cooperative structures, and contribute significantly to overall levels of renewable energy deployment.[212][213]
191
+
192
+ The market for renewable energy technologies has continued to grow. Climate change concerns and increases in green jobs, coupled with high oil prices, peak oil, oil wars, oil spills, promotion of electric vehicles and renewable electricity, nuclear disasters and increasing government support, are driving increasing renewable energy legislation, incentives and commercialization.[18] New government spending, regulation and policies helped the industry weather the 2009 economic crisis better than many other sectors.[29]
193
+
194
+ While renewables have been very successful in their ever-growing contribution to electrical power, no countries dominated by fossil fuels have a plan to stop using them and get that power from renewables. Only Scotland and Ontario have stopped burning coal, largely due to good natural gas supplies. In the area of transportation, fossil fuels are even more entrenched and solutions harder to find.[214] It is unclear whether the failures lie with policy or with renewable energy itself, but twenty years after the Kyoto Protocol, fossil fuels are still our primary energy source and consumption continues to grow.[215]
195
+
196
+ The International Energy Agency has stated that deployment of renewable technologies usually increases the diversity of electricity sources and, through local generation, contributes to the flexibility of the system and its resistance to central shocks.[216]
197
+
198
+ From around 2010 onwards, there was increasing discussion about the geopolitical impact of the growing use of renewable energy.[217] It was argued that former fossil fuel exporters would experience a weakening of their position in international affairs, while countries with abundant sunshine, wind, hydropower, or geothermal resources would be strengthened.[218] Countries rich in critical materials for renewable energy technologies were also expected to rise in importance in international affairs.[219]
199
+
200
+ The GeGaLo index of geopolitical gains and losses assesses how the geopolitical position of 156 countries may change if the world fully transitions to renewable energy resources. Former fossil fuel exporters are expected to lose power, while the positions of former fossil fuel importers and countries rich in renewable energy resources are expected to strengthen.[220]
201
+
202
+ The ability of biomass and biofuels to contribute to a reduction in CO2 emissions is limited because both biomass and biofuels emit large amounts of air pollution when burned and in some cases compete with food supply. Furthermore, biomass and biofuels consume large amounts of water.[221] Other renewable sources such as wind power, photovoltaics, and hydroelectricity have the advantage of being able to conserve water, lower pollution and reduce CO2 emissions.
203
+ The installations used to produce wind, solar and hydro power are an increasing threat to key conservation areas, with facilities built in areas set aside for nature conservation and other environmentally sensitive areas. They are often much larger than fossil fuel power plants, needing areas of land up to 10 times greater than coal or gas to produce equivalent energy amounts.[222] More than 2,000 renewable energy facilities have been built, and more are under construction, in areas of environmental importance, threatening the habitats of plant and animal species across the globe. The authors of the study emphasized that their work should not be interpreted as anti-renewables, because renewable energy is crucial for reducing carbon emissions. The key is ensuring that renewable energy facilities are built in places where they do not damage biodiversity.[223]
204
+
205
+ Renewable energy devices depend on non-renewable resources such as mined metals and use vast amounts of land due to their small surface power density. Manufacturing of photovoltaic panels, wind turbines and batteries requires significant amounts of rare-earth elements[224] and increases mining operations, which have social and environmental impact.[225] Due to co-occurrence of rare-earth and radioactive elements (thorium, uranium and radium), rare-earth mining results in production of low-level radioactive waste.[226]
206
+
207
+ Solar panels change the albedo of the surface, which increases their contribution to global warming.[227]
208
+
209
+ Burbo, NW England
210
+
211
+ Sunrise at the Fenton Wind Farm in Minnesota, US
212
+
213
+ The Andasol CSP station in Andalusia, Spain
214
+
215
+ Ivanpah solar plant in the Mojave Desert, California, United States
216
+
217
+ Three Gorges Dam and Gezhouba Dam, China
218
+
219
+ Shop selling PV panels in Ouagadougou, Burkina Faso
220
+
221
+ Stump harvesting increases recovery of biomass from forests
222
+
223
+ A small, roof-top mounted PV system in Bonn, Germany
224
+
225
+ The community-owned Westmill Solar Park in South East England
226
+
227
+ Komekurayama photovoltaic power station in Kofu, Japan
228
+
229
+ Krafla, a geothermal power station in Iceland
en/175.html.txt ADDED
@@ -0,0 +1,183 @@
1
+
2
+
3
+
4
+
5
+ In meteorology, a cloud is an aerosol consisting of a visible mass of minute liquid droplets, frozen crystals, or other particles suspended in the atmosphere of a planetary body or similar space.[1] Water or various other chemicals may compose the droplets and crystals. On Earth, clouds are formed as a result of saturation of the air when it is cooled to its dew point, or when it gains sufficient moisture (usually in the form of water vapor) from an adjacent source to raise the dew point to the ambient temperature.
6
+
7
+ They are seen in the Earth's homosphere, which includes the troposphere, stratosphere, and mesosphere. Nephology is the science of clouds, which is undertaken in the cloud physics branch of meteorology. There are two methods of naming clouds in their respective layers of the homosphere, Latin and common.
8
+
9
+ Genus types in the troposphere, the atmospheric layer closest to Earth's surface, have Latin names due to the universal adoption of Luke Howard's nomenclature that was formally proposed in 1802. It became the basis of a modern international system that divides clouds into five physical forms which can be divided or classified further into altitude levels to derive the ten basic genera. The main representative cloud types for each of these forms are stratus, cirrus, stratocumulus, cumulus, and cumulonimbus. Low-level stratiform and stratocumuliform genera do not have any altitude-related prefixes. However mid-level variants of the same physical forms are given the prefix alto- while high-level types carry the prefix cirro-. The other main forms never have prefixes indicating altitude level. Cirriform clouds are always high-level while cumuliform and cumulonimbiform clouds are classified formally as low-level. The latter are also more informally characterized as multi-level or vertical as indicated by the cumulo- prefix. Most of the ten genera derived by this method of classification can be subdivided into species and further subdivided into varieties. Very low stratiform clouds that extend down to the Earth's surface are given the common names fog and mist, but have no Latin names.
10
+
11
+ In the stratosphere and mesosphere, clouds have common names for their main types. They may have the appearance of stratiform veils or sheets, cirriform wisps, or stratocumuliform bands or ripples. They are seen infrequently, mostly in the polar regions of Earth. Clouds have been observed in the atmospheres of other planets and moons in the Solar System and beyond. However, due to their different temperature characteristics, they are often composed of other substances such as methane, ammonia, and sulfuric acid, as well as water.
12
+
13
+ Tropospheric clouds can have a direct effect on climate change on Earth. They may reflect incoming rays from the sun, which can contribute to a cooling effect where and when these clouds occur, or trap longer wave radiation radiating back up from the Earth's surface, which can cause a warming effect. The altitude, form, and thickness of the clouds are the main factors that affect the local heating or cooling of Earth and the atmosphere. Clouds that form above the troposphere are too scarce and too thin to have any influence on climate change.
14
+
15
+ The tabular overview that follows is very broad in scope. It draws from several methods of cloud classification, both formal and informal, used in different levels of the Earth's homosphere by a number of cited authorities. Despite some differences in methodologies and terminologies, the classification schemes seen in this article can be harmonized by using an informal cross-classification of physical forms and altitude levels to derive the 10 tropospheric genera, the fog and mist that forms at surface level, and several additional major types above the troposphere. The cumulus genus includes four species that indicate vertical size and structure which can affect both forms and levels. This table should not be seen as a strict or singular classification, but as an illustration of how various major cloud types are related to each other and defined through a full range of altitude levels from Earth's surface to the "edge of space".
16
+
17
+ The origin of the term "cloud" can be found in the Old English words clud or clod, meaning a hill or a mass of rock. Around the beginning of the 13th century, the word came to be used as a metaphor for rain clouds, because of the similarity in appearance between a mass of rock and cumulus heap cloud. Over time, the metaphoric usage of the word supplanted the Old English weolcan, which had been the literal term for clouds in general.[2][3]
18
+
19
+ Ancient cloud studies were not made in isolation, but were observed in combination with other weather elements and even other natural sciences. Around 340 BC, Greek philosopher Aristotle wrote Meteorologica, a work which represented the sum of knowledge of the time about natural science, including weather and climate. For the first time, precipitation and the clouds from which precipitation fell were called meteors, which originate from the Greek word meteoros, meaning 'high in the sky'. From that word came the modern term meteorology, the study of clouds and weather. Meteorologica was based on intuition and simple observation, but not on what is now considered the scientific method. Nevertheless, it was the first known work that attempted to treat a broad range of meteorological topics in a systematic way, especially the hydrological cycle.[4]
20
+
21
+ After centuries of speculative theories about the formation and behavior of clouds, the first truly scientific studies were undertaken by Luke Howard in England and Jean-Baptiste Lamarck in France. Howard was a methodical observer with a strong grounding in the Latin language, and used his background to classify the various tropospheric cloud types during 1802. He believed that the changing cloud forms in the sky could unlock the key to weather forecasting. Lamarck had worked independently on cloud classification the same year and had come up with a different naming scheme that failed to make an impression even in his home country of France because it used unusual French names for cloud types. His system of nomenclature included 12 categories of clouds, with such names as (translated from French) hazy clouds, dappled clouds, and broom-like clouds. By contrast, Howard used universally accepted Latin, which caught on quickly after it was published in 1803.[5] As a sign of the popularity of the naming scheme, German dramatist and poet Johann Wolfgang von Goethe composed four poems about clouds, dedicating them to Howard. An elaboration of Howard's system was eventually formally adopted by the International Meteorological Conference in 1891.[5] This system covered only the tropospheric cloud types, but the discovery of clouds above the troposphere during the late 19th century eventually led to the creation of separate classification schemes using common names for these very high clouds, which were still broadly similar to some cloud forms identified in the troposphere.[6]
22
+
23
+ Terrestrial clouds can be found throughout most of the homosphere, which includes the troposphere, stratosphere, and mesosphere. Within these layers of the atmosphere, air can become saturated as a result of being cooled to its dew point or by having moisture added from an adjacent source.[7] In the latter case, saturation occurs when the dew point is raised to the ambient air temperature.
24
+
25
+ Adiabatic cooling occurs when one or more of three possible lifting agents – convective, cyclonic/frontal, or orographic – cause a parcel of air containing invisible water vapor to rise and cool to its dew point, the temperature at which the air becomes saturated. As the parcel rises into lower pressure, it expands and cools without exchanging heat with its surroundings, which is why the cooling is termed adiabatic.[8] As the air is cooled to its dew point and becomes saturated, water vapor normally condenses to form cloud drops. This condensation normally occurs on cloud condensation nuclei such as salt or dust particles that are small enough to be held aloft by normal circulation of the air.[9][10]
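+
+ As a rough worked example of this process, the height of the resulting cloud base (the lifting condensation level) can be estimated from the surface temperature T and the dew point T_d, both in °C, using a standard textbook rule of thumb often attributed to Espy; this approximation is a general meteorological estimate rather than a figure from the sources cited here:
+
+ h_{\mathrm{LCL}} \approx 125\,(T - T_d)\ \text{metres}
+
+ For instance, with a surface temperature of 30 °C and a dew point of 22 °C, cumulus bases would be expected near 125 × (30 − 22) = 1,000 m.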
26
+
27
+ One agent is the convective upward motion of air caused by daytime solar heating at surface level.[9] Airmass instability allows for the formation of cumuliform clouds that can produce showers if the air is sufficiently moist.[11] On moderately rare occasions, convective lift can be powerful enough to penetrate the tropopause and push the cloud top into the stratosphere.[12]
28
+
29
+ Frontal and cyclonic lift occur when stable air is forced aloft at weather fronts and around centers of low pressure by a process called convergence.[13] Warm fronts associated with extratropical cyclones tend to generate mostly cirriform and stratiform clouds over a wide area unless the approaching warm airmass is unstable, in which case cumulus congestus or cumulonimbus clouds are usually embedded in the main precipitating cloud layer.[14] Cold fronts are usually faster moving and generate a narrower line of clouds, which are mostly stratocumuliform, cumuliform, or cumulonimbiform depending on the stability of the warm airmass just ahead of the front.[15]
30
+
31
+ A third source of lift is wind circulation forcing air over a physical barrier such as a mountain (orographic lift).[9] If the air is generally stable, nothing more than lenticular cap clouds form. However, if the air becomes sufficiently moist and unstable, orographic showers or thunderstorms may appear.[16]
32
+
33
+ Along with adiabatic cooling that requires a lifting agent, three major nonadiabatic mechanisms exist for lowering the temperature of the air to its dew point. Conductive, radiational, and evaporative cooling require no lifting mechanism and can cause condensation at surface level resulting in the formation of fog.[17][18][19]
34
+
35
+ Water vapor from several main sources can be added to the air as a way of achieving saturation without any cooling process: evaporation from water or moist ground,[20][21][22] precipitation or virga,[23] and transpiration from plants.[24]
36
+
37
+ Tropospheric classification is based on a hierarchy of categories with physical forms and altitude levels at the top.[25][26] These are cross-classified into a total of ten genus types, most of which can be divided into species and further subdivided into varieties which are at the bottom of the hierarchy.[27]
38
+
39
+ Clouds in the troposphere assume five physical forms based on structure and process of formation. These forms are commonly used for the purpose of satellite analysis.[25] They are given below in approximate ascending order of instability or convective activity.[28]
40
+
41
+ Nonconvective stratiform clouds appear in stable airmass conditions and, in general, have flat, sheet-like structures that can form at any altitude in the troposphere.[29] The stratiform group is divided by altitude range into the genera cirrostratus (high-level), altostratus (mid-level), stratus (low-level), and nimbostratus (multi-level).[26] Fog is commonly considered a surface-based cloud layer.[16] The fog may form at surface level in clear air or it may be the result of a very low stratus cloud subsiding to ground or sea level. Conversely, low stratiform clouds result when advection fog is lifted above surface level during breezy conditions.
42
+
43
+ Cirriform clouds in the troposphere are of the genus cirrus and have the appearance of detached or semimerged filaments. They form at high tropospheric altitudes in air that is mostly stable with little or no convective activity, although denser patches may occasionally show buildups caused by limited high-level convection where the air is partly unstable.[30] Clouds resembling cirrus can be found above the troposphere but are classified separately using common names.
44
+
45
+ Clouds of this structure have both cumuliform and stratiform characteristics in the form of rolls, ripples, or elements.[31] They generally form as a result of limited convection in an otherwise mostly stable airmass topped by an inversion layer.[32] If the inversion layer is absent or higher in the troposphere, increased airmass instability may cause the cloud layers to develop tops in the form of turrets consisting of embedded cumuliform buildups.[33] The stratocumuliform group is divided into cirrocumulus (high-level), altocumulus (mid-level), and stratocumulus (low-level).[31]
46
+
47
+ Cumuliform clouds generally appear in isolated heaps or tufts.[34][35] They are the product of localized but generally free-convective lift where no inversion layers are in the troposphere to limit vertical growth. In general, small cumuliform clouds tend to indicate comparatively weak instability. Larger cumuliform types are a sign of greater atmospheric instability and convective activity.[36] Depending on their vertical size, clouds of the cumulus genus type may be low-level or multi-level with moderate to towering vertical extent.[26]
48
+
49
+ The largest free-convective clouds comprise the genus cumulonimbus, which have towering vertical extent. They occur in highly unstable air[9] and often have fuzzy outlines at the upper parts of the clouds that sometimes include anvil tops.[31] These clouds are the product of very strong convection that can penetrate the lower stratosphere.
50
+
51
+ Tropospheric clouds form in any of three levels (formerly called étages) based on altitude range above the Earth's surface. The grouping of clouds into levels is commonly done for the purposes of cloud atlases, surface weather observations,[26] and weather maps.[37] The base-height range for each level varies depending on the latitudinal geographical zone.[26] Each altitude level comprises two or three genus-types differentiated mainly by physical form.[38][31]
52
+
53
+ The standard levels and genus-types are summarised below in approximate descending order of the altitude at which each is normally based.[39] Multi-level clouds with significant vertical extent are separately listed and summarized in approximate ascending order of instability or convective activity.[28]
54
+
55
+ High clouds form at altitudes of 3,000 to 7,600 m (10,000 to 25,000 ft) in the polar regions, 5,000 to 12,200 m (16,500 to 40,000 ft) in the temperate regions, and 6,100 to 18,300 m (20,000 to 60,000 ft) in the tropics.[26] All cirriform clouds are classified as high, thus constitute a single genus cirrus (Ci). Stratocumuliform and stratiform clouds in the high altitude range carry the prefix cirro-, yielding the respective genus names cirrocumulus (Cc) and cirrostratus (Cs). When limited-resolution satellite images of high clouds are analysed without supporting data from direct human observations, distinguishing between individual forms or genus types becomes impossible, and they are then collectively identified as high-type (or informally as cirrus-type, though not all high clouds are of the cirrus form or genus).[40]
56
+
57
+ Nonvertical clouds in the middle level are prefixed by alto-, yielding the genus names altocumulus (Ac) for stratocumuliform types and altostratus (As) for stratiform types. These clouds can form as low as 2,000 m (6,500 ft) above surface at any latitude, but may be based as high as 4,000 m (13,000 ft) near the poles, 7,000 m (23,000 ft) at midlatitudes, and 7,600 m (25,000 ft) in the tropics.[26] As with high clouds, the main genus types are easily identified by the human eye, but distinguishing between them using satellite photography is not possible. Without the support of human observations, these clouds are usually collectively identified as middle-type on satellite images.[40]
58
+
59
+ Low clouds are found from near the surface up to 2,000 m (6,500 ft).[26] Genus types in this level either have no prefix or carry one that refers to a characteristic other than altitude. Clouds that form in the low level of the troposphere are generally of larger structure than those that form in the middle and high levels, so they can usually be identified by their forms and genus types using satellite photography alone.[40]
60
+
61
+
62
+
63
+ These clouds have low- to mid-level bases that form anywhere from near the surface to about 2,400 m (8,000 ft) and tops that can extend into the mid-altitude range and sometimes higher in the case of nimbostratus.
64
+
65
+ This is a diffuse, dark grey, multi-level stratiform layer with great horizontal extent and usually moderate to deep vertical development. It lacks towering structure and looks feebly illuminated from the inside.[58] Nimbostratus normally forms from mid-level altostratus, and develops at least moderate vertical extent[59][60] when the base subsides into the low level during precipitation that can reach moderate to heavy intensity. It achieves even greater vertical development when it simultaneously grows upward into the high level due to large-scale frontal or cyclonic lift.[61] The nimbo- prefix refers to its ability to produce continuous rain or snow over a wide area, especially ahead of a warm front.[62] This thick cloud layer may be accompanied by embedded towering cumuliform or cumulonimbiform types.[60][63] Meteorologists affiliated with the World Meteorological Organization (WMO) officially classify nimbostratus as mid-level for synoptic purposes while informally characterizing it as multi-level.[26] Independent meteorologists and educators appear split between those who largely follow the WMO model[59][60] and those who classify nimbostratus as low-level, despite its considerable vertical extent and its usual initial formation in the middle altitude range.[64][65]
66
+
67
+ These very large cumuliform and cumulonimbiform types have low- to mid-level cloud bases similar to those of the multi-level and moderate vertical types, and tops that nearly always extend into the high levels. They are required to be identified by their standard names or abbreviations in all aviation observations (METARs) and forecasts (TAFs) to warn pilots of possible severe weather and turbulence.[66]
68
+
69
+ Genus types are commonly divided into subtypes called species that indicate specific structural details which can vary according to the stability and windshear characteristics of the atmosphere at any given time and location. Despite this hierarchy, a particular species may be a subtype of more than one genus, especially if the genera are of the same physical form and are differentiated from each other mainly by altitude or level. There are a few species, each of which can be associated with genera of more than one physical form.[72] The species types are grouped below according to the physical forms and genera with which each is normally associated. The forms, genera, and species are listed in approximate ascending order of instability or convective activity.[28]
70
+
71
+ Of the stratiform group, high-level cirrostratus comprises two species. Cirrostratus nebulosus has a rather diffuse appearance lacking in structural detail.[73] Cirrostratus fibratus is a species made of semi-merged filaments that are transitional to or from cirrus.[74] Mid-level altostratus and multi-level nimbostratus always have a flat or diffuse appearance and are therefore not subdivided into species. Low stratus is of the species nebulosus[73] except when broken up into ragged sheets of stratus fractus (see below).[59][72][75]
72
+
73
+ Cirriform clouds have three non-convective species that can form in mostly stable airmass conditions. Cirrus fibratus comprise filaments that may be straight, wavy, or occasionally twisted by non-convective wind shear.[74] The species uncinus is similar but has upturned hooks at the ends. Cirrus spissatus appear as opaque patches that can show light grey shading.[72]
74
+
75
+ Stratocumuliform genus-types (cirrocumulus, altocumulus, and stratocumulus) that appear in mostly stable air have two species each. The stratiformis species normally occur in extensive sheets or in smaller patches where there is only minimal convective activity.[76] Clouds of the lenticularis species tend to have lens-like shapes tapered at the ends. They are most commonly seen as orographic mountain-wave clouds, but can occur anywhere in the troposphere where there is strong wind shear combined with sufficient airmass stability to maintain a generally flat cloud structure. These two species can be found in the high, middle, or low levels of the troposphere depending on the stratocumuliform genus or genera present at any given time.[59][72][75]
76
+
77
+ The species fractus shows variable instability because it can be a subdivision of genus-types of different physical forms that have different stability characteristics. This subtype can be in the form of ragged but mostly stable stratiform sheets (stratus fractus) or small ragged cumuliform heaps with somewhat greater instability (cumulus fractus).[72][75][77] When clouds of this species are associated with precipitating cloud systems of considerable vertical and sometimes horizontal extent, they are also classified as accessory clouds under the name pannus (see section on supplementary features).[78]
78
+
79
+ These species are subdivisions of genus types that can occur in partly unstable air. The species castellanus appears when a mostly stable stratocumuliform or cirriform layer becomes disturbed by localized areas of airmass instability, usually in the morning or afternoon. This results in the formation of cumuliform buildups of limited convection arising from a common stratiform base.[79] Castellanus resembles the turrets of a castle when viewed from the side, and can be found with stratocumuliform genera at any tropospheric altitude level and with limited-convective patches of high-level cirrus.[80] Tufted clouds of the more detached floccus species are subdivisions of genus-types which may be cirriform or stratocumuliform in overall structure. They are sometimes seen with cirrus, cirrocumulus, altocumulus, and stratocumulus.[81]
80
+
81
+ A newly recognized species of stratocumulus or altocumulus has been given the name volutus, a roll cloud that can occur ahead of a cumulonimbus formation.[82] There are some volutus clouds that form as a consequence of interactions with specific geographical features rather than with a parent cloud. Perhaps the strangest geographically specific cloud of this type is the Morning Glory, a rolling cylindrical cloud that appears unpredictably over the Gulf of Carpentaria in Northern Australia. Associated with a powerful "ripple" in the atmosphere, the cloud may be "surfed" in glider aircraft.[83]
82
+
83
+ More general airmass instability in the troposphere tends to produce clouds of the more freely convective cumulus genus type, whose species are mainly indicators of degrees of atmospheric instability and resultant vertical development of the clouds. A cumulus cloud initially forms in the low level of the troposphere as a cloudlet of the species humilis that shows only slight vertical development. If the air becomes more unstable, the cloud tends to grow vertically into the species mediocris, then congestus, the tallest cumulus species[72] which is the same type that the International Civil Aviation Organization refers to as 'towering cumulus'.[66]
84
+
85
+ With highly unstable atmospheric conditions, large cumulus may continue to grow into cumulonimbus calvus (essentially a very tall congestus cloud that produces thunder), then ultimately into the species capillatus when supercooled water droplets at the top of the cloud turn into ice crystals giving it a cirriform appearance.[72][75]
86
+
87
+ Genus and species types are further subdivided into varieties whose names can appear after the species name to provide a fuller description of a cloud. Some cloud varieties are not restricted to a specific altitude level or form, and can therefore be common to more than one genus or species.[84]
88
+
89
+ All cloud varieties fall into one of two main groups. One group identifies the opacities of particular low and mid-level cloud structures and comprises the varieties translucidus (thin translucent), perlucidus (thick opaque with translucent or very small clear breaks), and opacus (thick opaque). These varieties are always identifiable for cloud genera and species with variable opacity. All three are associated with the stratiformis species of altocumulus and stratocumulus. However, only two varieties are seen with altostratus and stratus nebulosus whose uniform structures prevent the formation of a perlucidus variety. Opacity-based varieties are not applied to high clouds because they are always translucent, or in the case of cirrus spissatus, always opaque.[84][85]
90
+
91
+ A second group describes the occasional arrangements of cloud structures into particular patterns that are discernible by a surface-based observer (cloud fields usually being visible only from a significant altitude above the formations). These varieties are not always present with the genera and species with which they are otherwise associated, but only appear when atmospheric conditions favor their formation. Intortus and vertebratus varieties occur on occasion with cirrus fibratus. They are respectively filaments twisted into irregular shapes, and those that are arranged in fishbone patterns, usually by uneven wind currents that favor the formation of these varieties. The variety radiatus is associated with cloud rows of a particular type that appear to converge at the horizon. It is sometimes seen with the fibratus and uncinus species of cirrus, the stratiformis species of altocumulus and stratocumulus, the mediocris and sometimes humilis species of cumulus,[87][88] and with the genus altostratus.[89]
92
+
93
+ Another variety, duplicatus (closely spaced layers of the same type, one above the other), is sometimes found with cirrus of both the fibratus and uncinus species, and with altocumulus and stratocumulus of the species stratiformis and lenticularis. The variety undulatus (having a wavy undulating base) can occur with any clouds of the species stratiformis or lenticularis, and with altostratus. It is only rarely observed with stratus nebulosus. The variety lacunosus is caused by localized downdrafts that create circular holes in the form of a honeycomb or net. It is occasionally seen with cirrocumulus and altocumulus of the species stratiformis, castellanus, and floccus, and with stratocumulus of the species stratiformis and castellanus.[84][85]
94
+
95
+ It is possible for some species to show combined varieties at one time, especially if one variety is opacity-based and the other is pattern-based. An example of this would be a layer of altocumulus stratiformis arranged in seemingly converging rows separated by small breaks. The full technical name of a cloud in this configuration would be altocumulus stratiformis radiatus perlucidus, which would identify respectively its genus, species, and two combined varieties.[75][84][85]
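+
+ Because a full technical name is simply the genus followed by at most one species and then any applicable varieties, the composition can be sketched in a few lines of code. The following Python snippet is purely illustrative; the function name and argument conventions are invented for this example and are not part of any official WMO tooling:
+
+ # Illustrative only: compose a full technical cloud name from the
+ # hierarchy described above (genus -> species -> varieties).
+ def cloud_name(genus, species=None, varieties=()):
+     parts = [genus]              # every cloud has exactly one genus
+     if species is not None:
+         parts.append(species)    # at most one species per cloud
+     parts.extend(varieties)      # varieties may combine, e.g. opacity- and pattern-based
+     return " ".join(parts)
+
+ # The example from the text: genus, species, and two combined varieties.
+ print(cloud_name("altocumulus", "stratiformis", ["radiatus", "perlucidus"]))
+ # prints: altocumulus stratiformis radiatus perlucidus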
96
+
97
+ Supplementary features and accessory clouds are not further subdivisions of cloud types below the species and variety level. Rather, they are either hydrometeors or special cloud types with their own Latin names that form in association with certain cloud genera, species, and varieties.[75][85] Supplementary features, whether in the form of clouds or precipitation, are directly attached to the main genus-cloud. Accessory clouds, by contrast, are generally detached from the main cloud.[90]
98
+
99
+ One group of supplementary features are not actual cloud formations, but precipitation that falls when water droplets or ice crystals that make up visible clouds have grown too heavy to remain aloft. Virga is a feature seen with clouds producing precipitation that evaporates before reaching the ground, these being of the genera cirrocumulus, altocumulus, altostratus, nimbostratus, stratocumulus, cumulus, and cumulonimbus.[90]
100
+
101
+ When the precipitation reaches the ground without completely evaporating, it is designated as the feature praecipitatio.[91] This normally occurs with altostratus opacus, which can produce widespread but usually light precipitation, and with thicker clouds that show significant vertical development. Of the latter, upward-growing cumulus mediocris produces only isolated light showers, while downward-growing nimbostratus is capable of heavier, more extensive precipitation. Towering vertical clouds have the greatest ability to produce intense precipitation events, but these tend to be localized unless organized along fast-moving cold fronts. Showers of moderate to heavy intensity can fall from cumulus congestus clouds. Cumulonimbus, the largest of all cloud genera, has the capacity to produce very heavy showers. Low stratus clouds usually produce only light precipitation, but this always occurs as the feature praecipitatio because this cloud genus lies too close to the ground to allow for the formation of virga.[75][85][90]
102
+
103
+ Incus is the most type-specific supplementary feature, seen only with cumulonimbus of the species capillatus. A cumulonimbus incus cloud top is one that has spread out into a clear anvil shape as a result of rising air currents hitting the stability layer at the tropopause where the air no longer continues to get colder with increasing altitude.[92]
104
+
105
+ The mamma feature forms on the bases of clouds as downward-facing bubble-like protuberances caused by localized downdrafts within the cloud. It is also sometimes called mammatus, an earlier version of the term used before a standardization of Latin nomenclature brought about by the World Meteorological Organization during the 20th century. The best-known is cumulonimbus with mammatus, but the mamma feature is also seen occasionally with cirrus, cirrocumulus, altocumulus, altostratus, and stratocumulus.[90]
106
+
107
+ A tuba feature is a cloud column that may hang from the bottom of a cumulus or cumulonimbus. A newly formed or poorly organized column might be comparatively benign, but can quickly intensify into a funnel cloud or tornado.[90][93][94]
108
+
109
+ An arcus feature is a roll cloud with ragged edges attached to the lower front part of cumulus congestus or cumulonimbus that forms along the leading edge of a squall line or thunderstorm outflow.[95] A large arcus formation can have the appearance of a dark menacing arch.[90]
110
+
111
+ Several new supplementary features have been formally recognized by the World Meteorological Organization (WMO). The feature fluctus can form under conditions of strong atmospheric wind shear when a stratocumulus, altocumulus, or cirrus cloud breaks into regularly spaced crests. This variant is sometimes known informally as a Kelvin–Helmholtz (wave) cloud. This phenomenon has also been observed in cloud formations over other planets and even in the sun's atmosphere.[96] Another highly disturbed but more chaotic wave-like cloud feature associated with stratocumulus or altocumulus cloud has been given the Latin name asperitas. The supplementary feature cavum is a circular fall-streak hole that occasionally forms in a thin layer of supercooled altocumulus or cirrocumulus. Fall streaks consisting of virga or wisps of cirrus are usually seen beneath the hole as ice crystals fall out to a lower altitude. This type of hole is usually larger than typical lacunosus holes. A murus feature is a cumulonimbus wall cloud with a lowering, rotating cloud base that can lead to the development of tornadoes. A cauda feature is a tail cloud that extends horizontally away from the murus cloud and is the result of air feeding into the storm.[82]
112
+
113
+ Supplementary cloud formations detached from the main cloud are known as accessory clouds.[75][85][90] The heavier precipitating clouds, nimbostratus, towering cumulus (cumulus congestus), and cumulonimbus typically see the formation in precipitation of the pannus feature, low ragged clouds of the genera and species cumulus fractus or stratus fractus.[78]
114
+
115
+ A group of accessory clouds comprise formations that are associated mainly with upward-growing cumuliform and cumulonimbiform clouds of free convection. Pileus is a cap cloud that can form over a cumulonimbus or large cumulus cloud,[97] whereas a velum feature is a thin horizontal sheet that sometimes forms like an apron around the middle or in front of the parent cloud.[90] An accessory cloud recently officially recognized by the World Meteorological Organization is the flumen, also known more informally as the beaver's tail. It is formed by the warm, humid inflow of a supercell thunderstorm, and can be mistaken for a tornado. Although the flumen can indicate a tornado risk, it is similar in appearance to pannus or scud clouds and does not rotate.[82]
116
+
117
+ Clouds initially form in clear air or become clouds when fog rises above surface level. The genus of a newly formed cloud is determined mainly by air mass characteristics such as stability and moisture content. If these characteristics change over time, the genus tends to change accordingly. When this happens, the original genus is called a mother cloud. If the mother cloud retains much of its original form after the appearance of the new genus, it is termed a genitus cloud. One example of this is stratocumulus cumulogenitus, a stratocumulus cloud formed by the partial spreading of a cumulus type when there is a loss of convective lift. If the mother cloud undergoes a complete change in genus, it is considered to be a mutatus cloud.[98]
118
+
119
+ The genitus and mutatus categories have been expanded to include certain types that do not originate from pre-existing clouds. The term flammagenitus (Latin for 'fire-made') applies to cumulus congestus or cumulonimbus that are formed by large scale fires or volcanic eruptions. Smaller low-level "pyrocumulus" or "fumulus" clouds formed by contained industrial activity are now classified as cumulus homogenitus (Latin for 'man-made'). Contrails formed from the exhaust of aircraft flying in the upper level of the troposphere can persist and spread into formations resembling cirrus which are designated cirrus homogenitus. If a cirrus homogenitus cloud changes fully to any of the high-level genera, they are termed cirrus, cirrostratus, or cirrocumulus homomutatus. Stratus cataractagenitus (Latin for 'cataract-made') are generated by the spray from waterfalls. Silvagenitus (Latin for 'forest-made') is a stratus cloud that forms as water vapor is added to the air above a forest canopy.[98]
120
+
121
+ Stratocumulus clouds can be organized into "fields" that take on certain specially classified shapes and characteristics. In general, these fields are more discernible from high altitudes than from ground level. They can often be found in the following forms:
122
+
123
+ These patterns are formed from a phenomenon known as a Kármán vortex, which is named after the engineer and fluid dynamicist Theodore von Kármán.[101] Wind-driven clouds can form into parallel rows that follow the wind direction. When the wind and clouds encounter high-elevation land features such as vertically prominent islands, they can form eddies around the high land masses that give the clouds a twisted appearance.[102]
124
+
125
+ Although the local distribution of clouds can be significantly influenced by topography, the global prevalence of cloud cover in the troposphere tends to vary more by latitude. It is most prevalent in and along low pressure zones of surface tropospheric convergence which encircle the Earth close to the equator and near the 50th parallels of latitude in the northern and southern hemispheres.[105] The adiabatic cooling processes that lead to the creation of clouds by way of lifting agents are all associated with convergence; a process that involves the horizontal inflow and accumulation of air at a given location, as well as the rate at which this happens.[106] Near the equator, increased cloudiness is due to the presence of the low-pressure Intertropical Convergence Zone (ITCZ) where very warm and unstable air promotes mostly cumuliform and cumulonimbiform clouds.[107] Clouds of virtually any type can form along the mid-latitude convergence zones depending on the stability and moisture content of the air. These extratropical convergence zones are occupied by the polar fronts where air masses of polar origin meet and clash with those of tropical or subtropical origin.[108] This leads to the formation of weather-making extratropical cyclones composed of cloud systems that may be stable or unstable to varying degrees according to the stability characteristics of the various airmasses that are in conflict.[109]
126
+
127
+ Divergence is the opposite of convergence. In the Earth's troposphere, it involves the horizontal outflow of air from the upper part of a rising column of air, or from the lower part of a subsiding column often associated with an area or ridge of high pressure.[106] Cloudiness tends to be least prevalent near the poles and in the subtropics close to the 30th parallels, north and south. The latter are sometimes referred to as the horse latitudes. The presence of a large-scale high-pressure subtropical ridge on each side of the equator reduces cloudiness at these low latitudes.[110] Similar patterns also occur at higher latitudes in both hemispheres.[111]
128
+
129
+ The luminance or brightness of a cloud is determined by how light is reflected, scattered, and transmitted by the cloud's particles. Its brightness may also be affected by the presence of haze or photometeors such as halos and rainbows.[112] In the troposphere, dense, deep clouds exhibit a high reflectance (70% to 95%) throughout the visible spectrum. Tiny particles of water are densely packed and sunlight cannot penetrate far into the cloud before it is reflected out, giving a cloud its characteristic white color, especially when viewed from the top.[113] Cloud droplets tend to scatter light efficiently, so that the intensity of the solar radiation decreases with depth into the cloud. As a result, the cloud base can vary from very light to very dark grey depending on the cloud's thickness and how much light is being reflected or transmitted back to the observer. High thin tropospheric clouds reflect less light because of the comparatively low concentration of constituent ice crystals or supercooled water droplets which results in a slightly off-white appearance. However, a thick dense ice-crystal cloud appears brilliant white with pronounced grey shading because of its greater reflectivity.[112]
130
+
131
+ As a tropospheric cloud matures, the dense water droplets may combine to produce larger droplets. If the droplets become too large and heavy to be kept aloft by the air circulation, they will fall from the cloud as rain. By this process of accumulation, the space between droplets becomes increasingly larger, permitting light to penetrate farther into the cloud. If the cloud is sufficiently large and the droplets within are spaced far enough apart, a percentage of the light that enters the cloud is not reflected back out but is absorbed, giving the cloud a darker look. A simple example of this is that one can see farther in heavy rain than in heavy fog. This process of reflection/absorption is what causes the range of cloud color from white to black.[114]
132
+
133
+ Striking cloud colorations can be seen at any altitude, with the color of a cloud usually being the same as the incident light.[115] During daytime when the sun is relatively high in the sky, tropospheric clouds generally appear bright white on top with varying shades of grey underneath. Thin clouds may look white or appear to have acquired the color of their environment or background. Red, orange, and pink clouds occur almost entirely at sunrise/sunset and are the result of the scattering of sunlight by the atmosphere. When the sun is just below the horizon, low-level clouds are gray, middle clouds appear rose-colored, and high clouds are white or off-white. Clouds at night are black or dark grey in a moonless sky, or whitish when illuminated by the moon. They may also reflect the colors of large fires, city lights, or auroras that might be present.[115]
134
+
135
+ A cumulonimbus cloud that appears to have a greenish or bluish tint is a sign that it contains extremely large amounts of water in the form of hail or rain, which scatter light in a way that gives the cloud a blue color. A green coloration occurs mostly late in the day, when the sun is comparatively low in the sky and its reddish light illuminates a very tall bluish cloud, making it appear green. Supercell-type storms are more likely to be characterized by this, but any storm can appear this way. Coloration such as this does not directly indicate a severe thunderstorm; it only confirms the potential. Because a green or blue tint signifies copious amounts of water, a strong updraft to support it, high winds from the storm raining out, and wet hail, all elements that improve the chance of the storm becoming severe can be inferred from it. In addition, the stronger the updraft is, the more likely the storm is to undergo tornadogenesis and to produce large hail and high winds.[116]
136
+
137
+ Yellowish clouds may be seen in the troposphere from late spring through early fall during forest-fire season, the yellow color being due to pollutants in the smoke. Yellowish clouds can also be caused by the presence of nitrogen dioxide and are sometimes seen in urban areas with high air-pollution levels.[117]
138
+
139
+ Stratocumulus stratiformis and small castellanus made orange by the sun rising
140
+
141
+ An occurrence of cloud iridescence with altocumulus volutus and cirrocumulus stratiformis
142
+
143
+ Sunset reflecting shades of pink onto grey stratocumulus stratiformis translucidus (becoming perlucidus in the background)
144
+
145
+ Stratocumulus stratiformis perlucidus before sunset. Bangalore, India.
146
+
147
+ Late-summer rainstorm in Denmark. Nearly black color of base indicates main cloud in foreground probably cumulonimbus.
148
+
149
+ Particles in the atmosphere and the sun's angle enhance colors of stratocumulus cumulogenitus at evening twilight
150
+
151
+ Tropospheric clouds exert numerous influences on Earth's troposphere and climate. First and foremost, they are the source of precipitation, thereby greatly influencing the distribution and amount of precipitation. Because of their differential buoyancy relative to surrounding cloud-free air, clouds can be associated with vertical motions of the air that may be convective, frontal, or cyclonic. The motion is upward where the cloudy air is less dense, because condensation of water vapor releases heat, warming the air and thereby decreasing its density. Downward motion can occur where the air becomes denser, for example where evaporation of cloud water or falling precipitation cools the air. All of these effects are subtly dependent on the vertical temperature and moisture structure of the atmosphere and result in a major redistribution of heat that affects the Earth's climate.[118]
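+
+ The warming from condensation mentioned above is easy to estimate from the latent heat of vaporization and the heat capacity of air. The following is a back-of-the-envelope sketch using standard textbook constants, not a cloud model:
+
+ L_V = 2.5e6   # latent heat of vaporization of water, J/kg
+ C_P = 1004.0  # specific heat of dry air at constant pressure, J/(kg K)
+
+ def condensation_warming_k(condensed_kg_per_kg_air):
+     """Temperature rise of an air parcel when water vapor condenses in it."""
+     return L_V * condensed_kg_per_kg_air / C_P
+
+ # Condensing 1 g of water per kg of air warms the parcel by about 2.5 K,
+ # lowering its density relative to its surroundings and promoting uplift.
+ print(f"{condensation_warming_k(0.001):.1f} K")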
152
+
153
+ The complexity and diversity of clouds in the troposphere is a major reason for the difficulty of quantifying the effects of clouds on climate and climate change. On the one hand, white cloud tops promote cooling of Earth's surface by reflecting shortwave radiation (visible and near infrared) from the sun, diminishing the amount of solar radiation that is absorbed at the surface and enhancing the Earth's albedo. Most of the sunlight that reaches the ground is absorbed, warming the surface, which emits radiation upward at longer, infrared, wavelengths. At these wavelengths, however, water in the clouds acts as an efficient absorber. The cloud water responds by re-radiating, also in the infrared, both upward and downward, and the downward longwave radiation results in increased warming at the surface. This is analogous to the greenhouse effect of greenhouse gases and water vapor.[118]
154
+
155
+ High-level genus-types particularly show this duality with both short-wave albedo cooling and long-wave greenhouse warming effects. On the whole, ice-crystal clouds in the upper troposphere (cirrus) tend to favor net warming.[119][120] However, the cooling effect is dominant with mid-level and low clouds, especially when they form in extensive sheets.[119] Measurements by NASA indicate that on the whole, the effects of low and mid-level clouds that tend to promote cooling outweigh the warming effects of high layers and the variable outcomes associated with vertically developed clouds.[119]
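+
+ The sign of a cloud layer's net radiative effect can be sketched with a toy energy-balance calculation. All figures below (albedo increases, temperatures, emissivities) are illustrative assumptions chosen only to show why low clouds tend to cool and thin high clouds tend to warm:
+
+ SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W/(m^2 K^4)
+ SOLAR_AVG = 340.0  # global-mean incoming solar radiation, W/m^2
+
+ def net_cloud_effect(albedo_increase, t_surface_k, t_cloud_top_k, emissivity):
+     """Toy net radiative effect of a cloud layer in W/m^2 (negative = cooling):
+     extra reflected sunlight versus surface emission replaced by colder
+     cloud-top emission."""
+     shortwave_cooling = -albedo_increase * SOLAR_AVG
+     longwave_warming = emissivity * SIGMA * (t_surface_k**4 - t_cloud_top_k**4)
+     return shortwave_cooling + longwave_warming
+
+ # Bright low cloud with warm tops: strong reflection wins -> net cooling.
+ print(f"low cloud:  {net_cloud_effect(0.30, 288.0, 280.0, 1.0):+.0f} W/m^2")
+ # Thin cirrus with very cold tops: trapped longwave wins -> net warming.
+ print(f"high cloud: {net_cloud_effect(0.05, 288.0, 220.0, 0.3):+.0f} W/m^2")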
156
+
157
+ As difficult as it is to evaluate the influences of current clouds on current climate, it is even more problematic to predict changes in cloud patterns and properties in a future, warmer climate, and the resultant cloud influences on future climate. In a warmer climate more water would enter the atmosphere by evaporation at the surface; as clouds are formed from water vapor, cloudiness would be expected to increase. But in a warmer climate, higher temperatures would tend to evaporate clouds.[121] Both of these statements are considered accurate, and both phenomena, known as cloud feedbacks, are found in climate model calculations. Broadly speaking, if clouds, especially low clouds, increase in a warmer climate, the resultant cooling effect leads to a negative feedback in climate response to increased greenhouse gases. But if low clouds decrease, or if high clouds increase, the feedback is positive. Differing amounts of these feedbacks are the principal reason for differences in climate sensitivities of current global climate models. As a consequence, much research has focused on the response of low and vertical clouds to a changing climate. Leading global models produce quite different results, however, with some showing increasing low clouds and others showing decreases.[122][123] For these reasons the role of tropospheric clouds in regulating weather and climate remains a leading source of uncertainty in global warming projections.[124][125]
158
+
159
+ Polar stratospheric clouds (PSCs) form in the lowest part of the stratosphere during the winter, at the altitude and during the season that produces the coldest temperatures and therefore the best chances of triggering condensation caused by adiabatic cooling. Moisture is scarce in the stratosphere, so nacreous and non-nacreous clouds at this altitude range are restricted to polar regions in the winter where the air is coldest.[6]
160
+
161
+ PSCs show some variation in structure according to their chemical makeup and atmospheric conditions, but are limited to a single very high range of altitude of about 15,000–25,000 m (49,200–82,000 ft), so they are not classified into altitude levels, genus types, species, or varieties. There is no Latin nomenclature in the manner of tropospheric clouds, but rather descriptive names using common English.[6]
162
+
163
+ Supercooled nitric acid and water PSCs, sometimes known as type 1, typically have a stratiform appearance resembling cirrostratus or haze, but because they are not frozen into crystals, do not show the pastel colors of the nacreous types. This type of PSC has been identified as a cause of ozone depletion in the stratosphere.[126] The frozen nacreous types are typically very thin with mother-of-pearl colorations and an undulating cirriform or lenticular (stratocumuliform) appearance. These are sometimes known as type 2.[127][128]
164
+
165
+ Polar mesospheric clouds form at an extreme-level altitude range of about 80 to 85 km (50 to 53 mi). They are given the Latin name noctilucent because of their illumination well after sunset and before sunrise. They typically have a bluish or silvery white coloration that can resemble brightly illuminated cirrus. Noctilucent clouds may occasionally take on more of a red or orange hue.[6] They are not common or widespread enough to have a significant effect on climate.[129] However, an increasing frequency of occurrence of noctilucent clouds since the 19th century may be the result of climate change.[130]
166
+
167
+ Noctilucent clouds are the highest in the atmosphere and form near the top of the mesosphere at about ten times the altitude of tropospheric high clouds.[131] From ground level, they can occasionally be seen illuminated by the sun during deep twilight. Ongoing research indicates that convective lift in the mesosphere is strong enough during the polar summer to cause adiabatic cooling of a small amount of water vapor to the point of saturation. This tends to produce the coldest temperatures in the entire atmosphere just below the mesopause. These conditions result in the best environment for the formation of polar mesospheric clouds.[129] There is also evidence that smoke particles from burnt-up meteors provide much of the condensation nuclei required for the formation of noctilucent clouds.[132]
168
+
169
+ Noctilucent clouds have four major types based on physical structure and appearance. Type I veils are very tenuous and lack well-defined structure, somewhat like cirrostratus or poorly defined cirrus.[133] Type II bands are long streaks that often occur in groups arranged roughly parallel to each other. They are usually more widely spaced than the bands or elements seen with cirrocumulus clouds.[134] Type III billows are arrangements of closely spaced, roughly parallel short streaks that mostly resemble cirrus.[135] Type IV whirls are partial or, more rarely, complete rings of cloud with dark centres.[136]
170
+
171
+ Distribution in the mesosphere is similar to that in the stratosphere, except at much higher altitudes. Because of the need for maximum cooling of the water vapor to produce noctilucent clouds, their distribution tends to be restricted to the polar regions of Earth. A major seasonal difference is that convective lift from below the mesosphere pushes very scarce water vapor to the higher, colder altitudes required for cloud formation during the respective summer seasons in the northern and southern hemispheres. Sightings are rare more than 45 degrees of latitude from the poles.[6]
172
+
173
+ Cloud cover has been seen on most other planets in the Solar System. Venus's thick clouds are composed of sulfuric acid droplets, derived from volcanic sulfur dioxide, and appear to be almost entirely stratiform.[137] They are arranged in three main layers at altitudes of 45 to 65 km that obscure the planet's surface and can produce virga. No embedded cumuliform types have been identified, but broken stratocumuliform wave formations are sometimes seen in the top layer that reveal more continuous layer clouds underneath.[138] On Mars, noctilucent, cirrus, cirrocumulus and stratocumulus composed of water-ice have been detected mostly near the poles.[139][140] Water-ice fogs have also been detected on Mars.[141]
174
+
175
+ Both Jupiter and Saturn have an outer cirriform cloud deck composed of ammonia,[142][143] an intermediate stratiform haze-cloud layer made of ammonium hydrosulfide, and an inner deck of cumulus water clouds.[144][145] Embedded cumulonimbus are known to exist near the Great Red Spot on Jupiter.[146][147] The same category-types can be found covering Uranus and Neptune, but are all composed of methane.[148][149][150][151][152][153] Saturn's moon Titan has cirrus clouds believed to be composed largely of methane.[154][155] The Cassini–Huygens Saturn mission uncovered evidence of polar stratospheric clouds[156] and a methane cycle on Titan, including lakes near the poles and fluvial channels on the surface of the moon.[157]
176
+
177
+ Some planets outside the Solar System are known to have atmospheric clouds. In October 2013, the detection of high altitude optically thick clouds in the atmosphere of exoplanet Kepler-7b was announced,[158][159] and, in December 2013, in the atmospheres of GJ 436 b and GJ 1214 b.[160][161][162][163]
178
+
179
+ Clouds play an important role in various cultures and religious traditions. The ancient Akkadians believed that the clouds were the breasts of the sky goddess Antu[165] and that rain was milk from her breasts.[165] In Exodus 13:21–22, Yahweh is described as guiding the Israelites through the desert in the form of a "pillar of cloud" by day and a "pillar of fire" by night.[164]
180
+
181
+ In the ancient Greek comedy The Clouds, written by Aristophanes and first performed at the City Dionysia in 423 BC, the philosopher Socrates declares that the Clouds are the only true deities[166] and tells the main character Strepsiades not to worship any deities other than the Clouds, but to pay homage to them alone.[166] In the play, the Clouds change shape to reveal the true nature of whoever is looking at them,[167][166][168] turning into centaurs at the sight of a long-haired politician, wolves at the sight of the embezzler Simon, deer at the sight of the coward Cleonymus, and mortal women at the sight of the effeminate informer Cleisthenes.[167][168][166] They are hailed the source of inspiration to comic poets and philosophers;[166] they are masters of rhetoric, regarding eloquence and sophistry alike as their "friends".[166]
182
+
183
+ In China, clouds are symbols of luck and happiness.[169] Overlapping clouds are thought to imply eternal happiness[169] and clouds of different colors are said to indicate "multiplied blessings".[169]
en/1750.html.txt ADDED
@@ -0,0 +1,229 @@
1
+
2
+
3
+
4
+
5
+ World electricity generation by source in 2017. Total generation was 26 PWh.[1]
6
+
7
+ Renewable energy is energy that is collected from renewable resources, which are naturally replenished on a human timescale, such as sunlight, wind, rain, tides, waves, and geothermal heat.[3] Renewable energy often provides energy in four important areas: electricity generation, air and water heating/cooling, transportation, and rural (off-grid) energy services.[4]
8
+
9
+ Based on REN21's 2017 report, renewables contributed 19.3% of global energy consumption in 2015 and 24.5% of electricity generation in 2016. Of this consumption, 8.9% came from traditional biomass, 4.2% from heat energy (modern biomass, geothermal and solar heat), 3.9% from hydroelectricity, and the remaining 2.2% from electricity generated by wind, solar, geothermal, and other forms of biomass. Worldwide investments in renewable technologies amounted to more than US$286 billion in 2015.[5] In 2017, worldwide investments in renewable energy amounted to US$279.8 billion, with China accounting for US$126.6 billion or 45% of the global investments, the United States for US$40.5 billion and Europe for US$40.9 billion.[6] Globally there are an estimated 7.7 million jobs associated with the renewable energy industries, with solar photovoltaics being the largest renewable employer.[7] Renewable energy systems are rapidly becoming more efficient and cheaper, and their share of total energy consumption is increasing.[8] As of 2019, more than two-thirds of worldwide newly installed electricity capacity was renewable.[9] Growth in consumption of coal and oil could end by 2020 due to increased uptake of renewables and natural gas.[10][11]
10
+
11
+ At the national level, at least 30 nations around the world already have renewable energy contributing more than 20 percent of energy supply. National renewable energy markets are projected to continue to grow strongly in the coming decade and beyond.[12]
12
+ Some places and at least two countries, Iceland and Norway, already generate all their electricity using renewable energy, and many other countries have set a goal to reach 100% renewable energy in the future.[13]
13
+ At least 47 nations around the world already have over 50 percent of electricity from renewable resources.[14][15][16] Renewable energy resources exist over wide geographical areas, in contrast to fossil fuels, which are concentrated in a limited number of countries. Rapid deployment of renewable energy and energy efficiency technologies is resulting in significant energy security, climate change mitigation, and economic benefits.[17] In international public opinion surveys there is strong support for promoting renewable sources such as solar power and wind power.[18][19]
14
+
15
+ While many renewable energy projects are large-scale, renewable technologies are also suited to rural and remote areas and developing countries, where energy is often crucial in human development.[20] As most renewable energy technologies provide electricity, renewable energy deployment is often applied in conjunction with further electrification, which has several benefits: electricity can be converted to heat (where necessary generating higher temperatures than fossil fuels), can be converted into mechanical energy with high efficiency, and is clean at the point of consumption.[21][22] In addition, electrification with renewable energy is more efficient and therefore leads to significant reductions in primary energy requirements.[23]
16
+
17
+ Renewable energy flows involve natural phenomena such as sunlight, wind, tides, plant growth, and geothermal heat, as the International Energy Agency explains:[24]
18
+
19
+ Renewable energy is derived from natural processes that are replenished constantly. In its various forms, it derives directly from the sun, or from heat generated deep within the earth. Included in the definition is electricity and heat generated from solar, wind, ocean, hydropower, biomass, geothermal resources, and biofuels and hydrogen derived from renewable resources.
20
+
21
+ Renewable energy resources and significant opportunities for energy efficiency exist over wide geographical areas, in contrast to other energy sources, which are concentrated in a limited number of countries. Rapid deployment of renewable energy and energy efficiency, and technological diversification of energy sources, would result in significant energy security and economic benefits.[17] It would also reduce environmental pollution such as air pollution caused by the burning of fossil fuels, improve public health, reduce premature mortalities due to pollution, and save associated health costs that amount to several hundred billion dollars annually in the United States alone.[25] Renewable energy sources that derive their energy from the sun, either directly or indirectly, such as hydro and wind, are expected to be capable of supplying humanity with energy for almost another 1 billion years, at which point the predicted increase in heat from the Sun is expected to make the surface of the Earth too hot for liquid water to exist.[26][27][28]
22
+
23
+ Climate change and global warming concerns, coupled with the continuing fall in the costs of some renewable energy equipment, such as wind turbines and solar panels, are driving increased use of renewables.[18] New government spending, regulation and policies helped the industry weather the global financial crisis better than many other sectors.[29] As of 2019[update], however, according to the International Renewable Energy Agency, renewables' overall share in the energy mix (including power, heat and transport) needs to grow six times faster in order to keep the rise in average global temperatures "well below" 2.0 °C (3.6 °F) during the present century, compared to pre-industrial levels.[30]
24
+
25
+ As of 2011, small solar PV systems provide electricity to a few million households, and micro-hydro configured into mini-grids serves many more. Over 44 million households use biogas made in household-scale digesters for lighting and/or cooking, and more than 166 million households rely on a new generation of more-efficient biomass cookstoves.[31] [needs update] United Nations' eighth Secretary-General Ban Ki-moon has said that renewable energy has the ability to lift the poorest nations to new levels of prosperity.[32] At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of energy supply. National renewable energy markets are projected to continue to grow strongly in the coming decade and beyond, and some 120 countries have various policy targets for longer-term shares of renewable energy, including a 20% target of all electricity generated for the European Union by 2020. Some countries have much higher long-term policy targets of up to 100% renewables. Outside Europe, a diverse group of 20 or more other countries target renewable energy shares in the 2020–2030 time frame that range from 10% to 50%.[12]
26
+
27
+ Renewable energy often displaces conventional fuels in four areas: electricity generation, hot water/space heating, transportation, and rural (off-grid) energy services:[4]
28
+
29
+ Prior to the development of coal in the mid-19th century, nearly all energy used was renewable. Almost without a doubt the oldest known use of renewable energy, in the form of traditional biomass to fuel fires, dates from more than a million years ago. Use of biomass for fire did not become commonplace until many hundreds of thousands of years later.[37] Probably the second oldest usage of renewable energy is harnessing the wind in order to drive ships over water. This practice can be traced back some 7000 years, to ships in the Persian Gulf and on the Nile.[38] From hot springs, geothermal energy has been used for bathing since Paleolithic times and for space heating since ancient Roman times.[39] Moving into the time of recorded history, the primary sources of traditional renewable energy were human labor, animal power, water power, wind (in grain-crushing windmills), and firewood, a traditional biomass.
30
+
31
+ In the 1860s and 1870s there were already fears that civilization would run out of fossil fuels and the need was felt for a better source. In 1873 Professor Augustin Mouchot wrote:
32
+
33
+ The time will arrive when the industry of Europe will cease to find those natural resources, so necessary for it. Petroleum springs and coal mines are not inexhaustible but are rapidly diminishing in many places. Will man, then, return to the power of water and wind? Or will he emigrate where the most powerful source of heat sends its rays to all? History will show what will come.[40]
34
+
35
+ In 1885, Werner von Siemens, commenting on the discovery of the photovoltaic effect in the solid state, wrote:
36
+
37
+ In conclusion, I would say that however great the scientific importance of this discovery may be, its practical value will be no less obvious when we reflect that the supply of solar energy is both without limit and without cost, and that it will continue to pour down upon us for countless ages after all the coal deposits of the earth have been exhausted and forgotten.[41]
38
+
39
+ Max Weber mentioned the end of fossil fuel in the concluding paragraphs of his Die protestantische Ethik und der Geist des Kapitalismus (The Protestant Ethic and the Spirit of Capitalism), published in 1905.[42] Development of solar engines continued until the outbreak of World War I. The importance of solar energy was recognized in a 1911 Scientific American article: "in the far distant future, natural fuels having been exhausted [solar power] will remain as the only means of existence of the human race".[43]
40
+
41
+ The theory of peak oil was published in 1956.[44] In the 1970s environmentalists promoted the development of renewable energy both as a replacement for the eventual depletion of oil, as well as for an escape from dependence on oil, and the first electricity-generating wind turbines appeared. Solar had long been used for heating and cooling, but solar panels were too costly to build solar farms until 1980.[45]
42
+
43
+ In 2018, worldwide installed capacity of wind power was 564 GW.[47]
44
+
45
+ Air flow can be used to run wind turbines. Modern utility-scale wind turbines range from around 600 kW to 9 MW of rated power. The power available from the wind is a function of the cube of the wind speed, so as wind speed increases, power output increases up to the maximum output for the particular turbine.[48] Areas where winds are stronger and more constant, such as offshore and high-altitude sites, are preferred locations for wind farms. Typically, the annual full-load hours of wind turbines correspond to between 16 and 57 percent of the year, and the figure can be higher at particularly favorable offshore sites.[49]
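+
+ The cube-law relationship can be made concrete with the standard formula for power extracted from the wind, P = ½ρAv³ reduced by a power coefficient. The rotor size and coefficient below are plausible but assumed figures, and real turbines also cap output at rated power:
+
+ import math
+
+ AIR_DENSITY = 1.225  # kg/m^3 at sea level
+
+ def wind_power_watts(wind_speed_ms, rotor_diameter_m, power_coefficient=0.40):
+     """P = 0.5 * rho * A * v^3 * Cp; Cp is bounded in theory by the
+     Betz limit (~0.593), and 0.40 is assumed here for illustration."""
+     swept_area = math.pi * (rotor_diameter_m / 2) ** 2
+     return 0.5 * AIR_DENSITY * swept_area * wind_speed_ms ** 3 * power_coefficient
+
+ for v in (6.0, 8.0, 12.0):  # m/s, with a 120 m rotor
+     print(f"{v:4.1f} m/s -> {wind_power_watts(v, 120.0) / 1e6:5.2f} MW")
+
+ # Doubling the wind speed multiplies available power by 2**3 = 8, which is
+ # why modestly windier offshore and high-altitude sites are so valuable.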
46
+
47
+ Wind-generated electricity met nearly 4% of global electricity demand in 2015, with nearly 63 GW of new wind power capacity installed. Wind energy was the leading source of new capacity in Europe, the US and Canada, and the second largest in China. In Denmark, wind energy met more than 40% of its electricity demand while Ireland, Portugal and Spain each met nearly 20%.
48
+
49
+ Globally, the long-term technical potential of wind energy is believed to be five times total current global energy production, or 40 times current electricity demand, assuming all practical barriers were overcome. This would require wind turbines to be installed over large areas, particularly in areas of higher wind resources, such as offshore. Because offshore wind speeds average about 90% greater than those over land, offshore resources can contribute substantially more energy than land-based turbines.[50]
50
+
51
+ In 2017, worldwide renewable hydropower capacity was 1,154 GW.[15]
52
+
53
+ Since water is about 800 times denser than air, even a slow-flowing stream of water, or a moderate sea swell, can yield considerable amounts of energy. Water energy takes many forms, including conventional hydroelectric dams, run-of-the-river systems, and wave and tidal power.
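+
+ The power available from falling water follows P = ηρgQh, where Q is the flow rate and h the head. A minimal sketch, with an illustrative flow, head, and turbine efficiency:
+
+ WATER_DENSITY = 1000.0  # kg/m^3
+ G = 9.81                # m/s^2
+
+ def hydro_power_watts(flow_m3_per_s, head_m, efficiency=0.90):
+     """P = eta * rho * g * Q * h; 90% is a typical large-turbine
+     efficiency, assumed here for illustration."""
+     return efficiency * WATER_DENSITY * G * flow_m3_per_s * head_m
+
+ # A modest intake of 20 m^3/s falling through a 25 m head:
+ print(f"{hydro_power_watts(20.0, 25.0) / 1e6:.1f} MW")  # about 4.4 MW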
54
+
55
+ Hydropower is produced in 150 countries, with the Asia-Pacific region generating 32 percent of global hydropower in 2010. Of the 50 countries with the highest percentage of electricity from renewables, most rely primarily on hydroelectricity. China is the largest hydroelectricity producer, with 721 terawatt-hours of production in 2010, representing around 17 percent of domestic electricity use. There are now three hydroelectricity stations larger than 10 GW: the Three Gorges Dam in China, Itaipu Dam across the Brazil/Paraguay border, and Guri Dam in Venezuela.[54]
56
+
57
+ Wave power, which captures the energy of ocean surface waves, and tidal power, converting the energy of tides, are two forms of hydropower with future potential; however, they are not yet widely employed commercially. A demonstration project operated by the Ocean Renewable Power Company on the coast of Maine, and connected to the grid, harnesses tidal power from the Bay of Fundy, location of the world's highest tidal flow. Ocean thermal energy conversion, which uses the temperature difference between cooler deep and warmer surface waters, is not yet economically feasible.[55][56]
58
+
59
+ In 2017, global installed solar capacity was 390 GW.[15]
60
+
61
+ Solar energy, radiant light and heat from the sun, is harnessed using a range of ever-evolving technologies such as solar heating, photovoltaics, concentrated solar power (CSP), concentrator photovoltaics (CPV), solar architecture and artificial photosynthesis.[58][59] Solar technologies are broadly characterized as either passive solar or active solar depending on the way they capture, convert, and distribute solar energy. Passive solar techniques include orienting a building to the Sun, selecting materials with favorable thermal mass or light dispersing properties, and designing spaces that naturally circulate air. Active solar technologies encompass solar thermal energy, using solar collectors for heating, and solar power, converting sunlight into electricity either directly using photovoltaics (PV), or indirectly using concentrated solar power (CSP).
62
+
63
+ A photovoltaic system converts light into electrical direct current (DC) by taking advantage of the photovoltaic effect.[60] Solar PV has turned into a multi-billion-dollar, fast-growing industry, continues to improve its cost-effectiveness, and, together with CSP, has the most potential of any renewable technology.[61][62] Concentrated solar power (CSP) systems use lenses or mirrors and tracking systems to focus a large area of sunlight into a small beam. Commercial concentrated solar power plants were first developed in the 1980s. CSP-Stirling has by far the highest efficiency among all solar energy technologies.
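+
+ A common rule of thumb for sizing a PV system estimates annual output as E = A·η·H·PR: panel area times module efficiency times yearly insolation times a performance ratio covering inverter, wiring, soiling, and temperature losses. The site and module figures below are assumptions used only for illustration:
+
+ def pv_annual_energy_kwh(area_m2, module_efficiency, insolation_kwh_m2_yr,
+                          performance_ratio=0.80):
+     """Rule-of-thumb PV yield: E = A * eta * H * PR."""
+     return area_m2 * module_efficiency * insolation_kwh_m2_yr * performance_ratio
+
+ # A 30 m^2 rooftop of 20%-efficient modules at a sunny site
+ # receiving 1700 kWh/m^2 per year:
+ print(f"{pv_annual_energy_kwh(30, 0.20, 1700):.0f} kWh/year")  # ~8160 kWh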
64
+
65
+ In 2011, the International Energy Agency said that "the development of affordable, inexhaustible and clean solar energy technologies will have huge longer-term benefits. It will increase countries' energy security through reliance on an indigenous, inexhaustible and mostly import-independent resource, enhance sustainability, reduce pollution, lower the costs of mitigating climate change, and keep fossil fuel prices lower than otherwise. These advantages are global. Hence the additional costs of the incentives for early deployment should be considered learning investments; they must be wisely spent and need to be widely shared".[58] Italy has the largest proportion of solar electricity in the world; in 2015, solar supplied 7.7% of electricity demand in Italy.[63] In 2017, after another year of rapid growth, solar generated approximately 2% of global power, or 460 TWh.[64]
66
+
67
+ Global geothermal capacity in 2017 was 12.9 GW.[15]
68
+
69
+ High-temperature geothermal energy comes from thermal energy generated and stored in the Earth. Thermal energy is the energy that determines the temperature of matter. Earth's geothermal energy originates from the original formation of the planet and from radioactive decay of minerals (in currently uncertain[65] but possibly roughly equal[66] proportions). The geothermal gradient, which is the difference in temperature between the core of the planet and its surface, drives a continuous conduction of thermal energy in the form of heat from the core to the surface. The adjective geothermal originates from the Greek roots geo, meaning earth, and thermos, meaning heat.
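+
+ The conductive heat flow driven by the geothermal gradient follows Fourier's law, q = k·dT/dz. The conductivity and gradient below are typical textbook values for continental crust, used here as assumptions:
+
+ def conductive_heat_flux_w_m2(conductivity_w_mk, gradient_k_per_km):
+     """Fourier's law: q = k * dT/dz, with the gradient given per km."""
+     return conductivity_w_mk * gradient_k_per_km / 1000.0
+
+ # k ~ 2.5 W/(m K) and a gradient of ~28 K/km give roughly 70 mW/m^2,
+ # close to the average heat flow observed at Earth's surface.
+ print(f"{conductive_heat_flux_w_m2(2.5, 28.0) * 1000:.0f} mW/m^2")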
70
+
71
+ The heat that is used for geothermal energy can be from deep within the Earth, all the way down to Earth's core – 4,000 miles (6,400 km) down. At the core, temperatures may reach over 9,000 °F (5,000 °C). Heat conducts from the core to surrounding rock. Extremely high temperature and pressure cause some rock to melt, which is commonly known as magma. Magma convects upward since it is lighter than the solid rock. This magma then heats rock and water in the crust, sometimes up to 700 °F (371 °C).[67]
72
+
73
+ Low-temperature geothermal[35] refers to the use of the outer crust of the Earth as a thermal battery to facilitate renewable thermal energy for heating and cooling buildings, and other refrigeration and industrial uses. In this form of geothermal, a geothermal heat pump and ground-coupled heat exchanger are used together to move heat energy into the Earth (for cooling) and out of the Earth (for heating) on a varying seasonal basis. Low-temperature geothermal (generally referred to as "GHP") is an increasingly important renewable technology because it both reduces total annual energy loads associated with heating and cooling and flattens the electric demand curve, eliminating the extreme summer and winter peak electric supply requirements. Thus low-temperature geothermal/GHP is becoming an increasing national priority with multiple tax credit support[68] and focus as part of the ongoing movement toward net zero energy.[36]
74
+
75
+ Bioenergy global capacity in 2017 was 109 GW.[15]
76
+
77
+ Biomass is biological material derived from living, or recently living organisms. It most often refers to plants or plant-derived materials, which are specifically called lignocellulosic biomass.[69] As an energy source, biomass can either be used directly via combustion to produce heat, or indirectly after converting it to various forms of biofuel. Conversion of biomass to biofuel can be achieved by different methods, broadly classified into thermal, chemical, and biochemical methods. Wood remains the largest biomass energy source today;[70] examples include forest residues such as dead trees, branches and tree stumps, as well as yard clippings, wood chips and even municipal solid waste. In the second sense, biomass includes plant or animal matter that can be converted into fibers or other industrial chemicals, including biofuels. Industrial biomass can be grown from numerous types of plants, including miscanthus, switchgrass, hemp, corn, poplar, willow, sorghum, sugarcane, bamboo,[71] and a variety of tree species, ranging from eucalyptus to oil palm (palm oil).
78
+
79
+ Plant energy is produced by crops specifically grown for use as fuel that offer high biomass output per hectare with low input energy.[72] The grain can be used for liquid transportation fuels while the straw can be burned to produce heat or electricity. Plant biomass can also be degraded from cellulose to glucose through a series of chemical treatments, and the resulting sugar can then be used as a first generation biofuel.
80
+
81
+ Biomass can be converted to other usable forms of energy such as methane gas[73] or transportation fuels such as ethanol and biodiesel. Rotting garbage, and agricultural and human waste, all release methane gas, also called landfill gas or biogas. Crops such as corn and sugarcane can be fermented to produce the transportation fuel ethanol. Biodiesel, another transportation fuel, can be produced from left-over food products such as vegetable oils and animal fats.[74] Biomass-to-liquids (BTL) fuels and cellulosic ethanol are still under research.[75][76] There is a great deal of research involving algal fuel or algae-derived biomass, because it is a non-food resource and can be produced at rates 5 to 10 times those of other types of land-based agriculture, such as corn and soy. Once harvested, it can be fermented to produce biofuels such as ethanol, butanol, and methane, as well as biodiesel and hydrogen. The biomass used for electricity generation varies by region. Forest by-products, such as wood residues, are common in the United States. Agricultural waste is common in Mauritius (sugar cane residue) and Southeast Asia (rice husks). Animal husbandry residues, such as poultry litter, are common in the United Kingdom.[77]
82
+
83
+ Biofuels include a wide range of fuels which are derived from biomass. The term covers solid, liquid, and gaseous fuels.[78] Liquid biofuels include bioalcohols, such as bioethanol, and oils, such as biodiesel. Gaseous biofuels include biogas, landfill gas and synthetic gas. Bioethanol is an alcohol made by fermenting the sugar components of plant materials and it is made mostly from sugar and starch crops. These include maize, sugarcane and, more recently, sweet sorghum. The latter crop is particularly suitable for growing in dryland conditions, and is being investigated by International Crops Research Institute for the Semi-Arid Tropics for its potential to provide fuel, along with food and animal feed, in arid parts of Asia and Africa.[79]
84
+
85
+ With advanced technology being developed, cellulosic biomass, such as trees and grasses, is also used as a feedstock for ethanol production. Ethanol can be used as a fuel for vehicles in its pure form, but it is usually used as a gasoline additive to increase octane and improve vehicle emissions. Bioethanol is widely used in the United States and in Brazil. The energy costs for producing bio-ethanol are almost equal to the energy yields from bio-ethanol. However, according to the European Environment Agency, biofuels do not address global warming concerns.[80] Biodiesel is made from vegetable oils, animal fats or recycled greases. It can be used as a fuel for vehicles in its pure form, or more commonly as a diesel additive to reduce levels of particulates, carbon monoxide, and hydrocarbons from diesel-powered vehicles. Biodiesel is produced from oils or fats using transesterification and is the most common biofuel in Europe. Biofuels provided 2.7% of the world's transport fuel in 2010.[81]
86
+
87
+ Biomass, biogas and biofuels are burned to produce heat/power and in doing so harm the environment. Pollutants such as sulfur oxides (SOx), nitrogen oxides (NOx), and particulate matter (PM) are produced from the combustion of biomass; the World Health Organisation estimates that 7 million premature deaths are caused each year by air pollution.[82] Biomass combustion is a major contributor.[82][83][84]
88
+
89
+ Renewable energy production from some sources such as wind and solar is more variable and more geographically spread than technology based on fossil fuels and nuclear. While integrating it into the wider energy system is feasible, it does lead to some additional challenges. In order for the energy system to remain stable, a set of measures can be taken. Implementation of energy storage, use of a wide variety of renewable energy technologies, and implementation of a smart grid in which energy is automatically used at the moment it is produced can reduce the risks and costs of renewable energy deployment.[85] In some locations, individual households can opt to purchase renewable energy through a consumer green energy program.
90
+
91
+ Electrical energy storage is a collection of methods used to store electrical energy. Electrical energy is stored during times when production (especially from intermittent sources such as wind power, tidal power, solar power) exceeds consumption, and returned to the grid when production falls below consumption. Pumped-storage hydroelectricity accounts for more than 90% of all grid power storage. The costs of lithium-ion batteries are dropping rapidly, and the batteries are increasingly being deployed for grid ancillary services and for domestic storage.
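+
+ The dominance of pumped-storage hydroelectricity is easier to appreciate through the gravitational energy formula E = ηρgVh. The reservoir size, head, and round-trip efficiency below are illustrative assumptions:
+
+ WATER_DENSITY = 1000.0  # kg/m^3
+ G = 9.81                # m/s^2
+
+ def pumped_storage_mwh(volume_m3, head_m, round_trip_efficiency=0.78):
+     """Recoverable energy of a pumped-hydro upper reservoir,
+     E = eta * rho * g * V * h, converted from joules to MWh."""
+     joules = round_trip_efficiency * WATER_DENSITY * G * volume_m3 * head_m
+     return joules / 3.6e9
+
+ # A 4,000,000 m^3 reservoir with a 300 m head stores roughly 2,500 MWh,
+ # i.e. several hours of output for a gigawatt-scale plant.
+ print(f"{pumped_storage_mwh(4e6, 300.0):,.0f} MWh")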
92
+
93
+ Renewable power has been more effective in creating jobs than coal or oil in the United States.[86] In 2016, employment in the sector increased 6 percent in the United States, while employment in the non-renewable energy sector decreased 18 percent. Worldwide, renewables employed about 8.1 million people as of 2016.[87]
94
+
95
+ From the end of 2004, worldwide renewable energy capacity grew at rates of 10–60% annually for many technologies. In 2015 global investment in renewables rose 5% to $285.9 billion, breaking the previous record of $278.5 billion in 2011. 2015 was also the first year that saw renewables, excluding large hydro, account for the majority of all new power capacity (134 GW, making up 53.6% of the total). Of the renewables total, wind accounted for 72 GW and solar photovoltaics 56 GW; both record-breaking numbers and sharply up from 2014 figures (49 GW and 45 GW respectively). In financial terms, solar made up 56% of total new investment and wind accounted for 38%.
96
+
97
+ In 2014 global wind power capacity expanded 16% to 369,553 MW.[90] Yearly wind energy production is also growing rapidly and has reached around 4% of worldwide electricity usage[91] and 11.4% in the EU,[92] and wind power is widely used in Europe, Asia, and the United States. In 2015, worldwide installed photovoltaics capacity increased to 227 gigawatts (GW), sufficient to supply 1 percent of global electricity demand.[93] Solar thermal energy stations operate in the United States and Spain, and as of 2016, the largest of these is the 392 MW Ivanpah Solar Electric Generating System in California.[94][95] The world's largest geothermal power installation is The Geysers in California, with a rated capacity of 750 MW. Brazil has one of the largest renewable energy programs in the world, involving production of ethanol fuel from sugar cane, and ethanol now provides 18% of the country's automotive fuel. Ethanol fuel is also widely available in the United States.
98
+
99
+ In 2017, investments in renewable energy amounted to US$279.8 billion worldwide, with China accounting for US$126.6 billion or 45% of the global investments, the US for US$40.5 billion, and Europe for US$40.9 billion.[6] A recent review of the literature concluded that as greenhouse gas (GHG) emitters begin to be held liable for damages resulting from GHG emissions resulting in climate change, a high value for liability mitigation would provide powerful incentives for deployment of renewable energy technologies.[96]
100
+
101
+ Renewable energy technologies are getting cheaper, through technological change and through the benefits of mass production and market competition. A 2018 report from the International Renewable Energy Agency (IRENA) found that the cost of renewable energy is falling quickly, and will likely be equal to or less than the cost of non-renewables such as fossil fuels by 2020. The report found that solar power costs had dropped 73% since 2010 and onshore wind costs had dropped by 23% over the same timeframe.[106]
102
+
103
+ Current projections concerning the future cost of renewables vary, however. The EIA has predicted that almost two-thirds of net additions to power capacity will come from renewables by 2020, due to the combined policy benefits of local pollution reduction, decarbonisation and energy diversification.
104
+
105
+ According to a 2018 report by Bloomberg New Energy Finance, wind and solar power are expected to generate roughly 50% of the world's energy needs by 2050, while coal powered electricity plants are expected to drop to just 11%.[107]
106
+ Hydro-electricity and geothermal electricity produced at favourable sites are now the cheapest way to generate electricity. Renewable energy costs continue to drop, and the levelised cost of electricity (LCOE) is declining for wind power, solar photovoltaic (PV), concentrated solar power (CSP) and some biomass technologies.[108] Renewable energy is also the most economic solution for new grid-connected capacity in areas with good resources. As the cost of renewable power falls, the scope of economically viable applications increases. Renewable technologies are now often the most economic solution for new generating capacity. Where "oil-fired generation is the predominant power generation source (e.g. on islands, off-grid and in some countries) a lower-cost renewable solution almost always exists today".[108] A series of studies by the US National Renewable Energy Laboratory modeled the "grid in the Western US under a number of different scenarios where intermittent renewables accounted for 33 percent of the total power." In the models, inefficiencies in cycling the fossil fuel plants to compensate for the variation in solar and wind energy resulted in an additional cost of "between $0.47 and $1.28 to each MegaWatt hour generated"; however, the savings in the cost of the fuels saved "adds up to $7 billion, meaning the added costs are, at most, two percent of the savings."[109]
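+
+ The levelised cost of electricity mentioned here is the ratio of discounted lifetime costs to discounted lifetime generation. A minimal sketch, with wind-farm figures that are purely illustrative assumptions:
+
+ def lcoe_per_mwh(capex, annual_opex, annual_mwh, lifetime_years, discount_rate=0.07):
+     """LCOE = discounted lifetime costs / discounted lifetime energy."""
+     factors = [(1 + discount_rate) ** -t for t in range(1, lifetime_years + 1)]
+     costs = capex + annual_opex * sum(factors)
+     energy = annual_mwh * sum(factors)
+     return costs / energy
+
+ # Illustrative 100 MW onshore wind farm: $140M capex, $3M/yr O&M,
+ # 35% capacity factor, 25-year life.
+ annual_mwh = 100 * 8760 * 0.35
+ print(f"${lcoe_per_mwh(140e6, 3e6, annual_mwh, 25):.0f}/MWh")  # ~$49/MWh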
107
+
108
+ In 2017 the world renewable hydropower capacity was 1,154 GW.[15] Only a quarter of the world's estimated hydroelectric potential of 14,000 TWh/year has been developed. The regional potentials for the growth of hydropower around the world are 71% in Europe, 75% in North America, 79% in South America, 95% in Africa, 95% in the Middle East, and 82% in Asia Pacific. However, the political realities of new reservoirs in western countries, economic limitations in the third world and the lack of a transmission system in undeveloped areas result in the possibility of developing 25% of the remaining potential before 2050, with the bulk of that being in the Asia Pacific area.[110] There is slow growth taking place in Western countries,[citation needed] but not in the conventional dam and reservoir style of the past. New projects take the form of run-of-the-river and small hydro, neither using large reservoirs. It is popular to repower old dams, thereby increasing their efficiency and capacity as well as quicker responsiveness on the grid.[111] Where circumstances permit, existing dams such as the Russell Dam (built in 1985) may be updated with "pump back" facilities for pumped storage, which is useful for peak loads or to support intermittent wind and solar power. Countries with large hydroelectric developments such as Canada and Norway are spending billions to expand their grids to trade with neighboring countries having limited hydro.[112]
109
+
110
+ Wind power is widely used in Europe, China, and the United States. From 2004 to 2017, worldwide installed capacity of wind power grew from 47 GW to 514 GW, a more than tenfold increase within 13 years.[15] As of the end of 2014, China, the United States and Germany combined accounted for half of total global capacity.[90] Several other countries have achieved relatively high levels of wind power penetration, such as 21% of stationary electricity production in Denmark, 18% in Portugal, 16% in Spain, and 14% in Ireland in 2010, and have since continued to expand their installed capacity.[113][114] More than 80 countries around the world use wind power on a commercial basis.[81]
111
+
112
+ Wind turbines are increasing in power, with some commercially deployed models generating over 8 MW per turbine.[115][116][117] More powerful models are in development; see the list of most powerful wind turbines.
113
+
114
+ Solar thermal energy capacity has increased from 1.3 GW in 2012 to 5.0 GW in 2017.[15]
115
+
116
+ Spain is the world leader in solar thermal power deployment with 2.3 GW deployed.[15] The United States has 1.8 GW,[15] most of it in California where 1.4 GW of solar thermal power projects are operational.[121] Several power plants have been constructed in the Mojave Desert, Southwestern United States. As of 2017 only four other countries have deployments above 100 MW:[15] South Africa (300 MW), India (229 MW), Morocco (180 MW), and the United Arab Emirates (100 MW).
117
+
118
+ The United States conducted much early research in photovoltaics and concentrated solar power. The U.S. is among the top countries in the world in electricity generated by the Sun and several of the world's largest utility-scale installations are located in the desert Southwest.
119
+
120
+ The oldest solar thermal power plant in the world is the 354 megawatt (MW) SEGS thermal power plant, in California.[122] The Ivanpah Solar Electric Generating System is a solar thermal power project in the California Mojave Desert, 40 miles (64 km) southwest of Las Vegas, with a gross capacity of 377 MW.[123] The 280 MW Solana Generating Station is a solar power plant near Gila Bend, Arizona, about 70 miles (110 km) southwest of Phoenix, completed in 2013. When commissioned it was the largest parabolic trough plant in the world and the first U.S. solar plant with molten salt thermal energy storage.[124]
121
+
122
+ In developing countries, three World Bank projects for integrated solar thermal/combined-cycle gas-turbine power plants in Egypt, Mexico, and Morocco have been approved.[125]
123
+
124
+ Worldwide growth of PV capacity grouped by region in MW (2006–2014)
125
+
126
+ Photovoltaics (PV) is a rapidly growing technology, with global capacity increasing from 177 GW at the end of 2014 to 385 GW in 2017.[15]
127
+
128
+ PV uses solar cells assembled into solar panels to convert sunlight into electricity. PV systems range from small residential and commercial rooftop or building-integrated installations to large utility-scale photovoltaic power stations. The predominant PV technology is crystalline silicon, while thin-film solar cell technology accounts for about 10 percent of global photovoltaic deployment. In recent years, PV technology has improved its electricity generating efficiency, reduced the installation cost per watt as well as its energy payback time, and reached grid parity in at least 30 different markets by 2014.[126]
129
+ Building-integrated photovoltaics or "onsite" PV systems use existing land and structures and generate power close to where it is consumed.[127]
130
+
131
+ Photovoltaics grew fastest in China, followed by Japan and the United States. Italy meets 7.9 percent of its electricity demands with photovoltaic power—the highest share worldwide.[128] Solar power is forecasted to become the world's largest source of electricity by 2050, with solar photovoltaics and concentrated solar power contributing 16% and 11%, respectively. This requires an increase of installed PV capacity to 4,600 GW, of which more than half is expected to be deployed in China and India.[129]
132
+
133
+ Commercial concentrated solar power plants were first developed in the 1980s. As the cost of solar electricity has fallen, the number of grid-connected solar PV systems has grown into the millions and utility-scale solar power stations with hundreds of megawatts are being built. Many solar photovoltaic power stations have been built, mainly in Europe, China and the United States.[130] The 1.5 GW Tengger Desert Solar Park, in China is the world's largest PV power station. Many of these plants are integrated with agriculture and some use tracking systems that follow the sun's daily path across the sky to generate more electricity than fixed-mounted systems.
134
+
135
+ Bioenergy global capacity in 2017 was 109 GW.[15]
136
+ Biofuels provided 3% of the world's transport fuel in 2017.[131]
137
+
138
+ Mandates for blending biofuels exist in 31 countries at the national level and in 29 states/provinces.[81] According to the International Energy Agency, biofuels have the potential to meet more than a quarter of world demand for transportation fuels by 2050.[132]
139
+
140
+ Since the 1970s, Brazil has had an ethanol fuel program which has allowed the country to become the world's second largest producer of ethanol (after the United States) and the world's largest exporter.[133] Brazil's ethanol fuel program uses modern equipment and cheap sugarcane as feedstock, and the residual cane-waste (bagasse) is used to produce heat and power.[134] There are no longer light vehicles in Brazil running on pure gasoline. By the end of 2008 there were 35,000 filling stations throughout Brazil with at least one ethanol pump.[135] However, Operation Car Wash has seriously eroded public trust in oil companies and has implicated several high-ranking Brazilian officials.
141
+
142
+ Nearly all the gasoline sold in the United States today is mixed with 10% ethanol,[136] and motor vehicle manufacturers already produce vehicles designed to run on much higher ethanol blends. Ford, Daimler AG, and GM are among the automobile companies that sell "flexible-fuel" cars, trucks, and minivans that can use gasoline and ethanol blends ranging from pure gasoline up to 85% ethanol. By mid-2006, there were approximately 6 million ethanol compatible vehicles on U.S. roads.[137]
143
+
144
+ Global geothermal capacity in 2017 was 12.9 GW.[15]
145
+
146
+ Geothermal power is cost effective, reliable, sustainable, and environmentally friendly,[138] but has historically been limited to areas near tectonic plate boundaries. Recent technological advances have expanded the range and size of viable resources, especially for applications such as home heating, opening a potential for widespread exploitation. Geothermal wells release greenhouse gases trapped deep within the earth, but these emissions are usually much lower per energy unit than those of fossil fuels. As a result, geothermal power has the potential to help mitigate global warming if widely deployed in place of fossil fuels.
147
+
148
+ In 2017, the United States led the world in geothermal electricity production.[15] The largest group of geothermal power plants in the world is located at The Geysers, a geothermal field in California.[139] The Philippines follows the US as the second highest producer of geothermal power in the world, with 1.9 GW of capacity online.[15]
149
+
150
+ Renewable energy technology has sometimes been seen by critics as a costly luxury item, affordable only in the affluent developed world. This erroneous view has persisted for many years; however, between 2016 and 2017, investments in renewable energy were higher in developing countries than in developed countries, with China leading global investment with a record 126.6 billion dollars. Many Latin American and African countries increased their investments significantly as well.[140]
151
+ Renewable energy can be particularly suitable for developing countries. In rural and remote areas, transmission and distribution of energy generated from fossil fuels can be difficult and expensive. Producing renewable energy locally can offer a viable alternative.[141]
152
+
153
+ Technology advances are opening up a huge new market for solar power: the approximately 1.3 billion people around the world who don't have access to grid electricity. Even though they are typically very poor, these people have to pay far more for lighting than people in rich countries because they use inefficient kerosene lamps. Solar power costs half as much as lighting with kerosene.[142] As of 2010, an estimated 3 million households get power from small solar PV systems.[143] Kenya is the world leader in the number of solar power systems installed per capita. More than 30,000 very small solar panels, each producing 12 to 30 watts,[144] are sold in Kenya annually. Some Small Island Developing States (SIDS) are also turning to solar power to reduce their costs and increase their sustainability.
154
+
155
+ Micro-hydro configured into mini-grids also provides power. Over 44 million households use biogas made in household-scale digesters for lighting and/or cooking, and more than 166 million households rely on a new generation of more-efficient biomass cookstoves.[31] Clean liquid fuels sourced from renewable feedstocks are used for cooking and lighting in energy-poor areas of the developing world. Alcohol fuels (ethanol and methanol) can be produced sustainably from non-food sugary, starchy, and cellulosic feedstocks. Project Gaia, Inc. and CleanStar Mozambique are implementing clean cooking programs with liquid ethanol stoves in Ethiopia, Kenya, Nigeria and Mozambique.[145]
156
+
157
+ Renewable energy projects in many developing countries have demonstrated that renewable energy can directly contribute to poverty reduction by providing the energy needed for creating businesses and employment. Renewable energy technologies can also make indirect contributions to alleviating poverty by providing energy for cooking, space heating, and lighting. Renewable energy can also contribute to education, by providing electricity to schools.[146]
158
+
159
+ Policies to support renewable energy have been vital to its expansion. Where Europe dominated in establishing energy policy in the early 2000s, most countries around the world now have some form of energy policy.[147]
160
+
161
+ The International Renewable Energy Agency (IRENA) is an intergovernmental organization for promoting the adoption of renewable energy worldwide. It aims to provide concrete policy advice and facilitate capacity building and technology transfer. IRENA was formed in 2009 by 75 countries signing the charter of IRENA.[149] As of April 2019, IRENA has 160 member states.[150] The then United Nations Secretary-General Ban Ki-moon said that renewable energy has the ability to lift the poorest nations to new levels of prosperity,[32] and in September 2011 he launched the UN Sustainable Energy for All initiative to improve energy access, efficiency and the deployment of renewable energy.[151]
162
+
163
+ The 2015 Paris Agreement on climate change motivated many countries to develop or improve renewable energy policies.[12] By 2017, a total of 121 countries had adopted some form of renewable energy policy.[147] National targets that year existed in at least 176 countries.[12] In addition, there is also a wide range of policies at the state/provincial and local levels.[81] Some public utilities help plan or install residential energy upgrades. Under President Barack Obama, United States policy encouraged the uptake of renewable energy in line with commitments to the Paris Agreement. Even though Trump has abandoned these goals, renewable investment is still on the rise.[152]
164
+
165
+ Many national, state, and local governments have created green banks. A green bank is a quasi-public financial institution that uses public capital to leverage private investment in clean energy technologies.[153] Green banks use a variety of financial tools to bridge market gaps that hinder the deployment of clean energy. The US military has also focused on the use of renewable fuels for military vehicles. Unlike fossil fuels, renewable fuels can be produced in any country, creating a strategic advantage. The US military has already committed to sourcing 50% of its energy consumption from alternative sources.[154]
166
+
167
+ The drive to use 100% renewable energy, for electricity, transport, or even total primary energy supply globally, has been motivated by global warming and other ecological as well as economic concerns. The Intergovernmental Panel on Climate Change has said that there are few fundamental technological limits to integrating a portfolio of renewable energy technologies to meet most of total global energy demand. Renewable energy use has grown much faster than even advocates anticipated.[155] At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of energy supply. Professors S. Pacala and Robert H. Socolow have also developed a series of "stabilization wedges" that can allow us to maintain our quality of life while avoiding catastrophic climate change, and "renewable energy sources," in aggregate, constitute the largest number of their "wedges".[156]
168
+
169
+ Using 100% renewable energy was first suggested in a Science paper published in 1975 by Danish physicist Bent Sørensen.[157] It was followed by several other proposals, until in 1998 the first detailed analyses of scenarios with very high shares of renewables were published. These were followed by the first detailed 100% scenarios. In 2006, Czisch published a PhD thesis showing that, in a 100% renewable scenario, energy supply could match demand in every hour of the year in Europe and North Africa. In the same year, Danish energy professor Henrik Lund published a first paper[158] addressing the optimal combination of renewables, followed by several other papers on the transition to 100% renewable energy in Denmark. Lund has since published several more papers on 100% renewable energy. After 2009 publications began to rise steeply, covering 100% scenarios for countries in Europe, America, Australia and other parts of the world.[159]
170
+
171
+ In 2011 Mark Z. Jacobson, professor of civil and environmental engineering at Stanford University, and Mark Delucchi published a study on 100% renewable global energy supply in the journal Energy Policy. They found that producing all new energy with wind power, solar power, and hydropower by 2030 is feasible and that existing energy supply arrangements could be replaced by 2050. Barriers to implementing the renewable energy plan are seen to be "primarily social and political, not technological or economic".[160] They also found that energy costs with a wind, solar, water system should be similar to today's energy costs.[161]
172
+
173
+ Similarly, in the United States, the independent National Research Council has noted that "sufficient domestic renewable resources exist to allow renewable electricity to play a significant role in future electricity generation and thus help confront issues related to climate change, energy security, and the escalation of energy costs … Renewable energy is an attractive option because renewable resources available in the United States, taken collectively, can supply significantly greater amounts of electricity than the total current or projected domestic demand."[162]
174
+
175
+ The most significant barriers to the widespread implementation of large-scale renewable energy and low carbon energy strategies are primarily political and not technological.[163][164] According to the 2013 Post Carbon Pathways report, which reviewed many international studies, the key roadblocks are: climate change denial, the fossil fuels lobby, political inaction, unsustainable energy consumption, outdated energy infrastructure, and financial constraints.[165]
176
+
177
+ According to the World Bank, the "below 2°C" climate scenario requires 3 billion tonnes of metals and minerals by 2050. The supply of mined resources such as zinc, molybdenum, silver, nickel, and copper must increase by up to 500%.[166] A 2018 analysis estimated that the required increases in the stock of metals used by various sectors range from 1,000% (wind power) to 87,000% (personal vehicle batteries).[167]
178
+
179
+ Other renewable energy technologies are still under development, and include cellulosic ethanol, hot-dry-rock geothermal power, and marine energy.[168] These technologies are not yet widely demonstrated or have limited commercialization. Many are on the horizon and may have potential comparable to other renewable energy technologies, but still depend on attracting sufficient attention and research, development and demonstration (RD&D) funding.[168]
180
+
181
+ There are numerous organizations within the academic, federal, and commercial sectors conducting large-scale advanced research in the field of renewable energy. This research spans several areas of focus across the renewable energy spectrum. Most of the research is targeted at improving efficiency and increasing overall energy yields.[169]
182
+ Multiple federally supported research organizations have focused on renewable energy in recent years. Two of the most prominent of these labs are Sandia National Laboratories and the National Renewable Energy Laboratory (NREL), both of which are funded by the United States Department of Energy and supported by various corporate partners.[170] Sandia has a total budget of $2.4 billion[171] while NREL has a budget of $375 million.[172]
183
+
184
+ Collection of static electricity charges from water droplets on metal surfaces is an experimental technology that would be especially useful in low-income countries with relative air humidity over 60%.[203]
185
+
186
+ Renewable electricity production from sources such as wind power and solar power is intermittent, which results in reduced capacity factors and requires either energy storage of capacity equal to its total output, or base-load power sources based on fossil fuels or nuclear power.
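For reference, the capacity factor mentioned here is simply delivered energy divided by the energy the plant would produce if it ran at rated power all year. A minimal sketch, with assumed example figures rather than data from this article:

```python
HOURS_PER_YEAR = 8760

def capacity_factor(annual_output_mwh: float, rated_power_mw: float) -> float:
    """Capacity factor = actual annual output / potential output at rated power."""
    return annual_output_mwh / (rated_power_mw * HOURS_PER_YEAR)

# A hypothetical 100 MW wind farm delivering 263,000 MWh in a year
# runs at roughly a 30% capacity factor:
print(f"{capacity_factor(263_000, 100):.0%}")  # ~30%
```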
187
+
188
+ Since the power density per land area of renewable energy sources is at best three orders of magnitude smaller than that of fossil or nuclear power,[204] renewable power plants tend to occupy thousands of hectares, causing environmental concerns and opposition from local residents, especially in densely populated countries. Solar power plants compete with arable land and nature reserves,[205] while on-shore wind farms face opposition due to aesthetic concerns and noise, which impact both humans and wildlife.[206][207][208][209] In the United States, the Massachusetts Cape Wind project was delayed for years partly because of aesthetic concerns. However, residents in other areas have been more positive. According to a town councilor, the overwhelming majority of locals believe that the Ardrossan Wind Farm in Scotland has enhanced the area.[210] These concerns, when directed against renewable energy, are sometimes described as the "not in my back yard" (NIMBY) attitude.
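To see how the land-use claim follows from power density, a rough order-of-magnitude comparison can be sketched; the W/m² figures below are assumed illustrative values (actual densities vary widely by technology, site, and study):

```python
# Assumed average surface power densities in W/m^2 (illustrative only).
POWER_DENSITY_W_M2 = {
    "solar PV farm": 10.0,
    "onshore wind farm": 2.0,
    "fossil or nuclear plant": 1000.0,
}

def land_km2_per_average_gw(source: str) -> float:
    """Land area (km^2) needed to deliver 1 GW of average power."""
    return 1e9 / POWER_DENSITY_W_M2[source] / 1e6  # m^2 -> km^2

for source in POWER_DENSITY_W_M2:
    print(f"{source}: ~{land_km2_per_average_gw(source):,.0f} km^2 per average GW")
```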
189
+
190
+ A recent[when?] UK Government document states that "projects are generally more likely to succeed if they have broad public support and the consent of local communities. This means giving communities both a say and a stake".[211] In countries such as Germany and Denmark many renewable projects are owned by communities, particularly through cooperative structures, and contribute significantly to overall levels of renewable energy deployment.[212][213]
191
+
192
+ The market for renewable energy technologies has continued to grow. Climate change concerns and growth in green jobs, coupled with high oil prices, peak oil, oil wars, oil spills, the promotion of electric vehicles and renewable electricity, nuclear disasters and increasing government support, are driving increasing renewable energy legislation, incentives and commercialization.[18] New government spending, regulation and policies helped the industry weather the 2009 economic crisis better than many other sectors.[29]
193
+
194
+ While renewables have been very successful in their ever-growing contribution to electrical power, no country dominated by fossil fuels has a plan to stop using them and obtain that power from renewables. Only Scotland and Ontario have stopped burning coal, largely thanks to good natural gas supplies. In the area of transportation, fossil fuels are even more entrenched and solutions harder to find.[214] It is unclear whether the failures lie with policy or with renewable energy itself, but twenty years after the Kyoto Protocol fossil fuels are still our primary energy source and consumption continues to grow.[215]
195
+
196
+ The International Energy Agency has stated that deployment of renewable technologies usually increases the diversity of electricity sources and, through local generation, contributes to the flexibility of the system and its resistance to central shocks.[216]
197
+
198
+ From around 2010 onwards, there was increasing discussion about the geopolitical impact of the growing use of renewable energy.[217] It was argued that former fossil fuel exporters would experience a weakening of their position in international affairs, while countries with abundant sunshine, wind, hydropower, or geothermal resources would be strengthened.[218] Countries rich in critical materials for renewable energy technologies were also expected to rise in importance in international affairs.[219]
199
+
200
+ The GeGaLo index of geopolitical gains and losses assesses how the geopolitical position of 156 countries may change if the world fully transitions to renewable energy resources. Former fossil fuel exporters are expected to lose power, while the positions of former fossil fuel importers and countries rich in renewable energy resources are expected to strengthen.[220]
201
+
202
+ The ability of biomass and biofuels to contribute to a reduction in CO2 emissions is limited because both biomass and biofuels emit large amounts of air pollution when burned and in some cases compete with food supply. Furthermore, biomass and biofuels consume large amounts of water.[221] Other renewable sources such as wind power, photovoltaics, and hydroelectricity have the advantage of being able to conserve water, lower pollution and reduce CO2 emissions.
203
+ The installations used to produce wind, solar and hydro power are an increasing threat to key conservation areas, with facilities built in areas set aside for nature conservation and other environmentally sensitive areas. They are often much larger than fossil fuel power plants, needing areas of land up to 10 times greater than coal or gas to produce equivalent energy amounts.[222] More than 2,000 renewable energy facilities have been built, and more are under construction, in areas of environmental importance, threatening the habitats of plant and animal species across the globe. The authors of the underlying study emphasized that their work should not be interpreted as anti-renewables, because renewable energy is crucial for reducing carbon emissions; the key is ensuring that renewable energy facilities are built in places where they do not damage biodiversity.[223]
204
+
205
+ Renewable energy devices depend on non-renewable resources such as mined metals and use vast amounts of land due to their small surface power density. Manufacturing of photovoltaic panels, wind turbines and batteries requires significant amounts of rare-earth elements[224] and increases mining operations, which have social and environmental impacts.[225] Due to the co-occurrence of rare-earth and radioactive elements (thorium, uranium and radium), rare-earth mining results in the production of low-level radioactive waste.[226]
206
+
207
+ Solar panels change the albedo of the surface, which increases their contribution to global warming.[227]
+ [Image gallery: Burbo offshore wind farm, NW England; sunrise at the Fenton Wind Farm in Minnesota, US; the CSP station Andasol in Andalusia, Spain; the Ivanpah solar plant in the Mojave Desert, California, United States; the Three Gorges Dam and Gezhouba Dam, China; a shop selling PV panels in Ouagadougou, Burkina Faso; stump harvesting to increase recovery of biomass from forests; a small roof-top mounted PV system in Bonn, Germany; the community-owned Westmill Solar Park in South East England; Komekurayama photovoltaic power station in Kofu, Japan; Krafla, a geothermal power station in Iceland]
en/1751.html.txt ADDED
@@ -0,0 +1,229 @@
1
+
2
+
3
+
4
+
5
+ World electricity generation by source in 2017. Total generation was 26 PWh.[1]
6
+
7
+ Renewable energy is energy that is collected from renewable resources, which are naturally replenished on a human timescale, such as sunlight, wind, rain, tides, waves, and geothermal heat.[3] Renewable energy often provides energy in four important areas: electricity generation, air and water heating/cooling, transportation, and rural (off-grid) energy services.[4]
8
+
9
+ Based on REN21's 2017 report, renewables contributed 19.3% of global energy consumption and 24.5% of electricity generation in 2015 and 2016, respectively. This energy consumption is divided as 8.9% coming from traditional biomass, 4.2% as heat energy (modern biomass, geothermal and solar heat), 3.9% from hydroelectricity and the remaining 2.2% is electricity from wind, solar, geothermal, and other forms of biomass. Worldwide investments in renewable technologies amounted to more than US$286 billion in 2015.[5] In 2017, worldwide investments in renewable energy amounted to US$279.8 billion, with China accounting for US$126.6 billion or 45% of the global investments, the United States for US$40.5 billion and Europe for US$40.9 billion.[6] Globally there are an estimated 7.7 million jobs associated with the renewable energy industries, with solar photovoltaics being the largest renewable employer.[7] Renewable energy systems are rapidly becoming more efficient and cheaper and their share of total energy consumption is increasing.[8] As of 2019, more than two-thirds of worldwide newly installed electricity capacity was renewable.[9] Growth in consumption of coal and oil could end by 2020 due to increased uptake of renewables and natural gas.[10][11]
10
+
11
+ At the national level, at least 30 nations around the world already have renewable energy contributing more than 20 percent of energy supply. National renewable energy markets are projected to continue to grow strongly in the coming decade and beyond.[12]
12
+ Some places, and at least two countries, Iceland and Norway, already generate all their electricity using renewable energy, and many other countries have set a goal of reaching 100% renewable energy in the future.[13]
13
+ At least 47 nations around the world already have over 50 percent of electricity from renewable resources.[14][15][16] Renewable energy resources exist over wide geographical areas, in contrast to fossil fuels, which are concentrated in a limited number of countries. Rapid deployment of renewable energy and energy efficiency technologies is resulting in significant energy security, climate change mitigation, and economic benefits.[17] In international public opinion surveys there is strong support for promoting renewable sources such as solar power and wind power.[18][19]
14
+
15
+ While many renewable energy projects are large-scale, renewable technologies are also suited to rural and remote areas and developing countries, where energy is often crucial in human development.[20] As most renewable energy technologies provide electricity, renewable energy deployment is often applied in conjunction with further electrification, which has several benefits: electricity can be converted to heat (where necessary generating higher temperatures than fossil fuels), can be converted into mechanical energy with high efficiency, and is clean at the point of consumption.[21][22] In addition, electrification with renewable energy is more efficient and therefore leads to significant reductions in primary energy requirements.[23]
16
+
17
+ Renewable energy flows involve natural phenomena such as sunlight, wind, tides, plant growth, and geothermal heat, as the International Energy Agency explains:[24]
18
+
19
+ Renewable energy is derived from natural processes that are replenished constantly. In its various forms, it derives directly from the sun, or from heat generated deep within the earth. Included in the definition is electricity and heat generated from solar, wind, ocean, hydropower, biomass, geothermal resources, and biofuels and hydrogen derived from renewable resources.
20
+
21
+ Renewable energy resources and significant opportunities for energy efficiency exist over wide geographical areas, in contrast to other energy sources, which are concentrated in a limited number of countries. Rapid deployment of renewable energy and energy efficiency, and technological diversification of energy sources, would result in significant energy security and economic benefits.[17] It would also reduce environmental pollution such as air pollution caused by the burning of fossil fuels, improve public health, reduce premature mortalities due to pollution and save associated health costs that amount to several hundred billion dollars annually in the United States alone.[25] Renewable energy sources that derive their energy from the sun, either directly or indirectly, such as hydro and wind, are expected to be capable of supplying humanity with energy for almost another 1 billion years, at which point the predicted increase in heat from the Sun is expected to make the surface of the Earth too hot for liquid water to exist.[26][27][28]
22
+
23
+ Climate change and global warming concerns, coupled with the continuing fall in the costs of some renewable energy equipment, such as wind turbines and solar panels, are driving increased use of renewables.[18] New government spending, regulation and policies helped the industry weather the global financial crisis better than many other sectors.[29] As of 2019[update], however, according to the International Renewable Energy Agency, renewables' overall share of the energy mix (including power, heat and transport) needs to grow six times faster in order to keep the rise in average global temperatures "well below" 2.0 °C (3.6 °F) during the present century, compared to pre-industrial levels.[30]
24
+
25
+ As of 2011, small solar PV systems provide electricity to a few million households, and micro-hydro configured into mini-grids serves many more. Over 44 million households use biogas made in household-scale digesters for lighting and/or cooking, and more than 166 million households rely on a new generation of more-efficient biomass cookstoves.[31] [needs update] United Nations' eighth Secretary-General Ban Ki-moon has said that renewable energy has the ability to lift the poorest nations to new levels of prosperity.[32] At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of energy supply. National renewable energy markets are projected to continue to grow strongly in the coming decade and beyond, and some 120 countries have various policy targets for longer-term shares of renewable energy, including a 20% target of all electricity generated for the European Union by 2020. Some countries have much higher long-term policy targets of up to 100% renewables. Outside Europe, a diverse group of 20 or more other countries target renewable energy shares in the 2020–2030 time frame that range from 10% to 50%.[12]
26
+
27
+ Renewable energy often displaces conventional fuels in four areas: electricity generation, hot water/space heating, transportation, and rural (off-grid) energy services.[4]
28
+
29
+ Prior to the development of coal in the mid-19th century, nearly all energy used was renewable. Almost without a doubt the oldest known use of renewable energy, in the form of traditional biomass to fuel fires, dates from more than a million years ago. Use of biomass for fire did not become commonplace until many hundreds of thousands of years later.[37] Probably the second oldest usage of renewable energy is harnessing the wind in order to drive ships over water. This practice can be traced back some 7000 years, to ships in the Persian Gulf and on the Nile.[38] From hot springs, geothermal energy has been used for bathing since Paleolithic times and for space heating since ancient Roman times.[39] Moving into the time of recorded history, the primary sources of traditional renewable energy were human labor, animal power, water power, wind (in grain-crushing windmills), and firewood, a traditional biomass.
30
+
31
+ In the 1860s and 1870s there were already fears that civilization would run out of fossil fuels and the need was felt for a better source. In 1873 Professor Augustin Mouchot wrote:
32
+
33
+ The time will arrive when the industry of Europe will cease to find those natural resources, so necessary for it. Petroleum springs and coal mines are not inexhaustible but are rapidly diminishing in many places. Will man, then, return to the power of water and wind? Or will he emigrate where the most powerful source of heat sends its rays to all? History will show what will come.[40]
34
+
35
+ In 1885, Werner von Siemens, commenting on the discovery of the photovoltaic effect in the solid state, wrote:
36
+
37
+ In conclusion, I would say that however great the scientific importance of this discovery may be, its practical value will be no less obvious when we reflect that the supply of solar energy is both without limit and without cost, and that it will continue to pour down upon us for countless ages after all the coal deposits of the earth have been exhausted and forgotten.[41]
38
+
39
+ Max Weber mentioned the end of fossil fuel in the concluding paragraphs of his Die protestantische Ethik und der Geist des Kapitalismus (The Protestant Ethic and the Spirit of Capitalism), published in 1905.[42] Development of solar engines continued until the outbreak of World War I. The importance of solar energy was recognized in a 1911 Scientific American article: "in the far distant future, natural fuels having been exhausted [solar power] will remain as the only means of existence of the human race".[43]
40
+
41
+ The theory of peak oil was published in 1956.[44] In the 1970s environmentalists promoted the development of renewable energy both as a replacement for the eventual depletion of oil, as well as for an escape from dependence on oil, and the first electricity-generating wind turbines appeared. Solar had long been used for heating and cooling, but solar panels were too costly to build solar farms until 1980.[45]
42
+
43
+ In 2018, worldwide installed capacity of wind power was 564 GW.[47]
44
+
45
+ Air flow can be used to run wind turbines. Modern utility-scale wind turbines range from around 600 kW to 9 MW of rated power. The power available from the wind is a function of the cube of the wind speed, so as wind speed increases, power output increases up to the maximum output for the particular turbine.[48] Areas where winds are stronger and more constant, such as offshore and high-altitude sites, are preferred locations for wind farms. Typically, full load hours of wind turbines vary between 16 and 57 percent annually, but might be higher in particularly favorable offshore sites.[49]
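Since available power scales with the cube of wind speed, a short sketch makes the sensitivity concrete. This is a minimal illustration of the standard formula P = ½ρAv³Cp; the rotor diameter and power coefficient are assumed values, not figures from this article:

```python
import math

AIR_DENSITY = 1.225      # kg/m^3, sea-level air at about 15 degrees C
POWER_COEFF = 0.40       # assumed turbine power coefficient (Betz limit ~0.59)
ROTOR_DIAMETER = 90.0    # m, assumed utility-scale rotor

def wind_power_mw(wind_speed_m_s: float) -> float:
    """Power extracted from the wind in MW: P = 0.5 * rho * A * v^3 * Cp."""
    swept_area = math.pi * (ROTOR_DIAMETER / 2) ** 2  # m^2
    watts = 0.5 * AIR_DENSITY * swept_area * wind_speed_m_s ** 3 * POWER_COEFF
    return watts / 1e6

# Doubling the wind speed yields roughly eight times the power, until the
# turbine reaches its rated maximum and output is capped.
for v in (5.0, 10.0):
    print(f"{v:4.1f} m/s -> {wind_power_mw(v):.2f} MW")
```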
46
+
47
+ Wind-generated electricity met nearly 4% of global electricity demand in 2015, with nearly 63 GW of new wind power capacity installed. Wind energy was the leading source of new capacity in Europe, the US and Canada, and the second largest in China. In Denmark, wind energy met more than 40% of its electricity demand while Ireland, Portugal and Spain each met nearly 20%.
48
+
49
+ Globally, the long-term technical potential of wind energy is believed to be five times total current global energy production, or 40 times current electricity demand, assuming all practical barriers were overcome. This would require wind turbines to be installed over large areas, particularly in areas of higher wind resources, such as offshore. Because offshore wind speeds average about 90% greater than those on land, offshore resources can contribute substantially more energy than land-based turbines.[50]
50
+
51
+ In 2017, worldwide renewable hydropower capacity was 1,154 GW.[15]
52
+
53
+ Since water is about 800 times denser than air, even a slow-flowing stream of water, or moderate sea swell, can yield considerable amounts of energy. There are many forms of water energy.
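As a rough illustration of why water's density matters, the usual hydroelectric output formula P = ηρgQH can be sketched as follows; the efficiency, flow rate, and head are assumed example values, not data from this article:

```python
WATER_DENSITY = 1000.0   # kg/m^3 (about 800x the density of air)
GRAVITY = 9.81           # m/s^2
EFFICIENCY = 0.90        # assumed combined turbine/generator efficiency

def hydro_power_mw(flow_m3_s: float, head_m: float) -> float:
    """Hydroelectric output in MW: P = eta * rho * g * Q * H."""
    return EFFICIENCY * WATER_DENSITY * GRAVITY * flow_m3_s * head_m / 1e6

# Even a modest flow over a moderate drop yields utility-scale power:
print(f"{hydro_power_mw(50.0, 20.0):.1f} MW")  # ~8.8 MW
```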
54
+
55
+ Hydropower is produced in 150 countries, with the Asia-Pacific region generating 32 percent of global hydropower in 2010. For countries having the largest percentage of electricity from renewables, the top 50 are primarily hydroelectric. China is the largest hydroelectricity producer, with 721 terawatt-hours of production in 2010, representing around 17 percent of domestic electricity use. There are now three hydroelectricity stations larger than 10 GW: the Three Gorges Dam in China, Itaipu Dam across the Brazil/Paraguay border, and Guri Dam in Venezuela.[54]
56
+
57
+ Wave power, which captures the energy of ocean surface waves, and tidal power, converting the energy of tides, are two forms of hydropower with future potential; however, they are not yet widely employed commercially. A demonstration project operated by the Ocean Renewable Power Company on the coast of Maine, and connected to the grid, harnesses tidal power from the Bay of Fundy, location of the world's highest tidal flow. Ocean thermal energy conversion, which uses the temperature difference between cooler deep and warmer surface waters, is not currently economically feasible.[55][56]
58
+
59
+ In 2017, global installed solar capacity was 390 GW.[15]
60
+
61
+ Solar energy, radiant light and heat from the sun, is harnessed using a range of ever-evolving technologies such as solar heating, photovoltaics, concentrated solar power (CSP), concentrator photovoltaics (CPV), solar architecture and artificial photosynthesis.[58][59] Solar technologies are broadly characterized as either passive solar or active solar depending on the way they capture, convert, and distribute solar energy. Passive solar techniques include orienting a building to the Sun, selecting materials with favorable thermal mass or light dispersing properties, and designing spaces that naturally circulate air. Active solar technologies encompass solar thermal energy, using solar collectors for heating, and solar power, converting sunlight into electricity either directly using photovoltaics (PV), or indirectly using concentrated solar power (CSP).
62
+
63
+ A photovoltaic system converts light into electrical direct current (DC) by taking advantage of the photoelectric effect.[60] Solar PV has turned into a multi-billion-dollar, fast-growing industry, continues to improve its cost-effectiveness, and has the most potential of any renewable technologies together with CSP.[61][62] Concentrated solar power (CSP) systems use lenses or mirrors and tracking systems to focus a large area of sunlight into a small beam. Commercial concentrated solar power plants were first developed in the 1980s. CSP-Stirling has by far the highest efficiency among all solar energy technologies.
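As a back-of-the-envelope sketch of how irradiance, panel area, and cell efficiency combine in a PV system, the instantaneous DC output is approximately P = G·A·η; all numbers below are assumed for illustration:

```python
def pv_dc_power_kw(irradiance_w_m2: float, area_m2: float, efficiency: float) -> float:
    """Instantaneous DC output of a PV array: P = G * A * eta."""
    return irradiance_w_m2 * area_m2 * efficiency / 1e3

# A hypothetical 20 m^2 rooftop array of 20%-efficient crystalline-silicon
# panels under peak sunlight (~1000 W/m^2) delivers about 4 kW DC:
print(pv_dc_power_kw(1000.0, 20.0, 0.20))  # 4.0
```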
64
+
65
+ In 2011, the International Energy Agency said that "the development of affordable, inexhaustible and clean solar energy technologies will have huge longer-term benefits. It will increase countries' energy security through reliance on an indigenous, inexhaustible and mostly import-independent resource, enhance sustainability, reduce pollution, lower the costs of mitigating climate change, and keep fossil fuel prices lower than otherwise. These advantages are global. Hence the additional costs of the incentives for early deployment should be considered learning investments; they must be wisely spent and need to be widely shared".[58] Italy has the largest proportion of solar electricity in the world; in 2015, solar supplied 7.7% of electricity demand in Italy.[63] In 2017, after another year of rapid growth, solar generated approximately 2% of global power, or 460 TWh.[64]
66
+
67
+ Global geothermal capacity in 2017 was 12.9 GW.[15]
68
+
69
+ High temperature geothermal energy is from thermal energy generated and stored in the Earth. Thermal energy is the energy that determines the temperature of matter. Earth's geothermal energy originates from the original formation of the planet and from radioactive decay of minerals (in currently uncertain[65] but possibly roughly equal[66] proportions). The geothermal gradient, which is the difference in temperature between the core of the planet and its surface, drives a continuous conduction of thermal energy in the form of heat from the core to the surface. The adjective geothermal originates from the Greek roots geo, meaning earth, and thermos, meaning heat.
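As a numerical aside, the conductive heat flow driven by the geothermal gradient follows Fourier's law, q = k·dT/dz. The rock conductivity and gradient below are assumed typical values, not figures from this article:

```python
def conductive_heat_flux_w_m2(gradient_k_per_km: float,
                              conductivity_w_mk: float = 2.5) -> float:
    """Steady conductive heat flux through rock: q = k * dT/dz (Fourier's law)."""
    return conductivity_w_mk * (gradient_k_per_km / 1000.0)  # K/km -> K/m

# An assumed continental gradient of ~30 K/km through rock of conductivity
# ~2.5 W/(m*K) gives a flux of ~0.075 W/m^2, near the observed global mean.
print(conductive_heat_flux_w_m2(30.0))
```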
70
+
71
+ The heat that is used for geothermal energy can be from deep within the Earth, all the way down to Earth's core – 4,000 miles (6,400 km) down. At the core, temperatures may reach over 9,000 °F (5,000 °C). Heat conducts from the core to surrounding rock. Extremely high temperature and pressure cause some rock to melt, which is commonly known as magma. Magma convects upward since it is lighter than the solid rock. This magma then heats rock and water in the crust, sometimes up to 700 °F (371 °C).[67]
72
+
73
+ Low temperature geothermal[35] refers to the use of the outer crust of the Earth as a thermal battery to facilitate renewable thermal energy for heating and cooling buildings, and other refrigeration and industrial uses. In this form of geothermal, a geothermal heat pump and ground-coupled heat exchanger are used together to move heat energy into the Earth (for cooling) and out of the Earth (for heating) on a varying seasonal basis. Low temperature geothermal (generally referred to as "GHP") is an increasingly important renewable technology because it both reduces total annual energy loads associated with heating and cooling and flattens the electric demand curve, eliminating extreme summer and winter peak electric supply requirements. Thus low temperature geothermal/GHP is becoming an increasing national priority with multiple tax credit support[68] and focus as part of the ongoing movement toward net zero energy.[36]
74
+
75
+ Bioenergy global capacity in 2017 was 109 GW.[15]
76
+
77
+ Biomass is biological material derived from living, or recently living organisms. It most often refers to plants or plant-derived materials which are specifically called lignocellulosic biomass.[69] As an energy source, biomass can either be used directly via combustion to produce heat, or indirectly after converting it to various forms of biofuel. Conversion of biomass to biofuel can be achieved by different methods which are broadly classified into thermal, chemical, and biochemical methods. Wood remains the largest biomass energy source today;[70] examples include forest residues – such as dead trees, branches and tree stumps – yard clippings, wood chips and even municipal solid waste. In the second sense, biomass includes plant or animal matter that can be converted into fibers or other industrial chemicals, including biofuels. Industrial biomass can be grown from numerous types of plants, including miscanthus, switchgrass, hemp, corn, poplar, willow, sorghum, sugarcane, bamboo,[71] and a variety of tree species, ranging from eucalyptus to oil palm (palm oil).
78
+
79
+ Plant energy is produced by crops specifically grown for use as fuel that offer high biomass output per hectare with low input energy.[72] The grain can be used for liquid transportation fuels while the straw can be burned to produce heat or electricity. Plant biomass can also be degraded from cellulose to glucose through a series of chemical treatments, and the resulting sugar can then be used as a first generation biofuel.
80
+
81
+ Biomass can be converted to other usable forms of energy such as methane gas[73] or transportation fuels such as ethanol and biodiesel. Rotting garbage, and agricultural and human waste, all release methane gas – also called landfill gas or biogas. Crops, such as corn and sugarcane, can be fermented to produce the transportation fuel, ethanol. Biodiesel, another transportation fuel, can be produced from left-over food products such as vegetable oils and animal fats.[74] Also, biomass to liquids (BTLs) and cellulosic ethanol are still under research.[75][76] There is a great deal of research involving algal fuel or algae-derived biomass, because it is a non-food resource and can be produced at rates 5 to 10 times those of other types of land-based agriculture, such as corn and soy. Once harvested, it can be fermented to produce biofuels such as ethanol, butanol, and methane, as well as biodiesel and hydrogen. The biomass used for electricity generation varies by region. Forest by-products, such as wood residues, are common in the United States. Agricultural waste is common in Mauritius (sugar cane residue) and Southeast Asia (rice husks). Animal husbandry residues, such as poultry litter, are common in the United Kingdom.[77]
82
+
83
+ Biofuels include a wide range of fuels which are derived from biomass. The term covers solid, liquid, and gaseous fuels.[78] Liquid biofuels include bioalcohols, such as bioethanol, and oils, such as biodiesel. Gaseous biofuels include biogas, landfill gas and synthetic gas. Bioethanol is an alcohol made by fermenting the sugar components of plant materials and it is made mostly from sugar and starch crops. These include maize, sugarcane and, more recently, sweet sorghum. The latter crop is particularly suitable for growing in dryland conditions, and is being investigated by International Crops Research Institute for the Semi-Arid Tropics for its potential to provide fuel, along with food and animal feed, in arid parts of Asia and Africa.[79]
84
+
85
+ With advanced technology being developed, cellulosic biomass, such as trees and grasses, is also used as a feedstock for ethanol production. Ethanol can be used as a fuel for vehicles in its pure form, but it is usually used as a gasoline additive to increase octane and improve vehicle emissions. Bioethanol is widely used in the United States and in Brazil. The energy costs of producing bio-ethanol are almost equal to the energy yields from bio-ethanol. However, according to the European Environment Agency, biofuels do not address global warming concerns.[80] Biodiesel is made from vegetable oils, animal fats or recycled greases. It can be used as a fuel for vehicles in its pure form, or more commonly as a diesel additive to reduce levels of particulates, carbon monoxide, and hydrocarbons from diesel-powered vehicles. Biodiesel is produced from oils or fats using transesterification and is the most common biofuel in Europe. Biofuels provided 2.7% of the world's transport fuel in 2010.[81]
86
+
87
+ Biomass, biogas and biofuels are burned to produce heat/power and in doing so harm the environment. Pollutants such as sulphur oxides (SOx), nitrogen oxides (NOx), and particulate matter (PM) are produced from the combustion of biomass; the World Health Organisation estimates that 7 million premature deaths are caused each year by air pollution.[82] Biomass combustion is a major contributor.[82][83][84]
88
+
89
+ Renewable energy production from some sources such as wind and solar is more variable and more geographically spread than technology based on fossil fuels and nuclear. While integrating it into the wider energy system is feasible, it does lead to some additional challenges. In order for the energy system to remain stable, a set of measures can be taken. Implementing energy storage, using a wide variety of renewable energy technologies, and implementing a smart grid in which energy is automatically used at the moment it is produced can reduce the risks and costs of renewable energy implementation.[85] In some locations, individual households can opt to purchase renewable energy through a consumer green energy program.
90
+
91
+ Electrical energy storage is a collection of methods used to store electrical energy. Electrical energy is stored during times when production (especially from intermittent sources such as wind power, tidal power, solar power) exceeds consumption, and returned to the grid when production falls below consumption. Pumped-storage hydroelectricity accounts for more than 90% of all grid power storage. Costs of lithium-ion batteries are dropping rapidly, and they are increasingly being deployed for grid ancillary services and for domestic storage.
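To give a feel for the scale of pumped storage, the recoverable energy is the gravitational potential energy of the upper reservoir times a round-trip efficiency, E = ρgVh·η. The volume, head, and efficiency below are assumed example figures, not data from this article:

```python
WATER_DENSITY = 1000.0  # kg/m^3
GRAVITY = 9.81          # m/s^2

def pumped_storage_mwh(volume_m3: float, head_m: float,
                       round_trip_eff: float = 0.75) -> float:
    """Recoverable energy in MWh: E = rho * g * V * h * eta."""
    joules = WATER_DENSITY * GRAVITY * volume_m3 * head_m * round_trip_eff
    return joules / 3.6e9  # 1 MWh = 3.6e9 J

# A hypothetical 5,000,000 m^3 upper reservoir with 300 m of head
# stores roughly 3 GWh of dispatchable energy:
print(f"{pumped_storage_mwh(5e6, 300.0):,.0f} MWh")
```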
92
+
93
+ Renewable power has been more effective in creating jobs than coal or oil in the United States.[86] In 2016, employment in the sector increased 6 percent in the United States, while employment in the non-renewable energy sector decreased 18 percent. Worldwide, renewables employed about 8.1 million people as of 2016.[87]
94
+
95
+ From the end of 2004, worldwide renewable energy capacity grew at rates of 10–60% annually for many technologies. In 2015 global investment in renewables rose 5% to $285.9 billion, breaking the previous record of $278.5 billion in 2011. 2015 was also the first year that saw renewables, excluding large hydro, account for the majority of all new power capacity (134 GW, making up 53.6% of the total). Of the renewables total, wind accounted for 72 GW and solar photovoltaics 56 GW; both record-breaking numbers and sharply up from 2014 figures (49 GW and 45 GW respectively). In financial terms, solar made up 56% of total new investment and wind accounted for 38%.
96
+
97
+ In 2014 global wind power capacity expanded 16% to 369,553 MW.[90] Yearly wind energy production is also growing rapidly and has reached around 4% of worldwide electricity usage,[91] 11.4% in the EU,[92] and it is widely used in Asia, and the United States. In 2015, worldwide installed photovoltaics capacity increased to 227 gigawatts (GW), sufficient to supply 1 percent of global electricity demands.[93] Solar thermal energy stations operate in the United States and Spain, and as of 2016, the largest of these is the 392 MW Ivanpah Solar Electric Generating System in California.[94][95] The world's largest geothermal power installation is The Geysers in California, with a rated capacity of 750 MW. Brazil has one of the largest renewable energy programs in the world, involving production of ethanol fuel from sugar cane, and ethanol now provides 18% of the country's automotive fuel. Ethanol fuel is also widely available in the United States.
98
+
99
+ In 2017, investments in renewable energy amounted to US$279.8 billion worldwide, with China accounting for US$126.6 billion or 45% of the global investments, the US for US$40.5 billion, and Europe for US$40.9 billion.[6] A recent review of the literature concluded that as greenhouse gas (GHG) emitters begin to be held liable for damages resulting from GHG emissions resulting in climate change, a high value for liability mitigation would provide powerful incentives for deployment of renewable energy technologies.[96]
100
+
101
+ Renewable energy technologies are getting cheaper, through technological change and through the benefits of mass production and market competition. A 2018 report from the International Renewable Energy Agency (IRENA) found that the cost of renewable energy is quickly falling, and will likely be equal to or less than the cost of non-renewables such as fossil fuels by 2020. The report found that solar power costs have dropped 73% since 2010 and onshore wind costs have dropped by 23% in that same timeframe.[106]
102
+
103
+ Current projections concerning the future cost of renewables vary, however. The EIA has predicted that almost two-thirds of net additions to power capacity will come from renewables by 2020 due to the combined policy drivers of local pollution reduction, decarbonisation and energy diversification.
104
+
105
+ According to a 2018 report by Bloomberg New Energy Finance, wind and solar power are expected to generate roughly 50% of the world's energy needs by 2050, while coal powered electricity plants are expected to drop to just 11%.[107]
106
+ Hydro-electricity and geothermal electricity produced at favourable sites are now the cheapest way to generate electricity. Renewable energy costs continue to drop, and the levelised cost of electricity (LCOE) is declining for wind power, solar photovoltaic (PV), concentrated solar power (CSP) and some biomass technologies.[108] Renewable energy is also the most economic solution for new grid-connected capacity in areas with good resources. As the cost of renewable power falls, the scope of economically viable applications increases. Renewable technologies are now often the most economic solution for new generating capacity. Where "oil-fired generation is the predominant power generation source (e.g. on islands, off-grid and in some countries) a lower-cost renewable solution almost always exists today".[108] A series of studies by the US National Renewable Energy Laboratory modeled the "grid in the Western US under a number of different scenarios where intermittent renewables accounted for 33 percent of the total power." In the models, inefficiencies in cycling the fossil fuel plants to compensate for the variation in solar and wind energy resulted in an additional cost of "between $0.47 and $1.28 to each MegaWatt hour generated"; however, the savings in the cost of the fuels saved "adds up to $7 billion, meaning the added costs are, at most, two percent of the savings."[109]
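The levelised cost of electricity (LCOE) used above is the ratio of discounted lifetime costs to discounted lifetime generation. A minimal sketch, with an entirely hypothetical wind-farm cost profile and discount rate (none of these figures come from this article):

```python
def lcoe_usd_per_mwh(capex: float, annual_opex: float, annual_mwh: float,
                     lifetime_years: int, discount_rate: float) -> float:
    """LCOE = discounted lifetime costs / discounted lifetime energy output."""
    discounted_costs = capex + sum(
        annual_opex / (1 + discount_rate) ** t
        for t in range(1, lifetime_years + 1))
    discounted_energy = sum(
        annual_mwh / (1 + discount_rate) ** t
        for t in range(1, lifetime_years + 1))
    return discounted_costs / discounted_energy

# Hypothetical 100 MW wind farm running at a 35% capacity factor:
print(lcoe_usd_per_mwh(capex=150e6, annual_opex=3e6,
                       annual_mwh=100 * 8760 * 0.35,
                       lifetime_years=25, discount_rate=0.07))  # ~52 $/MWh
```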
107
+
108
+ In 2017 the world renewable hydropower capacity was 1,154 GW.[15] Only a quarter of the world's estimated hydroelectric potential of 14,000 TWh/year has been developed. The regional potentials for the growth of hydropower around the world are: Europe 71%, North America 75%, South America 79%, Africa 95%, Middle East 95%, Asia Pacific 82%. However, the political realities of new reservoirs in western countries, economic limitations in the third world and the lack of a transmission system in undeveloped areas result in the possibility of developing 25% of the remaining potential before 2050, with the bulk of that being in the Asia Pacific area.[110] There is slow growth taking place in Western countries,[citation needed] but not in the conventional dam and reservoir style of the past. New projects take the form of run-of-the-river and small hydro, neither using large reservoirs. It is popular to repower old dams, thereby increasing their efficiency and capacity as well as quicker responsiveness on the grid.[111] Where circumstances permit, existing dams such as the Russell Dam (built in 1985) may be updated with "pump back" facilities for pumped storage, which is useful for peak loads or to support intermittent wind and solar power. Countries with large hydroelectric developments such as Canada and Norway are spending billions to expand their grids to trade with neighboring countries having limited hydro.[112]
109
+
110
+ Wind power is widely used in Europe, China, and the United States. From 2004 to 2017, worldwide installed capacity of wind power grew from 47 GW to 514 GW, a more than tenfold increase within 13 years.[15] As of the end of 2014, China, the United States and Germany combined accounted for half of total global capacity.[90] Several other countries have achieved relatively high levels of wind power penetration, such as 21% of stationary electricity production in Denmark, 18% in Portugal, 16% in Spain, and 14% in Ireland in 2010, and have since continued to expand their installed capacity.[113][114] More than 80 countries around the world are using wind power on a commercial basis.[81]
111
+
112
+ Wind turbines are increasing in power, with some commercially deployed models generating over 8 MW per turbine.[115][116][117] More powerful models are in development; see the list of most powerful wind turbines.
113
+
114
+ Solar thermal energy capacity has increased from 1.3 GW in 2012 to 5.0 GW in 2017.[15]
115
+
116
+ Spain is the world leader in solar thermal power deployment with 2.3 GW deployed.[15] The United States has 1.8 GW,[15] most of it in California where 1.4 GW of solar thermal power projects are operational.[121] Several power plants have been constructed in the Mojave Desert, Southwestern United States. As of 2017 only four other countries have deployments above 100 MW:[15] South Africa (300 MW), India (229 MW), Morocco (180 MW) and the United Arab Emirates (100 MW).
117
+
118
+ The United States conducted much early research in photovoltaics and concentrated solar power. The U.S. is among the top countries in the world in electricity generated by the Sun and several of the world's largest utility-scale installations are located in the desert Southwest.
119
+
120
+ The oldest solar thermal power plant in the world is the 354 megawatt (MW) SEGS thermal power plant, in California.[122] The Ivanpah Solar Electric Generating System is a solar thermal power project in the California Mojave Desert, 40 miles (64 km) southwest of Las Vegas, with a gross capacity of 377 MW.[123] The 280 MW Solana Generating Station is a solar power plant near Gila Bend, Arizona, about 70 miles (110 km) southwest of Phoenix, completed in 2013. When commissioned it was the largest parabolic trough plant in the world and the first U.S. solar plant with molten salt thermal energy storage.[124]
121
+
122
+ In developing countries, three World Bank projects for integrated solar thermal/combined-cycle gas-turbine power plants in Egypt, Mexico, and Morocco have been approved.[125]
123
+
124
+ Worldwide growth of PV capacity grouped by region in MW (2006–2014)
125
+
126
+ Photovoltaics (PV) is growing rapidly, with global capacity increasing from 177 GW at the end of 2014 to 385 GW in 2017.[15]
127
+
128
+ PV uses solar cells assembled into solar panels to convert sunlight into electricity. PV systems range from small residential and commercial rooftop or building-integrated installations to large utility-scale photovoltaic power stations. The predominant PV technology is crystalline silicon, while thin-film solar cell technology accounts for about 10 percent of global photovoltaic deployment. In recent years, PV technology has improved its electricity generating efficiency, reduced the installation cost per watt as well as its energy payback time, and reached grid parity in at least 30 different markets by 2014.[126]
129
+ Building-integrated photovoltaics or "onsite" PV systems use existing land and structures and generate power close to where it is consumed.[127]
130
+
131
+ Photovoltaics grew fastest in China, followed by Japan and the United States. Italy meets 7.9 percent of its electricity demands with photovoltaic power—the highest share worldwide.[128] Solar power is forecasted to become the world's largest source of electricity by 2050, with solar photovoltaics and concentrated solar power contributing 16% and 11%, respectively. This requires an increase of installed PV capacity to 4,600 GW, of which more than half is expected to be deployed in China and India.[129]
132
+
133
+ Commercial concentrated solar power plants were first developed in the 1980s. As the cost of solar electricity has fallen, the number of grid-connected solar PV systems has grown into the millions and utility-scale solar power stations with hundreds of megawatts are being built. Many solar photovoltaic power stations have been built, mainly in Europe, China and the United States.[130] The 1.5 GW Tengger Desert Solar Park, in China is the world's largest PV power station. Many of these plants are integrated with agriculture and some use tracking systems that follow the sun's daily path across the sky to generate more electricity than fixed-mounted systems.
134
+
135
+ Bioenergy global capacity in 2017 was 109 GW.[15]
136
+ Biofuels provided 3% of the world's transport fuel in 2017.[131]
137
+
138
+ Mandates for blending biofuels exist in 31 countries at the national level and in 29 states/provinces.[81] According to the International Energy Agency, biofuels have the potential to meet more than a quarter of world demand for transportation fuels by 2050.[132]
139
+
140
+ Since the 1970s, Brazil has had an ethanol fuel program which has allowed the country to become the world's second largest producer of ethanol (after the United States) and the world's largest exporter.[133] Brazil's ethanol fuel program uses modern equipment and cheap sugarcane as feedstock, and the residual cane-waste (bagasse) is used to produce heat and power.[134] There are no longer light vehicles in Brazil running on pure gasoline. By the end of 2008 there were 35,000 filling stations throughout Brazil with at least one ethanol pump.[135] Unfortunately, Operation Car Wash has seriously eroded public trust in oil companies and has implicated several high-ranking Brazilian officials.
141
+
142
+ Nearly all the gasoline sold in the United States today is mixed with 10% ethanol,[136] and motor vehicle manufacturers already produce vehicles designed to run on much higher ethanol blends. Ford, Daimler AG, and GM are among the automobile companies that sell "flexible-fuel" cars, trucks, and minivans that can use gasoline and ethanol blends ranging from pure gasoline up to 85% ethanol. By mid-2006, there were approximately 6 million ethanol compatible vehicles on U.S. roads.[137]
143
+
144
+ Global geothermal capacity in 2017 was 12.9 GW.[15]
145
+
146
+ Geothermal power is cost effective, reliable, sustainable, and environmentally friendly,[138] but has historically been limited to areas near tectonic plate boundaries. Recent technological advances have expanded the range and size of viable resources, especially for applications such as home heating, opening a potential for widespread exploitation. Geothermal wells release greenhouse gases trapped deep within the earth, but these emissions are usually much lower per energy unit than those of fossil fuels. As a result, geothermal power has the potential to help mitigate global warming if widely deployed in place of fossil fuels.
147
+
148
+ In 2017, the United States led the world in geothermal electricity production with 12.9 GW of installed capacity.[15] The largest group of geothermal power plants in the world is located at The Geysers, a geothermal field in California.[139] The Philippines follows the US as the second highest producer of geothermal power in the world, with 1.9 GW of capacity online.[15]
149
+
150
+ Renewable energy technology has sometimes been seen as a costly luxury item by critics, affordable only in the affluent developed world. This erroneous view has persisted for many years; however, between 2016 and 2017, investments in renewable energy were higher in developing countries than in developed countries, with China leading global investment with a record 126.6 billion dollars. Many Latin American and African countries increased their investments significantly as well.[140]
151
+ Renewable energy can be particularly suitable for developing countries. In rural and remote areas, transmission and distribution of energy generated from fossil fuels can be difficult and expensive. Producing renewable energy locally can offer a viable alternative.[141]
152
+
153
+ Technology advances are opening up a huge new market for solar power: the approximately 1.3 billion people around the world who don't have access to grid electricity. Even though they are typically very poor, these people have to pay far more for lighting than people in rich countries because they use inefficient kerosene lamps. Solar power costs half as much as lighting with kerosene.[142] As of 2010, an estimated 3 million households get power from small solar PV systems.[143] Kenya is the world leader in the number of solar power systems installed per capita. More than 30,000 very small solar panels, each producing 12 to 30 watts, are sold in Kenya annually.[144] Some Small Island Developing States (SIDS) are also turning to solar power to reduce their costs and increase their sustainability.
154
+
155
+ Micro-hydro configured into mini-grids also provides power. Over 44 million households use biogas made in household-scale digesters for lighting and/or cooking, and more than 166 million households rely on a new generation of more-efficient biomass cookstoves.[31] Clean liquid fuels sourced from renewable feedstocks are used for cooking and lighting in energy-poor areas of the developing world. Alcohol fuels (ethanol and methanol) can be produced sustainably from non-food sugary, starchy, and cellulosic feedstocks. Project Gaia, Inc. and CleanStar Mozambique are implementing clean cooking programs with liquid ethanol stoves in Ethiopia, Kenya, Nigeria and Mozambique.[145]
156
+
157
+ Renewable energy projects in many developing countries have demonstrated that renewable energy can directly contribute to poverty reduction by providing the energy needed for creating businesses and employment. Renewable energy technologies can also make indirect contributions to alleviating poverty by providing energy for cooking, space heating, and lighting. Renewable energy can also contribute to education, by providing electricity to schools.[146]
158
+
159
+ Policies to support renewable energy have been vital to its expansion. Whereas Europe dominated in establishing renewable energy policy in the early 2000s, most countries around the world now have some form of energy policy.[147]
160
+
161
+ The International Renewable Energy Agency (IRENA) is an intergovernmental organization for promoting the adoption of renewable energy worldwide. It aims to provide concrete policy advice and facilitate capacity building and technology transfer. IRENA was formed in 2009 by 75 countries signing its charter.[149] As of April 2019, IRENA has 160 member states.[150] Then United Nations Secretary-General Ban Ki-moon said that renewable energy has the ability to lift the poorest nations to new levels of prosperity,[32] and in September 2011 he launched the UN Sustainable Energy for All initiative to improve energy access, efficiency and the deployment of renewable energy.[151]
162
+
163
+ The 2015 Paris Agreement on climate change motivated many countries to develop or improve renewable energy policies.[12] By 2017, a total of 121 countries had adopted some form of renewable energy policy.[147] National targets existed in 176 countries that year.[12] In addition, there is also a wide range of policies at the state/provincial and local levels.[81] Some public utilities help plan or install residential energy upgrades. Under President Barack Obama, United States policy encouraged the uptake of renewable energy in line with commitments to the Paris Agreement. Even though the Trump administration abandoned these goals, renewable investment has continued to rise.[152]
164
+
165
+ Many national, state, and local governments have created green banks. A green bank is a quasi-public financial institution that uses public capital to leverage private investment in clean energy technologies.[153] Green banks use a variety of financial tools to bridge market gaps that hinder the deployment of clean energy. The US military has also focused on the use of renewable fuels for military vehicles. Unlike fossil fuels, renewable fuels can be produced in any country, creating a strategic advantage. The US military has already committed to sourcing 50% of its energy consumption from alternative sources.[154]
166
+
167
+ The drive to use 100% renewable energy, for electricity, transport, or even total primary energy supply globally, has been motivated by global warming and other ecological as well as economic concerns. The Intergovernmental Panel on Climate Change has said that there are few fundamental technological limits to integrating a portfolio of renewable energy technologies to meet most of total global energy demand. Renewable energy use has grown much faster than even advocates anticipated.[155] At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of energy supply. Professors S. Pacala and Robert H. Socolow have also developed a series of "stabilization wedges" that can allow us to maintain our quality of life while avoiding catastrophic climate change, and "renewable energy sources," in aggregate, constitute the largest number of their "wedges".[156]
168
+
169
+ Using 100% renewable energy was first suggested in a Science paper published in 1975 by Danish physicist Bent Sørensen.[157] It was followed by several other proposals, until the first detailed analysis of scenarios with very high shares of renewables was published in 1998. These were followed by the first detailed 100% scenarios. In 2006, Czisch published a PhD thesis showing that in a 100% renewable scenario energy supply could match demand in every hour of the year in Europe and North Africa. In the same year, Danish energy professor Henrik Lund published a first paper[158] addressing the optimal combination of renewables, which was followed by several other papers on the transition to 100% renewable energy in Denmark. Lund has since published several more papers on 100% renewable energy. After 2009, publications began to rise steeply, covering 100% scenarios for countries in Europe, America, Australia and other parts of the world.[159]
170
+
171
+ In 2011 Mark Z. Jacobson, professor of civil and environmental engineering at Stanford University, and Mark Delucchi published a study on 100% renewable global energy supply in the journal Energy Policy. They found that producing all new energy from wind power, solar power, and hydropower by 2030 is feasible, and that existing energy supply arrangements could be replaced by 2050. Barriers to implementing the renewable energy plan are seen to be "primarily social and political, not technological or economic".[160] They also found that energy costs with a wind, solar and water system should be similar to today's energy costs.[161]
172
+
173
+ Similarly, in the United States, the independent National Research Council has noted that "sufficient domestic renewable resources exist to allow renewable electricity to play a significant role in future electricity generation and thus help confront issues related to climate change, energy security, and the escalation of energy costs … Renewable energy is an attractive option because renewable resources available in the United States, taken collectively, can supply significantly greater amounts of electricity than the total current or projected domestic demand."[162]
174
+
175
+ The most significant barriers to the widespread implementation of large-scale renewable energy and low carbon energy strategies are primarily political and not technological.[163][164] According to the 2013 Post Carbon Pathways report, which reviewed many international studies, the key roadblocks are: climate change denial, the fossil fuels lobby, political inaction, unsustainable energy consumption, outdated energy infrastructure, and financial constraints.[165]
176
+
177
+ According to the World Bank, the "below 2°C" climate scenario requires 3 billion tonnes of metals and minerals by 2050. The supply of mined resources such as zinc, molybdenum, silver, nickel and copper must increase by up to 500%.[166] A 2018 analysis estimated that the stocks of metals required by various sectors would need to increase by between 1,000% (wind power) and 87,000% (personal vehicle batteries); an 87,000% increase means roughly an 871-fold expansion of today's stock.[167]
178
+
179
+ Other renewable energy technologies are still under development, and include cellulosic ethanol, hot-dry-rock geothermal power, and marine energy.[168] These technologies are not yet widely demonstrated or have limited commercialization. Many are on the horizon and may have potential comparable to other renewable energy technologies, but still depend on attracting sufficient attention and research, development and demonstration (RD&D) funding.[168]
180
+
181
+ There are numerous organizations within the academic, federal, and commercial sectors conducting large scale advanced research in the field of renewable energy. This research spans several areas of focus across the renewable energy spectrum. Most of the research is targeted at improving efficiency and increasing overall energy yields.[169]
182
+ Multiple federally supported research organizations have focused on renewable energy in recent years. Two of the most prominent of these labs are Sandia National Laboratories and the National Renewable Energy Laboratory (NREL), both of which are funded by the United States Department of Energy and supported by various corporate partners.[170] Sandia has a total budget of $2.4 billion[171] while NREL has a budget of $375 million.[172]
183
+
184
+ Collection of static electricity charges from water droplets on metal surfaces is an experimental technology that would be especially useful in low-income countries with relative humidity over 60%.[203]
185
+
186
+ Renewable electricity production from sources such as wind power and solar power is intermittent, which results in reduced capacity factors and requires either energy storage of capacity equal to the sources' total output, or base-load power sources based on fossil fuels or nuclear power.
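Capacity factor is the ratio of the energy a plant actually delivers over a period to what it would deliver running continuously at rated power. A minimal Python sketch with assumed annual figures, not measurements from this article:

```
# Capacity factor = actual energy delivered / (rated power * hours in period).
# The plant figures below are illustrative assumptions only.

def capacity_factor(energy_mwh: float, rated_mw: float, hours: float) -> float:
    return energy_mwh / (rated_mw * hours)

HOURS_PER_YEAR = 8760
# A hypothetical 100 MW wind farm delivering 263,000 MWh in a year:
print(f"wind farm : {capacity_factor(263_000, 100, HOURS_PER_YEAR):.0%}")  # ~30%
# A hypothetical 100 MW base-load plant delivering 790,000 MWh:
print(f"base load : {capacity_factor(790_000, 100, HOURS_PER_YEAR):.0%}")  # ~90%
```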
187
+
188
+ Since the power density per land area of renewable energy sources is at best three orders of magnitude smaller than that of fossil or nuclear power,[204] renewable power plants tend to occupy thousands of hectares, causing environmental concerns and opposition from local residents, especially in densely populated countries. Solar power plants compete with arable land and nature reserves,[205] while on-shore wind farms face opposition due to aesthetic concerns and noise, which affects both humans and wildlife.[206][207][208][209] In the United States, the Massachusetts Cape Wind project was delayed for years partly because of aesthetic concerns. However, residents in other areas have been more positive. According to a town councilor, the overwhelming majority of locals believe that the Ardrossan Wind Farm in Scotland has enhanced the area.[210] These concerns, when directed against renewable energy, are sometimes described as a "not in my back yard" (NIMBY) attitude.
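The land-area consequence of low surface power density follows from simple arithmetic: area = average power / power density. The per-source densities below are order-of-magnitude assumptions chosen only to illustrate the gap described above:

```
# Rough land-area comparison from surface power density (W per m^2 of land).
# All densities are assumed, order-of-magnitude values for illustration.

POWER_DENSITY_W_PER_M2 = {
    "solar PV farm":        10,    # assumed ~5-20 W/m^2
    "onshore wind farm":     2,    # assumed ~1-3 W/m^2 (whole-farm footprint)
    "fossil/nuclear plant": 1000,  # assumed ~10^3 W/m^2
}

TARGET_MW = 1000  # a hypothetical 1 GW of average delivered power

for source, density in POWER_DENSITY_W_PER_M2.items():
    area_hectares = TARGET_MW * 1e6 / density / 1e4
    print(f"{source:22s}: {area_hectares:>9,.0f} ha")
# Tens of thousands of hectares for wind or solar versus ~100 ha for a
# thermal plant of the same average output.
```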
189
+
190
+ A recent[when?] UK Government document states that "projects are generally more likely to succeed if they have broad public support and the consent of local communities. This means giving communities both a say and a stake".[211] In countries such as Germany and Denmark many renewable projects are owned by communities, particularly through cooperative structures, and contribute significantly to overall levels of renewable energy deployment.[212][213]
191
+
192
+ The market for renewable energy technologies has continued to grow. Climate change concerns and increases in green jobs, coupled with high oil prices, peak oil, oil wars, oil spills, the promotion of electric vehicles and renewable electricity, nuclear disasters and increasing government support, are driving increasing renewable energy legislation, incentives and commercialization.[18] New government spending, regulation and policies helped the industry weather the 2009 economic crisis better than many other sectors.[29]
193
+
194
+ While renewables have been very successful in their ever-growing contribution to electrical power, no country dominated by fossil fuels has a plan to stop using them and obtain that power from renewables. Only Scotland and Ontario have stopped burning coal, largely due to good natural gas supplies. In the area of transportation, fossil fuels are even more entrenched and solutions harder to find.[214] It is unclear whether the failures lie with policy or with renewable energy itself, but twenty years after the Kyoto Protocol fossil fuels are still our primary energy source and consumption continues to grow.[215]
195
+
196
+ The International Energy Agency has stated that deployment of renewable technologies usually increases the diversity of electricity sources and, through local generation, contributes to the flexibility of the system and its resistance to central shocks.[216]
197
+
198
+ From around 2010 onwards, there was increasing discussion about the geopolitical impact of the growing use of renewable energy.[217] It was argued that former fossil fuels exporters would experience a weakening of their position in international affairs, while countries with abundant sunshine, wind, hydropower, or geothermal resources would be strengthened.[218] Also countries rich in critical materials for renewable energy technologies were expected to rise in importance in international affairs.[219]
199
+
200
+ The GeGaLo index of geopolitical gains and losses assesses how the geopolitical position of 156 countries may change if the world fully transitions to renewable energy resources. Former fossil fuel exporters are expected to lose power, while the positions of former fossil fuel importers and countries rich in renewable energy resources are expected to strengthen.[220]
201
+
202
+ The ability of biomass and biofuels to contribute to a reduction in CO2 emissions is limited because both biomass and biofuels emit large amounts of air pollution when burned and in some cases compete with food supply. Furthermore, biomass and biofuels consume large amounts of water.[221] Other renewable sources such as wind power, photovoltaics, and hydroelectricity have the advantage of being able to conserve water, lower pollution and reduce CO2 emissions.
203
+ The installations used to produce wind, solar and hydro power are an increasing threat to key conservation areas, with facilities built in areas set aside for nature conservation and other environmentally sensitive areas. They are often much larger than fossil fuel power plants, needing areas of land up to 10 times greater than coal or gas to produce equivalent energy amounts.[222] More than 2,000 renewable energy facilities have been built, and more are under construction, in areas of environmental importance, threatening the habitats of plant and animal species across the globe. The study's authors emphasized that their work should not be interpreted as anti-renewables, because renewable energy is crucial for reducing carbon emissions; the key is ensuring that renewable energy facilities are built in places where they do not damage biodiversity.[223]
204
+
205
+ Renewable energy devices depend on non-renewable resources such as mined metals and use vast amounts of land due to their small surface power density. Manufacturing of photovoltaic panels, wind turbines and batteries requires significant amounts of rare-earth elements[224] and increases mining operations, which have social and environmental impact.[225] Due to co-occurrence of rare-earth and radioactive elements (thorium, uranium and radium), rare-earth mining results in production of low-level radioactive waste.[226]
206
+
207
+ Solar panels change the albedo of the surface, which increases their contribution to global warming.[227]
208
+
209
+ Burbo, NW England
210
+
211
+ Sunrise at the Fenton Wind Farm in Minnesota, US
212
+
213
+ The CSP-station Andasol in Andalusia, Spain
214
+
215
+ Ivanpah solar plant in the Mojave Desert, California, United States
216
+
217
+ Three Gorges Dam and Gezhouba Dam, China
218
+
219
+ Shop selling PV panels in Ouagadougou, Burkina Faso
220
+
221
+ Stump harvesting increases recovery of biomass from forests
222
+
223
+ A small, roof-top mounted PV system in Bonn, Germany
224
+
225
+ The community-owned Westmill Solar Park in South East England
226
+
227
+ Komekurayama photovoltaic power station in Kofu, Japan
228
+
229
+ Krafla, a geothermal power station in Iceland
en/1752.html.txt ADDED
@@ -0,0 +1,229 @@
1
+
2
+
3
+
4
+
5
+ World electricity generation by source in 2017. Total generation was 26 PWh.[1]
6
+
7
+ Renewable energy is energy that is collected from renewable resources, which are naturally replenished on a human timescale, such as sunlight, wind, rain, tides, waves, and geothermal heat.[3] Renewable energy often provides energy in four important areas: electricity generation, air and water heating/cooling, transportation, and rural (off-grid) energy services.[4]
8
+
9
+ Based on REN21's 2017 report, renewables contributed 19.3% to humans' global energy consumption and 24.5% to their generation of electricity in 2015 and 2016, respectively. This energy consumption breaks down as 8.9% from traditional biomass, 4.2% as heat energy (modern biomass, geothermal and solar heat), 3.9% from hydroelectricity and the remaining 2.2% as electricity from wind, solar, geothermal, and other forms of biomass. Worldwide investments in renewable technologies amounted to more than US$286 billion in 2015.[5] In 2017, worldwide investments in renewable energy amounted to US$279.8 billion, with China accounting for US$126.6 billion or 45% of the global investments, the United States for US$40.5 billion and Europe for US$40.9 billion.[6] Globally there are an estimated 7.7 million jobs associated with the renewable energy industries, with solar photovoltaics being the largest renewable employer.[7] Renewable energy systems are rapidly becoming more efficient and cheaper and their share of total energy consumption is increasing.[8] As of 2019, more than two-thirds of worldwide newly installed electricity capacity was renewable.[9] Growth in consumption of coal and oil could end by 2020 due to increased uptake of renewables and natural gas.[10][11]
10
+
11
+ At the national level, at least 30 nations around the world already have renewable energy contributing more than 20 percent of energy supply. National renewable energy markets are projected to continue to grow strongly in the coming decade and beyond.[12]
12
+ Some places and at least two countries, Iceland and Norway, already generate all their electricity using renewable energy, and many other countries have set a goal to reach 100% renewable energy in the future.[13]
13
+ At least 47 nations around the world already have over 50 percent of electricity from renewable resources.[14][15][16] Renewable energy resources exist over wide geographical areas, in contrast to fossil fuels, which are concentrated in a limited number of countries. Rapid deployment of renewable energy and energy efficiency technologies is resulting in significant energy security, climate change mitigation, and economic benefits.[17] In international public opinion surveys there is strong support for promoting renewable sources such as solar power and wind power.[18][19]
14
+
15
+ While many renewable energy projects are large-scale, renewable technologies are also suited to rural and remote areas and developing countries, where energy is often crucial in human development.[20] As most renewable energy technologies provide electricity, renewable energy deployment is often applied in conjunction with further electrification, which has several benefits: electricity can be converted to heat (where necessary generating higher temperatures than fossil fuels), can be converted into mechanical energy with high efficiency, and is clean at the point of consumption.[21][22] In addition, electrification with renewable energy is more efficient and therefore leads to significant reductions in primary energy requirements.[23]
16
+
17
+ Renewable energy flows involve natural phenomena such as sunlight, wind, tides, plant growth, and geothermal heat, as the International Energy Agency explains:[24]
18
+
19
+ Renewable energy is derived from natural processes that are replenished constantly. In its various forms, it derives directly from the sun, or from heat generated deep within the earth. Included in the definition is electricity and heat generated from solar, wind, ocean, hydropower, biomass, geothermal resources, and biofuels and hydrogen derived from renewable resources.
20
+
21
+ Renewable energy resources and significant opportunities for energy efficiency exist over wide geographical areas, in contrast to other energy sources, which are concentrated in a limited number of countries. Rapid deployment of renewable energy and energy efficiency, and technological diversification of energy sources, would result in significant energy security and economic benefits.[17] It would also reduce environmental pollution such as air pollution caused by the burning of fossil fuels, improve public health, reduce premature mortality due to pollution and save associated health costs, which amount to several hundred billion dollars annually in the United States alone.[25] Renewable energy sources that derive their energy from the sun, either directly or indirectly, such as hydro and wind, are expected to be capable of supplying humanity with energy for almost another 1 billion years, at which point the predicted increase in heat from the Sun is expected to make the surface of the Earth too hot for liquid water to exist.[26][27][28]
22
+
23
+ Climate change and global warming concerns, coupled with the continuing fall in the costs of some renewable energy equipment, such as wind turbines and solar panels, are driving increased use of renewables.[18] New government spending, regulation and policies helped the industry weather the global financial crisis better than many other sectors.[29] As of 2019[update], however, according to the International Renewable Energy Agency, renewables' overall share in the energy mix (including power, heat and transport) needs to grow six times faster in order to keep the rise in average global temperatures "well below" 2.0 °C (3.6 °F) during the present century, compared to pre-industrial levels.[30]
24
+
25
+ As of 2011, small solar PV systems provide electricity to a few million households, and micro-hydro configured into mini-grids serves many more. Over 44 million households use biogas made in household-scale digesters for lighting and/or cooking, and more than 166 million households rely on a new generation of more-efficient biomass cookstoves.[31] [needs update] United Nations' eighth Secretary-General Ban Ki-moon has said that renewable energy has the ability to lift the poorest nations to new levels of prosperity.[32] At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of energy supply. National renewable energy markets are projected to continue to grow strongly in the coming decade and beyond, and some 120 countries have various policy targets for longer-term shares of renewable energy, including a 20% target of all electricity generated for the European Union by 2020. Some countries have much higher long-term policy targets of up to 100% renewables. Outside Europe, a diverse group of 20 or more other countries target renewable energy shares in the 2020–2030 time frame that range from 10% to 50%.[12]
26
+
27
+ Renewable energy often displaces conventional fuels in four areas: electricity generation, hot water/space heating, transportation, and rural (off-grid) energy services.[4]
28
+
29
+ Prior to the development of coal in the mid-19th century, nearly all energy used was renewable. Almost certainly the oldest known use of renewable energy, in the form of traditional biomass to fuel fires, dates from more than a million years ago, though the use of biomass for fire did not become commonplace until many hundreds of thousands of years later.[37] Probably the second-oldest use of renewable energy is harnessing the wind to drive ships over water, a practice that can be traced back some 7,000 years, to ships in the Persian Gulf and on the Nile.[38] Geothermal energy from hot springs has been used for bathing since Paleolithic times and for space heating since ancient Roman times.[39] Moving into the time of recorded history, the primary sources of traditional renewable energy were human labor, animal power, water power, wind (in grain-crushing windmills), and firewood, a traditional biomass.
30
+
31
+ In the 1860s and 1870s there were already fears that civilization would run out of fossil fuels and the need was felt for a better source. In 1873 Professor Augustin Mouchot wrote:
32
+
33
+ The time will arrive when the industry of Europe will cease to find those natural resources, so necessary for it. Petroleum springs and coal mines are not inexhaustible but are rapidly diminishing in many places. Will man, then, return to the power of water and wind? Or will he emigrate where the most powerful source of heat sends its rays to all? History will show what will come.[40]
34
+
35
+ In 1885, Werner von Siemens, commenting on the discovery of the photovoltaic effect in the solid state, wrote:
36
+
37
+ In conclusion, I would say that however great the scientific importance of this discovery may be, its practical value will be no less obvious when we reflect that the supply of solar energy is both without limit and without cost, and that it will continue to pour down upon us for countless ages after all the coal deposits of the earth have been exhausted and forgotten.[41]
38
+
39
+ Max Weber mentioned the end of fossil fuels in the concluding paragraphs of his Die protestantische Ethik und der Geist des Kapitalismus (The Protestant Ethic and the Spirit of Capitalism), published in 1905.[42] Development of solar engines continued until the outbreak of World War I. The importance of solar energy was recognized in a 1911 Scientific American article: "in the far distant future, natural fuels having been exhausted [solar power] will remain as the only means of existence of the human race".[43]
40
+
41
+ The theory of peak oil was published in 1956.[44] In the 1970s environmentalists promoted the development of renewable energy both as a replacement for the eventual depletion of oil, as well as for an escape from dependence on oil, and the first electricity-generating wind turbines appeared. Solar had long been used for heating and cooling, but solar panels were too costly to build solar farms until 1980.[45]
42
+
43
+ In 2018, worldwide installed capacity of wind power was 564 GW.[47]
44
+
45
+ Air flow can be used to run wind turbines. Modern utility-scale wind turbines range from around 600 kW to 9 MW of rated power. The power available from the wind is a function of the cube of the wind speed, so as wind speed increases, power output increases up to the maximum output for the particular turbine.[48] Areas where winds are stronger and more constant, such as offshore and high-altitude sites, are preferred locations for wind farms. Typically, full load hours of wind turbines vary between 16 and 57 percent annually, but might be higher in particularly favorable offshore sites.[49]
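To make the cube relationship concrete, the sketch below evaluates P = ½ρAv³Cp for a turbine rotor. The air density, rotor diameter and power coefficient are illustrative assumptions, not figures from this article:

```
# Wind-power cube law: P = 0.5 * rho * A * v**3 * Cp.
import math

AIR_DENSITY = 1.225     # kg/m^3, sea-level standard air (assumed)
ROTOR_DIAMETER = 120.0  # m, typical of a multi-MW turbine (assumed)
POWER_COEFF = 0.45      # captured fraction; the Betz limit is ~0.593

def wind_power_mw(wind_speed_ms: float) -> float:
    """Power available to the rotor, in megawatts."""
    swept_area = math.pi * (ROTOR_DIAMETER / 2) ** 2
    power_w = 0.5 * AIR_DENSITY * swept_area * wind_speed_ms ** 3 * POWER_COEFF
    return power_w / 1e6

for v in (5, 8, 10, 12):
    print(f"{v:>2} m/s -> {wind_power_mw(v):5.2f} MW")
# Doubling the wind speed multiplies power by 2**3 = 8, which is why
# steadier, faster offshore and high-altitude winds are preferred.
```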
46
+
47
+ Wind-generated electricity met nearly 4% of global electricity demand in 2015, with nearly 63 GW of new wind power capacity installed. Wind energy was the leading source of new capacity in Europe, the US and Canada, and the second largest in China. In Denmark, wind energy met more than 40% of its electricity demand while Ireland, Portugal and Spain each met nearly 20%.
48
+
49
+ Globally, the long-term technical potential of wind energy is believed to be five times total current global energy production, or 40 times current electricity demand, assuming all practical barriers were overcome. This would require wind turbines to be installed over large areas, particularly in areas of higher wind resources, such as offshore. As offshore wind speeds average ~90% greater than those on land, offshore resources can contribute substantially more energy than land-stationed turbines.[50]
50
+
51
+ In 2017, worldwide renewable hydropower capacity was 1,154 GW.[15]
52
+
53
+ Since water is about 800 times denser than air, even a slow-flowing stream of water, or a moderate sea swell, can yield considerable amounts of energy. Water energy takes many forms.
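As a rough illustration of why dense, moving water carries so much energy, hydroelectric output can be estimated from P = ηρgQH (efficiency × water density × gravity × flow rate × head). A minimal sketch with assumed plant parameters:

```
# Hydropower estimate: P = eta * rho * g * Q * H.
RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

def hydro_power_mw(flow_m3s: float, head_m: float, efficiency: float = 0.9) -> float:
    """Electrical output in MW for a given flow and head (90% efficiency assumed)."""
    return efficiency * RHO_WATER * G * flow_m3s * head_m / 1e6

# A hypothetical plant passing 200 m^3/s through an 80 m head:
print(f"{hydro_power_mw(200, 80):.0f} MW")  # ~141 MW
```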
54
+
55
+ Hydropower is produced in 150 countries, with the Asia-Pacific region generating 32 percent of global hydropower in 2010. For countries having the largest percentage of electricity from renewables, the top 50 are primarily hydroelectric. China is the largest hydroelectricity producer, with 721 terawatt-hours of production in 2010, representing around 17 percent of domestic electricity use. There are now three hydroelectricity stations larger than 10 GW: the Three Gorges Dam in China, Itaipu Dam across the Brazil/Paraguay border, and Guri Dam in Venezuela.[54]
56
+
57
+ Wave power, which captures the energy of ocean surface waves, and tidal power, converting the energy of tides, are two forms of hydropower with future potential; however, they are not yet widely employed commercially. A demonstration project operated by the Ocean Renewable Power Company on the coast of Maine, and connected to the grid, harnesses tidal power from the Bay of Fundy, location of world's highest tidal flow. Ocean thermal energy conversion, which uses the temperature difference between cooler deep and warmer surface waters, currently has no economic feasibility.[55][56]
58
+
59
+ In 2017, global installed solar capacity was 390 GW.[15]
60
+
61
+ Solar energy, radiant light and heat from the sun, is harnessed using a range of ever-evolving technologies such as solar heating, photovoltaics, concentrated solar power (CSP), concentrator photovoltaics (CPV), solar architecture and artificial photosynthesis.[58][59] Solar technologies are broadly characterized as either passive solar or active solar depending on the way they capture, convert, and distribute solar energy. Passive solar techniques include orienting a building to the Sun, selecting materials with favorable thermal mass or light dispersing properties, and designing spaces that naturally circulate air. Active solar technologies encompass solar thermal energy, using solar collectors for heating, and solar power, converting sunlight into electricity either directly using photovoltaics (PV), or indirectly using concentrated solar power (CSP).
62
+
63
+ A photovoltaic system converts light into electrical direct current (DC) by taking advantage of the photoelectric effect.[60] Solar PV has turned into a multi-billion, fast-growing industry, continues to improve its cost-effectiveness, and has the most potential of any renewable technologies together with CSP.[61][62] Concentrated solar power (CSP) systems use lenses or mirrors and tracking systems to focus a large area of sunlight into a small beam. Commercial concentrated solar power plants were first developed in the 1980s. CSP-Stirling has by far the highest efficiency among all solar energy technologies.
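A back-of-the-envelope estimate of a PV system's annual output can be made from panel area, module efficiency, local insolation and a performance ratio that absorbs wiring, inverter and soiling losses. All the inputs in this sketch are assumptions:

```
# Annual PV yield: E = A * eta * H * PR.

def pv_annual_kwh(area_m2: float, efficiency: float,
                  insolation_kwh_m2_yr: float,
                  performance_ratio: float = 0.8) -> float:
    """Estimated AC energy per year from a PV array."""
    return area_m2 * efficiency * insolation_kwh_m2_yr * performance_ratio

# A hypothetical 20 m^2 rooftop array with 20%-efficient modules
# receiving 1,500 kWh/m^2 of sunlight per year:
print(f"{pv_annual_kwh(20, 0.20, 1500):,.0f} kWh/yr")  # ~4,800 kWh/yr
```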
64
+
65
+ In 2011, the International Energy Agency said that "the development of affordable, inexhaustible and clean solar energy technologies will have huge longer-term benefits. It will increase countries' energy security through reliance on an indigenous, inexhaustible and mostly import-independent resource, enhance sustainability, reduce pollution, lower the costs of mitigating climate change, and keep fossil fuel prices lower than otherwise. These advantages are global. Hence the additional costs of the incentives for early deployment should be considered learning investments; they must be wisely spent and need to be widely shared".[58] Italy has the largest proportion of solar electricity in the world; in 2015, solar supplied 7.7% of electricity demand in Italy.[63] In 2017, after another year of rapid growth, solar generated approximately 2% of global power, or 460 TWh.[64]
66
+
67
+ Global geothermal capacity in 2017 was 12.9 GW.[15]
68
+
69
+ High temperature geothermal energy comes from thermal energy generated and stored in the Earth. Thermal energy is the energy that determines the temperature of matter. Earth's geothermal energy originates from the original formation of the planet and from radioactive decay of minerals (in currently uncertain[65] but possibly roughly equal[66] proportions). The geothermal gradient, which is the difference in temperature between the core of the planet and its surface, drives a continuous conduction of thermal energy in the form of heat from the core to the surface. The adjective geothermal originates from the Greek roots geo, meaning earth, and thermos, meaning heat.
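The conductive heat flow driven by this gradient can be approximated with Fourier's law, q = k·dT/dz. The conductivity and gradient below are typical assumed values for continental crust, not measurements from this article:

```
# Fourier's law for one-dimensional conduction: q = k * dT/dz.

THERMAL_CONDUCTIVITY = 2.5   # W/(m*K), common crustal rock (assumed)
GEOTHERMAL_GRADIENT = 0.025  # K/m, i.e. ~25 degrees C per km (assumed)

heat_flux_w_m2 = THERMAL_CONDUCTIVITY * GEOTHERMAL_GRADIENT
print(f"{heat_flux_w_m2 * 1000:.0f} mW/m^2")  # ~63 mW/m^2, near the continental average
```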
70
+
71
+ The heat that is used for geothermal energy can be from deep within the Earth, all the way down to Earth's core – 4,000 miles (6,400 km) down. At the core, temperatures may reach over 9,000 °F (5,000 °C). Heat conducts from the core to surrounding rock. Extremely high temperature and pressure cause some rock to melt, which is commonly known as magma. Magma convects upward since it is lighter than the solid rock. This magma then heats rock and water in the crust, sometimes up to 700 °F (371 °C).[67]
72
+
73
+ Low temperature geothermal[35] refers to the use of the outer crust of the Earth as a thermal battery to facilitate renewable thermal energy for heating and cooling buildings, and other refrigeration and industrial uses. In this form of geothermal, a geothermal heat pump and ground-coupled heat exchanger are used together to move heat energy into the Earth (for cooling) and out of the Earth (for heating) on a varying seasonal basis. Low temperature geothermal (generally referred to as "GHP") is an increasingly important renewable technology because it both reduces total annual energy loads associated with heating and cooling and flattens the electric demand curve by eliminating extreme summer and winter peak electric supply requirements. Thus low temperature geothermal/GHP is becoming an increasing national priority with multiple tax credit support[68] and focus as part of the ongoing movement toward net zero energy.[36]
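One way to see why the shallow ground makes a good thermal battery is to compare the ideal (Carnot) heating coefficient of performance of a heat pump drawing on stable ground temperatures against one drawing on cold winter air. A minimal sketch with assumed temperatures; real units achieve only a fraction of the Carnot bound:

```
# Ideal heating COP (Carnot): COP = T_hot / (T_hot - T_cold), in kelvin.

def carnot_heating_cop(t_hot_c: float, t_cold_c: float) -> float:
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return t_hot_k / (t_hot_k - t_cold_k)

# Delivering 35 degC heat from 10 degC ground vs. -10 degC winter air:
print(f"ground source: {carnot_heating_cop(35, 10):.1f}")   # ~12.3 ideal
print(f"air source   : {carnot_heating_cop(35, -10):.1f}")  # ~6.8 ideal
# The milder, stable source temperature is why GHPs trim both annual
# energy use and the winter peak on the electric grid.
```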
74
+
75
+ Bioenergy global capacity in 2017 was 109 GW.[15]
76
+
77
+ Biomass is biological material derived from living, or recently living, organisms. It most often refers to plants or plant-derived materials, which are specifically called lignocellulosic biomass.[69] As an energy source, biomass can either be used directly via combustion to produce heat, or indirectly after converting it to various forms of biofuel. Conversion of biomass to biofuel can be achieved by different methods, which are broadly classified into thermal, chemical, and biochemical methods. Wood remains the largest biomass energy source today;[70] examples include forest residues – such as dead trees, branches and tree stumps – yard clippings, wood chips and even municipal solid waste. In a broader sense, biomass includes plant or animal matter that can be converted into fibers or other industrial chemicals, including biofuels. Industrial biomass can be grown from numerous types of plants, including miscanthus, switchgrass, hemp, corn, poplar, willow, sorghum, sugarcane, bamboo,[71] and a variety of tree species, ranging from eucalyptus to oil palm (palm oil).
78
+
79
+ Plant energy is produced by crops specifically grown for use as fuel that offer high biomass output per hectare with low input energy.[72] The grain can be used for liquid transportation fuels while the straw can be burned to produce heat or electricity. Plant biomass can also be degraded from cellulose to glucose through a series of chemical treatments, and the resulting sugar can then be used as a first generation biofuel.
80
+
81
+ Biomass can be converted to other usable forms of energy such as methane gas[73] or transportation fuels such as ethanol and biodiesel. Rotting garbage, and agricultural and human waste, all release methane gas – also called landfill gas or biogas. Crops such as corn and sugarcane can be fermented to produce the transportation fuel ethanol. Biodiesel, another transportation fuel, can be produced from left-over food products such as vegetable oils and animal fats.[74] Also, biomass to liquids (BTLs) and cellulosic ethanol are still under research.[75][76] There is a great deal of research involving algal fuel or algae-derived biomass because it is a non-food resource and can be produced at rates 5 to 10 times those of other types of land-based agriculture, such as corn and soy. Once harvested, it can be fermented to produce biofuels such as ethanol, butanol, and methane, as well as biodiesel and hydrogen. The biomass used for electricity generation varies by region. Forest by-products, such as wood residues, are common in the United States. Agricultural waste is common in Mauritius (sugar cane residue) and Southeast Asia (rice husks). Animal husbandry residues, such as poultry litter, are common in the United Kingdom.[77]
82
+
83
+ Biofuels include a wide range of fuels which are derived from biomass. The term covers solid, liquid, and gaseous fuels.[78] Liquid biofuels include bioalcohols, such as bioethanol, and oils, such as biodiesel. Gaseous biofuels include biogas, landfill gas and synthetic gas. Bioethanol is an alcohol made by fermenting the sugar components of plant materials and it is made mostly from sugar and starch crops. These include maize, sugarcane and, more recently, sweet sorghum. The latter crop is particularly suitable for growing in dryland conditions, and is being investigated by International Crops Research Institute for the Semi-Arid Tropics for its potential to provide fuel, along with food and animal feed, in arid parts of Asia and Africa.[79]
84
+
85
+ With advanced technology being developed, cellulosic biomass, such as trees and grasses, is also used as a feedstock for ethanol production. Ethanol can be used as a fuel for vehicles in its pure form, but it is usually used as a gasoline additive to increase octane and improve vehicle emissions. Bioethanol is widely used in the United States and in Brazil. The energy costs of producing bio-ethanol are almost equal to the energy yields from bio-ethanol. However, according to the European Environment Agency, biofuels do not address global warming concerns.[80] Biodiesel is made from vegetable oils, animal fats or recycled greases. It can be used as a fuel for vehicles in its pure form, or more commonly as a diesel additive to reduce levels of particulates, carbon monoxide, and hydrocarbons from diesel-powered vehicles. Biodiesel is produced from oils or fats using transesterification and is the most common biofuel in Europe. Biofuels provided 2.7% of the world's transport fuel in 2010.[81]
86
+
87
+ Biomass, biogas and biofuels are burned to produce heat/power and in doing so harm the environment. Pollutants such as sulphur oxides (SOx), nitrogen oxides (NOx), and particulate matter (PM) are produced from the combustion of biomass; the World Health Organisation estimates that 7 million premature deaths are caused each year by air pollution.[82] Biomass combustion is a major contributor.[82][83][84]
88
+
89
+ Renewable energy production from some sources such as wind and solar is more variable and more geographically spread than technology based on fossil fuels and nuclear. While integrating it into the wider energy system is feasible, it does lead to some additional challenges. In order for the energy system to remain stable, a set of measures can be taken. Implementation of energy storage, use of a wide variety of renewable energy technologies, and implementation of a smart grid in which energy is automatically used at the moment it is produced can reduce the risks and costs of renewable energy implementation.[85] In some locations, individual households can opt to purchase renewable energy through a consumer green energy program.
90
+
91
+ Electrical energy storage is a collection of methods used to store electrical energy. Electrical energy is stored during times when production (especially from intermittent sources such as wind power, tidal power and solar power) exceeds consumption, and returned to the grid when production falls below consumption. Pumped-storage hydroelectricity accounts for more than 90% of all grid power storage. Costs of lithium-ion batteries are dropping rapidly, and they are increasingly being deployed for grid ancillary services and for domestic storage.
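The store-when-surplus, discharge-when-short behaviour described above can be sketched in a few lines. The generation and demand series, storage size and charging efficiency are all made-up illustrative values:

```
# Toy storage dispatch: charge on surplus, discharge on shortfall.

def dispatch(generation, demand, capacity_mwh, charge_efficiency=0.9):
    soc = 0.0  # state of charge, MWh
    for gen, load in zip(generation, demand):
        surplus = gen - load
        if surplus > 0:                        # store the excess
            soc = min(capacity_mwh, soc + surplus * charge_efficiency)
        else:                                  # cover the shortfall
            soc -= min(soc, -surplus)
        print(f"gen={gen:5.1f}  load={load:5.1f}  storage={soc:6.1f} MWh")

# A hypothetical solar-heavy day in four-hour steps:
dispatch(generation=[0, 20, 120, 140, 60, 0],
         demand=[40, 50, 70, 80, 90, 60],
         capacity_mwh=100)
```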
92
+
93
+ Renewable power has been more effective in creating jobs than coal or oil in the United States.[86] In 2016, employment in the sector increased 6 percent in the United States, while employment in the non-renewable energy sector decreased 18 percent. Worldwide, renewables employed about 8.1 million people as of 2016.[87]
94
+
95
+ From the end of 2004, worldwide renewable energy capacity grew at rates of 10–60% annually for many technologies. In 2015 global investment in renewables rose 5% to $285.9 billion, breaking the previous record of $278.5 billion in 2011. 2015 was also the first year that saw renewables, excluding large hydro, account for the majority of all new power capacity (134 GW, making up 53.6% of the total). Of the renewables total, wind accounted for 72 GW and solar photovoltaics 56 GW; both record-breaking numbers and sharply up from 2014 figures (49 GW and 45 GW respectively). In financial terms, solar made up 56% of total new investment and wind accounted for 38%.
96
+
97
+ In 2014 global wind power capacity expanded 16% to 369,553 MW.[90] Yearly wind energy production is also growing rapidly and has reached around 4% of worldwide electricity usage[91] and 11.4% in the EU,[92] and it is widely used in Asia and the United States. In 2015, worldwide installed photovoltaics capacity increased to 227 gigawatts (GW), sufficient to supply 1 percent of global electricity demand.[93] Solar thermal energy stations operate in the United States and Spain, and as of 2016, the largest of these is the 392 MW Ivanpah Solar Electric Generating System in California.[94][95] The world's largest geothermal power installation is The Geysers in California, with a rated capacity of 750 MW. Brazil has one of the largest renewable energy programs in the world, involving production of ethanol fuel from sugar cane, and ethanol now provides 18% of the country's automotive fuel. Ethanol fuel is also widely available in the United States.
98
+
99
+ In 2017, investments in renewable energy amounted to US$279.8 billion worldwide, with China accounting for US$126.6 billion or 45% of the global investments, the US for US$40.5 billion, and Europe for US$40.9 billion.[6] A recent review of the literature concluded that as greenhouse gas (GHG) emitters begin to be held liable for damages resulting from GHG emissions that cause climate change, a high value for liability mitigation would provide powerful incentives for deployment of renewable energy technologies.[96]
100
+
101
+ Renewable energy technologies are getting cheaper, through technological change and through the benefits of mass production and market competition. A 2018 report from the International Renewable Energy Agency (IRENA) found that the cost of renewable energy is quickly falling, and will likely be equal to or less than the cost of non-renewables such as fossil fuels by 2020. The report found that solar power costs had dropped 73% since 2010 and onshore wind costs had dropped by 23% in the same timeframe.[106]
102
+
103
+ Current projections concerning the future cost of renewables vary, however. The EIA has predicted that almost two-thirds of net additions to power capacity will come from renewables by 2020 due to the combined policy benefits of local pollution reduction, decarbonisation and energy diversification.
104
+
105
+ According to a 2018 report by Bloomberg New Energy Finance, wind and solar power are expected to generate roughly 50% of the world's energy needs by 2050, while coal powered electricity plants are expected to drop to just 11%.[107]
106
+ Hydro-electricity and geothermal electricity produced at favourable sites are now the cheapest way to generate electricity. Renewable energy costs continue to drop, and the levelised cost of electricity (LCOE) is declining for wind power, solar photovoltaic (PV), concentrated solar power (CSP) and some biomass technologies.[108] Renewable energy is also the most economic solution for new grid-connected capacity in areas with good resources. As the cost of renewable power falls, the scope of economically viable applications increases. Renewable technologies are now often the most economic solution for new generating capacity. Where "oil-fired generation is the predominant power generation source (e.g. on islands, off-grid and in some countries) a lower-cost renewable solution almost always exists today".[108] A series of studies by the US National Renewable Energy Laboratory modeled the "grid in the Western US under a number of different scenarios where intermittent renewables accounted for 33 percent of the total power." In the models, inefficiencies in cycling the fossil fuel plants to compensate for the variation in solar and wind energy resulted in an additional cost of "between $0.47 and $1.28 to each MegaWatt hour generated"; however, the savings in the cost of the fuels saved "adds up to $7 billion, meaning the added costs are, at most, two percent of the savings."[109]
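The levelised cost of electricity mentioned above reduces lifetime costs and lifetime output to a single figure: LCOE = Σ costs_t/(1+r)^t divided by Σ energy_t/(1+r)^t. A minimal sketch with assumed plant parameters, not data from the studies cited here:

```
# Levelised cost of electricity (LCOE), $ per MWh.

def lcoe(capex, annual_opex, annual_mwh, lifetime_years, discount_rate):
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, lifetime_years + 1))
    energy = sum(annual_mwh / (1 + discount_rate) ** t
                 for t in range(1, lifetime_years + 1))
    return costs / energy

# A hypothetical 100 MW wind farm: $130M capex, $3M/yr O&M,
# 30% capacity factor, 25-year life, 6% discount rate.
annual_mwh = 100 * 8760 * 0.30
print(f"${lcoe(130e6, 3e6, annual_mwh, 25, 0.06):.0f}/MWh")  # ~$50/MWh
```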
107
+
108
+ In 2017 world renewable hydropower capacity was 1,154 GW.[15] Only a quarter of the world's estimated hydroelectric potential of 14,000 TWh/year has been developed. The regional potentials for the growth of hydropower around the world are 71% in Europe, 75% in North America, 79% in South America, 95% in Africa, 95% in the Middle East and 82% in Asia Pacific. However, the political realities of new reservoirs in western countries, economic limitations in the third world and the lack of a transmission system in undeveloped areas result in the possibility of developing only 25% of the remaining potential before 2050, with the bulk of that being in the Asia Pacific area.[110] There is slow growth taking place in Western countries,[citation needed] but not in the conventional dam and reservoir style of the past. New projects take the form of run-of-the-river and small hydro, neither using large reservoirs. It is popular to repower old dams, thereby increasing their efficiency and capacity as well as quicker responsiveness on the grid.[111] Where circumstances permit, existing dams such as the Russell Dam built in 1985 may be updated with "pump back" facilities for pumped storage, which is useful for peak loads or to support intermittent wind and solar power. Countries with large hydroelectric developments such as Canada and Norway are spending billions to expand their grids to trade with neighboring countries having limited hydro.[112]
109
+
110
+ Wind power is widely used in Europe, China, and the United States. From 2004 to 2017, worldwide installed capacity of wind power grew from 47 GW to 514 GW – a more than tenfold increase within 13 years.[15] As of the end of 2014, China, the United States and Germany combined accounted for half of total global capacity.[90] Several other countries have achieved relatively high levels of wind power penetration, such as 21% of stationary electricity production in Denmark, 18% in Portugal, 16% in Spain, and 14% in Ireland in 2010, and have since continued to expand their installed capacity.[113][114] More than 80 countries around the world use wind power on a commercial basis.[81]
111
+
112
+ Wind turbines are increasing in power, with some commercially deployed models generating over 8 MW per turbine.[115][116][117] More powerful models are in development; see the list of most powerful wind turbines.
113
+
114
+ Solar thermal energy capacity has increased from 1.3 GW in 2012 to 5.0 GW in 2017.[15]
115
+
116
+ Spain is the world leader in solar thermal power deployment, with 2.3 GW deployed.[15] The United States has 1.8 GW,[15] most of it in California, where 1.4 GW of solar thermal power projects are operational.[121] Several power plants have been constructed in the Mojave Desert, Southwestern United States. As of 2017 only four other countries have deployments above 100 MW:[15] South Africa (300 MW), India (229 MW), Morocco (180 MW) and the United Arab Emirates (100 MW).
117
+
118
+ The United States conducted much early research in photovoltaics and concentrated solar power. The U.S. is among the top countries in the world in electricity generated by the Sun and several of the world's largest utility-scale installations are located in the desert Southwest.
119
+
120
+ The oldest solar thermal power plant in the world is the 354 megawatt (MW) SEGS thermal power plant, in California.[122] The Ivanpah Solar Electric Generating System is a solar thermal power project in the California Mojave Desert, 40 miles (64 km) southwest of Las Vegas, with a gross capacity of 377 MW.[123] The 280 MW Solana Generating Station is a solar power plant near Gila Bend, Arizona, about 70 miles (110 km) southwest of Phoenix, completed in 2013. When commissioned it was the largest parabolic trough plant in the world and the first U.S. solar plant with molten salt thermal energy storage.[124]
121
+
122
+ In developing countries, three World Bank projects for integrated solar thermal/combined-cycle gas-turbine power plants in Egypt, Mexico, and Morocco have been approved.[125]
123
+
124
+ Worldwide growth of PV capacity grouped by region in MW (2006–2014)
125
+
126
+ Photovoltaics (PV) is rapidly growing, with global capacity increasing from 177 GW at the end of 2014 to 385 GW in 2017.[15]
127
+
128
+ PV uses solar cells assembled into solar panels to convert sunlight into electricity. PV systems range from small residential and commercial rooftop or building-integrated installations to large utility-scale photovoltaic power stations. The predominant PV technology is crystalline silicon, while thin-film solar cell technology accounts for about 10 percent of global photovoltaic deployment. In recent years, PV technology has improved its electricity-generating efficiency, reduced the installation cost per watt as well as its energy payback time, and reached grid parity in at least 30 different markets by 2014.[126]
129
+ Building-integrated photovoltaics or "onsite" PV systems use existing land and structures and generate power close to where it is consumed.[127]
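Two of the metrics mentioned in the preceding paragraph are easy to illustrate: energy payback time (embodied energy divided by annual yield) and grid parity (unsubsidised PV cost at or below the retail tariff). Every number in this sketch is a rough assumption:

```
# Energy payback time (EPBT) and a simple grid-parity check.

EMBODIED_KWH_PER_M2 = 1000      # assumed energy to make and install a module
ANNUAL_YIELD_KWH_PER_M2 = 300   # assumed output in a sunny location

epbt_years = EMBODIED_KWH_PER_M2 / ANNUAL_YIELD_KWH_PER_M2
print(f"EPBT ~ {epbt_years:.1f} years")  # ~3.3 years, against a 25+ year module life

pv_cost_per_kwh, retail_tariff = 0.09, 0.12  # assumed $/kWh
print("grid parity reached" if pv_cost_per_kwh <= retail_tariff else "not yet")
```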
130
+
131
+ Photovoltaics grew fastest in China, followed by Japan and the United States. Italy meets 7.9 percent of its electricity demands with photovoltaic power—the highest share worldwide.[128] Solar power is forecasted to become the world's largest source of electricity by 2050, with solar photovoltaics and concentrated solar power contributing 16% and 11%, respectively. This requires an increase of installed PV capacity to 4,600 GW, of which more than half is expected to be deployed in China and India.[129]
132
+
133
+ Commercial concentrated solar power plants were first developed in the 1980s. As the cost of solar electricity has fallen, the number of grid-connected solar PV systems has grown into the millions and utility-scale solar power stations with hundreds of megawatts are being built. Many solar photovoltaic power stations have been built, mainly in Europe, China and the United States.[130] The 1.5 GW Tengger Desert Solar Park, in China is the world's largest PV power station. Many of these plants are integrated with agriculture and some use tracking systems that follow the sun's daily path across the sky to generate more electricity than fixed-mounted systems.
134
+
135
+ Bioenergy global capacity in 2017 was 109 GW.[15]
136
+ Biofuels provided 3% of the world's transport fuel in 2017.[131]
137
+
138
+ Mandates for blending biofuels exist in 31 countries at the national level and in 29 states/provinces.[81] According to the International Energy Agency, biofuels have the potential to meet more than a quarter of world demand for transportation fuels by 2050.[132]
139
+
140
+ Since the 1970s, Brazil has had an ethanol fuel program which has allowed the country to become the world's second largest producer of ethanol (after the United States) and the world's largest exporter.[133] Brazil's ethanol fuel program uses modern equipment and cheap sugarcane as feedstock, and the residual cane-waste (bagasse) is used to produce heat and power.[134] There are no longer light vehicles in Brazil running on pure gasoline. By the end of 2008 there were 35,000 filling stations throughout Brazil with at least one ethanol pump.[135] Unfortunately, Operation Car Wash has seriously eroded public trust in oil companies and has implicated several high-ranking Brazilian officials.
141
+
142
+ Nearly all the gasoline sold in the United States today is mixed with 10% ethanol,[136] and motor vehicle manufacturers already produce vehicles designed to run on much higher ethanol blends. Ford, Daimler AG, and GM are among the automobile companies that sell "flexible-fuel" cars, trucks, and minivans that can use gasoline and ethanol blends ranging from pure gasoline up to 85% ethanol. By mid-2006, there were approximately 6 million ethanol compatible vehicles on U.S. roads.[137]
143
+
144
+ Global geothermal capacity in 2017 was 12.9 GW.[15]
145
+
146
+ Geothermal power is cost effective, reliable, sustainable, and environmentally friendly,[138] but has historically been limited to areas near tectonic plate boundaries. Recent technological advances have expanded the range and size of viable resources, especially for applications such as home heating, opening a potential for widespread exploitation. Geothermal wells release greenhouse gases trapped deep within the earth, but these emissions are usually much lower per energy unit than those of fossil fuels. As a result, geothermal power has the potential to help mitigate global warming if widely deployed in place of fossil fuels.
147
+
148
+ In 2017, the United States led the world in geothermal electricity production.[15] The largest group of geothermal power plants in the world is located at The Geysers, a geothermal field in California.[139] The Philippines follows the US as the second highest producer of geothermal power in the world, with 1.9 GW of capacity online.[15]
149
+
150
+ Renewable energy technology has sometimes been seen as a costly luxury item by critics, affordable only in the affluent developed world. This erroneous view has persisted for many years; however, between 2016 and 2017, investments in renewable energy were higher in developing countries than in developed countries, with China leading global investment with a record 126.6 billion dollars. Many Latin American and African countries increased their investments significantly as well.[140]
151
+ Renewable energy can be particularly suitable for developing countries. In rural and remote areas, transmission and distribution of energy generated from fossil fuels can be difficult and expensive. Producing renewable energy locally can offer a viable alternative.[141]
152
+
153
+ Technology advances are opening up a huge new market for solar power: the approximately 1.3 billion people around the world who don't have access to grid electricity. Even though they are typically very poor, these people have to pay far more for lighting than people in rich countries because they use inefficient kerosene lamps. Solar power costs half as much as lighting with kerosene.[142] As of 2010, an estimated 3 million households get power from small solar PV systems.[143] Kenya is the world leader in the number of solar power systems installed per capita. More than 30,000 very small solar panels, each producing 12 to 30 watts, are sold in Kenya annually.[144] Some Small Island Developing States (SIDS) are also turning to solar power to reduce their costs and increase their sustainability.
154
+
155
+ Micro-hydro configured into mini-grids also provide power. Over 44 million households use biogas made in household-scale digesters for lighting and/or cooking, and more than 166 million households rely on a new generation of more-efficient biomass cookstoves.[31] Clean liquid fuel sourced from renewable feedstocks are used for cooking and lighting in energy-poor areas of the developing world. Alcohol fuels (ethanol and methanol) can be produced sustainably from non-food sugary, starchy, and cellulostic feedstocks. Project Gaia, Inc. and CleanStar Mozambique are implementing clean cooking programs with liquid ethanol stoves in Ethiopia, Kenya, Nigeria and Mozambique.[145]
156
+
157
+ Renewable energy projects in many developing countries have demonstrated that renewable energy can directly contribute to poverty reduction by providing the energy needed for creating businesses and employment. Renewable energy technologies can also make indirect contributions to alleviating poverty by providing energy for cooking, space heating, and lighting. Renewable energy can also contribute to education, by providing electricity to schools.[146]
158
+
159
+ Policies to support renewable energy have been vital to its expansion. Where Europe dominated in establishing energy policy in the early 2000s, most countries around the world now have some form of energy policy.[147]
160
+
161
+ The International Renewable Energy Agency (IRENA) is an intergovernmental organization for promoting the adoption of renewable energy worldwide. It aims to provide concrete policy advice and to facilitate capacity building and technology transfer. IRENA was formed in 2009, when 75 countries signed its charter.[149] As of April 2019, IRENA has 160 member states.[150] The then United Nations Secretary-General Ban Ki-moon said that renewable energy has the ability to lift the poorest nations to new levels of prosperity,[32] and in September 2011 he launched the UN Sustainable Energy for All initiative to improve energy access, efficiency and the deployment of renewable energy.[151]
162
+
163
+ The 2015 Paris Agreement on climate change motivated many countries to develop or improve renewable energy policies.[12] By 2017, a total of 121 countries had adopted some form of renewable energy policy,[147] and national targets existed in 176 countries.[12] In addition, there is a wide range of policies at the state/provincial and local levels.[81] Some public utilities help plan or install residential energy upgrades. Under President Barack Obama, United States policy encouraged the uptake of renewable energy in line with commitments to the Paris Agreement. Even though the Trump administration has abandoned these goals, renewable investment is still on the rise.[152]
164
+
165
+ Many national, state, and local governments have created green banks. A green bank is a quasi-public financial institution that uses public capital to leverage private investment in clean energy technologies.[153] Green banks use a variety of financial tools to bridge market gaps that hinder the deployment of clean energy. The US military has also focused on the use of renewable fuels for military vehicles. Unlike fossil fuels, renewable fuels can be produced in any country, creating a strategic advantage. The US military has committed itself to sourcing 50% of its energy consumption from alternative sources.[154]
166
+
167
+ The drive to use 100% renewable energy, for electricity, transport, or even total primary energy supply globally, has been motivated by global warming and other ecological as well as economic concerns. The Intergovernmental Panel on Climate Change has said that there are few fundamental technological limits to integrating a portfolio of renewable energy technologies to meet most of total global energy demand. Renewable energy use has grown much faster than even advocates anticipated.[155] At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of energy supply. Also, Professors S. Pacala and Robert H. Socolow have developed a series of "stabilization wedges" that can allow us to maintain our quality of life while avoiding catastrophic climate change, and "renewable energy sources," in aggregate, constitute the largest number of their "wedges".[156]
168
+
169
+ Using 100% renewable energy was first suggested in a Science paper published in 1975 by Danish physicist Bent Sørensen.[157] It was followed by several other proposals, until the first detailed analysis of scenarios with very high shares of renewables was published in 1998. These were followed by the first detailed 100% scenarios. In 2006, Czisch published a PhD thesis showing that in a 100% renewable scenario energy supply could match demand in every hour of the year in Europe and North Africa. In the same year, Danish energy professor Henrik Lund published a first paper[158] addressing the optimal combination of renewables, which was followed by several other papers on the transition to 100% renewable energy in Denmark. Lund has since published several more papers on 100% renewable energy. After 2009, publications began to rise steeply, covering 100% scenarios for countries in Europe, America, Australia and other parts of the world.[159]
170
+
171
+ In 2011 Mark Z. Jacobson, professor of civil and environmental engineering at Stanford University, and Mark Delucchi published a study on 100% renewable global energy supply in the journal Energy Policy. They found producing all new energy with wind power, solar power, and hydropower by 2030 is feasible and existing energy supply arrangements could be replaced by 2050. Barriers to implementing the renewable energy plan are seen to be "primarily social and political, not technological or economic".[160] They also found that energy costs with a wind, solar, water system should be similar to today's energy costs.[161]
172
+
173
+ Similarly, in the United States, the independent National Research Council has noted that "sufficient domestic renewable resources exist to allow renewable electricity to play a significant role in future electricity generation and thus help confront issues related to climate change, energy security, and the escalation of energy costs … Renewable energy is an attractive option because renewable resources available in the United States, taken collectively, can supply significantly greater amounts of electricity than the total current or projected domestic demand."[162]
174
+
175
+ The most significant barriers to the widespread implementation of large-scale renewable energy and low carbon energy strategies are primarily political and not technological.[163][164] According to the 2013 Post Carbon Pathways report, which reviewed many international studies, the key roadblocks are: climate change denial, the fossil fuels lobby, political inaction, unsustainable energy consumption, outdated energy infrastructure, and financial constraints.[165]
176
+
177
+ According to the World Bank, the "below 2°C" climate scenario requires 3 billion tonnes of metals and minerals by 2050. The supply of mined resources such as zinc, molybdenum, silver, nickel and copper must increase by up to 500%.[166] A 2018 analysis estimated the required increases in the stock of metals needed by various sectors to range from 1,000% (wind power) to 87,000% (personal vehicle batteries).[167]
178
+
179
+ Other renewable energy technologies are still under development, and include cellulosic ethanol, hot-dry-rock geothermal power, and marine energy.[168] These technologies are not yet widely demonstrated or have limited commercialization. Many are on the horizon and may have potential comparable to other renewable energy technologies, but still depend on attracting sufficient attention and research, development and demonstration (RD&D) funding.[168]
180
+
181
+ There are numerous organizations within the academic, federal, and commercial sectors conducting large scale advanced research in the field of renewable energy. This research spans several areas of focus across the renewable energy spectrum. Most of the research is targeted at improving efficiency and increasing overall energy yields.[169]
182
+ Multiple federally supported research organizations have focused on renewable energy in recent years. Two of the most prominent of these labs are Sandia National Laboratories and the National Renewable Energy Laboratory (NREL), both of which are funded by the United States Department of Energy and supported by various corporate partners.[170] Sandia has a total budget of $2.4 billion[171] while NREL has a budget of $375 million.[172]
183
+
184
+ Collection of static electricity charges from water droplets on metal surfaces is an experimental technology that would be especially useful in low-income countries with relative humidity over 60%.[203]
185
+
186
+ Renewable electricity production from sources such as wind power and solar power is intermittent, which results in reduced capacity factors and requires either energy storage of capacity equal to its total output, or base-load power sources based on fossil fuels or nuclear power.
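+
+ To make the capacity-factor point concrete, here is a minimal sketch in Python; the plant figures are illustrative assumptions, not data from this article:
+
+     # Capacity factor = energy actually delivered / energy at continuous nameplate power.
+     HOURS_PER_YEAR = 8760
+
+     def capacity_factor(energy_delivered_mwh, nameplate_mw, hours=HOURS_PER_YEAR):
+         return energy_delivered_mwh / (nameplate_mw * hours)
+
+     # Assumed figures: a 100 MW wind farm delivering 260,000 MWh in a year,
+     # versus an equally rated base-load plant delivering 790,000 MWh.
+     print(f"wind:      {capacity_factor(260_000, 100):.0%}")   # ~30%
+     print(f"base load: {capacity_factor(790_000, 100):.0%}")   # ~90%
+
+ Intermittent sources thus deliver a smaller fraction of their nameplate energy, which is the gap that storage or dispatchable backup has to make up.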
187
+
188
+ Since the power density per land area of renewable energy sources is at best three orders of magnitude smaller than that of fossil or nuclear power,[204] renewable power plants tend to occupy thousands of hectares, causing environmental concerns and opposition from local residents, especially in densely populated countries. Solar power plants compete with arable land and nature reserves,[205] while on-shore wind farms face opposition due to aesthetic concerns and noise, which impacts both humans and wildlife.[206][207][208][209] In the United States, the Massachusetts Cape Wind project was delayed for years partly because of aesthetic concerns. However, residents in other areas have been more positive. According to a town councilor, the overwhelming majority of locals believe that the Ardrossan Wind Farm in Scotland has enhanced the area.[210] These concerns, when directed against renewable energy, are sometimes described as the "not in my back yard" (NIMBY) attitude.
189
+
190
+ A recent[when?] UK Government document states that "projects are generally more likely to succeed if they have broad public support and the consent of local communities. This means giving communities both a say and a stake".[211] In countries such as Germany and Denmark many renewable projects are owned by communities, particularly through cooperative structures, and contribute significantly to overall levels of renewable energy deployment.[212][213]
191
+
192
+ The market for renewable energy technologies has continued to grow. Climate change concerns and increases in green jobs, coupled with high oil prices, peak oil, oil wars, oil spills, the promotion of electric vehicles and renewable electricity, nuclear disasters and increasing government support, are driving increasing renewable energy legislation, incentives and commercialization.[18] New government spending, regulation and policies helped the industry weather the 2009 economic crisis better than many other sectors.[29]
193
+
194
+ While renewables have been very successful in their ever-growing contribution to electrical power, no countries dominated by fossil fuels have a plan to stop burning them and get that power from renewables. Only Scotland and Ontario have stopped burning coal, largely due to good natural gas supplies. In the area of transportation, fossil fuels are even more entrenched and solutions harder to find.[214] It is unclear whether the failures lie with policy or with renewable energy itself, but twenty years after the Kyoto Protocol fossil fuels are still our primary energy source and consumption continues to grow.[215]
195
+
196
+ The International Energy Agency has stated that deployment of renewable technologies usually increases the diversity of electricity sources and, through local generation, contributes to the flexibility of the system and its resistance to central shocks.[216]
197
+
198
+ From around 2010 onwards, there was increasing discussion about the geopolitical impact of the growing use of renewable energy.[217] It was argued that former fossil fuels exporters would experience a weakening of their position in international affairs, while countries with abundant sunshine, wind, hydropower, or geothermal resources would be strengthened.[218] Also countries rich in critical materials for renewable energy technologies were expected to rise in importance in international affairs.[219]
199
+
200
+ The GeGaLo index of geopolitical gains and losses assesses how the geopolitical position of 156 countries may change if the world fully transitions to renewable energy resources. Former fossil fuel exporters are expected to lose power, while the positions of former fossil fuel importers and countries rich in renewable energy resources are expected to strengthen.[220]
201
+
202
+ The ability of biomass and biofuels to contribute to a reduction in CO2 emissions is limited because both biomass and biofuels emit large amounts of air pollution when burned and in some cases compete with food supply. Furthermore, biomass and biofuels consume large amounts of water.[221] Other renewable sources such as wind power, photovoltaics, and hydroelectricity have the advantage of being able to conserve water, lower pollution and reduce CO2 emissions.
203
+ The installations used to produce wind, solar and hydro power are an increasing threat to key conservation areas, with facilities built in areas set aside for nature conservation and other environmentally sensitive areas. They are often much larger than fossil fuel power plants, needing areas of land up to 10 times greater than coal or gas to produce equivalent energy amounts.[222] More than 2,000 renewable energy facilities have been built, and more are under construction, in areas of environmental importance, threatening the habitats of plant and animal species across the globe. The authors emphasized that their work should not be interpreted as anti-renewables, because renewable energy is crucial for reducing carbon emissions; the key is ensuring that renewable energy facilities are built in places where they do not damage biodiversity.[223]
204
+
205
+ Renewable energy devices depend on non-renewable resources such as mined metals and use vast amounts of land due to their small surface power density. Manufacturing of photovoltaic panels, wind turbines and batteries requires significant amounts of rare-earth elements[224] and increases mining operations, which have social and environmental impact.[225] Due to co-occurrence of rare-earth and radioactive elements (thorium, uranium and radium), rare-earth mining results in production of low-level radioactive waste.[226]
206
+
207
+ Solar panels change the albedo of the surface, which increases their contribution to global warming.[227]
208
+
209
+ Burbo, NW England
210
+
211
+ Sunrise at the Fenton Wind Farm in Minnesota, US
212
+
213
+ The CSP-station Andasol in Andalusia, Spain
214
+
215
+ Ivanpah solar plant in the Mojave Desert, California, United States
216
+
217
+ Three Gorges Dam and Gezhouba Dam, China
218
+
219
+ Shop selling PV panels in Ouagadougou, Burkina Faso
220
+
221
+ Stump harvesting increases recovery of biomass from forests
222
+
223
+ A small, roof-top mounted PV system in Bonn, Germany
224
+
225
+ The community-owned Westmill Solar Park in South East England
226
+
227
+ Komekurayama photovoltaic power station in Kofu, Japan
228
+
229
+ Krafla, a geothermal power station in Iceland
en/1753.html.txt ADDED
@@ -0,0 +1,89 @@
1
+
2
+
3
+
4
+
5
+ A fossil fuel is a fuel formed by natural processes, such as anaerobic decomposition of buried dead organisms, containing organic molecules originating in ancient photosynthesis[1] that release energy in combustion.[2]
6
+ Such organisms and their resulting fossil fuels typically have an age of millions of years, and sometimes more than 650 million years.[3]
7
+ Fossil fuels contain high percentages of carbon and include petroleum, coal, and natural gas.[4] Peat is also sometimes considered a fossil fuel.[5]
8
+ Commonly used derivatives of fossil fuels include kerosene and propane.
9
+ Fossil fuels range from volatile materials with low carbon-to-hydrogen ratios (like methane), to liquids (like petroleum), to nonvolatile materials composed of almost pure carbon, like anthracite coal.
10
+ Methane can be found in hydrocarbon fields alone, associated with oil, or in the form of methane clathrates.
11
+
12
+ As of 2018, the world's main primary energy sources consisted of petroleum (34%), coal (27%), and natural gas (24%), amounting to an 85% share for fossil fuels in primary energy consumption in the world.
13
+ Non-fossil sources included nuclear (4.4%), hydroelectric (6.8%), and other renewables (4.0%, including geothermal, solar, tidal, wind, wood, and waste).[6]
14
+ The share of renewables (including traditional biomass) in the world's total final energy consumption was 18% in 2018.[7] Compared with 2017, world energy consumption grew at a rate of 2.9%, almost double its 10-year average of 1.5% per year, and the fastest since 2010.[8]
15
+
16
+ Although fossil fuels are continually formed by natural processes, they are generally classified as non-renewable resources because they take millions of years to form and known viable reserves are being depleted much faster than new ones are generated.[9][10]
17
+
18
+ Most air pollution deaths are due to fossil fuel combustion products; the resulting damage is estimated to cost over 3% of global GDP,[11] and a fossil fuel phase-out would save 3.6 million lives each year.[12]
19
+
20
+ The use of fossil fuels raises serious environmental concerns.
21
+ The burning of fossil fuels produces around 35 billion tonnes (35 gigatonnes) of carbon dioxide (CO2) per year.[13]
22
+ It is estimated that natural processes can only absorb a small part of that amount, so there is a net increase of many billion tonnes of atmospheric carbon dioxide per year.[14]
23
+ CO2 is a greenhouse gas that increases radiative forcing and contributes to global warming and ocean acidification.
24
+ A global movement towards the generation of low-carbon renewable energy is underway to help reduce global greenhouse-gas emissions.
25
+
26
+ The theory that fossil fuels formed from the fossilized remains of dead plants by exposure to heat and pressure in the Earth's crust over millions of years was first introduced by Andreas Libavius "in his 1597 Alchemia [Alchymia]" and later by Mikhail Lomonosov "as early as 1757 and certainly by 1763".[16] The first use of the term "fossil fuel" occurs in the work of the German chemist Caspar Neumann, in English translation in 1759.[17] The Oxford English Dictionary notes that in the phrase "fossil fuel" the adjective "fossil" means "[o]btained by digging; found buried in the earth", which dates to at least 1652,[18] before the English noun "fossil" came to refer primarily to long-dead organisms in the early 18th century.[19]
27
+
28
+ Aquatic phytoplankton and zooplankton that died and sedimented in large quantities under anoxic conditions millions of years ago began forming petroleum and natural gas as a result of anaerobic decomposition. Over geological time this organic matter, mixed with mud, became buried under further heavy layers of inorganic sediment. The resulting high temperature and pressure caused the organic matter to chemically alter, first into a waxy material known as kerogen, which is found in oil shales, and then with more heat into liquid and gaseous hydrocarbons in a process known as catagenesis. Despite these heat-driven transformations (which increase the energy density compared to typical organic matter by removal of oxygen atoms),[2] the energy released in combustion is still photosynthetic in origin.[1]
29
+
30
+ Terrestrial plants, on the other hand, tended to form coal and methane. Many of the coal fields date to the Carboniferous period of Earth's history. Terrestrial plants also form type III kerogen, a source of natural gas.
31
+
32
+ There is a wide range of organic compounds in any given fuel. The specific mixture of hydrocarbons gives a fuel its characteristic properties, such as density, viscosity, boiling point, melting point, etc. Some fuels like natural gas, for instance, contain only very low boiling, gaseous components. Others such as gasoline or diesel contain much higher boiling components.
33
+
34
+ Fossil fuels are of great importance because they can be burned (oxidized to carbon dioxide and water), producing significant amounts of energy per unit mass. The use of coal as a fuel predates recorded history. Coal was used to run furnaces for the smelting of metal ore. While semi-solid hydrocarbons from seeps were also burned in ancient times,[20] they were mostly used for waterproofing and embalming.[21]
35
+
36
+ Commercial exploitation of petroleum began in the 19th century, largely to replace oils from animal sources (notably whale oil) for use in oil lamps.[22]
37
+
38
+ Natural gas, once flared-off as an unneeded byproduct of petroleum production, is now considered a very valuable resource.[23] Natural gas deposits are also the main source of helium.
39
+
40
+ Heavy crude oil, which is much more viscous than conventional crude oil, and oil sands, where bitumen is found mixed with sand and clay, began to become more important as sources of fossil fuel in the early 2000s.[24] Oil shale and similar materials are sedimentary rocks containing kerogen, a complex mixture of high-molecular weight organic compounds, which yield synthetic crude oil when heated (pyrolyzed). With additional processing, they can be employed in lieu of other established fossil fuels. More recently, there has been disinvestment from exploitation of such resources due to their high carbon cost relative to more easily processed reserves.[25]
41
+
42
+ Prior to the latter half of the 18th century, windmills and watermills provided the energy needed for industry such as milling flour, sawing wood or pumping water, while burning wood or peat provided domestic heat. The wide-scale use of fossil fuels, coal at first and petroleum later, in steam engines enabled the Industrial Revolution. At the same time, gas lights using natural gas or coal gas were coming into wide use. The invention of the internal combustion engine and its use in automobiles and trucks greatly increased the demand for gasoline and diesel oil, both made from fossil fuels. Other forms of transportation, railways and aircraft, also require fossil fuels. The other major use for fossil fuels is in generating electricity and as feedstock for the petrochemical industry. Tar, a leftover of petroleum extraction, is used in construction of roads.
43
+
44
+ Levels of primary energy sources are the reserves in the ground; flows are the production of fossil fuels from these reserves. The most important primary energy sources are carbon-based fossil energy sources.
45
+
46
+ P. E. Hodgson, a senior research fellow emeritus in physics at Corpus Christi College, Oxford, expected world energy use to double every fourteen years, and the need for it to increase faster still; he insisted in 2008 that world oil production, a main source of fossil fuel, was expected to peak within ten years and thereafter fall.[26]
47
+
48
+ The principle of supply and demand holds that as hydrocarbon supplies diminish, prices will rise. Therefore, higher prices will lead to increased alternative, renewable energy supplies as previously uneconomic sources become sufficiently economical to exploit. Artificial gasolines and other renewable energy sources currently require more expensive production and processing technologies than conventional petroleum reserves, but may become economically viable in the near future.
49
+ Different alternative sources of energy include nuclear, hydroelectric, solar, wind, and geothermal.
50
+
51
+ One of the more promising energy alternatives is the use of inedible feed stocks and biomass for carbon dioxide capture as well as biofuel production. While these processes are not without problems, they are currently in practice around the world. Biodiesels are being produced by several companies and are the subject of research at several universities. Processes for converting renewable lipids into usable fuels include hydrotreating and decarboxylation.
52
+
53
+ The United States holds less than 5% of the world's population, but due to large houses and private cars, uses more than 25% of the world's supply of fossil fuels.[27] As the largest source of U.S. greenhouse gas emissions, CO2 from fossil fuel combustion accounted for 80 percent of weighted emissions in 1998.[28] Combustion of fossil fuels also produces other air pollutants, such as nitrogen oxides, sulfur dioxide, volatile organic compounds and heavy metals.
54
+
55
+ According to Environment Canada:
56
+
57
+ "The electricity sector is unique among industrial sectors in its very large contribution to emissions associated with nearly all air issues. Electricity generation produces a large share of Canadian nitrogen oxides and sulphur dioxide emissions, which contribute to smog and acid rain and the formation of fine particulate matter. It is the largest uncontrolled industrial source of mercury emissions in Canada. Fossil fuel-fired electric power plants also emit carbon dioxide, which may contribute to climate change. In addition, the sector has significant impacts on water and habitat and species. In particular, hydropower dams and transmission lines have significant effects on water and biodiversity."[29]
58
+
59
+ According to U.S. scientist Jerry Mahlman, who crafted the IPCC language used to define levels of scientific certainty, the forthcoming IPCC report would blame fossil fuels for global warming with "virtual certainty," meaning 99% sure. That was a significant jump from "likely," or 66% sure, in the group's previous report in 2001. More than 1,600 pages of research went into the new assessment.[30]
60
+
61
+ Combustion of fossil fuels generates sulfuric and nitric acids, which fall to Earth as acid rain, impacting both natural areas and the built environment. Monuments and sculptures made from marble and limestone are particularly vulnerable, as the acids dissolve calcium carbonate.
62
+
63
+ Fossil fuels also contain radioactive materials, mainly uranium and thorium, which are released into the atmosphere. In 2000, about 12,000 tonnes of thorium and 5,000 tonnes of uranium were released worldwide from burning coal.[31] It is estimated that during 1982, US coal burning released 155 times as much radioactivity into the atmosphere as the Three Mile Island accident.[32]
64
+
65
+ Burning coal also generates large amounts of bottom ash and fly ash. These materials are used in a wide variety of applications, which utilize, for example, about 40% of US production.[33]
66
+
67
+ Harvesting, processing, and distributing fossil fuels can also create environmental concerns. Coal mining methods, particularly mountaintop removal and strip mining, have negative environmental impacts, and offshore oil drilling poses a hazard to aquatic organisms. Fossil fuel wells can contribute to methane release via fugitive gas emissions. Oil refineries also have negative environmental impacts, including air and water pollution. Transportation of coal requires the use of diesel-powered locomotives, while crude oil is typically transported by tanker ships, requiring the combustion of additional fossil fuels.
68
+
69
+ Environmental regulation uses a variety of approaches to limit these emissions, such as command-and-control (which mandates the amount of pollution or the technology used), economic incentives, or voluntary programs.
70
+
71
+ An example of such regulation in the US: the "EPA is implementing policies to reduce airborne mercury emissions. Under regulations issued in 2005, coal-fired power plants will need to reduce their emissions by 70 percent by 2018."[34]
72
+
73
+ In economic terms, pollution from fossil fuels is regarded as a negative externality. Taxation is considered as one way to make societal costs explicit, in order to 'internalize' the cost of pollution. This aims to make fossil fuels more expensive, thereby reducing their use and the amount of associated pollution, along with raising the funds necessary to counteract these effects.[citation needed]
74
+
75
+ According to Rodman D. Griffin, "The burning of coal and oil have saved inestimable amounts of time and labor while substantially raising living standards around the world".[35] Although the use of fossil fuels may seem beneficial to our lives, it contributes to global warming and is said to be dangerous for the future.[35]
76
+
77
+ Moreover, this environmental pollution impacts humans because particulates and other air pollution from fossil fuel combustion cause illness and death when inhaled by people. These health effects include premature death, acute respiratory illness, aggravated asthma, chronic bronchitis and decreased lung function. The poor, undernourished, very young and very old, and people with preexisting respiratory disease and other ill health, are more at risk.[36]
78
+
79
+ In 2014, the global energy industry revenue was about US$8 trillion,[37] with about 84% fossil fuel, 4% nuclear, and 12% renewable (including hydroelectric).[38]
80
+
81
+ In 2014, there were 1,469 oil and gas firms listed on stock exchanges around the world, with a combined market capitalization of US$4.65 trillion.[39] In 2019, Saudi Aramco was listed and it touched a US$2 trillion valuation on its second day of trading,[40] after the world's largest initial public offering.[41]
82
+
83
+ Air pollution from fossil fuels in 2018 has been estimated to cost US$2.9 trillion, or 3.3% of global GDP.[11]
84
+
85
+ The International Energy Agency estimated 2017 global government fossil fuel subsidies to have been $300 billion.[42]
86
+
87
+ A 2015 report studied 20 fossil fuel companies and found that, while highly profitable, the hidden economic cost to society was also large.[43][44] The report spans the period 2008–2012 and notes that: "For all companies and all years, the economic cost to society of their CO2 emissions was greater than their after‐tax profit, with the single exception of ExxonMobil in 2008."[43]:4 Pure coal companies fare even worse: "the economic cost to society exceeds total revenue in all years, with this cost varying between nearly $2 and nearly $9 per $1 of revenue."[43]:5 In this case, total revenue includes "employment, taxes, supply purchases, and indirect employment."[43]:4
88
+
89
+ Fossil fuel prices are generally below their actual costs, or their "efficient prices," when economic externalities, such as the costs of air pollution and global climate damage, are taken into account. Fossil fuels were subsidized by $4.7 trillion in 2015, equivalent to 6.3% of 2015 global GDP, and subsidies were estimated to grow to $5.2 trillion in 2017, equivalent to 6.5% of global GDP. The largest five subsidizers in 2015 were China with $1.4 trillion in fossil fuel subsidies, the United States with $649 billion, Russia with $551 billion, the European Union with $289 billion, and India with $209 billion. Had there been no subsidies for fossil fuels, global carbon emissions would have been lowered by an estimated 28% in 2015, air-pollution-related deaths reduced by 46%, and government revenue increased by $2.8 trillion, or 3.8% of GDP.[45]
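+
+ As a quick arithmetic check of these figures, a sketch in Python using only the numbers quoted above:
+
+     # Subsidies quoted above: $4.7 trillion in 2015 = 6.3% of global GDP.
+     subsidies_2015_tn = 4.7
+     share_of_gdp = 0.063
+     print(f"Implied 2015 world GDP: ~${subsidies_2015_tn / share_of_gdp:.0f} trillion")  # ~75
+
+     # Largest five subsidizers in 2015, in $ billions, from the text above.
+     top5 = {"China": 1400, "United States": 649, "Russia": 551,
+             "European Union": 289, "India": 209}
+     total_bn = sum(top5.values())  # 3,098
+     print(f"Top-5 total: ${total_bn / 1000:.2f} trillion "
+           f"= {total_bn / 1000 / subsidies_2015_tn:.0%} of the global figure")  # ~66%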
en/1754.html.txt ADDED
@@ -0,0 +1,428 @@
1
+
2
+
3
+
4
+
5
+ In physics, energy is the quantitative property that must be transferred to an object in order to perform work on, or to heat, the object.[note 1] Energy is a conserved quantity; the law of conservation of energy states that energy can be converted in form, but not created or destroyed. The SI unit of energy is the joule, which is the energy transferred to an object by the work of moving it a distance of 1 metre against a force of 1 newton.
6
+
7
+ Common forms of energy include the kinetic energy of a moving object, the potential energy stored by an object's position in a force field (gravitational, electric or magnetic), the elastic energy stored by stretching solid objects, the chemical energy released when a fuel burns, the radiant energy carried by light, and the thermal energy due to an object's temperature.
8
+
9
+ Mass and energy are closely related. Due to mass–energy equivalence, any object that has mass when stationary (called rest mass) also has an equivalent amount of energy whose form is called rest energy, and any additional energy (of any form) acquired by the object above that rest energy will increase the object's total mass just as it increases its total energy. For example, after heating an object, its increase in energy could be measured as a small increase in mass, with a sensitive enough scale.
10
+
11
+ Living organisms require energy to stay alive, such as the energy humans get from food. Human civilization requires energy to function, which it gets from energy resources such as fossil fuels, nuclear fuel, or renewable energy. The processes of Earth's climate and ecosystem are driven by the radiant energy Earth receives from the sun and the geothermal energy contained within the earth.
12
+
13
+ The total energy of a system can be subdivided and classified into potential energy, kinetic energy, or combinations of the two in various ways. Kinetic energy is determined by the movement of an object – or the composite motion of the components of an object – and potential energy reflects the potential of an object to have motion, and generally is a function of the position of an object within a field or may be stored in the field itself.
14
+
15
+ While these two categories are sufficient to describe all forms of energy, it is often convenient to refer to particular combinations of potential and kinetic energy as forms in their own right. For example, macroscopic mechanical energy is the sum of translational and rotational kinetic and potential energy in a system, neglecting the kinetic energy due to temperature, and nuclear energy combines the potentials from the nuclear force and the weak force, among others.[citation needed]
16
+
17
+
18
+
19
+ The word energy derives from the Ancient Greek: ἐνέργεια, romanized: energeia, lit. 'activity, operation',[1] which possibly appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure.
20
+
21
+ In the late 17th century, Gottfried Leibniz proposed the idea of the Latin: vis viva, or living force, which he defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the random motion of the constituent parts of matter, although it would be more than a century until this was generally accepted. The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two.
22
+
23
+ In 1807, Thomas Young was possibly the first to use the term "energy" instead of vis viva, in its modern sense.[2] Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy". The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat.
24
+
25
+ These developments led to the theory of conservation of energy, formalized largely by William Thomson (Lord Kelvin) as the field of thermodynamics. Thermodynamics aided the rapid development of explanations of chemical processes by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time.[3] Thus, since 1918, theorists have understood that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time.
26
+
27
+ In 1843, Joule independently discovered the mechanical equivalent in a series of experiments. The most famous of them used the "Joule apparatus": a descending weight, attached to a string, caused rotation of a paddle immersed in water, practically insulated from heat transfer. It showed that the gravitational potential energy lost by the weight in descending was equal to the internal energy gained by the water through friction with the paddle.
28
+
29
+ In the International System of Units (SI), the unit of energy is the joule, named after James Prescott Joule. It is a derived unit. It is equal to the energy expended (or work done) in applying a force of one newton through a distance of one metre. However, energy is also expressed in many other units that are not part of the SI, such as ergs, calories, British thermal units, kilowatt-hours and kilocalories, which require a conversion factor when expressed in SI units.
30
+
31
+ The SI unit of energy rate (energy per unit time) is the watt, which is a joule per second. Thus, one joule is one watt-second, and 3600 joules equal one watt-hour. The CGS energy unit is the erg and the imperial and US customary unit is the foot pound. Other energy units such as the electronvolt, food calorie or thermodynamic kcal (based on the temperature change of water in a heating process), and BTU are used in specific areas of science and commerce.
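+
+ A small sketch of the conversions implied above, anchored on the joule (the calorie and BTU values are standard conversion constants):
+
+     J_PER_WATT_HOUR = 3600.0      # 1 W sustained for 3600 s
+     J_PER_KWH       = 3.6e6
+     J_PER_KCAL      = 4184.0      # thermochemical kilocalorie
+     J_PER_BTU       = 1055.06     # International Table BTU
+
+     def kwh_to_joules(kwh):
+         return kwh * J_PER_KWH
+
+     print(kwh_to_joules(1.0))                 # 3600000.0 J in one kilowatt-hour
+     print(kwh_to_joules(1.0) / J_PER_KCAL)    # ~860 kcal in one kilowatt-hour
+     print(kwh_to_joules(1.0) / J_PER_BTU)     # ~3412 BTU in one kilowatt-hour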
32
+
33
+ In classical mechanics, energy is a conceptually and mathematically useful property, as it is a conserved quantity. Several formulations of mechanics have been developed using energy as a core concept.
34
+
35
+ Work, a function of energy, is force times distance:
+
+ W = ∫C F · ds
+
+ This says that the work (W) is equal to the line integral of the force F along a path C; for details see the mechanical work article. Work and thus energy is frame dependent. For example, consider a ball being hit by a bat. In the center-of-mass reference frame, the bat does no work on the ball. But, in the reference frame of the person swinging the bat, considerable work is done on the ball.
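+
+ A numerical sketch of that line integral, approximating W = ∫C F · ds with small straight steps (pure Python; the force field and path are illustrative assumptions):
+
+     def work(force, path, n=100_000):
+         """force maps (x, y) -> (Fx, Fy); path maps t in [0, 1] -> (x, y)."""
+         total, dt = 0.0, 1.0 / n
+         for i in range(n):
+             x0, y0 = path(i * dt)
+             x1, y1 = path((i + 1) * dt)
+             fx, fy = force((x0 + x1) / 2, (y0 + y1) / 2)   # force at segment midpoint
+             total += fx * (x1 - x0) + fy * (y1 - y0)       # F · Δs
+         return total
+
+     # Constant 1 N force along +x over a straight 1 m path along x: W = 1 J.
+     print(work(lambda x, y: (1.0, 0.0), lambda t: (t, 0.0)))  # ~1.0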
47
+
48
+ The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems. These classical equations have remarkably direct analogs in nonrelativistic quantum mechanics.[4]
49
+
50
+ Another energy-related concept is called the Lagrangian, after Joseph-Louis Lagrange. This formalism is as fundamental as the Hamiltonian, and both can be used to derive the equations of motion or be derived from them. It was invented in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction).
51
+
52
+ Noether's theorem (1918) states that any differentiable symmetry of the action of a physical system has a corresponding conservation law. Noether's theorem has become a fundamental tool of modern theoretical physics and the calculus of variations. A generalisation of the seminal formulations on constants of motion in Lagrangian and Hamiltonian mechanics (1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian; for example, dissipative systems with continuous symmetries need not have a corresponding conservation law.
53
+
54
+ In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structure, it is invariably accompanied by an increase or decrease in the energy of the substances involved. Some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light; thus the products of a reaction may have more or less energy than the reactants. A reaction is said to be exothermic or exergonic if the final state is lower on the energy scale than the initial state; in the case of endothermic reactions the situation is the reverse. Chemical reactions are almost invariably not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by the Boltzmann population factor e−E/kT, that is, the probability that a molecule has energy greater than or equal to E at the given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction can be provided in the form of thermal energy.
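+
+ To see the exponential temperature dependence numerically, a sketch evaluating the factor e−E/kT for an assumed activation energy (roughly 50 kJ/mol expressed per molecule, an illustrative value):
+
+     import math
+
+     K_B = 1.380649e-23   # Boltzmann constant, J/K
+
+     def boltzmann_factor(activation_energy_j, temperature_k):
+         """Probability factor e^(-E/kT) of surmounting an energy barrier."""
+         return math.exp(-activation_energy_j / (K_B * temperature_k))
+
+     E_a = 8e-20   # assumed activation energy per molecule, J
+     for T in (300, 310, 350):
+         print(T, boltzmann_factor(E_a, T))
+
+ A rise of only 10 K roughly doubles the factor here, which is the familiar rule of thumb for reaction rates.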
55
+
56
+ In biology, energy is an attribute of all biological systems from the biosphere to the smallest living organism. Within an organism it is responsible for growth and development of a biological cell or an organelle of a biological organism. Energy used in respiration is mostly stored in molecular oxygen [5] and can be unlocked by reactions with molecules of substances such as carbohydrates (including sugars), lipids, and proteins stored by cells. In human terms, the human equivalent (H-e) (Human energy conversion) indicates, for a given amount of energy expenditure, the relative quantity of energy needed for human metabolism, assuming an average human energy expenditure of 12,500 kJ per day and a basal metabolic rate of 80 watts. For example, if our bodies run (on average) at 80 watts, then a light bulb running at 100 watts is running at 1.25 human equivalents (100 ÷ 80) i.e. 1.25 H-e. For a difficult task of only a few seconds' duration, a person can put out thousands of watts, many times the 746 watts in one official horsepower. For tasks lasting a few minutes, a fit human can generate perhaps 1,000 watts. For an activity that must be sustained for an hour, output drops to around 300; for an activity kept up all day, 150 watts is about the maximum.[6] The human equivalent assists understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a "feel" for the use of a given amount of energy.[7]
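+
+ The human-equivalent figures above are simple ratios against the 80 W average; a sketch:
+
+     BASAL_HUMAN_WATTS = 80   # average human power output cited above
+
+     def human_equivalent(power_watts):
+         """Express a power as a multiple of an average human's 80 W output."""
+         return power_watts / BASAL_HUMAN_WATTS
+
+     print(human_equivalent(100))   # 100 W light bulb -> 1.25 H-e, as in the text
+     print(human_equivalent(746))   # one horsepower   -> ~9.3 H-e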
57
+
58
+ Sunlight's radiant energy is also captured by plants as chemical potential energy in photosynthesis, when carbon dioxide and water (two low-energy compounds) are converted into carbohydrates, lipids, and proteins and high-energy compounds like oxygen [5] and ATP. Carbohydrates, lipids, and proteins can release the energy of oxygen, which is utilized by living organisms as an electron acceptor. Release of the energy stored during photosynthesis as heat or light may be triggered suddenly by a spark, in a forest fire, or it may be made available more slowly for animal or human metabolism, when organic molecules are ingested, and catabolism is triggered by enzyme action.
59
+
60
+ Any living organism relies on an external source of energy – radiant energy from the Sun in the case of green plants, chemical energy in some form in the case of animals – to be able to grow and reproduce. The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as a combination of oxygen and food molecules, the latter mostly carbohydrates and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples. The food molecules are oxidised to carbon dioxide and water in the mitochondria
61
+
62
+ and some of the energy is used to convert ADP into ATP.
63
+
64
+ The rest of the chemical energy in O2[8] and the carbohydrate or fat is converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains is used for other metabolism when ATP reacts with OH groups and eventually splits into ADP and phosphate (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work:[note 2]
65
+
66
+ It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical or radiant energy), and it is true that most real machines manage higher efficiencies. In growing organisms the energy that is converted to heat serves a vital purpose, as it allows the organism tissue to be highly ordered with regard to the molecules it is built from. The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe ("the surroundings").[note 3] Simpler organisms can achieve higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren. The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology: to take just the first step in the food chain, of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants,[9] i.e. reconverted into carbon dioxide and heat.
67
+
68
+ In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior,[10] while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations brought about by solar energy on the atmosphere of the planet Earth.
69
+
70
+ Sunlight may be stored as gravitational potential energy after it strikes the Earth, as (for example) water evaporates from oceans and is deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbines or generators to produce electricity). Sunlight also drives many weather phenomena, save those generated by volcanic events. An example of a solar-mediated weather event is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, give up some of their thermal energy suddenly to power a few days of violent air movement.
71
+
72
+ In a slower process, radioactive decay of atoms in the core of the Earth releases heat. This thermal energy drives plate tectonics and may lift mountains, via orogenesis. This slow lifting represents a kind of gravitational potential energy storage of the thermal energy, which may be later released to active kinetic energy in landslides, after a triggering event. Earthquakes also release stored elastic potential energy in rocks, a store that has been produced ultimately from the same radioactive heat sources. Thus, according to present understanding, familiar events such as landslides and earthquakes release energy that has been stored as potential energy in the Earth's gravitational field or elastic strain (mechanical potential energy) in rocks. Prior to this, they represent release of energy that has been stored in heavy atoms since the collapse of long-destroyed supernova stars created these atoms.
73
+
74
+ In cosmology and astronomy the phenomena of stars, nova, supernova, quasars and gamma-ray bursts are the universe's highest-output energy transformations of matter. All stellar phenomena (including solar activity) are driven by various kinds of energy transformations. Energy in such transformations is either from gravitational collapse of matter (usually molecular hydrogen) into various classes of astronomical objects (stars, black holes, etc.), or from nuclear fusion (of lighter elements, primarily hydrogen). The nuclear fusion of hydrogen in the Sun also releases another store of potential energy which was created at the time of the Big Bang. At that time, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store of potential energy that can be released by fusion. Such a fusion process is triggered by heat and pressure generated from gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into sunlight.
75
+
76
+
77
+
78
+ In quantum mechanics, energy is defined in terms of the energy operator
79
+ as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of a slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level) which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by Planck's relation:
80
+
81
+ E = hν
+
+ (where h is Planck's constant and ν the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons.
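+
+ A sketch evaluating E = hν for a visible-light photon (the frequency is an illustrative choice):
+
+     PLANCK_H = 6.62607015e-34   # Planck's constant, J·s
+     EV = 1.602176634e-19        # joules per electronvolt
+
+     def photon_energy(frequency_hz):
+         """Planck relation E = h * nu."""
+         return PLANCK_H * frequency_hz
+
+     nu_green = 5.6e14   # ~green light, Hz
+     print(photon_energy(nu_green))        # ~3.7e-19 J per photon
+     print(photon_energy(nu_green) / EV)   # ~2.3 eV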
110
+
111
+ When calculating kinetic energy (work to accelerate a massive body from zero speed to some finite speed) relativistically – using Lorentz transformations instead of Newtonian mechanics – Einstein discovered an unexpected by-product of these calculations to be an energy term which does not vanish at zero speed. He called it rest energy: energy which every massive body must possess even when being at rest. The amount of energy is directly proportional to the mass of the body:
112
+ E0 = mc²
+
+ where E0 is the rest energy of the body, m its rest mass, and c the speed of light in a vacuum.
115
+ For example, consider electron–positron annihilation, in which the rest energy of these two individual particles (equivalent to their rest mass) is converted to the radiant energy of the photons produced in the process. In this system the matter and antimatter (electrons and positrons) are destroyed and changed to non-matter (the photons). However, the total mass and total energy do not change during this interaction. The photons each have no rest mass but nonetheless have radiant energy which exhibits the same inertia as did the two original particles. This is a reversible process – the inverse process is called pair creation – in which the rest mass of particles is created from the radiant energy of two (or more) annihilating photons.
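+
+ A sketch of that energy bookkeeping with E0 = mc², using standard constants and assuming the pair annihilates at rest:
+
+     C   = 299_792_458.0        # speed of light, m/s
+     M_E = 9.1093837015e-31     # electron (and positron) rest mass, kg
+     EV  = 1.602176634e-19      # joules per electronvolt
+
+     rest_energy_j = M_E * C**2
+     print(rest_energy_j)                  # ~8.19e-14 J per particle
+     print(rest_energy_j / EV / 1e6)       # ~0.511 MeV
+
+     # Annihilation at rest: the combined rest energy 2·m_e·c² reappears
+     # as two photons of ~0.511 MeV each; total energy is conserved.
+     print(2 * rest_energy_j / EV / 1e6)   # ~1.022 MeV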
116
+
117
+ In general relativity, the stress–energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation.[11]
118
+
119
+ Energy and mass are manifestations of one and the same underlying physical property of a system. This property is responsible for the inertia and strength of gravitational interaction of the system ("mass manifestations"), and is also responsible for the potential ability of the system to perform work or heating ("energy manifestations"), subject to the limitations of other physical laws.
120
+
121
+ In classical physics, energy is a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy–momentum 4-vector).[11] In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of space-time (= boosts).
122
+
123
+
124
+
125
+ Energy may be transformed between different forms at various efficiencies. Items that transform between these forms are called transducers. Examples of transducers include a battery (from chemical energy to electric energy), a dam (gravitational potential energy to kinetic energy of moving water and the blades of a turbine, and ultimately to electric energy through an electric generator), and a heat engine (from heat to work).
126
+
127
+ Examples of energy transformation include generating electric energy from heat energy via a steam turbine, or lifting an object against gravity using electrical energy driving a crane motor. Lifting against gravity performs mechanical work on the object and stores gravitational potential energy in the object. If the object falls to the ground, gravity does mechanical work on the object which transforms the potential energy in the gravitational field to the kinetic energy released as heat on impact with the ground. Our Sun transforms nuclear potential energy to other forms of energy; its total mass does not decrease due to that in itself (since it still contains the same total energy even if in different forms), but its mass does decrease when the energy escapes out to its surroundings, largely as radiant energy.
128
+
129
+ There are strict limits to how efficiently heat can be converted into work in a cyclic process, e.g. in a heat engine, as described by Carnot's theorem and the second law of thermodynamics. However, some energy transformations can be quite efficient. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often determined by entropy (equal energy spread among all available degrees of freedom) considerations. In practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces.
130
+
131
+ Energy transformations in the universe over time are characterized by various kinds of potential energy that has been available since the Big Bang later being "released" (transformed to more active types of energy such as kinetic or radiant energy) when a triggering mechanism is available. Familiar examples of such processes include nuclear decay, in which energy is released that was originally "stored" in heavy isotopes (such as uranium and thorium), by nucleosynthesis, a process ultimately using the gravitational potential energy released from the gravitational collapse of supernovae, to store energy in the creation of these heavy elements before they were incorporated into the solar system and the Earth. This energy is triggered and released in nuclear fission bombs or in civil nuclear power generation. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic energy and thermal energy in a very short time. Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at maximum. At its lowest point the kinetic energy is at maximum and is equal to the decrease of potential energy. If one (unrealistically) assumes that there is no friction or other losses, the conversion of energy between these processes would be perfect, and the pendulum would continue swinging forever.
132
+
133
+ Energy is also transferred from potential energy (Ep) to kinetic energy (Ek) and then back to potential energy constantly. This is referred to as conservation of energy. In this closed system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other. This can be demonstrated by the following:
+
+ Ep(initial) + Ek(initial) = Ep(final) + Ek(final)    (4)
+
+ The equation can then be simplified further since Ep = mgh (mass times acceleration due to gravity times the height) and Ek = ½mv² (half mass times velocity squared). Then the total amount of energy can be found by adding Ep + Ek = Etotal.
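+
+ A numerical sketch of this bookkeeping for a frictionless object dropped from rest (the mass and height are illustrative), checking that Ep + Ek stays constant:
+
+     G = 9.81   # gravitational acceleration, m/s^2
+
+     def energies(mass_kg, height0_m, t_s):
+         """Potential and kinetic energy t_s seconds after release from rest."""
+         v = G * t_s                          # speed gained so far
+         h = height0_m - 0.5 * G * t_s**2     # remaining height
+         return mass_kg * G * h, 0.5 * mass_kg * v**2   # (E_p, E_k)
+
+     m, h0 = 2.0, 10.0
+     for t in (0.0, 0.5, 1.0):
+         ep, ek = energies(m, h0, t)
+         print(f"t={t:.1f} s  E_p={ep:7.2f} J  E_k={ek:6.2f} J  total={ep + ek:.2f} J")
+
+ The total stays at mgh0 = 196.2 J at every instant, as conservation requires.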
252
+
253
+ Energy gives rise to weight when it is trapped in a system with zero momentum, where it can be weighed. It is also equivalent to mass, and this mass is always associated with it. Mass is also equivalent to a certain amount of energy, and likewise always appears associated with it, as described in mass-energy equivalence. The formula E = mc², derived by Albert Einstein (1905) quantifies the relationship between rest-mass and rest-energy within the concept of special relativity. In different theoretical frameworks, similar formulas were derived by J.J. Thomson (1881), Henri Poincaré (1900), Friedrich Hasenöhrl (1904) and others (see Mass-energy equivalence#History for further information).
254
+
255
+ Part of the rest energy (equivalent to rest mass) of matter may be converted to other forms of energy (still exhibiting mass), but neither energy nor mass can be destroyed; rather, both remain constant during any process. However, since c² is extremely large relative to ordinary human scales, the conversion of an everyday amount of rest mass (for example, 1 kg) from rest energy to other forms of energy (such as kinetic energy, thermal energy, or the radiant energy carried by light and other radiation) can liberate tremendous amounts of energy (~9×10¹⁶ joules = 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons. Conversely, the mass equivalent of an everyday amount of energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to measure on a weighing scale, unless the energy loss is very large. Examples of large transformations between rest energy (of matter) and other forms of energy (e.g., kinetic energy into particles with rest mass) are found in nuclear physics and particle physics.
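+
+ Checking the 1 kg figure above with E = mc² (1 megaton of TNT is 4.184×10¹⁵ J by convention):
+
+     C = 299_792_458.0           # speed of light, m/s
+     J_PER_MEGATON_TNT = 4.184e15
+
+     rest_energy = 1.0 * C**2    # rest energy of 1 kg, joules
+     print(f"{rest_energy:.2e} J")                        # ~8.99e16 J
+     print(f"{rest_energy / J_PER_MEGATON_TNT:.1f} Mt")   # ~21.5 megatons of TNT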
286
+
287
+ Thermodynamics divides energy transformation into two kinds: reversible processes and irreversible processes. An irreversible process is one in which energy is dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states), without degradation of even more energy. A reversible process is one in which this sort of dissipation does not happen. For example, conversion of energy from one type of potential field to another, is reversible, as in the pendulum system described above. In processes where heat is generated, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered, in order to be converted with 100% efficiency into other forms of energy. In this case, the energy must partly stay as heat, and cannot be completely recovered as usable energy, except at the price of an increase in some other kind of heat-like increase in disorder in quantum states, in the universe (such as an expansion of matter, or a randomisation in a crystal).
288
+
289
+ As the universe evolves in time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or other kinds of increases in disorder). This has been referred to as the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does not change, but the fraction of energy which is available to do work through a heat engine, or be transformed to other usable forms of energy (through the use of generators attached to heat engines), grows less and less.
+
+ The fact that energy can be neither created nor destroyed is called the law of conservation of energy. In the form of the first law of thermodynamics, this states that a closed system's energy is constant unless energy is transferred in or out by work or heat, and that no energy is lost in transfer. The total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system. Whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant.[12]
+
+ While heat can always be fully converted into work in a reversible isothermal expansion of an ideal gas, for cyclic processes of practical interest in heat engines the second law of thermodynamics states that the system doing work always loses some energy as waste heat. This creates a limit to the amount of heat energy that can do work in a cyclic process, a limit called the available energy. Mechanical and other forms of energy can be transformed in the other direction into thermal energy without such limitations.[13] The total energy of a system can be calculated by adding up all forms of energy in the system.
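+
+ The standard quantitative expression of this limit is the Carnot efficiency, which bounds any cyclic heat engine operating between a hot and a cold reservoir; the reservoir temperatures below are example values, not figures from the text:
+
+ T_hot, T_cold = 800.0, 300.0     # reservoir temperatures in kelvin (examples)
+ eta_max = 1 - T_cold / T_hot     # Carnot bound on work extracted per unit heat
+ print(f"at most {eta_max:.0%} of the input heat can become work")  # ~62%
+ # The remainder must be rejected to the cold reservoir as waste heat.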
+
+ Richard Feynman said during a 1961 lecture:[14]
+
+ There is a fact, or if you wish, a law, governing all natural phenomena that are known to date. There is no known exception to this law – it is exact so far as we know. The law is called the conservation of energy. It states that there is a certain quantity, which we call energy, that does not change in manifold changes which nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says that there is a numerical quantity which does not change when something happens. It is not a description of a mechanism, or anything concrete; it is just a strange fact that we can calculate some number and when we finish watching nature go through her tricks and calculate the number again, it is the same.
+
+ Most kinds of energy (with gravitational energy being a notable exception)[15] are subject to strict local conservation laws as well. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa.[13][14]
+
+ This law is a fundamental principle of physics. As shown rigorously by Noether's theorem, the conservation of energy is a mathematical consequence of translational symmetry of time,[16] a property of most phenomena below the cosmic scale that makes them independent of their locations on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable. This is because energy is the quantity which is canonically conjugate to time. This mathematical entanglement of energy and time also results in the uncertainty principle: it is impossible to define the exact amount of energy during any definite time interval. The uncertainty principle should not be confused with energy conservation; rather, it provides mathematical limits to which energy can in principle be defined and measured.
+
+ Each of the basic forces of nature is associated with a different type of potential energy, and all types of potential energy (like all other types of energy) appear as system mass whenever present. For example, a compressed spring will be slightly more massive than before it was compressed. Likewise, whenever energy is transferred between systems by any mechanism, an associated mass is transferred with it.
+
+ In quantum mechanics, energy is expressed using the Hamiltonian operator. On any time scale, the uncertainty in the energy is given by
+
+ ΔE Δt ≥ ħ/2
+
+ which is similar in form to the Heisenberg Uncertainty Principle (but not really mathematically equivalent thereto, since H and t are not dynamically conjugate variables, neither in classical nor in quantum mechanics).
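+
+ As an illustration of this energy-time relation, the sketch below computes the minimum energy uncertainty for a given time interval; the femtosecond window is only an example value, not a figure from the text:
+
+ hbar = 1.054571817e-34           # reduced Planck constant, J·s
+ dt = 1e-15                       # example observation window: 1 femtosecond
+ dE_min = hbar / (2 * dt)         # from ΔE·Δt ≥ ħ/2
+ print(f"ΔE ≥ {dE_min:.2e} J")    # ~5.3e-20 J
+ print(f"ΔE ≥ {dE_min / 1.602176634e-19:.2f} eV")  # ~0.33 eV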
+
+ In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum and whose exchange with and between real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions). Virtual photons (which are simply the lowest quantum mechanical energy state of photons) are also responsible for the electrostatic interaction between electric charges (which results in Coulomb's law), for spontaneous radiative decay of excited atomic and nuclear states, for the Casimir force, for van der Waals forces and some other observable phenomena.
+
+ Energy transfer can be considered for the special case of systems which are closed to transfers of matter. The portion of the energy which is transferred by conservative forces over a distance is measured as the work the source system does on the receiving system. The portion of the energy which does not do work during the transfer is called heat.[note 4] Energy can be transferred between systems in a variety of ways. Examples include the transmission of electromagnetic energy via photons, physical collisions which transfer kinetic energy,[note 5] and the conductive transfer of thermal energy.
+
+ Energy is strictly conserved and is also locally conserved wherever it can be defined. In thermodynamics, for closed systems, the process of energy transfer is described by the first law:[note 6]
+
+ ΔE = W + Q        (1)
+
+ where E is the amount of energy transferred, W represents the work done on the system, and Q represents the heat flow into the system. As a simplification, the heat term, Q, is sometimes ignored, especially when the thermal efficiency of the transfer is high.
+
+ ΔE = W        (2)
+
+ This simplified equation is the one used to define the joule, for example.
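+
+ The bookkeeping in equations (1) and (2) can be sketched in a few lines of Python; the numbers below are illustrative, and the sign convention (work done on, and heat flowing into, the system count as positive) follows the text:
+
+ def energy_change(work_on_system: float, heat_into_system: float) -> float:
+     """First law for a closed system: ΔE = W + Q (energies in joules)."""
+     return work_on_system + heat_into_system
+
+ # A gas receives 150 J of compression work and loses 40 J as heat:
+ print(energy_change(150.0, -40.0))   # 110.0 J gained by the system
+
+ # Equation (2): with heat flow neglected, ΔE reduces to the work done,
+ # which is how the joule is defined (the work of 1 N acting through 1 m).
+ print(energy_change(1.0, 0.0))       # 1.0 J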
+
+ Beyond the constraints of closed systems, open systems can gain or lose energy in association with matter transfer (both of these processes are illustrated by fueling an automobile, a system which gains in energy thereby, without addition of either work or heat). Denoting this energy by E, one may write
+
+ ΔE = W + Q + E        (3)
+
+ Internal energy is the sum of all microscopic forms of energy of a system. It is the energy needed to create the system. It is related to the potential energy, e.g., molecular structure, crystal structure, and other geometric aspects, as well as the motion of the particles, in the form of kinetic energy. Thermodynamics is chiefly concerned with changes in internal energy and not its absolute value, which is impossible to determine with thermodynamics alone.[17]
+
+ The first law of thermodynamics asserts that energy (but not necessarily thermodynamic free energy) is always conserved[18] and that heat flow is a form of energy transfer. For homogeneous systems, with a well-defined temperature and pressure, a commonly used corollary of the first law is that, for a system subject only to pressure forces and heat transfer (e.g., a cylinder-full of gas) without chemical changes, the differential change in the internal energy of the system (with a gain in energy signified by a positive quantity) is given as
+
+ dU = T dS − P dV
+
+ where the first term on the right is the heat transferred into the system, expressed in terms of temperature T and entropy S (in which entropy increases and the change dS is positive when the system is heated), and the last term on the right hand side is identified as work done on the system, where pressure is P and volume V (the negative sign results since compression of the system requires work to be done on it and so the volume change, dV, is negative when work is done on the system).
+
+ This equation is highly specific, ignoring all chemical, electrical, nuclear, and gravitational forces, as well as effects such as advection of any form of energy other than heat and pV-work. The general formulation of the first law (i.e., conservation of energy) is valid even in situations in which the system is not homogeneous. For these cases the change in internal energy of a closed system is expressed in a general form by
+
+ dU = δQ + δW
+
+ where δQ is the heat supplied to the system and δW is the work applied to the system.
+
+ The energy of a mechanical harmonic oscillator (a mass on a spring) is alternately kinetic and potential energy. At two points in the oscillation cycle it is entirely kinetic, and at two points it is entirely potential. Over the whole cycle, or over many cycles, the net energy is thus equally split between kinetic and potential. This is called the equipartition principle: the total energy of a system with many degrees of freedom is split equally among all available degrees of freedom.
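+
+ The energy exchange in such an oscillator is easy to exhibit numerically. A small Python sketch using the closed-form solution (the mass, spring constant, and amplitude are arbitrary example values):
+
+ import math
+
+ m, k, A = 1.0, 4.0, 0.5                   # kg, N/m, m (illustrative values)
+ omega = math.sqrt(k / m)                  # angular frequency, rad/s
+ for i in range(5):                        # sample a few instants over one period
+     t = i * (2 * math.pi / omega) / 8
+     x = A * math.cos(omega * t)           # displacement
+     v = -A * omega * math.sin(omega * t)  # velocity
+     ke = 0.5 * m * v**2
+     pe = 0.5 * k * x**2
+     print(f"t={t:.3f} s  KE={ke:.4f} J  PE={pe:.4f} J  total={ke + pe:.4f} J")
+ # The total stays fixed at 0.5*k*A**2 = 0.5 J while KE and PE trade places;
+ # averaged over a full cycle, each holds half of the total.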
+
+ This principle is vitally important to understanding the behaviour of a quantity closely related to energy, called entropy. Entropy is a measure of evenness of a distribution of energy between parts of a system. When an isolated system is given more degrees of freedom (i.e., given new available energy states that are the same as existing states), then total energy spreads over all available degrees equally without distinction between "new" and "old" degrees. This mathematical result is called the second law of thermodynamics. The second law of thermodynamics is valid only for systems which are near or in an equilibrium state. For non-equilibrium systems, the laws governing the system's behavior are still debated. One of the guiding principles for these systems is the principle of maximum entropy production.[19][20] It states that nonequilibrium systems behave in such a way as to maximize their entropy production.[21]
en/1755.html.txt ADDED
@@ -0,0 +1,133 @@
+ Solar energy is radiant light and heat from the Sun that is harnessed using a range of ever-evolving technologies such as solar heating, photovoltaics, solar thermal energy, solar architecture, molten salt power plants and artificial photosynthesis.[1][2]
+
+ It is an essential source of renewable energy, and its technologies are broadly characterized as either passive solar or active solar depending on how they capture and distribute solar energy or convert it into solar power. Active solar techniques include the use of photovoltaic systems, concentrated solar power, and solar water heating to harness the energy. Passive solar techniques include orienting a building to the Sun, selecting materials with favorable thermal mass or light-dispersing properties, and designing spaces that naturally circulate air.
+
+ The large magnitude of solar energy available makes it a highly appealing source of electricity. The United Nations Development Programme in its 2000 World Energy Assessment found that the annual potential of solar energy was 1,575–49,837 exajoules (EJ). This is several times larger than the total world energy consumption, which was 559.8 EJ in 2012.[3][4]
+
+ In 2011, the International Energy Agency said that "the development of affordable, inexhaustible and clean solar energy technologies will have huge longer-term benefits. It will increase countries' energy security through reliance on an indigenous, inexhaustible, and mostly import-independent resource, enhance sustainability, reduce pollution, lower the costs of mitigating global warming, and keep fossil fuel prices lower than otherwise. These advantages are global. Hence the additional costs of the incentives for early deployment should be considered learning investments; they must be wisely spent and need to be widely shared".[1]
+
+ The Earth receives 174 petawatts (PW) of incoming solar radiation (insolation) at the upper atmosphere.[5] Approximately 30% is reflected back to space while the rest is absorbed by clouds, oceans and land masses. The spectrum of solar light at the Earth's surface is mostly spread across the visible and near-infrared ranges with a small part in the near-ultraviolet.[6] Most of the world's population live in areas with insolation levels of 150–300 watts/m², or 3.5–7.0 kWh/m² per day.[citation needed]
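+
+ The two ranges quoted are the same quantity in different units: average power per square metre versus energy per square metre per day. A quick Python check:
+
+ for avg_power in (150, 300):                  # insolation in W/m²
+     kwh_per_day = avg_power * 24 / 1000       # 24 h of W·h, converted to kWh
+     print(f"{avg_power} W/m² ≈ {kwh_per_day:.1f} kWh/m² per day")
+ # 150 W/m² ≈ 3.6 and 300 W/m² ≈ 7.2 kWh/m² per day,
+ # matching the quoted 3.5–7.0 kWh/m² range to rounding.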
+
+ Solar radiation is absorbed by the Earth's land surface, oceans – which cover about 71% of the globe – and atmosphere. Warm air containing evaporated water from the oceans rises, causing atmospheric circulation or convection. When the air reaches a high altitude, where the temperature is low, water vapor condenses into clouds, which rain onto the Earth's surface, completing the water cycle. The latent heat of water condensation amplifies convection, producing atmospheric phenomena such as wind, cyclones and anti-cyclones.[7] Sunlight absorbed by the oceans and land masses keeps the surface at an average temperature of 14 °C.[8] By photosynthesis, green plants convert solar energy into chemically stored energy, which produces food, wood and the biomass from which fossil fuels are derived.[9]
+
+ The total solar energy absorbed by Earth's atmosphere, oceans and land masses is approximately 3,850,000 exajoules (EJ) per year.[10] As of 2002, more solar energy was absorbed in one hour than the world used in one year.[11][12] Photosynthesis captures approximately 3,000 EJ per year in biomass.[13] The amount of solar energy reaching the surface of the planet is so vast that in one year it is about twice as much as will ever be obtained from all of the Earth's non-renewable resources of coal, oil, natural gas, and mined uranium combined.[14]
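+
+ The one-hour claim can be sanity-checked from the figure above. The world-consumption value for 2002 used below (roughly 430 EJ) is an outside assumption for illustration; the text itself quotes 559.8 EJ for 2012:
+
+ absorbed_per_year_EJ = 3_850_000
+ hours_per_year = 365.25 * 24
+ absorbed_per_hour_EJ = absorbed_per_year_EJ / hours_per_year
+ print(f"{absorbed_per_hour_EJ:.0f} EJ absorbed per hour")   # ~439 EJ
+ # ~439 EJ per hour indeed exceeds an annual world consumption of ~430 EJ.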
+
+ The potential solar energy that could be used by humans differs from the amount of solar energy present near the surface of the planet because factors such as geography, time variation, cloud cover, and the land available to humans limit the amount of solar energy that we can acquire.
+
+ Geography affects solar energy potential because areas that are closer to the equator have a higher amount of solar radiation. However, the use of photovoltaics that can follow the position of the Sun can significantly increase the solar energy potential in areas that are farther from the equator.[4] Time variation affects the potential of solar energy because during the night there is little solar radiation on the surface of the Earth for solar panels to absorb. This limits the amount of energy that solar panels can absorb in one day. Cloud cover can affect the potential of solar panels because clouds block incoming light from the Sun and reduce the light available for solar cells.
+
+ In addition, land availability has a large effect on the available solar energy because solar panels can only be set up on land that is otherwise unused and suitable for solar panels. Roofs are a suitable place for solar cells, as many people have discovered that they can collect energy directly from their homes this way. Other areas that are suitable for solar cells are lands that are not being used for businesses, where solar plants can be established.[4]
+
+ Solar technologies are characterized as either passive or active depending on the way they capture, convert and distribute sunlight, and they enable solar energy to be harnessed at different levels around the world, mostly depending on the distance from the equator. Although solar energy refers primarily to the use of solar radiation for practical ends, all renewable energies other than geothermal power and tidal power derive their energy either directly or indirectly from the Sun.
+
+ Active solar techniques use photovoltaics, concentrated solar power, solar thermal collectors, pumps, and fans to convert sunlight into useful outputs. Passive solar techniques include selecting materials with favorable thermal properties, designing spaces that naturally circulate air, and referencing the position of a building to the Sun. Active solar technologies increase the supply of energy and are considered supply side technologies, while passive solar technologies reduce the need for alternate resources and are generally considered demand-side technologies.[19]
+
+ In 2000, the United Nations Development Programme, UN Department of Economic and Social Affairs, and World Energy Council published an estimate of the potential solar energy that could be used by humans each year that took into account factors such as insolation, cloud cover, and the land that is usable by humans. The estimate found that solar energy has a global potential of 1,600 to 49,800 exajoules (4.4×10¹⁴ to 1.4×10¹⁶ kWh) per year (see table below).[4]
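+
+ The kWh figures in parentheses are unit conversions of the EJ range, which a short Python check confirms:
+
+ KWH_PER_EJ = 1e18 / 3.6e6        # 1 EJ = 1e18 J; 1 kWh = 3.6e6 J
+ for ej in (1_600, 49_800):
+     print(f"{ej} EJ = {ej * KWH_PER_EJ:.1e} kWh")
+ # 1600 EJ ≈ 4.4e14 kWh and 49800 EJ ≈ 1.4e16 kWh, as quoted above.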
+
+ Quantitative relation of global solar potential vs. the world's primary energy consumption:
+
+ Source: United Nations Development Programme – World Energy Assessment (2000)[4]
+
+ Solar thermal technologies can be used for water heating, space heating, space cooling and process heat generation.[20]
+
+ In 1878, at the Universal Exposition in Paris, Augustin Mouchot successfully demonstrated a solar steam engine but could not continue development because of cheap coal and other factors.
+
+ In 1897, Frank Shuman, a US inventor, engineer and solar energy pioneer, built a small demonstration solar engine that worked by reflecting solar energy onto square boxes filled with ether, which has a lower boiling point than water; the boxes were fitted internally with black pipes which in turn powered a steam engine. In 1908 Shuman formed the Sun Power Company with the intent of building larger solar power plants. He, along with his technical advisor A.S.E. Ackermann and British physicist Sir Charles Vernon Boys,[citation needed] developed an improved system using mirrors to reflect solar energy upon collector boxes, increasing heating capacity to the extent that water could now be used instead of ether. Shuman then constructed a full-scale steam engine powered by low-pressure water, enabling him to patent the entire solar engine system by 1912.
+
+ Shuman built the world's first solar thermal power station in Maadi, Egypt, between 1912 and 1913. His plant used parabolic troughs to power a 45–52 kilowatt (60–70 hp) engine that pumped more than 22,000 litres (4,800 imp gal; 5,800 US gal) of water per minute from the Nile River to adjacent cotton fields. Although the outbreak of World War I and the discovery of cheap oil in the 1930s discouraged the advancement of solar energy, Shuman's vision and basic design were resurrected in the 1970s with a new wave of interest in solar thermal energy.[21] In 1916 Shuman was quoted in the media advocating solar energy's utilization, saying:
+
+ We have proved the commercial profit of sun power in the tropics and have more particularly proved that after our stores of oil and coal are exhausted the human race can receive unlimited power from the rays of the Sun.
+
+ Solar hot water systems use sunlight to heat water. In middle geographical latitudes (between 40 degrees north and 40 degrees south), 60 to 70% of the domestic hot water use, with water temperatures up to 60 °C (140 °F), can be provided by solar heating systems.[23] The most common types of solar water heaters are evacuated tube collectors (44%) and glazed flat plate collectors (34%) generally used for domestic hot water; and unglazed plastic collectors (21%) used mainly to heat swimming pools.[24]
+
+ As of 2007, the total installed capacity of solar hot water systems was approximately 154 thermal gigawatts (GWth).[25] China is the world leader in their deployment with 70 GWth installed as of 2006 and a long-term goal of 210 GWth by 2020.[26] Israel and Cyprus are the per capita leaders in the use of solar hot water systems with over 90% of homes using them.[27] In the United States, Canada, and Australia, heating swimming pools is the dominant application of solar hot water with an installed capacity of 18 GWth as of 2005.[19]
+
+ In the United States, heating, ventilation and air conditioning (HVAC) systems account for 30% (4.65 EJ/yr) of the energy used in commercial buildings and nearly 50% (10.1 EJ/yr) of the energy used in residential buildings.[28][29] Solar heating, cooling and ventilation technologies can be used to offset a portion of this energy.
+
+ Thermal mass is any material that can be used to store heat—heat from the Sun in the case of solar energy. Common thermal mass materials include stone, cement, and water. Historically they have been used in arid climates or warm temperate regions to keep buildings cool by absorbing solar energy during the day and radiating stored heat to the cooler atmosphere at night. However, they can be used in cold temperate areas to maintain warmth as well. The size and placement of thermal mass depend on several factors such as climate, daylighting, and shading conditions. When duly incorporated, thermal mass maintains space temperatures in a comfortable range and reduces the need for auxiliary heating and cooling equipment.[30]
+
+ A solar chimney (or thermal chimney, in this context) is a passive solar ventilation system composed of a vertical shaft connecting the interior and exterior of a building. As the chimney warms, the air inside is heated, causing an updraft that pulls air through the building. Performance can be improved by using glazing and thermal mass materials[31] in a way that mimics greenhouses.
+
+ Deciduous trees and plants have been promoted as a means of controlling solar heating and cooling. When planted on the southern side of a building in the northern hemisphere or the northern side in the southern hemisphere, their leaves provide shade during the summer, while the bare limbs allow light to pass during the winter.[32] Since bare, leafless trees shade 1/3 to 1/2 of incident solar radiation, there is a balance between the benefits of summer shading and the corresponding loss of winter heating.[33] In climates with significant heating loads, deciduous trees should not be planted on the Equator-facing side of a building because they will interfere with winter solar availability. They can, however, be used on the east and west sides to provide a degree of summer shading without appreciably affecting winter solar gain.[34]
+
+ Solar cookers use sunlight for cooking, drying, and pasteurization. They can be grouped into three broad categories: box cookers, panel cookers, and reflector cookers.[35] The simplest solar cooker is the box cooker first built by Horace de Saussure in 1767.[36] A basic box cooker consists of an insulated container with a transparent lid. It can be used effectively with partially overcast skies and will typically reach temperatures of 90–150 °C (194–302 °F).[37] Panel cookers use a reflective panel to direct sunlight onto an insulated container and reach temperatures comparable to box cookers. Reflector cookers use various concentrating geometries (dish, trough, Fresnel mirrors) to focus light on a cooking container. These cookers reach temperatures of 315 °C (599 °F) and above but require direct light to function properly and must be repositioned to track the Sun.[38]
+
+ Solar concentrating technologies such as parabolic dish, trough and Scheffler reflectors can provide process heat for commercial and industrial applications. The first commercial system was the Solar Total Energy Project (STEP) in Shenandoah, Georgia, US, where a field of 114 parabolic dishes provided 50% of the process heating, air conditioning and electrical requirements for a clothing factory. This grid-connected cogeneration system provided 400 kW of electricity plus thermal energy in the form of 401 kW steam and 468 kW chilled water, and had a one-hour peak load thermal storage.[39]
+
+ Evaporation ponds are shallow pools that concentrate dissolved solids through evaporation. The use of evaporation ponds to obtain salt from seawater is one of the oldest applications of solar energy. Modern uses include concentrating brine solutions used in leach mining and removing dissolved solids from waste streams.[40] Clothes lines, clotheshorses, and clothes racks dry clothes through evaporation by wind and sunlight without consuming electricity or gas. In some states of the United States legislation protects the "right to dry" clothes.[41]
+
+ Unglazed transpired collectors (UTC) are perforated sun-facing walls used for preheating ventilation air. UTCs can raise the incoming air temperature up to 22 °C (40 °F) and deliver outlet temperatures of 45–60 °C (113–140 °F).[42] The short payback period of transpired collectors (3 to 12 years) makes them a more cost-effective alternative than glazed collection systems.[42] As of 2003, over 80 systems with a combined collector area of 35,000 square metres (380,000 sq ft) had been installed worldwide, including an 860 m² (9,300 sq ft) collector in Costa Rica used for drying coffee beans and a 1,300 m² (14,000 sq ft) collector in Coimbatore, India, used for drying marigolds.[43]
+
+ Solar distillation can be used to make saline or brackish water potable. The first recorded instance of this was by 16th-century Arab alchemists.[44] A large-scale solar distillation project was first constructed in 1872 in the Chilean mining town of Las Salinas.[45] The plant, which had a solar collection area of 4,700 m² (51,000 sq ft), could produce up to 22,700 L (5,000 imp gal; 6,000 US gal) per day and operated for 40 years.[45] Individual still designs include single-slope, double-slope (or greenhouse type), vertical, conical, inverted absorber, multi-wick, and multiple effect. These stills can operate in passive, active, or hybrid modes. Double-slope stills are the most economical for decentralized domestic purposes, while active multiple effect units are more suitable for large-scale applications.[44]
+
+ Solar water disinfection (SODIS) involves exposing water-filled plastic polyethylene terephthalate (PET) bottles to sunlight for several hours.[46] Exposure times vary depending on weather and climate from a minimum of six hours to two days during fully overcast conditions.[47] It is recommended by the World Health Organization as a viable method for household water treatment and safe storage.[48] Over two million people in developing countries use this method for their daily drinking water.[47]
+
+ Solar energy may be used in a water stabilization pond to treat waste water without chemicals or electricity. A further environmental advantage is that algae grow in such ponds and consume carbon dioxide in photosynthesis, although algae may produce toxic chemicals that make the water unusable.[49][50]
+
+ Molten salt can be employed as a thermal energy storage method to retain thermal energy collected by a solar tower or solar trough of a concentrated solar power plant so that it can be used to generate electricity in bad weather or at night. It was demonstrated in the Solar Two project from 1995 to 1999. The system is predicted to have an annual efficiency of 99%, a reference to the energy retained by storing heat before turning it into electricity, versus converting heat directly into electricity.[51][52][53] The molten salt mixtures vary. The most common mixture contains sodium nitrate, potassium nitrate and calcium nitrate. It is non-flammable and non-toxic, and has already been used in the chemical and metals industries as a heat-transport fluid. Hence, experience with such systems exists in non-solar applications.
+
+ The salt melts at 131 °C (268 °F). It is kept liquid at 288 °C (550 °F) in an insulated "cold" storage tank. The liquid salt is pumped through panels in a solar collector where the focused irradiance heats it to 566 °C (1,051 °F). It is then sent to a hot storage tank. This is so well insulated that the thermal energy can be usefully stored for up to a week.[54]
+
+ When electricity is needed, the hot salt is pumped to a conventional steam-generator to produce superheated steam for a turbine/generator as used in any conventional coal, oil, or nuclear power plant. A 100-megawatt turbine would need a tank about 9.1 metres (30 ft) tall and 24 metres (79 ft) in diameter to drive it for four hours by this design.
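+
+ A rough feasibility check of that tank sizing is possible from the temperatures given above. The salt density (~1,800 kg/m³), specific heat (~1,500 J/(kg·K)) and ~40% steam-cycle efficiency used below are outside assumptions, not figures from the text:
+
+ import math
+
+ height, diameter = 9.1, 24.0                       # tank dimensions, m
+ volume = math.pi * (diameter / 2) ** 2 * height    # ~4,100 m³
+ density, cp = 1800.0, 1500.0                       # assumed kg/m³ and J/(kg·K)
+ dT = 566 - 288                                     # usable temperature swing, K
+ stored_heat = volume * density * cp * dT           # ~3.1e12 J
+ needed_heat = 100e6 * 4 * 3600 / 0.4               # 100 MW for 4 h at 40% efficiency
+ print(f"stored ≈ {stored_heat:.2e} J, needed ≈ {needed_heat:.2e} J")
+ # The two agree to within ~15%, so the quoted tank size is plausible.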
+
+ Several parabolic trough power plants in Spain[55] and solar power tower developer SolarReserve use this thermal energy storage concept. The Solana Generating Station in the U.S. has six hours of storage by molten salt. The María Elena plant[56] is a 400 MW thermo-solar complex in the northern Chilean region of Antofagasta employing molten salt technology.
+
+ Solar power is the conversion of sunlight into electricity, either directly using photovoltaics (PV), or indirectly using concentrated solar power (CSP). CSP systems use lenses or mirrors and tracking systems to focus a large area of sunlight into a small beam. PV converts light into electric current using the photovoltaic effect.
+
+ Solar power is anticipated to become the world's largest source of electricity by 2050, with solar photovoltaics and concentrated solar power contributing 16 and 11 percent to the global overall consumption, respectively.[57] In 2016, after another year of rapid growth, solar generated 1.3% of global power.[58]
+
+ Commercial concentrated solar power plants were first developed in the 1980s. The 392 MW Ivanpah Solar Power Facility, in the Mojave Desert of California, is the largest solar power plant in the world. Other large concentrated solar power plants include the 150 MW Solnova Solar Power Station and the 100 MW Andasol solar power station, both in Spain. The 250 MW Agua Caliente Solar Project, in the United States, and the 221 MW Charanka Solar Park in India, are the world's largest photovoltaic plants. Solar projects exceeding 1 GW are being developed, but most of the deployed photovoltaics are in small rooftop arrays of less than 5 kW, which are connected to the grid using net metering or a feed-in tariff.[59]
+
+ Worldwide growth of PV capacity grouped by region in MW (2006–2014)
+
+ In the last two decades, photovoltaics (PV), also known as solar PV, has evolved from a pure niche market of small-scale applications towards becoming a mainstream electricity source. A solar cell is a device that converts light directly into electricity using the photovoltaic effect. The first solar cell was constructed by Charles Fritts in the 1880s.[60] In 1931 a German engineer, Dr Bruno Lange, developed a photo cell using silver selenide in place of copper oxide.[61] Although the prototype selenium cells converted less than 1% of incident light into electricity, both Ernst Werner von Siemens and James Clerk Maxwell recognized the importance of this discovery.[62] Following the work of Russell Ohl in the 1940s, researchers Gerald Pearson, Calvin Fuller and Daryl Chapin created the crystalline silicon solar cell in 1954.[63] These early solar cells cost US$286/watt and reached efficiencies of 4.5–6%.[64] By 2012 available efficiencies exceeded 20%, and the maximum efficiency of research photovoltaics was in excess of 40%.[65]
+
+ Concentrating Solar Power (CSP) systems use lenses or mirrors and tracking systems to focus a large area of sunlight into a small beam. The concentrated heat is then used as a heat source for a conventional power plant. A wide range of concentrating technologies exists; the most developed are the parabolic trough, the concentrating linear Fresnel reflector, the Stirling dish, and the solar power tower. Various techniques are used to track the Sun and focus light. In all of these systems a working fluid is heated by the concentrated sunlight, and is then used for power generation or energy storage.[66]
+
+ Sunlight has influenced building design since the beginning of architectural history.[68] Advanced solar architecture and urban planning methods were first employed by the Greeks and Chinese, who oriented their buildings toward the south to provide light and warmth.[69]
+
+ The common features of passive solar architecture are orientation relative to the Sun, compact proportion (a low surface area to volume ratio), selective shading (overhangs) and thermal mass.[68] When these features are tailored to the local climate and environment, they can produce well-lit spaces that stay in a comfortable temperature range. Socrates' Megaron House is a classic example of passive solar design.[68] The most recent approaches to solar design use computer modeling tying together solar lighting, heating and ventilation systems in an integrated solar design package.[70] Active solar equipment such as pumps, fans, and switchable windows can complement passive design and improve system performance.
+
+ Urban heat islands (UHI) are metropolitan areas with higher temperatures than those of the surrounding environment. The higher temperatures result from increased absorption of solar energy by urban materials such as asphalt and concrete, which have lower albedos and higher heat capacities than those in the natural environment. A straightforward method of counteracting the UHI effect is to paint buildings and roads white and to plant trees in the area. Using these methods, a hypothetical "cool communities" program in Los Angeles has projected that urban temperatures could be reduced by approximately 3 °C at an estimated cost of US$1 billion, giving estimated total annual benefits of US$530 million from reduced air-conditioning costs and healthcare savings.[71]
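+
+ Those cost-benefit figures imply a short payback period; the simple no-discounting arithmetic below is an illustration, not part of the cited study:
+
+ cost = 1e9                 # one-time program cost, US$
+ annual_benefit = 530e6     # air-conditioning and healthcare savings per year, US$
+ print(f"simple payback ≈ {cost / annual_benefit:.1f} years")   # ~1.9 years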
+
+ Agriculture and horticulture seek to optimize the capture of solar energy to optimize the productivity of plants. Techniques such as timed planting cycles, tailored row orientation, staggered heights between rows and the mixing of plant varieties can improve crop yields.[72][73] While sunlight is generally considered a plentiful resource, the exceptions highlight the importance of solar energy to agriculture. During the short growing seasons of the Little Ice Age, French and English farmers employed fruit walls to maximize the collection of solar energy. These walls acted as thermal masses and accelerated ripening by keeping plants warm. Early fruit walls were built perpendicular to the ground and facing south, but over time, sloping walls were developed to make better use of sunlight. In 1699, Nicolas Fatio de Duillier even suggested using a tracking mechanism which could pivot to follow the Sun.[74] Applications of solar energy in agriculture aside from growing crops include pumping water, drying crops, brooding chicks and drying chicken manure.[43][75] More recently the technology has been embraced by vintners, who use the energy generated by solar panels to power grape presses.[76]
+
+ Greenhouses convert solar light to heat, enabling year-round production and the growth (in enclosed environments) of specialty crops and other plants not naturally suited to the local climate. Primitive greenhouses were first used during Roman times to produce cucumbers year-round for the Roman emperor Tiberius.[77] The first modern greenhouses were built in Europe in the 16th century to keep exotic plants brought back from explorations abroad.[78] Greenhouses remain an important part of horticulture today. Plastic transparent materials have also been used to similar effect in polytunnels and row covers.
+
+ Development of a solar-powered car has been an engineering goal since the 1980s. The World Solar Challenge is a biennial solar-powered car race in which teams from universities and enterprises compete over 3,021 kilometres (1,877 mi) across central Australia from Darwin to Adelaide. In 1987, when it was founded, the winner's average speed was 67 kilometres per hour (42 mph); by 2007 the winner's average speed had improved to 90.87 kilometres per hour (56.46 mph).[79]
+ The North American Solar Challenge and the planned South African Solar Challenge are comparable competitions that reflect an international interest in the engineering and development of solar powered vehicles.[80][81]
+
+ Some vehicles use solar panels for auxiliary power, such as for air conditioning, to keep the interior cool, thus reducing fuel consumption.[82][83]
+
+ In 1975, the first practical solar boat was constructed in England.[84] By 1995, passenger boats incorporating PV panels began appearing and are now used extensively.[85] In 1996, Kenichi Horie made the first solar-powered crossing of the Pacific Ocean, and the Sun21 catamaran made the first solar-powered crossing of the Atlantic Ocean in the winter of 2006–2007.[86] There were plans to circumnavigate the globe in 2010.[87]
+
+ In 1974, the unmanned AstroFlight Sunrise airplane made the first solar flight. On 29 April 1979, the Solar Riser made the first flight in a solar-powered, fully controlled, man-carrying flying machine, reaching an altitude of 40 ft (12 m). In 1980, the Gossamer Penguin made the first piloted flights powered solely by photovoltaics. This was quickly followed by the Solar Challenger, which crossed the English Channel in July 1981. In 1990 Eric Scott Raymond flew from California to North Carolina in 21 hops using solar power.[88] Developments then turned back to unmanned aerial vehicles (UAV) with the Pathfinder (1997) and subsequent designs, culminating in the Helios, which set the altitude record for a non-rocket-propelled aircraft at 29,524 metres (96,864 ft) in 2001.[89] The Zephyr, developed by BAE Systems, is the latest in a line of record-breaking solar aircraft, making a 54-hour flight in 2007, with month-long flights envisioned by 2010.[90] As of 2016, Solar Impulse, an electric aircraft, is circumnavigating the globe. It is a single-seat plane powered by solar cells and capable of taking off under its own power. The design allows the aircraft to remain airborne for several days.[91]
+
+ A solar balloon is a black balloon that is filled with ordinary air. As sunlight shines on the balloon, the air inside is heated and expands, causing an upward buoyancy force, much like an artificially heated hot air balloon. Some solar balloons are large enough for human flight, but usage is generally limited to the toy market as the surface-area to payload-weight ratio is relatively high.[92]
+
+ Solar chemical processes use solar energy to drive chemical reactions. These processes offset energy that would otherwise come from a fossil fuel source and can also convert solar energy into storable and transportable fuels. Solar-induced chemical reactions can be divided into thermochemical and photochemical.[93] A variety of fuels can be produced by artificial photosynthesis.[94] The multielectron catalytic chemistry involved in making carbon-based fuels (such as methanol) from reduction of carbon dioxide is challenging; a feasible alternative is hydrogen production from protons, though use of water as the source of electrons (as plants do) requires mastering the multielectron oxidation of two water molecules to molecular oxygen.[95] Some have envisaged working solar fuel plants in coastal metropolitan areas by 2050: the splitting of seawater providing hydrogen to be run through adjacent fuel-cell electric power plants, with the pure water by-product going directly into the municipal water system.[96] Another vision involves all human structures covering the Earth's surface (i.e., roads, vehicles and buildings) doing photosynthesis more efficiently than plants.[97]
+
+ Hydrogen production technologies have been a significant area of solar chemical research since the 1970s. Aside from electrolysis driven by photovoltaic or photochemical cells, several thermochemical processes have also been explored. One such route uses concentrators to split water into oxygen and hydrogen at high temperatures (2,300–2,600 °C or 4,200–4,700 °F).[98] Another approach uses the heat from solar concentrators to drive the steam reformation of natural gas thereby increasing the overall hydrogen yield compared to conventional reforming methods.[99] Thermochemical cycles characterized by the decomposition and regeneration of reactants present another avenue for hydrogen production. The Solzinc process under development at the Weizmann Institute of Science uses a 1 MW solar furnace to decompose zinc oxide (ZnO) at temperatures above 1,200 °C (2,200 °F). This initial reaction produces pure zinc, which can subsequently be reacted with water to produce hydrogen.[100]
+
+ Thermal mass systems can store solar energy in the form of heat at domestically useful temperatures for daily or interseasonal durations. Thermal storage systems generally use readily available materials with high specific heat capacities such as water, earth and stone. Well-designed systems can lower peak demand, shift time-of-use to off-peak hours and reduce overall heating and cooling requirements.[101][102]
+
+ Phase change materials such as paraffin wax and Glauber's salt are another thermal storage medium. These materials are inexpensive, readily available, and can deliver domestically useful temperatures (approximately 64 °C or 147 °F). The "Dover House" (in Dover, Massachusetts) was the first to use a Glauber's salt heating system, in 1948.[103] Solar energy can also be stored at high temperatures using molten salts. Salts are an effective storage medium because they are low-cost, have a high specific heat capacity, and can deliver heat at temperatures compatible with conventional power systems. The Solar Two project used this method of energy storage, allowing it to store 1.44 terajoules (400,000 kWh) in its 68 m³ storage tank with an annual storage efficiency of about 99%.[104]
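+
+ The terajoule/kWh pair quoted for Solar Two is a direct unit conversion, easily verified:
+
+ stored_J = 1.44e12                       # 1.44 terajoules
+ print(f"{stored_J / 3.6e6:,.0f} kWh")    # 400,000 kWh, as quoted
+ # Per cubic metre of the 68 m³ tank, that is roughly 21 GJ of stored heat.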
+
+ Off-grid PV systems have traditionally used rechargeable batteries to store excess electricity. With grid-tied systems, excess electricity can be sent to the transmission grid, while standard grid electricity can be used to meet shortfalls. Net metering programs give household systems credit for any electricity they deliver to the grid. This is handled by 'rolling back' the meter whenever the home produces more electricity than it consumes. If the net electricity use is below zero, the utility then rolls over the kilowatt-hour credit to the next month.[105] Other approaches involve the use of two meters, to measure electricity consumed vs. electricity produced. This is less common due to the increased installation cost of the second meter. Most standard meters accurately measure in both directions, making a second meter unnecessary.
+
+ Pumped-storage hydroelectricity stores energy in the form of water pumped, when energy is available, from a lower elevation reservoir to a higher elevation one. The energy is recovered when demand is high by releasing the water, with the pump becoming a hydroelectric power generator.[106]
+
+ Beginning with the surge in coal use, which accompanied the Industrial Revolution, energy consumption has steadily transitioned from wood and biomass to fossil fuels. The early development of solar technologies starting in the 1860s was driven by an expectation that coal would soon become scarce. However, development of solar technologies stagnated in the early 20th century in the face of the increasing availability, economy, and utility of coal and petroleum.[107]
+
+ The 1973 oil embargo and 1979 energy crisis caused a reorganization of energy policies around the world. It brought renewed attention to developing solar technologies.[108][109] Deployment strategies focused on incentive programs such as the Federal Photovoltaic Utilization Program in the US and the Sunshine Program in Japan. Other efforts included the formation of research facilities in the US (SERI, now NREL), Japan (NEDO), and Germany (Fraunhofer Institute for Solar Energy Systems ISE).[110]
+
+ Commercial solar water heaters began appearing in the United States in the 1890s.[111] These systems saw increasing use until the 1920s but were gradually replaced by cheaper and more reliable heating fuels.[112] As with photovoltaics, solar water heating attracted renewed attention as a result of the oil crises in the 1970s, but interest subsided in the 1980s due to falling petroleum prices. Development in the solar water heating sector progressed steadily throughout the 1990s, and annual growth rates have averaged 20% since 1999.[25] Although generally underestimated, solar water heating and cooling is by far the most widely deployed solar technology with an estimated capacity of 154 GW as of 2007.[25]
+
+ The International Energy Agency has said that solar energy can make considerable contributions to solving some of the most urgent problems the world now faces:[1]
+
+ The development of affordable, inexhaustible, and clean solar energy technologies will have huge longer-term benefits. It will increase countries' energy security through reliance on an indigenous, inexhaustible, and mostly import-independent resource, enhance sustainability, reduce pollution, lower the costs of mitigating climate change, and keep fossil fuel prices lower than otherwise. These advantages are global. Hence the additional costs of the incentives for early deployment should be considered learning investments; they must be wisely spent and need to be widely shared.[1]
+
+ In 2011, a report by the International Energy Agency found that solar energy technologies such as photovoltaics, solar hot water, and concentrated solar power could provide a third of the world's energy by 2060 if politicians commit to limiting climate change and transitioning to renewable energy. The energy from the Sun could play a key role in de-carbonizing the global economy alongside improvements in energy efficiency and imposing costs on greenhouse gas emitters. "The strength of solar is the incredible variety and flexibility of applications, from small scale to big scale".[113]
+
+ The International Organization for Standardization has established several standards relating to solar energy equipment. For example, ISO 9050 relates to glass in buildings, while ISO 10217 relates to the materials used in solar water heaters.
+
en/1756.html.txt ADDED
@@ -0,0 +1,133 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ Solar energy is radiant light and heat from the Sun that is harnessed using a range of ever-evolving technologies such as solar heating, photovoltaics, solar thermal energy, solar architecture, molten salt power plants and artificial photosynthesis.[1][2]
4
+
5
+ It is an essential source of renewable energy, and its technologies are broadly characterized as either passive solar or active solar depending on how they capture and distribute solar energy or convert it into solar power. Active solar techniques include the use of photovoltaic systems, concentrated solar power, and solar water heating to harness the energy. Passive solar techniques include orienting a building to the Sun, selecting materials with favorable thermal mass or light-dispersing properties, and designing spaces that naturally circulate air.
6
+
7
+ The large magnitude of solar energy available makes it a highly appealing source of electricity. The United Nations Development Programme in its 2000 World Energy Assessment found that the annual potential of solar energy was 1,575–49,837 exajoules (EJ). This is several times larger than the total world energy consumption, which was 559.8 EJ in 2012.[3][4]
8
+
9
+ In 2011, the International Energy Agency said that "the development of affordable, inexhaustible and clean solar energy technologies will have huge longer-term benefits. It will increase countries' energy security through reliance on an indigenous, inexhaustible, and mostly import-independent resource, enhance sustainability, reduce pollution, lower the costs of mitigating global warming, and keep fossil fuel prices lower than otherwise. These advantages are global. Hence the additional costs of the incentives for early deployment should be considered learning investments; they must be wisely spent and need to be widely shared".[1]
10
+
11
+ The Earth receives 174 petawatts (PW) of incoming solar radiation (insolation) at the upper atmosphere.[5] Approximately 30% is reflected back to space while the rest is absorbed by clouds, oceans and land masses. The spectrum of solar light at the Earth's surface is mostly spread across the visible and near-infrared ranges with a small part in the near-ultraviolet.[6] Most of the world's population live in areas with insolation levels of 150–300 watts/m², or 3.5–7.0 kWh/m² per day.[citation needed]
12
+
13
+ Solar radiation is absorbed by the Earth's land surface, oceans – which cover about 71% of the globe – and atmosphere. Warm air containing evaporated water from the oceans rises, causing atmospheric circulation or convection. When the air reaches a high altitude, where the temperature is low, water vapor condenses into clouds, which rain onto the Earth's surface, completing the water cycle. The latent heat of water condensation amplifies convection, producing atmospheric phenomena such as wind, cyclones and anti-cyclones.[7] Sunlight absorbed by the oceans and land masses keeps the surface at an average temperature of 14 °C.[8] By photosynthesis, green plants convert solar energy into chemically stored energy, which produces food, wood and the biomass from which fossil fuels are derived.[9]
14
+
15
+ The total solar energy absorbed by Earth's atmosphere, oceans and land masses is approximately 3,850,000 exajoules (EJ) per year.[10] In 2002, this was more energy in one hour than the world used in one year.[11][12] Photosynthesis captures approximately 3,000 EJ per year in biomass.[13] The amount of solar energy reaching the surface of the planet is so vast that in one year it is about twice as much as will ever be obtained from all of the Earth's non-renewable resources of coal, oil, natural gas, and mined uranium combined,[14]
16
+
17
+ The potential solar energy that could be used by humans differs from the amount of solar energy present near the surface of the planet because factors such as geography, time variation, cloud cover, and the land available to humans limit the amount of solar energy that we can acquire.
18
+
19
+ Geography affects solar energy potential because areas that are closer to the equator have a higher amount of solar radiation. However, the use of photovoltaics that can follow the position of the Sun can significantly increase the solar energy potential in areas that are farther from the equator.[4] Time variation effects the potential of solar energy because during the nighttime, there is little solar radiation on the surface of the Earth for solar panels to absorb. This limits the amount of energy that solar panels can absorb in one day. Cloud cover can affect the potential of solar panels because clouds block incoming light from the Sun and reduce the light available for solar cells.
20
+
21
+ Besides, land availability has a large effect on the available solar energy because solar panels can only be set up on land that is otherwise unused and suitable for solar panels. Roofs are a suitable place for solar cells, as many people have discovered that they can collect energy directly from their homes this way. Other areas that are suitable for solar cells are lands that are not being used for businesses where solar plants can be established.[4]
22
+
23
+ Solar technologies are characterized as either passive or active depending on the way they capture, convert and distribute sunlight and enable solar energy to be harnessed at different levels around the world, mostly depending on the distance from the equator. Although solar energy refers primarily to the use of solar radiation for practical ends, all renewable energies, other than Geothermal power and Tidal power, derive their energy either directly or indirectly from the Sun.
24
+
25
+ Active solar techniques use photovoltaics, concentrated solar power, solar thermal collectors, pumps, and fans to convert sunlight into useful outputs. Passive solar techniques include selecting materials with favorable thermal properties, designing spaces that naturally circulate air, and referencing the position of a building to the Sun. Active solar technologies increase the supply of energy and are considered supply side technologies, while passive solar technologies reduce the need for alternate resources and are generally considered demand-side technologies.[19]
26
+
27
+ In 2000, the United Nations Development Programme, UN Department of Economic and Social Affairs, and World Energy Council published an estimate of the potential solar energy that could be used by humans each year that took into account factors such as insolation, cloud cover, and the land that is usable by humans. The estimate found that solar energy has a global potential of 1,600 to 49,800 exajoules (4.4×1014 to 1.4×1016 kWh) per year (see table below).[4]
28
+
29
+ Quantitative relation of global solar potential vs. the world's primary energy consumption:
30
+
31
+ Source: United Nations Development Programme – World Energy Assessment (2000)[4]
32
+
33
+ Solar thermal technologies can be used for water heating, space heating, space cooling and process heat generation.[20]
34
+
35
+ In 1878, at the Universal Exposition in Paris, Augustin Mouchot successfully demonstrated a solar steam engine, but couldn't continue development because of cheap coal and other factors.
36
+
37
+ In 1897, Frank Shuman, a US inventor, engineer and solar energy pioneer built a small demonstration solar engine that worked by reflecting solar energy onto square boxes filled with ether, which has a lower boiling point than water and were fitted internally with black pipes which in turn powered a steam engine. In 1908 Shuman formed the Sun Power Company with the intent of building larger solar power plants. He, along with his technical advisor A.S.E. Ackermann and British physicist Sir Charles Vernon Boys,[citation needed] developed an improved system using mirrors to reflect solar energy upon collector boxes, increasing heating capacity to the extent that water could now be used instead of ether. Shuman then constructed a full-scale steam engine powered by low-pressure water, enabling him to patent the entire solar engine system by 1912.
38
+
39
+ Shuman built the world's first solar thermal power station in Maadi, Egypt, between 1912 and 1913. His plant used parabolic troughs to power a 45–52 kilowatts (60–70 hp) engine that pumped more than 22,000 litres (4,800 imp gal; 5,800 US gal) of water per minute from the Nile River to adjacent cotton fields. Although the outbreak of World War I and the discovery of cheap oil in the 1930s discouraged the advancement of solar energy, Shuman's vision, and basic design were resurrected in the 1970s with a new wave of interest in solar thermal energy.[21] In 1916 Shuman was quoted in the media advocating solar energy's utilization, saying:
40
+
41
+ We have proved the commercial profit of sun power in the tropics and have more particularly proved that after our stores of oil and coal are exhausted the human race can receive unlimited power from the rays of the Sun.
42
+
43
+ Solar hot water systems use sunlight to heat water. In middle geographical latitudes (between 40 degrees north and 40 degrees south), 60 to 70% of the domestic hot water use, with water temperatures up to 60 °C (140 °F), can be provided by solar heating systems.[23] The most common types of solar water heaters are evacuated tube collectors (44%) and glazed flat plate collectors (34%) generally used for domestic hot water; and unglazed plastic collectors (21%) used mainly to heat swimming pools.[24]
44
+
45
+ As of 2007, the total installed capacity of solar hot water systems was approximately 154 gigawatts-thermal (GWth).[25] China is the world leader in their deployment with 70 GWth installed as of 2006 and a long-term goal of 210 GWth by 2020.[26] Israel and Cyprus are the per capita leaders in the use of solar hot water systems with over 90% of homes using them.[27] In the United States, Canada, and Australia, heating swimming pools is the dominant application of solar hot water with an installed capacity of 18 GWth as of 2005.[19]
46
+
47
+ In the United States, heating, ventilation and air conditioning (HVAC) systems account for 30% (4.65 EJ/yr) of the energy used in commercial buildings and nearly 50% (10.1 EJ/yr) of the energy used in residential buildings.[28][29] Solar heating, cooling and ventilation technologies can be used to offset a portion of this energy.
48
+
49
+ Thermal mass is any material that can be used to store heat—heat from the Sun in the case of solar energy. Common thermal mass materials include stone, cement, and water. Historically they have been used in arid climates or warm temperate regions to keep buildings cool by absorbing solar energy during the day and radiating stored heat to the cooler atmosphere at night. However, they can be used in cold temperate areas to maintain warmth as well. The size and placement of thermal mass depend on several factors such as climate, daylighting, and shading conditions. When duly incorporated, thermal mass maintains space temperatures in a comfortable range and reduces the need for auxiliary heating and cooling equipment.[30]
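+
+ The storage role of thermal mass follows directly from the sensible-heat relation Q = m·c·ΔT. A minimal sketch, using typical textbook material properties (assumptions, not values from the article):
+
+ # Sensible heat stored in a thermal mass: Q = m * c * dT
+ MATERIALS = {            # (density kg/m^3, specific heat J/(kg*K)), typical values
+     "water": (1000, 4186),
+     "concrete": (2300, 880),
+     "stone": (2600, 790),
+ }
+
+ def heat_stored_kwh(material, volume_m3, delta_t_k):
+     density, c = MATERIALS[material]
+     return density * volume_m3 * c * delta_t_k / 3.6e6   # J -> kWh
+
+ # A 1 m^3 water tank cycling through a 10 K day-night swing stores ~11.6 kWh:
+ print(round(heat_stored_kwh("water", 1.0, 10), 1))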
50
+
51
+ A solar chimney (or thermal chimney, in this context) is a passive solar ventilation system composed of a vertical shaft connecting the interior and exterior of a building. As the chimney warms, the air inside is heated, causing an updraft that pulls air through the building. Performance can be improved by using glazing and thermal mass materials[31] in a way that mimics greenhouses.
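+
+ The updraft can be approximated with the standard stack-effect relation from building physics, Q = Cd·A·√(2gH·ΔT/Ti); this formula and the numbers below are illustrative, not taken from the article.
+
+ import math
+
+ def stack_flow_m3s(area_m2, height_m, t_inside_k, t_outside_k, cd=0.65):
+     """Stack-effect airflow through a chimney of given opening area and
+     height (standard building-physics approximation)."""
+     return cd * area_m2 * math.sqrt(
+         2 * 9.81 * height_m * (t_inside_k - t_outside_k) / t_inside_k)
+
+ # 0.5 m^2 opening, 6 m chimney, air heated 15 K above a 293 K exterior (~0.78 m^3/s):
+ print(round(stack_flow_m3s(0.5, 6.0, 308.0, 293.0), 2))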
52
+
53
+ Deciduous trees and plants have been promoted as a means of controlling solar heating and cooling. When planted on the southern side of a building in the northern hemisphere or the northern side in the southern hemisphere, their leaves provide shade during the summer, while the bare limbs allow light to pass during the winter.[32] Since bare, leafless trees shade 1/3 to 1/2 of incident solar radiation, there is a balance between the benefits of summer shading and the corresponding loss of winter heating.[33] In climates with significant heating loads, deciduous trees should not be planted on the Equator-facing side of a building because they will interfere with winter solar availability. They can, however, be used on the east and west sides to provide a degree of summer shading without appreciably affecting winter solar gain.[34]
54
+
55
+ Solar cookers use sunlight for cooking, drying, and pasteurization. They can be grouped into three broad categories: box cookers, panel cookers, and reflector cookers.[35] The simplest solar cooker is the box cooker first built by Horace de Saussure in 1767.[36] A basic box cooker consists of an insulated container with a transparent lid. It can be used effectively with partially overcast skies and will typically reach temperatures of 90–150 °C (194–302 °F).[37] Panel cookers use a reflective panel to direct sunlight onto an insulated container and reach temperatures comparable to box cookers. Reflector cookers use various concentrating geometries (dish, trough, Fresnel mirrors) to focus light on a cooking container. These cookers reach temperatures of 315 °C (599 °F) and above but require direct light to function properly and must be repositioned to track the Sun.[38]
56
+
57
+ Solar concentrating technologies such as parabolic dish, trough and Scheffler reflectors can provide process heat for commercial and industrial applications. The first commercial system was the Solar Total Energy Project (STEP) in Shenandoah, Georgia, US, where a field of 114 parabolic dishes provided 50% of the process heating, air conditioning and electrical requirements for a clothing factory. This grid-connected cogeneration system provided 400 kW of electricity plus thermal energy in the form of 401 kW steam and 468 kW chilled water, and had a one-hour peak load thermal storage.[39]
+
+ Evaporation ponds are shallow pools that concentrate dissolved solids through evaporation. The use of evaporation ponds to obtain salt from seawater is one of the oldest applications of solar energy. Modern uses include concentrating brine solutions used in leach mining and removing dissolved solids from waste streams.[40]
+
+ Clothes lines, clotheshorses, and clothes racks dry clothes through evaporation by wind and sunlight without consuming electricity or gas. In some states of the United States legislation protects the "right to dry" clothes.[41]
+
+ Unglazed transpired collectors (UTC) are perforated sun-facing walls used for preheating ventilation air. UTCs can raise the incoming air temperature by up to 22 °C (40 °F) and deliver outlet temperatures of 45–60 °C (113–140 °F).[42] The short payback period of transpired collectors (3 to 12 years) makes them a more cost-effective alternative than glazed collection systems.[42] As of 2003, over 80 systems with a combined collector area of 35,000 square metres (380,000 sq ft) had been installed worldwide, including an 860 m² (9,300 sq ft) collector in Costa Rica used for drying coffee beans and a 1,300 m² (14,000 sq ft) collector in Coimbatore, India, used for drying marigolds.[43]
58
+
59
+ Solar distillation can be used to make saline or brackish water potable. The first recorded instance of this was by 16th-century Arab alchemists.[44] A large-scale solar distillation project was first constructed in 1872 in the Chilean mining town of Las Salinas.[45] The plant, which had a solar collection area of 4,700 m² (51,000 sq ft), could produce up to 22,700 L (5,000 imp gal; 6,000 US gal) per day and operated for 40 years.[45] Individual still designs include single-slope, double-slope (or greenhouse type), vertical, conical, inverted absorber, multi-wick, and multiple effect. These stills can operate in passive, active, or hybrid modes. Double-slope stills are the most economical for decentralized domestic purposes, while active multiple effect units are more suitable for large-scale applications.[44]
60
+
61
+ Solar water disinfection (SODIS) involves exposing water-filled plastic polyethylene terephthalate (PET) bottles to sunlight for several hours.[46] Exposure times vary depending on weather and climate from a minimum of six hours to two days during fully overcast conditions.[47] It is recommended by the World Health Organization as a viable method for household water treatment and safe storage.[48] Over two million people in developing countries use this method for their daily drinking water.[47]
62
+
63
+ Solar energy may be used in a water stabilization pond to treat waste water without chemicals or electricity. A further environmental advantage is that algae grow in such ponds and consume carbon dioxide in photosynthesis, although algae may produce toxic chemicals that make the water unusable.[49][50]
64
+
65
+ Molten salt can be employed as a thermal energy storage method to retain thermal energy collected by a solar tower or solar trough of a concentrated solar power plant so that it can be used to generate electricity in bad weather or at night. It was demonstrated in the Solar Two project from 1995 to 1999. The system is predicted to have an annual efficiency of 99%, a reference to the energy retained by storing heat before turning it into electricity, versus converting heat directly into electricity.[51][52][53] The molten salt mixtures vary. The most widely used mixture contains sodium nitrate, potassium nitrate and calcium nitrate. It is non-flammable and non-toxic, and has already been used in the chemical and metals industries as a heat-transport fluid, so experience with such systems exists in non-solar applications.
66
+
67
+ The salt melts at 131 °C (268 °F). It is kept liquid at 288 °C (550 °F) in an insulated "cold" storage tank. The liquid salt is pumped through panels in a solar collector where the focused irradiance heats it to 566 °C (1,051 °F). It is then sent to a hot storage tank. This is so well insulated that the thermal energy can be usefully stored for up to a week.[54]
68
+
69
+ When electricity is needed, the hot salt is pumped to a conventional steam-generator to produce superheated steam for a turbine/generator as used in any conventional coal, oil, or nuclear power plant. A 100-megawatt turbine would need a tank about 9.1 metres (30 ft) tall and 24 metres (79 ft) in diameter to drive it for four hours by this design.
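+
+ A back-of-the-envelope check of that sizing is straightforward, using typical literature values for nitrate "solar salt" (the density, specific heat and steam-cycle efficiency below are assumptions, not figures from the source):
+
+ import math
+
+ SALT_DENSITY = 1800.0        # kg/m^3, assumed
+ SALT_SPECIFIC_HEAT = 1500.0  # J/(kg*K), assumed
+
+ def tank_thermal_capacity_mwh(radius_m, height_m, t_hot_k, t_cold_k):
+     volume = math.pi * radius_m**2 * height_m
+     mass = SALT_DENSITY * volume
+     joules = mass * SALT_SPECIFIC_HEAT * (t_hot_k - t_cold_k)
+     return joules / 3.6e9    # J -> MWh (thermal)
+
+ # 24 m diameter x 9.1 m tall tank, salt cycled between 288 degC and 566 degC:
+ thermal = tank_thermal_capacity_mwh(12.0, 9.1, 566 + 273, 288 + 273)
+ electric = thermal * 0.40    # assumed steam-cycle conversion efficiency
+ print(round(thermal), "MWh_t ->", round(electric), "MWh_e")
+ # ~858 MWh_t, ~343 MWh_e: roughly consistent with driving a 100 MW
+ # turbine for four hours (400 MWh_e), given the approximations above.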
70
+
71
+ Several parabolic trough power plants in Spain[55] and solar power tower developer SolarReserve use this thermal energy storage concept. The Solana Generating Station in the U.S. has six hours of storage by molten salt. The María Elena plant[56] is a 400 MW thermo-solar complex in the northern Chilean region of Antofagasta employing molten salt technology.
72
+
73
+ Solar power is the conversion of sunlight into electricity, either directly using photovoltaics (PV), or indirectly using concentrated solar power (CSP). CSP systems use lenses or mirrors and tracking systems to focus a large area of sunlight into a small beam. PV converts light into electric current using the photovoltaic effect.
74
+
75
+ Solar power is anticipated to become the world's largest source of electricity by 2050, with solar photovoltaics and concentrated solar power contributing 16 and 11 percent to the global overall consumption, respectively.[57] In 2016, after another year of rapid growth, solar generated 1.3% of global power.[58]
76
+
77
+ Commercial concentrated solar power plants were first developed in the 1980s. The 392 MW Ivanpah Solar Power Facility, in the Mojave Desert of California, is the largest solar power plant in the world. Other large concentrated solar power plants include the 150 MW Solnova Solar Power Station and the 100 MW Andasol solar power station, both in Spain. The 250 MW Agua Caliente Solar Project, in the United States, and the 221 MW Charanka Solar Park in India, are the world's largest photovoltaic plants. Solar projects exceeding 1 GW are being developed, but most of the deployed photovoltaics are in small rooftop arrays of less than 5 kW, which are connected to the grid using net metering or a feed-in tariff.[59]
78
+
79
+ Worldwide growth of PV capacity grouped by region in MW (2006–2014)
80
+
81
+ In the last two decades, photovoltaics (PV), also known as solar PV, has evolved from a pure niche market of small-scale applications into a mainstream electricity source. A solar cell is a device that converts light directly into electricity using the photovoltaic effect. The first solar cell was constructed by Charles Fritts in the 1880s.[60] In 1931 a German engineer, Dr Bruno Lange, developed a photo cell using silver selenide in place of copper oxide.[61] Although the prototype selenium cells converted less than 1% of incident light into electricity, both Ernst Werner von Siemens and James Clerk Maxwell recognized the importance of this discovery.[62] Following the work of Russell Ohl in the 1940s, researchers Gerald Pearson, Calvin Fuller and Daryl Chapin created the crystalline silicon solar cell in 1954.[63] These early solar cells cost US$286/watt and reached efficiencies of 4.5–6%.[64] By 2012 available efficiencies exceeded 20%, and the maximum efficiency of research photovoltaics was in excess of 40%.[65]
82
+
83
+ Concentrating Solar Power (CSP) systems use lenses or mirrors and tracking systems to focus a large area of sunlight into a small beam. The concentrated heat is then used as a heat source for a conventional power plant. A wide range of concentrating technologies exists; the most developed are the parabolic trough, the concentrating linear Fresnel reflector, the Stirling dish, and the solar power tower. Various techniques are used to track the Sun and focus light. In all of these systems a working fluid is heated by the concentrated sunlight, and is then used for power generation or energy storage.[66]
84
+
85
+ Sunlight has influenced building design since the beginning of architectural history.[68] Advanced solar architecture and urban planning methods were first employed by the Greeks and Chinese, who oriented their buildings toward the south to provide light and warmth.[69]
86
+
87
+ The common features of passive solar architecture are orientation relative to the Sun, compact proportion (a low surface area to volume ratio), selective shading (overhangs) and thermal mass.[68] When these features are tailored to the local climate and environment, they can produce well-lit spaces that stay in a comfortable temperature range. Socrates' Megaron House is a classic example of passive solar design.[68] The most recent approaches to solar design use computer modeling tying together solar lighting, heating and ventilation systems in an integrated solar design package.[70] Active solar equipment such as pumps, fans, and switchable windows can complement passive design and improve system performance.
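+
+ Shading features such as overhangs are sized with simple solar geometry. The sketch below uses the standard Cooper approximation for solar declination and the noon-altitude relation; the formulas are textbook ones and the window dimensions are illustrative, not from the article.
+
+ import math
+
+ def solar_declination_deg(day_of_year):
+     """Cooper's approximation for solar declination."""
+     return 23.45 * math.sin(math.radians(360 * (284 + day_of_year) / 365))
+
+ def noon_altitude_deg(latitude_deg, day_of_year):
+     """Sun's altitude at solar noon for a given latitude and day."""
+     return 90 - latitude_deg + solar_declination_deg(day_of_year)
+
+ def overhang_depth_m(window_height_m, latitude_deg, day_of_year=172):
+     """Depth that fully shades a south-facing window at summer-solstice
+     noon (northern hemisphere)."""
+     altitude = math.radians(noon_altitude_deg(latitude_deg, day_of_year))
+     return window_height_m / math.tan(altitude)
+
+ # A 1.5 m tall window at 40 degrees N needs roughly a 0.45 m overhang:
+ print(round(overhang_depth_m(1.5, 40.0), 2))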
88
+
89
+ Urban heat islands (UHI) are metropolitan areas with higher temperatures than those of the surrounding environment. The higher temperatures result from increased absorption of solar energy by urban materials such as asphalt and concrete, which have lower albedos and higher heat capacities than those in the natural environment. A straightforward method of counteracting the UHI effect is to paint buildings and roads white and to plant trees in the area. Using these methods, a hypothetical "cool communities" program in Los Angeles has projected that urban temperatures could be reduced by approximately 3 °C at an estimated cost of US$1 billion, giving estimated total annual benefits of US$530 million from reduced air-conditioning costs and healthcare savings.[71]
90
+
91
+ Agriculture and horticulture seek to optimize the capture of solar energy in order to maximize the productivity of plants. Techniques such as timed planting cycles, tailored row orientation, staggered heights between rows and the mixing of plant varieties can improve crop yields.[72][73] While sunlight is generally considered a plentiful resource, the exceptions highlight the importance of solar energy to agriculture. During the short growing seasons of the Little Ice Age, French and English farmers employed fruit walls to maximize the collection of solar energy. These walls acted as thermal masses and accelerated ripening by keeping plants warm. Early fruit walls were built perpendicular to the ground and facing south, but over time, sloping walls were developed to make better use of sunlight. In 1699, Nicolas Fatio de Duillier even suggested using a tracking mechanism which could pivot to follow the Sun.[74] Applications of solar energy in agriculture aside from growing crops include pumping water, drying crops, brooding chicks and drying chicken manure.[43][75] More recently the technology has been embraced by vintners, who use the energy generated by solar panels to power grape presses.[76]
92
+
93
+ Greenhouses convert solar light to heat, enabling year-round production and the growth (in enclosed environments) of specialty crops and other plants not naturally suited to the local climate. Primitive greenhouses were first used during Roman times to produce cucumbers year-round for the Roman emperor Tiberius.[77] The first modern greenhouses were built in Europe in the 16th century to keep exotic plants brought back from explorations abroad.[78] Greenhouses remain an important part of horticulture today. Plastic transparent materials have also been used to similar effect in polytunnels and row covers.
94
+
95
+ Development of a solar-powered car has been an engineering goal since the 1980s. The World Solar Challenge is a biennial solar-powered car race in which teams from universities and enterprises compete over 3,021 kilometres (1,877 mi) across central Australia from Darwin to Adelaide. In 1987, when it was founded, the winner's average speed was 67 kilometres per hour (42 mph); by 2007 the winner's average speed had improved to 90.87 kilometres per hour (56.46 mph).[79]
96
+ The North American Solar Challenge and the planned South African Solar Challenge are comparable competitions that reflect an international interest in the engineering and development of solar powered vehicles.[80][81]
97
+
98
+ Some vehicles use solar panels for auxiliary power, such as for air conditioning, to keep the interior cool, thus reducing fuel consumption.[82][83]
99
+
100
+ In 1975, the first practical solar boat was constructed in England.[84] By 1995, passenger boats incorporating PV panels began appearing and are now used extensively.[85] In 1996, Kenichi Horie made the first solar-powered crossing of the Pacific Ocean, and the Sun21 catamaran made the first solar-powered crossing of the Atlantic Ocean in the winter of 2006–2007.[86] There were plans to circumnavigate the globe in 2010.[87]
101
+
102
+ In 1974, the unmanned AstroFlight Sunrise airplane made the first solar flight. On 29 April 1979, the Solar Riser made the first flight in a solar-powered, fully controlled, man-carrying flying machine, reaching an altitude of 40 ft (12 m). In 1980, the Gossamer Penguin made the first piloted flights powered solely by photovoltaics. This was quickly followed by the Solar Challenger, which crossed the English Channel in July 1981. In 1990 Eric Scott Raymond flew from California to North Carolina in 21 hops using solar power.[88] Developments then turned back to unmanned aerial vehicles (UAV) with the Pathfinder (1997) and subsequent designs, culminating in the Helios, which set the altitude record for a non-rocket-propelled aircraft at 29,524 metres (96,864 ft) in 2001.[89] The Zephyr, developed by BAE Systems, is the latest in a line of record-breaking solar aircraft, making a 54-hour flight in 2007; month-long flights were envisioned by 2010.[90] As of 2016, Solar Impulse, an electric aircraft, was circumnavigating the globe. It is a single-seat plane powered by solar cells and capable of taking off under its own power. The design allows the aircraft to remain airborne for several days.[91]
103
+
104
+ A solar balloon is a black balloon that is filled with ordinary air. As sunlight shines on the balloon, the air inside is heated and expands, causing an upward buoyancy force, much like an artificially heated hot air balloon. Some solar balloons are large enough for human flight, but usage is generally limited to the toy market as the surface-area to payload-weight ratio is relatively high.[92]
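+
+ The lift follows from the density difference between sun-heated and ambient air, which the ideal-gas law gives directly. A minimal sketch with illustrative temperatures and balloon size (assumptions, not figures from the article):
+
+ # Buoyant lift of a solar balloon from the ideal-gas density difference.
+ P = 101325.0         # Pa, sea-level pressure
+ R_SPECIFIC = 287.05  # J/(kg*K), specific gas constant of dry air
+
+ def air_density(temp_k):
+     return P / (R_SPECIFIC * temp_k)
+
+ def net_lift_kg(volume_m3, t_ambient_k, t_inside_k, envelope_kg):
+     buoyancy = (air_density(t_ambient_k) - air_density(t_inside_k)) * volume_m3
+     return buoyancy - envelope_kg
+
+ # A 100 m^3 balloon with air sun-heated 30 K above a 288 K ambient lifts ~6.6 kg,
+ # illustrating why the payload margin of small solar balloons is modest:
+ print(round(net_lift_kg(100.0, 288.0, 318.0, 5.0), 1))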
105
+
106
+ Solar chemical processes use solar energy to drive chemical reactions. These processes offset energy that would otherwise come from a fossil fuel source and can also convert solar energy into storable and transportable fuels. Solar induced chemical reactions can be divided into thermochemical or photochemical.[93] A variety of fuels can be produced by artificial photosynthesis.[94] The multielectron catalytic chemistry involved in making carbon-based fuels (such as methanol) from reduction of carbon dioxide is challenging; a feasible alternative is hydrogen production from protons, though use of water as the source of electrons (as plants do) requires mastering the multielectron oxidation of two water molecules to molecular oxygen.[95] Some have envisaged working solar fuel plants in coastal metropolitan areas by 2050 – the splitting of seawater providing hydrogen to be run through adjacent fuel-cell electric power plants and the pure water by-product going directly into the municipal water system.[96] Another vision involves all human structures covering the Earth's surface (i.e., roads, vehicles and buildings) doing photosynthesis more efficiently than plants.[97]
107
+
108
+ Hydrogen production technologies have been a significant area of solar chemical research since the 1970s. Aside from electrolysis driven by photovoltaic or photochemical cells, several thermochemical processes have also been explored. One such route uses concentrators to split water into oxygen and hydrogen at high temperatures (2,300–2,600 °C or 4,200–4,700 °F).[98] Another approach uses the heat from solar concentrators to drive the steam reformation of natural gas thereby increasing the overall hydrogen yield compared to conventional reforming methods.[99] Thermochemical cycles characterized by the decomposition and regeneration of reactants present another avenue for hydrogen production. The Solzinc process under development at the Weizmann Institute of Science uses a 1 MW solar furnace to decompose zinc oxide (ZnO) at temperatures above 1,200 °C (2,200 °F). This initial reaction produces pure zinc, which can subsequently be reacted with water to produce hydrogen.[100]
109
+
110
+ Thermal mass systems can store solar energy in the form of heat at domestically useful temperatures for daily or interseasonal durations. Thermal storage systems generally use readily available materials with high specific heat capacities such as water, earth and stone. Well-designed systems can lower peak demand, shift time-of-use to off-peak hours and reduce overall heating and cooling requirements.[101][102]
111
+
112
+ Phase change materials such as paraffin wax and Glauber's salt are another thermal storage medium. These materials are inexpensive, readily available, and can deliver domestically useful temperatures (approximately 64 °C or 147 °F). The "Dover House" (in Dover, Massachusetts) was the first to use a Glauber's salt heating system, in 1948.[103] Solar energy can also be stored at high temperatures using molten salts. Salts are an effective storage medium because they are low-cost, have a high specific heat capacity, and can deliver heat at temperatures compatible with conventional power systems. The Solar Two project used this method of energy storage, allowing it to store 1.44 terajoules (400,000 kWh) in its 68 m³ storage tank with an annual storage efficiency of about 99%.[104]
113
+
114
+ Off-grid PV systems have traditionally used rechargeable batteries to store excess electricity. With grid-tied systems, excess electricity can be sent to the transmission grid, while standard grid electricity can be used to meet shortfalls. Net metering programs give household systems credit for any electricity they deliver to the grid. This is handled by 'rolling back' the meter whenever the home produces more electricity than it consumes. If the net electricity use is below zero, the utility then rolls over the kilowatt-hour credit to the next month.[105] Other approaches involve the use of two meters, to measure electricity consumed vs. electricity produced. This is less common due to the increased installation cost of the second meter. Most standard meters accurately measure in both directions, making a second meter unnecessary.
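+
+ A toy simulation of the credit roll-over mechanism described above; the tariff logic and rate are hypothetical, since real net-metering rules vary by utility.
+
+ def monthly_bill(consumed_kwh, produced_kwh, credit_kwh, rate=0.15):
+     """Net metering with kWh credits rolled over month to month.
+     Returns (amount_due, credit_carried_forward). Hypothetical logic."""
+     net = consumed_kwh - produced_kwh - credit_kwh
+     if net <= 0:
+         return 0.0, -net          # surplus becomes next month's credit
+     return net * rate, 0.0
+
+ credit = 0.0
+ for month, (used, made) in enumerate([(600, 750), (650, 500)], start=1):
+     due, credit = monthly_bill(used, made, credit)
+     print(f"month {month}: due ${due:.2f}, credit {credit:.0f} kWh")
+ # month 1: due $0.00, credit 150 kWh
+ # month 2: due $0.00, credit 0 kWh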
115
+
116
+ Pumped-storage hydroelectricity stores energy by pumping water from a lower-elevation reservoir to a higher one when surplus energy is available. The energy is recovered when demand is high by releasing the water, with the pump then operating as a hydroelectric power generator.[106]
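+
+ The recoverable energy is simply gravitational potential energy scaled by a round-trip efficiency. A minimal sketch; the 75% efficiency is an assumed typical value, not from the source.
+
+ # Recoverable energy of a pumped-storage reservoir: E = m*g*h*efficiency
+ def recoverable_mwh(volume_m3, head_m, round_trip_efficiency=0.75):
+     mass = 1000.0 * volume_m3            # kg, water at 1000 kg/m^3
+     joules = mass * 9.81 * head_m * round_trip_efficiency
+     return joules / 3.6e9                # J -> MWh
+
+ # 1 million m^3 cycled over a 300 m head recovers ~613 MWh:
+ print(round(recoverable_mwh(1e6, 300.0)))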
117
+
118
+ Beginning with the surge in coal use, which accompanied the Industrial Revolution, energy consumption has steadily transitioned from wood and biomass to fossil fuels. The early development of solar technologies starting in the 1860s was driven by an expectation that coal would soon become scarce. However, development of solar technologies stagnated in the early 20th century in the face of the increasing availability, economy, and utility of coal and petroleum.[107]
119
+
120
+ The 1973 oil embargo and 1979 energy crisis caused a reorganization of energy policies around the world. It brought renewed attention to developing solar technologies.[108][109] Deployment strategies focused on incentive programs such as the Federal Photovoltaic Utilization Program in the US and the Sunshine Program in Japan. Other efforts included the formation of research facilities in the US (SERI, now NREL), Japan (NEDO), and Germany (Fraunhofer Institute for Solar Energy Systems ISE).[110]
121
+
122
+ Commercial solar water heaters began appearing in the United States in the 1890s.[111] These systems saw increasing use until the 1920s but were gradually replaced by cheaper and more reliable heating fuels.[112] As with photovoltaics, solar water heating attracted renewed attention as a result of the oil crises in the 1970s, but interest subsided in the 1980s due to falling petroleum prices. Development in the solar water heating sector progressed steadily throughout the 1990s, and annual growth rates have averaged 20% since 1999.[25] Although generally underestimated, solar water heating and cooling is by far the most widely deployed solar technology with an estimated capacity of 154 GW as of 2007.[25]
123
+
124
+ The International Energy Agency has said that solar energy can make considerable contributions to solving some of the most urgent problems the world now faces:[1]
125
+
126
+ The development of affordable, inexhaustible, and clean solar energy technologies will have huge longer-term benefits. It will increase countries' energy security through reliance on an indigenous, inexhaustible, and mostly import-independent resource, enhance sustainability, reduce pollution, lower the costs of mitigating climate change, and keep fossil fuel prices lower than otherwise. These advantages are global. Hence the additional costs of the incentives for early deployment should be considered learning investments; they must be wisely spent and need to be widely shared.[1]
127
+
128
+ In 2011, a report by the International Energy Agency found that solar energy technologies such as photovoltaics, solar hot water, and concentrated solar power could provide a third of the world's energy by 2060 if politicians commit to limiting climate change and transitioning to renewable energy. The energy from the Sun could play a key role in de-carbonizing the global economy alongside improvements in energy efficiency and imposing costs on greenhouse gas emitters. "The strength of solar is the incredible variety and flexibility of applications, from small scale to big scale".[113]
129
+
131
+
132
+ The International Organization for Standardization has established several standards relating to solar energy equipment. For example, ISO 9050 relates to glass in building, while ISO 10217 relates to the materials used in solar water heaters.
133
+
en/1757.html.txt ADDED
@@ -0,0 +1,229 @@
1
+
2
+
3
+
4
+
5
+ World electricity generation by source in 2017. Total generation was 26 PWh.[1]
6
+
7
+ Renewable energy is energy that is collected from renewable resources, which are naturally replenished on a human timescale, such as sunlight, wind, rain, tides, waves, and geothermal heat.[3] Renewable energy often provides energy in four important areas: electricity generation, air and water heating/cooling, transportation, and rural (off-grid) energy services.[4]
8
+
9
+ Based on REN21's 2017 report, renewables contributed 19.3% to humans' global energy consumption and 24.5% to their generation of electricity in 2015 and 2016, respectively. This energy consumption is divided as 8.9% coming from traditional biomass, 4.2% as heat energy (modern biomass, geothermal and solar heat), 3.9% from hydroelectricity and the remaining 2.2% is electricity from wind, solar, geothermal, and other forms of biomass. Worldwide investments in renewable technologies amounted to more than US$286 billion in 2015.[5] In 2017, worldwide investments in renewable energy amounted to US$279.8 billion with China accounting for US$126.6 billion or 45% of the global investments, the United States for US$40.5 billion and Europe for US$40.9 billion.[6] Globally there are an estimated 7.7 million jobs associated with the renewable energy industries, with solar photovoltaics being the largest renewable employer.[7] Renewable energy systems are rapidly becoming more efficient and cheaper and their share of total energy consumption is increasing.[8] As of 2019, more than two-thirds of worldwide newly installed electricity capacity was renewable.[9] Growth in consumption of coal and oil could end by 2020 due to increased uptake of renewables and natural gas.[10][11]
10
+
11
+ At the national level, at least 30 nations around the world already have renewable energy contributing more than 20 percent of energy supply. National renewable energy markets are projected to continue to grow strongly in the coming decade and beyond.[12]
12
+ Some places, and at least two countries, Iceland and Norway, already generate all their electricity using renewable energy, and many other countries have set a goal to reach 100% renewable energy in the future.[13]
13
+ At least 47 nations around the world already have over 50 percent of electricity from renewable resources.[14][15][16] Renewable energy resources exist over wide geographical areas, in contrast to fossil fuels, which are concentrated in a limited number of countries. Rapid deployment of renewable energy and energy efficiency technologies is resulting in significant energy security, climate change mitigation, and economic benefits.[17] In international public opinion surveys there is strong support for promoting renewable sources such as solar power and wind power.[18][19]
14
+
15
+ While many renewable energy projects are large-scale, renewable technologies are also suited to rural and remote areas and developing countries, where energy is often crucial in human development.[20] As most renewable energy technologies provide electricity, renewable energy deployment is often applied in conjunction with further electrification, which has several benefits: electricity can be converted to heat (where necessary generating higher temperatures than fossil fuels), can be converted into mechanical energy with high efficiency, and is clean at the point of consumption.[21][22] In addition, electrification with renewable energy is more efficient and therefore leads to significant reductions in primary energy requirements.[23]
16
+
17
+ Renewable energy flows involve natural phenomena such as sunlight, wind, tides, plant growth, and geothermal heat, as the International Energy Agency explains:[24]
18
+
19
+ Renewable energy is derived from natural processes that are replenished constantly. In its various forms, it derives directly from the sun, or from heat generated deep within the earth. Included in the definition is electricity and heat generated from solar, wind, ocean, hydropower, biomass, geothermal resources, and biofuels and hydrogen derived from renewable resources.
20
+
21
+ Renewable energy resources and significant opportunities for energy efficiency exist over wide geographical areas, in contrast to other energy sources, which are concentrated in a limited number of countries. Rapid deployment of renewable energy and energy efficiency, and technological diversification of energy sources, would result in significant energy security and economic benefits.[17] It would also reduce environmental pollution such as air pollution caused by burning of fossil fuels and improve public health, reduce premature mortalities due to pollution and save associated health costs that amount to several hundred billion dollars annually in the United States alone.[25] Renewable energy sources that derive their energy from the sun, either directly or indirectly, such as hydro and wind, are expected to be capable of supplying humanity with energy for almost another 1 billion years, at which point the predicted increase in heat from the Sun is expected to make the surface of the Earth too hot for liquid water to exist.[26][27][28]
22
+
23
+ Climate change and global warming concerns, coupled with the continuing fall in the costs of some renewable energy equipment, such as wind turbines and solar panels, are driving increased use of renewables.[18] New government spending, regulation and policies helped the industry weather the global financial crisis better than many other sectors.[29] As of 2019[update], however, according to the International Renewable Energy Agency, renewables' overall share in the energy mix (including power, heat and transport) needs to grow six times faster in order to keep the rise in average global temperatures "well below" 2.0 °C (3.6 °F) during the present century, compared to pre-industrial levels.[30]
24
+
25
+ As of 2011, small solar PV systems provide electricity to a few million households, and micro-hydro configured into mini-grids serves many more. Over 44 million households use biogas made in household-scale digesters for lighting and/or cooking, and more than 166 million households rely on a new generation of more-efficient biomass cookstoves.[31] [needs update] United Nations' eighth Secretary-General Ban Ki-moon has said that renewable energy has the ability to lift the poorest nations to new levels of prosperity.[32] At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of energy supply. National renewable energy markets are projected to continue to grow strongly in the coming decade and beyond, and some 120 countries have various policy targets for longer-term shares of renewable energy, including a 20% target of all electricity generated for the European Union by 2020. Some countries have much higher long-term policy targets of up to 100% renewables. Outside Europe, a diverse group of 20 or more other countries target renewable energy shares in the 2020–2030 time frame that range from 10% to 50%.[12]
26
+
27
+ Renewable energy often displaces conventional fuels in four areas: electricity generation, hot water/space heating, transportation, and rural (off-grid) energy services:[4]
28
+
29
+ Prior to the development of coal in the mid-19th century, nearly all energy used was renewable. Almost without a doubt the oldest known use of renewable energy, in the form of traditional biomass to fuel fires, dates from more than a million years ago. Use of biomass for fire did not become commonplace until many hundreds of thousands of years later.[37] Probably the second oldest usage of renewable energy is harnessing the wind in order to drive ships over water. This practice can be traced back some 7000 years, to ships in the Persian Gulf and on the Nile.[38] From hot springs, geothermal energy has been used for bathing since Paleolithic times and for space heating since ancient Roman times.[39] Moving into the time of recorded history, the primary sources of traditional renewable energy were human labor, animal power, water power, wind (in grain-crushing windmills), and firewood, a traditional biomass.
30
+
31
+ In the 1860s and 1870s there were already fears that civilization would run out of fossil fuels and the need was felt for a better source. In 1873 Professor Augustin Mouchot wrote:
32
+
33
+ The time will arrive when the industry of Europe will cease to find those natural resources, so necessary for it. Petroleum springs and coal mines are not inexhaustible but are rapidly diminishing in many places. Will man, then, return to the power of water and wind? Or will he emigrate where the most powerful source of heat sends its rays to all? History will show what will come.[40]
34
+
35
+ In 1885, Werner von Siemens, commenting on the discovery of the photovoltaic effect in the solid state, wrote:
36
+
37
+ In conclusion, I would say that however great the scientific importance of this discovery may be, its practical value will be no less obvious when we reflect that the supply of solar energy is both without limit and without cost, and that it will continue to pour down upon us for countless ages after all the coal deposits of the earth have been exhausted and forgotten.[41]
38
+
39
+ Max Weber mentioned the end of fossil fuel in the concluding paragraphs of his Die protestantische Ethik und der Geist des Kapitalismus (The Protestant Ethic and the Spirit of Capitalism), published in 1905.[42] Development of solar engines continued until the outbreak of World War I. The importance of solar energy was recognized in a 1911 Scientific American article: "in the far distant future, natural fuels having been exhausted [solar power] will remain as the only means of existence of the human race".[43]
40
+
41
+ The theory of peak oil was published in 1956.[44] In the 1970s environmentalists promoted the development of renewable energy both as a replacement for the eventual depletion of oil, as well as for an escape from dependence on oil, and the first electricity-generating wind turbines appeared. Solar had long been used for heating and cooling, but solar panels were too costly to build solar farms until 1980.[45]
42
+
43
+ In 2018, worldwide installed capacity of wind power was 564 GW.[47]
44
+
45
+ Air flow can be used to run wind turbines. Modern utility-scale wind turbines range from around 600 kW to 9 MW of rated power. The power available from the wind is a function of the cube of the wind speed, so as wind speed increases, power output increases up to the maximum output for the particular turbine.[48] Areas where winds are stronger and more constant, such as offshore and high-altitude sites, are preferred locations for wind farms. Typically, full load hours of wind turbines vary between 16 and 57 percent annually, but might be higher in particularly favorable offshore sites.[49]
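+
+ The cube relationship mentioned above comes from the standard wind-power equation P = ½·ρ·A·v³·Cp. A sketch of that textbook formula; the rotor size and power coefficient below are illustrative, not tied to any particular turbine in the article.
+
+ import math
+
+ def wind_power_kw(rotor_diameter_m, wind_speed_ms,
+                   air_density=1.225, power_coefficient=0.4):
+     """P = 0.5 * rho * A * v^3 * Cp (Cp is bounded by the Betz limit, ~0.593)."""
+     area = math.pi * (rotor_diameter_m / 2) ** 2
+     return 0.5 * air_density * area * wind_speed_ms**3 * power_coefficient / 1e3
+
+ # Doubling the wind speed yields eight times the power:
+ print(round(wind_power_kw(100.0, 6.0)))    # ~416 kW
+ print(round(wind_power_kw(100.0, 12.0)))   # ~3325 kW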
46
+
47
+ Wind-generated electricity met nearly 4% of global electricity demand in 2015, with nearly 63 GW of new wind power capacity installed. Wind energy was the leading source of new capacity in Europe, the US and Canada, and the second largest in China. In Denmark, wind energy met more than 40% of its electricity demand while Ireland, Portugal and Spain each met nearly 20%.
48
+
49
+ Globally, the long-term technical potential of wind energy is believed to be five times total current global energy production, or 40 times current electricity demand, assuming all practical barriers were overcome. This would require wind turbines to be installed over large areas, particularly in areas of higher wind resources, such as offshore. Because offshore wind speeds average about 90% higher than those over land, offshore resources can contribute substantially more energy than land-based turbines.[50]
50
+
51
+ In 2017, worldwide renewable hydropower capacity was 1,154 GW.[15]
52
+
53
+ Since water is about 800 times denser than air, even a slow flowing stream of water, or moderate sea swell, can yield considerable amounts of energy. Water energy takes many forms, including conventional hydroelectric dams, run-of-the-river systems, wave power, tidal power, and ocean thermal energy conversion.
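+
+ Conventional hydropower output follows the standard relation P = η·ρ·g·Q·h. A minimal sketch; the 90% turbine efficiency is an assumed typical value.
+
+ # Hydropower: P = efficiency * rho * g * flow * head
+ def hydro_power_mw(flow_m3s, head_m, efficiency=0.9):
+     return efficiency * 1000.0 * 9.81 * flow_m3s * head_m / 1e6
+
+ # 100 m^3/s through an 80 m head yields ~70.6 MW:
+ print(round(hydro_power_mw(100.0, 80.0), 1))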
54
+
55
+ Hydropower is produced in 150 countries, with the Asia-Pacific region generating 32 percent of global hydropower in 2010. Of the top 50 countries by share of electricity from renewables, most rely primarily on hydroelectricity. China is the largest hydroelectricity producer, with 721 terawatt-hours of production in 2010, representing around 17 percent of domestic electricity use. There are now three hydroelectricity stations larger than 10 GW: the Three Gorges Dam in China, the Itaipu Dam across the Brazil/Paraguay border, and the Guri Dam in Venezuela.[54]
56
+
57
+ Wave power, which captures the energy of ocean surface waves, and tidal power, converting the energy of tides, are two forms of hydropower with future potential; however, they are not yet widely employed commercially. A demonstration project operated by the Ocean Renewable Power Company on the coast of Maine, and connected to the grid, harnesses tidal power from the Bay of Fundy, location of the world's highest tidal flow. Ocean thermal energy conversion, which uses the temperature difference between cooler deep and warmer surface waters, is not yet economically feasible.[55][56]
58
+
59
+ In 2017, global installed solar capacity was 390 GW.[15]
60
+
61
+ Solar energy, radiant light and heat from the sun, is harnessed using a range of ever-evolving technologies such as solar heating, photovoltaics, concentrated solar power (CSP), concentrator photovoltaics (CPV), solar architecture and artificial photosynthesis.[58][59] Solar technologies are broadly characterized as either passive solar or active solar depending on the way they capture, convert, and distribute solar energy. Passive solar techniques include orienting a building to the Sun, selecting materials with favorable thermal mass or light dispersing properties, and designing spaces that naturally circulate air. Active solar technologies encompass solar thermal energy, using solar collectors for heating, and solar power, converting sunlight into electricity either directly using photovoltaics (PV), or indirectly using concentrated solar power (CSP).
62
+
63
+ A photovoltaic system converts light into electrical direct current (DC) by taking advantage of the photovoltaic effect.[60] Solar PV has turned into a multi-billion-dollar, fast-growing industry, continues to improve its cost-effectiveness, and, together with CSP, has the most potential of any renewable technology.[61][62] Concentrated solar power (CSP) systems use lenses or mirrors and tracking systems to focus a large area of sunlight into a small beam. Commercial concentrated solar power plants were first developed in the 1980s. CSP-Stirling has by far the highest efficiency among all solar energy technologies.
64
+
65
+ In 2011, the International Energy Agency said that "the development of affordable, inexhaustible and clean solar energy technologies will have huge longer-term benefits. It will increase countries' energy security through reliance on an indigenous, inexhaustible and mostly import-independent resource, enhance sustainability, reduce pollution, lower the costs of mitigating climate change, and keep fossil fuel prices lower than otherwise. These advantages are global. Hence the additional costs of the incentives for early deployment should be considered learning investments; they must be wisely spent and need to be widely shared".[58] Italy has the largest proportion of solar electricity in the world; in 2015, solar supplied 7.7% of electricity demand in Italy.[63] In 2017, after another year of rapid growth, solar generated approximately 2% of global power, or 460 TWh.[64]
66
+
67
+ Global geothermal capacity in 2017 was 12.9 GW.[15]
68
+
69
+ High temperature geothermal energy is from thermal energy generated and stored in the Earth. Thermal energy is the energy that determines the temperature of matter. Earth's geothermal energy originates from the original formation of the planet and from radioactive decay of minerals (in currently uncertain[65] but possibly roughly equal[66] proportions). The geothermal gradient, which is the difference in temperature between the core of the planet and its surface, drives a continuous conduction of thermal energy in the form of heat from the core to the surface. The adjective geothermal originates from the Greek roots geo, meaning earth, and thermos, meaning heat.
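+
+ The conductive flow driven by that gradient can be estimated with Fourier's law, q = k·dT/dz. The crustal conductivity and gradient below are typical textbook values, not figures from the article.
+
+ # Conductive geothermal heat flux via Fourier's law: q = k * dT/dz
+ def heat_flux_mw_m2(conductivity_w_mk=2.5, gradient_k_per_km=30.0):
+     return conductivity_w_mk * (gradient_k_per_km / 1000.0) * 1000.0  # mW/m^2
+
+ # Typical crustal conductivity (~2.5 W/m*K) and gradient (~30 K/km)
+ # give on the order of 75 mW/m^2 at the surface:
+ print(heat_flux_mw_m2())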
70
+
71
+ The heat that is used for geothermal energy can be from deep within the Earth, all the way down to Earth's core – 4,000 miles (6,400 km) down. At the core, temperatures may reach over 9,000 °F (5,000 °C). Heat conducts from the core to surrounding rock. Extremely high temperature and pressure cause some rock to melt, which is commonly known as magma. Magma convects upward since it is lighter than the solid rock. This magma then heats rock and water in the crust, sometimes up to 700 °F (371 °C).[67]
72
+
73
+ Low temperature geothermal[35] refers to the use of the outer crust of the Earth as a thermal battery to facilitate renewable thermal energy for heating and cooling buildings, and other refrigeration and industrial uses. In this form of geothermal, a geothermal heat pump and ground-coupled heat exchanger are used together to move heat energy into the Earth (for cooling) and out of the Earth (for heating) on a varying seasonal basis. Low temperature geothermal (generally referred to as "GHP") is an increasingly important renewable technology because it both reduces total annual energy loads associated with heating and cooling and flattens the electric demand curve by eliminating extreme summer and winter peak electric supply requirements. Thus low temperature geothermal/GHP is becoming an increasing national priority with multiple tax credit support[68] and focus as part of the ongoing movement toward net zero energy.[36]
74
+
75
+ Bioenergy global capacity in 2017 was 109 GW.[15]
76
+
77
+ Biomass is biological material derived from living, or recently living organisms. It most often refers to plants or plant-derived materials, which are specifically called lignocellulosic biomass.[69] As an energy source, biomass can either be used directly via combustion to produce heat, or indirectly after converting it to various forms of biofuel. Conversion of biomass to biofuel can be achieved by different methods, which are broadly classified into thermal, chemical, and biochemical methods. Wood remains the largest biomass energy source today;[70] examples include forest residues (such as dead trees, branches and tree stumps), yard clippings, wood chips and even municipal solid waste. In the second sense, biomass includes plant or animal matter that can be converted into fibers or other industrial chemicals, including biofuels. Industrial biomass can be grown from numerous types of plants, including miscanthus, switchgrass, hemp, corn, poplar, willow, sorghum, sugarcane, bamboo,[71] and a variety of tree species, ranging from eucalyptus to oil palm (palm oil).
78
+
79
+ Plant energy is produced by crops specifically grown for use as fuel that offer high biomass output per hectare with low input energy.[72] The grain can be used for liquid transportation fuels while the straw can be burned to produce heat or electricity. Plant biomass can also be degraded from cellulose to glucose through a series of chemical treatments, and the resulting sugar can then be used as a first generation biofuel.
80
+
81
+ Biomass can be converted to other usable forms of energy such as methane gas[73] or transportation fuels such as ethanol and biodiesel. Rotting garbage, and agricultural and human waste, all release methane gas, also called landfill gas or biogas. Crops such as corn and sugarcane can be fermented to produce the transportation fuel ethanol. Biodiesel, another transportation fuel, can be produced from left-over food products such as vegetable oils and animal fats.[74] Also, biomass-to-liquids (BTL) and cellulosic ethanol are still under research.[75][76] There is a great deal of research involving algal fuel or algae-derived biomass, because it is a non-food resource and can be produced at rates 5 to 10 times those of other types of land-based agriculture, such as corn and soy. Once harvested, it can be fermented to produce biofuels such as ethanol, butanol, and methane, as well as biodiesel and hydrogen. The biomass used for electricity generation varies by region. Forest by-products, such as wood residues, are common in the United States. Agricultural waste is common in Mauritius (sugar cane residue) and Southeast Asia (rice husks). Animal husbandry residues, such as poultry litter, are common in the United Kingdom.[77]
82
+
83
+ Biofuels include a wide range of fuels which are derived from biomass. The term covers solid, liquid, and gaseous fuels.[78] Liquid biofuels include bioalcohols, such as bioethanol, and oils, such as biodiesel. Gaseous biofuels include biogas, landfill gas and synthetic gas. Bioethanol is an alcohol made by fermenting the sugar components of plant materials and it is made mostly from sugar and starch crops. These include maize, sugarcane and, more recently, sweet sorghum. The latter crop is particularly suitable for growing in dryland conditions, and is being investigated by International Crops Research Institute for the Semi-Arid Tropics for its potential to provide fuel, along with food and animal feed, in arid parts of Asia and Africa.[79]
84
+
85
+ With advanced technology being developed, cellulosic biomass, such as trees and grasses, is also used as a feedstock for ethanol production. Ethanol can be used as a fuel for vehicles in its pure form, but it is usually used as a gasoline additive to increase octane and improve vehicle emissions. Bioethanol is widely used in the United States and in Brazil. The energy costs for producing bio-ethanol are almost equal to the energy yields from bio-ethanol. However, according to the European Environment Agency, biofuels do not address global warming concerns.[80] Biodiesel is made from vegetable oils, animal fats or recycled greases. It can be used as a fuel for vehicles in its pure form, or more commonly as a diesel additive to reduce levels of particulates, carbon monoxide, and hydrocarbons from diesel-powered vehicles. Biodiesel is produced from oils or fats using transesterification and is the most common biofuel in Europe. Biofuels provided 2.7% of the world's transport fuel in 2010.[81]
86
+
87
+ Biomass, biogas and biofuels are burned to produce heat/power and in doing so harm the environment. Pollutants such as sulphur oxides (SOx), nitrogen oxides (NOx), and particulate matter (PM) are produced from the combustion of biomass; the World Health Organisation estimates that 7 million premature deaths are caused each year by air pollution.[82] Biomass combustion is a major contributor.[82][83][84]
88
+
89
+ Renewable energy production from some sources such as wind and solar is more variable and more geographically spread than technology based on fossil fuels and nuclear. While integrating it into the wider energy system is feasible, it does lead to some additional challenges. In order for the energy system to remain stable, a set of measures can be taken. Implementation of energy storage, use of a wide variety of renewable energy technologies, and implementation of a smart grid in which energy is automatically used at the moment it is produced can reduce the risks and costs of renewable energy deployment.[85] In some locations, individual households can opt to purchase renewable energy through a consumer green energy program.
90
+
91
+ Electrical energy storage is a collection of methods used to store electrical energy. Electrical energy is stored during times when production (especially from intermittent sources such as wind power, tidal power and solar power) exceeds consumption, and returned to the grid when production falls below consumption. Pumped-storage hydroelectricity accounts for more than 90% of all grid power storage. Costs of lithium-ion batteries are dropping rapidly, and they are increasingly being deployed for grid ancillary services and for domestic storage.
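+
+ A toy dispatch loop illustrating how storage absorbs surplus renewable generation and returns it when production falls short. The greedy charge/discharge logic, battery size and hourly profiles are hypothetical.
+
+ def dispatch(generation_kw, load_kw, capacity_kwh=10.0, efficiency=0.9):
+     """Greedy hourly battery dispatch: charge on surplus, discharge on deficit.
+     Returns grid imports per hour (hypothetical sketch)."""
+     stored, imports = 0.0, []
+     for gen, load in zip(generation_kw, load_kw):
+         surplus = gen - load
+         if surplus >= 0:                       # charge, with losses
+             stored = min(capacity_kwh, stored + surplus * efficiency)
+             imports.append(0.0)
+         else:                                  # discharge to cover the deficit
+             draw = min(stored, -surplus)
+             stored -= draw
+             imports.append(-surplus - draw)    # remainder comes from the grid
+     return imports
+
+ # Solar-like generation against an evening-peaking load:
+ print(dispatch([0, 4, 6, 2, 0], [1, 2, 2, 3, 3]))
+ # [1.0, 0.0, 0.0, 0.0, 0.0]: only the first hour needs grid electricity.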
92
+
93
+ Renewable power has been more effective in creating jobs than coal or oil in the United States.[86] In 2016, employment in the renewable sector in the United States increased 6 percent, while employment in the non-renewable energy sector decreased 18 percent. Worldwide, renewables employed about 8.1 million people as of 2016.[87]
94
+
95
+ From the end of 2004, worldwide renewable energy capacity grew at rates of 10–60% annually for many technologies. In 2015 global investment in renewables rose 5% to $285.9 billion, breaking the previous record of $278.5 billion in 2011. 2015 was also the first year that saw renewables, excluding large hydro, account for the majority of all new power capacity (134 GW, making up 53.6% of the total). Of the renewables total, wind accounted for 72 GW and solar photovoltaics 56 GW; both record-breaking numbers and sharply up from 2014 figures (49 GW and 45 GW respectively). In financial terms, solar made up 56% of total new investment and wind accounted for 38%.
96
+
97
+ In 2014 global wind power capacity expanded 16% to 369,553 MW.[90] Yearly wind energy production is also growing rapidly and has reached around 4% of worldwide electricity usage,[91] 11.4% in the EU,[92] and it is widely used in Asia, and the United States. In 2015, worldwide installed photovoltaics capacity increased to 227 gigawatts (GW), sufficient to supply 1 percent of global electricity demands.[93] Solar thermal energy stations operate in the United States and Spain, and as of 2016, the largest of these is the 392 MW Ivanpah Solar Electric Generating System in California.[94][95] The world's largest geothermal power installation is The Geysers in California, with a rated capacity of 750 MW. Brazil has one of the largest renewable energy programs in the world, involving production of ethanol fuel from sugar cane, and ethanol now provides 18% of the country's automotive fuel. Ethanol fuel is also widely available in the United States.
98
+
99
+ In 2017, investments in renewable energy amounted to US$279.8 billion worldwide, with China accounting for US$126.6 billion or 45% of the global investments, the US for US$40.5 billion, and Europe for US$40.9 billion.[6] The results of a recent review of the literature concluded that as greenhouse gas (GHG) emitters begin to be held liable for damages resulting from GHG emissions resulting in climate change, a high value for liability mitigation would provide powerful incentives for deployment of renewable energy technologies.[96]
100
+
101
+ Renewable energy technologies are getting cheaper, through technological change and through the benefits of mass production and market competition. A 2018 report from the International Renewable Energy Agency (IRENA) found that the cost of renewable energy is quickly falling, and will likely be equal to or less than the cost of non-renewables such as fossil fuels by 2020. The report found that solar power costs had dropped 73% since 2010 and onshore wind costs had dropped by 23% in the same timeframe.[106]
102
+
103
+ Current projections concerning the future cost of renewables vary, however. The EIA has predicted that almost two-thirds of net additions to power capacity will come from renewables by 2020, due to the combined policy benefits of reducing local pollution, decarbonisation and energy diversification.
104
+
105
+ According to a 2018 report by Bloomberg New Energy Finance, wind and solar power are expected to generate roughly 50% of the world's energy needs by 2050, while coal powered electricity plants are expected to drop to just 11%.[107]
106
+ Hydro-electricity and geothermal electricity produced at favourable sites are now the cheapest ways to generate electricity. Renewable energy costs continue to drop, and the levelised cost of electricity (LCOE) is declining for wind power, solar photovoltaic (PV), concentrated solar power (CSP) and some biomass technologies.[108] Renewable energy is also the most economic solution for new grid-connected capacity in areas with good resources. As the cost of renewable power falls, the scope of economically viable applications increases. Renewable technologies are now often the most economic solution for new generating capacity. Where "oil-fired generation is the predominant power generation source (e.g. on islands, off-grid and in some countries) a lower-cost renewable solution almost always exists today".[108] A series of studies by the US National Renewable Energy Laboratory modeled the "grid in the Western US under a number of different scenarios where intermittent renewables accounted for 33 percent of the total power." In the models, inefficiencies in cycling the fossil fuel plants to compensate for the variation in solar and wind energy resulted in an additional cost of "between $0.47 and $1.28 to each MegaWatt hour generated"; however, the savings in the cost of the fuels saved "adds up to $7 billion, meaning the added costs are, at most, two percent of the savings."[109]
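+
+ The LCOE metric cited above discounts lifetime costs against lifetime generation. A compact sketch of the standard definition; the plant figures (capex, O&M, capacity factor, discount rate) are illustrative assumptions, not data from the sources.
+
+ def lcoe(capex, annual_opex, annual_mwh, years=25, discount_rate=0.07):
+     """Levelised cost of electricity: discounted costs / discounted output."""
+     costs = capex
+     energy = 0.0
+     for t in range(1, years + 1):
+         factor = (1 + discount_rate) ** t
+         costs += annual_opex / factor
+         energy += annual_mwh / factor
+     return costs / energy      # currency units per MWh
+
+ # A hypothetical 100 MW solar farm: $80M capex, $1M/yr O&M, 25% capacity factor.
+ annual_mwh = 100 * 8760 * 0.25
+ print(round(lcoe(80e6, 1e6, annual_mwh), 2))   # ~35.9 $/MWh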
107
+
108
+ In 2017 the world renewable hydropower capacity was 1,154 GW.[15] Only a quarter of the world's estimated hydroelectric potential of 14,000 TWh/year has been developed. The regional potentials for the growth of hydropower around the world are 71% in Europe, 75% in North America, 79% in South America, 95% in Africa, 95% in the Middle East and 82% in Asia Pacific. However, the political realities of new reservoirs in western countries, economic limitations in the third world and the lack of a transmission system in undeveloped areas result in the possibility of developing 25% of the remaining potential before 2050, with the bulk of that being in the Asia Pacific area.[110] There is slow growth taking place in Western countries,[citation needed] but not in the conventional dam and reservoir style of the past. New projects take the form of run-of-the-river and small hydro, neither using large reservoirs. It is popular to repower old dams, thereby increasing their efficiency and capacity as well as quicker responsiveness on the grid.[111] Where circumstances permit, existing dams such as the Russell Dam built in 1985 may be updated with "pump back" facilities for pumped-storage, which is useful for peak loads or to support intermittent wind and solar power. Countries with large hydroelectric developments such as Canada and Norway are spending billions to expand their grids to trade with neighboring countries having limited hydro.[112]
109
+
110
+ Wind power is widely used in Europe, China, and the United States. From 2004 to 2017, worldwide installed capacity of wind power grew from 47 GW to 514 GW—a more than tenfold increase within 13 years.[15] As of the end of 2014, China, the United States and Germany combined accounted for half of total global capacity.[90] Several other countries have achieved relatively high levels of wind power penetration, such as 21% of stationary electricity production in Denmark, 18% in Portugal, 16% in Spain, and 14% in Ireland in 2010, and have since continued to expand their installed capacity.[113][114] More than 80 countries around the world are using wind power on a commercial basis.[81]
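+
+ A quick arithmetic check of the growth figures above (only the 47 GW, 514 GW and 13-year values come from the text; the compound-growth framing is illustrative):
+
+ # Growth implied by 47 GW -> 514 GW of installed wind capacity in 13 years.
+ start_gw, end_gw, years = 47, 514, 13
+ ratio = end_gw / start_gw            # ~10.9, i.e. "more than tenfold"
+ cagr = ratio ** (1 / years) - 1      # compound annual growth rate, ~20%
+ print(f"{ratio:.1f}x overall, {cagr:.0%} per year")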
111
+
112
+ Wind turbines are increasing in power, with some commercially deployed models generating over 8 MW per turbine.[115][116][117] More powerful models are in development; see the list of most powerful wind turbines.
113
+
114
+ Solar thermal energy capacity has increased from 1.3 GW in 2012 to 5.0 GW in 2017.[15]
115
+
116
+ Spain is the world leader in solar thermal power deployment, with 2.3 GW deployed.[15] The United States has 1.8 GW,[15] most of it in California, where 1.4 GW of solar thermal power projects are operational.[121] Several power plants have been constructed in the Mojave Desert, Southwestern United States. As of 2017 only four other countries have deployments above 100 MW:[15] South Africa (300 MW), India (229 MW), Morocco (180 MW) and the United Arab Emirates (100 MW).
117
+
118
+ The United States conducted much early research in photovoltaics and concentrated solar power. The U.S. is among the top countries in the world in electricity generated by the Sun and several of the world's largest utility-scale installations are located in the desert Southwest.
119
+
120
+ The oldest solar thermal power plant in the world is the 354 megawatt (MW) SEGS thermal power plant, in California.[122] The Ivanpah Solar Electric Generating System is a solar thermal power project in the California Mojave Desert, 40 miles (64 km) southwest of Las Vegas, with a gross capacity of 377 MW.[123] The 280 MW Solana Generating Station is a solar power plant near Gila Bend, Arizona, about 70 miles (110 km) southwest of Phoenix, completed in 2013. When commissioned it was the largest parabolic trough plant in the world and the first U.S. solar plant with molten salt thermal energy storage.[124]
121
+
122
+ In developing countries, three World Bank projects for integrated solar thermal/combined-cycle gas-turbine power plants in Egypt, Mexico, and Morocco have been approved.[125]
123
+
124
+ Worldwide growth of PV capacity grouped by region in MW (2006–2014)
125
+
126
+ Photovoltaics (PV) is growing rapidly, with global capacity increasing from 177 GW at the end of 2014 to 385 GW in 2017.[15]
127
+
128
+ PV uses solar cells assembled into solar panels to convert sunlight into electricity. PV systems range from small residential and commercial rooftop or building-integrated installations to large utility-scale photovoltaic power stations. The predominant PV technology is crystalline silicon, while thin-film solar cell technology accounts for about 10 percent of global photovoltaic deployment. In recent years, PV technology has improved its electricity generating efficiency, reduced the installation cost per watt as well as its energy payback time, and reached grid parity in at least 30 different markets by 2014.[126]
129
+ Building-integrated photovoltaics or "onsite" PV systems use existing land and structures and generate power close to where it is consumed.[127]
130
+
131
+ Photovoltaics grew fastest in China, followed by Japan and the United States. Italy meets 7.9 percent of its electricity demand with photovoltaic power—the highest share worldwide.[128] Solar power is forecast to become the world's largest source of electricity by 2050, with solar photovoltaics and concentrated solar power contributing 16% and 11%, respectively. This requires an increase of installed PV capacity to 4,600 GW, of which more than half is expected to be deployed in China and India.[129]
132
+
133
+ Commercial concentrated solar power plants were first developed in the 1980s. As the cost of solar electricity has fallen, the number of grid-connected solar PV systems has grown into the millions, and utility-scale solar power stations with hundreds of megawatts are being built. Many solar photovoltaic power stations have been built, mainly in Europe, China and the United States.[130] The 1.5 GW Tengger Desert Solar Park in China is the world's largest PV power station. Many of these plants are integrated with agriculture, and some use tracking systems that follow the sun's daily path across the sky to generate more electricity than fixed-mounted systems.
134
+
135
+ Bioenergy global capacity in 2017 was 109 GW.[15]
136
+ Biofuels provided 3% of the world's transport fuel in 2017.[131]
137
+
138
+ Mandates for blending biofuels exist in 31 countries at the national level and in 29 states/provinces.[81] According to the International Energy Agency, biofuels have the potential to meet more than a quarter of world demand for transportation fuels by 2050.[132]
139
+
140
+ Since the 1970s, Brazil has had an ethanol fuel program which has allowed the country to become the world's second largest producer of ethanol (after the United States) and the world's largest exporter.[133] Brazil's ethanol fuel program uses modern equipment and cheap sugarcane as feedstock, and the residual cane-waste (bagasse) is used to produce heat and power.[134] There are no longer light vehicles in Brazil running on pure gasoline. By the end of 2008 there were 35,000 filling stations throughout Brazil with at least one ethanol pump.[135] Operation Car Wash has, however, seriously eroded public trust in oil companies and has implicated several high-ranking Brazilian officials.
141
+
142
+ Nearly all the gasoline sold in the United States today is mixed with 10% ethanol,[136] and motor vehicle manufacturers already produce vehicles designed to run on much higher ethanol blends. Ford, Daimler AG, and GM are among the automobile companies that sell "flexible-fuel" cars, trucks, and minivans that can use gasoline and ethanol blends ranging from pure gasoline up to 85% ethanol. By mid-2006, there were approximately 6 million ethanol compatible vehicles on U.S. roads.[137]
143
+
144
+ Global geothermal capacity in 2017 was 12.9 GW.[15]
145
+
146
+ Geothermal power is cost effective, reliable, sustainable, and environmentally friendly,[138] but has historically been limited to areas near tectonic plate boundaries. Recent technological advances have expanded the range and size of viable resources, especially for applications such as home heating, opening a potential for widespread exploitation. Geothermal wells release greenhouse gases trapped deep within the earth, but these emissions are usually much lower per energy unit than those of fossil fuels. As a result, geothermal power has the potential to help mitigate global warming if widely deployed in place of fossil fuels.
147
+
148
+ In 2017, the United States led the world in geothermal electricity production, with the largest installed capacity of any country.[15] The largest group of geothermal power plants in the world is located at The Geysers, a geothermal field in California.[139] The Philippines follows the US as the second highest producer of geothermal power in the world, with 1.9 GW of capacity online.[15]
149
+
150
+ Renewable energy technology has sometimes been seen by critics as a costly luxury, affordable only in the affluent developed world. This erroneous view persisted for many years; however, between 2016 and 2017, investments in renewable energy were higher in developing countries than in developed countries, with China leading global investment with a record 126.6 billion dollars. Many Latin American and African countries increased their investments significantly as well.[140]
151
+ Renewable energy can be particularly suitable for developing countries. In rural and remote areas, transmission and distribution of energy generated from fossil fuels can be difficult and expensive. Producing renewable energy locally can offer a viable alternative.[141]
152
+
153
+ Technology advances are opening up a huge new market for solar power: the approximately 1.3 billion people around the world who do not have access to grid electricity. Even though they are typically very poor, these people have to pay far more for lighting than people in rich countries because they use inefficient kerosene lamps. Solar power costs half as much as lighting with kerosene.[142] As of 2010, an estimated 3 million households got power from small solar PV systems.[143] Kenya is the world leader in the number of solar power systems installed per capita. More than 30,000 very small solar panels, each producing 12 to 30 watts, are sold in Kenya annually.[144] Some Small Island Developing States (SIDS) are also turning to solar power to reduce their costs and increase their sustainability.
154
+
155
+ Micro-hydro configured into mini-grids also provides power. Over 44 million households use biogas made in household-scale digesters for lighting and/or cooking, and more than 166 million households rely on a new generation of more-efficient biomass cookstoves.[31] Clean liquid fuels sourced from renewable feedstocks are used for cooking and lighting in energy-poor areas of the developing world. Alcohol fuels (ethanol and methanol) can be produced sustainably from non-food sugary, starchy, and cellulosic feedstocks. Project Gaia, Inc. and CleanStar Mozambique are implementing clean cooking programs with liquid ethanol stoves in Ethiopia, Kenya, Nigeria and Mozambique.[145]
156
+
157
+ Renewable energy projects in many developing countries have demonstrated that renewable energy can directly contribute to poverty reduction by providing the energy needed for creating businesses and employment. Renewable energy technologies can also make indirect contributions to alleviating poverty by providing energy for cooking, space heating, and lighting. Renewable energy can also contribute to education, by providing electricity to schools.[146]
158
+
159
+ Policies to support renewable energy have been vital to its expansion. Where Europe dominated in establishing energy policy in the early 2000s, most countries around the world now have some form of energy policy.[147]
160
+
161
+ The International Renewable Energy Agency (IRENA) is an intergovernmental organization for promoting the adoption of renewable energy worldwide. It aims to provide concrete policy advice and to facilitate capacity building and technology transfer. IRENA was formed in 2009 by 75 countries signing its charter.[149] As of April 2019, IRENA has 160 member states.[150] The then United Nations Secretary-General Ban Ki-moon said that renewable energy has the ability to lift the poorest nations to new levels of prosperity,[32] and in September 2011 he launched the UN Sustainable Energy for All initiative to improve energy access, efficiency and the deployment of renewable energy.[151]
162
+
163
+ The 2015 Paris Agreement on climate change motivated many countries to develop or improve renewable energy policies.[12] By 2017, a total of 121 countries had adopted some form of renewable energy policy,[147] and national targets existed in 176 countries.[12] In addition, there is a wide range of policies at state/provincial and local levels.[81] Some public utilities help plan or install residential energy upgrades. Under President Barack Obama, United States policy encouraged the uptake of renewable energy in line with commitments to the Paris Agreement. Even though Trump has abandoned these goals, renewable investment is still on the rise.[152]
164
+
165
+ Many national, state, and local governments have created green banks. A green bank is a quasi-public financial institution that uses public capital to leverage private investment in clean energy technologies.[153] Green banks use a variety of financial tools to bridge market gaps that hinder the deployment of clean energy. The US military has also focused on the use of renewable fuels for military vehicles. Unlike fossil fuels, renewable fuels can be produced in any country, creating a strategic advantage. The US military has already committed itself to have 50% of its energy consumption come from alternative sources.[154]
166
+
167
+ The incentive to use 100% renewable energy, for electricity, transport, or even total primary energy supply globally, has been motivated by global warming and other ecological as well as economic concerns. The Intergovernmental Panel on Climate Change has said that there are few fundamental technological limits to integrating a portfolio of renewable energy technologies to meet most of total global energy demand. Renewable energy use has grown much faster than even advocates anticipated.[155] At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of their energy supply. Professors S. Pacala and Robert H. Socolow have also developed a series of "stabilization wedges" that could allow us to maintain our quality of life while avoiding catastrophic climate change; "renewable energy sources", in aggregate, constitute the largest number of their "wedges".[156]
168
+
169
+ Using 100% renewable energy was first suggested in a Science paper published in 1975 by Danish physicist Bent Sørensen.[157] It was followed by several other proposals, until the first detailed analyses of scenarios with very high shares of renewables were published in 1998, followed in turn by the first detailed 100% scenarios. In 2006 Czisch published a PhD thesis showing that in a 100% renewable scenario energy supply could match demand in every hour of the year in Europe and North Africa. In the same year the Danish energy professor Henrik Lund published a first paper[158] addressing the optimal combination of renewables, which was followed by several other papers on the transition to 100% renewable energy in Denmark. Lund has since published several further papers on 100% renewable energy. After 2009 publications began to rise steeply, covering 100% scenarios for countries in Europe, America, Australia and other parts of the world.[159]
170
+
171
+ In 2011 Mark Z. Jacobson, professor of civil and environmental engineering at Stanford University, and Mark Delucchi published a study on 100% renewable global energy supply in the journal Energy Policy. They found that producing all new energy with wind power, solar power, and hydropower by 2030 is feasible, and that existing energy supply arrangements could be replaced by 2050. Barriers to implementing the renewable energy plan are seen to be "primarily social and political, not technological or economic".[160] They also found that energy costs with a wind, solar, water system should be similar to today's energy costs.[161]
172
+
173
+ Similarly, in the United States, the independent National Research Council has noted that "sufficient domestic renewable resources exist to allow renewable electricity to play a significant role in future electricity generation and thus help confront issues related to climate change, energy security, and the escalation of energy costs … Renewable energy is an attractive option because renewable resources available in the United States, taken collectively, can supply significantly greater amounts of electricity than the total current or projected domestic demand."[162]
174
+
175
+ The most significant barriers to the widespread implementation of large-scale renewable energy and low carbon energy strategies are primarily political and not technological.[163][164] According to the 2013 Post Carbon Pathways report, which reviewed many international studies, the key roadblocks are: climate change denial, the fossil fuels lobby, political inaction, unsustainable energy consumption, outdated energy infrastructure, and financial constraints.[165]
176
+
177
+ According to the World Bank, the "below 2°C" climate scenario requires 3 billion tonnes of metals and minerals by 2050. The supply of mined resources such as zinc, molybdenum, silver, nickel and copper must increase by up to 500%.[166] A 2018 analysis estimated the required increases in metal stocks for various sectors at between 1,000% (wind power) and 87,000% (personal vehicle batteries).[167]
178
+
179
+ Other renewable energy technologies are still under development, and include cellulosic ethanol, hot-dry-rock geothermal power, and marine energy.[168] These technologies are not yet widely demonstrated or have limited commercialization. Many are on the horizon and may have potential comparable to other renewable energy technologies, but still depend on attracting sufficient attention and research, development and demonstration (RD&D) funding.[168]
180
+
181
+ There are numerous organizations within the academic, federal, and commercial sectors conducting large scale advanced research in the field of renewable energy. This research spans several areas of focus across the renewable energy spectrum. Most of the research is targeted at improving efficiency and increasing overall energy yields.[169]
182
+ Multiple federally supported research organizations have focused on renewable energy in recent years. Two of the most prominent of these labs are Sandia National Laboratories and the National Renewable Energy Laboratory (NREL), both of which are funded by the United States Department of Energy and supported by various corporate partners.[170] Sandia has a total budget of $2.4 billion[171] while NREL has a budget of $375 million.[172]
183
+
184
+ Collection of static electricity charges from water droplets on metal surfaces is an experimental technology that would be especially useful in low-income countries with relative air humidity over 60%.[203]
185
+
186
+ Renewable electricity production from sources such as wind power and solar power is intermittent, which results in reduced capacity factors and requires either energy storage or complementary base load power sources based on fossil fuels or nuclear power.
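+
+ The capacity factor mentioned above is a simple ratio: energy actually delivered over a period divided by what the nameplate rating could deliver in that period. A minimal sketch with assumed, illustrative numbers (not figures from the cited sources):
+
+ # Capacity factor of a hypothetical 100 MW wind farm over one year.
+ nameplate_mw = 100
+ energy_mwh = 262_800      # energy actually delivered during the year
+ hours = 8_760             # hours in a non-leap year
+ capacity_factor = energy_mwh / (nameplate_mw * hours)
+ print(f"capacity factor = {capacity_factor:.0%}")   # -> 30%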
187
+
188
+ Since the power density per land area of renewable energy sources is at best three orders of magnitude smaller than that of fossil or nuclear power,[204] renewable power plants tend to occupy thousands of hectares, causing environmental concerns and opposition from local residents, especially in densely populated countries. Solar power plants compete with arable land and nature reserves,[205] while on-shore wind farms face opposition due to aesthetic concerns and noise, which impacts both humans and wildlife.[206][207][208][209] In the United States, the Massachusetts Cape Wind project was delayed for years partly because of aesthetic concerns. However, residents in other areas have been more positive. According to a town councilor, the overwhelming majority of locals believe that the Ardrossan Wind Farm in Scotland has enhanced the area.[210] These concerns, when directed against renewable energy, are sometimes described as the "not in my back yard" (NIMBY) attitude.
189
+
190
+ A recent[when?] UK Government document states that "projects are generally more likely to succeed if they have broad public support and the consent of local communities. This means giving communities both a say and a stake".[211] In countries such as Germany and Denmark many renewable projects are owned by communities, particularly through cooperative structures, and contribute significantly to overall levels of renewable energy deployment.[212][213]
191
+
192
+ The market for renewable energy technologies has continued to grow. Climate change concerns and growth in green jobs, coupled with high oil prices, peak oil, oil wars, oil spills, the promotion of electric vehicles and renewable electricity, nuclear disasters and increasing government support, are driving increasing renewable energy legislation, incentives and commercialization.[18] New government spending, regulation and policies helped the industry weather the 2009 economic crisis better than many other sectors.[29]
193
+
194
+ While renewables have been very successful in their ever-growing contribution to electrical power, no country dominated by fossil fuels has a plan to stop using them and obtain that power from renewables. Only Scotland and Ontario have stopped burning coal, largely due to good natural gas supplies. In the area of transportation, fossil fuels are even more entrenched and solutions harder to find.[214] It is unclear whether the failures lie with policy or with renewable energy itself, but twenty years after the Kyoto Protocol fossil fuels are still our primary energy source and consumption continues to grow.[215]
195
+
196
+ The International Energy Agency has stated that deployment of renewable technologies usually increases the diversity of electricity sources and, through local generation, contributes to the flexibility of the system and its resistance to central shocks.[216]
197
+
198
+ From around 2010 onwards, there was increasing discussion about the geopolitical impact of the growing use of renewable energy.[217] It was argued that former fossil fuel exporters would experience a weakening of their position in international affairs, while countries with abundant sunshine, wind, hydropower, or geothermal resources would be strengthened.[218] Countries rich in critical materials for renewable energy technologies were also expected to rise in importance in international affairs.[219]
199
+
200
+ The GeGaLo index of geopolitical gains and losses assesses how the geopolitical position of 156 countries may change if the world fully transitions to renewable energy resources. Former fossil fuel exporters are expected to lose power, while the positions of former fossil fuel importers and countries rich in renewable energy resources are expected to strengthen.[220]
201
+
202
+ The ability of biomass and biofuels to contribute to a reduction in CO2 emissions is limited because both biomass and biofuels emit large amounts of air pollution when burned and in some cases compete with food supply. Furthermore, biomass and biofuels consume large amounts of water.[221] Other renewable sources such as wind power, photovoltaics, and hydroelectricity have the advantage of being able to conserve water, lower pollution and reduce CO2 emissions.
203
+ The installations used to produce wind, solar and hydro power are an increasing threat to key conservation areas, with facilities built in areas set aside for nature conservation and other environmentally sensitive areas. They are often much larger than fossil fuel power plants, needing areas of land up to 10 times greater than coal or gas to produce equivalent energy amounts.[222] More than 2,000 renewable energy facilities have been built, and more are under construction, in areas of environmental importance, and they threaten the habitats of plant and animal species across the globe. The authors' team emphasized that their work should not be interpreted as anti-renewables, because renewable energy is crucial for reducing carbon emissions. The key is ensuring that renewable energy facilities are built in places where they do not damage biodiversity.[223]
204
+
205
+ Renewable energy devices depend on non-renewable resources such as mined metals and use vast amounts of land due to their small surface power density. Manufacturing photovoltaic panels, wind turbines and batteries requires significant amounts of rare-earth elements[224] and increases mining operations, which have social and environmental impacts.[225] Due to the co-occurrence of rare-earth and radioactive elements (thorium, uranium and radium), rare-earth mining results in the production of low-level radioactive waste.[226]
206
+
207
+ Solar panels change the albedo of the surface, which increases their contribution to global warming.[227]
208
+
209
+ Burbo, NW-England
210
+
211
+ Sunrise at the Fenton Wind Farm in Minnesota, US
212
+
213
+ The CSP-station Andasol in Andalusia, Spain
214
+
215
+ Ivanpah solar plant in the Mojave Desert, California, United States
216
+
217
+ Three Gorges Dam and Gezhouba Dam, China
218
+
219
+ Shop selling PV panels in Ouagadougou, Burkina Faso
220
+
221
+ Stump harvesting increases recovery of biomass from forests
222
+
223
+ A small, roof-top mounted PV system in Bonn, Germany
224
+
225
+ The community-owned Westmill Solar Park in South East England
226
+
227
+ Komekurayama photovoltaic power station in Kofu, Japan
228
+
229
+ Krafla, a geothermal power station in Iceland
en/1758.html.txt ADDED
@@ -0,0 +1,103 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Thermal energy refers to several distinct physical concepts, such as the internal energy of a system; heat or sensible heat, which are defined as types of energy transfer (as is work); or the characteristic energy of a degree of freedom in a thermal system, $kT$, where $T$ is temperature and $k$ is the Boltzmann constant.
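+
+ For a sense of scale, a worked example (the 300 K room temperature is an assumed value; the constant is the standard CODATA value):
+
+ kT = \left(1.380649 \times 10^{-23}\ \mathrm{J/K}\right) \times \left(300\ \mathrm{K}\right) \approx 4.14 \times 10^{-21}\ \mathrm{J} \approx 25.9\ \mathrm{meV}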
30
+
31
+ In thermodynamics, heat is energy in transfer to or from a thermodynamic system, by mechanisms other than thermodynamic work or transfer of matter.[1][2][3] Heat refers to a quantity transferred between systems, not to a property of any one system, or 'contained' within it.[4] On the other hand, internal energy is a property of a single system. Heat and work depend on the way in which an energy transfer occurred, whereas internal energy is a property of the state of a system and can thus be understood without knowing how the energy got there.
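+
+ This path/state distinction is what the first law of thermodynamics expresses. In the common sign convention (an assumption of this restatement), with $Q$ the heat added to the system and $W$ the work done by the system:
+
+ \Delta U = Q - W
+
+ $Q$ and $W$ each depend on how the process is carried out, but their difference $\Delta U$ depends only on the initial and final states.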
32
+
33
+ In a statistical mechanical account of an ideal gas, in which the molecules move independently between instantaneous collisions, the internal energy is the sum total of the gas's independent particles' kinetic energies, and it is this kinetic motion that is the source and the effect of the transfer of heat across a system's boundary. For a gas that does not have particle interactions except for instantaneous collisions, the term 'thermal energy' is effectively synonymous with 'internal energy'. In many statistical physics texts, "thermal energy" refers to $kT$, the product of the Boltzmann constant and the absolute temperature, also written as $k_{\text{B}}T$.[5] In a material, especially in condensed matter, such as a liquid or a solid, in which the constituent particles, such as molecules or ions, interact strongly with one another, the energies of such interactions contribute strongly to the internal energy of the body, but are not simply apparent in the temperature.
59
+
60
+ The term 'thermal energy' is also applied to the energy carried by a heat flow,[6] although this can also simply be called heat or quantity of heat.
61
+
62
+ In an 1847 lecture titled "On Matter, Living Force, and Heat", James Prescott Joule characterised various terms that are closely related to thermal energy and heat. He identified the terms latent heat and sensible heat as forms of heat each affecting distinct physical phenomena, namely the potential and kinetic energy of particles, respectively.[7] He described latent energy as the energy of interaction in a given configuration of particles, i.e. a form of potential energy, and sensible heat as an energy whose effect on temperature is registered by the thermometer, arising from the thermal energy, which he called the living force.
63
+
64
+ If the minimum temperature of a system's environment is $T_{\text{e}}$ and the system's entropy is $S$, then a part of the system's internal energy amounting to $S \cdot T_{\text{e}}$ cannot be converted into useful work. This is the difference between the internal energy and the Helmholtz free energy.
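+
+ In formulas (a minimal restatement, assuming the system itself sits at the environment temperature $T_{\text{e}}$, so that the standard definition of the Helmholtz free energy applies directly):
+
+ F = U - T_{\text{e}} S \qquad \Longrightarrow \qquad U - F = T_{\text{e}} S
+
+ Here $F$ is the portion of the internal energy $U$ that remains available for useful work once the unavailable part $T_{\text{e}} S$ is subtracted.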
en/1759.html.txt ADDED
@@ -0,0 +1,103 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Thermal energy refers to several distinct physical concepts, such as the internal energy of a system; heat or sensible heat, which are defined as types of energy transfer (as is work); or the characteristic energy of a degree of freedom in a thermal system, $kT$, where $T$ is temperature and $k$ is the Boltzmann constant.
30
+
31
+ In thermodynamics, heat is energy in transfer to or from a thermodynamic system, by mechanisms other than thermodynamic work or transfer of matter.[1][2][3] Heat refers to a quantity transferred between systems, not to a property of any one system, or 'contained' within it.[4] On the other hand, internal energy is a property of a single system. Heat and work depend on the way in which an energy transfer occurred, whereas internal energy is a property of the state of a system and can thus be understood without knowing how the energy got there.
32
+
33
+ In a statistical mechanical account of an ideal gas, in which the molecules move independently between instantaneous collisions, the internal energy is the sum total of the gas's independent particles' kinetic energies, and it is this kinetic motion that is the source and the effect of the transfer of heat across a system's boundary. For a gas that does not have particle interactions except for instantaneous collisions, the term 'thermal energy' is effectively synonymous with 'internal energy'. In many statistical physics texts, "thermal energy" refers to $kT$, the product of the Boltzmann constant and the absolute temperature, also written as $k_{\text{B}}T$.[5] In a material, especially in condensed matter, such as a liquid or a solid, in which the constituent particles, such as molecules or ions, interact strongly with one another, the energies of such interactions contribute strongly to the internal energy of the body, but are not simply apparent in the temperature.
59
+
60
+ The term 'thermal energy' is also applied to the energy carried by a heat flow,[6] although this can also simply be called heat or quantity of heat.
61
+
62
+ In an 1847 lecture titled "On Matter, Living Force, and Heat", James Prescott Joule characterised various terms that are closely related to thermal energy and heat. He identified the terms latent heat and sensible heat as forms of heat each affecting distinct physical phenomena, namely the potential and kinetic energy of particles, respectively.[7] He described latent energy as the energy of interaction in a given configuration of particles, i.e. a form of potential energy, and sensible heat as an energy whose effect on temperature is registered by the thermometer, arising from the thermal energy, which he called the living force.
63
+
64
+ If the minimum temperature of a system's environment is $T_{\text{e}}$ and the system's entropy is $S$, then a part of the system's internal energy amounting to $S \cdot T_{\text{e}}$ cannot be converted into useful work. This is the difference between the internal energy and the Helmholtz free energy.
en/176.html.txt ADDED
@@ -0,0 +1,75 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ The viola (/viˈoʊlə/ vee-OH-lə,[1][2] also UK: /vaɪˈoʊlə/ vy-OH-lə,[3][4][a] Italian: [ˈvjɔːla, viˈɔːla]) is a string instrument that is bowed, plucked, or played with varying techniques. It is slightly larger than a violin and has a lower and deeper sound. Since the 18th century, it has been the middle or alto voice of the violin family, between the violin (which is tuned a perfect fifth above) and the cello (which is tuned an octave below).[5] The strings from low to high are typically tuned to C3, G3, D4, and A4.
2
+
3
+ In the past, the viola varied in size and style, as did its names. The word viola originates from the Italian language. The Italians often used the term viola da braccio, meaning literally 'of the arm'. "Brazzo" was another Italian word for the viola, which the Germans adopted as Bratsche. The French had their own names: cinquiesme was a small viola, haute contre was a large viola, and taille was a tenor. Today, the French use the term alto, a reference to its range.
4
+
5
+ The viola was popular in the heyday of five-part harmony, up until the eighteenth century, taking three lines of the harmony and occasionally playing the melody line. Music for the viola differs from most other instruments in that it primarily uses the alto clef. When viola music has substantial sections in a higher register, it switches to the treble clef to make it easier to read.
6
+
7
+ The viola often plays the "inner voices" in string quartets and symphonic writing, and it is more likely than the first violin to play accompaniment parts. The viola occasionally plays a major, soloistic role in orchestral music. Examples include the symphonic poem, "Don Quixote", by Richard Strauss, and the symphony, "Harold en Italie", by Hector Berlioz. In the earlier part of the 20th century, more composers began to write for the viola, encouraged by the emergence of specialized soloists such as Lionel Tertis and William Primrose. English composers Arthur Bliss, York Bowen, Benjamin Dale, Frank Bridge, Benjamin Britten, Rebecca Clarke and Ralph Vaughan Williams all wrote substantial chamber and concert works. Many of these pieces were commissioned by, or written for Lionel Tertis. William Walton, Bohuslav Martinů, Toru Takemitsu, Tibor Serly, Alfred Schnittke, and Béla Bartók have written well-known viola concertos. Paul Hindemith, who was a violist, wrote a substantial amount of music for viola, including the concerto, "Der Schwanendreher". The concerti by Béla Bartók, Paul Hindemith, Carl Stamitz, Georg Philipp Telemann, and William Walton are considered major works of the viola repertoire.
8
+
9
+ The viola is similar in material and construction to the violin. A full-size viola's body is between 25 mm (1 in) and 100 mm (4 in) longer than the body of a full-size violin (i.e., between 38 and 46 cm [15–18 in]), with an average length of 41 cm (16 in). Small violas made for children typically start at 30 cm (12 in), which is equivalent to a half-size violin. For a child who needs a smaller size, a fractional-sized violin is often strung with the strings of a viola.[6] Unlike the violin, the viola does not have a standard full size. The body of a viola would need to measure about 51 cm (20 in) long to match the acoustics of a violin, making it impractical to play in the same manner as the violin.[7] For centuries, viola makers have experimented with the size and shape of the viola, often adjusting proportions or shape to make a lighter instrument with shorter string lengths, but with a large enough sound box to retain the viola sound. Prior to the eighteenth century, violas had no uniform size. Large violas (tenors) were designed to play the lower register viola lines or second viola in five-part harmony, depending on instrumentation. A smaller viola, nearer the size of the violin, was called an alto viola. It was more suited to higher register writing, as in the viola 1 parts, as its sound was usually richer in the upper register; its size was not as conducive to a full tone in the lower register.
10
+
11
+ Several experiments have intended to increase the size of the viola to improve its sound. Hermann Ritter's viola alta, which measured about 48 cm (19 in), was intended for use in Wagner's operas.[8] The Tertis model viola, which has wider bouts and deeper ribs to promote a better tone, is another slightly "nonstandard" shape that allows the player to use a larger instrument. Many experiments with the acoustics of a viola, particularly increasing the size of the body, have resulted in a much deeper tone, making it resemble the tone of a cello. Since many composers wrote for a traditional-sized viola, particularly in orchestral music, changes in the tone of a viola can have unintended consequences upon the balance in ensembles.
12
+
13
+ One of the most notable makers of violas of the twentieth century was Englishman A. E. Smith, whose violas are sought after and highly valued. Many of his violas remain in Australia, his country of residence, where during some decades the violists of the Sydney Symphony Orchestra had a dozen of them in their section.
14
+
15
+ More recent (and more radically shaped) innovations have addressed the ergonomic problems associated with playing the viola by making it shorter and lighter, while finding ways to keep the traditional sound. These include the Otto Erdesz "cutaway" viola, which has one shoulder cut out to make shifting easier;[9] the "Oak Leaf" viola, which has two extra bouts; viol-shaped violas such as Joseph Curtin's "Evia" model, which also uses a moveable neck and maple-veneered carbon fibre back to reduce weight;[10] violas played in the same manner as cellos (see vertical viola); and the eye-catching "Dalí-esque" shapes of both Bernard Sabatier's violas in fractional sizes—which appear to have melted—and David Rivinus' Pellegrina model violas.[11]
16
+
17
+ Other experiments that deal with the "ergonomics vs. sound" problem have appeared. The American composer Harry Partch fitted a viola with a cello neck to allow the use of his 43-tone scale. Luthiers have also created five-stringed violas, which allow a greater playing range.
18
+
19
+ A person who plays the viola is called a violist or a viola player. The technique required for playing a viola has certain differences compared with that of a violin, partly because of its larger size: the notes are spread out farther along the fingerboard and often require different fingerings. The viola's less responsive strings and the heavier bow warrant a somewhat different bowing technique, and a violist has to lean more intensely on the strings.[12]
20
+
21
+ The viola's four strings are normally tuned in fifths: the lowest string is C (an octave below middle C), with G, D and A above it. This tuning is exactly one fifth below the violin,[14] so that they have three strings in common—G, D, and A—and is one octave above the cello.
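+
+ Because each open string lies a perfect fifth below the next, all four frequencies follow from the tuning of the A string. A minimal sketch (the A4 = 440 Hz reference and the comparison of acoustically pure fifths with equal temperament are illustrative assumptions, not prescriptions from this article):
+
+ # Viola open-string frequencies derived from A4 by descending fifths.
+ a4 = 440.0
+ pure = [a4]
+ for _ in range(3):                 # A4 -> D4 -> G3 -> C3
+     pure.append(pure[-1] * 2 / 3)  # a pure fifth has the ratio 3:2
+ equal = [a4 * 2 ** (-7 * i / 12) for i in range(4)]  # a fifth = 7 semitones
+ for name, p, e in zip(["A4", "D4", "G3", "C3"], pure, equal):
+     print(f"{name}: pure {p:6.2f} Hz, equal-tempered {e:6.2f} Hz")
+
+ Tuning by ear with beatless double-stops gives the pure values (C3 ≈ 130.37 Hz), while matching a piano gives the slightly different equal-tempered ones (C3 ≈ 130.81 Hz).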
22
+
23
+ Each string of a viola is wrapped around a peg near the scroll and is tuned by turning the peg. Tightening the string raises the pitch; loosening the string lowers the pitch. The A string is normally tuned first, typically to a pitch of 440 Hz or 442 Hz. The other strings are then tuned to it in intervals of perfect fifths, sometimes by bowing two strings simultaneously. Most violas also have adjusters—fine tuners—that make finer changes. These adjust the tension of the string by rotating a small knob at the opposite (tailpiece) end of the string. Such tuning is generally easier to learn than using the pegs, and adjusters are usually recommended for younger players and put on smaller violas, though pegs and adjusters are usually used together. Adjusters work best, and are most useful, on metal strings. It is common to use one on the A string, which is most prone to breaking, even if the others are not equipped with them. Some violists reverse the stringing of the C and G pegs, so that the thicker C string does not turn so severe an angle over the nut, although this is uncommon.
24
+
25
+ Small, temporary tuning adjustments can also be made by stretching a string with the hand. A string may be tuned down by pulling it above the fingerboard, or tuned up by pressing the part of the string in the pegbox. These techniques may be useful in performance, reducing the ill effects of an out-of-tune string until an opportunity to tune properly.
26
+
27
+ The tuning C–G–D–A is used for the great majority of all viola music. However, other tunings are occasionally employed, both in classical music, where the technique is known as scordatura, and in some folk styles. Mozart, in his Sinfonia Concertante for Violin, Viola and Orchestra in E♭, wrote the viola part in D major, and specified that the violist raise the strings in pitch by a semitone. He probably intended to give the viola a brighter tone so the rest of the ensemble wouldn't overpower it. Lionel Tertis, in his transcription of the Elgar cello concerto, wrote the slow movement with the C string tuned down to B♭, enabling the viola to play one passage an octave lower. Occasionally the C string may also be tuned up to D.
28
+
29
+ A renewal of interest in the viola by performers and composers in the twentieth century led to increased research devoted to the instrument. Paul Hindemith and Vadim Borisovsky made an early attempt at an organization, in 1927, with the Violists' World Union. But it was not until 1968, with the creation of the Viola-Forschungsgesellschaft, now the International Viola Society (IVS), that a lasting organization took hold.[citation needed] The IVS now consists of twelve chapters around the world, the largest being the American Viola Society (AVS), which publishes the Journal of the American Viola Society. In addition to the journal, the AVS sponsors the David Dalton Research Competition and the Primrose International Viola Competition.
30
+
31
+ The 1960s also saw the beginning of several research publications devoted to the viola, beginning with Franz Zeyringer's, "Literatur für Viola", which has undergone several versions, the most recent being in 1985. In 1980, Maurice Riley produced the first attempt at a comprehensive history of the viola, in his History of the Viola, which was followed with a second volume in 1991. The IVS published the multi-language Viola Yearbook from 1979 to 1994, during which several other national chapters of the IVS published respective newsletters. The Primrose International Viola Archive at Brigham Young University houses the greatest amount of material related to the viola, including scores, recordings, instruments, and archival materials from some of the world's greatest violists.[citation needed]
32
+
33
+ Music that is written for the viola differs from that of most other instruments, in that it primarily uses the alto clef, which is otherwise rarely used. The trombone occasionally uses the alto clef, but not primarily. (The comparatively rare alto trombone primarily uses the alto clef.) Viola music employs the treble clef when there are substantial sections of music written in a higher register. The alto clef is defined by the placement of C4 on the center line of the staff. In treble clef, this note is placed one line below the staff and in the bass clef (used, notably, by the cello and double bass) it is placed one line above.[15]
34
+
35
+ As the viola is tuned exactly one octave above the cello (meaning that the viola retains the same string notes as the cello, but an octave up), pieces written for the cello can be easily transposed to the alto clef. For example, there are numerous editions of Bach's Cello Suites transcribed for viola that retain the original key, notes, and musical patterns. The viola also has the advantage of shorter strings, which means that the intervals meant for cello are easier to play on the viola.
36
+
37
+ In early orchestral music, the viola part was usually limited to filling in harmonies, with very little melodic material assigned to it. When the viola was given a melodic part, it often duplicated (or was in unison with) the melody played by other strings.
38
+
39
+ The concerti grossi, "Brandenburg Concerti", composed by J. S. Bach, were unusual in their use of viola. The third concerto grosso, scored for three violins, three violas, and lower strings with basso continuo, requires occasional virtuosity from the violists. The sixth concerto grosso, "Brandenburg Concerto No. 6", which was scored for 2 violas "concertino", cello, 2 violas da gamba, and continuo, had the two violas playing the primary melodic role.[16] He also used this unusual ensemble in his cantata, Gleichwie der Regen und Schnee vom Himmel fällt, BWV 18 and in Mein Herze schwimmt im Blut, BWV 199, the chorale is accompanied by an obbligato viola.
40
+
41
+ There are a few Baroque and Classical concerti, such as those by Georg Philipp Telemann (one of the earliest viola concertos known), Alessandro Rolla, Franz Anton Hoffmeister and Carl Stamitz. Hector Berlioz's, "Harold in Italy", was written for solo viola and orchestra.
42
+
43
+ The viola plays an important role in chamber music. Mozart used the viola in more creative ways when he wrote his six string quintets. The quintets use two violas, which frees them (especially the first viola) for solo passages and increases the variety of writing that is possible for the ensemble. Mozart also wrote for the viola in his, "Sinfonia Concertante", a set of two duets for violin and viola, and the Kegelstatt Trio for viola, clarinet, and piano. The young Felix Mendelssohn wrote a little-known Viola Sonata in C minor (without opus number, but dating from 1824). Robert Schumann wrote his Märchenbilder for viola and piano. He also wrote a set of four pieces for clarinet, viola, and piano, Märchenerzählungen.
44
+
45
+ Max Bruch wrote a romance for viola and orchestra, his Op. 85, which explores the emotive capabilities of the viola's timbre. In addition, his Eight pieces for clarinet, viola, and piano, Op. 83, features the viola in a very prominent, solo aspect throughout. His Concerto for Clarinet, Viola, and Orchestra, Op. 88 has been quite prominent in the repertoire and has been recorded by prominent violists throughout the 20th century.
46
+
47
+ From his earliest works, Brahms wrote music that prominently featured the viola. Among his first published pieces of chamber music, the sextets for strings Op. 18 and Op. 36 contain what amounts to solo parts for both violas. Late in life, he wrote two greatly admired sonatas for clarinet and piano, his Op. 120 (1894): he later transcribed these works for the viola (the solo part in his horn trio is also available in a transcription for viola). Brahms also wrote "Two Songs for Alto with Viola and Piano", Op. 91, "Gestillte Sehnsucht" ("Satisfied Longing") and "Geistliches Wiegenlied" ("Spiritual Lullaby") as presents for the famous violinist Joseph Joachim and his wife, Amalie. Dvořák played the viola and apparently said that it was his favorite instrument: his chamber music is rich in important parts for the viola. Another Czech composer, Bedřich Smetana, included a significant viola part in his quartet "From My Life": the quartet begins with an impassioned statement by the viola. Bach, Mozart and Beethoven all occasionally played the viola part in chamber music.
48
+
49
+ The viola occasionally has a major role in orchestral music, a prominent example being Richard Strauss' tone poem Don Quixote for solo cello and viola and orchestra. Other examples are the "Ysobel" variation of Edward Elgar's Enigma Variations and the solo in his other work, In the South (Alassio), the pas de deux scene from Act 2 of Adolphe Adam's Giselle and the "La Paix" movement of Léo Delibes's ballet Coppélia, which features a lengthy viola solo.
50
+
51
+ Gabriel Fauré's Requiem was originally scored (in 1888) with divided viola sections, lacking the usual violin sections, having only a solo violin for the Sanctus. It was later scored for orchestra with violin sections, and published in 1901. Recordings of the older scoring with violas are available.
52
+
53
+ While the viola repertoire is quite large, the amount written by well-known pre-20th-century composers is relatively small. There are many transcriptions of works for other instruments for the viola and the large number of 20th-century compositions is very diverse. See "The Viola Project" at the San Francisco Conservatory of Music, where Professor of Viola Jodi Levitz has paired a composer with each of her students, resulting in a recital of brand-new works played for the very first time.
54
+
55
+ In the earlier part of the 20th century, more composers began to write for the viola, encouraged by the emergence of specialized soloists such as Lionel Tertis. Englishmen Arthur Bliss, York Bowen, Benjamin Dale, and Ralph Vaughan Williams all wrote chamber and concert works for Tertis. William Walton, Bohuslav Martinů, and Béla Bartók wrote well-known viola concertos. Paul Hindemith wrote a substantial amount of music for the viola; being himself a violist, he often performed his own works. Claude Debussy's Sonata for flute, viola and harp has inspired a significant number of other composers to write for this combination.
56
+
57
+ Charles Wuorinen composed his virtuosic Viola Variations in 2008 for Lois Martin. Elliott Carter also wrote several works for viola including his Elegy (1943) for viola and piano; it was subsequently transcribed for clarinet. Ernest Bloch, a Swiss-born American composer best known for his compositions inspired by Jewish music, wrote two famous works for viola, the Suite 1919 and the Suite Hébraïque for solo viola and orchestra. Rebecca Clarke was a 20th-century composer and violist who also wrote extensively for the viola. Lionel Tertis records that Edward Elgar (whose cello concerto Tertis transcribed for viola, with the slow movement in scordatura), Alexander Glazunov (who wrote an Elegy, Op. 44, for viola and piano), and Maurice Ravel all promised concertos for viola, yet all three died before doing any substantial work on them.
58
+
59
+ In the latter part of the 20th century a substantial repertoire was produced for the viola; many composers including Miklós Rózsa, Revol Bunin, Alfred Schnittke, Sofia Gubaidulina, Giya Kancheli and Krzysztof Penderecki, have written viola concertos. The American composer Morton Feldman wrote a series of works entitled The Viola in My Life, which feature concertante viola parts. In spectral music, the viola has been sought after because of its lower overtone partials that are more easily heard than on the violin. Spectral composers like Gérard Grisey, Tristan Murail, and Horațiu Rădulescu have written solo works for viola. Neo-Romantic, post-Modern composers have also written significant works for viola including Robin Holloway Viola Concerto op.56 and Sonata op.87, and Peter Seabourne a large five movement work with piano, Pietà.
60
+
61
+ The viola is sometimes used in contemporary popular music, mostly in the avant-garde. John Cale of The Velvet Underground used the viola, as do some modern groups such as alternative rock band 10,000 Maniacs, Imagine Dragons, folk duo John & Mary, Flobots, British Sea Power, Quargs (Mya) Greene of Love Ghost and others. Jazz music has also seen its share of violists, from those used in string sections in the early 1900s to a handful of quartets and soloists emerging from the 1960s onward. It is quite unusual, though, to use individual bowed string instruments in contemporary popular music.
62
+
63
+ Although not as commonly used as the violin in folk music, the viola is nevertheless used by many folk musicians across the world. Extensive research into the historical and current use of the viola in folk music has been carried out by Dr. Lindsay Aitkenhead. Players in this genre include Eliza Carthy, Mary Ramsey, Helen Bell, and Nancy Kerr. Clarence "Gatemouth" Brown was the viola's most prominent exponent in the genre of blues.
64
+
65
+ The viola is also an important accompaniment instrument in Slovakian, Hungarian and Romanian folk string band music, especially in Transylvania. Here the instrument has three strings tuned G3–D4–A3 (note that the A is an octave lower than found on the standard instrument), and the bridge is flattened with the instrument playing chords in a strongly rhythmic manner. In this usage, it is called a kontra or brácsa (pronounced "bra-cha", from German Bratsche, "viola").
66
+
67
+ There are few well-known viola virtuoso soloists, perhaps because little virtuoso viola music was written before the twentieth century. Pre-twentieth century viola players of note include Carl Stamitz, Alessandro Rolla, Antonio Rolla, Chrétien Urhan, Casimir Ney, Louis van Waefelghem, and Hermann Ritter. Important viola pioneers from the twentieth century were Lionel Tertis, William Primrose, composer/performer Paul Hindemith, Théophile Laforge, Cecil Aronowitz, Maurice Vieux, Vadim Borisovsky, Lillian Fuchs, Dino Asciolla, Frederick Riddle, Walter Trampler, Ernst Wallfisch, Csaba Erdélyi, the only violist to ever win the Carl Flesch International Violin Competition, and Emanuel Vardi, the first violist to record the 24 Caprices by Paganini on viola. Many noted violinists have publicly performed and recorded on the viola as well, among them Eugène Ysaÿe, Yehudi Menuhin, David Oistrakh, Pinchas Zukerman, Maxim Vengerov, Julian Rachlin and Nigel Kennedy.
68
+
69
+ Among the great composers, several preferred the viola to the violin when they were playing in ensembles,[18] the most noted being Ludwig van Beethoven, Johann Sebastian Bach[19] and Wolfgang Amadeus Mozart. Other composers also chose to play the viola in ensembles, including Joseph Haydn, Franz Schubert, Felix Mendelssohn, Antonín Dvořák, and Benjamin Britten. Among those noted both as violists and as composers are Rebecca Clarke and Paul Hindemith. Contemporary composers and violists Kenji Bunch, Scott Slapin, and Lev Zhurbin have written a number of works for viola.
70
+
71
+ Amplifying a viola with a pickup, an instrument amplifier (and speaker), and adjusting the tone with a graphic equalizer can make up for the comparatively weaker output of a violin-family instrument string tuned to notes below G3. There are two types of instruments used for electric viola: regular acoustic violas fitted with a piezoelectric pickup and specialized electric violas, which have little or no body. While traditional acoustic violas are typically only available in historically used earth tones (e.g., brown, reddish brown, blonde), electric violas may be traditional colors or they may use bright colors, such as red, blue or green. Some electric violas are made of materials other than wood.
72
+
73
+ Most electric instruments with lower strings are violin-sized, as they use the amp and speaker to create a big sound, so they do not need a large soundbox. Indeed, some electric violas have little or no soundbox, and thus rely entirely on amplification. Fewer electric violas are available than electric violins, and it can be hard for violists who prefer the physical size or familiar touch references of a viola-sized instrument when they must use an electric viola with a smaller violin-sized body. Welsh musician John Cale, formerly of The Velvet Underground, is one of the more famous users of such an electric viola; he has used them both for melodies in his solo work and for drones in his work with The Velvet Underground (e.g. "Venus in Furs"). Other notable players of the electric viola are Geoffrey Richardson of Caravan and Mary Ramsey of 10,000 Maniacs.
74
+
75
+ Instruments may be built with an internal preamplifier, or may put out an unbuffered transducer signal. While such signals may be fed directly to an amplifier or mixing board, they often benefit from an external preamp/equalizer on the end of a short cable, before being fed to the sound system. In rock and other loud styles, the electric viola player may use effects units such as reverb or overdrive.
en/1760.html.txt ADDED
@@ -0,0 +1,34 @@

Biologically, a child (plural children) is a human being between the stages of birth and puberty,[1][2] or between the developmental period of infancy and puberty.[3] The legal definition of child generally refers to a minor, otherwise known as a person younger than the age of majority.[1] Children generally have fewer rights and less responsibility than adults. They are classed as unable to make serious decisions, and legally must be under the care of their parents or another responsible caregiver.

Child may also describe a relationship with a parent (such as sons and daughters of any age)[4] or, metaphorically, an authority figure, or signify group membership in a clan, tribe, or religion; it can also signify being strongly affected by a specific time, place, or circumstance, as in "a child of nature" or "a child of the Sixties".[5]

Biologically, a child is a person between birth and puberty,[1][2] or between the developmental period of infancy and puberty.[3] Legally, the term child may refer to anyone below the age of majority or some other age limit.

The United Nations Convention on the Rights of the Child defines child as "a human being below the age of 18 years unless under the law applicable to the child, majority is attained earlier".[6] The Convention has been ratified by 192 of 194 member countries. The term child may also refer to someone below another legally defined age limit unconnected to the age of majority. In Singapore, for example, a child is legally defined as someone under the age of 14 under the "Children and Young Persons Act", whereas the age of majority is 21.[7][8] In U.S. immigration law, a child refers to anyone who is under the age of 21.[9]

Some English definitions of the word child include the fetus (sometimes termed the unborn).[10] In many cultures, a child is considered an adult after undergoing a rite of passage, which may or may not correspond to the time of puberty.

Children generally have fewer rights than adults and are classed as unable to make serious decisions, and legally must always be under the care of a responsible adult or in child custody, whether or not their parents divorce. Recognition of childhood as a state different from adulthood began to emerge in the 16th and 17th centuries. Society began to relate to the child not as a miniature adult but as a person of a lower level of maturity needing adult protection, love and nurturing. This change can be traced in paintings: in the Middle Ages, children were portrayed in art as miniature adults with no childlike characteristics. In the 16th century, images of children began to acquire a distinct childlike appearance. From the late 17th century onwards, children were shown playing with toys, and literature for children also began to develop at this time.[11]

According to Professor Peter Jones of Cambridge University, development of the brain continues long past legal definitions of adulthood, so "to have a definition of when you move from childhood to adulthood looks increasingly absurd. It's a much more nuanced transition that takes place over three decades."[12] Children go through stages of social development. Children learn initially through play, and later, in most societies, through formal schooling. As a child grows, they learn how to do some tasks in chronological order. They learn how to prioritize their goals and actions. Their behavior develops as they learn new perspectives from other people. They learn how to represent certain things symbolically and learn new behavior.[13]

Children with ADHD and learning disabilities may need extra help to develop social skills. The impulsive characteristics of an ADHD child may lead to poor peer relationships. Children with poor attention spans may not tune into social cues in their environment, making it difficult for them to learn social skills through experience.[14] Health issues affecting children are generally managed separately from those affecting adults, by pediatricians.

The age at which children are considered responsible for their society-bound actions (e.g., marriage, voting) has also changed over time, and this is reflected in the way they are treated in courts of law. In Roman times, children were regarded as not culpable for crimes, a position later adopted by the Church. In the 19th century, children younger than seven years old were believed incapable of crime. Children from the age of seven onward were considered responsible for their actions. Therefore, they could face criminal charges, be sent to adult prison, and be punished like adults by whipping, branding or hanging. However, courts at the time would consider the offender's age when deliberating sentencing.[15] Minimum employment age and marriage age also vary. The age limit of voluntary/involuntary military service is also disputed at the international level.[16]

During the early 17th century in England, about two-thirds of all children died before the age of four.[18] During the Industrial Revolution, the life expectancy of children increased dramatically.[19] This improvement has continued in England, and in the 21st century child mortality rates have fallen across the world. About 12.6 million children under five died worldwide in 1990; this declined to 6.6 million in 2012. The infant mortality rate dropped from 90 deaths per 1,000 live births in 1990 to 48 in 2012. The highest average infant mortality rates are in sub-Saharan Africa, at 98 deaths per 1,000 live births – over double the world's average.[17]

Education, in the general sense, refers to the act or process of imparting or acquiring general knowledge, developing the powers of reasoning and judgment, and preparing intellectually for mature life.[20] Formal education most often takes place through schooling. A right to education has been recognized by some governments. At the global level, Article 13 of the United Nations' 1966 International Covenant on Economic, Social and Cultural Rights recognizes the right of everyone to an education.[21] Education is compulsory in most places up to a certain age, but attendance at school may not be, with alternative options such as home-schooling or e-learning being recognized as valid forms of education in certain jurisdictions.

Children in some countries (especially in parts of Africa and Asia) are often kept out of school, or attend only for short periods. Data from UNICEF indicate that in 2011, 57 million children were out of school, and more than 20% of African children have never attended primary school or have left without completing primary education.[22] According to a UN report, warfare is preventing 28 million children worldwide from receiving an education, due to the risk of sexual violence and attacks in schools.[23] Other factors that keep children out of school include poverty, child labor, social attitudes, and long distances to school.[24][25]

Social attitudes toward children differ around the world in various cultures and change over time. A 1988 study on European attitudes toward the centrality of children found that Italy was more child-centric and the Netherlands less child-centric, with other countries, such as Austria, Great Britain, Ireland and West Germany, falling in between.[26]

In 2013, child marriage rates of female children under the age of 18 reached 75% in Niger, 68% in the Central African Republic and Chad, 66% in Bangladesh, and 47% in India.[27] According to a 2019 UNICEF report on child marriage, 37% of females were married before the age of 18 in sub-Saharan Africa, followed by South Asia at 30%. Lower levels were found in Latin America and the Caribbean (25%), the Middle East and North Africa (18%), and Eastern Europe and Central Asia (11%), while rates in Western Europe and North America were minimal.[28] Child marriage is more prevalent among girls, but also involves boys. A 2018 study in the journal Vulnerable Children and Youth Studies found that, worldwide, 4.5% of males are married before age 18, with the Central African Republic having the highest average rate at 27.9%.[29]

Protection of children from abuse is considered an important contemporary goal. This includes protecting children from exploitation such as child labor, child trafficking and selling, child sexual abuse, including child prostitution and child pornography, military use of children, and child laundering in illegal adoptions. There exist several international instruments for these purposes, such as:

Emergencies and conflicts pose detrimental risks to the health, safety, and well-being of children. There are many different kinds of conflicts and emergencies, e.g. wars and natural disasters. As of 2010, approximately 13 million children were displaced by armed conflicts and violence around the world.[30] Where violent conflicts are the norm, the lives of young children are significantly disrupted and their families have great difficulty in offering the sensitive and consistent care that young children need for their healthy development.[30] Studies on the effect of emergencies and conflict on the physical and mental health of children between birth and 8 years old show that where the disaster is natural, rates of PTSD occur in anywhere from 3 to 87 percent of affected children.[31] However, rates of PTSD for children living in chronic conflict conditions vary from 15 to 50 percent.[32][33]

This article incorporates text from a free content work licensed under CC-BY-SA IGO 3.0. License statement: Investing against Evidence: The Global State of Early Childhood Care and Education, 118–125, Marope, P.T.M., Kaga, Y., UNESCO.

This article incorporates text from a free content work licensed under CC-BY-SA IGO 3.0. License statement: Creating sustainable futures for all; Global education monitoring report, 2016; Gender review, 20, UNESCO.
en/1763.html.txt ADDED
@@ -0,0 +1,157 @@

In religion and folklore, Hell is an afterlife location in which evil souls are subjected to punitive suffering, often torture, as eternal punishment after death. Religions with a linear divine history often depict hells as eternal destinations, the most notable examples being Christianity and Islam, whereas religions with reincarnation usually depict a hell as an intermediary period between incarnations, as is the case in the dharmic religions. Religions typically locate hell in another dimension or under Earth's surface. Other afterlife destinations include Heaven, Paradise, Purgatory, Limbo, and the underworld.

Other religions, which do not conceive of the afterlife as a place of punishment or reward, merely describe an abode of the dead, the grave, a neutral place that is located under the surface of Earth (for example, see Kur, Hades, and Sheol). Such places are sometimes equated with the English word hell, though a more correct translation would be "underworld" or "world of the dead". The ancient Mesopotamian, Greek, Roman, and Finnic religions include entrances to the underworld from the land of the living.

The modern English word hell is derived from Old English hel, helle (first attested around 725 AD to refer to a nether world of the dead), reaching into the Anglo-Saxon pagan period.[1] The word has cognates in all branches of the Germanic languages, including Old Norse hel (which refers to both a location and a goddess-like being in Norse mythology), Old Frisian helle, Old Saxon hellia, Old High German hella, and Gothic halja. All forms ultimately derive from the reconstructed Proto-Germanic feminine noun *xaljō or *haljō ('concealed place, the underworld'). In turn, the Proto-Germanic form derives from the o-grade form of the Proto-Indo-European root *kel-, *kol-: 'to cover, conceal, save'.[2] Indo-European cognates include Latin cēlāre ("to hide", related to the English word cellar) and early Irish ceilid ("hides"). Upon the Christianization of the Germanic peoples, extensions of Proto-Germanic *xaljō were reinterpreted to denote the underworld in Christian mythology,[1][3] for which see Gehenna.

Related early Germanic terms and concepts include Proto-Germanic *xalja-rūnō(n), a feminine compound noun, and *xalja-wītjan, a neutral compound noun. This form is reconstructed from the Latinized Gothic plural noun *haliurunnae (attested by Jordanes; according to philologist Vladimir Orel, meaning 'witches'), Old English helle-rúne ('sorceress, necromancer', according to Orel), and Old High German helli-rūna 'magic'. The compound is composed of two elements: *xaljō (*haljō) and *rūnō, the Proto-Germanic precursor to Modern English rune.[4] The second element in the Gothic haliurunnae may however instead be an agent noun from the verb rinnan ("to run, go"), which would make its literal meaning "one who travels to the netherworld".[5][6]

Proto-Germanic *xalja-wītjan (or *halja-wītjan) is reconstructed from Old Norse hel-víti 'hell', Old English helle-wíte 'hell-torment, hell', Old Saxon helli-wīti 'hell', and the Middle High German feminine noun helle-wīze. It is a compound of *xaljō (discussed above) and *wītjan (reconstructed from forms such as Old English witt 'right mind, wits', Old Saxon gewit 'understanding', and Gothic un-witi 'foolishness, understanding').[7]

Hell appears in several mythologies and religions. It is commonly inhabited by demons and the souls of dead people. A fable about Hell which recurs in folklore across several cultures is the allegory of the long spoons. Hell is often depicted in art and literature, perhaps most famously in Dante's Divine Comedy.

Punishment in Hell typically corresponds to sins committed during life. Sometimes these distinctions are specific, with damned souls suffering for each sin committed (see for example Plato's myth of Er or Dante's The Divine Comedy), but sometimes they are general, with condemned sinners relegated to one or more chambers of Hell or to a level of suffering.

In many religious cultures, including Christianity and Islam, Hell is often depicted as fiery, painful, and harsh, inflicting suffering on the guilty. Despite these common depictions of Hell as a place of fire, some other traditions portray Hell as cold. Buddhist – and particularly Tibetan Buddhist – descriptions of Hell feature an equal number of hot and cold Hells. Among Christian descriptions, Dante's Inferno portrays the innermost (9th) circle of Hell as a frozen lake of blood and guilt.[11] But cold also played a part in earlier Christian depictions of Hell, beginning with the Apocalypse of Paul, originally from the early third century;[12] the "Vision of Dryhthelm" by the Venerable Bede from the seventh century;[13] "St Patrick's Purgatory", "The Vision of Tundale" or "Visio Tnugdali", and the "Vision of the Monk of Eynsham", all from the twelfth century;[14] and the "Vision of Thurkill" from the early thirteenth century.[15]

The Sumerian afterlife was a dark, dreary cavern located deep below the ground,[16] where inhabitants were believed to continue "a shadowy version of life on earth".[16] This bleak domain was known as Kur,[17]:114 and was believed to be ruled by the goddess Ereshkigal.[16][18]:184 All souls went to the same afterlife,[16] and a person's actions during life had no effect on how the person would be treated in the world to come.[16]

The souls in Kur were believed to eat nothing but dry dust[17]:58 and family members of the deceased would ritually pour libations into the dead person's grave through a clay pipe, thereby allowing the dead to drink.[17]:58 Nonetheless, funerary evidence indicates that some people believed that the goddess Inanna, Ereshkigal's younger sister, had the power to award her devotees with special favors in the afterlife.[16][19] During the Third Dynasty of Ur, it was believed that a person's treatment in the afterlife depended on how he or she was buried;[17]:58 those that had been given sumptuous burials would be treated well,[17]:58 but those who had been given poor burials would fare poorly.[17]:58

The entrance to Kur was believed to be located in the Zagros mountains in the far east.[17]:114 It had seven gates, through which a soul needed to pass.[16] The god Neti was the gatekeeper.[18]:184[17]:86 Ereshkigal's sukkal, or messenger, was the god Namtar.[17]:134[18]:184 Galla were a class of demons that were believed to reside in the underworld;[17]:85 their primary purpose appears to have been to drag unfortunate mortals back to Kur.[17]:85 They are frequently referenced in magical texts,[17]:85–86 and some texts describe them as being seven in number.[17]:85–86 Several extant poems describe the galla dragging the god Dumuzid into the underworld.[17]:86 The later Mesopotamians knew this underworld by its East Semitic name: Irkalla. During the Akkadian Period, Ereshkigal's role as the ruler of the underworld was assigned to Nergal, the god of death.[16][18]:184 The Akkadians attempted to harmonize this dual rulership of the underworld by making Nergal Ereshkigal's husband.[16]

With the rise of the cult of Osiris during the Middle Kingdom, the "democratization of religion" offered even his humblest followers the prospect of eternal life, with moral fitness becoming the dominant factor in determining a person's suitability. At death a person faced judgment by a tribunal of forty-two divine judges. If they had led a life in conformance with the precepts of the goddess Maat, who represented truth and right living, the person was welcomed into the heavenly reed fields. If found guilty, the person was thrown to Ammit, the "devourer of the dead", and would be condemned to the lake of fire.[21] The person taken by the devourer is subject first to terrifying punishment and then annihilated. These depictions of punishment may have influenced medieval perceptions of the inferno in hell via early Christian and Coptic texts.[22] Purification for those considered justified appears in the descriptions of "Flame Island", where humans experience the triumph over evil and rebirth. For the damned, complete destruction into a state of non-being awaits, but there is no suggestion of eternal torture; the weighing of the heart in Egyptian mythology can lead to annihilation.[23][24] The Tale of Khaemwese describes the torment of a rich man, who lacked charity, when he dies, and compares it to the blessed state of a poor man who has also died.[25] Divine pardon at judgement always remained a central concern for the ancient Egyptians.[26]

Modern understanding of Egyptian notions of hell relies on six ancient texts:[27]

In classic Greek mythology, below Heaven, Earth, and Pontus is Tartarus, or Tartaros (Greek Τάρταρος, "deep place"). It is either a deep, gloomy place, a pit, or an abyss used as a dungeon of torment and suffering that resides within Hades (the entire underworld), with Tartarus being the hellish component. In the Gorgias, Plato (c. 400 BC) wrote that souls of the deceased were judged after they paid for crossing the river of the dead, and those who received punishment were sent to Tartarus.[28] As a place of punishment, it can be considered a hell. The classic Hades, on the other hand, is more similar to the Old Testament Sheol. The Romans later adopted these views.

The hells of Europe include Breton mythology's "Anaon", Celtic mythology's "Uffern", Slavic mythology's "Peklo", the hell of Sami mythology, and the Finnish "tuonela" ("manala").

The hells of Asia include the Bagobo "Gimokodan" (which is believed to be more of an otherworld, where the Red Region is reserved for those who died in battle, while ordinary people go to the White Region)[29] and ancient Indian mythology's "Kalichi" or "Naraka".

According to a few sources, hell is below ground and is described as an uninviting wet[30] or fiery place reserved for sinful people in the Ainu religion, as stated by missionary John Batchelor.[31] However, belief in hell does not appear in the oral tradition of the Ainu.[32] Instead, there is belief within the Ainu religion that the soul of the deceased (ramat) becomes a kamuy after death.[32] There is also belief that the soul of someone who was wicked during life, committed suicide, was murdered, or died in great agony becomes a ghost (tukap) that haunts the living,[32] seeking the fulfillment from which it was excluded during life.[33]

In Tengrism, it was believed that the wicked would be punished in Tamag before being brought to the third floor of the sky.[34]

In Taoism, hell is represented by Diyu.

The Hell of Swahili mythology is called kuzimu, and belief in it developed in the 7th and 8th centuries under the influence of Muslim merchants on the east African coast.[35] It is imagined as a very cold place.[35] Serer religion rejects the general notion of heaven and hell.[36] In Serer religion, acceptance by the ancestors who have long departed is as close to any heaven as one can get. Rejection and becoming a wandering soul is a sort of hell for one passing over. The souls of the dead must make their way to Jaaniw (the sacred dwelling place of the soul). Only those who have lived their lives on earth in accordance with Serer doctrines will be able to make this necessary journey and thus be accepted by the ancestors. Those who cannot make the journey become lost and wandering souls, but they do not burn in "hell fire".[36][37]

The hells of the Americas include the Aztec religion's Mictlan, the Inuit religion's Adlivun, and the Yanomami religion's Shobari Waka. In Mayan religion, Xibalba (or Metnal) is the dangerous underworld of nine levels. The road into and out of it is said to be steep, thorny and very forbidding. Ritual healers would intone healing prayers banishing diseases to Xibalba. Much of the Popol Vuh describes the adventures of the Maya Hero Twins in their cunning struggle with the evil lords of Xibalba.

The Aztecs believed that the dead traveled to Mictlan, a neutral place found far to the north. There was also a legend of a place of white flowers, which was always dark, and was home to the gods of death, particularly Mictlantecutli and his spouse Mictlantecihuatl, whose names literally mean "lords of Mictlan". The journey to Mictlan took four years, and the travelers had to overcome difficult tests, such as passing a mountain range where the mountains crashed into each other, a field where the wind carried flesh-scraping knives, and a river of blood with fearsome jaguars.

In pre-Christian Fijian mythology there was belief in an underworld called Murimuria.

Hell is conceived of in most Abrahamic religions as a place of, or a form of, punishment.[38]

Early Judaism had no concept of Hell, although the concept of an afterlife was introduced during the Hellenistic period, apparently from neighboring Hellenistic religions. It occurs, for example, in the Book of Daniel. Daniel 12:2 proclaims "And many of those who sleep in the dust of the earth shall awake, Some to everlasting life, Some to shame and everlasting contempt."

Judaism does not have a specific doctrine about the afterlife, but it does have a mystical/Orthodox tradition of describing Gehinnom. Gehinnom is not Hell, but originally a grave and, in later times, a sort of Purgatory where one is judged based on one's life's deeds, or rather, where one becomes fully aware of one's own shortcomings and negative actions during one's life. The Kabbalah explains it as a "waiting room" (commonly translated as an "entry way") for all souls (not just the wicked). The overwhelming majority of rabbinic thought maintains that people are not in Gehinnom forever; the longest that one can be there is said to be 12 months, though there has been the occasional noted exception. Some consider it a spiritual forge where the soul is purified for its eventual ascent to Olam Habah (Heb. עולם הבא; lit. "The world to come", often viewed as analogous to heaven). This is also mentioned in the Kabbalah, where the soul is described as breaking, like the flame of a candle lighting another: the part of the soul that ascends being pure and the "unfinished" piece being reborn.

According to Jewish teachings, hell is not entirely physical; rather, it can be compared to a very intense feeling of shame. People are ashamed of their misdeeds, and this constitutes suffering which makes up for the bad deeds. When one has so deviated from the will of God, one is said to be in Gehinnom. This is not meant to refer to some point in the future, but to the very present moment. The gates of teshuva (return) are said to be always open, and so one can align one's will with that of God at any moment. Being out of alignment with God's will is itself a punishment according to the Torah.

Many scholars of Jewish mysticism, particularly of the Kabbalah, describe seven "compartments" or "habitations" of Hell, just as they describe seven divisions of Heaven. These divisions go by many different names, and the most frequently mentioned are as follows:[39]

Besides those mentioned above, there also exist additional terms that have often been used to refer either to Hell in general or to some region of the underworld:

For more information, see Qliphoth.

Maimonides declares in his 13 principles of faith that the hells of the rabbinic literature were pedagogically motivated inventions to encourage mankind, which had been regarded as immature, to respect the Torah's commandments.[49] Instead of being sent to hell, the souls of the wicked would actually be annihilated.[50]

The Christian doctrine of hell derives from passages in the New Testament. The word hell does not appear in the Greek New Testament; instead one of three words is used: the Greek words Tartarus or Hades, or the Hebrew word Gehinnom.

In the Septuagint and New Testament the authors used the Greek term Hades for the Hebrew Sheol, but often with Jewish rather than Greek concepts in mind. In the Jewish concept of Sheol, such as expressed in Ecclesiastes,[51] Sheol or Hades is a place where there is no activity. However, since Augustine, some[which?] Christians have believed that the souls of those who die either rest peacefully, in the case of Christians, or are afflicted, in the case of the damned, after death until the resurrection.[52]

While these three terms are all translated as "hell" in the KJV, they have three very different meanings.

The Roman Catholic Church defines Hell as "a state of definitive self-exclusion from communion with God and the blessed." One finds oneself in Hell as the result of dying in mortal sin without repenting and accepting God's merciful love, becoming eternally separated from him by one's own free choice[66] immediately after death.[67] In the Roman Catholic Church, many other Christian churches, such as the Baptists and Episcopalians, and some Greek Orthodox churches,[68] Hell is taught as the final destiny of those who have not been found worthy after the general resurrection and last judgment,[69][70][71] where they will be eternally punished for sin and permanently separated from God. The nature of this judgment differs among denominations: many Protestant churches teach that salvation comes from accepting Jesus Christ as one's savior, while the Greek Orthodox and Catholic Churches teach that the judgment hinges on both faith and works. However, many Liberal Christians throughout Liberal Protestant and Anglican churches believe in universal reconciliation (see below), even though it contradicts the traditional doctrines that are usually held by the evangelicals within their denominations.[72]

Some modern Christian theologians subscribe to the doctrine of conditional immortality. Conditional immortality is the belief that the soul dies with the body and does not live again until the resurrection. As with other Jewish writings of the Second Temple period, the New Testament text distinguishes two words, both translated "Hell" in older English Bibles: Hades, "the grave", and Gehenna, where God "can destroy both body and soul".[73] A minority of Christians read this to mean that neither Hades nor Gehenna is eternal, but that both refer to the ultimate destruction of the wicked in the consuming fire of the Lake of Fire after resurrection. However, because of the Greek words used in translating from the Hebrew text, the Hebrew ideas have become confused with Greek myths and ideas. In the Hebrew text, when people died they went to Sheol, the grave,[74] and the wicked ultimately went to Gehenna and were consumed by fire. The Hebrew words for "the grave" or "death" or "eventual destruction of the wicked" were translated using Greek words, and later texts became a mix of mistranslation, pagan influence, and Greek myth.[75]

Christian mortalism is the doctrine that all men and women, including Christians, must die, and do not continue and are not conscious after death. Therefore, annihilationism includes the doctrine that "the wicked" are also destroyed rather than tormented forever in traditional "Hell" or the lake of fire. Christian mortalism and annihilationism are directly related to the doctrine of conditional immortality, the idea that a human soul is not immortal unless it is given eternal life at the second coming of Christ and resurrection of the dead.

Biblical scholars looking at the issue through the Hebrew text have denied the teaching of innate immortality.[76][77] Rejection of the immortality of the soul, and advocacy of Christian mortalism, has been a feature of Protestantism since the early days of the Reformation, with Martin Luther himself rejecting the traditional idea, though his mortalism did not carry into orthodox Lutheranism. One of the most notable English opponents of the immortality of the soul was Thomas Hobbes, who described the idea as a Greek "contagion" in Christian doctrine.[78] Modern proponents of conditional immortality include some in the Anglican church, such as N.T. Wright,[79] and, among denominations, the Seventh-day Adventists, Bible Students, Jehovah's Witnesses, Christadelphians, Living Church of God, The Church of God International, and some other Protestant Christians, as well as recent Roman Catholic teaching. It is not Roman Catholic dogma that anyone is in Hell,[80] though many individual Catholics do not share this view. The 1993 Catechism of the Catholic Church states:[81] "This state of definitive self-exclusion from communion with God and the blessed is called 'hell'" and[82] "... they suffer the punishments of hell, 'eternal fire'. The chief punishment of hell is eternal separation from God" (CCC 1035). During an audience in 1999, Pope John Paul II commented: "images of hell that Sacred Scripture presents to us must be correctly interpreted. They show the complete frustration and emptiness of life without God. Rather than a place, hell indicates the state of those who freely and definitively separate themselves from God, the source of all life and joy."[83]

The Seventh-day Adventist Church's official beliefs support annihilationism.[84][85] Adventists deny the Catholic purgatory and teach that the dead lie in the grave until they are raised for a last judgment; both the righteous and the wicked await the resurrection at the Second Coming. Seventh-day Adventists believe that death is a state of unconscious sleep until the resurrection. They base this belief on biblical texts such as Ecclesiastes 9:5, which states "the dead know nothing", and 1 Thessalonians 4:13–18, which contains a description of the dead being raised from the grave at the second coming. These verses, it is argued, indicate that death is only a period or form of slumber.

Adventists teach that the resurrection of the righteous will take place shortly after the second coming of Jesus, as described in Revelation 20:4–6 (which follows Revelation 19:11–16), whereas the resurrection of the wicked will occur after the millennium, as described in Revelation 20:5 and 20:12–13 (which follow Revelation 20:4 and 6–7), though Revelation 20:12–13 and 15 actually describe a mixture of saved and condemned people being raised from the dead and judged. Adventists reject the traditional doctrine of hell as a state of everlasting conscious torment, believing instead that the wicked will be permanently destroyed after the millennium by the lake of fire, which is called "the second death" in Revelation 20:14.

These Adventist doctrines about death and hell reflect an underlying belief in: (a) conditional immortality (or conditionalism), as opposed to the immortality of the soul; and (b) the monistic nature of human beings, in which the soul is not separable from the body, as opposed to bipartite or tripartite conceptions, in which the soul is separable.

Jehovah's Witnesses hold that the soul ceases to exist when the person dies[86] and therefore that Hell (Sheol or Hades) is a state of non-existence.[86] In their theology, Gehenna differs from Sheol or Hades in that it holds no hope of a resurrection.[86] Tartarus is held to be the metaphorical state of debasement of the fallen angels between the time of their moral fall (Genesis chapter 6) and their post-millennial destruction along with Satan (Revelation chapter 20).[87]

Bible Students and Christadelphians also believe in annihilationism.

Christian Universalists believe in universal reconciliation, the belief that all human souls will eventually be reconciled with God and admitted to Heaven.[88] This belief is held by some Unitarian-Universalists.[89][90][91]

According to Emanuel Swedenborg's Second Coming Christian revelation, hell exists because evil people want it.[92] They, not God, introduced evil to the human race.[93] In Swedenborgianism, every soul joins a like-minded group after death, the one in which it feels most comfortable. Hell is therefore believed to be a place of happiness for those souls that delight in evil.[94]

Members of The Church of Jesus Christ of Latter-day Saints (LDS Church) teach that hell is a state between death and resurrection, in which those spirits who did not repent while on earth must suffer for their own sins (Doctrine and Covenants 19:15–17[95]). After that, only the Sons of perdition, who committed the Eternal sin, would be cast into Outer darkness. However, according to Mormon faith, committing the Eternal sin requires so much knowledge that most persons cannot commit it.[96] Satan and Cain are counted as examples of Sons of perdition.
96
+
97
In Islam, jahannam (in Arabic: جهنم) (related to the Hebrew word gehinnom) is the counterpart to heaven and is likewise divided into seven layers, both co-existing with the temporal world,[97] filled with blazing fire, boiling water, and a variety of other torments for those who have been condemned to it in the hereafter. In the Quran, God declares that the fire of Jahannam is prepared for both mankind and jinn.[98][99] After the Day of Judgement, it is to be occupied by those who do not believe in God, those who have disobeyed his laws, or those who have rejected his messengers.[100] "Enemies of Islam" are sent to Hell immediately upon their deaths.[101] Muslim modernists downplay the vivid descriptions of hell common during the Classical period, on the one hand reaffirming that the afterlife must not be denied, while simultaneously asserting that its exact nature remains unknown. Other modern Muslims continue the Sufi tradition of an interiorized hell, combining the eschatological thought of Ibn Arabi and Rumi with Western philosophy.[102] Although disputed by some scholars, most consider jahannam to be eternal.[103][104] There is a belief that the fire, which represents one's own bad deeds, can already be seen during the Punishment of the Grave, and that the spiritual pain caused by this can lead to purification of the soul.[105]

Medieval sources usually identified hell with the seven layers of the earth mentioned in Surah 65:12, inhabited by devils, harsh angels, scorpions, and serpents, who torment the sinners. They described thorny shrubs, seas filled with blood and fire, and darkness illuminated only by the flames of hell.[106] However, some sources also mention a place of extreme cold at the bottom of hell, called Zamhareer, characterized as unbearably cold, with blizzards, ice, and snow.[107] Maalik is thought of as the keeper of the gates of hell; he notably appears in Ibn Abbas' account of the Isra and Mi'raj.[108] A narrow bridge called As-Sirāt spans over hell. On Judgement Day one must pass over it to reach paradise, but those destined for hell will find it too narrow and fall from it into their new abode.[109] Iblis, the temporary ruler of hell,[110] is thought to reside at the bottom of hell, from where he commands his hosts of infernal demons.[111][112] But contrary to Christian traditions, Iblis and his infernal hosts do not wage war against God;[113] his enmity applies to humanity only. Further, his dominion in hell is also his punishment. According to the Muwatta Hadith, the Bukhari Hadith, the Tirmidhi Hadith, and the Kabir Hadith, Muhammad claimed that the fire of Jahannam is not red but pitch-black, is 70 times hotter than ordinary fire, and is much more painful than ordinary fire.[citation needed]

Polytheism (shirk) is regarded as a particularly grievous sin; therefore entering Paradise is forbidden to a polytheist (musyrik) because his place is Hell;[114] and the lowest pit of Hell (Hawiyah) is intended for hypocrites who claimed aloud to believe in God and his messenger but in their hearts did not.[115] Not all Muslims and scholars agree on whether hell is an eternal destination or whether some or all of the condemned will eventually be forgiven and allowed to enter paradise.[101][113][116][117]

In the Bahá'í Faith, the conventional descriptions of Hell and Heaven are considered to be symbolic representations of spiritual conditions. The Bahá'í writings describe closeness to God as Heaven and, conversely, remoteness from God as Hell.[118] The Bahá'í writings state that the soul is immortal and that after death it will continue to progress until it finally attains God's presence.[119]

+ In "Devaduta Sutta", the 130th discourse of the Majjhima Nikaya, Buddha teaches about hell in vivid detail. Buddhism teaches that there are five[citation needed] (sometimes six[citation needed]) realms of rebirth, which can then be further subdivided into degrees of agony or pleasure. Of these realms, the hell realms, or Naraka, is the lowest realm of rebirth. Of the hell realms, the worst is Avīci (Sanskrit and Pali for "without waves"). The Buddha's disciple, Devadatta, who tried to kill the Buddha on three occasions, as well as create a schism in the monastic order, is said[by whom?] to have been reborn in the Avici Hell.
106
+
107
As with all realms of rebirth in Buddhism, rebirth in the Hell realms is not permanent, though suffering can persist for eons before one is reborn again.[citation needed] In the Lotus Sutra, the Buddha teaches that eventually even Devadatta will become a Pratyekabuddha himself, emphasizing the temporary nature of the Hell realms. Buddhism thus teaches escape from the endless migration of rebirths (both positive and negative) through the attainment of Nirvana.

The Bodhisattva Ksitigarbha, according to the Ksitigarbha Sutra, made a great vow as a young girl not to reach Nirvana until all beings were liberated from the Hell Realms or other unwholesome rebirths. In popular literature, Ksitigarbha travels to the Hell realms to teach and relieve beings of their suffering.

Early Vedic religion does not have a concept of Hell. The Ṛg-veda mentions three realms, bhūr (the earth), svar (the sky) and bhuvas or antarikṣa (the middle area, i.e. air or atmosphere). In later Hindu literature, especially the law books and Puranas, more realms are mentioned, including a realm similar to Hell, called naraka (in Devanāgarī: नरक). Yama, as the first-born human (together with his twin sister Yamī), becomes, by virtue of precedence, the ruler of men and a judge of them on their departure. Originally he resides in Heaven, but later, especially medieval, traditions mention his court in naraka.[citation needed]

In the law-books (smṛtis and dharma-sūtras, like the Manu-smṛti), naraka is a place of punishment for sins. It is a lower spiritual plane (called naraka-loka) where the spirit is judged and the partial fruits of karma affect the next life. In the Mahabharata there is a mention of the Pandavas and the Kauravas both going to Heaven. At first Yudhisthir goes to heaven, where he sees Duryodhana enjoying heaven; Indra tells him that Duryodhana is in heaven as he did his Kshatriya duties. Then Indra shows Yudhisthir hell, where his brothers appear to be. Later it is revealed that this was a test for Yudhisthir and that his brothers and the Kauravas are all in heaven, living happily in the divine abode of the gods. Hells are also described in various Puranas and other scriptures. The Garuda Purana gives a detailed account of Hell and its features; it lists the amount of punishment for most crimes, much like a modern-day penal code.

It is believed[by whom?] that people who commit sins go to Hell and have to go through punishments in accordance with the sins they committed. The god Yamarāja, who is also the god of death, presides over Hell. Detailed accounts of all the sins committed by an individual are kept by Chitragupta, who is the record keeper in Yama's court. Chitragupta reads out the sins committed, and Yama orders appropriate punishments to be given to individuals. These punishments include dipping in boiling oil, burning in fire, and torture using various weapons in the various Hells. Individuals who finish their quota of punishments are reborn in accordance with their balance of karma. All created beings are imperfect and thus have at least one sin to their record; but if one has generally led a pious life, one ascends to svarga, a temporary realm of enjoyment similar to Paradise, after a brief period of expiation in Hell and before the next reincarnation, according to the law of karma.[citation needed] With the exception of the Hindu philosopher Madhva, time in Hell is not regarded as eternal damnation within Hinduism.[120]

According to Brahma Kumaris, the iron age (Kali Yuga) is regarded as hell.

In Jain cosmology, Naraka (translated as Hell) is the name given to a realm of existence having great suffering. However, a Naraka differs from the hells of Abrahamic religions in that souls are not sent to Naraka as the result of a divine judgment and punishment. Furthermore, the length of a being's stay in a Naraka is not eternal, though it is usually very long and measured in billions of years. A soul is born into a Naraka as a direct result of its previous karma (actions of body, speech and mind), and resides there for a finite length of time until its karma has achieved its full result. After its karma is used up, it may be reborn in one of the higher worlds as the result of an earlier karma that had not yet ripened.

The Hells are situated in the seven grounds at the lower part of the universe. The seven grounds are:

The hellish beings are a type of soul residing in these various hells. They are born in hells by sudden manifestation.[121] The hellish beings possess a vaikriya body (a protean body which can transform itself and take various forms). They have a fixed life span (ranging from ten thousand to billions of years) in the respective hells where they reside. According to the Jain scripture Tattvarthasutra, the following are the causes of birth in hell:[122]

According to Meivazhi, the purpose of all religions is to guide people to Heaven.[124] However, those who do not approach God and are not blessed by Him are believed to be condemned to Hell.[125]

In Sikh thought, Heaven and Hell are not places for living in the hereafter; they are part of the spiritual topography of man and do not exist otherwise. They refer to good and evil stages of life respectively and can be lived here and now during our earthly existence.[126] For example, Guru Arjan explains that people who are entangled in emotional attachment and doubt are living in hell on this Earth, i.e. their life is hellish.

So many are being drowned in emotional attachment and doubt; they dwell in the most horrible hell.

Ancient Taoism had no concept of Hell, as morality was seen to be a man-made distinction and there was no concept of an immaterial soul. In its home country China, where Taoism adopted tenets of other religions, popular belief endows Taoist Hell with many deities and spirits who punish sin in a variety of horrible ways.

Diyu is the realm of the dead in Chinese mythology. It is very loosely based upon the Buddhist concept of Naraka combined with traditional Chinese afterlife beliefs and a variety of popular expansions and re-interpretations of these two traditions. Ruled by Yanluo Wang, the King of Hell, Diyu is a maze of underground levels and chambers where souls are taken to atone for their earthly sins.

Incorporating ideas from Taoism and Buddhism as well as traditional Chinese folk religion, Diyu is a kind of purgatory which serves not only to punish but also to renew spirits in preparation for their next incarnation. There are many deities associated with the place, whose names and purposes are the subject of much conflicting information.

The exact number of levels in Chinese Hell, and their associated deities, differs according to the Buddhist or Taoist perception. Some speak of three to four 'Courts'; others of as many as ten. The ten judges are also known as the Ten Kings of Yama. Each Court deals with a different aspect of atonement. For example, murder is punished in one Court, adultery in another. According to some Chinese legends, there are eighteen levels in Hell. Punishment also varies according to belief, but most legends speak of highly imaginative chambers where wrong-doers are sawn in half, beheaded, thrown into pits of filth or forced to climb trees adorned with sharp blades.

However, most legends agree that once a soul (usually referred to as a 'ghost') has atoned for its deeds and repented, it is given the Drink of Forgetfulness by Meng Po and sent back into the world to be reborn, possibly as an animal or as a poor or sick person, for further punishment.

Zoroastrianism has historically suggested several possible fates for the wicked, including annihilation, purgation in molten metal, and eternal punishment, all of which have standing in Zoroaster's writings. Zoroastrian eschatology includes the belief that wicked souls will remain in Duzakh until, following the arrival of three saviors at thousand-year intervals, Ahura Mazda reconciles the world, destroying evil and resurrecting tormented souls to perfection.[128]

The sacred Gathas mention a "House of the Lie" for those "that are of an evil dominion, of evil deeds, evil words, evil Self, and evil thought, Liars."[129] However, the best-known Zoroastrian text to describe hell in detail is the Book of Arda Viraf.[130] It depicts particular punishments for particular sins; for instance, being trampled by cattle as punishment for neglecting the needs of work animals.[131] Other descriptions can be found in the Book of Scriptures (Hadhokht Nask), Religious Judgments (Dadestan-i Denig) and the Book of the Judgments of the Spirit of Wisdom (Mainyo-I-Khard).[132]

The Mandaeans believe in the purification of souls inside Leviathan,[133] whom they also call Ur.[134] Within detention houses, so-called Mattarathas,[135] the detained souls receive so much punishment that they wish to die a second death, which, however, does not (yet) befall their spirit.[136] At the end of days, the souls of the Mandaeans that can be purified will be liberated from Ur's mouth.[137] After this, Ur will be destroyed along with the souls remaining inside him,[138] and they will die the second death.[139]

The two oldest sects of Wicca, Gardnerian Wicca and Alexandrian Wicca, include "the wiccan laws" that Gerald Gardner wrote. Those laws state that wiccan souls are privileged with reincarnation, but that the souls of wiccans who break the wiccan laws, "even under torture", would be cursed by the goddess, never be reborn on earth, and "remain where they belong, in the Hell of the Christians."[140][141] Later wiccan sects do not necessarily include Gerald Gardner's wiccan laws. The influential wiccan author Raymond Buckland wrote that the wiccan laws are unimportant. Solitary neo-wiccans, who originated in the 1980s, do not include the wiccan laws in their doctrine.

In his Divina commedia (Divine Comedy, set in the year 1300), Dante Alighieri employed the device of taking Virgil as his guide through Inferno (and then, in the second canticle, up the mountain of Purgatorio). Virgil himself is not condemned to Hell proper in Dante's poem but is rather, as a virtuous pagan, confined to Limbo just at the edge of Hell. The geography of Hell is very elaborately laid out in this work, with nine concentric rings leading deeper into the Earth and deeper into the various punishments of Hell, until, at the center of the world, Dante finds Satan himself trapped in the frozen lake of Cocytus. A small tunnel leads past Satan and out to the other side of the world, at the base of the Mount of Purgatory.

John Milton's Paradise Lost (1667) opens with the fallen angels, including their leader Satan, waking up in Hell after having been defeated in the war in heaven, and the action returns there at several points throughout the poem. Milton portrays Hell as the abode of the demons and the passive prison from which they plot their revenge upon Heaven through the corruption of the human race. The 19th-century French poet Arthur Rimbaud alluded to the concept as well in the title and themes of one of his major works, A Season in Hell. Rimbaud's poetry portrays his own suffering, among other themes, in poetic form.

Many of the great epics of European literature include episodes that occur in Hell. In the Roman poet Virgil's Latin epic, the Aeneid, Aeneas descends into Dis (the underworld) to visit his father's spirit. The underworld is only vaguely described, with one unexplored path leading to the punishments of Tartarus, while the other leads through Erebus and the Elysian Fields.

The idea of Hell was highly influential for writers such as Jean-Paul Sartre, who authored the 1944 play No Exit about the idea that "Hell is other people". Although not a religious man, Sartre was fascinated by his interpretation of a Hellish state of suffering. C.S. Lewis's The Great Divorce (1945) borrows its title from William Blake's Marriage of Heaven and Hell (1793) and its inspiration from the Divine Comedy, as the narrator is likewise guided through Hell and Heaven. Hell is portrayed here as an endless, desolate twilight city upon which night is imperceptibly sinking. The night is actually the Apocalypse, and it heralds the arrival of the demons after their judgment. Before the night comes, anyone can escape Hell if they leave behind their former selves and accept Heaven's offer, and a journey to Heaven reveals that Hell is infinitely small; it is nothing more or less than what happens to a soul that turns away from God and into itself.

Piers Anthony, in his series Incarnations of Immortality, portrays examples of Heaven and Hell via Death, Fate, Underworld, Nature, War, Time, Good-God, and Evil-Devil. Robert A. Heinlein offers a yin-yang version of Hell where there is still some good within, most evident in his book Job: A Comedy of Justice. Lois McMaster Bujold uses her five Gods ('Father, Mother, Son, Daughter and Bastard') in The Curse of Chalion, with an example of Hell as formless chaos. Michael Moorcock is one of many who offer Chaos-Evil-(Hell) and Uniformity-Good-(Heaven) as equally unacceptable extremes which must be held in balance, in particular in the Elric and Eternal Champion series. Fredric Brown wrote a number of fantasy short stories about Satan's activities in Hell. The cartoonist Jimmy Hatlo created a series of cartoons about life in Hell called The Hatlo Inferno, which ran from 1953 to 1958.[142]