The Space Shuttle Challenger disaster was a fatal incident in the United States space program that occurred on Tuesday, January 28, 1986, when the Space Shuttle Challenger (OV-099) broke apart 73 seconds into its flight, killing all seven crew members aboard. The crew consisted of five NASA astronauts and two payload specialists. The mission carried the designation STS-51-L and was the tenth flight for the Challenger orbiter.
The spacecraft disintegrated over the Atlantic Ocean, off the coast of Cape Canaveral, Florida, at 11:39 a.m. EST (16:39 UTC). The disintegration of the vehicle began after a joint in its right solid rocket booster (SRB) failed at liftoff. The joint failed because its O-ring seals were not designed to handle the unusually cold conditions that existed at this launch. The seals' failure caused a breach in the SRB joint, allowing pressurized burning gas from within the solid rocket motor to reach the outside and impinge upon the adjacent SRB aft field joint attachment hardware and external fuel tank. This led to the separation of the right-hand SRB's aft field joint attachment and the structural failure of the external tank. Aerodynamic forces broke up the orbiter.
The crew compartment and many other vehicle fragments were eventually recovered from the ocean floor after a lengthy search and recovery operation. The exact timing of the death of the crew is unknown; several crew members are known to have survived the initial breakup of the spacecraft. The shuttle had no escape system,[a][1] and the impact of the crew compartment at terminal velocity with the ocean surface was too violent to be survivable.[2]
The disaster resulted in a 32-month hiatus in the shuttle program and the formation of the Rogers Commission, a special commission appointed by United States President Ronald Reagan to investigate the accident. The Rogers Commission found NASA's organizational culture and decision-making processes had been key contributing factors to the accident,[3] with the agency violating its own safety rules. NASA managers had known since 1977 that contractor Morton-Thiokol's design of the SRBs contained a potentially catastrophic flaw in the O-rings, but they had failed to address this problem properly. NASA managers also disregarded warnings from engineers about the dangers of launching posed by the low temperatures of that morning, and failed to adequately report these technical concerns to their superiors.
Approximately 17 percent of the American population watched the launch live on television because of the presence of high school teacher Christa McAuliffe, who would have been the first teacher in space. Media coverage of the accident was extensive; one study reported that 85 percent of Americans surveyed had heard the news within an hour of the accident.[4] The Challenger disaster has been used as a case study in many discussions of engineering safety and workplace ethics.
Each of the two solid rocket boosters (SRBs) was constructed of seven sections, six of which were permanently joined in pairs at the factory. For each flight, the four resulting segments were then assembled in the Vehicle Assembly Building at Kennedy Space Center (KSC), with three field joints. The factory joints were sealed with asbestos-silica insulation applied over the joint, while each field joint was sealed with two rubber O-rings. After the destruction of Challenger, the number of O-rings per field joint was increased to three.[5] The seals of all of the SRB joints were required to contain the hot, high-pressure gases produced by the burning solid propellant inside, thus forcing them out of the nozzle at the aft end of each rocket.
During the Space Shuttle design process, a McDonnell Douglas report in September 1971 discussed the safety record of solid rockets. While a safe abort was possible after most types of failures, one was especially dangerous: a burnthrough by hot gases of the rocket's casing. The report stated that "if burnthrough occurs adjacent to [liquid hydrogen/oxygen] tank or orbiter, timely sensing may not be feasible and abort not possible", accurately foreshadowing the Challenger accident.[6] Morton-Thiokol was the contractor responsible for the construction and maintenance of the shuttle's SRBs. As originally designed by Thiokol, the O-ring joints in the SRBs were supposed to close more tightly due to forces generated at ignition, but a 1977 test showed that when pressurized water was used to simulate the effects of booster combustion, the metal parts bent away from each other, opening a gap through which gases could leak. This phenomenon, known as "joint rotation", caused a momentary drop in air pressure. This made it possible for combustion gases to erode the O-rings. In the event of widespread erosion, a flame path could develop, causing the joint to burst—which would have destroyed the booster and the shuttle.[7]:118
Engineers at the Marshall Space Flight Center wrote to the manager of the Solid Rocket Booster project, George Hardy, on several occasions suggesting that Thiokol's field joint design was unacceptable. For example, one engineer suggested that joint rotation would render the secondary O-ring useless, but Hardy did not forward these memos to Thiokol, and the field joints were accepted for flight in 1980.[8]
Evidence of serious O-ring erosion was present as early as the second space shuttle mission, STS-2, which was flown by Columbia. Contrary to NASA regulations, the Marshall Center did not report this problem to senior management at NASA, but opted to keep the problem within their reporting channels with Thiokol. Even after the O-rings were redesignated as "Criticality 1" (meaning that their failure would result in the destruction of the Orbiter), no one at Marshall suggested that the shuttles be grounded until the flaw could be fixed.[8]
After the 1984 launch of STS-41-D, flown by Discovery, the first occurrence of hot gas "blow-by" was discovered beyond the primary O-ring. In the post-flight analysis, Thiokol engineers found that the amount of blow-by was relatively small and had not impinged upon the secondary O-ring, and concluded that for future flights, the damage was an acceptable risk. However, after the Challenger disaster, Thiokol engineer Brian Russell identified this event as the first "big red flag" regarding O-ring safety.[9]
By 1985, with seven of nine shuttle launches that year using boosters displaying O-ring erosion or hot gas blow-by,[10] Marshall and Thiokol realized that they had a potentially catastrophic problem on their hands. Perhaps most concerning was the launch of STS-51-B in April 1985, flown by Challenger, in which the worst O-ring damage to date was discovered in post-flight analysis. The primary O-ring of the left nozzle had been eroded so extensively that it had failed to seal, and for the first time hot gases had eroded the secondary O-ring.[11] They began the process of redesigning the joint with three inches (76 mm) of additional steel around the tang. This tang would grip the inner face of the joint and prevent it from rotating. They did not call for a halt to shuttle flights until the joints could be redesigned, but rather treated the problem as an acceptable flight risk. For example, Lawrence Mulloy, Marshall's manager for the SRB project since 1982, issued and waived launch constraints for six consecutive flights. Thiokol even went as far as to persuade NASA to declare the O-ring problem "closed".[8] Donald Kutyna, a member of the Rogers Commission, later likened this situation to an airline permitting one of its planes to continue to fly despite evidence that one of its wings was about to fall off.
Challenger was originally set to launch from Kennedy Space Center in Florida at 14:42 Eastern Standard Time (EST) on Wednesday, January 22, 1986. Delays in the previous mission, STS-61-C, caused the launch date to be moved to January 23 and then to January 24. The launch was rescheduled again, to Saturday, January 25, due to bad weather at the transoceanic abort landing (TAL) site in Dakar, Senegal. NASA decided to use the TAL site at Casablanca, Morocco, but because it was not equipped for night landings, the launch had to be moved to the morning. Predictions of unacceptable weather at Kennedy Space Center on January 26 caused the launch to be rescheduled for 09:37 EST on Monday, January 27.[12]
The planned January 27 launch was initially delayed due to problems with the exterior access hatch. First, a micro-switch indicator malfunctioned. Its purpose was to verify that the orbiter's hatch was safely locked.[7]:150–53 Then, a stripped bolt prevented the closeout crew from removing a closing fixture from the hatch.[7]:154 By the time repair personnel had sawed off the fixture, crosswinds at the Shuttle Landing Facility exceeded the limits for a return-to-launch-site abort. While the crew waited for the winds to die down, the launch window expired, forcing yet another scrub.[13]
Forecasts for January 28 predicted an unusually cold morning, with temperatures close to 30 °F (−1 °C), the minimum temperature permitted for launch. The Shuttle was never certified to operate in temperatures that low. The O-rings, as well as many other critical components, had no test data to support any expectation of a successful launch in such conditions.[14][15]
By mid-1985 Thiokol engineers worried that others did not share their concerns about the low temperature effects on the boosters. Engineer Bob Ebeling in October 1985 wrote a memo—titled "Help!" so others would read it—of concerns regarding low temperatures and O-rings. After the weather forecast, NASA personnel remembered Thiokol's warnings and contacted the company. When a Thiokol manager asked Ebeling about the possibility of a launch at 18 °F (−8 °C), he answered "[W]e're only qualified to 40° [40 °F or 4 °C] ... 'what business does anyone even have thinking about 18°, we're in no-man's land.'" After his team agreed that a launch risked disaster, Thiokol immediately called NASA recommending a postponement until temperatures rose in the afternoon. NASA manager Jud Lovingood responded that Thiokol could not make the recommendation without providing a safe temperature. The company prepared for a teleconference two hours later during which it would have to justify a no-launch recommendation.[14][15]
At the teleconference on the evening of January 27, Thiokol engineers and managers discussed the weather conditions with NASA managers from Kennedy Space Center and Marshall Space Flight Center. Several engineers (most notably Allan McDonald, Ebeling and Roger Boisjoly) reiterated their concerns about the effect of low temperatures on the resilience of the rubber O-rings that sealed the joints of the SRBs, and recommended a launch postponement.[15] They argued that they did not have enough data to determine whether the joints would properly seal if the O-rings were colder than 54 °F (12 °C). This was an important consideration, since the SRB O-rings had been designated as a "Criticality 1" component, meaning that there was no backup if both the primary and secondary O-rings failed, and their failure could destroy the Orbiter and kill its crew.
Thiokol management initially supported its engineers' recommendation to postpone the launch, but NASA staff opposed a delay. During the conference call, Hardy told Thiokol, "I am appalled. I am appalled by your recommendation." Mulloy said, "My God, Thiokol, when do you want me to launch—next April?"[15] NASA believed that the quality of Thiokol's hastily prepared presentation was too poor to support such a statement on flight safety.[14] One argument by NASA personnel contesting Thiokol's concerns was that if the primary O-ring failed, the secondary O-ring would still seal. This was unproven, and was in any case an argument that did not apply to a "Criticality 1" component. As astronaut Sally Ride stated when questioning NASA managers before the Rogers Commission, it is forbidden to rely on a backup for a "Criticality 1" component.
NASA claimed that it did not know of Thiokol's earlier concerns about the effects of the cold on the O-rings, and did not understand that Rockwell International, the shuttle's prime contractor, additionally viewed the large amount of ice present on the pad as a constraint to launch.
According to Ebeling, a second conference call was scheduled with only NASA and Thiokol management, excluding the engineers. For reasons that are unclear, Thiokol management disregarded its own engineers' warnings and now recommended that the launch proceed as scheduled;[15][16] NASA did not ask why.[14] Ebeling told his wife that night that Challenger would blow up.[17]
Ken Iliff, a former NASA Chief Scientist who had worked on the Space Shuttle Program since its first mission (and the X-15 program before that), stated this in 2004:
Not violating flight rules was something I had been taught on the X-15 program. It was something that we just never did. We never changed a mission rule on the fly. We aborted the mission and came back and discussed it. Violating a couple of mission rules was the primary cause of the Challenger accident.[18]
The Thiokol engineers had also argued that the low overnight temperatures of 18 °F (−8 °C) projected on the day[16] prior to launch would almost certainly result in SRB temperatures below their redline of 39 °F (4 °C). Ice had accumulated all over the launch pad, raising concerns that ice could damage the shuttle upon lift-off. The Kennedy Ice Team inadvertently pointed an infrared camera at the aft field joint of the right SRB and found the temperature to be only 9 °F (−13 °C). This was believed to be the result of supercooled air blowing on the joint from the liquid oxygen (LOX) tank vent. It was much lower than the air temperature and far below the design specifications for the O-rings. The low reading was later determined to be erroneous, the error caused by not following the temperature probe manufacturer's instructions. Tests and adjusted calculations later confirmed that the temperature of the joint was not substantially different from the ambient temperature.
The temperature on the day of the launch was far lower than had been the case with previous launches: below freezing at 28.0 to 28.9 °F (−2.2 to −1.7 °C); previously, the coldest launch had been at 54 °F (12 °C). Although the Ice Team had worked through the night removing ice, engineers at Rockwell still expressed concern. Rockwell engineers watching the pad from their headquarters in Downey, California, were horrified when they saw the amount of ice. They feared that during launch, ice might be shaken loose and strike the shuttle's thermal protection tiles, possibly due to the aspiration induced by the jet of exhaust gas from the SRBs. Rocco Petrone, the head of Rockwell's space transportation division, and his colleagues viewed this situation as a launch constraint, and told Rockwell's managers at the Cape that Rockwell could not support a launch. Rockwell's managers at the Cape voiced their concerns in a manner that led Houston-based mission manager Arnold Aldrich to go ahead with the launch. Aldrich decided to postpone the shuttle launch by an hour to give the Ice Team time to perform another inspection. After that last inspection, during which the ice appeared to be melting, Challenger was cleared to launch at 11:38 am EST.[16]
The following account of the accident is derived from real time telemetry data and photographic analysis, as well as from transcripts of air-to-ground and mission control voice communications.[19] All times are given in seconds after launch and correspond to the telemetry time-codes from the closest instrumented event to each described event.[20]
The Space Shuttle main engines (SSMEs) were ignited at T -6.6 seconds. The SSMEs were liquid-fueled and could be safely shut down (and the launch aborted if necessary) until the solid rocket boosters (SRBs) ignited at T=0 (which was at 11:38:00.010 EST) and the hold-down bolts were released with explosives, freeing the vehicle from the pad. At lift off, the three SSMEs were at 100% of their original rated performance, and began throttling up to 104% under computer control. With the first vertical motion of the vehicle, the gaseous hydrogen vent arm retracted from the external tank (ET) but failed to latch back. Review of film shot by pad cameras showed that the arm did not re-contact the vehicle, and thus it was ruled out as a contributing factor in the accident.[20] The post-launch inspection of the pad also revealed that kick springs on four of the hold-down bolts were missing, but they were similarly ruled out as a possible cause.[21]
Later review of launch film showed that at T+0.678, strong puffs of dark gray smoke were emitted from the right-hand SRB near the aft strut that attached the booster to the ET. The last smoke puff occurred at about T+2.733. The last view of smoke around the strut was at T+3.375. It was later determined that these smoke puffs were caused by the opening and closing of the aft field joint of the right-hand SRB. The booster's casing had ballooned under the stress of ignition. As a result of this ballooning, the metal parts of the casing bent away from each other, opening a gap through which hot gases—above 5,000 °F (2,760 °C)—leaked. This had occurred in previous launches, but each time the primary O-ring had shifted out of its groove and formed a seal. Although the SRB was not designed to function this way, it appeared to work well enough, and Morton-Thiokol changed the design specs to accommodate this process, known as extrusion.
While extrusion was taking place, hot gases leaked past (a process called "blow-by"), damaging the O-rings until a seal was made. Investigations by Morton-Thiokol engineers determined that the amount of damage to the O-rings was directly related to the time it took for extrusion to occur, and that cold weather, by causing the O-rings to harden, lengthened the time of extrusion. The SRB field joint redesigned after the Challenger accident used an additional interlocking mortise and tang with a third O-ring, mitigating blow-by.
On the morning of the disaster, the primary O-ring had become so hard due to the cold that it could not seal in time. The temperature had dropped below the glass transition temperature of the O-rings. Above the glass transition temperature, the O-rings display properties of elasticity and flexibility, while below the glass transition temperature, they become rigid and brittle. The secondary O-ring was not in its seated position due to the metal bending. There was now no barrier to the gases, and both O-rings were vaporized across 70 degrees of arc. Aluminum oxides from the burned solid propellant sealed the damaged joint, temporarily replacing the O-ring seal before flame passed through the joint.
As the vehicle cleared the tower, the SSMEs were operating at 104% of their rated maximum thrust, and control switched from the Launch Control Center (LCC) at Kennedy to the Mission Control Center (MCC) at Johnson Space Center in Houston, Texas. To prevent aerodynamic forces from structurally overloading the orbiter, at T+28 the SSMEs began throttling down to limit the velocity of the shuttle in the dense lower atmosphere, per normal operating procedure. At T+35.379, the SSMEs throttled back further to the planned 65%. Five seconds later, at about 19,000 feet (5,800 m), Challenger passed through Mach 1. At T+51.860, the SSMEs began throttling back up to 104% as the vehicle passed beyond max q, the period of maximum aerodynamic pressure on the vehicle.
Beginning at about T+37 and for 27 seconds, the shuttle experienced a series of wind shear events that were stronger than on any previous flight.[22]
At T+58.788, a tracking film camera captured the beginnings of a plume near the aft attach strut on the right SRB. Unknown to those on Challenger or in Houston, hot gas had begun to leak through a growing hole in one of the right-hand SRB joints. The force of the wind shear shattered the temporary oxide seal that had taken the place of the damaged O-rings, removing the last barrier to flame passing through the joint. Had it not been for the wind shear, the fortuitous oxide seal might have held through booster burnout.
Within a second, the plume became well defined and intense. Internal pressure in the right SRB began to drop because of the rapidly enlarging hole in the failed joint, and at T+60.238 there was visual evidence of flame burning through the joint and impinging on the external tank.[19]
At T+64.660, the plume suddenly changed shape, indicating that a leak had begun in the liquid hydrogen (LH2) tank, located in the aft portion of the external tank. The nozzles of the main engines pivoted under computer control to compensate for the unbalanced thrust produced by the booster burn-through. The pressure in the shuttle's external LH2 tank began to drop at T+66.764, indicating the effect of the leak.[19]
At this stage the situation still seemed normal both to the crew and to flight controllers. At T+68, the CAPCOM Richard O. Covey informed the crew that they were "go at throttle up", and Commander Dick Scobee confirmed, "Roger, go at throttle up"; this was the last communication from Challenger on the air-to-ground loop.[19]
At T+72.284, the right SRB pulled away from the aft strut attaching it to the external tank. Later analysis of telemetry data showed a sudden lateral acceleration to the right at T+72.525, which may have been felt by the crew. The last statement captured by the crew cabin recorder came just half a second after this acceleration, when Pilot Michael J. Smith said, "Uh-oh."[23] Smith may also have been responding to onboard indications of main engine performance, or to falling pressures in the external fuel tank.
At T+73.124, the aft dome of the liquid hydrogen tank failed, producing a propulsive force that rammed the hydrogen tank into the LOX tank in the forward part of the ET. At the same time, the right SRB rotated about the forward attach strut, and struck the intertank structure. The external tank at this point suffered a complete structural failure, the LH2 and LOX tanks rupturing, mixing, and igniting, creating a fireball that enveloped the whole stack.[24]
The breakup of the vehicle began at T+73.162 seconds and at an altitude of 48,000 feet (15 km).[25] With the external tank disintegrating (and with the semi-detached right SRB contributing its thrust on an anomalous vector), Challenger veered from its correct attitude with respect to the local airflow, resulting in a load factor of up to 20 g, well over its design limit of 5 g, and it was quickly ripped apart by abnormal aerodynamic forces (the orbiter did not explode as is often suggested, as the force of the external tank breakup was well within its structural limits). The two SRBs, which could withstand greater aerodynamic loads, separated from the ET and continued in uncontrolled powered flight. The SRB casings were made of half-inch-thick (12.7 mm) steel and were much stronger than the orbiter and ET; thus, both SRBs survived the breakup of the space shuttle stack, even though the right SRB was still suffering the effects of the joint burn-through that had set the destruction of Challenger in motion.[21]
The more robustly constructed crew cabin also survived the breakup of the launch vehicle, as it was designed to survive 20 psi (140 kPa) while the estimated pressure it had been subjected to during orbiter breakup was only about 4–5 psi (28–34 kPa). While the SRBs were subsequently destroyed remotely by the Range Safety Officer, the detached cabin continued along a ballistic trajectory and was observed exiting the cloud of gases at T+75.237.[21] Twenty-five seconds after the breakup of the vehicle, the altitude of the crew compartment peaked at a height of 65,000 feet (20 km).[25] The cabin was stabilized during descent by the large mass of electrical wires trailing behind it. At T+76.437 the nose caps and drogue parachutes of the SRBs separated, as designed, and the drogue of the right-hand SRB was seen by a tracking camera, bearing the frustum and its location aids.[26]
The Thiokol engineers who had opposed the decision to launch were watching the events on television. They had believed that any O-ring failure would have occurred at liftoff, and thus were happy to see the shuttle successfully leave the launch pad. At about one minute after liftoff, a friend of Boisjoly said to him "Oh God. We made it. We made it!" Boisjoly recalled that when the shuttle was destroyed a few seconds later, "we all knew exactly what happened".[15] Veteran astronaut Robert Crippen was preparing to command STS-62-A, scheduled for mid-year, when his crew stopped training to watch the launch. Pete Aldridge recalled, "I was waiting for the orbiter, as we all were, to come out of the smoke. But as soon as that explosion occurred, Crippen obviously knew what it was. His head dropped. I remember this so distinctly".[27]
In Mission Control, there was a burst of static on the air-to-ground loop as Challenger disintegrated. Television screens showed a cloud of smoke and water vapor (the product of hydrogen+oxygen combustion) where Challenger had been, with pieces of debris falling toward the ocean. At about T+89, flight director Jay Greene prompted his Flight Dynamics Officer (FIDO) for information. FIDO responded that "the [radar] filter has discreting sources", a further indication that Challenger had broken into multiple pieces. Moments later, the Ground Control Officer reported "negative contact (and) loss of downlink" of radio and telemetry data from Challenger. Greene ordered his team to "watch your data carefully" and look for any sign that the Orbiter had escaped.
At T+110.250, the range safety officer (RSO) at the Cape Canaveral Air Force Station sent radio signals that activated the range safety system's "destruct" packages on board both solid rocket boosters. This was a normal contingency procedure, undertaken because the RSO judged the free-flying SRBs a possible threat to land or sea. The same destruct signal would have destroyed the external tank had it not already disintegrated.[28] The SRBs were close to the end of their scheduled burn (110 seconds after launch) and had nearly exhausted their propellants when the destruct command was sent, so very little, if any, explosive force was generated by this event.
Public affairs officer Steve Nesbitt reported: "Flight controllers here are looking very carefully at the situation. Obviously a major malfunction. We have no downlink."[19]
On the Mission Control loop, Greene ordered that contingency procedures be put into effect; these procedures included locking the doors of the control center, shutting down telephone communications with the outside world, and following checklists that ensured that the relevant data were correctly recorded and preserved.[29]
Nesbitt relayed this information to the public: "We have a report from the Flight Dynamics Officer that the vehicle has exploded. The flight director confirms that. We are looking at checking with the recovery forces to see what can be done at this point."[19]
The crew cabin, made of reinforced aluminum, was a particularly robust section of the orbiter.[30] During vehicle breakup, it detached in one piece and slowly tumbled into a ballistic arc. NASA estimated the load factor at separation to be between 12 and 20 g; within two seconds it had already dropped to below 4 g and within 10 seconds the cabin was in free fall. The forces involved at this stage were probably insufficient to cause major injury.
At least some of the crew were alive and at least briefly conscious after the breakup, as three of the four recovered Personal Egress Air Packs (PEAPs) on the flight deck were found to have been activated.[31] They belonged to mission specialists Judith Resnik and Ellison Onizuka, and to pilot Michael J. Smith.[32] The location of Smith's activation switch, on the back side of his seat, likely indicated that either Resnik or Onizuka activated it for him. Astronaut Mike Mullane wrote that "There had been nothing in our training concerning the activation of a PEAP in the event of an in-flight emergency. The fact that Judy or El had done so for Mike Smith made them heroic in my mind."[32] Investigators found their remaining unused air supply consistent with the expected consumption during the 2-minute-and-45-second post-breakup trajectory.
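The timeline figures here can be sanity-checked with a simple one-dimensional ballistic-descent model. The sketch below (Python) is an illustrative consistency check only, not part of any official analysis: it assumes an exponential atmosphere with an 8.5 km density scale height and a constant ballistic coefficient tuned so that the sea-level terminal velocity matches the roughly 207 mph (92.5 m/s) impact speed reported below. Under those assumptions, the fall from the 65,000-foot apogee takes a little over two minutes, consistent with the 2-minute-and-45-second post-breakup trajectory (which also includes the roughly 25-second coast up to apogee).

    import math

    G = 9.81          # gravitational acceleration, m/s^2
    RHO0 = 1.225      # sea-level air density, kg/m^3
    SCALE_H = 8500.0  # assumed atmospheric density scale height, m
    V_SL = 92.5       # reported impact speed, m/s (~207 mph)

    # Ballistic coefficient m/(Cd*A), tuned so the sea-level terminal
    # velocity sqrt(2*B*G/RHO0) equals the reported impact speed.
    B = RHO0 * V_SL ** 2 / (2 * G)

    h, v, t, dt = 19812.0, 0.0, 0.0, 0.01  # start at 65,000 ft apogee, at rest
    while h > 0:
        rho = RHO0 * math.exp(-h / SCALE_H)
        a = G - rho * v * v / (2 * B)      # gravity minus drag deceleration
        v += a * dt
        h -= v * dt
        t += dt

    print(f"descent time: {t:.0f} s, impact speed: {v * 2.237:.0f} mph")
    # With these assumptions the model gives a descent on the order of
    # 140 s and an impact speed near the reported ~207 mph.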
While analyzing the wreckage, investigators discovered that several electrical system switches on pilot Mike Smith's right-hand panel had been moved from their usual launch positions. Mike Mullane wrote, "These switches were protected with lever locks that required them to be pulled outward against a spring force before they could be moved to a new position." Later tests established that neither force of the explosion nor the impact with the ocean could have moved them, indicating that Smith made the switch changes, presumably in a futile attempt to restore electrical power to the cockpit after the crew cabin detached from the rest of the orbiter.[33]
Whether the crew members remained conscious long after the breakup is unknown, and largely depends on whether the detached crew cabin maintained pressure integrity. If it did not, the time of useful consciousness at that altitude is just a few seconds; the PEAPs supplied only unpressurized air, and hence would not have helped the crew to retain consciousness. If, on the other hand, the cabin was not depressurized or only slowly depressurizing, they may have been conscious for the entire fall until impact. Recovery of the cabin found that the middeck floor had not suffered buckling or tearing, as would result from a rapid decompression, thus providing some evidence that the depressurization may not have happened suddenly.
NASA routinely trained shuttle crews for splashdown events, but the cabin hit the ocean surface at roughly 207 mph (333 km/h), with an estimated deceleration at impact of well over 200 g, far beyond the structural limits of the crew compartment or crew survivability levels, and far greater than in almost any automobile, aircraft, or train accident. The crew would have been torn from their seats and killed instantly by the extreme impact force.[25]
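The "well over 200 g" figure follows from constant-deceleration kinematics, a = v^2/(2d). The stopping distances in the short sketch below are assumed values for illustration (the sources do not give an effective crush distance for the compartment), but stopping from about 207 mph within roughly two meters or less necessarily implies a deceleration above 200 g:

    V = 92.5                   # reported impact speed, m/s (~207 mph)
    for d in (1.0, 2.0, 3.0):  # assumed effective stopping distances, m
        a = V ** 2 / (2 * d)   # constant-deceleration model: v^2 = 2*a*d
        print(f"stopping within {d:.0f} m -> about {a / 9.81:.0f} g")
    # 1 m -> ~436 g, 2 m -> ~218 g, 3 m -> ~145 g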
On July 28, 1986, NASA's Associate Administrator for Space Flight, former astronaut Richard H. Truly, released a report on the deaths of the crew from the director of Space and Life Sciences at the Johnson Space Center, Joseph P. Kerwin. A medical doctor and former astronaut, Kerwin was a veteran of the 1973 Skylab 2 mission. According to the Kerwin Report:
The findings are inconclusive. The impact of the crew compartment with the ocean surface was so violent that evidence of damage occurring in the seconds which followed the disintegration was masked. Our final conclusions are: the cause of death of the Challenger astronauts cannot be positively determined; the forces to which the crew were exposed during Orbiter breakup were probably not sufficient to cause death or serious injury; and the crew possibly, but not certainly, lost consciousness in the seconds following Orbiter breakup due to in-flight loss of crew module pressure.
Some experts believe most if not all of the crew were alive and possibly conscious during the entire descent until impact with the ocean. Astronaut and NASA lead accident investigator Robert Overmyer said, "I not only flew with Dick Scobee, we owned a plane together, and I know Scob did everything he could to save his crew. Scob fought for any and every edge to survive. He flew that ship without wings all the way down ... they were alive."[30]
During powered flight of the space shuttle, crew escape was not possible. Launch escape systems were considered several times during shuttle development, but NASA's conclusion was that the shuttle's expected high reliability would preclude the need for one. Modified SR-71 Blackbird ejection seats and full pressure suits were used for the two-man crews on the first four shuttle orbital missions, which were considered test flights, but they were removed for the "operational" missions that followed. The Columbia Accident Investigation Board later declared, after the 2003 Columbia re-entry disaster, that the space shuttle system should never have been declared operational because it is experimental by nature due to the limited number of flights as compared to certified commercial aircraft. The multi-deck design of the crew cabin precluded use of such ejection seats for larger crews. Providing some sort of launch escape system had been considered, but deemed impractical due to "limited utility, technical complexity and excessive cost in dollars, weight or schedule delays".[28][a]
After the loss of Challenger, the question was re-opened, and NASA considered several different options, including ejector seats, tractor rockets, and emergency egress through the bottom of the orbiter. NASA once again concluded that all of the launch escape systems considered would be impractical due to the sweeping vehicle modifications that would have been necessary and the resultant limitations on crew size. A system was designed to give the crew the option to leave the shuttle during gliding flight, but this system would not have been usable in the Challenger situation.[34]
President Ronald Reagan had been scheduled to give the 1986 State of the Union Address on the evening of the Challenger disaster. After a discussion with his aides, Reagan postponed the State of the Union, and instead addressed the nation about the disaster from the Oval Office of the White House. Reagan's national address was written by Peggy Noonan, and was listed as one of the most significant speeches of the 20th century in a survey of 137 communication scholars.[35][36] It finished with the following statement, which quoted from the poem "High Flight" by John Gillespie Magee Jr.:
We will never forget them, nor the last time we saw them, this morning, as they prepared for their journey and waved goodbye and 'slipped the surly bonds of Earth' to 'touch the face of God.'[37]
Three days later, Ronald and Nancy Reagan traveled to the Johnson Space Center to speak at a memorial service honoring the crew members, where he stated:
Sometimes, when we reach for the stars, we fall short. But we must pick ourselves up again and press on despite the pain.[38]
It was attended by 6,000 NASA employees and 4,000 guests,[39][40] as well as by the families of the crew.[41]:17 During the ceremony, an Air Force band led the singing of "God Bless America" as NASA T-38 Talon jets flew directly over the scene, in the traditional missing-man formation.[39][40] All activities were broadcast live by the national television networks.[39]
Rumors surfaced in the weeks after the disaster that the White House itself had pressed for Challenger to launch before the scheduled January 28 State of the Union address, because Reagan had intended to mention the launch in his remarks.[42] Three weeks before the State of the Union address was to have been given, NASA did suggest that Reagan refer to the assumed successful Challenger launch, and mention Christa McAuliffe's shuttle voyage as "'the ultimate field trip' of an American schoolteacher".[42] In March 1986, the White House released a copy of the original State of the Union speech as it would have been given before the disaster. In that speech, Reagan intended to mention an X-ray experiment launched on Challenger and designed by a guest he had invited to the address, but he did not plan any other specific discussion of the launch or NASA in the address.[42][43] Inclusion of extensive discussion of NASA, the launch, or McAuliffe in the State of the Union address effectively stopped at the desk of Cabinet Secretary Alfred H. Kingon.[42] In the rescheduled State of the Union address on February 4, President Reagan did mention the deceased Challenger crew members, and modified his remarks to describe the X-ray experiment as "launched and lost".
In the first minutes after the accident, recovery efforts were begun by NASA's Launch Recovery Director, who ordered the ships normally used by NASA for recovery of the solid rocket boosters to be sent to the location of the water impact. Search and rescue aircraft were also dispatched. At this stage, since debris was still falling, the Range Safety Officer (RSO) held both aircraft and ships out of the impact area until it was considered safe for them to enter. It was about an hour until the RSO allowed the recovery forces to begin their work.[44]
The search and rescue operations that took place in the first week after the Challenger accident were managed by the Department of Defense on behalf of NASA, with assistance from the United States Coast Guard, and mostly involved surface searches. According to the Coast Guard, "the operation was the largest surface search in which they had participated."[44] This phase of operations lasted until February 7. In order to discourage scavengers, NASA did not disclose the exact location of the debris field and insisted on secrecy, utilizing code names such as "Target 67" to refer to the crew cabin and "Tom O'Malley" to refer to any crew remains.[45] Radio Shack stores in the Cape Canaveral area were soon completely sold out of radios that could tune into the frequency used by Coast Guard vessels.[45] Thereafter, recovery efforts were managed by a Search, Recovery, and Reconstruction team; its aim was to salvage debris that would help in determining the cause of the accident. Sonar, divers, remotely operated submersibles and crewed submersibles were all used during the search, which covered an area of 486 square nautical miles (1,670 km2), and took place at water depths between 70 feet (21 m) and 1,200 feet (370 m).[46] On March 7, divers from the USS Preserver identified what might be the crew compartment on the ocean floor.[47][48] The finding, along with discovery of the remains of all seven crew members, was confirmed the next day and on March 9, NASA announced the finding to the press.[49] The crew cabin was severely crushed and fragmented from the extreme impact forces; one member of the search team described it as "largely a pile of rubble with wires protruding from it". The largest intact section was the rear wall containing the two payload bay windows and the airlock. All windows in the cabin had been destroyed, with only small bits of glass still attached to the frames. Impact forces appeared to be greatest on the left side, indicating that it had struck the water in a nose-down, left-end-first position.
Inside the twisted debris of the crew cabin were the bodies of the astronauts which, after weeks of immersion in salt water and exposure to scavenging marine life, were in a "semi-liquefied state that bore little resemblance to anything living". However, according to John Devlin, the skipper of the USS Preserver, they "were not as badly mangled as you'd see in some aircraft accidents". Lt. Cmdr James Simpson of the Coast Guard reported finding a helmet with ears and a scalp in it.[50] Judith Resnik was the first to be removed, followed by Christa McAuliffe, with more remains retrieved over several hours. Due to the hazardous nature of the recovery operation (the cabin was filled with large pieces of protruding jagged metal), the Navy divers protested that they would not go on with the work unless the cabin was hauled onto the ship's deck.
During the recovery of the remains of the crew, Gregory Jarvis's body floated out of the shattered crew compartment and was lost to the diving team. A day later, it was seen floating on the ocean's surface. It sank as a team prepared to pull it from the water. Determined not to end the recovery operations without retrieving Jarvis, Crippen rented a fishing boat at his own expense and went searching for the body. On April 15, near the end of the salvage operations, the Navy divers found Jarvis. His body had settled to the sea floor, 101.2 feet (30.8 m) below the surface, some 0.7 nautical miles (1.3 km; 0.81 mi) from the final resting place of the crew compartment. The body was recovered and brought to the surface before being processed with the other crew members and then prepared for release to Jarvis's family.[51]
Navy pathologists performed autopsies on the crew members, but due to the poor condition of the bodies, the exact cause of death could not be determined for any of them. The crew transfer took place on April 29, 1986, three months and one day after the accident. Seven hearses carried the crew's remains from the Life Sciences Facility on Cape Canaveral to a waiting MAC C-141 aircraft. Their caskets were each draped with an American flag and carried past an honor guard and followed by an astronaut escort. The astronaut escorts for the Challenger crew were Dan Brandenstein, James Buchli, Norm Thagard, Charles Bolden, Tammy Jernigan, Dick Richards, and Loren Shriver. Once the crew's remains were aboard the jet, they were flown to Dover Air Force Base in Delaware to be processed and then released to their relatives.
It had been suggested early in the investigation that the accident was caused by inadvertent detonation of the Range Safety destruct charges on the external tank, but the charges were recovered mostly intact and a quick overview of telemetry data immediately ruled out that theory.
The three shuttle main engines were found largely intact and still attached to the thrust assembly despite extensive damage from impact with the ocean, marine life, and immersion in salt water. They had considerable heat damage due to a LOX-rich shutdown caused by the drop in hydrogen fuel pressure as the external tank began to fail. The memory units from Engines 1 and 2 were recovered, cleaned, and their contents analyzed, which confirmed normal engine operation until LH2 starvation began at T+72 seconds. Loss of fuel pressure and rising combustion chamber temperatures caused the computers to shut off the engines. Since there was no evidence of abnormal SSME behavior until 72 seconds (only about one second before the breakup of Challenger), the engines were ruled out as a contributing factor in the accident.
Other recovered orbiter components showed no indication of pre-breakup malfunction. Recovered parts of the TDRSS satellite also did not disclose any abnormalities other than damage caused by vehicle breakup, impact, and immersion in salt water. The solid rocket motor boost stage for the payload had not ignited either and was quickly ruled out as a cause of the accident.
The solid rocket booster debris had no signs of explosion (other than the Range Safety charges splitting the casings open) or of propellant debonding and cracking. Since it was known that the RSO had deliberately destroyed the SRBs following vehicle breakup, the idea of the destruct charges detonating accidentally was ruled out. Premature separation of the SRBs from the stack or inadvertent activation of the recovery system was also considered, but telemetry data quickly disproved that idea. Nor was there any evidence of in-flight structural failure, since visual and telemetry evidence showed that the SRBs remained structurally intact up to and beyond vehicle breakup. The aft field joint on the right SRB did show extensive burn damage.
Telemetry proved that the right SRB, after the failure of the lower struts, had come loose and struck the external tank. The exact point where the struts broke could not be determined from film of the launch, nor were the struts or the adjacent section of the external tank recovered during salvage operations. Based on the location of the rupture in the right SRB, the P12 strut most likely failed first. The SRB's nose cone also exhibited some impact damage from this behavior (for comparison, the left SRB nose cone had no damage at all) and the intertank region had signs of impact damage as well. In addition, the orbiter's right wing had impact and burn damage from the right SRB colliding with it following vehicle breakup.
Most of the initially considered failure modes were soon ruled out and by May 1, enough of the right solid rocket booster had been recovered to determine the original cause of the accident, and the major salvage operations were concluded. While some shallow-water recovery efforts continued, this was unconnected with the accident investigation; it aimed to recover debris for use in NASA's studies of the properties of materials used in spacecraft and launch vehicles.[44] The recovery operation was able to pull 15 short tons (14 t) of debris from the ocean; this means that 55% of Challenger, 5% of the crew cabin and 65% of the satellite cargo are still missing.[52] Some of the missing debris continued to wash up on Florida shores for some years, such as on December 17, 1996, nearly 11 years after the incident, when two large pieces of the shuttle were found at Cocoa Beach.[53] Under 18 U.S.C. § 641 it is against the law to be in possession of Challenger debris, and any newly discovered pieces must be turned over to NASA.[54]
On board Challenger was an American flag, dubbed the Challenger flag, that was sponsored by Boy Scout Troop 514 of Monument, Colorado. It was recovered intact, still sealed in its plastic container.[55]
A soccer ball from the personal effects locker of Mission Specialist Ellison Onizuka was also recovered intact from the wreckage, and was later flown to the International Space Station aboard a Soyuz by Expedition 49 astronaut Robert S. Kimbrough. It is currently on display at Clear Lake High School in Houston, which was attended by Onizuka's children.[56]
All recovered non-organic debris from Challenger was ultimately buried in a former missile silo at Cape Canaveral Air Force Station Launch Complex 31.
The remains of the crew that were identifiable were returned to their families on April 29, 1986. Three of the crew members, Judith Resnik, Dick Scobee, and Capt. Michael J. Smith, were buried by their families at Arlington National Cemetery at individual grave sites. Mission Specialist Lt. Col. Ellison Onizuka was buried at the National Memorial Cemetery of the Pacific in Honolulu, Hawaii. Ronald McNair was buried in Rest Lawn Memorial Park in Lake City, South Carolina. Christa McAuliffe was buried at Calvary Cemetery in her hometown of Concord, New Hampshire.[57] Gregory Jarvis was cremated, and his ashes scattered in the Pacific Ocean. Unidentified crew remains were buried communally at the Space Shuttle Challenger Memorial in Arlington on May 20, 1986.[58]
As a result of the accident, several National Reconnaissance Office (NRO) satellites that only the shuttle could launch were grounded. This was a dilemma the NRO had feared since the 1970s, when the shuttle was designated as the United States' primary launch system for all government and commercial payloads.[59][60] Compounding NASA's problems were difficulties with the Titan and Delta rocket programs, which suffered their own unexpected failures around the time of the Challenger disaster.
On August 28, 1985, a Titan 34D[61] carrying a KH-11 Kennen satellite exploded after liftoff over Vandenberg Air Force Base, when the first stage propellant feed system failed. It was the first failure of a Titan missile since 1978. On April 18, 1986, another Titan 34D-9[61][62] carrying a classified payload,[62] said to be a Big Bird spy satellite, exploded at about 830 feet (250 m) above the pad after liftoff over Vandenberg AFB, when a burnthrough occurred on one of the rocket boosters. On May 3, 1986, a Delta 3914[61] carrying the GOES-G weather satellite[63] exploded 71 seconds after liftoff over Cape Canaveral Air Force Station due to an electrical malfunction on the Delta's first stage, which prompted the range safety officer on the ground to decide to destroy the rocket, just as a few of the rocket's boosters were jettisoned. As a result of these three failures, NASA decided to cancel all Titan and Delta launches from Cape Canaveral and Vandenberg for four months until the problems in the rockets' designs were solved.
A separate but related accident occurred at the Pacific Engineering and Production Company of Nevada (PEPCON) plant in Henderson, Nevada. Because the shuttle fleet was grounded, excess ammonium perchlorate manufactured for use in solid rocket fuel was being kept on site. This stored ammonium perchlorate later caught fire, and the resulting explosion destroyed the PEPCON facility and the neighboring Kidd & Co marshmallow factory.[64]
In the aftermath of the accident, NASA was criticized for its lack of openness with the press. The New York Times noted on the day after the accident that "neither Jay Greene, flight director for the ascent, nor any other person in the control room, was made available to the press by the space agency."[65] In the absence of reliable sources, the press turned to speculation; both The New York Times and United Press International ran stories suggesting that a fault with the space shuttle external tank had caused the accident, despite the fact that NASA's internal investigation had quickly focused in on the solid rocket boosters.[66][67] "The space agency," wrote space reporter William Harwood, "stuck to its policy of strict secrecy about the details of the investigation, an uncharacteristic stance for an agency that long prided itself on openness."[66]
The Presidential Commission on the Space Shuttle Challenger Accident, also known as the Rogers Commission after its chairman, was formed to investigate the disaster. The commission members were Chairman William P. Rogers, Vice Chairman Neil Armstrong, David Acheson, Eugene Covert, Richard Feynman, Robert Hotz, Donald Kutyna, Sally Ride, Robert Rummel, Joseph Sutter, Arthur Walker, Albert Wheelon, and Chuck Yeager. The commission worked for several months and published a report of its findings. It found that the Challenger accident was caused by a failure in the O-rings sealing a joint on the right solid rocket booster, which allowed pressurized hot gases and eventually flame to "blow by" the O-ring and make contact with the adjacent external tank, causing structural failure. The failure of the O-rings was attributed to a faulty design, whose performance could be too easily compromised by factors including the low ambient temperature on the day of launch. The O-rings would not work properly at ambient temperatures below 50 °F (10 °C), and it was 36 °F (2 °C) on the morning of the launch.[68][69] According to the report, in July 1985 a memorandum was sent to Michael Weeks, the agency's No. 2 shuttle official, and Jesse W. Moore, the associate administrator in charge of the shuttle program, calling attention to the "failure history" of the O-rings and recommending a review of the matter.
More broadly, the report also considered the contributing causes of the accident. Most salient was the failure of both NASA and Morton-Thiokol to respond adequately to the danger posed by the deficient joint design. Rather than redesigning the joint, they came to define the problem as an acceptable flight risk. The report found that managers at Marshall had known about the flawed design since 1977, but never discussed the problem outside their reporting channels with Thiokol—a flagrant violation of NASA regulations. Even when it became more apparent how serious the flaw was, no one at Marshall considered grounding the shuttles until a fix could be implemented. On the contrary, Marshall managers went as far as to issue and waive six launch constraints related to the O-rings.[8] The report also strongly criticized the decision-making process that led to the launch of Challenger, saying that it was seriously flawed:[16]
failures in communication ... resulted in a decision to launch 51-L based on incomplete and sometimes misleading information, a conflict between engineering data and management judgments, and a NASA management structure that permitted internal flight safety problems to bypass key Shuttle managers.[16]
One of the commission's members was theoretical physicist Richard Feynman. Feynman, who was then seriously ill with cancer, was reluctant to undertake the job. He did so to find the root cause of the disaster and to speak plainly to the public about his findings.[70]
Before going to Washington, D.C., Feynman did his own investigation. He became suspicious about the O-rings. "O-rings show scorching in Clovis check," he scribbled in his notes. "Once a small hole burns through generates a large hole very fast! Few seconds catastrophic failure."[71] At the start of the investigation, fellow members Dr. Sally Ride and General Donald J. Kutyna told Feynman that the O-rings had not been tested at temperatures below 50 °F (10 °C).[72] During a televised hearing, Feynman demonstrated how the O-rings became less resilient and subject to seal failures at ice-cold temperatures by immersing a sample of the material in a glass of ice water. While other members of the Commission met with NASA and supplier top management, Feynman sought out the engineers and technicians for the answers.[73] He was critical of flaws in NASA's "safety culture", so much so that he threatened to remove his name from the report unless it included his personal observations on the reliability of the shuttle, which appeared as Appendix F.[73][74] In the appendix, he argued that the estimates of reliability offered by NASA management were wildly unrealistic, differing as much as a thousandfold from the estimates of working engineers. "For a successful technology," he concluded, "reality must take precedence over public relations, for nature cannot be fooled."[75]
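The scale of that thousandfold disagreement is easy to make concrete. The short calculation below is an illustration using the round numbers Feynman cited in Appendix F (roughly 1 in 100,000 from management versus roughly 1 in 100 from working engineers), not a computation from the report itself:

    # Chance of at least one vehicle loss over a program of 100 flights,
    # under each per-flight failure-probability estimate.
    for label, p in (("management, ~1 in 100,000", 1e-5),
                     ("engineers,  ~1 in 100    ", 1e-2)):
        loss = 1 - (1 - p) ** 100
        print(f"{label}: {100 * loss:.1f}% chance of a loss in 100 flights")
    # ~0.1% for the management estimate versus ~63.4% for the engineers'.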
|
169 |
+
|
170 |
+
The U.S. House Committee on Science and Technology also conducted hearings and, on October 29, 1986, released its own report on the Challenger accident.[76] The committee reviewed the findings of the Rogers Commission as part of its investigation and agreed with the Rogers Commission as to the technical causes of the accident. It differed from the commission, however, in its assessment of the accident's contributing causes:
the Committee feels that the underlying problem which led to the Challenger accident was not poor communication or underlying procedures as implied by the Rogers Commission conclusion. Rather, the fundamental problem was poor technical decision-making over a period of several years by top NASA and contractor personnel, who failed to act decisively to solve the increasingly serious anomalies in the Solid Rocket Booster joints.[76]
After the Challenger accident, further shuttle flights were suspended, pending the results of the Rogers Commission investigation. Whereas NASA had held an internal inquiry into the Apollo 1 fire in 1967, its actions after Challenger were more constrained by the judgment of outside bodies. The Rogers Commission offered nine recommendations on improving safety in the space shuttle program, and NASA was directed by President Reagan to report back within thirty days as to how it planned to implement those recommendations.[77]
At the time of the disaster, the Air Force had performed extensive modifications to its Space Launch Complex 6 (SLC-6, pronounced "Slick Six") at Vandenberg Air Force Base in California for launching and landing classified Shuttle missions carrying satellites into polar orbit, and was planning its first polar flight for October 15, 1986. The complex, originally built for the Manned Orbiting Laboratory project cancelled in 1969, was proving problematic and expensive to modify,[78] costing over $4 billion (equivalent to $9.3 billion today). The Challenger loss motivated the Air Force to set in motion a chain of events that finally led to the May 13, 1988, decision to cancel its Vandenberg Shuttle launch plans in favor of the Titan IV uncrewed launch vehicle.
In response to the commission's recommendation, NASA initiated a total redesign of the space shuttle's solid rocket boosters, which was watched over by an independent oversight group as stipulated by the commission.[77] NASA's contract with Morton-Thiokol, the contractor responsible for the solid rocket boosters, included a clause stating that in the event of a failure leading to "loss of life or mission", Thiokol would forfeit $10 million (equivalent to $23.3 million today) of its incentive fee and formally accept legal liability for the failure. After the Challenger accident, Thiokol agreed to "voluntarily accept" the monetary penalty in exchange for not being forced to accept liability.[41]:355
NASA also created a new Office of Safety, Reliability and Quality Assurance, headed as the commission had specified by a NASA associate administrator who reported directly to the NASA administrator. George Martin, formerly of Martin Marietta, was appointed to this position.[79] Former Challenger flight director Jay Greene became chief of the Safety Division of the directorate.[80]
The unrealistically optimistic launch schedule pursued by NASA had been criticized by the Rogers Commission as a possible contributing cause of the accident. After the accident, NASA aimed for a more realistic shuttle flight rate: it added another orbiter, Endeavour, to the space shuttle fleet to replace Challenger, and it worked with the Department of Defense to put more satellites in orbit using expendable launch vehicles rather than the shuttle.[81] In August 1986, President Reagan also announced that the shuttle would no longer carry commercial satellite payloads.[81] After a 32-month hiatus, the next shuttle mission, STS-26, was launched on September 29, 1988.
Although changes were made by NASA after the Challenger accident, many commentators have argued that the changes in its management structure and organizational culture were neither deep nor long-lasting.
After the Space Shuttle Columbia disaster in 2003, attention once again focused on the attitude of NASA management towards safety issues. The Columbia Accident Investigation Board (CAIB) concluded that NASA had failed to learn many of the lessons of Challenger. In particular, the agency had not set up a truly independent office for safety oversight; the CAIB decided that in this area, "NASA's response to the Rogers Commission did not meet the Commission's intent".[82] The CAIB believed that "the causes of the institutional failure responsible for Challenger have not been fixed", saying that the same "flawed decision making process" that had resulted in the Challenger accident was responsible for Columbia's destruction seventeen years later.[83]
While the presence of New Hampshire's Christa McAuliffe, a member of the Teacher in Space program, on the Challenger crew had provoked some media interest, there was little live broadcast coverage of the launch. The only live national TV coverage available publicly was provided by CNN.[84] Los Angeles station KNBC also carried the launch, with anchor Kent Shocknek describing the tragedy as it happened.[85] Live radio coverage of the launch and explosion was heard on ABC Radio, anchored by Vic Ratner and Bob Walker.[86] CBS Radio broadcast the launch live, then returned to regularly scheduled programming just a few seconds before the explosion, forcing anchor Christopher Glenn to scramble back on the air to report what had happened.[87]
NBC, CBS, and ABC all broke into regular programming shortly after the accident; NBC's John Palmer announced there had been "a major problem" with the launch. When cameras caught live video of something descending by parachute into the area where Challenger debris was falling, both Palmer and CBS anchor Dan Rather reacted with confusion and speculation that a crew member might have ejected from the shuttle and survived; in fact, the shuttle had no individual ejection seats or crew escape capsule. Mission control then identified the object as a paramedic parachuting into the area, but this too was incorrect, being based on internal speculation at mission control. The chute was actually the parachute and nose cone from one of the solid rocket boosters, which had been destroyed by the range safety officer after the explosion.[88] Due to McAuliffe's presence on the mission, NASA arranged for many US public schools to view the launch live on NASA TV.[89] As a result, many who were schoolchildren in the US in 1986 had the opportunity to view the launch live. After the accident, 17 percent of respondents in one study reported that they had seen the shuttle launch, while 85 percent said that they had learned of the accident within an hour. As the authors of the paper reported, "only two studies have revealed more rapid dissemination [of news]." One of those studies is of the spread of news in Dallas after President John F. Kennedy's assassination, while the other is of the spread of news among students at Kent State regarding President Franklin D. Roosevelt's death.[90] Another study noted that "even those who were not watching television at the time of the disaster were almost certain to see the graphic pictures of the accident replayed as the television networks reported the story almost continuously for the rest of the day."[91] Children were even more likely than adults to have seen the accident live, since many children—48 percent of nine- to thirteen-year-olds, according to a New York Times poll—watched the launch at school.[91]
In the days following the accident, press interest remained high. While only 535 reporters had been accredited to cover the launch, three days later there were 1,467 reporters at the Kennedy Space Center and another 1,040 at the Johnson Space Center. The event made headlines in newspapers worldwide.[66]
The Challenger accident has frequently been used as a case study in the study of subjects such as engineering safety, the ethics of whistle-blowing, communications, group decision-making, and the dangers of groupthink. It is part of the required reading for engineers seeking a professional license in Canada and other countries.[92] Roger Boisjoly, the engineer who had warned about the effect of cold weather on the O-rings, left his job at Morton-Thiokol and became a speaker on workplace ethics.[93] He argues that the caucus called by Morton-Thiokol managers, which resulted in a recommendation to launch, "constituted the unethical decision-making forum resulting from intense customer intimidation."[94] For his honesty and integrity leading up to and directly following the shuttle disaster, Roger Boisjoly was awarded the Prize for Scientific Freedom and Responsibility from the American Association for the Advancement of Science. Many colleges and universities have also used the accident in classes on the ethics of engineering.[95][96]
Information designer Edward Tufte has claimed that the Challenger accident is an example of the problems that can occur from the lack of clarity in the presentation of information. He argues that if Morton-Thiokol engineers had more clearly presented the data that they had on the relationship between low temperatures and burn-through in the solid rocket booster joints, they might have succeeded in persuading NASA managers to cancel the launch. To demonstrate this, he took all of the data he claimed the engineers had presented during the briefing, and reformatted it onto a single graph of O-ring damage versus external launch temperature, showing the effects of cold on the degree of O-ring damage. Tufte then placed the proposed launch of Challenger on the graph according to its predicted temperature at launch. According to Tufte, the launch temperature of Challenger was so far below the coldest launch, with the worst damage seen to date, that even a casual observer could have determined that the risk of disaster was severe.[97]
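To make the reasoning concrete, a minimal sketch of such a graph follows. The temperature and damage numbers are placeholders for illustration only, not the actual flight data, and the forecast value is likewise hypothetical:

import matplotlib.pyplot as plt

# Placeholder (hypothetical) per-flight values: joint temperature at
# launch (°F) versus an O-ring damage index -- NOT the actual STS data.
temps_f = [53, 57, 58, 63, 66, 67, 68, 70, 72, 75, 76, 79, 81]
damage  = [11,  4,  4,  2,  0,  0,  0,  4,  0,  4,  0,  0,  0]
forecast_f = 29  # hypothetical forecast launch-time temperature (°F)

# Tufte's point: plot *every* prior flight, damaged or not, then mark
# where the proposed launch falls relative to all prior experience.
plt.scatter(temps_f, damage, label="prior flights (placeholder data)")
plt.axvline(forecast_f, linestyle="--", color="red",
            label="forecast launch temperature")
plt.xlabel("Joint temperature at launch (°F)")
plt.ylabel("O-ring damage index")
plt.legend()
plt.show()

Even with these made-up values, the proposed launch sits far to the left of all prior experience, which is the visual argument Tufte says the engineers failed to make.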
Tufte has also argued that poor presentation of information may have also affected NASA decisions during the last flight of the space shuttle Columbia.[98]
Boisjoly, Wade Robison, a Rochester Institute of Technology professor, and their colleagues have vigorously repudiated Tufte's conclusions about the Morton-Thiokol engineers' role in the loss of Challenger. First, they argue that the engineers didn't have the information available as Tufte claimed: "But they did not know the temperatures even though they did try to obtain that information. Tufte has not gotten the facts right even though the information was available to him had he looked for it."[99] They further argue that Tufte "misunderstands thoroughly the argument and evidence the engineers gave."[99] They also criticized Tufte's diagram as "fatally flawed by Tufte's own criteria. The vertical axis tracks the wrong effect, and the horizontal axis cites temperatures not available to the engineers and, in addition, mixes O-ring temperatures and ambient air temperature as though the two were the same."[99]
The Challenger disaster also provided a chance to see how traumatic events affected children's psyches. It was known immediately that large numbers of children had seen the accident live or in same-day replays, and this influenced the speech President Reagan gave that evening.
I want to say something to the schoolchildren of America who were watching the live coverage of the shuttle's takeoff. I know it is hard to understand, but sometimes painful things like this happen. It's all part of the process of exploration and discovery. It's all part of taking a chance and expanding man's horizons. The future doesn't belong to the fainthearted; it belongs to the brave. The Challenger crew was pulling us into the future, and we'll continue to follow them.
At least one psychological study has found that memories of the Challenger explosion were similar to memories of experiencing single, unrepeated traumas. Children's memories of Challenger were often clear and consistent, and even details of personal placement, such as who they were with or what they were doing when they heard the news, were remembered well. In one U.S. study, children's memories were recorded and then tested again later. Children on the East Coast recalled the event more easily than children on the West Coast, owing to the time difference: children on the East Coast either saw the explosion on TV while in school or heard people talking about it, while on the other side of the country most children were either on their way to school or just beginning their morning classes. Researchers found that those children who saw the explosion on TV had a stronger emotional connection to the event, and thus an easier time remembering it. When the children's memories were tested after one year, those on the East Coast still recalled the event better than their West Coast counterparts. Regardless of where they were when it happened, the Challenger explosion was an important event that many children easily remembered.[100]
After the accident, NASA's Space Shuttle fleet was grounded for almost three years while the investigation, hearings, engineering redesign of the SRBs, and other behind-the-scenes technical and management reviews, changes, and preparations took place. At 11:37 a.m. on September 29, 1988, Space Shuttle Discovery lifted off with a crew of five, all veteran astronauts,[101] from Kennedy Space Center pad 39-B. Its crew included Richard O. Covey, who had given the last status callout to Challenger before its breakup, "Challenger, go at throttle up". The shuttle carried a Tracking and Data Relay Satellite, TDRS-C (named TDRS-3 after deployment), which replaced TDRS-B, the satellite that was launched and lost on Challenger. The "Return to Flight" launch of Discovery also represented a test of the redesigned boosters, a shift to a more conservative stance on safety (it was the first time a crew had launched in pressure suits since STS-4, the last of the four initial Shuttle test flights), and a chance to restore national pride in the American space program, especially crewed space flight. The mission, STS-26, was a success (with only two minor system failures, one of a cabin cooling system and one of a Ku-band antenna), and a regular schedule of STS flights followed, continuing without extended interruption until the 2003 Columbia disaster.
Barbara Morgan, the backup for McAuliffe who trained with her in the Teacher in Space program and was at KSC watching her launch on January 28, 1986, flew on STS-118 as a Mission Specialist in August 2007.
NASA had intended to send a wide range of civilian passengers into space on subsequent flights. These plans were all scrapped immediately following the Challenger disaster.
NBC News's Cape Canaveral correspondent Jay Barbree was among 40 candidates in NASA's Journalist in Space Program; the first journalist had been due to fly on the shuttle Challenger in September 1986. In 1984, prior to establishing the Teacher in Space Program (TISP), NASA had created the Space Flight Participant Program, whose aim was "to select teachers, journalists, artists, and other people who could bring their unique perspective to the human spaceflight experience as a passenger on the space shuttle."[citation needed] Notable among early potential passengers was Caroll Spinney, who from 1969 to 2018 played the characters Big Bird and Oscar the Grouch on the children's television show Sesame Street. In 2014, after the situation was mentioned in the documentary I Am Big Bird, NASA confirmed it, stating:
A review of past documentation shows there were initial conversations with Sesame Street regarding their potential participation on a Challenger flight, but that plan was never approved.
Spinney went on to say that the Big Bird costume was prohibitive in the tight quarters of the NASA Space Shuttles and that NASA had moved on.[102]
The families of the Challenger crew organized the Challenger Center for Space Science Education as a permanent memorial to the crew. Forty-three learning centers and one headquarters office have been established by this non-profit organization.[103][citation needed]
The astronauts' names are among those of several astronauts and cosmonauts who have died in the line of duty, listed on the Space Mirror Memorial at the Kennedy Space Center Visitor Complex in Merritt Island, Florida.
The final episode of the second season of Punky Brewster is notable for centering on the then-very-recent, real-life Space Shuttle Challenger disaster. Punky and her classmates watch the live coverage of the shuttle launch in Mike Fulton's class. After the accident occurs, Punky is traumatized and finds her dreams of becoming an astronaut crushed. She writes a letter to NASA and is visited by special guest star Buzz Aldrin.[104][105]
Seven asteroids were named after the crew members: 3350 Scobee, 3351 Smith, 3352 McAuliffe, 3353 Jarvis, 3354 McNair, 3355 Onizuka, and 3356 Resnik. The approved naming citation was published by the Minor Planet Center on March 26, 1986 (M.P.C. 10550).[106]
On the evening of April 5, 1986, the Rendez-vous Houston concert commemorated and celebrated the crew of the Challenger. It featured a live performance by musician Jean Michel Jarre, a friend of crew member Ron McNair. McNair had been due to play the saxophone from space during the track "Last Rendez-Vous"; it would have been the first musical piece professionally recorded in space.[citation needed] His substitute for the concert was Houston native Kirk Whalum.[citation needed]
In June 1986, singer-songwriter John Denver, a pilot with a deep interest in going to space himself, released the album One World, which includes the song "Flying For Me", a tribute to the Challenger crew.
Hard rock band Keel dedicated their 1986 album The Final Frontier to the "Astronauts of the Space Shuttle Challenger the 28 of January 1986". On the cover of the album is a stylized "space shuttle" type vehicle with the band's logo, flying over the sea.[107]
Star Trek IV: The Voyage Home was dedicated to the crew of the Challenger. Principal photography for The Voyage Home began four weeks after Challenger and her crew were lost.
An unpainted decorative oval in the Brumidi Corridors of the United States Capitol was finished with a portrait depicting the crew by Charles Schmidt in 1987.[108] The scene was painted on canvas and then applied to the wall.[108]
In 1988, seven craters on the far side of the Moon, within the Apollo Basin, were named after the fallen astronauts by the IAU.[109]
In Huntsville, Alabama, home of Marshall Space Flight Center, Challenger Elementary School, Challenger Middle School, and the Ronald E. McNair Junior High School are all named in memory of the crew. Huntsville has also named new schools posthumously in memory of each of the Apollo 1 astronauts and the final Space Shuttle Columbia crew. Streets in a neighborhood established in the late 1980s in nearby Decatur are named in memory of each of the Challenger crew members (Onizuka excluded), as well as the three deceased Apollo 1 astronauts.[citation needed] Julian Harris Elementary School is located on McAuliffe Drive, and its mascot is the Challengers.
The Squadron "Challenger" 17 is an Air Force unit in the Texas A&M Corps of Cadets that emphasizes athletic and academic success in honor of the Challenger crew.[110] The unit was established in 1992.[111]
In San Antonio, Texas, Scobee Elementary School opened in 1987, the year after the disaster; students at the school are referred to as "Challengers". An elementary school in Nogales, Arizona, commemorates the accident in its name, Challenger Elementary School, and in its motto, "Reach for the sky". The suburbs of Seattle, Washington, are home to Challenger Elementary School in Issaquah,[112] Christa McAuliffe Elementary School in Sammamish,[113] and Dick Scobee Elementary in Auburn. In San Diego, California, the next-opened public middle school in the San Diego Unified School District was named Challenger Middle School.[114] The City of Palmdale, the birthplace of the entire shuttle fleet, and its neighbor, the City of Lancaster, California, both renamed 10th Street East, from Avenue M to Edwards Air Force Base, to Challenger Way in honor of the lost shuttle and its crew.[citation needed] This is the road along which Challenger, Enterprise, and Columbia were all towed in their initial move from U.S. Air Force Plant 42 to Edwards AFB after completion, since Palmdale airport had not yet installed the shuttle crane for placing an orbiter on the 747 Shuttle Carrier Aircraft.[citation needed] In addition, the City of Lancaster has built Challenger Middle School and Challenger Memorial Hall at the former site of the Antelope Valley Fairgrounds, in tribute to the Challenger shuttle and crew.[citation needed] Another school, the Sharon Christa McAuliffe Elementary School, was opened in Chicago, IL.[115] The public Peers Park in Palo Alto, California, features a Challenger Memorial Grove that includes redwood trees grown from seeds carried aboard Challenger in 1985.[116] In Boise, ID, Boise High School has a memorial to the Challenger astronauts. In 1986, the Challenger Seven Memorial Park in Webster, Texas, was dedicated in remembrance of the event.[117] Their Spirits Circle the Earth was installed in Columbus, Ohio, in 1987.
In Port Saint John, Florida, within Brevard County, the same county in which the Kennedy Space Center resides, is Challenger 7 Elementary School, named in memory of the seven crew members of STS-51-L.[118] There is a middle school in neighboring Rockledge, McNair Magnet School, named after astronaut Ronald McNair.[119] A middle school (formerly a high school) in Mohawk, New York is named after Payload Specialist Gregory Jarvis. There are schools named in honor of Christa McAuliffe in Boynton Beach, Florida; Denver, Colorado; Riverside, California; Saratoga, California; Stockton, California; Lowell, Massachusetts; Houston, Texas; Germantown, Maryland;[120] Green Bay, Wisconsin;[121] Hastings, Minnesota;[122] Lenexa, Kansas; and Jackson, New Jersey.[123] The McAuliffe-Shepard Discovery Center, a science museum and planetarium in Concord, New Hampshire, is partly named in her honor. The drawbridge over the barge canal on State Road 3 on Merritt Island, Florida, is named the Christa McAuliffe Memorial Bridge.[124] In Oxnard, CA, McAuliffe Elementary School is named after Christa McAuliffe and pays tribute to the crew of the Challenger flight in its logo, which features an image of the Shuttle and the motto "We Meet The Challenge." The crew and mission are also honored by the school's mascot, the Challengers, and the saying "We Reach for the Stars."[125]
The 1996 science fiction television series Space Cases is set on a spaceship known as the Christa, named in honor of Christa McAuliffe, described in the series as "an Earth teacher who died during the early days of space exploration".
In 1997, playwright Jane Anderson wrote a play inspired by the Challenger incident, titled Defying Gravity.[126]
In 2001, Peruvian American rapper Immortal Technique released the album Revolutionary Vol. 1, which contains the song "Dance With The Devil". A hidden track at the conclusion of the song contains the lyric "I'm bout to blow up like, NASA Challenger computer chips".[127]
In 2004, President George W. Bush conferred posthumous Congressional Space Medals of Honor on all 14 crew members lost in the Challenger and Columbia accidents.[128]
In 2009, Allan J. McDonald, former director of the Space Shuttle Solid Motor Rocket Project for Morton-Thiokol, Inc. published his book Truth, Lies, and O-Rings: Inside the Space Shuttle Challenger Disaster. Up to that point, no one directly involved in the decision to launch Challenger had published a memoir about the experience.[129]
On June 14, 2011, Christian singer Adam Young, through his electronica project Owl City, released a song about the Challenger incident on his third studio album All Things Bright and Beautiful.
In December 2013, Beyoncé Knowles released a song titled "XO", which begins with a sample of former NASA public affairs officer Steve Nesbitt, recorded moments after the disaster: "Flight controllers here looking very carefully at the situation. Obviously a major malfunction."[130] The song raised controversy, with former NASA astronauts and families labelling Knowles's sample as "insensitive".[131] Hardeep Phull of the New York Post described the sample's presence as "tasteless",[132] and Keith Cowing of NASA Watch suggested that the usage of the clip ranged from "negligence" to "repugnant".[133] On December 30, 2013, Knowles issued a statement to ABC News, saying: "My heart goes out to the families of those lost in the Challenger disaster. The song "XO" was recorded with the sincerest intention to help heal those who have lost loved ones and to remind us that unexpected things happen, so love and appreciate every minute that you have with those who mean the most to you. The songwriters included the audio in tribute to the unselfish work of the Challenger crew with hope that they will never be forgotten."[134] On December 31, 2013, NASA criticized the use of the sample, stating that "The Challenger accident is an important part of our history; a tragic reminder that space exploration is risky and should never be trivialized. NASA works everyday to honor the legacy of our fallen astronauts as we carry out our mission to reach for new heights and explore the universe."[130][133]
On June 16, 2015, post-metal band Vattnet Viskar released a full-length album titled Settler which was largely inspired by the Challenger accident and Christa McAuliffe in particular. The album was released in Europe on June 29. Guitarist Chris Alfieri stated in a June 17, 2015 interview with Decibel Magazine that, "Christa was from Concord, New Hampshire, the town that I live in. One of my first memories is the Challenger mission's demise, so it's a personal thing for me. But the album isn't about the explosion, it's about everything else. Pushing to become something else, something better. A transformation, and touching the divine."[135]
On July 23, 2015, Australian post-rock band We Lost The Sea released an album titled Departure Songs about human sacrifice for the greater good, or for the progress of the human race itself, including the song "Challenger" which is split into two parts: "Flight" and "Swan Song".[136] The song samples audio from NASA, a William S. Burroughs lecture, reactions of people witnessing the disaster, and Ronald Reagan's national address.[137]
On June 27, 2015, the "Forever Remembered" exhibit at the Kennedy Space Center Visitor Complex, Florida, opened and includes a display of a section of Challenger's recovered fuselage to memorialize and honor the fallen astronauts. The exhibit was opened by NASA Administrator Charles Bolden along with family members of the crew.[138]
On August 7, 2015, English singer-songwriter Frank Turner released his sixth album, Positive Songs for Negative People, which includes the song "Silent Key".[139]
The mountain range Challenger Colles on Pluto was named in honor of the victims of the Challenger disaster.
The Challenger Columbia Stadium in League City, Texas, is named in honor of the victims of both the Challenger disaster and the 2003 Columbia disaster.
A tree for each astronaut was planted in NASA's Astronaut Memorial Grove at the Johnson Space Center in Houston, Texas, not far from the Saturn V building, along with trees for each astronaut from the Apollo 1 and Columbia disasters.[140] Tours of the space center pause briefly near the grove for a moment of silence, and the trees can be seen from nearby NASA Road 1.
Until 2010, the live broadcast of the launch and subsequent disaster by CNN was the only known on-location video footage from within range of the launch site. As of March 15, 2014[update], eight other motion picture recordings of the event have become publicly available.
An ABC television movie titled Challenger was broadcast on February 24, 1990. It stars Barry Bostwick as Scobee, Brian Kerwin as Smith, Joe Morton as McNair, Keone Young as Onizuka, Julie Fulton as Resnik, Richard Jenkins as Jarvis, and Karen Allen as McAuliffe.[149][150][151]
A BBC docudrama titled The Challenger was broadcast on March 18, 2013, based on the last of Richard Feynman's autobiographical works, What Do You Care What Other People Think? It stars William Hurt as Feynman.[152][153]
A film produced by Vision Makers was released on January 25, 2019. It stars Dean Cain and Glenn Morshower and tells the story of the evening before the disaster, when one engineer tried to stop the mission from launching.[154]
The first episode of History Channel's documentary titled Days That Shaped America is about Challenger.[155]
This article incorporates public domain material from websites or documents of the National Aeronautics and Space Administration.
Coordinates: 28°38′24″N 80°16′48″W / 28.64000°N 80.28000°W / 28.64000; -80.28000
en/190.html.txt
ADDED
@@ -0,0 +1 @@
Native Americans may refer to:
en/1900.html.txt
ADDED
@@ -0,0 +1,130 @@
Buoyancy (/ˈbɔɪənsi, ˈbuːjənsi/)[1][2] or upthrust, is an upward force exerted by a fluid that opposes the weight of a partially or fully immersed object. In a column of fluid, pressure increases with depth as a result of the weight of the overlying fluid. Thus the pressure at the bottom of a column of fluid is greater than at the top of the column. Similarly, the pressure at the bottom of an object submerged in a fluid is greater than at the top of the object. The pressure difference results in a net upward force on the object. The magnitude of the force is proportional to the pressure difference, and (as explained by Archimedes' principle) is equivalent to the weight of the fluid that would otherwise occupy the submerged volume of the object, i.e. the displaced fluid.
For this reason, an object whose average density is greater than that of the fluid in which it is submerged tends to sink. If the object is less dense than the liquid, the force can keep the object afloat. This can occur only in a non-inertial reference frame, which either has a gravitational field or is accelerating due to a force other than gravity defining a "downward" direction.[3]
The center of buoyancy of an object is the centroid of the displaced volume of fluid.
Archimedes' principle is named after Archimedes of Syracuse, who first discovered this law in 212 BC.[4] For objects, floating and sunken, and in gases as well as liquids (i.e. a fluid), Archimedes' principle may be stated thus in terms of forces:
Any object, wholly or partially immersed in a fluid, is buoyed up by a force equal to the weight of the fluid displaced by the object
—with the clarifications that for a sunken object the volume of displaced fluid is the volume of the object, and for a floating object on a liquid, the weight of the displaced liquid is the weight of the object.[5]
More tersely: buoyant force = weight of displaced fluid.
Archimedes' principle does not consider the surface tension (capillarity) acting on the body,[6] but this additional force modifies only the amount of fluid displaced and the spatial distribution of the displacement, so the principle that buoyancy = weight of displaced fluid remains valid.
The weight of the displaced fluid is directly proportional to the volume of the displaced fluid (if the surrounding fluid is of uniform density). In simple terms, the principle states that the buoyancy force on an object is equal to the weight of the fluid displaced by the object, or the density of the fluid multiplied by the submerged volume times the gravitational acceleration, g. Thus, among completely submerged objects with equal masses, objects with greater volume have greater buoyancy. This is also known as upthrust.
Suppose a rock's weight is measured as 10 newtons when suspended by a string in a vacuum with gravity acting upon it. Suppose that when the rock is lowered into water, it displaces water of weight 3 newtons. The force it then exerts on the string from which it hangs would be 10 newtons minus the 3 newtons of buoyancy force: 10 − 3 = 7 newtons. Buoyancy reduces the apparent weight of objects that have sunk completely to the sea floor. It is generally easier to lift an object up through the water than it is to pull it out of the water.
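As a minimal sketch of this arithmetic (the function name is ours, introduced only for illustration):

def apparent_weight(true_weight_n, displaced_fluid_weight_n):
    """Apparent weight (N) of a submerged object: true weight minus
    the weight of the fluid it displaces (Archimedes' principle)."""
    return true_weight_n - displaced_fluid_weight_n

print(apparent_weight(10.0, 3.0))  # -> 7.0 N, the rock example above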
Assuming Archimedes' principle to be reformulated as

$$\text{apparent immersed weight} = \text{weight} - \text{weight of displaced fluid},$$

then inserting this into the quotient of weights, which has been expanded by the mutual volume,

$$\frac{\rho_\text{object}}{\rho_\text{fluid}} = \frac{\text{weight}}{\text{weight of displaced fluid}},$$

yields the formula

$$\frac{\rho_\text{object}}{\rho_\text{fluid}} = \frac{\text{weight}}{\text{weight} - \text{apparent immersed weight}}.$$

The density of the immersed object relative to the density of the fluid can thus easily be calculated without measuring any volumes. (This formula is used, for example, in describing the measuring principle of a dasymeter and of hydrostatic weighing.)
Example: If you drop wood into water, buoyancy will keep it afloat.
Example: A helium balloon in a moving car. During a period of increasing speed, the air mass inside the car moves in the direction opposite to the car's acceleration (i.e., towards the rear). The balloon is also pulled this way. However, because the balloon is buoyant relative to the air, it ends up being pushed "out of the way", and will actually drift in the same direction as the car's acceleration (i.e., forward). If the car slows down, the same balloon will begin to drift backward. For the same reason, as the car goes round a curve, the balloon will drift towards the inside of the curve.
The equation to calculate the pressure inside a fluid in equilibrium is:

$$\mathbf{f} + \operatorname{div}\sigma = 0$$

where f is the force density exerted by some outer field on the fluid, and σ is the Cauchy stress tensor. In this case the stress tensor is proportional to the identity tensor:

$$\sigma_{ij} = -p\,\delta_{ij}.$$

Here δij is the Kronecker delta. Using this the above equation becomes:

$$\mathbf{f} = \nabla p.$$

Assuming the outer force field is conservative, that is, it can be written as the negative gradient of some scalar-valued function:

$$\mathbf{f} = -\nabla\Phi.$$

Then:

$$\nabla(p + \Phi) = 0 \quad\Longrightarrow\quad p + \Phi = \text{constant}.$$

Therefore, the shape of the open surface of a fluid equals the equipotential plane of the applied outer conservative force field. Let the z-axis point downward. In this case the field is gravity, so Φ = −ρfgz, where g is the gravitational acceleration and ρf is the mass density of the fluid. Taking the pressure as zero at the surface, where z is zero, the constant will be zero, so the pressure inside the fluid, when it is subject to gravity, is

$$p = \rho_f g z.$$
So pressure increases with depth below the surface of a liquid, as z denotes the distance from the surface of the liquid into it. Any object with a non-zero vertical depth will have different pressures on its top and bottom, with the pressure on the bottom being greater. This difference in pressure causes the upward buoyancy force.
The buoyancy force exerted on a body can now be calculated easily, since the internal pressure of the fluid is known. The force exerted on the body can be calculated by integrating the stress tensor over the surface of the body which is in contact with the fluid:

$$\mathbf{B} = \oint_{S} \sigma\, d\mathbf{A}.$$

The surface integral can be transformed into a volume integral with the help of the Gauss theorem:

$$\mathbf{B} = \int_{V} \operatorname{div}\sigma\, dV = -\int_{V} \mathbf{f}\, dV = -\rho_f \mathbf{g} V$$
where V is the measure of the volume in contact with the fluid, that is the volume of the submerged part of the body, since the fluid doesn't exert force on the part of the body which is outside of it.
The magnitude of the buoyancy force may be appreciated a bit more from the following argument. Consider any object of arbitrary shape and volume V surrounded by a liquid. The force the liquid exerts on an object within the liquid is equal to the weight of the liquid with a volume equal to that of the object. This force is applied in a direction opposite to gravitational force, that is, of magnitude

$$B = \rho_f V_\text{disp}\, g$$
where ρf is the density of the fluid, Vdisp is the volume of the displaced body of liquid, and g is the gravitational acceleration at the location in question.
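A short numeric sketch of this relation (illustrative values; the names are ours):

RHO_WATER = 1000.0  # kg/m^3, fresh water
G = 9.81            # m/s^2, standard gravity

def buoyant_force(rho_fluid, v_displaced_m3, g=G):
    """Magnitude of the buoyancy force B = rho_f * V_disp * g, in newtons."""
    return rho_fluid * v_displaced_m3 * g

# One litre (0.001 m^3) fully submerged in fresh water:
print(buoyant_force(RHO_WATER, 0.001))  # ~9.81 N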
If this volume of liquid is replaced by a solid body of exactly the same shape, the force the liquid exerts on it must be exactly the same as above. In other words, the "buoyancy force" on a submerged body is directed in the opposite direction to gravity and is equal in magnitude to
The net force on the object must be zero if it is to be a situation of fluid statics such that Archimedes' principle is applicable; it is thus the sum of the buoyancy force and the object's weight:

$$F_\text{net} = 0 = m g - \rho_f V_\text{disp}\, g.$$
If the buoyancy of an (unrestrained and unpowered) object exceeds its weight, it tends to rise. An object whose weight exceeds its buoyancy tends to sink. Calculation of the upwards force on a submerged object during its accelerating period cannot be done by the Archimedes principle alone; it is necessary to consider dynamics of an object involving buoyancy. Once it fully sinks to the floor of the fluid or rises to the surface and settles, Archimedes principle can be applied alone. For a floating object, only the submerged volume displaces water. For a sunken object, the entire volume displaces water, and there will be an additional force of reaction from the solid floor.
In order for Archimedes' principle to be used alone, the object in question must be in equilibrium (the sum of the forces on the object must be zero), therefore

$$m g = \rho_f V_\text{disp}\, g$$

and therefore

$$m = \rho_f V_\text{disp}$$

showing that the depth to which a floating object will sink, and the volume of fluid it will displace, is independent of the gravitational field regardless of geographic location.
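Since g cancels, the floating condition reduces to a pure density ratio; a sketch (with illustrative densities for ice and seawater):

def submerged_fraction(rho_object, rho_fluid):
    """Fraction of a floating object's volume below the surface:
    V_disp / V = rho_object / rho_fluid, independent of g."""
    if rho_object >= rho_fluid:
        return 1.0  # denser than the fluid: it sinks, fully submerged
    return rho_object / rho_fluid

print(submerged_fraction(917.0, 1025.0))  # ice in seawater: ~0.895 submerged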
It can be the case that forces other than just buoyancy and gravity come into play. This is the case if the object is restrained or if the object sinks to the solid floor. An object which tends to float requires a tension restraint force T in order to remain fully submerged. An object which tends to sink will eventually have a normal force of constraint N exerted upon it by the solid floor. The constraint force can be tension in a spring scale measuring its weight in the fluid, and is how apparent weight is defined.
If the object would otherwise float, the tension to restrain it fully submerged is:

$$T = \rho_f V g - m g.$$
When a sinking object settles on the solid floor, it experiences a normal force of:

$$N = m g - \rho_f V g.$$
Another possible formula for calculating the buoyancy of an object is to find its apparent weight in air (in newtons) and its apparent weight in water (in newtons). The buoyancy force acting on the object when submerged is then:

$$F_\text{buoyancy} = W_\text{air} - W_\text{water}.$$
The final result would be measured in Newtons.
Air's density is very small compared to most solids and liquids. For this reason, the weight of an object in air is approximately the same as its true weight in a vacuum. The buoyancy of air is neglected for most objects during a measurement in air because the error is usually insignificant (typically less than 0.1% except for objects of very low average density such as a balloon or light foam).
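To see the size of this correction, a one-line estimate (aluminum chosen as an example object; values illustrative):

RHO_AIR = 1.2          # kg/m^3, air at roughly room conditions
RHO_ALUMINUM = 2700.0  # kg/m^3

# Relative error from ignoring air buoyancy when weighing in air:
print(RHO_AIR / RHO_ALUMINUM)  # ~0.00044, i.e. about 0.04% -- usually negligible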
A simplified explanation for the integration of the pressure over the contact area may be stated as follows:
Consider a cube immersed in a fluid with the upper surface horizontal.
The sides are identical in area, and have the same depth distribution, therefore they also have the same pressure distribution, and consequently the same total force resulting from hydrostatic pressure, exerted perpendicular to the plane of the surface of each side.
There are two pairs of opposing sides, therefore the resultant horizontal forces balance in both orthogonal directions, and the resultant force is zero.
The upward force on the cube is the pressure on the bottom surface integrated over its area. The surface is at constant depth, so the pressure is constant. Therefore, the integral of the pressure over the area of the horizontal bottom surface of the cube is the hydrostatic pressure at that depth multiplied by the area of the bottom surface.
Similarly, the downward force on the cube is the pressure on the top surface integrated over its area. The surface is at constant depth, so the pressure is constant. Therefore, the integral of the pressure over the area of the horizontal top surface of the cube is the hydrostatic pressure at that depth multiplied by the area of the top surface.
As this is a cube, the top and bottom surfaces are identical in shape and area, and the pressure difference between the top and bottom of the cube is directly proportional to the depth difference, and the resultant force difference is exactly equal to the weight of the fluid that would occupy the volume of the cube in its absence.
This means that the resultant upward force on the cube is equal to the weight of the fluid that would fit into the volume of the cube, and the downward force on the cube is its weight, in the absence of external forces.
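A quick numeric check of the cube argument (a 0.1 m cube in fresh water; the top depth is arbitrary and drops out of the difference):

RHO = 1000.0     # kg/m^3, fresh water
G = 9.81         # m/s^2
side = 0.1       # m, cube edge length
top_depth = 0.5  # m, arbitrary depth of the cube's top face

area = side * side
p_top = RHO * G * top_depth              # pressure on the top face
p_bottom = RHO * G * (top_depth + side)  # pressure on the bottom face
net_upward = (p_bottom - p_top) * area   # resultant of the two horizontal faces

displaced_weight = RHO * G * side**3     # weight of the displaced water
print(net_upward, displaced_weight)      # both ~9.81 N, as the argument predicts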
This analogy is valid for variations in the size of the cube.
If two cubes are placed alongside each other with a face of each in contact, the pressures and resultant forces on the sides or parts thereof in contact are balanced and may be disregarded, as the contact surfaces are equal in shape, size and pressure distribution, therefore the buoyancy of two cubes in contact is the sum of the buoyancies of each cube. This analogy can be extended to an arbitrary number of cubes.
An object of any shape can be approximated as a group of cubes in contact with each other, and as the size of the cube is decreased, the precision of the approximation increases. The limiting case for infinitely small cubes is the exact equivalence.
Angled surfaces do not nullify the analogy as the resultant force can be split into orthogonal components and each dealt with in the same way.
A floating object is stable if it tends to restore itself to an equilibrium position after a small displacement. For example, floating objects will generally have vertical stability, as if the object is pushed down slightly, this will create a greater buoyancy force, which, unbalanced by the weight force, will push the object back up.
Rotational stability is of great importance to floating vessels. Given a small angular displacement, the vessel may return to its original position (stable), move away from its original position (unstable), or remain where it is (neutral).
Rotational stability depends on the relative lines of action of forces on an object. The upward buoyancy force on an object acts through the center of buoyancy, being the centroid of the displaced volume of fluid. The weight force on the object acts through its center of gravity. A buoyant object will be stable if the center of gravity is beneath the center of buoyancy because any angular displacement will then produce a 'righting moment'.
The stability of a buoyant object at the surface is more complex, and it may remain stable even if the centre of gravity is above the centre of buoyancy, provided that when disturbed from the equilibrium position, the centre of buoyancy moves further to the same side that the centre of gravity moves, thus providing a positive righting moment. If this occurs, the floating object is said to have a positive metacentric height. This situation is typically valid for a range of heel angles, beyond which the centre of buoyancy does not move enough to provide a positive righting moment, and the object becomes unstable. It is possible to shift from positive to negative or vice versa more than once during a heeling disturbance, and many shapes are stable in more than one position.
The atmosphere's density depends upon altitude. As an airship rises in the atmosphere, its buoyancy decreases as the density of the surrounding air decreases. In contrast, as a submarine expels water from its buoyancy tanks, it rises because its volume is constant (the volume of water it displaces if it is fully submerged) while its mass is decreased.
As a floating object rises or falls, the forces external to it change and, as all objects are compressible to some extent or another, so does the object's volume. Buoyancy depends on volume and so an object's buoyancy reduces if it is compressed and increases if it expands.
If an object at equilibrium has a compressibility less than that of the surrounding fluid, the object's equilibrium is stable and it remains at rest. If, however, its compressibility is greater, its equilibrium is then unstable, and it rises and expands on the slightest upward perturbation, or falls and compresses on the slightest downward perturbation.
Submarines rise and dive by filling large ballast tanks with seawater. To dive, the tanks are opened to allow air to exhaust out the top of the tanks, while the water flows in from the bottom. Once the weight has been balanced so the overall density of the submarine is equal to the water around it, it has neutral buoyancy and will remain at that depth. Most military submarines operate with a slightly negative buoyancy and maintain depth by using the "lift" of the stabilizers with forward motion.[citation needed]
The height to which a balloon rises tends to be stable. As a balloon rises it tends to increase in volume with reducing atmospheric pressure, but the balloon itself does not expand as much as the air on which it rides. The average density of the balloon decreases less than that of the surrounding air. The weight of the displaced air is reduced. A rising balloon stops rising when it and the displaced air are equal in weight. Similarly, a sinking balloon tends to stop sinking.
Underwater divers are a common example of the problem of unstable buoyancy due to compressibility. The diver typically wears an exposure suit which relies on gas-filled spaces for insulation, and may also wear a buoyancy compensator, which is a variable volume buoyancy bag which is inflated to increase buoyancy and deflated to decrease buoyancy. The desired condition is usually neutral buoyancy when the diver is swimming in mid-water, and this condition is unstable, so the diver is constantly making fine adjustments by control of lung volume, and has to adjust the contents of the buoyancy compensator if the depth varies.
If the weight of an object is less than the weight of the displaced fluid when fully submerged, then the object has an average density that is less than the fluid and when fully submerged will experience a buoyancy force greater than its own weight.[7] If the fluid has a surface, such as water in a lake or the sea, the object will float and settle at a level where it displaces the same weight of fluid as the weight of the object. If the object is immersed in the fluid, such as a submerged submarine or air in a balloon, it will tend to rise.
If the object has exactly the same density as the fluid, then its buoyancy equals its weight. It will remain submerged in the fluid, but it will neither sink nor float, although a disturbance in either direction will cause it to drift away from its position.
An object with a higher average density than the fluid will never experience more buoyancy than weight and it will sink.
A ship will float even though it may be made of steel (which is much denser than water), because it encloses a volume of air (which is much less dense than water), and the resulting shape has an average density less than that of the water.
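A last sketch of the steel-ship point: averaging the hull's steel with the air it encloses can bring the overall density below that of water (the volumes here are made up for illustration):

RHO_STEEL = 7850.0  # kg/m^3
RHO_AIR = 1.2       # kg/m^3
RHO_WATER = 1000.0  # kg/m^3

steel_volume = 1.0  # m^3 of hull plating (hypothetical)
air_volume = 9.0    # m^3 of enclosed air (hypothetical)

avg_density = (RHO_STEEL * steel_volume + RHO_AIR * air_volume) \
              / (steel_volume + air_volume)
print(avg_density, avg_density < RHO_WATER)  # ~786.1 kg/m^3, True -> it floats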
en/1901.html.txt
ADDED
@@ -0,0 +1,61 @@
Eurasia /jʊəˈreɪʒə/ is the largest continental area on Earth, comprising all of Europe and Asia.[3][4] Located primarily in the Northern and Eastern Hemispheres, it is bordered by the Atlantic Ocean to the west, the Pacific Ocean to the east, the Arctic Ocean to the north, and by Africa, the Mediterranean Sea, and the Indian Ocean to the south.[5] The division between Europe and Asia as two different continents is a historical social construct, with no clear physical separation between them; thus, in some parts of the world, Eurasia is recognized as the largest of the six, five, or even four continents on Earth.[4] In geology, Eurasia is often considered a single rigid megablock. However, the rigidity of Eurasia is debated based on paleomagnetic data.[6][7]
Eurasia covers around 55,000,000 square kilometres (21,000,000 sq mi), or around 36.2% of the Earth's total land area. The landmass contains well over 5 billion people, equating to approximately 70% of the human population. Humans first settled in Eurasia between 60,000 and 125,000 years ago. Some major islands, including Great Britain, Iceland, Ireland and Sri Lanka, as well as those of Japan, the Philippines and most of Indonesia, are often included under the popular definition of Eurasia, in spite of being separate from the contiguous landmass.
Physiographically, Eurasia is a single continent.[4] The concepts of Europe and Asia as distinct continents date back to antiquity, and their borders are geologically arbitrary. In ancient times the Black Sea and the Sea of Marmara, along with their associated straits, were seen as separating the continents, but today the Ural and Caucasus ranges are more often seen as the main delimiters between the two. Eurasia is connected to Africa at the Suez Canal, and Eurasia is sometimes combined with Africa to make the largest contiguous landmass on Earth, called Afro-Eurasia.[8] Due to the vast landmass and differences in latitude, Eurasia exhibits all types of climate under the Köppen classification, including the harshest types of hot and cold temperatures, high and low precipitation, and various types of ecosystems.
Eurasia formed between 375 and 325 million years ago with the merging of Siberia, Kazakhstania, and Baltica, which was joined to Laurentia, now North America, to form Euramerica. Chinese cratons collided with Siberia's southern coast.
Eurasia has been the host of many ancient civilizations, including those based in Mesopotamia, the Indus Valley and China. In the Axial Age (mid-first millennium BC), a continuous belt of civilizations stretched through the Eurasian subtropical zone from the Atlantic to the Pacific. This belt became the mainstream of world history for two millennia.
Originally, “Eurasia” was a geographical notion: in this sense, it is simply the biggest continent, the combined landmass of Europe and Asia. Geopolitically, however, the word has several meanings, reflecting specific geopolitical interests.[9] “Eurasia” is one of the most important geopolitical concepts and figures prominently in commentaries on the ideas of Halford Mackinder. As Zbigniew Brzezinski observed of Eurasia:
“... how America "manages" Eurasia is critical. A power that dominates “Eurasia” would control two of the world’s three most advanced and economically productive regions. A mere glance at the map also suggests that control over “Eurasia” would almost automatically entail Africa’s subordination, rendering the Western Hemisphere and Oceania geopolitically peripheral to the world’s central continent. About 75 per cent of the world’s people live in “Eurasia”, and most of the world’s physical wealth is there as well, both in its enterprises and underneath its soil. “Eurasia” accounts for about three-fourths of the world’s known energy resources.”[10]
The Russian "Eurasianism" corresponded initially more or less to the land area of Imperial Russia in 1914, including parts of Eastern Europe.[11] One of Russia's main geopolitical interests lies in ever closer integration with those countries that it considers part of “Eurasia.”[12] This concept is further integrated with communist eschatology by author Alexander Dugin as the guiding principle of "self-sufficiency of a large space" during expansion.[13]
The term Eurasia gained geopolitical currency as the name of one of the three superstates in 1984,[14] George Orwell's[15] novel, in which constant surveillance and propaganda are strategic elements (introduced as reflexive antagonists) of the heterogeneous dispositif that such metapolitical constructs use in order to control and exercise power.[16]
Across Eurasia, several single markets have emerged, including the Eurasian Economic Space, the European Single Market, the ASEAN Economic Community, and the Gulf Cooperation Council. There are also several international organizations and initiatives which seek to promote integration throughout Eurasia.
In ancient times, the Greeks classified Europe (derived from the mythological Phoenician princess Europa) and Asia (derived from Asia, a woman in Greek mythology) as separate "lands". Where to draw the dividing line between the two regions is still a matter of discussion. In particular, whether the Kuma-Manych Depression or the Caucasus Mountains form the southeastern boundary is disputed, since Mount Elbrus would be part of Europe in the latter case, making it (and not Mont Blanc) Europe's highest mountain. Probably the most widely accepted boundary is the one defined by Philip Johan von Strahlenberg in the 18th century, who drew the dividing line along the Aegean Sea, Dardanelles, Sea of Marmara, Bosporus, Black Sea, Kuma–Manych Depression, Caspian Sea, Ural River, and Ural Mountains.
In modern usage, the term "Eurasian" is a demonym usually meaning "of or relating to Eurasia" or "a native or inhabitant of Eurasia".[17] It is also used to describe people of combined "Asian" and "European" descent.
Located primarily in the eastern and northern hemispheres, Eurasia is considered a supercontinent, part of the supercontinent of Afro-Eurasia or simply a continent in its own right.[18] In plate tectonics, the Eurasian Plate includes Europe and most of Asia but not the Indian subcontinent, the Arabian Peninsula or the area of the Russian Far East east of the Chersky Range.
From the point of view of history and culture, Eurasia can be loosely subdivided into Western and Eastern Eurasia.[19]
Nineteenth-century Russian philosopher Nikolai Danilevsky defined Eurasia as an entity separate from Europe and Asia, bounded by the Himalayas, the Caucasus, the Alps, the Arctic, the Pacific, the Atlantic, the Mediterranean, the Black Sea and the Caspian Sea, a definition that has been influential in Russia and other parts of the former Soviet Union.[20] Nowadays, partly inspired by this usage, the term Eurasia is sometimes used to refer to the post-Soviet space – in particular Russia, the Central Asian republics, and the Transcaucasian republics – and sometimes also adjacent regions such as Turkey, Mongolia, Afghanistan and Xinjiang.
The word "Eurasia" is often used in Kazakhstan to describe its location. Numerous Kazakh institutions have the term in their names, like the L. N. Gumilev Eurasian National University (Kazakh: Л. Н. Гумилёв атындағы Еуразия Ұлттық университеті; Russian: Евразийский Национальный университет имени Л. Н. Гумилёва)[21] (Lev Gumilev's Eurasianism ideas having been popularized in Kazakhstan by Olzhas Suleimenov), the Eurasian Media Forum,[22] the Eurasian Cultural Foundation (Russian: Евразийский фонд культуры), the Eurasian Development Bank (Russian: Евразийский банк развития),[23] and the Eurasian Bank.[24] In 2007 Kazakhstan's President, Nursultan Nazarbayev, proposed building a "Eurasia Canal" to connect the Caspian Sea and the Black Sea via Russia's Kuma-Manych Depression in order to provide Kazakhstan and other Caspian-basin countries with a more efficient path to the ocean than the existing Volga-Don Canal.[25]
This usage can also be seen in the names of Eurasianet,[26] The Journal of Eurasian Studies,[27] and the Association for Slavic, East European, and Eurasian Studies,[28] as well as the titles of numerous academic programmes at US universities.[29][30][31][32][33]
This usage is comparable to how Americans use "Western Hemisphere" to describe concepts and organizations dealing with the Americas (e.g., Council on Hemispheric Affairs, Western Hemisphere Institute for Security Cooperation).
Africa
Antarctica
Asia
Australia
Europe
North America
South America
Afro-Eurasia
America
Eurasia
Oceania
Coordinates: 50°N 80°E
en/1902.html.txt
ADDED
@@ -0,0 +1,126 @@
The euro (sign: €; code: EUR) is the official currency of 19 of the 27 member states of the European Union. This group of states is known as the eurozone or euro area and includes about 343 million citizens as of 2019[update].[9][10] The euro, which is divided into 100 cents, is the second-largest and second-most traded currency in the foreign exchange market after the United States dollar.[11]
The currency is also used officially by the institutions of the European Union, by four European microstates that are not EU members,[10] the British Overseas Territory of Akrotiri and Dhekelia, as well as unilaterally by Montenegro and Kosovo. Outside Europe, a number of special territories of EU members also use the euro as their currency. Additionally, over 200 million people worldwide use currencies pegged to the euro.
The euro is the second-largest reserve currency as well as the second-most traded currency in the world after the United States dollar.[12][13][14]
As of December 2019[update], with more than €1.3 trillion in circulation, the euro has one of the highest combined values of banknotes and coins in circulation in the world.[15][16]
The name euro was officially adopted on 16 December 1995 in Madrid.[17] The euro was introduced to world financial markets as an accounting currency on 1 January 1999, replacing the former European Currency Unit (ECU) at a ratio of 1:1 (US$1.1743). Physical euro coins and banknotes entered into circulation on 1 January 2002, making it the day-to-day operating currency of its original members, and by March 2002 it had completely replaced the former currencies.[18] While the euro subsequently dropped to US$0.83 within two years (26 October 2000), it has traded above the U.S. dollar since the end of 2002, peaking at US$1.60 on 18 July 2008. However, the currency remains below its original issue rate of US$1.1743, showing a relative decline since its issuance.[19] In late 2009, the euro became immersed in the European sovereign-debt crisis, which led to the creation of the European Financial Stability Facility as well as other reforms aimed at stabilising and strengthening the currency.
The euro is managed and administered by the Frankfurt-based European Central Bank (ECB) and the Eurosystem (composed of the central banks of the eurozone countries). As an independent central bank, the ECB has sole authority to set monetary policy. The Eurosystem participates in the printing, minting and distribution of notes and coins in all member states, and the operation of the eurozone payment systems.
The 1992 Maastricht Treaty obliges most EU member states to adopt the euro upon meeting certain monetary and budgetary convergence criteria, although not all states have done so. The United Kingdom and Denmark negotiated exemptions,[20] while Sweden (which joined the EU in 1995, after the Maastricht Treaty was signed) turned down the euro in a non-binding referendum in 2003, and has circumvented the obligation to adopt the euro by not meeting the monetary and budgetary requirements. All nations that have joined the EU since 1993 have pledged to adopt the euro in due course.
Since 1 January 2002, the national central banks (NCBs) and the ECB have issued euro banknotes on a joint basis.[21] Eurosystem NCBs are required to accept euro banknotes put into circulation by other Eurosystem members and these banknotes are not repatriated. The ECB issues 8% of the total value of banknotes issued by the Eurosystem.[21] In practice, the ECB's banknotes are put into circulation by the NCBs, thereby incurring matching liabilities vis-à-vis the ECB. These liabilities carry interest at the main refinancing rate of the ECB. The other 92% of euro banknotes are issued by the NCBs in proportion to their respective shares of the ECB capital key,[21] calculated using national share of European Union (EU) population and national share of EU GDP, equally weighted.[22]
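As a rough illustration of the allocation rule just described – a minimal sketch with invented population and GDP shares, not the ECB's actual published capital key – the following Python snippet computes each NCB's banknote-issuance share as the equally weighted mean of its EU population and GDP shares, scaled into the 92% issued by the NCBs:

# Illustrative sketch only: the population and GDP shares below are
# invented placeholders, not real data (the actual capital key is
# published by the ECB).
ECB_SHARE = 0.08  # the ECB itself issues 8% of the total banknote value

members = {  # hypothetical national shares of EU population and EU GDP
    "A": {"pop": 0.20, "gdp": 0.30},
    "B": {"pop": 0.50, "gdp": 0.45},
    "C": {"pop": 0.30, "gdp": 0.25},
}

def capital_key(pop_share, gdp_share):
    # population and GDP shares are weighted equally
    return 0.5 * pop_share + 0.5 * gdp_share

keys = {name: capital_key(m["pop"], m["gdp"]) for name, m in members.items()}
total = sum(keys.values())  # normalise over the participating NCBs
for name, key in keys.items():
    share = (1 - ECB_SHARE) * key / total
    print(f"NCB {name}: key {key:.3f}, banknote share {share:.1%}")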
The euro is divided into 100 cents (also referred to as euro cents, especially when distinguishing them from other currencies, and referred to as such on the common side of all cent coins). In Community legislative acts the plural forms of euro and cent are spelled without the s, notwithstanding normal English usage.[23][24] Otherwise, normal English plurals are used,[25] with many local variations such as centime in France.
All circulating coins have a common side showing the denomination or value, and a map in the background. Due to the linguistic plurality in the European Union, the Latin alphabet version of euro is used (as opposed to the less common Greek or Cyrillic) and Arabic numerals (other text is used on national sides in national languages, but other text on the common side is avoided). For the denominations except the 1-, 2- and 5-cent coins, the map only showed the 15 member states which were members when the euro was introduced. Beginning in 2007 or 2008 (depending on the country), the old map was replaced by a map of Europe also showing countries outside the EU like Norway, Ukraine, Belarus, Russia and Turkey.[citation needed] The 1-, 2- and 5-cent coins, however, keep their old design, showing a geographical map of Europe with the 15 member states of 2002 raised somewhat above the rest of the map. All common sides were designed by Luc Luycx. The coins also have a national side showing an image specifically chosen by the country that issued the coin. Euro coins from any member state may be freely used in any nation that has adopted the euro.
The coins are issued in denominations of €2, €1, 50c, 20c, 10c, 5c, 2c, and 1c. To avoid the use of the two smallest coins, some cash transactions are rounded to the nearest five cents in the Netherlands and Ireland[26][27] (by voluntary agreement) and in Finland (by law).[28] This practice is discouraged by the commission, as is the practice of certain shops of refusing to accept high-value euro notes.[29]
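A minimal sketch of the five-cent cash rounding mentioned above, working in integer cents to avoid floating-point error; the exact tie-breaking behaviour is an assumption here, as the national schemes differ in detail:

def round_cash(amount_cents: int) -> int:
    # Round an amount in euro cents to the nearest multiple of 5:
    # remainders of 1-2 round down, 3-4 round up.
    remainder = amount_cents % 5
    if remainder <= 2:
        return amount_cents - remainder
    return amount_cents + (5 - remainder)

assert round_cash(1042) == 1040  # EUR 10.42 -> EUR 10.40
assert round_cash(1043) == 1045  # EUR 10.43 -> EUR 10.45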
Commemorative coins with €2 face value have been issued with changes to the design of the national side of the coin. These include both commonly issued coins, such as the €2 commemorative coin for the fiftieth anniversary of the signing of the Treaty of Rome, and nationally issued coins, such as the coin to commemorate the 2004 Summer Olympics issued by Greece. These coins are legal tender throughout the eurozone. Collector coins with various other denominations have been issued as well, but these are not intended for general circulation, and they are legal tender only in the member state that issued them.[30]
The euro banknotes share common designs on both sides, created by the Austrian designer Robert Kalina.[31] Notes are issued in €500, €200, €100, €50, €20, €10 and €5 denominations. Each banknote has its own colour and is dedicated to an artistic period of European architecture: the front of each note features windows or gateways, while the back has bridges, symbolising links between states in the union and with the future. While the designs are supposed to be devoid of any identifiable characteristics, Kalina's initial designs were of specific bridges, including the Rialto and the Pont de Neuilly, and were subsequently rendered more generic; the final designs still bear very close similarities to their specific prototypes, so they are not truly generic. The monuments looked similar enough to different national monuments to please everyone.[32]
The Europa series, or second series, consists of six denominations and no longer includes the €500, whose issuance was discontinued as of 27 April 2019.[33] However, both the first and the second series of euro banknotes, including the €500, remain legal tender throughout the euro area.[34]
Capital within the EU may be transferred in any amount from one state to another. All intra-Union transfers in euro are treated as domestic transactions and bear the corresponding domestic transfer costs.[35] This includes all member states of the EU, even those outside the eurozone providing the transactions are carried out in euro.[36] Credit/debit card charging and ATM withdrawals within the eurozone are also treated as domestic transactions; however paper-based payment orders, like cheques, have not been standardised so these are still domestic-based. The ECB has also set up a clearing system, TARGET, for large euro transactions.[37]
A special euro currency sign (€) was designed after a public survey had narrowed the original ten proposals down to two. The European Commission then chose the design created by the Belgian Alain Billiet. Of the symbol, the Commission stated[23]
Inspiration for the € symbol itself came from the Greek epsilon (Є)[note 13] – a reference to the cradle of European civilisation – and the first letter of the word Europe, crossed by two parallel lines to 'certify' the stability of the euro.
The European Commission also specified a euro logo with exact proportions and foreground and background colour tones.[38] Placement of the currency sign relative to the numeric amount varies from state to state, but for texts in English the symbol (or the ISO-standard "EUR") should precede the amount.[39]
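As a purely illustrative sketch of the placement convention just described (hand-rolled formatting, not an official EU specification), the snippet below prints the same amount with the English-style leading symbol and with the trailing-symbol style used in many continental countries:

def format_euro(amount: float, symbol_first: bool = True) -> str:
    if symbol_first:
        # English convention: the sign precedes the amount
        return f"€{amount:,.2f}"
    # many continental conventions place the sign after the amount
    # and swap the thousands and decimal separators
    s = f"{amount:,.2f}".replace(",", "X").replace(".", ",").replace("X", ".")
    return f"{s} €"

print(format_euro(1234.56))         # €1,234.56
print(format_euro(1234.56, False))  # 1.234,56 €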
The euro was established by the provisions in the 1992 Maastricht Treaty. To participate in the currency, member states are meant to meet strict criteria, such as a budget deficit of less than 3% of their GDP, a debt ratio of less than 60% of GDP (both of which were ultimately widely flouted after introduction), low inflation, and interest rates close to the EU average. In the Maastricht Treaty, the United Kingdom and Denmark were granted exemptions per their request from moving to the stage of monetary union which resulted in the introduction of the euro. (For macroeconomic theory, see below.)
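A toy check of the two fiscal reference values quoted above (deficit below 3% of GDP, debt below 60% of GDP); the figures in the example are invented, and the inflation and interest-rate criteria are omitted for brevity:

def meets_fiscal_criteria(deficit_pct_gdp: float, debt_pct_gdp: float) -> bool:
    # Maastricht fiscal reference values: budget deficit < 3% of GDP
    # and government debt < 60% of GDP
    return deficit_pct_gdp < 3.0 and debt_pct_gdp < 60.0

print(meets_fiscal_criteria(2.5, 55.0))  # True  (invented figures)
print(meets_fiscal_criteria(4.1, 85.0))  # False (invented figures)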
The name "euro" was officially adopted in Madrid on 16 December 1995.[17] Belgian Esperantist Germain Pirlot, a former teacher of French and history is credited with naming the new currency by sending a letter to then President of the European Commission, Jacques Santer, suggesting the name "euro" on 4 August 1995.[41]
Due to differences in national conventions for rounding and significant digits, all conversion between the national currencies had to be carried out using the process of triangulation via the euro. The definitive values of one euro in terms of the exchange rates at which the currency entered the euro are shown on the right.
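A sketch of that triangulation, using two of the legally fixed conversion rates (1 EUR = 1.95583 DEM, 1 EUR = 6.55957 FRF); keeping the intermediate euro amount to three decimal places before converting onward is a simplified reading of the changeover regulation's rounding provisions:

from decimal import Decimal, ROUND_HALF_UP

# legally fixed conversion rates: units of national currency per euro
RATES = {"DEM": Decimal("1.95583"), "FRF": Decimal("6.55957")}

def convert(amount: Decimal, src: str, dst: str) -> Decimal:
    # step 1: divide by the source rate; keep the intermediate euro
    # amount to (at least) three decimal places
    in_euro = (amount / RATES[src]).quantize(Decimal("0.001"), ROUND_HALF_UP)
    # step 2: multiply by the target rate and round to the cent
    return (in_euro * RATES[dst]).quantize(Decimal("0.01"), ROUND_HALF_UP)

print(convert(Decimal("100.00"), "DEM", "FRF"))  # 100 DEM expressed in francs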
The rates were determined by the Council of the European Union,[note 14] based on a recommendation from the European Commission based on the market rates on 31 December 1998. They were set so that one European Currency Unit (ECU) would equal one euro. The European Currency Unit was an accounting unit used by the EU, based on the currencies of the member states; it was not a currency in its own right. They could not be set earlier, because the ECU depended on the closing exchange rate of the non-euro currencies (principally the pound sterling) that day.
The procedure used to fix the conversion rate between the Greek drachma and the euro was different since the euro by then was already two years old. While the conversion rates for the initial eleven currencies were determined only hours before the euro was introduced, the conversion rate for the Greek drachma was fixed several months beforehand.[note 15]
The currency was introduced in non-physical form (traveller's cheques, electronic transfers, banking, etc.) at midnight on 1 January 1999, when the national currencies of participating countries (the eurozone) ceased to exist independently. Their exchange rates were locked at fixed rates against each other. The euro thus became the successor to the European Currency Unit (ECU). The notes and coins for the old currencies, however, continued to be used as legal tender until new euro notes and coins were introduced on 1 January 2002.
The changeover period during which the former currencies' notes and coins were exchanged for those of the euro lasted about two months, until 28 February 2002. The official date on which the national currencies ceased to be legal tender varied from member state to member state. The earliest date was in Germany, where the mark officially ceased to be legal tender on 31 December 2001, though the exchange period lasted for two months more. Even after the old currencies ceased to be legal tender, they continued to be accepted by national central banks for periods ranging from several years to indefinitely (the latter for Austria, Germany, Ireland, Estonia and Latvia in banknotes and coins, and for Belgium, Luxembourg, Slovenia and Slovakia in banknotes only). The earliest coins to become non-convertible were the Portuguese escudos, which ceased to have monetary value after 31 December 2002, although banknotes remain exchangeable until 2022.
Following the U.S. financial crisis in 2008, fears of a sovereign debt crisis developed in 2009 among investors concerning some European states, with the situation becoming particularly tense in early 2010.[42][43] Greece was most acutely affected, but fellow Eurozone members Cyprus, Ireland, Italy, Portugal, and Spain were also significantly affected.[44][45] All these countries utilized EU funds except Italy, which is a major donor to the EFSF.[46] To be included in the eurozone, countries had to fulfil certain convergence criteria, but the meaningfulness of such criteria was diminished by the fact it was not enforced with the same level of strictness among countries.[47]
According to the Economist Intelligence Unit in 2011, "[I]f the [euro area] is treated as a single entity, its [economic and fiscal] position looks no worse and in some respects, rather better than that of the US or the UK" and the budget deficit for the euro area as a whole is much lower and the euro area's government debt/GDP ratio of 86% in 2010 was about the same level as that of the United States. "Moreover", they write, "private-sector indebtedness across the euro area as a whole is markedly lower than in the highly leveraged Anglo-Saxon economies". The authors conclude that the crisis "is as much political as economic" and the result of the fact that the euro area lacks the support of "institutional paraphernalia (and mutual bonds of solidarity) of a state".[48]
The crisis continued with S&P downgrading the credit rating of nine euro-area countries, including France, then downgrading the entire European Financial Stability Facility (EFSF) fund.[49]
A historical parallel – to 1931, when Germany was burdened with debt, unemployment and austerity while France and the United States were relatively strong creditors – gained attention in summer 2012,[50] even as Germany received a debt-rating warning of its own.[51][52] In this enduring scenario, the euro has been characterised as serving as a means of accumulation for the creditor economies.
The euro is the sole currency of 19 EU member states: Austria, Belgium, Cyprus, Estonia, Finland, France, Germany, Greece, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, the Netherlands, Portugal, Slovakia, Slovenia, and Spain. These countries constitute the "eurozone", some 343 million people in total as of 2018[update].[53]
With all but one (Denmark) of the remaining EU members obliged to join when economic conditions permit, together with future members of the EU, the enlargement of the eurozone is set to continue. Outside the EU, the euro is also the sole currency of Montenegro and Kosovo and several European microstates (Andorra, Monaco, San Marino and the Vatican City) as well as in five overseas territories of EU members that are not themselves part of the EU (Saint Barthélemy, Saint Martin, Saint Pierre and Miquelon, the French Southern and Antarctic Lands and Akrotiri and Dhekelia). Together this direct usage of the euro outside the EU affects nearly 3 million people.
The euro has been used as a trading currency in Cuba since 1998,[54] Syria since 2006,[55] and Venezuela since 2018.[56] There are also various currencies pegged to the euro (see below). In 2009, Zimbabwe abandoned its local currency and used major currencies instead, including the euro and the United States dollar.[57]
Since its introduction, the euro has been the second most widely held international reserve currency after the U.S. dollar. The share of the euro as a reserve currency increased from 18% in 1999 to 27% in 2008. Over this period, the share held in U.S. dollars fell from 71% to 64% and that held in Japanese yen fell from 6.4% to 3.3%. The euro inherited and built on the status of the Deutsche Mark as the second most important reserve currency. The euro remains underweight as a reserve currency in advanced economies while overweight in emerging and developing economies: according to the International Monetary Fund,[58] the total of euros held as a reserve in the world at the end of 2008 was equal to $1.1 trillion or €850 billion, with a share of 22% of all currency reserves in advanced economies, but a total of 31% of all currency reserves in emerging and developing economies.
The possibility of the euro becoming the first international reserve currency has been debated among economists.[59] Former Federal Reserve Chairman Alan Greenspan gave his opinion in September 2007 that it was "absolutely conceivable that the euro will replace the US dollar as reserve currency, or will be traded as an equally important reserve currency".[60] In contrast to Greenspan's 2007 assessment, the euro's increase in the share of the worldwide currency reserve basket has slowed considerably since 2007 and since the beginning of the worldwide credit crunch related recession and European sovereign-debt crisis.[58]
Outside the eurozone, a total of 22 countries and territories that do not belong to the EU have currencies that are directly pegged to the euro including 14 countries in mainland Africa (CFA franc), two African island countries (Comorian franc and Cape Verdean escudo), three French Pacific territories (CFP franc) and three Balkan countries, Bosnia and Herzegovina (Bosnia and Herzegovina convertible mark), Bulgaria (Bulgarian lev) and North Macedonia (Macedonian denar).[8] On 28 July 2009, São Tomé and Príncipe signed an agreement with Portugal which will eventually tie its currency to the euro.[61] Additionally, the Moroccan dirham is tied to a basket of currencies, including the euro and the US dollar, with the euro given the highest weighting.
With the exception of Bosnia, Bulgaria and North Macedonia (which had pegged their currencies against the Deutsche Mark) and Cape Verde (formerly pegged to the Portuguese escudo), all of these non-EU countries had pegged their currencies to the French franc before pegging them to the euro. Pegging a country's currency to a major currency is regarded as a safety measure, especially for currencies of areas with weak economies, as the euro is seen as a stable currency: it prevents runaway inflation and encourages foreign investment due to its stability.
Within the EU several currencies are pegged to the euro, mostly as a precondition to joining the eurozone. The Danish krone, Croatian kuna and Bulgarian lev are pegged due to their participation in the ERM II.
In total, as of 2013[update], 182 million people in Africa use a currency pegged to the euro, 27 million people outside the eurozone in Europe, and another 545,000 people on Pacific islands.[53]
Since 2005, stamps issued by the Sovereign Military Order of Malta have been denominated in euros, although the Order's official currency remains the Maltese scudo.[62] The Maltese scudo itself is pegged to the euro and is only recognised as legal tender within the Order.
In economics, an optimum currency area, or region (OCA or OCR), is a geographical region in which it would maximise economic efficiency to have the entire region share a single currency. There are two models, both proposed by Robert Mundell: the stationary expectations model and the international risk sharing model. Mundell himself advocates the international risk sharing model and thus concludes in favour of the euro.[63] However, even before the creation of the single currency, there were concerns over diverging economies. Before the late-2000s recession it was considered unlikely that a state would leave the euro or the whole zone would collapse.[64] However the Greek government-debt crisis led to former British Foreign Secretary Jack Straw claiming the eurozone could not last in its current form.[65] Part of the problem seems to be the rules that were created when the euro was set up. John Lanchester, writing for The New Yorker, explains it:
The guiding principle of the currency, which opened for business in 1999, were supposed to be a set of rules to limit a country's annual deficit to three per cent of gross domestic product, and the total accumulated debt to sixty per cent of G.D.P. It was a nice idea, but by 2004 the two biggest economies in the euro zone, Germany and France, had broken the rules for three years in a row.[66]
The most obvious benefit of adopting a single currency is to remove the cost of exchanging currency, theoretically allowing businesses and individuals to consummate previously unprofitable trades. For consumers, banks in the eurozone must charge the same for intra-member cross-border transactions as purely domestic transactions for electronic payments (e.g., credit cards, debit cards and cash machine withdrawals).
Financial markets on the continent are expected to be far more liquid and flexible than they were in the past. The reduction in cross-border transaction costs will allow larger banking firms to provide a wider array of banking services that can compete across and beyond the eurozone. However, although transaction costs were reduced, some studies have shown that risk aversion has increased during the last 40 years in the Eurozone.[68]
Another effect of the common European currency is that differences in prices—in particular in price levels—should decrease because of the law of one price. Differences in prices can trigger arbitrage, i.e., speculative trade in a commodity across borders purely to exploit the price differential. Therefore, prices on commonly traded goods are likely to converge, causing inflation in some regions and deflation in others during the transition. Some evidence of this has been observed in specific eurozone markets.[69]
Before the introduction of the euro, some countries had successfully contained inflation, which was then seen as a major economic problem, by establishing largely independent central banks. One such bank was the Bundesbank in Germany; the European Central Bank was modelled on the Bundesbank.[70]
The euro has come under criticism for its regulation and for the lack of flexibility and rigidity it imposes on member states on issues such as nominal interest rates.[71]
Many national and corporate bonds denominated in euro are significantly more liquid and have lower interest rates than was historically the case when denominated in national currencies. While increased liquidity may lower the nominal interest rate on the bond, denominating the bond in a currency with low levels of inflation arguably plays a much larger role. A credible commitment to low levels of inflation and a stable debt reduces the risk that the value of the debt will be eroded by higher levels of inflation or default in the future, allowing debt to be issued at a lower nominal interest rate.
There is, however, also a cost in structurally keeping inflation lower than in the United States, the UK and China: the euro has become expensive as seen from those countries, making European products increasingly expensive for its largest importers and exports from the eurozone correspondingly more difficult.
In general, those in Europe who own large amounts of euros are served by high stability and low inflation.
A monetary union means states in that union lose the main mechanism for recovering their international competitiveness: weakening (depreciating) their currency. When wages become too high compared to productivity in the export sector, these exports become more expensive and are crowded out of the market within a country and abroad. This drives a fall in employment and output in the export sector, and a fall in the trade and current account balances. A fall in output and employment in the tradable goods sector may be offset by growth in non-export sectors, especially in construction and services. Increased purchases abroad and a negative current account balance can be financed without a problem as long as credit is cheap.[72] Normally, the need to finance a trade deficit weakens a currency, automatically making exports more attractive at home and abroad. A state in a monetary union cannot use weakening of its currency to recover its international competitiveness; to achieve this, it has to reduce prices, including wages (deflation). This can result in high unemployment and lower incomes, as it did during the European sovereign-debt crisis.[73]
A 2009 consensus from the studies of the introduction of the euro concluded that it has increased trade within the eurozone by 5% to 10%,[74] although one study suggested an increase of only 3%[75] while another estimated 9 to 14%.[76] However, a meta-analysis of all available studies suggests that the prevalence of positive estimates is caused by publication bias and that the underlying effect may be negligible.[77] A more recent meta-analysis shows that publication bias decreases over time and that there are positive trade effects from the introduction of the euro, as long as results from before 2010 are taken into account; this may be because of the inclusion of the financial crisis of 2007–2008 and ongoing integration within the EU.[78] Furthermore, older studies accounting for a time trend reflecting general cohesion policies in Europe, which started before and continued after the implementation of the common currency, find no effect on trade.[79][80] These results suggest that other policies aimed at European integration might be the source of the observed increase in trade.
Physical investment seems to have increased by 5% in the eurozone due to the introduction.[81] Regarding foreign direct investment, a study found that the intra-eurozone FDI stocks have increased by about 20% during the first four years of the EMU.[82] Concerning the effect on corporate investment, there is evidence that the introduction of the euro has resulted in an increase in investment rates and that it has made it easier for firms to access financing in Europe. The euro has most specifically stimulated investment in companies that come from countries that previously had weak currencies. A study found that the introduction of the euro accounts for 22% of the investment rate after 1998 in countries that previously had a weak currency.[83]
The introduction of the euro has led to extensive discussion about its possible effect on inflation. In the short term, there was a widespread impression in the population of the eurozone that the introduction of the euro had led to an increase in prices, but this impression was not confirmed by general indices of inflation and other studies.[84][85] A study of this paradox found that it was due to an asymmetric effect of the introduction of the euro on prices: while it had no effect on most goods, it had an effect on cheap goods, whose prices were rounded up after the introduction of the euro. The study found that consumers based their beliefs about inflation on those cheap goods, which are frequently purchased.[86] It has also been suggested that the jump in small prices may be because prior to the introduction, retailers made fewer upward adjustments and waited for the introduction of the euro to do so.[87]
One of the advantages of the adoption of a common currency is the reduction of the risk associated with changes in currency exchange rates. It has been found that the introduction of the euro created "significant reductions in market risk exposures for nonfinancial firms both in and outside Europe".[88] These reductions in market risk "were concentrated in firms domiciled in the eurozone and in non-euro firms with a high fraction of foreign sales or assets in Europe".
The introduction of the euro seems to have had a strong effect on European financial integration. According to a study on this question, it has "significantly reshaped the European financial system, especially with respect to the securities markets [...] However, the real and policy barriers to integration in the retail and corporate banking sectors remain significant, even if the wholesale end of banking has been largely integrated."[89] Specifically, the euro has significantly decreased the cost of trade in bonds, equity, and banking assets within the eurozone.[90] On a global level, there is evidence that the introduction of the euro has led to an integration in terms of investment in bond portfolios, with eurozone countries lending and borrowing more between each other than with other countries.[91]
As of January 2014, and since the introduction of the euro, interest rates of most member countries (particularly those with a weak currency) have decreased. Some of these countries had the most serious sovereign financing problems.
The effect of declining interest rates, combined with excess liquidity continually provided by the ECB, made it easier for banks within the countries in which interest rates fell the most, and their linked sovereigns, to borrow significant amounts (above the 3% of GDP budget deficit imposed on the eurozone initially) and significantly inflate their public and private debt levels.[92] Following the financial crisis of 2007–2008, governments in these countries found it necessary to bail out or nationalise their privately held banks to prevent systemic failure of the banking system when underlying hard or financial asset values were found to be grossly inflated and sometimes so near worthless there was no liquid market for them.[93] This further increased the already high levels of public debt to a level the markets began to consider unsustainable, via increasing government bond interest rates, producing the ongoing European sovereign-debt crisis.
The evidence on the convergence of prices in the eurozone with the introduction of the euro is mixed. Several studies failed to find any evidence of convergence following the introduction of the euro after a phase of convergence in the early 1990s.[94][95] Other studies have found evidence of price convergence,[96][97] in particular for cars.[98] A possible reason for the divergence between the different studies is that the processes of convergence may not have been linear, slowing down substantially between 2000 and 2003, and resurfacing after 2003 as suggested by a recent study (2009).[99]
A study suggests that the introduction of the euro has had a positive effect on the amount of tourist travel within the EMU, with an increase of 6.5%.[100]
The ECB targets interest rates rather than exchange rates and in general does not intervene on the foreign exchange rate markets. This is because of the implications of the Mundell–Fleming model, which implies a central bank cannot (without capital controls) maintain interest rate and exchange rate targets simultaneously, because increasing the money supply results in a depreciation of the currency. In the years following the Single European Act, the EU has liberalised its capital markets and, as the ECB has inflation targeting as its monetary policy, the exchange-rate regime of the euro is floating.
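The constraint can be sketched with uncovered interest parity (our notation, a simplification of the model cited above): with domestic interest rate $i$, foreign rate $i^{*}$ and exchange rate $e_{t}$,

i \approx i^{*} + \frac{\mathbb{E}[e_{t+1}] - e_{t}}{e_{t}},

so with free capital movement and $i^{*}$ given from abroad, choosing the interest rate pins down the expected path of the exchange rate; the central bank cannot fix both targets independently.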
The euro is the second-most widely held reserve currency after the U.S. dollar. After its introduction on 4 January 1999, its exchange rate against the other major currencies fell, reaching its lowest exchange rates in 2000 (3 May vs the pound sterling, 25 October vs the U.S. dollar, 26 October vs the Japanese yen). Afterwards it recovered, and its exchange rate reached its historical highest point in 2008 (15 July vs the U.S. dollar, 23 July vs the Japanese yen, 29 December vs the pound sterling). With the advent of the global financial crisis the euro initially fell, only to regain ground later. Despite pressure due to the European sovereign-debt crisis the euro remained stable.[101] In November 2011 the euro's exchange rate index – measured against currencies of the bloc's major trading partners – was trading almost two percent higher on the year, approximately at the same level as before the crisis kicked off in 2007.[102]
Besides the economic motivations for the introduction of the euro, its creation was also partly justified as a way to foster a closer sense of joint identity between European citizens. Statements about this goal were made, for instance, by Wim Duisenberg, European Central Bank Governor, in 1998,[103] Laurent Fabius, French Finance Minister, in 2000,[104] and Romano Prodi, President of the European Commission, in 2002.[105] However, 15 years after the introduction of the euro, a study found no evidence that it has had a positive influence on a shared sense of European identity (and no evidence that it had a negative effect either).[106]
The formal titles of the currency are euro for the major unit and cent for the minor (one-hundredth) unit and for official use in most eurozone languages; according to the ECB, all languages should use the same spelling for the nominative singular.[107] This may contradict normal rules for word formation in some languages, e.g., those in which there is no eu diphthong. Bulgaria has negotiated an exception; euro in the Bulgarian Cyrillic alphabet is spelled as eвро (evro) and not eуро (euro) in all official documents.[108] In the Greek script the term ευρώ (evró) is used; the Greek "cent" coins are denominated in λεπτό/ά (leptó/á). Official practice for English-language EU legislation is to use the words euro and cent as both singular and plural,[109] although the European Commission's Directorate-General for Translation states that the plural forms euros and cents should be used in English.[110]
en/1903.html.txt
ADDED
@@ -0,0 +1,284 @@
Europe is a continent located entirely in the Northern Hemisphere and mostly in the Eastern Hemisphere. It comprises the westernmost part of Eurasia and is bordered by the Arctic Ocean to the north, the Atlantic Ocean to the west, the Mediterranean Sea to the south, and Asia to the east. Europe is commonly considered to be separated from Asia by the watershed of the Ural Mountains, the Ural River, the Caspian Sea, the Greater Caucasus, the Black Sea, and the waterways of the Turkish Straits.[9] Although much of this border is over land, Europe is generally accorded the status of a full continent because of its great physical size and the weight of history and tradition.
Europe covers about 10,180,000 square kilometres (3,930,000 sq mi), or 2% of the Earth's surface (6.8% of land area), making it the sixth largest continent. Politically, Europe is divided into about fifty sovereign states, of which Russia is the largest and most populous, spanning 39% of the continent and comprising 15% of its population. Europe had a total population of about 741 million (about 11% of the world population) as of 2018[update].[2][3] The European climate is largely affected by warm Atlantic currents that temper winters and summers on much of the continent, even at latitudes along which the climate in Asia and North America is severe. Further from the sea, seasonal differences are more noticeable than close to the coast.
Europe, in particular ancient Greece and ancient Rome, was the birthplace of Western civilisation.[10][11][12] The fall of the Western Roman Empire in 476 AD and the subsequent Migration Period marked the end of ancient history and the beginning of the Middle Ages. Renaissance humanism, exploration, art and science led to the modern era. Since the Age of Discovery, Europe played a predominant role in global affairs. Between the 16th and 20th centuries, European powers colonized at various times the Americas, almost all of Africa and Oceania and the majority of Asia.
The Age of Enlightenment, the subsequent French Revolution and the Napoleonic Wars shaped the continent culturally, politically and economically from the end of the 17th century until the first half of the 19th century. The Industrial Revolution, which began in Great Britain at the end of the 18th century, gave rise to radical economic, cultural and social change in Western Europe and eventually the wider world. Both world wars took place for the most part in Europe, contributing to a decline in Western European dominance in world affairs by the mid-20th century as the Soviet Union and the United States took prominence.[13] During the Cold War, Europe was divided along the Iron Curtain between NATO in the West and the Warsaw Pact in the East, until the revolutions of 1989 and fall of the Berlin Wall.
In 1949 the Council of Europe was founded with the idea of unifying Europe to achieve common goals. Further European integration by some states led to the formation of the European Union (EU), a separate political entity that lies between a confederation and a federation.[14] The EU originated in Western Europe but has been expanding eastward since the fall of the Soviet Union in 1991. The currency of most countries of the European Union, the euro, is the most commonly used among Europeans; and the EU's Schengen Area abolishes border and immigration controls between most of its member states.
In classical Greek mythology, Europa (Ancient Greek: Εὐρώπη, Eurṓpē) was a Phoenician princess. One view is that her name derives from the ancient Greek elements εὐρύς (eurús), "wide, broad" and ὤψ (ōps, gen. ὠπός, ōpós) "eye, face, countenance", hence their composite Eurṓpē would mean "wide-gazing" or "broad of aspect".[15][16][17] Broad has been an epithet of Earth herself in the reconstructed Proto-Indo-European religion and the poetry devoted to it.[15] An alternative view is that of R.S.P. Beekes, who has argued in favour of a Pre-Indo-European origin for the name, explaining that a derivation from ancient Greek eurus would yield a different toponym than Europa. Beekes has located toponyms related to that of Europa in the territory of ancient Greece, and localities like that of Europos in ancient Macedonia.[18]
There have been attempts to connect Eurṓpē to a Semitic term for "west", this being either Akkadian erebu meaning "to go down, set" (said of the sun) or Phoenician 'ereb "evening, west",[19] which is at the origin of Arabic Maghreb and Hebrew ma'arav. Michael A. Barry finds the mention of the word Ereb on an Assyrian stele with the meaning of "night, [the country of] sunset", in opposition to Asu "[the country of] sunrise", i.e. Asia. The same naming motive according to "cartographic convention" appears in Greek Ἀνατολή (Anatolḗ "[sun] rise", "east", hence Anatolia).[20] Martin Litchfield West stated that "phonologically, the match between Europa's name and any form of the Semitic word is very poor",[21] while Beekes considers a connection to Semitic languages improbable.[18] Next to these hypotheses there is also a Proto-Indo-European root *h1regʷos, meaning "darkness", which also produced Greek Erebus.[citation needed]
Most major world languages use words derived from Eurṓpē or Europa to refer to the continent. Chinese, for example, uses the word Ōuzhōu (歐洲/欧洲), which is an abbreviation of the transliterated name Ōuluóbā zhōu (歐羅巴洲) (zhōu means "continent"); a similar Chinese-derived term Ōshū (欧州) is also sometimes used in Japanese such as in the Japanese name of the European Union, Ōshū Rengō (欧州連合), despite the katakana Yōroppa (ヨーロッパ) being more commonly used. In some Turkic languages the originally Persian name Frangistan ("land of the Franks") is used casually in referring to much of Europe, besides official names such as Avrupa or Evropa.[22]
Map of Europe showing one of the most commonly used continental boundaries.[23] Key: blue – states which straddle the border between Europe and Asia; green – countries not geographically in Europe, but closely associated with the continent.
The prevalent definition of Europe as a geographical term has been in use since the mid-19th century.
Europe is taken to be bounded by large bodies of water to the north, west and south; Europe's limits to the east and northeast are usually taken to be the Ural Mountains, the Ural River, and the Caspian Sea; to the southeast, the Caucasus Mountains, the Black Sea and the waterways connecting the Black Sea to the Mediterranean Sea.[24]
Islands are generally grouped with the nearest continental landmass, hence Iceland is considered to be part of Europe, while the nearby island of Greenland is usually assigned to North America, although politically belonging to Denmark. Nevertheless, there are some exceptions based on sociopolitical and cultural differences. Cyprus is closest to Anatolia (or Asia Minor), but is considered part of Europe politically and it is a member state of the EU. Malta was considered an island of Northwest Africa for centuries, but now it is considered to be part of Europe as well.[25]
"Europe" as used specifically in British English may also refer to Continental Europe exclusively.[26]
The term "continent" usually implies the physical geography of a large land mass completely or almost completely surrounded by water at its borders. However the Europe-Asia part of the border is somewhat arbitrary and inconsistent with this definition because of its partial adherence to the Ural and Caucasus Mountains rather than a series of partly joined waterways suggested by cartographer Herman Moll in 1715. These water divides extend with a few relatively small interruptions (compared to the aforementioned mountain ranges) from the Turkish straits running into the Mediterranean Sea to the upper part of the Ob River that drains into the Arctic Ocean. Prior to the adoption of the current convention that includes mountain divides, the border between Europe and Asia had been redefined several times since its first conception in classical antiquity, but always as a series of rivers, seas, and straits that were believed to extend an unknown distance east and north from the Mediterranean Sea without the inclusion of any mountain ranges.
The current division of Eurasia into two continents now reflects East-West cultural, linguistic and ethnic differences which vary on a spectrum rather than with a sharp dividing line. The geographic border between Europe and Asia does not follow any state boundaries and now only follows a few bodies of water. Turkey is generally considered a transcontinental country divided entirely by water, while Russia and Kazakhstan are only partly divided by waterways. France, Portugal, the Netherlands, Spain and the United Kingdom are also transcontinental (or more properly, intercontinental, when oceans or large seas are involved) in that their main land areas are in Europe while pockets of their territories are located on other continents separated from Europe by large bodies of water. Spain, for example, has territories south of the Mediterranean Sea namely Ceuta and Melilla which are parts of Africa and share a border with Morocco. According to the current convention, Georgia and Azerbaijan are transcontinental countries where waterways have been completely replaced by mountains as the divide between continents.
The first recorded usage of Eurṓpē as a geographic term is in the Homeric Hymn to Delian Apollo, in reference to the western shore of the Aegean Sea. As a name for a part of the known world, it is first used in the 6th century BC by Anaximander and Hecataeus. Anaximander placed the boundary between Asia and Europe along the Phasis River (the modern Rioni River on the territory of Georgia) in the Caucasus, a convention still followed by Herodotus in the 5th century BC.[27] Herodotus mentioned that the world had been divided by unknown persons into three parts, Europe, Asia, and Libya (Africa), with the Nile and the Phasis forming their boundaries—though he also states that some considered the River Don, rather than the Phasis, as the boundary between Europe and Asia.[28] Europe's eastern frontier was defined in the 1st century by geographer Strabo at the River Don.[29] The Book of Jubilees described the continents as the lands given by Noah to his three sons; Europe was defined as stretching from the Pillars of Hercules at the Strait of Gibraltar, separating it from Northwest Africa, to the Don, separating it from Asia.[30]
The convention received by the Middle Ages and surviving into modern usage is that of the Roman era, used by Roman-era authors such as Posidonius,[31] Strabo[32] and Ptolemy,[33] who took the Tanais (the modern Don River) as the boundary.
The term "Europe" is first used for a cultural sphere in the Carolingian Renaissance of the 9th century. From that time, the term designated the sphere of influence of the Western Church, as opposed to both the Eastern Orthodox churches and to the Islamic world.
A cultural definition of Europe as the lands of Latin Christendom coalesced in the 8th century, signifying the new cultural condominium created through the confluence of Germanic traditions and Christian-Latin culture, defined partly in contrast with Byzantium and Islam, and limited to northern Iberia, the British Isles, France, Christianised western Germany, the Alpine regions and northern and central Italy.[34] The concept is one of the lasting legacies of the Carolingian Renaissance: Europa often[dubious – discuss] figures in the letters of Charlemagne's court scholar, Alcuin.[35]
The question of defining a precise eastern boundary of Europe arises in the Early Modern period, as the eastern extension of Muscovy began to include North Asia. Throughout the Middle Ages and into the 18th century, the traditional division of the landmass of Eurasia into two continents, Europe and Asia, followed Ptolemy, with the boundary following the Turkish Straits, the Black Sea, the Kerch Strait, the Sea of Azov and the Don (ancient Tanais). But maps produced during the 16th to 18th centuries tended to differ in how to continue the boundary beyond the Don bend at Kalach-na-Donu (where it is closest to the Volga, now joined with it by the Volga–Don Canal), into territory not described in any detail by the ancient geographers. Around 1715, Herman Moll produced a map showing the northern part of the Ob River and the Irtysh River, a major tributary of the former, as components of a series of partly-joined waterways taking the boundary between Europe and Asia from the Turkish Straits and the Don River all the way to the Arctic Ocean. In 1721, he produced a more up to date map that was easier to read. However, his idea to use major rivers almost exclusively as the line of demarcation was never taken up by the Russian Empire.
Four years later, in 1725, Philip Johan von Strahlenberg was the first to depart from the classical Don boundary by proposing that mountain ranges could be included as boundaries between continents whenever there were deemed to be no suitable waterways, the Ob and Irtysh Rivers notwithstanding. He drew a new line along the Volga, following the Volga north until the Samara Bend, along Obshchy Syrt (the drainage divide between Volga and Ural) and then north along Ural Mountains.[36] This was adopted by the Russian Empire, and introduced the convention that would eventually become commonly accepted, but not without criticism by many modern analytical geographers like Halford Mackinder who saw little validity in the Ural Mountains as a boundary between continents.[37]
The mapmakers continued to differ on the boundary between the lower Don and Samara well into the 19th century. The 1745 atlas published by the Russian Academy of Sciences has the boundary follow the Don beyond Kalach as far as Serafimovich before cutting north towards Arkhangelsk, while other 18th- to 19th-century mapmakers such as John Cary followed Strahlenberg's prescription. To the south, the Kuma–Manych Depression was identified circa 1773 by a German naturalist, Peter Simon Pallas, as a valley that once connected the Black Sea and the Caspian Sea,[38][39] and subsequently was proposed as a natural boundary between continents.
By the mid-19th century, there were three main conventions, one following the Don, the Volga–Don Canal and the Volga, the other following the Kuma–Manych Depression to the Caspian and then the Ural River, and the third abandoning the Don altogether, following the Greater Caucasus watershed to the Caspian. The question was still treated as a "controversy" in geographical literature of the 1860s, with Douglas Freshfield advocating the Caucasus crest boundary as the "best possible", citing support from various "modern geographers".[40]
In Russia and the Soviet Union, the boundary along the Kuma–Manych Depression was the most commonly used as early as 1906.[41] In 1958, the Soviet Geographical Society formally recommended that the boundary between Europe and Asia be drawn in textbooks from Baydaratskaya Bay, on the Kara Sea, along the eastern foot of the Ural Mountains, then following the Ural River until the Mugodzhar Hills, and then the Emba River and Kuma–Manych Depression,[42] thus placing the Caucasus entirely in Asia and the Urals entirely in Europe.[43] However, most geographers in the Soviet Union favoured the boundary along the Caucasus crest,[44] and this became the common convention in the later 20th century, although the Kuma–Manych boundary remained in use in some 20th-century maps.
Homo erectus georgicus, which lived roughly 1.8 million years ago in Georgia, is the earliest hominid to have been discovered in Europe.[45] Other hominid remains, dating back roughly 1 million years, have been discovered in Atapuerca, Spain.[46] Neanderthal man (named after the Neandertal valley in Germany) appeared in Europe 150,000 years ago (115,000 years ago it is found already in Poland[47]) and disappeared from the fossil record about 28,000 years ago, with their final refuge being present-day Portugal. The Neanderthals were supplanted by modern humans (Cro-Magnons), who appeared in Europe around 43,000 to 40,000 years ago.[48] The earliest sites in Europe dated 48,000 years ago are Riparo Mochi (Italy), Geissenklösterle (Germany), and Isturitz (France).[49][50]
The European Neolithic period—marked by the cultivation of crops and the raising of livestock, increased numbers of settlements and the widespread use of pottery—began around 7000 BC in Greece and the Balkans, probably influenced by earlier farming practices in Anatolia and the Near East.[51] It spread from the Balkans along the valleys of the Danube and the Rhine (Linear Pottery culture) and along the Mediterranean coast (Cardial culture). Between 4500 and 3000 BC, these central European neolithic cultures developed further to the west and the north, transmitting newly acquired skills in producing copper artifacts. In Western Europe the Neolithic period was characterised not by large agricultural settlements but by field monuments, such as causewayed enclosures, burial mounds and megalithic tombs.[52] The Corded Ware cultural horizon flourished at the transition from the Neolithic to the Chalcolithic. During this period giant megalithic monuments, such as the Megalithic Temples of Malta and Stonehenge, were constructed throughout Western and Southern Europe.[53][54]
The European Bronze Age began c. 3200 BC in Greece with the Minoan civilisation on Crete, the first advanced civilisation in Europe.[55] The Minoans were followed by the Myceneans, who collapsed suddenly around 1200 BC, ushering in the European Iron Age.[56] Iron Age colonisation by the Greeks and Phoenicians gave rise to early Mediterranean cities. Early Iron Age Italy and Greece from around the 8th century BC gradually gave rise to historical Classical antiquity, whose beginning is sometimes dated to 776 BC, the year of the first Olympic Games.[57]
Ancient Greece was the founding culture of Western civilisation. Western democratic and rationalist culture is often attributed to Ancient Greece.[58] The Greek city-state, the polis, was the fundamental political unit of classical Greece.[58] In 508 BC, Cleisthenes instituted the world's first democratic system of government in Athens.[59] The Greek political ideals were rediscovered in the late 18th century by European philosophers and idealists. Greece also generated many cultural contributions: in philosophy, humanism and rationalism under Aristotle, Socrates and Plato; in history with Herodotus and Thucydides; in dramatic and narrative verse, starting with the epic poems of Homer;[60] in drama with Sophocles and Euripides; in medicine with Hippocrates and Galen; and in science with Pythagoras, Euclid and Archimedes.[61][62][63] In the course of the 5th century BC, several of the Greek city states ultimately checked the Achaemenid Persian advance in Europe through the Greco-Persian Wars, considered a pivotal moment in world history;[64] the 50 years of peace that followed are known as the Golden Age of Athens, the seminal period of ancient Greece that laid many of the foundations of Western civilisation. The conquests of Alexander the Great brought the Middle East into the Greek cultural sphere.
Greece was followed by Rome, which left its mark on law, politics, language, engineering, architecture, government and many more key aspects in western civilisation.[58] Rome began as a small city-state, founded, according to tradition, in 753 BC as an elective kingdom. Tradition has it that there were seven kings of Rome with Romulus, the founder, being the first and Lucius Tarquinius Superbus falling to a republican uprising led by Lucius Junius Brutus, but modern scholars doubt many of those stories and even the Romans themselves acknowledged that the sack of Rome by the Gauls in 387 BC destroyed many sources on their early history.
By 200 BC, Rome had conquered Italy, and over the following two centuries it conquered Greece and Hispania (Spain and Portugal), the North African coast, much of the Middle East, Gaul (France and Belgium), and Britannia (England and Wales). The forty-year conquest of Britannia left 250,000 Britons dead, and emperor Antoninus Pius built the Antonine Wall across Scotland's Central Belt to defend the province from the Caledonians; it marked the northernmost border of the Roman Empire.
The Roman Republic ended in 27 BC, when Augustus proclaimed the Roman Empire. The two centuries that followed are known as the pax romana, a period of unprecedented peace, prosperity, and political stability in most of Europe.[65] The empire continued to expand under emperors such as Antoninus Pius and Marcus Aurelius, who spent time on the Empire's northern border fighting Germanic, Pictish and Scottish tribes.[66][67] Christianity was legalised by Constantine I in 313 AD after three centuries of imperial persecution. Constantine also permanently moved the capital of the empire from Rome to the city of Byzantium, which was renamed Constantinople in his honour (modern-day Istanbul) in 330 AD. Christianity became the sole official religion of the empire in 380 AD, and in 391–392 AD, the emperor Theodosius outlawed pagan religions.[68] This is sometimes considered to mark the end of antiquity; alternatively antiquity is considered to end with the fall of the Western Roman Empire in 476 AD; the closure of the pagan Platonic Academy of Athens in 529 AD;[69] or the rise of Islam in the early 7th century AD.
During the decline of the Roman Empire, Europe entered a long period of change arising from what historians call the "Age of Migrations". There were numerous invasions and migrations amongst the Goths, Vandals, Huns, Franks, Angles, Saxons, Slavs, Avars, Bulgars and, later on, the Vikings, Pechenegs, Cumans and Magyars.[65] Germanic tribes settled in the former Roman provinces of England and Spain, while other groups pressed into northern France and Italy.[70] Renaissance thinkers such as Petrarch would later refer to this as the "Dark Ages".[71] Isolated monastic communities were the only places to safeguard and compile written knowledge accumulated previously; apart from this, very few written records survive, and much literature, philosophy, mathematics and other thinking from the classical period disappeared from Western Europe, though it was preserved in the east, in the Byzantine Empire.[72]
While the Roman empire in the west continued to decline, Roman traditions and the Roman state remained strong in the predominantly Greek-speaking Eastern Roman Empire, also known as the Byzantine Empire. During most of its existence, the Byzantine Empire was the most powerful economic, cultural and military force in Europe. Emperor Justinian I presided over Constantinople's first golden age: he established a legal code that forms the basis of many modern legal systems, funded the construction of the Hagia Sophia and brought the Christian church under state control.[73] He reconquered North Africa, southern Spain and Italy.[74] The Ostrogothic Kingdom, which had sought to preserve Roman culture and adopt its values, later turned against Constantinople; Justinian waged war on the Ostrogoths for 30 years, destroying much of urban life in Italy and reducing agriculture to subsistence farming.[75]
From the 7th century onwards, as the Byzantines and the neighbouring Sasanid Persians were severely weakened by the protracted and frequent Byzantine–Sasanian wars, the Muslim Arabs began to make inroads into historically Roman territory, taking the Levant and North Africa and pushing into Asia Minor. In the mid-7th century AD, following the Muslim conquest of Persia, Islam penetrated into the Caucasus region.[76] Over the following centuries Muslim forces took Cyprus, Malta, Crete, Sicily and parts of southern Italy.[77] Between 711 and 726, the Umayyad Caliphate conquered the Visigothic Kingdom, which occupied the Iberian Peninsula and Septimania (southwestern France). The unsuccessful second siege of Constantinople (717) weakened the Umayyad dynasty and reduced its prestige. The Umayyads were then defeated by the Frankish leader Charles Martel at the Battle of Poitiers in 732, which ended their northward advance. In the remote regions of north-western Iberia and the middle Pyrenees the power of the Muslims in the south was scarcely felt. It was here that the foundations of the Christian kingdoms of Asturias, Leon and Galicia were laid, and from here the reconquest of the Iberian Peninsula would start. However, no coordinated attempt was made to drive the Moors out; the Christian kingdoms were mainly focussed on their own internal power struggles. As a result, the Reconquista took the greater part of eight hundred years, a period in which a long list of Alfonsos, Sanchos, Ordoños, Ramiros, Fernandos and Bermudos fought their Christian rivals as much as the Muslim invaders.
One of the biggest threats during the Dark Ages were the Vikings, Norse seafarers who raided, raped and pillaged across the Western world. Many Vikings died in battles in continental Europe, and in 844 they lost many men and ships to King Ramiro in northern Spain.[78] A few months later, another fleet took Seville, only to be driven off with further heavy losses.[79] In 911, after renewed Viking attacks on northern France, the Franks decided to give the Viking leader Rollo land along the English Channel coast in exchange for peace. This land was named "Normandy" after the Norse "Northmen", and the Norse settlers became known as "Normans", adopting the French culture and language and becoming French vassals. In 1066, the Normans went on to conquer England in the first successful cross-Channel invasion since Roman times.[80]
During the Dark Ages, the Western Roman Empire fell under the control of various tribes. The Germanic and Slav tribes established their domains over Western and Eastern Europe respectively.[81] Eventually the Frankish tribes were united under Clovis I.[82] In 507, Clovis defeated the Visigoths at the Battle of Vouillé, slaying King Alaric II and conquering southern Gaul for the Franks. His conquests laid the foundation for the Frankish kingdom. Charlemagne, a Frankish king of the Carolingian dynasty who had conquered most of Western Europe, was anointed "Holy Roman Emperor" by the Pope in 800. This led in 962 to the founding of the Holy Roman Empire, which eventually became centred in the German principalities of central Europe.[83]
East Central Europe saw the creation of the first Slavic states and the adoption of Christianity (circa 1000 AD). The powerful West Slavic state of Great Moravia spread its territory all the way south to the Balkans, reaching its largest territorial extent under Svatopluk I and causing a series of armed conflicts with East Francia. Further south, the first South Slavic states emerged in the late 7th and 8th century and adopted Christianity: the First Bulgarian Empire, the Serbian Principality (later Kingdom and Empire), and the Duchy of Croatia (later Kingdom of Croatia). To the East, the Kievan Rus expanded from its capital in Kiev to become the largest state in Europe by the 10th century. In 988, Vladimir the Great adopted Orthodox Christianity as the religion of state.[84][85] Further East, Volga Bulgaria became an Islamic state in the 10th century, but was eventually absorbed into Russia several centuries later.[86]
The period between 1000 and 1300 is known as the High Middle Ages, during which the population of Europe experienced significant growth, culminating in the Renaissance of the 12th century. Economic growth, together with the lack of safety on the mainland trading routes, made possible the development of major commercial routes along the coasts of the Mediterranean and Baltic Seas. The growing wealth and independence acquired by some coastal cities gave the Maritime Republics a leading role in the European scene. Constantinople was the largest and wealthiest city in Europe from the 9th to the 12th centuries, with a population of approximately 400,000.[91]
The Middle Ages on the mainland were dominated by the two upper echelons of the social structure: the nobility and the clergy. Feudalism developed in France in the Early Middle Ages and soon spread throughout Europe.[92] A struggle for influence between the nobility and the monarchy in England led to the writing of the Magna Carta and the establishment of a parliament.[93] The primary source of culture in this period came from the Roman Catholic Church. Through monasteries and cathedral schools, the Church was responsible for education in much of Europe.[92]
The Papacy reached the height of its power during the High Middle Ages. The East-West Schism of 1054 split the former Roman Empire religiously, with the Eastern Orthodox Church in the Byzantine Empire and the Roman Catholic Church in the former Western Roman Empire. In 1095, Pope Urban II called for a crusade against Muslims occupying Jerusalem and the Holy Land.[94] In Europe itself, the Church organised the Inquisition against heretics. Popes also encouraged crusading in the Iberian Peninsula.[95] In the east, a resurgent Byzantine Empire recaptured Crete and Cyprus from the Muslims and reconquered the Balkans. However, it faced a new enemy in the form of seaborne attacks by the Normans, who conquered Southern Italy and Sicily. Even more dangerous than the Franco-Norsemen was a new enemy from the steppe: the Seljuk Turks. The Empire was weakened by the defeat at the Battle of Manzikert, and weakened considerably further by the Frankish-Venetian sack of Constantinople in 1204, during the Fourth Crusade.[96][97][98][99][100][101][102][103][104] Although it recovered Constantinople in 1261, Byzantium fell in 1453 when the city was taken by the Ottoman Empire.[105][106][107]
In the 11th and 12th centuries, constant incursions by nomadic Turkic tribes, such as the Pechenegs and the Cuman-Kipchaks, caused a massive migration of Slavic populations to the safer, heavily forested regions of the north and temporarily halted the expansion of the Rus' state to the south and east.[108] Like many other parts of Eurasia, these territories were overrun by the Mongols.[109] The invaders, who became known as Tatars, were mostly Turkic-speaking peoples under Mongol suzerainty. They established the state of the Golden Horde with headquarters in Crimea, which later adopted Islam as a religion and ruled over modern-day southern and central Russia for more than three centuries.[110][111] After the collapse of Mongol dominions, the first Romanian states (principalities) emerged in the 14th century: Moldavia and Wallachia. Previously, these territories were under the successive control of the Pechenegs and Cumans.[112] From the 12th to the 15th centuries, the Grand Duchy of Moscow grew from a small principality under Mongol rule to the largest state in Europe, overthrowing the Mongols in 1480 and eventually becoming the Tsardom of Russia. The state was consolidated under Ivan III the Great and Ivan the Terrible, steadily expanding to the east and south over the following centuries.
The Great Famine of 1315–1317 was the first crisis that would strike Europe in the late Middle Ages.[113] The period between 1348 and 1420 witnessed the heaviest loss. The population of France was reduced by half.[114][115] Medieval Britain was afflicted by 95 famines,[116] and France suffered the effects of 75 or more in the same period.[117] Europe was devastated in the mid-14th century by the Black Death, one of the most deadly pandemics in human history which killed an estimated 25 million people in Europe alone—a third of the European population at the time.[118]
The plague had a devastating effect on Europe's social structure; it induced people to live for the moment as illustrated by Giovanni Boccaccio in The Decameron (1353). It was a serious blow to the Roman Catholic Church and led to increased persecution of Jews, beggars, and lepers.[119] The plague is thought to have returned every generation with varying virulence and mortalities until the 18th century.[120] During this period, more than 100 plague epidemics swept across Europe.[121]
The Renaissance was a period of cultural change originating in Florence and later spreading to the rest of Europe. The rise of a new humanism was accompanied by the recovery of forgotten classical Greek and Arabic knowledge from monastic libraries, often translated from Arabic into Latin.[122][123][124] The Renaissance spread across Europe between the 14th and 16th centuries: it saw the flowering of art, philosophy, music, and the sciences, under the joint patronage of royalty, the nobility, the Roman Catholic Church, and an emerging merchant class.[125][126][127] Patrons in Italy, including the Medici family of Florentine bankers and the Popes in Rome, funded prolific quattrocento and cinquecento artists such as Raphael, Michelangelo, and Leonardo da Vinci.[128][129]
Political intrigue within the Church in the mid-14th century caused the Western Schism. During this forty-year period, two popes—one in Avignon and one in Rome—claimed rulership over the Church. Although the schism was eventually healed in 1417, the papacy's spiritual authority had suffered greatly.[132] In the 15th century, Europe started to extend itself beyond its geographic frontiers. Spain and Portugal, the greatest naval powers of the time, took the lead in exploring the world.[133][134] Exploration reached the Southern Hemisphere in the Atlantic and the Southern tip of Africa. Christopher Columbus reached the New World in 1492, and Vasco da Gama opened the ocean route to the East linking the Atlantic and Indian Oceans in 1498. Ferdinand Magellan reached Asia westward across the Atlantic and the Pacific Oceans in the Spanish expedition of Magellan-Elcano, resulting in the first circumnavigation of the globe, completed by Juan Sebastián Elcano (1519–22). Soon after, the Spanish and Portuguese began establishing large global empires in the Americas, Asia, Africa and Oceania.[135] France, the Dutch Republic and England soon followed in building large colonial empires with vast holdings in Africa, the Americas, and Asia. The Italian Wars (1494–1559) led to the development of the first modern professional army in Europe, the Tercios. Their superiority became evident at the Battle of Bicocca (1522) against French-Swiss-Italian forces, which marked the end of massive pike assaults after thousands of Swiss pikemen fell to the Spanish arquebusiers without the latter suffering a single casualty.[136] Over the course of the 16th century, Spanish and Italian troops marched north to fight on Dutch and German battlefields, dying there in large numbers.[137][138]
The Church's power was further weakened by the Protestant Reformation in 1517, when German theologian Martin Luther nailed to the church door his Ninety-five Theses criticising the selling of indulgences. He was subsequently excommunicated in the papal bull Exsurge Domine in 1520, and his followers were condemned in the 1521 Diet of Worms, which divided German princes between the Protestant and Roman Catholic faiths.[139] Religious fighting and warfare spread with Protestantism. War broke out first in Germany, where Imperial troops defeated the Protestant army of the Schmalkaldic League at the Battle of Mühlberg, putting an end to the Schmalkaldic War (1546–47).[140] Between 2 and 3 million people were killed in the French Wars of Religion (1562–98).[141] The Eighty Years' War (1568–1648) between the Catholic Habsburgs and Dutch Calvinists resulted in the death of hundreds of thousands of people.[141] In the Cologne War (1583–88), Catholic forces faced off against German Calvinist and Dutch troops. The Thirty Years' War (1618–48) crippled the Holy Roman Empire and devastated much of Germany, killing between 25 and 40 percent of its population.[142] During the Thirty Years' War, in which various Protestant forces battled Imperial armies, France provided subsidies to Habsburg enemies, especially Sweden. The destruction of the Swedish army by an Austro-Spanish force at the Battle of Nördlingen (1634) forced France to intervene militarily in the war. In 1638, France and the Swedish chancellor Axel Oxenstierna agreed to a treaty at Hamburg, extending the annual French subsidy of 400,000 riksdalers for three years, but Sweden would not fight against Spain. In 1643, the French defeated one of Spain's best armies at the Battle of Rocroi. In the aftermath of the Peace of Westphalia (1648), France rose to predominance within Western Europe.[143] France fought a series of wars over domination of Western Europe, including the Franco-Spanish War, the Franco-Dutch War, the Nine Years' War and the War of the Spanish Succession; these wars cost France over 1,000,000 battle casualties.[144][145]
The 17th century in central and eastern Europe was a period of general decline.[146] Central and Eastern Europe experienced more than 150 famines in the 200-year period between 1501 and 1700.[147] From the Union of Krewo (1385), central and eastern Europe was dominated by the Kingdom of Poland and the Grand Duchy of Lithuania; between 1648 and 1655, the hegemony of the Polish–Lithuanian Commonwealth over the region came to an end. From the 15th to 18th centuries, when the disintegrating khanates of the Golden Horde were conquered by Russia, Tatars from the Crimean Khanate frequently raided Eastern Slavic lands to capture slaves.[148] Further east, the Nogai Horde and Kazakh Khanate frequently raided the Slavic-speaking areas of Russia, Ukraine and Poland for hundreds of years, until the Russian expansion and conquest of most of northern Eurasia (i.e. Eastern Europe, Central Asia and Siberia).
The Renaissance and the New Monarchs marked the start of an Age of Discovery, a period of exploration, invention, and scientific development.[149] Among the great figures of the Western scientific revolution of the 16th and 17th centuries were Copernicus, Kepler, Galileo, and Isaac Newton.[150] According to Peter Barrett, "It is widely accepted that 'modern science' arose in the Europe of the 17th century (towards the end of the Renaissance), introducing a new understanding of the natural world."[122]
The Age of Enlightenment was a powerful intellectual movement during the 18th century promoting scientific and reason-based thought.[151][152][153] Discontent with the aristocracy's and clergy's monopoly on political power in France resulted in the French Revolution and the establishment of the First Republic, as a result of which the monarchy and many of the nobility perished during the initial reign of terror.[154] Napoleon Bonaparte rose to power in the aftermath of the French Revolution and established the First French Empire that, during the Napoleonic Wars, grew to encompass large parts of western and central Europe before collapsing in 1814 with the Battle of Paris between the French Empire and the Sixth Coalition of Russia, Austria and Prussia.[155][156] Napoleonic rule resulted in the further dissemination of the ideals of the French Revolution, including that of the nation-state, as well as the widespread adoption of the French models of administration, law and education.[157][158][159] The Congress of Vienna, convened after Napoleon's downfall, established a new balance of power in Europe centred on the five "Great Powers": the UK, France, Prussia, Austria and Russia.[160] This balance remained in place until the Revolutions of 1848, during which liberal uprisings affected all of Europe except for Russia and the UK. These revolutions were eventually put down by conservative elements and few reforms resulted.[161] The year 1859 saw the unification of Romania, as a nation-state, from smaller principalities. In 1867, the Austro-Hungarian empire was formed; and 1871 saw the unifications of both Italy and Germany as nation-states from smaller principalities.[162]
In parallel, the Eastern Question grew ever more complex after the Ottoman defeat in the Russo-Turkish War (1768–1774). As the dissolution of the Ottoman Empire seemed imminent, the Great Powers struggled to safeguard their strategic and commercial interests in the Ottoman domains. The Russian Empire stood to benefit from the decline, whereas the Habsburg Empire and Britain perceived the preservation of the Ottoman Empire to be in their best interests. Meanwhile, the Serbian revolution (1804) and the Greek War of Independence (1821) marked the beginning of the end of Ottoman rule in the Balkans, which concluded with the Balkan Wars of 1912–1913.[163] Formal recognition of the de facto independent principalities of Montenegro, Serbia and Romania followed at the Congress of Berlin in 1878. Many Ottoman Muslims faced extermination at the hands of the newly independent states or were expelled to the shrinking Ottoman possessions in Europe or to Anatolia; between 1821 and 1922 alone, more than 5 million Ottoman Muslims were driven from their homes, while another 5.5 million died in wars or of starvation and disease.[164]
The Industrial Revolution started in Great Britain in the last part of the 18th century and spread throughout Europe. The invention and implementation of new technologies resulted in rapid urban growth, mass employment, and the rise of a new working class.[165] Reforms in social and economic spheres followed, including the first laws on child labour, the legalisation of trade unions,[166] and the abolition of slavery.[167] In Britain, the Public Health Act of 1875 was passed, which significantly improved living conditions in many British cities.[168] Europe's population increased from about 100 million in 1700 to 400 million by 1900.[169] The last major famine recorded in Western Europe, the Irish Potato Famine, caused death and mass emigration of millions of Irish people.[170] In the 19th century, 70 million people left Europe in migrations to various European colonies abroad and to the United States.[171] Demographic growth meant that, by 1900, Europe's share of the world's population was 25%.[172]
Two world wars and an economic depression dominated the first half of the 20th century. World War I was fought between 1914 and 1918. It started when Archduke Franz Ferdinand of Austria was assassinated by the Yugoslav nationalist[178] Gavrilo Princip.[179] Most European nations were drawn into the war, which was fought between the Entente Powers (France, Belgium, Serbia, Portugal, Russia, the United Kingdom, and later Italy, Greece, Romania, and the United States) and the Central Powers (Austria-Hungary, Germany, Bulgaria, and the Ottoman Empire).
On 4 August 1914, Germany invaded and occupied Belgium. On 17 August 1914, the Russian army launched an assault on the eastern German province of East Prussia, but it was dealt a crushing blow at the Battle of Tannenberg on 26–30 August 1914. As 1914 came to a close, the Germans were facing off against the French and British in northern France and in the Alsace and Lorraine regions of eastern France, and the opposing armies dug trenches and set up defensive positions. From 1915 to 1917, the two sides engaged in trench warfare and massive offensives. The Battle of the Somme was the largest battle on the Western Front; the British suffered 420,000 casualties, the French 200,000 and the Germans 500,000. The Battle of Verdun saw around 377,231 French and 337,000 Germans become casualties. At the Battle of Passchendaele, both sides used mustard gas as a chemical weapon, and many soldiers on both sides died. In 1915, the tide of the war changed when the Kingdom of Italy decided to enter the war on the side of the Triple Entente, seeking to acquire Austrian Tyrol and some possessions along the Adriatic Sea. This violated the Triple Alliance of the 19th century, and Austria-Hungary was ill-prepared for yet another front to fight on. The Royal Italian Army launched several offensives along the Isonzo River, with eleven battles of the Isonzo being fought during the war. Over the course of the war, 462,391 Italian soldiers died; 420,000 of the dead were lost on the Alpine front with Austria-Hungary, the rest fell in France, Albania or Macedonia.[180]
In 1917, German troops began to arrive in Italy to assist the crumbling Austro-Hungarian forces, and the Germans destroyed an Italian army at the Battle of Caporetto. This led to British and French (and later American) troops arriving in Italy to assist the Italian military against the Central Powers, and the Allied troops helped stabilise the front in northern Italy and on the Adriatic. In the Balkans, the Serbians resisted the Austrians until Bulgaria and the German Empire sent troops to assist in the conquest of Serbia in December 1915. The Austro-Hungarian Army on the Eastern Front, which had performed poorly, was soon subordinated to the Imperial German Army's high command, and the Germans destroyed a Russian army in the Gorlice-Tarnów Offensive of 1915. The Russians launched a massive counterattack against the Central Powers in Galicia, and this "Brusilov Offensive" inflicted very high losses on the Austro-Hungarians; 1,325,000 Central Powers troops and 500,000 Russian troops were lost. However, the political situation in Russia deteriorated as people became more aware of the corruption in the government of Tsar Nicholas II, and the Germans made rapid progress in 1917 after the Russian Revolution toppled the tsar. The Germans secured Riga in Latvia in September 1917, and they benefited from the fall of Aleksandr Kerensky's provisional government to the Bolsheviks in October 1917. In February–March 1918, after the Russian SFSR came to power, the Germans launched a massive offensive, taking advantage of the Russian Civil War to seize large amounts of territory. In March 1918, the Soviets signed the Treaty of Brest-Litovsk, ending the war on the Eastern Front, and Germany set up various puppet states in Eastern Europe.
Germany's huge successes in the war against the Russians were a morale boost for the Central Powers, but the Germans created a new problem for themselves when they began to use "unrestricted submarine warfare" to attack enemy ships at sea. These tactics led to the sinking of several civilian ships, among them RMS Lusitania on 7 May 1915, angering the United States, which lost citizens aboard these ships. Germany agreed to the "Sussex Pledge", stating that it would halt this strategy, but later resumed unrestricted submarine warfare; the British also intercepted an offer from the German government to Mexico proposing an anti-American alliance. In 1917, the United States declared war on Germany and joined the Allies. In 1918, American troops took part in the Battle of Belleau Wood and in the Meuse-Argonne Offensive, major engagements that saw large numbers of American troops die on European soil for the first time. The Germans began to suffer as more and more Allied troops arrived, and they launched the desperate "Spring Offensive" of 1918. This offensive was repelled, and the Allies drove towards Belgium in the Hundred Days Offensive during the autumn of 1918. On 11 November 1918, the Germans agreed to an armistice with the Allies, ending the war. The war left more than 16 million civilians and military personnel dead.[181] Over 60 million European soldiers were mobilised from 1914 to 1918.[182]
Russia was plunged into the Russian Revolution, which overthrew the Tsarist monarchy and replaced it with the communist Soviet Union.[183] Austria-Hungary and the Ottoman Empire collapsed and broke up into separate nations, and many other nations had their borders redrawn. The Treaty of Versailles, which officially ended World War I in 1919, was harsh towards Germany, upon whom it placed full responsibility for the war and imposed heavy sanctions.[184] Excess deaths in Russia over the course of World War I and the Russian Civil War (including the postwar famine) amounted to a combined total of 18 million.[185] In 1932–1933, under Stalin's leadership, confiscations of grain by the Soviet authorities contributed to the second Soviet famine, which caused millions of deaths;[186] surviving kulaks were persecuted and many were sent to Gulags to do forced labour. Stalin was also responsible for the Great Purge of 1937–38, in which the NKVD executed 681,692 people;[187] millions of people were deported and exiled to remote areas of the Soviet Union.[188]
The social revolutions sweeping through Russia also affected other European nations following the Great War: in 1919, with the Weimar Republic in Germany and the First Austrian Republic; and in 1922, with Mussolini's one-party fascist government in the Kingdom of Italy and with Atatürk's Turkish Republic, which adopted the Western alphabet and state secularism.
Economic instability, caused in part by debts incurred in the First World War and 'loans' to Germany, played havoc in Europe in the late 1920s and 1930s. This and the Wall Street Crash of 1929 brought about the worldwide Great Depression. Helped by the economic crisis, social instability and the threat of communism, fascist movements developed throughout Europe, bringing Adolf Hitler to power in what became Nazi Germany.[198][199] In 1933, Hitler became the leader of Germany and began to work towards his goal of building Greater Germany. Germany re-expanded and took back the Saarland and Rhineland in 1935 and 1936. In 1938, Austria became a part of Germany following the Anschluss. Later that year, following the Munich Agreement signed by Germany, France, the United Kingdom and Italy, Germany annexed the Sudetenland, a part of Czechoslovakia inhabited by ethnic Germans, and in early 1939 the remainder of Czechoslovakia was split into the Protectorate of Bohemia and Moravia, controlled by Germany, and the Slovak Republic. At the time, Britain and France preferred a policy of appeasement.
With tensions mounting between Germany and Poland over the future of Danzig, the Germans turned to the Soviets and signed the Molotov–Ribbentrop Pact, which allowed the Soviets to invade the Baltic states and parts of Poland and Romania. Germany invaded Poland on 1 September 1939, prompting France and the United Kingdom to declare war on Germany on 3 September, opening the European Theatre of World War II.[194][200][201] The Soviet invasion of Poland started on 17 September and Poland fell soon thereafter. On 24 September, the Soviet Union moved against the Baltic countries and later, Finland. The British hoped to land at Narvik and send troops to aid Finland, but their primary objective in the landing was to encircle Germany and cut the Germans off from Scandinavian resources. Around the same time, Germany moved troops into Denmark; 1,400 German soldiers conquered Denmark in one day.[202] The Phoney War continued. In May 1940, Germany attacked and conquered the Netherlands and Belgium. In June, Italy entered the war on Germany's side, and France was conquered the same month. By August, Germany had begun a bombing offensive against Britain but failed to break British resolve.[203] Some 43,000 British civilians were killed and 139,000 wounded in the Blitz; much of London was destroyed, with 1,400,245 buildings destroyed or damaged.[204] In April 1941, Germany invaded Yugoslavia. In June 1941, Germany invaded the Soviet Union in Operation Barbarossa.[205] The Soviets and Germans fought an extremely brutal war in the east that cost millions of civilian and soldier lives. The Jews of the occupied Soviet territories were massacred by the Nazis, who wanted to purge the world of Jews and other "subhumans". On 7 December 1941 Japan's attack on Pearl Harbor drew the United States into the conflict as allies of the British Empire and other allied forces.[206][207] In 1943, the Allies knocked Italy out of the war. The Italian campaign had the highest casualty rates of the whole war for the western Allies.[208]
After the staggering Battle of Stalingrad in 1943, the German offensive in the Soviet Union turned into a continual retreat. The Battle of Kursk, which involved the largest tank battle in history, was the last major German offensive on the Eastern Front. In 1944, the Soviets and the Western Allies both launched offensives against the Axis in Europe, with the Western Allies liberating France and much of Western Europe as the Soviets liberated much of Eastern Europe before halting in central Poland. The final year of the war saw the Soviets and Western Allies divide Germany in two after meeting along the Elbe River, while the Soviets and allied partisans liberated the Balkans. The Soviets captured the German capital of Berlin on 2 May 1945, and the war in Europe ended days later with Germany's surrender. Hitler killed himself; Mussolini was killed by partisans in Italy. The Soviet-German struggle made World War II the bloodiest and costliest conflict in history.[209][210] More than 40 million people in Europe had died as a result of World War II (70 percent on the eastern front),[211][212] including between 11 and 17 million people who perished during the Holocaust (the genocide of the Jews and the massacre of intellectuals, gypsies, homosexuals, handicapped people, Slavs and Red Army prisoners-of-war).[213] The Soviet Union lost around 27 million people during the war, about half of all World War II casualties.[214] By the end of World War II, Europe had more than 40 million refugees.[215] Several post-war expulsions in Central and Eastern Europe displaced a total of about 20 million people,[216] in particular German-speakers from all over Eastern Europe. In the aftermath of World War II, American troops were stationed in Western Europe, while Soviet troops occupied several Central and Eastern European states.
World War I and especially World War II diminished the eminence of Western Europe in world affairs. After World War II the map of Europe was redrawn at the Yalta Conference and divided into two blocs, the Western countries and the communist Eastern bloc, separated by what Winston Churchill later called an "Iron Curtain". Germany was divided between the capitalist West Germany and the communist East Germany. The United States and Western Europe established the NATO alliance, and later the Soviet Union and Central Europe established the Warsaw Pact.[217]
The two new superpowers, the United States and the Soviet Union, became locked in a fifty-year-long Cold War, centred on nuclear proliferation. At the same time decolonisation, which had already started after World War I, gradually resulted in the independence of most of the European colonies in Asia and Africa.[13] During the Cold War, extra-continental military interventions were the preserve of the two superpowers, a few West European countries, and Cuba.[218] Using the small Caribbean nation of Cuba as a surrogate, the Soviets projected power to distant points of the globe, exploiting world trouble spots without directly involving their own troops.[219]
In the 1980s the reforms of Mikhail Gorbachev and the Solidarity movement in Poland accelerated the collapse of the Eastern bloc and the end of the Cold War. Germany was reunited after the symbolic fall of the Berlin Wall in 1989, and the maps of Central and Eastern Europe were redrawn once more.[198] European integration also grew after World War II. In 1949 the Council of Europe was founded, following a speech by Sir Winston Churchill, with the idea of unifying Europe to achieve common goals. It includes all European states except for Belarus and Vatican City. The Treaty of Rome in 1957 established the European Economic Community between six Western European states with the goal of a unified economic policy and common market.[221] In 1967 the EEC, the European Coal and Steel Community and Euratom formed the European Community, which in 1993 became the European Union. The EU established a parliament, court and central bank and introduced the euro as a unified currency.[222] Between 2004 and 2013, more Central and Eastern European countries joined, expanding the EU to 28 member states and once more making Europe a major economic and political centre of power.[223] However, the United Kingdom withdrew from the EU on 31 January 2020, as a result of a June 2016 referendum on EU membership.[224]
Europe makes up the western fifth of the Eurasian landmass.[24] It has a higher ratio of coast to landmass than any other continent or subcontinent.[225] Its maritime borders consist of the Arctic Ocean to the north, the Atlantic Ocean to the west, and the Mediterranean, Black, and Caspian Seas to the south.[226]
Land relief in Europe shows great variation within relatively small areas. The southern regions are more mountainous, while moving north the terrain descends from the high Alps, Pyrenees, and Carpathians, through hilly uplands, into broad, low northern plains, which are vast in the east. This extended lowland is known as the Great European Plain, and at its heart lies the North German Plain. An arc of uplands also exists along the north-western seaboard, which begins in the western parts of the islands of Britain and Ireland, and then continues along the mountainous, fjord-cut spine of Norway.
This description is simplified. Sub-regions such as the Iberian Peninsula and the Italian Peninsula contain their own complex features, as does mainland Central Europe itself, where the relief contains many plateaus, river valleys and basins that complicate the general trend. Sub-regions like Iceland, Britain and Ireland are special cases: Iceland is a landmass of its own in the northern ocean that is counted as part of Europe, while Britain and Ireland are upland areas that were once joined to the mainland until rising sea levels cut them off.
Europe lies mainly in the temperate climate zones, being subjected to prevailing westerlies. The climate is milder in comparison to other areas of the same latitude around the globe due to the influence of the Gulf Stream.[228] The Gulf Stream is nicknamed "Europe's central heating", because it makes Europe's climate warmer and wetter than it would otherwise be. The Gulf Stream not only carries warm water to Europe's coast but also warms up the prevailing westerly winds that blow across the continent from the Atlantic Ocean.
Therefore, the average temperature throughout the year in Naples is 16 °C (61 °F), while it is only 12 °C (54 °F) in New York City, which lies at almost the same latitude. Berlin, Germany; Calgary, Canada; and Irkutsk, in the Asian part of Russia, lie at around the same latitude; January temperatures in Berlin average around 8 °C (14 °F) higher than those in Calgary, and they are almost 22 °C (40 °F) higher than average temperatures in Irkutsk.[228] Similarly, northern parts of Scotland have a temperate marine climate. The yearly average temperature in the city of Inverness is 9.05 °C (48.29 °F), whereas Churchill, Manitoba, Canada, at roughly the same latitude, has an average temperature of −6.5 °C (20.3 °F), giving it a nearly subarctic climate.
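The parenthesised Fahrenheit values mix two different conversions, which is worth making explicit: absolute temperatures use the full Celsius-to-Fahrenheit formula, while temperature differences scale by 9/5 alone. A quick check against the Naples figure and the Berlin–Calgary gap quoted above:

\[ T_{\mathrm{F}} = \tfrac{9}{5}T_{\mathrm{C}} + 32, \qquad \tfrac{9}{5}(16) + 32 = 60.8\ ^{\circ}\mathrm{F} \approx 61\ ^{\circ}\mathrm{F} \]

\[ \Delta T_{\mathrm{F}} = \tfrac{9}{5}\,\Delta T_{\mathrm{C}}, \qquad \tfrac{9}{5}(8) = 14.4\ ^{\circ}\mathrm{F} \approx 14\ ^{\circ}\mathrm{F} \]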
In general, Europe is not just colder towards the north compared to the south, but it also gets colder from the west towards the east. The climate is more oceanic in the west, and less so in the east. This can be illustrated by a table of average temperatures at locations roughly following the 60th, 55th, 50th, 45th and 40th latitudes; none of them is located at high altitude, and most of them are close to the sea (listing location, approximate latitude and longitude, coldest month average, hottest month average and annual average temperatures in degrees C).[229]
It is notable how the average temperatures for the coldest month, as well as the annual average temperatures, drop from the west to the east. For instance, Edinburgh is warmer than Belgrade during the coldest month of the year, although Belgrade is around 10° of latitude farther south.
The geological history of Europe traces back to the formation of the Baltic Shield (Fennoscandia) and the Sarmatian craton, both around 2.25 billion years ago, followed by the Volgo–Uralia shield, the three together leading to the East European craton (≈ Baltica) which became a part of the supercontinent Columbia. Around 1.1 billion years ago, Baltica and Arctica (as part of the Laurentia block) became joined to Rodinia, later resplitting around 550 million years ago to reform as Baltica. Around 440 million years ago Euramerica was formed from Baltica and Laurentia; a further joining with Gondwana then leading to the formation of Pangea. Around 190 million years ago, Gondwana and Laurasia split apart due to the widening of the Atlantic Ocean. Finally, and very soon afterwards, Laurasia itself split up again, into Laurentia (North America) and the Eurasian continent. The land connection between the two persisted for a considerable time, via Greenland, leading to interchange of animal species. From around 50 million years ago, rising and falling sea levels have determined the actual shape of Europe, and its connections with continents such as Asia. Europe's present shape dates to the late Tertiary period about five million years ago.[232]
The geology of Europe is hugely varied and complex, and gives rise to the wide variety of landscapes found across the continent, from the Scottish Highlands to the rolling plains of Hungary.[234] Europe's most significant feature is the dichotomy between highland and mountainous Southern Europe and a vast, partially underwater, northern plain ranging from Ireland in the west to the Ural Mountains in the east. These two halves are separated by the mountain chains of the Pyrenees and Alps/Carpathians. The northern plains are delimited in the west by the Scandinavian Mountains and the mountainous parts of the British Isles. Major shallow water bodies submerging parts of the northern plains are the Celtic Sea, the North Sea, the Baltic Sea complex and Barents Sea.
The northern plain contains the old geological continent of Baltica, and so may be regarded geologically as the "main continent", while peripheral highlands and mountainous regions in the south and west constitute fragments from various other geological continents. Most of the older geology of western Europe existed as part of the ancient microcontinent Avalonia.
Having lived side by side with agricultural peoples for millennia, Europe's animals and plants have been profoundly affected by the presence and activities of man. With the exception of Fennoscandia and northern Russia, few areas of untouched wilderness are currently found in Europe, except for various national parks.
The main natural vegetation cover in Europe is mixed forest, and the conditions for growth are very favourable. In the north, the Gulf Stream and North Atlantic Drift warm the continent. Southern Europe could be described as having a warm but mild climate, with frequent summer droughts. Mountain ridges also affect the conditions: some of these (the Alps, the Pyrenees) are oriented east–west and allow the wind to carry large masses of water from the ocean into the interior, while others (the Scandinavian Mountains, Dinarides, Carpathians, Apennines) are oriented south–north; because the rain falls primarily on the side of the mountains oriented towards the sea, forests grow well on that side, while conditions on the other side are much less favourable. Few corners of mainland Europe have not been grazed by livestock at some point, and the cutting down of the pre-agricultural forest habitat caused disruption to the original plant and animal ecosystems.
Probably 80 to 90 percent of Europe was once covered by forest.[235] It stretched from the Mediterranean Sea to the Arctic Ocean. Although over half of Europe's original forests disappeared through the centuries of deforestation, Europe still has over one quarter of its land area as forest, such as the broadleaf and mixed forests, taiga of Scandinavia and Russia, mixed rainforests of the Caucasus and the Cork oak forests in the western Mediterranean. During recent times, deforestation has been slowed and many trees have been planted. However, in many cases monoculture plantations of conifers have replaced the original mixed natural forest, because these grow quicker. The plantations now cover vast areas of land, but offer poorer habitats for many European forest dwelling species which require a mixture of tree species and diverse forest structure. The amount of natural forest in Western Europe is just 2–3% or less, in European Russia 5–10%. The country with the smallest percentage of forested area is Iceland (1%), while the most forested country is Finland (77%).[236]
In temperate Europe, mixed forest with both broadleaf and coniferous trees dominates. The most important species in central and western Europe are beech and oak. In the north, the taiga is a mixed spruce–pine–birch forest; further north within Russia and extreme northern Scandinavia, the taiga gives way to tundra as the Arctic is approached. In the Mediterranean, many olive trees have been planted, as they are very well adapted to its arid climate; Mediterranean cypress is also widely planted in southern Europe. The semi-arid Mediterranean region hosts much scrub forest. A narrow east–west tongue of Eurasian grassland (the steppe) extends westwards from Ukraine and southern Russia, ends in Hungary and traverses into taiga to the north.
Glaciation during the most recent ice age and the presence of man affected the distribution of European fauna. In many parts of Europe most large animals and top predator species have been hunted to extinction. The woolly mammoth was extinct before the end of the Neolithic period. Today wolves (carnivores) and bears (omnivores) are endangered. They were once found in most parts of Europe, but deforestation and hunting caused these animals to withdraw further and further. By the Middle Ages the bears' habitats were limited to more or less inaccessible mountains with sufficient forest cover. Today, the brown bear lives primarily in the Balkan peninsula, Scandinavia and Russia; a small number also persist in other countries across Europe (Austria, the Pyrenees etc.), but in these areas brown bear populations are fragmented and marginalised because of the destruction of their habitat. In addition, polar bears may be found on Svalbard, a Norwegian archipelago far north of Scandinavia. The wolf, the second-largest predator in Europe after the brown bear, can be found primarily in Central and Eastern Europe and in the Balkans, with a handful of packs in pockets of Western Europe (Scandinavia, Spain, etc.).
Other European fauna include the European wild cat; foxes (especially the red fox); the jackal; different species of martens and hedgehogs; various reptiles (such as vipers and grass snakes) and amphibians; and different birds, including owls, hawks and other birds of prey.
Important European herbivores include snails, larvae, fish, different birds, and mammals such as rodents, deer, roe deer and boars, as well as mountain-dwelling marmots, steinbocks and chamois, among others. A number of insects, such as the small tortoiseshell butterfly, add to the biodiversity.[239]
The extinction of the dwarf hippos and dwarf elephants has been linked to the earliest arrival of humans on the islands of the Mediterranean.[240]
Sea creatures are also an important part of European flora and fauna. The sea flora is mainly phytoplankton. Important animals that live in European seas are zooplankton, molluscs, echinoderms, different crustaceans, squids and octopuses, fish, dolphins, and whales.
Biodiversity is protected in Europe through the Council of Europe's Bern Convention, which has also been signed by the European Community as well as non-European states.
The political map of Europe is substantially derived from the re-organisation of Europe following the Napoleonic Wars in 1815. The prevalent form of government in Europe is parliamentary democracy, in most cases in the form of a republic; in 1815, the prevalent form of government was still monarchy. Europe's remaining eleven monarchies[241] are constitutional.
European integration is the process of political, legal, economic (and in some cases social and cultural) integration of European states as it has been pursued by the powers sponsoring the Council of Europe since the end of World War II.
The European Union has been the focus of economic integration on the continent since its foundation in 1993. More recently, the Eurasian Economic Union has been established as a counterpart comprising former Soviet states.
27 European states are members of the politico-economic European Union, 26 of the border-free Schengen Area and 19 of the monetary union Eurozone. Among the smaller European organisations are the Nordic Council, the Benelux, the Baltic Assembly and the Visegrád Group.
The list below includes all entities falling even partially under any of the various common definitions of Europe, geographically or politically.
Within the above-mentioned states are several de facto independent countries with limited to no international recognition. None of them are members of the UN:
Several dependencies and similar territories with broad autonomy are also found within or in close proximity to Europe. These include Åland (a region of Finland), two constituent countries of the Kingdom of Denmark (other than Denmark itself), three Crown dependencies and two British Overseas Territories. Svalbard is also included due to its unique status within Norway, although it is not autonomous. Not included are the three countries of the United Kingdom with devolved powers and the two Autonomous Regions of Portugal, which, despite having a unique degree of autonomy, are not largely self-governing in matters other than international affairs. Areas with little more than a unique tax status, such as Heligoland and the Canary Islands, are also not included for this reason.
As a continent, the economy of Europe is currently the largest on Earth, and it is the richest region as measured by assets under management, with over $32.7 trillion compared to North America's $27.1 trillion in 2008.[243] In 2009 Europe remained the wealthiest region; its $37.1 trillion in assets under management represented one-third of the world's wealth, and it was one of several regions where wealth surpassed its pre-crisis year-end peak.[244] As with other continents, Europe has a large variation of wealth among its countries. The richer states tend to be in the West; some of the Central and Eastern European economies are still emerging from the collapse of the Soviet Union and the breakup of Yugoslavia.
The European Union, a political entity composed of 27 European states, comprises the largest single economic area in the world. 19 EU countries share the euro as a common currency.
Several European countries rank among the world's largest national economies in GDP (PPP), including (ranks according to the CIA): Germany (6), Russia (7), the United Kingdom (10), France (11) and Italy (13).[245]
There is huge disparity between many European countries in terms of their income. The richest in terms of GDP per capita is Monaco, with US$172,676 per capita (2009), and the poorest is Moldova, with a GDP per capita of US$1,631 (2010).[246] Monaco is the richest country in the world in terms of GDP per capita according to the World Bank.
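To put that disparity on a single scale, dividing the two per-capita figures quoted above shows that Monaco's GDP per capita is roughly a hundred times Moldova's (a rough comparison, since the two figures are from different years):

\[ \frac{\$172{,}676}{\$1{,}631} \approx 106 \]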
As a whole, Europe's GDP per capita is US$21,767 according to a 2016 International Monetary Fund assessment.[247]
Capitalism has been dominant in the Western world since the end of feudalism.[248] From Britain, it gradually spread throughout Europe.[249] The Industrial Revolution started in Europe, specifically the United Kingdom in the late 18th century,[250] and the 19th century saw Western Europe industrialise. Economies were disrupted by World War I but by the beginning of World War II they had recovered and were having to compete with the growing economic strength of the United States. World War II, again, damaged much of Europe's industries.
After World War II the economy of the UK was in a state of ruin,[251] and continued to suffer relative economic decline in the following decades.[252] Italy was also in a poor economic condition but regained a high level of growth by the 1950s. West Germany recovered quickly and had doubled production from pre-war levels by the 1950s.[253] France also staged a remarkable comeback enjoying rapid growth and modernisation; later on Spain, under the leadership of Franco, also recovered, and the nation recorded huge unprecedented economic growth beginning in the 1960s in what is called the Spanish miracle.[254] The majority of Central and Eastern European states came under the control of the Soviet Union and thus were members of the Council for Mutual Economic Assistance (COMECON).[255]
The states which retained a free-market system were given a large amount of aid by the United States under the Marshall Plan.[256] The western states moved to link their economies together, providing the basis for the EU and increasing cross-border trade. This helped them to enjoy rapidly improving economies, while the COMECON states struggled, in large part due to the cost of the Cold War. By 1990, the European Community had expanded from 6 founding members to 12. The emphasis placed on resurrecting the West German economy led to it overtaking the UK as Europe's largest economy.
With the fall of communism in Central and Eastern Europe in 1991, the post-socialist states began free market reforms.
After East and West Germany were reunited in 1990, the economy of West Germany struggled as it had to support and largely rebuild the infrastructure of East Germany.
By the turn of the millennium, the EU dominated the economy of Europe, comprising the five largest European economies of the time: Germany, the United Kingdom, France, Italy, and Spain. Beginning in 1999, 12 of the 15 members of the EU joined the Eurozone, replacing their former national currencies with the common euro. The three that chose to remain outside the Eurozone were the United Kingdom, Denmark, and Sweden. The European Union is now the largest economy in the world.[257]
Figures released by Eurostat in 2009 confirmed that the Eurozone had gone into recession in 2008.[258] The recession impacted much of the region.[259] In 2010, fears of a sovereign debt crisis[260] developed concerning some countries in Europe, especially Greece, Ireland, Spain, and Portugal.[261] As a result, measures were taken by the leading countries of the Eurozone, especially for Greece.[262] The EU-27 unemployment rate was 10.3% in 2012.[263] For those aged 15–24 it was 22.4%.[263]
In 2017, the population of Europe was estimated to be 742 million according to the 2019 revision of the World Population Prospects,[2][3] slightly more than one-ninth of the world's population. This number includes Siberia (about 38 million people) but excludes European Turkey (about 12 million).
A century ago, Europe had nearly a quarter of the world's population.[265] The population of Europe has grown in the past century, but in other areas of the world (in particular Africa and Asia) the population has grown far more quickly.[266] Among the continents, Europe has a relatively high population density, second only to Asia. Most of Europe is in a mode of sub-replacement fertility, which means that each new generation is less populous than the one before.
The most densely populated country in Europe (and in the world) is the microstate of Monaco.
Pan and Pfeil (2004) count 87 distinct "peoples of Europe", of which 33 form the majority population in at least one sovereign state, while the remaining 54 constitute ethnic minorities.[267]
According to UN population projections, Europe's population may fall to about 7% of the world population by 2050, or 653 million people (medium variant; 556 to 777 million in the low and high variants, respectively).[266] Within this context, significant disparities exist between regions in relation to fertility rates. The average number of children per female of child-bearing age is 1.52.[268] According to some sources,[269] this rate is higher among Muslims in Europe. The UN predicts a steady population decline in Central and Eastern Europe as a result of emigration and low birth rates.[270]
Europe is home to the highest number of migrants of all global regions, at 70.6 million people, according to the IOM's report.[271] In 2005, the EU had an overall net gain from immigration of 1.8 million people, accounting for almost 85% of Europe's total population growth.[272] In 2008, 696,000 persons were given citizenship of an EU27 member state, a decrease from 707,000 the previous year.[273] In 2017, approximately 825,000 persons acquired citizenship of an EU28 member state.[274] In the same year, 2.4 million immigrants from non-EU countries entered the EU.[275]
Early modern emigration from Europe began with Spanish and Portuguese settlers in the 16th century,[276][277] and French and English settlers in the 17th century.[278] But numbers remained relatively small until waves of mass emigration in the 19th century, when millions of poor families left Europe.[279]
Today, large populations of European descent are found on every continent. European ancestry predominates in North America and, to a lesser degree, in South America (particularly in Uruguay, Argentina, Chile and Brazil, while most of the other Latin American countries also have considerable populations of European origin). Australia and New Zealand have large European-derived populations. Africa has no countries with European-derived majorities (with the possible exceptions of Cape Verde and São Tomé and Príncipe, depending on context), but there are significant minorities, such as the White South Africans in South Africa. In Asia, European-derived populations (specifically Russians) predominate in North Asia and some parts of Northern Kazakhstan.[280]
Europe has about 225 indigenous languages,[281] mostly falling within three Indo-European language groups: the Romance languages, derived from the Latin of the Roman Empire; the Germanic languages, whose ancestor language came from southern Scandinavia; and the Slavic languages.[232] Slavic languages are mostly spoken in Southern, Central and Eastern Europe. Romance languages are spoken primarily in Western and Southern Europe as well as in Switzerland in Central Europe and Romania and Moldova in Eastern Europe. Germanic languages are spoken in Western, Northern and Central Europe as well as in Gibraltar and Malta in Southern Europe.[232] Languages in adjacent areas show significant overlaps (English, for example). Other Indo-European languages outside the three main groups include the Baltic group (Latvian and Lithuanian), the Celtic group (Irish, Scottish Gaelic, Manx, Welsh, Cornish, and Breton[232]), Greek, Armenian, and Albanian.
A distinct non-Indo-European family of Uralic languages (Estonian, Finnish, Hungarian, Erzya, Komi, Mari, Moksha, and Udmurt) is spoken mainly in Estonia, Finland, Hungary, and parts of Russia. Turkic languages include Azerbaijani, Kazakh and Turkish, in addition to smaller languages in Eastern and Southeast Europe (Balkan Gagauz Turkish, Bashkir, Chuvash, Crimean Tatar, Karachay-Balkar, Kumyk, Nogai, and Tatar). Kartvelian languages (Georgian, Mingrelian, and Svan) are spoken primarily in Georgia. Two other language families reside in the North Caucasus (termed Northeast Caucasian, most notably including Chechen, Avar, and Lezgin; and Northwest Caucasian, most notably including Adyghe). Maltese is the only Semitic language that is official within the EU, while Basque is the only European language isolate.
Multilingualism and the protection of regional and minority languages are recognised political goals in Europe today. The Council of Europe Framework Convention for the Protection of National Minorities and the Council of Europe's European Charter for Regional or Minority Languages set up a legal framework for language rights in Europe.
The four largest cities of Europe are Istanbul, Moscow, Paris and London, each of which has over 10 million residents,[8] and as such they have been described as megacities.[282] While Istanbul has the highest total population, one third of it lies on the Asian side of the Bosporus, making Moscow the most populous city entirely within Europe.
The next largest cities in order of population are Saint Petersburg, Madrid, Berlin and Rome, each having over 3 million residents.[8]
When considering the commuter belts or metropolitan areas within the EU (for which comparable data is available), London has the largest population, followed in order by Paris, Madrid, Barcelona, Berlin, the Ruhr area, Rome, Milan, Athens and Warsaw.[283]
"Europe" as a cultural concept is substantially derived from the shared heritage of the Roman Empire and its culture.
The boundaries of Europe were historically understood as those of Christendom (or more specifically Latin Christendom), as established or defended throughout the medieval and early modern history of Europe, especially against Islam, as in the Reconquista and the Ottoman wars in Europe.[284]
This shared cultural heritage is combined with overlapping indigenous national cultures and folklores, roughly divided into Slavic, Latin (Romance) and Germanic, but with several components not part of any of these groups (notably Greek and Celtic).
Cultural contact and mixture characterise much of Europe's regional cultures; Kaplan (2014) describes Europe as "embracing maximum cultural diversity at minimal geographical distances".[285]
Different cultural events are organised in Europe, with the aim of bringing different cultures closer together and raising awareness of their importance, such as the European Capital of Culture, the European Region of Gastronomy, the European Youth Capital and the European Capital of Sport.
Historically, religion in Europe has been a major influence on European art, culture, philosophy and law. There are six patron saints of Europe venerated in Roman Catholicism, five of them declared by Pope John Paul II between 1980 and 1999: Saints Cyril and Methodius, Saint Bridget of Sweden, Catherine of Siena and Saint Teresa Benedicta of the Cross (Edith Stein). The exception is Benedict of Nursia, who had already been declared "Patron Saint of all Europe" by Pope Paul VI in 1964.[286]
Religion in Europe according to the Global Religious Landscape survey by the Pew Forum, 2012[287]
The largest religion in Europe is Christianity, with 76.2% of Europeans considering themselves Christians,[288] including Catholic, Eastern Orthodox and various Protestant denominations. Among Protestants, the most popular are historically state-supported European denominations such as Lutheranism, Anglicanism and the Reformed faith. Other Protestant denominations, such as the historically significant Anabaptists, were never supported by any state and thus are not as widespread; the same holds for denominations that arrived more recently from the United States, such as Pentecostalism, Adventism, Methodism, Baptists and various Evangelical Protestants, although Methodism and the Baptists both have European origins. The notion of "Europe" and the "Western World" has been intimately connected with the concept of "Christianity and Christendom"; many even credit Christianity with being the link that created a unified European identity.[289]
Christianity, including the Roman Catholic Church,[290][291] has played a prominent role in the shaping of Western civilisation since at least the 4th century,[292][293][294][295] and for at least a millennium and a half, Europe has been nearly equivalent to Christian culture, even though the religion was inherited from the Middle East. Christian culture was the predominant force in western civilisation, guiding the course of philosophy, art, and science.[296][297]
The second most popular religion is Islam (6%)[298] concentrated mainly in the Balkans and Eastern Europe (Bosnia and Herzegovina, Albania, Kosovo, Kazakhstan, North Cyprus, Turkey, Azerbaijan, North Caucasus, and the Volga-Ural region). Other religions, including Judaism, Hinduism, and Buddhism are minority religions (though Tibetan Buddhism is the majority religion of Russia's Republic of Kalmykia). The 20th century saw the revival of Neopaganism through movements such as Wicca and Druidry.
Europe has become a relatively secular continent, with an increasing number and proportion of irreligious, atheist and agnostic people, who make up about 18.2% of Europe's population,[299] currently the largest secular population in the Western world. Particularly high numbers of self-described non-religious people are found in the Czech Republic, Estonia, Sweden, former East Germany, and France.[300]
Sport in Europe tends to be highly organised, with many sports having professional leagues.
en/1904.html.txt
ADDED
@@ -0,0 +1,284 @@
Europe is a continent located entirely in the Northern Hemisphere and mostly in the Eastern Hemisphere. It comprises the westernmost part of Eurasia and is bordered by the Arctic Ocean to the north, the Atlantic Ocean to the west, the Mediterranean Sea to the south, and Asia to the east. Europe is commonly considered to be separated from Asia by the watershed of the Ural Mountains, the Ural River, the Caspian Sea, the Greater Caucasus, the Black Sea, and the waterways of the Turkish Straits.[9] Although much of this border is over land, Europe is generally accorded the status of a full continent because of its great physical size and the weight of history and tradition.
Europe covers about 10,180,000 square kilometres (3,930,000 sq mi), or 2% of the Earth's surface (6.8% of land area), making it the sixth largest continent. Politically, Europe is divided into about fifty sovereign states, of which Russia is the largest and most populous, spanning 39% of the continent and comprising 15% of its population. Europe had a total population of about 741 million (about 11% of the world population) as of 2018.[2][3] The European climate is largely affected by warm Atlantic currents that temper winters and summers on much of the continent, even at latitudes along which the climate in Asia and North America is severe. Further from the sea, seasonal differences are more noticeable than close to the coast.
Europe, in particular ancient Greece and ancient Rome, was the birthplace of Western civilisation.[10][11][12] The fall of the Western Roman Empire in 476 AD and the subsequent Migration Period marked the end of ancient history and the beginning of the Middle Ages. Renaissance humanism, exploration, art and science led to the modern era. Since the Age of Discovery, Europe played a predominant role in global affairs. Between the 16th and 20th centuries, European powers colonised at various times the Americas, almost all of Africa and Oceania and the majority of Asia.
The Age of Enlightenment, the subsequent French Revolution and the Napoleonic Wars shaped the continent culturally, politically and economically from the end of the 17th century until the first half of the 19th century. The Industrial Revolution, which began in Great Britain at the end of the 18th century, gave rise to radical economic, cultural and social change in Western Europe and eventually the wider world. Both world wars took place for the most part in Europe, contributing to a decline in Western European dominance in world affairs by the mid-20th century as the Soviet Union and the United States took prominence.[13] During the Cold War, Europe was divided along the Iron Curtain between NATO in the West and the Warsaw Pact in the East, until the revolutions of 1989 and fall of the Berlin Wall.
In 1949 the Council of Europe was founded with the idea of unifying Europe to achieve common goals. Further European integration by some states led to the formation of the European Union (EU), a separate political entity that lies between a confederation and a federation.[14] The EU originated in Western Europe but has been expanding eastward since the fall of the Soviet Union in 1991. The currency of most countries of the European Union, the euro, is the most commonly used among Europeans; and the EU's Schengen Area abolishes border and immigration controls between most of its member states.
In classical Greek mythology, Europa (Ancient Greek: Εὐρώπη, Eurṓpē) was a Phoenician princess. One view is that her name derives from the ancient Greek elements εὐρύς (eurús), "wide, broad" and ὤψ (ōps, gen. ὠπός, ōpós) "eye, face, countenance", hence the composite Eurṓpē would mean "wide-gazing" or "broad of aspect".[15][16][17] Broad has been an epithet of Earth herself in the reconstructed Proto-Indo-European religion and the poetry devoted to it.[15] An alternative view is that of R.S.P. Beekes, who has argued in favour of a Pre-Indo-European origin for the name, explaining that a derivation from ancient Greek eurus would yield a different toponym than Europa. Beekes has located toponyms related to that of Europa in the territory of ancient Greece, and in localities like that of Europos in ancient Macedonia.[18]
There have been attempts to connect Eurṓpē to a Semitic term for "west", this being either Akkadian erebu meaning "to go down, set" (said of the sun) or Phoenician 'ereb "evening, west",[19] which is at the origin of Arabic Maghreb and Hebrew ma'arav. Michael A. Barry finds the mention of the word Ereb on an Assyrian stele with the meaning of "night, [the country of] sunset", in opposition to Asu "[the country of] sunrise", i.e. Asia. The same naming motive according to "cartographic convention" appears in Greek Ἀνατολή (Anatolḗ "[sun] rise", "east", hence Anatolia).[20] Martin Litchfield West stated that "phonologically, the match between Europa's name and any form of the Semitic word is very poor",[21] while Beekes considers a connection to Semitic languages improbable.[18] Next to these hypotheses there is also a Proto-Indo-European root *h1regʷos, meaning "darkness", which also produced Greek Erebus.
Most major world languages use words derived from Eurṓpē or Europa to refer to the continent. Chinese, for example, uses the word Ōuzhōu (歐洲/欧洲), which is an abbreviation of the transliterated name Ōuluóbā zhōu (歐羅巴洲) (zhōu means "continent"); a similar Chinese-derived term Ōshū (欧州) is also sometimes used in Japanese such as in the Japanese name of the European Union, Ōshū Rengō (欧州連合), despite the katakana Yōroppa (ヨーロッパ) being more commonly used. In some Turkic languages the originally Persian name Frangistan ("land of the Franks") is used casually in referring to much of Europe, besides official names such as Avrupa or Evropa.[22]
Map of Europe, showing one of the most commonly used continental boundaries.[23] Key: blue – states which straddle the border between Europe and Asia; green – countries not geographically in Europe, but closely associated with the continent.
The prevalent definition of Europe as a geographical term has been in use since the mid-19th century.
Europe is taken to be bounded by large bodies of water to the north, west and south; Europe's limits to the east and northeast are usually taken to be the Ural Mountains, the Ural River, and the Caspian Sea; to the southeast, the Caucasus Mountains, the Black Sea and the waterways connecting the Black Sea to the Mediterranean Sea.[24]
Islands are generally grouped with the nearest continental landmass, hence Iceland is considered to be part of Europe, while the nearby island of Greenland is usually assigned to North America, although politically belonging to Denmark. Nevertheless, there are some exceptions based on sociopolitical and cultural differences. Cyprus is closest to Anatolia (or Asia Minor), but is considered part of Europe politically and it is a member state of the EU. Malta was considered an island of Northwest Africa for centuries, but now it is considered to be part of Europe as well.[25]
"Europe" as used specifically in British English may also refer to Continental Europe exclusively.[26]
The term "continent" usually implies the physical geography of a large land mass completely or almost completely surrounded by water at its borders. However the Europe-Asia part of the border is somewhat arbitrary and inconsistent with this definition because of its partial adherence to the Ural and Caucasus Mountains rather than a series of partly joined waterways suggested by cartographer Herman Moll in 1715. These water divides extend with a few relatively small interruptions (compared to the aforementioned mountain ranges) from the Turkish straits running into the Mediterranean Sea to the upper part of the Ob River that drains into the Arctic Ocean. Prior to the adoption of the current convention that includes mountain divides, the border between Europe and Asia had been redefined several times since its first conception in classical antiquity, but always as a series of rivers, seas, and straits that were believed to extend an unknown distance east and north from the Mediterranean Sea without the inclusion of any mountain ranges.
The current division of Eurasia into two continents now reflects East-West cultural, linguistic and ethnic differences which vary on a spectrum rather than with a sharp dividing line. The geographic border between Europe and Asia does not follow any state boundaries and now only follows a few bodies of water. Turkey is generally considered a transcontinental country divided entirely by water, while Russia and Kazakhstan are only partly divided by waterways. France, Portugal, the Netherlands, Spain and the United Kingdom are also transcontinental (or more properly, intercontinental, when oceans or large seas are involved) in that their main land areas are in Europe while pockets of their territories are located on other continents separated from Europe by large bodies of water. Spain, for example, has territories south of the Mediterranean Sea, namely Ceuta and Melilla, which are parts of Africa and share a border with Morocco. According to the current convention, Georgia and Azerbaijan are transcontinental countries where waterways have been completely replaced by mountains as the divide between continents.
The first recorded usage of Eurṓpē as a geographic term is in the Homeric Hymn to Delian Apollo, in reference to the western shore of the Aegean Sea. As a name for a part of the known world, it is first used in the 6th century BC by Anaximander and Hecataeus. Anaximander placed the boundary between Asia and Europe along the Phasis River (the modern Rioni River on the territory of Georgia) in the Caucasus, a convention still followed by Herodotus in the 5th century BC.[27] Herodotus mentioned that the world had been divided by unknown persons into three parts, Europe, Asia, and Libya (Africa), with the Nile and the Phasis forming their boundaries—though he also states that some considered the River Don, rather than the Phasis, as the boundary between Europe and Asia.[28] Europe's eastern frontier was defined in the 1st century by geographer Strabo at the River Don.[29] The Book of Jubilees described the continents as the lands given by Noah to his three sons; Europe was defined as stretching from the Pillars of Hercules at the Strait of Gibraltar, separating it from Northwest Africa, to the Don, separating it from Asia.[30]
The convention received by the Middle Ages and surviving into modern usage is that of the Roman era, used by Roman-era authors such as Posidonius,[31] Strabo[32] and Ptolemy,[33] who took the Tanais (the modern Don River) as the boundary.
The term "Europe" is first used for a cultural sphere in the Carolingian Renaissance of the 9th century. From that time, the term designated the sphere of influence of the Western Church, as opposed to both the Eastern Orthodox churches and to the Islamic world.
A cultural definition of Europe as the lands of Latin Christendom coalesced in the 8th century, signifying the new cultural condominium created through the confluence of Germanic traditions and Christian-Latin culture, defined partly in contrast with Byzantium and Islam, and limited to northern Iberia, the British Isles, France, Christianised western Germany, the Alpine regions and northern and central Italy.[34] The concept is one of the lasting legacies of the Carolingian Renaissance: Europa often figures in the letters of Charlemagne's court scholar, Alcuin.[35]
The question of defining a precise eastern boundary of Europe arises in the Early Modern period, as the eastern extension of Muscovy began to include North Asia. Throughout the Middle Ages and into the 18th century, the traditional division of the landmass of Eurasia into two continents, Europe and Asia, followed Ptolemy, with the boundary following the Turkish Straits, the Black Sea, the Kerch Strait, the Sea of Azov and the Don (ancient Tanais). But maps produced during the 16th to 18th centuries tended to differ in how to continue the boundary beyond the Don bend at Kalach-na-Donu (where it is closest to the Volga, now joined with it by the Volga–Don Canal), into territory not described in any detail by the ancient geographers. Around 1715, Herman Moll produced a map showing the northern part of the Ob River and the Irtysh River, a major tributary of the former, as components of a series of partly-joined waterways taking the boundary between Europe and Asia from the Turkish Straits and the Don River all the way to the Arctic Ocean. In 1721, he produced a more up-to-date map that was easier to read. However, his idea to use major rivers almost exclusively as the line of demarcation was never taken up by the Russian Empire.
Four years later, in 1725, Philip Johan von Strahlenberg was the first to depart from the classical Don boundary by proposing that mountain ranges could be included as boundaries between continents whenever there were deemed to be no suitable waterways, the Ob and Irtysh Rivers notwithstanding. He drew a new line along the Volga, following the Volga north until the Samara Bend, along Obshchy Syrt (the drainage divide between Volga and Ural) and then north along the Ural Mountains.[36] This was adopted by the Russian Empire, and introduced the convention that would eventually become commonly accepted, but not without criticism by many modern analytical geographers, such as Halford Mackinder, who saw little validity in the Ural Mountains as a boundary between continents.[37]
The mapmakers continued to differ on the boundary between the lower Don and Samara well into the 19th century. The 1745 atlas published by the Russian Academy of Sciences has the boundary follow the Don beyond Kalach as far as Serafimovich before cutting north towards Arkhangelsk, while other 18th- to 19th-century mapmakers such as John Cary followed Strahlenberg's prescription. To the south, the Kuma–Manych Depression was identified circa 1773 by a German naturalist, Peter Simon Pallas, as a valley that once connected the Black Sea and the Caspian Sea,[38][39] and subsequently was proposed as a natural boundary between continents.
By the mid-19th century, there were three main conventions: one following the Don, the Volga–Don Canal and the Volga; another following the Kuma–Manych Depression to the Caspian and then the Ural River; and a third abandoning the Don altogether, following the Greater Caucasus watershed to the Caspian. The question was still treated as a "controversy" in the geographical literature of the 1860s, with Douglas Freshfield advocating the Caucasus crest boundary as the "best possible", citing support from various "modern geographers".[40]
In Russia and the Soviet Union, the boundary along the Kuma–Manych Depression was the most commonly used as early as 1906.[41] In 1958, the Soviet Geographical Society formally recommended that the boundary between Europe and Asia be drawn in textbooks from Baydaratskaya Bay, on the Kara Sea, along the eastern foot of the Ural Mountains, then following the Ural River until the Mugodzhar Hills, then the Emba River, and then the Kuma–Manych Depression,[42] thus placing the Caucasus entirely in Asia and the Urals entirely in Europe.[43] However, most geographers in the Soviet Union favoured the boundary along the Caucasus crest[44] and this became the common convention in the later 20th century, although the Kuma–Manych boundary remained in use in some 20th-century maps.
Homo erectus georgicus, which lived roughly 1.8 million years ago in Georgia, is the earliest hominid to have been discovered in Europe.[45] Other hominid remains, dating back roughly 1 million years, have been discovered in Atapuerca, Spain.[46] Neanderthal man (named after the Neandertal valley in Germany) appeared in Europe 150,000 years ago (and is found in Poland as early as 115,000 years ago[47]) and disappeared from the fossil record about 28,000 years ago, with their final refuge being present-day Portugal. The Neanderthals were supplanted by modern humans (Cro-Magnons), who appeared in Europe around 43,000 to 40,000 years ago.[48] The earliest sites in Europe, dated to 48,000 years ago, are Riparo Mochi (Italy), Geissenklösterle (Germany), and Isturitz (France).[49][50]
The European Neolithic period—marked by the cultivation of crops and the raising of livestock, increased numbers of settlements and the widespread use of pottery—began around 7000 BC in Greece and the Balkans, probably influenced by earlier farming practices in Anatolia and the Near East.[51] It spread from the Balkans along the valleys of the Danube and the Rhine (Linear Pottery culture) and along the Mediterranean coast (Cardial culture). Between 4500 and 3000 BC, these central European neolithic cultures developed further to the west and the north, transmitting newly acquired skills in producing copper artifacts. In Western Europe the Neolithic period was characterised not by large agricultural settlements but by field monuments, such as causewayed enclosures, burial mounds and megalithic tombs.[52] The Corded Ware cultural horizon flourished at the transition from the Neolithic to the Chalcolithic. During this period giant megalithic monuments, such as the Megalithic Temples of Malta and Stonehenge, were constructed throughout Western and Southern Europe.[53][54]
The European Bronze Age began c. 3200 BC in Greece with the Minoan civilisation on Crete, the first advanced civilisation in Europe.[55] The Minoans were followed by the Myceneans, who collapsed suddenly around 1200 BC, ushering in the European Iron Age.[56] Iron Age colonisation by the Greeks and Phoenicians gave rise to early Mediterranean cities. Early Iron Age Italy and Greece from around the 8th century BC gradually gave rise to historical Classical antiquity, whose beginning is sometimes dated to 776 BC, the year of the first Olympic Games.[57]
Ancient Greece was the founding culture of Western civilisation. Western democratic and rationalist culture is often attributed to Ancient Greece.[58] The Greek city-state, the polis, was the fundamental political unit of classical Greece.[58] In 508 BC, Cleisthenes instituted the world's first democratic system of government in Athens.[59] The Greek political ideals were rediscovered in the late 18th century by European philosophers and idealists. Greece also generated many cultural contributions: in philosophy, humanism and rationalism under Aristotle, Socrates and Plato; in history with Herodotus and Thucydides; in dramatic and narrative verse, starting with the epic poems of Homer;[60] in drama with Sophocles and Euripides; in medicine with Hippocrates and Galen; and in science with Pythagoras, Euclid and Archimedes.[61][62][63] In the course of the 5th century BC, several of the Greek city states would ultimately check the Achaemenid Persian advance in Europe through the Greco-Persian Wars, considered a pivotal moment in world history;[64] the 50 years of peace that followed are known as the Golden Age of Athens, the seminal period of ancient Greece that laid many of the foundations of Western civilisation. The conquests of Alexander the Great brought the Middle East into the Greek cultural sphere.
Greece was followed by Rome, which left its mark on law, politics, language, engineering, architecture, government and many more key aspects in western civilisation.[58] Rome began as a small city-state, founded, according to tradition, in 753 BC as an elective kingdom. Tradition has it that there were seven kings of Rome with Romulus, the founder, being the first and Lucius Tarquinius Superbus falling to a republican uprising led by Lucius Junius Brutus, but modern scholars doubt many of those stories and even the Romans themselves acknowledged that the sack of Rome by the Gauls in 387 BC destroyed many sources on their early history.
By 200 BC, Rome had conquered Italy, and over the following two centuries it conquered Greece and Hispania (Spain and Portugal), the North African coast, much of the Middle East, Gaul (France and Belgium), and Britannia (England and Wales). The forty-year conquest of Britannia left 250,000 Britons dead, and emperor Antoninus Pius built the Antonine Wall across Scotland's Central Belt to defend the province from the Caledonians; it marked the northernmost border of the Roman Empire.
The Roman Republic ended in 27 BC, when Augustus proclaimed the Roman Empire. The two centuries that followed are known as the pax romana, a period of unprecedented peace, prosperity, and political stability in most of Europe.[65] The empire continued to expand under emperors such as Antoninus Pius and Marcus Aurelius, who spent time on the Empire's northern border fighting Germanic, Pictish and Scottish tribes.[66][67] Christianity was legalised by Constantine I in 313 AD after three centuries of imperial persecution. Constantine also permanently moved the capital of the empire from Rome to the city of Byzantium, which was renamed Constantinople in his honour (modern-day Istanbul) in 330 AD. Christianity became the sole official religion of the empire in 380 AD, and in 391–392 AD, the emperor Theodosius outlawed pagan religions.[68] This is sometimes considered to mark the end of antiquity; alternatively antiquity is considered to end with the fall of the Western Roman Empire in 476 AD; the closure of the pagan Platonic Academy of Athens in 529 AD;[69] or the rise of Islam in the early 7th century AD.
During the decline of the Roman Empire, Europe entered a long period of change arising from what historians call the "Age of Migrations". There were numerous invasions and migrations amongst the Goths, Vandals, Huns, Franks, Angles, Saxons, Slavs, Avars, Bulgars and, later on, the Vikings, Pechenegs, Cumans and Magyars.[65] Germanic tribes settled in the former Roman provinces of England and Spain, while other groups pressed into northern France and Italy.[70] Renaissance thinkers such as Petrarch would later refer to this as the "Dark Ages".[71] Isolated monastic communities were the only places to safeguard and compile written knowledge accumulated previously; apart from this, very few written records survive, and much literature, philosophy, mathematics, and other thinking from the classical period disappeared from Western Europe, though they were preserved in the east, in the Byzantine Empire.[72]
While the Roman empire in the west continued to decline, Roman traditions and the Roman state remained strong in the predominantly Greek-speaking Eastern Roman Empire, also known as the Byzantine Empire. During most of its existence, the Byzantine Empire was the most powerful economic, cultural, and military force in Europe. Emperor Justinian I presided over Constantinople's first golden age: he established a legal code that forms the basis of many modern legal systems, funded the construction of the Hagia Sophia, and brought the Christian church under state control.[73] He reconquered North Africa, southern Spain, and Italy.[74] The Ostrogothic Kingdom, which sought to preserve Roman culture and adopt its values, later changed course and became anti-Constantinople, so Justinian waged war against the Ostrogoths for 30 years and destroyed much of urban life in Italy, and reduced agriculture to subsistence farming.[75]
From the 7th century onwards, as the Byzantines and neighbouring Sasanid Persians were severely weakened due to the protracted, centuries-long and frequent Byzantine–Sasanian wars, the Muslim Arabs began to make inroads into historically Roman territory, taking the Levant and North Africa and making inroads into Asia Minor. In the mid 7th century AD, following the Muslim conquest of Persia, Islam penetrated into the Caucasus region.[76] Over the next centuries Muslim forces took Cyprus, Malta, Crete, Sicily and parts of southern Italy.[77] Between 711 and 726, the Umayyad Caliphate conquered the Visigothic Kingdom, which occupied the Iberian Peninsula and Septimania (southwestern France). The unsuccessful second siege of Constantinople (717) weakened the Umayyad dynasty and reduced their prestige. The Umayyads were then defeated by the Frankish leader Charles Martel at the Battle of Poitiers in 732, which ended their northward advance. In the remote regions of north-western Iberia and the middle Pyrenees the power of the Muslims in the south was scarcely felt. It was here that the foundations of the Christian kingdoms of Asturias, Leon and Galicia were laid and from where the reconquest of the Iberian Peninsula would start. However, no coordinated attempt would be made to drive the Moors out. The Christian kingdoms were mainly focussed on their own internal power struggles. As a result, the Reconquista took the greater part of eight hundred years, in which period a long list of Alfonsos, Sanchos, Ordoños, Ramiros, Fernandos and Bermudos would be fighting their Christian rivals as much as the Muslim invaders.
One of the biggest threats during the Dark Ages was the Vikings, Norse seafarers who raided, raped, and pillaged across the Western world. Many Vikings died in battles in continental Europe, and in 844 they lost many men and ships to King Ramiro in northern Spain.[78] A few months later, another fleet took Seville, only to be driven off with further heavy losses.[79] In 911, the Vikings attacked Paris, and the Franks decided to give the Viking king Rollo land along the English Channel coast in exchange for peace. This land was named "Normandy" for the Norse "Northmen", and the Norse settlers became known as "Normans", adopting the French culture and language and becoming French vassals. In 1066, the Normans went on to conquer England in the first successful cross-Channel invasion since Roman times.[80]
During the Dark Ages, the Western Roman Empire fell under the control of various tribes. The Germanic and Slav tribes established their domains over Western and Eastern Europe respectively.[81] Eventually the Frankish tribes were united under Clovis I.[82] In 507, Clovis defeated the Visigoths at the Battle of Vouillé, slaying King Alaric II and conquering southern Gaul for the Franks. His conquests laid the foundation for the Frankish kingdom. Charlemagne, a Frankish king of the Carolingian dynasty who had conquered most of Western Europe, was anointed "Holy Roman Emperor" by the Pope in 800. This led in 962 to the founding of the Holy Roman Empire, which eventually became centred in the German principalities of central Europe.[83]
East Central Europe saw the creation of the first Slavic states and the adoption of Christianity (circa 1000 AD). The powerful West Slavic state of Great Moravia spread its territory all the way south to the Balkans, reaching its largest territorial extent under Svatopluk I and causing a series of armed conflicts with East Francia. Further south, the first South Slavic states emerged in the late 7th and 8th century and adopted Christianity: the First Bulgarian Empire, the Serbian Principality (later Kingdom and Empire), and the Duchy of Croatia (later Kingdom of Croatia). To the East, the Kievan Rus expanded from its capital in Kiev to become the largest state in Europe by the 10th century. In 988, Vladimir the Great adopted Orthodox Christianity as the religion of state.[84][85] Further East, Volga Bulgaria became an Islamic state in the 10th century, but was eventually absorbed into Russia several centuries later.[86]
The period between the years 1000 and 1300 is known as the High Middle Ages, during which the population of Europe experienced significant growth, culminating in the Renaissance of the 12th century. Economic growth, together with the lack of safety on the mainland trading routes, made possible the development of major commercial routes along the coasts of the Mediterranean and Baltic Seas. The growing wealth and independence acquired by some coastal cities gave the Maritime Republics a leading role in the European scene. Constantinople was the largest and wealthiest city in Europe from the 9th to the 12th centuries, with a population of approximately 400,000.[91]
The Middle Ages on the mainland were dominated by the two upper echelons of the social structure: the nobility and the clergy. Feudalism developed in France in the Early Middle Ages and soon spread throughout Europe.[92] A struggle for influence between the nobility and the monarchy in England led to the writing of the Magna Carta and the establishment of a parliament.[93] The primary source of culture in this period came from the Roman Catholic Church. Through monasteries and cathedral schools, the Church was responsible for education in much of Europe.[92]
The Papacy reached the height of its power during the High Middle Ages. The East-West Schism of 1054 split the former Roman Empire religiously, with the Eastern Orthodox Church in the Byzantine Empire and the Roman Catholic Church in the former Western Roman Empire. In 1095, Pope Urban II called for a crusade against Muslims occupying Jerusalem and the Holy Land.[94] In Europe itself, the Church organised the Inquisition against heretics. Popes also encouraged crusading in the Iberian Peninsula.[95] In the east, a resurgent Byzantine Empire recaptured Crete and Cyprus from the Muslims and reconquered the Balkans. However, it faced a new enemy in the form of seaborne attacks by the Normans, who conquered Southern Italy and Sicily. Even more dangerous than the Franco-Norsemen was a new enemy from the steppe: the Seljuk Turks. The Empire was weakened by the defeat at the Battle of Manzikert and weakened considerably further by the Frankish-Venetian sack of Constantinople in 1204, during the Fourth Crusade.[96][97][98][99][100][101][102][103][104] Although it would recover Constantinople in 1261, Byzantium fell in 1453 when Constantinople was taken by the Ottoman Empire.[105][106][107]
In the 11th and 12th centuries, constant incursions by nomadic Turkic tribes, such as the Pechenegs and the Cuman-Kipchaks, caused a massive migration of Slavic populations to the safer, heavily forested regions of the north and temporarily halted the expansion of the Rus' state to the south and east.[108] Like many other parts of Eurasia, these territories were overrun by the Mongols.[109] The invaders, who became known as Tatars, were mostly Turkic-speaking peoples under Mongol suzerainty. They established the state of the Golden Horde with headquarters in Crimea, which later adopted Islam as a religion and ruled over modern-day southern and central Russia for more than three centuries.[110][111] After the collapse of Mongol dominions, the first Romanian states (principalities) emerged in the 14th century: Moldova and Walachia. Previously, these territories were under the successive control of Pechenegs and Cumans.[112] From the 12th to the 15th centuries, the Grand Duchy of Moscow grew from a small principality under Mongol rule to the largest state in Europe, overthrowing the Mongols in 1480 and eventually becoming the Tsardom of Russia. The state was consolidated under Ivan III the Great and Ivan the Terrible, steadily expanding to the east and south over the next centuries.
The Great Famine of 1315–1317 was the first crisis that would strike Europe in the late Middle Ages.[113] The period between 1348 and 1420 witnessed the heaviest loss. The population of France was reduced by half.[114][115] Medieval Britain was afflicted by 95 famines,[116] and France suffered the effects of 75 or more in the same period.[117] Europe was devastated in the mid-14th century by the Black Death, one of the most deadly pandemics in human history which killed an estimated 25 million people in Europe alone—a third of the European population at the time.[118]
The plague had a devastating effect on Europe's social structure; it induced people to live for the moment as illustrated by Giovanni Boccaccio in The Decameron (1353). It was a serious blow to the Roman Catholic Church and led to increased persecution of Jews, beggars, and lepers.[119] The plague is thought to have returned every generation with varying virulence and mortalities until the 18th century.[120] During this period, more than 100 plague epidemics swept across Europe.[121]
The Renaissance was a period of cultural change originating in Florence and later spreading to the rest of Europe. The rise of a new humanism was accompanied by the recovery of forgotten classical Greek and Arabic knowledge from monastic libraries, often translated from Arabic into Latin.[122][123][124] The Renaissance spread across Europe between the 14th and 16th centuries: it saw the flowering of art, philosophy, music, and the sciences, under the joint patronage of royalty, the nobility, the Roman Catholic Church, and an emerging merchant class.[125][126][127] Patrons in Italy, including the Medici family of Florentine bankers and the Popes in Rome, funded prolific quattrocento and cinquecento artists such as Raphael, Michelangelo, and Leonardo da Vinci.[128][129]
Political intrigue within the Church in the mid-14th century caused the Western Schism. During this forty-year period, two popes—one in Avignon and one in Rome—claimed rulership over the Church. Although the schism was eventually healed in 1417, the papacy's spiritual authority had suffered greatly.[132] In the 15th century, Europe started to extend itself beyond its geographic frontiers. Spain and Portugal, the greatest naval powers of the time, took the lead in exploring the world.[133][134] Exploration reached the Southern Hemisphere in the Atlantic and the Southern tip of Africa. Christopher Columbus reached the New World in 1492, and Vasco da Gama opened the ocean route to the East linking the Atlantic and Indian Oceans in 1498. Ferdinand Magellan reached Asia westward across the Atlantic and the Pacific Oceans in the Spanish expedition of Magellan-Elcano, resulting in the first circumnavigation of the globe, completed by Juan Sebastián Elcano (1519–22). Soon after, the Spanish and Portuguese began establishing large global empires in the Americas, Asia, Africa and Oceania.[135] France, the Dutch Republic and England soon followed in building large colonial empires with vast holdings in Africa, the Americas, and Asia. The Italian Wars (1494–1559) led to the development of the first modern professional army in Europe, the Tercios. Their superiority became evident at the Battle of Bicocca (1522) against French-Swiss-Italian forces, which marked the end of massive pike assaults after thousands of Swiss pikemen fell to the Spanish arquebusiers without the latter suffering a single casualty.[136] Over the course of the 16th century, Spanish and Italian troops marched north to fight on Dutch and German battlefields, dying there in large numbers.[137][138]
The Church's power was further weakened by the Protestant Reformation in 1517, when German theologian Martin Luther nailed his Ninety-five Theses, criticising the selling of indulgences, to the church door. He was subsequently excommunicated in the papal bull Exsurge Domine in 1520, and his followers were condemned in the 1521 Diet of Worms, which divided German princes between Protestant and Roman Catholic faiths.[139] Religious fighting and warfare spread with Protestantism. War broke out first in Germany, where Imperial troops defeated the Protestant army of the Schmalkaldic League at the Battle of Mühlberg, putting an end to the Schmalkaldic War (1546–47).[140] Between 2 and 3 million people were killed in the French Wars of Religion (1562–98).[141] The Eighty Years' War (1568–1648) between Catholic Habsburgs and Dutch Calvinists resulted in the death of hundreds of thousands of people.[141] In the Cologne War (1583–88), Catholic forces faced off against German Calvinist and Dutch troops. The Thirty Years War (1618–48) crippled the Holy Roman Empire and devastated much of Germany, killing between 25 and 40 percent of its population.[142] During the Thirty Years' War, in which various Protestant forces battled Imperial armies, France provided subsidies to Habsburg enemies, especially Sweden. The destruction of the Swedish army by an Austro-Spanish force at the Battle of Nördlingen (1634) forced France to intervene militarily in the Thirty Years' War. In 1638, France and Oxenstierna agreed to a treaty at Hamburg, extending the annual French subsidy of 400,000 riksdalers for three years, but Sweden would not fight against Spain. In 1643, the French defeated one of Spain's best armies at the Battle of Rocroi. In the aftermath of the Peace of Westphalia (1648), France rose to predominance within Western Europe.[143] France fought a series of wars over domination of Western Europe, including the Franco-Spanish War, the Franco-Dutch War, the Nine Years' War, and the War of the Spanish Succession; these wars cost France over 1,000,000 battle casualties.[144][145]
The 17th century in central and eastern Europe was a period of general decline.[146] Central and Eastern Europe experienced more than 150 famines in the 200-year period between 1501 and 1700.[147] From the Union of Krewo (1385), central and eastern Europe was dominated by the Kingdom of Poland and the Grand Duchy of Lithuania. Between 1648 and 1655, the hegemony of the Polish–Lithuanian Commonwealth in central and eastern Europe came to an end. From the 15th to 18th centuries, when the disintegrating khanates of the Golden Horde were conquered by Russia, Tatars from the Crimean Khanate frequently raided Eastern Slavic lands to capture slaves.[148] Further east, the Nogai Horde and Kazakh Khanate frequently raided the Slavic-speaking areas of Russia, Ukraine and Poland for hundreds of years, until the Russian expansion and conquest of most of northern Eurasia (i.e. Eastern Europe, Central Asia and Siberia).
The Renaissance and the New Monarchs marked the start of an Age of Discovery, a period of exploration, invention, and scientific development.[149] Among the great figures of the Western scientific revolution of the 16th and 17th centuries were Copernicus, Kepler, Galileo, and Isaac Newton.[150] According to Peter Barrett, "It is widely accepted that 'modern science' arose in the Europe of the 17th century (towards the end of the Renaissance), introducing a new understanding of the natural world."[122]
The Age of Enlightenment was a powerful intellectual movement during the 18th century promoting scientific and reason-based thought.[151][152][153] Discontent with the aristocracy and clergy's monopoly on political power in France resulted in the French Revolution and the establishment of the First Republic, as a result of which the monarchy and many of the nobility perished during the initial Reign of Terror.[154] Napoleon Bonaparte rose to power in the aftermath of the French Revolution and established the First French Empire that, during the Napoleonic Wars, grew to encompass large parts of western and central Europe before collapsing in 1814 with the Battle of Paris between the Sixth Coalition—consisting of Russia, Austria, and Prussia—and the French Empire.[155][156] Napoleonic rule resulted in the further dissemination of the ideals of the French Revolution, including that of the nation-state, as well as the widespread adoption of the French models of administration, law, and education.[157][158][159] The Congress of Vienna, convened after Napoleon's downfall, established a new balance of power in Europe centred on the five "Great Powers": the UK, France, Prussia, Austria, and Russia.[160] This balance would remain in place until the Revolutions of 1848, during which liberal uprisings affected all of Europe except for Russia and the UK. These revolutions were eventually put down by conservative elements and few reforms resulted.[161] The year 1859 saw the unification of Romania, as a nation-state, from smaller principalities. In 1867, the Austro-Hungarian empire was formed; and 1871 saw the unifications of both Italy and Germany as nation-states from smaller principalities.[162]
In parallel, the Eastern Question grew more complex ever since the Ottoman defeat in the Russo-Turkish War (1768–1774). As the dissolution of the Ottoman Empire seemed imminent, the Great Powers struggled to safeguard their strategic and commercial interests in the Ottoman domains. The Russian Empire stood to benefit from the decline, whereas the Habsburg Empire and Britain perceived the preservation of the Ottoman Empire to be in their best interests. Meanwhile, the Serbian revolution (1804) and Greek War of Independence (1821) marked the beginning of the end of Ottoman rule in the Balkans, which ended with the Balkan Wars in 1912–1913.[163] Formal recognition of the de facto independent principalities of Montenegro, Serbia and Romania ensued at the Congress of Berlin in 1878. Many Ottoman Muslims faced either extermination at the hands of the newly independent states, or were expelled to the shrinking Ottoman possessions in Europe or to Anatolia; between 1821 and 1922 alone, more than 5 million Ottoman Muslims were driven away from their homes, while another 5.5 million died in wars or due to starvation and disease.[164]
The Industrial Revolution started in Great Britain in the last part of the 18th century and spread throughout Europe. The invention and implementation of new technologies resulted in rapid urban growth, mass employment, and the rise of a new working class.[165] Reforms in social and economic spheres followed, including the first laws on child labour, the legalisation of trade unions,[166] and the abolition of slavery.[167] In Britain, the Public Health Act of 1875 was passed, which significantly improved living conditions in many British cities.[168] Europe's population increased from about 100 million in 1700 to 400 million by 1900.[169] The last major famine recorded in Western Europe, the Irish Potato Famine, caused death and mass emigration of millions of Irish people.[170] In the 19th century, 70 million people left Europe in migrations to various European colonies abroad and to the United States.[171] Demographic growth meant that, by 1900, Europe's share of the world's population was 25%.[172]
Two world wars and an economic depression dominated the first half of the 20th century. World War I was fought between 1914 and 1918. It started when Archduke Franz Ferdinand of Austria was assassinated by the Yugoslav nationalist[178] Gavrilo Princip.[179] Most European nations were drawn into the war, which was fought between the Entente Powers (France, Belgium, Serbia, Portugal, Russia, the United Kingdom, and later Italy, Greece, Romania, and the United States) and the Central Powers (Austria-Hungary, Germany, Bulgaria, and the Ottoman Empire).
On 4 August 1914, Germany invaded and occupied Belgium. On 17 August 1914, the Russian army launched an assault on the eastern German province of East Prussia, but it was dealt a crushing blow at the Battle of Tannenberg on 26–30 August 1914. As 1914 came to a close, the Germans were facing off against the French and British in northern France and in the Alsace and Lorraine regions of eastern France, and the opposing armies dug trenches and set up defensive positions. From 1915 to 1917, the two sides engaged in trench warfare and launched massive offensives. The Battle of the Somme was the largest battle on the Western Front; the British suffered 420,000 casualties, the French 200,000 and the Germans 500,000. The Battle of Verdun saw around 377,231 French and 337,000 Germans become casualties. At the Battle of Passchendaele, mustard gas was used as a chemical weapon by both sides, and many soldiers on both sides died in the fighting. In 1915, the tide of the war changed when the Kingdom of Italy entered the war on the side of the Triple Entente, seeking to acquire Austrian Tyrol and some possessions along the Adriatic Sea. This broke Italy's commitments under the "Triple Alliance" formed in the 19th century, and Austria-Hungary was ill-prepared for yet another front to fight on. The Royal Italian Army launched several offensives along the Isonzo River, with eleven battles of the Isonzo being fought during the war. Over the course of the war, 462,391 Italian soldiers died; 420,000 of the dead were lost on the Alpine front with Austria-Hungary, while the rest fell in France, Albania, or Macedonia.[180]
In 1917, German troops began to arrive in Italy to assist the crumbling Austro-Hungarian forces, and the Germans destroyed an Italian army at the Battle of Caporetto. This led to British and French (and later American) troops arriving in Italy to assist the Italian military against the Central Powers, and the Allied troops helped the Italians hold the front in northern Italy and on the Adriatic coast. In the Balkans, the Serbians resisted the Austrians until Bulgaria and the German Empire sent troops to assist in the conquest of Serbia in December 1915. The Austro-Hungarian Army on the Eastern Front, which had performed poorly, was soon subordinated to the Imperial German Army's high command, and the Germans destroyed a Russian army in the Gorlice–Tarnów Offensive of 1915. The Russians launched a massive counterattack against the Central Powers in Galicia, and this "Brusilov Offensive" inflicted very high losses on the Austro-Hungarians; 1,325,000 Central Powers troops and 500,000 Russian troops were lost. However, the political situation in Russia deteriorated as people became more aware of the corruption in Czar Nicholas II's government, and the Germans made rapid progress in 1917 after the Russian Revolution toppled the czar. The Germans secured Riga in Latvia in September 1917, and they benefited from the fall of Aleksandr Kerensky's provisional government to the Bolsheviks in October 1917. In February–March 1918, the Germans launched a massive offensive after the Russian SFSR came to power, taking advantage of the Russian Civil War to seize large amounts of territory. In March 1918, the Soviets signed the Treaty of Brest-Litovsk, ending the war on the Eastern Front, and Germany set up various puppet states in Eastern Europe.
Germany's huge successes in the war against Russia were a morale boost for the Central Powers, but Germany created a new problem for itself when it began to use "unrestricted submarine warfare" to attack enemy ships at sea. These tactics led to the sinking of several civilian ships, among them RMS Lusitania on 7 May 1915, angering the United States, which lost several civilians aboard these ships. Germany agreed in 1916 to the "Sussex Pledge", stating that it would halt this strategy, but it resumed unrestricted submarine warfare in early 1917, and the British intercepted an offer from the German government to Mexico proposing an anti-American alliance. In April 1917, the United States declared war on Germany and joined the Allies. In 1918, American troops took part in the Battle of Belleau Wood and in the Meuse-Argonne Offensive, major engagements in which large numbers of American troops died on European soil for the first time. The Germans began to suffer as more and more Allied troops arrived, and they launched the desperate "Spring Offensive" of 1918. This offensive was repelled, and the Allies pushed towards Belgium in the Hundred Days Offensive during the autumn of 1918. On 11 November 1918, the Germans agreed to an armistice with the Allies, ending the war. The war left more than 16 million civilians and military personnel dead.[181] Over 60 million European soldiers were mobilised from 1914 to 1918.[182]
Russia was plunged into the Russian Revolution, which overthrew the Tsarist monarchy and replaced it with the communist Soviet Union.[183] Austria-Hungary and the Ottoman Empire collapsed and broke up into separate nations, and many other nations had their borders redrawn. The Treaty of Versailles, which officially ended World War I in 1919, was harsh towards Germany, upon whom it placed full responsibility for the war and imposed heavy sanctions.[184] Excess deaths in Russia over the course of World War I and the Russian Civil War (including the postwar famine) amounted to a combined total of 18 million.[185] In 1932–1933, under Stalin's leadership, confiscations of grain by the Soviet authorities contributed to the second Soviet famine, which caused millions of deaths;[186] surviving kulaks were persecuted and many were sent to Gulags to do forced labour. Stalin was also responsible for the Great Purge of 1937–38, in which the NKVD executed 681,692 people;[187] millions of people were deported and exiled to remote areas of the Soviet Union.[188]
The social revolutions sweeping through Russia also affected other European nations following the Great War: 1919 brought the Weimar Republic in Germany and the First Austrian Republic; 1922 brought Mussolini's one-party fascist government in the Kingdom of Italy; and 1923 brought Atatürk's Turkish Republic, which adopted the Western alphabet and state secularism.
Economic instability, caused in part by debts incurred in the First World War and 'loans' to Germany, played havoc in Europe in the late 1920s and 1930s. This, together with the Wall Street Crash of 1929, brought about the worldwide Great Depression. Helped by the economic crisis, social instability and the threat of communism, fascist movements developed throughout Europe, bringing Adolf Hitler to power in what became Nazi Germany.[198][199] In 1933, Hitler became the leader of Germany and began to work towards his goal of building Greater Germany. Germany regained the Saarland in 1935 and remilitarised the Rhineland in 1936. In 1938, Austria became a part of Germany following the Anschluss. Later that year, following the Munich Agreement signed by Germany, France, the United Kingdom and Italy, Germany annexed the Sudetenland, which was a part of Czechoslovakia inhabited by ethnic Germans; in early 1939, the remainder of Czechoslovakia was split into the Protectorate of Bohemia and Moravia, controlled by Germany, and the Slovak Republic. At the time, Britain and France preferred a policy of appeasement.
With tensions mounting between Germany and Poland over the future of Danzig, the Germans turned to the Soviets and signed the Molotov–Ribbentrop Pact, which allowed the Soviets to invade the Baltic states and parts of Poland and Romania. Germany invaded Poland on 1 September 1939, prompting France and the United Kingdom to declare war on Germany on 3 September, opening the European Theatre of World War II.[194][200][201] The Soviet invasion of Poland started on 17 September and Poland fell soon thereafter. From 24 September, the Soviet Union pressured the Baltic countries into accepting Soviet garrisons, and in November it attacked Finland. The British hoped to land at Narvik and send troops to aid Finland, but their primary objective in the landing was to encircle Germany and cut the Germans off from Scandinavian resources. Around the same time, Germany moved troops into Denmark; 1,400 German soldiers conquered Denmark in one day.[202] On the Western Front, the lull known as the Phoney War persisted until May 1940, when Germany conquered the Netherlands and Belgium. France surrendered in June 1940, the same month Italy entered the war on Germany's side. By August, Germany had begun a bombing offensive against Britain but failed to convince the Britons to give up.[203] Some 43,000 British civilians were killed and 139,000 wounded in the Blitz; much of London was destroyed, with 1,400,245 buildings destroyed or damaged.[204] In April 1941, Germany invaded Yugoslavia. In June 1941, Germany invaded the Soviet Union in Operation Barbarossa.[205] The Soviets and Germans fought an extremely brutal war in the east that cost millions of civilian and soldier lives. Nazi Germany murdered vast numbers of Soviet Jews in pursuit of its goal of purging the world of Jews and other "subhumans". On 7 December 1941, Japan's attack on Pearl Harbor drew the United States into the conflict as an ally of the British Empire and other Allied forces.[206][207] In 1943, the Allies knocked Italy out of the war. The Italian campaign had the highest casualty rates of the whole war for the Western Allies.[208]
After the staggering Battle of Stalingrad in 1943, the German offensive in the Soviet Union turned into a continual retreat. The Battle of Kursk, which involved the largest tank battle in history, was the last major German offensive on the Eastern Front. In 1944, the Soviets and the Western Allies both launched offensives against the Axis in Europe, with the Western Allies liberating France and much of Western Europe as the Soviets liberated much of Eastern Europe before halting in central Poland. The final year of the war saw the Soviets and Western Allies divide Germany in two after meeting along the Elbe River, while the Soviets and allied partisans liberated the Balkans. The Soviets captured the German capital, Berlin, on 2 May 1945, and Germany's surrender on 8 May ended World War II in Europe. Hitler killed himself; Mussolini was killed by partisans in Italy. The Soviet–German struggle made World War II the bloodiest and costliest conflict in history.[209][210] More than 40 million people in Europe had died as a result of World War II (70 percent of them on the Eastern Front),[211][212] including between 11 and 17 million people who perished during the Holocaust (the genocide against Jews and the massacre of intellectuals, Roma, homosexuals, handicapped people, Slavs, and Red Army prisoners-of-war).[213] The Soviet Union lost around 27 million people during the war, about half of all World War II casualties.[214] By the end of World War II, Europe had more than 40 million refugees.[215] Several post-war expulsions in Central and Eastern Europe displaced a total of about 20 million people,[216] in particular German-speakers from all over Eastern Europe. In the aftermath of World War II, American troops were stationed in Western Europe, while Soviet troops occupied several Central and Eastern European states.
World War I and especially World War II diminished the eminence of Western Europe in world affairs. After World War II the map of Europe was redrawn at the Yalta Conference and divided into two blocs, the Western countries and the communist Eastern bloc, separated by what was later called by Winston Churchill an "Iron Curtain". Germany was divided between the capitalist West Germany and the communist East Germany. The United States and Western Europe established the NATO alliance, and later the Soviet Union and Central Europe established the Warsaw Pact.[217]
The two new superpowers, the United States and the Soviet Union, became locked in a fifty-year-long Cold War, centred on nuclear proliferation. At the same time decolonisation, which had already started after World War I, gradually resulted in the independence of most of the European colonies in Asia and Africa.[13] During the Cold War, extra-continental military interventions were the preserve of the two superpowers, a few West European countries, and Cuba.[218] Using the small Caribbean nation of Cuba as a surrogate, the Soviets projected power to distant points of the globe, exploiting world trouble spots without directly involving their own troops.[219]
In the 1980s the reforms of Mikhail Gorbachev and the Solidarity movement in Poland accelerated the collapse of the Eastern bloc and the end of the Cold War. Germany was reunited, after the symbolic fall of the Berlin Wall in 1989, and the maps of Central and Eastern Europe were redrawn once more.[198] European integration also grew after World War II. In 1949 the Council of Europe was founded, following a speech by Sir Winston Churchill, with the idea of unifying Europe to achieve common goals. It includes all European states except for Belarus and Vatican City. The Treaty of Rome in 1957 established the European Economic Community between six Western European states with the goal of a unified economic policy and common market.[221] In 1967 the EEC, the European Coal and Steel Community and Euratom formed the European Community, which in 1993 became the European Union. The EU established a parliament, court and central bank and introduced the euro as a unified currency.[222] Between 2004 and 2013, more Central and Eastern European countries began joining, expanding the EU to 28 European countries and once more making Europe a major economic and political centre of power.[223] However, the United Kingdom withdrew from the EU on 31 January 2020, as a result of a June 2016 referendum on EU membership.[224]
Europe makes up the western fifth of the Eurasian landmass.[24] It has a higher ratio of coast to landmass than any other continent or subcontinent.[225] Its maritime borders consist of the Arctic Ocean to the north, the Atlantic Ocean to the west, and the Mediterranean, Black, and Caspian Seas to the south.[226]
Land relief in Europe shows great variation within relatively small areas. The southern regions are more mountainous, while moving north the terrain descends from the high Alps, Pyrenees, and Carpathians, through hilly uplands, into broad, low northern plains, which are vast in the east. This extended lowland is known as the Great European Plain, and at its heart lies the North German Plain. An arc of uplands also exists along the north-western seaboard, which begins in the western parts of the islands of Britain and Ireland, and then continues along the mountainous, fjord-cut spine of Norway.
This description is simplified. Sub-regions such as the Iberian Peninsula and the Italian Peninsula contain their own complex features, as does mainland Central Europe itself, where the relief contains many plateaus, river valleys and basins that complicate the general trend. Sub-regions like Iceland, Britain, and Ireland are special cases. Iceland is a distinct landmass in the northern ocean that is counted as part of Europe, while Britain and Ireland are upland areas that were once joined to the mainland until rising sea levels cut them off.
Europe lies mainly in the temperate climate zones, being subjected to prevailing westerlies. The climate is milder in comparison to other areas of the same latitude around the globe due to the influence of the Gulf Stream.[228] The Gulf Stream is nicknamed "Europe's central heating", because it makes Europe's climate warmer and wetter than it would otherwise be. The Gulf Stream not only carries warm water to Europe's coast but also warms up the prevailing westerly winds that blow across the continent from the Atlantic Ocean.
Therefore, the average year-round temperature of Naples is 16 °C (61 °F), while it is only 12 °C (54 °F) in New York City, which is almost on the same latitude. Berlin, Germany; Calgary, Canada; and Irkutsk, in the Asian part of Russia, lie on around the same latitude; January temperatures in Berlin average around 8 °C (14 °F) higher than those in Calgary, and they are almost 22 °C (40 °F) higher than average temperatures in Irkutsk.[228] Similarly, northern parts of Scotland have a temperate marine climate. The yearly average temperature in the city of Inverness is 9.05 °C (48.29 °F). However, Churchill, Manitoba, Canada, is on roughly the same latitude and has an average temperature of −6.5 °C (20.3 °F), giving it a nearly subarctic climate.
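The bracketed °F figures above mix two different conversions: Naples' 16 °C is an absolute temperature (°F = °C × 9/5 + 32), while "8 °C higher" and "22 °C higher" are temperature differences, which scale by 9/5 with no +32 offset. A minimal sketch of the distinction (the function names are illustrative, not taken from any source):

```python
def c_to_f(celsius):
    """Convert an absolute temperature from degrees Celsius to Fahrenheit."""
    return celsius * 9 / 5 + 32

def c_diff_to_f_diff(delta_celsius):
    """Convert a temperature difference: the 9/5 scale applies, the +32 offset does not."""
    return delta_celsius * 9 / 5

print(round(c_to_f(16)))            # Naples' 16 °C annual mean -> about 61 °F
print(round(c_diff_to_f_diff(8)))   # Berlin 8 °C warmer than Calgary -> about 14 °F warmer
print(round(c_diff_to_f_diff(22)))  # 22 °C warmer than Irkutsk -> about 40 °F warmer
```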
In general, Europe is not just colder towards the north compared to the south, but it also gets colder from the west towards the east. The climate is more oceanic in the west, and less so in the east. This can be illustrated by the following table of average temperatures at locations roughly following the 60th, 55th, 50th, 45th and 40th latitudes. None of them is located at high altitude; most of them are close to the sea. (Table: location; approximate latitude and longitude; coldest-month, hottest-month and annual average temperatures in °C.)[229]
It is notable how the average temperatures for the coldest month, as well as the annual average temperatures, drop from the west to the east. For instance, Edinburgh is warmer than Belgrade during the coldest month of the year, although Belgrade is around 10° of latitude farther south.
The geological history of Europe traces back to the formation of the Baltic Shield (Fennoscandia) and the Sarmatian craton, both around 2.25 billion years ago, followed by the Volgo–Uralia shield, the three together leading to the East European craton (≈ Baltica) which became a part of the supercontinent Columbia. Around 1.1 billion years ago, Baltica and Arctica (as part of the Laurentia block) became joined to Rodinia, later resplitting around 550 million years ago to reform as Baltica. Around 440 million years ago Euramerica was formed from Baltica and Laurentia; a further joining with Gondwana then leading to the formation of Pangea. Around 190 million years ago, Gondwana and Laurasia split apart due to the widening of the Atlantic Ocean. Finally, and very soon afterwards, Laurasia itself split up again, into Laurentia (North America) and the Eurasian continent. The land connection between the two persisted for a considerable time, via Greenland, leading to interchange of animal species. From around 50 million years ago, rising and falling sea levels have determined the actual shape of Europe, and its connections with continents such as Asia. Europe's present shape dates to the late Tertiary period about five million years ago.[232]
The geology of Europe is hugely varied and complex, and gives rise to the wide variety of landscapes found across the continent, from the Scottish Highlands to the rolling plains of Hungary.[234] Europe's most significant feature is the dichotomy between highland and mountainous Southern Europe and a vast, partially underwater, northern plain ranging from Ireland in the west to the Ural Mountains in the east. These two halves are separated by the mountain chains of the Pyrenees and Alps/Carpathians. The northern plains are delimited in the west by the Scandinavian Mountains and the mountainous parts of the British Isles. Major shallow water bodies submerging parts of the northern plains are the Celtic Sea, the North Sea, the Baltic Sea complex and Barents Sea.
The northern plain contains the old geological continent of Baltica, and so may be regarded geologically as the "main continent", while peripheral highlands and mountainous regions in the south and west constitute fragments from various other geological continents. Most of the older geology of western Europe existed as part of the ancient microcontinent Avalonia.
Having lived side by side with agricultural peoples for millennia, Europe's animals and plants have been profoundly affected by the presence and activities of man. With the exception of Fennoscandia and northern Russia, few areas of untouched wilderness are currently found in Europe, apart from various national parks.
The main natural vegetation cover in Europe is mixed forest. The conditions for growth are very favourable. In the north, the Gulf Stream and North Atlantic Drift warm the continent. Southern Europe could be described as having a warm, but mild climate. There are frequent summer droughts in this region. Mountain ridges also affect the conditions. Some of these (Alps, Pyrenees) are oriented east–west and allow the wind to carry large masses of water from the ocean into the interior. Others are oriented south–north (Scandinavian Mountains, Dinarides, Carpathians, Apennines) and because the rain falls primarily on the side of mountains that is oriented towards the sea, forests grow well on this side, while on the other side, the conditions are much less favourable. Few corners of mainland Europe have not been grazed by livestock at some point in time, and the cutting down of the pre-agricultural forest habitat caused disruption to the original plant and animal ecosystems.
Probably 80 to 90 percent of Europe was once covered by forest.[235] It stretched from the Mediterranean Sea to the Arctic Ocean. Although over half of Europe's original forests disappeared through the centuries of deforestation, Europe still has over one quarter of its land area as forest, such as the broadleaf and mixed forests, taiga of Scandinavia and Russia, mixed rainforests of the Caucasus and the Cork oak forests in the western Mediterranean. During recent times, deforestation has been slowed and many trees have been planted. However, in many cases monoculture plantations of conifers have replaced the original mixed natural forest, because these grow quicker. The plantations now cover vast areas of land, but offer poorer habitats for many European forest dwelling species which require a mixture of tree species and diverse forest structure. The amount of natural forest in Western Europe is just 2–3% or less, in European Russia 5–10%. The country with the smallest percentage of forested area is Iceland (1%), while the most forested country is Finland (77%).[236]
In temperate Europe, mixed forest with both broadleaf and coniferous trees dominates. The most important species in central and western Europe are beech and oak. In the north, the taiga is a mixed spruce–pine–birch forest; further north within Russia and extreme northern Scandinavia, the taiga gives way to tundra as the Arctic is approached. In the Mediterranean, many olive trees have been planted, which are very well adapted to its arid climate; Mediterranean cypress is also widely planted in southern Europe. The semi-arid Mediterranean region hosts much scrub forest. A narrow east–west tongue of Eurasian grassland (the steppe) extends westwards from southern Russia and Ukraine, ending in Hungary, and transitions into taiga to the north.
Glaciation during the most recent ice age and the presence of man affected the distribution of European fauna. In many parts of Europe most large animals and top predator species have been hunted to extinction. Today wolves (carnivores) and bears (omnivores), once found in most parts of Europe, are endangered. Deforestation and hunting caused these animals to withdraw further and further. By the Middle Ages the bears' habitats were limited to more or less inaccessible mountains with sufficient forest cover. Today, the brown bear lives primarily in the Balkan peninsula, Scandinavia, and Russia; a small number also persist in other countries across Europe (Austria, the Pyrenees etc.), but in these areas brown bear populations are fragmented and marginalised because of the destruction of their habitat. In addition, polar bears may be found on Svalbard, a Norwegian archipelago far north of Scandinavia. The wolf, the second largest predator in Europe after the brown bear, can be found primarily in Central and Eastern Europe and in the Balkans, with a handful of packs in pockets of Western Europe (Scandinavia, Spain, etc.).
Other important European fauna include the European wild cat, foxes (especially the red fox), the jackal, various species of martens and hedgehogs, various reptiles (such as vipers and grass snakes) and amphibians, and various birds (owls, hawks and other birds of prey).
Important European herbivores include snails, larvae, fish, various birds, and mammals such as rodents, deer and roe deer, boars and, in the mountains, marmots, steinbocks and chamois, among others. A number of insects, such as the small tortoiseshell butterfly, add to the biodiversity.[239]
The extinction of the dwarf hippos and dwarf elephants has been linked to the earliest arrival of humans on the islands of the Mediterranean.[240]
Sea creatures are also an important part of European flora and fauna. The sea flora is mainly phytoplankton. Important animals that live in European seas are zooplankton, molluscs, echinoderms, different crustaceans, squids and octopuses, fish, dolphins, and whales.
Biodiversity is protected in Europe through the Council of Europe's Bern Convention, which has also been signed by the European Community as well as non-European states.
The political map of Europe is substantially derived from the re-organisation of Europe following the Napoleonic Wars in 1815. The prevalent form of government in Europe is parliamentary democracy, in most cases in the form of a republic; in 1815, the prevalent form of government was still monarchy. Europe's remaining eleven monarchies[241] are constitutional.
European integration is the process of political, legal, economic (and in some cases social and cultural) integration of European states as it has been pursued by the powers sponsoring the Council of Europe since the end of World War II.
The European Union has been the focus of economic integration on the continent since its foundation in 1993. More recently, the Eurasian Economic Union has been established as a counterpart comprising former Soviet states.
27 European states are members of the politico-economic European Union, 26 of the border-free Schengen Area and 19 of the monetary union Eurozone. Among the smaller European organisations are the Nordic Council, the Benelux, the Baltic Assembly and the Visegrád Group.
The list below includes all entities falling even partially under any of the various common definitions of Europe, geographically or politically.
Within the above-mentioned states are several de facto independent countries with limited to no international recognition. None of them are members of the UN:
Several dependencies and similar territories with broad autonomy are also found within or in close proximity to Europe. These include Åland (a region of Finland), two constituent countries of the Kingdom of Denmark (other than Denmark itself), three Crown dependencies, and two British Overseas Territories. Svalbard is also included due to its unique status within Norway, although it is not autonomous. Not included are the three countries of the United Kingdom with devolved powers and the two Autonomous Regions of Portugal, which, despite having a unique degree of autonomy, are not largely self-governing in matters other than international affairs. Areas with little more than a unique tax status, such as Heligoland and the Canary Islands, are also not included for this reason.
Europe's economy is currently the largest on Earth, and it is the richest region as measured by assets under management, with over $32.7 trillion compared to North America's $27.1 trillion in 2008.[243] In 2009 Europe remained the wealthiest region. Its $37.1 trillion in assets under management represented one-third of the world's wealth. It was one of several regions where wealth surpassed its pre-crisis year-end peak.[244] As with other continents, Europe has a large variation of wealth among its countries. The richer states tend to be in the West; some of the Central and Eastern European economies are still emerging from the collapse of the Soviet Union and the breakup of Yugoslavia.
The European Union, a political entity composed of 27 European states, comprises the largest single economic area in the world. 19 EU countries share the euro as a common currency.
Five European countries rank among the world's largest national economies in GDP (PPP). These are (with ranks according to the CIA): Germany (6), Russia (7), the United Kingdom (10), France (11), and Italy (13).[245]
There is huge disparity between many European countries in terms of their income. The richest in terms of GDP per capita is Monaco, with US$172,676 per capita (2009), and the poorest is Moldova, with US$1,631 per capita (2010).[246] Monaco is the richest country in the world in terms of GDP per capita according to a World Bank report.
As a whole, Europe's GDP per capita is US$21,767 according to a 2016 International Monetary Fund assessment.[247]
Capitalism has been dominant in the Western world since the end of feudalism.[248] From Britain, it gradually spread throughout Europe.[249] The Industrial Revolution started in Europe, specifically the United Kingdom in the late 18th century,[250] and the 19th century saw Western Europe industrialise. Economies were disrupted by World War I but by the beginning of World War II they had recovered and were having to compete with the growing economic strength of the United States. World War II, again, damaged much of Europe's industries.
After World War II the economy of the UK was in a state of ruin,[251] and it continued to suffer relative economic decline in the following decades.[252] Italy was also in a poor economic condition but regained a high level of growth by the 1950s. West Germany recovered quickly and had doubled production from pre-war levels by the 1950s.[253] France also staged a remarkable comeback, enjoying rapid growth and modernisation; later on Spain, under the leadership of Franco, also recovered, and the nation recorded unprecedented economic growth beginning in the 1960s in what is called the Spanish miracle.[254] The majority of Central and Eastern European states came under the control of the Soviet Union and thus were members of the Council for Mutual Economic Assistance (COMECON).[255]
The states which retained a free-market system were given a large amount of aid by the United States under the Marshall Plan.[256] The western states moved to link their economies together, providing the basis for the EU and increasing cross-border trade. This helped them to enjoy rapidly improving economies, while the states in COMECON struggled, in large part due to the cost of the Cold War. By 1990, the European Community had expanded from 6 founding members to 12. The emphasis placed on resurrecting the West German economy led to it overtaking the UK as Europe's largest economy.
With the fall of communism in Central and Eastern Europe in 1991, the post-socialist states began free market reforms.
After East and West Germany were reunited in 1990, the economy of West Germany struggled as it had to support and largely rebuild the infrastructure of East Germany.
By the turn of the millennium, the EU dominated the economy of Europe, comprising the five largest European economies of the time, namely Germany, the United Kingdom, France, Italy, and Spain. In 1999, 11 of the 15 members of the EU joined the Eurozone, replacing their former national currencies with the common euro; Greece followed in 2001. The three that chose to remain outside the Eurozone were the United Kingdom, Denmark, and Sweden. The European Union is now the largest economy in the world.[257]
Figures released by Eurostat in 2009 confirmed that the Eurozone had gone into recession in 2008.[258] It impacted much of the region.[259] In 2010, fears of a sovereign debt crisis[260] developed concerning some countries in Europe, especially Greece, Ireland, Spain, and Portugal.[261] As a result, measures were taken, especially for Greece, by the leading countries of the Eurozone.[262] The EU-27 unemployment rate was 10.3% in 2012.[263] For those aged 15–24 it was 22.4%.[263]
In 2017, the population of Europe was estimated to be 742 million according to the 2019 revision of the World Population Prospects,[2][3] which is slightly more than one-ninth of the world's population. This number includes Siberia (about 38 million people) but excludes European Turkey (about 12 million).
A century ago, Europe had nearly a quarter of the world's population.[265] The population of Europe has grown in the past century, but in other areas of the world (in particular Africa and Asia) the population has grown far more quickly.[266] Among the continents, Europe has a relatively high population density, second only to Asia. Most of Europe is in a mode of sub-replacement fertility, which means that each new generation is less populous than the one before it.
The most densely populated country in Europe (and in the world) is the microstate of Monaco.
Pan and Pfeil (2004) count 87 distinct "peoples of Europe", of which 33 form the majority population in at least one sovereign state, while the remaining 54 constitute ethnic minorities.[267]
According to UN population projections, Europe's population may fall to about 7% of the world population by 2050, or 653 million people (medium variant; 556 to 777 million in the low and high variants, respectively).[266] Within this context, significant disparities exist between regions in relation to fertility rates. The average number of children per female of child-bearing age is 1.52.[268] According to some sources,[269] this rate is higher among Muslims in Europe. The UN predicts a steady population decline in Central and Eastern Europe as a result of emigration and low birth rates.[270]
Europe is home to the highest number of migrants of all global regions at 70.6 million people, the IOM's report said.[271] In 2005, the EU had an overall net gain from immigration of 1.8 million people. This accounted for almost 85% of Europe's total population growth.[272] In 2008, 696,000 persons were given citizenship of an EU27 member state, a decrease from 707,000 the previous year.[273] In 2017, approximately 825,000 persons acquired citizenship of an EU28 member state.[274] 2.4 million immigrants from non-EU countries entered the EU in 2017.[275]
Early modern emigration from Europe began with Spanish and Portuguese settlers in the 16th century,[276][277] and French and English settlers in the 17th century.[278] But numbers remained relatively small until waves of mass emigration in the 19th century, when millions of poor families left Europe.[279]
Today, large populations of European descent are found on every continent. European ancestry predominates in North America and, to a lesser degree, in South America (particularly in Uruguay, Argentina, Chile and Brazil), while most of the other Latin American countries also have considerable populations of European origin. Australia and New Zealand have large European-derived populations. Africa has no countries with European-derived majorities (with the possible exceptions of Cape Verde and São Tomé and Príncipe, depending on context), but there are significant minorities, such as the White South Africans in South Africa. In Asia, European-derived populations (specifically Russians) predominate in North Asia and some parts of northern Kazakhstan.[280]
Europe has about 225 indigenous languages,[281] mostly falling within three Indo-European language groups: the Romance languages, derived from the Latin of the Roman Empire; the Germanic languages, whose ancestor language came from southern Scandinavia; and the Slavic languages.[232] Slavic languages are mostly spoken in Southern, Central and Eastern Europe. Romance languages are spoken primarily in Western and Southern Europe, as well as in Switzerland in Central Europe and Romania and Moldova in Eastern Europe. Germanic languages are spoken in Western, Northern and Central Europe as well as in Gibraltar and Malta in Southern Europe.[232] Languages in adjacent areas show significant overlaps (English, for example). Other Indo-European languages outside the three main groups include the Baltic group (Latvian and Lithuanian), the Celtic group (Irish, Scottish Gaelic, Manx, Welsh, Cornish, and Breton[232]), Greek, Armenian, and Albanian.
A distinct non-Indo-European family of Uralic languages (Estonian, Finnish, Hungarian, Erzya, Komi, Mari, Moksha, and Udmurt) is spoken mainly in Estonia, Finland, Hungary, and parts of Russia. Turkic languages include Azerbaijani, Kazakh and Turkish, in addition to smaller languages in Eastern and Southeast Europe (Balkan Gagauz Turkish, Bashkir, Chuvash, Crimean Tatar, Karachay-Balkar, Kumyk, Nogai, and Tatar). Kartvelian languages (Georgian, Mingrelian, and Svan) are spoken primarily in Georgia. Two other language families reside in the North Caucasus (termed Northeast Caucasian, most notably including Chechen, Avar, and Lezgin; and Northwest Caucasian, most notably including Adyghe). Maltese is the only Semitic language that is official within the EU, while Basque is the only European language isolate.
Multilingualism and the protection of regional and minority languages are recognised political goals in Europe today. The Council of Europe Framework Convention for the Protection of National Minorities and the Council of Europe's European Charter for Regional or Minority Languages set up a legal framework for language rights in Europe.
The four largest cities of Europe are Istanbul, Moscow, Paris and London, each of which has over 10 million residents,[8] and as such they have been described as megacities.[282] While Istanbul has the highest total population, one third of it lies on the Asian side of the Bosporus, making Moscow the most populous city entirely in Europe.
The next largest cities in order of population are Saint Petersburg, Madrid, Berlin and Rome, each having over 3 million residents.[8]
When considering the commuter belts or metropolitan areas, within the EU (for which comparable data is available) London covers the largest population, followed in order by Paris, Madrid, Barcelona, Berlin, the Ruhr area, Rome, Milan, Athens and Warsaw.[283]
"Europe" as a cultural concept is substantially derived from the shared heritage of the Roman Empire and its culture.
The boundaries of Europe were historically understood as those of Christendom (or more specifically Latin Christendom), as established or defended throughout the medieval and early modern history of Europe, especially against Islam, as in the Reconquista and the Ottoman wars in Europe.[284]
This shared cultural heritage is combined with overlapping indigenous national cultures and folklores, roughly divided into Slavic, Latin (Romance) and Germanic, but with several components not part of any of these groups (notably Greek and Celtic).
Cultural contact and mixture characterise much of European regional culture; Kaplan (2014) describes Europe as "embracing maximum cultural diversity at minimal geographical distances".[285]
Various cultural events are organised in Europe with the aim of bringing different cultures closer together and raising awareness of their importance, such as the European Capital of Culture, the European Region of Gastronomy, the European Youth Capital and the European Capital of Sport.
Historically, religion in Europe has been a major influence on European art, culture, philosophy and law. There are six patron saints of Europe venerated in Roman Catholicism, five of them so declared by Pope John Paul II between 1980 and 1999: Saints Cyril and Methodius, Saint Bridget of Sweden, Catherine of Siena and Saint Teresa Benedicta of the Cross (Edith Stein). The exception is Benedict of Nursia, who had already been declared "Patron Saint of all Europe" by Pope Paul VI in 1964.[286]
Religion in Europe according to the Global Religious Landscape survey by the Pew Forum, 2012[287]
The largest religion in Europe is Christianity, with 76.2% of Europeans considering themselves Christians,[288] including Catholic, Eastern Orthodox and various Protestant denominations. Among Protestants, the most popular are the historically state-supported European denominations such as Lutheranism, Anglicanism and the Reformed faith. Other Protestant denominations, such as the historically significant Anabaptists, were never supported by any state and thus are not as widespread, nor are those that arrived more recently from the United States, such as Pentecostalism, Adventism, Methodism, Baptists and various Evangelical Protestants, although Methodism and the Baptists both have European origins. The notion of "Europe" and the "Western World" has been intimately connected with the concept of "Christianity and Christendom"; many even credit Christianity with being the link that created a unified European identity.[289]
Christianity, including the Roman Catholic Church,[290][291] has played a prominent role in the shaping of Western civilisation since at least the 4th century,[292][293][294][295] and for at least a millennium and a half, Europe has been nearly equivalent to Christian culture, even though the religion was inherited from the Middle East. Christian culture was the predominant force in western civilisation, guiding the course of philosophy, art, and science.[296][297]
The second most popular religion is Islam (6%)[298] concentrated mainly in the Balkans and Eastern Europe (Bosnia and Herzegovina, Albania, Kosovo, Kazakhstan, North Cyprus, Turkey, Azerbaijan, North Caucasus, and the Volga-Ural region). Other religions, including Judaism, Hinduism, and Buddhism are minority religions (though Tibetan Buddhism is the majority religion of Russia's Republic of Kalmykia). The 20th century saw the revival of Neopaganism through movements such as Wicca and Druidry.
Europe has become a relatively secular continent, with an increasing number and proportion of irreligious, atheist and agnostic people, who make up about 18.2% of Europe's population,[299] currently the largest secular population in the Western world. There are a particularly high number of self-described non-religious people in the Czech Republic, Estonia, Sweden, former East Germany, and France.[300]
Sport in Europe tends to be highly organised, with many sports having professional leagues.
en/1905.html.txt
ADDED
@@ -0,0 +1,284 @@
Europe is a continent located entirely in the Northern Hemisphere and mostly in the Eastern Hemisphere. It comprises the westernmost part of Eurasia and is bordered by the Arctic Ocean to the north, the Atlantic Ocean to the west, the Mediterranean Sea to the south, and Asia to the east. Europe is commonly considered to be separated from Asia by the watershed of the Ural Mountains, the Ural River, the Caspian Sea, the Greater Caucasus, the Black Sea, and the waterways of the Turkish Straits.[9] Although much of this border is over land, Europe is generally accorded the status of a full continent because of its great physical size and the weight of history and tradition.
Europe covers about 10,180,000 square kilometres (3,930,000 sq mi), or 2% of the Earth's surface (6.8% of land area), making it the sixth largest continent. Politically, Europe is divided into about fifty sovereign states, of which Russia is the largest and most populous, spanning 39% of the continent and comprising 15% of its population. Europe had a total population of about 741 million (about 11% of the world population) as of 2018.[2][3] The European climate is largely affected by warm Atlantic currents that temper winters and summers on much of the continent, even at latitudes along which the climate in Asia and North America is severe. Further from the sea, seasonal differences are more noticeable than close to the coast.
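Those percentages can be sanity-checked with rough reference figures: taking the Earth's total surface as about 510 million km² and its land area as about 149 million km² (commonly cited round values, assumed here rather than taken from this article), Europe's 10.18 million km² comes out at roughly 2% and 6.8% respectively. A minimal sketch:

```python
# Rough check of Europe's share of the Earth's surface.
# The two reference areas are approximate round figures (assumptions),
# not values taken from this article.
EUROPE_KM2 = 10_180_000          # Europe's area, from the text above
EARTH_SURFACE_KM2 = 510_000_000  # approximate total surface area of the Earth
EARTH_LAND_KM2 = 149_000_000     # approximate total land area of the Earth

print(f"Share of total surface: {EUROPE_KM2 / EARTH_SURFACE_KM2:.1%}")  # ~2.0%
print(f"Share of land area:     {EUROPE_KM2 / EARTH_LAND_KM2:.1%}")     # ~6.8%
```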
Europe, in particular ancient Greece and ancient Rome, was the birthplace of Western civilisation.[10][11][12] The fall of the Western Roman Empire in 476 AD and the subsequent Migration Period marked the end of ancient history and the beginning of the Middle Ages. Renaissance humanism, exploration, art and science led to the modern era. Since the Age of Discovery, Europe played a predominant role in global affairs. Between the 16th and 20th centuries, European powers at various times colonised the Americas, almost all of Africa and Oceania, and the majority of Asia.
The Age of Enlightenment, the subsequent French Revolution and the Napoleonic Wars shaped the continent culturally, politically and economically from the end of the 17th century until the first half of the 19th century. The Industrial Revolution, which began in Great Britain at the end of the 18th century, gave rise to radical economic, cultural and social change in Western Europe and eventually the wider world. Both world wars took place for the most part in Europe, contributing to a decline in Western European dominance in world affairs by the mid-20th century as the Soviet Union and the United States took prominence.[13] During the Cold War, Europe was divided along the Iron Curtain between NATO in the West and the Warsaw Pact in the East, until the revolutions of 1989 and fall of the Berlin Wall.
In 1949 the Council of Europe was founded with the idea of unifying Europe to achieve common goals. Further European integration by some states led to the formation of the European Union (EU), a separate political entity that lies between a confederation and a federation.[14] The EU originated in Western Europe but has been expanding eastward since the fall of the Soviet Union in 1991. The currency of most countries of the European Union, the euro, is the most commonly used among Europeans; and the EU's Schengen Area abolishes border and immigration controls between most of its member states.
In classical Greek mythology, Europa (Ancient Greek: Εὐρώπη, Eurṓpē) was a Phoenician princess. One view is that her name derives from the ancient Greek elements εὐρύς (eurús), "wide, broad" and ὤψ (ōps, gen. ὠπός, ōpós) "eye, face, countenance", hence their composite Eurṓpē would mean "wide-gazing" or "broad of aspect".[15][16][17] Broad has been an epithet of Earth herself in the reconstructed Proto-Indo-European religion and the poetry devoted to it.[15] An alternative view is that of R.S.P. Beekes, who has argued in favour of a Pre-Indo-European origin for the name, explaining that a derivation from ancient Greek eurus would yield a different toponym than Europa. Beekes has located toponyms related to that of Europa in the territory of ancient Greece, and in localities like Europos in ancient Macedonia.[18]
There have been attempts to connect Eurṓpē to a Semitic term for "west", this being either Akkadian erebu meaning "to go down, set" (said of the sun) or Phoenician 'ereb "evening, west",[19] which is at the origin of Arabic Maghreb and Hebrew ma'arav. Michael A. Barry finds the mention of the word Ereb on an Assyrian stele with the meaning of "night, [the country of] sunset", in opposition to Asu "[the country of] sunrise", i.e. Asia. The same naming motive according to "cartographic convention" appears in Greek Ἀνατολή (Anatolḗ "[sun] rise", "east", hence Anatolia).[20] Martin Litchfield West stated that "phonologically, the match between Europa's name and any form of the Semitic word is very poor",[21] while Beekes considers a connection to Semitic languages improbable.[18] Next to these hypotheses there is also a Proto-Indo-European root *h1regʷos, meaning "darkness", which also produced Greek Erebus.
Most major world languages use words derived from Eurṓpē or Europa to refer to the continent. Chinese, for example, uses the word Ōuzhōu (歐洲/欧洲), which is an abbreviation of the transliterated name Ōuluóbā zhōu (歐羅巴洲) (zhōu means "continent"); a similar Chinese-derived term Ōshū (欧州) is also sometimes used in Japanese such as in the Japanese name of the European Union, Ōshū Rengō (欧州連合), despite the katakana Yōroppa (ヨーロッパ) being more commonly used. In some Turkic languages the originally Persian name Frangistan ("land of the Franks") is used casually in referring to much of Europe, besides official names such as Avrupa or Evropa.[22]
Clickable map of Europe, showing one of the most commonly used continental boundaries.[23] Key: blue: states which straddle the border between Europe and Asia; green: countries not geographically in Europe, but closely associated with the continent.
The prevalent definition of Europe as a geographical term has been in use since the mid-19th century.
Europe is taken to be bounded by large bodies of water to the north, west and south; Europe's limits to the east and northeast are usually taken to be the Ural Mountains, the Ural River, and the Caspian Sea; to the southeast, the Caucasus Mountains, the Black Sea and the waterways connecting the Black Sea to the Mediterranean Sea.[24]
Islands are generally grouped with the nearest continental landmass, hence Iceland is considered to be part of Europe, while the nearby island of Greenland is usually assigned to North America, although politically belonging to Denmark. Nevertheless, there are some exceptions based on sociopolitical and cultural differences. Cyprus is closest to Anatolia (or Asia Minor), but is considered part of Europe politically and it is a member state of the EU. Malta was considered an island of Northwest Africa for centuries, but now it is considered to be part of Europe as well.[25]
"Europe" as used specifically in British English may also refer to Continental Europe exclusively.[26]
The term "continent" usually implies the physical geography of a large land mass completely or almost completely surrounded by water at its borders. However the Europe-Asia part of the border is somewhat arbitrary and inconsistent with this definition because of its partial adherence to the Ural and Caucasus Mountains rather than a series of partly joined waterways suggested by cartographer Herman Moll in 1715. These water divides extend with a few relatively small interruptions (compared to the aforementioned mountain ranges) from the Turkish straits running into the Mediterranean Sea to the upper part of the Ob River that drains into the Arctic Ocean. Prior to the adoption of the current convention that includes mountain divides, the border between Europe and Asia had been redefined several times since its first conception in classical antiquity, but always as a series of rivers, seas, and straits that were believed to extend an unknown distance east and north from the Mediterranean Sea without the inclusion of any mountain ranges.
The current division of Eurasia into two continents now reflects East-West cultural, linguistic and ethnic differences which vary on a spectrum rather than with a sharp dividing line. The geographic border between Europe and Asia does not follow any state boundaries and now only follows a few bodies of water. Turkey is generally considered a transcontinental country divided entirely by water, while Russia and Kazakhstan are only partly divided by waterways. France, Portugal, the Netherlands, Spain and the United Kingdom are also transcontinental (or more properly, intercontinental, when oceans or large seas are involved) in that their main land areas are in Europe while pockets of their territories are located on other continents separated from Europe by large bodies of water. Spain, for example, has territories south of the Mediterranean Sea namely Ceuta and Melilla which are parts of Africa and share a border with Morocco. According to the current convention, Georgia and Azerbaijan are transcontinental countries where waterways have been completely replaced by mountains as the divide between continents.
The first recorded usage of Eurṓpē as a geographic term is in the Homeric Hymn to Delian Apollo, in reference to the western shore of the Aegean Sea. As a name for a part of the known world, it is first used in the 6th century BC by Anaximander and Hecataeus. Anaximander placed the boundary between Asia and Europe along the Phasis River (the modern Rioni River on the territory of Georgia) in the Caucasus, a convention still followed by Herodotus in the 5th century BC.[27] Herodotus mentioned that the world had been divided by unknown persons into three parts, Europe, Asia, and Libya (Africa), with the Nile and the Phasis forming their boundaries—though he also states that some considered the River Don, rather than the Phasis, as the boundary between Europe and Asia.[28] Europe's eastern frontier was defined in the 1st century by geographer Strabo at the River Don.[29] The Book of Jubilees described the continents as the lands given by Noah to his three sons; Europe was defined as stretching from the Pillars of Hercules at the Strait of Gibraltar, separating it from Northwest Africa, to the Don, separating it from Asia.[30]
The convention received by the Middle Ages and surviving into modern usage is that of the Roman era, used by Roman-era authors such as Posidonius,[31] Strabo[32] and Ptolemy,[33] who took the Tanais (the modern Don River) as the boundary.
The term "Europe" is first used for a cultural sphere in the Carolingian Renaissance of the 9th century. From that time, the term designated the sphere of influence of the Western Church, as opposed to both the Eastern Orthodox churches and the Islamic world.
A cultural definition of Europe as the lands of Latin Christendom coalesced in the 8th century, signifying the new cultural condominium created through the confluence of Germanic traditions and Christian-Latin culture, defined partly in contrast with Byzantium and Islam, and limited to northern Iberia, the British Isles, France, Christianised western Germany, the Alpine regions and northern and central Italy.[34] The concept is one of the lasting legacies of the Carolingian Renaissance: Europa often figures in the letters of Charlemagne's court scholar, Alcuin.[35]
The question of defining a precise eastern boundary of Europe arises in the Early Modern period, as the eastern extension of Muscovy began to include North Asia. Throughout the Middle Ages and into the 18th century, the traditional division of the landmass of Eurasia into two continents, Europe and Asia, followed Ptolemy, with the boundary following the Turkish Straits, the Black Sea, the Kerch Strait, the Sea of Azov and the Don (ancient Tanais). But maps produced during the 16th to 18th centuries tended to differ in how to continue the boundary beyond the Don bend at Kalach-na-Donu (where it is closest to the Volga, now joined with it by the Volga–Don Canal), into territory not described in any detail by the ancient geographers. Around 1715, Herman Moll produced a map showing the northern part of the Ob River and the Irtysh River, a major tributary of the former, as components of a series of partly-joined waterways taking the boundary between Europe and Asia from the Turkish Straits and the Don River all the way to the Arctic Ocean. In 1721, he produced a more up-to-date map that was easier to read. However, his idea to use major rivers almost exclusively as the line of demarcation was never taken up by the Russian Empire.
Four years later, in 1725, Philip Johan von Strahlenberg was the first to depart from the classical Don boundary by proposing that mountain ranges could be included as boundaries between continents whenever there were deemed to be no suitable waterways, the Ob and Irtysh Rivers notwithstanding. He drew a new line along the Volga, following the Volga north until the Samara Bend, along Obshchy Syrt (the drainage divide between the Volga and the Ural) and then north along the Ural Mountains.[36] This was adopted by the Russian Empire, and introduced the convention that would eventually become commonly accepted, but not without criticism from many modern analytical geographers like Halford Mackinder, who saw little validity in the Ural Mountains as a boundary between continents.[37]
Mapmakers continued to differ on the boundary between the lower Don and Samara well into the 19th century. The 1745 atlas published by the Russian Academy of Sciences has the boundary follow the Don beyond Kalach as far as Serafimovich before cutting north towards Arkhangelsk, while other 18th- to 19th-century mapmakers such as John Cary followed Strahlenberg's prescription. To the south, the Kuma–Manych Depression was identified circa 1773 by a German naturalist, Peter Simon Pallas, as a valley that had once connected the Black Sea and the Caspian Sea,[38][39] and it was subsequently proposed as a natural boundary between continents.
By the mid-19th century, there were three main conventions: one following the Don, the Volga–Don Canal and the Volga; another following the Kuma–Manych Depression to the Caspian and then the Ural River; and a third abandoning the Don altogether and following the Greater Caucasus watershed to the Caspian. The question was still treated as a "controversy" in the geographical literature of the 1860s, with Douglas Freshfield advocating the Caucasus crest boundary as the "best possible", citing support from various "modern geographers".[40]
In Russia and the Soviet Union, the boundary along the Kuma–Manych Depression was the most commonly used as early as 1906.[41] In 1958, the Soviet Geographical Society formally recommended that the boundary between Europe and Asia be drawn in textbooks from Baydaratskaya Bay, on the Kara Sea, along the eastern foot of the Ural Mountains, then following the Ural River until the Mugodzhar Hills, then the Emba River, and finally the Kuma–Manych Depression,[42] thus placing the Caucasus entirely in Asia and the Urals entirely in Europe.[43] However, most geographers in the Soviet Union favoured the boundary along the Caucasus crest,[44] and this became the common convention in the later 20th century, although the Kuma–Manych boundary remained in use in some 20th-century maps.
Homo erectus georgicus, which lived roughly 1.8 million years ago in Georgia, is the earliest hominid to have been discovered in Europe.[45] Other hominid remains, dating back roughly 1 million years, have been discovered in Atapuerca, Spain.[46] Neanderthals (named after the Neandertal valley in Germany) appeared in Europe 150,000 years ago (they are found in Poland as early as 115,000 years ago[47]) and disappeared from the fossil record about 28,000 years ago, with their final refuge being present-day Portugal. The Neanderthals were supplanted by modern humans (Cro-Magnons), who appeared in Europe around 43,000 to 40,000 years ago.[48] The earliest such sites in Europe, dated to 48,000 years ago, are Riparo Mochi (Italy), Geissenklösterle (Germany) and Isturitz (France).[49][50]
The European Neolithic period—marked by the cultivation of crops and the raising of livestock, increased numbers of settlements and the widespread use of pottery—began around 7000 BC in Greece and the Balkans, probably influenced by earlier farming practices in Anatolia and the Near East.[51] It spread from the Balkans along the valleys of the Danube and the Rhine (Linear Pottery culture) and along the Mediterranean coast (Cardial culture). Between 4500 and 3000 BC, these central European neolithic cultures developed further to the west and the north, transmitting newly acquired skills in producing copper artifacts. In Western Europe the Neolithic period was characterised not by large agricultural settlements but by field monuments, such as causewayed enclosures, burial mounds and megalithic tombs.[52] The Corded Ware cultural horizon flourished at the transition from the Neolithic to the Chalcolithic. During this period giant megalithic monuments, such as the Megalithic Temples of Malta and Stonehenge, were constructed throughout Western and Southern Europe.[53][54]
The European Bronze Age began c. 3200 BC in Greece with the Minoan civilisation on Crete, the first advanced civilisation in Europe.[55] The Minoans were followed by the Mycenaeans, who collapsed suddenly around 1200 BC, ushering in the European Iron Age.[56] Iron Age colonisation by the Greeks and Phoenicians gave rise to early Mediterranean cities. Early Iron Age Italy and Greece from around the 8th century BC gradually gave rise to historical Classical antiquity, whose beginning is sometimes dated to 776 BC, the year of the first Olympic Games.[57]
Ancient Greece was the founding culture of Western civilisation. Western democratic and rationalist culture is often attributed to Ancient Greece.[58] The Greek city-state, the polis, was the fundamental political unit of classical Greece.[58] In 508 BC, Cleisthenes instituted the world's first democratic system of government in Athens.[59] The Greek political ideals were rediscovered in the late 18th century by European philosophers and idealists. Greece also generated many cultural contributions: in philosophy, humanism and rationalism under Aristotle, Socrates and Plato; in history with Herodotus and Thucydides; in dramatic and narrative verse, starting with the epic poems of Homer;[60] in drama with Sophocles and Euripides; in medicine with Hippocrates and Galen; and in science with Pythagoras, Euclid and Archimedes.[61][62][63] In the course of the 5th century BC, several of the Greek city-states ultimately checked the Achaemenid Persian advance in Europe through the Greco-Persian Wars, considered a pivotal moment in world history;[64] the 50 years of peace that followed are known as the Golden Age of Athens, the seminal period of ancient Greece that laid many of the foundations of Western civilisation. The conquests of Alexander the Great brought the Middle East into the Greek cultural sphere.
Greece was followed by Rome, which left its mark on law, politics, language, engineering, architecture, government and many other key aspects of western civilisation.[58] Rome began as a small city-state, founded, according to tradition, in 753 BC as an elective kingdom. Tradition has it that there were seven kings of Rome, with Romulus, the founder, being the first and Lucius Tarquinius Superbus falling to a republican uprising led by Lucius Junius Brutus, but modern scholars doubt many of those stories, and even the Romans themselves acknowledged that the sack of Rome by the Gauls in 387 BC destroyed many sources on their early history.
By 200 BC, Rome had conquered Italy, and over the following two centuries it conquered Greece and Hispania (Spain and Portugal), the North African coast, much of the Middle East, Gaul (France and Belgium), and Britannia (England and Wales). The forty-year conquest of Britannia left 250,000 Britons dead, and the emperor Antoninus Pius built the Antonine Wall across Scotland's Central Belt to defend the province from the Caledonians; it marked the northernmost border of the Roman Empire.
The Roman Republic ended in 27 BC, when Augustus proclaimed the Roman Empire. The two centuries that followed are known as the Pax Romana, a period of unprecedented peace, prosperity, and political stability in most of Europe.[65] The empire continued to expand under emperors such as Antoninus Pius and Marcus Aurelius, who spent time on the Empire's northern border fighting Germanic, Pictish and Scottish tribes.[66][67] Christianity was legalised by Constantine I in 313 AD after three centuries of imperial persecution. Constantine also permanently moved the capital of the empire from Rome to the city of Byzantium, which was renamed Constantinople in his honour (modern-day Istanbul) in 330 AD. Christianity became the sole official religion of the empire in 380 AD, and in 391–392 AD, the emperor Theodosius outlawed pagan religions.[68] This is sometimes considered to mark the end of antiquity; alternatively, antiquity is considered to end with the fall of the Western Roman Empire in 476 AD, the closure of the pagan Platonic Academy of Athens in 529 AD,[69] or the rise of Islam in the early 7th century AD.
During the decline of the Roman Empire, Europe entered a long period of change arising from what historians call the "Age of Migrations". There were numerous invasions and migrations amongst the Goths, Vandals, Huns, Franks, Angles, Saxons, Slavs, Avars, Bulgars and, later on, the Vikings, Pechenegs, Cumans and Magyars.[65] Germanic tribes settled in the former Roman provinces of England and Spain, while other groups pressed into northern France and Italy.[70] Renaissance thinkers such as Petrarch would later refer to this as the "Dark Ages".[71] Isolated monastic communities were the only places to safeguard and compile written knowledge accumulated previously; apart from this, very few written records survive, and much literature, philosophy, mathematics, and other thinking from the classical period disappeared from Western Europe, though it was preserved in the east, in the Byzantine Empire.[72]
While the Roman empire in the west continued to decline, Roman traditions and the Roman state remained strong in the predominantly Greek-speaking Eastern Roman Empire, also known as the Byzantine Empire. During most of its existence, the Byzantine Empire was the most powerful economic, cultural, and military force in Europe. Emperor Justinian I presided over Constantinople's first golden age: he established a legal code that forms the basis of many modern legal systems, funded the construction of the Hagia Sophia, and brought the Christian church under state control.[73] He reconquered North Africa, southern Spain, and Italy.[74] The Ostrogothic Kingdom, which had sought to preserve Roman culture and adopt its values, later turned against Constantinople; Justinian waged war against the Ostrogoths for 30 years, destroying much of urban life in Italy and reducing agriculture to subsistence farming.[75]
From the 7th century onwards, as the Byzantines and the neighbouring Sasanid Persians were severely weakened due to the protracted and frequent Byzantine–Sasanian wars of the preceding centuries, the Muslim Arabs began to make inroads into historically Roman territory, taking the Levant and North Africa and making inroads into Asia Minor. In the mid-7th century AD, following the Muslim conquest of Persia, Islam penetrated into the Caucasus region.[76] Over the next centuries Muslim forces took Cyprus, Malta, Crete, Sicily and parts of southern Italy.[77] Between 711 and 726, the Umayyad Caliphate conquered the Visigothic Kingdom, which occupied the Iberian Peninsula and Septimania (southwestern France). The unsuccessful second siege of Constantinople (717) weakened the Umayyad dynasty and reduced its prestige. The Umayyads were then defeated by the Frankish leader Charles Martel at the Battle of Poitiers in 732, which ended their northward advance. In the remote regions of north-western Iberia and the middle Pyrenees the power of the Muslims in the south was scarcely felt. It was here that the foundations of the Christian kingdoms of Asturias, Leon and Galicia were laid and from where the reconquest of the Iberian Peninsula would start. However, no coordinated attempt would be made to drive the Moors out. The Christian kingdoms were mainly focussed on their own internal power struggles. As a result, the Reconquista took the greater part of eight hundred years, a period in which a long list of Alfonsos, Sanchos, Ordoños, Ramiros, Fernandos and Bermudos would be fighting their Christian rivals as much as the Muslim invaders.
One of the biggest threats during the Dark Ages was the Vikings, Norse seafarers who raided, raped, and pillaged across the Western world. Many Vikings died in battles in continental Europe, and in 844 they lost many men and ships to King Ramiro in northern Spain.[78] A few months later, another fleet took Seville, only to be driven off with further heavy losses.[79] In 911, the Vikings attacked Paris, and the Franks decided to give the Viking king Rollo land along the English Channel coast in exchange for peace. This land was named "Normandy" for the Norse "Northmen", and the Norse settlers became known as "Normans", adopting the French culture and language and becoming French vassals. In 1066, the Normans went on to conquer England in the first successful cross-Channel invasion since Roman times.[80]
During the Dark Ages, the Western Roman Empire fell under the control of various tribes. The Germanic and Slav tribes established their domains over Western and Eastern Europe respectively.[81] Eventually the Frankish tribes were united under Clovis I.[82] In 507, Clovis defeated the Visigoths at the Battle of Vouillé, slaying King Alaric II and conquering southern Gaul for the Franks. His conquests laid the foundation for the Frankish kingdom. Charlemagne, a Frankish king of the Carolingian dynasty who had conquered most of Western Europe, was anointed "Holy Roman Emperor" by the Pope in 800. This led in 962 to the founding of the Holy Roman Empire, which eventually became centred in the German principalities of central Europe.[83]
East Central Europe saw the creation of the first Slavic states and the adoption of Christianity (circa 1000 AD). The powerful West Slavic state of Great Moravia spread its territory all the way south to the Balkans, reaching its largest territorial extent under Svatopluk I and causing a series of armed conflicts with East Francia. Further south, the first South Slavic states emerged in the late 7th and 8th century and adopted Christianity: the First Bulgarian Empire, the Serbian Principality (later Kingdom and Empire), and the Duchy of Croatia (later Kingdom of Croatia). To the East, the Kievan Rus expanded from its capital in Kiev to become the largest state in Europe by the 10th century. In 988, Vladimir the Great adopted Orthodox Christianity as the religion of state.[84][85] Further East, Volga Bulgaria became an Islamic state in the 10th century, but was eventually absorbed into Russia several centuries later.[86]
The period between the years 1000 and 1300 is known as the High Middle Ages, during which the population of Europe experienced significant growth, culminating in the Renaissance of the 12th century. Economic growth, together with the lack of safety on the mainland trading routes, made possible the development of major commercial routes along the coasts of the Mediterranean and Baltic Seas. The growing wealth and independence acquired by some coastal cities gave the Maritime Republics a leading role in the European scene. Constantinople was the largest and wealthiest city in Europe from the 9th to the 12th centuries, with a population of approximately 400,000.[91]
The Middle Ages on the mainland were dominated by the two upper echelons of the social structure: the nobility and the clergy. Feudalism developed in France in the Early Middle Ages and soon spread throughout Europe.[92] A struggle for influence between the nobility and the monarchy in England led to the writing of the Magna Carta and the establishment of a parliament.[93] The primary source of culture in this period came from the Roman Catholic Church. Through monasteries and cathedral schools, the Church was responsible for education in much of Europe.[92]
The Papacy reached the height of its power during the High Middle Ages. The East-West Schism of 1054 split the former Roman Empire religiously, with the Eastern Orthodox Church in the Byzantine Empire and the Roman Catholic Church in the former Western Roman Empire. In 1095, Pope Urban II called for a crusade against Muslims occupying Jerusalem and the Holy Land.[94] In Europe itself, the Church organised the Inquisition against heretics. Popes also encouraged crusading in the Iberian Peninsula.[95] In the east, a resurgent Byzantine Empire recaptured Crete and Cyprus from the Muslims and reconquered the Balkans. However, it faced a new enemy in the form of seaborne attacks by the Normans, who conquered Southern Italy and Sicily. Even more dangerous than the Franco-Norsemen was a new enemy from the steppe: the Seljuk Turks. The Empire was weakened following the defeat at the Battle of Manzikert and weakened considerably further by the Frankish-Venetian sack of Constantinople in 1204, during the Fourth Crusade.[96][97][98][99][100][101][102][103][104] Although it would recover Constantinople in 1261, Byzantium fell in 1453 when Constantinople was taken by the Ottoman Empire.[105][106][107]
In the 11th and 12th centuries, constant incursions by nomadic Turkic tribes, such as the Pechenegs and the Cuman-Kipchaks, caused a massive migration of Slavic populations to the safer, heavily forested regions of the north and temporarily halted the expansion of the Rus' state to the south and east.[108] Like many other parts of Eurasia, these territories were overrun by the Mongols.[109] The invaders, who became known as Tatars, were mostly Turkic-speaking peoples under Mongol suzerainty. They established the state of the Golden Horde with headquarters in Crimea, which later adopted Islam as a religion and ruled over modern-day southern and central Russia for more than three centuries.[110][111] After the collapse of Mongol dominions, the first Romanian states (principalities) emerged in the 14th century: Moldavia and Wallachia. Previously, these territories were under the successive control of the Pechenegs and Cumans.[112] From the 12th to the 15th centuries, the Grand Duchy of Moscow grew from a small principality under Mongol rule to the largest state in Europe, overthrowing the Mongols in 1480 and eventually becoming the Tsardom of Russia. The state was consolidated under Ivan III the Great and Ivan the Terrible, steadily expanding to the east and south over the next centuries.
The Great Famine of 1315–1317 was the first crisis that would strike Europe in the late Middle Ages.[113] The period between 1348 and 1420 witnessed the heaviest loss. The population of France was reduced by half.[114][115] Medieval Britain was afflicted by 95 famines,[116] and France suffered the effects of 75 or more in the same period.[117] Europe was devastated in the mid-14th century by the Black Death, one of the most deadly pandemics in human history which killed an estimated 25 million people in Europe alone—a third of the European population at the time.[118]
The plague had a devastating effect on Europe's social structure; it induced people to live for the moment as illustrated by Giovanni Boccaccio in The Decameron (1353). It was a serious blow to the Roman Catholic Church and led to increased persecution of Jews, beggars, and lepers.[119] The plague is thought to have returned every generation with varying virulence and mortalities until the 18th century.[120] During this period, more than 100 plague epidemics swept across Europe.[121]
The Renaissance was a period of cultural change originating in Florence and later spreading to the rest of Europe. The rise of a new humanism was accompanied by the recovery of forgotten classical Greek and Arabic knowledge from monastic libraries, often translated from Arabic into Latin.[122][123][124] The Renaissance spread across Europe between the 14th and 16th centuries: it saw the flowering of art, philosophy, music, and the sciences, under the joint patronage of royalty, the nobility, the Roman Catholic Church, and an emerging merchant class.[125][126][127] Patrons in Italy, including the Medici family of Florentine bankers and the Popes in Rome, funded prolific quattrocento and cinquecento artists such as Raphael, Michelangelo, and Leonardo da Vinci.[128][129]
Political intrigue within the Church in the late 14th century caused the Western Schism. During this forty-year period, two popes, one in Avignon and one in Rome, claimed rulership over the Church. Although the schism was eventually healed in 1417, the papacy's spiritual authority had suffered greatly.[132] In the 15th century, Europe started to extend itself beyond its geographic frontiers. Spain and Portugal, the greatest naval powers of the time, took the lead in exploring the world.[133][134] Exploration reached the Southern Hemisphere in the Atlantic and the southern tip of Africa. Christopher Columbus reached the New World in 1492, and Vasco da Gama opened the ocean route to the East linking the Atlantic and Indian Oceans in 1498. Ferdinand Magellan reached Asia westward across the Atlantic and the Pacific Oceans in the Spanish expedition of Magellan-Elcano, resulting in the first circumnavigation of the globe, completed by Juan Sebastián Elcano (1519–22). Soon after, the Spanish and Portuguese began establishing large global empires in the Americas, Asia, Africa and Oceania.[135] France, the Dutch Republic and England soon followed in building large colonial empires with vast holdings in Africa, the Americas, and Asia. The Italian Wars (1494–1559) led to the development of the first modern professional army in Europe, the Tercios. Their superiority became evident at the Battle of Bicocca (1522) against French-Swiss-Italian forces, which marked the end of massive pike assaults after thousands of Swiss pikemen fell to the Spanish arquebusiers without the latter suffering a single casualty.[136] Over the course of the 16th century, Spanish and Italian troops marched north to fight on Dutch and German battlefields, dying there in large numbers.[137][138]
The Church's power was further weakened by the Protestant Reformation in 1517, when German theologian Martin Luther nailed to the church door his Ninety-five Theses criticising the selling of indulgences. He was subsequently excommunicated in the papal bull Exsurge Domine in 1520, and his followers were condemned in the 1521 Diet of Worms, which divided German princes between Protestant and Roman Catholic faiths.[139] Religious fighting and warfare spread with Protestantism. War broke out first in Germany, where Imperial troops defeated the Protestant army of the Schmalkaldic League at the Battle of Mühlberg, putting an end to the Schmalkaldic War (1546–47).[140] Between 2 and 3 million people were killed in the French Wars of Religion (1562–98).[141] The Eighty Years' War (1568–1648) between Catholic Habsburgs and Dutch Calvinists resulted in the death of hundreds of thousands of people.[141] In the Cologne War (1583–88), Catholic forces faced off against German Calvinist and Dutch troops. The Thirty Years' War (1618–48) crippled the Holy Roman Empire and devastated much of Germany, killing between 25 and 40 percent of its population.[142] During the Thirty Years' War, in which various Protestant forces battled Imperial armies, France provided subsidies to Habsburg enemies, especially Sweden. The destruction of the Swedish army by an Austro-Spanish force at the Battle of Nördlingen (1634) forced France to intervene militarily in the Thirty Years' War. In 1638, France and the Swedish chancellor Axel Oxenstierna agreed to a treaty at Hamburg, extending the annual French subsidy of 400,000 riksdalers for three years, though Sweden would not fight against Spain. In 1643, the French defeated one of Spain's best armies at the Battle of Rocroi. In the aftermath of the Peace of Westphalia (1648), France rose to predominance within Western Europe.[143] France fought a series of wars over domination of Western Europe, including the Franco-Spanish War, the Franco-Dutch War, the Nine Years' War, and the War of the Spanish Succession; these wars cost France over 1,000,000 battle casualties.[144][145]
The 17th century in central and eastern Europe was a period of general decline.[146] Central and Eastern Europe experienced more than 150 famines in the 200-year period between 1501 and 1700.[147] From the Union of Krewo (1385), central and eastern Europe was dominated by the Kingdom of Poland and the Grand Duchy of Lithuania. Between 1648 and 1655, the hegemony of the Polish–Lithuanian Commonwealth in central and eastern Europe came to an end. From the 15th to 18th centuries, when the disintegrating khanates of the Golden Horde were conquered by Russia, Tatars from the Crimean Khanate frequently raided Eastern Slavic lands to capture slaves.[148] Further east, the Nogai Horde and Kazakh Khanate frequently raided the Slavic-speaking areas of Russia, Ukraine and Poland for hundreds of years, until the Russian expansion and conquest of most of northern Eurasia (i.e. Eastern Europe, Central Asia and Siberia).
The Renaissance and the New Monarchs marked the start of an Age of Discovery, a period of exploration, invention, and scientific development.[149] Among the great figures of the Western scientific revolution of the 16th and 17th centuries were Copernicus, Kepler, Galileo, and Isaac Newton.[150] According to Peter Barrett, "It is widely accepted that 'modern science' arose in the Europe of the 17th century (towards the end of the Renaissance), introducing a new understanding of the natural world."[122]
The Age of Enlightenment was a powerful intellectual movement during the 18th century promoting scientific and reason-based thought.[151][152][153] Discontent with the aristocracy's and clergy's monopoly on political power in France resulted in the French Revolution and the establishment of the First Republic, during whose initial Reign of Terror the monarchy and many of the nobility perished.[154] Napoleon Bonaparte rose to power in the aftermath of the French Revolution and established the First French Empire that, during the Napoleonic Wars, grew to encompass large parts of western and central Europe before collapsing in 1814 with the Battle of Paris between the Sixth Coalition (consisting of Russia, Austria, and Prussia) and the French Empire.[155][156] Napoleonic rule resulted in the further dissemination of the ideals of the French Revolution, including that of the nation-state, as well as the widespread adoption of the French models of administration, law, and education.[157][158][159] The Congress of Vienna, convened after Napoleon's downfall, established a new balance of power in Europe centred on the five "Great Powers": the UK, France, Prussia, Austria, and Russia.[160] This balance would remain in place until the Revolutions of 1848, during which liberal uprisings affected all of Europe except for Russia and the UK. These revolutions were eventually put down by conservative elements and few reforms resulted.[161] The year 1859 saw the unification of Romania, as a nation-state, from smaller principalities. In 1867, the Austro-Hungarian empire was formed; and 1871 saw the unifications of both Italy and Germany as nation-states from smaller principalities.[162]
In parallel, the Eastern Question grew more complex after the Ottoman defeat in the Russo-Turkish War (1768–1774). As the dissolution of the Ottoman Empire seemed imminent, the Great Powers struggled to safeguard their strategic and commercial interests in the Ottoman domains. The Russian Empire stood to benefit from the decline, whereas the Habsburg Empire and Britain perceived the preservation of the Ottoman Empire to be in their best interests. Meanwhile, the Serbian revolution (1804) and the Greek War of Independence (1821) marked the beginning of the end of Ottoman rule in the Balkans, which concluded with the Balkan Wars of 1912–1913.[163] Formal recognition of the de facto independent principalities of Montenegro, Serbia and Romania ensued at the Congress of Berlin in 1878. Many Ottoman Muslims faced extermination at the hands of the newly independent states or were expelled to the shrinking Ottoman possessions in Europe or to Anatolia; between 1821 and 1922 alone, more than 5 million Ottoman Muslims were driven from their homes, while another 5.5 million died in wars or due to starvation and disease.[164]
The Industrial Revolution started in Great Britain in the last part of the 18th century and spread throughout Europe. The invention and implementation of new technologies resulted in rapid urban growth, mass employment, and the rise of a new working class.[165] Reforms in social and economic spheres followed, including the first laws on child labour, the legalisation of trade unions,[166] and the abolition of slavery.[167] In Britain, the Public Health Act of 1875 was passed, which significantly improved living conditions in many British cities.[168] Europe's population increased from about 100 million in 1700 to 400 million by 1900.[169] The last major famine recorded in Western Europe, the Irish Potato Famine, caused death and mass emigration of millions of Irish people.[170] In the 19th century, 70 million people left Europe in migrations to various European colonies abroad and to the United States.[171] Demographic growth meant that, by 1900, Europe's share of the world's population was 25%.[172]
Two world wars and an economic depression dominated the first half of the 20th century. World War I was fought between 1914 and 1918. It started when Archduke Franz Ferdinand of Austria was assassinated by the Yugoslav nationalist[178] Gavrilo Princip.[179] Most European nations were drawn into the war, which was fought between the Entente Powers (France, Belgium, Serbia, Portugal, Russia, the United Kingdom, and later Italy, Greece, Romania, and the United States) and the Central Powers (Austria-Hungary, Germany, Bulgaria, and the Ottoman Empire).
On 4 August 1914, Germany invaded and occupied Belgium. On 17 August 1914, the Russian army launched an assault on the eastern German province of East Prussia, but it was dealt a fatal blow at the Battle of Tannenberg on 26–30 August 1914. As 1914 came to a close, the Germans were facing off against the French and British in northern France and in the Alsace and Lorraine regions of eastern France, and the opposing armies dug trenches and set up defensive positions. From 1915 to 1917, the two sides engaged in trench warfare and massive offensives. The Battle of the Somme was the largest battle on the Western Front; the British suffered 420,000 casualties, the French 200,000 and the Germans 500,000. The Battle of Verdun saw around 377,231 French and 337,000 Germans become casualties. At the Battle of Passchendaele, mustard gas was used by both sides as a chemical weapon, and many soldiers on both sides died in battle. In 1915, the tide of the war changed when the Kingdom of Italy decided to enter the war on the side of the Triple Entente, seeking to acquire Austrian Tyrol and some possessions along the Adriatic Sea. This violated the "Triple Alliance" formed in the 19th century, and Austria-Hungary was ill-prepared for yet another front to fight on. The Royal Italian Army launched several offensives along the Isonzo River, with eleven battles of the Isonzo being fought during the war. Over the course of the war, 462,391 Italian soldiers died; 420,000 of the dead were lost on the Alpine front with Austria-Hungary, while the rest fell in France, Albania, or Macedonia.[180]
In 1917, German troops began to arrive in Italy to assist the crumbling Austro-Hungarian forces, and the Germans destroyed an Italian army at the Battle of Caporetto. This led to British and French (and later American) troops arriving in Italy to assist the Italian military against the Central Powers, and the Allied troops succeeded in holding parts of northern Italy and the Adriatic coast. In the Balkans, the Serbians resisted the Austrians until Bulgaria and the German Empire sent troops to assist in the conquest of Serbia in December 1915. The Austro-Hungarian Army on the Eastern Front, which had performed poorly, was soon subordinated to the Imperial German Army's high command, and the Germans destroyed a Russian army in the Gorlice-Tarnow Offensive of 1915. The Russians launched a massive counterattack against the Central Powers in Galicia in 1916, and this "Brusilov Offensive" inflicted very high losses on the Austro-Hungarians; 1,325,000 Central Powers troops and 500,000 Russian troops were lost. However, the political situation in Russia deteriorated as people became more aware of the corruption in Tsar Nicholas II's government, and the Germans made rapid progress in 1917 after the Russian Revolution toppled the tsar. The Germans secured Riga in Latvia in September 1917, and they benefited from the fall of Aleksandr Kerensky's provisional government to the Bolsheviks in October 1917. In February-March 1918, the Germans launched a massive offensive after the Russian SFSR came to power, taking advantage of the Russian Civil War to seize large amounts of territory. In March 1918, the Soviets signed the Treaty of Brest-Litovsk, ending the war on the Eastern Front, and Germany set up various puppet nations in Eastern Europe.
Germany's huge successes in the war against Russia were a morale boost for the Central Powers, but the Germans created a new problem when they began to use "unrestricted submarine warfare" to attack enemy ships at sea. These tactics led to the sinking of several civilian ships, most notoriously RMS Lusitania on 7 May 1915, angering the United States, which lost civilians aboard these ships. Germany agreed to the "Sussex Pledge" in 1916, stating that it would halt this strategy, but it resumed unrestricted submarine warfare in early 1917, and the British intercepted an offer from the German government made to Mexico that proposed an anti-American alliance. In April 1917, President Woodrow Wilson asked Congress to declare war on Germany, and the USA joined the Allies. In 1918, American troops took part in the Battle of Belleau Wood and in the Meuse-Argonne Offensive, major offensives in which large numbers of American soldiers died on European soil for the first time. The Germans began to suffer as more and more Allied troops arrived, and they launched the desperate "Spring Offensive" of 1918. This offensive was repelled, and the Allies drove towards Belgium in the Hundred Days Offensive during the autumn of 1918. On 11 November 1918, the Germans agreed to an armistice with the Allies, ending the war. The war left more than 16 million civilians and soldiers dead.[181] Over 60 million European soldiers were mobilised from 1914 to 1918.[182]
Russia was plunged into the Russian Revolution, which overthrew the Tsarist monarchy and replaced it with the communist Soviet Union.[183] Austria-Hungary and the Ottoman Empire collapsed and broke up into separate nations, and many other nations had their borders redrawn. The Treaty of Versailles, which officially ended World War I in 1919, was harsh towards Germany, upon whom it placed full responsibility for the war and imposed heavy sanctions.[184] Excess deaths in Russia over the course of World War I and the Russian Civil War (including the postwar famine) amounted to a combined total of 18 million.[185] In 1932–1933, under Stalin's leadership, confiscations of grain by the Soviet authorities contributed to the second Soviet famine, which caused millions of deaths;[186] surviving kulaks were persecuted, and many were sent to Gulags to do forced labour. Stalin was also responsible for the Great Purge of 1937–38, in which the NKVD executed 681,692 people;[187] millions of people were deported and exiled to remote areas of the Soviet Union.[188]
The social revolutions sweeping through Russia also affected other European nations following the Great War: 1919 brought the Weimar Republic in Germany and the First Austrian Republic; 1922 brought Mussolini's one-party fascist government in the Kingdom of Italy and Atatürk's Turkish Republic, which adopted the Western alphabet and state secularism.
Economic instability, caused in part by debts incurred in the First World War and 'loans' to Germany, played havoc in Europe in the late 1920s and 1930s. This and the Wall Street Crash of 1929 brought about the worldwide Great Depression. Helped by the economic crisis, social instability and the threat of communism, fascist movements developed throughout Europe, bringing Adolf Hitler to power in what became Nazi Germany.[198][199] In 1933, Hitler became the leader of Germany and began to work towards his goal of building Greater Germany. Germany re-expanded, regaining the Saarland in 1935 and remilitarising the Rhineland in 1936. In 1938, Austria became a part of Germany following the Anschluss. Later that year, following the Munich Agreement signed by Germany, France, the United Kingdom and Italy, Germany annexed the Sudetenland, which was a part of Czechoslovakia inhabited by ethnic Germans, and in early 1939, the remainder of Czechoslovakia was split into the Protectorate of Bohemia and Moravia, controlled by Germany, and the Slovak Republic. At the time, Britain and France preferred a policy of appeasement.
With tensions mounting between Germany and Poland over the future of Danzig, the Germans turned to the Soviets and signed the Molotov–Ribbentrop Pact, which allowed the Soviets to invade the Baltic states and parts of Poland and Romania. Germany invaded Poland on 1 September 1939, prompting France and the United Kingdom to declare war on Germany on 3 September, opening the European Theatre of World War II.[194][200][201] The Soviet invasion of Poland started on 17 September and Poland fell soon thereafter. On 24 September, the Soviet Union moved against the Baltic countries and, later, Finland. The British hoped to land at Narvik and send troops to aid Finland, but their primary objective in the landing was to encircle Germany and cut the Germans off from Scandinavian resources. Around the same time, Germany moved troops into Denmark; 1,400 German soldiers conquered Denmark in one day.[202] The Phoney War continued. In May 1940, Germany conquered the Netherlands and Belgium. In June, Italy allied with Germany, and France was conquered. By August, Germany had begun a bombing offensive against Britain but failed to convince the British to give up.[203] Some 43,000 British civilians were killed and 139,000 wounded in the Blitz; much of London was destroyed, with 1,400,245 buildings destroyed or damaged.[204] In April 1941, Germany invaded Yugoslavia. In June 1941, Germany invaded the Soviet Union in Operation Barbarossa.[205] The Soviets and Germans fought an extremely brutal war in the east that cost millions of civilian and soldier lives. The Jews of Russia were targeted for annihilation by the German Nazis, who wanted to purge the world of Jews and other "subhumans". On 7 December 1941, Japan's attack on Pearl Harbor drew the United States into the conflict as an ally of the British Empire and other Allied forces.[206][207] In 1943, the Allies knocked Italy out of the war. The Italian campaign had the highest casualty rates of the whole war for the Western Allies.[208]
After the staggering Battle of Stalingrad in 1943, the German offensive in the Soviet Union turned into a continual retreat. The Battle of Kursk, which involved the largest tank battle in history, was the last major German offensive on the Eastern Front. In 1944, the Soviets and the Western Allies both launched offensives against the Axis in Europe, with the Western Allies liberating France and much of Western Europe as the Soviets liberated much of Eastern Europe before halting in central Poland. The final year of the war saw the Soviets and Western Allies divide Germany in two after meeting along the Elbe River, while the Soviets and allied partisans liberated the Balkans. The Soviets conquered the German capital of Berlin on 2 May 1945, ending World War II in Europe. Hitler killed himself; Mussolini was killed by partisans in Italy. The Soviet-German struggle made World War II the bloodiest and costliest conflict in history.[209][210] More than 40 million people in Europe had died as a result of World War II (70 percent on the eastern front),[211][212] including between 11 and 17 million people who perished during the Holocaust (the genocide against Jews and the massacre of intellectuals, Roma, homosexuals, disabled people, Slavs, and Red Army prisoners of war).[213] The Soviet Union lost around 27 million people during the war, about half of all World War II casualties.[214] By the end of World War II, Europe had more than 40 million refugees.[215] Several post-war expulsions in Central and Eastern Europe displaced a total of about 20 million people,[216] in particular German-speakers from all over Eastern Europe. In the aftermath of World War II, American troops were stationed in Western Europe, while Soviet troops occupied several Central and Eastern European states.
World War I and especially World War II diminished the eminence of Western Europe in world affairs. After World War II the map of Europe was redrawn at the Yalta Conference and divided into two blocs, the Western countries and the communist Eastern bloc, separated by what was later called by Winston Churchill an "Iron Curtain". Germany was divided between the capitalist West Germany and the communist East Germany. The United States and Western Europe established the NATO alliance; later, the Soviet Union and Central Europe established the Warsaw Pact.[217]
The two new superpowers, the United States and the Soviet Union, became locked in a fifty-year-long Cold War, centred on nuclear proliferation. At the same time decolonisation, which had already started after World War I, gradually resulted in the independence of most of the European colonies in Asia and Africa.[13] During the Cold War, extra-continental military interventions were the preserve of the two superpowers, a few West European countries, and Cuba.[218] Using the small Caribbean nation of Cuba as a surrogate, the Soviets projected power to distant points of the globe, exploiting world trouble spots without directly involving their own troops.[219]
In the 1980s the reforms of Mikhail Gorbachev and the Solidarity movement in Poland accelerated the collapse of the Eastern bloc and the end of the Cold War. Germany was reunited, after the symbolic fall of the Berlin Wall in 1989, and the maps of Central and Eastern Europe were redrawn once more.[198] European integration also grew after World War II. In 1949 the Council of Europe was founded, following a speech by Sir Winston Churchill, with the idea of unifying Europe to achieve common goals. It includes all European states except for Belarus and Vatican City. The Treaty of Rome in 1957 established the European Economic Community between six Western European states with the goal of a unified economic policy and common market.[221] In 1967 the EEC, the European Coal and Steel Community and Euratom formed the European Community, which in 1993 became the European Union. The EU established a parliament, court and central bank and introduced the euro as a unified currency.[222] Between 2004 and 2013, more Central and Eastern European countries began joining, expanding the EU to 28 European countries and once more making Europe a major economic and political centre of power.[223] However, the United Kingdom withdrew from the EU on 31 January 2020, as a result of a June 2016 referendum on EU membership.[224]
Europe makes up the western fifth of the Eurasian landmass.[24] It has a higher ratio of coast to landmass than any other continent or subcontinent.[225] Its maritime borders consist of the Arctic Ocean to the north, the Atlantic Ocean to the west, and the Mediterranean, Black, and Caspian Seas to the south.[226]
Land relief in Europe shows great variation within relatively small areas. The southern regions are more mountainous, while moving north the terrain descends from the high Alps, Pyrenees, and Carpathians, through hilly uplands, into broad, low northern plains, which are vast in the east. This extended lowland is known as the Great European Plain, and at its heart lies the North German Plain. An arc of uplands also exists along the north-western seaboard, which begins in the western parts of the islands of Britain and Ireland, and then continues along the mountainous, fjord-cut spine of Norway.
This description is simplified. Sub-regions such as the Iberian Peninsula and the Italian Peninsula contain their own complex features, as does mainland Central Europe itself, where the relief contains many plateaus, river valleys and basins that complicate the general trend. Sub-regions like Iceland, Britain, and Ireland are special cases. The former is a land unto itself in the northern ocean that is counted as part of Europe, while the latter are upland areas that were once joined to the mainland until rising sea levels cut them off.
Europe lies mainly in the temperate climate zones, being subjected to prevailing westerlies. The climate is milder in comparison to other areas of the same latitude around the globe due to the influence of the Gulf Stream.[228] The Gulf Stream is nicknamed "Europe's central heating", because it makes Europe's climate warmer and wetter than it would otherwise be. The Gulf Stream not only carries warm water to Europe's coast but also warms up the prevailing westerly winds that blow across the continent from the Atlantic Ocean.
Therefore, the average temperature throughout the year of Naples is 16 °C (61 °F), while it is only 12 °C (54 °F) in New York City, which is almost on the same latitude. Berlin, Germany; Calgary, Canada; and Irkutsk, in the Asian part of Russia, lie on around the same latitude; January temperatures in Berlin average around 8 °C (14 °F) higher than those in Calgary, and they are almost 22 °C (40 °F) higher than average temperatures in Irkutsk.[228] Similarly, northern parts of Scotland have a temperate marine climate. The yearly average temperature in the city of Inverness is 9.05 °C (48.29 °F). However, Churchill, Manitoba, Canada, is on roughly the same latitude and has an average temperature of −6.5 °C (20.3 °F), giving it a nearly subarctic climate.
In general, Europe is not just colder towards the north compared to the south, but it also gets colder from the west towards the east. The climate is more oceanic in the west and less so in the east. This can be illustrated by comparing average temperatures at locations roughly following the 60th, 55th, 50th, 45th and 40th latitudes; none of these locations is at high altitude, and most are close to the sea (location, approximate latitude and longitude, coldest month average, hottest month average and annual average temperatures in degrees C).[229]
It is notable how the average temperatures for the coldest month, as well as the annual average temperatures, drop from the west to the east. For instance, Edinburgh is warmer than Belgrade during the coldest month of the year, although Belgrade is around 10° of latitude farther south.
The geological history of Europe traces back to the formation of the Baltic Shield (Fennoscandia) and the Sarmatian craton, both around 2.25 billion years ago, followed by the Volgo–Uralia shield, the three together leading to the East European craton (≈ Baltica) which became a part of the supercontinent Columbia. Around 1.1 billion years ago, Baltica and Arctica (as part of the Laurentia block) became joined to Rodinia, later resplitting around 550 million years ago to reform as Baltica. Around 440 million years ago Euramerica was formed from Baltica and Laurentia; a further joining with Gondwana then leading to the formation of Pangea. Around 190 million years ago, Gondwana and Laurasia split apart due to the widening of the Atlantic Ocean. Finally, and very soon afterwards, Laurasia itself split up again, into Laurentia (North America) and the Eurasian continent. The land connection between the two persisted for a considerable time, via Greenland, leading to interchange of animal species. From around 50 million years ago, rising and falling sea levels have determined the actual shape of Europe, and its connections with continents such as Asia. Europe's present shape dates to the late Tertiary period about five million years ago.[232]
The geology of Europe is hugely varied and complex, and gives rise to the wide variety of landscapes found across the continent, from the Scottish Highlands to the rolling plains of Hungary.[234] Europe's most significant feature is the dichotomy between highland and mountainous Southern Europe and a vast, partially underwater, northern plain ranging from Ireland in the west to the Ural Mountains in the east. These two halves are separated by the mountain chains of the Pyrenees and Alps/Carpathians. The northern plains are delimited in the west by the Scandinavian Mountains and the mountainous parts of the British Isles. Major shallow water bodies submerging parts of the northern plains are the Celtic Sea, the North Sea, the Baltic Sea complex and Barents Sea.
The northern plain contains the old geological continent of Baltica, and so may be regarded geologically as the "main continent", while peripheral highlands and mountainous regions in the south and west constitute fragments from various other geological continents. Most of the older geology of western Europe existed as part of the ancient microcontinent Avalonia.
Having lived side by side with agricultural peoples for millennia, Europe's animals and plants have been profoundly affected by the presence and activities of man. With the exception of Fennoscandia and northern Russia, few areas of untouched wilderness are currently found in Europe, except for various national parks.
The main natural vegetation cover in Europe is mixed forest. The conditions for growth are very favourable. In the north, the Gulf Stream and North Atlantic Drift warm the continent. Southern Europe could be described as having a warm but mild climate. There are frequent summer droughts in this region. Mountain ridges also affect the conditions. Some of these (the Alps, the Pyrenees) are oriented east–west and allow the wind to carry large masses of water from the ocean into the interior. Others are oriented south–north (the Scandinavian Mountains, Dinarides, Carpathians, Apennines), and because the rain falls primarily on the side of the mountains that is oriented towards the sea, forests grow well on this side, while on the other side the conditions are much less favourable. Few corners of mainland Europe have not been grazed by livestock at some point in time, and the cutting down of the pre-agricultural forest habitat caused disruption to the original plant and animal ecosystems.
Probably 80 to 90 percent of Europe was once covered by forest.[235] It stretched from the Mediterranean Sea to the Arctic Ocean. Although over half of Europe's original forests disappeared through the centuries of deforestation, Europe still has over one quarter of its land area as forest, such as the broadleaf and mixed forests, taiga of Scandinavia and Russia, mixed rainforests of the Caucasus and the Cork oak forests in the western Mediterranean. During recent times, deforestation has been slowed and many trees have been planted. However, in many cases monoculture plantations of conifers have replaced the original mixed natural forest, because these grow quicker. The plantations now cover vast areas of land, but offer poorer habitats for many European forest dwelling species which require a mixture of tree species and diverse forest structure. The amount of natural forest in Western Europe is just 2–3% or less, in European Russia 5–10%. The country with the smallest percentage of forested area is Iceland (1%), while the most forested country is Finland (77%).[236]
In temperate Europe, mixed forest with both broadleaf and coniferous trees dominates. The most important species in central and western Europe are beech and oak. In the north, the taiga is a mixed spruce–pine–birch forest; further north within Russia and extreme northern Scandinavia, the taiga gives way to tundra as the Arctic is approached. In the Mediterranean, many olive trees have been planted, as they are very well adapted to its arid climate; Mediterranean cypress is also widely planted in southern Europe. The semi-arid Mediterranean region hosts much scrub forest. A narrow east–west tongue of Eurasian grassland (the steppe) extends westwards from southern Russia and Ukraine, ending in Hungary, and transitions into taiga to the north.
Glaciation during the most recent ice age and the presence of man affected the distribution of European fauna. In many parts of Europe most large animals and top predator species have been hunted to extinction. The woolly mammoth was extinct before the end of the Neolithic period. Today wolves (carnivores) and bears (omnivores) are endangered. Once they were found in most parts of Europe; however, deforestation and hunting caused these animals to withdraw further and further. By the Middle Ages the bears' habitats were limited to more or less inaccessible mountains with sufficient forest cover. Today, the brown bear lives primarily in the Balkan peninsula, Scandinavia, and Russia; a small number also persist in other countries across Europe (Austria, the Pyrenees etc.), but in these areas brown bear populations are fragmented and marginalised because of the destruction of their habitat. In addition, polar bears may be found on Svalbard, a Norwegian archipelago far north of Scandinavia. The wolf, the second largest predator in Europe after the brown bear, can be found primarily in Central and Eastern Europe and in the Balkans, with a handful of packs in pockets of Western Europe (Scandinavia, Spain, etc.).
Other important European fauna include the European wild cat, foxes (especially the red fox), the jackal, and various species of martens, hedgehogs, reptiles (such as vipers and grass snakes), amphibians, and birds (owls, hawks and other birds of prey).
Important European herbivores are snails, larvae, fish, various birds, and mammals such as rodents, deer and roe deer, boars, and, in the mountains, marmots, steinbocks and chamois, among others. A number of insects, such as the small tortoiseshell butterfly, add to the biodiversity.[239]
The extinction of the dwarf hippos and dwarf elephants has been linked to the earliest arrival of humans on the islands of the Mediterranean.[240]
Sea creatures are also an important part of European flora and fauna. The sea flora is mainly phytoplankton. Important animals that live in European seas are zooplankton, molluscs, echinoderms, different crustaceans, squids and octopuses, fish, dolphins, and whales.
Biodiversity is protected in Europe through the Council of Europe's Bern Convention, which has also been signed by the European Community as well as non-European states.
The political map of Europe is substantially derived from the re-organisation of Europe following the Napoleonic Wars in 1815. The prevalent form of government in Europe is parliamentary democracy, in most cases in the form of a republic; in 1815, the prevalent form of government was still monarchy. Europe's remaining eleven monarchies[241] are constitutional.
European integration is the process of political, legal, economic (and in some cases social and cultural) integration of European states, as it has been pursued by the powers sponsoring the Council of Europe since the end of World War II.
The European Union has been the focus of economic integration on the continent since its foundation in 1993. More recently, the Eurasian Economic Union has been established as a counterpart comprising former Soviet states.
Twenty-seven European states are members of the politico-economic European Union, 26 of the border-free Schengen Area and 19 of the monetary union Eurozone. Among the smaller European organisations are the Nordic Council, the Benelux, the Baltic Assembly and the Visegrád Group.
The list below includes all entities falling even partially under any of the various common definitions of Europe, geographically or politically.
Within the above-mentioned states are several de facto independent countries with limited to no international recognition. None of them are members of the UN:
Several dependencies and similar territories with broad autonomy are also found within or in close proximity to Europe. These include Åland (a region of Finland), two constituent countries of the Kingdom of Denmark (other than Denmark itself), three Crown dependencies, and two British Overseas Territories. Svalbard is also included due to its unique status within Norway, although it is not autonomous. Not included are the three countries of the United Kingdom with devolved powers and the two Autonomous Regions of Portugal, which, despite having a unique degree of autonomy, are not largely self-governing in matters other than international affairs. Areas with little more than a unique tax status, such as Heligoland and the Canary Islands, are also not included for this reason.
The economy of Europe is currently the largest on Earth, and Europe is the richest region as measured by assets under management, with over $32.7 trillion compared to North America's $27.1 trillion in 2008.[243] In 2009 Europe remained the wealthiest region. Its $37.1 trillion in assets under management represented one-third of the world's wealth. It was one of several regions where wealth surpassed its pre-crisis year-end peak.[244] As with other continents, Europe has a large variation of wealth among its countries. The richer states tend to be in the West; some of the Central and Eastern European economies are still emerging from the collapse of the Soviet Union and the breakup of Yugoslavia.
The European Union, a political entity composed of 27 European states, comprises the largest single economic area in the world. 19 EU countries share the euro as a common currency.
Five European countries rank among the world's largest national economies by GDP (PPP). These are (with ranks according to the CIA): Germany (6), Russia (7), the United Kingdom (10), France (11), and Italy (13).[245]
There is huge disparity in income between European countries. The richest in terms of GDP per capita is Monaco, at US$172,676 per capita (2009), and the poorest is Moldova, with a GDP per capita of US$1,631 (2010).[246] Monaco is the richest country in the world in terms of GDP per capita, according to the World Bank report.
As a whole, Europe's GDP per capita is US$21,767 according to a 2016 International Monetary Fund assessment.[247]
Capitalism has been dominant in the Western world since the end of feudalism.[248] From Britain, it gradually spread throughout Europe.[249] The Industrial Revolution started in Europe, specifically the United Kingdom in the late 18th century,[250] and the 19th century saw Western Europe industrialise. Economies were disrupted by World War I but by the beginning of World War II they had recovered and were having to compete with the growing economic strength of the United States. World War II, again, damaged much of Europe's industries.
After World War II the economy of the UK was in a state of ruin,[251] and it continued to suffer relative economic decline in the following decades.[252] Italy was also in a poor economic condition but regained a high level of growth by the 1950s. West Germany recovered quickly and had doubled production from pre-war levels by the 1950s.[253] France also staged a remarkable comeback, enjoying rapid growth and modernisation; later on Spain, under the leadership of Franco, also recovered, recording unprecedented economic growth beginning in the 1960s in what is called the Spanish miracle.[254] The majority of Central and Eastern European states came under the control of the Soviet Union and thus were members of the Council for Mutual Economic Assistance (COMECON).[255]
The states which retained a free-market system were given a large amount of aid by the United States under the Marshall Plan.[256] The western states moved to link their economies together, providing the basis for the EU and increasing cross-border trade. This helped them to enjoy rapidly improving economies, while the states in COMECON struggled, in large part due to the cost of the Cold War. By 1990, the European Community had expanded from 6 founding members to 12. The emphasis placed on resurrecting the West German economy led to it overtaking the UK as Europe's largest economy.
With the fall of communism in Central and Eastern Europe in 1991, the post-socialist states began free market reforms.
After East and West Germany were reunited in 1990, the economy of West Germany struggled as it had to support and largely rebuild the infrastructure of East Germany.
By the turn of the millennium, the EU dominated the economy of Europe, comprising the five largest European economies of the time: Germany, the United Kingdom, France, Italy, and Spain. In 1999, 11 of the 15 members of the EU joined the Eurozone, replacing their former national currencies with the common euro; Greece followed in 2001. The three that chose to remain outside the Eurozone were the United Kingdom, Denmark, and Sweden. The European Union is now the largest economy in the world.[257]
Figures released by Eurostat in 2009 confirmed that the Eurozone had gone into recession in 2008.[258] It impacted much of the region.[259] In 2010, fears of a sovereign debt crisis[260] developed concerning some countries in Europe, especially Greece, Ireland, Spain, and Portugal.[261] As a result, measures were taken, especially for Greece, by the leading countries of the Eurozone.[262] The EU-27 unemployment rate was 10.3% in 2012.[263] For those aged 15–24 it was 22.4%.[263]
In 2017, the population of Europe was estimated to be 742 million according to the 2019 revision of the World Population Prospects,[2][3] which is slightly more than one-ninth of the world's population. This number includes Siberia (about 38 million people) but excludes European Turkey (about 12 million).
A century ago, Europe had nearly a quarter of the world's population.[265] The population of Europe has grown in the past century, but in other areas of the world (in particular Africa and Asia) the population has grown far more quickly.[266] Among the continents, Europe has a relatively high population density, second only to Asia. Most of Europe is in a mode of sub-replacement fertility, which means that each new generation is less populous than the one before.
The most densely populated country in Europe (and in the world) is the microstate of Monaco.
Pan and Pfeil (2004) count 87 distinct "peoples of Europe", of which 33 form the majority population in at least one sovereign state, while the remaining 54 constitute ethnic minorities.[267]
According to UN population projection, Europe's population may fall to about 7% of world population by 2050, or 653 million people (medium variant, 556 to 777 million in low and high variants, respectively).[266] Within this context, significant disparities exist between regions in relation to fertility rates. The average number of children per female of child-bearing age is 1.52.[268] According to some sources,[269] this rate is higher among Muslims in Europe. The UN predicts a steady population decline in Central and Eastern Europe as a result of emigration and low birth rates.[270]
Europe is home to the highest number of migrants of all global regions at 70.6 million people, the IOM's report said.[271] In 2005, the EU had an overall net gain from immigration of 1.8 million people. This accounted for almost 85% of Europe's total population growth.[272] In 2008, 696,000 persons were given citizenship of an EU27 member state, a decrease from 707,000 the previous year.[273] In 2017, approximately 825,000 persons acquired citizenship of an EU28 member state.[274] 2.4 million immigrants from non-EU countries entered the EU in 2017.[275]
Early modern emigration from Europe began with Spanish and Portuguese settlers in the 16th century,[276][277] and French and English settlers in the 17th century.[278] But numbers remained relatively small until waves of mass emigration in the 19th century, when millions of poor families left Europe.[279]
Today, large populations of European descent are found on every continent. European ancestry predominates in North America and, to a lesser degree, in South America (particularly in Uruguay, Argentina, Chile and Brazil, while most of the other Latin American countries also have a considerable population of European origins). Australia and New Zealand have large European-derived populations. Africa has no countries with European-derived majorities (with the possible exceptions of Cape Verde and São Tomé and Príncipe, depending on context), but there are significant minorities, such as the White South Africans in South Africa. In Asia, European-derived populations (specifically Russians) predominate in North Asia and some parts of northern Kazakhstan.[280]
Europe has about 225 indigenous languages,[281] mostly falling within three Indo-European language groups: the Romance languages, derived from the Latin of the Roman Empire; the Germanic languages, whose ancestor language came from southern Scandinavia; and the Slavic languages.[232] Slavic languages are mostly spoken in Southern, Central and Eastern Europe. Romance languages are spoken primarily in Western and Southern Europe, as well as in Switzerland in Central Europe and Romania and Moldova in Eastern Europe. Germanic languages are spoken in Western, Northern and Central Europe as well as in Gibraltar and Malta in Southern Europe.[232] Languages in adjacent areas show significant overlaps (in English, for example). Other Indo-European languages outside the three main groups include the Baltic group (Latvian and Lithuanian), the Celtic group (Irish, Scottish Gaelic, Manx, Welsh, Cornish, and Breton[232]), Greek, Armenian, and Albanian.
A distinct non-Indo-European family of Uralic languages (Estonian, Finnish, Hungarian, Erzya, Komi, Mari, Moksha, and Udmurt) is spoken mainly in Estonia, Finland, Hungary, and parts of Russia. Turkic languages include Azerbaijani, Kazakh and Turkish, in addition to smaller languages in Eastern and Southeast Europe (Balkan Gagauz Turkish, Bashkir, Chuvash, Crimean Tatar, Karachay-Balkar, Kumyk, Nogai, and Tatar). Kartvelian languages (Georgian, Mingrelian, and Svan) are spoken primarily in Georgia. Two other language families reside in the North Caucasus (termed Northeast Caucasian, most notably including Chechen, Avar, and Lezgin; and Northwest Caucasian, most notably including Adyghe). Maltese is the only Semitic language that is official within the EU, while Basque is the only European language isolate.
Multilingualism and the protection of regional and minority languages are recognised political goals in Europe today. The Council of Europe Framework Convention for the Protection of National Minorities and the Council of Europe's European Charter for Regional or Minority Languages set up a legal framework for language rights in Europe.
The four largest cities of Europe are Istanbul, Moscow, Paris and London, each of which has over 10 million residents,[8] and as such they have been described as megacities.[282] While Istanbul has the highest total population, one third lies on the Asian side of the Bosporus, making Moscow the most populous city entirely in Europe.
The next largest cities in order of population are Saint Petersburg, Madrid, Berlin and Rome, each having over 3 million residents.[8]
When considering the commuter belts or metropolitan areas, within the EU (for which comparable data is available) London covers the largest population, followed in order by Paris, Madrid, Barcelona, Berlin, the Ruhr area, Rome, Milan, Athens and Warsaw.[283]
"Europe" as a cultural concept is substantially derived from the shared heritage of the Roman Empire and its culture.
The boundaries of Europe were historically understood as those of Christendom (or more specifically Latin Christendom), as established or defended throughout the medieval and early modern history of Europe, especially against Islam, as in the Reconquista and the Ottoman wars in Europe.[284]
This shared cultural heritage is combined with overlapping indigenous national cultures and folklores, roughly divided into Slavic, Latin (Romance) and Germanic, but with several components not part of any of these groups (notably Greek and Celtic).
Cultural contact and mixture characterise much of European regional cultures; Kaplan (2014) describes Europe as "embracing maximum cultural diversity at minimal geographical distances".[285]
Different cultural events are organised in Europe, with the aim of bringing different cultures closer together and raising awareness of their importance, such as the European Capital of Culture, the European Region of Gastronomy, the European Youth Capital and the European Capital of Sport.
Historically, religion in Europe has been a major influence on European art, culture, philosophy and law. There are six patron saints of Europe venerated in Roman Catholicism, five of them declared by Pope John Paul II between 1980 and 1999: Saints Cyril and Methodius, Saint Bridget of Sweden, Catherine of Siena and Saint Teresa Benedicta of the Cross (Edith Stein). The exception is Benedict of Nursia, who had already been declared "Patron Saint of all Europe" by Pope Paul VI in 1964.[286]
Religion in Europe according to the Global Religious Landscape survey by the Pew Forum, 2012[287]
The largest religion in Europe is Christianity, with 76.2% of Europeans considering themselves Christians,[288] including Catholic, Eastern Orthodox and various Protestant denominations. Among Protestants, the most popular are historically state-supported European denominations such as Lutheranism, Anglicanism and the Reformed faith. Other Protestant denominations, such as the historically significant Anabaptists, were never supported by any state and thus are not as widespread, nor are those that arrived more recently from the United States, such as Pentecostalism, Adventism, Methodism, the Baptists and various Evangelical Protestants, although Methodism and the Baptists both have European origins. The notion of "Europe" and the "Western World" has been intimately connected with the concept of "Christianity and Christendom"; many even credit Christianity with being the link that created a unified European identity.[289]
Christianity, including the Roman Catholic Church,[290][291] has played a prominent role in the shaping of Western civilisation since at least the 4th century,[292][293][294][295] and for at least a millennium and a half, Europe has been nearly equivalent to Christian culture, even though the religion was inherited from the Middle East. Christian culture was the predominant force in western civilisation, guiding the course of philosophy, art, and science.[296][297]
The second most popular religion is Islam (6%)[298] concentrated mainly in the Balkans and Eastern Europe (Bosnia and Herzegovina, Albania, Kosovo, Kazakhstan, North Cyprus, Turkey, Azerbaijan, North Caucasus, and the Volga-Ural region). Other religions, including Judaism, Hinduism, and Buddhism are minority religions (though Tibetan Buddhism is the majority religion of Russia's Republic of Kalmykia). The 20th century saw the revival of Neopaganism through movements such as Wicca and Druidry.
Europe has become a relatively secular continent, with an increasing number and proportion of irreligious, atheist and agnostic people, who make up about 18.2% of Europe's population,[299] currently the largest secular population in the Western world. There are a particularly high number of self-described non-religious people in the Czech Republic, Estonia, Sweden, former East Germany, and France.[300]
Sport in Europe tends to be highly organised, with many sports having professional leagues.
en/1906.html.txt
ADDED
@@ -0,0 +1 @@
Europa may refer to:
en/1907.html.txt
ADDED
@@ -0,0 +1,284 @@
Europe is a continent located entirely in the Northern Hemisphere and mostly in the Eastern Hemisphere. It comprises the westernmost part of Eurasia and is bordered by the Arctic Ocean to the north, the Atlantic Ocean to the west, the Mediterranean Sea to the south, and Asia to the east. Europe is commonly considered to be separated from Asia by the watershed of the Ural Mountains, the Ural River, the Caspian Sea, the Greater Caucasus, the Black Sea, and the waterways of the Turkish Straits.[9] Although much of this border is over land, Europe is generally accorded the status of a full continent because of its great physical size and the weight of history and tradition.
Europe covers about 10,180,000 square kilometres (3,930,000 sq mi), or 2% of the Earth's surface (6.8% of land area), making it the sixth largest continent. Politically, Europe is divided into about fifty sovereign states, of which Russia is the largest and most populous, spanning 39% of the continent and comprising 15% of its population. Europe had a total population of about 741 million (about 11% of the world population) as of 2018[update].[2][3] The European climate is largely affected by warm Atlantic currents that temper winters and summers on much of the continent, even at latitudes along which the climate in Asia and North America is severe. Further from the sea, seasonal differences are more noticeable than close to the coast.
Europe, in particular ancient Greece and ancient Rome, was the birthplace of Western civilisation.[10][11][12] The fall of the Western Roman Empire in 476 AD and the subsequent Migration Period marked the end of ancient history and the beginning of the Middle Ages. Renaissance humanism, exploration, art and science led to the modern era. Since the Age of Discovery, Europe has played a predominant role in global affairs. Between the 16th and 20th centuries, European powers at various times colonised the Americas, almost all of Africa and Oceania, and the majority of Asia.
The Age of Enlightenment, the subsequent French Revolution and the Napoleonic Wars shaped the continent culturally, politically and economically from the end of the 17th century until the first half of the 19th century. The Industrial Revolution, which began in Great Britain at the end of the 18th century, gave rise to radical economic, cultural and social change in Western Europe and eventually the wider world. Both world wars took place for the most part in Europe, contributing to a decline in Western European dominance in world affairs by the mid-20th century as the Soviet Union and the United States took prominence.[13] During the Cold War, Europe was divided along the Iron Curtain between NATO in the West and the Warsaw Pact in the East, until the revolutions of 1989 and fall of the Berlin Wall.
In 1949 the Council of Europe was founded with the idea of unifying Europe to achieve common goals. Further European integration by some states led to the formation of the European Union (EU), a separate political entity that lies between a confederation and a federation.[14] The EU originated in Western Europe but has been expanding eastward since the fall of the Soviet Union in 1991. The currency of most countries of the European Union, the euro, is the most commonly used among Europeans; and the EU's Schengen Area abolishes border and immigration controls between most of its member states.
In classical Greek mythology, Europa (Ancient Greek: Εὐρώπη, Eurṓpē) was a Phoenician princess. One view is that her name derives from the ancient Greek elements εὐρύς (eurús), "wide, broad" and ὤψ (ōps, gen. ὠπός, ōpós) "eye, face, countenance", hence their composite Eurṓpē would mean "wide-gazing" or "broad of aspect".[15][16][17] Broad has been an epithet of Earth herself in the reconstructed Proto-Indo-European religion and the poetry devoted to it.[15] An alternative view is that of R.S.P. Beekes, who has argued in favour of a Pre-Indo-European origin for the name, explaining that a derivation from ancient Greek eurus would yield a different toponym than Europa. Beekes has located toponyms related to that of Europa in the territory of ancient Greece, and in localities like that of Europos in ancient Macedonia.[18]
There have been attempts to connect Eurṓpē to a Semitic term for "west", this being either Akkadian erebu meaning "to go down, set" (said of the sun) or Phoenician 'ereb "evening, west",[19] which is at the origin of Arabic Maghreb and Hebrew ma'arav. Michael A. Barry finds the mention of the word Ereb on an Assyrian stele with the meaning of "night, [the country of] sunset", in opposition to Asu "[the country of] sunrise", i.e. Asia. The same naming motive according to "cartographic convention" appears in Greek Ἀνατολή (Anatolḗ "[sun] rise", "east", hence Anatolia).[20] Martin Litchfield West stated that "phonologically, the match between Europa's name and any form of the Semitic word is very poor",[21] while Beekes considers a connection to Semitic languages improbable.[18] Next to these hypotheses there is also a Proto-Indo-European root *h1regʷos, meaning "darkness", which also produced Greek Erebus.
Most major world languages use words derived from Eurṓpē or Europa to refer to the continent. Chinese, for example, uses the word Ōuzhōu (歐洲/欧洲), which is an abbreviation of the transliterated name Ōuluóbā zhōu (歐羅巴洲) (zhōu means "continent"); a similar Chinese-derived term Ōshū (欧州) is also sometimes used in Japanese such as in the Japanese name of the European Union, Ōshū Rengō (欧州連合), despite the katakana Yōroppa (ヨーロッパ) being more commonly used. In some Turkic languages the originally Persian name Frangistan ("land of the Franks") is used casually in referring to much of Europe, besides official names such as Avrupa or Evropa.[22]
Map of Europe showing one of the most commonly used continental boundaries.[23] Key: blue – states which straddle the border between Europe and Asia; green – countries not geographically in Europe, but closely associated with the continent.
The prevalent definition of Europe as a geographical term has been in use since the mid-19th century.
Europe is taken to be bounded by large bodies of water to the north, west and south; Europe's limits to the east and northeast are usually taken to be the Ural Mountains, the Ural River, and the Caspian Sea; to the southeast, the Caucasus Mountains, the Black Sea and the waterways connecting the Black Sea to the Mediterranean Sea.[24]
Islands are generally grouped with the nearest continental landmass, hence Iceland is considered to be part of Europe, while the nearby island of Greenland is usually assigned to North America, although politically belonging to Denmark. Nevertheless, there are some exceptions based on sociopolitical and cultural differences. Cyprus is closest to Anatolia (or Asia Minor), but is considered part of Europe politically and it is a member state of the EU. Malta was considered an island of Northwest Africa for centuries, but now it is considered to be part of Europe as well.[25]
"Europe" as used specifically in British English may also refer to Continental Europe exclusively.[26]
The term "continent" usually implies the physical geography of a large land mass completely or almost completely surrounded by water at its borders. However the Europe-Asia part of the border is somewhat arbitrary and inconsistent with this definition because of its partial adherence to the Ural and Caucasus Mountains rather than a series of partly joined waterways suggested by cartographer Herman Moll in 1715. These water divides extend with a few relatively small interruptions (compared to the aforementioned mountain ranges) from the Turkish straits running into the Mediterranean Sea to the upper part of the Ob River that drains into the Arctic Ocean. Prior to the adoption of the current convention that includes mountain divides, the border between Europe and Asia had been redefined several times since its first conception in classical antiquity, but always as a series of rivers, seas, and straits that were believed to extend an unknown distance east and north from the Mediterranean Sea without the inclusion of any mountain ranges.
The current division of Eurasia into two continents now reflects East-West cultural, linguistic and ethnic differences which vary on a spectrum rather than with a sharp dividing line. The geographic border between Europe and Asia does not follow any state boundaries and now only follows a few bodies of water. Turkey is generally considered a transcontinental country divided entirely by water, while Russia and Kazakhstan are only partly divided by waterways. France, Portugal, the Netherlands, Spain and the United Kingdom are also transcontinental (or more properly, intercontinental, when oceans or large seas are involved) in that their main land areas are in Europe while pockets of their territories are located on other continents separated from Europe by large bodies of water. Spain, for example, has territories south of the Mediterranean Sea namely Ceuta and Melilla which are parts of Africa and share a border with Morocco. According to the current convention, Georgia and Azerbaijan are transcontinental countries where waterways have been completely replaced by mountains as the divide between continents.
The first recorded usage of Eurṓpē as a geographic term is in the Homeric Hymn to Delian Apollo, in reference to the western shore of the Aegean Sea. As a name for a part of the known world, it is first used in the 6th century BC by Anaximander and Hecataeus. Anaximander placed the boundary between Asia and Europe along the Phasis River (the modern Rioni River on the territory of Georgia) in the Caucasus, a convention still followed by Herodotus in the 5th century BC.[27] Herodotus mentioned that the world had been divided by unknown persons into three parts, Europe, Asia, and Libya (Africa), with the Nile and the Phasis forming their boundaries—though he also states that some considered the River Don, rather than the Phasis, as the boundary between Europe and Asia.[28] Europe's eastern frontier was defined in the 1st century by geographer Strabo at the River Don.[29] The Book of Jubilees described the continents as the lands given by Noah to his three sons; Europe was defined as stretching from the Pillars of Hercules at the Strait of Gibraltar, separating it from Northwest Africa, to the Don, separating it from Asia.[30]
The convention received by the Middle Ages and surviving into modern usage is that of the Roman era, used by Roman-era authors such as Posidonius,[31] Strabo[32] and Ptolemy,[33] who took the Tanais (the modern Don River) as the boundary.
The term "Europe" is first used for a cultural sphere in the Carolingian Renaissance of the 9th century. From that time, the term designated the sphere of influence of the Western Church, as opposed to both the Eastern Orthodox churches and to the Islamic world.
A cultural definition of Europe as the lands of Latin Christendom coalesced in the 8th century, signifying the new cultural condominium created through the confluence of Germanic traditions and Christian-Latin culture, defined partly in contrast with Byzantium and Islam, and limited to northern Iberia, the British Isles, France, Christianised western Germany, the Alpine regions and northern and central Italy.[34] The concept is one of the lasting legacies of the Carolingian Renaissance: Europa often figures in the letters of Charlemagne's court scholar, Alcuin.[35]
The question of defining a precise eastern boundary of Europe arises in the Early Modern period, as the eastern extension of Muscovy began to include North Asia. Throughout the Middle Ages and into the 18th century, the traditional division of the landmass of Eurasia into two continents, Europe and Asia, followed Ptolemy, with the boundary following the Turkish Straits, the Black Sea, the Kerch Strait, the Sea of Azov and the Don (ancient Tanais). But maps produced during the 16th to 18th centuries tended to differ in how to continue the boundary beyond the Don bend at Kalach-na-Donu (where it is closest to the Volga, now joined with it by the Volga–Don Canal), into territory not described in any detail by the ancient geographers. Around 1715, Herman Moll produced a map showing the northern part of the Ob River and the Irtysh River, a major tributary of the former, as components of a series of partly joined waterways taking the boundary between Europe and Asia from the Turkish Straits and the Don River all the way to the Arctic Ocean. In 1721, he produced a more up-to-date map that was easier to read. However, his idea to use major rivers almost exclusively as the line of demarcation was never taken up by the Russian Empire.
Four years later, in 1725, Philip Johan von Strahlenberg was the first to depart from the classical Don boundary by proposing that mountain ranges could be included as boundaries between continents whenever there were deemed to be no suitable waterways, the Ob and Irtysh Rivers notwithstanding. He drew a new line along the Volga, following the Volga north until the Samara Bend, along Obshchy Syrt (the drainage divide between Volga and Ural) and then north along Ural Mountains.[36] This was adopted by the Russian Empire, and introduced the convention that would eventually become commonly accepted, but not without criticism by many modern analytical geographers like Halford Mackinder who saw little validity in the Ural Mountains as a boundary between continents.[37]
The mapmakers continued to differ on the boundary between the lower Don and Samara well into the 19th century. The 1745 atlas published by the Russian Academy of Sciences has the boundary follow the Don beyond Kalach as far as Serafimovich before cutting north towards Arkhangelsk, while other 18th- to 19th-century mapmakers such as John Cary followed Strahlenberg's prescription. To the south, the Kuma–Manych Depression was identified circa 1773 by a German naturalist, Peter Simon Pallas, as a valley that once connected the Black Sea and the Caspian Sea,[38][39] and subsequently was proposed as a natural boundary between continents.
By the mid-19th century, there were three main conventions, one following the Don, the Volga–Don Canal and the Volga, the other following the Kuma–Manych Depression to the Caspian and then the Ural River, and the third abandoning the Don altogether, following the Greater Caucasus watershed to the Caspian. The question was still treated as a "controversy" in geographical literature of the 1860s, with Douglas Freshfield advocating the Caucasus crest boundary as the "best possible", citing support from various "modern geographers".[40]
In Russia and the Soviet Union, the boundary along the Kuma–Manych Depression was the most commonly used as early as 1906.[41] In 1958, the Soviet Geographical Society formally recommended that the boundary between Europe and Asia be drawn in textbooks from Baydaratskaya Bay, on the Kara Sea, along the eastern foot of the Ural Mountains, then following the Ural River until the Mugodzhar Hills, and then the Emba River and Kuma–Manych Depression,[42] thus placing the Caucasus entirely in Asia and the Urals entirely in Europe.[43] However, most geographers in the Soviet Union favoured the boundary along the Caucasus crest,[44] and this became the common convention in the later 20th century, although the Kuma–Manych boundary remained in use in some 20th-century maps.
Homo erectus georgicus, which lived roughly 1.8 million years ago in Georgia, is the earliest hominid to have been discovered in Europe.[45] Other hominid remains, dating back roughly 1 million years, have been discovered in Atapuerca, Spain.[46] Neanderthal man (named after the Neandertal valley in Germany) appeared in Europe 150,000 years ago (115,000 years ago it was already present in Poland[47]) and disappeared from the fossil record about 28,000 years ago, with their final refuge being present-day Portugal. The Neanderthals were supplanted by modern humans (Cro-Magnons), who appeared in Europe around 43,000 to 40,000 years ago.[48] The earliest sites in Europe, dated to 48,000 years ago, are Riparo Mochi (Italy), Geissenklösterle (Germany), and Isturitz (France).[49][50]
The European Neolithic period—marked by the cultivation of crops and the raising of livestock, increased numbers of settlements and the widespread use of pottery—began around 7000 BC in Greece and the Balkans, probably influenced by earlier farming practices in Anatolia and the Near East.[51] It spread from the Balkans along the valleys of the Danube and the Rhine (Linear Pottery culture) and along the Mediterranean coast (Cardial culture). Between 4500 and 3000 BC, these central European neolithic cultures developed further to the west and the north, transmitting newly acquired skills in producing copper artifacts. In Western Europe the Neolithic period was characterised not by large agricultural settlements but by field monuments, such as causewayed enclosures, burial mounds and megalithic tombs.[52] The Corded Ware cultural horizon flourished at the transition from the Neolithic to the Chalcolithic. During this period giant megalithic monuments, such as the Megalithic Temples of Malta and Stonehenge, were constructed throughout Western and Southern Europe.[53][54]
The European Bronze Age began c. 3200 BC in Greece with the Minoan civilisation on Crete, the first advanced civilisation in Europe.[55] The Minoans were followed by the Myceneans, who collapsed suddenly around 1200 BC, ushering in the European Iron Age.[56] Iron Age colonisation by the Greeks and Phoenicians gave rise to early Mediterranean cities. Early Iron Age Italy and Greece from around the 8th century BC gradually gave rise to historical Classical antiquity, whose beginning is sometimes dated to 776 BC, the year of the first Olympic Games.[57]
Ancient Greece was the founding culture of Western civilisation. Western democratic and rationalist culture are often attributed to Ancient Greece.[58] The Greek city-state, the polis, was the fundamental political unit of classical Greece.[58] In 508 BC, Cleisthenes instituted the world's first democratic system of government in Athens.[59] The Greek political ideals were rediscovered in the late 18th century by European philosophers and idealists. Greece also generated many cultural contributions: in philosophy, humanism and rationalism under Aristotle, Socrates and Plato; in history with Herodotus and Thucydides; in dramatic and narrative verse, starting with the epic poems of Homer;[60] in drama with Sophocles and Euripides; in medicine with Hippocrates and Galen; and in science with Pythagoras, Euclid and Archimedes.[61][62][63] In the course of the 5th century BC, several of the Greek city-states would ultimately check the Achaemenid Persian advance in Europe through the Greco-Persian Wars, considered a pivotal moment in world history,[64] as the 50 years of peace that followed are known as the Golden Age of Athens, the seminal period of ancient Greece that laid many of the foundations of Western civilisation. The conquests of Alexander the Great brought the Middle East into the Greek cultural sphere.
Greece was followed by Rome, which left its mark on law, politics, language, engineering, architecture, government and many more key aspects in western civilisation.[58] Rome began as a small city-state, founded, according to tradition, in 753 BC as an elective kingdom. Tradition has it that there were seven kings of Rome with Romulus, the founder, being the first and Lucius Tarquinius Superbus falling to a republican uprising led by Lucius Junius Brutus, but modern scholars doubt many of those stories and even the Romans themselves acknowledged that the sack of Rome by the Gauls in 387 BC destroyed many sources on their early history.
By 200 BC, Rome had conquered Italy, and over the following two centuries it conquered Greece and Hispania (Spain and Portugal), the North African coast, much of the Middle East, Gaul (France and Belgium), and Britannia (England and Wales). The forty-year conquest of Britannia left 250,000 Britons dead, and emperor Antoninus Pius built the Antonine Wall across Scotland's Central Belt to defend the province from the Caledonians; it marked the northernmost border of the Roman Empire.
The Roman Republic ended in 27 BC, when Augustus proclaimed the Roman Empire. The two centuries that followed are known as the pax romana, a period of unprecedented peace, prosperity, and political stability in most of Europe.[65] The empire continued to expand under emperors such as Antoninus Pius and Marcus Aurelius, who spent time on the Empire's northern border fighting Germanic, Pictish and Scottish tribes.[66][67] Christianity was legalised by Constantine I in 313 AD after three centuries of imperial persecution. Constantine also permanently moved the capital of the empire from Rome to the city of Byzantium, which was renamed Constantinople in his honour (modern-day Istanbul) in 330 AD. Christianity became the sole official religion of the empire in 380 AD, and in 391–392 AD, the emperor Theodosius outlawed pagan religions.[68] This is sometimes considered to mark the end of antiquity; alternatively antiquity is considered to end with the fall of the Western Roman Empire in 476 AD; the closure of the pagan Platonic Academy of Athens in 529 AD;[69] or the rise of Islam in the early 7th century AD.
During the decline of the Roman Empire, Europe entered a long period of change arising from what historians call the "Age of Migrations". There were numerous invasions and migrations amongst the Goths, Vandals, Huns, Franks, Angles, Saxons, Slavs, Avars, Bulgars and, later on, the Vikings, Pechenegs, Cumans and Magyars.[65] Germanic tribes settled in the former Roman provinces of England and Spain, while other groups pressed into northern France and Italy.[70] Renaissance thinkers such as Petrarch would later refer to this as the "Dark Ages".[71] Isolated monastic communities were the only places to safeguard and compile written knowledge accumulated previously; apart from this very few written records survive and much literature, philosophy, mathematics, and other thinking from the classical period disappeared from Western Europe though they were preserved in the east, in the Byzantine Empire.[72]
While the Roman empire in the west continued to decline, Roman traditions and the Roman state remained strong in the predominantly Greek-speaking Eastern Roman Empire, also known as the Byzantine Empire. During most of its existence, the Byzantine Empire was the most powerful economic, cultural, and military force in Europe. Emperor Justinian I presided over Constantinople's first golden age: he established a legal code that forms the basis of many modern legal systems, funded the construction of the Hagia Sophia, and brought the Christian church under state control.[73] He reconquered North Africa, southern Spain, and Italy.[74] The Ostrogothic Kingdom, which sought to preserve Roman culture and adopt its values, later changed course and became anti-Constantinople, so Justinian waged war against the Ostrogoths for 30 years and destroyed much of urban life in Italy, and reduced agriculture to subsistence farming.[75]
From the 7th century onwards, as the Byzantines and the neighbouring Sasanid Persians were severely weakened by the protracted, centuries-long and frequent Byzantine–Sasanian wars, the Muslim Arabs began to make inroads into historically Roman territory, taking the Levant and North Africa and making inroads into Asia Minor. In the mid-7th century AD, following the Muslim conquest of Persia, Islam penetrated into the Caucasus region.[76] Over the next centuries Muslim forces took Cyprus, Malta, Crete, Sicily and parts of southern Italy.[77] Between 711 and 726, the Umayyad Caliphate conquered the Visigothic Kingdom, which occupied the Iberian Peninsula and Septimania (southwestern France). The unsuccessful second siege of Constantinople (717) weakened the Umayyad dynasty and reduced its prestige. The Umayyads were then defeated by the Frankish leader Charles Martel at the Battle of Poitiers in 732, which ended their northward advance. In the remote regions of north-western Iberia and the middle Pyrenees the power of the Muslims in the south was scarcely felt. It was here that the foundations of the Christian kingdoms of Asturias, Leon and Galicia were laid and from where the reconquest of the Iberian Peninsula would start. However, no coordinated attempt would be made to drive the Moors out. The Christian kingdoms were mainly focused on their own internal power struggles. As a result, the Reconquista took the greater part of eight hundred years, in which period a long list of Alfonsos, Sanchos, Ordoños, Ramiros, Fernandos and Bermudos would be fighting their Christian rivals as much as the Muslim invaders.
One of the biggest threats during the Dark Ages were the Vikings, Norse seafarers who raided, raped, and pillaged across the Western world. Many Vikings died in battles in continental Europe, and in 844 they lost many men and ships to King Ramiro in northern Spain.[78] A few months later, another fleet took Seville, only to be driven off with further heavy losses.[79] In 911, the Vikings attacked Paris, and the Franks decided to give the Viking king Rollo land along the English Channel coast in exchange for peace. This land was named "Normandy" for the Norse "Northmen", and the Norse settlers became known as "Normans", adopting the French culture and language and becoming French vassals. In 1066, the Normans went on to conquer England in the first successful cross-Channel invasion since Roman times.[80]
During the Dark Ages, the Western Roman Empire fell under the control of various tribes. The Germanic and Slav tribes established their domains over Western and Eastern Europe respectively.[81] Eventually the Frankish tribes were united under Clovis I.[82] In 507, Clovis defeated the Visigoths at the Battle of Vouillé, slaying King Alaric II and conquering southern Gaul for the Franks. His conquests laid the foundation for the Frankish kingdom. Charlemagne, a Frankish king of the Carolingian dynasty who had conquered most of Western Europe, was anointed "Holy Roman Emperor" by the Pope in 800. This led in 962 to the founding of the Holy Roman Empire, which eventually became centred in the German principalities of central Europe.[83]
East Central Europe saw the creation of the first Slavic states and the adoption of Christianity (circa 1000 AD). The powerful West Slavic state of Great Moravia spread its territory all the way south to the Balkans, reaching its largest territorial extent under Svatopluk I and causing a series of armed conflicts with East Francia. Further south, the first South Slavic states emerged in the late 7th and 8th century and adopted Christianity: the First Bulgarian Empire, the Serbian Principality (later Kingdom and Empire), and the Duchy of Croatia (later Kingdom of Croatia). To the East, the Kievan Rus expanded from its capital in Kiev to become the largest state in Europe by the 10th century. In 988, Vladimir the Great adopted Orthodox Christianity as the religion of state.[84][85] Further East, Volga Bulgaria became an Islamic state in the 10th century, but was eventually absorbed into Russia several centuries later.[86]
The period between the year 1000 and 1300 is known as the High Middle Ages, during which the population of Europe experienced significant growth, culminating in the Renaissance of the 12th century. Economic growth, together with the lack of safety on the mainland trading routes, made possible the development of major commercial routes along the coast of the Mediterranean and Baltic Seas. The growing wealth and independence acquired by some coastal cities gave the Maritime Republics a leading role in the European scene. Constantinople was the largest and wealthiest city in Europe from the 9th to the 12th centuries, with a population of approximately 400,000.[91]
The Middle Ages on the mainland were dominated by the two upper echelons of the social structure: the nobility and the clergy. Feudalism developed in France in the Early Middle Ages and soon spread throughout Europe.[92] A struggle for influence between the nobility and the monarchy in England led to the writing of the Magna Carta and the establishment of a parliament.[93] The primary source of culture in this period came from the Roman Catholic Church. Through monasteries and cathedral schools, the Church was responsible for education in much of Europe.[92]
The Papacy reached the height of its power during the High Middle Ages. An East-West Schism in 1054 split the former Roman Empire religiously, with the Eastern Orthodox Church in the Byzantine Empire and the Roman Catholic Church in the former Western Roman Empire. In 1095, Pope Urban II called for a crusade against Muslims occupying Jerusalem and the Holy Land.[94] In Europe itself, the Church organised the Inquisition against heretics. Popes also encouraged crusading in the Iberian Peninsula.[95] In the east, a resurgent Byzantine Empire recaptured Crete and Cyprus from the Muslims and reconquered the Balkans. However, it faced a new enemy in the form of seaborne attacks by the Normans, who conquered Southern Italy and Sicily. Even more dangerous than the Franco-Norsemen was a new enemy from the steppe: the Seljuk Turks. The Empire was weakened following the defeat at the Battle of Manzikert and was weakened considerably by the Frankish-Venetian sack of Constantinople in 1204, during the Fourth Crusade.[96][97][98][99][100][101][102][103][104] Although it would recover Constantinople in 1261, Byzantium fell in 1453 when Constantinople was taken by the Ottoman Empire.[105][106][107]
In the 11th and 12th centuries, constant incursions by nomadic Turkic tribes, such as the Pechenegs and the Cuman-Kipchaks, caused a massive migration of Slavic populations to the safer, heavily forested regions of the north and temporarily halted the expansion of the Rus' state to the south and east.[108] Like many other parts of Eurasia, these territories were overrun by the Mongols.[109] The invaders, who became known as Tatars, were mostly Turkic-speaking peoples under Mongol suzerainty. They established the state of the Golden Horde with headquarters in Crimea, which later adopted Islam as a religion and ruled over modern-day southern and central Russia for more than three centuries.[110][111] After the collapse of Mongol dominions, the first Romanian states (principalities) emerged in the 14th century: Moldova and Walachia. Previously, these territories were under the successive control of Pechenegs and Cumans.[112] From the 12th to the 15th centuries, the Grand Duchy of Moscow grew from a small principality under Mongol rule to the largest state in Europe, overthrowing the Mongols in 1480 and eventually becoming the Tsardom of Russia. The state was consolidated under Ivan III the Great and Ivan the Terrible, steadily expanding to the east and south over the next centuries.
The Great Famine of 1315–1317 was the first crisis that would strike Europe in the late Middle Ages.[113] The period between 1348 and 1420 witnessed the heaviest loss. The population of France was reduced by half.[114][115] Medieval Britain was afflicted by 95 famines,[116] and France suffered the effects of 75 or more in the same period.[117] Europe was devastated in the mid-14th century by the Black Death, one of the most deadly pandemics in human history which killed an estimated 25 million people in Europe alone—a third of the European population at the time.[118]
The plague had a devastating effect on Europe's social structure; it induced people to live for the moment as illustrated by Giovanni Boccaccio in The Decameron (1353). It was a serious blow to the Roman Catholic Church and led to increased persecution of Jews, beggars, and lepers.[119] The plague is thought to have returned every generation with varying virulence and mortalities until the 18th century.[120] During this period, more than 100 plague epidemics swept across Europe.[121]
The Renaissance was a period of cultural change originating in Florence and later spreading to the rest of Europe. The rise of a new humanism was accompanied by the recovery of forgotten classical Greek and Arabic knowledge from monastic libraries, often translated from Arabic into Latin.[122][123][124] The Renaissance spread across Europe between the 14th and 16th centuries: it saw the flowering of art, philosophy, music, and the sciences, under the joint patronage of royalty, the nobility, the Roman Catholic Church, and an emerging merchant class.[125][126][127] Patrons in Italy, including the Medici family of Florentine bankers and the Popes in Rome, funded prolific quattrocento and cinquecento artists such as Raphael, Michelangelo, and Leonardo da Vinci.[128][129]
Political intrigue within the Church in the mid-14th century caused the Western Schism. During this forty-year period, two popes—one in Avignon and one in Rome—claimed rulership over the Church. Although the schism was eventually healed in 1417, the papacy's spiritual authority had suffered greatly.[132] In the 15th century, Europe started to extend itself beyond its geographic frontiers. Spain and Portugal, the greatest naval powers of the time, took the lead in exploring the world.[133][134] Exploration reached the Southern Hemisphere in the Atlantic and the Southern tip of Africa. Christopher Columbus reached the New World in 1492, and Vasco da Gama opened the ocean route to the East linking the Atlantic and Indian Oceans in 1498. Ferdinand Magellan reached Asia westward across the Atlantic and the Pacific Oceans in the Spanish expedition of Magellan-Elcano, resulting in the first circumnavigation of the globe, completed by Juan Sebastián Elcano (1519–22). Soon after, the Spanish and Portuguese began establishing large global empires in the Americas, Asia, Africa and Oceania.[135] France, the Dutch Republic and England soon followed in building large colonial empires with vast holdings in Africa, the Americas, and Asia. The Italian Wars (1494–1559) led to the development of the first modern professional army in Europe, the Tercios. Their superiority became evident at the Battle of Bicocca (1522) against French-Swiss-Italian forces, which marked the end of massive pike assaults after thousands of Swiss pikemen fell to the Spanish arquebusiers without the latter suffering a single casualty.[136] Over the course of the 16th century, Spanish and Italian troops marched north to fight on Dutch and German battlefields, dying there in large numbers.[137][138]
The Church's power was further weakened by the Protestant Reformation in 1517 when German theologian Martin Luther nailed his Ninety-five Theses criticising the selling of indulgences to the church door. He was subsequently excommunicated in the papal bull Exsurge Domine in 1520, and his followers were condemned in the 1521 Diet of Worms, which divided German princes between Protestant and Roman Catholic faiths.[139] Religious fighting and warfare spread with Protestantism. War broke out first in Germany, where Imperial troops defeated the Protestant army of the Schmalkaldic League at the Battle of Mühlberg, putting an end to the Schmalkaldic War (1546–47).[140] Between 2 and 3 million people were killed in the French Wars of Religion (1562–98).[141] The Eighty Years' War (1568–1648) between Catholic Habsburgs and Dutch Calvinists resulted in the death of hundreds of thousands of people.[141] In the Cologne War (1583–88), Catholic forces faced off against German Calvinist and Dutch troops. The Thirty Years War (1618–48) crippled the Holy Roman Empire and devastated much of Germany, killing between 25 and 40 percent of its population.[142] During the Thirty Years' War, in which various Protestant forces battled Imperial armies, France provided subsidies to Habsburg enemies, especially Sweden. The destruction of the Swedish army by an Austro-Spanish force at the Battle of Nördlingen (1634) forced France to intervene militarily in the Thirty Years' War. In 1638, France and Oxenstierna agreed to a treaty at Hamburg, extending the annual French subsidy of 400,000 riksdalers for three years, but Sweden would not fight against Spain. In 1643, the French defeated one of Spain's best armies at the Battle of Rocroi. In the aftermath of the Peace of Westphalia (1648), France rose to predominance within Western Europe.[143] France fought a series of wars over domination of Western Europe, including the Franco-Spanish War, the Franco-Dutch War, the Nine Years' War, and the War of the Spanish Succession; these wars cost France over 1,000,000 battle casualties.[144][145]
The 17th century in central and eastern Europe was a period of general decline.[146] Central and Eastern Europe experienced more than 150 famines in the 200-year period between 1501 and 1700.[147] From the Union of Krewo (1385), central and eastern Europe was dominated by the Kingdom of Poland and the Grand Duchy of Lithuania. Between 1648 and 1655, the hegemony of the Polish–Lithuanian Commonwealth in central and eastern Europe came to an end. From the 15th to 18th centuries, when the disintegrating khanates of the Golden Horde were conquered by Russia, Tatars from the Crimean Khanate frequently raided Eastern Slavic lands to capture slaves.[148] Further east, the Nogai Horde and Kazakh Khanate frequently raided the Slavic-speaking areas of Russia, Ukraine and Poland for hundreds of years, until the Russian expansion and conquest of most of northern Eurasia (i.e. Eastern Europe, Central Asia and Siberia).
The Renaissance and the New Monarchs marked the start of an Age of Discovery, a period of exploration, invention, and scientific development.[149] Among the great figures of the Western scientific revolution of the 16th and 17th centuries were Copernicus, Kepler, Galileo, and Isaac Newton.[150] According to Peter Barrett, "It is widely accepted that 'modern science' arose in the Europe of the 17th century (towards the end of the Renaissance), introducing a new understanding of the natural world."[122]
The Age of Enlightenment was a powerful intellectual movement during the 18th century promoting scientific and reason-based thought.[151][152][153] Discontent with the aristocracy and clergy's monopoly on political power in France resulted in the French Revolution and the establishment of the First Republic; the monarchy and many of the nobility perished during the ensuing Reign of Terror.[154] Napoleon Bonaparte rose to power in the aftermath of the French Revolution and established the First French Empire that, during the Napoleonic Wars, grew to encompass large parts of western and central Europe before collapsing in 1814 with the Battle of Paris, fought between the French Empire and the Sixth Coalition of Russia, Austria, and Prussia.[155][156] Napoleonic rule resulted in the further dissemination of the ideals of the French Revolution, including that of the nation-state, as well as the widespread adoption of the French models of administration, law, and education.[157][158][159] The Congress of Vienna, convened after Napoleon's downfall, established a new balance of power in Europe centred on the five "Great Powers": the UK, France, Prussia, Austria, and Russia.[160] This balance would remain in place until the Revolutions of 1848, during which liberal uprisings affected all of Europe except for Russia and the UK. These revolutions were eventually put down by conservative elements, and few reforms resulted.[161] The year 1859 saw the unification of Romania, as a nation-state, from smaller principalities. In 1867, the Austro-Hungarian empire was formed; and 1871 saw the unifications of both Italy and Germany as nation-states from smaller principalities.[162]
In parallel, the Eastern Question grew more complex after the Ottoman defeat in the Russo-Turkish War (1768–1774). As the dissolution of the Ottoman Empire seemed imminent, the Great Powers struggled to safeguard their strategic and commercial interests in the Ottoman domains. The Russian Empire stood to benefit from the decline, whereas the Habsburg Empire and Britain perceived the preservation of the Ottoman Empire to be in their best interests. Meanwhile, the Serbian revolution (1804) and Greek War of Independence (1821) marked the beginning of the end of Ottoman rule in the Balkans, which ended with the Balkan Wars in 1912–1913.[163] Formal recognition of the de facto independent principalities of Montenegro, Serbia and Romania ensued at the Congress of Berlin in 1878. Many Ottoman Muslims faced extermination at the hands of the newly independent states or were expelled to the shrinking Ottoman possessions in Europe or to Anatolia; between 1821 and 1922 alone, more than 5 million Ottoman Muslims were driven away from their homes, while another 5.5 million died in wars or due to starvation and disease.[164]
The Industrial Revolution started in Great Britain in the last part of the 18th century and spread throughout Europe. The invention and implementation of new technologies resulted in rapid urban growth, mass employment, and the rise of a new working class.[165] Reforms in social and economic spheres followed, including the first laws on child labour, the legalisation of trade unions,[166] and the abolition of slavery.[167] In Britain, the Public Health Act of 1875 was passed, which significantly improved living conditions in many British cities.[168] Europe's population increased from about 100 million in 1700 to 400 million by 1900.[169] The last major famine recorded in Western Europe, the Irish Potato Famine, caused death and mass emigration of millions of Irish people.[170] In the 19th century, 70 million people left Europe in migrations to various European colonies abroad and to the United States.[171] Demographic growth meant that, by 1900, Europe's share of the world's population was 25%.[172]
Two world wars and an economic depression dominated the first half of the 20th century. World War I was fought between 1914 and 1918. It started when Archduke Franz Ferdinand of Austria was assassinated by the Yugoslav nationalist[178] Gavrilo Princip.[179] Most European nations were drawn into the war, which was fought between the Entente Powers (France, Belgium, Serbia, Portugal, Russia, the United Kingdom, and later Italy, Greece, Romania, and the United States) and the Central Powers (Austria-Hungary, Germany, Bulgaria, and the Ottoman Empire).
On 4 August 1914, Germany invaded and occupied Belgium. On 17 August 1914, the Russian army launched an assault on the eastern German province of East Prussia, but it was dealt a crushing blow at the Battle of Tannenberg on 26–30 August 1914. As 1914 came to a close, the Germans were facing off against the French and British in northern France and in the Alsace and Lorraine regions of eastern France, and the opposing armies dug trenches and set up defensive positions. From 1915 to 1917, the two sides engaged in trench warfare and massive offensives. The Battle of the Somme was the largest battle on the Western Front; the British suffered 420,000 casualties, the French 200,000 and the Germans 500,000. The Battle of Verdun saw around 377,231 French and 337,000 Germans become casualties. At the Battle of Passchendaele, mustard gas was used as a chemical weapon, and the fighting caused heavy losses on both sides. In 1915, the course of the war shifted when the Kingdom of Italy decided to enter on the side of the Triple Entente, seeking to acquire Austrian Tyrol and some possessions along the Adriatic Sea. This violated the Triple Alliance of 1882, and Austria-Hungary was ill-prepared to fight on yet another front. The Royal Italian Army launched several offensives along the Isonzo River, with eleven battles of the Isonzo being fought during the war. Over the course of the war, 462,391 Italian soldiers died; 420,000 of the dead were lost on the Alpine front with Austria-Hungary, the rest fell in France, Albania, or Macedonia.[180]
In 1917, German troops began to arrive in Italy to assist the crumbling Austro-Hungarian forces, and the Germans destroyed an Italian army at the Battle of Caporetto. This led to British and French (and later American) troops arriving in Italy to assist the Italian military against the Central Powers, and the Allied troops succeeded in holding parts of northern Italy and positions along the Adriatic Sea. In the Balkans, the Serbians resisted the Austrians until Bulgaria and the German Empire sent troops to assist in the conquest of Serbia in December 1915. The Austro-Hungarian Army on the Eastern Front, which had performed poorly, was soon subordinated to the Imperial German Army's high command, and the Germans broke through the Russian lines in the Gorlice–Tarnów Offensive of 1915. The Russians launched a massive counterattack against the Central Powers in Galicia, and this "Brusilov Offensive" of 1916 inflicted very high losses on the Austro-Hungarians; 1,325,000 Central Powers troops and 500,000 Russian troops were lost. However, the political situation in Russia deteriorated as people became more aware of the corruption in Czar Nicholas II of Russia's government, and the Germans made rapid progress in 1917 after the Russian Revolution toppled the czar. The Germans secured Riga in Latvia in September 1917, and they benefited from the fall of Aleksandr Kerensky's provisional government to the Bolsheviks in October 1917. In February–March 1918, the Germans launched a massive offensive after the Russian SFSR came to power, taking advantage of the Russian Civil War to seize large amounts of territory. In March 1918, the Soviets signed the Treaty of Brest-Litovsk, ending the war on the Eastern Front, and Germany set up various puppet nations in Eastern Europe.
Germany's huge successes in the war against Russia were a morale boost for the Central Powers, but Germany created a new problem for itself when it adopted "unrestricted submarine warfare" to attack enemy ships at sea. These tactics led to the sinking of several civilian ships; the torpedoing of RMS Lusitania on 7 May 1915, with the loss of American lives, angered the United States. Germany agreed to the "Sussex Pledge" in 1916, stating that it would halt this strategy, but resumed unrestricted submarine warfare in early 1917. After the British intercepted the Zimmermann Telegram, a German offer to Mexico proposing an anti-American alliance, the United States declared war on Germany in April 1917, at the urging of President Woodrow Wilson, and joined the Allies. In 1918, American troops took part in the Battle of Belleau Wood and in the Meuse-Argonne Offensive, major operations in which large numbers of American soldiers died on European soil for the first time. The Germans began to falter as more and more Allied troops arrived, and they launched the desperate "Spring Offensive" of 1918. This offensive was repelled, and the Allies drove towards Belgium in the Hundred Days Offensive during the autumn of 1918. On 11 November 1918, the Germans agreed to an armistice with the Allies, ending the war. The war left more than 16 million civilians and military dead.[181] Over 60 million European soldiers were mobilised from 1914 to 1918.[182]
Russia was plunged into the Russian Revolution, which overthrew the Tsarist monarchy and replaced it with the communist Soviet Union.[183] Austria-Hungary and the Ottoman Empire collapsed and broke up into separate nations, and many other nations had their borders redrawn. The Treaty of Versailles, which officially ended World War I in 1919, was harsh towards Germany, upon whom it placed full responsibility for the war and imposed heavy reparations.[184] Excess deaths in Russia over the course of World War I and the Russian Civil War (including the postwar famine) amounted to a combined total of 18 million.[185] In 1932–1933, under Stalin's leadership, confiscations of grain by the Soviet authorities contributed to the second Soviet famine, which caused millions of deaths;[186] surviving kulaks were persecuted and many were sent to Gulags to do forced labour. Stalin was also responsible for the Great Purge of 1937–38, in which the NKVD executed 681,692 people;[187] millions of people were deported and exiled to remote areas of the Soviet Union.[188]
The social revolutions sweeping through Russia also affected other European nations following the Great War: 1919 brought the Weimar Republic in Germany and the First Austrian Republic; 1922 brought Mussolini's one-party fascist government in the Kingdom of Italy and Atatürk's Turkish Republic, which adopted the Western alphabet and state secularism.
Economic instability, caused in part by debts incurred in the First World War and 'loans' to Germany, played havoc in Europe in the late 1920s and 1930s. This and the Wall Street Crash of 1929 brought about the worldwide Great Depression. Helped by the economic crisis, social instability and the threat of communism, fascist movements developed throughout Europe, bringing Adolf Hitler to power in what became Nazi Germany.[198][199] In 1933, Hitler became the leader of Germany and began to work towards his goal of building Greater Germany. Germany regained the Saarland in 1935 and remilitarised the Rhineland in 1936. In 1938, Austria became a part of Germany following the Anschluss. Later that year, following the Munich Agreement signed by Germany, France, the United Kingdom and Italy, Germany annexed the Sudetenland, which was a part of Czechoslovakia inhabited by ethnic Germans; in early 1939, the remainder of Czechoslovakia was split into the Protectorate of Bohemia and Moravia, controlled by Germany, and the Slovak Republic. At the time, Britain and France preferred a policy of appeasement.
With tensions mounting between Germany and Poland over the future of Danzig, the Germans turned to the Soviets and signed the Molotov–Ribbentrop Pact, which allowed the Soviets to invade the Baltic states and parts of Poland and Romania. Germany invaded Poland on 1 September 1939, prompting France and the United Kingdom to declare war on Germany on 3 September, opening the European Theatre of World War II.[194][200][201] The Soviet invasion of Poland started on 17 September and Poland fell soon thereafter. On 24 September, the Soviet Union moved against the Baltic countries and, later, Finland. The British hoped to land at Narvik and send troops to aid Finland, but their primary objective in the landing was to encircle Germany and cut the Germans off from Scandinavian resources. Around the same time, Germany moved troops into Denmark; 1,400 Nazi soldiers conquered Denmark in one day.[202] The Phoney War continued. In May 1940, Germany conquered the Netherlands and Belgium. The following month, Italy allied with Germany, and France was conquered. By August, Germany had begun a bombing offensive against Britain, but failed to break British resistance.[203] Some 43,000 British civilians were killed and 139,000 wounded in the Blitz; much of London was destroyed, with 1,400,245 buildings destroyed or damaged.[204] In April 1941, Germany invaded Yugoslavia. In June 1941, Germany invaded the Soviet Union in Operation Barbarossa.[205] The Soviets and Germans fought an extremely brutal war in the east that cost millions of civilian and soldier lives. The Nazis murdered most of the Jews in the territories they occupied in the east, as part of their campaign to purge the world of Jews and other "subhumans". On 7 December 1941, Japan's attack on Pearl Harbor drew the United States into the conflict as an ally of the British Empire and other Allied forces.[206][207] In 1943, the Allies knocked Italy out of the war. The Italian campaign had the highest casualty rates of the whole war for the Western Allies.[208]
After the staggering Battle of Stalingrad in 1943, the German offensive in the Soviet Union turned into a continual retreat. The Battle of Kursk, which involved the largest tank battle in history, was the last major German offensive on the Eastern Front. In 1944, the Soviets and the Western Allies both launched offensives against the Axis in Europe, with the Western Allies liberating France and much of Western Europe as the Soviets liberated much of Eastern Europe before halting in central Poland. The final year of the war saw the Soviets and Western Allies divide Germany in two after meeting along the Elbe River, while the Soviets and allied partisans liberated the Balkans. The Soviets captured the German capital of Berlin on 2 May 1945, ending World War II in Europe. Hitler killed himself; Mussolini was killed by partisans in Italy. The Soviet–German struggle made World War II the bloodiest and costliest conflict in history.[209][210] More than 40 million people in Europe had died as a result of World War II (70 percent on the eastern front),[211][212] including between 11 and 17 million people who perished during the Holocaust (the genocide against Jews and the massacre of intellectuals, gypsies, homosexuals, handicapped people, Slavs, and Red Army prisoners-of-war).[213] The Soviet Union lost around 27 million people during the war, about half of all World War II casualties.[214] By the end of World War II, Europe had more than 40 million refugees.[215] Several post-war expulsions in Central and Eastern Europe displaced a total of about 20 million people,[216] in particular German-speakers from all over Eastern Europe. In the aftermath of World War II, American troops were stationed in Western Europe, while Soviet troops occupied several Central and Eastern European states.
World War I and especially World War II diminished the eminence of Western Europe in world affairs. After World War II the map of Europe was redrawn at the Yalta Conference and divided into two blocs, the Western countries and the communist Eastern bloc, separated by what was later called by Winston Churchill an "Iron Curtain". Germany was divided between the capitalist West Germany and the communist East Germany. The United States and Western Europe established the NATO alliance, and later the Soviet Union and Central Europe established the Warsaw Pact.[217]
The two new superpowers, the United States and the Soviet Union, became locked in a fifty-year-long Cold War, centred on nuclear proliferation. At the same time decolonisation, which had already started after World War I, gradually resulted in the independence of most of the European colonies in Asia and Africa.[13] During the Cold War, extra-continental military interventions were the preserve of the two superpowers, a few West European countries, and Cuba.[218] Using the small Caribbean nation of Cuba as a surrogate, the Soviets projected power to distant points of the globe, exploiting world trouble spots without directly involving their own troops.[219]
In the 1980s the reforms of Mikhail Gorbachev and the Solidarity movement in Poland accelerated the collapse of the Eastern bloc and the end of the Cold War. Germany was reunited, after the symbolic fall of the Berlin Wall in 1989, and the maps of Central and Eastern Europe were redrawn once more.[198] European integration also grew after World War II. In 1949 the Council of Europe was founded, following a speech by Sir Winston Churchill, with the idea of unifying Europe to achieve common goals. It includes all European states except for Belarus and Vatican City. The Treaty of Rome in 1957 established the European Economic Community between six Western European states with the goal of a unified economic policy and common market.[221] In 1967 the EEC, the European Coal and Steel Community and Euratom formed the European Community, which in 1993 became the European Union. The EU established a parliament, court and central bank and introduced the euro as a unified currency.[222] Between 2004 and 2013, more Central and Eastern European countries began joining, expanding the EU to 28 European countries and once more making Europe a major economic and political centre of power.[223] However, the United Kingdom withdrew from the EU on 31 January 2020, as a result of a June 2016 referendum on EU membership.[224]
Europe makes up the western fifth of the Eurasian landmass.[24] It has a higher ratio of coast to landmass than any other continent or subcontinent.[225] Its maritime borders consist of the Arctic Ocean to the north, the Atlantic Ocean to the west, and the Mediterranean, Black, and Caspian Seas to the south.[226]
Land relief in Europe shows great variation within relatively small areas. The southern regions are more mountainous, while moving north the terrain descends from the high Alps, Pyrenees, and Carpathians, through hilly uplands, into broad, low northern plains, which are vast in the east. This extended lowland is known as the Great European Plain, and at its heart lies the North German Plain. An arc of uplands also exists along the north-western seaboard, which begins in the western parts of the islands of Britain and Ireland, and then continues along the mountainous, fjord-cut spine of Norway.
This description is simplified. Sub-regions such as the Iberian Peninsula and the Italian Peninsula contain their own complex features, as does mainland Central Europe itself, where the relief contains many plateaus, river valleys and basins that complicate the general trend. Sub-regions like Iceland, Britain, and Ireland are special cases. The former is a land unto itself in the northern ocean which is counted as part of Europe, while the latter are upland areas that were once joined to the mainland until rising sea levels cut them off.
Europe lies mainly in the temperate climate zones, being subjected to prevailing westerlies. The climate is milder in comparison to other areas of the same latitude around the globe due to the influence of the Gulf Stream.[228] The Gulf Stream is nicknamed "Europe's central heating", because it makes Europe's climate warmer and wetter than it would otherwise be. The Gulf Stream not only carries warm water to Europe's coast but also warms up the prevailing westerly winds that blow across the continent from the Atlantic Ocean.
Therefore, the year-round average temperature of Naples is 16 °C (61 °F), while it is only 12 °C (54 °F) in New York City, which is almost on the same latitude. Berlin, Germany; Calgary, Canada; and Irkutsk, in the Asian part of Russia, lie on around the same latitude; January temperatures in Berlin average around 8 °C (14 °F) higher than those in Calgary, and they are almost 22 °C (40 °F) higher than average temperatures in Irkutsk.[228] Similarly, northern parts of Scotland have a temperate marine climate. The yearly average temperature in the city of Inverness is 9.05 °C (48.29 °F), whereas Churchill, Manitoba, Canada, on roughly the same latitude, has an average temperature of −6.5 °C (20.3 °F), giving it a nearly subarctic climate.
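The paragraph above mixes absolute temperatures with temperature differences, which convert between Celsius and Fahrenheit by different rules. A minimal Python sketch (not part of the source; the figures are simply those quoted above) checks both:

# Absolute temperatures convert as F = C * 9/5 + 32; a temperature
# *difference* converts as dF = dC * 9/5, with no +32 offset.
def c_to_f(celsius):
    return celsius * 9 / 5 + 32

def c_diff_to_f_diff(delta_c):
    return delta_c * 9 / 5

print(c_to_f(16))             # Naples annual mean: 16 C -> 60.8 F (quoted as 61 F)
print(c_to_f(9.05))           # Inverness annual mean: 9.05 C -> 48.29 F
print(c_to_f(-6.5))           # Churchill, Manitoba: -6.5 C -> 20.3 F
print(c_diff_to_f_diff(8))    # Berlin vs Calgary January gap: 8 C -> 14.4 F (~14 F)
print(c_diff_to_f_diff(22))   # Berlin vs Irkutsk gap: 22 C -> 39.6 F (~40 F)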
In general, Europe is not just colder towards the north compared to the south, but it also gets colder from the west towards the east. The climate is more oceanic in the west, and less so in the east. This can be illustrated by comparing average temperatures at locations roughly following the 60th, 55th, 50th, 45th and 40th latitudes. None of them is located at high altitude; most of them are close to the sea (location, approximate latitude and longitude, coldest month average, hottest month average and annual average temperatures in degrees C).[229]
It is notable how the average temperatures for the coldest month, as well as the annual average temperatures, drop from the west to the east. For instance, Edinburgh is warmer than Belgrade during the coldest month of the year, although Belgrade is around 10° of latitude farther south.
The geological history of Europe traces back to the formation of the Baltic Shield (Fennoscandia) and the Sarmatian craton, both around 2.25 billion years ago, followed by the Volgo–Uralia shield, the three together leading to the East European craton (≈ Baltica) which became a part of the supercontinent Columbia. Around 1.1 billion years ago, Baltica and Arctica (as part of the Laurentia block) became joined to Rodinia, later resplitting around 550 million years ago to reform as Baltica. Around 440 million years ago Euramerica was formed from Baltica and Laurentia; a further joining with Gondwana then leading to the formation of Pangea. Around 190 million years ago, Gondwana and Laurasia split apart due to the widening of the Atlantic Ocean. Finally, and very soon afterwards, Laurasia itself split up again, into Laurentia (North America) and the Eurasian continent. The land connection between the two persisted for a considerable time, via Greenland, leading to interchange of animal species. From around 50 million years ago, rising and falling sea levels have determined the actual shape of Europe, and its connections with continents such as Asia. Europe's present shape dates to the late Tertiary period about five million years ago.[232]
The geology of Europe is hugely varied and complex, and gives rise to the wide variety of landscapes found across the continent, from the Scottish Highlands to the rolling plains of Hungary.[234] Europe's most significant feature is the dichotomy between highland and mountainous Southern Europe and a vast, partially underwater, northern plain ranging from Ireland in the west to the Ural Mountains in the east. These two halves are separated by the mountain chains of the Pyrenees and Alps/Carpathians. The northern plains are delimited in the west by the Scandinavian Mountains and the mountainous parts of the British Isles. Major shallow water bodies submerging parts of the northern plains are the Celtic Sea, the North Sea, the Baltic Sea complex and Barents Sea.
The northern plain contains the old geological continent of Baltica, and so may be regarded geologically as the "main continent", while peripheral highlands and mountainous regions in the south and west constitute fragments from various other geological continents. Most of the older geology of western Europe existed as part of the ancient microcontinent Avalonia.
Having lived side by side with agricultural peoples for millennia, Europe's animals and plants have been profoundly affected by the presence and activities of man. Apart from Fennoscandia and northern Russia, few areas of untouched wilderness remain in Europe, except in various national parks.
The main natural vegetation cover in Europe is mixed forest. The conditions for growth are very favourable. In the north, the Gulf Stream and North Atlantic Drift warm the continent. Southern Europe could be described as having a warm but mild climate. There are frequent summer droughts in this region. Mountain ridges also affect the conditions. Some of these (Alps, Pyrenees) are oriented east–west and allow the wind to carry large masses of water from the ocean into the interior. Others are oriented south–north (Scandinavian Mountains, Dinarides, Carpathians, Apennines), and because the rain falls primarily on the side of the mountains that is oriented towards the sea, forests grow well on that side, while the conditions on the other side are much less favourable. Few corners of mainland Europe have not been grazed by livestock at some point in time, and the cutting down of the pre-agricultural forest habitat caused disruption to the original plant and animal ecosystems.
Probably 80 to 90 percent of Europe was once covered by forest.[235] It stretched from the Mediterranean Sea to the Arctic Ocean. Although over half of Europe's original forests disappeared through the centuries of deforestation, Europe still has over one quarter of its land area as forest, such as the broadleaf and mixed forests, taiga of Scandinavia and Russia, mixed rainforests of the Caucasus and the Cork oak forests in the western Mediterranean. During recent times, deforestation has been slowed and many trees have been planted. However, in many cases monoculture plantations of conifers have replaced the original mixed natural forest, because these grow quicker. The plantations now cover vast areas of land, but offer poorer habitats for many European forest dwelling species which require a mixture of tree species and diverse forest structure. The amount of natural forest in Western Europe is just 2–3% or less, in European Russia 5–10%. The country with the smallest percentage of forested area is Iceland (1%), while the most forested country is Finland (77%).[236]
In temperate Europe, mixed forest with both broadleaf and coniferous trees dominates. The most important species in central and western Europe are beech and oak. In the north, the taiga is a mixed spruce–pine–birch forest; further north within Russia and extreme northern Scandinavia, the taiga gives way to tundra as the Arctic is approached. In the Mediterranean, many olive trees have been planted, which are very well adapted to its arid climate; Mediterranean cypress is also widely planted in southern Europe. The semi-arid Mediterranean region hosts much scrub forest. A narrow east–west tongue of Eurasian grassland (the steppe) extends westwards from Ukraine and southern Russia, ending in Hungary, and gives way to taiga to the north.
Glaciation during the most recent ice age and the presence of man affected the distribution of European fauna. In many parts of Europe most large animals and top predator species have been hunted to extinction. The woolly mammoth was extinct before the end of the Neolithic period. Today wolves (carnivores) and bears (omnivores) are endangered. Once they were found in most parts of Europe, but deforestation and hunting caused these animals to withdraw further and further. By the Middle Ages the bears' habitats were limited to more or less inaccessible mountains with sufficient forest cover. Today, the brown bear lives primarily in the Balkan peninsula, Scandinavia, and Russia; a small number also persist in other countries across Europe (Austria, Pyrenees etc.), but in these areas brown bear populations are fragmented and marginalised because of the destruction of their habitat. In addition, polar bears may be found on Svalbard, a Norwegian archipelago far north of Scandinavia. The wolf, the second largest predator in Europe after the brown bear, can be found primarily in Central and Eastern Europe and in the Balkans, with a handful of packs in pockets of Western Europe (Scandinavia, Spain, etc.).
Other important European fauna include the European wild cat, foxes (especially the red fox), the jackal and different species of martens and hedgehogs; different species of reptiles (such as vipers and grass snakes) and amphibians; and different birds (owls, hawks and other birds of prey).
Important European herbivores include snails, larvae, fish, different birds, and mammals such as rodents, deer, roe deer and boars and, living in the mountains, marmots, steinbocks and chamois, among others. A number of insects, such as the small tortoiseshell butterfly, add to the biodiversity.[239]
The extinction of the dwarf hippos and dwarf elephants has been linked to the earliest arrival of humans on the islands of the Mediterranean.[240]
Sea creatures are also an important part of European flora and fauna. The sea flora is mainly phytoplankton. Important animals that live in European seas are zooplankton, molluscs, echinoderms, different crustaceans, squids and octopuses, fish, dolphins, and whales.
Biodiversity is protected in Europe through the Council of Europe's Bern Convention, which has also been signed by the European Community as well as non-European states.
The political map of Europe is substantially derived from the re-organisation of Europe following the Napoleonic Wars in 1815. The prevalent form of government in Europe is parliamentary democracy, in most cases in the form of a republic; in 1815, the prevalent form of government was still the monarchy. Europe's remaining eleven monarchies[241] are constitutional.
European integration is the process of political, legal, economic (and in some cases social and cultural) integration of European states as it has been pursued by the powers sponsoring the Council of Europe since the end of World War II.
The European Union has been the focus of economic integration on the continent since its foundation in 1993. More recently, the Eurasian Economic Union has been established as a counterpart comprising former Soviet states.
27 European states are members of the politico-economic European Union, 26 of the border-free Schengen Area and 19 of the monetary union Eurozone. Among the smaller European organisations are the Nordic Council, the Benelux, the Baltic Assembly and the Visegrád Group.
The list below includes all entities falling even partially under any of the various common definitions of Europe, geographically or politically.
Within the above-mentioned states are several de facto independent countries with limited to no international recognition. None of them are members of the UN:
Several dependencies and similar territories with broad autonomy are also found within or in close proximity to Europe. These include Åland (a region of Finland), two constituent countries of the Kingdom of Denmark (other than Denmark itself), three Crown dependencies, and two British Overseas Territories. Svalbard is also included due to its unique status within Norway, although it is not autonomous. Not included are the three countries of the United Kingdom with devolved powers and the two Autonomous Regions of Portugal, which, despite having a unique degree of autonomy, are not largely self-governing in matters other than international affairs. Areas with little more than a unique tax status, such as Heligoland and the Canary Islands, are also not included for this reason.
The economy of Europe is currently the largest on Earth, and Europe is the richest region as measured by assets under management, with over $32.7 trillion compared to North America's $27.1 trillion in 2008.[243] In 2009 Europe remained the wealthiest region. Its $37.1 trillion in assets under management represented one-third of the world's wealth. It was one of several regions where wealth surpassed its precrisis year-end peak.[244] As with other continents, Europe has a large variation of wealth among its countries. The richer states tend to be in the West; some of the Central and Eastern European economies are still emerging from the collapse of the Soviet Union and the breakup of Yugoslavia.
The European Union, a political entity composed of 27 European states, comprises the largest single economic area in the world. 19 EU countries share the euro as a common currency.
Five European countries rank among the world's fifteen largest national economies by GDP (PPP) (ranks according to the CIA): Germany (6), Russia (7), the United Kingdom (10), France (11), and Italy (13).[245]
There is huge disparity between many European countries in terms of their income. The richest in terms of GDP per capita is Monaco, with US$172,676 per capita (2009), and the poorest is Moldova, with a GDP per capita of US$1,631 (2010).[246] Monaco is the richest country in the world in terms of GDP per capita according to the World Bank.
As a whole, Europe's GDP per capita is US$21,767 according to a 2016 International Monetary Fund assessment.[247]
Capitalism has been dominant in the Western world since the end of feudalism.[248] From Britain, it gradually spread throughout Europe.[249] The Industrial Revolution started in Europe, specifically the United Kingdom in the late 18th century,[250] and the 19th century saw Western Europe industrialise. Economies were disrupted by World War I but by the beginning of World War II they had recovered and were having to compete with the growing economic strength of the United States. World War II, again, damaged much of Europe's industries.
After World War II the economy of the UK was in a state of ruin,[251] and continued to suffer relative economic decline in the following decades.[252] Italy was also in a poor economic condition but regained a high level of growth by the 1950s. West Germany recovered quickly and had doubled production from pre-war levels by the 1950s.[253] France also staged a remarkable comeback enjoying rapid growth and modernisation; later on Spain, under the leadership of Franco, also recovered, and the nation recorded huge unprecedented economic growth beginning in the 1960s in what is called the Spanish miracle.[254] The majority of Central and Eastern European states came under the control of the Soviet Union and thus were members of the Council for Mutual Economic Assistance (COMECON).[255]
The states which retained a free-market system were given a large amount of aid by the United States under the Marshall Plan.[256] The western states moved to link their economies together, providing the basis for the EU and increasing cross-border trade. This helped them to enjoy rapidly improving economies, while the states in COMECON struggled, in large part due to the cost of the Cold War. By 1990, the European Community had expanded from 6 founding members to 12. The emphasis placed on resurrecting the West German economy led to it overtaking the UK as Europe's largest economy.
With the fall of communism in Central and Eastern Europe in 1991, the post-socialist states began free market reforms.
After East and West Germany were reunited in 1990, the economy of West Germany struggled as it had to support and largely rebuild the infrastructure of East Germany.
By the turn of the millennium, the EU dominated the economy of Europe, comprising the five largest European economies of the time: Germany, the United Kingdom, France, Italy, and Spain. In 1999, 12 of the 15 members of the EU joined the Eurozone, replacing their former national currencies with the common euro. The three that chose to remain outside the Eurozone were the United Kingdom, Denmark, and Sweden. The European Union is now the largest economy in the world.[257]
Figures released by Eurostat in 2009 confirmed that the Eurozone had gone into recession in 2008.[258] It impacted much of the region.[259] In 2010, fears of a sovereign debt crisis[260] developed concerning some countries in Europe, especially Greece, Ireland, Spain, and Portugal.[261] As a result, measures were taken, especially for Greece, by the leading countries of the Eurozone.[262] The EU-27 unemployment rate was 10.3% in 2012.[263] For those aged 15–24 it was 22.4%.[263]
In 2017, the population of Europe was estimated to be 742 million according to the 2019 revision of the World Population Prospects,[2][3] which is about one-tenth of the world's population. This number includes Siberia (about 38 million people) but excludes European Turkey (about 12 million).
A century ago, Europe had nearly a quarter of the world's population.[265] The population of Europe has grown in the past century, but in other areas of the world (in particular Africa and Asia) the population has grown far more quickly.[266] Among the continents, Europe has a relatively high population density, second only to Asia. Most of Europe is in a mode of sub-replacement fertility, which means that each new generation is less populous than the one before.
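As a rough illustration of sub-replacement fertility (a sketch under the standard demographic assumption of about 2.1 children per woman for replacement, a figure not given in this passage), the average of 1.52 children per woman cited below implies that, ignoring migration, each generation is only about 72% the size of the previous one:

# Hypothetical starting cohort of 100 (arbitrary units).
replacement = 2.1   # approximate replacement-level fertility (assumption)
tfr = 1.52          # average children per woman, as quoted in this article
cohort = 100.0
for generation in range(1, 4):
    cohort *= tfr / replacement
    print(generation, round(cohort, 1))   # -> 1 72.4, 2 52.4, 3 37.9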
The most densely populated country in Europe (and in the world) is the microstate of Monaco.
Pan and Pfeil (2004) count 87 distinct "peoples of Europe", of which 33 form the majority population in at least one sovereign state, while the remaining 54 constitute ethnic minorities.[267]
According to UN population projections, Europe's population may fall to about 7% of the world's population by 2050, or 653 million people (medium variant; 556 and 777 million in the low and high variants, respectively).[266] Within this context, significant disparities exist between regions in relation to fertility rates. The average number of children per female of child-bearing age is 1.52.[268] According to some sources,[269] this rate is higher among Muslims in Europe. The UN predicts a steady population decline in Central and Eastern Europe as a result of emigration and low birth rates.[270]
Europe is home to the highest number of migrants of all global regions, at 70.6 million people, according to the IOM.[271] In 2005, the EU had an overall net gain from immigration of 1.8 million people. This accounted for almost 85% of Europe's total population growth.[272] In 2008, 696,000 persons were given citizenship of an EU27 member state, a decrease from 707,000 the previous year.[273] In 2017, approximately 825,000 persons acquired citizenship of an EU28 member state.[274] 2.4 million immigrants from non-EU countries entered the EU in 2017.[275]
Early modern emigration from Europe began with Spanish and Portuguese settlers in the 16th century,[276][277] and French and English settlers in the 17th century.[278] But numbers remained relatively small until waves of mass emigration in the 19th century, when millions of poor families left Europe.[279]
Today, large populations of European descent are found on every continent. European ancestry predominates in North America, and to a lesser degree in South America (particularly in Uruguay, Argentina, Chile and Brazil, while most of the other Latin American countries also have a considerable population of European origins). Australia and New Zealand have large European-derived populations. Africa has no countries with European-derived majorities (with the possible exceptions of Cape Verde and São Tomé and Príncipe, depending on context), but there are significant minorities, such as the White South Africans in South Africa. In Asia, European-derived populations (specifically Russians) predominate in North Asia and some parts of Northern Kazakhstan.[280]
Europe has about 225 indigenous languages,[281] mostly falling within three Indo-European language groups: the Romance languages, derived from the Latin of the Roman Empire; the Germanic languages, whose ancestor language came from southern Scandinavia; and the Slavic languages.[232] Slavic languages are mostly spoken in Southern, Central and Eastern Europe. Romance languages are spoken primarily in Western and Southern Europe, as well as in Switzerland in Central Europe and Romania and Moldova in Eastern Europe. Germanic languages are spoken in Western, Northern and Central Europe as well as in Gibraltar and Malta in Southern Europe.[232] Languages in adjacent areas show significant overlaps (as in English, for example). Other Indo-European languages outside the three main groups include the Baltic group (Latvian and Lithuanian), the Celtic group (Irish, Scottish Gaelic, Manx, Welsh, Cornish, and Breton[232]), Greek, Armenian, and Albanian.
A distinct non-Indo-European family of Uralic languages (Estonian, Finnish, Hungarian, Erzya, Komi, Mari, Moksha, and Udmurt) is spoken mainly in Estonia, Finland, Hungary, and parts of Russia. Turkic languages include Azerbaijani, Kazakh and Turkish, in addition to smaller languages in Eastern and Southeast Europe (Balkan Gagauz Turkish, Bashkir, Chuvash, Crimean Tatar, Karachay-Balkar, Kumyk, Nogai, and Tatar). Kartvelian languages (Georgian, Mingrelian, and Svan) are spoken primarily in Georgia. Two other language families reside in the North Caucasus (termed Northeast Caucasian, most notably including Chechen, Avar, and Lezgin; and Northwest Caucasian, most notably including Adyghe). Maltese is the only Semitic language that is official within the EU, while Basque is the only European language isolate.
Multilingualism and the protection of regional and minority languages are recognised political goals in Europe today. The Council of Europe Framework Convention for the Protection of National Minorities and the Council of Europe's European Charter for Regional or Minority Languages set up a legal framework for language rights in Europe.
The four largest cities of Europe are Istanbul, Moscow, Paris and London, each with over 10 million residents,[8] and as such they have been described as megacities.[282] While Istanbul has the highest total population, one third of it lies on the Asian side of the Bosporus, making Moscow the most populous city entirely in Europe.
The next largest cities in order of population are Saint Petersburg, Madrid, Berlin and Rome, each having over 3 million residents.[8]
When considering the commuter belts or metropolitan areas, within the EU (for which comparable data is available) London covers the largest population, followed in order by Paris, Madrid, Barcelona, Berlin, the Ruhr area, Rome, Milan, Athens and Warsaw.[283]
"Europe" as a cultural concept is substantially derived from the shared heritage of the Roman Empire and its culture.
The boundaries of Europe were historically understood as those of Christendom (or more specifically Latin Christendom), as established or defended throughout the medieval and early modern history of Europe, especially against Islam, as in the Reconquista and the Ottoman wars in Europe.[284]
This shared cultural heritage is combined with overlapping indigenous national cultures and folklores, roughly divided into Slavic, Latin (Romance) and Germanic, but with several components not part of any of these groups (notably Greek and Celtic).
Cultural contact and mixture characterise much of European regional culture; Kaplan (2014) describes Europe as "embracing maximum cultural diversity at minimal geographical distances".[285]
Different cultural events are organised in Europe, with the aim of bringing different cultures closer together and raising awareness of their importance, such as the European Capital of Culture, the European Region of Gastronomy, the European Youth Capital and the European Capital of Sport.
Historically, religion in Europe has been a major influence on European art, culture, philosophy and law. There are six patron saints of Europe venerated in Roman Catholicism, five of them declared by Pope John Paul II between 1980 and 1999: Saints Cyril and Methodius, Saint Bridget of Sweden, Catherine of Siena and Saint Teresa Benedicta of the Cross (Edith Stein). The exception is Benedict of Nursia, who had already been declared "Patron Saint of all Europe" by Pope Paul VI in 1964.[286]
Religion in Europe according to the Global Religious Landscape survey by the Pew Forum, 2012[287]
The largest religion in Europe is Christianity, with 76.2% of Europeans considering themselves Christians,[288] including Catholic, Eastern Orthodox and various Protestant denominations. Among Protestants, the most popular are historically state-supported European denominations such as Lutheranism, Anglicanism and the Reformed faith. Other Protestant denominations, such as the historically significant Anabaptists, were never supported by any state and thus are not so widespread, nor are those that arrived more recently from the United States, such as Pentecostalism, Adventism, Methodism, Baptists and various Evangelical Protestants, although Methodism and the Baptists both have European origins. The notion of "Europe" and the "Western World" has been intimately connected with the concept of "Christianity and Christendom"; many even credit Christianity with being the link that created a unified European identity.[289]
Christianity, including the Roman Catholic Church,[290][291] has played a prominent role in the shaping of Western civilisation since at least the 4th century,[292][293][294][295] and for at least a millennium and a half, Europe has been nearly equivalent to Christian culture, even though the religion was inherited from the Middle East. Christian culture was the predominant force in western civilisation, guiding the course of philosophy, art, and science.[296][297]
The second most popular religion is Islam (6%)[298] concentrated mainly in the Balkans and Eastern Europe (Bosnia and Herzegovina, Albania, Kosovo, Kazakhstan, North Cyprus, Turkey, Azerbaijan, North Caucasus, and the Volga-Ural region). Other religions, including Judaism, Hinduism, and Buddhism are minority religions (though Tibetan Buddhism is the majority religion of Russia's Republic of Kalmykia). The 20th century saw the revival of Neopaganism through movements such as Wicca and Druidry.
Europe has become a relatively secular continent, with an increasing number and proportion of irreligious, atheist and agnostic people, who make up about 18.2% of Europe's population,[299] currently the largest secular population in the Western world. There are a particularly high number of self-described non-religious people in the Czech Republic, Estonia, Sweden, former East Germany, and France.[300]
Sport in Europe tends to be highly organised, with many sports having professional leagues.
en/1908.html.txt
ADDED
@@ -0,0 +1,111 @@
Europa /jʊˈroʊpə/, or Jupiter II, is the smallest of the four Galilean moons orbiting Jupiter, and the sixth-closest to the planet of all the 79 known moons of Jupiter. It is also the sixth-largest moon in the Solar System. Europa was discovered in 1610 by Galileo Galilei[1] and was named after Europa, the Phoenician mother of King Minos of Crete and lover of Zeus (the Greek equivalent of the Roman god Jupiter).
Slightly smaller than Earth's Moon, Europa is primarily made of silicate rock and has a water-ice crust[13] and probably an iron–nickel core. It has a very thin atmosphere, composed primarily of oxygen. Its surface is striated by cracks and streaks, but craters are relatively few. In addition to Earth-bound telescope observations, Europa has been examined by a succession of space-probe flybys, the first occurring in the early 1970s.
Europa has the smoothest surface of any known solid object in the Solar System. The apparent youth and smoothness of the surface have led to the hypothesis that a water ocean exists beneath the surface, which could conceivably harbor extraterrestrial life.[14] The predominant model suggests that heat from tidal flexing causes the ocean to remain liquid and drives ice movement similar to plate tectonics, absorbing chemicals from the surface into the ocean below.[15][16] Sea salt from a subsurface ocean may be coating some geological features on Europa, suggesting that the ocean is interacting with the sea floor. This may be important in determining whether Europa could be habitable.[17] In addition, the Hubble Space Telescope detected water vapor plumes similar to those observed on Saturn's moon Enceladus, which are thought to be caused by erupting cryogeysers.[18] In May 2018, astronomers provided supporting evidence of water plume activity on Europa, based on an updated analysis of data obtained from the Galileo space probe, which orbited Jupiter from 1995 to 2003. Such plume activity could help researchers in a search for life from the subsurface Europan ocean without having to land on the moon.[19][20][21][22]
The Galileo mission, launched in 1989, provides the bulk of current data on Europa. No spacecraft has yet landed on Europa, although there have been several proposed exploration missions. The European Space Agency's Jupiter Icy Moon Explorer (JUICE) is a mission to Ganymede that is due to launch in 2022 and will include two flybys of Europa.[23] NASA's planned Europa Clipper should be launched in 2025.[24]
Europa, along with Jupiter's three other large moons, Io, Ganymede, and Callisto, was discovered by Galileo Galilei on 8 January 1610,[1] and possibly independently by Simon Marius. The first reported observation of Io and Europa was made by Galileo on 7 January 1610 using a 20×-magnification refracting telescope at the University of Padua. However, in that observation, Galileo could not separate Io and Europa due to the low magnification of his telescope, so that the two were recorded as a single point of light. The following day, 8 January 1610 (used as the discovery date for Europa by the IAU), Io and Europa were seen for the first time as separate bodies during Galileo's observations of the Jupiter system.[1]
Europa is named after Europa, daughter of the king of Tyre, a Phoenician noblewoman in Greek mythology. Like all the Galilean satellites, Europa is named after a lover of Zeus, the Greek counterpart of Jupiter. Europa was courted by Zeus and became the queen of Crete.[25] The naming scheme was suggested by Simon Marius,[26] who attributed the proposal to Johannes Kepler.[26][27]:
... Inprimis autem celebrantur tres fœminæ Virgines, quarum furtivo amore Iupiter captus & positus est... Europa Agenoris filia... à me vocatur... Secundus Europa... [Io,] Europa, Ganimedes puer, atque Calisto, lascivo nimium perplacuere Jovi.
... First, three young women who were captured by Jupiter for secret love shall be honoured, [including] Europa, the daughter of Agenor... The second [moon] is called by me Europa... Io, Europa, the boy Ganymede, and Callisto greatly pleased lustful Jupiter.[28]
The names fell out of favor for a considerable time and were not revived in general use until the mid-20th century.[29] In much of the earlier astronomical literature, Europa is simply referred to by its Roman numeral designation as Jupiter II (a system also introduced by Galileo) or as the "second satellite of Jupiter". In 1892, the discovery of Amalthea, whose orbit lay closer to Jupiter than those of the Galilean moons, pushed Europa to the third position. The Voyager probes discovered three more inner satellites in 1979, so Europa is now counted as Jupiter's sixth satellite, though it is still referred to as Jupiter II.[29]
The adjectival form has stabilized as Europan.[4][30]
Europa orbits Jupiter in just over three and a half days, with an orbital radius of about 670,900 km. With an orbital eccentricity of only 0.009, the orbit itself is nearly circular, and the orbital inclination relative to Jupiter's equatorial plane is small, at 0.470°.[31] Like its fellow Galilean satellites, Europa is tidally locked to Jupiter, with one hemisphere of Europa constantly facing Jupiter. Because of this, there is a sub-Jovian point on Europa's surface, from which Jupiter would appear to hang directly overhead. Europa's prime meridian is a line passing through this point.[32] Research suggests that the tidal locking may not be full, as a non-synchronous rotation has been proposed: Europa spins faster than it orbits, or at least did so in the past. This suggests an asymmetry in internal mass distribution and that a layer of subsurface liquid separates the icy crust from the rocky interior.[9]
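The quoted orbital radius and period are mutually consistent under Kepler's third law, T = 2*pi*sqrt(a^3/GM). A minimal Python check, assuming the standard value of Jupiter's gravitational parameter (GM ~ 1.267e17 m^3/s^2, which does not appear in the text):

import math

GM_JUPITER = 1.26687e17        # m^3/s^2, standard value (assumption)
a = 670_900 * 1000.0           # semi-major axis in metres (670,900 km)
e = 0.009                      # orbital eccentricity, as quoted above

period_days = 2 * math.pi * math.sqrt(a**3 / GM_JUPITER) / 86400
print(period_days)             # ~3.55 days: "just over three and a half days"

# The small eccentricity still moves Europa about 12,000 km closer to and
# farther from Jupiter over each orbit, which drives the tidal flexing
# described in the next paragraph.
print(a * (1 - e) / 1000, a * (1 + e) / 1000)   # perijove and apojove, km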
The slight eccentricity of Europa's orbit, maintained by the gravitational disturbances from the other Galileans, causes Europa's sub-Jovian point to oscillate around a mean position. As Europa comes slightly nearer to Jupiter, Jupiter's gravitational attraction increases, causing Europa to elongate towards and away from it. As Europa moves slightly away from Jupiter, Jupiter's gravitational force decreases, causing Europa to relax back into a more spherical shape, and creating tides in its ocean. The orbital eccentricity of Europa is continuously pumped by its mean-motion resonance with Io.[33] Thus, the tidal flexing kneads Europa's interior and gives it a source of heat, possibly allowing its ocean to stay liquid while driving subsurface geological processes.[15][33] The ultimate source of this energy is Jupiter's rotation, which is tapped by Io through the tides it raises on Jupiter and is transferred to Europa and Ganymede by the orbital resonance.[33][34]
Analysis of the unique cracks lining Europa yielded evidence that it likely spun around a tilted axis at some point in time. If correct, this would explain many of Europa's features. Europa's immense network of crisscrossing cracks serves as a record of the stresses caused by massive tides in its global ocean. Europa's tilt could influence calculations of how much of its history is recorded in its frozen shell, how much heat is generated by tides in its ocean, and even how long the ocean has been liquid. Its ice layer must stretch to accommodate these changes. When there is too much stress, it cracks. A tilt in Europa's axis could suggest that its cracks may be much more recent than previously thought. The reason for this is that the direction of the spin pole may change by as much as a few degrees per day, completing one precession period over several months. A tilt could also affect the estimates of the age of Europa's ocean. Tidal forces are thought to generate the heat that keeps Europa's ocean liquid, and a tilt in the spin axis would cause more heat to be generated by tidal forces. Such additional heat would have allowed the ocean to remain liquid for a longer time. However, it has not yet been determined when this hypothesized shift in the spin axis might have occurred.[35]
Europa is slightly smaller than the Moon. At just over 3,100 kilometres (1,900 mi) in diameter, it is the sixth-largest moon and fifteenth-largest object in the Solar System. Though by a wide margin the least massive of the Galilean satellites, it is nonetheless more massive than all known moons in the Solar System smaller than itself combined.[36] Its bulk density suggests that it is similar in composition to the terrestrial planets, being primarily composed of silicate rock.[37]
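The silicate-like composition inferred from the bulk density can be checked with the figures at hand. In the sketch below, the ~3,100 km diameter comes from the text, while Europa's mass (~4.8 × 10²² kg) is an assumed literature value rather than a figure quoted here.

import math

# Bulk density = mass / volume of a sphere with the stated diameter.
mass = 4.8e22               # kg, Europa's mass (assumed literature value)
radius = 3_100e3 / 2        # m, half the ~3,100 km diameter quoted above

density = mass / ((4 / 3) * math.pi * radius**3)
print(f"{density:.0f} kg/m^3")   # ~3,080 kg/m^3, comparable to silicate rock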
It is estimated that Europa has an outer layer of water around 100 km (62 mi) thick; a part frozen as its crust, and a part as a liquid ocean underneath the ice. Recent magnetic-field data from the Galileo orbiter showed that Europa has an induced magnetic field through interaction with Jupiter's, which suggests the presence of a subsurface conductive layer.[38] This layer is likely a salty liquid-water ocean. Portions of the crust are estimated to have undergone a rotation of nearly 80°, nearly flipping over (see true polar wander), which would be unlikely if the ice were solidly attached to the mantle.[39] Europa probably contains a metallic iron core.[40][41]
Europa is the smoothest known object in the Solar System, lacking large-scale features such as mountains and craters.[42] However, according to one study, Europa's equator may be covered in icy spikes called penitentes, up to 15 meters high, formed as direct overhead sunlight at the equator causes the ice to sublime, opening vertical cracks.[43][44][45] Although the imaging available from the Galileo orbiter does not have the resolution needed to confirm this, radar and thermal data are consistent with this interpretation.[45] The prominent markings crisscrossing Europa appear to be mainly albedo features that emphasize low topography. There are few craters on Europa, because its surface is tectonically active and therefore young.[46][47] Europa's icy crust has an albedo (light reflectivity) of 0.64, one of the highest of all moons.[31][47] This indicates a young and active surface: based on estimates of the frequency of cometary bombardment that Europa experiences, the surface is about 20 to 180 million years old.[48] There is currently no full scientific consensus among the sometimes contradictory explanations for the surface features of Europa.[49]
The radiation level at the surface of Europa is equivalent to a dose of about 5400 mSv (540 rem) per day,[50] an amount of radiation that would cause severe illness or death in human beings exposed for a single day.[51]
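For perspective, the sketch below divides an assumed acute whole-body dose of about 5 Sv, a commonly cited roughly lethal threshold that is not itself a figure from this article, by the stated daily surface dose.

# 5,400 mSv/day at the surface versus an assumed ~5 Sv acute lethal dose.
daily_dose_sv = 5.4          # Sv/day, from the 5400 mSv/day quoted above
acute_lethal_sv = 5.0        # Sv, assumed rough lethality threshold

print(f"{acute_lethal_sv / daily_dose_sv:.2f} days")   # ~0.93, i.e. under one day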
Europa's most striking surface features are a series of dark streaks crisscrossing the entire globe, called lineae (English: lines). Close examination shows that the edges of Europa's crust on either side of the cracks have moved relative to each other. The larger bands are more than 20 km (12 mi) across, often with dark, diffuse outer edges, regular striations, and a central band of lighter material.[52]
The most likely hypothesis is that the lineae on Europa were produced by a series of eruptions of warm ice as the Europan crust spread open to expose warmer layers beneath.[53] The effect would have been similar to that seen in Earth's oceanic ridges. These various fractures are thought to have been caused in large part by the tidal flexing exerted by Jupiter. Because Europa is tidally locked to Jupiter, and therefore always maintains approximately the same orientation towards Jupiter, the stress patterns should form a distinctive and predictable pattern. However, only the youngest of Europa's fractures conform to the predicted pattern; other fractures appear to occur at increasingly different orientations the older they are. This could be explained if Europa's surface rotates slightly faster than its interior, an effect that is possible due to the subsurface ocean mechanically decoupling Europa's surface from its rocky mantle and the effects of Jupiter's gravity tugging on Europa's outer ice crust.[54] Comparisons of Voyager and Galileo spacecraft photos serve to put an upper limit on this hypothetical slippage. A full revolution of the outer rigid shell relative to the interior of Europa takes at least 12,000 years.[55] Studies of Voyager and Galileo images have revealed evidence of subduction on Europa's surface, suggesting that, just as the cracks are analogous to ocean ridges,[56][57] so plates of icy crust analogous to tectonic plates on Earth are recycled into the molten interior. This evidence of both crustal spreading at bands[56] and convergence at other sites[57] suggests that Europa may have active plate tectonics, similar to Earth.[16][contradictory] However, the physics driving these plate tectonics are not likely to resemble those driving terrestrial plate tectonics, as the forces resisting potential Earth-like plate motions in Europa's crust are significantly stronger than the forces that could drive them.[58]
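The 12,000-year lower bound on a full shell revolution implies only a very slow surface drift. The sketch below derives the corresponding bound on the equatorial drift rate, using the just-over-3,100 km diameter quoted earlier in the article.

import math

# Upper bound on non-synchronous surface drift if one full revolution
# of the shell takes at least 12,000 years.
circumference_km = math.pi * 3_100   # equatorial circumference, ~9,700 km
min_period_yr = 12_000

print(f"{circumference_km / min_period_yr:.2f} km/yr")   # at most ~0.81 km/yr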
Other features present on Europa are circular and elliptical lenticulae (Latin for "freckles"). Many are domes, some are pits and some are smooth, dark spots. Others have a jumbled or rough texture. The dome tops look like pieces of the older plains around them, suggesting that the domes formed when the plains were pushed up from below.[59]
One hypothesis states that these lenticulae were formed by diapirs of warm ice rising up through the colder ice of the outer crust, much like magma chambers in Earth's crust.[59] The smooth, dark spots could be formed by meltwater released when the warm ice breaks through the surface. The rough, jumbled lenticulae (called regions of "chaos"; for example, Conamara Chaos) would then be formed from many small fragments of crust, embedded in hummocky, dark material, appearing like icebergs in a frozen sea.[60]
An alternative hypothesis suggests that lenticulae are actually small areas of chaos and that the claimed pits, spots and domes are artifacts resulting from over-interpretation of early, low-resolution Galileo images. The implication is that the ice is too thin to support the convective diapir model of feature formation.[61][62]
In November 2011, a team of researchers from the University of Texas at Austin and elsewhere presented evidence in the journal Nature suggesting that many "chaos terrain" features on Europa sit atop vast lakes of liquid water.[63][64] These lakes would be entirely encased in Europa's icy outer shell and distinct from a liquid ocean thought to exist farther down beneath the ice shell. Full confirmation of the lakes' existence will require a space mission designed to probe the ice shell either physically or indirectly, for example, using radar.[64]
Scientists' consensus is that a layer of liquid water exists beneath Europa's surface, and that heat from tidal flexing allows the subsurface ocean to remain liquid.[15][65] Europa's surface temperature averages about 110 K (−160 °C; −260 °F) at the equator and only 50 K (−220 °C; −370 °F) at the poles, keeping Europa's icy crust as hard as granite.[11] The first hints of a subsurface ocean came from theoretical considerations of tidal heating (a consequence of Europa's slightly eccentric orbit and orbital resonance with the other Galilean moons). Galileo imaging team members argue for the existence of a subsurface ocean from analysis of Voyager and Galileo images.[65] The most dramatic example is "chaos terrain", a common feature on Europa's surface that some interpret as a region where the subsurface ocean has melted through the icy crust. This interpretation is controversial. Most geologists who have studied Europa favor what is commonly called the "thick ice" model, in which the ocean has rarely, if ever, directly interacted with the present surface.[66] The best evidence for the thick-ice model is a study of Europa's large craters. The largest impact structures are surrounded by concentric rings and appear to be filled with relatively flat, fresh ice; based on this and on the calculated amount of heat generated by Europan tides, it is estimated that the outer crust of solid ice is approximately 10–30 km (6–19 mi) thick,[67] including a ductile "warm ice" layer, which could mean that the liquid ocean underneath may be about 100 km (60 mi) deep.[68] This leads to an estimated volume for Europa's ocean of 3 × 10¹⁸ m³, two to three times the volume of Earth's oceans.[69][70]
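The quoted ocean volume follows from simple spherical-shell geometry. In the sketch below, the ~20 km ice thickness (mid-range of the 10–30 km estimate) and ~100 km ocean depth come from the text, while Europa's ~1,561 km radius and Earth's ~1.335 × 10¹⁸ m³ ocean volume are assumed reference values.

import math

# Ocean volume as the spherical shell between the base of the ice
# and the ocean floor.
R = 1_561e3                  # m, Europa's surface radius (assumed)
r_top = R - 20e3             # base of a ~20 km ice shell
r_bottom = r_top - 100e3     # ocean floor, ~100 km deeper

sphere = lambda r: (4 / 3) * math.pi * r**3
ocean = sphere(r_top) - sphere(r_bottom)
print(f"{ocean:.1e} m^3")                        # ~2.8e18 m^3
print(f"{ocean / 1.335e18:.1f}x Earth's oceans") # ~2.1x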
The thin-ice model suggests that Europa's ice shell may be only a few kilometers thick. However, most planetary scientists conclude that this model considers only those topmost layers of Europa's crust that behave elastically when affected by Jupiter's tides. One example is flexure analysis, in which Europa's crust is modeled as a plane or sphere weighted and flexed by a heavy load. Models such as this suggest the outer elastic portion of the ice crust could be as thin as 200 metres (660 ft). If the ice shell of Europa is really only a few kilometers thick, this "thin ice" model would mean that regular contact of the liquid interior with the surface could occur through open ridges, causing the formation of areas of chaotic terrain.[71]
The Galileo orbiter found that Europa has a weak magnetic moment, which is induced by the varying part of the Jovian magnetic field. The field strength at the magnetic equator (about 120 nT) created by this magnetic moment is about one-sixth the strength of Ganymede's field and six times the value of Callisto's.[72] The existence of the induced moment requires a layer of a highly electrically conductive material in Europa's interior. The most plausible candidate for this role is a large subsurface ocean of liquid saltwater.[40]
Since the Voyager spacecraft flew past Europa in 1979, scientists have worked to understand the composition of the reddish-brown material that coats fractures and other geologically youthful features on Europa's surface.[73] Spectrographic evidence suggests that the dark, reddish streaks and features on Europa's surface may be rich in salts such as magnesium sulfate, deposited by evaporating water that emerged from within.[74] Sulfuric acid hydrate is another possible explanation for the contaminant observed spectroscopically.[75] In either case, because these materials are colorless or white when pure, some other material must also be present to account for the reddish color, and sulfur compounds are suspected.[76]
Another hypothesis for the colored regions is that they are composed of abiotic organic compounds collectively called tholins.[77][78][79] The morphology of Europa's impact craters and ridges is suggestive of fluidized material welling up from the fractures where pyrolysis and radiolysis take place. In order to generate colored tholins on Europa, there must be a source of materials (carbon, nitrogen, and water) and a source of energy to make the reactions occur. Impurities in the water ice crust of Europa are presumed both to emerge from the interior through cryovolcanic events that resurface the body, and to accumulate from space as interplanetary dust.[77] Tholins have important astrobiological implications, as they may play a role in prebiotic chemistry and abiogenesis.[80][81][82]
The presence of sodium chloride in the internal ocean has been suggested by a 450 nm absorption feature, characteristic of irradiated NaCl crystals, that has been spotted in HST observations of the chaos regions, presumed to be areas of recent subsurface upwelling.[83]
Tidal heating occurs through the tidal friction and tidal flexing processes caused by tidal acceleration: orbital and rotational energy are dissipated as heat in the core of the moon, the internal ocean, and the ice crust.[84]
Ocean tides are converted to heat by frictional losses in the oceans and their interaction with the solid bottom and with the top ice crust. In late 2008, it was suggested that Jupiter may keep Europa's oceans warm by generating large planetary tidal waves on Europa because of its small but non-zero obliquity. This generates so-called Rossby waves that travel quite slowly, at just a few kilometers per day, but can generate significant kinetic energy. For the current axial tilt estimate of 0.1 degree, the resonance from Rossby waves would contain 7.3 × 10¹⁸ J of kinetic energy, two thousand times larger than that of the flow excited by the dominant tidal forces.[85][86] Dissipation of this energy could be the principal heat source of Europa's ocean.[85][86]
Tidal flexing kneads Europa's interior and ice shell, which becomes a source of heat.[87] Depending on the amount of tilt, the heat generated by the ocean flow could be 100 to thousands of times greater than the heat generated by the flexing of Europa's rocky core in response to gravitational pull from Jupiter and the other moons circling that planet.[88] Europa's seafloor could be heated by the moon's constant flexing, driving hydrothermal activity similar to undersea volcanoes in Earth's oceans.[84]
Experiments and ice modeling published in 2016 indicate that tidal flexing dissipation can generate one order of magnitude more heat in Europa's ice than scientists had previously assumed.[89][90] The results indicate that most of the heat generated by the ice actually comes from deformation of its crystalline structure (lattice), and not from friction between the ice grains.[89][90] The greater the deformation of the ice sheet, the more heat is generated.
In addition to tidal heating, the interior of Europa could also be heated by the decay of radioactive material (radiogenic heating) within the rocky mantle.[84][91] However, the values observed and modeled are about one hundred times higher than those that radiogenic heating alone could produce,[92] implying that tidal heating plays the leading role on Europa.[93]
The Hubble Space Telescope acquired an image of Europa in 2012 that was interpreted as a plume of water vapor erupting from near its south pole.[96][95] The image suggests the plume may be 200 km (120 mi) high, or more than 20 times the height of Mt. Everest.[18][97][98] It has been suggested that, if such plumes exist, they are episodic[99] and likely to appear when Europa is at its farthest point from Jupiter, in agreement with tidal force modeling predictions.[100] Additional imaging evidence from the Hubble Space Telescope was presented in September 2016.[101][102]
In May 2018, astronomers provided supporting evidence of water plume activity on Europa, based on an updated critical analysis of data obtained from the Galileo space probe, which orbited Jupiter between 1995 and 2003. Galileo flew by Europa in 1997 within 206 km (128 mi) of the moon's surface, and the researchers suggest it may have flown through a water plume.[19][20][21][22] Such plume activity could help researchers search for life from the subsurface Europan ocean without having to land on the moon.[19]
Jupiter's tidal forces on Europa are about 1,000 times stronger than the Moon's effect on Earth. The only other moon in the Solar System exhibiting water vapor plumes is Enceladus.[18] The estimated eruption rate at Europa is about 7000 kg/s,[100] compared to about 200 kg/s for the plumes of Enceladus.[103][104] If confirmed, the plumes would open the possibility of flying through one and obtaining a sample to analyze in situ, without having to use a lander and drill through kilometers of ice.[101][105][106]
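The "about 1,000 times" figure can be reproduced from the standard scaling of tidal acceleration, 2GMr/d³, for a perturber of mass M acting across a body of radius r at distance d. In the sketch below, Europa's orbital radius comes from earlier in the article; the masses and radii of Jupiter, the Moon, Earth, and Europa are assumed reference values.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def tidal_accel(M, r, d):
    # Peak tidal acceleration (m/s^2) across a body of radius r
    # raised by a mass M at distance d.
    return 2 * G * M * r / d**3

jupiter_on_europa = tidal_accel(1.898e27, 1.561e6, 6.709e8)
moon_on_earth = tidal_accel(7.35e22, 6.371e6, 3.844e8)
print(f"{jupiter_on_europa / moon_on_earth:.0f}x")   # ~1,200x, of order 1,000x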
Observations with the Goddard High Resolution Spectrograph of the Hubble Space Telescope, first described in 1995, revealed that Europa has a thin atmosphere composed mostly of molecular oxygen (O2),[107][108] and some water vapor.[109][110][111] The surface pressure of Europa's atmosphere is 0.1 μPa, or 10⁻¹² times that of the Earth.[12] In 1997, the Galileo spacecraft confirmed the presence of a tenuous ionosphere (an upper-atmospheric layer of charged particles) around Europa created by solar radiation and energetic particles from Jupiter's magnetosphere,[112][113] providing evidence of an atmosphere.
Unlike the oxygen in Earth's atmosphere, Europa's is not of biological origin. The surface-bounded atmosphere forms through radiolysis, the dissociation of molecules through radiation.[114] Solar ultraviolet radiation and charged particles (ions and electrons) from the Jovian magnetospheric environment collide with Europa's icy surface, splitting water into oxygen and hydrogen constituents. These chemical components are then adsorbed and "sputtered" into the atmosphere. The same radiation also creates collisional ejections of these products from the surface, and the balance of these two processes forms an atmosphere.[115] Molecular oxygen is the densest component of the atmosphere because it has a long lifetime; after returning to the surface, it does not stick (freeze) like a water or hydrogen peroxide molecule but rather desorbs from the surface and starts another ballistic arc. Molecular hydrogen never returns to the surface, as it is light enough to escape Europa's surface gravity.[116][117]
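Why hydrogen escapes while oxygen stays put follows from a Jeans-escape comparison of Europa's escape velocity with molecular thermal speeds. In the sketch below, Europa's mass and the ~100 K gas temperature are assumed reference values; the radius follows from the ~3,100 km diameter quoted earlier.

import math

# Escape velocity versus mean thermal speed for H2 and O2.
G, k_B = 6.674e-11, 1.381e-23
M, R = 4.8e22, 1.55e6             # Europa's mass (assumed) and radius, SI units
T = 100.0                         # K, rough gas temperature (assumed)
m_H2, m_O2 = 3.35e-27, 5.31e-26   # molecular masses, kg

v_esc = math.sqrt(2 * G * M / R)
v_th = lambda m: math.sqrt(3 * k_B * T / m)
print(f"escape {v_esc:.0f} m/s, H2 {v_th(m_H2):.0f} m/s, O2 {v_th(m_O2):.0f} m/s")
# ~2,030 vs ~1,110 (H2) and ~280 (O2) m/s: the fast tail of the H2 speed
# distribution exceeds escape velocity, while O2 remains bound.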
Observations of the surface have revealed that some of the molecular oxygen produced by radiolysis is not ejected from the surface. Because the surface may interact with the subsurface ocean (considering the geological discussion above), this molecular oxygen may make its way to the ocean, where it could aid in biological processes.[118] One estimate suggests that, given the turnover rate inferred from the apparent ~0.5 Gyr maximum age of Europa's surface ice, subduction of radiolytically generated oxidizing species might well lead to oceanic free oxygen concentrations that are comparable to those in terrestrial deep oceans.[119]
The molecular hydrogen that escapes Europa's gravity, along with atomic and molecular oxygen, forms a gas torus in the vicinity of Europa's orbit around Jupiter. This "neutral cloud" has been detected by both the Cassini and Galileo spacecraft, and has a greater content (number of atoms and molecules) than the neutral cloud surrounding Jupiter's inner moon Io. Models predict that almost every atom or molecule in Europa's torus is eventually ionized, thus providing a source to Jupiter's magnetospheric plasma.[120]
Exploration of Europa began with the Jupiter flybys of Pioneer 10 and 11 in 1973 and 1974 respectively. The first closeup photos were of low resolution compared to later missions. The two Voyager probes traveled through the Jovian system in 1979, providing more-detailed images of Europa's icy surface. The images caused many scientists to speculate about the possibility of a liquid ocean underneath.
Starting in 1995, the Galileo space probe orbited Jupiter for eight years, until 2003, and provided the most detailed examination of the Galilean moons to date. It included the "Galileo Europa Mission" and "Galileo Millennium Mission", with numerous close flybys of Europa.[121] In 2007, New Horizons imaged Europa, as it flew by the Jovian system while on its way to Pluto.[122]
Conjectures regarding extraterrestrial life have ensured a high profile for Europa and have led to steady lobbying for future missions.[123][124] The aims of these missions have ranged from examining Europa's chemical composition to searching for extraterrestrial life in its hypothesized subsurface oceans.[125][126] Robotic missions to Europa need to endure the high-radiation environment around Europa and Jupiter.[124] Europa receives about 5.40 Sv of radiation per day.[127]
In 2011, a Europa mission was recommended by the U.S. Planetary Science Decadal Survey.[128] In response, NASA commissioned Europa lander concept studies in 2011, along with concepts for a Europa flyby (Europa Clipper) and a Europa orbiter.[129][130] The orbiter element concentrates on the "ocean" science, while the multiple-flyby element (Clipper) concentrates on the chemistry and energy science. On 13 January 2014, the House Appropriations Committee announced a new bipartisan bill that included $80 million in funding to continue the Europa mission concept studies.[131][132]
In the early 2000s, the Jupiter Europa Orbiter, led by NASA, and the Jupiter Ganymede Orbiter, led by ESA, were proposed together as an Outer Planet Flagship Mission to Jupiter's icy moons called the Europa Jupiter System Mission, with a planned launch in 2020.[140] In 2009 it was given priority over the Titan Saturn System Mission.[141] At that time, there was competition from other proposals;[142] Japan proposed the Jupiter Magnetospheric Orbiter.
Jovian Europa Orbiter was an ESA Cosmic Vision concept study from 2007. Another concept was Ice Clipper,[143] which would have used an impactor similar to the Deep Impact mission—it would make a controlled crash into the surface of Europa, generating a plume of debris that would then be collected by a small spacecraft flying through the plume.[143][144]
Jupiter Icy Moons Orbiter (JIMO) was a partially developed fission-powered spacecraft with ion thrusters that was cancelled in 2006.[124][145] It was part of Project Prometheus.[145] The Europa Lander Mission proposed a small nuclear-powered Europa lander for JIMO.[146] It would travel with the orbiter, which would also function as a communication relay to Earth.[146]
Europa Orbiter – Its objective would be to characterize the extent of the ocean and its relation to the deeper interior. Instrument payload could include a radio subsystem, laser altimeter, magnetometer, Langmuir probe, and a mapping camera.[147][148] The Europa Orbiter received a go-ahead in 1999 but was canceled in 2002. This orbiter featured a special ice-penetrating radar that would allow it to scan below the surface.[42]
More ambitious ideas have been put forward, including an impactor in combination with a thermal drill to search for biosignatures that might be frozen in the shallow subsurface.[149][150]
Another proposal put forward in 2001 calls for a large nuclear-powered "melt probe" (cryobot) that would melt through the ice until it reached an ocean below.[124][151] Once it reached the water, it would deploy an autonomous underwater vehicle (hydrobot) that would gather information and send it back to Earth.[152] Both the cryobot and the hydrobot would have to undergo some form of extreme sterilization to prevent detection of Earth organisms instead of native life and to prevent contamination of the subsurface ocean.[153] This suggested approach has not yet reached a formal conceptual planning stage.[154]
So far, there is no evidence that life exists on Europa, but Europa has emerged as one of the most likely locations in the Solar System for potential habitability.[119][155] Life could exist in its under-ice ocean, perhaps in an environment similar to Earth's deep-ocean hydrothermal vents.[125][156] Even if Europa lacks volcanic hydrothermal activity, a 2016 NASA study found that Earth-like levels of hydrogen and oxygen could be produced through processes related to serpentinization and ice-derived oxidants, which do not directly involve volcanism.[157] In 2015, scientists announced that salt from a subsurface ocean may be coating some geological features on Europa, suggesting that the ocean is interacting with the seafloor. This may be important in determining whether Europa could be habitable.[17][158] The likely presence of liquid water in contact with Europa's rocky mantle has spurred calls to send a probe there.[159]
The energy provided by tidal forces drives active geological processes within Europa's interior, just as it does, to a far more obvious degree, on its sister moon Io. Although Europa, like the Earth, may possess an internal energy source from radioactive decay, the energy generated by tidal flexing would be several orders of magnitude greater than any radiological source.[160] Life on Europa could exist clustered around hydrothermal vents on the ocean floor, or below the ocean floor, where endoliths are known to live on Earth. Alternatively, it could exist clinging to the lower surface of Europa's ice layer, much like algae and bacteria in Earth's polar regions, or float freely in Europa's ocean.[161] If Europa's ocean is too cold, biological processes similar to those known on Earth could not take place. If it is too salty, only extreme halophiles could survive in that environment.[161] In 2010, Richard Greenberg of the University of Arizona proposed a model in which irradiation of ice on Europa's surface could saturate its crust with oxygen and peroxide, which could then be transported by tectonic processes into the interior ocean. Such a process could render Europa's ocean as oxygenated as our own within just 12 million years, allowing the existence of complex, multicellular lifeforms.[162]
Evidence suggests the existence of lakes of liquid water entirely encased in Europa's icy outer shell and distinct from a liquid ocean thought to exist farther down beneath the ice shell.[63][64] If confirmed, the lakes could be yet another potential habitat for life. Evidence suggests that hydrogen peroxide is abundant across much of the surface of Europa.[163] Because hydrogen peroxide decays into oxygen and water when combined with liquid water, the authors argue that it could be an important energy supply for simple life forms.
Clay-like minerals (specifically, phyllosilicates), often associated with organic matter on Earth, have been detected on the icy crust of Europa.[164] The presence of the minerals may have been the result of a collision with an asteroid or comet.[164] Some scientists have speculated that life on Earth could have been blasted into space by asteroid collisions and arrived on the moons of Jupiter in a process called lithopanspermia.[165]
en/1909.html.txt
ADDED
@@ -0,0 +1,71 @@
In Greek mythology, Europa (/jʊəˈroʊpə, jə-/; Ancient Greek: Εὐρώπη, Eurṓpē, Attic Greek pronunciation: [eu̯.rɔ̌ː.pɛː]) was a Phoenician princess of Argive origin, the mother of King Minos of Crete, and the figure after whom the continent Europe is named. The story of her abduction by Zeus in the form of a bull was a Cretan story; as classicist Károly Kerényi points out, "most of the love-stories concerning Zeus originated from more ancient tales describing his marriages with goddesses. This can especially be said of the story of Europa."[1]
Europa's earliest literary reference is in the Iliad, which is commonly dated to the 8th century BC.[2] Another early reference to her is in a fragment of the Hesiodic Catalogue of Women, discovered at Oxyrhynchus.[3] The earliest vase-painting securely identifiable as Europa dates from mid-7th century BC.[4]
Greek Εὐρώπη (Eurṓpē) contains the elements εὐρύς (eurus), "wide, broad"[5] and ὤψ/ὠπ-/ὀπτ- (ōps/ōp-/opt-) "eye, face, countenance".[6] Broad has been an epithet of Earth herself in the reconstructed Proto-Indo-European religion.[7]
It is common in ancient Greek mythology and geography to identify lands or rivers with female figures. Thus, Europa is first used in a geographic context in the Homeric Hymn to Delian Apollo, in reference to the western shore of the Aegean Sea.[8]
As a name for a part of the known world, it is first used in the 6th century BC by Anaximander and Hecataeus.[9]
An etymology from εὐρύς (eurus) is nevertheless weak: first, the -u stem of εὐρύς disappears in Εὐρώπη (Europa); second, the expected form εὐρυώπη (euryopa), which retains the -u stem, in fact exists.
An alternative suggestion, due to Ernest Klein and Giovanni Semerano (1966), attempted to connect the name with a Semitic term for "west": Akkadian erebu, meaning "to go down, set" (in reference to the sun), and Phoenician 'ereb, "evening; west", which would parallel occident (the resemblance to Erebus, from PIE *h1regʷos, "darkness", is accidental, however). Barry (1999) adduces the word Ereb on an Assyrian stele with the meaning of "night", "[the country of] sunset", in opposition to Asu, "[the country of] sunrise", i.e. Asia (Anatolia coming equally from Ἀνατολή, "(sun)rise", "east").[10] This proposal is mostly considered unlikely or untenable.[11][12][13]
Sources differ in details regarding Europa's family, but agree that she is Phoenician, and from a lineage that ultimately descended from the Argive princess Io, the mythical nymph beloved of Zeus, who was transformed into a heifer. She is generally said to be the daughter of Agenor, the Phoenician King of Tyre;[14] the Syracusan poet Moschus[15] makes her mother Queen Telephassa ("far-shining"), but elsewhere her mother is Argiope ("white-faced").[16] Other sources, such as the Iliad, claim that she is the daughter of Agenor's son, the "sun-red" Phoenix.[17] It is generally agreed that she had two brothers, Cadmus, who brought the alphabet to mainland Greece, and Cilix, who gave his name to Cilicia in Asia Minor, with the author of the Bibliotheke including Phoenix as a third. Some therefore interpret the tradition to mean that her brother Phoenix (when he is assumed to be a son of Agenor) gave his siblings' names to his own three children, so that this Europa, in that case a niece of the former, was also loved by Zeus, the shared name causing confusion among later writers. After arriving in Crete, Europa had three sons fathered by Zeus: Minos, Rhadamanthus, and Sarpedon, the three of whom became the three judges of the Underworld when they died.[14][18] In Crete she married Asterion, also rendered Asterius, and became mother (or step-mother) of his daughter Crete. In some accounts, Carnus[19] and Alagonia[20] were added to the list of children of Europa and Zeus.
The Dictionary of Classical Mythology explains that Zeus was enamoured of Europa and decided to seduce or rape her, the two being near-equivalent in Greek myth.[25] He transformed himself into a tame white bull and mixed in with her father's herds. While Europa and her helpers were gathering flowers, she saw the bull, caressed his flanks, and eventually got onto his back. Zeus took that opportunity and ran to the sea and swam, with her on his back, to the island of Crete. He then revealed his true identity, and Europa became the first queen of Crete. Zeus gave her a necklace made by Hephaestus[3] and three additional gifts: Talos, Laelaps and a javelin that never missed. Zeus later re-created the shape of the white bull in the stars, which is now known as the constellation Taurus. Some readers interpret as manifestations of this same bull the Cretan beast that was encountered by Heracles, the Marathonian Bull slain by Theseus (and that fathered the Minotaur). Roman mythology adopted the tale of the Raptus, also known as "The Abduction of Europa" and "The Seduction of Europa", substituting the god Jupiter for Zeus.
The myth of Europa and Zeus may have its origin in a sacred union between the Phoenician deities `Aštar and `Aštart (Astarte), in bovine form. Having given birth to three sons by Zeus, Europa married a king Asterios, this being also the name of the Minotaur and an epithet of Zeus, likely derived from the name `Aštar.[26]
According to Herodotus' rationalizing approach, Europa was kidnapped by Greeks (probably Cretans) who were seeking to avenge the kidnapping of Io, a princess from Argos. His variant story may have been an attempt to rationalize the earlier myth; or the present myth may be a garbled version of facts—the abduction of a Phoenician aristocrat—later enunciated without gloss by Herodotus.
In the territory of Phoenician Sidon, Lucian of Samosata (2nd century AD) was informed that the temple of Astarte, whom Lucian equated with the moon goddess, was sacred to Europa:
The paradox, as it seemed to Lucian, would be solved if Europa is Astarte in her guise as the full, "broad-faced" moon.
There were two competing myths[27] relating how Europa came into the Hellenic world, but they agreed that she came to Crete (Kríti), where the sacred bull was paramount. In the more familiar telling she was seduced by the god Zeus in the form of a bull, who breathed from his mouth a saffron crocus[3] and carried her away to Crete on his back—to be welcomed by Asterion,[28] but according to the more literal, euhemerist version that begins the account of Persian-Hellene confrontations of Herodotus,[29] she was kidnapped by Cretans, who likewise were said to have taken her to Crete. The mythical Europa cannot be separated from the mythology of the sacred bull, which had been worshipped in the Levant. In 2012, an archaeological mission of the British Museum led by the Lebanese archaeologist Claude Doumet Serhal discovered, at the site of the old American school in Sidon, Lebanon, currency that depicts Europa riding the bull with her veil flying like a bow, further evidence of Europa's Phoenician origin.[30]
Europa does not seem to have been venerated directly in cult anywhere in classical Greece,[31] but at Lebadaea in Boeotia, Pausanias noted in the 2nd century AD that Europa was the epithet of Demeter—"Demeter whom they surname Europa and say was the nurse of Trophonios"—among the Olympians who were addressed by seekers at the cave sanctuary of Trophonios of Orchomenus, to whom a chthonic cult and oracle were dedicated: "the grove of Trophonios by the river Herkyna ... there is also a sanctuary of Demeter Europa ... the nurse of Trophonios."[32]
Europa provided the substance of a brief Hellenistic epic written in the mid-2nd century BCE by Moschus, a bucolic poet and friend of the Alexandrian grammarian Aristarchus of Samothrace, born at Syracuse.[33]
In Metamorphoses Book II, the poet Ovid wrote the following depiction of Jupiter's seduction:
His picturesque details belong to anecdote and fable: in all the depictions, whether she straddles the bull, as in archaic vase-paintings or the ruined metope fragment from Sikyon, or sits gracefully sidesaddle as in a mosaic from North Africa, there is no trace of fear. Often Europa steadies herself by touching one of the bull's horns, acquiescing.
Her tale is also mentioned in Nathaniel Hawthorne's Tanglewood Tales. Though his story titled "Dragon's Teeth" is largely about Cadmus, it begins with an elaborate, albeit toned-down, version of Europa's abduction by the beautiful bull.
The tale also features as the subject of a poem and film in the Enderby sequence of novels by Anthony Burgess. She is remembered in De Mulieribus Claris, a collection of biographies of historical and mythological women by the Florentine author Giovanni Boccaccio, composed in 1361–62. It is notable as the first collection in Western literature devoted exclusively to biographies of women.[34]
Europa in a fresco at Pompeii, contemporary with Ovid.
Europa velificans, "her fluttering tunic… in the breeze" (mosaic, Zeugma Mosaic Museum)
The Rape of Europa by Titian (1562)
The Rape of Europa by Jean-Baptiste Marie Pierre (1750)
The Rape of Europa by Francisco Goya (1772)
The Rape of Europa by Félix Vallotton (1908)
The Rape of Europa by Valentin Serov (1910)
The name Europe, as a geographical term, was used by Ancient Greek geographers such as Strabo to refer to part of Thrace below the Balkan mountains.[35] Later, under the Roman Empire the name was given to a Thracian province.
In all Romance, Germanic, Slavic, Baltic, Celtic, Iranian, and Uralic languages (Hungarian Európa, Finnish Eurooppa, Estonian Euroopa), the name is derived from the Greek word Eurōpē (Εὐρώπη).
Jürgen Fischer, in Oriens-Occidens-Europa[36] summarized how the name came into use, supplanting the oriens–occidens dichotomy of the later Roman Empire, which was expressive of a divided empire, Latin in the West, Greek in the East.
In the 8th century, ecclesiastical uses of "Europa" for the imperium of Charlemagne provide the source for the modern geographical term. The first use of the term Europenses, to describe peoples of the Christian, western portion of the continent, appeared in the Hispanic Latin Chronicle of 754, sometimes attributed to an author called Isidore Pacensis[37] in reference to the Battle of Tours fought against Muslim forces.
The European Union has also used Europa as a symbol of pan-Europeanism, notably by naming its web portal after her and depicting her on the Greek €2 coin and on several gold and silver commemorative coins (e.g. the Belgian €10 European Expansion coin). Her name appeared on postage stamps celebrating the Council of Europe, which were first issued in 1956. The second series of euro banknotes is known as the Europa Series and bears her likeness in the watermark and hologram.
The metal europium, a rare-earth element, was named in 1901 after the continent.[38]
The invention of the telescope revealed that the planet Jupiter, clearly visible to the naked eye and known to humanity since prehistoric times, has an attendant family of moons. These were named for male and female lovers of the god and other mythological persons associated with him. The smallest of Jupiter's Galilean moons was named after Europa.
en/191.html.txt
ADDED
@@ -0,0 +1 @@
Native Americans may refer to:
en/1911.html.txt
ADDED
@@ -0,0 +1,111 @@
Europa /jʊˈroʊpə/, or Jupiter II, is the smallest of the four Galilean moons orbiting Jupiter, and the sixth-closest to the planet of all the 79 known moons of Jupiter. It is also the sixth-largest moon in the Solar System. Europa was discovered in 1610 by Galileo Galilei[1] and was named after Europa, the Phoenician mother of King Minos of Crete and lover of Zeus (the Greek equivalent of the Roman god Jupiter).
Slightly smaller than Earth's Moon, Europa is primarily made of silicate rock and has a water-ice crust[13] and probably an iron–nickel core. It has a very thin atmosphere, composed primarily of oxygen. Its surface is striated by cracks and streaks, but craters are relatively few. In addition to Earth-bound telescope observations, Europa has been examined by a succession of space-probe flybys, the first occurring in the early 1970s.
Europa has the smoothest surface of any known solid object in the Solar System. The apparent youth and smoothness of the surface have led to the hypothesis that a water ocean exists beneath the surface, which could conceivably harbor extraterrestrial life.[14] The predominant model suggests that heat from tidal flexing causes the ocean to remain liquid and drives ice movement similar to plate tectonics, absorbing chemicals from the surface into the ocean below.[15][16] Sea salt from a subsurface ocean may be coating some geological features on Europa, suggesting that the ocean is interacting with the sea floor. This may be important in determining whether Europa could be habitable.[17] In addition, the Hubble Space Telescope detected water vapor plumes similar to those observed on Saturn's moon Enceladus, which are thought to be caused by erupting cryogeysers.[18] In May 2018, astronomers provided supporting evidence of water plume activity on Europa, based on an updated analysis of data obtained from the Galileo space probe, which orbited Jupiter from 1995 to 2003. Such plume activity could help researchers in a search for life from the subsurface Europan ocean without having to land on the moon.[19][20][21][22]
The Galileo mission, launched in 1989, provides the bulk of current data on Europa. No spacecraft has yet landed on Europa, although there have been several proposed exploration missions. The European Space Agency's Jupiter Icy Moons Explorer (JUICE) is a mission to Ganymede that is due to launch in 2022 and will include two flybys of Europa.[23] NASA's planned Europa Clipper should be launched in 2025.[24]
Europa, along with Jupiter's three other large moons, Io, Ganymede, and Callisto, was discovered by Galileo Galilei on 8 January 1610,[1] and possibly independently by Simon Marius. The first reported observation of Io and Europa was made by Galileo on 7 January 1610 using a 20×-magnification refracting telescope at the University of Padua. However, in that observation, Galileo could not separate Io and Europa due to the low magnification of his telescope, so that the two were recorded as a single point of light. The following day, 8 January 1610 (used as the discovery date for Europa by the IAU), Io and Europa were seen for the first time as separate bodies during Galileo's observations of the Jupiter system.[1]
|
12 |
+
|
13 |
+
Europa is named after Europa, daughter of the king of Tyre, a Phoenician noblewoman in Greek mythology. Like all the Galilean satellites, Europa is named after a lover of Zeus, the Greek counterpart of Jupiter. Europa was courted by Zeus and became the queen of Crete.[25] The naming scheme was suggested by Simon Marius,[26] who attributed the proposal to Johannes Kepler.[26][27]:
|
14 |
+
|
15 |
+
... Inprimis autem celebrantur tres fœminæ Virgines, quarum furtivo amore Iupiter captus & positus est... Europa Agenoris filia... à me vocatur... Secundus Europa... [Io,] Europa, Ganimedes puer, atque Calisto, lascivo nimium perplacuere Jovi.
|
16 |
+
|
17 |
+
... First, three young women who were captured by Jupiter for secret love shall be honoured, [including] Europa, the daughter of Agenor... The second [moon] is called by me Europa... Io, Europa, the boy Ganymede, and Callisto greatly pleased lustful Jupiter.[28]
The names fell out of favor for a considerable time and were not revived in general use until the mid-20th century.[29] In much of the earlier astronomical literature, Europa is simply referred to by its Roman numeral designation as Jupiter II (a system also introduced by Galileo) or as the "second satellite of Jupiter". In 1892, the discovery of Amalthea, whose orbit lay closer to Jupiter than those of the Galilean moons, pushed Europa to the third position. The Voyager probes discovered three more inner satellites in 1979, so Europa is now counted as Jupiter's sixth satellite, though it is still referred to as Jupiter II.[29]
The adjectival form has stabilized as Europan.[4][30]
Europa orbits Jupiter in just over three and a half days, with an orbital radius of about 670,900 km. With an orbital eccentricity of only 0.009, the orbit itself is nearly circular, and the orbital inclination relative to Jupiter's equatorial plane is small, at 0.470°.[31] Like its fellow Galilean satellites, Europa is tidally locked to Jupiter, with one hemisphere of Europa constantly facing Jupiter. Because of this, there is a sub-Jovian point on Europa's surface, from which Jupiter would appear to hang directly overhead. Europa's prime meridian is a line passing through this point.[32] Research suggests that the tidal locking may not be full, as a non-synchronous rotation has been proposed: Europa spins faster than it orbits, or at least did so in the past. This suggests an asymmetry in internal mass distribution and that a layer of subsurface liquid separates the icy crust from the rocky interior.[9]
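These orbital figures are mutually consistent: Kepler's third law recovers the stated period from the orbital radius. The short Python sketch below is an illustrative check only; it assumes a textbook value for Jupiter's gravitational parameter, which is not given in this article.

    import math

    GM_JUPITER = 1.26687e17  # Jupiter's gravitational parameter in m^3/s^2 (assumed standard value)
    a = 670_900e3            # Europa's orbital radius in metres, from the text

    # Kepler's third law for a small satellite: T = 2*pi*sqrt(a^3 / GM)
    T = 2 * math.pi * math.sqrt(a**3 / GM_JUPITER)
    print(f"Orbital period: {T / 86400:.2f} days")  # ~3.55 days, i.e. "just over three and a half"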
The slight eccentricity of Europa's orbit, maintained by the gravitational disturbances from the other Galileans, causes Europa's sub-Jovian point to oscillate around a mean position. As Europa comes slightly nearer to Jupiter, Jupiter's gravitational attraction increases, causing Europa to elongate towards and away from it. As Europa moves slightly away from Jupiter, Jupiter's gravitational force decreases, causing Europa to relax back into a more spherical shape, and creating tides in its ocean. The orbital eccentricity of Europa is continuously pumped by its mean-motion resonance with Io.[33] Thus, the tidal flexing kneads Europa's interior and gives it a source of heat, possibly allowing its ocean to stay liquid while driving subsurface geological processes.[15][33] The ultimate source of this energy is Jupiter's rotation, which is tapped by Io through the tides it raises on Jupiter and is transferred to Europa and Ganymede by the orbital resonance.[33][34]
Analysis of the unique cracks lining Europa yielded evidence that it likely spun around a tilted axis at some point in time. If correct, this would explain many of Europa's features. Europa's immense network of crisscrossing cracks serves as a record of the stresses caused by massive tides in its global ocean. Europa's tilt could influence calculations of how much of its history is recorded in its frozen shell, how much heat is generated by tides in its ocean, and even how long the ocean has been liquid. Its ice layer must stretch to accommodate these changes. When there is too much stress, it cracks. A tilt in Europa's axis could suggest that its cracks may be much more recent than previously thought. The reason for this is that the direction of the spin pole may change by as much as a few degrees per day, completing one precession period over several months. A tilt could also affect the estimates of the age of Europa's ocean. Tidal forces are thought to generate the heat that keeps Europa's ocean liquid, and a tilt in the spin axis would cause more heat to be generated by tidal forces. Such additional heat would have allowed the ocean to remain liquid for a longer time. However, it has not yet been determined when this hypothesized shift in the spin axis might have occurred.[35]
Europa is slightly smaller than the Moon. At just over 3,100 kilometres (1,900 mi) in diameter, it is the sixth-largest moon and fifteenth-largest object in the Solar System. Though by a wide margin the least massive of the Galilean satellites, it is nonetheless more massive than all known moons in the Solar System smaller than itself combined.[36] Its bulk density suggests that it is similar in composition to the terrestrial planets, being primarily composed of silicate rock.[37]
It is estimated that Europa has an outer layer of water around 100 km (62 mi) thick; a part frozen as its crust, and a part as a liquid ocean underneath the ice. Recent magnetic-field data from the Galileo orbiter showed that Europa has an induced magnetic field through interaction with Jupiter's, which suggests the presence of a subsurface conductive layer.[38] This layer is likely a salty liquid-water ocean. Portions of the crust are estimated to have undergone a rotation of nearly 80°, nearly flipping over (see true polar wander), which would be unlikely if the ice were solidly attached to the mantle.[39] Europa probably contains a metallic iron core.[40][41]
Europa is the smoothest known object in the Solar System, lacking large-scale features such as mountains and craters.[42] However, according to one study, Europa's equator may be covered in icy spikes called penitentes, up to 15 meters high: direct overhead sunlight at the equator causes the ice to sublime, forming vertical cracks.[43][44][45] Although the imaging available from the Galileo orbiter does not have the resolution needed to confirm this, radar and thermal data are consistent with this interpretation.[45] The prominent markings crisscrossing Europa appear to be mainly albedo features that emphasize low topography. There are few craters on Europa, because its surface is tectonically too active and therefore young.[46][47] Europa's icy crust has an albedo (light reflectivity) of 0.64, one of the highest of all moons.[31][47] This indicates a young and active surface: based on estimates of the frequency of cometary bombardment that Europa experiences, the surface is about 20 to 180 million years old.[48] There is currently no full scientific consensus among the sometimes contradictory explanations for the surface features of Europa.[49]
The radiation level at the surface of Europa is equivalent to a dose of about 5400 mSv (540 rem) per day,[50] an amount of radiation that would cause severe illness or death in human beings exposed for a single day.[51]
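For scale, the quoted dose can be compared with everyday exposure. The sketch below is a rough illustration only; it assumes the standard conversion 1 rem = 10 mSv and a typical global-average background dose of about 2.4 mSv per year, neither of which is stated in this article.

    DAILY_DOSE_MSV = 5400                  # Europa surface dose from the text, in mSv per day
    print(DAILY_DOSE_MSV / 10)             # -> 540.0 rem, matching the parenthetical above

    BACKGROUND_MSV_PER_YEAR = 2.4          # typical terrestrial background (assumed standard value)
    daily_background = BACKGROUND_MSV_PER_YEAR / 365
    print(f"{DAILY_DOSE_MSV / daily_background:,.0f}x Earth's daily background dose")  # ~821,000x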
Europa's most striking surface features are a series of dark streaks crisscrossing the entire globe, called lineae (English: lines). Close examination shows that the edges of Europa's crust on either side of the cracks have moved relative to each other. The larger bands are more than 20 km (12 mi) across, often with dark, diffuse outer edges, regular striations, and a central band of lighter material.[52]
The most likely hypothesis is that the lineae on Europa were produced by a series of eruptions of warm ice as the Europan crust spread open to expose warmer layers beneath.[53] The effect would have been similar to that seen in Earth's oceanic ridges. These various fractures are thought to have been caused in large part by the tidal flexing exerted by Jupiter. Because Europa is tidally locked to Jupiter, and therefore always maintains approximately the same orientation towards Jupiter, the stress patterns should form a distinctive and predictable pattern. However, only the youngest of Europa's fractures conform to the predicted pattern; other fractures appear to occur at increasingly different orientations the older they are. This could be explained if Europa's surface rotates slightly faster than its interior, an effect that is possible due to the subsurface ocean mechanically decoupling Europa's surface from its rocky mantle and the effects of Jupiter's gravity tugging on Europa's outer ice crust.[54] Comparisons of Voyager and Galileo spacecraft photos serve to put an upper limit on this hypothetical slippage. A full revolution of the outer rigid shell relative to the interior of Europa takes at least 12,000 years.[55] Studies of Voyager and Galileo images have revealed evidence of subduction on Europa's surface, suggesting that, just as the cracks are analogous to ocean ridges,[56][57] so plates of icy crust analogous to tectonic plates on Earth are recycled into the molten interior. This evidence of both crustal spreading at bands[56] and convergence at other sites[57] suggests that Europa may have active plate tectonics, similar to Earth.[16][contradictory] However, the physics driving these plate tectonics are not likely to resemble those driving terrestrial plate tectonics, as the forces resisting potential Earth-like plate motions in Europa's crust are significantly stronger than the forces that could drive them.[58]
Other features present on Europa are circular and elliptical lenticulae (Latin for "freckles"). Many are domes, some are pits and some are smooth, dark spots. Others have a jumbled or rough texture. The dome tops look like pieces of the older plains around them, suggesting that the domes formed when the plains were pushed up from below.[59]
One hypothesis states that these lenticulae were formed by diapirs of warm ice rising up through the colder ice of the outer crust, much like magma chambers in Earth's crust.[59] The smooth, dark spots could be formed by meltwater released when the warm ice breaks through the surface. The rough, jumbled lenticulae (called regions of "chaos"; for example, Conamara Chaos) would then be formed from many small fragments of crust, embedded in hummocky, dark material, appearing like icebergs in a frozen sea.[60]
An alternative hypothesis suggests that lenticulae are actually small areas of chaos and that the claimed pits, spots and domes are artefacts resulting from over-interpretation of early, low-resolution Galileo images. The implication is that the ice is too thin to support the convective diapir model of feature formation.[61][62]
In November 2011, a team of researchers from the University of Texas at Austin and elsewhere presented evidence in the journal Nature suggesting that many "chaos terrain" features on Europa sit atop vast lakes of liquid water.[63][64] These lakes would be entirely encased in Europa's icy outer shell and distinct from a liquid ocean thought to exist farther down beneath the ice shell. Full confirmation of the lakes' existence will require a space mission designed to probe the ice shell either physically or indirectly, for example, using radar.[64]
The scientific consensus is that a layer of liquid water exists beneath Europa's surface, and that heat from tidal flexing allows the subsurface ocean to remain liquid.[15][65] Europa's surface temperature averages about 110 K (−160 °C; −260 °F) at the equator and only 50 K (−220 °C; −370 °F) at the poles, keeping Europa's icy crust as hard as granite.[11] The first hints of a subsurface ocean came from theoretical considerations of tidal heating (a consequence of Europa's slightly eccentric orbit and orbital resonance with the other Galilean moons). Galileo imaging team members argue for the existence of a subsurface ocean from analysis of Voyager and Galileo images.[65] The most dramatic example is "chaos terrain", a common feature on Europa's surface that some interpret as a region where the subsurface ocean has melted through the icy crust. This interpretation is controversial. Most geologists who have studied Europa favor what is commonly called the "thick ice" model, in which the ocean has rarely, if ever, directly interacted with the present surface.[66] The best evidence for the thick-ice model is a study of Europa's large craters. The largest impact structures are surrounded by concentric rings and appear to be filled with relatively flat, fresh ice; based on this and on the calculated amount of heat generated by Europan tides, it is estimated that the outer crust of solid ice is approximately 10–30 km (6–19 mi) thick,[67] including a ductile "warm ice" layer, which could mean that the liquid ocean underneath may be about 100 km (60 mi) deep.[68] This implies a volume for Europa's ocean of about 3 × 10¹⁸ m³, two to three times the volume of Earth's oceans.[69][70]
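The quoted ocean volume follows from simple spherical-shell geometry. The sketch below assumes Europa's mean radius (about 1,560.8 km, a standard value not stated here), the mid-range 20 km ice thickness and the ~100 km ocean depth from the text, and a common estimate of about 1.335 × 10¹⁸ m³ for the volume of Earth's oceans.

    import math

    R_EUROPA = 1_560.8e3        # mean radius in metres (assumed standard value)
    ICE = 20e3                  # mid-range of the 10-30 km ice-crust estimate above
    OCEAN_DEPTH = 100e3         # liquid-layer thickness from the text

    r_top = R_EUROPA - ICE              # top of the liquid ocean
    r_bottom = r_top - OCEAN_DEPTH      # ocean floor
    volume = 4 / 3 * math.pi * (r_top**3 - r_bottom**3)

    EARTH_OCEANS = 1.335e18     # m^3, common estimate (assumed)
    print(f"{volume:.1e} m^3 = {volume / EARTH_OCEANS:.1f}x Earth's oceans")  # ~2.8e18, ~2.1x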
The thin-ice model suggests that Europa's ice shell may be only a few kilometers thick. However, most planetary scientists conclude that this model considers only those topmost layers of Europa's crust that behave elastically when affected by Jupiter's tides. One example is flexure analysis, in which Europa's crust is modeled as a plane or sphere weighted and flexed by a heavy load. Models such as this suggest the outer elastic portion of the ice crust could be as thin as 200 metres (660 ft). If the ice shell of Europa is really only a few kilometers thick, this "thin ice" model would mean that regular contact of the liquid interior with the surface could occur through open ridges, causing the formation of areas of chaotic terrain.[71]
The Galileo orbiter found that Europa has a weak magnetic moment, which is induced by the varying part of the Jovian magnetic field. The field strength at the magnetic equator (about 120 nT) created by this magnetic moment is about one-sixth the strength of Ganymede's field and six times the value of Callisto's.[72] The existence of the induced moment requires a layer of a highly electrically conductive material in Europa's interior. The most plausible candidate for this role is a large subsurface ocean of liquid saltwater.[40]
Since the Voyager spacecraft flew past Europa in 1979, scientists have worked to understand the composition of the reddish-brown material that coats fractures and other geologically youthful features on Europa's surface.[73] Spectrographic evidence suggests that the dark, reddish streaks and features on Europa's surface may be rich in salts such as magnesium sulfate, deposited by evaporating water that emerged from within.[74] Sulfuric acid hydrate is another possible explanation for the contaminant observed spectroscopically.[75] In either case, because these materials are colorless or white when pure, some other material must also be present to account for the reddish color, and sulfur compounds are suspected.[76]
Another hypothesis for the colored regions is that they are composed of abiotic organic compounds collectively called tholins.[77][78][79] The morphology of Europa's impact craters and ridges is suggestive of fluidized material welling up from the fractures where pyrolysis and radiolysis take place. In order to generate colored tholins on Europa there must be a source of materials (carbon, nitrogen, and water) and a source of energy to make the reactions occur. Impurities in the water ice crust of Europa are presumed both to emerge from the interior as cryovolcanic events that resurface the body, and to accumulate from space as interplanetary dust.[77] Tholins bring important astrobiological implications, as they may play a role in prebiotic chemistry and abiogenesis.[80][81][82]
The presence of sodium chloride in the internal ocean has been suggested by a 450 nm absorption feature, characteristic of irradiated NaCl crystals, that has been spotted in HST observations of the chaos regions, presumed to be areas of recent subsurface upwelling.[83]
Tidal heating occurs through the tidal friction and tidal flexing processes caused by tidal acceleration: orbital and rotational energy are dissipated as heat in the core of the moon, the internal ocean, and the ice crust.[84]
Ocean tides are converted to heat by frictional losses in the oceans and their interaction with the solid bottom and with the top ice crust. In late 2008, it was suggested that Jupiter may keep Europa's oceans warm by generating large planetary tidal waves on Europa because of its small but non-zero obliquity. This generates so-called Rossby waves that travel quite slowly, at just a few kilometers per day, but can generate significant kinetic energy. For the current axial tilt estimate of 0.1 degree, the resonance from Rossby waves would contain 7.3×10¹⁸ J of kinetic energy, which is two thousand times larger than that of the flow excited by the dominant tidal forces.[85][86] Dissipation of this energy could be the principal heat source of Europa's ocean.[85][86]
Tidal flexing kneads Europa's interior and ice shell, which becomes a source of heat.[87] Depending on the amount of tilt, the heat generated by the ocean flow could be 100 to thousands of times greater than the heat generated by the flexing of Europa's rocky core in response to gravitational pull from Jupiter and the other moons circling that planet.[88] Europa's seafloor could be heated by the moon's constant flexing, driving hydrothermal activity similar to undersea volcanoes in Earth's oceans.[84]
Experiments and ice modeling published in 2016 indicate that tidal flexing dissipation can generate one order of magnitude more heat in Europa's ice than scientists had previously assumed.[89][90] The results indicate that most of the heat generated by the ice actually comes from deformation of its crystalline structure (lattice), rather than from friction between the ice grains.[89][90] The greater the deformation of the ice sheet, the more heat is generated.
In addition to tidal heating, the interior of Europa could also be heated by the decay of radioactive material (radiogenic heating) within the rocky mantle.[84][91] However, the modeled and observed values are about one hundred times higher than those that could be produced by radiogenic heating alone,[92] implying that tidal heating plays the leading role on Europa.[93]
The Hubble Space Telescope acquired an image of Europa in 2012 that was interpreted as a plume of water vapour erupting from near its south pole.[95][96] The image suggests the plume may be 200 km (120 mi) high, more than 20 times the height of Mount Everest.[18][97][98] It has been suggested that, if such plumes exist, they are episodic[99] and likely to appear when Europa is at its farthest point from Jupiter, in agreement with tidal force modeling predictions.[100] Additional imaging evidence from the Hubble Space Telescope was presented in September 2016.[101][102]
In May 2018, astronomers provided supporting evidence of water plume activity on Europa, based on an updated critical analysis of data obtained from the Galileo space probe, which orbited Jupiter between 1995 and 2003. Galileo flew by Europa in 1997 within 206 km (128 mi) of the moon's surface, and the researchers suggest it may have flown through a water plume.[19][20][21][22] Such plume activity could help researchers in a search for life from the subsurface Europan ocean without having to land on the moon.[19]
Jupiter's tidal forces on Europa are about 1,000 times stronger than the Moon's effect on Earth. The only other moon in the Solar System exhibiting water vapor plumes is Enceladus.[18] The estimated eruption rate at Europa is about 7000 kg/s,[100] compared to about 200 kg/s for the plumes of Enceladus.[103][104] If confirmed, the plumes would open the possibility of flying through them to obtain a sample for analysis in situ, without having to use a lander and drill through kilometres of ice.[101][105][106]
Observations with the Goddard High Resolution Spectrograph of the Hubble Space Telescope, first described in 1995, revealed that Europa has a thin atmosphere composed mostly of molecular oxygen (O2),[107][108] and some water vapor.[109][110][111] The surface pressure of Europa's atmosphere is 0.1 μPa, or 10−12 times that of the Earth.[12] In 1997, the Galileo spacecraft confirmed the presence of a tenuous ionosphere (an upper-atmospheric layer of charged particles) around Europa created by solar radiation and energetic particles from Jupiter's magnetosphere,[112][113] providing evidence of an atmosphere.
Unlike the oxygen in Earth's atmosphere, Europa's is not of biological origin. The surface-bounded atmosphere forms through radiolysis, the dissociation of molecules through radiation.[114] Solar ultraviolet radiation and charged particles (ions and electrons) from the Jovian magnetospheric environment collide with Europa's icy surface, splitting water into oxygen and hydrogen constituents. These chemical components are then adsorbed and "sputtered" into the atmosphere. The same radiation also creates collisional ejections of these products from the surface, and the balance of these two processes forms an atmosphere.[115] Molecular oxygen is the densest component of the atmosphere because it has a long lifetime; after returning to the surface, it does not stick (freeze) like a water or hydrogen peroxide molecule but rather desorbs from the surface and starts another ballistic arc. Molecular hydrogen never reaches the surface, as it is light enough to escape Europa's surface gravity.[116][117]
Observations of the surface have revealed that some of the molecular oxygen produced by radiolysis is not ejected from the surface. Because the surface may interact with the subsurface ocean (considering the geological discussion above), this molecular oxygen may make its way to the ocean, where it could aid in biological processes.[118] One estimate suggests that, given the turnover rate inferred from the apparent ~0.5 Gyr maximum age of Europa's surface ice, subduction of radiolytically generated oxidizing species might well lead to oceanic free oxygen concentrations that are comparable to those in terrestrial deep oceans.[119]
The molecular hydrogen that escapes Europa's gravity, along with atomic and molecular oxygen, forms a gas torus in the vicinity of Europa's orbit around Jupiter. This "neutral cloud" has been detected by both the Cassini and Galileo spacecraft, and has a greater content (number of atoms and molecules) than the neutral cloud surrounding Jupiter's inner moon Io. Models predict that almost every atom or molecule in Europa's torus is eventually ionized, thus providing a source to Jupiter's magnetospheric plasma.[120]
Exploration of Europa began with the Jupiter flybys of Pioneer 10 and 11 in 1973 and 1974 respectively. The first closeup photos were of low resolution compared to those from later missions. The two Voyager probes traveled through the Jovian system in 1979, providing more-detailed images of Europa's icy surface. The images caused many scientists to speculate about the possibility of a liquid ocean underneath.
Starting in 1995, the Galileo space probe orbited Jupiter for eight years, until 2003, and provided the most detailed examination of the Galilean moons to date. It included the "Galileo Europa Mission" and "Galileo Millennium Mission", with numerous close flybys of Europa.[121] In 2007, New Horizons imaged Europa, as it flew by the Jovian system while on its way to Pluto.[122]
Conjectures regarding extraterrestrial life have ensured a high profile for Europa and have led to steady lobbying for future missions.[123][124] The aims of these missions have ranged from examining Europa's chemical composition to searching for extraterrestrial life in its hypothesized subsurface oceans.[125][126] Robotic missions to Europa need to endure the high-radiation environment around Jupiter and Europa itself.[124] Europa receives about 5.40 Sv of radiation per day.[127]
In 2011, a Europa mission was recommended by the U.S. Planetary Science Decadal Survey.[128] In response, NASA commissioned Europa lander concept studies in 2011, along with concepts for a Europa flyby (Europa Clipper), and a Europa orbiter.[129][130] The orbiter element option concentrates on the "ocean" science, while the multiple-flyby element (Clipper) concentrates on the chemistry and energy science. On 13 January 2014, the House Appropriations Committee announced a new bipartisan bill that includes $80 million funding to continue the Europa mission concept studies.[131][132]
In the early 2000s, the Jupiter Europa Orbiter, led by NASA, and the Jupiter Ganymede Orbiter, led by the ESA, were proposed together as an Outer Planet Flagship Mission to Jupiter's icy moons, called the Europa Jupiter System Mission, with a planned launch in 2020.[140] In 2009 it was given priority over the Titan Saturn System Mission.[141] At that time, there was competition from other proposals.[142] Japan proposed the Jupiter Magnetospheric Orbiter.
Jovian Europa Orbiter was an ESA Cosmic Vision concept study from 2007. Another concept was Ice Clipper,[143] which would have used an impactor similar to the Deep Impact mission—it would make a controlled crash into the surface of Europa, generating a plume of debris that would then be collected by a small spacecraft flying through the plume.[143][144]
Jupiter Icy Moons Orbiter (JIMO) was a partially developed fission-powered spacecraft with ion thrusters that was cancelled in 2006.[124][145] It was part of Project Prometheus.[145] The Europa Lander Mission proposed a small nuclear-powered Europa lander for JIMO.[146] It would travel with the orbiter, which would also function as a communication relay to Earth.[146]
Europa Orbiter – Its objective would be to characterize the extent of the ocean and its relation to the deeper interior. Instrument payload could include a radio subsystem, laser altimeter, magnetometer, Langmuir probe, and a mapping camera.[147][148] The Europa Orbiter received a go-ahead in 1999 but was canceled in 2002. This orbiter featured a special ice-penetrating radar that would allow it to scan below the surface.[42]
More ambitious ideas have been put forward, including an impactor in combination with a thermal drill to search for biosignatures that might be frozen in the shallow subsurface.[149][150]
Another proposal put forward in 2001 calls for a large nuclear-powered "melt probe" (cryobot) that would melt through the ice until it reached an ocean below.[124][151] Once it reached the water, it would deploy an autonomous underwater vehicle (hydrobot) that would gather information and send it back to Earth.[152] Both the cryobot and the hydrobot would have to undergo some form of extreme sterilization to prevent detection of Earth organisms instead of native life and to prevent contamination of the subsurface ocean.[153] This suggested approach has not yet reached a formal conceptual planning stage.[154]
So far, there is no evidence that life exists on Europa, but Europa has emerged as one of the most likely locations in the Solar System for potential habitability.[119][155] Life could exist in its under-ice ocean, perhaps in an environment similar to Earth's deep-ocean hydrothermal vents.[125][156] Even if Europa lacks volcanic hydrothermal activity, a 2016 NASA study found that Earth-like levels of hydrogen and oxygen could be produced through processes related to serpentinization and ice-derived oxidants, which do not directly involve volcanism.[157] In 2015, scientists announced that salt from a subsurface ocean is likely coating some geological features on Europa, suggesting that the ocean is interacting with the seafloor. This may be important in determining whether Europa could be habitable.[17][158] The likely presence of liquid water in contact with Europa's rocky mantle has spurred calls to send a probe there.[159]
The energy provided by tidal forces drives active geological processes within Europa's interior, just as they do to a far more obvious degree on its sister moon Io. Although Europa, like the Earth, may possess an internal energy source from radioactive decay, the energy generated by tidal flexing would be several orders of magnitude greater than any radiological source.[160] Life on Europa could exist clustered around hydrothermal vents on the ocean floor, or below the ocean floor, where endoliths are known to live on Earth. Alternatively, it could exist clinging to the lower surface of Europa's ice layer, much like algae and bacteria in Earth's polar regions, or float freely in Europa's ocean.[161] If Europa's ocean is too cold, biological processes similar to those known on Earth could not take place. If it is too salty, only extreme halophiles could survive in that environment.[161] In 2010, Richard Greenberg of the University of Arizona proposed a model in which irradiation of ice on Europa's surface could saturate its crust with oxygen and peroxide, which could then be transported by tectonic processes into the interior ocean. Such a process could render Europa's ocean as oxygenated as our own within just 12 million years, allowing the existence of complex, multicellular lifeforms.[162]
Evidence suggests the existence of lakes of liquid water entirely encased in Europa's icy outer shell and distinct from a liquid ocean thought to exist farther down beneath the ice shell.[63][64] If confirmed, the lakes could be yet another potential habitat for life. Evidence suggests that hydrogen peroxide is abundant across much of the surface of Europa.[163] Because hydrogen peroxide decays into oxygen and water when combined with liquid water, the authors argue that it could be an important energy supply for simple life forms.
Clay-like minerals (specifically, phyllosilicates), often associated with organic matter on Earth, have been detected on the icy crust of Europa.[164] The presence of the minerals may have been the result of a collision with an asteroid or comet.[164] Some scientists have speculated that life on Earth could have been blasted into space by asteroid collisions and arrived on the moons of Jupiter in a process called lithopanspermia.[165]
Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Observable universe → Universe. Each arrow (→) may be read as "within" or "part of".
en/1912.html.txt
ADDED
@@ -0,0 +1,126 @@
The euro (sign: €; code: EUR) is the official currency of 19 of the 27 member states of the European Union. This group of states is known as the eurozone or euro area and includes about 343 million citizens as of 2019[update].[9][10] The euro, which is divided into 100 cents, is the second-largest and second-most traded currency in the foreign exchange market after the United States dollar.[11]
The currency is also used officially by the institutions of the European Union, by four European microstates that are not EU members,[10] the British Overseas Territory of Akrotiri and Dhekelia, as well as unilaterally by Montenegro and Kosovo. Outside Europe, a number of special territories of EU members also use the euro as their currency. Additionally, over 200 million people worldwide use currencies pegged to the euro.
The euro is the second-largest reserve currency as well as the second-most traded currency in the world after the United States dollar.[12][13][14]
As of December 2019[update], with more than €1.3 trillion in circulation, the euro has one of the highest combined values of banknotes and coins in circulation in the world.[15][16]
The name euro was officially adopted on 16 December 1995 in Madrid.[17] The euro was introduced to world financial markets as an accounting currency on 1 January 1999, replacing the former European Currency Unit (ECU) at a ratio of 1:1 (US$1.1743). Physical euro coins and banknotes entered into circulation on 1 January 2002, making it the day-to-day operating currency of its original members, and by March 2002 it had completely replaced the former currencies.[18] While the euro subsequently dropped to US$0.83 within two years (26 October 2000), it has traded above the U.S. dollar since the end of 2002, peaking at US$1.60 on 18 July 2008. The currency nonetheless remains below its original issue rate of US$1.1743, a relative decline since its issuance.[19] In late 2009, the euro became immersed in the European sovereign-debt crisis, which led to the creation of the European Financial Stability Facility as well as other reforms aimed at stabilising and strengthening the currency.
The euro is managed and administered by the Frankfurt-based European Central Bank (ECB) and the Eurosystem (composed of the central banks of the eurozone countries). As an independent central bank, the ECB has sole authority to set monetary policy. The Eurosystem participates in the printing, minting and distribution of notes and coins in all member states, and the operation of the eurozone payment systems.
The 1992 Maastricht Treaty obliges most EU member states to adopt the euro upon meeting certain monetary and budgetary convergence criteria, although not all states have done so. The United Kingdom and Denmark negotiated exemptions,[20] while Sweden (which joined the EU in 1995, after the Maastricht Treaty was signed) turned down the euro in a non-binding referendum in 2003, and has circumvented the obligation to adopt the euro by not meeting the monetary and budgetary requirements. All nations that have joined the EU since 1993 have pledged to adopt the euro in due course.
Since 1 January 2002, the national central banks (NCBs) and the ECB have issued euro banknotes on a joint basis.[21] Eurosystem NCBs are required to accept euro banknotes put into circulation by other Eurosystem members and these banknotes are not repatriated. The ECB issues 8% of the total value of banknotes issued by the Eurosystem.[21] In practice, the ECB's banknotes are put into circulation by the NCBs, thereby incurring matching liabilities vis-à-vis the ECB. These liabilities carry interest at the main refinancing rate of the ECB. The other 92% of euro banknotes are issued by the NCBs in proportion to their respective shares of the ECB capital key,[21] calculated using national share of European Union (EU) population and national share of EU GDP, equally weighted.[22]
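The capital-key weighting described above is simple arithmetic: a national central bank's share is the equal-weighted average of the country's share of EU population and its share of EU GDP. A minimal sketch with made-up illustrative figures (not actual ECB data):

    def capital_key_share(pop_share: float, gdp_share: float) -> float:
        # Equal-weighted average of population share and GDP share.
        return 0.5 * pop_share + 0.5 * gdp_share

    # Hypothetical country holding 18.5% of EU population and 24.6% of EU GDP:
    print(f"{capital_key_share(0.185, 0.246):.2%}")  # -> 21.55%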
The euro is divided into 100 cents (also referred to as euro cents, especially when distinguishing them from other currencies, and referred to as such on the common side of all cent coins). In Community legislative acts the plural forms of euro and cent are spelled without the s, notwithstanding normal English usage.[23][24] Otherwise, normal English plurals are used,[25] with many local variations such as centime in France.
All circulating coins have a common side showing the denomination or value, and a map in the background. Due to the linguistic plurality in the European Union, the Latin alphabet version of euro is used (as opposed to the less common Greek or Cyrillic) and Arabic numerals (other text is used on national sides in national languages, but other text on the common side is avoided). For the denominations except the 1-, 2- and 5-cent coins, the map only showed the 15 member states which were members when the euro was introduced. Beginning in 2007 or 2008 (depending on the country), the old map was replaced by a map of Europe also showing countries outside the EU like Norway, Ukraine, Belarus, Russia and Turkey.[citation needed] The 1-, 2- and 5-cent coins, however, keep their old design, showing a geographical map of Europe with the 15 member states of 2002 raised somewhat above the rest of the map. All common sides were designed by Luc Luycx. The coins also have a national side showing an image specifically chosen by the country that issued the coin. Euro coins from any member state may be freely used in any nation that has adopted the euro.
The coins are issued in denominations of €2, €1, 50c, 20c, 10c, 5c, 2c, and 1c. To avoid the use of the two smallest coins, some cash transactions are rounded to the nearest five cents in the Netherlands and Ireland[26][27] (by voluntary agreement) and in Finland (by law).[28] This practice is discouraged by the commission, as is the practice of certain shops of refusing to accept high-value euro notes.[29]
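The five-cent cash rounding works as ordinary rounding to the nearest multiple of €0.05. A minimal Python sketch; the half-up tie-breaking rule is an assumption for illustration, and national rules may differ:

    from decimal import Decimal, ROUND_HALF_UP

    def round_to_five_cents(amount: str) -> Decimal:
        # Round a cash total to the nearest EUR 0.05.
        steps = (Decimal(amount) / Decimal("0.05")).quantize(Decimal("1"), rounding=ROUND_HALF_UP)
        return steps * Decimal("0.05")

    print(round_to_five_cents("4.82"))  # -> 4.80
    print(round_to_five_cents("4.83"))  # -> 4.85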
Commemorative coins with €2 face value have been issued with changes to the design of the national side of the coin. These include both commonly issued coins, such as the €2 commemorative coin for the fiftieth anniversary of the signing of the Treaty of Rome, and nationally issued coins, such as the coin to commemorate the 2004 Summer Olympics issued by Greece. These coins are legal tender throughout the eurozone. Collector coins with various other denominations have been issued as well, but these are not intended for general circulation, and they are legal tender only in the member state that issued them.[30]
The euro banknotes share a common design on both sides, created by the Austrian designer Robert Kalina.[31] Notes are issued in denominations of €500, €200, €100, €50, €20, €10 and €5. Each banknote has its own colour and is dedicated to an artistic period of European architecture. The front of the note features windows or gateways, while the back has bridges, symbolising links between states in the union and with the future. While the designs are supposed to be devoid of any identifiable characteristics, the initial designs by Robert Kalina were of specific bridges, including the Rialto and the Pont de Neuilly, and were subsequently rendered more generic; the final designs still bear very close similarities to their specific prototypes, so they are not truly generic. The monuments looked similar enough to different national monuments to please everyone.[32]
The Europa series, or second series, consists of six denominations and no longer includes the €500, whose issuance was discontinued as of 27 April 2019.[33] However, both the first and the second series of euro banknotes, including the €500, remain legal tender throughout the euro area.[34]
Capital within the EU may be transferred in any amount from one state to another. All intra-Union transfers in euro are treated as domestic transactions and bear the corresponding domestic transfer costs.[35] This includes all member states of the EU, even those outside the eurozone, provided the transactions are carried out in euro.[36] Credit/debit card charging and ATM withdrawals within the eurozone are also treated as domestic transactions; however, paper-based payment orders, like cheques, have not been standardised, so these remain domestic. The ECB has also set up a clearing system, TARGET, for large euro transactions.[37]
A special euro currency sign (€) was designed after a public survey had narrowed the original ten proposals down to two. The European Commission then chose the design created by the Belgian Alain Billiet. Of the symbol, the Commission stated[23]
Inspiration for the € symbol itself came from the Greek epsilon (Є)[note 13] – a reference to the cradle of European civilisation – and the first letter of the word Europe, crossed by two parallel lines to 'certify' the stability of the euro.
The European Commission also specified a euro logo with exact proportions and foreground and background colour tones.[38] Placement of the currency sign relative to the numeric amount varies from state to state, but for texts in English the symbol (or the ISO-standard "EUR") should precede the amount.[39]
The euro was established by the provisions in the 1992 Maastricht Treaty. To participate in the currency, member states are meant to meet strict criteria, such as a budget deficit of less than 3% of their GDP, a debt ratio of less than 60% of GDP (both of which were ultimately widely flouted after introduction), low inflation, and interest rates close to the EU average. In the Maastricht Treaty, the United Kingdom and Denmark were granted exemptions per their request from moving to the stage of monetary union which resulted in the introduction of the euro. (For macroeconomic theory, see below.)
The name "euro" was officially adopted in Madrid on 16 December 1995.[17] Belgian Esperantist Germain Pirlot, a former teacher of French and history is credited with naming the new currency by sending a letter to then President of the European Commission, Jacques Santer, suggesting the name "euro" on 4 August 1995.[41]
Due to differences in national conventions for rounding and significant digits, all conversion between the national currencies had to be carried out using the process of triangulation via the euro. The definitive values of one euro in terms of the exchange rates at which each currency entered the euro were fixed irrevocably.
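As an illustration of triangulation: an amount is first converted into euro at the departing currency's fixed rate (EU rules required keeping at least three decimal places at this intermediate step), then converted into the target currency. The sketch below uses the official irrevocable rates for the German mark and the French franc; the rounding conventions shown are a simplified reading of the rules, not a complete implementation:

    from decimal import Decimal, ROUND_HALF_UP

    DEM_PER_EUR = Decimal("1.95583")   # official fixed rate, marks per euro
    FRF_PER_EUR = Decimal("6.55957")   # official fixed rate, francs per euro

    def dem_to_frf(dem: Decimal) -> Decimal:
        # Step 1: into euro, keeping three decimal places at the intermediate step.
        eur = (dem / DEM_PER_EUR).quantize(Decimal("0.001"), rounding=ROUND_HALF_UP)
        # Step 2: from euro into the target currency, rounded to cents.
        return (eur * FRF_PER_EUR).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

    print(dem_to_frf(Decimal("100")))  # 100.00 DEM -> 51.129 EUR -> 335.38 FRF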
The rates were determined by the Council of the European Union,[note 14] based on a recommendation from the European Commission that reflected the market rates on 31 December 1998. They were set so that one European Currency Unit (ECU) would equal one euro. The European Currency Unit was an accounting unit used by the EU, based on the currencies of the member states; it was not a currency in its own right. The rates could not be set earlier, because the ECU depended on the closing exchange rate of the non-euro currencies (principally the pound sterling) that day.
The procedure used to fix the conversion rate between the Greek drachma and the euro was different since the euro by then was already two years old. While the conversion rates for the initial eleven currencies were determined only hours before the euro was introduced, the conversion rate for the Greek drachma was fixed several months beforehand.[note 15]
The currency was introduced in non-physical form (traveller's cheques, electronic transfers, banking, etc.) at midnight on 1 January 1999, when the national currencies of participating countries (the eurozone) ceased to exist independently. Their exchange rates were locked at fixed rates against each other. The euro thus became the successor to the European Currency Unit (ECU). The notes and coins for the old currencies, however, continued to be used as legal tender until new euro notes and coins were introduced on 1 January 2002.
The changeover period during which the former currencies' notes and coins were exchanged for those of the euro lasted about two months, until 28 February 2002. The official date on which the national currencies ceased to be legal tender varied from member state to member state. The earliest date was in Germany, where the mark officially ceased to be legal tender on 31 December 2001, though the exchange period lasted for two months more. Even after the old currencies ceased to be legal tender, they continued to be accepted by national central banks for periods ranging from several years to indefinitely (the latter for Austria, Germany, Ireland, Estonia and Latvia in banknotes and coins, and for Belgium, Luxembourg, Slovenia and Slovakia in banknotes only). The earliest coins to become non-convertible were the Portuguese escudos, which ceased to have monetary value after 31 December 2002, although banknotes remain exchangeable until 2022.
Following the U.S. financial crisis in 2008, fears of a sovereign debt crisis developed in 2009 among investors concerning some European states, with the situation becoming particularly tense in early 2010.[42][43] Greece was most acutely affected, but fellow eurozone members Cyprus, Ireland, Italy, Portugal, and Spain were also significantly affected.[44][45] All these countries utilized EU funds except Italy, which is a major donor to the EFSF.[46] To be included in the eurozone, countries had to fulfil certain convergence criteria, but the meaningfulness of such criteria was diminished by the fact that they were not enforced with the same level of strictness among countries.[47]
According to the Economist Intelligence Unit in 2011, "[I]f the [euro area] is treated as a single entity, its [economic and fiscal] position looks no worse and in some respects, rather better than that of the US or the UK" and the budget deficit for the euro area as a whole is much lower and the euro area's government debt/GDP ratio of 86% in 2010 was about the same level as that of the United States. "Moreover", they write, "private-sector indebtedness across the euro area as a whole is markedly lower than in the highly leveraged Anglo-Saxon economies". The authors conclude that the crisis "is as much political as economic" and the result of the fact that the euro area lacks the support of "institutional paraphernalia (and mutual bonds of solidarity) of a state".[48]
The crisis continued with S&P downgrading the credit rating of nine euro-area countries, including France, then downgrading the entire European Financial Stability Facility (EFSF) fund.[49]
A historical parallel – to 1931, when Germany was burdened with debt, unemployment and austerity while France and the United States were relatively strong creditors – gained attention in summer 2012,[50] even as Germany received a debt-rating warning of its own.[51][52] In this enduring scenario, the euro serves as a means of quantitative primitive accumulation.
The euro is the sole currency of 19 EU member states: Austria, Belgium, Cyprus, Estonia, Finland, France, Germany, Greece, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, the Netherlands, Portugal, Slovakia, Slovenia, and Spain. These countries constitute the "eurozone", some 343 million people in total as of 2018[update].[53]
With all but one (Denmark) of the remaining EU members obliged to join when economic conditions permit, together with future members of the EU, the enlargement of the eurozone is set to continue. Outside the EU, the euro is also the sole currency of Montenegro and Kosovo and several European microstates (Andorra, Monaco, San Marino and the Vatican City) as well as in five overseas territories of EU members that are not themselves part of the EU (Saint Barthélemy, Saint Martin, Saint Pierre and Miquelon, the French Southern and Antarctic Lands and Akrotiri and Dhekelia). Together this direct usage of the euro outside the EU affects nearly 3 million people.
The euro has been used as a trading currency in Cuba since 1998,[54] Syria since 2006,[55] and Venezuela since 2018.[56] There are also various currencies pegged to the euro (see below). In 2009, Zimbabwe abandoned its local currency and used major currencies instead, including the euro and the United States dollar.[57]
Since its introduction, the euro has been the second most widely held international reserve currency after the U.S. dollar. The share of the euro as a reserve currency increased from 18% in 1999 to 27% in 2008. Over this period, the share held in U.S. dollars fell from 71% to 64%, and that held in yen fell from 6.4% to 3.3%. The euro inherited and built on the status of the Deutsche Mark as the second most important reserve currency. The euro remains underweight as a reserve currency in advanced economies while overweight in emerging and developing economies: according to the International Monetary Fund[58] the total of euro held as a reserve in the world at the end of 2008 was equal to $1.1 trillion or €850 billion, with a share of 22% of all currency reserves in advanced economies, but a total of 31% of all currency reserves in emerging and developing economies.
The possibility of the euro becoming the first international reserve currency has been debated among economists.[59] Former Federal Reserve Chairman Alan Greenspan gave his opinion in September 2007 that it was "absolutely conceivable that the euro will replace the US dollar as reserve currency, or will be traded as an equally important reserve currency".[60] In contrast to Greenspan's 2007 assessment, the euro's increase in the share of the worldwide currency reserve basket has slowed considerably since 2007 and since the beginning of the worldwide credit crunch related recession and European sovereign-debt crisis.[58]
Outside the eurozone, a total of 22 countries and territories that do not belong to the EU have currencies that are directly pegged to the euro including 14 countries in mainland Africa (CFA franc), two African island countries (Comorian franc and Cape Verdean escudo), three French Pacific territories (CFP franc) and three Balkan countries, Bosnia and Herzegovina (Bosnia and Herzegovina convertible mark), Bulgaria (Bulgarian lev) and North Macedonia (Macedonian denar).[8] On 28 July 2009, São Tomé and Príncipe signed an agreement with Portugal which will eventually tie its currency to the euro.[61] Additionally, the Moroccan dirham is tied to a basket of currencies, including the euro and the US dollar, with the euro given the highest weighting.
With the exception of Bosnia, Bulgaria, North Macedonia (which had pegged their currencies against the Deutsche Mark) and Cape Verde (formerly pegged to the Portuguese escudo), all of these non-EU countries had pegged their currencies to the French franc before pegging them to the euro. Pegging a country's currency to a major currency is regarded as a safety measure, especially for currencies of areas with weak economies, as the euro is seen as a stable currency that prevents runaway inflation and encourages foreign investment.
Within the EU several currencies are pegged to the euro, mostly as a precondition to joining the eurozone. The Danish krone, Croatian kuna and Bulgarian lev are pegged due to their participation in the ERM II.
In total, as of 2013[update], 182 million people in Africa use a currency pegged to the euro, 27 million people outside the eurozone in Europe, and another 545,000 people on Pacific islands.[53]
Since 2005, stamps issued by the Sovereign Military Order of Malta have been denominated in euros, although the Order's official currency remains the Maltese scudo.[62] The Maltese scudo itself is pegged to the euro and is only recognised as legal tender within the Order.
In economics, an optimum currency area, or region (OCA or OCR), is a geographical region in which it would maximise economic efficiency to have the entire region share a single currency. There are two models, both proposed by Robert Mundell: the stationary expectations model and the international risk sharing model. Mundell himself advocates the international risk sharing model and thus concludes in favour of the euro.[63] However, even before the creation of the single currency, there were concerns over diverging economies. Before the late-2000s recession it was considered unlikely that a state would leave the euro or the whole zone would collapse.[64] However, the Greek government-debt crisis led to former British Foreign Secretary Jack Straw claiming the eurozone could not last in its current form.[65] Part of the problem seems to be the rules that were created when the euro was set up. John Lanchester, writing for The New Yorker, explains it:
The guiding principles of the currency, which opened for business in 1999, were supposed to be a set of rules to limit a country's annual deficit to three per cent of gross domestic product, and the total accumulated debt to sixty per cent of G.D.P. It was a nice idea, but by 2004 the two biggest economies in the euro zone, Germany and France, had broken the rules for three years in a row.[66]
The most obvious benefit of adopting a single currency is to remove the cost of exchanging currency, theoretically allowing businesses and individuals to consummate previously unprofitable trades. For consumers, banks in the eurozone must charge the same for intra-member cross-border transactions as purely domestic transactions for electronic payments (e.g., credit cards, debit cards and cash machine withdrawals).
Financial markets on the continent are expected to be far more liquid and flexible than they were in the past. The reduction in cross-border transaction costs will allow larger banking firms to provide a wider array of banking services that can compete across and beyond the eurozone. However, although transaction costs were reduced, some studies have shown that risk aversion has increased during the last 40 years in the Eurozone.[68]
Another effect of the common European currency is that differences in prices—in particular in price levels—should decrease because of the law of one price. Differences in prices can trigger arbitrage, i.e., speculative trade in a commodity across borders purely to exploit the price differential. Therefore, prices on commonly traded goods are likely to converge, causing inflation in some regions and deflation in others during the transition. Some evidence of this has been observed in specific eurozone markets.[69]
Before the introduction of the euro, some countries had successfully contained inflation, which was then seen as a major economic problem, by establishing largely independent central banks. One such bank was the Bundesbank in Germany; the European Central Bank was modelled on the Bundesbank.[70]
The euro has come under criticism due to its regulation, its lack of flexibility, and its rigidity towards member states on issues such as nominal interest rates.[71]
Many national and corporate bonds denominated in euro are significantly more liquid and have lower interest rates than was historically the case when denominated in national currencies. While increased liquidity may lower the nominal interest rate on the bond, denominating the bond in a currency with low levels of inflation arguably plays a much larger role. A credible commitment to low levels of inflation and a stable debt reduces the risk that the value of the debt will be eroded by higher levels of inflation or default in the future, allowing debt to be issued at a lower nominal interest rate.
There is also a cost in structurally keeping inflation lower than in the United States, the UK and China. The result is that, seen from those countries, the euro has become expensive, making European products increasingly expensive for its largest importers; exports from the eurozone thus become more difficult.
In general, those in Europe who own large amounts of euros are served by high stability and low inflation.
A monetary union means that states in that union lose the main mechanism for recovering their international competitiveness: weakening (depreciating) their currency. When wages become too high relative to productivity in the export sector, exports become more expensive and are crowded out of markets at home and abroad. This drives a fall in employment and output in the export sector, and a deterioration in the trade and current-account balances. A fall in output and employment in the tradable-goods sector may be offset by growth in non-export sectors, especially construction and services. Increased purchases abroad and a negative current-account balance can be financed without a problem as long as credit is cheap.[72] The need to finance a trade deficit weakens a currency, automatically making exports more attractive at home and abroad. A state in a monetary union cannot use a weakening currency to recover its international competitiveness; to achieve this, it has to reduce prices, including wages (deflation). This can result in high unemployment and lower incomes, as it did during the European sovereign-debt crisis.[73]
A 2009 consensus from studies of the introduction of the euro concluded that it increased trade within the eurozone by 5% to 10%,[74] although one study suggested an increase of only 3%[75] and another estimated 9 to 14%.[76] A meta-analysis of all available studies suggests that the prevalence of positive estimates is caused by publication bias and that the underlying effect may be negligible.[77] However, a more recent meta-analysis shows that publication bias decreases over time and that positive trade effects from the introduction of the euro remain, provided results from before 2010 are taken into account; this may be because of the inclusion of the financial crisis of 2007–2008 and ongoing integration within the EU.[78] Furthermore, older studies that account for a time trend reflecting general cohesion policies in Europe, which started before and continued after the implementation of the common currency, find no effect on trade.[79][80] These results suggest that other policies aimed at European integration might be the source of the observed increase in trade.
Physical investment seems to have increased by 5% in the eurozone due to the introduction of the euro.[81] Regarding foreign direct investment, a study found that intra-eurozone FDI stocks increased by about 20% during the first four years of the EMU.[82] Concerning the effect on corporate investment, there is evidence that the introduction of the euro has resulted in an increase in investment rates and that it has made it easier for firms to access financing in Europe. The euro has most specifically stimulated investment in companies from countries that previously had weak currencies. A study found that the introduction of the euro accounts for 22% of the investment rate after 1998 in countries that previously had a weak currency.[83]
The introduction of the euro has led to extensive discussion about its possible effect on inflation. In the short term, there was a widespread impression among the population of the eurozone that the introduction of the euro had led to an increase in prices, but this impression was not confirmed by general indices of inflation or by other studies.[84][85] A study of this paradox found that it was due to an asymmetric effect of the introduction of the euro on prices: while it had no effect on most goods, it did affect cheap goods, whose prices were rounded up after the changeover. The study found that consumers based their beliefs about inflation on those cheap, frequently purchased goods.[86] It has also been suggested that the jump in small prices may be because, prior to the changeover, retailers made fewer upward adjustments and waited for the introduction of the euro to do so.[87]
One of the advantages of the adoption of a common currency is the reduction of the risk associated with changes in currency exchange rates. It has been found that the introduction of the euro created "significant reductions in market risk exposures for nonfinancial firms both in and outside Europe".[88] These reductions in market risk "were concentrated in firms domiciled in the eurozone and in non-euro firms with a high fraction of foreign sales or assets in Europe".
The introduction of the euro seems to have had a strong effect on European financial integration. According to a study on this question, it has "significantly reshaped the European financial system, especially with respect to the securities markets [...] However, the real and policy barriers to integration in the retail and corporate banking sectors remain significant, even if the wholesale end of banking has been largely integrated."[89] Specifically, the euro has significantly decreased the cost of trade in bonds, equity, and banking assets within the eurozone.[90] On a global level, there is evidence that the introduction of the euro has led to an integration in terms of investment in bond portfolios, with eurozone countries lending and borrowing more between each other than with other countries.[91]
Since the introduction of the euro, and as of January 2014, interest rates in most member countries (particularly those that previously had a weak currency) have decreased. Some of these countries had the most serious sovereign financing problems.
The effect of declining interest rates, combined with excess liquidity continually provided by the ECB, made it easier for banks within the countries in which interest rates fell the most, and their linked sovereigns, to borrow significant amounts (above the 3%-of-GDP budget deficit limit initially imposed on the eurozone) and significantly inflate their public and private debt levels.[92] Following the financial crisis of 2007–2008, governments in these countries found it necessary to bail out or nationalise their privately held banks to prevent systemic failure of the banking system, when underlying hard or financial asset values were found to be grossly inflated and sometimes so near worthless that there was no liquid market for them.[93] This further increased the already high levels of public debt to a level the markets began to consider unsustainable, via increasing government bond interest rates, producing the ongoing European sovereign-debt crisis.
The evidence on the convergence of prices in the eurozone with the introduction of the euro is mixed. Several studies failed to find any evidence of convergence following the introduction of the euro after a phase of convergence in the early 1990s.[94][95] Other studies have found evidence of price convergence,[96][97] in particular for cars.[98] A possible reason for the divergence between the different studies is that the processes of convergence may not have been linear, slowing down substantially between 2000 and 2003, and resurfacing after 2003 as suggested by a recent study (2009).[99]
A study suggests that the introduction of the euro has had a positive effect on the amount of tourist travel within the EMU, with an increase of 6.5%.[100]
The ECB targets interest rates rather than exchange rates and in general does not intervene on the foreign exchange rate markets. This is because of the implications of the Mundell–Fleming model, which implies a central bank cannot (without capital controls) maintain interest rate and exchange rate targets simultaneously, because increasing the money supply results in a depreciation of the currency. In the years following the Single European Act, the EU has liberalised its capital markets and, as the ECB has inflation targeting as its monetary policy, the exchange-rate regime of the euro is floating.
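The constraint follows from the uncovered interest parity condition that underlies the Mundell–Fleming result (a textbook relation; the symbols are illustrative and not from the source):

\[ i = i^{*} + \frac{E^{e} - E}{E}, \]

where i and i^{*} are the domestic and foreign interest rates, E the exchange rate (domestic price of foreign currency) and E^{e} its expected future value. With free capital flows this condition must hold, so a central bank can pick i or E but not both independently; the ECB picks the interest rate and lets the euro float.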
The euro is the second-most widely held reserve currency after the U.S. dollar. After its introduction on 4 January 1999, its exchange rate against the other major currencies fell, reaching its lowest points in 2000 (3 May vs the pound sterling, 25 October vs the U.S. dollar, 26 October vs the Japanese yen). Afterwards it recovered, and its exchange rate reached its historical high point in 2008 (15 July vs the U.S. dollar, 23 July vs the Japanese yen, 29 December vs the pound sterling). With the advent of the global financial crisis the euro initially fell, but later recovered. Despite pressure due to the European sovereign-debt crisis, the euro remained stable.[101] In November 2011 the euro's exchange rate index – measured against the currencies of the bloc's major trading partners – was trading almost two percent higher on the year, at approximately the same level as before the crisis began in 2007.[102]
Besides the economic motivations for the introduction of the euro, its creation was also partly justified as a way to foster a closer sense of joint identity between European citizens. Statements about this goal were made, for instance, by Wim Duisenberg, European Central Bank Governor, in 1998,[103] Laurent Fabius, French Finance Minister, in 2000,[104] and Romano Prodi, President of the European Commission, in 2002.[105] However, 15 years after the introduction of the euro, a study found no evidence that it has had a positive influence on a shared sense of European identity (and no evidence that it has had a negative effect either).[106]
The formal titles of the currency are euro for the major unit and cent for the minor (one-hundredth) unit and for official use in most eurozone languages; according to the ECB, all languages should use the same spelling for the nominative singular.[107] This may contradict normal rules for word formation in some languages, e.g., those in which there is no eu diphthong. Bulgaria has negotiated an exception; euro in the Bulgarian Cyrillic alphabet is spelled as eвро (evro) and not eуро (euro) in all official documents.[108] In the Greek script the term ευρώ (evró) is used; the Greek "cent" coins are denominated in λεπτό/ά (leptó/á). Official practice for English-language EU legislation is to use the words euro and cent as both singular and plural,[109] although the European Commission's Directorate-General for Translation states that the plural forms euros and cents should be used in English.[110]
en/1913.html.txt
ADDED
@@ -0,0 +1,235 @@
Eurypterids, often informally called sea scorpions, are a group of extinct arthropods that form the order Eurypterida. The earliest known eurypterids date to the Darriwilian stage of the Ordovician period 467.3 million years ago. The group is likely to have appeared first either during the Early Ordovician or Late Cambrian period. With approximately 250 species, the Eurypterida is the most diverse Paleozoic chelicerate order. Following their appearance during the Ordovician, eurypterids became major components of marine faunas during the Silurian, from which the majority of eurypterid species have been described. The Silurian genus Eurypterus accounts for more than 90% of all known eurypterid specimens. Though the group continued to diversify during the subsequent Devonian period, the eurypterids were heavily affected by the Late Devonian extinction event. They declined in numbers and diversity until becoming extinct during the Permian–Triassic extinction event (or sometime shortly before) 251.9 million years ago.
Although popularly called "sea scorpions", only the earliest eurypterids were marine; many later forms lived in brackish or fresh water, and they were not true scorpions. Some studies suggest that a dual respiratory system was present, which would have allowed for short periods of time in terrestrial environments. The name Eurypterida comes from the Ancient Greek words εὐρύς (eurús), meaning "broad" or "wide", and πτερόν (pteron), meaning "wing", referring to the pair of wide swimming appendages present in many members of the group.
The eurypterids include the largest known arthropods ever to have lived. The largest, Jaekelopterus, reached 2.5 meters (8.2 ft) in length. Eurypterids were not uniformly large and most species were less than 20 centimeters (8 in) long; the smallest eurypterid, Alkenopterus, was only 2.03 centimeters (0.80 in) long. Eurypterid fossils have been recovered from every continent. A majority of fossils are from fossil sites in North America and Europe because the group lived primarily in the waters around and within the ancient supercontinent of Euramerica. Only a handful of eurypterid groups spread beyond the confines of Euramerica and a few genera, such as Adelophthalmus and Pterygotus, achieved a cosmopolitan distribution with fossils being found worldwide.
Like all other arthropods, eurypterids possessed segmented bodies and jointed appendages (limbs) covered in a cuticle composed of proteins and chitin. As in other chelicerates, the body was divided into two tagmata (sections); the frontal prosoma (head) and posterior opisthosoma (abdomen).[1] The prosoma was covered by a carapace (sometimes called the "prosomal shield") on which both compound eyes and the ocelli (simple eye-like sensory organs) were located.[2]
The prosoma also bore six pairs of appendages which are usually referred to as appendage pairs I to VI. The first pair of appendages, the only pair placed before the mouth, is called the chelicerae (homologous to the fangs of spiders). They were equipped with small pincers used to manipulate food fragments and push them into the mouth.[2] In one lineage, the Pterygotidae, the chelicerae were large and long, with strong, well-developed teeth on specialised chelae (claws).[3] The subsequent pairs of appendages, numbers II to VI, possessed gnathobases (or "tooth-plates") on the coxae (limb segments) used for feeding. These appendages were generally walking legs that were cylindrical in shape and were covered in spines in some species. In most lineages, the limbs tended to get larger the farther back they were. In the Eurypterina suborder, the larger of the two eurypterid suborders, the sixth pair of appendages was also modified into a swimming paddle to aid in traversing aquatic environments.[2]
The opisthosoma comprised 12 segments and the telson, the posteriormost segment, which in most species took a blade-like shape.[2] In some lineages, notably the Pterygotioidea, the Hibbertopteridae and the Mycteroptidae, the telson was flattened and may have been used as a rudder while swimming. Some genera within the superfamily Carcinosomatoidea, notably Eusarcana, had a telson similar to that of modern scorpions and may have been capable of using it to inject venom.[4][5] The coxae of the sixth pair of appendages were overlaid by a plate that is referred to as the metastoma, originally derived from a complete exoskeleton segment. The opisthosoma itself can be divided either into a "mesosoma" (comprising segments 1 to 6) and "metasoma" (comprising segments 7 to 12) or into a "preabdomen" (generally comprising segments 1 to 7) and "postabdomen" (generally comprising segments 8 to 12).[2]
The underside of the opisthosoma was covered in structures evolved from modified opisthosomal appendages. Throughout the opisthosoma, these formed plate-like structures termed blatfüsse (German for "leaf-feet"). These created a branchial chamber (gill tract) between preceding blatfüsse and the ventral surface of the opisthosoma itself, which contained the respiratory organs. The second to sixth opisthosomal segments also contained oval or triangular organs that have been interpreted as organs that aid in respiration. These organs, termed kiemenplatten, or "gill tracts", would potentially have aided eurypterids to breathe air above water, while blatfüssen, similar to organs in modern horseshoe crabs, would cover the parts that serve for underwater respiration.[2]
The appendages of opisthosomal segments 1 and 2 (the seventh and eighth segments overall) were fused into a structure termed the genital operculum, occupying most of the underside of the opisthosomal segment 2. Near the anterior margin of this structure, the genital appendage (also called the zipfel or the median abdominal appendage) protruded. This appendage, often preserved very prominently, has consistently been interpreted as part of the reproductive system and occurs in two recognized types, assumed to correspond to male and female.[2]
Eurypterids were highly variable in size, depending on factors such as lifestyle, living environment and taxonomic affinity. Sizes around 100 centimeters (3.3 ft) are common in most eurypterid groups.[6] The smallest eurypterid, Alkenopterus burglahrensis, measured just 2.03 centimeters (0.80 in) in length.[7]
The largest eurypterid, and the largest known arthropod ever to have lived, is Jaekelopterus rhenaniae. A chelicera from the Emsian Klerf Formation of Willwerath, Germany measured 36.4 centimeters (14.3 in) in length, but is missing a quarter of its length, suggesting that the full chelicera would have been 45.5 centimeters (17.9 in) long. If the proportions between body length and chelicerae match those of its closest relatives, where the ratio between claw size and body length is relatively consistent, the specimen of Jaekelopterus that possessed the chelicera in question would have measured between 233 and 259 centimeters (7.64 and 8.50 ft), an average 2.5 meters (8.2 ft), in length. With the chelicerae extended, another meter (3.28 ft) would be added to this length. This estimate exceeds the maximum body size of all other known giant arthropods by almost half a meter (1.64 ft) even if the extended chelicerae are not included.[8] Two other eurypterids have also been estimated to have reached lengths of 2.5 metres; Erettopterus grandis (closely related to Jaekelopterus) and Hibbertopterus wittebergensis, but E. grandis is very fragmentary and the H. wittenbergensis size estimate is based on trackway evidence, not fossil remains.[9]
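The extrapolation described above reduces to simple proportional arithmetic (the 1.25 correction factor and the implied claw-to-body ratio of roughly 5.1 to 5.7 are back-calculated here from the published figures rather than stated explicitly in the source):

\[ 36.4~\text{cm} \times 1.25 = 45.5~\text{cm}, \qquad 45.5~\text{cm} \times (5.1\text{–}5.7) \approx 233\text{–}259~\text{cm}. \]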
The family of Jaekelopterus, the Pterygotidae, is noted for several unusually large species. Both Acutiramus, whose largest member A. bohemicus measured 2.1 meters (6.9 ft), and Pterygotus, whose largest species P. grandidentatus measured 1.75 meters (5.7 ft), were gigantic.[8] Several different contributing factors to the large size of the pterygotids have been suggested, including courtship behaviour, predation and competition over environmental resources.[10]
Giant eurypterids were not limited to the family Pterygotidae. An isolated 12.7 centimeters (5.0 in) long fossil metastoma of the carcinosomatoid eurypterid Carcinosoma punctatum indicates the animal would have reached a length of 2.2 meters (7.2 ft) in life, rivalling the pterygotids in size.[11] Another giant was Pentecopterus decorahensis, a primitive carcinosomatoid, which is estimated to have reached lengths of 1.7 meters (5.6 ft).[12]
Typical of large eurypterids is a lightweight build. Factors such as locomotion, energy costs in molting and respiration, as well as the actual physical properties of the exoskeleton, limit the size that arthropods can reach. A lightweight construction significantly decreases the influence of these factors. Pterygotids were particularly lightweight, with most fossilized large body segments preserving as thin and unmineralized.[8] Lightweight adaptations are present in other giant paleozoic arthropods as well, such as the giant millipede Arthropleura, and are possibly vital for the evolution of giant size in arthropods.[8][13]
In addition to the lightweight giant eurypterids, some deep-bodied forms in the family Hibbertopteridae were also very large. A carapace from the Carboniferous of Scotland referred to the species Hibbertopterus scouleri measures 65 cm (26 in) wide. As Hibbertopterus was very wide compared to its length, the animal in question could possibly have measured just short of 2 meters (6.6 ft) in length. More robust than the pterygotids, this giant Hibbertopterus would possibly have rivalled the largest pterygotids in weight, if not surpassed them, and would as such have been among the heaviest arthropods.[14]
The two eurypterid suborders, Eurypterina and Stylonurina, are separated primarily by the morphology of their final pair of appendages. In the Stylonurina, this appendage takes the form of a long and slender walking leg, while in the Eurypterina, the leg is modified and broadened into a swimming paddle.[15] Other than the swimming paddle, the legs of many eurypterines were far too small to do much more than allow them to crawl across the sea floor. In contrast, a number of stylonurines had elongated and powerful legs that might have allowed them to walk on land (similar to modern crabs).[16]
A fossil trackway discovered in Carboniferous-aged fossil deposits of Scotland in 2005 was attributed to the stylonurine eurypterid Hibbertopterus due to a matching size (the trackmaker was estimated to have been about 1.6 meters (5.2 ft) long) and inferred leg anatomy. It is the largest terrestrial trackway—measuring 6 meters (20 ft) long and averaging 95 centimeters (3.12 ft) in width—made by an arthropod found thus far, and the first record of land locomotion by a eurypterid. The trackway provides evidence that some eurypterids could survive in terrestrial environments, at least for short periods of time, and reveals information about the stylonurine gait. In Hibbertopterus, as in most eurypterids, the pairs of appendages are different in size (referred to as a heteropodous limb condition). These differently sized pairs would have moved in phase, and the short stride length indicates that Hibbertopterus crawled with an exceptionally slow speed, at least on land. The large telson was dragged along the ground and left a large central groove behind the animal. Slopes in the tracks at random intervals suggest that the motion was jerky.[17] The gait of smaller stylonurines, such as Parastylonurus, was probably faster and more precise.[18]
The functionality of the eurypterine swimming paddles varied from group to group. In the Eurypteroidea, the paddles were similar in shape to oars. The condition of the joints in their appendages ensured their paddles could only be moved in near-horizontal planes, not upwards or downwards. Some other groups, such as the Pterygotioidea, would not have possessed this condition and were probably able to swim faster.[19] Most eurypterines are generally agreed to have utilized a rowing type of propulsion similar to that of crabs and water beetles. Larger individuals may have been capable of underwater flying (or subaqueous flight) in which the motion and shape of the paddles are enough to generate lift, similar to the swimming of sea turtles and sea lions. This type of movement has a relatively slower acceleration rate than the rowing type, especially since adults have proportionally smaller paddles than juveniles. However, since the larger sizes of adults mean a higher drag coefficient, using this type of propulsion is more energy-efficient.[20]
Some eurypterines, such as Mixopterus, were not necessarily good swimmers, as inferred from attributed fossil trackways. Mixopterus likely kept mostly to the bottom, using its swimming paddles for occasional vertical bursts of movement, with the fourth and fifth pairs of appendages positioned backwards to produce minor forward movement. While walking, it probably used a gait like that of most modern insects. The weight of its long abdomen would have been balanced by two heavy and specialized frontal appendages, and the center of gravity might have been adjustable by raising and positioning the tail.[21]
Preserved fossilized eurypterid trackways tend to be large and heteropodous and often have an associated telson drag mark along the mid-line (as with the Scottish Hibbertopterus track). Such trackways have been discovered on every continent except South America. In some places where eurypterid fossil remains are otherwise rare, such as South Africa and the rest of the former supercontinent Gondwana, discoveries of trackways both predate and outnumber eurypterid body fossils.[22] Eurypterid trackways have been referred to several ichnogenera, most notably Palmichnium (defined as a series of four tracks often with an associated drag mark in the mid-line),[23] wherein the holotype of the ichnospecies P. kosinkiorum preserves the largest eurypterid footprints known to date, with each track being about 7.6 centimeters (3.0 in) in diameter.[24] Other eurypterid ichnogenera include Merostomichnites (though it is likely that many specimens actually represent trackways of crustaceans) and Arcuites (which preserves grooves made by the swimming appendages).[23][25][26]
In eurypterids, the respiratory organs were located on the ventral body wall (the underside of the opisthosoma). Blatfüsse, evolved from opisthosomal appendages, covered the underside and created a gill chamber where the kiemenplatten (gill tracts) were located.[2] Depending on the species, the eurypterid gill tract was either triangular or oval in shape and was possibly raised into a cushion-like state. The surface of this gill tract bore several spinules (small spines), which resulted in an enlarged surface area. It was composed of spongy tissue due to many invaginations in the structure.[27]
Though the kiemenplatte is referred to as a "gill tract", it may not necessarily have functioned as actual gills. In other animals, gills are used for oxygen uptake from water and are outgrowths of the body wall. Despite eurypterids clearly being primarily aquatic animals that almost certainly evolved underwater (some eurypterids, such as the pterygotids, would even have been physically unable to walk on land), it is unlikely the gill tract contained functional gills when comparing the organ to gills in other invertebrates and even fish. Previous interpretations often identified the eurypterid "gills" as homologous with those of other groups (hence the terminology), with gas exchange occurring within the spongy tract and a pattern of branchio-cardiac and dendritic veins (as in related groups) carrying oxygenated blood into the body. The primary analogy used in previous studies has been horseshoe crabs, though their gill structure and that of eurypterids are remarkably different. In horseshoe crabs, the gills are more complex and composed of many lamellae (plates) which give a larger surface area used for gas exchange. In addition, the gill tract of eurypterids is proportionally much too small to support them if it is analogous to the gills of other groups. To be functional gills, they would have to have been highly efficient and would have required a highly efficient circulatory system. It is considered unlikely, however, that these factors would be enough to explain the large discrepancy between gill tract size and body size.[28]
It has been suggested instead that the "gill tract" was an organ for breathing air, perhaps actually being a lung, plastron or pseudotrachea.[29] Plastrons are organs that some arthropods evolved secondarily to breathe air underwater. This is considered an unlikely explanation, since eurypterids evolved in water from the start and would not have possessed organs derived from air-breathing precursors. In addition, plastrons are generally exposed on outer parts of the body, while the eurypterid gill tract is located behind the blatfüssen.[30] Instead, among arthropod respiratory organs, the eurypterid gill tracts most closely resemble the pseudotracheae found in modern isopods. These organs, called pseudotracheae because of some resemblance to the tracheae (windpipes) of air-breathing organisms, are lung-like and present within the pleopods (back legs) of isopods. The structure of the pseudotracheae has been compared to the spongy structure of the eurypterid gill tracts. It is possible the two organs functioned in the same way.[31]
Some researchers have suggested that eurypterids may have been adapted to an amphibious lifestyle, using the full gill tract structure as gills and the invaginations within it as pseudotracheae. This mode of life may not have been physiologically possible, however, since water pressure would have forced water into the invaginations, leading to asphyxiation. Furthermore, most eurypterids would have been aquatic their entire lives. No matter how much time was spent on land, organs for respiration in underwater environments must have been present. True gills, expected to have been located within the branchial chamber within the blatfüssen, remain unknown in eurypterids.[31]
Like all arthropods, eurypterids matured and grew through static developmental stages referred to as instars. These instars were punctuated by periods during which eurypterids went through ecdysis (molting of the cuticle) after which they underwent rapid and immediate growth. Some arthropods, such as insects and many crustaceans, undergo extreme changes over the course of maturing. Chelicerates, including eurypterids, are in general considered to be direct developers, undergoing no extreme changes after hatching (though extra body segments and extra limbs may be gained over the course of ontogeny in some lineages, such as xiphosurans and sea spiders). Whether eurypterids were true direct developers (with hatchlings more or less being identical to adults) or hemianamorphic direct developers (with extra segments and limbs potentially being added during ontogeny) has been controversial in the past.[32]
Hemianamorphic direct development has been observed in many arthropod groups, such as trilobites, megacheirans, basal crustaceans and basal myriapods. True direct development has on occasion been referred to as a trait unique to arachnids. There have been few studies on eurypterid ontogeny as there is a general lack of specimens in the fossil record that can confidently be stated to represent juveniles.[32] It is possible that many eurypterid species thought to be distinct actually represent juvenile specimens of other species, with paleontologists rarely considering the influence of ontogeny when describing new species.[33]
Studies on a well-preserved fossil assemblage of eurypterids from the Pragian-aged Beartooth Butte Formation in Cottonwood Canyon, Wyoming, composed of multiple specimens of various developmental stages of eurypterids Jaekelopterus and Strobilopterus, revealed that eurypterid ontogeny was more or less parallel and similar to that of extinct and extant xiphosurans, with the largest exception being that eurypterids hatched with a full set of appendages and opisthosomal segments. Eurypterids were thus not hemianamorphic direct developers, but true direct developers like modern arachnids.[34]
The most frequently observed change occurring through ontogeny (except for some genera, such as Eurypterus, which appear to have been static) is the metastoma becoming proportionally less wide. This ontogenetic change has been observed in members of several superfamilies, such as the Eurypteroidea, the Pterygotioidea and the Moselopteroidea.[35]
No fossil gut contents from eurypterids are known, so direct evidence of their diet is lacking. Eurypterid anatomy is nevertheless strongly suggestive of a carnivorous lifestyle. Not only were many of them large (in general, most predators tend to be larger than their prey), but they had stereoscopic vision (the ability to perceive depth).[36] The legs of many eurypterids were covered in thin spines, used both for locomotion and the gathering of food. In some groups, these spiny appendages became heavily specialized. In some eurypterids in the Carcinosomatoidea, forward-facing appendages were large and possessed enormously elongated spines (as in Mixopterus and Megalograptus). In derived members of the Pterygotioidea, the appendages were completely without spines, but had specialized claws instead.[37] Other eurypterids, lacking these specialized appendages, likely fed in a manner similar to modern horseshoe crabs, by grabbing and shredding food with their appendages before pushing it into their mouth using their chelicerae.[38]
Fossils preserving digestive tracts have been reported from fossils of various eurypterids, among them Carcinosoma, Acutiramus and Eurypterus. Though a potential anal opening has been reported from the telson of a specimen of Buffalopterus, it is more likely that the anus was opened through the thin cuticle between the last segment before the telson and the telson itself, as in modern horseshoe crabs.[36]
Eurypterid coprolites discovered in deposits of Ordovician age in Ohio, containing fragments of a trilobite and of the eurypterid Megalograptus ohioensis and found in association with full specimens of the same eurypterid species, have been suggested to represent evidence of cannibalism. Similar coprolites referred to the species Lanarkopterus dolichoschelus, from the Ordovician of Ohio, contain fragments of jawless fish and of smaller specimens of Lanarkopterus itself.[36]
Though apex predatory roles would have been limited to the very largest eurypterids, smaller eurypterids were likely formidable predators in their own right just like their larger relatives.[6]
As in many other entirely extinct groups, understanding and researching the reproduction and sexual dimorphism of eurypterids is difficult, as they are only known from fossilized shells and carapaces. In some cases, there might not be enough apparent differences to separate the sexes based on morphology alone.[16] Sometimes two sexes of the same species have been interpreted as two different species, as was the case with two species of Drepanopterus (D. bembycoides and D. lobatus).[39]
The eurypterid prosoma is made up of the first six exoskeleton segments fused together into a larger structure. The seventh segment (thus the first opisthosomal segment) is referred to as the metastoma and the eighth segment (distinctly plate-like) is called the operculum and contains the genital aperture. The underside of this segment is occupied by the genital operculum, a structure originally evolved from the ancestral seventh and eighth pairs of appendages. In its center, as in modern horseshoe crabs, is a genital appendage. This appendage, an elongated rod with an internal duct, is found in two distinct morphs, generally referred to as "type A" and "type B".[16] These genital appendages are often preserved very prominently in fossils and have been the subject of various interpretations of eurypterid reproduction and sexual dimorphism.[40]
Type A appendages are generally longer than those of type B. In some genera they are divided into different numbers of sections, such as in Eurypterus where the type A appendage is divided into three but the type B appendage into only two.[41] Such division of the genital appendage is common in eurypterids, but the number is not universal; for instance, the appendages of both types in the family Pterygotidae are undivided.[42] The type A appendage is also armed with two curved spines called furca (Latin for "fork"). The presence of furca in the type B appendage is also possible and the structure may represent the unfused tips of the appendages. Located between the dorsal and ventral surfaces of the blatfuss associated with the type A appendages is a set of organs traditionally described as either "tubular organs" or "horn organs". These organs are most often interpreted as spermathecae (organs for storing sperm), though this function is yet to be proven conclusively.[43] In arthropods, spermathecae are used to store the spermatophore received from males. This would imply that the type A appendage is the female morph and the type B appendage is the male.[16] Further evidence for the type A appendages representing the female morph of genital appendages comes in their more complex construction (a general trend for female arthropod genitalia). It is possible that the greater length of the type A appendage means that it was used as an ovipositor (used to deposit eggs).[44] The different types of genital appendages are not necessarily the only feature that distinguishes between the sexes of eurypterids. Depending on the genus and species in question, other features such as size, the amount of ornamentation and the proportional width of the body can be the result of sexual dimorphism.[2] In general, eurypterids with type B appendages (males) appear to have been proportionally wider than eurypterids with type A appendages (females) of the same genera.[45]
The primary function of the long, assumed female, type A appendages was likely to take up spermatophore from the substrate into the reproductive tract rather than to serve as an ovipositor, as arthropod ovipositors are generally longer than eurypterid type A appendages. By rotating the sides of the operculum, it would have been possible to lower the appendage from the body. Due to the way different plates overlay at its location, the appendage would have been impossible to move without muscular contractions moving around the operculum. It would have been kept in place when not in use. The furca on the type A appendages may have aided in breaking open the spermatophore to release the free sperm inside for uptake. The "horn organs", possibly spermathecae, are thought to have been connected directly to the appendage via tracts, but these supposed tracts remain unpreserved in available fossil material.[46]
Type B appendages, assumed male, would have produced, stored and perhaps shaped spermatophore in a heart-shaped structure on the dorsal surface of the appendage. A broad genital opening would have allowed large amounts of spermatophore to be released at once. The long furca associated with type B appendages, perhaps capable of being lowered like the type A appendage, could have been used to detect whether a substrate was suitable for spermatophore deposition.[47]
Until 1882, no eurypterids were known from before the Silurian. Discoveries since the 1880s have expanded knowledge of early eurypterids from the Ordovician period.[48] The earliest eurypterid known today, the megalograptid Pentecopterus, dates from the Darriwilian stage of the Middle Ordovician, 467.3 million years ago.[49] There are also reports of even earlier fossil eurypterids in deposits of Late Tremadocian (Early Ordovician) age in Morocco, but these have yet to be thoroughly studied.[50]
Pentecopterus was a relatively derived eurypterid, part of the megalograptid family within the carcinosomatoid superfamily. Its derived position suggests that most eurypterid clades, at least within the eurypterine suborder, had already been established at this point during the Middle Ordovician.[51] The earliest known stylonurine eurypterid, Brachyopterus,[6] is also Middle Ordovician in age. The presence of members of both suborders indicates that primitive stem-eurypterids would have preceded them, though these are so far unknown in the fossil record. The presence of several eurypterid clades during the Middle Ordovician suggests that eurypterids either originated during the Early Ordovician and experienced a rapid and explosive radiation and diversification soon after the first forms evolved, or that the group originated much earlier, perhaps during the Cambrian period.[51]
As such, the exact time of origin of the eurypterids remains unknown. Though fossils referred to as "primitive eurypterids" have occasionally been described from deposits of Cambrian or even Precambrian age,[52] they are not recognized as eurypterids, and sometimes not even as related forms, today. Some animals previously seen as primitive eurypterids, such as the genus Strabops from the Cambrian of Missouri,[53] are now classified as aglaspidids or strabopids. The aglaspidids, once seen as primitive chelicerates, are now seen as a group more closely related to trilobites.[54]
The fossil record of Ordovician eurypterids is quite poor. The majority of eurypterids once reported from the Ordovician have since proven to be misidentifications or pseudofossils. Today only 11 species can be confidently identified as representing Ordovician eurypterids. These taxa fall into two distinct ecological categories: large and active predators from the ancient continent of Laurentia, and demersal (living on the seafloor) and basal animals from the continents Avalonia and Gondwana.[49] The Laurentian predators, classified in the family Megalograptidae (comprising the genera Echinognathus, Megalograptus and Pentecopterus), are likely to represent the first truly successful eurypterid group, experiencing a small radiation during the Late Ordovician.[55]
Eurypterids were most diverse and abundant between the Middle Silurian and the Early Devonian, with an absolute peak in diversity during the Pridoli epoch, 423 to 419.2 million years ago, of the very latest Silurian.[15] This peak in diversity has been recognized since the early twentieth century; of the approximately 150 species of eurypterids known in 1916, more than half were from the Silurian and a third were from the Late Silurian alone.[48]
Though stylonurine eurypterids generally remained rare and low in number, as had been the case during the preceding Ordovician, eurypterine eurypterids experienced a rapid rise in diversity and number.[56] In most Silurian fossil beds, eurypterine eurypterids account for 90% of all eurypterids present.[57] Though some were likely already present by the Late Ordovician (simply missing from the fossil record so far),[51] a vast majority of eurypterid groups are first recorded in strata of Silurian age. These include both stylonurine groups such as the Stylonuroidea, Kokomopteroidea and Mycteropoidea as well as eurypterine groups such as the Pterygotioidea, Eurypteroidea and Waeringopteroidea.[58]
The most successful eurypterid by far was the Middle to Late Silurian Eurypterus, a generalist equally likely to have engaged in predation or scavenging. Thought to have hunted mainly small and soft-bodied invertebrates, such as worms,[59] species of the genus (of which the most common is the type species, E. remipes) account for more than 90% (perhaps as many as 95%) of all known fossil eurypterid specimens.[57] Despite their vast numbers, Eurypterus species are only known from a relatively short temporal range, first appearing during the late Llandovery epoch (around 432 million years ago) and going extinct by the end of the Pridoli epoch.[60] Eurypterus was also restricted to the minor supercontinent Euramerica (composed of the equatorial continents Avalonia, Baltica and Laurentia), which the genus colonized completely as the landmasses merged; it was unable to cross the vast expanses of ocean separating this continent from other parts of the world, such as the southern supercontinent Gondwana. As such, Eurypterus was limited geographically to the coastlines and shallow inland seas of Euramerica.[57][61]
The pterygotid eurypterids appeared during the Late Silurian: large and specialized forms with several new adaptations, such as large, flattened telsons capable of being used as rudders and large, specialized chelicerae with enlarged pincers for handling (and potentially, in some cases, killing) prey.[3][4] Though the largest members of the family appeared in the Devonian, large two-meter (6.5+ ft) pterygotids such as Acutiramus were already present during the Late Silurian.[9] Their ecology ranged from generalized predatory behavior to ambush predation, and some, such as Pterygotus itself, were active apex predators in Late Silurian marine ecosystems.[62] The pterygotids were also evidently capable of crossing oceans, becoming one of only two eurypterid groups to achieve a cosmopolitan distribution.[63]
Though the eurypterids continued to be abundant and to diversify during the Early Devonian (for instance giving rise to the pterygotid Jaekelopterus, the largest of all arthropods), the group was one of many heavily affected by the Late Devonian extinction. The extinction event, known only to have affected marine life (particularly trilobites, brachiopods and reef-building organisms), effectively crippled the abundance and diversity previously seen within the eurypterids.[64]
A major decline in diversity had already begun during the Early Devonian, and eurypterids were rare in marine environments by the Late Devonian. Four families went extinct during the Frasnian stage, and the later Famennian saw an additional five families go extinct.[64] As marine groups were the most affected, the losses fell primarily on the eurypterine suborder. Only one group of stylonurines (the family Parastylonuridae) went extinct in the Early Devonian, while only two families of eurypterines survived into the Late Devonian at all (Adelophthalmidae and Waeringopteridae). The eurypterines experienced their steepest decline in the Early Devonian, during which over 50% of their diversity was lost in just 10 million years. Stylonurines, on the other hand, persisted through the period with more or less consistent diversity and abundance, but were affected during the Late Devonian, when many of the older groups were replaced by new forms in the families Mycteroptidae and Hibbertopteridae.[65]
It is possible that the catastrophic extinction patterns seen in the eurypterine suborder were related to the emergence of more derived fish. Eurypterine decline began at the point when jawless fish first became more developed and coincides with the emergence of placoderms (armored fish) in both North America and Europe.[66]
Stylonurines of the surviving hibbertopterid and mycteroptid families completely avoided competition with fish by evolving towards a new and distinct ecological niche. These families experienced a radiation and diversification through the Late Devonian and Early Carboniferous, the last ever radiation within the eurypterids, which gave rise to several new forms capable of "sweep-feeding" (raking through the substrate in search of prey).[67]
Only three eurypterid families (Adelophthalmidae, Hibbertopteridae and Mycteroptidae) survived the entire extinction event. These were all freshwater animals, rendering the eurypterids extinct in marine environments.[64] With marine eurypterid predators gone, sarcopterygian fish, such as the rhizodonts, became the new apex predators in marine environments.[66] The sole surviving eurypterine family, Adelophthalmidae, was represented by only a single genus, Adelophthalmus. The hibbertopterids, mycteroptids and Adelophthalmus survived into the Permian.[68]
Adelophthalmus became the most common of all late Paleozoic eurypterids, existing in greater numbers and diversity than the surviving stylonurines, and diversified in the absence of other eurypterines.[69] Of the 32 species referred to Adelophthalmus, 22 (69%) are from the Carboniferous alone.[70] The genus reached its peak diversity in the Late Carboniferous. Though Adelophthalmus had already been relatively widespread and present on all major landmasses by the Late Devonian, the amalgamation of Pangaea into a global supercontinent over the course of the last two periods of the Paleozoic allowed it to gain an almost worldwide distribution.[57]
During the Late Carboniferous and Early Permian Adelophthalmus was widespread, living primarily in brackish and freshwater environments adjacent to coastal plains. These environments were maintained by favorable climate conditions, but did not persist as climate changes, brought on by Pangaea's formation, altered depositional and vegetational patterns across the world. With their habitat gone, Adelophthalmus dwindled in number and went extinct in the Leonardian stage of the Early Permian.[71]
Mycteroptids and hibbertopterids continued to survive for some time, with one genus of each group known from Permian strata: Hastimima and Campylocephalus respectively.[72] Hastimima went extinct during the Early Permian,[73] as Adelophthalmus had, while Campylocephalus persisted longer. A massive incomplete carapace from Late Permian (Changhsingian stage) deposits in Russia represents the sole fossil remains of the species C. permianus, which might have reached 1.4 meters (4.6 ft) in length.[9] This giant was the last known surviving eurypterid.[6] No eurypterids are known from fossil beds higher than the Permian. This indicates that the last eurypterids died either in the catastrophic extinction event at its end or at some point shortly before it. This extinction event, the Permian–Triassic extinction event, is the most devastating mass extinction recorded, and rendered many other successful Paleozoic groups, such as the trilobites, extinct.[74]
The first known eurypterid specimen was discovered in the Silurian-aged rocks of New York, to this day one of the richest eurypterid fossil locations. Samuel L. Mitchill described the specimen, discovered near Westmoreland in Oneida County, in 1818. He erroneously identified the fossil as an example of the fish Silurus, likely due to the strange, catfish-like appearance of the carapace. Seven years later, in 1825, James E. DeKay examined the fossil and recognized it as clearly belonging to an arthropod. He thought the fossil, which he named Eurypterus remipes, represented a crustacean of the order Branchiopoda, and suggested it might represent a missing link between the trilobites and more derived branchiopods.[75] The name Eurypterus derives from the Ancient Greek words εὐρύς (eurús), meaning "broad" or "wide", and πτερόν (pteron), meaning "wing".[76]
In 1843, Hermann Burmeister published his view on trilobite taxonomy and how the group related to other organisms, living and extinct, in the work Die Organisation der Trilobiten aus ihren lebenden Verwandten entwickelt. He considered the trilobites to be crustaceans, as previous authors had, and classified them together with what he assumed to be their closest relatives, Eurypterus and the genus Cytherina, within a clade he named "Palaeadae". Within Palaeadae, Burmeister erected three families; the "Trilobitae" (composed of all trilobites), the "Cytherinidae" (composed only of Cytherina, an animal today seen as an ostracod) and the Eurypteridae (composed of Eurypterus, then including three species).[77]
The fourth eurypterid genus to be described (following Hibbertopterus in 1836 and Campylocephalus in 1838, not identified as eurypterids until later), out of those still seen as taxonomically valid in modern times, was Pterygotus, described by Louis Agassiz in 1839.[78] Pterygotus was considerably larger in size than Eurypterus and when the first fossils were discovered by quarrymen in Scotland they were referred to as "Seraphims" by the workers. Agassiz first thought the fossils represented remains of fish, with the name Pterygotus meaning "winged fish", and only recognized their nature as arthropod remains five years later in 1844.[79]
In 1849, Frederick M'Coy classified Pterygotus together with Eurypterus and Bellinurus (a genus today seen as a xiphosuran) within Burmeister's Eurypteridae. M'Coy considered the Eurypteridae to be a group of crustaceans within the order Entomostraca, closely related to horseshoe crabs.[80] A fourth genus, Slimonia, based on fossil remains previously assigned to a new species of Pterygotus, was referred to the Eurypteridae in 1856 by David Page.[81]
Jan Nieszkowski's De Euryptero Remipede (1858) featured an extensive description of Eurypterus fischeri (now seen as synonymous with another species of Eurypterus, E. tetragonophthalmus), which, along with the monograph On the Genus Pterygotus by Thomas Henry Huxley and John William Salter, and an exhaustive description of the various eurypterids of New York in Volume 3 of the Palaeontology of New York (1859) by James Hall, contributed massively to the understanding of eurypterid diversity and biology. These publications were the first to fully describe the whole anatomy of eurypterids, recognizing the full number of prosomal appendages and the number of preabdominal and postabdominal segments. Both Nieszkowski and Hall recognized that the eurypterids were closely related to modern chelicerates, such as horseshoe crabs.[82]
In 1865, Henry Woodward described the genus Stylonurus (named and figured, but not thoroughly described, by David Page in 1856) and raised the rank of the Eurypteridae to that of order, effectively creating the Eurypterida as the taxonomic unit it is seen as today.[83] In the work Anatomy and Relations of the Eurypterida (1893), Malcolm Laurie added considerably to the knowledge and discussion of eurypterid anatomy and relations. He focused on how the eurypterids related to each other and to trilobites, crustaceans, scorpions, other arachnids and horseshoe crabs. The description of Eurypterus fischeri by Gerhard Holm in 1896 was so elaborate that the species became one of the most completely known of all extinct animals, so much so that the knowledge of E. fischeri was comparable with the knowledge of its modern relatives (such as the Atlantic horseshoe crab). The description also helped solidify the close relationship between the eurypterids and other chelicerates by showcasing numerous homologies between the two groups.[84]
In 1912, John Mason Clarke and Rudolf Ruedemann published The Eurypterida of New York in which all eurypterid species thus far recovered from fossil deposits there were discussed. Clarke and Ruedemann created one of the first phylogenetic trees of eurypterids, dividing the order into two families; Eurypteridae (distinguished by smooth eyes and including Eurypterus, Anthraconectes, Stylonurus, Eusarcus, Dolichopterus, Onychopterus and Drepanopterus) and Pterygotidae (distinguished by faceted eyes and including Pterygotus, Erettopterus, Slimonia and Hughmilleria). Both families were considered to be descended from a common ancestor, Strabops.[85] In line with earlier authors, Clarke and Ruedemann also supported a close relationship between the eurypterids and the horseshoe crabs (united under the class Merostomata) but also discussed alternative hypotheses such as a closer relation to arachnids.[86]
Historically, a close relationship between eurypterids and xiphosurans (such as the modern Atlantic horseshoe crab) has been assumed by most researchers. Several homologies encourage this view, such as correlating segments of the appendages and the prosoma. Additionally, the presence of plate-like appendages bearing the "gill tracts" on appendages of the opisthosoma (the blatfüssen) was cited early as an important homology. In the last few decades of the nineteenth century, further homologies were established, such as the similar structures of the compound eyes of Pterygotus and horseshoe crabs (seen as especially decisive as the eye of the horseshoe crab was seen as possessing an almost unique structure) and similarities in the ontogeny within both groups.[87] These ontogenetical similarities were seen as most apparent when studying the nepionic stages (the developmental stage immediately following the embryonic stage) in both groups, during which both xiphosurans and eurypterids have a proportionally larger carapace than adults, are generally broader, possess a distinct ridge down the middle, have a lesser number of segments which lack differentiation and have an underdeveloped telson.[88]
Due to these similarities, the xiphosurans and eurypterids have often been united under a single class or subclass called Merostomata (erected to house both groups by Henry Woodward in 1866). Though xiphosurans (like the eurypterids) were historically seen as crustaceans due to their respiratory system and their aquatic lifestyle, this hypothesis was discredited after numerous similarities were discovered between the horseshoe crabs and the arachnids.[88] Some authors, such as John Sterling Kingsley in 1894, classified the Merostomata as a sister group to the Arachnida under the class "Acerata" within a subphylum "Branchiata". Others, such as Ray Lankester in 1909, went further and classified the Merostomata as a subclass within the Arachnida, raised to the rank of class.[89]
In 1866, Ernst Haeckel classified the Merostomata (containing virtually only the Eurypterida) and Xiphosura within a group he named Gigantostraca within the crustaceans. Though Haeckel did not designate any taxonomic rank for this clade, it was interpreted as equivalent to the rank of subclass, such as the Malacostraca and Entomostraca, by later researchers such as John Sterling Kingsley.[90] In subsequent research, Gigantostraca has been treated as synonymous with Merostomata (rarely) and Eurypterida itself (more commonly).[91][92]
A phylogenetic analysis (the results presented in a cladogram below) conducted by James Lamsdell in 2013 on the relationships within the Xiphosura and the relations to other closely related groups (including the eurypterids, which were represented in the analysis by genera Eurypterus, Parastylonurus, Rhenopterus and Stoermeropterus) concluded that the Xiphosura, as presently understood, was paraphyletic (a group sharing a last common ancestor but not including all descendants of this ancestor) and thus not a valid phylogenetic group.[93] Eurypterids were recovered as closely related to arachnids instead of xiphosurans, forming the group Sclerophorata within the clade Dekatriata (composed of sclerophorates and chasmataspidids). Lamsdell noted that it is possible that Dekatriata is synonymous with Sclerophorata as the reproductive system, the primary defining feature of sclerophorates, has not been thoroughly studied in chasmataspidids. Dekatriata is, in turn, part of the Prosomapoda, a group including the Xiphosurida (the only monophyletic xiphosuran group) and other stem-genera.[94]
[Cladogram from Lamsdell (2013); the branching structure could not be reproduced here. Taxa included: †Fuxianhuia, †Emeraldella, †Trilobitomorpha, †Sidneyia, †Yohoia, †Alalcomenaeus, †Leanchoilia, †Palaeoisopus, Pycnogonum, †Haliestes, †Offacolus, †Weinbergina, †Venustulus, †Camanchia, †Legrandella, †Kasibelinurus, †Willwerathia, †Lunataspis, †Bellinurina, Limulina, †Pseudoniscus, †Cyamocephalus, †Pasternakevia, †Bunodes, †Limuloides, †Bembicosoma, †Chasmataspidida, Arachnida, †Eurypterida.]
The internal classification of eurypterids within the Eurypterida is based mainly on eleven established characters. These have been used throughout the history of eurypterid research to establish clades and genera. These characters include: the shape of the prosoma, the shape of the metastoma, the shape and position of the eyes, the types of prosomal appendages, the types of swimming leg paddles, the structure of the doublure (the fringe of the dorsal exoskeleton), the structure of the opisthosoma, the structure of the genital appendages, the shape of the telson and the type of ornamentation present. Not all of these characters are of equal taxonomic importance.[95] They are not applicable to all eurypterids either; stylonurine eurypterids lack swimming leg paddles entirely.[15] Some characters, including the prosoma and metastoma shapes and the position and shapes of the eyes, are seen as important only for the distinction between different genera.[96] Most superfamilies and families are defined based on the morphology of the appendages.[97]
The most important character used in eurypterid taxonomy is the type of prosomal appendages as this character is used to define entire suborders. General leg anatomy can also be used to define superfamilies and families. Historically, the chelicerae were considered the most important appendages from a taxonomical standpoint since they only occurred in two general types: a eurypterid type with small and toothless pincers and a pterygotid type with large pincers and teeth. This distinction has historically been used to divide the Eurypterida into the two suborders Eurypterina (small chelicerae) and "Pterygotina" (large and powerful chelicerae).[98] This classification scheme is not without problems. In Victor Tollerton's 1989 taxonomic revision of the Eurypterida, which recognized the suborders Eurypterina and Pterygotina, several clades of eurypterids today recognized as stylonurines (including hibbertopterids and mycteroptids) were reclassified as non-eurypterids in the new separate order "Cyrtoctenida", on the grounds of perceived inconsistencies in the prosomal appendages.[99]
Modern research favors a classification into suborders Eurypterina and Stylonurina instead, supported by phylogenetic analyses.[100][35] In particular, pterygotid eurypterids share a number of homologies with derived eurypterine eurypterids such as the adelophthalmids, and are thus best classified as derived members of the same suborder.[101] In the Stylonurina, the sixth pair of appendages is represented by long and slender walking legs and lacks a modified spine (referred to as the podomere 7a). In most eurypterids in the Eurypterina, the sixth pair of appendages is broadened into swimming paddles and always has a podomere 7a. 75% of eurypterid species are eurypterines and they represent 99% of all fossil eurypterid specimens.[15] Of all eurypterid clades, the Pterygotioidea is the most species-rich, with over 50 species. The second most species-rich clade is the Adelophthalmoidea, with over 40 species.[57]
The cladogram presented below, covering all currently recognized eurypterid families, follows a 2007 study by O. Erik Tetlie.[102] The stylonurine suborder follows a 2010 study by James Lamsdell, Simon J. Braddy and Tetlie.[103] The superfamily "Megalograptoidea", recognized by Tetlie in 2007 and then placed between the Onychopterelloidea and Eurypteroidea, has been omitted as more recent studies suggest that the megalograptids were members of the superfamily Carcinosomatoidea. As such, the phylogeny of the Carcinosomatoidea follows a 2015 study by Lamsdell and colleagues.[104]
[Cladogram of eurypterid families after Tetlie (2007), with the Stylonurina following Lamsdell, Braddy and Tetlie (2010) and the Carcinosomatoidea following Lamsdell and colleagues (2015); the branching structure could not be reproduced here. Families included: Rhenopteridae, Parastylonuridae, Stylonuridae, Hardieopteridae, Kokomopteridae, Drepanopteridae, Hibbertopteridae, Mycteroptidae, Moselopteridae, Onychopterellidae, Dolichopteridae, Eurypteridae, Strobilopteridae, Megalograptidae, Carcinosomatidae, Mixopteridae, Waeringopteridae, Adelophthalmidae, Hughmilleriidae, Pterygotidae, Slimonidae.]
en/1914.html.txt
ADDED
@@ -0,0 +1,44 @@
Eva Anna Paula Hitler (née Braun; 6 February 1912 – 30 April 1945) was the longtime companion of Adolf Hitler and, for less than 40 hours, his wife. Braun met Hitler in Munich when she was a 17-year-old assistant and model for his personal photographer Heinrich Hoffmann. She began seeing Hitler often about two years later. She attempted suicide twice during their early relationship. By 1936, she was a part of his household at the Berghof near Berchtesgaden and lived a sheltered life throughout World War II. Braun was a photographer, and she took many of the surviving colour photographs and films of Hitler. She was a key figure within Hitler's inner social circle, but did not attend public events with him until mid-1944, when her sister Gretl married Hermann Fegelein, the SS liaison officer on his staff.
As Nazi Germany was collapsing towards the end of the war, Braun swore loyalty to Hitler and went to Berlin to be by his side in the heavily reinforced Führerbunker beneath the Reich Chancellery garden. As Red Army troops fought their way into the central government district, on 29 April 1945 she married Hitler during a brief civil ceremony; she was 33 and he was 56. Less than 40 hours later, they committed suicide together in a sitting room of the bunker, she by biting into a capsule of cyanide, and he by a gunshot to the head. The German public was unaware of Braun's relationship with Hitler until after their deaths.
Eva Braun was born in Munich and was the second daughter of school teacher Friedrich "Fritz" Braun (1879–1964)[1] and Franziska "Fanny" Kronberger (1885–1976);[2] her mother had worked as a seamstress before her marriage.[3] She had an elder sister, Ilse (1909–1979), and a younger sister, Margarete (Gretl) (1915–1987).
Braun's parents were divorced in April 1921, but remarried in November 1922, probably for financial reasons (hyperinflation was plaguing the German economy at the time).[4] Braun was educated at a Catholic lyceum in Munich, and then for one year at a business school in the Convent of the English Sisters in Simbach am Inn, where she had average grades and a talent for athletics.[5]
At age 17, she took a job working for Heinrich Hoffmann, the official photographer for the Nazi Party (NSDAP).[2] Initially employed as a shop assistant and sales clerk, she soon learned how to use a camera and develop photographs.[6] She met Hitler, 23 years her senior, at Hoffmann's studio in Munich in October 1929. He had been introduced to her as "Herr Wolff".[7] Eva's sister, Gretl, also worked for Hoffmann from 1932 onward, and the women rented an apartment together for a time. Gretl accompanied her sister on her later trips with Hitler to the Obersalzberg.[8]
Hitler lived with his half-niece, Geli Raubal, in an apartment at Prinzregentenplatz 16 in Munich from 1929 until her death.[9][10] On 18 September 1931, Raubal was found dead in the apartment with a gunshot wound, an apparent suicide with Hitler's pistol. Hitler was in Nuremberg at the time. The relationship—likely the most intense of his life—had been important to him.[11] Hitler began seeing more of Braun after Raubal's suicide.
Braun herself attempted suicide on 10 or 11 August 1932 by shooting herself in the chest with her father's pistol.[12] Historians feel the attempt was not serious, but was a bid for Hitler's attention.[13][14] After Braun's recovery, Hitler became more committed to her and by the end of 1932 they had become lovers.[15] She often stayed overnight at his Munich apartment when he was in town.[16] Beginning in 1933, Braun worked as a photographer for Hoffmann.[17] This position enabled her to travel—accompanied by Hoffmann—with Hitler's entourage as a photographer for the Nazi Party.[18] Later in her career, she worked for Hoffmann's art press.[19]
According to a fragment of her diary and the account of biographer Nerin Gun, Braun's second suicide attempt occurred in May 1935. She took an overdose of sleeping pills when Hitler failed to make time for her in his life.[20] Hitler provided Eva and her sister with a three-bedroom apartment in Munich that August,[21] and the next year the sisters were provided with a villa in Bogenhausen at Wasserburgerstr. 12 (now Delpstr. 12).[22] By 1936, Braun was at Hitler's household at the Berghof near Berchtesgaden whenever he was in residence there, but she lived mostly in Munich.[23] Braun also had her own apartment at the new Reich Chancellery in Berlin, completed to a design by Albert Speer.[24]
Braun was a member of Hoffmann's staff when she attended the Nuremberg Rally for the first time in 1935. Hitler's half-sister, Angela Raubal (the dead Geli's mother), took exception to her presence there, and was later dismissed from her position as housekeeper at his house in Berchtesgaden. Researchers are unable to ascertain if her dislike for Braun was the only reason for her departure, but other members of Hitler's entourage saw Braun as untouchable from then on.[25]
Hitler wished to present himself in the image of a chaste hero; in the Nazi ideology, men were the political leaders and warriors, and women were homemakers.[26] He believed that he was sexually attractive to women and wished to exploit this for political gain by remaining single, as he felt marriage would decrease his appeal.[27][28] He and Braun never appeared as a couple in public; the only time they appeared together in a published news photo was when she sat near him at the 1936 Winter Olympics. The German people were unaware of Braun's relationship with Hitler until after the war.[13] Braun had her own room adjoining Hitler's at the Berghof, in Hitler's Berlin residence, and in the Berlin bunker.[29][30]
Biographer Heike Görtemaker wrote that women did not play a big role in the politics of Nazi Germany.[31] Braun's political influence on Hitler was minimal; she was never allowed to stay in the room when business or political conversations took place and was sent out of the room when cabinet ministers or other dignitaries were present.[26][32] She was not a member of the Nazi Party.[33][34] In his post-war memoirs, Hoffmann characterized Braun's outlook as "inconsequential and feather-brained";[35] her main interests were sports, clothes, and the cinema. She led a sheltered and privileged existence and seemed uninterested in politics.[36] One instance when she took an interest was in 1943, shortly after Germany had fully transitioned to a total war economy. Among other things, this meant a potential ban on women's cosmetics and luxuries. According to Speer's memoirs, Braun approached Hitler in "high indignation"; Hitler quietly instructed Speer, who was armaments minister at the time, to halt production of women's cosmetics and luxuries rather than instituting an outright ban.[37] Speer later said, "Eva Braun will prove a great disappointment to historians."[38]
Braun continued to work for Hoffmann after commencing her relationship with Hitler. She took many photographs and movies of members of the inner circle, and some of these were sold to Hoffmann for extremely high prices. She received money from Hoffmann's company as late as 1943, and also held the position of private secretary to Hitler.[39] This guise meant she could enter and leave the Chancellery unremarked, though she used a side entrance and a rear staircase.[40][41] Görtemaker notes that Braun and Hitler enjoyed a normal sex life.[42] Braun's friends and relatives described Eva giggling over a 1938 photograph of Neville Chamberlain sitting on a sofa in Hitler's Munich flat with the remark: "If only he knew what goings-on that sofa has seen."[31]
On 3 June 1944, Braun's sister Gretl Braun married SS-Gruppenführer Hermann Fegelein, who served as Reichsführer-SS Heinrich Himmler's liaison officer on Hitler's staff. Hitler used the marriage as an excuse to allow Braun to appear at official functions, as she could then be introduced as Fegelein's sister-in-law.[36] When Fegelein was caught in the closing days of the war trying to escape to Sweden or Switzerland, Hitler ordered his execution.[43][44] He was shot for desertion in the garden of the Reich Chancellery on 28 April 1945.[45]
In 1933, Hitler purchased a small holiday home on the mountain at Obersalzberg. Renovations began in 1934 and were completed by 1936. A large wing was added onto the original house and several additional buildings were constructed. The entire area was fenced off, and remaining houses on the mountain were purchased by the Nazi Party and demolished. Braun and the other members of the entourage were cut off from the outside world when in residence. Speer, Hermann Göring, and Martin Bormann had houses constructed inside the compound.[46]
Hitler's valet, Heinz Linge, stated in his memoirs that Hitler and Braun had two bedrooms and two bathrooms with interconnecting doors at the Berghof, and Hitler would end most evenings alone with her in his study before they retired to bed. She would wear a "dressing gown or house-coat" and drink wine; Hitler would have tea.[47] Public displays of affection or physical contact were nonexistent, even in the enclosed world of the Berghof.[48] Braun took the role of hostess amongst the regular visitors, though she was not involved in running the household. She regularly invited friends and family members to accompany her during her stays, the only guest to do so.[49]
When Henriette von Schirach suggested that Braun should go into hiding after the war, Braun replied, "Do you think I would let him die alone? I will stay with him up until the last moment ..."[19] Hitler named Braun in his will, to receive 12,000 Reichsmarks yearly after his death.[50] He was very fond of her, and worried when she participated in sports or was late returning for tea.[51]
Braun was very fond of Negus and Stasi, her two Scottish Terrier dogs, and they appear in her home movies. She usually kept them away from Hitler's German Shepherd, Blondi.[52] Blondi was killed by one of the entourage on 29 April 1945 when Hitler ordered that one of the cyanide capsules obtained for Braun and Hitler's suicide the next day be tested on the dog.[53] Braun's dogs and Blondi's puppies were shot on 30 April by Hitler's dog handler, Fritz Tornow.[54]
In early April 1945, Braun travelled from Munich to Berlin to be with Hitler at the Führerbunker. She refused to leave as the Red Army closed in on the capital.[55] After midnight on the night of 28–29 April, Hitler and Braun were married in a small civil ceremony within the Führerbunker.[56] The event was witnessed by Joseph Goebbels and Martin Bormann. Thereafter, Hitler hosted a modest wedding breakfast with his new wife.[57] When Braun married Hitler, her legal name changed to Eva Hitler. When she signed her marriage certificate, she wrote the letter B for her family name, then crossed this out and replaced it with Hitler.[57]
After 1:00 pm on 30 April 1945, Braun and Hitler said their farewells to staff and members of the inner circle.[58] Later that afternoon, at approximately 3:30 pm, several people reported hearing a loud gunshot.[59] After waiting a few minutes, Hitler's valet, Heinz Linge, and Hitler's SS adjutant, Otto Günsche, entered the small study and found the lifeless bodies of Hitler and Braun on a small sofa. Braun had bitten into a cyanide capsule,[60] and Hitler had shot himself in the right temple with his pistol.[61][62] The corpses were carried up the stairs and through the bunker's emergency exit to the garden behind the Reich Chancellery, where they were burned.[63] Braun was 33 years old when she died.
The charred remains were found by the Soviets. By 11 May, Hitler's dentist, Hugo Blaschke, dental assistant Käthe Heusermann and dental technician Fritz Echtmann confirmed that the dental remains belonged to Hitler and Braun.[64][65] The Soviets secretly buried the remains at the SMERSH compound in Magdeburg, East Germany, along with the bodies of Joseph and Magda Goebbels and their six children. On 4 April 1970, a Soviet KGB team with detailed burial charts secretly exhumed five wooden boxes of remains. The remains were thoroughly burned and crushed, after which the ashes were thrown into the Biederitz river, a tributary of the nearby Elbe.[66]
The rest of Braun's family survived the war. Her mother, Franziska, died at age 96 in January 1976, having lived out her days in an old farmhouse in Ruhpolding, Bavaria.[67] Her father, Fritz, died in 1964. Gretl gave birth to a daughter—whom she named Eva—on 5 May 1945. She later married Kurt Beringhoff, a businessman. She died in 1987.[68] Braun's elder sister, Ilse, was not part of Hitler's inner circle. She married twice and died in 1979.[68]
en/1915.html.txt
ADDED
@@ -0,0 +1,35 @@
Evaporation is a type of vaporization that occurs on the surface of a liquid as it changes into the gas phase.[1] The surrounding gas must not be saturated with the evaporating substance. When the molecules of the liquid collide, they transfer energy to each other, the amount depending on how they collide. When a molecule near the surface absorbs enough energy to overcome the vapor pressure, it will escape and enter the surrounding air as a gas.[2] When evaporation occurs, the energy removed from the vaporized liquid will reduce the temperature of the liquid, resulting in evaporative cooling.[3]
On average, only a fraction of the molecules in a liquid have enough heat energy to escape from the liquid. Evaporation will continue until an equilibrium is reached, when the rate of evaporation of the liquid equals its rate of condensation. In an enclosed environment, a liquid will evaporate until the surrounding air is saturated.
Evaporation is an essential part of the water cycle. The sun (solar energy) drives evaporation of water from oceans, lakes, moisture in the soil, and other sources of water. In hydrology, evaporation and transpiration (which involves evaporation within plant stomata) are collectively termed evapotranspiration. Evaporation of water occurs when the surface of the liquid is exposed, allowing molecules to escape and form water vapor; this vapor can then rise up and form clouds. With sufficient energy, the liquid will turn into vapor.
For molecules of a liquid to evaporate, they must be located near the surface, be moving in the proper direction, and have sufficient kinetic energy to overcome liquid-phase intermolecular forces.[4] When only a small proportion of the molecules meet these criteria, the rate of evaporation is low. Since the kinetic energy of a molecule is proportional to its temperature, evaporation proceeds more quickly at higher temperatures. As the faster-moving molecules escape, the remaining molecules have lower average kinetic energy, and the temperature of the liquid decreases. This phenomenon is also called evaporative cooling. This is why evaporating sweat cools the human body.
Evaporation also tends to proceed more quickly with higher flow rates between the gaseous and liquid phase and in liquids with higher vapor pressure. For example, laundry on a clothes line will dry (by evaporation) more rapidly on a windy day than on a still day. Three key factors in evaporation are heat, atmospheric pressure (which determines the percent humidity), and air movement.
On a molecular level, there is no strict boundary between the liquid state and the vapor state. Instead, there is a Knudsen layer, where the phase is undetermined. Because this layer is only a few molecules thick, at a macroscopic scale a clear phase transition interface cannot be seen.[citation needed]
Liquids that do not evaporate visibly at a given temperature in a given gas (e.g., cooking oil at room temperature) have molecules that do not tend to transfer energy to each other in a pattern sufficient to frequently give a molecule the heat energy necessary to turn into vapor. However, these liquids are evaporating. It is just that the process is much slower and thus significantly less visible.
If evaporation takes place in an enclosed area, the escaping molecules accumulate as a vapor above the liquid. Many of the molecules return to the liquid, with returning molecules becoming more frequent as the density and pressure of the vapor increases. When the process of escape and return reaches an equilibrium,[4] the vapor is said to be "saturated", and no further change in either vapor pressure and density or liquid temperature will occur. For a system consisting of vapor and liquid of a pure substance, this equilibrium state is directly related to the vapor pressure of the substance, as given by the Clausius–Clapeyron relation:
ln(P2/P1) = −(ΔHvap/R) (1/T2 − 1/T1)
where P1, P2 are the vapor pressures at temperatures T1, T2 respectively, ΔHvap is the enthalpy of vaporization, and R is the universal gas constant. The rate of evaporation in an open system is related to the vapor pressure found in a closed system. If a liquid is heated, when the vapor pressure reaches the ambient pressure the liquid will boil.
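As an illustration only (a minimal sketch, not part of the original article): taking water as the substance, assuming a constant ΔHvap of about 40,660 J/mol and using the normal boiling point (T1 = 373.15 K, P1 = 101,325 Pa) as the reference state, the integrated relation can be evaluated in a few lines of Python:

import math

def vapor_pressure(T2, T1=373.15, P1=101325.0, dH_vap=40660.0, R=8.314):
    # Integrated Clausius-Clapeyron relation with dH_vap assumed constant:
    # ln(P2/P1) = -(dH_vap / R) * (1/T2 - 1/T1)
    return P1 * math.exp(-(dH_vap / R) * (1.0 / T2 - 1.0 / T1))

# Water at 25 °C (298.15 K): prints roughly 3,750 Pa. The measured value
# is about 3,170 Pa; the gap reflects the constant-dH_vap simplification.
print(round(vapor_pressure(298.15)))

The constant-enthalpy assumption is the usual textbook simplification; over wide temperature ranges ΔHvap itself varies, so the estimate degrades.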
The ability for a molecule of a liquid to evaporate is based largely on the amount of kinetic energy an individual particle may possess. Even at lower temperatures, individual molecules of a liquid can evaporate if they have more than the minimum amount of kinetic energy required for vaporization.
Note: air is used here as a common example; however, the vapor phase can be another gas.
In the US, the National Weather Service measures the actual rate of evaporation from a standardized "pan" open water surface outdoors, at various locations nationwide. Others do likewise around the world. The US data is collected and compiled into an annual evaporation map. The measurements range from under 30 to over 120 inches (3,000 mm) per year.
Evaporation is an endothermic process, in that heat is absorbed during evaporation.
Fuel droplets vaporize as they receive heat by mixing with the hot gases in the combustion chamber. Heat (energy) can also be received by radiation from any hot refractory wall of the combustion chamber.
Internal combustion engines rely upon the vaporization of the fuel in the cylinders to form a fuel/air mixture in order to burn well.
The chemically correct air/fuel mixture for total burning of gasoline has been determined to be 15 parts air to one part gasoline or 15/1 by weight. Changing this to a volume ratio yields 8000 parts air to one part gasoline or 8,000/1 by volume.
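As a rough consistency check on that conversion (a sketch; the densities below are assumed typical values, not figures given in the text), dividing the density of liquid gasoline by the density of air scales the mass ratio into a volume ratio:

# Rough check of the weight-to-volume conversion described above.
rho_air = 1.3         # kg/m^3, assumed density of air near sea level
rho_gasoline = 700.0  # kg/m^3, assumed density of liquid gasoline

mass_ratio = 15.0  # parts air per part gasoline, by weight
volume_ratio = mass_ratio * (rho_gasoline / rho_air)  # by volume
print(round(volume_ratio))  # about 8,100, i.e. on the order of 8,000:1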
Thin films may be deposited by evaporating a substance and condensing it onto a substrate, or by dissolving the substance in a solvent, spreading the resulting solution thinly over a substrate, and evaporating the solvent. The Hertz–Knudsen equation is often used to estimate the rate of evaporation in these instances.
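For orientation (a minimal sketch under stated assumptions, not the article's own calculation): the Hertz–Knudsen expression estimates the maximum evaporation flux from a surface as J = α·p / sqrt(2π·m·kB·T), where α is an evaporation coefficient (taken as 1 below), p the vapor pressure, m the mass of one molecule, kB Boltzmann's constant and T the absolute temperature. The water values in the example are illustrative assumptions.

import math

def hertz_knudsen_flux(p_vap, T, molar_mass, alpha=1.0):
    # Maximum evaporation flux into vacuum, molecules per m^2 per second:
    # J = alpha * p / sqrt(2 * pi * m * k_B * T)
    k_B = 1.380649e-23    # Boltzmann constant, J/K
    N_A = 6.02214076e23   # Avogadro constant, 1/mol
    m = molar_mass / N_A  # mass of one molecule, kg
    return alpha * p_vap / math.sqrt(2.0 * math.pi * m * k_B * T)

# Water at 298 K, vapor pressure ~3,170 Pa, molar mass 0.018 kg/mol:
# prints on the order of 1e26 molecules per square metre per second.
print(f"{hertz_knudsen_flux(3170.0, 298.0, 0.018):.2e}")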
Media related to Evaporation at Wikimedia Commons
en/1916.html.txt
ADDED
@@ -0,0 +1,179 @@
A bishop is an ordained, consecrated, or appointed member of the Christian clergy who is generally entrusted with a position of authority and oversight.
Within the Catholic, Eastern Orthodox, Oriental Orthodox, Moravian, Anglican, Old Catholic and Independent Catholic churches, as well as the Assyrian Church of the East, bishops claim apostolic succession, a direct historical lineage dating back to the original Twelve Apostles. Within these churches, bishops are seen as those who possess the full priesthood and can ordain clergy, including other bishops. Some Protestant churches, including the Lutheran, Anglican and Methodist churches, have bishops serving similar functions as well, though not always understood to be within apostolic succession in the same way. A person ordained as a deacon, priest, and then bishop is understood to hold the fullness of the (ministerial) priesthood, given responsibility by Christ to govern, teach, and sanctify the Body of Christ. Priests, deacons and lay ministers co-operate and assist their bishops in pastoral ministry.
The English term bishop derives from the Greek word ἐπίσκοπος epískopos, meaning "overseer" in Greek, the early language of the Christian Church. In the early Christian era the term was not always clearly distinguished from presbýteros (literally: "elder" or "senior", origin of the modern English word "priest"), but is used in the sense of the order or office of bishop, distinct from that of presbyter, in the writings attributed to Ignatius of Antioch (died c. 110).[1]
The earliest organization of the Church in Jerusalem was, according to most scholars, similar to that of Jewish synagogues, but it had a council or college of ordained presbyters (Ancient Greek: πρεσβύτεροι elders). In Acts 11:30 and Acts 15:22, we see a collegiate system of government in Jerusalem chaired by James the Just, according to tradition the first bishop of the city. In Acts 14:23, the Apostle Paul ordains presbyters in churches in Anatolia.[2]
Often, the word presbyter was not yet distinguished from overseer (Ancient Greek: ἐπίσκοπος episkopos, later used exclusively to mean bishop), as in Acts 20:17, Titus 1:5–7 and 1 Peter 5:1.[a][b] The earliest writings of the Apostolic Fathers, the Didache and the First Epistle of Clement, for example, show the church used two terms for local church offices—presbyters (seen by many as an interchangeable term with episcopos or overseer) and deacon.
In Timothy and Titus in the New Testament a more clearly defined episcopate can be seen. We are told that Paul had left Timothy in Ephesus and Titus in Crete to oversee the local church.[6][7] Paul commands Titus to ordain presbyters/bishops and to exercise general oversight.
Early sources are unclear, but various groups of Christian communities may have had the bishop surrounded by a group or college functioning as leaders of the local churches.[8][9] Eventually the head or "monarchic" bishop came to rule more clearly,[10] and all local churches would in time follow the example of the others and structure themselves after this model, with the one bishop in clearer charge,[8] though the role of the body of presbyters remained important.[10]
Eventually, as Christendom grew, bishops no longer directly served individual congregations. Instead, the Metropolitan bishop (the bishop in a large city) appointed priests to minister to each congregation, acting as the bishop's delegate.
Around the end of the 1st century, the church's organization became clearer in historical documents. In the works of the Apostolic Fathers, and Ignatius of Antioch in particular, the role of the episkopos, or bishop, became more important or, rather, already was very important and was being clearly defined. While Ignatius of Antioch offers the earliest clear description of monarchial bishops (a single bishop over all house churches in a city), he is an advocate of monepiscopal structure rather than a describer of an accepted reality. To the bishops and house churches to which he writes, he offers strategies on how to pressure house churches that do not recognize the bishop into compliance. Other contemporary Christian writers do not describe monarchial bishops, either continuing to equate them with the presbyters or speaking of episkopoi (bishops, plural) in a city.
"Blessed be God, who has granted unto you, who are yourselves so excellent, to obtain such an excellent bishop." — Epistle of Ignatius to the Ephesians 1:1 [11]
"and that, being subject to the bishop and the presbytery, ye may in all respects be sanctified." — Epistle of Ignatius to the Ephesians 2:1
|
23 |
+
[12]
"For your justly renowned presbytery, worthy of God, is fitted as exactly to the bishop as the strings are to the harp." — Epistle of Ignatius to the Ephesians 4:1 [13]
"Do ye, beloved, be careful to be subject to the bishop, and the presbyters and the deacons." — Epistle of Ignatius to the Ephesians 5:1 [13]
"Plainly therefore we ought to regard the bishop as the Lord Himself" — Epistle of Ignatius to the Ephesians 6:1.
"your godly bishop" — Epistle of Ignatius to the Magnesians 2:1.
"the bishop presiding after the likeness of God and the presbyters after the likeness of the council of the Apostles, with the deacons also who are most dear to me, having been entrusted with the diaconate of Jesus Christ" — Epistle of Ignatius to the Magnesians 6:1.
"Therefore as the Lord did nothing without the Father, [being united with Him], either by Himself or by the Apostles, so neither do ye anything without the bishop and the presbyters." — Epistle of Ignatius to the Magnesians 7:1.
"Be obedient to the bishop and to one another, as Jesus Christ was to the Father [according to the flesh], and as the Apostles were to Christ and to the Father, that there may be union both of flesh and of spirit." — Epistle of Ignatius to the Magnesians 13:2.
"In like manner let all men respect the deacons as Jesus Christ, even as they should respect the bishop as being a type of the Father and the presbyters as the council of God and as the college of Apostles. Apart from these there is not even the name of a church." — Epistle of Ignatius to the Trallesians 3:1.
"follow your bishop, as Jesus Christ followed the Father, and the presbytery as the Apostles; and to the deacons pay respect, as to God's commandment" — Epistle of Ignatius to the Smyrnans 8:1.
"He that honoureth the bishop is honoured of God; he that doeth aught without the knowledge of the bishop rendereth service to the devil" — Epistle of Ignatius to the Smyrnans 9:1.
As the Church continued to expand, new churches in important cities gained their own bishop. Churches in the regions outside an important city were served by chorbishops, an official rank of bishops. Soon, however, presbyters and deacons were sent from the bishop of a city church. Gradually priests replaced the chorbishops. Thus, in time, the bishop changed from being the leader of a single church confined to an urban area to being the leader of the churches of a given geographical area.
Clement of Alexandria (end of the 2nd century) writes about the ordination of a certain Zachæus as bishop by the imposition of Simon Peter Bar-Jonah's hands. The words bishop and ordination are used in their technical meaning by the same Clement of Alexandria.[14] The bishops in the 2nd century are defined also as the only clergy to whom the ordination to priesthood (presbyterate) and diaconate is entrusted: "a priest (presbyter) lays on hands, but does not ordain." (cheirothetei ou cheirotonei[15])
At the beginning of the 3rd century, Hippolytus of Rome describes another feature of the ministry of a bishop, the "Spiritum primatus sacerdotii habere potestatem dimittere peccata": the primacy of sacrificial priesthood and the power to forgive sins.[16]
The efficient organization of the Roman Empire became the template for the organisation of the church in the 4th century, particularly after Constantine's Edict of Milan. As the church moved from the shadows of privacy into the public forum it acquired land for churches, burials and clergy. In 391, Theodosius I decreed that any land that had been confiscated from the church by Roman authorities be returned.
The most usual term for the geographic area of a bishop's authority and ministry, the diocese, began as part of the structure of the Roman Empire under Diocletian. As Roman authority began to fail in the western portion of the empire, the church took over much of the civil administration. This can be clearly seen in the ministry of two popes: Pope Leo I in the 5th century, and Pope Gregory I in the 6th century. Both of these men were statesmen and public administrators in addition to their role as Christian pastors, teachers and leaders. In the Eastern churches, latifundia entailed to a bishop's see were much less common, the state power did not collapse the way it did in the West, and thus the tendency of bishops acquiring civil power was much weaker than in the West. However, the role of Western bishops as civil authorities, often called prince bishops, continued throughout much of the Middle Ages.
As well as being archchancellors of the Holy Roman Empire after the 9th century, bishops generally served as chancellors to medieval monarchs, acting as head of the justiciary and chief chaplain. The Lord Chancellor of England was almost always a bishop up until the dismissal of Cardinal Thomas Wolsey by Henry VIII. Similarly, the position of Kanclerz in the Polish kingdom was always held by a bishop until the 16th century. And today, the principality of Andorra is headed by two co-princes, one of whom is a Catholic bishop (and the other, the President of France).
In France before the French Revolution, representatives of the clergy — in practice, bishops and abbots of the largest monasteries — comprised the First Estate of the Estates-General, until their role was abolished during the French Revolution.
In the 21st century, the more senior bishops of the Church of England continue to sit in the House of Lords of the Parliament of the United Kingdom, as representatives of the established church, and are known as Lords Spiritual. The Bishop of Sodor and Man, whose diocese lies outside the United Kingdom, is an ex officio member of the Legislative Council of the Isle of Man. In the past, the Bishop of Durham, known as a prince bishop, had extensive viceregal powers within his northern diocese — the power to mint money, collect taxes and raise an army to defend against the Scots.
Eastern Orthodox bishops, along with all other members of the clergy, are canonically forbidden to hold political office. Occasional exceptions to this rule are tolerated when the alternative is political chaos. In the Ottoman Empire, the Patriarch of Constantinople, for example, had de facto administrative, fiscal, cultural and legal jurisdiction, as well as spiritual, over all the Christians of the empire. More recently, Archbishop Makarios III of Cyprus, served as President of the Republic of Cyprus from 1960 to 1977.
In 2001, Peter Hollingworth, AC, OBE – then the Anglican Archbishop of Brisbane – was controversially appointed Governor-General of Australia. Although Hollingworth gave up his episcopal position to accept the appointment, it still attracted considerable opposition in a country which maintains a formal separation between Church and State.
During the period of the English Civil War, the role of bishops as wielders of political power and as upholders of the established church became a matter of heated political controversy. Indeed, Presbyterianism was the polity of most Reformed Churches in Europe, and had been favored by many in England since the English Reformation. Since in the primitive church the offices of presbyter and episkopos were identical, many Puritans held that this was the only form of government the church should have. The Anglican divine, Richard Hooker, objected to this claim in his famous work Of the Laws of Ecclesiastic Polity while, at the same time, defending Presbyterian ordination as valid (in particular Calvin's ordination of Beza). This was the official stance of the English Church until the Commonwealth, during which time, the views of Presbyterians and Independents (Congregationalists) were more freely expressed and practiced.
Bishops form the leadership in the Catholic Church, the Eastern Orthodox Church, the Oriental Orthodox Churches, the Anglican Communion, the Lutheran Church, the Independent Catholic Churches, the Independent Anglican Churches, and certain other, smaller, denominations.
The traditional role of a bishop is as pastor of a diocese (also called a bishopric, synod, eparchy or see), and so to serve as a "diocesan bishop," or "eparch" as it is called in many Eastern Christian churches. Dioceses vary considerably in size, geographically and population-wise. Some dioceses around the Mediterranean Sea which were Christianised early are rather compact, whereas dioceses in areas of rapid modern growth in Christian commitment—as in some parts of Sub-Saharan Africa, South America and the Far East—are much larger and more populous.
As well as traditional diocesan bishops, many churches have a well-developed structure of church leadership that involves a number of layers of authority and responsibility.
In Catholicism, Eastern Orthodoxy, Oriental Orthodoxy, and Anglicanism, only a bishop can ordain other bishops, priests, and deacons.
In the Eastern liturgical tradition, a priest can celebrate the Divine Liturgy only with the blessing of a bishop. In Byzantine usage, an antimension signed by the bishop is kept on the altar partly as a reminder of whose altar it is and under whose omophorion the priest at a local parish is serving. In Syriac Church usage, a consecrated wooden block called a thabilitho is kept for the same reasons.
The pope, in addition to being the Bishop of Rome and spiritual head of the Catholic Church, is also the Patriarch of the Latin Rite. Each bishop within the Latin Rite is answerable directly to the Pope and not any other bishop except to metropolitans in certain oversight instances. The pope previously used the title Patriarch of the West, but this title was dropped from use in 2006,[18] a move which caused some concern within the Eastern Orthodox Communion as, to them, it implied wider papal jurisdiction.[19]
In Catholic, Eastern Orthodox, Oriental Orthodox and Anglican cathedrals there is a special chair set aside for the exclusive use of the bishop. This is the bishop's cathedra and is often called the throne. In some Christian denominations, for example, the Anglican Communion, parish churches may maintain a chair for the use of the bishop when he visits; this is to signify the parish's union with the bishop.
The bishop is the ordinary minister of the sacrament of confirmation in the Latin Rite Catholic Church, and in the Anglican and Old Catholic communion only a bishop may administer this sacrament. However, in the Byzantine and other Eastern rites, whether Eastern or Oriental Orthodox or Eastern Catholic, chrismation is done immediately after baptism, and thus the priest is the one who confirms, using chrism blessed by a bishop.[20]
Bishops in all of these communions are ordained by other bishops through the laying on of hands. While traditional teaching maintains that any bishop with apostolic succession can validly perform the ordination of another bishop, some churches require that two or three bishops participate, either to ensure sacramental validity or to conform with church law. Catholic doctrine holds that one bishop can validly ordain another priest as a bishop. Though a minimum of three participating bishops is desirable (there are usually several more) in order to demonstrate collegiality, canonically only one bishop is necessary. The practice of only one bishop ordaining was normal in countries where the Church was persecuted under Communist rule.
The title of archbishop or metropolitan may be granted to a senior bishop, usually one who is in charge of a large ecclesiastical jurisdiction. He may, or may not, have provincial oversight of suffragan bishops and may possibly have auxiliary bishops assisting him.
Ordination of a bishop, and thus continuation of apostolic succession, takes place through a ritual centred on the imposition of hands and prayer.
Apart from the ordination, which is always done by other bishops, there are different methods as to the actual selection of a candidate for ordination as bishop. In the Catholic Church the Congregation for Bishops generally oversees the selection of new bishops with the approval of the pope. The papal nuncio usually solicits names from the bishops of a country, consults with priests and leading members of the laity, and then selects three to be forwarded to the Holy See. In Europe, some cathedral chapters have the duty of electing bishops. The Eastern Catholic churches generally elect their own bishops. Most Eastern Orthodox churches allow varying amounts of formalised laity or lower clergy influence on the choice of bishops. This also applies in those Eastern churches which are in union with the pope, though it is required that he give assent.
Catholic, Eastern Orthodox, Oriental Orthodox, Anglican, Old Catholic and some Lutheran bishops claim to be part of the continuous sequence of ordained bishops since the days of the apostles referred to as apostolic succession. Since Pope Leo XIII issued the bull Apostolicae curae in 1896, the Catholic Church has insisted that Anglican orders are invalid because of changes in the Anglican ordination rites of the 16th century and divergence in understanding of the theology of priesthood, episcopacy and Eucharist. However, since the 1930s, Utrecht Old Catholic bishops (recognised by the Holy See as validly ordained) have sometimes taken part in the ordination of Anglican bishops. According to the writer Timothy Dufort, by 1969, all Church of England bishops had acquired Old Catholic lines of apostolic succession recognised by the Holy See.[21] This development has muddied the waters somewhat as it could be argued that the strain of apostolic succession has been re-introduced into Anglicanism, at least within the Church of England.
The Catholic Church does recognise as valid (though illicit) ordinations done by breakaway Catholic, Old Catholic or Oriental bishops, and groups descended from them; it also regards as both valid and licit those ordinations done by bishops of the Eastern churches,[c] so long as those receiving the ordination conform to other canonical requirements (for example, being an adult male) and an eastern orthodox rite of episcopal ordination, expressing the proper functions and sacramental status of a bishop, is used; this has given rise to the phenomenon of episcopi vagantes (for example, clergy of the Independent Catholic groups which claim apostolic succession, though this claim is rejected by both Catholicism and Eastern Orthodoxy).
The Eastern Orthodox Churches would not accept the validity of any ordinations performed by the Independent Catholic groups, as Eastern Orthodoxy considers to be spurious any consecration outside the Church as a whole. Eastern Orthodoxy considers apostolic succession to exist only within the Universal Church, and not through any authority held by individual bishops; thus, if a bishop ordains someone to serve outside the (Eastern Orthodox) Church, the ceremony is ineffectual, and no ordination has taken place regardless of the ritual used or the ordaining prelate's position within the Eastern Orthodox Churches.
The position of the Catholic Church is slightly different, in that it does recognise the validity of the orders of certain groups which separated from communion with the Holy See. The Holy See accepts as valid the ordinations of the Old Catholics in communion with Utrecht, as well as the Polish National Catholic Church (which received its orders directly from Utrecht, and was—until recently—part of that communion); but Catholicism does not recognise the orders of any group whose teaching is at variance with what they consider the core tenets of Christianity; this is the case even though the clergy of the Independent Catholic groups may use the proper ordination ritual. There are also other reasons why the Holy See does not recognise the validity of the orders of the Independent clergy.
Whilst members of the Independent Catholic movement take seriously the issue of valid orders, it is highly significant that the relevant Vatican Congregations tend not to respond to petitions from Independent Catholic bishops and clergy who seek to be received into communion with the Holy See, hoping to continue in some sacramental role. In those instances where the pope does grant reconciliation, those deemed to be clerics within the Independent Old Catholic movement are invariably admitted as laity and not priests or bishops.
There is a mutual recognition of the validity of orders amongst Catholic, Eastern Orthodox, Old Catholic, Oriental Orthodox and Assyrian Church of the East churches.[22]
Some provinces of the Anglican Communion have begun ordaining women as bishops in recent decades – for example, England, Ireland, Scotland, Wales, the United States, Australia, New Zealand, Canada and Cuba. The first woman to be consecrated a bishop within Anglicanism was Barbara Harris, who was ordained in the United States in 1989. In 2006, Katharine Jefferts Schori, the Episcopal Bishop of Nevada, became the first woman to become the presiding bishop of the Episcopal Church.
In the Evangelical Lutheran Church in America (ELCA) and the Evangelical Lutheran Church in Canada (ELCIC), the largest Lutheran Church bodies in the United States and Canada, respectively, and roughly based on the Nordic Lutheran state churches (similar to that of the Church of England), bishops are elected by Synod Assemblies, consisting of both lay members and clergy, for a term of six years, which can be renewed, depending upon the local synod's "constitution" (which is mirrored on either the ELCA or ELCIC's national constitution). Since the implementation of concordats between the ELCA and the Episcopal Church of the United States and the ELCIC and the Anglican Church of Canada, all bishops, including the presiding bishop (ELCA) or the national bishop (ELCIC), have been consecrated using the historic succession, with at least one Anglican bishop serving as co-consecrator.[23][24]
Since going into ecumenical communion with their respective Anglican body, bishops in the ELCA or the ELCIC not only approve the "rostering" of all ordained pastors, diaconal ministers, and associates in ministry, but they serve as the principal celebrant of all pastoral ordination and installation ceremonies, diaconal consecration ceremonies, as well as serving as the "chief pastor" of the local synod, upholding the teachings of Martin Luther as well as the documents of the Ninety-Five Theses and the Augsburg Confession. Unlike their counterparts in the United Methodist Church, ELCA and ELCIC synod bishops do not appoint pastors to local congregations (pastors, like their counterparts in the Episcopal Church, are called by local congregations). The presiding bishop of the ELCA and the national bishop of the ELCIC, the national bishops of their respective bodies, are elected for a single 6-year term and may be elected to an additional term.
Although the ELCA agreed with the Episcopal Church to limit ordination to the bishop "ordinarily", ELCA pastor-ordinators are given permission to perform the rites in "extraordinary" circumstances. In practice, "extraordinary" circumstances have included disagreeing with Episcopalian views of the episcopate, and as a result, ELCA pastors ordained by other pastors are not permitted to be deployed to Episcopal Churches (they can, however, serve in Presbyterian Church USA, United Methodist Church, Reformed Church in America, and Moravian Church congregations, as the ELCA is in full communion with these denominations). The Lutheran Church–Missouri Synod (LCMS) and the Wisconsin Evangelical Lutheran Synod (WELS), the second and third largest Lutheran bodies in the United States and the two largest Confessional Lutheran bodies in North America, do not follow an episcopal form of governance, settling instead on a form of quasi-congregationalism patterned off what they believe to be the practice of the early church. The second largest of the three predecessor bodies of the ELCA, the American Lutheran Church, was a congregationalist body, with national and synod presidents before they were re-titled as bishops (borrowing from the Lutheran churches in Germany) in the 1980s. With regard to ecclesial discipline and oversight, national and synod presidents typically function similarly to bishops in episcopal bodies.[25]
In the African Methodist Episcopal Church, "Bishops are the Chief Officers of the Connectional Organization. They are elected for life by a majority vote of the General Conference which meets every four years."[26]
In the Christian Methodist Episcopal Church in the United States, bishops are administrative superintendents of the church; they are elected by "delegate" votes and serve until mandatory retirement at the age of 74. Among their duties are responsibility for appointing clergy to serve local churches as pastor, for performing ordinations, and for safeguarding the doctrine and discipline of the Church. The General Conference, a meeting every four years, has an equal number of clergy and lay delegates. In each Annual Conference, CME bishops serve for four-year terms. CME Church bishops may be male or female.
In the United Methodist Church (the largest branch of Methodism in the world) bishops serve as administrative and pastoral superintendents of the church. They are elected for life from among the ordained elders (presbyters) by vote of the delegates in regional (called jurisdictional) conferences, and are consecrated by the other bishops present at the conference through the laying on of hands. In the United Methodist Church bishops remain members of the "Order of Elders" while being consecrated to the "Office of the Episcopacy". Within the United Methodist Church only bishops are empowered to consecrate bishops and ordain clergy. Among their most critical duties is the ordination and appointment of clergy to serve local churches as pastor, presiding at sessions of the Annual, Jurisdictional, and General Conferences, providing pastoral ministry for the clergy under their charge, and safeguarding the doctrine and discipline of the Church. Furthermore, individual bishops, or the Council of Bishops as a whole, often serve a prophetic role, making statements on important social issues and setting forth a vision for the denomination, though they have no legislative authority of their own. In all of these areas, bishops of the United Methodist Church function very much in the historic meaning of the term. According to the Book of Discipline of the United Methodist Church, a bishop's responsibilities are
Leadership.—Spiritual and Temporal—
Presidential Duties.—1. To preside in the General, Jurisdictional, Central, and Annual Conferences. 2. To form the districts after consultation with the district superintendents and after the number of the same has been determined by vote of the Annual Conference. 3. To appoint the district superintendents annually (¶¶ 517–518). 4. To consecrate bishops, to ordain elders and deacons, to consecrate diaconal ministers, to commission deaconesses and home missionaries, and to see that the names of the persons commissioned and consecrated are entered on the journals of the conference and that proper credentials are furnished to these persons.
Working with Ministers.—1. To make and fix the appointments in the Annual Conferences, Provisional Annual Conferences, and Missions as the Discipline may direct (¶¶ 529–533).
2. To divide or to unite a circuit(s), stations(s), or mission(s) as judged necessary for missionary strategy and then to make appropriate appointments. 3. To read the appointments of deaconesses, diaconal ministers, lay persons in service under the World Division of the General Board of Global Ministries, and home missionaries. 4. To fix the Charge Conference membership of all ordained ministers appointed to ministries other than the local church in keeping with ¶443.3. 5. To transfer, upon the request of the receiving bishop, ministerial member(s) of one Annual Conference to another, provided said member(s) agrees to transfer; and to send immediately to the secretaries of both conferences involved, to the conference Boards of Ordained Ministry, and to the clearing house of the General Board of Pensions written notices of the transfer of members and of their standing in the course of study if they are undergraduates.[27]
In each Annual Conference, United Methodist bishops serve for four-year terms, and may serve up to three terms before either retirement or appointment to a new Conference. United Methodist bishops may be male or female, with Marjorie Matthews being the first woman to be consecrated a bishop in 1980.
The collegial expression of episcopal leadership in the United Methodist Church is known as the Council of Bishops. The Council of Bishops speaks to the Church and through the Church into the world and gives leadership in the quest for Christian unity and interreligious relationships.[27] The Conference of Methodist Bishops includes the United Methodist Council of Bishops plus bishops from affiliated autonomous Methodist or United Churches.
John Wesley consecrated Thomas Coke a "General Superintendent," and directed that Francis Asbury also be consecrated for the United States of America in 1784, where the Methodist Episcopal Church first became a separate denomination apart from the Church of England. Coke soon returned to England, but Asbury was the primary builder of the new church. At first he did not call himself bishop, but eventually submitted to the usage by the denomination.
Notable bishops in United Methodist history include Coke, Asbury, Richard Whatcoat, Philip William Otterbein, Martin Boehm, Jacob Albright, John Seybert, Matthew Simpson, John S. Stamm, William Ragsdale Cannon, Marjorie Matthews, Leontine T. Kelly, William B. Oden, Ntambo Nkulu Ntanda, Joseph Sprague, William Henry Willimon, and Thomas Bickerton.
In The Church of Jesus Christ of Latter-day Saints, the Bishop is the leader of a local congregation, called a ward. As with most LDS priesthood holders, the bishop is a part-time lay minister and earns a living through other employment; in all cases, he is a married man. As such, it is his duty to preside at services, call local leaders, and judge the worthiness of members for service. The bishop does not deliver sermons at every service (generally asking members to do so), but is expected to be a spiritual guide for his congregation. It is therefore believed that he has both the right and ability to receive divine inspiration (through the Holy Spirit) for the ward under his direction. Because it is a part-time position, all able members are expected to assist in the management of the ward by holding delegated lay positions (for example, women's and youth leaders, teachers) referred to as callings. Although members are asked to confess serious sins to him, unlike a priest in the Catholic Church he is not the instrument of divine forgiveness, but merely a guide through the repentance process (and a judge in case transgressions warrant excommunication or other official discipline). The bishop is also responsible for the physical welfare of the ward, and thus collects tithing and fast offerings and distributes financial assistance where needed.
A bishop is the president of the Aaronic priesthood in his ward (and is thus a form of Mormon Kohen; in fact, a literal descendant of Aaron has "legal right" to act as a bishop[28] after being found worthy and ordained by the First Presidency[29]). In the absence of a literal descendant of Aaron, a high priest in the Melchizedek priesthood is called to be a bishop.[29] Each bishop is selected from resident members of the ward by the stake presidency with approval of the First Presidency, and chooses two counselors to form a bishopric. In special circumstances (such as a ward consisting entirely of young university students), a bishop may be chosen from outside the ward. A bishop is typically released after about five years and a new bishop is called to the position. Although the former bishop is released from his duties, he continues to hold the Aaronic priesthood office of bishop. Church members frequently refer to a former bishop as "Bishop" as a sign of respect and affection.
Latter-day Saint bishops do not wear any special clothing or insignia the way clergy in many other churches do, but are expected to dress and groom themselves neatly and conservatively per their local culture, especially when performing official duties. Bishops (as well as other members of the priesthood) can trace their line of authority back to Joseph Smith, who, according to church doctrine, was ordained to lead the Church in modern times by the ancient apostles Peter, James, and John, who were ordained to lead the Church by Jesus Christ.[30]
At the global level, the presiding bishop oversees the temporal affairs (buildings, properties, commercial corporations, and so on) of the worldwide Church, including the Church's massive global humanitarian aid and social welfare programs. The presiding bishop has two counselors; the three together form the presiding bishopric.[31] As opposed to ward bishoprics, where the counselors do not hold the office of bishop, all three men in the presiding bishopric hold the office of bishop, and thus the counselors, as with the presiding bishop, are formally referred to as "Bishop".[32]
The New Apostolic Church (NAC) recognizes three classes of ministries: Deacons, Priests and Apostles. The Apostles, who are all included in the apostolate with the Chief Apostle as head, are the highest ministries.
Of the several kinds of priestly ministries, the bishop is the highest. Nearly all bishops are installed directly by the Chief Apostle. They support and help their superior apostle.
In the Church of God in Christ (COGIC), the ecclesiastical structure is composed of large dioceses that are called "jurisdictions" within COGIC, each under the authority of a bishop, sometimes called "state bishops". They can either be made up of large geographical regions of churches or of churches that are grouped and organized together as their own separate jurisdictions because of similar affiliations, regardless of geographical location or dispersion. Each state in the U.S. has at least one jurisdiction, and some have several; each jurisdiction is usually composed of between 30 and 100 churches. Each jurisdiction is then broken down into several districts, which are smaller groups of churches (grouped either by geographical situation or by similar affiliations), each under the authority of District Superintendents who answer to the authority of their jurisdictional/state bishop. There are currently over 170 jurisdictions in the United States, and over 30 jurisdictions in other countries. The bishops of each jurisdiction, according to the COGIC Manual, are considered to be the modern-day equivalent in the church of the early apostles and overseers of the New Testament church, and as the highest-ranking clergymen in the COGIC, they are tasked with being the head overseers of all religious, civil, and economic ministries and protocol for the church denomination.[33] They also have the authority to appoint and ordain local pastors, elders, ministers, and reverends within the denomination. The bishops of the COGIC denomination are collectively called "The Board of Bishops."[34] Every four years, twelve bishops are elected from the Board of Bishops as "The General Board" of the church by the General Assembly of the COGIC, the body of clergy and lay delegates responsible for making and enforcing the bylaws of the denomination; the General Board works alongside the delegates of the General Assembly and the Board of Bishops to administer the denomination as the church's head executive leaders.[35] One of the twelve bishops of the General Board is also elected the "presiding bishop" of the church, and two others are appointed by the presiding bishop himself as his first and second assistant presiding bishops.
Bishops in the Church of God in Christ usually wear black clergy suits which consist of a black suit blazer, black pants, a purple or scarlet clergy shirt and a white clerical collar, which is usually referred to as "Class B Civic attire." Bishops in COGIC also typically wear the Anglican Choir Dress style vestments of a long purple or scarlet chimere, cuffs, and tippet worn over a long white rochet, and a gold pectoral cross worn around the neck with the tippet. This is usually referred to as "Class A Ceremonial attire". The bishops of COGIC alternate between Class A Ceremonial attire and Class B Civic attire depending on the protocol of the religious services and other events they have to attend.[34][33]
In the polity of the Church of God (Cleveland, Tennessee), the international leader is the presiding bishop, and the members of the executive committee are executive bishops. Collectively, they supervise and appoint national and state leaders across the world. Leaders of individual states and regions are administrative bishops, who have jurisdiction over local churches in their respective states and are vested with appointment authority for local pastorates. All ministers are credentialed at one of three levels of licensure, the most senior of which is the rank of ordained bishop. To be eligible to serve in state, national, or international positions of authority, a minister must hold the rank of ordained bishop.
In 2002, the general convention of the Pentecostal Church of God came to a consensus to change the title of their overseer from general superintendent to bishop. The change was made because, internationally, the term bishop is more readily associated with religious leadership than the previous title was.
The title bishop is used for both the general (international leader) and the district (state) leaders. It is sometimes used in conjunction with the previous title, thus becoming general (district) superintendent/bishop.
According to the Seventh-day Adventist understanding of the doctrine of the Church:
"The "elders" (Greek, presbuteros) or "bishops" (episkopos) were the most important officers of the church. The term elder means older one, implying dignity and respect. His position was similar to that of the one who had supervision of the synagogue. The term bishop means "overseer." Paul used these terms interchangeably, equating elders with overseers or bishops (Acts 20:17,28; Titus 1:5, 7).
"Those who held this position supervised the newly formed churches. Elder referred to the status or rank of the office, while bishop denoted the duty or responsibility of the office—"overseer." Since the apostles also called themselves elders (1 Peter 5:1; 2 John 1; 3 John 1), it is apparent that there were both local elders and itinerant elders, or elders at large. But both kinds of elder functioned as shepherds of the congregations.[36]"
The above understanding is part of the basis of Adventist organizational structure. The worldwide Seventh-day Adventist church is organized into local districts, conferences or missions, union conferences or union missions, divisions, and finally, at the top, the general conference. At each level (with the exception of the local districts), there is an elder who is elected president and a group of elders who serve on the executive committee with the elected president. Those elected president are in effect the "bishop", though they never carry the title and are not ordained as such, because the term is usually associated with the episcopal style of church governance most often found in Catholic, Anglican, Methodist and some Pentecostal/Charismatic circles.
Some Baptists also have begun taking on the title of bishop.[37]
In some smaller Protestant denominations and independent churches, the term bishop is used in the same way as pastor, to refer to the leader of the local congregation, and may be male or female. This usage is especially common in African-American churches in the US.
In the Church of Scotland, which has a Presbyterian church structure, the word "bishop" refers to an ordained person, usually a normal parish minister, who has temporary oversight of a trainee minister. In the Presbyterian Church (USA), the term bishop is an expressive name for a Minister of Word and Sacrament who serves a congregation and exercises "the oversight of the flock of Christ."[38] The term is traceable to the 1789 Form of Government of the PC (USA) and the Presbyterian understanding of the pastoral office.[39]
While not considered orthodox Christian, the Ecclesia Gnostica Catholica uses roles and titles derived from Christianity for its clerical hierarchy, including bishops who have much the same authority and responsibilities as in Catholicism.
The Salvation Army does not have bishops but has appointed leaders of geographical areas, known as Divisional Commanders. Larger geographical areas, called Territories, are led by a Territorial Commander, who is the highest-ranking officer in that Territory.
Jehovah's Witnesses do not use the title ‘Bishop’ within their organizational structure, but appoint elders to be overseers (to fulfill the role of oversight) within their congregations.[40][41]
The HKBP, the most prominent Protestant denomination in Indonesia, uses the term Ephorus instead of bishop.[42]
In the Vietnamese syncretist religion of Caodaism, bishops (giáo sư) comprise the fifth of nine hierarchical levels, and are responsible for spiritual and temporal education as well as record-keeping and ceremonies in their parishes. At any one time there are seventy-two bishops. Their authority is described in Section I of the text Tân Luật (revealed through seances in December 1926). Caodai bishops wear robes and headgear of embroidered silk depicting the Divine Eye and the Eight Trigrams. (The color varies according to branch.) This is the full ceremonial dress; the simple version consists of a seven-layered turban.
Traditionally, a number of items are associated with the office of a bishop, most notably the mitre, crosier, and ecclesiastical ring. Other vestments and insignia vary between Eastern and Western Christianity.
In the Latin Rite of the Catholic Church, the choir dress of a bishop includes the purple cassock with amaranth trim, rochet, purple zucchetto (skull cap), purple biretta, and pectoral cross. The cappa magna may be worn, but only within the bishop's own diocese and on especially solemn occasions.[43] The mitre, zucchetto, and stole are generally worn by bishops when presiding over liturgical functions. For liturgical functions other than the Mass the bishop typically wears the cope. Within his own diocese and when celebrating solemnly elsewhere with the consent of the local ordinary, he also uses the crosier.[43] When celebrating Mass, a bishop, like a priest, wears the chasuble. The Caeremoniale Episcoporum recommends, but does not impose, that in solemn celebrations a bishop should also wear a dalmatic, which can always be white, beneath the chasuble, especially when administering the sacrament of holy orders, blessing an abbot or abbess, and dedicating a church or an altar.[43] The Caeremoniale Episcoporum no longer makes mention of episcopal gloves, episcopal sandals, liturgical stockings (also known as buskins), or the accoutrements that it once prescribed for the bishop's horse. The coat of arms of a Latin Rite Catholic bishop usually displays a galero with a cross and crosier behind the escutcheon; the specifics differ by location and ecclesiastical rank (see Ecclesiastical heraldry).
Anglican bishops generally make use of the mitre, crosier, ecclesiastical ring, purple cassock, purple zucchetto, and pectoral cross. However, the traditional choir dress of Anglican bishops retains its late mediaeval form, and looks quite different from that of their Catholic counterparts; it consists of a long rochet which is worn with a chimere.
In the Eastern Churches (Eastern Orthodox, Eastern Rite Catholic) a bishop will wear the mandyas, panagia (and perhaps an enkolpion), sakkos, omophorion and an Eastern-style mitre. Eastern bishops do not normally wear an episcopal ring; the faithful kiss (or, alternatively, touch their forehead to) the bishop's hand. To seal official documents, he will usually use an inked stamp. An Eastern bishop's coat of arms will normally display an Eastern-style mitre, cross, Eastern-style crosier and a red and white (or red and gold) mantle. The arms of Oriental Orthodox bishops will display the episcopal insignia (mitre or turban) specific to their own liturgical traditions. Variations occur based upon jurisdiction and national customs.
Eastern Rite Catholic bishops celebrating Divine Liturgy in their proper pontifical vestments
An Anglican bishop with a crosier, wearing a rochet under a red chimere and cuffs, a black tippet, and a pectoral cross
An Episcopal bishop immediately before presiding at the Great Vigil of Easter in the narthex of St. Michael's Episcopal Cathedral in Boise, Idaho.
A Catholic bishop vested for the Sacrifice of the Mass, without pontifical gloves.
en/1917.html.txt
ADDED
@@ -0,0 +1,179 @@
A bishop is an ordained, consecrated, or appointed member of the Christian clergy who is generally entrusted with a position of authority and oversight.
Within the Catholic, Eastern Orthodox, Oriental Orthodox, Moravian, Anglican, Old Catholic and Independent Catholic churches, as well as the Assyrian Church of the East, bishops claim apostolic succession, a direct historical lineage dating back to the original Twelve Apostles. Within these churches, bishops are seen as those who possess the full priesthood and can ordain clergy, including other bishops. Some Protestant churches, including the Lutheran, Anglican and Methodist churches, have bishops serving similar functions as well, though not always understood to be within apostolic succession in the same way. A person ordained as a deacon, priest, and then bishop is understood to hold the fullness of the (ministerial) priesthood, given responsibility by Christ to govern, teach, and sanctify the Body of Christ. Priests, deacons and lay ministers co-operate and assist their bishops in pastoral ministry.
The English term bishop derives from the Greek word ἐπίσκοπος epískopos, meaning "overseer", Greek being the early language of the Christian Church. In the early Christian era the term was not always clearly distinguished from presbýteros (literally: "elder" or "senior", origin of the modern English word "priest"), but it is used in the sense of the order or office of bishop, distinct from that of presbyter, in the writings attributed to Ignatius of Antioch (died c. 110).[1]
The earliest organization of the Church in Jerusalem was, according to most scholars, similar to that of Jewish synagogues, but it had a council or college of ordained presbyters (Ancient Greek: πρεσβύτεροι elders). In Acts 11:30 and Acts 15:22, we see a collegiate system of government in Jerusalem chaired by James the Just, according to tradition the first bishop of the city. In Acts 14:23, the Apostle Paul ordains presbyters in churches in Anatolia.[2]
Often, the word presbyter was not yet distinguished from overseer (Ancient Greek: ἐπίσκοπος episkopos, later used exclusively to mean bishop), as in Acts 20:17, Titus 1:5–7 and 1 Peter 5:1.[a][b] The earliest writings of the Apostolic Fathers, the Didache and the First Epistle of Clement, for example, show the church used two terms for local church offices—presbyters (seen by many as an interchangeable term with episcopos or overseer) and deacon.
In Timothy and Titus in the New Testament a more clearly defined episcopate can be seen. We are told that Paul had left Timothy in Ephesus and Titus in Crete to oversee the local church.[6][7] Paul commands Titus to ordain presbyters/bishops and to exercise general oversight.
Early sources are unclear, but various groups of Christian communities may have had the bishop surrounded by a group or college functioning as leaders of the local churches.[8][9] Eventually the head or "monarchic" bishop came to rule more clearly,[10] and local churches gradually structured themselves after this model, with one bishop in clearer charge,[8] though the role of the body of presbyters remained important.[10]
Eventually, as Christendom grew, bishops no longer directly served individual congregations. Instead, the Metropolitan bishop (the bishop in a large city) appointed priests to minister to each congregation, acting as the bishop's delegate.
Around the end of the 1st century, the church's organization became clearer in historical documents. In the works of the Apostolic Fathers, and Ignatius of Antioch in particular, the role of the episkopos, or bishop, became more important, or rather was already very important and was being clearly defined. While Ignatius of Antioch offers the earliest clear description of monarchial bishops (a single bishop over all house churches in a city), he is an advocate of monepiscopal structure rather than describing an accepted reality. To the bishops and house churches to which he writes, he offers strategies on how to pressure house churches that do not recognize the bishop into compliance. Other contemporary Christian writers do not describe monarchial bishops, either continuing to equate them with the presbyters or speaking of episkopoi (bishops, plural) in a city.
"Blessed be God, who has granted unto you, who are yourselves so excellent, to obtain such an excellent bishop." — Epistle of Ignatius to the Ephesians 1:1 [11]
"and that, being subject to the bishop and the presbytery, ye may in all respects be sanctified." — Epistle of Ignatius to the Ephesians 2:1
[12]
"For your justly renowned presbytery, worthy of God, is fitted as exactly to the bishop as the strings are to the harp." — Epistle of Ignatius to the Ephesians 4:1 [13]
"Do ye, beloved, be careful to be subject to the bishop, and the presbyters and the deacons." — Epistle of Ignatius to the Ephesians 5:1 [13]
"Plainly therefore we ought to regard the bishop as the Lord Himself" — Epistle of Ignatius to the Ephesians 6:1.
"your godly bishop" — Epistle of Ignatius to the Magnesians 2:1.
"the bishop presiding after the likeness of God and the presbyters after the likeness of the council of the Apostles, with the deacons also who are most dear to me, having been entrusted with the diaconate of Jesus Christ" — Epistle of Ignatius to the Magnesians 6:1.
"Therefore as the Lord did nothing without the Father, [being united with Him], either by Himself or by the Apostles, so neither do ye anything without the bishop and the presbyters." — Epistle of Ignatius to the Magnesians 7:1.
"Be obedient to the bishop and to one another, as Jesus Christ was to the Father [according to the flesh], and as the Apostles were to Christ and to the Father, that there may be union both of flesh and of spirit." — Epistle of Ignatius to the Magnesians 13:2.
"In like manner let all men respect the deacons as Jesus Christ, even as they should respect the bishop as being a type of the Father and the presbyters as the council of God and as the college of Apostles. Apart from these there is not even the name of a church." — Epistle of Ignatius to the Trallesians 3:1.
"follow your bishop, as Jesus Christ followed the Father, and the presbytery as the Apostles; and to the deacons pay respect, as to God's commandment" — Epistle of Ignatius to the Smyrnans 8:1.
"He that honoureth the bishop is honoured of God; he that doeth aught without the knowledge of the bishop rendereth service to the devil" — Epistle of Ignatius to the Smyrnans 9:1.
As the Church continued to expand, new churches in important cities gained their own bishop. Churches in the regions outside an important city were served by chorbishops, an official rank of bishops. However, soon presbyters and deacons were sent from the bishop of a city church, and gradually priests replaced the chorbishops. Thus, in time, the bishop changed from being the leader of a single church confined to an urban area to being the leader of the churches of a given geographical area.
Clement of Alexandria (end of the 2nd century) writes about the ordination of a certain Zachæus as bishop by the imposition of Simon Peter Bar-Jonah's hands. The words bishop and ordination are used in their technical meaning by the same Clement of Alexandria.[14] The bishops in the 2nd century are defined also as the only clergy to whom the ordination to priesthood (presbyterate) and diaconate is entrusted: "a priest (presbyter) lays on hands, but does not ordain." (cheirothetei ou cheirotonei[15])
At the beginning of the 3rd century, Hippolytus of Rome describes another feature of the ministry of a bishop, which is that of the "Spiritum primatus sacerdotii habere potestatem dimittere peccata": to have the spirit of the primacy of the priesthood and the power to forgive sins.[16]
The efficient organization of the Roman Empire became the template for the organisation of the church in the 4th century, particularly after Constantine's Edict of Milan. As the church moved from the shadows of privacy into the public forum it acquired land for churches, burials and clergy. In 391, Theodosius I decreed that any land that had been confiscated from the church by Roman authorities be returned.
The most usual term for the geographic area of a bishop's authority and ministry, the diocese, began as part of the structure of the Roman Empire under Diocletian. As Roman authority began to fail in the western portion of the empire, the church took over much of the civil administration. This can be clearly seen in the ministry of two popes: Pope Leo I in the 5th century, and Pope Gregory I in the 6th century. Both of these men were statesmen and public administrators in addition to their role as Christian pastors, teachers and leaders. In the Eastern churches, latifundia entailed to a bishop's see were much less common, the state power did not collapse the way it did in the West, and thus the tendency of bishops acquiring civil power was much weaker than in the West. However, the role of Western bishops as civil authorities, often called prince bishops, continued throughout much of the Middle Ages.
As well as being archchancellors of the Holy Roman Empire after the 9th century, bishops generally served as chancellors to medieval monarchs, acting as head of the justiciary and chief chaplain. The Lord Chancellor of England was almost always a bishop up until the dismissal of Cardinal Thomas Wolsey by Henry VIII. Similarly, the position of Kanclerz in the Polish kingdom was always held by a bishop until the 16th century. And today, the principality of Andorra is headed by two co-princes, one of whom is a Catholic bishop (and the other, the President of France).
In France before the French Revolution, representatives of the clergy — in practice, bishops and abbots of the largest monasteries — comprised the First Estate of the Estates-General; this role was abolished during the Revolution.
In the 21st century, the more senior bishops of the Church of England continue to sit in the House of Lords of the Parliament of the United Kingdom, as representatives of the established church, and are known as Lords Spiritual. The Bishop of Sodor and Man, whose diocese lies outside the United Kingdom, is an ex officio member of the Legislative Council of the Isle of Man. In the past, the Bishop of Durham, known as a prince bishop, had extensive viceregal powers within his northern diocese — the power to mint money, collect taxes and raise an army to defend against the Scots.
Eastern Orthodox bishops, along with all other members of the clergy, are canonically forbidden to hold political office. Occasional exceptions to this rule are tolerated when the alternative is political chaos. In the Ottoman Empire, the Patriarch of Constantinople, for example, had de facto administrative, fiscal, cultural and legal jurisdiction, as well as spiritual jurisdiction, over all the Christians of the empire. More recently, Archbishop Makarios III of Cyprus served as President of the Republic of Cyprus from 1960 to 1977.
In 2001, Peter Hollingworth, AC, OBE – then the Anglican Archbishop of Brisbane – was controversially appointed Governor-General of Australia. Although Hollingworth gave up his episcopal position to accept the appointment, it still attracted considerable opposition in a country which maintains a formal separation between Church and State.
During the period of the English Civil War, the role of bishops as wielders of political power and as upholders of the established church became a matter of heated political controversy. Indeed, Presbyterianism was the polity of most Reformed Churches in Europe, and had been favored by many in England since the English Reformation. Since in the primitive church the offices of presbyter and episkopos were identical, many Puritans held that this was the only form of government the church should have. The Anglican divine, Richard Hooker, objected to this claim in his famous work Of the Laws of Ecclesiastic Polity while, at the same time, defending Presbyterian ordination as valid (in particular Calvin's ordination of Beza). This was the official stance of the English Church until the Commonwealth, during which time, the views of Presbyterians and Independents (Congregationalists) were more freely expressed and practiced.
Bishops form the leadership in the Catholic Church, the Eastern Orthodox Church, the Oriental Orthodox Churches, the Anglican Communion, the Lutheran Church, the Independent Catholic Churches, the Independent Anglican Churches, and certain other, smaller, denominations.
The traditional role of a bishop is as pastor of a diocese (also called a bishopric, synod, eparchy or see), and so to serve as a "diocesan bishop," or "eparch" as it is called in many Eastern Christian churches. Dioceses vary considerably in size, geographically and population-wise. Some dioceses around the Mediterranean Sea which were Christianised early are rather compact, whereas dioceses in areas of rapid modern growth in Christian commitment—as in some parts of Sub-Saharan Africa, South America and the Far East—are much larger and more populous.
As well as traditional diocesan bishops, many churches have a well-developed structure of church leadership that involves a number of layers of authority and responsibility.
In Catholicism, Eastern Orthodoxy, Oriental Orthodoxy, and Anglicanism, only a bishop can ordain other bishops, priests, and deacons.
In the Eastern liturgical tradition, a priest can celebrate the Divine Liturgy only with the blessing of a bishop. In Byzantine usage, an antimension signed by the bishop is kept on the altar partly as a reminder of whose altar it is and under whose omophorion the priest at a local parish is serving. In Syriac Church usage, a consecrated wooden block called a thabilitho is kept for the same reasons.
The pope, in addition to being the Bishop of Rome and spiritual head of the Catholic Church, is also the Patriarch of the Latin Rite. Each bishop within the Latin Rite is answerable directly to the Pope and not to any other bishop, except to metropolitans in certain oversight instances. The pope previously used the title Patriarch of the West, but this title was dropped from use in 2006,[18] a move which caused some concern within the Eastern Orthodox Communion as, to them, it implied wider papal jurisdiction.[19]
In Catholic, Eastern Orthodox, Oriental Orthodox and Anglican cathedrals there is a special chair set aside for the exclusive use of the bishop. This is the bishop's cathedra and is often called the throne. In some Christian denominations, for example, the Anglican Communion, parish churches may maintain a chair for the use of the bishop when he visits; this is to signify the parish's union with the bishop.
The bishop is the ordinary minister of the sacrament of confirmation in the Latin Rite Catholic Church, and in the Anglican and Old Catholic communion only a bishop may administer this sacrament. However, in the Byzantine and other Eastern rites, whether Eastern or Oriental Orthodox or Eastern Catholic, chrismation is done immediately after baptism, and thus the priest is the one who confirms, using chrism blessed by a bishop.[20]
Bishops in all of these communions are ordained by other bishops through the laying on of hands. While traditional teaching maintains that any bishop with apostolic succession can validly perform the ordination of another bishop, some churches require that two or three bishops participate, either to ensure sacramental validity or to conform with church law. Catholic doctrine holds that one bishop can validly ordain another (a priest) as a bishop. Though a minimum of three participating bishops is desirable (there are usually several more) in order to demonstrate collegiality, canonically only one bishop is necessary. The practice of ordination by a single bishop was normal in countries where the Church was persecuted under Communist rule.
The title of archbishop or metropolitan may be granted to a senior bishop, usually one who is in charge of a large ecclesiastical jurisdiction. He may, or may not, have provincial oversight of suffragan bishops and may possibly have auxiliary bishops assisting him.
Ordination of a bishop, and thus continuation of apostolic succession, takes place through a ritual centred on the imposition of hands and prayer.
Apart from the ordination, which is always done by other bishops, there are different methods as to the actual selection of a candidate for ordination as bishop. In the Catholic Church the Congregation for Bishops generally oversees the selection of new bishops with the approval of the pope. The papal nuncio usually solicits names from the bishops of a country, consults with priests and leading members of the laity, and then selects three to be forwarded to the Holy See. In Europe, some cathedral chapters retain the duty of electing bishops. The Eastern Catholic churches generally elect their own bishops. Most Eastern Orthodox churches allow varying amounts of formalised laity or lower clergy influence on the choice of bishops. This also applies in those Eastern churches which are in union with the pope, though it is required that he give assent.
Catholic, Eastern Orthodox, Oriental Orthodox, Anglican, Old Catholic and some Lutheran bishops claim to be part of the continuous sequence of ordained bishops since the days of the apostles, referred to as apostolic succession. Since Pope Leo XIII issued the bull Apostolicae curae in 1896, the Catholic Church has insisted that Anglican orders are invalid because of changes in the Anglican ordination rites of the 16th century and divergence in understanding of the theology of priesthood, episcopacy and Eucharist. However, since the 1930s, Utrecht Old Catholic bishops (recognised by the Holy See as validly ordained) have sometimes taken part in the ordination of Anglican bishops. According to the writer Timothy Dufort, by 1969, all Church of England bishops had acquired Old Catholic lines of apostolic succession recognised by the Holy See.[21] This development has muddied the waters somewhat, as it could be argued that the strain of apostolic succession has been re-introduced into Anglicanism, at least within the Church of England.
The Catholic Church does recognise as valid (though illicit) ordinations done by breakaway Catholic, Old Catholic or Oriental bishops, and groups descended from them; it also regards as both valid and licit those ordinations done by bishops of the Eastern churches,[c] so long as those receiving the ordination conform to other canonical requirements (for example, being an adult male) and an Eastern Orthodox rite of episcopal ordination, expressing the proper functions and sacramental status of a bishop, is used; this has given rise to the phenomenon of episcopi vagantes (for example, clergy of the Independent Catholic groups which claim apostolic succession, though this claim is rejected by both Catholicism and Eastern Orthodoxy).
The Eastern Orthodox Churches would not accept the validity of any ordinations performed by the Independent Catholic groups, as Eastern Orthodoxy considers to be spurious any consecration outside the Church as a whole. Eastern Orthodoxy considers apostolic succession to exist only within the Universal Church, and not through any authority held by individual bishops; thus, if a bishop ordains someone to serve outside the (Eastern Orthodox) Church, the ceremony is ineffectual, and no ordination has taken place regardless of the ritual used or the ordaining prelate's position within the Eastern Orthodox Churches.
The position of the Catholic Church is slightly different, in that it does recognise the validity of the orders of certain groups which separated from communion with the Holy See. The Holy See accepts as valid the ordinations of the Old Catholics in communion with Utrecht, as well as the Polish National Catholic Church (which received its orders directly from Utrecht, and was—until recently—part of that communion); but Catholicism does not recognise the orders of any group whose teaching is at variance with what they consider the core tenets of Christianity; this is the case even though the clergy of the Independent Catholic groups may use the proper ordination ritual. There are also other reasons why the Holy See does not recognise the validity of the orders of the Independent clergy:
Whilst members of the Independent Catholic movement take seriously the issue of valid orders, it is highly significant that the relevant Vatican Congregations tend not to respond to petitions from Independent Catholic bishops and clergy who seek to be received into communion with the Holy See, hoping to continue in some sacramental role. In those instances where the pope does grant reconciliation, those deemed to be clerics within the Independent Old Catholic movement are invariably admitted as laity and not priests or bishops.
There is a mutual recognition of the validity of orders amongst Catholic, Eastern Orthodox, Old Catholic, Oriental Orthodox and Assyrian Church of the East churches.[22]
Some provinces of the Anglican Communion have begun ordaining women as bishops in recent decades – for example, England, Ireland, Scotland, Wales, the United States, Australia, New Zealand, Canada and Cuba. The first woman to be consecrated a bishop within Anglicanism was Barbara Harris, who was ordained in the United States in 1989. In 2006, Katharine Jefferts Schori, the Episcopal Bishop of Nevada, became the first woman to become the presiding bishop of the Episcopal Church.
In the Evangelical Lutheran Church in America (ELCA) and the Evangelical Lutheran Church in Canada (ELCIC), the largest Lutheran Church bodies in the United States and Canada, respectively, and roughly based on the Nordic Lutheran state churches (similar to that of the Church of England), bishops are elected by Synod Assemblies, consisting of both lay members and clergy, for a term of six years, which can be renewed, depending upon the local synod's "constitution" (which mirrors either the ELCA's or the ELCIC's national constitution). Since the implementation of concordats between the ELCA and the Episcopal Church of the United States and between the ELCIC and the Anglican Church of Canada, all bishops, including the presiding bishop (ELCA) or the national bishop (ELCIC), have been consecrated using the historic succession, with at least one Anglican bishop serving as co-consecrator.[23][24]
Since going into ecumenical communion with their respective Anglican body, bishops in the ELCA or the ELCIC not only approve the "rostering" of all ordained pastors, diaconal ministers, and associates in ministry, but also serve as the principal celebrant of all pastoral ordination and installation ceremonies and diaconal consecration ceremonies, as well as serving as the "chief pastor" of the local synod, upholding the teachings of Martin Luther as well as the Ninety-Five Theses and the Augsburg Confession. Unlike their counterparts in the United Methodist Church, ELCA and ELCIC synod bishops do not appoint pastors to local congregations (pastors, like their counterparts in the Episcopal Church, are called by local congregations). The presiding bishop of the ELCA and the national bishop of the ELCIC, the national bishops of their respective bodies, are elected for a single 6-year term and may be elected to an additional term.
Although the ELCA agreed with the Episcopal Church to limit ordination to the bishop "ordinarily", ELCA pastor-ordinators are given permission to perform the rites in "extraordinary" circumstances. In practice, "extraordinary" circumstances have included disagreeing with Episcopalian views of the episcopate, and as a result, ELCA pastors ordained by other pastors are not permitted to be deployed to Episcopal Churches (they can, however, serve in Presbyterian Church USA, United Methodist Church, Reformed Church in America, and Moravian Church congregations, as the ELCA is in full communion with these denominations). The Lutheran Church–Missouri Synod (LCMS) and the Wisconsin Evangelical Lutheran Synod (WELS), the second and third largest Lutheran bodies in the United States and the two largest Confessional Lutheran bodies in North America, do not follow an episcopal form of governance, settling instead on a form of quasi-congregationalism patterned on what they believe to be the practice of the early church. The second largest of the three predecessor bodies of the ELCA, the American Lutheran Church, was a congregationalist body, with national and synod presidents before they were re-titled as bishops (borrowing from the Lutheran churches in Germany) in the 1980s. With regard to ecclesial discipline and oversight, national and synod presidents typically function similarly to bishops in episcopal bodies.[25]
In the African Methodist Episcopal Church, "Bishops are the Chief Officers of the Connectional Organization. They are elected for life by a majority vote of the General Conference which meets every four years."[26]
In the Christian Methodist Episcopal Church in the United States, bishops are administrative superintendents of the church; they are elected by "delegate" votes and serve until the age of 74, at which point they must retire. Among their duties are responsibility for appointing clergy to serve local churches as pastor, for performing ordinations, and for safeguarding the doctrine and discipline of the Church. The General Conference, a meeting held every four years, has an equal number of clergy and lay delegates. In each Annual Conference, CME bishops serve for four-year terms. CME Church bishops may be male or female.
In the United Methodist Church (the largest branch of Methodism in the world) bishops serve as administrative and pastoral superintendents of the church. They are elected for life from among the ordained elders (presbyters) by vote of the delegates in regional (called jurisdictional) conferences, and are consecrated by the other bishops present at the conference through the laying on of hands. In the United Methodist Church bishops remain members of the "Order of Elders" while being consecrated to the "Office of the Episcopacy". Within the United Methodist Church only bishops are empowered to consecrate bishops and ordain clergy. Among their most critical duties is the ordination and appointment of clergy to serve local churches as pastor, presiding at sessions of the Annual, Jurisdictional, and General Conferences, providing pastoral ministry for the clergy under their charge, and safeguarding the doctrine and discipline of the Church. Furthermore, individual bishops, or the Council of Bishops as a whole, often serve a prophetic role, making statements on important social issues and setting forth a vision for the denomination, though they have no legislative authority of their own. In all of these areas, bishops of the United Methodist Church function very much in the historic meaning of the term. According to the Book of Discipline of the United Methodist Church, a bishop's responsibilities are
Leadership.—Spiritual and Temporal—
Presidential Duties.—1. To preside in the General, Jurisdictional, Central, and Annual Conferences. 2. To form the districts after consultation with the district superintendents and after the number of the same has been determined by vote of the Annual Conference. 3. To appoint the district superintendents annually (¶¶ 517–518). 4. To consecrate bishops, to ordain elders and deacons, to consecrate diaconal ministers, to commission deaconesses and home missionaries, and to see that the names of the persons commissioned and consecrated are entered on the journals of the conference and that proper credentials are furnished to these persons.
Working with Ministers.—1. To make and fix the appointments in the Annual Conferences, Provisional Annual Conferences, and Missions as the Discipline may direct (¶¶ 529–533).
2. To divide or to unite a circuit(s), station(s), or mission(s) as judged necessary for missionary strategy and then to make appropriate appointments. 3. To read the appointments of deaconesses, diaconal ministers, lay persons in service under the World Division of the General Board of Global Ministries, and home missionaries. 4. To fix the Charge Conference membership of all ordained ministers appointed to ministries other than the local church in keeping with ¶443.3. 5. To transfer, upon the request of the receiving bishop, ministerial member(s) of one Annual Conference to another, provided said member(s) agrees to transfer; and to send immediately to the secretaries of both conferences involved, to the conference Boards of Ordained Ministry, and to the clearing house of the General Board of Pensions written notices of the transfer of members and of their standing in the course of study if they are undergraduates.[27]
In each Annual Conference, United Methodist bishops serve for four-year terms, and may serve up to three terms before either retirement or appointment to a new Conference. United Methodist bishops may be male or female, with Marjorie Matthews being the first woman to be consecrated a bishop in 1980.
The collegial expression of episcopal leadership in the United Methodist Church is known as the Council of Bishops. The Council of Bishops speaks to the Church and through the Church into the world and gives leadership in the quest for Christian unity and interreligious relationships.[27] The Conference of Methodist Bishops includes the United Methodist Council of Bishops plus bishops from affiliated autonomous Methodist or United Churches.
John Wesley consecrated Thomas Coke a "General Superintendent," and directed that Francis Asbury also be consecrated for the United States of America in 1784, where the Methodist Episcopal Church first became a separate denomination apart from the Church of England. Coke soon returned to England, but Asbury was the primary builder of the new church. At first he did not call himself bishop, but eventually submitted to the usage by the denomination.
Notable bishops in United Methodist history include Coke, Asbury, Richard Whatcoat, Philip William Otterbein, Martin Boehm, Jacob Albright, John Seybert, Matthew Simpson, John S. Stamm, William Ragsdale Cannon, Marjorie Matthews, Leontine T. Kelly, William B. Oden, Ntambo Nkulu Ntanda, Joseph Sprague, William Henry Willimon, and Thomas Bickerton.
In The Church of Jesus Christ of Latter-day Saints, the Bishop is the leader of a local congregation, called a ward. As with most LDS priesthood holders, the bishop is a part-time lay minister and earns a living through other employment; in all cases, he is a married man. As such, it is his duty to preside at services, call local leaders, and judge the worthiness of members for service. The bishop does not deliver sermons at every service (generally asking members to do so), but is expected to be a spiritual guide for his congregation. It is therefore believed that he has both the right and ability to receive divine inspiration (through the Holy Spirit) for the ward under his direction. Because it is a part-time position, all able members are expected to assist in the management of the ward by holding delegated lay positions (for example, women's and youth leaders, teachers) referred to as callings. Although members are asked to confess serious sins to him, unlike a priest in the Catholic Church he is not the instrument of divine forgiveness, but merely a guide through the repentance process (and a judge in case transgressions warrant excommunication or other official discipline). The bishop is also responsible for the physical welfare of the ward, and thus collects tithing and fast offerings and distributes financial assistance where needed.
A bishop is the president of the Aaronic priesthood in his ward (and is thus a form of Mormon Kohen; in fact, a literal descendant of Aaron has "legal right" to act as a bishop[28] after being found worthy and ordained by the First Presidency[29]). In the absence of a literal descendant of Aaron, a high priest in the Melchizedek priesthood is called to be a bishop.[29] Each bishop is selected from resident members of the ward by the stake presidency with approval of the First Presidency, and chooses two counselors to form a bishopric. In special circumstances (such as a ward consisting entirely of young university students), a bishop may be chosen from outside the ward. A bishop is typically released after about five years and a new bishop is called to the position. Although the former bishop is released from his duties, he continues to hold the Aaronic priesthood office of bishop. Church members frequently refer to a former bishop as "Bishop" as a sign of respect and affection.
Latter-day Saint bishops do not wear any special clothing or insignia the way clergy in many other churches do, but are expected to dress and groom themselves neatly and conservatively per their local culture, especially when performing official duties. Bishops (as well as other members of the priesthood) can trace their line of authority back to Joseph Smith, who, according to church doctrine, was ordained to lead the Church in modern times by the ancient apostles Peter, James, and John, who were ordained to lead the Church by Jesus Christ.[30]
At the global level, the presiding bishop oversees the temporal affairs (buildings, properties, commercial corporations, and so on) of the worldwide Church, including the Church's massive global humanitarian aid and social welfare programs. The presiding bishop has two counselors; the three together form the presiding bishopric.[31] As opposed to ward bishoprics, where the counselors do not hold the office of bishop, all three men in the presiding bishopric hold the office of bishop, and thus the counselors, as with the presiding bishop, are formally referred to as "Bishop".[32]
The New Apostolic Church (NAC) recognizes three classes of ministries: Deacons, Priests and Apostles. The Apostles, who are all included in the apostolate with the Chief Apostle as head, are the highest ministries.
Of the several kinds of priestly ministries, the bishop is the highest. Nearly all bishops are installed directly by the Chief Apostle. They support and help their superior apostle.
In the Church of God in Christ (COGIC), the ecclesiastical structure is composed of large dioceses that are called "jurisdictions" within COGIC, each under the authority of a bishop, sometimes called "state bishops". They can either be made up of large geographical regions of churches or of churches that are grouped and organized together as their own separate jurisdictions because of similar affiliations, regardless of geographical location or dispersion. Each state in the U.S. has at least one jurisdiction, and some have several; each jurisdiction is usually composed of between 30 and 100 churches. Each jurisdiction is then broken down into several districts, which are smaller groups of churches (grouped either by geographical situation or by similar affiliations), each under the authority of District Superintendents who answer to the authority of their jurisdictional/state bishop. There are currently over 170 jurisdictions in the United States, and over 30 jurisdictions in other countries. The bishops of each jurisdiction, according to the COGIC Manual, are considered to be the modern-day equivalent in the church of the early apostles and overseers of the New Testament church, and as the highest-ranking clergymen in the COGIC, they are tasked with being the head overseers of all religious, civil, and economic ministries and protocol for the church denomination.[33] They also have the authority to appoint and ordain local pastors, elders, ministers, and reverends within the denomination. The bishops of the COGIC denomination are collectively called "The Board of Bishops."[34] Every four years, twelve bishops are elected from the Board of Bishops as "The General Board" of the church by the General Assembly of the COGIC, the body of clergy and lay delegates responsible for making and enforcing the bylaws of the denomination; the General Board works alongside the delegates of the General Assembly and the Board of Bishops to administer the denomination as the church's head executive leaders.[35] One of the twelve bishops of the General Board is also elected the "presiding bishop" of the church, and two others are appointed by the presiding bishop himself as his first and second assistant presiding bishops.
Bishops in the Church of God in Christ usually wear black clergy suits which consist of a black suit blazer, black pants, a purple or scarlet clergy shirt and a white clerical collar, which is usually referred to as "Class B Civic attire." Bishops in COGIC also typically wear the Anglican Choir Dress style vestments of a long purple or scarlet chimere, cuffs, and tippet worn over a long white rochet, and a gold pectoral cross worn around the neck with the tippet. This is usually referred to as "Class A Ceremonial attire". The bishops of COGIC alternate between Class A Ceremonial attire and Class B Civic attire depending on the protocol of the religious services and other events they have to attend.[34][33]
In the polity of the Church of God (Cleveland, Tennessee), the international leader is the presiding bishop, and the members of the executive committee are executive bishops. Collectively, they supervise and appoint national and state leaders across the world. Leaders of individual states and regions are administrative bishops, who have jurisdiction over local churches in their respective states and are vested with appointment authority for local pastorates. All ministers are credentialed at one of three levels of licensure, the most senior of which is the rank of ordained bishop. To be eligible to serve in state, national, or international positions of authority, a minister must hold the rank of ordained bishop.
In 2002, the general convention of the Pentecostal Church of God came to a consensus to change the title of their overseer from general superintendent to bishop. The change was made because, internationally, the term bishop is more commonly associated with religious leaders than the previous title was.
The title bishop is used for both the general (international leader) and the district (state) leaders. The title is sometimes used in conjunction with the previous one, thus becoming general (district) superintendent/bishop.
According to the Seventh-day Adventist understanding of the doctrine of the Church:
"The "elders" (Greek, presbuteros) or "bishops" (episkopos) were the most important officers of the church. The term elder means older one, implying dignity and respect. His position was similar to that of the one who had supervision of the synagogue. The term bishop means "overseer." Paul used these terms interchangeably, equating elders with overseers or bishops (Acts 20:17,28; Titus 1:5, 7).
"Those who held this position supervised the newly formed churches. Elder referred to the status or rank of the office, while bishop denoted the duty or responsibility of the office—"overseer." Since the apostles also called themselves elders (1 Peter 5:1; 2 John 1; 3 John 1), it is apparent that there were both local elders and itinerant elders, or elders at large. But both kinds of elder functioned as shepherds of the congregations.[36]"
The above understanding is part of the basis of Adventist organizational structure. The worldwide Seventh-day Adventist Church is organized into local districts, conferences or missions, union conferences or union missions, divisions, and finally, at the top, the General Conference. At each level (with the exception of the local districts), there is an elder who is elected president and a group of elders who serve on the executive committee with the elected president. Those who have been elected president would in effect be the "bishop" while never actually carrying the title or being ordained as such, because the term is usually associated with the episcopal style of church governance most often found in Catholic, Anglican, Methodist and some Pentecostal/Charismatic circles.
Some Baptists also have begun taking on the title of bishop.[37]
In some smaller Protestant denominations and independent churches, the term bishop is used in the same way as pastor, to refer to the leader of the local congregation, and may be male or female. This usage is especially common in African-American churches in the US.
In the Church of Scotland, which has a Presbyterian church structure, the word "bishop" refers to an ordained person, usually a normal parish minister, who has temporary oversight of a trainee minister. In the Presbyterian Church (USA), the term bishop is an expressive name for a Minister of Word and Sacrament who serves a congregation and exercises "the oversight of the flock of Christ."[38] The term is traceable to the 1789 Form of Government of the PC (USA) and the Presbyterian understanding of the pastoral office.[39]
While not considered orthodox Christian, the Ecclesia Gnostica Catholica uses roles and titles derived from Christianity for its clerical hierarchy, including bishops who have much the same authority and responsibilities as in Catholicism.
The Salvation Army does not have bishops but has appointed leaders of geographical areas, known as Divisional Commanders. Larger geographical areas, called Territories, are led by a Territorial Commander, who is the highest-ranking officer in that Territory.
Jehovah's Witnesses do not use the title ‘Bishop’ within their organizational structure, but appoint elders to be overseers (to fulfill the role of oversight) within their congregations.[40][41]
The HKBP, the most prominent Protestant denomination in Indonesia, uses the term Ephorus instead of Bishop.[42]
In the Vietnamese syncretist religion of Caodaism, bishops (giáo sư) comprise the fifth of nine hierarchical levels, and are responsible for spiritual and temporal education as well as record-keeping and ceremonies in their parishes. At any one time there are seventy-two bishops. Their authority is described in Section I of the text Tân Luật (revealed through seances in December 1926). Caodai bishops wear robes and headgear of embroidered silk depicting the Divine Eye and the Eight Trigrams. (The color varies according to branch.) This is the full ceremonial dress; the simple version consists of a seven-layered turban.
Traditionally, a number of items are associated with the office of a bishop, most notably the mitre, crosier, and ecclesiastical ring. Other vestments and insignia vary between Eastern and Western Christianity.
In the Latin Rite of the Catholic Church, the choir dress of a bishop includes the purple cassock with amaranth trim, rochet, purple zucchetto (skull cap), purple biretta, and pectoral cross. The cappa magna may be worn, but only within the bishop's own diocese and on especially solemn occasions.[43] The mitre, zucchetto, and stole are generally worn by bishops when presiding over liturgical functions. For liturgical functions other than the Mass, the bishop typically wears the cope. Within his own diocese and when celebrating solemnly elsewhere with the consent of the local ordinary, he also uses the crosier.[43] When celebrating Mass, a bishop, like a priest, wears the chasuble. The Caeremoniale Episcoporum recommends, but does not impose, that in solemn celebrations a bishop should also wear a dalmatic, which can always be white, beneath the chasuble, especially when administering the sacrament of holy orders, blessing an abbot or abbess, and dedicating a church or an altar.[43] The Caeremoniale Episcoporum no longer makes mention of episcopal gloves, episcopal sandals, liturgical stockings (also known as buskins), or the accoutrements that it once prescribed for the bishop's horse. The coat of arms of a Latin Rite Catholic bishop usually displays a galero with a cross and crosier behind the escutcheon; the specifics differ by location and ecclesiastical rank (see Ecclesiastical heraldry).
Anglican bishops generally make use of the mitre, crosier, ecclesiastical ring, purple cassock, purple zucchetto, and pectoral cross. However, the traditional choir dress of Anglican bishops retains its late mediaeval form, and looks quite different from that of their Catholic counterparts; it consists of a long rochet which is worn with a chimere.
In the Eastern Churches (Eastern Orthodox, Eastern Rite Catholic) a bishop will wear the mandyas, panagia (and perhaps an enkolpion), sakkos, omophorion and an Eastern-style mitre. Eastern bishops do not normally wear an episcopal ring; the faithful kiss (or, alternatively, touch their forehead to) the bishop's hand. To seal official documents, he will usually use an inked stamp. An Eastern bishop's coat of arms will normally display an Eastern-style mitre, cross, Eastern-style crosier and a red and white (or red and gold) mantle. The arms of Oriental Orthodox bishops will display the episcopal insignia (mitre or turban) specific to their own liturgical traditions. Variations occur based upon jurisdiction and national customs.
Eastern Rite Catholic bishops celebrating Divine Liturgy in their proper pontifical vestments
An Anglican bishop with a crosier, wearing a rochet under a red chimere and cuffs, a black tippet, and a pectoral cross
An Episcopal bishop immediately before presiding at the Great Vigil of Easter in the narthex of St. Michael's Episcopal Cathedral in Boise, Idaho.
A Catholic bishop dressed for the Sacrifice of the Mass, without pontifical gloves.
en/1918.html.txt
ADDED
@@ -0,0 +1,387 @@
Mount Everest (Nepali: Sagarmatha सगरमाथा; Tibetan: Chomolungma ཇོ་མོ་གླང་མ; Chinese: Zhumulangma 珠穆朗玛) is Earth's highest mountain above sea level, located in the Mahalangur Himal sub-range of the Himalayas. The China–Nepal border runs across its summit point.[5]
The current official elevation of 8,848 m (29,029 ft), recognised by China and Nepal, was established by a 1955 Indian survey and confirmed by a 1975 Chinese survey.[1]
In 1865, Everest was given its official English name by the Royal Geographical Society, as recommended by Andrew Waugh, the British Surveyor General of India, who chose the name of his predecessor in the post, Sir George Everest, despite Everest's objections.[6]
Mount Everest attracts many climbers, some of them highly experienced mountaineers. There are two main climbing routes, one approaching the summit from the southeast in Nepal (known as the "standard route") and the other from the north in Tibet. While not posing substantial technical climbing challenges on the standard route, Everest presents dangers such as altitude sickness, weather, and wind, as well as significant hazards from avalanches and the Khumbu Icefall. As of 2019[update], over 300 people have died on Everest,[7] many of whose bodies remain on the mountain.[8]
The first recorded efforts to reach Everest's summit were made by British mountaineers. As Nepal did not allow foreigners to enter the country at the time, the British made several attempts on the north ridge route from the Tibetan side. After the first reconnaissance expedition by the British in 1921 reached 7,000 m (22,970 ft) on the North Col, the 1922 expedition pushed the north ridge route up to 8,320 m (27,300 ft), marking the first time a human had climbed above 8,000 m (26,247 ft). Seven porters were killed in an avalanche on the descent from the North Col. The 1924 expedition resulted in one of the greatest mysteries on Everest to this day: George Mallory and Andrew Irvine made a final summit attempt on 8 June but never returned, sparking debate as to whether or not they were the first to reach the top. They had been spotted high on the mountain that day but disappeared into the clouds and were never seen alive again; Mallory's body was found in 1999 at 8,155 m (26,755 ft) on the north face. Tenzing Norgay and Edmund Hillary made the first official ascent of Everest in 1953, using the southeast ridge route. Norgay had reached 8,595 m (28,199 ft) the previous year as a member of the 1952 Swiss expedition. The Chinese mountaineering team of Wang Fuzhou, Gonpo, and Qu Yinhua made the first reported ascent of the peak from the north ridge on 25 May 1960.[9][10]
The Tibetan name for Everest is Qomolangma (ཇོ་མོ་གླང་མ, lit. "Holy Mother"). The name was first recorded with a Chinese transcription on the 1721 Kangxi Atlas, and then appeared as Tchoumour Lancma on a 1733 map published in Paris by the French geographer D'Anville based on the former map.[11] It is also popularly romanised as Chomolungma and (in Wylie) as Jo-mo-glang-ma.[16] The official Chinese transcription is 珠穆朗玛峰 (t 珠穆朗瑪峰), whose pinyin form is Zhūmùlǎngmǎ Fēng. It is also infrequently simply translated into Chinese as Shèngmǔ Fēng (t 聖母峰, s 圣母峰, lit. "Holy Mother Peak"). Many other local names exist, including "Deodungha" ("Holy Mountain") in Darjeeling.[17]
In the late 19th century, many European cartographers incorrectly believed that a native name for the mountain was Gaurishankar, a mountain between Kathmandu and Everest.[18]
In 1849, the British survey wanted to preserve local names if possible (e.g., Kangchenjunga and Dhaulagiri), but Waugh argued that he could not find any commonly used local name. Waugh's search for a local name was hampered by Nepal and Tibet's exclusion of foreigners. Waugh argued that because there were many local names, it would be difficult to favour one name over all others; he decided that Peak XV should be named after British surveyor Sir George Everest, his predecessor as Surveyor General of India.[19][20][21] Everest himself opposed the name suggested by Waugh and told the Royal Geographical Society in 1857 that "Everest" could not be written in Hindi nor pronounced by "the native of India". Waugh's proposed name prevailed despite the objections, and in 1865, the Royal Geographical Society officially adopted Mount Everest as the name for the highest mountain in the world.[19] The modern pronunciation of Everest (/ˈɛvərɪst/)[22] is different from Sir George's pronunciation of his surname (/ˈiːvrɪst/ EEV-rist).[23]
In the early 1960s, the Nepalese government coined the Nepali name Sagarmāthā or Sagar-Matha[24] (सागर-मथ्था, lit. "goddess of the sky"[25]).[26]
In 1802, the British began the Great Trigonometric Survey of India to fix the locations, heights, and names of the world's highest mountains. Starting in southern India, the survey teams moved northward using giant theodolites, each weighing 500 kg (1,100 lb) and requiring 12 men to carry, to measure heights as accurately as possible. They reached the Himalayan foothills by the 1830s, but Nepal was unwilling to allow the British to enter the country due to suspicions of political aggression and possible annexation. Several requests by the surveyors to enter Nepal were turned down.[19]
The British were forced to continue their observations from Terai, a region south of Nepal which is parallel to the Himalayas. Conditions in Terai were difficult because of torrential rains and malaria. Three survey officers died from malaria while two others had to retire because of failing health.[19]
Nonetheless, in 1847, the British continued the survey and began detailed observations of the Himalayan peaks from observation stations up to 240 km (150 mi) distant. Weather restricted work to the last three months of the year. In November 1847, Andrew Waugh, the British Surveyor General of India made several observations from the Sawajpore station at the east end of the Himalayas. Kangchenjunga was then considered the highest peak in the world, and with interest, he noted a peak beyond it, about 230 km (140 mi) away. John Armstrong, one of Waugh's subordinates, also saw the peak from a site farther west and called it peak "b". Waugh would later write that the observations indicated that peak "b" was higher than Kangchenjunga, but given the great distance of the observations, closer observations were required for verification. The following year, Waugh sent a survey official back to Terai to make closer observations of peak "b", but clouds thwarted his attempts.[19]
In 1849, Waugh dispatched James Nicolson to the area, who made two observations from Jirol, 190 km (120 mi) away. Nicolson then took the largest theodolite and headed east, obtaining over 30 observations from five different locations, with the closest being 174 km (108 mi) from the peak.[19]
Nicolson retreated to Patna on the Ganges to perform the necessary calculations based on his observations. His raw data gave an average height of 9,200 m (30,200 ft) for peak "b", but this did not consider light refraction, which distorts heights. However, the number clearly indicated that peak "b" was higher than Kangchenjunga. Nicolson contracted malaria and was forced to return home without finishing his calculations. Michael Hennessy, one of Waugh's assistants, had begun designating peaks based on Roman numerals, with Kangchenjunga named Peak IX. Peak "b" now became known as Peak XV.[19]
In 1852, stationed at the survey headquarters in Dehradun, Radhanath Sikdar, an Indian mathematician and surveyor from Bengal was the first to identify Everest as the world's highest peak, using trigonometric calculations based on Nicolson's measurements.[27] An official announcement that Peak XV was the highest was delayed for several years as the calculations were repeatedly verified. Waugh began work on Nicolson's data in 1854, and along with his staff spent almost two years working on the numbers, having to deal with the problems of light refraction, barometric pressure, and temperature over the vast distances of the observations. Finally, in March 1856 he announced his findings in a letter to his deputy in Calcutta. Kangchenjunga was declared to be 8,582 m (28,156 ft), while Peak XV was given the height of 8,840 m (29,002 ft). Waugh concluded that Peak XV was "most probably the highest in the world".[19] Peak XV (measured in feet) was calculated to be exactly 29,000 ft (8,839.2 m) high, but was publicly declared to be 29,002 ft (8,839.8 m) in order to avoid the impression that an exact height of 29,000 feet (8,839.2 m) was nothing more than a rounded estimate.[28] Waugh is sometimes playfully credited with being "the first person to put two feet on top of Mount Everest".[29]
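The round figures above are straightforward unit conversions at 0.3048 m per foot; as a quick illustration (not part of the historical record), two lines of Python reproduce them:

# Converting the computed and the publicly declared heights of Peak XV.
print(29000 * 0.3048)  # 8839.2 m, the height calculated as exactly 29,000 ft
print(29002 * 0.3048)  # 8839.8096 m, i.e. the declared 29,002 ft (~8,839.8 m)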
In 1856, Andrew Waugh announced Everest (then known as Peak XV) as 8,840 m (29,002 ft) high, after several years of calculations based on observations made by the Great Trigonometric Survey.[30] The 8,848 m (29,029 ft) height given is officially recognised by Nepal and China.[31] Nepal plans a new survey in 2019 to determine if the April 2015 Nepal earthquake affected the height of the mountain.[32]
In 1955, the elevation of 8,848 m (29,029 ft) was first determined by an Indian survey, made closer to the mountain, also using theodolites.[citation needed] In 1975 it was subsequently reaffirmed by a Chinese measurement of 8,848.13 m (29,029.30 ft).[33] In both cases the snow cap, not the rock head, was measured.
In May 1999 an American Everest Expedition, directed by Bradford Washburn, anchored a GPS unit into the highest bedrock. A rock head elevation of 8,850 m (29,035 ft), and a snow/ice elevation 1 m (3 ft) higher, were obtained via this device.[34] Although it had not been officially recognised by Nepal as of 2001,[35] this figure is widely quoted. Geoid uncertainty casts doubt upon the accuracy claimed by both the 1999 and 2005 surveys (see below).[citation needed]
In 1955, a detailed photogrammetric map (at a scale of 1:50,000) of the Khumbu region, including the south side of Mount Everest, was made by Erwin Schneider as part of the 1955 International Himalayan Expedition, which also attempted Lhotse.
In the late 1980s, an even more detailed topographic map of the Everest area was made under the direction of Bradford Washburn, using extensive aerial photography.[36]
On 9 October 2005, after several months of measurement and calculation, the Chinese Academy of Sciences and State Bureau of Surveying and Mapping announced the height of Everest as 8,844.43 m (29,017.16 ft) with accuracy of ±0.21 m (8.3 in), claiming it was the most accurate and precise measurement to date.[37][38] This height is based on the highest point of rock and not the snow and ice covering it. The Chinese team measured a snow-ice depth of 3.5 m (11 ft),[33] which is in agreement with a net elevation of 8,848 m (29,029 ft). An argument arose between China and Nepal as to whether the official height should be the rock height (8,844 m, China) or the snow height (8,848 m, Nepal). In 2010, both sides agreed that the height of Everest is 8,848 m, and Nepal recognises China's claim that the rock height of Everest is 8,844 m.[39]
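The rock-height and snow-height figures quoted above are mutually consistent; a minimal sketch using only the numbers in this paragraph:

# The 2005 rock height plus the measured snow/ice depth recovers the official snow height.
rock_height_m = 8844.43
snow_ice_depth_m = 3.5
print(round(rock_height_m + snow_ice_depth_m))  # 8848, the elevation recognised by both countries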
It is thought that the plate tectonics of the area are adding to the height and moving the summit northeastwards. Two accounts suggest the rates of change are 4 mm (0.16 in) per year (upwards) and 3 to 6 mm (0.12 to 0.24 in) per year (northeastwards),[34][40] but another account mentions more lateral movement (27 mm or 1.1 in),[41] and even shrinkage has been suggested.[42]
The summit of Everest is the point at which earth's surface reaches the greatest distance above sea level. Several other mountains are sometimes claimed to be the "tallest mountains on earth". Mauna Kea in Hawaii is tallest when measured from its base;[43] it rises over 10,200 m (33,464.6 ft) when measured from its base on the mid-ocean floor, but only attains 4,205 m (13,796 ft) above sea level.
By the same measure of base to summit, Denali, in Alaska, also known as Mount McKinley, is taller than Everest as well.[43] Despite its height above sea level of only 6,190 m (20,308 ft), Denali sits atop a sloping plain with elevations from 300 to 900 m (980 to 2,950 ft), yielding a height above base in the range of 5,300 to 5,900 m (17,400 to 19,400 ft); a commonly quoted figure is 5,600 m (18,400 ft).[44][45] By comparison, reasonable base elevations for Everest range from 4,200 m (13,800 ft) on the south side to 5,200 m (17,100 ft) on the Tibetan Plateau, yielding a height above base in the range of 3,650 to 4,650 m (11,980 to 15,260 ft).[36]
The summit of Chimborazo in Ecuador is 2,168 m (7,113 ft) farther from earth's centre (6,384.4 km (3,967.1 mi)) than that of Everest (6,382.3 km (3,965.8 mi)), because the earth bulges at the equator.[46] This is despite Chimborazo having a peak 6,268 m (20,564.3 ft) above sea level versus Mount Everest's 8,848 m (29,028.9 ft).
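The geocentric-distance comparison can be reproduced to within roughly a hundred metres using the WGS84 ellipsoid. The sketch below is a first-order approximation under stated assumptions: the latitudes (about 28.0° N for Everest, 1.5° S for Chimborazo) are not given in the text, geodetic latitude is used directly, and summit elevation is simply added to the ellipsoid radius.

import math

A, B = 6378.137, 6356.752  # WGS84 equatorial and polar radii, in km

def geocentric_radius_km(lat_deg):
    # Distance from Earth's centre to the ellipsoid surface at a given geodetic latitude.
    phi = math.radians(lat_deg)
    c, s = math.cos(phi), math.sin(phi)
    return math.sqrt(((A * A * c) ** 2 + (B * B * s) ** 2) /
                     ((A * c) ** 2 + (B * s) ** 2))

everest_km = geocentric_radius_km(28.0) + 8.848      # summit elevation in km
chimborazo_km = geocentric_radius_km(-1.5) + 6.268

print(round(everest_km, 1), round(chimborazo_km, 1))  # ~6382.3 and ~6384.4 km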
Geologists have subdivided the rocks comprising Mount Everest into three units called formations.[47][48] Each formation is separated from the other by low-angle faults, called detachments, along which they have been thrust southward over each other. From the summit of Mount Everest to its base these rock units are the Qomolangma Formation, the North Col Formation, and the Rongbuk Formation.
The Qomolangma Formation, also known as the Jolmo Lungama Formation,[49] runs from the summit to the top of the Yellow Band, about 8,600 m (28,200 ft) above sea level. It consists of greyish to dark grey or white, parallel laminated and bedded, Ordovician limestone interlayered with subordinate beds of recrystallised dolomite with argillaceous laminae and siltstone. Gansser first reported finding microscopic fragments of crinoids in this limestone.[50][51] Later petrographic analysis of samples of the limestone from near the summit revealed them to be composed of carbonate pellets and finely fragmented remains of trilobites, crinoids, and ostracods. Other samples were so badly sheared and recrystallised that their original constituents could not be determined. A thick, white-weathering thrombolite bed that is 60 m (200 ft) thick comprises the foot of the "Third Step", and base of the summit pyramid of Everest. This bed, which crops out starting about 70 m (230 ft) below the summit of Mount Everest, consists of sediments trapped, bound, and cemented by the biofilms of micro-organisms, especially cyanobacteria, in shallow marine waters. The Qomolangma Formation is broken up by several high-angle faults that terminate at the low angle normal fault, the Qomolangma Detachment. This detachment separates it from the underlying Yellow Band. The lower five metres of the Qomolangma Formation overlying this detachment are very highly deformed.[47][48][52]
The bulk of Mount Everest, between 7,000 and 8,600 m (23,000 and 28,200 ft), consists of the North Col Formation, of which the Yellow Band forms its upper part between 8,200 to 8,600 m (26,900 to 28,200 ft). The Yellow Band consists of intercalated beds of Middle Cambrian diopside-epidote-bearing marble, which weathers a distinctive yellowish brown, and muscovite-biotite phyllite and semischist. Petrographic analysis of marble collected from about 8,300 m (27,200 ft) found it to consist as much as five percent of the ghosts of recrystallised crinoid ossicles. The upper five metres of the Yellow Band lying adjacent to the Qomolangma Detachment is badly deformed. A 5–40 cm (2.0–15.7 in) thick fault breccia separates it from the overlying Qomolangma Formation.[47][48][52]
The remainder of the North Col Formation, exposed between 7,000 to 8,200 m (23,000 to 26,900 ft) on Mount Everest, consists of interlayered and deformed schist, phyllite, and minor marble. Between 7,600 and 8,200 m (24,900 and 26,900 ft), the North Col Formation consists chiefly of biotite-quartz phyllite and chlorite-biotite phyllite intercalated with minor amounts of biotite-sericite-quartz schist. Between 7,000 and 7,600 m (23,000 and 24,900 ft), the lower part of the North Col Formation consists of biotite-quartz schist intercalated with epidote-quartz schist, biotite-calcite-quartz schist, and thin layers of quartzose marble. These metamorphic rocks appear to be the result of the metamorphism of Middle to Early Cambrian deep sea flysch composed of interbedded, mudstone, shale, clayey sandstone, calcareous sandstone, graywacke, and sandy limestone. The base of the North Col Formation is a regional low-angle normal fault called the "Lhotse detachment".[47][48][52]
Below 7,000 m (23,000 ft), the Rongbuk Formation underlies the North Col Formation and forms the base of Mount Everest. It consists of sillimanite-K-feldspar grade schist and gneiss intruded by numerous sills and dikes of leucogranite ranging in thickness from 1 cm to 1,500 m (0.4 in to 4,900 ft).[48][53] These leucogranites are part of a belt of Late Oligocene–Miocene intrusive rocks known as the Higher Himalayan leucogranite. They formed as the result of partial melting of Paleoproterozoic to Ordovician high-grade metasedimentary rocks of the Higher Himalayan Sequence about 20 to 24 million years ago during the subduction of the Indian Plate.[54]
Mount Everest consists of sedimentary and metamorphic rocks that have been faulted southward over continental crust composed of Archean granulites of the Indian Plate during the Cenozoic collision of India with Asia.[55][56][57] Current interpretations argue that the Qomolangma and North Col formations consist of marine sediments that accumulated within the continental shelf of the northern passive continental margin of India before it collided with Asia. The Cenozoic collision of India with Asia subsequently deformed and metamorphosed these strata as it thrust them southward and upward.[58][59] The Rongbuk Formation consists of a sequence of high-grade metamorphic and granitic rocks that were derived from the alteration of high-grade metasedimentary rocks. During the collision of India with Asia, these rocks were thrust downward and to the north as they were overridden by other strata; heated, metamorphosed, and partially melted at depths of over 15 to 20 kilometres (9.3 to 12.4 mi) below sea level; and then forced upward to the surface by thrusting towards the south between two major detachments.[60] The Himalayas are rising by about 5 mm per year.
There is very little native flora or fauna on Everest. A moss grows at 6,480 metres (21,260 ft) on Mount Everest.[61] It may be the highest altitude plant species.[61] An alpine cushion plant called Arenaria is known to grow below 5,500 metres (18,000 ft) in the region.[62] According to the study based on satellite data from 1993 to 2018, vegetation is expanding in the Everest region. Researchers have found plants in areas that were previously deemed bare.[63]
Euophrys omnisuperstes, a minute black jumping spider, has been found at elevations as high as 6,700 metres (22,000 ft), possibly making it the highest confirmed non-microscopic permanent resident on Earth. It lurks in crevices and may feed on frozen insects that have been blown there by the wind. There is a high likelihood of microscopic life at even higher altitudes.[64]
Birds, such as the bar-headed goose, have been seen flying at the higher altitudes of the mountain, while others, such as the chough, have been spotted as high as the South Col at 7,920 metres (25,980 ft).[65] Yellow-billed choughs have been seen as high as 7,900 metres (26,000 ft) and bar-headed geese migrate over the Himalayas.[66] In 1953, George Lowe (part of the expedition of Tenzing and Hillary) said that he saw bar-headed geese flying over Everest's summit.[67]
Yaks are often used to haul gear for Mount Everest climbs. They can haul 100 kg (220 pounds), have thick fur and large lungs.[62] One common piece of advice for those in the Everest region is to be on the higher ground when around yaks and other animals, as they can knock people off the mountain if standing on the downhill edge of a trail.[68] Other animals in the region include the Himalayan tahr which is sometimes eaten by the snow leopard.[69] The Himalayan black bear can be found up to about 4,300 metres (14,000 ft) and the red panda is also present in the region.[70] One expedition found a surprising range of species in the region including a pika and ten new species of ants.[71]
In 2008, a new weather station at about 8,000 m altitude (26,246 feet) went online.[75] The station's first data in May 2008 were air temperature −17 °C (1 °F), relative humidity 41.3 percent, atmospheric pressure 382.1 hPa (38.21 kPa), wind direction 262.8°, wind speed 12.8 m/s (28.6 mph, 46.1 km/h), global solar radiation 711.9 watts/m2, solar UVA radiation 30.4 W/m2.[75] The project was orchestrated by Stations at High Altitude for Research on the Environment (SHARE), which also placed the Mount Everest webcam in 2011.[75][76] The solar-powered weather station is on the South Col.[77]
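The station readings quoted above mix units; as a small consistency check (using only values from this paragraph), the conversions work out as reported:

# Converting the SHARE station's first readings from May 2008.
wind_ms = 12.8
print(round(wind_ms * 3.6, 1))      # 46.1 km/h
print(round(wind_ms * 2.23694, 1))  # 28.6 mph
print(382.1 / 10)                   # 38.21 kPa, from 382.1 hPa
print(round(-17 * 9 / 5 + 32))      # 1 degree F, from -17 degrees C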
One of the issues facing climbers is the frequent presence of high-speed winds.[78] The peak of Mount Everest extends into the upper troposphere and penetrates the stratosphere,[79] which can expose it to the fast and freezing winds of the jet stream.[80] In February 2004, a wind speed of 280 km/h (175 mph) was recorded at the summit and winds over 160 km/h (100 mph) are common.[78] These winds can blow climbers off Everest. Climbers typically aim for a 7- to 10-day window in the spring and fall when the Asian monsoon season is either starting up or ending and the winds are lighter. The air pressure at the summit is about one-third what it is at sea level, and by Bernoulli's principle, the winds can lower the pressure further, causing an additional 14 percent reduction in oxygen to climbers.[80] The reduction in oxygen availability comes from the reduced overall pressure, not a reduction in the ratio of oxygen to other gases.[81]
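The "about one-third" pressure figure is consistent with a simple isothermal barometric model, P = P0 * exp(-M*g*h / (R*T)). In the sketch below the assumed mean temperature of the air column (250 K) is an illustrative choice, not a value from the article:

import math

M = 0.02896  # molar mass of dry air, kg/mol
g = 9.81     # gravitational acceleration, m/s^2
R = 8.314    # universal gas constant, J/(mol*K)
T = 250.0    # assumed mean air-column temperature, K (an assumption)
h = 8848.0   # summit elevation, m

print(round(math.exp(-M * g * h / (R * T)), 2))  # ~0.3, about one-third of sea-level pressure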
In the summer, the Indian monsoon brings warm wet air from the Indian Ocean to Everest's south side. During the winter, the west-southwest flowing jet stream shifts south and blows on the peak.[citation needed]
Because Mount Everest is the highest mountain in the world, it has attracted considerable attention and climbing attempts. Whether the mountain was climbed in ancient times is unknown. It may have been climbed in 1924, although this has never been confirmed, as both of the men making the attempt failed to return from the mountain. Several climbing routes have been established over several decades of climbing expeditions to the mountain.[82][83][better source needed]
Everest's first known summiting occurred in 1953, and interest from climbers increased thereafter.[84] Despite the effort and attention poured into expeditions, only about 200 people had summited by 1987.[84] Everest remained a difficult climb for decades, even for serious attempts by professional climbers and large national expeditions, which were the norm until the commercial era began in the 1990s.[85]
By March 2012, Everest had been climbed 5,656 times with 223 deaths.[86] Although lower mountains have longer or steeper climbs, Everest is so high the jet stream can hit it. Climbers can be faced with winds beyond 320 km/h (200 mph) when the weather shifts.[87] At certain times of the year the jet stream shifts north, providing periods of relative calm at the mountain.[88] Other dangers include blizzards and avalanches.[88]
By 2013, The Himalayan Database recorded 6,871 summits by 4,042 different people.[89]
In 1885, Clinton Thomas Dent, president of the Alpine Club, suggested that climbing Mount Everest was possible in his book Above the Snow Line.[90]
The northern approach to the mountain was discovered by George Mallory and Guy Bullock on the initial 1921 British Reconnaissance Expedition. It was an exploratory expedition not equipped for a serious attempt to climb the mountain. With Mallory leading (and thus becoming the first European to set foot on Everest's flanks) they climbed the North Col to an altitude of 7,005 metres (22,982 ft). From there, Mallory espied a route to the top, but the party was unprepared for the great task of climbing any further and descended.
The British returned for a 1922 expedition. George Finch climbed using oxygen for the first time, ascending at a remarkable speed of 290 metres (951 ft) per hour and reaching an altitude of 8,320 m (27,300 ft), the first reported instance of a human climbing above 8,000 m. Mallory and Col. Felix Norton made a second unsuccessful attempt. Mallory was faulted[citation needed] for leading a group down from the North Col which got caught in an avalanche; Mallory was pulled down too but survived, while seven native porters were killed.
The next expedition was in 1924. The initial attempt by Mallory and Geoffrey Bruce was aborted when weather conditions prevented the establishment of Camp VI. The next attempt was that of Norton and Somervell, who climbed without oxygen and in perfect weather, traversing the North Face into the Great Couloir. Norton managed to reach 8,550 m (28,050 ft), though he ascended only 30 m (98 ft) or so in the last hour. Mallory rustled up oxygen equipment for a last-ditch effort. He chose young Andrew Irvine as his partner.
On 8 June 1924, George Mallory and Andrew Irvine made an attempt on the summit via the North Col-North Ridge-Northeast Ridge route from which they never returned. On 1 May 1999, the Mallory and Irvine Research Expedition found Mallory's body on the North Face in a snow basin below and to the west of the traditional site of Camp VI. Controversy has raged in the mountaineering community whether one or both of them reached the summit 29 years before the confirmed ascent and safe descent of Everest by Sir Edmund Hillary and Tenzing Norgay in 1953.
In 1933, Lady Houston, a British millionairess, funded the Houston Everest Flight of 1933, which saw a formation of two aircraft led by the Marquess of Clydesdale fly over the summit.[91][92][93][94]
Early expeditions—such as General Charles Bruce's in the 1920s and Hugh Ruttledge's two unsuccessful attempts in 1933 and 1936—tried to ascend the mountain from Tibet, via the North Face. Access was closed from the north to Western expeditions in 1950 after China took control of Tibet. In 1950, Bill Tilman and a small party which included Charles Houston, Oscar Houston, and Betsy Cowles undertook an exploratory expedition to Everest through Nepal along the route which has now become the standard approach to Everest from the south.[95]
The 1952 Swiss Mount Everest Expedition, led by Edouard Wyss-Dunant, was granted permission to attempt a climb from Nepal. It established a route through the Khumbu icefall and ascended to the South Col at an elevation of 7,986 m (26,201 ft). Raymond Lambert and Sherpa Tenzing Norgay were able to reach an elevation of about 8,595 m (28,199 ft) on the southeast ridge, setting a new climbing altitude record. Tenzing's experience was useful when he was hired to be part of the British expedition in 1953.[96] One reference says that no attempt at an ascent of Everest was ever under consideration in this case,[97]
although John Hunt (who met the team in Zurich on their return) wrote that when the Swiss Expedition "just failed" in the spring they decided to make another post-monsoon (summit ascent) attempt in the autumn; although as it was only decided in June the second party arrived too late, when winter winds were buffeting the mountain.[98]
In 1953, a ninth British expedition, led by John Hunt, returned to Nepal. Hunt selected two climbing pairs to attempt to reach the summit. The first pair, Tom Bourdillon and Charles Evans, came within 100 m (330 ft) of the summit on 26 May 1953, but turned back after running into oxygen problems. As planned, their work in route finding and breaking trail and their oxygen caches were of great aid to the following pair. Two days later, the expedition made its second assault on the summit with the second climbing pair: the New Zealander Edmund Hillary and Tenzing Norgay, a Nepali Sherpa climber. They reached the summit at 11:30 local time on 29 May 1953 via the South Col route. At the time, both acknowledged it as a team effort by the whole expedition, but Tenzing revealed a few years later that Hillary had put his foot on the summit first.[99] They paused at the summit to take photographs and buried a few sweets and a small cross in the snow before descending.
News of the expedition's success reached London on the morning of Queen Elizabeth II's coronation, 2 June. A few days later, the Queen gave orders that Hunt (a Briton) and Hillary (a New Zealander) were to be knighted in the Order of the British Empire for the ascent.[100] Tenzing, a Nepali Sherpa who was a citizen of India, was granted the George Medal by the UK. Hunt was ultimately made a life peer in Britain, while Hillary became a founding member of the Order of New Zealand.[101] Hillary and Tenzing have also been recognised in Nepal. In 2009, statues were raised in their honor, and in 2014, Hillary Peak and Tenzing Peak were named for them.[102][103]
On 23 May 1956 Ernst Schmied and Juerg Marmet ascended.[104] This was followed by Dölf Reist and Hans-Rudolf von Gunten on 24 May 1957.[104] Wang Fuzhou, Gonpo and Qu Yinhua of China made the first reported ascent of the peak from the North Ridge on 25 May 1960.[9][10] The first American to climb Everest, Jim Whittaker, joined by Nawang Gombu, reached the summit on 1 May 1963.[105][106]
In 1970 Japanese mountaineers conducted a major expedition. The centerpiece was a large "siege"-style expedition led by Saburo Matsukata, working on finding a new route up the southwest face.[107] Another element of the expedition was an attempt to ski Mount Everest.[85] Despite a staff of over one hundred people and a decade of planning work, the expedition suffered eight deaths and failed to summit via the planned routes.[85] However, Japanese expeditions did enjoy some successes. For example, Yuichiro Miura became the first man to ski down Everest from the South Col (he descended nearly 4,200 vertical feet from the South Col before falling and sustaining serious injuries). Another success was an expedition that put four climbers on the summit via the South Col route.[85][108][109] Miura's exploits became the subject of a film, and he went on to become the oldest person to summit Mount Everest, in 2003 at age 70 and again in 2013 at the age of 80.[110]
In 1975, Junko Tabei, a Japanese woman, became the first woman to summit Mount Everest.[85]
In 1978, Reinhold Messner and Peter Habeler made the first ascent of Everest without supplemental oxygen.
The Polish climber Andrzej Zawada headed the first winter ascent of Mount Everest, the first winter ascent of an eight-thousander. The team of 20 Polish climbers and 4 Sherpas established a base camp on the Khumbu Glacier in early January 1980. On 15 January, the team managed to set up Camp III at 7,150 m above sea level, but further progress was stopped by hurricane-force winds. The weather improved after 11 February, when Leszek Cichy, Walenty Fiut and Krzysztof Wielicki set up Camp IV on the South Col (7,906 m). Cichy and Wielicki started the final ascent at 6:50 am on 17 February. At 2:40 pm Andrzej Zawada at base camp heard the climbers' voices over the radio – "We are on the summit! The strong wind blows all the time. It is unimaginably cold."[111][112][113][114] The successful winter ascent of Mount Everest began a new decade of winter Himalaism, which became a Polish specialisation. After 1980 Poles achieved ten first winter ascents of 8,000-metre peaks, which earned Polish climbers a reputation as "Ice Warriors".[115][112][116][117]
In May 1989, Polish climbers under the leadership of Eugeniusz Chrobak organised an international expedition to Mount Everest on the difficult west ridge. Ten Poles and nine foreigners participated, but ultimately only the Poles remained in the attempt for the summit. On 24 May, Chrobak and Andrzej Marciniak, starting from camp V at 8,200 m, overcame the ridge and reached the summit. But on 27 May, four Polish climbers – Mirosław Dąsal, Mirosław Gardzielewski, Zygmunt Andrzej Heinrich and Wacław Otręba – were killed in an avalanche off the wall of Khumbutse on the Lho La pass. The following day, Chrobak also died of his injuries. Marciniak, who was likewise injured, was saved by a rescue expedition in which Artur Hajzer and the New Zealanders Gary Ball and Rob Hall took part. Among those who helped organise the rescue expedition were Reinhold Messner, Elizabeth Hawley, Carlos Carsolio and the US consul.[118]
On 10 and 11 May 1996 eight climbers died after several guided expeditions were caught in a blizzard high up on the mountain during a summit attempt on 10 May. During the 1996 season, 15 people died while climbing on Mount Everest. These were the highest death tolls for a single weather event, and for a single season, until the sixteen deaths in the 2014 Mount Everest avalanche. The guiding disaster gained wide publicity and raised questions about the commercialization of climbing and the safety of guiding clients on Mount Everest.
Journalist Jon Krakauer, on assignment from Outside magazine, was in one of the affected guided parties, and afterward published the bestseller Into Thin Air, which related his experience. Anatoli Boukreev, a guide who felt impugned by Krakauer's book, co-authored a rebuttal book called The Climb. The dispute sparked a debate within the climbing community.
In May 2004, Kent Moore, a physicist, and John L. Semple, a surgeon, both researchers from the University of Toronto, told New Scientist magazine that an analysis of weather conditions on 11 May suggested that weather caused oxygen levels to plunge approximately 14 percent.[119][120]
One of the survivors was Beck Weathers, an American client of New Zealand-based guide service Adventure Consultants. Weathers was left for dead about 275 metres (900 feet) from Camp 4 at 7,950 metres (26,085 feet). After spending a night on the mountain, Weathers managed to make it back to Camp 4 with massive frostbite and vision impaired due to snow blindness.[121] When he arrived at Camp 4, fellow climbers considered his condition terminal and left him in a tent to die overnight.[122]
Weathers' condition had not improved and an immediate descent to a lower elevation was deemed essential.[122] A helicopter rescue was out of the question: Camp 4 was higher than the rated ceiling of any available helicopter. Weathers was lowered to Camp 2. Eventually, a helicopter rescue was organised thanks to the Nepalese Army.[121][122]
The storm's impact on climbers on Everest's North Ridge, where several climbers also died, was detailed in a first-hand account by British filmmaker and writer Matt Dickinson in his book The Other Side of Everest. Sixteen-year-old Mark Pfetzer was on the climb and wrote about it in his account, Within Reach: My Everest Story.
The 2015 feature film Everest, directed by Baltasar Kormákur, is based on the events of this guiding disaster.[123]
In 2006, 12 people died. One death in particular (see below) triggered an international debate and years of discussion about climbing ethics.[127] The season was also remembered for the rescue of Lincoln Hall, who had been left by his climbing team and declared dead, but was later discovered alive and survived being helped off the mountain.
There was an international controversy about the death of a solo British climber David Sharp, who attempted to climb Mount Everest in 2006 but died in his attempt. The story broke out of the mountaineering community into popular media, with a series of interviews, allegations, and critiques. The question was whether climbers that season had left a man to die and whether he could have been saved. He was said to have attempted to summit Mount Everest by himself with no Sherpa or guide and fewer oxygen bottles than considered normal.[128] He went with a low-budget Nepali guide firm that only provides support up to Base Camp, after which climbers go as a "loose group", offering a high degree of independence. The manager at Sharp's guide support said Sharp did not take enough oxygen for his summit attempt and did not have a Sherpa guide.[129] It is less clear who knew Sharp was in trouble, and if they did know, whether they were qualified or capable of helping him.[128]
Double-amputee climber Mark Inglis said in a press interview on 23 May 2006 that his climbing party, and many others, had passed Sharp on 15 May, sheltering under a rock overhang 450 metres (1,480 ft) below the summit, without attempting a rescue.[130] Inglis said 40 people had passed by Sharp, who might have been overlooked because climbers assumed he was the corpse nicknamed "Green Boots";[131] Inglis was unaware that Turkish climbers had tried to help Sharp even while they were in the process of helping an injured woman, Burçak Poçan, down the mountain. There has also been some discussion about Himex in the commentary on Inglis and Sharp. Inglis later revised certain details of his initial comments because he had been interviewed while "...physically and mentally exhausted, and in a lot of pain. He had suffered severe frostbite – he later had five fingertips amputated." When Sharp's possessions were examined, a receipt for US$7,490 was found, believed to be the entire cost of his expedition.[132] By comparison, most expeditions cost between US$35,000 and US$100,000, plus an additional US$20,000 in other expenses ranging from gear to bonuses.[133] It is estimated that Sharp summited Mount Everest on 14 May and began his descent, but that by 15 May he was in trouble, being passed by climbers on their way up and down.[134] On 15 May 2006, he is believed to have been suffering from hypoxia, about 1,000 feet below the summit on the north side route.[134]
"Dawa from Arun Treks also gave oxygen to David and tried to help him move, repeatedly, for perhaps an hour. But he could not get David to stand alone or even stand to rest on his shoulders, and crying, Dawa had to leave him too. Even with two Sherpas, it was not going to be possible to get David down the tricky sections below."
Some climbers who left him said that the rescue efforts would have been useless and only have caused more deaths.[citation needed] Beck Weathers of the 1996 Mount Everest disaster said that those who are dying are often left behind and that he himself had been left for dead twice but was able to keep walking.[136] The Tribune of India quoted someone who described what happened to Sharp as "the most shameful act in the history of mountaineering".[137] In addition to Sharp's death, at least nine other climbers perished that year, including multiple Sherpas working for various guiding companies.[138]
"You are never on your own. There are climbers everywhere."
Much of this controversy was captured by the Discovery Channel while filming the television program Everest: Beyond the Limit. A crucial decision affecting the fate of Sharp is shown in the program, where an early-returning climber, Lebanese adventurer Maxim Chaya, is descending from the summit and radios to his base camp manager (Russell Brice) that he has found a frostbitten and unconscious climber in distress. Chaya is unable to identify Sharp, who had chosen to climb solo without any support and so did not identify himself to other climbers. The base camp manager assumes that Sharp is part of a group that has already calculated that they must abandon him, and informs his lone climber that there is no chance of him being able to help Sharp by himself. As Sharp's condition deteriorates through the day and other descending climbers pass him, his opportunities for rescue diminish: his legs and feet curl from frostbite, preventing him from walking; the later descending climbers are lower on oxygen and lack the strength to offer aid; time runs out for any Sherpas to return and rescue him.
David Sharp's body remained just below the summit on the Chinese side next to "Green Boots"; they shared a space in a small rock cave that was an ad hoc tomb for them.[134] Sharp's body was removed from the cave in 2007, according to the BBC,[140] and since 2014, Green Boots has been missing, presumably removed or buried.[141]
As the Sharp debate kicked off on 26 May 2006, Australian climber Lincoln Hall was found alive after being left for dead the day before.[142] He was found by a party of four climbers (Dan Mazur, Andrew Brash, Myles Osborne and Jangbu Sherpa) who, giving up their own summit attempt, stayed with Hall and descended with him and a party of 11 Sherpas sent up to carry him down. Hall later fully recovered. His team assumed he had died from cerebral edema, and they were instructed to cover him with rocks.[142] There were no rocks around to do this and he was abandoned.[143] The erroneous information of his death was passed on to his family. The next day he was discovered by another party alive.[144]
I was shocked to see a guy without gloves, hat, oxygen bottles or sleeping bag at sunrise at 28,200-feet height, just sitting up there.
Lincoln greeted his fellow mountaineers with this:[144]
I imagine you are surprised to see me here.
Lincoln Hall (born in 1955) went on to live for several more years, often giving talks about his near-death experience and rescue, before dying from unrelated medical issues in 2012 at the age of 56.[144]
Heroic rescue actions have been recorded since Hall, including on 21 May 2007, when Canadian climber Meagan McGrath initiated the successful high-altitude rescue of Nepali Usha Bista. Recognising her heroic rescue, Major Meagan McGrath was selected as a 2011 recipient of the Sir Edmund Hillary Foundation of Canada Humanitarian Award, which recognises a Canadian who has personally or administratively contributed a significant service or act in the Himalayan Region of Nepal.[145]
By the end of the 2010 climbing season, there had been 5,104 ascents to the summit by about 3,142 individuals, with 77% of these ascents being accomplished since 2000.[146] The summit was achieved in 7 of the 22 years from 1953 to 1974 and was not missed between 1975 and 2014.[146] In 2007, the record number of 633 ascents was recorded, by 350 climbers and 253 sherpas.[146]
An illustration of the explosion of popularity of Everest is provided by the numbers of daily ascents. Analysis of the 1996 Mount Everest disaster shows that part of the blame was on the bottleneck caused by a large number of climbers (33 to 36) attempting to summit on the same day; this was considered unusually high at the time. By comparison, on 23 May 2010, the summit of Mount Everest was reached by 169 climbers – more summits in a single day than in the cumulative 31 years from the first successful summit in 1953 through 1983.[146]
There have been 219 fatalities recorded on Mount Everest from the 1922 British Mount Everest Expedition through the end of 2010, a rate of 4.3 fatalities for every 100 summits (this is a general rate, and includes fatalities amongst support climbers, those who turned back before the peak, those who died en route to the peak and those who died while descending from the peak). Of the 219 fatalities, 58 (26.5%) were climbers who had summited but did not complete their descent.[146] Though the rate of fatalities has decreased since the year 2000 (1.4 fatalities for every 100 summits, with 3938 summits since 2000), the significant increase in the total number of climbers still means 54 fatalities since 2000: 33 on the northeast ridge, 17 on the southeast ridge, 2 on the southwest face, and 2 on the north face.[146]
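The quoted rates follow directly from the cumulative totals given in this article (219 deaths against the roughly 5,104 summits through 2010 noted above, and 54 deaths against 3,938 summits since 2000); a one-line check for each:

# Fatalities per 100 summits, recomputed from the article's own figures.
print(round(100 * 219 / 5104, 1))  # ~4.3 overall through 2010
print(round(100 * 54 / 3938, 1))   # ~1.4 since 2000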
Nearly all attempts at the summit are done using one of the two main routes. The traffic seen by each route varies from year to year. In 2005–07, more than half of all climbers elected to use the more challenging, but cheaper northeast route. In 2008, the northeast route was closed by the Chinese government for the entire climbing season, and the only people able to reach the summit from the north that year were athletes responsible for carrying the Olympic torch for the 2008 Summer Olympics.[147] The route was closed to foreigners once again in 2009 in the run-up to the 50th anniversary of the Dalai Lama's exile.[148] These closures led to declining interest in the north route, and, in 2010, two-thirds of the climbers reached the summit from the south.[146]
The 2010s were a time of new highs and lows for the mountain, with back-to-back disasters in 2014 and 2015 causing record deaths. In 2015 there were no summits for the first time in decades. However, other years set records for numbers of summits – 2013's record number of summiters, around 667, was surpassed in 2018, when around 800 climbed the peak,[149] and a subsequent record was set in 2019 with over 890 summiters.[150]
On 18 April 2014, an avalanche swept the Khumbu Icefall between Base Camp and Camp 1 at around 01:00 UTC (06:30 local time), at an elevation of about 5,900 metres (19,400 ft).[158] Sixteen people, all Nepali guides, were killed in the avalanche, and nine more were injured.[159]
During the season, a 13-year-old girl, Malavath Purna, reached the summit, becoming the youngest female climber to do so.[160] Additionally, one team used a helicopter to fly from south base camp to Camp 2 to avoid the Khumbu Icefall, then reached the Everest summit; this team had to use the south side because the Chinese had denied them a permit to climb. The team's leader, Wang Jing, later donated tens of thousands of dollars to local hospitals[citation needed] and was named the Nepalese "International Mountaineer of the Year".[161]
Over 100 people summited Everest from China (the Tibet side), and six from Nepal, in the 2014 season.[162] These included 72-year-old Bill Burke, the Indian teenager Malavath Purna, and the Chinese climber Wang Jing.[163] An earlier teenage summiter was Ming Kipa Sherpa, who climbed with her elder sister Lhakpa Sherpa in 2003, by which time Lhakpa had achieved the most summits of Mount Everest by any woman.[164] (see also Santosh Yadav)
2015 was set to be a record-breaking season, with hundreds of permits issued in Nepal and many more in Tibet (China). However, on 25 April 2015 an earthquake measuring 7.8 Mw triggered an avalanche that hit Everest Base Camp,[165] effectively shutting down the Everest climbing season.[166] Eighteen bodies were recovered from Mount Everest by the Indian Army mountaineering team.[167] The avalanche began on Pumori,[168] moved through the Khumbu Icefall on the southwest side of Mount Everest, and slammed into the South Base Camp.[169] 2015 was the first year since 1974 with no spring summits, as all climbing teams pulled out after the quakes and avalanche.[170][171] One reason was the high probability of aftershocks (over 50 percent according to the USGS).[172] Just weeks after the first quake, the region was rattled again by a magnitude 7.3 quake and many sizeable aftershocks.[173]
The quakes trapped hundreds of climbers above the Khumbu Icefall, and they had to be evacuated by helicopter as they ran low on supplies.[174] The quake shifted the route through the icefall, making it essentially impassable to climbers,[174] and bad weather made helicopter evacuation difficult.[174] The Everest tragedy was small compared to the earthquake's overall impact on Nepal, which left almost nine thousand dead[175][176] and about 22,000 injured.[175] In Tibet, at least 25 had died by 28 April, and 117 were injured.[177] By 29 April 2015, the Tibet Mountaineering Association (North/Chinese side) had closed Everest and other peaks to climbing, stranding 25 teams and about 300 people on the north side of Everest.[178] On the south side, helicopters evacuated 180 people trapped at Camps 1 and 2.[179]
On 24 August 2015 Nepal re-opened Everest to tourism, including mountain climbers.[180] The only climbing permit for the autumn season was awarded to Japanese climber Nobukazu Kuriki, who had tried four times previously to summit Everest without success. He made his fifth attempt in October but had to give up just 700 m (2,300 ft) from the summit due to "strong winds and deep snow".[181][182] Kuriki knew the dangers of Everest well, having survived being stuck in a freezing snow hole for two days near the top on an earlier attempt, at the cost of all his fingertips and a thumb lost to frostbite, which added further difficulty to his climbing.[183]
Some sections of the trail from Lukla to Everest Base Camp (Nepal) were damaged in the earthquakes earlier in the year and needed repairs to handle trekkers.[184]
Hawley's database records that 641 climbers made it to the summit in early 2016.[185]
2017 was the biggest season yet in terms of permits, yielding hundreds of summiters and a handful of deaths.[186] On 27 May 2017, Kami Rita Sherpa made his 21st climb to the summit with the Alpine Ascents Everest Expedition, becoming one of three people in the world, along with Apa Sherpa and Phurba Tashi Sherpa, to have summited Mount Everest 21 times.[187][188] The season had a tragic start with the death of Ueli Steck of Switzerland, who was killed in a fall during a warm-up climb.[189] Discussion continued about the nature of possible changes to the Hillary Step.[190] The total number of summiters for 2017 was tallied at 648,[156] with 449 summiting via Nepal (from the south) and 120 from Chinese Tibet (the north side).[191]
807 climbers summited Mount Everest in 2018,[192] including 563 on the Nepal side and 240 from the Chinese Tibet side.[149] This broke the previous record for total summits in a year, 667, set in 2013; one factor that aided the new record was an especially long and clear weather window of 11 days during the critical spring climbing season.[149][193][157] Various records were broken, including an extraordinary ascent by a 70-year-old double amputee, who undertook his climb after winning a court case in the Nepali Supreme Court.[149] There were no major disasters, but seven climbers died in various situations, including several Sherpas as well as international climbers.[149] Although record numbers of climbers reached the summit, old-time summiters who had made expeditions in the 1980s lamented the crowding, feces, and cost.[193]
Himalayan record keeper Elizabeth Hawley died in late January 2018.[194]
Figures for the number of permits issued by Nepal range from 347[195] to 375.[196]
The spring or pre-monsoon window of 2019 witnessed the deaths of a number of climbers and the worldwide publication of images of hundreds of mountaineers queuing to reach the summit; sensational media reports of climbers stepping over dead bodies dismayed people around the world.[199][200][201][202]
There were reports of various winter expeditions in the Himalayas, including K2, Nanga Parbat, and Meru, with the buzz for the Everest 2019 season beginning just 14 weeks before the weather window.[203] Noted climber Cory Richards announced on Twitter that he was hoping to establish a new climbing route to the summit in 2019.[203] Also announced was an expedition to re-measure the height of Everest, particularly in light of the 2015 earthquakes.[204][205] In February 2019, China closed the base camp on the northern side of Mount Everest to those without climbing permits.[206] By early April, climbing teams from around the world were arriving for the 2019 spring climbing season.[157] Among the teams was a scientific expedition with a planned study of pollution, and of how factors like snow and vegetation influence the availability of food and water in the region.[207] In the 2019 spring mountaineering season, there were roughly 40 teams with almost 400 climbers and several hundred guides attempting to summit on the Nepali side.[208][209][210] Nepal issued 381 climbing permits for 2019.[192] For the northern routes in Chinese Tibet, several hundred more permits were issued by the authorities there.[211]
In May 2019, Nepali mountaineering guide Kami Rita summited Mount Everest twice within a week, his 23rd and 24th ascents, making international news headlines.[212][208][209] He first summited Everest in 1994, and has summited several other extremely high mountains, such as K2 and Lhotse.[208][209][210][212]
By 23 May 2019, about seven people had died, possibly because crowding led to delays high on the mountain, compounded by shorter weather windows.[192] One 19-year-old who had summited previously noted that when the weather window opens (the high winds calm down), long lines form as everyone rushes to get to the top and back down.[213] In Chinese Tibet, one Austrian climber died from a fall,[192] and by 26 May 2019 the overall number of deaths for the spring climbing season had risen to 10.[214][215][216] By 28 May, the death toll had increased to 11 when a climber died at about 26,000 feet during the descent,[197] with a 12th climber missing and presumed dead.[198] Despite the number of deaths, reports indicated that a record 891 climbers summited in the spring 2019 climbing season.[217][150]
Although China has had various permit restrictions, and Nepal requires a doctor to sign off on climbing permits,[217] the natural dangers of climbing such as falls and avalanches combined with medical issues aggravated by Everest's extreme altitude led to 2019 being a year with a comparatively high death toll.[217] In light of this, the government of Nepal is now considering reviewing its existing laws on expeditions to the mountain peak.[218][219]
Regardless of the number of permits, the weather window dictates the timing of when climbers head to the summit.[220] The 2018 season had a similar number of climbers but an 11-day stretch of calm weather, while in 2019 the weather windows were shorter and broken up by periods of bad weather.[220][221]
By 31 May 2019, the climbing season was thought to be all but concluded, as shifting weather patterns and increased wind speeds made summiting more difficult.[222] The next window generally opens when the monsoon season ends in September; however, this is a less popular option, as the monsoon leaves large amounts of snow on the mountain, increasing the danger of avalanches,[223] which historically account for about one quarter of Everest fatalities.[150]
334 climbing permits were issued in 2014 in Nepal. These were extended until 2019 due to the closure.[225] In 2015 there were 357 permits to climb Everest, but the mountain was closed again because of the avalanche and earthquake, and these permits were given a two-year extension to 2017 (not to 2019 as with the 2014 issue).[226][225][clarification needed]
In 2017, a permit evader who tried to climb Everest without the $11,000 permit faced, among other penalties, a $22,000 fine, bans, and a possible four years in jail after he was caught (he had made it past the Khumbu Icefall). In the end he was given a 10-year mountaineering ban in Nepal and allowed to return home.[227]
The number of permits issued each year by Nepal is listed below.[226][228]
The Chinese side in Tibet is also managed with permits for summiting Everest.[229] They did not issue permits in 2008, due to the Olympic torch relay being taken to the summit of Mount Everest.[230]
In March 2020, the governments of China and Nepal announced a cancellation of all climbing permits for Mount Everest due to the COVID-19 pandemic.[231][232] In April 2020, a group of Chinese mountaineers began an expedition from the Chinese side. The mountain remained closed on the Chinese side to all foreign climbers.[233]
Mount Everest has two main climbing routes, the southeast ridge from Nepal and the north ridge from Tibet, as well as many other, less frequently climbed routes.[234] Of the two main routes, the southeast ridge is technically easier and more frequently used. It was the route used by Edmund Hillary and Tenzing Norgay in 1953 and the first of the 15 routes to the top recognised by 1996.[234] This was, however, a route decision dictated more by politics than by design, as the Chinese border was closed to the Western world in the 1950s, after the People's Republic of China invaded Tibet.[235]
Most attempts are made during May, before the summer monsoon season. As the monsoon season approaches, the jet stream shifts northward, thereby reducing the average wind speeds high on the mountain.[236][237] While attempts are sometimes made in September and October, after the monsoons, when the jet stream is again temporarily pushed northward, the additional snow deposited by the monsoons and the less stable weather patterns at the monsoons' tail end makes climbing extremely difficult.
The ascent via the southeast ridge begins with a trek to Base Camp at 5,380 m (17,700 ft) on the south side of Everest, in Nepal. Expeditions usually fly into Lukla (2,860 m) from Kathmandu and pass through Namche Bazaar. Climbers then hike to Base Camp, which usually takes six to eight days, allowing for proper altitude acclimatisation in order to prevent altitude sickness.[238] Climbing equipment and supplies are carried by yaks, dzopkyos (yak-cow hybrids), and human porters to Base Camp on the Khumbu Glacier. When Hillary and Tenzing climbed Everest in 1953, the British expedition they were part of (comprising over 400 climbers, porters, and Sherpas at that point) started from the Kathmandu Valley, as there were no roads further east at that time.
Climbers spend a couple of weeks in Base Camp, acclimatising to the altitude. During that time, Sherpas and some expedition climbers set up ropes and ladders in the treacherous Khumbu Icefall.
Seracs, crevasses, and shifting blocks of ice make the icefall one of the most dangerous sections of the route. Many climbers and Sherpas have been killed in this section. To reduce the hazard, climbers usually begin their ascent well before dawn, when the freezing temperatures glue ice blocks in place.
Above the icefall is Camp I at 6,065 metres (19,900 ft).
From Camp I, climbers make their way up the Western Cwm to the base of the Lhotse face, where Camp II or Advanced Base Camp (ABC) is established at 6,500 m (21,300 ft). The Western Cwm is a flat, gently rising glacial valley, marked by huge lateral crevasses in the centre, which prevent direct access to the upper reaches of the Cwm. Climbers are forced to cross on the far right, near the base of Nuptse, to a small passageway known as the "Nuptse corner". The Western Cwm is also called the "Valley of Silence" as the topography of the area generally cuts off wind from the climbing route. The high altitude and a clear, windless day can make the Western Cwm unbearably hot for climbers.[239]
From Advanced Base Camp, climbers ascend the Lhotse face on fixed ropes, up to Camp III, located on a small ledge at 7,470 m (24,500 ft). From there, it is another 500 metres to Camp IV on the South Col at 7,920 m (26,000 ft).
From Camp III to Camp IV, climbers face two additional challenges: the Geneva Spur and the Yellow Band. The Geneva Spur is an anvil-shaped rib of black rock named by the 1952 Swiss expedition; fixed ropes assist climbers in scrambling over this snow-covered rock band. The Yellow Band is a section of interlayered marble, phyllite, and semischist which also requires about 100 metres of rope to traverse.[239]
On the South Col, climbers enter the death zone. Climbers making summit bids can typically endure no more than two or three days at this altitude, which is one reason clear weather and low winds are critical factors in deciding whether to make a summit attempt. If the weather does not cooperate within these few short days, climbers are forced to descend, many all the way back down to Base Camp.
From Camp IV, climbers begin their summit push around midnight, with hopes of reaching the summit (still another 1,000 metres above) within 10 to 12 hours. Climbers first reach "The Balcony" at 8,400 m (27,600 ft), a small platform where they can rest and gaze at peaks to the south and east in the early light of dawn. Continuing up the ridge, climbers are then faced with a series of imposing rock steps which usually forces them to the east into the waist-deep snow, a serious avalanche hazard. At 8,750 m (28,700 ft), a small table-sized dome of ice and snow marks the South Summit.[239]
From the South Summit, climbers follow the knife-edge southeast ridge along what is known as the "Cornice traverse", where snow clings to intermittent rock. This is the most exposed section of the climb, and a misstep to the left would send one 2,400 m (7,900 ft) down the southwest face, while to the immediate right is the 3,050 m (10,010 ft) Kangshung Face. At the end of this traverse is an imposing 12 m (39 ft) rock wall, the Hillary Step, at 8,790 m (28,840 ft).[240]
Hillary and Tenzing were the first climbers to ascend this step, and they did so using primitive ice climbing equipment and ropes. Nowadays, climbers ascend this step using fixed ropes previously set up by Sherpas. Once above the step, it is a comparatively easy climb to the top on moderately angled snow slopes—though the exposure on the ridge is extreme, especially while traversing large cornices of snow. With increasing numbers of people climbing the mountain in recent years, the Step has frequently become a bottleneck, with climbers forced to wait significant amounts of time for their turn on the ropes, leading to problems in getting climbers efficiently up and down the mountain.
After the Hillary Step, climbers also must traverse a loose and rocky section that has a large entanglement of fixed ropes that can be troublesome in bad weather. Climbers typically spend less than half an hour at the summit to allow time to descend to Camp IV before darkness sets in, to avoid serious problems with afternoon weather, or because supplemental oxygen tanks run out.
The north ridge route begins from the north side of Everest, in Tibet. Expeditions trek to the Rongbuk Glacier, setting up base camp at 5,180 m (16,990 ft) on a gravel plain just below the glacier. To reach Camp II, climbers ascend the medial moraine of the east Rongbuk Glacier up to the base of Changtse, at around 6,100 m (20,000 ft). Camp III (ABC—Advanced Base Camp) is situated below the North Col at 6,500 m (21,300 ft). To reach Camp IV on the North Col, climbers ascend the glacier to the foot of the col where fixed ropes are used to reach the North Col at 7,010 m (23,000 ft). From the North Col, climbers ascend the rocky north ridge to set up Camp V at around 7,775 m (25,500 ft). The route crosses the North Face in a diagonal climb to the base of the Yellow Band, reaching the site of Camp VI at 8,230 m (27,000 ft). From Camp VI, climbers make their final summit push.
Climbers face a treacherous traverse from the base of the First Step: ascending from 8,501 to 8,534 m (27,890 to 28,000 ft), to the crux of the climb, the Second Step, ascending from 8,577 to 8,626 m (28,140 to 28,300 ft). (The Second Step includes a climbing aid called the "Chinese ladder", a metal ladder placed semi-permanently in 1975 by a party of Chinese climbers.[241] It has been almost continuously in place since, and ladders have been used by virtually all climbers on the route.) Once above the Second Step the inconsequential Third Step is clambered over, ascending from 8,690 to 8,800 m (28,510 to 28,870 ft). Once above these steps, the summit pyramid is climbed by a snow slope of 50 degrees, to the final summit ridge along which the top is reached.[242]
The summit of Everest has been described as "the size of a dining room table".[243] It is capped with snow over ice over rock, and the layer of snow varies from year to year.[244] The rock summit is made of Ordovician limestone and is a low-grade metamorphic rock according to Montana State University.[245] (See the survey section for more on its height and on the summit rock.)
Below the summit there is an area known as "rainbow valley", filled with dead bodies still wearing brightly coloured winter gear. The zone from the summit down to about 8,000 metres is commonly called the "death zone", owing to the high danger and the low oxygen availability that results from the low pressure.[79]
Below the summit the mountain slopes downward to the three main sides, or faces, of Mount Everest: the North Face, the South-West Face, and the East/Kangshung Face.[246]
At the higher regions of Mount Everest, climbers seeking the summit typically spend substantial time within the death zone (altitudes higher than 8,000 metres (26,000 ft)), and face significant challenges to survival. Temperatures can dip to very low levels, resulting in frostbite of any body part exposed to the air. Since temperatures are so low, snow is well-frozen in certain areas and death or injury by slipping and falling can occur. High winds at these altitudes on Everest are also a potential threat to climbers.
Another significant threat to climbers is low atmospheric pressure. The atmospheric pressure at the top of Everest is about a third of sea level pressure or 0.333 standard atmospheres (337 mbar), resulting in the availability of only about a third as much oxygen to breathe.[247]
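That one-third figure is consistent with a simple barometric estimate. Below is a minimal Python sketch (illustrative only: the 270 K effective mean column temperature is an assumption chosen for the estimate, and published figures rest on more detailed atmospheric models):

    import math

    P0 = 1013.25    # sea-level pressure, mbar
    M = 0.0289644   # molar mass of dry air, kg/mol
    g = 9.80665     # gravitational acceleration, m/s^2
    R = 8.31446     # universal gas constant, J/(mol*K)
    T = 270.0       # assumed effective mean temperature of the air column, K
    h = 8848.0      # summit elevation, m

    # Isothermal barometric formula: P = P0 * exp(-M*g*h / (R*T))
    P = P0 * math.exp(-M * g * h / (R * T))
    print(f"{P:.0f} mbar, {P / P0:.2f} atm")  # ~331 mbar, roughly one third of sea level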
Debilitating effects of the death zone are so great that it takes most climbers up to 12 hours to walk the distance of 1.72 kilometres (1.07 mi) from South Col to the summit.[248] Achieving even this level of performance requires prolonged altitude acclimatisation, which takes 40–60 days for a typical expedition. A sea-level dweller exposed to the atmospheric conditions at the altitude above 8,500 m (27,900 ft) without acclimatisation would likely lose consciousness within 2 to 3 minutes.[249]
In May 2007, the Caudwell Xtreme Everest undertook a medical study of oxygen levels in human blood at extreme altitude. Over 200 volunteers climbed to Everest Base Camp where various medical tests were performed to examine blood oxygen levels. A small team also performed tests on the way to the summit.[250] Even at base camp, the low partial pressure of oxygen had direct effect on blood oxygen saturation levels. At sea level, blood oxygen saturation is generally 98–99%. At base camp, blood saturation fell to between 85 and 87%. Blood samples taken at the summit indicated very low oxygen levels in the blood. A side effect of low blood oxygen is a greatly increased breathing rate, often 80–90 breaths per minute as opposed to a more typical 20–30. Exhaustion can occur merely attempting to breathe.[251]
Lack of oxygen, exhaustion, extreme cold, and climbing hazards all contribute to the death toll. An injured person who cannot walk is in serious trouble, since rescue by helicopter is generally impractical and carrying the person off the mountain is very risky. People who die during the climb are typically left behind. As of 2006, about 150 bodies had never been recovered. It is not uncommon to find corpses near the standard climbing routes.[252]
Debilitating symptoms consistent with high altitude cerebral oedema commonly present during descent from the summit of Mount Everest. Profound fatigue and late times in reaching the summit are early features associated with subsequent death.
A 2008 study noted that the "death zone" is indeed where most Everest deaths occur, but also that most deaths occur during descent from the summit.[254] A 2014 article in The Atlantic about deaths on Everest noted that, while falling is one of the greatest dangers the death zone presents on all 8,000ers, avalanches are a more common cause of death at lower altitudes.[255] Everest climbing is nonetheless more deadly than BASE jumping, although some have combined the two: a Russian BASE-jumped off Everest in a wingsuit and survived.[256]
Despite this, Everest is by some measures safer for climbers than a number of peaks, depending on the period.[257] Examples include Kangchenjunga, K2, Annapurna, Nanga Parbat, and the Eiger (especially the Nordwand).[257] Mont Blanc has more deaths each year than Everest, with over one hundred people dying in a typical year and over eight thousand killed since records began.[258] Factors that affect a mountain's overall lethality include its popularity, the skill of those climbing it, and the difficulty of the climb.[258]
Another health hazard is retinal haemorrhages, which can damage eyesight and cause blindness.[259] Up to a quarter of Everest climbers can experience retinal haemorrhages, and although they usually heal within weeks of returning to lower altitudes, in 2010 a climber went blind and ended up dying in the death zone.[259]
At one o'clock in the afternoon, the British climber Peter Kinloch was on the roof of the world, in bright sunlight, taking photographs of the Himalayas below, "elated, cheery and bubbly". But Mount Everest is now his grave, because only minutes later, he suddenly went blind and had to be abandoned to die from the cold.
The team made a huge effort over the next 12 hours to try to get him down the mountain, but to no avail, as they were unable to get him through the difficult sections.[260] Even for the able, the Everest north-east ridge is recognised as a challenge: it is hard to rescue someone who has become incapacitated, and it can be beyond the ability of rescuers to save anyone in such a difficult spot.[260] One way around this predicament was demonstrated by two Nepali men in 2011, who had intended to paraglide off the summit; having run out of bottled oxygen and supplies, they had little choice but to go through with their plan.[261] They successfully launched off the summit and paraglided down to Namche Bazaar in just 42 minutes, without having to climb down the mountain.[261]
Most expeditions use oxygen masks and tanks above 8,000 m (26,000 ft).[262] Everest can be climbed without supplementary oxygen, but only by the most accomplished mountaineers and at increased risk. Humans' ability to think clearly is hindered by low oxygen, and the combination of extreme weather, low temperatures, and steep slopes often requires quick, accurate decisions. While about 95 percent of climbers who reach the summit use bottled oxygen, about five percent have summited Everest without it. The death rate is double for those who attempt to reach the summit without supplemental oxygen.[263] Travelling above 8,000 feet (2,400 m) altitude is a factor in cerebral hypoxia.[264] This decrease of oxygen to the brain can cause dementia and brain damage, as well as other symptoms.[265] One study found that Mount Everest may be the highest an acclimatised human could go, but also that climbers may suffer permanent neurological damage despite returning to lower altitudes.[266]
Brain cells are extremely sensitive to a lack of oxygen. Some brain cells start dying less than 5 minutes after their oxygen supply disappears. As a result, brain hypoxia can rapidly cause severe brain damage or death.
The use of bottled oxygen to ascend Mount Everest has been controversial. It was first used on the 1922 British Mount Everest Expedition by George Finch and Geoffrey Bruce who climbed up to 7,800 m (25,600 ft) at a spectacular speed of 1,000 vertical feet per hour (vf/h). Pinned down by a fierce storm, they escaped death by breathing oxygen from a jury-rigged set-up during the night. The next day they climbed to 8,100 m (26,600 ft) at 900 vf/h—nearly three times as fast as non-oxygen users. Yet the use of oxygen was considered so unsportsmanlike that none of the rest of the Alpine world recognised this high ascent rate.[citation needed]
George Mallory described the use of such oxygen as unsportsmanlike, but he later concluded that it would be impossible for him to summit without it and consequently used it on his final attempt in 1924.[267] When Tenzing and Hillary made the first successful summit in 1953, they also used open-circuit bottled oxygen sets, with the expedition's physiologist Griffith Pugh referring to the oxygen debate as a "futile controversy", noting that oxygen "greatly increases subjective appreciation of the surroundings, which after all is one of the chief reasons for climbing."[268] For the next twenty-five years, bottled oxygen was considered standard for any successful summit.
...although an acclimatised lowlander can survive for a time on the summit of Everest without supplemental oxygen, one is so close to the limit that even a modicum of excess exertion may impair brain function.
Reinhold Messner was the first climber to break the bottled oxygen tradition, making the first successful climb without it in 1978 with Peter Habeler. In 1980, Messner summited the mountain solo, without supplemental oxygen or any porters or climbing partners, on the more difficult northwest route. Once the climbing community was satisfied that the mountain could be climbed without supplemental oxygen, many purists took the next logical step of insisting that this is how it should be climbed.[25]:154
The aftermath of the 1996 disaster further intensified the debate. Jon Krakauer's Into Thin Air (1997) expressed the author's personal criticisms of the use of bottled oxygen. Krakauer wrote that the use of bottled oxygen allowed otherwise unqualified climbers to attempt to summit, leading to dangerous situations and more deaths. The disaster was partially caused by the sheer number of climbers (34 on that day) attempting to ascend, causing bottlenecks at the Hillary Step and delaying many climbers, most of whom summited after the usual 14:00 turnaround time. He proposed banning bottled oxygen except for emergency cases, arguing that this would both decrease the growing pollution on Everest—many bottles have accumulated on its slopes—and keep marginally qualified climbers off the mountain.
The 1996 disaster also introduced the issue of the guide's role in using bottled oxygen.[269]
Guide Anatoli Boukreev's decision not to use bottled oxygen was sharply criticised by Jon Krakauer. Boukreev's supporters (who include G. Weston DeWalt, who co-wrote The Climb) state that using bottled oxygen gives a false sense of security.[270] Krakauer and his supporters point out that, without bottled oxygen, Boukreev could not directly help his clients descend.[271] They state that Boukreev said that he was going down with client Martin Adams,[271] but just below the south summit, Boukreev determined that Adams was doing fine on the descent and so descended at a faster pace, leaving Adams behind. Adams states in The Climb, "For me, it was business as usual, Anatoli's going by, and I had no problems with that."[272]
The low oxygen can cause a mental fog-like impairment of cognitive abilities described as "delayed and lethargic thought process, clinically defined as bradypsychia" even after returning to lower altitudes.[273] In severe cases, climbers can experience hallucinations. Some studies have found that high-altitude climbers, including Everest climbers, experience altered brain structure.[273] The effects of high altitude on the brain, particularly if it can cause permanent brain damage, continue to be studied.[273]
Although autumn (also called the "post-monsoon season") is generally less popular than spring, Mount Everest has also been climbed in that season.[62][274] For example, in 2010 Eric Larsen and five Nepali guides summited Everest in the autumn for the first time in ten years.[274] The first mainland British ascent of Mount Everest (Hillary was from New Zealand), which was also the first ascent via a face rather than a ridge, was the autumn 1975 Southwest Face expedition led by Chris Bonington.[citation needed] The autumn season, when the monsoon ends, is regarded as more dangerous because there is typically a lot of new, potentially unstable snow.[275] However, this increased snow can make the season more popular for certain winter sports like skiing and snowboarding.[62] Two Japanese climbers also summited in October 1973.[276]
Chris Chandler and Bob Cormack summited Everest in October 1976 as part of the American Bicentennial Everest Expedition, the first Americans to make an autumn ascent of Mount Everest according to the Los Angeles Times.[277] By the 21st century, summer and autumn had become more popular for ski and snowboard attempts on Mount Everest.[278] During the 1980s, climbing in autumn was actually more popular than in spring.[279] U.S. astronaut Karl Gordon Henize died in October 1993 on an autumn expedition while conducting an experiment on radiation; the amount of background radiation increases with altitude.[280]
The mountain has also been climbed in the winter, but that is not popular because of the combination of cold high winds and shorter days.[281] By January the peak is typically battered by 170 mph (270 km/h) winds and the average temperature of the summit is around −33 °F (−36 °C).[62]
By the end of the 2010 climbing season, there had been 5,104 ascents to the summit by about 3,142 individuals.[146] Some notable "firsts" by climbers include:
Summiting Everest with disabilities such as amputations and diseases has become popular in the 21st century, with stories like that of Sudarshan Gautam, a man with no arms who made it to the top in 2013.[307] A teenager with Down's syndrome made it to Base Camp, which has become a substitute for more extreme record-breaking, as it offers many of the same thrills, including the trip to the Himalayas and the rustic scenery.[308] Danger lurks even at Base Camp, though, which was the site where dozens were killed in the 2015 Mount Everest avalanches. Others who have climbed Everest with amputations include Mark Inglis (no legs), Paul Hockey (one arm), and Arunima Sinha (one leg).
Lucy, Lady Houston, a British millionaire and former showgirl, funded the Houston Everest Flight of 1933, in which a formation of airplanes led by the Marquess of Clydesdale flew over the summit in an effort to photograph the unknown terrain.[309]
On 26 September 1988, having climbed the mountain via the south-east ridge, Jean-Marc Boivin made the first paraglider descent of Everest,[310] in the process creating the record for the fastest descent of the mountain and the highest paraglider flight. Boivin said: "I was tired when I reached the top because I had broken much of the trail, and to run at this altitude was quite hard."[311] Boivin ran 18 m (60 ft) from below the summit on 40-degree slopes to launch his paraglider, reaching Camp II at 5,900 m (19,400 ft) in 12 minutes (some sources say 11 minutes).[311][312] Boivin would not repeat this feat, as he was killed two years later in 1990, base-jumping off Venezuela's Angel Falls.[313]
In 1991 four men in two balloons achieved the first hot-air balloon flight over Mount Everest.[314] In one balloon was Andy Elson and Eric Jones (cameraman), and in the other balloon Chris Dewhirst and Leo Dickinson (cameraman).[315] Dickinson went on to write a book about the adventure called Ballooning Over Everest.[315] The hot-air balloons were modified to function at up to 40,000 feet altitude.[315] Reinhold Messner called one of Dickinson's panoramic views of Everest, captured on the now discontinued Kodak Kodachrome film, the "best snap on Earth", according to UK newspaper The Telegraph.[316] Dewhirst has offered to take passengers on a repeat of this feat for US$2.6 million per passenger.[314]
In May 2005, pilot Didier Delsalle of France landed a Eurocopter AS350 B3 helicopter on the summit of Mount Everest.[317] He needed to land for two minutes to set the Fédération Aéronautique Internationale (FAI) official record, but he stayed for about four minutes, twice.[317] In this type of landing the rotors stay engaged, which avoids relying on the snow to fully support the aircraft. The flight set rotorcraft world records, for highest of both landing and take-off.[318]
Some press reports suggested that the report of the summit landing was a misunderstanding of a South Col landing, but he had also landed on South Col two days earlier,[319] with this landing and the Everest records confirmed by the FAI.[318] Delsalle also rescued two Japanese climbers at 4,880 m (16,000 ft) while he was there. One climber noted that the new record meant a better chance of rescue.[317]
In 2011 two Nepalis paraglided from the Everest summit to Namche Bazaar in 42 minutes.[261] They had run out of oxygen and supplies, so it was a very fast way off the mountain.[261] The duo won National Geographic Adventurers of the Year for 2012 for their exploits.[320] After the paraglide they kayaked to the Indian Ocean, reaching the Bay of Bengal by the end of June 2011.[320] One had never climbed and one had never paraglided, but together they accomplished a groundbreaking feat.[320] By 2013 footage of the flight had been shown on the television news program Nightline.[321]
In 2014, a team financed and led by mountaineer Wang Jing used a helicopter to fly from South base camp to Camp 2 to avoid the Khumbu Icefall, and thence climbed to the Everest summit.[322] This climb immediately sparked outrage and controversy in much of the mountaineering world over the legitimacy and propriety of her climb.[161][323] Nepal ended up investigating Wang, who initially denied the claim that she had flown to Camp 2, admitting only that some support crew were flown to that higher camp, over the Khumbu Icefall.[324] In August 2014, however, she stated that she had flown to Camp 2 because the icefall was impassable. "If you don't fly to Camp II, you just go home," she said in an interview. In that same interview she also insisted that she had never tried to hide this fact.[161]
Her team had had to use the south side because the Chinese had denied them a permit to climb. Ultimately, the Chinese refusal may have been beneficial to Nepalese interests, allowing the government to showcase improved local hospitals and providing the opportunity for a new hybrid aviation/mountaineering style, triggering discussions about helicopter use in the mountaineering world.[161] National Geographic noted that a village festooned Wang with honours after she donated US$30,000 to the town's hospital. Wang won the International Mountaineer of the Year Award from the Nepal government in June 2014.[322]
In 2016 the increased use of helicopters was noted for improving efficiency and for hauling material over the deadly Khumbu Icefall.[325] In particular, it was noted that flights saved icefall porters 80 trips while still increasing commercial activity at Everest.[325] After many Nepalis died in the icefall in 2014, the government had wanted helicopters to handle more of the transportation to Camp 1, but the 2015 deaths and earthquake closed the mountain, so the plan was not implemented until 2016 (helicopters did, however, prove instrumental in rescuing many people in 2015).[325] That summer, Bell tested the 412EPI in a series of flights that included hovering at 18,000 feet and flying as high as 20,000 feet near Mount Everest.[326]
Climbing Mount Everest can be a relatively expensive undertaking for climbers.
Climbing gear required to reach the summit may cost in excess of US$8,000, and most climbers also use bottled oxygen, which adds around US$3,000.[citation needed] The permit to enter the Everest area from the south via Nepal costs US$10,000 to US$25,000 per person, depending on the size of the team.[citation needed] The ascent typically starts at one of the two base camps near the mountain, both of which are approximately 100 kilometres (60 mi) from Kathmandu and 300 kilometres (190 mi) from Lhasa (the two nearest cities with major airports). Transferring one's equipment from the airport to the base camp may add as much as US$2,000.[citation needed]
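Summing these rough figures gives a sense of the minimum outlay before guiding fees. The tally below is a sketch only: every number is the text's own estimate, with the permit taken at its low end.

    # Toy minimum south-side budget from the estimates quoted above (all assumptions).
    budget_usd = {
        "climbing gear":      8000,   # "in excess of US$8,000"
        "bottled oxygen":     3000,   # "adds around US$3,000"
        "permit (low end)":  10000,   # "US$10,000 to US$25,000 per person"
        "equipment transfer": 2000,   # "may add as much as US$2,000"
    }
    for item, cost in budget_usd.items():
        print(f"{item:>20}: ${cost:,}")
    print(f"{'rough total':>20}: ${sum(budget_usd.values()):,}")   # $23,000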
Going with a "celebrity guide", usually a well-known mountaineer, typically with decades of climbing experience and perhaps several Everest summits, can cost over £100,000 as of 2015.[327] On the other hand, a limited support service, offering only some meals at base camp and bureaucratic overhead like a permit, can cost as little as US$7,000 as of 2007.[132] There are issues with the management of guiding firms in Nepal: one Canadian woman, who had paid her guide firm perhaps US$40,000, was left begging for help and died in 2012 after running out of bottled oxygen following 27 hours of straight climbing. Despite decades of concern over inexperienced climbers, neither she nor the guide firm had summited Everest before. The Tibetan/Chinese side has been described as "out of control" due to reports of thefts and threats.[328] By 2015, Nepal was considering requiring that climbers have some experience, wanting both to make the mountain safer and to increase revenue.[329] One barrier to this is that low-budget firms make money by taking on inexperienced climbers, whether or not they reach the summit.[330] Those turned away by Western firms can often find another firm willing to take them for a price, even if they end up turning back soon after arriving at base camp or part way up the mountain.[330] Whereas a Western firm will convince those they deem incapable to turn back, other firms simply give people the freedom to choose.[330]
According to Jon Krakauer, the era of commercialisation of Everest started in 1985, when the summit was reached by a guided expedition led by David Breashears that included Richard Bass, a wealthy 55-year-old businessman and amateur mountain climber with only four years of climbing experience.[332] By the early 1990s, several companies were offering guided tours to the mountain. Rob Hall, one of the mountaineers who died in the 1996 disaster, had successfully guided 39 clients to the summit before that incident.[25]:24,42
By 2016, most guiding services cost between US$35,000 and US$200,000.[330] However, the services offered vary widely, and it is "buyer beware" when doing deals in Nepal, one of the poorest and least developed countries in the world.[330][333] Tourism is about four percent of Nepal's economy, but Everest is special in that an Everest porter can make nearly double the nation's average wage in a region where other sources of income are lacking.[334]
Beyond this point, costs may vary widely. It is technically possible to reach the summit with minimal additional expenses, and there are "budget" travel agencies which offer logistical support for such trips. However, this is considered difficult and dangerous (as illustrated by the case of David Sharp). Many climbers hire "full service" guide companies, which provide a wide spectrum of services, including acquisition of permits, transportation to/from base camp, food, tents, fixed ropes,[335] medical assistance while on the mountain, an experienced mountaineer guide, and even personal porters to carry one's backpack and cook one's meals. The cost of such a guide service may range from US$40,000–80,000 per person.[336] Since most equipment is moved by Sherpas, clients of full-service guide companies can often keep their backpack weights under 10 kilograms (22 lb), or hire a Sherpa to carry their backpack for them. By contrast, climbers attempting less commercialised peaks, like Denali, are often expected to carry backpacks over 30 kilograms (66 lb) and, occasionally, to tow a sled with 35 kilograms (77 lb) of gear and food.[337]
The degree of commercialisation of Mount Everest is a frequent subject of criticism.[338] Jamling Tenzing Norgay, the son of Tenzing Norgay, said in a 2003 interview that his late father would have been shocked to discover that rich thrill-seekers with no climbing experience were now routinely reaching the summit, "You still have to climb this mountain yourself with your feet. But the spirit of adventure is not there any more. It is lost. There are people going up there who have no idea how to put on crampons. They are climbing because they have paid someone $65,000. It is very selfish. It endangers the lives of others."[339]
Reinhold Messner concurred in 2004, "You could die in each climb and that meant you were responsible for yourself. We were real mountaineers: careful, aware and even afraid. By climbing mountains we were not learning how big we were. We were finding out how breakable, how weak and how full of fear we are. You can only get this if you expose yourself to high danger. I have always said that a mountain without danger is not a mountain....High altitude alpinism has become tourism and show. These commercial trips to Everest, they are still dangerous. But the guides and organisers tell clients, 'Don't worry, it's all organised.' The route is prepared by hundreds of Sherpas. Extra oxygen is available in all camps, right up to the summit. People will cook for you and lay out your beds. Clients feel safe and don't care about the risks."[340]
However, not all opinions on the subject among prominent mountaineers are strictly negative. For example, Edmund Hillary, who went on record saying that he has not liked "the commercialisation of mountaineering, particularly of Mount Everest"[341] and claimed that "Having people pay $65,000 and then be led up the mountain by a couple of experienced guides...isn't really mountaineering at all",[342] nevertheless noted that he was pleased by the changes brought to Everest area by Westerners, "I don't have any regrets because I worked very hard indeed to improve the condition for the local people. When we first went in there they didn't have any schools, they didn't have any medical facilities, all over the years we have established 27 schools, we have two hospitals and a dozen medical clinics and then we've built bridges over wild mountain rivers and put in fresh water pipelines so in cooperation with the Sherpas we've done a lot to benefit them."[343]
One of the early guided summiters, Richard Bass (of Seven Summits fame) responded in an interview about Everest climbers and what it took to survive there, "Climbers should have high altitude experience before they attempt the really big mountains. People don't realise the difference between a 20,000-foot mountain and 29,000 feet. It's not just arithmetic. The reduction of oxygen in the air is proportionate to the altitude alright, but the effect on the human body is disproportionate—an exponential curve. People climb Denali [20,320 feet] or Aconcagua [22,834 feet] and think, 'Heck, I feel great up here, I'm going to try Everest.' But it's not like that."[344]
Some climbers have reported life-threatening thefts from supply caches. Vitor Negrete, the first Brazilian to climb Everest without oxygen and part of David Sharp's party, died during his descent, and theft of gear and food from his high-altitude camp may have contributed.[345][346]
"Several members were bullied, gear was stolen, and threats were made against me and my climbing partner, Michael Kodas, making an already stressful situation even more dire", said one climber.[347]
In addition to theft, Michael Kodas describes in his book, High Crimes: The Fate of Everest in an Age of Greed (2008):[348] unethical guides and Sherpas, prostitution and gambling at the Tibet Base Camp, fraud related to the sale of oxygen bottles, and climbers collecting donations under the pretense of removing trash from the mountain.[349][350]
The Chinese side of Everest in Tibet was described as "out of control" after one Canadian had all his gear stolen and was abandoned by his Sherpa.[328] Another Sherpa helped the victim get off the mountain safely and gave him some spare gear. Other climbers have also reported missing oxygen bottles, which can be worth hundreds of dollars each. Hundreds of climbers pass by people's tents, making it hard to guard against theft.[328]
In the late 2010s, reports of the theft of oxygen bottles from camps became more common.[351] Westerners have sometimes struggled to understand the ancient culture and desperate poverty that drive some locals, some of whom have a different concept of the value of a human life.[352][353] For example, for just 1,000 rupees (£6.30) per person, several foreigners were forced to leave the lodge where they were staying, tricked into believing they were being led to safety; instead they were abandoned and died in the snowstorm.[352]
On 18 April 2014, in one of the worst disasters to ever hit the Everest climbing community up to that time, 16 Sherpas died in Nepal due to the avalanche that swept them off Mount Everest. In response to the tragedy numerous Sherpa climbing guides walked off the job and most climbing companies pulled out in respect for the Sherpa people mourning the loss.[354][355] Some still wanted to climb but there was too much controversy to continue that year.[354] One of the issues that triggered the work action by Sherpas was unreasonable client demands during climbs.[354]
Mount Everest has been host to other winter sports and adventuring besides mountaineering, including snowboarding, skiing, paragliding, and BASE jumping.
Yuichiro Miura became the first man to ski down Everest in the 1970s, descending nearly 4,200 vertical feet from the South Col before falling and sustaining serious injuries.[85] Stefan Gatt and Marco Siffredi snowboarded Mount Everest in 2001.[356] Other Everest skiers include Davo Karničar of Slovenia, who completed a top-to-south-base-camp descent in 2000; Hans Kammerlander of Italy, in 1996 on the north side;[357] and Kit DesLauriers of the United States, in 2006.[358] In 2006 Swede Tomas Olsson and Norwegian Tormod Granheim skied together down the north face. Olsson's anchor broke while they were rappelling down a cliff in the Norton Couloir at about 8,500 metres, and he fell two and a half kilometres to his death; Granheim skied down to Camp III.[359] Marco Siffredi also died, in 2002, on his second snowboarding expedition.[356]
Various types of gliding descents have slowly become more popular, and are noted for their rapid descents to lower camps. In 1986 Steve McKinney led an expedition to Mount Everest,[360] during which he became the first person to fly a hang-glider off the mountain.[312] Frenchman Jean-Marc Boivin made the first paraglider descent of Everest in September 1988, descending in minutes from the south-east ridge to a lower camp.[310] In 2011, two Nepalese made a gliding descent from the Everest summit down 5,000 metres (16,400 ft) in 45 minutes.[361] On 5 May 2013, the beverage company Red Bull sponsored Valery Rozov, who successfully BASE jumped off of the mountain while wearing a wingsuit, setting a record for world's highest BASE jump in the process.[256][362]
The southern part of Mount Everest is regarded as one of several "hidden valleys" of refuge designated by Padmasambhava, a ninth-century "lotus-born" Buddhist saint.[363]
Near the base of the north side of Everest lies Rongbuk Monastery, which has been called the "sacred threshold to Mount Everest, with the most dramatic views of the world."[364] For Sherpas living on the slopes of Everest in the Khumbu region of Nepal, Rongbuk Monastery is an important pilgrimage site, accessed in a few days of travel across the Himalayas through Nangpa La.[99]
Miyolangsangma, a Tibetan Buddhist "Goddess of Inexhaustible Giving", is believed to have lived at the top of Mt Everest. According to Sherpa Buddhist monks, Mt Everest is Miyolangsangma's palace and playground, and all climbers are only partially welcome guests, having arrived without invitation.[363]
The Sherpa people also believe that Mount Everest and its flanks are blessed with spiritual energy, and one should show reverence when passing through this sacred landscape. Here, the karmic effects of one's actions are magnified, and impure thoughts are best avoided.[363]
In 2015 the president of the Nepal Mountaineering Association warned that pollution, especially human waste, has reached critical levels. As much as "26,500 pounds of human excrement" each season is left behind on the mountain.[365] Human waste is strewn across the verges of the route to the summit, making the four sleeping areas on the route up Everest's south side minefields of human excrement. Climbers above Base Camp—for the 62-year history of climbing on the mountain—have most commonly either buried their excrement in holes they dug by hand in the snow, or slung it into crevasses, or simply defecated wherever convenient, often within meters of their tents. The only place where climbers can defecate without worrying about contaminating the mountain is Base Camp. At approximately 18,000 feet, Base Camp sees the most activity of all camps on Everest because climbers acclimate and rest there. In the late-1990s, expeditions began using toilets that they fashioned from blue plastic 50-gallon barrels fitted with a toilet seat and enclosed.[366]
The problem of human waste is compounded by the presence of more anodyne waste: spent oxygen tanks, abandoned tents, empty cans and bottles. The Nepalese government now requires each climber to pack out eight kilograms of waste when descending the mountain.[367]
In February 2019, due to the mounting waste problem, China closed the base camp on its side of Everest to visitors without climbing permits. Tourists are allowed to go as far as the Rongbuk Monastery.[368]
In April 2019, the Solukhumbu district's Khumbu Pasanglhamu Rural Municipality launched a campaign to collect nearly 10,000 kg of garbage from Everest.[369]
Mount Everest has a polar climate (Köppen EF), with all months averaging well below freezing. The temperatures usually quoted are the average lowest temperatures recorded in each month: in an average year, the lowest recorded July temperature is −18 °C and the lowest recorded January temperature is −36 °C.
Nearby peaks include Lhotse, 8,516 m (27,940 ft); Nuptse, 7,855 m (25,771 ft), and Changtse, 7,580 m (24,870 ft) among others. Another nearby peak is Khumbutse, and many of the highest mountains in the world are near Mount Everest. On the southwest side, a major feature in the lower areas is the Khumbu icefall and glacier, an obstacle to climbers on those routes but also to the base camps.
en/1919.html.txt
ADDED
@@ -0,0 +1,387 @@
1 |
+
|
2 |
+
|
3 |
+
Mount Everest (Nepali: Sagarmatha सगरमाथा; Tibetan: Chomolungma ཇོ་མོ་གླང་མ; Chinese: Zhumulangma 珠穆朗玛) is Earth's highest mountain above sea level, located in the Mahalangur Himal sub-range of the Himalayas. The China–Nepal border runs across its summit point.[5]
The current official elevation of 8,848 m (29,029 ft), recognised by China and Nepal, was established by a 1955 Indian survey and confirmed by a 1975 Chinese survey.[1]
In 1865, Everest was given its official English name by the Royal Geographical Society, as recommended by Andrew Waugh, the British Surveyor General of India, who chose the name of his predecessor in the post, Sir George Everest, despite Everest's objections.[6]
Mount Everest attracts many climbers, some of them highly experienced mountaineers. There are two main climbing routes, one approaching the summit from the southeast in Nepal (known as the "standard route") and the other from the north in Tibet. While not posing substantial technical climbing challenges on the standard route, Everest presents dangers such as altitude sickness, weather, and wind, as well as significant hazards from avalanches and the Khumbu Icefall. As of 2019[update], over 300 people have died on Everest,[7] many of whose bodies remain on the mountain.[8]
The first recorded efforts to reach Everest's summit were made by British mountaineers. As Nepal did not allow foreigners to enter the country at the time, the British made several attempts on the north ridge route from the Tibetan side. After the first reconnaissance expedition by the British in 1921 reached 7,000 m (22,970 ft) on the North Col, the 1922 expedition pushed the north ridge route up to 8,320 m (27,300 ft), marking the first time a human had climbed above 8,000 m (26,247 ft). Seven porters were killed in an avalanche on the descent from the North Col. The 1924 expedition resulted in one of the greatest mysteries on Everest to this day: George Mallory and Andrew Irvine made a final summit attempt on 8 June but never returned, sparking debate as to whether or not they were the first to reach the top. They had been spotted high on the mountain that day but disappeared in the clouds, never to be seen again, until Mallory's body was found in 1999 at 8,155 m (26,755 ft) on the north face. Tenzing Norgay and Edmund Hillary made the first official ascent of Everest in 1953, using the southeast ridge route. Norgay had reached 8,595 m (28,199 ft) the previous year as a member of the 1952 Swiss expedition. The Chinese mountaineering team of Wang Fuzhou, Gonpo, and Qu Yinhua made the first reported ascent of the peak from the north ridge on 25 May 1960.[9][10]
The Tibetan name for Everest is Qomolangma (ཇོ་མོ་གླང་མ, lit. "Holy Mother"). The name was first recorded with a Chinese transcription on the 1721 Kangxi Atlas, and then appeared as Tchoumour Lancma on a 1733 map published in Paris by the French geographer D'Anville based on the former map.[11] It is also popularly romanised as Chomolungma and (in Wylie) as Jo-mo-glang-ma.[16] The official Chinese transcription is 珠穆朗玛峰 (t 珠穆朗瑪峰), whose pinyin form is Zhūmùlǎngmǎ Fēng. It is also infrequently simply translated into Chinese as Shèngmǔ Fēng (t 聖母峰, s 圣母峰, lit. "Holy Mother Peak"). Many other local names exist, including "Deodungha" ("Holy Mountain") in Darjeeling.[17]
In the late 19th century, many European cartographers incorrectly believed that a native name for the mountain was Gaurishankar, a mountain between Kathmandu and Everest.[18]
In 1849, the British survey, wanting to preserve local names where possible (e.g., Kangchenjunga and Dhaulagiri), searched for a local name, but Waugh argued that he could not find any in common use. His search was hampered by Nepal's and Tibet's exclusion of foreigners. Waugh argued that because there were many local names, it would be difficult to favour one over all others; he decided that Peak XV should be named after British surveyor Sir George Everest, his predecessor as Surveyor General of India.[19][20][21] Everest himself opposed the name suggested by Waugh and told the Royal Geographical Society in 1857 that "Everest" could not be written in Hindi nor pronounced by "the native of India". Waugh's proposed name prevailed despite the objections, and in 1865 the Royal Geographical Society officially adopted Mount Everest as the name for the highest mountain in the world.[19] The modern pronunciation of Everest (/ˈɛvərɪst/)[22] is different from Sir George's pronunciation of his surname (/ˈiːvrɪst/ EEV-rist).[23]
In the early 1960s, the Nepalese government coined the Nepali name Sagarmāthā or Sagar-Matha[24] (सागर-मथ्था, lit. "goddess of the sky"[25]).[26]
In 1802, the British began the Great Trigonometric Survey of India to fix the locations, heights, and names of the world's highest mountains. Starting in southern India, the survey teams moved northward using giant theodolites, each weighing 500 kg (1,100 lb) and requiring 12 men to carry, to measure heights as accurately as possible. They reached the Himalayan foothills by the 1830s, but Nepal was unwilling to allow the British to enter the country due to suspicions of political aggression and possible annexation. Several requests by the surveyors to enter Nepal were turned down.[19]
The British were forced to continue their observations from Terai, a region south of Nepal which is parallel to the Himalayas. Conditions in Terai were difficult because of torrential rains and malaria. Three survey officers died from malaria while two others had to retire because of failing health.[19]
Nonetheless, in 1847, the British continued the survey and began detailed observations of the Himalayan peaks from observation stations up to 240 km (150 mi) distant. Weather restricted work to the last three months of the year. In November 1847, Andrew Waugh, the British Surveyor General of India made several observations from the Sawajpore station at the east end of the Himalayas. Kangchenjunga was then considered the highest peak in the world, and with interest, he noted a peak beyond it, about 230 km (140 mi) away. John Armstrong, one of Waugh's subordinates, also saw the peak from a site farther west and called it peak "b". Waugh would later write that the observations indicated that peak "b" was higher than Kangchenjunga, but given the great distance of the observations, closer observations were required for verification. The following year, Waugh sent a survey official back to Terai to make closer observations of peak "b", but clouds thwarted his attempts.[19]
In 1849, Waugh dispatched James Nicolson to the area, who made two observations from Jirol, 190 km (120 mi) away. Nicolson then took the largest theodolite and headed east, obtaining over 30 observations from five different locations, with the closest being 174 km (108 mi) from the peak.[19]
Nicolson retreated to Patna on the Ganges to perform the necessary calculations based on his observations. His raw data gave an average height of 9,200 m (30,200 ft) for peak "b", but this did not consider light refraction, which distorts heights. However, the number clearly indicated that peak "b" was higher than Kangchenjunga. Nicolson contracted malaria and was forced to return home without finishing his calculations. Michael Hennessy, one of Waugh's assistants, had begun designating peaks based on Roman numerals, with Kangchenjunga named Peak IX. Peak "b" now became known as Peak XV.[19]
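The scale of the refraction problem can be illustrated with the standard textbook formula for trigonometric levelling, in which a height difference is derived from a vertical angle and a horizontal distance plus a curvature-and-refraction term. This is a simplified sketch, not the Great Trigonometric Survey's actual reduction procedure, and the observation angle and refraction coefficient below are assumed values for illustration only.

```python
import math

# Simplified trigonometric levelling with the usual curvature-and-
# refraction correction:  dh = d*tan(alpha) + (1 - k) * d**2 / (2 * R).
# Not the Survey's actual reduction; the angle and the refraction
# coefficient k are assumed illustrative values, not Nicolson's data.

R = 6_371_000.0   # mean Earth radius, metres
K = 0.13          # typical atmospheric refraction coefficient (assumed)

def height_difference(distance_m, angle_deg, k=K):
    curvature_and_refraction = (1.0 - k) * distance_m**2 / (2.0 * R)
    return distance_m * math.tan(math.radians(angle_deg)) + curvature_and_refraction

d = 174_000.0  # Nicolson's closest observation distance, from the text
print(round(height_difference(d, 2.1)))         # ~8,450 m with refraction applied
print(round(height_difference(d, 2.1, k=0.0)))  # ~8,760 m if refraction is ignored
```

At 174 km the correction terms alone exceed two kilometres, which is why neglecting refraction inflated Nicolson's raw average so badly.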
In 1852, stationed at the survey headquarters in Dehradun, Radhanath Sikdar, an Indian mathematician and surveyor from Bengal was the first to identify Everest as the world's highest peak, using trigonometric calculations based on Nicolson's measurements.[27] An official announcement that Peak XV was the highest was delayed for several years as the calculations were repeatedly verified. Waugh began work on Nicolson's data in 1854, and along with his staff spent almost two years working on the numbers, having to deal with the problems of light refraction, barometric pressure, and temperature over the vast distances of the observations. Finally, in March 1856 he announced his findings in a letter to his deputy in Calcutta. Kangchenjunga was declared to be 8,582 m (28,156 ft), while Peak XV was given the height of 8,840 m (29,002 ft). Waugh concluded that Peak XV was "most probably the highest in the world".[19] Peak XV (measured in feet) was calculated to be exactly 29,000 ft (8,839.2 m) high, but was publicly declared to be 29,002 ft (8,839.8 m) in order to avoid the impression that an exact height of 29,000 feet (8,839.2 m) was nothing more than a rounded estimate.[28] Waugh is sometimes playfully credited with being "the first person to put two feet on top of Mount Everest".[29]
In 1856, Andrew Waugh announced Everest (then known as Peak XV) as 8,840 m (29,002 ft) high, after several years of calculations based on observations made by the Great Trigonometric Survey.[30] The 8,848 m (29,029 ft) height given is officially recognised by Nepal and China.[31] Nepal plans a new survey in 2019 to determine if the April 2015 Nepal earthquake affected the height of the mountain.[32]
In 1955, the elevation of 8,848 m (29,029 ft) was first determined by an Indian survey made closer to the mountain, also using theodolites.[citation needed] It was reaffirmed by a 1975 Chinese measurement of 8,848.13 m (29,029.30 ft).[33] In both cases the snow cap, not the rock head, was measured.
In May 1999 an American Everest Expedition, directed by Bradford Washburn, anchored a GPS unit into the highest bedrock. A rock head elevation of 8,850 m (29,035 ft), and a snow/ice elevation 1 m (3 ft) higher, were obtained via this device.[34] Although as of 2001 it had not been officially recognised by Nepal,[35] this figure is widely quoted. Geoid uncertainty casts doubt upon the accuracy claimed by both the 1999 and 2005 surveys (see below).[citation needed]
In 1955, a detailed photogrammetric map (at a scale of 1:50,000) of the Khumbu region, including the south side of Mount Everest, was made by Erwin Schneider as part of the 1955 International Himalayan Expedition, which also attempted Lhotse.
In the late 1980s, an even more detailed topographic map of the Everest area was made under the direction of Bradford Washburn, using extensive aerial photography.[36]
On 9 October 2005, after several months of measurement and calculation, the Chinese Academy of Sciences and State Bureau of Surveying and Mapping announced the height of Everest as 8,844.43 m (29,017.16 ft) with accuracy of ±0.21 m (8.3 in), claiming it was the most accurate and precise measurement to date.[37][38] This height is based on the highest point of rock and not the snow and ice covering it. The Chinese team measured a snow-ice depth of 3.5 m (11 ft),[33] which is in agreement with a net elevation of 8,848 m (29,029 ft). An argument arose between China and Nepal as to whether the official height should be the rock height (8,844 m, China) or the snow height (8,848 m, Nepal). In 2010, both sides agreed that the height of Everest is 8,848 m, and Nepal recognises China's claim that the rock height of Everest is 8,844 m.[39]
It is thought that the plate tectonics of the area are adding to the height and moving the summit northeastwards. Two accounts suggest the rates of change are 4 mm (0.16 in) per year (upwards) and 3 to 6 mm (0.12 to 0.24 in) per year (northeastwards),[34][40] but another account mentions more lateral movement (27 mm or 1.1 in),[41] and even shrinkage has been suggested.[42]
The summit of Everest is the point at which earth's surface reaches the greatest distance above sea level. Several other mountains are sometimes claimed to be the "tallest mountains on earth". Mauna Kea in Hawaii is tallest when measured from its base;[43] it rises over 10,200 m (33,464.6 ft) when measured from its base on the mid-ocean floor, but only attains 4,205 m (13,796 ft) above sea level.
By the same measure of base to summit, Denali, in Alaska, also known as Mount McKinley, is taller than Everest as well.[43] Despite its height above sea level of only 6,190 m (20,308 ft), Denali sits atop a sloping plain with elevations from 300 to 900 m (980 to 2,950 ft), yielding a height above base in the range of 5,300 to 5,900 m (17,400 to 19,400 ft); a commonly quoted figure is 5,600 m (18,400 ft).[44][45] By comparison, reasonable base elevations for Everest range from 4,200 m (13,800 ft) on the south side to 5,200 m (17,100 ft) on the Tibetan Plateau, yielding a height above base in the range of 3,650 to 4,650 m (11,980 to 15,260 ft).[36]
The summit of Chimborazo in Ecuador is 2,168 m (7,113 ft) farther from earth's centre (6,384.4 km (3,967.1 mi)) than that of Everest (6,382.3 km (3,965.8 mi)), because the earth bulges at the equator.[46] This is despite Chimborazo having a peak 6,268 m (20,564.3 ft) above sea level versus Mount Everest's 8,848 m (29,028.9 ft).
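This figure is easy to reproduce numerically. The sketch below assumes the WGS84 ellipsoid and approximate summit latitudes, and treats elevation as purely radial, a simplification that is adequate at this precision; it is a back-of-envelope check, not a geodetic computation.

```python
import math

# Distance from Earth's centre to each summit, approximated as the
# geocentric radius of the WGS84 ellipsoid at the summit's latitude
# plus the summit elevation (treated as radial, a small simplification).

A = 6_378_137.0  # WGS84 equatorial radius, metres
B = 6_356_752.3  # WGS84 polar radius, metres

def geocentric_radius(lat_deg):
    phi = math.radians(lat_deg)
    c, s = math.cos(phi), math.sin(phi)
    num = (A**2 * c) ** 2 + (B**2 * s) ** 2
    den = (A * c) ** 2 + (B * s) ** 2
    return math.sqrt(num / den)

for name, lat, elev_m in [("Everest", 27.99, 8_848.0),
                          ("Chimborazo", -1.47, 6_268.0)]:
    print(name, round((geocentric_radius(lat) + elev_m) / 1000, 1), "km")
# -> Everest ~6382.3 km, Chimborazo ~6384.4 km: the lower peak lies
#    farther from the centre because of the equatorial bulge.
```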
Geologists have subdivided the rocks comprising Mount Everest into three units called formations.[47][48] Each formation is separated from the other by low-angle faults, called detachments, along which they have been thrust southward over each other. From the summit of Mount Everest to its base these rock units are the Qomolangma Formation, the North Col Formation, and the Rongbuk Formation.
The Qomolangma Formation, also known as the Jolmo Lungama Formation,[49] runs from the summit to the top of the Yellow Band, about 8,600 m (28,200 ft) above sea level. It consists of greyish to dark grey or white, parallel laminated and bedded, Ordovician limestone interlayered with subordinate beds of recrystallised dolomite with argillaceous laminae and siltstone. Gansser first reported finding microscopic fragments of crinoids in this limestone.[50][51] Later petrographic analysis of samples of the limestone from near the summit revealed them to be composed of carbonate pellets and finely fragmented remains of trilobites, crinoids, and ostracods. Other samples were so badly sheared and recrystallised that their original constituents could not be determined. A thick, white-weathering thrombolite bed that is 60 m (200 ft) thick comprises the foot of the "Third Step", and base of the summit pyramid of Everest. This bed, which crops out starting about 70 m (230 ft) below the summit of Mount Everest, consists of sediments trapped, bound, and cemented by the biofilms of micro-organisms, especially cyanobacteria, in shallow marine waters. The Qomolangma Formation is broken up by several high-angle faults that terminate at the low angle normal fault, the Qomolangma Detachment. This detachment separates it from the underlying Yellow Band. The lower five metres of the Qomolangma Formation overlying this detachment are very highly deformed.[47][48][52]
The bulk of Mount Everest, between 7,000 and 8,600 m (23,000 and 28,200 ft), consists of the North Col Formation, of which the Yellow Band forms its upper part between 8,200 to 8,600 m (26,900 to 28,200 ft). The Yellow Band consists of intercalated beds of Middle Cambrian diopside-epidote-bearing marble, which weathers a distinctive yellowish brown, and muscovite-biotite phyllite and semischist. Petrographic analysis of marble collected from about 8,300 m (27,200 ft) found it to consist as much as five percent of the ghosts of recrystallised crinoid ossicles. The upper five metres of the Yellow Band lying adjacent to the Qomolangma Detachment is badly deformed. A 5–40 cm (2.0–15.7 in) thick fault breccia separates it from the overlying Qomolangma Formation.[47][48][52]
The remainder of the North Col Formation, exposed between 7,000 to 8,200 m (23,000 to 26,900 ft) on Mount Everest, consists of interlayered and deformed schist, phyllite, and minor marble. Between 7,600 and 8,200 m (24,900 and 26,900 ft), the North Col Formation consists chiefly of biotite-quartz phyllite and chlorite-biotite phyllite intercalated with minor amounts of biotite-sericite-quartz schist. Between 7,000 and 7,600 m (23,000 and 24,900 ft), the lower part of the North Col Formation consists of biotite-quartz schist intercalated with epidote-quartz schist, biotite-calcite-quartz schist, and thin layers of quartzose marble. These metamorphic rocks appear to be the result of the metamorphism of Middle to Early Cambrian deep sea flysch composed of interbedded, mudstone, shale, clayey sandstone, calcareous sandstone, graywacke, and sandy limestone. The base of the North Col Formation is a regional low-angle normal fault called the "Lhotse detachment".[47][48][52]
Below 7,000 m (23,000 ft), the Rongbuk Formation underlies the North Col Formation and forms the base of Mount Everest. It consists of sillimanite-K-feldspar grade schist and gneiss intruded by numerous sills and dikes of leucogranite ranging in thickness from 1 cm to 1,500 m (0.4 in to 4,900 ft).[48][53] These leucogranites are part of a belt of Late Oligocene–Miocene intrusive rocks known as the Higher Himalayan leucogranite. They formed as the result of partial melting of Paleoproterozoic to Ordovician high-grade metasedimentary rocks of the Higher Himalayan Sequence about 20 to 24 million years ago during the subduction of the Indian Plate.[54]
Mount Everest consists of sedimentary and metamorphic rocks that have been faulted southward over continental crust composed of Archean granulites of the Indian Plate during the Cenozoic collision of India with Asia.[55][56][57] Current interpretations argue that the Qomolangma and North Col formations consist of marine sediments that accumulated within the continental shelf of the northern passive continental margin of India before it collided with Asia. The Cenozoic collision of India with Asia subsequently deformed and metamorphosed these strata as it thrust them southward and upward.[58][59] The Rongbuk Formation consists of a sequence of high-grade metamorphic and granitic rocks that were derived from the alteration of high-grade metasedimentary rocks. During the collision of India with Asia, these rocks were thrust downward and to the north as they were overridden by other strata; heated, metamorphosed, and partially melted at depths of over 15 to 20 kilometres (9.3 to 12.4 mi) below sea level; and then forced upward to surface by thrusting towards the south between two major detachments.[60] The Himalayas are rising by about 5 mm per year.
There is very little native flora or fauna on Everest. A moss grows at 6,480 metres (21,260 ft) on Mount Everest.[61] It may be the highest altitude plant species.[61] An alpine cushion plant called Arenaria is known to grow below 5,500 metres (18,000 ft) in the region.[62] According to the study based on satellite data from 1993 to 2018, vegetation is expanding in the Everest region. Researchers have found plants in areas that were previously deemed bare.[63]
Euophrys omnisuperstes, a minute black jumping spider, has been found at elevations as high as 6,700 metres (22,000 ft), possibly making it the highest confirmed non-microscopic permanent resident on Earth. It lurks in crevices and may feed on frozen insects that have been blown there by the wind. There is a high likelihood of microscopic life at even higher altitudes.[64]
Birds, such as the bar-headed goose, have been seen flying at the higher altitudes of the mountain, while others, such as the chough, have been spotted as high as the South Col at 7,920 metres (25,980 ft).[65] Yellow-billed choughs have been seen as high as 7,900 metres (26,000 ft) and bar-headed geese migrate over the Himalayas.[66] In 1953, George Lowe (part of the expedition of Tenzing and Hillary) said that he saw bar-headed geese flying over Everest's summit.[67]
Yaks are often used to haul gear for Mount Everest climbs. They can haul 100 kg (220 pounds), have thick fur and large lungs.[62] One common piece of advice for those in the Everest region is to be on the higher ground when around yaks and other animals, as they can knock people off the mountain if standing on the downhill edge of a trail.[68] Other animals in the region include the Himalayan tahr which is sometimes eaten by the snow leopard.[69] The Himalayan black bear can be found up to about 4,300 metres (14,000 ft) and the red panda is also present in the region.[70] One expedition found a surprising range of species in the region including a pika and ten new species of ants.[71]
In 2008, a new weather station at about 8,000 m altitude (26,246 feet) went online.[75] The station's first data in May 2008 were air temperature −17 °C (1 °F), relative humidity 41.3 percent, atmospheric pressure 382.1 hPa (38.21 kPa), wind direction 262.8°, wind speed 12.8 m/s (28.6 mph, 46.1 km/h), global solar radiation 711.9 watts/m2, solar UVA radiation 30.4 W/m2.[75] The project was orchestrated by Stations at High Altitude for Research on the Environment (SHARE), which also placed the Mount Everest webcam in 2011.[75][76] The solar-powered weather station is on the South Col.[77]
One of the issues facing climbers is the frequent presence of high-speed winds.[78] The peak of Mount Everest extends into the upper troposphere and penetrates the stratosphere,[79] which can expose it to the fast and freezing winds of the jet stream.[80] In February 2004, a wind speed of 280 km/h (175 mph) was recorded at the summit and winds over 160 km/h (100 mph) are common.[78] These winds can blow climbers off Everest. Climbers typically aim for a 7- to 10-day window in the spring and fall when the Asian monsoon season is either starting up or ending and the winds are lighter. The air pressure at the summit is about one-third what it is at sea level, and by Bernoulli's principle, the winds can lower the pressure further, causing an additional 14 percent reduction in oxygen to climbers.[80] The reduction in oxygen availability comes from the reduced overall pressure, not a reduction in the ratio of oxygen to other gases.[81]
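The "one-third" figure can be reproduced from the standard-atmosphere barometric formula. Actual summit pressure varies with season and weather, so the sketch below gives only the idealised model value.

```python
# Standard-atmosphere (tropospheric) barometric formula:
#   P = P0 * (1 - L*h/T0) ** (g*M / (R*L))
# This yields the idealised model value; real summit pressure varies.

P0 = 1013.25   # sea-level standard pressure, hPa
T0 = 288.15    # sea-level standard temperature, K
L = 0.0065     # temperature lapse rate, K/m
g = 9.80665    # gravitational acceleration, m/s^2
M = 0.0289644  # molar mass of dry air, kg/mol
R = 8.31447    # universal gas constant, J/(mol*K)

def pressure_hpa(height_m):
    return P0 * (1 - L * height_m / T0) ** (g * M / (R * L))

p = pressure_hpa(8848)
print(round(p, 1), "hPa,", round(p / P0 * 100), "% of sea level")
# -> about 314 hPa, roughly 31% of sea-level pressure
```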
In the summer, the Indian monsoon brings warm wet air from the Indian Ocean to Everest's south side. During the winter, the west-southwest flowing jet stream shifts south and blows on the peak.[citation needed]
Because Mount Everest is the highest mountain in the world, it has attracted considerable attention and climbing attempts. Whether the mountain was climbed in ancient times is unknown. It may have been climbed in 1924, although this has never been confirmed, as both of the men making the attempt failed to return from the mountain. Several climbing routes have been established over several decades of climbing expeditions to the mountain.[82][83][better source needed]
Everest's first known summiting occurred in 1953, and interest by climbers increased.[84] Despite the effort and attention poured into expeditions, only about 200 people had summited by 1987.[84] Everest remained a difficult climb for decades, even for serious attempts by professional climbers and large national expeditions, which were the norm until the commercial era began in the 1990s.[85]
By March 2012, Everest had been climbed 5,656 times with 223 deaths.[86] Although lower mountains have longer or steeper climbs, Everest is so high the jet stream can hit it. Climbers can be faced with winds beyond 320 km/h (200 mph) when the weather shifts.[87] At certain times of the year the jet stream shifts north, providing periods of relative calm at the mountain.[88] Other dangers include blizzards and avalanches.[88]
By 2013, The Himalayan Database recorded 6,871 summits by 4,042 different people.[89]
In 1885, Clinton Thomas Dent, president of the Alpine Club, suggested that climbing Mount Everest was possible in his book Above the Snow Line.[90]
The northern approach to the mountain was discovered by George Mallory and Guy Bullock on the initial 1921 British Reconnaissance Expedition. It was an exploratory expedition not equipped for a serious attempt to climb the mountain. With Mallory leading (and thus becoming the first European to set foot on Everest's flanks) they climbed the North Col to an altitude of 7,005 metres (22,982 ft). From there, Mallory espied a route to the top, but the party was unprepared for the great task of climbing any further and descended.
The British returned for a 1922 expedition. George Finch climbed using oxygen for the first time. He ascended at a remarkable speed of 290 metres (951 ft) per hour and reached an altitude of 8,320 m (27,300 ft), the first time a human was reported to have climbed higher than 8,000 m. Mallory and Col. Felix Norton made a second unsuccessful attempt. Mallory was faulted[citation needed] for leading a group down from the North Col which got caught in an avalanche. Mallory was pulled down too but survived. Seven native porters were killed.
The next expedition was in 1924. The initial attempt by Mallory and Geoffrey Bruce was aborted when weather conditions prevented the establishment of Camp VI. The next attempt was that of Norton and Somervell, who climbed without oxygen and in perfect weather, traversing the North Face into the Great Couloir. Norton managed to reach 8,550 m (28,050 ft), though he ascended only 30 m (98 ft) or so in the last hour. Mallory rustled up oxygen equipment for a last-ditch effort. He chose young Andrew Irvine as his partner.
On 8 June 1924, George Mallory and Andrew Irvine made an attempt on the summit via the North Col-North Ridge-Northeast Ridge route from which they never returned. On 1 May 1999, the Mallory and Irvine Research Expedition found Mallory's body on the North Face in a snow basin below and to the west of the traditional site of Camp VI. Controversy has raged in the mountaineering community whether one or both of them reached the summit 29 years before the confirmed ascent and safe descent of Everest by Sir Edmund Hillary and Tenzing Norgay in 1953.
In 1933, Lady Houston, a British millionairess, funded the Houston Everest Flight of 1933, which saw a formation of two aircraft led by the Marquess of Clydesdale fly over the summit.[91][92][93][94]
Early expeditions—such as General Charles Bruce's in the 1920s and Hugh Ruttledge's two unsuccessful attempts in 1933 and 1936—tried to ascend the mountain from Tibet, via the North Face. Access was closed from the north to Western expeditions in 1950 after China took control of Tibet. In 1950, Bill Tilman and a small party which included Charles Houston, Oscar Houston, and Betsy Cowles undertook an exploratory expedition to Everest through Nepal along the route which has now become the standard approach to Everest from the south.[95]
The 1952 Swiss Mount Everest Expedition, led by Edouard Wyss-Dunant, was granted permission to attempt a climb from Nepal. It established a route through the Khumbu icefall and ascended to the South Col at an elevation of 7,986 m (26,201 ft). Raymond Lambert and Sherpa Tenzing Norgay were able to reach an elevation of about 8,595 m (28,199 ft) on the southeast ridge, setting a new climbing altitude record. Tenzing's experience was useful when he was hired to be part of the British expedition in 1953.[96] One reference says that no attempt at an ascent of Everest was ever under consideration in this case,[97] although John Hunt (who met the team in Zurich on their return) wrote that when the Swiss expedition "just failed" in the spring, they decided to make another post-monsoon (summit ascent) attempt in the autumn; as this was only decided in June, the second party arrived too late, when winter winds were buffeting the mountain.[98]
In 1953, a ninth British expedition, led by John Hunt, returned to Nepal. Hunt selected two climbing pairs to attempt to reach the summit. The first pair, Tom Bourdillon and Charles Evans, came within 100 m (330 ft) of the summit on 26 May 1953, but turned back after running into oxygen problems. As planned, their work in route finding and breaking trail and their oxygen caches were of great aid to the following pair. Two days later, the expedition made its second assault on the summit with the second climbing pair: the New Zealander Edmund Hillary and Tenzing Norgay, a Nepali Sherpa climber. They reached the summit at 11:30 local time on 29 May 1953 via the South Col route. At the time, both acknowledged it as a team effort by the whole expedition, but Tenzing revealed a few years later that Hillary had put his foot on the summit first.[99] They paused at the summit to take photographs and buried a few sweets and a small cross in the snow before descending.
News of the expedition's success reached London on the morning of Queen Elizabeth II's coronation, 2 June. A few days later, the Queen gave orders that Hunt (a Briton) and Hillary (a New Zealander) were to be knighted in the Order of the British Empire for the ascent.[100] Tenzing, a Nepali Sherpa who was a citizen of India, was granted the George Medal by the UK. Hunt was ultimately made a life peer in Britain, while Hillary became a founding member of the Order of New Zealand.[101] Hillary and Tenzing have also been recognised in Nepal. In 2009, statues were raised in their honor, and in 2014, Hillary Peak and Tenzing Peak were named for them.[102][103]
On 23 May 1956 Ernst Schmied and Juerg Marmet ascended.[104] This was followed by Dölf Reist and Hans-Rudolf von Gunten on 24 May 1957.[104] Wang Fuzhou, Gonpo and Qu Yinhua of China made the first reported ascent of the peak from the North Ridge on 25 May 1960.[9][10] The first American to climb Everest, Jim Whittaker, joined by Nawang Gombu, reached the summit on 1 May 1963.[105][106]
In 1970 Japanese mountaineers conducted a major expedition. The centerpiece was a large "siege"-style expedition led by Saburo Matsukata, working on finding a new route up the southwest face.[107] Another element of the expedition was an attempt to ski Mount Everest.[85] Despite a staff of over one hundred people and a decade of planning work, the expedition suffered eight deaths and failed to summit via the planned routes.[85] However, Japanese expeditions did enjoy some successes. For example, Yuichiro Miura became the first man to ski down Everest from the South Col (he descended nearly 4,200 vertical feet from the South Col before falling and sustaining serious injuries). Another success was an expedition that put four on the summit via the South Col route.[85][108][109] Miura's exploits became the subject of film, and he went on to become the oldest person to summit Mount Everest in 2003 at age 70 and again in 2013 at the age of 80.[110]
In 1975, Junko Tabei, a Japanese woman, became the first woman to summit Mount Everest.[85]
In 1978, Reinhold Messner and Peter Habeler made the first ascent of Everest without supplemental oxygen.
The Polish climber Andrzej Zawada headed the first winter ascent of Mount Everest, which was also the first winter ascent of an eight-thousander. The team of 20 Polish climbers and 4 Sherpas established a base camp on the Khumbu Glacier in early January 1980. On 15 January, the team managed to set up Camp III at 7,150 m above sea level, but further progress was stopped by hurricane-force winds. The weather improved after 11 February, when Leszek Cichy, Walenty Fiut and Krzysztof Wielicki set up Camp IV on the South Col (7,906 m). Cichy and Wielicki started the final ascent at 6:50 am on 17 February. At 2:40 pm Andrzej Zawada at base camp heard the climbers' voices over the radio – "We are on the summit! The strong wind blows all the time. It is unimaginably cold."[111][112][113][114] The successful winter ascent of Mount Everest started a new decade of winter Himalayism, which became a Polish specialisation. After 1980, Poles made ten first winter ascents of 8,000-metre peaks, earning Polish climbers a reputation as "Ice Warriors".[115][112][116][117]
In May 1989, Polish climbers under the leadership of Eugeniusz Chrobak organised an international expedition to Mount Everest on the difficult western ridge. Ten Poles and nine foreigners participated, but ultimately only the Poles remained in the attempt for the summit. On 24 May, Chrobak and Andrzej Marciniak, starting from camp V at 8,200 m, overcame the ridge and reached the summit. But on 27 May, an avalanche off the wall of Khumbutse near the Lho La pass killed four Polish climbers: Mirosław Dąsal, Mirosław Gardzielewski, Zygmunt Andrzej Heinrich and Wacław Otręba. The following day, Chrobak also died of his injuries. Marciniak, who was also injured, was saved by a rescue expedition in which Artur Hajzer and New Zealanders Gary Ball and Rob Hall took part. Among those involved in organising the rescue expedition were Reinhold Messner, Elizabeth Hawley, Carlos Carsolio and the US consul.[118]
On 10 and 11 May 1996 eight climbers died after several guided expeditions were caught in a blizzard high up on the mountain during a summit attempt on 10 May. During the 1996 season, 15 people died while climbing on Mount Everest. These were the highest death tolls for a single weather event, and for a single season, until the sixteen deaths in the 2014 Mount Everest avalanche. The guiding disaster gained wide publicity and raised questions about the commercialization of climbing and the safety of guiding clients on Mount Everest.
Journalist Jon Krakauer, on assignment from Outside magazine, was in one of the affected guided parties, and afterward published the bestseller Into Thin Air, which related his experience. Anatoli Boukreev, a guide who felt impugned by Krakauer's book, co-authored a rebuttal book called The Climb. The dispute sparked a debate within the climbing community.
In May 2004, Kent Moore, a physicist, and John L. Semple, a surgeon, both researchers from the University of Toronto, told New Scientist magazine that an analysis of weather conditions on 11 May suggested that weather caused oxygen levels to plunge approximately 14 percent.[119][120]
One of the survivors was Beck Weathers, an American client of New Zealand-based guide service Adventure Consultants. Weathers was left for dead about 275 metres (900 feet) from Camp 4 at 7,950 metres (26,085 feet). After spending a night on the mountain, Weathers managed to make it back to Camp 4 with massive frostbite and vision impaired due to snow blindness.[121] When he arrived at Camp 4, fellow climbers considered his condition terminal and left him in a tent to die overnight.[122]
Weathers' condition had not improved and an immediate descent to a lower elevation was deemed essential.[122] A helicopter rescue was out of the question: Camp 4 was higher than the rated ceiling of any available helicopter. Weathers was lowered to Camp 2. Eventually, a helicopter rescue was organised thanks to the Nepalese Army.[121][122]
The storm's impact on climbers on the North Ridge of Everest, where several climbers also died, was detailed in a first-hand account by British filmmaker and writer Matt Dickinson in his book The Other Side of Everest. Sixteen-year-old Mark Pfetzer was on the climb and wrote about it in his account, Within Reach: My Everest Story.
The 2015 feature film Everest, directed by Baltasar Kormákur, is based on the events of this guiding disaster.[123]
In 2006, twelve people died. One death in particular (see below) triggered an international debate and years of discussion about climbing ethics.[127] The season was also remembered for the rescue of Lincoln Hall, who had been left by his climbing team and declared dead, but was later discovered alive and survived being helped off the mountain.
There was an international controversy about the death of a solo British climber David Sharp, who attempted to climb Mount Everest in 2006 but died in his attempt. The story broke out of the mountaineering community into popular media, with a series of interviews, allegations, and critiques. The question was whether climbers that season had left a man to die and whether he could have been saved. He was said to have attempted to summit Mount Everest by himself with no Sherpa or guide and fewer oxygen bottles than considered normal.[128] He went with a low-budget Nepali guide firm that only provides support up to Base Camp, after which climbers go as a "loose group", offering a high degree of independence. The manager at Sharp's guide support said Sharp did not take enough oxygen for his summit attempt and did not have a Sherpa guide.[129] It is less clear who knew Sharp was in trouble, and if they did know, whether they were qualified or capable of helping him.[128]
Double-amputee climber Mark Inglis said in an interview with the press on 23 May 2006 that his climbing party, and many others, had passed Sharp on 15 May, sheltering under a rock overhang 450 metres (1,480 ft) below the summit, without attempting a rescue.[130] Inglis said 40 people had passed by Sharp, but he might have been overlooked because climbers assumed he was the corpse nicknamed "Green Boots".[131] Inglis was not aware that Turkish climbers had tried to help Sharp despite being in the process of helping an injured woman down (a Turkish climber named Burçak Poçan). There has also been some discussion about Himex in the commentary on Inglis and Sharp. Inglis later revised certain details of his initial comments because he had been interviewed while he was "...physically and mentally exhausted, and in a lot of pain. He had suffered severe frostbite – he later had five fingertips amputated." When they went through Sharp's possessions they found a receipt for US$7,490, believed to be the whole financial cost of his climb.[132] Comparatively, most expeditions cost between US$35,000 and US$100,000, plus an additional US$20,000 in other expenses ranging from gear to bonuses.[133] It was estimated that Sharp summited Mount Everest on 14 May and began his descent, but on 15 May he was in trouble while being passed by climbers on their way up and down.[134] On 15 May 2006 he is believed to have been suffering from hypoxia, about 1,000 feet from the summit on the North Side route.[134]
"Dawa from Arun Treks also gave oxygen to David and tried to help him move, repeatedly, for perhaps an hour. But he could not get David to stand alone or even stand to rest on his shoulders, and crying, Dawa had to leave him too. Even with two Sherpas, it was not going to be possible to get David down the tricky sections below."
Some climbers who left him said that rescue efforts would have been useless and would only have caused more deaths.[citation needed] Beck Weathers of the 1996 Mount Everest disaster said that those who are dying are often left behind and that he himself had been left for dead twice but was able to keep walking.[136] The Tribune of India quoted someone who described what happened to Sharp as "the most shameful act in the history of mountaineering".[137] In addition to Sharp's death, at least nine other climbers perished that year, including multiple Sherpas working for various guiding companies.[138]
"You are never on your own. There are climbers everywhere."
Much of this controversy was captured by the Discovery Channel while filming the television program Everest: Beyond the Limit. A crucial decision affecting the fate of Sharp is shown in the program, where an early-returning climber, the Lebanese adventurer Maxim Chaya, is descending from the summit and radios his base camp manager (Russell Brice) that he has found a frostbitten and unconscious climber in distress. Chaya is unable to identify Sharp, who had chosen to climb solo without any support and so had not identified himself to other climbers. The base camp manager assumes that Sharp is part of a group that has already calculated that they must abandon him, and informs his lone climber that there is no chance of him being able to help Sharp by himself. As Sharp's condition deteriorates through the day and other descending climbers pass him, his opportunities for rescue diminish: his legs and feet curl from frostbite, preventing him from walking; the later descending climbers are lower on oxygen and lack the strength to offer aid; and time runs out for any Sherpas to return and rescue him.
David Sharp's body remained just below the summit on the Chinese side next to "Green Boots"; they shared a space in a small rock cave that was an ad hoc tomb for them.[134] Sharp's body was removed from the cave in 2007, according to the BBC,[140] and since 2014, Green Boots has been missing, presumably removed or buried.[141]
As the Sharp debate kicked off on 26 May 2006, Australian climber Lincoln Hall was found alive after being left for dead the day before.[142] He was found by a party of four climbers (Dan Mazur, Andrew Brash, Myles Osborne and Jangbu Sherpa) who, giving up their own summit attempt, stayed with Hall and descended with him and a party of 11 Sherpas sent up to carry him down. Hall later fully recovered. His team assumed he had died from cerebral edema, and they were instructed to cover him with rocks.[142] There were no rocks around to do this and he was abandoned.[143] The erroneous information of his death was passed on to his family. The next day he was discovered by another party alive.[144]
I was shocked to see a guy without gloves, hat, oxygen bottles or sleeping bag at sunrise at 28,200-feet height, just sitting up there.
Lincoln greeted his fellow mountaineers with this:[144]
I imagine you are surprised to see me here.
Lincoln Hall went on to live for several more years, often giving talks about his near-death experience and rescue, before dying from unrelated medical issues in 2012 at the age of 56 (born in 1955).[144]
Heroic rescue actions have been recorded since Hall, including on 21 May 2007, when Canadian climber Meagan McGrath initiated the successful high-altitude rescue of Nepali Usha Bista. Recognising her heroic rescue, Major Meagan McGrath was selected as a 2011 recipient of the Sir Edmund Hillary Foundation of Canada Humanitarian Award, which recognises a Canadian who has personally or administratively contributed a significant service or act in the Himalayan Region of Nepal.[145]
By the end of the 2010 climbing season, there had been 5,104 ascents to the summit by about 3,142 individuals, with 77% of these ascents being accomplished since 2000.[146] The summit was achieved in 7 of the 22 years from 1953 to 1974 and was not missed between 1975 and 2014.[146] In 2007, the record number of 633 ascents was recorded, by 350 climbers and 253 sherpas.[146]
An illustration of the explosion of popularity of Everest is provided by the numbers of daily ascents. Analysis of the 1996 Mount Everest disaster shows that part of the blame was on the bottleneck caused by a large number of climbers (33 to 36) attempting to summit on the same day; this was considered unusually high at the time. By comparison, on 23 May 2010, the summit of Mount Everest was reached by 169 climbers – more summits in a single day than in the cumulative 31 years from the first successful summit in 1953 through 1983.[146]
There have been 219 fatalities recorded on Mount Everest from the 1922 British Mount Everest Expedition through the end of 2010, a rate of 4.3 fatalities for every 100 summits (this is a general rate, and includes fatalities amongst support climbers, those who turned back before the peak, those who died en route to the peak and those who died while descending from the peak). Of the 219 fatalities, 58 (26.5%) were climbers who had summited but did not complete their descent.[146] Though the rate of fatalities has decreased since the year 2000 (1.4 fatalities for every 100 summits, with 3938 summits since 2000), the significant increase in the total number of climbers still means 54 fatalities since 2000: 33 on the northeast ridge, 17 on the southeast ridge, 2 on the southwest face, and 2 on the north face.[146]
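These rates follow directly from the counts quoted here and in the paragraph on cumulative ascents above (219 fatalities against 5,104 ascents through 2010, and 54 fatalities against 3,938 ascents since 2000):

```python
# Reproducing the fatality-rate arithmetic from the figures in the text.

fatalities_total, summits_total = 219, 5104              # through end of 2010
print(round(100 * fatalities_total / summits_total, 1))  # 4.3 per 100 summits

fatalities_2000s, summits_2000s = 54, 3938               # 2000 through 2010
print(round(100 * fatalities_2000s / summits_2000s, 1))  # 1.4 per 100 summits
```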
Nearly all attempts at the summit are done using one of the two main routes. The traffic seen by each route varies from year to year. In 2005–07, more than half of all climbers elected to use the more challenging, but cheaper northeast route. In 2008, the northeast route was closed by the Chinese government for the entire climbing season, and the only people able to reach the summit from the north that year were athletes responsible for carrying the Olympic torch for the 2008 Summer Olympics.[147] The route was closed to foreigners once again in 2009 in the run-up to the 50th anniversary of the Dalai Lama's exile.[148] These closures led to declining interest in the north route, and, in 2010, two-thirds of the climbers reached the summit from the south.[146]
The 2010s were a time of new highs and lows for the mountain, with back-to-back disasters in 2014 and 2015 causing record deaths. In 2015 there were no summits for the first time in decades. However, other years set records for numbers of summits – 2013's record number of summiters, around 667, was surpassed in 2018 with around 800 summiting the peak,[149] and a subsequent record was set in 2019 with over 890 summiters.[150]
On 18 April 2014, an avalanche hit the area just below Camp 2 at around 01:00 UTC (06:30 local time), at an elevation of about 5,900 metres (19,400 ft).[158] Sixteen people, all Nepalese guides, were killed in the avalanche, and nine more were injured.[159]
During the season, a 13-year-old girl, Malavath Purna, reached the summit, becoming the youngest female climber to do so.[160] Additionally, one team used a helicopter to fly from south base camp to Camp 2 to avoid the Khumbu Icefall, then reached the Everest summit; this team had to use the south side because the Chinese had denied them a permit to climb. One of its members later donated tens of thousands of dollars to local hospitals[citation needed] and was named the Nepalese "International Mountaineer of the Year".[161]
Over 100 people summited Everest from China (the Tibet side), and six from Nepal, in the 2014 season.[162] These included 72-year-old Bill Burke, the Indian teenage girl, and a Chinese woman, Jing Wang.[163] Another teenage girl summiter was Ming Kipa Sherpa, who had summited with her elder sister Lhakpa Sherpa in 2003, when Lhakpa held the record for the most summits of Mount Everest by a woman.[164] (See also Santosh Yadav.)
2015 was set to be a record-breaking season of climbs, with hundreds of permits issued in Nepal and many additional permits in Tibet (China). However, on 25 April 2015, an earthquake measuring 7.8 Mw triggered an avalanche that hit Everest Base Camp,[165] effectively shutting down the Everest climbing season.[166] 18 bodies were recovered from Mount Everest by the Indian Army mountaineering team.[167] The avalanche began on Pumori,[168] moved through the Khumbu Icefall on the southwest side of Mount Everest, and slammed into the South Base Camp.[169] 2015 was the first time since 1974 with no spring summits, as all climbing teams pulled out after the quakes and avalanche.[170][171] One of the reasons for this was the high probability of aftershocks (over 50 percent according to the USGS).[172] Just weeks after the first quake, the region was rattled again by a 7.3 magnitude quake and there were also many considerable aftershocks.[173]
The quakes trapped hundreds of climbers above the Khumbu icefall, and they had to be evacuated by helicopter as they ran low on supplies.[174] The quake shifted the route through the icefall, making it essentially impassable to climbers.[174] Bad weather also made helicopter evacuation difficult.[174] The Everest tragedy was small compared to the overall impact on Nepal, with almost nine thousand dead[175][176] and about 22,000 injured.[175] In Tibet, by 28 April at least 25 had died, and 117 were injured.[177] By 29 April 2015, the Tibet Mountaineering Association (north/Chinese side) closed Everest and other peaks to climbing, stranding 25 teams and about 300 people on the north side of Everest.[178] On the south side, helicopters evacuated 180 people trapped at Camps 1 and 2.[179]
On 24 August 2015 Nepal re-opened Everest to tourism including mountain climbers.[180] The only climber permit for the autumn season was awarded to Japanese climber Nobukazu Kuriki, who had tried four times previously to summit Everest without success. He made his fifth attempt in October, but had to give up just 700 m (2,300 ft) from the summit due to "strong winds and deep snow".[181][182] Kuriki noted the dangers of climbing Everest, having himself survived being stuck in a freezing snow hole for two days near the top, which came at the cost of all his fingertips and his thumb, lost to frostbite, which added further difficulty to his climb.[183]
Some sections of the trail from Lukla to Everest Base Camp (Nepal) were damaged in the earthquakes earlier in the year and needed repairs to handle trekkers.[184]
Hawley's database records that 641 climbers made it to the summit in early 2016.[185]
2017 was the biggest season yet, permit-wise, yielding hundreds of summiters and a handful of deaths.[186] On 27 May 2017, Kami Rita Sherpa made his 21st climb to the summit with the Alpine Ascents Everest Expedition, making him one of only three people, along with Apa Sherpa and Phurba Tashi Sherpa, to have reached the summit of Mount Everest 21 times.[187][188] The season had a tragic start with the death of Ueli Steck of Switzerland, who died from a fall during a warm-up climb.[189] There was continued discussion about the nature of possible changes to the Hillary Step.[190] The total for 2017 was tallied at 648 summiters:[156] 449 summited via Nepal (from the south) and 120 from Chinese Tibet (the north side).[191]
In 2018, 807 climbers summited Mount Everest,[192] including 563 on the Nepal side and 240 from the Chinese Tibet side.[149] This broke the previous record for total summits in a year, 667 in 2013; one factor that aided the record was an especially long and clear weather window of 11 days during the critical spring climbing season.[149][193][157] Various records were broken, including the extraordinary case of a 70-year-old double amputee who undertook his climb after winning a court case in the Nepali Supreme Court.[149] There were no major disasters, but seven climbers died in various situations, including several Sherpas as well as international climbers.[149] Although record numbers of climbers reached the summit, old-time summiters who had made expeditions in the 1980s lamented the crowding, feces, and cost.[193]
Himalayan record keeper Elizabeth Hawley died in late January 2018.[194]
Figures for the number of permits issued by Nepal range from 347[195] to 375.[196]
The spring or pre-monsoon window of 2019 witnessed the deaths of a number of climbers, as well as the worldwide publication of images of hundreds of mountaineers queuing to reach the summit; sensational media reports of climbers stepping over dead bodies dismayed people around the world.[199][200][201][202]
There were reports of various winter expeditions in the Himalayas, including K2, Nanga Parbat, and Meru, with the buzz for Everest 2019 beginning just 14 weeks before the weather window.[203] Noted climber Cory Richards announced on Twitter that he was hoping to establish a new climbing route to the summit in 2019.[203] Also announced was an expedition to re-measure the height of Everest, particularly in light of the 2015 earthquakes.[204][205] China closed the base camp on the northern side of Mount Everest to those without climbing permits in February 2019.[206] By early April, climbing teams from around the world were arriving for the 2019 spring climbing season.[157] Among the teams was a scientific expedition with a planned study of pollution, and of how factors like snow and vegetation influence the availability of food and water in the region.[207] In the 2019 spring mountaineering season, there were roughly 40 teams with almost 400 climbers and several hundred guides attempting to summit on the Nepali side.[208][209][210] Nepal issued 381 climbing permits for 2019.[192] For the northern routes in Chinese Tibet, several hundred more permits were issued by authorities there.[211]
In May 2019, Nepali mountaineering guide Kami Rita summited Mount Everest twice within a week, his 23rd and 24th ascents, making international news headlines.[212][208][209] He first summited Everest in 1994, and has summited several other extremely high mountains, such as K2 and Lhotse.[208][209][210][212]
By 23 May 2019, about seven people had died, possibly due to crowding leading to delays high on the mountain, and shorter weather windows.[192] One 19-year-old who summited previously noted that when the weather window opens (the high winds calm down), long lines form as everyone rushes to get the top and back down.[213] In Chinese Tibet, one Austrian climber died from a fall,[192] and by 26 May 2019 the overall number of deaths for the spring climbing season rose to 10.[214][215][216] By 28 May, the death toll increased to 11 when a climber died at about 26,000 feet during the descent,[197] and a 12th climber missing and presumed dead.[198] Despite the number of deaths, reports indicated that a record 891 climbers summited in the spring 2019 climbing season.[217][150]
Although China has had various permit restrictions, and Nepal requires a doctor to sign off on climbing permits,[217] the natural dangers of climbing such as falls and avalanches combined with medical issues aggravated by Everest's extreme altitude led to 2019 being a year with a comparatively high death toll.[217] In light of this, the government of Nepal is now considering reviewing its existing laws on expeditions to the mountain peak.[218][219]
Regardless of the number of permits, the weather window affects the timing of when climbers head to the summit.[220] The 2018 season had a similar number of climbers but an 11-day stretch of calm weather, while in 2019 the weather windows were shorter and broken up by periods of bad weather.[220][221]
By 31 May 2019, the climbing season was thought to be all but concluded, as shifting weather patterns and increased wind speed make summiting more difficult.[222] The next window is generally when the monsoon season ends in September; however, this is a less popular option as the monsoon leaves large amounts of snow on the mountain, thus increasing the danger of avalanches,[223] which historically account for about one quarter of Everest fatalities.[150]
In Nepal, 334 climbing permits were issued in 2014; after the mountain was closed that year, they were extended until 2019.[225] In 2015 there were 357 permits to climb Everest, but the mountain was closed again because of the avalanche and earthquake, and these permits were given only a two-year extension, to 2017 (not to 2019 as with the 2014 permits).[226][225]
In 2017, a climber who tried to evade the $11,000 permit fee faced, among other penalties, a $22,000 fine, bans, and a possible four years in jail after he was caught (he had made it up past the Khumbu Icefall). In the end he was given a 10-year mountaineering ban in Nepal and allowed to return home.[227]
The number of permits issued each year by Nepal is listed below.[226][228]
The Chinese side in Tibet is also managed with permits for summiting Everest.[229] They did not issue permits in 2008, due to the Olympic torch relay being taken to the summit of Mount Everest.[230]
In March 2020, the governments of China and Nepal announced a cancellation of all climbing permits for Mount Everest due to the COVID-19 pandemic.[231][232] In April 2020, a group of Chinese mountaineers began an expedition from the Chinese side. The mountain remained closed on the Chinese side to all foreign climbers.[233]
Mount Everest has two main climbing routes, the southeast ridge from Nepal and the north ridge from Tibet, as well as many other less frequently climbed routes.[234] Of the two main routes, the southeast ridge is technically easier and more frequently used. It was the route used by Edmund Hillary and Tenzing Norgay in 1953 and, of the 15 routes to the top established by 1996, it was the first to be recognised.[234] This was, however, a route decision dictated more by politics than by design, as the Chinese border was closed to the western world in the 1950s, after the People's Republic of China invaded Tibet.[235]
Most attempts are made during May, before the summer monsoon season. As the monsoon season approaches, the jet stream shifts northward, thereby reducing the average wind speeds high on the mountain.[236][237] While attempts are sometimes made in September and October, after the monsoons, when the jet stream is again temporarily pushed northward, the additional snow deposited by the monsoons and the less stable weather patterns at the monsoons' tail end makes climbing extremely difficult.
The ascent via the southeast ridge begins with a trek to Base Camp at 5,380 m (17,700 ft) on the south side of Everest, in Nepal. Expeditions usually fly into Lukla (2,860 m) from Kathmandu and pass through Namche Bazaar. Climbers then hike to Base Camp, which usually takes six to eight days, allowing for proper altitude acclimatisation in order to prevent altitude sickness.[238] Climbing equipment and supplies are carried by yaks, dzopkyos (yak-cow hybrids), and human porters to Base Camp on the Khumbu Glacier. When Hillary and Tenzing climbed Everest in 1953, the British expedition they were part of (comprising over 400 climbers, porters, and Sherpas at that point) started from the Kathmandu Valley, as there were no roads further east at that time.
Climbers spend a couple of weeks in Base Camp, acclimatising to the altitude. During that time, Sherpas and some expedition climbers set up ropes and ladders in the treacherous Khumbu Icefall.
Seracs, crevasses, and shifting blocks of ice make the icefall one of the most dangerous sections of the route. Many climbers and Sherpas have been killed in this section. To reduce the hazard, climbers usually begin their ascent well before dawn, when the freezing temperatures glue ice blocks in place.
Above the icefall is Camp I at 6,065 metres (19,900 ft).
From Camp I, climbers make their way up the Western Cwm to the base of the Lhotse face, where Camp II or Advanced Base Camp (ABC) is established at 6,500 m (21,300 ft). The Western Cwm is a flat, gently rising glacial valley, marked by huge lateral crevasses in the centre, which prevent direct access to the upper reaches of the Cwm. Climbers are forced to cross on the far right, near the base of Nuptse, to a small passageway known as the "Nuptse corner". The Western Cwm is also called the "Valley of Silence" as the topography of the area generally cuts off wind from the climbing route. The high altitude and a clear, windless day can make the Western Cwm unbearably hot for climbers.[239]
From Advanced Base Camp, climbers ascend the Lhotse face on fixed ropes, up to Camp III, located on a small ledge at 7,470 m (24,500 ft). From there, it is another 500 metres to Camp IV on the South Col at 7,920 m (26,000 ft).
From Camp III to Camp IV, climbers are faced with two additional challenges: the Geneva Spur and the Yellow Band. The Geneva Spur is an anvil-shaped rib of black rock named by the 1952 Swiss expedition. Fixed ropes assist climbers in scrambling over this snow-covered rock band. The Yellow Band is a section of interlayered marble, phyllite, and semischist which also requires about 100 metres of rope to traverse.[239]
On the South Col, climbers enter the death zone. Climbers making summit bids typically can endure no more than two or three days at this altitude. That's one reason why clear weather and low winds are critical factors in deciding whether to make a summit attempt. If the weather does not cooperate within these short few days, climbers are forced to descend, many all the way back down to Base Camp.
From Camp IV, climbers begin their summit push around midnight, with hopes of reaching the summit (still another 1,000 metres above) within 10 to 12 hours. Climbers first reach "The Balcony" at 8,400 m (27,600 ft), a small platform where they can rest and gaze at peaks to the south and east in the early light of dawn. Continuing up the ridge, climbers are then faced with a series of imposing rock steps which usually forces them to the east into the waist-deep snow, a serious avalanche hazard. At 8,750 m (28,700 ft), a small table-sized dome of ice and snow marks the South Summit.[239]
From the South Summit, climbers follow the knife-edge southeast ridge along what is known as the "Cornice traverse", where snow clings to intermittent rock. This is the most exposed section of the climb, and a misstep to the left would send one 2,400 m (7,900 ft) down the southwest face, while to the immediate right is the 3,050 m (10,010 ft) Kangshung Face. At the end of this traverse is an imposing 12 m (39 ft) rock wall, the Hillary Step, at 8,790 m (28,840 ft).[240]
Hillary and Tenzing were the first climbers to ascend this step, and they did so using primitive ice climbing equipment and ropes. Nowadays, climbers ascend this step using fixed ropes previously set up by Sherpas. Once above the step, it is a comparatively easy climb to the top on moderately angled snow slopes—though the exposure on the ridge is extreme, especially while traversing large cornices of snow. With increasing numbers of people climbing the mountain in recent years, the Step has frequently become a bottleneck, with climbers forced to wait significant amounts of time for their turn on the ropes, leading to problems in getting climbers efficiently up and down the mountain.
After the Hillary Step, climbers also must traverse a loose and rocky section that has a large entanglement of fixed ropes that can be troublesome in bad weather. Climbers typically spend less than half an hour at the summit to allow time to descend to Camp IV before darkness sets in, to avoid serious problems with afternoon weather, or because supplemental oxygen tanks run out.
The north ridge route begins from the north side of Everest, in Tibet. Expeditions trek to the Rongbuk Glacier, setting up base camp at 5,180 m (16,990 ft) on a gravel plain just below the glacier. To reach Camp II, climbers ascend the medial moraine of the east Rongbuk Glacier up to the base of Changtse, at around 6,100 m (20,000 ft). Camp III (ABC—Advanced Base Camp) is situated below the North Col at 6,500 m (21,300 ft). To reach Camp IV on the North Col, climbers ascend the glacier to the foot of the col where fixed ropes are used to reach the North Col at 7,010 m (23,000 ft). From the North Col, climbers ascend the rocky north ridge to set up Camp V at around 7,775 m (25,500 ft). The route crosses the North Face in a diagonal climb to the base of the Yellow Band, reaching the site of Camp VI at 8,230 m (27,000 ft). From Camp VI, climbers make their final summit push.
Climbers face a treacherous traverse from the base of the First Step: ascending from 8,501 to 8,534 m (27,890 to 28,000 ft), to the crux of the climb, the Second Step, ascending from 8,577 to 8,626 m (28,140 to 28,300 ft). (The Second Step includes a climbing aid called the "Chinese ladder", a metal ladder placed semi-permanently in 1975 by a party of Chinese climbers.[241] It has been almost continuously in place since, and ladders have been used by virtually all climbers on the route.) Once above the Second Step the inconsequential Third Step is clambered over, ascending from 8,690 to 8,800 m (28,510 to 28,870 ft). Once above these steps, the summit pyramid is climbed by a snow slope of 50 degrees, to the final summit ridge along which the top is reached.[242]
The summit of Everest has been described as "the size of a dining room table".[243] The summit is capped with snow over ice over rock, and the layer of snow varies from year to year.[244] The rock summit is made of Ordovician limestone and is a low-grade metamorphic rock according to Montana State University.[245] (See the survey section for more on the summit's height and its rock.)
Below the summit there is an area known as "rainbow valley", filled with dead bodies still wearing brightly coloured winter gear. From the summit down to about 8,000 metres stretches the area commonly called the "death zone", so named for its high danger and the low oxygen level resulting from the low atmospheric pressure.[79]
Below the summit the mountain slopes downward to the three main sides, or faces, of Mount Everest: the North Face, the South-West Face, and the East/Kangshung Face.[246]
At the higher regions of Mount Everest, climbers seeking the summit typically spend substantial time within the death zone (altitudes higher than 8,000 metres (26,000 ft)), and face significant challenges to survival. Temperatures can dip to very low levels, resulting in frostbite of any body part exposed to the air. Since temperatures are so low, snow is well-frozen in certain areas and death or injury by slipping and falling can occur. High winds at these altitudes on Everest are also a potential threat to climbers.
Another significant threat to climbers is low atmospheric pressure. The atmospheric pressure at the top of Everest is about a third of sea level pressure or 0.333 standard atmospheres (337 mbar), resulting in the availability of only about a third as much oxygen to breathe.[247]
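The one-third figure can be checked with the simplified isothermal barometric formula. The sketch below is illustrative only and not from the article; the scale height of roughly 7,600 m is an assumed round value for a cold atmosphere, and real summit pressure also varies with season.

import math

SEA_LEVEL_HPA = 1013.25    # standard sea-level pressure
SCALE_HEIGHT_M = 7600.0    # assumed atmospheric scale height for cold air
SUMMIT_M = 8848.0          # elevation of the summit

# p(h) = p0 * exp(-h / H), the isothermal barometric formula
pressure = SEA_LEVEL_HPA * math.exp(-SUMMIT_M / SCALE_HEIGHT_M)
print(f"{pressure:.0f} hPa, {pressure / SEA_LEVEL_HPA:.2f} of sea level")
# prints roughly "316 hPa, 0.31 of sea level", close to the ~337 mbar cited above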
Debilitating effects of the death zone are so great that it takes most climbers up to 12 hours to walk the distance of 1.72 kilometres (1.07 mi) from South Col to the summit.[248] Achieving even this level of performance requires prolonged altitude acclimatisation, which takes 40–60 days for a typical expedition. A sea-level dweller exposed to the atmospheric conditions at the altitude above 8,500 m (27,900 ft) without acclimatisation would likely lose consciousness within 2 to 3 minutes.[249]
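The pace these figures imply is easy to check with a line of arithmetic; in the sketch below, the sea-level comparison speed is an assumed typical walking pace, not a figure from this article.

distance_m = 1720   # South Col to summit, per the text
hours = 12          # typical time cited above
print(f"{distance_m / hours:.0f} m per hour")
# about 143 m/h, versus a typical ~5,000 m/h walking pace at sea level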
In May 2007, the Caudwell Xtreme Everest undertook a medical study of oxygen levels in human blood at extreme altitude. Over 200 volunteers climbed to Everest Base Camp where various medical tests were performed to examine blood oxygen levels. A small team also performed tests on the way to the summit.[250] Even at base camp, the low partial pressure of oxygen had direct effect on blood oxygen saturation levels. At sea level, blood oxygen saturation is generally 98–99%. At base camp, blood saturation fell to between 85 and 87%. Blood samples taken at the summit indicated very low oxygen levels in the blood. A side effect of low blood oxygen is a greatly increased breathing rate, often 80–90 breaths per minute as opposed to a more typical 20–30. Exhaustion can occur merely attempting to breathe.[251]
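A standard physiology formula, not part of the Caudwell study itself, helps explain why saturation falls: the oxygen pressure reaching the lungs is the oxygen fraction of air times barometric pressure less the water-vapour pressure of the airways. A minimal sketch follows; the summit value of ~253 mmHg is simply the ~337 mbar cited earlier converted to mmHg.

FIO2 = 0.2095        # oxygen fraction of dry air, the same at any altitude
PH2O_MMHG = 47.0     # water-vapour pressure at body temperature

def inspired_po2(barometric_mmhg):
    """Partial pressure of oxygen entering the lungs, in mmHg."""
    return FIO2 * (barometric_mmhg - PH2O_MMHG)

print(f"sea level (760 mmHg): {inspired_po2(760):.0f} mmHg")   # ~149 mmHg
print(f"summit (~253 mmHg):   {inspired_po2(253):.0f} mmHg")   # ~43 mmHg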
Lack of oxygen, exhaustion, extreme cold, and climbing hazards all contribute to the death toll. An injured person who cannot walk is in serious trouble, since rescue by helicopter is generally impractical and carrying the person off the mountain is very risky. People who die during the climb are typically left behind. As of 2006, about 150 bodies had never been recovered. It is not uncommon to find corpses near the standard climbing routes.[252]
Debilitating symptoms consistent with high altitude cerebral oedema commonly present during descent from the summit of Mount Everest. Profound fatigue and late times in reaching the summit are early features associated with subsequent death.
A 2008 study noted that the "death zone" is indeed where most Everest deaths occur, but also noted that most deaths occur during descent from the summit.[254] A 2014 article in The Atlantic about deaths on Everest noted that while falling is one of the greatest dangers the death zone presents for all 8000ers, avalanches are a more common cause of death at lower altitudes.[255] Everest climbing is nonetheless more deadly than BASE jumping, although some have combined the two, including a Russian who BASE-jumped off Everest in a wingsuit and survived.[256]
Despite this, Everest is safer for climbers than a number of peaks by some measurements, but it depends on the period.[257] Some examples are Kangchenjunga, K2, Annapurna, Nanga Parbat, and the Eiger (especially the nordwand).[257] Mont Blanc has more deaths each year than Everest, with over one hundred dying in a typical year and over eight thousand killed since records were kept.[258] Some factors that affect total mountain lethality include the level of popularity of the mountain, the skill of those climbing, and the difficulty of the climb.[258]
Another health hazard is retinal haemorrhages, which can damage eyesight and cause blindness.[259] Up to a quarter of Everest climbers can experience retinal haemorrhages, and although they usually heal within weeks of returning to lower altitudes, in 2010 a climber went blind and ended up dying in the death zone.[259]
At one o'clock in the afternoon, the British climber Peter Kinloch was on the roof of the world, in bright sunlight, taking photographs of the Himalayas below, "elated, cheery and bubbly". But Mount Everest is now his grave, because only minutes later, he suddenly went blind and had to be abandoned to die from the cold.
The team made a huge effort for the next 12 hours to try to get him down the mountain, but to no avail, as they were unsuccessful in getting him through the difficult sections.[260] Even for the able, the Everest North-East Ridge is recognised as a challenge. It is hard to rescue someone who has become incapacitated, and it can be beyond the ability of rescuers to save anyone in such a difficult spot.[260] One way around this situation was pioneered by two Nepali men in 2011, who had intended to paraglide off the summit; having run out of bottled oxygen and supplies, they had little choice but to go through with their plan.[261] They successfully launched off the summit and paraglided down to Namche Bazaar in just 42 minutes, without having to climb down the mountain.[261]
Most expeditions use oxygen masks and tanks above 8,000 m (26,000 ft).[262] Everest can be climbed without supplementary oxygen, but only by the most accomplished mountaineers and at increased risk. Humans' ability to think clearly is hindered by low oxygen, and the combination of extreme weather, low temperatures, and steep slopes often requires quick, accurate decisions. While about 95 percent of climbers who reach the summit use bottled oxygen in order to reach the top, about five percent of climbers have summited Everest without supplemental oxygen. The death rate is double for those who attempt to reach the summit without supplemental oxygen.[263] Travelling above 8,000 feet altitude is a factor in cerebral hypoxia.[264] This decrease of oxygen to the brain can cause dementia and brain damage, as well as other symptoms.[265] One study found that Mount Everest may be the highest an acclimatised human could go, but also found that climbers may suffer permanent neurological damage despite returning to lower altitudes.[266]
Brain cells are extremely sensitive to a lack of oxygen. Some brain cells start dying less than 5 minutes after their oxygen supply disappears. As a result, brain hypoxia can rapidly cause severe brain damage or death.
The use of bottled oxygen to ascend Mount Everest has been controversial. It was first used on the 1922 British Mount Everest Expedition by George Finch and Geoffrey Bruce who climbed up to 7,800 m (25,600 ft) at a spectacular speed of 1,000 vertical feet per hour (vf/h). Pinned down by a fierce storm, they escaped death by breathing oxygen from a jury-rigged set-up during the night. The next day they climbed to 8,100 m (26,600 ft) at 900 vf/h—nearly three times as fast as non-oxygen users. Yet the use of oxygen was considered so unsportsmanlike that none of the rest of the Alpine world recognised this high ascent rate.[citation needed]
George Mallory described the use of such oxygen as unsportsmanlike, but he later concluded that it would be impossible for him to summit without it and consequently used it on his final attempt in 1924.[267] When Tenzing and Hillary made the first successful summit in 1953, they also used open-circuit bottled oxygen sets, with the expedition's physiologist Griffith Pugh referring to the oxygen debate as a "futile controversy", noting that oxygen "greatly increases subjective appreciation of the surroundings, which after all is one of the chief reasons for climbing."[268] For the next twenty-five years, bottled oxygen was considered standard for any successful summit.
...although an acclimatised lowlander can survive for a time on the summit of Everest without supplemental oxygen, one is so close to the limit that even a modicum of excess exertion may impair brain function.
Reinhold Messner was the first climber to break the bottled oxygen tradition, and in 1978, with Peter Habeler, he made the first successful climb without it. In 1980, Messner summited the mountain solo, without supplemental oxygen or any porters or climbing partners, on the more difficult northwest route. Once the climbing community was satisfied that the mountain could be climbed without supplemental oxygen, many purists then took the next logical step of insisting that this is how it should be climbed.[25]:154
The aftermath of the 1996 disaster further intensified the debate. Jon Krakauer's Into Thin Air (1997) expressed the author's personal criticisms of the use of bottled oxygen. Krakauer wrote that the use of bottled oxygen allowed otherwise unqualified climbers to attempt to summit, leading to dangerous situations and more deaths. The disaster was partially caused by the sheer number of climbers (34 on that day) attempting to ascend, causing bottlenecks at the Hillary Step and delaying many climbers, most of whom summited after the usual 14:00 turnaround time. He proposed banning bottled oxygen except for emergency cases, arguing that this would both decrease the growing pollution on Everest (many bottles have accumulated on its slopes) and keep marginally qualified climbers off the mountain.
The 1996 disaster also introduced the issue of the guide's role in using bottled oxygen.[269]
Guide Anatoli Boukreev's decision not to use bottled oxygen was sharply criticised by Jon Krakauer. Boukreev's supporters (who include G. Weston DeWalt, who co-wrote The Climb) state that using bottled oxygen gives a false sense of security.[270] Krakauer and his supporters point out that, without bottled oxygen, Boukreev could not directly help his clients descend.[271] They state that Boukreev said that he was going down with client Martin Adams,[271] but just below the south summit, Boukreev determined that Adams was doing fine on the descent and so descended at a faster pace, leaving Adams behind. Adams states in The Climb, "For me, it was business as usual, Anatoli's going by, and I had no problems with that."[272]
The low oxygen can cause a mental fog-like impairment of cognitive abilities described as "delayed and lethargic thought process, clinically defined as bradypsychia" even after returning to lower altitudes.[273] In severe cases, climbers can experience hallucinations. Some studies have found that high-altitude climbers, including Everest climbers, experience altered brain structure.[273] The effects of high altitude on the brain, particularly if it can cause permanent brain damage, continue to be studied.[273]
Although generally less popular than spring, Mount Everest has also been climbed in the autumn (also called the "post-monsoon season").[62][274] For example, in 2010 Eric Larsen and five Nepali guides summited Everest in the autumn for the first time in ten years.[274] The first mainland British ascent of Mount Everest (Hillary was from New Zealand), which was also the first ascent via a face rather than a ridge, was the autumn 1975 Southwest Face expedition led by Chris Bonington.[citation needed] The autumn season, when the monsoon ends, is regarded as more dangerous because there is typically a lot of new snow which can be unstable.[275] However, this increased snow can make it more popular with certain winter sports like skiing and snowboarding.[62] Two Japanese climbers also summited in October 1973.[276]
Chris Chandler and Bob Cormack summited Everest in October 1976 as part of the American Bicentennial Everest Expedition that year, the first Americans to make an autumn ascent of Mount Everest according to the Los Angeles Times.[277] By the 21st century, summer and autumn had become more popular for skiing and snowboarding attempts on Mount Everest.[278] During the 1980s, climbing in autumn was actually more popular than in spring.[279] U.S. astronaut Karl Gordon Henize died in October 1993 on an autumn expedition, conducting an experiment on radiation. The amount of background radiation increases with higher altitudes.[280]
The mountain has also been climbed in the winter, but that is not popular because of the combination of cold high winds and shorter days.[281] By January the peak is typically battered by 170 mph (270 km/h) winds and the average temperature of the summit is around −33 °F (−36 °C).[62]
By the end of the 2010 climbing season, there had been 5,104 ascents to the summit by about 3,142 individuals.[146] Some notable "firsts" by climbers include:
Summiting Everest with disabilities such as amputations and diseases has become popular in the 21st century, with stories like that of Sudarshan Gautam, a man with no arms who made it to the top in 2013.[307] A teenager with Down's syndrome made it to Base Camp; the trek there has become a substitute for more extreme record-breaking, since it offers many of the same attractions, including the journey to the Himalayas and the rugged scenery.[308] Danger lurks even at Base Camp, though, which was the site where dozens were killed in the 2015 Mount Everest avalanches. Others who have climbed Everest with amputations include Mark Inglis (no legs), Paul Hockey (one arm only), and Arunima Sinha (one leg only).
Lucy, Lady Houston, a British millionaire former showgirl, funded the Houston Everest Flight of 1933. A formation of airplanes led by the Marquess of Clydesdale flew over the summit in an effort to photograph the unknown terrain.[309]
On 26 September 1988, having climbed the mountain via the south-east ridge, Jean-Marc Boivin made the first paraglider descent of Everest,[310] in the process creating the record for the fastest descent of the mountain and the highest paraglider flight. Boivin said: "I was tired when I reached the top because I had broken much of the trail, and to run at this altitude was quite hard."[311] Boivin ran 18 m (60 ft) from below the summit on 40-degree slopes to launch his paraglider, reaching Camp II at 5,900 m (19,400 ft) in 12 minutes (some sources say 11 minutes).[311][312] Boivin would not repeat this feat, as he was killed two years later in 1990, base-jumping off Venezuela's Angel Falls.[313]
In 1991 four men in two balloons achieved the first hot-air balloon flight over Mount Everest.[314] In one balloon were Andy Elson and Eric Jones (cameraman), and in the other were Chris Dewhirst and Leo Dickinson (cameraman).[315] Dickinson went on to write a book about the adventure called Ballooning Over Everest.[315] The hot-air balloons were modified to function at altitudes of up to 40,000 feet.[315] Reinhold Messner called one of Dickinson's panoramic views of Everest, captured on the now-discontinued Kodak Kodachrome film, the "best snap on Earth", according to UK newspaper The Telegraph.[316] Dewhirst has offered to take passengers on a repeat of this feat for US$2.6 million per passenger.[314]
In May 2005, pilot Didier Delsalle of France landed a Eurocopter AS350 B3 helicopter on the summit of Mount Everest.[317] He needed to land for two minutes to set the Fédération Aéronautique Internationale (FAI) official record, but he stayed for about four minutes, twice.[317] In this type of landing the rotors stay engaged, which avoids relying on the snow to fully support the aircraft. The flight set rotorcraft world records, for highest of both landing and take-off.[318]
Some press reports suggested that the report of the summit landing was a misunderstanding of a South Col landing, but he had also landed on South Col two days earlier,[319] with this landing and the Everest records confirmed by the FAI.[318] Delsalle also rescued two Japanese climbers at 4,880 m (16,000 ft) while he was there. One climber noted that the new record meant a better chance of rescue.[317]
In 2011 two Nepalis paraglided from the Everest summit to Namche Bazaar in 42 minutes.[261] They had run out of oxygen and supplies, so it was a very fast way off the mountain.[261] The duo won National Geographic Adventurers of the Year for 2012 for their exploits.[320] After the paraglide they kayaked to the Indian Ocean, reaching the Bay of Bengal by the end of June 2011.[320] One had never climbed, and one had never paraglided, but together they accomplished a ground-breaking feat.[320] By 2013 footage of the flight was shown on the television news program Nightline.[321]
In 2014, a team financed and led by mountaineer Wang Jing used a helicopter to fly from South base camp to Camp 2 to avoid the Khumbu Icefall, and thence climbed to the Everest summit.[322] This climb immediately sparked outrage and controversy in much of the mountaineering world over the legitimacy and propriety of her climb.[161][323] Nepal ended up investigating Wang, who initially denied the claim that she had flown to Camp 2, admitting only that some support crew were flown to that higher camp, over the Khumbu Icefall.[324] In August 2014, however, she stated that she had flown to Camp 2 because the icefall was impassable. "If you don't fly to Camp II, you just go home," she said in an interview. In that same interview she also insisted that she had never tried to hide this fact.[161]
Her team had had to use the south side because the Chinese had denied them a permit to climb. Ultimately, the Chinese refusal may have been beneficial to Nepalese interests, allowing the government to showcase improved local hospitals and providing the opportunity for a new hybrid aviation/mountaineering style, triggering discussions about helicopter use in the mountaineering world.[161] National Geographic noted that a village festooned Wang with honours after she donated US$30,000 to the town's hospital. Wang won the International Mountaineer of the Year Award from the Nepal government in June 2014.[322]
In 2016 the increased use of helicopters was noted, both for improved efficiency and for hauling material over the deadly Khumbu Icefall.[325] In particular, it was noted that flights saved icefall porters 80 trips while still increasing commercial activity at Everest.[325] After many Nepalis died in the icefall in 2014, the government had wanted helicopters to handle more of the transportation to Camp 1, but this was not possible in 2015 because the deaths and earthquake closed the mountain, so the plan was implemented in 2016 (helicopters did prove instrumental in rescuing many people in 2015, though).[325] That summer, Bell tested the 412EPI in a series of flights near Mount Everest, including hovering at 18,000 feet and flying as high as 20,000 feet.[326]
Climbing Mount Everest can be a relatively expensive undertaking for climbers.
Climbing gear required to reach the summit may cost in excess of US$8,000, and most climbers also use bottled oxygen, which adds around US$3,000.[citation needed] The permit to enter the Everest area from the south via Nepal costs US$10,000 to US$25,000 per person, depending on the size of the team.[citation needed] The ascent typically starts at one of the two base camps near the mountain, both of which are approximately 100 kilometres (60 mi) from Kathmandu and 300 kilometres (190 mi) from Lhasa (the two nearest cities with major airports). Transferring one's equipment from the airport to the base camp may add as much as US$2,000.[citation needed]
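Taken together, the paragraph's rough figures already imply a substantial minimum outlay before any guiding fees. The sketch below simply totals them; the permit entry uses the low end of the quoted range, and all values are the article's estimates, not a real price list.

costs_usd = {
    "climbing gear": 8_000,
    "bottled oxygen": 3_000,
    "permit, Nepal south side (low end)": 10_000,
    "equipment transfer to base camp": 2_000,
}
print(f"${sum(costs_usd.values()):,}")   # -> $23,000 before guiding fees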
Going with a "celebrity guide", usually a well-known mountaineer typically with decades of climbing experience and perhaps several Everest summits, can cost over £100,000 as of 2015.[327] On the other hand, a limited support service, offering only some meals at base camp and bureaucratic overhead like a permit, can cost as little as US$7,000 as of 2007.[132] There are issues with the management of guiding firms in Nepal, and one Canadian woman was left begging for help when her guide firm, which she had paid perhaps US$40,000 to, couldn't stop her from dying in 2012. She ran out of bottled oxygen after climbing for 27 hours straight. Despite decades of concern over inexperienced climbers, neither she nor the guide firm had summited Everest before. The Tibetan/Chinese side has been described as "out of control" due to reports of thefts and threats.[328] By 2015, Nepal was considering to require that climbers have some experience and wanted to make the mountain safer, and especially increase revenue.[329] One barrier to this is that low-budget firms make money not taking inexperienced climbers to the summit.[330] Those turned away by Western firms can often find another firm willing to take them for a price—that they return home soon after arriving after base camp, or part way up the mountain.[330] Whereas a Western firm will convince those they deem incapable to turn back, other firms simply give people the freedom to choose.[330]
|
336 |
+
|
337 |
+
According to Jon Krakauer, the era of commercialisation of Everest started in 1985, when the summit was reached by a guided expedition led by David Breashears that included Richard Bass, a wealthy 55-year-old businessman and an amateur mountain climber with only four years of climbing experience.[332] By the early-1990s, several companies were offering guided tours to the mountain. Rob Hall, one of the mountaineers who died in the 1996 disaster, had successfully guided 39 clients to the summit before that incident.[25]:24,42
By 2016, most guiding services cost between US$35,000 and US$200,000.[330] However, the services offered vary widely and it is "buyer beware" when doing deals in Nepal, one of the poorest and least developed countries in the world.[330][333] Tourism is about four percent of Nepal's economy, but Everest is special in that an Everest porter can make nearly double the nation's average wage in a region in which other sources of income are lacking.[334]
Beyond this point, costs may vary widely. It is technically possible to reach the summit with minimal additional expenses, and there are "budget" travel agencies which offer logistical support for such trips. However, this is considered difficult and dangerous (as illustrated by the case of David Sharp). Many climbers hire "full service" guide companies, which provide a wide spectrum of services, including acquisition of permits, transportation to/from base camp, food, tents, fixed ropes,[335] medical assistance while on the mountain, an experienced mountaineer guide, and even personal porters to carry one's backpack and cook one's meals. The cost of such a guide service may range from US$40,000–80,000 per person.[336] Since most equipment is moved by Sherpas, clients of full-service guide companies can often keep their backpack weights under 10 kilograms (22 lb), or hire a Sherpa to carry their backpack for them. By contrast, climbers attempting less commercialised peaks, like Denali, are often expected to carry backpacks over 30 kilograms (66 lb) and, occasionally, to tow a sled with 35 kilograms (77 lb) of gear and food.[337]
The degree of commercialisation of Mount Everest is a frequent subject of criticism.[338] Jamling Tenzing Norgay, the son of Tenzing Norgay, said in a 2003 interview that his late father would have been shocked to discover that rich thrill-seekers with no climbing experience were now routinely reaching the summit, "You still have to climb this mountain yourself with your feet. But the spirit of adventure is not there any more. It is lost. There are people going up there who have no idea how to put on crampons. They are climbing because they have paid someone $65,000. It is very selfish. It endangers the lives of others."[339]
Reinhold Messner concurred in 2004, "You could die in each climb and that meant you were responsible for yourself. We were real mountaineers: careful, aware and even afraid. By climbing mountains we were not learning how big we were. We were finding out how breakable, how weak and how full of fear we are. You can only get this if you expose yourself to high danger. I have always said that a mountain without danger is not a mountain....High altitude alpinism has become tourism and show. These commercial trips to Everest, they are still dangerous. But the guides and organisers tell clients, 'Don't worry, it's all organised.' The route is prepared by hundreds of Sherpas. Extra oxygen is available in all camps, right up to the summit. People will cook for you and lay out your beds. Clients feel safe and don't care about the risks."[340]
However, not all opinions on the subject among prominent mountaineers are strictly negative. For example, Edmund Hillary, who went on record saying that he has not liked "the commercialisation of mountaineering, particularly of Mount Everest"[341] and claimed that "Having people pay $65,000 and then be led up the mountain by a couple of experienced guides...isn't really mountaineering at all",[342] nevertheless noted that he was pleased by the changes brought to Everest area by Westerners, "I don't have any regrets because I worked very hard indeed to improve the condition for the local people. When we first went in there they didn't have any schools, they didn't have any medical facilities, all over the years we have established 27 schools, we have two hospitals and a dozen medical clinics and then we've built bridges over wild mountain rivers and put in fresh water pipelines so in cooperation with the Sherpas we've done a lot to benefit them."[343]
One of the early guided summiters, Richard Bass (of Seven Summits fame) responded in an interview about Everest climbers and what it took to survive there, "Climbers should have high altitude experience before they attempt the really big mountains. People don't realise the difference between a 20,000-foot mountain and 29,000 feet. It's not just arithmetic. The reduction of oxygen in the air is proportionate to the altitude alright, but the effect on the human body is disproportionate—an exponential curve. People climb Denali [20,320 feet] or Aconcagua [22,834 feet] and think, 'Heck, I feel great up here, I'm going to try Everest.' But it's not like that."[344]
Some climbers have reported life-threatening thefts from supply caches. Vitor Negrete, the first Brazilian to climb Everest without oxygen and part of David Sharp's party, died during his descent, and theft of gear and food from his high-altitude camp may have contributed.[345][346]
"Several members were bullied, gear was stolen, and threats were made against me and my climbing partner, Michael Kodas, making an already stressful situation even more dire", said one climber.[347]
In addition to theft, Michael Kodas describes in his book, High Crimes: The Fate of Everest in an Age of Greed (2008):[348] unethical guides and Sherpas, prostitution and gambling at the Tibet Base Camp, fraud related to the sale of oxygen bottles, and climbers collecting donations under the pretense of removing trash from the mountain.[349][350]
The Chinese side of Everest in Tibet was described as "out of control" after one Canadian had all his gear stolen and was abandoned by his Sherpa.[328] Another Sherpa helped the victim get off the mountain safely and gave him some spare gear. Other climbers have also reported missing oxygen bottles, which can be worth hundreds of dollars each. Hundreds of climbers pass by people's tents, making it hard to safeguard against theft.[328]
In the late 2010s, the reports of theft of oxygen bottles from camps became more common.[351] Westerners have sometimes struggled to understand the ancient culture and desperate poverty that drives some locals, some of whom have a different concept of the value of a human life.[352][353] For example, for just 1,000 rupees (£6.30) per person, several foreigners were forced to leave the lodge where they were staying and tricked into believing they were being led to safety. Instead they were abandoned and died in the snowstorm.[352]
On 18 April 2014, in one of the worst disasters ever to hit the Everest climbing community up to that time, 16 Sherpas died in Nepal due to the avalanche that swept them off Mount Everest. In response to the tragedy, numerous Sherpa climbing guides walked off the job and most climbing companies pulled out in respect for the Sherpa people mourning the loss.[354][355] Some still wanted to climb, but there was too much controversy to continue that year.[354] One of the issues that triggered the work action by Sherpas was unreasonable client demands during climbs.[354]
Mount Everest has been host to other winter sports and adventuring besides mountaineering, including snowboarding, skiing, paragliding, and BASE jumping.
Yuichiro Miura became the first man to ski down Everest in the 1970s. He descended nearly 4,200 vertical feet from the South Col before falling and suffering severe injuries.[85] Stefan Gatt and Marco Siffredi snowboarded Mount Everest in 2001.[356] Other Everest skiers include Davo Karničar of Slovenia, who completed a top-to-south-base-camp descent in 2000; Hans Kammerlander of Italy, in 1996 on the north side;[357] and Kit DesLauriers of the United States, in 2006.[358] In 2006 Swede Tomas Olsson and Norwegian Tormod Granheim skied together down the north face. Olsson's anchor broke while they were rappelling down a cliff in the Norton Couloir at about 8,500 metres, resulting in his death from a two-and-a-half-kilometre fall. Granheim skied down to Camp III.[359] Marco Siffredi died in 2002 on his second snowboarding expedition.[356]
Various types of gliding descents have slowly become more popular, and are noted for their rapid descents to lower camps. In 1986 Steve McKinney led an expedition to Mount Everest,[360] during which he became the first person to fly a hang-glider off the mountain.[312] Frenchman Jean-Marc Boivin made the first paraglider descent of Everest in September 1988, descending in minutes from the south-east ridge to a lower camp.[310] In 2011, two Nepalese made a gliding descent from the Everest summit down 5,000 metres (16,400 ft) in 45 minutes.[361] On 5 May 2013, the beverage company Red Bull sponsored Valery Rozov, who successfully BASE jumped off of the mountain while wearing a wingsuit, setting a record for world's highest BASE jump in the process.[256][362]
The southern part of Mount Everest is regarded as one of several "hidden valleys" of refuge designated by Padmasambhava, a ninth-century "lotus-born" Buddhist saint.[363]
Near the base of the north side of Everest lies Rongbuk Monastery, which has been called the "sacred threshold to Mount Everest, with the most dramatic views of the world."[364] For Sherpas living on the slopes of Everest in the Khumbu region of Nepal, Rongbuk Monastery is an important pilgrimage site, accessed in a few days of travel across the Himalayas through Nangpa La.[99]
Miyolangsangma, a Tibetan Buddhist "Goddess of Inexhaustible Giving", is believed to have lived at the top of Mt Everest. According to Sherpa Buddhist monks, Mt Everest is Miyolangsangma's palace and playground, and all climbers are only partially welcome guests, having arrived without invitation.[363]
The Sherpa people also believe that Mount Everest and its flanks are blessed with spiritual energy, and one should show reverence when passing through this sacred landscape. Here, the karmic effects of one's actions are magnified, and impure thoughts are best avoided.[363]
In 2015 the president of the Nepal Mountaineering Association warned that pollution, especially human waste, has reached critical levels. As much as "26,500 pounds of human excrement" each season is left behind on the mountain.[365] Human waste is strewn across the verges of the route to the summit, making the four sleeping areas on the route up Everest's south side minefields of human excrement. Climbers above Base Camp—for the 62-year history of climbing on the mountain—have most commonly either buried their excrement in holes they dug by hand in the snow, or slung it into crevasses, or simply defecated wherever convenient, often within meters of their tents. The only place where climbers can defecate without worrying about contaminating the mountain is Base Camp. At approximately 18,000 feet, Base Camp sees the most activity of all camps on Everest because climbers acclimate and rest there. In the late-1990s, expeditions began using toilets that they fashioned from blue plastic 50-gallon barrels fitted with a toilet seat and enclosed.[366]
The problem of human waste is compounded by the presence of more anodyne waste: spent oxygen tanks, abandoned tents, empty cans and bottles. The Nepalese government now requires each climber to pack out eight kilograms of waste when descending the mountain.[367]
In February 2019, due to the mounting waste problem, China closed the base camp on its side of Everest to visitors without climbing permits. Tourists are allowed to go as far as the Rongbuk Monastery.[368]
In April 2019, the Solukhumbu district's Khumbu Pasanglhamu Rural Municipality launched a campaign to collect nearly 10,000 kg of garbage from Everest.[369]
Mount Everest has a polar climate (Köppen EF), with all months averaging well below freezing. Note: in the table below, the temperature given for each month is the average lowest temperature recorded in that month. So, in an average year, the lowest recorded July temperature is −18 °C, and the lowest recorded January temperature is −36 °C.
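Since the winter figures earlier in this article are quoted in Fahrenheit, the standard conversion formula (added here only for convenience) ties the two scales together:

def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

print(f"{c_to_f(-18):.1f} °F")   # July low:    -0.4 °F
print(f"{c_to_f(-36):.1f} °F")   # January low: -32.8 °F, near the -33 °F cited earlier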
Nearby peaks include Lhotse, 8,516 m (27,940 ft); Nuptse, 7,855 m (25,771 ft), and Changtse, 7,580 m (24,870 ft) among others. Another nearby peak is Khumbutse, and many of the highest mountains in the world are near Mount Everest. On the southwest side, a major feature in the lower areas is the Khumbu icefall and glacier, an obstacle to climbers on those routes but also to the base camps.
en/192.html.txt
ADDED
@@ -0,0 +1,228 @@
Central America (Spanish: América Central, pronounced [aˈmeɾika senˈtɾal], Centroamérica, pronounced [sentɾoaˈmeɾika]) is a region in the southern tip of North America and is sometimes defined as a subregion of the Americas.[1] This region is bordered by Mexico to the north, Colombia to the southeast, the Caribbean Sea to the east and the Pacific Ocean to the west and south. Central America consists of seven countries: El Salvador, Costa Rica, Belize, Guatemala, Honduras, Nicaragua and Panama. The combined population of Central America is estimated at 44.53 million (2016).[2]
Central America is a part of the Mesoamerican biodiversity hotspot, which extends from northern Guatemala to central Panama. Due to the presence of several active geologic faults and the Central America Volcanic Arc, there is a great deal of seismic activity in the region, such as volcanic eruptions and earthquakes, which has resulted in death, injury and property damage.
In the Pre-Columbian era, Central America was inhabited by the indigenous peoples of Mesoamerica to the north and west and the Isthmo-Colombian peoples to the south and east. Following the Spanish expedition of Christopher Columbus' voyages to the Americas, Spain began to colonize the Americas. From 1609 to 1821, the majority of Central American territories (except for what would become Belize and Panama, and including the modern Mexican state of Chiapas) were governed by the viceroyalty of New Spain from Mexico City as the Captaincy General of Guatemala. On 24 August 1821, Spanish Viceroy Juan de O'Donojú signed the Treaty of Córdoba, which established New Spain's independence from Spain.[3] On 15 September 1821, the Act of Independence of Central America was enacted to announce Central America's separation from the Spanish Empire and provide for the establishment of a new Central American state. Some of New Spain's provinces in the Central American region (i.e. what would become Guatemala, Honduras, El Salvador, Nicaragua and Costa Rica) were annexed to the First Mexican Empire; however, in 1823 they seceded from Mexico to form the Federal Republic of Central America until 1838.
In 1838, Nicaragua, Honduras, Costa Rica and Guatemala became the first of Central America's seven states to become independent autonomous countries, followed by El Salvador in 1841, Panama in 1903 and Belize in 1981.[citation needed] Despite the dissolution of the Federal Republic of Central America, there is anecdotal evidence that Salvadorans, Panamanians, Costa Ricans, Guatemalans, Hondurans and Nicaraguans continue to maintain a Central American identity. For instance, Central Americans sometimes refer to their nations as if they were provinces of a Central American state. It is not unusual to write "C.A." after the country's name in formal and informal contexts. Governments in the region sometimes reinforce this sense of belonging to Central America in their citizens. For example, automobile licence plates in many of the region's countries include the moniker Centroamerica alongside the country's name. Belizeans, however, are usually considered culturally West Indian rather than Central American.
El Salvador
Guatemala
Honduras
Nicaragua
Costa Rica
Panama
Belize
"Central America" may mean different things to various people, based upon different contexts:
Ancient footprints of Acahualinca, Nicaragua
Stone spheres of Costa Rica
Tazumal, El Salvador
Tikal, Guatemala
Copan, Honduras
Altun Ha, Belize
In the Pre-Columbian era, the northern areas of Central America were inhabited by the indigenous peoples of Mesoamerica. Most notable among these were the Mayans, who had built numerous cities throughout the region, and the Aztecs, who had created a vast empire. The pre-Columbian cultures of eastern El Salvador, eastern Honduras, Caribbean Nicaragua, most of Costa Rica and Panama were predominantly speakers of the Chibchan languages at the time of European contact and are considered by some[6] culturally different and grouped in the Isthmo-Colombian Area.
Following the Spanish expedition of Christopher Columbus's voyages to the Americas, the Spanish sent many expeditions to the region, and they began their conquest of Maya territory in 1523. Soon after the conquest of the Aztec Empire, Spanish conquistador Pedro de Alvarado commenced the conquest of northern Central America for the Spanish Empire. Beginning with his arrival in Soconusco in 1523, Alvarado's forces systematically conquered and subjugated most of the major Maya kingdoms, including the K'iche', Tz'utujil, Pipil, and the Kaqchikel. By 1528, the conquest of Guatemala was nearly complete, with only the Petén Basin remaining outside the Spanish sphere of influence. The last independent Maya kingdoms – the Kowoj and the Itza people – were finally defeated in 1697, as part of the Spanish conquest of Petén.[citation needed]
In 1538, Spain established the Real Audiencia of Panama, which had jurisdiction over all land from the Strait of Magellan to the Gulf of Fonseca. This entity was dissolved in 1543, and most of the territory within Central America then fell under the jurisdiction of the Audiencia Real de Guatemala. This area included the current territories of Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and the Mexican state of Chiapas, but excluded the lands that would become Belize and Panama. The president of the Audiencia, which had its seat in Antigua Guatemala, was the governor of the entire area. In 1609 the area became a captaincy general and the governor was also granted the title of captain general. The Captaincy General of Guatemala encompassed most of Central America, with the exception of present-day Belize and Panama.
The Captaincy General of Guatemala lasted for more than two centuries, but began to fray after a rebellion in 1811 which began in the intendancy of San Salvador. The Captaincy General formally ended on 15 September 1821, with the signing of the Act of Independence of Central America. Mexican independence was achieved at virtually the same time with the signing of the Treaty of Córdoba and the Declaration of Independence of the Mexican Empire, and the entire region was finally independent from Spanish authority by 28 September 1821.
From its independence from Spain in 1821 until 1823, the former Captaincy General remained intact as part of the short-lived First Mexican Empire. When the Emperor of Mexico abdicated on 19 March 1823, Central America again became independent. On 1 July 1823, the Congress of Central America peacefully seceded from Mexico and declared absolute independence from all foreign nations, and the region formed the Federal Republic of Central America.[citation needed]
The Federal Republic of Central America was a representative democracy with its capital at Guatemala City. This union consisted of the provinces of Costa Rica, El Salvador, Guatemala, Honduras, Los Altos, Mosquito Coast, and Nicaragua. The lowlands of southwest Chiapas, including Soconusco, initially belonged to the Republic until 1824, when Mexico annexed most of Chiapas and began its claims to Soconusco. The Republic lasted from 1823 to 1838, when it disintegrated as a result of civil wars.[citation needed]
The United Provinces of Central America
Federal Republic of Central America
National Representation of Central America
Greater Republic of Central America
Guatemala
El Salvador
Honduras
Nicaragua
Costa Rica
Panama
Belize
The territory that now makes up Belize was heavily contested in a dispute that continued for decades after Guatemala achieved independence (see History of Belize (1506–1862). Spain, and later Guatemala, considered this land a Guatemalan department. In 1862, Britain formally declared it a British colony and named it British Honduras. It became independent as Belize in 1981.[citation needed]
Panama, situated in the southernmost part of Central America on the Isthmus of Panama, has for most of its history been culturally and politically linked to South America. Panama was part of the Province of Tierra Firme from 1510 until 1538 when it came under the jurisdiction of the newly formed Audiencia Real de Panama. Beginning in 1543, Panama was administered as part of the Viceroyalty of Peru, along with all other Spanish possessions in South America. Panama remained as part of the Viceroyalty of Peru until 1739, when it was transferred to the Viceroyalty of New Granada, the capital of which was located at Santa Fé de Bogotá. Panama remained as part of the Viceroyalty of New Granada until the disestablishment of that viceroyalty in 1819. A series of military and political struggles took place from that time until 1822, the result of which produced the republic of Gran Colombia. After the dissolution of Gran Colombia in 1830, Panama became part of a successor state, the Republic of New Granada. From 1855 until 1886, Panama existed as Panama State, first within the Republic of New Granada, then within the Granadine Confederation, and finally within the United States of Colombia. The United States of Colombia was replaced by the Republic of Colombia in 1886. As part of the Republic of Colombia, Panama State was abolished and it became the Isthmus Department. Despite the many political reorganizations, Colombia was still deeply plagued by conflict, which eventually led to the secession of Panama on 3 November 1903. Only after that time did some begin to regard Panama as a North or Central American entity.[citation needed]
By the 1930s the United Fruit Company owned 14,000 square kilometres (3.5 million acres) of land in Central America and the Caribbean and was the single largest land owner in Guatemala. Such holdings gave it great power over the governments of small countries. That was one of the factors that led to the coining of the phrase banana republic.[7]
After more than two hundred years of social unrest, violent conflict, and revolution, Central America today remains in a period of political transformation. Poverty, social injustice, and violence are still widespread.[8] Nicaragua is the second poorest country in the western hemisphere (only Haiti is poorer).[9]
Central America is the tapering isthmus of southern North America, with unique and varied geographic features. The Pacific Ocean lies to the southwest, the Caribbean Sea lies to the northeast, and the Gulf of Mexico lies to the north. Some physiographists define the Isthmus of Tehuantepec as the northern geographic border of Central America,[10] while others use the northwestern borders of Belize and Guatemala. From there, the Central American land mass extends southeastward to the Atrato River, where it connects to the Pacific Lowlands in northwestern South America.
Of the many mountain ranges within Central America, the longest are the Sierra Madre de Chiapas, the Cordillera Isabelia and the Cordillera de Talamanca. At 4,220 meters (13,850 ft), Volcán Tajumulco is the highest peak in Central America. Other high points of Central America are listed in the table below:
Between the mountain ranges lie fertile valleys that are suitable for the raising of livestock and for the production of coffee, tobacco, beans and other crops. Most of the population of Honduras, Costa Rica and Guatemala lives in valleys.[11]
Trade winds have a significant effect upon the climate of Central America. Temperatures in Central America are highest just prior to the summer wet season, and are lowest during the winter dry season, when trade winds contribute to a cooler climate. The highest temperatures occur in April, due to higher levels of sunlight, lower cloud cover and a decrease in trade winds.[12]
Central America is part of the Mesoamerican biodiversity hotspot, boasting 7% of the world's biodiversity.[13] The Pacific Flyway is a major north–south flyway for migratory birds in the Americas, extending from Alaska to Tierra del Fuego. Due to the funnel-like shape of its land mass, migratory birds can be seen in very high concentrations in Central America, especially in the spring and autumn. As a bridge between North America and South America, Central America has many species from the Nearctic and the Neotropical realms. However, the southern countries of the region (Costa Rica and Panama) have more biodiversity than the northern countries (Guatemala and Belize), while the central countries (Honduras, Nicaragua and El Salvador) have the least biodiversity.[13] The table below shows recent statistics:
Belize
Montecristo National Park, El Salvador
Maderas forest, Nicaragua
Texiguat Wildlife Refuge, Honduras
Monteverde Cloud Forest Reserve, Costa Rica
Parque Internacional la Amistad, Panama
Petén-Veracruz moist forests, Guatemala
Over 300 species of the region's flora and fauna are threatened, 107 of which are classified as critically endangered. The underlying problems are deforestation, which is estimated by FAO at 1.2% per year in Central America and Mexico combined, fragmentation of rainforests and the fact that 80% of the vegetation in Central America has already been converted to agriculture.[21]
Efforts to protect fauna and flora in the region are made by creating ecoregions and nature reserves. 36% of Belize's land territory falls under some form of official protected status, giving Belize one of the most extensive systems of terrestrial protected areas in the Americas. In addition, 13% of Belize's marine territory is also protected.[22] A large coral reef, the Mesoamerican Barrier Reef System, extends from Mexico to Honduras. The Belize Barrier Reef is part of this system and is home to a large diversity of plants and animals, making it one of the most diverse ecosystems of the world: it hosts 70 hard coral species, 36 soft coral species, 500 species of fish and hundreds of invertebrate species. So far only about 10% of the species in the Belize Barrier Reef have been discovered.[23]
Lycaste skinneri, Guatemala
Yucca gigantea, El Salvador
Rhyncholaelia digbyana, Honduras
Plumeria, Nicaragua
Guarianthe skinneri, Costa Rica
Peristeria elata, Panama
Prosthechea cochleata, Belize
From 2001 to 2010, 5,376 square kilometers (2,076 sq mi) of forest were lost in the region. In 2010 Belize had 63% of remaining forest cover, Costa Rica 46%, Panama 45%, Honduras 41%, Guatemala 37%, Nicaragua 29%, and El Salvador 21%. Most of the loss occurred in the moist forest biome, with 12,201 square kilometers (4,711 sq mi). Woody vegetation loss was partially offset by a gain in the coniferous forest biome of 4,730 square kilometers (1,830 sq mi), and a gain in the dry forest biome of 2,054 square kilometers (793 sq mi). Mangroves and deserts contributed only 1% to the loss in forest vegetation. The bulk of the deforestation was located on the Caribbean slopes of Nicaragua, with a loss of 8,574 square kilometers (3,310 sq mi) of forest in the period from 2001 to 2010. The most significant regrowth, 3,050 square kilometers (1,180 sq mi) of forest, was seen in the coniferous woody vegetation of Honduras.[24]
The Central American pine-oak forests ecoregion, in the tropical and subtropical coniferous forests biome, is found in Central America and southern Mexico. The Central American pine-oak forests occupy an area of 111,400 square kilometers (43,000 sq mi),[25] extending along the mountainous spine of Central America from the Sierra Madre de Chiapas in Mexico's Chiapas state through the highlands of Guatemala, El Salvador, and Honduras to central Nicaragua. The pine-oak forests lie between 600–1,800 metres (2,000–5,900 ft) elevation,[25] and are surrounded at lower elevations by tropical moist forests and tropical dry forests. Higher elevations above 1,800 metres (5,900 ft) are usually covered with Central American montane forests. The Central American pine-oak forests are composed of many species characteristic of temperate North America, including oak, pine, fir, and cypress.
Laurel forest is the most common type of Central American temperate evergreen cloud forest, found in almost all Central American countries, normally more than 1,000 meters (3,300 ft) above sea level. Tree species include evergreen oaks, members of the laurel family, and species of Weinmannia, Drimys, and Magnolia.[26] The cloud forest of Sierra de las Minas, Guatemala, is the largest in Central America. In some areas of southeastern Honduras there are cloud forests, the largest located near the border with Nicaragua. In Nicaragua, cloud forests are situated near the border with Honduras, but many were cleared to grow coffee. There are still some temperate evergreen hills in the north. The only cloud forest in the Pacific coastal zone of Central America is on the Mombacho volcano in Nicaragua. In Costa Rica, there are laurel forests in the Cordillera de Tilarán and on Volcán Arenal (the Monteverde area), as well as in the Cordillera de Talamanca.
The Central American montane forests are an ecoregion of the tropical and subtropical moist broadleaf forests biome, as defined by the World Wildlife Fund.[27] These forests are of the moist deciduous and the semi-evergreen seasonal subtype of tropical and subtropical moist broadleaf forests and receive high overall rainfall with a warm summer wet season and a cooler winter dry season. Central American montane forests consist of forest patches located at altitudes ranging from 1,800–4,000 metres (5,900–13,100 ft), on the summits and slopes of the highest mountains in Central America ranging from Southern Mexico, through Guatemala, El Salvador, and Honduras, to northern Nicaragua. The entire ecoregion covers an area of 13,200 square kilometers (5,100 sq mi) and has a temperate climate with relatively high precipitation levels.[27]
Resplendent quetzal, Guatemala
Turquoise-browed motmot, El Salvador and Nicaragua
Keel-billed toucan, Belize
Scarlet macaw, Honduras
Clay-colored thrush, Costa Rica
Harpy eagle, Panama
Ecoregions are not only established to protect the forests themselves but also because they are habitats for an incomparably rich and often endemic fauna. Almost half of the bird population of the Talamancan montane forests in Costa Rica and Panama is endemic to this region. Several birds are listed as threatened, most notably the resplendent quetzal (Pharomachrus mocinno), three-wattled bellbird (Procnias tricarunculata), bare-necked umbrellabird (Cephalopterus glabricollis), and black guan (Chamaepetes unicolor). Many of the amphibians are endemic and depend on the existence of forest. The golden toad that once inhabited a small region in the Monteverde Reserve, which is part of the Talamancan montane forests, has not been seen alive since 1989 and is listed as extinct by IUCN. The exact causes for its extinction are unknown; global warming may have played a role, because the development of that frog, which is typical for this area, may have been compromised. Seven small mammals are endemic to the Costa Rica-Chiriqui highlands within the Talamancan montane forest region. Jaguars, cougars and spider monkeys, as well as tapirs and anteaters, live in the woods of Central America.[26] The Central American red brocket is a brocket deer found in Central America's tropical forest.
Coatepeque Caldera, El Salvador
Lake Atitlán, Guatemala
Mombacho, Nicaragua
Arenal Volcano, Costa Rica
Central America is geologically very active, with volcanic eruptions and earthquakes occurring frequently, and tsunamis occurring occasionally. Many thousands of people have died as a result of these natural disasters.
Most of Central America rests atop the Caribbean Plate. This tectonic plate converges with the Cocos, Nazca, and North American plates to form the Middle America Trench, a major subduction zone. The Middle America Trench is situated some 60–160 kilometers (37–99 mi) off the Pacific coast of Central America and runs roughly parallel to it. Many large earthquakes have occurred as a result of seismic activity at the Middle America Trench.[28] For example, subduction of the Cocos Plate beneath the North American Plate at the Middle America Trench is believed to have caused the 1985 Mexico City earthquake that killed as many as 40,000 people. Seismic activity at the Middle America Trench is also responsible for earthquakes in 1902, 1942, 1956, 1982, 1992, 2001, 2007, 2012, 2014, and many other earthquakes throughout Central America.
The Middle America Trench is not the only source of seismic activity in Central America. The Motagua Fault is an onshore continuation of the Cayman Trough which forms part of the tectonic boundary between the North American Plate and the Caribbean Plate. This transform fault cuts right across Guatemala and then continues offshore until it merges with the Middle America Trench along the Pacific coast of Mexico, near Acapulco. Seismic activity at the Motagua Fault has been responsible for earthquakes in 1717, 1773, 1902, 1976, 1980, and 2009.
Another onshore continuation of the Cayman Trough is the Chixoy-Polochic Fault, which runs parallel to, and roughly 80 kilometers (50 mi) north of, the Motagua Fault. Though less active than the Motagua Fault, seismic activity at the Chixoy-Polochic Fault is still thought to be capable of producing very large earthquakes, such as the 1816 earthquake of Guatemala.[29]
Managua, the capital of Nicaragua, was devastated by earthquakes in 1931 and 1972.
Volcanic eruptions are also common in Central America. In 1968 the Arenal Volcano, in Costa Rica, erupted, killing 87 people as the three villages of Tabacon, Pueblo Nuevo and San Luis were buried under pyroclastic flows and debris. Fertile soils from weathered volcanic lava have made it possible to sustain dense populations in the agriculturally productive highland areas.
The population of Central America is estimated at 47,448,333 as of 2018.[30][31] With an area of 523,780 square kilometers (202,230 sq mi),[32] it has a population density of 91 per square kilometer (240/sq mi). Human Development Index values are from the estimates for 2017.[33]
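The density figure follows directly from the two numbers just given; as a quick check (a worked computation using only the population and area quoted above):

\frac{47{,}448{,}333\ \text{people}}{523{,}780\ \text{km}^2} \approx 90.6 \approx 91\ \text{people per km}^2\ (240\ \text{per sq mi})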
Spanish is the official language in all Central American countries except Belize, where the official language is English. Mayan languages constitute a language family consisting of about 26 related languages; Guatemala formally recognized 21 of these in 1996. Xinca and Garifuna are also present in Central America.
This region of the continent is very rich in terms of ethnic groups. The majority of the population is mestizo, with sizable Mayan and African descendent populations present, including Xinca and Garifuna minorities. The immigration of Arabs, Jews, Chinese, Europeans and others brought additional groups to the area.
The predominant religion in Central America is Christianity (95.6%).[35] Beginning with the Spanish colonization of Central America in the 16th century, Roman Catholicism was the most popular religion in the region through the first half of the 20th century. Since the 1960s, there has been an increase in other Christian groups, particularly Protestantism, as well as in other religious organizations and in individuals identifying themselves as having no religion.[36]
Guatemalan textiles
Mola (art form), Panama
La Palma art form, El Salvador
Central America is currently undergoing a process of political, economic and cultural transformation that started in 1907 with the creation of the Central American Court of Justice.
In 1951 the integration process continued with the signing of the San Salvador Treaty, which created the Organization of Central American States (ODECA). However, the unity of the ODECA was limited by conflicts between several member states.
In 1991, the integration agenda was further advanced by the creation of the Central American Integration System (Sistema para la Integración Centroamericana, or SICA). SICA provides a clear legal basis to avoid disputes between the member states. SICA membership includes the seven nations of Central America plus the Dominican Republic, a state that is traditionally considered part of the Caribbean.
On 6 December 2008, SICA announced an agreement to pursue a common currency and common passport for the member nations.[37] No timeline for implementation was discussed.
Central America already has several supranational institutions such as the Central American Parliament, the Central American Bank for Economic Integration and the Central American Common Market.
On 22 July 2011, President Mauricio Funes of El Salvador became the first president pro tempore of SICA. El Salvador also became the headquarters of SICA with the inauguration of a new building.[38]
Until recently,[when?] all Central American countries maintained diplomatic relations with Taiwan instead of China. President Óscar Arias of Costa Rica, however, established diplomatic relations with China in 2007, severing formal diplomatic ties with Taiwan.[39] After breaking off relations with the Republic of China in 2017, Panama established diplomatic relations with the People's Republic of China. In August 2018, El Salvador also severed ties with Taiwan to formally recognize the People's Republic of China as the sole China, a move many considered to lack transparency, given its abruptness and reports that the Chinese government intended to invest in the department of La Union while also promising to fund the ruling party's reelection campaign.[40] El Salvador's president, Nayib Bukele, broke diplomatic relations with Taiwan and established relations with China.
The Central American Parliament (also known as PARLACEN) is a political and parliamentary body of SICA. The parliament started around 1980, and its primary goal was to resolve conflicts in Nicaragua, Guatemala, and El Salvador. Although the group was disbanded in 1986, the idea of Central American unity remained, and a treaty was signed in 1987 to create the Central American Parliament and other political bodies. Its original members were Guatemala, El Salvador, Nicaragua and Honduras; new members, including Panama and the Dominican Republic, have since joined.
Costa Rica is not a member state of the Central American Parliament, and its accession remains deeply unpopular at all levels of Costa Rican society, owing to strong political criticism of the regional parliament, which Costa Ricans regard as a threat to democratic accountability and to the effectiveness of integration efforts. The most common objections raised by Costa Ricans are the excessively high salaries of its members, their legal immunity from the jurisdiction of any member state, corruption, the non-binding nature and ineffectiveness of the parliament's decisions, its high operating costs, and the automatic membership granted to Central American presidents once they leave office.[citation needed]
Signed in 2004, the Central American Free Trade Agreement (CAFTA) is an agreement between the United States, Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and the Dominican Republic. The treaty is aimed at promoting free trade among its members.
Guatemala has the largest economy in the region.[41][42] Its main exports are coffee, sugar, bananas, petroleum, clothing, and cardamom. Of its 10.29 billion dollar annual exports,[43] 40.2% go to the United States, 11.1% to neighboring El Salvador, 8% to Honduras, 5.5% to Mexico, 4.7% to Nicaragua, and 4.3% to Costa Rica.[44]
The region is particularly attractive for companies (especially clothing companies) because of its geographical proximity to the United States, very low wages and considerable tax advantages. In addition, the decline in the prices of coffee and other export products, together with the structural adjustment measures promoted by the international financial institutions, has partly ruined agriculture, favouring the emergence of maquiladoras. This sector accounts for 42 per cent of total exports from El Salvador, 55 per cent from Guatemala, and 65 per cent from Honduras. However, its contribution to the economies of these countries is disputed; raw materials are imported, jobs are precarious and low-paid, and tax exemptions weaken public finances.[45]
They are also criticised for the working conditions of employees: insults and physical violence, abusive dismissals (especially of pregnant workers), excessive working hours and non-payment of overtime. According to Lucrecia Bautista, coordinator of the maquilas sector of the audit firm Coverco, labour law regulations are regularly violated in maquilas and there is no political will to enforce their application. In the case of infringements, the labour inspectorate shows remarkable leniency, so as not to discourage investors. Trade unionists are subject to pressure, and sometimes to kidnapping or murder; in some cases, business leaders have used the services of the maras. Finally, blacklists containing the names of trade unionists or political activists circulate in employers' circles.[45]
Economic growth in Central America is projected to slow slightly in 2014–15, as country-specific domestic factors offset the positive effects from stronger economic activity in the United States.[46]
Playa Blanca, Guatemala
Jiquilisco Bay, El Salvador
Roatán, Honduras
Pink Pearl Island, Nicaragua
Tamarindo, Costa Rica
Cayos Zapatilla, Panama
Corozal Beach, Belize
Tourism in Belize has grown considerably in more recent times, and it is now the second largest industry in the nation. Belizean Prime Minister Dean Barrow has stated his intention to use tourism to combat poverty throughout the country.[49] The growth in tourism has positively affected the agricultural, commercial, and finance industries, as well as the construction industry. The results for Belize's tourism-driven economy have been significant, with the nation welcoming almost one million tourists in a calendar year for the first time in its history in 2012.[50] Belize is also the only country in Central America with English as its official language, making this country a comfortable destination for English-speaking tourists.[51]
Costa Rica is the most visited nation in Central America.[52] Tourism in Costa Rica is one of the fastest growing economic sectors of the country,[53] having become the largest source of foreign revenue by 1995.[54] Since 1999, tourism has earned more foreign exchange than bananas, pineapples and coffee exports combined.[55] The tourism boom began in 1987,[54] with the number of visitors rising from 329,000 in 1988, through 1.03 million in 1999, to a historical record of 2.43 million foreign visitors and $1.92 billion in revenue in 2013.[52] In 2012, tourism contributed 12.5% of the country's GDP and was responsible for 11.7% of direct and indirect employment.[56]
Tourism in Nicaragua has grown considerably recently, and it is now the second largest industry in the nation. Nicaraguan President Daniel Ortega has stated his intention to use tourism to combat poverty throughout the country.[57] The growth in tourism has positively affected the agricultural, commercial, and finance industries, as well as the construction industry. The results for Nicaragua's tourism-driven economy have been significant, with the nation welcoming one million tourists in a calendar year for the first time in its history in 2010.[58]
The Inter-American Highway is the Central American section of the Pan-American Highway, and spans 5,470 kilometers (3,400 mi) between Nuevo Laredo, Mexico, and Panama City, Panama. Because of the 87 kilometers (54 mi) break in the highway known as the Darién Gap, it is not possible to cross between Central America and South America in an automobile.
en/1920.html.txt
ADDED
@@ -0,0 +1,195 @@
Evolution is change in the heritable characteristics of biological populations over successive generations.[1][2] These characteristics are the expressions of genes that are passed on from parent to offspring during reproduction. Different characteristics tend to exist within any given population as a result of mutation, genetic recombination and other sources of genetic variation.[3] Evolution occurs when evolutionary processes such as natural selection (including sexual selection) and genetic drift act on this variation, resulting in certain characteristics becoming more common or rare within a population.[4] It is this process of evolution that has given rise to biodiversity at every level of biological organisation, including the levels of species, individual organisms and molecules.[5][6]
The scientific theory of evolution by natural selection was conceived independently by Charles Darwin and Alfred Russel Wallace in the mid-19th century and was set out in detail in Darwin's book On the Origin of Species.[7] Evolution by natural selection was first demonstrated by the observation that more offspring are often produced than can possibly survive. This is followed by three observable facts about living organisms: (1) traits vary among individuals with respect to their morphology, physiology and behaviour (phenotypic variation), (2) different traits confer different rates of survival and reproduction (differential fitness) and (3) traits can be passed from generation to generation (heritability of fitness).[8] Thus, in successive generations members of a population are more likely to be replaced by the progenies of parents with favourable characteristics that have enabled them to survive and reproduce in their respective environments. In the early 20th century, other competing ideas of evolution such as mutationism and orthogenesis were refuted as the modern synthesis reconciled Darwinian evolution with classical genetics, which established adaptive evolution as being caused by natural selection acting on Mendelian genetic variation.[9]
All life on Earth shares a last universal common ancestor (LUCA)[10][11][12] that lived approximately 3.5–3.8 billion years ago.[13] The fossil record includes a progression from early biogenic graphite,[14] to microbial mat fossils,[15][16][17] to fossilised multicellular organisms. Existing patterns of biodiversity have been shaped by repeated formations of new species (speciation), changes within species (anagenesis) and loss of species (extinction) throughout the evolutionary history of life on Earth.[18] Morphological and biochemical traits are more similar among species that share a more recent common ancestor, and can be used to reconstruct phylogenetic trees.[19][20]
Evolutionary biologists have continued to study various aspects of evolution by forming and testing hypotheses as well as constructing theories based on evidence from the field or laboratory and on data generated by the methods of mathematical and theoretical biology. Their discoveries have influenced not just the development of biology but numerous other scientific and industrial fields, including agriculture, medicine and computer science.[21]
The proposal that one type of organism could descend from another type goes back to some of the first pre-Socratic Greek philosophers, such as Anaximander and Empedocles.[23] Such proposals survived into Roman times. The poet and philosopher Lucretius followed Empedocles in his masterwork De rerum natura (On the Nature of Things).[24][25]
In contrast to these materialistic views, Aristotelianism considered all natural things as actualisations of fixed natural possibilities, known as forms.[26][27] This was part of a medieval teleological understanding of nature in which all things have an intended role to play in a divine cosmic order. Variations of this idea became the standard understanding of the Middle Ages and were integrated into Christian learning, but Aristotle did not demand that real types of organisms always correspond one-for-one with exact metaphysical forms and specifically gave examples of how new types of living things could come to be.[28]
In the 17th century, the new method of modern science rejected the Aristotelian approach. It sought explanations of natural phenomena in terms of physical laws that were the same for all visible things and that did not require the existence of any fixed natural categories or divine cosmic order. However, this new approach was slow to take root in the biological sciences, the last bastion of the concept of fixed natural types. John Ray applied one of the previously more general terms for fixed natural types, "species", to plant and animal types, but he strictly identified each type of living thing as a species and proposed that each species could be defined by the features that perpetuated themselves generation after generation.[29] The biological classification introduced by Carl Linnaeus in 1735 explicitly recognised the hierarchical nature of species relationships, but still viewed species as fixed according to a divine plan.[30]
Other naturalists of this time speculated on the evolutionary change of species over time according to natural laws. In 1751, Pierre Louis Maupertuis wrote of natural modifications occurring during reproduction and accumulating over many generations to produce new species.[31] Georges-Louis Leclerc, Comte de Buffon suggested that species could degenerate into different organisms, and Erasmus Darwin proposed that all warm-blooded animals could have descended from a single microorganism (or "filament").[32] The first full-fledged evolutionary scheme was Jean-Baptiste Lamarck's "transmutation" theory of 1809,[33] which envisaged spontaneous generation continually producing simple forms of life that developed greater complexity in parallel lineages with an inherent progressive tendency, and postulated that on a local level, these lineages adapted to the environment by inheriting changes caused by their use or disuse in parents.[34][35] (The latter process was later called Lamarckism.)[34][36][37][38] These ideas were condemned by established naturalists as speculation lacking empirical support. In particular, Georges Cuvier insisted that species were unrelated and fixed, their similarities reflecting divine design for functional needs. In the meantime, Ray's ideas of benevolent design had been developed by William Paley into the Natural Theology or Evidences of the Existence and Attributes of the Deity (1802), which proposed complex adaptations as evidence of divine design and which was admired by Charles Darwin.[39][40][41]
The crucial break from the concept of constant typological classes or types in biology came with the theory of evolution through natural selection, which was formulated by Charles Darwin in terms of variable populations. Darwin used the expression "descent with modification" rather than "evolution".[42] Partly influenced by An Essay on the Principle of Population (1798) by Thomas Robert Malthus, Darwin noted that population growth would lead to a "struggle for existence" in which favourable variations prevailed as others perished. In each generation, many offspring fail to survive to an age of reproduction because of limited resources. This could explain the diversity of plants and animals from a common ancestry through the working of natural laws in the same way for all types of organism.[43][44][45][46] Darwin developed his theory of "natural selection" from 1838 onwards and was writing up his "big book" on the subject when Alfred Russel Wallace sent him a version of virtually the same theory in 1858. Their separate papers were presented together at an 1858 meeting of the Linnean Society of London.[47] At the end of 1859, Darwin's publication of his "abstract" as On the Origin of Species explained natural selection in detail and in a way that led to an increasingly wide acceptance of Darwin's concepts of evolution at the expense of alternative theories. Thomas Henry Huxley applied Darwin's ideas to humans, using paleontology and comparative anatomy to provide strong evidence that humans and apes shared a common ancestry. Some were disturbed by this since it implied that humans did not have a special place in the universe.[48]
The mechanisms of reproductive heritability and the origin of new traits remained a mystery. Towards this end, Darwin developed his provisional theory of pangenesis.[49] In 1865, Gregor Mendel reported that traits were inherited in a predictable manner through the independent assortment and segregation of elements (later known as genes). Mendel's laws of inheritance eventually supplanted most of Darwin's pangenesis theory.[50] August Weismann made the important distinction between germ cells that give rise to gametes (such as sperm and egg cells) and the somatic cells of the body, demonstrating that heredity passes through the germ line only. Hugo de Vries connected Darwin's pangenesis theory to Weismann's germ/soma cell distinction and proposed that Darwin's pangenes were concentrated in the cell nucleus and when expressed they could move into the cytoplasm to change the cell's structure. De Vries was also one of the researchers who made Mendel's work well known, believing that Mendelian traits corresponded to the transfer of heritable variations along the germline.[51] To explain how new variants originate, de Vries developed a mutation theory that led to a temporary rift between those who accepted Darwinian evolution and biometricians who allied with de Vries.[35][52][53] In the 1930s, pioneers in the field of population genetics, such as Ronald Fisher, Sewall Wright and J. B. S. Haldane set the foundations of evolution onto a robust statistical philosophy. The false contradiction between Darwin's theory, genetic mutations, and Mendelian inheritance was thus reconciled.[54]
In the 1920s and 1930s the so-called modern synthesis connected natural selection and population genetics, based on Mendelian inheritance, into a unified theory that applied generally to any branch of biology. The modern synthesis explained patterns observed across species in populations, through fossil transitions in palaeontology, and complex cellular mechanisms in developmental biology.[35][55] The publication of the structure of DNA by James Watson and Francis Crick with contribution of Rosalind Franklin in 1953 demonstrated a physical mechanism for inheritance.[56] Molecular biology improved understanding of the relationship between genotype and phenotype. Advancements were also made in phylogenetic systematics, mapping the transition of traits into a comparative and testable framework through the publication and use of evolutionary trees.[57][58] In 1973, evolutionary biologist Theodosius Dobzhansky penned that "nothing in biology makes sense except in the light of evolution," because it has brought to light the relations of what first seemed disjointed facts in natural history into a coherent explanatory body of knowledge that describes and predicts many observable facts about life on this planet.[59]
Since then, the modern synthesis has been further extended to explain biological phenomena across the full and integrative scale of the biological hierarchy, from genes to species.[60] One extension, known as evolutionary developmental biology and informally called "evo-devo," emphasises how changes between generations (evolution) act on patterns of change within individual organisms (development).[61][62][63] Since the beginning of the 21st century and in light of discoveries made in recent decades, some biologists have argued for an extended evolutionary synthesis, which would account for the effects of non-genetic inheritance modes, such as epigenetics, parental effects, ecological inheritance and cultural inheritance, and for evolvability.[64][65]
Evolution in organisms occurs through changes in heritable traits—the inherited characteristics of an organism. In humans, for example, eye colour is an inherited characteristic and an individual might inherit the "brown-eye trait" from one of their parents.[66] Inherited traits are controlled by genes and the complete set of genes within an organism's genome (genetic material) is called its genotype.[67]
The complete set of observable traits that make up the structure and behaviour of an organism is called its phenotype. These traits come from the interaction of its genotype with the environment.[68] As a result, many aspects of an organism's phenotype are not inherited. For example, suntanned skin comes from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. However, some people tan more easily than others, due to differences in genotypic variation; a striking example are people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn.[69]
Heritable traits are passed from one generation to the next via DNA, a molecule that encodes genetic information.[67] DNA is a long biopolymer composed of four types of bases. The sequence of bases along a particular DNA molecule specifies the genetic information, in a manner similar to a sequence of letters spelling out a sentence. Before a cell divides, the DNA is copied, so that each of the resulting two cells will inherit the DNA sequence. Portions of a DNA molecule that specify a single functional unit are called genes; different genes have different sequences of bases. Within cells, the long strands of DNA form condensed structures called chromosomes. The specific location of a DNA sequence within a chromosome is known as a locus. If the DNA sequence at a locus varies between individuals, the different forms of this sequence are called alleles. DNA sequences can change through mutations, producing new alleles. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism.[70] However, while this simple correspondence between an allele and a trait works in some cases, most traits are more complex and are controlled by quantitative trait loci (multiple interacting genes).[71][72]
Recent findings have confirmed important examples of heritable changes that cannot be explained by changes to the sequence of nucleotides in the DNA. These phenomena are classed as epigenetic inheritance systems.[73] DNA methylation marking chromatin, self-sustaining metabolic loops, gene silencing by RNA interference and the three-dimensional conformation of proteins (such as prions) are areas where epigenetic inheritance systems have been discovered at the organismic level.[74][75] Developmental biologists suggest that complex interactions in genetic networks and communication among cells can lead to heritable variations that may underlie some of the mechanics in developmental plasticity and canalisation.[76] Heritability may also occur at even larger scales. For example, ecological inheritance through the process of niche construction is defined by the regular and repeated activities of organisms in their environment. This generates a legacy of effects that modify and feed back into the selection regime of subsequent generations. Descendants inherit genes plus environmental characteristics generated by the ecological actions of ancestors.[77] Other examples of heritability in evolution that are not under the direct control of genes include the inheritance of cultural traits and symbiogenesis.[78][79]
An individual organism's phenotype results from both its genotype and the influence from the environment it has lived in. A substantial part of the phenotypic variation in a population is caused by genotypic variation.[72] The modern evolutionary synthesis defines evolution as the change over time in this genetic variation. The frequency of one particular allele will become more or less prevalent relative to other forms of that gene. Variation disappears when a new allele reaches the point of fixation—when it either disappears from the population or replaces the ancestral allele entirely.[80]
Natural selection will only cause evolution if there is enough genetic variation in a population. Before the discovery of Mendelian genetics, one common hypothesis was blending inheritance. But with blending inheritance, genetic variance would be rapidly lost, making evolution by natural selection implausible. The Hardy–Weinberg principle provides the solution to how variation is maintained in a population with Mendelian inheritance. The frequencies of alleles (variations in a gene) will remain constant in the absence of selection, mutation, migration and genetic drift.[81]
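The Hardy–Weinberg principle just mentioned can be stated compactly (standard textbook notation, supplied here for illustration; the article itself introduces no symbols). For one locus with two alleles A and a at frequencies p and q = 1 − p, random mating yields genotype frequencies

p^2\ (\text{AA}) + 2pq\ (\text{Aa}) + q^2\ (\text{aa}) = 1,

and both the allele frequencies and these genotype frequencies remain constant from generation to generation in the absence of selection, mutation, migration and genetic drift.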
Variation comes from mutations in the genome, reshuffling of genes through sexual reproduction and migration between populations (gene flow). Despite the constant introduction of new variation through mutation and gene flow, most of the genome of a species is identical in all individuals of that species.[82] However, even relatively small differences in genotype can lead to dramatic differences in phenotype: for example, chimpanzees and humans differ in only about 5% of their genomes.[83]
Mutations are changes in the DNA sequence of a cell's genome. When mutations occur, they may alter the product of a gene, or prevent the gene from functioning, or have no effect. Based on studies in the fly Drosophila melanogaster, it has been suggested that if a mutation changes a protein produced by a gene, this will probably be harmful, with about 70% of these mutations having damaging effects, and the remainder being either neutral or weakly beneficial.[84]
Mutations can involve large sections of a chromosome becoming duplicated (usually by genetic recombination), which can introduce extra copies of a gene into a genome.[85] Extra copies of genes are a major source of the raw material needed for new genes to evolve.[86] This is important because most new genes evolve within gene families from pre-existing genes that share common ancestors.[87] For example, the human eye uses four genes to make structures that sense light: three for colour vision and one for night vision; all four are descended from a single ancestral gene.[88]
New genes can be generated from an ancestral gene when a duplicate copy mutates and acquires a new function. This process is easier once a gene has been duplicated because it increases the redundancy of the system; one gene in the pair can acquire a new function while the other copy continues to perform its original function.[89][90] Other types of mutations can even generate entirely new genes from previously noncoding DNA.[91][92]
The generation of new genes can also involve small parts of several genes being duplicated, with these fragments then recombining to form new combinations with new functions.[93][94] When new genes are assembled from shuffling pre-existing parts, domains act as modules with simple independent functions, which can be mixed together to produce new combinations with new and complex functions.[95] For example, polyketide synthases are large enzymes that make antibiotics; they contain up to one hundred independent domains that each catalyse one step in the overall process, like a step in an assembly line.[96]
In asexual organisms, genes are inherited together, or linked, as they cannot mix with genes of other organisms during reproduction. In contrast, the offspring of sexual organisms contain random mixtures of their parents' chromosomes that are produced through independent assortment. In a related process called homologous recombination, sexual organisms exchange DNA between two matching chromosomes.[97] Recombination and reassortment do not alter allele frequencies, but instead change which alleles are associated with each other, producing offspring with new combinations of alleles.[98] Sex usually increases genetic variation and may increase the rate of evolution.[99][100]
The two-fold cost of sex was first described by John Maynard Smith.[101] The first cost is that in sexually dimorphic species only one of the two sexes can bear young. (This cost does not apply to hermaphroditic species, like most plants and many invertebrates.) The second cost is that any individual who reproduces sexually can only pass on 50% of its genes to any individual offspring, with even less passed on as each new generation passes.[102] Yet sexual reproduction is the more common means of reproduction among eukaryotes and multicellular organisms. The Red Queen hypothesis has been used to explain the significance of sexual reproduction as a means to enable continual evolution and adaptation in response to coevolution with other species in an ever-changing environment.[102][103][104][105]
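The "two-fold" figure can be seen with a back-of-the-envelope argument (a standard illustration, not spelled out in the article; the symbol k is supplied here). Suppose every female, sexual or asexual, produces k offspring. An asexual female's k offspring are all daughters who reproduce in turn, while a sexual female's k offspring are on average half sons, leaving only k/2 reproducing daughters. Per generation, the asexual lineage therefore grows about twice as fast:

\frac{k\ \text{(asexual daughters)}}{k/2\ \text{(sexual daughters)}} = 2.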
Gene flow is the exchange of genes between populations and between species.[106] It can therefore be a source of variation that is new to a population or to a species. Gene flow can be caused by the movement of individuals between separate populations of organisms, as might be caused by the movement of mice between inland and coastal populations, or the movement of pollen between heavy-metal-tolerant and heavy-metal-sensitive populations of grasses.
Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer. Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria.[107] In medicine, this contributes to the spread of antibiotic resistance: once a bacterium acquires resistance genes, it can rapidly transfer them to other species.[108] Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean weevil Callosobruchus chinensis has occurred.[109][110] An example of larger-scale transfers are the eukaryotic bdelloid rotifers, which have received a range of genes from bacteria, fungi and plants.[111] Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains.[112]
Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and bacteria, during the acquisition of chloroplasts and mitochondria. It is possible that eukaryotes themselves originated from horizontal gene transfers between bacteria and archaea.[113]
From a neo-Darwinian perspective, evolution occurs when there are changes in the frequencies of alleles within a population of interbreeding organisms,[81] for example, the allele for black colour in a population of moths becoming more common. Mechanisms that can lead to changes in allele frequencies include natural selection, genetic drift, gene flow and mutation bias.
Evolution by means of natural selection is the process by which traits that enhance survival and reproduction become more common in successive generations of a population. It has often been called a "self-evident" mechanism because it necessarily follows from three simple facts:[8]

Variation exists within populations of organisms with respect to morphology, physiology and behaviour (phenotypic variation).
Different traits confer different rates of survival and reproduction (differential fitness).
These traits can be passed from generation to generation (heritability of fitness).
More offspring are produced than can possibly survive, and these conditions produce competition between organisms for survival and reproduction. Consequently, organisms with traits that give them an advantage over their competitors are more likely to pass on their traits to the next generation than those with traits that do not confer an advantage.[114] This teleonomy is the quality whereby the process of natural selection creates and preserves traits that are seemingly fitted for the functional roles they perform.[115] Consequences of selection include nonrandom mating[116] and genetic hitchhiking.
The central concept of natural selection is the evolutionary fitness of an organism.[117] Fitness is measured by an organism's ability to survive and reproduce, which determines the size of its genetic contribution to the next generation.[117] However, fitness is not the same as the total number of offspring: instead fitness is indicated by the proportion of subsequent generations that carry an organism's genes.[118] For example, if an organism could survive well and reproduce rapidly, but its offspring were all too small and weak to survive, this organism would make little genetic contribution to future generations and would thus have low fitness.[117]
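The link between fitness and allele frequency change can be written explicitly in the standard one-locus, two-allele model (conventional population-genetics notation, supplied here for illustration). If genotypes AA, Aa and aa have fitnesses w_AA, w_Aa and w_aa, and allele A has frequency p (with q = 1 − p), then after one round of selection

p' = \frac{p^{2} w_{AA} + pq\, w_{Aa}}{\bar{w}}, \qquad \bar{w} = p^{2} w_{AA} + 2pq\, w_{Aa} + q^{2} w_{aa},

so A increases in frequency exactly when its carriers have above-average fitness.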
If an allele increases fitness more than the other alleles of that gene, then with each generation this allele will become more common within the population. These traits are said to be "selected for." Examples of traits that can increase fitness are enhanced survival and increased fecundity. Conversely, the lower fitness caused by having a less beneficial or deleterious allele results in that allele becoming rarer—it is "selected against."[119] Importantly, the fitness of an allele is not a fixed characteristic; if the environment changes, previously neutral or harmful traits may become beneficial and previously beneficial traits become harmful.[70] However, even if the direction of selection does reverse in this way, traits that were lost in the past may not re-evolve in an identical form (see Dollo's law).[120][121] However, a re-activation of dormant genes, as long as they have not been eliminated from the genome and were only suppressed, perhaps for hundreds of generations, can lead to the re-occurrence of traits thought to be lost, like hindlegs in dolphins, teeth in chickens, wings in wingless stick insects, and tails and additional nipples in humans.[122] "Throwbacks" such as these are known as atavisms.
Natural selection within a population for a trait that can vary across a range of values, such as height, can be categorised into three different types. The first is directional selection, which is a shift in the average value of a trait over time—for example, organisms slowly getting taller.[123] Secondly, disruptive selection is selection for extreme trait values and often results in two different values becoming most common, with selection against the average value. This would be when either short or tall organisms had an advantage, but not those of medium height. Finally, in stabilising selection there is selection against extreme trait values on both ends, which causes a decrease in variance around the average value and less diversity.[114][124] This would, for example, cause organisms to eventually have a similar height.
A special case of natural selection is sexual selection, which is selection for any trait that increases mating success by increasing the attractiveness of an organism to potential mates.[125] Traits that evolved through sexual selection are particularly prominent among males of several animal species. Although sexually favoured, traits such as cumbersome antlers, mating calls, large body size and bright colours often attract predation, which compromises the survival of individual males.[126][127] This survival disadvantage is balanced by higher reproductive success in males that show these hard-to-fake, sexually selected traits.[128]
Natural selection most generally makes nature the measure against which individuals and individual traits are more or less likely to survive. "Nature" in this sense refers to an ecosystem, that is, a system in which organisms interact with every other element, physical as well as biological, in their local environment. Eugene Odum, a founder of ecology, defined an ecosystem as: "Any unit that includes all of the organisms...in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e., exchange of materials between living and nonliving parts) within the system...."[129] Each population within an ecosystem occupies a distinct niche, or position, with distinct relationships to other parts of the system. These relationships involve the life history of the organism, its position in the food chain and its geographic range. This broad understanding of nature enables scientists to delineate specific forces which, together, comprise natural selection.
Natural selection can act at different levels of organisation, such as genes, cells, individual organisms, groups of organisms and species.[130][131][132] Selection can act at multiple levels simultaneously.[133] An example of selection occurring below the level of the individual organism are genes called transposons, which can replicate and spread throughout a genome.[134] Selection at a level above the individual, such as group selection, may allow the evolution of cooperation, as discussed below.[135]
Recombination allows alleles on the same strand of DNA to become separated. However, the rate of recombination is low (approximately two events per chromosome per generation). As a result, genes close together on a chromosome may not always be shuffled away from each other and genes that are close together tend to be inherited together, a phenomenon known as linkage.[136] This tendency is measured by finding how often two alleles occur together on a single chromosome compared to expectations, which is called their linkage disequilibrium. A set of alleles that is usually inherited in a group is called a haplotype. This can be important when one allele in a particular haplotype is strongly beneficial: natural selection can drive a selective sweep that will also cause the other alleles in the haplotype to become more common in the population; this effect is called genetic hitchhiking or genetic draft.[137] Genetic draft caused by the fact that some neutral genes are genetically linked to others that are under selection can be partially captured by an appropriate effective population size.[138]
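Linkage disequilibrium, described above as the non-random association of alleles, has a simple standard definition (notation supplied here for illustration). For alleles A and B at two loci, with haplotype frequency p_AB and allele frequencies p_A and p_B,

D = p_{AB} - p_A\, p_B,

so D = 0 when the alleles occur together exactly as often as expected by chance (linkage equilibrium), and recombination erodes a nonzero D toward zero over the generations.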
Genetic drift is the random fluctuations of allele frequencies within a population from one generation to the next.[139] When selective forces are absent or relatively weak, allele frequencies are equally likely to drift upward or downward at each successive generation because the alleles are subject to sampling error.[140] This drift halts when an allele eventually becomes fixed, either by disappearing from the population or replacing the other alleles entirely. Genetic drift may therefore eliminate some alleles from a population due to chance alone. Even in the absence of selective forces, genetic drift can cause two separate populations that began with the same genetic structure to drift apart into two divergent populations with different sets of alleles.[141]
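The sampling error described above is easy to make concrete in a short simulation (a minimal sketch in Python under the Wright–Fisher model; the function name and parameters are illustrative, not from any named library):

```python
import random

def wright_fisher(p0, n, generations, seed=1):
    """Neutral genetic drift under the Wright-Fisher model: each
    generation, the 2n gene copies of a diploid population of size n
    are resampled binomially from the current allele frequency, so
    the frequency wanders by sampling error alone."""
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        copies = sum(1 for _ in range(2 * n) if rng.random() < p)
        p = copies / (2 * n)
        if p in (0.0, 1.0):  # allele lost (0.0) or fixed (1.0): drift stops
            break
    return p

# Drift is much stronger in small populations: the small population
# typically fixes or loses the allele within 200 generations, while
# the large one usually stays near its starting frequency.
print(wright_fisher(0.5, n=10, generations=200))     # often 0.0 or 1.0
print(wright_fisher(0.5, n=10000, generations=200))  # usually near 0.5
```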
The neutral theory of molecular evolution proposed that most evolutionary changes are the result of the fixation of neutral mutations by genetic drift.[142] Hence, in this model, most genetic changes in a population are the result of constant mutation pressure and genetic drift.[143] This form of the neutral theory is now largely abandoned, since it does not seem to fit the genetic variation seen in nature.[144][145] However, a more recent and better-supported version of this model is the nearly neutral theory, where a mutation that would be effectively neutral in a small population is not necessarily neutral in a large population.[114] Other alternative theories propose that genetic drift is dwarfed by other stochastic forces in evolution, such as genetic hitchhiking, also known as genetic draft.[140][138][146]
The time for a neutral allele to become fixed by genetic drift depends on population size, with fixation occurring more rapidly in smaller populations.[147] The number of individuals in a population is not critical, but instead a measure known as the effective population size.[148] The effective population is usually smaller than the total population since it takes into account factors such as the level of inbreeding and the stage of the lifecycle in which the population is the smallest.[148] The effective population size may not be the same for every gene in the same population.[149]
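Two standard results of neutral-drift theory quantify this (consistent with, though not stated in, the paragraph above; N is the census size and N_e the effective size of a diploid population): a new neutral mutation fixes with probability equal to its initial frequency, 1/(2N), and, conditional on fixing, takes on average roughly 4N_e generations to do so, which is why fixation is faster in smaller populations.

\Pr(\text{fixation}) = \frac{1}{2N}, \qquad \bar{t}_{\text{fix}} \approx 4 N_e\ \text{generations}.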
It is usually difficult to measure the relative importance of selection and neutral processes, including drift.[150] The comparative importance of adaptive and non-adaptive forces in driving evolutionary change is an area of current research.[151]
Gene flow involves the exchange of genes between populations and between species.[106] The presence or absence of gene flow fundamentally changes the course of evolution. Due to the complexity of organisms, any two completely isolated populations will eventually evolve genetic incompatibilities through neutral processes, as in the Bateson-Dobzhansky-Muller model, even if both populations remain essentially identical in terms of their adaptation to the environment.
If genetic differentiation between populations develops, gene flow between them can introduce traits or alleles that are disadvantageous in the local population. This may lead organisms within these populations to evolve mechanisms that prevent mating with genetically distant populations, eventually resulting in the appearance of new species. The exchange of genetic information between individuals is thus fundamentally important for the development of the Biological Species Concept (BSC).
During the development of the modern synthesis, Sewall Wright developed his shifting balance theory, which regarded gene flow between partially isolated populations as an important aspect of adaptive evolution.[152] However, recently there has been substantial criticism of the importance of the shifting balance theory.[153]
Mutation bias is usually conceived as a difference in expected rates for two different kinds of mutation, e.g., transition-transversion bias, GC-AT bias, deletion-insertion bias. This is related to the idea of developmental bias.
Haldane[154] and Fisher[155] argued that, because mutation is a weak pressure easily overcome by selection, tendencies of mutation would be ineffectual except under conditions of neutral evolution or extraordinarily high mutation rates. This opposing-pressures argument was long used to dismiss the possibility of internal tendencies in evolution,[156] until the molecular era prompted renewed interest in neutral evolution.
Noboru Sueoka[157] and Ernst Freese[158] proposed that systematic biases in mutation might be responsible for systematic differences in genomic GC composition between species. The identification of a GC-biased E. coli mutator strain in 1967,[159] along with the proposal of the neutral theory, established the plausibility of mutational explanations for molecular patterns, which are now common in the molecular evolution literature.
For instance, mutation biases are frequently invoked in models of codon usage.[160] Such models also include effects of selection, following the mutation-selection-drift model,[161] which allows both for mutation biases and differential selection based on effects on translation. Hypotheses of mutation bias have played an important role in the development of thinking about the evolution of genome composition, including isochores.[162] Different insertion vs. deletion biases in different taxa can lead to the evolution of different genome sizes.[163][164] The hypothesis of Lynch regarding genome size relies on mutational biases toward increase or decrease in genome size.
However, mutational hypotheses for the evolution of composition suffered a reduction in scope when it was discovered that (1) GC-biased gene conversion makes an important contribution to composition in diploid organisms such as mammals[165] and (2) bacterial genomes frequently have AT-biased mutation.[166]
Contemporary thinking about the role of mutation biases reflects a different theory from that of Haldane and Fisher. More recent work[156] showed that the original "pressures" theory assumes that evolution is based on standing variation: when evolution depends on the introduction of new alleles, mutational and developmental biases in introduction can impose biases on evolution without requiring neutral evolution or high mutation rates.
Several recent studies report that the mutations implicated in adaptation reflect common mutation biases[167][168][169] though others dispute this interpretation.[170]
Evolution influences every aspect of the form and behaviour of organisms. Most prominent are the specific behavioural and physical adaptations that are the outcome of natural selection. These adaptations increase fitness by aiding activities such as finding food, avoiding predators or attracting mates. Organisms can also respond to selection by cooperating with each other, usually by aiding their relatives or engaging in mutually beneficial symbiosis. In the longer term, evolution produces new species through splitting ancestral populations of organisms into new groups that cannot or will not interbreed.
These outcomes of evolution are distinguished based on time scale as macroevolution versus microevolution. Macroevolution refers to evolution that occurs at or above the level of species, in particular speciation and extinction; whereas microevolution refers to smaller evolutionary changes within a species or population, in particular shifts in allele frequency and adaptation.[172] In general, macroevolution is regarded as the outcome of long periods of microevolution.[173] Thus, the distinction between micro- and macroevolution is not a fundamental one—the difference is simply the time involved.[174] However, in macroevolution, the traits of the entire species may be important. For instance, a large amount of variation among individuals allows a species to rapidly adapt to new habitats, lessening the chance of it going extinct, while a wide geographic range increases the chance of speciation, by making it more likely that part of the population will become isolated. In this sense, microevolution and macroevolution might involve selection at different levels—with microevolution acting on genes and organisms, versus macroevolutionary processes such as species selection acting on entire species and affecting their rates of speciation and extinction.[175][176][177]
A common misconception is that evolution has goals, long-term plans, or an innate tendency for "progress", as expressed in beliefs such as orthogenesis and evolutionism; realistically however, evolution has no long-term goal and does not necessarily produce greater complexity.[178][179][180] Although complex species have evolved, they occur as a side effect of the overall number of organisms increasing and simple forms of life still remain more common in the biosphere.[181] For example, the overwhelming majority of species are microscopic prokaryotes, which form about half the world's biomass despite their small size,[182] and constitute the vast majority of Earth's biodiversity.[183] Simple organisms have therefore been the dominant form of life on Earth throughout its history and continue to be the main form of life up to the present day, with complex life only appearing more diverse because it is more noticeable.[184] Indeed, the evolution of microorganisms is particularly important to modern evolutionary research, since their rapid reproduction allows the study of experimental evolution and the observation of evolution and adaptation in real time.[185][186]
Adaptation is the process that makes organisms better suited to their habitat.[187][188] The term adaptation may also refer to a trait that is important for an organism's survival, such as the adaptation of horses' teeth to the grinding of grass. By using the term adaptation for the evolutionary process and adaptive trait for the product (the bodily part or function), the two senses of the word may be distinguished. Adaptations are produced by natural selection.[189] The following definitions are due to Theodosius Dobzhansky: adaptation is the evolutionary process whereby an organism becomes better able to live in its habitat or habitats; adaptedness is the state of being adapted, the degree to which an organism is able to live and reproduce in a given set of habitats; and an adaptive trait is an aspect of the developmental pattern of the organism which enables or enhances the probability of that organism surviving and reproducing.
Adaptation may cause either the gain of a new feature or the loss of an ancestral feature. An example that shows both types of change is bacterial adaptation to antibiotic selection, with genetic changes causing antibiotic resistance both by modifying the target of the drug and by increasing the activity of transporters that pump the drug out of the cell.[193] Other striking examples are the bacteria Escherichia coli evolving the ability to use citric acid as a nutrient in a long-term laboratory experiment,[194] Flavobacterium evolving a novel enzyme that allows these bacteria to grow on the by-products of nylon manufacturing,[195][196] and the soil bacterium Sphingobium evolving an entirely new metabolic pathway that degrades the synthetic pesticide pentachlorophenol.[197][198] An interesting but still controversial idea is that some adaptations might increase the ability of organisms to generate genetic diversity and adapt by natural selection (increasing organisms' evolvability).[199][200][201][202][203]
Adaptation occurs through the gradual modification of existing structures. Consequently, structures with similar internal organisation may have different functions in related organisms. This is the result of a single ancestral structure being adapted to function in different ways. The bones within bat wings, for example, are very similar to those in mouse feet and primate hands, due to the descent of all these structures from a common mammalian ancestor.[205] However, since all living organisms are related to some extent,[206] even organs that appear to have little or no structural similarity, such as arthropod, squid and vertebrate eyes, or the limbs and wings of arthropods and vertebrates, can depend on a common set of homologous genes that control their assembly and function; this is called deep homology.[207][208]
During evolution, some structures may lose their original function and become vestigial structures.[209] Such structures may have little or no function in a current species, yet have a clear function in ancestral species, or other closely related species. Examples include pseudogenes,[210] the non-functional remains of eyes in blind cave-dwelling fish,[211] wings in flightless birds,[212] the presence of hip bones in whales and snakes,[204] and sexual traits in organisms that reproduce via asexual reproduction.[213] Examples of vestigial structures in humans include wisdom teeth,[214] the coccyx,[209] the vermiform appendix,[209] and other behavioural vestiges such as goose bumps[215][216] and primitive reflexes.[217][218][219]
However, many traits that appear to be simple adaptations are in fact exaptations: structures originally adapted for one function, but which coincidentally became somewhat useful for some other function in the process.[220] One example is the African lizard Holaspis guentheri, which developed an extremely flat head for hiding in crevices, as can be seen by looking at its near relatives. However, in this species, the head has become so flattened that it assists in gliding from tree to tree—an exaptation.[220] Within cells, molecular machines such as the bacterial flagella[221] and protein sorting machinery[222] evolved by the recruitment of several pre-existing proteins that previously had different functions.[172] Another example is the recruitment of enzymes from glycolysis and xenobiotic metabolism to serve as structural proteins called crystallins within the lenses of organisms' eyes.[223][224]
An area of current investigation in evolutionary developmental biology is the developmental basis of adaptations and exaptations.[225] This research addresses the origin and evolution of embryonic development and how modifications of development and developmental processes produce novel features.[226] These studies have shown that evolution can alter development to produce new structures, such as embryonic bone structures that develop into the jaw in other animals instead forming part of the middle ear in mammals.[227] It is also possible for structures that have been lost in evolution to reappear due to changes in developmental genes, such as a mutation in chickens causing embryos to grow teeth similar to those of crocodiles.[228] It is now becoming clear that most alterations in the form of organisms are due to changes in a small set of conserved genes.[229]
Interactions between organisms can produce both conflict and cooperation. When the interaction is between pairs of species, such as a pathogen and a host, or a predator and its prey, these species can develop matched sets of adaptations. Here, the evolution of one species causes adaptations in a second species. These changes in the second species then, in turn, cause new adaptations in the first species. This cycle of selection and response is called coevolution.[230] An example is the production of tetrodotoxin in the rough-skinned newt and the evolution of tetrodotoxin resistance in its predator, the common garter snake. In this predator-prey pair, an evolutionary arms race has produced high levels of toxin in the newt and correspondingly high levels of toxin resistance in the snake.[231]
Not all co-evolved interactions between species involve conflict.[232] Many cases of mutually beneficial interactions have evolved. For instance, an extreme cooperation exists between plants and the mycorrhizal fungi that grow on their roots and aid the plant in absorbing nutrients from the soil.[233] This is a reciprocal relationship as the plants provide the fungi with sugars from photosynthesis. Here, the fungi actually grow inside plant cells, allowing them to exchange nutrients with their hosts, while sending signals that suppress the plant immune system.[234]
Coalitions between organisms of the same species have also evolved. An extreme case is the eusociality found in social insects, such as bees, termites and ants, where sterile insects feed and guard the small number of organisms in a colony that are able to reproduce. On an even smaller scale, the somatic cells that make up the body of an animal limit their reproduction so they can maintain a stable organism, which then supports a small number of the animal's germ cells to produce offspring. Here, somatic cells respond to specific signals that instruct them whether to grow, remain as they are, or die. If cells ignore these signals and multiply inappropriately, their uncontrolled growth causes cancer.[235]
Such cooperation within species may have evolved through the process of kin selection, which is where one organism acts to help raise a relative's offspring.[236] This activity is selected for because if the helping individual contains alleles which promote the helping activity, it is likely that its kin will also contain these alleles and thus those alleles will be passed on.[237] Other processes that may promote cooperation include group selection, where cooperation provides benefits to a group of organisms.[238]
Speciation is the process where a species diverges into two or more descendant species.[239]
There are multiple ways to define the concept of "species." The choice of definition depends on the particularities of the species concerned.[240] For example, some species concepts apply more readily toward sexually reproducing organisms while others lend themselves better toward asexual organisms. Despite their diversity, these concepts can be placed into one of three broad philosophical approaches: interbreeding, ecological and phylogenetic.[241] The Biological Species Concept (BSC) is a classic example of the interbreeding approach. Defined by evolutionary biologist Ernst Mayr in 1942, the BSC states that "species are groups of actually or potentially interbreeding natural populations, which are reproductively isolated from other such groups."[242] Despite its wide and long-term use, the BSC, like other species concepts, is not without controversy, for example because these concepts cannot be applied to prokaryotes;[243] this is called the species problem.[240] Some researchers have attempted a unifying monistic definition of species, while others adopt a pluralistic approach and suggest that there may be different ways to logically interpret the definition of a species.[240][241]
Barriers to reproduction between two diverging sexual populations are required for the populations to become new species. Gene flow may slow this process by spreading the new genetic variants also to the other populations. Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules.[244] Such hybrids are generally infertile. In this case, closely related species may regularly interbreed, but hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype.[245] The importance of hybridisation in producing new species of animals is unclear, although cases have been seen in many types of animals,[246] with the gray tree frog being a particularly well-studied example.[247]
Speciation has been observed multiple times under both controlled laboratory conditions (see laboratory experiments of speciation) and in nature.[248] In sexually reproducing organisms, speciation results from reproductive isolation followed by genealogical divergence. There are four primary geographic modes of speciation. The most common in animals is allopatric speciation, which occurs in populations initially isolated geographically, such as by habitat fragmentation or migration. Selection under these conditions can produce very rapid changes in the appearance and behaviour of organisms.[249][250] As selection and drift act independently on populations isolated from the rest of their species, separation may eventually produce organisms that cannot interbreed.[251]
The second mode of speciation is peripatric speciation, which occurs when small populations of organisms become isolated in a new environment. This differs from allopatric speciation in that the isolated populations are numerically much smaller than the parental population. Here, the founder effect causes rapid speciation after an increase in inbreeding increases selection on homozygotes, leading to rapid genetic change.[252]
The third mode is parapatric speciation. This is similar to peripatric speciation in that a small population enters a new habitat, but differs in that there is no physical separation between these two populations. Instead, speciation results from the evolution of mechanisms that reduce gene flow between the two populations.[239] Generally this occurs when there has been a drastic change in the environment within the parental species' habitat. One example is the grass Anthoxanthum odoratum, which can undergo parapatric speciation in response to localised metal pollution from mines.[253] Here, plants evolve that have resistance to high levels of metals in the soil. Selection against interbreeding with the metal-sensitive parental population produced a gradual change in the flowering time of the metal-resistant plants, which eventually produced complete reproductive isolation. Selection against hybrids between the two populations may cause reinforcement, which is the evolution of traits that promote mating within a species, as well as character displacement, which is when two species become more distinct in appearance.[254]
Finally, in sympatric speciation species diverge without geographic isolation or changes in habitat. This form is rare since even a small amount of gene flow may remove genetic differences between parts of a population.[255] Generally, sympatric speciation in animals requires the evolution of both genetic differences and nonrandom mating, to allow reproductive isolation to evolve.[256]
One type of sympatric speciation involves crossbreeding of two related species to produce a new hybrid species. This is not common in animals as animal hybrids are usually sterile. This is because during meiosis the homologous chromosomes from each parent are from different species and cannot successfully pair. However, it is more common in plants because plants often double their number of chromosomes, to form polyploids.[257] This allows the chromosomes from each parental species to form matching pairs during meiosis, since each parent's chromosomes are represented by a pair already.[258] An example of such a speciation event is when the plant species Arabidopsis thaliana and Arabidopsis arenosa crossbred to give the new species Arabidopsis suecica.[259] This happened about 20,000 years ago,[260] and the speciation process has been repeated in the laboratory, which allows the study of the genetic mechanisms involved in this process.[261] Indeed, chromosome doubling within a species may be a common cause of reproductive isolation, as half the doubled chromosomes will be unmatched when breeding with undoubled organisms.[262]
Speciation events are important in the theory of punctuated equilibrium, which accounts for the pattern in the fossil record of short "bursts" of evolution interspersed with relatively long periods of stasis, where species remain relatively unchanged.[263] In this theory, speciation and rapid evolution are linked, with natural selection and genetic drift acting most strongly on organisms undergoing speciation in novel habitats or small populations. As a result, the periods of stasis in the fossil record correspond to the parental population, while the organisms undergoing speciation and rapid evolution live in small populations or geographically restricted habitats and are therefore rarely preserved as fossils.[176]
Extinction is the disappearance of an entire species. Extinction is not an unusual event, as species regularly appear through speciation and disappear through extinction.[264] Nearly all animal and plant species that have lived on Earth are now extinct,[265] and extinction appears to be the ultimate fate of all species.[266] These extinctions have happened continuously throughout the history of life, although the rate of extinction spikes in occasional mass extinction events.[267] The Cretaceous–Paleogene extinction event, during which the non-avian dinosaurs became extinct, is the most well-known, but the earlier Permian–Triassic extinction event was even more severe, with approximately 96% of all marine species driven to extinction.[267] The Holocene extinction event is an ongoing mass extinction associated with humanity's expansion across the globe over the past few thousand years. Present-day extinction rates are 100–1,000 times greater than the background rate and up to 30% of current species may be extinct by the mid-21st century.[268] Human activities are now the primary cause of the ongoing extinction event;[269][270] global warming may further accelerate it in the future.[271] Despite the estimated extinction of more than 99 percent of all species that ever lived on Earth,[272][273] about 1 trillion species are estimated to be on Earth currently with only one-thousandth of one percent described.[274]
The role of extinction in evolution is not very well understood and may depend on which type of extinction is considered.[267] The causes of the continuous "low-level" extinction events, which form the majority of extinctions, may be the result of competition between species for limited resources (the competitive exclusion principle).[61] If one species can out-compete another, this could produce species selection, with the fitter species surviving and the other species being driven to extinction.[131] The intermittent mass extinctions are also important, but instead of acting as a selective force, they drastically reduce diversity in a nonspecific manner and promote bursts of rapid evolution and speciation in survivors.[275]
The Earth is about 4.54 billion years old.[276][277][278] The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago,[13][279] during the Eoarchean Era after a geological crust started to solidify following the earlier molten Hadean Eon. Microbial mat fossils have been found in 3.48 billion-year-old sandstone in Western Australia.[15][16][17] Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old metasedimentary rocks discovered in Western Greenland[14] as well as "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia.[280][281] Commenting on the Australian findings, Stephen Blair Hedges wrote, "If life arose relatively quickly on Earth, then it could be common in the universe."[280][282] In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth.[283]
More than 99 percent of all species, amounting to over five billion species,[284] that ever lived on Earth are estimated to be extinct.[272][273] Estimates on the number of Earth's current species range from 10 million to 14 million,[285][286] of which about 1.9 million are estimated to have been named[287] and 1.6 million documented in a central database to date,[288] leaving at least 80 percent not yet described.
Highly energetic chemistry is thought to have produced a self-replicating molecule around 4 billion years ago, and half a billion years later the last common ancestor of all life existed.[11] The current scientific consensus is that the complex biochemistry that makes up life came from simpler chemical reactions.[289] The beginning of life may have included self-replicating molecules such as RNA[290] and the assembly of simple cells.[291]
All organisms on Earth are descended from a common ancestor or ancestral gene pool.[206][292] Current species are a stage in the process of evolution, with their diversity the product of a long series of speciation and extinction events.[293] The common descent of organisms was first deduced from four simple facts about organisms: First, they have geographic distributions that cannot be explained by local adaptation. Second, the diversity of life is not a set of completely unique organisms, but organisms that share morphological similarities. Third, vestigial traits with no clear purpose resemble functional ancestral traits. Fourth, organisms can be classified using these similarities into a hierarchy of nested groups, similar to a family tree.[294]
Modern research has suggested that, due to horizontal gene transfer, this "tree of life" may be more complicated than a simple branching tree since some genes have spread independently between distantly related species.[295][296] To solve this problem and others, some authors prefer to use the "Coral of life" as a metaphor or a mathematical model to illustrate the evolution of life. This view dates back to an idea briefly mentioned by Darwin but later abandoned.[297]
Past species have also left records of their evolutionary history. Fossils, along with the comparative anatomy of present-day organisms, constitute the morphological, or anatomical, record.[298] By comparing the anatomies of both modern and extinct species, paleontologists can infer the lineages of those species. However, this approach is most successful for organisms that had hard body parts, such as shells, bones or teeth. Further, as prokaryotes such as bacteria and archaea share a limited set of common morphologies, their fossils do not provide information on their ancestry.
More recently, evidence for common descent has come from the study of biochemical similarities between organisms. For example, all living cells use the same basic set of nucleotides and amino acids.[299] The development of molecular genetics has revealed the record of evolution left in organisms' genomes: dating when species diverged through the molecular clock produced by mutations.[300] For example, these DNA sequence comparisons have revealed that humans and chimpanzees share 98% of their genomes and analysing the few areas where they differ helps shed light on when the common ancestor of these species existed.[301]
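The molecular-clock logic in this paragraph amounts to a one-line calculation: if two lineages accumulate substitutions independently after splitting, the divergence time is roughly T = d / (2μ), where d is the fraction of differing sites and μ is the substitution rate per site per year. The numbers in the sketch below are placeholders for illustration, not published estimates.

```python
# Hypothetical molecular-clock estimate: T = d / (2 * mu).
d = 0.012     # assumed fraction of sites differing between the two species
mu = 1.0e-9   # assumed substitution rate per site per year

T = d / (2 * mu)   # factor of 2: changes accumulate along both lineages
print(f"estimated divergence: {T / 1e6:.0f} million years ago")   # -> 6
```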
Prokaryotes inhabited the Earth from approximately 3–4 billion years ago.[303][304] No obvious changes in morphology or cellular organisation occurred in these organisms over the next few billion years.[305] Eukaryotic cells emerged between 1.6 and 2.7 billion years ago. The next major change in cell structure came when bacteria were engulfed by eukaryotic cells, in a cooperative association called endosymbiosis.[306][307] The engulfed bacteria and the host cell then underwent coevolution, with the bacteria evolving into either mitochondria or hydrogenosomes.[308] Another engulfment of cyanobacterial-like organisms led to the formation of chloroplasts in algae and plants.[309]
The history of life was that of the unicellular eukaryotes, prokaryotes and archaea until about 610 million years ago, when multicellular organisms began to appear in the oceans in the Ediacaran period.[303][310] The evolution of multicellularity occurred in multiple independent events, in organisms as diverse as sponges, brown algae, cyanobacteria, slime moulds and myxobacteria.[311] In January 2016, scientists reported that, about 800 million years ago, a minor genetic change in a single molecule called GK-PID may have allowed organisms to go from a single-celled organism to one of many cells.[312]
Soon after the emergence of these first multicellular organisms, a remarkable amount of biological diversity appeared over approximately 10 million years, in an event called the Cambrian explosion. Here, the majority of types of modern animals appeared in the fossil record, as well as unique lineages that subsequently became extinct.[313] Various triggers for the Cambrian explosion have been proposed, including the accumulation of oxygen in the atmosphere from photosynthesis.[314]
About 500 million years ago, plants and fungi colonised the land and were soon followed by arthropods and other animals.[315] Insects were particularly successful and even today make up the majority of animal species.[316] Amphibians first appeared around 364 million years ago, followed by early amniotes and birds around 155 million years ago (both from "reptile"-like lineages), mammals around 129 million years ago, homininae around 10 million years ago and modern humans around 250,000 years ago.[317][318][319] However, despite the evolution of these large animals, smaller organisms similar to the types that evolved early in this process continue to be highly successful and dominate the Earth, with the majority of both biomass and species being prokaryotes.[183]
Concepts and models used in evolutionary biology, such as natural selection, have many applications.[320]
Artificial selection is the intentional selection of traits in a population of organisms. This has been used for thousands of years in the domestication of plants and animals.[321] More recently, such selection has become a vital part of genetic engineering, with selectable markers such as antibiotic resistance genes being used to manipulate DNA. Proteins with valuable properties have evolved by repeated rounds of mutation and selection (for example modified enzymes and new antibodies) in a process called directed evolution.[322]
Understanding the changes that have occurred during an organism's evolution can reveal the genes needed to construct parts of the body, genes which may be involved in human genetic disorders.[323] For example, the Mexican tetra is an albino cavefish that lost its eyesight during evolution. Breeding together different populations of this blind fish produced some offspring with functional eyes, since different mutations had occurred in the isolated populations that had evolved in different caves.[324] This helped identify genes required for vision and pigmentation.[325]
Evolutionary theory has many applications in medicine. Many human diseases are not static phenomena, but capable of evolution. Viruses, bacteria, fungi and cancers evolve to be resistant to host immune defences, as well as to pharmaceutical drugs.[326][327][328] These same problems occur in agriculture with pesticide[329] and herbicide[330] resistance. It is possible that we are facing the end of the effective life of most available antibiotics,[331] and predicting the evolution and evolvability[332] of our pathogens, and devising strategies to slow or circumvent them, requires deeper knowledge of the complex forces driving evolution at the molecular level.[333]
In computer science, simulations of evolution using evolutionary algorithms and artificial life started in the 1960s and were extended with simulation of artificial selection.[334] Artificial evolution became a widely recognised optimisation method as a result of the work of Ingo Rechenberg in the 1960s. He used evolution strategies to solve complex engineering problems.[335] Genetic algorithms in particular became popular through the writing of John Henry Holland.[336] Practical applications also include automatic evolution of computer programmes.[337] Evolutionary algorithms are now used to solve multi-dimensional problems more efficiently than software produced by human designers and also to optimise the design of systems.[338]
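As a minimal illustration of the genetic-algorithm idea described here (the toy objective, population size and rates are invented for the example), the following Python evolves bit strings toward an all-ones target using the canonical loop of selection, crossover and mutation.

```python
import random

random.seed(0)
GENOME_LEN, POP_SIZE, MUT_RATE = 20, 30, 0.02

def fitness(genome):
    return sum(genome)                      # toy objective: count the 1-bits

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)   # one-point crossover
    return a[:cut] + b[cut:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUT_RATE else bit for bit in genome]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == GENOME_LEN:       # stop once the optimum is found
        break
    parents = pop[: POP_SIZE // 2]          # truncation selection: keep fitter half
    pop = [mutate(crossover(random.choice(parents), random.choice(parents)))
           for _ in range(POP_SIZE)]

print(f"best fitness {fitness(max(pop, key=fitness))} after {gen} generations")
```

Practical evolution strategies and genetic algorithms differ mainly in representation, selection scheme and variation operators, but they all follow this generate-select-vary cycle.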
In the 19th century, particularly after the publication of On the Origin of Species in 1859, the idea that life had evolved was an active source of academic debate centred on the philosophical, social and religious implications of evolution. Today, the modern evolutionary synthesis is accepted by a vast majority of scientists.[61] However, evolution remains a contentious concept for some theists.[340]
While various religions and denominations have reconciled their beliefs with evolution through concepts such as theistic evolution, there are creationists who believe that evolution is contradicted by the creation myths found in their religions and who raise various objections to evolution.[172][341][342] As had been demonstrated by responses to the publication of Vestiges of the Natural History of Creation in 1844, the most controversial aspect of evolutionary biology is the implication of human evolution that humans share common ancestry with apes and that the mental and moral faculties of humanity have the same types of natural causes as other inherited traits in animals.[343] In some countries, notably the United States, these tensions between science and religion have fuelled the current creation–evolution controversy, a religious conflict focusing on politics and public education.[344] While other scientific fields such as cosmology[345] and Earth science[346] also conflict with literal interpretations of many religious texts, evolutionary biology experiences significantly more opposition from religious literalists.
The teaching of evolution in American secondary school biology classes was uncommon in most of the first half of the 20th century. The Scopes Trial decision of 1925 caused the subject to become very rare in American secondary biology textbooks for a generation, but it was gradually re-introduced later and became legally protected with the 1968 Epperson v. Arkansas decision. Since then, the competing religious belief of creationism was legally disallowed in secondary school curricula in various decisions in the 1970s and 1980s, but it returned in pseudoscientific form as intelligent design (ID), to be excluded once again in the 2005 Kitzmiller v. Dover Area School District case.[347] The debate over Darwin's ideas did not generate significant controversy in China.[348]
en/1921.html.txt
ADDED
@@ -0,0 +1,195 @@
Evolution is change in the heritable characteristics of biological populations over successive generations.[1][2] These characteristics are the expressions of genes that are passed on from parent to offspring during reproduction. Different characteristics tend to exist within any given population as a result of mutation, genetic recombination and other sources of genetic variation.[3] Evolution occurs when evolutionary processes such as natural selection (including sexual selection) and genetic drift act on this variation, resulting in certain characteristics becoming more common or rare within a population.[4] It is this process of evolution that has given rise to biodiversity at every level of biological organisation, including the levels of species, individual organisms and molecules.[5][6]
|
6 |
+
|
7 |
+
The scientific theory of evolution by natural selection was conceived independently by Charles Darwin and Alfred Russel Wallace in the mid-19th century and was set out in detail in Darwin's book On the Origin of Species.[7] Evolution by natural selection was first demonstrated by the observation that more offspring are often produced than can possibly survive. This is followed by three observable facts about living organisms: (1) traits vary among individuals with respect to their morphology, physiology and behaviour (phenotypic variation), (2) different traits confer different rates of survival and reproduction (differential fitness) and (3) traits can be passed from generation to generation (heritability of fitness).[8] Thus, in successive generations members of a population are more likely to be replaced by the progeny of parents with favourable characteristics that have enabled them to survive and reproduce in their respective environments. In the early 20th century, other competing ideas of evolution such as mutationism and orthogenesis were refuted as the modern synthesis reconciled Darwinian evolution with classical genetics, which established adaptive evolution as being caused by natural selection acting on Mendelian genetic variation.[9]
All life on Earth shares a last universal common ancestor (LUCA)[10][11][12] that lived approximately 3.5–3.8 billion years ago.[13] The fossil record includes a progression from early biogenic graphite,[14] to microbial mat fossils,[15][16][17] to fossilised multicellular organisms. Existing patterns of biodiversity have been shaped by repeated formations of new species (speciation), changes within species (anagenesis) and loss of species (extinction) throughout the evolutionary history of life on Earth.[18] Morphological and biochemical traits are more similar among species that share a more recent common ancestor, and can be used to reconstruct phylogenetic trees.[19][20]
Evolutionary biologists have continued to study various aspects of evolution by forming and testing hypotheses as well as constructing theories based on evidence from the field or laboratory and on data generated by the methods of mathematical and theoretical biology. Their discoveries have influenced not just the development of biology but numerous other scientific and industrial fields, including agriculture, medicine and computer science.[21]
The proposal that one type of organism could descend from another type goes back to some of the first pre-Socratic Greek philosophers, such as Anaximander and Empedocles.[23] Such proposals survived into Roman times. The poet and philosopher Lucretius followed Empedocles in his masterwork De rerum natura (On the Nature of Things).[24][25]
In contrast to these materialistic views, Aristotelianism considered all natural things as actualisations of fixed natural possibilities, known as forms.[26][27] This was part of a medieval teleological understanding of nature in which all things have an intended role to play in a divine cosmic order. Variations of this idea became the standard understanding of the Middle Ages and were integrated into Christian learning, but Aristotle did not demand that real types of organisms always correspond one-for-one with exact metaphysical forms and specifically gave examples of how new types of living things could come to be.[28]
In the 17th century, the new method of modern science rejected the Aristotelian approach. It sought explanations of natural phenomena in terms of physical laws that were the same for all visible things and that did not require the existence of any fixed natural categories or divine cosmic order. However, this new approach was slow to take root in the biological sciences, the last bastion of the concept of fixed natural types. John Ray applied one of the previously more general terms for fixed natural types, "species", to plant and animal types, but he strictly identified each type of living thing as a species and proposed that each species could be defined by the features that perpetuated themselves generation after generation.[29] The biological classification introduced by Carl Linnaeus in 1735 explicitly recognised the hierarchical nature of species relationships, but still viewed species as fixed according to a divine plan.[30]
Other naturalists of this time speculated on the evolutionary change of species over time according to natural laws. In 1751, Pierre Louis Maupertuis wrote of natural modifications occurring during reproduction and accumulating over many generations to produce new species.[31] Georges-Louis Leclerc, Comte de Buffon suggested that species could degenerate into different organisms, and Erasmus Darwin proposed that all warm-blooded animals could have descended from a single microorganism (or "filament").[32] The first full-fledged evolutionary scheme was Jean-Baptiste Lamarck's "transmutation" theory of 1809,[33] which envisaged spontaneous generation continually producing simple forms of life that developed greater complexity in parallel lineages with an inherent progressive tendency, and postulated that on a local level, these lineages adapted to the environment by inheriting changes caused by their use or disuse in parents.[34][35] (The latter process was later called Lamarckism.)[34][36][37][38] These ideas were condemned by established naturalists as speculation lacking empirical support. In particular, Georges Cuvier insisted that species were unrelated and fixed, their similarities reflecting divine design for functional needs. In the meantime, Ray's ideas of benevolent design had been developed by William Paley into the Natural Theology or Evidences of the Existence and Attributes of the Deity (1802), which proposed complex adaptations as evidence of divine design and which was admired by Charles Darwin.[39][40][41]
The crucial break from the concept of constant typological classes or types in biology came with the theory of evolution through natural selection, which was formulated by Charles Darwin in terms of variable populations. Darwin used the expression "descent with modification" rather than "evolution".[42] Partly influenced by An Essay on the Principle of Population (1798) by Thomas Robert Malthus, Darwin noted that population growth would lead to a "struggle for existence" in which favourable variations prevailed as others perished. In each generation, many offspring fail to survive to an age of reproduction because of limited resources. This could explain the diversity of plants and animals from a common ancestry through the working of natural laws in the same way for all types of organism.[43][44][45][46] Darwin developed his theory of "natural selection" from 1838 onwards and was writing up his "big book" on the subject when Alfred Russel Wallace sent him a version of virtually the same theory in 1858. Their separate papers were presented together at an 1858 meeting of the Linnean Society of London.[47] At the end of 1859, Darwin's publication of his "abstract" as On the Origin of Species explained natural selection in detail and in a way that led to an increasingly wide acceptance of Darwin's concepts of evolution at the expense of alternative theories. Thomas Henry Huxley applied Darwin's ideas to humans, using paleontology and comparative anatomy to provide strong evidence that humans and apes shared a common ancestry. Some were disturbed by this since it implied that humans did not have a special place in the universe.[48]
The mechanisms of reproductive heritability and the origin of new traits remained a mystery. Towards this end, Darwin developed his provisional theory of pangenesis.[49] In 1865, Gregor Mendel reported that traits were inherited in a predictable manner through the independent assortment and segregation of elements (later known as genes). Mendel's laws of inheritance eventually supplanted most of Darwin's pangenesis theory.[50] August Weismann made the important distinction between germ cells that give rise to gametes (such as sperm and egg cells) and the somatic cells of the body, demonstrating that heredity passes through the germ line only. Hugo de Vries connected Darwin's pangenesis theory to Weismann's germ/soma cell distinction and proposed that Darwin's pangenes were concentrated in the cell nucleus and when expressed they could move into the cytoplasm to change the cell's structure. De Vries was also one of the researchers who made Mendel's work well known, believing that Mendelian traits corresponded to the transfer of heritable variations along the germline.[51] To explain how new variants originate, de Vries developed a mutation theory that led to a temporary rift between those who accepted Darwinian evolution and biometricians who allied with de Vries.[35][52][53] In the 1930s, pioneers in the field of population genetics, such as Ronald Fisher, Sewall Wright and J. B. S. Haldane set the foundations of evolution onto a robust statistical philosophy. The false contradiction between Darwin's theory, genetic mutations, and Mendelian inheritance was thus reconciled.[54]
In the 1920s and 1930s the so-called modern synthesis connected natural selection and population genetics, based on Mendelian inheritance, into a unified theory that applied generally to any branch of biology. The modern synthesis explained patterns observed across species in populations, through fossil transitions in palaeontology, and complex cellular mechanisms in developmental biology.[35][55] The publication of the structure of DNA by James Watson and Francis Crick with contribution of Rosalind Franklin in 1953 demonstrated a physical mechanism for inheritance.[56] Molecular biology improved understanding of the relationship between genotype and phenotype. Advancements were also made in phylogenetic systematics, mapping the transition of traits into a comparative and testable framework through the publication and use of evolutionary trees.[57][58] In 1973, evolutionary biologist Theodosius Dobzhansky penned that "nothing in biology makes sense except in the light of evolution," because it has brought to light the relations of what first seemed disjointed facts in natural history into a coherent explanatory body of knowledge that describes and predicts many observable facts about life on this planet.[59]
Since then, the modern synthesis has been further extended to explain biological phenomena across the full and integrative scale of the biological hierarchy, from genes to species.[60] One extension, known as evolutionary developmental biology and informally called "evo-devo," emphasises how changes between generations (evolution) act on patterns of change within individual organisms (development).[61][62][63] Since the beginning of the 21st century and in light of discoveries made in recent decades, some biologists have argued for an extended evolutionary synthesis, which would account for the effects of non-genetic inheritance modes, such as epigenetics, parental effects, ecological inheritance and cultural inheritance, and evolvability.[64][65]
Evolution in organisms occurs through changes in heritable traits—the inherited characteristics of an organism. In humans, for example, eye colour is an inherited characteristic and an individual might inherit the "brown-eye trait" from one of their parents.[66] Inherited traits are controlled by genes and the complete set of genes within an organism's genome (genetic material) is called its genotype.[67]
The complete set of observable traits that make up the structure and behaviour of an organism is called its phenotype. These traits come from the interaction of its genotype with the environment.[68] As a result, many aspects of an organism's phenotype are not inherited. For example, suntanned skin comes from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. However, some people tan more easily than others, due to differences in genotypic variation; a striking example are people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn.[69]
Heritable traits are passed from one generation to the next via DNA, a molecule that encodes genetic information.[67] DNA is a long biopolymer composed of four types of bases. The sequence of bases along a particular DNA molecule specifies the genetic information, in a manner similar to a sequence of letters spelling out a sentence. Before a cell divides, the DNA is copied, so that each of the resulting two cells will inherit the DNA sequence. Portions of a DNA molecule that specify a single functional unit are called genes; different genes have different sequences of bases. Within cells, the long strands of DNA form condensed structures called chromosomes. The specific location of a DNA sequence within a chromosome is known as a locus. If the DNA sequence at a locus varies between individuals, the different forms of this sequence are called alleles. DNA sequences can change through mutations, producing new alleles. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism.[70] However, while this simple correspondence between an allele and a trait works in some cases, most traits are more complex and are controlled by quantitative trait loci (multiple interacting genes).[71][72]
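The genome/locus/allele vocabulary in this paragraph maps naturally onto a small data model. In the sketch below (the genotypes are hypothetical, for illustration only), each diploid individual carries two alleles at a locus, and an allele's frequency is simply its share of all gene copies.

```python
from collections import Counter

# Hypothetical diploid genotypes at a single locus: two alleles per individual.
population = [("A", "A"), ("A", "a"), ("a", "a"), ("A", "a"), ("A", "A")]

gene_copies = [allele for genotype in population for allele in genotype]
freqs = {a: n / len(gene_copies) for a, n in Counter(gene_copies).items()}
print(freqs)   # {'A': 0.6, 'a': 0.4}
```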
Recent findings have confirmed important examples of heritable changes that cannot be explained by changes to the sequence of nucleotides in the DNA. These phenomena are classed as epigenetic inheritance systems.[73] DNA methylation marking chromatin, self-sustaining metabolic loops, gene silencing by RNA interference and the three-dimensional conformation of proteins (such as prions) are areas where epigenetic inheritance systems have been discovered at the organismic level.[74][75] Developmental biologists suggest that complex interactions in genetic networks and communication among cells can lead to heritable variations that may underlie some of the mechanics of developmental plasticity and canalisation.[76] Heritability may also occur at even larger scales. For example, ecological inheritance through the process of niche construction is defined by the regular and repeated activities of organisms in their environment. This generates a legacy of effects that modify and feed back into the selection regime of subsequent generations. Descendants inherit genes plus environmental characteristics generated by the ecological actions of ancestors.[77] Other examples of heritability in evolution that are not under the direct control of genes include the inheritance of cultural traits and symbiogenesis.[78][79]
An individual organism's phenotype results from both its genotype and the influence from the environment it has lived in. A substantial part of the phenotypic variation in a population is caused by genotypic variation.[72] The modern evolutionary synthesis defines evolution as the change over time in this genetic variation. The frequency of one particular allele will become more or less prevalent relative to other forms of that gene. Variation disappears when a new allele reaches the point of fixation—when it either disappears from the population or replaces the ancestral allele entirely.[80]
Natural selection will only cause evolution if there is enough genetic variation in a population. Before the discovery of Mendelian genetics, one common hypothesis was blending inheritance. But with blending inheritance, genetic variance would be rapidly lost, making evolution by natural selection implausible. The Hardy–Weinberg principle provides the solution to how variation is maintained in a population with Mendelian inheritance. The frequencies of alleles (variations in a gene) will remain constant in the absence of selection, mutation, migration and genetic drift.[81]
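The Hardy–Weinberg expectation can be written in one line. For a biallelic locus with allele frequencies p and q = 1 − p, random mating gives genotype frequencies that stay constant across generations in the absence of selection, mutation, migration and drift:

```latex
f(AA) = p^2, \qquad f(Aa) = 2pq, \qquad f(aa) = q^2,
\qquad p^2 + 2pq + q^2 = 1 .
```

For example, with p = 0.7 the expected genotype frequencies are 0.49, 0.42 and 0.09, and recomputing the allele frequency from those genotypes returns 0.49 + 0.42/2 = 0.7, which is why Mendelian inheritance does not "blend away" variation.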
Variation comes from mutations in the genome, reshuffling of genes through sexual reproduction and migration between populations (gene flow). Despite the constant introduction of new variation through mutation and gene flow, most of the genome of a species is identical in all individuals of that species.[82] However, even relatively small differences in genotype can lead to dramatic differences in phenotype: for example, chimpanzees and humans differ in only about 5% of their genomes.[83]
Mutations are changes in the DNA sequence of a cell's genome. When mutations occur, they may alter the product of a gene, or prevent the gene from functioning, or have no effect. Based on studies in the fly Drosophila melanogaster, it has been suggested that if a mutation changes a protein produced by a gene, this will probably be harmful, with about 70% of these mutations having damaging effects, and the remainder being either neutral or weakly beneficial.[84]
Mutations can involve large sections of a chromosome becoming duplicated (usually by genetic recombination), which can introduce extra copies of a gene into a genome.[85] Extra copies of genes are a major source of the raw material needed for new genes to evolve.[86] This is important because most new genes evolve within gene families from pre-existing genes that share common ancestors.[87] For example, the human eye uses four genes to make structures that sense light: three for colour vision and one for night vision; all four are descended from a single ancestral gene.[88]
New genes can be generated from an ancestral gene when a duplicate copy mutates and acquires a new function. This process is easier once a gene has been duplicated because it increases the redundancy of the system; one gene in the pair can acquire a new function while the other copy continues to perform its original function.[89][90] Other types of mutations can even generate entirely new genes from previously noncoding DNA.[91][92]
The generation of new genes can also involve small parts of several genes being duplicated, with these fragments then recombining to form new combinations with new functions.[93][94] When new genes are assembled from shuffling pre-existing parts, domains act as modules with simple independent functions, which can be mixed together to produce new combinations with new and complex functions.[95] For example, polyketide synthases are large enzymes that make antibiotics; they contain up to one hundred independent domains that each catalyse one step in the overall process, like a step in an assembly line.[96]
In asexual organisms, genes are inherited together, or linked, as they cannot mix with genes of other organisms during reproduction. In contrast, the offspring of sexual organisms contain random mixtures of their parents' chromosomes that are produced through independent assortment. In a related process called homologous recombination, sexual organisms exchange DNA between two matching chromosomes.[97] Recombination and reassortment do not alter allele frequencies, but instead change which alleles are associated with each other, producing offspring with new combinations of alleles.[98] Sex usually increases genetic variation and may increase the rate of evolution.[99][100]
The two-fold cost of sex was first described by John Maynard Smith.[101] The first cost is that in sexually dimorphic species only one of the two sexes can bear young. (This cost does not apply to hermaphroditic species, like most plants and many invertebrates.) The second cost is that any individual who reproduces sexually can only pass on 50% of its genes to any individual offspring, with even less passed on as each new generation passes.[102] Yet sexual reproduction is the more common means of reproduction among eukaryotes and multicellular organisms. The Red Queen hypothesis has been used to explain the significance of sexual reproduction as a means to enable continual evolution and adaptation in response to coevolution with other species in an ever-changing environment.[102][103][104][105]
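The 50% figure compounds across generations: ignoring inbreeding and selection, a descendant n generations removed is expected to carry (1/2)^n of a given ancestor's genes, as this rough Python sketch shows:

# Expected share of one sexual ancestor's genome in a descendant n generations
# later; it halves each generation (ignores inbreeding and selection).
for n in range(1, 6):
    print(n, 0.5 ** n)   # 1 0.5, 2 0.25, 3 0.125, 4 0.0625, 5 0.03125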
Gene flow is the exchange of genes between populations and between species.[106] It can therefore be a source of variation that is new to a population or to a species. Gene flow can be caused by the movement of individuals between separate populations of organisms, as might be caused by the movement of mice between inland and coastal populations, or the movement of pollen between heavy-metal-tolerant and heavy-metal-sensitive populations of grasses.
Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer. Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria.[107] In medicine, this contributes to the spread of antibiotic resistance, as when one bacterium acquires resistance genes it can rapidly transfer them to other species.[108] Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean weevil Callosobruchus chinensis has occurred.[109][110] An example of larger-scale transfer is the eukaryotic bdelloid rotifers, which have received a range of genes from bacteria, fungi and plants.[111] Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains.[112]
Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and bacteria, during the acquisition of chloroplasts and mitochondria. It is possible that eukaryotes themselves originated from horizontal gene transfers between bacteria and archaea.[113]
From a neo-Darwinian perspective, evolution occurs when there are changes in the frequencies of alleles within a population of interbreeding organisms,[81] for example, the allele for black colour in a population of moths becoming more common. Mechanisms that can lead to changes in allele frequencies include natural selection, genetic drift, gene flow and mutation bias.
Evolution by means of natural selection is the process by which traits that enhance survival and reproduction become more common in successive generations of a population. It has often been called a "self-evident" mechanism because it necessarily follows from three simple facts: (1) traits vary among individuals with respect to their morphology, physiology and behaviour (phenotypic variation); (2) different traits confer different rates of survival and reproduction (differential fitness); and (3) traits can be passed from generation to generation (heritability of fitness).[8]
More offspring are produced than can possibly survive, and these conditions produce competition between organisms for survival and reproduction. Consequently, organisms with traits that give them an advantage over their competitors are more likely to pass on their traits to the next generation than those with traits that do not confer an advantage.[114] This teleonomy is the quality whereby the process of natural selection creates and preserves traits that are seemingly fitted for the functional roles they perform.[115] Consequences of selection include nonrandom mating[116] and genetic hitchhiking.
The central concept of natural selection is the evolutionary fitness of an organism.[117] Fitness is measured by an organism's ability to survive and reproduce, which determines the size of its genetic contribution to the next generation.[117] However, fitness is not the same as the total number of offspring: instead fitness is indicated by the proportion of subsequent generations that carry an organism's genes.[118] For example, if an organism could survive well and reproduce rapidly, but its offspring were all too small and weak to survive, this organism would make little genetic contribution to future generations and would thus have low fitness.[117]
If an allele increases fitness more than the other alleles of that gene, then with each generation this allele will become more common within the population. These traits are said to be "selected for." Examples of traits that can increase fitness are enhanced survival and increased fecundity. Conversely, the lower fitness caused by having a less beneficial or deleterious allele results in this allele becoming rarer—it is "selected against."[119] Importantly, the fitness of an allele is not a fixed characteristic; if the environment changes, previously neutral or harmful traits may become beneficial and previously beneficial traits become harmful.[70] However, even if the direction of selection does reverse in this way, traits that were lost in the past may not re-evolve in an identical form (see Dollo's law).[120][121] That said, re-activation of dormant genes, as long as they have not been eliminated from the genome and were only suppressed for perhaps hundreds of generations, can lead to the re-occurrence of traits thought to be lost, such as hindlegs in dolphins, teeth in chickens, wings in wingless stick insects, and tails and additional nipples in humans.[122] "Throwbacks" such as these are known as atavisms.
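This per-generation enrichment can be made concrete with the textbook haploid selection recursion p' = p·wA / (p·wA + (1 − p)·wa). The fitness values and starting frequency in this Python sketch are arbitrary illustrative numbers, not data from any study:

# Haploid selection: an allele with a 5% fitness advantage spreads.
p, w_A, w_a = 0.01, 1.05, 1.00        # starting frequency and relative fitnesses
for generation in range(200):
    mean_w = p * w_A + (1 - p) * w_a  # population mean fitness
    p = p * w_A / mean_w              # allele frequency after selection
print(round(p, 3))                    # ~0.994: the allele is nearly fixed

Swapping the two fitness values makes the same allele decline instead, mirroring the point that "selected for" and "selected against" depend on the environment.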
Natural selection within a population for a trait that can vary across a range of values, such as height, can be categorised into three different types. The first is directional selection, which is a shift in the average value of a trait over time—for example, organisms slowly getting taller.[123] Secondly, disruptive selection is selection for extreme trait values and often results in two different values becoming most common, with selection against the average value. This would be when either short or tall organisms had an advantage, but not those of medium height. Finally, in stabilising selection there is selection against extreme trait values on both ends, which causes a decrease in variance around the average value and less diversity.[114][124] This would, for example, cause organisms to eventually have a similar height.
A special case of natural selection is sexual selection, which is selection for any trait that increases mating success by increasing the attractiveness of an organism to potential mates.[125] Traits that evolved through sexual selection are particularly prominent among males of several animal species. Although sexually favoured, traits such as cumbersome antlers, mating calls, large body size and bright colours often attract predation, which compromises the survival of individual males.[126][127] This survival disadvantage is balanced by higher reproductive success in males that show these hard-to-fake, sexually selected traits.[128]
Natural selection most generally makes nature the measure against which individuals and individual traits are more or less likely to survive. "Nature" in this sense refers to an ecosystem, that is, a system in which organisms interact with every other element, physical as well as biological, in their local environment. Eugene Odum, a founder of ecology, defined an ecosystem as: "Any unit that includes all of the organisms...in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e., exchange of materials between living and nonliving parts) within the system...."[129] Each population within an ecosystem occupies a distinct niche, or position, with distinct relationships to other parts of the system. These relationships involve the life history of the organism, its position in the food chain and its geographic range. This broad understanding of nature enables scientists to delineate specific forces which, together, comprise natural selection.
Natural selection can act at different levels of organisation, such as genes, cells, individual organisms, groups of organisms and species.[130][131][132] Selection can act at multiple levels simultaneously.[133] An example of selection occurring below the level of the individual organism is the class of genes called transposons, which can replicate and spread throughout a genome.[134] Selection at a level above the individual, such as group selection, may allow the evolution of cooperation, as discussed below.[135]
Recombination allows alleles on the same strand of DNA to become separated. However, the rate of recombination is low (approximately two events per chromosome per generation). As a result, genes close together on a chromosome may not always be shuffled away from each other and genes that are close together tend to be inherited together, a phenomenon known as linkage.[136] This tendency is measured by finding how often two alleles occur together on a single chromosome compared to expectations, which is called their linkage disequilibrium. A set of alleles that is usually inherited in a group is called a haplotype. This can be important when one allele in a particular haplotype is strongly beneficial: natural selection can drive a selective sweep that will also cause the other alleles in the haplotype to become more common in the population; this effect is called genetic hitchhiking or genetic draft.[137] Genetic draft caused by the fact that some neutral genes are genetically linked to others that are under selection can be partially captured by an appropriate effective population size.[138]
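Linkage disequilibrium has a simple quantitative form: for alleles A and B at two loci, D = pAB − pA·pB, the difference between the observed haplotype frequency and the frequency expected if the alleles associated at random. A Python sketch with made-up haplotype counts:

# Linkage disequilibrium D for two biallelic loci (hypothetical haplotype counts).
counts = {"AB": 50, "Ab": 10, "aB": 10, "ab": 30}
total = sum(counts.values())
p_AB = counts["AB"] / total                   # observed haplotype frequency
p_A = (counts["AB"] + counts["Ab"]) / total   # frequency of allele A
p_B = (counts["AB"] + counts["aB"]) / total   # frequency of allele B
D = p_AB - p_A * p_B                          # 0 would mean no association
print(round(D, 3))                            # 0.14: A and B travel together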
Genetic drift is the random fluctuations of allele frequencies within a population from one generation to the next.[139] When selective forces are absent or relatively weak, allele frequencies are equally likely to drift upward or downward at each successive generation because the alleles are subject to sampling error.[140] This drift halts when an allele eventually becomes fixed, either by disappearing from the population or replacing the other alleles entirely. Genetic drift may therefore eliminate some alleles from a population due to chance alone. Even in the absence of selective forces, genetic drift can cause two separate populations that began with the same genetic structure to drift apart into two divergent populations with different sets of alleles.[141]
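This sampling error is commonly modelled with the Wright–Fisher model, in which each generation is formed by drawing 2N gene copies at random from the previous one. A minimal Python simulation (the population size and starting frequency are arbitrary) shows a neutral allele wandering until it is lost or fixed by chance alone:

# Wright-Fisher drift: a neutral allele drifts until it is lost or fixed.
import random

N = 50              # diploid individuals, so 2N = 100 gene copies (illustrative)
p = 0.5             # starting allele frequency
generations = 0
while 0.0 < p < 1.0:
    copies = sum(random.random() < p for _ in range(2 * N))  # binomial sampling
    p = copies / (2 * N)
    generations += 1
print(generations, p)   # p ends at 0.0 or 1.0; the time taken varies run to run

Running two copies of this simulation from the same starting frequency typically gives different outcomes, which is the chance divergence between isolated populations described above.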
The neutral theory of molecular evolution proposed that most evolutionary changes are the result of the fixation of neutral mutations by genetic drift.[142] Hence, in this model, most genetic changes in a population are the result of constant mutation pressure and genetic drift.[143] This form of the neutral theory is now largely abandoned, since it does not seem to fit the genetic variation seen in nature.[144][145] However, a more recent and better-supported version of this model is the nearly neutral theory, where a mutation that would be effectively neutral in a small population is not necessarily neutral in a large population.[114] Other alternative theories propose that genetic drift is dwarfed by other stochastic forces in evolution, such as genetic hitchhiking, also known as genetic draft.[140][138][146]
The time for a neutral allele to become fixed by genetic drift depends on population size, with fixation occurring more rapidly in smaller populations.[147] What matters is not the raw number of individuals in a population but a measure known as the effective population size.[148] The effective population size is usually smaller than the total population since it takes into account factors such as the level of inbreeding and the stage of the lifecycle in which the population is the smallest.[148] The effective population size may not be the same for every gene in the same population.[149]
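One standard illustration of why the effective size falls below the census size is a population that fluctuates over time: the effective size is then approximately the harmonic mean of the per-generation census sizes, so it is dominated by bottlenecks. The census numbers in this sketch are hypothetical:

# Effective population size as the harmonic mean of census sizes over time.
sizes = [1000, 1000, 10, 1000]           # one-generation bottleneck (made up)
N_e = len(sizes) / sum(1 / n for n in sizes)
print(round(N_e, 1))                     # ~38.8, far below the arithmetic mean of 752.5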
It is usually difficult to measure the relative importance of selection and neutral processes, including drift.[150] The comparative importance of adaptive and non-adaptive forces in driving evolutionary change is an area of current research.[151]
Gene flow involves the exchange of genes between populations and between species.[106] The presence or absence of gene flow fundamentally changes the course of evolution. Due to the complexity of organisms, any two completely isolated populations will eventually evolve genetic incompatibilities through neutral processes, as in the Bateson-Dobzhansky-Muller model, even if both populations remain essentially identical in terms of their adaptation to the environment.
If genetic differentiation between populations develops, gene flow between populations can introduce traits or alleles which are disadvantageous in the local population and this may lead to organisms within these populations evolving mechanisms that prevent mating with genetically distant populations, eventually resulting in the appearance of new species. Thus, exchange of genetic information between individuals is fundamentally important for the development of the Biological Species Concept (BSC).
During the development of the modern synthesis, Sewall Wright developed his shifting balance theory, which regarded gene flow between partially isolated populations as an important aspect of adaptive evolution.[152] However, recently there has been substantial criticism of the importance of the shifting balance theory.[153]
Mutation bias is usually conceived as a difference in expected rates for two different kinds of mutation, e.g., transition-transversion bias, GC-AT bias, deletion-insertion bias. This is related to the idea of developmental bias.
Haldane[154] and Fisher[155] argued that, because mutation is a weak pressure easily overcome by selection, tendencies of mutation would be ineffectual except under conditions of neutral evolution or extraordinarily high mutation rates. This opposing-pressures argument was long used to dismiss the possibility of internal tendencies in evolution,[156] until the molecular era prompted renewed interest in neutral evolution.
Noboru Sueoka[157] and Ernst Freese[158] proposed that systematic biases in mutation might be responsible for systematic differences in genomic GC composition between species. The identification of a GC-biased E. coli mutator strain in 1967,[159] along with the proposal of the neutral theory, established the plausibility of mutational explanations for molecular patterns, which are now common in the molecular evolution literature.
For instance, mutation biases are frequently invoked in models of codon usage.[160] Such models also include effects of selection, following the mutation-selection-drift model,[161] which allows both for mutation biases and differential selection based on effects on translation. Hypotheses of mutation bias have played an important role in the development of thinking about the evolution of genome composition, including isochores.[162] Different insertion vs. deletion biases in different taxa can lead to the evolution of different genome sizes.[163][164] The hypothesis of Lynch regarding genome size relies on mutational biases toward increase or decrease in genome size.
However, mutational hypotheses for the evolution of composition suffered a reduction in scope when it was discovered that (1) GC-biased gene conversion makes an important contribution to composition in diploid organisms such as mammals[165] and (2) bacterial genomes frequently have AT-biased mutation.[166]
Contemporary thinking about the role of mutation biases reflects a different theory from that of Haldane and Fisher. More recent work[156] showed that the original "pressures" theory assumes that evolution is based on standing variation: when evolution depends on the introduction of new alleles, mutational and developmental biases in introduction can impose biases on evolution without requiring neutral evolution or high mutation rates.
Several recent studies report that the mutations implicated in adaptation reflect common mutation biases,[167][168][169] though others dispute this interpretation.[170]
Evolution influences every aspect of the form and behaviour of organisms. Most prominent are the specific behavioural and physical adaptations that are the outcome of natural selection. These adaptations increase fitness by aiding activities such as finding food, avoiding predators or attracting mates. Organisms can also respond to selection by cooperating with each other, usually by aiding their relatives or engaging in mutually beneficial symbiosis. In the longer term, evolution produces new species through splitting ancestral populations of organisms into new groups that cannot or will not interbreed.
These outcomes of evolution are distinguished based on time scale as macroevolution versus microevolution. Macroevolution refers to evolution that occurs at or above the level of species, in particular speciation and extinction; whereas microevolution refers to smaller evolutionary changes within a species or population, in particular shifts in allele frequency and adaptation.[172] In general, macroevolution is regarded as the outcome of long periods of microevolution.[173] Thus, the distinction between micro- and macroevolution is not a fundamental one—the difference is simply the time involved.[174] However, in macroevolution, the traits of the entire species may be important. For instance, a large amount of variation among individuals allows a species to rapidly adapt to new habitats, lessening the chance of it going extinct, while a wide geographic range increases the chance of speciation, by making it more likely that part of the population will become isolated. In this sense, microevolution and macroevolution might involve selection at different levels—with microevolution acting on genes and organisms, versus macroevolutionary processes such as species selection acting on entire species and affecting their rates of speciation and extinction.[175][176][177]
A common misconception is that evolution has goals, long-term plans, or an innate tendency for "progress", as expressed in beliefs such as orthogenesis and evolutionism; in reality, however, evolution has no long-term goal and does not necessarily produce greater complexity.[178][179][180] Although complex species have evolved, they occur as a side effect of the overall number of organisms increasing, and simple forms of life still remain more common in the biosphere.[181] For example, the overwhelming majority of species are microscopic prokaryotes, which form about half the world's biomass despite their small size,[182] and constitute the vast majority of Earth's biodiversity.[183] Simple organisms have therefore been the dominant form of life on Earth throughout its history and continue to be the main form of life up to the present day, with complex life only appearing more diverse because it is more noticeable.[184] Indeed, the evolution of microorganisms is particularly important to modern evolutionary research, since their rapid reproduction allows the study of experimental evolution and the observation of evolution and adaptation in real time.[185][186]
Adaptation is the process that makes organisms better suited to their habitat.[187][188] The term adaptation may also refer to a trait that is important for an organism's survival, such as the adaptation of horses' teeth to the grinding of grass. By using the term adaptation for the evolutionary process and adaptive trait for the product (the bodily part or function), the two senses of the word may be distinguished. Adaptations are produced by natural selection.[189] The following definitions are due to Theodosius Dobzhansky: adaptation is the evolutionary process whereby an organism becomes better able to live in its habitat or habitats; adaptedness is the state of being adapted, the degree to which an organism is able to live and reproduce in a given set of habitats; and an adaptive trait is an aspect of the developmental pattern of the organism which enables or enhances the probability of that organism surviving and reproducing.
Adaptation may cause either the gain of a new feature or the loss of an ancestral feature. An example that shows both types of change is bacterial adaptation to antibiotic selection, with genetic changes causing antibiotic resistance both by modifying the target of the drug and by increasing the activity of transporters that pump the drug out of the cell.[193] Other striking examples are the bacterium Escherichia coli evolving the ability to use citric acid as a nutrient in a long-term laboratory experiment,[194] Flavobacterium evolving a novel enzyme that allows these bacteria to grow on the by-products of nylon manufacturing,[195][196] and the soil bacterium Sphingobium evolving an entirely new metabolic pathway that degrades the synthetic pesticide pentachlorophenol.[197][198] An interesting but still controversial idea is that some adaptations might increase the ability of organisms to generate genetic diversity and adapt by natural selection (increasing organisms' evolvability).[199][200][201][202][203]
Adaptation occurs through the gradual modification of existing structures. Consequently, structures with similar internal organisation may have different functions in related organisms. This is the result of a single ancestral structure being adapted to function in different ways. The bones within bat wings, for example, are very similar to those in mouse feet and primate hands, due to the descent of all these structures from a common mammalian ancestor.[205] However, since all living organisms are related to some extent,[206] even organs that appear to have little or no structural similarity, such as arthropod, squid and vertebrate eyes, or the limbs and wings of arthropods and vertebrates, can depend on a common set of homologous genes that control their assembly and function; this is called deep homology.[207][208]
During evolution, some structures may lose their original function and become vestigial structures.[209] Such structures may have little or no function in a current species, yet have a clear function in ancestral species, or other closely related species. Examples include pseudogenes,[210] the non-functional remains of eyes in blind cave-dwelling fish,[211] wings in flightless birds,[212] the presence of hip bones in whales and snakes,[204] and sexual traits in organisms that reproduce via asexual reproduction.[213] Examples of vestigial structures in humans include wisdom teeth,[214] the coccyx,[209] the vermiform appendix,[209] and other behavioural vestiges such as goose bumps[215][216] and primitive reflexes.[217][218][219]
However, many traits that appear to be simple adaptations are in fact exaptations: structures originally adapted for one function, but which coincidentally became somewhat useful for some other function in the process.[220] One example is the African lizard Holaspis guentheri, which developed an extremely flat head for hiding in crevices, as can be seen by looking at its near relatives. However, in this species, the head has become so flattened that it assists in gliding from tree to tree—an exaptation.[220] Within cells, molecular machines such as the bacterial flagella[221] and protein sorting machinery[222] evolved by the recruitment of several pre-existing proteins that previously had different functions.[172] Another example is the recruitment of enzymes from glycolysis and xenobiotic metabolism to serve as structural proteins called crystallins within the lenses of organisms' eyes.[223][224]
An area of current investigation in evolutionary developmental biology is the developmental basis of adaptations and exaptations.[225] This research addresses the origin and evolution of embryonic development and how modifications of development and developmental processes produce novel features.[226] These studies have shown that evolution can alter development to produce new structures, such as embryonic bone structures that develop into the jaw in other animals instead forming part of the middle ear in mammals.[227] It is also possible for structures that have been lost in evolution to reappear due to changes in developmental genes, such as a mutation in chickens causing embryos to grow teeth similar to those of crocodiles.[228] It is now becoming clear that most alterations in the form of organisms are due to changes in a small set of conserved genes.[229]
Interactions between organisms can produce both conflict and cooperation. When the interaction is between pairs of species, such as a pathogen and a host, or a predator and its prey, these species can develop matched sets of adaptations. Here, the evolution of one species causes adaptations in a second species. These changes in the second species then, in turn, cause new adaptations in the first species. This cycle of selection and response is called coevolution.[230] An example is the production of tetrodotoxin in the rough-skinned newt and the evolution of tetrodotoxin resistance in its predator, the common garter snake. In this predator-prey pair, an evolutionary arms race has produced high levels of toxin in the newt and correspondingly high levels of toxin resistance in the snake.[231]
Not all co-evolved interactions between species involve conflict.[232] Many cases of mutually beneficial interactions have evolved. For instance, an extreme form of cooperation exists between plants and the mycorrhizal fungi that grow on their roots and aid the plant in absorbing nutrients from the soil.[233] This is a reciprocal relationship, as the plants provide the fungi with sugars from photosynthesis. Here, the fungi actually grow inside plant cells, allowing them to exchange nutrients with their hosts, while sending signals that suppress the plant immune system.[234]
Coalitions between organisms of the same species have also evolved. An extreme case is the eusociality found in social insects, such as bees, termites and ants, where sterile insects feed and guard the small number of organisms in a colony that are able to reproduce. On an even smaller scale, the somatic cells that make up the body of an animal limit their reproduction so they can maintain a stable organism, which then supports a small number of the animal's germ cells to produce offspring. Here, somatic cells respond to specific signals that instruct them whether to grow, remain as they are, or die. If cells ignore these signals and multiply inappropriately, their uncontrolled growth causes cancer.[235]
Such cooperation within species may have evolved through the process of kin selection, which is where one organism acts to help raise a relative's offspring.[236] This activity is selected for because if the helping individual contains alleles which promote the helping activity, it is likely that its kin will also contain these alleles and thus those alleles will be passed on.[237] Other processes that may promote cooperation include group selection, where cooperation provides benefits to a group of organisms.[238]
Speciation is the process where a species diverges into two or more descendant species.[239]
There are multiple ways to define the concept of "species." The choice of definition is dependent on the particularities of the species concerned.[240] For example, some species concepts apply more readily toward sexually reproducing organisms while others lend themselves better toward asexual organisms. Despite this diversity, species concepts can be placed into one of three broad philosophical approaches: interbreeding, ecological and phylogenetic.[241] The Biological Species Concept (BSC) is a classic example of the interbreeding approach. Defined by evolutionary biologist Ernst Mayr in 1942, the BSC states that "species are groups of actually or potentially interbreeding natural populations, which are reproductively isolated from other such groups."[242] Despite its wide and long-term use, the BSC, like other species concepts, is not without controversy: for example, such concepts cannot be applied to prokaryotes,[243] a difficulty known as the species problem.[240] Some researchers have attempted a unifying monistic definition of species, while others adopt a pluralistic approach and suggest that there may be different ways to logically interpret the definition of a species.[240][241]
Barriers to reproduction between two diverging sexual populations are required for the populations to become new species. Gene flow may slow this process by spreading the new genetic variants also to the other populations. Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules.[244] Such hybrids are generally infertile. In this case, closely related species may regularly interbreed, but hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype.[245] The importance of hybridisation in producing new species of animals is unclear, although cases have been seen in many types of animals,[246] with the gray tree frog being a particularly well-studied example.[247]
Speciation has been observed multiple times under both controlled laboratory conditions (see laboratory experiments of speciation) and in nature.[248] In sexually reproducing organisms, speciation results from reproductive isolation followed by genealogical divergence. There are four primary geographic modes of speciation. The most common in animals is allopatric speciation, which occurs in populations initially isolated geographically, such as by habitat fragmentation or migration. Selection under these conditions can produce very rapid changes in the appearance and behaviour of organisms.[249][250] As selection and drift act independently on populations isolated from the rest of their species, separation may eventually produce organisms that cannot interbreed.[251]
The second mode of speciation is peripatric speciation, which occurs when small populations of organisms become isolated in a new environment. This differs from allopatric speciation in that the isolated populations are numerically much smaller than the parental population. Here, the founder effect causes rapid speciation after an increase in inbreeding increases selection on homozygotes, leading to rapid genetic change.[252]
The third mode is parapatric speciation. This is similar to peripatric speciation in that a small population enters a new habitat, but differs in that there is no physical separation between these two populations. Instead, speciation results from the evolution of mechanisms that reduce gene flow between the two populations.[239] Generally this occurs when there has been a drastic change in the environment within the parental species' habitat. One example is the grass Anthoxanthum odoratum, which can undergo parapatric speciation in response to localised metal pollution from mines.[253] Here, plants evolve that have resistance to high levels of metals in the soil. Selection against interbreeding with the metal-sensitive parental population produced a gradual change in the flowering time of the metal-resistant plants, which eventually produced complete reproductive isolation. Selection against hybrids between the two populations may cause reinforcement, which is the evolution of traits that promote mating within a species, as well as character displacement, which is when two species become more distinct in appearance.[254]
Finally, in sympatric speciation species diverge without geographic isolation or changes in habitat. This form is rare since even a small amount of gene flow may remove genetic differences between parts of a population.[255] Generally, sympatric speciation in animals requires the evolution of both genetic differences and nonrandom mating, to allow reproductive isolation to evolve.[256]
One type of sympatric speciation involves crossbreeding of two related species to produce a new hybrid species. This is not common in animals as animal hybrids are usually sterile. This is because during meiosis the homologous chromosomes from each parent are from different species and cannot successfully pair. However, it is more common in plants because plants often double their number of chromosomes, to form polyploids.[257] This allows the chromosomes from each parental species to form matching pairs during meiosis, since each parent's chromosomes are represented by a pair already.[258] An example of such a speciation event is when the plant species Arabidopsis thaliana and Arabidopsis arenosa crossbred to give the new species Arabidopsis suecica.[259] This happened about 20,000 years ago,[260] and the speciation process has been repeated in the laboratory, which allows the study of the genetic mechanisms involved in this process.[261] Indeed, chromosome doubling within a species may be a common cause of reproductive isolation, as half the doubled chromosomes will be unmatched when breeding with undoubled organisms.[262]
Speciation events are important in the theory of punctuated equilibrium, which accounts for the pattern in the fossil record of short "bursts" of evolution interspersed with relatively long periods of stasis, where species remain relatively unchanged.[263] In this theory, speciation and rapid evolution are linked, with natural selection and genetic drift acting most strongly on organisms undergoing speciation in novel habitats or small populations. As a result, the periods of stasis in the fossil record correspond to the parental population, while the organisms undergoing speciation and rapid evolution live in small populations or geographically restricted habitats and are therefore rarely preserved as fossils.[176]
Extinction is the disappearance of an entire species. Extinction is not an unusual event, as species regularly appear through speciation and disappear through extinction.[264] Nearly all animal and plant species that have lived on Earth are now extinct,[265] and extinction appears to be the ultimate fate of all species.[266] These extinctions have happened continuously throughout the history of life, although the rate of extinction spikes in occasional mass extinction events.[267] The Cretaceous–Paleogene extinction event, during which the non-avian dinosaurs became extinct, is the most well-known, but the earlier Permian–Triassic extinction event was even more severe, with approximately 96% of all marine species driven to extinction.[267] The Holocene extinction event is an ongoing mass extinction associated with humanity's expansion across the globe over the past few thousand years. Present-day extinction rates are 100–1000 times greater than the background rate, and up to 30% of current species may be extinct by the mid-21st century.[268] Human activities are now the primary cause of the ongoing extinction event;[269][270] global warming may further accelerate it in the future.[271] Despite the estimated extinction of more than 99 percent of all species that ever lived on Earth,[272][273] about 1 trillion species are estimated to be on Earth currently, with only one-thousandth of one percent described.[274]
The role of extinction in evolution is not very well understood and may depend on which type of extinction is considered.[267] The causes of the continuous "low-level" extinction events, which form the majority of extinctions, may be the result of competition between species for limited resources (the competitive exclusion principle).[61] If one species can out-compete another, this could produce species selection, with the fitter species surviving and the other species being driven to extinction.[131] The intermittent mass extinctions are also important, but instead of acting as a selective force, they drastically reduce diversity in a nonspecific manner and promote bursts of rapid evolution and speciation in survivors.[275]
The Earth is about 4.54 billion years old.[276][277][278] The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago,[13][279] during the Eoarchean Era after a geological crust started to solidify following the earlier molten Hadean Eon. Microbial mat fossils have been found in 3.48 billion-year-old sandstone in Western Australia.[15][16][17] Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old metasedimentary rocks discovered in Western Greenland[14] as well as "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia.[280][281] Commenting on the Australian findings, Stephen Blair Hedges wrote, "If life arose relatively quickly on Earth, then it could be common in the universe."[280][282] In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth.[283]
More than 99 percent of all species, amounting to over five billion species,[284] that ever lived on Earth are estimated to be extinct.[272][273] Estimates on the number of Earth's current species range from 10 million to 14 million,[285][286] of which about 1.9 million are estimated to have been named[287] and 1.6 million documented in a central database to date,[288] leaving at least 80 percent not yet described.
Highly energetic chemistry is thought to have produced a self-replicating molecule around 4 billion years ago, and half a billion years later the last common ancestor of all life existed.[11] The current scientific consensus is that the complex biochemistry that makes up life came from simpler chemical reactions.[289] The beginning of life may have included self-replicating molecules such as RNA[290] and the assembly of simple cells.[291]
All organisms on Earth are descended from a common ancestor or ancestral gene pool.[206][292] Current species are a stage in the process of evolution, with their diversity the product of a long series of speciation and extinction events.[293] The common descent of organisms was first deduced from four simple facts about organisms: First, they have geographic distributions that cannot be explained by local adaptation. Second, the diversity of life is not a set of completely unique organisms, but organisms that share morphological similarities. Third, vestigial traits with no clear purpose resemble functional ancestral traits. Fourth, organisms can be classified using these similarities into a hierarchy of nested groups, similar to a family tree.[294]
Modern research has suggested that, due to horizontal gene transfer, this "tree of life" may be more complicated than a simple branching tree since some genes have spread independently between distantly related species.[295][296] To solve this problem and others, some authors prefer to use the "Coral of life" as a metaphor or a mathematical model to illustrate the evolution of life. This view dates back to an idea briefly mentioned by Darwin but later abandoned.[297]
Past species have also left records of their evolutionary history. Fossils, along with the comparative anatomy of present-day organisms, constitute the morphological, or anatomical, record.[298] By comparing the anatomies of both modern and extinct species, paleontologists can infer the lineages of those species. However, this approach is most successful for organisms that had hard body parts, such as shells, bones or teeth. Further, as prokaryotes such as bacteria and archaea share a limited set of common morphologies, their fossils do not provide information on their ancestry.
More recently, evidence for common descent has come from the study of biochemical similarities between organisms. For example, all living cells use the same basic set of nucleotides and amino acids.[299] The development of molecular genetics has revealed the record of evolution left in organisms' genomes: dating when species diverged through the molecular clock produced by mutations.[300] For example, these DNA sequence comparisons have revealed that humans and chimpanzees share 98% of their genomes and analysing the few areas where they differ helps shed light on when the common ancestor of these species existed.[301]
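In its simplest neutral form, the molecular-clock dating mentioned here reduces to t ≈ d / (2μ): the observed per-site divergence d divided by twice the per-lineage substitution rate μ, since both lineages accumulate changes independently after the split. A toy Python calculation with assumed numbers:

# Molecular clock: divergence time from sequence difference (toy numbers).
d = 0.012         # fraction of sites differing between two species (hypothetical)
mu = 1.0e-9       # neutral substitutions per site per year (assumed rate)
t = d / (2 * mu)  # factor of 2: both lineages accumulate changes
print(round(t))   # 6000000 years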
Prokaryotes inhabited the Earth from approximately 3–4 billion years ago.[303][304] No obvious changes in morphology or cellular organisation occurred in these organisms over the next few billion years.[305] Eukaryotic cells emerged between 1.6 and 2.7 billion years ago. The next major change in cell structure came when bacteria were engulfed by eukaryotic cells, in a cooperative association called endosymbiosis.[306][307] The engulfed bacteria and the host cell then underwent coevolution, with the bacteria evolving into either mitochondria or hydrogenosomes.[308] Another engulfment of cyanobacterial-like organisms led to the formation of chloroplasts in algae and plants.[309]
The history of life was that of the unicellular eukaryotes, prokaryotes and archaea until about 610 million years ago when multicellular organisms began to appear in the oceans in the Ediacaran period.[303][310] The evolution of multicellularity occurred in multiple independent events, in organisms as diverse as sponges, brown algae, cyanobacteria, slime moulds and myxobacteria.[311] In January 2016, scientists reported that, about 800 million years ago, a minor genetic change in a single molecule called GK-PID may have allowed organisms to go from a single cell organism to one of many cells.[312]
Soon after the emergence of these first multicellular organisms, a remarkable amount of biological diversity appeared over approximately 10 million years, in an event called the Cambrian explosion. Here, the majority of types of modern animals appeared in the fossil record, as well as unique lineages that subsequently became extinct.[313] Various triggers for the Cambrian explosion have been proposed, including the accumulation of oxygen in the atmosphere from photosynthesis.[314]
About 500 million years ago, plants and fungi colonised the land and were soon followed by arthropods and other animals.[315] Insects were particularly successful and even today make up the majority of animal species.[316] Amphibians first appeared around 364 million years ago, followed by early amniotes and birds around 155 million years ago (both from "reptile"-like lineages), mammals around 129 million years ago, homininae around 10 million years ago and modern humans around 250,000 years ago.[317][318][319] However, despite the evolution of these large animals, smaller organisms similar to the types that evolved early in this process continue to be highly successful and dominate the Earth, with the majority of both biomass and species being prokaryotes.[183]
Concepts and models used in evolutionary biology, such as natural selection, have many applications.[320]
Artificial selection is the intentional selection of traits in a population of organisms. This has been used for thousands of years in the domestication of plants and animals.[321] More recently, such selection has become a vital part of genetic engineering, with selectable markers such as antibiotic resistance genes being used to manipulate DNA. Proteins with valuable properties have evolved by repeated rounds of mutation and selection (for example modified enzymes and new antibodies) in a process called directed evolution.[322]
Understanding the changes that have occurred during an organism's evolution can reveal the genes needed to construct parts of the body, genes which may be involved in human genetic disorders.[323] For example, the Mexican tetra is an albino cavefish that lost its eyesight during evolution. Breeding together different populations of this blind fish produced some offspring with functional eyes, since different mutations had occurred in the isolated populations that had evolved in different caves.[324] This helped identify genes required for vision and pigmentation.[325]
Evolutionary theory has many applications in medicine. Many human diseases are not static phenomena, but capable of evolution. Viruses, bacteria, fungi and cancers evolve to be resistant to host immune defences, as well as pharmaceutical drugs.[326][327][328] These same problems occur in agriculture with pesticide[329] and herbicide[330] resistance. It is possible that we are facing the end of the effective life of most available antibiotics,[331] and predicting the evolution and evolvability[332] of our pathogens, then devising strategies to slow or circumvent it, requires deeper knowledge of the complex forces driving evolution at the molecular level.[333]
In computer science, simulations of evolution using evolutionary algorithms and artificial life started in the 1960s and were extended with simulation of artificial selection.[334] Artificial evolution became a widely recognised optimisation method as a result of the work of Ingo Rechenberg in the 1960s, who used evolution strategies to solve complex engineering problems.[335] Genetic algorithms in particular became popular through the writing of John Henry Holland.[336] Practical applications also include automatic evolution of computer programs.[337] Evolutionary algorithms are now used to solve multi-dimensional problems more efficiently than software produced by human designers and also to optimise the design of systems.[338]
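As a concrete sketch of the idea (a generic toy example, not the specific systems of Rechenberg or Holland), the following Python genetic algorithm evolves bit-strings toward an all-ones target using selection, one-point crossover and point mutation:

# Minimal genetic algorithm: evolve bit-strings toward all ones ("onemax").
import random

LENGTH, POP, GENS, MUT = 20, 30, 60, 0.02

def fitness(genome):
    return sum(genome)                       # number of 1-bits: higher is better

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)    # selection: rank by fitness
    parents = population[: POP // 2]              # keep the fitter half
    children = []
    while len(children) < POP:
        a, b = random.sample(parents, 2)          # two distinct parents
        cut = random.randrange(1, LENGTH)
        child = a[:cut] + b[cut:]                 # one-point crossover
        child = [bit ^ (random.random() < MUT) for bit in child]  # mutation
        children.append(child)
    population = children

print(max(fitness(g) for g in population))        # typically 20 or close to it

The same loop structure (evaluate, select, vary, repeat) underlies practical evolutionary optimisers, with the fitness function replaced by the engineering objective.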
In the 19th century, particularly after the publication of On the Origin of Species in 1859, the idea that life had evolved was an active source of academic debate centred on the philosophical, social and religious implications of evolution. Today, the modern evolutionary synthesis is accepted by a vast majority of scientists.[61] However, evolution remains a contentious concept for some theists.[340]
While various religions and denominations have reconciled their beliefs with evolution through concepts such as theistic evolution, there are creationists who believe that evolution is contradicted by the creation myths found in their religions and who raise various objections to evolution.[172][341][342] As had been demonstrated by responses to the publication of Vestiges of the Natural History of Creation in 1844, the most controversial aspect of evolutionary biology is the implication of human evolution that humans share common ancestry with apes and that the mental and moral faculties of humanity have the same types of natural causes as other inherited traits in animals.[343] In some countries, notably the United States, these tensions between science and religion have fuelled the current creation–evolution controversy, a religious conflict focusing on politics and public education.[344] While other scientific fields such as cosmology[345] and Earth science[346] also conflict with literal interpretations of many religious texts, evolutionary biology experiences significantly more opposition from religious literalists.
The teaching of evolution in American secondary school biology classes was uncommon in most of the first half of the 20th century. The Scopes Trial decision of 1925 caused the subject to become very rare in American secondary biology textbooks for a generation, but it was gradually re-introduced later and became legally protected with the 1968 Epperson v. Arkansas decision. Since then, the competing religious belief of creationism was legally disallowed in secondary school curricula in various decisions in the 1970s and 1980s, but it returned in pseudoscientific form as intelligent design (ID), to be excluded once again in the 2005 Kitzmiller v. Dover Area School District case.[347] The debate over Darwin's ideas did not generate significant controversy in China.[348]
en/1922.html.txt
ADDED
@@ -0,0 +1,195 @@
Evolution is change in the heritable characteristics of biological populations over successive generations.[1][2] These characteristics are the expressions of genes that are passed on from parent to offspring during reproduction. Different characteristics tend to exist within any given population as a result of mutation, genetic recombination and other sources of genetic variation.[3] Evolution occurs when evolutionary processes such as natural selection (including sexual selection) and genetic drift act on this variation, resulting in certain characteristics becoming more common or rare within a population.[4] It is this process of evolution that has given rise to biodiversity at every level of biological organisation, including the levels of species, individual organisms and molecules.[5][6]
The scientific theory of evolution by natural selection was conceived independently by Charles Darwin and Alfred Russel Wallace in the mid-19th century and was set out in detail in Darwin's book On the Origin of Species.[7] Evolution by natural selection was first demonstrated by the observation that more offspring are often produced than can possibly survive. This is followed by three observable facts about living organisms: (1) traits vary among individuals with respect to their morphology, physiology and behaviour (phenotypic variation), (2) different traits confer different rates of survival and reproduction (differential fitness) and (3) traits can be passed from generation to generation (heritability of fitness).[8] Thus, in successive generations members of a population are more likely to be replaced by the progeny of parents with favourable characteristics that have enabled them to survive and reproduce in their respective environments. In the early 20th century, other competing ideas of evolution such as mutationism and orthogenesis were refuted as the modern synthesis reconciled Darwinian evolution with classical genetics, which established adaptive evolution as being caused by natural selection acting on Mendelian genetic variation.[9]
All life on Earth shares a last universal common ancestor (LUCA)[10][11][12] that lived approximately 3.5–3.8 billion years ago.[13] The fossil record includes a progression from early biogenic graphite,[14] to microbial mat fossils,[15][16][17] to fossilised multicellular organisms. Existing patterns of biodiversity have been shaped by repeated formations of new species (speciation), changes within species (anagenesis) and loss of species (extinction) throughout the evolutionary history of life on Earth.[18] Morphological and biochemical traits are more similar among species that share a more recent common ancestor, and can be used to reconstruct phylogenetic trees.[19][20]
Evolutionary biologists have continued to study various aspects of evolution by forming and testing hypotheses as well as constructing theories based on evidence from the field or laboratory and on data generated by the methods of mathematical and theoretical biology. Their discoveries have influenced not just the development of biology but numerous other scientific and industrial fields, including agriculture, medicine and computer science.[21]
The proposal that one type of organism could descend from another type goes back to some of the first pre-Socratic Greek philosophers, such as Anaximander and Empedocles.[23] Such proposals survived into Roman times. The poet and philosopher Lucretius followed Empedocles in his masterwork De rerum natura (On the Nature of Things).[24][25]
In contrast to these materialistic views, Aristotelianism considered all natural things as actualisations of fixed natural possibilities, known as forms.[26][27] This was part of a medieval teleological understanding of nature in which all things have an intended role to play in a divine cosmic order. Variations of this idea became the standard understanding of the Middle Ages and were integrated into Christian learning, but Aristotle did not demand that real types of organisms always correspond one-for-one with exact metaphysical forms and specifically gave examples of how new types of living things could come to be.[28]
In the 17th century, the new method of modern science rejected the Aristotelian approach. It sought explanations of natural phenomena in terms of physical laws that were the same for all visible things and that did not require the existence of any fixed natural categories or divine cosmic order. However, this new approach was slow to take root in the biological sciences, the last bastion of the concept of fixed natural types. John Ray applied one of the previously more general terms for fixed natural types, "species", to plant and animal types, but he strictly identified each type of living thing as a species and proposed that each species could be defined by the features that perpetuated themselves generation after generation.[29] The biological classification introduced by Carl Linnaeus in 1735 explicitly recognised the hierarchical nature of species relationships, but still viewed species as fixed according to a divine plan.[30]
Other naturalists of this time speculated on the evolutionary change of species over time according to natural laws. In 1751, Pierre Louis Maupertuis wrote of natural modifications occurring during reproduction and accumulating over many generations to produce new species.[31] Georges-Louis Leclerc, Comte de Buffon suggested that species could degenerate into different organisms, and Erasmus Darwin proposed that all warm-blooded animals could have descended from a single microorganism (or "filament").[32] The first full-fledged evolutionary scheme was Jean-Baptiste Lamarck's "transmutation" theory of 1809,[33] which envisaged spontaneous generation continually producing simple forms of life that developed greater complexity in parallel lineages with an inherent progressive tendency, and postulated that on a local level, these lineages adapted to the environment by inheriting changes caused by their use or disuse in parents.[34][35] (The latter process was later called Lamarckism.)[34][36][37][38] These ideas were condemned by established naturalists as speculation lacking empirical support. In particular, Georges Cuvier insisted that species were unrelated and fixed, their similarities reflecting divine design for functional needs. In the meantime, Ray's ideas of benevolent design had been developed by William Paley into the Natural Theology or Evidences of the Existence and Attributes of the Deity (1802), which proposed complex adaptations as evidence of divine design and which was admired by Charles Darwin.[39][40][41]
The crucial break from the concept of constant typological classes or types in biology came with the theory of evolution through natural selection, which was formulated by Charles Darwin in terms of variable populations. Darwin used the expression "descent with modification" rather than "evolution".[42] Partly influenced by An Essay on the Principle of Population (1798) by Thomas Robert Malthus, Darwin noted that population growth would lead to a "struggle for existence" in which favourable variations prevailed as others perished. In each generation, many offspring fail to survive to an age of reproduction because of limited resources. This could explain the diversity of plants and animals from a common ancestry through the working of natural laws in the same way for all types of organism.[43][44][45][46] Darwin developed his theory of "natural selection" from 1838 onwards and was writing up his "big book" on the subject when Alfred Russel Wallace sent him a version of virtually the same theory in 1858. Their separate papers were presented together at an 1858 meeting of the Linnean Society of London.[47] At the end of 1859, Darwin's publication of his "abstract" as On the Origin of Species explained natural selection in detail and in a way that led to an increasingly wide acceptance of Darwin's concepts of evolution at the expense of alternative theories. Thomas Henry Huxley applied Darwin's ideas to humans, using paleontology and comparative anatomy to provide strong evidence that humans and apes shared a common ancestry. Some were disturbed by this since it implied that humans did not have a special place in the universe.[48]
The mechanisms of reproductive heritability and the origin of new traits remained a mystery. Towards this end, Darwin developed his provisional theory of pangenesis.[49] In 1865, Gregor Mendel reported that traits were inherited in a predictable manner through the independent assortment and segregation of elements (later known as genes). Mendel's laws of inheritance eventually supplanted most of Darwin's pangenesis theory.[50] August Weismann made the important distinction between germ cells that give rise to gametes (such as sperm and egg cells) and the somatic cells of the body, demonstrating that heredity passes through the germ line only. Hugo de Vries connected Darwin's pangenesis theory to Weismann's germ/soma cell distinction and proposed that Darwin's pangenes were concentrated in the cell nucleus and when expressed they could move into the cytoplasm to change the cell's structure. De Vries was also one of the researchers who made Mendel's work well known, believing that Mendelian traits corresponded to the transfer of heritable variations along the germline.[51] To explain how new variants originate, de Vries developed a mutation theory that led to a temporary rift between the Mendelians, who allied with de Vries, and the biometricians, who defended Darwinian gradual evolution.[35][52][53] In the 1930s, pioneers in the field of population genetics, such as Ronald Fisher, Sewall Wright and J. B. S. Haldane, set the foundations of evolution onto a robust statistical philosophy. The apparent contradiction between Darwin's theory, genetic mutations, and Mendelian inheritance was thus resolved.[54]
In the 1920s and 1930s the so-called modern synthesis connected natural selection and population genetics, based on Mendelian inheritance, into a unified theory that applied generally to any branch of biology. The modern synthesis explained patterns observed across species in populations, through fossil transitions in palaeontology, and complex cellular mechanisms in developmental biology.[35][55] The publication of the structure of DNA by James Watson and Francis Crick with contribution of Rosalind Franklin in 1953 demonstrated a physical mechanism for inheritance.[56] Molecular biology improved understanding of the relationship between genotype and phenotype. Advancements were also made in phylogenetic systematics, mapping the transition of traits into a comparative and testable framework through the publication and use of evolutionary trees.[57][58] In 1973, evolutionary biologist Theodosius Dobzhansky penned that "nothing in biology makes sense except in the light of evolution," because it has brought to light the relations of what first seemed disjointed facts in natural history into a coherent explanatory body of knowledge that describes and predicts many observable facts about life on this planet.[59]
Since then, the modern synthesis has been further extended to explain biological phenomena across the full and integrative scale of the biological hierarchy, from genes to species.[60] One extension, known as evolutionary developmental biology and informally called "evo-devo," emphasises how changes between generations (evolution) act on patterns of change within individual organisms (development).[61][62][63] Since the beginning of the 21st century and in light of discoveries made in recent decades, some biologists have argued for an extended evolutionary synthesis, which would account for the effects of non-genetic inheritance modes, such as epigenetics, parental effects, ecological inheritance and cultural inheritance, and evolvability.[64][65]
Evolution in organisms occurs through changes in heritable traits—the inherited characteristics of an organism. In humans, for example, eye colour is an inherited characteristic and an individual might inherit the "brown-eye trait" from one of their parents.[66] Inherited traits are controlled by genes and the complete set of genes within an organism's genome (genetic material) is called its genotype.[67]
The complete set of observable traits that make up the structure and behaviour of an organism is called its phenotype. These traits come from the interaction of its genotype with the environment.[68] As a result, many aspects of an organism's phenotype are not inherited. For example, suntanned skin comes from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. However, some people tan more easily than others, due to differences in their genotype; a striking example is people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn.[69]
Heritable traits are passed from one generation to the next via DNA, a molecule that encodes genetic information.[67] DNA is a long biopolymer composed of four types of bases. The sequence of bases along a particular DNA molecule specifies the genetic information, in a manner similar to a sequence of letters spelling out a sentence. Before a cell divides, the DNA is copied, so that each of the resulting two cells will inherit the DNA sequence. Portions of a DNA molecule that specify a single functional unit are called genes; different genes have different sequences of bases. Within cells, the long strands of DNA form condensed structures called chromosomes. The specific location of a DNA sequence within a chromosome is known as a locus. If the DNA sequence at a locus varies between individuals, the different forms of this sequence are called alleles. DNA sequences can change through mutations, producing new alleles. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism.[70] However, while this simple correspondence between an allele and a trait works in some cases, most traits are more complex and are controlled by quantitative trait loci (multiple interacting genes).[71][72]
Recent findings have confirmed important examples of heritable changes that cannot be explained by changes to the sequence of nucleotides in the DNA. These phenomena are classed as epigenetic inheritance systems.[73] DNA methylation marking chromatin, self-sustaining metabolic loops, gene silencing by RNA interference and the three-dimensional conformation of proteins (such as prions) are areas where epigenetic inheritance systems have been discovered at the organismic level.[74][75] Developmental biologists suggest that complex interactions in genetic networks and communication among cells can lead to heritable variations that may underlie some of the mechanics in developmental plasticity and canalisation.[76] Heritability may also occur at even larger scales. For example, ecological inheritance through the process of niche construction is defined by the regular and repeated activities of organisms in their environment. This generates a legacy of effects that modify and feed back into the selection regime of subsequent generations. Descendants inherit genes plus environmental characteristics generated by the ecological actions of ancestors.[77] Other examples of heritability in evolution that are not under the direct control of genes include the inheritance of cultural traits and symbiogenesis.[78][79]
An individual organism's phenotype results from both its genotype and the influence from the environment it has lived in. A substantial part of the phenotypic variation in a population is caused by genotypic variation.[72] The modern evolutionary synthesis defines evolution as the change over time in this genetic variation. The frequency of one particular allele will become more or less prevalent relative to other forms of that gene. Variation disappears when a new allele reaches the point of fixation—when it either disappears from the population or replaces the ancestral allele entirely.[80]
Natural selection will only cause evolution if there is enough genetic variation in a population. Before the discovery of Mendelian genetics, one common hypothesis was blending inheritance. But with blending inheritance, genetic variance would be rapidly lost, making evolution by natural selection implausible. The Hardy–Weinberg principle provides the solution to how variation is maintained in a population with Mendelian inheritance. The frequencies of alleles (variations in a gene) will remain constant in the absence of selection, mutation, migration and genetic drift.[81]
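As a concrete illustration (a standard textbook formulation, not a result specific to the sources cited above): for a gene with two alleles A and a at frequencies p and q = 1 − p, the Hardy–Weinberg principle predicts constant genotype frequencies,
$$ p^2 + 2pq + q^2 = 1 $$
For example, with p = 0.6 and q = 0.4, the expected genotype frequencies are 0.36 (AA), 0.48 (Aa) and 0.16 (aa), and these proportions persist generation after generation in the absence of selection, mutation, migration and genetic drift.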
Variation comes from mutations in the genome, reshuffling of genes through sexual reproduction and migration between populations (gene flow). Despite the constant introduction of new variation through mutation and gene flow, most of the genome of a species is identical in all individuals of that species.[82] However, even relatively small differences in genotype can lead to dramatic differences in phenotype: for example, chimpanzees and humans differ in only about 5% of their genomes.[83]
Mutations are changes in the DNA sequence of a cell's genome. When mutations occur, they may alter the product of a gene, or prevent the gene from functioning, or have no effect. Based on studies in the fly Drosophila melanogaster, it has been suggested that if a mutation changes a protein produced by a gene, this will probably be harmful, with about 70% of these mutations having damaging effects, and the remainder being either neutral or weakly beneficial.[84]
Mutations can involve large sections of a chromosome becoming duplicated (usually by genetic recombination), which can introduce extra copies of a gene into a genome.[85] Extra copies of genes are a major source of the raw material needed for new genes to evolve.[86] This is important because most new genes evolve within gene families from pre-existing genes that share common ancestors.[87] For example, the human eye uses four genes to make structures that sense light: three for colour vision and one for night vision; all four are descended from a single ancestral gene.[88]
New genes can be generated from an ancestral gene when a duplicate copy mutates and acquires a new function. This process is easier once a gene has been duplicated because it increases the redundancy of the system; one gene in the pair can acquire a new function while the other copy continues to perform its original function.[89][90] Other types of mutations can even generate entirely new genes from previously noncoding DNA.[91][92]
The generation of new genes can also involve small parts of several genes being duplicated, with these fragments then recombining to form new combinations with new functions.[93][94] When new genes are assembled from shuffling pre-existing parts, domains act as modules with simple independent functions, which can be mixed together to produce new combinations with new and complex functions.[95] For example, polyketide synthases are large enzymes that make antibiotics; they contain up to one hundred independent domains that each catalyse one step in the overall process, like a step in an assembly line.[96]
In asexual organisms, genes are inherited together, or linked, as they cannot mix with genes of other organisms during reproduction. In contrast, the offspring of sexual organisms contain random mixtures of their parents' chromosomes that are produced through independent assortment. In a related process called homologous recombination, sexual organisms exchange DNA between two matching chromosomes.[97] Recombination and reassortment do not alter allele frequencies, but instead change which alleles are associated with each other, producing offspring with new combinations of alleles.[98] Sex usually increases genetic variation and may increase the rate of evolution.[99][100]
The two-fold cost of sex was first described by John Maynard Smith.[101] The first cost is that in sexually dimorphic species only one of the two sexes can bear young. (This cost does not apply to hermaphroditic species, like most plants and many invertebrates.) The second cost is that any individual who reproduces sexually can only pass on 50% of its genes to any individual offspring, a share that shrinks further with each succeeding generation.[102] Yet sexual reproduction is the more common means of reproduction among eukaryotes and multicellular organisms. The Red Queen hypothesis has been used to explain the significance of sexual reproduction as a means to enable continual evolution and adaptation in response to coevolution with other species in an ever-changing environment.[102][103][104][105]
Gene flow is the exchange of genes between populations and between species.[106] It can therefore be a source of variation that is new to a population or to a species. Gene flow can be caused by the movement of individuals between separate populations of organisms, as might be caused by the movement of mice between inland and coastal populations, or the movement of pollen between heavy-metal-tolerant and heavy-metal-sensitive populations of grasses.
Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer. Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria.[107] In medicine, this contributes to the spread of antibiotic resistance, as when a bacterium acquires resistance genes it can rapidly transfer them to other species.[108] Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean weevil Callosobruchus chinensis has occurred.[109][110] An example of larger-scale transfer is the eukaryotic bdelloid rotifers, which have received a range of genes from bacteria, fungi and plants.[111] Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains.[112]
Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and bacteria, during the acquisition of chloroplasts and mitochondria. It is possible that eukaryotes themselves originated from horizontal gene transfers between bacteria and archaea.[113]
From a neo-Darwinian perspective, evolution occurs when there are changes in the frequencies of alleles within a population of interbreeding organisms,[81] for example, the allele for black colour in a population of moths becoming more common. Mechanisms that can lead to changes in allele frequencies include natural selection, genetic drift, gene flow and mutation bias.
Evolution by means of natural selection is the process by which traits that enhance survival and reproduction become more common in successive generations of a population. It has often been called a "self-evident" mechanism because it necessarily follows from three simple facts:[8]
- variation exists within populations of organisms with respect to morphology, physiology and behaviour (phenotypic variation),
- different traits confer different rates of survival and reproduction (differential fitness), and
- these traits can be passed from generation to generation (heritability of fitness).
More offspring are produced than can possibly survive, and these conditions produce competition between organisms for survival and reproduction. Consequently, organisms with traits that give them an advantage over their competitors are more likely to pass on their traits to the next generation than those with traits that do not confer an advantage.[114] This teleonomy is the quality whereby the process of natural selection creates and preserves traits that are seemingly fitted for the functional roles they perform.[115] Consequences of selection include nonrandom mating[116] and genetic hitchhiking.
The central concept of natural selection is the evolutionary fitness of an organism.[117] Fitness is measured by an organism's ability to survive and reproduce, which determines the size of its genetic contribution to the next generation.[117] However, fitness is not the same as the total number of offspring: instead fitness is indicated by the proportion of subsequent generations that carry an organism's genes.[118] For example, if an organism could survive well and reproduce rapidly, but its offspring were all too small and weak to survive, this organism would make little genetic contribution to future generations and would thus have low fitness.[117]
If an allele increases fitness more than the other alleles of that gene, then with each generation this allele will become more common within the population. These traits are said to be "selected for." Examples of traits that can increase fitness are enhanced survival and increased fecundity. Conversely, the lower fitness caused by having a less beneficial or deleterious allele results in this allele becoming rarer—it is "selected against."[119] Importantly, the fitness of an allele is not a fixed characteristic; if the environment changes, previously neutral or harmful traits may become beneficial and previously beneficial traits become harmful.[70] However, even if the direction of selection does reverse in this way, traits that were lost in the past may not re-evolve in an identical form (see Dollo's law).[120][121] Nevertheless, re-activation of dormant genes, provided they have not been eliminated from the genome and were perhaps only suppressed for hundreds of generations, can lead to the re-occurrence of traits thought to be lost, such as hind legs in dolphins, teeth in chickens, wings in wingless stick insects, and tails and additional nipples in humans.[122] "Throwbacks" such as these are known as atavisms.
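The pace of being "selected for" can be made quantitative with a minimal haploid model (a standard population-genetics sketch; the selection coefficient s here is an illustrative parameter, not a value from the cited sources). If allele A has relative fitness 1 + s and the alternative allele has fitness 1, the frequency p of A changes each generation as
$$ p' = \frac{p(1+s)}{1 + sp} $$
so the per-generation change is \( \Delta p = \frac{sp(1-p)}{1+sp} \): a positive s pushes the allele toward fixation, a negative s drives it toward loss, and the speed of change is proportional both to s and to the available variation p(1 − p).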
Natural selection within a population for a trait that can vary across a range of values, such as height, can be categorised into three different types. The first is directional selection, which is a shift in the average value of a trait over time—for example, organisms slowly getting taller.[123] Secondly, disruptive selection is selection for extreme trait values and often results in two different values becoming most common, with selection against the average value. This would be when either short or tall organisms had an advantage, but not those of medium height. Finally, in stabilising selection there is selection against extreme trait values on both ends, which causes a decrease in variance around the average value and less diversity.[114][124] This would, for example, cause organisms to eventually have a similar height.
A special case of natural selection is sexual selection, which is selection for any trait that increases mating success by increasing the attractiveness of an organism to potential mates.[125] Traits that evolved through sexual selection are particularly prominent among males of several animal species. Although sexually favoured, traits such as cumbersome antlers, mating calls, large body size and bright colours often attract predation, which compromises the survival of individual males.[126][127] This survival disadvantage is balanced by higher reproductive success in males that show these hard-to-fake, sexually selected traits.[128]
Natural selection most generally makes nature the measure against which individuals and individual traits are more or less likely to survive. "Nature" in this sense refers to an ecosystem, that is, a system in which organisms interact with every other element, physical as well as biological, in their local environment. Eugene Odum, a founder of ecology, defined an ecosystem as: "Any unit that includes all of the organisms...in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e., exchange of materials between living and nonliving parts) within the system...."[129] Each population within an ecosystem occupies a distinct niche, or position, with distinct relationships to other parts of the system. These relationships involve the life history of the organism, its position in the food chain and its geographic range. This broad understanding of nature enables scientists to delineate specific forces which, together, comprise natural selection.
Natural selection can act at different levels of organisation, such as genes, cells, individual organisms, groups of organisms and species.[130][131][132] Selection can act at multiple levels simultaneously.[133] An example of selection occurring below the level of the individual organism are genes called transposons, which can replicate and spread throughout a genome.[134] Selection at a level above the individual, such as group selection, may allow the evolution of cooperation, as discussed below.[135]
Recombination allows alleles on the same strand of DNA to become separated. However, the rate of recombination is low (approximately two events per chromosome per generation). As a result, genes close together on a chromosome may not always be shuffled away from each other and genes that are close together tend to be inherited together, a phenomenon known as linkage.[136] This tendency is measured by finding how often two alleles occur together on a single chromosome compared to expectations, which is called their linkage disequilibrium. A set of alleles that is usually inherited in a group is called a haplotype. This can be important when one allele in a particular haplotype is strongly beneficial: natural selection can drive a selective sweep that will also cause the other alleles in the haplotype to become more common in the population; this effect is called genetic hitchhiking or genetic draft.[137] Genetic draft caused by the fact that some neutral genes are genetically linked to others that are under selection can be partially captured by an appropriate effective population size.[138]
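Linkage disequilibrium has a simple standard definition (given here as a textbook formula, not a quotation from the cited sources): for alleles A and B at two loci, with haplotype frequency p_AB and individual allele frequencies p_A and p_B,
$$ D = p_{AB} - p_A \, p_B $$
D = 0 means the alleles are associated at random (linkage equilibrium). Recombination erodes any association geometrically, with \( D_t = D_0 (1 - r)^t \) after t generations at recombination rate r, which is why tightly linked alleles hitchhike together for many generations while loosely linked ones are quickly shuffled apart.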
Genetic drift is the random fluctuations of allele frequencies within a population from one generation to the next.[139] When selective forces are absent or relatively weak, allele frequencies are equally likely to drift upward or downward at each successive generation because the alleles are subject to sampling error.[140] This drift halts when an allele eventually becomes fixed, either by disappearing from the population or replacing the other alleles entirely. Genetic drift may therefore eliminate some alleles from a population due to chance alone. Even in the absence of selective forces, genetic drift can cause two separate populations that began with the same genetic structure to drift apart into two divergent populations with different sets of alleles.[141]
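Because drift is nothing more than sampling error, it is easy to watch in a simulation. The sketch below is a minimal Wright–Fisher model written purely for illustration; the population size, starting frequency and generation cap are arbitrary assumptions, not values from the cited literature.

```python
import random

def wright_fisher(n_individuals=100, p=0.5, max_generations=100_000):
    """Neutral genetic drift at a biallelic locus.

    Each generation, 2N gene copies are resampled binomially from the
    current allele frequency -- pure sampling error, with no selection,
    mutation or migration. Returns (generation, final frequency).
    """
    copies = 2 * n_individuals  # a diploid population carries 2N gene copies
    for generation in range(max_generations):
        if p in (0.0, 1.0):
            return generation, p  # allele lost (0) or fixed (1): drift halts
        # Draw the next generation's gene copies from the current allele pool.
        successes = sum(random.random() < p for _ in range(copies))
        p = successes / copies
    return max_generations, p

if __name__ == "__main__":
    generation, final_p = wright_fisher()
    outcome = "fixed" if final_p == 1.0 else "lost"
    print(f"Allele {outcome} after {generation} generations")
```

Repeated runs show the two hallmarks described above: the frequency wanders with no preferred direction, and smaller populations reach fixation or loss far sooner than larger ones.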
The neutral theory of molecular evolution proposed that most evolutionary changes are the result of the fixation of neutral mutations by genetic drift.[142] Hence, in this model, most genetic changes in a population are the result of constant mutation pressure and genetic drift.[143] This form of the neutral theory is now largely abandoned, since it does not seem to fit the genetic variation seen in nature.[144][145] However, a more recent and better-supported version of this model is the nearly neutral theory, where a mutation that would be effectively neutral in a small population is not necessarily neutral in a large population.[114] Other alternative theories propose that genetic drift is dwarfed by other stochastic forces in evolution, such as genetic hitchhiking, also known as genetic draft.[140][138][146]
The time for a neutral allele to become fixed by genetic drift depends on population size, with fixation occurring more rapidly in smaller populations.[147] What matters is not the raw number of individuals in a population, but a measure known as the effective population size.[148] The effective population size is usually smaller than the total population since it takes into account factors such as the level of inbreeding and the stage of the lifecycle in which the population is the smallest.[148] The effective population size may not be the same for every gene in the same population.[149]
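One common way the bottleneck stages enter the calculation (a textbook approximation, not a formula from the cited sources) is through the harmonic mean of census sizes over t generations:
$$ \frac{1}{N_e} = \frac{1}{t} \sum_{i=1}^{t} \frac{1}{N_i} $$
Because a harmonic mean is dominated by its smallest terms, a single severe bottleneck depresses N_e for a long time. Under neutrality, a new mutation ultimately fixes with probability 1/(2N), and the mutations that do fix take on average about 4N_e generations to do so, which is why fixation by drift proceeds so much faster in small populations.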
It is usually difficult to measure the relative importance of selection and neutral processes, including drift.[150] The comparative importance of adaptive and non-adaptive forces in driving evolutionary change is an area of current research.[151]
Gene flow involves the exchange of genes between populations and between species.[106] The presence or absence of gene flow fundamentally changes the course of evolution. Due to the complexity of organisms, any two completely isolated populations will eventually evolve genetic incompatibilities through neutral processes, as in the Bateson-Dobzhansky-Muller model, even if both populations remain essentially identical in terms of their adaptation to the environment.
If genetic differentiation between populations develops, gene flow between populations can introduce traits or alleles which are disadvantageous in the local population and this may lead to organisms within these populations evolving mechanisms that prevent mating with genetically distant populations, eventually resulting in the appearance of new species. Thus, exchange of genetic information between individuals is fundamentally important for the development of the Biological Species Concept (BSC).
During the development of the modern synthesis, Sewall Wright developed his shifting balance theory, which regarded gene flow between partially isolated populations as an important aspect of adaptive evolution.[152] However, recently there has been substantial criticism of the importance of the shifting balance theory.[153]
Mutation bias is usually conceived as a difference in expected rates for two different kinds of mutation, e.g., transition-transversion bias, GC-AT bias, deletion-insertion bias. This is related to the idea of developmental bias.
Haldane[154] and Fisher[155] argued that, because mutation is a weak pressure easily overcome by selection, tendencies of mutation would be ineffectual except under conditions of neutral evolution or extraordinarily high mutation rates. This opposing-pressures argument was long used to dismiss the possibility of internal tendencies in evolution,[156] until the molecular era prompted renewed interest in neutral evolution.
Noboru Sueoka[157] and Ernst Freese[158] proposed that systematic biases in mutation might be responsible for systematic differences in genomic GC composition between species. The identification of a GC-biased E. coli mutator strain in 1967,[159] along with the proposal of the neutral theory, established the plausibility of mutational explanations for molecular patterns, which are now common in the molecular evolution literature.
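Sueoka's reasoning can be compressed into a simple two-rate model (a standard formulation, paraphrased here rather than quoted from the cited papers): if u is the mutation rate from G:C base pairs to A:T pairs and v the rate in the opposite direction, then under mutation pressure alone the genome approaches an equilibrium GC content of
$$ GC_{eq} = \frac{v}{u + v} $$
so a mutator strain that inflates v is expected to push genomic composition toward GC, consistent with the GC-biased E. coli mutator described above.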
For instance, mutation biases are frequently invoked in models of codon usage.[160] Such models also include effects of selection, following the mutation-selection-drift model,[161] which allows both for mutation biases and differential selection based on effects on translation. Hypotheses of mutation bias have played an important role in the development of thinking about the evolution of genome composition, including isochores.[162] Different insertion vs. deletion biases in different taxa can lead to the evolution of different genome sizes.[163][164] The hypothesis of Lynch regarding genome size relies on mutational biases toward increase or decrease in genome size.
However, mutational hypotheses for the evolution of composition suffered a reduction in scope when it was discovered that (1) GC-biased gene conversion makes an important contribution to composition in diploid organisms such as mammals[165] and (2) bacterial genomes frequently have AT-biased mutation.[166]
Contemporary thinking about the role of mutation biases reflects a different theory from that of Haldane and Fisher. More recent work[156] showed that the original "pressures" theory assumes that evolution is based on standing variation: when evolution depends on the introduction of new alleles, mutational and developmental biases in introduction can impose biases on evolution without requiring neutral evolution or high mutation rates.
Several recent studies report that the mutations implicated in adaptation reflect common mutation biases[167][168][169] though others dispute this interpretation.[170]
Evolution influences every aspect of the form and behaviour of organisms. Most prominent are the specific behavioural and physical adaptations that are the outcome of natural selection. These adaptations increase fitness by aiding activities such as finding food, avoiding predators or attracting mates. Organisms can also respond to selection by cooperating with each other, usually by aiding their relatives or engaging in mutually beneficial symbiosis. In the longer term, evolution produces new species through splitting ancestral populations of organisms into new groups that cannot or will not interbreed.
These outcomes of evolution are distinguished based on time scale as macroevolution versus microevolution. Macroevolution refers to evolution that occurs at or above the level of species, in particular speciation and extinction; whereas microevolution refers to smaller evolutionary changes within a species or population, in particular shifts in allele frequency and adaptation.[172] In general, macroevolution is regarded as the outcome of long periods of microevolution.[173] Thus, the distinction between micro- and macroevolution is not a fundamental one—the difference is simply the time involved.[174] However, in macroevolution, the traits of the entire species may be important. For instance, a large amount of variation among individuals allows a species to rapidly adapt to new habitats, lessening the chance of it going extinct, while a wide geographic range increases the chance of speciation, by making it more likely that part of the population will become isolated. In this sense, microevolution and macroevolution might involve selection at different levels—with microevolution acting on genes and organisms, versus macroevolutionary processes such as species selection acting on entire species and affecting their rates of speciation and extinction.[175][176][177]
A common misconception is that evolution has goals, long-term plans, or an innate tendency for "progress", as expressed in beliefs such as orthogenesis and evolutionism; realistically however, evolution has no long-term goal and does not necessarily produce greater complexity.[178][179][180] Although complex species have evolved, they occur as a side effect of the overall number of organisms increasing and simple forms of life still remain more common in the biosphere.[181] For example, the overwhelming majority of species are microscopic prokaryotes, which form about half the world's biomass despite their small size,[182] and constitute the vast majority of Earth's biodiversity.[183] Simple organisms have therefore been the dominant form of life on Earth throughout its history and continue to be the main form of life up to the present day, with complex life only appearing more diverse because it is more noticeable.[184] Indeed, the evolution of microorganisms is particularly important to modern evolutionary research, since their rapid reproduction allows the study of experimental evolution and the observation of evolution and adaptation in real time.[185][186]
Adaptation is the process that makes organisms better suited to their habitat.[187][188] The term adaptation may also refer to a trait that is important for an organism's survival, as in the adaptation of horses' teeth to the grinding of grass. By using the term adaptation for the evolutionary process and adaptive trait for the product (the bodily part or function), the two senses of the word may be distinguished. Adaptations are produced by natural selection.[189] The following definitions are due to Theodosius Dobzhansky:
- Adaptation is the evolutionary process whereby an organism becomes better able to live in its habitat or habitats.[190]
- Adaptedness is the state of being adapted: the degree to which an organism is able to live and reproduce in a given set of habitats.[191]
- An adaptive trait is an aspect of the developmental pattern of the organism which enables or enhances the probability of that organism surviving and reproducing.[192]
Adaptation may cause either the gain of a new feature or the loss of an ancestral feature. An example that shows both types of change is bacterial adaptation to antibiotic selection, with genetic changes causing antibiotic resistance both by modifying the target of the drug and by increasing the activity of transporters that pump the drug out of the cell.[193] Other striking examples are the bacteria Escherichia coli evolving the ability to use citric acid as a nutrient in a long-term laboratory experiment,[194] Flavobacterium evolving a novel enzyme that allows these bacteria to grow on the by-products of nylon manufacturing,[195][196] and the soil bacterium Sphingobium evolving an entirely new metabolic pathway that degrades the synthetic pesticide pentachlorophenol.[197][198] An interesting but still controversial idea is that some adaptations might increase the ability of organisms to generate genetic diversity and adapt by natural selection (increasing organisms' evolvability).[199][200][201][202][203]
Adaptation occurs through the gradual modification of existing structures. Consequently, structures with similar internal organisation may have different functions in related organisms. This is the result of a single ancestral structure being adapted to function in different ways. The bones within bat wings, for example, are very similar to those in mice feet and primate hands, due to the descent of all these structures from a common mammalian ancestor.[205] However, since all living organisms are related to some extent,[206] even organs that appear to have little or no structural similarity, such as arthropod, squid and vertebrate eyes, or the limbs and wings of arthropods and vertebrates, can depend on a common set of homologous genes that control their assembly and function; this is called deep homology.[207][208]
During evolution, some structures may lose their original function and become vestigial structures.[209] Such structures may have little or no function in a current species, yet have a clear function in ancestral species, or other closely related species. Examples include pseudogenes,[210] the non-functional remains of eyes in blind cave-dwelling fish,[211] wings in flightless birds,[212] the presence of hip bones in whales and snakes,[204] and sexual traits in organisms that reproduce via asexual reproduction.[213] Examples of vestigial structures in humans include wisdom teeth,[214] the coccyx,[209] the vermiform appendix,[209] and other behavioural vestiges such as goose bumps[215][216] and primitive reflexes.[217][218][219]
However, many traits that appear to be simple adaptations are in fact exaptations: structures originally adapted for one function, but which coincidentally became somewhat useful for some other function in the process.[220] One example is the African lizard Holaspis guentheri, which developed an extremely flat head for hiding in crevices, as can be seen by looking at its near relatives. However, in this species, the head has become so flattened that it assists in gliding from tree to tree—an exaptation.[220] Within cells, molecular machines such as the bacterial flagella[221] and protein sorting machinery[222] evolved by the recruitment of several pre-existing proteins that previously had different functions.[172] Another example is the recruitment of enzymes from glycolysis and xenobiotic metabolism to serve as structural proteins called crystallins within the lenses of organisms' eyes.[223][224]
An area of current investigation in evolutionary developmental biology is the developmental basis of adaptations and exaptations.[225] This research addresses the origin and evolution of embryonic development and how modifications of development and developmental processes produce novel features.[226] These studies have shown that evolution can alter development to produce new structures, such as embryonic bone structures that develop into the jaw in other animals instead forming part of the middle ear in mammals.[227] It is also possible for structures that have been lost in evolution to reappear due to changes in developmental genes, such as a mutation in chickens causing embryos to grow teeth similar to those of crocodiles.[228] It is now becoming clear that most alterations in the form of organisms are due to changes in a small set of conserved genes.[229]
Interactions between organisms can produce both conflict and cooperation. When the interaction is between pairs of species, such as a pathogen and a host, or a predator and its prey, these species can develop matched sets of adaptations. Here, the evolution of one species causes adaptations in a second species. These changes in the second species then, in turn, cause new adaptations in the first species. This cycle of selection and response is called coevolution.[230] An example is the production of tetrodotoxin in the rough-skinned newt and the evolution of tetrodotoxin resistance in its predator, the common garter snake. In this predator-prey pair, an evolutionary arms race has produced high levels of toxin in the newt and correspondingly high levels of toxin resistance in the snake.[231]
Not all co-evolved interactions between species involve conflict.[232] Many cases of mutually beneficial interactions have evolved. For instance, extreme cooperation exists between plants and the mycorrhizal fungi that grow on their roots and aid the plant in absorbing nutrients from the soil.[233] This is a reciprocal relationship as the plants provide the fungi with sugars from photosynthesis. Here, the fungi actually grow inside plant cells, allowing them to exchange nutrients with their hosts, while sending signals that suppress the plant immune system.[234]
Coalitions between organisms of the same species have also evolved. An extreme case is the eusociality found in social insects, such as bees, termites and ants, where sterile insects feed and guard the small number of organisms in a colony that are able to reproduce. On an even smaller scale, the somatic cells that make up the body of an animal limit their reproduction so they can maintain a stable organism, which then supports a small number of the animal's germ cells to produce offspring. Here, somatic cells respond to specific signals that instruct them whether to grow, remain as they are, or die. If cells ignore these signals and multiply inappropriately, their uncontrolled growth causes cancer.[235]
Such cooperation within species may have evolved through the process of kin selection, which is where one organism acts to help raise a relative's offspring.[236] This activity is selected for because if the helping individual contains alleles which promote the helping activity, it is likely that its kin will also contain these alleles and thus those alleles will be passed on.[237] Other processes that may promote cooperation include group selection, where cooperation provides benefits to a group of organisms.[238]
Speciation is the process where a species diverges into two or more descendant species.[239]
There are multiple ways to define the concept of "species." The choice of definition is dependent on the particularities of the species concerned.[240] For example, some species concepts apply more readily toward sexually reproducing organisms while others lend themselves better toward asexual organisms. Despite the diversity of various species concepts, these various concepts can be placed into one of three broad philosophical approaches: interbreeding, ecological and phylogenetic.[241] The Biological Species Concept (BSC) is a classic example of the interbreeding approach. Defined by evolutionary biologist Ernst Mayr in 1942, the BSC states that "species are groups of actually or potentially interbreeding natural populations, which are reproductively isolated from other such groups."[242] Despite its wide and long-term use, the BSC like others is not without controversy, for example because these concepts cannot be applied to prokaryotes,[243] and this is called the species problem.[240] Some researchers have attempted a unifying monistic definition of species, while others adopt a pluralistic approach and suggest that there may be different ways to logically interpret the definition of a species.[240][241]
Barriers to reproduction between two diverging sexual populations are required for the populations to become new species. Gene flow may slow this process by spreading new genetic variants to the other populations as well. Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules.[244] Such hybrids are generally infertile. In this case, closely related species may regularly interbreed, but hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype.[245] The importance of hybridisation in producing new species of animals is unclear, although cases have been seen in many types of animals,[246] with the gray tree frog being a particularly well-studied example.[247]
Speciation has been observed multiple times under both controlled laboratory conditions (see laboratory experiments of speciation) and in nature.[248] In sexually reproducing organisms, speciation results from reproductive isolation followed by genealogical divergence. There are four primary geographic modes of speciation. The most common in animals is allopatric speciation, which occurs in populations initially isolated geographically, such as by habitat fragmentation or migration. Selection under these conditions can produce very rapid changes in the appearance and behaviour of organisms.[249][250] As selection and drift act independently on populations isolated from the rest of their species, separation may eventually produce organisms that cannot interbreed.[251]
The second mode of speciation is peripatric speciation, which occurs when small populations of organisms become isolated in a new environment. This differs from allopatric speciation in that the isolated populations are numerically much smaller than the parental population. Here, the founder effect causes rapid speciation after an increase in inbreeding increases selection on homozygotes, leading to rapid genetic change.[252]
The third mode is parapatric speciation. This is similar to peripatric speciation in that a small population enters a new habitat, but differs in that there is no physical separation between these two populations. Instead, speciation results from the evolution of mechanisms that reduce gene flow between the two populations.[239] Generally this occurs when there has been a drastic change in the environment within the parental species' habitat. One example is the grass Anthoxanthum odoratum, which can undergo parapatric speciation in response to localised metal pollution from mines.[253] Here, plants evolve that have resistance to high levels of metals in the soil. Selection against interbreeding with the metal-sensitive parental population produced a gradual change in the flowering time of the metal-resistant plants, which eventually produced complete reproductive isolation. Selection against hybrids between the two populations may cause reinforcement, which is the evolution of traits that promote mating within a species, as well as character displacement, which is when two species become more distinct in appearance.[254]
Finally, in sympatric speciation species diverge without geographic isolation or changes in habitat. This form is rare since even a small amount of gene flow may remove genetic differences between parts of a population.[255] Generally, sympatric speciation in animals requires the evolution of both genetic differences and nonrandom mating, to allow reproductive isolation to evolve.[256]
One type of sympatric speciation involves crossbreeding of two related species to produce a new hybrid species. This is not common in animals as animal hybrids are usually sterile. This is because during meiosis the homologous chromosomes from each parent are from different species and cannot successfully pair. However, it is more common in plants because plants often double their number of chromosomes, to form polyploids.[257] This allows the chromosomes from each parental species to form matching pairs during meiosis, since each parent's chromosomes are represented by a pair already.[258] An example of such a speciation event is when the plant species Arabidopsis thaliana and Arabidopsis arenosa crossbred to give the new species Arabidopsis suecica.[259] This happened about 20,000 years ago,[260] and the speciation process has been repeated in the laboratory, which allows the study of the genetic mechanisms involved in this process.[261] Indeed, chromosome doubling within a species may be a common cause of reproductive isolation, as half the doubled chromosomes will be unmatched when breeding with undoubled organisms.[262]
Speciation events are important in the theory of punctuated equilibrium, which accounts for the pattern in the fossil record of short "bursts" of evolution interspersed with relatively long periods of stasis, where species remain relatively unchanged.[263] In this theory, speciation and rapid evolution are linked, with natural selection and genetic drift acting most strongly on organisms undergoing speciation in novel habitats or small populations. As a result, the periods of stasis in the fossil record correspond to the parental population, while the organisms undergoing speciation and rapid evolution are found in small populations or geographically restricted habitats and are therefore rarely preserved as fossils.[176]
Extinction is the disappearance of an entire species. Extinction is not an unusual event, as species regularly appear through speciation and disappear through extinction.[264] Nearly all animal and plant species that have lived on Earth are now extinct,[265] and extinction appears to be the ultimate fate of all species.[266] These extinctions have happened continuously throughout the history of life, although the rate of extinction spikes in occasional mass extinction events.[267] The Cretaceous–Paleogene extinction event, during which the non-avian dinosaurs became extinct, is the most well-known, but the earlier Permian–Triassic extinction event was even more severe, with approximately 96% of all marine species driven to extinction.[267] The Holocene extinction event is an ongoing mass extinction associated with humanity's expansion across the globe over the past few thousand years. Present-day extinction rates are 100–1,000 times greater than the background rate, and up to 30% of current species may be extinct by the mid-21st century.[268] Human activities are now the primary cause of the ongoing extinction event;[269][270] global warming may further accelerate it in the future.[271] Despite the estimated extinction of more than 99 percent of all species that ever lived on Earth,[272][273] about 1 trillion species are estimated to be on Earth currently with only one-thousandth of one percent described.[274]
The role of extinction in evolution is not very well understood and may depend on which type of extinction is considered.[267] The causes of the continuous "low-level" extinction events, which form the majority of extinctions, may be the result of competition between species for limited resources (the competitive exclusion principle).[61] If one species can out-compete another, this could produce species selection, with the fitter species surviving and the other species being driven to extinction.[131] The intermittent mass extinctions are also important, but instead of acting as a selective force, they drastically reduce diversity in a nonspecific manner and promote bursts of rapid evolution and speciation in survivors.[275]
The Earth is about 4.54 billion years old.[276][277][278] The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago,[13][279] during the Eoarchean Era after a geological crust started to solidify following the earlier molten Hadean Eon. Microbial mat fossils have been found in 3.48 billion-year-old sandstone in Western Australia.[15][16][17] Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old metasedimentary rocks discovered in Western Greenland[14] as well as "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia.[280][281] Commenting on the Australian findings, Stephen Blair Hedges wrote, "If life arose relatively quickly on Earth, then it could be common in the universe."[280][282] In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth.[283]
More than 99 percent of all species, amounting to over five billion species,[284] that ever lived on Earth are estimated to be extinct.[272][273] Estimates on the number of Earth's current species range from 10 million to 14 million,[285][286] of which about 1.9 million are estimated to have been named[287] and 1.6 million documented in a central database to date,[288] leaving at least 80 percent not yet described.
Highly energetic chemistry is thought to have produced a self-replicating molecule around 4 billion years ago, and half a billion years later the last common ancestor of all life existed.[11] The current scientific consensus is that the complex biochemistry that makes up life came from simpler chemical reactions.[289] The beginning of life may have included self-replicating molecules such as RNA[290] and the assembly of simple cells.[291]
All organisms on Earth are descended from a common ancestor or ancestral gene pool.[206][292] Current species are a stage in the process of evolution, with their diversity the product of a long series of speciation and extinction events.[293] The common descent of organisms was first deduced from four simple facts about organisms: First, they have geographic distributions that cannot be explained by local adaptation. Second, the diversity of life is not a set of completely unique organisms, but organisms that share morphological similarities. Third, vestigial traits with no clear purpose resemble functional ancestral traits. Fourth, organisms can be classified using these similarities into a hierarchy of nested groups, similar to a family tree.[294]
Modern research has suggested that, due to horizontal gene transfer, this "tree of life" may be more complicated than a simple branching tree since some genes have spread independently between distantly related species.[295][296] To solve this problem and others, some authors prefer to use the "Coral of life" as a metaphor or a mathematical model to illustrate the evolution of life. This view dates back to an idea briefly mentioned by Darwin but later abandoned.[297]
Past species have also left records of their evolutionary history. Fossils, along with the comparative anatomy of present-day organisms, constitute the morphological, or anatomical, record.[298] By comparing the anatomies of both modern and extinct species, paleontologists can infer the lineages of those species. However, this approach is most successful for organisms that had hard body parts, such as shells, bones or teeth. Further, as prokaryotes such as bacteria and archaea share a limited set of common morphologies, their fossils do not provide information on their ancestry.
More recently, evidence for common descent has come from the study of biochemical similarities between organisms. For example, all living cells use the same basic set of nucleotides and amino acids.[299] The development of molecular genetics has revealed the record of evolution left in organisms' genomes: dating when species diverged through the molecular clock produced by mutations.[300] For example, these DNA sequence comparisons have revealed that humans and chimpanzees share 98% of their genomes and analysing the few areas where they differ helps shed light on when the common ancestor of these species existed.[301]
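The molecular-clock calculation behind such dating can be reduced to a single first-order formula (a hedged approximation assuming a constant, roughly neutral substitution rate, which real analyses must calibrate and correct): if two species differ at a fraction d of compared neutral sites and substitutions accumulate at rate μ per site per year in each lineage, the time back to their common ancestor is approximately
$$ t \approx \frac{d}{2\mu} $$
with the factor of two counting the two independent lineages that have been diverging since the split.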
Prokaryotes inhabited the Earth from approximately 3–4 billion years ago.[303][304] No obvious changes in morphology or cellular organisation occurred in these organisms over the next few billion years.[305] Eukaryotic cells emerged between 1.6 and 2.7 billion years ago. The next major change in cell structure came when bacteria were engulfed by eukaryotic cells, in a cooperative association called endosymbiosis.[306][307] The engulfed bacteria and the host cell then underwent coevolution, with the bacteria evolving into either mitochondria or hydrogenosomes.[308] Another engulfment of cyanobacterial-like organisms led to the formation of chloroplasts in algae and plants.[309]
The history of life was that of the unicellular eukaryotes, prokaryotes and archaea until about 610 million years ago when multicellular organisms began to appear in the oceans in the Ediacaran period.[303][310] The evolution of multicellularity occurred in multiple independent events, in organisms as diverse as sponges, brown algae, cyanobacteria, slime moulds and myxobacteria.[311] In January 2016, scientists reported that, about 800 million years ago, a minor genetic change in a single molecule called GK-PID may have allowed organisms to go from a single cell organism to one of many cells.[312]
Soon after the emergence of these first multicellular organisms, a remarkable amount of biological diversity appeared over approximately 10 million years, in an event called the Cambrian explosion. Here, the majority of types of modern animals appeared in the fossil record, as well as unique lineages that subsequently became extinct.[313] Various triggers for the Cambrian explosion have been proposed, including the accumulation of oxygen in the atmosphere from photosynthesis.[314]
About 500 million years ago, plants and fungi colonised the land and were soon followed by arthropods and other animals.[315] Insects were particularly successful and even today make up the majority of animal species.[316] Amphibians first appeared around 364 million years ago, followed by early amniotes and birds around 155 million years ago (both from "reptile"-like lineages), mammals around 129 million years ago, homininae around 10 million years ago and modern humans around 250,000 years ago.[317][318][319] However, despite the evolution of these large animals, smaller organisms similar to the types that evolved early in this process continue to be highly successful and dominate the Earth, with the majority of both biomass and species being prokaryotes.[183]
Concepts and models used in evolutionary biology, such as natural selection, have many applications.[320]
Artificial selection is the intentional selection of traits in a population of organisms. This has been used for thousands of years in the domestication of plants and animals.[321] More recently, such selection has become a vital part of genetic engineering, with selectable markers such as antibiotic resistance genes being used to manipulate DNA. Proteins with valuable properties have evolved by repeated rounds of mutation and selection (for example modified enzymes and new antibodies) in a process called directed evolution.[322]
Understanding the changes that have occurred during an organism's evolution can reveal the genes needed to construct parts of the body, genes which may be involved in human genetic disorders.[323] For example, the Mexican tetra is an albino cavefish that lost its eyesight during evolution. Breeding together different populations of this blind fish produced some offspring with functional eyes, since different mutations had occurred in the isolated populations that had evolved in different caves.[324] This helped identify genes required for vision and pigmentation.[325]
Evolutionary theory has many applications in medicine. Many human diseases are not static phenomena, but capable of evolution. Viruses, bacteria, fungi and cancers evolve to be resistant to host immune defences, as well as pharmaceutical drugs.[326][327][328] These same problems occur in agriculture with pesticide[329] and herbicide[330] resistance. It is possible that we are facing the end of the effective life of most available antibiotics,[331] and predicting the evolution and evolvability[332] of our pathogens, and devising strategies to slow or circumvent them, requires deeper knowledge of the complex forces driving evolution at the molecular level.[333]
In computer science, simulations of evolution using evolutionary algorithms and artificial life started in the 1960s and were extended with simulation of artificial selection.[334] Artificial evolution became a widely recognised optimisation method as a result of the work of Ingo Rechenberg in the 1960s. He used evolution strategies to solve complex engineering problems.[335] Genetic algorithms in particular became popular through the writing of John Henry Holland.[336] Practical applications also include automatic evolution of computer programmes.[337] Evolutionary algorithms are now used to solve multi-dimensional problems more efficiently than software produced by human designers and also to optimise the design of systems.[338]
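As a concrete illustration of the mutate-and-select loop underlying these methods, here is a minimal (1+1) evolution strategy in Python; the objective function and parameters are toy assumptions, not any particular published setup:

import random

def fitness(x):
    # Toy objective: squared distance from the target point (3, -2); lower is better.
    return (x[0] - 3) ** 2 + (x[1] + 2) ** 2

parent = [0.0, 0.0]
for generation in range(500):
    # Mutate: perturb each coordinate with Gaussian noise.
    child = [g + random.gauss(0, 0.5) for g in parent]
    # Select: keep the child only if it is at least as fit as the parent.
    if fitness(child) <= fitness(parent):
        parent = child

print(parent)  # converges toward [3, -2]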
In the 19th century, particularly after the publication of On the Origin of Species in 1859, the idea that life had evolved was an active source of academic debate centred on the philosophical, social and religious implications of evolution. Today, the modern evolutionary synthesis is accepted by a vast majority of scientists.[61] However, evolution remains a contentious concept for some theists.[340]
While various religions and denominations have reconciled their beliefs with evolution through concepts such as theistic evolution, there are creationists who believe that evolution is contradicted by the creation myths found in their religions and who raise various objections to evolution.[172][341][342] As had been demonstrated by responses to the publication of Vestiges of the Natural History of Creation in 1844, the most controversial aspect of evolutionary biology is the implication of human evolution that humans share common ancestry with apes and that the mental and moral faculties of humanity have the same types of natural causes as other inherited traits in animals.[343] In some countries, notably the United States, these tensions between science and religion have fuelled the current creation–evolution controversy, a religious conflict focusing on politics and public education.[344] While other scientific fields such as cosmology[345] and Earth science[346] also conflict with literal interpretations of many religious texts, evolutionary biology experiences significantly more opposition from religious literalists.
The teaching of evolution in American secondary school biology classes was uncommon in most of the first half of the 20th century. The Scopes Trial decision of 1925 caused the subject to become very rare in American secondary biology textbooks for a generation, but it was gradually re-introduced later and became legally protected with the 1968 Epperson v. Arkansas decision. Since then, the competing religious belief of creationism was legally disallowed in secondary school curricula in various decisions in the 1970s and 1980s, but it returned in pseudoscientific form as intelligent design (ID), to be excluded once again in the 2005 Kitzmiller v. Dover Area School District case.[347] The debate over Darwin's ideas did not generate significant controversy in China.[348]
en/1923.html.txt
ADDED
@@ -0,0 +1,73 @@
Existence is the ability of an entity to interact with physical or mental reality. In philosophy, it refers to the ontological property[1] of being.[2]
The term existence comes from Old French existence, from Medieval Latin existentia/exsistentia.[3]
Materialism holds that the only things that exist are matter and energy, that all things are composed of material, that all actions require energy, and that all phenomena (including consciousness) are the result of the interaction of matter. Dialectical materialism does not make a distinction between being and existence, and defines it as the objective reality of various forms of matter.[2]
Idealism holds that the only things that exist are thoughts and ideas, while the material world is secondary.[4][5] In idealism, existence is sometimes contrasted with transcendence, the ability to go beyond the limits of existence.[2] As a form of epistemological idealism, rationalism interprets existence as cognizable and rational, that all things are composed of strings of reasoning, requiring an associated idea of the thing, and all phenomena (including consciousness) are the result of an understanding of the imprint from the noumenal world, in which lies the thing-in-itself.
In scholasticism, the existence of a thing is not derived from its essence but is determined by the creative volition of God; the dichotomy of existence and essence demonstrates that the dualism of the created universe is only resolvable through God.[2]
Empiricism recognizes the existence of singular facts, which are not derivable and which are observable through empirical experience.
The exact definition of existence is one of the most important and fundamental topics of ontology, the philosophical study of the nature of being, existence, or reality in general, as well as of the basic categories of being and their relations. Traditionally listed as a part of the major branch of philosophy known as metaphysics, ontology deals with questions concerning what things or entities exist or can be said to exist, and how such things or entities can be grouped, related within a hierarchy, and subdivided according to similarities and differences.
In the Western tradition of philosophy, the earliest known comprehensive treatments of the subject are from Plato's Phaedo, Republic, and Statesman and Aristotle's Metaphysics, though earlier fragmentary writing exists. Aristotle developed a comprehensive theory of being, according to which only individual things, called substances, fully have being, but other things such as relations, quantity, time, and place (called the categories) have a derivative kind of being, dependent on individual things. In Aristotle's Metaphysics, there are four causes of existence or change in nature: the material cause, the formal cause, the efficient cause and the final cause.
The Neo-Platonists and some early Christian philosophers argued about whether existence had any reality except in the mind of God.[citation needed] Some taught that existence was a snare and a delusion, that the world, the flesh, and the devil existed only to tempt weak humankind away from God.
In Hindu philosophy, the term Advaita refers to its idea that the true self, Atman, is the same as the highest metaphysical Reality (Brahman). The Upanishads describe the universe, and the human experience, as an interplay of Purusha (the eternal, unchanging principles, consciousness) and Prakṛti (the temporary, changing material world, nature). The former manifests itself as Ātman (Soul, Self), and the latter as Māyā. The Upanishads refer to the knowledge of Atman as "true knowledge" (Vidya), and the knowledge of Maya as "not true knowledge" (Avidya, Nescience, lack of awareness, lack of true knowledge).
The medieval philosopher Thomas Aquinas argued that God is pure being, and that in God essence and existence are the same. More specifically, what is identical in God, according to Aquinas, is God's essence and God's actus essendi.[6] At about the same time, the nominalist philosopher William of Ockham argued, in Book I of his Summa Totius Logicae (Treatise on all Logic, written some time before 1327), that Categories are not a form of Being in their own right, but derivative on the existence of individuals.
The Indian philosopher Nagarjuna (c. 150–250 CE) largely advanced existence concepts and founded the Madhyamaka school of Mahāyāna Buddhism.
In Eastern philosophy, Anicca (Sanskrit anitya) or "impermanence" describes existence. It refers to the fact that all conditioned things (sankhara) are in a constant state of flux. In reality there is no thing that ultimately ceases to exist; only the appearance of a thing ceases as it changes from one form to another. Imagine a leaf that falls to the ground and decomposes. While the appearance and relative existence of the leaf ceases, the components that formed the leaf become particulate material that goes on to form new plants. Buddhism teaches a middle way, avoiding the extreme views of eternalism and nihilism.[7] The middle way recognizes there are vast differences between the way things are perceived to exist and the way things really exist. The differences are reconciled in the concept of Shunyata by addressing the purpose the existing object serves for the subject's identity in being. What exists is in non-existence, because the subject changes.
Trailokya elaborates on three kinds of existence, those of desire, form, and formlessness in which there are karmic rebirths. Taken further to the Trikaya doctrine, it describes how the Buddha exists. In this philosophy, it is accepted that Buddha exists in more than one absolute way.
The early modern treatment of the subject derives from Antoine Arnauld and Pierre Nicole's Logic, or The Art of Thinking, better known as the Port-Royal Logic, first published in 1662. Arnauld thought that a proposition or judgment consists of taking two different ideas and either putting them together or rejecting them:
After conceiving things by our ideas, we compare these ideas and, finding that some belong together and others do not, we unite or separate them. This is called affirming or denying, and in general judging.
This judgment is also called a proposition, and it is easy to see that it must have two terms. One term, of which one affirms or denies something, is called the subject; the other term, which is affirmed or denied, is called the attribute or Praedicatum.
The two terms are joined by the verb "is" (or "is not", if the predicate is denied of the subject). Thus every proposition has three components: the two terms, and the "copula" that connects or separates them. Even when the proposition has only two words, the three terms are still there. For example, "God loves humanity" really means "God is a lover of humanity", and "God exists" means "God is a thing".
This theory of judgment dominated logic for centuries, but it has some obvious difficulties: it only considers propositions of the form "All A are B", a form logicians call universal. It does not allow propositions of the form "Some A are B", a form logicians call existential. If neither A nor B includes the idea of existence, then "some A are B" simply adjoins A to B. Conversely, if A or B does include the idea of existence in the way that "triangle" contains the idea "three angles equal to two right angles", then "A exists" is automatically true, and we have an ontological proof of A's existence. (Indeed, Arnauld's contemporary Descartes famously argued so, regarding the concept "God" (discourse 4, Meditation 5)). Arnauld's theory was current until the middle of the nineteenth century.
David Hume argued that the claim that a thing exists, when added to our notion of a thing, does not add anything to the concept. For example, if we form a complete notion of Moses, and superadd to that notion the claim that Moses existed, we are not adding anything to the notion of Moses.
Kant also argued that existence is not a "real" predicate, but gave no explanation of how this is possible. Indeed, his famous discussion of the subject is merely a restatement of Arnauld's doctrine that in the proposition "God is omnipotent", the verb "is" signifies the joining or separating of two concepts such as "God" and "omnipotence".[original research?]
Schopenhauer claimed that “everything that exists for knowledge, and hence the whole of this world, is only object in relation to the subject, the perception of the perceiver, in a word, representation.”[8] According to him there can be "No object without subject" because "everything objective is already conditioned as such in manifold ways by the knowing subject with the forms of its knowing, and presupposes these forms..."[9]
John Stuart Mill (and also Kant's pupil Herbart) argued that the predicative nature of existence was proved by sentences like "A centaur is a poetic fiction"[10] or "A greatest number is impossible" (Herbart).[11] Franz Brentano challenged this; so also (as is better known) did Frege. Brentano argued that we can join the concept represented by a noun phrase "an A" to the concept represented by an adjective "B" to give the concept represented by the noun phrase "a B-A". For example, we can join "a man" to "wise" to give "a wise man". But the noun phrase "a wise man" is not a sentence, whereas "some man is wise" is a sentence. Hence the copula must do more than merely join or separate concepts. Furthermore, adding "exists" to "a wise man", to give the complete sentence "a wise man exists", has the same effect as joining "some man" to "wise" using the copula. So the copula has the same effect as "exists". Brentano argued that every categorical proposition can be translated into an existential one without change in meaning, with "exists" and "does not exist" taking the place of the copula, and he demonstrated this with a series of example translations.
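In modern quantifier notation, which postdates Brentano and is supplied here only to illustrate the shape of such translations, the four traditional categorical forms become existential statements and their negations:

\begin{align*}
\text{All $A$ are $B$} &\equiv \neg\exists x\,(Ax \wedge \neg Bx)\\
\text{No $A$ is $B$} &\equiv \neg\exists x\,(Ax \wedge Bx)\\
\text{Some $A$ is $B$} &\equiv \exists x\,(Ax \wedge Bx)\\
\text{Some $A$ is not $B$} &\equiv \exists x\,(Ax \wedge \neg Bx)
\end{align*}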
Frege developed a similar view (though later) in his great work The Foundations of Arithmetic, as did Charles Sanders Peirce (but Peirce held that the possible and the real are not limited to the actual, individually existent). The Frege-Brentano view is the basis of the dominant position in modern Anglo-American philosophy: that existence is asserted by the existential quantifier (as expressed by Quine's slogan "To be is to be the value of a variable." — On What There Is, 1948).[12]
In mathematical logic, there are two quantifiers, "some" and "all", though as Brentano (1838–1917) pointed out, we can make do with just one quantifier and negation. The first of these quantifiers, "some", is also expressed as "there exists". Thus, in the sentence "There exists a man", the term "man" is asserted to be part of existence. But we can also assert, "There exists a triangle." Is a "triangle"—an abstract idea—part of existence in the same way that a "man"—a physical body—is part of existence? Do abstractions such as goodness, blindness, and virtue exist in the same sense that chairs, tables, and houses exist? What categories, or kinds of thing, can be the subject or the predicate of a proposition?
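The economy noted above, getting by with a single quantifier plus negation, rests on the standard duality of the quantifiers:

\[
\forall x\, P(x) \equiv \neg\exists x\, \neg P(x)
\qquad\text{and}\qquad
\exists x\, P(x) \equiv \neg\forall x\, \neg P(x)
\]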
Worse, does "existence" exist?[13]
In some statements, existence is implied without being mentioned. The statement "A bridge crosses the Thames at Hammersmith" cannot just be about a bridge, the Thames, and Hammersmith. It must be about "existence" as well. On the other hand, the statement "A bridge crosses the Styx at Limbo" has the same form, but while in the first case we understand a real bridge in the real world made of stone or brick, what "existence" would mean in the second case is less clear.
The nominalist approach is to argue that certain noun phrases can be "eliminated" by rewriting a sentence in a form that has the same meaning but does not contain the noun phrase. Thus Ockham argued that "Socrates has wisdom", which apparently asserts the existence of a reference for "wisdom", can be rewritten as "Socrates is wise", which contains only the referring phrase "Socrates".[14] This method became widely accepted in the twentieth century by the analytic school of philosophy.
However, this argument may be inverted by realists in arguing that since the sentence "Socrates is wise" can be rewritten as "Socrates has wisdom", this proves the existence of a hidden referent for "wise".
A further problem is that human beings seem to process information about fictional characters in much the same way that they process information about real people. For example, in the 2008 United States presidential election, a politician and actor named Fred Thompson ran for the Republican Party nomination. In polls, potential voters identified Fred Thompson as a "law and order" candidate. Thompson had played a fictional character on the television series Law and Order. The people who made the comment were aware that Law and Order is fiction, but at some level, they may have processed fiction as if it were fact, a process included in what is called the Paradox of Fiction.[dubious – discuss][15] Another example of this is the common experience of actresses who play the villain in a soap opera being accosted in public as if they are to blame for the actions of the characters they play.
A scientist might make a clear distinction between objects that exist, and assert that all objects that exist are made up of either matter or energy. But in the layperson's worldview, existence includes real, fictional, and even contradictory objects. Thus if we reason from the statement "Pegasus flies" to the statement "Pegasus exists", we are not asserting that Pegasus is made up of atoms, but rather that Pegasus exists in the worldview of classical myth. When a mathematician reasons from the statement "ABC is a triangle" to the statement "triangles exist", the mathematician is not asserting that triangles are made up of atoms but rather that triangles exist within a particular mathematical model.
According to Bertrand Russell's Theory of Descriptions, the negation operator in a singular sentence can take either wide or narrow scope: we distinguish between "some S is not P" (where negation takes "narrow scope") and "it is not the case that 'some S is P'" (where negation takes "wide scope"). The problem with this view is that there appears to be no such scope distinction in the case of proper names. The sentences "Socrates is not bald" and "it is not the case that Socrates is bald" both appear to have the same meaning, and they both appear to assert or presuppose the existence of someone (Socrates) who is not bald, so that negation takes a narrow scope. However, Russell's theory analyses proper names into a logical structure which makes sense of this problem. According to Russell, Socrates can be analyzed into the form 'The Philosopher of Greece.' In the wide scope, this would then read: it is not the case that there existed a Philosopher of Greece who was bald. In the narrow scope, it would read: there existed a Philosopher of Greece who was not bald.
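Making the two readings explicit, with $G$ for "is the Philosopher of Greece" and $B$ for "is bald" (predicate letters chosen here for illustration), Russell's analysis gives:

\[
\text{narrow scope:}\quad \exists x\,\bigl(Gx \wedge \forall y\,(Gy \to y = x) \wedge \neg Bx\bigr)
\qquad
\text{wide scope:}\quad \neg\exists x\,\bigl(Gx \wedge \forall y\,(Gy \to y = x) \wedge Bx\bigr)
\]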
According to the direct-reference view, an early version of which was originally proposed by Bertrand Russell, and perhaps earlier by Gottlob Frege, a proper name strictly has no meaning when there is no object to which it refers. This view relies on the argument that the semantic function of a proper name is to tell us which object bears the name, and thus to identify some object. But no object can be identified if none exists. Thus, a proper name must have a bearer if it is to be meaningful.
According to the "two sense" view of existence, which derives from Alexius Meinong, existential statements fall into two classes.
The problem is then evaded as follows. "Pegasus flies" implies existence in the wide sense, for it implies that something flies. But it does not imply existence in the narrow sense, for we deny existence in this sense by saying that Pegasus does not exist. In effect, the world of all things divides, on this view, into those (like Socrates, the planet Venus, and New York City) that exist in the narrow sense, and those (like Sherlock Holmes, the goddess Venus, and Minas Tirith) that do not.
However, common sense suggests the non-existence of such things as fictional characters or places.
Influenced by the views of Brentano's pupil Alexius Meinong, and by Edmund Husserl, Germanophone and Francophone philosophy took a different direction regarding the question of existence.
Anti-realism is the view of idealists who are skeptics about the physical world, maintaining either: (1) that nothing exists outside the mind, or (2) that we would have no access to a mind-independent reality even if it may exist. Realists, in contrast, hold that perceptions or sense data are caused by mind-independent objects. An "anti-realist" who denies that other minds exist (i.e., a solipsist) is different from an "anti-realist" who claims that there is no fact of the matter as to whether or not there are unobservable other minds (i.e., a logical behaviorist).
en/1925.html.txt
ADDED
@@ -0,0 +1,116 @@
An exoplanet or extrasolar planet is a planet outside the Solar System. The first possible evidence of an exoplanet was noted in 1917, but was not recognized as such.[4] The first confirmation of detection occurred in 1992. This was followed by the confirmation of a different planet, originally detected in 1988. As of 1 July 2020, there are 4,281 confirmed exoplanets in 3,163 systems, with 701 systems having more than one planet.[5]
There are many methods of detecting exoplanets. Transit photometry and Doppler spectroscopy have found the most, but these methods suffer from a clear observational bias favoring the detection of planets near the star; thus, 85% of the exoplanets detected are inside the tidal locking zone.[6] In several cases, multiple planets have been observed around a star.[7] About 1 in 5 Sun-like stars[a] have an "Earth-sized"[b] planet in the habitable zone.[c][8][9] Assuming there are 200 billion stars in the Milky Way,[d] it can be hypothesized that there are 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if planets orbiting the numerous red dwarfs are included.[10]
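The geometric side of this bias is easy to quantify for the transit method: for a circular orbit, the chance that a planet's orbit crosses the stellar disc as seen from Earth is roughly the stellar radius divided by the orbital radius. A minimal Python sketch, with illustrative values rather than survey data:

# Geometric transit probability for a circular orbit: ~ R_star / a.
R_SUN_AU = 0.00465  # radius of the Sun in astronomical units

def transit_probability(a_au, r_star_au=R_SUN_AU):
    """Chance that a randomly oriented circular orbit of radius a_au
    carries the planet across the stellar disc as seen from Earth."""
    return r_star_au / a_au

print(transit_probability(0.05))  # hot Jupiter at 0.05 AU: ~9%
print(transit_probability(1.0))   # Earth-like orbit at 1 AU: ~0.5%

The same geometry explains why transit surveys find close-in planets far more readily than distant ones.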
The least massive planet known is Draugr (also known as PSR B1257+12 A or PSR B1257+12 b), which is about twice the mass of the Moon. The most massive planet listed on the NASA Exoplanet Archive is HR 2562 b,[11][12] about 30 times the mass of Jupiter, although according to some definitions of a planet (based on the nuclear fusion of deuterium[13]), it is too massive to be a planet and may be a brown dwarf instead. Known orbital times for exoplanets vary from a few hours (for those closest to their star) to thousands of years. Some exoplanets are so far away from the star that it is difficult to tell whether they are gravitationally bound to it. Almost all of the planets detected so far are within the Milky Way. There is evidence that extragalactic planets, exoplanets farther away in galaxies beyond the local Milky Way galaxy, may exist.[14][15] The nearest exoplanet is Proxima Centauri b, located 4.2 light-years (1.3 parsecs) from Earth and orbiting Proxima Centauri, the closest star to the Sun.[16]
The discovery of exoplanets has intensified interest in the search for extraterrestrial life. There is special interest in planets that orbit in a star's habitable zone, where it is possible for liquid water, a prerequisite for life on Earth, to exist on the surface. The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life.[17]
Rogue planets do not orbit any star. Such objects are considered as a separate category of planet, especially if they are gas giants, which are often counted as sub-brown dwarfs.[18] The rogue planets in the Milky Way possibly number in the billions or more.[19][20]
The convention for designating exoplanets is an extension of the system used for designating multiple-star systems as adopted by the International Astronomical Union (IAU). For exoplanets orbiting a single star, the IAU designation is formed by taking the designated or proper name of its parent star, and adding a lower case letter.[22] Letters are given in order of each planet's discovery around the parent star, so that the first planet discovered in a system is designated "b" (the parent star is considered to be "a") and later planets are given subsequent letters. If several planets in the same system are discovered at the same time, the closest one to the star gets the next letter, followed by the other planets in order of orbital size. A provisional IAU-sanctioned standard exists to accommodate the designation of circumbinary planets. A limited number of exoplanets have IAU-sanctioned proper names. Other naming systems exist.
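A minimal Python sketch of the basic lettering rule just described, assigning lowercase letters from "b" in order of discovery (the star name and discovery list are made-up placeholders, and the tie-breaking rule for simultaneous discoveries is omitted):

def designate(star, planets_in_discovery_order):
    """Map each planet to '<star> b', '<star> c', ... by discovery order."""
    return {planet: f"{star} {chr(ord('b') + i)}"
            for i, planet in enumerate(planets_in_discovery_order)}

print(designate("Example-1", ["first discovery", "second discovery"]))
# {'first discovery': 'Example-1 b', 'second discovery': 'Example-1 c'}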
For centuries scientists, philosophers, and science fiction writers suspected that extrasolar planets existed, but there was no way of knowing whether they existed, how common they were, or how similar they might be to the planets of the Solar System. Various detection claims made in the nineteenth century were rejected by astronomers.
The first evidence of a possible exoplanet, orbiting Van Maanen 2, was noted in 1917, but was not recognized as such. The astronomer Walter Sydney Adams, who later became director of the Mount Wilson Observatory, produced a spectrum of the star using Mount Wilson's 60-inch telescope. He interpreted the spectrum to be of an F-type main-sequence star, but it is now thought that such a spectrum could be caused by the residue of a nearby exoplanet that had been pulverized into dust by the gravity of the star, the resulting dust then falling onto the star.[4]
The first suspected scientific detection of an exoplanet occurred in 1988. Shortly afterwards, the first confirmation of detection came in 1992, with the discovery of several terrestrial-mass planets orbiting the pulsar PSR B1257+12.[23] The first confirmation of an exoplanet orbiting a main-sequence star was made in 1995, when a giant planet was found in a four-day orbit around the nearby star 51 Pegasi. Some exoplanets have been imaged directly by telescopes, but the vast majority have been detected through indirect methods, such as the transit method and the radial-velocity method. In February 2018, researchers using the Chandra X-ray Observatory, combined with a planet detection technique called microlensing, found evidence of planets in a distant galaxy, stating "Some of these exoplanets are as (relatively) small as the moon, while others are as massive as Jupiter. Unlike Earth, most of the exoplanets are not tightly bound to stars, so they're actually wandering through space or loosely orbiting between stars. We can estimate that the number of planets in this [faraway] galaxy is more than a trillion."[24]
This space we declare to be infinite... In it are an infinity of worlds of the same kind as our own.
In the sixteenth century, the Italian philosopher Giordano Bruno, an early supporter of the Copernican theory that Earth and other planets orbit the Sun (heliocentrism), put forward the view that the fixed stars are similar to the Sun and are likewise accompanied by planets.
In the eighteenth century, the same possibility was mentioned by Isaac Newton in the "General Scholium" that concludes his Principia. Making a comparison to the Sun's planets, he wrote "And if the fixed stars are the centres of similar systems, they will all be constructed according to a similar design and subject to the dominion of One."[26]
In 1952, more than 40 years before the first hot Jupiter was discovered, Otto Struve wrote that there is no compelling reason why planets could not be much closer to their parent star than is the case in the Solar System, and proposed that Doppler spectroscopy and the transit method could detect super-Jupiters in short orbits.[27]
Claims of exoplanet detections have been made since the nineteenth century. Some of the earliest involve the binary star 70 Ophiuchi. In 1855 William Stephen Jacob at the East India Company's Madras Observatory reported that orbital anomalies made it "highly probable" that there was a "planetary body" in this system.[28] In the 1890s, Thomas J. J. See of the University of Chicago and the United States Naval Observatory stated that the orbital anomalies proved the existence of a dark body in the 70 Ophiuchi system with a 36-year period around one of the stars.[29] However, Forest Ray Moulton published a paper proving that a three-body system with those orbital parameters would be highly unstable.[30] During the 1950s and 1960s, Peter van de Kamp of Swarthmore College made another prominent series of detection claims, this time for planets orbiting Barnard's Star.[31] Astronomers now generally regard all the early reports of detection as erroneous.[32]
In 1991 Andrew Lyne, M. Bailes and S. L. Shemar claimed to have discovered a pulsar planet in orbit around PSR 1829-10, using pulsar timing variations.[33] The claim briefly received intense attention, but Lyne and his team soon retracted it.[34]
As of 1 July 2020, a total of 4,281 confirmed exoplanets are listed in the Extrasolar Planets Encyclopedia, including a few that were confirmations of controversial claims from the late 1980s.[5] The first published discovery to receive subsequent confirmation was made in 1988 by the Canadian astronomers Bruce Campbell, G. A. H. Walker, and Stephenson Yang of the University of Victoria and the University of British Columbia.[35] Although they were cautious about claiming a planetary detection, their radial-velocity observations suggested that a planet orbits the star Gamma Cephei. Partly because the observations were at the very limits of instrumental capabilities at the time, astronomers remained skeptical for several years about this and other similar observations. It was thought some of the apparent planets might instead have been brown dwarfs, objects intermediate in mass between planets and stars. In 1990, additional observations were published that supported the existence of the planet orbiting Gamma Cephei,[36] but subsequent work in 1992 again raised serious doubts.[37] Finally, in 2003, improved techniques allowed the planet's existence to be confirmed.[38]
On 9 January 1992, radio astronomers Aleksander Wolszczan and Dale Frail announced the discovery of two planets orbiting the pulsar PSR 1257+12.[23] This discovery was confirmed, and is generally considered to be the first definitive detection of exoplanets. Follow-up observations solidified these results, and confirmation of a third planet in 1994 revived the topic in the popular press.[39] These pulsar planets are thought to have formed from the unusual remnants of the supernova that produced the pulsar, in a second round of planet formation, or else to be the remaining rocky cores of gas giants that somehow survived the supernova and then decayed into their current orbits.
On 6 October 1995, Michel Mayor and Didier Queloz of the University of Geneva announced the first definitive detection of an exoplanet orbiting a main-sequence star, nearby G-type star 51 Pegasi.[40][41] This discovery, made at the Observatoire de Haute-Provence, ushered in the modern era of exoplanetary discovery, and was recognized by a share of the 2019 Nobel Prize in Physics. Technological advances, most notably in high-resolution spectroscopy, led to the rapid detection of many new exoplanets: astronomers could detect exoplanets indirectly by measuring their gravitational influence on the motion of their host stars. More extrasolar planets were later detected by observing the variation in a star's apparent luminosity as an orbiting planet transited in front of it.
Initially, most known exoplanets were massive planets that orbited very close to their parent stars. Astronomers were surprised by these "hot Jupiters", because theories of planetary formation had indicated that giant planets should only form at large distances from stars. But eventually more planets of other sorts were found, and it is now clear that hot Jupiters make up the minority of exoplanets. In 1999, Upsilon Andromedae became the first main-sequence star known to have multiple planets.[42] Kepler-16 contains the first discovered planet that orbits around a binary main-sequence star system.[43]
On 26 February 2014, NASA announced the discovery of 715 newly verified exoplanets around 305 stars by the Kepler Space Telescope. These exoplanets were checked using a statistical technique called "verification by multiplicity".[44][45][46] Before these results, most confirmed planets were gas giants comparable in size to Jupiter or larger because they are more easily detected, but the Kepler planets are mostly between the size of Neptune and the size of Earth.[44]
On 23 July 2015, NASA announced Kepler-452b, a near-Earth-size planet orbiting the habitable zone of a G2-type star.[47]
On 6 September 2018, NASA discovered an exoplanet about 145 light years away from Earth in the constellation Virgo.[48] This exoplanet, Wolf 503b, is twice the size of Earth and was discovered orbiting a type of star known as an "orange dwarf". Wolf 503b completes one orbit in just six days because it is very close to its star. Wolf 503b is the only exoplanet of that size found near the so-called Fulton gap, the observation, first made in 2017, that planets within a certain size range are unusually rare.[48] This opens up a new line of study for astronomers, who are still investigating whether planets found in the Fulton gap are gaseous or rocky.[48]
In January 2020, scientists announced the discovery of TOI 700 d, the first Earth-sized planet in the habitable zone detected by TESS.[49]
As of January 2020, NASA's Kepler and TESS missions had identified 4374 planetary candidates yet to be confirmed,[50] several of them being nearly Earth-sized and located in the habitable zone, some around Sun-like stars.[51][52][53]
About 97% of all the confirmed exoplanets have been discovered by indirect techniques of detection, mainly by radial velocity measurements and transit monitoring techniques.[57] Recently the techniques of singular optics have been applied in the search for exoplanets.[58]
Planets may form within a few to tens (or more) of millions of years of their star forming.[59][60][61][62][63]
The planets of the Solar System can only be observed in their current state, but observations of different planetary systems of varying ages allow us to observe planets at different stages of evolution. Available observations range from young proto-planetary disks where planets are still forming[64] to planetary systems over 10 Gyr old.[65] When planets form in a gaseous protoplanetary disk,[66] they accrete hydrogen/helium envelopes.[67][68] These envelopes cool and contract over time and, depending on the mass of the planet, some or all of the hydrogen/helium is eventually lost to space.[66] This means that even terrestrial planets may start off with large radii if they form early enough.[69][70][71] An example is Kepler-51b, which has only about twice the mass of Earth but is almost the size of Saturn, which is a hundred times the mass of Earth. Kepler-51b is quite young at a few hundred million years old.[72]
There is at least one planet on average per star.[7]
About 1 in 5 Sun-like stars[a] have an "Earth-sized"[b] planet in the habitable zone.[74]
Most known exoplanets orbit stars roughly similar to the Sun, i.e. main-sequence stars of spectral categories F, G, or K. Lower-mass stars (red dwarfs, of spectral category M) are less likely to have planets massive enough to be detected by the radial-velocity method.[75][76] Despite this, several tens of planets around red dwarfs have been discovered by the Kepler spacecraft, which uses the transit method to detect smaller planets.
Using data from Kepler, a correlation has been found between the metallicity of a star and the probability that the star hosts planets. Stars with higher metallicity are more likely to have planets, especially giant planets, than stars with lower metallicity.[77]
Some planets orbit one member of a binary star system,[78] and several circumbinary planets have been discovered which orbit both members of a binary star. A few planets in triple star systems are known,[79] and one in the quadruple system Kepler-64.
In 2013 the color of an exoplanet was determined for the first time. The best-fit albedo measurements of HD 189733b suggest that it is deep dark blue.[80][81] Later that same year, the colors of several other exoplanets were determined, including GJ 504 b which visually has a magenta color,[82] and Kappa Andromedae b, which if seen up close would appear reddish in color.[83]
Helium planets are expected to be white or grey in appearance.[84]
The apparent brightness (apparent magnitude) of a planet depends on how far away the observer is, how reflective the planet is (albedo), and how much light the planet receives from its star, which depends on how far the planet is from the star and how bright the star is. So, a planet with a low albedo that is close to its star can appear brighter than a planet with high albedo that is far from the star.[85]
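To make this trade-off concrete, here is a minimal Python sketch (not from the source; the function and all numbers are hypothetical) comparing the reflected flux reaching a distant observer for two planets, using inverse-square falloffs both for the starlight reaching each planet and for the reflected light reaching the observer:

def reflected_flux(albedo, l_star, r_planet, d_star, d_observer):
    # Starlight intercepted by the planet falls off as 1/d_star**2 ...
    intercepted = l_star * (r_planet / d_star) ** 2 / 4.0
    # ... and the reflected fraction spreads out as 1/d_observer**2.
    return albedo * intercepted / d_observer ** 2

# A dark planet close to its star can outshine a brighter one farther out:
close_dark = reflected_flux(albedo=0.1, l_star=1.0, r_planet=1.0, d_star=10.0, d_observer=1e6)
far_bright = reflected_flux(albedo=0.5, l_star=1.0, r_planet=1.0, d_star=100.0, d_observer=1e6)
print(close_dark > far_bright)  # True: 0.1/10**2 exceeds 0.5/100**2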
The darkest known planet in terms of geometric albedo is TrES-2b, a hot Jupiter that reflects less than 1% of the light from its star, making it less reflective than coal or black acrylic paint. Hot Jupiters are expected to be quite dark due to sodium and potassium in their atmospheres but it is not known why TrES-2b is so dark—it could be due to an unknown chemical compound.[86][87][88]
For gas giants, geometric albedo generally decreases with increasing metallicity or atmospheric temperature unless there are clouds to modify this effect. Increased cloud-column depth increases the albedo at optical wavelengths, but decreases it at some infrared wavelengths. Optical albedo increases with age, because older planets have higher cloud-column depths. Optical albedo decreases with increasing mass, because higher-mass giant planets have higher surface gravities, which produces lower cloud-column depths. Also, elliptical orbits can cause major fluctuations in atmospheric composition, which can have a significant effect.[89]
There is more thermal emission than reflection at some near-infrared wavelengths for massive and/or young gas giants. So, although optical brightness is fully phase-dependent, this is not always the case in the near infrared.[89]
Temperatures of gas giants decrease over time and with distance from their star. Lowering the temperature increases optical albedo even without clouds. At a sufficiently low temperature, water clouds form, which further increase optical albedo. At even lower temperatures, ammonia clouds form, resulting in the highest albedos at most optical and near-infrared wavelengths.[89]
In 2014, a magnetic field around HD 209458 b was inferred from the way hydrogen was evaporating from the planet. It is the first (indirect) detection of a magnetic field on an exoplanet. The magnetic field is estimated to be about one tenth as strong as Jupiter's.[90][91]
Exoplanets' magnetic fields may be detectable by their auroral radio emissions with sufficiently sensitive radio telescopes such as LOFAR.[92][93] The radio emissions could enable determination of the rotation rate of the interior of an exoplanet, and may yield a more accurate way to measure exoplanet rotation than by examining the motion of clouds.[94]
Earth's magnetic field results from its flowing liquid metallic core, but in massive super-Earths with high pressure, different compounds may form which do not match those created under terrestrial conditions. Compounds may form with greater viscosities and higher melting temperatures, which could prevent the interiors from separating into different layers, resulting in undifferentiated coreless mantles. Forms of magnesium oxide such as MgSi3O12 could be liquid metals at the pressures and temperatures found in super-Earths and could generate a magnetic field in the mantles of super-Earths.[95][96]
Hot Jupiters have been observed to have a larger radius than expected. This could be caused by the interaction between the stellar wind and the planet's magnetosphere creating an electric current through the planet that heats it up, causing it to expand. The more magnetically active a star is, the greater the stellar wind and the larger the electric current, leading to more heating and expansion of the planet. This theory matches the observation that stellar activity is correlated with inflated planetary radii.[97]
In August 2018, scientists announced the transformation of gaseous deuterium into a liquid metallic form. This may help researchers better understand giant gas planets, such as Jupiter, Saturn and related exoplanets, since such planets are thought to contain a lot of liquid metallic hydrogen, which may be responsible for their observed powerful magnetic fields.[98][99]
Although scientists previously announced that the magnetic fields of close-in exoplanets may cause increased stellar flares and starspots on their host stars, in 2019 this claim was demonstrated to be false in the HD 189733 system. The failure to detect "star-planet interactions" in the well-studied HD 189733 system calls other related claims of the effect into question.[100]
In 2019, the strength of the surface magnetic fields of four hot Jupiters was estimated; the values ranged between 20 and 120 gauss, compared with Jupiter's surface magnetic field of 4.3 gauss.[101][102]
In 2007, two independent teams of researchers came to opposing conclusions about the likelihood of plate tectonics on larger super-Earths,[103][104] with one team saying that plate tectonics would be episodic or stagnant[105] and the other saying that plate tectonics is very likely on super-Earths even if the planet is dry.[106]
If super-Earths have more than 80 times as much water as Earth then they become ocean planets with all land completely submerged. However, if there is less water than this limit, then the deep water cycle will move enough water between the oceans and mantle to allow continents to exist.[107][108]
Large surface temperature variations on 55 Cancri e have been attributed to possible volcanic activity releasing large clouds of dust which blanket the planet and block thermal emissions.[109][110]
The star 1SWASP J140747.93-394542.6 is orbited by an object that is circled by a ring system much larger than Saturn's rings. However, the mass of the object is not known; it could be a brown dwarf or low-mass star instead of a planet.[111][112]
The brightness of optical images of Fomalhaut b could be due to starlight reflecting off a circumplanetary ring system with a radius between 20 and 40 times Jupiter's, about the size of the orbits of the Galilean moons.[113]
The rings of the Solar System's gas giants are aligned with their planet's equator. However, for exoplanets that orbit close to their star, tidal forces from the star would lead to the outermost rings of a planet being aligned with the planet's orbital plane around the star. A planet's innermost rings would still be aligned with the planet's equator so that if the planet has a tilted rotational axis, then the different alignments between the inner and outer rings would create a warped ring system.[114]
In December 2013 a candidate exomoon of a rogue planet was announced.[115] On 3 October 2018, evidence suggesting a large exomoon orbiting Kepler-1625b was reported.[116]
Atmospheres have been detected around several exoplanets. The first to be observed was HD 209458 b in 2001.[118]
In May 2017, glints of light from Earth, seen as twinkling from an orbiting satellite a million miles away, were found to be reflected light from ice crystals in the atmosphere.[119][120] The technology used to determine this may be useful in studying the atmospheres of distant worlds, including those of exoplanets.
KIC 12557548 b is a small rocky planet, very close to its star, that is evaporating and leaving a trailing tail of cloud and dust like a comet.[121] The dust could be ash erupting from volcanoes and escaping due to the small planet's low surface gravity, or it could be metal vaporized by the high temperatures of being so close to the star, with the metal vapor then condensing into dust.[122]
In June 2015, scientists reported that the atmosphere of GJ 436 b was evaporating, resulting in a giant cloud around the planet and, due to radiation from the host star, a long trailing tail 14 million km (9 million mi) long.[123]
Tidally locked planets in a 1:1 spin-orbit resonance would have their star always shining directly overhead on one spot, which would be hot, while the opposite hemisphere receives no light and is freezing cold. Such a planet could resemble an eyeball, with the hotspot being the pupil.[124] Planets with an eccentric orbit could be locked in other resonances: 3:2 and 5:2 resonances would result in a double-eyeball pattern, with hotspots in both the eastern and western hemispheres.[125] Planets with both an eccentric orbit and a tilted axis of rotation would have more complicated insolation patterns.[126]
As more planets are discovered, the field of exoplanetology continues to grow into a deeper study of extrasolar worlds, and will ultimately tackle the prospect of life on planets beyond the Solar System.[57] At cosmic distances, life can only be detected if it has developed at a planetary scale and strongly modified the planetary environment, in such a way that the modifications cannot be explained by classical physico-chemical (out-of-equilibrium) processes.[57] For example, molecular oxygen (O2) in the atmosphere of Earth is a result of photosynthesis by living plants and many kinds of microorganisms, so it can be used as an indication of life on exoplanets, although small amounts of oxygen could also be produced by non-biological means.[127] Furthermore, a potentially habitable planet must orbit a stable star at a distance within which planetary-mass objects with sufficient atmospheric pressure can support liquid water at their surfaces.[128][129]
en/1926.html.txt
ADDED
@@ -0,0 +1,159 @@
Mining is the extraction of valuable minerals or other geological materials from the Earth, usually from an ore body, lode, vein, seam, reef or placer deposit. These deposits form a mineralized package that is of economic interest to the miner.
Ores recovered by mining include metals, coal, oil shale, gemstones, limestone, chalk, dimension stone, rock salt, potash, gravel, and clay. Mining is required to obtain any material that cannot be grown through agricultural processes, or feasibly created artificially in a laboratory or factory. Mining in a wider sense includes extraction of any non-renewable resource such as petroleum, natural gas, or even water.
Mining of stones and metal has been a human activity since prehistoric times. Modern mining processes involve prospecting for ore bodies, analysis of the profit potential of a proposed mine, extraction of the desired materials, and final reclamation of the land after the mine is closed.[1]
Mining operations usually create a negative environmental impact, both during the mining activity and after the mine has closed. Hence, most of the world's nations have passed regulations to decrease the impact. Work safety has long been a concern as well, and modern practices have significantly improved safety in mines.
Levels of metal recycling are generally low. Unless future end-of-life recycling rates are stepped up, some rare metals may become unavailable for use in a variety of consumer products. Due to the low recycling rates, some landfills now contain higher concentrations of metal than mines themselves.[2]
Since the beginning of civilization, people have used stone, ceramics and, later, metals found close to the Earth's surface. These were used to make early tools and weapons; for example, high quality flint found in northern France, southern England and Poland was used to create flint tools.[3] Flint mines have been found in chalk areas where seams of the stone were followed underground by shafts and galleries. The mines at Grimes Graves and Krzemionki are especially famous, and like most other flint mines, are Neolithic in origin (c. 4000–3000 BC). Other hard rocks mined or collected for axes included the greenstone of the Langdale axe industry based in the English Lake District.
The oldest-known mine on archaeological record is the Ngwenya Mine in Eswatini (Swaziland), which radiocarbon dating shows to be about 43,000 years old. At this site Paleolithic humans mined hematite to make the red pigment ochre.[4][5] Mines of a similar age in Hungary are believed to be sites where Neanderthals may have mined flint for weapons and tools.[6]
Ancient Egyptians mined malachite at Maadi.[7] At first, Egyptians used the bright green malachite stones for ornamentations and pottery. Later, between 2613 and 2494 BC, large building projects required expeditions abroad to the area of Wadi Maghareh in order to secure minerals and other resources not available in Egypt itself.[8] Quarries for turquoise and copper were also found at Wadi Hammamat, Tura, Aswan and various other Nubian sites on the Sinai Peninsula and at Timna.[8]
Mining in Egypt occurred in the earliest dynasties. The gold mines of Nubia were among the largest and most extensive of any in Ancient Egypt. These mines are described by the Greek author Diodorus Siculus, who mentions fire-setting as one method used to break down the hard rock holding the gold. One of the complexes is shown in one of the earliest known maps. The miners crushed the ore and ground it to a fine powder before washing the powder for the gold dust.
Mining in Europe has a very long history. Examples include the silver mines of Laurium, which helped support the Greek city state of Athens. Although they had over 20,000 slaves working them, their technology was essentially identical to their Bronze Age predecessors.[9] At other mines, such as on the island of Thassos, marble was quarried by the Parians after they arrived in the 7th century BC.[10] The marble was shipped away and was later found by archaeologists to have been used in buildings including the tomb of Amphipolis. Philip II of Macedon, the father of Alexander the Great, captured the gold mines of Mount Pangeo in 357 BC to fund his military campaigns.[11] He also captured gold mines in Thrace for minting coinage, eventually producing 26 tons per year.
However, it was the Romans who developed large scale mining methods, especially the use of large volumes of water brought to the minehead by numerous aqueducts. The water was used for a variety of purposes, including removing overburden and rock debris, called hydraulic mining, as well as washing comminuted, or crushed, ores and driving simple machinery.
The Romans used hydraulic mining methods on a large scale to prospect for the veins of ore, especially a now-obsolete form of mining known as hushing. They built numerous aqueducts to supply water to the minehead, where the water was stored in large reservoirs and tanks. When a full tank was opened, the flood of water sluiced away the overburden to expose the bedrock underneath and any gold veins. The rock was then worked upon by fire-setting to heat the rock, which would be quenched with a stream of water. The resulting thermal shock cracked the rock, enabling it to be removed by further streams of water from the overhead tanks. The Roman miners used similar methods to work cassiterite deposits in Cornwall and lead ore in the Pennines.
The methods had been developed by the Romans in Spain in 25 AD to exploit large alluvial gold deposits, the largest site being at Las Medulas, where seven long aqueducts tapped local rivers and sluiced the deposits. The Romans also exploited the silver present in the argentiferous galena in the mines of Cartagena (Cartago Nova), Linares (Castulo), Plasenzuela and Azuaga, among many others.[12] Spain was one of the most important mining regions, but all regions of the Roman Empire were exploited. In Great Britain the natives had mined minerals for millennia,[13] but after the Roman conquest, the scale of the operations increased dramatically, as the Romans needed Britannia's resources, especially gold, silver, tin, and lead.
Roman techniques were not limited to surface mining. They followed the ore veins underground once opencast mining was no longer feasible. At Dolaucothi they stoped out the veins and drove adits through bare rock to drain the stopes. The same adits were also used to ventilate the workings, especially important when fire-setting was used. At other parts of the site, they penetrated the water table and dewatered the mines using several kinds of machines, especially reverse overshot water-wheels. These were used extensively in the copper mines at Rio Tinto in Spain, where one sequence comprised 16 such wheels arranged in pairs, lifting water about 24 metres (79 ft). They were worked as treadmills with miners standing on the top slats. Many examples of such devices have been found in old Roman mines and some examples are now preserved in the British Museum and the National Museum of Wales.[14]
Mining as an industry underwent dramatic changes in medieval Europe. The mining industry in the early Middle Ages was mainly focused on the extraction of copper and iron. Other precious metals were also used, mainly for gilding or coinage. Initially, many metals were obtained through open-pit mining, and ore was primarily extracted from shallow depths, rather than through deep mine shafts. Around the 14th century, the growing use of weapons, armour, stirrups, and horseshoes greatly increased the demand for iron. Medieval knights, for example, were often laden with up to 100 pounds (45 kg) of plate or chain link armour in addition to swords, lances and other weapons.[15] The overwhelming dependency on iron for military purposes spurred iron production and extraction processes.
The silver crisis of 1465 occurred when all mines had reached depths at which the shafts could no longer be pumped dry with the available technology.[16] Although an increased use of banknotes, credit and copper coins during this period did decrease the value of, and dependence on, precious metals, gold and silver still remained vital to the story of medieval mining.
Due to differences in the social structure of society, the increasing extraction of mineral deposits spread from central Europe to England in the mid-sixteenth century. On the continent, mineral deposits belonged to the crown, and this regalian right was stoutly maintained. But in England, royal mining rights were restricted to gold and silver (of which England had virtually no deposits) by a judicial decision of 1568 and a law in 1688. England had iron, zinc, copper, lead, and tin ores. Landlords who owned the base metals and coal under their estates then had a strong inducement to extract these metals or to lease the deposits and collect royalties from mine operators. English, German, and Dutch capital combined to finance extraction and refining. Hundreds of German technicians and skilled workers were brought over; in 1642 a colony of 4,000 foreigners was mining and smelting copper at Keswick in the northwestern mountains.[17]
Use of water power in the form of water mills was extensive. The water mills were employed in crushing ore, raising ore from shafts, and ventilating galleries by powering giant bellows. Black powder was first used in mining in Selmecbánya, Kingdom of Hungary (now Banská Štiavnica, Slovakia) in 1627.[18] Black powder allowed blasting of rock and earth to loosen and reveal ore veins. Blasting was much faster than fire-setting and allowed the mining of previously impenetrable metals and ores.[19] In 1762, the world's first mining academy was established in the same town.
The widespread adoption of agricultural innovations such as the iron plowshare, as well as the growing use of metal as a building material, was also a driving force in the tremendous growth of the iron industry during this period. Inventions like the arrastra were often used by the Spanish to pulverize ore after it had been mined. This device was powered by animals and used the same principles used for grain threshing.[20]
Much of the knowledge of medieval mining techniques comes from books such as Biringuccio’s De la pirotechnia and probably most importantly from Georg Agricola's De re metallica (1556). These books detail many different mining methods used in German and Saxon mines. A prime issue in medieval mines, which Agricola explains in detail, was the removal of water from mining shafts. As miners dug deeper to access new veins, flooding became a very real obstacle. The mining industry became dramatically more efficient and prosperous with the invention of mechanical and animal driven pumps.
Mining in the Philippines began around 1000 BC. The early Filipinos worked various mines of gold, silver, copper and iron. Jewels, gold ingots, chains, calombigas and earrings were handed down from antiquity and inherited from their ancestors. Gold dagger handles, gold dishes, tooth plating, and huge gold ornaments were also used.[21] In Laszlo Legeza's "Tantric elements in pre-Hispanic Philippines Gold Art", he mentioned that gold jewelry of Philippine origin was found in Ancient Egypt.[21] According to Antonio Pigafetta, the people of Mindoro possessed great skill in mixing gold with other metals and gave it a natural and perfect appearance that could deceive even the best of silversmiths.[21] The natives were also known for the pieces of jewelry made of other precious stones such as carnelian, agate and pearl. Some outstanding examples of Philippine jewelry included necklaces, belts, armlets and rings placed around the waist.
During prehistoric times, large amounts of copper were mined along Lake Superior's Keweenaw Peninsula and on nearby Isle Royale; metallic copper was still present near the surface in colonial times.[22][23][24] Indigenous peoples used Lake Superior copper from at least 5,000 years ago;[22] copper tools, arrowheads, and other artifacts that were part of an extensive native trade network have been discovered. In addition, obsidian, flint, and other minerals were mined, worked, and traded.[23] Early French explorers who encountered the sites made no use of the metals due to the difficulties of transporting them,[23] but the copper was eventually traded throughout the continent along major river routes.
In the early colonial history of the Americas, "native gold and silver was quickly expropriated and sent back to Spain in fleets of gold- and silver-laden galleons,"[25] the gold and silver originating mostly from mines in Central and South America. Turquoise dated at 700 AD was mined in pre-Columbian America; in the Cerillos Mining District in New Mexico, estimates are that "about 15,000 tons of rock had been removed from Mt. Chalchihuitl using stone tools before 1700."[26][27]
In 1727, Louis Denys (Denis) (1675–1741), sieur de La Ronde – brother of Simon-Pierre Denys de Bonaventure and the son-in-law of René Chartier – took command of Fort La Pointe at Chequamegon Bay, where natives informed him of an island of copper. La Ronde obtained permission from the French crown to operate mines in 1733, becoming "the first practical miner on Lake Superior"; seven years later, mining was halted by an outbreak of hostilities between the Sioux and Chippewa tribes.[28]
Mining in the United States became prevalent in the 19th century, and the General Mining Act of 1872 was passed to encourage mining of federal lands.[29] As with the California Gold Rush in the mid-19th century, mining for minerals and precious metals, along with ranching, was a driving factor in the Westward Expansion to the Pacific coast. With the exploration of the West, mining camps were established and "expressed a distinctive spirit, an enduring legacy to the new nation;" Gold Rushers would experience the same problems as the Land Rushers of the transient West that preceded them.[30] Aided by railroads, many traveled West for work opportunities in mining. Western cities such as Denver and Sacramento originated as mining towns.
When new areas were explored, it was usually the gold (placer and then lode) and then silver that were taken into possession and extracted first. Other metals would often wait for railroads or canals, as coarse gold dust and nuggets do not require smelting and are easy to identify and transport.[24]
In the early 20th century, the gold and silver rush to the western United States also stimulated mining for coal as well as base metals such as copper, lead, and iron. Areas in modern Montana, Utah, Arizona, and later Alaska became predominant suppliers of copper to the world, which was increasingly demanding copper for electrical and household goods.[31] Canada's mining industry grew more slowly than did the United States' due to limitations in transportation, capital, and U.S. competition; Ontario was the major producer of the early 20th century with nickel, copper, and gold.[31]
Meanwhile, Australia experienced the Australian gold rushes and by the 1850s was producing 40% of the world's gold, followed by the establishment of large mines such as the Mount Morgan Mine, which ran for nearly a hundred years, Broken Hill ore deposit (one of the largest zinc-lead ore deposits), and the iron ore mines at Iron Knob. After declines in production, another boom in mining occurred in the 1960s. Now, in the early 21st century, Australia remains a major world mineral producer.[32]
As the 21st century begins, a globalized mining industry of large multinational corporations has arisen. Peak minerals and environmental impacts have also become a concern. Different elements, particularly rare earth minerals, have begun to increase in demand as a result of new technologies.
The process of mining from discovery of an ore body through extraction of minerals and finally to returning the land to its natural state consists of several distinct steps. The first is discovery of the ore body, which is carried out through prospecting or exploration to find and then define the extent, location and value of the ore body. This leads to a mathematical resource estimation to estimate the size and grade of the deposit.
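As a minimal illustration of what such an estimate boils down to (a Python sketch with hypothetical numbers, not any particular estimation method), the contained metal is the tonnage of mineralized rock multiplied by its grade:

# Hypothetical deposit: 2 million tonnes of ore at 1.8 g of gold per tonne.
tonnage_t = 2_000_000
grade_g_per_t = 1.8

# Contained metal = tonnage * grade, converted from grams to kilograms.
contained_gold_kg = tonnage_t * grade_g_per_t / 1000.0
print(f"Contained gold: {contained_gold_kg:,.0f} kg")  # 3,600 kg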
This estimation is used to conduct a pre-feasibility study to determine the theoretical economics of the ore deposit. This identifies, early on, whether further investment in estimation and engineering studies is warranted and identifies key risks and areas for further work. The next step is to conduct a feasibility study to evaluate the financial viability, the technical and financial risks, and the robustness of the project.
This is when the mining company makes the decision whether to develop the mine or to walk away from the project. This includes mine planning to evaluate the economically recoverable portion of the deposit, the metallurgy and ore recoverability, marketability and payability of the ore concentrates, engineering concerns, milling and infrastructure costs, finance and equity requirements, and an analysis of the proposed mine from the initial excavation all the way through to reclamation. The proportion of a deposit that is economically recoverable is dependent on the enrichment factor of the ore in the area.
To gain access to the mineral deposit within an area it is often necessary to mine through or remove waste material which is not of immediate interest to the miner. The total movement of ore and waste constitutes the mining process. Often more waste than ore is mined during the life of a mine, depending on the nature and location of the ore body. Waste removal and placement is a major cost to the mining operator, so a detailed characterization of the waste material forms an essential part of the geological exploration program for a mining operation.
Once the analysis determines a given ore body is worth recovering, development begins to create access to the ore body. The mine buildings and processing plants are built, and any necessary equipment is obtained. The operation of the mine to recover the ore begins and continues as long as the company operating the mine finds it economical to do so. Once all the ore that the mine can produce profitably is recovered, reclamation begins to make the land used by the mine suitable for future use.
Mining techniques can be divided into two common excavation types: surface mining and sub-surface (underground) mining. Today, surface mining is much more common, and produces, for example, 85% of minerals (excluding petroleum and natural gas) in the United States, including 98% of metallic ores.[33]
Targets are divided into two general categories of materials: placer deposits, consisting of valuable minerals contained within river gravels, beach sands, and other unconsolidated materials; and lode deposits, where valuable minerals are found in veins, in layers, or in mineral grains generally distributed throughout a mass of actual rock. Both types of ore deposit, placer or lode, are mined by both surface and underground methods.
Some mining, including much of the rare earth elements and uranium mining, is done by less-common methods, such as in-situ leaching: this technique involves digging neither at the surface nor underground. The extraction of target minerals by this technique requires that they be soluble, e.g., potash, potassium chloride, sodium chloride, sodium sulfate, which dissolve in water. Some minerals, such as copper minerals and uranium oxide, require acid or carbonate solutions to dissolve.[34][35]
Surface mining is done by removing (stripping) surface vegetation, dirt, and, if necessary, layers of bedrock in order to reach buried ore deposits. Techniques of surface mining include: open-pit mining, which is the recovery of materials from an open pit in the ground; quarrying, identical to open-pit mining except that it refers to sand, stone and clay;[36] strip mining, which consists of stripping surface layers off to reveal ore/seams underneath; and mountaintop removal, commonly associated with coal mining, which involves taking the top of a mountain off to reach ore deposits at depth. Most (but not all) placer deposits, because of their shallowly buried nature, are mined by surface methods. Finally, landfill mining involves sites where landfills are excavated and processed.[37] Landfill mining has been thought of as a solution to dealing with long-term methane emissions and local pollution.[38]
High wall mining is another form of surface mining that evolved from auger mining. In high wall mining, the coal seam is penetrated by a continuous miner propelled by a hydraulic Push-beam Transfer Mechanism (PTM). A typical cycle includes sumping (launch-pushing forward) and shearing (raising and lowering the cutter-head boom to cut the entire height of the coal seam). As the coal recovery cycle continues, the cutter-head is progressively launched into the coal seam for 19.72 feet (6.01 m). Then, the Push-beam Transfer Mechanism (PTM) automatically inserts a 19.72-foot (6.01 m) long rectangular Push-beam (Screw-Conveyor Segment) into the center section of the machine between the Powerhead and the cutter-head. The Push-beam system can penetrate nearly 1,000 feet (300 m) into the coal seam. One patented high wall mining system uses augers enclosed inside the Push-beam that prevent the mined coal from being contaminated by rock debris during the conveyance process. Using a video imaging and/or a gamma ray sensor and/or other Geo-Radar systems like a coal-rock interface detection sensor (CID), the operator can see ahead projection of the seam-rock interface and guide the continuous miner's progress. High wall mining can produce thousands of tons of coal in contour-strip operations with narrow benches, previously mined areas, trench mine applications and steep-dip seams with controlled water-inflow pump system and/or a gas (inert) venting system.
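A quick back-of-envelope check of the figures above (an illustrative sketch only): dividing the maximum penetration by the segment length gives the number of push-beam segments the machine would insert one after another.

segment_ft = 19.72        # length of one Push-beam (Screw-Conveyor Segment)
penetration_ft = 1000.0   # "nearly 1,000 feet" of penetration into the seam
print(round(penetration_ft / segment_ft))  # about 51 segments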
Sub-surface mining consists of digging tunnels or shafts into the earth to reach buried ore deposits. Ore, for processing, and waste rock, for disposal, are brought to the surface through the tunnels and shafts. Sub-surface mining can be classified by the type of access shafts used and by the extraction method or the technique used to reach the mineral deposit. Drift mining utilizes horizontal access tunnels, slope mining uses diagonally sloping access shafts, and shaft mining utilizes vertical access shafts. Mining in hard and soft rock formations requires different techniques.
Other methods include shrinkage stope mining, which is mining upward, creating a sloping underground room, long wall mining, which is grinding a long ore surface underground, and room and pillar mining, which is removing ore from rooms while leaving pillars in place to support the roof of the room. Room and pillar mining often leads to retreat mining, in which supporting pillars are removed as miners retreat, allowing the room to cave in, thereby loosening more ore. Additional sub-surface mining methods include hard rock mining, which is mining of hard rock (igneous, metamorphic or sedimentary) materials, bore hole mining, drift and fill mining, long hole slope mining, sub level caving, and block caving.
Heavy machinery is used in mining to explore and develop sites, to remove and stockpile overburden, to break and remove rocks of various hardness and toughness, to process the ore, and to carry out reclamation projects after the mine is closed. Bulldozers, drills, explosives and trucks are all necessary for excavating the land. In the case of placer mining, unconsolidated gravel, or alluvium, is fed into machinery consisting of a hopper and a shaking screen or trommel which frees the desired minerals from the waste gravel. The minerals are then concentrated using sluices or jigs.
Large drills are used to sink shafts, excavate stopes, and obtain samples for analysis. Trams are used to transport miners, minerals and waste. Lifts carry miners into and out of mines, and move rock and ore out, and machinery in and out, of underground mines. Huge trucks, shovels and cranes are employed in surface mining to move large quantities of overburden and ore. Processing plants utilize large crushers, mills, reactors, roasters and other equipment to consolidate the mineral-rich material and extract the desired compounds and metals from the ore.
Once the mineral is extracted, it is often then processed. The science of extractive metallurgy is a specialized area in the science of metallurgy that studies the extraction of valuable metals from their ores, especially through chemical or mechanical means.
Mineral processing (or mineral dressing) is a specialized area in the science of metallurgy that studies the mechanical means of crushing, grinding, and washing that enable the separation (extractive metallurgy) of valuable metals or minerals from their gangue (waste material). Processing of placer ore material consists of gravity-dependent methods of separation, such as sluice boxes. Only minor shaking or washing may be necessary to disaggregate (unclump) the sands or gravels before processing. Processing of ore from a lode mine, whether it is a surface or subsurface mine, requires that the rock ore be crushed and pulverized before extraction of the valuable minerals begins. After lode ore is crushed, recovery of the valuable minerals is done by one, or a combination of several, mechanical and chemical techniques.
Since most metals are present in ores as oxides or sulfides, the metal needs to be reduced to its metallic form. This can be accomplished through chemical means such as smelting or through electrolytic reduction, as in the case of aluminium. Geometallurgy combines the geologic sciences with extractive metallurgy and mining.
In 2018, led by Chemistry and Biochemistry professor Bradley D. Smith, University of Notre Dame researchers "invented a new class of molecules whose shape and size enable them to capture and contain precious metal ions," reported in a study published by the Journal of the American Chemical Society. The new method "converts gold-containing ore into chloroauric acid and extracts it using an industrial solvent. The container molecules are able to selectively separate the gold from the solvent without the use of water stripping." The newly developed molecules can eliminate water stripping, whereas mining traditionally "relies on a 125-year-old method that treats gold-containing ore with large quantities of poisonous sodium cyanide... this new process has a milder environmental impact and that, besides gold, it can be used for capturing other metals such as platinum and palladium," and could also be used in urban mining processes that remove precious metals from wastewater streams.[39]
Environmental issues can include erosion, formation of sinkholes, loss of biodiversity, and contamination of soil, groundwater and surface water by chemicals from mining processes. In some cases, additional forest logging is done in the vicinity of mines to create space for the storage of the created debris and soil.[40] Contamination resulting from leakage of chemicals can also affect the health of the local population if not properly controlled.[41] Extreme examples of pollution from mining activities include coal fires, which can last for years or even decades, producing massive amounts of environmental damage.
Mining companies in most countries are required to follow stringent environmental and rehabilitation codes in order to minimize environmental impact and avoid impacting human health. These codes and regulations all require the common steps of environmental impact assessment, development of environmental management plans, mine closure planning (which must be done before the start of mining operations), and environmental monitoring during operation and after closure. However, in some areas, particularly in the developing world, government regulations may not be well enforced.
For major mining companies and any company seeking international financing, there are a number of other mechanisms to enforce environmental standards. These generally relate to financing standards such as the Equator Principles, IFC environmental standards, and criteria for Socially responsible investing. Mining companies have used this oversight from the financial sector to argue for some level of industry self-regulation.[42] In 1992, a Draft Code of Conduct for Transnational Corporations was proposed at the Rio Earth Summit by the UN Centre for Transnational Corporations (UNCTC), but the Business Council for Sustainable Development (BCSD) together with the International Chamber of Commerce (ICC) argued successfully for self-regulation instead.[43]
This was followed by the Global Mining Initiative which was begun by nine of the largest metals and mining companies and which led to the formation of the International Council on Mining and Metals, whose purpose was to "act as a catalyst" in an effort to improve social and environmental performance in the mining and metals industry internationally.[42] The mining industry has provided funding to various conservation groups, some of which have been working with conservation agendas that are at odds with an emerging acceptance of the rights of indigenous people – particularly the right to make land-use decisions.[44]
Certification of mines with good practices occurs through the International Organization for Standardization (ISO). For example, ISO 9000 and ISO 14001, which certify an "auditable environmental management system", involve short inspections, although they have been accused of lacking rigor.[42]:183–84 Certification is also available through Ceres' Global Reporting Initiative, but these reports are voluntary and unverified. Miscellaneous other certification programs exist for various projects, typically through nonprofit groups.[42]:185–86
The purpose of a 2012 EPS PEAKS paper[45] was to provide evidence on policies for managing the ecological costs and maximising the socio-economic benefits of mining using host-country regulatory initiatives. It found existing literature suggesting donors encourage developing countries to:
Ore mills generate large amounts of waste, called tailings. For example, 99 tons of waste are generated per ton of copper, with even higher ratios in gold mining – because only 5.3 g of gold is extracted per ton of ore, a ton of gold produces 200,000 tons of tailings.[46] (As time goes on and richer deposits are exhausted – and technology improves to permit it – this number is going down to 0.5 g and less.) These tailings can be toxic. Tailings, which are usually produced as a slurry, are most commonly dumped into ponds made from naturally existing valleys.[47] These ponds are secured by impoundments (dams or embankment dams).[47] In 2000 it was estimated that 3,500 tailings impoundments existed, and that every year, 2 to 5 major failures and 35 minor failures occurred;[48] for example, in the Marcopper mining disaster at least 2 million tons of tailings were released into a local river.[48] In 2015, Barrick Gold spilled over 1 million liters of cyanide into a total of five rivers in Argentina near their Veladero mine.[49] In central Finland, waste effluent from the Talvivaara (Terrafame) polymetal mine since 2008, including numerous leaks of saline mine water, has resulted in the ecological collapse of a nearby lake.[50] Subaqueous tailings disposal is another option.[47] The mining industry has argued that submarine tailings disposal (STD), which disposes of tailings in the sea, is ideal because it avoids the risks of tailings ponds; although the practice is illegal in the United States and Canada, it is used in the developing world.[51]
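The 200,000-ton figure follows from simple arithmetic, made explicit in this short sketch (using only the numbers quoted above):

# A metric ton of gold is 1,000,000 g; at 5.3 g of gold recovered per ton of
# ore, the ore processed (and hence tailings produced) per ton of gold is:
tons_of_ore = 1_000_000 / 5.3
print(f"{tons_of_ore:,.0f} tons")  # ~188,679 tons, i.e. on the order of 200,000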
The waste is classified as either sterile or mineralised, with acid-generating potential, and the movement and storage of this material forms a major part of the mine planning process. When the mineralised package is determined by an economic cut-off, the near-grade mineralised waste is usually dumped separately with a view to later treatment should market conditions change and it become economically viable. Civil engineering design parameters are used in the design of the waste dumps, and special conditions apply to high-rainfall areas and to seismically active areas. Waste dump designs must meet all regulatory requirements of the country in whose jurisdiction the mine is located. It is also common practice to rehabilitate dumps to an internationally acceptable standard, which in some cases means that higher standards than the local regulatory standard are applied.[48]
Many mining sites are remote and not connected to the grid. Electricity is typically generated with diesel generators. Due to high transportation costs and theft during transportation, the cost of generating electricity is normally high. Renewable energy applications are becoming an alternative or supplement. Both solar and wind power plants can contribute to saving diesel costs at mining sites. Renewable energy applications have been built at mining sites.[52]
Cost savings can reach up to 70%.[53]
Mining exists in many countries. London is known as the capital of global "mining houses" such as Rio Tinto Group, BHP Billiton, and Anglo American PLC.[54] The US mining industry is also large, but it is dominated by coal and other nonmetal minerals (e.g., rock and sand), and various regulations have worked to reduce the significance of mining in the United States.[54] In 2007 the total market capitalization of mining companies was reported at US$962 billion, which compares to a total global market cap of publicly traded companies of about US$50 trillion in 2007.[55] In 2002, Chile and Peru were reportedly the major mining countries of South America.[56] The mineral industry of Africa includes the mining of various minerals; it produces relatively little of the industrial metals copper, lead, and zinc, but according to one estimate has, as a percent of world reserves, 40% of gold, 60% of cobalt, and 90% of the world's platinum group metals.[57] Mining in India is a significant part of that country's economy. In the developed world, mining in Australia, with BHP Billiton founded and headquartered in the country, and mining in Canada are particularly significant. For rare earth minerals mining, China reportedly controlled 95% of production in 2013.[58]
While exploration and mining can be conducted by individual entrepreneurs or small businesses, most modern-day mines are large enterprises requiring large amounts of capital to establish. Consequently, the mining sector of the industry is dominated by large, often multinational, companies, most of them publicly listed. It can be argued that what is referred to as the 'mining industry' is actually two sectors, one specializing in exploration for new resources and the other in mining those resources. The exploration sector is typically made up of individuals and small mineral resource companies, called "juniors", which are dependent on venture capital. The mining sector is made up of large multinational companies that are sustained by production from their mining operations. Various other industries such as equipment manufacture, environmental testing, and metallurgy analysis rely on, and support, the mining industry throughout the world. Canadian stock exchanges have a particular focus on mining companies, particularly junior exploration companies through Toronto's TSX Venture Exchange; Canadian companies raise capital on these exchanges and then invest the money in exploration globally.[54] Some have argued that below juniors there exists a substantial sector of illegitimate companies primarily focused on manipulating stock prices.[54]
Mining operations can be grouped into five major categories in terms of their respective resources. These are oil and gas extraction, coal mining, metal ore mining, nonmetallic mineral mining and quarrying, and mining support activities.[59] Of all of these categories, oil and gas extraction remains one of the largest in terms of its global economic importance. Prospecting potential mining sites, a vital area of concern for the mining industry, is now done using sophisticated new technologies such as seismic prospecting and remote-sensing satellites. Mining is heavily affected by the prices of the commodity minerals, which are often volatile. The 2000s commodities boom ("commodities supercycle") increased the prices of commodities, driving aggressive mining. In addition, the price of gold increased dramatically in the 2000s, which increased gold mining; for example, one study found that conversion of forest in the Amazon increased six-fold from the period 2003–2006 (292 ha/yr) to the period 2006–2009 (1,915 ha/yr), largely due to artisanal mining.[60]
Mining companies can be classified based on their size and financial capabilities:
New regulations and a process of legislative reforms aim to improve the harmonization and stability of the mining sector in mineral-rich countries.[62] New legislation for the mining industry in African countries still appears to be an issue, but it has the potential to be solved when a consensus is reached on the best approach.[63] By the beginning of the 21st century the booming and increasingly complex mining sector in mineral-rich countries was providing only slight benefits to local communities, especially given sustainability issues. Increasing debate and influence by NGOs and local communities called for new approaches which would also include disadvantaged communities and work towards sustainable development even after mine closure (including transparency and revenue management). By the early 2000s, community development issues and resettlements became mainstream concerns in World Bank mining projects.[63] Mining-industry expansion after mineral prices increased in 2003, along with the potential fiscal revenues in those countries, left other economic sectors neglected in terms of finance and development. Furthermore, this highlighted regional and local demand for mining revenues and an inability of sub-national governments to effectively use the revenues. The Fraser Institute (a Canadian think tank) has highlighted the environmental protection laws in developing countries, as well as voluntary efforts by mining companies to improve their environmental impact.[64]
In 2007 the Extractive Industries Transparency Initiative (EITI) was mainstreamed in all countries cooperating with the World Bank in mining industry reform.[63] The EITI operates and was implemented with the support of the EITI multi-donor trust fund, managed by the World Bank.[65] The EITI aims to increase transparency in transactions between governments and companies in extractive industries[66] by monitoring the revenues and benefits between industries and recipient governments. The entrance process is voluntary for each country and is monitored by multiple stakeholders including governments, private companies and civil society representatives, responsible for disclosure and dissemination of the reconciliation report;[63] however, for at least some businesses in Ghana, the competitive disadvantage of company-by-company public reporting is the main constraint.[67] Therefore, the outcome assessment in terms of failure or success of the new EITI regulation does not only "rest on the government's shoulders" but also on civil society and companies.[68]
On the other hand, implementation has issues: the inclusion or exclusion of artisanal and small-scale mining (ASM) from the EITI, and how to deal with "non-cash" payments made by companies to subnational governments. Furthermore, the disproportionate revenues the mining industry can bring to the comparatively small number of people that it employs[69] cause other problems, like a lack of investment in other, less lucrative sectors, leading to swings in government revenue because of volatility in the oil markets. Artisanal mining is clearly an issue in EITI countries such as the Central African Republic, D.R. Congo, Guinea, Liberia and Sierra Leone – i.e. almost half of the mining countries implementing the EITI.[69] Among other things, the limited scope of the EITI, disparities in knowledge of the industry and in negotiation skills, and the flexibility of the policy (e.g. the liberty of countries to expand beyond the minimum requirements and adapt it to their needs) create further risks of unsuccessful implementation. Increasing public awareness, with government acting as a bridge between the public and the initiative, is an important element to consider for a successful outcome of the policy.[70]
The World Bank has been involved in mining since 1955, mainly through grants from its International Bank for Reconstruction and Development, with the Bank's Multilateral Investment Guarantee Agency offering political risk insurance.[71] Between 1955 and 1990 it provided about $2 billion to fifty mining projects, broadly categorized as reform and rehabilitation, greenfield mine construction, mineral processing, technical assistance, and engineering. These projects have been criticized, particularly the Ferro Carajas project of Brazil, begun in 1981.[72] The World Bank established mining codes intended to increase foreign investment; in 1988 it solicited feedback from 45 mining companies on how to increase their involvement.[42]:20
In 1992 the World Bank began to push for privatization of government-owned mining companies with a new set of codes, beginning with its report The Strategy for African Mining. In 1997, Latin America's largest miner Companhia Vale do Rio Doce (CVRD) was privatized. These and other developments such as the Philippines 1995 Mining Act led the bank to publish a third report (Assistance for Minerals Sector Development and Reform in Member Countries) which endorsed mandatory environment impact assessments and attention to the concerns of the local population. The codes based on this report are influential in the legislation of developing nations. The new codes are intended to encourage development through tax holidays, zero custom duties, reduced income taxes, and related measures.[42]:22 The results of these codes were analyzed by a group from the University of Quebec, which concluded that the codes promote foreign investment but "fall very short of permitting sustainable development".[73] The observed negative correlation between natural resources and economic development is known as the resource curse.
Safety has long been a concern in the mining business, especially in sub-surface mining. The Courrières mine disaster, Europe's worst mining accident, involved the death of 1,099 miners in Northern France on March 10, 1906. This disaster was surpassed only by the Benxihu Colliery accident in China on April 26, 1942, which killed 1,549 miners.[74] While mining today is substantially safer than it was in previous decades, mining accidents still occur. Government figures indicate that 5,000 Chinese miners die in accidents each year, while other reports have suggested a figure as high as 20,000.[75] Mining accidents continue worldwide, including accidents causing dozens of fatalities at a time such as the 2007 Ulyanovskaya Mine disaster in Russia, the 2009 Heilongjiang mine explosion in China, and the 2010 Upper Big Branch Mine disaster in the United States. Mining has been identified by the National Institute for Occupational Safety and Health (NIOSH) as a priority industry sector in the National Occupational Research Agenda (NORA) to identify and provide intervention strategies regarding occupational health and safety issues.[76] The Mining Safety and Health Administration (MSHA) was established in 1978 to "work to prevent death, illness, and injury from mining and promote safe and healthful workplaces for US miners."[77] Since its implementation in 1978, the number of miner fatalities has decreased from 242 miners in 1978 to 24 miners in 2019.
There are numerous occupational hazards associated with mining, including exposure to rock dust, which can lead to diseases such as silicosis, asbestosis, and pneumoconiosis. Gases in the mine can lead to asphyxiation and can also ignite. Mining equipment can generate considerable noise, putting workers at risk for hearing loss. Cave-ins, rock falls, and exposure to excess heat are also known hazards. The current NIOSH Recommended Exposure Limit (REL) for noise is 85 dBA with a 3 dBA exchange rate, and the MSHA Permissible Exposure Limit (PEL) is 90 dBA with a 5 dBA exchange rate, both as an 8-hour time-weighted average. NIOSH has found that 25% of noise-exposed workers in Mining, Quarrying, and Oil and Gas Extraction have hearing impairment.[78] The prevalence of hearing loss among these workers increased by 1% from 1991 to 2001.
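To make the exchange-rate arithmetic concrete, here is a minimal Python sketch (an illustration of the standard halving rule, not an official dosimetry tool): every exchange-rate step above the criterion level halves the allowable exposure time.

```python
# Allowable exposure time under a criterion level Lc and exchange rate ER:
# T = Tc / 2**((L - Lc) / ER), with an 8-hour criterion duration.
# The parameters below mirror the NIOSH REL (85 dBA, 3 dB exchange rate)
# and the MSHA PEL (90 dBA, 5 dB exchange rate) cited above.

def allowable_hours(level_dba, criterion_dba, exchange_db, criterion_hours=8.0):
    """Hours of exposure permitted at a constant noise level."""
    return criterion_hours / 2 ** ((level_dba - criterion_dba) / exchange_db)

for level in (85, 90, 95, 100):  # spans, e.g., the stageloader range below
    print(f"{level} dBA: NIOSH {allowable_hours(level, 85, 3):.2f} h, "
          f"MSHA {allowable_hours(level, 90, 5):.2f} h")
```

At 100 dBA, for instance, the 3 dB rule permits only 15 minutes per day while the 5 dB rule permits 2 hours, so the choice of exchange rate matters as much as the limit itself.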
Noise studies have been conducted in several mining environments. Stageloaders (84–102 dBA), shearers (85–99 dBA), auxiliary fans (84–120 dBA), continuous mining machines (78–109 dBA), and roof bolters (92–103 dBA) represent some of the noisiest equipment in underground coal mines.[79] Dragline oilers, dozer operators, and welders using air arcing were the occupations with the highest noise exposures among surface coal miners.[80] Coal mines had the highest likelihood of hearing-loss injury.[81]
Proper ventilation, hearing protection, and spraying equipment with water are important safety practices in mines.
As of 2008, the deepest mine in the world is TauTona in Carletonville, South Africa, at 3.9 kilometres (2.4 mi),[82] replacing the neighboring Savuka Mine in the North West Province of South Africa at 3,774 metres (12,382 ft).[83] East Rand Mine in Boksburg, South Africa briefly held the record at 3,585 metres (11,762 ft), and the first mine declared the deepest in the world was also TauTona when it was at 3,581 metres (11,749 ft).
The Moab Khutsong gold mine in North West Province (South Africa) has the world's longest winding steel wire rope, which is able to lower workers to 3,054 metres (10,020 ft) in one uninterrupted four-minute journey.[84]
The deepest mine in Europe is the 16th shaft of the uranium mines in Příbram, Czech Republic, at 1,838 metres (6,030 ft);[85] the second deepest is Bergwerk Saar in Saarland, Germany, at 1,750 metres (5,740 ft).
The deepest open-pit mine in the world is Bingham Canyon Mine in Bingham Canyon, Utah, United States, at over 1,200 metres (3,900 ft). The largest and second deepest open-pit copper mine in the world is Chuquicamata in northern Chile at 900 metres (3,000 ft), which annually produces 443,000 tons of copper and 20,000 tons of molybdenum.[86][87][88]
The deepest open-pit mine with respect to sea level is Tagebau Hambach in Germany, where the base of the pit is 293 metres (961 ft) below sea level.
The largest underground mine is Kiirunavaara Mine in Kiruna, Sweden. With 450 kilometres (280 mi) of roads, 40 million tonnes of annually produced ore, and a depth of 1,270 metres (4,170 ft), it is also one of the most modern underground mines. The deepest borehole in the world is Kola Superdeep Borehole at 12,262 metres (40,230 ft), but this is connected to scientific drilling, not mining.
During the 20th century, the variety of metals used in society grew rapidly. Today, the development of major nations such as China and India and advances in technologies are fueling an ever-greater demand. The result is that metal mining activities are expanding and more and more of the world's metal stocks are above ground in use rather than below ground as unused reserves. An example is the in-use stock of copper. Between 1932 and 1999, copper in use in the US rose from 73 kilograms (161 lb) to 238 kilograms (525 lb) per person.[89]
95% of the energy used to make aluminium from bauxite ore is saved by using recycled material.[90] However, levels of metals recycling are generally low. In 2010, the International Resource Panel, hosted by the United Nations Environment Programme (UNEP), published reports on metal stocks that exist within society[91] and their recycling rates.[89]
The report's authors observed that the metal stocks in society can serve as huge mines above ground. However, they warned that the recycling rates of some rare metals used in applications such as mobile phones, battery packs for hybrid cars, and fuel cells are so low that, unless future end-of-life recycling rates are dramatically stepped up, these critical metals will become unavailable for use in modern technology.
As recycling rates are low and so much metal has already been extracted, some landfills now contain a higher concentration of metal than the mines themselves.[92] This is especially true of aluminium, used in cans, and precious metals, found in discarded electronics.[93] Furthermore, much waste has still not broken down after 15 years, so less processing could be required than when mining ores. A study undertaken by Cranfield University found that £360 million of metals could be mined from just 4 landfill sites.[94] There is also up to 20 MJ/kg of energy in waste, potentially making re-extraction more profitable.[95] However, although the first landfill mine opened in Tel Aviv, Israel in 1953, little work has followed due to the abundance of accessible ores.[96]
en/1927.html.txt
ADDED
@@ -0,0 +1,174 @@
Space exploration is the use of astronomy and space technology to explore outer space.[1] While the exploration of space is carried out mainly by astronomers with telescopes, its physical exploration is conducted both by unmanned robotic space probes and by human spaceflight. Space exploration, like its classical form astronomy, is one of the main sources for space science.
While the observation of objects in space, known as astronomy, predates reliable recorded history, it was the development of large and relatively efficient rockets during the mid-twentieth century that allowed physical space exploration to become a reality. Common rationales for exploring space include advancing scientific research, national prestige, uniting different nations, ensuring the future survival of humanity, and developing military and strategic advantages against other countries.[2]
The early era of space exploration was driven by a "Space Race" between the Soviet Union and the United States. The launch of the first human-made object to orbit Earth, the Soviet Union's Sputnik 1, on 4 October 1957, and the first Moon landing by the American Apollo 11 mission on 20 July 1969 are often taken as landmarks for this initial period. The Soviet space program achieved many of the first milestones, including the first living being in orbit in 1957, the first human spaceflight (Yuri Gagarin aboard Vostok 1) in 1961, the first spacewalk (by Alexei Leonov) on 18 March 1965, the first automatic landing on another celestial body in 1966, and the launch of the first space station (Salyut 1) in 1971.
After the first 20 years of exploration, focus shifted from one-off flights to renewable hardware, such as the Space Shuttle program, and from competition to cooperation as with the International Space Station (ISS).
With the substantial completion of the ISS[3] following STS-133 in March 2011, plans for space exploration by the U.S. remain in flux. Constellation, a Bush Administration program for a return to the Moon by 2020[4] was judged inadequately funded and unrealistic by an expert review panel reporting in 2009.[5]
The Obama Administration proposed a revision of Constellation in 2010 to focus on the development of the capability for crewed missions beyond low Earth orbit (LEO), envisioning extending the operation of the ISS beyond 2020, transferring the development of launch vehicles for human crews from NASA to the private sector, and developing technology to enable missions to beyond LEO, such as Earth–Moon L1, the Moon, Earth–Sun L2, near-Earth asteroids, and Phobos or Mars orbit.[6]
In the 2000s, the People's Republic of China initiated a successful manned spaceflight program, while the European Union, Japan, and India have also planned future crewed space missions. China, Russia, Japan, and India have advocated crewed missions to the Moon during the 21st century, while the European Union has advocated manned missions to both the Moon and Mars during the 20th and 21st century.
From the 1990s onwards, private interests began promoting space tourism and then public space exploration of the Moon (see Google Lunar X Prize).
The first telescope is said to have been invented in 1608 in the Netherlands by an eyeglass maker named Hans Lippershey. The Orbiting Astronomical Observatory 2, launched on December 7, 1968, was the first space telescope.[7] As of February 2, 2019, there were 3,891 confirmed exoplanets. The Milky Way is estimated to contain 100–400 billion stars[8] and more than 100 billion planets.[9] There are at least 2 trillion galaxies in the observable universe.[10][11] GN-z11 is the most distant known object from Earth, reported as 32 billion light-years away (comoving distance).[12][13]
In 1949, the Bumper-WAC reached an altitude of 393 kilometres (244 mi), becoming the first human-made object to enter space, according to NASA,[14] although V-2 Rocket MW 18014 crossed the Kármán line earlier, in 1944.[15]
The first successful orbital launch was of the Soviet uncrewed Sputnik 1 ("Satellite 1") mission on 4 October 1957. The satellite weighed about 83 kg (183 lb), and is believed to have orbited Earth at a height of about 250 km (160 mi). It had two radio transmitters (20 and 40 MHz), which emitted "beeps" that could be heard by radios around the globe. Analysis of the radio signals was used to gather information about the electron density of the ionosphere, while temperature and pressure data was encoded in the duration of radio beeps. The results indicated that the satellite was not punctured by a meteoroid. Sputnik 1 was launched by an R-7 rocket. It burned up upon re-entry on 3 January 1958.
The first successful human spaceflight was Vostok 1 ("East 1"), carrying 27-year-old Russian cosmonaut Yuri Gagarin on 12 April 1961. The spacecraft completed one orbit around the globe, lasting about 1 hour and 48 minutes. Gagarin's flight resonated around the world; it was a demonstration of the advanced Soviet space program and it opened an entirely new era in space exploration: human spaceflight.
The first artificial object to reach another celestial body was Luna 2, reaching the Moon in 1959.[16] The first soft landing on another celestial body was performed by Luna 9, landing on the Moon on February 3, 1966.[17] Luna 10 became the first artificial satellite of the Moon, entering lunar orbit on April 3, 1966.[18]
The first crewed landing on another celestial body was performed by Apollo 11 on July 20, 1969, landing on the Moon. In total, six crewed spacecraft landed humans on the Moon, from 1969 until the last human landing in 1972.
The first interplanetary flyby was the 1961 Venera 1 flyby of Venus, though the 1962 Mariner 2 was the first flyby of Venus to return data (closest approach 34,773 kilometers). Pioneer 6 was the first satellite to orbit the Sun, launched on December 16, 1965. The other planets were first flown by in 1965 for Mars by Mariner 4, 1973 for Jupiter by Pioneer 10, 1974 for Mercury by Mariner 10, 1979 for Saturn by Pioneer 11, 1986 for Uranus by Voyager 2, 1989 for Neptune by Voyager 2. In 2015, the dwarf planets Ceres and Pluto were orbited by Dawn and passed by New Horizons, respectively. This accounts for flybys of each of the eight planets in our Solar System, the Sun, the Moon and Ceres & Pluto (2 of the 5 recognized dwarf planets).
The first interplanetary surface mission to return at least limited surface data from another planet was the 1970 landing of Venera 7, which returned data to Earth for 23 minutes from Venus. In 1975 Venera 9 was the first to return images from the surface of another planet, Venus. In 1971 the Mars 3 mission achieved the first soft landing on Mars, returning data for almost 20 seconds. Much longer surface missions were later achieved, including over six years of Mars surface operation by Viking 1 from 1975 to 1982 and over two hours of transmission from the surface of Venus by Venera 13 in 1982, the longest ever Soviet planetary surface mission. Venus and Mars are the only two planets beyond Earth on which humans have conducted surface missions, using unmanned robotic spacecraft.
Salyut 1 was the first space station of any kind, launched into low Earth orbit by the Soviet Union on April 19, 1971. The International Space Station is currently the only fully functional space station, with continuous inhabitance since the year 2000.
Voyager 1 became the first human-made object to leave our Solar System into interstellar space on August 25, 2012. The probe passed the heliopause at 121 AU to enter interstellar space.[19]
The Apollo 13 flight passed the far side of the Moon at an altitude of 254 kilometers (158 miles; 137 nautical miles) above the lunar surface, and 400,171 km (248,655 mi) from Earth, marking the record for the farthest humans have ever traveled from Earth in 1970.
Voyager 1 is currently at a distance of 145.11 astronomical units (2.1708×1010 km; 1.3489×1010 mi) (21.708 billion kilometers; 13.489 billion miles) from Earth as of January 1, 2019.[20] It is the most distant human-made object from Earth.[21]
GN-z11 is the most distant known object from Earth, reported as 13.4 billion light-years away.[12][13]
The dream of stepping into the outer reaches of Earth's atmosphere was driven by the fiction of Jules Verne[22][23][24] and H. G. Wells,[25] and rocket technology was developed to try to realize this vision. The German V-2 was the first rocket to travel into space, overcoming the problems of thrust and material failure. During the final days of World War II this technology was obtained by both the Americans and Soviets as were its designers. The initial driving force for further development of the technology was a weapons race for intercontinental ballistic missiles (ICBMs) to be used as long-range carriers for fast nuclear weapon delivery, but in 1961 when the Soviet Union launched the first man into space, the United States declared itself to be in a "Space Race" with the Soviets.
Konstantin Tsiolkovsky, Robert Goddard, Hermann Oberth, and Reinhold Tiling laid the groundwork of rocketry in the early years of the 20th century.
Wernher von Braun was the lead rocket engineer for Nazi Germany's World War II V-2 rocket project. In the last days of the war he led a caravan of workers in the German rocket program to the American lines, where they surrendered and were brought to the United States to work on their rocket development ("Operation Paperclip"). He acquired American citizenship and led the team that developed and launched Explorer 1, the first American satellite. Von Braun later led the team at NASA's Marshall Space Flight Center which developed the Saturn V moon rocket.
The race for space was initially led by Sergei Korolev, whose legacy includes both the R-7 and Soyuz, which remain in service to this day. Korolev was the mastermind behind the first satellite, the first man (and first woman) in orbit, and the first spacewalk. Until his death his identity was a closely guarded state secret; not even his mother knew that he was responsible for creating the Soviet space program.
Kerim Kerimov was one of the founders of the Soviet space program and was one of the lead architects behind the first human spaceflight (Vostok 1) alongside Sergey Korolev. After Korolev's death in 1966, Kerimov became the lead scientist of the Soviet space program and was responsible for the launch of the first space stations from 1971 to 1991, including the Salyut and Mir series, and their precursors in 1967, the Cosmos 186 and Cosmos 188.[26][27]
Other key people:
Starting in the mid-20th century, probes and then human missions were sent into Earth orbit, and then on to the Moon. Probes were also sent throughout the known Solar System, and into solar orbit. Unmanned spacecraft had been sent into orbit around Saturn, Jupiter, Mars, Venus, and Mercury by the 21st century, and the most distant active spacecraft, Voyager 1 and 2, traveled beyond 100 times the Earth–Sun distance. Their instruments lasted long enough that it is thought they have left the Sun's heliosphere, a sort of bubble of particles blown outward into the galaxy by the Sun's solar wind.
The Sun is a major focus of space exploration. Being above the atmosphere, and beyond Earth's magnetic field, gives access to the solar wind and to the infrared and ultraviolet radiation that cannot reach Earth's surface. The Sun generates most space weather, which can affect power generation and transmission systems on Earth and interfere with, and even damage, satellites and space probes. Numerous spacecraft dedicated to observing the Sun, beginning with the Apollo Telescope Mount, have been launched, and still others have had solar observation as a secondary objective. Parker Solar Probe, launched in 2018, will approach the Sun to within one eighth of the orbital radius of Mercury.
Mercury remains the least explored of the Terrestrial planets. As of May 2013, the Mariner 10 and MESSENGER missions have been the only missions that have made close observations of Mercury. MESSENGER entered orbit around Mercury in March 2011, to further investigate the observations made by Mariner 10 in 1975 (Munsell, 2006b).
A third mission to Mercury, scheduled to arrive in 2025, BepiColombo is to include two probes. BepiColombo is a joint mission between Japan and the European Space Agency. MESSENGER and BepiColombo are intended to gather complementary data to help scientists understand many of the mysteries discovered by Mariner 10's flybys.
Flights to other planets within the Solar System are accomplished at a cost in energy, which is described by the net change in velocity of the spacecraft, or delta-v. Due to the relatively high delta-v to reach Mercury and its proximity to the Sun, it is difficult to explore and orbits around it are rather unstable.
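To illustrate why the delta-v to Mercury is high, here is a minimal Python sketch of an idealized two-burn Hohmann transfer between circular, coplanar heliocentric orbits, using the vis-viva equation. It deliberately ignores planetary gravity wells, launch from Earth's surface, and gravity assists, so the numbers are indicative only.

```python
import math

MU_SUN = 1.32712440018e20  # Sun's gravitational parameter, m^3/s^2
AU = 1.495978707e11        # astronomical unit, m

def hohmann_delta_v(r1, r2, mu=MU_SUN):
    """Total delta-v (m/s) for a two-burn Hohmann transfer between
    circular, coplanar orbits of radii r1 and r2 (vis-viva equation)."""
    a = (r1 + r2) / 2                        # transfer-ellipse semi-major axis
    v_t1 = math.sqrt(mu * (2 / r1 - 1 / a))  # transfer speed at departure
    v_t2 = math.sqrt(mu * (2 / r2 - 1 / a))  # transfer speed at arrival
    return abs(v_t1 - math.sqrt(mu / r1)) + abs(math.sqrt(mu / r2) - v_t2)

for name, r in (("Mercury", 0.387 * AU), ("Mars", 1.524 * AU)):
    print(f"Earth -> {name}: ~{hohmann_delta_v(AU, r) / 1000:.1f} km/s")
```

Under these simplifications the Mercury transfer costs roughly three times the Mars transfer (about 17 km/s versus 6 km/s), which is why real Mercury missions lean heavily on gravity assists.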
Venus was the first target of interplanetary flyby and lander missions and, despite one of the most hostile surface environments in the Solar System, has had more landers sent to it (nearly all from the Soviet Union) than any other planet in the Solar System. The first flyby was the 1961 Venera 1, though the 1962 Mariner 2 was the first flyby to successfully return data. Mariner 2 has been followed by several other flybys by multiple space agencies often as part of missions using a Venus flyby to provide a gravitational assist en route to other celestial bodies. In 1967 Venera 4 became the first probe to enter and directly examine the atmosphere of Venus. In 1970, Venera 7 became the first successful lander to reach the surface of Venus and by 1985 it had been followed by eight additional successful Soviet Venus landers which provided images and other direct surface data. Starting in 1975 with the Soviet orbiter Venera 9 some ten successful orbiter missions have been sent to Venus, including later missions which were able to map the surface of Venus using radar to pierce the obscuring atmosphere.
Space exploration has been used as a tool to understand Earth as a celestial object in its own right. Orbital missions can provide data for Earth that can be difficult or impossible to obtain from a purely ground-based point of reference.
For example, the existence of the Van Allen radiation belts was unknown until their discovery by the United States' first artificial satellite, Explorer 1. These belts contain radiation trapped by Earth's magnetic fields, which currently renders construction of habitable space stations above 1000 km impractical.
Following this early unexpected discovery, a large number of Earth observation satellites have been deployed specifically to explore Earth from a space based perspective. These satellites have significantly contributed to the understanding of a variety of Earth-based phenomena. For instance, the hole in the ozone layer was found by an artificial satellite that was exploring Earth's atmosphere, and satellites have allowed for the discovery of archeological sites or geological formations that were difficult or impossible to otherwise identify.
The Moon was the first celestial body to be the object of space exploration. It holds the distinctions of being the first remote celestial object to be flown by, orbited, and landed upon by spacecraft, and the only remote celestial object ever to be visited by humans.
In 1959 the Soviets obtained the first images of the far side of the Moon, never previously visible to humans. The U.S. exploration of the Moon began with the Ranger 4 impactor in 1962. Starting in 1966 the Soviets successfully deployed a number of landers to the Moon which were able to obtain data directly from the Moon's surface; just four months later, Surveyor 1 marked the debut of a successful series of U.S. landers. The Soviet uncrewed missions culminated in the Lunokhod program in the early 1970s, which included the first uncrewed rovers and also successfully brought lunar soil samples to Earth for study. This marked the first (and to date the only) automated return of extraterrestrial soil samples to Earth. Uncrewed exploration of the Moon continues with various nations periodically deploying lunar orbiters, and in 2008 the Indian Moon Impact Probe.
Crewed exploration of the Moon began in 1968 with the Apollo 8 mission that successfully orbited the Moon, the first time any extraterrestrial object was orbited by humans. In 1969, the Apollo 11 mission marked the first time humans set foot upon another world. Crewed exploration of the Moon did not continue for long, however. The Apollo 17 mission in 1972 marked the sixth landing and the most recent human visit there. Artemis 2 is planned to fly by the Moon in 2022. Robotic missions are still pursued vigorously.
The exploration of Mars has been an important part of the space exploration programs of the Soviet Union (later Russia), the United States, Europe, Japan and India. Dozens of robotic spacecraft, including orbiters, landers, and rovers, have been launched toward Mars since the 1960s. These missions were aimed at gathering data about current conditions and answering questions about the history of Mars. The questions raised by the scientific community are expected to not only give a better appreciation of the red planet but also yield further insight into the past, and possible future, of Earth.
The exploration of Mars has come at a considerable financial cost with roughly two-thirds of all spacecraft destined for Mars failing before completing their missions, with some failing before they even began. Such a high failure rate can be attributed to the complexity and large number of variables involved in an interplanetary journey, and has led researchers to jokingly speak of The Great Galactic Ghoul[28] which subsists on a diet of Mars probes. This phenomenon is also informally known as the "Mars Curse".[29]
In contrast to the overall high failure rate in the exploration of Mars, India became the first country to succeed on its maiden attempt. India's Mars Orbiter Mission (MOM)[30][31][32] is one of the least expensive interplanetary missions ever undertaken, with an approximate total cost of ₹450 crore (US$73 million).[33][34] The first mission to Mars by any Arab country has been taken up by the United Arab Emirates. Called the Emirates Mars Mission, it is scheduled for launch in 2020. The uncrewed exploratory probe, named "Hope Probe", will be sent to Mars to study its atmosphere in detail.[35]
SpaceX CEO Elon Musk hopes that the SpaceX Starship will explore Mars.
The Russian space mission Fobos-Grunt, launched on 9 November 2011, experienced a failure that left it stranded in low Earth orbit.[36] It was to begin exploration of Phobos and of Martian circumterrestrial orbit, and to study whether the moons of Mars, or at least Phobos, could be a "trans-shipment point" for spaceships traveling to Mars.[37]
The exploration of Jupiter has consisted solely of a number of automated NASA spacecraft visiting the planet since 1973. A large majority of the missions have been "flybys", in which detailed observations are taken without the probe landing or entering orbit, as in the Pioneer and Voyager programs. The Galileo and Juno spacecraft are the only ones to have entered the planet's orbit. As Jupiter is believed to have only a relatively small rocky core and no real solid surface, a landing mission is precluded.
Reaching Jupiter from Earth requires a delta-v of 9.2 km/s,[38] which is comparable to the 9.7 km/s delta-v needed to reach low Earth orbit.[39] Fortunately, gravity assists through planetary flybys can be used to reduce the energy required at launch to reach Jupiter, albeit at the cost of a significantly longer flight duration.[38]
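A sketch of the patched-conic relation behind such gravity assists: in the planet's frame the hyperbolic excess speed v∞ is unchanged, but its direction rotates by a turn angle δ with sin(δ/2) = 1/(1 + r_p·v∞²/μ), so the heliocentric velocity vector changes by 2·v∞·sin(δ/2) without any propellant. The excess speed and periapsis radius below are assumed values chosen only for illustration.

```python
import math

MU_JUPITER = 1.26686534e17  # Jupiter's gravitational parameter, m^3/s^2
R_JUPITER = 7.1492e7        # Jupiter's equatorial radius, m

def flyby_turn_angle(v_inf, r_p, mu):
    """Turn angle (rad) of an unpowered hyperbolic flyby: sin(d/2) = 1/e,
    with eccentricity e = 1 + r_p * v_inf**2 / mu."""
    return 2 * math.asin(1 / (1 + r_p * v_inf ** 2 / mu))

v_inf = 6_000.0       # assumed hyperbolic excess speed, m/s (illustrative)
r_p = 3 * R_JUPITER   # assumed closest approach, 3 Jupiter radii (illustrative)
delta = flyby_turn_angle(v_inf, r_p, MU_JUPITER)
dv = 2 * v_inf * math.sin(delta / 2)  # change in heliocentric velocity, m/s
print(f"turn angle {math.degrees(delta):.0f} deg, free delta-v {dv / 1000:.1f} km/s")
```

Because μ is enormous for Jupiter, even a modest approach speed is turned through a large angle, which is how flybys buy back much of the launch energy at the price of a longer route.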
Jupiter has 79 known moons, about many of which relatively little is known.
Saturn has been explored only through uncrewed spacecraft launched by NASA, including one mission (Cassini–Huygens) planned and executed in cooperation with other space agencies. These missions consist of flybys in 1979 by Pioneer 11, in 1980 by Voyager 1, in 1982 by Voyager 2 and an orbital mission by the Cassini spacecraft, which lasted from 2004 until 2017.
Saturn has at least 62 known moons, although the exact number is debatable since Saturn's rings are made up of vast numbers of independently orbiting objects of varying sizes. The largest of the moons is Titan, which holds the distinction of being the only moon in the Solar System with an atmosphere denser and thicker than that of Earth. Titan holds the distinction of being the only object in the Outer Solar System that has been explored with a lander, the Huygens probe deployed by the Cassini spacecraft.
The exploration of Uranus has been entirely through the Voyager 2 spacecraft, with no other visits currently planned. Given its axial tilt of 97.77°, with its polar regions exposed to sunlight or darkness for long periods, scientists were not sure what to expect at Uranus. The closest approach to Uranus occurred on 24 January 1986. Voyager 2 studied the planet's unique atmosphere and magnetosphere. Voyager 2 also examined its ring system and the moons of Uranus including all five of the previously known moons, while discovering an additional ten previously unknown moons.
Images of Uranus proved to have a very uniform appearance, with no evidence of the dramatic storms or atmospheric banding evident on Jupiter and Saturn. Great effort was required to even identify a few clouds in the images of the planet. The magnetosphere of Uranus, however, proved to be unique, being profoundly affected by the planet's unusual axial tilt. In contrast to the bland appearance of Uranus itself, striking images were obtained of the Moons of Uranus, including evidence that Miranda had been unusually geologically active.
The exploration of Neptune began with the 25 August 1989 Voyager 2 flyby, the sole visit to the system as of 2014. The possibility of a Neptune Orbiter has been discussed, but no other missions have been given serious thought.
Although the extremely uniform appearance of Uranus during Voyager 2's visit in 1986 had led to expectations that Neptune would also have few visible atmospheric phenomena, the spacecraft found that Neptune had obvious banding, visible clouds, auroras, and even a conspicuous anticyclone storm system rivaled in size only by Jupiter's Great Red Spot. Neptune also proved to have the fastest winds of any planet in the Solar System, measured as high as 2,100 km/h.[40] Voyager 2 also examined Neptune's ring and moon system. It found complete rings and additional partial ring "arcs" around Neptune. In addition to examining Neptune's three previously known moons, Voyager 2 also discovered five previously unknown moons, one of which, Proteus, proved to be the second-largest moon in the system. Data from Voyager 2 supported the view that Neptune's largest moon, Triton, is a captured Kuiper belt object.[41]
The dwarf planet Pluto presents significant challenges for spacecraft because of its great distance from Earth (requiring high velocity for reasonable trip times) and small mass (making capture into orbit very difficult at present). Voyager 1 could have visited Pluto, but controllers opted instead for a close flyby of Saturn's moon Titan, resulting in a trajectory incompatible with a Pluto flyby. Voyager 2 never had a plausible trajectory for reaching Pluto.[42]
Pluto continues to be of great interest, despite its reclassification as the lead and nearest member of a new and growing class of distant icy bodies of intermediate size (and also the first member of the important subclass, defined by orbit and known as "plutinos"). After an intense political battle, a mission to Pluto dubbed New Horizons was granted funding from the United States government in 2003.[43] New Horizons was launched successfully on 19 January 2006. In early 2007 the craft made use of a gravity assist from Jupiter. Its closest approach to Pluto was on 14 July 2015; scientific observations of Pluto began five months prior to closest approach and continued for 16 days after the encounter.
Until the advent of space travel, objects in the asteroid belt were merely pinpricks of light in even the largest telescopes, their shapes and terrain remaining a mystery.
Several asteroids have now been visited by probes, the first of which was Galileo, which flew past two: 951 Gaspra in 1991, followed by 243 Ida in 1993. Both of these lay near enough to Galileo's planned trajectory to Jupiter that they could be visited at acceptable cost. The first landing on an asteroid was performed by the NEAR Shoemaker probe in 2000, following an orbital survey of the object. The dwarf planet Ceres and the asteroid 4 Vesta, two of the three largest asteroids, were visited by NASA's Dawn spacecraft, launched in 2007.
Although many comets have been studied from Earth sometimes with centuries-worth of observations, only a few comets have been closely visited. In 1985, the International Cometary Explorer conducted the first comet fly-by (21P/Giacobini-Zinner) before joining the Halley Armada studying the famous comet. The Deep Impact probe smashed into 9P/Tempel to learn more about its structure and composition and the Stardust mission returned samples of another comet's tail. The Philae lander successfully landed on Comet Churyumov–Gerasimenko in 2014 as part of the broader Rosetta mission.
Hayabusa was a robotic spacecraft developed by the Japan Aerospace Exploration Agency to return a sample of material from the small near-Earth asteroid 25143 Itokawa to Earth for further analysis. Hayabusa was launched on 9 May 2003 and rendezvoused with Itokawa in mid-September 2005. After arriving at Itokawa, Hayabusa studied the asteroid's shape, spin, topography, color, composition, density, and history. In November 2005, it landed on the asteroid twice to collect samples. The spacecraft returned to Earth on 13 June 2010.
Deep space exploration is the branch of astronomy, astronautics and space technology that is involved with the exploration of distant regions of outer space.[44] Physical exploration of space is conducted both by human spaceflights (deep-space astronautics) and by robotic spacecraft.
Some of the best candidates for future deep space engine technologies include anti-matter, nuclear power and beamed propulsion.[45] The latter, beamed propulsion, appears to be the best candidate for deep space exploration presently available, since it uses known physics and known technology that is being developed for other purposes.[46]
Breakthrough Starshot is a research and engineering project by the Breakthrough Initiatives to develop a proof-of-concept fleet of light sail spacecraft named StarChip,[47] to be capable of making the journey to the Alpha Centauri star system 4.37 light-years away. It was founded in 2016 by Yuri Milner, Stephen Hawking, and Mark Zuckerberg.[48][49]
An article in science magazine Nature suggested the use of asteroids as a gateway for space exploration, with the ultimate destination being Mars. In order to make such an approach viable, three requirements need to be fulfilled: first, "a thorough asteroid survey to find thousands of nearby bodies suitable for astronauts to visit"; second, "extending flight duration and distance capability to ever-increasing ranges out to Mars"; and finally, "developing better robotic vehicles and tools to enable astronauts to explore an asteroid regardless of its size, shape or spin." Furthermore, using asteroids would provide astronauts with protection from galactic cosmic rays, with mission crews being able to land on them without great risk to radiation exposure.
The James Webb Space Telescope (JWST or "Webb") is a space telescope that is planned to be the successor to the Hubble Space Telescope.[50][51] The JWST will provide greatly improved resolution and sensitivity over the Hubble, and will enable a broad range of investigations across the fields of astronomy and cosmology, including observing some of the most distant events and objects in the universe, such as the formation of the first galaxies. Other goals include understanding the formation of stars and planets, and direct imaging of exoplanets and novas.[52]
The primary mirror of the JWST, the Optical Telescope Element, is composed of 18 hexagonal mirror segments made of gold-plated beryllium which combine to create a 6.5-meter (21 ft; 260 in) diameter mirror that is much larger than the Hubble's 2.4-meter (7.9 ft; 94 in) mirror. Unlike the Hubble, which observes in the near ultraviolet, visible, and near infrared (0.1 to 1 μm) spectra, the JWST will observe in a lower frequency range, from long-wavelength visible light through mid-infrared (0.6 to 27 μm), which will allow it to observe high redshift objects that are too old and too distant for the Hubble to observe.[53] The telescope must be kept very cold in order to observe in the infrared without interference, so it will be deployed in space near the Earth–Sun L2 Lagrangian point, and a large sunshield made of silicon- and aluminum-coated Kapton will keep its mirror and instruments below 50 K (−220 °C; −370 °F).[54]
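To put those mirror figures in perspective, here is a back-of-the-envelope comparison in Python, treating both mirrors as simple unobstructed circles (the real mirrors are segmented and obstructed, so these are order-of-magnitude figures only):

```python
import math

D_JWST, D_HUBBLE = 6.5, 2.4  # primary mirror diameters from the text, m

# Light-gathering power scales with collecting area, i.e. diameter squared.
print(f"collecting-area ratio ~{(D_JWST / D_HUBBLE) ** 2:.1f}x")

def diffraction_limit_arcsec(wavelength_m, diameter_m):
    """Rayleigh criterion: theta ~ 1.22 * lambda / D, in arcseconds."""
    return math.degrees(1.22 * wavelength_m / diameter_m) * 3600

print(f"Hubble at 0.5 um: {diffraction_limit_arcsec(0.5e-6, D_HUBBLE):.3f} arcsec")
print(f"JWST at 2.0 um:   {diffraction_limit_arcsec(2.0e-6, D_JWST):.3f} arcsec")
```

The roughly sevenfold collecting-area advantage, together with a diffraction limit at 2 μm comparable to Hubble's at visible wavelengths, is what the improved sensitivity and resolution amount to in the infrared.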
The Artemis program is an ongoing crewed spaceflight program carried out by NASA, U.S. commercial spaceflight companies, and international partners such as ESA,[55] with the goal of landing "the first woman and the next man" on the Moon, specifically at the lunar south pole region by 2024. Artemis would be the next step towards the long-term goal of establishing a sustainable presence on the Moon, laying the foundation for private companies to build a lunar economy, and eventually sending humans to Mars.
In 2017, the lunar campaign was authorized by Space Policy Directive 1, utilizing various ongoing spacecraft programs such as Orion, the Lunar Gateway, Commercial Lunar Payload Services, and adding an undeveloped crewed lander. The Space Launch System will serve as the primary launch vehicle for Orion, while commercial launch vehicles are planned for use to launch various other elements of the campaign.[56] NASA requested $1.6 billion in additional funding for Artemis for fiscal year 2020,[57] while the Senate Appropriations Committee requested from NASA a five-year budget profile[58] which is needed for evaluation and approval by Congress.[59][60]
The research that is conducted by national space exploration agencies, such as NASA and Roscosmos, is one of the reasons supporters cite to justify government expenses. Economic analyses of NASA programs have often shown ongoing economic benefits (such as NASA spin-offs), generating many times the revenue of the cost of the program.[61] It is also argued that space exploration would lead to the extraction of resources on other planets and especially asteroids, which contain billions of dollars' worth of minerals and metals. Such expeditions could generate a lot of revenue.[62] In addition, it has been argued that space exploration programs help inspire youth to study science and engineering.[63] Space exploration also gives scientists the ability to perform experiments in other settings and expand humanity's knowledge.[64]
Another claim is that space exploration is a necessity for mankind and that staying on Earth will lead to extinction. Cited risks include the exhaustion of natural resources, comet impacts, nuclear war, and worldwide epidemics. Stephen Hawking, the renowned British theoretical physicist, said that "I don't think the human race will survive the next thousand years, unless we spread into space. There are too many accidents that can befall life on a single planet. But I'm an optimist. We will reach out to the stars."[65] Arthur C. Clarke (1950) presented a summary of motivations for the human exploration of space in his non-fiction semi-technical monograph Interplanetary Flight.[66] He argued that humanity's choice is essentially between expansion off Earth into space and cultural (and eventually biological) stagnation and death.
NASA has produced a series of public service announcement videos supporting the concept of space exploration.[67]
Overall, the public remains largely supportive of both crewed and uncrewed space exploration. According to an Associated Press Poll conducted in July 2003, 71% of U.S. citizens agreed with the statement that the space program is "a good investment", compared to 21% who did not.[68]
Spaceflight is the use of space technology to achieve the flight of spacecraft into and through outer space.
Spaceflight is used in space exploration, and also in commercial activities like space tourism and satellite telecommunications. Additional non-commercial uses of spaceflight include space observatories, reconnaissance satellites and other Earth observation satellites.
A spaceflight typically begins with a rocket launch, which provides the initial thrust to overcome the force of gravity and propels the spacecraft from the surface of Earth. Once in space, the motion of a spacecraft—both when unpropelled and when under propulsion—is covered by the area of study called astrodynamics. Some spacecraft remain in space indefinitely, some disintegrate during atmospheric reentry, and others reach a planetary or lunar surface for landing or impact.
Satellites are used for a large number of purposes. Common types include military (spy) and civilian Earth observation satellites, communication satellites, navigation satellites, weather satellites, and research satellites. Space stations and human spacecraft in orbit are also satellites.
Current examples of the commercial use of space include satellite navigation systems, satellite television and satellite radio. Space tourism is the recent phenomenon of space travel by individuals for the purpose of personal pleasure.
Private spaceflight companies such as SpaceX and Blue Origin, and commercial space stations such as those of Axiom Space and the Bigelow Commercial Space Station, have dramatically changed the landscape of space exploration, and will continue to do so in the near future.
Astrobiology is the interdisciplinary study of life in the universe, combining aspects of astronomy, biology and geology.[69] It is focused primarily on the study of the origin, distribution and evolution of life. It is also known as exobiology (from Greek: έξω, exo, "outside").[70][71][72] The term "Xenobiology" has been used as well, but this is technically incorrect because its terminology means "biology of the foreigners".[73] Astrobiologists must also consider the possibility of life that is chemically entirely distinct from any life found on Earth.[74] In the Solar System some of the prime locations for current or past astrobiology are on Enceladus, Europa, Mars, and Titan.[75]
To date, the longest human occupation of space is the International Space Station, which has been in continuous use for 19 years, 267 days. Valeri Polyakov's record single spaceflight of almost 438 days aboard the Mir space station has not been surpassed. The health effects of space have been well documented through years of research conducted in the field of aerospace medicine. Analog environments similar to those one may experience in space travel (like deep-sea submarines) have been used in this research to further explore the relationship between isolation and extreme environments.[76] It is imperative that the health of the crew be maintained, as any deviation from baseline may compromise the integrity of the mission as well as the safety of the crew, which is why astronauts must endure rigorous medical screenings and tests prior to embarking on any missions. However, it does not take long for the environmental dynamics of spaceflight to begin taking their toll on the human body; for example, space motion sickness (SMS), a condition which affects the neurovestibular system and culminates in mild to severe signs and symptoms such as vertigo, dizziness, fatigue, nausea, and disorientation, plagues almost all space travelers within their first few days in orbit.[76] Space travel can also have a profound impact on the psyche of the crew members, as delineated in anecdotal writings composed after their retirement. Space travel can adversely affect the body's natural biological clock (circadian rhythm) and sleep patterns, causing sleep deprivation and fatigue, and can impair social interaction; consequently, residing in a low Earth orbit (LEO) environment for a prolonged amount of time can result in both mental and physical exhaustion.[76] Long-term stays in space reveal issues with bone and muscle loss in low gravity, immune system suppression, and radiation exposure. The lack of gravity causes fluid to rise upward, which can cause pressure to build up in the eye, resulting in vision problems; other effects include the loss of bone minerals and density, cardiovascular deconditioning, and decreased endurance and muscle mass.[77]
Radiation is perhaps the most insidious health hazard to space travelers, as it is invisible to the naked eye and can cause cancer. Spacecraft are no longer protected from the Sun's radiation once they are positioned above Earth's magnetic field; the danger of radiation is even more potent when one enters deep space. The hazards of radiation can be ameliorated through protective shielding on the spacecraft, alerts, and dosimetry.[78]
Fortunately, with new and rapidly evolving technological advancements, those in Mission Control are able to monitor the health of their astronauts more closely utilizing telemedicine. One may not be able to completely evade the physiological effects of spaceflight, but they can be mitigated. For example, medical systems aboard space vessels such as the International Space Station (ISS) are well equipped and designed to counteract the effects of lack of gravity and weightlessness; on-board treadmills can help prevent muscle loss and reduce the risk of developing premature osteoporosis.[76][78] Additionally, a crew medical officer is appointed for each ISS mission and a flight surgeon is available 24/7 via the ISS Mission Control Center located in Houston, Texas.[79] Although the interactions are intended to take place in real time, communications between the space and terrestrial crews may become delayed, sometimes by as much as 20 minutes,[78] as their distance from each other increases when the spacecraft moves further out of LEO; because of this the crew are trained and need to be prepared to respond to any medical emergencies that arise on the vessel, as the ground crew are hundreds of miles away. Travelling and possibly living in space thus poses many challenges. Many past and current concepts for the continued exploration and colonization of space focus on a return to the Moon as a "stepping stone" to the other planets, especially Mars. At the end of 2006 NASA announced it was planning to build a permanent Moon base with continual presence by 2024.[80]
Beyond the technical factors that could make living in space more widespread, it has been suggested that the lack of private property, and the inability or difficulty of establishing property rights in space, has been an impediment to the development of space for human habitation. Since the advent of space technology in the latter half of the twentieth century, the ownership of property in space has been murky, with strong arguments both for and against. In particular, the making of national territorial claims in outer space and on celestial bodies has been specifically proscribed by the Outer Space Treaty, which had been, as of 2012, ratified by all spacefaring nations.[81]
Space colonization, also called space settlement and space humanization, would be the permanent autonomous (self-sufficient) human habitation of locations outside Earth, especially of natural satellites or planets such as the Moon or Mars, using significant amounts of in-situ resource utilization.
The first woman ever to enter space was Valentina Tereshkova. She flew in 1963, but it was not until the 1980s that another woman entered space. At the time, all astronauts were required to be military test pilots, a career women could not enter, which is one reason for the delay in allowing women to join space crews. After the rule changed, Svetlana Savitskaya, also from the Soviet Union, became the second woman in space. Sally Ride became the next woman in space and the first woman to enter space through the United States program.
Since then, eleven other countries have had women astronauts, though changes in space programs to accommodate women have been slow.
The first all-female spacewalk took place in 2019 and was carried out by Christina Koch and Jessica Meir, both of whom have participated in spacewalks with NASA. The first woman is planned to go to the Moon in 2024.
Despite these developments, women are still underrepresented among astronauts and especially cosmonauts. Issues that block potential applicants from the programs and limit the space missions they are able to go on include, for example:
Additionally, women have been treated in discriminatory ways; for example, Sally Ride was scrutinized more than her male counterparts and asked sexist questions by the press.
Artistry in and from space ranges from signals, and captured and arranged material like Yuri Gagarin's selfie in space or the image The Blue Marble, through drawings like the first made in space by cosmonaut and artist Alexei Leonov and music videos like Chris Hadfield's cover of "Space Oddity" on board the ISS, to permanent installations on celestial bodies such as on the Moon.
Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Observable universe → Universe. Each arrow (→) may be read as "within" or "part of".
en/1928.html.txt
ADDED
@@ -0,0 +1,39 @@
An explosion is a rapid increase in volume and release of energy in an extreme manner, usually with the generation of high temperatures and the release of gases. Supersonic explosions created by high explosives are known as detonations and travel via supersonic shock waves. Subsonic explosions are created by low explosives through a slower burning process known as deflagration.
Explosions can occur in nature due to a large influx of energy. Most natural explosions arise from volcanic or stellar processes of various sorts. Explosive volcanic eruptions occur when magma rising from below has much dissolved gas in it; the reduction of pressure as the magma rises causes the gas to bubble out of solution, resulting in a rapid increase in volume. Explosions also occur as a result of impact events and in phenomena such as hydrothermal explosions (also due to volcanic processes). Explosions can also occur outside of Earth in the universe in events such as supernovae. Explosions frequently occur during bushfires in eucalyptus forests where the volatile oils in the tree tops suddenly combust.[1]
Among the largest known explosions in the universe are supernovae, which result from the sudden starting or stopping of nuclear fusion in stars, and gamma-ray bursts, whose nature is still in some dispute. Solar flares are an example of a common explosion on the Sun, and presumably on most other stars as well. The energy source for solar flare activity comes from the tangling of magnetic field lines resulting from the rotation of the Sun's conductive plasma. Another type of large astronomical explosion occurs when a very large meteoroid or an asteroid impacts the surface of another object, such as a planet.
The most common artificial explosives are chemical explosives, usually involving a rapid and violent oxidation reaction that produces large amounts of hot gas. Gunpowder was the first explosive to be discovered and put to use. Other notable early developments in chemical explosive technology were Frederick Augustus Abel's development of nitrocellulose in 1865 and Alfred Nobel's invention of dynamite in 1866. Chemical explosions (both intentional and accidental) are often initiated by an electric spark or flame in the presence of oxygen. Accidental explosions may occur in fuel tanks, rocket engines, etc.
A high current electrical fault can create an 'electrical explosion' by forming a high energy electrical arc which rapidly vaporizes metal and insulation material. This arc flash hazard is a danger to persons working on energized switchgear. Also, excessive magnetic pressure within an ultra-strong electromagnet can cause a magnetic explosion.
A strictly physical process, as opposed to a chemical or nuclear one (e.g., the bursting of a sealed or partially sealed container under internal pressure), is often also referred to as an explosion. Examples include an overheated boiler or a simple tin can of beans tossed into a fire.
Boiling liquid expanding vapor explosions are one type of mechanical explosion that can occur when a vessel containing a pressurized liquid is ruptured, causing a rapid increase in volume as the liquid evaporates. Note that the contents of the container may cause a subsequent chemical explosion, the effects of which can be dramatically more serious, such as a propane tank in the midst of a fire. In such a case, the effects of the mechanical explosion when the tank fails are compounded by the explosion of the released propane (initially liquid, then almost instantaneously gaseous) in the presence of an ignition source. For this reason, emergency workers often differentiate between the two events.
In addition to stellar nuclear explosions, a nuclear weapon is a type of explosive weapon that derives its destructive force from nuclear fission or from a combination of fission and fusion. As a result, even a nuclear weapon with a small yield is significantly more powerful than the largest conventional explosives available, with a single weapon capable of completely destroying an entire city.
Explosive force is released in a direction perpendicular to the surface of the explosive. If a grenade is in mid air during the explosion, the direction of the blast will be 360°. In contrast, in a shaped charge the explosive forces are focused to produce a greater local explosion; shaped charges are often used by military to breach doors or walls.
The speed of the reaction is what distinguishes an explosive reaction from an ordinary combustion reaction. Unless the reaction occurs very rapidly, the thermally expanding gases will be moderately dissipated in the medium, with no large differential in pressure and no explosion. As a wood fire burns in a fireplace, for example, there certainly is the evolution of heat and the formation of gases, but neither is liberated rapidly enough to build up a sudden substantial pressure differential and then cause an explosion. This can be likened to the difference between the energy discharge of a battery, which is slow, and that of a flash capacitor like that in a camera flash, which releases its energy all at once.
The generation of heat in large quantities accompanies most explosive chemical reactions. The exceptions are called entropic explosives and include organic peroxides such as acetone peroxide.[2] It is the rapid liberation of heat that causes the gaseous products of most explosive reactions to expand and generate high pressures. This rapid generation of high pressures of the released gas constitutes the explosion. The liberation of heat with insufficient rapidity will not cause an explosion. For example, although a unit mass of coal yields five times as much heat as a unit mass of nitroglycerin, the coal cannot be used as an explosive (except in the form of coal dust) because the rate at which it yields this heat is quite slow. In fact, a substance that burns less rapidly (i.e. slow combustion) may actually evolve more total heat than an explosive that detonates rapidly (i.e. fast combustion). In the former, slow combustion converts more of the internal energy (i.e. chemical potential) of the burning substance into heat released to the surroundings, while in the latter, fast combustion (i.e. detonation) instead converts more internal energy into work on the surroundings (i.e. less internal energy converted into heat); cf. heat and work (thermodynamics), which are equivalent forms of energy. See Heat of Combustion for a more thorough treatment of this topic.
When a chemical compound is formed from its constituents, heat may either be absorbed or released. The quantity of heat absorbed or given off during transformation is called the heat of formation. Heats of formations for solids and gases found in explosive reactions have been determined for a temperature of 25 °C and atmospheric pressure, and are normally given in units of kilojoules per gram-molecule. A positive value indicates that heat is absorbed during the formation of the compound from its elements; such a reaction is called an endothermic reaction. In explosive technology only materials that are exothermic—that have a net liberation of heat and have a negative heat of formation—are of interest. Reaction heat is measured under conditions either of constant pressure or constant volume. It is this heat of reaction that may be properly expressed as the "heat of explosion."
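As a worked illustration of this bookkeeping (Hess's law: the heat of reaction equals the heats of formation of the products minus those of the reactants), the sketch below uses well-known textbook values for methane combustion; methane is chosen only because its values are standard, not because it is an explosive.

```python
# Hess's law: dH_rxn = sum(n * dHf over products) - sum(n * dHf over reactants).
# Standard heats of formation at 25 degrees C, in kJ/mol (textbook values).
DHF = {"CH4(g)": -74.8, "O2(g)": 0.0, "CO2(g)": -393.5, "H2O(g)": -241.8}

def heat_of_reaction(reactants, products):
    """Heat of reaction in kJ/mol from standard heats of formation."""
    return (sum(n * DHF[s] for s, n in products.items())
            - sum(n * DHF[s] for s, n in reactants.items()))

# CH4 + 2 O2 -> CO2 + 2 H2O
dh = heat_of_reaction({"CH4(g)": 1, "O2(g)": 2}, {"CO2(g)": 1, "H2O(g)": 2})
print(f"dH_rxn = {dh:.1f} kJ/mol ({'exothermic' if dh < 0 else 'endothermic'})")
```

The negative result, about −802 kJ per mole of methane, marks an exothermic reaction: the net liberation of heat that the text identifies as a prerequisite for explosive materials.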
A chemical explosive is a compound or mixture which, upon the application of heat or shock, decomposes or rearranges with extreme rapidity, yielding much gas and heat. Many substances not ordinarily classed as explosives may do one, or even two, of these things.
A reaction must be capable of being initiated by the application of shock, heat, or a catalyst (in the case of some explosive chemical reactions) to a small portion of the mass of the explosive material. A material in which the first three factors (rapidity of reaction, the formation of gases, and the evolution of heat) exist cannot be accepted as an explosive unless the reaction can be made to occur when needed.
Fragmentation is the accumulation and projection of particles as the result of a high-explosives detonation. Fragments may originate from parts of a structure (such as glass, bits of structural material, or roofing material), from exposed strata and surface-level geologic features (such as loose rocks, soil, or sand), from the casing surrounding the explosive, and from any other loose items not vaporized by the shock wave of the explosion. High-velocity, low-angle fragments can travel hundreds of metres with enough energy to initiate other surrounding high-explosive items, injure or kill personnel, and damage vehicles or structures.
Classical Latin explōdō means "to hiss a bad actor off the stage", "to drive an actor off the stage by making noise", from ex- ("out") + plaudō ("to clap; to applaud"). The modern meaning developed later:[3]
In English:
en/1929.html.txt
ADDED
@@ -0,0 +1,248 @@
The Holocaust, also known as the Shoah,[c] was the World War II genocide of the European Jews. Between 1941 and 1945, across German-occupied Europe, Nazi Germany and its collaborators systematically murdered some six million Jews, around two-thirds of Europe's Jewish population.[a][d] The murders were carried out in pogroms and mass shootings; by a policy of extermination through work in concentration camps; and in gas chambers and gas vans in German extermination camps, chiefly Auschwitz, Bełżec, Chełmno, Majdanek, Sobibór, and Treblinka in occupied Poland.[4]
Germany implemented the persecution in stages. Following Adolf Hitler's appointment as Chancellor on 30 January 1933, the regime built a network of concentration camps in Germany for political opponents and those deemed "undesirable", starting with Dachau on 22 March 1933.[5] After the passing of the Enabling Act on 24 March,[6] which gave Hitler plenary powers, the government began isolating Jews from civil society; this included boycotting Jewish businesses in April 1933 and enacting the Nuremberg Laws in September 1935. On 9–10 November 1938, eight months after Germany annexed Austria, Jewish businesses and other buildings were ransacked or set on fire throughout Germany and Austria during what became known as Kristallnacht (the "Night of Broken Glass"). After Germany invaded Poland in September 1939, triggering World War II, the regime set up ghettos to segregate Jews. Eventually thousands of camps and other detention sites were established across German-occupied Europe.
The segregation of Jews in ghettos culminated in the policy of extermination the Nazis called the "Final Solution to the Jewish Question", discussed by senior Nazi officials at the Wannsee Conference in Berlin in January 1942. As German forces captured territories in the East, all anti-Jewish measures were radicalized. Under the coordination of the SS, with directions from the highest leadership of the Nazi Party, killings were committed within Germany itself, throughout occupied Europe, and within territories controlled by Germany's allies. Paramilitary death squads called Einsatzgruppen, in cooperation with the German Army and local collaborators, murdered around 1.3 million Jews in mass shootings and pogroms between 1941 and 1945. By mid-1942, victims were being deported from ghettos across Europe in sealed freight trains to extermination camps where, if they survived the journey, they were gassed, worked or beaten to death, or killed by disease or during death marches. The killing continued until the end of World War II in Europe in May 1945.
The European Jews were targeted for extermination as part of a larger event during the Holocaust era (1933–1945),[7][8] in which Germany and its collaborators persecuted and murdered other groups, including ethnic Poles, Soviet civilians and prisoners of war, the Roma, the handicapped, political and religious dissidents, and gay men.[e] The death toll of these other groups is thought to be over 11 million.[b]
The term holocaust, first used in 1895 by the New York Times to describe the massacre of Armenian Christians by Ottoman Muslims,[9] comes from the Greek: ὁλόκαυστος, romanized: holókaustos; ὅλος hólos, "whole" + καυστός kaustós, "burnt offering".[f] The biblical term shoah (Hebrew: שׁוֹאָה), meaning "destruction", became the standard Hebrew term for the murder of the European Jews. According to Haaretz, the writer Yehuda Erez may have been the first to describe events in Germany as the shoah. Davar and later Haaretz both used the term in September 1939.[12][g] Yom HaShoah became Israel's Holocaust Remembrance Day in 1951.[14]
On 3 October 1941 the American Hebrew used the phrase "before the Holocaust", apparently to refer to the situation in France,[15] and in May 1943 the New York Times, discussing the Bermuda Conference, referred to the "hundreds of thousands of European Jews still surviving the Nazi Holocaust".[16] In 1968 the Library of Congress created a new category, "Holocaust, Jewish (1939–1945)".[17] The term was popularized in the United States by the NBC mini-series Holocaust (1978), about a fictional family of German Jews,[18] and in November that year the President's Commission on the Holocaust was established.[19] As non-Jewish groups began to include themselves as Holocaust victims, many Jews chose to use the Hebrew terms Shoah or Churban.[20][h] The Nazis used the phrase "Final Solution to the Jewish Question" (German: die Endlösung der Judenfrage).[22]
Most Holocaust historians define the Holocaust as the genocide of the European Jews by Nazi Germany and its collaborators between 1941 and 1945.[a]
Michael Gray, a specialist in Holocaust education,[30] offers three definitions: (a) "the persecution and murder of Jews by the Nazis and their collaborators between 1933 and 1945", which views Kristallnacht in 1938 as an early phase of the Holocaust; (b) "the systematic mass murder of the Jews by the Nazi regime and its collaborators between 1941 and 1945", which recognizes the policy shift in 1941 toward extermination; and (c) "the persecution and murder of various groups by the Nazi regime and its collaborators between 1933 and 1945", which includes all the Nazis' victims, a definition that fails, Gray writes, to acknowledge that only the Jews were singled out for annihilation.[31] Donald Niewyk and Francis Nicosia, in The Columbia Guide to the Holocaust (2000), favor a definition that focuses on the Jews, Roma and handicapped: "the systematic, state-sponsored murder of entire groups determined by heredity".[32]
The United States Holocaust Memorial Museum distinguishes between the Holocaust (the murder of six million Jews) and "the era of the Holocaust", which began when Hitler became Chancellor of Germany in January 1933.[33] Victims of the era of the Holocaust include those the Nazis viewed as inherently inferior (chiefly Slavs, the Roma and the handicapped), and those targeted because of their beliefs or behavior (such as Jehovah's Witnesses, communists and homosexuals).[34] Peter Hayes writes that the persecution of these groups was less consistent than that of the Jews; the Nazis' treatment of the Slavs, for example, consisted of "enslavement and gradual attrition", while some Slavs (Hayes lists Bulgarians, Croats, Slovaks and some Ukrainians) were favored.[24] Against this, Hitler regarded the Jews as what Dan Stone calls "a Gegenrasse: a 'counter-race' ... not really human at all".[e]
The logistics of the mass murder turned Germany into what Michael Berenbaum called a "genocidal state".[36] Eberhard Jäckel wrote in 1986 during the German Historikerstreit—a dispute among historians about the uniqueness of the Holocaust and its relationship with the crimes of the Soviet Union—that it was the first time a state had thrown its power behind the idea that an entire people should be wiped out.[i] Anyone with three or four Jewish grandparents was to be exterminated,[38] and complex rules were devised to deal with Mischlinge ("mixed breeds").[39] Bureaucrats identified who was a Jew, confiscated property, and scheduled trains to deport them. Companies fired Jews and later used them as slave labor. Universities dismissed Jewish faculty and students. German pharmaceutical companies tested drugs on camp prisoners; other companies built the crematoria.[36] As prisoners entered the death camps, they surrendered all personal property,[40] which was catalogued and tagged before being sent to Germany for reuse or recycling.[41] Through a concealed account, the German National Bank helped launder valuables stolen from the victims.[42]
Dan Stone writes that since the opening of archives following the fall of the former communist states in Eastern Europe, it has become increasingly clear that the Holocaust was a pan-European phenomenon, a series of "Holocausts" impossible to conduct without the help of local collaborators; without them, the Germans could not have extended the killing across most of the continent.[43][j][45] According to Donald Bloxham, in many parts of Europe "extreme collective violence was becoming an accepted measure of resolving identity crises".[46] Christian Gerlach writes that non-Germans "not under German command" killed 5–6 percent of the six million, but that their involvement was crucial in other ways.[47]
The industrialization and scale of the murder was unprecedented. Killings were systematically conducted in virtually all areas of occupied Europe—more than 20 occupied countries.[48] Nearly three million Jews in occupied Poland and between 700,000 and 2.5 million Jews in the Soviet Union were killed. Hundreds of thousands more died in the rest of Europe.[49] Some Christian churches defended converted Jews, but otherwise, Saul Friedländer wrote in 2007: "Not one social group, not one religious community, not one scholarly institution or professional association in Germany and throughout Europe declared its solidarity with the Jews ..."[50]
Medical experiments conducted on camp inmates by the SS were another distinctive feature.[51] At least 7,000 prisoners were subjected to experiments; most died as a result, during the experiments or later.[52] Twenty-three senior physicians and other medical personnel were charged at Nuremberg, after the war, with crimes against humanity. They included the head of the German Red Cross, tenured professors, clinic directors, and biomedical researchers.[53] Experiments took place at Auschwitz, Buchenwald, Dachau, Natzweiler-Struthof, Neuengamme, Ravensbrück, Sachsenhausen, and elsewhere. Some dealt with sterilization of men and women, the treatment of war wounds, ways to counteract chemical weapons, research into new vaccines and drugs, and the survival of harsh conditions.[52]
The most notorious physician was Josef Mengele, an SS officer who became the Auschwitz camp doctor on 30 May 1943.[54] Interested in genetics[54] and keen to experiment on twins, he would pick out subjects from the new arrivals during "selection" on the ramp, shouting "Zwillinge heraus!" (twins step forward!).[55] They would be measured, killed, and dissected. One of Mengele's assistants said in 1946 that he was told to send organs of interest to the directors of the "Anthropological Institute in Berlin-Dahlem". This is thought to refer to Mengele's academic supervisor, Otmar Freiherr von Verschuer, director from October 1942 of the Kaiser Wilhelm Institute of Anthropology, Human Heredity, and Eugenics in Berlin-Dahlem.[56][k]
There were around 9.5 million Jews in Europe in 1933.[60] The pre-war population was most heavily concentrated in the east: 3.5 million in Poland, 3 million in the Soviet Union, nearly 800,000 in Romania, and 700,000 in Hungary. Germany had over 500,000.[49]
Throughout the Middle Ages in Europe, Jews were subjected to antisemitism based on Christian theology, which blamed them for killing Jesus. Even after the Reformation, Catholicism and Lutheranism continued to persecute Jews, accusing them of blood libels and subjecting them to pogroms and expulsions.[61] The second half of the 19th century saw the emergence in the German empire and Austria-Hungary of the völkisch movement, developed by such thinkers as Houston Stewart Chamberlain and Paul de Lagarde. The movement embraced a pseudo-scientific racism that viewed Jews as a race whose members were locked in mortal combat with the Aryan race for world domination.[62] These ideas became commonplace throughout Germany; the professional classes adopted an ideology that did not see humans as racial equals with equal hereditary value.[63] The Nazi Party (the Nationalsozialistische Deutsche Arbeiterpartei or National Socialist German Workers' Party) originated as an offshoot of the völkisch movement, and it adopted that movement's antisemitism.[64]
After World War I (1914–1918), many Germans did not accept that their country had been defeated, which gave birth to the stab-in-the-back myth. This insinuated that it was disloyal politicians, chiefly Jews and communists, who had orchestrated Germany's surrender. Inflaming the anti-Jewish sentiment was the apparent over-representation of Jews in the leadership of communist revolutionary governments in Europe, such as Ernst Toller, head of a short-lived revolutionary government in Bavaria. This perception contributed to the canard of Jewish Bolshevism.[65]
Early antisemites in the Nazi Party included Dietrich Eckart, publisher of the Völkischer Beobachter, the party's newspaper, and Alfred Rosenberg, who wrote antisemitic articles for it in the 1920s. Rosenberg's vision of a secretive Jewish conspiracy ruling the world would influence Hitler's views of Jews by making them the driving force behind communism.[66] Central to Hitler's world view was the idea of expansion and Lebensraum (living space) in Eastern Europe for German Aryans, a policy of what Doris Bergen called "race and space". Open about his hatred of Jews, he subscribed to common antisemitic stereotypes.[67] From the early 1920s onwards, he compared the Jews to germs and said they should be dealt with in the same way. He viewed Marxism as a Jewish doctrine, said he was fighting against "Jewish Marxism", and believed that Jews had created communism as part of a conspiracy to destroy Germany.[68]
With the appointment in January 1933 of Adolf Hitler as Chancellor of Germany and the Nazis' seizure of power, German leaders proclaimed the rebirth of the Volksgemeinschaft ("people's community").[70] Nazi policies divided the population into two groups: the Volksgenossen ("national comrades") who belonged to the Volksgemeinschaft, and the Gemeinschaftsfremde ("community aliens") who did not. Enemies were divided into three groups: the "racial" or "blood" enemies, such as the Jews and Roma; political opponents of Nazism, such as Marxists, liberals, Christians, and the "reactionaries" viewed as wayward "national comrades"; and moral opponents, such as gay men, the work-shy, and habitual criminals. The latter two groups were to be sent to concentration camps for "re-education", with the aim of eventual absorption into the Volksgemeinschaft. "Racial" enemies could never belong to the Volksgemeinschaft; they were to be removed from society.[71]
Before and after the March 1933 Reichstag elections, the Nazis intensified their campaign of violence against opponents,[72] setting up concentration camps for extrajudicial imprisonment.[73] One of the first, at Dachau, opened on 22 March 1933.[74] Initially the camp contained mostly Communists and Social Democrats.[75] Other early prisons were consolidated by mid-1934 into purpose-built camps outside the cities, run exclusively by the SS.[76] The camps served as a deterrent by terrorizing Germans who did not support the regime.[77]
Throughout the 1930s, the legal, economic, and social rights of Jews were steadily restricted.[78] On 1 April 1933, there was a boycott of Jewish businesses.[79] On 7 April 1933, the Law for the Restoration of the Professional Civil Service was passed, which excluded Jews and other "non-Aryans" from the civil service.[80] Jews were disbarred from practicing law, being editors or proprietors of newspapers, joining the Journalists' Association, or owning farms.[81] In Silesia, in March 1933, a group of men entered the courthouse and beat up Jewish lawyers; Friedländer writes that, in Dresden, Jewish lawyers and judges were dragged out of courtrooms during trials.[82] Jewish students were restricted by quotas from attending schools and universities.[80] Jewish businesses were targeted for closure or "Aryanization", the forcible sale to Germans; of the approximately 50,000 Jewish-owned businesses in Germany in 1933, about 7,000 were still Jewish-owned in April 1939. Works by Jewish composers,[83] authors, and artists were excluded from publications, performances, and exhibitions.[84] Jewish doctors were dismissed or urged to resign. The Deutsches Ärzteblatt (a medical journal) reported on 6 April 1933: "Germans are to be treated by Germans only."[85]
The economic strain of the Great Depression led Protestant charities and some members of the German medical establishment to advocate compulsory sterilization of the "incurable" mentally and physically handicapped,[87] people the Nazis called Lebensunwertes Leben (life unworthy of life).[88] On 14 July 1933, the Law for the Prevention of Hereditarily Diseased Offspring (Gesetz zur Verhütung erbkranken Nachwuchses), the Sterilization Law, was passed.[89][90] The New York Times reported on 21 December that year: "400,000 Germans to be sterilized".[91] There were 84,525 applications from doctors in the first year. The courts reached a decision in 64,499 of those cases; 56,244 were in favor of sterilization.[92] Estimates for the number of involuntary sterilizations during the whole of the Third Reich range from 300,000 to 400,000.[93]
In October 1939 Hitler signed a "euthanasia decree" backdated to 1 September 1939 that authorized Reichsleiter Philipp Bouhler, the chief of Hitler's Chancellery, and Karl Brandt, Hitler's personal physician, to carry out a program of involuntary euthanasia. After the war this program came to be known as Aktion T4,[94] named after Tiergartenstraße 4, the address of a villa in the Berlin borough of Tiergarten, where the various organizations involved were headquartered.[95] T4 was mainly directed at adults, but the euthanasia of children was also carried out.[96] Between 1939 and 1941, 80,000 to 100,000 mentally ill adults in institutions were killed, as were 5,000 children and 1,000 Jews, also in institutions. There were also dedicated killing centers, where the deaths were estimated at 20,000, according to Georg Renno, deputy director of Schloss Hartheim, one of the euthanasia centers, or 400,000, according to Franz Ziereis, commandant of the Mauthausen concentration camp.[97] Overall, the number of mentally and physically handicapped murdered was about 150,000.[98]
Although not ordered to take part, psychiatrists and many psychiatric institutions were involved in the planning and carrying out of Aktion T4.[99] In August 1941, after protests from Germany's Catholic and Protestant churches, Hitler cancelled the T4 program,[100] although the handicapped continued to be killed until the end of the war.[98] The medical community regularly received bodies for research; for example, the University of Tübingen received 1,077 bodies from executions between 1933 and 1945. The German neuroscientist Julius Hallervorden received 697 brains from one hospital between 1940 and 1944: "I accepted these brains of course. Where they came from and how they came to me was really none of my business."[101]
On 15 September 1935, the Reichstag passed the Reich Citizenship Law and the Law for the Protection of German Blood and German Honor, known as the Nuremberg Laws. The former said that only those of "German or kindred blood" could be citizens. Anyone with three or more Jewish grandparents was classified as a Jew.[103] The second law said: "Marriages between Jews and subjects of the state of German or related blood are forbidden." Sexual relationships between them were also criminalized; Jews were not allowed to employ German women under the age of 45 in their homes.[104][103] The laws referred to Jews but applied equally to the Roma and black Germans. Although other European countries—Bulgaria, Croatia, Hungary, Italy, Romania, Slovakia, and Vichy France—passed similar legislation,[103] Gerlach notes that "Nazi Germany adopted more nationwide anti-Jewish laws and regulations (about 1,500) than any other state."[105]
By the end of 1934, 50,000 German Jews had left Germany,[106] and by the end of 1938, approximately half the German Jewish population had left,[107] among them the conductor Bruno Walter, who fled after being told that the hall of the Berlin Philharmonic would be burned down if he conducted a concert there.[108] Albert Einstein, who was in the United States when Hitler came to power, never returned to Germany; his citizenship was revoked and he was expelled from the Kaiser Wilhelm Society and Prussian Academy of Sciences.[109] Other Jewish scientists, including Gustav Hertz, lost their teaching positions and left the country.[110]
On 12 March 1938, Germany annexed Austria. Austrian Nazis broke into Jewish shops, stole from Jewish homes and businesses, and forced Jews to perform humiliating acts such as scrubbing the streets or cleaning toilets.[111] Jewish businesses were "Aryanized", and all the legal restrictions on Jews in Germany were imposed.[112] In August that year, Adolf Eichmann was put in charge of the Central Agency for Jewish Emigration in Vienna (Zentralstelle für jüdische Auswanderung in Wien). About 100,000 Austrian Jews had left the country by May 1939, including Sigmund Freud and his family, who moved to London.[113] The Évian Conference, attended by 32 countries, was held in France in July 1938 in an attempt to address the growing number of Jewish refugees from Germany, but aside from establishing the largely ineffectual Intergovernmental Committee on Refugees, little was accomplished, and most participating countries did not increase the number of refugees they would accept.[114]
On 7 November 1938, Herschel Grynszpan, a Polish Jew, shot the German diplomat Ernst vom Rath in the German Embassy in Paris, in retaliation for the expulsion of his parents and siblings from Germany.[115][l] When vom Rath died on 9 November, the government used his death as a pretext to instigate a pogrom against the Jews. The government claimed it was spontaneous, but in fact it had been ordered and planned by Adolf Hitler and Joseph Goebbels, although with no clear goals, according to David Cesarani. The result, he writes, was "murder, rape, looting, destruction of property, and terror on an unprecedented scale".[117]
Known as Kristallnacht (or "Night of Broken Glass"), the attacks on 9–10 November 1938 were partly carried out by the SS and SA,[118] but ordinary Germans joined in; in some areas, the violence began before the SS or SA arrived.[119] Over 7,500 Jewish shops (out of 9,000) were looted and attacked, and over 1,000 synagogues damaged or destroyed. Groups of Jews were forced by the crowd to watch their synagogues burn; in Bensheim they were made to dance around it, and in Laupheim to kneel before it.[120] At least 90 Jews died. The damage was estimated at 39 million Reichsmarks.[121] Cesarani writes that "[t]he extent of the desolation stunned the population and rocked the regime."[122] It also shocked the rest of the world. The Times of London wrote on 11 November 1938: "No foreign propagandist bent upon blackening Germany before the world could outdo the tale of burnings and beatings, of blackguardly assaults upon defenseless and innocent people, which disgraced that country yesterday. Either the German authorities were a party to this outbreak or their powers over public order and a hooligan minority are not what they are proudly claimed to be."[123]
Between 9 and 16 November, 30,000 Jews were sent to the Buchenwald, Dachau, and Sachsenhausen concentration camps.[124] Many were released within weeks; by early 1939, 2,000 remained in the camps.[125] German Jewry was held collectively responsible for restitution of the damage and also had to pay an "atonement tax" of over a billion Reichsmarks. Insurance payments for damage to their property were confiscated by the government. A decree on 12 November 1938 barred Jews from most remaining occupations.[126] Kristallnacht marked the end of any sort of public Jewish activity and culture, and Jews stepped up their efforts to leave the country.[127]
Before World War II, Germany considered mass deportation from Europe of German, and later European, Jewry.[128] Among the areas considered for possible resettlement were British Palestine and, after the war began, French Madagascar,[129] Siberia, and two reservations in Poland.[130][m] Palestine was the only location to which any German resettlement plan produced results, via the Haavara Agreement between the Zionist Federation of Germany and the German government. Between November 1933 and December 1939, the agreement resulted in the emigration of about 53,000 German Jews, who were allowed to transfer RM 100 million of their assets to Palestine by buying German goods, in violation of the Jewish-led anti-Nazi boycott of 1933.[132]
When Germany invaded Poland on 1 September 1939, triggering declarations of war from France and the UK, it gained control of an additional two million Jews, reduced to around 1.7–1.8 million in the German zone when the Soviet Union invaded from the east on 17 September.[133] The German army, the Wehrmacht, was accompanied by seven SS Einsatzgruppen ("special task forces") and an Einsatzkommando, numbering altogether 3,000 men, whose role was to deal with "all anti-German elements in hostile country behind the troops in combat". Most of the Einsatzgruppen commanders were professionals; 15 of the 25 leaders had PhDs. By 29 August, two days before the invasion, they had already drawn up a list of 30,000 people to send to concentration camps. By the first week of the invasion, 200 people were being executed every day.[134]
The Germans began sending Jews from territories they had recently annexed (Austria, Czechoslovakia, and western Poland) to the central section of Poland, which they called the General Government.[135] To make it easier to control and deport them, the Jews were concentrated in ghettos in major cities.[136] The Germans planned to set up a Jewish reservation in southeast Poland around the transit camp in Nisko, but the "Nisko plan" failed, in part because it was opposed by Hans Frank, the new Governor-General of the General Government.[137] In mid-October 1940 the idea was revived, this time to be located in Lublin. Resettlement continued until January 1941 under SS officer Odilo Globocnik, but further plans for the Lublin reservation failed for logistical and political reasons.[138]
There had been anti-Jewish pogroms in Poland before the war, including in around 100 towns between 1935 and 1937,[139] and again in 1938.[140] In June and July 1941, during the Lviv pogroms in Lwów (now Lviv, Ukraine), around 6,000 Polish Jews were murdered in the streets by the Ukrainian People's Militia and local people.[141][n] Another 2,500–3,500 Jews died in mass shootings by Einsatzgruppe C.[142] During the Jedwabne pogrom on 10 July 1941, a group of Poles in Jedwabne killed the town's Jewish community, many of whom were burned alive in a barn.[143] The attack may have been engineered by the German Security Police.[144][o]
The Germans established ghettos in Poland, in the incorporated territories and General Government area, to confine Jews.[135] These were closed off from the outside world at different times and for different reasons.[146] In early 1941, the Warsaw ghetto contained 445,000 people, including 130,000 from elsewhere,[147] while the second largest, the Łódź ghetto, held 160,000.[148] Although the Warsaw ghetto contained 30 percent of the city's population, it occupied only 2.5 percent of its area, averaging over nine people per room.[149] The massive overcrowding, poor hygiene facilities and lack of food killed thousands. Over 43,000 residents died in 1941.[150]
According to a letter dated 21 September 1939 from SS-Obergruppenführer Reinhard Heydrich, head of the Reichssicherheitshauptamt (RSHA or Reich Security Head Office), to the Einsatzgruppen, each ghetto had to be run by a Judenrat, or "Jewish Council of Elders", to consist of 24 male Jews with local influence.[151] Judenräte were responsible for the ghetto's day-to-day operations, including distributing food, water, heat, medical care, and shelter. The Germans also required them to confiscate property, organize forced labor, and, finally, facilitate deportations to extermination camps.[152] The Jewish councils' basic strategy was one of trying to minimize losses by cooperating with German authorities, bribing officials, and petitioning for better conditions.[153]
Germany invaded Norway and Denmark on 9 April 1940, during Operation Weserübung. Denmark was overrun so quickly that there was no time for a resistance to form. Consequently, the Danish government stayed in power and the Germans found it easier to work through it. Because of this, few measures were taken against the Danish Jews before 1942.[154] By June 1940 Norway was completely occupied.[155] In late 1940, the country's 1,800 Jews were banned from certain occupations, and in 1941 all Jews had to register their property with the government.[156] On 26 November 1942, 532 Jews were taken by police officers, at four o'clock in the morning, to Oslo harbor, where they boarded a German ship. From Germany they were sent by freight train to Auschwitz. According to Dan Stone, only nine survived the war.[157]
In May 1940, Germany invaded the Netherlands, Luxembourg, Belgium, and France. After Belgium's surrender, the country was ruled by a German military governor, Alexander von Falkenhausen, who enacted anti-Jewish measures against its 90,000 Jews, many of them refugees from Germany or Eastern Europe.[158] In the Netherlands, the Germans installed Arthur Seyss-Inquart as Reichskommissar, who began to persecute the country's 140,000 Jews. Jews were forced out of their jobs and had to register with the government. In February 1941, non-Jewish Dutch citizens staged a strike in protest that was quickly crushed.[159] From July 1942, over 107,000 Dutch Jews were deported; only 5,000 survived the war. Most were sent to Auschwitz; the first transport of 1,135 Jews left Holland for Auschwitz on 15 July 1942. Between 2 March and 20 July 1943, 34,313 Jews were sent in 19 transports to the Sobibór extermination camp, where all but 18 are thought to have been gassed on arrival.[160]
France had approximately 300,000 Jews, divided between the German-occupied north and the unoccupied collaborationist southern areas in Vichy France (named after the town Vichy). The occupied regions were under the control of a military governor, and there, anti-Jewish measures were not enacted as quickly as they were in the Vichy-controlled areas.[161] In July 1940, the Jews in the parts of Alsace-Lorraine that had been annexed to Germany were expelled into Vichy France.[162] Vichy France's government implemented anti-Jewish measures in French Algeria and the two French Protectorates of Tunisia and Morocco.[163] Tunisia had 85,000 Jews when the Germans and Italians arrived in November 1942; an estimated 5,000 Jews were subjected to forced labor.[164]
The fall of France gave rise to the Madagascar Plan in the summer of 1940, when French Madagascar in Southeast Africa became the focus of discussions about deporting all European Jews there; it was thought that the area's harsh living conditions would hasten deaths.[165] Several Polish, French and British leaders had discussed the idea in the 1930s, as did German leaders from 1938.[166] Adolf Eichmann's office was ordered to investigate the option, but no evidence of planning exists until after the defeat of France in June 1940.[167] Germany's inability to defeat Britain, something that was obvious to the Germans by September 1940, prevented the movement of Jews across the seas,[168] and the Foreign Ministry abandoned the plan in February 1942.[169]
Yugoslavia and Greece were invaded in April 1941 and surrendered before the end of the month. Germany and Italy divided Greece into occupation zones but did not eliminate it as a country. Yugoslavia, home to around 80,000 Jews, was dismembered; regions in the north were annexed by Germany and regions along the coast made part of Italy. The rest of the country was divided into the Independent State of Croatia, nominally an ally of Germany, and Serbia, which was governed by a combination of military and police administrators.[170] Serbia was declared free of Jews (Judenfrei) in August 1942.[171][172] Croatia's ruling party, the Ustashe, killed the majority of the country's Jews and massacred, expelled, or forcibly converted to Catholicism the area's local Orthodox Christian Serb population.[173] On 18 April 1944 Croatia was declared Judenfrei.[174][175] Jews and Serbs alike were "hacked to death and burned in barns", writes historian Jeremy Black. One difference between the Germans and the Croatians was that the Ustashe allowed its Jewish and Serbian victims to convert to Catholicism so they could escape death.[171] According to Jozo Tomasevich, of the 115 Jewish religious communities that existed in Yugoslavia in 1939 and 1940, only the Jewish community of Zagreb managed to survive the war.[176]
Germany invaded the Soviet Union on 22 June 1941, a day Timothy Snyder called "one of the most significant days in the history of Europe ... the beginning of a calamity that defies description".[177] Jürgen Matthäus described it as "a quantum leap toward the Holocaust".[178] German propaganda portrayed the conflict as an ideological war between German National Socialism and Jewish Bolshevism and as a racial war between the Germans and the Jewish, Romani, and Slavic Untermenschen ("sub-humans").[179] David Cesarani writes that the war was driven primarily by the need for resources: agricultural land to feed Germany, natural resources for German industry, and control over Europe's largest oil fields. But precisely because of the Soviet Union's vast resources, "[v]ictory would have to be swift".[180] Between early fall 1941 and late spring 1942, according to Matthäus, 2 million of the 3.5 million Soviet soldiers captured by the Wehrmacht (Germany's armed forces) had been executed or had died of neglect and abuse. By 1944 the Soviet death toll was at least 20 million.[181]
As German troops advanced, the mass shooting of "anti-German elements" was assigned, as in Poland, to the Einsatzgruppen, this time under the command of Reinhard Heydrich.[182] The point of the attacks was to destroy the local Communist Party leadership and therefore the state, including "Jews in the Party and State employment", and any "radical elements".[p] Cesarani writes that the killing of Jews was at this point a "subset" of these activities.[184]
Einsatzgruppe A arrived in the Baltic states (Estonia, Latvia, and Lithuania) with Army Group North; Einsatzgruppe B in Belarus with Army Group Center; Einsatzgruppe C in the Ukraine with Army Group South; and Einsatzgruppe D went further south into Ukraine with the 11th Army.[185] Each Einsatzgruppe numbered around 600–1,000 men, with a few women in administrative roles.[186] Travelling with nine German Order Police battalions and three units of the Waffen-SS,[187] the Einsatzgruppen and their local collaborators had murdered almost 500,000 people by the winter of 1941–1942. By the end of the war, they had killed around two million, including about 1.3 million Jews and up to a quarter of a million Roma.[188] According to Wolfram Wette, the German army took part in these shootings as bystanders, photographers, and active shooters; to justify their troops' involvement, army commanders would describe the victims as "hostages", "bandits", and "partisans".[189] Local populations helped by identifying and rounding up Jews, and by actively participating in the killing. In Lithuania, Latvia, and western Ukraine, locals were deeply involved; Latvian and Lithuanian units participated in the murder of Jews in Belarus, and in the south, Ukrainians killed about 24,000 Jews. Some Ukrainians went to Poland to serve as guards in the camps.[190]
Typically, victims would undress and give up their valuables before lining up beside a ditch to be shot, or they would be forced to climb into the ditch, lie on a lower layer of corpses, and wait to be killed.[192] The latter was known as Sardinenpackung ("packing sardines"), a method reportedly started by SS officer Friedrich Jeckeln.[193]
At first the Einsatzgruppen targeted the male Jewish intelligentsia, defined as male Jews aged 15–60 who had worked for the state and in certain professions (the commandos would describe them as "Bolshevist functionaries" and similar), but from August 1941 they began to murder women and children too.[194] Christopher Browning reports that on 1 August, the SS Cavalry Brigade passed an order to its units: "Explicit order by RF-SS [Heinrich Himmler, Reichsführer-SS]. All Jews must be shot. Drive the female Jews into the swamps."[195] In a speech on 6 October 1943 to party leaders, Heinrich Himmler said he had ordered that women and children be shot, but Peter Longerich and Christian Gerlach write that the murder of women and children began at different times in different areas, suggesting local influence.[196]
Notable massacres include the July 1941 Ponary massacre near Vilnius (Soviet Lithuania), in which Einsatzgruppe B and Lithuanian collaborators shot 72,000 Jews and 8,000 non-Jewish Lithuanians and Poles.[197] In the Kamianets-Podilskyi massacre (Soviet Ukraine), nearly 24,000 Jews were killed between 27 and 30 August 1941.[181] The largest massacre was at a ravine called Babi Yar outside Kiev (also Soviet Ukraine), where 33,771 Jews were killed on 29–30 September 1941.[198] Einsatzgruppe C and the Order Police, assisted by Ukrainian militia, carried out the killings,[199] while the German 6th Army helped round up and transport the victims to be shot.[200] The Germans continued to use the ravine for mass killings throughout the war; the total killed there could be as high as 100,000.[201]
Historians agree that there was a "gradual radicalization" between the spring and autumn of 1941 of what Longerich calls Germany's Judenpolitik, but they disagree about whether a decision—Führerentscheidung (Führer's decision)—to murder the European Jews was made at this point.[202][q] According to Christopher Browning, writing in 2004, most historians maintain that there was no order before the invasion to kill all the Soviet Jews.[204] Longerich wrote in 2010 that the gradual increase in brutality and numbers killed between July and September 1941 suggests there was "no particular order"; instead it was a question of "a process of increasingly radical interpretations of orders".[205]
According to Dan Stone, the murder of Jews in Romania was "essentially an independent undertaking".[206] Romania implemented anti-Jewish measures in May and June 1940 as part of its efforts towards an alliance with Germany. Jews were forced from government service, pogroms were carried out, and by March 1941 all Jews had lost their jobs and had their property confiscated.[207] In June 1941 Romania joined Germany in its invasion of the Soviet Union.
Thousands of Jews were killed in January and June 1941 in the Bucharest pogrom and Iaşi pogrom.[208] According to a 2004 report by Tuvia Friling and others, up to 14,850 Jews died during the Iaşi pogrom.[209] The Romanian military killed up to 25,000 Jews during the Odessa massacre between 18 October 1941 and March 1942, assisted by gendarmes and the police.[210] Mihai Antonescu, Romania's deputy prime minister, was reported to have said it was "the most favorable moment in our history" to solve the "Jewish problem".[211] In July 1941 he said it was time for "total ethnic purification, for a revision of national life, and for purging our race of all those elements which are foreign to its soul, which have grown like mistletoes and darken our future".[212] Romania set up concentration camps under its control in Transnistria, reportedly extremely brutal, where 154,000–170,000 Jews were deported from 1941 to 1943.[213]
Bulgaria introduced anti-Jewish measures between 1940 and 1943, which included a curfew, the requirement to wear a yellow star, restrictions on owning telephones or radios, the banning of mixed marriages (except for Jews who had converted to Christianity), and the registration of property.[214] It annexed Thrace and Macedonia, and in February 1943 agreed to a demand from Germany that it deport 20,000 Jews to the Treblinka extermination camp. All 11,000 Jews from the annexed territories were sent to their deaths, and plans were made to deport an additional 6,000–8,000 Bulgarian Jews from Sofia to meet the quota.[215] When the plans became public, the Orthodox Church and many Bulgarians protested, and King Boris III canceled the deportation of Jews native to Bulgaria.[216] Instead, they were expelled to provincial areas of the country.[215]
Stone writes that Slovakia, led by Roman Catholic priest Jozef Tiso (president of the Slovak State, 1939–1945), was "one of the most loyal of the collaborationist regimes". It deported 7,500 Jews in 1938 on its own initiative; introduced anti-Jewish measures in 1940; and by the autumn of 1942 had deported around 60,000 Jews to ghettos and concentration camps in Poland. Another 2,396 were deported and 2,257 killed that autumn during an uprising, and 13,500 were deported between October 1944 and March 1945.[217] According to Stone, "the Holocaust in Slovakia was far more than a German project, even if it was carried out in the context of a 'puppet' state."[218]
Although Hungary expelled Jews who were not citizens from its newly annexed lands in 1941, it did not deport most of its Jews[219] until the German invasion of Hungary in March 1944. Between 15 May and early July 1944, 437,000 Jews were deported from Hungary, mostly to Auschwitz, where most of them were gassed; there were four transports a day, each carrying 3,000 people.[220] In Budapest in October and November 1944, the Hungarian Arrow Cross forced 50,000 Jews to march to the Austrian border as part of a deal with Germany to supply forced labor. So many died that the marches were stopped.[221]
Italy introduced some antisemitic measures, but there was less antisemitism there than in Germany, and Italian-occupied countries were generally safer for Jews than those occupied by Germany. There were no deportations of Italian Jews to Germany while Italy remained an ally. In some areas, the Italian authorities even tried to protect Jews, such as in the Croatian areas of the Balkans. But while Italian forces in Russia were not as vicious towards Jews as the Germans, they did not try to stop German atrocities either.[222] Several forced labor camps for Jews were established in Italian-controlled Libya; almost 2,600 Libyan Jews were sent to camps, where 562 died.[223] In Finland, the government was pressured in 1942 to hand over its 150–200 non-Finnish Jews to Germany. After opposition from both the government and public, eight non-Finnish Jews were deported in late 1942; only one survived the war.[224] Japan had little antisemitism in its society and did not persecute Jews in most of the territories it controlled. Jews in Shanghai were confined, but despite German pressure they were not killed.[225]
Germany first used concentration camps as places of terror and unlawful incarceration of political opponents.[227] Large numbers of Jews were not sent there until after Kristallnacht in November 1938.[228] After war broke out in 1939, new camps were established, many outside Germany in occupied Europe.[229] Most wartime prisoners of the camps were not Germans but belonged to countries under German occupation.[230]
After 1942, the economic function of the camps, previously secondary to their penal and terror functions, came to the fore. Forced labor of camp prisoners became commonplace.[228] The guards became much more brutal, and the death rate increased as the guards not only beat and starved prisoners, but killed them more frequently.[230] Vernichtung durch Arbeit ("extermination through labor") was a policy; camp inmates would literally be worked to death, or to physical exhaustion, at which point they would be gassed or shot.[231] The Germans estimated the average prisoner's lifespan in a concentration camp at three months, as a result of lack of food and clothing, constant epidemics, and frequent punishments for the most minor transgressions.[232] The shifts were long and often involved exposure to dangerous materials.[233]
Transportation to and between camps was often carried out in closed freight cars with little air or water, long delays, and prisoners packed tightly.[234] In mid-1942 work camps began requiring newly arrived prisoners to be placed in quarantine for four weeks.[235] Prisoners wore colored triangles on their uniforms, the color denoting the reason for their incarceration. Red signified a political prisoner, Jehovah's Witnesses had purple triangles, "asocials" and criminals wore black and green, and gay men wore pink.[236] Jews wore two yellow triangles, one over another to form a six-pointed star.[237] Prisoners in Auschwitz were tattooed on arrival with an identification number.[238]
On 7 December 1941, Japanese aircraft attacked Pearl Harbor, an American naval base in Honolulu, Hawaii, killing 2,403 Americans. The following day, the United States declared war on Japan, and on 11 December, Germany declared war on the United States.[239] According to Deborah Dwork and Robert Jan van Pelt, Hitler had trusted American Jews, whom he assumed were all powerful, to keep the United States out of the war in the interests of German Jews. When America declared war, he blamed the Jews.[240]
Nearly three years earlier, on 30 January 1939, Hitler had told the Reichstag: "if the international Jewish financiers in and outside Europe should succeed in plunging the nations once more into a world war, then the result will be not the Bolshevising of the earth, and thus a victory of Jewry, but the annihilation of the Jewish race in Europe!"[241] In the view of Christian Gerlach, Hitler "announced his decision in principle" to annihilate the Jews on or around 12 December 1941, one day after his declaration of war. On that day, Hitler gave a speech in his private apartment at the Reich Chancellery to senior Nazi Party leaders: the Reichsleiter, the most senior, and the Gauleiter, the regional leaders.[242] The following day, Joseph Goebbels, the Reich Minister of Propaganda, noted in his diary:
Regarding the Jewish question, the Führer is determined to clear the table. He warned the Jews that if they were to cause another world war, it would lead to their destruction. Those were not empty words. Now the world war has come. The destruction of the Jews must be its necessary consequence. We cannot be sentimental about it.[t]
Christopher Browning argues that Hitler gave no order during the Reich Chancellery meeting but did make clear that he had intended his 1939 warning to the Jews to be taken literally, and he signaled to party leaders that they could give appropriate orders to others.[244] Peter Longerich interprets Hitler's speech to the party leaders as an appeal to radicalize a policy that was already being executed.[245] According to Gerlach, an unidentified former German Sicherheitsdienst officer wrote in a report in 1944, after defecting to Switzerland: "After America entered the war, the annihilation (Ausrottung) of all European Jews was initiated on the Führer's order."[246]
Four days after Hitler's meeting with party leaders, Hans Frank, Governor-General of the General Government area of occupied Poland, who was at the meeting, spoke to district governors: "We must put an end to the Jews ... I will in principle proceed only on the assumption that they will disappear. They must go."[247][u] On 18 December Hitler and Himmler held a meeting to which Himmler referred in his appointment book as "Juden frage | als Partisanen auszurotten" ("Jewish question / to be exterminated as partisans"). Browning interprets this as a meeting to discuss how to justify and speak about the killing.[249]
SS-Obergruppenführer Reinhard Heydrich, head of the Reich Security Head Office (RSHA), convened what became known as the Wannsee Conference on 20 January 1942 at Am Großen Wannsee 56–58, a villa in Berlin's Wannsee suburb.[250][251] The meeting had been scheduled for 9 December 1941, and invitations had been sent on 29 November, but it had been postponed indefinitely. A month later, invitations were sent out again, this time for 20 January.[252]
The 15 men present at Wannsee included Adolf Eichmann (head of Jewish affairs for the RSHA), Heinrich Müller (head of the Gestapo), and other SS and party leaders and department heads.[w] Browning writes that eight of the 15 had doctorates: "Thus it was not a dimwitted crowd unable to grasp what was going to be said to them."[254] Thirty copies of the minutes, known as the Wannsee Protocol, were made. Copy no. 16 was found by American prosecutors in March 1947 in a German Foreign Office folder.[255] Written by Eichmann and stamped "Top Secret", the minutes were written in "euphemistic language" on Heydrich's instructions, according to Eichmann's later testimony.[245]
Discussing plans for a "final solution to the Jewish question" ("Endlösung der Judenfrage"), and a "final solution to the Jewish question in Europe" ("Endlösung der europäischen Judenfrage"),[250] the conference was held to share information and responsibility, coordinate efforts and policies ("Parallelisierung der Linienführung"), and ensure that authority rested with Heydrich. There was also discussion about whether to include the German Mischlinge (half-Jews).[256] Heydrich told the meeting: "Another possible solution of the problem has now taken the place of emigration, i.e. the evacuation of the Jews to the East, provided that the Fuehrer gives the appropriate approval in advance."[250] He continued:
Under proper guidance, in the course of the Final Solution, the Jews are to be allocated for appropriate labor in the East. Able-bodied Jews, separated according to sex, will be taken in large work columns to these areas for work on roads, in the course of which action doubtless a large portion will be eliminated by natural causes.
The possible final remnant will, since it will undoubtedly consist of the most resistant portion, have to be treated accordingly because it is the product of natural selection and would, if released, act as the seed of a new Jewish revival. (See the experience of history.)
In the course of the practical execution of the Final Solution, Europe will be combed through from west to east. Germany proper, including the Protectorate of Bohemia and Moravia, will have to be handled first due to the housing problem and additional social and political necessities.
The evacuated Jews will first be sent, group by group, to so-called transit ghettos, from which they will be transported to the East.[250]
These evacuations were regarded as provisional or "temporary solutions" ("Ausweichmöglichkeiten").[257][x] The final solution would encompass the 11 million Jews living not only in territories controlled by Germany, but elsewhere in Europe and adjacent territories, such as Britain, Ireland, Switzerland, Turkey, Sweden, Portugal, Spain, and Hungary, "dependent on military developments".[257] There was little doubt what the final solution was, writes Longerich: "the Jews were to be annihilated by a combination of forced labour and mass murder."[259]
At the end of 1941 in occupied Poland, the Germans began building additional camps or expanding existing ones. Auschwitz, for example, was expanded in October 1941 by building Auschwitz II-Birkenau a few kilometers away.[4] By the spring or summer of 1942, gas chambers had been installed in these new facilities, except for Chełmno, which used gas vans.
Other camps sometimes described as extermination camps include Maly Trostinets near Minsk in the occupied Soviet Union, where 65,000 are thought to have died, mostly by shooting but also in gas vans;[278] Mauthausen in Austria;[279] Stutthof, near Gdańsk, Poland;[280] and Sachsenhausen and Ravensbrück in Germany. The camps in Austria, Germany and Poland all had gas chambers to kill inmates deemed unable to work.[281]
Chełmno, with gas vans only, had its roots in the Aktion T4 euthanasia program.[282] In December 1939 and January 1940, gas vans equipped with gas cylinders and a sealed compartment had been used to kill the handicapped in occupied Poland.[283] As the mass shootings continued in Russia, Himmler and his subordinates in the field feared that the murders were causing psychological problems for the SS,[284] and began searching for more efficient methods. In December 1941, similar vans, using exhaust fumes rather than bottled gas, were introduced into the camp at Chełmno.[267] Victims were asphyxiated while being driven to prepared burial pits in the nearby forests.[285] The vans were also used in the occupied Soviet Union, for example in smaller clearing actions in the Minsk ghetto,[286] and in Yugoslavia.[287] Apparently, as with the mass shootings, the vans caused emotional problems for the operators, and the small number of victims the vans could handle made them ineffective.[288]
Christian Gerlach writes that over three million Jews were murdered in 1942, the year that "marked the peak" of the mass murder.[290] At least 1.4 million of these were in the General Government area of Poland.[291] Victims usually arrived at the extermination camps by freight train.[292] Almost all arrivals at Bełżec, Sobibór and Treblinka were sent directly to the gas chambers,[293] with individuals occasionally selected to replace dead workers.[294] At Auschwitz, about 20 percent of Jews were selected to work.[295] Those selected for death at all camps were told to undress and hand their valuables to camp workers.[40] They were then herded naked into the gas chambers. To prevent panic, they were told the gas chambers were showers or delousing chambers.[296]
At Auschwitz, after the chambers were filled, the doors were shut and pellets of Zyklon-B were dropped into the chambers through vents,[297] releasing toxic prussic acid.[298] Those inside died within 20 minutes; the speed of death depended on how close the inmate was standing to a gas vent, according to the commandant Rudolf Höss, who estimated that about one-third of the victims died immediately.[299] Johann Kremer, an SS doctor who oversaw the gassings, testified that: "Shouting and screaming of the victims could be heard through the opening and it was clear that they fought for their lives."[300] The gas was then pumped out, and the Sonderkommando—work groups of mostly Jewish prisoners—carried out the bodies, extracted gold fillings, cut off women's hair, and removed jewellery, artificial limbs and glasses.[301] At Auschwitz, the bodies were at first buried in deep pits and covered with lime, but between September and November 1942, on the orders of Himmler, 100,000 bodies were dug up and burned. In early 1943, new gas chambers and crematoria were built to accommodate the numbers.[302]
Bełżec, Sobibór and Treblinka became known as the Operation Reinhard camps, named after the German plan to murder the Jews in the General Government area of occupied Poland.[303] Between March 1942 and November 1943, around 1,526,500 Jews were gassed in these three camps in gas chambers using carbon monoxide from the exhaust fumes of stationary diesel engines.[4] Gold fillings were pulled from the corpses before burial, but unlike in Auschwitz the women's hair was cut before death. At Treblinka, to calm the victims, the arrival platform was made to look like a train station, complete with a fake clock.[304] Most of the victims at these three camps were buried in pits at first. From mid-1942, as part of Sonderaktion 1005, prisoners at Auschwitz, Chełmno, Bełżec, Sobibór, and Treblinka were forced to exhume and burn bodies that had been buried, in part to hide the evidence, and in part because of the terrible smell pervading the camps and a fear that the drinking water would become polluted.[305] The corpses—700,000 in Treblinka—were burned on wood in open fire pits and the remaining bones crushed into powder.[306]
There was almost no resistance in the ghettos in Poland until the end of 1942.[308] Raul Hilberg accounted for this by evoking the history of Jewish persecution: compliance might avoid inflaming the situation until the onslaught abated.[309] Timothy Snyder noted that it was only during the three months after the deportations of July–September 1942 that agreement on the need for armed resistance was reached.[310]
Several resistance groups were formed, such as the Jewish Combat Organization (ŻOB) and Jewish Military Union (ŻZW) in the Warsaw Ghetto and the United Partisan Organization in Vilna.[311] Over 100 revolts and uprisings occurred in at least 19 ghettos and elsewhere in Eastern Europe. The best known is the Warsaw Ghetto Uprising in April 1943, when the Germans arrived to send the remaining inhabitants to extermination camps. Forced to retreat on 19 April from the ŻOB and ŻZW fighters, they returned later that day under the command of SS General Jürgen Stroop (author of the Stroop Report about the uprising).[312] Around 1,000 poorly armed fighters held the SS at bay for four weeks.[313] Polish and Jewish accounts stated that hundreds or thousands of Germans had been killed,[314] while the Germans reported 16 dead.[315] The Germans said that 14,000 Jews had been killed—7,000 during the fighting and 7,000 sent to Treblinka[316]—and between 53,000[317] and 56,000 deported.[315] According to Gwardia Ludowa, a Polish resistance newspaper, in May 1943:
From behind the screen of smoke and fire, in which the ranks of fighting Jewish partisans are dying, the legend of the exceptional fighting qualities of the Germans is being undermined. ... The fighting Jews have won for us what is most important: the truth about the weakness of the Germans.[318]
During a revolt in Treblinka on 2 August 1943, inmates killed five or six guards and set fire to camp buildings; several managed to escape.[319] In the Białystok Ghetto on 16 August, Jewish insurgents fought for five days when the Germans announced mass deportations.[320] On 14 October, Jewish prisoners in Sobibór attempted an escape, killing 11 SS officers, as well as two or three Ukrainian and Volksdeutsche guards. According to Yitzhak Arad, this was the highest number of SS officers killed in a single revolt.[321] Around 300 inmates escaped (out of 600 in the main camp), but 100 were recaptured and shot.[322] On 7 October 1944, 300 Jewish members, mostly Greek or Hungarian, of the Sonderkommando at Auschwitz learned they were about to be killed, and staged an uprising, blowing up crematorium IV.[323] Three SS officers were killed.[324] The Sonderkommando at crematorium II threw their Oberkapo into an oven when they heard the commotion, believing that a camp uprising had begun.[325] By the time the SS had regained control, 451 members of the Sonderkommando were dead; 212 survived.[326]
Estimates of Jewish participation in partisan units throughout Europe range from 20,000 to 100,000.[327] In the occupied Polish and Soviet territories, thousands of Jews fled into the swamps or forests and joined the partisans,[328] although the partisan movements did not always welcome them.[329] An estimated 20,000 to 30,000 joined the Soviet partisan movement.[330] One of the famous Jewish groups was the Bielski partisans in Belarus, led by the Bielski brothers.[328] Jews also joined Polish forces, including the Home Army. According to Timothy Snyder, "more Jews fought in the Warsaw Uprising of August 1944 than in the Warsaw Ghetto Uprising of April 1943."[331][ab]
The Polish government-in-exile in London learned about Auschwitz from the Polish leadership in Warsaw, who from late 1940 "received a continual flow of information" about the camp, according to historian Michael Fleming.[337] This was in large measure thanks to Captain Witold Pilecki of the Polish Home Army, who allowed himself to be arrested in September 1940 and sent there. He remained an inmate until his escape in April 1943; his mission was to set up a resistance movement (ZOW), prepare to take over the camp, and smuggle out information about it.[338]
On 6 January 1942, the Soviet Minister of Foreign Affairs, Vyacheslav Molotov, sent out diplomatic notes about German atrocities. The notes were based on reports about mass graves and bodies surfacing from pits and quarries in areas the Red Army had liberated, as well as witness reports from German-occupied areas.[339] The following month, Szlama Ber Winer escaped from the Chełmno concentration camp in Poland and passed information about it to the Oneg Shabbat group in the Warsaw Ghetto. His report, known by his pseudonym as the Grojanowski Report, had reached London by June 1942.[268][340] Also in 1942, Jan Karski sent information to the Allies after being smuggled into the Warsaw Ghetto twice.[341] By late July or early August 1942, Polish leaders in Warsaw had learned about the mass killing of Jews in Auschwitz, according to Fleming.[337][ac] The Polish Interior Ministry prepared a report, Sprawozdanie 6/42,[343] which said at the end:
There are different methods of execution. People are shot by firing squads, killed by an "air hammer" /Hammerluft/, and poisoned by gas in special gas chambers. Prisoners condemned to death by the Gestapo are murdered by the first two methods. The third method, the gas chamber, is employed for those who are ill or incapable of work and those who have been brought in transports especially for the purpose /Soviet prisoners of war, and, recently Jews/.[337]
Sprawozdanie 6/42 was sent to Polish officials in London by courier and had reached them by 12 November 1942, where it was translated into English and added to another, "Report on Conditions in Poland", dated 27 November. Fleming writes that the latter was sent to the Polish Embassy in the United States.[344] On 10 December 1942, the Polish Foreign Affairs Minister, Edward Raczyński, addressed the fledgling United Nations on the killings; the address was distributed with the title The Mass Extermination of Jews in German Occupied Poland. He told them about the use of poison gas; about Treblinka, Bełżec and Sobibór; that the Polish underground had referred to them as extermination camps; and that tens of thousands of Jews had been killed in Bełżec in March and April 1942.[345] One in three Jews in Poland were already dead, he estimated, from a population of 3,130,000.[346] Raczyński's address was covered by the New York Times and The Times of London. Winston Churchill received it, and Anthony Eden presented it to the British cabinet. On 17 December 1942, 11 Allies issued the Joint Declaration by Members of the United Nations condemning the "bestial policy of cold-blooded extermination".[347][348]
The British and American governments were reluctant to publicize the intelligence they had received. A BBC Hungarian Service memo, written by Carlile Macartney, a BBC broadcaster and senior Foreign Office adviser on Hungary, stated in 1942: "We shouldn't mention the Jews at all." The British government's view was that the Hungarian people's antisemitism would make them distrust the Allies if their broadcasts focused on the Jews.[349] The US government similarly feared turning the war into one about the Jews; antisemitism and isolationism were common in the US before its entry into the war.[350] Although governments and the German public appear to have understood what was happening, it seems the Jews themselves did not. According to Saul Friedländer, "[t]estimonies left by Jews from all over occupied Europe indicate that, in contradistinction to vast segments of surrounding society, the victims did not understand what was ultimately in store for them." In Western Europe, he writes, Jewish communities seem to have failed to piece the information together, while in Eastern Europe, they could not accept that the stories they had heard from elsewhere would end up applying to them too.[351]
The SS liquidated most of the Jewish ghettos of the General Government area of Poland in 1942–1943 and shipped their populations to the camps for extermination.[353] The only exception was the Lodz Ghetto, which was not liquidated until mid-1944.[354] About 42,000 Jews in the General Government were shot during Operation Harvest Festival (Aktion Erntefest) on 3–4 November 1943.[355] At the same time, rail shipments were arriving regularly from western and southern Europe at the extermination camps.[356] Shipments of Jews to the camps had priority on the German railways over anything but the army's needs, and continued even in the face of the increasingly dire military situation at the end of 1942.[357] Army leaders and economic managers complained about this diversion of resources and the killing of skilled Jewish workers,[358] but Nazi leaders rated ideological imperatives above economic considerations.[359]
By 1943 it was evident to the armed forces leadership that Germany was losing the war.[360] The mass murder continued nevertheless, reaching a "frenetic" pace in 1944[361] when Auschwitz gassed nearly 500,000 people.[362] On 19 March 1944, Hitler ordered the military occupation of Hungary and dispatched Adolf Eichmann to Budapest to supervise the deportation of the country's Jews.[363] From 22 March Jews were required to wear the yellow star; were forbidden from owning cars, bicycles, radios or telephones; and were later forced into ghettos.[364] Between 15 May and 9 July, 437,000 Jews were deported from Hungary to Auschwitz II-Birkenau, almost all sent directly to the gas chambers.[ad] A month before the deportations began, Eichmann offered through an intermediary, Joel Brand, to exchange one million Jews for 10,000 trucks from the Allies, which the Germans would undertake not to use on the Western front.[367] The British leaked the proposal to the press; The Times called it "a new level of fantasy and self-deception".[368] By mid-1944 Jewish communities within easy reach of the Nazi regime had largely been exterminated.[369]
As the Soviet armed forces advanced, the SS closed down the camps in eastern Poland and made efforts to conceal what had happened. The gas chambers were dismantled, the crematoria dynamited, and the mass graves dug up and corpses cremated.[370] From January to April 1945, the SS sent inmates westward on "death marches" to camps in Germany and Austria.[371][372] In January 1945, the Germans held records of 714,000 inmates in concentration camps; by May, 250,000 (35 percent) had died during death marches.[373] Already sick after months or years of violence and starvation, they were marched to train stations and transported for days at a time without food or shelter in open freight cars, then forced to march again at the other end to the new camp. Some went by truck or wagons; others were marched the entire distance to the new camp. Those who lagged behind or fell were shot.[374]
The first major camp to be encountered by Allied troops, Majdanek, was discovered by the advancing Soviets, along with its gas chambers, on 25 July 1944.[375] Treblinka, Sobibór, and Bełżec were never liberated, but were destroyed by the Germans in 1943.[376] On 17 January 1945, 58,000 Auschwitz inmates were sent on a death march westwards;[377] when the camp was liberated by the Soviets on 27 January, they found just 7,000 inmates in the three main camps and 500 in subcamps.[378] Buchenwald was liberated by the Americans on 11 April;[379] Bergen-Belsen by the British on 15 April;[380] Dachau by the Americans on 29 April;[381] Ravensbrück by the Soviets on 30 April;[382] and Mauthausen by the Americans on 5 May.[383] The Red Cross took control of Theresienstadt on 3 May, days before the Soviets arrived.[384]
The British 11th Armoured Division found around 60,000 prisoners (90 percent Jews) when they liberated Bergen-Belsen,[380][385] as well as 13,000 unburied corpses; another 10,000 people died from typhus or malnutrition over the following weeks.[386] The BBC's war correspondent Richard Dimbleby described the scenes that greeted him and the British Army at Belsen, in a report so graphic the BBC declined to broadcast it for four days, and did so, on 19 April, only after Dimbleby threatened to resign.[387] He said he had "never seen British soldiers so moved to cold fury":[388]
Here over an acre of ground lay dead and dying people. You could not see which was which. ... The living lay with their heads against the corpses and around them moved the awful, ghostly procession of emaciated, aimless people, with nothing to do and with no hope of life, unable to move out of your way, unable to look at the terrible sights around them ... Babies had been born here, tiny wizened things that could not live. A mother, driven mad, screamed at a British sentry to give her milk for her child, and thrust the tiny mite into his arms. ... He opened the bundle and found the baby had been dead for days. This day at Belsen was the most horrible of my life.
The Jews killed represented around one third of world Jewry[394] and about two-thirds of European Jewry, based on a pre-war estimate of 9.7 million Jews in Europe.[395] According to the Yad Vashem Holocaust Martyrs' and Heroes' Remembrance Authority in Jerusalem, "[a]ll the serious research" confirms that between five and six million Jews died.[396] Early postwar calculations were 4.2–4.5 million from Gerald Reitlinger;[397] 5.1 million from Raul Hilberg; and 5.95 million from Jacob Lestschinsky.[398] In 1990 Yehuda Bauer and Robert Rozett estimated 5.59–5.86 million,[399] and in 1991 Wolfgang Benz suggested 5.29 to just over 6 million.[400][af] The figures include over one million children.[401] Much of the uncertainty stems from the lack of a reliable figure for the number of Jews in Europe in 1939, border changes that make double-counting of victims difficult to avoid, lack of accurate records from the perpetrators, and uncertainty about whether to include post-liberation deaths caused by the persecution.[397]
The death camps in occupied Poland accounted for half the Jews killed. At Auschwitz the Jewish death toll was 960,000;[402] Treblinka 870,000;[276] Bełżec 600,000;[266] Chełmno 320,000;[268] Sobibór 250,000;[274] and Majdanek 79,000.[273]
Death rates were heavily dependent on the survival of European states willing to protect their Jewish citizens.[403] In countries allied to Germany, the control over Jewish citizens was sometimes seen as a matter of sovereignty; the continuous presence of state institutions prevented the Jewish communities' complete destruction.[403] In occupied countries, the survival of the state was likewise correlated with lower Jewish death rates: 75 percent of Jews died in the Netherlands, as did 99 percent of Jews who were in Estonia when the Germans arrived—the Nazis declared Estonia Judenfrei ("free of Jews") in January 1942 at the Wannsee Conference—while 75 percent survived in France and 99 percent in Denmark.[404]
The survival of Jews in countries where states survived demonstrates, writes Christian Gerlach, "that there were limits to German power" and that the influence of non-Germans—governments and others—was "crucial".[47] Jews who lived where pre-war statehood was destroyed (Poland and the Baltic states) or displaced (western USSR) were at the mercy of both German power and sometimes hostile local populations. Almost all Jews living in German-occupied Poland, Baltic states and the USSR were killed, with a 5 percent chance of survival on average.[403] Of Poland's 3.3 million Jews, about 90 percent were killed.[405]
The Nazis regarded the Slavs as Untermenschen (subhuman).[24] German troops destroyed villages throughout the Soviet Union,[409] rounded up civilians for forced labor in Germany, and caused famine by taking foodstuffs.[410] In Belarus, Germany imposed a regime that deported 380,000 people for slave labor and killed hundreds of thousands. Over 600 villages had their populations killed and at least 5,295 Belarusian settlements were destroyed. According to Timothy Snyder, of nine million people in Soviet Belarus in 1941, around 1.6 million were killed by Germans away from the battlefield.[411] The United States Holocaust Memorial Museum estimates that 3.3 million of 5.7 million Soviet POWs died in German custody.[412]
Hitler made clear that Polish workers were to be kept in what Robert Gellately called a "permanent condition of inferiority".[413] In a memorandum to Hitler dated 25 May 1940, "A Few Thoughts on the Treatment of the Ethnically Alien Population in the East", Himmler stated that it was in German interests to foster divisions between the ethnic groups in the East. He wanted to restrict non-Germans in the conquered territories to an elementary-school education that would teach them how to write their names, count up to 500, work hard, and obey Germans.[414] The Polish political class became the target of a campaign of murder (Intelligenzaktion and AB-Aktion).[415] Between 1.8 and 1.9 million non-Jewish Polish citizens were killed by Germans during the war; about four-fifths were ethnic Poles and the rest Ukrainians and Belarusians.[406] At least 200,000 died in concentration camps, around 146,000 in Auschwitz. Others died in massacres or in uprisings such as the Warsaw Uprising, where 120,000–200,000 were killed.[416]
Germany and its allies killed up to 220,000 Roma, around 25 percent of the community in Europe,[417] in what the Romani people call the Pořajmos.[418] Robert Ritter, head of the Rassenhygienische und Bevölkerungsbiologische Forschungsstelle, called them "a peculiar form of the human species who are incapable of development and came about by mutation".[419] In May 1942 they were placed under similar laws to the Jews, and in December Himmler ordered that they be sent to Auschwitz, unless they had served in the Wehrmacht.[420] He adjusted the order on 15 November 1943 to allow "sedentary Gypsies and part-Gypsies" in the occupied Soviet areas to be viewed as citizens.[421] In Belgium, France and the Netherlands, the Roma were subject to restrictions on movement and confinement to collection camps,[422] while in Eastern Europe they were sent to concentration camps, where large numbers were murdered.[423] In the camps, they were usually counted among the asocials and required to wear brown or black triangles on their prison clothes.[424]
German communists, socialists and trade unionists were among the earliest opponents of the Nazis[425] and among the first to be sent to concentration camps.[426] Nacht und Nebel ("Night and Fog"), a directive issued by Hitler on 7 December 1941, resulted in the disappearance, torture and death of political activists throughout German-occupied Europe; the courts had sentenced 1,793 people to death by April 1944, according to Jack Fischel.[427] Because they refused to pledge allegiance to the Nazi party or serve in the military, Jehovah's Witnesses were sent to concentration camps, where they were identified by purple triangles and given the option of renouncing their faith and submitting to the state's authority.[428] The United States Holocaust Memorial Museum estimates that between 2,700 and 3,300 were sent to the camps, where 1,400 died.[407] According to German historian Detlef Garbe, "no other religious movement resisted the pressure to conform to National Socialism with comparable unanimity and steadfastness."[429]
Around 100,000 gay men were arrested in Germany and 50,000 jailed between 1933 and 1945; 5,000–15,000 are thought to have been sent to concentration camps, where they were identified by a pink triangle on their camp clothes. It is not known how many died.[408] Hundreds were castrated, sometimes "voluntarily" to avoid criminal sentences.[430] In 1936 Himmler created the Reich Central Office for the Combating of Homosexuality and Abortion.[431] The police closed gay bars and shut down gay publications.[408] Lesbians were left relatively unaffected; the Nazis saw them as "asocials", rather than sexual deviants.[432]
There were 5,000–25,000 Afro-Germans in Germany when the Nazis came to power.[433] Although blacks in Germany and German-occupied Europe were subjected to incarceration, sterilization and murder, there was no program to kill them as a group.[434]
The Nuremberg trials were a series of military tribunals held after the war by the Allies in Nuremberg, Germany, to prosecute the German leadership. The first was the 1945–1946 trial of 22 political and military leaders before the International Military Tribunal.[435] Adolf Hitler, Heinrich Himmler, and Joseph Goebbels had committed suicide months earlier.[436] The prosecution entered indictments against 24 men (two were dropped before the end of the trial)[ag] and seven organizations: the Reich Cabinet, Schutzstaffel (SS), Sicherheitsdienst (SD), Gestapo, Sturmabteilung (SA), and the "General Staff and High Command".[437]
The indictments were for participation in a common plan or conspiracy for the accomplishment of a crime against peace; planning, initiating and waging wars of aggression and other crimes against peace; war crimes; and crimes against humanity. The tribunal passed judgements ranging from acquittal to death by hanging.[437] Eleven defendants were executed, including Joachim von Ribbentrop, Wilhelm Keitel, Alfred Rosenberg, and Alfred Jodl. Ribbentrop, the judgement declared, "played an important part in Hitler's 'final solution of the Jewish question'".[438]
The subsequent Nuremberg trials, 1946–1949, tried another 185 defendants.[439] West Germany initially tried few ex-Nazis, but after the 1958 Ulm Einsatzkommando trial, the government set up a dedicated agency.[440] Other trials of Nazis and collaborators took place in Western and Eastern Europe. In 1960 Mossad agents captured Adolf Eichmann in Argentina and brought him to Israel to stand trial on 15 indictments, including war crimes, crimes against humanity, and crimes against the Jewish people. He was convicted in December 1961 and executed in June 1962. Eichmann's trial and death revived interest in war criminals and the Holocaust in general.[441]
The government of Israel requested $1.5 billion from the Federal Republic of Germany in March 1951 to finance the rehabilitation of 500,000 Jewish survivors, arguing that Germany had stolen $6 billion from the European Jews. Israelis were divided about the idea of taking money from Germany. The Conference on Jewish Material Claims Against Germany (known as the Claims Conference) was opened in New York, and after negotiations the claim was reduced to $845 million.[442][443]
West Germany allocated another $125 million for reparations in 1988. Companies such as BMW, Deutsche Bank, Ford, Opel, Siemens, and Volkswagen faced lawsuits for their use of forced labor during the war.[442] In response, Germany set up the "Remembrance, Responsibility and Future" Foundation in 2000, which paid €4.45 billion to former slave laborers (up to €7,670 each).[444] In 2013 Germany agreed to provide €772 million to fund nursing care, social services, and medication for 56,000 Holocaust survivors around the world.[445] The French state-owned railway company, the SNCF, agreed in 2014 to pay $60 million to Jewish-American survivors, around $100,000 each, for its role in the transport of 76,000 Jews from France to extermination camps between 1942 and 1944.[446]
In the early decades of Holocaust studies, scholars approached the Holocaust as a genocide unique in its reach and specificity; Nora Levin called the "world of Auschwitz" a "new planet."[447] This was questioned in the 1980s during the West German Historikerstreit ("historians' dispute"), an attempt to re-position the Holocaust within German historiography.[448][ah]
Ernst Nolte triggered the Historikerstreit in June 1986 with an article in the conservative newspaper Frankfurter Allgemeine Zeitung: "The past that will not pass: A speech that could be written but no longer delivered".[450][451][ai] Rather than being studied as an historical event, the Nazi era was suspended like a sword over Germany's present, he wrote. He compared "the guilt of the Germans" to the Nazi idea of "the guilt of the Jews", and argued that the focus on the Final Solution overlooked the Nazis' euthanasia program and treatment of Soviet POWs, as well as post-war issues such as the Vietnam War and Soviet–Afghan War. Comparing Auschwitz to the Gulag, he suggested that the Holocaust was a response to Hitler's fear of the Soviet Union: "Did the Gulag Archipelago not precede Auschwitz? Was the Bolshevik murder of an entire class not the logical and factual prius of the 'racial murder' of National Socialism? ... Was Auschwitz perhaps rooted in a past that would not pass?"[aj]
Nolte's arguments were viewed as an attempt to normalize the Holocaust; one of the debate's key questions, according to historian Ernst Piper, was whether history should "historicize" or "moralize".[455][ak] In September 1986 in the left-leaning Die Zeit, Eberhard Jäckel responded that "never before had a state, with the authority of its leader, decided and announced that a specific group of humans, including the elderly, women, children and infants, would be killed as quickly as possible, then carried out this resolution using every possible means of state power."[i]
Despite the criticism of Nolte, the Historikerstreit put "the question of comparison" on the agenda, according to Dan Stone in 2010.[448] He argued that the idea of the Holocaust as unique was overtaken by attempts to place it within the context of Stalinism, ethnic cleansing, and the Nazis' intentions for post-war "demographic reordering", particularly the Generalplan Ost, the plan to kill tens of millions of Slavs to create living space for Germans.[457] Jäckel's position continued nevertheless to inform the views of many specialists. Richard J. Evans argued in 2015:
Thus although the Nazi "Final Solution" was one genocide among many, it had features that made it stand out from all the rest as well. Unlike all the others it was bounded neither by space nor by time. It was launched not against a local or regional obstacle, but at a world-enemy seen as operating on a global scale. It was bound to an even larger plan of racial reordering and reconstruction involving further genocidal killing on an almost unimaginable scale, aimed, however, at clearing the way in a particular region – Eastern Europe – for a further struggle against the Jews and those the Nazis regarded as their puppets. It was set in motion by ideologues who saw world history in racial terms. It was, in part, carried out by industrial methods. These things all make it unique.
In September 2018, an online CNN–ComRes poll of 7,092 adults in seven European countries—Austria, France, Germany, Great Britain, Hungary, Poland, and Sweden—found that one in 20 had never heard of the Holocaust. The figure included one in five people in France aged 18–34. Four in 10 Austrians said they knew "just a little" about it; 12 percent of young people there said they had never heard of it.[459] A 2018 survey in the United States found that 22 percent of 1,350 adults said they had never heard of it, while 41 percent of Americans and 66 percent of millennials did not know what Auschwitz was.[460][461] In 2019, a survey of 1,100 Canadians found that 49 percent could not name any of the concentration camps.[462]
Yad Vashem (undated): "The Holocaust was the murder of approximately six million Jews by the Nazis and their collaborators. Between the German invasion of the Soviet Union in the summer of 1941 and the end of the war in Europe in May 1945, Nazi Germany and its accomplices strove to murder every Jew under their domination."[29]
The Century Dictionary (1904): "a sacrifice or offering entirely consumed by fire, in use among the Jews and some pagan nations. Figuratively, a great slaughter or sacrifice of life, as by fire or other accident, or in battle."[11]
Translation, Avalon Project: "These actions are, however, only to be considered provisional, but practical experience is already being collected which is of the greatest importance in relation to the future final solution of the Jewish question."[250]
The speech that could not be delivered referred to a lecture Nolte had planned to give to the Römerberg-Gesprächen (Römerberg Colloquium) in Frankfurt; he said his invitation had been withdrawn, which the organizers disputed.[453] At that point, his lecture had the title "The Past That Will Not Pass: To Debate or to Draw the Line?".[454]
"The Holocaust: Definition and Preliminary Discussion". Holocaust Resource Center, Yad Vashem. Archived from the original on 26 June 2015.
German: "Wannsee-Protokoll". EuroDocs. Harold B. Lee Library, Brigham Young University. Archived from the original on 22 June 2006.
"Audio slideshow: Liberation of Belsen". BBC News. 15 April 2005. Archived from the original on 13 February 2009.
For France, Gerlach 2016, p. 14; for Denmark and Estonia, Snyder 2015, p. 212; for Estonia (Estland) and the Wannsee Conference, "Besprechungsprotokoll" (PDF). Haus der Wannsee-Konferenz. p. 171. Archived (PDF) from the original on 2 February 2019.
Davies, Lizzie (17 February 2009). "France responsible for sending Jews to concentration camps, says court". The Guardian. Archived from the original on 10 October 2017.
Knowlton & Cates 1993, pp. 18–23; partly reproduced in "The Past That Will Not Pass" (translation), German History in Documents and Images.
en/193.html.txt
ADDED
@@ -0,0 +1,228 @@
Central America (Spanish: América Central, pronounced [aˈmeɾika senˈtɾal], Centroamérica pronounced [sentɾoaˈmeɾika]) is a region in the southern tip of North America and is sometimes defined as a subregion of the Americas.[1] This region is bordered by Mexico to the north, Colombia to the southeast, the Caribbean Sea to the east and the Pacific Ocean to the west and south. Central America consists of seven countries: El Salvador, Costa Rica, Belize, Guatemala, Honduras, Nicaragua and Panama. The combined population of Central America is estimated at 44.53 million (2016).[2]
Central America is a part of the Mesoamerican biodiversity hotspot, which extends from northern Guatemala to central Panama. Due to the presence of several active geologic faults and the Central America Volcanic Arc, there is a great deal of seismic activity in the region, such as volcanic eruptions and earthquakes, which has resulted in death, injury and property damage.
In the Pre-Columbian era, Central America was inhabited by the indigenous peoples of Mesoamerica to the north and west and the Isthmo-Colombian peoples to the south and east. Following the Spanish expedition of Christopher Columbus' voyages to the Americas, Spain began to colonize the Americas. From 1609 to 1821, the majority of Central American territories (except for what would become Belize and Panama, and including the modern Mexican state of Chiapas) were governed by the viceroyalty of New Spain from Mexico City as the Captaincy General of Guatemala. On 24 August 1821, Spanish Viceroy Juan de O'Donojú signed the Treaty of Córdoba, which established New Spain's independence from Spain.[3] On 15 September 1821, the Act of Independence of Central America was enacted to announce Central America's separation from the Spanish Empire and provide for the establishment of a new Central American state. Some of New Spain's provinces in the Central American region (i.e. what would become Guatemala, Honduras, El Salvador, Nicaragua and Costa Rica) were annexed to the First Mexican Empire; however, in 1823 they seceded from Mexico to form the Federal Republic of Central America until 1838.
In 1838, Nicaragua, Honduras, Costa Rica and Guatemala became the first of Central America's seven states to become independent autonomous countries, followed by El Salvador in 1841, Panama in 1903 and Belize in 1981.[citation needed] Despite the dissolution of the Federal Republic of Central America, there is anecdotal evidence that demonstrates that Salvadorans, Panamanians, Costa Ricans, Guatemalans, Hondurans and Nicaraguans continue to maintain a Central American identity. For instance, Central Americans sometimes refer to their nations as if they were provinces of a Central American state. It is not unusual to write "C.A." after the country's name in formal and informal contexts. Governments in the region sometimes reinforce this sense of belonging to Central America among their citizens. For example, automobile licence plates in many of the region's countries include the moniker Centroamerica alongside the country's name. Belizeans are generally considered to be culturally West Indian rather than Central American.
El Salvador
Guatemala
Honduras
Nicaragua
Costa Rica
Panama
Belize
"Central America" may mean different things to various people, based upon different contexts:
Ancient footprints of Acahualinca, Nicaragua
Stone spheres of Costa Rica
Tazumal, El Salvador
Tikal, Guatemala
Copan, Honduras
Altun Ha, Belize
In the Pre-Columbian era, the northern areas of Central America were inhabited by the indigenous peoples of Mesoamerica. Most notable among these were the Mayans, who had built numerous cities throughout the region, and the Aztecs, who had created a vast empire. The pre-Columbian cultures of eastern El Salvador, eastern Honduras, Caribbean Nicaragua, most of Costa Rica and Panama were predominantly speakers of the Chibchan languages at the time of European contact and are considered by some[6] culturally different and grouped in the Isthmo-Colombian Area.
Following the Spanish expedition of Christopher Columbus's voyages to the Americas, the Spanish sent many expeditions to the region, and they began their conquest of Maya territory in 1523. Soon after the conquest of the Aztec Empire, Spanish conquistador Pedro de Alvarado commenced the conquest of northern Central America for the Spanish Empire. Beginning with his arrival in Soconusco in 1523, Alvarado's forces systematically conquered and subjugated most of the major Maya kingdoms, including the K'iche', Tz'utujil, Pipil, and the Kaqchikel. By 1528, the conquest of Guatemala was nearly complete, with only the Petén Basin remaining outside the Spanish sphere of influence. The last independent Maya kingdoms – the Kowoj and the Itza people – were finally defeated in 1697, as part of the Spanish conquest of Petén.[citation needed]
In 1538, Spain established the Real Audiencia of Panama, which had jurisdiction over all land from the Strait of Magellan to the Gulf of Fonseca. This entity was dissolved in 1543, and most of the territory within Central America then fell under the jurisdiction of the Audiencia Real de Guatemala. This area included the current territories of Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and the Mexican state of Chiapas, but excluded the lands that would become Belize and Panama. The president of the Audiencia, which had its seat in Antigua Guatemala, was the governor of the entire area. In 1609 the area became a captaincy general and the governor was also granted the title of captain general. The Captaincy General of Guatemala encompassed most of Central America, with the exception of present-day Belize and Panama.
The Captaincy General of Guatemala lasted for more than two centuries, but began to fray after a rebellion in 1811 which began in the intendancy of San Salvador. The Captaincy General formally ended on 15 September 1821, with the signing of the Act of Independence of Central America. Mexican independence was achieved at virtually the same time with the signing of the Treaty of Córdoba and the Declaration of Independence of the Mexican Empire, and the entire region was finally independent from Spanish authority by 28 September 1821.
From its independence from Spain in 1821 until 1823, the former Captaincy General remained intact as part of the short-lived First Mexican Empire. When the Emperor of Mexico abdicated on 19 March 1823, Central America again became independent. On 1 July 1823, the Congress of Central America peacefully seceded from Mexico and declared absolute independence from all foreign nations, and the region formed the Federal Republic of Central America.[citation needed]
The Federal Republic of Central America was a representative democracy with its capital at Guatemala City. This union consisted of the provinces of Costa Rica, El Salvador, Guatemala, Honduras, Los Altos, Mosquito Coast, and Nicaragua. The lowlands of southwest Chiapas, including Soconusco, initially belonged to the Republic until 1824, when Mexico annexed most of Chiapas and began its claims to Soconusco. The Republic lasted from 1823 to 1838, when it disintegrated as a result of civil wars.[citation needed]
The United Provinces of Central America
Federal Republic of Central America
National Representation of Central America
Greater Republic of Central America
Guatemala
El Salvador
Honduras
Nicaragua
Costa Rica
Panama
Belize
The territory that now makes up Belize was heavily contested in a dispute that continued for decades after Guatemala achieved independence (see History of Belize (1506–1862)). Spain, and later Guatemala, considered this land a Guatemalan department. In 1862, Britain formally declared it a British colony and named it British Honduras. It became independent as Belize in 1981.[citation needed]
Panama, situated in the southernmost part of Central America on the Isthmus of Panama, has for most of its history been culturally and politically linked to South America. Panama was part of the Province of Tierra Firme from 1510 until 1538, when it came under the jurisdiction of the newly formed Audiencia Real de Panama. Beginning in 1543, Panama was administered as part of the Viceroyalty of Peru, along with all other Spanish possessions in South America. Panama remained part of the Viceroyalty of Peru until 1739, when it was transferred to the Viceroyalty of New Granada, the capital of which was located at Santa Fé de Bogotá. Panama remained part of the Viceroyalty of New Granada until the disestablishment of that viceroyalty in 1819. A series of military and political struggles took place from that time until 1822, resulting in the republic of Gran Colombia. After the dissolution of Gran Colombia in 1830, Panama became part of a successor state, the Republic of New Granada. From 1855 until 1886, Panama existed as Panama State, first within the Republic of New Granada, then within the Granadine Confederation, and finally within the United States of Colombia. The United States of Colombia was replaced by the Republic of Colombia in 1886. As part of the Republic of Colombia, Panama State was abolished and it became the Isthmus Department. Despite the many political reorganizations, Colombia was still deeply plagued by conflict, which eventually led to the secession of Panama on 3 November 1903. Only after that time did some begin to regard Panama as a North or Central American entity.[citation needed]
By the 1930s the United Fruit Company owned 14,000 square kilometres (3.5 million acres) of land in Central America and the Caribbean and was the single largest land owner in Guatemala. Such holdings gave it great power over the governments of small countries. That was one of the factors that led to the coining of the phrase banana republic.[7]
After more than two hundred years of social unrest, violent conflict, and revolution, Central America today remains in a period of political transformation. Poverty, social injustice, and violence are still widespread.[8] Nicaragua is the second poorest country in the western hemisphere (only Haiti is poorer).[9]
Central America is the tapering isthmus of southern North America, with unique and varied geographic features. The Pacific Ocean lies to the southwest, the Caribbean Sea lies to the northeast, and the Gulf of Mexico lies to the north. Some physiographists define the Isthmus of Tehuantepec as the northern geographic border of Central America,[10] while others use the northwestern borders of Belize and Guatemala. From there, the Central American land mass extends southeastward to the Atrato River, where it connects to the Pacific Lowlands in northwestern South America.
Of the many mountain ranges within Central America, the longest are the Sierra Madre de Chiapas, the Cordillera Isabelia and the Cordillera de Talamanca. At 4,220 meters (13,850 ft), Volcán Tajumulco is the highest peak in Central America. Other high points of Central America are as listed in the table below:
Between the mountain ranges lie fertile valleys that are suitable for the raising of livestock and for the production of coffee, tobacco, beans and other crops. Most of the population of Honduras, Costa Rica and Guatemala lives in valleys.[11]
Trade winds have a significant effect upon the climate of Central America. Temperatures in Central America are highest just prior to the summer wet season, and are lowest during the winter dry season, when trade winds contribute to a cooler climate. The highest temperatures occur in April, due to higher levels of sunlight, lower cloud cover and a decrease in trade winds.[12]
Central America is part of the Mesoamerican biodiversity hotspot, boasting 7% of the world's biodiversity.[13] The Pacific Flyway is a major north–south flyway for migratory birds in the Americas, extending from Alaska to Tierra del Fuego. Due to the funnel-like shape of its land mass, migratory birds can be seen in very high concentrations in Central America, especially in the spring and autumn. As a bridge between North America and South America, Central America has many species from the Nearctic and the Neotropical realms. However, the southern countries of the region (Costa Rica and Panama) have more biodiversity than the northern countries (Guatemala and Belize), while the central countries (Honduras, Nicaragua and El Salvador) have the least.[13] The table below shows recent statistics:
Belize
Montecristo National Park, El Salvador
Maderas forest, Nicaragua
Texiguat Wildlife Refuge, Honduras
Monteverde Cloud Forest Reserve, Costa Rica.
Parque Internacional la Amistad, Panama
Petén-Veracruz moist forests, Guatemala
Over 300 species of the region's flora and fauna are threatened, 107 of which are classified as critically endangered. The underlying problems are deforestation, which is estimated by FAO at 1.2% per year in Central America and Mexico combined, fragmentation of rainforests and the fact that 80% of the vegetation in Central America has already been converted to agriculture.[21]
Efforts to protect fauna and flora in the region are made by creating ecoregions and nature reserves. 36% of Belize's land territory falls under some form of official protected status, giving Belize one of the most extensive systems of terrestrial protected areas in the Americas. In addition, 13% of Belize's marine territory are also protected.[22] A large coral reef extends from Mexico to Honduras: the Mesoamerican Barrier Reef System. The Belize Barrier Reef is part of this. The Belize Barrier Reef is home to a large diversity of plants and animals, and is one of the most diverse ecosystems of the world. It is home to 70 hard coral species, 36 soft coral species, 500 species of fish and hundreds of invertebrate species.
So far only about 10% of the species in the Belize barrier reef have been discovered.[23]
Lycaste skinneri, Guatemala
Yucca gigantea, El Salvador
Rhyncholaelia digbyana, Honduras
Plumeria, Nicaragua
Guarianthe skinneri, Costa Rica
Peristeria elata, Panama
Prosthechea cochleata, Belize
From 2001 to 2010, 5,376 square kilometers (2,076 sq mi) of forest were lost in the region. In 2010 Belize had 63% of remaining forest cover, Costa Rica 46%, Panama 45%, Honduras 41%, Guatemala 37%, Nicaragua 29%, and El Salvador 21%. Most of the loss occurred in the moist forest biome, with 12,201 square kilometers (4,711 sq mi). Woody vegetation loss was partially offset by a gain in the coniferous forest biome of 4,730 square kilometers (1,830 sq mi), and a gain in the dry forest biome of 2,054 square kilometers (793 sq mi). Mangroves and deserts contributed only 1% to the loss in forest vegetation. The bulk of the deforestation was located on the Caribbean slopes of Nicaragua, with a loss of 8,574 square kilometers (3,310 sq mi) of forest in the period from 2001 to 2010. The most significant regrowth, of 3,050 square kilometers (1,180 sq mi) of forest, was seen in the coniferous woody vegetation of Honduras.[24]
The Central American pine-oak forests ecoregion, in the tropical and subtropical coniferous forests biome, is found in Central America and southern Mexico. The Central American pine-oak forests occupy an area of 111,400 square kilometers (43,000 sq mi),[25] extending along the mountainous spine of Central America, from the Sierra Madre de Chiapas in Mexico's Chiapas state through the highlands of Guatemala, El Salvador, and Honduras to central Nicaragua. The pine-oak forests lie between 600–1,800 metres (2,000–5,900 ft) elevation,[25] and are surrounded at lower elevations by tropical moist forests and tropical dry forests. Higher elevations above 1,800 metres (5,900 ft) are usually covered with Central American montane forests. The Central American pine-oak forests are composed of many species characteristic of temperate North America, including oak, pine, fir, and cypress.
Laurel forest is the most common type of Central American temperate evergreen cloud forest, found in almost all Central American countries, normally more than 1,000 meters (3,300 ft) above sea level. Tree species include evergreen oaks, members of the laurel family, and species of Weinmannia, Drimys, and Magnolia.[26] The cloud forest of Sierra de las Minas, Guatemala, is the largest in Central America. In some areas of southeastern Honduras there are cloud forests, the largest located near the border with Nicaragua. In Nicaragua, cloud forests are situated near the border with Honduras, but many were cleared to grow coffee. There are still some temperate evergreen hills in the north. The only cloud forest in the Pacific coastal zone of Central America is on the Mombacho volcano in Nicaragua. In Costa Rica, there are laurel forests in the Cordillera de Tilarán and on Volcán Arenal, known as Monteverde, as well as in the Cordillera de Talamanca.
The Central American montane forests are an ecoregion of the tropical and subtropical moist broadleaf forests biome, as defined by the World Wildlife Fund.[27] These forests are of the moist deciduous and the semi-evergreen seasonal subtype of tropical and subtropical moist broadleaf forests and receive high overall rainfall with a warm summer wet season and a cooler winter dry season. Central American montane forests consist of forest patches located at altitudes ranging from 1,800–4,000 metres (5,900–13,100 ft), on the summits and slopes of the highest mountains in Central America ranging from Southern Mexico, through Guatemala, El Salvador, and Honduras, to northern Nicaragua. The entire ecoregion covers an area of 13,200 square kilometers (5,100 sq mi) and has a temperate climate with relatively high precipitation levels.[27]
Resplendent quetzal, Guatemala
Turquoise-browed motmot, El Salvador and Nicaragua
Keel-billed toucan, Belize
Scarlet macaw, Honduras
Clay-colored thrush, Costa Rica
Harpy eagle, Panama
Ecoregions are not only established to protect the forests themselves but also because they are habitats for an incomparably rich and often endemic fauna. Almost half of the bird population of the Talamancan montane forests in Costa Rica and Panama are endemic to this region. Several birds are listed as threatened, most notably the resplendent quetzal (Pharomacrus mocinno), three-wattled bellbird (Procnias tricarunculata), bare-necked umbrellabird (Cephalopterus glabricollis), and black guan (Chamaepetes unicolor). Many of the amphibians are endemic and depend on the existence of forest. The golden toad that once inhabited a small region in the Monteverde Reserve, which is part of the Talamancan montane forests, has not been seen alive since 1989 and is listed as extinct by IUCN. The exact causes for its extinction are unknown. Global warming may have played a role, because the development of that frog, which is typical for this area, may have been compromised. Seven small mammals are endemic to the Costa Rica-Chiriqui highlands within the Talamancan montane forest region. Jaguars, cougars, spider monkeys, tapirs, and anteaters live in the woods of Central America.[26] The Central American red brocket is a brocket deer found in Central America's tropical forest.
Coatepeque Caldera, El Salvador
Lake Atitlán, Guatemala
Mombacho, Nicaragua
Arenal Volcano, Costa Rica
Central America is geologically very active, with volcanic eruptions and earthquakes occurring frequently, and tsunamis occurring occasionally. Many thousands of people have died as a result of these natural disasters.
Most of Central America rests atop the Caribbean Plate. This tectonic plate converges with the Cocos, Nazca, and North American plates to form the Middle America Trench, a major subduction zone. The Middle America Trench is situated some 60–160 kilometers (37–99 mi) off the Pacific coast of Central America and runs roughly parallel to it. Many large earthquakes have occurred as a result of seismic activity at the Middle America Trench.[28] For example, subduction of the Cocos Plate beneath the North American Plate at the Middle America Trench is believed to have caused the 1985 Mexico City earthquake that killed as many as 40,000 people. Seismic activity at the Middle America Trench is also responsible for earthquakes in 1902, 1942, 1956, 1982, 1992, 2001, 2007, 2012, 2014, and many other earthquakes throughout Central America.
The Middle America Trench is not the only source of seismic activity in Central America. The Motagua Fault is an onshore continuation of the Cayman Trough which forms part of the tectonic boundary between the North American Plate and the Caribbean Plate. This transform fault cuts right across Guatemala and then continues offshore until it merges with the Middle America Trench along the Pacific coast of Mexico, near Acapulco. Seismic activity at the Motagua Fault has been responsible for earthquakes in 1717, 1773, 1902, 1976, 1980, and 2009.
Another onshore continuation of the Cayman Trough is the Chixoy-Polochic Fault, which runs parallel to, and roughly 80 kilometers (50 mi) north of, the Motagua Fault. Though less active than the Motagua Fault, seismic activity at the Chixoy-Polochic Fault is still thought to be capable of producing very large earthquakes, such as the 1816 Guatemala earthquake.[29]
Managua, the capital of Nicaragua, was devastated by earthquakes in 1931 and 1972.
Volcanic eruptions are also common in Central America. In 1968, the Arenal Volcano in Costa Rica erupted, killing 87 people as the three villages of Tabacon, Pueblo Nuevo and San Luis were buried under pyroclastic flows and debris. Fertile soils from weathered volcanic lava have made it possible to sustain dense populations in the agriculturally productive highland areas.
The population of Central America is estimated at 47,448,333 as of 2018.[30][31] With an area of 523,780 square kilometers (202,230 sq mi),[32] it has a population density of 91 per square kilometer (240/sq mi). Human Development Index values are from the estimates for 2017.[33]
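As a quick sanity check, the quoted density follows directly from the two figures above; a minimal sketch in Python (values simply copied from the text):

population = 47_448_333              # estimated population of Central America, 2018
area_km2 = 523_780                   # area in square kilometers
print(round(population / area_km2))  # prints 91, the quoted density per square kilometer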
Spanish is the official language of all Central American countries except Belize, where the official language is English. Mayan languages constitute a language family of about 26 related languages, 21 of which Guatemala formally recognized in 1996. Xinca and Garifuna are also present in Central America.
This region of the continent is very rich in ethnic groups. The majority of the population is mestizo, with sizable Mayan and African-descended populations, including Xinca and Garifuna minorities. The immigration of Arabs, Jews, Chinese, Europeans and others brought additional groups to the area.
The predominant religion in Central America is Christianity (95.6%).[35] Beginning with the Spanish colonization of Central America in the 16th century, Roman Catholicism became the most popular religion in the region until the first half of the 20th century. Since the 1960s, there has been an increase in other Christian groups, particularly Protestantism, as well as other religious organizations, and individuals identifying themselves as having no religion.[36]
Guatemalan textiles
Mola (art form), Panama
La Palma art form, El Salvador
Central America is currently undergoing a process of political, economic and cultural transformation that started in 1907 with the creation of the Central American Court of Justice.
In 1951 the integration process continued with the signing of the San Salvador Treaty, which created ODECA, the Organization of Central American States. However, the unity of ODECA was limited by conflicts between several member states.
In 1991, the integration agenda was further advanced by the creation of the Central American Integration System (Sistema de la Integración Centroamericana, or SICA). SICA provides a clear legal basis for avoiding disputes between the member states. SICA membership includes the seven nations of Central America plus the Dominican Republic, a state traditionally considered part of the Caribbean.
On 6 December 2008, SICA announced an agreement to pursue a common currency and common passport for the member nations.[37] No timeline for implementation was discussed.
Central America already has several supranational institutions such as the Central American Parliament, the Central American Bank for Economic Integration and the Central American Common Market.
On 22 July 2011, President Mauricio Funes of El Salvador became the first president pro tempore of SICA. El Salvador also became the headquarters of SICA with the inauguration of a new building.[38]
Until recently,[when?] all Central American countries maintained diplomatic relations with Taiwan instead of China. President Óscar Arias of Costa Rica, however, established diplomatic relations with China in 2007, severing formal diplomatic ties with Taiwan.[39] Panama broke off relations with the Republic of China in 2017 and established diplomatic relations with the People's Republic of China. In August 2018, El Salvador also severed ties with Taiwan and formally recognized the People's Republic of China as the sole China, a move many considered to lack transparency, given its abruptness and reports that the Chinese government wished to invest in the department of La Unión while also promising to fund the ruling party's reelection campaign.[40] El Salvador's next president, Nayib Bukele, maintained the break with Taiwan and pursued closer relations with China.
The Central American Parliament (also known as PARLACEN) is a political and parliamentary body of SICA. The parliament started around 1980, and its primary goal was to resolve conflicts in Nicaragua, Guatemala, and El Salvador. Although the group was disbanded in 1986, the idea of Central American unity remained, and a treaty signed in 1987 created the Central American Parliament and other political bodies. Its original members were Guatemala, El Salvador, Nicaragua and Honduras. The parliament is the political organ of Central America and is part of SICA. New members have since joined, including Panama and the Dominican Republic.
Costa Rica is not a member state of the Central American Parliament, and accession remains deeply unpopular at all levels of Costa Rican society, where the regional parliament is widely criticized and regarded as a threat to democratic accountability and to the effectiveness of integration efforts. The reasons most commonly invoked by Costa Ricans against the Central American Parliament are its members' excessively high salaries, their legal immunity from the jurisdiction of any member state, corruption, the non-binding nature and ineffectiveness of its decisions, its high operating costs, and the immediate membership granted to Central American presidents at the end of their terms of office.[citation needed]
Signed in 2004, the Central American Free Trade Agreement (CAFTA) is an agreement between the United States, Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and the Dominican Republic. The treaty is aimed at promoting free trade among its members.
Guatemala has the largest economy in the region.[41][42] Its main exports are coffee, sugar, bananas, petroleum, clothing, and cardamom. Of its $10.29 billion in annual exports,[43] 40.2% go to the United States, 11.1% to neighboring El Salvador, 8% to Honduras, 5.5% to Mexico, 4.7% to Nicaragua, and 4.3% to Costa Rica.[44]
The region is particularly attractive to companies (especially clothing companies) because of its geographical proximity to the United States, very low wages and considerable tax advantages. In addition, the decline in the prices of coffee and other export products, together with the structural adjustment measures promoted by the international financial institutions, has partly ruined agriculture, favouring the emergence of maquiladoras. This sector accounts for 42 per cent of total exports from El Salvador, 55 per cent from Guatemala, and 65 per cent from Honduras. However, its contribution to the economies of these countries is disputed: raw materials are imported, jobs are precarious and low-paid, and tax exemptions weaken public finances.[45]
Maquiladoras are also criticised for the working conditions of their employees: insults and physical violence, abusive dismissals (especially of pregnant workers), excessive working hours, and non-payment of overtime. According to Lucrecia Bautista, coordinator of the maquilas sector of the audit firm Coverco, labour law regulations are regularly violated in maquilas and there is no political will to enforce their application. In cases of infringement, the labour inspectorate shows remarkable leniency, so as not to discourage investors. Trade unionists are subject to pressure, and sometimes to kidnapping or murder; in some cases, business leaders have used the services of the maras. Finally, blacklists containing the names of trade unionists or political activists circulate in employers' circles.[45]
Economic growth in Central America is projected to slow slightly in 2014–15, as country-specific domestic factors offset the positive effects from stronger economic activity in the United States.[46]
Playa Blanca, Guatemala
Jiquilisco Bay, El Salvador
Roatán, Honduras
Pink Pearl Island, Nicaragua
Tamarindo, Costa Rica
Cayos Zapatilla, Panama
Corozal Beach, Belize
Tourism in Belize has grown considerably in more recent times, and it is now the second largest industry in the nation. Belizean Prime Minister Dean Barrow has stated his intention to use tourism to combat poverty throughout the country.[49] The growth in tourism has positively affected the agricultural, commercial, and finance industries, as well as the construction industry. The results for Belize's tourism-driven economy have been significant, with the nation welcoming almost one million tourists in a calendar year for the first time in its history in 2012.[50] Belize is also the only country in Central America with English as its official language, making this country a comfortable destination for English-speaking tourists.[51]
Costa Rica is the most visited nation in Central America.[52] Tourism in Costa Rica is one of the country's fastest growing economic sectors,[53] having become the largest source of foreign revenue by 1995.[54] Since 1999, tourism has earned more foreign exchange than bananas, pineapples and coffee exports combined.[55] The tourism boom began in 1987,[54] with the number of visitors up from 329,000 in 1988, through 1.03 million in 1999, to a historical record of 2.43 million foreign visitors and $1.92 billion in revenue in 2013.[52] In 2012, tourism contributed 12.5% of the country's GDP and was responsible for 11.7% of direct and indirect employment.[56]
Tourism in Nicaragua has grown considerably recently, and it is now the second largest industry in the nation. Nicaraguan President Daniel Ortega has stated his intention to use tourism to combat poverty throughout the country.[57] The growth in tourism has positively affected the agricultural, commercial, and finance industries, as well as the construction industry. The results for Nicaragua's tourism-driven economy have been significant, with the nation welcoming one million tourists in a calendar year for the first time in its history in 2010.[58]
The Inter-American Highway is the Central American section of the Pan-American Highway, and spans 5,470 kilometers (3,400 mi) between Nuevo Laredo, Mexico, and Panama City, Panama. Because of the 87-kilometer (54 mi) break in the highway known as the Darién Gap, it is not possible to cross between Central America and South America in an automobile.
en/1930.html.txt
ADDED
@@ -0,0 +1,77 @@
Fable is a literary genre: a succinct fictional story, in prose or verse, that features animals, legendary creatures, plants, inanimate objects, or forces of nature that are anthropomorphized, and that illustrates or leads to a particular moral lesson (a "moral"), which may at the end be added explicitly as a concise maxim or saying.
A fable differs from a parable in that the latter excludes animals, plants, inanimate objects, and forces of nature as actors that assume speech or other powers of humankind.
Usage has not always been so clearly distinguished. In the King James Version of the New Testament, "μῦθος" ("mythos") was rendered by the translators as "fable"[1] in the First Epistle to Timothy, the Second Epistle to Timothy, the Epistle to Titus and the First Epistle of Peter.[2]
A person who writes fables is a fabulist.
The fable is one of the most enduring forms of folk literature, spread abroad, modern researchers agree,[3] less by literary anthologies than by oral transmission. Fables can be found in the literature of almost every country.
The varying corpus denoted Aesopica or Aesop's Fables includes most of the best-known western fables, which are attributed to the legendary Aesop, supposed to have been a slave in ancient Greece around 550 BCE. When Babrius set down fables from the Aesopica in verse for a Hellenistic Prince "Alexander," he expressly stated at the head of Book II that this type of "myth" that Aesop had introduced to the "sons of the Hellenes" had been an invention of "Syrians" from the time of "Ninos" (personifying Nineveh to Greeks) and Belos ("ruler").[4] Epicharmus of Kos and Phormis are reported as having been among the first to invent comic fables.[5] Many familiar fables of Aesop include "The Crow and the Pitcher", "The Tortoise and the Hare" and "The Lion and the Mouse". In ancient Greek and Roman education, the fable was the first of the progymnasmata—training exercises in prose composition and public speaking—wherein students would be asked to learn fables, expand upon them, invent their own, and finally use them as persuasive examples in longer forensic or deliberative speeches. The need of instructors to teach, and students to learn, a wide range of fables as material for their declamations resulted in their being gathered together in collections, like those of Aesop.
African oral culture[6] has a rich story-telling tradition. As they have for thousands of years, people of all ages in Africa continue to interact with nature, including plants, animals and earthly structures such as rivers, plains, and mountains. Grandparents enjoy enormous respect in African societies and take on the role of story-teller in their retirement years. Children and, to some extent, adults are mesmerized by good story-tellers when they become animated in their quest to tell a good fable.
Joel Chandler Harris wrote African-American fables in the Southern context of slavery under the name of Uncle Remus. His stories of the animal characters Brer Rabbit, Brer Fox, and Brer Bear are modern examples of African-American story-telling, though they remain subject to critique and controversy over whether Uncle Remus was a racist or an apologist for slavery. The Disney movie Song of the South introduced many of the stories to the public, including audiences unfamiliar with the role that storytelling played in the lives of peoples who had been captured in Africa and other lands and relocated to provide slave labor for colonized countries, without training in the speech, reading, or writing of the cultures to which they were taken.
India has a rich tradition of fables, largely explainable by the fact that the culture derives traditions and moral qualities from natural elements. Some of the gods take the forms of animals with ideal qualities. Hundreds of fables were composed in ancient India during the first millennium BCE, often as stories within frame stories. Indian fables have a mixed cast of humans and animals. The dialogues are often longer than in fables of Aesop, and often witty, as the animals try to outwit one another by trickery and deceit. In Indian fables, man is not superior to the animals. The tales are often comical. The Indian fable adhered to the universally known traditions of the fable. The best examples of the fable in India are the Panchatantra and the Jataka tales. These included Vishnu Sarma's Panchatantra, the Hitopadesha, Vikram and The Vampire, and Syntipas' Seven Wise Masters, which were collections of fables that were later influential throughout the Old World. Ben E. Perry (compiler of the "Perry Index" of Aesop's fables) has argued controversially that some of the Buddhist Jataka tales and some of the fables in the Panchatantra may have been influenced by similar Greek and Near Eastern ones.[7] Earlier Indian epics such as Vyasa's Mahabharata and Valmiki's Ramayana also contained fables within the main story, often as side stories or back-stories. The most famous folk stories from the Near East were the One Thousand and One Nights, also known as the Arabian Nights.
From the spiritual tradition of India, the Panchatantra fables are among the oldest surviving stories of humankind, passed on by word of mouth for generations before they were written down. Their vivid pacing makes them well suited to bedtime reading and storytelling sessions for story lovers of all ages.
The Panchatantra is an ancient Indian collection of interrelated animal fables in Sanskrit verse and prose. The earliest recorded version, attributed to Vishnu Sharma, dates to around 300 BCE, and the tales themselves are likely much older, having been passed down orally through the generations. The word "Panchatantra" is a compound of the Sanskrit words pancha (five) and tantra (weave); literally translated, it means the interweaving of five skeins of traditions and teachings into one text.
The written fables are thus documented versions of oral stories that circulated across earlier generations. Even today they remain favorite stories across India and other parts of the world, cherished by adults and children alike.
They give children an early footing in moral and social values, shaping young minds toward an ethical future.
These Indian Panchatantra stories have been translated into simple English and serve well as short Indian stories for children: ancient tales that are still relevant to today's hi-tech way of life.
Fables had a further long tradition through the Middle Ages, and became part of European high literature. During the 17th century, the French fabulist Jean de La Fontaine (1621–1695) saw the soul of the fable in the moral — a rule of behavior. Starting with the Aesopian pattern, La Fontaine set out to satirize the court, the church, the rising bourgeoisie, indeed the entire human scene of his time.[9] La Fontaine's model was subsequently emulated by England's John Gay (1685–1732);[10] Poland's Ignacy Krasicki (1735–1801);[11] Italy's Lorenzo Pignotti (1739–1812)[12][verification needed] and Giovanni Gherardo de Rossi (1754–1827);[13][verification needed] Serbia's Dositej Obradović (1739–1811); Spain's Félix María de Samaniego (1745–1801)[14] and Tomás de Iriarte y Oropesa (1750–1791);[15][verification needed] France's Jean-Pierre Claris de Florian (1755–94);[16] and Russia's Ivan Krylov (1769–1844).[17]
In modern times, while the fable has been trivialized in children's books, it has also been fully adapted to modern adult literature. Felix Salten's Bambi (1923) is a Bildungsroman — a story of a protagonist's coming-of-age — cast in the form of a fable. James Thurber used the ancient fable style in his books Fables for Our Time (1940) and Further Fables for Our Time (1956), and in his stories "The Princess and the Tin Box" in The Beast in Me and Other Animals (1948) and "The Last Clock: A Fable for the Time, Such As It Is, of Man" in Lanterns and Lances (1961). Władysław Reymont's The Revolt (1922), a metaphor for the Bolshevik Revolution of 1917, described a revolt by animals that take over their farm in order to introduce "equality." George Orwell's Animal Farm (1945) similarly satirized Stalinist Communism in particular, and totalitarianism in general, in the guise of animal fable.
In the 21st century, the Neapolitan writer Sabatino Scia is the author of more than two hundred fables that he describes as "western protest fables." The characters are not only animals, but also things, beings, and elements from nature. Scia's aim is the same as in the traditional fable: to play the role of revealer of human society. In Latin America, the brothers Juan and Victor Ataucuri Garcia have contributed to the resurgence of the fable, with a novel idea: to use the fable as a means of disseminating the traditional literature of the region. In their book Fábulas Peruanas, published in 2003, they collected myths, legends and beliefs of Andean and Amazonian Peru and rewrote them as fables. The result has been an extraordinary work rich in regional nuances, in which the relationship between man and his origins, with nature, with history, customs and beliefs, becomes a source of norms and values.[18]
Aesop, by Velázquez
Vyasa
Valmiki
Jean de La Fontaine
Sulkhan-Saba Orbeliani
John Gay
Christian Fürchtegott Gellert
Gotthold Ephraim Lessing
Ignacy Krasicki
Dositej Obradović
Félix María de Samaniego
Tomás de Iriarte y Oropesa
Jean-Pierre Claris de Florian
Ivan Krylov
Hans Christian Andersen
Ambrose Bierce
Joel Chandler Harris
Władysław Reymont
Felix Salten
Don Marquis
James Thurber
George Orwell
en/1931.html.txt
ADDED
@@ -0,0 +1,77 @@
en/1932.html.txt
ADDED
@@ -0,0 +1,368 @@
Facebook (stylized as facebook) is an American online social media and social networking service based in Menlo Park, California, and a flagship service of the namesake company Facebook, Inc. It was founded by Mark Zuckerberg, along with fellow Harvard College students and roommates Eduardo Saverin, Andrew McCollum, Dustin Moskovitz and Chris Hughes.
The founders initially limited Facebook membership to Harvard students. Membership was expanded to Columbia, Stanford, and Yale before being expanded to the rest of the Ivy League, MIT, and higher education institutions in the Boston area, then various other universities, and lastly high school students. Since 2006, anyone who claims to be at least 13 years old has been allowed to become a registered user of Facebook, though this may vary depending on local laws. The name comes from the face book directories often given to American university students.
Facebook can be accessed from devices with Internet connectivity, such as personal computers, tablets and smartphones. After registering, users can create a profile revealing information about themselves. They can post text, photos and multimedia which is shared with any other users that have agreed to be their "friend", or, with a different privacy setting, with any reader. Users can also use various embedded apps, join common-interest groups, buy and sell items or services on Marketplace, and receive notifications of their Facebook friends' activities and activities of Facebook pages they follow. Facebook claimed that it had more than 2.3 billion monthly active users as of December 2018,[9] and it was the most downloaded mobile app of the 2010s globally.[10]
Facebook has been subject to extensive media coverage and many controversies, often involving user privacy (as with the Cambridge Analytica data scandal), political manipulation (as with the 2016 U.S. elections), psychological effects such as addiction and low self-esteem, and content such as fake news, conspiracy theories, copyright infringement, and hate speech.[11] Commentators have accused Facebook of willingly facilitating the spread of such content.[12][13][14][15]
Zuckerberg built a website called "Facemash" in 2003 while attending Harvard University. The site was comparable to Hot or Not and used "photos compiled from the online face books of nine Houses, placing two next to each other at a time and asking users to choose the "hotter" person".[17] Facemash attracted 450 visitors and 22,000 photo-views in its first four hours.[18] The site was sent to several campus group listservs, but was shut down a few days later by Harvard administration. Zuckerberg faced expulsion and was charged with breaching security, violating copyrights and violating individual privacy. Ultimately, the charges were dropped.[17] Zuckerberg expanded on this project that semester by creating a social study tool ahead of an art history final exam. He uploaded all art images to a website, each of which was accompanied by a comments section, then shared the site with his classmates.[19]
A "face book" is a student directory featuring photos and personal information.[18] In 2003, Harvard had only a paper version[20] along with private online directories.[17][21] Zuckerberg told the Harvard Crimson, "Everyone's been talking a lot about a universal face book within Harvard. ... I think it's kind of silly that it would take the University a couple of years to get around to it. I can do it better than they can, and I can do it in a week."[21] In January 2004, Zuckerberg coded a new website, known as "TheFacebook", inspired by a Crimson editorial about Facemash, stating, "It is clear that the technology needed to create a centralized Website is readily available ... the benefits are many." Zuckerberg met with Harvard student Eduardo Saverin, and each of them agreed to invest $1,000 in the site.[22] On February 4, 2004, Zuckerberg launched "TheFacebook", originally located at thefacebook.com.[23]
Six days after the site launched, Harvard seniors Cameron Winklevoss, Tyler Winklevoss, and Divya Narendra accused Zuckerberg of intentionally misleading them into believing that he would help them build a social network called HarvardConnection.com. They claimed that he was instead using their ideas to build a competing product.[24] The three complained to the Crimson and the newspaper began an investigation. They later sued Zuckerberg, settling in 2008[25] for 1.2 million shares (worth $300 million at Facebook's IPO).[26]
Membership was initially restricted to students of Harvard College. Within a month, more than half the undergraduates had registered.[27] Dustin Moskovitz, Andrew McCollum, and Chris Hughes joined Zuckerberg to help manage the growth of the website.[28] In March 2004, Facebook expanded to Columbia, Stanford and Yale.[29] It then became available to all Ivy League colleges, Boston University, New York University, MIT, and successively most universities in the United States and Canada.[30][31]
In mid-2004, Napster co-founder and entrepreneur Sean Parker—an informal advisor to Zuckerberg—became company president.[32] In June 2004, the company moved to Palo Alto, California.[33] It received its first investment later that month from PayPal co-founder Peter Thiel.[34] In 2005, the company dropped "the" from its name after purchasing the domain name Facebook.com for US$200,000.[35] The domain had belonged to AboutFace Corporation.
In May 2005, Accel Partners invested $12.7 million in Facebook, and Jim Breyer[36] added $1 million of his own money. A high-school version of the site launched in September 2005.[37] Eligibility expanded to include employees of several companies, including Apple Inc. and Microsoft.[38]
In May 2006, Facebook hired its first intern, Julie Zhuo.[39] After a month, Zhuo was hired as a full-time engineer.[39] On September 26, 2006, Facebook opened to everyone at least 13 years old with a valid email address.[40][41][42] By late 2007, Facebook had 100,000 pages on which companies promoted themselves.[43] Organization pages began rolling out in May 2009.[44] On October 24, 2007, Microsoft announced that it had purchased a 1.6% share of Facebook for $240 million, giving Facebook a total implied value of around $15 billion. Microsoft's purchase included rights to place international advertisements.[45][46]
In May 2007, at the first f8 developers conference, Facebook announced the launch of the Facebook Developer Platform, providing a framework for software developers to create applications that interact with core Facebook features. By the second annual f8 developers conference on July 23, 2008, the number of applications on the platform had grown to 33,000, and the number of registered developers had exceeded 400,000.[47]
In October 2008, Facebook announced that its international headquarters would be located in Dublin, Ireland.[48] In September 2009, Facebook said that it had achieved positive cash flow for the first time.[49] A January 2009 Compete.com study ranked Facebook the most used social networking service by worldwide monthly active users.[50]
The company announced 500 million users in July 2010.[51] Half of the site's membership used Facebook daily, for an average of 34 minutes, while 150 million users accessed the site from mobile devices. A company representative called the milestone a "quiet revolution."[52] In November 2010, based on SecondMarket Inc. (an exchange for privately held companies' shares), Facebook's value was $41 billion. The company had slightly surpassed eBay to become the third largest American web company after Google and Amazon.com.[53][54]
On November 15, 2010, Facebook announced it had acquired the domain name fb.com from the American Farm Bureau Federation for an undisclosed amount. On January 11, 2011, the Farm Bureau disclosed $8.5 million in "domain sales income", making the acquisition of FB.com one of the ten highest domain sales in history.[55]
In February 2011, Facebook announced plans to move its headquarters to the former Sun Microsystems campus in Menlo Park, California.[56][57] In March 2011, it was reported that Facebook was removing about 20,000 profiles daily for violations such as spam, graphic content and underage use, as part of its efforts to boost cyber security.[58] Statistics showed that Facebook reached one trillion page views in the month of June 2011, making it the most visited website tracked by DoubleClick.[59][60] According to a Nielsen study, Facebook had in 2011 become the second-most accessed website in the U.S. behind Google.[61][62]
China blocked Facebook in 2009.[63]
In March 2012, Facebook announced App Center, a store selling applications that operate via the website. The store was to be available on iPhones, Android devices, and for mobile web users.[64]
Facebook's initial public offering came on May 17, 2012, at a share price of US$38. The company was valued at $104 billion, the largest valuation to that date.[65][66][67] The IPO raised $16 billion, the third-largest in U.S. history, after Visa Inc. in 2008 and AT&T Wireless in 2000.[68][69] Based on its 2012 income of $5 billion, Facebook joined the Fortune 500 list for the first time in May 2013, ranked 462.[70] The shares set a first day record for trading volume of an IPO (460 million shares).[71] The IPO was controversial given the immediate price declines that followed,[72][73][74][75] and was the subject of lawsuits,[76] while SEC and FINRA both launched investigations.[77]
Zuckerberg announced at the start of October 2012 that Facebook had one billion monthly active users,[78] including 600 million mobile users, 219 billion photo uploads and 140 billion friend connections.[79]
On January 15, 2013, Facebook announced Facebook Graph Search, which provides users with a "precise answer", rather than a link to an answer by leveraging data present on its site.[80] Facebook emphasized that the feature would be "privacy-aware", returning results only from content already shared with the user.[81] On April 3, 2013, Facebook unveiled Facebook Home, a user-interface layer for Android devices offering greater integration with the site. HTC announced HTC First, a phone with Home pre-loaded.[82]
On April 15, 2013, Facebook announced an alliance across 19 states with the National Association of Attorneys General, to provide teenagers and parents with information on tools to manage social networking profiles.[83] On April 19 Facebook modified its logo to remove the faint blue line at the bottom of the "F" icon. The letter F moved closer to the edge of the box.[84]
Following a campaign by 100 advocacy groups, Facebook agreed to update its policy on hate speech. The campaign highlighted content promoting domestic violence and sexual violence against women, and led 15 advertisers to withdraw, including Nissan UK, House of Burlesque and Nationwide UK. The company initially stated, "while it may be vulgar and offensive, distasteful content on its own does not violate our policies".[85] It took action on May 29.[86]
On June 12, Facebook announced that it was introducing clickable hashtags to help users follow trending discussions, or search what others are talking about on a topic.[87] San Mateo County, California, became the top wage-earning county in the country after the fourth quarter of 2012 because of Facebook. The Bureau of Labor Statistics reported that the average salary was 107% higher than the previous year, at $168,000 a year, more than 50% higher than the next-highest county, New York County (better known as Manhattan), at roughly $110,000 a year.[88]
Facebook joined the Alliance for Affordable Internet (A4AI) in October, as the coalition launched. The A4AI is a coalition of public and private organizations that includes Google, Intel and Microsoft. Led by Sir Tim Berners-Lee, the A4AI seeks to make Internet access more affordable to ease access in the developing world.[89]
The company celebrated its 10th anniversary during the week of February 3, 2014.[90] In January 2014, over one billion users connected via a mobile device.[91] As of June, mobile accounted for 62% of advertising revenue, an increase of 21% from the previous year.[92] By September Facebook's market capitalization had exceeded $200 billion.[93][94][95]
Zuckerberg participated in a Q&A session at Tsinghua University in Beijing, China, on October 23, where he attempted to converse in Mandarin. Zuckerberg hosted visiting Chinese politician Lu Wei, known as the "Internet czar" for his influence in China's online policy, on December 8.[citation needed]
In 2015, Facebook's algorithm was revised in an attempt to filter out false or misleading content, such as fake news stories and hoaxes. It relied on users flagging such stories. Facebook maintained that satirical content should not be intercepted.[96] The algorithm was accused of maintaining a "filter bubble", in which material the user disagrees with[97] and posts with few likes would be deprioritized.[98] In November, Facebook extended paternity leave from 4 weeks to 4 months.[99]
On April 12, 2016, Zuckerberg outlined his 10-year vision, which rested on three main pillars: artificial intelligence, increased global connectivity, and virtual and augmented reality.[100] In July, a US$1 billion suit was filed against the company alleging that it permitted Hamas to use it to perform assaults that cost the lives of four people.[101] Facebook released its blueprints of Surround 360 camera on GitHub under an open-source license.[102] In September, it won an Emmy for its animated short "Henry".[103] In October, Facebook announced a fee-based communications tool called Workplace that aims to "connect everyone" at work. Users can create profiles, see updates from co-workers on their news feed, stream live videos and participate in secure group chats.[104]
Following the 2016 presidential election, Facebook announced that it would combat fake news by using fact-checkers from sites like FactCheck.org and Associated Press (AP), making reporting hoaxes easier through crowdsourcing, and disrupting financial incentives for abusers.[105]
On January 17, 2017, Facebook COO Sheryl Sandberg announced plans to open Station F, a startup incubator campus in Paris, France.[107] On a six-month cycle, Facebook committed to work with ten to 15 data-driven startups there.[108] On April 18, Facebook announced the beta launch of Facebook Spaces at its annual F8 developer conference.[109] Facebook Spaces is a virtual reality version of Facebook for Oculus VR goggles. In a virtual and shared space, users can access a curated selection of 360-degree photos and videos using their avatar, with the support of the controller. Users can access their own photos and videos, along with media shared on their newsfeed.[110] In September, Facebook announced it would spend up to US$1 billion on original shows for its Facebook Watch platform.[111] On October 16, it acquired the anonymous compliment app tbh, announcing its intention to leave the app independent.[112][113][114][115]
In May 2018 at F8, the company announced it would offer its own dating service; shares in competitor Match Group fell by 22%.[116] Facebook Dating includes privacy features, and friends are unable to view each other's dating profiles.[117] In July, Facebook was fined £500,000 by UK watchdogs for failing to respond to data erasure requests.[118] On July 18, Facebook established a subsidiary named Lianshu Science & Technology in Hangzhou City, China, with $30 million of capital, all of its shares held by Facebook Hong Kong.[119] Approval of the registration of the subsidiary was then withdrawn, due to a disagreement between officials in Zhejiang province and the Cyberspace Administration of China.[120] On July 26, Facebook became the first company to lose over $100 billion worth of market capitalization in one day, dropping from nearly $630 billion to $510 billion after disappointing sales reports.[121][122] On July 31, Facebook said that the company had deleted 17 accounts related to the 2018 U.S. midterm elections. On September 19, Facebook announced that, for news distribution outside the United States, it would work with U.S.-funded democracy promotion organizations, the International Republican Institute and the National Democratic Institute, which are loosely affiliated with the Republican and Democratic parties.[123] Through the Digital Forensic Research Lab, Facebook partners with the Atlantic Council, a NATO-affiliated think tank.[123] In November, Facebook launched smart displays branded Portal and Portal Plus (Portal+), which support Amazon's Alexa intelligent personal assistant and include a video chat function with Facebook Messenger.[124][125]
In January 2019, the "10 Year Challenge" started,[126] asking users to post a photograph of themselves from 10 years earlier (2009) alongside a more recent photo.[127]
Criticized for its role in vaccine hesitancy, Facebook announced in March 2019 that it would provide users with "authoritative information" on the topic of vaccines.[128]
A study in the journal Vaccine[129] of advertisements posted in the three months prior found that 54% of the anti-vaccine advertisements on Facebook were placed by just two organisations funded by well-known anti-vaccination activists: the Children's Health Defense / World Mercury Project, chaired by Robert F. Kennedy Jr., and Stop Mandatory Vaccination, run by campaigner Larry Cook.[130] The ads often linked to commercial products, such as natural remedies and books.
On March 14, the Huffington Post reported that Facebook's PR agency had paid someone to tweak Facebook COO Sheryl Sandberg's Wikipedia page, as well as adding a page for the global head of PR, Caryn Marooney.[131]
In March 2019, the perpetrator of the Christchurch mosque shootings in New Zealand used Facebook to stream live footage of the attack as it unfolded. Facebook took 29 minutes to detect the livestreamed video, which was eight minutes longer than it took police to arrest the gunman. About 1.3 million copies of the video were blocked from Facebook, but 300,000 copies were published and shared. Facebook promised changes to its platform; spokesman Simon Dilner told Radio New Zealand that it could have done a better job. Several companies, including the ANZ and ASB banks, stopped advertising on Facebook after the company was widely condemned by the public.[132] Following the attack, Facebook began blocking white nationalist, white supremacist, and white separatist content, saying that they could not be meaningfully separated. Previously, Facebook had only blocked overtly supremacist content. The older policy had been condemned by civil rights groups, who described these movements as functionally indistinct.[133][134] Further bans were made in mid-April 2019, banning several British far-right organizations and associated individuals from Facebook, and also banning praise or support for them.[135][136]
NTJ's member Moulavi Zahran Hashim, a radical Islamist imam believed to be the mastermind behind the 2019 Sri Lanka Easter bombings, preached on a pro-ISIL Facebook account, known as "Al-Ghuraba" media.[137][138]
On May 2, 2019 at F8, the company announced its new vision with the tagline "the future is private".[139] A redesign of the website and mobile app was introduced, dubbed as "FB5".[140] The event also featured plans for improving groups,[141] a dating platform,[142] end-to-end encryption on its platforms,[143] and allowing users on Messenger to communicate directly with WhatsApp and Instagram users.[144][145]
On July 31, 2019, Facebook announced a partnership with University of California, San Francisco to build a non-invasive, wearable device that lets people type by simply imagining themselves talking.[146]
On September 5, 2019, Facebook launched Facebook Dating in the United States. This new application allows users to integrate their Instagram posts in their dating profile.[147]
Facebook News, which features selected stories from news organizations, was launched on October 25.[148] Facebook's decision to include far-right website Breitbart News as a "trusted source" was negatively received.[149][150]
On November 17, 2019, the banking data of 29,000 Facebook employees was stolen from a payroll worker's car. The data was stored on unencrypted hard drives and included bank account numbers, employee names, the last four digits of their social security numbers, salaries, bonuses, and equity details. The company did not realize the hard drives were missing until November 20. Facebook confirmed that the drives contained employee information on November 29. Employees were not notified of the break-in until December 13, 2019.[151]
On March 10, 2020, Facebook appointed two new directors, Tracey Travis and Nancy Killefer, to its board of directors.[152]
In June 2020, several major companies including Adidas, Aviva, Coca-Cola, Ford, HP, Intercontinental Hotels Group, Mars, Starbucks, Target, and Unilever, announced they would pause adverts on Facebook for July in support of the Stop Hate for Profit campaign which claimed the company was not doing enough to remove hateful content.[153] The BBC noted that this was unlikely to affect the company as most of Facebook's advertising revenue comes from small- to medium-sized businesses.[154]
The website's primary color is blue because Zuckerberg is red–green colorblind, something he learned from a test taken around 2007.[155][156] Facebook is built in PHP, compiled with HipHop for PHP, a "source code transformer" built by Facebook engineers that turns PHP into C++.[157] The deployment of HipHop reportedly reduced average CPU consumption on Facebook servers by 50%.[158]
Facebook is developed as one monolithic application. According to an interview in 2012 with Facebook build engineer Chuck Rossi, Facebook compiles into a 1.5 GB binary blob which is then distributed to the servers using a custom BitTorrent-based release system. Rossi stated that it takes about 15 minutes to build and 15 minutes to release to the servers. The build and release process has zero downtime. Changes to Facebook are rolled out daily.[158]
Facebook used a combination platform based on HBase to store data across distributed machines. Using a tailing architecture, events are stored in log files, and the logs are tailed. The system rolls these events up and writes them to storage. The user interface then pulls the data out and displays it to users. User requests are handled as AJAX calls, which are written to a log file using Scribe (developed by Facebook).[159]
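The tailing pattern described above can be sketched briefly. The following Python code is illustrative only, assuming a newline-delimited event log with tab-separated fields; the function names are hypothetical and do not correspond to Scribe's or Facebook's actual code.

    import time

    def tail(path):
        # Follow a log file as new events are appended, like `tail -f`.
        with open(path) as f:
            f.seek(0, 2)  # start at the current end of the file
            while True:
                line = f.readline()
                if not line:
                    time.sleep(0.1)  # wait for the writer to append more events
                    continue
                yield line.rstrip("\n")

    def roll_up(lines):
        # Aggregate a finite batch of raw events ("user_id<TAB>action<TAB>...")
        # into counts before they are written to storage.
        counts = {}
        for line in lines:
            user, action = line.split("\t")[:2]
            counts[(user, action)] = counts.get((user, action), 0) + 1
        return counts

In this pattern the front end only ever appends to the log, and all aggregation happens downstream in the tailer, which is what keeps writes cheap at high volume.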
Data is read from these log files using Ptail, an internally built tool to aggregate data from multiple Scribe stores. It tails the log files and pulls data out. Ptail data are separated into three streams and sent to clusters in different data centers (plugin impressions, News Feed impressions, and actions (plugin + News Feed)). Puma is used to manage periods of high data flow (input/output, or IO). Data is processed in batches to reduce the number of reads and writes needed during high-demand periods (a hot article generates many impressions and News Feed impressions, causing large data skews). Batches are taken every 1.5 seconds, limited by the memory used when creating a hash table.[159]
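As a rough illustration of the batching step just described (not Puma's actual implementation), the Python sketch below accumulates impression counts in an in-memory hash table and flushes it on a 1.5-second cadence; write_batch is a hypothetical stand-in for the storage write.

    import time
    from collections import Counter

    BATCH_SECONDS = 1.5  # batch window, as described above

    def write_batch(counts):
        # Hypothetical stand-in for one bulk write to the backing store.
        print(f"flushing {len(counts)} distinct keys")

    def batch_impressions(event_stream):
        counts = Counter()  # the in-memory hash table that bounds each batch
        deadline = time.monotonic() + BATCH_SECONDS
        for article_id in event_stream:
            counts[article_id] += 1  # a hot article collapses into one counter
            if time.monotonic() >= deadline:
                write_batch(counts)  # one write covers many raw impressions
                counts.clear()
                deadline = time.monotonic() + BATCH_SECONDS

Batching this way turns the many small writes generated by a hot article into one aggregated write per window, at the cost of up to 1.5 seconds of added latency.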
Data is then output in PHP format. The backend is written in Java. Thrift is used as the messaging format so PHP programs can query Java services. Caching solutions display pages more quickly. The data is then sent to MapReduce servers where it is queried via Hive. This serves as a backup as the data can be recovered from Hive.[159]
Facebook uses a CDN or 'edge network' under the domain fbcdn.net for serving static data.[160][161] Until the mid-2010s, Facebook also relied on Akamai as its CDN service provider.[162][163][164]
On March 20, 2014, Facebook announced a new open-source programming language called Hack. Before public release, a large portion of Facebook was already running and "battle tested" using the new language.[165]
On July 20, 2008, Facebook introduced "Facebook Beta", a significant redesign of its user interface on selected networks. The Mini-Feed and Wall were consolidated, profiles were separated into tabbed sections, and an effort was made to create a cleaner look.[166] Facebook began migrating users to the new version in September 2008.[167]
Each registered user on Facebook has a personal profile that shows their posts and content.[168] The format of individual user pages was revamped in September 2011 and became known as "Timeline", a chronological feed of a user's stories,[169][170] including status updates, photos, interactions with apps and events.[171] The layout let users add a "cover photo".[171] Users were given more privacy settings.[171] In 2007, Facebook launched Facebook Pages for brands and celebrities to interact with their fanbase.[172][173] 100,000 Pages launched in November.[174] In June 2009, Facebook introduced a "Usernames" feature, allowing users to choose a unique nickname used in the URL for their personal profile, for easier sharing.[175][176]
In February 2014, Facebook expanded the gender setting, adding a custom input field that allows users to choose from a wide range of gender identities. Users can also set which set of gender-specific pronouns should be used in reference to them throughout the site.[177][178][179] In May 2014, Facebook introduced a feature to allow users to ask for information not disclosed by other users on their profiles. If a user does not provide key information, such as location, hometown, or relationship status, other users can use a new "ask" button to send a message asking about that item to the user in a single click.[180][181]
News Feed appears on every user's homepage and highlights information including profile changes, upcoming events and friends' birthdays.[182] This enabled spammers and other users to manipulate these features by creating illegitimate events or posting fake birthdays to attract attention to their profile or cause.[183] Initially, the News Feed caused dissatisfaction among Facebook users; some complained it was too cluttered and full of undesired information, others were concerned that it made it too easy for others to track individual activities (such as relationship status changes, events, and conversations with other users).[184] Zuckerberg apologized for the site's failure to include appropriate privacy features. Users then gained control over what types of information are shared automatically with friends. Users are now able to prevent user-set categories of friends from seeing updates about certain types of activities, including profile changes, Wall posts and newly added friends.[185]
On February 23, 2010, Facebook was granted a patent[186] on certain aspects of its News Feed. The patent covers News Feeds in which links are provided so that one user can participate in the activity of another user.[187] The sorting and display of stories in a user's News Feed is governed by the EdgeRank algorithm.[188]
The Photos application allows users to upload albums and photos.[189] Each album can contain 200 photos.[190] Privacy settings apply to individual albums. Users can "tag", or label, friends in a photo. The friend receives a notification about the tag with a link to the photo.[191] This photo tagging feature was developed by Aaron Sittig, now a Design Strategy Lead at Facebook, and former Facebook engineer Scott Marlette back in 2006 and was only granted a patent in 2011.[192][193]
On June 7, 2012, Facebook launched its App Center to help users find games and other applications.[194]
On May 13, 2015, Facebook in association with major news portals launched "Instant Articles" to provide news on the Facebook news feed without leaving the site.[195][196]
In January 2017, Facebook launched Facebook Stories for iOS and Android in Ireland. The feature, following the format of Snapchat and Instagram stories, allows users to upload photos and videos that appear above friends' and followers' News Feeds and disappear after 24 hours.[197]
On October 11, 2017, Facebook introduced the 3D Posts feature to allow for uploading interactive 3D assets.[198] On January 11, 2018, Facebook announced that it would change News Feed to prioritize friends/family content and de-emphasize content from media companies.[199]
The "like" button, stylized as a "thumbs up" icon, was first enabled on February 9, 2009,[200] and enables users to easily interact with status updates, comments, photos and videos, links shared by friends, and advertisements. Once clicked by a user, the designated content is more likely to appear in friends' News Feeds.[201][202] The button displays the number of other users who have liked the content.[203] The like button was extended to comments in June 2010.[204] In February 2016, Facebook expanded Like into "Reactions", choosing among five pre-defined emotions, including "Love", "Haha", "Wow", "Sad", or "Angry".[205][206][207][208] In late April 2020, during the coronavirus pandemic, a new "Care" reaction was added.[209]
Facebook Messenger is an instant messaging service and software application. It began as Facebook Chat in 2008,[210] was revamped in 2010[211] and eventually became a standalone mobile app in August 2011, while remaining part of the user page on browsers.[212]
Complementing regular conversations, Messenger lets users make one-to-one[213] and group[214] voice[215] and video calls.[216] Its Android app has integrated support for SMS[217] and "Chat Heads", which are round profile photo icons appearing on-screen regardless of what app is open,[218] while both apps support multiple accounts,[219] conversations with optional end-to-end encryption[220] and "Instant Games".[221] Some features, including sending money[222] and requesting transportation,[223] are limited to the United States.[222] In 2017, Facebook added "Messenger Day", a feature that lets users share photos and videos in a story-format with all their friends with the content disappearing after 24 hours;[224] Reactions, which lets users tap and hold a message to add a reaction through an emoji;[225] and Mentions, which lets users in group conversations type @ to give a particular user a notification.[225]
Businesses and users can interact through Messenger with features such as tracking purchases and receiving notifications, and interacting with customer service representatives. Third-party developers can integrate apps into Messenger, letting users enter an app while inside Messenger and optionally share details from the app into a chat.[226] Developers can build chatbots into Messenger, for uses such as news publishers building bots to distribute news.[227] The M virtual assistant (U.S.) scans chats for keywords and suggests relevant actions, such as its payments system for users mentioning money.[228][229] Group chatbots appear in Messenger as "Chat Extensions". A "Discovery" tab allows finding bots, and enabling special, branded QR codes that, when scanned, take the user to a specific bot.[230]
Users can "Follow" content posted by other users without needing to friend them.[231] Accounts can be "verified", confirming a user's identity.[232]
Facebook enables users to control access to individual posts and their profile[234] through privacy settings.[235] The user's name and profile picture (if applicable) are public. Facebook's revenue depends on targeted advertising, which involves analyzing user data (from the site and the broader internet) to inform the targeting. These facilities have changed repeatedly since the service's debut, amid a series of controversies covering everything from how well it secures user data, to what extent it allows users to control access, to the kinds of access given to third parties, including businesses, political campaigns and governments. These facilities vary according to country, as some nations require the company to make data available (and limit access to services), while the European Union's GDPR regulation mandates additional privacy protections.[236]
On July 29, 2011, Facebook announced its Bug Bounty Program that paid security researchers a minimum of $500 for reporting security holes. The company promised not to pursue "white hat" hackers who identified such problems.[237][238] This led researchers in many countries to participate, particularly in India and Russia.[239]
Facebook's user base grew rapidly from the moment the site became available and continued growing through 2018, before beginning to decline.
Facebook passed 100 million registered users in 2008,[240] and 500 million in July 2010.[51] According to the company's data at the July 2010 announcement, half of the site's membership used Facebook daily, for an average of 34 minutes, while 150 million users accessed the site by mobile.[52]
In October 2012 Facebook's monthly active users passed one billion,[78][241] with 600 million mobile users, 219 billion photo uploads, and 140 billion friend connections.[79] The 2 billion user mark was crossed in June 2017.[242][243]
In November 2015, after skepticism about the accuracy of its "monthly active users" measurement, Facebook changed its definition to a logged-in member who visits the Facebook site through the web browser or mobile app, or uses the Facebook Messenger app, in the 30-day period prior to the measurement. This excluded the use of third-party services with Facebook integration, which was previously counted.[244]
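The revised definition amounts to a simple computation over login records. The Python sketch below is one illustrative reading of that definition, not Facebook's actual methodology; the channel labels are assumptions.

    from datetime import datetime, timedelta

    # Channels that count under the revised definition; third-party
    # integrations are deliberately excluded (labels assumed for illustration).
    FIRST_PARTY = {"web", "mobile_app", "messenger"}

    def monthly_active_users(last_visits, as_of):
        # `last_visits` maps user id -> (datetime of last logged-in visit, channel).
        cutoff = as_of - timedelta(days=30)
        return sum(1 for visited_at, channel in last_visits.values()
                   if visited_at >= cutoff and channel in FIRST_PARTY)

    # One qualifying user; the third-party integration no longer counts.
    visits = {"alice": (datetime(2015, 10, 20), "messenger"),
              "bob": (datetime(2015, 10, 25), "third_party_app")}
    assert monthly_active_users(visits, datetime(2015, 11, 1)) == 1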
From 2017 to 2019, the percentage of the U.S. population over the age of 12 who use Facebook declined from 67% to 61%, a drop of some 15 million U.S. users, with a sharper decline among younger Americans: the percentage of U.S. 12- to 34-year-olds who are users fell from 58% in 2015 to 29% in 2019.[245][246] The decline coincided with an increase in the popularity of Instagram, which is also owned by Facebook Inc.[245][246]
Historically, commentators have offered predictions of Facebook's decline or end, based on causes such as a declining user base;[247] the legal difficulties of being a closed platform, inability to generate revenue, inability to offer user privacy, inability to adapt to mobile platforms, or Facebook ending itself to present a next-generation replacement;[248] or Facebook's role in Russian interference in the 2016 United States elections.[249]
Population pyramid of Facebook users by age, as of 2010[250]
As of October 2018, the countries with the highest numbers of Facebook users are India and the United States, followed by Indonesia, Brazil and Mexico.[251] By region, the highest number of users are in Asia-Pacific (947 million), followed by Europe (381 million) and US-Canada (242 million). The rest of the world has 750 million users.[252]
Over the 2008-2018 period, the percentage of users under 34 declined to less than half of the total.[236]
The website has won awards such as placement into the "Top 100 Classic Websites" by PC Magazine in 2007,[253] and winning the "People's Voice Award" from the Webby Awards in 2008.[254]
In 2010, Facebook won the Crunchie "Best Overall Startup Or Product" award[255] for the third year in a row.[256]
Facebook's site and mobile apps have been blocked, temporarily or permanently, in many countries, including China,[257] Iran,[258] Syria,[259] and North Korea. In May 2018, the government of Papua New Guinea announced that it would ban Facebook for a month while it considered the impact of the website on the country, though no ban has since occurred.[260] In 2019, Facebook announced that influencers are no longer able to promote any vape, tobacco products, or weapons on its platforms.[261]
Facebook's importance and scale has led to criticisms in many domains. Issues include Internet privacy, excessive retention of user information,[262] its facial recognition software,[263][264] its addictive quality[265] and its role in the workplace, including employer access to employee accounts.[266]
Facebook is alleged to have psychological effects, including feelings of jealousy[267][268] and stress,[269][270] a lack of attention[271] and social media addiction.[272][273]
European antitrust regulator Margrethe Vestager stated that Facebook's terms of service relating to private data were "unbalanced".[274]
Facebook has been criticized for electricity usage,[275] tax avoidance,[276] real-name user requirement policies,[277] censorship[278][279] and its involvement in the United States PRISM surveillance program.[280]
Facebook has been criticized for allowing users to publish illegal and/or offensive material. Specifics include copyright and intellectual property infringement,[281] hate speech,[282][283] incitement of rape[284] and terrorism,[285][286] fake news,[287][288][289] and crimes, murders, and livestreaming violent incidents.[290][291][292]
According to The Express Tribune, Facebook "avoided billions of dollars in tax using offshore companies".[293]
Sri Lanka blocked both Facebook and WhatsApp in May 2019, as a temporary measure to maintain peace, after anti-Muslim riots that were the worst in the country since the Easter Sunday bombings earlier that year.[294][295]
Facebook removed 3 billion fake accounts during just the last quarter of 2018 and the first quarter of 2019.[296] This is a strikingly high number, given that the social network reports only 2.39 billion monthly active users.[296]
In late July 2019, the company announced it was under antitrust investigation by the Federal Trade Commission.[297]
Facebook has faced a steady stream of controversies over how it handles user privacy, repeatedly adjusting its privacy settings and policies.[298]
In 2010, the US National Security Agency began taking publicly posted profile information from Facebook, among other social media services.[299]
On November 29, 2011, Facebook settled Federal Trade Commission charges that it deceived consumers by failing to keep privacy promises.[300] In August 2013, High-Tech Bridge published a study showing that links included in Facebook messaging service messages were being accessed by Facebook.[301] In January 2014, two users filed a lawsuit against Facebook alleging that their privacy had been violated by this practice.[302]
On June 7, 2018, Facebook announced that a bug had resulted in about 14 million Facebook users having their default sharing setting for all new posts set to "public".[303]
On April 4, 2019, half a billion records of Facebook users were found exposed on Amazon cloud servers, containing information about users’ friends, likes, groups, and checked-in locations, as well as names, passwords and email addresses.[304]
The phone numbers of at least 200 million Facebook users were found to be exposed on an open online database in September 2019. They included 133 million US users, 18 million from the UK, and 50 million from users in Vietnam. After removing duplicates, the 419 million records were reduced to 219 million. The database went offline after TechCrunch contacted the web host. It is thought the records were amassed using a tool that Facebook disabled in April 2018 after the Cambridge Analytica controversy. A Facebook spokeswoman said in a statement: "The dataset is old and appears to have information obtained before we made changes last year...There is no evidence that Facebook accounts were compromised."[305]
Facebook's privacy problems resulted in companies such as Viber Media and Mozilla discontinuing advertising on Facebook's platforms.
A "shadow profile" refers to the data Facebook collects about individuals without their explicit permission. For example, the "like" button that appears on third-party websites allows the company to collect information about an individual's internet browsing habits, even if the individual is not a Facebook user.[306][307] Data can also be collected by other users. For example, a Facebook user can link their email account to their Facebook to find friends on the site, allowing the company to collect the email addresses of users and non-users alike.[308] Over time, countless data points about an individual are collected; any single data point perhaps cannot identify an individual, but together allows the company to form a unique "profile."
This practice has been criticized by those who believe people should be able to opt-out of involuntary data collection. Additionally, while Facebook users have the ability to download and inspect the data they provide to the site, data from the user's "shadow profile" is not included, and non-users of Facebook do not have access to this tool regardless. The company has also been unclear whether or not it is possible for a person to revoke Facebook's access to their "shadow profile."[306]
Facebook customer Global Science Research sold information on over 87 million Facebook users to Cambridge Analytica, a political data analysis firm.[309] While approximately 270,000 people used the app, Facebook's API permitted data collection from their friends without their knowledge.[310] At first Facebook downplayed the significance of the breach, and suggested that Cambridge Analytica no longer had access. Facebook then issued a statement expressing alarm and suspended Cambridge Analytica. Review of documents and interviews with former Facebook employees suggested that Cambridge Analytica still possessed the data.[311] This was a violation of Facebook's consent decree with the Federal Trade Commission. This violation potentially carried a penalty of $40,000 per occurrence, totaling trillions of dollars.[312]
According to The Guardian, both Facebook and Cambridge Analytica threatened to sue the newspaper if it published the story. After publication, Facebook claimed that it had been "lied to". On March 23, 2018, the English High Court granted an application by the Information Commissioner's Office for a warrant to search Cambridge Analytica's London offices, ending a standoff between Facebook and the Information Commissioner over responsibility.[313]
On March 25, Facebook published a statement by Zuckerberg in major UK and US newspapers apologizing over a "breach of trust".[314]
You may have heard about a quiz app built by a university researcher that leaked Facebook data of millions of people in 2014. This was a breach of trust, and I'm sorry we didn't do more at the time. We're now taking steps to make sure this doesn't happen again.
We've already stopped apps like this from getting so much information. Now we're limiting the data apps get when you sign in using Facebook.
We're also investigating every single app that had access to large amounts of data before we fixed this. We expect there are others. And when we find them, we will ban them and tell everyone affected.
Finally, we'll remind you which apps you've given access to your information – so you can shut off the ones you don't want anymore.
Thank you for believing in this community. I promise to do better for you.
On March 26, the Federal Trade Commission opened an investigation into the matter.[315] The controversy led Facebook to end its partnerships with data brokers who aid advertisers in targeting users.[298]
On April 24, 2019, Facebook said it could face a fine of between $3 billion and $5 billion as the result of an investigation by the Federal Trade Commission. The agency has been investigating Facebook for possible privacy violations, but has not announced any findings yet.[316]
Facebook also implemented additional privacy controls and settings[317] in part to comply with the European Union's General Data Protection Regulation (GDPR), which took effect in May 2018.[318] Facebook also ended its active opposition to the California Consumer Privacy Act.[319]
Some, such as Meghan McCain, have drawn an equivalence between the use of data by Cambridge Analytica and Barack Obama's 2012 campaign, which, according to Investor's Business Daily, "encouraged supporters to download an Obama 2012 Facebook app that, when activated, let the campaign collect Facebook data both on users and their friends."[320][321][322] Carol Davidsen, the Obama for America (OFA) former director of integration and media analytics, wrote that "Facebook was surprised we were able to suck out the whole social graph, but they didn't stop us once they realised that was what we were doing."[321][322] PolitiFact has rated McCain's statements "Half-True", on the basis that "in Obama's case, direct users knew they were handing over their data to a political campaign" whereas with Cambridge Analytica, users thought they were only taking a personality quiz for academic purposes, and while the Obama campaign only used the data "to have their supporters contact their most persuadable friends", Cambridge Analytica "targeted users, friends and lookalikes directly with digital ads."[323]
On September 28, 2018, Facebook experienced a major breach in its security, exposing the data of 50 million users. The data breach started in July 2017 and was discovered on September 16.[324] Facebook notified users affected by the exploit and logged them out of their accounts.[325][326]
In March 2019, Facebook confirmed a password compromise affecting millions of Facebook Lite users; in April, the company stated that the issue had also affected millions of Instagram users. The cause was that passwords had been stored as plain text rather than encrypted, leaving them readable by its employees.[327]
On December 19, 2019, security researcher Bob Diachenko discovered a database containing more than 267 million Facebook user IDs, phone numbers, and names that were left exposed on the web for anyone to access without a password or any other authentication.[328]
In February 2020, Facebook encountered a major security breach in which its official Twitter account was hacked by a Saudi Arabia-based group called "OurMine". The group has a history of actively exposing high-profile social media profiles’ vulnerabilities.[329]
After acquiring Onavo in 2013, Facebook used its Onavo Protect virtual private network (VPN) app to collect information on users' web traffic and app usage. This allowed Facebook to monitor its competitors' performance, and motivated Facebook to acquire WhatsApp in 2014.[330][331][332] Media outlets classified Onavo Protect as spyware.[333][334][335] In August 2018, Facebook removed the app in response to pressure from Apple, who asserted that it violated their guidelines.[336][337]
In 2016, Facebook Research launched Project Atlas, offering some users between the ages of 13 and 35 up to $20 per month in exchange for their personal data, including their app usage, web browsing history, web search history, location history, personal messages, photos, videos, emails and Amazon order history.[338][339] In January 2019, TechCrunch reported on the project. This led Apple to temporarily revoke Facebook's Enterprise Developer Program certificates for one day, preventing Facebook Research from operating on iOS devices and disabling Facebook's internal iOS apps.[339][340][341]
Ars Technica reported in April 2018 that the Facebook Android app had been harvesting user data, including phone calls and text messages, since 2015.[342][343][344] In May 2018, several Android users filed a class action lawsuit against Facebook for invading their privacy.[345][346]
In January 2020, Facebook launched the Off-Facebook Activity page, which allows users to see information collected by Facebook about their non-Facebook activities.[347] Washington Post columnist Geoffrey A. Fowler found that this included what other apps he used on his phone, even while the Facebook app was closed, what other web sites he visited on his phone, and what in-store purchases he made from affiliated businesses, even while his phone was completely off.[348]
The company first apologized for its privacy abuses in 2009.[349]
Facebook apologies have appeared in newspapers, television, blog posts and on Facebook.[350] On March 25, 2018, leading US and UK newspapers published full-page ads with a personal apology from Zuckerberg. Zuckerberg issued a verbal apology on CNN.[351] In May 2010, he apologized for discrepancies in privacy settings.[350]
Previously, Facebook had its privacy settings spread out over 20 pages; it has since put all of its privacy settings on one page, which makes it more difficult for third-party apps to access the user's personal information.[298] In addition to publicly apologizing, Facebook has said that it will be reviewing and auditing thousands of apps that display "suspicious activities" in an effort to ensure that this breach of privacy does not happen again.[352] A 2010 research report on privacy stated that little information is available regarding the consequences of what people disclose online, so what is available is often limited to reports in the popular media.[353] In 2017, a former Facebook executive went on the record to discuss how social media platforms have contributed to the unraveling of the "fabric of society".[354]
Facebook relies on its users to generate the content that bonds its users to the service. The company has come under criticism both for allowing objectionable content, including conspiracy theories and fringe discourse,[355] and for prohibiting other content that it deems inappropriate.
It has been criticised as a vector for 'fake news', and has been accused of bearing responsibility for the conspiracy theory that the United States created ISIS,[356] false anti-Rohingya posts being used by Myanmar's military to fuel genocide and ethnic cleansing,[357][358] enabling Sandy Hook Elementary School shooting conspiracy theorists,[359] and anti-refugee attacks in Germany.[360][361][362] The government of the Philippines has also used Facebook as a tool to attack its critics.[363]
In 2017, Facebook partnered with fact checkers from the Poynter Institute's International Fact-Checking Network to identify and mark false content, though most ads from political candidates are exempt from this program.[364][365] Critics of the program accuse Facebook of not doing enough to remove false information from its website.[366]
Professor Ilya Somin reported that he had been the subject of death threats on Facebook in April 2018 from Cesar Sayoc, who threatened to kill Somin and his family and "feed the bodies to Florida alligators". Somin's Facebook friends reported the comments to Facebook, which did nothing except dispatch automated messages.[367] Sayoc was later arrested for the October 2018 United States mail bombing attempts directed at Democratic politicians.
Facebook has repeatedly amended its content policies. In July 2018, it stated that it would "downrank" articles that its fact-checkers determined to be false, and remove misinformation that incited violence.[368] Zuckerberg once stated that it was unclear whether Holocaust deniers on Facebook intended to deceive others,[369] for which he later apologized.[370] Facebook stated that content that receives "false" ratings from its fact-checkers can be demonetized and suffer dramatically reduced distribution. Specific posts and videos that violate community standards can be removed on Facebook.[369]
In May 2019, Facebook banned a number of "dangerous" commentators from its platform, including Alex Jones, Louis Farrakhan, Milo Yiannopoulos, Paul Joseph Watson, Paul Nehlen, David Duke, and Laura Loomer, for allegedly engaging in "violence and hate".[371][372]
In May 2020, Facebook agreed to a preliminary settlement of $52 million to compensate U.S.-based Facebook content moderators for their psychological trauma suffered on the job.[373][374] Other legal actions around the world, including in Ireland, await settlement.[375]
Facebook was criticized for allowing InfoWars to publish falsehoods and conspiracy theories.[369][370][376][377][378] Facebook defended its actions with regard to InfoWars, saying "we just don't think banning Pages for sharing conspiracy theories or false news is the right way to go."[376] Facebook provided only six cases in which it fact-checked content on the InfoWars page over the period September 2017 to July 2018.[369] In 2018 InfoWars falsely claimed that the survivors of the Parkland shooting were "actors". Facebook pledged to remove InfoWars content making the claim, although InfoWars videos pushing the false claims were left up, even though Facebook had been contacted about the videos.[369] Facebook stated that the videos never explicitly called the survivors actors.[369] Facebook also allowed InfoWars videos that shared the Pizzagate conspiracy theory to survive, despite specific assertions that it would purge Pizzagate content.[369] In late July 2018 Facebook suspended the personal profile of InfoWars head Alex Jones for 30 days.[379] In early August 2018, Facebook banned the four most active InfoWars-related pages for hate speech.[380]
In 2018, Facebook stated that it had identified "coordinated inauthentic behavior" that year in "many Pages, Groups and accounts created to stir up political debate, including in the US, the Middle East, Russia and the UK."[381]
Campaigns operated by the British intelligence agency unit called the Joint Threat Research Intelligence Group have broadly fallen into two categories: cyber attacks and propaganda efforts. The propaganda efforts utilize "mass messaging" and the "pushing [of] stories" via social media sites like Facebook.[382][383] Israel's Jewish Internet Defense Force, China's 50 Cent Party and Turkey's AK Trolls also focus their attention on social media platforms like Facebook.[384][385][386][387]
In July 2018, Samantha Bradshaw, co-author of the report from the Oxford Internet Institute (OII) at Oxford University, said that "The number of countries where formally organised social media manipulation occurs has greatly increased, from 28 to 48 countries globally. The majority of growth comes from political parties who spread disinformation and junk news around election periods."[388]
In October 2018, The Daily Telegraph reported that Facebook "banned hundreds of pages and accounts that it says were fraudulently flooding its site with partisan political content – although they came from the United States instead of being associated with Russia."[389]
In December 2018, The Washington Post reported that "Facebook has suspended the account of Jonathon Morgan, the chief executive of a top social media research firm" New Knowledge, "after reports that he and others engaged in an operation to spread disinformation" on Facebook and Twitter during the 2017 United States Senate special election in Alabama.[390][391]
In January 2019, Facebook said it had removed 783 Iran-linked accounts, pages and groups for engaging in what it called "coordinated inauthentic behaviour".[392]
In May 2019, Tel Aviv-based private intelligence agency Archimedes Group was banned from Facebook for “coordinated inauthentic behavior” after Facebook found fake users in countries in sub-Saharan Africa, Latin America and Southeast Asia.[393] Facebook investigations revealed that Archimedes had spent some $1.1 million on fake ads, paid for in Brazilian reais, Israeli shekels and US dollars.[394] Facebook gave examples of Archimedes Group political interference in Nigeria, Senegal, Togo, Angola, Niger and Tunisia.[395] The Atlantic Council's Digital Forensic Research Lab said in a report that "The tactics employed by Archimedes Group, a private company, closely resemble the types of information warfare tactics often used by governments, and the Kremlin in particular."[396][397]
On May 23, 2019, Facebook released its Community Standards Enforcement Report, highlighting that it had identified several fake accounts through artificial intelligence and human monitoring. In a six-month period, October 2018 to March 2019, the social media website removed a total of 3.39 billion fake accounts, more than the roughly 2.4 billion real people reported to be on the platform.[398]
In July 2019, Facebook advanced its measures to counter deceptive political propaganda and other abuse of its services. The company removed more than 1,800 accounts and pages that were being operated from Russia, Thailand, Ukraine and Honduras.[399]
On October 30, 2019, Facebook deleted several accounts of the employees working at the Israeli NSO Group, stating that the accounts were “deleted for not following our terms”. The deletions came after WhatsApp sued the Israeli surveillance firm for targeting 1,400 devices with spyware.[400]
In 2020, Facebook helped found American Edge, an anti-regulation lobbying firm, to fight antitrust probes.[401]
In 2018, Special Counsel Robert Mueller indicted 13 Russian nationals and three Russian organizations for "engaging in operations to interfere with U.S. political and electoral processes, including the 2016 presidential election."[402][403][404]
Mueller contacted Facebook after the company's disclosure that it had sold more than $100,000 worth of ads to a company (Internet Research Agency) with links to the Russian intelligence community before the 2016 United States presidential election.[405][406] In September 2017, Facebook's chief security officer Alex Stamos wrote that the company "found approximately $100,000 in ad spending from June 2015 to May 2017 — associated with roughly 3,000 ads — that was connected to about 470 inauthentic accounts and Pages in violation of our policies. Our analysis suggests these accounts and Pages were affiliated with one another and likely operated out of Russia."[407] The Clinton and Trump campaigns together spent $81 million on Facebook ads.[408]
The company pledged full cooperation in Mueller's investigation, and provided all information about the Russian advertisements.[409] Members of the House and Senate Intelligence Committees have claimed that Facebook had withheld information that could illuminate the Russian propaganda campaign.[410] Russian operatives have used Facebook to organize Black Lives Matter rallies[411][412] and anti-immigrant rallies on U.S. soil,[413] as well as anti-Clinton rallies[414] and rallies both for and against Donald Trump.[415][416] Facebook ads have also been used to exploit divisions over black political activism and Muslims by simultaneously sending contrary messages to different users based on their political and demographic characteristics in order to sow discord.[417][418][419] Zuckerberg has stated that he regrets having dismissed concerns over Russian interference in the 2016 U.S. presidential election.[420]
Russian-American billionaire Yuri Milner, who befriended Zuckerberg between 2009 and 2011,[421] had Kremlin backing for his investments in Facebook and Twitter.[422]
In January 2019, Facebook removed 289 Pages and 75 coordinated accounts linked to the Russian state-owned news agency Sputnik which had misrepresented themselves as independent news or general interest Pages.[423][424] Facebook later identified and removed an additional 1,907 accounts linked to Russia found to be engaging in "coordinated inauthentic behaviour".[425] In 2018, a UK DCMS select committee report had criticised Facebook for its reluctance to investigate abuse of its platform by the Russian government, and for downplaying the extent of the problem.[426][427]
In February 2019, Glenn Greenwald wrote that a cybersecurity company New Knowledge, which is behind one of the Senate reports on Russian social media election interference, "was caught just six weeks ago engaging in a massive scam to create fictitious Russian troll accounts on Facebook and Twitter in order to claim that the Kremlin was working to defeat Democratic Senate nominee Doug Jones in Alabama. The New York Times, when exposing the scam, quoted a New Knowledge report that boasted of its fabrications..."[428][429]
In 2018, Facebook took down 536 Facebook Pages, 17 Facebook Groups, 175 Facebook accounts and 16 Instagram accounts linked to the Myanmar military. Collectively these were followed by over 10 million people.[430] The New York Times reported that:[431]
after months of reports about anti-Rohingya propaganda on Facebook, the company acknowledged that it had been too slow to act in Myanmar. By then, more than 700,000 Rohingya had fled the country in a year, in what United Nations officials called “a textbook example of ethnic cleansing.”
In 2019, a book titled The Real Face of Facebook in India,[432] co-authored by the journalists Paranjoy Guha Thakurta and Cyril Sam, alleged that Facebook was both directly complicit in, and benefited from, the rise of Modi's BJP in India.
Early Facebook investor and former Zuckerberg mentor Roger McNamee described Facebook as having "the most centralized decision-making structure I have ever encountered in a large company."[433] Nathan Schneider, a professor of media studies at the University of Colorado Boulder argued for transforming Facebook into a platform cooperative owned and governed by the users.[434]
Facebook co-founder Chris Hughes has stated that CEO Mark Zuckerberg has too much power, that the company is now a monopoly, and that, as a result, it should be split into multiple smaller companies. In an op-ed in The New York Times, Hughes called for the breakup of Facebook, saying he was concerned that Zuckerberg had surrounded himself with a team that does not challenge him, and that it is therefore the U.S. government's job to hold him accountable and curb his "unchecked power".[435] Hughes also said that "Mark's power is unprecedented and un-American."[436] Several U.S. politicians agree with Hughes.[437] EU Commissioner for Competition Margrethe Vestager has stated that splitting Facebook should only be done as "a remedy of the very last resort", and that it would not solve Facebook's underlying problems.[438]
The company has been subject to repeated litigation.[439][440][441][442] Its most prominent case addressed allegations that Zuckerberg broke an oral contract with Cameron Winklevoss, Tyler Winklevoss, and Divya Narendra to build the then-named "HarvardConnection" social network in 2004.[443][444][445]
On March 6, 2018, BlackBerry sued Facebook and its Instagram and WhatsApp subsidiaries, alleging that they copied key features of its messaging app.[446]
In 2019, British solicitors representing a bullied Syrian schoolboy sued Facebook over false claims about him. They claimed that Facebook protected prominent figures from scrutiny instead of removing content that violated its rules, and that the special treatment was financially driven.[447][448]
In October 2018, a Texan woman sued Facebook, claiming she had been recruited into the sex trade at the age of 15 by a man who "friended" her on the social media network. Facebook responded that it works both internally and externally to ban sex traffickers.[449][450]
In October 2017, Facebook expanded its work with Definers Public Affairs, a PR firm that had originally been hired to monitor press coverage of the company to address concerns primarily regarding Russian meddling, then mishandling of user data by Cambridge Analytica, hate speech on Facebook, and calls for regulation.[451] Company spokesman Tim Miller stated that a goal for tech firms should be to "have positive content pushed out about your company and negative content that's being pushed out about your competitor". Definers claimed that George Soros was the force behind what appeared to be a broad anti-Facebook movement, and created other negative media, along with America Rising, that was picked up by larger media organisations like Breitbart.[451][452] Facebook cut ties with the agency in late 2018, following public outcry over their association.[453]
On August 13, 2019, it was revealed that Facebook had enlisted hundreds of contractors to create and obtain transcripts of the audio messages of users.[454][455][456] This was especially common of Facebook Messenger, where the contractors frequently listened to and transcribed voice messages of users.[456] After this was first reported on by Bloomberg News, Facebook released a statement confirming the report to be true,[455] but also stated that the monitoring program was now suspended.[455]
A commentator in The Washington Post noted that Facebook constitutes a "massive depository of information that documents both our reactions to events and our evolving customs with a scope and immediacy of which earlier historians could only dream".[457] Especially for anthropologists, social researchers, and social historians—and subject to proper preservation and curation—the website "will preserve images of our lives that are vastly crisper and more nuanced than any ancestry record in existence".[457]
Economists have noted that Facebook offers many non-rivalrous services that benefit as many users as are interested without forcing users to compete with each other. By contrast, most goods are available to a limited number of users: for example, if one user buys a phone, no other user can buy that phone. Three areas add the most economic impact: platform competition, the marketplace, and user behavior data.[458]
Facebook began to reduce its carbon impact after Greenpeace attacked it for its long-term reliance on coal and resulting carbon footprint.[459]
Facebook provides a development platform for many social gaming, communication, feedback, review, and other applications related to online activities. This platform spawned many businesses and added thousands of jobs to the global economy. Zynga Inc., a leader in social gaming, is an example of such a business. An econometric analysis found that Facebook's app development platform added more than 182,000 jobs in the U.S. economy in 2011. The total economic value of the added employment was about $12 billion.[460]
Facebook was one of the first large-scale social networks. In The Facebook Effect, David Kirkpatrick stated that Facebook's structure makes it difficult to replace, because of its "network effects". As of 2016, it is estimated that 44 percent of the US population gets news through Facebook.[461]
A 2020 experimental study in the American Economic Review found that deactivating Facebook led to increased subjective well-being.[462]
Studies have associated social networks with positive[463] and negative impacts[464][465][466][467][468] on emotional health. Studies have associated Facebook with feelings of envy, often triggered by vacation and holiday photos. Other triggers include posts by friends about family happiness and images of physical beauty—such feelings leave people dissatisfied with their own lives. A joint study by two German universities discovered that one out of three people were more dissatisfied with their lives after visiting Facebook,[469][470] and another study by Utah Valley University found that college students felt worse about themselves following an increase in time on Facebook.[470][471][472]
Professor Larry D. Rosen stated that teenagers on Facebook exhibit more narcissistic tendencies, while young adults show signs of antisocial behavior, mania and aggressiveness. Positive effects included signs of "virtual empathy" towards online friends and helping introverted persons learn social skills.[473]
In a blog post in December 2017, the company highlighted research that has shown "passively consuming" the News Feed, as in reading but not interacting, left users with negative feelings afterwards, whereas interacting with messages pointed to improvements in well-being.[474]
In February 2008, a Facebook group called "One Million Voices Against FARC" organized an event in which hundreds of thousands of Colombians marched in protest against the Revolutionary Armed Forces of Colombia (FARC).[475] In August 2010, one of North Korea's official government websites and the country's official news agency, Uriminzokkiri, joined Facebook.[476]
During the Arab Spring many journalists claimed that Facebook played a major role in the 2011 Egyptian revolution.[477][478] On January 14, the Facebook page of "We are all Khaled Said" was started by Wael Ghoniem to invite the Egyptian people to "peaceful demonstrations" on January 25. According to Mashable, in Tunisia and Egypt, Facebook became the primary tool for connecting protesters, leading the Egyptian government to ban Facebook, Twitter and other websites on January 26,[479] and then to ban all mobile and Internet connections for all of Egypt on January 28. After 18 days, the uprising forced President Hosni Mubarak to resign.
In a Bahraini uprising that started on February 14, 2011, Facebook was utilized by the Bahraini regime and regime loyalists to identify, capture and prosecute citizens involved in the protests. A 20-year-old woman named Ayat Al Qurmezi was identified as a protester using Facebook and imprisoned.[480]
In 2011, Facebook filed paperwork with the Federal Election Commission to form a political action committee under the name FB PAC.[481] In an email to The Hill, a spokesman for Facebook said "Facebook Political Action Committee will give our employees a way to make their voice heard in the political process by supporting candidates who share our goals of promoting the value of innovation to our economy while giving people the power to share and make the world more open and connected."[482]
During the Syrian civil war, the YPG, a libertarian army for Rojava, recruited Westerners through Facebook in its fight against ISIL.[483] Dozens joined its ranks. The Facebook page's name, "The Lions of Rojava", comes from a Kurdish saying which translates as "A lion is a lion, whether it's a female or a male", reflecting the organization's feminist ideology.[484]
In recent years, Facebook's News Feed algorithms have been identified as a cause of political polarization, for which it has been criticized.[485][486] It has likewise been accused of amplifying the reach of 'fake news' and extreme viewpoints, as when it may have enabled conditions which led to the 2015 Rohingya refugee crisis.[487][488]
Facebook first played a role in the American political process in January 2008, shortly before the New Hampshire primary. Facebook teamed up with ABC and Saint Anselm College to allow users to give live feedback about the "back to back" January 5 Republican and Democratic debates.[489][490][491] Facebook users took part in debate groups on specific topics, voter registration and message questions.[492]
Over a million people installed the Facebook application "US Politics on Facebook" in order to take part; the application measured responses to specific comments made by the debating candidates.[493] A poll by CBS News, UWIRE and The Chronicle of Higher Education claimed to illustrate how the "Facebook effect" had affected youthful voters, increasing voting rates, support of political candidates, and general involvement.[494]
The new social media, such as Facebook and Twitter, connected hundreds of millions of people. By 2008, politicians and interest groups were experimenting with systematic use of social media to spread their message.[495][496] By the 2016 election, political advertising to specific groups had become normalized. Facebook offered the most sophisticated targeting and analytics platform.[497] ProPublica noted that their system enabled advertisers to direct their pitches to almost 2,300 people who expressed interest in the topics of "Jew hater," "How to burn Jews," or, "History of 'why Jews ruin the world".[498]
The Cambridge Analytica data scandal offered another example of the perceived attempt to influence elections.[499][500] The Guardian claimed that Facebook knew about the security breach for two years, but did nothing to stop it until it became public.[501]
Ahead of the 2019 general elections in India, Facebook removed 103 pages, groups and accounts on its Facebook and Instagram platforms that originated from Pakistan. Facebook said its investigation found a Pakistani military link, along with a mix of real accounts of ISPR employees and a network of fake accounts created by them, which had been operating military fan pages and general interest pages while posting content about Indian politics and trying to conceal their identity.[502] For the same reasons, Facebook also removed 687 pages and accounts linked to the Indian National Congress because of coordinated inauthentic behavior on the platform.[503]
Facebook established Social Science One to encourage research with its data. Data from Facebook is used for a variety of scientific investigations. One study examined how Facebook users interact with socially shared news and showed that individuals’ choices played a stronger role in limiting exposure to cross-cutting content.[504] Another study found that most health science students acquired academic materials from others through Facebook.[505] Signals from Facebook are also used in quality assessment of scientific works.[506] Facebook data can be used to assess the quality of Wikipedia articles.[507]
Facebook and Zuckerberg have been the subject of music, books, film and television. The 2010 film The Social Network, directed by David Fincher and written by Aaron Sorkin, stars Jesse Eisenberg as Zuckerberg and went on to win three Academy Awards and four Golden Globes.
In 2008, Collins English Dictionary declared "Facebook" as its new Word of the Year.[508] In December 2009, the New Oxford American Dictionary declared its word of the year to be the verb "unfriend", defined as "To remove someone as a 'friend' on a social networking site such as Facebook".[509]
In August 2013, Facebook founded Internet.org in collaboration with six other technology companies to plan and help build affordable Internet access for underdeveloped and developing countries.[510] The service, called Free Basics, includes various low-bandwidth applications such as AccuWeather, BabyCenter, BBC News, ESPN, and Bing.[511][512] There was severe opposition to Internet.org in India, where the service, started in partnership with Reliance Communications in 2015, was banned a year later by the Telecom Regulatory Authority of India (TRAI). In 2018, Zuckerberg claimed that "Internet.org efforts have helped almost 100 million people get access to the internet who may not have had it otherwise."[511]
en/1933.html.txt
ADDED
@@ -0,0 +1,101 @@
1 |
+
|
2 |
+
|
3 |
+
In politics, humanitarian aid, and social science, hunger is a condition in which a person, for a sustained period, is unable to eat sufficient food to meet basic nutritional needs. In the field of hunger relief, then, the term hunger is used in a sense that goes beyond the common desire for food that all humans experience.
Throughout history, portions of the world's population have often suffered sustained periods of hunger. In many cases, hunger resulted from food supply disruptions caused by war, plagues, or adverse weather. In the decades following World War II, technological progress and enhanced political cooperation suggested it might be possible to substantially reduce the number of people suffering from hunger. While progress was uneven, by 2015 the threat of extreme hunger had subsided for much of the world's population. According to figures published by the FAO in 2019, however, the number of people suffering from chronic hunger had been increasing over the previous four years, both as a percentage of the world's population and in absolute terms, with about 821 million people afflicted by hunger in 2018.
While most of the world's hungry people continue to live in Asia, much of the increase in hunger since 2015 occurred in Africa and South America. The FAO's 2017 report discussed three principal reasons for the recent increase in hunger: climate, conflict, and economic slowdowns. The 2018 report focused on extreme weather as a primary driver of the increase, finding that rises were especially severe in countries whose agricultural systems were most sensitive to extreme variations in weather. The FAO's 2019 report found there was also a strong correlation between increases in hunger and countries that had suffered an economic slowdown.
Many thousands of organisations are engaged in the field of hunger relief, operating at local, national, regional or international levels. Some of these organisations are dedicated to hunger relief, while others work in a number of different fields. They range from multilateral institutions, to national governments, to small local initiatives such as independent soup kitchens. Many participate in umbrella networks that connect thousands of different hunger relief organisations. At the global level, much of the world's hunger relief effort is coordinated by the UN, and geared towards achieving the 2030 Sustainable Development Goal of "Zero hunger".
There is only one globally recognized approach for defining and measuring hunger that is generally used by those studying or working to relieve hunger as a social problem. This is the United Nations' FAO measurement, which the agency typically refers to as chronic undernourishment (or, in older publications, as 'food deprivation', 'chronic hunger', or just plain 'hunger').
Not all of the organizations in the hunger relief field use the FAO definition of hunger. Some use a broader definition that overlaps more fully with malnutrition. The alternative definitions do however tend to go beyond the commonly understood meaning of hunger as a painful or uncomfortable motivational condition; the desire for food is something that all humans frequently experience, even the most affluent, and is not in itself a social problem.[7][5][4][3]
Very low food security can be described as "food insecure with hunger." The change in description was made in 2006 at the recommendation of the Committee on National Statistics (National Research Council, 2006) in order to distinguish the physiological state of hunger from indicators of food availability.[8] A household is food insecure when the food intake of one or more of its members is reduced and their eating patterns are disrupted at times during the year because the household lacks money and other resources for food.[9] Food security statistics are measured using survey data, based on household responses to items about whether the household was able to obtain enough food to meet its needs.[10]
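To make the survey-based approach concrete, the following is a minimal sketch, in Python, of how such a measure can be computed: affirmative answers to a battery of questionnaire items are summed into a raw score, which is then mapped to a food security category. The items and cut-off scores here are illustrative placeholders, not the official survey instrument or its thresholds.

def classify_household(responses):
    # responses: one boolean per survey item, True meaning the household
    # affirmed a food-hardship statement (e.g. "we worried food would run
    # out before we got money to buy more"). Cut-points are illustrative.
    raw_score = sum(responses)
    if raw_score == 0:
        return "high food security"
    if raw_score <= 2:
        return "marginal food security"
    if raw_score <= 5:
        return "low food security"
    return "very low food security (food insecure with hunger)"

# A hypothetical household affirming 7 of 10 items:
print(classify_household([True, True, False, False, True,
                          True, False, True, True, True]))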
The physical sensation of hunger is related to contractions of the stomach muscles. These contractions—sometimes called hunger pangs once they become severe—are believed to be triggered by high concentrations of the ghrelin hormone. The hormones Peptide YY and Leptin can have an opposite effect on the appetite, causing the sensation of being full. Ghrelin can be released if blood sugar levels get low—a condition that can result from long periods without eating. Stomach contractions from hunger can be especially severe and painful in children and young adults.
Hunger pangs can be made worse by irregular meals. People who cannot afford to eat more than once a day sometimes refuse one-off additional meals, because if they do not then eat at around the same time on the following days, they may suffer especially severe hunger pangs.[11] Older people may feel less violent stomach contractions when they get hungry, but still suffer the secondary effects of low food intake: these include weakness, irritability and decreased concentration. Prolonged lack of adequate nutrition also causes increased susceptibility to disease and a reduced ability for the body to heal itself.[12][13]
The United Nations publishes an annual report on the state of food security and nutrition across the world. Led by the FAO, the 2019 report was jointly authored by four other UN agencies: the WFP, IFAD, WHO and UNICEF. The FAO's yearly report provides a statistical overview of the prevalence of hunger around the world, and is widely considered the main global reference for tracking hunger. No simple set of statistics can ever fully capture the multidimensional nature of hunger, however. First, the FAO's key metric for hunger, "undernourishment", is defined solely in terms of dietary energy availability, disregarding micronutrients such as vitamins or minerals. Second, the FAO uses the energy requirements for minimum activity levels as a benchmark; many people who would not count as hungry by the FAO's measure may still be eating too little to undertake hard manual labour, which might be the only sort of work available to them. Third, the FAO's statistics do not always reflect short-term undernourishment.[4][14]
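As a rough illustration of how an energy-availability metric of this kind can be computed, the sketch below models a population's habitual dietary energy consumption as a lognormal distribution and takes the share falling below a minimum dietary energy requirement (MDER) as the prevalence of undernourishment. The real FAO methodology is considerably more elaborate, and all figures here are hypothetical.

from math import log, sqrt, erf

def prevalence_of_undernourishment(mean_kcal, cv, mder_kcal):
    # Derive lognormal parameters from the population's mean intake and
    # coefficient of variation, then evaluate the CDF at the MDER.
    sigma2 = log(1.0 + cv * cv)
    mu = log(mean_kcal) - sigma2 / 2.0
    z = (log(mder_kcal) - mu) / sqrt(2.0 * sigma2)
    return 0.5 * (1.0 + erf(z))  # share of people consuming below the MDER

# Hypothetical country: mean intake 2450 kcal/day, CV 0.25, MDER 1800 kcal/day
print(prevalence_of_undernourishment(2450, 0.25, 1800))  # roughly 0.13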
An alternative measure of hunger across the world is the Global Hunger Index (GHI). Unlike the FAO's measure, the GHI defines hunger in a way that goes beyond raw calorie intake, to include, for example, the ingestion of micronutrients. The GHI is a multidimensional statistical tool used to describe the state of countries' hunger situations, measuring progress and failures in the global fight against hunger.[15] The GHI is updated once a year. Data from the 2015 report showed that hunger levels had dropped 27% since 2000; fifty-two countries remained at serious or alarming levels.[16] The 2019 GHI report expresses concern about the increase in hunger since 2015. In addition to the latest statistics on hunger and food security, the GHI also features a different special topic each year. The 2019 report includes an essay on hunger and climate change, with evidence suggesting that areas most vulnerable to climate change have suffered much of the recent increase in hunger.[17][18]
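Composite indices of this kind are typically computed by standardizing each component indicator against a fixed severity threshold and then taking a weighted sum. The sketch below follows one published description of the GHI's post-2015 methodology (undernourishment and child mortality weighted one third each, child wasting and stunting one sixth each); the thresholds and weights should be verified against the official documentation before reuse, and the inputs are hypothetical.

def ghi_score(undernourishment, child_wasting, child_stunting, child_mortality):
    # Inputs are percentages; the result is a 0-100 severity score.
    def standardize(value, threshold):
        return min(100.0, value / threshold * 100.0)
    return (standardize(undernourishment, 80.0) / 3.0
            + standardize(child_wasting, 30.0) / 6.0
            + standardize(child_stunting, 70.0) / 6.0
            + standardize(child_mortality, 35.0) / 3.0)

# Hypothetical country-level inputs:
print(round(ghi_score(13.0, 7.0, 28.0, 5.5), 1))  # about 21 ("serious" range)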
Throughout history, the need to aid those suffering from hunger has been commonly, though not universally,[19] recognized. The philosopher Simone Weil wrote that feeding the hungry when you have resources to do so is the most obvious of all human obligations. She says that as far back as Ancient Egypt, many believed that people had to show they had helped the hungry in order to justify themselves in the afterlife. Weil writes that social progress is commonly held to be, first of all, "...a transition to a state of human society in which people will not suffer from hunger."[20]
Social historian Karl Polanyi wrote that before markets became the world's dominant form of economic organization in the 19th century, most human societies would either starve together or not at all, because communities would invariably share their food.[21]
While some of the principles for avoiding famines had been laid out in the very first book of the Holy Bible,[22] they were not always understood. Historical hunger relief efforts were often largely left to religious organisations and individual kindness. Even up to early modern times, political leaders often reacted to famine with bewilderment and confusion. From the first age of globalization, which began in the 19th century, it became more common for the elite to consider problems like hunger in global terms. However, as early globalization largely coincided with the high peak of influence for classical liberalism, there was relatively little call for politicians to address world hunger.[23][24]
In the late nineteenth and early twentieth century, the view that politicians ought not to intervene against hunger was increasingly challenged by campaigning journalists. There were also more frequent calls for large scale intervention against world hunger from academics and politicians, such as U.S. President Woodrow Wilson. Funded both by the government and private donations, the U.S. was able to dispatch millions of tons of food aid to European countries during and in the years immediately after WWI, organised by agencies such as the American Relief Administration. Hunger as an academic and social topic came to further prominence in the U.S. thanks to mass media coverage of the issue as a domestic problem during the Great Depression.[25][26][27][28][1][29]
While there had been increasing attention to hunger relief from the late 19th century, Dr David Grigg has summarised that prior to the end of World War II, world hunger still received relatively little academic or political attention; whereas after 1945 there was an explosion of interest in the topic.[27]
After World War II, a new international politico-economic order came into being, later described as embedded liberalism. For at least the first decade after the war, the United States, then by far the period's most dominant national actor, was strongly supportive of efforts to tackle world hunger and to promote international development. It heavily funded the United Nations' development programmes, and later the efforts of other multilateral organizations like the International Monetary Fund (IMF) and the World Bank (WB).[27][1][30]
The newly established United Nations became a leading player in coordinating the global fight against hunger. The UN has three agencies that work to promote food security and agricultural development: the Food and Agriculture Organization (FAO), the World Food Programme (WFP) and the International Fund for Agricultural Development (IFAD). FAO is the world's agricultural knowledge agency, providing policy and technical assistance to developing countries to promote food security, nutrition and sustainable agricultural production, particularly in rural areas. WFP's key mission is to deliver food into the hands of the hungry poor. The agency steps in during emergencies and uses food to aid recovery after emergencies. Its longer-term approaches to hunger help with the transition from recovery to development. IFAD, with its knowledge of rural poverty and exclusive focus on poor rural people, designs and implements programmes to help those people access the assets, services and opportunities they need to overcome poverty.[27][1][30]
Following the successful post-WWII reconstruction of Germany and Japan, the IMF and WB began to turn their attention to the developing world. A great many civil society actors were also active in trying to combat hunger, especially after the late 1970s when global media began to bring the plight of starving people in places like Ethiopia to wider attention. Most significant of all, especially in the late 1960s and 70s, the Green Revolution helped improved agricultural technology propagate throughout the world.[27][1][30]
The United States began to change its approach to the problem of world hunger from about the mid-1950s. Influential members of the administration became less enthusiastic about methods they saw as promoting an over-reliance on the state, fearing these might assist the spread of communism. By the 1980s, the previous consensus in favour of moderate government intervention had been displaced across the western world. The IMF and World Bank in particular began to promote market-based solutions. In cases where countries became dependent on the IMF, it sometimes forced national governments to prioritize debt repayments and sharply cut public services. This sometimes had a negative effect on efforts to combat hunger.[31][32][33]
Organizations such as Food First raised the issue of food sovereignty and claimed that every country on earth (with the possible minor exceptions of some city-states) has sufficient agricultural capacity to feed its own people, but that the "free trade" economic order, which from the late 1970s to about 2008 had been associated with such institutions as the IMF and World Bank, had prevented this from happening. The World Bank itself claimed it was part of the solution to hunger, asserting that the best way for countries to break the cycle of poverty and hunger was to build export-led economies that provide the financial means to buy foodstuffs on the world market. However, in the early 21st century the World Bank and IMF became less dogmatic about promoting free market reforms. They increasingly returned to the view that government intervention does have a role to play, and that it can be advisable for governments to support food security with policies favourable to domestic agriculture, even for countries that do not have a comparative advantage in that area. As of 2012, the World Bank remains active in helping governments to intervene against hunger.[34][27][1][30][35]
Until at least the 1980s (and, to an extent, the 1990s) the dominant academic view concerning world hunger was that it was a problem of demand exceeding supply. Proposed solutions often focused on boosting food production, and sometimes on birth control. There were exceptions to this: as early as the 1940s, Lord Boyd-Orr, the first head of the UN's FAO, had perceived hunger as largely a problem of distribution, and drew up comprehensive plans to correct it. Few agreed with him at the time, however, and he resigned after failing to secure support for his plans from the US and Great Britain. In 1998, Amartya Sen won a Nobel Prize in part for demonstrating that hunger in modern times is not typically the product of a lack of food. Rather, hunger usually arises from food distribution problems, or from governmental policies in the developed and developing world. It has since been broadly accepted that world hunger results from issues with the distribution as well as the production of food.[31][32][33] Sen's 1981 essay Poverty and Famines: An Essay on Entitlement and Deprivation played a prominent part in forging the new consensus.[1][36]
In 2007 and 2008, rapidly increasing food prices caused a global food crisis. Food riots erupted in several dozen countries; in at least two cases, Haiti and Madagascar, this led to the toppling of governments. A second global food crisis unfolded due to the spike in food prices of late 2010 and early 2011. Fewer food riots occurred, due in part to the greater availability of food stockpiles for relief. However, several analysts argue the food crisis was one of the causes of the Arab Spring.[30][37][38]
In the early 21st century, the attention paid to the problem of hunger by the leaders of advanced nations such as those that form the G8 had somewhat subsided.[37] Prior to 2009, large-scale efforts to fight hunger were mainly undertaken by governments of the worst-affected countries, by civil society actors, and by multilateral and regional organizations. In 2009, Pope Benedict XVI published his third encyclical, Caritas in Veritate, which emphasised the importance of fighting against hunger. The encyclical was intentionally published immediately before the July 2009 G8 Summit to maximise its influence on that event. At the Summit, which took place at L'Aquila in central Italy, the L'Aquila Food Security Initiative was launched, with a total of US$22 billion committed to combat hunger.[39][40]
Food prices fell sharply in 2009 and early 2010, though analysts credit this much more to farmers increasing production in response to the 2008 spike in prices than to the fruits of enhanced government action. However, since the 2009 G8 summit, the fight against hunger has become a high-profile issue among the leaders of the world's major nations, and was a prominent part of the agenda for the 2012 G20 summit.[37][41][42]
In April 2012, the Food Assistance Convention was signed, the world's first legally binding international agreement on food aid. The May 2012 Copenhagen Consensus recommended that efforts to combat hunger and malnutrition should be the first priority for politicians and private sector philanthropists looking to maximize the effectiveness of aid spending. They put this ahead of other priorities, like the fight against malaria and AIDS.[43] Also in May 2012, U.S. President Barack Obama launched a "new alliance for food security and nutrition"—a broad partnership between private sector, governmental and civil society actors—that aimed to "...achieve sustained and inclusive agricultural growth and raise 50 million people out of poverty over the next 10 years."[31][41][44][45] The UK's prime minister David Cameron held a hunger summit on 12 August, the last day of the 2012 Summer Olympics.[41]
The fight against hunger has also been joined by an increased number of ordinary people. While people throughout the world had long contributed to efforts to alleviate hunger in the developing world, there has recently been a rapid increase in the numbers involved in tackling domestic hunger even within the economically advanced nations of the Global North. This happened much earlier in North America than in Europe. In the US, the Reagan administration scaled back welfare in the early 1980s, leading to a vast increase in charity-sector efforts to help Americans unable to buy enough to eat. According to a 1992 survey of 1,000 randomly selected US voters, 77% of Americans had contributed to efforts to feed the hungry, either by volunteering for various hunger relief agencies such as food banks and soup kitchens, or by donating cash or food.[46]
Europe, with its more generous welfare system, had little awareness of domestic hunger until the food price inflation that began in late 2006, and especially as austerity-imposed welfare cuts began to take effect in 2010. Various surveys reported that upwards of 10% of Europe's population had begun to suffer from food insecurity. Especially since 2011, there has been a substantial increase in grass roots efforts to help the hungry by means of food banks, within both the UK and continental Europe.[11][47][48][49][50]
By July 2012, the 2012 US drought had already caused a rapid increase in the price of grain and soy, with a knock-on effect on the price of meat. As well as affecting hungry people in the US, this caused prices to rise on the global markets; the US is the world's biggest exporter of food. This led to much talk of a possible third 21st-century global food crisis. The Financial Times reported that the BRICS might not be as badly affected as they were in the earlier crises of 2008 and 2011. However, smaller developing countries that must import a substantial portion of their food could be hard hit. The UN and G20 have begun contingency planning so as to be ready to intervene if a third global crisis breaks out.[34][38][51][52]
By August 2013 however, concerns had been allayed, with above average grain harvests expected from major exporters, including Brazil, Ukraine and the U.S.[53] 2014 also saw a good worldwide harvest, leading to speculation that grain prices could soon begin to fall.[54]
At an April 2013 summit held in Dublin concerning hunger, nutrition, climate justice, and the post-2015 MDG framework for global justice, Ireland's President Higgins said that only 10% of deaths from hunger are due to armed conflict and natural disasters, with ongoing hunger being both the "greatest ethical failure of the current global system" and the "greatest ethical challenge facing the global community."[55]
$4.15 billion of new commitments were made to tackle hunger at a June 2013 Hunger Summit held in London, hosted by the governments of Britain and Brazil, together with The Children's Investment Fund Foundation.[56][57]
Despite the hardship caused by the 2007–2009 financial crisis and the global increases in food prices that occurred around the same time, the UN's global statistics show it was followed by nearly year-on-year reductions in the number of people suffering from hunger around the world. By 2019, however, evidence had mounted that this progress had gone into reverse over the previous four years. The number suffering from hunger had risen in absolute terms, and even, very slightly, as a percentage of the world's population.[58][59][14]
In April and May 2020, concerns were expressed that the COVID-19 pandemic could result in a doubling of global hunger unless world leaders acted to prevent it. Agencies such as the WFP have warned that this could include the number of people facing acute hunger rising from 135 million to about 265 million by the end of 2020. Indications of extreme hunger have been seen in various cities, such as fatal stampedes when word spread that emergency food aid was being handed out. Letters calling for coordinated action to offset the effects of the COVID-19 pandemic have been written to the G20 and G7 by various actors, including NGOs, UN staff, corporations, academics and former national leaders.[60][61][62][6]
Many thousands of hunger relief organisations exist across the world. Some but not all are entirely dedicated to fighting hunger. They range from independent soup kitchens that serve only one locality, to global organisations. Organisations working at the global and regional level will often focus much of their efforts on helping hungry communities to better feed themselves, for example by sharing agricultural technology. With some exceptions, organisations that work just on the local level tend to focus more on providing food directly to hungry people. Many of the entities are connected by a web of national, regional and global alliances that help them share resources, knowledge, and coordinate efforts.[63]
The United Nations is central to global efforts to relieve hunger, most especially through the FAO, and also via other agencies: such as WFP, IFAD, WHO and UNICEF. After the Millennium Development Goals expired in 2015, the Sustainable Development Goals (SDGs) became key objectives to shape the world's response to development challenges such as hunger. In particular Goal 2: Zero Hunger sets globally agreed targets to end hunger, achieve food security and improved nutrition and promote sustainable agriculture.[64][5][6]
Aside from the UN agencies themselves, hundreds of other actors address the problem of hunger at the global level, often through participation in large umbrella organisations. These include national governments, religious groups, international charities and, in some cases, international corporations. Except perhaps in the case of dedicated charities, the priority these organisations assign to hunger relief may vary from year to year. In many cases the organisations partner with the UN agencies, though often they pursue independent goals. For example, as consensus began to form around the SDG goal of ending hunger by 2030, a number of organizations formed initiatives with the more ambitious target of achieving this outcome early, by 2025.
Goal 2 of the UN's Sustainable Development Goals states the following objective:
SUSTAINABLE DEVELOPMENT GOAL 2: End hunger, achieve food security and improved nutrition and promote sustainable agriculture[70]
Various targets and indicators are associated with this objective. The first target addresses hunger directly, and the second malnutrition. Other targets are partly instrumental to reducing hunger, such as increasing agricultural productivity and the incomes of small-scale food producers (2.3), ensuring sustainable food production systems and resilient agricultural practices (2.4), and maintaining the genetic diversity of seeds along with access to the "benefits arising from the utilization of genetic resources and associated traditional knowledge" (2.5). The remaining targets (2.A, 2.B and 2.C) are means of implementation to facilitate targets 2.1–2.5.[71]
A report by the International Food Policy Research Institute (IFPRI) of 2013 argued that the emphasis of the SDGs should be on eliminating hunger and under-nutrition, rather than on poverty, and that attempts should be made to do so by 2025 rather than 2030.[68] The argument is based on an analysis of experiences in China, Vietnam, Brazil, and Thailand and the fact that people suffering from severe hunger face extra impediments to improving their lives, whether it be by education or work. Three pathways to achieve this were identified: 1) agriculture-led; 2) social protection- and nutrition- intervention-led; or 3) a combination of both of these approaches.[68]
Many of the world's regional alliances are located in Africa, for example the Alliance for Food Sovereignty in Africa and the Alliance for a Green Revolution in Africa.[72][63]
The Food and Agriculture Organization of the UN has created a partnership that will act through the African Union's CAADP framework, aiming to end hunger in Africa by 2025. It includes various interventions, including support for improved food production, a strengthening of social protection, and the integration of the right to food into national legislation.[73]
Examples of hunger relief organisations that operate on the national level include The Trussell Trust in the U.K. and Feeding America in the U.S.[74]
A food bank (or foodbank) is a non-profit, charitable organization that aids in the distribution of food to those who have difficulty purchasing enough to avoid hunger. Food banks tend to run on different operating models depending on where they are located. In the U.S., Australia, and to some extent Canada, food banks tend to perform a warehouse-type function, storing and delivering food to front-line food organisations but not giving it directly to hungry people themselves. In much of Europe and elsewhere, food banks operate on the front-line model, handing out parcels of uncooked food directly to the hungry, typically giving them enough for several meals which they can eat in their homes. In the U.S. and Australia, establishments that hand out uncooked food to individual people are instead called food pantries, food shelves or food closets.[75]
In less developed countries, there are charity-run food banks that operate on a semi-commercial system that differs from both the more common "warehouse" and "front-line" models. In some rural LDCs such as Malawi, food is often relatively cheap and plentiful for the first few months after the harvest, but then becomes more and more expensive. Food banks in those areas can buy large amounts of food shortly after the harvest, and then, as food prices start to rise, sell it back to local people throughout the year at well below market prices. Such food banks sometimes also act as centres to provide smallholders and subsistence farmers with various forms of support.[76]
A soup kitchen, meal center, or food kitchen is a place where food is offered to the hungry for free or at a below market price. Frequently located in lower-income neighborhoods, they are often staffed by volunteer organizations, such as church or community groups. Soup kitchens sometimes obtain food from a food bank for free or at a low price, because they are considered a charity, which makes it easier for them to feed the many people who require their services.
Local establishments calling themselves "food banks" or "soup kitchens" are often run by Christian churches or, less frequently, by secular civil society groups. Other religions carry out similar hunger relief efforts, though sometimes with slightly different methods. For example, in the Sikh tradition of langar, food is served to the hungry directly from Sikh temples. There are exceptions to this: in the UK, Sikhs run some of the food banks as well as giving out food directly from their gurdwaras.[77][78]
In both developing and advanced countries, parents sometimes go without food so they can feed their children. Women, however, seem more likely to make this sacrifice than men. World Bank studies consistently find that about 60% of those who are hungry are female. The apparent explanation for this imbalance is that, compared to men, women more often forgo meals in order to feed their children. Older sources sometimes claim this phenomenon is unique to developing countries, due to greater sexual inequality. More recent findings suggested that mothers often miss meals in advanced economies too. For example, a 2012 study undertaken by Netmums in the UK found that one in five mothers sometimes misses out on food to save their children from hunger.[34][79][80]
In several periods and regions, gender has also been an important factor determining whether or not victims of hunger would make suitable examples for generating enthusiasm for hunger relief efforts. James Vernon, in his Hunger: A Modern History, wrote that in Britain before the 20th century, it was generally only women and children suffering from hunger who could arouse compassion. Men who failed to provide for themselves and their families were often regarded with contempt.[26]
This changed after World War I, when thousands of men who had proved their manliness in combat found themselves unable to secure employment. Similarly, being female could be advantageous for those wishing to advocate for hunger relief, with Vernon writing that being a woman helped Emily Hobhouse draw the plight of hungry people to wider attention during the Second Boer War.[26]
en/1934.html.txt
ADDED
@@ -0,0 +1,184 @@
The Nintendo Entertainment System (NES) is an 8-bit third-generation home video game console produced, released, and marketed by Nintendo. It is a remodelled export version of the company's Family Computer[a] (FC) platform in Japan, commonly known as the Famicom,[b] which was launched on July 15, 1983. The NES was launched in a test market of New York City on October 18, 1985; Los Angeles followed as a second test market in February 1986, then Chicago, San Francisco, and the rest of the top 12 American markets. A full launch across North America and in some European countries came in September 1986, followed by Australia and other European countries in 1987. Brazil saw only unlicensed clones until the official local release in 1993. The console's South Korean release was packaged as the Hyundai Comboy[c] and distributed by Hyundai Electronics (now SK Hynix).
As one of the best-selling gaming consoles of its time, the NES helped revitalize the US video game industry following the video game crash of 1983.[12] With the NES, Nintendo introduced a now-standard business model of licensing third-party developers, authorizing them to produce and distribute games for Nintendo's platform.[13] It had been preceded by Nintendo's first home video game console, the Color TV-Game, and was succeeded by the Super Nintendo Entertainment System (SNES).
Following a series of arcade game successes in the early 1980s, Nintendo made plans to create a cartridge-based console called the Family Computer, or Famicom, designed by Masayuki Uemura.[14][15] Original plans called for an advanced 16-bit system which would function as a full-fledged computer with a keyboard and floppy disk drive, but Nintendo president Hiroshi Yamauchi rejected this and instead decided to go for a cheaper, more conventional cartridge-based game console, as he believed that features such as keyboards and disks were intimidating to non-technophiles. A test model was constructed in October 1982 to verify the functionality of the hardware, after which work began on programming tools. Because 65xx CPUs had not been manufactured or sold in Japan up to that time, no cross-development software was available and it had to be produced from scratch. Early Famicom games were written on a system that ran on an NEC PC-8001 computer, and LEDs on a grid were used with a digitizer to design graphics, as no software design tools for this purpose existed at that time.[16]
The code name for the project was "GameCom", but Masayuki Uemura's wife proposed the name "Famicom", arguing that "In Japan, 'pasokon' is used to mean a personal computer, but it is neither a home or personal computer. Perhaps we could say it is a family computer." Meanwhile, Hiroshi Yamauchi decided that the console should use a red and white theme after seeing a billboard for DX Antenna (a Japanese antenna manufacturer) which used those colors.[16]
The creation of the Famicom was hugely influenced by the ColecoVision, Coleco's competitor to the Atari 2600 in the United States. Takao Sawano, chief manager of the project, brought a ColecoVision home to his family, who were impressed by the system's ability to produce smooth graphics,[17] which contrasted with the flicker and slowdown commonly seen in Atari 2600 games. Uemura, head of Famicom development, stated that the ColecoVision set the bar for the Famicom.[18]
Original plans called for the Famicom's cartridges to be the size of a cassette tape, but ultimately they ended up being twice as big. Careful design attention was paid to the cartridge connectors because loose and faulty connections often plagued arcade machines. As it necessitated 60 connection lines for the memory and expansion, Nintendo decided to produce its own connectors.[16]
The controllers are hard-wired to the console with no connectors, for cost reasons. The game pad controllers were more or less copied directly from the Game & Watch machines, although the Famicom design team originally wanted to use arcade-style joysticks, even dismantling some from American game consoles to see how they worked. There were concerns regarding the durability of the joystick design, and that children might step on joysticks left on the floor. Katsuya Nakawaka attached a Game & Watch D-pad to the Famicom prototype and found that it was easy to use and caused no discomfort. Ultimately, though, they installed a 15-pin expansion port on the front of the console so that an optional arcade-style joystick could be used.[16]
Gunpei Yokoi suggested adding an eject lever to the cartridge slot; it was not strictly necessary, but he believed children would enjoy pressing it, and Uemura adopted the idea. Uemura also added a microphone to the second controller, with the idea that players' voices could be played through the TV speaker.[19][16]
The console was released on July 15, 1983 as the Family Computer (or Famicom) for ¥14,800 (equivalent to ¥18,400 in 2019) alongside three ports of Nintendo's successful arcade games Donkey Kong, Donkey Kong Jr. and Popeye. The Famicom was slow to gather momentum; a bad chip set caused the initial release of the system to crash. Following a product recall and a reissue with a new motherboard, the Famicom's popularity soared, becoming the best-selling game console in Japan by the end of 1984.[20]:279, 285
Nintendo also had its sights set on the North American market, entering into negotiations with Atari to release the Famicom under Atari's name as the Nintendo Advanced Video Gaming System. The deal was set to be finalized and signed at the Summer Consumer Electronics Show in June 1983. However, Atari discovered at that show that its competitor Coleco was illegally demonstrating its Coleco Adam computer with Nintendo's Donkey Kong game. This violation of Atari's exclusive license with Nintendo to publish the game for its own computer systems delayed the implementation of Nintendo's game console marketing contract with Atari. Atari's CEO Ray Kassar was fired the next month, so the deal went nowhere, and Nintendo decided to market its system on its own.[20]:283–285[g]
Subsequent plans for the Nintendo Advanced Video System, a repackaged Famicom console for North America featuring a keyboard, cassette data recorder, wireless joystick controller, and a special BASIC cartridge, likewise never materialized.[20]:287 By the beginning of 1985, more than 2.5 million Famicom units had been sold in Japan, and Nintendo soon announced plans to release it in North America as the Advanced Video Entertainment System (AVS) that same year. The American video game press was skeptical that the console could have any success in the region, as the industry was still recovering from the video game crash of 1983. The March 1985 issue of Electronic Games magazine stated that "the videogame market in America has virtually disappeared" and that "this could be a miscalculation on Nintendo's part".[21]
At June 1985's Consumer Electronics Show (CES), Nintendo unveiled the American version of its Famicom, with a new case redesigned by Lance Barr and featuring a "zero insertion force" cartridge slot.[22] The change from a top-loader in the Famicom to a front-loader was to make the new console more like a video cassette recorder, which had grown in popularity by 1985, and differentiate the unit from past video game consoles.[23] Additionally, Uemura explained that Nintendo developers had feared that the console's electronics might face electrostatic hazards in dry American states such as Arizona and Texas, and a front-loading design would be safer if children handled the console carelessly.[24]
This would eventually be officially deployed as the Nintendo Entertainment System, or the colloquial "NES". Nintendo seeded these first systems to limited American test markets starting in New York City on October 18, 1985, and followed up with a full North American release in February 1986.[25] The nationwide release came in September 1986. Nintendo released 17 launch games: 10-Yard Fight, Baseball, Clu Clu Land, Duck Hunt, Excitebike, Golf, Gyromite, Hogan’s Alley, Ice Climber, Kung Fu, Pinball, Soccer, Stack-Up, Super Mario Bros., Tennis, Wild Gunman and Wrecking Crew.[26][h]
For expedient production, some varieties of these launch games contain Famicom chips with an adapter inside the cartridge so they play on North American consoles, which is why the title screens of Gyromite and Stack-Up have the Famicom titles "Robot Gyro" and "Robot Block", respectively.[27]
The system's launch represented not only a new product, but also a reframing of the severely damaged home video game market. The 1983 video game crash had occurred in large part due to a lack of consumer and retailer confidence in video games, which had been partially due to confusion and misrepresentation in video game marketing. Prior to the NES, the packaging of many video games presented bombastic artwork which exaggerated the graphics of the actual game. In terms of product identity, a single game such as Pac-Man would appear in many versions on many different game consoles and computers, with large variations in graphics, sound, and general quality between the versions. In stark contrast, Nintendo's marketing strategy aimed to regain consumer and retailer confidence by delivering a singular platform whose technology was not in need of exaggeration and whose qualities were clearly defined.
To differentiate Nintendo's new home platform from the perception of a troubled and shallow video game market still reeling from the 1983 crash, the company freshened its product nomenclature and established a strict product approval and licensing policy. The overall platform is referred to as an "Entertainment System" instead of a "video game system", is centered upon a machine called a "Control Deck" instead of a "console", and features software cartridges called "Game Paks" instead of "video games". This allowed Nintendo to gain more traction in selling the system in toy stores.[28][29] To deter production of games which had not been licensed by Nintendo, and to prevent copying, the 10NES lockout chip system acts as a lock-and-key coupling of each Game Pak and Control Deck. The packaging of the launch lineup of NES games bears pictures of close representations of actual onscreen graphics. To reduce consumer confusion, symbols on the games' packaging clearly indicate the genre of the game. A seal of quality is on all licensed game and accessory packaging. The initial seal stated, "This seal is your assurance that Nintendo has approved and guaranteed the quality of this product". This text was later changed to "Official Nintendo Seal of Quality".[30]
Unlike with the Famicom, Nintendo of America marketed the console primarily to children, instituting a strict policy of censoring profanity and sexual, religious, or political content. The most famous example is Lucasfilm's attempt to port the comedy-horror game Maniac Mansion to the NES, which Nintendo insisted be considerably watered down. Nintendo of America continued its censorship policy until 1994 with the advent of the Entertainment Software Rating Board system, coinciding with criticism stemming from the cuts made to the Super NES port of Mortal Kombat compared to the Sega Genesis version.
The optional Robotic Operating Buddy, or R.O.B., was part of a marketing plan to portray the NES's technology as being novel and sophisticated when compared to previous game consoles, and to portray its position as being within reach of the better established toy market. Though at first, the American public exhibited limited excitement for the console itself, peripherals such as the light gun and R.O.B. attracted extensive attention.[31]
In Europe and Oceania, the system was released across two separate marketing regions. The first consisted of mainland Europe (excluding Italy), where distribution was handled by a number of different companies, with Nintendo responsible for manufacturing; most of this region saw a 1987 release. The release in Scandinavia came in 1986, where it was distributed by Bergsala, and in the Netherlands it was distributed by Bandai BV.[32] In 1987, Mattel handled distribution for the second region, consisting of the United Kingdom, Ireland, Italy, Australia and New Zealand. Mattel also handled distribution for Nintendo in Canada, though this was unrelated to the aforementioned European and Australian releases.[33]
In Brazil, the console was released late, in 1993 by Playtronic, even after the SNES. By then the Brazilian market had been dominated by unlicensed NES clones, both locally made and smuggled from China and Taiwan.[34] One of the most successful local clones was the Phantom System, manufactured by Gradiente, which would end up licensing Nintendo products in the country for the following decade.[35] Sales of officially licensed products were low, owing to the cloning, the late official launch, and the high prices of Nintendo's licensed products.[36]
For its complete North American release, the Nintendo Entertainment System was progressively released over the ensuing years in several different bundles, beginning with the Deluxe Set, the Basic Set, the Action Set and the Power Set. The Deluxe Set, retailing at US$179.99 (equivalent to $462 in 2019),[4] included R.O.B., a light gun called the NES Zapper, two controllers, and two Game Paks: Gyromite, and Duck Hunt. The Basic Set, first released in 1987, retailed at US$89.99 with no game, and US$99.99 bundled with the Super Mario Bros. cartridge. The Action Set, retailing in November 1988 for US$149.99, came with the Control Deck, two game controllers, an NES Zapper, and a dual Game Pak containing both Super Mario Bros. and Duck Hunt.[6][20]:305 In 1989, the Power Set included the console, two game controllers, an NES Zapper, a Power Pad, and a triple Game Pak containing Super Mario Bros, Duck Hunt, and World Class Track Meet. In 1990, a Sports Set bundle was released, including the console, an NES Satellite infrared wireless multitap adapter, four game controllers, and a dual Game Pak containing Super Spike V'Ball and Nintendo World Cup.[37] Two more bundle packages were later released using the original model NES console. The Challenge Set of 1992 included the console, two controllers, and a Super Mario Bros. 3 Game Pak for a retail price of US$89.99. The Basic Set was repackaged for a retail US$89.99. It included only the console and two controllers, and no longer was bundled with a cartridge.[37] Instead, it contained a book called the Official Nintendo Player's Guide, which contained detailed information for every NES game made up to that point.
Finally, the console was redesigned for both the North American and Japanese markets as part of the final Nintendo-released bundle package. The package included the new style NES-101 console, and one redesigned "dogbone" game controller. Released in October 1993 in North America, this final bundle retailed for US$49.99 and remained in production until the discontinuation of the NES in 1995.[6]
On August 14, 1995, Nintendo discontinued the Nintendo Entertainment System in both North America and Europe.[38] In North America, replacements for the original front-loading NES were available for $25 in exchange for a broken system until at least December 1996, under Nintendo's Power Swap program. The Game Boy and Super Nintendo were also covered in this program, for $25 and $35 respectively.[39]
The Famicom was officially discontinued in September 2003. Nintendo offered repair service for the Famicom in Japan until 2007, when it was discontinued due to a shortage of available parts.[38]
Although the Japanese Famicom, North American and European NES versions included essentially the same hardware, there were certain key differences among the systems.
The original Japanese Famicom was predominantly white plastic, with dark red trim. It featured a top-loading cartridge slot, grooves on both sides of the deck in which the hardwired game controllers could be placed when not in use, and a 15-pin expansion port located on the unit's front panel for accessories.[40]
The original NES, meanwhile, featured a front-loading cartridge slot covered by a small, hinged door that can be opened to insert or remove a cartridge and closed at other times. It features a more subdued gray, black, and red color scheme. An expansion port was placed on the bottom of the unit, and the cartridge connector pinout was changed.
In the UK, Italy and Australia, which all share the PAL-A region, the "Mattel Version" of the NES was released, with the UK also receiving the "NES Version" and Italy later receiving the "Versione Italiana".[41] When the NES was first released in those countries, it was distributed by Mattel, and Nintendo decided to use a lockout chip specific to those countries, different from the chip used in other European countries.
In October 1993, Nintendo redesigned the NES to follow many of the same design cues as the newly introduced Super Nintendo Entertainment System and the Japanese Super Famicom. Like the SNES, the NES-101 model loads cartridges through a covered slot on top of the unit replacing the complicated mechanism of the earlier design. The console was only released in America and Australia.
In December 1993, the Famicom received a similar redesign. It also loads cartridges through a covered slot on the top of the unit and uses non-hardwired controllers. Because the HVC-101 uses composite video output instead of being RF-only like the HVC-001, Nintendo marketed the newer model as the AV Famicom.[d] Since the new controllers do not have microphones like the second controller on the original console, certain games, such as the Disk System version of The Legend of Zelda and Raid on Bungeling Bay, have tricks that cannot be replicated when played on an HVC-101 Famicom without a modified controller. In October 1987, Nintendo released a 3D stereoscopic headset called the Famicom 3D System (HVC-031) in Japan only.
Nintendo's design styling for the US release was made deliberately different from that of other game consoles. Nintendo wanted to distinguish its product from those of competitors and to avoid the generally poor reputation that game consoles had acquired following the North American video game crash of 1983. One result of this philosophy was to disguise the cartridge slot design as a front-loading zero insertion force (ZIF) cartridge socket, designed to resemble the front-loading mechanism of a VCR. The newly designed connector works quite well when both the connector and the cartridges are clean and the pins on the connector are new. Unfortunately, the ZIF connector is not truly zero insertion force. When a user inserts the cartridge into the NES, the force of pressing the cartridge down and into place bends the contact pins slightly, as well as pressing the cartridge's ROM board back into the cartridge itself. Frequent insertion and removal of cartridges causes the pins to wear out over years of repeated usage, and the ZIF design proved more prone to interference by dirt and dust than an industry-standard card edge connector.[42] These design issues were exacerbated by Nintendo's choice of materials; the nickel connector springs in the console slot wear with use, and the game cartridges' brass-plated nickel connectors are also prone to tarnishing and oxidation. Nintendo sought to fix these issues by redesigning the next-generation Super Nintendo Entertainment System (SNES) as a top loader, similar to the Famicom, to ensure better results reading the game cartridges.[43] Many players tried to alleviate game problems caused by this corrosion by blowing into the cartridges and reinserting them, which actually speeds up the tarnishing due to moisture. One way to slow the tarnishing process and extend the life of the cartridges is to use isopropyl alcohol and swabs, as well as a non-conductive metal polish such as Brasso or Sheila Shine.[44][45]
The Famicom contains no lockout hardware and, as a result, unlicensed cartridges (both legitimate and bootleg) were extremely common throughout Japan and the Far East.[46] The original NES (but not the top-loading NES-101) contains the 10NES lockout chip, which significantly increased the challenges faced by unlicensed developers. Hobbyists in later years discovered that disassembling the NES and cutting the fourth pin of the lockout chip would change the chip's mode of operation from "lock" to "key", removing all effects and greatly improving the console's ability to play legal games, as well as bootlegs and converted imports. NES consoles sold in different regions had different lockout chips, so games marketed in one region would not work on consoles from another region. The known regions are: USA/Canada (3193 lockout chip); most of Europe, Korea, and countries where Nintendo of Europe was responsible for distribution (3195); Asia and Hong Kong (3196); and the UK, Italy, and Australia (3197). Because two types of lockout chip were released in Europe, European NES game boxes often have an "A" or "B" letter on the front, indicating whether the game is compatible with UK/Italian/Australian consoles (A) or the rest of Europe (B). Games for the rest of Europe typically have text on the box stating "This game is not compatible with the Mattel or NES versions of the Nintendo Entertainment System". Similarly, most UK/Italy/Australia games state "This game is only compatible with the Mattel or NES versions of the Nintendo Entertainment System".
Problems with the 10NES lockout chip frequently result in the console's most infamous problem: the blinking red power light, in which the system appears to turn itself on and off repeatedly because the 10NES would reset the console once per second. The lockout chip required constant communication with the chip in the game to work.[33]:247 Dirty, aging, and bent connectors often disrupt the communication, resulting in the blink effect.[42] Alternatively, the console would turn on but only show a solid white, gray, or green screen. Users may attempt to solve this problem by blowing air onto the cartridge connectors, inserting the cartridge just far enough to get the ZIF to lower, licking the edge connector, slapping the side of the system after inserting a cartridge, shifting the cartridge from side to side after insertion, pushing the ZIF up and down repeatedly, holding the ZIF down lower than it should have been, and cleaning the connectors with alcohol. Many of the most frequent attempts to fix this problem instead risk damaging the cartridge or system.[47] In 1989, Nintendo released an official NES Cleaning Kit to help users clean malfunctioning cartridges and consoles.
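As a purely conceptual illustration of the lock-and-key arrangement described above, the sketch below has a console-side "lock" and a cartridge-side "key" run the same stream-generating routine and compare outputs bit by bit, holding the console in reset on any mismatch. The actual 10NES algorithm is proprietary and is not reproduced here; a seeded pseudorandom generator stands in for it.

import random

def cic_stream(seed):
    # Stand-in for the secret 10NES program; both chips run the same code.
    rng = random.Random(seed)
    while True:
        yield rng.getrandbits(1)

def console_boot(lock_seed, key_seed, checks=16):
    lock, key = cic_stream(lock_seed), cic_stream(key_seed)
    for _ in range(checks):
        if next(lock) != next(key):
            return "reset (blinking power light)"  # streams disagree
    return "boot"

print(console_boot(0xA5, 0xA5))  # matching chips: console boots
print(console_boot(0xA5, 0x5A))  # mismatch (e.g. wrong-region game): resets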
With the release of the top-loading NES-101 (NES 2) toward the end of the NES's lifespan, Nintendo resolved the problems by switching to a standard card edge connector and eliminating the lockout chip. All of the Famicom systems use standard card edge connectors, as do Nintendo's two subsequent game consoles, the Super Nintendo Entertainment System and the Nintendo 64.
In response to these hardware flaws, "Nintendo Authorized Repair Centers" sprang up across the U.S. According to Nintendo, the authorization program was designed to ensure that the machines were properly repaired. Nintendo would ship the necessary replacement parts only to shops that had enrolled in the authorization program. In practice, the authorization process consisted of nothing more than paying a fee to Nintendo for the privilege.
Nintendo released a stereoscopic headset peripheral called the Famicom 3D System. It was never released outside Japan, as it was a commercial failure, giving some gamers headaches and nausea.[48]
Nintendo also released a modem peripheral called the Famicom Modem. It was not intended for traditional games, but for gambling on real horse races, stock trading, and banking.
For its CPU, the NES uses the Ricoh 2A03, an 8-bit microprocessor containing a second-source MOS Technology 6502 core, running at 1.79 MHz for the NTSC NES and 1.66 MHz for the PAL version.
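From these clock rates one can estimate the CPU time available per video frame. A minimal sketch in Python, assuming the usual approximate television frame rates of ~60 Hz for NTSC and ~50 Hz for PAL; the figures are rounded illustrations, not exact hardware constants:

```python
# Rough CPU budget per video frame, using the rounded clock rates quoted
# above and approximate TV frame rates (assumptions, not exact constants).
CLOCKS_HZ = {"NTSC": 1_790_000, "PAL": 1_660_000}
FRAME_RATES_HZ = {"NTSC": 60, "PAL": 50}

for region, clock_hz in CLOCKS_HZ.items():
    cycles_per_frame = clock_hz / FRAME_RATES_HZ[region]
    print(f"{region}: ~{cycles_per_frame:,.0f} CPU cycles per frame")
# NTSC: ~29,833 cycles per frame; PAL: ~33,200 cycles per frame
```

Every sprite update, scroll calculation, and sound register write for a frame had to fit within roughly that budget, which is one reason tight 6502 code mattered so much on the platform.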
The NES contains 2 kB of onboard work RAM.[49] A game cartridge may contain expanded RAM to increase this amount. The sizes of NES games vary from 8 kB (Galaxian) to 1 MB (Metal Slader Glory), with 128 to 384 kB being the most common.
The NES[50] uses a custom-made Picture Processing Unit (PPU) developed by Ricoh. All variations of the PPU feature 2 kB of video RAM, 256 bytes of on-die "object attribute memory" (OAM) to store the positions, colors, and tile indices of up to 64 sprites on the screen, and 28 bytes of on-die palette RAM to allow selection of background and sprite colors. The console's 2 kB of onboard RAM may be used for tile maps and attributes on the NES board and 8 kB of tile pattern ROM or RAM may be included on a cartridge. The system has an available color palette of 48 colors and 6 grays. Up to 25 simultaneous colors may be used without writing new values mid-frame: a background color, four sets of three tile colors, and four sets of three sprite colors. The NES palette is based on NTSC rather than RGB values. A total of 64 sprites may be displayed onscreen at a given time without reloading sprites mid-screen. The standard display resolution of the NES is 256 horizontal pixels by 240 vertical pixels.
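The "up to 25 simultaneous colors" figure follows directly from the palette layout just described: one shared backdrop color plus four background palettes and four sprite palettes of three colors each. A small sketch of the arithmetic, which also derives the four bytes of per-sprite data implied by the OAM size:

```python
# Palette and sprite arithmetic from the PPU description above.
BACKDROP = 1                       # one shared background color
BG_PALETTES = SPRITE_PALETTES = 4  # four palettes for each layer
COLORS_PER_PALETTE = 3             # three usable colors per palette

simultaneous_colors = BACKDROP + (BG_PALETTES + SPRITE_PALETTES) * COLORS_PER_PALETTE
assert simultaneous_colors == 25

OAM_BYTES, MAX_SPRITES = 256, 64
assert OAM_BYTES // MAX_SPRITES == 4  # 4 bytes of position/tile/attribute data per sprite
```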
Video output connections vary between console models. The original HVC-001 model of the Family Computer features only radio frequency (RF) modulator output. The North American and European consoles have support for composite video through RCA connectors in addition to the RF modulator. The HVC-101 model of the Famicom omits the RF modulator entirely and has composite video output via a proprietary 12-pin "multi-out" connector first introduced on the Super Famicom and Super Nintendo Entertainment System. Conversely, the North American re-released NES-101 model most closely resembles the original HVC-001 model Famicom, in that it features RF modulator output only.[51] Finally, the PlayChoice-10 utilizes an inverted RGB video output.
The stock NES supports a total of five sound channels: two pulse channels with four pulse-width settings, a triangle wave generator, a noise generator (often used for percussion), and a fifth channel that plays low-quality digital samples.
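To illustrate what a pulse channel with selectable pulse width produces, here is a minimal sketch that renders one cycle of a rectangle wave at several duty settings. The specific duty values of 12.5%, 25%, 50%, and 75% are the figures conventionally cited for this generation of hardware and are an assumption here, not taken from the text above:

```python
# One cycle of a pulse (rectangle) wave at a given duty setting.
# The four duty values below are conventional figures for this hardware
# generation (an assumption for illustration, not from the text).
def pulse_wave(duty: float, samples_per_cycle: int = 16) -> list[int]:
    """High (1) while the phase is below the duty fraction, low (0) after."""
    return [1 if i / samples_per_cycle < duty else 0
            for i in range(samples_per_cycle)]

for duty in (0.125, 0.25, 0.50, 0.75):
    shape = "".join("#" if s else "." for s in pulse_wave(duty))
    print(f"duty {duty:>5.1%}: {shape}")
```

Changing the duty setting changes the timbre of a note without changing its pitch, which is how composers coaxed variety out of only two pulse voices.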
The NES supports expansion chips contained in certain cartridges to add sound channels and help with data processing. Developers could include these chips in their games; examples include the Konami VRC6, Konami VRC7, Sunsoft 5B, Namco 163, and two by Nintendo itself: the Nintendo FDS wave generator (a modified Ricoh RP2C33 chip with single-cycle wave table-lookup sound support) and the Nintendo Memory Management Controller 5 (MMC5).[52] Due to wiring differences between the Famicom and NES, a stock NES console is incapable of passing through audio generated by expansion chips utilizing additional sound channels, but can be modified to regain this capability.[53][54]
The game controller used for both the NES and the Famicom features an oblong brick-like design with a simple four button layout: two round buttons labeled "A" and "B", a "START" button, and a "SELECT" button.[55] Additionally, the controllers utilize the cross-shaped joypad, designed by Nintendo employee Gunpei Yokoi for Nintendo Game & Watch systems, to replace the bulkier joysticks on earlier gaming consoles' controllers.[20]:279
The original model Famicom features two game controllers, both of which are hardwired to the back of the console. The second controller lacks the START and SELECT buttons, but features a small microphone. Relatively few games use this feature. The earliest produced Famicom units have square A and B buttons.[51] These were changed to the circular design because the square buttons tended to catch in the controller casing when pressed, and because hardware glitches occasionally caused the system to freeze during play.
Instead of the Famicom's hardwired controllers, the NES features two custom 7-pin ports on the front of the console to support swappable and potentially third-party controllers. The controllers bundled with the NES are identical and include the START and SELECT buttons, allowing some NES versions of games, such as The Legend of Zelda, to use the START button on the second controller to save the game at any time. The NES controllers lack the microphone, which is used on the Famicom version of Zelda to kill certain enemies, or for singing with karaoke games.[40]
A number of special controllers are designed for use with specific games, though they are not commonly used. Such devices include the Zapper light gun, the R.O.B.,[20]:297 and the Power Pad.[33]:226[56] The original Famicom features a deepened DA-15 expansion port on the front of the unit, which is used to connect most auxiliary devices.[40] On the NES, these special controllers are generally connected to one of the two control ports on the front of the console.
Nintendo made two advanced controllers for the NES, called the NES Advantage and the NES Max. Both controllers have a Turbo feature, in which one press of the button represents multiple automatic rapid presses. This feature allows players to shoot much faster in shooter games. The NES Advantage has two knobs that adjust the firing rate of the turbo button from quick to Turbo, as well as a "Slow" button that slows down compatible games by rapidly pausing them. The NES Max has a non-adjustable Turbo feature, no "Slow" feature, a wing-like handheld shape, and a sleek directional pad. Turbo functionality also exists on the NES Satellite, the NES Four Score, and the U-Force. Other accessories include the Power Pad and the Power Glove, which is featured in the movie The Wizard.
Near the end of the NES's lifespan, upon the release of the AV Famicom and the top-loading NES 2, the design of the game controllers was modified slightly. Though the original button layout was retained, the controller's shape resembles that of the SNES's controller. In addition, the AV Famicom dropped the hardwired controllers in favor of detachable controller ports. The controllers included with the Famicom AV have 90 cm (3 feet) long cables, compared to the 180 cm (6 feet) of NES controllers.[57]
The original NES controller has become one of the most recognizable symbols of the console. Nintendo has mimicked the look of the controller in several other products, from promotional merchandise to limited edition versions of the Game Boy Advance.[58]
Few of the numerous peripheral devices and software packages for the Famicom were released outside Japan.
Family BASIC is an implementation of BASIC for the Famicom, packaged with a keyboard. Similar in concept to the Atari 2600 BASIC cartridge, it allows the user to write programs, especially games, which can be saved on an included cassette recorder.[59] Nintendo of America rejected releasing Famicom BASIC in the US because it did not think it fit its primary marketing demographic of children.[33]:162
The Famicom Modem connected a Famicom to a now-defunct proprietary network in Japan that provided content such as financial services.[60] A dialup modem was never released for the NES.
In 1986, Nintendo released the Famicom Disk System (FDS) in Japan, a type of floppy drive that uses a double-sided, proprietary 5 cm (2") disk and plugs into the cartridge port. It contains RAM for the game to load into and an extra single-cycle wave table-lookup sound chip. The disks are used both for storing the game and saving progress, with a total capacity of 128k (64k per side). The disks were originally obtained from kiosks in malls and other public places where buyers could select a game and have it written to the disk. This process cost less than cartridges, and users could take the disk back to a vending booth and have it rewritten with a new game, or have their high scores faxed to Nintendo for national leaderboard contests.
A variety of games for the FDS were released by Nintendo (including some that had already been released on cartridge, such as Super Mario Bros.), by third-party companies such as Konami and Taito, and by unlicensed publishers. Its limitations quickly became apparent as larger ROM chips were introduced, allowing cartridges with more than 128k of space. More advanced memory management controller (MMC) chips soon appeared, and the FDS quickly became obsolete. Nintendo also charged developers high prices to produce FDS games, and many refused to develop for it, instead continuing to make cartridge games.[citation needed] After only two years, the FDS was discontinued, although vending booths remained in place until 1993 and Nintendo continued to service drives, and to rewrite and offer replacement disks, until 2003. Approximately four million drives were sold.[citation needed]
Nintendo did not release the Disk System outside Japan due to numerous problems encountered with the medium in Japan, and due to the increasing data storage capacity and decreasing cost of the highly reliable cartridge medium.[61] As a result, many Disk System games such as Castlevania, The Legend of Zelda, and Bubble Bobble were converted to cartridge format for their export releases, resulting in simplified sound and in the disk save function being replaced by either password or battery save systems.
A thriving market of unlicensed NES hardware clones emerged during the peak of the console's popularity. Initially, such clones were popular in markets where Nintendo issued a legitimate version of the console only long after unlicensed hardware had become available. In particular, the Dendy (Russian: Де́нди), an unlicensed hardware clone produced in Taiwan and sold in the former Soviet Union, emerged as the most popular video game console of its time in that market, enjoying a degree of fame roughly equivalent to that experienced by the NES/Famicom in North America and Japan. A range of Famicom clones was marketed in Argentina during the late 1980s and early 1990s under the name "Family Game", resembling the original hardware design. The Micro Genius (Simplified Chinese: 小天才) was marketed in Southeast Asia as an alternative to the Famicom, and in Central Europe, especially Poland, the Pegasus was available.[62] Since 1989, there have been many Brazilian clones of the NES,[36] and the very popular Phantom System (with hardware superior to the original console) caught the attention of Nintendo itself.[35]
The unlicensed clone market has flourished following Nintendo's discontinuation of the NES. Some of the more exotic of these resulting systems surpass the functionality of the original hardware, such as a portable system with a color LCD (PocketFami). Others have been produced for certain specialized markets, such as a rather primitive personal computer with a keyboard and basic word processing software.[63] These unauthorized clones have been helped by the invention of the so-called NES-on-a-chip.[64]
As was the case with unlicensed games, Nintendo has typically gone to the courts to prohibit the manufacture and sale of unlicensed cloned hardware. Many of the clone vendors have included built-in copies of licensed Nintendo software, which constitutes copyright infringement in most countries.
Although most hardware clones were not produced under license by Nintendo, certain companies were granted licenses to produce NES-compatible devices. The Sharp Corporation produced at least two such clones: the Twin Famicom and the SHARP 19SC111 television. The Twin Famicom is compatible with both Famicom cartridges and Famicom Disk System disks.[65]:29 Like the Famicom, it is red and black and has hardwired controllers, but has a different case design. The SHARP 19SC111 television includes a built-in Famicom.[66] A similar licensing deal was reached with Hyundai Electronics, who licensed the system under the name Comboy in the South Korean market. This deal with Hyundai was made necessary because of the South Korean government's wide ban on all Japanese "cultural products", which remained in effect until 1998 and ensured that the only way Japanese products could legally enter the South Korean market was through licensing to a third-party (non-Japanese) distributor (see also Japan–Korea disputes).[67] In India, the system was sold under the name Samurai and assembled locally under license from kits due to policies that banned imports of electronics.[68]
The NES Test Station diagnostics machine was introduced in 1988. It is an NES-based unit designed for testing NES hardware, components, and games. It was only provided for use in World of Nintendo boutiques as part of the Nintendo World Class Service program. Visitors would bring items to test with the station and could be assisted by a store technician or employee.
The NES Test Station's front features a Game Pak slot and connectors for testing various components (AC adapter, RF switch, Audio/Video cable, NES Control Deck, accessories, and games), with a centrally located selector knob to choose which component to test. The unit itself weighs approximately 11.7 pounds without a TV. It connects to a television via a combined A/V and RF Switch cable. By actuating the green button, a user can toggle between an A/V cable and an RF switch connection. The television it is connected to (typically 11" to 14") is meant to be placed atop it.[69]
In 1991, Nintendo provided an add-on called the "Super NES Counter Tester" that tests Super Nintendo components and games. The SNES Counter Tester is a standard SNES on a metal fixture with the connection from the back of the SNES re-routed to the front of the unit. These connections may be made directly to the test station or to the TV, depending on what is to be tested.
The Nintendo Entertainment System is home to a number of groundbreaking games. Super Mario Bros. pioneered side-scrollers, whilst The Legend of Zelda established the genre of action-adventure games and helped popularize battery-backed save functionality. A third influential title, Metroid, established the Metroidvania genre.
The NES uses a 72-pin design, as compared with 60 pins on the Famicom. To reduce costs and inventory, some early games released in North America are simply Famicom cartridges attached to an adapter to fit inside the NES hardware.[27] Early NES cartridges are held together with five small slotted screws. Games released after 1987 were redesigned slightly to incorporate two plastic clips molded into the plastic itself, removing the need for the top two screws.[70]
The back of the cartridge bears a label with handling instructions. Production and software revision codes were imprinted as stamps on the back label to correspond with the software version and producer. All licensed NTSC and PAL cartridges are a standard shade of gray plastic, with the exception of The Legend of Zelda and Zelda II: The Adventure of Link, which were manufactured in gold-plastic carts. Unlicensed carts were produced in black, robin egg blue, and gold, all in slightly different shapes from standard NES cartridges. Nintendo also produced yellow-plastic carts for internal use at Nintendo Service Centers, although these "test carts" were never made available for purchase. All licensed US cartridges were made by Nintendo, Konami, and Acclaim. For the promotion of DuckTales: Remastered, Capcom sent 150 limited-edition gold NES cartridges with the original game, featuring the Remastered art as the sticker, to different gaming news agencies. The instruction label on the back includes the opening lyric from the show's theme song, "Life is like a hurricane".[71]
Famicom cartridges are shaped slightly differently. Unlike NES games, official Famicom cartridges were produced in many colors of plastic. Adapters, similar in design to the popular accessory Game Genie, are available that allow Famicom games to be played on an NES. In Japan, several companies manufactured the cartridges for the Famicom.[33]:61 This allowed these companies to develop their own customized chips designed for specific purposes, such as superior sound and graphics.
Nintendo's near monopoly on the home video game market left it with a dominant influence over the industry. Unlike Atari, which never actively pursued third-party developers (and even went to court in an attempt to force Activision to cease production of Atari 2600 games), Nintendo had anticipated and encouraged the involvement of third-party software developers, though strictly on Nintendo's terms.[72] Some of the Nintendo platform-control measures were adopted in a less stringent way by later console manufacturers such as Sega, Sony, and Microsoft.
To this end, a 10NES authentication chip is in every console and in every officially licensed cartridge. If the console's chip cannot detect a counterpart chip inside the cartridge, the game does not load.[33]:247 Nintendo portrayed these measures as intended to protect the public against poor-quality games,[73] and placed a golden seal of approval on all licensed games released for the system.
Nintendo found success with Japanese arcade manufacturers such as Konami, Capcom, Taito, and Namco, which signed on as third-party developers. However, it met resistance from US game developers, with Atari, Activision, Electronic Arts, and Epyx refusing Nintendo's one-sided terms. Acclaim Entertainment, a fledgling game publisher founded by former Activision employees, became the first major third-party licensee based in the United States when it signed on with Nintendo in late 1987. Atari (through Tengen) and Activision signed on soon after.
Nintendo was not as restrictive as Sega, which did not permit third-party publishing until Mediagenic in late summer 1988.[74] Nintendo's intention was to reserve a large part of NES game revenue for itself. Nintendo required that it be the sole manufacturer of all cartridges and that the publisher pay in full before the cartridges for that game were produced. Cartridges could not be returned to Nintendo, so publishers assumed all the risk. As a result, some publishers lost more money due to distress sales of remaining inventory at the end of the NES era than they ever earned in profits from sales of the games. Because Nintendo controlled the production of all cartridges, it was able to enforce strict rules on its third-party developers, which were required to sign a contract obligating them to develop exclusively for the system, order at least 10,000 cartridges, and make only five games per year.[33]:214–215 The global 1988 shortage of DRAM and ROM chips reportedly caused Nintendo to grant an average of only 25% of publishers' requests for cartridges, with some receiving much higher amounts and others almost none.[73] GameSpy noted that Nintendo's "iron-clad terms" made the company many enemies during the 1980s. Some developers tried to circumvent the five-game limit by creating additional company brands, such as Konami's Ultra Games label; others tried circumventing the 10NES chip.[72]
Nintendo was accused of antitrust behavior because of the strict licensing requirements.[75] The United States Department of Justice and several states began probing Nintendo's business practices, leading to the involvement of Congress and the Federal Trade Commission (FTC). The FTC conducted an extensive investigation which included interviewing hundreds of retailers. During the FTC probe, Nintendo changed the terms of its publisher licensing agreements to eliminate the two-year rule and other restrictive terms. Nintendo and the FTC settled the case in April 1991, with Nintendo required to send vouchers giving a $5 discount on a new game to every person who had purchased an NES game between June 1988 and December 1990. GameSpy remarked that Nintendo's punishment was particularly weak given the case's findings, although it has been speculated that the FTC did not want to damage the video game industry in the United States.[72]
With the NES near the end of its life, many third-party publishers such as Electronic Arts supported upstart competing consoles with less strict licensing terms, such as the Sega Genesis and then the PlayStation, which first eroded and then overtook Nintendo's dominance in the home console market. Consoles from Nintendo's rivals in the post-SNES era always enjoyed much stronger third-party support than Nintendo's, which relied more heavily on first-party games.
Companies that refused to pay the licensing fee or were rejected by Nintendo found ways to circumvent the console's authentication system. Most of these companies created circuits that use a voltage spike to temporarily disable the 10NES chip.[33]:286 A few unlicensed games released in Europe and Australia are in the form of a dongle to connect to a licensed game, in order to use the licensed game's 10NES chip for authentication. To combat unlicensed games, Nintendo of America threatened retailers who sold them with losing their supply of licensed games, and multiple revisions were made to the NES PCBs to prevent unlicensed games from working.
Atari Games took a different approach with its line of NES products, Tengen. The company attempted to reverse engineer the lockout chip to develop its own "Rabbit" chip. Tengen also obtained a description of the lockout chip from the United States Patent and Trademark Office by falsely claiming that it was required to defend against present infringement claims. Nintendo successfully sued Tengen for copyright infringement. Tengen's antitrust claims against Nintendo were never decided.[75]
Color Dreams made Christian video games under the subsidiary name Wisdom Tree. Historian Steven Kent wrote, "Wisdom Tree presented Nintendo with a prickly situation. The general public did not seem to pay close attention to the court battle with Atari Games, and industry analysts were impressed with Nintendo's legal acumen; but going after a tiny company that published innocuous religious games was another story."[20]:400
As the Nintendo Entertainment System grew in popularity and entered millions of American homes, some small video rental shops began buying their own copies of NES games and renting them out to customers for a few days, for around the same price as a video cassette rental. Nintendo received no profit from the practice beyond the initial cost of its game, and unlike movie rentals, a newly released game could hit store shelves and be available for rent on the same day. Nintendo took steps to stop game rentals, but did not take any formal legal action until Blockbuster Video began to make game rentals a large-scale service. Nintendo claimed that allowing customers to rent games would significantly hurt sales and drive up the cost of games.[76] Nintendo lost the lawsuit,[77] but did win on a claim of copyright infringement.[78] Blockbuster was banned from including original, copyrighted instruction booklets with its rented games. In compliance with the ruling, Blockbuster produced original short instructions, usually in the form of a small booklet, card, or label stuck on the back of the rental box, that explained the game's basic premise and controls. Video rental shops continued the practice of renting video games.
By 1988, industry observers stated that the NES's popularity had grown so quickly that the market for Nintendo cartridges was larger than that for all home computer software.[79][20]:347 Compute! reported in 1989 that Nintendo had sold seven million NES systems in 1988 alone, almost as many as the number of Commodore 64s sold in its first five years.[80] "Computer game makers [are] scared stiff", the magazine said, stating that Nintendo's popularity caused most competitors to have poor sales during the previous Christmas and resulted in serious financial problems for some.[81]
In June 1989, Nintendo of America's vice president of marketing, Peter Main, said that the Famicom was present in 37% of Japan's households.[82] By 1990, 30% of American households owned the NES, compared to 23% for all personal computers.[83] By 1990, the NES had outsold all previously released consoles worldwide.[84][better source needed] The slogan for this brand was "It can't be beaten".[33]:345 The Nintendo Entertainment System was not available in the Soviet Union.
In the early 1990s, gamers predicted that competition from technologically superior systems such as the 16-bit Sega Genesis would mean the immediate end of the NES's dominance. Instead, during the first year of Nintendo's successor console, the Super Famicom (named Super Nintendo Entertainment System outside Japan), the Famicom remained the second highest-selling video game console in Japan, outselling the newer and more powerful NEC PC Engine and Sega Mega Drive by a wide margin.[85] The launch of the Genesis was overshadowed by the launch of Super Mario Bros. 3 for the NES.[citation needed] The console remained popular in Japan and North America until late 1993, when the demand for new NES software abruptly plummeted.[85] The final licensed game released for the system was Takahashi Meijin no Bōken Jima IV (Adventure Island IV) in Japan, Wario's Woods in North America, and The Lion King in Europe, in 1995.[86] In the wake of ever-decreasing sales and the lack of new games, Nintendo of America officially discontinued the NES by 1995.[6][87] Nintendo produced new Famicom units in Japan until September 25, 2003,[88] and continued to repair Famicom consoles until October 31, 2007, attributing the discontinuation of support to insufficient supplies of parts.[89][90]
The NES was released two years after the North American video game crash of 1983, when many retailers and adult consumers regarded electronic games as a passing fad,[20]:280 so many believed at first that the NES would soon fade.[81] Before the NES and Famicom, Nintendo was known as a moderately successful Japanese toy and playing card manufacturer, but the consoles' popularity helped the company grow into an internationally recognized name almost synonymous with video games as Atari had been,[91] and set the stage for Japanese dominance of the video game industry.[92] With the NES, Nintendo also changed the relationship between console manufacturers and third-party software developers by restricting developers from publishing and distributing software without licensed approval. This led to higher-quality games, which helped change the attitude of a public that had grown weary from poorly produced games for earlier systems.[20]:306–307
The NES hardware design is also very influential. Nintendo chose the name "Nintendo Entertainment System" for the US market and redesigned the system so it would not give the appearance of a child's toy. The front-loading cartridge input allows it to be used more easily in a TV stand with other entertainment devices, such as a videocassette recorder.[93][94][95]
The system's hardware limitations led to design principles that still influence the development of modern video games. Many prominent game franchises originated on the NES, including Nintendo's own Super Mario Bros.,[65]:57 The Legend of Zelda[20]:353 and Metroid,[20]:357 Capcom's Mega Man[96] franchise, Konami's Castlevania[20]:358 franchise, Square's Final Fantasy,[65]:95 and Enix's Dragon Quest[65]:222 franchises.
NES imagery, especially its controller, has become a popular motif for a variety of products,[97][98][99][100] including Nintendo's own Game Boy Advance.[58] Clothing, accessories, and food items adorned with NES-themed imagery are still produced and sold in stores.
The NES can be emulated on many other systems. The first emulator was the Japanese-only Pasofami. It was soon followed, in 1996, by iNES, which is available in English and is cross-platform. It was described as the first NES emulation software that could be used by a non-expert.[101] NESticle, a popular MS-DOS-based emulator, was released on April 3, 1997. Nintendo offers officially licensed emulation of many specific NES games via its own Virtual Console for the Wii, Nintendo 3DS, and Wii U, and via the Nintendo Switch Online service.
On July 14, 2016, Nintendo announced the November 2016 launch of a miniature replica of the NES, named the Nintendo Entertainment System: NES Classic Edition in the United States and Nintendo Classic Mini: Nintendo Entertainment System in Europe and Australia.[102] The emulation-based console includes 30 permanently inbuilt games from the vintage NES library, including the Super Mario Bros. and The Legend of Zelda series. The system features HDMI display output and a new replica controller, which can also connect to the Wii Remote for use with Virtual Console games.[103][104] It was discontinued in North America on April 13, 2017, and worldwide on April 15, 2017. However, Nintendo announced in September 2017 that the NES Classic Mini would return to production on June 29, 2018, only to be discontinued again permanently by December of that year.[105][106]
^ a: For distribution purposes, most of Europe and Australasia were divided into two regions by Nintendo. The first of these regions consisted of mainland Europe (excluding Italy) and Scandinavia, which saw the NES in 1987 and 1986 respectively. The console was released in the second region, consisting of the United Kingdom (and Ireland), Italy, and Australia (and New Zealand), in 1987.
^ b: In Japan, Nintendo sold an optional expansion peripheral for the Famicom, called the Famicom Disk System, which enables the console to run software from proprietary floppy disks.
^ c: The original Famicom features two hardwired game controllers and a single port for additional input devices. See game controllers section.
^ e: The NES is the overall best-selling system worldwide of its time. In Japan and the United States, it controlled 85–90% of the market.[33]:349 In Europe, it was at most in 10–12% of households.[33]:413–414 Nintendo sold 61.9 million NES units worldwide: 19.35 million in Japan, 34 million in the Americas, and 8.5 million in other regions.[8]
^ f: The commonly bundled game Super Mario Bros. popularized the platform game genre and introduced elements that would be copied in many subsequent games.[107]
^ g: Atari broke off negotiations with Nintendo in response to Coleco's unveiling of an unlicensed port of Donkey Kong for its Coleco Adam computer system. Although the game had been produced without Nintendo's permission or support, Atari took its release as a sign that Nintendo was dealing with one of its major competitors in the market.[20]:283–285
^ h: Donkey Kong Jr. Math and Mach Rider are often erroneously listed as launch games. Neither was available until later in 1986.[26]
en/1935.html.txt
ADDED
@@ -0,0 +1,19 @@
Family (Latin: familia, plural familiae) is one of the eight major hierarchical taxonomic ranks in Linnaean taxonomy; it is classified between order and genus. A family may be divided into subfamilies, which are intermediate ranks between the ranks of family and genus. The official family names are Latin in origin; however, popular names are often used: for example, walnut trees and hickory trees belong to the family Juglandaceae, but that family is commonly referred to as being the "walnut family".
What does or does not belong to a family, or whether a described family should be recognized at all, is proposed and determined by practicing taxonomists. There are no hard rules for describing or recognizing a family. Taxonomists often take different positions about descriptions, and there may be no broad consensus across the scientific community for some time. The publishing of new data and opinions often enables adjustments and consensus.
The naming of families is codified by various international bodies using the following suffixes: -idae in zoology, -aceae in botany, phycology, and mycology, and -viridae in virology.
The taxonomic term familia was first used by French botanist Pierre Magnol in his Prodromus historiae generalis plantarum, in quo familiae plantarum per tabulas disponuntur (1689) where he called the seventy-six groups of plants he recognised in his tables families (familiae). The concept of rank at that time was not yet settled, and in the preface to the Prodromus Magnol spoke of uniting his families into larger genera, which is far from how the term is used today.
Carolus Linnaeus used the word familia in his Philosophia botanica (1751) to denote major groups of plants: trees, herbs, ferns, palms, and so on. He used this term only in the morphological section of the book, discussing the vegetative and generative organs of plants.
Subsequently, in French botanical publications, from Michel Adanson's Familles naturelles des plantes (1763) and until the end of the 19th century, the word famille was used as a French equivalent of the Latin ordo (or ordo naturalis).
In zoology, the family as a rank intermediate between order and genus was introduced by Pierre André Latreille in his Précis des caractères génériques des insectes, disposés dans un ordre naturel (1796). He used families (some of them were not named) in some but not in all his orders of "insects" (which then included all arthropods).
In nineteenth-century works such as the Prodromus of Augustin Pyramus de Candolle and the Genera Plantarum of George Bentham and Joseph Dalton Hooker, the word ordo was used for what is now given the rank of family.
Families can be used for evolutionary, palaeontological and genetic studies because they are more stable than lower taxonomic levels such as genera and species.[4][5]
en/1937.html.txt
ADDED
@@ -0,0 +1,202 @@
In human society, family (from Latin: familia) is a group of people related either by consanguinity (by recognized birth) or affinity (by marriage or other relationship). The purpose of the family is to maintain the well-being of its members and of society. Ideally, families offer predictability, structure, and safety as members mature and participate in the community.[1] In most societies, it is within families that children acquire socialization for life outside the family. Additionally, as the basic unit for meeting the basic needs of its members, the family provides a sense of boundaries for performing tasks in a safe environment, ideally builds a person into a functional adult, transmits culture, and ensures continuity of humankind with precedents of knowledge.
Anthropologists generally classify most family organizations as matrifocal (a mother and her children); patrifocal (a father and his children); conjugal (a wife, her husband, and children, also called the nuclear family); avuncular (for example, a grandparent, a brother, his sister, and her children); or extended (parents and children co-reside with other members of one parent's family).
Members of the immediate family may include spouses, parents, grandparents, brothers, sisters, sons, and daughters. Members of the extended family may include aunts, uncles, cousins, nephews, nieces, and siblings-in-law. Sometimes these are also considered members of the immediate family, depending on an individual's specific relationship with them, and the legal definition of "immediate family" varies.[2] Sexual relations with family members are regulated by rules concerning incest such as the incest taboo.
The field of genealogy aims to trace family lineages through history. The family is also an important economic unit studied in family economics. The word "families" can be used metaphorically to create more inclusive categories such as community, nationhood, and global village.
One of the primary functions of the family involves providing a framework for the production and reproduction of persons biologically and socially. This can occur through the sharing of material substances (such as food); the giving and receiving of care and nurture (nurture kinship); jural rights and obligations; and moral and sentimental ties.[4][5] Thus, one's experience of one's family shifts over time. From the perspective of children, the family is a "family of orientation": the family serves to locate children socially and plays a major role in their enculturation and socialization.[6] From the point of view of the parent(s), the family is a "family of procreation", the goal of which is to produce, enculturate and socialize children.[7][8] However, producing children is not the only function of the family; in societies with a sexual division of labor, marriage, and the resulting relationship between two people, it is necessary for the formation of an economically productive household.[9][10][11]
Christopher Harris notes that the western conception of family is ambiguous and confused with the household, as revealed in the different contexts in which the word is used.[12] Olivia Harris states this confusion is not accidental, but indicative of the familial ideology of capitalist, western countries that pass social legislation that insists members of a nuclear family should live together, and that those not so related should not live together; despite the ideological and legal pressures, a large percentage of families do not conform to the ideal nuclear family type.[13]
The total fertility rate of women varies from country to country, from a high of 6.76 children born/woman in Niger to a low of 0.81 in Singapore (as of 2015).[14] Fertility is low in most Eastern European and Southern European countries; and high in most Sub-Saharan African countries.[14]
In some cultures, the mother's preference of family size influences that of the children through early adulthood.[15] A parent's number of children strongly correlates with the number of children that their children will eventually have.[16]
Although early western cultural anthropologists and sociologists considered family and kinship to be universally associated with relations by "blood" (based on ideas common in their own cultures), later research[4] has shown that many societies instead understand family through ideas of living together, the sharing of food (e.g. milk kinship), and the sharing of care and nurture. Sociologists have a special interest in the function and status of family forms in stratified (especially capitalist) societies.[citation needed]
According to the work of scholars Max Weber, Alan Macfarlane, Steven Ozment, Jack Goody and Peter Laslett, the huge transformation that led to modern marriage in Western democracies was "fueled by the religio-cultural value system provided by elements of Judaism, early Christianity, Roman Catholic canon law and the Protestant Reformation".[17]
Much sociological, historical and anthropological research dedicates itself to the understanding of this variation, and of changes in the family that form over time. Levitan claims:
"Times have changed; it is more acceptable and encouraged for mothers to work and fathers to spend more time at home with the children. The way roles are balanced between the parents will help children grow and learn valuable life lessons. There is [the] great importance of communication and equality in families, in order to avoid role strain."[18]
The term "nuclear family" is commonly used, especially in the United States of America, to refer to conjugal families. A "conjugal" family includes only the spouses and unmarried children who are not of age.[19][failed verification] Some sociologists[which?] distinguish between conjugal families (relatively independent of the kindred of the parents and of other families in general) and nuclear families (which maintain relatively close ties with their kindred).[20][21]
Other family structures, such as those with blended parents, single parents, and domestic partnerships, have begun to challenge the normality of the nuclear family.[22][23][24]
A single-parent family consists of one parent together with their children, where the parent is either widowed, divorced (and not remarried), or never married. The parent may have sole custody of the children, or separated parents may have a shared-parenting arrangement where the children divide their time (possibly equally) between two different single-parent families or between one single-parent family and one blended family. As compared to sole custody, physical, mental, and social well-being of children may be improved by shared-parenting arrangements and by children having greater access to both parents.[25][26] The number of single-parent families has been[when?] increasing, and about half of all children in the United States will live in a single-parent family at some point before they reach the age of 18.[27] Most single-parent families are headed by a mother, but the number of single-parent families headed by fathers is increasing.[28][29][30]
A "matrifocal" family consists of a mother and her children. Generally, these children are her biological offspring, although adoption of children is a practice in nearly every society. This kind of family occurs commonly where women have the resources to rear their children by themselves, or where men are more mobile than women. As a definition, "a family or domestic group is matrifocal when it is centred on a woman and her children. In this case, the father(s) of these children are intermittently present in the life of the group and occupy a secondary place. The children's mother is not necessarily the wife of one of the children's fathers."[31]
The term "extended family" is also common, especially in the United States. This term has two distinct meanings:
These types refer to ideal or normative structures found in particular societies. Any society will exhibit some variation in the actual composition and conception of families.[citation needed]
The term family of choice, also sometimes referred to as "chosen family" or "found family", is common within the LGBT community, among veterans, among individuals who have suffered abuse, and among those who have no contact with their biological parents. It refers to the group of people in an individual's life that satisfies the typical role of family as a support system. The term differentiates between the "family of origin" (the biological family, or that in which people are raised) and those who actively assume that ideal role.[32]
The family of choice may or may not include some or all of the members of the family of origin. This terminology stems from the fact that many LGBT individuals, upon coming out, face rejection or shame from the families they were raised in.[33] The term family of choice is also used by individuals in the 12-step communities, who create close-knit "family" ties through the recovery process.
As a family system, families of choice face unique issues. Without legal safeguards, families of choice may struggle when medical, educational, or governmental institutions fail to recognize their legitimacy.[33] If members of the chosen family have been disowned by their family of origin, they may experience surrogate grief, displacing anger, loss, or anxious attachment onto their new family.[33]
The term blended family or stepfamily describes families with mixed parents: one or both parents remarried, bringing children of the former family into the new family.[34] Also in sociology, particularly in the works of social psychologist Michael Lamb,[35] traditional family refers to "a middle-class family with a bread-winning father and a stay-at-home mother, married to each other and raising their biological children," and nontraditional to exceptions to this rule. Most of the US households are now non-traditional under this definition.[36] Critics of the term "traditional family" point out that in most cultures and at most times, the extended family model has been most common, not the nuclear family,[37] though it has had a longer tradition in England[38] than in other parts of Europe and Asia which contributed large numbers of immigrants to the Americas. The nuclear family became the most common form in the U.S. in the 1960s and 1970s.[39]
In terms of communication patterns in families, there is a certain set of beliefs within the family that reflects how its members should communicate and interact. These family communication patterns arise from two underlying sets of beliefs: conversation orientation (the degree to which the importance of communication is valued) and conformity orientation (the degree to which families should emphasize similarities or differences regarding attitudes, beliefs, and values).[40]
A monogamous family is based on legal or social monogamy. In this case, an individual has only one (official) partner during their lifetime or at any one time (i.e. serial monogamy).[41] This means that a person may not have several different legal spouses at the same time, as this is usually prohibited by bigamy laws in jurisdictions that require monogamous marriages.
Polygamy is a marriage that includes more than two partners.[42][43] When a man is married to more than one wife at a time, the relationship is called polygyny; and when a woman is married to more than one husband at a time, it is called polyandry. If a marriage includes multiple husbands and wives, it can be called polyamory,[44] group or conjoint marriage.[43]
Polygyny is a form of plural marriage in which a man is allowed more than one wife.[45] In modern countries that permit polygamy, polygyny is typically the only form permitted. Polygyny is practiced primarily (but not only) in parts of the Middle East and Africa and is often associated with Islam; however, Islam imposes certain conditions that must be met before polygyny may be practiced.[citation needed]
Polyandry is a form of marriage whereby a woman takes two or more husbands at the same time.[46] Fraternal polyandry, where two or more brothers are married to the same wife, is a common form of polyandry. Polyandry was traditionally practiced in areas of the Himalayan mountains, among Tibetans in Nepal, in parts of China and in parts of northern India. Polyandry is most common in societies marked by high male mortality or where males will often be apart from the rest of the family for a considerable period of time.[46]
A first-degree relative is one who shares 50% of one's DNA through direct inheritance, such as a full sibling, a parent, or progeny.
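As a rough illustration of the genetic meaning of degree, the expected shared fraction halves with each additional degree of relationship. The halving rule is a standard genetic approximation assumed here for illustration; the text itself only states the first-degree 50% figure:

```python
# Expected fraction of DNA shared with a relative of a given degree.
# Halving per degree is a standard approximation (an assumption here;
# the text only gives the first-degree 50% figure).
def expected_shared_fraction(degree: int) -> float:
    return 0.5 ** degree

for degree, examples in [(1, "parent, child, full sibling"),
                         (2, "grandparent, half sibling, aunt/uncle"),
                         (3, "great-grandparent, first cousin")]:
    print(f"degree {degree} ({examples}): ~{expected_shared_fraction(degree):.1%}")
```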
There is another measure for the degree of relationship, which is determined by counting up generations to the first common ancestor and back down to the target individual, which is used for various genealogical and legal purposes.[citation needed]
In his book Systems of Consanguinity and Affinity of the Human Family, anthropologist Lewis Henry Morgan (1818–1881) performed the first survey of kinship terminologies in use around the world. Although much of his work is now considered dated, he argued that kinship terminologies reflect different sets of distinctions. For example, most kinship terminologies distinguish between sexes (the difference between a brother and a sister) and between generations (the difference between a child and a parent). Moreover, he argued, kinship terminologies distinguish between relatives by blood and marriage (although recently some anthropologists have argued that many societies define kinship in terms other than "blood").
Morgan made a distinction between kinship systems that use classificatory terminology and those that use descriptive terminology. Classificatory systems are generally and erroneously understood to be those that "class together" with a single term relatives who actually do not have the same type of relationship to ego. (What defines "same type of relationship" under such definitions seems to be genealogical relationship. This is problematic given that any genealogical description, no matter how standardized, employs words originating in a folk understanding of kinship.) What Morgan's terminology actually differentiates are those (classificatory) kinship systems that do not distinguish lineal and collateral relationships and those (descriptive) kinship systems that do. Morgan, a lawyer, came to make this distinction in an effort to understand Seneca inheritance practices. A Seneca man's effects were inherited by his sisters' children rather than by his own children.[49] Morgan identified six basic patterns of kinship terminologies:
Most Western societies employ Eskimo kinship terminology.[citation needed] This kinship terminology commonly occurs in societies with strong conjugal families, where families have a degree of relative mobility. Typically, societies with conjugal families also favor neolocal residence; thus upon marriage, a person separates from the nuclear family of their childhood (family of orientation) and forms a new nuclear family (family of procreation). Such systems generally assume that the mother's husband is also the biological father. The system uses highly descriptive terms for the nuclear family and progressively more classificatory terms as the relatives become more and more collateral.
The system emphasizes the nuclear family. Members of the nuclear family use highly descriptive kinship terms, identifying directly only the husband, wife, mother, father, son, daughter, brother, and sister. All other relatives are grouped together into categories. Members of the nuclear family may be lineal or collateral. Kin, for whom these are family, refer to them in descriptive terms that build on the terms used within the nuclear family or use the nuclear family term directly.
(Kinship diagrams: nuclear family of orientation; nuclear conjugal family; nuclear non-lineal family.)
A sibling is a collateral relative with a minimal removal. For collateral relatives with one additional removal, one generation more distant from a common ancestor on one side, more classificatory terms come into play. These terms (Aunt, Uncle, Niece, and Nephew) do not build on the terms used within the nuclear family, as most are not traditionally members of the household. These terms do not traditionally differentiate between a collateral relative and a person married to a collateral relative (both collateral and aggregate). Collateral relatives with additional removals on each side are Cousins. This is the most classificatory term and can be distinguished by degrees of collaterality and by generation (removal).
When only the subject has the additional removal, the relative is the subject's parent's sibling, and the terms Aunt and Uncle are used for female and male relatives respectively. When only the relative has the additional removal, the relative is the subject's sibling's child, and the terms Niece and Nephew are used for female and male relatives respectively. The spouse of a biological aunt or uncle is an aunt or uncle, and the nieces and nephews of a spouse are nieces and nephews. With further removal by the subject for aunts and uncles, and by the relative for nieces and nephews, the prefix "grand-" modifies these terms. With still further removal, the prefix becomes "great-grand-", adding another "great-" for each additional generation.
When the subject and the relative each have an additional removal, they are cousins. A cousin with minimal removal is a first cousin, i.e. the child of the subject's uncle or aunt. Degrees of collaterality and removals are used to more precisely describe the relationship between cousins. The degree is the number of generations subsequent to the common ancestor before a parent of one of the cousins is found, while the removal is the difference between the number of generations from each cousin to the common ancestor (the difference between the generations the cousins are from).[50][51]
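The degree-and-removal rule just described is mechanical enough to express as a short function. A sketch, assuming each cousin's position is given as a count of generations down from their most recent common ancestor:

```python
# Cousin degree and removal, following the rule described above.
# gen_a, gen_b: generations from each person up to the most recent common
# ancestor (grandparent = 2, great-grandparent = 3, and so on).
def cousin_relationship(gen_a: int, gen_b: int) -> tuple[int, int]:
    degree = min(gen_a, gen_b) - 1   # generations before a cousin's parent appears
    removal = abs(gen_a - gen_b)     # difference in generation counts
    return degree, removal

assert cousin_relationship(2, 2) == (1, 0)  # first cousins
assert cousin_relationship(3, 3) == (2, 0)  # second cousins
assert cousin_relationship(2, 3) == (1, 1)  # first cousins once removed
```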
Cousins of an older generation (in other words, one's parents' first cousins), although technically first cousins once removed, are often classified with "aunts" and "uncles."
English-speakers mark relationships by marriage (except for wife/husband) with the tag "-in-law." The mother and father of one's spouse become one's mother-in-law and father-in-law; the wife of one's son becomes one's daughter-in-law and the husband of one's daughter becomes one's son-in-law. The term "sister-in-law" refers to three essentially different relationships: the wife of one's brother, the sister of one's spouse, or the wife of one's spouse's sibling. "Brother-in-law" is the husband of one's sister, or the brother of one's spouse. The terms "half-brother" and "half-sister" indicate siblings who share only one biological parent. The term "aunt-in-law" is the wife of one's uncle, or the aunt of one's spouse. "Uncle-in-law" is the husband of one's aunt, or the uncle of one's spouse. "Cousin-in-law" is the spouse of one's cousin, or the cousin of one's spouse. The term "niece-in-law" is the wife of one's nephew, or the niece of one's spouse. "Nephew-in-law" is the husband of one's niece, or the nephew of one's spouse. The grandmother and grandfather of one's spouse become one's grandmother-in-law and grandfather-in-law; the wife of one's grandson becomes one's granddaughter-in-law and the husband of one's granddaughter becomes one's grandson-in-law.
In Indian English, a sibling-in-law who is the spouse of one's sibling can be referred to as a co-sibling (specifically a co-sister[52] or co-brother[53]).
Patrilineality, also known as the male line or agnatic kinship, is a form of kinship system in which an individual's family membership derives from and is traced through his or her father's lineage.[54] It generally involves the inheritance of property, rights, names, or titles by persons related through male kin.
A patriline ("father line") is a person's father, and additional ancestors that are traced only through males. One's patriline is thus a record of descent from a man in which the individuals in all intervening generations are male. In cultural anthropology, a patrilineage is a consanguineal male and female kinship group, each of whose members is descended from the common ancestor through male forebears.
Matrilineality is a form of kinship system in which an individual's family membership derives from and is traced through his or her mother's lineage.
It may also correlate with a societal system in which each person is identified with their matriline—their mother's lineage—and which can involve the inheritance of property and titles. A matriline is a line of descent from a female ancestor to a descendant in which the individuals in all intervening generations are mothers – in other words, a "mother line".
In a matrilineal descent system, an individual is considered to belong to the same descent group as her or his mother. This matrilineal descent pattern contrasts with the more common pattern of patrilineal descent.
Bilateral descent is a form of kinship system in which an individual's family membership derives from and is traced through both the paternal and maternal sides. The relatives on the mother's side and father's side are equally important for emotional ties or for transfer of property or wealth. It is a family arrangement where descent and inheritance are passed equally through both parents.[55] Families who use this system trace descent through both parents simultaneously and recognize multiple ancestors, but unlike with cognatic descent it is not used to form descent groups.[56]
Traditionally, this is found among some groups in West Africa, India, Australia, Indonesia, Melanesia, Malaysia and Polynesia. Anthropologists believe that a tribal structure based on bilateral descent helps members live in extreme environments because it allows individuals to rely on two sets of families dispersed over a wide area.[57]
Early scholars of family history applied Darwin's biological theory of evolution in their theory of evolution of family systems.[58] American anthropologist Lewis H. Morgan published Ancient Society in 1877 based on his theory of the three stages of human progress from Savagery through Barbarism to Civilization.[59] Morgan's book was the "inspiration for Friedrich Engels' book" The Origin of the Family, Private Property and the State published in 1884.[60]
Engels expanded Morgan's hypothesis that economic factors caused the transformation of the primitive community into a class-divided society.[61] Engels' theory of resource control, and later that of Karl Marx, was used to explain the cause and effect of change in family structure and function. The popularity of this theory was largely unmatched until the 1980s, when other sociological theories, most notably structural functionalism, gained acceptance.
Contemporary society generally views the family as a haven from the world, supplying absolute fulfillment. Zinn and Eitzen discuss the image of the "family as haven ... a place of intimacy, love and trust where individuals may escape the competition of dehumanizing forces in modern society".[63]
During industrialization, "[t]he family as a repository of warmth and tenderness (embodied by the mother) stands in opposition to the competitive and aggressive world of commerce (embodied by the father). The family's task was to protect against the outside world."[64] However, Zinn and Eitzen note, "The protective image of the family has waned in recent years as the ideals of family fulfillment have taken shape. Today, the family is more compensatory than protective. It supplies what is vitally needed but missing in other social arrangements."[64]
"The popular wisdom", according to Zinn and Eitzen, sees the family structures of the past as superior to those today, and families as more stable and happier at a time when they did not have to contend with problems such as illegitimate children and divorce. They respond to this, saying, "there is no golden age of the family gleaming at us in the far back historical past."[65] "Desertion by spouses, illegitimate children, and other conditions that are considered characteristics of modern times existed in the past as well."[65]
Others argue that whether or not one views the family as "declining" depends on one's definition of "family". "Married couples have dropped below half of all American households. This drop is shocking from traditional forms of the family system. Only a fifth of households were following traditional ways of having married couples raising a family together."[67] In the Western World, marriages are no longer arranged for economic, social or political gain, and children are no longer expected to contribute to family income. Instead, people choose mates based on love. This increased role of love indicates a societal shift toward favoring emotional fulfilment and relationships within a family, and this shift necessarily weakens the institution of the family.[68]
Margaret Mead considers the family as a main safeguard to continuing human progress. Observing, "Human beings have learned, laboriously, to be human", she adds: "we hold our present form of humanity on trust, [and] it is possible to lose it" ... "It is not without significance that the most successful large-scale abrogations of the family have occurred not among simple savages, living close to the subsistence edge, but among great nations and strong empires, the resources of which were ample, the populations huge, and the power almost unlimited"[69]
Many countries (particularly Western) have, in recent years, changed their family laws in order to accommodate diverse family models. For instance, in the United Kingdom, in Scotland, the Family Law (Scotland) Act 2006 provides cohabitants with some limited rights.[70] In 2010, Ireland enacted the Civil Partnership and Certain Rights and Obligations of Cohabitants Act 2010. There have also been moves at an international level, most notably, the Council of Europe European Convention on the Legal Status of Children Born out of Wedlock[71] which came into force in 1978. Countries which ratify it must ensure that children born outside marriage are provided with legal rights as stipulated in the text of this Convention. The Convention was ratified by the UK in 1981 and by Ireland in 1988.[72]
In the United States, one in five mothers has children by different fathers; among mothers with two or more children the figure is higher, with 28% having children with at least two different men. Such families are more common among Blacks and Hispanics and among the lower socioeconomic class.[73]
However, in Western society, the single-parent family has been growing more accepted and has begun to make an impact on culture. Single-parent families are more commonly single-mother families than single-father families. These families sometimes face difficult issues besides having to rear their children on their own, for example, low income making it difficult to pay for rent, child care, and other necessities for a healthy and safe home.
Domestic violence (DV) is violence that happens within the family. The legal and social understanding of the concept of DV differs by culture. The definition of the term "domestic violence" varies, depending on the context in which it is used.[74] It may be defined differently in medical, legal, political or social contexts. The definitions have varied over time, and vary in different parts of the world.
The Convention on preventing and combating violence against women and domestic violence states that:[75]
In 1993, the United Nations Declaration on the Elimination of Violence against Women identified domestic violence as one of three contexts in which violence against women occurs, describing it as:[76]
Family violence is a broader definition, often used to include child abuse, elder abuse, and other violent acts between family members.[77]
Child abuse is defined by the WHO as:[78]
Elder abuse is, according to the WHO: "a single, or repeated act, or lack of appropriate action, occurring within any relationship where there is an expectation of trust which causes harm or distress to an older person".[81]
Child abuse is the physical, sexual or emotional maltreatment or neglect of a child or children.[82] In the United States, the Centers for Disease Control and Prevention (CDC) and the Department for Children and Families (DCF) define child maltreatment as any act or series of acts of commission or omission by a parent or other caregiver that results in harm, potential for harm, or threat of harm to a child.[83] Child abuse can occur in a child's home, or in the organizations, schools or communities the child interacts with. There are four major categories of child abuse: neglect, physical abuse, psychological or emotional abuse, and sexual abuse.
Abuse of parents by their children is a common but underreported and under-researched subject. Parents are quite often subject to levels of childhood aggression in excess of normal childhood aggressive outbursts, typically in the form of verbal or physical abuse. Parents feel a sense of shame and humiliation about the problem, so they rarely seek help, and there is usually little or no help available anyway.[84][85]
Elder abuse is "a single, or repeated act, or lack of appropriate action, occurring within any relationship where there is an expectation of trust, which causes harm or distress to an older person."[86] This definition has been adopted by the World Health Organization from a definition put forward by Action on Elder Abuse in the UK. Laws protecting the elderly from abuse are similar to, and related to, laws protecting dependent adults from abuse.
The core element to the harm of elder abuse is the "expectation of trust" of the older person toward their abuser. Thus, it includes harms by people the older person knows or with whom they have a relationship, such as a spouse, partner or family member, a friend or neighbor, or people that the older person relies on for services. Many forms of elder abuse are recognized as types of domestic violence or family violence.
Forced and child marriages are practiced in certain regions of the world, particularly in Asia and Africa, and these types of marriages are associated with a high rate of domestic violence.[87][88][89][90]
A forced marriage is a marriage in which one or both participants are married without their freely given consent.[91] The line between forced marriage and consensual marriage may become blurred, because the social norms of many cultures dictate that one should never oppose the desire of one's parents/relatives in regard to the choice of a spouse; in such cultures it is not necessary for violence, threats, intimidation etc. to occur: the person simply "consents" to the marriage even if he/she doesn't want it, out of the implied social pressure and duty. The customs of bride price and dowry, which exist in parts of the world, can lead to buying and selling people into marriage.[92][93]
A child marriage is a marriage where one or both spouses are under 18.[94][87] Child marriage was common throughout history but is today condemned by international human rights organizations.[95][96][97] Child marriages are often arranged between the families of the future bride and groom, sometimes as soon as the girl is born.[95] Child marriages can also occur in the context of marriage by abduction.[95]
Family honor is an abstract concept involving the perceived quality of worthiness and respectability that affects the social standing and the self-evaluation of a group of related people, both corporately and individually.[98][99] The family is viewed as the main source of honor and the community highly values the relationship between honor and the family.[100] The conduct of family members reflects upon family honor and the way the family perceives itself, and is perceived by others.[99] In cultures of honor maintaining the family honor is often perceived as more important than either individual freedom, or individual achievement.[101] In extreme cases, engaging in acts that are deemed to tarnish the honor of the family results in honor killings. An honor killing is the homicide of a member of a family or social group by other members, due to the perpetrators' belief that the victim has brought shame or dishonor upon the family or community, usually for reasons such as refusing to enter an arranged marriage, being in a relationship that is disapproved by their relatives, having sex outside marriage, becoming the victim of rape, dressing in ways which are deemed inappropriate, or engaging in homosexual relations.[102][103][104][105][106]
A family is often part of a sharing economy with common ownership.
Dowry is property (money, goods, or estate) that a wife or wife's family gives to her husband when the wife and husband marry.[107] Offering dowry was common in many cultures historically (including in Europe and North America), but this practice today is mostly restricted to some areas primarily in the Indian subcontinent.
Bride price (also bridewealth or bride token) is property paid by the groom or his family to the parents of a woman upon the marriage of their daughter to the groom. It is practiced mostly in Sub-Saharan Africa, parts of South-East Asia (Thailand, Cambodia), and parts of Central Asia.
Dower is property given to the bride herself by the groom at the time of marriage, and which remains under her ownership and control.[108]
In some countries married couples benefit from various taxation advantages not available to a single person or to unmarried couples. For example, spouses may be allowed to average their combined incomes. Some jurisdictions recognize common law marriage or de facto relations for these purposes. In some jurisdictions there is also an option of civil partnership or domestic partnership.
Different property regimes exist for spouses. In many countries, each marriage partner has the choice of keeping their property separate or combining properties. In the latter case, called community property, when the marriage ends by divorce each owns half. In the absence of a will or trust, property owned by the deceased generally is inherited by the surviving spouse.
Reproductive rights are legal rights and freedoms relating to reproduction and reproductive health. These include the right to decide on issues regarding the number of children born, family planning, contraception, and private life, free from coercion and discrimination; as well as the right to access health services and adequate information.[109][110][111][112] According to UNFPA, reproductive rights "include the right to decide the number, timing and spacing of children, the right to voluntarily marry and establish a family, and the right to the highest attainable standard of health, among others".[113] Family planning refers to the factors that may be considered by individuals and couples in order for them to control their fertility, anticipate and attain the desired number of children and the spacing and timing of their births.[114][115]
The state and church have been, and still are in some countries, involved in controlling the size of families, often using coercive methods, such as bans on contraception or abortion (where the policy is a natalist one—for example through tax on childlessness) or conversely, discriminatory policies against large families or even forced abortions (e.g., China's one-child policy in place from 1978 to 2015). Forced sterilization has often targeted ethnic minority groups, such as Roma women in Eastern Europe,[116][117] or indigenous women in Peru (during the 1990s).[118]
The parents' rights movement is a movement whose members are primarily interested in issues affecting parents and children related to family law, specifically parental rights and obligations. Mothers' rights movements focus on maternal health, workplace issues such as labor rights, breastfeeding, and rights in family law. The fathers' rights movement is a movement whose members are primarily interested in issues related to family law, including child custody and child support, that affect fathers and their children.[119]
Children's rights are the human rights of children, with particular attention to the rights of special protection and care afforded to minors, including their right to association with both parents, their right to human identity, their right to be provided for in regard to their other basic needs, and their right to be free from violence and abuse.[120][121][122]
Each jurisdiction has its own marriage laws. These laws differ significantly from country to country; and these laws are often controversial. Areas of controversy include women's rights as well as same-sex marriage.
Legal reforms to family laws have taken place in many countries during the past few decades. These dealt primarily with gender equality within marriage and with divorce laws. Women have been given equal rights in marriage in many countries, reversing older family laws based on the dominant legal role of the husband. Coverture, which was enshrined in the common law of England and the US for several centuries and throughout most of the 19th century, was abolished. In some European countries the changes that led to gender equality were slower. The period of 1975–1979 saw a major overhaul of family laws in countries such as Italy,[123][124] Spain,[125] Austria,[126] West Germany,[127][128] and Portugal.[129] In 1978, the Council of Europe passed Resolution (78) 37 on equality of spouses in civil law.[130] Among the last European countries to establish full gender equality in marriage was Switzerland: in 1985, a referendum guaranteed women legal equality with men within marriage,[131][132] and the new reforms came into force in January 1988.[133] In Greece, in 1983, legislation was passed guaranteeing equality between spouses, abolishing dowry, and ending legal discrimination against illegitimate children.[134][135] In 1981, Spain abolished the requirement that married women must have their husbands' permission to initiate judicial proceedings;[136] similar requirements were abolished in the Netherlands[137][138] and France[139] in the 1980s. In recent decades, the marital power has also been abolished in African countries that had this doctrine, but many African countries that were former French colonies still have discriminatory laws in their marriage regulations, regulations originating in the Napoleonic Code.[136] In some countries (predominantly Roman Catholic) divorce was legalized only recently (e.g. Italy (1970), Portugal (1975), Brazil (1977), Spain (1981), Argentina (1987), Ireland (1996), Chile (2004) and Malta (2011)), although annulment and legal separation were options. The Philippines still does not allow divorce (see Divorce law by country). The laws pertaining to the situation of children born outside marriage have also been revised in many countries (see Legitimacy (family law)).
Family medicine is a medical specialty devoted to comprehensive health care for people of all ages; it is based on knowledge of the patient in the context of the family and the community, emphasizing disease prevention and health promotion.[141] The importance of family medicine is being increasingly recognized.[142]
Maternal mortality or maternal death is defined by WHO as "the death of a woman while pregnant or within 42 days of termination of pregnancy, irrespective of the duration and site of the pregnancy, from any cause related to or aggravated by the pregnancy or its management but not from accidental or incidental causes."[144] Historically, maternal mortality was a major cause of women's death. In recent decades, advances in healthcare have dramatically reduced rates of maternal mortality, especially in Western countries. Maternal mortality however remains a serious problem in many African and Asian countries.[144][145]
Infant mortality is the death of a child less than one year of age. Child mortality is the death of a child before the child's fifth birthday. Like maternal mortality, infant and child mortality were common throughout history, but have decreased significantly in modern times.[146][147]
While in many parts of the world family policies seek to promote a gender-equal organization of the family life, in others the male-dominated family continues to be the official policy of the authorities, which is also supported by law. For instance, the Civil Code of Iran states at Article 1105: "In relations between husband and wife; the position of the head of the family is the exclusive right of the husband".[148]
In some parts of the world, some governments promote a specific form of family, such as that based on traditional family values. The term "family values" is often used in political discourse in some countries, its general meaning being that of traditional or cultural values that pertain to the family's structure, function, roles, beliefs, attitudes, and ideals, usually involving the "traditional family"—a middle-class family with a breadwinner father and a homemaker mother, raising their biological children. Any deviation from this family model is considered a "nontraditional family".[149] These family ideals are often advanced through policies such as marriage promotion. Some jurisdictions outlaw practices which they deem as socially or religiously unacceptable, such as fornication, cohabitation or adultery.
Work-family balance is a concept involving proper prioritizing between work/career and family life. It includes issues relating to the way work and families intersect and influence each other. At a political level, it is reflected through policies such as maternity leave and paternity leave. Since the 1950s, social scientists as well as feminists have increasingly criticized gendered arrangements of work and care, and the male breadwinner role, and policies are increasingly targeting men as fathers as a tool for changing gender relations.[150]
Article 8 of the European Convention on Human Rights provides a right to respect for one's "private and family life, his home and his correspondence", subject to certain restrictions that are "in accordance with law" and "necessary in a democratic society".[151]
Article 8 – Right to respect for private and family life
1. Everyone has the right to respect for his private and family life, his home and his correspondence.
2. There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.
Certain social scientists have advocated the abolition of the family. An early opponent of the family was Socrates whose position was outlined by Plato in The Republic.[152] In Book 5 of The Republic, Socrates tells his interlocutors that a just city is one in which citizens have no family ties.[153][154]
The family being such a deep-rooted and much-venerated institution, few intellectuals have ventured to speak against it. Familialism has been atypically defined as a "social structure where ... a family's values are held in higher esteem than the values of the individual members of the family." Favoritism granted to relatives regardless of merit is called nepotism.
The Russian-American rationalist and individualist philosopher, novelist and playwright Ayn Rand compared partiality towards consanguinity with racism, as a small-scale manifestation of the latter.[155] "The worship of the family is merely racism, like a crudely primitive first installment on the worship of the tribe. It places the accident of birth above a man's values and duty to the tribe above a man's right to his own life."[156] Additionally, she spoke in favor of childfree lifestyle, while following it herself.[155]
The British social critic, poet, mountaineer and occultist Aleister Crowley censured the institution of family in his works: "Horrid word, family! Its very etymology accuses it of servility and stagnation. / Latin, famulus, a servant; Oscan, Faamat, he dwells. ... [T]hink what horrid images it evokes from the mind. Not only Victorian; wherever the family has been strong, it has always been an engine of tyranny. Weak members or weak neighbours: it is the mob spirit crushing genius, or overwhelming opposition by brute arithmetic. ... In every Magical, or similar system, it is invariably the first condition which the Aspirant must fulfill: he must once and for all and for ever put his family outside his magical circle."[157]
One of the controversies regarding the family is the application of the concept of social justice to the private sphere of family relations, in particular with regard to the rights of women and children. Throughout much of history, most philosophers who advocated for social justice focused on the public political arena, not on family structures, with the family often being seen as a separate entity which needed to be protected from outside state intrusion. One notable exception was John Stuart Mill, who, in his work The Subjection of Women, advocated for greater rights for women within marriage and family.[158] Second-wave feminists argued that the personal is political, stating that there are strong connections between personal experiences and the larger social and political structures. In the context of the feminist movement of the 1960s and 1970s, this was a challenge to the nuclear family and family values, as they were understood then.[159] Feminists focused on domestic violence, arguing that the reluctance, in law or in practice, of the state to intervene and offer protection to women who have been abused within the family is in violation of women's human rights, and is the result of an ideology which places family relations outside the conceptual framework of human rights.[160]
In 2015, Nicholas Eberstadt, political economist at the American Enterprise Institute in Washington, described a "global flight from family" in an opinion piece in the Wall Street Journal.[161] Statistics from an infographic by Olivier Ballou showed that,[162]
In 2013, just over 40% of US babies were born outside marriage. The Census Bureau estimated that 27% of all children lived in a fatherless home. Europe has seen a surge in child-free adults: one in five 40-something women is childless in Sweden and in Switzerland, one in four in Italy, and one in three in Berlin. So-called traditional societies are seeing the same trend. About one-sixth of Japanese women in their forties have never married, and about 30% of all women of that age are childless.
However, Swedish statisticians reported in 2013 that, in contrast to many countries, since the 2000s fewer children have experienced their parents' separation, childlessness had decreased in Sweden, and marriages had increased. It had also become more common for couples to have a third child, suggesting that the nuclear family was no longer in decline in Sweden.[163]:10
en/1938.html.txt
ADDED
@@ -0,0 +1,202 @@
In human society, family (from Latin: familia) is a group of people related either by consanguinity (by recognized birth) or affinity (by marriage or other relationship). The purpose of families is to maintain the well-being of its members and of society. Ideally, families would offer predictability, structure, and safety as members mature and participate in the community.[1] In most societies, it is within families that children acquire socialization for life outside the family. Additionally, as the basic unit for meeting the basic needs of its members, it provides a sense of boundaries for performing tasks in a safe environment, ideally builds a person into a functional adult, transmits culture, and ensures continuity of humankind with precedents of knowledge.
|
2 |
+
|
3 |
+
Anthropologists generally classify most family organizations as matrifocal (a mother and her children); patrifocal (a father and his children); conjugal (a wife, her husband, and children, also called the nuclear family); avuncular (for example, a grandparent, a brother, his sister, and her children); or extended (parents and children co-reside with other members of one parent's family).
|
4 |
+
|
5 |
+
Members of the immediate family may include spouses, parents, grandparents, brothers, sisters, sons, and daughters. Members of the extended family may include aunts, uncles, cousins, nephews, nieces, and siblings-in-law. Sometimes these are also considered members of the immediate family, depending on an individual's specific relationship with them, and the legal definition of "immediate family" varies.[2] Sexual relations with family members are regulated by rules concerning incest such as the incest taboo.
|
6 |
+
|
7 |
+
The field of genealogy aims to trace family lineages through history. The family is also an important economic unit studied in family economics. The word "families" can be used metaphorically to create more inclusive categories such as community, nationhood, and global village.
|
8 |
+
|
9 |
+
One of the primary functions of the family involves providing a framework for the production and reproduction of persons biologically and socially. This can occur through the sharing of material substances (such as food); the giving and receiving of care and nurture (nurture kinship); jural rights and obligations; and moral and sentimental ties.[4][5] Thus, one's experience of one's family shifts over time. From the perspective of children, the family is a "family of orientation": the family serves to locate children socially and plays a major role in their enculturation and socialization.[6] From the point of view of the parent(s), the family is a "family of procreation", the goal of which is to produce, enculturate and socialize children.[7][8] However, producing children is not the only function of the family; in societies with a sexual division of labor, marriage, and the resulting relationship between two people, it is necessary for the formation of an economically productive household.[9][10][11]
|
10 |
+
|
11 |
+
Christopher Harris notes that the western conception of family is ambiguous and confused with the household, as revealed in the different contexts in which the word is used.[12] Olivia Harris states this confusion is not accidental, but indicative of the familial ideology of capitalist, western countries that pass social legislation that insists members of a nuclear family should live together, and that those not so related should not live together; despite the ideological and legal pressures, a large percentage of families do not conform to the ideal nuclear family type.[13]
|
12 |
+
|
13 |
+
The total fertility rate of women varies from country to country, from a high of 6.76 children born/woman in Niger to a low of 0.81 in Singapore (as of 2015).[14] Fertility is low in most Eastern European and Southern European countries; and high in most Sub-Saharan African countries.[14]
|
14 |
+
|
15 |
+
In some cultures, the mother's preference of family size influences that of the children through early adulthood.[15] A parent's number of children strongly correlates with the number of children that their children will eventually have.[16]
|
16 |
+
|
17 |
+
Although early western cultural anthropologists and sociologists considered family and kinship to be universally associated with relations by "blood" (based on ideas common in their own cultures) later research[4] has shown that many societies instead understand family through ideas of living together, the sharing of food (e.g. milk kinship) and sharing care and nurture. Sociologists have a special interest in the function and status of family forms in stratified (especially capitalist) societies.[citation needed]
|
18 |
+
|
19 |
+
According to the work of scholars Max Weber, Alan Macfarlane, Steven Ozment, Jack Goody and Peter Laslett, the huge transformation that led to modern marriage in Western democracies was "fueled by the religio-cultural value system provided by elements of Judaism, early Christianity, Roman Catholic canon law and the Protestant Reformation".[17]
|
20 |
+
|
21 |
+
Much sociological, historical and anthropological research dedicates itself to the understanding of this variation, and of changes in the family that form over time. Levitan claims:
|
22 |
+
|
23 |
+
"Times have changed; it is more acceptable and encouraged for mothers to work and fathers to spend more time at home with the children. The way roles are balanced between the parents will help children grow and learn valuable life lessons. There is [the] great importance of communication and equality in families, in order to avoid role strain."[18]
|
24 |
+
|
25 |
+
The term "nuclear family" is commonly used, especially in the United States of America, to refer to conjugal families. A "conjugal" family includes only the spouses and unmarried children who are not of age.[19][failed verification] Some sociologists[which?] distinguish between conjugal families (relatively independent of the kindred of the parents and of other families in general) and nuclear families (which maintain relatively close ties with their kindred).[20][21]
|
26 |
+
Other family structures - with (for example) blended parents, single parents, and domestic partnerships – have begun to challenge the normality of the nuclear family.[22][23][24]
|
27 |
+
|
28 |
+
A single-parent family consists of one parent together with their children, where the parent is either widowed, divorced (and not remarried), or never married. The parent may have sole custody of the children, or separated parents may have a shared-parenting arrangement where the children divide their time (possibly equally) between two different single-parent families or between one single-parent family and one blended family. As compared to sole custody, physical, mental and social well-being of children may be improved by shared-parenting arrangements and by children having greater access to both parents.[25][26] The number of single-parent families have been[when?] increasing, and about half of all children in the United States will live in a single-parent family at some point before they reach the age of 18.[27] Most single-parent families are headed by a mother, but the number of single-parent families headed by fathers is increasing.[28][29][30]
|
29 |
+
|
30 |
+
A "matrifocal" family consists of a mother and her children. Generally, these children are her biological offspring, although adoption of children is a practice in nearly every society. This kind of family occurs commonly where women have the resources to rear their children by themselves, or where men are more mobile than women. As a definition, "a family or domestic group is matrifocal when it is centred on a woman and her children. In this case, the father(s) of these children are intermittently present in the life of the group and occupy a secondary place. The children's mother is not necessarily the wife of one of the children's fathers."[31]
|
31 |
+
|
32 |
+
The term "extended family" is also common, especially in the United States. This term has two distinct meanings:
|
33 |
+
|
34 |
+
These types refer to ideal or normative structures found in particular societies. Any society will exhibit some variation in the actual composition and conception of families.[citation needed]
|
35 |
+
|
36 |
+
The term family of choice, also sometimes referred to as "chosen family" or "found family", is common within the LGBT community, veterans, individuals who have suffered abuse, and those who have no contact with biological "parents". It refers to the group of people in an individual's life that satisfies the typical role of family as a support system. The term differentiates between the "family of origin" (the biological family or that in which people are raised) and those that actively assume that ideal role.[32]
|
37 |
+
The family of choice may or may not include some or all of the members of the family of origin. This terminology stems from the fact that many LGBT individuals, upon coming out, face rejection or shame from the families they were raised in.>[33] The term family of choice is also used by individuals in the 12 step communities, who create close-knit "family" ties through the recovery process.
|
38 |
+
|
39 |
+
As a family system, families of choice face unique issues. Without legal safeguards, family's of choice may struggle when medical, educational, or governmental institutions fail to recognize their legitimacy.[33] If members of the chosen family have been disowned by their family of origin, they may experience surrogate grief, displacing anger, loss, or anxious attachment onto their new family.[33]
|
40 |
+
|
41 |
+
The term blended family or stepfamily describes families with mixed parents: one or both parents remarried, bringing children of the former family into the new family.[34] Also in sociology, particularly in the works of social psychologist Michael Lamb,[35] traditional family refers to "a middle-class family with a bread-winning father and a stay-at-home mother, married to each other and raising their biological children," and nontraditional to exceptions to this rule. Most of the US households are now non-traditional under this definition.[36] Critics of the term "traditional family" point out that in most cultures and at most times, the extended family model has been most common, not the nuclear family,[37] though it has had a longer tradition in England[38] than in other parts of Europe and Asia which contributed large numbers of immigrants to the Americas. The nuclear family became the most common form in the U.S. in the 1960s and 1970s.[39]
|
42 |
+
|
43 |
+
In terms of communication patterns in families, there are a certain set of beliefs within the family that reflect how its members should communicate and interact. These family communication patterns arise from two underlying sets of beliefs. One being conversation orientation (the degree to which the importance of communication is valued) and two, conformity orientation (the degree to which families should emphasize similarities or differences regarding attitudes, beliefs, and values).[40]
|
44 |
+
|
45 |
+
A monogamous family is based on a legal or social monogamy. In this case, an individual has only one (official) partner during their lifetime or at any one time (i.e. serial monogamy).[41] This means that a person may not have several different legal spouses at the same time, as this is usually prohibited by bigamy laws, in jurisdictions that require monogamous marriages.
|
46 |
+
|
47 |
+
Polygamy is a marriage that includes more than two partners.[42][43] When a man is married to more than one wife at a time, the relationship is called polygyny; and when a woman is married to more than one husband at a time, it is called polyandry. If a marriage includes multiple husbands and wives, it can be called polyamory,[44] group or conjoint marriage.[43]
|
48 |
+
|
49 |
+
Polygyny is a form of plural marriage, in which a man is allowed more than one wife .[45] In modern countries that permit polygamy, polygyny is typically the only form permitted. Polygyny is practiced primarily (but not only) in parts of the Middle East and Africa; and is often associated with Islam, however, there are certain conditions in Islam that must be met to perform polygyny.[citation needed]
|
50 |
+
|
51 |
+
Polyandry is a form of marriage whereby a woman takes two or more husbands at the same time.[46] Fraternal polyandry, where two or more brothers are married to the same wife, is a common form of polyandry. Polyandry was traditionally practiced in areas of the Himalayan mountains, among Tibetans in Nepal, in parts of China and in parts of northern India. Polyandry is most common in societies marked by high male mortality or where males will often be apart from the rest of the family for a considerable period of time.[46]
|
52 |
+
|
53 |
+
A first-degree relative is one who shares 50% of your DNA through direct inheritance, such as a full sibling, parent or progeny.
|
54 |
+
|
55 |
+
There is another measure for the degree of relationship, which is determined by counting up generations to the first common ancestor and back down to the target individual, which is used for various genealogical and legal purposes.[citation needed]
|
56 |
+
|
57 |
+
In his book Systems of Consanguinity and Affinity of the Human Family, anthropologist Lewis Henry Morgan (1818–1881) performed the first survey of kinship terminologies in use around the world. Although much of his work is now considered dated, he argued that kinship terminologies reflect different sets of distinctions. For example, most kinship terminologies distinguish between sexes (the difference between a brother and a sister) and between generations (the difference between a child and a parent). Moreover, he argued, kinship terminologies distinguish between relatives by blood and marriage (although recently some anthropologists have argued that many societies define kinship in terms other than "blood").
|
58 |
+
|
59 |
+
Morgan made a distinction between kinship systems that use classificatory terminology and those that use descriptive terminology. Classificatory systems are generally and erroneously understood to be those that "class together" with a single term relatives who actually do not have the same type of relationship to ego. (What defines "same type of relationship" under such definitions seems to be genealogical relationship. This is problematic given that any genealogical description, no matter how standardized, employs words originating in a folk understanding of kinship.) What Morgan's terminology actually differentiates are those (classificatory) kinship systems that do not distinguish lineal and collateral relationships and those (descriptive) kinship systems that do. Morgan, a lawyer, came to make this distinction in an effort to understand Seneca inheritance practices. A Seneca man's effects were inherited by his sisters' children rather than by his own children.[49] Morgan identified six basic patterns of kinship terminologies:
|
60 |
+
|
61 |
+
Most Western societies employ Eskimo kinship terminology.[citation needed] This kinship terminology commonly occurs in societies with strong conjugal, where families have a degree of relative mobility. Typically, societies with conjugal families also favor neolocal residence; thus upon marriage, a person separates from the nuclear family of their childhood (family of orientation) and forms a new nuclear family (family of procreation). Such systems generally assume that the mother's husband is also the biological father. The system uses highly descriptive terms for the nuclear family and progressively more classificatory as the relatives become more and more collateral.
|
62 |
+
|
63 |
+
The system emphasizes the nuclear family. Members of the nuclear family use highly descriptive kinship terms, identifying directly only the husband, wife, mother, father, son, daughter, brother, and sister. All other relatives are grouped together into categories. Members of the nuclear family may be lineal or collateral. Kin, for whom these are family, refer to them in descriptive terms that build on the terms used within the nuclear family or use the nuclear family term directly.
|
64 |
+
|
65 |
+
Nuclear family of orientation
|
66 |
+
|
67 |
+
Nuclear conjugal family
|
68 |
+
|
69 |
+
Nuclear non-lineal family
|
70 |
+
|
71 |
+
A sibling is a collateral relative with a minimal removal. For collateral relatives with one additional removal, one generation more distant from a common ancestor on one side, more classificatory terms come into play. These terms (Aunt, Uncle, Niece, and Nephew) do not build on the terms used within the nuclear family as most are not traditionally members of the household. These terms do not traditionally differentiate between a collateral relatives and a person married to a collateral relative (both collateral and aggregate). Collateral relatives with additional removals on each side are Cousins. This is the most classificatory term and can be distinguished by degrees of collaterality and by generation (removal).
|
72 |
+
|
73 |
+
When only the subject has the additional removal, the relative is the subjects parents siblings, the terms Aunt and Uncle are used for female and male relatives respectively. When only the relative has the additional removal, the relative is the subjects siblings child, the terms Niece and Nephew are used for female and male relatives respectively. The spouse of a biological aunt or uncle is an aunt or uncle, and the nieces and nephews of a spouse are nieces and nephews. With further removal by the subject for aunts and uncles and by the relative for nieces and nephews the prefix "grand-" modifies these terms. With further removal the prefix becomes "great-grand-," adding another "great-" for each additional generation.
|
74 |
+
|
75 |
+
When the subject and the relative have an additional removal they are cousins. A cousin with minimal removal is a first cousin, i.e. the child of the subjects uncle or aunt. Degrees of collaterality and removals are used to more precisely describe the relationship between cousins. The degree is the number of generations subsequent to the common ancestor before a parent of one of the cousins is found, while the removal is the difference between the number of generations from each cousin to the common ancestor (the difference between the generations the cousins are from).[50][51]
|
76 |
+
|
77 |
+
Cousins of an older generation (in other words, one's parents' first cousins), although technically first cousins once removed, are often classified with "aunts" and "uncles."
|
78 |
+
|
79 |
+
English-speakers mark relationships by marriage (except for wife/husband) with the tag "-in-law." The mother and father of one's spouse become one's mother-in-law and father-in-law; the wife of one's son becomes one's daughter-in-law and the husband of one's daughter becomes one's son-in-law. The term "sister-in-law" refers to three essentially different relationships, either the wife of one's brother, or the sister of one's spouse. "Brother-in-law" is the husband of one's sister, or the brother of one's spouse. The terms "half-brother" and "half-sister" indicate siblings who share only one biological parent. The term "aunt-in-law" is the wife of one's uncle, or the aunt of one's spouse. "Uncle-in-law" is the husband of one's aunt, or the uncle of one's spouse. "Cousin-in-law" is the spouse of one's cousin, or the cousin of one's spouse. The term "niece-in-law" is the wife of one's nephew, or the niece of one's spouse. "Nephew-in-law" is the husband of one's niece, or the nephew of one's spouse. The grandmother and grandfather of one's spouse become one's grandmother-in-law and grandfather-in-law; the wife of one's grandson becomes one's granddaughter-in-law and the husband of one's granddaughter becomes one's grandson-in-law.
|
80 |
+
|
81 |
+
In Indian English a sibling in law who is the spouse of your sibling can be referred to as a co-sibling (specificity a co-sister[52] or co-brother[53]).
|
82 |
+
|
83 |
+
Patrilineality, also known as the male line or agnatic kinship, is a form of kinship system in which an individual's family membership derives from and is traced through his or her father's lineage.[54] It generally involves the inheritance of property, rights, names, or titles by persons related through male kin.
|
84 |
+
|
85 |
+
A patriline ("father line") is a person's father, and additional ancestors that are traced only through males. One's patriline is thus a record of descent from a man in which the individuals in all intervening generations are male. In cultural anthropology, a patrilineage is a consanguineal male and female kinship group, each of whose members is descended from the common ancestor through male forebears.
|
86 |
+
|
87 |
+
Matrilineality is a form of kinship system in which an individual's family membership derives from and is traced through his or her mother's lineage.
|
88 |
+
|
89 |
+
It may also correlate with a societal system in which each person is identified with their matriline—their mother's lineage—and which can involve the inheritance of property and titles. A matriline is a line of descent from a female ancestor to a descendant in which the individuals in all intervening generations are mothers – in other words, a "mother line".
|
90 |
+
|
91 |
+
In a matrilineal descent system, an individual is considered to belong to the same descent group as her or his mother. This matrilineal descent pattern is in contrasts to the more common pattern of patrilineal descent pattern.
|
92 |
+
|
93 |
+
Bilateral descent is a form of kinship system in which an individual's family membership derives from and is traced through both the paternal and maternal sides. The relatives on the mother's side and father's side are equally important for emotional ties or for transfer of property or wealth. It is a family arrangement where descent and inheritance are passed equally through both parents.[55] Families who use this system trace descent through both parents simultaneously and recognize multiple ancestors, but unlike with cognatic descent it is not used to form descent groups.[56]
|
94 |
+
|
95 |
+
Traditionally, this is found among some groups in West Africa, India, Australia, Indonesia, Melanesia, Malaysia and Polynesia. Anthropologists believe that a tribal structure based on bilateral descent helps members live in extreme environments because it allows individuals to rely on two sets of families dispersed over a wide area.[57]
|
96 |
+
|
97 |
+
Early scholars of family history applied Darwin's biological theory of evolution in their theory of evolution of family systems.[58] American anthropologist Lewis H. Morgan published Ancient Society in 1877 based on his theory of the three stages of human progress from Savagery through Barbarism to Civilization.[59] Morgan's book was the "inspiration for Friedrich Engels' book" The Origin of the Family, Private Property and the State published in 1884.[60]
|
98 |
+
|
99 |
+
Engels expanded Morgan's hypothesis that economical factors caused the transformation of primitive community into a class-divided society.[61] Engels' theory of resource control, and later that of Karl Marx, was used to explain the cause and effect of change in family structure and function. The popularity of this theory was largely unmatched until the 1980s, when other sociological theories, most notably structural functionalism, gained acceptance.
|
100 |
+
|
101 |
+
Contemporary society generally views the family as a haven from the world, supplying absolute fulfillment. Zinn and Eitzen discuss the image of the "family as haven ... a place of intimacy, love and trust where individuals may escape the competition of dehumanizing forces in modern society".[63]
|
102 |
+
During industrialization, "[t]he family as a repository of warmth and tenderness (embodied by the mother) stands in opposition to the competitive and aggressive world of commerce (embodied by the father). The family's task was to protect against the outside world."[64] However, Zinn and Eitzen note, "The protective image of the family has waned in recent years as the ideals of family fulfillment have taken shape. Today, the family is more compensatory than protective. It supplies what is vitally needed but missing in other social arrangements."[64]
|
103 |
+
|
104 |
+
"The popular wisdom", according to Zinn and Eitzen, sees the family structures of the past as superior to those today, and families as more stable and happier at a time when they did not have to contend with problems such as illegitimate children and divorce. They respond to this, saying, "there is no golden age of the family gleaming at us in the far back historical past."[65] "Desertion by spouses, illegitimate children, and other conditions that are considered characteristics of modern times existed in the past as well."[65]
|
105 |
+
|
106 |
+
Others argue that whether or not one views the family as "declining" depends on one's definition of "family". "Married couples have dropped below half of all American households. This drop is shocking from traditional forms of the family system. Only a fifth of households were following traditional ways of having married couples raising a family together."[67] In the Western World, marriages are no longer arranged for economic, social or political gain, and children are no longer expected to contribute to family income. Instead, people choose mates based on love. This increased role of love indicates a societal shift toward favoring emotional fulfilment and relationships within a family, and this shift necessarily weakens the institution of the family.[68]
Margaret Mead considers the family as a main safeguard to continuing human progress. Observing, "Human beings have learned, laboriously, to be human", she adds: "we hold our present form of humanity on trust, [and] it is possible to lose it" ... "It is not without significance that the most successful large-scale abrogations of the family have occurred not among simple savages, living close to the subsistence edge, but among great nations and strong empires, the resources of which were ample, the populations huge, and the power almost unlimited"[69]
Many countries (particularly Western ones) have, in recent years, changed their family laws in order to accommodate diverse family models. For instance, in Scotland (in the United Kingdom), the Family Law (Scotland) Act 2006 provides cohabitants with some limited rights.[70] In 2010, Ireland enacted the Civil Partnership and Certain Rights and Obligations of Cohabitants Act 2010. There have also been moves at an international level, most notably the Council of Europe European Convention on the Legal Status of Children Born out of Wedlock,[71] which came into force in 1978. Countries which ratify it must ensure that children born outside marriage are provided with the legal rights stipulated in the text of this Convention. The Convention was ratified by the UK in 1981 and by Ireland in 1988.[72]
In the United States, one in five mothers has children by different fathers; among mothers with two or more children the figure is higher, with 28% having children with at least two different men. Such families are more common among Blacks and Hispanics and among the lower socioeconomic class.[73]
However, in Western society, the single-parent family has been growing more accepted and has begun to make an impact on culture. Single-parent families are more commonly headed by single mothers than by single fathers. Beyond having to rear their children on their own, these families sometimes face difficult issues such as low income, which makes it hard to pay for rent, child care, and the other necessities of a healthy and safe home.
Domestic violence (DV) is violence that happens within the family. The legal and social understanding of the concept of DV differs by culture. The definition of the term "domestic violence" varies, depending on the context in which it is used.[74] It may be defined differently in medical, legal, political or social contexts. The definitions have varied over time, and vary in different parts of the world.
The Convention on preventing and combating violence against women and domestic violence sets out a definition of domestic violence.[75]
In 1993, the United Nations Declaration on the Elimination of Violence against Women identified domestic violence as one of three contexts in which violence against women occurs.[76]
Family violence is a broader definition, often used to include child abuse, elder abuse, and other violent acts between family members.[77]
Child abuse is also formally defined by the WHO.[78]
Elder abuse is, according to the WHO: "a single, or repeated act, or lack of appropriate action, occurring within any relationship where there is an expectation of trust which causes harm or distress to an older person".[81]
Child abuse is the physical, sexual or emotional maltreatment or neglect of a child or children.[82] In the United States, the Centers for Disease Control and Prevention (CDC) and the Department for Children and Families (DCF) define child maltreatment as any act or series of acts of commission or omission by a parent or other caregiver that results in harm, potential for harm, or threat of harm to a child.[83] Child abuse can occur in a child's home, or in the organizations, schools or communities the child interacts with. There are four major categories of child abuse: neglect, physical abuse, psychological or emotional abuse, and sexual abuse.
Abuse of parents by their children is a common but underreported and under-researched subject. Parents are quite often subjected to levels of childhood aggression in excess of normal childhood aggressive outbursts, typically in the form of verbal or physical abuse. Parents often feel shame and humiliation about the problem, so they rarely seek help, and there is usually little or no help available anyway.[84][85]
Elder abuse is "a single, or repeated act, or lack of appropriate action, occurring within any relationship where there is an expectation of trust, which causes harm or distress to an older person."[86] This definition has been adopted by the World Health Organization from a definition put forward by Action on Elder Abuse in the UK. Laws protecting the elderly from abuse are similar to, and related to, laws protecting dependent adults from abuse.
The core element to the harm of elder abuse is the "expectation of trust" of the older person toward their abuser. Thus, it includes harms by people the older person knows or with whom they have a relationship, such as a spouse, partner or family member, a friend or neighbor, or people that the older person relies on for services. Many forms of elder abuse are recognized as types of domestic violence or family violence.
Forced and child marriages are practiced in certain regions of the world, particularly in Asia and Africa, and these types of marriages are associated with a high rate of domestic violence.[87][88][89][90]
A forced marriage is a marriage in which one or both participants are married without their freely given consent.[91] The line between forced and consensual marriage may become blurred, because the social norms of many cultures dictate that one should never oppose the desire of one's parents or relatives in regard to the choice of a spouse; in such cultures violence, threats, or intimidation need not occur, since the person simply "consents" to the marriage, even if he or she does not want it, out of implied social pressure and duty. The customs of bride price and dowry, which exist in parts of the world, can lead to people being bought and sold into marriage.[92][93]
A child marriage is a marriage where one or both spouses are under 18.[94][87] Child marriage was common throughout history but is today condemned by international human rights organizations.[95][96][97] Child marriages are often arranged between the families of the future bride and groom, sometimes as soon as the girl is born.[95] Child marriages can also occur in the context of marriage by abduction.[95]
Family honor is an abstract concept involving the perceived quality of worthiness and respectability that affects the social standing and the self-evaluation of a group of related people, both corporately and individually.[98][99] The family is viewed as the main source of honor, and the community highly values the relationship between honor and the family.[100] The conduct of family members reflects upon family honor and the way the family perceives itself, and is perceived by others.[99] In cultures of honor, maintaining the family honor is often perceived as more important than either individual freedom or individual achievement.[101] In extreme cases, engaging in acts that are deemed to tarnish the honor of the family results in honor killings. An honor killing is the homicide of a member of a family or social group by other members, due to the perpetrators' belief that the victim has brought shame or dishonor upon the family or community, usually for reasons such as refusing to enter an arranged marriage, being in a relationship that is disapproved of by their relatives, having sex outside marriage, becoming the victim of rape, dressing in ways deemed inappropriate, or engaging in homosexual relations.[102][103][104][105][106]
A family is often part of a sharing economy with common ownership.
Dowry is property (money, goods, or estate) that a wife or wife's family gives to her husband when the wife and husband marry.[107] Offering dowry was common in many cultures historically (including in Europe and North America), but this practice today is mostly restricted to some areas primarily in the Indian subcontinent.
Bride price (also bridewealth or bride token) is property paid by the groom or his family to the parents of a woman upon the marriage of their daughter to the groom. It is practiced mostly in Sub-Saharan Africa, parts of South-East Asia (Thailand, Cambodia), and parts of Central Asia.
Dower is property given to the bride herself by the groom at the time of marriage, and which remains under her ownership and control.[108]
In some countries married couples benefit from various taxation advantages not available to a single person or to unmarried couples. For example, spouses may be allowed to average their combined incomes, so that under a progressive tax each is taxed as if earning half of the couple's total. Some jurisdictions recognize common law marriage or de facto relations for this purpose. In some jurisdictions there is also the option of civil partnership or domestic partnership.
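To make the arithmetic of income averaging concrete, the short sketch below (in Python) works through one hypothetical case; the two-bracket schedule, the rates, and the incomes are invented for illustration and are not drawn from any particular country's tax code.

    def tax(income):
        # Hypothetical progressive schedule: 10% on the first 50,000, 30% above it.
        return 0.10 * min(income, 50_000) + 0.30 * max(income - 50_000, 0)

    # One spouse earns 80,000 and the other 20,000; taxed separately:
    separate = tax(80_000) + tax(20_000)       # 14,000 + 2,000 = 16,000
    # With averaging, each is taxed as if earning half of the combined 100,000:
    averaged = 2 * tax((80_000 + 20_000) / 2)  # 2 * 5,000 = 10,000
    print(separate, averaged)

The saving comes entirely from the progressivity of the schedule, since averaging keeps income out of the top bracket; under a flat tax the two figures would coincide.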
Different property regimes exist for spouses. In many countries, each marriage partner has the choice of keeping their property separate or combining properties. In the latter case, called community property, each spouse owns half of the property when the marriage ends in divorce. In the absence of a will or trust, property owned by the deceased is generally inherited by the surviving spouse.
Reproductive rights are legal rights and freedoms relating to reproduction and reproductive health. These include the right to decide on issues regarding the number of children born, family planning, contraception, and private life, free from coercion and discrimination; as well as the right to access health services and adequate information.[109][110][111][112] According to UNFPA, reproductive rights "include the right to decide the number, timing and spacing of children, the right to voluntarily marry and establish a family, and the right to the highest attainable standard of health, among others".[113] Family planning refers to the factors that may be considered by individuals and couples in order for them to control their fertility, anticipate and attain the desired number of children and the spacing and timing of their births.[114][115]
The state and church have been, and still are in some countries, involved in controlling the size of families, often using coercive methods, such as bans on contraception or abortion (where the policy is a natalist one—for example through tax on childlessness) or conversely, discriminatory policies against large families or even forced abortions (e.g., China's one-child policy in place from 1978 to 2015). Forced sterilization has often targeted ethnic minority groups, such as Roma women in Eastern Europe,[116][117] or indigenous women in Peru (during the 1990s).[118]
The parents' rights movement is a movement whose members are primarily interested in issues affecting parents and children related to family law, specifically parental rights and obligations. Mothers' rights movements focus on maternal health, workplace issues such as labor rights, breastfeeding, and rights in family law. The fathers' rights movement is a movement whose members are primarily interested in issues related to family law, including child custody and child support, that affect fathers and their children.[119]
Children's rights are the human rights of children, with particular attention to the rights of special protection and care afforded to minors, including their right to association with both parents, their right to human identity, their right to have their other basic needs provided for, and their right to be free from violence and abuse.[120][121][122]
Each jurisdiction has its own marriage laws. These laws differ significantly from country to country, and they are often controversial. Areas of controversy include women's rights as well as same-sex marriage.
Legal reforms to family laws have taken place in many countries during the past few decades. These dealt primarily with gender equality within marriage and with divorce laws. Women have been given equal rights in marriage in many countries, reversing older family laws based on the dominant legal role of the husband. Coverture, which was enshrined in the common law of England and the US for several centuries and throughout most of the 19th century, was abolished. In some European countries the changes that led to gender equality were slower. The period of 1975–1979 saw a major overhaul of family laws in countries such as Italy,[123][124] Spain,[125] Austria,[126] West Germany,[127][128] and Portugal.[129] In 1978, the Council of Europe passed Resolution (78) 37 on equality of spouses in civil law.[130] Switzerland was among the last European countries to establish full gender equality in marriage: in 1985, a referendum guaranteed women legal equality with men within marriage,[131][132] and the new reforms came into force in January 1988.[133] In Greece, in 1983, legislation was passed guaranteeing equality between spouses, abolishing dowry, and ending legal discrimination against illegitimate children.[134][135] In 1981, Spain abolished the requirement that married women must have their husbands' permission to initiate judicial proceedings;[136] similar reforms followed in the Netherlands[137][138] and France[139] in the 1980s. In recent decades, the marital power has also been abolished in African countries that had this doctrine, but many African countries that are former French colonies still have discriminatory marriage regulations, originating in the Napoleonic Code that inspired their laws.[136] In some countries (predominantly Roman Catholic) divorce was legalized only recently (e.g. Italy (1970), Portugal (1975), Brazil (1977), Spain (1981), Argentina (1987), Ireland (1996), Chile (2004) and Malta (2011)), although annulment and legal separation were options. The Philippines still does not allow divorce (see Divorce law by country). The laws pertaining to the situation of children born outside marriage have also been revised in many countries (see Legitimacy (family law)).
Family medicine is a medical specialty devoted to comprehensive health care for people of all ages; it is based on knowledge of the patient in the context of the family and the community, emphasizing disease prevention and health promotion.[141] The importance of family medicine is being increasingly recognized.[142]
Maternal mortality or maternal death is defined by WHO as "the death of a woman while pregnant or within 42 days of termination of pregnancy, irrespective of the duration and site of the pregnancy, from any cause related to or aggravated by the pregnancy or its management but not from accidental or incidental causes."[144] Historically, maternal mortality was a major cause of women's death. In recent decades, advances in healthcare have caused maternal mortality rates to drop dramatically, especially in Western countries. Maternal mortality, however, remains a serious problem in many African and Asian countries.[144][145]
Infant mortality is the death of a child less than one year of age. Child mortality is the death of a child before the child's fifth birthday. Like maternal mortality, infant and child mortality were common throughout history, but have decreased significantly in modern times.[146][147]
While in many parts of the world family policies seek to promote a gender-equal organization of the family life, in others the male-dominated family continues to be the official policy of the authorities, which is also supported by law. For instance, the Civil Code of Iran states at Article 1105: "In relations between husband and wife; the position of the head of the family is the exclusive right of the husband".[148]
In some parts of the world, some governments promote a specific form of family, such as that based on traditional family values. The term "family values" is often used in political discourse in some countries, its general meaning being that of traditional or cultural values that pertain to the family's structure, function, roles, beliefs, attitudes, and ideals, usually involving the "traditional family"—a middle-class family with a breadwinner father and a homemaker mother, raising their biological children. Any deviation from this family model is considered a "nontraditional family".[149] These family ideals are often advanced through policies such as marriage promotion. Some jurisdictions outlaw practices which they deem as socially or religiously unacceptable, such as fornication, cohabitation or adultery.
Work-family balance is a concept involving proper prioritizing between work/career and family life. It includes issues relating to the way work and family life intersect and influence each other. At a political level, it is reflected through policies such as maternity leave and paternity leave. Since the 1950s, social scientists as well as feminists have increasingly criticized gendered arrangements of work and care and the male breadwinner role, and policies increasingly target men as fathers as a tool for changing gender relations.[150]
Article 8 of the European Convention on Human Rights provides a right to respect for one's "private and family life, his home and his correspondence", subject to certain restrictions that are "in accordance with law" and "necessary in a democratic society".[151]
Article 8 – Right to respect for private and family life
1. Everyone has the right to respect for his private and family life, his home and his correspondence.
2. There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.
Certain social scientists have advocated the abolition of the family. An early opponent of the family was Socrates, whose position was outlined by Plato in The Republic.[152] In Book 5 of The Republic, Socrates tells his interlocutors that a just city is one in which citizens have no family ties.[153][154]
The family being such a deep-rooted and much-venerated institution, few intellectuals have ventured to speak against it. Familialism has been atypically defined as a "social structure where ... a family's values are held in higher esteem than the values of the individual members of the family." Favoritism granted to relatives regardless of merit is called nepotism.
The Russian-American rationalist and individualist philosopher, novelist and playwright Ayn Rand compared partiality towards consanguinity with racism, as a small-scale manifestation of the latter.[155] "The worship of the family is merely racism, like a crudely primitive first installment on the worship of the tribe. It places the accident of birth above a man's values and duty to the tribe above a man's right to his own life."[156] Additionally, she spoke in favor of childfree lifestyle, while following it herself.[155]
The British social critic, poet, mountaineer and occultist Aleister Crowley censured the institution of family in his works: "Horrid word, family! Its very etymology accuses it of servility and stagnation. / Latin, famulus, a servant; Oscan, Faamat, he dwells. ... [T]hink what horrid images it evokes from the mind. Not only Victorian; wherever the family has been strong, it has always been an engine of tyranny. Weak members or weak neighbours: it is the mob spirit crushing genius, or overwhelming opposition by brute arithmetic. ... In every Magical, or similar system, it is invariably the first condition which the Aspirant must fulfill: he must once and for all and for ever put his family outside his magical circle."[157]
One of the controversies regarding the family is the application of the concept of social justice to the private sphere of family relations, in particular with regard to the rights of women and children. Throughout much of history, most philosophers who advocated for social justice focused on the public political arena, not on family structures; the family was often seen as a separate entity which needed to be protected from outside state intrusion. One notable exception was John Stuart Mill, who, in his work The Subjection of Women, advocated for greater rights for women within marriage and family.[158] Second-wave feminists argued that the personal is political, stating that there are strong connections between personal experiences and the larger social and political structures. In the context of the feminist movement of the 1960s and 1970s, this was a challenge to the nuclear family and family values as they were understood then.[159] Feminists focused on domestic violence, arguing that the reluctance—in law or in practice—of the state to intervene and offer protection to women who have been abused within the family is in violation of women's human rights and is the result of an ideology which places family relations outside the conceptual framework of human rights.[160]
In 2015, Nicholas Eberstadt, political economist at the American Enterprise Institute in Washington, described a "global flight from family" in an opinion piece in the Wall Street Journal.[161] Statistics from an infographic by Olivier Ballou showed that,[162]
In 2013, just over 40% of US babies were born outside marriage. The Census Bureau estimated that 27% of all children lived in a fatherless home. Europe has seen a surge in child-free adults: one in five 40-something women are childless in Sweden and in Switzerland, one in four in Italy, and one in three in Berlin. So-called traditional societies are seeing the same trend: about one-sixth of Japanese women in their forties have never married, and about 30% of all women that age are childless.
However, Swedish statisticians reported in 2013 that, in contrast to many countries, since the 2000s fewer children have experienced their parents' separation, childlessness had decreased in Sweden, and marriages had increased. It had also become more common for couples to have a third child, suggesting that the nuclear family was no longer in decline in Sweden.[163]:10
en/1939.html.txt
ADDED
@@ -0,0 +1,217 @@
In folklore, a ghost (sometimes known as an apparition, haunt, phantom, poltergeist, shade, specter or spectre, spirit, spook, and wraith) is the soul or spirit of a dead person or animal that can appear to the living. In ghostlore, descriptions of ghosts vary widely from an invisible presence to translucent or barely visible wispy shapes, to realistic, lifelike forms. The deliberate attempt to contact the spirit of a deceased person is known as necromancy, or in spiritism as a séance.
The belief in the existence of an afterlife, as well as manifestations of the spirits of the dead, is widespread, dating back to animism or ancestor worship in pre-literate cultures. Certain religious practices—funeral rites, exorcisms, and some practices of spiritualism and ritual magic—are specifically designed to rest the spirits of the dead. Ghosts are generally described as solitary, human-like essences, though stories of ghostly armies and the ghosts of animals rather than humans have also been recounted.[2][3] They are believed to haunt particular locations, objects, or people they were associated with in life. According to a 2009 study by the Pew Research Center, 18% of Americans say they have seen a ghost.[4]
The overwhelming consensus of science is that ghosts do not exist.[5] Their existence is impossible to falsify,[5] and ghost hunting has been classified as pseudoscience.[6][7][8] Despite centuries of investigation, there is no scientific evidence that any location is inhabited by spirits of the dead.[6][9] Historically, certain toxic and psychoactive plants (such as datura and Hyoscyamus niger), whose use has long been associated with necromancy and the underworld, have been shown to contain anticholinergic compounds that are pharmacologically linked to dementia (specifically DLB) as well as histological patterns of neurodegeneration.[10][11] Recent research has indicated that ghost sightings may be related to degenerative brain diseases such as Alzheimer's disease.[12] Common prescription medication and over-the-counter drugs (such as sleep aids) may also, in rare instances, cause ghost-like hallucinations, particularly zolpidem and diphenhydramine.[13] Older reports linked carbon monoxide poisoning to ghost-like hallucinations.[14]
In folklore studies, ghosts fall within the motif index designation E200-E599 ("Ghosts and other revenants").
The English word ghost continues Old English gāst, from Proto-Germanic *gaistaz. It is common to West Germanic, but lacking in North Germanic and East Germanic (the equivalent word in Gothic is ahma, Old Norse has andi m., önd f.). The prior Proto-Indo-European form was *ǵʰéysd-os, from the root *ǵʰéysd- denoting "fury, anger" reflected in Old Norse geisa "to rage". The Germanic word is recorded as masculine only, but likely continues a neuter s-stem. The original meaning of the Germanic word would thus have been an animating principle of the mind, in particular capable of excitation and fury (compare óðr). In Germanic paganism, "Germanic Mercury", and the later Odin, was at the same time the conductor of the dead and the "lord of fury" leading the Wild Hunt.
Besides denoting the human spirit or soul, both of the living and the deceased, the Old English word is used as a synonym of Latin spiritus also in the meaning of "breath" or "blast" from the earliest attestations (9th century). It could also denote any good or evil spirit, such as angels and demons; the Anglo-Saxon gospel refers to the demonic possession of Matthew 12:43 as se unclæna gast. Also from the Old English period, the word could denote the spirit of God, viz. the "Holy Ghost".
The now-prevailing sense of "the soul of a deceased person, spoken of as appearing in a visible form" only emerges in Middle English (14th century). The modern noun does, however, retain a wider field of application, extending on one hand to "soul", "spirit", "vital principle", "mind", or "psyche", the seat of feeling, thought, and moral judgement; on the other hand used figuratively of any shadowy outline, or fuzzy or unsubstantial image; in optics, photography, and cinematography especially, a flare, secondary image, or spurious signal.[15]
The synonym spook is a Dutch loanword, akin to Low German spôk (of uncertain etymology); it entered the English language via American English in the 19th century.[16][17][18][19] Alternative words in modern usage include spectre (altn. specter; from Latin spectrum), the Scottish wraith (of obscure origin), phantom (via French ultimately from Greek phantasma, compare fantasy) and apparition. The term shade in classical mythology translates Greek σκιά,[20] or Latin umbra,[21] in reference to the notion of spirits in the Greek underworld. "Haint" is a synonym for ghost used in regional English of the southern United States,[22] and the "haint tale" is a common feature of southern oral and literary tradition.[23] The term poltergeist is a German word, literally a "noisy ghost", for a spirit said to manifest itself by invisibly moving and influencing objects.[24]
Wraith is a Scots word for ghost, spectre, or apparition. It appeared in Scottish Romanticist literature, and acquired the more general or figurative sense of portent or omen. In 18th- to 19th-century Scottish literature, it also applied to aquatic spirits. The word has no commonly accepted etymology; the OED notes "of obscure origin" only.[25] An association with the verb writhe was the etymology favored by J. R. R. Tolkien.[26] Tolkien's use of the word in the naming of the creatures known as the Ringwraiths has influenced later usage in fantasy literature. Bogey[27] or bogy/bogie is a term for a ghost, and appears in Scottish poet John Mayne's Hallowe'en in 1780.[28][29]
A revenant is a deceased person returning from the dead to haunt the living, either as a disembodied ghost or alternatively as an animated ("undead") corpse. Also related is the concept of a fetch, the visible ghost or spirit of a person yet alive.
A notion of the transcendent, supernatural, or numinous, usually involving entities like ghosts, demons, or deities, is a cultural universal.[30] In pre-literate folk religions, these beliefs are often summarized under animism and ancestor worship. Some people believe the ghost or spirit never leaves Earth until there is no-one left to remember the one who died.[31]
In many cultures, malignant, restless ghosts are distinguished from the more benign spirits involved in ancestor worship.[32]
Ancestor worship typically involves rites intended to prevent revenants, vengeful spirits of the dead, imagined as starving and envious of the living. Strategies for preventing revenants may either include sacrifice, i.e., giving the dead food and drink to pacify them, or magical banishment of the deceased to force them not to return. Ritual feeding of the dead is performed in traditions like the Chinese Ghost Festival or the Western All Souls' Day. Magical banishment of the dead is present in many of the world's burial customs. The bodies found in many tumuli (kurgan) had been ritually bound before burial,[33] and the custom of binding the dead persists, for example, in rural Anatolia.[34]
Nineteenth-century anthropologist James Frazer stated in his classic work The Golden Bough that souls were seen as the creature within that animated the body.[35]
Although the human soul was sometimes symbolically or literally depicted in ancient cultures as a bird or other animal, it appears to have been widely held that the soul was an exact reproduction of the body in every feature, even down to clothing the person wore. This is depicted in artwork from various ancient cultures, including such works as the Egyptian Book of the Dead, which shows deceased people in the afterlife appearing much as they did before death, including the style of dress.
While deceased ancestors are universally regarded as venerable, and often believed to have a continued presence in some form of afterlife, the spirit of a deceased person that persists in the material world (a ghost) is regarded as an unnatural or undesirable state of affairs and the idea of ghosts or revenants is associated with a reaction of fear. This is universally the case in pre-modern folk cultures, but fear of ghosts also remains an integral aspect of the modern ghost story, Gothic horror, and other horror fiction dealing with the supernatural.
Another widespread belief concerning ghosts is that they are composed of a misty, airy, or subtle material. Anthropologists link this idea to early beliefs that ghosts were the person within the person (the person's spirit), most noticeable in ancient cultures as a person's breath, which upon exhaling in colder climates appears visibly as a white mist.[31] This belief may have also fostered the metaphorical meaning of "breath" in certain languages, such as the Latin spiritus and the Greek pneuma, which by analogy became extended to mean the soul. In the Bible, God is depicted as synthesising Adam, as a living soul, from the dust of the Earth and the breath of God.
In many traditional accounts, ghosts were often thought to be deceased people looking for vengeance (vengeful ghosts), or imprisoned on earth for bad things they did during life. The appearance of a ghost has often been regarded as an omen or portent of death. Seeing one's own ghostly double or "fetch" is a related omen of death.[36]
White ladies were reported to appear in many rural areas, and supposed to have died tragically or suffered trauma in life. White Lady legends are found around the world. Common to many of them is the theme of losing a child or husband and a sense of purity, as opposed to the Lady in Red ghost that is mostly attributed to a jilted lover or prostitute. The White Lady ghost is often associated with an individual family line or regarded as a harbinger of death similar to a banshee.[37][38][needs context]
Legends of ghost ships have existed since the 18th century; most notable of these is the Flying Dutchman. This theme has been used in literature in The Rime of the Ancient Mariner by Coleridge.
Ghosts are often depicted as being covered in a shroud and/or dragging chains.[39]
A place where ghosts are reported is described as haunted, and often seen as being inhabited by spirits of the deceased who may have been former residents or were familiar with the property. Supernatural activity inside homes is said to be mainly associated with violent or tragic events in the building's past such as murder, accidental death, or suicide—sometimes in the recent or ancient past. However, not all hauntings are at a place of a violent death, or even on violent grounds. Many cultures and religions believe the essence of a being, such as the 'soul', continues to exist. Some religious views argue that the 'spirits' of those who have died have not 'passed over' and are trapped inside the property where their memories and energy are strong.
There are many references to ghosts in Mesopotamian religions – the religions of Sumer, Babylon, Assyria, and other early states in Mesopotamia. Traces of these beliefs survive in the later Abrahamic religions that came to dominate the region.[40]
Ghosts were thought to be created at time of death, taking on the memory and personality of the dead person. They traveled to the netherworld, where they were assigned a position, and led an existence similar in some ways to that of the living.
Relatives of the dead were expected to make offerings of food and drink to the dead to ease their conditions. If they did not, the ghosts could inflict misfortune and illness on the living. Traditional healing practices ascribed a variety of illnesses to the action of ghosts, while others were caused by gods or demons.[41]
There was widespread belief in ghosts in ancient Egyptian culture. The soul and spirit were believed to exist after death, with the ability to assist or harm the living, and the possibility of a second death. Over a period of more than 2,500 years, Egyptian beliefs about the nature of the afterlife evolved constantly. Many of these beliefs were recorded in hieroglyph inscriptions, papyrus scrolls and tomb paintings. The Egyptian Book of the Dead compiles some of the beliefs from different periods of ancient Egyptian history.[42]
In modern times, the fanciful concept of a mummy coming back to life and wreaking vengeance when disturbed has spawned a whole genre of horror stories and films.[43]
The Hebrew Bible contains few references to ghosts, associating spiritism with forbidden occult activities (cf. Deuteronomy 18:11). The most notable reference is in the First Book of Samuel (I Samuel 28:3–19 KJV), in which a disguised King Saul has the Witch of Endor summon the spirit or ghost of Samuel.
Ghosts appeared in Homer's Odyssey and Iliad, in which they were described as vanishing "as a vapor, gibbering and whining into the earth". Homer's ghosts had little interaction with the world of the living. Periodically they were called upon to provide advice or prophecy, but they do not appear to be particularly feared. Ghosts in the classical world often appeared in the form of vapor or smoke, but at other times they were described as being substantial, appearing as they had been at the time of death, complete with the wounds that killed them.[44]
By the 5th century BC, classical Greek ghosts had become haunting, frightening creatures who could work to either good or evil purposes. The spirit of the dead was believed to hover near the resting place of the corpse, and cemeteries were places the living avoided. The dead were to be ritually mourned through public ceremony, sacrifice, and libations, or else they might return to haunt their families. The ancient Greeks held annual feasts to honor and placate the spirits of the dead, to which the family ghosts were invited, and after which they were "firmly invited to leave until the same time next year."[45]
The 5th-century BC play Oresteia includes an appearance of the ghost of Clytemnestra, one of the first ghosts to appear in a work of fiction.[46]
The ancient Romans believed a ghost could be used to exact revenge on an enemy by scratching a curse on a piece of lead or pottery and placing it into a grave.[47]
Plutarch, in the 1st century AD, described the haunting of the baths at Chaeronea by the ghost of a murdered man. The ghost's loud and frightful groans caused the people of the town to seal up the doors of the building.[48] Another celebrated account of a haunted house from the ancient classical world is given by Pliny the Younger (c. 50 AD).[49] Pliny describes the haunting of a house in Athens, which was bought by the Stoic philosopher Athenodorus, who lived about 100 years before Pliny. Knowing that the house was supposedly haunted, Athenodorus intentionally set up his writing desk in the room where the apparition was said to appear and sat there writing until late at night when he was disturbed by a ghost bound in chains. He followed the ghost outside where it indicated a spot on the ground. When Athenodorus later excavated the area, a shackled skeleton was unearthed. The haunting ceased when the skeleton was given a proper reburial.[50] The writers Plautus and Lucian also wrote stories about haunted houses.
In the New Testament, according to Luke 24:37–39,[51] following his resurrection, Jesus was forced to persuade the Disciples that he was not a ghost (some versions of the Bible, such as the KJV and NKJV, use the term "spirit"). Similarly, Jesus' followers at first believed he was a ghost (spirit) when they saw him walking on water.
One of the first persons to express disbelief in ghosts was Lucian of Samosata in the 2nd century AD. In his satirical novel The Lover of Lies (circa 150 AD), he relates how Democritus "the learned man from Abdera in Thrace" lived in a tomb outside the city gates to prove that cemeteries were not haunted by the spirits of the departed. Lucian relates how he persisted in his disbelief despite practical jokes perpetrated by "some young men of Abdera" who dressed up in black robes with skull masks to frighten him.[52] This account by Lucian notes something about the popular classical expectation of how a ghost should look.
In the 5th century AD, the Christian priest Constantius of Lyon recorded an instance of the recurring theme of the improperly buried dead who come back to haunt the living, and who can only cease their haunting when their bones have been discovered and properly reburied.[53]
Ghosts reported in medieval Europe tended to fall into two categories: the souls of the dead, or demons. The souls of the dead returned for a specific purpose. Demonic ghosts existed only to torment or tempt the living. The living could tell them apart by demanding their purpose in the name of Jesus Christ. The soul of a dead person would divulge its mission, while a demonic ghost would be banished at the sound of the Holy Name.[54]
Most ghosts were souls assigned to Purgatory, condemned for a specific period to atone for their transgressions in life. Their penance was generally related to their sin. For example, the ghost of a man who had been abusive to his servants was condemned to tear off and swallow bits of his own tongue; the ghost of another man, who had neglected to leave his cloak to the poor, was condemned to wear the cloak, now "heavy as a church tower". These ghosts appeared to the living to ask for prayers to end their suffering. Other dead souls returned to urge the living to confess their sins before their own deaths.[55]
Medieval European ghosts were more substantial than ghosts described in the Victorian age, and there are accounts of ghosts being wrestled with and physically restrained until a priest could arrive to hear its confession. Some were less solid, and could move through walls. Often they were described as paler and sadder versions of the person they had been while alive, and dressed in tattered gray rags. The vast majority of reported sightings were male.[56]
There were some reported cases of ghostly armies, fighting battles at night in the forest, or in the remains of an Iron Age hillfort, as at Wandlebury, near Cambridge, England. Living knights were sometimes challenged to single combat by phantom knights, which vanished when defeated.[57]
From the medieval period an apparition of a ghost is recorded from 1211, at the time of the Albigensian Crusade.[58] Gervase of Tilbury, Marshal of Arles, wrote that the image of Guilhem, a boy recently murdered in the forest, appeared in his cousin's home in Beaucaire, near Avignon. This series of "visits" lasted all of the summer. Through his cousin, who spoke for him, the boy allegedly held conversations with anyone who wished, until the local priest requested to speak to the boy directly, leading to an extended disquisition on theology. The boy narrated the trauma of death and the unhappiness of his fellow souls in Purgatory, and reported that God was most pleased with the ongoing Crusade against the Cathar heretics, launched three years earlier. The time of the Albigensian Crusade in southern France was marked by intense and prolonged warfare, this constant bloodshed and dislocation of populations being the context for these reported visits by the murdered boy.
Haunted houses are featured in the 9th-century Arabian Nights (such as the tale of Ali the Cairene and the Haunted House in Baghdad).[59]
Renaissance magic took a revived interest in the occult, including necromancy. In the era of the Reformation and Counter Reformation, there was frequently a backlash against unwholesome interest in the dark arts, typified by writers such as Thomas Erastus.[60] The Swiss Reformed pastor Ludwig Lavater supplied one of the most frequently reprinted books of the period with his Of Ghosts and Spirits Walking By Night.[61]
The Child Ballad "Sweet William's Ghost" (1868) recounts the story of a ghost returning to his fiancée begging her to free him from his promise to marry her. He cannot marry her because he is dead but her refusal would mean his damnation. This reflects a popular British belief that the dead haunted their lovers if they took up with a new love without some formal release.[62] "The Unquiet Grave" expresses a belief even more widespread, found in various locations over Europe: ghosts can stem from the excessive grief of the living, whose mourning interferes with the dead's peaceful rest.[63] In many folktales from around the world, the hero arranges for the burial of a dead man. Soon after, he gains a companion who aids him and, in the end, the hero's companion reveals that he is in fact the dead man.[64] Instances of this include the Italian fairy tale "Fair Brow" and the Swedish "The Bird 'Grip'".
Spiritualism is a monotheistic belief system or religion, postulating a belief in God, but with a distinguishing feature of belief that spirits of the dead residing in the spirit world can be contacted by "mediums", who can then provide information about the afterlife.[65]
Spiritualism developed in the United States and reached its peak growth in membership from the 1840s to the 1920s, especially in English-language countries.[66][67] By 1897, it was said to have more than eight million followers in the United States and Europe,[68] mostly drawn from the middle and upper classes, while the corresponding movement in continental Europe and Latin America is known as Spiritism.
The religion flourished for a half century without canonical texts or formal organization, attaining cohesion by periodicals, tours by trance lecturers, camp meetings, and the missionary activities of accomplished mediums.[69] Many prominent Spiritualists were women. Most followers supported causes such as the abolition of slavery and women's suffrage.[66] By the late 1880s, credibility of the informal movement weakened, due to accusations of fraud among mediums, and formal Spiritualist organizations began to appear.[66] Spiritualism is currently practiced primarily through various denominational Spiritualist churches in the United States and United Kingdom.
Spiritism, or French spiritualism, is based on the five books of the Spiritist Codification written by the French educator Hippolyte Léon Denizard Rivail under the pseudonym Allan Kardec, reporting séances in which he observed a series of phenomena that he attributed to incorporeal intelligence (spirits). His assumption of spirit communication was validated by many contemporaries, among them many scientists and philosophers who attended séances and studied the phenomena. His work was later extended by writers like Leon Denis, Arthur Conan Doyle, Camille Flammarion, Ernesto Bozzano, Chico Xavier, Divaldo Pereira Franco, Waldo Vieira, Johannes Greber,[70] and others.
Spiritism has adherents in many countries throughout the world, including Spain, United States, Canada,[71] Japan, Germany, France, England, Argentina, Portugal, and especially Brazil, which has the largest proportion and greatest number of followers.[72]
The physician John Ferriar wrote "An Essay Towards a Theory of Apparitions" in 1813 in which he argued that sightings of ghosts were the result of optical illusions. Later the French physician Alexandre Jacques François Brière de Boismont published On Hallucinations: Or, the Rational History of Apparitions, Dreams, Ecstasy, Magnetism, and Somnambulism in 1845 in which he claimed sightings of ghosts were the result of hallucinations.[73][74]
David Turner, a retired physical chemist, suggested that ball lightning could cause inanimate objects to move erratically.[75]
Joe Nickell of the Committee for Skeptical Inquiry wrote that there was no credible scientific evidence that any location was inhabited by spirits of the dead.[76] Limitations of human perception and ordinary physical explanations can account for ghost sightings; for example, air pressure changes in a home causing doors to slam, humidity changes causing boards to creak, condensation in electrical connections causing intermittent behavior, or lights from a passing car reflected through a window at night. Pareidolia, an innate tendency to recognize patterns in random perceptions, is what some skeptics believe causes people to believe that they have 'seen ghosts'.[77] Reports of ghosts "seen out of the corner of the eye" may be accounted for by the sensitivity of human peripheral vision. According to Nickell, peripheral vision can easily mislead, especially late at night when the brain is tired and more likely to misinterpret sights and sounds.[78] Nickell further states, "science cannot substantiate the existence of a 'life energy' that could survive death without dissipating or function at all without a brain... why would... clothes survive?" He asks, if ghosts glide, then why do people claim to hear them with "heavy footfalls"? Nickell says that ghosts act the same way as "dreams, memories, and imaginings, because they too are mental creations. They are evidence - not of another world, but of this real and natural one."[79]
Benjamin Radford from the Committee for Skeptical Inquiry and author of the 2017 book Investigating Ghosts: The Scientific Search for Spirits writes that "ghost hunting is the world's most popular paranormal pursuit", yet to date ghost hunters can't agree on what a ghost is, or offer proof that they exist: "it's all speculation and guesswork". He writes that it would be "useful and important to distinguish between types of spirits and apparitions. Until then it's merely a parlor game distracting amateur ghost hunters from the task at hand."[80]
According to research in anomalistic psychology, visions of ghosts may arise from hypnagogic hallucinations ("waking dreams" experienced in the transitional states to and from sleep).[81] A study of two experiments into alleged hauntings (Wiseman et al. 2003) concluded "that people consistently report unusual experiences in 'haunted' areas because of environmental factors, which may differ across locations." Some of these factors included "the variance of local magnetic fields, size of location and lighting level stimuli of which witnesses may not be consciously aware".[82]
Some researchers, such as Michael Persinger of Laurentian University, Canada, have speculated that changes in geomagnetic fields (created, e.g., by tectonic stresses in the Earth's crust or solar activity) could stimulate the brain's temporal lobes and produce many of the experiences associated with hauntings.[83] Sound is thought to be another cause of supposed sightings. Richard Lord and Richard Wiseman have concluded that infrasound can cause humans to experience bizarre feelings in a room, such as anxiety, extreme sorrow, a feeling of being watched, or even the chills.[84] Carbon monoxide poisoning, which can cause changes in perception of the visual and auditory systems,[85] was speculated upon as a possible explanation for haunted houses as early as 1921.
People who experience sleep paralysis often report seeing ghosts during their experiences. Neuroscientists Baland Jalal and V.S. Ramachandran have recently proposed neurological theories for why people hallucinate ghosts during sleep paralysis. Their theories emphasize the role of the parietal lobe and mirror neurons in triggering such ghostly hallucinations.[86]
The Hebrew Bible contains several references to owb (Hebrew: אוֹב), which are in a few places akin to shades of classical mythology but mostly describing mediums in connection with necromancy and spirit-consulting, which are grouped with witchcraft and other forms of divination under the category of forbidden occult activities.[87] The most notable reference to a shade is in the First Book of Samuel,[88] in which a disguised King Saul has the Witch of Endor conduct a seance to summon the dead prophet Samuel. A similar term appearing throughout the scriptures is repha'(im) (Hebrew: רְפָאִים), which while describing the race of "giants" formerly inhabiting Canaan in many verses, also refer to (the spirits of) dead ancestors of Sheol (like shades) in many others such as in the Book of Isaiah.[89]
In the New Testament, Jesus has to persuade the Disciples that he is not a ghost following the resurrection, Luke 24:37–39 (some versions of the Bible, such as the KJV and NKJV, use the term "spirit"). Similarly, Jesus' followers at first believe he is a ghost (spirit) when they see him walking on water.[90]
Some Christian denominations[which?] consider ghosts as beings who while tied to earth, no longer live on the material plane and linger in an intermediate state before continuing their journey to heaven.[91][92][93][94] On occasion, God would allow the souls in this state to return to earth to warn the living of the need for repentance.[95] Christians are taught that it is sinful to attempt to conjure or control spirits in accordance with Deuteronomy XVIII: 9–12.[96][97]
Some ghosts are actually said to be demons in disguise, which, the Church teaches, in accordance with I Timothy 4:1, "come to deceive people and draw them away from God and into bondage."[98] As a result, attempts to contact the dead may lead to unwanted contact with a demon or an unclean spirit, as was said to occur in the case of Robbie Mannheim, a fourteen-year-old Maryland youth.[99] The Seventh-day Adventist view is that a "soul" is not equivalent to "spirit" or "ghost" (depending on the Bible version), and that save for the Holy Spirit, all spirits or ghosts are demons in disguise. Furthermore, they teach that, in accordance with Genesis 2:7 and Ecclesiastes 12:7, there are only two components to a "soul", neither of which survives death, with each returning to its respective source.
Christadelphians and Jehovah's Witnesses reject the view of a living, conscious soul after death.[100]
Jewish mythology and folkloric traditions describe dybbuks, malicious possessing spirits believed to be the dislocated soul of a dead person. However, the term does not appear in the Kabbalah or talmudic literature, where it is rather called an "evil spirit" or ru'aḥ tezazit ("unclean spirit" in the New Testament). It supposedly leaves the host body once it has accomplished its goal, sometimes after being helped.[101][102][103]
According to Islam, the souls of the deceased dwell in barzakh, and while it is only a barrier in the Quran, in Islamic tradition the world, especially cemeteries, is perforated with several gateways to the otherworld.[104] On rare occasions, the dead can appear to the living.[105] Pure souls, such as the souls of saints, are commonly addressed as rūḥ, while impure souls seeking revenge are often addressed as afarit.[106] An inappropriate burial can also cause a soul to stay in this world, roaming the earth as a ghost. Since the just souls remain close to their tombs, some people try to communicate with them in order to gain hidden knowledge. Contact with the dead is not the same as contact with jinn, who can likewise provide knowledge concealed from the living.[107] Many encounters with ghosts are related to dreams supposed to occur in the realm of symbols.
In contrast to traditional Islamic thought, Salafi scholars state that spirits of the dead are unable to return to or make any contact with the world of the living,[108] and ghost sightings are attributed to the Salafi concept of jinn.
In Buddhism, there are a number of planes of existence into which a person can be reborn, one of which is the realm of hungry ghosts.[109] Buddhists celebrate the Ghost Festival[110] as an expression of compassion, one of the Buddhist virtues. If the hungry ghosts are fed by non-relatives, they will not bother the community.
For the Igbo people, a man is simultaneously a physical and spiritual entity. However, it is his spiritual dimension that is eternal.[111] The Akan conception sees five parts to the human personality: the Nipadua (body), the Okra (soul), the Sunsum (spirit), the Ntoro (character from the father), and the Mogya (character from the mother).[111] The Humr people of southwestern Kordofan, Sudan, consume the drink Umm Nyolokh, which is prepared from the liver and bone marrow of giraffes. Richard Rudgley[112] hypothesises that Umm Nyolokh may contain DMT, and some online websites further theorise that giraffe liver might owe its putative psychoactivity to substances derived from psychoactive plants, such as Acacia spp., consumed by the animal. The drink is said to cause hallucinations of giraffes, believed by the Humr to be the ghosts of giraffes.[113][114]
Belief in ghosts in European folklore is characterized by the recurring fear of "returning" or revenant deceased who may harm the living. This includes the Scandinavian gjenganger, the Romanian strigoi, the Serbian vampir, the Greek vrykolakas, etc. In Scandinavian and Finnish tradition, ghosts appear in corporeal form, and their supernatural nature is given away by behavior rather than appearance. In fact, in many stories they are first mistaken for the living. They may be mute, appear and disappear suddenly, or leave no footprints or other traces.
English folklore is particularly notable for its numerous haunted locations.
Belief in the soul and an afterlife remained near universal until the emergence of atheism in the 18th century.[citation needed] In the 19th century, spiritism resurrected "belief in ghosts" as the object of systematic inquiry, and popular opinion in Western culture remains divided.[115]
A bhoot or bhut (Hindi: भूत, Gujarati: ભૂત, Urdu: بهوت, Bengali: ভূত, Odia: ଭୂତ) is a supernatural creature, usually the ghost of a deceased person, in the popular culture, literature and some ancient texts of the Indian subcontinent.
Interpretations of how bhoots come into existence vary by region and community, but they are usually considered to be perturbed and restless due to some factor that prevents them from moving on (to transmigration, non-being, nirvana, or heaven or hell, depending on tradition). This could be a violent death, unsettled matters in their lives, or simply the failure of their survivors to perform proper funerals.[116]
In Central and Northern India, ojha or spirit guides play a central role.[citation needed] It is said that one who sees a spirit at night will become a spirit too. It is also believed that if someone calls one from behind, one should never turn back to look, because the spirit may catch the person and make them a spirit as well.
Other types of spirits in Hindu mythology include Baital, an evil spirit who haunts cemeteries and takes demonic possession of corpses, and Pishacha, a type of flesh-eating demon.
Many kinds of ghosts and similar supernatural entities frequently come up in Bengali culture and its folklore, and they form an important part of Bengali people's socio-cultural beliefs and superstitions. It is believed that the spirits of those who cannot find peace in the afterlife, or who die unnatural deaths, remain on Earth. The word Pret (from Sanskrit) is also used in Bengali to mean ghost. In Bengal, ghosts are believed to be the spirits of unsatisfied human beings after death, or the souls of people who die in unnatural or abnormal circumstances (such as murder, suicide or accident). It is even believed that other animals and creatures can be turned into ghosts after their death.
Ghosts in Thailand are part of local folklore and have now become part of the popular culture of the country. Phraya Anuman Rajadhon was the first Thai scholar who seriously studied Thai folk beliefs and took notes on the nocturnal village spirits of Thailand. He established that, since such spirits were not represented in paintings or drawings, they were purely based on descriptions of popular orally transmitted traditional stories. Therefore, most of the contemporary iconography of ghosts such as Nang Tani, Nang Takian,[117] Krasue, Krahang,[118] Phi Hua Kat, Phi Pop, Phi Phong, Phi Phraya, and Mae Nak has its origins in Thai films that have now become classics.[119][120]
The most feared spirit in Thailand is Phi Tai Hong, the ghost of a person who has died suddenly of a violent death.[121] The folklore of Thailand also includes the belief that sleep paralysis is caused by a ghost, Phi Am.
There is widespread belief in ghosts in Tibetan culture. Ghosts are explicitly recognized in the Tibetan Buddhist religion, as they were in Indian Buddhism,[122] occupying a world distinct from but overlapping with the human one, and they feature in many traditional legends. When a human dies, after a period of uncertainty they may enter the ghost world. A hungry ghost (Tibetan: yidag, yi-dvags; Sanskrit: प्रेत) has a tiny throat and a huge stomach, and so can never be satisfied. Ghosts may be killed with a ritual dagger or caught in a spirit trap and burnt, thus releasing them to be reborn. Ghosts may also be exorcised, and an annual festival is held throughout Tibet for this purpose. Some say that Dorje Shugden, the ghost of a powerful 17th-century monk, is a deity, but the Dalai Lama asserts that he is an evil spirit, which has caused a split in the Tibetan exile community.
There are many Malay ghost myths, remnants of old animist beliefs that have been shaped by later Hindu, Buddhist, and Muslim influences in the modern states of Indonesia, Malaysia, and Brunei. Some ghost concepts such as the female vampires Pontianak and Penanggalan are shared throughout the region.
Ghosts are a popular theme in modern Malaysian and Indonesian films.
There are also many references to ghosts in Filipino culture, ranging from ancient legendary creatures such as the Manananggal and Tiyanak to more modern urban legends and horror films. The beliefs, legends and stories are as diverse as the people of the Philippines.
There was widespread belief in ghosts in Polynesian culture, some of which persists today.
After death, a person's ghost normally traveled to the sky world or the underworld, but some could stay on earth. In many Polynesian legends, ghosts were often actively involved in the affairs of the living. Ghosts might also cause sickness or even invade the body of ordinary people, to be driven out through strong medicines.[123]
There are many references to ghosts in Chinese culture. Even Confucius said, "Respect ghosts and gods, but keep away from them."[124]
Ghosts take many forms, depending on how the person died, and are often harmful. Many Chinese ghost beliefs have been accepted by neighboring cultures, notably Japan and southeast Asia. Ghost beliefs are closely associated with traditional Chinese religion based on ancestor worship, many elements of which were incorporated into Taoism. Later beliefs were influenced by Buddhism, and in turn influenced and created uniquely Chinese Buddhist beliefs.
Many Chinese today believe it possible to contact the spirits of their ancestors through a medium, and that ancestors can help descendants if properly respected and rewarded. The annual ghost festival is celebrated by Chinese around the world. On this day, ghosts and spirits, including those of the deceased ancestors, come out from the lower realm. Ghosts are described in classical Chinese texts as well as modern literature and films.
An article in the China Post stated that nearly eighty-seven percent of Chinese office workers believe in ghosts, and that some fifty-two percent of workers will wear hand art, necklaces, crosses, or even place a crystal ball on their desks to keep ghosts at bay.[citation needed]
Yūrei (幽霊) are figures in Japanese folklore, analogous to Western legends of ghosts. The name consists of two kanji, 幽 (yū), meaning "faint" or "dim", and 霊 (rei), meaning "soul" or "spirit". Alternative names include 亡霊 (Bōrei) meaning ruined or departed spirit, 死霊 (Shiryō) meaning dead spirit, or the more encompassing 妖怪 (Yōkai) or お化け (Obake).
Like their Chinese and Western counterparts, they are thought to be spirits kept from a peaceful afterlife.
There is extensive and varied belief in ghosts in Mexican culture. The territory of the modern state of Mexico was, before the Spanish conquest, inhabited by diverse peoples such as the Maya and Aztec, and their beliefs have survived and evolved, combined with the beliefs of the Spanish colonists. The Day of the Dead incorporates pre-Columbian beliefs with Christian elements. Mexican literature and films include many stories of ghosts interacting with the living.
According to the Gallup Poll News Service, belief in haunted houses, ghosts, communication with the dead, and witches had an especially steep increase over the 1990s.[125] A 2005 Gallup poll found that about 32 percent of Americans believe in ghosts.[126]
Ghosts are prominent in the storytelling of various nations. The ghost story is ubiquitous across all cultures, from oral folktales to works of literature. While ghost stories are often explicitly meant to be scary, they have been written to serve all sorts of purposes, from comedy to morality tales. Ghosts often appear in the narrative as sentinels or prophets of things to come. Belief in ghosts is found in all cultures around the world, and thus ghost stories may be passed down orally or in written form.[127]
Spirits of the dead appear in literature as early as Homer's Odyssey, which features a journey to the underworld and the hero encountering the ghosts of the dead,[127] and the Old Testament, in which the Witch of Endor summons the spirit of the prophet Samuel.[127]
One of the more recognizable ghosts in English literature is the shade of Hamlet's murdered father in Shakespeare's The Tragical History of Hamlet, Prince of Denmark. In Hamlet, it is the ghost who demands that Prince Hamlet investigate his "murder most foul" and seek revenge upon his usurping uncle, King Claudius.
In English Renaissance theater, ghosts were often depicted in the garb of the living and even in armor, as with the ghost of Hamlet's father. Armor, being out-of-date by the time of the Renaissance, gave the stage ghost a sense of antiquity.[128] But the sheeted ghost began to gain ground on stage in the 19th century because an armored ghost could not satisfactorily convey the requisite spookiness: it clanked and creaked, and had to be moved about by complicated pulley systems or elevators. These clanking ghosts being hoisted about the stage became objects of ridicule as they became clichéd stage elements. Ann Jones and Peter Stallybrass, in Renaissance Clothing and the Materials of Memory, point out, "In fact, it is as laughter increasingly threatens the Ghost that he starts to be staged not in armor but in some form of 'spirit drapery'."[129]
The "classic" ghost story arose during the Victorian period, and included authors such as M. R. James, Sheridan Le Fanu, Violet Hunt, and Henry James. Classic ghost stories were influenced by the gothic fiction tradition, and contain elements of folklore and psychology. M. R. James summed up the essential elements of a ghost story as, "Malevolence and terror, the glare of evil faces, ‘the stony grin of unearthly malice', pursuing forms in darkness, and 'long-drawn, distant screams', are all in place, and so is a modicum of blood, shed with deliberation and carefully husbanded...".[130] One of the key early appearances by ghosts was The Castle of Otranto by Horace Walpole in 1764, considered to be the first gothic novel.[127][131][132]
Famous literary apparitions from this period are the ghosts of A Christmas Carol, in which Ebenezer Scrooge is helped to see the error of his ways by the ghost of his former colleague Jacob Marley, and the ghosts of Christmas Past, Christmas Present, and Christmas Yet to Come.
Professional parapsychologists and "ghost hunters", such as Harry Price, active in the 1920s and 1930s, and Peter Underwood, active in the 1940s and 1950s, published accounts of their experiences with ostensibly true ghost stories, such as Price's The Most Haunted House in England and Underwood's Ghosts of Borley (both recounting experiences at Borley Rectory). The writer Frank Edwards delved into ghost stories in books of his such as Stranger than Science.
Children's benevolent ghost stories became popular, such as Casper the Friendly Ghost, created in the 1930s and appearing in comics, animated cartoons, and eventually a 1995 feature film.
With the advent of motion pictures and television, screen depictions of ghosts became common and spanned a variety of genres; the works of Shakespeare, Dickens and Wilde have all been made into cinematic versions. Novel-length tales have been difficult to adapt to cinema, although the 1963 adaptation of The Haunting of Hill House as The Haunting is an exception.[132]
Sentimental depictions during this period were more popular in cinema than horror, and include the 1947 film The Ghost and Mrs. Muir, which was later adapted to television with a successful 1968–70 TV series.[132] Genuine psychological horror films from this period include 1944's The Uninvited, and 1945's Dead of Night.
The 1970s saw screen depictions of ghosts diverge into the distinct genres of romance and horror. A common theme in the romantic genre from this period is the ghost as a benign guide or messenger, often with unfinished business, as in 1989's Field of Dreams, the 1990 film Ghost, and the 1993 comedy Heart and Souls.[133] In the horror genre, 1980's The Fog and the A Nightmare on Elm Street series of films from the 1980s and 1990s are notable examples of the trend of merging ghost stories with scenes of physical violence.[132]
Popularised in such films as the 1984 comedy Ghostbusters, ghost hunting became a hobby for many who formed ghost hunting societies to explore reportedly haunted places. The ghost hunting theme has been featured in reality television series, such as Ghost Adventures, Ghost Hunters, Ghost Hunters International, Ghost Lab, Most Haunted, and A Haunting. It is also represented in children's television by such programs as The Ghost Hunter and Ghost Trackers. Ghost hunting also gave rise to multiple guidebooks to haunted locations, and ghost hunting "how-to" manuals.
The 1990s saw a return to classic "gothic" ghosts, whose dangers were more psychological than physical. Examples of films from this period include 1999's The Sixth Sense and The Others.
Asian cinema has also produced horror films about ghosts, such as the 1998 Japanese film Ringu (remade in the US as The Ring in 2002), and the Pang brothers' 2002 film The Eye.[134]
Indian ghost movies are popular not just in India, but in the Middle East, Africa, South East Asia, and other parts of the world. Some Indian ghost movies such as the comedy / horror film Chandramukhi have been commercial successes, dubbed into several languages.[135]
In fictional television programming, ghosts have been explored in series such as Supernatural, Ghost Whisperer, and Medium.
In animated fictional television programming, ghosts have served as the central element in series such as Casper the Friendly Ghost, Danny Phantom, and Scooby-Doo. Various other television shows have depicted ghosts as well.
Nietzsche argued that people generally wear prudent masks in company, but that an alternative strategy for social interaction is to present oneself as an absence, as a social ghost – "One reaches out for us but gets no hold of us"[136] – a sentiment later echoed (if in a less positive way) by Carl Jung.[137]
Nick Harkaway has considered that all people carry a host of ghosts in their heads in the form of impressions of past acquaintances – ghosts who represent mental maps of other people in the world and serve as philosophical reference points.[138]
Object relations theory sees human personalities as formed by splitting off aspects of the person that he or she deems incompatible, whereupon the person may be haunted in later life by such ghosts of his or her alternate selves.[139]
The sense of ghosts as invisible, mysterious entities is invoked in several terms that use the word metaphorically, such as ghostwriter (a writer who pens texts credited to another person without revealing the ghostwriter's role as an author); ghost singer (a vocalist who records songs whose vocals are credited to another person); and "ghosting" a date (when a person breaks off contact with a former romantic partner and disappears).
en/194.html.txt
ADDED
@@ -0,0 +1,186 @@
North America is a continent entirely within the Northern Hemisphere and almost all within the Western Hemisphere. It can also be described as a northern subcontinent of the Americas, or America,[5][6] in models that use fewer than seven continents. It is bordered to the north by the Arctic Ocean, to the east by the Atlantic Ocean, to the west and south by the Pacific Ocean, and to the southeast by South America and the Caribbean Sea.
North America covers an area of about 24,709,000 square kilometers (9,540,000 square miles), about 16.5% of the earth's land area and about 4.8% of its total surface.
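As a rough consistency check, assuming commonly cited round figures of about 149 million km² for Earth's land area and about 510 million km² for its total surface area:

$$\frac{24{,}709{,}000}{149{,}000{,}000} \approx 16.6\%, \qquad \frac{24{,}709{,}000}{510{,}000{,}000} \approx 4.8\%$$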
North America is the third-largest continent by area, following Asia and Africa,[7][better source needed] and the fourth by population after Asia, Africa, and Europe.[8] In 2013, its population was estimated at nearly 579 million people in 23 independent states, or about 7.5% of the world's population, if nearby islands (most notably around the Caribbean) are included.
North America was reached by its first human populations during the last glacial period, via crossing the Bering land bridge approximately 40,000 to 17,000 years ago. The so-called Paleo-Indian period is taken to have lasted until about 10,000 years ago (the beginning of the Archaic or Meso-Indian period). The Classic stage spans roughly the 6th to 13th centuries. The Pre-Columbian era ended in 1492, with the beginning of the transatlantic migrations—the arrival of European settlers during the Age of Discovery and the Early Modern period. Present-day cultural and ethnic patterns reflect interactions between European colonists, indigenous peoples, African slaves and all of their descendants.
Owing to Europe's colonization of the Americas, most North Americans speak European languages such as English, Spanish or French, and their states' cultures commonly reflect Western traditions.
The Americas are usually accepted as having been named after the Italian explorer Amerigo Vespucci by the German cartographers Martin Waldseemüller and Matthias Ringmann.[9] Vespucci, who explored South America between 1497 and 1502, was the first European to suggest that the Americas were not the East Indies, but a different landmass previously unknown by Europeans. In 1507, Waldseemüller produced a world map, in which he placed the word "America" on the continent of South America, in the middle of what is today Brazil. He explained the rationale for the name in the accompanying book Cosmographiae Introductio:[10]
... ab Americo inventore ... quasi Americi terram sive Americam (from Americus the discoverer ... as if it were the land of Americus, thus America).
For Waldseemüller, no one should object to the naming of the land after its discoverer. He used the Latinized version of Vespucci's name (Americus Vespucius), but in its feminine form "America", following the examples of "Europa", "Asia" and "Africa". Later, other mapmakers extended the name America to the northern continent. In 1538, Gerard Mercator used America on his map of the world for all the Western Hemisphere.[11]
Some argue that because the convention is to use the surname for naming discoveries (except in the case of royalty), the derivation from "Amerigo Vespucci" could be put in question.[12] In 1874, Thomas Belt proposed a derivation from the Amerrique mountains of Central America; the next year, Jules Marcou suggested that the name of the mountain range stemmed from indigenous American languages.[13] Marcou corresponded with Augustus Le Plongeon, who wrote: "The name AMERICA or AMERRIQUE in the Mayan language means, a country of perpetually strong wind, or the Land of the Wind, and ... the [suffixes] can mean ... a spirit that breathes, life itself."[11]
The United Nations formally recognizes "North America" as comprising three areas: Northern America, Central America, and the Caribbean. This has been formally defined by the UN Statistics Division.[14][15][16]
"Northern America", as a term distinct from "North America", excludes Central America, which itself may or may not include Mexico (see Central America § Different definitions). In the limited context of the North American Free Trade Agreement, the term covers Canada, the United States, and Mexico, which are the three signatories of that treaty.
France, Italy, Portugal, Spain, Romania, Greece, and the countries of Latin America use a six-continent model, with the Americas viewed as a single continent and North America designating a subcontinent comprising Canada, the United States, and Mexico, and often Greenland, Saint Pierre et Miquelon, and Bermuda.[17][18][19][20][21]
North America has been historically referred to by other names. Spanish North America (New Spain) was often referred to as Northern America, and this was the first official name given to Mexico.[22]
Geographically, the North American continent has many regions and subregions. These include cultural, economic, and geographic regions. Economic regions include those formed by trade blocs, such as the North American Free Trade Agreement bloc and the Central American Free Trade Agreement bloc. Linguistically and culturally, the continent can be divided into Anglo-America and Latin America. Anglo-America includes most of Northern America, Belize, and Caribbean islands with English-speaking populations (though sub-national entities, such as Louisiana and Quebec, have large Francophone populations; in Quebec, French is the sole official language[23]).
The southern North American continent is composed of two regions. These are Central America and the Caribbean.[24][25] The north of the continent maintains recognized regions as well. In contrast to the common definition of "North America", which encompasses the whole continent, the term "North America" is sometimes used to refer only to Mexico, Canada, the United States, and Greenland.[26][27][28][29][30]
The term Northern America refers to the northern-most countries and territories of North America: the United States, Bermuda, St. Pierre and Miquelon, Canada and Greenland.[31][32] Although the term does not refer to a unified region,[33] Middle America—not to be confused with the Midwestern United States—groups the regions of Mexico, Central America, and the Caribbean.[34]
The largest countries of the continent, Canada and the United States, also contain well-defined and recognized regions. In the case of Canada these are (from east to west) Atlantic Canada, Central Canada, the Canadian Prairies, the British Columbia Coast, and Northern Canada. These regions also contain many subregions. In the case of the United States – and in accordance with the US Census Bureau definitions – these regions are: New England, Mid-Atlantic, South Atlantic States, East North Central States, West North Central States, East South Central States, West South Central States, Mountain States, and Pacific States. Regions shared between both nations include the Great Lakes Region. Megalopolises have formed between both nations, as in the Pacific Northwest and the Great Lakes Megaregion.
Laurentia is an ancient craton which forms the geologic core of North America; it formed between 1.5 and 1.0 billion years ago during the Proterozoic eon.[44] The Canadian Shield is the largest exposure of this craton. From the Late Paleozoic to Early Mesozoic eras, North America was joined with the other modern-day continents as part of the supercontinent Pangaea, with Eurasia to its east. One of the results of the formation of Pangaea was the Appalachian Mountains, which formed some 480 million years ago, making it among the oldest mountain ranges in the world. When Pangaea began to rift around 200 million years ago, North America became part of Laurasia, before it separated from Eurasia as its own continent during the mid-Cretaceous period.[45] The Rockies and other western mountain ranges began forming around this time from a period of mountain building called the Laramide orogeny, between 80 and 55 million years ago. The formation of the Isthmus of Panama that connected the continent to South America arguably occurred approximately 12 to 15 million years ago,[46] and the Great Lakes (as well as many other northern freshwater lakes and rivers) were carved by receding glaciers about 10,000 years ago.
North America is the source of much of what humanity knows about geologic time periods.[47] The geographic area that would later become the United States has been the source of more varieties of dinosaurs than any other modern country.[47] According to paleontologist Peter Dodson, this is primarily due to stratigraphy, climate and geography, human resources, and history.[47] Much of the Mesozoic Era is represented by exposed outcrops in the many arid regions of the continent.[47] The most significant Late Jurassic dinosaur-bearing fossil deposit in North America is the Morrison Formation of the western United States.[48]
The indigenous peoples of the Americas have many creation myths by which they assert that they have been present on the land since its creation,[49] but there is no evidence that humans evolved there.[50] The specifics of the initial settlement of the Americas by ancient Asians are subject to ongoing research and discussion.[51] The traditional theory has been that hunters entered the Beringia land bridge between eastern Siberia and present-day Alaska from 27,000 to 14,000 years ago.[52][53][h] A growing viewpoint is that the first American inhabitants sailed from Beringia some 13,000 years ago,[55] with widespread habitation of the Americas during the end of the Last Glacial Period, in what is known as the Late Glacial Maximum, around 12,500 years ago.[56] The oldest petroglyphs in North America date from 15,000 to 10,000 years before present.[57][i] Genetic research and anthropology indicate additional waves of migration from Asia via the Bering Strait during the Early-Middle Holocene.[59][60][61]
Before contact with Europeans, the natives of North America were divided into many different polities, from small bands of a few families to large empires. They lived in several "culture areas", which roughly correspond to geographic and biological zones and give a good indication of the main way of life of the people who lived there (e.g., the bison hunters of the Great Plains, or the farmers of Mesoamerica). Native groups can also be classified by their language family (e.g., Athapascan or Uto-Aztecan). Peoples with similar languages did not always share the same material culture, nor were they always allies. Anthropologists think that the Inuit people of the high Arctic came to North America much later than other native groups, as evidenced by the disappearance of Dorset culture artifacts from the archaeological record, and their replacement by the Thule people.
During the thousands of years of native habitation on the continent, cultures changed and shifted. One of the oldest yet discovered is the Clovis culture (c. 9550–9050 BCE) in modern New Mexico.[58] Later groups include the Mississippian culture and related Mound building cultures, found in the Mississippi river valley and the Pueblo culture of what is now the Four Corners. The more southern cultural groups of North America were responsible for the domestication of many common crops now used around the world, such as tomatoes, squash, and maize. As a result of the development of agriculture in the south, many other cultural advances were made there. The Mayans developed a writing system, built huge pyramids and temples, had a complex calendar, and developed the concept of zero around 400 CE.[62]
The first recorded European references to North America are in Norse sagas where it is referred to as Vinland.[63] The earliest verifiable instance of pre-Columbian trans-oceanic contact by any European culture with the North America mainland has been dated to around 1000 CE.[64] The site, situated at the northernmost extent of the island named Newfoundland, has provided unmistakable evidence of Norse settlement.[65] Norse explorer Leif Erikson (c. 970–1020 CE) is thought to have visited the area.[j] Erikson was the first European to make landfall on the continent (excluding Greenland).[67][68]
The Mayan culture was still present in southern Mexico and Guatemala when the Spanish conquistadors arrived, but political dominance in the area had shifted to the Aztec Empire, whose capital city Tenochtitlan was located further north in the Valley of Mexico. The Aztecs were conquered in 1521 by Hernán Cortés.[69]
During the Age of Discovery, Europeans explored and staked claims to various parts of North America. Upon their arrival in the "New World", the Native American population declined substantially, because of violent conflicts with the invaders and the introduction of European diseases to which the Native Americans lacked immunity.[70] Native culture changed drastically and their affiliation with political and cultural groups also changed. Several linguistic groups died out, and others changed quite quickly. The names and cultures that Europeans recorded were not necessarily the same as the names they had used a few generations before, or the ones in use today.
Britain, Spain, and France took over extensive territories in North America. In the late 18th and early 19th century, independence movements sprang up across the continent, leading to the founding of the modern countries in the area. The 13 British Colonies on the North Atlantic coast declared independence in 1776, becoming the United States of America. Canada was formed from the unification of northern territories controlled by Britain and France. New Spain, a territory that stretched from the modern-day southern US to Central America, declared independence in 1810, becoming the First Mexican Empire. In 1823 the former Captaincy General of Guatemala, then part of the Mexican Empire, became the first independent state in Central America, officially changing its name to the United Provinces of Central America.
Over three decades of work on the Panama Canal led to the connection of Atlantic and Pacific waters in 1913, physically making North America a separate continent.[attribution needed]
North America occupies the northern portion of the landmass generally referred to as the New World, the Western Hemisphere, the Americas, or simply America (which, less commonly, is considered by some as a single continent[71][72][73] with North America a subcontinent).[74] North America's only land connection to South America is at the Isthmus of Darién (Isthmus of Panama). The continent is delimited on the southeast by most geographers at the Darién watershed along the Colombia–Panama border, placing almost all of Panama within North America.[75][76][77] Alternatively, some geologists physiographically locate its southern limit at the Isthmus of Tehuantepec, Mexico, with Central America extending southeastward to South America from this point.[78] The Caribbean islands, or West Indies, are considered part of North America.[5] The continental coastline is long and irregular. The Gulf of Mexico is the largest body of water indenting the continent, followed by Hudson Bay. Others include the Gulf of Saint Lawrence and the Gulf of California.
Before the Central American isthmus formed, the region had been underwater. The islands of the West Indies delineate a submerged former land bridge, which had connected North and South America via what are now Florida and Venezuela.
There are numerous islands off the continent's coasts; principally, the Arctic Archipelago, the Bahamas, Turks & Caicos, the Greater and Lesser Antilles, the Aleutian Islands (some of which are in the Eastern Hemisphere proper), the Alexander Archipelago, the many thousand islands of the British Columbia Coast, and Newfoundland. Greenland, a self-governing Danish island, and the world's largest, is on the same tectonic plate (the North American Plate) and is part of North America geographically. In a geologic sense, Bermuda is not part of the Americas, but an oceanic island which was formed on the fissure of the Mid-Atlantic Ridge over 100 million years ago. The nearest landmass to it is Cape Hatteras, North Carolina. However, Bermuda is often thought of as part of North America, especially given its historical, political and cultural ties to Virginia and other parts of the continent.
The vast majority of North America is on the North American Plate. Parts of western Mexico, including Baja California, and of California, including the cities of San Diego, Los Angeles, and Santa Cruz, lie on the eastern edge of the Pacific Plate, with the two plates meeting along the San Andreas fault. The southernmost portion of the continent and much of the West Indies lie on the Caribbean Plate, whereas the Juan de Fuca and Cocos plates border the North American Plate on its western frontier.
The continent can be divided into four great regions (each of which contains many subregions): the Great Plains stretching from the Gulf of Mexico to the Canadian Arctic; the geologically young, mountainous west, including the Rocky Mountains, the Great Basin, California and Alaska; the raised but relatively flat plateau of the Canadian Shield in the northeast; and the varied eastern region, which includes the Appalachian Mountains, the coastal plain along the Atlantic seaboard, and the Florida peninsula. Mexico, with its long plateaus and cordilleras, falls largely in the western region, although the eastern coastal plain does extend south along the Gulf.
The western mountains are split in the middle into the main range of the Rockies and the coast ranges in California, Oregon, Washington, and British Columbia, with the Great Basin—a lower area containing smaller ranges and low-lying deserts—in between. The highest peak is Denali in Alaska.
The United States Geological Survey (USGS) states that the geographic center of North America is "6 miles [10 km] west of Balta, Pierce County, North Dakota" at about 48°10′N 100°10′W, about 24 kilometres (15 mi) from Rugby, North Dakota. The USGS further states that "No marked or monumented point has been established by any government agency as the geographic center of either the 50 States, the conterminous United States, or the North American continent." Nonetheless, there is a 4.6-metre (15 ft) field stone obelisk in Rugby claiming to mark the center. The North American continental pole of inaccessibility is located 1,650 km (1,030 mi) from the nearest coastline, between Allen and Kyle, South Dakota, at 43°22′N 101°58′W.[79]
Geologically, Canada is one of the oldest regions in the world, with more than half of the region consisting of Precambrian rocks that have been above sea level since the beginning of the Palaeozoic era.[80] Canada's mineral resources are diverse and extensive.[80] Across the Canadian Shield and in the north there are large iron, nickel, zinc, copper, gold, lead, molybdenum, and uranium reserves. Large diamond concentrations have been recently developed in the Arctic,[81] making Canada one of the world's largest producers. Throughout the Shield there are many mining towns extracting these minerals. The largest, and best known, is Sudbury, Ontario. Sudbury is an exception to the normal process of forming minerals in the Shield, since there is significant evidence that the Sudbury Basin is an ancient meteorite impact crater. The nearby but lesser-known Temagami Magnetic Anomaly has striking similarities to the Sudbury Basin. Its magnetic anomalies are very similar to those of the Sudbury Basin, and so it could be a second metal-rich impact crater.[82] The Shield is also covered by vast boreal forests that support an important logging industry.
The lower 48 US states can be divided into roughly five physiographic provinces:
The geology of Alaska is typical of that of the cordillera, while the major islands of Hawaii consist of Neogene volcanics erupted over a hot spot.
Central America is geologically active with volcanic eruptions and earthquakes occurring from time to time. In 1976 Guatemala was hit by a major earthquake, killing 23,000 people; Managua, the capital of Nicaragua, was devastated by earthquakes in 1931 and 1972, the last one killing about 5,000 people; three earthquakes devastated El Salvador, one in 1986 and two in 2001; one earthquake devastated northern and central Costa Rica in 2009, killing at least 34 people; in Honduras a powerful earthquake killed seven people in 2009.
Volcanic eruptions are common in the region. In 1968 the Arenal Volcano, in Costa Rica, erupted and killed 87 people. Fertile soils from weathered volcanic lavas have made it possible to sustain dense populations in the agriculturally productive highland areas.
Central America has many mountain ranges; the longest are the Sierra Madre de Chiapas, the Cordillera Isabelia, and the Cordillera de Talamanca. Between the mountain ranges lie fertile valleys that are suitable for habitation; in fact, most of the population of Honduras, Costa Rica, and Guatemala lives in valleys. Valleys are also suitable for the production of coffee, beans, and other crops.
North America is a very large continent, extending beyond the Arctic Circle and the Tropic of Cancer. Greenland, along with the Canadian Shield, is tundra with average temperatures ranging from 10 to 20 °C (50 to 68 °F), but central Greenland is composed of a very large ice sheet. This tundra extends throughout Canada, but its border ends near the Rocky Mountains (while still including Alaska) and at the end of the Canadian Shield, near the Great Lakes.
The climate west of the Cascades is described as temperate, with average annual precipitation of 20 inches (510 mm).[83]
The climate in coastal California is described as Mediterranean, with average temperatures in cities like San Francisco ranging from 57 to 70 °F (14 to 21 °C) over the course of the year.[84]
Stretching from the East Coast to eastern North Dakota, and down to Kansas, is the humid continental climate, featuring intense seasons and a large amount of annual precipitation, with places like New York City averaging 50 inches (1,300 mm).[85]
Starting at the southern border of the humid continental climate and stretching to the Gulf of Mexico (whilst encompassing the eastern half of Texas) is the subtropical climate. This area has the wettest cities in the contiguous U.S., with annual precipitation reaching 67 inches (1,700 mm) in Mobile, Alabama.[86]
Stretching from the borders of the humid continental and subtropical climates, going west to the Cascades and the Sierra Nevada, south to the southern tip of Durango, and north to the border with the tundra climate, the steppe/desert climate is the driest in the U.S.[87] Highland climates cut from north to south of the continent, where subtropical or temperate climates occur just below the tropics, as in central Mexico and Guatemala. Tropical climates appear in the island regions and in the subcontinent's bottleneck; these are usually of the savannah type, with rain and high temperatures constant the whole year, and are found in countries and states bathed by the Caribbean Sea or to the south of the Gulf of Mexico and the Pacific Ocean.[88]
Notable North American fauna include the bison, black bear, prairie dog, turkey, pronghorn, raccoon, coyote and monarch butterfly.
Notable plants that were domesticated in North America include tobacco, maize, squash, tomato, sunflower, blueberry, avocado, cotton, chile pepper and vanilla.
Economically, Canada and the United States are the wealthiest and most developed nations on the continent, followed by Mexico, a newly industrialized country.[89] The countries of Central America and the Caribbean are at various levels of economic and human development. For example, small Caribbean island-nations, such as Barbados, Trinidad and Tobago, and Antigua and Barbuda, have a higher GDP (PPP) per capita than Mexico due to their smaller populations. Panama and Costa Rica have a significantly higher Human Development Index and GDP than the rest of the Central American nations.[90] Additionally, despite Greenland's vast resources in oil and minerals, most of these remain untapped, and the island is economically dependent on fishing, tourism, and subsidies from Denmark. Nevertheless, the island is highly developed.[91]
Demographically, North America is ethnically diverse. Its three main groups are Caucasians, Mestizos and Blacks.[citation needed] There is a significant minority of Indigenous Americans and Asians among other less numerous groups.[citation needed]
The dominant languages in North America are English, Spanish, and French. Danish is prevalent in Greenland alongside Greenlandic, and Dutch is spoken side by side with local languages in the Dutch Caribbean. The term Anglo-America is used to refer to the anglophone countries of the Americas: namely Canada (where English and French are co-official) and the United States, but also sometimes Belize and parts of the tropics, especially the Commonwealth Caribbean. Latin America refers to the other areas of the Americas (generally south of the United States) where the Romance languages Spanish and Portuguese, derived from Latin, predominate (French-speaking countries are not usually included): the other republics of Central America (but not always Belize), part of the Caribbean (not the Dutch-, English-, or French-speaking areas), Mexico, and most of South America (except Guyana, Suriname, French Guiana (France), and the Falkland Islands (UK)).
The French language has historically played a significant role in North America and now retains a distinctive presence in some regions. Canada is officially bilingual. French is the official language of the Province of Quebec, where 95% of the people speak it as either their first or second language, and it is co-official with English in the Province of New Brunswick. Other French-speaking locales include the Province of Ontario (the official language is English, but there are an estimated 600,000 Franco-Ontarians), the Province of Manitoba (where French is co-official de jure with English), the French West Indies and Saint-Pierre et Miquelon, as well as the US state of Louisiana, where French is also an official language. Haiti is included with this group based on historical association, but Haitians speak both Creole and French. Similarly, French and French Antillean Creole are spoken in Saint Lucia and the Commonwealth of Dominica alongside English.
Christianity is the largest religion in the United States, Canada and Mexico. According to a 2012 Pew Research Center survey, 77% of the population considered themselves Christians.[92] Christianity also is the predominant religion in the 23 dependent territories in North America.[93] The United States has the largest Christian population in the world, with nearly 247 million Christians (70%), although other countries have higher percentages of Christians among their populations.[94] Mexico has the world's second largest number of Catholics, surpassed only by Brazil.[95] A 2015 study estimates about 493,000 Christian believers from a Muslim background in North America, most of them belonging to some form of Protestantism.[96]
According to the same study, the religiously unaffiliated (including agnostics and atheists) make up about 17% of the population of Canada and the United States.[97] Those reporting no religion make up about 24% of the United States population and 24% of Canada's total population.[98]
Canada, the United States and Mexico host communities of Jews (6 million or about 1.8%),[99] Buddhists (3.8 million or 1.1%)[100] and Muslims (3.4 million or 1.0%).[101] The largest numbers of Jewish individuals are found in the United States (5.4 million),[102] Canada (375,000)[103] and Mexico (67,476).[104] The United States hosts the largest Muslim population in North America, with 2.7 million or 0.9%,[105][106] while Canada hosts about one million Muslims, or 3.2% of its population.[107] In Mexico there were 3,700 Muslims in the country.[108] In 2012, U-T San Diego estimated U.S. practitioners of Buddhism at 1.2 million people, of whom 40% were living in Southern California.[109]
The predominant religion in Central America is Christianity (96%).[110] Beginning with the Spanish colonization of Central America in the 16th century, Roman Catholicism became the most popular religion in the region until the first half of the 20th century. Since the 1960s, there has been an increase in other Christian groups, particularly Protestantism, as well as other religious organizations, and individuals identifying themselves as having no religion. Christianity is also the predominant religion in the Caribbean (85%).[110] Other religious groups in the region are Hinduism, Islam, Rastafari (in Jamaica), and Afro-American religions such as Santería and Vodou.
The most populous country in North America is the United States with 329.7 million persons. The second largest country is Mexico with a population of 112.3 million.[111] Canada is the third most populous country with 37.0 million.[112] The majority of Caribbean island-nations have national populations under a million, though Cuba, Dominican Republic, Haiti, Puerto Rico (a territory of the United States), Jamaica, and Trinidad and Tobago each have populations higher than a million.[113][114][115][116][117] Greenland has a small population of 55,984 for its massive size (2,166,000 km² or 836,300 mi²), and therefore, it has the world's lowest population density at 0.026 pop./km² (0.067 pop./mi²).[118]
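The quoted density follows directly from the figures above:

$$\frac{55{,}984}{2{,}166{,}000\ \text{km}^2} \approx 0.026\ \text{pop./km}^2, \qquad \frac{55{,}984}{836{,}300\ \text{mi}^2} \approx 0.067\ \text{pop./mi}^2$$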
While the United States, Canada, and Mexico maintain the largest populations, large city populations are not restricted to those nations. There are also large cities in the Caribbean. The largest cities in North America are, by far, Mexico City and New York. These are the only cities on the continent to exceed eight million, and two of the three such cities in the Americas. Next in size are Los Angeles, Toronto,[119] Chicago, Havana, Santo Domingo, and Montreal. Cities in the sun belt regions of the United States, such as those in Southern California and Houston, Phoenix, Miami, Atlanta, and Las Vegas, are experiencing rapid growth. The causes include warm temperatures, the retirement of Baby Boomers, large industry, and the influx of immigrants. Cities near the United States border, particularly in Mexico, are also experiencing large amounts of growth. Most notable is Tijuana, a city bordering San Diego that receives immigrants from all over Latin America and parts of Europe and Asia. Yet as cities grow in these warmer regions of North America, they are increasingly forced to deal with the major issue of water shortages.[120]
Eight of the top ten metropolitan areas are located in the United States. These metropolitan areas all have a population of above 5.5 million and include the New York City metropolitan area, Los Angeles metropolitan area, Chicago metropolitan area, and the Dallas–Fort Worth metroplex.[121] Whilst the majority of the largest metropolitan areas are within the United States, Mexico is host to the largest metropolitan area by population in North America: Greater Mexico City.[122] Canada also breaks into the top ten largest metropolitan areas with the Toronto metropolitan area having six million people.[123] The proximity of cities to each other on the Canada–United States border and Mexico–United States border has led to the rise of international metropolitan areas. These urban agglomerations are observed at their largest and most productive in Detroit–Windsor and San Diego–Tijuana and experience large commercial, economic, and cultural activity. The metropolitan areas are responsible for millions of dollars of trade dependent on international freight. In Detroit-Windsor the Border Transportation Partnership study in 2004 concluded US$13 billion was dependent on the Detroit–Windsor international border crossing while in San Diego-Tijuana freight at the Otay Mesa Port of Entry was valued at US$20 billion.[124][125]
North America has also been witness to the growth of megapolitan areas. In the United States there are eleven megaregions, which transcend international borders and take in Canadian and Mexican metropolitan regions. These are the Arizona Sun Corridor, Cascadia, Florida, Front Range, Great Lakes Megaregion, Gulf Coast Megaregion, Northeast, Northern California, Piedmont Atlantic, Southern California, and the Texas Triangle.[126] Canada and Mexico are also home to megaregions. These include the Quebec City – Windsor Corridor and the Golden Horseshoe – both considered part of the Great Lakes Megaregion – and the megalopolis of Central Mexico. Traditionally the largest megaregion has been considered the Boston–Washington, DC Corridor, or the Northeast, as the region is one massive contiguous area. Yet megaregion criteria have allowed the Great Lakes Megalopolis to maintain status as the most populated region, being home to 53,768,125 people in 2000.[127]
The top ten largest North American metropolitan areas by population as of 2013, based on national census numbers from the United States and census estimates from Canada and Mexico.
†2011 Census figures.
North America's GDP per capita was evaluated in October 2016 by the International Monetary Fund (IMF) to be $41,830, making it the richest continent in the world,[129] followed by Oceania.[130]
Canada, Mexico, and the United States have significant and multifaceted economic systems. The United States has the largest economy of all three countries and in the world.[130] In 2016, the U.S. had an estimated per capita gross domestic product (PPP) of $57,466 according to the World Bank, and is the most technologically developed economy of the three.[131] The United States' services sector comprises 77% of the country's GDP (estimated in 2010), industry comprises 22% and agriculture comprises 1.2%.[130] The U.S. economy is also the fastest growing economy in North America and the Americas as a whole,[132][129] with the highest GDP per capita in the Americas as well.[129]
Canada shows significant growth in the sectors of services, mining and manufacturing.[133] Canada's per capita GDP (PPP) was estimated at $44,656 and it had the 11th largest GDP (nominal) in 2014.[133] Canada's services sector comprises 78% of the country's GDP (estimated in 2010), industry comprises 20% and agriculture comprises 2%.[133] Mexico has a per capita GDP (PPP) of $16,111 and as of 2014 is the 15th largest GDP (nominal) in the world.[134] Being a newly industrialized country,[89] Mexico maintains both modern and outdated industrial and agricultural facilities and operations.[135] Its main sources of income are oil, industrial exports, manufactured goods, electronics, heavy industry, automobiles, construction, food, banking and financial services.[136]
The North American economy is well defined and structured in three main economic areas.[137] These areas are the North American Free Trade Agreement (NAFTA), Caribbean Community and Common Market (CARICOM), and the Central American Common Market (CACM).[137] Of these trade blocs, the United States takes part in two. In addition to the larger trade blocs there is the Canada-Costa Rica Free Trade Agreement among numerous other free trade relations, often between the larger, more developed countries and Central American and Caribbean countries.
The North American Free Trade Agreement (NAFTA) forms one of the four largest trade blocs in the world.[138] Its implementation in 1994 was designed for economic homogenization, with hopes of eliminating barriers to trade and foreign investment between Canada, the United States and Mexico.[139] While Canada and the United States already conducted the largest bilateral trade relationship in the world – and to the present day still do – and Canada–United States trade relations already allowed trade without national taxes and tariffs,[140] NAFTA allowed Mexico to experience similar duty-free trade. The free trade agreement allowed for the elimination of tariffs that had previously been in place on United States–Mexico trade. Trade volume has steadily increased annually, and in 2010 surface trade between the three NAFTA nations reached an all-time historical increase of 24.3%, or US$791 billion.[141] The NAFTA trade bloc GDP (PPP) is the world's largest, at US$17.617 trillion.[142] This is in part attributed to the fact that the economy of the United States is the world's largest national economy; the country had a nominal GDP of approximately $14.7 trillion in 2010.[143] The countries of NAFTA are also some of each other's largest trade partners. The United States is the largest trade partner of Canada and Mexico,[144] while Canada and Mexico are each other's third-largest trade partners.[145][146]
The Caribbean trade bloc – CARICOM – came into agreement in 1973 when it was signed by 15 Caribbean nations. As of 2000, CARICOM trade volume was US$96 billion. CARICOM also allowed for the creation of a common passport for associated nations. In the past decade the trade bloc focused largely on Free Trade Agreements and under the CARICOM Office of Trade Negotiations (OTN) free trade agreements have been signed into effect.
Integration of Central American economies began with the signing of the Central American Common Market agreement in 1961, the first attempt to engage the nations of this area in stronger financial cooperation. Recent implementation of the Central American Free Trade Agreement (CAFTA) has left the future of the CACM unclear.[147] CAFTA was signed by five Central American countries, the Dominican Republic, and the United States; its focal point is to create a free trade area similar to NAFTA. In addition to the United States, Canada also has relations with Central American trade blocs. The proposed Canada–Central American Free Trade Agreement (CA4) would operate much as CAFTA does with the United States.
These nations also take part in inter-continental trade blocs. Mexico takes part in the G3 Free Trade Agreement with Colombia and Venezuela and has a trade agreement with the EU. The United States has proposed and maintained trade agreements under the Transatlantic Free Trade Area between itself and the European Union; the US–Middle East Free Trade Area between itself and numerous Middle Eastern nations; and the Trans-Pacific Strategic Economic Partnership between Southeast Asian nations, Australia, and New Zealand.
The Pan-American Highway route in the Americas is the portion of a network of roads nearly 48,000 km (30,000 mi) in length which travels through the mainland nations. No definitive length of the Pan-American Highway exists because the US and Canadian governments have never officially defined any specific routes as being part of the Pan-American Highway, and Mexico officially has many branches connecting to the US border. However, the total length of the portion from Mexico to the northern extremity of the highway is roughly 26,000 km (16,000 mi).
The First Transcontinental Railroad in the United States was built in the 1860s, linking the railroad network of the eastern US with California on the Pacific coast. Finished on 10 May 1869 at the famous golden spike event at Promontory Summit, Utah, it created a nationwide mechanized transportation network that revolutionized the population and economy of the American West, catalyzing the transition from the wagon trains of previous decades to a modern transportation system.[148] Although an accomplishment, it earned the title of first transcontinental railroad by connecting the many existing eastern US railroads to the Pacific; it was not the largest single railroad system in the world. The Canadian Grand Trunk Railway (GTR) had, by 1867, already accumulated more than 2,055 km (1,277 mi) of track by connecting Ontario with the Canadian Atlantic provinces west as far as Port Huron, Michigan, through Sarnia, Ontario.
A shared telephone system known as the North American Numbering Plan (NANP) is an integrated telephone numbering plan of 24 countries and territories: the United States and its territories, Canada, Bermuda, and 17 Caribbean nations.
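Because all NANP countries share one format (a ten-digit number in which both the three-digit area code and the three-digit central-office code begin with a digit from 2 to 9, optionally prefixed by country code 1), the plan lends itself to simple programmatic validation. A minimal sketch in Python follows; the pattern reflects the plan's documented shape, while the function name and sample numbers are purely illustrative.

import re

# NANP numbers take the shape NXX-NXX-XXXX: a three-digit area code and a
# three-digit central-office code (each beginning with 2-9), then a
# four-digit line number. Country code 1 may be prefixed.
NANP_PATTERN = re.compile(
    r"^\+?1?[ .-]?\(?([2-9]\d{2})\)?[ .-]?([2-9]\d{2})[ .-]?(\d{4})$"
)

def parse_nanp(number: str):
    """Return (area code, office code, line number), or None if not NANP-shaped."""
    match = NANP_PATTERN.match(number.strip())
    return match.groups() if match else None

print(parse_nanp("+1 212 555 0123"))  # ('212', '555', '0123')
print(parse_nanp("012 345 6789"))     # None: area codes cannot begin with 0 or 1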
Canada and the United States are both former British colonies. There is frequent cultural interplay between the United States and English-speaking Canada.
Greenland has experienced many immigration waves from Northern Canada, e.g. the Thule People. Therefore, Greenland shares some cultural ties with the indigenous people of Canada. Greenland is also considered Nordic and has strong Danish ties due to centuries of colonization by Denmark.[149]
Spanish-speaking North America shares a common past as former Spanish colonies. In Mexico and the Central American countries where civilizations like the Maya developed, indigenous people preserve traditions across modern boundaries. Central American and Spanish-speaking Caribbean nations have historically had more in common due to geographical proximity.
Northern Mexico, particularly in the cities of Monterrey, Tijuana, Ciudad Juárez, and Mexicali, is strongly influenced by the culture and way of life of the United States. Of the aforementioned cities, Monterrey has been regarded as the most Americanized city in Mexico.[150] Immigration to the United States and Canada remains a significant attribute of many nations close to the southern border of the US. The Anglophone Caribbean states have witnessed the decline of the British Empire and its influence on the region, and its replacement by the economic influence of Northern America. This is partly due to the relatively small populations of the English-speaking Caribbean countries, and also because many of them now have more people living abroad than remaining at home. Northern Mexico, the Western United States and Alberta, Canada share a cowboy culture.
Canada, Mexico and the US submitted a joint bid to host the 2026 FIFA World Cup.
The following table shows the most prominent sports leagues in North America, in order of average revenue.[151][152]
en/1940.html.txt
ADDED
@@ -0,0 +1,186 @@
Deer are the hoofed ruminant mammals forming the family Cervidae. The two main groups of deer are the Cervinae, including the muntjac, the elk (wapiti), the fallow deer, and the chital; and the Capreolinae, including the reindeer (caribou), the roe deer, and the moose. Female reindeer, and male deer of all species except the Chinese water deer, grow and shed new antlers each year. In this they differ from permanently horned antelope, which are part of a different family (Bovidae) within the same order of even-toed ungulates (Artiodactyla).
The musk deer (Moschidae) of Asia and chevrotains (Tragulidae) of tropical African and Asian forests are separate families within the ruminant clade (Ruminantia). They are not especially closely related to deer among the Ruminantia.
Deer appear in art from Paleolithic cave paintings onwards, and they have played a role in mythology, religion, and literature throughout history, as well as in heraldry. Their economic importance includes the use of their meat as venison, their skins as soft, strong buckskin, and their antlers as handles for knives. Deer hunting has been a popular activity since at least the Middle Ages and remains a resource for many families today.
Deer live in a variety of biomes, ranging from tundra to the tropical rainforest. While often associated with forests, many deer are ecotone species that live in transitional areas between forests and thickets (for cover) and prairie and savanna (open space). The majority of large deer species inhabit temperate mixed deciduous forest, mountain mixed coniferous forest, tropical seasonal/dry forest, and savanna habitats around the world. Clearing open areas within forests to some extent may actually benefit deer populations by exposing the understory and allowing the types of grasses, weeds, and herbs to grow that deer like to eat. Additionally, access to adjacent croplands may also benefit deer. However, adequate forest or brush cover must still be provided for populations to grow and thrive.
Deer are widely distributed, with indigenous representatives on all continents except Antarctica and Australia, though Africa has only one native deer, the Barbary stag, a subspecies of red deer confined to the Atlas Mountains in the northwest of the continent; an additional extinct species of deer, Megaceroides algericus, was present in North Africa until 6,000 years ago. Fallow deer have been introduced to South Africa. Small species of brocket deer and pudús of Central and South America, and muntjacs of Asia, generally occupy dense forests and are less often seen in open spaces, with the possible exception of the Indian muntjac. There are also several species of deer that are highly specialized and live almost exclusively in mountains, grasslands, swamps, and "wet" savannas, or riparian corridors surrounded by deserts. Some deer have a circumpolar distribution in both North America and Eurasia. Examples include the caribou that live in Arctic tundra and taiga (boreal forests) and moose that inhabit taiga and adjacent areas. Huemul deer (taruca and Chilean huemul) of South America's Andes fill the ecological niches of the ibex and wild goat, with the fawns behaving more like goat kids.
The highest concentration of large deer species in temperate North America lies in the Canadian Rocky Mountain and Columbia Mountain regions between Alberta and British Columbia where all five North American deer species (white-tailed deer, mule deer, caribou, elk, and moose) can be found. This region has several clusters of national parks including Mount Revelstoke National Park, Glacier National Park (Canada), Yoho National Park, and Kootenay National Park on the British Columbia side, and Banff National Park, Jasper National Park, and Glacier National Park (U.S.) on the Alberta and Montana sides. Mountain slope habitats vary from moist coniferous/mixed forested habitats to dry subalpine/pine forests with alpine meadows higher up. The foothills and river valleys between the mountain ranges provide a mosaic of cropland and deciduous parklands. The rare woodland caribou have the most restricted range living at higher altitudes in the subalpine meadows and alpine tundra areas of some of the mountain ranges. Elk and mule deer both migrate between the alpine meadows and lower coniferous forests and tend to be most common in this region. Elk also inhabit river valley bottomlands, which they share with White-tailed deer. The White-tailed deer have recently expanded their range within the foothills and river valley bottoms of the Canadian Rockies owing to conversion of land to cropland and the clearing of coniferous forests allowing more deciduous vegetation to grow up the mountain slopes. They also live in the aspen parklands north of Calgary and Edmonton, where they share habitat with the moose. The adjacent Great Plains grassland habitats are left to herds of elk, American bison, and pronghorn.
The Eurasian Continent (including the Indian Subcontinent) boasts the most species of deer in the world, with most species being found in Asia. Europe, in comparison, has lower diversity in plant and animal species. However, many national parks and protected reserves in Europe do have populations of red deer, roe deer, and fallow deer. These species have long been associated with the continent of Europe, but also inhabit Asia Minor, the Caucasus Mountains, and Northwestern Iran. "European" fallow deer historically lived over much of Europe during the Ice Ages, but afterwards became restricted primarily to the Anatolian Peninsula, in present-day Turkey.
Present-day fallow deer populations in Europe are a result of historic man-made introductions of this species, first to the Mediterranean regions of Europe, then eventually to the rest of Europe. They were initially park animals that later escaped and reestablished themselves in the wild. Historically, Europe's deer species shared their deciduous forest habitat with other herbivores, such as the extinct tarpan (forest horse), extinct aurochs (forest ox), and the endangered wisent (European bison). Good places to see deer in Europe include the Scottish Highlands, the Austrian Alps, the wetlands between Austria, Hungary, and the Czech Republic and some fine National Parks, including Doñana National Park in Spain, the Veluwe in the Netherlands, the Ardennes in Belgium, and Białowieża National Park of Poland. Spain, Eastern Europe, and the Caucasus Mountains still have virgin forest areas that are not only home to sizable deer populations but also for other animals that were once abundant such as the wisent, Eurasian lynx, Iberian lynx, wolves, and brown bears.
The highest concentration of large deer species in temperate Asia occurs in the mixed deciduous forests, mountain coniferous forests, and taiga bordering North Korea, Manchuria (Northeastern China), and the Ussuri Region (Russia). These are among some of the richest deciduous and coniferous forests in the world where one can find Siberian roe deer, sika deer, elk, and moose. Asian caribou occupy the northern fringes of this region along the Sino-Russian border.
Deer such as the sika deer, Thorold's deer, Central Asian red deer, and elk have historically been farmed for their antlers by Han Chinese, Turkic peoples, Tungusic peoples, Mongolians, and Koreans. Like the Sami people of Finland and Scandinavia, the Tungusic peoples, Mongolians, and Turkic peoples of Southern Siberia, Northern Mongolia, and the Ussuri Region have also taken to raising semi-domesticated herds of Asian caribou.
The highest concentration of large deer species in the tropics occurs in Southern Asia in India's Indo-Gangetic Plain Region and Nepal's Terai Region. These fertile plains consist of tropical seasonal moist deciduous, dry deciduous forests, and both dry and wet savannas that are home to chital, hog deer, barasingha, Indian sambar, and Indian muntjac. Grazing species such as the endangered barasingha and very common chital are gregarious and live in large herds. Indian sambar can be gregarious but are usually solitary or live in smaller herds. Hog deer are solitary and have lower densities than Indian muntjac. Deer can be seen in several national parks in India, Nepal, and Sri Lanka of which Kanha National Park, Dudhwa National Park, and Chitwan National Park are most famous. Sri Lanka's Wilpattu National Park and Yala National Park have large herds of Indian sambar and chital. The Indian sambar are more gregarious in Sri Lanka than other parts of their range and tend to form larger herds than elsewhere.
The Chao Phraya River Valley of Thailand was once primarily tropical seasonal moist deciduous forest and wet savanna that hosted populations of hog deer, the now-extinct Schomburgk's deer, Eld's deer, Indian sambar, and Indian muntjac. Both the hog deer and Eld's deer are rare, whereas Indian sambar and Indian muntjac thrive in protected national parks, such as Khao Yai. Many of these South Asian and Southeast Asian deer species also share their habitat with other herbivores, such as Asian elephants, the various Asian rhinoceros species, various antelope species (such as nilgai, four-horned antelope, blackbuck, and Indian gazelle in India), and wild oxen (such as wild Asian water buffalo, gaur, banteng, and kouprey). One way that different herbivores can survive together in a given area is for each species to have different food preferences, although there may be some overlap.
Australia has six introduced species of deer that have established sustainable wild populations from acclimatisation society releases in the 19th century. These are the fallow deer, red deer, sambar, hog deer, rusa, and chital. Red deer introduced into New Zealand in 1851 from English and Scottish stock were domesticated in deer farms by the late 1960s and are common farm animals there now. Seven other species of deer were introduced into New Zealand but none are as widespread as red deer.[2]
Deer constitute the second most diverse family of artiodactyla after bovids.[3] Though of a similar build, deer are strongly distinguished from antelopes by their antlers, which are temporary and regularly regrown unlike the permanent horns of bovids.[4] Characteristics typical of deer include long, powerful legs, a diminutive tail and long ears.[5] Deer exhibit a broad variation in physical proportions. The largest extant deer is the moose, which is nearly 2.6 metres (8.5 ft) tall and weighs up to 800 kilograms (1,800 lb).[6][7] The elk stands 1.4–2 metres (4.6–6.6 ft) at the shoulder and weighs 240–450 kilograms (530–990 lb).[8] The northern pudu is the smallest deer in the world; it reaches merely 32–35 centimetres (13–14 in) at the shoulder and weighs 3.3–6 kilograms (7.3–13.2 lb). The southern pudu is only slightly taller and heavier.[9] Sexual dimorphism is quite pronounced – in most species males tend to be larger than females,[10] and, except for the reindeer, only males possess antlers.[11]
Coat colour generally varies between red and brown,[12] though it can be as dark as chocolate brown in the tufted deer[13] or have a grayish tinge as in elk.[8] Different species of brocket deer vary from gray to reddish brown in coat colour.[14] Several species such as the chital,[15] the fallow deer[16] and the sika deer[17] feature white spots on a brown coat. The coat of the reindeer shows notable geographical variation.[18] Deer undergo two moults in a year;[12][19] for instance, in red deer the red, thin-haired summer coat is gradually replaced by the dense, greyish brown winter coat in autumn, which in turn gives way to the summer coat in the following spring.[20] Moulting is affected by the photoperiod.[21]
Deer are also excellent jumpers and swimmers. Deer are ruminants, or cud-chewers, and have a four-chambered stomach. Some deer, such as those on the island of Rùm,[22] do consume meat when it is available.[23]
Nearly all deer have a facial gland in front of each eye. The gland contains a strongly scented pheromone, used to mark its home range. Bucks of a wide range of species open these glands wide when angry or excited. All deer have a liver without a gallbladder. Deer also have a tapetum lucidum, which gives them sufficiently good night vision.
All male deer possess antlers, with the exception of the water deer, in which males have long tusk-like canines that reach below the lower jaw.[24] Females generally lack antlers, though female reindeer bear antlers smaller and less branched than those of the males.[25] Occasionally females in other species may develop antlers, especially in telemetacarpal deer such as European roe deer, red deer, white-tailed deer and mule deer and less often in plesiometacarpal deer. A study of antlered female white-tailed deer noted that antlers tend to be small and malformed, and are shed frequently around the time of parturition.[26]
The fallow deer and the various subspecies of the reindeer have the largest as well as the heaviest antlers, both in absolute terms and in proportion to body mass (an average of 8 grams (0.28 oz) per kilogram of body mass);[25][27] the tufted deer, on the other hand, has the smallest antlers of all deer, while the pudú has the lightest antlers with respect to body mass (0.6 grams (0.021 oz) per kilogram of body mass).[25] The structure of antlers shows considerable variation; while fallow deer and elk antlers are palmate (with a broad central portion), white-tailed deer antlers include a series of tines sprouting upward from a forward-curving main beam, and those of the pudú are mere spikes.[9] Antler development begins from the pedicel, a bony structure that appears on the top of the skull by the time the animal is a year old. The pedicel gives rise to a spiky antler the following year, which is replaced by a branched antler in the third year. This process of losing a set of antlers to develop a larger and more branched set continues for the rest of the animal's life.[25] The antlers emerge as soft tissues (known as velvet antlers) and progressively harden into bony structures (known as hard antlers), following mineralisation and blockage of blood vessels in the tissue, from the tip to the base.[28]
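To put those mass ratios in concrete terms, a rough worked example (the 100 kg body mass assumed for a reindeer is an illustrative figure, not from the text; the 6 kg pudú comes from the size range quoted earlier):

\[ 8\ \mathrm{g/kg} \times 100\ \mathrm{kg} = 800\ \mathrm{g}\ \text{of antler (reindeer)} \qquad 0.6\ \mathrm{g/kg} \times 6\ \mathrm{kg} \approx 3.6\ \mathrm{g}\ \text{of antler (pudú)} \]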
Antlers might be one of the most exaggerated male secondary sexual characteristics,[29] and are intended primarily for reproductive success through sexual selection and for combat. The tines (forks) on the antlers create grooves that allow another male's antlers to lock into place. This allows the males to wrestle without risking injury to the face.[30] Antlers are correlated to an individual's position in the social hierarchy and its behaviour. For instance, the heavier the antlers, the higher the individual's status in the social hierarchy, and the greater the delay in shedding the antlers;[25] males with larger antlers tend to be more aggressive and dominant over others.[31] Antlers can be an honest signal of genetic quality; males with larger antlers relative to body size tend to have increased resistance to pathogens[32] and higher reproductive capacity.[33]
In elk in Yellowstone National Park, antlers also provide protection against predation by wolves.[34]
Most deer bear 32 teeth; the corresponding dental formula is 0.0.3.3 / 3.1.3.3 (upper / lower, per side). The elk and the reindeer may be exceptions, as they may retain their upper canines and thus have 34 teeth (dental formula: 0.1.3.3 / 3.1.3.3).[35] The Chinese water deer, tufted deer, and muntjac have enlarged upper canine teeth forming sharp tusks, while other species often lack upper canines altogether. The cheek teeth of deer have crescent ridges of enamel, which enable them to grind a wide variety of vegetation.[36] The teeth of deer are adapted to feeding on vegetation, and like other ruminants, they lack upper incisors, instead having a tough pad at the front of their upper jaw.
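Since a dental formula counts incisors, canines, premolars and molars on one side of the upper and lower jaws, doubling the sum recovers the tooth totals quoted above:

\[ 2 \times \bigl( \underbrace{0+0+3+3}_{\text{upper}} + \underbrace{3+1+3+3}_{\text{lower}} \bigr) = 2 \times 16 = 32 \ \text{teeth (most deer)} \]
\[ 2 \times \bigl( \underbrace{0+1+3+3}_{\text{upper}} + \underbrace{3+1+3+3}_{\text{lower}} \bigr) = 2 \times 17 = 34 \ \text{teeth (elk, reindeer)} \]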
Deer are browsers, and feed primarily on foliage of grasses, sedges, forbs, shrubs and trees, with additional consumption of lichens in northern latitudes during winter.[37] They have small, unspecialized stomachs by ruminant standards, and high nutrition requirements. Rather than eating and digesting vast quantities of low-grade fibrous food as, for example, sheep and cattle do, deer select easily digestible shoots, young leaves, fresh grasses, soft twigs, fruit, fungi, and lichens. The low-fibered food, after minimal fermentation and shredding, passes rapidly through the alimentary canal. The deer require a large amount of minerals such as calcium and phosphate in order to support antler growth, and this further necessitates a nutrient-rich diet. There are, however, some reports of deer engaging in carnivorous activity, such as eating dead alewives along lakeshores[38] or depredating the nests of northern bobwhites.[39]
Nearly all cervids are so-called uniparental species: the fawns are cared for only by the mother, known as a doe. A doe generally has one or two fawns at a time (triplets, while not unknown, are uncommon). The mating season typically begins in late August and lasts until December; some species mate as late as early March. The gestation period is anywhere up to ten months for the European roe deer. Most fawns are born with their fur covered with white spots, though in many species they lose these spots by the end of their first winter. In the first twenty minutes of its life, a fawn begins to take its first steps. Its mother licks it clean until it is almost free of scent, so predators will not find it. The mother often leaves to graze, and the fawn does not like to be left behind; sometimes its mother must gently push it down with her foot.[40] The fawn stays hidden in the grass for one week until it is strong enough to walk with its mother. The fawn and its mother stay together for about one year. A male usually leaves and never sees his mother again, but females sometimes come back with their own fawns and form small herds.
In some areas of the UK, deer (especially fallow deer due to their gregarious behaviour), have been implicated as a possible reservoir for transmission of bovine tuberculosis,[41][42] a disease which in the UK in 2005 cost £90 million in attempts to eradicate.[43] In New Zealand, deer are thought to be important as vectors picking up M. bovis in areas where brushtail possums Trichosurus vulpecula are infected, and transferring it to previously uninfected possums when their carcasses are scavenged elsewhere.[44] The white-tailed deer Odocoileus virginianus has been confirmed as the sole maintenance host in the Michigan outbreak of bovine tuberculosis which remains a significant barrier to the US nationwide eradication of the disease in livestock.[45]
Moose and deer can carry rabies.[46]
Docile moose may suffer from brain worm, a helminth which drills holes through the brain in its search for a suitable place to lay its eggs. A government biologist states that "They move around looking for the right spot and never really find it." Deer appear to be immune to this parasite; it passes through the digestive system and is excreted in the feces. The parasite is not screened by the moose intestine, and passes into the brain where damage is done that is externally apparent, both in behaviour and in gait.[46]
Deer, elk and moose in North America may suffer from chronic wasting disease, which was identified at a Colorado laboratory in the 1960s and is believed to be a prion disease. Out of an abundance of caution hunters are advised to avoid contact with specified risk material (SRM) such as the brain, spinal column or lymph nodes. Deboning the meat when butchering and sanitizing the knives and other tools used to butcher are amongst other government recommendations.[47]
Deer are believed to have evolved from antlerless, tusked ancestors that resembled modern duikers and diminutive deer in the early Eocene, and gradually developed into the first antlered cervoids (the superfamily of cervids and related extinct families) in the Miocene. Eventually, with the development of antlers, the tusks as well as the upper incisors disappeared. Thus, evolution of deer took nearly 30 million years. Biologist Valerius Geist suggests evolution to have occurred in stages. There are not many prominent fossils to trace this evolution, but only fragments of skeletons and antlers that might be easily confused with false antlers of non-cervid species.[9][48]
The ruminants, ancestors of the Cervidae, are believed to have evolved from Diacodexis, the earliest known artiodactyl (even-toed ungulate), 50–55 Mya in the Eocene.[49] Diacodexis, nearly the size of a rabbit, featured the talus bone characteristic of all modern even-toed ungulates. This ancestor and its relatives occurred throughout North America and Eurasia, but were on the decline by at least 46 Mya.[49][50] Analysis of a nearly complete skeleton of Diacodexis discovered in 1982 gave rise to speculation that this ancestor could be closer to the non-ruminants than the ruminants.[51] Andromeryx is another prominent prehistoric ruminant, but appears to be closer to the tragulids.[52]
The formation of the Himalayas and the Alps brought about significant geographic changes. This was the chief reason behind the extensive diversification of deer-like forms and the emergence of cervids from the Oligocene to the early Pliocene.[53] The latter half of the Oligocene (28–34 Mya) saw the appearance of the European Eumeryx and the North American Leptomeryx. The latter resembled modern-day bovids and cervids in dental morphology (for instance, it had brachyodont molars), while the former was more advanced.[54] Other deer-like forms included the North American Blastomeryx and the European Dremotherium; these sabre-toothed animals are believed to have been the direct ancestors of all modern antlered deer, though they themselves lacked antlers.[55] Another contemporaneous form was the four-horned protoceratid Protoceras, that was replaced by Syndyoceras in the Miocene; these animals were unique in having a horn on the nose.[48] Late Eocene fossils dated approximately 35 million years ago, which were found in North America, show that Syndyoceras had bony skull outgrowths that resembled non-deciduous antlers.[56]
Fossil evidence suggests that the earliest members of the superfamily Cervoidea appeared in Eurasia in the Miocene. Dicrocerus, Euprox and Heteroprox were probably the first antlered cervids.[57] Dicrocerus featured single-forked antlers that were shed regularly.[58] Stephanocemas had more developed and diffuse ("crowned") antlers.[59] Procervulus (Palaeomerycidae), in addition to the tusks of Dremotherium, possessed antlers that were not shed.[60] Contemporary forms such as the merycodontines eventually gave rise to the modern pronghorn.[61]
The Cervinae emerged as the first group of extant cervids around 7–9 Mya, during the late Miocene in central Asia. The tribe Muntiacini made its appearance as † Muntiacus leilaoensis around 7–8 Mya.[62] The early muntjacs varied in size, from as small as hares to as large as fallow deer; they had tusks for fighting and antlers for defence.[9] Capreolinae followed soon after; Alceini appeared 6.4–8.4 Mya.[63] Around this period, the Tethys Ocean disappeared to give way to vast stretches of grassland; these provided the deer with abundant protein-rich vegetation that led to the development of ornamental antlers and allowed populations to flourish and colonise areas.[9][53] As antlers became pronounced, the canines were no longer retained or were poorly represented (as in elk), probably because the diet was no longer browse-dominated and antlers were better display organs. In muntjac and tufted deer, the antlers as well as the canines are small. The tragulids, however, possess long canines to this day.[50]
With the onset of the Pliocene, the global climate became cooler. A fall in the sea-level led to massive glaciation; consequently, grasslands abounded in nutritious forage. Thus a new spurt in deer populations ensued.[9][53] The oldest member of Cervini, † Cervocerus novorossiae, appeared around the transition from Miocene to Pliocene (4.2–6 Mya) in Eurasia;[64] cervine fossils from early Pliocene to as late as the Pleistocene have been excavated in China[65] and the Himalayas.[66] While Cervus and Dama appeared nearly 3 Mya, Axis emerged during the late Pliocene–Pleistocene. The tribes Capreolini and Rangiferini appeared around 4–7 Mya.[63]
Around 5 Mya, the rangiferines † Bretzia and † Eocoileus were the first cervids to reach North America.[63] This implies the Bering Strait could be crossed during the late Miocene–Pliocene; this appears highly probable as the camelids migrated into Asia from North America around the same time.[67] Deer invaded South America in the late Pliocene (2.5–3 Mya) as part of the Great American Interchange, thanks to the recently formed Isthmus of Panama, and emerged successful due to the small number of competing ruminants in the continent.[68]
Large deer with impressive antlers evolved during the early Pleistocene, probably as a result of abundant resources to drive evolution.[9] The early Pleistocene cervid † Eucladoceros was comparable in size to the modern elk.[69] † Megaloceros (Pliocene–Pleistocene) featured the Irish elk (M. giganteus), one of the largest known cervids. The Irish elk reached 2 metres (6.6 ft) at the shoulder and had heavy antlers that spanned 3.6 metres (12 ft) from tip to tip.[70] These large animals are thought to have faced extinction due to conflict between sexual selection for large antlers and body and natural selection for a smaller form.[71] Meanwhile, the moose and reindeer radiated into North America from Siberia.[72]
Deer constitute the artiodactyl family Cervidae. This family was first described by German zoologist Georg August Goldfuss in Handbuch der Zoologie (1820). Three subfamilies are recognised: Capreolinae (first described by the English zoologist Joshua Brookes in 1828), Cervinae (described by Goldfuss) and Hydropotinae (first described by French zoologist Édouard Louis Trouessart in 1898).[3][73]
Other attempts at the classification of deer have been based on morphological and genetic differences.[48] The Anglo-Irish naturalist Victor Brooke suggested in 1878 that deer could be bifurcated into two classes according to the features of the second and fifth metacarpal bones of their forelimbs: Plesiometacarpalia (most Old World deer) and Telemetacarpalia (most New World deer). He treated the musk deer as a cervid, placing it under Telemetacarpalia. While the telemetacarpal deer showed only those elements located far from the joint, the plesiometacarpal deer retained the elements closer to the joint as well.[74] Differentiation on the basis of the diploid number of chromosomes in the late 20th century has been flawed by several inconsistencies.[48]
In 1987, the zoologists Colin Groves and Peter Grubb identified three subfamilies: Cervinae, Hydropotinae and Odocoileinae; they noted that the hydropotines lack antlers, and the other two subfamilies differ in their skeletal morphology.[75] However, they reverted from this classification in 2000.[76]
Until the beginning of the 21st century it was understood that the family Moschidae (musk deer) is sister to Cervidae. However, a 2003 phylogenetic study by Alexandre Hassanin (of National Museum of Natural History, France) and colleagues, based on mitochondrial and nuclear analyses, revealed that Moschidae and Bovidae form a clade sister to Cervidae. According to the study, Cervidae diverged from the Bovidae-Moschidae clade 27 to 28 million years ago.[77] The following cladogram is based on the 2003 study.[77]
Tragulidae
Antilocapridae
Giraffidae
Cervidae
Bovidae
Moschidae
A 2006 phylogenetic study of the internal relationships in Cervidae by Clément Gilbert and colleagues divided the family into two major clades: Capreolinae (telemetacarpal or New World deer) and Cervinae (plesiometacarpal or Old World deer). Studies in the late 20th century suggested a similar bifurcation in the family. This as well as previous studies support monophyly in Cervinae, while Capreolinae appears paraphyletic. The 2006 study identified two lineages in Cervinae, Cervini (comprising the genera Axis, Cervus, Dama and Rucervus) and Muntiacini (Muntiacus and Elaphodus). Capreolinae featured three lineages, Alceini (Alces species), Capreolini (Capreolus and the subfamily Hydropotinae) and Rangiferini (Blastocerus, Hippocamelus, Mazama, Odocoileus, Pudu and Rangifer species). The following cladogram is based on the 2006 study.[63]
Reeves's muntjac
Tufted deer
Fallow deer
Persian fallow deer
Rusa
Sambar
Red deer
Thorold's deer
Sika deer
Eld's deer
Père David's deer
Barasingha
Indian hog deer
Chital
Reindeer (Caribou)
American red brocket
White-tailed deer
Mule deer
Marsh deer
Gray brocket
Southern pudu
Taruca
Roe deer
Water deer
Moose or Eurasian elk
The subfamily Capreolinae consists of 9 genera and 36 species, while Cervinae comprises 10 genera and 55 species.[73] Hydropotinae consists of a single species, the water deer (H. inermis); however, a 1998 study placed it under Capreolinae.[78] The following list is based on molecular and phylogenetic studies by zoologists such as Groves and Grubb.[79][80][81][82][83]
The following is the classification of the extinct cervids with known fossil record:
Deer were an important source of food for early hominids. In China, Homo erectus fed upon the sika deer, while the red deer was hunted in Germany. In the Upper Palaeolithic, the reindeer was the staple food for Cro-Magnon people,[95] while the cave paintings at Lascaux in southwestern France include some 90 images of stags.[96]
Deer had a central role in the ancient art, culture and mythology of the Hittites, the ancient Egyptians, the Celts, the ancient Greeks, the Asians and several others. For instance, the Stag Hunt Mosaic of ancient Pella, under the Kingdom of Macedonia (4th century BC), possibly depicts Alexander the Great hunting a deer with Hephaistion.[97] In Japanese Shintoism, the sika deer is believed to be a messenger to the gods. In China, deer are associated with great medicinal significance; deer penis is thought by some in China to have aphrodisiac properties.[98] Spotted deer are believed in China to accompany the god of longevity. The deer was the principal sacrificial animal for the Huichol Indians of Mexico. In medieval Europe, deer appeared in hunting scenes and coats-of-arms. Deer are depicted in many materials by various pre-Hispanic civilizations in the Andes.[95][99]
The common male first name Oscar is taken from the Irish language, where it is derived from two elements: the first, os, means "deer"; the second element, cara, means "friend". The name is borne by a famous hero of Irish mythology—Oscar, grandson of Fionn Mac Cumhail. The name was popularised in the 18th century by James Macpherson, creator of 'Ossianic poetry'.
Deer have been an integral part of fables and other literary works since the inception of writing. Stags were used as symbols in later Sumerian writings; for instance, the boat of the Sumerian god Enki is named the Stag of Azbu. There are several mentions of the animal in the Rigveda as well as the Bible. In the Indian epic Ramayana, Sita is lured by a golden deer which Rama tries to catch; in the absence of both Rama and Lakshman, Ravana kidnaps Sita. Many of the allegorical Aesop's fables, such as "The Stag at the Pool", "The One-Eyed Doe" and "The Stag and a Lion", personify deer to give moral lessons. For instance, "The Sick Stag" gives the message that uncaring friends can do more harm than good.[95] The Yaqui deer song accompanies the deer dance, which is performed by a pascola [from the Spanish 'pascua', Easter] dancer (also known as a deer dancer). Pascolas would perform at religious and social functions many times of the year, especially during Lent and Easter.[95][100]
In one of Rudolf Erich Raspe's 1785 stories of Baron Munchausen's Narrative of his Marvellous Travels and Campaigns in Russia, the baron encounters a stag while eating cherries and, without ammunition, fires the cherry-pits at the stag with his musket, but it escapes. The next year, the baron encounters a stag with a cherry tree growing from its head; presumably this is the animal he had shot at the previous year. In Christmas lore (such as in the narrative poem "A Visit from St. Nicholas"), reindeer are often depicted pulling the sleigh of Santa Claus.[101] Marjorie Kinnan Rawlings's Pulitzer Prize-winning 1938 novel The Yearling was about a boy's relationship with a baby deer. The fiction book Fire Bringer is about a young fawn who goes on a quest to save the Herla, the deer kind.[102] In the 1942 Walt Disney Pictures film, Bambi is a white-tailed deer, while in Felix Salten's original 1923 book Bambi, a Life in the Woods, he is a roe deer. In C. S. Lewis's 1950 fantasy novel The Lion, the Witch and the Wardrobe the adult Pevensies, now kings and queens of Narnia, chase the White Stag on a hunt, as the Stag is said to grant its captor a wish. The hunt is key in returning the Pevensies to their home in England. In the 1979 book The Animals of Farthing Wood, The Great White Stag is the leader of all the animals.
Deer of various types appear frequently in European heraldry. In the British armory, the term "stag" is typically used to refer to antlered male red deer, while "buck" indicates an antlered male fallow deer. Stags and bucks appear in a number of attitudes, referred to as "lodged" when the deer is lying down, "trippant" when it has one leg raised, "courant" when it is running, "springing" when in the act of leaping, "statant" when it is standing with all hooves on the ground and looking ahead, and "at gaze" when otherwise statant but looking at the viewer. Stags' heads are also frequently used; these are typically portrayed without an attached neck and as facing the viewer, in which case they are termed "caboshed".[103]
Examples of deer in coats of arms can be found in the arms of Hertfordshire, England, and its county town of Hertford; both are examples of canting arms. A deer appears on the arms of the Israeli Postal Authority. Coats of arms featuring deer include those of Dotternhausen, Thierachern, Friolzheim, Bauen, Albstadt, and Dassel in Germany; of the Earls Bathurst in England; of Balakhna, Russia; of Åland, Finland; of Gjemnes, Hitra, Hjartdal, Rendalen and Voss in Norway; of Jelenia Góra, Poland; of Umeå, Sweden; of Queensland, Australia; of Cervera, Catalonia; of Northern Ireland; and of Chile.[citation needed]
Other types of deer used in heraldry include the hind, portrayed much like the stag or buck but without antlers, as well as the reindeer and winged stags. Winged stags are used as supporters in the arms of the de Carteret family. The sea-stag, possessing the antlers, head, forelegs and upper body of a stag and the tail of a mermaid, is often found in German heraldry.[103]
Deer have long had economic significance to humans. Deer meat, known as venison, is highly nutritious.[104][105] Due to the inherently wild nature and diet of deer, venison is most often obtained through deer hunting. In the United States, it is produced in small amounts compared to beef but still represents a significant trade. By 2012, some 25,000 tons of red deer were raised on farms in North America. The major deer-producing countries are New Zealand, the market leader, followed by Ireland, Great Britain and Germany. The trade earns over $100 million annually for these countries.[106]
The skins make a peculiarly strong, soft leather, known as buckskin. There is nothing special about skins with the fur on since the hair is brittle and soon falls off. The hoofs and horns are used for ornamental purposes, especially the antlers of the roe deer, which are utilized for making umbrella handles, and for similar purposes; elk horn is often employed in making knife handles. In China, a medicine is made from stag horn, and the antlers of certain species are eaten when "in the velvet".[107] Among the Inuit, the traditional ulu women's knife was made with an antler, horn, or ivory handle.[108]
Deer have long been bred in captivity as ornaments for parks, but only in the case of reindeer has thorough domestication succeeded.[107] The Sami of Scandinavia and the Kola Peninsula of Russia and other nomadic peoples of northern Asia use reindeer for food, clothing, and transport. Deer bred for hunting are selected based on the size of the antlers.[109] In North America, the reindeer, known there as caribou, is not domesticated or herded, but it is important as a quarry animal to the Caribou Inuit.[110]
Automobile collisions with deer can impose a significant cost on the economy. In the U.S., about 1.5 million deer-vehicle collisions occur each year, according to the National Highway Traffic Safety Administration. Those accidents cause about 150 human deaths and $1.1 billion in property damage annually.[111] In Scotland, several roads including the A82, the A87 and the A835 have had significant enough problems with deer vehicle collisions (DVCs) that sets of vehicle activated automatic warning signs have been installed along these roads.[112]
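As a rough check on the scale of those figures (a back-of-the-envelope division, not a quoted statistic), the totals imply an average property-damage cost per collision of:

\[ \frac{\$1.1 \times 10^{9}}{1.5 \times 10^{6}\ \text{collisions}} \approx \$733\ \text{per collision} \]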
As noted above, the white-tailed deer Odocoileus virginianus is the sole maintenance host in the Michigan outbreak of bovine tuberculosis, and hunting there also serves to control the disease: in 2008, 733,998 licensed deer hunters killed approximately 489,922 white-tailed deer to procure venison, control the deer population, and minimize the spread of disease. These hunters purchased more than 1.5 million deer harvest tags. The economic value of deer hunting to Michigan's economy is substantial; in 2006, hunters spent US$507 million hunting white-tailed deer in Michigan.[45]
Deer hunting is a popular activity in the U.S. that provides the hunter's family with high quality meat and generates revenue for states and the federal government from the sales of licenses, permits and tags. The 2006 survey by the U.S. Fish and Wildlife Service estimates that license sales generate approximately $700 million annually. This revenue generally goes to support conservation efforts in the states where the licenses are purchased. Overall, the U.S. Fish and Wildlife Service estimates that big game hunting for deer and elk generates approximately $11.8 billion annually in hunting-related travel, equipment and related expenditures.[113]
The word deer was originally broad in meaning, becoming more specific with time. Old English dēor and Middle English der meant a wild animal of any kind. Cognates of Old English dēor in other dead Germanic languages have the general sense of animal, such as Old High German tior, Old Norse djur or dȳr, Gothic dius, Old Saxon dier, and Old Frisian diar.[114] This general sense gave way to the modern English sense by the end of the Middle English period, around 1500. However, all modern Germanic languages save English and Scots retain the more general sense: for example, German Tier and Norwegian dyr mean animal.[115]
For many types of deer in modern English usage, the male is a buck and the female a doe, but the terms vary with dialect, and according to the size of the species. The male red deer is a stag, while for other large species the male is a bull, the female a cow, as in cattle. In older usage, the male of any species is a hart, especially if over five years old, and the female is a hind, especially if three or more years old.[116] The young of small species is a fawn and of large species a calf; a very small young may be a kid. A castrated male is a havier.[117] A group of any species is a herd. The adjective of relation is cervine; like the family name Cervidae, this is from Latin: cervus, meaning stag or deer.
en/1941.html.txt
ADDED
@@ -0,0 +1,122 @@
A car (or automobile) is a wheeled motor vehicle used for transportation. Most definitions of cars say that they run primarily on roads, seat one to eight people, have four tires, and mainly transport people rather than goods.[2][3]
Cars came into global use during the 20th century, and developed economies depend on them. The year 1886 is regarded as the birth year of the modern car when German inventor Karl Benz patented his Benz Patent-Motorwagen. Cars became widely available in the early 20th century. One of the first cars accessible to the masses was the 1908 Model T, an American car manufactured by the Ford Motor Company. Cars were rapidly adopted in the US, where they replaced animal-drawn carriages and carts, but took much longer to be accepted in Western Europe and other parts of the world.[citation needed]
Cars have controls for driving, parking, passenger comfort, and a variety of lights. Over the decades, additional features and controls have been added to vehicles, making them progressively more complex, but also more reliable and easier to operate.[citation needed] These include rear-reversing cameras, air conditioning, navigation systems, and in-car entertainment. Most cars in use in the 2010s are propelled by an internal combustion engine, fueled by the combustion of fossil fuels. Electric cars, which were invented early in the history of the car, became commercially available in the 2000s and are predicted to cost less to buy than gasoline cars before 2025.[4][5] The transition from fossil fuels to electric cars features prominently in most climate change mitigation scenarios.[6]
There are costs and benefits to car use. The costs to the individual include acquiring the vehicle, interest payments (if the car is financed), repairs and maintenance, fuel, depreciation, driving time, parking fees, taxes, and insurance.[7] The costs to society include maintaining roads, land use, road congestion, air pollution, public health, healthcare, and disposing of the vehicle at the end of its life. Traffic collisions are the largest cause of injury-related deaths worldwide.[8]
The personal benefits include on-demand transportation, mobility, independence, and convenience.[9] The societal benefits include economic benefits, such as job and wealth creation from the automotive industry, transportation provision, societal well-being from leisure and travel opportunities, and revenue generation from the taxes. People's ability to move flexibly from place to place has far-reaching implications for the nature of societies.[10] There are around 1 billion cars in use worldwide. The numbers are increasing rapidly, especially in China, India and other newly industrialized countries.[11]
The English word car is believed to originate from Latin carrus/carrum "wheeled vehicle" or (via Old North French) Middle English carre "two-wheeled cart," both of which in turn derive from Gaulish karros "chariot."[12][13] It originally referred to any wheeled horse-drawn vehicle, such as a cart, carriage, or wagon.[14][15]
"Motor car," attested from 1895, is the usual formal term in British English.[3] "Autocar," a variant likewise attested from 1895 and literally meaning "self-propelled car," is now considered archaic.[16] "Horseless carriage" is attested from 1895.[17]
"Automobile," a classical compound derived from Ancient Greek autós (αὐτός) "self" and Latin mobilis "movable," entered English from French and was first adopted by the Automobile Club of Great Britain in 1897.[18] It fell out of favour in Britain and is now used chiefly in North America,[19] where the abbreviated form "auto" commonly appears as an adjective in compound formations like "auto industry" and "auto mechanic".[20][21] Both forms are still used in everyday Dutch (auto/automobiel) and German (Auto/Automobil).[citation needed]
The first working steam-powered vehicle was designed — and quite possibly built — by Ferdinand Verbiest, a Flemish member of a Jesuit mission in China around 1672. It was a 65-cm-long scale-model toy for the Chinese Emperor that was unable to carry a driver or a passenger.[9][22][23] It is not known with certainty if Verbiest's model was successfully built or run.[23]
Nicolas-Joseph Cugnot is widely credited with building the first full-scale, self-propelled mechanical vehicle or car in about 1769; he created a steam-powered tricycle.[24] He also constructed two steam tractors for the French Army, one of which is preserved in the French National Conservatory of Arts and Crafts.[25] His inventions were, however, handicapped by problems with water supply and maintaining steam pressure.[25] In 1801, Richard Trevithick built and demonstrated his Puffing Devil road locomotive, believed by many to be the first demonstration of a steam-powered road vehicle. It was unable to maintain sufficient steam pressure for long periods and was of little practical use.
The development of external combustion engines is detailed as part of the history of the car but often treated separately from the development of true cars. A variety of steam-powered road vehicles were used during the first part of the 19th century, including steam cars, steam buses, phaetons, and steam rollers. Sentiment against them led to the Locomotive Acts of 1865.
In 1807, Nicéphore Niépce and his brother Claude created what was probably the world's first internal combustion engine (which they called a Pyréolophore), but they chose to install it in a boat on the river Saone in France.[26] Coincidentally, in 1807 the Swiss inventor François Isaac de Rivaz designed his own 'de Rivaz internal combustion engine' and used it to develop the world's first vehicle to be powered by such an engine. The Niépces' Pyréolophore was fuelled by a mixture of Lycopodium powder (dried spores of the Lycopodium plant), finely crushed coal dust and resin that were mixed with oil, whereas de Rivaz used a mixture of hydrogen and oxygen.[26] Neither design was very successful, as was the case with others, such as Samuel Brown, Samuel Morey, and Etienne Lenoir with his hippomobile, who each produced vehicles (usually adapted carriages or carts) powered by internal combustion engines.[1]
In November 1881, French inventor Gustave Trouvé demonstrated the first working (three-wheeled) car powered by electricity at the International Exposition of Electricity, Paris.[27] Although several other German engineers (including Gottlieb Daimler, Wilhelm Maybach, and Siegfried Marcus) were working on the problem at about the same time, Karl Benz generally is acknowledged as the inventor of the modern car.[1]
In 1879, Benz was granted a patent for his first engine, which had been designed in 1878. Many of his other inventions made the use of the internal combustion engine feasible for powering a vehicle. His first Motorwagen was built in 1885 in Mannheim, Germany. He was awarded the patent for its invention as of his application on 29 January 1886 (under the auspices of his major company, Benz & Cie., which was founded in 1883). Benz began promotion of the vehicle on 3 July 1886, and about 25 Benz vehicles were sold between 1888 and 1893, when his first four-wheeler was introduced along with a cheaper model. They also were powered with four-stroke engines of his own design. Emile Roger of France, already producing Benz engines under license, now added the Benz car to his line of products. Because France was more open to the early cars, initially more were built and sold in France through Roger than Benz sold in Germany. In August 1888 Bertha Benz, the wife of Karl Benz, undertook the first road trip by car, to prove the road-worthiness of her husband's invention.
In 1896, Benz designed and patented the first internal-combustion flat engine, called the boxermotor. During the last years of the nineteenth century, Benz & Cie. was the largest car company in the world, with 572 units produced in 1899, and because of its size it became a joint-stock company. The first motor car in central Europe, and one of the first factory-made cars in the world, was produced by the Czech company Nesselsdorfer Wagenbau (later renamed Tatra) in 1897: the Präsident automobil.
Daimler and Maybach founded Daimler Motoren Gesellschaft (DMG) in Cannstatt in 1890, and sold their first car in 1892 under the brand name Daimler. It was a horse-drawn stagecoach built by another manufacturer, which they retrofitted with an engine of their design. By 1895 about 30 vehicles had been built by Daimler and Maybach, either at the Daimler works or in the Hotel Hermann, where they set up shop after disputes with their backers. Benz, Maybach and the Daimler team seem to have been unaware of each other's early work. They never worked together; by the time of the merger of the two companies, Daimler and Maybach were no longer part of DMG. Daimler died in 1900 and later that year, Maybach designed an engine named Daimler-Mercedes that was placed in a specially ordered model built to specifications set by Emil Jellinek. A small number of these vehicles were produced for Jellinek to race and market in his country. Two years later, in 1902, a new model DMG car was produced and the model was named Mercedes after the Maybach engine, which generated 35 hp. Maybach quit DMG shortly thereafter and opened a business of his own. Rights to the Daimler brand name were sold to other manufacturers.
Karl Benz proposed co-operation between DMG and Benz & Cie. when economic conditions began to deteriorate in Germany following the First World War, but the directors of DMG refused to consider it initially. Negotiations between the two companies resumed several years later when these conditions worsened and, in 1924 they signed an Agreement of Mutual Interest, valid until the year 2000. Both enterprises standardized design, production, purchasing, and sales and they advertised or marketed their car models jointly, although keeping their respective brands. On 28 June 1926, Benz & Cie. and DMG finally merged as the Daimler-Benz company, baptizing all of its cars Mercedes Benz, as a brand honoring the most important model of the DMG cars, the Maybach design later referred to as the 1902 Mercedes-35 hp, along with the Benz name. Karl Benz remained a member of the board of directors of Daimler-Benz until his death in 1929, and at times, his two sons also participated in the management of the company.
In 1890, Émile Levassor and Armand Peugeot of France began producing vehicles with Daimler engines, and so laid the foundation of the automotive industry in France. In 1891, Auguste Doriot and his Peugeot colleague Louis Rigoulot completed the longest trip by a gasoline-powered vehicle when their self-designed and built Daimler powered Peugeot Type 3 completed 2,100 km (1,300 miles) from Valentigney to Paris and Brest and back again. They were attached to the first Paris–Brest–Paris bicycle race, but finished 6 days after the winning cyclist, Charles Terront.
The first design for an American car with a gasoline internal combustion engine was made in 1877 by George Selden of Rochester, New York. Selden applied for a patent for a car in 1879, but the patent application expired because the vehicle was never built. After a delay of sixteen years and a series of attachments to his application, on 5 November 1895, Selden was granted a United States patent (U.S. Patent 549,160) for a two-stroke car engine, which hindered, more than encouraged, development of cars in the United States. His patent was challenged by Henry Ford and others, and overturned in 1911.
In 1893, the first running, gasoline-powered American car was built and road-tested by the Duryea brothers of Springfield, Massachusetts. The first public run of the Duryea Motor Wagon took place on 21 September 1893, on Taylor Street in Metro Center Springfield.[28][29] The Studebaker Automobile Company, subsidiary of a long-established wagon and coach manufacturer, started to build cars in 1897[30]:p.66 and commenced sales of electric vehicles in 1902 and gasoline vehicles in 1904.[31]
In Britain, there had been several attempts to build steam cars with varying degrees of success, with Thomas Rickett even attempting a production run in 1860.[32] Santler from Malvern is recognized by the Veteran Car Club of Great Britain as having made the first gasoline-powered car in the country in 1894,[33] followed by Frederick William Lanchester in 1895, but these were both one-offs.[33] The first production vehicles in Great Britain came from the Daimler Company, founded by Harry J. Lawson in 1896 after he purchased the right to use the Daimler name and engines. Lawson's company made its first car in 1897, and it bore the name Daimler.[33]
In 1892, German engineer Rudolf Diesel was granted a patent for a "New Rational Combustion Engine". In 1897, he built the first diesel engine.[1] Steam-, electric-, and gasoline-powered vehicles competed for decades, with gasoline internal combustion engines achieving dominance in the 1910s. Although various pistonless rotary engine designs have attempted to compete with the conventional piston and crankshaft design, only Mazda's version of the Wankel engine has had more than very limited success.
All in all, it is estimated that over 100,000 patents created the modern automobile and motorcycle.[34]
Large-scale, production-line manufacturing of affordable cars was started by Ransom Olds in 1901 at his Oldsmobile factory in Lansing, Michigan and based upon stationary assembly line techniques pioneered by Marc Isambard Brunel at the Portsmouth Block Mills, England, in 1802. The assembly line style of mass production and interchangeable parts had been pioneered in the U.S. by Thomas Blanchard in 1821, at the Springfield Armory in Springfield, Massachusetts.[35] This concept was greatly expanded by Henry Ford, beginning in 1913 with the world's first moving assembly line for cars at the Highland Park Ford Plant.
As a result, Ford's cars came off the line in fifteen-minute intervals, much faster than previous methods, increasing productivity eightfold while using less manpower (from 12.5 man-hours to 1 hour 33 minutes).[36] It was so successful that paint became a bottleneck: only Japan black would dry fast enough, forcing the company to drop the variety of colors available before 1913, until fast-drying Duco lacquer was developed in 1926. This is the source of Ford's apocryphal remark, "any color as long as it's black".[36] In 1914, an assembly line worker could buy a Model T with four months' pay.[36]
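The arithmetic behind the "eightfold" claim can be checked directly from the man-hour figures above; a minimal sketch in Python, using only the numbers quoted in the preceding paragraph:

    # Assembly time per car before and after the moving assembly line.
    before_hours = 12.5        # 12.5 man-hours per car, pre-assembly-line
    after_hours = 1 + 33 / 60  # 1 hour 33 minutes = 1.55 hours

    speedup = before_hours / after_hours
    print(f"Productivity gain: {speedup:.1f}x")  # ~8.1x, consistent with "eightfold"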
Ford's complex safety procedures—especially assigning each worker to a specific location instead of allowing them to roam about—dramatically reduced the rate of injury.[citation needed] The combination of high wages and high efficiency is called "Fordism," and was copied by most major industries. The efficiency gains from the assembly line also coincided with the economic rise of the United States. The assembly line forced workers to work at a certain pace with very repetitive motions, which led to more output per worker while other countries were using less productive methods.
In the automotive industry, its success was dominant, and it quickly spread worldwide, with the founding of Ford France and Ford Britain in 1911, Ford Denmark in 1923, and Ford Germany in 1925; in 1921, Citroën was the first native European manufacturer to adopt the production method. Soon, companies had to have assembly lines or risk going broke; by 1930, 250 companies which did not had disappeared.[36]
Development of automotive technology was rapid, due in part to the hundreds of small manufacturers competing to gain the world's attention. Key developments included electric ignition and the electric self-starter (both by Charles Kettering, for the Cadillac Motor Company in 1910–1911), independent suspension, and four-wheel brakes.
Since the 1920s, nearly all cars have been mass-produced to meet market needs, so marketing plans often have heavily influenced car design. It was Alfred P. Sloan who established the idea of different makes of cars produced by one company, called the General Motors Companion Make Program, so that buyers could "move up" as their fortunes improved.
Reflecting the rapid pace of change, makes shared parts with one another so larger production volume resulted in lower costs for each price range. For example, in the 1930s, LaSalles, sold by Cadillac, used cheaper mechanical parts made by Oldsmobile; in the 1950s, Chevrolet shared hood, doors, roof, and windows with Pontiac; by the 1990s, corporate powertrains and shared platforms (with interchangeable brakes, suspension, and other parts) were common. Even so, only major makers could afford high costs, and even companies with decades of production, such as Apperson, Cole, Dorris, Haynes, or Premier, could not manage: of some two hundred American car makers in existence in 1920, only 43 survived in 1930, and with the Great Depression, by 1940, only 17 of those were left.[36]
In Europe, much the same would happen. Morris set up its production line at Cowley in 1924, and soon outsold Ford, while beginning in 1923 to follow Ford's practice of vertical integration, buying Hotchkiss (engines), Wrigley (gearboxes), and Osberton (radiators), for instance, as well as competitors, such as Wolseley: in 1925, Morris had 41% of total British car production. Most British small-car assemblers, from Abbey to Xtra, had gone under. Citroën did the same in France, coming to cars in 1919; between them and other cheap cars in reply, such as Renault's 10CV and Peugeot's 5CV, they produced 550,000 cars in 1925, and Mors, Hurtu, and others could not compete.[36] Germany's first mass-manufactured car, the Opel 4PS Laubfrosch (Tree Frog), came off the line at Rüsselsheim in 1924, soon making Opel the top car builder in Germany, with 37.5% of the market.[36]
In Japan, car production was very limited before World War II. Only a handful of companies were producing vehicles in limited numbers, and these were small three-wheeled vehicles for commercial use, like those of Daihatsu, or were the result of partnering with European companies, like Isuzu building the Wolseley A-9 in 1922. Mitsubishi was also partnered with Fiat and built the Mitsubishi Model A based on a Fiat vehicle. Toyota, Nissan, Suzuki, Mazda, and Honda began as companies producing non-automotive products before the war, switching to car production during the 1950s. Kiichiro Toyoda's decision to take Toyoda Loom Works into automobile manufacturing would create what would eventually become Toyota Motor Corporation, the largest automobile manufacturer in the world. Subaru, meanwhile, was formed from a conglomerate of six companies who banded together as Fuji Heavy Industries, as a result of having been broken up under keiretsu legislation.
According to the European Environment Agency, the transport sector is a major contributor to air pollution, noise pollution and climate change.[37]
Most cars in use in the 2010s run on gasoline burnt in an internal combustion engine (ICE). The International Organization of Motor Vehicle Manufacturers says that, in countries that mandate low sulfur gasoline, gasoline-fuelled cars built to late 2010s standards (such as Euro-6) emit very little local air pollution.[38][39] Some cities ban older gasoline-fuelled cars and some countries plan to ban sales in future. However, some environmental groups say this phase-out of fossil fuel vehicles must be brought forward to limit climate change. Production of gasoline-fuelled cars peaked in 2017.[40][41]
Other hydrocarbon fossil fuels also burnt by deflagration (rather than detonation) in ICE cars include diesel, Autogas and CNG. Removal of fossil fuel subsidies,[42][43] concerns about oil dependence, tightening environmental laws and restrictions on greenhouse gas emissions are propelling work on alternative power systems for cars. This includes hybrid vehicles, plug-in electric vehicles and hydrogen vehicles. 2.1 million light electric vehicles (of all types but mainly cars) were sold in 2018, over half in China: this was an increase of 64% on the previous year, giving a global total on the road of 5.4 million.[44] Vehicles using alternative fuels such as ethanol flexible-fuel vehicles and natural gas vehicles[clarification needed] are also gaining popularity in some countries.[citation needed] Cars for racing or speed records have sometimes employed jet or rocket engines, but these are impractical for common use.
Oil consumption has increased rapidly in the 20th and 21st centuries because there are more cars; the 1985–2003 oil glut even fuelled the sales of low-economy vehicles in OECD countries. The BRIC countries are adding to this consumption.
Cars are equipped with controls used for driving, passenger comfort and safety, normally operated by a combination of the use of feet and hands, and occasionally by voice on 21st century cars. These controls include a steering wheel, pedals for operating the brakes and controlling the car's speed (and, in a manual transmission car, a clutch pedal), a shift lever or stick for changing gears, and a number of buttons and dials for turning on lights, ventilation and other functions. Modern cars' controls are now standardized, such as the location for the accelerator and brake, but this was not always the case. Controls are evolving in response to new technologies, for example the electric car and the integration of mobile communications.
Some of the original controls are no longer required. For example, all cars once had controls for the choke valve, clutch, ignition timing, and a crank instead of an electric starter. However, new controls have also been added to vehicles, making them more complex. These include air conditioning, navigation systems, and in-car entertainment. Another trend is the replacement of physical knobs and switches with touchscreen secondary controls such as BMW's iDrive and Ford's MyFord Touch. Another change is that while early cars' pedals were physically linked to the brake mechanism and throttle, in the 2010s, cars have increasingly replaced these physical linkages with electronic controls.
Cars are typically fitted with multiple types of lights. These include headlights, which are used to illuminate the way ahead and make the car visible to other users, so that the vehicle can be used at night; in some jurisdictions, daytime running lights; red brake lights to indicate when the brakes are applied; amber turn signal lights to indicate the turn intentions of the driver; white-colored reverse lights to illuminate the area behind the car (and indicate that the driver will be or is reversing); and on some vehicles, additional lights (e.g., side marker lights) to increase the visibility of the car. Interior lights on the ceiling of the car are usually fitted for the driver and passengers. Some vehicles also have a trunk light and, more rarely, an engine compartment light.
During the late 20th and early 21st century, cars increased in weight due to batteries,[46] modern steel safety cages, anti-lock brakes, airbags, and "more-powerful—if more-efficient—engines"[47] and, as of 2019, typically weigh between 1 and 3 tonnes.[48] Heavier cars are safer for the driver from a crash perspective, but more dangerous for other vehicles and road users.[47] The weight of a car influences fuel consumption and performance, with more weight resulting in increased fuel consumption and decreased performance. The Smart Fortwo, a small city car, weighs 750–795 kg (1,655–1,755 lb). Heavier cars include full-size cars, SUVs and extended-length SUVs like the Suburban.
According to research conducted by Julian Allwood of the University of Cambridge, global energy use could be greatly reduced by using lighter cars, and an average weight of 500 kg (1,100 lb) has been said to be well achievable.[49][better source needed] In some competitions, such as the Shell Eco Marathon, average car weights of 45 kg (99 lb) have also been achieved.[50] These cars are only single-seaters (still falling within the definition of a car, although 4-seater cars are more common), but they nevertheless demonstrate the amount by which car weights could still be reduced, and the correspondingly lower fuel use (fuel economy of up to 2,560 km/l).[51]
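For scale, the Eco Marathon fuel-economy figure converts to more familiar units; a minimal Python sketch using the 2,560 km/l value from the paragraph above and standard conversion constants:

    # Convert the cited fuel economy to L/100 km and US miles per gallon.
    km_per_litre = 2560.0

    litres_per_100km = 100.0 / km_per_litre     # fuel needed per 100 km
    mpg_us = km_per_litre * 3.78541 / 1.609344  # (km/l) * (l/US gal) / (km/mile)

    print(f"{litres_per_100km:.3f} L/100 km")   # ~0.039 L/100 km
    print(f"{mpg_us:,.0f} mpg (US)")            # ~6,022 mpg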
Most cars are designed to carry multiple occupants, often with four or five seats. Cars with five seats typically seat two passengers in the front and three in the rear. Full-size cars and large sport utility vehicles can often carry six, seven, or more occupants depending on the arrangement of the seats. On the other hand, sports cars are most often designed with only two seats. The differing needs for passenger capacity and their luggage or cargo space has resulted in the availability of a large variety of body styles to meet individual consumer requirements that include, among others, the sedan/saloon, hatchback, station wagon/estate, and minivan.
Traffic collisions are the largest cause of injury-related deaths worldwide.[8] Mary Ward became one of the first documented car fatalities in 1869 in Parsonstown, Ireland,[52] and Henry Bliss one of the United States' first pedestrian car casualties in 1899 in New York City.[53]
There are now standard tests for safety in new cars, such as the EuroNCAP and the US NCAP tests,[54] and insurance-industry-backed tests by the Insurance Institute for Highway Safety (IIHS).[55]
The costs of car usage, which may include the cost of acquiring the vehicle, repairs and auto maintenance, fuel, depreciation, driving time, parking fees, taxes, and insurance,[7] are weighed against the cost of the alternatives, and the value of the benefits – perceived and real – of vehicle usage. The benefits may include on-demand transportation, mobility, independence and convenience.[9] During the 1920s, cars had another benefit: "[c]ouples finally had a way to head off on unchaperoned dates, plus they had a private space to snuggle up close at the end of the night."[57]
Similarly, the costs to society of car use may include maintaining roads, land use, air pollution, road congestion, public health, health care, and disposing of the vehicle at the end of its life, and can be balanced against the value of the benefits to society that car use generates. Societal benefits may include economic benefits, such as job and wealth creation from car production and maintenance, transportation provision, societal wellbeing derived from leisure and travel opportunities, and revenue generation from tax opportunities. The ability of humans to move flexibly from place to place has far-reaching implications for the nature of societies.[10]
Cars are a major cause of urban air pollution,[58] with all types of cars producing dust from brakes, tyres and road wear.[59] As of 2018, the average diesel car has a worse effect on air quality than the average gasoline car,[60] but both gasoline and diesel cars pollute more than electric cars.[61] While there are different ways to power cars, most rely on gasoline or diesel, and they consume almost a quarter of world oil production as of 2019.[40] In 2018, passenger road vehicles emitted 3.6 gigatonnes of carbon dioxide.[62] As of 2019, due to greenhouse gases emitted during battery production, electric cars must be driven tens of thousands of kilometers before their lifecycle carbon emissions are less than those of fossil fuel cars,[63] but this is expected to improve in future due to longer-lasting[64] batteries being produced in larger factories[65] and lower-carbon electricity. Many governments are using fiscal policies, such as road tax, to discourage the purchase and use of more polluting cars,[66] and many cities are doing the same with low-emission zones.[67] Fuel taxes may act as an incentive for the production of more efficient, hence less polluting, car designs (e.g. hybrid vehicles) and the development of alternative fuels. High fuel taxes or cultural change may provide a strong incentive for consumers to purchase lighter, smaller, more fuel-efficient cars, or to not drive.[67]
The lifetime of a car built in the 2020s is expected to be about 16 years, or about 2 million kilometres (1.2 million miles) if driven a lot.[68] According to the International Energy Agency, fuel economy improved 0.7% in 2017, but an annual improvement of 3.7% is needed to meet the Global Fuel Economy Initiative 2030 target.[69] The increase in sales of SUVs is bad for fuel economy.[40] Many cities in Europe have banned older fossil fuel cars, and all fossil fuel vehicles will be banned in Amsterdam from 2030.[70] Many Chinese cities limit licensing of fossil fuel cars,[71] and many countries plan to stop selling them between 2025 and 2050.[72]
The manufacture of vehicles is resource-intensive, and many manufacturers now report on the environmental performance of their factories, including energy usage, waste and water consumption.[73] Manufacturing each kWh of battery emits a similar amount of carbon as burning through one full tank of gasoline.[74] The growth in popularity of the car allowed cities to sprawl, thereby encouraging more travel by car, resulting in inactivity and obesity, which in turn can lead to increased risk of a variety of diseases.[75]
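The "tens of thousands of kilometers" break-even point mentioned earlier can be roughed out from figures of this kind; a minimal back-of-the-envelope sketch in Python, in which the battery size, manufacturing emissions, and per-kilometre emissions are illustrative assumptions rather than values from the cited sources:

    # Rough EV lifecycle break-even estimate under the stated assumptions.
    battery_kwh = 60.0              # assumed mid-size EV battery capacity
    kg_co2_per_kwh_battery = 100.0  # assumed manufacturing emissions per kWh
    gasoline_g_per_km = 180.0       # assumed average gasoline car emissions
    ev_g_per_km = 60.0              # assumed EV emissions on an average grid

    battery_debt_kg = battery_kwh * kg_co2_per_kwh_battery         # ~6,000 kg CO2
    saving_kg_per_km = (gasoline_g_per_km - ev_g_per_km) / 1000.0  # 0.12 kg/km

    breakeven_km = battery_debt_kg / saving_kg_per_km
    print(f"Break-even after ~{breakeven_km:,.0f} km")             # ~50,000 km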
Animals and plants are often negatively impacted by cars via habitat destruction and pollution. Over the lifetime of the average car, the "loss of habitat potential" may be over 50,000 m2 (540,000 sq ft) based on primary production correlations.[76] Animals are also killed every year on roads by cars; these deaths are referred to as roadkill. More recent road developments include significant environmental mitigation in their designs, such as green bridges (designed to allow wildlife crossings) and the creation of wildlife corridors.
Growth in the popularity of vehicles and commuting has led to traffic congestion. Moscow, Istanbul, Bogotá, Mexico City and São Paulo were the world's most congested cities in 2018 according to INRIX, a data analytics company.[77]
Although intensive development of conventional battery electric vehicles is continuing into the 2020s,[78] other car propulsion technologies that are under development include wheel hub motors,[79] wireless charging,[80] hydrogen cars,[81] and hydrogen/electric hybrids.[82] Research into alternative forms of power includes using ammonia instead of hydrogen in fuel cells.[83]
New materials[84] which may replace steel car bodies include duralumin, fiberglass, carbon fiber, biocomposites, and carbon nanotubes. Telematics technology is allowing more and more people to share cars, on a pay-as-you-go basis, through car share and carpool schemes. Communication is also evolving due to connected car systems.[85]
Fully autonomous vehicles, also known as driverless cars, already exist in prototype (such as the Google driverless car), but have a long way to go before they are in general use.
There have been several projects aiming to develop a car on the principles of open design, an approach to designing in which the plans for the machinery and systems are publicly shared, often without monetary compensation. The projects include OScar, Riversimple (through 40fires.org)[86] and c,mm,n.[87] None of the projects had achieved significant success in developing a car as a whole, from both a hardware and a software perspective, and no mass-production-ready open-source design had been introduced as of late 2009. Some car hacking through on-board diagnostics (OBD) has been done so far.[88]
Car-share arrangements and carpooling are also increasingly popular, in the US and Europe.[89] For example, in the US, some car-sharing services have experienced double-digit growth in revenue and membership between 2006 and 2007. Services like car sharing offer residents a way to "share" a vehicle rather than own a car in already congested neighborhoods.[90]
The automotive industry designs, develops, manufactures, markets, and sells the world's motor vehicles, more than three-quarters of which are cars. In 2018 there were 70 million cars manufactured worldwide,[91] down 2 million from the previous year.[92]
The automotive industry in China produces by far the most (24 million in 2018), followed by Japan (8 million), Germany (5 million) and India (4 million).[91] The largest market is China, followed by the USA.
Around the world there are about a billion cars on the road;[93] they burn over a trillion liters of gasoline and diesel fuel yearly, consuming about 50 EJ (nearly 300 terawatt-hours) of energy.[94] The numbers of cars are increasing rapidly in China and India.[11] In the opinion of some, urban transport systems based around the car have proved unsustainable, consuming excessive energy, affecting the health of populations, and delivering a declining level of service despite increasing investment. Many of these negative impacts fall disproportionately on those social groups who are also least likely to own and drive cars.[95][96] The sustainable transport movement focuses on solutions to these problems. The car industry is also facing increasing competition from the public transport sector, as some people re-evaluate their private vehicle usage.
Established alternatives for some aspects of car use include public transport such as buses, trolleybuses, trains, subways, tramways, light rail, cycling, and walking. Bicycle sharing systems have been established in China and many European cities, including Copenhagen and Amsterdam. Similar programs have been developed in large US cities.[98][99] Additional individual modes of transport, such as personal rapid transit could serve as an alternative to cars if they prove to be socially accepted.[100]
The term motorcar was formerly also used in the context of electrified rail systems to denote a car which functions as a small locomotive but also provides space for passengers and baggage. These locomotive cars were often used on suburban routes by both interurban and intercity railroad systems.[101]
en/1942.html.txt
ADDED
@@ -0,0 +1,87 @@
Flour is a powder made by grinding raw grains, roots, beans, nuts, seeds, or bones. Flours are used to make many different foods. Cereal flour is the main ingredient of bread, which is a staple food for most cultures. Wheat flour is one of the most important ingredients in Oceanic, European, South American, North American, Middle Eastern, North Indian and North African cultures, and is the defining ingredient in their styles of breads and pastries. Corn flour has been important in Mesoamerican cuisine since ancient times and remains a staple in the Americas. Rye flour is a constituent of bread in central Europe.
Cereal flour consists either of the endosperm, germ, and bran together (whole-grain flour) or of the endosperm alone (refined flour). Meal is either differentiable from flour as having slightly coarser particle size (degree of comminution) or is synonymous with flour; the word is used both ways. For example, the word cornmeal often connotes a grittier texture whereas corn flour connotes fine powder, although there is no codified dividing line.
The English word flour is originally a variant of the word flower, and both words derive from the Old French fleur or flour, which had the literal meaning "blossom", and a figurative meaning "the finest". The phrase fleur de farine meant "the finest part of the meal", since flour resulted from the elimination of coarse and unwanted matter from the grain during milling.[1]
The earliest archaeological evidence for wheat seeds crushed between simple millstones to make flour dates to 6000 BC.[2] The Romans were the first to grind seeds on cone mills. In 1779, at the beginning of the Industrial Era, the first steam mill was erected in London.[3] In the 1930s, some flour began to be enriched with iron, niacin, thiamine and riboflavin. In the 1940s, mills started to enrich flour and folic acid was added to the list in the 1990s.
An important problem of the industrial revolution was the preservation of flour. Transportation distances and a relatively slow distribution system collided with natural shelf life. The reason for the limited shelf life is the fatty acids of the germ, which react from the moment they are exposed to oxygen. This occurs when grain is milled; the fatty acids oxidize and flour starts to become rancid.
Depending on climate and grain quality, this process takes six to nine months. In the late 19th century, this process was too short for an industrial production and distribution cycle. As vitamins, micronutrients and amino acids were completely or relatively unknown in the late 19th century, removing the germ was an effective solution. Without the germ, flour cannot become rancid. Degermed flour became standard. Degermation started in densely populated areas and took approximately one generation to reach the countryside.
Heat-processed flour is flour where the germ is first separated from the endosperm and bran, then processed with steam, dry heat or microwave and blended into flour again.[4]
Milling of flour is accomplished by grinding grain between stones or steel wheels.[5] Today, "stone-ground" usually means that the grain has been ground in a mill in which a revolving stone wheel turns over a stationary stone wheel, vertically or horizontally with the grain in between.
Roller mills later replaced stone grist mills. The production of flour has historically driven technological development, as attempts to make gristmills more productive and less labor-intensive led to the watermill[6] and windmill. These terms are now applied more broadly to uses of water and wind power for purposes other than milling.[7]
More recently, the Unifine mill, an impact-type mill, was developed in the mid-20th century.
Home users have begun grinding their own flour from organic wheat berries on a variety of electric flour mills. The grinding process is not much different from grinding coffee but the mills are larger. This provides fresh flour with the benefits of wheat germ and fiber without spoilage.[citation needed]
Modern farm equipment allows livestock farmers to do some or all of their own milling when it comes time to convert their own grain crops to coarse meal for livestock feed. This capability is economically important because the profit margins are often thin enough in commercial farming that saving expenses is vital to staying in business.
Flour contains a high proportion of starches, which are a subset of complex carbohydrates also known as polysaccharides. The kinds of flour used in cooking include all-purpose flour (known as plain flour outside North America), self-rising flour, and cake flour, including bleached flour. The higher the protein content, the harder and stronger the flour, and the more it will produce crusty or chewy breads. The lower the protein, the softer the flour, which is better for cakes, cookies, and pie crusts.[8]
"Bleached flour" is any refined flour with a whitening agent added. "Refined flour" has had the germ and bran removed and is typically referred to as "white flour".
Bleached flour is artificially aged using a bleaching agent, a maturing agent, or both. A bleaching agent would affect only the carotenoids in the flour; a maturing agent affects gluten development. A maturing agent may either strengthen or weaken gluten development.
The four most common additives used as bleaching/maturing agents in the US are:
Some other chemicals used as flour treatment agents to modify color and baking properties include:
Common preservatives in commercial flour include:
Cake flour in particular is nearly always chlorinated. At least one flour labeled "unbleached cake flour blend" (marketed by King Arthur Flour) is not bleached, but the protein content is much higher than typical cake flour at about 9.4% protein (cake flour is usually around 6% to 8%). According to King Arthur, this flour is a blend of a more finely milled unbleached wheat flour and cornstarch, which makes a better end result than unbleached wheat flour alone (cornstarch blended with all-purpose flour is commonly substituted for cake flour when the latter is unavailable). The end product, however, is denser than would result from lower-protein, chlorinated cake flour.[citation needed]
All bleaching and maturing agents (with the possible exception of ascorbic acid) have been banned in the United Kingdom.[10]
Bromination of flour in the US has fallen out of favor, and while it is not yet actually banned anywhere, few retail flours available to the home baker are bromated anymore.
Many varieties of flour packaged specifically for commercial bakeries are still bromated. Retail bleached flour marketed to the home baker is now treated mostly with either peroxidation or chlorine gas. Current information from Pillsbury is that their varieties of bleached flour are treated both with benzoyl peroxide and chlorine gas. Gold Medal states that their bleached flour is treated either with benzoyl peroxide or chlorine gas, but no way exists to tell which process has been used when buying the flour at the grocery store.
During the process of making flour, nutrients are lost. Some of these nutrients may be replaced during refining – the result is enriched flour.
Cake flour is the lowest in gluten protein content, with 6–7%[11] (5–8% from a second source[12]) protein, to produce minimal binding so the cake "crumbles" easily.
Pastry flour has the second-lowest gluten protein content, with 7.5–9.5%[11] (8–9% from a second source[12]) protein, to hold together with a bit more strength than cakes, but still produce flaky crusts rather than hard or crisp ones.
All-purpose flour, or "AP flour", or plain flour, is medium in gluten protein content, at 9.5–11.5%[11] (10–12% from a second source[12]). It has adequate protein content for many bread and pizza bases, though bread flour and special 00 grade Italian flour are often preferred for these purposes, respectively, especially by artisan bakers. Some biscuits are also prepared using this type of flour. "Plain" refers not only to AP flour's middling gluten content but also to its lack of any added leavening agent (as in self-rising flour).
Bread flour, or strong flour, is high in gluten protein, with 11.5–13.5%[11] (12–14% from a second source[12]) protein. The increased protein binds to the flour to entrap carbon dioxide released by the yeast fermentation process, resulting in a stronger rise and a more chewy crumb. Bread flour may be made with a hard spring wheat.
Hard is a general term for flours with high gluten protein content, and commonly refers to extra strong flour, with 13.5–16%[11] (or 14–15% from some sources) protein (16% is a theoretically possible protein content[11]). This flour may be used where a recipe adds ingredients that require the dough to be extra strong to hold together in their presence, or when strength is needed for constructions of bread (e.g., some centerpiece displays).
Gluten flour is refined gluten protein, or a theoretical 100% protein (though practical refining never achieves a full 100%). It is used to strengthen flour as needed. For example, adding approximately one teaspoon per cup of AP flour gives the resulting mix the protein content of bread flour. It is commonly added to whole grain flour recipes to overcome the tendency of greater fiber content to interfere with gluten development, needed to give the bread better rising (gas holding) qualities and chew.
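The teaspoon-per-cup rule of thumb above can be sanity-checked numerically; a minimal Python sketch in which the gram weights and protein fractions are typical assumed values (roughly 120 g per cup of AP flour at ~10.5% protein, and 3 g per teaspoon of gluten flour at ~75% protein), not figures from this article:

    # Check that ~1 tsp of gluten flour per cup lifts AP flour toward bread-flour protein.
    ap_grams, ap_protein = 120.0, 0.105       # assumed: 1 cup AP flour, ~10.5% protein
    gluten_grams, gluten_protein = 3.0, 0.75  # assumed: 1 tsp gluten flour, ~75% protein

    total_protein_g = ap_grams * ap_protein + gluten_grams * gluten_protein
    blend_pct = total_protein_g / (ap_grams + gluten_grams) * 100

    print(f"Blended protein content: {blend_pct:.1f}%")  # ~12.1%, in the bread-flour range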
Unbleached flour is simply flour that has not undergone bleaching and therefore does not have the color of "white" flour. An example is graham flour, whose namesake, Sylvester Graham, was against using bleaching agents, which he considered unhealthy.
Leavening agents are used with some varieties of flour,[13] especially those with significant gluten content, to produce lighter and softer baked products by embedding small gas bubbles. Self-raising (or self-rising) flour is sold mixed with chemical leavening agents. The added ingredients are evenly distributed throughout the flour, which aids a consistent rise in baked goods. This flour is generally used for preparing sponge cakes, scones, muffins, etc. It was invented by Henry Jones and patented in 1845. Plain flour can be used to make a type of self-raising flour, although the flour will be coarser. Self-raising flour is typically composed of (or can be made from):
Wheat is the grain most commonly used to make flour.[citation needed] Certain varieties may be referred to as "clean" or "white". Flours contain differing levels of the protein gluten. "Strong flour" or "hard flour" has a higher gluten content than "weak" or "soft" flour. "Brown" and wholemeal flours may be made of hard or soft wheat.
When flours do not contain gluten, they are suitable for people with gluten-related disorders, such as coeliac disease, non-celiac gluten sensitivity or wheat allergy sufferers, among others.[15][16][17][18] Contamination with gluten-containing cereals can occur during grain harvesting, transporting, milling, storing, processing, handling and/or cooking.[18][19][20]
Flour also can be made from soybeans, arrowroot, taro, cattails, acorns, manioc, quinoa, and other non-cereal foodstuffs.
In some markets, the different available flour varieties are labeled according to the ash mass that remains after a sample is incinerated in a laboratory oven (typically at 550 °C or 900 °C; see international standards ISO 2171 and ICC 104/1). This is an easily verified indicator for the fraction of the whole grain that remains in the flour, because the mineral content of the starchy endosperm is much lower than that of the outer parts of the grain. Flour made from all parts of the grain (extraction rate: 100%) leaves about 2 g ash or more per 100 g dry flour. Plain white flour with an extraction rate of 50–60% leaves about 0.4 g.
In the United States and the United Kingdom, no numbered standardized flour types are defined, and the ash mass is only rarely given on the label by flour manufacturers. However, the legally required standard nutrition label specifies the protein content of the flour, which is also a way for comparing the extraction rates of different available flour types.
In general, as the extraction rate of the flour increases, so do both the protein and the ash content. However, as the extraction rate approaches 100% (whole meal), the protein content drops slightly, while the ash content continues to rise.
The following table shows some typical examples of how protein and ash content relate to each other in wheat flour:
This table is only a rough guideline for converting bread recipes. Since flour types are not standardized in many countries, the numbers may differ between manufacturers. There is no French type corresponding to the lowest ash residue in the table. The closest is French Type 45.
There is also no Chinese name corresponding to the highest ash residue in the table. Usually such products are imported from Japan, and the Japanese name zenryūfun (全粒粉) is used.
It is possible to determine ash content from some US manufacturers. However, US measurements are based on wheat with a 14% moisture content. Thus, a US flour with 0.48% ash would approximate a French Type 55.
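The approximation works by converting the US ash figure from a 14% moisture basis to the dry-matter basis used for French type numbers; a minimal Python sketch, where the 0.48% ash and 14% moisture values come from the paragraph above and the Type 55 ash band is an assumed illustrative range:

    # Convert US ash content (measured at 14% moisture) to a dry-matter basis.
    ash_at_14pct_moisture = 0.48      # % ash on the US (14% moisture) basis
    dry_matter_fraction = 1.0 - 0.14  # share of the sample that is dry matter

    ash_dry_basis = ash_at_14pct_moisture / dry_matter_fraction
    print(f"Ash on dry-matter basis: {ash_dry_basis:.2f}%")  # ~0.56%

    # Assuming French Type 55 spans roughly 0.50-0.60% ash (dry basis),
    # 0.56% lands inside Type 55, matching the approximation above.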
Other measurable properties of flour as used in baking can be determined using a variety of specialized instruments, such as the farinograph.
Flour dust suspended in air is explosive—as is any mixture of a finely powdered flammable substance with air[30] (see dust explosion). Some devastating explosions have occurred at flour mills, including an explosion in 1878 at the Washburn "A" Mill in Minneapolis that killed 22 people.[31]
Bread, pasta, crackers, many cakes, and many other foods are made using flour. Wheat flour is also used to make a roux as a base for thickening gravy and sauces.
It can also be used as an ingredient in papier-mâché glue.[32]
Cornstarch is a principal ingredient used to thicken many puddings or desserts, and is the main ingredient in packaged custard.
en/1943.html.txt
ADDED
@@ -0,0 +1,241 @@
Fascism (/ˈfæʃɪzəm/) is a form of far-right, authoritarian ultranationalism[1][2] characterized by dictatorial power, forcible suppression of opposition, as well as strong regimentation of society and of the economy[3] which came to prominence in early 20th-century Europe.[4] The first fascist movements emerged in Italy during World War I, before spreading to other European countries.[4] Opposed to liberalism, Marxism, and anarchism, fascism is placed on the far right within the traditional left–right spectrum.[4][5][6]
Fascists saw World War I as a revolution that brought massive changes to the nature of war, society, the state, and technology. The advent of total war and the total mass mobilization of society had broken down the distinction between civilians and combatants. A "military citizenship" arose in which all citizens were involved with the military in some manner during the war.[7][8] The war had resulted in the rise of a powerful state capable of mobilizing millions of people to serve on the front lines and providing economic production and logistics to support them, as well as having unprecedented authority to intervene in the lives of citizens.[7][8]
Fascists believe that liberal democracy is obsolete and regard the complete mobilization of society under a totalitarian one-party state as necessary to prepare a nation for armed conflict and to respond effectively to economic difficulties.[9] Such a state is led by a strong leader—such as a dictator and a martial government composed of the members of the governing fascist party—to forge national unity and maintain a stable and orderly society.[9] Fascism rejects assertions that violence is automatically negative in nature and views political violence, war, and imperialism as means that can achieve national rejuvenation.[10][11] Fascists advocate a mixed economy, with the principal goal of achieving autarky (national economic self-sufficiency) through protectionist and interventionist economic policies.[12]
Since the end of World War II in 1945, few parties have openly described themselves as fascist, and the term is instead now usually used pejoratively by political opponents. The descriptions neo-fascist or post-fascist are sometimes applied more formally to describe parties of the far right with ideologies similar to, or rooted in, 20th-century fascist movements.[4][13]
The Italian term fascismo is derived from fascio meaning "a bundle of sticks", ultimately from the Latin word fasces.[14] This was the name given to political organizations in Italy known as fasci, groups similar to guilds or syndicates. According to Italian fascist dictator Benito Mussolini's own account, the Fasces of Revolutionary Action were founded in Italy in 1915.[15] In 1919, Mussolini founded the Italian Fasces of Combat in Milan, which became the National Fascist Party two years later. The Fascists came to associate the term with the ancient Roman fasces or fascio littorio[16]—a bundle of rods tied around an axe,[17] an ancient Roman symbol of the authority of the civic magistrate[18] carried by his lictors, which could be used for corporal and capital punishment at his command.[19][20]
The symbolism of the fasces suggested strength through unity: a single rod is easily broken, while the bundle is difficult to break.[21] Similar symbols were developed by different fascist movements: for example, the Falange symbol is five arrows joined together by a yoke.[22]
Historians, political scientists, and other scholars have long debated the exact nature of fascism.[23] Each group described as fascist has at least some unique elements, and many definitions of fascism have been criticized as either too wide or narrow.[24][25]
According to many scholars, fascism – especially once in power – has historically attacked communism, conservatism, and parliamentary liberalism, attracting support primarily from the far-right.[26]
One common definition of the term, frequently cited by reliable sources as a standard definition, is that of historian Stanley G. Payne. He focuses on three concepts:
Historian John Lukacs argues that there is no such thing as generic fascism. He claims that National Socialism and communism are essentially manifestations of populism and that states such as National Socialist Germany and Fascist Italy are more different than similar.[31]
Roger Griffin describes fascism as "a genus of political ideology whose mythic core in its various permutations is a palingenetic form of populist ultranationalism".[32] Griffin describes the ideology as having three core components: "(i) the rebirth myth, (ii) populist ultra-nationalism, and (iii) the myth of decadence".[33] In Griffin's view, fascism is "a genuinely revolutionary, trans-class form of anti-liberal, and in the last analysis, anti-conservative nationalism" built on a complex range of theoretical and cultural influences. He distinguishes an inter-war period in which it manifested itself in elite-led but populist "armed party" politics opposing socialism and liberalism and promising radical politics to rescue the nation from decadence.[34] In Against the Fascist Creep, Alexander Reid Ross writes regarding Griffin's view:
Following the Cold War and shifts in fascist organizing techniques, a number of scholars have moved toward the minimalist "new consensus" refined by Roger Griffin: "the mythic core" of fascism is "a populist form of palingenetic ultranationalism." That means that fascism is an ideology that draws on old, ancient, and even arcane myths of racial, cultural, ethnic, and national origins to develop a plan for the "new man."[35]
Indeed, Griffin himself explored this 'mythic' or 'eliminable' core of fascism with his concept of post-fascism to explore the continuation of Nazism in the modern era.[36] Additionally, other historians have applied this minimalist core to explore proto-fascist movements.[37]
Cas Mudde and Cristóbal Rovira Kaltwasser argue that although fascism "flirted with populism ... in an attempt to generate mass support", it is better seen as an elitist ideology. They cite in particular its exaltation of the Leader, the race, and the state, rather than the people. They see populism as a "thin-centered ideology" with a "restricted morphology" which necessarily becomes attached to "thick-centered" ideologies such as fascism, liberalism, or socialism. Thus populism can be found as an aspect of many specific ideologies, without necessarily being a defining characteristic of those ideologies. They refer to the combination of populism, authoritarianism and ultranationalism as "a marriage of convenience."[38]
Robert Paxton says that fascism is "a form of political behavior marked by obsessive preoccupation with community decline, humiliation, or victimhood and by compensatory cults of unity, energy, and purity, in which a mass-based party of committed nationalist militants, working in uneasy but effective collaboration with traditional elites, abandons democratic liberties and pursues with redemptive violence and without ethical or legal restraints goals of internal cleansing and external expansion".[39]
Roger Eatwell defines fascism as "an ideology that strives to forge social rebirth based on a holistic-national radical Third Way",[40] while Walter Laqueur sees the core tenets of fascism as "self-evident: nationalism; Social Darwinism; racialism, the need for leadership, a new aristocracy, and obedience; and the negation of the ideals of the Enlightenment and the French Revolution."[41]
Racism was a key feature of German fascism, for which the Holocaust was a high priority. According to the historiography of genocide, "In dealing with the Holocaust, it is the consensus of historians that Nazi Germany targeted Jews as a race, not as a religious group."[42] Umberto Eco,[43] Kevin Passmore,[44] John Weiss,[45] Ian Adams,[46] and Moyra Grant[47] stress racism as a characteristic component of German fascism. Historian Robert Soucy stated that "Hitler envisioned the ideal German society as a Volksgemeinschaft, a racially unified and hierarchically organized body in which the interests of individuals would be strictly subordinate to those of the nation, or Volk."[48] Fascist philosophies vary by application, but remain distinct by one theoretical commonality: all traditionally fall into the far-right sector of any political spectrum, catalyzed by afflicted class identities over conventional social inequities.[4]
Most scholars place fascism on the far right of the political spectrum.[4][5] Such scholarship focuses on its social conservatism and its authoritarian means of opposing egalitarianism.[49][50] Roderick Stackelberg places fascism—including Nazism, which he says is "a radical variant of fascism"—on the political right by explaining: "The more a person deems absolute equality among all people to be a desirable condition, the further left he or she will be on the ideological spectrum. The more a person considers inequality to be unavoidable or even desirable, the further to the right he or she will be".[51]
Fascism's origins, however, are complex and include many seemingly contradictory viewpoints, ultimately centered around a mythos of national rebirth from decadence.[52] Fascism was founded during World War I by Italian national syndicalists who drew upon both left-wing organizational tactics and right-wing political views.[53]
Italian Fascism gravitated to the right in the early 1920s.[54][55] A major element of fascist ideology that has been deemed to be far-right is its stated goal to promote the right of a supposedly superior people to dominate, while purging society of supposedly inferior elements.[56]
In the 1920s, the Italian Fascists described their ideology as right-wing in the political program The Doctrine of Fascism, stating: "We are free to believe that this is the century of authority, a century tending to the 'right,' a fascist century".[57][58] Mussolini stated that fascism's position on the political spectrum was not a serious issue for fascists: "Fascism, sitting on the right, could also have sat on the mountain of the center ... These words in any case do not have a fixed and unchanged meaning: they do have a variable subject to location, time and spirit. We don't give a damn about these empty terminologies and we despise those who are terrorized by these words".[59]
Major Italian groups politically on the right, especially rich landowners and big business, feared an uprising by groups on the left such as sharecroppers and labour unions.[60] They welcomed Fascism and supported its violent suppression of opponents on the left.[61] The accommodation of the political right into the Italian Fascist movement in the early 1920s created internal factions within the movement. The "Fascist left" included Michele Bianchi, Giuseppe Bottai, Angelo Oliviero Olivetti, Sergio Panunzio, and Edmondo Rossoni, who were committed to advancing national syndicalism as a replacement for parliamentary liberalism in order to modernize the economy and advance the interests of workers and common people.[62] The "Fascist right" included members of the paramilitary Squadristi and former members of the Italian Nationalist Association (ANI).[62] The Squadristi wanted to establish Fascism as a complete dictatorship, while the former ANI members, including Alfredo Rocco, sought to institute an authoritarian corporatist state to replace the liberal state in Italy while retaining the existing elites.[62] Upon accommodating the political right, there arose a group of monarchist fascists who sought to use fascism to create an absolute monarchy under King Victor Emmanuel III of Italy.[62]
After the fall of the Fascist regime in Italy, when King Victor Emmanuel III forced Mussolini to resign as head of government and placed him under arrest in 1943, Mussolini was rescued by German forces. While continuing to rely on Germany for support, Mussolini and the remaining loyal Fascists founded the Italian Social Republic with Mussolini as head of state. Mussolini sought to re-radicalize Italian Fascism, declaring that the Fascist state had been overthrown because Italian Fascism had been subverted by Italian conservatives and the bourgeoisie.[63] Then the new Fascist government proposed the creation of workers' councils and profit-sharing in industry, although the German authorities, who effectively controlled northern Italy at this point, ignored these measures and did not seek to enforce them.[63]
A number of post-World War II fascist movements described themselves as a "third position" outside the traditional political spectrum.[64] Spanish Falangist leader José Antonio Primo de Rivera said: "[B]asically the Right stands for the maintenance of an economic structure, albeit an unjust one, while the Left stands for the attempt to subvert that economic structure, even though the subversion thereof would entail the destruction of much that was worthwhile".[65]
The term "fascist" has been used as a pejorative,[66] regarding varying movements across the far right of the political spectrum.[67] George Orwell wrote in 1944 that "the word 'Fascism' is almost entirely meaningless ... almost any English person would accept 'bully' as a synonym for 'Fascist'".[67]
Communist states have sometimes been referred to as "fascist", typically as an insult. For example, the label has been applied to Marxist-Leninist regimes in Cuba under Fidel Castro and Vietnam under Ho Chi Minh.[68] Chinese Marxists used the term to denounce the Soviet Union during the Sino-Soviet split, and the Soviets in turn used it to denounce Chinese Marxists[69] and social democracy, coining the term "social fascism".
In the United States, Herbert Matthews of The New York Times asked in 1946: "Should we now place Stalinist Russia in the same category as Hitlerite Germany? Should we say that she is Fascist?".[70] J. Edgar Hoover, longtime FBI director and ardent anti-communist, wrote extensively of "Red Fascism".[71] The Ku Klux Klan in the 1920s was sometimes called "fascist". Historian Peter Amann states that "Undeniably, the Klan had some traits in common with European fascism—chauvinism, racism, a mystique of violence, an affirmation of a certain kind of archaic traditionalism—yet their differences were fundamental ... [the KKK] never envisioned a change of political or economic system."[72]
Professor Richard Griffiths of the University of Wales[73] wrote in 2005 that "fascism" is the "most misused, and over-used word, of our times".[25] "Fascist" is sometimes applied to post-World War II organizations and ways of thinking that academics more commonly term "neo-fascist".[74]
Georges Valois, founder of the first non-Italian fascist party Faisceau,[75] claimed the roots of fascism stemmed from the late 18th century Jacobin movement, seeing in its totalitarian nature a foreshadowing of the fascist state. Historian George Mosse similarly analyzed fascism as an inheritor of the mass ideology and civil religion of the French Revolution, as well as a result of the brutalization of societies in 1914–1918.[76]
Historians such as Irene Collins and Howard C. Payne see Napoleon III, who ran a 'police state' and suppressed the media, as a forerunner of fascism.[77] According to David Thomson,[78] the Italian Risorgimento of 1871 led to the 'nemesis of fascism'. William L. Shirer[79] sees a continuity from the views of Fichte and Hegel, through Bismarck, to Hitler; Robert Gerwarth speaks of a 'direct line' from Bismarck to Hitler.[80] Julian Dierkes sees fascism as a 'particularly violent form of imperialism'.[81]
The historian Zeev Sternhell has traced the ideological roots of fascism back to the 1880s and in particular to the fin-de-siècle theme of that time.[82][83] The theme was based on a revolt against materialism, rationalism, positivism, bourgeois society and democracy.[84] The fin-de-siècle generation supported emotionalism, irrationalism, subjectivism and vitalism.[85] The fin-de-siècle mindset saw civilization as being in a crisis that required a massive and total solution.[84] The fin-de-siècle intellectual school considered the individual only one part of the larger collectivity, which should not be viewed as an atomized numerical sum of individuals.[84] They condemned the rationalistic individualism of liberal society and the dissolution of social links in bourgeois society.[84]
The fin-de-siècle outlook was influenced by various intellectual developments, including Darwinian biology; Wagnerian aesthetics; Arthur de Gobineau's racialism; Gustave Le Bon's psychology; and the philosophies of Friedrich Nietzsche, Fyodor Dostoyevsky and Henri Bergson.[86] Social Darwinism, which gained widespread acceptance, made no distinction between physical and social life, and viewed the human condition as an unceasing struggle to achieve the survival of the fittest.[86] Social Darwinism challenged positivism's claim of deliberate and rational choice as the determining behaviour of humans, focusing instead on heredity, race and environment.[86] Social Darwinism's emphasis on biogroup identity and the role of organic relations within societies fostered legitimacy and appeal for nationalism.[87] New theories of social and political psychology also rejected the notion of human behaviour being governed by rational choice and instead claimed that emotion was more influential in political issues than reason.[86] Nietzsche's argument that "God is dead", his attack on the "herd mentality" of Christianity, democracy and modern collectivism, his concept of the übermensch and his advocacy of the will to power as a primordial instinct were major influences upon many of the fin-de-siècle generation.[88] Bergson's claim of the existence of an "élan vital", or vital instinct, centred upon free choice and rejected the processes of materialism and determinism; this challenged Marxism.[89]
Gaetano Mosca in his work The Ruling Class (1896) developed the theory that in all societies an "organized minority" will dominate and rule over the "disorganized majority".[90][91] Mosca claimed that there are only two classes in society, "the governing" (the organized minority) and "the governed" (the disorganized majority).[92] He claimed that the organized nature of the organized minority makes it irresistible to any individual of the disorganized majority.[92]
French nationalist and reactionary monarchist Charles Maurras influenced fascism.[93] Maurras promoted what he called integral nationalism, which called for the organic unity of a nation, and he insisted that a powerful monarch was an ideal leader of a nation. Maurras distrusted what he considered the democratic mystification of the popular will that created an impersonal collective subject.[93] He claimed that a powerful monarch was a personified sovereign who could exercise authority to unite a nation's people.[93] Maurras' integral nationalism was idealized by fascists, but modified into a modernized revolutionary form that was devoid of Maurras' monarchism.[93]
French revolutionary syndicalist Georges Sorel promoted the legitimacy of political violence in his work Reflections on Violence (1908) and other works in which he advocated radical syndicalist action to achieve a revolution to overthrow capitalism and the bourgeoisie through a general strike.[94] In Reflections on Violence, Sorel emphasized the need for a revolutionary political religion.[95] In his work The Illusions of Progress, Sorel denounced democracy as reactionary, saying "nothing is more aristocratic than democracy".[96] By 1909, after the failure of a syndicalist general strike in France, Sorel and his supporters left the radical left for the radical right, where they sought to merge militant Catholicism and French patriotism with their views, advocating anti-republican Christian French patriots as ideal revolutionaries.[97] Initially Sorel had officially been a revisionist of Marxism, but by 1910 he announced his abandonment of socialist literature, claiming in 1914, using an aphorism of Benedetto Croce, that "socialism is dead" because of the "decomposition of Marxism".[98] Beginning in 1909, Sorel became a supporter of reactionary Maurrassian nationalism, which influenced his works.[98] Maurras held interest in merging his nationalist ideals with Sorelian syndicalism as a means to confront democracy.[99] Maurras stated that "a socialism liberated from the democratic and cosmopolitan element fits nationalism well as a well made glove fits a beautiful hand".[100]
The fusion of Maurrassian nationalism and Sorelian syndicalism influenced radical Italian nationalist Enrico Corradini.[101] Corradini spoke of the need for a nationalist-syndicalist movement, led by elitist aristocrats and anti-democrats who shared a revolutionary syndicalist commitment to direct action and a willingness to fight.[101] Corradini spoke of Italy as being a "proletarian nation" that needed to pursue imperialism in order to challenge the "plutocratic" French and British.[102] Corradini's views were part of a wider set of perceptions within the right-wing Italian Nationalist Association (ANI), which claimed that Italy's economic backwardness was caused by corruption in its political class, liberalism, and division caused by "ignoble socialism".[102] The ANI held ties and influence among conservatives, Catholics and the business community.[102] Italian national syndicalists held a common set of principles: the rejection of bourgeois values, democracy, liberalism, Marxism, internationalism and pacifism; and the promotion of heroism, vitalism and violence.[103] The ANI claimed that liberal democracy was no longer compatible with the modern world, and advocated a strong state and imperialism, claiming that humans are naturally predatory and that nations were in a constant struggle, in which only the strongest could survive.[104]
Futurism was both an artistic-cultural movement and initially a political movement in Italy led by Filippo Tommaso Marinetti, author of the Futurist Manifesto (1908), which championed the causes of modernism, action and political violence as necessary elements of politics while denouncing liberalism and parliamentary politics. Marinetti rejected conventional democracy based on majority rule and egalitarianism in favour of a new form of democracy, promoting what he described in his work "The Futurist Conception of Democracy" as the following: "We are therefore able to give the directions to create and to dismantle to numbers, to quantity, to the mass, for with us number, quantity and mass will never be—as they are in Germany and Russia—the number, quantity and mass of mediocre men, incapable and indecisive".[105]
Futurism influenced fascism in its emphasis on recognizing the virile nature of violent action and war as necessities of modern civilization.[106] Marinetti promoted the need for the physical training of young men, saying that in male education gymnastics should take precedence over books. He advocated segregation of the genders in education, arguing that womanly sensibility must not enter the education of males, who he claimed must be "lively, bellicose, muscular and violently dynamic".[107]
At the outbreak of World War I in August 1914, the Italian political left became severely split over its position on the war. The Italian Socialist Party (PSI) opposed the war, but a number of Italian revolutionary syndicalists supported war against Germany and Austria-Hungary on the grounds that their reactionary regimes had to be defeated to ensure the success of socialism.[108] Angelo Oliviero Olivetti formed a pro-interventionist fascio called the Revolutionary Fasces of International Action in October 1914.[108] Benito Mussolini, upon being expelled from his position as chief editor of the PSI's newspaper Avanti! for his anti-German stance, joined the interventionist cause in a separate fascio.[109] The term "Fascism" was first used in 1915 by members of Mussolini's movement, the Fasces of Revolutionary Action.[110]
The first meeting of the Fasces of Revolutionary Action was held on 24 January 1915,[111] when Mussolini declared that it was necessary for Europe to resolve its national problems, including national borders, in Italy and elsewhere "for the ideals of justice and liberty for which oppressed peoples must acquire the right to belong to those national communities from which they descended".[111] Attempts to hold mass meetings were ineffective and the organization was regularly harassed by government authorities and socialists.[112]
Similar political ideas arose in Germany after the outbreak of the war. German sociologist Johann Plenge spoke of the rise of a "National Socialism" in Germany within what he termed the "ideas of 1914" that were a declaration of war against the "ideas of 1789" (the French Revolution).[113] According to Plenge, the "ideas of 1789" that included rights of man, democracy, individualism and liberalism were being rejected in favor of "the ideas of 1914" that included "German values" of duty, discipline, law and order.[113] Plenge believed that racial solidarity (Volksgemeinschaft) would replace class division and that "racial comrades" would unite to create a socialist society in the struggle of "proletarian" Germany against "capitalist" Britain.[113] He believed that the "Spirit of 1914" manifested itself in the concept of the "People's League of National Socialism".[114] This National Socialism was a form of state socialism that rejected the "idea of boundless freedom" and promoted an economy that would serve the whole of Germany under the leadership of the state.[114] This National Socialism was opposed to capitalism because of the components that were against "the national interest" of Germany, but insisted that National Socialism would strive for greater efficiency in the economy.[114][115] Plenge advocated an authoritarian rational ruling elite to develop National Socialism through a hierarchical technocratic state.[116]
Fascists viewed World War I as bringing revolutionary changes in the nature of war, society, the state and technology, as the advent of total war and mass mobilization had broken down the distinction between civilian and combatant: civilians had become a critical part of economic production for the war effort, and thus arose a "military citizenship" in which all citizens were involved with the military in some manner during the war.[7][8] World War I had resulted in the rise of a powerful state capable of mobilizing millions of people to serve on the front lines or provide economic production and logistics to support those on the front lines, as well as having unprecedented authority to intervene in the lives of citizens.[7][8] Fascists viewed technological developments of weaponry and the state's total mobilization of its population in the war as symbolizing the beginning of a new era fusing state power with mass politics, technology and particularly the mobilizing myth that they contended had triumphed over the myth of progress and the era of liberalism.[7]
The October Revolution of 1917, in which Bolshevik communists led by Vladimir Lenin seized power in Russia, greatly influenced the development of fascism.[117] In 1917, Mussolini, as leader of the Fasces of Revolutionary Action, praised the October Revolution, but later he became unimpressed with Lenin, regarding him as merely a new version of Tsar Nicholas.[118] In the years after World War I, fascists commonly campaigned on anti-Marxist agendas.[117]
Liberal opponents of both fascism and the Bolsheviks argue that there are various similarities between the two, including belief in the necessity of a vanguard leadership, disdain for bourgeois values and, it is argued, totalitarian ambitions.[117] In practice, both have commonly emphasized revolutionary action, proletarian nation theories, one-party states and party-armies.[117] However, the two differed clearly in both aims and tactics: the Bolsheviks emphasized the need for an organized participatory democracy and an egalitarian, internationalist vision for society, while the fascists emphasized hyper-nationalism and open hostility towards democracy, envisioning a hierarchical social structure as essential to their aims.
With the antagonism between anti-interventionist Marxists and pro-interventionist Fascists complete by the end of the war, the two sides became irreconcilable. The Fascists presented themselves as anti-Marxists and as opposed to the Marxists.[119] Mussolini consolidated control over the Fascist movement in 1919 with the founding of the Italian Fasces of Combat, an event known as Sansepolcrismo.
In 1919, Alceste De Ambris and Futurist movement leader Filippo Tommaso Marinetti created The Manifesto of the Italian Fasces of Combat (the Fascist Manifesto).[120] The Manifesto was presented on 6 June 1919 in the Fascist newspaper Il Popolo d'Italia. The Manifesto supported the creation of universal suffrage for both men and women (the latter being realized only partly in late 1925, with all opposition parties banned or disbanded);[121] proportional representation on a regional basis; government representation through a corporatist system of "National Councils" of experts, selected from professionals and tradespeople, elected to represent and hold legislative power over their respective areas, including labour, industry, transportation, public health, communications, etc.; and the abolition of the Italian Senate.[122] The Manifesto supported the creation of an eight-hour work day for all workers, a minimum wage, worker representation in industrial management, equal confidence in labour unions as in industrial executives and public servants, reorganization of the transportation sector, revision of the draft law on invalidity insurance, reduction of the retirement age from 65 to 55, a strong progressive tax on capital, confiscation of the property of religious institutions and abolishment of bishoprics, and revision of military contracts to allow the government to seize 85% of profits.[123] It also called for the fulfillment of expansionist aims in the Balkans and other parts of the Mediterranean,[124] the creation of a short-service national militia to serve defensive duties, nationalization of the armaments industry and a foreign policy designed to be peaceful but also competitive.[125]
The next event that influenced the Fascists in Italy was the raid on Fiume by Italian nationalist Gabriele d'Annunzio and the founding of the Charter of Carnaro in 1920.[126] D'Annunzio and De Ambris designed the Charter, which advocated national-syndicalist corporatist productionism alongside D'Annunzio's political views.[127] Many Fascists saw the Charter of Carnaro as an ideal constitution for a Fascist Italy.[128] This pattern of aggression towards Yugoslavia and the South Slavs was continued by the Italian Fascists in their persecution of South Slavs, especially Slovenes and Croats.
In 1920, militant strike activity by industrial workers reached its peak in Italy; 1919 and 1920 were known as the "Red Years".[129] Mussolini and the Fascists took advantage of the situation by allying with industrial businesses and attacking workers and peasants in the name of preserving order and internal peace in Italy.[130]
Fascists identified their primary opponents as the majority of socialists on the left who had opposed intervention in World War I.[128] The Fascists and the Italian political right held common ground: both held Marxism in contempt, discounted class consciousness and believed in the rule of elites.[131] The Fascists assisted the anti-socialist campaign by allying with the other parties and the conservative right in a mutual effort to destroy the Italian Socialist Party and labour organizations committed to class identity above national identity.[131]
Fascism sought to accommodate Italian conservatives by making major alterations to its political agenda: abandoning its previous populism, republicanism and anticlericalism, adopting policies in support of free enterprise and accepting the Catholic Church and the monarchy as institutions in Italy.[132] To appeal to Italian conservatives, Fascism adopted policies such as promoting family values, including policies designed to reduce the number of women in the workforce and to limit women's role to that of mothers. The Fascists banned literature on birth control and increased penalties for abortion in 1926, declaring both crimes against the state.[133]
Though Fascism adopted a number of anti-modern positions designed to appeal to people upset by new trends in sexuality and women's rights, especially those with a reactionary point of view, the Fascists sought to maintain Fascism's revolutionary character, with Angelo Oliviero Olivetti saying: "Fascism would like to be conservative, but it will [be] by being revolutionary".[134] The Fascists supported revolutionary action and committed themselves to securing law and order in order to appeal to both conservatives and syndicalists.[135]
Prior to Fascism's accommodations to the political right, Fascism was a small, urban, northern Italian movement that had about a thousand members.[136] After Fascism's accommodation of the political right, the Fascist movement's membership soared to approximately 250,000 by 1921.[137]
Beginning in 1922, Fascist paramilitaries escalated their strategy from one of attacking socialist offices and homes of socialist leadership figures to one of violent occupation of cities. The Fascists met little serious resistance from authorities and proceeded to take over several northern Italian cities.[138] The Fascists attacked the headquarters of socialist and Catholic labour unions in Cremona and imposed forced Italianization upon the German-speaking population of Trent and Bolzano.[138] After seizing these cities, the Fascists made plans to take Rome.[138]
On 24 October 1922, the Fascist party held its annual congress in Naples, where Mussolini ordered Blackshirts to take control of public buildings and trains and to converge on three points around Rome.[138] The Fascists managed to seize control of several post offices and trains in northern Italy while the Italian government, led by a left-wing coalition, was internally divided and unable to respond to the Fascist advances.[139] King Victor Emmanuel III of Italy perceived the risk of bloodshed in Rome from attempting to disperse the Fascists to be too high.[140] Victor Emmanuel III decided to appoint Mussolini as Prime Minister of Italy, and Mussolini arrived in Rome on 30 October to accept the appointment.[140] Fascist propaganda aggrandized this event, known as the "March on Rome", as a "seizure" of power achieved through the Fascists' heroic exploits.[138]
Historian Stanley G. Payne says Fascism in Italy was:
A primarily political dictatorship. ... The Fascist Party itself had become almost completely bureaucratized and subservient to, not dominant over, the state itself. Big business, industry, and finance retained extensive autonomy, particularly in the early years. The armed forces also enjoyed considerable autonomy. ... The Fascist militia was placed under military control. ... The judicial system was left largely intact and relatively autonomous as well. The police continued to be directed by state officials and were not taken over by party leaders ... nor was a major new police elite created. ... There was never any question of bringing the Church under overall subservience. ... Sizable sectors of Italian cultural life retained extensive autonomy, and no major state propaganda-and-culture ministry existed. ... The Mussolini regime was neither especially sanguinary nor particularly repressive.[141]
Upon being appointed Prime Minister of Italy, Mussolini had to form a coalition government because the Fascists did not have control over the Italian parliament.[142] Mussolini's coalition government initially pursued economically liberal policies under the direction of liberal finance minister Alberto De Stefani, a member of the Center Party, including balancing the budget through deep cuts to the civil service.[142] Initially, little drastic change in government policy occurred and repressive police actions were limited.[142]
The Fascists began their attempt to entrench Fascism in Italy with the Acerbo Law, which granted two-thirds of the seats in parliament to the party or coalition list that won the largest share of the vote, provided it received at least 25%.[143] Through considerable Fascist violence and intimidation, the Fascist-led list won a majority of the vote, giving many seats to the Fascists.[143] In the aftermath of the election, a crisis and political scandal erupted after Socialist Party deputy Giacomo Matteotti was kidnapped and murdered by a Fascist.[143] The liberals and the leftist minority in parliament walked out in protest in what became known as the Aventine Secession.[144] On 3 January 1925, Mussolini addressed the Fascist-dominated Italian parliament and declared that he was personally responsible for what had happened, but insisted that he had done nothing wrong. Mussolini proclaimed himself dictator of Italy, assuming full responsibility over the government and announcing the dismissal of parliament.[144] From 1925 to 1929, Fascism steadily became entrenched in power: opposition deputies were denied access to parliament, censorship was introduced and a December 1925 decree made Mussolini solely responsible to the King.[145]
In 1929, the Fascist regime briefly gained what was in effect a blessing of the Catholic Church after the regime signed a concordat with the Church, known as the Lateran Treaty, which gave the papacy state sovereignty and financial compensation for the seizure of Church lands by the liberal state in the nineteenth century, but within two years the Church had renounced Fascism in the encyclical Non Abbiamo Bisogno as a "pagan idolatry of the state" which teaches "hatred, violence and irreverence".[146] Not long after signing the agreement, by Mussolini's own confession, the Church had threatened to have him "excommunicated", in part because of his intractable nature and because he had "confiscated more issues of Catholic newspapers in the next three months than in the previous seven years".[147] By the late 1930s, Mussolini became more vocal in his anti-clerical rhetoric, repeatedly denouncing the Catholic Church and discussing ways to depose the pope. He took the position that the "papacy was a malignant tumor in the body of Italy and must 'be rooted out once and for all,' because there was no room in Rome for both the Pope and himself".[148] In her 1974 book, Mussolini's widow Rachele stated that her husband was "basically irreligious until the later years of his life".[149]
The National Socialists of Germany employed similar anti-clerical policies. The Gestapo confiscated hundreds of monasteries in Austria and Germany, evicted clergymen and laymen alike and often replaced crosses with swastikas.[150] Referring to the swastika as the "Devil's Cross", church leaders found their youth organizations banned, their meetings limited and various Catholic periodicals censored or banned. Government officials eventually found it necessary to place "Nazis into editorial positions in the Catholic press".[151] Up to 2,720 clerics, mostly Catholics, were arrested by the Gestapo and imprisoned in Germany's Dachau concentration camp, resulting in over 1,000 deaths.[152]
The Fascist regime created a corporatist economic system in 1925 with the creation of the Palazzo Vidoni Pact, in which the Italian employers' association Confindustria and Fascist trade unions agreed to recognize each other as the sole representatives of Italy's employers and employees, excluding non-Fascist trade unions.[153] The Fascist regime first created a Ministry of Corporations that organized the Italian economy into 22 sectoral corporations, banned workers' strikes and lock-outs and in 1927 created the Charter of Labour, which established workers' rights and duties and created labour tribunals to arbitrate employer-employee disputes.[153] In practice, the sectoral corporations exercised little independence and were largely controlled by the regime, and employee organizations were rarely led by employees themselves, but instead by appointed Fascist party members.[153]
In the 1920s, Fascist Italy pursued an aggressive foreign policy that included an attack on the Greek island of Corfu, aims to expand Italian territory in the Balkans, plans to wage war against Turkey and Yugoslavia, attempts to bring Yugoslavia into civil war by supporting Croat and Macedonian separatists to legitimize Italian intervention, and making Albania a de facto protectorate of Italy, which was achieved through diplomatic means by 1927.[154] In response to revolt in the Italian colony of Libya, Fascist Italy abandoned the previous liberal-era colonial policy of cooperation with local leaders. Instead, claiming that Italians were a superior race to African races and thereby had the right to colonize the "inferior" Africans, it sought to settle 10 to 15 million Italians in Libya.[155] This resulted in an aggressive military campaign against the natives of Libya, known as the Pacification of Libya, which included mass killings, the use of concentration camps and the forced starvation of thousands of people.[155] Italian authorities committed ethnic cleansing by forcibly expelling 100,000 Bedouin Cyrenaicans, half the population of Cyrenaica, from their settlements, which were slated to be given to Italian settlers.[156][157]
The March on Rome brought Fascism international attention. One early admirer of the Italian Fascists was Adolf Hitler, who less than a month after the March had begun to model himself and the Nazi Party upon Mussolini and the Fascists.[158] The Nazis, led by Hitler and the German war hero Erich Ludendorff, attempted a "March on Berlin" modeled upon the March on Rome, which resulted in the failed Beer Hall Putsch in Munich in November 1923.[159]
The conditions of economic hardship caused by the Great Depression brought about an international surge of social unrest. According to historian Philip Morgan, "the onset of the Great Depression...was the greatest stimulus yet to the diffusion and expansion of fascism outside Italy".[160] Fascist propaganda blamed the problems of the long depression of the 1930s on minorities and scapegoats: "Judeo-Masonic-bolshevik" conspiracies, left-wing internationalism and the presence of immigrants.
In Germany, it contributed to the rise of the National Socialist German Workers' Party, which resulted in the demise of the Weimar Republic and the establishment of the fascist regime, Nazi Germany, under the leadership of Adolf Hitler. With the rise of Hitler and the Nazis to power in 1933, liberal democracy was dissolved in Germany and the Nazis mobilized the country for war, with expansionist territorial aims against several countries. In the 1930s, the Nazis implemented racial laws that deliberately discriminated against, disenfranchised and persecuted Jews and other racial and minority groups.
Fascist movements grew in strength elsewhere in Europe. Hungarian fascist Gyula Gömbös rose to power as Prime Minister of Hungary in 1932 and attempted to entrench his Party of National Unity throughout the country. He created an eight-hour work day and a forty-eight-hour work week in industry, sought to entrench a corporatist economy and pursued irredentist claims on Hungary's neighbors.[161] The fascist Iron Guard movement in Romania soared in political support after 1933, gaining representation in the Romanian government, and an Iron Guard member assassinated Romanian prime minister Ion Duca.[162] During the 6 February 1934 crisis, France faced the greatest domestic political turmoil since the Dreyfus Affair when the fascist Francist Movement and multiple far-right movements rioted en masse in Paris against the French government, resulting in major political violence.[163] A variety of para-fascist governments that borrowed elements from fascism were formed during the Great Depression, including those of Greece, Lithuania, Poland and Yugoslavia.[164]
In the Americas, the Brazilian Integralists led by Plínio Salgado claimed as many as 200,000 members, although following coup attempts the movement faced a crackdown from the Estado Novo of Getúlio Vargas in 1937.[165] In the 1930s, the National Socialist Movement of Chile gained seats in Chile's parliament and attempted a coup d'état that resulted in the Seguro Obrero massacre of 1938.[166]
During the Great Depression, Mussolini promoted active state intervention in the economy. He denounced the contemporary "supercapitalism" that he claimed began in 1914 as a failure because of its alleged decadence, its support for unlimited consumerism and its intention to create the "standardization of humankind".[167] Fascist Italy created the Institute for Industrial Reconstruction (IRI), a giant state-owned firm and holding company that provided state funding to failing private enterprises.[168] The IRI was made a permanent institution in Fascist Italy in 1937, pursued Fascist policies to create national autarky and had the power to take over private firms to maximize war production.[168] While Hitler's regime only nationalized 500 companies in key industries by the early 1940s,[169] Mussolini declared in 1934 that "[t]hree-fourths of Italian economy, industrial and agricultural, is in the hands of the state".[170] Due to the worldwide depression, Mussolini's government was able to take over most of Italy's largest failing banks, which held controlling interests in many Italian businesses. The Institute for Industrial Reconstruction, a state-operated holding company in charge of bankrupt banks and companies, reported in early 1934 that it held assets amounting to "48.5 percent of the share capital of Italy", which later included the capital of the banks themselves.[171] Political historian Martin Blinkhorn estimated that Italy's scope of state intervention and ownership "greatly surpassed that in Nazi Germany, giving Italy a public sector second only to that of Stalin's Russia".[172] In the late 1930s, Italy enacted manufacturing cartels, tariff barriers, currency restrictions and massive regulation of the economy to attempt to balance payments.[173] Italy's policy of autarky failed to achieve effective economic autonomy.[173] Nazi Germany similarly pursued an economic agenda with the aims of autarky and rearmament and imposed protectionist policies, including forcing the German steel industry to use lower-quality German iron ore rather than superior-quality imported iron.[174]
In Fascist Italy and Nazi Germany, both Mussolini and Hitler pursued territorial expansionist and interventionist foreign policy agendas from the 1930s through the 1940s, culminating in World War II. Mussolini called for the recovery of irredentist Italian claims, the establishment of Italian domination of the Mediterranean Sea, secure Italian access to the Atlantic Ocean and the creation of Italian spazio vitale ("vital space") in the Mediterranean and Red Sea regions.[175] Hitler called for the recovery of irredentist German claims, along with the creation of German Lebensraum ("living space") in Eastern Europe, including territories held by the Soviet Union, that would be colonized by Germans.[176]
From 1935 to 1939, Germany and Italy escalated their demands for territorial claims and greater influence in world affairs. Italy invaded Ethiopia in 1935, resulting in its condemnation by the League of Nations and its widespread diplomatic isolation. In 1936, Germany remilitarized the industrial Rhineland, a region that had been ordered demilitarized by the Treaty of Versailles. In 1938, Germany annexed Austria, and Italy assisted Germany in resolving the diplomatic crisis between Germany and Britain and France over claims on Czechoslovakia by arranging the Munich Agreement, which gave Germany the Sudetenland and was perceived at the time to have averted a European war. These hopes faded when Hitler violated the Munich Agreement by ordering the invasion and partition of Czechoslovakia between Germany and a client state, Slovakia, in 1939. At the same time, from 1938 to 1939, Italy was demanding territorial and colonial concessions from France and Britain.[177] In 1939, Germany prepared for war with Poland, but attempted to gain territorial concessions from Poland through diplomatic means.[178] The Polish government did not trust Hitler's promises and refused to accept Germany's demands.[178]
The invasion of Poland by Germany was deemed unacceptable by Britain, France and their allies, resulting in their mutual declaration of war against Germany, which was deemed the aggressor in Poland, and thus in the outbreak of World War II. In 1940, Mussolini led Italy into World War II on the side of the Axis. Mussolini was aware that Italy did not have the military capacity to carry out a long war with France or the United Kingdom, and he waited until France was on the verge of collapse and surrender from the German invasion before declaring war on France and the United Kingdom on 10 June 1940, on the assumption that the war would be short-lived following France's collapse.[179] Mussolini believed that following a brief entry of Italy into war with France, followed by the imminent French surrender, Italy could gain some territorial concessions from France and then concentrate its forces on a major offensive in Egypt, where British and Commonwealth forces were outnumbered by Italian forces.[180] Plans by Germany to invade the United Kingdom in 1940 failed after Germany lost the aerial warfare campaign in the Battle of Britain. In 1941, the Axis campaign spread to the Soviet Union after Hitler launched Operation Barbarossa. Axis forces at the height of their power controlled almost all of continental Europe. The war became prolonged, contrary to Mussolini's plans, resulting in Italy losing battles on multiple fronts and requiring German assistance.
During World War II, the Axis Powers in Europe led by Nazi Germany participated in the extermination of millions of Poles, Jews, Gypsies and others in the genocide known as the Holocaust.
After 1942, Axis forces began to falter. In 1943, after Italy faced multiple military failures, complete reliance on and subordination to Germany, the Allied invasion of Italy and the corresponding international humiliation, Mussolini was removed as head of government and arrested on the order of King Victor Emmanuel III, who proceeded to dismantle the Fascist state and declared Italy's switch of allegiance to the Allied side. Mussolini was rescued from arrest by German forces and led the German client state, the Italian Social Republic, from 1943 to 1945. Nazi Germany faced multiple losses and steady Soviet and Western Allied offensives from 1943 to 1945.
On 28 April 1945, Mussolini was captured and executed by Italian communist partisans. On 30 April 1945, Hitler committed suicide. Shortly afterwards, Germany surrendered and the Nazi regime was systematically dismantled by the occupying Allied powers. An International Military Tribunal was subsequently convened in Nuremberg. Beginning in November 1945 and lasting through 1949, numerous Nazi political, military and economic leaders were tried and convicted of war crimes, with many of the worst offenders receiving the death penalty.
The victory of the Allies over the Axis powers in World War II led to the collapse of many fascist regimes in Europe. The Nuremberg Trials convicted several Nazi leaders of crimes against humanity involving the Holocaust. However, there remained several movements and governments that were ideologically related to fascism.
Francisco Franco's Falangist one-party state in Spain was officially neutral during World War II and it survived the collapse of the Axis Powers. Franco's rise to power had been directly assisted by the militaries of Fascist Italy and Nazi Germany during the Spanish Civil War, and Franco had sent volunteers to fight on the side of Nazi Germany against the Soviet Union during World War II. The regime's first years were characterized by repression of anti-fascist ideologies, heavy censorship and the suppression of democratic institutions (the elected Parliament, the Constitution of 1931 and the regional statutes of autonomy). After World War II and a period of international isolation, Franco's regime normalized relations with the Western powers during the Cold War, lasting until Franco's death in 1975 and the transformation of Spain into a liberal democracy.
Historian Robert Paxton observes that one of the main problems in defining fascism is that it was widely mimicked. Paxton says: "In fascism's heyday, in the 1930s, many regimes that were not functionally fascist borrowed elements of fascist decor in order to lend themselves an aura of force, vitality, and mass mobilization". He goes on to observe that Salazar "crushed Portuguese fascism after he had copied some of its techniques of popular mobilization".[181] Paxton says: "Where Franco subjected Spain's fascist party to his personal control, Salazar abolished outright in July 1934 the nearest thing Portugal had to an authentic fascist movement, Rolão Preto's blue-shirted National Syndicalists [...] Salazar preferred to control his population through such 'organic' institutions traditionally powerful in Portugal as the Church. Salazar's regime was not only non-fascist, but 'voluntarily non-totalitarian,' preferring to let those of its citizens who kept out of politics 'live by habit'".[182] Historians tend to view the Estado Novo as para-fascist in nature,[183] possessing minimal fascist tendencies.[184] Other historians, including Fernando Rosas and Manuel Villaverde Cabral, think that the Estado Novo should be considered fascist.[185] In Argentina, Peronism, associated with the regime of Juan Perón from 1946 to 1955 and 1973 to 1974, was influenced by fascism.[186] Between 1939 and 1941, prior to his rise to power, Perón had developed a deep admiration of Italian Fascism and modelled his economic policies on Italian Fascist policies.[186]
The term neo-fascism refers to fascist movements after World War II. In Italy, the Italian Social Movement led by Giorgio Almirante was a major neo-fascist movement that transformed itself into a self-described "post-fascist" movement called the National Alliance (AN), which was an ally of Silvio Berlusconi's Forza Italia for a decade. In 2008, AN joined Forza Italia in Berlusconi's new party The People of Freedom, but in 2012 a group of politicians split from The People of Freedom, founding a new party under the name Brothers of Italy. In Germany, various neo-Nazi movements have been formed and banned in accordance with Germany's constitutional law, which forbids Nazism. The National Democratic Party of Germany (NPD) is widely considered a neo-Nazi party, although the party does not publicly identify itself as such.
After the onset of the Great Recession and the economic crisis in Greece, a movement known as Golden Dawn, widely considered a neo-Nazi party, rose from obscurity and won seats in Greece's parliament, espousing staunch hostility towards minorities, illegal immigrants and refugees. In 2013, after the murder of an anti-fascist musician by a person with links to Golden Dawn, the Greek government ordered the arrest of Golden Dawn's leader Nikolaos Michaloliakos and other Golden Dawn members on charges of association with a criminal organization.[187][188]
Robert O. Paxton finds that the transformations undertaken by fascists in power were "profound enough to be called 'revolutionary'". They "often set fascists into conflict with conservatives rooted in families, churches, social rank, and property." Paxton argues:
[F]ascism redrew the frontiers between private and public, sharply diminishing what had once been untouchably private. It changed the practice of citizenship from the enjoyment of constitutional rights and duties to participation in mass ceremonies of affirmation and conformity. It reconfigured relations between the individual and the collectivity, so that an individual had no rights outside community interest. It expanded the powers of the executive—party and state—in a bid for total control. Finally, it unleashed aggressive emotions hitherto known in Europe only during war or social revolution.[189]
Ultranationalism, combined with the myth of national rebirth, is a key foundation of fascism.[190]
The fascist view of a nation is of a single organic entity that binds people together by their ancestry and is a natural unifying force of people.[191] Fascism seeks to solve economic, political and social problems by achieving a millenarian national rebirth, exalting the nation or race above all else and promoting cults of unity, strength and purity.[39][192][193][194][195] European fascist movements typically espouse a racist conception of non-Europeans being inferior to Europeans.[196] Beyond this, fascists in Europe have not held a unified set of racial views.[196] Historically, most fascists promoted imperialism, although there have been several fascist movements that were uninterested in the pursuit of new imperial ambitions.[196] For example, Nazism and Italian Fascism were expansionist and irredentist. Falangism in Spain envisioned worldwide unification of Spanish-speaking peoples (Hispanidad). British Fascism was non-interventionist, though it did embrace the British Empire.
Fascism promotes the establishment of a totalitarian state.[197] It opposes liberal democracy, rejects multi-party systems and may support a one-party state so that it may synthesize with the nation.[198] Mussolini's The Doctrine of Fascism (1932) – partly ghostwritten by philosopher Giovanni Gentile,[199] whom Mussolini described as "the philosopher of Fascism" – states: "The Fascist conception of the State is all-embracing; outside of it no human or spiritual values can exist, much less have value. Thus understood, Fascism is totalitarian, and the Fascist State—a synthesis and a unit inclusive of all values—interprets, develops, and potentiates the whole life of a people".[200] In The Legal Basis of the Total State, Nazi political theorist Carl Schmitt described the Nazi intention to form a "strong state which guarantees a totality of political unity transcending all diversity" in order to avoid a "disastrous pluralism tearing the German people apart".[201]
Fascist states pursued policies of social indoctrination through propaganda in education and the media and regulation of the production of educational and media materials.[202][203] Education was designed to glorify the fascist movement and inform students of its historical and political importance to the nation. It attempted to purge ideas that were not consistent with the beliefs of the fascist movement and to teach students to be obedient to the state.[204]
Fascism presented itself as an alternative to both international socialism and free market capitalism.[205] While fascism opposed mainstream socialism, it sometimes regarded itself as a type of nationalist "socialism" in order to highlight its commitment to national solidarity and unity.[206][207] Fascists opposed international free market capitalism, but supported a type of productive capitalism.[115][208] Economic self-sufficiency, known as autarky, was a major goal of most fascist governments.[209]
Fascist governments advocated resolution of domestic class conflict within a nation in order to secure national solidarity.[210] This would be done through the state mediating relations between the classes (contrary to the views of classical liberal-inspired capitalists).[211] While fascism was opposed to domestic class conflict, it held that bourgeois-proletarian conflict existed primarily as a national conflict between proletarian nations and bourgeois nations.[212] Fascism condemned what it viewed as the typical bourgeois mentality, with its character traits of materialism, crassness, cowardice and inability to comprehend the heroic ideal of the fascist "warrior", and its associations with liberalism, individualism and parliamentarianism.[213] In 1918, Mussolini defined what he viewed as the proletarian character, defining proletarian as being one and the same with producers, a productivist perspective that regarded all productive people, including entrepreneurs, technicians, workers and soldiers, as proletarian.[214] He acknowledged the historical existence of both bourgeois and proletarian producers, but declared the need for bourgeois producers to merge with proletarian producers.[214]
While fascism denounced the mainstream internationalist and Marxist socialisms, it claimed to represent economically a type of nationalist productivist socialism that, while condemning parasitic capitalism, was willing to accommodate productivist capitalism within it.[208] This view derived from Henri de Saint-Simon, whose ideas inspired the creation of utopian socialism and influenced other ideologies that stressed solidarity rather than class war, and whose conception of productive people in the economy included both productive workers and productive bosses to challenge the influence of the aristocracy and unproductive financial speculators.[215] Saint-Simon's vision combined traditionalist right-wing criticism of the French Revolution with a left-wing belief in the need for association or collaboration of productive people in society.[215] Whereas Marxism condemned capitalism as a system of exploitative property relations, fascism saw the nature of the control of credit and money in the contemporary capitalist system as abusive.[208] Unlike Marxism, fascism did not see class conflict between the Marxist-defined proletariat and the bourgeoisie as a given or as an engine of historical materialism.[208] Instead, it viewed workers and productive capitalists in common as productive people who were in conflict with parasitic elements in society, including corrupt political parties, corrupt financial capital and feeble people.[208] Fascist leaders such as Mussolini and Hitler spoke of the need to create a new managerial elite led by engineers and captains of industry, but free from the parasitic leadership of industries.[208] Hitler stated that the Nazi Party supported bodenständigen Kapitalismus ("productive capitalism") that was based upon profit earned from one's own labour, but condemned unproductive capitalism or loan capitalism, which derived profit from speculation.[216]
Fascist economics supported a state-controlled economy that accepted a mix of private and public ownership over the means of production.[217] Economic planning was applied to both the public and private sector and the prosperity of private enterprise depended on its acceptance of synchronizing itself with the economic goals of the state.[218] Fascist economic ideology supported the profit motive, but emphasized that industries must uphold the national interest as superior to private profit.[218]
While fascism accepted the importance of material wealth and power, it condemned materialism, which it identified as being present in both communism and capitalism, and criticized materialism for lacking acknowledgement of the role of the spirit.[219] In particular, fascists criticized capitalism not because of its competitive nature or its support of private property, which fascists supported, but because of its materialism, individualism, alleged bourgeois decadence and alleged indifference to the nation.[220] Fascism denounced Marxism for its advocacy of materialist internationalist class identity, which fascists regarded as an attack upon the emotional and spiritual bonds of the nation and a threat to the achievement of genuine national solidarity.[221]
In discussing the spread of fascism beyond Italy, historian Philip Morgan states:
Since the Depression was a crisis of laissez-faire capitalism and its political counterpart, parliamentary democracy, fascism could pose as the 'third-way' alternative between capitalism and Bolshevism, the model of a new European 'civilization'. As Mussolini typically put it in early 1934, "from 1929 ... fascism has become a universal phenomenon ... The dominant forces of the 19th century, democracy, socialism, liberalism have been exhausted ... the new political and economic forms of the twentieth century are fascist" (Mussolini 1935: 32).[160]
Fascists criticized egalitarianism as preserving the weak, and they instead promoted social Darwinist views and policies.[222][223] They were in principle opposed to the idea of social welfare, arguing that it "encouraged the preservation of the degenerate and the feeble."[224] The Nazi Party condemned the welfare system of the Weimar Republic, as well as private charity and philanthropy, for supporting people whom they regarded as racially inferior and weak, and who should have been weeded out in the process of natural selection.[225] Nevertheless, faced with the mass unemployment and poverty of the Great Depression, the Nazis found it necessary to set up charitable institutions to help racially-pure Germans in order to maintain popular support, while arguing that this represented "racial self-help" and not indiscriminate charity or universal social welfare.[226] Thus, Nazi programs such as the Winter Relief of the German People and the broader National Socialist People's Welfare (NSV) were organized as quasi-private institutions, officially relying on private donations from Germans to help others of their race—although in practice those who refused to donate could face severe consequences.[227] Unlike the social welfare institutions of the Weimar Republic and the Christian charities, the NSV distributed assistance on explicitly racial grounds. It provided support only to those who were "racially sound, capable of and willing to work, politically reliable, and willing and able to reproduce." Non-Aryans were excluded, as well as the "work-shy", "asocials" and the "hereditarily ill."[228] Under these conditions, by 1939, over 17 million Germans had obtained assistance from the NSV, and the agency "projected a powerful image of caring and support" for "those who were judged to have got into difficulties through no fault of their own."[228] Yet the organization was "feared and disliked among society's poorest" because it resorted to intrusive questioning and monitoring to judge who was worthy of support.[229]
Fascism emphasizes direct action, including supporting the legitimacy of political violence, as a core part of its politics.[11][230] Fascism views violent action as a necessity in politics, which fascism identifies as an "endless struggle".[231] This emphasis on the use of political violence means that most fascist parties have also created their own private militias (e.g. the Nazi Party's Brownshirts and Fascist Italy's Blackshirts).
The basis of fascism's support of violent action in politics is connected to social Darwinism.[231] Fascist movements have commonly held social Darwinist views of nations, races and societies.[232] They say that nations and races must purge themselves of socially and biologically weak or degenerate people, while simultaneously promoting the creation of strong people, in order to survive in a world defined by perpetual national and racial conflict.[233]
Fascism emphasizes youth both in a physical sense of age and in a spiritual sense as related to virility and commitment to action.[234] The Italian Fascists' political anthem was called Giovinezza ("The Youth").[234] Fascism identifies the physical age period of youth as a critical time for the moral development of people who will affect society.[235]
Walter Laqueur argues that:
Italian Fascism pursued what it called "moral hygiene" of youth, particularly regarding sexuality.[237] Fascist Italy promoted what it considered normal sexual behaviour in youth while denouncing what it considered deviant sexual behaviour.[237] It condemned pornography, most forms of birth control and contraceptive devices (with the exception of the condom), homosexuality and prostitution as deviant sexual behaviour, although enforcement of laws opposed to such practices was erratic and authorities often turned a blind eye.[237] Fascist Italy regarded the promotion of male sexual excitation before puberty as the cause of criminality amongst male youth, declared homosexuality a social disease and pursued an aggressive campaign to reduce prostitution of young women.[237]
Mussolini perceived women's primary role as that of child bearers and men's as that of warriors, once saying: "War is to man what maternity is to the woman".[238] In an effort to increase birthrates, the Italian Fascist government gave financial incentives to women who raised large families and initiated policies intended to reduce the number of women employed.[239] Italian Fascism called for women to be honoured as "reproducers of the nation" and the Italian Fascist government held ritual ceremonies to honour women's role within the Italian nation.[240] In 1934, Mussolini declared that employment of women was a "major aspect of the thorny problem of unemployment" and that for women, working was "incompatible with childbearing". Mussolini went on to say that the solution to unemployment for men was the "exodus of women from the work force".[241]
The German Nazi government strongly encouraged women to stay at home to bear children and keep house.[242] This policy was reinforced by bestowing the Cross of Honor of the German Mother on women bearing four or more children. The unemployment rate was cut substantially, mostly through arms production and sending women home so that men could take their jobs. Nazi propaganda sometimes promoted premarital and extramarital sexual relations, unwed motherhood and divorce, but at other times the Nazis opposed such behaviour.[243]
The Nazis decriminalized abortion in cases where fetuses had hereditary defects or were of a race the government disapproved of, while the abortion of healthy, racially pure (Aryan) German fetuses remained strictly forbidden.[244] For non-Aryans, abortion was often compulsory. Their eugenics program also stemmed from the "progressive biomedical model" of Weimar Germany.[245] In 1935, Nazi Germany expanded the legality of abortion by amending its eugenics law, to promote abortion for women with hereditary disorders.[244] The law allowed abortion if a woman gave her permission and the fetus was not yet viable,[246][247] and for purposes of so-called racial hygiene.[248][249]
The Nazis said that homosexuality was degenerate, effeminate, perverted and undermined masculinity because it did not produce children.[250] They considered homosexuality curable through therapy, citing modern scientism and the study of sexology, which said that homosexuality could be felt by "normal" people and not just an abnormal minority.[251] Open homosexuals were interned in Nazi concentration camps.[252]
Fascism emphasizes both palingenesis (national rebirth or re-creation) and modernism.[253] In particular, fascism's nationalism has been identified as having a palingenetic character.[190] Fascism promotes the regeneration of the nation and the purging of its decadence.[253] Fascism accepts forms of modernism that it deems promote national regeneration, while rejecting forms of modernism that it regards as antithetical to national regeneration.[254] Fascism aestheticized modern technology and its association with speed, power and violence.[255] Fascism admired advances in the economy in the early 20th century, particularly Fordism and scientific management.[256] Fascist modernism has been recognized as inspired or developed by various figures, such as Filippo Tommaso Marinetti, Ernst Jünger, Gottfried Benn, Louis-Ferdinand Céline, Knut Hamsun, Ezra Pound and Wyndham Lewis.[257]
In Italy, such modernist influence was exemplified by Marinetti, who advocated a palingenetic modernist society that condemned liberal-bourgeois values of tradition and psychology, while promoting a technological-martial religion of national renewal that emphasized militant nationalism.[258] In Germany, it was exemplified by Jünger, who was influenced by his observation of the technological warfare of World War I and claimed that a new social class had been created that he described as the "warrior-worker".[259] Jünger, like Marinetti, emphasized the revolutionary capacities of technology and an "organic construction" between human and machine as a liberating and regenerative force that challenged liberal democracy, conceptions of individual autonomy, bourgeois nihilism and decadence.[259] He conceived of a society based on a totalitarian concept of "total mobilization" of such disciplined warrior-workers.[259]
According to cultural critic Susan Sontag:
Fascist aesthetics ... flow from (and justify) a preoccupation with situations of control, submissive behavior, extravagant effort, and the endurance of pain; they endorse two seemingly opposite states, egomania and servitude. The relations of domination and enslavement take the form of a characteristic pageantry: the massing of groups of people; the turning of people into things; the multiplication or replication of things; and the grouping of people/things around an all-powerful, hypnotic leader-figure or force. The fascist dramaturgy centers on the orgiastic transactions between mighty forces and their puppets, uniformly garbed and shown in ever swelling numbers. Its choreography alternates between ceaseless motion and a congealed, static, "virile" posing. Fascist art glorifies surrender, it exalts mindlessness, it glamorizes death.[260]
Sontag also enumerates some commonalities between fascist art and the official art of communist countries, such as the obeisance of the masses to the hero and a preference for the monumental and the "grandiose and rigid" choreography of mass bodies. But whereas official communist art "aims to expound and reinforce a utopian morality", the art of fascist countries such as Nazi Germany "displays a utopian aesthetics – that of physical perfection", in a way that is "both prurient and idealizing".[260]
"Fascist aesthetics", according to Sontag, "is based on the containment of vital forces; movements are confined, held tight, held in." And its appeal is not necessarily limited to those who share the fascist political ideology, because "fascism ... stands for an ideal or rather ideals that are persistent today under the other banners: the ideal of life as art, the cult of beauty, the fetishism of courage, the dissolution of alienation in ecstatic feelings of community; the repudiation of the intellect; the family of man (under the parenthood of leaders)."[260]
Fascism has been widely criticized and condemned in modern times since the defeat of the Axis Powers in World War II.
One of the most common and strongest criticisms of fascism is that it is a tyranny.[261] Fascism is deliberately and entirely non-democratic and anti-democratic.[262][263][264]
Some critics of Italian fascism have said that much of the ideology was merely a by-product of unprincipled opportunism by Mussolini and that he changed his political stances merely to bolster his personal ambitions while he disguised them as being purposeful to the public.[265] Richard Washburn Child, the American ambassador to Italy who worked with Mussolini and became his friend and admirer, defended Mussolini's opportunistic behaviour by writing: "Opportunist is a term of reproach used to brand men who fit themselves to conditions for the reasons of self-interest. Mussolini, as I have learned to know him, is an opportunist in the sense that he believed that mankind itself must be fitted to changing conditions rather than to fixed theories, no matter how many hopes and prayers have been expended on theories and programmes".[266] Child quoted Mussolini as saying: "The sanctity of an ism is not in the ism; it has no sanctity beyond its power to do, to work, to succeed in practice. It may have succeeded yesterday and fail to-morrow. Failed yesterday and succeed to-morrow. The machine first of all must run!".[266]
Some have criticized Mussolini's actions during the outbreak of World War I as opportunistic for seeming to suddenly abandon Marxist egalitarian internationalism for non-egalitarian nationalism, and note to that effect that upon endorsing Italy's intervention in the war against Germany and Austria-Hungary, Mussolini and the new fascist movement received financial support from foreign sources, such as Ansaldo (an armaments firm) and other companies,[267] as well as the British Security Service MI5.[268] Some, including Mussolini's socialist opponents at the time, have noted that regardless of the financial support he accepted for his pro-interventionist stance, Mussolini was free to write whatever he wished in his newspaper Il Popolo d'Italia without prior sanctioning from his financial backers.[269] Furthermore, the major source of the financial support that Mussolini and the fascist movement received in World War I was France, and it is widely believed to have come from French socialists who supported the French government's war against Germany and sent support to Italian socialists who wanted Italian intervention on France's side.[270]
Mussolini's transformation away from Marxism into what eventually became fascism began prior to World War I, as Mussolini had grown increasingly pessimistic about Marxism and egalitarianism while becoming increasingly supportive of figures who opposed egalitarianism, such as Friedrich Nietzsche.[271] By 1902, Mussolini was studying Georges Sorel, Nietzsche and Vilfredo Pareto.[272] Sorel's emphasis on the need for overthrowing decadent liberal democracy and capitalism by the use of violence, direct action, general strikes and neo-Machiavellian appeals to emotion impressed Mussolini deeply.[273] Mussolini's use of Nietzsche made him a highly unorthodox socialist, due to Nietzsche's promotion of elitism and anti-egalitarian views.[271] Prior to World War I, Mussolini's writings over time indicated that he had abandoned the Marxism and egalitarianism that he had previously supported in favour of Nietzsche's übermensch concept and anti-egalitarianism.[271] In 1908, Mussolini wrote a short essay called "Philosophy of Strength" based on his Nietzschean influence, in which Mussolini openly spoke fondly of the ramifications of an impending war in Europe in challenging both religion and nihilism: "[A] new kind of free spirit will come, strengthened by the war, ... a spirit equipped with a kind of sublime perversity, ... a new free spirit will triumph over God and over Nothing".[106]
Fascism has been criticized for being ideologically dishonest. Major examples of ideological dishonesty have been identified in Italian fascism's changing relationship with German Nazism.[274][275] Fascist Italy's official foreign policy positions commonly employed rhetorical ideological hyperbole to justify its actions, although during Dino Grandi's tenure as Italy's foreign minister the country engaged in realpolitik free of such fascist hyperbole.[276] Italian fascism's stance towards German Nazism fluctuated: it was supportive from the late 1920s to 1934, when Mussolini celebrated Hitler's rise to power and met with him for the first time; it turned to opposition from 1934 to 1936, after the assassination of Italy's allied leader in Austria, Engelbert Dollfuss, by Austrian Nazis; and it returned to support after 1936, when Germany was the only significant power that did not denounce Italy's invasion and occupation of Ethiopia.
After antagonism exploded between Nazi Germany and Fascist Italy over the assassination of Austrian Chancellor Dollfuss in 1934, Mussolini and Italian fascists denounced and ridiculed Nazism's racial theories, particularly by denouncing its Nordicism, while promoting Mediterraneanism.[275] Mussolini himself responded to Nordicists' claims of Italy being divided into Nordic and Mediterranean racial areas due to Germanic invasions of Northern Italy by claiming that while Germanic tribes such as the Lombards took control of Italy after the fall of Ancient Rome, they arrived in small numbers (about 8,000), quickly assimilated into Roman culture and spoke the Latin language within fifty years.[277] Italian fascism was influenced by the tradition of Italian nationalists scornfully looking down upon Nordicists' claims and taking pride in comparing the age and sophistication of ancient Roman civilization, as well as the classical revival in the Renaissance, to those of Nordic societies that Italian nationalists described as "newcomers" to civilization in comparison.[274] At the height of antagonism between the Nazis and Italian fascists over race, Mussolini claimed that the Germans themselves were not a pure race and noted with irony that the Nazi theory of German racial superiority was based on the theories of non-German foreigners, such as the Frenchman Arthur de Gobineau.[278] After the tension in German-Italian relations diminished during the late 1930s, Italian fascism sought to harmonize its ideology with German Nazism and combined Nordicist and Mediterranean racial theories, noting that Italians were members of the Aryan Race, composed of a mixed Nordic-Mediterranean subtype.[275]
In 1938, Mussolini declared upon Italy's adoption of antisemitic laws that Italian fascism had always been antisemitic.[275] In fact, Italian fascism did not endorse antisemitism until the late 1930s, when Mussolini feared alienating antisemitic Nazi Germany, whose power and influence were growing in Europe. Prior to that period there had been notable Jewish Italians who were senior Italian fascist officials, including Margherita Sarfatti, who had also been Mussolini's mistress.[275] Also contrary to Mussolini's claim in 1938, only a small number of Italian fascists were staunchly antisemitic (such as Roberto Farinacci and Giuseppe Preziosi), while others, such as Italo Balbo, who came from Ferrara, which had one of Italy's largest Jewish communities, were disgusted by the antisemitic laws and opposed them.[275] Fascism scholar Mark Neocleous notes that while Italian fascism did not have a clear commitment to antisemitism, there were occasional antisemitic statements issued prior to 1938, such as Mussolini's declaration in 1919 that the Jewish bankers in London and New York were connected by race to the Russian Bolsheviks and that eight percent of the Russian Bolsheviks were Jews.[279]