how do grizzly bears (or any other animal) know that we are not a threat to them?
> How do they know that we are weaker than them? We are smaller, no claws in sight.

> They don't get taught by their parents to hunt humans, so why are they not afraid to attack us?

See answer #1.

> How do they know that we don't have some poison or something?

Animals have very limited reasoning abilities. We don't display nature's warning colors, so to a bear we don't read as poisonous.

> Is it our body language that shows that we're scared?

Yes.

> What if we acted super confident and crazy, would they run from us then?

Maybe, depending on the bear. Imagine they are like dumb drunk people: some are rational, some are not; some are angry, some are cowards, and so on.
[ "Grizzly bears normally avoid contact with people. In spite of their obvious physical advantage they rarely actively hunt humans. Most grizzly bear attacks result from a bear that has been surprised at very close range, especially if it has a supply of food to protect, or female grizzlies protecting their offspring. A bear killing a human in a national park may be killed to prevent its attacking again.\n", "Although most bears are alpha predators in their own habitat, most do not, under normal circumstances, hunt and feed on humans. Most bear attacks occur when the animal is defending itself against anything it perceives as a threat to itself or its territory. For instance, bear sows can become extremely aggressive if they feel their cubs are threatened. Any solitary bear is also likely to become agitated if surprised or cornered, especially while eating.\n", "Grizzly bears are especially dangerous because of the force of their bite, which has been measured at over 8 megapascals (1160 psi). It has been estimated that a bite from a grizzly could even crush a bowling ball.\n", "The relationship between grizzly bears and other predators is mostly one-sided; grizzly bears will approach feeding predators to steal their kill. In general, the other species will leave the carcasses for the bear to avoid competition or predation. Any parts of the carcass left uneaten are scavenged by smaller animals. Cougars generally give the bears a wide berth. Grizzlies have less competition with cougars than with other predators, such as coyotes, wolves, and other bears. When a grizzly descends on a cougar feeding on its kill, the cougar usually gives way to the bear. When a cougar does stand its ground, it will use its superior agility and its claws to harass the bear, yet stay out of its reach until one of them gives up. Grizzly bears occasionally kill cougars in disputes over kills. There have been several accounts, primarily from the late 19th and early 20th centuries, of cougars and grizzly bears killing each other in fights to the death. The other big cat that is present in the United States, which may pose as a threat to bears, is the jaguar.\n", "Several bear species are dangerous to humans, especially in areas where they have become used to people; elsewhere, they generally avoid humans. Injuries caused by bears are rare, but are widely reported. Bears may attack humans in response to being startled, in defense of young or food, or even for predatory reasons.\n", "A bear attack is an attack by any mammal of the family Ursidae, on another animal, although it usually refers to bears attacking humans or domestic pets. Bear attacks are of particular concern for those who are in bear habitats. They can be fatal and often hikers, hunters, fishers, and others in bear country take precautions against bear attacks.\n", "Their main predators include lions, leopards, cheetahs, spotted hyenas, Cape hunting dogs, pythons, and crocodiles. They can camouflage themselves in the grasslands due to their coats, which are almost the same color. If startled or attacked, they stand still, then either hide or flee with an odd rocking-horse movement, and cautiously look back to ensure the danger is gone, generally. They use vocalizations like a shrill whistle through their nostrils and a clicking noise to alert others about danger.\n" ]
Any recommendations for a good espionage book?
I'm a fan of the atomic spies myself. Some favorites that focus on individuals (which often makes for better stories than big, all-encompassing books on Soviet espionage, like _The Haunted Wood_):

* _Bombshell: The Secret Story of America's Unknown Atomic Spy Conspiracy_. This focuses primarily on the spying of Ted Hall, a Harvard undergraduate who worked at Los Alamos. The guy is barely out of high school and he decides to spy on the atomic bomb for the USSR. Why'd he do it? How'd he do it? And why did he never go to jail, even though the FBI figured out he was a spy? The book gives interesting answers to these questions.

* _The Catcher Was a Spy_. Moe Berg was a Princeton-educated catcher for the Boston Red Sox. He was also a spy for the US during WWII. One of his jobs was to decide whether or not he should assassinate the famous German physicist Werner Heisenberg, who was thought to be working on an atomic bomb for the Nazis.

* _The Invisible Harry Gold_. Gold was not a spy himself per se, but he was part of the Rosenberg/Greenglass/Fuchs network that got a lot of information out of Los Alamos, working as a courier. What makes him a great study is that he is not some kind of trained agent or even an ideological die-hard, but just a psychologically messed-up guy who falls in with "the wrong crowd" and aims to please. A much more nuanced story than you usually get with spy accounts, and a great psychological portrait.
[ "\"Time\" magazine, while including \"The Spy Who Came in from the Cold\" in its top 100 novels list, stated that the novel was \"a sad, sympathetic portrait of a man who has lived by lies and subterfuge for so long, he's forgotten how to tell the truth.\" The book also headed the \"Publishers Weekly\"s list of 15 top spy novels in 2006.\n", "\"The Spy\" reached the \"USA Today\" best-selling book list on June 10, 2010, and remained on the list for twelve weeks, at one point reaching number sixteen on the list. The Book Reporter website said in early 2011, \"The ship-shape writing duo heaps on more excitement and thrills than a Coney Island roller coaster ride.\" \"The Citizen\", a Key West, Florida, daily newspaper said of \"The Spy\", \"Clive Cussler and Justin Scott have succeeded in writing another page-turning historical thriller filled with suspense and great period detail.\"\n", "Leading examples include the \"Agent Cody Banks\" film, the Alex Rider adventure novels by Anthony Horowitz, and the CHERUB series, by Robert Muchamore. Ben Allsop, one of England's youngest novelists, also writes spy fiction. His titles include \"Sharp\" and \"The Perfect Kill\".\n", "De Villiers' books are well known in French-speaking countries for their in-depth insider knowledge of such subjects as espionage, geopolitics, and terrorist threats, as well as their hard-core sex scenes. \n", "\"The Spy Who Came in from the Cold\" portrays Western espionage methods as morally inconsistent with Western democracy and values. The novel received critical acclaim at the time of its publication and became an international best-seller; it was selected as one of the \"All-Time 100 Novels\" by \"Time\" magazine.\n", "Other books include \"United Nations\", a book for children (Franklin Watts World Organisations Series, 2001); \"Techno-Bandits\" (co-authored; Boston Houghton Mifflin, 1983), an account of the campaign by the US Department of Defense to stop the illicit Soviet efforts to acquire American technology; and \"The End of the Street\", published in London, in 1986 (Methuen), exposing the secret planning by Rupert Murdoch to destroy the British print unions and move his newspapers to a modern printing plant at Wapping. \"The Ultimate Crime\" (Allison and Busby, 1995) was a secret history of the UN’s first 50 years and was the basis of a TV series for Channel Four, the three-part \"UN Blues\" broadcast in January 1995.\n", "His work is also cited in the books \"Bad News\" by Tom Fenton and \"Fog Facts\" by Larry Beinhart. Richard Clarke has put it on his reading list for his course on \"Terrorism, Security, and Intelligence\" at Harvard University.\n" ]
why do certain parts of audio disappear when the headphone jack isn't all the way in?
The headphone plug actually has multiple contacts along its length, separated by the colored insulating rings you can see on the plug itself. A standard stereo (TRS) plug has three: tip, ring, and sleeve, carrying the left channel, the right channel, and the common ground. When the plug isn't all the way in, some of those contacts don't line up with the jack's terminals, or line up with the wrong ones, and the audio gets scrambled. The audio is split into channels, and each earpiece is supposed to get its own. A common failure mode: if the ground contact is the one that disconnects, the drivers end up wired between left and right, so you hear the *difference* between the two channels. Anything mixed identically into both channels (usually the lead vocal) cancels out, while instruments panned to one side remain, which is why the lyrics can vanish while the rest of the song plays on.
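Here's a toy numpy sketch of that difference-signal effect (the signals and the mix are made up purely for illustration):

```python
import numpy as np

# Toy stereo mix: a "vocal" mixed to the center (identical in both channels)
# and an "instrument" panned hard left. Values are illustrative.
t = np.linspace(0, 1, 1000)
vocal = np.sin(2 * np.pi * 5 * t)        # center-panned: same in L and R
instrument = np.sin(2 * np.pi * 13 * t)  # panned left only

left = vocal + instrument
right = vocal

# Plug fully in: each earpiece gets its own channel. Plug partway out with
# the ground (sleeve) contact lost: the drivers sit between L and R, so
# what you hear is the difference signal.
difference = left - right

# The center-panned vocal cancels exactly; only the panned instrument survives.
print(np.allclose(difference, instrument))  # True
```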
[ "The Gamate's mono internal speaker is of poor quality, giving off sound that is quite distorted, particularly at low volumes. However, if a user plugs into the headphone jack, the sound is revealed to be programmed in stereo, and of a relatively high quality.\n", "Active noise-cancelling headphones use a microphone, amplifier, and speaker to pick up, amplify, and play ambient noise in phase-reversed form; this to some extent cancels out unwanted noise from the environment without affecting the desired sound source, which is not picked up and reversed by the microphone. They require a power source, usually a battery, to drive their circuitry. Active noise cancelling headphones can attenuate ambient noise by 20 dB or more, but the active circuitry is mainly effective on constant sounds and at lower frequencies, rather than sharp sounds and voices. Some noise cancelling headphones are designed mainly to reduce low-frequency engine and travel noise in aircraft, trains, and automobiles, and are less effective in environments with other types of noise.\n", "Once recorded, the binaural effect can be reproduced using headphones. It does not work with mono playback; nor does it work while using loudspeaker units, as the acoustics of this arrangement distort the channel separation via natural crosstalk (an approximation can be obtained if the listening environment is carefully designed by employing expensive crosstalk cancellation equipment.)\n", "Open-back headphones have the back of the earcups open. This leaks more sound out of the headphone and also lets more ambient sounds into the headphone, but gives a more natural or speaker-like sound, due to including sounds from the environment.\n", "Any set of headphones that provides good right and left channel isolation is sufficient to hear the immersive effects of the recording. Several high-end head set manufacturers have created some units specifically for the playback of binaural. It is also found that even normal headphones suffer from poor externalization, especially if the headphone completely blocks the ear from outside. A better design for externalization found in experiments is the open-ear one, where the drivers are sitting in front of the pinnae with the ear canal connected to the air. The hypothesis is that when the ear canal is completely blocked, the radiation impedance seen from the eardrum to the outside has been altered, which negatively affects externalization.\n", "The outer shells of in-ear headphones are made up of a variety of materials, such as plastic, aluminum, ceramic and other metal alloys. Because in-ear headphones engage the ear canal, they can be prone to sliding out, and they block out much environmental noise. Lack of sound from the environment can be a problem when sound is a necessary cue for safety or other reasons, as when walking, driving, or riding near or in vehicular traffic.\n", "This model also suffers from a whine on the headphone and microphone jacks that are located on the left of the unit. This is because of shared space with the leftmost fan, and the spinning of said fan causes interference. There is no known fix than to otherwise use a USB, FireWire/1394 or PCMCIA-based audio device or card for sound output.\n" ]
Have the Amish always been significantly different than other rural Midwestern farmers? At what point did technological and societal changes really set the Amish apart?
From the time they first settled in the US, the Amish have been different. They spoke German and eschewed the clothing that was popular. They had specific rules about dress and behavior, like no buttons, which set them apart long before modern technology. They also have their own religion, an Anabaptist branch of Protestantism, so they'd be going to their own churches and keeping to their own social circles.
[ "The first Amish began migrating to the United States in the 18th century, largely to avoid religious persecution and compulsory military service. The Northkill Creek watershed, in eastern Province of Pennsylvania, was opened for settlement in 1736 and that year Melchior Detweiler and Hans Seiber settled near Northkill. Shortly thereafter many Amish began to move to Northkill with large groups settling in 1742 and 1749.\n", "The Amish from Somerset County became the \"vanguard of Amish settlers in Midwest\", because \"out of and through it most Midwest Amish settlements were founded\". This movement either to Lancaster or Somerset resulted in a first major divide in the family tree of the Amish. The two groups differ not only in dialect (Midwestern vs. Pennsylvania forms of Pennsylvania German) but also in the selection of typical Amish family names.\n", "Amish began migrating to Pennsylvania, then known for its religious toleration, in the 18th century as part of a larger migration from the Palatinate and neighboring areas. This migration was a reaction to religious wars, poverty, and religious persecution in Europe. The first Amish immigrants went to Berks County, Pennsylvania, but later moved, motivated by land issues and by security concerns tied to the French and Indian War. Many eventually settled in Lancaster County. Other groups later settled elsewhere in North America.\n", "Although it existed for only a brief period, the Northkill settlement was fundamental in establishing the Amish in North America. The Northkill settlers included the progenitors of many widespread Amish families, such as the Yoders, Burkeys, Troyers, Hostetlers, and Hershbergers.\n", "The Northkill Amish Settlement was established in 1740 in Berks County, Pennsylvania. As the first identifiable Amish community in the new world, it was the foundation of Amish settlement in the Americas. By the 1780s it had become the largest Amish settlement, but declined as families moved elsewhere.\n", "Northkill Amish Settlement, founded around 1740, was the first Amish settlement in North America and remained the largest Amish settlement into the 1780s, but then declined as families moved on to areas of better farmland, mainly to Lancaster County, Pennsylvania and Somerset County, Pennsylvania in Pennsylvania, where they formed the Lancaster Amish Settlement around 1760 and the Somerset Amish Settlement in 1772.\n", "Amish settled in Mifflin County as early as 1791, coming from Lancaster County, Pennsylvania. In the 1840s there were three Amish congregations in the region. In 1849 one district diveded from the two others, forming the Byler Amish, the first subgroup in North America that divided because of doctrinal differences. \n" ]
why does tire size change based on car size?
Larger wheels allow the axle to sit higher off the ground, letting you drive over larger irregularities without smashing into things. This matters for something like a pickup truck, which might be driving onto an ungraded work site. A Civic or a Smart car would prefer smaller wheels because they aren't designed to leave the road, and a larger tire is harder to turn because of the leverage involved.
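To put rough numbers on that leverage point (all values made up for illustration): the force the tire can push against the road is the torque at the axle divided by the tire's radius, so a taller tire trades driving force for ground clearance.

```python
# Rough leverage arithmetic, illustrative numbers only.
axle_torque = 2000.0  # N*m delivered at the wheel (engine torque x gearing)

for radius_m in (0.30, 0.38):  # e.g. compact-car tire vs. light-truck tire
    force = axle_torque / radius_m  # N available at the contact patch
    print(f"tire radius {radius_m:.2f} m -> {force:,.0f} N of driving force")
```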
[ "Note that tire size (or dimensions of the road wheels) does not affect the gear ratio of a vehicle, and thus using a different size tire on the same vehicle does not affect the torque on the road wheels or the crawl ratio. However, for a given engine speed and a gear ratio, the output force on the road wheels decreases as the tire size increases. A lower force in turn decreases the acceleration of rotating wheels. Therefore, the smallest tires that are still big enough to drive over obstacles perform better for a given crawl ratio.\n", "Increasing the tire size increases the ground clearance of all parts of vehicle including suspended components, such as the axles. It may be necessary to make modifications to vehicle's suspension or body depending on the size of the tires to be installed and the specific vehicle.\n", "The amount a tire meets the road is an equation between the weight of the car and the type (and size) of its tire. A 1000 kg car can depress a 185/65/15 tire more than a 215/45/15 tire longitudinally thus having better linear grip and better braking distance not to mention better aquaplaning performance, while the wider tires have better (dry) cornering resistance.\n", "Vehicle motions are largely due to the shear forces generated between the tires and road, and therefore the tire model is an essential part of the math model. The tire model must produce realistic shear forces during braking, acceleration, cornering, and combinations, on a range of surface conditions. Many models are in use. Most are semi-empirical, such as the Pacejka Magic Formula model.\n", "Tires have large effects on a car's behavior and are replaced periodically; therefore, tire selection is a very cost-effective way to personalize an automobile. Choices include tires for various weather and road conditions, different sizes and various compromises between cost, grip, service life, rolling resistance, handling and ride comfort. Drivers also personalize tires for aesthetic reasons, for example, by adding tire lettering.\n", "Modern road tires have several measurements associated with their size as specified by tire codes like 225/70R14. The first number in the code (e.g., \"225\") represents the nominal tire width in millimeters. This is followed by the aspect ratio (e.g.,\"70\"), which is the height of the sidewall expressed as a percentage of the nominal tire width. \"R\" stands for radial and relates to the tire construction. The final number in the code (e.g.,\"14\") is the rim size measured in inches. The overall circumference of the tire will increase by increasing any of the tire's specifications. For example, increasing the width of the tire will also increase its circumference, because the sidewall height is a proportional length. Increasing the aspect ratio will increase the height of the tire and hence the circumference.\n", "Replacing the wheels on a car with larger ones can involve using tires with a smaller profile. This is done to keep the overall radius of the wheel/tire the same as stock to ensure the same clearances are achieved. Larger wheels are typically desired for their appearance but could also offer more space for brake components. This comes at a performance price though as larger wheels weigh more. \n" ]
why does our own body clog our nose, which is essential for breathing, during allergic reactions or when we've got a cold?
The clogging comes from the inflammatory response trying to stop the spread of the allergen. Chemicals like histamine make the blood vessels in your nose swell and leak fluid into the surrounding tissue, which is what blocks the airway and sets your nose running. The cells that release these chemicals don't know where in the body they are located; they only know that something foreign is there and that the body doesn't like it.
[ "One of the more common health risks that people encounter is a result of air pollutants and air quality. Allergic Asthma is a chronic disease that affects individual's inflammatory system when they are exposed to allergens resulting in shortness of breath, wheezing, and coughing. Environmental factors such as, air pollutants, tobacco smoke, emission fumes, and other allergens in the air when absorbed through the body are said to have an influence on allergic asthma.\n", "Increased vascular permeability causes fluid to escape from capillaries into the tissues, which leads to the classic symptoms of an allergic reaction: a runny nose and watery eyes. Allergens can bind to IgE-loaded mast cells in the nasal cavity's mucous membranes. This can lead to three clinical responses:\n", "A severe case of an allergic reaction, caused by symptoms affecting the respiratory tract and blood circulation, is called anaphylaxis. When symptoms are related to a drop in blood pressure, the person is said to be in anaphylactic shock. Anaphylaxis occurs when IgE antibodies are involved, and areas of the body that are not in direct contact with the food become affected and show symptoms. Those with asthma or an allergy to peanuts, tree nuts, or seafood are at greater risk for anaphylaxis.\n", "BULLET::::- In asthma, the bronchioles, or the \"bottle-necks\" into the sac are restricted, causing the amount of air flow into the lungs to be greatly reduced. It can be triggered by irritants in the air, photochemical smog for example, as well as substances that a person is allergic to.\n", "It may cause breathing difficulty within minutes after eating a food containing it. Asthmatics and possibly people with salicylate sensitivity (or aspirin sensitivity) are at an elevated risk for reaction to sulfites. Anaphylaxis and life-threatening reactions are rare. Other potential symptoms include sneezing, swelling of the throat, hives, and migraine.\n", "Rarer side effects may indicate a dangerous allergic reaction. These include: paradoxical bronchospasm (shortness of breath and difficulty breathing); skin itching, rash, or hives (urticaria); swelling (angioedema) of any part of the face or throat (which can lead to voice hoarseness), or swelling of the extremities.\n", "Histamine produces increased vascular permeability, causing fluid to escape from capillaries into tissues, which leads to the classic symptoms of an allergic reaction — a runny nose and watery eyes. Histamine also promotes angiogenesis.\n" ]
if you're swimming in a pool and lightning strikes the water, it'll most likely harm you. at what range when swimming in a larger body of water, like a lake or even an ocean would lightning have an effect on a person swimming in it at the time?
Lightning striking open water has a lethal range of about 6–10 meters, with most of the energy dispersing along the surface. If you're outside that range, you might still suffer burns. There's also a notable pressure wave (the underwater equivalent of thunder) that would be potentially dangerous at greater distances.
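A rough way to see why the danger falls off so quickly (this is a simplified "step potential" model with illustrative numbers, not a safety calculation): treat the strike as injecting current into the water, spreading hemispherically, so the potential at distance r is V(r) = ρI/(2πr). The voltage across a swimmer is the potential difference over the half-meter or so their body spans.

```python
import math

# Simplified step-potential sketch; every number here is illustrative.
I = 30_000.0   # A, a typical lightning peak current
rho = 100.0    # ohm*m, fresh water (seawater conducts far better)
span = 0.5     # m, rough extent of a body along the surface

def body_voltage(r):
    """Potential difference between distance r and r + span from the strike."""
    k = rho * I / (2 * math.pi)
    return k * (1 / r - 1 / (r + span))

for r in (2, 6, 10, 30):
    print(f"{r:>3} m away: ~{body_voltage(r):,.0f} V across the body")
```

The difference falls off roughly as 1/r², which is why a few extra meters of distance matter so much.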
[ "Besides boats and dockside power hookups, several other potential causes exist. Lightning strikes over or near water have caused electric shock drownings. Faulty hydroelectric generators or damaged underwater power lines can cause leakage currents, potentially creating a hazard. In general, anything electrically active that comes in contact with water has the potential to create leakage currents and contribute to this type of safety hazard.\n", "Other termination shocks can be seen in terrestrial systems; perhaps the easiest may be seen by simply running a water tap into a sink creating a hydraulic jump. Upon hitting the floor of the sink, the flowing water spreads out at a speed that is higher than the local wave speed, forming a disk of shallow, rapidly diverging flow (analogous to the tenuous, supersonic solar wind). Around the periphery of the disk, a shock front or wall of water forms; outside the shock front, the water moves slower than the local wave speed (analogous to the subsonic interstellar medium).\n", "There is no visible warning to electrified water. Swimmers will be able to feel the electricity if the current is substantial. If the swimmers notice any unusual tingling feeling or symptoms of electrical shock, it is highly likely that stray currents exist and everyone needs to get out. Swimmers should always swim away from the suspected current source. In most cases this means swimming away from docks and boats and toward another safer portion of the shoreline.\n", "A sign warns of the dangers of swimming there because the water is deep and fast flowing through channels and over underwater rocks but deaths still occur – some by swimming, others by falling in unexpectedly, many being wedged in a rock \"chute\". \n", "Hazards due to lightning obviously include a direct strike on persons or property. However, lightning can also create dangerous voltage gradients in the earth, as well as an electromagnetic pulse, and can charge extended metal objects such as telephone cables, fences, and pipelines to dangerous voltages that can be carried many miles from the site of the strike. Although many of these objects are not normally conductive, very high voltage can cause the electrical breakdown of such insulators, causing them to act as conductors. These transferred potentials are dangerous to people, livestock, and electronic apparatus. Lightning strikes also start fires and explosions, which result in fatalities, injuries, and property damage. For example, each year in North America, thousands of forest fires are started by lightning strikes.\n", "Earthquake-generated seiches can be observed thousands of miles away from the epicentre of a quake. Swimming pools are especially prone to seiches caused by earthquakes, as the ground tremors often match the resonant frequencies of small bodies of water. The 1994 Northridge earthquake in California caused swimming pools to overflow across southern California. The massive Good Friday earthquake that hit Alaska in 1964 caused seiches in swimming pools as far away as Puerto Rico. The earthquake that hit Lisbon, Portugal in 1755 caused seiches 2,000 miles (3,000 km) away in Loch Lomond, Loch Long, Loch Katrine and Loch Ness in Scotland and in canals in Sweden. The 2004 Indian Ocean earthquake caused seiches in standing water bodies in many Indian states as well as in Bangladesh, Nepal and northern Thailand. 
Seiches were again observed in Uttar Pradesh, Tamil Nadu and West Bengal in India as well as in many locations in Bangladesh during the 2005 Kashmir earthquake.\n", "Running whitewater rivers is a popular recreational sport but is not without danger. In fast moving water there is always the potential for injury or death by drowning or hitting objects. Fatalities do occur; some 50+ people die in whitewater accidents in the United States each year.\n" ]
in terms of evolution, why are peacocks' tails so big?
A large tail is an honest signal of a healthy bird: only a fit male can survive while dragging such a conspicuous handicap around. Peahens prefer males with bigger, brighter tails, so those males mate more often and pass the trait on. It's sexual selection, driven by mate choice rather than by any survival advantage, that makes the tails so big.
[ "The tail of a peacock makes the peacock more vulnerable to predators, and may therefore be a handicap. However, the message that the tail carries to the potential mate peahen may be 'I have survived in spite of this huge tail; hence I am fitter and more attractive than others'.\n", "In 1993, along with two other researchers, he investigated why the tails of birds are shaped as they are, aiming to test Charles Darwin's hypothesis that females have a preference for males with longer and more ornate tails using aerodynamic analysis. They reported that shallow forked shaped tails (such as those of the house martin) are aerodynamically optimal and that species with them had similar lengthed tails, indicating they could have developed through natural selection. In species with longer tails, males tend to have longer tails than females and which also create drag, since this is no advantage except for when courting, the authors suggested long tails may have evolved through sexual selection.\n", "One theory to explain the evolution of traits like a peacock's tail is 'runaway selection'. This requires two traits—a trait that exists, like the bright tail, and a preexisting bias in the female to select for that trait. Females prefer the more elaborate tails, and thus those males are able to mate successfully. Exploiting the psychology of the female, a positive feedback loop is enacted and the tail becomes bigger and brighter. Eventually, the evolution will level off because the survival costs to the male do not allow for the trait to be elaborated any further. Two theories exist to explain runaway selection. The first is the good genes hypothesis. This theory states that an elaborate display is an honest signal of fitness and truly is a better mate. The second is the handicap hypothesis. This explains that the peacock's tail is a handicap, requiring energy to keep and makes it more visible to predators. Thus, the signal is costly to maintain, and remains an honest indicator of the signaler's condition. Another assumption is that the signal is more costly for low quality males to produce than for higher quality males to produce. This is simply because the higher quality males have more energy reserves available to allocate to costly signaling.\n", "The plumage dimorphism of the peacock and peahen of the species within the genus \"Pavo\" is a prime example of the ornamentation paradox that has long puzzled evolutionary biologists; Darwin wrote in 1860:The sight of a feather in a peacock’s tail, whenever I gaze at it, makes me sick!The peacock's colorful and elaborate tail requires a great deal of energy to grow and maintain. It also reduces the bird's agility, and may increase the animal's visibility to predators. The tail appears to lower the overall fitness of the individuals who possess it. Yet, it has evolved, indicating that peacocks with longer and more colorfully elaborate tails have some advantage over peacocks who don’t. Fisherian runaway posits that the evolution of the peacock tail is made possible if peahens have a preference to mate with peacocks that possess a longer and more colourful tail. Peahens that select males with these tails in turn have male offspring that are more likely to have long and colourful tails and thus are more likely to be sexually successful themselves. Equally importantly, the female offspring of these peahens are more likely to have a preference for peacocks with longer and more colourful tails. 
However, though the relative fitness of males with large tails is higher than those without, the absolute fitness levels of all the members of the population (both male and female) is less than it would be if none of the peahens (or only a small number) had a preference for a longer or more colorful tail.\n", "The functions of the elaborate iridescent colouration and large \"train\" of peacocks have been the subject of extensive scientific debate. Charles Darwin suggested they served to attract females, and the showy features of the males had evolved by sexual selection. More recently, Amotz Zahavi proposed in his handicap theory that these features acted as honest signals of the males' fitness, since less-fit males would be disadvantaged by the difficulty of surviving with such large and conspicuous structures.\n", "Peacocks are a larger sized bird with a length from bill to tail of and to the end of a fully grown train as much as and weigh . The females, or peahens, are smaller at around in length and weigh . Indian peafowl are among the largest and heaviest representatives of the Phasianidae. So far as is known, only the wild turkey grows notably heavier. The green peafowl is slightly lighter in body mass despite the male having a longer train on average than the male of the Indian species. Their size, colour and shape of crest make them unmistakable within their native distribution range. The male is metallic blue on the crown, the feathers of the head being short and curled. The fan-shaped crest on the head is made of feathers with bare black shafts and tipped with bluish-green webbing. A white stripe above the eye and a crescent shaped white patch below the eye are formed by bare white skin. The sides of the head have iridescent greenish blue feathers. The back has scaly bronze-green feathers with black and copper markings. The scapular and the wings are buff and barred in black, the primaries are chestnut and the secondaries are black. The tail is dark brown and the \"train\" is made up of elongated upper tail coverts (more than 200 feathers, the actual tail has only 20 feathers) and nearly all of these feathers end with an elaborate eye-spot. A few of the outer feathers lack the spot and end in a crescent shaped black tip. The underside is dark glossy green shading into blackish under the tail. The thighs are buff coloured. The male has a spur on the leg above the hind toe.\n", "They also have long hair-like setae projecting from rear (caudal setae) that have been compared to a trailing peacock tail. The 5–7 pairs of caudal setae can be flicked over the body very quickly, so they are used like whips in defense against predators. They may also help in wind-borne dispersal.\n" ]
at any given night in a city, we see a fairly small amount of stars but far away from city lights we see thousands of stars. what determines what stars we see in a city?
Only the brightest stars remain visible. Cities produce a lot of stray light, dubbed "light pollution," which brightens the night sky itself and drowns out the light from dimmer stars.
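Star brightness is measured in magnitudes, a logarithmic scale where each step of 1 is a factor of about 2.512 in brightness. A quick sketch with typical limiting magnitudes (the 4.0 and 6.0 below are illustrative values, roughly a bright city sky versus a dark rural one):

```python
# Each magnitude step is a factor of 100 ** (1/5) ~= 2.512 in brightness.
def flux_ratio(m1, m2):
    """How many times brighter a magnitude-m1 star is than a magnitude-m2 star."""
    return 10 ** (0.4 * (m2 - m1))

city_limit, dark_limit = 4.0, 6.0
# The faintest star visible from a dark site is ~6.3x dimmer than the faintest
# star visible from the city -- and dim stars vastly outnumber bright ones.
print(f"{flux_ratio(city_limit, dark_limit):.1f}x")
```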
[ "Heavy, bright stars (both giants and blue dwarfs) are the most common stars listed in general star catalogs, even though on average they are rare in space. Small dim stars (red dwarfs) seem to be the most common stars in space, at least locally, but can only be seen with large telescopes, and then only when they are within a few tens of light-years from Earth.\n", "Many stars may be referred to in fictional works for their metaphorical or mythological associations, or else as bright points of light in the sky of the Earth, but not as locations in space or the centers of planetary systems.\n", "Many stars may be referred to in fictional works for their metaphorical or mythological associations, or else as bright points of light in the sky of the Earth, but not as locations in space or the centers of planetary systems.\n", "Many stars may be referred to in fictional works for their metaphorical or mythological associations, or else as bright points of light in the sky of Earth, but not as locations in space or the centers of planetary systems.\n", "A star is an astronomical object consisting of a luminous spheroid of plasma held together by its own gravity. The nearest star to Earth is the Sun. Many other stars are visible to the naked eye from Earth during the night, appearing as a multitude of fixed luminous points in the sky due to their immense distance from Earth. Historically, the most prominent stars were grouped into constellations and asterisms, the brightest of which gained proper names. Astronomers have assembled star catalogues that identify the known stars and provide standardized stellar designations. However, most of the estimated 300 sextillion () stars in the observable universe are invisible to the naked eye from Earth, including all stars outside our galaxy, the Milky Way.\n", "The stars of the night sky cannot be counted unaided because they are so numerous and there is no way to track which have been counted and which have not. Further complicating the count, fainter stars may appear and disappear depending on exactly where the observer is looking. The result is an impression of an extraordinarily vast star field.\n", "The star's apparent magnitude, or how bright it appears from Earth's perspective, is 14. It is too dim to be seen with the naked eye, which typically can only see objects with a magnitude around 6 or less.\n" ]
How does the photon of specific phase that causes stimulated emission in a laser device arise?
The first photon doesn't have to have a specific phase; whatever phase it has determines the phase of the laser. In terms of direction and polarization: if it is not aligned with the laser cavity (or has the wrong polarization, where that matters), that chain of photons dies down quickly and another "first photon" will start the laser. Note that actual lasers do not emit *perfect* laser light; you can still get all sorts of weird effects in between.
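A toy Monte Carlo sketch of that selection process (the alignment probability and gain figures below are made up for illustration): spontaneous emission keeps producing photons in random directions, and the first one that happens to line up with the cavity axis gets copied exponentially, phase and all.

```python
import random

# Toy model of how a laser "picks" its seed photon. Numbers are illustrative.
random.seed(1)
aligned_fraction = 0.01  # chance a spontaneous photon lines up with the cavity axis
gain_per_pass = 1.5      # net round-trip amplification for an aligned photon

# Spontaneous emission fires off photons until one happens to be aligned...
attempts = 1
while random.random() > aligned_fraction:
    attempts += 1

# ...and that single photon, whatever its phase, seeds the exponential buildup;
# misaligned photons simply leave the cavity and are never amplified.
photons = 1.0
for _ in range(30):  # 30 round trips
    photons *= gain_per_pass

print(f"aligned seed after {attempts} spontaneous emissions; "
      f"~{photons:,.0f} phase-copied photons after 30 round trips")
```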
[ "The critical detail of stimulated emission is that the induced photon has the same frequency and phase as the incident photon. In other words, the two photons are coherent. It is this property that allows optical amplification, and the production of a laser system. During the operation of a laser, all three light-matter interactions described above are taking place. Initially, atoms are energized from the ground state to the excited state by a process called \"pumping\", described below. Some of these atoms decay via spontaneous emission, releasing incoherent light as photons of frequency, ν. These photons are fed back into the laser medium, usually by an optical resonator. Some of these photons are absorbed by the atoms in the ground state, and the photons are lost to the laser process. However, some photons cause stimulated emission in excited-state atoms, releasing another coherent photon. In effect, this results in \"optical amplification\".\n", "This light emission is based on the nonlinear optical principle. The photon of an incident laser pulse (pump) is, by a nonlinear optical crystal, divided into two lower-energy photons. The wavelengths of the signal and the idler are determined by the phase matching condition, which is changed e. g. by temperature or, in bulk optics, by the angle between the incident pump laser ray and the optical axes of the crystal. The wavelengths of the signal and the idler photons can, therefore, be tuned by changing the phase matching condition.\n", "If a laser-active ion is in an excited state, it can decay to a lower state either radiatively (i.e. energy is conserved by the emission of a photon, as required for laser operation) or nonradiatively. Nonradiative emission may be via Auger decay or via energy transfer to another laser-active ion. If this occurs, the ion receiving the energy will be excited to a higher energy state than that already achieved by absorption of a pump photon. This process of further exciting an already excited laser-active ion is known as photon upconversion.\n", "The two-photon laser-induced fluorescence (TALIF) is a modification of the laser-induced fluorescence technique. In this approach the upper level is excited by absorbing two photons and registering the resulting emission from the excited state. The advantage of this approach is that the registered light from the fluorescence is with a different wavelength from the exciting laser beam, which leads to improved signal to noise ratio.\n", "If the gain (amplification) in the medium is larger than the resonator losses, then the power of the recirculating light can rise exponentially. But each stimulated emission event returns an atom from its excited state to the ground state, reducing the gain of the medium. With increasing beam power the net gain (gain minus loss) reduces to unity and the gain medium is said to be saturated. In a continuous wave (CW) laser, the balance of pump power against gain saturation and cavity losses produces an equilibrium value of the laser power inside the cavity; this equilibrium determines the operating point of the laser. If the applied pump power is too small, the gain will never be sufficient to overcome the cavity losses, and laser light will not be produced. The minimum pump power needed to begin laser action is called the \"lasing threshold\". 
The gain medium will amplify any photons passing through it, regardless of direction; but only the photons in a spatial mode supported by the resonator will pass more than once through the medium and receive substantial amplification.\n", "Single photons are extracted out of a semiconductor by spontaneous emission from the decay of a single excitation. Inside the cavity spontaneous emission is increased due to the Purcell effect. The challenge in making in a single photon source is to make sure that there is only one excited state in the system at a time. To do that, a quantum dot is placed in a microcavity (Fig. 1). A quantum dot has discrete energy levels. An excitation from its ground state to an excited state will create an exciton. The eventual decay of this exciton due to spontaneous emission will result in the emission of a single photon. DBR’s are placed in the cavity to achieve a well-defined spatial mode and to reduce linewidth broadening due to the lifetime formula_1 of the excited state (see Fig. 2).\n", "Laser-induced fluorescence (LIF) or laser-stimulated fluorescence (LSF) is a spectroscopic method in which an atom or molecule is excited to a higher energy level by the absorption of laser light followed by spontaneous emission of light. It was first reported by Zare and coworkers in 1968.\n" ]
Who is the Japanese military leader in this picture?
> My own investigation of Japanese military leaders who killed themselves in circumstances where the Americans might quickly find their body makes me think it could be either Isamu Cho or Mitsuru Ushijima. The picture of Cho kind of looks like the picture I have.

Right idea, but wrong guy, mainly because this is a suicide attempt, not a successful one! Hideki Tojo was seen as one of the principal war criminals of Japan by the Allies, and when American MPs went to arrest him in early September 1945, he attempted to shoot himself in the heart, but missed and only wounded himself. He was arrested and successfully treated by American medical personnel so that he could stand trial for war crimes, be found guilty, and finally be executed by hanging a few years later.

Now of course it is possible I'm wrong, but based on what I can see of the face, [with that trademark mustache](_URL_1_), as well as the apparent location of the wound based on the bloodied garments, I feel confident in my deduction here.

Edit: a reverse Google search failed, but "Tojo suicide photos" turned up a few that [look very similar](_URL_0_), although that specific one does not seem to be online (which would make it pretty interesting for a collector!)
[ "There are Japanese caricatures and depictions of the Imperial Japanese Army. There is also a reference to Hirohito. The Japanese soldiers speak in stereotypical dialect and advocate firing the first shot at a man's back.\n", "It shows the common Japanese soldier as an individual and as a family man, and even enemy Chinese soldiers are presented as individuals, sometimes fighting bravely. The film, based on a true story of the Sino-Japanese war, served as propaganda, instructing its audience in the correct way to endure loss without despair. To make the film, Yoshimura toured the actual battlefields in China.\n", "The Chinese soldier panel: The picture in this panel is designed as a shield with a crest on it. It depicts a Chinese soldier in Korea bayonetting Kurelek. This image is meant to represent his fear of war, which derived from his father keeping him out of the army.\n", "General was the founder of a collateral branch of the Japanese imperial family and a career officer in the Imperial Japanese Army. Son-in-law of Emperor Meiji and uncle by marriage of Emperor Hirohito, Prince Asaka was commander of Japanese forces in the final assault on Nanjing, then the capital city of Nationalist China, in December 1937. He was a perpetrator of the Nanking massacre in 1937 but was never charged.\n", "He was a dock worker in 1941 when he witnessed the Japanese attack on Pearl Harbor, and would later work as a military photographer for the U.S. Army, serving in World War II, and the Korean War and the Vietnam War. He briefly left the armed forces to work for National Geographic and the Associated Press during the Vietnam War, but then returned to work for the Army during the war. His work includes photographs of the official surrender of Japan aboard the , and a photograph of an American sergeant embracing a fellow soldier which was featured in Edward Steichen's \"The Family of Man\".\n", "Zhang Shibo (; born February 1952) is a retired general of the Chinese People's Liberation Army of China. He served as Commander of the PLA Hong Kong Garrison, Commander of the Beijing Military Region, and President of the PLA National Defence University.\n", "At the time, Japanese nationalists called the photograph a fake, and the Japanese government put a bounty of $50,000 on Wong's head: an amount equivalent to $ in 2020. Wong was known to be against the Japanese invasion of China and to have leftist political sympathies, and he worked for William Randolph Hearst who was famous for saying to his newsmen, \"You furnish the pictures and I'll furnish the war\" in relation to the Spanish–American War. Another of Wong's photos appeared in \"Look\" magazine on December 21, 1937, showing a man bent over a child of perhaps five years of age, both near the crying baby. The man was alleged to be Wong's assistant Taguchi who was arranging the children for best photographic effect. An article in \"The Japan Times and Mail\" said the man was a rescue worker who was posing the baby and the boy for the photographer. Wong described the man as the baby's father, coming to rescue his children as the Japanese aircraft returned following the bombing. Japanese propagandists drew a connection between what they claimed was a falsified image and the general news accounts by U.S. and Chinese sources reporting on the fighting in Shanghai, with the aim of discrediting all reports of Japanese atrocities.\n" ]
why do women traditionally throw underhand while men throw overhand?
What do you mean by "traditionally"? Do you mean as in softball vs. baseball? They're different games with different rules, and overhand pitching is illegal in softball. Women who do play baseball do throw overhand pitches.
[ "Dunking is much less common in women's basketball than in men's play. Dunking is slightly more common during practice sessions, but many coaches advise against it in competitive play because of the risks of injury or failing to score.\n", "\"Faltas\" (errors or faults) were made when the ball came to a halt on the ground or if it had been thrown out of bounds (outside the stone boundary markers). The ball could only be struck from the shoulder, the elbow, the head, the hips, the buttock, or the knees and never with the hands. Las Casas noted that when women played the game they did not use their hips or shoulders, but their knees. Points were earned when the ball failed to be returned from a non-faulted play (similar to the earning of points in today's volleyball). Play continued until the number of predetermined points was earned by a side. Often, players and chiefs made bets or wagers on the possible outcome of a game. These wagers were paid after a game was concluded.\n", "The overhand throw is a complex motor skill that involves the entire body in a series of linked movements starting from the legs, progressing up through the pelvis and trunk, and culminating in a ballistic motion in the arm that propels a projectile forward. It is used almost exclusively in athletic events. The throwing motion can be broken down into three basic steps: cocking, accelerating, and releasing.\n", "During those years, the women's games were popular and fun to watch but the real draws were the men's games. Pitchers that could hurl the ball in excess of 85 mph at a batter 46 feet away could strike out 15 to 20 batters a game. To make things even more difficult, the underhand delivery meant the ball was rising as it approached the plate and a talented pitcher could make the ball perform some baffling aerobatics on its journey to the batter's box. \n", "In contemporary stickball games, it is not unusual to see women playing. Female stickball players are the only players on the field who are not required to use sticks and are allowed to pick up the ball with their hands, while men are always required to play with a pair of stickball sticks. Teams are usually split into men vs. women for social games. The men will suffer some sort of penalty or disqualification for being too aggressive towards the women players, but the women have no such restrictions on their methods of playing.\n", "BULLET::::- 1999: \"Magistrate #1\" (in a case reviewed by the Judicial Commission): \"Women cause a lot of problems by nagging, bitching and emotionally hurting men. Men cannot bitch back for hormonal reasons, and often have no recourse but violence.\"\n", "As far back as the Revolutionary War, when Molly Pitcher took over a cannon after her husband fell in the field, where she was delivering water (in pitchers), women have at times been forced into combat, though until recently they have been formally banned from choosing to do so intentionally.\n" ]
Will Microscopes ever be powerful enough that we can view individual molecules?
[Already exists](_URL_0_). That image is from an atomic force microscope; optical microscopes are limited by the wavelength of light and the laws of optics.
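That optical limit is the Abbe diffraction limit, roughly d = λ/(2·NA). A quick sketch with typical values (the wavelength and aperture below are assumed, not taken from the answer):

```python
# Abbe diffraction limit: smallest resolvable detail d = wavelength / (2 * NA).
wavelength_nm = 550.0  # green light
NA = 1.4               # numerical aperture of a good oil-immersion objective

d = wavelength_nm / (2 * NA)
print(f"optical resolution limit ~{d:.0f} nm")  # ~196 nm

# A small molecule is on the order of 1 nm across, a couple of hundred times
# below this limit -- hence atomic force and electron microscopes.
```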
[ "Optical microscopes can focus on objects the size of a wavelength or larger, giving restrictions still to advancement in discoveries with objects smaller than the wavelengths of visible light. Later in the 1920s, the electron microscope was developed, making it possible to view objects that are smaller than optical wavelengths, once again, changing the possibilities in science.\n", "Continual improvements in optical microscopy are needed to keep up with the progress in nanotechnology and microbiology. Advancement in spatial resolution is key. Conventional optical microscopy is limited by a diffraction limit which is on the order of 200 nanometers (wavelength). This means that viruses, proteins, DNA molecules and many other samples are hard to observe with a regular (optical) microscope. The lens previously demonstrated with negative refractive index material, a thin planar superlens, does not provide magnification beyond the diffraction limit of conventional microscopes. Therefore, images smaller than the conventional diffraction limit will still be unavailable.\n", "Possible applications include a “planar hyperlens” that could make optical microscopes able to see objects as small as DNA, advanced sensors, more efficient solar collectors, nano-resonators, quantum computing and diffraction free focusing and imaging.\n", "Biomolecules are too small to see in detail even with the most advanced light microscopes. The methods that structural biologists use to determine their structures generally involve measurements on vast numbers of identical molecules at the same time. These methods include:\n", "The electron microscope has the capacity to obtain a resolution of up to 100 pm, whereby microscopic biomolecules and structures such as viruses, ribosomes, proteins, lipids, small molecules and even single atoms can be observed.\n", "The idea of imaging with atoms instead of light is widely discussed in the literature since the past century. Atom optics using neutral atoms instead of light could provide resolution as good as the electron microscope and be completely non-destructive, because short wavelengths on the order of a nanometer can be realized at low energy of the probing particles. \"It follows that a helium microscope with nanometer resolution is possible. A helium atom microscope will be [a] unique non-destructive tool for reflection or transmission microscopy.\"\n", "BULLET::::- The best microscopes are only of little aid if the cells or processes to be investigated are hardly discernible from their background. Dr. Oliver Griesbeck and his Research Group Cellular Dynamics develop biosensors, which stain specific cells or change their fluorescent hue when something goes on in the investigated nerve cell.\n" ]
i need help understanding the difference between torque and rotational inertia
Linear | Rotating
---|---
Force | Torque
Inertia | Rotational Inertia

Torque is the force you apply in a circular fashion. The moment arm is how far from the axis the force is applied. If I have a wrench and I'm turning a nut with it, I'm applying torque (circular force) to that nut by pushing on the end of the wrench. The moment arm is the distance between the center of the nut and my hand. The further my hand is from the nut (the longer the wrench), the more torque is applied, as long as I'm pushing with the same amount of strength.

Rotational inertia is just how much inertia a spinning object has. I spin a basketball on my finger by applying torque to it. Once it's spinning, it keeps spinning because it has rotational inertia, even if I apply no additional torque.

[I made you a crappy picture](_URL_0_)
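To put numbers on both ideas (all values made up for illustration): torque is force times moment arm, τ = F·r, and the rate a torque spins something up is α = τ/I, where I is the rotational inertia.

```python
# Worked numbers for the wrench and the basketball; all values illustrative.
force = 50.0              # N, push on the end of the wrench
for arm in (0.15, 0.30):  # m, short wrench vs. long wrench
    print(f"{arm:.2f} m wrench -> {force * arm:.1f} N*m of torque on the nut")

# Rotational inertia of a basketball, modeled as a hollow sphere: I = (2/3) m r^2
m, r = 0.62, 0.12         # kg and m, roughly a regulation basketball
I = (2 / 3) * m * r**2
torque = 0.05             # N*m, a flick of the fingers
alpha = torque / I        # rad/s^2 while the flick lasts
print(f"I = {I:.4f} kg*m^2; the flick spins it up at {alpha:.1f} rad/s^2")
```

Doubling the wrench doubles the torque for the same push; once the ball is spinning, α drops to zero because the torque does, but the spin persists.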
[ "Torque is the rotation equivalent of force in the same way that angle is the rotational equivalent for position, angular velocity for velocity, and angular momentum for momentum. As a consequence of Newton's First Law of Motion, there exists rotational inertia that ensures that all bodies maintain their angular momentum unless acted upon by an unbalanced torque. Likewise, Newton's Second Law of Motion can be used to derive an analogous equation for the instantaneous angular acceleration of the rigid body:\n", "Torque-free precession implies that no external moment (torque) is applied to the body. In torque-free precession, the angular momentum is a constant, but the angular velocity vector changes orientation with time. What makes this possible is a time-varying moment of inertia, or more precisely, a time-varying inertia matrix. The inertia matrix is composed of the moments of inertia of a body calculated with respect to separate coordinate axes (e.g. , , ). If an object is asymmetric about its principal axis of rotation, the moment of inertia with respect to each coordinate direction will change with time, while preserving angular momentum. The result is that the component of the angular velocities of the body about each axis will vary inversely with each axis' moment of inertia.\n", "The torque vector points along the axis around which the torque tends to cause rotation. To maintain rotation around a fixed axis, the total torque vector has to be along the axis, so that it only changes the magnitude and not the direction of the angular velocity vector. In the case of a hinge, only the component of the torque vector along the axis has an effect on the rotation, other forces and torques are compensated by the structure.\n", "Due to the way the torque vectors are defined, it is a vector that is perpendicular to the plane of the forces that create it. Thus it may be seen that the angular momentum vector will change perpendicular to those forces. Depending on how the forces are created, they will often rotate with the angular momentum vector, and then circular precession is created.\n", "The mathematical description of rotational forces such as torque and angular momentum often makes use of the cross product of vector calculus in three dimensions with a convention of orientation (handedness).\n", "Rotational viscometers use the idea that the torque required to turn an object in a fluid is a function of the viscosity of that fluid. They measure the torque required to rotate a disk or bob in a fluid at a known speed.\n", "The motion produced by an actuator may be either continuous rotation, as for an electric motor, or movement to a fixed angular position as for servomotors and stepper motors. A further form, the torque motor, does not necessarily produce any rotation but merely generates a precise torque which then either causes rotation, or is balanced by some opposing torque.\n" ]
electricity supply
Houses (in North America) are fed by three power lines. Imagine a top, bottom, and middle line. The top and bottom lines are the two "hot" legs, 240V with respect to each other because they swing in opposite directions. The middle line is the neutral, and it gives you 120V between it and either the top or the bottom line.
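A quick numeric sketch of that split-phase arrangement (using numpy; the 120V/60Hz figures are the standard US values):

```python
import numpy as np

# Two 120 V RMS legs, 180 degrees out of phase, measured against the neutral.
t = np.linspace(0, 1 / 60, 1000)          # one 60 Hz cycle
peak = 120 * np.sqrt(2)                   # ~170 V peak for a 120 V RMS leg
top = peak * np.sin(2 * np.pi * 60 * t)
bottom = -top                             # the opposite-phase leg

rms = lambda v: np.sqrt(np.mean(v ** 2))
print(f"top to neutral: ~{rms(top):.0f} V")           # ~120 V
print(f"top to bottom:  ~{rms(top - bottom):.0f} V")  # ~240 V
```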
[ "Electric power is usually produced by electric generators, but can also be supplied by sources such as electric batteries. It is usually supplied to businesses and homes (as domestic mains electricity) by the electric power industry through an electric power grid. Electric energy is usually sold by the kilowatt hour (1 kW·h = 3.6 MJ) which is the product of the power in kilowatts multiplied by running time in hours. Electric utilities measure power using an electricity meter, which keeps a running total of the electric energy delivered to a customer.\n", "A power supply is an electrical device that supplies electric power to an electrical load. The primary function of a power supply is to convert electric current from a source to the correct voltage, current, and frequency to power the load. As a result, power supplies are sometimes referred to as electric power converters. Some power supplies are separate standalone pieces of equipment, while others are built into the load appliances that they power. Examples of the latter include power supplies found in desktop computers and consumer electronics devices. Other functions that power supplies may perform include limiting the current drawn by the load to safe levels, shutting off the current in the event of an electrical fault, power conditioning to prevent electronic noise or voltage surges on the input from reaching the load, power-factor correction, and storing energy so it can continue to power the load in the event of a temporary interruption in the source power (uninterruptible power supply).\n", "Mains electricity (as it is known in the UK and some parts of Canada; US terms include grid power, wall power, and domestic power; in much of Canada it is known as hydro) is the general-purpose alternating-current (AC) electric power supply. It is the form of electrical power that is delivered to homes and businesses, and it is the form of electrical power that consumers use when they plug domestic appliances, televisions and electric lamps into wall outlets.\n", "The electric power industry provides the production and delivery of power, in sufficient quantities to areas that need electricity, through a grid connection. The grid distributes electrical energy to customers. Electric power is generated by central power stations or by distributed generation. The electric power industry has gradually been trending towards deregulation - with emerging players offering consumers competition to the traditional public utility companies.\n", "An electric utility is a company in the electric power industry (often a public utility) that engages in electricity generation and distribution of electricity for sale generally in a regulated market. The electrical utility industry is a major provider of energy in most countries.\n", "Electricity generation is often done with electric generators, but can also be supplied by chemical sources such as electric batteries or by other means from a wide variety of sources of energy. Electric power is generally supplied to businesses and homes by the electric power industry. Electricity is usually sold by the kilowatt hour (3.6 MJ) which is the product of power in kilowatts multiplied by running time in hours. Electric utilities measure power using electricity meters, which keep a running total of the electric energy delivered to a customer. 
Unlike fossil fuels, electricity is a low entropy form of energy and can be converted into motion or many other forms of energy with high efficiency.\n", "Electricity generation is the process of generating electric power from sources of primary energy. For electric utilities in the electric power industry, it is the first stage in the delivery of electricity to end users, the other stages being transmission, distribution, energy storage and recovery, using the pumped-storage method.\n" ]
why do muscles stiffen and lose flexibility? and why does stretching sometimes feel good and sometimes hurt?
Lots and lots of reasons, but here's the ELI5. Muscles get stiff because they get used to being short: the fibres get tighter and pack closer together. Stiffness can also come from literal knots in the muscle. Imagine you cut a piece of string in half; to make it whole again, you have to tie a knot in it. The string is whole, but shorter. Muscles accumulate knots like this, and there can be thousands of them; with healing and massage, they can be worked back to normal. Pain when stretching is normally due to excessive tearing; it's your body screaming at you to stop. Why it sometimes feels good is down to other reasons that I'm not clear on.
[ "Flexibility is improved by stretching. Stretching should only be started when muscles are warm and the body temperature is raised. To be effective while stretching, force applied to the body must be held just beyond a feeling of pain and needs to be held for at least ten seconds. Increasing the range of motion creates good posture and develops proficient performance in everyday activities increasing the length of life and overall health of the individual.\n", "Stretching prior to strenuous physical activity has been thought to increase muscular performance by extending the soft tissue past its attainable length in order to increase range of motion. Many physically active individuals practice these techniques as a “warm-up” in order to achieve a certain level of muscular preparation for specific exercise movements. When stretching, muscles should feel somewhat uncomfortable but not physically agonizing.\n", "An active stretching regimen can strengthen muscles because stretching affects muscles in a way similar to strength training, just on a smaller scale. A stretching regimen has been shown to increase weight-lifting abilities, improve endurance, and assist in plyometrics. Research shows that StretchTrainer users can increase their flexibility (as judged by a basic sit and reach test) after 30 days of use, regardless of age.\n", "Stretching is a form of physical exercise in which a specific muscle or tendon (or muscle group) is deliberately flexed or stretched in order to improve the muscle's felt elasticity and achieve comfortable muscle tone. The result is a feeling of increased muscle control, flexibility, and range of motion. Stretching is also used therapeutically to alleviate cramps.\n", "Stretching can be dangerous when performed incorrectly. There are many techniques for stretching in general, but depending on which muscle group is being stretched, some techniques may be ineffective or detrimental, even to the point of causing hypermobility, instability, or permanent damage to the tendons, ligaments, and muscle fiber. The physiological nature of stretching and theories about the effect of various techniques are therefore subject to heavy inquiry.\n", "There are different positives and negatives for the two main types of stretching: static and dynamic. Static stretching is better at creating a more intense stretch because it is able to isolate a muscle group better. But this intense of a stretch may hinder one's athletic performance because the muscle is being over stretched while held in this position and, once the tension is released, the muscle will tend to tighten up and may actually become weaker than it was previously . Also, the longer the duration of static stretching, the more exhausted the muscle becomes. This type of stretching has been shown to have negative results on athletic performance within the categories of power and speed .\n", "Stretching is part of some warm up routines, although a study in 2013 indicates that it weakens muscles in that situation. There are 3 types of stretches: ballistic stretching, dynamic, and static stretching:\n" ]
what is a proxy war? what sets it apart from a traditional war/conflict?
You have a brother called Joe and a sister called Suzy. Now, you don't want to fight Joe yourself, because your parents will get mad at you, so you give Suzy some candy to pick a fight with Joe instead. Your parents don't get upset with you, because you're not visibly involved.
[ "A proxy war is an armed conflict between two states or non-state actors which act on the instigation or on behalf of other parties that are not directly involved in the hostilities. In order for a conflict to be considered a proxy war, there must be a direct, long-term relationship between external actors and the belligerents involved. The aforementioned relationship usually takes the form of funding, military training, arms, or other forms of material assistance which assist a belligerent party in sustaining its war effort.\n", "Proxy wars became battle grounds between forces supported either directly or indirectly by the hegemonic powers and included the Korean War, the Laotian Civil War, the Arab–Israeli conflict, the Vietnam War, the Afghan War, the Angolan Civil War, and the Central American Civil Wars.\n", "Since the early twentieth century, proxy wars have most commonly taken the form of states assuming the role of sponsors to non-state proxies, essentially using them as fifth columns to undermine an adversarial power. This type of proxy warfare includes external support for a faction engaged in a civil war, terrorists, national liberation movements, and insurgent groups, or assistance to a national revolt against foreign occupation. For example, the British partly organized and instigated the Arab Revolt to undermine the Ottoman Empire during World War I. Many proxy wars began assuming a distinctive ideological dimension after the Spanish Civil War, which pitted the fascist political ideology of Italy and National Socialist ideology of Nazi Germany against the communist ideology of the Soviet Union without involving these states in open warfare with each other. Sponsors of both sides also used the Spanish conflict as a proving ground for their own weapons and battlefield tactics. During the Cold War, proxy warfare was motivated by fears that a conventional war between the United States and Soviet Union would result in nuclear holocaust, rendering the use of ideological proxies a safer way of exercising hostilities. The Soviet government found that supporting parties antagonistic to the US and Western nations was a cost-effective way to combat NATO influence in lieu of direct military engagement. In addition, the proliferation of televised media and its impact on public perception made the US public especially susceptible to war-weariness and skeptical of risking American life abroad. This encouraged the American practice of arming insurgent forces, such as the funneling of supplies to the mujahideen during the Soviet–Afghan War.\n", "It is contrasted against a cold war, in which at least two states which are not openly pursuing a state of war against each other, openly or covertly support conflicts between each other's client states or allies. Cold peace, while marked by similar levels of mistrust and antagonistic domestic policy between the two governments and populations, do not result in proxy wars, formal incursions, or similar conflicts.\n", "A war of succession or succession war is a war prompted by a succession crisis in which two or more individuals claim the right of successor to a deceased or deposed monarch. The rivals are typically supported by factions within the royal court. Foreign powers sometimes intervene, allying themselves with a faction. 
This may widen the war into one between those powers.\n", "Because it only pertains to wars involving any of the Coalition parties, not all wars counted amongst the French Revolutionary and Napoleonic Wars are considered \"Coalition Wars\". For example, the French invasion of Switzerland (1798, between the First and Second Coalition), the Stecklikrieg (1802, between the Second and Third Coalition) and the French invasion of Russia (1812, between the Fifth and Sixth Coalition) were not \"Coalition Wars\", since France fought against a single opponent.\n", "State-based conflict refers to what most people intuitively perceive as \"war\"; fighting either between two states, or between a state and a rebel group that challenges it. The UCDP defines an armed state-based conflict as: \"An armed conflict is a contested incompatibility that concerns government and/or territory where the use of armed force between two parties, of which at least one is the government of a state, results in at least 25 battle-related deaths in one calendar year\".\n" ]
why does vision degrade when you are tired?
Your eyes become fatigued because the muscles that move and focus them are working muscles like any others. After a long day of use, they need rest. Usually, by the time you're tired, your eyes have been strained enough to feel that fatigue too. Eye strain can even make you think you're tired when your eyes just need a rest, especially if you've been staring at bright screens such as a computer, phone, or television for an extended amount of time.
[ "Numerous clinical studies have shown that dark adaptation function is dramatically impaired from the earliest stages of AMD, retinitis pigmentosa (RP), and other retinal diseases, with increasing impairment as the diseases progress. AMD is a chronic, progressive disease that causes a part of your retina, called the macula, to slowly deteriorate as you get older. It is also the leading cause of vision loss among people age 50 and older. It is characterized by a breakdown of the RPE/Bruch's membrane complex in the retina, leading to an accumulation of cholesterol deposits in the macula. Eventually, these deposits become clinically-visible drusen that affect photoreceptor health, causing inflammation and a predisposition to choroidal neovascularization (CNV). During the AMD disease course, the RPE/Bruch's function continues to deteriorate, hampering nutrient and oxygen transport to the rod and cone photoreceptors. As a side effect of this process, the photoreceptors exhibit impaired dark adaptation because they require these nutrients for replenishment of photopigments and clearance of opsin to regain scotopic sensitivity after light exposure.\n", "Accommodative infacility is the inability to change the accommodation of the eye with enough speed and accuracy to achieve normal function. This can result in visual fatigue, headaches, and difficulty reading. The delay in accurate accommodation also makes vision blurry for a moment when switching between distant and near objects. The duration and extent of this blurriness depends on the extent of the deficit.\n", "As the eye ages, certain changes occur that can be attributed solely to the aging process. Most of these anatomic and physiologic processes follow a gradual decline. With aging, the quality of vision worsens due to reasons independent of diseases of the aging eye. While there are many changes of significance in the non-diseased eye, the most functionally important changes seem to be a reduction in pupil size and the loss of accommodation or focusing capability (presbyopia). The area of the pupil governs the amount of light that can reach the retina. The extent to which the pupil dilates decreases with age, leading to a substantial decrease in light received at the retina. In comparison to younger people, it is as though older persons are constantly wearing medium-density sunglasses. Therefore, for any detailed visually guided tasks on which performance varies with illumination, older persons require extra lighting. Certain ocular diseases can come from sexually transmitted diseases such as herpes and genital warts. If contact between the eye and area of infection occurs, the STD can be transmitted to the eye.\n", "Troxler's fading can occur without any extraordinary stabilization of the retinal image in peripheral vision because the neurons in the visual system beyond the rods and cones have large receptive fields. This means that the small, involuntary eye movements made when fixating on something fail to move the stimulus onto a new cell's receptive field, in effect giving unvarying stimulation. Further experimentation this century by Hsieh and Tse showed that at least some portion of the perceptual fading occurred in the brain, not in the eyes.\n", "Aging is one of the most common causes of dry eyes because tear production decreases with age. Several classes of medications (both prescription and OTC) have been hypothesized as a major cause of dry eye, especially in the elderly. 
Particularly, anticholinergic medications that also cause dry mouth are believed to promote dry eye. Dry eye may also be caused by thermal or chemical burns, or (in epidemic cases) by adenoviruses.   A number of studies have found that diabetics are at increased risk for the disease.\n", "Age can result in visual impairment, whereby non-verbal communication is reduced, which can lead to isolation and possible depression. Older adults, however, may not suffer depression as much as younger adults, and were paradoxically found to have improved mood despite declining physical health. Macular degeneration causes vision loss and increases with age, affecting nearly 12% of those above the age of 80. This degeneration is caused by systemic changes in the circulation of waste products and by growth of abnormal vessels around the retina.\n", "The first symptom of this disease is usually a slow loss of vision. Early signs of Retinitis include loss of night vision; making it harder to drive at night. Later signs of retinitis include loss of peripheral vision, leading to tunnel vision. In some cases, symptoms are experienced in only one of the eyes. Experiencing the vision of floaters, flashes, blurred vision and loss of side vision in just one of the eyes is an early indication of the onset of Retinitis.\n" ]
the median of something vs. the average.
Median is just a different way to calculate an average. The three main ways to 'average' a group of numbers are: mean, median, and mode. Let's say you have the following 11 speeds caught on a radar and you need to determine the average: (56, 58, 62, 65, 65, 68, 69, 70, 71, 74, 75). Mean = 66.6 (add all the numbers and divide by the sample size). This is what most people mean when they talk about an average. Median = 68 (the middle number when the values are sorted by size; 68 in this example is the 6th sample counting up from the smallest and the 6th counting down from the largest). Mode = 65 (the most frequently sampled speed, since there were 2 samples at that speed and only 1 at every other speed). The type of average used depends a lot on what the user wants to convey. Mode is often used to communicate the most "popular" or most likely outcome. Median is used to identify the middle, where it may be helpful to know that half of the numbers are smaller and half are larger. You can even say it is the value where any new sample has a 50% chance of being larger and a 50% chance of being smaller.
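If you want to check those numbers, Python's standard `statistics` module computes all three directly (a quick sketch using the radar speeds above):

```python
from statistics import mean, median, mode

speeds = [56, 58, 62, 65, 65, 68, 69, 70, 71, 74, 75]

print(round(mean(speeds), 1))  # 66.6 -- sum of all samples / sample size
print(median(speeds))          # 68   -- the 6th of 11 values when sorted
print(mode(speeds))            # 65   -- the only speed sampled twice
```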
[ "The median is the value separating the higher half from the lower half of a data sample (a population or a probability distribution). For a data set, it may be thought of as the \"middle\" value. For example, in the data set {1, 3, 3, 6, 7, 8, 9}, the median is 6, the fourth largest, and also the fourth smallest, number in the sample. For a continuous probability distribution, the median is the value such that a number is equally likely to fall above or below it.\n", "The median is a commonly used measure of the properties of a data set in statistics and probability theory. The basic advantage of the median in describing data compared to the mean (often simply described as the \"average\") is that it is not skewed so much by a small proportion of extremely large or small values, and so it may give a better idea of a \"typical\" value. For example, in understanding statistics like household income or assets, which vary greatly, the mean may be skewed by a small number of extremely high or low values. Median income, for example, may be a better way to suggest what a \"typical\" income is.\n", "The median is used primarily for skewed distributions, which it summarizes differently from the arithmetic mean. Consider the multiset { 1, 2, 2, 2, 3, 14 }. The median is 2 in this case, (as is the mode), and it might be seen as a better indication of central tendency (less susceptible to the exceptionally large value in data) than the arithmetic mean of 4.\n", "The median is a popular summary statistic used in descriptive statistics, since it is simple to understand and easy to calculate, while also giving a measure that is more robust in the presence of outlier values than is the mean. The widely cited empirical relationship between the relative locations of the mean and the median for skewed distributions is, however, not generally true. There are, however, various relationships for the \"absolute\" difference between them; see below.\n", "The median is one of a number of ways of summarising the typical values associated with members of a statistical population; thus, it is a possible location parameter. The median is the 2nd quartile, 5th decile, and 50th percentile. Since the median is the same as the \"second quartile\", its calculation is illustrated in the article on quartiles. A median can be worked out for ranked but not numerical classes (e.g. working out a median grade when students are graded from A to F), although the result might be halfway between grades if there is an even number of cases.\n", "The median is a robust measure of central tendency. Taking the same dataset {2,3,5,6,9}, if we add another datapoint with value -1000 or +1000 then the median will change slightly, but it will still be similar to the median of the original data. If we replace one of the values with a datapoint of value -1000 or +1000 then the resulting median will still be similar to the median of the original data.\n", "The arithmetic mean may be contrasted with the median. The median is defined such that no more than half the values are larger than, and no more than half are smaller than, the median. If elements in the data increase arithmetically, when placed in some order, then the median and arithmetic average are equal. For example, consider the data sample formula_15. The average is formula_16, as is the median. However, when we consider a sample that cannot be arranged so as to increase arithmetically, such as formula_17, the median and arithmetic average can differ significantly. 
In this case, the arithmetic average is 6.2 and the median is 4. In general, the average value can vary significantly from most values in the sample, and can be larger or smaller than most of them.\n" ]
if both nuclear fission and fusion generate energy, why don't we have infinite energy?
You don't use the same elements in fusion and fission. The whole reason the respective processes can generate energy is that the nuclear reaction results in a nucleus that is _more stable_ than the starting nuclei. You use a heavy element - such as uranium - for fission, while you use a light element - such as hydrogen - for fusion. You cannot reverse those and still get energy out of the reaction.
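As a rough worked example of "more stable nuclei release energy" (a sketch using approximate textbook masses; the deuterium–tritium reaction is the standard fusion example, not something stated above):

```python
# Approximate atomic masses in unified atomic mass units (u).
m_deuterium = 2.014102   # light nucleus (hydrogen-2)
m_tritium   = 3.016049   # light nucleus (hydrogen-3)
m_helium4   = 4.002602   # the more tightly bound product
m_neutron   = 1.008665

U_TO_MEV = 931.494  # energy equivalent of 1 u in MeV, from E = mc^2

# D + T -> He-4 + n. The products are more stable, hence slightly
# lighter; the missing mass leaves the reaction as energy.
mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
print(f"{mass_defect * U_TO_MEV:.1f} MeV per fusion")  # ~17.6 MeV
```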
[ "Nuclear fusion produces energy by combining the very lightest elements into more tightly bound elements (such as hydrogen into helium), and nuclear fission produces energy by splitting the heaviest elements (such as uranium and plutonium) into more tightly bound elements (such as barium and krypton). Both processes produce energy, because middle-sized nuclei are the most tightly bound of all.\n", "There are potentially two sources of nuclear power. Fission is used in all current nuclear power plants. Fusion is the reaction that exists in stars, including the sun, and remains impractical for use on Earth, as fusion reactors are not yet available. However nuclear power is controversial politically and scientifically due to concerns about radioactive waste disposal, safety, the risks of a severe accident, and technical and economical problems in dismantling of old power plants.\n", "Fuels that produce energy by the process of nuclear fusion are currently not utilized by humans but are the main source of fuel for stars. Fusion fuels tend to be light elements such as hydrogen which will combine easily. Energy is required to start fusion by raising temperature so high all materials would turn into plasma, and allow nuclei to collide and stick together with each other before repelling due to electric charge. This process is called fusion and it can give out energy.\n", "Hybrid nuclear fusion–fission (hybrid nuclear power) is a proposed means of generating power by use of a combination of nuclear fusion and fission processes. The basic idea is to use high-energy fast neutrons from a fusion reactor to trigger fission in otherwise nonfissile fuels like U-238 or Th-232. Each neutron can trigger several fission events, multiplying the energy released by each fusion reaction hundreds of times. This would not only make fusion designs more economical in power terms, but also be able to burn fuels that were not suitable for use in conventional fission plants, even their nuclear waste.\n", "U is not usable directly as nuclear fuel, though it can produce energy via \"fast\" fission. In this process, a neutron that has a kinetic energy in excess of 1 MeV can cause the nucleus of U to split in two. Depending on design, this process can contribute some one to ten percent of all fission reactions in a reactor, but too few of the average 2.5 neutrons produced in each fission have enough speed to continue a chain reaction.\n", "Fusion power is a proposed form of power generation that would generate electricity by using heat from nuclear fusion reactions. In a fusion process, two lighter atomic nuclei combine to form a heavier nucleus, while releasing energy. Devices designed to harness this energy are known as \"fusion reactors\".\n", "Hybrid nuclear power is a proposed means of generating power by use of a combination of nuclear fusion and fission processes. The concept dates to the 1950s, and was briefly advocated by Hans Bethe during the 1970s, but largely remained unexplored until a revival of interest in 2009, due to delays in the realization of pure fusion. When a sustained nuclear fusion power plant is built, it has the potential to be capable of extracting all the fission energy that remains in spent fission fuel, reducing the volume of nuclear waste by orders of magnitude, and more importantly, eliminating all actinides present in the spent fuel, substances which cause security concerns.\n" ]
if the edge of the universe to us is 45 billion light years away, could it have already stopped expanding?
When we talk about the "edge" of the universe, we are referring to the extent of the observable universe. Space is expanding, and there is no centre to the expansion: every point is moving away from every other point, equally in all directions. The further away you look, the more space exists between you and the point you are observing, so the faster that point appears to be moving away from you. Look far enough and space is expanding away from you at the speed of light. Roughly speaking, that is the limit of observability, because no information about events happening out there now will ever reach you. It occurs at a finite distance, so the observable universe is finite in extent, but we can't ever know what lies beyond; the universe as a whole could be infinite or not. At present, all indications are that the expansion will continue, so more and more objects will keep slipping past the observability limit and becoming invisible to us, until eventually our own galaxy will be the only thing in the night sky.
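For a feel for the numbers, here is a back-of-the-envelope sketch assuming a Hubble constant of roughly 70 km/s per megaparsec (the constants are approximate):

```python
# Hubble's law: recession speed v = H0 * d. Setting v = c gives the
# distance at which space recedes at the speed of light.
H0 = 70.0             # km/s per megaparsec (approximate)
c = 299_792.458       # speed of light in km/s
LY_PER_MPC = 3.262e6  # light-years in one megaparsec (approximate)

hubble_radius_mpc = c / H0
print(f"{hubble_radius_mpc * LY_PER_MPC / 1e9:.0f} billion light-years")
# ~14 billion light-years: a finite distance, whatever lies beyond it
```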
[ "The proper distance—the distance as would be measured at a specific time, including the present—between Earth and the edge of the observable universe is 46 billion light-years (14 billion parsecs), making the diameter of the observable universe about 93 billion light-years (28 billion parsecs). The distance the light from the edge of the observable universe has travelled is very close to the age of the Universe times the speed of light, , but this does not represent the distance at any given time because the edge of the observable universe and the Earth have since moved further apart. For comparison, the diameter of a typical galaxy is 30,000 light-years (9,198 parsecs), and the typical distance between two neighboring galaxies is 3 million light-years (919.8 kiloparsecs). As an example, the Milky Way is roughly 100,000–180,000 light-years in diameter, and the nearest sister galaxy to the Milky Way, the Andromeda Galaxy, is located roughly 2.5 million light-years away.\n", "The current accepted answer is that, although the universe is infinitely large, it is not infinitely old. It is thought to be about 13.8 billion years old, so we can only see objects as far away as the distance light can travel in 13.8 billion years. Light from stars farther away has not reached Earth, and cannot contribute to making the sky bright. Furthermore, as the universe is expanding, many stars are moving away from Earth. As they move, the wavelength of their light becomes longer, through the Doppler effect, and shifts toward red, or even becomes invisible. As a result of these two phenomena, there is not enough starlight to make space anything but black.\n", "However, recent measurements of the distances and velocities of faraway galaxies revealed a 9 percent discrepancy in the value of the Hubble constant, implying a universe that seems expanding too fast compared to previous measurements. In 2001, Dr. Wendy Freedman determined space to expand at 72 kilometers per second per megaparsec - roughly 3.3 million light years - meaning that for every 3.3 million light years further away from the earth you are, the matter where you are, is moving away from earth 72 kilometers a second faster. In the summer of 2016, another measurement reported a value of 73 for the constant, thereby contradicting 2013 measurements from the European Planck mission of slower expansion value of 67. The discrepancy opened new questions concerning the nature of dark energy, or of neutrinos.\n", "According to calculations, the current \"comoving distance\"—proper distance, which takes into account that the universe has expanded since the light was emitted—to particles from which the cosmic microwave background radiation (CMBR) was emitted, which represent the radius of the visible universe, is about 14.0 billion parsecs (about 45.7 billion light-years), while the comoving distance to the edge of the observable universe is about 14.3 billion parsecs (about 46.6 billion light-years), about 2% larger. The radius of the observable universe is therefore estimated to be about 46.5 billion light-years and its diameter about 28.5 gigaparsecs (93 billion light-years, ). The total mass of ordinary matter in the universe can be calculated using the critical density and the diameter of the observable universe to be about 1.5 × 10 kg. 
In November 2018, astronomers reported that the extragalactic background light (EBL) amounted to 4 × 10 photons.\n", "BULLET::::- 2009 – Astrophysicists studying the universe confirm its age at 13.7 billion years, discover that it will most likely expand forever without limit, and conclude that only 4% of the universe's contents are ordinary matter (the other 96% being still-mysterious dark matter, dark energy, and dark flow).\n", "However, because the expansion of the universe is accelerating, it is projected that most galaxies will eventually cross a type of cosmological event horizon where any light they emit past that point will never be able to reach us at any time in the infinite future, because the light never reaches a point where its \"peculiar velocity\" towards us exceeds the expansion velocity away from us (these two notions of velocity are also discussed in Comoving and proper distances#Uses of the proper distance). The current distance to this cosmological event horizon is about 16 billion light-years, meaning that a signal from an event happening at present would eventually be able to reach us in the future if the event was less than 16 billion light-years away, but the signal would never reach us if the event was more than 16 billion light-years away.\n", "BULLET::::- NASA and ESA jointly announce that the Universe is expanding 5% to 9% faster than previously thought, after using the Hubble Space Telescope to measure the distance to stars in 19 galaxies beyond the Milky Way.\n" ]
why do we tend to get small violent tendencies when we get angry or have a heated argument with someone else? [biology]
The "Fight or Flight" response to the confrontation. Anticipating a fight, a cascade of things happen to your physiology..adrenaline production, flushing, heat, respiration increasing, muscles tensing..the whole brain is prepped to go to battle. This also suppresses normal functions, like situational awareness giving way to tunnel vision, reduced perception of pain and fatigue, and most noteworthy: rapidly reduced impulse control. Impulse control in a potentially fatal situation can be deadly, and we have evolved a way of shutting it down in the face of danger: Don't *think* about the tiger in the bushes, just run. Baser impulses become difficult if not impossible to suppress, as seen when someone "rages". Those violent tendencies rush to the surface and find expression. We don't even have to full on rage for this. It varies from person to person but when there is moderate stimulation of a fight-or-flight response we can observe expressions of anxiety and/or aggression. It's why people yell during sports matches.
[ "Confrontation may occur between individuals, or between larger groups. Because groups are composed of multiple individuals, with each member having their own specific triggers for a violent response to a perceived provocation, risk factors which \"may not be sufficient individually to explain collective violence, in combination [can] create conditions that may precipitate aggressive confrontations between groups\". Thus provocation of a single member of one group by a single member of the other group can lead to a confrontation between the groups as a whole.\n", "Other evolutionary and genetic explanations of violent behaviour include: dopamine receptors mutations, DRD2 and DRD4, that, when mutate simultaneously, are hypothesized to cause personality disorders, low serotonin levels increasing irritability and gloom and the effects of testosterone on neurotransmitter functioning to explain the increased occurrence of aggression in males.\n", "Anger causes a reduction in cognitive ability and the accurate processing of external stimuli. Dangers seem smaller, actions seem less risky, ventures seem more likely to succeed, and unfortunate events seem less likely. Angry people are more likely to make risky decisions, and make less realistic risk assessments. In one study, test subjects primed to feel angry felt less likely to suffer heart disease, and more likely to receive a pay raise, compared to fearful people. This tendency can manifest in retrospective thinking as well: in a 2005 study, angry subjects said they thought the risks of terrorism in the year following 9/11 in retrospect were low, compared to what the fearful and neutral subjects thought.\n", "Aggression can be the result of both internal and external factors that create a measurable activation in the autonomic nervous system. This activation can become evident through symptoms such as the clenching of fists or jaw, pacing, slamming doors, hitting palms of hands with fists, or being easily startled. It is estimated that 17% of visits to psychiatric emergency service settings are homicidal in origin and an additional 5% involve both suicide and homicide. Violence is also associated with many conditions such as acute intoxication, acute psychosis, paranoid personality disorder, antisocial personality disorder, narcissistic personality disorder and borderline personality disorder. Additional risk factors have also been identified which may lead to violent behavior. Such risk factors may include prior arrests, presence of hallucinations, delusions or other neurological impairment, being uneducated, unmarried, etc. Mental health professionals complete violence risk assessments to determine both security measures and treatments for the patient.\n", "Anger and frustration confirm negative relationships. The resulting behavior patterns will often be characterized by more than their share of unilateral action because an individual will have a natural desire to avoid unpleasant rejections, and these unilateral actions (especially when antisocial) will further contribute to an individual's alienation from society. If particular rejections are generalized into feelings that the environment is unsupportive,more strongly negative emotions may motivate the individual to engage in crime. 
This is most likely to be true for younger individuals, and Agnew suggested that research focus on the magnitude, recency, duration, and clustering of such strain-related events to determine whether a person copes with strain in a criminal or conforming manner. Temperament, intelligence, interpersonal skills, self-efficacy, the presence of conventional social support, and the absence of association with antisocial (\"e.g.\", criminally inclined) age and status peers are chief among the factors Agnew identified as beneficial.\n", "The causes of violent behavior in people are often a topic of research in psychology. Neurobiologist Jan Vodka emphasizes that, for those purposes, \"violent behavior is defined as overt and intentional physically aggressive behavior against another person.\"\n", "Frustration is another major cause of aggression. The Frustration aggression theory states that aggression increases if a person feels that he or she is being blocked from achieving a goal (Aronson et al. 2005). One study found that the closeness to the goal makes a difference. The study examined people waiting in line and concluded that the 2nd person was more aggressive than the 12th one when someone cut in line (Harris 1974). Unexpected frustration may be another factor. In a separate study to demonstrate how unexpected frustration leads to increased aggression, Kulik & Brown (1979) selected a group of students as volunteers to make calls for charity donations. One group was told that the people they would call would be generous and the collection would be very successful. The other group was given no expectations. The group that expected success was more upset when no one was pledging than the group who did not expect success (everyone actually had horrible success). This research suggests that when an expectation does not materialize (successful collections), unexpected frustration arises which increases aggression.\n" ]
I'm watching English period pieces like The Tudors and Elizabeth. Did monarchs have titles like Lord Burleigh to give out? What did that entail?
If you're just talking about titles rather than estates and incomes, yes. The sovereign is the [fount of honour](_URL_0_), i.e. has the exclusive right to confer titles of nobility and orders of chivalry.
[ "There was no consistent title for the king of England before 1066, and monarchs chose to style themselves as they pleased. Imperial titles were used inconsistently, beginning with Athelstan in 930 and ended with the Norman conquest of England. Empress Matilda (1102–1167) is the only English monarch commonly referred to as \"emperor\" or \"empress\", but she acquired her title through her marriage to Henry V, Holy Roman Emperor.\n", "BULLET::::- British monarchs have become eponymous throughout the English-speaking world for time periods, fashions, etc. \"Elizabethan\", \"Georgian\", \"Victorian\", and \"Edwardian\" are examples of these.\n", "The English royal consorts were the spouses of the reigning monarchs of the Kingdom of England who were not themselves monarchs of England: spouses of some English monarchs who were themselves English monarchs are not listed, comprising Mary I and Philip who reigned together in the 16th century, and William III and Mary II who reigned together in the 17th century.\n", "After the death of Elizabeth I and the end of the Tudor dynasty, the Stuarts came to power in England. Both James I and Charles I are known to have worn the crown. Following the abolition of the monarchy and the execution of Charles I in 1649, the Tudor Crown was broken up and its valuable components sold for £1,100. According to an inventory drawn up for the sale of the king's goods, it weighed .\n", "The standard title for monarchs from Æthelstan until John was ' (\"King of the English\"). Canute the Great, a Dane, was the first to call himself \"King of England\". In the Norman period ' remained standard, with occasional use of ' (\"King of England\"). From John's reign onwards all other titles were eschewed in favour of ' or \"\". In 1604 James I, who had inherited the English throne the previous year, adopted the title (now usually rendered in English rather than Latin) \"King of Great Britain\". The English and Scottish parliaments, however, did not recognise this title until the Acts of Union of 1707.\n", "BULLET::::- The most frequently sung words refer to 17th-century monarchs. Therefore, a later proposed model is Simon Symonds, who was an Independent in the Protectorate, a Church of England cleric under Charles II, a Roman Catholic under James II, and a moderate Anglican under William III and Mary II.\n", "Sir Henry Vernon, KB, (1441–13 April 1515) was a Tudor-era English landowner, politician, and courtier. He was the Controller of the household of Arthur, Prince of Wales, eldest son of Henry VII of England and heir to the throne until his untimely death.\n" ]
Why do atoms "want" to get full outer electron shells when bonding?
Good question! Understanding this behaviour requires that you first understand that a reaction can be considered as a number of individual processes. When you combine chlorine and sodium to make table salt, the sodium atom loses an electron, the chlorine atom gains one, and the resultant ions bond due to their charge (this description is accurate to a first approximation). Removing electrons from atoms takes energy. Adding electrons to neutral atoms releases energy, but adding electrons to already-negative ions typically takes energy. Forming the resultant bond between the atoms releases energy. So, let's add this all up. Sodium has a single electron in its outer shell. The energy cost of removing it, combined with the energy gain of creating Cl^- and then the salt NaCl, yields a large negative number for the overall energy change, because the bond that's formed is quite strong. That is, the overall process is favoured and the reaction happens. You may ask: if the bond is quite strong, why not form two of them (i.e. NaCl_2) and get twice the energy out from bonding? Removing a second electron from sodium is much tougher: it's at a lower energy level, much closer to the nucleus, and more tightly bound. It turns out that the energy gain from a second bond doesn't make up for the extra energy required to remove that second electron. Magnesium, by contrast, does form two bonds, because it has two electrons in its outer shell which are comparatively easy to remove. So, it's not so much that an element wants to form a particular number of bonds. Elements will form as many bonds as they can (because bond formation releases energy) until the energetics become unfavourable. Sometimes, if you have particularly reactive compounds, you can exceed the traditional "correct" number of bonds because the reactive compound releases sufficient energy in the reaction. Bond angles are caused by the electrons in bonds repelling each other. In methane, a tetrahedral molecule, you end up with bonds at an angle of about 109 degrees because that maximises the distance between bonds. However, lone pairs that aren't involved in bonding exert more repulsion than bond pairs. So, in water, oxygen's two lone pairs squeeze the bond pairs closer together, to an angle of about 104 degrees. Read up on "VSEPR" for more about this.
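To put rough numbers on that bookkeeping, here is a sketch using approximate textbook values in kJ/mol. It deliberately ignores the atomisation and dissociation steps, and the hypothetical NaCl_2 lattice energy is borrowed from MgCl_2, so treat it as illustrative only:

```python
# Approximate textbook energies in kJ/mol (positive = costs energy).
IE1_NA       = +496    # remove sodium's single outer electron
IE2_NA       = +4562   # remove a second, inner-shell electron: huge cost
EA_CL        = -349    # energy released when a Cl atom gains an electron
LATTICE_NACL = -787    # energy released forming the NaCl lattice

nacl = IE1_NA + EA_CL + LATTICE_NACL
print(f"NaCl:  {nacl:+d} kJ/mol")   # about -640: favourable, so it forms

# Hypothetical NaCl_2: even if its lattice were as strongly bound as
# MgCl_2's (roughly -2526 kJ/mol), the second ionisation dominates.
nacl2 = IE1_NA + IE2_NA + 2 * EA_CL - 2526
print(f"NaCl2: {nacl2:+d} kJ/mol")  # large and positive: doesn't form
```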
[ "Chemical bonds between atoms in a molecule form because they make the situation more stable for the involved atoms, which generally means the sum energy level for the involved atoms in the molecule is lower than if the atoms were not so bonded. As separate atoms approach each other to covalently bond, their orbitals affect each other's energy levels to form bonding and antibonding molecular orbitals. The energy level of the bonding orbitals is lower, and the energy level of the antibonding orbitals is higher. For the bond in the molecule to be stable, the covalent bonding electrons occupy the lower energy bonding orbital, which may be signified by such symbols as σ or π depending on the situation. Corresponding anti-bonding orbitals can be signified by adding an asterisk to get σ* or π* orbitals. A non-bonding orbital in a molecule is an orbital with electrons in outer shells which do not participate in bonding and its energy level is the same as that of the constituent atom. Such orbitals can be designated as n orbitals. The electrons in an n orbital are typically lone pairs.\n", "Bonds are formed by atoms so that they are able to achieve a lower energy state. Free atoms will have more energy than a bonded atom. This is because some energy is released during bond formation, allowing the entire system to achieve a lower energy state. The bond length, or the minimum separating distance between two atoms participating in bond formation, is determined by their repulsive and attractive forces along the internuclear direction. As the two atoms get closer and closer, the positively charged nuclei repel, creating a force that attempts to push the atoms apart. As the two atoms get further apart, attractive forces work to pull them back together. Thus an equilibrium bond length is achieved and is a good measure of bond stability.\n", "In order to gain enough electrons to fill their valence shells (see also octet rule), many atoms will form covalent bonds with other atoms. In the simplest case, that of a single bond, two atoms each contribute one unpaired electron, and the resulting pair of electrons is shared between them. Atoms which possess too few bonding partners to satisfy their valences and which possess unpaired electrons are termed \"free radicals\"; so, often, are molecules containing such atoms. When a free radical exists in an immobilized environment (for example, a solid), it is referred to as an \"immobilized free radical\" or a \"dangling bond\".\n", "In the context of atomic orbitals, an open shell is a valence shell which is not completely filled with electrons or that has not given all of its valence electrons through chemical bonds with other atoms or molecules during a chemical reaction. Atoms generally reach a noble gas configuration in a molecule. The noble gases (He, Ne, Ar, Kr, Xe, Rn) are less reactive and have configurations 1s (He),\n", "In the shell model, this phenomenon is explained by shell-filling. Successive atoms become smaller because they are filling orbits of the same size, until the orbit is full, at which point the next atom in the table has a loosely bound outer electron, causing it to expand. The first Bohr orbit is filled when it has two electrons, which explains why helium is inert. The second orbit allows eight electrons, and when it is full the atom is neon, again inert. The third orbital contains eight again, except that in the more correct Sommerfeld treatment (reproduced in modern quantum mechanics) there are extra \"d\" electrons. 
The third orbit may hold an extra 10 d electrons, but these positions are not filled until a few more orbitals from the next level are filled (filling the n=3 d orbitals produces the 10 transition elements). The irregular filling pattern is an effect of interactions between electrons, which are not taken into account in either the Bohr or Sommerfeld models and which are difficult to calculate even in the modern treatment.\n", "The superposition of the two 1s atomic orbitals leads to the formation of the σ and σ* molecular orbitals. Two atomic orbitals in phase create a larger electron density, which leads to the σ orbital. If the two 1s orbitals are not in phase, a node between them causes a jump in energy, the σ* orbital. From the diagram you can deduce the bond order, how many bonds are formed between the two atoms. For this molecule it is equal to one. Bond order can also give insight to how close or stretched a bond has become if a molecule is ionized.\n", "The quantum theory of the atom explains the eight electrons as a closed shell with an sp electron configuration. A closed-shell configuration is one in which low-lying energy levels are full and higher energy levels are empty. For example, the neon atom ground state has a full shell (2s 2p) and an empty shell. According to the octet rule, the atoms immediately before and after neon in the periodic table (i.e. C, N, O, F, Na, Mg and Al), tend to attain a similar configuration by gaining, losing, or sharing electrons.\n" ]
What is the best referencing/organizing software you use to write history and why?
Not history specific, but OneNote works REALLY well for writing initial drafts. You can organize your work far beyond anything else I've used, and can include text clippings, photos, videos, and links off to the side. It eschews the "page" construct and is more like a whiteboard. Once you get down to doing a final draft, though, you're better off switching to something designed to nicely handle notations, footnotes, and all that.
[ "The more-concise author-date style (sometimes referred to as the \"reference list style\") is more common in the physical, natural, and social sciences. This style involves sources being \"briefly cited in the text, usually in parentheses, by author’s last name and year of publication\" with the parenthetical citations corresponding to \"an entry in a reference list, where full bibliographic information is provided.\"\n", "Technical writing as a discipline usually requires that a technical writer use a style guide. These guides may relate to a specific project, product, company, or brand. They ensure that technical writing reflects formatting, punctuation, and general stylistic standards that the audience expects. In the United States, many consider the \"Chicago Manual of Style\" the bible for general technical communication. Other style guides have their adherents, particularly for specific industries—such as the \"Microsoft Style Guide\" in some information technology settings.\n", "ASA style is supported by most major reference management software programs, including Endnote, Procite, Zotero, Refworks, and so forth, making the formatting of references a fairly straightforward task.\n", "Another use of a reference model is to educate. Using a reference model, leaders in software development can help break down a large problem space into smaller problems that can be understood, tackled, and refined. Developers who are new to a particular set of problems can quickly learn what the different problems are, and can focus on the problems that they are being asked to solve, while trusting that other areas are well understood and rigorously constructed. The level of trust is important to allow software developers to efficiently focus on their work.\n", "Reference software is software which emulates and expands upon print reference forms including the dictionary, translation dictionary, encyclopaedia, thesaurus, and atlas. Like print references, reference software can either be general or specific to a domain, and often includes maps and illustrations, as well as bibliography and statistics. Reference software may include multimedia content including animations, audio, and video, which further illustrate a concept. Well designed reference software improves upon the navigability of print references, through the use of search functionality and hyperlinks.\n", "Reference management software, citation management software, company reference software or personal bibliographic management software is software for scholars and authors to use for recording and utilising bibliographic citations (references) as well as managing project references either as a company or an individual. Once a citation has been recorded, it can be used time and again in generating bibliographies, such as lists of references in scholarly books, articles and essays. 
The development of reference management packages has been driven by the rapid expansion of scientific literature.\n", "Style guides are important to writers since \"virtually all professional editors work closely with one of them in editing a manuscript for publication.\" Comprehensive style guides, such as the \"Oxford Style Manual\" in the United Kingdom and style guides developed by the American Psychological Association, and the Modern Language Association in the United States, provide standards for a wide variety of writing, design, and English language topics—such as grammar, punctuation, and typographic conventions—and are widely used regardless of profession.\n" ]
what is the purpose of checking in for a flight, if you can check in online?
The online check-in process is mainly there to get the customer to complete as much of the administration and data entry as possible beforehand, rather than having staff do it whilst a queue waits.
[ "Online check-in is the process in which passengers confirm their presence on a flight via the Internet and typically print their own boarding passes. Depending on the carrier and the specific flight, passengers may also enter details such as meal options and baggage quantities and select their preferred seating.\n", "Typically, web-based check-in for airline travel is offered on the airline's website not earlier than 24 hours before a flight's scheduled departure or seven days for Internet Check-In Assistant. However, some airlines allow a longer time, such as Ryanair, which opens online check-in 30 and 4 days beforehand (depending on whether the passenger paid for a seat reservation), AirAsia, which opens it 14 days prior to departure, and easyJet, which opens as soon as a passenger is ticketed (however for easyJet, passengers are not checked-in automatically after ticketing, the passenger must click the relevant button). Depending on the airline, there can be benefits of better seating or upgrades to first class or business class offered to the first people to check in for a flight. In order to meet this demand, some sites have offered travelers the ability to request an airline check-in prior to the 24-hour window and receive airline boarding passes by email when available from the airline. Some airlines charge for the privilege of early check-in before the 24-hour window opens, thus capitalising on the demand for desirable seats such as those immediately behind a bulkhead or emergency exit row.\n", "Check-in is usually the first procedure for a passenger when arriving at an airport, as airline regulations require passengers to check in by certain times prior to the departure of a flight. This duration spans from 15 minutes to 4 hours depending on the destination and airline (with self check in, this can be expanded to 24 hours, if checking in by online processes). During this process, the passenger has the ability to ask for special accommodations such as seating preferences, inquire about flight or destination information, accumulate frequent flyer program miles, or pay for upgrades. The required time is sometimes written in the reservation, sometimes written somewhere in websites, and sometimes only referred as \"passengers should allow sufficient time for check-in\". \n", "Several websites assist people holding e-tickets to check in online in advance of the twenty-four-hour airline restriction. These sites store a passenger's flight information and then when the airline opens up for online check-in the data is transferred to the airline and the boarding pass is emailed back to the customer. With this e-ticket technology, if a passenger receives his boarding pass remotely and is travelling without check-in luggage, he may bypass traditional counter check-in.\n", "Many airlines have a deadline for passengers to check in before each flight. This is to allow the airline to offer unclaimed seats to stand-by passengers, to load luggage onto the plane and to finalize documentation for take-off. The passenger must also take into account the time that may be needed for them to clear the check-in line, to pass security and then to walk (sometimes also to ride) from the check-in area to the boarding area. This may take several hours at some airports or at some times of the year. 
On international flights, additional time would be required for immigration and customs clearance.\n", "The check-in process at airports enables passengers to check in luggage onto a plane and to obtain a boarding pass. When presenting at the check-in counter, a passenger will provide evidence of the right to travel, such as a ticket, visa or electronic means. Each airline provides facilities for passengers to check in their luggage, except for their carry-on bags. This may be by way of airline-employed staff at check-in counters at airports or through an agency arrangement or by way of a self-service kiosk. The luggage is weighed and tagged, and then placed on a conveyor that usually feeds the luggage into the main baggage handling system. The luggage goes into the aircraft's cargo hold. The check-in staff then issues each passenger with a boarding pass.\n", "Many airlines encourage travellers to check in online up to a month before their flight and obtain their boarding pass before arriving at the airport. Some carriers offer incentives for doing so (e.g., in 2015, US Airways offered 1000 bonus miles to anyone checking in online,), while others charge fees for checking in or printing one's boarding pass at the airport.\n" ]
Why do the edges of certain materials dry faster than the middles?
The edges are simply touching more air than the concrete in the centre of a section of sidewalk. Whereas the centre is only exposed to air above it (along with the very small amount in cracks, etc.), the edges are exposed to air on two fronts: the top and the side. More air contact lets more heat transfer into the edges of the sidewalk, causing more water to evaporate there. Since the centre is only exposed to air on top, the water in that section takes longer to acquire enough energy to evaporate, and so it remains wet for longer.
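A toy model of the geometry (purely illustrative numbers; think of the sidewalk as a grid of identical cells): an edge cell holds the same water as a centre cell but exposes more surface to the air.

```python
# Each cell: 1 m x 1 m across, 0.1 m deep, same water-holding volume.
top    = 1.0 * 1.0        # m^2 of air-exposed surface every cell has
side   = 1.0 * 0.1        # m^2 of extra exposed face per open side
volume = 1.0 * 1.0 * 0.1  # m^3

print(top / volume)              # 10.0 -- exposed area per m^3, centre
print((top + side) / volume)     # 11.0 -- edge cell (one open side)
print((top + 2 * side) / volume) # 12.0 -- corner cell (two open sides)
```

More exposed area per unit of water means faster heat uptake and evaporation, which is why corners dry first, then edges, then the middle.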
[ "Because reeds change with climate, reeds that are too soft can be kept in the hopes that they eventually thicken, but there is nothing else that can be done. If a reed is too stiff, however, there are solutions. The most simple solution is to turn a piece of paper over so there is no ink and gently rotate the reed around it while gently placing the fingers at the tip and the butt to ensure even distribution on the paper. This works if the reed is just barely too stiff or warped (the tip is not flat). If the reed is more than a little too stiff, sandpaper can be used (preferably 300–500 grain) to repeat the process as described. Be careful not to damage the tip of the reed.\n", "Bound materials are sensitive to rapid temperature or humidity cycling due to differential expansion of the binding and pages, which may cause the binding to crack and/or the pages to warp. Changes in temperature and humidity should be done slowly so as to minimize the difference in expansion rates. However, an accelerated aging study on the effects of fluctuating temperature and humidity on paper color and strength showed no evidence that cycling of one temperature to another or one RH to another caused a different mechanism of decay.\n", "Materials such as stones, sand and ceramics are considered 'dry' and have much lower equilibrium moisture content than organic material like wood and leather. typically a fraction of a percent by weight when in equilibrium of air of Relative humidity 10% to 90%. This affects the rate that buildings need to dry out after construction, typical cements starting with 40-60% water content. \n", "The finer the particles, the closer the clay bond, and the denser and stronger the fired product. \"The strength in the dry state increases with grog down as fine as that passing the 100-mesh sieve, but decreases with material passing the 200-mesh sieve.\" \n", "A high cooling rate of thick sections will cause a steep thermal gradient in the material. The outer layers of the heat treated part will cool faster and shrink more, causing it to be under tension and thermal staining. At high cooling rates, the material will transform from austenite to martensite which is much harder and will generate cracks at much lower strains. The volume change (martensite is less dense than austenite) can generate stresses as well. The difference in strain rates of the inner and outer portion of the part may cause cracks to develop in the outer portion, compelling the use of slower quenching rates to avoid this. By alloying the steel with tungsten, the carbon diffusion is slowed and the transformation to BCT allotrope occurs at lower temperatures, thereby avoiding the cracking. Such a material is said to have its hardenability increased. Tempering following quenching will transform some of the brittle martensite into tempered martensite. If a low-hardenability steel is quenched, a significant amount of austenite will be retained in the microstructure, leaving the steel with internal stresses that leave the product prone to sudden fracture.\n", "Many dryers consist of a rotating drum called a \"tumbler\" through which heated air is circulated to evaporate the moisture, while the tumbler is rotated to maintain air space between the articles. Using these machines may cause clothes to shrink or become less soft (due to loss of short soft fibers/lint). 
A simpler non-rotating machine called a \"drying cabinet\" may be used for delicate fabrics and other items not suitable for a tumble dryer.\n", "Moisture content affects both the ease of cutting wood and the final shape of the work when it dries. Wetter wood cuts easily with a continuous ribbon of shavings that are relatively dust-free. However, the wet wood moves as it dries. shrinking less along the grain. These variable changes may add the illusion of an oval bowl, or draw attention to features of the wood. Dry wood is necessary for turnings that require precision, as in the fit of a lid to a box, or in forms where pieces are glued together.\n" ]
Why can I hear a transmitted radio signal on several frequencies?
It's called harmonics. The idea comes from Fourier analysis: any repeating waveform can be built up as a sum of sinusoidal waves at integer multiples of a fundamental frequency. A real-world 11 MHz transmitter is never perfectly sinusoidal, so it also radiates a little energy at 22 MHz, 33 MHz, 44 MHz, and so on, at decreasing amplitudes, and your receiver is sensitive enough to pick those harmonics up. I need to stress that this is not a fault in your transmitter, but a fundamental property of maths and physics.
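A minimal numerical sketch of this effect (frequencies scaled down to Hz so one second of samples is enough; the clipping stands in for whatever imperfection distorts the transmitter's output):

```python
import numpy as np

fs = 1000                       # sample rate, Hz
t = np.arange(0, 1, 1 / fs)     # one second of signal
f0 = 11                         # fundamental, standing in for 11 MHz

pure = np.sin(2 * np.pi * f0 * t)
distorted = np.clip(pure, -0.7, 0.7)   # mild clipping = distortion

spectrum = np.abs(np.fft.rfft(distorted))
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# Strongest five components. Symmetric clipping creates only the odd
# harmonics, so expect peaks at 11, 33, 55 Hz ... -- all integer
# multiples of the fundamental.
for i in np.argsort(spectrum)[-5:][::-1]:
    print(f"{freqs[i]:5.0f} Hz  amplitude {spectrum[i]:8.1f}")
```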
[ "The radio waves from many transmitters pass through the air simultaneously without interfering with each other because each transmitter's radio waves oscillate at a different rate, in other words each transmitter has a different frequency, measured in kilohertz (kHz), megahertz (MHz) or gigahertz (GHz). The receiving antenna typically picks up the radio signals of many transmitters. The receiver uses \"tuned circuits\" to select the radio signal desired out of all the signals picked up by the antenna, and reject the others. A tuned circuit (also called resonant circuit or tank circuit) acts like a resonator, similarly to a tuning fork. It has a natural resonant frequency at which it oscillates. The resonant frequency of the receiver's tuned circuit is adjusted by the user to the frequency of the desired radio station; this is called \"tuning\". The oscillating radio signal from the desired station causes the tuned circuit to resonate, oscillate in sympathy, and it passes the signal on to the rest of the receiver. Radio signals at other frequencies are blocked by the tuned circuit and not passed on.\n", "The radio waves from many transmitters pass through the air simultaneously without interfering with each other. They can be separated in the receiver because each transmitter's radio waves oscillate at a different rate, in other words each transmitter has a different frequency, measured in kilohertz (kHz), megahertz (MHz) or gigahertz (GHz). The bandpass filter in the receiver consists of a tuned circuit which acts like a resonator, similarly to a tuning fork. It has a natural resonant frequency at which it oscillates. The resonant frequency is set equal to the frequency of the desired radio station. The oscillating radio signal from the desired station causes the tuned circuit to oscillate in sympathy, and it passes the signal on to the rest of the receiver. Radio signals at other frequencies are blocked by the tuned circuit and not passed on.\n", "In radio reception, noise is unwanted random electrical signals always present in a radio receiver in addition to the desired radio signal. Radio noise is a combination of natural electromagnetic atmospheric noise (\"spherics\", static) created by electrical processes in the atmosphere like lightning, manmade radio frequency interference (RFI) from other electrical devices picked up by the receiver's antenna, and thermal noise present in the receiver input circuits, caused by the random thermal motion of molecules. The level of noise determines the maximum sensitivity and reception range of a radio receiver; if no noise were picked up with radio signals, even weak transmissions could be received at virtually any distance by making a radio receiver that was sensitive enough. With noise present, if a radio source is so weak and far away that the radio signal in the receiver has a lower amplitude than the average noise, the noise will drown out the signal. \n", "As they travel farther from the transmitting antenna, radio waves spread out so their signal strength (intensity in watts per square meter) decreases, so radio transmissions can only be received within a limited range of the transmitter, the distance depending on the transmitter power, antenna radiation pattern, receiver sensitivity, noise level, and presence of obstructions between transmitter and receiver. 
An omnidirectional antenna transmits or receives radio waves in all directions, while a directional antenna or high gain antenna transmits radio waves in a beam in a particular direction, or receives waves from only one direction. \n", "Radio waves are a type of electromagnetic radiation with wavelengths in the electromagnetic spectrum longer than infrared light. Radio waves have frequencies as high as 300 gigahertz (GHz) to as low as 30 hertz (Hz). At 300 GHz, the corresponding wavelength is 1 mm, and at 30 Hz is 10,000 km. Like all other electromagnetic waves, radio waves travel at the speed of light. They are generated by electric charges undergoing acceleration, such as time varying electric currents. Naturally occurring radio waves are emitted by lightning and astronomical objects. \n", "At the receiver, the radio wave induces a tiny oscillating voltage in the receiving antenna which is a weaker replica of the current in the transmitting antenna. This voltage is applied to the radio receiver, which amplifies the weak radio signal so it is stronger, then demodulates it, extracting the original modulation signal from the modulated carrier wave. The modulation signal is converted by a transducer back to a human-usable form: an audio signal is converted to sound waves by a loudspeaker or earphones, a video signal is converted to images by a display, while a digital signal is applied to a computer or microprocessor, which interacts with human users. \n", "Radio transmitters work by mixing a radio frequency (RF) signal of a specific frequency, the carrier wave, with the audio signal to be broadcast. In AM transmitters this mixing usually takes place in the final RF amplifier (high level modulation). It is less common and much less efficient to do the mixing at low power and then amplify it in a linear amplifier. Either method produces a set of frequencies with a strong signal at the carrier frequency and with weaker signals at frequencies extending above and below the carrier frequency by the maximum frequency of the input signal. Thus the resulting signal has a spectrum whose bandwidth is twice the maximum frequency of the original input audio signal. \n" ]
why is it easier to shoot at people under you in games?
Insert the "I have the high ground" meme. It's not only in games; the high ground gives you the vision advantage in real life too.
[ "Many shooters will allow players to zoom down the sights of a gun or use a scope, usually exchanging movement speed, field of vision, and the speed of their traverse for greater accuracy. This can make a player considerably more vulnerable to circle-strafing, as objects will pass through their field of vision more quickly, they are less capable of keeping up with the target, and their lowered speed makes dodging more difficult.\n", "BULLET::::- The converse approach, suggesting a level of expertise far higher than the player actually possesses, can also be effective. For example, although gamesmanship frowns on simple distractions like whistling loudly while an opponent takes a shot, it is good gamesmanship to do so when taking a shot oneself, suggesting as it does a level of carefree detachment which the opponent does not possess.\n", "Tactical shooters are designed for realism. It is not unusual for players to be killed with a single bullet, and thus players must be more cautious than in other shooter games. The emphasis is on realistic modeling of weapons, and power-ups are often more limited than in other action games. This restrains the individual heroism seen in other shooter games, and thus tactics become more important.\n", "In first person shooter games such as \"\", \"Tactical Ops\", and \"Unreal Tournament\", the concept of a critical hit is often substituted by the headshot, where a player attempts to place a shot on an opposed player or non-player character's head area or other weak-spot, which is generally fatal, or otherwise devastating, when successfully placed. Headshots require considerable accuracy as players often have to compensate for target movement and a very specific area of the enemy's body. In some games, even when the target is stationary, the player may have to compensate for movement generated by the telescopic sight.\n", "The game has a pseudo-realistic portrayal of the weaponry used. There is no on-screen crosshair and the players must use the iron sights of the game's weapon model to accurately aim the weapon. Shooting \"from the hip\" is still possible; however, the free-aim system makes this difficult. Weapons are also more deadly than in most first-person shooter titles, with most rifles capable of taking out players with one or two shots to the torso. According to their class, players can also use fragmentation grenades, smoke grenades, and RPGs.\n", "Played from a third-person perspective, the majority of the game is a beat 'em up, with the player using the right analog stick to direct blows at enemies. The game also features a number of levels where the player uses firearms with unlimited ammunition. During levels, the player constantly builds up a store of adrenaline, which the player can unleash to perform powerful hand-to-hand combat strikes. An alternative is, when using firearms, the player initiates a temporary slow motion bullet time mode similar to the video game \"Max Payne\". During the firearm scenes, you can hide behind various objects\n", "BULLET::::- Certain tactical shooters have higher degrees of realism than other shooters. Sometimes called \"soldier sims\", these games try to simulate the feeling of being in combat. This includes games such as \"Arma\".\n" ]
why doesn't the US place a price ceiling on medical equipment?
The issue is that developing new medicines or medical devices is somewhat of a gamble. It costs a lot of money and the project you're working on might turn out to be ineffective or not be approved by the FDA or whatever. The prospect of being able to make a lot of money encourages companies to go out there and spend a lot of money developing new medicines. There are certain things that could be done to make this system work better, either by identifying situations where the market is failing and then trying to fix them with new rules, or by subsidizing poor people's drug bills more, or by having the government take a more active role in research and development of new medical technology. A blanket law that said something like "no pill shall cost more than $10" would not be a good idea though.
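A toy expected-value calculation shows why the potential payoff has to be large (every number here is invented purely for illustration; real costs and approval rates vary widely):

```python
# Hypothetical drug-development portfolio. All figures are made up.
n_projects = 10        # candidate drugs started
cost_each = 200e6      # R&D spend per candidate, dollars
p_success = 0.1        # chance of surviving trials and approval

total_spend = n_projects * cost_each      # $2B across the portfolio
expected_hits = n_projects * p_success    # ~1 approved drug

# The one winner has to pay for all the failures too:
breakeven_revenue = total_spend / expected_hits
print(f"each approved drug must earn ~${breakeven_revenue / 1e9:.1f}B just to break even")
# Cap the price below that level and, on these numbers, nobody starts
# the ten projects in the first place.
```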
[ "In 2018, an analysis concluded that prices and administrative costs were largely the cause of the high costs, including prices for labor, pharmaceuticals, and diagnostics. The combination of high prices and high volume can cause particular expense; in the U.S., high-margin high-volume procedures include angioplasties, c-sections, knee replacements, and CT and MRI scans; CT and MRI scans also showed higher utilization in the United States.\n", "While price ceilings are often imposed by governments, there are also price ceilings which are implemented by non-governmental organizations such as companies, such as the practice of resale price maintenance. With resale price maintenance, a manufacturer and its distributors agree that the distributors will sell the manufacturer's product at certain prices (resale price maintenance), at or below a price ceiling (maximum resale price maintenance) or at or above a price floor. \n", "The primary criticism leveled against the price ceiling type of price controls is that by keeping prices artificially low, demand is increased to the point where supply can not keep up, leading to shortages in the price-controlled product. For example, Lactantius wrote that Diocletian \"by various taxes he had made all things exceedingly expensive, attempted by a law to limit their prices. Then much blood [of merchants] was shed for trifles, men were afraid to offer anything for sale, and the scarcity became more excessive and grievous than ever. Until, in the end, the [price limit] law, after having proved destructive to many people, was from mere necessity abolished.\"\n", "Organizations such as the American Medical Association (AMA) and AARP support a \"fair and accurate valuation for all physician services\". Very few resources exist, however, that allow consumers to compare physician prices (one exception is CostOfDoctors.com) The AMA sponsors the Specialty Society Relative Value Scale Update Committee, a private group of physicians which largely determine how to value physician labor in Medicare prices. Among politicians, former House Speaker Newt Gingrich has called for transparency in the prices of medical devices, noting it is one of the few aspects or U.S. health care where consumers and federal health officials are \"barred from comparing the quality, medical outcomes or price\".\n", "Unlike most markets for consumer services in the United States, the health care market generally lacks transparent market-based pricing. Patients are typically not able to comparison shop for medical services based on price, as medical service providers do not typically disclose prices prior to service. Government mandated critical care and government insurance programs like Medicare also impact market pricing of U.S. health care. According to the New York Times in 2011, \"the United States is far and away the world leader in medical spending, even though numerous studies have concluded that Americans do not get better care\" and prices are the highest in the world.\n", "With regard to hospitals, Prof. Nick Bosanquet of Imperial College London has argued that the government commissioned some PFI hospitals without a proper understanding of their costs, resulting in a number of hospitals which are too expensive to be used. He said:\n", "A price ceiling is a government- or group-imposed price control, or limit, on how high a price is charged for a product, commodity, or service. 
Governments use price ceilings to protect consumers from conditions that could make commodities prohibitively expensive. Such conditions can occur during periods of high inflation, in the event of an investment bubble, or in the event of monopoly ownership of a product, all of which can cause problems if imposed for a long period without controlled rationing, leading to shortages. Further problems can occur if a government sets unrealistic price ceilings, causing business failures, stock crashes, or even economic crises. In unregulated market economies, price ceilings do not exist. \n" ]
What is happening at the molecular level when water is being squeezed or wrung out of something?
Water soaks into fabric and is held in small spaces, where its cohesive and adhesive properties bind it to the fibers of the cloth. Tighten up the fibers (by wringing) and you shrink the spaces the water can occupy, squeezing it out of all its little pockets in the cloth.
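To put rough numbers on how strongly those small spaces hold water, here's a sketch using the Young-Laplace capillary pressure, assuming water at room temperature and an idealized fully wetting fiber (contact angle of about 0):

```python
import math

# Capillary pressure P = 2 * gamma * cos(theta) / r for a cylindrical pore.
gamma = 0.072   # surface tension of water, N/m (room temperature)
theta = 0.0     # contact angle, radians -- idealized fully wetting fiber

for r in (100e-6, 10e-6, 1e-6):   # pore radii: 100, 10, 1 micrometres
    p = 2 * gamma * math.cos(theta) / r
    print(f"pore radius {r * 1e6:5.1f} um -> ~{p / 1000:6.1f} kPa to force the water out")
```

Wringing only generates so much pressure, so it empties the bigger spaces first; the finest pores hold on to their water, which is part of why a wrung-out cloth is still damp.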
[ "In a liquid solution, any given liquid molecule experience strong cohesive forces from neighboring molecules. While these forces are balanced in the bulk, molecules at the surface of the solution are surrounded on one side by water molecules and on the other side by gas molecules. The resulting imbalance of cohesive forces along the surface results in a net \"pull\" toward the bulk, giving rise to the phenomena of surface tension.\n", "When liquids are constrained in vessels whose dimensions are small, compared to the relevant length scales, surface tension effects become important leading to the formation of a meniscus through capillary action. This capillary action has profound consequences for biological systems as it is part of one of the two driving mechanisms of the flow of water in plant xylem, the transpirational pull.\n", "When an object is placed on a liquid, its weight depresses the surface, and if surface tension and downward force becomes equal than is balanced by the surface tension forces on either side , which are each parallel to the water's surface at the points where it contacts the object. Notice that small movement in the body may cause the object to sink. As the angle of contact decreases surface tension decreases the horizontal components of the two arrows point in opposite directions, so they cancel each other, but the vertical components point in the same direction and therefore add up to balance . The object's surface must not be wettable for this to happen, and its weight must be low enough for the surface tension to support it.\n", "suggests that pressure and volume can also be changed to force a system into a supersaturated state. If the volume of solvent is decreased, the concentration of the solute can be above the saturation point and thus create a supersaturated solution. The decrease in volume is most commonly generated through evaporation. Similarly, an increase in pressure can drive a solution to a supersaturated state. All three of these mechanisms rely on the fact that the conditions of the solution can be changed quicker than the solute can precipitate or crystallize out.\n", "When a liquid presses against a surface, there is a net force that is perpendicular to the surface. Although pressure doesn't have a specific direction, force does. A submerged triangular block has water forced against each point from many directions, but components of the force that are not perpendicular to the surface cancel each other out, leaving only a net perpendicular point. This is why water spurting from a hole in a bucket initially exits the bucket in a direction at right angles to the surface of the bucket in which the hole is located. Then it curves downward due to gravity. If there are three holes in a bucket (top, bottom, and middle), then the force vectors perpendicular to the inner container surface will increase with increasing depth – that is, a greater pressure at the bottom makes it so that the bottom hole will shoot water out the farthest. The force exerted by a fluid on a smooth surface is always at right angles to the surface. The speed of liquid out of the hole is formula_23, where \"h\" is the depth below the free surface. This is the same speed the water (or anything else) would have if freely falling the same vertical distance \"h\".\n", "When water is in contact with solid particles (e.g., clay or sand particles within soil), adhesive intermolecular forces between the water and the solid can be large and important. 
The forces between the water molecules and the solid particles in combination with attraction among water molecules promote surface tension and the formation of menisci within the solid matrix. Force is then required to break these menisci. The magnitude of matrix potential depends on the distances between solid particles—the width of the menisci (also capillary action and differing Pa at ends of capillary)—and the chemical composition of the solid matrix (meniscus, macroscopic motion due to ionic attraction).\n", "Liquid water can be assumed to be incompressible for most purposes: its compressibility ranges from 4.4 to in ordinary conditions. Even in oceans at 4 km depth, where the pressure is 400 atm, water suffers only a 1.8% decrease in volume.\n" ]
why do people hide their license plate when selling their vehicle but not always their vin?
The VIN only shows who manufactured the car and what model it is. The license plate tells you what state the car is registered in and, if someone is thorough, can be traced back to the registered owner's location. You don't want randoms knowing where you live.
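For illustration, a modern 17-character VIN breaks into fixed sections (per ISO 3779), and none of them identify the owner. The sample VIN below is a common textbook example, not a real person's car:

```python
# Split a 17-character VIN into its standard sections.
def split_vin(vin: str) -> dict:
    assert len(vin) == 17, "modern VINs are 17 characters"
    return {
        "manufacturer": vin[0:3],   # world manufacturer identifier (WMI)
        "descriptor": vin[3:9],     # model/body/engine codes (incl. check digit)
        "model_year": vin[9],       # single-character year code
        "plant": vin[10],           # assembly plant code
        "serial": vin[11:17],       # sequential production number
    }

print(split_vin("1HGCM82633A004352"))
# -> manufacturer 1HG (Honda, USA-built), plus model/year/plant/serial.
# Nothing here says who owns the car or where it is parked.
```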
[ "This is the recommended procedure for selling a car. Alternatively the seller may hand out their car with valid licence plates and papers still in their name to the new owner thus giving them the responsibility to register the car in their name shortly. In a scenario without a proper sales contract the seller may become liable when the buyer commits criminal acts related to the car or plates and thus it is generally not recommended to sell used cars with licence plates.\n", "Because there is no national identity card in the United States, the driver's license is often used as the \"de facto\" equivalent for completion of many common business and governmental transactions. As a result, driver's licenses are the focus of many kinds of identity theft. Driver's licenses were not always identification cards. In many states, driver's licenses did not even have a photograph well into the 1980s. Activism by the Mothers Against Drunk Driving organization for the use of photo ID age verification in conjunction with increasing the drinking age to 21 in order to reduce underage drinking led to photographs being added to all state licenses. New York and Tennessee were the last states to add photos in 1986. However, New Jersey later allowed drivers to get non-photo licenses; this was later revoked. Vermont license holders have the option of receiving a non-photo license. All Tennessee drivers aged 60 years of age or older had the option of a non-photo driver's license prior to January 2013, when photo licenses were required for voting identification. All people with valid non-photo licenses will be allowed to get a photo license when their current license expires. Thirteen states allow the option of a non-photo driver's license for reasons of religious belief: Arkansas, Indiana, Kansas, Minnesota, Missouri, Nebraska, New Jersey, North Dakota, Oregon, Pennsylvania, Tennessee, Washington, and Wisconsin.\n", "Because of a European regulation that identification as a rental car should not be possible, the plates with \"V\" are no longer in use. Today, rental cars usually have common car plates with the canton codes VD or AI. Temporary duty unpaid vehicles use \"Z\" plates and year band while Temporary duty paid plate have year band on the right.\n", "Plates that are not up to date quickly attract the attention of law enforcement, because registration \"renewal\" is a transaction that can usually be undertaken only by the car's registered owner, once certain requirements have been met, and because registration fees are a source of government revenue. A delinquent registration sticker is often an indicator that the vehicle may be stolen, that the vehicle's owner has failed to comply with the applicable law regarding emission inspection or insurance, or that the vehicle's owner has unpaid traffic or parking tickets. Even with the stickers, most provinces previously required that all licence plates be replaced every few years; that practice is being abandoned by many provinces because of the expense of continually producing large numbers of plates.\n", "Dealer plates have black text on a green background. These plates are used on vehicles without registration, insurance and vehicles which have failed inspection. The dealers have reported their car not to be driven, meaning they don't have to pay road tax. Cars can be parked for months awaiting sale. The cars can be used for short test drives with one of these licence plates. 
Unlike normal Swedish license plates, the dealer plate is not tied to any vehicle but to the plate owner. These plates can also be used by car manufacturers to test vehicles. The plate has a sticker indicating if the plate is for cars, trucks or trailers. The plate shows that the owner has a special insurance that covers test drives.\n", "In the United States, driver's licenses are issued by the states, not by the federal government. Additionally, because the United States has no national identification card and because of the widespread use of cars, driver's licenses have been used as a \"de facto\" standard form of identification within the country. For non-drivers, states also issue voluntary identification cards which do not grant driving privileges. Prior to the Real ID Act, each state set its own rules and criteria regarding the issuance of a driver's license or identification card, including the look of the card, what data is on the card, what documents must be provided to obtain one, and what information is stored in each state's database of licensed drivers and identification card holders.\n", "Currently, Québec, Saskatchewan and Manitoba are the only provinces in which decals are not used. Instead, the police rely on the use of cameras and computers that automatically report any plates for which the registration is expired (making the use of fake stickers obsolete), the car has been reported as stolen and/or similar reasons. That said, the Registration Certificate is the only way for the owner to prove that a vehicle has valid registration.\n" ]
How often does a comet crash into the Sun?
Only one time per comet.
[ "In 1998, two comets were observed plunging toward the Sun in close succession. The first of these was on June 1 and the second the next day. A video of this, followed by a dramatic ejection of solar gas (unrelated to the impacts), can be found at the NASA website. Both of these comets evaporated before coming into contact with the surface of the Sun. According to a theory by NASA Jet Propulsion Laboratory scientist Zdeněk Sekanina, the latest impactor to actually make contact with the Sun was the \"supercomet\" Howard-Koomen-Michels on August 30, 1979. (See also sungrazer.)\n", "Some comets meet a more spectacular end – either falling into the Sun or smashing into a planet or other body. Collisions between comets and planets or moons were common in the early Solar System: some of the many craters on the Moon, for example, may have been caused by comets. A recent collision of a comet with a planet occurred in July 1994 when Comet Shoemaker–Levy 9 broke up into pieces and collided with Jupiter.\n", "The comet was hit by a coronal mass ejection during its pass near the Sun; some rumoured it had \"disturbed\" the Sun, but scientists dismissed this notion. The scientific consensus is that there is no link between comets and CMEs that can not be explained through simple coincidence, and there were 56 CMEs in February 2003. On February 18, 2003, comet C/2002 V1 (NEAT) passed 5.7 degrees from the Sun. C/2002 V1 (NEAT) appeared impressive as viewed by the Solar and Heliospheric Observatory (SOHO) as a result of the forward scattering of light off of the dust in the coma and tail. After the comet left LASCO's field of view, on February 20, 2003, an object was seen at the bottom of a single frame. Although technicians dismissed this as a software bug, rumours persisted that the object had been expelled from the Sun.\n", "It appeared as a ball of hot gas traveling at one hundred miles per second from the Naval Observatory. The comet passed within 7,000,000 miles of the Sun on August 26. A wanderer in the solar system, it is considered unlikely to return from\n", "Each time a comet swings by the Sun in its orbit, some of its ice vaporizes and a certain amount of meteoroids will be shed. The meteoroids spread out along the entire orbit of the comet to form a meteoroid stream, also known as a \"dust trail\" (as opposed to a comet's \"gas tail\" caused by the very small particles that are quickly blown away by solar radiation pressure).\n", "In 1987 and 1988 it was first observed by SMM that there could be pairs of sungrazing comets that can appear within very short time periods ranging from a half of a day up to about two weeks. Calculations were made to determine that the pairs were part of the same parent body but broke apart at tens of AU from the Sun. The breakup velocities were only on the order of a few meters per second which is comparable to the speed of rotation for these comets. This led to the conclusion that these comets break from tidal forces and that comets C/1882 R1, C/1965 S1, and C/1963 R1 probably broke off from the Great Comet of 1106.\n", "BULLET::::- 28 November – The comet C/2012 S1 (ISON) passed roughly above the Sun's surface. Although it was highly anticipated that the comet would be visible to the naked eye on Earth once it orbited the sun, it became increasingly evident that it had vaporized as it made its approach. Hours after it passed behind the sun, a part of the comet re-emerged, though significantly smaller. Over the next 24 hours, it too, faded.\n" ]
what is with the sometimes hours and hours of delay in getting sore/dead/tired legs after overdoing it and pushing yourself with leg exercise/walking/running?
I believe it's largely down to DOMS (Delayed Onset Muscle Soreness), in which your body begins to repair tiny microtears in the muscle fibres of your legs after long periods of exertion. Repairing these makes your legs stronger and able to endure more physical activity; it's how people build muscle in the gym. The delay comes from inflammation developing in the muscle tissue hours later as the body begins repairs. Not inflammation in the sense that your legs swell up and turn red and hot, but low-level inflammation in the muscle fibres that adds up to aching. Lactic acid often gets blamed too, but it disperses within an hour or so of exercise, so it can't account for soreness a day later. Explained this from my own understanding, so if I'm a bit off from what actually happens, I apologise.
[ "Intermittent claudication (Latin: \"claudicatio intermittens\"), is a symptom that describes muscle pain on mild exertion (ache, cramp, numbness or sense of fatigue), classically in the calf muscle, which occurs during exercise, such as walking, and is relieved by a short period of rest. It is classically associated with early-stage peripheral artery disease, and can progress to critical limb ischemia unless treated or risk factors are modified.\n", "Stretching the leg muscles can bring temporary relief. Walking and moving the legs, as the name \"restless legs\" implies, brings temporary relief. In fact, those with RLS often have an almost uncontrollable need to walk and therefore relieve the symptoms while they are moving. Unfortunately, the symptoms usually return immediately after the moving and walking ceases. A vibratory counter-stimulation device has been found to help some people with primary RLS to improve their sleep.\n", "Rest pain is a continuous burning pain of the lower leg or feet. It begins, or is aggravated, after reclining or elevating the limb and is relieved by sitting or standing. It is more severe than intermittent claudication, which is also a pain in the legs from arterial insufficiency.\n", "Delayed onset muscle soreness is pain or discomfort that may be felt one to three days after exercising and generally subsides two to three days later. Once thought to be caused by lactic acid build-up, a more recent theory is that it is caused by tiny tears in the muscle fibers caused by eccentric contraction, or unaccustomed training levels. Since lactic acid disperses fairly rapidly, it could not explain pain experienced days after exercise.\n", "Depending on the cause of the disease, such clinical conditions manifest different speed in progression of symptoms in a matter of hours to days. Most myelitis manifests fast progression in muscle weakness or paralysis starting with the legs and then arms with varying degrees of severity. Sometimes the dysfunction of arms or legs cause instability of posture and difficulty in walking or any movement. Also symptoms generally include paresthesia which is a sensation of tickling, tingling, burning, pricking, or numbness of a person's skin with no apparent long-term physical effect. Adult patients often report pain in the back, extremities, or abdomen. Patients also present increased urinary urgency, bowel or bladder dysfunctions such as bladder incontinence, difficulty or inability to void, and incomplete evacuation of bowel or constipation. Others also report fever, respiratory problems and intractable vomiting.\n", "Stretching of the tight structures (piriformis, hip abductor, and hip flexor muscle) may alleviate the symptoms. The involved muscle is stretched (for 30 seconds), repeated three times separated by 30 second to 1 minute rest periods, in sets performed two times daily for six to eight weeks. This should allow one to progress back into jogging until symptoms disappear.\n", "Landis rode the 2006 Tour with the constant pain from the injury, which he described: \"It's bad, it's grinding, it's bone rubbing on bone. Sometimes it's a sharp pain. When I pedal and walk, it comes and goes, but mostly it's an ache, like an arthritis pain. It aches down my leg into my knee. The morning is the best time, it doesn't hurt too much. But when I walk it hurts, when I ride it hurts. 
Most of the time it doesn't keep me awake, but there are nights that it does.\" During the Tour, Landis was medically approved to take cortisone for this injury, a medication otherwise prohibited in professional cycling for its known potential for abuse. Landis himself called his win \"a triumph of persistence\" despite the pain.\n" ]
why is a revealing outfit that doesn't quite bare all often so much more attractive than a completely nude body?
For the same reason that things like burlesque and stripper shows are popular. It's about anticipation and tantalisation. While the body is covered up, your imagination is running wild. Even the most flawless body is still just a body; your imagination is always more powerful. EDIT: fixed some spelling.
[ "Barechestedness is the state of a man wearing no clothes above the waist, exposing the upper torso. Bare male chests are generally considered acceptable at beaches, swimming pools and sunbathing areas. However, some stores and restaurants have a \"no shirt, no service\" rule to prevent barechested men from coming inside. While going barechested at outdoor activities may be acceptable, it is taboo at office workplaces, churches and other settings.\n", "In most societies, barechestedness is much more common than toplessness, as exposure of the male pectoral muscles is often considered to be far less taboo than of the female breasts, despite some considering them equally erogenous. Male barechestedness is often due to practical reasons such as heat, or the ability to move the body without being restricted by an upper body garment. In several sports it is encouraged or even obligatory to be barechested. Barechestedness may also be used as a display of power, or to draw attention to oneself, especially if the upper body muscles are well-developed.\n", "“Today we are so defined by the exterior, labels and what we wear. Clothes hide and mask who we really are. But if you take it away we are nothing else but ourselves. I am fascinated by bodies, regardless one being skinny, not skinny, fat, obese, wrinkled, aged or young. There is beauty in absolutely everything, even in a nude body, which is not perfect as none of us are. There is beauty in human vulnerability.\"\n", "Plain dress is attributed to reasons of theology and sociology. In general, plain dress involves the covering of much of the body (often including the head, forearms and calves), with minimal ornamentation, rejecting print fabrics, trims, fasteners, and jewelry. Non-essential elements of garments such as neckties, collars, and lapels may be minimized or omitted. Practical garments such as aprons and shawls may be layered over the basic ensemble. Plain dress garments are often handmade and may be produced by groups of women in the community for efficiency and to ensure uniformity of style. Plain dress practices can extend to the grooming of hair and beards and may vary somewhat to accommodate stages in the life cycle such as allowing children and older people more latitude.\n", "Backless dresses first appeared in the 1920s. In the 1930s, the style became associated with the sun tanning fashions of the time, and the backless dress was a way of showing off a tan, usually without tan lines. The wearer usually had to be slim to be able to pull off the effect. In December 1937, the actress Micheline Patton was controversially filmed from behind while wearing a backless dress in the final episode of the early BBC fashion documentary \"Clothes-Line\". The illusion of nudity led to outraged viewers writing in to complain, and Pearl Binder, who co-presented the show, quipped, \"Grandmamma looks back but Micheline has no back to be seen.\"\n", "Most discussion of modesty involves clothing. The criteria for acceptable modesty and decency have relaxed continuously in much of the world since the nineteenth century, with shorter, form-fitting, and more revealing clothing and swimsuits, more for women than men. Most people wear clothes that they consider not to be unacceptably immodest for their religion, culture, generation, occasion, and the people present. 
Some wear clothes which they consider immodest, due to exhibitionism, the desire to create an erotic impact, or for publicity.\n", "Because women in some countries are forced to cover their bodies and faces, modest dress is often perceived as a symbol of oppression in Western culture even when a woman freely chooses to dress that way. Josephs wrote that when she became an Orthodox Jew and began dressing modestly, she found that covering up made her feel empowered. Her article and short video prompted online discussions and were featured on websites such as \"Glossy\" and the Nachum Segal radio show.\n" ]
why does food dye in milk react in such a way when soap is added?
The soap breaks the surface tension of the milk. The food coloring sits on top of the milk, supported by that surface tension. When you drop soap in the middle, the surface tension there drops. But it takes time for that effect to reach the edge of the container. So the edge of the milk still has all its surface tension while the middle doesn't, and the stronger pull at the edges drags the food coloring outward (this surface-tension-gradient flow is known as the Marangoni effect).
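For a sense of scale, the surfactant roughly halves the surface tension wherever it lands (the figures below are typical ballpark values, not measurements):

```python
# Approximate surface tensions, N/m (typical published ballpark values).
gamma_milk = 0.047    # milk surface, before soap arrives
gamma_soapy = 0.025   # once surfactant has spread over a region

# The Marangoni flow is driven by this difference: the still-clean edge
# pulls harder on the surface than the soapy middle does.
imbalance = gamma_milk - gamma_soapy
print(f"net outward pull ~ {imbalance * 1000:.0f} mN per metre of colour front")
```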
[ "Methyl cellulose is very occasionally added to hair shampoos, tooth pastes and liquid soaps, to generate their characteristic thick consistency. This is also done for foods, for example ice cream or croquette. Methyl cellulose is also an important emulsifier, preventing the separation of two mixed liquids because it is an emulsion stabilizer.\n", "Two studies published in the late 20th century showed that UHT treatment causes proteins contained in the milk to unfold and flatten, and the formerly \"buried\" sulfhydryl (SH) groups, which are normally masked in the natural protein, cause extremely-cooked or burnt flavors to appear to the human palate. One study reduced the thiol content by immobilizing sulfhydryl oxidase in UHT-heated skim milk and reported, after enzymatic oxidation, an improved flavor. Two Pennsylvania authors prior to heating added the flavonoid compound epicatechin to the milk, and reported a partial reduction of thermally generated aromas.\n", "As is the case with milk, cream will curdle whenever it comes into contact with a weak acid. Milk and cream contain casein, which coagulates, when mixed with weak acids such as lemon, tonic water, or traces of wine. While this outcome is undesirable in most situations, some cocktails (such as the cement mixer, which consists of a shot of Bailey's mixed with the squeezed juice from a slice of lime) specifically encourage coagulation.\n", "Chemical preservatives can prevent oxidative spoilage, but the moisture-to-protein ratio prevents microbial spoilage by low water activity. Some jerky products are very high in sugar and therefore taste very sweet - unlike biltong, which rarely contains added sugars.\n", "For the preparation of phenol disinfectants, liquid soaps of different types are used which aid in cleaning and, mainly, the solubility of the active substance (phenols or cresols). It has been standard practice to use soaps which, upon dissolving the finished product in water, give a white, milk-like emulsion. This emulsion contains, dissolved in small particles, the active material, whether phenols or cresols.\n", "The salting-out process used in the manufacture of soaps benefits from the common-ion effect. Soaps are sodium salts of fatty acids. Addition of sodium chloride reduces the solubility of the soap salts. The soaps precipitate due to a combination of common-ion effect and increased ionic strength.\n", "The exopolysaccharides of some strains of lactic acid bacteria, e.g., Lactococcus lactis subsp. cremoris, contribute a gelatinous texture to fermented milk products (e.g., Viili), and these polysaccharides are also digestible. An example for industrial use of exopolysaccharides is the application of dextran in panettone and other breads in the bakery industry.\n" ]
how are government subsidies for food producers different from an indirect food tax?
Do you read The Week (the magazine)? They just had a good feature on US food subsidies. Anyway, there is a very simple difference between an indirect tax and a subsidy. A tax generates revenue for the government. A subsidy is *paid for by the government*, meaning they lose money on it. In terms of effects (changes to price and quantity), both drive a wedge between what consumers pay and what producers receive and distort the market, though in opposite directions; see the sketch below. For your other questions: - I don't know about you, but I heard a lot of debate when the new US Farm Bill was being proposed. The main problem is that it is *so* complex that many ordinary people do not understand most of it. However, there have been a lot of controversies in recent years surrounding it, so I think we can expect reform in the next decade or two. - They can't, really. The large producers expend a lot of resources on getting the best deal from the government, from lobbying to expanding or contracting their business. The government does not know the *exact* cost structures or production capabilities of the firms, so it has to approximate, which leads to inefficiencies. - ... Hope this helped!
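A toy linear supply-and-demand model makes the revenue-vs-outlay point concrete (the curves and the $10 wedge are invented for illustration):

```python
# Toy market: demand Qd = 100 - Pc, supply Qs = Pp, with a per-unit
# "wedge" between the consumer price Pc and producer price Pp.
# wedge > 0 is a tax, wedge < 0 is a subsidy. Numbers are made up.

def outcome(wedge):
    # Market clears where 100 - Pc = Pp and Pp = Pc - wedge:
    pc = (100 + wedge) / 2
    q = 100 - pc
    gov_cash = wedge * q   # + means revenue collected, - means money paid out
    return pc, q, gov_cash

for name, wedge in [("no policy", 0), ("$10 tax", 10), ("$10 subsidy", -10)]:
    pc, q, gov = outcome(wedge)
    print(f"{name:12s} consumer price {pc:5.1f}, quantity {q:5.1f}, govt cash flow {gov:+7.1f}")
```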
[ "Like indirect taxes, they can alter relative prices and budget constraints and thereby affect decisions concerning production, consumption and allocation of resources. Subsidies in areas such as education, health and environment at times merit justification on grounds that their benefits are spread well beyond the immediate recipients, and are shared by the population at large, present and future. For many other subsidies, however the case is not so clear-cut. Arising due to extensive governmental participation in a variety of economic activities, there are many subsidies that shelter inefficiencies or are of doubtful distributional credentials. Subsidies that are ineffective or distortionary need to be weaned out, for an undiscerning, uncontrolled and opaque growth of subsidies can be deleterious for a country's public finances.\n", "An indirect tax is collected by one entity in the supply chain (usually a producer or retailer) and paid to the government, but it is passed on to the consumer as part of the purchase price of a good or service. The consumer is ultimately paying the tax by paying more for the product.\n", "A subsidy, often viewed as the converse of a tax, is an instrument of fiscal policy. Derived from the Latin word 'subsidium', a subsidy literally implies coming to assistance from behind. However, their beneficial potential is at its best when they are transparent, well targeted, and suitably designed for practical implementation. Subsidies are helpful for both economy and people as well. Subsidies have a long-term impact on the economy; the Green Revolution being one example. Farmers were given good quality grain for subsidised prices. Likewise, we can see that how the government of India is trying to reduce air pollution to subsidies lpg\n", "An agricultural subsidy (also called an agricultural incentive) is a government incentive paid to agribusinesses, agricultural organizations and farms to supplement their income, manage the supply of agricultural commodities, and influence the cost and supply of such commodities. Examples of such commodities include: wheat, feed grains (grain used as fodder, such as maize or corn, sorghum, barley and oats), cotton, milk, rice, peanuts, sugar, tobacco, oilseeds such as soybeans and meat products such as beef, pork, and lamb and mutton.\n", "The subsidies are calculated based on the difference between the nationwide average production cost and the nationwide average retail price. The payment has several additional components including a reward for quality, distribution method (e.g. selling in a direct marketing shop), effort of manufacturing (e.g. promotion of rice flour), expansion of management level, environmental conservation measures such as creation diversity, production of cereals that substitute for rice (includes rice for ground rice and animal feed rice), etc.\n", "Agricultural subsidies are paid to farmers and agribusinesses to supplement their income, manage the supply of their commodities and influence the cost and supply of those commodities. In the United States, the main crops the government subsidizes contribute to the obesity problem; since 1995, $300 billion have gone to crops that are used to create junk food.\n", "A consumption subsidy is one that subsidises the behavior of consumers. 
This type of subsidies are most common in developing countries where governments subsidise such things as food, water, electricity and education on the basis that no matter how impoverished, all should be allowed those most basic requirements. For example, some governments offer 'lifeline' rates for electricity, that is, the first increment of electricity each month is subsidised. This paper addresses the problems of defining and measuring government subsidies, examines why and how government subsidies are used as a fiscal policy tool, discusses their economic effects, appraises international empirical evidence on government subsidies, and offers options for their reform. Evidence from recent studies suggests that government expenditures on subsidies remain high in many countries, often amounting to several percentage points of GDP. Subsidization on such a scale implies substantial opportunity costs. There are at least three compelling reasons for studying government subsidy behavior. First, subsidies are a major instrument of government expenditure policy. Second, on a domestic level, subsidies affect domestic resource allocation decisions, income distribution, and expenditure productivity.\n" ]
i know it's not quite scientific, but what are elementary particles (e.g. leptons, bosons) "made of"?
Thank you all for your answers! I suppose the question was easier to answer than I thought, though as I would hope, I'm still left wanting more answers to the universe's mysteries. I imagine my talk with a physics professor would go something like: Me: "Where did that come from?" Professor: *Explanation given* Me: "But where did THAT come from?" Professor: *Explanation given* Me: "But where did THAT come from?" Professor: "We don't quite know" Me: ...... "Amazing".
[ "In particle physics, an elementary particle or fundamental particle is a subatomic particle with no substructure, thus not composed of other particles. Particles currently thought to be elementary include the fundamental fermions (quarks, leptons, antiquarks, and antileptons), which generally are \"matter particles\" and \"antimatter particles\", as well as the fundamental bosons (gauge bosons and the Higgs boson), which generally are \"force particles\" that mediate interactions among fermions. A particle containing two or more elementary particles is a \"composite particle\".\n", "Elementary particles are particles with no measurable internal structure; that is, it is unknown whether they are composed of other particles. They are the fundamental objects of quantum field theory. Many families and sub-families of elementary particles exist. Elementary particles are classified according to their spin. Fermions have half-integer spin while bosons have integer spin. All the particles of the Standard Model have been experimentally observed, recently including the Higgs boson in 2012. Many other hypothetical elementary particles, such as the graviton, have been proposed, but not observed experimentally.\n", "All known elementary and composite particles are bosons or fermions, depending on their spin: Particles with half-integer spin are fermions; particles with integer spin are bosons. In the framework of nonrelativistic quantum mechanics, this is a purely empirical observation. In relativistic quantum field theory, the spin–statistics theorem shows that half-integer spin particles cannot be bosons and integer spin particles cannot be fermions.\n", "Particles can also be classified according to composition. \"Composite particles\" refer to particles that have composition – that is particles which are made of other particles. For example, a carbon-14 atom is made of six protons, eight neutrons, and six electrons. By contrast, \"elementary particles\" (also called \"fundamental particles\") refer to particles that are not made of other particles. According to our current understanding of the world, only a very small number of these exist, such as leptons, quarks, and gluons. However it is possible that some of these might turn out to be composite particles after all, and merely appear to be elementary for the moment. While composite particles can very often be considered \"point-like\", elementary particles are truly \"punctual\".\n", "All elementary particles are either bosons or fermions. These classes are distinguished by their quantum statistics: fermions obey Fermi–Dirac statistics and bosons obey Bose–Einstein statistics. Their spin is differentiated via the spin–statistics theorem: it is half-integer for fermions, and integer for bosons.\n", "Ordinary matter is composed of two types of elementary particles: quarks and leptons. For example, the proton is formed of two up quarks and one down quark; the neutron is formed of two down quarks and one up quark; and the electron is a kind of lepton. An atom consists of an atomic nucleus, made up of protons and neutrons, and electrons that orbit the nucleus. Because most of the mass of an atom is concentrated in its nucleus, which is made up of baryons, astronomers often use the term \"baryonic matter\" to describe ordinary matter, although a small fraction of this \"baryonic matter\" is electrons.\n", "In the history of particle physics, the situation was particularly confusing in the late 1960s. 
Before the discovery of quarks, hundreds of strongly interacting particles (hadrons) were known and believed to be distinct elementary particles in their own right. It was later discovered that they were not elementary particles, but rather composites of the quarks. The set of particles believed today to be elementary is known as the Standard Model and includes quarks, bosons and leptons.\n" ]
How does the electrolyte function in a dry cell battery?
I can't tell you about the specifics of dry batteries but I can try to address your two main questions. 1) The plates have positive or negative charges like a capacitor. However, unlike a capacitor, there are reactions at the electrodes that replenish the charges, and so a constant(-ish) voltage is maintained (a capacitor's voltage depletes over time). Sometimes the electrodes themselves are involved in these reactions. But I think your confusion follows from the fact that the electrons don't flow through the electrolyte. Charged ions carry the charge in the electrolyte. At least this is the case for liquid electrolytes. I can only assume that this is also the case with dry, solid-state batteries, and I think it's defects within the material that facilitate the movement of charge. 2) Imagine two half cells with two different concentrations of copper metal ions and each with a copper electrode. If you put a salt bridge between the two beakers and connect a circuit to them, electrons will flow from the beaker with the lower Cu concentration to the beaker with the higher Cu concentration. The electrons will flow until the concentrations become equal and the cells are at equilibrium. This is why batteries 'run out' of energy; for the cell reaction to proceed any further and for one of the concentrations to increase, energy must be supplied. Put another way, we've reached the bottom of an energy well, and trying to climb either side of this well requires energy. However, instead of a difference in concentration, batteries typically utilise a difference in reactivity between two metals (you get a lot more energy compared with just a difference in concentration). This follows the same basic idea; the reaction or cell wants to move towards equilibrium. But what happens to the electrons if there's a component in the circuit? They don't get used up. It's the energy they 'carry' that is used up, not the electrons. The current, or flow of electrons, is constant throughout the circuit.
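Point 2's copper concentration cell can be put into numbers with the Nernst equation; here's a rough sketch with made-up concentrations, assuming room temperature:

```python
import math

# Nernst equation for a Cu/Cu2+ concentration cell:
# E = (R * T / (n * F)) * ln(c_high / c_low)
R, T, F = 8.314, 298.15, 96485   # J/(mol K), K, C/mol
n = 2                            # electrons transferred per Cu2+ ion

c_high, c_low = 1.0, 0.01        # mol/L in the two beakers (illustrative)

E = (R * T) / (n * F) * math.log(c_high / c_low)
print(f"initial cell voltage ~ {E * 1000:.0f} mV")   # ~59 mV: small, as expected

# As charge flows, c_low rises and c_high falls; once c_high == c_low,
# ln(1) = 0 and the voltage is gone -- the "bottom of the energy well".
```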
[ "An A battery is any battery used to provide power to the filament of a vacuum tube. It is sometimes colloquially referred to as a \"wet battery\". (A dry cell could be used for the purpose, but the ampere-hour capacity of dry cells was too low at the time to be of practical use in this service). The term comes from the days of valve (tube) radios when it was common practice to use a dry battery for the plate (anode) voltage and a rechargeable lead/acid \"wet\" battery for the filament voltage. (The filaments in vacuum tubes consumed much more current than the anodes, and so the \"A\" battery would drain much more rapidly than the \"B\" battery; therefore, using a rechargeable \"A\" battery in this role reduced the need for battery replacement. In contrast, a non-rechargeable \"B\" battery would need to be replaced relatively infrequently.)\n", "A dry cell uses a paste electrolyte, with only enough moisture to allow current to flow. Unlike a wet cell, a dry cell can operate in any orientation without spilling, as it contains no free liquid, making it suitable for portable equipment. By comparison, the first wet cells were typically fragile glass containers with lead rods hanging from the open top and needed careful handling to avoid spillage. Lead–acid batteries did not achieve the safety and portability of the dry cell until the development of the gel battery. Wet cells have continued to be used for high-drain applications, such as starting internal combustion engines, because inhibiting the electrolyte flow tends to reduce the current capability.\n", "Galvanic cells and batteries are typically used as a source of electrical power. The energy derives from a high-cohesive-energy metal dissolving while to a lower-energy metal is deposited, and/or from high-energy metal ions plating out while lower-energy ions go into solution. \n", "A common dry cell is the zinc–carbon battery, sometimes called the dry Leclanché cell, with a nominal voltage of 1.5 volts, the same as the alkaline battery (since both use the same zinc–manganese dioxide combination). A standard dry cell comprises a zinc anode, usually in the form of a cylindrical pot, with a carbon cathode in the form of a central rod. The electrolyte is ammonium chloride in the form of a paste next to the zinc anode. The remaining space between the electrolyte and carbon cathode is taken up by a second paste consisting of ammonium chloride and manganese dioxide, the latter acting as a depolariser. In some designs, the ammonium chloride is replaced by zinc chloride.\n", "A \"wet cell\" battery has a liquid electrolyte. Other names are \"flooded cell\", since the liquid covers all internal parts, or \"vented cell\", since gases produced during operation can escape to the air. Wet cells were a precursor to dry cells and are commonly used as a learning tool for electrochemistry. They can be built with common laboratory supplies, such as beakers, for demonstrations of how electrochemical cells work. A particular type of wet cell known as a concentration cell is important in understanding corrosion. Wet cells may be primary cells (non-rechargeable) or secondary cells (rechargeable). Originally, all practical primary batteries such as the Daniell cell were built as open-top glass jar wet cells. Other primary wet cells are the Leclanche cell, Grove cell, Bunsen cell, Chromic acid cell, Clark cell, and Weston cell. The Leclanche cell chemistry was adapted to the first dry cells. 
Wet cells are still used in automobile batteries and in industry for standby power for switchgear, telecommunication or large uninterruptible power supplies, but in many places batteries with gel cells have been used instead. These applications commonly use lead–acid or nickel–cadmium cells.\n", "A \"dry cell\" uses a paste electrolyte, with only enough moisture to allow current to flow. Unlike a wet cell, a dry cell can operate in any orientation without spilling, as it contains no free liquid, making it suitable for portable equipment. By comparison, the first wet cells were typically fragile glass containers with lead rods hanging from the open top and needed careful handling to avoid spillage. Lead–acid batteries did not achieve the safety and portability of the dry cell until the development of the gel battery.\n", "Various types of flow cells (batteries) have been developed, including redox, hybrid and membraneless. The fundamental difference between conventional batteries and flow cells is that energy is stored not as the electrode material in conventional batteries but as the electrolyte in flow cells.\n" ]
what makes a good haircut?
Go to a licensed barbershop. Not a salon or discount place. Ask for a gentlemen’s cut. Haircut should run you $30-40. Ask for a scissor cut. Long on top, taper fade on the sides. This is pretty much the traditional WW2-style haircut everyone has. If you want a more extreme fade, you can ask them to use clippers down to a 1. I personally vary my fade length down to a 0, up to a scissor length of half an inch depending on the season. Buy some “American Crew” pomade off Amazon for $10. Style your hair with that and a **wide-toothed** comb. You can easily find one off Amazon for < $5.
[ "The cuticle is responsible for much of the mechanical strength of the hair fiber. A healthy cuticle is more than just a protective layer, as the cuticle also controls the water content of the fiber. Much of the shine that makes healthy hair so attractive is due to the cuticle. In the hair industry, the only way to obtain the very best hair (with cuticle intact and facing the same direction) is to use the services of \"hair collectors,\" who cut the hair directly from people's heads, and bundle it as ponytails. This hair is called virgin cuticle hair, or just cuticle hair.\n", "Historically, the undercut has been associated with poverty and inability to afford a barber competent enough to blend in the sides, as on a short back and sides haircut. From the turn of the 20th century until the 1920s, the undercut was popular among young working class men, especially members of street gangs. In interwar Glasgow, the Neds (precursors to the Teddy Boys) favored a haircut that was long on top and cropped at the back and sides. Despite the fire risk, lots of paraffin wax was used to keep the hair in place. Other gangs who favored this haircut were the Scuttlers of Manchester and the Peaky Blinders of Birmingham, because longer hair put the wearer at a disadvantage in a street fight.\n", "A regular haircut is a men's and boys' hairstyle that has hair long enough to comb on top, a defined or deconstructed side part, and a short, semi-short, medium, long, or extra long back and sides. The style is also known by other names including taper cut, regular taper cut, side-part and standard haircut; as well as short back and sides, business-man cut and professional cut, subject to varying national, regional, and local interpretations of the specific taper for the back and sides.\n", "Hair cutting or hair trimming is intended to create or maintain a specific shape and form. There are ways to trim one's own hair but usually another person is enlisted to perform the process, as it is difficult to maintain symmetry while cutting hair at the back of one's head.\n", "The haircut is usually done with electric clippers utilizing the clipper over comb technique, though it can also be cut shears over comb or freehand with a clipper. Some barbers utilize large combs designed for cutting flattops. Others use wide rotary clipper blades specifically designed for freehand cutting the top of a flattop. \n", "The undercut is a hairstyle that was fashionable from the 1910s to the 1940s, predominantly among men, and saw a steadily growing revival in the 1980s before becoming fully fashionable again in the 2010s. Typically, the hair on the top of the head is long and parted on either the side or center, while the back and sides are buzzed very short. It is closely related to the curtained hair of the mid-to-late 1990s, although those with undercuts during the 2010s tend to slick back the bangs away from the face.\n", "The hime cut is high-maintenance for those without naturally straight hair, and requires frequent touch-ups on the sidelocks and front bangs in order to maintain its shape. Hair straightening is sometimes used to help with these problems as well as straightening irons and specially formulated shampoos for straight hair. Humidity is also cited as a problem with certain hair types, as the curling caused by excess humidity can change the shape of the hair. Occasionally hair extensions and weaves are used for the side locks in order to prevent this.\n" ]
In the nervous system, how exactly does a stimulus cause the initial depolarization of the membrane that then opens the voltage-dependent sodium channels once the threshold value is reached?
At the synapse, the presynaptic terminal releases neurotransmitter that causes receptors on the postsynaptic cell to respond. Typically these are ligand-gated ion channels, G-protein coupled receptors, or receptor tyrosine kinases. If the input is excitatory (that is, it pushes the cell toward an action potential), the ligand-gated channels allow sodium and/or calcium to enter the cell, causing depolarization. If enough of those channels open to produce an above-threshold depolarization, the voltage-gated channels open. There are pretty complete explanations [here](_URL_0_). I tried to keep this explanation broad enough to be true of most types of action potential, but there are many specific types of nerves and synapses, so if I didn't answer your question or if I missed your point let me know.
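If it helps to see the summation-to-threshold idea in action, here is a minimal leaky integrate-and-fire sketch in Python. It is a toy model, not a real neuron: the membrane constants, the EPSP size, and the threshold below are illustrative assumptions. Each synaptic input nudges the membrane up from rest; a lone input just decays away, and only when inputs summate past threshold does the model register the point where the voltage-gated sodium channels would take over.

```python
# Toy leaky integrate-and-fire model: synaptic depolarizations sum, and only
# an above-threshold potential "fires". All numbers are illustrative.

V_REST = -70.0       # resting membrane potential (mV)
V_THRESHOLD = -55.0  # typical activation threshold (mV)
TAU = 10.0           # membrane time constant (ms); the leak pulls V back to rest
DT = 1.0             # time step (ms)
EPSP = 6.0           # depolarization per synaptic input (mV), assumed value

def simulate(input_times, t_max=50):
    v = V_REST
    for t in range(t_max):
        v += (V_REST - v) * (DT / TAU)   # leak current: decay back toward rest
        if t in input_times:             # ligand-gated channels let Na+/Ca2+ in
            v += EPSP
        if v >= V_THRESHOLD:             # threshold reached: Na_v channels take over
            print(f"t={t} ms: threshold crossed at {v:.1f} mV -> action potential")
            v = V_REST                   # crude reset after the spike

simulate({5})             # one lone input: subthreshold blip, no spike
simulate({5, 7, 9, 11})   # temporal summation: crosses threshold at t=11 ms
```

With these made-up numbers the single input never reaches -55 mV and simply leaks away, while the burst of four inputs summates past threshold; that is exactly the distinction the answer describes.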
[ "However, once a stimulus activates the voltage-gated sodium channels to open, positive sodium ions flood into the cell and the voltage increases. This process can also be initiated by ligand or neurotransmitter binding to a ligand-gated channel. More sodium is outside the cell relative to the inside, and the positive charge within the cell propels the outflow of potassium ions through delayed-rectifier voltage-gated potassium channels. Since the potassium channels within the cell membrane are delayed, any further entrance of sodium activates more and more voltage-gated sodium channels. Depolarization above threshold results in an increase in the conductance of Na sufficient for inward sodium movement to swamp outward potassium movement immediately. If the influx of sodium ions fails to reach threshold, then sodium conductance does not increase a sufficient amount to override the resting potassium conductance. In that case, subthreshold membrane potential oscillations are observed in some type of neurons. If successful, the sudden influx of positive charge depolarizes the membrane, and potassium is delayed in re-establishing, or hyperpolarizing, the cell. Sodium influx depolarizes the cell in attempt to establish its own equilibrium potential (about +52 mV) to make the inside of the cell more positive relative to the outside.\n", "Phase one is depolarization. During depolarization, voltage-gated sodium ion channels open, increasing the neuron's membrane conductance for sodium ions and depolarizing the cell's membrane potential (from typically -70 mV toward a positive potential). In other words, the membrane is made less negative. After the potential reaches the activation threshold (-55 mV), the depolarization is actively driven by the neuron and overshoots the equilibrium potential of an activated membrane (+30 mV).\n", "When a presynaptic neuron is excited, it releases a neurotransmitter from vesicles into the synaptic cleft. The neurotransmitter then binds to receptors located on the postsynaptic neuron. If these receptors are ligand-gated ion channels, a resulting conformational change opens the ion channels, which leads to a flow of ions across the cell membrane. This, in turn, results in either a depolarization, for an excitatory receptor response, or a hyperpolarization, for an inhibitory response.\n", "In neuronal cells, an action potential begins with a rush of sodium ions into the cell through sodium channels, resulting in depolarization, while recovery involves an outward rush of potassium through potassium channels. Both of these fluxes occur by passive diffusion.\n", "Before an action potential occurs, the axonal membrane is at its normal resting potential, and Na channels are in their deactivated state, blocked on the extracellular side by their activation gates. In response to an electric current (in this case, an action potential), the activation gates open, allowing positively charged Na ions to flow into the neuron through the channels, and causing the voltage across the neuronal membrane to increase. Because the voltage across the membrane is initially negative, as its voltage increases \"to\" and \"past\" zero, it is said to depolarize. This increase in voltage constitutes the rising phase of an action potential.\n", "As the membrane potential is increased, sodium ion channels open, allowing the entry of sodium ions into the cell. This is followed by the opening of potassium ion channels that permit the exit of potassium ions from the cell. 
The inward flow of sodium ions increases the concentration of positively charged cations in the cell and causes depolarization, where the potential of the cell is higher than the cell's resting potential. The sodium channels close at the peak of the action potential, while potassium continues to leave the cell. The efflux of potassium ions decreases the membrane potential or hyperpolarizes the cell. For small voltage increases from rest, the potassium current exceeds the sodium current and the voltage returns to its normal resting value, typically −70 mV. However, if the voltage increases past a critical threshold, typically 15 mV higher than the resting value, the sodium current dominates. This results in a runaway condition whereby the positive feedback from the sodium current activates even more sodium channels. Thus, the cell \"fires\", producing an action potential. The frequency at which a neuron elicits action potentials is often referred to as a firing rate or neural firing rate.\n", "The depolarized voltage opens additional voltage-dependent potassium channels, and some of these do not close right away when the membrane returns to its normal resting voltage. In addition, further potassium channels open in response to the influx of calcium ions during the action potential. The intracellular concentration of potassium ions is transiently unusually low, making the membrane voltage \"V\" even closer to the potassium equilibrium voltage \"E\". The membrane potential goes below the resting membrane potential. Hence, there is an undershoot or hyperpolarization, termed an afterhyperpolarization, that persists until the membrane potassium permeability returns to its usual value, restoring the membrane potential to the resting state.\n" ]
where the phrase 'second nature' comes from
It's a corruption of the Latin phrase *secundum naturam*, which means 'according to one's nature'. Basically, whatever you're referring to meshes well with your natural abilities or tendencies, as opposed to something that was *contra naturam* (against one's nature), or *super naturam* (above nature, or Godlike).
[ "Nature has two inter-related meanings in philosophy. On the one hand, it means the set of all things which are natural, or subject to the normal working of the laws of nature. On the other hand, it means the essential properties and causes of individual things.\n", "The word \"nature\" derives from Latin \"nātūra\", a philosophical term derived from the verb for birth, which was used as a translation for the earlier (pre-Socratic) Greek term \"phusis\", derived from the verb for natural growth.\n", "\"Second Nature\" is a song by American musician-singer-songwriter Dan Hartman, released as the fourth and final single from his 1984 album \"I Can Dream About You\". The single was released in early 1985.\n", "\"Second nature\" refers a group of experiences that get made over by culture. They then get remade into something else that can then take on a new meaning. As a society we transform this process so it becomes something natural to us, i.e. second nature. So, by following a particular pattern created by culture we are able to recognise how we use and move information in different ways. From sharing information via different time zones (such as talking online) to information ending up in a different location (sending a letter overseas) this has all become a habitual process that we as a society take for granted.\n", "The word \"nature\" is derived from the Latin word \"natura\", or \"essential qualities, innate disposition\", and in ancient times, literally meant \"birth\". \"Natura\" is a Latin translation of the Greek word \"physis\" (φύσις), which originally related to the intrinsic characteristics that plants, animals, and other features of the world develop of their own accord. The concept of nature as a whole, the physical universe, is one of several expansions of the original notion; it began with certain core applications of the word φύσις by pre-Socratic philosophers, and has steadily gained currency ever since. This usage continued during the advent of modern scientific method in the last several centuries.\n", "In some contexts, the use of the terms of \"nature\" and \"natural\" can be vague, leading to unintended associations with other concepts. The word \"natural\" can also be a loaded term – much like the word \"normal\", in some contexts, it can carry an implicit value judgement. An appeal to nature would thus beg the question, because the conclusion is entailed by the premise.\n", "For Combe, \"A \"law...\"denotes a rule of action; its existence indicates an established and constant mode, or process, according to which phenomena take place.\" Natural Laws refer to \"the rules of action impressed on objects and beings by their natural constitution\" Combe presents the relationship between God, Nature, and the Natural Laws: \"If, then, the reader keep in view that God is the creator; that Nature, in the general sense, means the world which He has made; and, in a more limited sense, the particular constitution which he has bestowed on any special object...and that a Law of Nature means the established mode in which that constitution acts, and the obligation thereby imposed on intelligent beings to attend to it, he will be in no danger of misunderstanding my meaning\" Combe identifies three categories for the Natural Laws: Physical, Organic, and Intelligent. 
The Physical Laws \"embrace all the phenomena of mere matter,\" the Organic Laws [indicate] that \"every phenomenon connected with the production, health, growth, decay, and death of vegetables and animals, takes place with undeviating regularity.\" Combe defines Intelligent beings as \"all animals that have a distinct consciousness,\" and the Intelligent Laws concern the makeup of the mental capacities of Intelligent beings. He then identifies four principles concerning the Natural Laws: 1) the Laws are independent 2) obeying the Laws brings rewards and disobedience brings punishment 3) the Laws are fixed and universal, and 4) the laws are harmonious with the constitution of man.\n" ]
what's wrong with dumping radioactive waste at the bottom of the ocean?
Kaiju. Gojira. Have you no cinema history? /s
[ "Ocean floor disposal of radioactive waste has been suggested by the finding that deep waters in the North Atlantic Ocean do not present an exchange with shallow waters for about 140 years based on oxygen content data recorded over a period of 25 years. They include burial beneath a stable abyssal plain, burial in a subduction zone that would slowly carry the waste downward into the Earth's mantle, and burial beneath a remote natural or human-made island. While these approaches all have merit and would facilitate an international solution to the problem of disposal of radioactive waste, they would require an amendment of the Law of the Sea.\n", "\"Ocean floor disposal\" (or sub-seabed disposal)—a more deliberate method of delivering radioactive waste to the ocean floor and depositing it into the seabed—was studied by the UK and Sweden, but never implemented.\n", "Beyond technical and political considerations, the London Convention places prohibitions on disposing of radioactive materials at sea and does not make a distinction between waste dumped directly into the water and waste that is buried underneath the ocean's floor. It remained in force until 2018, after which the sub-seabed disposal option can be revisited at 25-year intervals.\n", "In 1972, the London Dumping Convention restricted ocean disposal of radioactive waste and in 1993, ocean disposal of radioactive waste was completely banned. The US Navy began a study on scrapping nuclear submarines; two years later shallow land burial of reactor compartments was selected as the most suitable option.\n", "From 1946 through 1993, thirteen countries (fourteen, if the USSR and Russia are considered separately) used ocean disposal or ocean dumping as a method to dispose of nuclear/radioactive waste. The waste materials included both liquids and solids housed in various containers, as well as reactor vessels, with and without spent or damaged nuclear fuel. Since 1993, ocean disposal has been banned by international treaties. (London Convention (1972), Basel Convention, MARPOL 73/78)\n", "There is concern about radioactive contamination from nuclear waste the former Soviet Union dumped in the sea and the effect this will have on the marine environment. According to an official \"White Paper\" report compiled and released by the Russian government in March 1993, the Soviet Union dumped six nuclear submarine reactors and ten nuclear reactors into the Kara Sea between 1965–1988. Solid high and low-level wastes unloaded from Northern Fleet nuclear submarines during reactor refuelings, were dumped in the Kara Sea, mainly in the shallow fjords of Novaya Zemlya, where the depths of the dumping sites range from 12 to 135 meters, and in the Novaya Zemlya Trough at depths of up to 380 meters. Liquid low-level wastes were released in the open Barents and Kara Seas. A subsequent appraisal by the International Atomic Energy Agency showed that releases are low and localized from the 16 naval reactors (reported by the IAEA as having come from seven submarines and the icebreaker \"Lenin\") which were dumped at five sites in the Kara Sea. Most of the dumped reactors had suffered an accident.\n", "The question of the degree of harm which is caused by \"bad guys\" dropping plutonium into the sea is not a simple question; the radioactive power pack containing plutonium-238 which was intended for use in space for the Apollo 13 moon mission was wrapped in a heat-resistant package which is likely to prevent leaking of plutonium for a very long time. 
However, plutonium released in the form of the nitrate or fine powder is likely to absorb onto mineral particles such as silt. Depending on the exact conditions this absorption onto silt could either tend to fix the plutonium in soil or the silt at the bottom of a lake (or sea), or it could enable the plutonium to migrate from one location to another with greater ease.\n" ]
Were Henrietta Lack's cells special?
This is a great question! By today's standards, there is nothing inherently special about HeLa cells. Not only do we have countless "immortal" cell lines from other people, we have very well established protocols for immortalizing cell lines ourselves. However, at the time Henrietta Lacks' cells were isolated, this was definitely not the case. These were the first human cells which were found to be able to divide indefinitely. Prior to this, cells would last only a few divisions before either dying or changing dramatically. The use of HeLa cells allowed for: 1.) More convenient cell culturing; and 2.) More importantly, it allowed the scientific field to "normalize" their *in vitro* research in a profound way. All that said, HeLa cells were special because they were the *first* of their kind isolated, not because they were inherently special. In theory, cells of equivalent value could have been isolated from any cancer patient.
[ "Henrietta Lacks was an African American woman whose cells were removed without consent while receiving cancer treatment. Her cells became the source of the foundational HeLa cell line in the scientific world today. Lacks and her family were neither informed nor asked for consent to the use of her cells for this research. It was not until the 1980s when Lacks's medical records were made public, exposing the rest of her family's medical information as well as the fact that her family was never informed of this. The major issue surrounding the Lacks case is twofold. Firstly, at no point was consent sought for the extraction and research on Lacks's cells. Secondly, her family never received compensation for the commercial use of the HeLa cell line.\n", "Gey isolated the cells taken from a cervical tumor found in a woman named Henrietta Lacks in 1951. These cells proved to be very unusual in that they could grow in culture medium that was constantly stirred using the roller drum (a technique developed by Gey), and they did not need a glass surface to grow, and therefore they had no space limit. Once Gey realized the longevity and hardiness of the HeLa cells, he began sharing them with scientists all over the world, and the use of the HeLa cell line became widespread. The cells were used in the development of the polio vaccine, lead to the first clone of a human cell, helped in the discovery that humans have 46 chromosomes, and were used to develop in vitro fertilization. By the time Gey published a short abstract claiming some credit for the development of the line, the cells were already being used by scientists all over the world.\n", "In the early 1970s, a large portion of other cell cultures became contaminated by HeLa cells. As a result, members of Henrietta Lacks's family received solicitations for blood samples from researchers hoping to learn about the family's genetics in order to differentiate between HeLa cells and other cell lines.\n", "Henrietta Lacks (born Loretta Pleasant; August 1, 1920 – October 4, 1951) was an African-American woman whose cancer cells are the source of the HeLa cell line, the first immortalized human cell line and one of the most important cell lines in medical research. An immortalized cell line reproduces indefinitely under specific conditions, and the HeLa cell line continues to be a source of invaluable medical data to the present day.\n", "The HeLa cell line's connection to Henrietta Lacks was first brought to popular attention in March 1976 with a pair of articles in the \"Detroit Free Press\" and \"Rolling Stone\" written by reporter Michael Rogers. In 1998, Adam Curtis directed a BBC documentary about Henrietta Lacks called \"The Way of All Flesh\".\n", "Multinucleated giant cell formations can arise from numerous types of bacteria, diseases, and cell formations. Giant cells are known to develop when infections are also present. They were first noticed as early as the middle of the last century, but still it is not fully understood why these reactions occur. In the process of giant cell formation, monocytes or macrophages fuse together, which could cause multiple problems for the immune system.\n", "There was also the controversy surrounding how the cells were retrieved, as made famous by the book, The Immortal Life of Henrietta Lacks. The cells were taken from Henrietta Lacks without her knowledge or permission, and her family remained unaware until the 1970s. 
He was careful to keep her actual name secret, and it was not made public until after his death.\n" ]
why does the skin on our hands & feet have so many lines (i.e. fingerprints)?
They increase the grip and durability of that surface; the rest of your skin is pretty slick. The process your body uses to create that type of skin also blocks hair growth and disables melanin production, though, so it's only done on the palms and the soles of your feet.
[ "In the palms, fingers, soles, and toes, the influence of the papillae projecting into the epidermis forms contours in the skin's surface. These epidermal ridges occur in patterns (\"see:\" fingerprint) that are genetically and epigenetically determined and are therefore unique to the individual, making it possible to use fingerprints or footprints as a means of identification.\n", "Blood vessels in the dermal papillae nourish all hair follicles and bring nutrients and oxygen to the lower layers of epidermal cells. The pattern of ridges they produce in hands and feet are partly genetically determined features that develop before birth. They remain substantially unaltered (except in size) throughout life, and therefore determine the patterns of fingerprints, making them useful in certain functions of personal identification.\n", "Fingerprint identification, known as dactyloscopy, or hand print identification, is the process of comparing two instances of friction ridge skin impressions (see Minutiae), from human fingers or toes, or even the palm of the hand or sole of the foot, to determine whether these impressions could have come from the same individual. The flexibility of friction ridge skin means that no two finger or palm prints are ever exactly alike in every detail; even two impressions recorded immediately after each other from the same hand may be slightly different. Fingerprint identification, also referred to as individualization, involves an expert, or an expert computer system operating under threshold scoring rules, determining whether two friction ridge impressions are likely to have originated from the same finger or palm (or toe or sole).\n", "Since the late nineteenth century, fingerprint identification methods have been used by police agencies around the world to identify suspected criminals as well as the victims of crime. The basis of the traditional fingerprinting technique is simple. The skin on the palmar surface of the hands and feet forms ridges, so-called papillary ridges, in patterns that are unique to each individual and which do not change over time. Even identical twins (who share their DNA) do not have identical fingerprints. The best way to render latent fingerprints visible, so that they can be photographed, can be complex and may depend, for example, on the type of surfaces on which they have been left. It is generally necessary to use a ‘developer’, usually a powder or chemical reagent, to produce a high degree of visual contrast between the ridge patterns and the surface on which a fingerprint has been deposited.\n", "Skin and fingernails are made of a similar type of keratinized protein as hair. That means that drips, slips and extra hair tint around the hairline can result in patches of discolored skin. This is more common with darker hair colors and persons with dry absorbent skin. That is why it is recommended that latex or nitrile gloves be worn to protect the hands.\n", "In medicine, an intertriginous area is where two skin areas may touch or rub together. Examples of intertriginous areas are the axilla of the arm, the anogenital region, skin folds of the breasts and between digits. Intertriginous areas are known to harbor large amounts of aerobic cocci and aerobic coryneform bacteria, which are both parts of normal skin flora.\n", "Everyone has marks on their fingers. They can not be removed or changed. These marks have a pattern and this pattern is called the fingerprint. Every fingerprint is special, and different from any other in the world. 
Because there are countless combinations, fingerprints have become an ideal means of identification.\n" ]
If neutrinos turn out to be faster than light, are they useful in any way? Communication?
The recent measurement has no practical implications for communication. Neutrinos make inefficient signals because they are almost undetectable. One application I've seen proposed is for messaging submarines, because radio can't penetrate deep seawater and sound is slow.
[ "Neutrino speeds \"consistent\" with the speed of light are expected given the limited accuracy of experiments to date. Neutrinos have small but nonzero mass, and so special relativity predicts that they must propagate at speeds slower than light. Nonetheless, known neutrino production processes impart energies far higher than the neutrino mass scale, and so almost all neutrinos are ultrarelativistic, propagating at speeds very close to that of light.\n", "In a analysis of their data, scientists of the OPERA collaboration reported evidence that neutrinos they produced at CERN in Geneva and recorded at the OPERA detector at Gran Sasso, Italy, had traveled faster than light. The neutrinos were calculated to have arrived approximately 60.7 nanoseconds (60.7 billionths of a second) sooner than light would have if traversing the same distance in a vacuum. After six months of cross checking, on , the researchers announced that neutrinos had been observed traveling at faster-than-light speed. Similar results were obtained using higher-energy (28 GeV) neutrinos, which were observed to check if neutrinos' velocity depended on their energy. The particles were measured arriving at the detector faster than light by approximately one part per 40,000, with a 0.2-in-a-million chance of the result being a false positive, \"assuming\" the error were entirely due to random effects (significance of six sigma). This measure included estimates for both errors in measuring and errors from the statistical procedure used. It was, however, a measure of precision, not accuracy, which could be influenced by elements such as incorrect computations or wrong readouts of instruments. For particle physics experiments involving collision data, the standard for a discovery announcement is a five-sigma error limit, looser than the observed six-sigma limit.\n", "In September 2011, OPERA researchers observed muon neutrinos apparently traveling faster than the speed of light. In February and March 2012, OPERA researchers blamed this result on a loose fibre optic cable connecting a GPS receiver to an electronic card in a computer. On 16 March 2012, a report announced that an independent experiment in the same laboratory, also using the CNGS neutrino beam, but this time the ICARUS detector, found no discernible difference between the speed of a neutrino and the speed of light. In May 2012, the Gran Sasso experiments BOREXINO, ICARUS, LVD and OPERA all measured neutrino velocity with a short-pulsed beam, and obtained agreement with the speed of light, showing that the original OPERA result was mistaken. Finally in July 2012, the OPERA collaboration updated their results. After the instrumental effects mentioned above were taken into account, it was shown that the speed of neutrinos is consistent with the speed of light. This was confirmed by a new, improved set of measurements in May 2013.\n", "In 2011, the OPERA experiment mistakenly observed neutrinos appearing to travel faster than light. Even before the mistake was discovered, the result was considered anomalous because speeds higher than that of light in a vacuum are generally thought to violate special relativity, a cornerstone of the modern understanding of physics for over a century.\n", "BULLET::::- \"Faster Than the Speed of Light?\" (BBC 2, 2011). Marcus du Sautoy discusses the recent discovery, the faster-than-light neutrino anomaly, that neutrinos may travel faster than light. 
First broadcast on 19 October 2011.\n", "BULLET::::- Faster-than-light neutrino anomaly (2011–2012): In 2011, the OPERA experiment mistakenly observed neutrinos appearing to travel faster than light. On July 12, 2012 OPERA updated their paper by including the new sources of errors in their calculations. They found agreement of neutrino speed with the speed of light.\n", "BULLET::::- An international team of scientists at CERN records neutrino particles apparently traveling faster than the speed of light. If confirmed, the discovery would overturn Albert Einstein's 1905 special theory of relativity, which says that nothing can travel faster than light. (BBC) (ArXiv)\n" ]
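As a quick sanity check on the "one part per 40,000" figure quoted above: the excerpts never state the CERN-to-Gran Sasso baseline, so the ~730 km used below is an outside assumption (the commonly reported distance), not something recovered from the text.

```python
# Back-of-envelope check of the OPERA "one part per 40,000" figure.

C = 299_792_458.0        # speed of light (m/s)
BASELINE_M = 730_000.0   # approximate CERN -> Gran Sasso distance (assumed)
EARLY_S = 60.7e-9        # reported early arrival (60.7 ns)

light_time = BASELINE_M / C         # flight time at the speed of light
fraction = EARLY_S / light_time     # fractional speed excess
print(f"light travel time: {light_time * 1e3:.3f} ms")
print(f"fractional excess: {fraction:.2e} (~1 part in {1 / fraction:,.0f})")
```

Running this gives a fractional excess of about 2.5e-5, i.e. roughly one part in 40,000, consistent with the quoted figure.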
During the time of the moon landing, did people get upset about the government's space race and the mission to the moon?
Absolutely. In fact, at no point prior to the first Moon landing did the program receive a majority of support from the public at large. Indeed, there were several notable, very vocal public opponents of the program because they felt it drew funds away from, or was a distraction from, more important work (such as poverty reduction and anti-segregation). And in some regards they had a solid case to make. In the 1960s the per capita GDP of the US was considerably lower than today. There was a level of common poverty then that today exists mostly in the developing world. Remember that in the 1960s not all of the country even had indoor plumbing, electricity, or phones. And, of course, this was also the peak of the struggle against Jim Crow. Many people, correctly, saw the moon race as a geopolitical struggle and lamented the waste of resources for what was effectively war-making on another front, even more so while the Vietnam War was raging. It was only after the fact that the Apollo program became more closely associated with peace, science, and the inchoate environmental movement, and, of course, that the spending became a sunk cost that couldn't be undone or diverted elsewhere. Sources & further reading: * [Historical Studies in the Societal Impact of Spaceflight pgs. 12-17 particularly (25-30 in the pdf)](_URL_3_) * [Public opinion polls and perceptions of US human spaceflight](_URL_0_) * [Moondoggle: The Forgotten Opposition to the Apollo Program](_URL_2_) * Gil Scott-Heron (of "the revolution will not be televised" fame): [Whitey on the Moon](_URL_1_) * [The Apollo Disappointment Industry](_URL_4_)
[ "BULLET::::- Marcus Allen – British publisher of \"Nexus\", who said photographs of the lander would not prove that the United States put men on the Moon, and \"Getting to the Moon really isn't much of a problem – the Russians did that in 1959. The big problem is getting people there.\" He suggests that NASA sent robot missions because radiation levels in outer space would be deadly. A variant of this idea has it that NASA and its contractors did not recover quickly enough from the Apollo 1 fire, and so all the early Apollo missions were faked, with Apollos 14 or 15 being the first real mission.\n", "The Apollo 11 mission was the first human spaceflight mission to land on the Moon. The mission's wide effect on popular culture was anticipated and since then there have been a number of portrayals in media.\n", "After the Apollo 11 mission, officials from the Soviet Union said landing humans on the Moon was dangerous and unnecessary. At the time the Soviet Union was attempting to retrieve lunar samples robotically. The Soviets publicly denied there was a race to the Moon, and indicated they were not making an attempt. Mstislav Keldysh said in July 1969, \"We are concentrating wholly on the creation of large satellite systems\". It was revealed in 1989 that the Soviets had tried to send people to the Moon, but were unable due to technological difficulties. The public's reaction in the Soviet Union was mixed. The Soviet government limited the release of information about the lunar landing, which affected the reaction. A portion of the populace did not give it any attention, and another portion was angered by it.\n", "Mary Bennett and David Percy have claimed in \"Dark Moon: Apollo and the Whistle-Blowers\", that, with all the known and unknown hazards, NASA would not risk broadcasting an astronaut getting sick or dying on live television. The counter-argument generally given is that NASA in fact \"did\" incur a great deal of public humiliation and potential political opposition to the program by losing an entire crew in the Apollo 1 fire during a ground test, leading to its upper management team being questioned by Senate and House of Representatives space oversight committees. There was in fact no video broadcast during either the landing or takeoff because of technological limitations.\n", "In a 1994 poll by \"The Washington Post\", 9% of the respondents said that it was possible that astronauts did not go to the Moon and another 5% were unsure. A 1999 Gallup Poll found that 6% of the Americans surveyed doubted that the Moon landings happened and that 5% of those surveyed had no opinion, which roughly matches the findings of a similar 1995 \"Time/CNN\" poll. Officials of the Fox network said that such skepticism rose to about 20% after the February 2001 airing of their network's television special, \"Conspiracy Theory: Did We Land on the Moon?\", seen by about 15 million viewers. This Fox special is seen as having promoted the hoax claims.\n", "Sibrel's claims that the moon landing was a hoax making claims about supposed photographic anomalies; disasters such as the destruction of Apollo 1; technical difficulties experienced in the 1950s and 1960s; and the problems of traversing the Van Allen radiation belts. 
Sibrel proposes that the most condemning evidence is a piece of footage that he claims was secret, and inadvertently sent to him by NASA; he alleges that the footage shows Apollo 11 astronauts attempting to create the illusion that they were from Earth (or roughly halfway to the Moon) when, he claims, they were only in a low Earth orbit.\n", "Motivation for the United States to engage the Soviet Union in a Space Race can be traced to the then on-going Cold War. Landing on the Moon was viewed as a national and technological accomplishment that would generate world-wide acclaim. But going to the Moon would be risky and expensive, as exemplified by President John F. Kennedy famously stating in a 1962 speech that the United States chose to go \"because\" it was hard.\n" ]
Who discovered energy = force x distance, and how?
The work done by a force was simply *defined* to be the line integral of the force field along the particle's path. There's nothing to discover. But this turns out to be *useful* because of the work-energy theorem and conservation of energy, which can be proven using Newton's laws and experimentally verified.
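For anyone who wants the step the answer gestures at, the work-energy theorem really is one line of calculus from Newton's second law (a standard textbook derivation, nothing specific to this thread):

```latex
W = \int_{\gamma} \mathbf{F}\cdot d\mathbf{r}
  = \int_{t_1}^{t_2} m\,\frac{d\mathbf{v}}{dt}\cdot\mathbf{v}\,dt
  = \int_{t_1}^{t_2} \frac{d}{dt}\!\left(\tfrac{1}{2}\,m\,\mathbf{v}\cdot\mathbf{v}\right)dt
  = \tfrac{1}{2}\,m v_2^{2} - \tfrac{1}{2}\,m v_1^{2}.
```

So the *defined* quantity (the path integral of force) always equals the change in kinetic energy, which is what makes the definition experimentally useful.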
[ "However, over large variations in distance, the approximation that \"g\" is constant is no longer valid, and we have to use calculus and the general mathematical definition of work to determine gravitational potential energy. For the computation of the potential energy, we can integrate the gravitational force, whose magnitude is given by Newton's law of gravitation, with respect to the distance \"r\" between the two bodies. Using that definition, the gravitational potential energy of a system of masses \"m\" and \"M\" at a distance \"r\" using gravitational constant \"G\" is\n", "As can be seen above, the gravitational attractive force of two bodies of 1 Planck mass each, set apart by 1 Planck length is 1 Planck force. Likewise, the distance traveled by light during 1 Planck time is 1 Planck length. To determine, in terms of SI or another existing system of units, the quantitative values of the five base Planck units, those two equations and three others must be satisfied:\n", "Émilie du Châtelet (1706 – 1749) proposed and tested the hypothesis of the conservation of total energy, as distinct from momentum. Inspired by the theories of Gottfried Leibniz, she repeated and publicized an experiment originally devised by Willem 's Gravesande in 1722 in which balls were dropped from different heights into a sheet of soft clay. Each ball's kinetic energy - as indicated by the quantity of material displaced - was shown to be proportional to the square of the velocity. The deformation of the clay was found to be directly proportional to the height the balls were dropped from, equal to the initial potential energy. Earlier workers, including Newton and Voltaire, had all believed that \"energy\" (so far as they understood the concept at all) was not distinct from momentum and therefore proportional to velocity. According to this understanding, the deformation of the clay should have been proportional to the square root of the height from which the balls were dropped from. In classical physics the correct formula is formula_3, where formula_4 is the kinetic energy of an object, formula_5 its mass and formula_6 its speed. On this basis, Châtelet proposed that energy must always have the same dimensions in any form, which is necessary to be able to relate it in different forms (kinetic, potential, heat…).\n", "This was connected with the theoretical prediction of the electromagnetic mass by J. J. Thomson in 1881, who showed that the electromagnetic energy contributes to the mass of a moving charged body. Thomson (1893) and George Frederick Charles Searle (1897) also calculated that this mass depends on velocity, and that it becomes infinitely great when the body moves at the speed of light with respect to the luminiferous aether. Also Hendrik Antoon Lorentz (1899, 1900) assumed such a velocity dependence as a consequence of his theory of electrons. At this time, the electromagnetic mass was separated into \"transverse\" and \"longitudinal\" mass, and was sometimes denoted as \"apparent mass\", while the invariant Newtonian mass was denoted as \"real mass\". 
On the other hand, it was the belief of the German theoretician Max Abraham that all mass would ultimately prove to be of electromagnetic origin, and that Newtonian mechanics would become subsumed into the laws of electrodynamics.\n", "Energy is defined as the ability to do work on an object; for example, the work required to lift a one-pound weight, one foot against the pull of gravity defines a foot-pound of energy (One joule is equal to the energy needed to move a body over a distance of one meter using one newton of force). If we were to modify the graph to reflect force (the pressure exerted on the base of the bullet multiplied by the area of the base of the bullet) as a function of distance, the area under that curve would be the total energy imparted to the bullet. Increasing the energy of the bullet requires increasing the area under that curve, either by raising the average pressure, or increasing the distance the bullet travels under pressure. Pressure is limited by the strength of the firearm, and duration is limited by barrel length.\n", "The Geroch energy or Geroch mass is one of the possible definitions of mass in general relativity. It can be derived from the Hawking energy, itself a measure of the bending of ingoing and outgoing rays of light that are orthogonal to a 2-sphere surrounding the region of space whose mass is to be defined, by leaving out certain (positive) terms related to the sphere's external and internal curvature.\n", "The equations above represent conservation of mass, momentum, and energy. There are thus three equations and four unknowns, formula_55 (density) formula_56 (fluid velocity), formula_57 (pressure) and formula_58 (total energy). The total energy is given by,\n" ]
Dinosaurs and the Square/Cube Law: How'd it all work?
The square/cube law applies to objects (or animals) that scale isometrically. In other words, the object exactly retains its shape and relative dimensions, it just scales in size. Think of a scale-model matchbox car relative to a real car. You can imagine an isometrically scaled chicken as being a chicken that is longer, taller, and wider by a factor *n*, with *n^2* times the surface area, and *n^3* times the mass of a regular chicken. In reality, species do not tend to scale isometrically, due to the problems it would create. For example, a chicken that is 10 times as tall as a regular chicken would weigh 10^3 = 1000 times as much, but the cross-sectional area of its leg bones would only be 10^2 = 100 times greater. This means the static pressure the bones would have to bear would be 1000/100 = 10 times greater than for a regular chicken. For this reason, many aspects of physiology are found to scale [allometrically](_URL_0_). For example, larger animals tend to have much stockier legs, dinosaurs being no exception to this. This also applies to metabolism. Larger animals tend to burn less energy per unit mass per unit time. Specifically, metabolic rate scales as approximately mass^{3/4}, so metabolic rate per unit mass scales as approximately mass^{3/4} /mass = mass^{-1/4}. This relationship is known as [Kleiber's Law](_URL_1_). While we cannot study metabolic rates of extinct species, the ubiquity of this law in living species suggests that dinosaurs too would have followed it. In addition, a lot of early estimates of dinosaur masses are now thought to have been [too high](_URL_2_).
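To make the exponents concrete, here is a toy calculation; the base "chicken" numbers below are invented placeholders, and only the scaling exponents carry the argument:

```python
# Isometric-scaling arithmetic: lengths ~ n, areas ~ n^2, masses ~ n^3.
# Base-case numbers are made up; the exponents are the point.

def scale_up(n, mass_kg=2.0):
    mass = mass_kg * n**3               # volume (hence mass) scales as n^3
    stress_ratio = n**3 / n**2          # load per bone cross-section grows like n
    # Kleiber's law: metabolic rate ~ mass^(3/4), so rate per kg ~ mass^(-1/4)
    met_per_kg = (mass / mass_kg) ** -0.25
    print(f"n={n:>3}: mass={mass:>7.0f} kg, bone stress x{stress_ratio:.0f}, "
          f"metabolic rate per kg x{met_per_kg:.2f}")

for n in (1, 2, 10):
    scale_up(n)
```

Bone stress grows linearly with the scale factor while per-kilogram metabolic rate falls off, which is why large animals end up with allometrically stockier legs and slower metabolisms rather than being scaled-up copies of small ones.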
[ "The Cube can be found in many publications related to design and some technology museums. In addition, the computer has been featured in other forms of media. The G4 Cube was used as a prop on shows such as \"Absolutely Fabulous\", \"The Drew Carey Show\", \"Curb Your Enthusiasm\", \"Dark Angel\" , \"The Gilmore Girls\" and \"24\". The computer was parodied in \"The Simpsons\" episode \"Mypods and Boomsticks.\" The Cube is also seen in films such as \"Jay and Silent Bob Strike Back\", \"40 Days and 40 Nights\", \"About a Boy\", \"August\" and \"The Royal Tenenbaums\". In William Gibson's 2003 novel \"Pattern Recognition\", the character Cayce uses her film producer friend's Cube while staying in his London flat. In the movie \"Big Fat Liar,\" a G4 Cube and a Studio Display can be seen in the background of Wolf's kitchen.\n", "BULLET::::- Cubes are the aliens' answer to the Zeroids. They can combine into large constructs such as guns and force field cubicles. Their different sides are marked differently, indicating their different functions, such as one serving as a gun. Cy-Star keeps one, Pluto, as a pet.\n", "Smeaton's Cube (Named after John Smeaton who was an English civil engineer who was responsible for the design of canals, bridges, lighthouses and harbours) is a competition for Civil Engineering undergrads where they design a cube that would undergo successive compressive tests following which a presentation on concrete is to be given. It is organized by the ICE, UK : Student Chapter, NIT Rourkela.\n", "The cube, a geometric form often used by scientists to represent the concepts of space and time, inspired Saraceno to create an installation in which the visitors' movements enact the time variable, thereby introducing the concept of the fourth dimension within the three-dimensional space. The title of the work can be traced to quantum mechanics on the origins of the universe, distinguished by the idea of extremely fast-moving subatomic particles that can trigger changes in spatio-temporal matter. Freely inspired by these theories, Saraceno makes their movements metaphorically visible. The installation is a device that calls perceptual certainties into question; it is an element that modifies the architecture containing it, a structure that makes the interrelationships among people and visible space, an attempt to overcome the laws of gravity.\n", "The Dino Cube is a cubic twisty puzzle in the style of the Rubik's Cube. It was invented in 1985 by Robert Webb, however it was not mass-produced until ten years later. It has a total of 12 external movable pieces to rearrange, compared to 20 movable pieces on the Rubik's Cube.\n", "The idea of the cube is due to the mathematician Henk Barendregt (1991). The framework of pure type systems generalizes the lambda cube in the sense that all corners of the cube, as well as many other systems can be represented as instances of this general framework. This framework predates the lambda cube by a couple of years. In his 1991 paper, Barendregt also defines the corners of the cube in this framework.\n", "The Cube is an hour-long teleplay that aired on NBC's weekly anthology television show \"NBC Experiment in Television\" in 1969. The production was produced and directed by puppeteer and filmmaker Jim Henson, and was one of several experiments with the live-action film medium which he conducted in the 1960s, before focusing entirely on \"The Muppets\" and other puppet works. The screenplay was co-written by long-time Muppet writer Jerry Juhl.\n" ]
Why, when looking at a clear container holding water from the side, does the surface of the water look like a mirror? Is it the container or the water?
You mean the underside of the water? It's because of [total internal reflection](_URL_0_). That's the water.
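For the number behind that answer: total internal reflection sets in past the critical angle given by Snell's law, theta_c = arcsin(n2 / n1). A quick check with standard refractive indices (textbook constants, not values from the thread):

```python
import math

# Critical angle for total internal reflection at a water -> air surface.
# Beyond this angle (measured from the normal, i.e. the vertical for a flat
# surface), light inside the water cannot escape, and the underside of the
# surface acts like a mirror.
N_WATER = 1.333  # refractive index of water (standard value)
N_AIR = 1.000    # refractive index of air

theta_c = math.degrees(math.asin(N_AIR / N_WATER))
print(f"critical angle for water -> air: {theta_c:.1f} degrees")  # ~48.6
```

Viewed from the side of the container you are looking up at the underside of the surface at well past ~49 degrees from the vertical, so essentially all of that light is reflected back down, regardless of what the container is made of.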
[ "\"And the same object appears straight when looked at out of the water, and crooked when in the water; and the concave becomes convex, owing to the illusion about colours to which the sight is liable. Thus every sort of confusion is revealed within us; and this is that weakness of the human mind on which the art of conjuring and deceiving by light and shadow and other ingenous devices imposes, having an effect upon us like magic.\"\n", "The inverted real image of an object reflected by a concave mirror can appear at the focal point in front of the mirror. In a construction with an object at the bottom of two opposing concave mirrors (parabolic reflectors) on top of each other, the top one with an opening in its center, the reflected image can appear at the opening as a very convincing 3D optical illusion.\n", "In fish such as the herring which live in shallower water, the mirrors must reflect a mixture of wavelengths, and the fish accordingly has crystal stacks with a range of different spacings. A further complication for fish with bodies that are rounded in cross-section is that the mirrors would be ineffective if laid flat on the skin, as they would fail to reflect horizontally. The overall mirror effect is achieved with many small reflectors, all oriented vertically. Silvering is found in other marine animals as well as fish. The cephalopods, including squid, octopus and cuttlefish, have multi-layer mirrors made of protein rather than guanine.\n", "In fish such as the herring which live in shallower water, the mirrors must reflect a mixture of wavelengths, and the fish accordingly has crystal stacks with a range of different spacings. A further complication for fish with bodies that are rounded in cross-section is that the mirrors would be ineffective if laid flat on the skin, as they would fail to reflect horizontally. The overall mirror effect is achieved with many small reflectors, all oriented vertically. Silvering is found in other marine animals as well as fish. The cephalopods, including squid, octopus and cuttlefish, have multi-layer mirrors made of protein rather than guanine.\n", "In the shallower epipelagic waters, the mirrors must reflect a mixture of wavelengths, and the fish accordingly has crystal stacks with a range of different spacings. A further complication for fish with bodies that are rounded in cross-section is that the mirrors would be ineffective if laid flat on the skin, as they would fail to reflect horizontally. The overall mirror effect is achieved with many small reflectors, all oriented vertically.\n", "A mirror image (in a plane mirror) is a reflected duplication of an object that appears almost identical, but is reversed in the direction perpendicular to the mirror surface. As an optical effect it results from reflection off of substances such as a mirror or water. It is also a concept in geometry and can be used as a conceptualization process for 3-D structures.\n", "Total internal reflection (TIR) is the phenomenon that makes the water-to-air surface in a fish-tank look like a perfectly silvered mirror when viewed from below the water level (Fig.1). Technically, TIR is the total reflection of a wave incident at a sufficiently oblique angle on the interface between two media, of which the second (\"external\") medium is transparent to such waves but has a higher wave velocity than the first (\"internal\") medium. 
TIR occurs not only with electromagnetic waves such as light waves and microwaves, but also with other types of waves, including sound and water waves. In the case of a narrow train of waves, such as a laser beam, we tend to speak of the total internal reflection of a \"ray\" (Fig.2).\n" ]
Is there a point between the earth and the moon where their gravitational forces cancel out?
Yes, and it has a name: that's the Earth-Moon L1 point. You can float there, but you are in unstable equilibrium; if you are nudged even slightly to one side you will drift towards either the Moon or the Earth, never to return. The SOHO satellite is at the Earth-Sun L1 to monitor the sun and [continually take pictures of it](_URL_0_) without ever having an object get in the way of the sun. It starts drifting away every once in a while but moves itself back. There are not one but five different points around any two orbiting bodies where gravity and centrifugal force cancel and you can just hang there with little or no effort. There are [lots of things](_URL_1_) at Lagrange points. L4 and L5 are stable; Jupiter has a whole collection of asteroids that have become caught in those points.
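If you are curious where that Earth-Moon balance point actually sits, here is a rough numerical sketch. The constants are standard values, the orbit is assumed circular (eccentricity ignored), and the bisection simply hunts for the radius where Earth's pull, the Moon's pull, and the centrifugal term cancel in the rotating frame:

```python
# Rough location of the Earth-Moon L1 point (circular orbit assumed).

G = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2)
M_EARTH = 5.972e24   # kg
M_MOON = 7.342e22    # kg
D = 3.844e8          # mean Earth-Moon distance (m)

omega_sq = G * (M_EARTH + M_MOON) / D**3   # orbital angular rate, squared
x_bary = D * M_MOON / (M_EARTH + M_MOON)   # barycenter offset from Earth's center

def net_accel(r):
    """Net acceleration toward Earth at distance r from Earth's center."""
    return (G * M_EARTH / r**2             # Earth pulls back
            - G * M_MOON / (D - r)**2      # Moon pulls ahead
            - omega_sq * (r - x_bary))     # centrifugal term in the rotating frame

# Bisection: net_accel is positive near Earth, negative near the Moon.
lo, hi = 0.5 * D, 0.99 * D
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if net_accel(mid) > 0 else (lo, mid)

print(f"L1 sits ~{lo / 1e3:,.0f} km from Earth "
      f"({(D - lo) / 1e3:,.0f} km from the Moon)")
```

This lands around 326,000 km out, roughly 85% of the way to the Moon, which matches the commonly quoted figure for Earth-Moon L1.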
[ "BULLET::::2. It is sometimes suggested that the gravity field of the Earth might preferentially allow eruptions to occur on the near side, but not on the far side. However, in a reference frame rotating with the Moon, the centrifugal acceleration the Moon is experiencing is exactly equal and opposite to the gravitational acceleration of the Earth. There is thus no net force directed towards the Earth. The Earth tides do act to deform the shape of the Moon, but this shape is that of an elongated ellipsoid with high points at both the sub- and anti-Earth points. As an analogy, one should remember that there are two high tides per day on Earth, and not one.\n", "Nordtvedt then observed that if gravity did in fact violate the strong equivalence principle, then the more-massive Earth should fall towards the Sun at a slightly different rate than the Moon, resulting in a polarization of the lunar orbit. To test for the existence (or absence) of the Nordtvedt effect, scientists have used the Lunar Laser Ranging experiment, which is capable of measuring the distance between the Earth and the Moon with near-millimetre accuracy. Thus far, the results have failed to find any evidence of the Nordtvedt effect, demonstrating that if it exists, the effect is exceedingly weak. Subsequent measurements and analysis to even higher precision have improved constraints on the effect.. Measurements of Mercury's orbit by the MESSENGER Spacecraft have further refined the Nordvedt effect to be below of even smaller scale.\n", "Gravitational Tides are caused by changes in the relative location of the Earth, sun, and moon, whose orbits are perturbed slightly by Jupiter. Newton's law of universal gravitation states that the gravitational force between a mass at a reference point on the surface of the Earth and another object such as the Moon is inversely proportional to the square of the distance between them. The declination of the Moon relative to the Earth means that as the Moon orbits the Earth during half the lunar cycle the Moon is closer to the Northern Hemisphere and during the other half the Moon is closer to the Southern Hemisphere. This periodic shift in distance gives rise to the lunar fortnightly tidal constituent. The ellipticity of the lunar orbit gives rise to a lunar monthly tidal constituent. Because of the nonlinear dependence of the force on distance additional tidal constituents exist with frequencies which are the sum and differences of these fundamental frequencies. Additional fundamental frequencies are introduced by the motion of the Sun and Jupiter, thus tidal constituents exist at all of these frequencies as well as all of the sums and differences of these frequencies, etc. The mathematical description of the tidal forces is greatly simplified by expressing the forces in terms of gravitational potentials. Because of the fact that the Earth is approximately a sphere and the orbits are approximately circular it also turns out to be very convenient to describe these gravitational potentials in spherical coordinates using spherical harmonic expansions.\n", "Because the gravitational field created by the Moon weakens with distance from the Moon, it exerts a slightly stronger than average force on the side of the Earth facing the Moon, and a slightly weaker force on the opposite side. The Moon thus tends to \"stretch\" the Earth slightly along the line connecting the two bodies. 
The solid Earth deforms a bit, but ocean water, being fluid, is free to move much more in response to the tidal force, particularly horizontally. As the Earth rotates, the magnitude and direction of the tidal force at any particular point on the Earth's surface change constantly; although the ocean never reaches equilibrium—there is never time for the fluid to \"catch up\" to the state it would eventually reach if the tidal force were constant—the changing tidal force nonetheless causes rhythmic changes in sea surface height.\n", "The gravitational attraction between Earth and the Moon causes tides on Earth. The same effect on the Moon has led to its tidal locking: its rotation period is the same as the time it takes to orbit Earth. As a result, it always presents the same face to the planet. As the Moon orbits Earth, different parts of its face are illuminated by the Sun, leading to the lunar phases; the dark part of the face is separated from the light part by the solar terminator.\n", "The gravitational attraction that masses have for one another decreases inversely with the square of the distance of those masses from each other. As a result, the slightly greater attraction that the Moon has for the side of Earth closest to the Moon, as compared to the part of the Earth opposite the Moon, results in tidal forces. Tidal forces affect both the Earth's crust and oceans.\n", "Neglecting axial tilt, the tidal force a satellite (such as the moon) exerts on a planet (such as earth) can be described by the variation of its gravitational force over the distance from it, when this force is considered as applied to a unit mass formula_1:\n" ]
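The last excerpt above is cut off right before its formula (the "formula_1" placeholder). The standard leading-order result it is building toward, reconstructed here from textbook physics rather than recovered from the source, is:

```latex
a_{\text{tidal}} \;=\; \frac{GM}{(d - r)^{2}} - \frac{GM}{d^{2}}
\;\approx\; \frac{2GM}{d^{3}}\,r \qquad (r \ll d)
```

where M is the mass of the tide-raising body, d the distance to it, and r the offset from the planet's center; the inverse-cube dependence on d is why the Moon, though far less massive than the Sun, dominates Earth's tides.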
How is the suffix of an element determined?
Well "gen" is ~~latin~~ greek for maker, or generator, the word "Hydrogen" literally means "Water Maker", whereas Oxygen (somewhat misnamed) means "Sharp (Acid) Maker" Not too sure what "Ium" means, but ^ is where the gen comes from! Hope that helps :)
[ "Once an element has been named, a one-, or two-letter symbol must be ascribed to it so it can be easily referred to in such contexts as the periodic table. The first letter is always capitalised. While the symbol is often a contraction of the element's name, it may sometimes not match the element's name when the symbol is based on non-English words; examples include \"Pb\" for lead (from \"plumbum\" in Latin) or \"W\" for tungsten (from \"Wolfram\" in German). Elements which have only temporary systematic names are given temporary three-letter symbols (e.g. Uue for ununennium, the undiscovered element 119).\n", "There are also symbols in chemical equations for groups of chemical elements, for example in comparative formulas. These are often a single capital letter, and the letters are reserved and not used for names of specific elements. For example, an \"X\" indicates a variable group (usually a halogen) in a class of compounds, while \"R\" is a radical, meaning a compound structure such as a hydrocarbon chain. The letter \"Q\" is reserved for \"heat\" in a chemical reaction. \"Y\" is also often used as a general chemical symbol, although it is also the symbol of yttrium. \"Z\" is also frequently used as a general variable group. \"E\" is used in organic chemistry to denote an electron-withdrawing group or an electrophile; similarly \"Nu\" denotes a nucleophile. \"L\" is used to represent a general ligand in inorganic and organometallic chemistry. \"M\" is also often used in place of a general metal.\n", "The suffix \"‑ium\" overrides traditional chemical-suffix rules; thus, elements 117 and 118 were \"ununseptium\" and \"ununoctium\", not *\"ununseptine\" and *\"ununocton\". This does not apply to the trivial names these elements receive once confirmed; thus, elements 117 and 118 are now \"tennessine\" and \"oganesson\", respectively. For these trivial names, all elements receive the suffix \"‑ium\" except those in group 17, which receive \"‑ine\" (like the halogens), and those in group 18, which receive \"‑on\" (like the noble gases).\n", "The known elements have atomic numbers from 1 through 118, conventionally presented as Arabic numerals. Since the elements can be uniquely sequenced by atomic number, conventionally from lowest to highest (as in a periodic table), sets of elements are sometimes specified by such notation as \"through\", \"beyond\", or \"from ... through\", as in \"through iron\", \"beyond uranium\", or \"from lanthanum through lutetium\". The terms \"light\" and \"heavy\" are sometimes also used informally to indicate relative atomic numbers (not densities), as in \"lighter than carbon\" or \"heavier than lead\", although technically the weight or mass of atoms of an element (their atomic weights or atomic masses) do not always increase monotonically with their atomic numbers.\n", "Each element has a specific set of chemical properties as a consequence of the number of electrons present in the neutral atom, which is \"Z\" (the atomic number). The configuration of these electrons follows from the principles of quantum mechanics. The number of electrons in each element's electron shells, particularly the outermost valence shell, is the primary factor in determining its chemical bonding behavior. 
Hence, it is the atomic number alone that determines the chemical properties of an element; and it is for this reason that an element can be defined as consisting of \"any\" mixture of atoms with a given atomic number.\n", "The \"-ium\" suffix followed the precedent set in other newly discovered elements of the time: potassium, sodium, magnesium, calcium, and strontium (all of which Davy isolated himself). Nevertheless, element names ending in \"-um\" were known at the time; for example, platinum (known to Europeans since the 16th century), molybdenum (discovered in 1778), and tantalum (discovered in 1802). The \"-um\" suffix is consistent with the universal spelling alumina for the oxide (as opposed to aluminia); compare to lanthana, the oxide of lanthanum, and magnesia, ceria, and thoria, the oxides of magnesium, cerium, and thorium, respectively.\n", "Although most attributes are provided as paired names and values, some affect the element simply by their presence in the start tag of the element (like the codice_5 attribute for the codice_6 element).\n" ]
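The temporary systematic names mentioned above ("Uue for ununennium") follow a simple digit-to-root rule, so they can be generated mechanically. Here is a minimal Python sketch of the IUPAC scheme (the elision rules, dropping the final "i" of "bi"/"tri" before "-ium" and one "n" where "enn" meets "nil", are part of the published recommendation):

```python
# Temporary IUPAC systematic names for not-yet-named elements,
# e.g. element 119 -> "ununennium", symbol "Uue".
ROOTS = ["nil", "un", "bi", "tri", "quad",
         "pent", "hex", "sept", "oct", "enn"]

def systematic_name(z: int) -> tuple[str, str]:
    roots = [ROOTS[int(d)] for d in str(z)]
    stem = "".join(roots).replace("nnn", "nn")  # "enn" + "nil" -> "ennil"
    # The final "i" of "bi"/"tri" is elided before the "-ium" ending.
    name = stem + ("um" if stem.endswith("i") else "ium")
    symbol = "".join(r[0] for r in roots).capitalize()
    return name, symbol

for z in (118, 119, 120):
    print(z, *systematic_name(z))
# 118 ununoctium Uuo   (now oganesson)
# 119 ununennium Uue
# 120 unbinilium Ubn
```

Note how the "-ium" override described above applies even to group 17 and 18 elements; only the eventual trivial names ("tennessine", "oganesson") take "-ine" and "-on".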
the contradiction of why you have to wait x amount of hours to report someone missing when they also say the first 24-48 hours are most important?
The 24-hour thing is a myth, an invention of police procedural dramas. At a real police station they will ask why you suspect that a person has gone missing and will respond accordingly. For example, if a woman comes to the police station saying that her teenage daughter never came home from school and was supposed to be home two hours ago, the police might say that she's just running late. However, if in the same scenario the woman explains that she's worried because she's seen suspicious behavior in the area, then the police may open an investigation immediately.
[ "A common misconception is that a person must be absent for at least 24 hours before being legally classed as missing, but this is rarely the case. Law enforcement agencies often stress that the case should be reported as early as possible.\n", "BULLET::::- The probability that a user may be delayed longer than time \"t\" while waiting for a connection. Time \"t\" is chosen by the telecommunications service provider so that they can measure whether their services conform to a set Grade of Service.\n", "As a result of time loops, those who reside in them may not be able to return to the present day, depending on how long they've been there. In a mere matter of hours outside of the loop, the amount of time evaded will catch up. An example of this is Miss Peregrine's own former ward, a young girl named Charlotte who left the loop while Miss Peregrine was away. She was discovered by police in the mid-1980s and sent to a welfare agency. When Miss Peregrine found her just two days later, she'd already aged thirty-five years. Although she survived the ordeal, the unnatural aging process had caused Charlotte a great deal of mental disorder, and she was sent to live with Miss Nightjar, an ymbryne more suited for her care. The same process of deterioration applies to anything taken out of time loops as another instance was an apple Jacob took back to the inn where he and his father were staying in the present day. He left it on the nightstand next to his bed as he fell asleep that night, but by morning, found it had rotted to the point of disintegrating.\n", "BULLET::::- It is rarely necessary to wait 24 hours before filing a missing person report. In instances where there is evidence of violence or of an unusual absence, law enforcement agencies in the United States often stress the importance of beginning an investigation promptly. The UK government website says in large type, \"You don't have to wait 24 hours before contacting the police.\"\n", "BULLET::::- If the worker was in the United States for 18 months or less, then H-2 time is interrupted if the worker is outside the United States for at least 45 days but less than 3 months. This means that time spent outside the United States will not count toward the 3-year limit, but rather, upon return, the worker's clock will resume from where it left off at the time of departure.\n", "The U.S. Census Bureau found that there were 124 million people who work outside of their homes. Using their data on the time occupied by travel to work, the table below shows the absolute number of people who responded with travel times \"at least 30 but less than 35 minutes\" is higher than the numbers for the categories above and below it. This is likely due to people rounding their reported journey time. The problem of reporting values as somewhat arbitrarily rounded numbers is a common phenomenon when collecting data from people.\n", "If this is allowed in an event, then care must be taken not to break the 3/4 rule: this rule states that any time lost may be made back provided that no section between 2 consecutive Time Controls is done in less than 3/4 of the time allowed for that section unless the section is less than 4 miles in length in which case as much time as required may be made back. This sounds counter-intuitive, but in practice it is very difficult to make back much time on a 4 mile/8 minute section and this rule allows organisers to build in sections of, for example, 2 miles/30 minutes specifically to allow competitors to reduce lateness.\n" ]
Can planets in the habitable zone have moons that also support life?
I think that magnetic shielding would be a big issue. Even planets like Mars have had their atmospheres stripped away by the solar wind, and Mars is much larger than our Moon.
[ "Gliese 876 c lies at the inner edge of the system's habitable zone. While the prospects for life on gas giants are unknown, it might be possible for a large moon of the planet to provide a habitable environment. Unfortunately tidal interactions between a hypothetical moon, the planet, and the star could destroy moons massive enough to be habitable over the lifetime of the system. In addition it is unclear whether such moons could form in the first place.\n", "Planetary-mass natural satellites have the potential to be habitable as well. However, these bodies need to fulfill additional parameters, in particular being located within the circumplanetary habitable zones of their host planets. More specifically, moons need to be far enough from their host giant planets that they are not transformed by tidal heating into volcanic worlds like Io, but must still remain within the Hill radius of the planet so that they are not pulled out of orbit of their host planet. Red dwarfs that have masses less than 20% of that of the Sun cannot have habitable moons around giant planets, as the small size of the circumstellar habitable zone would put a habitable moon so close to the star that it would be stripped from its host planet. In such a system, a moon close enough to its host planet to maintain its orbit would have tidal heating so intense as to eliminate any prospects of habitability.\n", "Kepler-1647b is in the habitable zone of the star system. Since the planet is a gas giant, it is unlikely to host life. However, hypothetical large moons could potentially be suitable for life. However, large moons are usually not created during accretion near a gas giant. Such moons would likely have to be captured separately, e.g., a passing protoplanet caught into orbit due to the gravitational field of the giant planet.\n", "The notion of a giant planet with a habitable moon went against theories of planetary formation as they stood before the discovery of \"hot Jupiter\" planets. It was thought that planets large enough to have an Earth-sized moon would only form above the \"snowline\", too far from the star for life. It is now believed that such worlds can migrate inwards, and habitable moons seem likely. The existence of exomoons has not been confirmed, though there are candidates.\n", "The strongest candidates for natural satellite habitability are currently icy satellites such as those of Jupiter and Saturn—Europa and Enceladus respectively, although if life exists in either place, it would probably be confined to subsurface habitats. Historically, life on Earth was thought to be strictly a surface phenomenon, but recent studies have shown that up to half of Earth's biomass could live below the surface. Europa and Enceladus exist outside the circumstellar habitable zone which has historically defined the limits of life within the Solar System as the zone in which water can exist as liquid at the surface. In the Solar System's habitable zone, there are only three natural satellites—the Moon, and Mars's moons Phobos and Deimos (although some estimates show Mars and its moons to be slightly outside the habitable zone) —none of which sustain an atmosphere or water in liquid form. Tidal forces are likely to play as significant a role providing heat as stellar radiation in the potential habitability of natural satellites.\n", "Since HD 28185 b orbits in its star's habitable zone, some have speculated on the possibility of life on worlds in the HD 28185 system. 
While it is unknown whether gas giants can support life, simulations of tidal interactions suggest that HD 28185 b could harbor Earth-mass satellites in orbit around it for many billions of years. Such moons, if they exist, may be able to provide a habitable environment, though it is unclear whether such satellites would form in the first place. Additionally, a small planet in one of the gas giant's Trojan points could survive in a habitable orbit for long periods. The high mass of HD 28185 b, of over six Jupiter masses, actually makes either of these scenarios more likely than if the planet was about Jupiter's mass or less.\n", "If Earth-like planets form in or migrate into the circumbinary habitable zone they are capable of sustaining liquid water on their surface in spite of the dynamical and radiative interaction with the binary star. \n" ]
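One of the constraints mentioned in these passages, that a moon must stay inside its planet's Hill radius to remain bound, is easy to quantify. Here is a minimal Python sketch using the standard approximation r_H ≈ a·(m/3M)^(1/3); the Jupiter-at-1-AU setup is purely illustrative:

```python
# Hill radius: the zone where a planet's gravity dominates its star's,
# i.e. where a moon can stay in orbit.  r_H ~ a * (m_planet / (3 m_star))**(1/3)
AU = 1.496e11          # metres
M_SUN = 1.989e30       # kg
M_JUPITER = 1.898e27   # kg

def hill_radius(a_m: float, m_planet: float, m_star: float) -> float:
    return a_m * (m_planet / (3.0 * m_star)) ** (1.0 / 3.0)

# Illustrative example: a Jupiter-mass planet orbiting a Sun-like star at 1 AU.
r_h = hill_radius(1.0 * AU, M_JUPITER, M_SUN)
print(f"Hill radius: {r_h:.3e} m  (~{r_h / AU:.3f} AU)")
# ~0.068 AU. The zone shrinks linearly with the orbital distance a,
# which is why habitable-zone giants around dim red dwarfs (very small a)
# have their moons stripped away so easily.
```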
Would it be possible to make a mirror that reflects the image back the right way around?
Yes. This is called a [non-reversing mirror](_URL_0_). There were some articles about a new kind of such mirror invented a few years ago. The guy who did it also made a side view mirror with "no blind spot".
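The corner-mirror trick behind the non-reversing mirror has a neat linear-algebra explanation: one reflection flips handedness (determinant −1), but two reflections compose into a rotation (determinant +1). A small numpy sketch of that composition:

```python
import numpy as np

# Reflection across the plane x = 0 (one ordinary mirror).
mirror_x = np.diag([-1.0, 1.0, 1.0])
# Reflection across the plane y = 0 (the second mirror, at 90 degrees).
mirror_y = np.diag([1.0, -1.0, 1.0])

# A single mirror has determinant -1: it flips handedness,
# which is why text looks reversed in an ordinary mirror.
print(np.linalg.det(mirror_x))       # -1.0

# Two perpendicular mirrors compose into a 180-degree rotation about z.
corner = mirror_y @ mirror_x
print(corner)                        # diag(-1, -1, 1)
print(np.linalg.det(corner))         # +1.0 -> handedness preserved

# A point (x, y, z) comes back as (-x, -y, z): rotated, not mirrored,
# which is why a 90-degree corner mirror shows you "the right way around".
```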
[ "A non-reversing mirror (sometimes referred to as a flip mirror) is a mirror that presents its subject as it would be seen from the mirror. A non-reversing mirror can be made by connecting two regular mirrors at their edges at a 90 degree angle. If the join is positioned so that it is vertical, an observer looking into the angle will see a non-reversed image. This can be seen in places such as public toilets when there are two mirrors mounted on walls which meet at right angles. Such an image is visible while looking towards the corner where the two mirrors meet. The problem with this type of non-reversing mirror is that there is usually a line down the middle interrupting the image. However, if first surface mirrors are used, and care is taken to set the angle to exactly 90 degrees, the join can be made almost invisible.\n", "A third type of non-reversing mirror was created by mathematics professor R. Andrew Hicks in 2009. It was created using computer algorithms to generate a \"disco ball\" like surface. The thousands of tiny mirrors are angled to create a surface which curves and bends in different directions. The curves direct rays from an object across the mirror's face before sending them back to the viewer, flipping the conventional mirror image.\n", "The inverted real image of an object reflected by a concave mirror can appear at the focal point in front of the mirror. In a construction with an object at the bottom of two opposing concave mirrors (parabolic reflectors) on top of each other, the top one with an opening in its center, the reflected image can appear at the opening as a very convincing 3D optical illusion.\n", "Specular reflection forms images. Reflection from a flat surface forms a mirror image, which appears to be reversed from left to right because we compare the image we see to what we would see if we were rotated into the position of the image. Specular reflection at a curved surface forms an image which may be magnified or demagnified; curved mirrors have optical power. Such mirrors may have surfaces that are spherical or parabolic.\n", "A convex mirror or diverging mirror is a curved mirror in which the reflective surface bulges towards the light source. Convex mirrors reflect light outwards, therefore they are not used to focus light. Such mirrors always form a virtual image, since the focal point (\"F\") and the centre of curvature (\"2F\") are both imaginary points \"inside\" the mirror, that cannot be reached. As a result, images formed by these mirrors cannot be projected on a screen, since the image is inside the mirror. The image is smaller than the object, but gets larger as the object approaches the mirror.\n", "A flip mirror unit is used on astronomical Telescope and other optical instruments in order to send the light from an object in new directions using a small mirror which can be moved into the lightbeam. It is a mirror-diagonal that holds both a camera and an eyepiece and allows you to switch your view between them by flipping a Mirror up or down. It is used to center the object in your camera and to help you focus it. It can also be used in 35-mm photography if it is large enough to allow the entire field of view to reach the camera.\n", "Analysis of the flawed images showed that the cause of the problem was that the primary mirror had been polished to the wrong shape. Although it was probably the most precisely figured optical mirror ever made, smooth to about , at the perimeter it was too flat by about . 
This difference was catastrophic, introducing severe spherical aberration, a flaw in which light reflecting off the edge of a mirror focuses on a different point from the light reflecting off its center.\n" ]
how will the porn ban in the uk affect ordinary internet browsing?
Same way the torrent site "ban" affected torrents in the UK -> it won't.
[ "In 2013, an 'Official' Court order was called in to bar users from browsing pornographic material. While the rule applies on censoring pornographic sites, it has been found that Internet filters have blocked other websites, \n", "Internet censorship in the United Kingdom is conducted under a variety of laws, judicial processes, administrative regulations and voluntary arrangements. It is achieved by blocking access to sites as well as the use of laws that criminalise publication or possession of certain types of material. These include English defamation law, the Copyright law of the United Kingdom, regulations against incitement to terrorism and child pornography.\n", "In July 2015 the Supreme Court of India refused to allow the blocking of pornographic websites and said that watching pornography indoors in the privacy of ones own home was not a crime. The court rejected an interim order blocking pornographic websites in the country. In August 2015 the Government of India issued an order to Indian ISPs to block at least 857 websites that it considered to be pornographic. In 2015 the Department of Telecommunications (DoT) had asked internet service providers to take down 857 websites in a bid to control cyber crime, but after receiving criticism from the authorities it partially rescinded the ban. The ban from the government came after a lawyer filed a petition in the Supreme Court arguing that online pornography encourages sex crimes and rapes.\n", "In July 2015, The Supreme Court of India denied to block pornographic websites sites and said, watching porn in the privacy of your own at indoors isn't a crime and declined to pass an interim order to block pornographic websites in the country. In August 2015, the Government of India issued an order to Indian ISPs to block at least 857 websites that it considered to be pornographic. The Department of Telecom(DoT), in the year 2015, had asked internet service providers to take down as many as 857 websites in a bid to control cyber crime but after receiving criticism from the authorities, it partially rescinded the ban. The ban from the government came after a lawyer filed a petition in Supreme Court arguing that online porn encourage sex crimes and rapes. \n", "In jurisdictions that heavily restrict access or outright ban pornography, various attempts have been made to prevent access to pornographic content. The mandating of Internet filters to try preventing access to porn sites has been used in some nations such as China and Saudi Arabia. Banning porn sites within a nation's jurisdiction does not necessarily prevent access to that site, as it may simply relocate to a hosting server within another country that does not prohibit the content it offers. The United Kingdom's Digital Economy Act 2017 includes powers to require age-verification for pornographic Internet sites and the government accepted an amendment to allow the regulator to require ISPs to block access to non-compliant sites. As the BBFC are expected to become the regulator, this has caused discussion about ISPs being required to block content that is prohibited even under an R18 certificate, the prohibition of some of which is itself controversial.\n", "In July 2015, The Supreme Court of India declined to pass an interim order to block pornographic websites and said that watching pornography in the privacy of one's own isn't a crime. In August 2015, the Government of India issued an order to Indian ISPs to block at least 857 websites that it considered to be pornographic. 
In 2015, the Department of Telecom(DoT) had asked internet service providers to take down as many as 857 websites in a bid to control cyber crime, but after receiving criticism from the authorities, it partially rescinded the ban. The ban came about after a lawyer filed a petition in Supreme Court arguing that online porn encourage sex crimes and rapes. \n", "In July 2015, The Supreme Court of India denied to block pornographic websites sites and said, watching porn privately indoors is not a crime and declined to pass an interim order to block pornographic websites in the country. In August 2015, the Government of India issued an order to Indian ISPs to block at least 857 websites that it considered to be pornographic. The Department of Telecom(DoT), in the year 2015, had asked internet service providers to take down as many as 857 websites in a bid to control cyber crime but after receiving criticism from the authorities, it partially rescinded the ban. The ban from the government came after a lawyer filed a petition in Supreme Court arguing that online porn encourage sex crimes and rapes. \n" ]
how do we know ancient civilisations/events existed?
Carbon dating is a chemistry-based method for determining how old organic matter (wood, preserved food) is. There's no disguising a 200-year-old tomb as a 4000-year-old tomb; the carbon dating is completely different. Also, we have records from long ago (like from the Roman Empire) of people *already* knowing about things like the Egyptian Pyramids, which were already old at the time. Also, some civilizations (like China) have existed continuously for thousands of years, with continuous written records updated every year, and consistent with the archaeological record. It's just too much to fake; you might as well give up on reality entirely.
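To see why a 200-year-old tomb can't pass for a 4000-year-old one, here is a minimal sketch of the decay arithmetic behind radiocarbon dating in Python, using the standard C-14 half-life of about 5730 years (real labs apply calibration curves on top of this simple model):

```python
import math

HALF_LIFE_C14 = 5730.0  # years

def remaining_fraction(age_years: float) -> float:
    """Fraction of the original C-14 left after a given age."""
    # N(t) = N0 * (1/2) ** (t / half_life)
    return 0.5 ** (age_years / HALF_LIFE_C14)

def radiocarbon_age(fraction_left: float) -> float:
    """Invert the decay law: t = half_life * log2(N0 / N)."""
    return HALF_LIFE_C14 * math.log2(1.0 / fraction_left)

# Wood from a 200-year-old tomb vs. a 4000-year-old one:
print(f"{remaining_fraction(200):.3f}")   # ~0.976 of the C-14 left
print(f"{remaining_fraction(4000):.3f}")  # ~0.616 of the C-14 left

# Inverting a measurement back into an age:
print(f"{radiocarbon_age(0.616):.0f} years")  # ~4005 years
```

The two fractions differ by about 36 percentage points, far beyond measurement error, which is why the ages simply cannot be confused.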
[ "With philology, archaeology, and art history, scholars seek understanding of the history and culture of a civilisation, through critical study of the extant literary and physical artefacts, in order to compose and establish a continual historic narrative of the Ancient World and its peoples. The task is difficult due to a dearth of physical evidence: for example, Sparta was a leading Greek city-state, yet little evidence of it survives to study, and what is available comes from Athens, Sparta's principal rival; likewise, the Roman Empire destroyed most evidence (cultural artefacts) of earlier, conquered civilizations, such as that of the Etruscans.\n", "The Ancients were a major race in the distant past; their ruins dot planets throughout charted space and their artifacts are more technically advanced than those of any existing civilization. For unknown reasons, they transplanted humans from Earth to dozens of worlds, uplifted Terran wolves to create the Vargr, and undertook many megascale engineering projects before destroying their civilization in a catastrophic war.\n", "As far as the ancient cultural profile of this part of the World is concerned, we do not know much about the Pre-Historic and Proto-Historic age. There may be several archaeological sites that can produce the traces of human habitation before the second millennium B.C but that will need thorough and methodical archaeological excavations. The earliest evidences we so far have is the arrival of Achaemenian (Persian Zoroaster), who are evident by their seglois.\n", "The Ancients are a technologically and physically advanced alien species with the ability and knowledge to harness wormhole-based technology. They are the most advanced race in \"Farscape\". They originally existed in another realm until it was bridged to the \"Farscape\" universe by wormholes. When they became aware of the various species in the other realm and many of those species' aggressive tendencies, they modified several members of their own species to live and exist in the other universe. These individuals became the Ancients. Yet after many years, the original planet of the Ancients began to die and they needed to search for a suitable location, and they discovered Earth through an elaborate simulation in human John Crichton, who gains wormhole knowledge in the process. A major part of \"Farscape\" is the risk of this wormhole knowledge getting in the hands of the Scarran. The Ancients are featured in the episodes \"A Human Reaction\", \"The Hidden Memory\", \"Infinite Possibilities\", \"Unrealized Reality\", and in \"\".\n", "The Ancients are the original builders of the Stargate network, who by the time of \"Stargate SG-1\" have Ascended beyond corporeal form into a higher plane of existence. The humans of Earth are the \"second evolution\" of the Ancients. The Ancients (originally known as the Alterans) colonized the Milky Way galaxy millions of years ago and built a great empire. They also colonized the Pegasus galaxy and seeded human life there, before being driven out by the Wraith. The civilization of the Ancients in the Milky Way was decimated thousands of years ago by a plague, and those who did not learn to Ascend died out. With few exceptions, the Ascended Ancients respect free will and (with some exceptions) refuse to interfere in the affairs of the material galaxy. 
However, their legacy is felt profoundly throughout \"Stargate\" universe, from their technologies such as Stargates and Atlantis, to the Ancient Technology Activation gene, that they introduced into the human genome through crossbreeding.\n", "The Ancients are the original builders of the Stargate network, who by the time of \"Stargate SG-1\" have Ascended beyond corporeal form into a higher plane of existence. The humans of Earth are the \"second evolution\" of the Ancients. The Ancients (originally known as the Alterans) colonized the Milky Way galaxy millions of years ago and built a great empire. They also colonized the Pegasus galaxy and seeded human life there, before being driven out by the Wraith. The civilization of the Ancients in the Milky Way was decimated thousands of years ago by a plague, and those who did not learn to Ascend died out. With few exceptions, the Ascended Ancients respect free will and refuse to interfere in the affairs of the material galaxy. However, their legacy is felt profoundly throughout \"Stargate\" universe, from their technologies such as Stargates and Atlantis, to the Ancient Technology Activation gene, that they introduced into the human genome through interbreeding.\n", "The Ancients are the original builders of the Stargate network, who by the time of \"Stargate SG-1\" have ascended beyond corporeal form into a higher plane of existence. The humans of Earth are the \"second evolution\" of the Ancients. The Ancients (originally known as the Alterans) colonized the Milky Way galaxy millions of years ago and built a great empire. They also colonized the Pegasus galaxy and seeded human life there before being driven out by the Wraith. The civilization of the Ancients in the Milky Way was decimated millions of years ago by a plague and those who did not learn to ascend died out. With few exceptions the ascended Ancients respect free will and refuse to interfere in the affairs of the material galaxy. However their legacy is felt profoundly throughout the \"Stargate\" universe, from their technologies such as Stargates and Atlantis to the Ancient Technology Activation gene that they introduced into the human genome through interbreeding.\n" ]
what are volts? amps? watts? which one(s) can kill you versus just shock/hurt you?
The water pipe analogy is a famous way to understand electricity. Volts are like water pressure (high voltage would be like a pressure washer, low voltage would be like a babbling brook) Amps are like the amount of water flow (high amperage would be like the Mississippi river, low amperage would be like a kitchen faucet) Watts are the ability of the water to do work. Watts, conveniently, are equal to Amps * Volts. What kills you is current (amperage) running through your body in the wrong way. However, current can't run through your body without a voltage (pressure) to drive it. So both are what kills you, really.
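To put numbers on the analogy, here is a minimal Python sketch of the two relationships involved, Ohm's law (I = V/R) and the power formula (P = V × I). The body-resistance figures are rough illustrative assumptions; real skin resistance varies enormously with moisture and contact area:

```python
def current_amps(volts: float, resistance_ohms: float) -> float:
    return volts / resistance_ohms   # Ohm's law: I = V / R

def power_watts(volts: float, amps: float) -> float:
    return volts * amps              # P = V * I

# Rough, illustrative body resistances (real values vary a lot):
DRY_SKIN = 100_000.0   # ohms
WET_SKIN = 1_000.0     # ohms

for label, r in (("dry skin", DRY_SKIN), ("wet skin", WET_SKIN)):
    i = current_amps(120.0, r)       # touching 120 V mains
    print(f"{label}: {i * 1000:.1f} mA, {power_watts(120.0, i):.2f} W")

# dry skin: 1.2 mA   -- a painful tingle
# wet skin: 120.0 mA -- well into the range that can disrupt the heart
```

Same voltage, wildly different current: that is the whole difference between a shock that hurts and a shock that kills.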
[ "Voltages over approximately 50 volts can usually cause dangerous amounts of current to flow through a human being who touches two points of a circuit—so safety standards, in general, are more restrictive around such circuits. The definition of \"extrahigh voltage\" (EHV) again depends on context. In electric power transmission engineering, EHV is classified as voltages in the range of 345,000 - 765,000 volts. In electronics systems, a power supply that provides greater than 275,000 volts is called an \"EHV Power Supply\", and is often used in experiments in physics.\n", "It is sometimes suggested that human lethality is most common with alternating current at 100–250 volts; however, death has occurred below this range, with supplies as low as 42 volts. Assuming a steady current flow (as opposed to a shock from a capacitor or from static electricity), shocks above 2,700 volts are often fatal, with those above 11,000 volts being usually fatal, though exceptional cases have been noted. According to a Guinness Book of World Records comic, seventeen-year-old Brian Latasa survived a 230,000 volt shock on the tower of an ultra-high voltage line in Griffith Park, Los Angeles on November 9, 1967. A news report of the event stated that he was \"jolted through the air, and landed across the line\", and though rescued by firemen, he suffered burns over 40% of his body and was completely paralyzed except for his eyelids. The shock with the highest voltage reported survived was that of Harry F. McGrew, who came in contact with a 340,000 volt transmission line in Huntington Canyon, Utah.\n", "The precise reasoning for the selection of 100 volts as the division between high and low is not clearly defined, but appears to be based on the idea that a person could touch the wires carrying low voltage with dry bare hands, and not be electrocuted, injured, or killed. This is generally true for 12 volt systems, but becomes more ambiguous as the voltage increases to 100 volt.\n", "The Kill A Watt (a pun on \"kilowatt\") is an electricity usage monitor manufactured by Prodigit Electronics and sold by P3 International. It measures the energy used by devices plugged directly into the meter, as opposed to in-home energy use displays, which display the energy used by an entire household. The LCD shows voltage; current; true, reactive, and apparent power; power factor (for sinusoidal waveform); energy consumed in kWh; and hours connected. Some models display estimated cost.\n", "Low-energy exposure to high voltage may be harmless, such as the spark produced in a dry climate when touching a doorknob after walking across a carpeted floor. The voltage can be in the thousand-volt range, but the current (the rate of charge transfer) is low.\n", "Voltages greater than 50 V applied across dry unbroken human skin can cause heart fibrillation if they produce electric currents in body tissues that happen to pass through the chest area. The voltage at which there is the danger of electrocution depends on the electrical conductivity of dry human skin. Living human tissue can be protected from damage by the insulating characteristics of dry skin up to around 50 volts. 
If the same skin becomes wet, if there are wounds, or if the voltage is applied to electrodes that penetrate the skin, then even voltage sources below 40 V can be lethal.\n", "BULLET::::- Low-charging or Anti-static: Materials that limit the buildup of charge by prevention of triboelectric effects through physical separation or by selecting materials that do not build up charge easily. Humans have natural electrical sources running through the body, touching an ESD unequipped can result in serious material damage.\n" ]
each of the 5 positions in basketball and their responsibilities.
Point guard, or the "1": Primary ball handler on offense. Brings the ball up the court, generally starts plays and creates opportunities for others. On defense, defends the perimeter. Shooting guard, or the "2" or the "2 guard": perimeter scorer that plays off the ball. Needs to be a good ball handler as well. They can drive to the basket to create opportunities for themselves or others, just as the point guard does. Defends the perimeter. Small forward, or the "3": nowadays, these are some of the most impactful players. Sometimes called a "wing player." They need to be versatile in their ability to score from various places and defend various types of players. See LeBron James, Kawhi Leonard, and to some extent Draymond Green. Power Forward, or the "4": traditionally a post (post means underneath the basket, in a basic sense) player, nowadays many of them have the skills to step outside the basket and hit longer shots. Having a fourth player that can do this helps to stretch the defense. They also need to be able to rebound and defend the post. Center, or the "5": This position is seemingly less important than ever in the modern game, but they are further specialized in playing the "post" than the power forward. They need to protect the basket on defense and rebound. On offense they can be used to take close shots, rebound, or step out to the perimeter and set "picks" (a pick is when one player blocks a defender so his teammate can create space from that defender, or "get open"). The modern game is fast and heavily focused on perimeter play so this position has become more about utility players that can rebound and set picks than about dominant players that can take over a game with scoring. Less about Shaq, Wilt Chamberlain, etc. Traditional type of "big men" such as Dwight Howard struggle to fill a dominant role right now.
[ "Basketball position – general location on the court which each player is responsible for. A player is generally described by the position (or positions) he or she plays, though the rules do not specify any positions. Positions are part of the strategy that has evolved for playing the game, and terminology for describing game play.\n", "The center (C), also known as the five, or the big man, is one of the five positions in a regular basketball game. The center is normally the tallest player on the team, and often has a great deal of strength and body mass as well. In the NBA, the center is usually or taller and usually weighs or more. They traditionally have played close to the basket in the low post. A center with the ability to shoot outside from three-point range is known as stretch five.\n", "Although the rules do not specify any positions whatsoever, they have evolved as part of basketball. During the early years of basketball's evolution, two guards, two forwards, and one center were used. In more recent times specific positions evolved, but the current trend, advocated by many top coaches including Mike Krzyzewski is towards positionless basketball, where big guys are free to shoot from outside and dribble if their skill allows it. Popular descriptions of positions include:\n", "BULLET::::- Forward-center – position for players who play or have played both forward and center on a consistent basis. Typically, this means power forward and center, since these are usually the two biggest player positions on any basketball team, and therefore more often overlap each other.\n", "The player is the coach of a basketball team, and determines the plays and sets, offense and defense. The basketball players are represented by numbers on the onscreen court, and the coach must learn how to effectively use the team's stars and how to obtain the best performance from the regular players. \"Basketball Challenge\" can be played by one or two players, or the computer can also play against a human opponent or run the entire game as both players. At the beginning of the game the player is given the option to choose offensive and defensive plays including lineup and tempo. During the game you have the ability to communicate with team players. You also have the ability to coach a player and this can lead to changing tactics or even substituting players during deadball. \n", "The point guard (PG), also called the one or point, is one of the five positions in a regulation basketball game. A point guard has perhaps the most specialized role of any position. Point guards are expected to run the team's offense by controlling the ball and making sure that it gets to the right player at the right time. Above all, the point guard must totally understand and accept their coach's game plan; in this way, the position can be compared to a quarterback in American football or a playmaker in association football (soccer). 
While the point guard must understand and accept the coach's gameplan, they must also be able to adapt to what the defense is allowing, and they also must control the pace of the game.\n", "The five players on each side at a time fall into five playing positions: the tallest player is usually the center, the tallest and strongest is the power forward, a slightly shorter but more agile big man is the small forward, and the shortest players or the best ball handlers are the shooting guard and the point guard, who implements the coach's game plan by managing the execution of offensive and defensive plays (player positioning). Informally, players may play three-on-three, two-on-two, and one-on-one.\n" ]
TIL about the Phaistos Disk and Linear A and I'm curious, what other languages are still undeciphered? What are the chances that they will be deciphered?
Indus script, the writing system left behind by the Indus Valley Civilization, has yet to be deciphered. The script appears to use short strings of symbols. For the past few years, researchers have attempted to decipher it, but the chances of Indus script being deciphered are very low. A big reason is that inscriptions in Indus script are short (average length: 5 symbols; longest on a single surface: 17). We also have no idea what language the Indus Valley people spoke, and there is no artifact like the Rosetta Stone that we can refer to. [Most of these numbers were pulled from this paper on page 796](_URL_1_) EDIT: I did some more searching and found a [TED Talk](_URL_0_) by one of the guys who contributed to the paper I listed above
[ "The Phoenician text has long been known to be in a Semitic, more specifically Canaanite language (very closely related to Hebrew, and also relatively close to Aramaic and Ugaritic); hence there was no need for it to be \"deciphered.\" And while the inscription can certainly be read, certain passages are philologically uncertain on account of perceived complications of syntax and the vocabulary employed in the inscription, and as such they have become the source of debate among both Semiticists and Classicists.\n", "These systems have not been deciphered. In some cases, such as Meroitic, the sound values of the glyphs are known, but the texts still cannot be read because the language is not understood. Several of these systems, such as Epi-Olmec and Indus, are claimed to have been deciphered, but these claims have not been confirmed by independent researchers. In many cases it is doubtful that they are actually writing. The Vinča symbols appear to be proto-writing, and quipu may have recorded only numerical information. There are doubts that Indus is writing, and the Phaistos Disc has so little content or context that its nature is undetermined.\n", "The cuneiform lexical lists are a series of ancient Mesopotamian glossaries which preserve the semantics of Sumerograms, their phonetic value and their Akkadian or other language equivalents. They are the oldest literary texts from Mesopotamia and one of the most widespread genres in the ancient Near East. Wherever cuneiform tablets have been uncovered, inside Iraq or in the wider Middle East, these lists have been discovered.\n", "Linear B, a script used in the ancient Aegean, was deciphered in 1952 by Michael Ventris and John Chadwick, who demonstrated that it recorded an early form of Greek, now known as Mycenaean Greek. Linear A, the writing system that records the still-unknown language of the Minoans, resists deciphering, despite many attempts.\n", "In 1995 independent linguist Steven Fischer, who also claims to have deciphered the enigmatic Phaistos Disc, announced that he had cracked the rongorongo \"code\", making him the only person in history to have deciphered two such scripts. In the decade since, this has not been accepted by other researchers, who feel that Fischer overstated the single pattern which formed the basis of his decipherment, and note that it has not led to an understanding of other patterns.\n", "The vowels that are clearly distinguished by the cuneiform script are , , , and . Various researchers have posited the existence of more vowel phonemes such as and even and , which would have been concealed by the transmission through Akkadian, as that language does not distinguish them. That would explain the seeming existence of numerous homophones in transliterated Sumerian, as well as some details of the phenomena mentioned in the next paragraph. These hypotheses are not yet generally accepted.\n", "Janet H. Johnson noted in 1996 that the texts can only be understood entirely when the parts written in the Egyptian language known as \"Demotic\" are accounted for. Johnson adds, \"All four of the Demotic magical texts appear to have come from the collections that Anastasi gathered in the Theban area. 
Most have passages in Greek as well as in Demotic, and most have words glossed into Old Coptic (Egyptian language written with the Greek alphabet [which indicated vowels, which Egyptian scripts did not] supplemented by extra signs taken from the Demotic for sounds not found in Greek); some contain passages written in the earlier Egyptian hieratic script or words written in a special \"cipher\" script, which would have been an effective secret code to a Greek reader but would have been deciphered fairly simply by an Egyptian.\" \n" ]
why does japanese use 3 different alphabets/syllabaries?
In the beginning, what would later become "Japan" adopted Chinese characters for their writing. Note that they only adopted the script; the spoken language was still Japanese, they just hammered it into Chinese characters. Later, during the middle Heian period (~1000 AD), a combination of a movement to detach from the mainland and develop a "native" culture, along with much of literature being written by women in courtly cultural salons, led to "women's hand" being developed and becoming the de facto script for literature. Women were traditionally not allowed to learn Chinese script, which is why what would later become Hiragana developed. A number of important works, like the Kokinshu, were written in hiragana. Later, the Heian period collapsed, and the era of warrior rule started. With the end of cultural salons as the center of literature, works once again were written by men, for a predominantly male audience. The written language became a mixed script of Chinese characters and onnade. Katakana developed parallel to all of this. It was used primarily for foreign loan-words. At first, Chinese of course. --- tl;dr let's copy china, let's try to stop copying china (women make their own script), rip women, now we do both
[ "In modern Japanese, the hiragana and katakana syllabaries each contain 46 basic characters, or 71 including diacritics. With one or two minor exceptions, each different sound in the Japanese language (that is, each different syllable, strictly each mora) corresponds to one character in each syllabary. Unlike kanji, these characters intrinsically represent sounds only; they convey meaning only as part of words. Hiragana and katakana characters also originally derive from Chinese characters, but they have been simplified and modified to such an extent that their origins are no longer visually obvious.\n", "The two Japanese syllabaries are themselves adapted from the Chinese characters (both of the syllabaries, katakana and hiragana, are in everyday use alongside the Chinese characters known as kanji; the kanji, being developed in parallel to the Chinese characters, have their own idiosyncrasies, but Chinese and Japanese ideograms are largely comprehensible, even if their use in the languages are not the same.)\n", "The contemporary Japanese language uses two syllabaries together called kana, namely hiragana and katakana, which were developed around 700. Because Japanese uses mainly CV (consonant + vowel) syllables, a syllabary is well suited to write the language. As in many syllabaries, vowel sequences and final consonants are written with separate glyphs, so that both \"atta\" and \"kaita\" are written with three kana: あった (\"a-t-ta\") and かいた (\"ka-i-ta\"). It is therefore sometimes called a \"moraic\" writing system.\n", "Other systems, known as kana, used Chinese characters phonetically to transcribe the sounds of Japanese syllables. An early system of this type was Man'yōgana, as used in the 8th century anthology \"Man'yōshū\". This system was not quite a syllabary, because each Japanese syllable could be represented by one of several characters, but from it were derived two syllabaries still in use today. They differ because they sometimes selected different characters for a syllable, and because they used different strategies to reduce these characters for easy writing: the angular katakana were obtained by selecting a part of each character, while hiragana were derived from the cursive forms of whole characters. Such classic works as Lady Murasaki's \"The Tale of Genji\" were written in hiragana, the only system permitted to women of the time.\n", "The two Japanese syllabaries, katakana and hiragana, are sometimes seen as two styles or typographic variants of each other, but usually are considered separate character sets as a few of the characters have separate kanji origins and the scripts are used for different purposes. The \"gothic\" style of the roman script with broken letter forms, on the other hand, is usually considered a mere typographic variant.\n", "Even though the Japanese language also uses Chinese characters (kanji), it primarily employs katakana to transliterate names of the elements from European languages (often German/Dutch or Latin [via German] or English). For example,\n", "The original Japanese kana syllabaries were a purely phonetic representation used for writing the Japanese language when they were invented around 800 AD as a simplification of Chinese-derived kanji characters. However, the syllabaries were not completely codified and alternate letterforms, or hentaigana, existed for many sounds until standardization in 1900. 
In addition, due to linguistic drift the pronunciation of many Japanese words changed, mostly in a systematic way, from the classical Japanese language as spoken when the kana syllabaries were invented. Despite this, words continued to be spelled in kana as they were in classical Japanese, reflecting the classic rather than the modern pronunciation, until a Cabinet order in 1946 officially adopted spelling reform, making the spelling of words purely phonetic (with only 3 sets of exceptions) and dropping characters that represented sounds no longer used in the language.\n" ]
How did the US Government get all of the Native Americans to Oklahoma during the Trail of Tears? What happened to the people who refused to leave?
Basically the tribes were coerced by violence into relocating to Indian Territory. Some tribes were able to hide out, and they later became their own tribes in their original homelands. Famous examples include the Poarch Band of Creek Indians of Alabama, the Seminole Tribe of Florida (who were never militarily defeated by the United States), the Sac & Fox Tribe of the Mississippi in Iowa, and the Mississippi Band of Choctaw Indians. A band of [Nez Perce](_URL_2_) were forced from Washington to Indian Territory in 1878. Basically they said hell no and returned to the NW in 1884. Same with the Northern Cheyenne, who returned to Montana on foot in the winter of 1878. The Dawes Commission, led by Senator Henry L. Dawes (still hated today by most Oklahoma Indians), oversaw the destruction of tribal governments and landholdings in Indian Territory. The Curtis Act of 1898 dismantled tribal governments, courts, and school systems (many of these buildings were stolen from the tribes). The Dawes Severalty Act called for lands collectively owned by the tribes to be broken up into small individual allotments for individual Indians and Freedmen/Freedwomen. Then the so-called "surplus" land was opened up to non-Native settlement in lotteries and [land runs](_URL_1_). When the idea of combining Oklahoma Territory, Indian Territory, and the "unassigned lands" was proposed, traditionalists fought it legally and even by force (see the Four Mothers Society and the Green Peach War). Politicians from the NE tribes tried to get a separate [State of Sequoyah](_URL_0_) established apart from Oklahoma (it breaks my heart that this didn't happen). Some tribes have recovered stolen public buildings from the state of Oklahoma in recent years. The tribes had to reorganize their governments under the [Oklahoma Indian Welfare Act of 1936](_URL_3_) and rebuild their infrastructure in the ensuing decades. Some have repurchased important lands, but compensation, no.
[ "By the 1830s, the U.S. had drafted the Indian Removal Act, which was used to facilitate the Trail of Tears. Fearing retribution of other native peoples, Indian Agents all over the eastern U.S. began desperately trying to convince all their native peoples to uproot and move west. This included the Caddoans of Louisiana and Arkansas. Following the Texas Revolution, the Texans chose to make peace with their Native peoples, but did not honor former land claims or agreements. This began the movement of Native populations north into what would become Indian Territory—modern day Oklahoma.\n", "In 1838, the U.S. government forced the Cherokees, along with other Native Americans, to relocate to the area designated as Indian Territory, in what is now the state of Oklahoma. Their journey west became known as the \"Trail of Tears\" for their exile and fatalities along the way. The U.S. Army used Ross's Landing as the site of one of three large internment camps, or \"emigration depots\", where Native Americans were held before the journey on the Trail of Tears.\n", "Following the Indian Removal Act of 1830 the American government began forcibly relocating East Coast tribes across the Mississippi. The removal included many members of the Cherokee, Muscogee (Creek), Seminole, Chickasaw, and Choctaw nations, among others in the United States, from their homelands to Indian Territory in eastern sections of the present-day state of Oklahoma. About 2,500–6,000 died along the Trail of Tears. Chalk and Jonassohn assert that the deportation of the Cherokee tribe along the Trail of Tears would almost certainly be considered an act of genocide today. The Indian Removal Act of 1830 led to the exodus. About 17,000 Cherokees, along with approximately 2,000 Cherokee-owned black slaves, were removed from their homes. The number of people who died as a result of the Trail of Tears has been variously estimated. American doctor and missionary Elizur Butler, who made the journey with one party, estimated 4,000 deaths.\n", "On January 27, 1825, the Indian Removal Act was signed, calling for the removal of all Native American Tribes in Georgia. In the following years, most of the Muskogee people were forcibly relocated to Oklahoma. Those who stayed hid in swampy, less desirable areas; fled to Florida and joined the Seminole tribe; or moved frequently to avoid capture. Laws limiting the rights of the Muskogee people were not officially removed until 1980.\n", "By 1838, the Cherokee had run out of legal options in resisting removal. They were the last of the major Southeast tribes to be forcibly moved to the Indian Territories (in modern-day Oklahoma) on the Trail of Tears. After the removal of the Cherokee, their homes and businesses were taken over by whites, with much of the property distributed through a land lottery.\n", "The Indian Removal Act and the Trail of Tears led to a major enumeration of Native Americans, and many controversies and misunderstandings about blood quantum that persist to this day. As they were being forcibly driven out of their ancestral homelands and subjected to genocide, many Natives understandably feared and distrusted the government and tried to avoid being enumerated. But the only way to do this was to completely flee the Indian community, during a time of persecution and war. Indians who tried to refuse, if they were not already in a prison camp, had warrants issued for their arrests; they were forcibly rounded up and documented against their will. 
It is a modern-day misconception that this enumeration was the equivalent of contemporary tribal \"enrollment\" and in any way optional.\n", "The removal of Indian Tribes from eastern states to the Indian Territory began under President Andrew Jackson in the 1830s continued in the 1840s. Tribal groups would be organized in their home area and would begin the journey up the Arkansas River, usually by steamer, as far as water conditions would allow and would then continue overland through the state until they reached Indian Territory. The job of escorting these bands of refugees along the \"Trail of Tears\" would often fall to the Arkansas Militia. Governor Conway signed a proclamation on 22 October 1836 which stated that there were numerous Indians \"roving about the state... without any fixed place of abode and committing depredations upon the property of the citizens contrary to the laws...\" he ordered the Indians to leave and directed that \"The Commandant of Regiments of the Militia in the several counties in the state and all subordinate officers are required to give their aid in carrying this order into effect.\" A similar proclamation was signed again on July 18, 1840.\n" ]
how film negatives work & the difference between them and digital
Think of a film negative as a piece of cardboard with a cutout of something on it. When you shine a flashlight through the cardboard, it will cast a shadow of whatever's cut into it. For an actual photo, you cast the "shadow" from the film negative onto paper coated with light-sensitive chemicals. The darkest part of the film negative will therefore block most of the light (cast the darkest shadow) on the paper, so that part will appear bright. The same applies to the lightest part of the film negative. The colors come from the color of the light that gets through. That paper becomes the actual photo. The film negative acts as your "memory card," so you can reprint photos as much as you like. Most digital cameras work with something called a charge-coupled device (CCD) behind the lens, instead of film. Basically, what a CCD does is turn a certain stimulus, in this case light, into charges/signals. CCDs are composed of tiny pixels, each recording the light it receives. The pixels are grouped in squares of four, each with its own color filter (one red, one blue, and two green), so the color information is slightly lower-res than the actual image resolution. So in short, a digital camera works by having very small squares record what kind of light they see, and saving that to memory.
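Both halves of this answer, inverting a negative and the four-pixel color filter layout, are easy to mimic in a few lines of numpy. A minimal sketch (an 8-bit image and an RGGB filter arrangement are illustrative assumptions; real sensors and film stocks differ):

```python
import numpy as np

# Pretend this is a tiny 4x4 8-bit grayscale image (0 = black, 255 = white).
image = np.array([[  0,  64, 128, 255],
                  [ 32,  96, 160, 224],
                  [255, 128,  64,   0],
                  [224, 160,  96,  32]], dtype=np.uint8)

# A negative is just the tonal inverse: dark areas block light, light areas
# pass it, so printing through the negative flips the tones back to a positive.
negative = 255 - image
positive_again = 255 - negative
assert np.array_equal(positive_again, image)  # reprint as often as you like

# Bayer mosaic: each 2x2 block of sensor pixels carries one red, two green,
# and one blue filter (RGGB assumed here); full color is interpolated later.
bayer = np.empty((4, 4), dtype="<U1")
bayer[0::2, 0::2] = "R"
bayer[0::2, 1::2] = "G"
bayer[1::2, 0::2] = "G"
bayer[1::2, 1::2] = "B"
print(bayer)
# [['R' 'G' 'R' 'G']
#  ['G' 'B' 'G' 'B']
#  ['R' 'G' 'R' 'G']
#  ['G' 'B' 'G' 'B']]
```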
[ "Digital negatives offer many advantages, such as the ability to shoot with a digital camera and edit digitally while still working with alternative or traditional photographic processes. Small, analog negatives can be scanned and enlarged digitally to create new negatives instead of using the traditional enlarging film that must be processed in a darkroom. Another advantage lies in their reproducibility: a damaged negative can be recreated from the original digital file.\n", "Film negatives usually have less contrast, but a wider dynamic range, than the final printed positive images. The contrast typically increases when they are printed onto photographic paper. When negative film images are brought into the digital realm, their contrast may be adjusted at the time of scanning or, more usually, during subsequent post-processing.\n", "The digital negative is the collective name for methods used by photographers to create negatives on transparency film for the contact printing of alternative photographic techniques. The negatives can also be enlarged using traditional gelatin silver processes, though this is usually reserved for negatives of 4x5\" or larger due to quality limitations imposed by printer technology. This set of techniques is separate from the Digital negative (DNG) file format, although this format may be used to create digital negative transparencies.\n", "Other currently available films are designed to produce color negatives for use in creating enlarged positive prints on color photographic paper. Color negatives may also be digitally scanned and then printed by non-photographic means or viewed as positives electronically. Unlike reversal-film transparency processes, negative-positive processes are, within limits, forgiving of incorrect exposure and poor color lighting, because a considerable degree of correction is possible at the time of printing. Negative film is therefore more suitable for casual use by amateurs. Virtually all single-use cameras employ negative film. Photographic transparencies can be made from negatives by printing them on special \"positive film\", but this has always been unusual outside of the motion picture industry and commercial service to do it for still images may no longer be available. Negative films and paper prints are by far the most common form of color film photography today.\n", "Today, most films are edited digitally (on systems such as Avid, Final Cut Pro or Premiere Pro) and bypass the film positive workprint altogether. In the past, the use of a film positive (not the original negative) allowed the editor to do as much experimenting as he or she wished, without the risk of damaging the original. With digital editing, editors can experiment just as much as before except with the footage completely transferred to a computer hard drive.\n", "Digital negatives are typically used with one of the alternative processes such as gum bichromate, cyanotype, or Inkodye. In these cases digital negatives are most commonly printed full size to create contact prints. The negative is sandwiched printer ink-to-emulsion in a contact printing frame and exposed under a UV light source. They can also be used to create positives (where the initial digital file is not inverted) to make positives on emulsions such as collodion processes.\n", "One problem that has resulted from Technicolor negatives is the rate of shrinkage from one strip to another. 
Because three-strip negatives are shot on three rolls, they are subject to different rates of shrinkage depending on storage conditions. Today, digital technology allows for a precise re-alignment of the negatives by resizing shrunken negatives digitally to correspond with the other negatives. The G, or Green, record is usually taken as the reference as it is the record with the highest resolution. It is also a record with the correct \"wind\" (emulsion position with respect to the camera's lens). Shrinkage and re-alignment (resizing) are non-issues with Successive Exposure (single-roll RGB) Technicolor camera negatives. This issue could have been eliminated, for three-strip titles, had the preservation elements (fine-grain positives) been Successive Exposure, but this would have required the preservation elements to be 3,000 feet or 6,000 feet whereas three-strip composited camera and preservation elements are 1,000 feet or 2,000 feet (however, three records of that length are needed).\n" ]
Were 200,000 Jews really conscripted into the German Wehrmacht during WWII?
It did happen, however the number is closer to 150,000. It should also be noted that they were not outright "Jews" but rather "Mischlinge" under German law. Here is how it works. In Nazi Germany the Nuremberg Laws (1935) defined a "Jew" as someone who, regardless of religious affiliation, had at least three Jewish grandparents. You were also considered a Jew if you were a "Geltungsjude" or "Jew of Legal Validity." This applied if you met any one of the following: - You were enrolled as a member of a Jewish congregation when the Nuremberg Laws were passed, or after they were passed - You were married to a Jew - You were the offspring of a Jewish parent So what is a Mischling? A Mischling is a "mixed breed." If you had two Jewish grandparents you were a Mischling of the first degree, and if you had one you were a Mischling of the second degree. You were then put through the Mischling test, whose second part applied the above standards (religion, marriage, etc.). If you met any of those you were no longer a Mischling but rather a Geltungsjude. Mischlinge, though not regarded as equal to Aryans, *could* live and work in German society, and according to [this source from the University of Kansas](_URL_0_) about 150,000 of these Mischlinge actually fought in the Wehrmacht. I would compare life as a Mischling to life as a half-white/half-black individual in the American South. Sure, you could own property and hold a job, but the jobs you could work would be very limited because people wouldn't hire you, police would discriminate against you, etc.
[ "OKW and OKH secret reports show that half-Jews could only serve in \"Ersatzreserve II\" or \"Landwehr II\", while quarter-Jews remained in the Wehrmacht and were eligible for promotion. Employment or promotion of quarter-Jews required Hitler's approval. Cambridge University researcher Bryan Rigg noted that there were two field marshals and fifteen generals (two full generals, eight lieutenant generals, five major generals) who were Jews or of partial Jewish descent. Rigg estimated that there were 150,000 men of some Jewish descent that served in the German armed forces during World War II. 1,671 have been identified (as of 2010). Hitler personally issued \"German Blood\" papers to \"mischlings\" (mixed Jewish) for their continuing service.\n", "Almost immediately after the invasion, Germans began forcibly conscripting laborers. Jews were drafted to repair war damage as early as October, with women and children 12 or older required to work; shifts could take half a day and with little compensation. The labourers, Jews, Poles and others, were employed in SS-owned enterprises (such as the German Armament Works, Deutsche Ausrustungswerke, DAW), but also in many private German firms – such as Messerschmitt, Junkers, Siemens, and IG Farben.\n", "In 2006, on the eve of the 68th anniversary of the \"Kristallnacht\", soldiers of the \"Bundeswehr\" formed the \"Bund jüdischer Soldaten\", a federation of Jewish soldiers in the German Army, similar to the former \"Reichsbund jüdischer Frontsoldaten\". While few German Jews joined the West German Army after the Second World War, descendants of people who suffered through the Nazi persecution having been exempt from national service, by 2014 the \"Bundeswehr\" had around 250 German Jewish soldiers in its ranks again.\n", "Quoting the research done by H. G. Adler into Poland during World War II called , there were \"80,000 Jews conscripted into Poland's independent army prior to the German invasion who identified themselves as Lithuanian Jews\". Using different sources Holocaust researchers claim there were between 60,000 and 65,000 Jewish soldiers in Poland's independent army who identified themselves as Lithuanian Jews.\n", "Under a deal between the German Defense Ministry and the Central Council of Jews in Germany, Jews up to the third generation of Holocaust victims were exempted from the military service obligation, but could still volunteer for military service. For decades, volunteering for military service was taboo in the German-Jewish community, but eventually, Jews began joining. In 2007, there were an estimated 200 Jewish soldiers serving in the Bundeswehr.\n", "The number of Jews in Poland on 1 September 1939, amounted to about 3,474,000 people. One hundred thirty thousand soldiers of Jewish descent, including Boruch Steinberg, Chief Rabbi of the Polish Military, served in the Polish Army at the outbreak of the Second World War, thus being among the first to launch armed resistance against Nazi Germany. During the September Campaign some 20,000 Jewish civilians and 32,216 Jewish soldiers were killed, while 61,000 were taken prisoner by the Germans; the majority did not survive. 
The soldiers and non-commissioned officers who were released ultimately found themselves in the Nazi ghettos and labor camps and suffered the same fate as other Jewish civilians in the ensuing Holocaust in Poland.\n", "Recruitment for the \"Wehrmacht\" was accomplished through voluntary enlistment and conscription, with 1.3 million being drafted and 2.4 million volunteering in the period 1935–1939. The total number of soldiers who served in the \"Wehrmacht\" during its existence from 1935 to 1945 is believed to have approached 18.2 million. As World War II intensified, Kriegsmarine and Luftwaffe personnel were increasingly transferred to the Army, and \"voluntary\" enlistments in the SS were stepped up as well. Following the Battle of Stalingrad in 1943, fitness standards for \"Wehrmacht\" recruits were drastically lowered, with the regime going so far as to create \"special diet\" battalions for men with severe stomach ailments. Rear-echelon personnel were sent to front-line duty wherever possible, especially during the last two years of the war.\n" ]
is there any benefit to turning my phone off at night? does it need to "rest"?
No. Modern smartphones are designed so that they only need to be rebooted for major upgrades. Turning yours off at other times makes absolutely no difference, other than saving a little battery when it's not plugged in.
[ "More recent research suggests that strobe lights are not effective at waking sleeping adults with hearing loss and suggest that a different alarm tone is much more effective. Individuals in the hearing loss community are seeking changes to improved awakening methods.\n", "The energy-saving sleep state powers off the display and other vehicle electronics, after the car goes to sleep. This increases the time it takes the touchscreen and instrument panel to become usable. This mode can decrease the loss of the car's range when not being used to per day, .\n", "The blue light that is emitted from the screens of phones, computers and other devices stops the production of melatonin, the hormone that controls the sleep-wake cycle of the circadian rhythm. Reducing the amount of melatonin produced makes it harder to fall and stay asleep. In a 2011 poll conducted by the National Sleep Foundation, it reported that approximately 90% of Americans used technology in the hour before bed. The poll noted that young adults and teenagers were more likely to use cell phones, computers, and video game consoles. Additionally, the authors of the poll found that technology use was connected to sleep patterns. 22% of participants reported going to sleep with cell phone ringers on in their bedroom and 10% reported awakenings in at least a few nights per week due to their cell phones' ringers. Among those with the cell phone ringers on, being awakened by their cell phone was correlated to difficulty sustaining sleep.\n", "Exercise is an activity that can facilitate or inhibit sleep quality; people who exercise experience better quality of sleep than those who do not, but exercising too late in the day can be activating and delay falling asleep. Increasing exposure to bright and natural light during the daytime and avoiding bright light in the hours before bedtime may help promote a sleep-wake schedule aligned with nature's daily light-dark cycle.\n", "BULLET::::- Onset of sleep from time the lights were turned off: this is called \"sleep onset latency\" and normally is less than 20 minutes. (Note that determining \"sleep\" and \"awake\" is based solely on the EEG. Patients sometimes feel they were awake when the EEG shows they were sleeping. This may be because of sleep state misperception, drug effects on brain waves, or individual differences in brain waves.)\n", "BULLET::::- Caregivers could try letting patients choose their own sleeping arrangements each night, wherever they feel most comfortable sleeping, as well as allow for a dim light to occupy room to alleviate confusion associated with an unfamiliar place.\n", "The blue wavelength of light from back-lit tablets may impact one's ability to fall asleep when reading at night, through the suppression of melatonin. Experts at Harvard Medical School suggest limiting tablets for reading use in the evening. Those who have a delayed body clock, such as teenagers, which makes them prone to stay up late in the evening and sleep later in the morning, may be at particular risk for increases in sleep deficiencies. A PC app such as F.lux and Android apps such as CF.lumen and Twilight attempt to decrease the impact on sleep by filtering blue wavelengths from the display. iOS 9.3 includes Night Shift that shifts the colors of the device's display to be warmer during the later hours.\n" ]
how do you make a car more "reliable"
In general, reliable cars come from years of iterative design and improvement. For example, you might design a car that ends up having 10 common faults. You fix these faults in the next redesign of the car and release it to the market. Consumers then discover another 5 faults, you fix those for the next redesign, and so on. This is how companies like Honda work; they also (in general) make only small changes between models so that reliability doesn't suffer hugely. You also tend to find that reliability falls when a car company gives a car a huge overhaul with many changes at once. Car companies also put all components through an absolutely huge amount of testing: rigorous stress testing, testing in all climates, driving for miles over rattle strips to shake bits of the dash loose, soaking in gallons of water to check for leaks, leaving parts in the desert to check UV degradation, and so on.
[ "Reliability is a major contributor to brand or company image, and is considered a fundamental dimension of quality by most end-users. For example, recent market research shows that, especially for women, reliability has become an automobile's most desired attribute.\n", "The engine and the car levels determine how fast and how reliable the car will be in race for the driver. Applying more funds into the engine and fewer in the car (chassis) means that the car will be faster and more reliable but with a fragile chassis. Applying the reverse settings will mean that the car might be more resistant in chassis, but slower and more prone to engine failures.\n", "Good ride quality provides comfort for the people inside the car, minimises damage to cargo and can reduce driver fatigue on long journeys in uncomfortable vehicles, and also because road disruption can impact the driver's ability to control the vehicle.\n", "The Reliability Index has been running since 2000. Data from Warranty Direct's paid claims is used to establish the reliability of cars. This information is used to rank the car manufacturers and models by reliability, and allocate a \"Warranty Direct Rating\". The results are released in association with \"What Car?\".\n", "If one was able to design cars for specific purposes, they can be tuned for much greater efficiency. The vast majority of car trips are short and low-speed; cars designed for this role can be far more efficient than the generalist vehicles generally used. However, the low ownership of specific-purpose vehicles, like motorcycles, is a good indication of the basic problem: people don't want to have to buy two vehicles to serve a single need: transportation. This has limited other forms of transit to specific roles: aircraft are used for long-distances, trains for inter-city freight and travel, and electric vehicles for known routes where power can be provided at all times.\n", "This is how all major CAM systems do it these days because it works without failing no matter what the complexity and geometry of the model, and can be made fast later. Reliability is far more important than efficiency.\n", "For ordinary production cars, manufactures err towards deliberate understeer as this is safer for inexperienced or inattentive drivers than is oversteer. Other compromises involve comfort and utility, such as preference for a softer smoother ride or more seating capacity.\n" ]
why does the body need to be trained for cardio? what does your body do when at first you can’t run 1 mile but after a while you can run 10?
None of the three answers so far actually addresses the question, IMHO. I’ll take a stab at it: Your body’s constantly trying to balance a bunch of finite resources. If your heart doesn’t need to pump much blood all the time, the energy spent maintaining the heart muscle to do that would be better spent somewhere else. Cardio training (and all other muscle training) is just telling the body that it needs to start devoting some of its resources to those muscle groups now.
[ "Bradycardia is not necessarily problematic. People who regularly practice sports may have sinus bradycardia, because their trained hearts can pump enough blood in each contraction to allow a low resting heart rate. Sinus bradycardia can also be an adaptive advantage; for example, diving seals may have a heart rate as low as 12 beats per minute, helping them to conserve oxygen during long dives.\n", "The athlete's heart is associated with physiological remodeling as a consequence of repetitive cardiac loading. Athlete's heart is common in athletes who routinely exercise more than an hour a day, and occurs primarily in endurance athletes, though it can occasionally arise in heavy weight trainers. The condition is generally considered benign, but may occasionally hide a serious medical condition, or may even be mistaken for one.\n", "For this leg's Roadblock, one team member had to perform a boxing workout. After first wrapping their hands properly, the team member had to punch a punching bag and then jump rope for 60 seconds on each exercise without stopping. When they completed the workout, their boxing trainer would give them their next clue.\n", "Haldeman stated in an article posted on Muscle Foods USA's website: \"I found fitness more by circumstance than anything else. I had an injury (2009) that resulted in my inability to walk or run for quite some time. It was recommended to me that I try weight training during this time to keep in shape and help reduces my recovery time. At the time I had zero interest in lifting weights or making a daily trip to the gym. It didn’t take long, however, until I found that I loved what resistance training can do to your body. After only a few short weeks I was hooked and started to learn everything I could about sculpting and building my physique….and although a few years have passed since then I am just as excited learning about everything health and fitness related ! It is a life style that I have completely submerged myself in!\" \n", "In support of this, placebos (which must be mediated by a central process) have a powerful effect upon not only fatigue in prolonged exercise, but also upon short term endurance exercise such as sprint speed, the maximum weight that could be lifted with leg extension, and the tolerance of ischemic pain and power when a tourniqueted hand squeezes a spring exerciser 12 times.\n", "Athlete's heart should not be confused with bradycardia that occurs secondary to Relative energy deficiency in sport or Anorexia nervosa, which involve slowing of metabolic rate and sometimes shrinkage of the heart muscle and reduced heart volume.\n", "If your doctor deems it necessary, a stress TTE may be performed. This can be accomplished by either exercising on a bike or treadmill, or by medicine given through an IV along with a contrast agent to make your bodily fluids show up brighter. This allows a comparison between your heart at rest and your heart when it is beating at a faster rate. (Transthoracic Echocardiogram, n.d.)\n" ]
If I blow my nose with a piece of toilet paper, would it be more environmentally friendly to put that paper in the toilet (to be flushed later) or in the garbage to be hauled away by a truck?
I have the same question regarding vegetable matter in the garbage disposal vs. garbage bin (ignoring composting).
[ "From an environmental standpoint, bidets can reduce the need for toilet paper. Considering that an average person uses only of water for cleansing once using a bidet, much less water is used than for making toilet paper. An article in \"Scientific American\" concluded that using a bidet is \"much less stressful on the environment than using paper\". \"Scientific American\" has reported that if the US switched to using bidets, 15 million trees could be saved every year.\n", "In some parts of the world, especially before toilet paper was available or affordable, the use of newspaper, telephone directory pages, or other paper products was common. The widely distributed Sears Roebuck catalog was also a popular choice until it began to be printed on glossy paper (at which point some people wrote to the company to complain). With flush toilets, using newspaper as toilet paper is liable to cause blockages.\n", "BULLET::::- Toilet paper is a soft paper product (tissue paper) used to maintain personal hygiene after human defecation or urination. However, it can also be used for other purposes such as absorbing spillages or craft projects. Toilet paper in different forms has been used for centuries, namely in China. The ancient Greeks used clay and stone; the Romans, sponges and salt water. But according to a CNN article, the idea of a commercial product designed solely to wipe a person's buttocks was by New York City entrepreneur Joseph Gayetty, who in 1857, invented aloe-infused sheets of manila hemp dispensed from Kleenex-like boxes. However, Gayetty's toilet paper was a failure for several reasons. Americans soon grew accustomed to wiping with the Sears Roebuck catalog, they saw no need to spend money on toilet paper when catalogs for their use came in the mail for free, and because during the 19th century, it was a social taboo to openly discuss bathroom hygiene with others. Toilet paper took its next leap forward in 1890, when two brothers named Clarence and E. Irvin Scott of the Scott Paper Company co-invented rolled toilet paper.\n", "In the United States, toilet paper has been the primary tool in a prank known as \"TP-ing\" (pronounced Teepeeing). TP-ing, or \"toilet papering\", is often favored by adolescents and is the act of throwing rolls of toilet paper over cars, trees, houses and gardens, causing the toilet paper to unfurl and cover the property, creating an inconvenient mess.\n", "In rural areas of developing countries or during camping trips, sticks, stones, leaves, corn cobs and similar are also used for anal cleansing. This may be due to the unavailability of toilet paper and similar paper products or water.\n", "Advice columnist Ann Landers (Eppie Lederer) was once asked which way toilet paper should hang. She answered \"under\", prompting thousands of letters in protest; she then recommended \"over\", prompting thousands more. She reflected that the 15,000 letters made toilet paper the most controversial issue in her column's 31-year history, wondering, \"With so many problems in the world, why were thousands of people making an issue of tissue?\"\n", "Toilet paper orientation has been used rhetorically as the ultimate issue that government has no business dictating, in letters to the editor protesting the regulation of noise pollution and stricter requirements to get a divorce. 
In 2006, protesting New Hampshire's ban on smoking in restaurants and bars, representative Ralph Boehm (R–Litchfield) asked \"Will we soon be told which direction the toilet paper must hang from the roll?\"\n" ]
Slavery seems to have a very prominent role in the popular conception of the Roman Empire, then sort of seems to fade away and become relevant again during the 17th-19th centuries. To what extent and how was slavery practiced in Western Europe following the collapse of the Western Roman State?
Hello, [here](_URL_0_) is a post I wrote a little while ago about post-Roman slavery in the British Isles. I hope it's useful and I'm happy to answer further questions :)
[ "Slavery was common in Classical Greece and in the earlier Roman Empire. It was legal in the Byzantine Empire but became rare after the first half of 7th century. From 11th century, semi-feudal relations largely replaced slavery. Under the influence of Christianity, a shift in the view of slavery is noticed, which by the 10th century transformed gradually a slave-object into a slave-subject. It was also seen as \"an evil contrary to nature, created by man's selfishness\", although slavery was permitted by the law.\n", "In the Eastern Roman (Byzantine) Empire, slaves became quite rare by the first half of the 7th century A shift in the view of slavery is noticed, which by the 10th century transformed gradually a slave-object into a slave-subject. From 11th century, semi-feudal relations largely replaced slavery, seen as \"an evil contrary to natury, created by man's selfishness\", although slavery was permitted by the law.\n", "Roman culture had distinct values on human life which are very different from those now prevailing in Europe and, in general, in the world. The system of slavery, made it possible for a man to lose his status as \"free man\" for various reasons such as: crime, debt or military defeat. After losing their rights, they were coerced into participating in a form of entertainment which today could be considered excessively brutal, but which at that time was one of the most powerful attractions of urban life: gladiatorial combat. Not only slaves or prisoners were involved in these kinds of struggles (although the vast majority of gladiators were), but some also had career as a gladiator who fought for money, favors or glory. Even some emperors occasionally ventured down to the sand to play this bloody \"sport\", as in the case of the emperor Commodus.\n", "The social and legal status of slaves in the Roman state was different in different epochs. In the time of old civil law (ius civile Quiritium) slavery had a patriarchal shape (a slave did the same job and lived under the same conditions as his master and family). After Rome's victorious wars, from the 3rd century BC, huge numbers of slaves came to Rome, and that resulted in slave trade and increased exploitation of slaves. From that time on, a slave became only a thing (res)- \"servi pro nullis habentur\".\n", "Slavery, for example, was part of the empire-wide \"ius gentium\" because slavery was known and accepted as a normal social institution in all parts of the known world. Nevertheless, as forcing people to work for others was a human-produced condition, it was not considered natural and, hence, was part of the \"ius gentium\" but not the \"ius naturale\". The \"ius naturale\" of the Roman jurists is not the same as implied by the modern sense of natural law as something derived from pure reason. As Sir Henry James Sumner Maine puts it, \"it was never thought of as founded on quite untested principles. The notion was that it underlay existing law and must be looked for through it\".\n", "Contrary to suppositions of historians such as Marc Bloch, slavery thrived as an institution in medieval Christian Iberia. Slavery existed in the region under the Romans, and continued to do so under the Visigoths. From the fifth to the early 8th century, large portions of the Iberian Peninsula were ruled by Christian Visigothic Kingdoms, whose rulers worked to codify human bondage. In the 7th century, King Chindasuinth issued the Visigothic Code (Liber Iudiciorum), to which subsequent Visigothic kings added new legislation. 
Although the Visigothic Kingdom collapsed in the early 8th century, portions of the Visigothic Code were still observed in parts of Spain in the following centuries. The Code, with its pronounced and frequent attention to the legal status of slaves, reveals the continuation of slavery as an institution in post-Roman Spain.\n", "The chaos following the barbarian invasions of the Roman Empire made the taking of slaves habitual throughout Europe in the early Middle Ages. Roman practices continued in many areas the Welsh laws of Hywel the Good included provisions dealing with slaves and Germanic laws provided for the enslavement of criminals, as when the Visigothic Code prescribed enslavement for those who could not pay the financial penalty for their crime and as a punishment for certain other crimes. Such criminals would become slaves to their victims, often with their property.\n" ]
what are the rules for fan-made merchandise? if it is illegal like I think it is, how do companies like Etsy and Redbubble get away with it?
This would fall under copyright, and the gray area of copyright is confusing and hard to navigate. However, here is some basic information on this kind of scenario. Before going further: I'm not a lawyer, so please don't take my word as gospel. What this boils down to is that you're going to be at the mercy of the copyright holder of whatever product you're making unlicensed merchandise of. Meaning that if they catch wind of your product and don't like you doing it, they can shut you down. Thankfully, with copyright, this isn't always going to happen. They are not legally compelled to enforce their copyright on all cases. That said, avoid using any form of trademark. Unlike copyrights, trademarks MUST be protected in order to remain trademarks, and must be done universally. Meaning that if the trademark holder in question finds your works, they are legally obligated to get you to stop producing those works. Though if experience tells me anything, most companies will send out C & Ds if they've got problems with your works, rather than take you to court to start. Litigation is costly for both sides, and stamps are cheap, emails cheaper. If you wind up getting a C & D, I'd urge complying to save you the hassle of a legal battle.
[ "Most fan labor products are derivative works, in that they are creative additions or modifications to an existing copyrighted work, or they are original creations which are inspired by a specific copyrighted work. Some or all of these works may fall into the legal category of transformative works (such as a parody of the original), which is protected as fair use under U.S. copyright law. However, corporations continue to ask fans to stop engaging with their products in creative ways.\n", "In addition to the official merchandise produced and sold through entertainment companies, fans themselves produce a great deal of merchandise. In some instances, fans have responded to poor-quality official merchandise by producing their own higher-quality products, which are often cheaper. Official goods have an advantage as a commodity, while fan-produced merchandise may correspond to more specialized tastes, such as a fan-made photobook focusing on a particular member of a band. Producers and sellers of fan-made merchandise are known as \"home masters\".\n", "Recent years have seen increasing legal action from media conglomerates, who are actively protecting their intellectual property rights. Because of new technologies that make media easier to distribute and modify, fan labor activities are coming under greater scrutiny. Some fans are finding themselves the subjects of cease and desist letters which ask them to take down the offending materials from a website, or stop distributing or selling an item which the corporation believes violates their copyright. As a result of these actions by media companies, some conventions now ban fan art entirely from their art shows, even if not offered for sale, and third party vendors may remove offending designs from their websites.\n", "Companies, however, react to fan activities in very different ways. While some companies actively court fans and these type of activities (sometimes limited to ways delineated by the company itself), other companies attempt to highly restrict them.\n", "However, some fans engage in for-profit exchange of their creations in what is known as the \"gray market\". The gray market operates mainly through word of mouth and \"under the table\" sales, and provides products of varying quality. Even though these are commercial activities, it is still expected that fan vendors will not make a large amount of profit, charging just enough to cover expenses. Some vendors attempt to not mark up their products at all, and will use that information in their promotional information, in an attempt to secure the confidence of other fans who may look down at fans making a profit.\n", "Some artists, such as Girl Talk and Nine Inch Nails, use copyleft licenses such as the Creative Commons Attribution-NonCommercial-ShareAlike license that don't allow commercial use. In this way they can choose to sell their creations without having to compete with others selling copies of the same works. However, some argue that the Attribution-NonCommercialShareAlike license is not a true copyleft.\n", "Fans who do their creative work out of paying respect to the original media property or an actor or to the fandom in general gain cultural capital in the fandom. However, those who attempt to sell their creative products will be shunned by other fans, and subject to possible legal action. Fans often classify other fans trying to sell their items for profit motives as \"hucksters\" rather than true fans.\n" ]
why does chronic sleep deprivation cause erectile dysfunction?
Sleep deprivation is a huge stress on your body. Evolution has built us that way: if you are that stressed, the last thing your body needs at that moment is sex.
[ "Sleep disturbance is not only associated with the onset of manic or hypomanic episodes but also displays a residual symptom of manic and depressive episodes. They are associated with residual depressive symptoms and perceived cognitive performance and can thereby negatively influence the functioning and recovery of a patient. This is one reason why therapy programs like the Interpersonal and social rhythm therapy aim to reduce sleep disturbances.\n", "As an autonomic nervous system response, an erection may result from a variety of stimuli, including sexual stimulation and sexual arousal, and is therefore not entirely under conscious control. Erections during sleep or upon waking up are known as nocturnal penile tumescence (NPT). Absence of nocturnal erection is commonly used to distinguish between physical and psychological causes of erectile dysfunction and impotence.\n", "Counterintuitively, penile erections during sleep are not more frequent during sexual dreams than during other dreams. The parasympathetic nervous system experiences increased activity during REM sleep which may cause erection of the penis or clitoris. In males, 80% to 95% of REM sleep is normally accompanied by partial to full penile erection, while only about 12% of men's dreams contain sexual content.\n", "Sleep deprivation is also associated with ASC, and can provoke seizures due to fatigue. Sleep deprivation can be chronic or short-term depending on the severity of the patient's condition. Many patients report hallucinations because sleep deprivation impacts the brain. An MRI study conducted at Harvard Medical school in 2007, found that a sleep-deprived brain was not capable of being in control of its sensorimotor functions, leading to an impairment to the patient's self-awareness. Patients were also prone to be a lot clumsier than if had they not been experiencing sleep deprivation.\n", "Sleep deprivation is known to have negative effects on the brain and behavior. Extended periods of sleep deprivation often results in the malfunctioning of neurons, directly affecting an individual's behavior. While muscles are able to regenerate even in the absence of sleep, neurons are incapable of this ability. Specific stages of sleep are responsible for the regeneration of neurons while others are responsible for the generation of new synaptic connections, the formation of new memories, etc.\n", "One study has linked lack of sleep to a reduction in rodent hippocampal neurogenesis. The proposed mechanism for the observed decrease was increased levels of glucocorticoids. It was shown that two weeks of sleep deprivation acted as a neurogenesis-inhibitor, which was reversed after return of normal sleep and even shifted to a temporary increase in normal cell proliferation. More precisely, when levels of corticosterone are elevated, sleep deprivation inhibits this process. Nonetheless, normal levels of neurogenesis after chronic sleep deprivation return after 2 weeks, with a temporary increase of neurogenesis.\n", "Erections of the penis (nocturnal penile tumescence or NPT) normally accompany REM sleep in rats and humans. If a male has erectile dysfunction (ED) while awake, but has NPT episodes during REM, it would suggest that the ED is from a psychological rather than a physiological cause. In females, erection of the clitoris (nocturnal clitoral tumescence or NCT) causes enlargement, with accompanying vaginal blood flow and transudation (i.e. lubrication). 
During a normal night of sleep the penis and clitoris may be erect for a total time of from one hour to as long as three and a half hours during REM.\n" ]
why isn't worshipping jesus considered idol worship in the christian faith?
In Catholicism, Jesus is part of the Holy Trinity: Father, Son, and Holy Spirit, all one God in three persons. Weird, I admit, but that's my explanation.
[ "Christian deists consider themselves to be disciples, or students, of Jesus because Jesus taught the natural laws of God. But Christian deists believe that Jesus was only human. Jesus had to struggle with his own times of disappointment, sorrow, anger, prejudice, impatience, and despair, just as other human beings struggle with these experiences. Jesus never claimed to be perfect but he was committed to following God's natural laws of love.\n", "Scholars have discussed whether idol worshipers made a distinction between a spiritual being that existed independently of idols and the physical idols themselves. Some scholars opine that the pagans in the Hebrew Bible did not literally worship the objects themselves, so that the issue of idolatry is really concerned with whether one is pursuing a \"false god\" or \"the true God\". In addition to the spiritual aspect of their worship, peoples in the Ancient Near East took great care to physically maintain their cult idols and thought that the instructions for their manufacture and maintenance came from the spirit of the god. Magical ceremonies were performed through which the people believed the spirit of the god came to live in the physical idol. When idols were captured or not cared for, the associated religious practices also flagged. So while scholars may debate the relative importance of belief in the physical object or the spirit it represented or housed, in practice the distinction was not easy to discern.\n", "Judaism's animosity towards what they perceived as idolatry was inherited by Jewish Christianity. Although Jesus discussed the Mosaic Law in the Sermon on the Mount, he does not speak of issues regarding the meaning of the commandment against idolatry. His teachings, however, uphold that worship should be directed to God alone (Matthew 4:10 which is itself a quote of Deuteronomy 6:13, see also Shema in Christianity, Great Commandment, and Ministry of Jesus).\n", "According to the psalmist and the prophet Isaiah, those who worship inanimate idols will be like them, that is, unseeing, unfeeling, unable to hear the truth that God would communicate to them. Paul the Apostle identifies the worship of created things (rather than the Creator) as the cause of the disintegration of sexual and social morality in his letter to the Romans. Although the commandment implies that the worship of God is not compatible with the worship of idols, the status of an individual as an idol worshiper or a God worshiper is not portrayed as predetermined and unchangeable in the Bible. When the covenant is renewed under Joshua, the Israelites are encouraged to throw away their foreign gods and \"choose this day whom you will serve\". King Josiah, when he becomes aware of the terms of God's covenant, zealously works to rid his kingdom of idols. According to the book of Acts, Paul tells the Athenians that though their city is full of idols, the true God is represented by none of them and requires them to turn away from idols.\n", "Christian have sometimes been accused of idolatry, especially in regards in the iconoclastic controversy. However, Orthodox and Roman Catholic Christian forbid worship of icons and relics as divine in themselves, while honouring those represented by them is accepted and philosophically justified by the Second Council of Constantinople.\n", "However, the worship of Jesus, or the ascribing of partners to God (known as \"shirk\" in Islam and as \"shituf\" in Judaism), is typically viewed as the heresy of idolatry by Islam and Judaism. 
Judaism and Islam see the incarnation of God into human form as a heresy.\n", "The Christian view of idolatry may generally be divided into two general categories: the Catholic and Eastern Orthodox view which accepts the use of religious images, and the views of many Protestant churches that considerably restrict their use. However, many Protestants have used the image of the cross as a symbol.\n" ]
neurologically speaking, why do people learn in different ways (ie kinesthetic, visual, etc)?
Some scientists say the whole concept is bogus. NPR recently did a [story](_URL_0_) on this topic that you might like.
[ "All healthy, normally developing human beings learn to use language. Children acquire the language or languages used around them: whichever languages they receive sufficient exposure to during childhood. The development is essentially the same for children acquiring sign or oral languages. This learning process is referred to as first-language acquisition, since unlike many other kinds of learning, it requires no direct teaching or specialized study. In \"The Descent of Man\", naturalist Charles Darwin called this process \"an instinctive tendency to acquire an art\".\n", "Cultural learning is made possible by a deep understanding of social cognition. Humans have the unique capacity to identify and relate to others and view them as intentional beings. Humans are able to understand that others have intentions, goals, desires, and beliefs. It is this deep understanding, this cognitive adaptation, that allows humans to learn from and with others through cultural transmission (Tomasello, 1999).\n", "Humans also tend to follow “communicative” ways of learning as seen in a study by Hanna Marno, researcher at the International School for Advanced Studies, which showed that infants followed an adult's action of pressing a button to light up a lamp based on the adult's “non-verbal (eye contact) and verbal cues.”\n", "The empiricist views suggest that language can be learned with mental processes originally meant for other modes of cognition, and that there need not be a concept of innateness in order to account for the difference between the input a child receives versus the language they develop. Natural languages display statistical cues that children are able to learn from. Some argue that the ability to learn by statistical pattern matching can solve problems that nativists argue require innate knowledge.\n", "The ability to learn and understand language is an extremely complex process. Language is acquired within the first few years of life, and all humans under normal circumstances are able to acquire language proficiently. A major driving force in the theoretical linguistic field is discovering the nature that language must have in the abstract in order to be learned in such a fashion. Some of the driving research questions in studying how the brain itself processes language include: (1) To what extent is linguistic knowledge innate or learned?, (2) Why is it more difficult for adults to acquire a second-language than it is for infants to acquire their first-language?, and (3) How are humans able to understand novel sentences?\n", "According to Steven Pinker, who builds on the work by Noam Chomsky, the universal human ability to learn to talk between the ages of 1 – 4, basically without training, suggests that language acquisition is a distinctly human psychological adaptation (see, in particular, Pinker's \"The Language Instinct\"). Pinker and Bloom (1990) argue that language as a mental faculty shares many likenesses with the complex organs of the body which suggests that, like these organs, language has evolved as an adaptation, since this is the only known mechanism by which such complex organs can develop.\n", "It has also been demonstrated that brain processing responds to the external environment. Learning, both of ideas and behaviors, appears to be coded in brain processes. It also appears that in several simplified cases this coding operates differently, but in some ways equivalently, in the brains of men and women. 
For example, both men and women learn and use language; however, bio-chemically, they appear to process it differently. Differences in female and male use of language are likely reflections \"both\" of biological preferences and aptitudes, \"and\" of learned patterns.\n" ]
What does it actually mean to “die peacefully” in your sleep? Is this even possible?
During REM sleep the body is paralyzed to prevent movement during the dream state. When you are having trouble breathing due to lack of oxygen, the body can jerk you awake using a reflex. If you combine poor breathing with REM-state paralysis, the body may not be able to wake you from your dream, and you pass away while asleep. Keep in mind this is only one possibility. No blood flow (e.g., from a heart attack) means brain damage from hypoxia.
[ "In this context, sleep may also be considered a metaphor for death, both as an eventual equalizer of all things, and for the allusion to a \"crossing over,\" as in a river, a prevalent theme in Western spiritual beliefs.\n", "\"Sleepless\" was written by Hannah Barker and Liam Jarvis, and was performed in Shoreditch Town Hall in 2016. The story, as stated on Shoreditch Town Halls website \"inspired by the extraordinary true story of a family cursed with a rare genetic disease that cruelly deprives members of sleep until they die, a story that sits at the crossroads of two cutting edge areas of science: sleep research and prion theory, and begs the broader question: how do we decide the value of a human life?\"\n", "BULLET::::- Sleep is the emancipation of the mind: While we are asleep our spirit loosens its ties to matter and wanders the spiritual world. Because of it, it is theoretically possible—although uncommon—to see the spirit of a living person as an apparition.\n", "BULLET::::- An unconscious desire to punish or cause suffering to someone one hates or envies may lead the spirit of a living person to use its relative freedom during sleep to attempt to obsess another living person.\n", "You're very sleepy and very, very tired and you're sort of nodding off to sleep but something's telling you to keep waking up. This was the thing that kept everybody going through the hunger strike in trying to live or last out as long as possible. I knew death was close but I wasn't afraid to die – and it wasn't any sort of courageous or glorious thing. I think death would have been a release. You can never feel that way again. It's not like tiredness. It's an absolute, total, mental and physical exhaustion. It's literally like slipping into death.\n", "He said: \"Well I thought that the most dreadful thing that could happen to anybody, would be not to be allowed to sleep so that just as you're dropping off there'd be a 'Dong' and you'd have to keep awake; you’re sinking into the ground alive and it's full of ants; and the sun is shining endlessly day and night and there is not a tree … there’s no shade, nothing, and that bell wakes you up all the time and all you've got is a little parcel of things to see you through life.\" He was referring to the life of the modern woman. Then he said: \"And I thought who would cope with that and go down singing, only a woman.\"\n", "BULLET::::- \"I'm starting to think that I'm dead. | Doesn't it make sense that death too would be wrapped in dream? That after death, your conscious life would continue in what might be called a dream body? It would be the same dream body you experience in your everyday dream life. Except that in the post-mortal state, you could never again wake up.\"\n" ]
how are some cars so much more expensive than other cars? e.g. a $1,000,000 car vs. a $20,000 car.
It's not just the style; it's the materials they use, the speeds they can reach (for sports cars), the size of the vehicle, the rarity, etc. It's like asking why they can't just kill more snakes to make snakeskin boots so they aren't $4,000. Snakes are rare and so is carbon fiber. Snakeskin boots are hard to make and so is carbon fiber. Thus, a vehicle with a full carbon fiber chassis is worth much more than your standard metal-chassis Honda, for the same reason snakeskin boots cost more than your Vans.
[ "The increasing popularity of these commercial vehicles in the late 1990s and early 2000s, however, pushed their average price to nearly double the average passenger car cost. In response, the 2002 Tax Act increased this \"Section 179 depreciation deduction\" to US$75,000, and it rose again to US$102,000 for the 2004 tax year. This is more than three times the current average cost of a passenger car in the United States and covers a large number of luxury models, including the Hummer H2. In late 2006, the deduction was again reduced to US$25,000 for vehicles with GVWR between and .\n", "It was massive, with a wheelbase. The price tag of $17,000-$25,000 made it the most expensive American car of the era; a Rolls-Royce sold for less than $10,000, American's highest-price model was US$5250, the Lozier Big Six limousines and landaulettes US$6,500 (tourers and roadsters were US$5,000), and the Lozier Light Six Metropolitan tourer and runabout bottomed at US$3,250. By contrast, the high-volume Oldsmobile Runabout was US$650 and Western's Gale Model A was US$500.\n", "The IRS considers that the average US automobile has a total cost of 0.58 USD/mile, around 0.32 EUR/km. According to the American Automobile Association the average driver of the average sedan, spends totally approximately 8700 USD per year, or 720 USD per month, to own and operate their vehicle.\n", "Automakers have said that small, fuel-efficient vehicles cost the auto industry billions of dollars. They cost almost as much to design and market but cannot be sold for as much as larger vehicles such as SUVs, because consumers expect small cars to be inexpensive. In 1999, \"USA Today\" reported small cars tend to depreciate faster than larger cars, so they are worth less in value to the consumer over time. However, 2007 Edmunds depreciation data show that some small cars, primarily premium models, are among the best in holding their value.\n", "You can follow the family's thinking by looking at the rationale for each judgment. Whenever a car that is under budget is compared with one that is over budget by more than $1,000, the former is extremely preferred. For cars under budget, a $1,000 less expensive car is slightly preferred, a $5,000 one is strongly preferred, and a $6,000 one is even more strongly preferred. When both cars are well over budget (comparison #6), they are equally preferred, which is to say they are equally undesirable. Because budget status and absolute price difference are enough to make each comparison, the ratio of prices never enters into the judgments.\n", "The price of the second batch is €130 million (€1.85 m. per vehicle) and is somewhat higher than price for the initial 70 ordered in 2003 (€112 million). However, due to lower production costs the increased price is still very competitive compared to other European equivalents, which are priced up to €2.5 million each per similar vehicle.\n", "It has a high resale and used-car value; the Kelley Blue Book used car retail price (the price an individual might expect to pay for one from a dealer) for a model in excellent condition with low mileage exceeds the original retail price of the car in many cases, making it one of a few recent cars that have actually approached an increase in value. This premium can be explained mostly due to scarcity, both of the cars themselves due to low production and importation, and especially ones that still have low mileage.\n" ]
How do things like Bop It manage to generate pseudorandom numbers?
A very common way to generate random numbers in a small embedded device like that is to have a fairly fast free-running counter. When the device receives some input (like the player pressing the start button) the counter is read, and that can provide a dozen or so bits of entropy, which is plenty to make a game like Bop It random. (If it needed more, it could read the counter every time you activate one of the thingies on the game, and accumulate entropy.) It probably uses this entropy to seed a simple PRNG like an LFSR (sketched below). Building a reasonably good hardware random number source is not hard, but it would add a few cents or tens of cents to the toy's cost, and at the volumes those things are made that's a significant price. The free-running-counter approach can usually be implemented very cheaply.
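Here is a minimal Python sketch of that "free-running counter seeds an LFSR" idea. The 16-bit register width, the tap mask, the action list, and the use of the system clock as the "counter" are all illustrative assumptions; a real toy's firmware would read a hardware timer register instead.

```python
# Minimal sketch: how a toy might turn a free-running timer into
# "random" game choices. Not Bop It's actual firmware.
import time

class Lfsr16:
    """16-bit Galois LFSR (taps 16,14,13,11 -> toggle mask 0xB400)."""
    def __init__(self, seed: int):
        self.state = seed & 0xFFFF or 1  # an LFSR must never be all zeros

    def next_bit(self) -> int:
        lsb = self.state & 1
        self.state >>= 1
        if lsb:
            self.state ^= 0xB400
        return lsb

    def next_int(self, bits: int = 8) -> int:
        n = 0
        for _ in range(bits):
            n = (n << 1) | self.next_bit()
        return n

def read_counter() -> int:
    """Stand-in for a fast free-running hardware counter: the low 16
    bits of a nanosecond clock, read at an unpredictable moment."""
    return time.perf_counter_ns() & 0xFFFF

# Seed when the player presses Start -- the press time supplies entropy.
rng = Lfsr16(read_counter())
actions = ["bop it", "twist it", "pull it"]
for _ in range(5):
    # (modulo has slight bias; harmless for a toy)
    print(actions[rng.next_int() % len(actions)])
```

The LFSR itself is fully deterministic; all the unpredictability comes from when the human happened to press the button, which is exactly the trick the answer describes.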
[ "A pseudorandom number generator (PRNG), also known as a deterministic random bit generator (DRBG), is an algorithm for generating a sequence of numbers whose properties approximate the properties of sequences of random numbers. The PRNG-generated sequence is not truly random, because it is completely determined by an initial value, called the PRNG's \"seed\" (which may include truly random values). Although sequences that are closer to truly random can be generated using hardware random number generators, \"pseudorandom\" number generators are important in practice for their speed in number generation and their reproducibility.\n", "A pseudorandom variable is a variable which is created by a deterministic algorithm, often a computer program or subroutine, which in most cases takes random bits as input. The pseudorandom string will typically be longer than the original random string, but less random (less entropic in the information theory sense). This can be useful for randomized algorithms.\n", "Random number generators are very useful in developing Monte Carlo-method simulations, as debugging is facilitated by the ability to run the same sequence of random numbers again by starting from the same \"random seed\". They are also used in cryptography – so long as the \"seed\" is secret. Sender and receiver can generate the same set of numbers automatically to use as keys.\n", "Since pseudorandom numbers are in fact deterministic, a given seed will always determine the same pseudorandom number. This attribute is used in security, in the form of rolling code to avoid replay attacks, in which a command would be intercepted to be used by a thief at a later time.\n", "Pseudorandom functions are not to be confused with pseudorandom generators (PRGs). The guarantee of a PRG is that a \"single\" output appears random if the input was chosen at random. On the other hand, the guarantee of a PRF is that \"all its outputs\" appear random, regardless of how the corresponding inputs were chosen, as long as the \"function\" was drawn at random from the PRF family.\n", "No pseudorandom number generator can produce more distinct sequences, starting from the point of initialization, than there are distinct seed values it may be initialized with. Thus, a generator that has 1024 bits of internal state but which is initialized with a 32-bit seed can still only produce 2 different permutations right after initialization. It can produce more permutations if one exercises the generator a great many times before starting to use it for generating permutations, but this is a very inefficient way of increasing randomness: supposing one can arrange to use the generator a random number of up to a billion, say 2 for simplicity, times between initialization and generating permutations, then the number of possible permutations is still only 2.\n", "Note that an extractor has some conceptual similarities with a pseudorandom generator (PRG), but the two concepts are not identical. Both are functions that take as input a small, uniformly random seed and produce a longer output that \"looks\" uniformly random. Some pseudorandom generators are, in fact, also extractors. (When a PRG is based on the existence of hard-core predicates, one can think of the weakly random source as a set of truth tables of such predicates and prove that the output is statistically close to uniform.) 
However, the general PRG definition does not specify that a weakly random source must be used, and while in the case of an extractor, the output should be statistically close to uniform, in a PRG it is only required to be computationally indistinguishable from uniform, a somewhat weaker concept.\n" ]
how did the first land-bound life forms know we needed water to survive?
How did you know to suck your mother's teat? How is it that all puppies, kittens, and even little babies know to flee from pain? It's instinct. Evolution has built certain tendencies right into us. If we never felt like drinking water when we needed it, we'd die. If we couldn't distinguish salt water from fresh and couldn't handle one or the other? We'd die.
[ "Major human settlements could initially develop only where fresh surface water was plentiful, such as near rivers or natural springs. Throughout history, people have devised systems to make getting water into their communities and households and disposing (and later also treating) wastewater more conveniently.\n", "BULLET::::- The first known animals to live their lives entirely without oxygen – members of the phylum Loricifera – are discovered in the L'Atalante basin deep under the Mediterranean Sea. (\"Science Daily\")\n", "Although humans cannot survive on seawater, some people claim that up to two cups a day, mixed with fresh water in a 2:3 ratio, produces no ill effect. The French physician Alain Bombard survived an ocean crossing in a small Zodiak rubber boat using mainly raw fish meat, which contains about 40 percent water (like most living tissues), as well as small amounts of seawater and other provisions harvested from the ocean. His findings were challenged, but an alternative explanation was not given. In his 1948 book, \"Kon-Tiki\", Thor Heyerdahl reported drinking seawater mixed with fresh in a 2:3 ratio during the 1947 expedition. A few years later, another adventurer, William Willis, claimed to have drunk two cups of seawater and one cup of fresh per day for 70 days without ill effect when he lost part of his water supply.\n", "Although humans cannot survive on seawater alone—and, indeed, will sicken quickly if they try—some people have claimed that up to two cups a day, mixed with fresh water in a 2:3 ratio, produces no ill effect. During the 18th century, British physician Richard Russell (1687–1759) advocated the practice as part of medical therapy in his country. In the 20th century, René Quinton (1866–1925), in France, would also endorse the practice. Currently, the practice is widely used in Nicaragua and other countries, supposedly taking advantage of the latest medical discoveries.) In his 1948 book, \"Kon-Tiki\", Thor Heyerdahl reported drinking seawater mixed with fresh in a 2:3 ratio during the 1947 expedition. The French physician Alain Bombard (1924–2005) survived an ocean crossing (1952–53) in a small Zodiac rubber boat using mainly raw fish meat, which contains about 40 percent water (like most living tissues), as well as small amounts of seawater and other provisions harvested from the ocean. His findings were challenged, but an alternative explanation was not given. A few years later, an American sailor and adventurer, William Willis (1893–1968), claimed to have drunk two cups of seawater and one cup of fresh per day for 70 days without ill effect when he lost part of his water supply.\n", "It is plausible that microbial life originated on Venus if liquid water existed on its surface prior to the heating of the planet by the runaway greenhouse effect, but no longer exists. Assuming the process that delivered water to Earth was common to all the planets near the habitable zone, it has been estimated that liquid water could have existed on its surface for up to 600 million years during and shortly after the Late Heavy Bombardment, which could be enough time for simple life to form, but this figure can vary from as little as a few million years to as much as few billion. This might also have given enough time for microbial life to evolve to be aerial. 
There has been very little analysis of Venusian surface material, so it is possible that evidence of past life, if it ever existed, could be found with a probe capable of enduring Venus's current extreme surface conditions, although the resurfacing of the planet in the past 500 million years means that it is unlikely that ancient surface rocks remain, especially those containing the mineral tremolite which, theoretically, could have encased some biosignatures.\n", "This argument could have been made every time a new biological life form was found, and would have been correct every time; however, it is still possible that in the future a biological life form not requiring liquid water could be discovered.\n", "Even so, volcanic outgassing could not have accounted for the amount of water in Earth's oceans. The vast majority of the water —and arguably carbon— necessary for life must have come from the outer Solar System, away from the Sun's heat, where it could remain solid. Comets impacting with the Earth in the Solar System's early years would have deposited vast amounts of water, along with the other volatile compounds life requires onto the early Earth, providing a kick-start to the origin of life.\n" ]
is the ability to sing a natural or acquired talent?
Check out this interesting [story](_URL_0_) from Radiolab.
[ "Musical ability is inherent in almost all people, to a greater or lesser extent. However, those who develop it to a high level are generally encouraged to play an instrument or to sing at an early age. Late bloomers in music are generally composers or artists who became prominent later in life, but had displayed musical ability much earlier.\n", "Singing wasn't exactly the first talent Led discovered in himself. In fact, he used to dance a lot during school events before learning songs by Mariah Carey, Michael Jackson and Regine Velasquez in his pre adolescent years. He was a subject of bullying because he doesn't look like a typical Filipino. His curly hair, dark skin and body weight became issues of bullying in school which made him shy.\n", "Some consider that singing is not a natural process but is a skill that requires highly developed muscle reflexes, but others consider that some ways of singing can be considered as natural. Singing does not require much muscle strength but it does require a high degree of muscle coordination. Individuals can develop their voices further through the careful and systematic practice of both songs and vocal exercises. Voice teachers instruct their students to exercise their voices in an intelligent manner. Singers should be thinking constantly about the kind of sound they are making and the kind of sensations they are feeling while they are singing.\n", "Aspiring singers and vocalists must have musical skill, an excellent voice, the ability to work with people, and a sense of showmanship and drama. Additionally, singers need to have the ambition and drive to continually study and improve,\n", "It seems that the ability to sing has appeared independently at least two times in birds (possibly three): one in the common ancestor of hummingbirds and another in the common antecesor of songbirds and parrots. They all have in common a number of neuronal circuits that cannot be found in non-vocal learner species. A dN/dS analysis showed conserved evolution in 227 genes, most of which are highly expressed in the regions of the brain that control singing. Furthermore, 20% of them seemed to be regulated by singing.\n", "While there is no consensus on this point, it is certainly a widely held view among newcomer singers that the singing community is best served if newcomers learn to sing in the way that traditional singers do, at least as far as this concerns rhythm, pitch, and the procedures followed at singing.\n", "Learning to sing is an activity that benefits from the involvement of an instructor. A singer does not hear the same sounds inside his or her head that others hear outside. Therefore, having a guide who can tell a student what kinds of sounds he or she is producing guides a singer to understand which of the internal sounds correspond to the desired sounds required by the style of singing the student aims to re-create.\n" ]
What makes an atomic bomb/explosion stop expanding? Why don't atoms continue to split more atoms etc?
The short answer is they blow up. Which is to say, the stuff needed for the reaction is violently pushed apart. As a result there is only a very narrow time window for the reaction to occur. There are two types of nuclear weapons:

- Fission (uranium/plutonium bombs)
- Fusion (hydrogen bombs)

It is worth noting that a hydrogen bomb has a fission bomb built into it as a trigger. In both cases the trick is to mash things together at really high densities to make things work. In the case of a fusion bomb, the densities and energies needed are so high that you need a fission bomb to compress and heat the hydrogen enough to fuse. The problem is that when the reactions occur, the forces start pushing outward (that is the bomb's "*boom*"). As a result there is only a very small slice of time where the stuff going "*boom*" is in the right conditions to go "*boom*". I forget the exact number, but the amount of material in a nuclear bomb that actually undergoes fission/fusion is only something like a few percent (2-4% of the material).
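To see why only a few percent of the fuel reacts, here is a back-of-the-envelope toy model in Python. Every number in it is an illustrative assumption (a textbook-style ~10 ns generation time, a doubling chain, a few kilograms of fuel), not real weapon data; the point is only that an exponentially growing chain does almost all of its work in the last handful of generations, so disassembly cutting it even slightly short slashes the yield.

```python
import math

# Toy model of why a fission chain reaction is self-limiting.
# All numbers are illustrative assumptions, not real weapon data.

nuclei = 1.5e25   # assumed number of fissile nuclei (a few kg of material)
k = 2.0           # assumed neutrons per fission that cause new fissions
gen_time = 1e-8   # assumed time per fission generation, in seconds

# Generations needed to fission the whole core if nothing stopped the chain:
full_burn = math.log(nuclei, k)
print(f"Generations to consume all fuel: {full_burn:.0f}")             # ~83
print(f"Elapsed time: {full_burn * gen_time * 1e6:.2f} microseconds")  # ~0.84

# The expanding core shuts the chain down early; each generation lost
# halves the number of fissions that ever happen.
for gens_lost in (1, 3, 5):
    fraction = k ** -gens_lost
    print(f"Chain cut {gens_lost} generations short -> "
          f"~{fraction:.1%} of fuel fissioned")
```

In this toy model, stopping just five generations short of a full burn already puts you in the 2-4% efficiency range mentioned above.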
[ "Ordinarily, atoms are mostly electron clouds by volume, with very compact nuclei at the center (proportionally, if atoms were the size of a football stadium, their nuclei would be the size of dust mites). When a stellar core collapses, the pressure causes electrons and protons to fuse by electron capture. Without electrons, which keep nuclei apart, the neutrons collapse into a dense ball (in some ways like a giant atomic nucleus), with a thin overlying layer of degenerate matter (chiefly iron unless matter of different composition is added later). The neutrons resist further compression by the Pauli Exclusion Principle, in a way analogous to electron degeneracy pressure, but stronger.\n", "For isolated systems (closed to all mass and energy exchange), mass never disappears in the center of momentum frame, because energy cannot disappear. Instead, this equation, in context, means only that when any energy is added to, or escapes from, a system in the center-of-momentum frame, the system will be measured as having gained or lost mass, in proportion to energy added or removed. Thus, in theory, if an atomic bomb were placed in a box strong enough to hold its blast, and detonated upon a scale, the mass of this closed system would not change, and the scale would not move. Only when a transparent \"window\" was opened in the super-strong plasma-filled box, and light and heat were allowed to escape in a beam, and the bomb components to cool, would the system lose the mass associated with the energy of the blast. In a 21 kiloton bomb, for example, about a gram of light and heat is created. If this heat and light were allowed to escape, the remains of the bomb would lose a gram of mass, as it cooled. In this thought-experiment, the light and heat carry away the gram of mass, and would therefore deposit this gram of mass in the objects that absorb them.\n", "In the shell model, this phenomenon is explained by shell-filling. Successive atoms become smaller because they are filling orbits of the same size, until the orbit is full, at which point the next atom in the table has a loosely bound outer electron, causing it to expand. The first Bohr orbit is filled when it has two electrons, which explains why helium is inert. The second orbit allows eight electrons, and when it is full the atom is neon, again inert. The third orbital contains eight again, except that in the more correct Sommerfeld treatment (reproduced in modern quantum mechanics) there are extra \"d\" electrons. The third orbit may hold an extra 10 d electrons, but these positions are not filled until a few more orbitals from the next level are filled (filling the n=3 d orbitals produces the 10 transition elements). The irregular filling pattern is an effect of interactions between electrons, which are not taken into account in either the Bohr or Sommerfeld models and which are difficult to calculate even in the modern treatment.\n", "The increasing nuclear charge is partly counterbalanced by the increasing number of electrons, a phenomenon that is known as shielding; which explains why the size of atoms usually increases down each column. However, there is one notable exception, known as the lanthanide contraction: the 5d block of elements are much smaller than one would expect, due to the weak shielding of the 4f electrons.\n", "Atoms can only be displaced if, upon bombardment, the energy they receive exceeds a threshold energy \"E\". 
Likewise, when a moving atom collides with a stationary atom, both atoms will have energy greater than \"E\" after the collision only if the original moving atom had an energy exceeding 2\"E\". Thus, only PKAs with an energy greater than 2\"E\" can continue to displace more atoms and increase the total number of displaced atoms. In cases where the PKA does have sufficient energy to displace further atoms, the same truth holds for any subsequently displaced atom.\n", "For about the next –, the excess electrons remained too energetic to bind with atomic nuclei. What followed is a period known as recombination, when neutral atoms were formed and the expanding universe became transparent to radiation.\n", "The cause of a surface mottling is more complex. In the initial microseconds after the explosion, a fireball is formed around the bomb by the massive numbers of thermal x-rays released by the explosion process. These x-rays cannot travel very far in the lower atmosphere before reacting with molecules in the air, so the result is a fireball that rapidly forms within about in diameter and does not expand. This is known as a \"radiatively driven\" fireball.\n" ]
why was cloning such a big deal in the '90s but is now rarely spoken about?
There were a lot of breakthroughs in the '90s, and that made it new and exciting and a big deal, and naturally also very blown out of proportion. There are still advances being made, but nothing huge. Cloning would not make a species viable, either: you need genetic diversity, which is the exact *opposite* of cloning.
[ "President George W. Bush said that human cloning was \"deeply troubling\" to most Americans. Kansas Republican Sam Brownback said that Congress should ban all human cloning, while some Democrats were worried that Clonaid announcement would lead to the banning of therapeutic cloning. FDA biotechnology chief Dr. Phil Noguchi warned that the human cloning, even if it worked, risked transferring sexually transmitted diseases to the newly born child. The White House was also critical of the claims.\n", "On May 31, 1997, an issue of the popular science magazine \"New Scientist\" said that the International Raëlian Movement was starting a company to fund the research and development of human cloning. This alarmed bioethicists who were opposed to such plans. They warned lawmakers against failing to regulate human cloning. At the time, European countries such as Britain had banned human cloning, but the United States had merely a moratorium on the use of federal funds for human cloning research. U.S. President Bill Clinton requested that private companies pass their own moratorium. Claude Vorilhon, the founder of Raëlism, was opposed to this move and denied that the technology used to clone was inherently dangerous.\n", "The Humane Society of the United States and other animal welfare groups denounced the cloning, saying that the $50,000 could have been better used to save some of the millions of animals euthanized each year.\n", "Although the possibility of cloning humans had been the subject of speculation for much of the 20th century, scientists and policy makers began to take the prospect seriously in the mid-1960s. J. B. S. Haldane was the first to introduce the idea of human cloning, for which he used the terms \"clone\" and \"cloning\", which had been used in agriculture since the early 20th century. In his speech on \"Biological Possibilities for the Human Species of the Next Ten Thousand Years\" at the \"Ciba Foundation Symposium on Man and his Future\" in 1963, he said:\n", "The Food and Drug Administration stated its intention to investigate Clonaid to see if it had done anything illegal. The FDA contended that its regulations forbid human cloning without prior agency permission. However, some members of the United States Congress believed that the jurisdiction of the FDA on human cloning matters was shaky and decided to push Congress to explicitly ban human cloning.\n", "Science fiction has used cloning, most commonly and specifically human cloning, due to the fact that it brings up controversial questions of identity. Humorous fiction, such as \"Multiplicity\" (1996) and the Maxwell Smart feature \"The Nude Bomb\" (1980), have featured human cloning. A recurring sub-theme of cloning fiction is the use of clones as a supply of organs for transplantation. Robin Cook's 1997 novel \"Chromosome 6\" and Michael Bay's \"The Island\" are examples of this; \"Chromosome 6\" also features genetic manipulation and xenotransplantation. The series Orphan Black follows human clones' stories and experiences as they deal with issues and react to being the property of a chain of scientific institutions. In the 2019 horror film Us, the entirety of the United States' population is secretly cloned. Years later, these clones reveal themselves to the world by successfully pulling off a mass genocide of their counterparts.\n", "The Church of England put out a statement on the Church's website which reads, \"human reproductive cloning was made unlawful by the Human Fertilisation and Embryology Act 1990. 
Few members of the Church of England would dissent from such a position. However, therapeutic cloning may be thought of as ethical, as it does not result in another human being.\" Thus, while reproductive cloning is again discouraged, therapeutic cloning is more acceptable.\n" ]
Does the confirmation of the Higgs Boson have any implications for String Theory? Does it strengthen it, weaken it, or have no effect?
Little effect. There are some theories with large extra dimensions that would require no Higgs (they have their own mechanism producing the same end effect without adding a boson), so those theories are probably feeling it a bit, but otherwise it sheds little light on the landscape.
[ "An initial focus of research was to investigate the possible existence of the Higgs boson, a key part of the Standard Model of physics which is predicted by theory but had not yet been observed before due to its high mass and elusive nature. CERN scientists estimated that, if the Standard Model were correct, the LHC would produce several Higgs bosons every minute, allowing physicists to finally confirm or disprove the Higgs boson's existence. In addition, the LHC allowed the search for supersymmetric particles and other hypothetical particles as possible unknown areas of physics. Some extensions of the Standard Model predict additional particles, such as the heavy W' and Z' gauge bosons, which are also estimated to be within reach of the LHC to discover.\n", "Following the 2012 discovery, it was still unconfirmed whether or not the 125 GeV/\"c\" particle was a Higgs boson. On one hand, observations remained consistent with the observed particle being the Standard Model Higgs boson, and the particle decayed into at least some of the predicted channels. Moreover, the production rates and branching ratios for the observed channels broadly matched the predictions by the Standard Model within the experimental uncertainties. However, the experimental uncertainties currently still left room for alternative explanations, meaning an announcement of the discovery of a Higgs boson would have been premature. To allow more opportunity for data collection, the LHC's proposed 2012 shutdown and 2013–14 upgrade were postponed by 7 weeks into 2013.\n", "Evidence of the Higgs field and its properties has been extremely significant for many reasons. The importance of the Higgs boson is largely that it is able to be examined using existing knowledge and experimental technology, as a way to confirm and study the entire Higgs field theory. Conversely, proof that the Higgs field and boson do \"not\" exist would have also been significant.\n", "Because the Higgs boson is a very massive particle and also decays almost immediately when created, only a very high-energy particle accelerator can observe and record it. Experiments to confirm and determine the nature of the Higgs boson using the Large Hadron Collider (LHC) at CERN began in early 2010 and were performed at Fermilab's Tevatron until its closure in late 2011. Mathematical consistency of the Standard Model requires that any mechanism capable of generating the masses of elementary particles becomes visible at energies above ; therefore, the LHC (designed to collide two proton beams) was built to answer the question of whether the Higgs boson actually exists.\n", "The Higgs boson validates the Standard Model through the mechanism of mass generation. As more precise measurements of its properties are made, more advanced extensions may be suggested or excluded. As experimental means to measure the field's behaviours and interactions are developed, this fundamental field may be better understood. If the Higgs field had not been discovered, the Standard Model would have needed to be modified or superseded.\n", "In this scenario the existence of the Higgs boson follows from the symmetries of the theory. This allows to explain why this particle is lighter than the rest of the composite particles whose mass is expected from direct and indirect tests to be around a TeV or higher. It is assumed that the composite sector has a global symmetry G spontaneously broken to a subgroup H where G and H are compact Lie groups. 
Contrary to technicolor models the unbroken symmetry must contain the SM electro-weak group SU(2)xU(1). According to Goldstone's theorem the spontaneous breaking of a global symmetry produces massless scalar particles known as Goldstone bosons. By appropriately choosing the global symmetries it is possible to have Goldstone bosons that correspond to the Higgs doublet in the SM. This can be done in a variety of ways and is completely determined by the symmetries. In particular group theory determines the quantum numbers of the Goldstone bosons. From the decomposition of the adjoint representation one finds,\n", "Confirmation of the Higgs boson or something very much like it would constitute a rendezvous with destiny for a generation of physicists who have believed the boson existed for half a century without ever seeing it. Further, it affirms a grand view of a universe ruled by simple and elegant and symmetrical laws, but in which everything interesting in it being a result of flaws or breaks in that symmetry. According to the Standard Model, the Higgs boson is the only visible and particular manifestation of an invisible force field that permeates space and imbues elementary particles that would otherwise be massless with mass. Without this Higgs field, or something like it, physicists say all the elementary forms of matter would zoom around at the speed of light; there would be neither atoms nor life. The Higgs boson achieved a notoriety rare for abstract physics. To the eternal dismay of his colleagues, Leon Lederman, the former director of Fermilab, called it the \"God particle\" in his book of the same name, later quipping that he had wanted to call it \"the goddamn particle\". Professor Incandela also stated,\n" ]
Why was Pilot Wave Theory / de Broglie-Bohm theory considered controversial and ultimately discarded by most physicists? What does it mean for a theory to be nonlocal? What paradoxes would be created by combining relativity and de Broglie-Bohm?
As a general rule, any "interpretation" of quantum mechanics had better be *especially* compelling in order to gain significant traction with physicists, because interpretations of quantum mechanics are a branch of philosophy, not science. One of the main reasons de Broglie-Bohm theory isn't *especially* compelling is its non-locality. What this means is that the pilot wave in the theory (the wave that guides the particle) can send signals to itself faster than the speed of light. Now, due to the way the theory is constructed, this can never be used to actually send information faster than light in practice. You don't ever see the pilot wave (it is a "hidden variable"); only the particle. Nevertheless, if you take the theory seriously, you must contend with the fact that it includes as "real" objects that travel faster than light, and therefore violate special relativity. Why is this a problem? Again, it's not a problem *scientifically*, since you don't ever see the pilot wave. But it is a problem philosophically, because faster-than-light motion results in causal paradoxes. The reason is that if A follows B in one reference frame, and the distance between A and B is farther than light could have traveled in the time between them, then when viewed from another reference frame moving at a different velocity, B will follow A. See the wikipedia article about the [tachyonic antitelephone](_URL_0_) for more discussion. More generally, violation of relativity just seems *wrong* to most physicists; it is a simple and beautiful symmetry of nature that has an impeccable record of not only passing every experimental test, but of making surprising predictions (like antiparticles) that have turned out to be correct. Sure, it's *possible* that relativity is violated by some *hidden variable* while remaining true experimentally, but such baggage seems unnecessary given some of the other compelling interpretations of QM that have gained traction with physicists.
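The frame-flipping argument is the standard two-line Lorentz-transformation calculation from special relativity, written out here for concreteness:

```latex
% Two events separated by \Delta t in time and \Delta x in space in frame S,
% as seen from a frame S' moving at speed v:
\Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^2}\right),
\qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}.
% If a signal covers \Delta x in time \Delta t at speed
% u = \Delta x / \Delta t > c, then \Delta t' < 0 for any observer
% with v > c^2/u. Since u > c implies c^2/u < c, such observers
% exist and are themselves moving slower than light.
```

So for a faster-than-light signal there are perfectly legitimate sub-light observers who see it arrive before it was sent; chain two such signals together and you can receive a reply before sending your message, which is the paradox the tachyonic antitelephone article works through.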
[ "Pilot-wave theory is explicitly nonlocal, which is in ostensible conflict with special relativity. Various extensions of \"Bohm-like\" mechanics exist that attempt to resolve this problem. Bohm himself in 1953 presented an extension of the theory satisfying the Dirac equation for a single particle. However, this was not extensible to the many-particle case because it used an absolute time.\n", "The de Broglie–Bohm theory of quantum mechanics (also known as the pilot wave theory) is a theory by Louis de Broglie and extended later by David Bohm to include measurements. Particles, which always have positions, are guided by the wavefunction. The wavefunction evolves according to the Schrödinger wave equation, and the wavefunction never collapses. The theory takes place in a single space-time, is non-local, and is deterministic. The simultaneous determination of a particle's position and velocity is subject to the usual uncertainty principle constraint. The theory is considered to be a hidden-variable theory, and by embracing non-locality it satisfies Bell's inequality. The measurement problem is resolved, since the particles have definite positions at all times. Collapse is explained as phenomenological.\n", "The theory was historically developed in the 1920s by de Broglie, who, in 1927, was persuaded to abandon it in favour of the then-mainstream Copenhagen interpretation. David Bohm, dissatisfied with the prevailing orthodoxy, rediscovered de Broglie's pilot-wave theory in 1952. Bohm's suggestions were not then widely received, partly due to reasons unrelated to their content, but instead were connected to Bohm's youthful communist affiliations. De Broglie–Bohm theory was widely deemed unacceptable by mainstream theorists, mostly because of its explicit non-locality. Bell's theorem (1964) was inspired by Bell's discovery of the work of David Bohm and his subsequent wondering whether the obvious nonlocality of the theory could be eliminated. Since the 1990s, there has been renewed interest in formulating extensions to de Broglie–Bohm theory, attempting to reconcile it with special relativity and quantum field theory, besides other features such as spin or curved spatial geometries.\n", "The de Broglie–Bohm theory itself might have gone unnoticed by most physicists, if it had not been championed by John Bell, who also countered the objections to it. In 1987, John Bell rediscovered Grete Hermann's work, and thus showed the physics community that Pauli's and von Neumann's objections \"only\" showed that the pilot wave theory did not have locality. \n", "The de Broglie–Bohm theory itself might have gone unnoticed by most physicists, if it had not been championed by John Bell, who also countered the objections to it. In 1987, John Bell rediscovered Grete Hermann's work, and thus showed the physics community that Pauli's and von Neumann's objections \"only\" showed that the pilot wave theory did not have locality.\n", "The de Broglie–Bohm theory, also known as the pilot wave theory, Bohmian mechanics, Bohm's interpretation, and the causal interpretation, is an interpretation of quantum mechanics. In addition to a wavefunction on the space of all possible configurations, it also postulates an actual configuration that exists even when unobserved. The evolution over time of the configuration (that is, the positions of all particles or the configuration of all fields) is defined by the wave function by a guiding equation. 
The evolution of the wave function over time is given by the Schrödinger equation. The theory is named after Louis de Broglie (1892–1987) and David Bohm (1917–1992).\n", "Its more modern version, the de Broglie–Bohm theory, interprets quantum mechanics as a deterministic theory, avoiding troublesome notions such as wave–particle duality, instantaneous wave function collapse, and the paradox of Schrödinger's cat. To solve these problems, the theory is inherently nonlocal and non-relativistic.\n" ]
How did the Bismarck compare technologically to the allied fleet?
It was essentially a 1915 design copy-pasted into the 1930s, with a newer model of gun and some new engines. What that meant was she was still hindered by the expectation of a short-range brawl in the North Sea, where stopping shells coming in horizontally was the key. But even during WW1, increasing ranges meant that fire would be coming down at a very steep plunging angle. Her double armoured deck and turtleback scheme was problematic: while certain key areas were protected, the top armour might deflect a shell into other key areas, which would then be penetrated and destroyed. There was also the problem that a smaller volume was protected this way than in peer ships, meaning that she had less reserve buoyancy. In general, German heavy ships were hard to sink by gunfire but could be made impotent quickly, and as soon as they started to flood it was all over. There were also serious issues with her ability to steer after damage to a propeller or rudder, which were demonstrated on her final voyage, as well as concerns about the placement of her armour belt on the sides, her fire control scheme, and her poorly placed radar. And finally, her armament didn't help her: while all ships in 1941 were light on AA, Bismarck's reliance on a mixed battery of single-purpose guns meant she was getting less bang for her buck and thus less protection. /u/fourthmaninaboat and /u/jschooltiger actually had a wonderful conversation on this topic in a thread last week.
[ "Despite the fact that Raeder and other senior naval officers envisioned using \"Bismarck\" and \"Tirpitz\" as commerce raiders against first French and later British shipping in the Atlantic, and in fact used them in that role during World War II, the ships were not designed for that mission. Their steam turbines did not afford the necessary cruising radius for such a role, and many of the decisions made for the ships' armament and armor layout reflect the expectation to fight a traditional naval battle at relatively close range in the North Sea. The disconnect between how \"Bismarck\" and \"Tirpitz\" were designed and how they were ultimately used represents the strategic incoherence that dominated German naval construction in the 1930s.\n", "\"Bismarck\" displaced as built and fully loaded, with an overall length of , a beam of and a maximum draft of . The battleship was Germany's largest warship, and displaced more than any other European battleship, with the exception of , commissioned after the end of the war. \"Bismarck\" was powered by three Blohm & Voss geared steam turbines and twelve oil-fired Wagner superheated boilers, which developed a total of and yielded a maximum speed of on speed trials. The ship had a cruising range of at . \"Bismarck\" was equipped with three FuMO 23 search radar sets, mounted on the forward and stern rangefinders and foretop.\n", "The \"Bismarck\" class was a pair of fast battleships built for Nazi Germany's \"Kriegsmarine\" shortly before the outbreak of World War II. The ships were the largest and most powerful warships built for the \"Kriegsmarine\"; displacing more than normally, they were armed with a battery of eight guns and were capable of a top speed of . was laid down in July 1936 and completed in September 1940, while her sister s keel was laid in October 1936 and work finished in February 1941. The ships were ordered in response to the French s and they were designed with the traditional role of engaging enemy battleships in home waters in mind, though the German naval command envisioned employing the ships as long-range commerce raiders against British shipping in the Atlantic Ocean. As such, their design represented strategic confusion that dominated German naval construction in the 1930s.\n", "As built, the \"Bismarck\"-class ships were equipped with a full ship rig to supplement their steam engines on overseas cruising missions, but this was later reduced, and \"Blücher\" had her rigging removed altogether. Steering was controlled with a single rudder. The vessels were good sea boats, but they made bad leeway in even mild winds and they were difficult to maneuver. They lost a significant amount of speed in a head sea, and they had limited performance under sail.\n", "When \"Bismarck\" was in Norway, a pair of Bf 109 fighters circled overhead to protect her from British air attacks, but Flying Officer Michael Suckling managed to fly his Spitfire directly over the German flotilla at a height of and take photos of \"Bismarck\" and her escorts. Upon receipt of the information, Admiral John Tovey ordered the battlecruiser , the newly commissioned battleship , and six destroyers to reinforce the pair of cruisers patrolling the Denmark Strait. The rest of the Home Fleet was placed on high alert in Scapa Flow. 
Eighteen bombers were dispatched to attack the Germans, but weather over the fjord had worsened and they were unable to find the German warships.\n", "\"Bismarck Sea\" was a Casablanca-class escort carrier, the most numerous type of aircraft carriers ever built, and designed specifically to be mass-produced using prefabricated sections, in order to replace heavy early war losses. Standardized with her sister ships, she was long overall, had a beam of , and a draft of . She displaced standard, with a full load. She had a long hangar deck, a long flight deck. She was powered with two Uniflow reciprocating steam engines, which provided a force of 9000 horsepower, driving two shafts, enabling her to make . The ship had a cruising range of , assuming that she traveled at a constant speed of . Her compact size necessitated the installment of an aircraft catapult at her bow end, and there were two aircraft elevators to facilitate movement of aircraft between the flight and hangar deck: one on the fore, another on the aft.\n", "The \"Bismarck\"-class ships both had three sets of geared turbine engines; \"Bismarck\" was equipped with Blohm & Voss turbines, while \"Tirpitz\" used Brown, Boveri, and Co. engines. Each set of turbines drove a 3-bladed screw that was in diameter. The three-shaft arrangement was chosen over a four-shaft system, as was typically used on foreign capital ships, since it would save weight. At a full load, the high and medium-pressure turbines ran at 2,825 rpm, while the low-pressure turbines ran at 2,390 rpm. The ships' turbines were powered by twelve Wagner ultra high-pressure, oil-burning water-tube boilers. \"Bismarck\" and \"Tirpitz\" were originally intended to use electric-transmission turbines that would have produced apiece. These engines would have provided for a higher top speed, but at the cost of greater weight. The geared turbines were significantly lighter, and as a result had a slight performance advantage. The geared turbines also had a significantly more robust construction, and so they were adopted instead.\n" ]
What would happen if you swallowed gasoline? Say maybe 8 oz and no vomiting.
First off, it would be best to call poison control immediately and follow their instructions, but since that's not what you're asking, I'll give it a go: 8 oz is not enough to kill (depending on the size of the person). If you ingest the gasoline, you will vomit, as it is recognized as a poison by the body, and it will likely make you very sick for several hours. Water and milk are good to coat the stomach and flush out the poison. That all being said, if you were to develop cold- or flu-like symptoms, or if you drank twelve or more ounces, it's time to go to the hospital. Hope that helps.
[ "Contrary to common misconception, swallowing gasoline does not generally require special emergency treatment, and inducing vomiting does not help, and can make it worse. According to poison specialist Brad Dahl, \"even two mouthfuls wouldn't be that dangerous as long as it goes down to your stomach and stays there or keeps going.\" The US CDC's Agency for Toxic Substances and Disease Registry says not to induce vomiting, lavage, or administer activated charcoal.\n", "Concentrated solutions when drunk have resulted in adult respiratory distress syndrome or swelling of the airway. Recommended measures for those who have ingested potassium permanganate include gastroscopy. Activated charcoal or medications to cause vomiting are not recommended. While medications like ranitidine and N-acetylcysteine may be used in toxicity, evidence for this use is poor.\n", "Vomiting is dangerous if gastric content enters the respiratory tract. Under normal circumstances the gag reflex and coughing prevent this from occurring; however, these protective reflexes are compromised in persons under the influence of certain substances (including alcohol) or even mildly anesthetized. The individual may choke and asphyxiate or suffer aspiration pneumonia.\n", "Pure ethanol will irritate the skin and eyes. Nausea, vomiting, and intoxication are symptoms of ingestion. Long-term use by ingestion can result in serious liver damage. Atmospheric concentrations above one in a thousand are above the European Union occupational exposure limits.\n", "Self reports from another study showed that 63% of patients in the study gulped their drinks rather than sipped. Five patients recollected vomiting during the drinking episode while 32 drank on an empty stomach and 41 drank more than originally planned. During the drinking episode 31% subjects described blackouts, 20% described brownouts, and 49% reported no amnesic episode.\n", "The oral median lethal dose (LD) of ethanol in rats is 5,628 mg/kg. Directly translated to human beings, this would mean that if a person who weighs drank a glass of pure ethanol, they would theoretically have a 50% risk of dying. Symptoms of ethanol overdose may include nausea, vomiting, central nervous system depression, coma, acute respiratory failure, or death.\n", "BULLET::::- disulfiram (Antabuse) – inhibits enzyme acetaldehyde dehydrogenase, causing acetaldehyde poisoning when ethanol is consumed; used to cause severe hangover when drinking; increases liver, kidney, and brain damage from drinking.\n" ]
why do I physically feel horrible immediately when seeing gory photos?
This is an evolutionary "trait" of sorts, similar to how some people faint after seeing blood. My guess would be that humans subconsciously read blood and gore as a sign that something has gone terribly wrong and should be heeded as a warning. After all, if you saw someone suddenly shot and bleeding, wouldn't you be somewhat shocked?
[ "\"When I saw these pictures, it really freaked me out,\" said Jalil Hasan. His mother said: \"Right now I feel very violated ... When I'm looking at these pictures, and I'm looking at these snapshots, I'm feeling, 'Where did I send my child?'\"\n", "it's ... the spectacle of the thing, right? You really want to be there when the person is seeing it. To the extent that there's all these sites online of sort of people taking pictures of their friends and showing them Goatse... [In photos online,] It's like thousands and thousands of people looking really shocked or disgusted. It's really great.\n", "It's uncomfortable because I just simply took a photograph. That's all my participation was in my poster that sold over a million copies, was that I took a photograph that I thought was a dumb photograph. My husband said, \"Oh, try this thing tied up here, it'll look beautiful\". And the photographer said \"the back-lighting is really terrific\". So dealing with someone having that picture up in their... bedroom or their... living room or whatever I think would be hard for anyone to deal with.\n", "Even though it would be great to hear many nice words about it but getting away from that, I thought that I should do something that I have never done before, and it attracted many people's attention on it... Even then, I think they will say 'What's this?' when they see the photo. Besides cursing over the photo, I am thankful simply that they found and saw the photo.\n", "\"I do feel I have some slight corner on something about the quality of things. I mean it's very subtle and a little embarrassing to me, but I believe there are things which nobody would see unless I photographed them.\"\n", "BULLET::::- \"\"To involve the public, you have to make each of your pictures a thousand times more spectacular than what you might see on the most exquisite day – otherwise you’ll never convey even one-tenth of what it feels like to be out there on the dullest gray day when nothing’s going on.\"\" Bob Walker (Beaver, Christopher, \"After the Storm: Bob Walker and the East Bay Regional Park District\")\n", "Severe or chronic photophobia, such as in migraine or seizure disorder, may result in a person not feeling well with eye ache, headache and/or neck ache. These symptoms may persist for days even after the person is no longer exposed to the offensive light source. Further, once the eyes have become sensitized to the offensive light source (which can occur even in short duration exposures), they may become even more photosensitive with extreme pain occurring upon exposure to light.\n" ]
Question about concave mirrors, projectors, and real images.
[This is in fact almost exactly how flight simulators work.](_URL_0_) The key to it is a collimating lens that means the user doesn't have to sit in a precise spot and the image truly feels 3 dimensional. Check out the linked wikipedia article for a good diagram (that, good for you, looks almost exactly like your diagram). All you would need to do is add the touch/motion sensing.
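A quick worked example with the standard mirror equation (this is introductory optics, not the specifics of any particular simulator) shows why collimation removes the need for a precise viewing spot:

```latex
% Mirror equation relating object distance d_o, image distance d_i,
% and focal length f:
\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}
% Place the display screen at the focal plane, d_o = f:
\frac{1}{d_i} = \frac{1}{f} - \frac{1}{f} = 0
\quad\Longrightarrow\quad d_i \to \infty
```

With the image at optical infinity, the rays from any given image point arrive parallel, so every eye position sees that point in the same direction; that is what makes the picture look distant and stable no matter where the viewer sits.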
[ "Parabolic reflectors are popular for use in creating optical illusions. These consist of two opposing parabolic mirrors, with an opening in the center of the top mirror. When an object is placed on the bottom mirror, the mirrors create a real image, which is a virtually identical copy of the original that appears in the opening. The quality of the image is dependent upon the precision of the optics. Some such illusions are manufactured to tolerances of millionths of an inch.\n", "The inverted real image of an object reflected by a concave mirror can appear at the focal point in front of the mirror. In a construction with an object at the bottom of two opposing concave mirrors (parabolic reflectors) on top of each other, the top one with an opening in its center, the reflected image can appear at the opening as a very convincing 3D optical illusion.\n", "A concave mirror, or converging mirror, has a reflecting surface that is recessed inward (away from the incident light). Concave mirrors reflect light inward to one focal point. They are used to focus light. Unlike convex mirrors, concave mirrors show different image types depending on the distance between the object and the mirror.\n", "A convex mirror or diverging mirror is a curved mirror in which the reflective surface bulges towards the light source. Convex mirrors reflect light outwards, therefore they are not used to focus light. Such mirrors always form a virtual image, since the focal point (\"F\") and the centre of curvature (\"2F\") are both imaginary points \"inside\" the mirror, that cannot be reached. As a result, images formed by these mirrors cannot be projected on a screen, since the image is inside the mirror. The image is smaller than the object, but gets larger as the object approaches the mirror.\n", "The biggest advantage of catadioptric systems (panoramic mirror lenses) is that because one uses mirrors to bend the light rays instead of lenses (like fish eye), the image has almost no chromatic aberrations or distortions. The image, a reflection of the surface on the mirror, is in the form of a doughnut to which software is applied in order to create a flat panoramic picture. Such software is normally supplied by the company who produces the system. Because the complete panorama is imaged at once, dynamic scenes can be captured without problems. Panoramic video can be captured and has found applications in robotics and journalism. The mirror lens system uses only a partial section of the digital camera's sensor and therefore some pixels are not used. Recommendations are always to use a camera with a high pixel count in order to maximize the resolution of the final image.\n", "A pseudoscope is a binocular optical instrument that reverses depth perception. It is used to study human stereoscopic perception. Objects viewed through it appear inside out, for example: a box on a floor would appear as a box shaped hole in the floor.\n", "With mirror anamorphosis, a conical or cylindrical mirror is placed on the drawing or painting to transform a flat distorted image into an apparently undistorted picture. The deformed image is created by using the laws of the angles of the incidence of reflection. This reduces the length of the flat drawing's curves when the image is viewed in a curved mirror, so that the distortions resolve into a recognizable picture. Unlike perspective anamorphosis, catoptric images can be viewed from many angles. The technique was originally developed in China during the Ming Dynasty. 
The first European manual on mirror anamorphosis was published around 1630 by the mathematician Vaulezard.\n" ]
How do scientists know (what evidence is there to show) the decay rates of isotopes, such as carbon-14?
Radioactive decay is probabilistic. What that means is that, over one characteristic time period, there is a 50% chance that a particular nucleus will undergo spontaneous decay. So if you take a large number of them, half of them will have decayed in that time frame. That is why it's called a half-life. If you wait a further half-life, half of what you had remaining will also have decayed, and so on. Now, the great thing about this is that there are many, many different isotopes, all of different stability, so they each have different half-lives. If the half-lives did not remain constant through time, it would be impossible to draw any comparisons between these different systems. However, what we see is that each isotope system is consistent with the others: if I get a date using, say, K-Ar dating, I will find the same date using U-Pb or Sm-Nd dating, etc.^* So, if I take isotope X, which has a half-life of 5 years, and isotope Y, with a half-life of 20 years, put a known amount of each in a box, and come back and measure them in 20 years, I'll have half of my original Y isotope and 1/16th of my original X isotope. Even better than that, we can calibrate back into the geological record (at least some way) using things like ice cores, tree rings, varved sediments and fission track analysis to double-check our maths (at least on the younger samples). ^* This is of course an oversimplification, as many isotopic systems can only be used in certain circumstances (i.e. when the parent and daughter isotopes are in high enough concentrations to measure, which will depend on rock type), and we have to be aware of potential complications such as metamorphic history, weathering etc. which can introduce inaccuracies. But careful sample collection and preparation does allow us to check these clocks against each other, and we get stunningly good agreement.
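Here is a minimal sketch of the box example, using the standard decay law N(t) = N₀ · (1/2)^(t / t_half); the isotopes X and Y with 5- and 20-year half-lives are the hypothetical ones from the answer, not real nuclides:

```python
def remaining_fraction(elapsed_years, half_life_years):
    """Fraction of a radioactive sample left after a given time."""
    return 0.5 ** (elapsed_years / half_life_years)

# Hypothetical isotopes from the example above:
# X has a 5-year half-life, Y has a 20-year half-life.
t = 20  # years spent in the box
print(f"Y remaining: {remaining_fraction(t, 20):.4f}")  # 0.5000 -> one half
print(f"X remaining: {remaining_fraction(t, 5):.4f}")   # 0.0625 -> one sixteenth
```

If decay rates drifted over time, clocks built on different isotopes would disagree; the observed agreement between independent systems is exactly the cross-check described above.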
[ "By measuring the amount of radioactive decay of a radioactive isotope with a known half-life, geologists can establish the absolute age of the parent material. A number of radioactive isotopes are used for this purpose, and depending on the rate of decay, are used for dating different geological periods. More slowly decaying isotopes are useful for longer periods of time, but less accurate in absolute years. With the exception of the radiocarbon method, most of these techniques are actually based on measuring an increase in the abundance of a radiogenic isotope, which is the decay-product of the radioactive parent isotope. Two or more radiometric methods can be used in concert to achieve more robust results. Most radiometric methods are suitable for geological time only, but some such as the radiocarbon method and the Ar/Ar dating method can be extended into the time of early human life and into recorded history.\n", "All methods based on the radioactive decay belong to this category. The principle at the base of radiometric dating is that natural unstable isotopes, called 'parent isotopes', decay to some isotope which is instead stable, called the 'daughter isotope'. Under the assumptions that (1) the initial amount of parent and daughter isotopes can be estimated, and (2) after the geologic material formed, parent and daughter isotopes did not escape the system, the age of the material can be obtained from the measurement of isotope concentrations, through the laws of radioactive decay.\n", "The constancy of the decay rates of isotopes is well supported in science. Evidence for this constancy includes the correspondences of date estimates taken from different radioactive isotopes as well as correspondences with non-radiometric dating techniques such as dendrochronology, ice core dating, and historical records. Although scientists have noted slight increases in the decay rate for isotopes subject to extreme pressures, those differences were too small to significantly impact date estimates. The constancy of the decay rates is also governed by first principles in quantum mechanics, wherein any deviation in the rate would require a change in the fundamental constants. According to these principles, a change in the fundamental constants could not influence different elements uniformly, and a comparison between each of the elements' resulting unique chronological timescales would then give inconsistent time estimates.\n", "Unstable isotopes decay at predictable rates, and the decay rates of different isotopes cover several orders of magnitude, so radioactive decay can be used to accurately date both recent events and events in past geologic eras. Radiometric mapping using ground and airborne gamma spectrometry can be used to map the concentration and distribution of radioisotopes near the Earth's surface, which is useful for mapping lithology and alteration.\n", "The following table lists some of the most important radiogenic isotope systems used in geology, in order of decreasing half-life of the radioactive parent isotope. The values given for half-life and decay constant are the current consensus values in the Isotope Geology community. ** indicates ultimate decay product of a series.\n", " Radiocarbon dating is also simply called Carbon-14 dating. Carbon-14 is a radioactive isotope of carbon, with a half-life of 5,730 years (which is very short compared with the above isotopes), and decays into nitrogen. 
In other radiometric dating methods, the heavy parent isotopes were produced by nucleosynthesis in supernovas, meaning that any parent isotope with a short half-life should be extinct by now. Carbon-14, though, is continuously created through collisions of neutrons generated by cosmic rays with nitrogen in the upper atmosphere and thus remains at a near-constant level on Earth. The carbon-14 ends up as a trace component in atmospheric carbon dioxide (CO).\n", "Thirty radioisotopes have been characterised, which range in mass number from 209 to 238. After Th, the most stable of them (with respective half-lives) are Th (75,380 years), Th (7,340 years), Th (1.92 years), Th (24.10 days), and Th (18.68 days). All of these isotopes occur in nature as trace radioisotopes due to their presence in the decay chains of Th, U, U, and Np: the last of these is long extinct in nature due to its short half-life (2.14 million years), but is continually produced in minute traces from neutron capture in uranium ores. All of the remaining thorium isotopes have half-lives that are less than thirty days and the majority of these have half-lives that are less than ten minutes.\n" ]
what is frequency modulation and how are its uses different from amplitude modulation?
FM would be like sending a signal by changing the color of a light or the pitch of a sound, while AM is like changing the brightness of the light or the loudness of the sound.
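The analogy maps directly onto the standard modulation formulas. Here is a minimal NumPy sketch that builds both kinds of signal; the sample rate, carrier, message, and deviation frequencies are toy values chosen purely for illustration:

```python
import numpy as np

fs = 10_000                      # sample rate in Hz (illustrative)
t = np.arange(0, 0.05, 1 / fs)   # 50 ms of signal
fc, fm = 1_000, 50               # toy carrier and message frequencies, Hz

message = np.sin(2 * np.pi * fm * t)

# AM: the message varies the carrier's amplitude (the "brightness/loudness").
am = (1 + 0.5 * message) * np.cos(2 * np.pi * fc * t)

# FM: the message varies the carrier's instantaneous frequency (the
# "color/pitch"); phase is the running integral of frequency over time.
deviation = 200  # peak frequency swing in Hz, illustrative
phase = 2 * np.pi * np.cumsum(fc + deviation * message) / fs
fm_sig = np.cos(phase)
```

Notice the FM waveform keeps a constant amplitude: amplitude noise like lightning static rides directly on an AM signal but can simply be clipped off an FM one, which is a big part of why music broadcasting moved to FM.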
[ "Frequency modulation and phase modulation are the two complementary principal methods of angle modulation; phase modulation is often used as an intermediate step to achieve frequency modulation. These methods contrast with amplitude modulation, in which the amplitude of the carrier wave varies, while the frequency and phase remain constant.\n", "Frequency modulation is widely used for FM radio broadcasting. It is also used in telemetry, radar, seismic prospecting, and monitoring newborns for seizures via EEG, two-way radio systems, music synthesis, magnetic tape-recording systems and some video-transmission systems. In radio transmission, an advantage of frequency modulation is that it has a larger signal-to-noise ratio and therefore rejects radio frequency interference better than an equal power amplitude modulation (AM) signal. For this reason, most music is broadcast over FM radio.\n", "Frequency modulation or FM is a form of modulation which conveys information by varying the frequency of a carrier wave; the older amplitude modulation or AM varies the amplitude of the carrier, with its frequency remaining constant. With FM, frequency deviation from the assigned carrier frequency at any instant is directly proportional to the amplitude of the input signal, determining the instantaneous frequency of the transmitted signal. Because transmitted FM signals use more bandwidth than AM signals, this form of modulation is commonly used with the higher (VHF or UHF) frequencies used by TV, the FM broadcast band, and land mobile radio systems.\n", "Angle modulation is a class of carrier modulation that is used in telecommunications transmission systems. The class comprises frequency modulation (FM) and phase modulation (PM), and is based on altering the frequency or the phase, respectively, of a carrier signal to encode the message signal. This contrasts with varying the amplitude of the carrier, practiced in amplitude modulation (AM) transmission, the earliest of the major modulation methods used widely in early radio broadcasting.\n", "Frequency modulation generates high quality audio and greatly reduces the amount of noise on the channel when compared with amplitude modulation. Early broadcasters used amplitude modulation because it was easier to generate than frequency modulation and because the receivers were simpler to make. The electronics theory indicated that a frequency modulated signal would have infinite bandwidth; for an amplitude modulated signal, the bandwidth is approximately twice the highest modulating frequency.\n", "Frequency modulation synthesis (or FM synthesis) is a form of sound synthesis where the frequency of a waveform, called the carrier, is changed by modulating its frequency with a modulator. The frequency of an oscillator is altered \"in accordance with the amplitude of a modulating signal.\" \n", "Amplitude modulation (AM) is a modulation technique used in electronic communication, most commonly for transmitting information via a radio carrier wave. In amplitude modulation, the amplitude (signal strength) of the carrier wave is varied in proportion to that of the message signal being transmitted. The message signal is, for example, a function of the sound to be reproduced by a loudspeaker, or the light intensity of pixels of a television screen. This technique contrasts with frequency modulation, in which the frequency of the carrier signal is varied, and phase modulation, in which its phase is varied.\n" ]
When did China start having conceptions of race? What is the history of race in China?
This doesn't answer your historical question, but it might be worth clearing up what people mean today by saying that race is a social construct. There are different ethnicities in the world, but our decision to group different ethnicities together into large umbrellas of "race" is the social construct that most people are talking about. We put Ashkenazi Jews, Greeks, Germans, Basques, and Slavs in the same "white" category. Are they so similar to each other, and so different from Arabs, Persians, and Kurds, that the categorization makes sense? Or does it all descend from somewhat arbitrary line drawing? Does the U.S. Census Bureau's categorization of South Asians, East Asians, and Pacific Islanders as a single category of "Asian/Pacific Islander" follow some kind of principled system, or is it an arbitrary categorization that makes sense only in the context of American society? It's also worth noting that we tend to account for shared culture and language when drawing the lines of ethnicity in the first place. So meaningfully separating culture from race is difficult to begin with, and I'm not sure what you'd be able to do with the isolated variables.
[ "Historian Frank Dikötter (1990:420) says the Chinese \"idea of 'race' (\"zhong\" [種], \"seed\", \"species\", \"race\") started to dominate the intellectual scene\" in the late 19th-century Qing dynasty and completed the \"transition from cultural exclusiveness to racial exclusiveness in modern China\" in the 1920s.\n", "The modern concept of race emerged as a product of the colonial enterprises of European powers from the 16th to 18th centuries which identified race in terms of skin color and physical differences. This way of classification would have been confusing for people in the ancient world since they did not categorize each other in such a fashion. In particular, the epistemological moment where the modern concept of race was invented and rationalized lies somewhere between 1730 to 1790. \n", "Wang, Štrkalj et al. (2003) examined the use of race as a biological concept in research papers published in China's only biological anthropology journal, \"Acta Anthropologica Sinica\". The study showed that the race concept was widely used among Chinese anthropologists. In a 2007 review paper, Štrkalj suggested that the stark contrast of the racial approach between the United States and China was due to the fact that race is a factor for social cohesion among the ethnically diverse people of China, whereas \"race\" is a very sensitive issue in America and the racial approach is considered to undermine social cohesion – with the result that in the socio-political context of US academics scientists are encouraged not to use racial categories, whereas in China they are encouraged to use them.\n", "Horse racing in one form or another has been a part of Chinese culture for millennia. Horse racing was a popular pastime for the aristocracy at least by the Zhou Dynasty - 4th century B.C. General Tian Ji's strategem for a horse race remains perhaps the best known story about horse racing in that period. In the 18th and 19th centuries, horse racing and equestrian sports in China was dominated by Mongol influences.\n", "Throughout much of recorded Chinese history, there was little attempt by Chinese authors to separate the concepts of nationality, culture, and ethnicity. Those outside of the reach of imperial control and dominant patterns of Chinese culture were thought of as separate groups of people regardless of whether they would today be considered as a separate ethnicity. The self-conceptualization of Han largely revolved around this center-periphery cultural divide. Thus, the process of Sinicization throughout history had as much to do with the spreading of imperial rule and culture as it did with actual ethnic migration.\n", "These things show that many times, pre-modern Chinese did view culture (and sometimes politics) rather than race and ethnicity as the dividing line between the Chinese and the non-Chinese. In many cases, the non-Chinese could and did become the Chinese and vice versa, especially when there was a change in culture.\n", "As part of this nation-building effort, the notion of race was abolished by the time of the 1930 census. Prior census did take race into account and those of Chinese origin were so noted. However, the lack of a race category, plus the complicated laws concerning nationality blurred the line as to who was Mexican and who was not. This not only affected those who had immigrated from China, but also their Mexican wives and mixed-race children. 
Depending on when wives married their husbands and when children were born, among other factors, wives and children could be considered to be Chinese rather than Mexican nationals. While it cannot be proven that information taken from this census was used in the mass deportation of Chinese men and their families in the 1930s, their uncertain legal status reflected by it would give them little to no protection against deportations.\n" ]
During what period in human history have literacy rates been the highest as a percentage of the world's population?
It may safely be assumed that today's literacy rates are unprecedented: Unesco reports that "Since 1950, the adult literacy rate at the world level has increased by 5 percentage points every decade on average, from 55.7% in 1950 to 86.2% in 2015" (*Reading the past, writing the future*, Paris 2016) - and most of the 1950 total represents European and North American countries with rates in excess of 90-95% (Unesco, *World literacy at mid-century*, Paris 1957), a condition that certainly doesn't apply in earlier centuries. There's really no reliable way to measure global rates before the 20th century, except to say they were lower than today's or even the 1950 level - inevitably, given the spread of education in the 19th and 20th centuries. Even in Europe, barely half of the population seems to have enjoyed functional literacy as late as 1850, and fewer still could read earlier in the century. The US seems to have been a 19th-century leader with extensive schooling, but even there a fifth were illiterate in 1870, and more than a tenth in 1900. Rates doubtless had their downs as well as their ups, as conditions for written communication became less or more favourable in particular periods, as reflected in the varying availability of written sources over time. But the long-run trend has certainly been upward with the spread of writing itself and later of general basic schooling and printing.
[ "Literacy data published by UNESCO displays that since 1950, the adult literacy rate at the world level has increased by 5 percentage points every decade on average, from 55.7 per cent in 1950 to 86.2 per cent in 2015. However, for four decades, the population growth was so rapid that the number of illiterate adults kept increasing, rising from 700 million in 1950 to 878 million in 1990. Since then, the number has fallen markedly to 745 million in 2015, although it remains higher than in 1950 despite decades of universal education policies, literacy interventions and the spread of print material and information and communications technology (ICT). However, these trends have been far from uniform across regions.\n", "It took over two-hundred thousand years of human history up to 1804 for the world's population to reach 1 billion; world population reached an estimated 2 billion in 1927; by late 1999, the global population reached 6 billion. Global literacy averaged 80%; global lifespan-averages exceeded 40+ years for the first time in history, with over half achieving 70+ years (three decades \"longer\" than it was a century ago).\n", "Every census since 1881 had indicated rising literacy in the country, but the population growth rate had been high enough that the absolute number of illiterates rose with every decade. The 2001–2011 decade is the second census period (after the 1991–2001 census period) when the absolute number of Indian illiterates declined (by 31,196,847 people), indicating that the literacy growth rate is now outstripping the population growth rate.\n", "In the early 1980s, estimates of total literacy were between 50 and 60 percent, or about 70 percent for men and 35 percent for women, but the gender gap has since narrowed, especially because of increased female school attendance. For 2001 the United Nations Development Programme's Human Development Report estimates that the adult literacy rate climbed to about 80.8 percent, or 91.3 percent for males and 69.3 percent for females. According to 2004 U.S. government estimates, 82 percent of the total adult population (age 15 and older) is literate, or 92 percent of males and 72 percent of females. The United Nations Development Programme recorded about an 89.9 percent adult literacy rate in 2014, while UNICEF estimated that as high 99.9 percent literacy rates among youths ages 15 to 24 for both sexes in 2012.\n", "In the 19th century, literacy rates among the United States population were relatively high despite the decentralized educational system. There has been a notable increase in American citizens' educational attainment since then, but studies have also indicated declining reading performance starting in the 1970s. In the past, although entities such as the U.S. Adult Education and Literacy System (AELS) and legislation such as the Economic Opportunity Act of 1964 have highlighted education as a topic of national importance, the push for high levels of mass literacy has been a recent development. Expectations concerning literacy have sharply increased over the past decades. Contemporary standards for adequate literacy have become more difficult to meet in comparison to historical criteria. Whereas such standards were only applied to the elite in the past, due to the proliferation of and increased accessibility to education in the form of public schools, the expectation of mass literacy has been applied to the entirety of the U.S. 
population.\n", "The rate of illiteracy decreased more rapidly in more populated areas and areas where there was mixture of religious schools. The literacy rate in England in the 1640s was around 30 percent for males, rising to 60 percent in the mid-18th century. In France, the rate of literacy in 1686-90 was around 29 percent for men and 14 percent for women, it increased to 48 percent for men and 27 percent for women.\n", "Available global data indicates significant variations in literacy rates between world regions. North America, Europe, West Asia, and Central Asia have achieved almost full adult literacy (individuals at or over the age of 15) for both men and women. Most countries in East Asia and the Pacific, as well as Latin America and the Caribbean, are above a 90% literacy rate for adults. Illiteracy persists to a greater extent in other regions: 2013 UNESCO Institute for Statistics (UIS) data indicates adult literacy rates of only 67.55% in South Asia and North Africa, 59.76% in Sub-Saharan Africa.\n" ]
what are mac addresses for computers and how are they different from ip addresses?
MAC addresses are burned into the physical network hardware when it's manufactured - sort of like the street address for your house. (An operating system can spoof a different MAC in software, but the address stored on the card itself doesn't change.) IP addresses are set in software and can be moved/changed - sort of like your phone number. When you call 911, the operator can tell what your address is because your physical address has been mapped to a particular phone number. You can change your phone number or move it to another house, but the street address for your house will never change (realistically).
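For the curious, here's a minimal Python sketch that reads both identifiers on the machine it runs on. `uuid.getnode()` returns the hardware MAC as a 48-bit integer, while the IP lookup goes through the normal software resolver; both behaviors vary a little by system, as noted in the comments.

```python
import socket
import uuid

# MAC: a 48-bit number burned into the network interface at the factory
# (uuid.getnode() can fall back to a random value if none can be found)
mac = uuid.getnode()
print("MAC:", ":".join(f"{(mac >> s) & 0xff:02x}" for s in range(40, -8, -8)))

# IP: assigned in software (DHCP, the OS, or by hand) and changes freely;
# on some machines this lookup just returns the loopback address 127.0.0.1
print("IP: ", socket.gethostbyname(socket.gethostname()))
```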
[ "Apple Developer, formerly Apple Developer Connection or ADC, is Apple Inc.'s developer network. It is designed to make available resources to help software developers write software for the macOS, tvOS, watchOS, and iOS platforms. Those applications are created in Xcode or other programs that are not created by Apple Inc.. Then iOS applications are uploaded on the App Store (iOS), watchOS applications are attached to some iOS applications, and tvOS applications are uploaded to the App Store (tvOS). For Mac applications, it’s more common to find them on the World Wide Web to download or on the App Store (macOS).\n", "There are different types of ad servers. There is an ad server for publishers that helps them to launch a new ad on a website by listing the highest ads' price on its and to follow the ad's growth by registering how many users it has reached. There is an ad server for advertisers that helps them by sending the ads in the form of HTML codes to each publisher. In this way, it is possible to open the ad in every moment and make changes of frequency for example, at all times. Lastly, there is an ad server for ad networks that provides information as in which network the publisher is registering an income and which is the daily revenue.\n", "Similar to AdMob, iAd facilitates integrating advertisements into applications sold on the iOS App Store. If the user tapped on an iAd banner, a full-screen advertisement appeared within the application, unlike other ads that would send the user into the Safari web browser. Ads were promised to be more interactive than on other advertising services, and users were able to close them at any time, returning to where they left their app. Former Apple CEO Steve Jobs initially indicated that Apple would retain 40% of the ad revenue, in line with what he called \"industry standard\", with the other 60% going to the developers. The amount paid to developers was later increased to 70%. iAd was expected to benefit free applications as well. The iAd App Network was discontinued as of June 30, 2016. Since then the technology lives on in both Apple News Advertising and App Store Search Ads.\n", "The Mac Developer Program is a way developers for Apple's Mac OS X operating system can distribute their apps through the Mac App Store. It costs US$99/year. Unlike iOS, developers are not required to sign up for the program in order to distribute or test their applications. Mac applications can freely be distributed via the developer's website and/or any other method of distribution excluding the Mac App Store. Apple provides Xcode for free to developers to code, build, and test their apps. The Mac Developer Program provides developers with many resources to help them distribute their Mac applications.\n", "Apple uses a tracking technique called \"identifier for advertisers\" (IDFA). This technique assigns a unique identifier to every user that buys an Apple iOS device (such as an iPhone or iPad). This identifier is then used by Apple's advertising network, iAd, to determine the ads that individuals are viewing and responding to.\n", "Advertising companies use third-party cookies to track a user across multiple sites. In particular, an advertising company can track a user across all pages where it has placed advertising images or web bugs. 
Knowledge of the pages visited by a user allows the advertising company to target advertisements to the user's presumed preferences.\n", "The applets listed below are components of the Microsoft Windows control panel, which allows users to define a range of settings for their computer, monitor the status of devices such as printers and modems, and set up new hardware, programs and network connections. Each applet is stored individually as a separate file (usually a .cpl file), folder or DLL, the locations of which are stored in the registry under the following keys:\n" ]
A simple F=MA problem that frustrates my brain.
If your car is coasting along (i.e. the engine isn't actively powering the wheels) and you include friction, then your car will be slowing down gradually. Friction between the wheels and the ground (rolling friction) manifests as a force opposite to the car's motion, resulting in a small acceleration in that same backward direction - in other words, a deceleration. If you want a car to travel at a constant speed while considering rolling friction, something must keep driving the wheels to make up for the losses due to friction. You'll still have the rolling friction force pointing backward, but you'd have an equal and opposite force driving the wheels forward, giving you a net force of 0. > Would you or would you not need another force to overcome the friction and therefore make the car's acceleration equal to zero? More fundamentally, what would the net force on the car be? You would indeed need some other force to cancel out the friction to result in zero acceleration. And if you have zero acceleration, the net force will be zero.
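A toy calculation makes the two cases concrete (the mass and friction figures below are made up purely for illustration):

```python
# Toy numbers (assumed) for a car coasting vs. cruising at constant speed
m = 1200.0        # car mass (kg)
f_roll = 180.0    # rolling-friction force (N), always opposing the motion

# Case 1: coasting - friction is the only horizontal force
a_coast = -f_roll / m
print(f"coasting: a = {a_coast:.3f} m/s^2 (negative = slowing down)")

# Case 2: constant speed - the drive force must exactly cancel friction
f_drive = f_roll
a_cruise = (f_drive - f_roll) / m
print(f"cruising: net F = {f_drive - f_roll:.1f} N, a = {a_cruise:.1f} m/s^2")
```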
[ "Practitioners and researchers have suggested many ways of solving the A*P = F equation. It transpires that the most natural method – that of minimizing formula_9 by least squares regression – leads to unsatisfactory results. The large number of zeroes in the matrix \"A\" mean that function \"P\" turns out to be \"bumpy\".\n", "The hitting time of a set \"F\" is also known as the \"début\" of \"F\". The Début theorem says that the hitting time of a measurable set \"F\", for a progressively measurable process, is a stopping time. Progressively measurable processes include, in particular, all right and left-continuous adapted processes.\n", "Two other commonly used F measures are the formula_4 measure, which weighs recall higher than precision (by placing more emphasis on false negatives), and the formula_5 measure, which weighs recall lower than precision (by attenuating the influence of false negatives).\n", "where F is a random force representing the random collisions of the particle and the surrounding molecules, and where the time constant τ reflects the drag force that opposes the particle's motion through the solution. The drag force is often written F = −γv; therefore, the time constant τ equals \"m\"/γ.\n", "The Automatic Thought Questionnaire 30 (ATQ 30) is a scientific questionnaire created by Steven D. Hollon and Phillip C. Kendall that measures automatic negative thoughts. The ATQ 30 consists of 30 negative statements and asks participants to indicate how often they experienced the negative thought during the course of the week on a scale of 1-5 (1=Low-High=5). This measure was created in response to Aaron T. Beck’s hypothesis that thinking in depressed populations tends to be negative. Example statements include \"I'm worthless\", \"I've let people down\", \"I can't get started\" and \"My future is bleak\".\n", "\"f\"(\"k\", \"n\", \"p\") is monotone increasing for \"k\" < \"M\" and monotone decreasing for \"k\" > \"M\", with the exception of the case where (\"n\" + 1)\"p\" is an integer. In this case, there are two values for which \"f\" is maximal: (\"n\" + 1)\"p\" and (\"n\" + 1)\"p\" − 1. \"M\" is the \"most probable\" outcome (that is, the most likely, although this can still be unlikely overall) of the Bernoulli trials and is called the mode.\n", "is sometimes taken to be \"f\"(\"u\") = \"u\"/(\"k\"+1) + \"u\" for some positive integer \"k\" (where the extra \"u\" is a \"drift term\" that makes the analysis a little easier). The case \"f\"(\"u\") = 3\"u\" is the original Korteweg–de Vries equation.\n" ]
What is the probable true extent of Koko the gorilla's intelligence? (Additional question for anyone fluent in ASL)
Intelligence and the ability to communicate are not really the same thing. If by intelligence you mean the ability to communicate higher-order cognitive functions such as emotions, deception, etc., I think you're not going to find an answer. People associate intelligence with language, but who says they must both exist in tandem? We may not understand the bounds of intelligence. We see cephalopods that can sense light with their skin, independently of their eyes. The animal kingdom continually amazes us. Depending on who you ask, Koko, Kanzi, Panbanisha, and Alex the African grey could all be considered "exceptions". Just as extraordinary human individuals exist, extraordinary animals are likely to exist. My thoughts from my research are that apes, parrots, and possibly other animals are more intelligent than we give them credit for. They may have the capacity for language; it is just poorly understood by us. We place language in this tiny box reserved for humans because things like religion have not come about in other animals. We have to question whether the language of other animals is as complex as ours. What if we simply do not understand it? If we start with the hypothesis that animals do not have language because of the particular results we're examining, it becomes a hero story: we know the ending and we're just searching for facts to prove ourselves right. We've spent so long placing ourselves above animals due to our cultural biases that we lack an objective scientific approach to animal language studies. Nonsense research like [Project Nim](_URL_0_) plagues the field because it seems like the obvious answer. African greys exhibit convergent evolutionary aspects of language, and their social environment is quite similar to that of our hominin ancestors. I believe there is something we are missing in our studies of animal language and intelligence. Many studies of ape language focus on gestural communication as a precursor for language, since apes' Broca's areas activate when they communicate manually. (This is the area that activates when we speak.) There was some type of switch between using our bodies and relying more on vocalizations and calls. I have my own theory that music was a precursor to language, and recently there has been some research into this idea, so obviously I'm not alone. I would say, though, that most anthropologists feel one way, linguists another, psychologists another, and it is really best to evaluate the facts yourself.
[ "Although many people who encounter Ahn-Kha dismiss his intelligence due to his resemblance to the Gray Ones, his own species of Grog is generally as intelligent (if not more so) than humans. Ahn-Kha is a great source of wisdom, something Valentine relies on almost as much as he relies on Ahn-Kha's strength.\n", "Gon's intelligence seems to fluctuate in each adventure, ranging from total cluelessness (such as failing to notice a bird nest on his head for weeks), to strategic cunning (using a lion as a beast of burden to capture prey).\n", "This was addressed in the book \"Big Brain: The Origins and Future of Human Intelligence\" (2008) by neurologists Gary Lynch and Richard Granger, who claimed the large brain size in Boskop individuals might be indicative of particularly high general intelligence. Anthropologist John Hawks harshly criticized the depiction of the Boskop fossils in the book and in the book's review article in \"Discover\" magazine.\n", "Gorilla Grodd is a hyper-intelligent telepathic gorilla able to control the minds of others. He was an average ape until an alien spacecraft (retconned from a radioactive meteor which also empowered Hector Hammond) crashed in Grodd's African home. Grodd and his tribe of gorillas were imbued with super intelligence by the ship's pilot. Grodd and fellow gorilla Solovar also developed telepathic and telekinetic powers. Led by the alien, the gorillas constructed the super-advanced Gorilla City. The gorillas lived in peace until their home was discovered by explorers. Grodd forced one of the explorers to kill the alien and took over Gorilla City, planning to conquer the world next. Solovar telepathically contacted Barry Allen to warn of the evil gorilla's plans, and Grodd was defeated. The villain manages to return again and again to plague the Flash and the hero's allies.\n", "George Tarleton was subjected to a mutagenic process that transformed him into MODOK and grants him superhuman intelligence, including a computer-like memory, the ability to scour and retain large data-banks of information very quickly, and solve abstract mathematical problems nearly instantaneously. He also has the ability to calculate the mathematical probability of any given event occurring, which is so strong that it borders on precognition. However, his creativity remains at an average human level. MODOK's vast intelligence makes him one of the few beings that can analyze and comprehend the workings of the Cosmic Cube, which was the very purpose for his creation.\n", "The Kree Supreme Intelligence is a vast cybernetic/organic computer system composed of 5,000 cubic meters of computer circuitry incorporating the disembodied brains of the greatest statesmen and philosophers in Kree history, preserved cryogenically. This aggregation of brains creates a single collective intelligence able to use the vast information storage and processing capabilities of the computer system in a creative way. When wishing to interact with it, the Kree address it within its terminal chamber, where a holographic image is projected on a gigantic monitor screen.\n", "Intelligence testing was compared with anthropometrics. Samuel George Morton (1799–1851) collected hundreds of human skulls from all over the world and started trying to find a way to classify them according to some logical criterion. Morton claimed that he could judge intellectual capacity by cranial capacity. 
A large skull meant a large brain and high intellectual capacity, a small skull indicated a small brain and decreased intellectual capacity. Modern science has since confirmed that there is a correlation between cranium size (measured in various ways) and intelligence as measured by IQ tests, although it is a weak correlation at about 0.2. Today, brain volume as measured with MRI scanners also find a correlation between brain size and intelligence at about 0.4.\n" ]
why do professional singers, such as elvis presley and andrea bocelli, have a vibrato on every note? is it learned/does it make singing easier in some way?
Vibrato might make the sound a little more pleasant, and it also covers up tiny imperfections or discrepancies in pitch.
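Acoustically, vibrato is just a slow periodic wobble of the pitch around the target note. A small numpy sketch (the rate and depth values are typical figures, assumed here) shows how such a tone is generated, and hints at why vibrato masks pitch errors: the pitch sweeps through the target many times a second, so a small constant sharpness or flatness is swallowed by the sweep.

```python
import numpy as np

sr = 44100                     # sample rate (Hz)
t = np.arange(0, 2.0, 1 / sr)  # two seconds of audio
f0 = 440.0                     # target pitch (A4)
rate = 5.5                     # vibrato rate in Hz - typical for singers
depth = 0.5                    # vibrato depth in semitones (assumed)

# The instantaneous pitch wobbles around f0 by +/- `depth` semitones
f_inst = f0 * 2.0 ** (depth * np.sin(2 * np.pi * rate * t) / 12)
phase = 2 * np.pi * np.cumsum(f_inst) / sr   # integrate frequency -> phase
tone = np.sin(phase)
# `tone` crosses 440 Hz about 11 times per second (twice per vibrato
# cycle), so an offset of a few cents is hidden inside the wobble itself.
```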
[ "Prior to the advent of the charismatic Rubini, every well-schooled opera singer had avoided using a conspicuous and continuous vibrato because, according to Scott, it varied the pitch of the note being sung to an unacceptable degree and it was considered to be an artificial contrivance arising from inadequate breath control. British and North American press commentators and singing teachers continued to subscribe to this view long after Rubini had come and gone.\n", "One element of the tone quality of traditional Sacred Harp singers that can be clearly asserted is that they never use vibrato. However, this in itself says little about the rather distinctive sound that traditional singers produce. \n", "Many have recorded and played this song, in particular Tríos huastecos, Mariachis and Bolero Trios. But the most famous version was made by Miguel Aceves Mejía with his mariachi. With Huapangos or Son Huastecos, the falsetto technique is used to great effect, as in David Záizar's version. Quite a few versions of the song feature vocal gymnastics by whoever sings them, particularly the stretching of vowels such as the \"e\" sound in the gentilic 'Malagueña' for as long as the singer can hold the note. Other known mariachi versions of the song were recorded by: \n", "There is another kind of vibrato-linked fault that can afflict the voices of operatic artists, especially aging ones—namely the slow, often irregular wobble produced when the singer's vibrato has loosened from the effects of forcing, over-parting, or the sheer wear and tear on the body caused by the stresses of a long stage career.\n", "As with pop singing styles, the attractiveness or exciting qualities of a live opera vocal performance or recording are subjective and vary between listeners, cultures, and time periods. A soprano singing in the 1930s would elicit praise and applause for hitting a high note in a way that would be deemed unacceptable in the 2000s, because of the use of performance practices such as doing a long, drawn-out glissando up to reach the high note and then using a wide vibrato, wavering on the pitch so much that it is hard to discern which note they are singing.\n", "Traditionally, however, the deliberate cultivation of a particularly wide, pervasive vibrato by opera singers from the Latin countries has been denounced by English-speaking music critics and pedagogues as a technical fault and a stylistic blot (see Scott, cited below, Volume 1, pp. 123–127). They have expected vocalists to emit a pure, steady stream of clear sound — irrespective of whether they were singing in church, on the concert platform, or on the operatic stage.\n", "The piece is a staple of the operatic repertoire Because of a scarcity of true contraltos, the role of Rosina has most frequently been sung by a coloratura mezzo-soprano (with or without pitch alterations, depending on the singer), and has in the past, and occasionally in more recent times, been sung by coloratura sopranos such as Marcella Sembrich, Maria Callas, Roberta Peters, Gianna D'Angelo, Victoria de los Ángeles, Beverly Sills, Lily Pons, Diana Damrau, Edita Gruberová, Kathleen Battle and Luciana Serra. Famous recent mezzo-soprano Rosinas include Marilyn Horne, Teresa Berganza, Lucia Valentini Terrani, Susanne Marsee, Cecilia Bartoli, Joyce DiDonato, Jennifer Larmore, Elīna Garanča, and Vesselina Kasarova. Famous contralto Rosinas include Ewa Podleś.\n" ]
Is there more evidence of observable black holes now?
Yes, one of the most striking examples is that the stars right at the centre of the galaxy are all [orbiting something massive and invisible](_URL_0_) over like a 20 year period. And just this year, LIGO detected gravitational radiation from two colliding black holes.
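You can even recover the mass of that invisible object from nothing but Kepler's third law. A back-of-the-envelope sketch, using rough published orbital elements for the star S2 (the values below are approximate):

```python
# Kepler's third law in solar units: M / M_sun = a_AU**3 / T_yr**2
# Approximate orbital elements of the star S2 around Sgr A*:
a_au = 970.0   # semi-major axis, roughly 970 AU
T_yr = 16.0    # orbital period, roughly 16 years

mass_solar = a_au ** 3 / T_yr ** 2
print(f"enclosed mass ~ {mass_solar:.2e} solar masses")  # ~3.6 million
```

A few million solar masses packed inside a region smaller than the orbit is hard to explain as anything but a black hole.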
[ "Such research has attracted much media attention, as black holes have long captured the imagination of both scientists and the public for both their innate simplicity and mysteriousness. The recent theoretical results have therefore undergone much scrutiny and most of them are now ruled out by theoretical studies. For example, several alternative black hole models were shown to be unstable in extremely fast rotation, which, by conservation of angular momentum, would be a not unusual physical scenario for a collapsed star (see pulsar). Nevertheless, the existence of a stable model of a nonsingular black hole is still an open question.\n", "Note that this proof of existence of stellar black holes is not entirely observational but relies on theory: We can think of no other object for these massive compact systems in stellar binaries besides a black hole. A direct proof of the existence of a black hole would be if one actually observes the orbit of a particle (or a cloud of gas) that falls into the black hole.\n", "(May 2) First visual proof of existence of black holes is published. Suvi Gezari's team in Johns Hopkins University, using the Hawaiian telescope Pan-STARRS 1, record images of a supermassive black hole 2.7 million light-years away that is swallowing a red giant.\n", "A primordial black hole with an initial mass of around would be completing its evaporation today; a less massive primordial black hole would have already evaporated. In optimistic circumstances, the Fermi Gamma-ray Space Telescope satellite, launched in June 2008, might detect experimental evidence for evaporation of nearby black holes by observing gamma ray bursts. It is unlikely that a collision between a microscopic black hole and an object such as a star or a planet would be noticeable. The small radius and high density of the black hole would allow it to pass straight through any object consisting of normal atoms, interacting with only few of its atoms while doing so. It has, however, been suggested that a small black hole (of sufficient mass) passing through the Earth would produce a detectable acoustic or seismic signal.\n", "General relativity predicts the smallest primordial black holes would have evaporated by now, but if there were a fourth spatial dimension – as predicted by string theory – it would affect how gravity acts on small scales and \"slow down the evaporation quite substantially\". This could mean there are several thousand black holes in our galaxy. To test this theory, scientists will use the Fermi Gamma-ray Space Telescope which was put in orbit by NASA on June 11, 2008. If they observe specific small interference patterns within gamma-ray bursts, it could be the first indirect evidence for primordial black holes and string theory.\n", "A follow-up study, based on the same data and published the following year, reached a very different conclusion. The black hole that was initially suggested at was not as massive as once thought. The black hole was estimated to be between 2 and 5 billion solar masses. This is less than a third of the previously estimated mass, a significant decrease. Models with no black hole at all were also found to provide reasonably good fits to the data, including the central region.\n", "The evidence for stellar black holes strongly relies on the existence of an upper limit for the mass of a neutron star. The size of this limit heavily depends on the assumptions made about the properties of dense matter. New exotic phases of matter could push up this bound. 
A phase of free quarks at high density might allow the existence of dense quark stars, and some supersymmetric models predict the existence of Q stars. Some extensions of the standard model posit the existence of preons as fundamental building blocks of quarks and leptons, which could hypothetically form preon stars. These hypothetical models could potentially explain a number of observations of stellar black hole candidates. However, it can be shown from arguments in general relativity that any such object will have a maximum mass.\n" ]
complete quantum teleportation of photonic quantum bits
A bit is the basic unit of information: a logical question with either a yes or a no answer. Photonic quantum bits are qubits carried by photons - the particles that make up not only visible light but also radio/infrared/ultraviolet light. Quantum teleportation relies on a phenomenon known as "quantum entanglement": two particles can be prepared so that their measurement results are perfectly correlated no matter how far apart they are - sort of like twins who are mysteriously connected in horror films. Here's the catch that trips most people up: entanglement by itself can't send a message. When you measure your half of an entangled pair you get a random result; your partner's result is correlated with yours, but neither of you can choose what the result will be, so no usable information travels faster than light. What quantum teleportation actually does is move the *state* of a qubit from one place to another without physically shipping the particle: the sender makes a joint measurement on her qubit and her half of an entangled pair, sends two ordinary classical bits describing the outcome, and the receiver uses those two bits to nudge his half of the pair into the original state. Because those classical bits travel over a normal channel (radio, fiber, etc.), the whole process is still limited by the speed of light. TL;DR - Entangled particles are correlated in a special way, and quantum teleportation uses that correlation plus two ordinary classical bits to transfer a qubit's state between distant locations. It does not allow faster-than-light communication.
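Here is a minimal numpy simulation of the standard teleportation protocol, offered only as a sketch of the math. Note where the two classical bits enter: without knowing (m0, m1), Bob cannot apply the right correction and his qubit is useless to him.

```python
import numpy as np

# Single-qubit gates
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Random unknown qubit |psi> = a|0> + b|1> that Alice wants to teleport
rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# Qubit order: A (unknown), B (Alice's half of Bell pair), C (Bob's half)
bell = np.zeros(4, dtype=complex)
bell[0b00] = bell[0b11] = 1 / np.sqrt(2)   # (|00> + |11>) / sqrt(2)
state = np.kron(psi, bell)                 # 8-dim joint state

# CNOT with control A, target B, built as an explicit permutation matrix
CNOT_AB = np.zeros((8, 8), dtype=complex)
for a in range(2):
    for b in range(2):
        for c in range(2):
            src = (a << 2) | (b << 1) | c
            dst = (a << 2) | ((b ^ a) << 1) | c
            CNOT_AB[dst, src] = 1
state = CNOT_AB @ state
state = kron(H, I, I) @ state              # Hadamard on A

# Alice measures A and B; sample one outcome and extract its two bits
probs = np.abs(state) ** 2
outcome = rng.choice(8, p=probs / probs.sum())
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1

# Collapse onto that outcome; what survives is Bob's qubit
bob = np.array([state[(m0 << 2) | (m1 << 1) | c] for c in range(2)])
bob /= np.linalg.norm(bob)

# Bob applies corrections using the 2 classical bits Alice sent him
if m1:
    bob = X @ bob
if m0:
    bob = Z @ bob

# Check: Bob's qubit now equals |psi> up to a global phase
overlap = abs(np.vdot(psi, bob))
print(f"outcome bits = ({m0},{m1}), |<psi|bob>| = {overlap:.6f}")  # ~1.0
```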
[ "Quantum teleportation provides a mechanism of moving a qubit from one location to another, without having to physically transport the underlying particle to which that qubit is normally attached. Much like the invention of the telegraph allowed classical bits to be transported at high speed across continents, quantum teleportation holds the promise that one day, qubits could be moved likewise. the quantum states of single photons, photon modes, single atoms, atomic ensembles, defect centers in solids, single electrons, and superconducting circuits have been employed as information bearers.\n", "BULLET::::- 17 August – In an unprecedented effort by ETH Zurich Laboratories, computational quantum teleportation has been achieved in solid-state circuit. Using quantum entanglement methods, researchers have teleported approximately 10,000 qubits (quantum bits) per second on a specially designed chip.\n", "Quantum teleportation is a process by which quantum information (e.g. the exact state of an atom or photon) can be transmitted (exactly, in principle) from one location to another, with the help of classical communication and previously shared quantum entanglement between the sending and receiving location. Because it depends on classical communication, which can proceed no faster than the speed of light, it cannot be used for faster-than-light transport or communication of classical bits. While it has proven possible to teleport one or more qubits of information between two (entangled) quanta, this has not yet been achieved between anything larger than molecules.\n", "In quantum teleportation, a sender wishes to transmit an arbitrary quantum state of a particle to a possibly distant receiver. Quantum teleportation is able to achieve faithful transmission of quantum information by substituting classical communication and prior entanglement for a direct quantum channel. Using teleportation, an arbitrary unknown qubit can be faithfully transmitted via a pair of maximally-entangled qubits shared between sender and receiver, and a 2-bit classical message from the sender to the receiver. Quantum teleportation requires a noiseless quantum channel for sharing perfectly entangled particles, and therefore entanglement distillation satisfies this requirement by providing the noiseless quantum channel and maximally entangled qubits.\n", "The prerequisites for quantum teleportation are a qubit that is to be teleported, a conventional communication channel capable of transmitting two classical bits (i.e., one of four states), and means of generating an entangled EPR pair of qubits, transporting each of these to two different locations, A and B, performing a Bell measurement on one of the EPR pair qubits, and manipulating the quantum state of the other pair. The protocol is then as follows:\n", "Quantum teleportation is distinct from regular teleportation, as it does not transfer particles from one place to another, but rather transmits the information necessary to prepare a target system in the same quantum state as the source system. In many cases, such as normal matter at room temperature, the exact quantum state of a system is irrelevant for any practical purpose (because it fluctuates rapidly anyways, it “decoheres”), and the necessary information to recreate the system is classical. 
In those cases, quantum teleportation may be replaced by the simple transmission of classical information, such as radio communication.\n", "In quantum teleportation, a sender wishes to transmit an arbitrary quantum state of a particle to a possibly distant receiver. Consequently, the teleportation process is a quantum channel. The apparatus for the process itself requires a quantum channel for the transmission of one particle of an entangled-state to the receiver. Teleportation occurs by a joint measurement of the sent particle and the remaining entangled particle. This measurement results in classical information which must be sent to the receiver to complete the teleportation. Importantly, the classical information can be sent after the quantum channel has ceased to exist.\n" ]
What about deep breathing makes us lightheaded?
Hyperventilation removes more CO2 from the blood than is being released into the blood via cell respiration. This increases the pH of the blood, a condition called alkalosis, which in turn causes blood vessels to constrict, reducing blood flow. Alkalosis also reduces the amount of freely ionized calcium in the blood, which is essential for proper nerve functioning, and which also causes blood vessels to constrict, reducing blood flow to the brain further, which makes you feel lightheaded.
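You can put numbers on the pH shift with the Henderson-Hasselbalch equation for the bicarbonate buffer. This is a back-of-the-envelope sketch: it assumes bicarbonate stays near its normal ~24 mM in the first minutes of hyperventilation, before the kidneys compensate.

```python
import math

def blood_ph(pco2_mmHg, hco3_mM=24.0):
    """Henderson-Hasselbalch for the bicarbonate buffer.
    6.1 is the pKa of carbonic acid; 0.03 mM/mmHg converts CO2
    partial pressure to dissolved CO2 concentration."""
    return 6.1 + math.log10(hco3_mM / (0.03 * pco2_mmHg))

print(f"normal       (pCO2 40 mmHg): pH = {blood_ph(40):.2f}")  # ~7.40
print(f"hyperventing (pCO2 20 mmHg): pH = {blood_ph(20):.2f}")  # ~7.70
```

Halving the CO2 pushes blood pH from about 7.4 up to about 7.7, well into the alkalotic range that triggers the vessel constriction described above.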
[ "Overly shallow breathing, also known medically as hypopnea, may result in hypoventilation, which could cause a build up of carbon dioxide in an individual's body, a symptom known as hypercapnia. It's a condition related to neuro-muscular disorders (NMDs) that include Lou Gehrig's Disease, Muscular Dystrophy, Polio, Post-Polio Syndrome and others. It is a serious condition if not diagnosed properly, or if it's ignored. It is often treated as a \"sleep disorder\" after a sleep study performed, but \"sleep studies cannot diagnose shallow breathing (JR Bach, M.D.).\" Serious symptoms arise most commonly during sleep; however, because when the body sleeps, the intercostal muscles do not perform the breathing for mechanism, it's done by the diaphragm, which is often impaired in people with NMDs. \n", "The term asphyxiation is often mistakenly associated with the strong desire to breathe that occurs if breathing is prevented. This desire is stimulated from increasing levels of carbon dioxide. However, asphyxiant gases may displace carbon dioxide along with oxygen, preventing the victim from feeling short of breath. In addition the gases may also displace oxygen from cells, leading to loss of consciousness and death rapidly.\n", "Some describe tachypnea as any rapid breathing. Hyperventilation is then described as increased ventilation of the alveoli (which can occur through increased rate or depth of breathing, or both) where there is a smaller rise in metabolic carbon dioxide relative to this increase in ventilation. Hyperpnea, on the other hand, is defined as breathing more rapid and deep than breathing at rest.\n", "In a healthy person during sleep, breathing is regular so oxygen levels and carbon dioxide levels in the bloodstream stay fairly constant: After exhalation, the blood level of oxygen decreases and that of carbon dioxide increases. Exchange of gases with a lungful of fresh air is necessary to replenish oxygen and rid the bloodstream of built-up carbon dioxide. Oxygen and carbon dioxide receptors in the body (called chemoreceptors) send nerve impulses to the brain, which then signals for reflexive opening of the larynx (enlarging the opening between the vocal cords) and movements of the rib cage muscles and diaphragm. These muscles expand the thorax (chest cavity) so that a partial vacuum is made within the lungs and air rushes in to fill it. In the absence of central apnea, any sudden drop in oxygen or excess of carbon dioxide, even if small, strongly stimulates the brain's respiratory centers to breathe; the respiratory drive is so strong that even conscious efforts to hold one's breath do not overcome it.\n", "During heavy breathing (hyperpnea), as, for instance, during exercise, inhalation is brought about by a more powerful and greater excursion of the contracting diaphragm than at rest (Fig. 8). In addition the \"accessory muscles of inhalation\" exaggerate the actions of the intercostal muscles (Fig. 8). These accessory muscles of inhalation are muscles that extend from the cervical vertebrae and base of the skull to the upper ribs and sternum, sometimes through an intermediary attachment to the clavicles. When they contract the rib cage's internal volume is increased to a far greater extent than can be achieved by contraction of the intercostal muscles alone. 
Seen from outside the body the lifting of the clavicles during strenuous or labored inhalation is sometimes called clavicular breathing, seen especially during asthma attacks and in people with chronic obstructive pulmonary disease.\n", "During heavy breathing, exhalation is caused by relaxation of all the muscles of inhalation. But now, the abdominal muscles, instead of remaining relaxed (as they do at rest), contract forcibly pulling the lower edges of the rib cage downwards (front and sides) (Fig. 8). This not only drastically decreases the size of the rib cage, but also pushes the abdominal organs upwards against the diaphragm which consequently bulges deeply into the thorax (Fig. 8). The end-exhalatory lung volume is now well below the resting mid-position and contains far less air than the resting \"functional residual capacity\". However, in a normal mammal, the lungs cannot be emptied completely. In an adult human there is always still at least 1 liter of residual air left in the lungs after maximum exhalation.\n", "Shallow breathing, or chest breathing is the drawing of minimal breath into the lungs, usually by drawing air into the chest area using the intercostal muscles rather than throughout the lungs via the diaphragm. Shallow breathing can result in or be symptomatic of rapid breathing and hyperventilation. Most people who breathe shallowly do it throughout the day and are almost always unaware of the condition.\n" ]
would the mass of a helium balloon be positive or negative, and is there such a thing as negative mass?
The mass of a helium balloon is positive - the helium, the rubber, everything has ordinary positive mass. Its weight (mass times gravity) is positive too, pointing down. What makes it rise is that the buoyant force from the heavier air it displaces is larger than its weight, so the *net* force points up: it behaves as if it had negative weight, but it doesn't. As for negative mass: some theoretical models allow it, but we have never observed anything with negative mass.
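To put numbers on it, here's a quick sketch for a typical party balloon (the volume and rubber mass are assumed, round figures):

```python
g = 9.81          # m/s^2
rho_air = 1.225   # kg/m^3, air at sea level
rho_he = 0.1786   # kg/m^3, helium
V = 0.014         # m^3, roughly a 30 cm party balloon (assumed)
m_rubber = 0.003  # kg of latex (assumed)

m_total = rho_he * V + m_rubber    # mass: always positive
weight = m_total * g               # weight: downward, also positive
buoyancy = rho_air * V * g         # upward push from the displaced air
net = buoyancy - weight
print(f"mass = {m_total * 1000:.1f} g, weight = {weight:.3f} N")
print(f"net upward force = {net:.3f} N")  # positive -> the balloon rises
```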
[ "Negative mass is any region of space in which for some observers the mass density is measured to be negative. This could occur due to a region of space in which the stress component of the Einstein stress–energy tensor is larger in magnitude than the mass density. All of these are violations of one or another variant of the positive energy condition of Einstein's general theory of relativity; however, the positive energy condition is not a required condition for the mathematical consistency of the theory.\n", "Fully inflated, a balloon of this size would contain just over of helium. Helium's lift capacity at sea level and 0 °C is 1.113 kg/m (0.07 lbs/ft) and decreases at higher altitudes and at higher temperatures. The volume of helium in the balloon has been estimated as being able to lift a total load, including the balloon material and the structure beneath it, of at sea level and at .\n", "A common helium-filled toy balloon is something familiar to many. When such a balloon is fully filled with helium, it has buoyancy—a force that opposes gravity. When a toy balloon becomes partially deflated, it often becomes neutrally buoyant and can float about the house a meter or two off the floor. In such a state, there are moments when the balloon is neither rising nor falling and—in the sense that a scale placed under it has no force applied to it—is, in a sense perfectly weightless (actually as noted below, weight has merely been redistributed along the Earth's surface so it cannot be measured). Though the rubber comprising the balloon has a mass of only a few grams, which might be almost unnoticeable, the rubber still retains all its mass when inflated.\n", "In a theoretically perfect situation with weightless spheres, a 'vacuum balloon' would have 7% more net lifting force than a hydrogen-filled balloon, and 16% more net lifting force than a helium-filled one. However, because the walls of the balloon must be able to remain rigid without imploding, the balloon is impractical to construct with all known materials. Despite that, sometimes there is discussion on the topic.\n", "The balloons were spherical superpressure types with a diameter of and filled with helium. A gondola assembly weighing and long was connected to the balloon envelope by a tether long. Total mass of the entire assembly was .\n", "The mass of \"weightless\" (neutrally buoyant) balloons can be better appreciated with much larger hot air balloons. Although no effort is required to counter their weight when they are hovering over the ground (when they can often be within one hundred newtons of zero weight), the inertia associated with their appreciable mass of several hundred kilograms or more can knock fully grown men off their feet when the balloon's basket is moving horizontally over the ground.\n", "The balloons are zero pressure difference balloons, and are vented at the bottom. They are only partially inflated when launched, and as they rise up, the lower atmospheric pressure causes them to fully inflate.\n" ]
why does the grocery store carry fully cooked frozen chicken and turkey, but not beef?
Not certain, but perhaps it's because there are many different degrees of doneness at which people like their meat cooked. Poultry always has to be cooked all the way through for safety, so "fully cooked" is a single standard product; beef is eaten anywhere from rare to well-done, so a precooked version would be wrong for most buyers.
[ "Intensive farming of turkeys from the late 1940s dramatically cut the price, making it more affordable for the working classes. With the availability of refrigeration, whole turkeys could be shipped frozen to distant markets. Later advances in disease control increased production even more. Advances in shipping, changing consumer preferences and the proliferation of commercial poultry plants has made fresh turkey inexpensive as well as readily available.\n", "Turkeys are sold sliced and ground, as well as \"whole\" in a manner similar to chicken with the head, feet, and feathers removed. Frozen whole turkeys remain popular. Sliced turkey is frequently used as a sandwich meat or served as cold cuts; in some cases, where recipes call for chicken, turkey can be used as a substitute. Additionally, ground turkey is frequently marketed as a healthy ground beef substitute. Without careful preparation, cooked turkey may end up less moist than other poultry meats, such as chicken or duck.\n", "Turkeys are sold sliced and ground, as well as \"whole\" in a manner similar to chicken with the head, feet, and feathers removed. Frozen whole turkeys remain popular. Sliced turkey is frequently used as a sandwich meat or served as cold cuts; in some cases where recipes call for chicken, it can be used as a substitute. Ground turkey is sold, and frequently marketed as a healthy alternative to ground beef. Without careful preparation, cooked turkey is usually considered to end up less moist than other poultry meats such as chicken or duck.\n", "Canned meats have a mixed reputation for their taste, texture, ingredients, preparation and nutrition. The canning process produces a product with a generally homogeneous texture and flavor. The low-cost ingredients used also affect the quality. For example, mechanically separated chicken or turkey is a paste-like product made by forcing crushed bone and tissue through a sieve to separate bone from tissue. In the United States, mechanically separated poultry has been used in poultry products since 1969, after the National Academy of Sciences found it safe for use. On November 3, 1995, the Food Safety and Inspection Service of the U.S. Department of Agriculture published a final rule in the Federal Register (see 60 FR 55962) on mechanically separated poultry, stating that it was safe to use without restrictions. However, it must be labeled as \"mechanically separated\" chicken or turkey in the ingredient statement. The final rule became effective on November 4, 1996.\n", "In addition to meatpacking, Swift sold various dairy and grocery items, including Swiftning shortening, Allsweet margarine, Brookfield butter, cheese under the Brookfield, Pauly, and Treasure Cave brands, and Peter Pan peanut butter. Swift began selling frozen turkeys under the Butterball brand in 1954. Gustavus Swift also championed the refrigerated railroad car.\n", "Kummerow researched lipids at Kansas State University during and after World War II. He won a contract from the U.S. Army Quartermaster Corps to investigate methods of preventing frozen turkeys and chickens from tasting rancid. Ultimately, \"a simple change in the poultry feed solved the problem, making possible the sale of frozen poultry in grocery stores.\" The feed change was from linseed to corn.\n", "Turkey product lines include fresh turkey meats in a range of cuts such as sausages, small roasts, steaks, wings, drumsticks, schnitzel and turkey mince. Ready-to-eat lines are either roasted or smoked and packaged. 
Frozen items include a whole turkey, buffé, or Kiev.\n" ]
why can't we just transfer antibodies from immune people to sick people to cure them easily?
Give a man antibodies and he's immune for however long they last. Teach a man's B-cells to make antibodies and he's immune for life (unless they stop making them, as in immune-compromised situations). Antibodies are single use: they bind to something determined to be bad, marking it for disposal, and are removed along with the bad thing. If there are not enough antibodies, you can't get rid of the bad things before the bad things make more of themselves. Some treatments do exactly what you describe - harvesting antibodies from donated plasma or from cells engineered to manufacture them, and infusing them into patients (this is called passive immunization) - but the protection only lasts as long as the transferred antibodies survive. Meanwhile, if you can train your body's own B-cells to make antibodies, you will be immune for as long as those cells keep producing them. Vaccines give your body a taste of the bad thing so you can produce your own immunity. This may need to happen a few times to build up immunity (chickenpox) or repeatedly if the bad thing changes quickly (flu virus).
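The "however long they last" part is roughly exponential decay. A toy model with a ~21-day half-life (typical for human IgG; the number is used here only for illustration) shows why passive protection fades within a few months:

```python
# Toy model: passively transferred antibody decays with a ~21-day
# half-life (typical for human IgG; assumed purely for illustration)
HALF_LIFE_DAYS = 21.0

def remaining_fraction(days):
    return 0.5 ** (days / HALF_LIFE_DAYS)

for day in (0, 21, 42, 84):
    frac = remaining_fraction(day)
    print(f"day {day:3d}: {frac * 100:5.1f}% of the transferred antibody left")
```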
[ "Different antigens are able to escape through a variety of mechanisms. For example, the African trypanosome parasites are able to clear the host's antibodies, as well as resist lysis and inhibit parts of the innate immune response. Another bacteria, \"Bordetella pertussis\", is able to escape the immune response by inhibiting neutrophils and macrophages from invading the infection site early on. One cause of antigenic escape is that a pathogen's epitopes (the binding sites for immune cells) become too similar to a person's naturally occurring MHC-1 epitopes. The immune system becomes unable to distinguish the infection from self-cells.\n", "Antibodies are produced naturally by the body and play a key role in fighting infections caused by bacteria and viruses. They can also be used to treat infections by use of injections with blood plasma that contain large amounts of them. The use of whole, natural antibodies as medicines presents many problems: they can only be produced by live cells and this process is difficult to control on an industrial scale, they are large molecules and following administration by injection, they do not diffuse easily from the blood to the tissues and other sites of infections where they are needed.\n", "Antigenic escape occurs when the immune system is unable to respond to an infectious agent. This means that the response mechanisms a host's immune system normally utilizes to recognize and eliminate a virus or pathogen is no longer able to do so. This process can occur in a number of different mechanisms of both genetic and environmental nature. Such mechanisms include homologous recombination, and manipulation and resistance of the host's immune responses.\n", "In the course of normal immune response, parts of pathogens (e.g. bacteria) are recognized by the immune system as foreign (non-self), and eliminated or effectively neutralized to reduce their potential damage. Such a recognizable substance is called an antigen. The immune system may respond in multiple ways to an antigen; a key feature of this response is the production of antibodies by B cells (or B lymphocytes) involving an arm of the immune system known as humoral immunity. The antibodies are soluble and do not require direct cell-to-cell contact between the pathogen and the B-cell to function.\n", "Even if the host does develop antibodies, protection might not be adequate; immunity might develop too slowly to be effective in time, the antibodies might not disable the pathogen completely, or there might be multiple strains of the pathogen, not all of which are equally susceptible to the immune reaction. However, even a partial, late, or weak immunity, such as a one resulting from cross-immunity to a strain other than the target strain, may mitigate an infection, resulting in a lower mortality rate, lower morbidity, and faster recovery.\n", "These non-human antibodies are recognized as foreign by the human immune system and may be rapidly cleared from the body, provoke an allergic reaction, or both. To avoid this, parts of the antibody can be replaced with human amino acid sequences, or pure human antibodies can be engineered. If the constant region is replaced with the human form, the antibody is termed \"chimeric\" and the substem used was \"-xi-\". 
Part of the variable regions may also be substituted, in which case it is called \"humanized\" and \"-zu-\" was used; typically, everything is replaced except the complementarity determining regions (CDRs), the three loops of amino acid sequences at the outside of each variable region that bind to the target structure, although some other residues may have to remain non-human in order to achieve good binding. Partly chimeric and partly humanized antibodies used \"-xizu-\". These three substems did not indicate the foreign species used for production. Thus, the human/mouse chimeric antibody basiliximab ends in \"-ximab\" just as does the human/macaque antibody gomiliximab. Purely human antibodies used \"-u-\".\n", "The body naturally does not launch an immune system attack on its own tissues. Immune tolerance therapies seek to reset the immune system so that the body stops mistakenly attacking its own organs or cells in autoimmune disease or accepts foreign tissue in organ transplantation. Creating immunity reduces or eliminates the need for lifelong immunosuppression and attendant side effects. It has been tested on transplantations, and type 1 diabetes or other autoimmune disorders.\n" ]