Columns: question (string, 3 to 300 characters), answer (string, 9 to 2.77k characters), context (sequence of exactly 7 strings).
When I viciously rip a leaf off of a tree during Spring months such as April, what exactly happens to the tree?
Assuming you didn't tear some of the bark and only ripped off the leaf, the tree will simply seal off that area via the [abscission layer](_URL_0_). Trees don't heal, they seal; leaves are shed in the fall by this same mechanism. As long as the leaf is producing auxin, the abscission layer will not seal off the leaf stem; but if conditions halt auxin production, or if the leaf is removed, the layer will grow and seal off the exposed area. Trees can shed leaves before fall as well; an aphid infestation, for example, can cause a tree to prematurely abscise its leaves. Nothing really bad happens to the tree other than it getting a little less glucose from photosynthesis. If you were to damage part of the bark, the tree would still seal off the area to prevent the spread of infection, forming callus tissue around the edge as well as deploying chemical defenses.
[ "In the opening of the final chapter, \"Death\", the tree is 550 years old and stands 80 meters (260 feet) tall. Under the weight of too much snow accumulating on the canopy mat, a branch breaks off. Stresses from a long winter with a dry summer weaken the tree's immune system. The exposed area where the branch broke becomes infected with insects and fungus. Insect larvae eat the buds and the fungus spreads into the middle of the tree and down to the roots. With its vascular tissue system compromised, the tree diverts nutrients elsewhere, resulting in needles turning orange on the abandoned branches. Death takes years to occur as successive parts are slowly starved of nutrients. As a snag, it becomes home to a succession of animals, like woodpeckers, owls, squirrels, and bats. Eventually the roots rot enough that a rainstorm blows it down. Mosses and fungi grow on the deadfall, followed by colonies of termites, ants, and mites, which all help decompose the remaining wood.\n", "Infected trees have abnormally small, yellow and sparse leaves which frequently fall prematurely. The crowns of trees which have been infected for many years have many dead twigs and branches. Tarry or rusty spots may appear at the base of the trunk which are indicative of the death of the phloem caused by the \"P. alni\" invasion. The course of the disease is varied, with many trees dying rapidly once symptoms appear, however, others may deteriorate slowly over many years.\n", "The tree reproduces when its acorns sprout to form seedlings. It also reproduces vegetatively with new growth sprouting from the root crown after the tree is top-killed by wildfire, logging, frost, or other events.\n", "The fungus infects trees in the spring, and continues to develop over the following winter. The fungus causes yellowing (chlorosis) of the needles, with eventual necrosis and premature needle-drop. In some heavily diseased stands of trees, the only needles remaining are those of the current year, in which the disease has not yet had time to fully develop. The overall result for the infected tree is reduced growth.\n", "The larvae feed on the leaves of \"Thuja occidentalis\" as well as other arborvitae and false cypress (\"Chamaecyparis\") species. Trees infested for several consecutive years can be killed, but usually an infested tree is able to renew foliage later during the growth season.\n", "The first sign of infection is usually an upper branch of the tree with leaves starting to wither and yellow in summer, months before the normal autumnal leaf shedding. This progressively spreads to the rest of the tree, with further dieback of branches. Eventually, the roots die, starved of nutrients from the leaves. Often, not all the roots die: the roots of some species, notably the English elm, \"Ulmus procera\", can repeatedly put up suckers, which flourish for approximately 15 years, after which they too succumb.\n", "In the spring, trees produce what is known as “springwood” from the stored starches of the previous growing season. This tissue is characterized by long xylem vessels with relatively thin walls, making it the ideal habitat for the pathogen. In springwood, the fungus spreads rapidly, and it is likely that the tree will die. Later in the growing season, the elm will utilize sugars produced by the leaves to nurture the formation of “summerwood”. Summerwood vessels are typically shorter with thicker walls, making it harder for the infection to spread. \n" ]
why aren't we always hungry for the things that our body needs most?
You are hungry for specific things, at least sometimes; that's why you crave certain foods. Your body has learned to associate particular nutrients with the foods they come in, so when it's short on a nutrient it tells you to eat the food it expects to find that nutrient in.
[ "There is no single explanation for food cravings, and explanations range from low serotonin levels affecting the brain centers for appetite to production of endorphins as a result of consuming fats and carbohydrates.\n", "Commonly people have an appetite for meat or eggs, high protein foods. But these may be expensive or otherwise unavailable. A specific appetite for protein may be unsatisfied with the ingestion of a diet deficient in protein. But protein is vitally important to maintaining the structures of the body’s systems, so the specific appetite leads to more eating, in a desperate attempt to satiate the specific appetite for protein in life.\n", "Appetite is the desire to eat food, sometimes due to hunger. Appealing foods can stimulate appetite even when hunger is absent, although appetite can be greatly reduced by satiety. Appetite exists in all higher life-forms, and serves to regulate adequate energy intake to maintain metabolic needs. It is regulated by a close interplay between the digestive tract, adipose tissue and the brain. Appetite has a relationship with every individual's behavior. Appetitive behaviour also known as \"approach behaviour\", and consummatory behaviours, are the only processes that involve energy intake, whereas all other behaviours affect the release of energy. When stressed, appetite levels may increase and result in an increase of food intake. Decreased desire to eat is termed anorexia, while polyphagia (or \"hyperphagia\") is increased eating. Dysregulation of appetite contributes to anorexia nervosa, bulimia nervosa, cachexia, overeating, and binge eating disorder.\n", "BULLET::::- \"Natural but not necessary\": These desires are innate to humans, but they do not need to be fulfilled for their happiness or their survival. Wanting to eat delicious food when one is hungry is an example of a natural but not necessary desire. The main problem with these desires is that they fail to substantially increase a person's happiness, and at the same time require effort to obtain and are desired by people due to false beliefs that they are actually necessary. It is for this reason that they should be avoided.\n", "Food and diet feature prominently in the aphorisms of the Hippocratic Corpus. For example, in one aphorism in the first section, Hippocrates states, “Things which are growing have the greatest natural warmth and, accordingly, need most nourishment. Failing this the body becomes exhausted. Old men have little warmth and they need little food which produces warmth; too much only extinguishes the warmth they have. For this reason, fevers are not so acute in old people for then the body is cold”. Another aphorism says, “It is better to be full of drink than full of food”. And finally, an aphorism that generally sums up treatment of disease in Hippocratic times states, “Disease which results from over-eating is cured by fasting; disease following fasting, by a surfeit. So with other things; cures may be effected by opposites,”. This concept of treating diseases opposite to the way it manifests in the individual is concept that is carried over into Roman medicine.\n", "Children who are externally motivated to eat are at a higher risk for obesity. In one study, two groups of children were told to focus on different prompts to eat: either external cues, such as the amount of food on their plate, or internal cues like hunger and satiety. The children who relied on internal cues were more likely to eat when they were hungry and stop when they were full. 
In contrast, the children who responded to external cues were more likely to ignore or overlook internal cues that indicated that they were full. Children who grow accustomed to relying on external hunger cues, and thus to eating more than their bodies need, are more likely to gain excess weight.\n", "Pretas dwell in the waste and desert places of the earth, and vary in situation according to their past karma. Some of them can eat a little, but find it very difficult to find food or drink. Others can find food and drink, but find it very difficult to swallow. Others find that the food they eat seems to burst into flames as they swallow it. Others see something edible or drinkable and desire it but it withers or dries up before their eyes. As a result, they are always hungry. \n" ]
why are there free refills for soft drinks in the us?
In the US, refills are free because soft drinks are commonly dispensed from fountain machines. A 12oz soft drink from a fountain costs the restaurant almost nothing, typically just a few pennies' worth of syrup, water, and CO2. So restaurants see free refills as a perk they can offer at almost no cost to themselves, one that might encourage patrons to stay longer and maybe buy more food; a rough sketch of the numbers is below. I don't know why they're uncommon in Europe; perhaps soda fountains arrived there later, or are simply less widespread.
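As a back-of-the-envelope sketch of those economics: the context below cites an 80-82% profit margin on fountain drinks, so assume a hypothetical $2.00 menu price and an 80% margin. Every number here is illustrative, not sourced.

```python
# Back-of-the-envelope fountain-drink economics (all numbers assumed).
menu_price = 2.00       # hypothetical menu price of a fountain drink
margin = 0.80           # ~80% profit margin cited for fountain drinks
first_serving_cost = menu_price * (1 - margin)  # cup, lid, straw, beverage
refill_cost = 0.05      # assumed: a refill reuses the cup, so beverage only

profit_no_refills = menu_price - first_serving_cost
profit_two_refills = menu_price - first_serving_cost - 2 * refill_cost

print(f"profit, no refills:  ${profit_no_refills:.2f}")   # $1.60
print(f"profit, two refills: ${profit_two_refills:.2f}")  # $1.50
```

Even a couple of refills shave only a few percent off the margin, which is why the perk costs restaurants almost nothing to offer.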
[ "Free refills are seen as a good way to attract customers to an establishment, especially one whose beverages are not their primary source of income. Due to the extremely low cost of fountain soft drinks (especially the beverage itself, not including the cost of the cup, lid and straw), often offering a profit margin of 80-82%, establishments tend to offer free refills as a sales gimmick. Coffee produces a similar high profit margin, allowing establishments to include coffee in free refill offers.\n", "Free refills occur when a drink, usually soda, tea or coffee, is allowed to be filled again, free of charge, after being consumed. Free refills are commonplace in America and Canada in traditional restaurants, while rarer in airports, cafés, or service stations. Around the world, the availability of free refills is typically scarce, but varies widely depending on the country and the ownership of the restaurant.\n", "The original value proposition was for consumers to enjoy low-priced beverages by washing and refilling reusable soft-drink bottles in the Fountain Fresh dispenser. In return for providing a bottle and operating the machine, consumers could purchase soft drinks for as low as 59-cents for one liter and two liters for 69-cents, a significant discount from the price of commercially bottled soft drinks. The concept was rolled out in several retail locations throughout the United States, including a large number of Wal-Mart stores. \n", "Most of these establishments have fast customer turnover, thus customers rarely consume enough beverage to make the offering unprofitable. Some establishments, who make their primary income with beverage sales, only offer free refills to members of rewards programs.\n", "\"Refrescos\" is the local name for bottled soft drinks, which are widely sold. Most common brands are available, although in rural areas, vendors sometimes sell soft drinks in plastic bags, which are cheaper than cans or bottles.\n", "There is little consistency outside of North America regarding the availability of free refills. For example, Burger King restaurants in Spain often provide free refills, whereas in Bolivia, Burger King restaurants do not. In France, free refills have actually been outlawed; in Germany, most restaurants charge even to refill water. In Japan, free refills are referred to as \"drink bar\" (ドリンクバー) and often a separate purchase is required to access them.\n", "To date, 33 states have taxes on soft drinks but they are \"too low to affect consumption and the revenues are not earmarked for health programs,\" according to the \"New England Journal of Medicine\" study.\n" ]
how does prison labor work?
In my prison, as an example, the guys who work in the dining facility start out at 40c an hour. The guys who work in the welding shop start out at 60c an hour. The guys who work in the wood shop get 45c an hour, but can get a bonus of up to $25 depending on what they build. Inmates who stay with a job long enough and attend additional training can work their way up to supervisor and earn up to $1.10 an hour. The guys who work third shift make an extra dime per hour. They don't make a lot of money because, regardless of what the inmates believe, it costs a lot of money to house an inmate, and the difference between an inmate's wages and a living wage outside goes to offset some of that cost. The inmates can do what they want with that money: some use it to pay for college courses, some buy things from the commissary, and some invest it in a savings or retirement account. Large corporations at some prisons can "outsource" jobs. The corporation has to be approved to do so, and one of the considerations is whether the inmate labor will have an effect on the local economy. Inmate labor is allowed because the inmates have to have something to do during the day. You can't leave them locked in their cells all day, and having a job to go to helps prepare them to reintegrate with the outside world, where they will need one, when they get released. A rough look at what those wages add up to is sketched below.
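To put those tiers in perspective, here is a minimal sketch of what a month of full-time work would pay at each rate; the 160-hour month is my assumption, not something stated above.

```python
# Rough monthly pay at the wage tiers listed above.
# Assumes a hypothetical 40-hour week, i.e. ~160 hours/month.
HOURS_PER_MONTH = 160

wages = {
    "dining facility": 0.40,  # $/hour
    "wood shop":       0.45,  # plus a possible bonus of up to $25
    "welding shop":    0.60,
    "supervisor":      1.10,
}

for job, rate in wages.items():
    print(f"{job:16s} ${rate * HOURS_PER_MONTH:6.2f}/month")
# dining facility  $ 64.00/month ... supervisor $176.00/month
```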
[ "A labor camp (or labour, see spelling differences) or work camp is a simplified detention facility where inmates are forced to engage in penal labor as a form of punishment under the criminal code. Labour camps have many common aspects with slavery and with prisons (especially prison farms). Conditions at labor camps vary widely depending on the operators.\n", "Prison inmates can work either for the prison (directly, by performing tasks linked to prison operation, or for the Régie Industrielle des Établissements Pénitentiaires, which produces and sells merchandises) or for a private company, in the framework of a prison/company agreement for leasing inmate labour. Work ceased being compulsory for sentenced inmates in France in 1987. From the French Revolution of 1789, the prison system has been governed by a new penal code. Some prisons became quasi-factories, in the nineteenth century, many discussions focused on the issue of competition between free labour and prison labour. Prison work was temporarily prohibited during the French Revolution of 1848. Prison labour then specialised in the production of goods sold to government departments (and directly to prisons, for example guards' uniforms), or in small low-skilled manual labour (mainly subcontracting to small local industries).\n", "The prisoner shall do labour and receive compensation.It was reported that most prisons uses a point-based system, where the prisoners who passed their line will receive extra compensation based on the extra points they have. The judge for commutation trial will also consider labour record as evidence for good behavior. The prison shall have classroom and reading room, and the prisoners shall receive education. The prisoner shall receive promptly medical treatment. Prison shall be equipped with an infirmary. The prisons use a point-based system to evaluate the behavior of the prisoners.\n", "Sometimes authorities turn prison labour into an industry, as on a prison farm or in a prison workshop. In such cases, the pursuit of income from their productive labour may even overtake the preoccupation with punishment and/or reeducation as such of the prisoners, who are then at risk of being exploited as slave-like cheap labour (profit may be minor after expenses, e.g. on security).\n", "In addition to being forced to labor directly for the government on a prison farm or in a penal colony, inmates may be forced to do farm work for private enterprises by being farmed out through the practice of convict leasing to work on private agricultural lands or related industries (fishing, lumbering, etc.). The party purchasing their labor from the government generally does so at a steep discount from the cost of free labor.\n", "The prison industry also includes private businesses that benefit from the exploitation of the prison labor. Some scholars, using the term prison-industrial complex, have argued that the trend of \"hiring out prisoners\" is a continuation of the slavery tradition, pointing out that the Thirteenth Amendment to the United States Constitution freed slaves but allowed forced labor for people convicted of crimes. Prisons are very attractive to employers, because prisoners can be made to perform a great array of jobs, under conditions that most free laborers wouldn't accept (and would be illegal outside of prisons): sub-minimum wage payments, no insurance, no collective bargaining, lack of alternative options, etc. 
Prison labor can soon deprive the free labor of jobs in a number of sectors, since the organized labor turns out to be uncompetitive compared to the prison counterpart.\n", "In a number of penal systems, the inmates have the possibility of a job. This may serve several purposes. One goal is to give an inmate a meaningful way to occupy their prison time and a possibility of earning some money. It may also play an important role in resocialisation as inmates may acquire skills that would help them to find a job after release. It may also have an important penological function: reducing the monotony of prison life for the inmate, keeping inmates busy on productive activities, rather than, for example, potentially violent or antisocial activities, and helping to increase inmate fitness, and thus decrease health problems, rather than letting inmates succumb to a sedentary lifestyle.\n" ]
Are there any lifeforms that have evolved exclusively on land and never began from water?
Not sure what you mean there. If you trace the lineage of all known life forms further and further back, eventually they all had ancestors that lived in the water. (E.g. for *Homo sapiens* - _URL_0_) If you're only looking back one or two evolutionary steps, then of course the recent ancestors of almost all land animals were also land animals.
[ "Recent studies showcase that ambulocetids were fully aquatic like modern cetaceans, possessing a similar thoracic morphology and being unable to support their weight on land. This suggests that complete abandonment of the land evolved much earlier among cetaceans than previously thought.\n", "Most life forms evolved initially in marine habitats. By volume, oceans provide about 90 percent of the living space on the planet. The earliest vertebrates appeared in the form of fish, which live exclusively in water. Some of these evolved into amphibians which spend portions of their lives in water and portions on land. Other fish evolved into land mammals and subsequently returned to the ocean as seals, dolphins or whales. Plant forms such as kelp and algae grow in the water and are the basis for some underwater ecosystems. Plankton, and particularly phytoplankton, are key primary producers forming the general foundation of the ocean food chain.\n", "Research by Jennifer A. Clack and her colleagues showed that the very earliest tetrapods, animals similar to \"Acanthostega\", were wholly aquatic and quite unsuited to life on land. This is in contrast to the earlier view that fish had first invaded the land — either in search of prey (like modern mudskippers) or to find water when the pond they lived in dried out — and later evolved legs, lungs, etc.\n", "Research by Jennifer A. Clack and her colleagues showed that the very earliest tetrapods, animals similar to \"Acanthostega\", were wholly aquatic and quite unsuited to life on land. This is in contrast to the earlier view that fish had first invaded the land — either in search of prey (like modern mudskippers) or to find water when the pond they lived in dried out — and later evolved legs, lungs, etc.\n", "Microfossils have been unearthed from holes riddling the otherwise barren surface of the dolomite. These geochemical and microfossil findings support the idea that during the Precambrian period, complex life evolved both in the oceans and on land. Knauth contends that animals may well have had their origins in freshwater lakes and streams, and not in the oceans.\n", "Primarily or exclusively aquatic animals have re-evolved from terrestrial tetrapods multiple times: examples include amphibians such as newts, reptiles such as crocodiles, sea turtles, ichthyosaurs, plesiosaurs and mosasaurs, marine mammals such as whales, seals and otters, and birds such as penguins. Many species of snakes are also aquatic and live their entire lives in the water. Among invertebrates, a number of insect species have adaptations for aquatic life and locomotion. Examples of aquatic insects include dragonfly larvae, water boatmen, and diving beetles. There are also aquatic spiders, although they tend to prefer other modes of locomotion under water than swimming proper.\n", "The Coelacanth is the only living example of the fossil Coelacanth fishes Actinistia. They are also the closest link between fish and the first amphibian creatures which made the transition from sea to land in the Devonian period (408-362 Million Years Ago). That such a creature could have existed for so long is nearly incredible, but some say that the cold depths of the West Indian Ocean at which the Coelacanth thrives, and the small number of predators it has, may have helped the species survive eons of change.\n" ]
Roman (and other classical) political graffiti--what's the deal with it?
> In I, Claudius, I think there's a major plot element where Claudius freaks out about seeing his name written upside down Not quite. Part of the plot involves Germanicus, Caligula's superstitious father, being terrorised by defacements of his name and other omens, which turn out to be the doing of Caligula.
[ "More than simply text and thought, Roman graffiti give insight into the use of space and how people interacted within it. Studying the motivation behind the marks reveals a trend for the graffiti to be located where people spend time and pass most frequently as they move through a space. Common places for graffiti are staircases, central peristyle, and vestibule. The use of graffiti by Romans has been said to be very different from the defacing trends of modern day, with the text blending into the walls and rooms by respecting the frescoes and decoration with the use of small letters. In this way, the environment influences the graffiti by subject and organization, and the graffiti in turn changes and influences the environment.\n", "In archaeological terms, graffiti (plural of graffito) is a mark, image or writing scratched or engraved into a surface. There have been numerous examples found on sites of the Roman Empire, including taverns and houses, as well as on pottery of the time. In many cases the graffiti tend toward the rude, with a line etched into the basilica in Pompeii reading \"Lucilla made money from her body,\" phallic images, as well as erotic pictures. Studying the graffiti left behind from the Roman Period can give a better understanding of the daily life and attitudes of the Roman people with conclusions drawn about how everyday Romans talked, where they spent their time, and their interactions within those spaces.\n", "The term \"graffiti\" referred to the inscriptions, figure drawings, and such, found on the walls of ancient sepulchres or ruins, as in the Catacombs of Rome or at Pompeii. Use of the word has evolved to include any graphics applied to surfaces in a manner that constitutes vandalism.\n", "The ancient Romans carved graffiti on walls and monuments, examples of which also survive in Egypt. Graffiti in the classical world had different connotations than they carry in today's society concerning content. Ancient graffiti displayed phrases of love declarations, political rhetoric, and simple words of thought, compared to today's popular messages of social and political ideals\n", "Graffiti is a \"pictorial or visual inscription on a accessible surface.\" According to Hanauer, Graffiti achieves three functions; the first is to allow marginalized texts to participate in the public discourse, the second is that graffiti serves the purpose of expressing openly \"controversial contents\", and the third is to allow \"marginal groups to the possibility of expressing themselves publicly.\" Bates and Martin note that this form of rhetoric has been around even in ancient Pompeii; with an example from 79 A.D. reading, \"Oh wall, so many men have come here to scrawl, I wonder that your burdened sides don't fall\". Gross and Gross indicated that graffiti is capable of serving a rhetorical purpose. Within a more modern context, Wiens' (2014) research showed that graffiti can be considered an alternative way of creating rhetorical meaning for issues such as homelessness. Furthermore, according to Ley and Cybriwsky graffiti can be an expression of territory, especially within the context of gangs. This form of Visual Rhetoric is meant to communicate meaning to anyone who so happens to see it, and due to its long history and prevalence, several styles and techniques have emerged to capture the attention of an audience.\n", "On top of the political aspect of graffiti as a movement, political groups and individuals may also use graffiti as a tool to spread their point of view. 
This practice, due to its illegality, has generally become favored by groups excluded from the political mainstream (e.g. far-left or far-right groups) who justify their activity by pointing out that they do not have the money – or sometimes the desire – to buy advertising to get their message across, and that a \"ruling class\" or \"establishment\" controls the mainstream press, systematically excluding the radical and alternative point of view. This type of graffiti can seem crude; for example fascist supporters often scrawl swastikas and other Nazi images.\n", "Graffiti (both singular and plural; the singular \"graffito\" is very rare in English except in archeology) is writing or drawings made on a wall or other surface, usually as a form of artistic expression, without permission and within public view. Graffiti ranges from simple written words to elaborate wall paintings, and has existed since ancient times, with examples dating back to ancient Egypt, ancient Greece, and the Roman Empire. \n" ]
why does steam always have to install microsoft c++ redistributable 2005 when i install a game?
In case it isn't already there. It isn't part of the operating system (Windows XP, say, was released in 2001, so it isn't going to ship with something released in 2005). Since the game needs it, it gets installed. Why do so many games need it? Because the redistributable contains the Microsoft C/C++ runtime libraries: the DLLs holding the basic building blocks (memory allocation, string handling, the standard library, and so on) that any program compiled with the 2005 version of the Microsoft compiler expects to find on the system.
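You can check the dependency yourself by listing the DLLs a game executable imports. Here's a minimal sketch using the third-party `pefile` library; `game.exe` is a placeholder path, and whether the Visual C++ 2005 runtime DLLs (MSVCR80.dll / MSVCP80.dll) actually appear depends on how that particular game was built.

```python
# List the DLLs an executable imports (pip install pefile).
# A game built with Visual C++ 2005 and dynamically linked to the
# runtime will import MSVCR80.dll and/or MSVCP80.dll, which is
# exactly what the 2005 redistributable installs.
import pefile

pe = pefile.PE("game.exe")  # placeholder path to some game executable
for entry in pe.DIRECTORY_ENTRY_IMPORT:
    print(entry.dll.decode())
```

If those DLLs show up in the import list but aren't on the system, the game won't start, which is the situation the redistributable installer exists to prevent.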
[ "The game also drew praise for its high-performing engine, which enables the game to run on previous-generation hardware; the minimum system requirements for CPU on Steam are stated simply as \"Anything made since 2004\" and the game supports Windows XP despite Microsoft having discontinued support for that operating system before development of AdvertCity began.\n", "As of October 2013, a Linux version exists, but is not yet available on Steam. Kamiński has stated on the Steam Forums that this is because the Adobe Air run-time can not be distributed via Steam. To fix this and other issues, Kamiński has stated that he intended to rewrite the game engine to not use Adobe Air. Kamiński announced the rewrite in June 2013, writing that he hoped to be done by September 2013, though there as been no news as of September 2014. As of June 2019, the Linux version is not on Steam, however Proton can be used to run the game.\n", "To distribute the new version, the team initially considered a pay what you want scheme, but later sought the use of the Steam Greenlight service, where independent developers can solicit votes from other players in order to have Valve subsequently offer the title through Steam. In October 2012, the game was successfully approved by Valve to be included on Steam upon the game's completion. Although Wreden originally called the stand-alone version \"The Stanley Parable: HD Remix\", he later opted to drop the distinguishing title, affirming that he believes the remake is the \"definitive\" version of the game.\n", "As of August 2011, the PC version has been removed from the Steam Store, due to lack of servers to play on and Atomic Games having seemingly gone dark. Retail versions can still be activated on Steam; Valve has been reported to be giving out Store credits to unsatisfied customers who bought the retail version after the game has been made unavailable for purchase from Steam.\n", "A remake developed by the Austrian Crafty Studios and published by German United Independent Entertainment was released on Steam on July 30, 2013. Metacritic, a review aggregator, rated it 18/100 based on five reviews. This is one of Metacritic's lowest ratings for a PC game. On release, the game was plagued with bugs, and the developer's forums were full of complaints. Crafty Studios eventually posted an apology to Steam in which they promised to update the game. Steven Strom of \"Ars Technica\", in an article where he played each of the five worst-rated games on Metacritic, said the remake is playable but still feels unfinished, including untranslated dialogue.\n", "The game does not have Steam Workshop as of now, but modifications can be done by using Unity Asset Explorer, mostly texture modifications or with MSC loader (My Summer Car loader) for assets additives. Thanks to this, one can make their own car paint job, edit the rear window stickers and even change the appearance of other vehicles and buildings. There are also unofficial mods such as cars and objects made in Blender, two of the most notable are the Lada 1200 Station Wagon and Utesuma (Satsuma pickup) mod.\n", "The Windows version of \"Remastered\" was criticized for suffering from a number of technical issues in the multiplayer, with players also being dissatisfied with the game's available settings. On Steam, it received mostly negative user reviews, with complaints including poor performance, a locked 90 frames per second, insufficient mouse support, numerous hackers, and a low player count. 
Players felt that Activision had failed to provide adequate support for PC in favor of the game's console versions, a trait which had also been apparent with previous installments in the series. Consequently, many users suggested that the multiplayer of \"Modern Warfare\" would be a more suitable alternative, which still attracted a sizable number of players and offered better options for performance, modding, and customization.\n" ]
what's the difference between a war and a 'cold war'?
The Cold War was a period in which neither side liked the other, but both were too afraid of the consequences to fight an actual war against each other (since both had nuclear weapons). The Cold War involved spies, propaganda, arms races, foreign coups, supplying aid to terrorist groups attacking your enemy, supporting your allies in proxy wars (i.e. against your enemy's allies), and so on. Actual wars typically involve full-scale fighting between the sides at war.
[ "A cold war is a state of conflict between nations that does \"not\" involve direct military action but is pursued primarily through economic and political actions, propaganda, acts of espionage or proxy wars waged by surrogates. This term is most commonly used to refer to the Soviet-American Cold War. The surrogates are typically states that are \"satellites\" of the conflicting nations, i.e., nations allied to them or under their political influence. Opponents in a cold war will often provide economic or military aid, such as weapons, tactical support or military advisors, to lesser nations involved in conflicts with the opposing country.\n", "Cold War II, also called Second Cold War, Cold War 2.0, or the New Cold War refers to a renewed state of political and military tension between opposing geopolitical power-blocs, with one bloc typically reported as being led by Russia or China, and the other led by the United States or NATO. This is akin to the original Cold War that saw a global confrontation between the Western Bloc led by the United States and the Eastern Bloc led by the Soviet Union, Russia's predecessor.\n", "The Second Cold War (also called the New Cold War or Cold War II) is a new, post-Cold-War era of political and military tension between opposing geopolitical power blocs, with one bloc typically reported as being led by Russia and/or China and the other led by the United States, European Union, and NATO. It is akin to the original Cold War that saw a stand-off and proxy wars between the Western Bloc led by the United States, and the Eastern Bloc led by the Soviet Union, Russia's predecessor.\n", "During the Cold War II, a new definition emerged. More specifically, Cold War II, also known as the Second Cold War, New Cold War, Cold War Redux, Cold War 2.0, and Colder War, refers to the tensions, hostilities, and political rivalry that intensified dramatically in 2014 between the Russian Federation on the one hand, and the United States, European Union, NATO and some other countries on the other hand. Tensions escalated in 2014 after Russia's annexation of Crimea, military intervention in Ukraine, and the 2015 Russian military intervention in the Syrian Civil War. By August 2014, both sides had implemented economic, financial, and diplomatic sanctions upon each other: virtually all Western countries, led by the US and EU, imposed restrictive measures on Russia; the latter reciprocally introduced retaliatory measures.\n", "The term \"Cold War\" was introduced in 1947 by Americans Bernard Baruch and Walter Lippmann to describe emerging tensions between the two former wartime allies. There never was a direct military engagement between the U.S. and the Soviet Union, but there was a half-century of military buildup, and political battles for support around the world, including significant involvement of allied and satellite nations. Although the U.S. and the Soviet Union had been allied against Nazi Germany, the two sides differed on how to reconstruct the postwar world even before the end of World War II. Over the following decades, the Cold War spread outside Europe to every region of the world, as the U.S. 
sought the \"containment\" of communism and forged numerous alliances to this end, particularly in Western Europe, the Middle East, and Southeast Asia.\n", "Cold War – period of political and military tension that occurred after World War II between powers in the Western Bloc (the United States, its NATO allies and others) and powers in the Eastern Bloc (the Soviet Union and its allies in the Warsaw Pact). Historians have not fully agreed on the dates, but 1947–1991 is common. It was termed as \"cold\" because there was no large-scale fighting directly between the two sides. Based on the principle of mutually assured destruction, both sides developed nuclear weapons to deter the other side from attacking. So they competed against each other via espionage, propaganda, and by supporting major regional wars, known as proxy wars, in Korea, Vietnam and Afghanistan.\n", "The Cold War (1945–1991) was the continuing state of political conflict, military tension, and economic competition between the Soviet Union and its satellite states, and the powers of the Western world, led by the United States. Although the primary participants' military forces never officially clashed directly, they expressed the conflict through military coalitions, strategic conventional force deployments, a nuclear arms race, espionage, proxy wars, propaganda, and technological competition, e.g., the space race. The first use of the term to describe the specific post-war geopolitical confrontation between the USSR and the United States came in a speech by Bernard Baruch, an influential advisor to Democratic presidents, The speech, written by journalist Herbert Bayard Swope, proclaimed, \"Let us not be deceived: we are today in the midst of a cold war.\" Newspaper columnist Walter Lippmann gave the term wide currency with his book \"The Cold War\"; when asked in 1947 about the source of the term, Lippmann traced it to a French term from the 1930s, \"la guerre froide\".\n" ]
why do cars stop/stall when they are spun around?
Cars can stall when the car is in drive and gets spun, because the tires start rolling in the opposite direction from the way the transmission is driving them, forcing the gears to turn the wrong way. If you have enough power when you're spun, keep your foot on the gas, and the tires are still spinning forward, the car won't stall, like when you're doing a burnout. Manual cars also won't stall when spun if you hold the clutch in, disengaging the power from the engine to the transmission. Some cars will also shut off the engine just because the car thinks you may have been in an accident. If you watch enough police chase videos, you'll find some where the car doesn't stall when it's pitted (hit with a PIT maneuver) and keeps on running.
[ "Once the vehicle is rotating sufficiently rapidly, its angular momentum of rotation can overcome the stabilizing influence of the tires (either braking or skidding), and the rotation will continue even if the wheels are centered or past the point that the vehicle is controlled. This can be caused by some tires locking up in braking while others continue to rotate, or under acceleration where driven tires may lose traction (especially, if they lose traction unevenly), or in combining braking or acceleration with turning.\n", "The automobiles' suspension, brakes, and aerodynamic components are also selected to tailor the cars to different racetracks. A car that understeers is said to be \"tight\", or \"pushing\", causing the car to keep going up the track with the wheel turned all the way left, while one that oversteers is said to be \"loose\" or \"free\", causing the back end of the car to slide around, which can result in the car spinning out if the driver is not careful. The adjustment of front and rear aerodynamic downforce, spring rates, track bar geometry, brake proportioning, the wedge (also known as cross-weight), changing the camber angle, and changing the air pressure in the tires can all change the distribution of forces among the tires during cornering to correct for handling problems. Recently, coil bind setups have become popular among teams.\n", "When in motion, the arm swings until it makes a complete loop, though the riders never become inverted. This is because the ride has two \"twists\" that the older version did not. First, the arm pivots while the ride is in motion. Second, the cars are free to rotate horizontally or \"roll\" while the ride is in motion, always keeping the riders right-side-up.\n", "If the drive wheels aquaplane, there may be a sudden audible rise in engine RPM and indicated speed as they begin to spin. In a broad highway turn, if the front wheels lose traction, the car will suddenly drift towards the outside of the bend. If the rear wheels lose traction, the back of the car will slew out sideways into a skid. If all four wheels aquaplane at once, the car will slide in a straight line, again towards the outside of the bend if in a turn. When any or all of the wheels regain traction, there may be a sudden jerk in whatever direction that wheel is pointed.\n", "Pulling the car \"backward\" (hence the name) winds up an internal spiral spring; a flat spiral rather than a helical coil spring. When released, the car is propelled forward by the spring. When the spring has unwound and the car is moving, the motor is disengaged by a clutch or ratchet and the car then rolls freely onward. Often the clutch mechanism is geared so that the pullback distance needed to wind the spring is less than the distance the spring is engaged propelling forward.\n", "This is why driver training courses teach that if a car begins to slide sideways, the driver should try to steer in the same direction as the slide with no brakes. It gives the wheels a chance to regain static contact by rolling, which gives the driver some control again. An overenthusiastic driver may \"squeal\" the driving wheels trying to get a rapid start but this impressive display of noise and smoke is less effective than maintaining static contact with the road. 
Many stunt-driving techniques are also done by deliberately breaking and/or regaining this rolling friction.\n", "If the wheel rotates a little more slowly than two revolutions per second, the position of the spokes is seen to fall a little further behind in each successive frame and therefore, the wheel will seem to be turning backwards.\n" ]
Are we bound to get cancer if we don't die from something else before that?
Pretty much, yes. Cancer is an unavoidable consequence of how we evolved. A multicellular organism is a colony of cells that work in concert toward one common goal, and as we evolved our cells developed mechanisms to make sure they all divide and grow as complying members of the whole. Still, an individual cell that mutates in a way that escapes this control (and mutations happen all the time) will grow unchecked. The control systems only evolved to reduce the risk of cancer to the point where it no longer posed significant selective pressure on reproductive success; there was little pressure to suppress cancers that strike after the reproductive years. So the longer you live, the closer your cumulative chance of getting cancer gets to 1 (see the sketch below). That being said, there is no reason why we should not be able to push this back a lot further. =)
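To see why the cumulative risk creeps toward certainty, here is a minimal sketch assuming, purely for illustration, a constant per-year probability $p$ of some cell line escaping control (real per-year risk rises with age, which only strengthens the conclusion):

$$P(\text{cancer by year } n) = 1 - (1 - p)^n \longrightarrow 1 \quad \text{as } n \to \infty, \text{ for any fixed } p > 0.$$

With a hypothetical $p = 1\%$ per year, for example, the cumulative risk passes 50% after about 69 years.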
[ "With the right medical help, cancer doesn't have to be a death sentence. Those who can afford it travel to other countries to pay for their treatment and care. Those who can't are left to suffer and die.\n", "Treatment and survival is determined, to a great extent, by whether or not a cancer remains localized or spreads to other locations in the body. If the cancer metastasizes to other tissues or organs it usually dramatically increases a patient's likelihood of death. Some cancers—such as some forms of leukemia, a cancer of the blood, or malignancies in the brain—can kill without spreading at all.\n", "Patients whose cancer is in remission may still have to cope with the uncertainty that at any time their cancer could return without warning. After the initial treatment has ended, anxiety is more common among cancer survivors than among other people. This anxiety regarding the cancer's return is referred to as fear of cancer recurrence. Many patients are anxious that any minor symptom indicates that the cancer has returned, with as many as 9 in 10 patients fearful that their cancer will recur or spread. In addition to the appearance of any new aches and pains, common triggers for a fear that the cancer may return include hearing that someone else has been diagnosed with cancer, annual medical exams to determine whether the cancer recurred and news stories about cancer. This anxiety leads to more medical check ups, which can be measured even after a period of up to ten years. This fear can have a significant impact on individual's lives, resulting in difficulties in their daily life such as work and socialising, and difficulties planning for the future. Overall, fear of cancer recurrence is related to a reduced quality of life in cancer survivors\n", "There are several reasons for this high death toll from cancer in developing countries. Due to poverty, lack of resources and vast distances, public access to treatment maybe difficult or non-existent. There is also not enough awareness (public or professional) about cancer to help either prevent the disease developing or to support early diagnosis. As a result, 80% of cancer patients present with advanced/incurable cancers. Unfortunately, in many cases, palliative care will not be available to them at the end of their lives.\n", "Those who survive cancer develop a second primary cancer at about twice the rate of those never diagnosed. The increased risk is believed to be due to the random chance of developing any cancer, the likelihood of surviving the first cancer, the same risk factors that produced the first cancer, unwanted side effects of treating the first cancer (particularly radiation therapy), and to better compliance with screening.\n", "Oh, it sucks to have a doctor tell you that you have cancer, but in the same breath, he told me that with aggressive treatment they can treat this particular disease. Thank God I didn't have Internet back then, so I couldn't get all wrapped up in it. I didn't have access to see how bad it could be. They told me I had to go through six months of this and five weeks of that, and that's all I really looked at: the end. \n", "200,000 new cases of cancer are diagnosed each year in the UK. One in three adults in the UK will develop cancer that can be life-threatening, and 120,000 people will be killed by their cancer each year. This accounts for 25% of all deaths in the UK. 
However while 90% of cancer pain can be effectively treated, only 40% of patients adhere to their medicines due to poor understanding.\n" ]
If you could theoretically survive on Venus, would you be floating in mid-air?
Density, not pressure, is what matters for buoyancy. According to wiki, > The density of the air at the surface is 67 kg/m3, which is 6.5% that of liquid water on Earth. So while Venus's surface air is about 50 times denser than Earth's, it's still roughly 15 times less dense than the human body, so you won't float; the worked numbers are below.
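A quick worked check, assuming for illustration that the human body is about as dense as water (roughly 1000 kg/m3). By Archimedes' principle, a fluid of density $\rho_f$ pushes up on a body of volume $V$ with force $F_b = \rho_f V g$, and you float only if $\rho_f$ exceeds your own density $\rho_{\text{body}}$. On the Venusian surface the buoyant force offsets only

$$\frac{F_b}{W} = \frac{\rho_{\text{air}}}{\rho_{\text{body}}} \approx \frac{67\ \text{kg/m}^3}{1000\ \text{kg/m}^3} \approx 6.7\%$$

of your weight, so you'd feel a few percent lighter but nowhere near floating.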
[ "Although there is little possibility of existing life near the surface of Venus, the altitudes about 50 km above the surface have a mild temperature, and hence there are still some opinions in favor of such a possibility in the atmosphere of Venus.\n", "The fact that Venus is located closer to the Sun than Earth, raising temperatures on the surface to nearly , the atmospheric pressure is ninety times that of Earth, and the extreme impact of the greenhouse effect, make water-based life as currently known unlikely. A few scientists have speculated that thermoacidophilic extremophile microorganisms might exist in the lower-temperature, acidic upper layers of the Venusian atmosphere. The atmospheric pressure and temperature fifty kilometres above the surface are similar to those at Earth's surface. This has led to proposals to use aerostats (lighter-than-air balloons) for initial exploration and ultimately for permanent \"floating cities\" in the Venusian atmosphere. Among the many engineering challenges are the dangerous amounts of sulfuric acid at these heights.\n", "Since the 1960s, increasingly clear evidence from various space probes showed Venus has an extreme climate, with a greenhouse effect generating a constant temperature of about 500 °C on the surface. The atmosphere contains sulfuric acid clouds and the atmospheric pressure at the surface is 90 bar, almost 100 times that of Earth and similar to that of more than deep in Earth's oceans. In such an environment, and given the increasingly hostile characteristics of the Venusian weather, life as we know it is highly unlikely to occur.\n", "The surface temperature of Venus (over 450 °C) is far beyond the extremophile range, which extends only tens of degrees beyond 100 °C. However, the lower temperature of the cloud tops means that life could plausibly exist. It has been proposed that life on Venus could exist there, the same way that bacteria have been found living and reproducing in clouds on Earth. Any such bacteria living in the cloud tops, however, would have to be hyper-acidiphilic, due to the concentrated sulfuric acid environment. Microbes in the thick, cloudy atmosphere could be protected from solar radiation by the sulfur compounds in the air. The solar wind may provide a mechanism for the transfer of such microbiota from Venus to Earth.\n", "The probe provided information about the surface of Venus, which could not be seen through a thick veil of atmosphere. The spacecraft definitively confirmed that humans cannot survive on the surface of Venus, and excluded the possibility that there is any liquid water on Venus.\n", "BULLET::::- Venus has a CO atmosphere at the surface. Because CO is about 50% more dense than Earth air, ordinary Earth air could be a lifting gas on Venus. This has led to proposals for a human habitat that would float in the atmosphere of Venus at an altitude where both the pressure and the temperature are earthlike. In 1985, the Soviet Vega program sent two balloons to float in Venus' atmosphere at 54 km altitude.\n", "Venus's location closer to the Sun than Earth and the extreme greenhouse effect raising temperatures on the surface to nearly , and the atmospheric pressure 90 times that of Earth, make water-based life as we know it unlikely on the surface of the planet. However, a few scientists have speculated that thermoacidophilic extremophile microorganisms might exist in the lower-temperature, acidic upper layers of the Venusian atmosphere.\n" ]
why do some armies use chevrons and others use inverted chevrons? is it simply a stylistic choice or does it have some significance?
I don't know exactly where it came from, but as an educated guess it could be a holdover from heraldry, where the chevron was used to show a particular family association, or even a throwback to the Spartans, who painted a lambda (Λ) on their shields. Of course, if you've read The Da Vinci Code, you'd know it's a penis.
[ "At the earliest times, military insignias were very simple. Tree branches, mauled birds, heads of beasts, or a handful of dry grass, were placed on top of a pole or long stick, so that the combatants could recognize themselves in the fight, or to signpost a meeting place in retreat or defeat. But as the arts of war were refined, sturdier and brighter insignias were designed, and everyone wanted theirs to use characteristic symbols.\n", "A chevron (also spelled cheveron, especially in older documents) is a V-shaped mark, often inverted. The word is usually used in reference to a kind of fret in architecture, or to a badge or insignia used in military or police uniforms to indicate rank or length of service, or in heraldry and the designs of flags (see flag terminology).\n", "In some armies, small chevrons are worn on the lower left sleeve to indicate length of service, akin to service stripes in the U.S. military. The Israel Defense Forces use chevrons in various orientations as organizational designators on their vehicles, specifically which company within a battalion they belong to.\n", "A regimental symbol is a distinguishing emblem used by soldiers during times of war. Usually, it is some easily identifiable icon that can be displayed on uniforms, vehicles, and buildings to alert others of the nationality of the respective military force.\n", "\"Pindos\" or \"Pendos\" (rus: Пиндос) in the Russian language is a derogatory ethnic slur, used to refer to Americans. Originally, the term was used to refer to US military servicemen, but it gradually became a universal disparaging term to refer to all Americans. Other slur terms to refer to the United States such as \"Pindosiya\" and \"Pindostan\" (rus: \"Пиндосия\", \"Пиндостан\") have also been derived. Some sources claim that the term was first used by Russian military servicemen during Kosovo War, where they allegedly heard this term. According to Russian soldiers, it was perfect fit for \"armed to the teeth and coward American soldier\".\n", "The symbol is an ancient one, becoming widespread with the medieval \"Danse Macabre\" symbolism. From at least the 12th century, it has been used for military flags or insignia and as a warning of the ferocity of the unit displaying it. It became associated with piracy from the 14th century onwards, possibly even earlier. By the 15th century, the symbol had developed into its familiar form.\n", "The D.A. quickly became a stereotypical feature of rebels, mobsters, and nonconformists, and gained popularity especially after the rise of rock 'n roll legend Elvis Presley, who sported the same look. Although the ducktail was adopted by Hollywood to represent the wild youth of the Fifties, only a minority of males actually sported a D.A., even amongst the British Rockers and Teddy Boys of the same era. The style became popular in India after film star Shammi Kapoor started sporting it. It is also associated with men of certain ethnic groups, mainly ones of Mediterranean, Eastern European and/or Latin American descent, though in slightly different styles.\n" ]
how to patent an invention idea and get rich.
tl;dr: patents are not the way to go for an inventor. It is said that every good idea for a product is worth negative one million dollars. Why? Because you have to invest a significant amount of money before you can get any profit from it (prototypes, testing, tooling for production machines, investment in raw materials and parts). So could you just get a patent and then license it to a bigger company, which in turn makes a huge profit and gives you a (still decent) cut? Suppose your idea is *really* good. Then you will have to hire a patent lawyer to help you draft the patent, or it will be worthless and easily circumvented. Let's also say you did your research and there is no prior art (nobody has already invented the thing before you). Then the patent will be granted to you. At this point you can sue anybody you think is infringing on it. The question is: do you have enough free time and money at hand to fight that lawsuit? Also, in your patent you will have to disclose a lot of information about your invention, which means everybody interested in it will look for loopholes that let them use your idea without infringing your patent, instead of paying you. So what should you do if you have a brilliant idea? First, check whether it really is that brilliant. What to do next depends on the idea itself; for a production process, for example, not disclosing how it works and relying on trade secrets instead could be an option.
[ "[T]this could be done best, by giving the public at large a right to make, construct, use, and vend the thing invented, at as early a period as possible, having a due regard to the rights of the inventor. If an inventor should be permitted to hold back from the knowledge of the public the secrets of his invention; if he should for a long period of years retain the monopoly, and make, and sell his invention publicly, and thus gather the whole profits of it, relying upon his superior skill and knowledge of the structure, and then, and then only, when the danger of competition should force him to secure the exclusive right, he should be allowed to take out a patent and thus exclude the public from any further use than what should be derived under it during his fourteen years, it would materially retard the progress of science and the useful arts and give a premium to those who should be least prompt to communicate their discoveries. \n", "There are many ways in which an inventor might be compensated for a patent. An inventor might bring the patented product to market under the protection of the monopoly created by the patent. The inventor may license a patent to another entity for an up-front fee, an ongoing royalty or other consideration. The inventor may also sell the patent outright. Henry Woodward, for example, sold his original US patent on the light bulb to Thomas Edison who then developed it into a commercially successful product.\n", "One effect of modern patent usage is that a small-time inventor, who can afford both the patenting process and the defense of the patent, can use the exclusive right status to become a licensor. This allows the inventor to accumulate capital from licensing the invention and may allow innovation to occur because he or she may choose not to manage a manufacturing buildup for the invention. Thus the inventor's time and energy can be spent on pure innovation, allowing others to concentrate on manufacturability.\n", "BULLET::::- \"No inventor could introduce an invention, however excellent, unless he could get capitalists to take it up, and this usually they would not do unless the inventor relinquished to them most of his hopes of profit from the discovery.\"\n", "Even under a first-to-invent system, the first-inventor-in-fact does not always obtain entitlement to a patent. If, for example, a first-inventor-in-fact maintained his invention as a trade secret for many years before seeking patent protection, he may be judged to have \"abandoned, suppressed or concealed\" the invention. Well-established patent law provides that an inventor who makes a secret, commercial use of an invention for more than one year prior to filing a patent application at the USPTO forfeits his own right to a patent. If an earlier inventor made secret commercial use of an invention, and another person independently invented the same technology later and obtained patent protection, then the trade secret holder could face liability for patent infringement. Many foreign patent regimes, on the other hand, protect prior user rights, which are often seen as assisting small entities that may lack the sophistication or resources to pursue patent protection. The American Inventors Protection Act of 1999 established a \"first inventor defense,\" but limited the defense to patents on \"methods of doing or conducting business;\" H.R. 2795 would have extended it to all subject matter.\n", "Inventors are faced with several alternatives to making their invention a commercial success. 
They can build their own start-up company from scratch using their own resources. In the United States, they can seek government grants, such as SBIR (Small Business Innovative Research) and TTR (Technology Transfer Research), to fund the early-stage development of their technology. They can contract with third parties such as venture capital or angel investors to finance a startup, or they can sell or license their products to an existing and established company. Often inventors are interested in expanding their own intellectual property assets through licensing and acquisitions.\n", "A patent is a form of right granted by the government to an inventor or their successor-in-title, giving the owner the right to exclude others from making, using, selling, offering to sell, and importing an invention for a limited period of time, in exchange for the public disclosure of the invention. An invention is a solution to a specific technological problem, which may be a product or a process and generally has to fulfill three main requirements: it has to be new, not obvious and there needs to be an industrial applicability. To enrich the body of knowledge and stimulate innovation, it is an obligation for patent owners to disclose valuable information about their inventions to the public.\n" ]
How do fraternities work? Do they serve any real function to the university?
Benefits to University:

- Provides a social network for students who join.
- Provides social and extracurricular events for students without the need for university resources.
- Provides opportunities for students to gain experience holding leadership positions.
- Often provides housing, which can be limited on some campuses.
- Depending on a university's relationship with fraternities and sororities, it can provide the university with ways to regulate social events that can't be applied as easily to non-Greek events.
- Fraternities and sororities typically require some amount of philanthropy, which benefits the community and improves the school's reputation.
- Provides networking opportunities that can help students with their careers.
- Increases donations from alumni.

Harm to University:

- "Pledging" a fraternity or sorority often involves hazing, which can mentally and physically distress students. There have even been instances of people dying due to hazing.
- Fraternities and sororities generally throw parties with alcohol, which can lead to irresponsible behavior, injuries, crime, etc. This is bad for students and for the university's reputation.
- Associating mainly with one's own brothers or sisters may limit interactions with other students and/or decrease the diversity of people a student gets to know.
- Students may feel pressured to spend unnecessary amounts of time and money on matters related to their chapter.
- Dealing with fraternities and sororities requires time on the part of the university's staff, often requiring hiring people specifically for this purpose.
- Promotion of "fratty" culture, which can include immature behavior, sexism, sexual misconduct, excessive alcohol consumption, etc.
[ "In the context of the North American student fraternity and sorority system, service fraternities and service sororities comprise a type of organization whose \"primary\" purpose is community service. Members of these organizations are not restricted from joining other types of fraternities. This may be contrasted with professional fraternities, whose primary purpose is to promote the interests of a particular profession, and general or social fraternities, whose primary purposes are generally aimed towards some other aspect, such as the development of character, friendship, leadership, or literary ability.\n", "Service fraternity may refer to any fraternal public service organization, such as the Kiwanis or Rotary International. In Canada and the United States, the term fraternal organization is more common as \"fraternity\" in everyday usage refers to fraternal student societies.\n", "The fraternity provides for charitable endeavors through its Education and Building Foundations, providing academic scholarships and shelter to underprivileged families these projects are managed by fraternity brothers; Broderick McKinney, Kenneth Burnside and Gregory Anderson. The fraternity combines its efforts in conjunction with other philanthropic organizations such as Head Start, Boy Scouts of America, Big Brothers Big Sisters of America, Project Alpha with the March of Dimes, NAACP, Habitat for Humanity, and Fortune 500 companies.\n", "Individual chapters of fraternities and sororities are largely self-governed by their active (student) members; however, alumni members may retain legal ownership of the fraternity or sorority's property through an alumni chapter or alumni corporation. All of a single fraternity or sorority's chapters are generally grouped together in a national or international organization that sets standards, regulates insignia and ritual, publishes a journal or magazine for all of the chapters of the organization, and has the power to grant and revoke charters to chapters. These federal structures are largely governed by alumni members of the fraternity, though with some input from the active (student) members.\n", "BULLET::::- The Inter-Fraternity Council, or IFC, is the oldest of the Greek councils. Founded in 1934, the IFC oversees 32 social fraternities and is led by a governing board that is elected by the brothers of the member fraternities. The IFC works with the Presidents' Council, which consists of fraternity chapter presidents, to govern the fraternity community.\n", "The fraternity has a legislative (the power to make laws), executive (the power to implement laws) and judiciary (the power to judge and apply punishment when laws are broken) body. All full members make up the legislative body, which elects the executive body. The legislative body also functions as a judiciary body. In this case it assumes the function of an honorary senate.\n", "In their vision, the founders designated the purpose of the fraternity to be the following: \"To gather young men into an organization in which membership is not based on any particular race, creed, religion, or social background, but on the values of brotherhood.\" But more specifically, this \"fraternity is to provide service to the brothers of Gamma Alpha Chi, and to our universities, communities, and nation, to our fullest capacity, and we will practice the highest forms of brotherhood amongst ourselves, our fellow fraternities and sororities, and to the general public.\"\n" ]
Why was Unit 731 commissioned and did Japan ever intend on using their "research"?
You can get the US gov't documents here. _URL_0_ Pages 32-34 and 46-49 give summaries of the activities during the investigation. Pages 53-55 give a Q & A on Unit 731 from 1995. It's a US gov't report so it's very concerned about what happened to US PoWs. You should read the documents for yourself but here's my summary: Unit 731 began experiments in 1932 on Biological Warfare in order to defend against a possible BW attack. This phase included human experiments such as figuring out the minimal dosages necessary for infection and for lethality. (The BW experiments were not conducted on US PoWs, but rather on Chinese criminals sentenced to death; at least 3,000 were subjected to these horrific experiments.) Unit 731's General Ishii Shiro then began to experiment on possible uses of BW as an offensive tool. This phase (1940-1941) started the field tests, such as artillery shells and bombs with BW agents and crop destruction in China. So Chinese civilians (!) and soldiers were subjected to these tests a total of 12 times. Mostly unsuccessfully, thankfully: a total of 25,946 people were infected after 6 tests according to data from the papers of Kaneko Junichi, a Japanese doctor who was part of Unit 731. [I don't know how many of them subsequently died.] _URL_1_ To me it seems clear that this follows a pattern very similar to much military research: "The enemy has a devilish plan! We must defend against it by making our own!" You make out the enemy to be inhuman, and in the process you yourself become inhuman. Final point: Some people have accused the US of using BW during the Korean War, and claim that because of this there's a massive cover-up of the activities of Unit 731 even today. (The US government gave Unit 731 immunity from war crime prosecutions in exchange for their data.) There's no way to know for sure if a cover-up is still going on or if all the information has been released, so for now I will only go with the documentary evidence that we have available instead of the hearsay that may or may not be accurate.
[ ", also referred to as Detachment 731, the 731 Regiment, Manshu Detachment 731, The Kamo Detachment, or the Ishii Company, was a covert biological and chemical warfare research and development unit of the Imperial Japanese Army that undertook lethal human experimentation during the Second Sino-Japanese War (1937–1945) of World War II. It was responsible for some of the most notorious war crimes carried out by Imperial Japan. Unit 731 was based at the Pingfang district of Harbin, the largest city in the Japanese puppet state of Manchukuo (now Northeast China).\n", "Unit 731 were covert medical experiment units which conducted biological warfare research and development through human experimentation during the Second Sino-Japanese War (1937–1945) and World War II. Unit 731 responsible for some of the most notorious war crimes. Initially set up as a political and ideological section of the Kempeitai military police of pre-Pacific War Japan, they were meant to counter the ideological or political influence of Japan's enemies, and to reinforce the ideology of military units.\n", "Until the end of World War II, Japan operated a covert biological and chemical warfare research and development unit called Unit 731 in Harbin. The unit's activities, including human experimentation, were documented by the Khabarovsk War Crime Trials conducted by the Soviet Union in December 1949. However, at that time, the US government described the Khabarovsk trials as \"vicious and unfounded propaganda\". It was later revealed that the accusations made against the Japanese military were correct. The US government had taken over the research at the end of the war and had then covered up the program. Leaders of Unit 731 were exempted from war crimes prosecution by the United States and then placed on the payroll of the US. \n", "The , also called the , was a military development laboratory run by the Imperial Japanese Army from 1937 to 1945. The lab, based in Noborito, Tama-ku, Kawasaki, Kanagawa Prefecture, Japan focused on clandestine activities and unconventional warfare, including energy weapons, intelligence and spycraft tools, chemical and biological weapons, poisons, and currency counterfeiting. One of the weapons developed by the lab was the fire balloon, thousands of which were launched against the United States in 1944 and 1945. The unit, which at its peak was staffed by 1,000 scientists and workers, was disbanded upon Japan's defeat at the end of World War II.\n", "Unit 731 was specifically created by the Japanese military in Harbin, China (then part of Japanese-occupied Manchukuo) for researching biological and chemical warfare, by carrying out human experimentation on people of all ages. During the Second Sino-Japanese War and later World War II, the Japanese had encased bubonic plague, cholera, smallpox, botulism, anthrax, and other diseases into bombs where they were routinely dropped on Chinese combatants and non-combatants. According to the 2002 \"International Symposium on the Crimes of Bacteriological Warfare\", the number of people killed by the Imperial Japanese Army germ warfare and human experiments was around 580,000. 
According to other sources, \"tens of thousands, and perhaps as many as 400,000 Chinese died of bubonic plague, cholera, anthrax and other diseases\" from the use of biological warfare.\n", "Published in Japan in 2001, the book \"Rikugun Noborito Kenkyujo no shinjitsu\" or \"The Truth About the Army Noborito Institute\" revealed that members of Japan's Unit 731 also worked for the \"chemical section\" of a U.S. clandestine unit hidden within Yokosuka Naval Base during the Korean War as well as on projects inside the United States from 1955 to 1959.\n", "Unit 731, a biological and chemical warfare research and development unit of the Imperial Japanese Army, undertook lethal human experimentation during the period that comprised both the Second Sino-Japanese War, and the Second World War (1937–1945). In Mindanao, Moro Muslim prisoners of war were subjected to various forms of vivisection by the Japanese, in many cases without anesthesia.\n" ]
In general, how were utilities (plumbing, electricity, gas) handled in the United States during the late 19th and 20th centuries?
The modern induction-type electromechanical watt-hour meter was invented for the Westinghouse Corporation in 1894. Prior to that there were several different designs for metering electricity, running back to Samuel Gardiner, who invented a meter that measured how long electricity was applied to the load (it didn't measure how much power was used, just when power was used), and to Thomas Edison, who in 1881 developed a meter for his DC power system. Read all about it [here](_URL_0_)
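To make the distinction concrete, here is a minimal sketch in Python (the readings are invented for illustration, not from the linked source) contrasting a Gardiner-style meter, which only records how long a load is connected, with a watt-hour meter, which integrates power over time:

```python
# Hypothetical load profile: (power_w, hours) pairs over one day.
readings = [(60, 3.0), (0, 5.0), (100, 2.0), (40, 4.0)]

# Gardiner-style meter: only records time connected (power > 0),
# so a 60 W lamp and a 100 W lamp are billed identically per hour.
hours_connected = sum(h for p, h in readings if p > 0)

# Watt-hour meter (the Edison/Westinghouse lineage): integrates
# power over time, so the bill tracks actual energy consumed.
watt_hours = sum(p * h for p, h in readings)

print(f"time-based meter:   {hours_connected:.1f} h connected")
print(f"energy-based meter: {watt_hours:.0f} Wh ({watt_hours/1000:.2f} kWh)")
```

The point of the watt-hour design shows up in the output: two loads connected for the same number of hours can consume very different amounts of energy, and only the second scheme bills for that difference.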
[ "The 1860s saw the creation of a public water system providing firefighters with a source of water carried via wooden mains that could be accessed by boring a hole in them. Each of the pumpers carried a short pipe that was designed to be pushed into the hole to deliver water.\n", "In the United States it became a national objective after the power crisis during the summer of 1918 in the midst of World War I to consolidate supply. In 1934 the Public Utility Holding Company Act recognized electric utilities as public goods of importance along with gas, water, and telephone companies and thereby were given outlined restrictions and regulatory oversight of their operations.\n", "The Public Utility Holding Company Act of 1935 (PUHCA) required that a company like Cities Service divest itself of either its electric utility holdings or its other energy companies. Cities Service chose to sell off its utilities and remain in the oil and gas business. The first steps to liquidate investments in its public utilities were taken in 1943 and affected over 250 different utility corporations.\n", "The first regulation of a public utility was effected in 1874 when the Oregon Legislative Assembly passed a law regulating rates and procedures for the gas distribution business of Al Zeiber in Portland. His primary contract was with the city for its gas street lamps. The agency, or its predecessors including the Public Service Commission, have been charged with a wide variety regulatory duties, encompassing industries as diverse as timber rafting to intrastate rail and bus service.\n", "In 1896, the Edison Light and Power Company merged with the San Francisco Gas Light Company to form the new San Francisco Gas and Electric Company.Consolidation of gas and electric companies solved problems for both utilities by eliminating competition and producing economic savings through joint operation. Other companies that began operation as active competitors but eventually merged into the San Francisco Gas and Electric Company included the Equitable Gas Light Company, the Independent Electric Light and Power Company, and the Independent Gas and Power Company. In 1903, the company purchased its main competitor for gas lighting, the Pacific Gas Improvement Company.\n", "In the United States in the 1920s, utilities formed joint-operations to share peak load coverage and backup power. In 1934, with the passage of the Public Utility Holding Company Act (USA), electric utilities were recognized as public goods of importance and were given outlined restrictions and regulatory oversight of their operations. The Energy Policy Act of 1992 required transmission line owners to allow electric generation companies open access to their network and led to a restructuring of how the electric industry operated in an effort to create competition in power generation. No longer were electric utilities built as vertical monopolies, where generation, transmission and distribution were handled by a single company. Now, the three stages could be split among various companies, in an effort to provide fair accessibility to high voltage transmission. The Energy Policy Act of 2005 allowed incentives and loan guarantees for alternative energy production and advance innovative technologies that avoided greenhouse emissions.\n", "In the United States in the 1920s, utilities formed joint-operations to share peak load coverage and backup power. 
In 1934, with the passage of the Public Utility Holding Company Act (USA), electric utilities were recognized as public goods of importance and were given outlined restrictions and regulatory oversight of their operations. The Energy Policy Act of 1992 required transmission line owners to allow electric generation companies open access to their network and led to a restructuring of how the electric industry operated in an effort to create competition in power generation. No longer were electric utilities built as vertical monopolies, where generation, transmission and distribution were handled by a single company. Now, the three stages could be split among various companies, in an effort to provide fair accessibility to high voltage transmission. The Energy Policy Act of 2005 allowed incentives and loan guarantees for alternative energy production and advance innovative technologies that avoided greenhouse emissions.\n" ]
Why do coconut oil and other oils soak into some people's skin, but sit on top of others'?
Now that you mention it... coconut oil absorbs into my upper body but just sits on my legs. It doesn't even help with the ash; you can see the ash under the oil if you look closely.
[ "Many health organizations advise against the consumption of coconut oil due to its high levels of saturated fat, including the United States Food and Drug Administration, World Health Organization, the United States Department of Health and Human Services, American Dietetic Association, American Heart Association, British National Health Service, British Nutrition Foundation, and Dietitians of Canada.\n", "Marketing of coconut oil has created the inaccurate belief that it is a \"healthy food\". Instead, studies have found that coconut oil consumption has health effects similar to those of other unhealthy fats, including butter, beef fat and palm oil. Coconut oil contains a high amount of lauric acid, a saturated fat that raises total blood cholesterol levels by increasing both the amount of high-density lipoprotein (HDL) cholesterol and low-density lipoprotein (LDL) cholesterol. Although lauric acid consumption may create a more favorable total blood cholesterol profile, this does not exclude the possibility that persistent consumption of coconut oil may actually increase the risk of cardiovascular disease through other mechanisms, particularly via the marked increase in total blood cholesterol induced by lauric acid. Because the majority of saturated fat in coconut oil is lauric acid, coconut oil may be preferred over partially hydrogenated vegetable oil when solid fats are used in the diet. However, the weight of evidence to date indicates that consuming polyunsaturated fats instead of coconut oil would reduce the risk of cardiovascular diseases. Due to its high content of saturated fat with corresponding high caloric burden, regular use of coconut oil in food preparation may promote weight gain.\n", "Due to its high levels of saturated fat, the World Health Organization, the United States Department of Health and Human Services, United States Food and Drug Administration, American Heart Association, American Dietetic Association, British National Health Service, British Nutrition Foundation, and Dietitians of Canada advise that coconut oil consumption should be limited or avoided.\n", "Coconut oil, or copra oil, is an edible oil extracted from the kernel or meat of mature coconuts harvested from the coconut palm (\"Cocos nucifera\"). It has various applications. Because of its high saturated fat content, it is slow to oxidize and, thus, resistant to rancidification, lasting up to six months at 24 °C (75 °F) without spoiling.\n", "Despite its high saturated fat content, coconut oil is commonly used in baked goods, pastries, and sautés, having a \"haunting, nutty\", flavor with a touch of sweetness. Used by movie theatre chains to pop popcorn, coconut oil adds considerable saturated fat and calories to the snackfood while enhancing flavor, possibly a factor increasing further consumption of high-calorie snackfoods, energy balance, and weight gain.\n", "Coconut oil has been tested for use as an engine lubricant and as a transformer oil. Coconut oil (and derivatives, such as coconut fatty acid) are used as raw materials in the manufacture of surfactants such as cocamidopropyl betaine, cocamide MEA, and cocamide DEA.\n", "Along with coconut oil, palm oil is one of the few highly saturated vegetable fats and is semisolid at room temperature. Palm oil is a common cooking ingredient in the tropical belt of Africa, Southeast Asia and parts of Brazil. 
Its use in the commercial food industry in other parts of the world is widespread because of its lower cost and the high oxidative stability (saturation) of the refined product when used for frying. One source reported that humans consumed an average of palm oil per person in 2015.\n" ]
If stars emit light and planets don't, how do we discover new planets? Their reflection of their nearest stars?
[Wobble and transit.](_URL_0_) In the first one, the gravitational effects of a planet-sun coupling cause a "wobble" that permits detection from afar. In the second one, the planet's orbit is such that it goes between the distant star and the observer; this 'transit' blocks some of the light on a regular basis.
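As a rough illustration of the transit method (a toy model in Python with invented numbers and helper names, not a real detection pipeline): a transiting planet blocks a fixed fraction of starlight once per orbit, so folding the brightness measurements at the right period makes the periodic dip stand out from the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy light curve: flux ~1.0 with noise, plus a box-shaped 1% dip
# lasting 0.2 days, repeating every 3.0 days (the assumed orbit).
t = np.arange(0.0, 30.0, 0.01)              # 30 days, 0.01-day cadence
flux = 1.0 + rng.normal(0.0, 0.002, t.size)
period, duration, depth = 3.0, 0.2, 0.01
flux[(t % period) < duration] -= depth

def folded_dip(t, flux, trial_period, nbins=30):
    # Fold the data at a trial period and average each phase bin;
    # a real transit makes one bin noticeably fainter than the rest.
    phase = (t % trial_period) / trial_period
    bins = np.floor(phase * nbins).astype(int)
    means = np.array([flux[bins == b].mean() for b in range(nbins)])
    return means.min(), means

dip, means = folded_dip(t, flux, period)
print(f"deepest phase bin: {dip:.4f}  (out-of-transit level ~ {np.median(means):.4f})")
```

A real search would scan many trial periods (for example with a box least squares algorithm) rather than assuming the period is known, but the folding idea is the same.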
[ "Planets are extremely faint light sources compared to stars, and what little light comes from them tends to be lost in the glare from their parent star. So in general, it is very difficult to detect and resolve them directly from their host star. Planets orbiting far enough from stars to be resolved reflect very little starlight, so planets are detected through their thermal emission instead. It is easier to obtain images when the star system is relatively near to the Sun, and when the planet is especially large (considerably larger than Jupiter), widely separated from its parent star, and hot so that it emits intense infrared radiation; images have then been made in the infrared, where the planet is brighter than it is at visible wavelengths. Coronagraphs are used to block light from the star, while leaving the planet visible. Direct imaging of an Earth-like exoplanet requires extreme optothermal stability. During the accretion phase of planetary formation, the star-planet contrast may be even better in H alpha than it is in infrared – an H alpha survey is currently underway.\n", "Short-period planets in close orbits around their stars will undergo reflected light variations because, like the Moon, they will go through phases from full to new and back again. In addition, as these planets receive a lot of starlight, it heats them, making thermal emissions potentially detectable. Since telescopes cannot resolve the planet from the star, they see only the combined light, and the brightness of the host star seems to change over each orbit in a periodic manner. Although the effect is small — the photometric precision required is about the same as to detect an Earth-sized planet in transit across a solar-type star – such Jupiter-sized planets with an orbital period of a few days are detectable by space telescopes such as the Kepler Space Observatory. Like with the transit method, it is easier to detect large planets orbiting close to their parent star than other planets as these planets catch more light from their parent star. When a planet has a high albedo and is situated around a relatively luminous star, its light variations are easier to detect in visible light while darker planets or planets around low-temperature stars are more easily detectable with infrared light with this method. In the long run, this method may find the most planets that will be discovered by that mission because the reflected light variation with orbital phase is largely independent of orbital inclination and does not require the planet to pass in front of the disk of the star. It still cannot detect planets with circular face-on orbits from Earth's viewpoint as the amount of reflected light does not change during its orbit.\n", "In addition to transits, planets orbiting around their stars undergo reflected-light variations—like the Moon, they go through phases from full to new and back again. Because Kepler cannot resolve the planet from the star, it sees only the combined light, and the brightness of the host star seems to change over each orbit in a periodic manner. Although the effect is small—the photometric precision required to see a close-in giant planet is about the same as to detect an Earth-sized planet in transit across a solar-type star—Jupiter-sized planets with an orbital period of a few days or less are detectable by sensitive space telescopes such as Kepler. 
In the long run, this method may help find more planets than the transit method, because the reflected light variation with orbital phase is largely independent of the planet's orbital inclination, and does not require the planet to pass in front of the disk of the star. In addition, the phase function of a giant planet is also a function of its thermal properties and atmosphere, if any. Therefore, the phase curve may constrain other planetary properties, such as the particle size distribution of the atmospheric particles.\n", "One of the biggest disadvantages of this method is that the light variation effect is very small. A Jovian-mass planet orbiting 0.025 AU away from a Sun-like star is barely detectable even when the orbit is edge-on. This is not an ideal method for discovering new planets, as the amount of emitted and reflected starlight from the planet is usually much larger than light variations due to relativistic beaming. This method is still useful, however, as it allows for measurement of the planet's mass without the need for follow-up data collection from radial velocity observations.\n", "Preliminary identification of possible star candidates starts at the Haleakala telescope in Hawaii by a team of professional astronomers. Once they identify a star that dims slightly from time to time, the information is forwarded to a team of amateur astronomers who then investigate for additional evidence suggesting this dimming is caused by a transiting planet. Once enough data is collected, it is forwarded to the University of Texas McDonald Observatory to confirm the presence of a transiting planet by a second team of professional astronomers.\n", "Stars with planets may also show brightness variations if their planets pass between Earth and the star. These variations are much smaller than those seen with stellar companions and are only detectable with extremely accurate observations. Examples include HD 209458 and GSC 02652-01324, and all of the planets and planet candidates detected by the Kepler Mission.\n", "The two telescopes, OGLE and Spitzer, discovered the planet through gravitational microlensing. This is done by observing when the star passes between Earth and another star. The distance at which the star is seen allows us to observe gravity bending the light and the change of brightness shows the existence of the star. If there is a planet orbiting the star, then the astronomer will also see the same thing twice, which helped astronomers discover OGLE-2014-BLG-0124Lb.\n" ]
Why do objects in space tumble when rotated on a certain axis?
Objects will appear to "tumble" if they are not being rotated about one of their three "principal axes." If you rotate an object about some arbitrary axis, then the angular momentum vector will not, in general, be in the same direction as the rotation vector. Because the angular momentum vector must be conserved, the rotation axis changes to keep the angular momentum vector pointing in the same direction and the object appears to "tumble." If you happen to rotate the object about one of its principal axes (such as the deck of cards at the beginning of the second video), then the rotation and angular momentum vectors are aligned and the object does not "tumble."
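You can see both behaviors numerically by integrating Euler's torque-free equations for a rigid body (a sketch with an illustrative inertia tensor, not any particular object): a spin started almost exactly about the intermediate principal axis tumbles, while one about the largest principal axis stays steady.

```python
import numpy as np

# Principal moments of inertia (illustrative values, I1 < I2 < I3).
I = np.array([1.0, 2.0, 3.0])

def euler_rhs(w):
    # Torque-free Euler equations in the body frame:
    #   I1*dw1/dt = (I2 - I3)*w2*w3, and cyclic permutations.
    return np.array([
        (I[1] - I[2]) * w[1] * w[2] / I[0],
        (I[2] - I[0]) * w[2] * w[0] / I[1],
        (I[0] - I[1]) * w[0] * w[1] / I[2],
    ])

def spin(w0, dt=1e-3, steps=20000):
    # Simple RK4 integration of the body-frame angular velocity.
    w = np.array(w0, dtype=float)
    history = [w.copy()]
    for _ in range(steps):
        k1 = euler_rhs(w)
        k2 = euler_rhs(w + 0.5 * dt * k1)
        k3 = euler_rhs(w + 0.5 * dt * k2)
        k4 = euler_rhs(w + dt * k3)
        w += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        history.append(w.copy())
    return np.array(history)

# Spin almost exactly about the intermediate axis (axis 2): the tiny
# perturbation grows and the rotation axis flips over -- tumbling.
tumble = spin([0.01, 1.0, 0.01])
# Spin almost exactly about the major axis (axis 3): stays stable.
stable = spin([0.01, 0.01, 1.0])

print("intermediate axis: w1 range", tumble[:, 0].min(), tumble[:, 0].max())
print("major axis:        w1 range", stable[:, 0].min(), stable[:, 0].max())
```

In the first run the small w1 component grows until the body flips over (the classic intermediate axis instability); in the second it just oscillates with a tiny amplitude, which is the stable, non-tumbling case the answer describes.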
[ "This is a list of tumblers, minor planets, comets and natural satellites that rotate on a non-principal axis, commonly known as \"tumbling\" or \"wobbling\". As of 2018, there are 3 natural satellites and 198 confirmed or likely tumblers out of a total of nearly 800,000 discovered small Solar System bodies. The data is sourced from the \"Lightcurve Data Base\" (LCDB). The tumbling of a body can be caused by the torque from asymmetrically emitted radiation known as the YORP effect.\n", "Rotating unbalance is the uneven distribution of mass around an axis of rotation. A rotating mass, or rotor, is said to be out of balance when its center of mass (inertia axis) is out of alignment with the center of rotation (geometric axis). Unbalance causes a moment which gives the rotor a wobbling movement characteristic of vibration of rotating structures.\n", "The Chandler wobble is an example of the kind of motion that can occur for a spinning object that is not a sphere; this is called a free nutation. Somewhat confusingly, the direction of the Earth's spin axis relative to the stars also varies with different periods, and these motions—caused by the tidal forces of the Moon and Sun—are also called nutations, except for the slowest, which are precessions of the equinoxes.\n", "Note that this rotation is kinematic, rather than physical, because usually when a rigid object moves freely in space its rotation is independent of its translation. The exception would be if the object's rotation is physically constrained to align itself with the object's translation, as is the case with the cart of a roller coaster.\n", "Chaotic rotation involves the irregular and unpredictable rotation of an astronomical body. Unlike Earth's rotation, a chaotic rotation may not have a fixed axis or period. Because of the conservation of angular momentum, chaotic rotation is not seen in objects that are spherically symmetric or well isolated from gravitational interaction, but is the result of the interactions within a system of orbiting bodies, similar to those associated with orbital resonance.\n", "In a flat disk of objects with eccentric orbits a small initial vertical perturbation is amplified by the inclination instability. The initial perturbation exerts an vertical force. On very long timescales relative to the period of an object's orbit this force produces a net torque on the orbit due to the object spending more time near aphelion. This torque causes the plane of the orbit to roll on its major axis. In a disk this results in the orbits rolling with respect to each other so that the orbits are no longer co-planar. The gravity of the objects now exerts forces on each other that are out of planes of their orbits. Unlike the force due to the initial perturbation these forces are in opposite directions, up and down respectively, on the inbound and outbound portions of their orbits. The resulting torque causes their orbits to rotate about their minor axes, lifting their aphelia, causing the disk to form a cone. The angular momentum of the orbit is also increased due to this torque resulting in reduction of the eccentricity of the orbits. The inclination instability requires an initial eccentricity of 0.6 or larger, and saturates when inclinations reach ~1 radian, after which orbits precess due to the gravity toward the cone's axis of symmetry.\n", "The observers also detected a non-principal axis rotation seen in distinct rotational cycles in successive order. This is commonly known as tumbling. 
\"Tatry\" is one of a group of less than 200 bodies known to be is such a state \"(also see List of tumblers).\"\n" ]
Why does it seem like Coca-Cola is sold in nearly every country of the world, even underdeveloped ones, while bottled water seems hard to come by?
Bottled water is available there too, but you hardly hear about it because bottled water brands don't have the small-country-sized marketing budget that Coca-Cola pumps into its soft drinks.
[ "The U.S. is the second largest consumer market for bottled water in the world, followed by Mexico, Indonesia, and Brazil. China surpassed the United States to take the lead in 2013. In 2016, bottled water outsold carbonated soft drinks (by volume) to become the number one packaged beverage in the U.S. In 2018, bottled water consumption increased to 14 billion gallons, up 5.8 percent from 2017, with the average American drinking 41.9 gallons of bottled water annually.\n", "Coca-Cola is the best-selling soft drink in most countries, and was recognized as the number one global brand in 2010. While the Middle East is one of the only regions in the world where Coca-Cola is not the number one soda drink, Coca-Cola nonetheless holds almost 25% market share (to Pepsi's 75%) and had double-digit growth in 2003. Similarly, in Scotland, where the locally produced Irn-Bru was once more popular, 2005 figures show that both Coca-Cola and Diet Coke now outsell Irn-Bru. In Peru, the native Inca Kola has been more popular than Coca-Cola, which prompted Coca-Cola to enter in negotiations with the soft drink's company and buy 50% of its stakes. In Japan, the best selling soft drink is not cola, as (canned) tea and coffee are more popular. As such, The Coca-Cola Company's best selling brand there is not Coca-Cola, but Georgia. In May 2016, The Coca-Cola Company temporarily halted production of its signature drink in Venezuela due to sugar shortages. Since then, The Coca-Cola Company has been using \"minimum inventories of raw material\" to make their signature drinks at two production plants in Venezuela.\n", "In 2001, a WWF study, \"Bottled water: understanding a social phenomenon\", warned that in many countries, bottled water may be no safer or healthier than tap water and it sold for up to 1,000 times the price. It said the booming market would put severe pressure on recycling plastics and could lead to landfill sites drowning in mountains of plastic bottles. Also, the study discovered that the production of bottled water uses more water than the consumer actually buys in the bottle itself.\n", "As of 2015, Coca-Cola has been distributed to over 200 countries worldwide. A few of the many countries consist of China, Guatemala, Papua New Guinea, Mexico, Russia, Canada, United Kingdom, Algeria, and Libya. According to the company, \"Coca-Cola is the second-most understood term in the world behind \"okay.\"\n", "Bottled water is perceived by many as being a safer alternative to other sources of water such as tap water. Bottled water usage has increased even in countries where clean tap water is present. This may be attributed to consumers disliking the taste of tap water or its organoleptics. Another contributing factor to this shift could be the marketing success of bottled water. The success of bottled water marketing can be seen by Perrier's transformation of a bottle of water into a status symbol. However, while bottled water has grown in both consumption and sales, the industry's advertising expenses are considerably less than other beverages. According to the Beverage Marketing Corporation (BMC), in 2013, the bottled water industry spent $60.6 million on advertising. That same year, sports drinks spent $128 million, sodas spent $564 million, and beer spent $1 billion.\n", "Examples are Coca-Cola and some of the General Electric businesses. The drawback is that the business would risk losing business as soon as a weakness in its supply chain or in its marketing forces it to withdraw from the market. 
Coca-Cola’s attempt to sell its Dasani bottled water in the UK turned out to be a flop, mainly because it tried to position this “purified tap water” alongside the mineral water of other brands. The trigger was a contamination scandal reported in the media.\n", "Consumers tend to choose bottled water due to health-related reasons. In communities that experience problems with their tap water, bottled water consumption is significantly higher. The International Bottled Water Association guidelines state that bottled water companies cannot compare their product to tap water in marketing operations. Consumers are also affected by memories associated with particular brands. For example, Coca-Cola took their Dasani product off the UK market after finding levels of bromate that were higher than legal standards, because consumers in the UK associated this flaw with the Dasani product.\n" ]
Why does traditional Japanese architecture only rarely use stone structures?
Hi! Additional input is welcome, but meanwhile you may be interested in responses to these earlier questions:

* [Why are Japanese castles built of wood as opposed to stone?](_URL_5_)
* [What military value did Japanese castles have, compared to European castles?](_URL_3_)
* [Why didn't Asians build castles like the Europeans?](_URL_1_)
* [Why didn't Europe adopt Japanese castles?](_URL_2_)
* [Why don't we restore ancient ruins?](_URL_6_)

Siege warfare in Japan:

* [What were some defense tactics used in castles?](_URL_4_)
* [How did the Japanese lay siege to their castles?](_URL_0_)
[ "Japanese architects have designed a way to build temples, furniture, and homes without using screws or nails. To keep the piece together joints are constructed to hold everything in place. However, more time consuming, joints tend to hold up to natural disasters better than nails and screws, which is how some temples in Japan are still standing despite recent natural events. There are two main categories with Japanese buildings, either craftsman like or industrial. Industrial tends to be made by machines while the craftsman style is handmade and tends to take up more time then the industrial style. Japanese homes were influenced from China greatly until 57 B.C when Japanese homes started to grow to be more distinct from other cultures. Until 660 AD homes and building constructed in Japan were made from stone and timber. Even though all buildings from this era are long gone there are documents showing traditional structures. Contrary to this however, wood still remains the most important material in Japanese architecture.\n", "Japanese architecture is distinctive in that it reflects a deep ″understanding of the natural world as a source of spiritual insight and an instructive mirror of human emotion″. Attention to aesthetics and the surroundings is given, natural materials are preferred and artifice is generally being avoided. Impressive wooden castles and temples, some of them 2000 years old, stand embedded in the natural contours of the local topography. Notable examples include the Hōryū Temple complex (6th century), Himeji Castle (14th century), Hikone Castle (17th century) and Osaka Castle.\n", "Japanese architecture is a combination between local and other influences. It has traditionally been typified by wooden structures, elevated slightly off the ground, with tiled or thatched roofs. Sliding doors (\"fusuma\") were used in place of walls, allowing the internal configuration of a space to be customized for different occasions. People usually sat on cushions or otherwise on the floor, traditionally; chairs and high tables were not widely used until the 20th century. Since the 19th century, however, Japan has incorporated much of Western, modern, and post-modern architecture into construction and design, and is today a leader in cutting-edge architectural design and technology.\n", "Japanese art and architecture is works of art produced in Japan from the beginnings of human habitation there, sometime in the 10th millennium BC, to the present. Japanese art covers a wide range of art styles and media, including ancient pottery, sculpture in wood and bronze, ink painting on silk and paper, and a myriad of other types of works of art; from ancient times until the contemporary 21st century.\n", "In Japanese traditional architecture, there are various styles, features and techniques unique to Japan in each period and use, such as residence, castle, Buddhist temple and Shinto shrine. On the other hand, especially in ancient times, it was strongly influenced by Chinese culture like other Asian countries, so it has characteristics common to architecture in Asian countries.\n", "Many historical Japantowns will exhibit architectural styles that reflect the Japanese culture. Japanese architecture has traditionally been typified by wooden structures, elevated slightly off the ground, with tiled or thatched roofs. Sliding doors (\"fusuma\") were used in place of walls, allowing the internal configuration of a space to be customized for different occasions. 
People usually sat on cushions or otherwise on the floor, traditionally; chairs and high tables were not widely used until the 20th century. Since the 19th century, however, Japan has incorporated much of Western, modern, and post-modern architecture into construction and design, and is today a leader in cutting-edge architectural design and technology.\n", "Japanese architecture has as long a history as any other aspect of Japanese culture. Originally heavily influenced by Chinese architecture, it has developed many differences and aspects which are indigenous to Japan. Examples of traditional architecture are seen at temples, Shinto shrines, and castles in Kyoto and Nara. Some of these buildings are constructed with traditional gardens, which are influenced by Zen ideas. Some modern architects, such as Yoshio Taniguchi and Tadao Ando, are known for their amalgamation of Japanese traditional and Western architectural influences.\n" ]
We're all told that using phones while they're charging is bad. Can any of the good people here tell me why?
Who told you that? It increases the amount of time it takes to charge, but is otherwise fine.
[ "In a number of cases it has been shown that bans on mobile use while driving have proven to be an effective way to deter people from picking up their phones. Those violating the ban usually face fines and points on their licence. Although an initial decrease/alteration in driving habits is to be expected. As time goes on the number of people breaking these laws/regulations eventually goes back to normal, sometimes higher levels as time goes on and people go back to their old habits. In addition, police officers have difficulties detecting mobile phone use in vehicles, which decreases the effectiveness of bans/restrictions on mobile phones. \n", "The negative consumption externalities caused by mobile phone use while driving, as shown, has economic costs. Not only does mobile phone use while driving jeopardize safety for the driver, anyone in the car, or others on the road but it also produces economic costs to all parties involved. As shown, these costs are best managed with government intervention through policy or legislation changes. Ticketing is often the best choice as it affects only those who are caught performing the illegal act. Ticketing is another cost induced from mobile phone use and driving because ticketing laws for this act have only been put into place due to the large number of crashes caused by distracted drivers due to mobile phone use. Further, not only are the tickets costly to individuals who receive them but so is the price that must be paid to enforce the prohibition of mobile phone use while driving. Key to the success of a legislative measure is the ability to maintain and sustain them through enforcement or the perception of enforcement. Police officer and photo radar cameras are other costs that must be paid in order to reduce this externality.\n", "In the UK using a mobile phone while driving has been illegal since 2003, unless it is in a handsfree kit. The penalty originally started with a £30 ($40) fine which later became a fine of £60 ($80) plus 3 penalty points in 2006, then £100 ($134) and 3 points in 2013. There was a tendency for motorists behaving and becoming significantly more compliant initially with the introduction of the updated laws, only to later to resume their ordinary habits. The 2013 fine increase was not at all effective at stopping motorists from using their phones while driving. The percentage of drivers admitting to using their phones while on the road actually increased from 8% in 2014 to 31% in 2016 an increase of 23% in just two years. In the same year statistics revealed that only 30,000 drivers were given a Fixed penalty notice (FPN) for the offence, compared to 123,000 in 2011. The increased percentage of people using their phones can be attributed in part to the growing affordability of smartphones. Possibly the most important factor was the increasing lack of enforcement of the ban by the police. Both increased smartphone sales and lack of enforcement created a situation where in which it was acceptable to use your phone while driving again, yet having being illegal for over 13 years.\n", "Due to the increasing complexity of mobile phones, they are often more like mobile computers in their available uses. This has introduced additional difficulties for law enforcement officials when attempting to distinguish one usage from another in drivers using their devices. 
This is more apparent in countries which ban both handheld and hands-free usage, rather than those which ban handheld use only, as officials cannot easily tell which function of the mobile phone is being used simply by looking at the driver. This can lead to drivers being stopped for using their device illegally for a phone call when, in fact, they were using the device legally, for example, when using the phone's incorporated controls for car stereo, GPS or satnav.\n", "Current US laws are not strictly enforced. Punishments are so mild that people pay little attention. Drivers are not categorically prohibited from using phones while driving. For example, using earphones to talk and texting with a hands-free device remain legal.\n", "Mobile phone use while driving is common but it is widely considered dangerous due to its potential for causing distracted driving and crashes. Due to the number of crashes that are related to conducting calls on a phone and texting while driving, some jurisdictions have made calling on a phone while driving illegal. Many jurisdictions have enacted laws to ban handheld mobile phone use. Nevertheless, many jurisdictions allow use of a hands-free device. Driving while using a hands-free device is not safer than using a handheld phone to conduct calls, as concluded by case-crossover, epidemiological, simulation, and meta-analysis studies. In some cases restrictions are directed only at minors, those who are newly qualified license holders (of any age), or drivers in school zones. In addition to voice calling, activities such as texting while driving, web browsing, playing video games, or phone use in general can also increase the risk of a crash.\n", "The phone can be charged with the included proprietary USB cable which plugs into the only port on the phone, making it impossible to use the included headset while charging, although it is still possible to use the phone while charging.\n" ]
Searching for books about Current Elites in Korea (~ < 50 yrs) for research. Any recommendations? (x/post /r/korea)
I've done research on Korea from an economic perspective (looking at how political changes and actions were central to development), but there's some overlap with what you want, so here are a few papers that might be a good starting point. If you are looking for specific individuals then these papers won't be much help, but if you want an idea of what sort of groups elites belonged to then I think they will be helpful. The links are mostly about how economic and political elites were both focused on growth and development, with political favoritism and corruption being part of the relationship. I also don't know how much basic info you have about the Park government, but I would definitely start by researching the dramatic changes Park introduced into the country and economy, because they form the basis of Korea in the second half of the 20th century. [Corruption and NIC development: A case study of South Korea](_URL_1_) Looks at how corruption between Korean conglomerates (the Chaebol) and the government was intertwined with development. [Crony Capitalism: Corruption and Development in South Korea and the Philippines](_URL_2_) Compares crony capitalism in the two countries. Useful because the crony capitalists were the economic elites, and worked with political elites. [The Treatment of Market Power in Korea](_URL_0_) About how Chaebols are entrenched in Korea, and how they were even more entrenched previously. Shows the entrenchment of the economic elites who head them. I don't have time to track any more links down now, but if you let me know what specifically you are looking for or are interested in, I can check again later and hopefully find some more relevant sources for you!
[ "The \"Daehan Gyenyeonsa\" (A History of the Final Years of the Empire of Great Han of Korea) is, as the title indicates, a history of the final forty years of Korea's Joseon dynasty (after 1898 known as the Empire of Great Han). It was penned by a minor government official and member of the Korean enlightenment movement, Jeong Gyo (鄭喬 1856-1925), about whom little is known. The books is chronologically ordered and much of the historical content is based upon Jeong's own experiences and eye-witness accounts, yielding up rich historical detail and anecdote not available elsewhere. It is particularly useful in its details of Korea's Independence Club.\n", "Doksa Sillon or A New Reading of History (1908) is a book that discusses the history of Korea from the time of the mythical Dangun to the fall of the kingdom of Baekje in 926 CE. Its author––historian, essayist, and independence activist Shin Chaeho (1880–1936)––first published it as a series of articles in the \"Daehan Maeil Sinbo\" (the \"Korea Daily News\"), of which he was the editor-in-chief.\n", "The Dongguk Tonggam (Comprehensive Mirror of the eastern state) is a chronicle of the early history of Korea compiled by Seo Geo-jeong (1420–1488) and other scholars in the 15th century. Originally commissioned by King Sejo in 1446, it was completed under the reign of Seongjong of Joseon, in 1485. The official Choe Bu was one of the scholars who helped compile and edit the work. The earlier works on which it may have been based have not survived. The \"Dongguk Tonggam\" is the earliest extant record to list the names of the rulers of Gojoseon after Dangun.\n", "In the beginning years, they announced their ambition of creating a definitive Korean studies database. Some of their first published Korean classical literary works, in digital form, included Goryeosa, The History of Balhae, Tripitaka Koreana, Samguk Sagi and Samguk Yusa. Research scholars, also noted the company as having introduced, in 1998-1999, a few historical works from North Korea, through China, which they published on CD-ROM.\n", "Lee’s academic career includes works about Korea’s history of communism, the division of the Korean Peninsula, and the origins of the Republic of Korea. He also researched major figures in modern Korean history such as Syngman Rhee, the first president of Korea (1948-1960); Woon-Hyung Yuh, a Korean politician and reunification activist in the 1940s; and Chung-Hee Park, the third president of Korea (1963-1979) who seized power through a military coup. In particular, his works on Korea-Japan relations, communist movements in Manchuria, and the international relations of East Asia have been translated into many languages and are considered classics in East Asian studies.\n", "The tradition of Korean historiography was established with the \"Samguk Sagi\", a history of Korea from its allegedly earliest times. It was compiled by Goryeo court historian Kim Busik after its commission by King Injong of Goryeo (r. 1122 – 1146). It was completed in 1145 and relied not only on earlier Chinese histories for source material, but also on the \"Hwarang Segi\" written by the Silla historian Kim Daemun in the 8th century. The latter work is now lost.\n", "From 1974-77, Palais edited Occasional Papers on Korean Studies, as known as the Journal of Korean Studies, which was edited out of the University of Washington until 1988. 
Palais' political interests resulted in the Asia Watch report \"Human Rights in Korea\" (Washington, 1986), but perhaps his greatest work was the 1230-page \"Confucian Statecraft and Korean Institutions: Yu Hyongwon and the late Choson Dynasty\", a comprehensive overview of Choson Dynasty (1392-1910) Korean institutions as discussed by the eminent 17th century Korean statesman. This book was awarded the John Whitney Hall book prize as the best book on Japan or Korea in 1998.\n" ]
Even if we could terraform Mars, wouldn't its lack of a magnetic field mean cosmic radiation would continually bombard whatever is living on the surface?
Radiation doesn't just blast the surface with cancer rays; it also whisks away the atmosphere. Mars's atmosphere is very thin, with a surface pressure below the Armstrong limit (the pressure at which water boils at human body temperature), so the complex life we have on Earth cannot survive there unprotected.
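To put rough numbers on that (a back-of-the-envelope Clausius-Clapeyron estimate in Python with standard constants; the ~6.3 kPa Armstrong limit and ~600 Pa Mars surface pressure are approximate textbook values):

```python
import math

# Clausius-Clapeyron estimate of water's boiling point at pressure p,
# anchored at 100 C / 101.325 kPa, latent heat ~40.7 kJ/mol.
R, H_VAP = 8.314, 40700.0          # J/(mol K), J/mol
P0, T0 = 101325.0, 373.15          # reference: 1 atm, 100 C

def boiling_point_K(p_pa):
    return 1.0 / (1.0 / T0 - (R / H_VAP) * math.log(p_pa / P0))

for label, p in [("Earth sea level (101.3 kPa)", 101325.0),
                 ("Armstrong limit (~6.3 kPa)", 6300.0),
                 ("Mars mean surface (~600 Pa)", 600.0)]:
    t_c = boiling_point_K(p) - 273.15
    print(f"{label:28s} water boils at ~{t_c:5.1f} C")
```

The Mars estimate comes out below water's freezing point, i.e. at Martian surface pressure liquid water, including the water in exposed tissue, is simply not stable, which is why unprotected Earth-style complex life can't survive there.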
[ "In 1965, the Mariner 4 probe discovered that Mars had no global magnetic field that would protect the planet from potentially life-threatening cosmic radiation and solar radiation; observations made in the late 1990s by the Mars Global Surveyor confirmed this discovery. Scientists speculate that the lack of magnetic shielding helped the solar wind blow away much of Mars' atmosphere over the course of several billion years. As a result, the planet has been vulnerable to radiation from space for about 4 billion years.\n", "The loss of the Martian magnetic field strongly affected surface environments through atmospheric loss and increased radiation; this change significantly degraded surface habitability. When there was a magnetic field, the atmosphere would have been protected from erosion by the solar wind, which would ensure the maintenance of a dense atmosphere, necessary for liquid water to exist on the surface of Mars. The loss of the atmosphere was accompanied by decreasing temperatures. Part of the liquid water inventory sublimed and was transported to the poles, while the rest became\n", "Even the hardiest cells known could not possibly survive the cosmic radiation near the surface of Mars since Mars lost its protective magnetosphere and atmosphere. After mapping cosmic radiation levels at various depths on Mars, researchers have concluded that over time, any life within the first several meters of the planet's surface would be killed by lethal doses of cosmic radiation. The team calculated that the cumulative damage to DNA and RNA by cosmic radiation would limit retrieving viable dormant cells on Mars to depths greater than 7.5 meters below the planet's surface.\n", "Furthermore, without Earth's surrounding magnetic field as a shield, solar radiation has much harsher effects on biological organisms in space. The exposure can include damage to the central nervous system, (altered cognitive function, reducing motor function and incurring possible behavioral changes), as well as the possibility of degenerative tissue diseases.\n", "Earth's magnetic field dominates the terrestrial magnetosphere and prevents the solar wind from hitting us head on. Lacking a large protective magnetosphere, Mars is thought to have lost much of its former oceans and atmosphere to space in part due to the direct impact of the solar wind. Venus with its thick atmosphere is thought to have lost most of its water to space in large part owing to solar wind ablation.\n", "Mars does not have an intrinsic global magnetic field, but the solar wind directly interacts with the atmosphere of Mars, leading to the formation of a magnetosphere from magnetic field tubes. This poses challenges for mitigating solar radiation and retaining an atmosphere.\n", "Radiation exposure is a concern for astronauts even on the surface, as Mars lacks a strong magnetic field and atmosphere is thin to stop as much radiation as Earth. However, the planet does reduce the radiation significantly especially on the surface, and it is not detected to be radioactive itself.\n" ]
Did the U.S. experience any diplomatic fallout due to non-Japanese casualties of the atomic bombs?
I haven't really looked into the diplomatic fallout, though the issue did surface from time to time in the press. I know of nothing specific on this, but that absence may only mean it hasn't been written about much, not that it didn't exist. As for "third-party nations," the main non-Japanese victims of the bombs that come to mind are POWs (British, American, and Dutch), Koreans (laborers), and Germans (the Jesuits at Hiroshima, and maybe others). Of these groups, the Germans were the most represented in American media, featured quite prominently in John Hersey's _Hiroshima_, among other sources; the Koreans were by far the largest group of victims.
[ "After the use of the bombs, American journalists traveled to the devastated areas and documented the horrors they saw. This raised moral concerns and the necessity of the attack. The motives of President Harry Truman, the United States Army Air Force (USAAF), and the United States Navy came under suspicion, and the USAAF and Navy released statements that it was necessary in order to make Japan surrender.\n", "On 30 June 2007, Japan's defense minister Fumio Kyūma said the dropping of atomic bombs on Japan by the United States during World War II was an inevitable way to end the war. Kyūma said: \"I now have come to accept in my mind that in order to end the war, it could not be helped (shikata ga nai) that an atomic bomb was dropped on Nagasaki and that countless numbers of people suffered great tragedy.\" Kyūma, who is from Nagasaki, said the bombing caused great suffering in the city, but he does not resent the U.S. because it prevented the Soviet Union from entering the war with Japan. Kyūma's comments were similar to those made by Emperor Hirohito when, in his first ever press conference given in Tokyo in 1975, he was asked what he thought of the bombing of Hiroshima, and answered: \"It's very regrettable that nuclear bombs were dropped and I feel sorry for the citizens of Hiroshima but it couldn't be helped (shikata ga nai) because that happened in wartime.\"\n", "The atomic bomb attacks have been the subject of long-running controversy. Shortly after the attacks an opinion poll found that about 85 percent of Americans supported the use of atomic weapons, and the wartime generation believed that they had saved millions of lives. Criticisms over the decision to use the bombs have increased over time, however. Arguments made against the attacks include that Japan would have eventually surrendered and that the attacks were made to either intimidate the Soviet Union or justify the Manhattan Project. In 1994, an opinion poll found that 55 percent of Americans supported the decision to bomb Hiroshima and Nagasaki. When registering the only dissenting opinion of the judges involved in the International Military Tribunal for the Far East in 1947, Justice Radhabinod Pal argued that Japan's leadership had not conspired to commit atrocities and stated that the decision to conduct the atomic bomb attacks was the clearest example of a direct order to conduct \"indiscriminate murder\" during the Pacific War. Since then, Japanese academics, such as Yuki Tanaka and Tsuyoshi Hasegawa, have argued that use of the bombs was immoral and constituted a war crime. In contrast, President Truman and, more recently, historians such as Paul Fussell have argued that the attacks on Hiroshima and Nagasaki were justified as they induced the Japanese surrender.\n", "However, on August 6, 1945, the US dropped an atomic bomb over Hiroshima, killing over 70,000 people. This was the first nuclear attack in history. On August 9 the Soviet Union declared war on Japan and invaded Manchukuo and other territories, and Nagasaki was struck by a second atomic bomb, killing around 40,000 people. The unconditional surrender of Japan was announced by Emperor Hirohito and communicated to the Allies on August 14, and broadcast on national radio on the following day, marking the end of Imperial Japan's ultranationalist ideology, and was a major turning point in Japanese history.\n", "As victory for the United States slowly approached, casualties mounted. 
A fear in the American high command was that an invasion of mainland Japan would lead to enormous losses on the part of the Allies, as casualty estimates for the planned Operation Downfall demonstrate. As Japan was able to withstand the devastating incendiary raids and the naval blockade despite hundreds of thousands of civilian deaths, President Harry Truman gave the order to drop the only two available atomic bombs, hoping that such sheer force of destruction on a city would break Japanese resolve and end the war. The first bomb was dropped on an industrial city, Hiroshima, on August 6, 1945, killing approximately 70,000 people. A second bomb was dropped on another industrial city, Nagasaki, on August 9 after it appeared that the Japanese high command was not planning to surrender, killing approximately 35,000 people. Fearing additional atomic attacks, Japan surrendered on August 15, 1945.\n", "As Ian Buruma observed, \"News of the terrible consequences of the atom bomb attacks on Japan was deliberately withheld from the Japanese public by US military censors during the Allied occupation—even as they sought to teach the natives the virtues of a free press. Casualty statistics were suppressed. Film shot by Japanese cameramen in Hiroshima and Nagasaki after the bombings was confiscated. \"Hiroshima\", the account written by John Hersey for \"The New Yorker\", had a huge impact in the US, but was banned in Japan. As [John] Dower says: 'In the localities themselves, suffering was compounded not merely by the unprecedented nature of the catastrophe ... but also by the fact that public struggle with this traumatic experience was not permitted.\" The US occupation authorities maintained a monopoly on scientific and medical information about the effects of the atomic bomb through the work of the Atomic Bomb Casualty Commission, which treated the data gathered in studies of hibakusha as privileged information rather than making the results available for the treatment of victims or providing financial or medical support to aid victims.\n", "The Imperial Japanese Navy also carried out a carrier-based airstrike on the neutral United States at Pearl Harbor and Oahu on 7 December 1941, resulting in almost 2,500 fatalities and plunging America into World War II the next day. There were also air raids on the Philippines and northern Australia (Bombing of Darwin, 19 February 1942).\n" ]
How does your body remove excess salt on a physiological level?
Excess salt doesn't really go into your cells, because you have a pump that pumps it out in exchange for pumping potassium into the cell. If large quantities of excess salt went into the cell, osmosis would, in fact, pull water into the cell, causing it to swell and eventually burst. Instead, the salt remains in your plasma, where it reaches the kidneys. Your kidneys have various mechanisms for adjusting the salt concentration in your urine. For example, there are cells in the kidney that can detect high levels of salt in the blood, which ultimately prevents your kidneys from reabsorbing salt back into your body. You can think of it as your body maintaining a certain salt concentration - if you have too much salt, your kidneys will "use" extra water to remove it. There are other mechanisms as well - for example, taking in a lot of salt increases your thirst drive, which helps dilute the salt that's in your body to maintain the proper concentration. I realize that this wasn't too specific, but the main point is that it would be very bad if a lot of salt were permitted to enter your cells, so your body has mechanisms to keep that from happening. The kidneys are the primary regulators that maintain a proper concentration of salt in your system by controlling how much salt you pee out.
[ "Salting out (also known as salt-induced precipitation, salt fractionation, anti-solvent crystallization, precipitation crystallization, or drowning out) is an effect based on the electrolyte–non-electrolyte interaction, in which the non-electrolyte could be less soluble at high salt concentrations. It is used as a method of purification for proteins, as well as preventing protein denaturation due to excessively diluted samples during experiments. The salt concentration needed for the protein to precipitate out of the solution differs from protein to protein. This process is also used to concentrate dilute solutions of proteins. Dialysis can be used to remove the salt if needed.\n", "The human body has evolved to balance salt intake with need through means such as the renin–angiotensin system. In humans, salt has important biological functions. Relevant to risk of cardiovascular disease, salt is highly involved with the maintenance of body fluid volume, including osmotic balance in the blood, extracellular and intracellular fluids, and resting membrane potential.\n", "Salt compounds dissociate in aqueous solutions. This property is exploited in the process of salting out. When the salt concentration is increased, some of the water molecules are attracted by the salt ions, which decreases the number of water molecules available to interact with the charged part of the protein.\n", "In the next stage of purification, all this added salt needs to be removed from the protein. One way to do so is using dialysis, but dialysis further dilutes the concentrated protein. The better way of removing Ammonium sulfate from the protein is mixing the precipitate protein a buffer containing mixture of SDS, Tris-HCl and phenol and centrifuging the mixture. The precipitate that comes out of this centrifugation will contain salt-less concentrated protein.\n", "Salting out is a spontaneous process when the right concentration of the salt is reached in solution. The hydrophobic patches on the protein surface generate highly ordered water shells. This results in a small decrease in enthalpy, Δ\"H\", and a larger decrease in entropy, Δ\"S,\" of the ordered water molecules relative to the molecules in the bulk solution. The overall free energy change, Δ\"G\", of the process is given by the Gibbs free energy equation:\n", "However, since this can risk dehydration, an alternative approach is possible of consuming a substantial amount of salt prior to exercise. It is still important not to overconsume water to the extent of requiring urination, because urination would cause the extra salt to be excreted.\n", "Apart from addressing the underlying cause, orthostatic hypotension may be treated with a recommendation to increase salt and water intake (to increase the blood volume), wearing compression stockings, and sometimes medication (fludrocortisone, midodrine or others). Salt loading (dramatic increases in salt intake) must be supervised by a doctor, as this can cause severe neurological problems if done too aggressively.\n" ]
When Did Black Canadians Gain the Vote in Canada?
I've found a bunch of sources saying it was on the 24th of March 1837, at least for Lower Canada, but none of them link to usable sources or actual documents, or even elaborate...
[ "Following the abolition of slavery in the British empire in 1834, any black man born a British subject or who become a British subject was allowed to vote and run for office, provided that they owned taxable property. The property requirement on voting in Canada was not ended until 1920. Black Canadian women like all other Canadian women were not granted the right to vote until partially in 1917 ( when wives, daughters, sisters and mothers of servicemen were granted the right to vote) and fully in 1918 (when all women were granted the right to vote). In 1850, Canadian black women together with all other women were granted the right to vote for school trustees, which was the limit of female voting rights in Canada West. In 1848, in Colchester county in Canada West, white men prevented black men from voting in the municipal elections, but following complaints in the courts, a judge ruled that black voters could not be prevented from voting. Ward, writing about the Colchester case in \"The Voice of the Fugitive\" newspaper, declared that the right to vote was the \"most sacred\" of all rights, and that even if white men took away everything from the black farmers in Colchester county, that would still be a lesser crime compared with losing the \"right of a British vote\". In 1840, Wilson Ruffin Abbott become the first black elected to any office in what became Canada when he was elected to the city council in Toronto. In 1851, James Douglas became the governor of Vancouver Island, but that was not an elective one. Unlike in the United States, in Canada after the abolition of slavery in 1834, black Canadians were never stripped of their right to vote and hold office.\n", "The Canadian federal election of 1935 was held on October 14, 1935. to elect members of the House of Commons of Canada of the 18th Parliament of Canada. The Liberal Party of William Lyon Mackenzie King won a majority government, defeating Prime Minister R. B. Bennett's Conservatives.\n", "BULLET::::- In 1960, the Canadian Bill of Rights becomes law, and suffrage, and the right for any Canadian citizen to vote, was finally adopted by John Diefenbaker's Progressive Conservative government. The new election act allowed First Nations people to vote for the first time.\n", "BULLET::::- 1968– Lincoln Alexander, became Canada's first black Member of Parliament when he was elected to the Canadian House of Commons in 1968 as a member of the Progressive Conservative Party of Canada.\n", "Women's suffrage in Canada occurred at different times in different jurisdictions and at different times to different demographics of women. By the close of 1918, all the Canadian provinces except Quebec had granted full suffrage to white and black women. Municipal suffrage was granted in 1884 to property-owning widows and spinsters in the provinces of Quebec and Ontario; in 1886, in the province of New Brunswick, to all property-owning women except those whose husbands were voters; in Nova Scotia, in 1886; and in Prince Edward Island, in 1888, to property-owning widows and spinsters. In 1916, suffrage was given to women in Manitoba, Saskatchewan, Alberta, and British Columbia. Women in Quebec did not receive full suffrage until 1940. Asian women (and men) were not granted suffrage until after World War II in 1948, Inuit women (and men) were not granted suffrage until 1950 and it was not until 1960 that suffrage was extended to First Nations women (and men) without requiring them to give up their treaty status. 
\n", "This was the first election in which all of Canada's Indigenous Peoples had the right to vote after the passage in March 31, 1960 of a repeal of certain sections of the Canada Elections Act. For the first time ever, the entire land mass of Canada was covered by federal electoral districts (the former Mackenzie River riding was expanded to cover the entire Northwest Territories).\n", "Historically, Black Canadians, being descended from either Black Loyalists or American run-away slaves, had supported the Conservative Party as the party most inclined to maintain ties with Britain, which was seen as the nation that had given them freedom. The Liberals were historically the party of continentalism (i.e moving Canada closer to the United States), which was not an appealing position for most Black Canadians. In the first half of the 20th century, Black Canadians usually voted solidly for the Conservatives as the party seen as the most pro-British. Until the 1930s–1940s, the majority of Black Canadians lived in rural areas, mostly in Ontario and Nova Scotia, which provided a certain degree of insulation from the effects of racism. The self-contained nature of the rural Black communities in Ontario and Nova Scotia with Black farmers clustered together in certain rural counties meant that racism was not experienced on a daily basis. The centre of social life in the rural black communities were the churches, usually Methodist or Baptist, and ministers were generally the most important community leaders. Through anti-Black racism did exist in Canada, as the Black population in Canada was extremely small, there was nothing comparable to the massive campaign directed against Asian immigration, the so-called \"Yellow Peril\", which was a major political issue in the late 19th and early 20th centuries, especially in British Columbia. In 1908, the Canadian Brotherhood of Railroad Employees and Other Transport Workers (CBRE) was founded under the leadership of Aaron Mosher, an avowed white supremacist who objected to white workers like himself having to work alongside black workers. In 1909 and 1913, Mosher negotiated contracts with the Inter Colonial Railroad Company, where he worked as a freight handler, that imposed segregation in workplaces while giving increased wages and benefits to white workers alone. The contracts that Mosher negotiated in 1909 and 1913 served as the basis for the contracts that other railroad companies negotiated with the CBRE. To fight against the discriminatory treatment, the all-black Order of Sleeping Car Porters union was founded in 1917 to fight to end segregation on the railroad lines and to fight for equal pay and benefits.\n" ]
How did the KKK become anti-semitic? I've read before that older members of the KKK claim that it wasn't anti-semitic initially, but that became part of the organisation's ideology over time. How and when did this happen?
Follow-up question: Could this have been influenced by Nazism?
[ "Vehemently anti-Catholic, the 1915 Klan had an explicitly Protestant Christian terrorist ideology, basing its beliefs in part on a \"religious foundation\" in Protestant Christianity and targeting Jews, Catholics, and other social or ethnic minorities, as well as people who engaged in \"immoral\" practices such as adulterers, bad debters, gamblers, and alcohol abusers. From an early time onward, the goals of the KKK included an intent to \"reestablish Protestant Christian values in America by any means possible\", and it believed that \"Jesus was the first Klansman\". Although members of the KKK swear to uphold Christian morality, virtually every Christian denomination has officially denounced the KKK.\n", "During reconstruction at the end of the civil war the original KKK used domestic terrorism against the Federal Government and against freed slaves. During the 20th century, leading up to the Civil Rights Movement, unrelated Ku Klux Klan (KKK) groups used threats, violence, arson, and murder to further their anti-Black, anti-Catholic, anti-Communist, anti-immigrant, anti-semitic, homophobic and white-supremacist agenda. Other groups with agendas similar to the Ku Klux Klan include neo-Nazis, white power skinheads, and other far-right movements.\n", "Initially the KKK presented itself as another fraternal organization devoted to betterment of its members. The KKK's revival was inspired in part by the movie \"Birth of a Nation\", which glorified the earlier Klan and dramatized the racist stereotypes concerning blacks of that era. The Klan focused on political mobilization, which allowed it to gain power in states such as Indiana, on a platform that combined racism with anti-immigrant, anti-Semitic, anti-Catholic and anti-union rhetoric, but also supported lynching. It reached its peak of membership and influence about 1925, declining rapidly afterward as opponents mobilized.\n", "Beginning in the 1910s, Southern Jewish communities were attacked by the Ku Klux Klan, which objected to Jewish immigration, and often used \"The Jewish Banker\" caricature in its propaganda. In 1915, Leo Frank was lynched in Georgia after being convicted of rape and sentenced to death (his punishment was commuted to life imprisonment). This event was a catalyst in the re-formation of the new Ku Klux Klan.\n", "The Ku Klux Klan (KKK) arose as independent chapters, part of the postwar insurgency related to the struggle for power in the South. In 1866, Mississippi Governor William L. Sharkey reported that disorder, lack of control and lawlessness were widespread. The Klan used public violence against blacks as intimidation. They burned houses, and attacked and killed blacks, leaving their bodies on the roads.\n", "In response to the lynching of Leo Frank, Sigmund Livingston founded the Anti-Defamation League (ADL) under the sponsorship of B'nai B'rith. The ADL became the leading Jewish group fighting antisemitism in the United States. The lynching of Leo Frank coincided with and helped spark the revival of the Ku Klux Klan. The Klan disseminated the view that anarchists, communists and Jews were subverting American values and ideals.\n", "From the 1910s, Southern Jewish communities were attacked by the Ku Klux Klan, who objected to Jewish immigration, and often used \"The Jewish Banker\" in their propaganda. In 1915, Leo Frank was lynched in Georgia after being convicted of rape and sentenced to death (his punishment was commuted to life imprisonment). 
The second Ku Klux Klan, which grew enormously in the early 1920s by promoting \"100% Americanism\", focused much of its hatred on Jews. \n" ]
Why have some languages like Spanish kept the pronunciation of the written language, so that it can still be read phonetically, while spoken English deviated so much from the original spelling?
English did not originally have fixed spelling. People would spell words however they thought they sounded, which means that spelling varied from person to person and region to region. Also, because English is made of bits of several languages all smushed together, often retaining parts of the original language's rules, there's no consistency in how words are pronounced or where you even get the spelling from. A man named Samuel Johnson eventually wrote a dictionary in which he spelled the words however he wanted to, and because of how popular it became, that became the fixed spelling. Johnson liked stuffy, fancy spellings rather than simple phonetic ones, and he set the idea of telling people the "correct" way to write instead of telling them how words were normally used. Webster eventually did something similar for American English, although he preferred simplified spellings, hence some of the differences between American and British spelling.
[ "However, some Spanish speakers are concerned that this proposal is unlikely to be adopted, since the Spanish language does not distinguish and from and respectively, and most of its speakers would therefore not even notice a difference in pronunciation.\n", "Peculiar to Spanish (as well as to the neighboring Gascon dialect of Occitan, and attributed to a Basque substratum) was the mutation of Latin initial into whenever it was followed by a vowel that did not diphthongize. The , still preserved in spelling, is now silent in most varieties of the language, although in some Andalusian and Caribbean dialects it is still aspirated in some words. Because of borrowings from Latin and from neighboring Romance languages, there are many -/-doublets in modern Spanish: and (both Spanish for \"Ferdinand\"), and (both Spanish for \"smith\"), and (both Spanish for \"iron\"), and and (both Spanish for \"deep\", but means \"bottom\" while means \"deep\"); (Spanish for \"to make\") is cognate to the root word of (Spanish for \"to satisfy\"), and (\"made\") is similarly cognate to the root word of (Spanish for \"satisfied\").\n", "However, some traits of the Spanish spoken in Spain are exclusive to that country, and for this reason, courses of Spanish as a second language often neglect them, preferring Mexican Spanish in the United States and Canada whilst European Spanish is taught in Europe. Spanish grammar and to a lesser extent pronunciation can vary sometimes between variants.\n", "There are numerous regional particularities and idiomatic expressions within Spanish. In Latin American Spanish, loanwords directly from English are relatively more frequent, and often foreign spellings are left intact. One notable trend is the higher abundance of loan words taken from English in Latin America as well as words derived from English. The Latin American Spanish word for \"computer\" is \"computadora\", whereas the word used in Spain is \"ordenador\", and each word sounds foreign in the region where it is not used. Some differences are due to Iberian Spanish having a stronger French influence than Latin America, where, for geopolitical reasons, the United States influence has been predominant throughout the twentieth century.\n", "Old Spanish had , just as Modern Spanish does, which mostly represents a development of earlier * (still preserved in Portuguese and French), from the Latin . The use of for originated in Old French and spread to Spanish, Portuguese, and English despite the different origins of the sound in each language:\n", "These dialects have important phonological differences compared to varieties of Spanish proper; for example, they have preserved the voiced/voiceless distinction among sibilants as they were in Old Spanish. For this reason, the letter , when written single between vowels, corresponds to a voiced —e.g. ('rose'). Where is not between vowels and is not followed by a voiced consonant, or when it is written double, it corresponds to voiceless —thus ('to sit down'). And due to a phonemic neutralization similar to the \"seseo\" of other dialects, the Old Spanish voiced and the voiceless \"ç\" have merged, respectively, with and —while maintaining the voicing contrast between them. 
Thus ('to make') has gone from the medieval to , and ('town square') has gone from to .\n", "Conventional written English is not phonetic (that is, it is not written as it sounds, due to the history of its spelling, which led to outdated, unintuitive, misleading or arbitrary spelling conventions and spellings of individual words) unlike, for example, German or Spanish, where letters have relatively fixed associated sounds, so that the written text is a fair representation of the spoken words.\n" ]
Why was most popular ancient literature written in verse?
I think there is a flaw in your question, or at least several problematic assumptions about literature, ancient and modern. Let's take your examples. Firstly, the Bible contains significant portions of poetry (Psalms, large portions of the Prophetic books), but it is not all poetry, and it is not even mostly poetry. Homer's Iliad is verse, because it emerges in the context of a pre-literate society. Generally, highly oral cultures tend to maintain a high value on poetry and song, because those forms of composition do indeed lend themselves to memorisation. It's much harder to memorise long prose texts, and it's much less interesting to hear long prose texts "performed". Sticking with ancient Greek literature, though, plenty of non-verse material was produced. Herodotus, Thucydides, etc. The prose genre of history, among others, was "popular". You have just selected a poetic example. A very similar thing could be said about Ovid's Metamorphoses. Yes, it's poetry. But Romans produced plenty of prose literature - philosophy, for-publication epistles, history - that was popular. At least as much as poetry was. Dante, I will skip, since it's much later than your other examples of "ancient" literature, and the context of its composition is a different literary world to antiquity. I think the flaw in your question can be demonstrated simply by reversing the examples: "why was so much prose written in ancient times? The Bible, Livy, Herodotus?" > Verse nowadays seems confined to music, theater and poems. Which is exactly the same. Your examples are all poetry, that's why they are written in poetic forms. Similarly, plays tend to be written in verse as well (in antiquity). Songs, too, obviously, though our knowledge of ancient melodies is severely limited.
[ "The Greeks created poetry before making use of writing for literary purposes. Poems created in the Preclassical period were meant to be sung or recited (writing was little known before the 7th century BC). Most poems focused on myths, legends that were part folktale and part religion. Tragedies and comedies emerged around 600 BC.\n", "Ancient Greek society placed considerable emphasis upon literature. Many authors consider the western literary tradition to have begun with the epic poems \"The Iliad\" and \"The Odyssey\", which remain giants in the literary canon for their skillful and vivid depictions of war and peace, honor and disgrace, love and hatred. Notable among later Greek poets was Sappho, who defined, in many ways, lyric poetry as a genre.\n", "The earliest Greek literature was poetry, and was composed for performance rather than private consumption. The earliest Greek poet known is Homer, although he was certainly part of an existing tradition of oral poetry. Homer's poetry, though it was developed around the same time that the Greeks developed writing, would have been composed orally; the first poet to certainly compose their work in writing was Archilochus, a lyric poet from the mid-seventh century BC. tragedy developed, around the end of the archaic period, taking elements from across the pre-existing genres of late archaic poetry. Towards the beginning of the classical period, comedy began to develop – the earliest date associated with the genre is 486 BC, when a competition for comedy became an official event at the City Dionysia in Athens, though the first preserved ancient comedy is Aristophanes' \"Acharnians\", produced in 425.\n", "The earliest playwright in Western literature with surviving works are the Ancient Greeks. These early plays were for annual Athenian competitions among play writers held around the 5th century BC. Such notables as Aeschylus, Sophocles, Euripides, and Aristophanes established forms still relied on by their modern counterparts. For the ancient Greeks, playwriting involved \"poïesis\", \"the act of making\". This is the source of the English word \"poet\".\n", "The first recorded works in the western literary tradition are the epic poems of Homer and Hesiod. Early Greek lyric poetry, as represented by poets such as Sappho and Pindar, was responsible for defining the lyric genre as it is understood today in western literature. Aesop wrote his \"Fables\" in the 6th century BC. These innovations were to have a profound influence not only on Roman poets, most notably Virgil in his epic poem on the founding of Rome, \"The Aeneid\", but one that flourished throughout Europe.\n", "Greek literature in the archaic period was predominantly poetry, though the earliest prose dates to the sixth century BC. archaic poetry was primarily intended to be performed rather than read, and can be broadly divided into three categories: lyric, rhapsodic, and citharodic. The performance of the poetry could either be private (most commonly in the symposium) or public. \n", "In the ancient world, poetry usually played a far more important part of daily life than it does today. In general, educated Greeks and Romans thought of poetry as playing a much more fundamental part of life than in modern times. Initially in Rome poetry was not considered a suitable occupation for important citizens, but the attitude changed in the second and first centuries BC. In Rome poetry considerably preceded prose writing in date. 
As Aristotle pointed out, poetry was the first sort of literature to arouse people's interest in questions of style. The importance of poetry in the Roman Empire was so strong that Quintilian, the greatest authority on education, wanted secondary schools to focus on the reading and teaching of poetry, leaving prose writings to what would now be referred to as the university stage. Virgil represents the pinnacle of Roman epic poetry. His \"Aeneid\" was produced at the request of Maecenas and tells the story of the flight of Aeneas from Troy and his settlement of the city that would become Rome. Lucretius, in his \"On the Nature of Things\", attempted to explicate science in an epic poem. Some of his science seems remarkably modern, but other ideas, especially his theory of light, are no longer accepted. Later Ovid produced his \"Metamorphoses\", written in dactylic hexameter verse, the meter of epic, attempting a complete mythology from the creation of the earth to his own time. He unifies his subject matter through the theme of metamorphosis. It was noted in classical times that Ovid's work lacked the \"gravitas\" possessed by traditional epic poetry.\n" ]
How long before a nuclear weapon is incapable of producing a nuclear explosion?
So the uranium-235 bombs required about 56 kg of uranium, and for an actual nuclear weapon (not a dirty bomb) roughly 85% of that uranium must still be weapons grade (not decayed). So, using the radioactive decay law N(t) = N_0 e^(-λt), where N(t) = 85% of 56 kg = 47.6 kg, N_0 = 56 kg, and the decay constant of U-235 is λ ≈ 9.72×10^-10 per year, solving for t gives t ≈ 1.67×10^8 years! A loooong time. Easier to disassemble the nukes than to wait for them to expire.
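A minimal sketch of that calculation in Python, assuming U-235's published half-life of about 7.04×10^8 years; the 56 kg mass and the 85% weapons-grade threshold are the answer's own assumptions, not physical constants:

```python
import math

# Assumptions carried over from the answer above (not physical constants):
N0 = 56.0         # kg of U-235 in the original bomb design
THRESHOLD = 0.85  # fraction that must remain undecayed to stay weapons grade

HALF_LIFE_U235 = 7.04e8  # years (published half-life of U-235)
decay_constant = math.log(2) / HALF_LIFE_U235  # ~9.8e-10 per year

# Invert N(t) = N0 * exp(-decay_constant * t) to find when only 85% remains.
t = math.log(1 / THRESHOLD) / decay_constant
print(f"~{t:.2e} years")  # ~1.65e+08 years
```

The answer's figure of about 1.67×10^8 years comes from its slightly different decay constant (9.72×10^-10 per year); either way, the timescale is on the order of a hundred million years.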
[ "Meanwhile, Pryce investigated how long a runaway nuclear chain reaction in an atomic bomb would continue before it blew itself apart. He calculated that since the neutrons produced by fission have an energy of about this corresponded to a speed of . The major part of the chain reaction would be completed in the order of (ten \"shakes\"). From 1 to 10 per cent of the fissile material would fission in this time; but even an atomic bomb with 1 per cent efficiency would release as much energy as 180,000 times its weight in TNT. \n", "BULLET::::- Plans to detonate an American nuclear weapon, 40 miles above the Earth, were halted one minute and 40 seconds before the scheduled explosion. Failure of the tracking system in the Thor missile led to the decision to blow the warhead apart without an atomic blast.\n", "For an explosion to occur, there must be a rapid release of energy – a slow release of energy from uranium nuclei would give a uranium fire, but no explosion. Both sides poured their effort into creating the necessary conditions for a chain reaction.\n", "Fission reactions and subsequent neutron escape happen very quickly; this is important for nuclear weapons, where the objective is to make a nuclear pit release as much energy as possible before it physically explodes. Most neutrons emitted by fission events are prompt: they are emitted effectively instantaneously. Once emitted, the average neutron lifetime (formula_1) in a typical core is on the order of a millisecond, so if the exponential factor formula_3 is as small as 0.01, then in one second the reactor power will vary by a factor of (1 + 0.01), or more than ten thousand. Nuclear weapons are engineered to maximize the power growth rate, with lifetimes well under a millisecond and exponential factors close to 2; but such rapid variation would render it practically impossible to control the reaction rates in a nuclear reactor.\n", "A second, more powerful explosion occurred about two or three seconds after the first; this explosion dispersed the damaged core and effectively terminated the nuclear chain reaction. This explosion also compromised more of the reactor containment vessel and ejected hot lumps of graphite moderator. The ejected graphite and the demolished channels still in the remains of the reactor vessel caught fire on exposure to air, greatly contributing to the spread of radioactive fallout and the contamination of outlying areas.\n", "As a result of the above, the \"breakout time\"—the time in which it would be possible for Iran to make enough material for a single nuclear weapon—will increase from two to three months to one year, according to U.S. officials and U.S. intelligence. An August 2015 report published by a group of experts at Harvard University's Belfer Center for Science and International Affairs concurs in these estimates, writing that under the JCPOA, \"over the next decade would be extended to roughly a year, from the current estimated breakout time of 2 to 3 months\". The Center for Arms Control and Non-Proliferation also accepts these estimates. By contrast, Alan J. 
Kuperman, coordinator of the Nuclear Proliferation Prevention Project at the University of Texas at Austin, disputed the one-year assessment, arguing that under the agreement, Iran's breakout time \"would be only about three months, not much longer than it is today\".\n", "Since explosives detonate at typically 7–8 kilometers per second, or 7–8 meters per millisecond, a 1 millisecond delay in detonation from one side of a nuclear weapon to the other would be longer than the time the detonation would take to cross the weapon. The time precision and consistency of EBWs (0.1 microsecond or less) are roughly enough time for the detonation to move 1 millimeter at most, and for the most precise commercial EBWs this is 0.025 microsecond and about 0.2 mm variation in the detonation wave. This is sufficiently precise for very low tolerance applications such as nuclear weapon explosive lenses.\n" ]
When did people start to identify more with skin color than with language/culture?
Hi there -- you may be interested in [this recent answer](_URL_0_) from u/sowser, in which they go into some detail about how race is constructed through the experience of the Transatlantic slave trade. The whole thing is worth a read, but constructions of race are in part 4.
[ "The historical context for the emergence in the Americas of racial identities based upon skin color was the establishment of colonies which developed a plantation economy dependent upon slave labor. Before that, the British identified themselves as Christians rather than white. \"At the start of the eighteenth century, Indians and Europeans rarely mentioned the color of each other's skins. By midcentury, remarks about skin color and the categorization of peoples by simple color-coded labels (red, white, black) had become commonplace.\"\n", "More recent research has found that human populations over the past 50,000 years have changed from dark-skinned to light-skinned and vice versa. Only 100–200 generations ago, the ancestors of most people living today likely also resided in a different place and had a different skin color. According to Nina Jablonski, darkly pigmented modern populations in South India and Sri Lanka are an example of this, having redarkened after their ancestors migrated down from areas much farther north. Scientists originally believed that such shifts in pigmentation occurred relatively slowly. However, researchers have since observed that changes in skin coloration can happen in as little as 100 generations (~2,500 years), with no intermarriage required. The speed of change is also affected by clothing, which tends to slow it down.\n", "The understanding of the genetic mechanisms underlying human skin color variation is still incomplete, however genetic studies have discovered a number of genes that affect human skin color in specific populations, and have shown that this happens independently of other physical features such as eye and hair color. Different populations have different allele frequencies of these genes, and it is the combination of these allele variations that bring about the complex, continuous variation in skin coloration we can observe today in modern humans. Population and admixture studies suggest a 3-way model for the evolution of human skin color, with dark skin evolving in early hominids in sub-Saharan Africa and light skin evolving independently in Europe and East Asia after modern humans had expanded out of Africa.\n", "Patterns such as those seen in human physical and genetic variation as described above, have led to the consequence that the number and geographic location of any described races is highly dependent on the importance attributed to, and quantity of, the traits considered. Scientists discovered a skin-lighting mutation that partially accounts for the appearance of Light skin in humans (people who migrated out of Africa northward into what is now Europe) which they estimate occurred 20,000 to 50,000 years ago. The East Asians owe their relatively light skin to different mutations. On the other hand, the greater the number of traits (or alleles) considered, the more subdivisions of humanity are detected, since traits and gene frequencies do not always correspond to the same geographical location. Or as put it:\n", "For the most part, the evolution of light skin has followed different genetic paths in European and East Asian populations. Two genes however, KITLG and ASIP, have mutations associated with lighter skin that have high frequencies in both European and East Asian populations. They are thought to have originated after humans spread out of Africa but before the divergence of the European and Asian lineages around 30,000 years ago. 
Two subsequent genome-wide association studies found no significant correlation between these genes and skin color, and suggest that the earlier findings may have been the result of incorrect correction methods and small panel sizes, or that the genes have an effect too small to be detected by the larger studies.\n", "Early humans evolved to have dark skin color around 1.2 million years ago, as an adaptation to a loss of body hair that increased the effects of UV radiation. Before the development of hairlessness, early humans had reasonably light skin underneath their fur, similar to that found in other primates. The most recent scientific evidence indicates that anatomically modern humans evolved in Africa between 200,000 and 100,000 years, and then populated the rest of the world through one migration between 80,000 and 50,000 years ago, in some areas interbreeding with certain archaic human species (Neanderthals, Denisovans, and possibly others). It seems likely that the first modern humans had relatively large numbers of eumelanin-producing melanocytes, producing darker skin similar to the indigenous people of Africa today. As some of these original people migrated and settled in areas of Asia and Europe, the selective pressure for eumelanin production decreased in climates where radiation from the sun was less intense. This eventually produced the current range of human skin color. Of the two common gene variants known to be associated with pale human skin, \"Mc1r\" does not appear to have undergone positive selection, while \"SLC24A5\" has undergone positive selection.\n", "For the most part, the evolution of light skin has followed different genetic paths in Western and Eastern Eurasian populations. Two genes however, KITLG and ASIP, have mutations associated with lighter skin that have high frequencies in Eurasian populations and have estimated origin dates after humans spread out of Africa but before the divergence of the two lineages.\n" ]
If you were to theoretically use a microwave to heat a freeze-dried food product in an environment with 0% humidity, what would the outcome be?
A microwave oven will cause any molecule with a dipole moment to 'vibrate' (rotate back and forth with the oscillating field). That includes water, but also fats and sugars, so the food would still heat up.
[ "Microwave ovens are frequently used for reheating leftover food, and bacterial contamination may not be repressed if the safe temperature is not reached, resulting in foodborne illness, as with all inadequate reheating methods. While microwaves can destroy bacteria as well as conventional ovens, they do not cook as evenly, leading to an increased risk that parts of the food will not reach recommended temperatures.\n", "Susceptors built into packaging create high temperatures in a microwave oven. This is useful for crisping and browning foods, as well as concentrating heat on the oil in a microwave popcorn bag (which is solid at room temperature) in order to melt it rapidly.\n", "The FDA accepts that microwaves can be used to heat food for commercial use, pasteurization and sterilization. The main mechanism of microbial inactivation by microwaves is due to thermal effect; the phenomenon of lethality due to 'non-thermal effect' is controversial, and the mechanisms suggested include selective heating of micro-organisms, electroporation, cell membrane rupture, and cell lysis due to electromagnetic energy coupling. \n", "For example, the uniformity of microwave heat distribution is key parameter in microwave food sterilization, due to the potential danger directly related to human health if the food has not been heated evenly up to desirable temperature for neutralization of possible bacteria population.\n", "Microwave-assisted freeze dryers utilize microwaves to allow for deeper penetration into the sample to expedite the sublimation and heating processes in freeze-drying. This method can be very complicated to set up and run as the microwaves can create an electrical field capable of causing gases in the sample chamber to become plasma. This plasma could potentially burn the sample, so maintaining a microwave strength appropriate for the vacuum levels is imperative. The rate of sublimation in a product can affect the microwave impedance, in which power of the microwave must be changed accordingly.\n", "Microwave heating seems to cause more damage to bacteria than equivalent thermal-only heating. However food reheated in a microwave oven typically reaches lower temperature than classically reheated, therefore pathogens are more likely to survive.\n", "If a large part of the cooking time is spent at temperatures lower than 60 °C (as when the contents of the cooker are slowly cooling over a long period), a danger of food poisoning due to bacterial infection, or toxins produced by multiplying bacteria, arises. It is essential to heat food sufficiently at the outset of vacuum cooking; 60 °C throughout the dish for 10 minutes is sufficient to kill most pathogens of interest, effectively pasteurizing the dish. Some foods, such as kidney beans, fava beans, and many other varieties of beans contain a toxin, phytohaemagglutinin, that requires boiling at 100 °C for at least 10 minutes to break down to safe levels. The best practice is to bring briefly to a rolling boil then put the pot in the flask. This keeps it hottest longest. With big chunks of food, boil a little longer before putting into the flask.\n" ]
It takes 11 minutes of hypoxia for the brain to die, yet you can kill a man by strangling him in much less time. How come?
Strangling someone in a way that puts pressure on the blood vessels of the neck can trigger feedback to the heart that sends it into cardiac arrest (gentle massage of the carotid is used clinically to slow rapid heartbeats). Done that way, it can take effect in only seconds. The person still takes a while to die, but they have no heartbeat.
[ "At the onset of clinical death, consciousness is lost within several seconds. Measurable brain activity stops within 20 to 40 seconds. Irregular gasping may occur during this early time period, and is sometimes mistaken by rescuers as a sign that CPR is not necessary. During clinical death, all tissues and organs in the body steadily accumulate a type of injury called ischemic injury.\n", "Decapitation is quickly fatal to humans and most animals. Unconsciousness occurs within 10 seconds without circulating oxygenated blood (brain ischemia). Cell death and irreversible brain damage occurs after 3–6 minutes with no oxygen, due to excitotoxicity. Some anecdotes suggest more extended persistence of human consciousness after decapitation, but most doctors consider this unlikely and consider such accounts to be misapprehensions of reflexive twitching rather than deliberate movement, since deprivation of oxygen must cause nearly immediate coma and death (\"[Consciousness is] probably lost within 2–3 seconds, due to a rapid fall of intracranial perfusion of blood.\"). A laboratory study testing for humane methods of euthanasia in awake animals used EEG monitoring to measure the time duration following decapitation for rats to become fully unconscious, unable to perceive distress and pain. It was estimated that this point was reached within 3 - 4 seconds, correlating closely with results found in other studies on rodents (2.7 seconds, and 3 - 6 seconds). The same study also suggested that the massive wave which can be recorded by EEG monitoring approximately one minute after decapitation ultimately reflects brain death. Other studies indicate that electrical activity in the brain has been demonstrated to persist for 13 to 14 seconds following decapitation (although it is disputed as to whether such activity implies that pain is perceived), and a 2010 study reported that decapitation of rats generated responses in EEG indices over a period of 10 seconds that have been linked to nociception across a number of different species of animals, including rats. \n", "As a consequence of rapid decompression, oxygen dissolved in the blood empties into the lungs to try to equalize the partial pressure gradient. Once the deoxygenated blood arrives at the brain, humans lose consciousness after a few seconds and die of hypoxia within minutes. Blood and other body fluids boil when the pressure drops below 6.3 kPa, and this condition is called ebullism. The steam may bloat the body to twice its normal size and slow circulation, but tissues are elastic and porous enough to prevent rupture. Ebullism is slowed by the pressure containment of blood vessels, so some blood remains liquid. Swelling and ebullism can be reduced by containment in a pressure suit. The Crew Altitude Protection Suit (CAPS), a fitted elastic garment designed in the 1960s for astronauts, prevents ebullism at pressures as low as 2 kPa. Supplemental oxygen is needed at to provide enough oxygen for breathing and to prevent water loss, while above pressure suits are essential to prevent ebullism. Most space suits use around 30–39 kPa of pure oxygen, about the same as on the Earth's surface. This pressure is high enough to prevent ebullism, but evaporation of nitrogen dissolved in the blood could still cause decompression sickness and gas embolisms if not managed.\n", "Under normal conditions, humans cannot store much oxygen in the body. Prolonged apnea leads to severe lack of oxygen in the blood circulation. 
Permanent brain damage can occur after as little as three minutes and death will inevitably ensue after a few more minutes unless ventilation is restored. However, under special circumstances such as hypothermia, hyperbaric oxygenation, apneic oxygenation (see below), or extracorporeal membrane oxygenation, much longer periods of apnea may be tolerated without severe consequences.\n", "There is debate over the dangers of choke-outs. After 4 to 6 minutes of sustained cerebral anoxia, permanent brain damage will begin to occur, but the long-term effects of a controlled choke-out for less than 4 minutes (as most are applied for mere seconds and released when unconsciousness is achieved) are disputed. However, everyone should note that generally loss of oxygen is never safe and always (even if minimal) causes death of brain cells. There is always risk of short-term memory loss, hemorrhage and harm to the retina, concussions from falling when unconscious, stroke, seizures, permanent brain damage, coma, and even death.\n", "Suicide by hypothermia is a slow death that goes through several stages. Hypothermia begins with mild symptoms, gradually leading to moderate and severe penalties. This may involve shivering, delirium, hallucinations, lack of coordination, sensations of warmth, then finally death. One's organs cease to function, though clinical brain death can be delayed.\n", "BULLET::::1. Duration-induced hypoxia occurs when the breath is held long enough for metabolic activity to reduce the oxygen partial pressure sufficiently to cause loss of consciousness. This is accelerated by exertion, which uses oxygen faster or hyperventilation, which reduces the carbon dioxide level in the blood which in turn may:\n" ]
How far does the effect of time dilation "spread" from an object traveling at relativistic speeds?
No, it doesn't affect your clock at all (except an incredibly tiny amount of gravitational time dilation, which I don't think is what you're talking about and certainly isn't important for the discussion.) In special relativity, time dilation is not a 'field' or localized effect. It's just a thing that happens to objects that are moving *relative to you*. Importantly, from the spaceship's point of view its clock is totally normal and your clock is the one going slow. It doesn't matter how far away the ship is or whether or not you can see it.
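A toy illustration of that point, assuming only special relativity: the dilation factor (the Lorentz factor γ) is a function of relative speed alone, so the ship's distance from you never enters the calculation. The function name and sample speeds below are purely illustrative:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_gamma(v: float) -> float:
    """Time-dilation factor for an object moving at speed v relative to you."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Distance appears nowhere in the formula: a ship at half the speed of light
# is dilated by the same factor whether it is one metre or one light-year away.
print(lorentz_gamma(0.5 * C))   # ~1.155
print(lorentz_gamma(0.99 * C))  # ~7.089
```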
[ "Relativistic time dilation means that a clock (indicating its proper time) that moves relative to an observer is observed to run slower. In fact, time itself in the frame of the moving clock is observed to run slower. This can be read immediately from the adjoining Loedel diagram quite straightforwardly because unit lengths in the two system of axes are identical. Thus, in order to compare reading between the two systems, we can simply compare lengths as they appear on the page: we do not need to consider the fact that unit lengths on each axis are warped by the factor\n", "The transverse Doppler effect and consequently time dilation was directly observed for the first time in the Ives–Stilwell experiment (1938). In modern Ives-Stilwell experiments in heavy ion storage rings using saturated spectroscopy, the maximum measured deviation of time dilation from the relativistic prediction has been limited to ≤ 10. Other confirmations of time dilation include Mössbauer rotor experiments in which gamma rays were sent from the middle of a rotating disc to a receiver at the edge of the disc, so that the transverse Doppler effect can be evaluated by means of the Mössbauer effect. By measuring the lifetime of muons in the atmosphere and in particle accelerators, the time dilation of moving particles was also verified. On the other hand, the Hafele–Keating experiment confirmed the twin paradox, \"i.e.\" that a clock moving from A to B back to A is retarded with respect to the initial clock. However, in this experiment the effects of general relativity also play an essential role.\n", "The current precision with which time dilation is measured (using the RMS test theory), is at the ~10 level. It was shown, that Ives-Stilwell type experiments are also sensitive to the formula_3 isotropic light speed coefficient of the SME, as introduced above. Chou \"et al.\" (2010) even managed to measure a frequency shift of ~10 due to time dilation, namely at everyday speeds such as 36 km/h.\n", "There are real phenomena that cause time dilation similar that of a stasis field. Extremely high velocities approaching light speed or immensely powerful gravitational fields such as those existing near the event horizons of black holes will cause time to progress more slowly. However, there is no known theoretical way of causing such time dilation independently of such conditions.\n", "Theoretically, time dilation would make it possible for passengers in a fast-moving vehicle to advance further into the future in a short period of their own time. For sufficiently high speeds, the effect is dramatic. For example, one year of travel might correspond to ten years on Earth. Indeed, a constant 1 g acceleration would permit humans to travel through the entire known Universe in one human lifetime.\n", "That is, the stronger the gravitational field (and, thus, the larger the acceleration), the more slowly time runs. The predictions of time dilation are confirmed by particle acceleration experiments and cosmic ray evidence, where moving particles decay more slowly than their less energetic counterparts. Gravitational time dilation gives rise to the phenomenon of gravitational redshift and Shapiro signal travel time delays near massive objects such as the sun. The Global Positioning System must also adjust signals to account for this effect.\n", "There is a similar idea of time dilation occurrence in Einstein's theory of special relativity (which deals with neither gravity nor the idea of curved spacetime). 
Such time dilation appears in the Rindler coordinates, attached to a uniformly accelerating particle in a flat spacetime. Such a particle would observe time passing faster on the side it is accelerating towards and more slowly on the opposite side. From this apparent variance in time, Einstein inferred that change in velocity affects the relativity of simultaneity for the particle. Einstein's equivalence principle generalizes this analogy, stating that an accelerating reference frame is locally indistinguishable from an inertial reference frame with a gravity force acting upon it. In this way, the Gravity Probe A was a test of the equivalence principle, matching the observations in the inertial reference frame (of special relativity) of the Earth's surface affected by gravity, with the predictions of special relativity for the same frame treated as accelerating upwards with respect to a free-fall reference, which can be thought of as inertial and gravity-less.\n" ]
During the height of the Cathar movement, what were gender relations like among the Cathar Christians? Did their theology translate into women having a more equal status in society?
Catharism is a well-studied topic, and while you are waiting for fresh responses to your question, it is well worth reviewing [this earlier thread](_URL_0_), led by u/sunagainstgold, which looks at the history and historiography of the supposed heresy, and points out that our understanding of "Catharism" is really a construct imposed by the outsiders who persecuted it, adding that "medievalists today are pretty unanimous that there was no such thing as 'Catharism' in southern France in the 12th-13th century." Sun also touches on the specific area of gender relations in the time and place that you're interested in. Meanwhile, and in the same thread, u/idjet (who was writing a dissertation on the Cathars) pushes back in posts that argue for something closer to the old standard view of Catharism as a distinct set of real beliefs.
[ "Sociologist Linda L. Lindsey says \"Belief in the spiritual equality of the genders (Galatians 3:28) and Jesus' inclusion of women in prominent roles, led the early New Testament church to recognize women's contributions to charity, evangelism and teaching.\" Pliny the Younger, first century, says in his letter to Emperor Trajan that Christianity had people from every age and rank, and refers to \"two women slaves called deaconesses\" . Professor of religion Margaret Y. MacDonald uses a \"social scientific concept of power\" which distinguishes between power and authority to show early Christian women, while lacking overt authority, still retained sufficient indirect power and influence to play a significant role in Christianity's beginnings. \n", "The first wave of feminism in the nineteenth and early twentieth centuries included an increased interest in the place of women in religion. Women who were campaigning for their rights began to question their inferiority both within the church and in other spheres, which had previously been justified by church teachings. Some Christian feminists of this period were Marie Maugeret, Katharine Bushnell, Catherine Booth, Frances Willard, and Elizabeth Cady Stanton.\n", "Historian Geoffrey Blainey writes that women probably comprised the majority in early Christian congregations. This large female membership likely stemmed in part from the early church's informal and flexible organization offering significant roles to women. Another factor is that there appeared to be no division between clergy and laity. Leadership was shared among male and female members according to their \"gifts\" and talents. \"But even more important than church organization was the way in which the Gospel tradition and the Gospels themselves, along with the writing of Paul, could be interpreted as moving women beyond silence and subordination.\" Women may also have been driven from Judaism to Christianity through the taboos and rituals related to the menstrual cycle, and a society preference for male over female children.\n", "Cathars believed that one would be repeatedly reincarnated until one commits to the self-denial of the material world. A man could be reincarnated as a woman and vice versa, thereby rendering gender meaningless. The spirit was of utmost importance to the Cathars and was described as being immaterial and sexless. Because of this belief, the Cathars saw women as equally capable of being spiritual leaders, which undermined the very concept of gender as held by the Catholic Church.\n", "Early feminists such as Elizabeth Cady Stanton concentrated almost solely on \"making women equal to men.\" However, the Christian feminist movement chose to concentrate on the language of religion because they viewed the historic gendering of God as male as a result of the pervasive influence of patriarchy. Rosemary Radford Ruether provided a systematic critique of Christian theology from a feminist and theist point of view. Stanton was an agnostic and Reuther is an agnostic who was born to Catholic parents but no longer practices the faith.\n", "Some Christian feminists believe that the principle of egalitarianism was present in the teachings of Jesus and the early Christian movements, but this is a highly contested view by many feminist scholars who believe that Christianity itself relies heavily on gender roles. 
These interpretations of Christian origins have been criticized by secular feminists for \"anachronistically projecting contemporary ideals back into the first century.\" In the Middle Ages Julian of Norwich and Hildegard of Bingen explored the idea of a divine power with both masculine and feminine characteristics. Feminist works from the fifteenth to seventeenth centuries addressed objections to women learning, teaching and preaching in a religious context. One such proto-feminist was Anne Hutchinson who was cast out of the Puritan colony of Massachusetts for teaching on the dignity and rights of women.\n", "From the very beginning of the early Christian church, women were important members of the movement, although some complain that much of the information in the New Testament on the work of women has been overlooked. Some also argue that many assumed that it had been a \"man's church\" because sources of information stemming from the New Testament church were written and interpreted by men. Recently, scholars have begun looking in mosaics, frescoes, and inscriptions of that period for information about women's roles in the early church.\n" ]
Why does the reflection in a shallow pond change depending on the viewing angle?
The answer you seek lies in [Fresnel Equations](_URL_0_). While Snell's Law (n₁ sin θ₁ = n₂ sin θ₂) will tell you the angle of refraction for a given angle of incidence, you need the Fresnel equations to tell you *how much* light is reflected vs how much is transmitted. Take a look at [this image](_URL_1_), which shows reflectance and transmittance as a function of incident angle (in this case, specifically with regard to light passing between air and glass). You can see in the graph on the left - light in air striking a surface of glass - that when the angle of incidence is close to zero, there is a very low reflection coefficient, which means that *most* of the light that hits the surface will transmit rather than reflect. As the angle of incidence increases, the proportion of light that is reflected only increases. Eventually, the brightness of the reflected light will outstrip the brightness of any transmitted light coming from inside the glass/second material. This same thing applies to seeing a reflection in a pond. At low angles of incidence (looking straight down), very little light reflects, so most of what you see is transmitted from under the water out into the air. At high angles of incidence a lot of light reflects, so that reflected light is most of what you see.
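If you want to put numbers on this, below is a minimal sketch (my own illustration, not part of the quoted answer) that evaluates the Fresnel reflectance for unpolarized light at an air-water interface. The function name and sample angles are made up for the demo; n1 = 1.0 (air) and n2 = 1.33 (water) are the standard textbook refractive indices:

```python
import math

def unpolarized_reflectance(theta_i_deg, n1=1.0, n2=1.33):
    """Fraction of incident light power reflected at a flat interface,
    averaged over s- and p-polarization (Fresnel equations)."""
    ti = math.radians(theta_i_deg)
    # Snell's law, n1*sin(ti) = n2*sin(tt), gives the refraction angle
    tt = math.asin(n1 * math.sin(ti) / n2)
    # Fresnel amplitude coefficients for the two polarizations
    rs = (n1 * math.cos(ti) - n2 * math.cos(tt)) / (n1 * math.cos(ti) + n2 * math.cos(tt))
    rp = (n2 * math.cos(ti) - n1 * math.cos(tt)) / (n2 * math.cos(ti) + n1 * math.cos(tt))
    # Unpolarized reflectance is the mean of the two power reflectances
    return (rs**2 + rp**2) / 2

for deg in (0, 30, 60, 80, 89):
    print(f"incidence {deg:2d} deg -> reflectance {unpolarized_reflectance(deg):.1%}")
```

Running it gives roughly 2% reflectance looking straight down, ~6% at 60°, ~35% at 80°, and ~90% at 89°: exactly the transition described above, where the pond only turns mirror-like at grazing angles.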
[ "A similar effect can be observed by opening one's eyes while swimming just below the water's surface. If the water is calm, the surface outside the critical angle (measured from the vertical) appears mirror-like, reflecting objects below. The region above the water cannot be seen except overhead, where the hemispherical field of view is compressed into a conical field known as \"Snell's window\", whose angular diameter is twice the critical angle (cf. Fig.6). The field of view above the water is theoretically 180° across, but seems less because as we look closer to the horizon, the vertical dimension is more strongly compressed by the refraction; e.g., by Eq.(), for air-to-water incident angles of 90°, 80°, and 70°, the corresponding angles of refraction are 48.6° (\"θ\" in Fig.6), 47.6°, and 44.8°, indicating that the image of a point 20° above the horizon is 3.8° from the edge of Snell's window while the image of a point 10° above the horizon is only 1° from the edge.\n", "Under ideal conditions, an observer looking up at the water surface from underneath sees a perfectly circular image of the entire above-water hemisphere—from horizon to horizon. Due to refraction at the air/water boundary, Snell's window compresses a 180° angle of view above water to a 97° angle of view below water, similar to the effect of a fisheye lens. The brightness of this image falls off to nothing at the circumference/horizon because more of the incident light at low grazing angles is reflected rather than refracted (see Fresnel equations). Refraction is very sensitive to any irregularities in the flatness of the surface (such as ripples or waves), which will cause local distortions or complete disintegration of the image. Turbidity in the water will veil the image behind a cloud of scattered light.\n", "If a sheet of glass is placed in the tank, the depth of water in the tank will be shallower over the glass than elsewhere. The speed of a wave in water depends on the depth, so the ripples slow down as they pass over the glass. This causes the wavelength to decrease. If the junction between the deep and shallow water is at an angle to the wavefront, the waves will refract. In the diagram above, the waves can be seen to bend towards the normal. The normal is shown as a dotted line. The dashed line is the direction that the waves would travel if they had not met the angled piece of glass.\n", "For small angles of incidence (measured from the normal, when sin θ is approximately the same as tan θ), the ratio of apparent to real depth is the ratio of the refractive indexes of air to that of water. But, as the angle of incidence approaches 90, the apparent depth approaches zero, albeit reflection increases, which limits observation at high angles of incidence. Conversely, the apparent height approaches infinity as the angle of incidence (from below) increases, but even earlier, as the angle of total internal reflection is approached, albeit the image also fades from view as this limit is approached.\n", "This reasoning can be easily visualized by dropping two stones in the center of a calm pond. Near the center of the pond, the disturbance created by the two stones will be very complicated. As the disturbance propagates towards the edge of the pond, however, the waves will smooth out and will appear to be nearly circular.\n", "As a progressive wave approaches shore and the water depth decreases, the wave height increases due to wave shoaling. 
As a result, there is an additional wave-induced flux of horizontal momentum. The horizontal momentum equations of the mean flow require this additional wave-induced flux to be balanced: this causes a decrease in the mean water level before the waves break, called a \"setdown\".\n", "If the water's surface is disturbed at one focus of an elliptical water tank, the circular waves of that disturbance, after reflecting off the walls, converge simultaneously to a single point: the \"second focus\". This is a consequence of the total travel length being the same along any wall-bouncing path between the two foci.\n" ]
the process and significance of "making partner" in a law firm
It takes anywhere from 2 to 10 years to make partner at an average law firm (sometimes longer). In order to be considered for a partner position (while working as an associate), you usually need to work very hard and contribute a lot to your firm's business. It helps if you can pull a lot of all-nighters, find new clients through connections, publish articles in law journals to gain prestige for your firm, or show great talent in a particular field of law that your firm works in. In some firms, once you've gained enough experience, you're given a big, important client or assignment. If you succeed, you are promoted to partnership. A partner usually owns a share in the company, which means that they automatically make money every year by receiving a portion of the company's profits. There is also less pressure on a partner to work hard, because they've already "made it". So as a partner, you can take it easy unless you're really into your work or want to make even more money. An associate at a big law firm makes around $100,000 per year. A partner at the same law firm will make anywhere from $300,000 to over a million. In small firms that only have a couple of partners, your last name will also be added to the firm's name. So if your name is Johnson and you work at Anderson and Smith, your firm may be renamed to Anderson, Smith and Johnson once you make partner.
[ "4) Partners are Mutual Agents.The business of firm can be carried on by all or any of them for all. Any partner has authority to bind the firm. Act of any one partner is binding on all the partners. Thus, each partner is ‘agent’ of all the remaining partners. Hence, partners are ‘mutual agents’. Section 18 of the Partnership Act, 1932 says \"Subject to the provisions of this Act, a partner is the agent of the firm for the purpose of the business of the firm\"\n", "In law firms, partners are primarily those senior lawyers who are responsible for generating the firm's revenue. The standards for equity partnership vary from firm to firm. Many law firms have a \"two-tiered\" partnership structure, in which some partners are designated as \"salaried partners\" or \"non-equity\" partners, and are allowed to use the \"partner\" title but do not share in profits. This position is often given to lawyers on track to become equity partners so that they can more easily generate business; it is typically a \"probationary\" status for associates (or former equity partners, who do not generate enough revenue to maintain equity partner status). The distinction between equity and non-equity partners is often internal to the firm and not disclosed to clients, although a typical equity partner could be compensated three times as much as a non-equity partner billing at the same hourly rate. In America, senior lawyers not on track for partnership often use the title \"of counsel\", whilst their equivalents in Britain use the title \"Senior Counsel\".\n", "In law, the relationship that exists when one person or party (the principal) engages another (the agent) to act for him, e.g. to do his work, to sell his goods, to manage his business. The law of agency thus governs the legal relationship in which the agent deals with a third party on behalf of the principal. The competent agent is legally capable of acting for this principal vis-à-vis the third party. Hence, the process of concluding a contract through an agent involves a twofold relationship. On the one hand, the law of agency is concerned with the external business relations of an economic unit and with the powers of the various representatives to affect the legal position of the principal. On the other hand, it rules the internal relationship between principal and agent as well, thereby imposing certain duties on the representative (diligence, accounting, good faith, etc.)\n", "A partnership is a business relationship entered into by a formal agreement between two or more persons or corporations carrying on a business in common. The capital for a partnership is provided by the partners who are liable for the total debts of the firms and who share the profits and losses of the business concern according to the terms of the partnership agreement.\n", "The court defines a \"partnership\" as having three elements: it is a business, it is carried on in common, and it is carried on with a view to profit. On the first, Bastarache J passes the partnership because it is a trade, occupation, or profession as per section 1(1)(a) of the Partnership Act. The partnership is also deemed to carry on business in common because, as long as management and duties are outlined in the partnership agreement, there are no requirements about the length of a partnership relationship, or that the partnership need expand its business in that time—even if, as in this case, the partnership business was fairly idle over the Christmas holidays. 
Last, the court concluded that the pursuit of profit need only be an ancillary purpose to the creation of the partnership.\n", "Law firms are typically organized around partners, who are joint owners and business directors of the legal operation; associates, who are employees of the firm with the prospect of becoming partners; and a variety of staff employees, providing paralegal, clerical, and other support services. An associate may have to wait as long as 11 years before the decision is made as to whether the associate is made a partner. Many law firms have an \"up or out\" policy, integral to the Cravath System, which was pioneered during the early 20th century by partner Paul Cravath of Cravath, Swaine & Moore and became widely adopted, particularly by white-shoe firms; associates who do not make partner are required to resign, and may join another firm, become a solo practitioner, work in-house for a corporate legal department, or change professions. Burnout rates are notably high in the profession.\n", "A partnership agreement may be a contract which formally establishes the terms of a partnership between two legal entities such that they regard each other as 'partners' in a commercial arrangement. However, such expressions may also be merely a means to reflect the desire of the contracting parties to act 'as if' both are in a partnership with common goals. Therefore, it might not be the common law arrangement of a partnership which by definition creates fiduciary duties and which also has 'joint and several' liabilities.\n" ]
the corruption in illinois.
If you're looking for a simple answer, you're not going to find one. The motivations and relationships between corruption, politics, and power form an extremely complex issue that can be interpreted through multiple lenses. Aside from what you can easily read on the relevant Wikipedia articles, there's a deeper story about the history of Illinois politics vis-à-vis the history of Chicago. I was born and raised in Chicago; much of the historical corruption here is a result of the city's rise as an industrial powerhouse in the late 1800s and its foundation as both a nexus of immigration and a labor/union stronghold. Chicago was the first uniquely "American" city - unlike Philadelphia, NYC, and Boston, it does not have its roots in the colonies. As such, it has its own set of unique cultural and sociological identifiers that differentiate it from other urban metropoles. Chicago was a huge destination for Irish, German, Polish, Russian, and Italian immigrants (to name only a few) during the industrial era... these ethnic groupings paved the way for Chicago's [political machine](_URL_0_), which sought to protect the interests of ethnic immigrants by trading votes for patronage. Being the two primary nodes of political power in the state, Springfield (Illinois' capital) and Chicago have historically maintained a very close relationship as well. Although the era of the ethnic political machine has passed, elements of machine politics are still very much prevalent today. While this does not provide an explicit answer for the contemporary corruption in Springfield (Blago, Ryan, et cetera), I feel that the history of Chicago provides a lot of context for why the game of politics is played just a bit differently here than in other places. It really is a fascinating story.
[ "Corruption in Illinois has been a problem from the earliest history of the state. Electoral fraud in Illinois pre-dates the territory's admission to the Union in 1818, Illinois was the third most corrupt state in the country, after New York and California, judging by federal public corruption convictions between 1976-2012.\n", "The Northern District of Illinois, which contains the entire Chicago metropolitan area, accounts for 1531 of the 1828 public corruption convictions in the state between 1976 and 2012, almost 84%, also making it the federal district with the most public corruption convictions in the nation between 1976 and 2012.\n", "Chicago has a long history of political corruption, dating to the incorporation of the city in 1833. It has been a de facto monolithic entity of the Democratic Party from the mid 20th century onward. Research released by the University of Illinois at Chicago reports that Chicago and Cook County's judicial district recorded 45 public corruption convictions for 2013, and 1642 convictions since 1976, when the Department of Justice began compiling statistics. This prompted many media outlets to declare Chicago the \"corruption capital of America\". Gradel and Simpson's \"Corrupt Illinois\" (2015) provides the data behind Chicago's corrupt political culture. They found that a tabulation of federal public corruption convictions make Chicago \"undoubtedly the most corrupt city in our nation\", with the cost of corruption \"at least\" $500 million per year.\n", "Most corruption cases in Chicago are prosecuted by the US Attorney's office, as legal jurisdiction makes most offenses punishable as a federal crime. The current US Attorney for the Northern district of Illinois is Zachary T. Fardon. In a press conference in January 2016, in the wake of the conviction of former Chicago City Hall official, John Bills, for taking 2 million dollars in bribes, Fardon commented \"Public corruption [in Chicago] is a disease and where public officials violate the public trust, we have to hold them accountable. And I do believe that by doing so, it sends a deterrent message.\"\n", "Chicago City Council Chambers has long been the center of public corruption in Chicago. The first conviction of Chicago aldermen and Cook County Commissioners for accepting bribes to rig a crooked contract occurred in 1869. Between 1972 and 1999, 26 current or former Chicago aldermen were convicted for official corruption. Between 1973 and 2012, 31 aldermen were convicted of corruption. Approximately 100 aldermen served in that period, which is a conviction rate of about one-third.\n", "BULLET::::- The Commission monitors the federal prosecutions of Illinois' alleged pay-to-play political insiders and the general public corruption allegations levied against government officials throughout Illinois, including the trials involving former Governor George H. Ryan, his former Chief-of-Staff, Scott Fawell and current Chicago businessman Antoin Rezko.\n", "A 2015 report released by the University of Illinois at Chicago's political science department declared Chicago the \"corruption capital of America\", citing that the Chicago-based Federal Judicial District for Northern Illinois reported 45 public corruption convictions for 2013 and a total of 1,642 convictions for the 38 years since 1976 when the U.S. Department of Justice began compiling the statistics. 
UIC Professor and former Chicago Alderman Dick Simpson noted in the report that \"To end corruption, society needs to do more than convict the guys that get caught. A comprehensive anti-corruption strategy must be forged and carried out over at least a decade. A new political culture in which public corruption is no longer tolerated must be created\".\n" ]
why do phone carriers (verizon, etc) have a say in the release of updates for android phones, but not iphones?
iPhones are a locked ecosystem: Apple produces both the software and the hardware, so updates are pushed out whenever Apple wants, independent of the carrier. Android is an operating system that runs on other people's hardware ... The hardware manufacturer has a deal with the carriers: the carrier sells the manufacturer's phones in exchange for the phone being locked to that carrier and loaded with the carrier's proprietary apps (with backdoor access). So Google launches an update, but nothing happens until the hardware manufacturer configures it for the specific phone and hands it to the carrier, and then the carrier rolls it out ... You are stuck in the middle.
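As a toy illustration of that hand-off chain (every name and number below is hypothetical, purely to make the structural point; real delays vary wildly):

```python
# Hypothetical model: each party in the distribution chain tests and
# customizes the update before passing it along, so hand-offs add delay.
IOS_CHAIN = ["Apple"]                         # one party owns OS + hardware
ANDROID_CHAIN = ["Google", "OEM", "Carrier"]  # three sign-offs before rollout

def weeks_until_rollout(chain, weeks_per_handoff=6):
    """Crude estimate: assume each hand-off costs a fixed testing period."""
    return len(chain) * weeks_per_handoff

print("iOS:     ~", weeks_until_rollout(IOS_CHAIN), "weeks")      # ~6
print("Android: ~", weeks_until_rollout(ANDROID_CHAIN), "weeks")  # ~18
```

The point of the sketch is simply that the delay grows with the number of gatekeepers, and Apple's chain has exactly one.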
[ "The continued top popularity of the iPhone despite growing Android competition was also attributed to Apple being able to deliver iOS updates over the air, while Android updates are frequently impeded by carrier testing requirements and hardware tailoring, forcing consumers to purchase a new Android smartphone to get the latest version of that OS. However, by 2013, Apple's market share had fallen to 13.1%, due to the surging popularity of the Android offerings.\n", "Compared to its primary rival mobile operating system, Apple's iOS, Android updates typically reach various devices with significant delays. Except for devices within the Google Nexus and Pixel brands, updates often arrive months after the release of the new version, or not at all. This was partly due to the extensive variation in hardware in Android devices, to which each upgrade must be specifically tailored, a time- and resource-consuming process. Manufacturers often prioritize their newest devices and leave old ones behind. Additional delays can be introduced by wireless carriers that, after receiving updates from manufacturers, further customize and brand Android to their needs and conduct extensive testing on their networks before sending the upgrade out to users. There are also situations in which upgrades are not possible due to one manufacturing partner not providing necessary updates to drivers.\n", "Android users will no longer be able to receive notifications without an app after Nearby shuts down on December 6, 2018. Proximity marketing campaigns can still be run on smartphones running on Android but they need a compatible app.\n", "Users may customize their phones by installing apps through the Android Market; however, some carriers (AT&T) do not give users the option to install non-market apps onto the Backflip (a policy they have continued with all of their Android phones). This has created some controversy with users, as the non-market apps are often seen as a useful way to expand a phone's capabilities. Users can circumvent this limitation by manually installing 3rd party apps using the tools included with the SDK while the handset is connected to a computer.\n", "Google countered to this investigation that their practices with Android were no different with how Apple, Inc. or Microsoft bundles their own proprietary apps on their respective iOS and Windows Phone, and that OEMs were still able to distribute Android-based phones without the Google suite of apps.\n", "Since these phones run Android 4.0, they are still supported by cloud, communications and social networking services that push the latest versions of their apps, which have in some cases been designed with only the newest hardware in mind. Such applications hog system resources and cause the phones to run slowly. As a remedy, phone owners can replace those apps with less resource-hungry equivalents, or remove them entirely and use a web browser to access the services' sites.\n", "However, to date, many customers have never received the update, and T-Mobile support representatives on Twitter suggested in October 2011 that the OTA update is no longer available as T-Mobile is working on further improvements before making the update available again. No clarification is ever given on the exact reasons why the update is no longer available, or why it was pulled out, or what further improvements are required to be completed before it would be available again. Moreover, there is no option within the interface that allows you to check for updates.\n" ]
Do all species eventually face extinction?
So I hesitate to answer your question, because it enters more of a philosophical realm to truly answer it. What you're asking is basically: 1) Can species remain indefinitely? 2) Are all species subject to extinction? I break these up because they require different answers, which are: 1) sorta, and 2) yeah. When we see radial speciation happen (as with Darwin's finches), at what point does the ancestor cease to exist? This question is philosophical as much as it is biological. Certainly we can use the biological species concept to differentiate between species, but from a strictly taxonomical standpoint, the ancestor and its daughter species are part of the same lineage, and sometimes the distinction between the nodes of the evolutionary tree of life becomes a little arbitrary. A descendant never ceases "being" its ancestor, which is why birds are dinosaurs and humans are lobe-finned fish (if we're being technical). If we consider a species merely a lineage over time, then yes, there have been lineages that have lasted since life began 3.5 billion years ago. For the second answer, all species will eventually go completely extinct, with the exception of the small number of lineages that continue on through their descendants. Van Valen gets at this with the Red Queen hypothesis: at some point the environment in which an organism lives will change to the point where the organism cannot adapt, driving it to extinction. The central idea behind this is **constraint**. Both genetically and physiologically, most species are limited in their capability for diversification, and more often than not, the environment proves too much for the organism, driving it to extinction. These timescales are very large, on the order of millions of years.
[ "There are a variety of causes that can contribute directly or indirectly to the extinction of a species or group of species. \"Just as each species is unique\", write Beverly and Stephen C. Stearns, \"so is each extinction ... the causes for each are varied—some subtle and complex, others obvious and simple\". Most simply, any species that cannot survive and reproduce in its environment and cannot move to a new environment where it can do so, dies out and becomes extinct. Extinction of a species may come suddenly when an otherwise healthy species is wiped out completely, as when toxic pollution renders its entire habitat unliveable; or may occur gradually over thousands or millions of years, such as when a species gradually loses out in competition for food to better adapted competitors. Extinction may occur a long time after the events that set it in motion, a phenomenon known as extinction debt.\n", "Species go extinct constantly as environments change, as organisms compete for environmental niches, and as genetic mutation leads to the rise of new species from older ones. Occasionally biodiversity on Earth takes a hit in the form of a mass extinction in which the extinction rate is much higher than usual. A large extinction-event often represents an accumulation of smaller extinction- events that take place in a relatively brief period of time.\n", "It is known that extinction risk is directly correlated to the size of a species population. Small populations tend to go extinct more frequently than large ones (MacArthur and Wilson, 1967). As large species require more daily resources they are forced to have low population densities, thereby lowering the size of the population in a given area and allowing each individual to have access to enough resources to survive. In order to increase the population size and avoid extinction, large organisms are constrained to have large ranges (see Range (biology)). Thus, the extinction of large species with small ranges becomes inevitable (MacArthur and Wilson, 1967; Brown and Maurer, 1989; Brown and Nicoletto, 1991). This results in the amount of space limiting the overall number of large animals that can be present on a continent, while range size (and risk of extinction) prevents large animals from inhabiting only a small area. These constraints undoubtedly have implications for the species richness patterns for both large and small-bodied organisms, however the specifics have yet to be elucidated.\n", "In extinct animal species for which the cause of extinction is known, over 50% were affected by invasive species. For 20% of extinct animal species, invasive species are the only cited cause of extinction. Invasive species are the second-most important cause of extinction for mammals.\n", "The extinction of one species' wild population can have knock-on effects, causing further extinctions. These are also called \"chains of extinction\". This is especially common with extinction of keystone species.\n", "In the natural course of events, species become extinct for a number of reasons, including but not limited to: extinction of a necessary host, prey or pollinator, inter-species competition, inability to deal with evolving diseases and changing environmental conditions (particularly sudden changes) which can act to introduce novel predators, or to remove prey. 
Recently in geological time, humans have become an additional cause of extinction (many people would say premature extinction) of some species, either as a new mega-predator or by transporting animals and plants from one part of the world to another. Such introductions have been occurring for thousands of years, sometimes intentionally (e.g. livestock released by sailors on islands as a future source of food) and sometimes accidentally (e.g. rats escaping from boats). In most cases, the introductions are unsuccessful, but when an invasive alien species does become established, the consequences can be catastrophic. Invasive alien species can affect native species directly by eating them, competing with them, and introducing pathogens or parasites that sicken or kill them; or indirectly by destroying or degrading their habitat. Human populations may themselves act as invasive predators. According to the \"overkill hypothesis\", the swift extinction of the megafauna in areas such as Australia (40,000 years before present), North and South America (12,000 years before present), Madagascar, Hawaii (AD 300–1000), and New Zealand (AD 1300–1500), resulted from the sudden introduction of human beings to environments full of animals that had never seen them before, and were therefore completely unadapted to their predation techniques.\n", "It's estimated that, because of human activities, current species extinction rates are about 1000 times greater than the background extinction rate (the 'normal' extinction rate that occurs without additional influence). According to the IUCN, out of all species assessed, over 27,000 are at risk of extinction and should be under conservation. Of these, 25% are mammals, 14% are birds, and 40% are amphibians. However, because not all species have been assessed, these numbers could be even higher. A 2019 UN report assessing global biodiversity extrapolated IUCN data to all species and estimated that 1 million species worldwide could face extinction. Yet, because resources are limited, sometimes it's not possible to give all species that need conservation due consideration. Deciding which species to conserve is a function of how close to extinction a species is, whether the species is crucial to the ecosystem it resides in, and how much we care about it.\n" ]
when drinking water, what is the mechanism that decides if the water will go to the bladder or be absorbed?
The water is absorbed either way: essentially all of the water you drink is taken up from your gut into your bloodstream. The water that later ends up in your bladder is excreted by the kidneys as they filter excess water out of your blood.
[ "As water is pumped out, the bladder's walls are sucked inwards by the partial vacuum created, and any dissolved material inside the bladder becomes more concentrated. The sides of the bladder bend inwards, storing potential energy like a spring. Eventually, no more water can be extracted, and the bladder trap is 'fully set' (technically, osmotic pressure rather than physical pressure is the limiting factor).\n", "The swim bladder (or gas bladder) is an internal organ that contributes to the ability of a fish to control its buoyancy, and thus to stay at the current water depth, ascend, or descend without having to waste energy in swimming. The bladder is found only in the bony fishes. In the more primitive groups like some minnows, bichirs and lungfish, the bladder is open to the esophagus and doubles as a lung. It is often absent in fast swimming fishes such as the tuna and mackerel families. The condition of a bladder open to the esophagus is called physostome, the closed condition physoclist. In the latter, the gas content of the bladder is controlled through a rete mirabilis, a network of blood vessels effecting gas exchange between the bladder and the blood.\n", "In physostomous swim bladders, a connection is retained between the swim bladder and the gut, the pneumatic duct, allowing the fish to fill up the swim bladder by \"gulping\" air. Excess gas can be removed in a similar manner.\n", "Physostomes are fishes that have a pneumatic duct connecting the gas bladder to the alimentary canal. This allows the gas bladder to be filled or emptied via the mouth. This not only allows the fish to fill their bladder by gulping air, but also to rapidly ascend in the water without the bladder expanding to bursting point. In contrast, fish without any connection to their gas bladder are called physoclisti.\n", "The swim bladder normally consists of two gas-filled sacs located in the dorsal portion of the fish, although in a few primitive species, there is only a single sac. It has flexible walls that contract or expand according to the ambient pressure. The walls of the bladder contain very few blood vessels and are lined with guanine crystals, which make them impermeable to gases. By adjusting the gas pressurising organ using the gas gland or oval window the fish can obtain neutral buoyancy and ascend and descend to a large range of depths. Due to the dorsal position it gives the fish lateral stability.\n", "Fish with isolated swim bladders are susceptible to barotrauma of ascent when brought to the surface by fishing. The swim bladder is an organ of buoyancy control which is filled with gas extracted from solution in the blood, and which is normally removed by the reverse process. If the fish is brought upwards in the water column faster than the gas can be resorbed, the gas will expand until the bladder is stretched to its elastic limit, and may rupture.\n", "The urinary bladder is a hollow muscular organ in humans and some other animals that collects and stores urine from the kidneys before disposal by urination. In the human the bladder is a hollow muscular, and distensible (or elastic) organ, that sits on the pelvic floor. Urine enters the bladder via the ureters and exits via the urethra. The typical human bladder will hold between 300 and (10.14 and ) before the urge to empty occurs, but can hold considerably more.\n" ]
how can they prove paedophilia, such as rolf harris, decades after the offences?
They usually take statements and try to corroborate them against the accused's testimony and alibi. I watched a case linked to Jimmy Savile where a woman described a wall covered in graffiti in the room where she was raped; years later investigators took newer wallpaper down and the graffiti was still there: names of underage girls and their phone numbers. Then it's usually put forward to a jury for them to decide.
[ "On 30 July 2014, the board of the National Trust of Australia (NSW) voted to remove Rolf Harris from the list after his conviction on 12 charges of indecent assault between 1969 and 1986 and to also withdraw the award. Harris had been among the original 100 Australians selected for the honour in 1997.\n", "The South Australian police prosecuted Harris for publishing immoral and obscene material. The only prosecution witness was a police detective, whose evidence is full of unintended humour: \"Another evidence of indecency was the word 'incestuous'. Detective Volgelsang said: 'I don't know what \"incestuous\" means, but I think there is a suggestion of indecency about it'\". Despite the woeful case, and several distinguished expert witnesses arguing for Harris, he was found guilty and fined £5.\n", "The trial of Harris began on 6 May 2014 at Southwark Crown Court. Seven of the twelve charges involved allegations of a sexual relationship between Harris and one of his daughter's friends. Six charges related to when she was between the ages of 13 and 15, and one when she was 19. Harris denied that he had entered into a sexual relationship with the girl until she was 18. During the trial, a letter Harris had written to the girl's father in 1997 after the end of the relationship was shown in court, saying: \"I fondly imagined that everything that had taken place had progressed from a feeling of love and friendship—there was no rape, no physical forcing, brutality or beating that took place.\"\n", "The paper began a controversial campaign to name and shame alleged paedophiles in July 2000, following the abduction and murder of Sarah Payne in West Sussex. During the trial of her killer Roy Whiting, it emerged that he had a previous conviction for abduction and sexual assault against a child. The paper's decision led to some instances of action being taken against those suspected of being child sex offenders, which included several cases of mistaken identity, including one instance where a paediatrician had her house vandalised, and another where a man was confronted because he had a neck brace similar to one a paedophile was wearing when pictured. The campaign was labelled \"grossly irresponsible\" journalism by the then-chief constable of Gloucestershire, Tony Butler. The paper also campaigned for the introduction of Sarah's Law to allow public access to the sex offender registry.\n", "In 2013 at the Jehovah's Witnesses congregation of Moston, Manchester, England, church elder and convicted paedophile Jonathan Rose, following his completion of a nine-month jail sentence for paedophile offences, was allowed in a series of a public meetings to cross-examine the children he had molested. Rose was finally 'disfellowshipped' after complaints to the police and the Charity Commission for England and Wales.\n", "Denning's first conviction for gross indecency and indecent assault was in 1974, when he was convicted at the Old Bailey, although he was not imprisoned. Before his conviction Denning had been working for Jonathan King's newly founded UK Records, but King sacked him after the guilty verdict.\n", "On 12 February 2016, the Crown Prosecution Service announced that Harris would face seven further indecent assault charges. The offences allegedly occurred between 1971 and 2004 and involve seven complainants who were aged between 12 and 27 at the time. 
Harris pleaded not guilty to all of the charges via videolink at Westminster Magistrates' Court on 17 March and was told to appear at Southwark Crown Court on 14 April. On 14 April, he pleaded not guilty to seven charges of indecent assault and one charge of sexual assault.\n" ]
Is it possible that our universe exists within something else? Where can I find more information about this?
There are a few things you should know: 1. Science is based on **observation**, not conjecture. An idea is worthless if it has no evidence to uphold it. 2. We observe things that are very far away by detecting the light they emit. 3. For things that are very, very, very far away (say, at the other edge of the universe), the light we use to observe them has been travelling for billions of years, close to the age of the universe itself. 4. We can't observe any light that has been travelling for more than about 14 billion years, because the universe came to be about 14 billion years ago; such light would have had to travel for longer than our universe has existed. (Because space has expanded while that light was in transit, the objects that emitted the oldest light we can see are now roughly 46 billion light-years away, but that horizon is still a hard limit.) All these things come together to support one fact: Not only do we not know what's outside our universe, it seems we *CAN'T* know what's outside our universe. It would violate the laws of physics.
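As a rough worked number for point 4 (a back-of-envelope sketch; the rounded constants are standard values, not from the quoted answer):

$$ d = c\,t_0 \approx \left(9.46 \times 10^{12}\ \mathrm{km/yr}\right) \times \left(1.38 \times 10^{10}\ \mathrm{yr}\right) \approx 1.3 \times 10^{23}\ \mathrm{km} \approx 13.8\ \text{billion light-years}, $$

which is the naive light-travel distance to the horizon. As the passages below note, cosmic expansion stretches the corresponding present-day (comoving) distance to roughly 46 billion light-years.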
[ "\"There are clear unknowables in science—reasonable questions that, unless currently accepted laws of nature are violated, we cannot find answers to. One example is the multiverse: the conjecture that our universe is but one among a multitude of others, each potentially with a different set of laws of nature. Other universes lie outside our causal horizon, meaning that we cannot receive or send signals to them. Any evidence for their existence would be circumstantial: for example, scars in the radiation permeating space because of a past collision with a neighboring universe.\"\n", "Both popular and professional research articles in cosmology often use the term \"universe\" to mean \"observable universe\". This can be justified on the grounds that we can never know anything by direct experimentation about any part of the universe that is causally disconnected from the Earth, although many credible theories require a total universe much larger than the observable universe. No evidence exists to suggest that the boundary of the observable universe constitutes a boundary on the universe as a whole, nor do any of the mainstream cosmological models propose that the universe has any physical boundary in the first place, though some models propose it could be finite but unbounded, like a higher-dimensional analogue of the 2D surface of a sphere that is finite in area but has no edge. It is plausible that the galaxies within our observable universe represent only a minuscule fraction of the galaxies in the universe. According to the theory of cosmic inflation initially introduced by its founder, Alan Guth (and by D. Kazanas), if it is assumed that inflation began about 10 seconds after the Big Bang, then with the plausible assumption that the size of the universe before the inflation occurred was approximately equal to the speed of light times its age, that would suggest that at present the entire universe's size is at least 3 times the radius of the observable universe. There are also lower estimates claiming that the entire universe is in excess of 250 times larger (by volume, not by radius) than the observable universe and also higher estimates implying that the universe could have the size of at least 10 Mpc.\n", "The observable universe is one \"causal patch\" of a much larger unobservable universe; other parts of the Universe cannot communicate with Earth yet. These parts of the Universe are outside our current cosmological horizon. In the standard hot big bang model, without inflation, the cosmological horizon moves out, bringing new regions into view. Yet as a local observer sees such a region for the first time, it looks no different from any other region of space the local observer has already seen: its background radiation is at nearly the same temperature as the background radiation of other regions, and its space-time curvature is evolving lock-step with the others. This presents a mystery: how did these new regions know what temperature and curvature they were supposed to have? They couldn't have learned it by getting signals, because they were not previously in communication with our past light cone.\n", "Even though this paper gives the impression that the mystery of where our universe originated is solved, it is not. In his paper Tryon mentions how there is this \"larger space in which our Universe is embedded,\" but this place is given only a very vague and short description. 
Additionally, while Tryon says our universe came into being from an accident of the laws of physics, he does not say what created the laws of physics, leaving the mystery incompletely solved.\n", "The observable universe can be thought of as a sphere that extends outwards from any observation point for 46.5 billion light years, going farther back in time and more redshifted the more distant one looks. Ideally, one can continue to look back all the way to the Big Bang; in practice, however, the farthest away one can look using light and other electromagnetic radiation is the cosmic microwave background (CMB), as anything past that was opaque. Experimental investigations show that the observable universe is very close to isotropic and homogeneous.\n", "A Universe from Nothing: Why There Is Something Rather than Nothing is a non-fiction book by the physicist Lawrence M. Krauss, initially published on January 10, 2012 by Free Press. It discusses modern cosmogony and its implications for the debate about the existence of God. The main theme of the book is how \"we have discovered that all signs suggest a universe that could and plausibly did arise from a deeper nothing—involving the absence of space itself—and which may one day return to nothing via processes that may not only be comprehensible but also processes that do not require any external control or direction.\"\n", "The idea that the properties of our universe are an accident and come from a theory that allows a multiverse of other possibilities is hard to reconcile with the fact that the universe is extraordinarily simple (uniform and flat) on large scales and that elementary particles appear to be described by simple symmetries and interactions. Also, the accidental concept cannot be falsified by an experiment since any future experiments can be viewed as yet other accidental aspects.\n" ]
why do people constantly encourage others to vote, when 90% of the public are uneducated about the topics they are voting about?
You are right, in principle, that people probably shouldn't vote if they don't know what they're doing. But obtaining a basic overview of issues and candidates is not hard--someone who is encouraged to vote is more likely to educate themselves in this way than someone who does not vote. Besides, a great many people who *do* have basic civic knowledge do not vote. There has to be more than just the mechanical action of casting a ballot, you're right. But it makes more sense to encourage people to vote *and* to try to educate them than to discourage them from voting.
[ "One reason cited for why children and the mentally disabled are not permitted to vote in elections is that they are too intellectually immature to understand voting issues. This view is echoed in concerns about the adult voting population, with observers citing concern for a decrease in 'civic virtue' and 'social capital,' reflecting a generalized panic over the political intelligence of the voting population. Although critics have cited 'youth culture' as contributing to the malaise of modern mass media's shallow treatment of political issues, interviews with youth themselves about their political views have revealed a widespread sense of frustration in their political powerlessness as well as a strongly cynical view of the actions of politicians. Several researchers have attempted to explain this sense of cynicism as a way of rationalizing the sense of alienation and legal exclusion of youth in political decision-making.\n", "Despite high levels of political apathy in the United States, however, this collective action problem does not decrease voter turnout as much as some political scientists might expect. It turns out that most Americans believe their political efficacy to be higher than it actually is, stopping millions of Americans from believing their vote does not matter and staying home from the polls. Thus, it appears collective action problems can be resolved not just by tangible benefits to individuals participating in group action, but by a mere belief that collective action will also lead to individual benefits.\n", "Because of the multitude of different and contradictory definitions of expressive voting, recently another effort by political scientists and public choice theorists has been made to explain voting behavior with reference to instrumental benefits received from influencing the outcome of the election. If voters assumed to be rational but also to have altruistic tendencies and some preference for outcomes enhancing the social welfare of others, they will reliably vote in favor of the policies they perceive to be for the common good, rather than for their individual benefit.\n", "Google extensively studied the causes behind low voter turnout in the United States, and argues that one of the key reasons behind lack of voter participation is the so-called \"interested bystander\". According to Google's study, 48.9% of adult Americans can be classified as \"interested bystanders\", as they are politically informed but are reticent to involve themselves in the civic and political sphere. This category is not limited to any socioeconomic or demographic groups. Google theorizes that individuals in this category suffer from voter apathy, as they are interested in political life but believe that their individual effect would be negligible. These individuals often participate politically on the local level, but shy away from national elections.\n", "In order for a person to be an issue voter, they must be able to recognize that there is more than one opinion about a particular issue, have formed a solid opinion about it and be able to relate that to a specific political party. According to Campbell, only 40 to 60 percent of the informed population even perceives party differences, and can thus partake in party voting. This would suggest that it is common for individuals to develop opinions of issues without the aid of a political party.\n", "One example is voting in the United States. 
Even though most people say that voting is important, and a right that should be exercised, every election a sub-optimal percentage of Americans turn out to vote, especially in presidential elections (only 51 percent in the 2000 election). One vote may feel very small in a group of millions, so people may not think a vote is worth the time and effort. If too many people think this way, there is a small voter turnout. Some countries enforce compulsory voting to eliminate this effect.\n", "One of the results of the study indicated that higher voter turnouts occurred when one candidate is disliked to the point of being a threat to voters, while the other is perceived as a hero. However, subjects who liked both candidates were not as likely to vote, even if they liked one significantly more than the other. This also holds true for subjects who disliked both candidates because in these cases voters would be happy or unhappy with either outcome. The studies also indicated that mudslinging in political campaigns effectively increased voter turnout, provided that candidates vilified their opponents tastefully without tarnishing their own image. The study also revealed that if people liked or disliked the candidate at the first encounter, their opinion was difficult to change later on. In fact, Krosnick's studies show that people become more resistant to changing their views as they learn more and more about a candidate. At the start of a campaign, most candidates are viewed in a mildly positive light. After presenting their positions, impressions of candidates solidify and information gained earlier in the campaign tends to have a greater impact. Krosnick calls this model the \"asymmetrical\" model of voting behavior. This suggests that the current marketing strategy for campaigning - saving money for advertising more at the end of a campaign - is completely wrong.\n" ]
How big of a nuclear bomb would be needed to disrupt or destroy a massive wedge Tornado?
That is one of the coolest questions I've ever seen.
[ "As a comparison, the blast yield of the GBU-43 Massive Ordnance Air Blast bomb is 0.011 kt, and that of the Oklahoma City bombing, using a truck-based fertilizer bomb, was 0.002 kt. Most artificial non-nuclear explosions are considerably smaller than even what are considered to be very small nuclear weapons.\n", "BULLET::::- On October 27, 1966, a nuclear-tipped DF-2A missile was launched from Jiuquan and the 20 kilotons yield nuclear warhead exploded at the height of 569 meters over the target in Lop Nor or Base 21 situated 894 km away.\n", "So, one can state that a nuclear bomb has a yield of 15 kt (63×10 or 6.3×10 J); but an actual explosion of a 15 000 ton pile of TNT may yield (for example) 8×10 J due to additional carbon/hydrocarbon oxidation not present with small open-air charges.\n", "From many smaller detonations combined the fallout for the entire launch of a 6,000 short ton (5,500 metric ton) Orion is equal to the detonation of a typical 10 megaton (40 petajoule) nuclear weapon as an air burst, therefore most of its fallout would be the comparatively dilute delayed fallout. Assuming the use of nuclear explosives with a high portion of total yield from fission, it would produce a combined fallout total similar to the surface burst yield of the \"Mike\" shot of Operation Ivy, a 10.4 Megaton device detonated in 1952. The comparison is not quite perfect as, due to its surface burst location, \"Ivy Mike\" created a large amount of early fallout contamination. Historical above-ground nuclear weapon tests included 189 megatons of fission yield and caused average global radiation exposure per person peaking at 0.11 mSv/a in 1963, with a 0.007 mSv/a residual in modern times, superimposed upon other sources of exposure, primarily natural background radiation, which averages 2.4 mSv/a globally but varies greatly, such as 6 mSv/a in some high-altitude cities. Any comparison would be influenced by how population dosage is affected by detonation locations, with very remote sites preferred.\n", "Though dangerous and frequently lethal to humans within the immediate area, the critical mass formed would not be capable of producing a massive nuclear explosion of the type that fission bombs are designed to produce. This is because all the design features needed to make a nuclear warhead cannot arise by chance.\n", "Although neutron bombs are commonly believed to \"leave the infrastructure intact\", with current designs that have explosive yields in the low kiloton range, detonation in (or above) a built-up area would still cause a sizable degree of building destruction, through blast and heat effects out to a moderate radius, albeit considerably less destruction, than when compared to a standard nuclear bomb of the \"exact\" same total energy release or \"yield\".\n", "The 15 megaton (Mt) nuclear explosion far exceeded the expected yield of 4 to 8 Mt (6 Mt predicted), and was about 1,000 times more powerful than each of the atomic bombs dropped on Hiroshima and Nagasaki during World War II. The device was the most powerful nuclear weapon ever detonated by the United States and just under one-third the energy of the Tsar Bomba, the largest ever tested. The scientists and military authorities were shocked by the size of the explosion, and many instruments were destroyed which they had put in place to evaluate the effectiveness of the device.\n" ]
what was building 7? why do conspiracy theorists use it as an example? what is the "real explanation" behind its collapse? what do the theorists think happened?
The World Trade Center was a complex of seven buildings. The twin towers were 1 WTC and 2 WTC. Four other buildings were on the same block, and 7 WTC was across the street. While only the twin towers were struck by planes, their collapse caused substantial, irreparable damage to all the other buildings in the WTC complex, and to other neighboring buildings as well. 3 WTC collapsed immediately from the twin towers essentially falling on it. The same thing happened to a church across the street. Debris that struck 7 WTC didn't cause it to collapse immediately, but started fires that weakened the building, causing it to collapse later that day. Conspiracy theorists think that, because the building was across the street from the WTC and its collapse wasn't *directly* caused by the collapse of the twin towers, its collapse must have been a controlled demolition. They add to this that the building had offices of the SEC and Secret Service, theorizing that someone wanted to set back investigations into potential financial wrongdoing.
[ "The National Institute of Standards and Technology (NIST) concluded the accepted version was more than sufficient to explain the collapse of the buildings. NIST and many scientists refuse to debate conspiracy theorists because they feel it would give those theories unwarranted credibility. Specialists in structural mechanics and structural engineering accept the model of a fire-induced, gravity-driven collapse of the World Trade Center buildings without the use of explosives. As a result, NIST said that it did not perform any test for the residue of explosive compounds of any kind in the debris.\n", "The inquiry into the collapse began just minutes after it occurred. The police investigated three theories: first, that there was an error in structural design, and authorities overseeing planning had been negligent; second, that the cause is related to initial building procedures; third, that it was caused by the construction of the green roof.\n", "On September 5, 2011, \"The Guardian\" published an article entitled, \"9/11 conspiracy theories debunked\". The article noted that unlike the collapse of World Trade Centers 1 and 2 a controlled demolition collapses a building from the bottom and explains that the windows popped because of collapsing floors. The article also said there are conspiracy theories that claim that 7 World Trade Center was also downed by a controlled demolition, that the Pentagon being hit by a missile, that the hijacked planes were packed with explosives and flown by remote control, that Israel was behind the attacks, that a plane headed for the Pentagon was shot down by a missile, that there was insider trading by people who had foreknowledge of the attacks were all false.\n", "Gage dismisses the explanation of the collapse of 7 World Trade Center given by the National Institute of Standards and Technology (NIST), according to which uncontrolled fires and the buckling of a critical support column caused the collapse, and argues that this would not have led to the uniform way the building actually collapsed. \"The rest of the columns could not have been destroyed sequentially so fast to bring this building straight down into its own footprint,\" he says. Gage argues that skyscrapers that have suffered \"hotter, longer lasting and larger fires\" have not collapsed. \"Buildings that fall in natural processes fall to the path of least resistance,\" says Gage, \"they don't go straight down through themselves.\" Architects & Engineers for 9/11 Truth also questions the computer models used by NIST, and argues that evidence pointing to the use of explosives had been omitted in its report on the collapse of 7 WTC.\n", "The collapse of the old 7 World Trade Center is remarkable because it was the first known instance of a tall building collapsing primarily as a result of uncontrolled fires. Based on its investigation, NIST reiterated several recommendations it had made in its earlier report on the collapse of the twin towers, and urged immediate action on a further recommendation: that fire resistance should be evaluated under the assumption that sprinklers are unavailable; and that the effects of thermal expansion on floor support systems be considered. 
Recognizing that current building codes are drawn to prevent loss of life rather than building collapse, the main point of NIST's recommendations is that buildings should not collapse from fire even if sprinklers are unavailable.\n", "NIST released its final report on the collapse of 7 World Trade Center on November 20, 2008. Investigators used videos, photographs and building design documents to come to their conclusions. The investigation could not include physical evidence as the materials from the building lacked characteristics allowing them to be positively identified and were therefore disposed of prior to the initiation of the investigation. The report concluded that the building's collapse was due to the effects of the fires which burned for almost seven hours. The fatal blow to the building came when the 13th floor collapsed, weakening a critical steel support column that led to catastrophic failure, and extreme heat caused some steel beams to lose strength, causing further failures throughout the buildings until the entire structure succumbed. Also cited as a factor was the collapse of the nearby towers, which broke the city water main, leaving the sprinkler system in the bottom half of the building without water.\n", "Investigations by the Federal Emergency Management Agency and the National Institute of Standards and Technology (NIST) have concluded that the buildings collapsed as a result of the impacts of the planes and of the fires that resulted from them. In 2005, a report from NIST concluded that the destruction of the World Trade Center towers was the result of progressive collapse initiated by the jet impacts and the resultant fires. A 2008 NIST report described a similar progressive collapse as the cause of the destruction of the third tallest building located at the World Trade Center site, the 7 WTC. Many mainstream scientists choose not to debate proponents of 9/11 conspiracy theories, saying they do not want to lend them unwarranted credibility. The NIST explanation of collapse is universally accepted by the structural engineering, and structural mechanics research communities.\n" ]
How were the Romans able to field much larger armies than Medieval Europe?
Firstly, keep in mind that the ancient armies you are describing were fielded by what were essentially ancient superpowers. At the time of the Punic Wars, the Carthaginians held an empire that controlled the western Mediterranean, spanning much of North Africa and Spain. Similarly, when you look at the various Persian Empires, they controlled vast territories and had a large population base to draw upon (consider Thermopylae, where the smaller Greek coalition was only able to assemble a few thousand men against Persia's hundred thousand). The size of the ancient powers' armies was larger than those of medieval kingdoms in part because the ancient powers were simply larger than medieval kingdoms. With the Gauls and other European barbarians that the Romans encountered, often the Romans were encountering an entire society of people who were living there (Gaul) or an entire society that had picked up and migrated (Cimbri, Teutones). The size of their armies gets blurred a bit there, since numbers may include civilians as well as soldiers. And lastly, at least in the later Roman Republic and through the Empire, the army consisted in large part of *auxilia,* or foreign auxiliary forces drawn up from allied and conquered territories, which included most of the European countries you are comparing the Roman Army to. So, the reason the Roman Army was so much larger than the English Army is in part due to the fact that the English Army was just one part of the Roman Army.
[ "Until the time of Napoleon, European states employed relatively small armies, made up of both national soldiers and mercenaries. These regulars were highly drilled professional soldiers. Ancien Régime armies could only deploy small field armies due to rudimentary staffs and comprehensive yet cumbersome logistics. Both issues combined to limit field forces to approximately 30,000 men under a single commander.\n", "More recent scholarly works mostly agree that the armies were similarly sized, that the Gothic infantry was more decisive than their cavalry, and that neither the Romans nor the Goths used stirrups until the 6th century. probably brought by the Avars.\n", "It is not clear how large armies were; the Saxons themselves described anything more than 30 warriors as an army. This was about same number as a ship's crew. The general view is that an army would have been made up of a number of warbands under a senior chief, or \"Althing\", and would have been between 200 and 600 strong.\n", "Despite its military strength, the Empire made few efforts to expand its already vast extent; the most notable being the conquest of Britain, begun by emperor Claudius (47), and emperor Trajan's conquest of Dacia (101–102, 105–106). In the 1st and 2nd century, Roman legions were also employed in intermittent warfare with the Germanic tribes to the north and the Parthian Empire to the east. Meanwhile, armed insurrections (e.g. the Hebraic insurrection in Judea) (70) and brief civil wars (e.g. in 68 CE the year of the four emperors) demanded the legions' attention on several occasions. The seventy years of Jewish–Roman wars in the second half of the 1st century and the first half of the 2nd century were exceptional in their duration and violence. An estimated 1,356,460 Jews were killed as a result of the First Jewish Revolt; the Second Jewish Revolt (115–117) led to the death of more than 200,000 Jews; and the Third Jewish Revolt (132–136) resulted in the death of 580,000 Jewish soldiers. The Jewish people never recovered until the creation of the state of Israel in 1948.\n", "In all, both writers agree that the Roman army was about 30,000 strong and the Seleucids about 70,000. However, modern sources state that the two armies might have been not that numerically different and supports that the Romans fielded about 50,000 men as did Antiochus. A popular anecdote regarding the array of the two armies is that Antiochus supposedly asked Hannibal whether his vast and well-armed formation would be enough for the Roman Republic, to which Hannibal tartly replied, \"\"quite enough for the Romans, however greedy they are.\"\"\n", "The size of the army, and therefore of the limitanei, remains controversial. A.H.M. Jones and Warren Treadgold argue that the late Roman army was significantly larger than earlier Roman armies, and Treadgold estimates they had up to 645,000 troops. Karl Strobel denies this, and Strobel estimates that the late Roman army had some 435,000 troops in the time of Diocletian and 450,000 in the time of Constantine I.\n", "By the late Empire, enemy forces in both the East and West were \"sufficiently mobile and sufficiently strong to pierce [the Roman] defensive perimeter on any selected axis of penetration\"; from the 3rd century onwards, both Germanic tribes and Persian armies pierced the frontiers of the Roman Empire. In response, the Roman army underwent a series of changes, more organic and evolutionary than the deliberate military reforms of the Republic and early Empire. 
A stronger emphasis was placed upon ranged combat ability of all types, such as field artillery, hand-held \"ballistae\", archery and darts. Roman forces also gradually became more mobile, with one cavalryman for every three infantrymen, compared to one in forty in the early Empire. Additionally, the Emperor Gallienus took the revolutionary step of forming an entirely cavalry field army, which was kept as a mobile reserve at the city of Milan in northern Italy. It is believed that Gallienus facilitated this concentration of cavalry by stripping the legions of their integral mounted element. A diverse range of cavalry regiments existed, including \"catafractarii\" or \"clibanarii\", \"scutarii\", and legionary cavalry known as \"promoti\". Collectively, these regiments were known as \"equites\". Around 275 AD, the proportion of \"catafractarii\" was also increased. There is some disagreement over exactly when the relative proportion of cavalry increased, whether Gallienus' reforms occurred contemporaneously with an increased reliance on cavalry, or whether these are two distinct events. Alfoldi appears to believe that Gallienus' reforms were contemporaneous with an increase in cavalry numbers. He argues that, by 258, Gallienus had made cavalry the predominant troop type in the Roman army in place of heavy infantry, which dominated earlier armies. According to Warren Treadgold, however, the proportion of cavalry did not change between the early 3rd and early 4th centuries.\n" ]
Did the UK have any options at the start of World War I other than to commit a land army?
Not really; for one thing, with Britain now at war, the staff talks with the French Army came into play, wherein the British would despatch an expeditionary force to assist them in fighting the Germans. Plus, the immediate reason for British involvement was the Invasion of Belgium, so the British could hardly be seen to sit around and do nothing while there was serious fighting taking place across the Channel. You also have to take into account the fact that it took until November 1914 for the Blockade to actually be in place, and it wasn't until March 1915 that it became much stronger (and also borderline illegal). The British did not have the time to wait around for two months doing nothing, while their now-allies the Russians and the French bore the brunt of the fighting. It's also important to consider that a more 'material' contribution by the British, ie actually sending ground forces to fight instead of just relying on their Navy while the French and Russians absorbed casualties, would give the British greater influence in negotiations when the war was over.
[ "Such settlement plans initially began during World War I, with South Australia first enacting legislation in 1915. Similar schemes gained impetus across Australia in February 1916 when a conference of representatives from the Commonwealth and all States was held in Melbourne to consider a report prepared by the Federal Parliamentary War Committee regarding the settlement of returned soldiers on the land. The report focused specifically on a Commonwealth-State cooperative process of selling or leasing Crown land to soldiers who had been demobilised following the end of their service in this first global conflict. The meeting agreed that it was the Commonwealth Government's role to select and acquire land whilst the State government authorities would process applications and grant land allotments.\n", "The Commonwealth government wished to purchase land for resettlement after World War II. Because the States are not required to acquire property on just terms, the Commonwealth government entered into a deal with the New South Wales government, which would purchase the land for a lower price. The Commonwealth government would then pay the New South Wales government in the form of a grant (section 96).\n", "In 1901, following lessons learned from the Second Boer War and diplomatic clashes with the growing German Empire, the United Kingdom sought to reform the British Army to be able to fight a European adversary. This task fell to Secretary of State for War, Richard Haldane who implemented policies known as the Haldane Reforms. The Territorial and Reserve Forces Act 1907 created a new Territorial Force by merging the Yeomanry and the Volunteer Force in 1908. This resulted in the creation of 14 Territorial divisions, including the West Lancashire Division.\n", "During the interwar period, the British Army envisioned that, during future conflicts, the Territorial Army would be used as the basis for future expansion so as to avoid raising a new Kitchener's Army. However, as the 1920s and 1930s wore on, the British Government prioritised funding for the regular army over the territorials, allowing recruitment and equipment levels to languish. Baron Templemore, as part of a House of Lords debate on the Territorial Army, stated that the division - on 1 October 1924 - mustered 338 officers and 7,721 other ranks. Historian David French highlights that \"by April 1937 the Territorial Army had reached less than 80 per cent of its shrunken peacetime establishment\" and \"Its value as an immediate reserve was, therefore, limited.\" Edward Smalley comments that \"48th Divisional Signals operated on an improvised organizational structure\" for most of the 1930s, due to being below 50 per cent strength. He further highlights how the TA, and the division in particular, \"never kept pace with technological developments.\" In 1937, the division was operating just two radio sets on a full-time basis and had to borrow additional units from the 3rd Infantry Division for annual training camps.\n", "Another issue was military aid to the civil power during the industrial unrest that followed the war. The thinly-stretched army was reluctant to become involved, so Churchill proposed using the territorials. Concerns that the force would be deployed to break strikes adversely affected recruitment, which had recommenced on 1 February 1920, resulting in promises that the force would not be so used. 
The government nevertheless deployed the Territorial Force in all but name during the miner's strike of April 1921 by the hasty establishment of the Defence Force. The new organisation relied heavily on territorial facilities and personnel, and its units were given territorial designations. Territorials were specially invited to enlist. Although those that did were required to resign from the Territorial Force, their service in the Defence Force counted towards their territorial obligations, and they were automatically re-admitted to the Territorial Force once their service in the Defence Force was completed.\n", "In 1901, following lessons learned from the Second Boer War and diplomatic clashes with the growing German Empire, the United Kingdom sought to reform the British Army so it would be able to engage in European affairs if required. This task fell to Secretary of State for War, Richard Haldane who implemented several policies known as the Haldane Reforms. As part of these reforms, the Territorial and Reserve Forces Act 1907 created a new Territorial Force by merging the existing Yeomanry and Volunteer Force in 1908. This resulted in the creation of 14 Territorial Divisions, including the South Midland Division.\n", "Although the territorials could not be compelled to serve outside the United Kingdom, they could volunteer to do so, and when large numbers did, units of the Territorial Force began to be posted overseas. By July 1915, the home army had been stripped of all its original territorial divisions, and their places in the home defences were taken by second-line territorial units. The new units competed for equipment with the 'New Army' being raised to expand the army overseas, the reserves of which were also allocated to home defence while they trained, and suffered from severe shortages. The second line's task in home defence was also complicated by having to supply replacement drafts to the first line and the need to train for their own eventual deployment overseas. Most of the second line divisions had departed the country by 1917, and the territorial brigades in those that remained were replaced by brigades of the Training Reserve, created in 1916 by a reorganisation of New Army reserve units.\n" ]
What's the difference between an endosome and lysosome?
My understanding is that all material that's internalised by a cell starts in an endosome. If this material is destined for degradation, it becomes a lysosome. I.e., an endosome is a step on the way to a lysosome.
[ "An endosome is a membrane-bound compartment inside a eukaryotic cell. It is an organelle of the endocytic membrane transport pathway originating from the trans Golgi network. Molecules or ligands internalized from the plasma membrane can follow this pathway all the way to lysosomes for degradation, or they can be recycled back to the plasma membrane, in the endocytic cycle. Molecules are also transported to endosomes from the trans Golgi network and either continue to lysosomes or recycle back to the Golgi apparatus. Endosomes can be classified as early, sorting, or late depending on their stage post internalization. Endosomes represent a major sorting compartment of the endomembrane system in cells. In HeLa cells, endosomes are approximately 500 nm in diameter when fully mature.\n", "There are three different types of endosomes: \"early endosomes\", \"late endosomes\", and \"recycling endosomes\". They are distinguished by the time it takes for endocytosed material to reach them, and by markers such as rabs. They also have different morphology. Once endocytic vesicles have uncoated, they fuse with early endosomes. Early endosomes then \"mature\" into late endosomes before fusing with lysosomes.\n", "These models argue that cilia developed from pre-existing components of the eukaryotic cytoskeleton (which has tubulin and dynein also used for other functions) as an extension of the mitotic spindle apparatus. The connection can still be seen, first in the various early-branching single-celled eukaryotes that have a microtubule basal body, where microtubules on one end form a spindle-like cone around the nucleus, while microtubules on the other end point away from the cell and form the cilium. A further connection is that the centriole, involved in the formation of the mitotic spindle in many (but not all) eukaryotes, is homologous to the cilium, and in many cases \"is\" the basal body from which the cilium grows.\n", "Evolving consensus in the field is that the term \"exosome\" should be strictly applied to an EV of endosomal origin. Since it can be difficult to prove such an origin after an EV has left the cell, variations on the term \"extracellular vesicle\" are often appropriate instead.\n", "There are three main compartments that have pathways that connect with endosomes. More pathways exist in specialized cells, such as melanocytes and polarized cells. For example, in epithelial cells, a special process called transcytosis allows some materials to enter one side of a cell and exit from the opposite side. Also, in some circumstances, late endosomes/MVBs fuse with the plasma membrane instead of with lysosomes, releasing the lumenal vesicles, now called exosomes, into the extracellular medium.\n", "Endosomes provide an environment for material to be sorted before it reaches the degradative lysosome. For example, low-density lipoprotein (LDL) is taken into the cell by binding to the LDL receptor at the cell surface. Upon reaching early endosomes, the LDL dissociates from the receptor, and the receptor can be recycled to the cell surface. The LDL remains in the endosome and is delivered to lysosomes for processing. LDL dissociates because of the slightly acidified environment of the early endosome, generated by a vacuolar membrane proton pump V-ATPase. On the other hand, EGF and the EGF receptor have a pH-resistant bond that persists until it is delivered to lysosomes for their degradation. 
The mannose 6-phosphate receptor carries ligands from the Golgi destined for the lysosome by a similar mechanism.\n", "Another unique identifying feature that differs between the various classes of endosomes is the lipid composition in their membranes. Phosphatidyl inositol phosphates (PIPs), one of the most important lipid signaling molecules, is found to differ as the endosomes mature from early to late. PI(4,5)P is present on plasma membranes, PI(3)P on early endosomes, PI(3,5)P on late endosomes and PI(4)P on the trans Golgi network. These lipids on the surface of the endosomes help in the specific recruitment of proteins from the cytosol, thus providing them an identity. The inter-conversion of these lipids is a result of the concerted action of phosphoinositide kinases and phosphatases that are strategically localized \n" ]
why do children dislike the taste of alcohol so much?
Who likes the taste of alcohol?
[ "Taste preferences and eating behaviors in children are molded at a young age by factors, such as parents' habits and advertisements. One study compared what adults and children considered when choosing beverages. For the most part, adults considered whether beverages had sugar, caffeine, and additives. Some of the 7- to 10-year-old children in the study also mentioned \"additives\" and \"caffeine\", which may be unfamiliar terms to them. This showed the possibility of the parents' influence on their children's decision-making on food choice and eating behaviors.\n", "In the care of paediatric patients, young children may be unwilling to take medication with an unpleasant taste or smell, or due to fear of the unfamiliar. In these cases, the medication is mixed with food or drink to make it more acceptable.\n", "The process of acquiring a taste can involve developmental maturation, genetics (of both taste sensitivity and personality), family example, and biochemical reward properties of foods. Infants are born preferring sweet foods and rejecting sour and bitter tastes, and they develop a preference for salt at approximately 4 months. Neophobia (fear of novelty) tends to vary with age in predictable, but not linear, ways. Babies just beginning to eat solid foods generally accept a wide variety of foods, toddlers and young children are relatively neophobic towards food, and older children, adults, and the elderly are often adventurous eaters with wide-ranging tastes. \n", "Professor and psychiatric Dieter J. Meyerhoff state that the negative effects of alcohol on the body and on health are undeniable, but we should not forget the most important unit in our society that this is affects the family and the children. The family is the main institution in which the child should feel safe and have moral values. If a good starting point is given, it is less likely that when a child becomes an adult, has a mental disorder or is addicted to drugs or alcohol. According to the American Academy of Child and Adolescent Psychiatry (AACAP) children are in a unique position when their parents abuse alcohol. The behavior of a parent is the essence of the problem, because such children do not have and do not receive support from their own family. Seeing changes from happy to angry parents, the children begin to think that they are the reason for these changes. Self-accusation, guilt, frustration, anger arises because the child is trying to understand why this behavior is occurs. Dependence on alcohol has a huge harm in childhood and adolescent psychology in a family environment. Psychologists Michelle L. Kelley and Keith Klostermann describe the effects of parental alcoholism on children, and describe the development and behavior of these children. Alcoholic children often face problems such as behavioral disorders, oppression, crime and attention deficit disorder, and there is a higher risk of internal behavior, such as depression and anxiety. Therefore, they are drinking earlier, drinking alcohol more often and are more likely to grow from moderate to severe alcohol consumption. Young people with parental abuse and parental violence are likely to live in large crime areas, which may have a negative impact on the quality of schools and increase the impact of violence in the area. 
Parental alcoholism, together with the verbal and physical violence children witness at home, heightens children's fears and internalizing symptoms, and increases the likelihood of child aggression and emotional problems.\n", "Conditioned taste aversion sometimes occurs when sickness is merely coincidental to, and not caused by, the substance consumed. For example, a person who becomes very sick after consuming vodka-and-orange-juice cocktails may then become averse to the taste of orange juice, even though the sickness was caused by the over-consumption of alcohol. Under these circumstances, conditioned taste aversion is sometimes known as the \"Sauce-Bearnaise Syndrome\", a term coined by Seligman and Hager.\n", "Some population groups, such as the elderly, may benefit from umami taste because their taste and smell sensitivity is impaired by age and medication. The loss of taste and smell can contribute to poor nutrition, increasing their risk of disease. Some evidence exists to show umami not only stimulates appetite, but also may contribute to satiety.\n", "Research has generally shown striking uniformity across different cultures in the motives behind teen alcohol use. Social engagement and personal enjoyment appear to play a fairly universal role in adolescents' decision to drink throughout separate cultural contexts. Surveys conducted in Argentina, Hong Kong, and Canada have each indicated the most common reason for drinking among adolescents to relate to pleasure and recreation; 80% of Argentinian teens reported drinking for enjoyment, while only 7% drank to improve a bad mood. The most prevalent answers among Canadian adolescents were to \"get in a party mood,\" 18%; \"because I enjoy it,\" 16%; and \"to get drunk,\" 10%. In Hong Kong, female participants most frequently reported drinking for social enjoyment, while males most frequently reported drinking to feel the effects of alcohol.\n" ]
In fiction, the gamma radiation (esp. from nuclear weapons) is usually depicted with a greenish, yellowish colour, and often makes objects glow. Does this occur in real life?
You can also look at the aurora borealis, which is caused by the interaction of fast electrons (beta radiation) with the molecules in the atmosphere. The different molecules give off different colors after being excited by interaction with the electrons. In fiction (movies) you have the problem that the viewer needs to be guided to understand that the object is somehow special without using too much screen time - a glow is easy to do and serves the purpose. Realism is usually not the first priority.
[ "A gamma ray, or gamma radiation (symbol γ or formula_1), is a penetrating electromagnetic radiation arising from the radioactive decay of atomic nuclei. It consists of the shortest wavelength electromagnetic waves and so imparts the highest photon energy. Paul Villard, a French chemist and physicist, discovered gamma radiation in 1900 while studying radiation emitted by radium. In 1903, Ernest Rutherford named this radiation \"gamma rays\" based on their relatively strong penetration of matter; he had previously discovered two less penetrating types of decay radiation, which he named alpha rays and beta rays in ascending order of penetrating power.\n", "A soft gamma repeater (SGR) is an astronomical object which emits large bursts of gamma-rays and X-rays at irregular intervals. It is conjectured that they are a type of magnetar or, alternatively, neutron stars with fossil disks around them.\n", "Gamma (γ) radiation consists of photons with a wavelength less than 3x10 meters (greater than 10 Hz and 41.4 keV). Gamma radiation emission is a nuclear process that occurs to rid an unstable nucleus of excess energy after most nuclear reactions. Both alpha and beta particles have an electric charge and mass, and thus are quite likely to interact with other atoms in their path. Gamma radiation, however, is composed of photons, which have neither mass nor electric charge and, as a result, penetrates much further through matter than either alpha or beta radiation.\n", "BULLET::::- Ionizing radiation is the cause of blue glow surrounding sufficient quantities of strongly radioactive materials in air, e.g. some radioisotope specimens (e.g. radium or polonium), particle beams (e.g. from particle accelerators) in air, the blue flashes during criticality accidents, and the eerie/low brightness \"purple\" to \"blue\" glow enveloping the mushroom cloud during the first several dozen seconds after nuclear explosions near sea level. An effect that has been observed only at night from atmospheric nuclear tests owing to its low brightness, with observers noticing it following the pre-dawn Trinity (test), Upshot-Knothole Annie, and the \"Cherokee\" shot of Operation Redwing. The emission of blue light is often incorrectly attributed to Cherenkov radiation. For more on ionized air glow by nuclear explosions see the near local midnight, high altitude test shot, Bluegill Triple Prime.\n", "A gamma-ray burst is a highly luminous flash associated with an explosion in a distant galaxy and producing gamma rays, the most energetic form of electromagnetic radiation, and often followed by a longer-lived \"afterglow\" emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, and radio).\n", "Natural gamma ray tools are designed to measure gamma radiation in the Earth caused by the disintegration of naturally occurring potassium, uranium, and thorium. Unlike nuclear tools, these natural gamma ray tools emit no radiation. The tools have a radiation sensor, which is usually a scintillation crystal that emits a light pulse proportional to the strength of the gamma ray striking it. This light pulse is then converted to a current pulse by means of a photomultiplier tube (PMT). From the photomultiplier tube, the current pulse goes to the tool's electronics for further processing and ultimately to the surface system for recording. The strength of the received gamma rays is dependent on the source emitting gamma rays, the density of the formation, and the distance between the source and the tool detector. 
The log recorded by this tool is used to identify lithology, estimate shale content, and provide depth correlation for future logs.\n", "The convention that EM radiation known to come from the nucleus is always called \"gamma ray\" radiation is the only convention that is universally respected, however. Many astronomical gamma ray sources (such as gamma ray bursts) are known to be too energetic (in both intensity and wavelength) to be of nuclear origin. Quite often, in high energy physics and in medical radiotherapy, very high energy EMR (in the 10 MeV region)—which is of higher energy than any nuclear gamma ray—is not called X-ray or gamma-ray, but instead by the generic term \"high energy photons.\"\n" ]
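The gamma-ray boundary figures quoted above (wavelength below 3×10^-11 m, frequency above 10^19 Hz, photon energy above 41.4 keV) hang together via the standard photon relations; a brief worked check, assuming only the textbook constants hc ≈ 1.24×10^-6 eV·m and c = 3×10^8 m/s:

```latex
E = \frac{hc}{\lambda} \approx \frac{1.24\times10^{-6}\,\mathrm{eV\,m}}{3\times10^{-11}\,\mathrm{m}} \approx 4.14\times10^{4}\,\mathrm{eV} = 41.4\,\mathrm{keV},
\qquad
\nu = \frac{c}{\lambda} = \frac{3\times10^{8}\,\mathrm{m/s}}{3\times10^{-11}\,\mathrm{m}} = 10^{19}\,\mathrm{Hz}
```

So a wavelength below 3×10^-11 m corresponds to frequencies above 10^19 Hz and photon energies above roughly 41.4 keV, as stated.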
When and why did the "corporation" become the dominant business entity in America, when all of the great gilded age companies were organized as "trusts?"
I'm not sure I agree with the premise. The corporation's popularity spread with the growth of railroads: large undertakings, needing lots of investors, operating over large areas, usually with some years before any dividends would accrue—and most importantly, whose operations were inherently dangerous. Investors naturally sought to be shielded from personal liability for wrongs done by some remote employee. Trusts arose much later, as a way to get around the early restrictions on corporations. State laws often did not allow corporations to own stock in other companies, to operate in more than one state, or to undertake activities (such as owning an office building not entirely for their own use) even slightly peripheral to the powers enumerated in their charters.
[ "Robert E. Wright argues in \"Corporation Nation\" (2014) that the governance of early U.S. corporations, of which over 20,000 existed by the Civil War of 1861-1865, was superior to that of corporations in the late 19th and early 20th centuries because early corporations governed themselves like \"republics\", replete with numerous \"checks and balances\" against fraud and against usurpation of power by managers or by large shareholders. (The term \"robber baron\" became particularly associated with US corporate figures in the Gilded Age - the late 19th century.)\n", "The term \"corporation\" was used as late as the 18th century in England to refer to such ventures as the East India Company or the Hudson's Bay Company: commercial organizations that operated under royal patent to have exclusive rights to a particular area of trade. In the medieval town, however, corporations were a conglomeration of interests that existed either as a development from, or in competition with, guilds. The most notable corporations were in trade and banking.\n", "In the late-nineteenth century, several large businesses, including Standard Oil, had either bought their rivals or had established business arrangements that effectively stifled competition. Many companies followed the model of Standard Oil, which organized itself as a trust in which several component corporations were controlled by one board of directors. While Congress had passed the 1890 Sherman Antitrust Act to provide some federal regulation of trusts, the Supreme Court had limited the power of the act in the case of \"United States v. E. C. Knight Co.\". By 1902, the 100 largest corporations held control of 40 percent of industrial capital in the United States. Roosevelt did not oppose all trusts, but sought to regulate trusts that he believed harmed the public, which he labeled as \"bad trusts.\"\n", "To finance the larger-scale enterprises required during this era, the Stockholder Corporation emerged as the dominant form of business organization. Corporations expanded by combining into trusts, and by creating single firms out of competing firms, known as monopolies.\n", "During the colonial era, British corporations were chartered by the crown to do business in North America. This practice continued in the early United States. They were often granted monopolies as part of the chartering process. For example, the controversial Bank Bill of 1791 chartered a 20-year corporate monopoly for the First Bank of the United States. Although the Federal government has from time to time chartered corporations, the general chartering of corporations has been left to the states. In the late 18th and early 19th centuries, corporations began to be chartered in greater numbers by the states, under general laws allowing for incorporation at the initiative of citizens, rather than through specific acts of the legislature.\n", "The end of the 19th century saw the emergence of holding companies and corporate mergers creating larger corporations with dispersed shareholders. Countries began enacting anti-trust laws to prevent anti-competitive practices and corporations were granted more legal rights and protections.\n", "Christian A. Luhnow founded Trust Companies in March 1904 in response to the rise of the trust banking industry in the United States. Most of the 1,300 United States trust companies then in existence had been formed in the previous 25 years. 
Yet, according to the magazine back then, \"no other financial institutions of comparatively recent growth have made such giant strides and at the same time are so little understood outside of those immediately interested.\" Trusts in the 1800s were used as a business technique during the era's industrial growth, often by large companies.\n" ]
Who decided that north was up?
Great answer to this question from /u/khosikulu [here](_URL_0_). > Historian of cartography (among other things) here. The northward orientation has a great deal to do with the importance of northward orientation to compass navigation. Portolans, and later projections aimed at navigation purposes (e.g., Mercator), made note of latitude and direction much more reliably than longitude, so the coastline was easier to fit to an evolving graticule that way (plus it worked better relative to sun- and star-sighting) while the east-west features were still of uncertain size and distance. Smileyman is right that cartographers often didn't put north at the top before the Renaissance and Enlightenment eras and the flowering of European navigation, and that Claudius Ptolemy is probably a big culprit for why it's north-up and not south-up--the power of classical conventions at that moment is hard to deny. It also helps that we're very clearly north of the Equator in the European Atlantic, so that would be the first area depicted to the terminus of navigation. > > Have a dig in volume 1 of the monumental History of Cartography Project and you may find a bit more. Volume 3 would also discuss some of the specific developments of the Renaissance era but that's still in print only; I'm not even sure Volume 4 is close to release yet. There's some more good threads about this topic listed in the [FAQ](_URL_1_).
[ "BULLET::::- Up is a metaphor for north. The notion that north should always be up and east at the right was established by the Greek astronomer Ptolemy. The historian Daniel Boorstin suggests that perhaps this was because the better-known places in his world were in the northern hemisphere, and on a flat map these were most convenient for study if they were in the upper right-hand corner.\n", "The visible rotation of the night sky around the visible celestial pole provides a vivid metaphor of that direction corresponding to up. Thus the choice of the north as corresponding to up in the northern hemisphere, or of south in that role in the southern, is, prior to worldwide communication, anything but an arbitrary one. On the contrary, it is of interest that Chinese and Islamic culture even considered south as the proper top end for maps.\n", "BULLET::::- In the Northern Hemisphere, north is to the left. The Sun rises in the east (far arrow), culminates in the south (to the right) while moving to the right, and sets in the west (near arrow). Both rise and set positions are displaced towards the north in midsummer and the south in midwinter.\n", "The existence of \"The North\" implies the existence of \"The South\", and the socio-economic divide between North and South. The term \"the North\" has in some contexts replaced earlier usage of the term \"\"the West\"\", particularly in the critical sense, as a more robust demarcation than the terms \"\"West\"\" and \"East\". The North provides some absolute geographical indicators for the location of wealthy countries, most of which are physically situated in the Northern Hemisphere, although, as most countries are located in the northern hemisphere in general, some have considered this distinction equally unhelpful. Modern financial services and technologies are largely developed by Western nations: Bitcoin, most known digital currency is subject to skepticism in the Eastern world whereas Western nations are more open to it.\n", "The word \"north\" is related to the Old High German \"nord\", both descending from the Proto-Indo-European unit *\"ner-\", meaning \"left; below\" as north is to left when facing the rising sun. Similarly, the other cardinal directions are also related to the sun's position.\n", "The \"westerly group\" of the \"northern branch\" migrated along the Rainy River, Red River of the North, and across the northern Great Plains until reaching the Pacific Northwest. Along their migration to the west, they came across many \"miigis\", or cowry shells, as told in the prophecy.\n", "BULLET::::- In the northern hemisphere, north is to the left, the Sun rises in the east (far arrow), culminates in the south (right arrow), while moving to the right and setting in the west (near arrow).\n" ]
what happens when i "zone out" after a few hours of being on the computer?
It's sort of like a vegetative state. You're letting your brain run on auto-pilot without paying attention to the world around you. This is why video games warn you to "take frequent breaks" these days - they reassert your grasp on reality and keep you from trancing too long. My advice? Set something in motion _before_ you get on the computer that will "interrupt" you later and snap you out of it. Like setting an alarm in the other room, or telling a friend to come get you for a walk later.
[ "The \"end\" case is a very simple case that works to simply delay the program to allow the user enough time to check that they have received their change and picked up their item. After 5000 milliseconds (5 seconds) the wait timer is used, up and the program continues back to the start page to wait for another user to come by to begin the process over again.\n", "In computers, entering a sleep state is roughly equivalent to \"pausing\" the state of the machine. When restored, the operation continues from the same point, having the same applications and files open.\n", "Out-of-box experience (OOBE pronounced oo-bee) is the experience a consumer (or user) has when preparing to first use a new product. In relation to computing, this includes the setup process of installing and/or performing initial configuration of a piece of hardware or software on a computer. This generally follows the point-of-sale or the interaction of an expert user.\n", "A user can switch between \"active\" tasks by pressing and holding the Back button, but any application listed may be suspended or terminated under certain conditions, such as a network connection being established or battery power running low. An app running in the background may also automatically suspend, if the user has not opened it for a long duration of time.\n", "After the file becomes resident in the system memory below the 640k DOS boundary, the operator will experience total system slow down as a result of the virus' polymorphic code. Symptoms include video flicker to the screen writing very slowly. Files may seem to \"hang\" even though they will eventually execute correctly. This is just a product of the total system slow down within the system's memory.\n", "The software allows users to set a time from one minute to 24 hours and then chose one of three options to block the internet for the time period they have selected. One option allows users to block the internet connection completely but reconnect to the internet by restarting the computer before the time is completed, while a second option prevents users getting back online until the time is up, even if they restart. The software offers a third option called a blacklist, where users can list websites they wish to block, thus still having access to the internet connection, except for the websites they have listed. It was set up by a group of freelance writers and programmers who claim they needed to develop a tool to help cut online distraction. The application is described as helping students, writers, self-employed workers, businesses, office workers, and teenagers who want to block the internet in order to complete their homework, and as a parental control.\n", "After the warm-up phase, the VM will be stopped on the original host, the remaining dirty pages will be copied to the destination, and the VM will be resumed on the destination host. The time between stopping the VM on the original host and resuming it on destination is called \"down-time\", and ranges from a few milliseconds to seconds according to the size of memory and applications running on the VM. There are some techniques to reduce live migration down-time, such as using probability density function of memory change.\n" ]
how do humans lose 100 hairs per day and still maintain a full head of long locks?
Because if I have 100,000 hairs and lose 100 every day, it would take 1,000 days for me to run out IF I wasn't making more. For hair this is about 3 years, in which time my hair would have grown an additional ~18 inches, or ~6 inches per year, or half an inch per month. At that rate, you would have to be losing more than 10x that amount before you noticed your hair was thinning, and even then your hair's growth rate might cover it up.
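A minimal sketch of the arithmetic in the answer above, using the answer's own round numbers (100,000 hairs, 100 shed per day, roughly half an inch of growth per month — rough estimates, not measured values):

```python
# Rough model of scalp hair turnover using the answer's estimates.
total_hairs = 100_000       # approximate number of scalp hairs
shed_per_day = 100          # typical daily shedding
growth_in_per_month = 0.5   # rough growth rate, inches per month

# If no new hairs grew in, how long until the scalp ran out?
days_to_bald = total_hairs / shed_per_day      # 1,000 days
years_to_bald = days_to_bald / 365             # ~2.7 years

# How much each surviving hair lengthens over that same period.
growth_in = years_to_bald * 12 * growth_in_per_month   # ~16 inches

print(f"~{days_to_bald:.0f} days (~{years_to_bald:.1f} years) to shed every hair,")
print(f"while each remaining hair grows ~{growth_in:.0f} inches in that time.")
```

Since new hairs continuously re-enter the growth cycle, shedding 100 a day never outpaces replacement, which is the answer's point.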
[ "In most people, scalp hair growth will halt due to follicle devitalization after reaching a length of generally two or three feet. Exceptions to this rule can be observed in individuals with hair development abnormalities, which may cause an unusual length of hair growth.\n", "Human hair follicles are very sensitive to the effects of radiation therapy administered to the head, most commonly used to treat cancerous growths within the brain. Hair shedding may start as soon as two weeks after the first dose of radiation and will continue for a couple of weeks. Hair follicles typically enter the resting telogen phase and regrowth should commence 2.5 to 3 months after the hair begins to shed. Regrowth may be sparser after treatment.\n", "During the telogen or resting phase (also known as shedding phase) the follicle remains dormant for one to four months. Ten to fifteen percent of the hairs on one's head are in this phase of growth at any given time. In this phase the epidermal cells lining the follicle channel continue to grow as normal and may accumulate around the base of the hair, temporarily anchoring it in place and preserving the hair for its natural purpose without taxing the body's resources needed during the growth phase.\n", "People have between 100,000 and 150,000 hairs on their head. The number of strands normally lost in a day varies but on average is 100. In order to maintain a normal volume, hair must be replaced at the same rate at which it is lost. The first signs of hair thinning that people will often notice are more hairs than usual left in the hairbrush after brushing or in the basin after shampooing. Styling can also reveal areas of thinning, such as a wider parting or a thinning crown.\n", "The maximum terminal hair length depends on the length of the anagen (period of hair growth) for the individual. Waist-length hair or longer is only possible to reach for people with long anagen. The anagen lasts between 2 and 7 years, for some individuals even longer, and is followed by shorter catagen (transition) and telogen (resting) periods. At any given time, about 85% of hair strands are in anagen. The fibroblast growth factor 5 (FGF5) gene affects the hair cycle in mammals including humans; blocking FGF5 in the human scalp (by applying a herbal extract that blocked FGF5) extends the hair cycle, resulting in less hair fall and increased hair growth.\n", "Normally, about 40 (0–78 in men) hairs reach the end of their resting phase each day and fall out. When more than 100 hairs fall out per day, clinical hair loss (telogen effluvium) may occur. A disruption of the growing phase causes abnormal loss of anagen hairs (anagen effluvium).\n", "Many people underestimate the tensile strength of hair. A single strand can potentially carry a weight of up to 100 grams; in theory, with proper technique, a full head of human hair could eventually hold between 5,600 kg and 8,400 kg (12,345 to 18,518 lbs) without breaking individual hairs or pulling out any follicles. However, the act still hurts, especially for new performers.\n" ]
How was bidirectional travel handled on the transcontinental railroad?
Many, many rail lines have only one track. Passing sidings are installed at regular intervals, and the train orders specify things like "Train #97 take siding at Danville to await passage of eastbound train #38." Unlike earlier railroads, the transcontinental was accompanied along its entire length by telegraph lines. Thus revised train orders—using updated info about the location of other trains on the line—could be given to the conductor at any staffed station. In earlier decades, some lines used "timetable control:" during a certain period only the eastbound train had authority to use a certain stretch of track. In later decades, electric-light signaling systems would be installed showing whether the "track block" ahead was clear, and usually the status of the block beyond that.
[ "In 1861 Congress passed the Land-Grant Telegraph Act which financed the construction of Western Union's transcontinental telegraph lines. Hiram Sibley, Western Union's head, negotiated exclusive agreements with railroads to run telegraph lines along their right-of-way. Eight years before the transcontinental railroad opened, the First Transcontinental Telegraph linked Omaha, Nebraska and San Francisco (and points in-between) on October 24, 1861. The Pony Express ended in just 18 months because it could not compete with the telegraph.\n", "The first contiguous transcontinental rail service on \"The Great American Over-land Route\" between the eastern terminus of the Union Pacific on the Missouri River at Council Bluffs, Iowa/Omaha, Nebraska via Ogden, Utah (CPRR) and Sacramento (WPRR/CPRR) to the San Francisco Bay at the Oakland Wharf was opened over its full length in late 1869. At that time just one daily passenger express train (and one slower mixed train) ran in each direction taking 102 hours to cover that 1,912 miles of the just completed Pacific Railroad route. The first class fare between Council Bluffs/Omaha and Sacramento (the end of the Central Pacific Railroad proper) was $131.50. The additional fares on connecting trains east of Omaha/Council Bluffs on other lines were $20.00 to St. Louis, $22.00 to Chicago, $42.00 to New York, and $45.00 to Boston. Round trip first class 30-day excursion fares between Omaha and San Francisco in 1870 ranged from $170 per person for groups of 20 to 24 to $130 for groups of 50 or more plus $14 for each double sleeping berth. During the decade of the 1870s the schedule was shortened by only 3 hours. In 1881 the scheduled time for the by then 43 mile shorter trip from Council Bluffs to San Francisco was about 98 hours. The first class fare had dropped to $100 with the combined charges for sleeping car accommodations on the Pullman's (UP) and Silver (CP) Palace Cars totaling $14 for a double berth and $52 for a Drawing Room that slept four.\n", "In 1890 the port commissioners began developing a series of switchyards and warehouses on the reclaimed land for use of the San Francisco Belt Railroad, a line of over fifty miles that connected every berth and every pier with the industrial parts of the city and railways of America with all the trade routes of the Pacific. For a decade or more, railcar ferry transfers on steamers were the means of carrying railcars to the transcontinental systems. Later, in 1912, the belt line was driven across Market Street in front of the Ferry Building to link the entire commercial waterfront with railways both south and north and across the continent. The line was extended north along Jefferson Street through the tunnel to link up with U.S. Transport Docks at Fort Mason and south to China Basin.\n", "On September 6, 1869, the Alameda Terminal made history; it was the site of the arrival of the first train via the First Transcontinental Railroad to reach the shores of San Francisco Bay, thus achieving the first coast to coast transcontinental railroad in North America. The transcontinental terminus was switched to the Oakland Pier two months later, on November 8, 1869.\n", "The first transcontinental rail passengers arrived at the Pacific Railroad's original western terminus at the Alameda Terminal on September 6, 1869, where they transferred to the steamer \"Alameda\" for transport across the Bay to San Francisco. 
The road's rail terminus was moved two months later to the Oakland Long Wharf, about a mile to the north, when its expansion was completed and opened for passengers on November 8, 1869. Service between San Francisco and Oakland Pier continued to be provided by ferry.\n", "George J. Gould attempted to assemble a truly transcontinental system in the 1900s. The line from San Francisco, California, to Toledo, Ohio, was completed in 1909, consisting of the Western Pacific Railway, Denver and Rio Grande Railroad, Missouri Pacific Railroad, and Wabash Railroad. Beyond Toledo, the planned route would have used the Wheeling and Lake Erie Railroad (1900), Wabash Pittsburgh Terminal Railway, Little Kanawha Railroad, West Virginia Central and Pittsburgh Railway, Western Maryland Railroad, and Philadelphia and Western Railway, but the Panic of 1907 strangled the plans before the Little Kanawha section in West Virginia could be finished. The Alphabet Route was completed in 1931, providing the portion of this line east of the Mississippi River. With the merging of the railroads, only the Union Pacific Railroad and the BNSF Railway remain to carry the entire route.\n", "The Atlantic shipping terminal was in Colon, Panama. The Pacific terminal was in Panama City. The 48-mile double track railway was the first transcontinental railway and an engineering marvel of the era. Until the opening of the Panama Canal in 1914, the Panama Railway Company carried the heaviest volume of freight per unit length of any railroad in the world. H. W. Corbett and others from Portland would then use it to get back and forth to the connecting ships to and from the East, rather than crossing on mule back. When the transcontinental Union Pacific Railroad to San Francisco was completed on May 10, 1869, this more direct route was then used for shipping and travel connecting to Portland by boat or stage coach.\n" ]
why did adolf hitler consider native americans as equal to “aryans”?
It was a Noble Savage type of thing. He was a member of the Rune Society (sp?), a group that believed in the restoration of the ancient Germanic way. If memory serves, he was president of the group at one time. Some of his initial support for becoming Chancellor may have come from this association.
[ "U.S. pro-Nazi movements such as the Friends of the New Germany and the German-American Bund played no role in Hitler's plans for the country, and received no financial or verbal support from Germany after 1935. However, certain Native American advocate groups, such as the fascist-leaning American Indian Federation, were to be used to undermine the Roosevelt administration from within by means of propaganda. In addition, in an effort to gain Native American support, the Nazis classified the Sioux, and by extension all Native Americans, to be Aryans, a theory echoed in the sympathetic portrayal of the Natives in German westerns of the 1930s such as \"Der Kaiser von Kalifornien\". Nazi propagandists went as far as declaring that Germany would return expropriated land to the Indians, while Goebbels predicted they possessed little loyalty to the U.S. and would rather rebel than fight against Germany. As a boy, Hitler had been an enthusiastic reader of Karl May westerns and he told Albert Speer that he still turned to them for inspiration as an adult when he was in a tight spot; the Karl May westerns contained highly sympathetic portrayals of American Indians.\n", "The treatment of the Native Americans was admired by the Nazis. Nazi expansion eastward was accompanied with invocation of America's colonial expansion westward under the banner of Manifest Destiny, with the accompanying wars on the Native Americans. In 1928, Hitler praised Americans for having \"gunned down the millions of Redskins to a few hundred thousand, and now kept the modest remnant under observation in a cage\" in the course of founding their continental empire. On Nazi Germany's expansion eastward, Hitler stated, \"Our Mississippi [the line beyond which Thomas Jefferson wanted all Indians expelled] must be the Volga, and not the Niger.\"\n", "There was a widespread cultural passion for Native Americans in Germany throughout the 19th and 20th centuries. \"Indianthusiasm\" contributed to the evolution of German national identity. Imagery of Native Americans was appropriated in Nazi propaganda and used both against the US and to promote a \"holistic understanding of Nature\" among Germans, which gained widespread support from various segments of the political spectrum in Germany. The connection between anti-American sentiment and sympathetic feelings toward the underprivileged but authentic Indians is common in Germany, and it was to be found among both Nazi propagandists such as Goebbels and left-leaning writers such as Nikolaus Lenau as well. During the German Autumn in 1977, an anonymous text by a leftist \"Göttinger Mescalero\" spoke positively of the murder of German attorney general Siegfried Buback and used the positive image of \"Stadtindianer\" (Urban Indians) within the radical left.\n", "Hitler and other Nazis praised America's system of institutional racism and they also believed that it was the model which should be followed in their Reich. In particular, they believed that it was the model for the expansion of German territory into the territory of other nations and the elimination of their indigenous inhabitants, for the implementation of racist immigration laws which banned some races, and laws which denied full citizenship to blacks, which they also wanted to implement against Jews. 
Hitler's book \"Mein Kampf\" extolled America as the only contemporary example of a country with racist (\"völkisch\") citizenship statutes in the 1920s, and Nazi lawyers made use of the American models in crafting laws for Nazi Germany. U.S. citizenship laws and anti-miscegenation laws directly inspired the two principal Nuremberg Laws—the Citizenship Law and the Blood Law. Establishing a restrictive entry system for Germany, Hitler admiringly wrote: “The American Union categorically refuses the immigration of physically unhealthy elements, and simply excludes the immigration of certain races.”\n", "H. Glenn Penny states a striking sense, for over two centuries, of affinity among Germans for their ideas of what American Indians are like. According to him, those affinities stem from German polycentrism, notions of tribalism, longing for freedom, and a melancholy sense of \"shared fate.\" In the 17th and 18th centuries, German intellectuals' image of Native American were based on earlier heroes such as those of the Greeks, the Scythians, or the Polish struggle for independence (as in \"Polenschwärmerei\") as a base for their projections. The then popular recapitulation theory on the evolution of ideas was also involved. Such sentiments underwent ups and downs. Philhellenism, rather strong around 1830, faced a setback when the actual Greeks did not fulfill the classic ideals.\n", "The harsh condemnation by Marta Carlson, a Native American activist, of Germans for getting pleasure from \"something their whiteness has participated in destroying\", is not shared by others. As with Irish or Scottish immigrants, the \"whiteness\" of German immigrants was not a given for WASP Americans. Both Germans and Native Americans had to regain some of their customs, as a direct heritage tradition was no longer in place. It is however still somewhat disturbing for both sides when German hobby Indians meet Native German enthusiasts. There are allegations of plastic shamanism versus mockery about Native Americans excluding non-Indians and banning alcohol at their events. German (and Czech) hobbyists' concept of multiculturalism includes the inaleniable right to keep and drink beer in their tipis or kohtes.\n", "Since the military defeat of Nazi Germany by the Allies in 1945, some neo-Nazis have developed a more inclusive definition of \"Aryan\", claiming that the peoples of Western Europe are the closest descendants of the ancient Aryans, with Nordic and Germanic peoples being the most \"racially pure.\"\n" ]
How are gaseous elements harvested and purified?
They aren't harvesting cow burps. They are taking all the livestock dung, putting it in a huge airtight container, and letting bacteria digest the organic matter. Methane is a byproduct of the bacterial digestion process. They then collect the methane, compress it, dry it, and burn it. _URL_0_
[ "The metal can also be isolated by electrolysis of fused caesium cyanide (CsCN). Exceptionally pure and gas-free caesium can be produced by thermal decomposition of caesium azide , which can be produced from aqueous caesium sulfate and barium azide. In vacuum applications, caesium dichromate can be reacted with zirconium to produce pure caesium metal without other gaseous products.\n", "Most synthesis routines yield a mixture of different actinide isotopes in oxide forms, from which isotopes of americium can be separated. In a typical procedure, the spent reactor fuel (e.g. MOX fuel) is dissolved in nitric acid, and the bulk of uranium and plutonium is removed using a PUREX-type extraction (Plutonium–URanium EXtraction) with tributyl phosphate in a hydrocarbon. The lanthanides and remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction, to give, after stripping, a mixture of trivalent actinides and lanthanides. Americium compounds are then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. A large amount of work has been done on the solvent extraction of americium. For example, a 2003 EU-funded project codenamed \"EUROPART\" studied triazines and other compounds as potential extraction agents. A \"bis\"-triazinyl bipyridine complex was proposed in 2009 as such a reagent is highly selective to americium (and curium). Separation of americium from the highly similar curium can be achieved by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone, at elevated temperatures. Both Am and Cm are mostly present in solutions in the +3 valence state; whereas curium remains unchanged, americium oxidizes to soluble Am(IV) complexes which can be washed away.\n", "The inert gases are obtained by fractional distillation of air, with the exception of helium which is separated from a few natural gas sources rich in this element, through cryogenic distillation or membrane separation. For specialized applications, purified inert gas shall be produced by specialized generators on-site. They are often used by chemical tankers and product carriers (smaller vessels). Benchtop specialized generators are also available for laboratories.\n", "Mining and refining pollucite ore is a selective process and is conducted on a smaller scale than for most other metals. The ore is crushed, hand-sorted, but not usually concentrated, and then ground. Caesium is then extracted from pollucite primarily by three methods: acid digestion, alkaline decomposition, and direct reduction.\n", "The resulting fermented liquid, may be drunk, used in recipes, or kept aside in a sealed container for additional time to undergo a secondary fermentation. Because of its acidity the beverage should not be stored in reactive metal containers such as aluminium, copper, or zinc, as these may leach into it over time. The shelf life, unrefrigerated, is up to thirty days. \n", "Inert gases are often used in the chemical industry. In a chemical manufacturing plant, reactions can be conducted under inert gas to minimize fire hazards or unwanted reactions. In such plants and in oil refineries, transfer lines and vessels can be purged with inert gas as a fire and explosion prevention measure. At the bench scale, chemists perform experiments on air-sensitive compounds using air-free techniques developed to handle them under inert gas. 
Helium, neon, argon, krypton, xenon, and radon are inert gases.\n", "Disposal of the gases can be done by burning the flammable ones (carbon monoxide, hydrogen, hydrocarbons), absorbing them in water (ammonia, hydrogen sulfide, sulfur dioxide, chlorine), or reacting them with a suitable reagent.\n" ]
reddit, can you explain to me the relationship between megapixels, resolution, and screen size?
A screen's resolution tells you how many individual pixels or dots of light there are both horizontally and vertically. So 1920x1080 means there are 1920 pixels horizontally on the screen and 1080 vertically. If that's spread out over a screen that's 42 inches across, that just means there's more room for each individual pixel. But you get the same number of pixels whether the screen is 42 inches diagonally or 4 inches diagonally. The same goes for photos. This means that if you're looking at a very large image (lots of pixels) on a screen with a smaller resolution, your screen won't be displaying all the pixels that are in the image. But you can zoom in on a particular part of the image to reveal the extra pixels, so the image you're viewing remains sharp.
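To make the arithmetic in the answer above concrete, here is a minimal Python sketch (the function names and the example screen sizes are my own illustrative choices, not from the answer or its sources) that computes pixels per inch (PPI) and megapixels from a resolution:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal pixel count divided by diagonal inches."""
    diagonal_px = math.hypot(width_px, height_px)
    return diagonal_px / diagonal_in

def megapixels(width_px: int, height_px: int) -> float:
    """Total pixel count expressed in millions."""
    return width_px * height_px / 1e6

# The same 1920x1080 grid (about 2.1 MP) spread over two different screens:
for diag in (42, 4):
    print(f'1920x1080 on a {diag}" screen: {ppi(1920, 1080, diag):.0f} PPI')
print(f"Megapixels: {megapixels(1920, 1080):.1f}")
```

Running it shows the same 1920x1080 grid works out to roughly 52 PPI on a 42-inch screen but about 550 PPI on a 4-inch screen, which is exactly the "more room per pixel" point made above.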
[ "The eye's perception of \"display resolution\" can be affected by a number of factors see image resolution and optical resolution. One factor is the display screen's rectangular shape, which is expressed as the ratio of the physical picture width to the physical picture height. This is known as the aspect ratio. A screen's physical aspect ratio and the individual pixels' aspect ratio may not necessarily be the same. An array of 1280 × 720 on a display has square pixels, but an array of 1024 × 768 on a 16:9 display has oblong pixels.\n", "The graphics display resolution is the width and height dimension of an electronic visual display device, such as a computer monitor, in pixels. Certain combinations of width and height are standardized and typically given a name and an initialism that is descriptive of its dimensions. A higher display resolution in a display of the same size means that displayed photo or video content appears sharper, and pixel art appears smaller.\n", "The PPI/PPCM of a computer display is related to the size of the display in inches/centimetres and the total number of pixels in the horizontal and vertical directions. This measurement is often referred to as dots per inch, though that measurement more accurately refers to the resolution of a computer printer.\n", "The size of a screen is usually described by the length of its diagonal, which is the distance between opposite corners, usually in inches. It is also sometimes called the physical image size to distinguish it from the \"logical image size,\" which describes a screen's display resolution and is measured in pixels.\n", "On 2D displays, such as computer monitors and TVs, the display size (or viewable image size or VIS) is the physical size of the area where pictures and videos are displayed. The size of a screen is usually described by the length of its diagonal, which is the distance between opposite corners, usually in inches. It is also sometimes called the physical image size to distinguish it from the \"logical image size,\" which describes a screen's display resolution and is measured in pixels.\n", "The relative increase in detail resulting from an increase in resolution is better compared by looking at the number of pixels across (or down) the picture, rather than the total number of pixels in the picture area. For example, a sensor of 2560 × 1600 sensor elements is described as \"4 megapixels\" (2560 × 1600= 4,096,000). Increasing to 3200 × 2048 increases the pixels in the picture to 6,553,600 (6.5 megapixels), a factor of 1.6, but the pixels per cm in the picture (at the same image size) increases by only 1.25 times. A measure of the comparative increase in linear resolution is the square root of the increase in area resolution, i.e., megapixels in the entire image.\n", "For example, a 15-inch (38 cm) display whose dimensions work out to 12 inches (30.48 cm) wide by 9 inches (22.86 cm) high, capable of a maximum 1024×768 (or XGA) pixel resolution, can display around 85 PPI/33.46PPCM in both the horizontal and vertical directions. This figure is determined by dividing the width (or height) of the display area in pixels by the width (or height) of the display area in inches. It is possible for a display to have different horizontal and vertical PPI measurements (e.g., a typical 4:3 ratio CRT monitor showing a 1280×1024 mode computer display at maximum size, which is a 5:4 ratio, not quite the same as 4:3). 
The apparent PPI of a monitor depends upon the screen resolution (that is, the number of pixels) and the size of the screen in use; a monitor in 800×600 mode has a lower PPI than does the same monitor in a 1024×768 or 1280×960 mode.\n" ]
why if co2 is only .038% of atmospheric gases, does it have so much impact on global warming?
The most abundant gases - O2, N2, argon - don't absorb heat (infrared radiation). CO2 and H2O do absorb heat, so when you increase them, you are directly increasing the greenhouse effect because you are increasing the most abundant heat-absorbing molecules (CO2 along with H2O). That's very crude, but it's ELI5. Also, as an aside, it's somewhat of a diversion tactic to say, "it's only a small amount, therefore it can't be that important." Skeptics/denialists love this tactic, but it's pretty flawed. Think of it this way: it takes only a very small amount of cyanide (less than 1% of body weight) in my body for me to notice it.
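One standard way to quantify the "small amount, big effect" point is the simplified logarithmic expression for CO2 radiative forcing from Myhre et al. (1998), delta-F = 5.35 * ln(C/C0) W/m^2. The answer above does not use this formula; the sketch below is an illustration under that assumption:

```python
import math

def co2_forcing_wm2(c_ppm: float, c0_ppm: float = 280.0) -> float:
    """Simplified CO2 radiative forcing (Myhre et al. 1998): 5.35 * ln(C/C0) in W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# 0.038% of the atmosphere is ~380 ppm; pre-industrial CO2 was ~280 ppm.
print(f"Forcing at 380 ppm vs 280 ppm: {co2_forcing_wm2(380):.2f} W/m^2")
print(f"Forcing if CO2 doubled to 560 ppm: {co2_forcing_wm2(560):.2f} W/m^2")
```

It yields about 1.6 W/m^2 for the rise from roughly 280 to 380 ppm and about 3.7 W/m^2 for a doubling, which are the commonly quoted figures.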
[ "In the 1998 paper, \"CO2-induced global warming: a skeptic's view of potential climate change\" Idso said: \"Several of these cooling forces have individually been estimated to be of equivalent magnitude, but of opposite sign, to the typically predicted greenhouse effect of a doubling of the air’s CO2 content, which suggests to me that little net temperature change will ultimately result from the ongoing buildup of CO2 in Earth's atmosphere.\"\n", "The executive summary of the WG I Summary for Policymakers report says they are certain that emissions resulting from human activities are substantially increasing the atmospheric concentrations of the greenhouse gases, resulting on average in an additional warming of the Earth's surface. They calculate with confidence that CO has been responsible for over half the enhanced greenhouse effect. They predict that under a \"business as usual\" (BAU) scenario, global mean temperature will increase by about 0.3 °C per decade during the [21st] century. They judge that global mean surface air temperature has increased by 0.3 to 0.6 °C over the last 100 years, broadly consistent with prediction of climate models, but also of the same magnitude as natural climate variability. The unequivocal detection of the enhanced greenhouse effect is not likely for a decade or more.\n", "There have been predictions, and some evidence, that global warming might cause loss of carbon from terrestrial ecosystems, leading to an increase of atmospheric levels. Several climate models indicate that global warming through the 21st century could be accelerated by the response of the terrestrial carbon cycle to such warming. All 11 models in the C4MIP study found that a larger fraction of anthropogenic CO will stay airborne if climate change is accounted for. By the end of the twenty-first century, this additional CO varied between 20 and 200 ppm for the two extreme models, the majority of the models lying between 50 and 100 ppm. The higher CO levels led to an additional climate warming ranging between 0.1° and 1.5 °C. However, there was still a large uncertainty on the magnitude of these sensitivities. Eight models attributed most of the changes to the land, while three attributed it to the ocean. The strongest feedbacks in these cases are due to increased respiration of carbon from soils throughout the high latitude boreal forests of the Northern Hemisphere. One model in particular (HadCM3) indicates a secondary carbon cycle feedback due to the loss of much of the Amazon Rainforest in response to significantly reduced precipitation over tropical South America. While models disagree on the strength of any terrestrial carbon cycle feedback, they each suggest any such feedback would accelerate global warming.\n", "Of most concern in these anthropogenic factors is the increase in CO levels. This is due to emissions from fossil fuel combustion, followed by aerosols (particulate matter in the atmosphere), and the CO released by cement manufacture. 
Other factors, including land use, ozone depletion, animal husbandry (ruminant animals such as cattle produce methane, as do termites), and deforestation, are also of concern in the roles they play—both separately and in conjunction with other factors—in affecting climate, microclimate, and measures of climate variables.\n", "It is commonly agreed upon that global climate fluctuations are strongly dictated by the presence or absence of greenhouse gases in the atmosphere, and carbon dioxide (CO2) is typically considered the most significant greenhouse gas. Observations suggest that large uplifts of mountain ranges globally result in higher chemical erosion rates, thus lowering the volume of CO2 in the atmosphere as well as causing global cooling. This occurs because in regions of higher elevation there are higher rates of mechanical erosion (i.e. gravity, fluvial processes) and there is constant exposure and availability of materials available for chemical weathering. The following is a simplified equation describing the consumption of CO2 during chemical weathering of silicates:\n", "The rapid increase in the concentration of atmospheric CO2 caused by continued anthropogenic emissions of this gas is the fundamental factor driving global climate change. Due to many different causes, global temperatures are projected to increase by 3-5 degrees Celsius (5.4-9 degrees Fahrenheit) within this century.\n", "Fossil fuel emissions, the largest contributor to climate change, cause rising CO2 levels in the Earth's atmosphere. This raises atmospheric temperatures and levels of precipitation in the Northwestern Forested Mountains. Because it is a very mountainous region, weather patterns contribute higher levels of precipitation. This can cause landslides, channel erosion and floods. The warmer air temperatures also create more rain and less snow, something dangerous for many animal and tree species; with less snow pack comes more vulnerability for trees and insects.\n" ]
How much gear would a WWII British Commando carry into the field? Also: beret or helmet?
Not sure about the packs, but the steel helmet protects against shrapnel, not direct hits from bullets. Since the commandos were involved in small raids and unconventional warfare, it's not unreasonable that they would have preferred to save on weight when shrapnel would have been unlikely. See this YouTube video for a steel helmet penetration test: _URL_0_
[ "The Mk III Helmet was a steel military combat helmet first developed for the British Army in 1941 by the Medical Research Council. First worn in combat by British and Canadian troops on D-Day, the Mk III and Mk IV were used alongside the Brodie helmet for the remainder of the Second World War. It is sometimes referred to as the \"turtle\" helmet by collectors, because of its vague resemblance to a turtle shell, as well as the 1944 pattern helmet.\n", "The forces wore the green beret, which was the official headdress of the British Commandos of World War II. Under the name No. 2 (Dutch) Troop, the first Dutch commandos were trained in Achnacarry, Scotland, as part of No. 10 (Inter-Allied) Commando'. After the war, members of No. 2 Dutch troop served in RST (1945–1950). The paratrooper wing of the KST No 1 parachute company wore the Red beret.\n", "The M42 Duperite helmet was a paratrooper helmet issued to Australian paratroopers during WW2. The helmet got its eponymous name from the shock impact-absorbing material it was composed of. It was similar to the first of the British dispatch rider helmets.\n", "The M1C helmet was a variant of the U.S. Army's popular and iconic M1 helmet. Developed in World War II to replace the earlier M2 helmet, it was issued to paratroopers. It was different from the M2 in various ways, most importantly its bails (chinstrap hinges). The M2 had fixed, spot welded \"D\" bales so named for their shape, similar to early M1s. It was found that when sat on or dropped, these bails would snap off. The solution was the implementation of the swivel bail, which could move around and so was less susceptible to breaking.\n", "During World War II some British Army units followed the lead of the Armoured Corps and adopted the beret as a practical headgear, for soldiers who needed a hat that could be worn in confined areas, slept in and could be stowed in a small space when they wore steel helmets.\n", "Initially the Commandos were indistinguishable from the rest of the British Army and volunteers retained their own regimental head-dress and insignia. No. 2 Commando adopted Scottish head-dress for all ranks and No. 11 (Scottish) Commando wore the Tam O'Shanter with a black hackle. The official head-dress of the Middle East Commandos was a bush hat with their own knuckleduster cap badge. This badge was modelled on their issue fighting knife (the Mark I trench knife) which had a knuckleduster for a handle. In 1942 the green Commando beret and the Combined Operations tactical recognition flash were adopted.\n", "The Helmet Steel Airborne Troops is a paratrooper helmet of British origin worn by Paratroopers and Airborne forces. It was introduced in Second World War and was issued to Commonwealth countries in the post-1945 era up to the Falklands War. As with the similarly shaped RAC helmet, it was initially manufactured by Briggs Motor Bodies at Dagenham.\n" ]
how do locksmiths verify that you own a key before making a copy of it?
They don't know you aren't a thief. However, some locks and keys are protected from this with security measures. Locksmiths aren't worried about copying a house key, but if you try to get a key for a high-security lock copied, it's not going to happen. These keys will often be stamped "do not copy" and use multiple rows of pins. The only time a locksmith might want to verify your identity is if you are asking them to get into a locked car, house, or business. They need reasonable assurance that you are authorized to be there and to enter the premises. If it turns out that you are lying and the police get involved, the locksmith has an out if they took reasonable precautions to ensure you were authorized. Source - I'm an amateur locksmith with about 10 years' experience keying, re-pinning, picking, repairing, and bypassing locks. Locks do not keep someone from breaking into a home or vehicle in any case. They are there to keep honest people honest, and to encourage thieves to pick easier targets. If someone wants to steal from you, there is not a lot you can do to stop them short of guarding your property 24x7. However, you can take reasonable precautions so you aren't the low-hanging fruit when a thief wants to break in. And if someone does want to steal, they are not going to use a locksmith who could be a witness against them. They will simply smash and grab, or con their way onto the premises. And being a locksmith, if I wanted to break the law and make a key, I wouldn't need the original key to copy. I can cut my own key for most locks using a few simple tricks for pin lengths. But I wouldn't bother. Most residential locks can be opened in under 10 seconds by an amateur simply by raking.
[ "The State of California prohibits locksmiths from copying keys marked \"Do Not Duplicate\" or \"Unlawful to Duplicate\", provided the key originator's company name and telephone number are included on the key.\n", "In master locksmithing, key relevance is the measurable difference between an original key and a copy made of that key, either from a wax impression or directly from the original, and how similar the two keys are in size and shape. It can also refer to the measurable difference between a key and the size required to fit and operate the keyway of its paired lock.\n", "Experienced locksmiths might be able to figure out a bitting code from looking at a picture of a key. This happened to Diebold voting machines in 2007 after they posted a picture of their master key online, people were able to make their own key to match it and open the machines.\n", "In a 2014 article in the \"Washington Post\" a picture of the special tools was included, and while this picture was later removed it quickly spread. Security researchers have pointed out that it is now possible for anyone to make new master keys and open the locks without any sign of entry, and the locks can now be considered compromised. It is likely that professional thieves have possessed the master keys well before the publication, perhaps by reverse engineering the TSA-approved locks.\n", "A lock is a mechanism that secures buildings, rooms, cabinets, objects, or other storage facilities. A \"smith\" of any type is one who shapes metal pieces, often using a forge or mould, into useful objects or to be part of a more complex structure. Locksmithing, as its name implies, is the assembly and designing of locks and their respective keys.\n", "BULLET::::- \"Lock and key\" systems where there are many types of locks and many types of keys and every type of key opens multiple types of locks. Not only do you need to know the types of the objects involved, but the subset of \"information about a particular key that are relevant to seeing if a particular key opens a particular lock\" is different between different lock types.\n", "Exclusive locks are, as the name implies, exclusively held by a single entity, usually for the purpose of writing to the record. If the locking schema was represented by a list, the holder list would contain only one entry. Since this type of lock effectively blocks any other entity that requires the lock from processing, care must be used to:\n" ]
If a woman's on birth control that stops her menstruating once a month, will she remain fertile for longer?
That makes sense biologically (and is why nulliparity is thought to contribute to earlier menopause). However, this is not always supported by epidemiological studies. [This study](_URL_0_) found that history of oral contraceptive use significantly *increased* the risk for *early* menopause (defined here as prior to 49yo), while parity did not. > Ever-users of OC in our study had a mean age at menopause of 45.7 years (SD 6.00 years) while never-users' mean age at menopause was 47.2 years (SD 5.50 years). It goes on to explain: > It is known that OC use and pregnancy disrupt the ovulation cycle. Whether this contributes to a later age at natural menopause is disputed. We found that ever-use of OC was significantly associated with early rather than later natural menopause. We have no obvious explanation for this finding, thus it is important that others investigate this. A Dutch cohort study found that ever-users of OC had a significantly later natural menopause than never-users (mean 51.2 years, SD 3.29 vs 50.1 years, SD 4.16; P < .01). In contrast to these findings, the Massachusetts Women's Health Study did not find an association between ever-use or duration of OC use and age at menopause.
[ "Menstrual regulation allows a woman to terminate within 10 weeks of her last period, but unsafe methods to terminate pregnancy are widespread. In response, a hotline was created for women to get information about fertility control, including menstrual regulation.\n", "Return of menstruation following childbirth varies widely among individuals. This return does not necessarily mean a woman has begun to ovulate again. The first postpartum ovulatory cycle might occur before the first menses following childbirth or during subsequent cycles. A strong relationship has been observed between the amount of suckling and the contraceptive effect, such that the combination of feeding on demand rather than on a schedule and feeding only breast milk rather than supplementing the diet with other foods will greatly extend the period of effective contraception. In fact, it was found that among the Hutterites, more frequent bouts of nursing, in addition to maintenance of feeding in the night hours, led to longer lactational amenorrhea. An additional study that references this phenomenon cross-culturally was completed in the United Arab Emirates (UAE) and has similar findings. Mothers who breastfed exclusively longer showed a longer span of lactational amenorrhea, ranging from an average of 5.3 months in mothers who breastfed exclusively for only two months to an average of 9.6 months in mothers who did so for six months. Another factor shown to affect the length of amenorrhea was the mother's age. The older a woman was, the longer period of lactational amenorrhea she demonstrated. The same increase in length was found in multiparous women as opposed to primiparous. With regards to the use of breastfeeding as a form of contraception, most women who choose not to breastfeed will resume regular menstrual cycling within 1.5 to 2 months following parturition. Furthermore, the closer a woman's behavior is to the Seven Standards of ecological breastfeeding, the later (on average) her cycles will return. Overall, there are many factors including frequency of nursing, mother's age, parity, and introduction of supplemental foods into the infant's diet among others which can influence return of fecundity following pregnancy and childbirth and thus the contraceptive benefits of lactational amenorrhea are not always reliable but are evident and variable among women. Couples who desire spacing of 18 to 30 months between children can often achieve this through breastfeeding alone, though this is not a foolproof method as return of menses is unpredictable and conception can occur in the weeks preceding the first menses. \n", "During a woman's menstrual cycle, the endometrium thickens in preparation for potential pregnancy. After ovulation, if the ovum is not fertilized and there is no pregnancy, the built-up uterine tissue is not needed and thus shed.\n", "The menstrual cycle occurs due to the rise and fall of hormones. This cycle results in the thickening of the lining of the uterus, and the growth of an egg, (which is required for pregnancy). The egg is released from an ovary around day fourteen in the cycle; the thickened lining of the uterus provides nutrients to an embryo after implantation. If pregnancy does not occur, the lining is released in what is known as menstruation.\n", "On the other hand, not every girl follows the typical pattern, and some girls ovulate before the first menstruation. 
Although unlikely, it is possible for a girl who has engaged in sexual intercourse shortly before her menarche to conceive and become pregnant, which would delay her menarche until after the end of the pregnancy. This goes against the widely held assumption that a woman cannot become pregnant until after menarche. A young age at menarche is not correlated with a young age at first sexual intercourse.\n", "Menstruation can be delayed by the use of progesterone or progestins. For this purpose, oral administration of progesterone or progestin during cycle day 20 has been found to effectively delay menstruation for at least 20 days, with menstruation starting after 2–3 days have passed since discontinuing the regimen.\n", "If menstruation does not resume spontaneously following lifestyle changes, the patient should be monitored for thyroid function, HPO axis function, and concentrations of ACTH, cortisol, and prolactin every 4-5 months.\n" ]
Are there any biographies available about Native North Americans who lived before 1492?
Because of the strong oral traditions in many Nations, it is difficult to find records of individuals, and the ones who do get recorded are those who have done something great, and they get so wrapped into lessons and tales that it becomes hard to tell if the person existed at all. Were you looking for the story of someone's life, or for how someone would have lived before European contact?
[ "The History of the Indian Tribes of North America is a three-volume collection of Native American biographies and accompanying lithograph portraits originally published in the United States from 1836 to 1844 by Thomas McKenney and James Hall. The majority of the portraits were first painted in oil by Charles Bird King. McKenney was working as the US Superintendent of Indian Trade and would head the Office of Indian Affairs, both then within the War Department. He planned publication of the biographical project to be supported by private subscription, as was typical for publishing of the time.\n", "The expeditions of French explorer Jacques Cartier in the 1540s made the first written records of the Native Americans in North America. French explorers and fishermen had traded in the region near the mouth of the St. Lawrence River estuary a decade before then for valuable furs. Cartier wrote of encounters with a people later classified as the St. Lawrence Iroquoians, also known as the \"Stadaconan\" or \"Laurentian\" people, who occupied several fortified villages, including \"Stadacona\" and \"Hochelaga\". Cartier recorded an ongoing war between the Stadaconans and another tribe known as the \"Toudaman\", who had destroyed one of their forts the previous year, resulting in 200 deaths.\n", "BULLET::::- Hoxie, Frederick E. \"Encyclopedia of North American Indians: Native American History, Culture, and Life From Paleo-Indians to the Present\", Boston: Houghton Mifflin Harcourt, 2006: 191–2. (retrieved through Google Books, July 26, 2009)\n", "Frank Gouldsmith Speck (November 8, 1881 – February 6, 1950) was an American anthropologist and professor at the University of Pennsylvania, specializing in the Algonquian and Iroquoian peoples among the Eastern Woodland Native Americans of the United States and First Nations peoples of eastern boreal Canada.\n", "Plausawa (1700February 9, 1754) was a Pennacook Indian who lived in what is now New Hampshire. In 1728 he was the last known Native American living in the town of Suncook. At the start of King George's War in 1740 Plausawa moved to St. Francis in Quebec and fought against the settlers of the British.\n", "By the 16th century, the earliest time for which there is a historical record, major Native American groups included the Apalachee of the Florida Panhandle, the Timucua of northern and central Florida, the Ais of the central Atlantic coast, the Tocobaga of the Tampa Bay area, the Calusa of southwest Florida and the Tequesta of the southeastern coast.\n", "BULLET::::- Philip J. Deloria, \"Ella Deloria (\"Anpetu Waste\").\" \"Encyclopedia of North American Indians: Native American History, Culture, and Life from Paleo-Indians to the Present.\" Ed. Frederick E. Hoxie. Boston: Houghton Mifflin Harcourt, 1996. 159-61. .\n" ]
if the metric system is designed to make for easy calculations and conversions, why wasn't the 60 minute hour changed to a base 10 unit?
Time is always expressed in seconds in the metric system, or in multiples like milliseconds, kiloseconds, etc. "Other units of time, the minute, hour, and day, are accepted for use with the modern metric system, but are not part of it." _URL_0_
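To illustrate why prefixed seconds never displaced hours and minutes, here is a tiny Python sketch (the chosen durations are illustrative only, not from the quoted source) showing that everyday durations convert to awkward, non-round SI values:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 s: not a power of ten

# Express common durations in SI-prefixed seconds
durations = {"one minute": 60, "one hour": 3600, "one day": SECONDS_PER_DAY}
for name, secs in durations.items():
    print(f"{name}: {secs} s = {secs / 1000:g} ks = {secs / 1e6:g} Ms")
```

A day comes out as 86.4 kiloseconds, which is why decimal prefixes on the second buy no convenience for calendar timekeeping.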
[ "When the metric system was first introduced in 1795, all metric units could be defined by reference to the standard metre or to the standard kilogram. In 1832 Carl Friedrich Gauss, when making the first absolute measurements of the Earth's magnetic field, needed standard units of time alongside the units of length and mass. He chose the second (rather than the minute or the hour) as his unit of time, thereby implicitly making the second a base unit of the metric system. The hour and minute have however been \"accepted for use within SI\".\n", "The metric system is based on the number 10, so converting units is done by adding or removing zeros (e.g. 1 centimeter = 10 millimeters, 1 decimeter = 10 centimeters, 1 meter = 100 centimeters, 1 dekameter = 10 meters, 1 kilometer = 1,000 meters).\n", "Full metrication with the passage of the Standards of Weights and Measures Act, 1956, now replaced by the Standards of Weights and Measures Act, 1976: these Acts quote the legal conversion factors for Imperial units to SI units. Exact conversions can be made for customary units if they had previously been defined in terms of Imperial units: however, even when legally defined, the value of a unit could vary between different localities.\n", "The first practical realisation of the metric system came in 1799, during the French Revolution, when the existing system of measures, which had become impractical for trade, was replaced by a decimal system based on the kilogram and the metre. The basic units were taken from the natural world: the unit of length, the metre, was based on the dimensions of the Earth, and the unit of mass, the kilogram, was based on the mass of water having a volume of one litre or a cubic decimetre. Reference copies for both units were manufactured in platinum and remained the standards of measure for the next 90 years. After a period of reversion to the \"mesures usuelles\" due to unpopularity of the metric system, the metrication of France as well as much of Europe was complete by mid-century.\n", "With the advent of the metric system after the French Revolution it was decided that the quarter circle should be divided into 100 degrees instead of 90 degrees, and the degree into 100 seconds instead of 60 seconds. This required the calculation of trigonometric tables and logarithms corresponding to the new size of the degree and instruments for measuring angles in the new system.\n", "The third method is to redefine traditional units in terms of metric values. These redefined \"quasi-metric\" units often stay in use long after metrication is said to have been completed. Resistance to metrication in post-revolutionary France convinced Napoleon to revert to \"mesures usuelles\" (usual measures), and, to some extent, the names remain throughout Europe. In 1814, Portugal adopted the metric system, but with the names of the units substituted by Portuguese traditional ones. In this system, the basic units were the \"mão-travessa\" (hand) = 1 decimetre (10 \"mão-travessas\" = 1 \"vara\" (yard) = 1 metre), the \"canada\" = 1 litre and the \"libra\" (pound) = 1 kilogram. In the Netherlands, 500 g is informally referred to as a \"pond\" (pound) and 100 g as an \"ons\" (ounce), and in Germany and France, 500 g is informally referred to respectively as \"ein Pfund\" and \"une livre\" (\"one pound\"). 
In Denmark, the re-defined \"pund\" (500 g) is occasionally used, particularly among older people and (older) fruit growers, since these were originally paid according to the number of pounds of fruit produced. In Sweden and Norway, a \"mil\" (Scandinavian mile) is informally equal to 10 km, and this has continued to be the predominantly used unit in conversation when referring to geographical distances. In the 19th century, Switzerland had a non-metric system completely based on metric terms (e.g. 1 \"Fuss\" (foot) = 30 cm, 1 \"Zoll\" (inch) = 3 cm, 1 \"Linie\" (line) = 3 mm). In China, the \"jin\" now has a value of 500 g and the liang is 50 g.\n", "SI allows for the use of larger prefixed units based on the second, a system known as metric time, but this is seldom used, since the number of seconds in a day (86,400 or, in rare cases, 86,401) negate one of the metric system's primary advantages: easy conversion by multiplying or dividing by powers of ten.\n" ]
how exactly is there a connection between binaural beats and lucid dreaming?
At least in my experience, first of all, not all binaural beats do anything, and secondly, they do not really get you to dream lucidly; rather, they get you to dream more vividly, which makes it easier to keep a dream diary (an important step for lucid dreaming) and makes entering the lucid state easier. But if you can't do it, binaural beats won't suddenly make you able to.
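For background, a binaural beat is simply two pure tones at slightly different frequencies, one in each ear; the brain perceives the difference as a slow beat. Here is a minimal sketch that writes a 10 Hz binaural beat to a stereo WAV file; the 200 Hz carrier, 10 Hz offset, and file name are arbitrary illustrative choices, not something the answer above prescribes:

```python
import wave
import numpy as np

RATE = 44100          # samples per second
CARRIER = 200.0       # left-ear tone (Hz)
BEAT = 10.0           # perceived beat frequency (Hz): right ear plays 210 Hz
DURATION = 5.0        # seconds

t = np.arange(int(RATE * DURATION)) / RATE
left = np.sin(2 * np.pi * CARRIER * t)
right = np.sin(2 * np.pi * (CARRIER + BEAT) * t)

# Interleave the two channels and scale to 16-bit PCM at half amplitude
stereo = (np.column_stack([left, right]) * 32767 * 0.5).astype(np.int16)

with wave.open("binaural_10hz.wav", "wb") as f:
    f.setnchannels(2)
    f.setsampwidth(2)       # 16-bit samples
    f.setframerate(RATE)
    f.writeframes(stereo.tobytes())
```

Played over headphones, the two ears receive 200 Hz and 210 Hz, and the 10 Hz difference is what listeners report as the "beat".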
[ "Hobson asserts that the existence of lucid dreaming means that the human brain can simultaneously occupy two states: waking and dreaming. The dreaming portion has experiences and therefore has primary consciousness, while the waking self recognizes the dreaming and can be seen as having a sort of secondary consciousness in the sense that there is an awareness of mental state. Studies have been able to show that lucid dreaming is associated with EEG power and coherence profiles that are significantly different from both non-lucid dreaming and waking. Lucid dreaming situates itself between those two states. Lucid dreaming is characterized by more 40 Hz power than non-lucid dreaming, especially in frontal regions. Since it is 40 Hz power that has been correlated with waking consciousness in previous studies, it can be suggested that enough 40 Hz power has been added to the non-lucid dreaming brain to support the increase in subjective awareness that permits lucidity but not enough to cause full awakening.\n", "A study on lucid dreaming found that frequent and occasional lucid dreamers scored higher on NFC than non-lucid dreamers. This suggests there is continuity between waking and dreaming cognitive styles. Researchers have argued that this is because self-reflectiveness or self-focused attention is heightened in lucid dreams and also is associated with greater need for cognition.\n", "Using electroencephalography (EEG) and other polysomnographical measurements, LaBerge and others have shown that lucid dreams begin in the Rapid Eye Movement (REM) stage of sleep. LaBerge also proposes that there are higher amounts of beta-1 frequency band (13–19 Hz) brain wave activity experienced by lucid dreamers, hence there is an increased amount of activity in the parietal lobes making lucid dreaming a conscious process.\n", "The term \"lucid dreaming\" was first coined by Dutch psychologist Frederik Willems Van Eeden who introduced the concept on the 22nd of April during a meeting held by the Society for Psychical Research in 1913, but this phenomenon has been present all throughout historical periods with some findings even dating back to the writings of Aristotle. Stephen LaBerge, American psychophysiologist, introduced his method for physiological investigation of lucid dreaming through eye signals in the 1980s and ever since, more modern research has been established on the studies of the lucid dreaming process.\n", "In one study, researchers sought physiological correlates of lucid dreaming. They showed that the unusual combination of hallucinatory dream activity and wake-like reflective awareness and agentive control experienced in lucid dreams is paralleled by significant changes in electrophysiology. Participants were recorded using 19-channel Electroencephalography (EEG), and 3 achieved lucidity in the experiment. Differences between REM sleep and lucid dreaming were most prominent in the 40-Hz frequency band. The increase in 40-Hz power was especially strong at frontolateral and frontal sites. Their findings include the indication that 40-Hz activity holds a functional role in the modulation of conscious awareness across different conscious states. Furthermore, they termed lucid dreaming as a hybrid state, or that lucidity occurs in a state with features of both REM sleep and waking. 
In order to move from non-lucid REM sleep dreaming to lucid REM sleep dreaming, there must be a shift in brain activity in the direction of waking.\n", "Lucid dreaming is the conscious perception of one's state while dreaming. In this state the dreamer may often have some degree of control over their own actions within the dream or even the characters and the environment of the dream. Dream control has been reported to improve with practiced deliberate lucid dreaming, but the ability to control aspects of the dream is not necessary for a dream to qualify as \"lucid\" — a lucid dream is any dream during which the dreamer knows they are dreaming. The occurrence of lucid dreaming has been scientifically verified.\n", "A lucid dream is one in which the dreamer is aware of dreaming and may be able to exert some degree of control over the dream's characters, narrative or environment. Early references to the phenomenon are found in ancient Greek texts.\n" ]
Why do different viruses (HPV/Warts, Herpes) discriminate between different areas of the body?
That is called tropism, specifically tissue or cell tropism. Usually, there is a specific receptor on certain tissues to which the virus attaches (via a virus attachment protein). A typical example is the human immunodeficiency virus and its affinity for T lymphocytes. Herpes simplex 1 exhibits tropism towards epithelial and neural cells, and papilloma virus towards cutaneous tissue and mucosal cells. Oral, plantar/palmar, and genital warts are caused by different sub-types of the papilloma virus.
[ "In addition, the same viruses were prevalent in multiple body habitats within individuals. For instance, the beta- and gamma-papillomaviruses were the viruses most commonly found in the skin and the nose (anterior nares; see Figure 4A,B), which may reflect proximity and similarities in microenvironments that support infection with these viruses.\n", "Some bacteria and viruses have a broad tissue tropism and can infect many types of cells and tissues. Other viruses may infect primarily a single tissue. For example, rabies virus affects primarily neuronal tissue.\n", "In other words; nongenetic interaction in which virus particles released from a cell that is infected with two different viruses have components from both the infecting agents, but with a genome from one of them.\n", "Mechanisms of infection differ between typhoidal and nontyphoidal serotypes, owing to their different targets in the body and the different symptoms that they cause. Both groups must enter by crossing the barrier created by the intestinal cell wall, but once they have passed this barrier, they use different strategies to cause infection.\n", "FeLV and feline immunodeficiency virus (FIV) and are sometimes mistaken for one another though the viruses differ in many ways. Although they are both in the same retroviral subfamily (orthoretrovirinae), they are classified in different genera (FeLV is a gamma-retrovirus and FIV is a lentivirus like HIV-1). Their shapes are quite different: FeLV is more circular while FIV is elongated. The two viruses are also quite different genetically, and their protein coats differ in size and composition. Although many of the diseases caused by FeLV and FIV are similar, the specific ways in which they are caused also differs. Also, while the feline leukemia virus may cause symptomatic illness in an infected cat, an FIV infected cat can remain completely asymptomatic its entire lifetime.\n", "A common origin for the herpesviruses and the caudoviruses has been suggested on the basis of parallels in their capsid assembly pathways and similarities between their portal complexes, through which DNA enters the capsid. These two groups of viruses share a distinctive 12-fold arrangement of subunits in the portal complex. A second paper has suggested an evolutionary relationship between these two groups of viruses.\n", "Due to considerable genetic variation among strains, isolates from both viruses have previously been designated as belonging to new and separate species before being reassigned to one of the two recognized viruses.\n" ]
What spoken language carries the most information per sound or time of speech?
[Here's](_URL_0_) a paper on information density vs. speed of speech, done by the University of Lyon. I am not sure how accurate their methods are, but they seem to believe that some languages convey more information per syllable and that, for 5 out of 7 languages, the ones with lower information density are spoken faster. Note that the sample size was only 59, and the study only compared how fast 20 different texts were read out; all silences that lasted longer than 150 ms were edited out as well.
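The quantity at stake in the Lyon paper is an information rate: information per syllable multiplied by syllables per second. The numbers below are invented for illustration (they are not the paper's data); they just show how a dense-but-slow language and a sparse-but-fast language can land on similar rates:

```python
# (density, speech rate) pairs: information per syllable (relative units)
# and syllables per second. Values are invented for illustration only.
languages = {
    "dense, slow": (0.90, 5.2),
    "sparse, fast": (0.55, 8.0),
}

for name, (density, syll_per_sec) in languages.items():
    rate = density * syll_per_sec  # information conveyed per second
    print(f"{name}: {rate:.2f} information units/second")
```

Both hypothetical languages end up near the same rate, which is the trade-off the answer describes: lower density tends to go with faster speech.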
[ "Part of the phonological study of a language therefore involves looking at data (phonetic transcriptions of the speech of native speakers) and trying to deduce what the underlying phonemes are and what the sound inventory of the language is. The presence or absence of minimal pairs, as mentioned above, is a frequently used criterion for deciding whether two sounds should be assigned to the same phoneme. However, other considerations often need to be taken into account as well.\n", "For the science of linguistics, language is first and foremost spoken language. The medium of spoken language is sound. The Pangloss collection gives access to original recordings simultaneously with transcriptions and translations, as a resource for further research. After being recorded in its cultural context, texts have been transcribed in collaboration with native speakers.\n", "In spoken language analysis, an utterance is the smallest unit of speech. It is a continuous piece of speech beginning and ending with a clear pause. In the case of oral languages, it is generally but not always bounded by silence. Utterances do not exist in written language, only their representations do. They can be represented and delineated in written language in many ways.\n", "Humans have a set of distinctive features (known as phonetic features), and by this set they can produce any speech sound (phoneme) of any human language. BUT NOTE: a particular language have limited features and phonemes, thus speaker of language A may not produce phonemes of language B. In this way a particular language is a \"Constraint\" on the ability of humans to produces many more speech sounds.\n", "While lipread speech can carry useful speech information, it is inherently less accurate than (clearly) heard speech because many distinctive features of speech are produced by actions of the tongue within the oral cavity and are not visible. This is a limitation imposed by speech itself, not the expertise of the speechreader. It is the main reason why the accuracy of a speechreader working on a purely visual record cannot be considered wholly reliable, however skilled they may be and irrespective of hearing status. The type of evidence and the utility of such evidence varies from case to case.\n", "It is a well-established finding that, unlike written language, spoken language does not have any clear boundaries between words; spoken language is a continuous stream of sound rather than individual words with silences between them. This lack of segmentation between linguistic units presents a problem for young children learning language, who must be able to pick out individual units from the continuous speech streams that they hear. One proposed method of how children are able to solve this problem is that they are attentive to the statistical regularities of the world around them. For example, in the phrase \"pretty baby,\" children are more likely to hear the sounds \"pre\" and \"ty\" heard together during the entirety of the lexical input around them than they are to hear the sounds \"ty\" and \"ba\" together. In an artificial grammar learning study with adult participants, Saffran, Newport, and Aslin found that participants were able to locate word boundaries based only on transitional probabilities, suggesting that adults are capable of using statistical regularities in a language-learning task. 
This is a robust finding that has been widely replicated.\n", "If speech is identified in terms of how it is physically made, then nonauditory information should be incorporated into speech percepts even if it is still subjectively heard as \"sounds\". This is, in fact, the case.\n" ]
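To make the transitional-probability idea from the segmentation passage above concrete, here is a minimal sketch on a toy syllable stream (the stream and syllables are invented, not data from Saffran, Newport, and Aslin):

```python
from collections import Counter

# Toy syllable stream made of two "words", pre-ty and ba-by, in random order.
stream = ["pre", "ty", "ba", "by", "pre", "ty", "pre", "ty", "ba", "by"]

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

# Transitional probability P(next | current). Within-word transitions
# (pre->ty, ba->by) come out at 1.0, while "ty" is followed by different
# syllables, so its outgoing transitions are lower -- hinting at a boundary.
for (a, b), n in sorted(pair_counts.items()):
    print(f"P({b} | {a}) = {n / first_counts[a]:.2f}")
```

Dips in transitional probability are exactly where a statistical learner would posit word boundaries.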
Bohr's theory of the hydrogen atom
Basically, the atom was understood like a solar system, with electrons orbiting the nucleus. Bohr suggested that the electrons could only occupy very specific orbits; light was emitted when an electron dropped from a higher to a lower orbit, and absorbed when it jumped from a lower to a higher one.
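The quantitative core of the model fits in a few lines: the allowed energies are E_n = -13.6 eV / n^2, and a jump between orbits emits or absorbs a photon carrying the energy difference. A minimal sketch using textbook constants (the function names are my own):

```python
RYDBERG_EV = 13.6          # hydrogen ground-state binding energy (eV)
HC_EV_NM = 1239.84         # h*c in eV*nm, for converting energy to wavelength

def energy(n: int) -> float:
    """Bohr energy of level n in eV (negative = bound)."""
    return -RYDBERG_EV / n**2

def photon_wavelength_nm(n_hi: int, n_lo: int) -> float:
    """Wavelength of the photon emitted when dropping from n_hi to n_lo."""
    de = energy(n_hi) - energy(n_lo)  # energy released, in eV
    return HC_EV_NM / de

# Balmer series: transitions down to n = 2 give visible lines.
for n in (3, 4, 5):
    print(f"n={n} -> n=2: {photon_wavelength_nm(n, 2):.0f} nm")
```

The printed Balmer-series wavelengths (about 656, 486, and 434 nm) match the visible hydrogen lines, which is the success that made the model famous.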
[ "Schrödinger was able to calculate the energy levels of hydrogen by treating a hydrogen atom's [[electron]] as a classical wave, moving in a well of electrical potential created by the proton. This calculation accurately reproduced the energy levels of the Bohr model.\n", "The solutions to the Schrödinger equation for hydrogen are analytical, giving a simple expression for the hydrogen energy levels and thus the frequencies of the hydrogen spectral lines and fully reproduced the Bohr model and went beyond it. It also yields two other quantum numbers and the shape of the electron's wave function (\"orbital\") for the various possible quantum-mechanical states, thus explaining the anisotropic character of atomic bonds.\n", "The hydrogen hypothesis is a model proposed by William F. Martin and Miklós Müller in 1998 that describes a possible way in which the mitochondrion arose as an endosymbiont within an archaeon (without doubts classified as prokaryote at then times), giving rise to a symbiotic association of two cells from which the first eukaryotic cell could have arisen (symbiogenesis).\n", "The Bohr model was able to explain the emission and absorption spectra of hydrogen. The energies of electrons in the \"n\" = 1, 2, 3, etc. states in the Bohr model match those of current physics. However, this did not explain similarities between different atoms, as expressed by the periodic table, such as the fact that helium (two electrons), neon (10 electrons), and argon (18 electrons) exhibit similar chemical inertness. Modern quantum mechanics explains this in terms of electron shells and subshells which can each hold a number of electrons determined by the Pauli exclusion principle. Thus the \"n\" = 1 state can hold one or two electrons, while the \"n\" = 2 state can hold up to eight electrons in 2s and 2p subshells. In helium, all \"n\" = 1 states are fully occupied; the same for \"n\" = 1 and \"n\" = 2 in neon. In argon the 3s and 3p subshells are similarly fully occupied by eight electrons; quantum mechanics also allows a 3d subshell but this is at higher energy than the 3s and 3p in argon (contrary to the situation in the hydrogen atom) and remains empty.\n", "The Bohr model explains the atomic spectrum of hydrogen (see hydrogen spectral series) as well as various other atoms and ions. It is not perfectly accurate, but is a remarkably good approximation in many cases, and historically played an important role in the development of quantum mechanics. The Bohr model posits that electrons revolve around the atomic nucleus in a manner analogous to planets revolving around the sun.\n", "Schrödinger was able to calculate the energy levels of hydrogen by treating a hydrogen atom's [[electron]] as a wave, represented by the \"[[wave function]]\" , in an [[electric potential]] [[potential well|well]], , created by the proton. The solutions to Schrödinger's equation are distributions of probabilities for electron positions and locations. Orbitals have a range of different shapes in three dimensions. The energies of the different orbitals can be calculated, and they accurately match the energy levels of the Bohr model.\n", "A hydrogen atom consists of an electron orbiting its nucleus. The electromagnetic force between the electron and the nuclear proton leads to a set of quantum states for the electron, each with its own energy. These states were visualized by the Bohr model of the hydrogen atom as being distinct orbits around the nucleus. 
Each energy state, or orbit, is designated by an integer, as shown in the figure. The Bohr model was later replaced by quantum mechanics in which the electron occupies an atomic orbital rather than an orbit, but the allowed energy levels of the hydrogen atom remained the same as in the earlier theory.\n" ]
How did chemists explain reactions before the discovery of the atom?
To put it bluntly, they didn’t. The first attempts at explaining the states and reactions of matter led directly to the postulates that theorized the existence of the atom, so the two developed together. Reactions such as combustion and the making of alloys were discovered empirically, but never studied systematically the way they are now. There were early theories about what composed matter, such as the idea that everything consists of fire, water, earth, and so on, but such theories never tried to “explain” reactions beyond asserting that things simply were the way they were.
[ "At the turn of the twentieth century the theoretical underpinnings of chemistry were finally understood due to a series of remarkable discoveries that succeeded in probing and discovering the very nature of the internal structure of atoms. In 1897, J.J. Thomson of Cambridge University discovered the electron and soon after the French scientist Becquerel as well as the couple Pierre and Marie Curie investigated the phenomenon of radioactivity. In a series of pioneering scattering experiments Ernest Rutherford at the University of Manchester discovered the internal structure of the atom and the existence of the proton, classified and explained the different types of radioactivity and successfully transmuted the first element by bombarding nitrogen with alpha particles.\n", "Early experiments in chemistry had their roots in the system of Alchemy, a set of beliefs combining mysticism with physical experiments. The science of chemistry began to develop with the work of Robert Boyle, the discoverer of gas, and Antoine Lavoisier, who developed the theory of the Conservation of mass.\n", "Several chemists discovered during the 19th century some fundamental concepts of the domain of organic chemistry. One of them for example was the French chemist Joseph Louis Gay-Lussac, who was especially interested in fermentation processes, and he passed this fascination to one of his best students, Justus von Liebig. With a difference of some years, each of them described, together with colleague, the chemical structure of the lactic acid molecule as we know it today. They had a purely chemical understanding of the fermentation process, which means that you can’t see it using a microscope, and that it can only be optimized by chemical catalyzers. It was then in 1857 when the French chemist Louis Pasteur first described the lactic acid as the product of a microbial fermentation. During this time, he worked at the university of Lille, where a local distillery asked him for advice concerning some fermentation problems. Per chance and with the badly equipped laboratory he had at that time, he was able to discover that in this distillery, two fermentations were taking place, a lactic acid one and an alcoholic one, both induced by some microorganisms. He then continued the research on these discoveries in Paris, where he also published his theories that presented a stable contradiction to the purely chemical version represented by Liebig and his followers. Even though Pasteur described some concepts that are still accepted nowadays, Liebig refused to accept them until his death in 1873. But even Pasteur himself wrote that he was “driven” to a completely new understanding of this chemical phenomenon. Even if Pasteur didn’t find every detail of this process, he still discovered the main mechanism of how the microbial lactic acid fermentation works. He was for example the first to describe fermentation as a “form of life without air.\"\n", "As organic chemistry developed during the 20th century, chemists started associating synthetically useful reactions with the names of the discoverers or developers; in many cases, the name is merely a mnemonic. Some cases of reactions that were not really discovered by their namesakes are known. Examples include the Pummerer rearrangement, the Pinnick oxidation and the Birch reduction.\n", "Evidence for the existence of atoms was the law of definite proportions proposed by him in 1792. 
Richter found that the ratio by weight of the compounds consumed in a chemical reaction was always the same. It took 615 parts by weight of magnesia (MgO), for example, to neutralize 1000 parts by weight of sulfuric acid. From his data, Ernst Gottfried Fischer calculated in 1802 the first table of chemical equivalents, taking sulphuric acid as the standard with the figure 1000. When Joseph Proust reported his work on the constant composition of chemical compounds, the time was ripe for the reinvention of an atomic theory. The law of definite proportions and constant composition do not prove that atoms exist, but they are difficult to explain without assuming that chemical compounds are formed when atoms combine in constant proportions.\n", "Significant advances in chemistry also took place during the scientific revolution. Antoine Lavoisier, a French chemist, refuted the phlogiston theory, which posited that things burned by releasing \"phlogiston\" into the air. Joseph Priestley had discovered oxygen in the 18th century, but Lavoisier discovered that combustion was the result of oxidation. He also constructed a table of 33 elements and invented modern chemical nomenclature. Formal biological science remained in its infancy in the 18th century, when the focus lay upon the classification and categorization of natural life. This growth in natural history was led by Carl Linnaeus, whose 1735 taxonomy of the natural world is still in use. Linnaeus in the 1750s introduced scientific names for all his species.\n", "Near the end of the 18th century, two laws about chemical reactions emerged without referring to the notion of an atomic theory. The first was the law of conservation of mass, closely associated with the work of Antoine Lavoisier, which states that the total mass in a chemical reaction remains constant (that is, the reactants have the same mass as the products). The second was the law of definite proportions. First established by the French chemist Joseph Louis Proust in 1799, this law states that if a compound is broken down into its constituent chemical elements, then the masses of the constituents will always have the same proportions by weight, regardless of the quantity or source of the original substance.\n" ]
How do they determine the longitude on another planet?
For a rocky planet like Venus, the prime meridian is chosen to pass through some arbitrarily selected reference surface feature, like a crater. Longitude is then counted in the direction opposite to the planet's rotation about its axis. For instance, looking down at Venus's north pole, the planet rotates clockwise, so from the prime meridian, longitude increases from 0 to 360 degrees in the anti-clockwise direction (i.e., east). (Note that this convention applies to non-Earth planets only: Earth rotates anti-clockwise as seen from above the north pole, yet Earth longitude also increases in the anti-clockwise direction.)
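A minimal sketch of the bookkeeping this rule implies (the function names and the west-to-east conversion are mine, purely illustrative, not any space agency's API):

```python
# For non-Earth planets, longitude is counted opposite to the spin:
# prograde rotators (spinning anti-clockwise seen from the north) get
# west-positive longitudes; retrograde rotators like Venus get
# east-positive ones.
def longitude_direction(spins_ccw_from_north: bool) -> str:
    return "west-positive" if spins_ccw_from_north else "east-positive"

def west_to_east(lon_west_deg: float) -> float:
    """Convert a 0-360 positive-west longitude to positive-east."""
    return (360.0 - lon_west_deg) % 360.0

print(longitude_direction(False))  # Venus (retrograde) -> east-positive
print(west_to_east(90.0))          # 90 W is the same meridian as 270 E
```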
[ "Similar to latitude, the longitude of a place on Earth is the angular distance east or west of the prime meridian or Greenwich meridian. Longitude is usually expressed in degrees (marked with °) ranging from 0° at the Greenwich meridian to 180° east and west. Sydney, for example, has a longitude of about 151° east. New York City has a longitude of 74° west. For most of history, mariners struggled to determine longitude. Longitude can be calculated if the precise time of a sighting is known. Lacking that, one can use a sextant to take a lunar distance (also called \"the lunar observation\", or \"lunar\" for short) that, with a nautical almanac, can be used to calculate the time at zero longitude (see Greenwich Mean Time). Reliable marine chronometers were unavailable until the late 18th century and not affordable until the 19th century. For about a hundred years, from about 1767 until about 1850, mariners lacking a chronometer used the method of lunar distances to determine Greenwich time to find their longitude. A mariner with a chronometer could check its reading using a lunar determination of Greenwich time.\n", "Longitude can be measured in the same way. If the angle to Polaris can be accurately measured, a similar measurement to a star near the eastern or western horizons will provide the longitude. The problem is that the Earth turns 15 degrees per hour, making such measurements dependent on time. A measure a few minutes before or after the same measure the day before creates serious navigation errors. Before good chronometers were available, longitude measurements were based on the transit of the moon, or the positions of the moons of Jupiter. For the most part, these were too difficult to be used by anyone except professional astronomers. The invention of the modern chronometer by John Harrison in 1761 vastly simplified longitudinal calculation.\n", "Many nations, such as France, have proposed their own reference longitudes as a standard, although the world’s navigators have generally come to accept the reference longitudes tabulated by the British. The reference longitude adopted by the British became known as the Prime Meridian and is now accepted by most nations as the starting point for all longitude measurements. The Prime Meridian of zero degrees longitude runs along the meridian passing through the Royal Observatory at Greenwich, England. Longitude is measured east and west from the Prime Meridian. To determine \"longitude by chronometer,\" a navigator requires a chronometer set to the local time at the Prime Meridian. Local time at the Prime Meridian has historically been called Greenwich Mean Time (GMT), but now, due to international sensitivities, has been renamed as Coordinated Universal Time (UTC), and is known colloquially as \"zulu time\".\n", "The Longitude Act only addressed the determination of longitude at sea. Determining longitude reasonably accurately on land was, from the 17th century onwards, possible using the Galilean moons of Jupiter as an astronomical 'clock'. The moons were easily observable on land, but numerous attempts to reliably observe them from the deck of a ship resulted in failure. For details on other efforts towards determining the longitude, see History of longitude.\n", "Longitude (, ), is a geographic coordinate that specifies the east–west position of a point on the Earth's surface, or the surface of a celestial body. It is an angular measurement, usually expressed in degrees and denoted by the Greek letter lambda (λ). 
Meridians (lines running from pole to pole) connect points with the same longitude. By convention, one of these, the Prime Meridian, which passes through the Royal Observatory, Greenwich, England, was allocated the position of 0° longitude. The longitude of other places is measured as the angle east or west from the Prime Meridian, ranging from 0° at the Prime Meridian to +180° eastward and −180° westward. Specifically, it is the angle between a plane through the Prime Meridian and a plane through both poles and the location in question. (This forms a right-handed coordinate system with the z-axis (right hand thumb) pointing from the Earth's center toward the North Pole and the x-axis (right hand index finger) extending from the Earth's center through the Equator at the Prime Meridian.)\n", "Latitude and longitude uniquely describe the location of any point on Earth. Latitude may be simply calculated from astronomical or solar observation, either at land or sea, interrupted only by cloudy skies. Longitude, on the other hand, requires both astronomical or solar observation and some form of time reference to a longitude reference point. John Harrison produced the first precise marine chronometer in 1761.\n", "Latitude and longitude uniquely describe the location of any point on Earth. Latitude may be simply calculated from astronomical or solar observation, either at land or sea, interrupted only by cloudy skies. Longitude, on the other hand, requires both astronomical or solar observation and some form of time reference to a longitude reference point. John Harrison produced the first precise marine chronometer in 1761.\n" ]
why is the tea party republican? why aren't they their own party?
You and a bunch of your friends decide to vote on what to do this afternoon. You want to play baseball. Jimmy wants to play baseball. Mike wants to play soccer. Tommy wants to play soccer. Jenny wants to play dolls. Mary wants to play dolls. Anne wants to play dolls. 3 people want to play dolls. 2 people want to play soccer. 2 people want to play baseball. If everyone votes for what they want to do, dolls wins. But if the two people who want to play baseball dislike playing with dolls more than they dislike playing soccer, they can switch their vote to soccer, and soccer wins 4 to 3. The Tea Party is like the kids who want to play baseball: they would rather play baseball, but they will accept playing soccer over playing with dolls.
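The analogy is just plurality voting plus a tactical switch; a toy tally makes the arithmetic explicit (illustrative code, obviously not part of the original answer):

```python
from collections import Counter

# Sincere first choices: 3 dolls, 2 soccer, 2 baseball.
sincere = ["baseball", "baseball", "soccer", "soccer",
           "dolls", "dolls", "dolls"]
print(Counter(sincere).most_common())   # dolls wins, 3-2-2

# The two baseball fans switch to their second choice:
tactical = ["soccer" if v == "baseball" else v for v in sincere]
print(Counter(tactical).most_common())  # soccer wins, 4-3
```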
[ "The Tea Party is generally associated with the Republican Party. Most politicians with the \"Tea Party brand\" have run as Republicans. In recent elections in the 2010s, Republican primaries have been the site of competitions between the more conservative, Tea Party wing of the party and the more moderate, establishment wing of the party. The Tea Party has incorporated various conservative internal factions of the Republican Party to become a major force within the party.\n", "The Tea Party movement is not a national political party; polls show that most Tea Partiers consider themselves to be Republicans and the movement's supporters have tended to endorse Republican candidates. Commentators, including Gallup editor-in-chief Frank Newport, have suggested that the movement is not a new political group but simply a re-branding of traditional Republican candidates and policies. An October 2010 \"Washington Post\" canvass of local Tea Party organizers found 87% saying \"dissatisfaction with mainstream Republican Party leaders\" was \"an important factor in the support the group has received so far\".\n", "The Tea Party Caucus is often viewed as taking conservative positions, and advocating for both social and fiscal conservatism. Analysis of voting patterns confirm that Caucus members are more conservative than other House Republicans, especially on fiscal matters. Voting trends to the right of the median Republican, and Tea Party Caucus members represent more conservative, southern and affluent districts. Supporters of the Tea Party movement itself are largely economic driven.\n", "The Tea Party movement is an American fiscally conservative political movement within the Republican Party. Members of the movement have called for lower taxes, and for a reduction of the national debt of the United States and federal budget deficit through decreased government spending. The movement supports small-government principles and opposes government-sponsored universal healthcare. The Tea Party movement has been described as a popular constitutional movement composed of a mixture of libertarian, right-wing populist, and conservative activism. It has sponsored multiple protests and supported various political candidates since 2009. According to the American Enterprise Institute, various polls in 2013 estimate that slightly over 10 percent of Americans identified as part of the movement.\n", "The Tea Party is a conglomerate of conservatives with diverse viewpoints including libertarians and social conservatives. Most Tea Party supporters self-identify as \"angry at the government\". One survey found that Tea Party supporters in particular distinguish themselves from general Republican attitudes on social issues such as same-sex marriage, abortion and illegal immigration, as well as global warming. However, discussion of abortion and gay rights has also been downplayed by Tea Party leadership. In the lead-up to the 2010 election, most Tea Party candidates have focused on federal spending and deficits, with little focus on foreign policy.\n", "The Tea Party's involvement in the 2012 GOP presidential primaries was minimal, owing to divisions over whom to endorse as well as lack of enthusiasm for all the candidates. 
However, the 2012 GOP ticket did have an influence on the Tea Party: following the selection of Paul Ryan as Mitt Romney's vice-presidential running mate, \"The New York Times\" declared that the once fringe of the conservative coalition, Tea Party lawmakers are now \"indisputably at the core of the modern Republican Party.\"\n", "The party mood was glum in 2013 and one conservative analyst concluded: It would be no exaggeration to say that the Republican Party has been in a state of panic since the defeat of Mitt Romney, not least because the election highlighted American demographic shifts and, relatedly, the party's failure to appeal to Hispanics, Asians, single women and young voters. Hence the Republican leadership's new willingness to pursue immigration reform, even if it angers the conservative base.\n" ]
Did the Japanese ever repulse an island invasion by the US during WWII?
No. After the initial Japanese victories in the Pacific War, the United States won every major campaign and battle it entered. Even those cases where the Japanese scored tactical victories were strategic losses. Japan lacked the steel to make good its naval losses, and its cadres of experienced pilots were consumed in battle, making its carriers steadily less effective. The tensest point might have been the Japanese naval victory in the Battle of Savo Island, two days after the Allied landing on Guadalcanal; however, the Japanese did not press their advantage. With luck and determination, the Japanese might have pulled off a victory there. Even so, it wouldn't have delayed the war long, because a Japanese victory at Guadalcanal would have simply fed the Allied strategy of attrition. Throughout the Pacific War, the United States brought to bear significant and growing advantages in resources and technology. Japanese air and naval forces became increasingly unable to contest American mobility and logistics. The Americans had the ability to dictate the day of battle, with combined-arms support that the Japanese simply could not respond to. The Americans were also able to bypass or "leapfrog" many of the more difficult targets; thousands of Japanese soldiers were simply stranded across the Pacific. For instance, the island of New Britain was held by 100,000 Japanese soldiers. An invasion would have been risky and extremely costly. Allied bombers neutralized the island's ports and airfields, leaving it surrounded by a ring of air bases. Limited offensives continued throughout the war, and the Japanese bases there were bombed in training missions for new Allied aircrews. When Australian troops accepted the island's surrender at the end of the war, almost 70,000 Japanese soldiers were still there. *Edit: conflated Rabaul with New Britain*
[ "Japanese forces occupied the island in December 1941, days after the attack on Pearl Harbor, in order to protect their south-eastern flank from allied counterattacks, and isolate Australia, under the codename Operation FS. On 17–18 August 1942, in order to divert Japanese attention from the Solomon Islands and New Guinea areas, the United States launched a raid on the island, known as the raid on Makin.\n", "In order to capture the islands from Japan, the United States military employed a \"leapfrogging\" strategy which involved conducting amphibious assaults on selected Japanese island fortresses, subjecting some to air attack only and entirely skipping over others. This strategy caused the Japanese Empire to lose control of its Pacific possessions between 1943 and 1945.\n", "The Philippines, Burma, and the Dutch East Indies were the last major territories the Japanese invaded in World War II. As Corregidor surrendered, the Battle of the Coral Sea was in progress, turning back a Japanese attempt to seize Port Moresby, New Guinea by sea. By the final surrender on 9 June, the Battle of Midway was over, blunting Japan's naval strength with the loss of four large aircraft carriers and hundreds of skilled pilots. Both of these victories were costly to the US Navy as well, with two aircraft carriers lost, but the United States could replace their ships and train more pilots, and Japan, for the most part, could not do so adequately.\n", "On June 6/7, 1942, the Japanese Navy and Army participated in the only invasion of the United States during World War II through the Aleutian Islands of Kiska and Attu as part of the Aleutian Islands Campaign. Despite the loss of U.S. soil to a foreign enemy since the War of 1812, the campaign was not considered a priority by the Joint Chiefs of Staff. British Prime Minister Churchill stated that sending forces to attack the Japanese presence there was a diversion from the North African Campaign and Admiral Chester Nimitz saw it as a diversion from his operations in the Central Pacific. Commanders in Alaska, however, believed the Japanese occupiers would establish airbases in the Aleutians that would put major cities along the United States West Coast within range of their bombers and once the islands were again in United States hands, forward bases could be established to attack Japan from there.\n", "After the Japanese were forced into the defensive in the second half of 1942, the Americans were confronted with heavily fortified garrisons on small islands. They decided on a strategy of \"island hopping\", leaving the strongest garrisons alone, just cutting off their supply via naval blockades and bombardment, and securing bases of operation on the lightly defended islands instead. The most notable of these island battles was the Battle of Iwo Jima, where the American victory paved the way for the aerial bombing of the Japanese mainland, which culminated in the atomic bombings of Hiroshima and Nagasaki and the Bombing of Tokyo that forced Japan to surrender.\n", "In World War II, the United States, during the Gilbert and Marshall Islands campaign, invaded and occupied the islands in 1944, destroying or isolating the Japanese garrisons. 
In just one month in 1944, Americans captured Kwajalein Atoll, Majuro and Enewetak, and, in the next two months, the rest of the Marshall Islands, except for Wotje, Mili, Maloelap and Jaluit.\n", "The campaigns of August 1942 to early 1944 had driven Japanese forces from many of their island bases in the south and the central Pacific Ocean, while isolating many of their other bases (most notably in the Solomon Islands, Bismarck Archipelago, Admiralty Islands, New Guinea, Marshall Islands, and Wake Island), and in June 1944, a series of American amphibious landings supported by the Fifth Fleet's Fast Carrier Task Force captured most of the Mariana Islands (bypassing Rota). This offensive breached Japan's strategic inner defense ring and gave the Americans a base from which long-range Boeing B-29 Superfortress bombers could attack the Japanese home islands.\n" ]
Bacteria can only live at certain temperatures, so when I eat cooked meat, am I eating a lot of dead bacteria? If not where do they go?
Yes, you are, or at least the chemicals that made them up and the products of cooking those chemicals. Cooking kills bacteria by raising their internal temperature to the point where they die. Depending on the process and the temperature, the bacteria's cell walls can rupture, they can carbonize and effectively turn to char, they can just sit there as dead cells, or they can be partially broken down by enzymatic or other chemical processes that destroy and/or dissociate the chemicals that compose them. Some of those constituents, such as water, may boil off, be washed out into the cooking water or the oil in the frying pan, or "burn" into carbon dioxide; the rest simply goes into your mouth and is digested the same as any other food. Finally, if you're eating leftovers, rare meat, or meat that's been sitting out for a while, you're eating live bacteria too. But your digestive system can easily handle most types of live bacteria; only certain ones cause problems, so that's usually nothing to worry about.
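For a sense of scale, food microbiology models heat-kill with "D-values": the time at a given temperature needed to kill 90% of a population. A minimal sketch, with an illustrative D-value that should not be read as a food-safety reference:

```python
def surviving_fraction(minutes_at_temp: float, d_value_minutes: float) -> float:
    """Each D-value's worth of time at temperature kills 90% of what's
    left (one tenfold, or 1-log, reduction)."""
    log_reductions = minutes_at_temp / d_value_minutes
    return 10.0 ** (-log_reductions)

# Assuming a D-value of ~0.5 min (roughly the order of magnitude for
# Salmonella near 70 C -- an assumption for illustration only):
print(surviving_fraction(3.0, 0.5))  # 1e-06: a "6-log" kill
```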
[ "Bacteria are typically killed at temperatures of around . Most harmful bacteria live on the surface of pieces of meat which have not been ground or shredded before cooking. As a result, for unprocessed steaks or chops of red meat it is usually safe merely to bring the surface temperature of the meat to this temperature and hold it there for a few minutes. See food safety. Meat which has been ground needs to be cooked at a temperature and time sufficient to kill bacteria. Poultry such as chicken has a porous texture not visible to the eye, and can harbour pathogens in its interior even if the exterior is heated sufficiently.\n", "Many bacteria can survive adverse conditions such as temperature, desiccation, and antibiotics by endospores, cysts, conidia or states of reduced metabolic activity lacking specialized cellular structures. Up to 80% of the bacteria in samples from the wild appear to be metabolically inactive—many of which can be resuscitated. Such dormancy is responsible for the high diversity levels of most natural ecosystems.\n", "Bacteria survive mainly on plant residues in the soil. They are spread by insects and by cultural practices, such as irrigation water and farm machinery. The disease is tolerant to low temperatures; it can spread in storages close to 0 °C, by direct contact and by drippint onto the plants below.\n", "An interesting feature peculiar to some of the \"Yersinia\" bacteria is the ability to not only survive, but also to actively proliferate at temperatures as low as 1–4 °C (e.g., on cut salads and other food products in a refrigerator). \"Yersinia\" bacteria are relatively quickly inactivated by oxidizing agents such as hydrogen peroxide and potassium permanganate solutions.\n", "The lifespan of microbes in the home varies similarly. Generally bacteria and viruses require a wet environment with a humidity of over 10 percent. \"E. coli\" can survive for a few hours to a day. Bacteria which form spores can survive longer, with \"Staphylococcus aureus\" surviving potentially for weeks or, in the case of \"Bacillus anthracis\", years.\n", "Different species of bacteria are able to survive at different temperature ranges. Ones living optimally at temperatures between 35–40 °C are called mesophiles or mesophilic bacteria. Some of the bacteria can survive at the hotter and more hostile conditions of 55–60 °C, these are called thermophile.Methanogens come from the domain of archaea. This family includes species that can grow in the hostile conditions of hydrothermal vents. These species are more resistant to heat and can therefore operate at high temperatures, a property that is unique to thermophile.\n", "Raw meats may also contain harmful parasites. As with bacteria, these parasites are destroyed during the heat processing of cooking meat or manufacturing pet foods. Some raw diet recipes call for freezing meat before serving it, which greatly reduces (but does not necessarily eliminate) extant parasites. According to a former European Union directive, freezing fish at -20 °C (-4 °F) for 24 hours kills parasites. The U.S. Food and Drug Administration (FDA) recommends freezing at -35 °C (-31 °F) for 15 hours, or at -20 °C (-4 °F) for 7 days. The most common parasites in fish are roundworms from the family Anisakidae and fish tapeworm. 
While freezing pork at -15 °C (5 °F) for 20 days will kill any \"Trichinella spiralis\" worm, trichinosis is rare in countries with well established meat inspection programs, with cases of trichinosis in humans in the United States mostly coming from consumption of raw or undercooked wild game. Trichinella species in wildlife are resistant to freezing. In dogs and cats symptoms of trichinellosis would include mild gastrointestinal upset (vomiting and diarrhea) and in rare cases, muscle pain and muscle stiffness.\n" ]
why is it that lead in paint is harmful, but the 40% lead in solder material isn't?
Lead in solder is harmful too; there just aren't many good alternatives. Lead-free solder does exist, but it has a tendency to grow "whiskers," which can create shorts that damage parts. The difference in risk is mostly about exposure: lead paint deteriorates into dust and chips that get ingested (especially by children), while the lead in solder stays locked in the metal unless it's dispersed, for example as fumes while soldering.
[ "Lead paint contains lead as pigment. Lead is also added to paint to speed drying, increase durability, retain a fresh appearance, and resist moisture that causes corrosion. Paint with significant lead content is still used in industry and by the military. For example, leaded paint is sometimes used to paint roadways and parking lot lines. Lead, a poisonous metal, can damage nerve connections (especially in young children) and cause blood and brain disorders. Because of lead's low reactivity and solubility, lead poisoning usually only occurs in cases when it is dispersed, such as when sanding lead-based paint prior to repainting.\n", "Some lead compounds are colorful and are used widely in paints, and lead paint is a major route of lead exposure in children. A study conducted in 1998–2000 found that 38 million housing units in the US had lead-based paint, down from a 1990 estimate of 64 million. Deteriorating lead paint can produce dangerous lead levels in household dust and soil. Deteriorating lead paint and lead-containing household dust are the main causes of chronic lead poisoning. The lead breaks down into the dust and since children are more prone to crawling on the floor, it is easily ingested. Many young children display pica, eating things that are not food. Even a small amount of a lead-containing product such as a paint chip or a sip of glaze can contain tens or hundreds of milligrams of lead. Eating chips of lead paint presents a particular hazard to children, generally producing more severe poisoning than occurs from dust. Because removing lead paint from dwellings, e.g. by sanding or torching creates lead-containing dust and fumes, it is generally safer to seal the lead paint under new paint (excepting moveable windows and doors, which create paint dust when operated). Alternatively, special precautions must be taken if the lead paint is to be removed. In oil painting it was once common for colours such as yellow or white to be made with lead carbonate. Lead white oil colour was the main white of oil painters until superseded by compounds containing zinc or titanium in the mid-20th century. It is speculated that the painter Caravaggio and possibly Francisco Goya and Vincent Van Gogh had lead poisoning due to overexposure or carelessness when handling this colour.\n", "Even though lead paint usage has been abolished, there are still houses and buildings that have not had the lead paint removed. The removal of lead paint may also cause symptoms because of the dust created in the process that still contains unhealthy amounts of lead.\n", "Most lead-based paint in the United Kingdom was banned from sale to the general public in 1992, apart from for specialist uses. Prior to this lead compounds had been used as the pigment and drying agent in different types of paint, for example brick and some tile paints\n", "Some pigments are toxic, such as the lead pigments that are used in lead paint. Paint manufacturers began replacing white lead pigments with titanium white (titanium dioxide), before lead was banned in paint for residential use in 1978 by the US Consumer Product Safety Commission. 
The titanium dioxide used in most paints today is often coated with silica/alumina/zirconium for various reasons, such as better exterior durability, or better hiding performance (opacity) promoted by more optimal spacing within the paint film.\n", "In South Africa, the Hazardous Substances Act of 2009 classifies lead as a hazardous substance and limits its use in paint to 600 parts per million (ppm). A proposed amendment will modify this to 90 ppm, thereby almost completely eradicating lead from paint. The amendment would also include all industrial paints, which were previously excluded.\n", "The reason that lead paint is such a common issue is because of its durability and widespread use. It was constantly endorsed by local and state governments until the 1970s, despite domestic occurrences of lead poisoning and reports from European countries that revealed its toxicity. By 1940, it was commonly associated with negative effects. It was only in the 1970s when the U.S. took action against lead based paints.\n" ]
what is the "cursive writing" thing i keep reading and what is the big deal about it?
Yes, that's cursive, in contrast to "print". [Here are some examples](_URL_0_). In the USA, kids usually learn cursive around ages 7-10, and print before that. The "big deal" is mostly a recurring debate over whether schools should keep teaching it, since many US curricula have dropped or de-emphasized cursive instruction.
[ "Cursive is a style of penmanship in which the symbols of the language are written in a conjoined and/or \"flowing\" manner, generally for the purpose of making writing faster. This writing style is distinct from \"printscript\" using block letters, in which the letters of a word are unconnected and in Roman/Gothic letterform rather than joined-up script. Not all cursive copybooks join all letters: formal cursive is generally joined, but casual cursive is a combination of joins and pen lifts. In the Arabic, Syriac, Latin, and Cyrillic alphabets, many or all letters in a word are connected (while other must not), sometimes making a word one single complex stroke. In Hebrew cursive and Roman cursive, the letters are not connected. In Maharashtra, there is a version of Cursive called 'Modi'\n", "The name \"pe̍h-ōe-jī\" () means \"vernacular writing\", written characters representing everyday spoken language. The name \"vernacular writing\" could be applied to many kinds of writing, romanized and character-based, but the term \"pe̍h-ōe-jī\" is commonly restricted to the Southern Min romanization system developed by Presbyterian missionaries in the 19th century.\n", "Cursive (also known as script or longhand, among other names) is any style of penmanship in which some characters are written joined together in a flowing manner, generally for the purpose of making writing faster, in opposition to block letters. Formal cursive is generally joined, but casual cursive is a combination of joins and pen lifts. The writing style can be further divided as \"looped\", \"italic\" or \"connected\".\n", "Old Roman cursive, also called majuscule cursive and capitalis cursive, was the everyday form of handwriting used for writing letters, by merchants writing business accounts, by schoolchildren learning the Latin alphabet, and even by emperors issuing commands. A more formal style of writing was based on Roman square capitals, but cursive was used for quicker, informal writing. It was most commonly used from about the 1st century BC to the 3rd century AD, but it probably existed earlier than that. In the early 2nd century BC, the comedian Plautus, in \"Pseudolus\", makes reference to the illegibility of cursive letters:\n", "Some writers precede quoted material that is the grammatical object of an active verb of speaking or writing with a comma, as in \"Mr. Kershner says, \"You should know how to use a comma.\"\" Quotations that follow and support an assertion are often preceded by a colon rather than a comma.\n", "\"Cursiva\" refers to a very large variety of forms of blackletter; as with modern cursive writing, there is no real standard form. It developed in the 14th century as a simplified form of \"textualis\", with influence from the form of \"textualis\" as used for writing charters. \"Cursiva\" developed partly because of the introduction of paper, which was smoother than parchment. It was therefore, easier to write quickly on paper in a cursive script.\n", "Because \"the methodologies employed in the collection and categorisation of written signs is still controversial\", basic research questions are still being discussed, such as: \"do small, hand-made signs count as much as large, commercially made signs?\". 
The original technical scope of \"linguistic landscape\" involved plural languages, and almost all writers use it in that sense, but Papen has applied the term to the way public writing is used in a monolingual way in a German city and Heyd has applied the term to the ways that English is written, and people's reactions to these ways.\n" ]
What were the geographical boundaries of the "Old West?" Would we see "cowboy culture" in Canada? Mexico? The Caribbean?
I can speak for Canada a bit, having grown up in a cattle town. We have plenty of cowboy culture, especially in my home province of Alberta. For most of our province's history, our main industries were agriculture and ranching, and even today, they are second only to oil and gas. Although a lot of the cowboy culture has faded, many of the values remain. Rodeo still thrives in Canada, centred around the Calgary Stampede and many other rural rodeos. We also have our own rodeo sport, chuckwagon racing, which I actually have a lot of family competing in. It's the main event here but, to my knowledge, has never caught on elsewhere. For the most part, Canada's cowboys resemble the American variety. This is because borders were basically meaningless back in our frontier days. One big difference might have been gun laws. In Canada, the North West Mounted Police (the first Mounties) enforced strict laws on where guns went. It was illegal to carry a gun in most towns, especially during the Klondike gold rush. Another difference was the lack of Mexican cultural influences, slavery, and the American Civil War. The first cowboy in Alberta, the man who brought cattle to the province, was actually a freed slave from the states named John Ware. The native population was never as violent as it is depicted in the states. They were relegated to reserves fairly early. When the Calgary Stampede first opened, the natives were actually included in the festivities and events. Before that, their admission into the city was tightly regulated. The gold rush in the Yukon and the whiskey trade brought a lot of 'old west' culture to Canada as well. Canadian prohibition was never as strict as it was in the states, so lots of bootleggers out west here smuggled whiskey and rye across the border to serve Americans. Smaller populations, and in turn less government and industry, meant that the cowboy era in Canada started and ended a little later than in the states, but it was basically a branch of the same tree. It was all the frontier 'old west', and then somebody drew a line through it.
[ "As settlers from the United States moved west, they brought cattle breeds developed on the east coast and in Europe along with them, and adapted their management to the drier lands of the west by borrowing key elements of the Spanish vaquero culture.\n", "Cultures combine and collaborate in the Pacific Southwest. Traces of the Old American West can still be seen in some areas, especially in the deserts. Hip-hop is one of the many cultures prevalent here, most noticeable in Los Angeles and the Bay Area. Polynesian culture flourishes in Hawaii, and Hawaiian Pidgin can still be heard in certain areas of the state. Spanish/Mexican culture is the most visible in the region, due to the fact four of the five states were once Spanish/Mexican possessions. Cowboys can be found anywhere in the Pacific Southwest. Hawaii has its own version of the American cowboy, the paniolo. Asian culture is demonstrated in the region, especially in California and Hawaii. The area also has a sizeable black population, along with Arabic and Jewish culture.\n", "This area, identified with the current states of Colorado, Arizona, New Mexico, Utah, and Nevada in the western United States, and the states of Sonora and Chihuahua in northern Mexico, has seen successive prehistoric cultural traditions for a minimum of 12,000 years. An often-quoted statement from Erik Reed (1964) defined the Greater Southwest culture area as extending north to south from Durango, Mexico, to Durango, Colorado, and east to west from Las Vegas, Nevada, to Las Vegas, New Mexico. Differently areas of this region are also known as the American Southwest, North Mexico, and Oasisamerica, while its southern neighboring cultural region is known as Aridoamerica or Chichimeca.\n", "BULLET::::- Eric Van Young, \"The Indigenous Peoples of Western Mexico from the Spanish Invasion to the Present: The Center-West as Cultural Region and Natural Environment,\" in Richard E. W. Adams and Murdo J. MacLeod, The Cambridge History of the Native Peoples of the Americas, Volume II: Mesoamerica, Part 2. Cambridge, U.K.: Cambridge University Press, 2000, pp. 136–186.\n", "Indigenous peoples of the North American Southwest refers to the area identified with the current states of Colorado, Arizona, New Mexico, Utah, and Nevada in the western United States, and the states of Sonora and Chihuahua in northern Mexico. An often quoted statement from Erik Reed (1964) defined the Greater Southwest culture area as extending north to south from Durango, Mexico to Durango, Colorado and east to west from Las Vegas, Nevada to Las Vegas, New Mexico. Other names sometimes used to define the region include \"American Southwest\", \"North Mexico\", \"Chichimeca\", and \"Oasisamerica/Aridoamerica\". This region has long been occupied by hunter-gatherers and agricultural people.\n", "The National Multicultural Western Heritage Museum, formerly the National Cowboys of Color Museum and Hall of Fame, is a museum and hall of fame in Fort Worth, Texas. NMWHM takes a look at the people and activities that built the unique culture of the American West, in particular the contributions of Hispanic Americans, Native Americans, European Americans, and African Americans. The work of artists who documented the people and events of the time through journals, photographs and other historical items are part of this new collection. These long overlooked materials tell, perhaps for the first time, the complete story. 
The American West of today still operates on many of the principles and cultural relationships begun so long ago.\n", "Southwestern New Mexico is a region of the U.S. state of New Mexico commonly defined by Hidalgo County, Grant County, Catron County, Luna County, Doña Ana County, Sierra County, and Socorro County. Some important towns there are Lordsburg, Silver City, Deming, Las Cruces, Truth or Consequences, Socorro, Reserve, and Rodeo. Natural attractions there include White Sands National Monument, the Organ Mountains, Bosque del Apache National Wildlife Refuge, and the Gila Wilderness surrounding the Gila Cliff Dwellings National Monument. Southwestern New Mexico is also home to both the Very Large Array and White Sands Missile Range containing the Trinity Site.\n" ]
why can we use controllers with pcs but not keyboard and mouse with consoles?
Consoles can use a mouse and keyboard. Almost any USB mouse/keyboard will plug in and work with modern consoles; you can type messages, browse the web, etc. Some games do support kb/m on console, Counter-Strike and War Thunder for example. Many games don't, simply because it takes extra effort to program for, and because kb/m gives a distinct advantage in many game types, which makes matches unfair. TL;DR: It's extra work and an unfair advantage, but kb/m can be used on consoles and in certain console games.
[ "Virtually all personal computers use a keyboard and mouse for user input. Other common gaming peripherals are a headset for faster communication in online games, joysticks for flight simulators, steering wheels for driving games and gamepads for console-style games.\n", "Unlike the PlayStation, which requires the use of an official Sony PlayStation Mouse to play mouse-compatible games, the few PS2 games with mouse support work with a standard USB mouse as well as a USB trackball. In addition, some of these games also support the usage of a USB keyboard for text input, game control (in lieu of a DualShock or DualShock 2 gamepad, in tandem with a USB mouse), or both.\n", "Players are able to control their characters and interact with the virtual world by using various controller systems. When using a PC, a typical example of a games control system would be the use of a mouse and keyboard combined. For example, the movement of the mouse could provide control of the players viewpoint from the character and the mouse buttons may be used for weapon trigger control. Certain keys on the keyboard would control movement around the virtual scenery and also often add possible additional functions. Games consoles however, use hand held 'control pads' which normally have a number of buttons and joysticks (or 'thumbsticks') which provide the same functions as the mouse and keyboard. Players often have the option to communicate with each other during the game by using microphones and speakers, headsets or by 'instant chat' messages if using a PC.\n", "Since the advent and subsequent popularization of the personal computer, few genuine hardware terminals are used to interface with computers today. Using the monitor and keyboard, modern operating systems like Linux and the BSD derivatives feature virtual consoles, which are mostly independent from the hardware used.\n", "Most modern space flight games on the personal computer allow a player to utilise a combination of the WASD keys of the keyboard and mouse as a means of controlling the game (games such as Microsoft's \"Freelancer\" use this control system exclusively). By far the most popular control system among genre enthusiasts, however, is the joystick. Most fans prefer to use this input method whenever possible, but expense and practicality mean that many are forced to use the keyboard and mouse combination (or gamepad if such is the case). The lack of uptake among the majority of modern gamers has also made joysticks a sort of an anachronism, though some new controller designs and simplification of controls offer the promise that space sims may be playable in their full capacity on gaming consoles at some time in the future. In fact, \"\", sometimes considered one of the more cumbersome and difficult series to master within the trading and combat genre, was initially planned for the Xbox but later cancelled.\n", "PlayStation 3 controllers are also supported in Linux; simply connect the controller to the computer using a USB cable and press the PS button. One application to map controller buttons and joysticks to the keyboard keys used by a particular game is Qjoypad. The documentation is extensive and the application requires you to configure Profiles for each game you use.\n", "Personal Computer (PC) based controllers are relatively new, and use software with a feature set similar to that found in a hardware-based console. 
As dimmers, automated fixtures and other standard lighting devices do not generally have current standard computer interfaces, options such as DMX-512 ports and fader/submaster panels connected via USB are commonplace.\n" ]
Why does there seem to be such a lack of emphasis on the Pacific Theater of WWII in American pop culture and History?
I would wager it has partly to do with the different racial components of the two theaters, and the subsequent disparity in the "goodness" of the war in each. The fight against the Nazis has been continuously held up since the 1940s as the epitome of a "good war." American soldiers fought and died to liberate Western Europe from a Fascist anti-democratic foe, which has been consistently depicted in propaganda and pop culture as evil incarnate (an example of the latter is how often Nazi soldiers are the bad guys in FPS video games, where killing them is never controversial). The eugenicist and genocidal practices of the Nazis lend greater support to this idea of the European theater being a fight between the forces of good and evil (this is helped by the fact that American eugenicist and anti-Semitic policies in the 1930s and 1940s are largely unknown or under-known by Americans today). Meanwhile, the American war in the Pacific is not nearly so easily depicted in stark moral terms. Although the war was initiated by a sneak attack carried out by Japan on the US, the American response to that attack was to corral the West Coast's Japanese American population, citizens and noncitizens alike, into concentration camps. Furthermore, American propaganda throughout the war depicted the Japanese in explicitly racist terms; while the war in Europe was depicted as a fight between freedom and fascism, the war in the Pacific was depicted as a fight between white democracy and Oriental despotism. Finally, anti-Japanese sentiment lingered for decades after the war ended, while anti-German sentiment almost immediately disappeared at the beginning of the Cold War. In short, it has been relatively easy to depict WWII in Europe in terms of stark moral and political contrast (good vs evil, democracy vs fascism, liberty vs tyranny), while America's war with Japan was much more controversial, both in terms of its conduct (concentration camps, racist propaganda) and its aftermath (lingering anti-Asian sentiments and violence). Given this state of affairs, pop culture more readily focuses on the European Theater while paying much less attention to the Pacific.
[ "Part of the reason why \"South Pacific\" is considered a classic is its confrontation of racism. According to professor Philip Beidler, \"Rodgers and Hammerstein's attempt to use the Broadway theater to make a courageous statement against racial bigotry in general and institutional racism in the postwar United States in particular\" forms part of \"South Pacific\" 's legend. Although \"Tales of the South Pacific\" treats the question of racism, it does not give it the central place that it takes in the musical. Andrea Most, writing on the \"politics of race\" in \"South Pacific\", suggests that in the late 1940s, American liberals, such as Rodgers and Hammerstein, turned to the fight for racial equality as a practical means of advancing their progressive views without risking being deemed communists. Trevor Nunn, director of the 2001 West End production, notes the importance of the fact that Nellie, a southerner, ends the play about to be the mother in an interracial family: \"It's being performed in America in 1949. That's the resonance.\"\n", "In the case of the United States, the Pacific Theatre was primarily a naval and air effort despite losing ships during the 1941 Pearl Harbor Attack while ground forces were used in Europe. Like in Japan, most ground troops were fighting China, and the Pacific Theatre was also primarily a naval and aerial battle. It was also the first time the United States ever fought a two-front war.\n", "In Allied countries during the war, the \"Pacific War\" was not usually distinguished from World War II in general, or was known simply as the \"War against Japan\". In the United States, the term \"Pacific Theater\" was widely used, although this was a misnomer in relation to the Allied campaign in Burma, the war in China and other activities within the Southeast Asian Theater. However, the US Armed Forces considered the China-Burma-India Theater to be distinct from the Asiatic-Pacific Theater during the conflict.\n", "Between 1942 and 1945, there were four main areas of conflict in the Pacific War: China, the Central Pacific, South-East Asia and the South West Pacific. US sources refer to two theaters within the Pacific War: the Pacific theater and the China Burma India Theater (CBI). However these were not operational commands.\n", "In this history, the Pacific War against Japan is treated as essentially a sideshow, getting only a trickle of resources - since the US are facing a dangerous invasion of their industrial heartland. Strategic aims in the Pacific are confined to recapturing Midway to remove the threat to the Sandwich Islands, and characters consider the idea of conducting an island-hopping war all the way to the Japanese home islands (as the US did in World War II) as an unrealistic fantasy. Also, in this history, the Philippines and Guam are long-standing and recognized possessions of the Japanese, which they had wrested from Spain during the Hispano-Japanese War between the late 1800s and early 1900s and to which the US laid no claim.\n", "From December 8, 1941 is when the USA entered the WWII. It was a big year for the country because they had to arrange for the results of the war. The main focus that the US wanted to make on films was there own historical phenomena and a spread of US culture. The war films made focused mostly on the \"desperate affirmation\" and the \"societal tensions\". Many films main focus was about the war; they wanted to make sure that they explain the objectives. 
The US war films were good and bad, many of them showed the different lives of the people during the war. The importance of these films and as studies have mentioned, is the influence behind these films. Furthermore, war films showed a lot of information about the war and the life of their families just like the film Since You Went Away. When the US government noticed the content of the feature films they became more interested in the political and social significance messages in the film. This shows how Hollywood wanted to raise two important production of films together with war films. With the growth of the film industry came the growth of the influence of Hollywood celebrities. Hollywood stars appeared in advertisements and toured the country to encourage citizens to purchase war bonds to support their country in the war.\n", "During the social uprisings in the 1960s in North America and Europe against the Vietnam War, pacification came to connote bombing people into submission and waging an ideological war against the opposition. However, after the Vietnam War, pacification was dropped from the official discourse as well as from the discourse of opposition. Although approach towards the term and practices of pacification both in the concept's sixteenth-century and twentieth-century colonial meanings were somehow related to the concepts of war, security and police power, the real connection between pacification and these concepts has never been revealed in the literature on international relations, conflict studies, criminology or political science. Neocleous has argued that the connection between pacification and the ideological discourse on security is related to the terms use in broader Western social and political thought in general, and liberal theory in particular. In short, that liberalism’s key concept is less liberty and more security and that liberal doctrine is inherently less committed to peace and far more to legitimizing violence.\n" ]
Can someone with a weakened immune system receive a vaccine?
It depends on the vaccine and the illness. Live vaccines (e.g. the yellow fever vaccine) tend not to be given to people with compromised immune systems, whereas some inactivated viral vaccines may be given to those with weakened immune systems, depending on their clinical condition. For example, it might be preferable for someone on long-term immune-modulating drugs to receive a flu vaccine to prevent them from developing full-blown flu. In the UK we use "the green book" for vaccine requirements and contraindications, as well as taking an individual's clinical picture into account.
[ "Some people cannot be fully protected from vaccine-preventable diseases by direct vaccination. These are often people with weak immune systems, who are more likely to get seriously ill. Their risk of infection can be significantly reduced if those who are most likely to infect them get the appropriate vaccines.\n", "Immunity status of varicella should be performed at the pre-conception counseling session, in order to prevent the occurrence of congenital varicella syndrome and other adverse effects of varicella in pregnancy. Generally, a person with a positive medical history of varicella infection can be considered immune. Among adults in the United States having a negative or uncertain history of varicella, approximately 85%-90% will be immune. Therefore, an effective method is that people with a negative or uncertain history of varicella infection have a serology to check antibody production before receiving the vaccine. The CDC recommends that all adults be immunized if seronegative.\n", "Some individuals either cannot develop immunity after vaccination or for medical reasons cannot be vaccinated. Newborn infants are too young to receive many vaccines, either for safety reasons or because passive immunity renders the vaccine ineffective. Individuals who are immunodeficient due to HIV/AIDS, lymphoma, leukemia, bone marrow cancer, an impaired spleen, chemotherapy, or radiotherapy may have lost any immunity that they previously had and vaccines may not be of any use for them because of their immunodeficiency. Vaccines are typically imperfect as some individuals' immune systems may not generate an adequate immune response to vaccines to confer long-term immunity, so a portion of those who are vaccinated may lack immunity. Lastly, vaccine contraindications may prevent certain individuals from becoming immune. In addition to not being immune, individuals in one of these groups may be at a greater risk of developing complications from infection because of their medical status, but they may still be protected if a large enough percentage of the population is immune.\n", "However, the antigen of some pathogenic bacteria does not elicit a strong response from the immune system, so a vaccination against this weak antigen would not protect the person later in life. In this case, a conjugate vaccine is used in order to invoke an immune system response against the weak antigen. In a conjugate vaccine, the weak antigen is covalently attached to a strong antigen, thereby eliciting a stronger immunological response to the weak antigen. Most commonly, the weak antigen is a polysaccharide that is attached to strong protein antigen. However, peptide/protein and protein/protein conjugates have also been developed.\n", "Additionally, the vaccines may also prevent illness in non-vaccinated children by limiting exposure through the number of circulating infections. A 2014 review of available clinical trial data from countries routinely using rotavirus vaccines in their national immunization programs found that rotavirus vaccines have reduced rotavirus hospitalizations by 49–92% and all-cause diarrhea hospitalizations by 17–55%.\n", "If the tests show significant abnormalities of the immune system, a specialist in immunodeficiency or infectious diseases will be able to discuss various treatment options. Absence of immunoglobulin or antibody responses to vaccine can be treated with replacement gamma globulin infusions, or can be managed with prophylactic antibiotics and minimized exposure to infection. 
If antibody function is normal, all routine childhood immunizations including live viral vaccines (measles, mumps, rubella and varicella) should be given. In addition, several “special” vaccines (that is, licensed but not routine for otherwise healthy children and young adults) should be given to decrease the risk that an A–T patient will develop lung infections. The patient and all household members should receive the influenza (flu) vaccine every fall. People with A–T who are less than two years old should receive three (3) doses of a pneumococcal conjugate vaccine (Prevnar) given at two month intervals. People older than two years who have not previously been immunized with Prevnar should receive two (2) doses of Prevnar. At least 6 months after the last Prevnar has been given and after the child is at least two years old, the 23-valent pneumococcal vaccine should be administered. Immunization with the 23-valent pneumococcal vaccine should be repeated approximately every five years after the first dose.\n", "Some vaccines require multiple doses, spaced over time, to be effective. Those who have not yet received all the doses (including all young babies) are not yet fully immune, and rely on the immunity of those around them. Some vaccines also require booster doses in later life; those who have not received their booster doses can be infected and infect others.\n" ]
Could we in theory create Saturn style rings around the Earth, for shits and giggles.
Some people theorize that the Earth may have had a ring or two in its past, and that they caused massive climate changes. Because of the Earth's significant tilt, a ring would cause winters to be far colder due to the increased shade. It is also thought that the Earth could not hold on to a ring for more than a million years or so due to solar wind and interference from the moon. Source: [Sandia National Laboratories](_URL_1_) [Astroscience](_URL_0_)
[ "Rings of Saturn is an American deathcore band from the Bay Area, California. The band was formed in 2009 and was originally just a studio project. However, after gaining a wide popularity and signing to Unique Leader Records, the band formed a full line-up and became a full-time touring band. Rings of Saturn's music features a highly technical style, heavily influenced by themes of alien life and outer space. They have released four full-length albums, with their third, \"Lugal Ki En\", released in 2014 and peaking at 126 on the American \"Billboard\" 200 chart while their fourth, \"Ultu Ulla\" was released in 2017 and peaked at 76 on the Billboard 200 chart, making it the band's highest peak to date.\n", "Saturn's rings are the most extensive ring system of any planet in the Solar System, and thus have been known to exist for quite some time. Galileo Galilei first observed them in 1610, but they were not accurately described as a disk around Saturn until Christiaan Huygens did so in 1655. With help from the NASA/ESA/ASI Cassini mission, a further understanding of the ring formation and active movement was understood. The rings are not a series of tiny ringlets as many think, but are more of a disk with varying density. They consist mostly of water ice and trace amounts of rock, and the particles range in size from micrometers to meters.\n", "According to lead vocalist Gruff Rhys, \"(Drawing) Rings Around the World\" is about \"all the rings of communication around the world. All the rings of pollution, and all the radioactivity that goes around. If you could visualize all the things we don't see, Earth could look like some kind of fucked-up Saturn. And that's the idea I have in my head – surrounded by communication lines and traffic and debris thrown out of spaceships.\" Rhys has claimed that the theory was initially his girlfriend's father's. The track was recorded in 2000 at Monnow Valley Studio, Rockfield, Monmouthshire and was produced by the Super Furry Animals and Chris Shaw.\n", "Rings of Saturn was formed in 2009 in high school only as a studio recording project with Lucas Mann on guitars, bass, and keyboards, Peter Pawlak on vocals, and Brent Silletto on drums. The band posted a track titled \"Abducted\" online and quickly gained listeners. The band recorded their debut album, \"Embryonic Anomaly\", with Bob Swanson at Mayhemnness Studios in Sacramento, CA. The album was self-released by the band on May 25, 2010. Four months after releasing \"Embryonic Anomaly\", the band signed to Unique Leader Records. In the months following the band's signing, Joel Omans was added as a second guitarist and the band graduated high school which led to their embarking on tours. \"Embryonic Anomaly\" was re-released through Unique Leader on March 1, 2011, and their two following albums would later also be released through the label. In December 2011, Brent Silletto and Peter Pawlak both left the band on their own decisions, mainly to seek out a different lifestyle.\n", "AllMusic described Rings of Saturn as a \"progressive, technical deathcore outfit\", writing that they have \"humorously deemed their brand of technical death metal 'aliencore.'\" The band employs fast riffing with an added harmony effect, fast tempos, ambient elements, and lyrics that deal with space invasion and extraterrestrial life.\n", "\"Rings of Saturn\" is located in the Sir Rupert Hamer Garden, in the grounds of the Heide Museum of Modern Art in Bulleen, a suburb of Melbourne. 
Shortly after the dedication of this work, in August 2006, King said:\n", "The rings of Saturn are the most extensive ring system of any planet in the Solar System. They consist of countless small particles, ranging in size from micrometers to meters, that orbit about Saturn. The ring particles are made almost entirely of water ice, with a trace component of rocky material. There is still no consensus as to their mechanism of formation. Although theoretical models indicated that the rings were likely to have formed early in the Solar System's history, new data from \"Cassini\" suggest they formed relatively late.\n" ]
In the American Civil War, was the Union victory at Vicksburg of equal, lesser, or greater significance than Antietam and/or Gettysburg were to ending the war?
This is a good question, and one still difficult to assess nowadays. First, Antietam: I believe it was never considered a big victory like Vicksburg or Gettysburg. It was still considered a victory, and good enough for Lincoln to issue his Emancipation Proclamation, but for the public in general it was obscured by the fact that it was the bloodiest day of the war up to then; still, Lee left the field by extricating his troops overnight, which made it a Union victory by the standards of the time. But it was an inconclusive victory anyway. Reaction to Vicksburg and Gettysburg was quite different. On one hand, Vicksburg was well documented and expected: Grant had put the city under siege for months, everyone expected the outcome, and the campaign was well covered by the newspapers. Gettysburg, on the other hand, just happened; an engagement was foreseen sooner or later, since once both armies were set in motion they were bound to clash at some point. Reaction to the two victories varied, however. Grant's victory was much praised; Meade's victory at Gettysburg, however, seems to have received some criticism, especially from President Lincoln himself. Meade was criticized for not counterattacking and pursuing Lee's army to destroy it. Pemberton's army at Vicksburg collapsed and surrendered, the place was lost, and the Mississippi River was closed to the Confederacy. Gettysburg, on the other hand, represented no territorial gains: Lee's army retreated (with heavy losses, and never to regain the initiative) but kept its cohesion to fight on for two more years; the Union Army of the Potomac was heavily battered too after three days of fighting; and overall the South does not seem to have perceived it as a major defeat. Yes, Lee was repulsed and it was a setback, but he was not bowed. In the end, after all these battles nobody could see a clear end to the war, and they were right, as the war went on for two more years. Professor Gary Gallagher has argued this point extensively. Answering your question: Vicksburg seems to have had a greater impact on the American public in terms of victory perception. Vicksburg would also precipitate the rise of Grant as commander-in-chief of all Union armies, bringing a much-needed change in the chain of command and a badly needed change in the Eastern theater and the overall Union strategy.
[ "The Confederate and Union armies met at the Battle of Gettysburg on July 1. The battle, fought over three days, resulted in the highest number of casualties in the war. Along with the Union victory in the Siege of Vicksburg, the Battle of Gettysburg is often referred to as a turning point in the war. Though the battle ended with a Confederate retreat, Lincoln was dismayed that Meade had failed to destroy Lee's army. Feeling that Meade was a competent commander despite his failure to pursue Lee, Lincoln allowed Meade to remain in command of the Army of the Potomac. The Eastern Theater would be locked in a stalemate for the remainder of 1863.\n", "During the American Civil War, the city finally surrendered during the Siege of Vicksburg, after which the Union Army gained control of the entire Mississippi River. The 47-day siege was intended to starve the city into submission. Its location atop a high bluff overlooking the Mississippi River proved otherwise impregnable to assault by federal troops. The surrender of Vicksburg by Confederate General John C. Pemberton on July 4, 1863, together with the defeat of General Robert E. Lee at Gettysburg the day before, has historically marked the turning point of the Civil War in the Union's favor.\n", "The decisive victories by Grant and Sherman resulted in the surrender of the major Confederate armies. The first and most significant was on April 9, 1865, when Robert E. Lee surrendered the Army of Northern Virginia to Grant at Appomattox Court House. Although there were other Confederate armies that surrendered in the following weeks, such as Joseph E. Johnston's in North Carolina, this date was nevertheless symbolic of the end of the bloodiest war in American history, the end of the Confederate States of America, and the beginning of the slow process of Reconstruction.\n", "1863 was to be the year, however, in which the tide turned in favor of the Union. The Battle of Gettysburg in July 1863 was the first time that Lee was soundly defeated. Prompted by Sarah Josepha Hale,\n", "It is currently a widely held view that Gettysburg was a decisive victory for the Union, but the term is considered imprecise. It is inarguable that Lee's offensive on July 3 was turned back decisively and his campaign in Pennsylvania was terminated prematurely (although the Confederates at the time argued that this was a temporary setback and that the goals of the campaign were largely met). However, when the more common definition of \"decisive victory\" is intended—an indisputable military victory of a battle that determines or significantly influences the ultimate result of a conflict—historians are divided. For example, David J. Eicher called Gettysburg a \"strategic loss for the Confederacy\" and James M. McPherson wrote that \"Lee and his men would go on to earn further laurels. But they never again possessed the power and reputation they carried into Pennsylvania those palmy summer days of 1863.\"\n", "The Union counteroffensive never came; the Army of the Potomac was exhausted and nearly as damaged at the end of the three days as the Army of Northern Virginia. Meade was content to hold the field. On July 4, the armies observed an informal truce and collected their dead and wounded. Meanwhile, Maj. Gen. Ulysses S. Grant accepted the surrender of the Vicksburg garrison along the Mississippi River, splitting the Confederacy in two. 
These two Union victories are generally considered the turning point of the Civil War.\n", "By 1864, the American Civil War was slowly drawing to a close. With Abraham Lincoln re-elected as President of the Union, and Gen. Ulysses Grant made commander of the Union Army, the possibility of a Confederate victory was steadily lessened. Along the Eastern Seaboard, Union forces pushed the Confederate forces of Gen. Robert E. Lee steadily back in successive Union victories at Wilderness and Spotsylvania. In the Appalachian mountains, Phillip Sheridan had defeated Confederate armies in the Shenandoah valley. As Union forces pushed southward, they destroyed significant portions of the Confederate agriculture base. As Union forces defeated Confederate armies in the northern reaches of the CSA, Gen. William T. Sherman began his march to the sea, which would eventually succeed in destroying 20% of the agricultural production in Georgia.\n" ]
please explain utilitarianism to me like i'm 5.
There's this Cookie Monster, and he's obsessed with getting cookies. Whatever gets him the most cookies is what makes him the happiest. So if by taking a cookie from someone the Cookie Monster can get two cookies, then even though the person he took the cookie from is losing a cookie, there's still a net gain of one cookie. Utilitarianism is that idea on a grander scale: whatever causes the greatest worldwide happiness is the best thing to do.
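To make the cookie arithmetic concrete, here is a toy sketch of the sum-and-compare step the answer describes (the names and numbers are made up for illustration, not part of any formal theory):

```python
# Toy utilitarian calculus: score an action by summing everyone's
# change in "utility" (here, cookies). A positive total is judged good.
def net_utility(changes: dict) -> int:
    return sum(changes.values())

# Cookie Monster takes one cookie and turns it into two for himself.
take_cookie = {"Cookie Monster": +2, "cookie owner": -1}
print(net_utility(take_cookie))  # 1 -> net gain, so "the best thing to do"
```

A real utilitarian would also have to argue about how to measure utility in the first place; the sketch only shows the adding-up part.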
[ "Utilitarianism (from the Latin utilis, useful) is a theory of ethics that prescribes the quantitative maximization of good consequences for a population. It is a form of consequentialism. This good to be maximized is usually happiness, pleasure, or preference satisfaction. Though some utilitarian theories might seek to maximize other consequences, these consequences generally have something to do with the welfare of people (or of people and nonhuman animals). For this reason, utilitarianism is often associated with the term welfarist consequentialism.\n", "Utilitarianism is a family of consequentialist ethical theories that promotes actions that maximize happiness and well-being for the majority of a population. Although different varieties of utilitarianism admit different characterizations, the basic idea behind all of them is to in some sense maximize utility, which is often defined in terms of well-being or related concepts. For instance, Jeremy Bentham, the founder of utilitarianism, described utility as\n", "Utilitarianism is a version of consequentialism, which states that the consequences of any action are the only standard of right and wrong. Unlike other forms of consequentialism, such as egoism and altruism, utilitarianism considers the interests of all beings equally.\n", "Utilitarianism is a type of consequentialist ethical theory. According to such theories, only the outcome of an action is morally relevant (this contrasts with deontology, according to which moral actions flow from duties or motives). Utilitarianism is \"a combination of consequentialism and the\" philosophical position hedonism, which states that pleasure, or happiness, is the only good worth pursuing. Therefore, since only the consequences of an action matter, and only happiness matters, \"only happiness that is the consequence of an action is morally relevant\". There are similarities with preference utilitarianism, where utility is defined as individual preference rather than pleasure.\n", "Utilitarianism was most prominently defended by British philosophers Jeremy Bentham and John Stuart Mill. Though there are many varieties of utilitarianism, generally it is the view that a morally right action is an action that produces the maximum good for people. Utilitarianism has often been used when deciding how to use land and it is closely connected with an economic-based ethic. For example, it forms the foundation for industrial farming; an increase in yield, which would increase the number of people able to receive goods from farmed land, is judged from this view to be a good action or approach. In fact, a common argument in favor of industrial agriculture is that it is a good practice because it increases the benefits for humans; benefits such as food abundance and a drop in food prices. However, a utilitarian-based land ethic is different from a purely economic one as it could be used to justify the limiting of a person's rights to make profit. For example, in the case of the farmer planting crops on a slope, if the runoff of soil into the community creek led to the damage of several neighbor's properties, then the good of the individual farmer would be overridden by the damage caused to his neighbors. Thus, while a utilitarian-based land ethic can be used to support economic activity, it can also be used to challenge this activity.\n", "Utilitarianism is a form of consequentialism whereby decisions are made by predicting the outcome that determines the moral worth of an action. 
It assumes that the system of legal rules as opposed to individual moral rules provide the relevant scope of a decision.\n", "Utilitarianism addresses problems with moral motivation neglected by Kantianism by giving a central role to happiness. It is an ethical theory holding that the proper course of action is the one that maximizes the overall good of the society. It is thus one form of consequentialism, meaning that the moral worth of an action is determined by its resulting outcome. The most influential contributors to this theory are considered to be the 18th and 19th-century British philosophers Jeremy Bentham and John Stuart Mill. Conjoining hedonism—as a view as to what is good for people—to utilitarianism has the result that all action should be directed toward achieving the greatest total amount of happiness (see Hedonic calculus). Though consistent in their pursuit of happiness, Bentham and Mill's versions of hedonism differ. There are two somewhat basic schools of thought on hedonism:\n" ]
what do ionizers in air purifiers do?
It introduces a mild charge to the small particles in the air, which makes them stick to things in the room rather than float around forever. But on a practical level with consumer devices... I can't actually tell when they're on or off, so I don't think they do much. I leave mine off for the most part. I wouldn't factor the ionizer into your decision at all. I have cats, but I'm allergic to cats, so my allergy symptoms are a decent indicator of whether something works or not.
[ "Air ionisers are often used in places where work is done involving static-electricity-sensitive electronic components, to eliminate the build-up of static charges on non-conductors. As those elements are very sensitive to electricity, they cannot be grounded because the discharge will destroy them as well. Usually, the work is done over a special dissipative table mat, which allows a very slow discharge, and under the air gush of an ioniser.\n", "Air ionisers are used in air purifiers to remove particles from air. Airborne particles become charged as they attract charged ions from the ioniser by electrostatic attraction. The particles in turn are then attracted to any nearby earthed (grounded) conductors, either deliberate plates within an air cleaner, or simply the nearest walls and ceilings. The frequency of nosocomial infections in British hospitals prompted the National Health Service (NHS) to research the effectiveness of anions for air purification, finding that repeated airborne acinetobacter infections in a ward were eliminated by the installation of a negative air ioniser—the infection rate fell to zero, an unexpected result. Positive and negative ions produced by air conditioning systems have also been found by a manufacturer to inactivate viruses including influenza.\n", "An air ioniser (or negative ion generator or Chizhevsky's chandelier) is a device that uses high voltage to ionise (electrically charge) air molecules. Negative ions, or anions, are particles with one or more extra electron, conferring a net negative charge to the particle. Cations are positive ions missing one or more electrons, resulting in a net positive charge. Some commercial air purifiers are designed to generate negative ions. Another type of air ioniser is the electrostatic discharge (ESD) ioniser (balanced ion generator) used to neutralise static charge. In 2002, in an obituary in \"The Independent\" newspaper, Cecil Alfred 'Coppy' Laws was credited with being the inventor of the domestic air ioniser.\n", "BULLET::::- Ionizer purifiers use charged electrical surfaces or needles to generate electrically charged air or gas ions. These ions attach to airborne particles which are then electrostatically attracted to a charged collector plate. This mechanism produces trace amounts of ozone and other oxidants as by-products. Most ionizers produce less than 0.05 ppm of ozone, an industrial safety standard. There are two major subdivisions: the fanless ionizer and fan-based ionizer. Fanless ionizers are noiseless and use little power, but are less efficient at air purification. Fan-based ionizers clean and distribute air much faster. Permanently mounted home and industrial ionizer purifiers are called electrostatic precipitators.\n", "Ionizers are used especially when insulative materials cannot be grounded. Ionization systems help to neutralize charged surface regions on insulative or dielectric materials. Insulating materials prone to triboelectric charging of more than 2,000 V should be kept away at least 12 inches from sensitive devices to prevent accidental charging of devices through field induction. On aircraft, static dischargers are used on the trailing edges of wings and other surfaces.\n", "Ionisers are distinct from ozone generators, although both devices operate in a similar way. Ionisers use electrostatically charged plates to produce positively or negatively charged gas ions (for instance N or O) that particulate matter sticks to in an effect similar to static electricity. 
Even the best ionisers will also produce a small amount of ozone—triatomic oxygen, O3—which is unwanted. Ozone generators are optimised to attract an extra oxygen ion to an O2 molecule, using either a corona discharge tube or UV light.\n", "Air ionization is achieved at high altitude (electrical conductivity of air increases as atmospheric pressure reduces according to Paschen's law) using various techniques: high voltage electric arc discharge, RF (microwaves) electromagnetic glow discharge, laser, e-beam or betatron, radioactive source… with or without seeding of low ionization potential alkali substances (like caesium) into the flow.\n" ]
how exactly are compounds named?
-ide is the suffix of any negatively charged anion, e.g. the anion of chlorine (Cl) is called chloride (Cl^(-)) --- -ate and -ite are the suffixes of some polyatomic ions. That's a really messy topic to get into and some of the naming isn't always logical. Nitr**ate** is NO*_3_*^(-), nitr**ite** is NO*_2_*^- Chlor**ate** is ClO*_3_*^(-), chlor**ite** is ClO*_2_*^- (also there is **per**chlor**ate** which is ClO*_4_*^(-) and **hypo**chlor**ite** which is ClO^(-)...) Sulf**ate** is SO*_4_*^(2-), sulf**ite** is SO*_3_*^(2-) Phosph**ate** is PO*_4_*^(3-), phosph**ite** is HPO*_3_*^(2-) Which is which sort of just has to be memorised, sorry... --- -ous and -ic have been deprecated but some syllabuses haven't been updated -ous is the lower of two oxidation states, -ic is the higher, and this gets applied to the Latin name e.g. ferrous is iron(II) and ferric is iron(III), cuprous is copper(I) and cupric is copper(II). Like I said, this system has been deprecated, IUPAC recommends everyone uses names like iron(III) chloride instead of ferric chloride. --- peroxide denotes an oxygen-oxygen single bond, hydrogen peroxide looks like this: H-O-O-H permanganate is another polyatomic ion like the other -ate ones above...it is MnO*_4_*^- not to be confused with just regular manganate which is MnO*_4_*^(2-)... --- As above the Latin names have been deprecated, but you'll still have to learn them... The modern way is to write the oxidation state in brackets immediately after the metal.
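Since the -ate/-ite pairs mostly have to be memorised, a small lookup table is one way to drill them. This is just an illustrative sketch covering the ions named above, with formulas written in plain text:

```python
# Common polyatomic anions from the answer above, as (formula, charge).
# Pattern: -ate = more oxygens, -ite = fewer; per- adds one O, hypo- removes one.
POLYATOMIC_IONS = {
    "nitrate":      ("NO3",  -1),
    "nitrite":      ("NO2",  -1),
    "perchlorate":  ("ClO4", -1),
    "chlorate":     ("ClO3", -1),
    "chlorite":     ("ClO2", -1),
    "hypochlorite": ("ClO",  -1),
    "sulfate":      ("SO4",  -2),
    "sulfite":      ("SO3",  -2),
    "phosphate":    ("PO4",  -3),
    "phosphite":    ("HPO3", -2),
    "permanganate": ("MnO4", -1),
    "manganate":    ("MnO4", -2),
}

def describe(name: str) -> str:
    formula, charge = POLYATOMIC_IONS[name]
    return f"{name} = {formula}^{charge:+d}"

print(describe("nitrate"))  # nitrate = NO3^-1
print(describe("sulfite"))  # sulfite = SO3^-2
```

Note that permanganate and manganate share the MnO4 formula and differ only in charge, which is exactly why the names trip people up.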
[ "The chemical names are the scientific names, based on the molecular structure of the drug. There are various systems of chemical nomenclature and thus various chemical names for any one substance. The most important is the IUPAC name. Chemical names are typically very long and too complex to be commonly used in referring to a drug in speech or in prose documents. For example, \"1-(isopropylamino)-3-(1-naphthyloxy) propan-2-ol\" is a chemical name for propranolol. Sometimes, a company that is developing a drug might give the drug a company code, which is used to identify the drug while it is in development. For example, CDP870 was UCB’s company code for certolizumab pegol; UCB later chose \"Cimzia\" as its trade name. Many of these codes, although not all, have prefixes that correspond to the company name.\n", "A \"compound\" is a pure chemical substance composed of more than one element. The properties of a compound bear little similarity to those of its elements. The standard nomenclature of compounds is set by the International Union of Pure and Applied Chemistry (IUPAC). Organic compounds are named according to the organic nomenclature system. The names for inorganic compounds are created according to the inorganic nomenclature system. When a compound has more than one component, then they are divided into two classes, the electropositive and the electronegative components. In addition the Chemical Abstracts Service has devised a method to index chemical substances. In this scheme each chemical substance is identifiable by a number known as its CAS registry number.\n", "Chemical nomenclature, replete as it is with compounds with complex names, is a repository for some very peculiar and sometimes startling names. A browse through the \"Physical Constants of Organic Compounds\" in the \"CRC Handbook of Chemistry and Physics\" (a fundamental resource) will reveal not just the whimsical work of chemists, but the sometimes peculiar compound names that occur as the consequence of simple juxtaposition. Some names derive legitimately from their chemical makeup, from the geographic region where they may be found, the plant or animal species from which they are isolated or the name of the discoverer.\n", "A number of special naming systems exist for these compounds. For instance a Schiff base derived from an aniline, where R is a phenyl or a substituted phenyl, can be called an \"anil\", while bis-compounds are often referred to as salen-type compounds.\n", "This practice is fully well-established, and IUPAC has accepted such names. In light of the current chemical nomenclature, this practice is, however, very exceptional, because systematic names of all other compounds are formed only according to what elements they contain and what is their molecular structure, not according to what other properties (for example, acidity) they have.\n", "Many compounds are also known by their more common, simpler names, many of which predate the systematic name. For example, the long-known sugar glucose is now systematically named 6-(hydroxymethyl)oxane-2,3,4,5-tetrol. Natural products and pharmaceuticals are also given simpler names, for example the mild pain-killer Naproxen is the more common name for the chemical compound (S)-6-methoxy-α-methyl-2-naphthaleneacetic acid.\n", "IUPAC nomenclature is used for the naming of chemical compounds, based on their chemical composition and their structure. 
For example, one can deduce that 1-chloropropane has a chlorine atom on the first carbon in the 3-carbon propane chain.\n" ]
I've read a little bit about the effects of THC on cancer. Is any of this research substantial or is it just not known enough?
The research in this paper was done on cell lines, so it was *in vitro* rather than *in vivo*. > One possible drawback could be that use of select CB2 agonists to kill tumor cells may also cause immunosuppression. Thus, further studies are necessary to address the relative sensitivity of normal and transformed immune cells to CB2 agonists in vivo. The pathway they are testing here is very specific, so the researchers need to test it in a living specimen to see if they will get the same results. A lot can change from *in vitro* to *in vivo*. But it is still a cool study on THC. It was published in 2006, so there may be more recent articles.
[ "While the mutagenic and genotoxic effects of TCDD are sometimes disputed and sometimes confirmed it does foster the development of cancer. Its main action in causing cancer is cancer promotion; it promotes the carcinogenicity initiated by other compounds. Very high doses may, in addition, cause cancer indirectly; one of the proposed mechanisms is oxidative stress and the subsequent oxygen damage to DNA. There are other explanations such as endocrine disruption or altered signal transduction. The endocrine disrupting activities seem to be dependent on life stage, being anti-estrogenic when estrogen is present (or in high concentration) in the body, and estrogenic in the absence of estrogen.\n", "The EPA has not assessed its effect on cancer in humans. However, one study performed by the Mt. Sinai School of Medicine linked Sumithrin with breast cancer; the link made by its effect on increasing the expression of a gene responsible for mammary tissue proliferation.\n", "Multiple studies have shown a relationship between α-linolenic acid and an increased risk of prostate cancer. This risk was found to be irrespective of source of origin (e.g., meat, vegetable oil). However, a large 2006 study found no association between total α-linolenic acid intake and overall risk of prostate cancer; and a 2009 meta-analysis found evidence of publication bias in earlier studies, and concluded that if ALA contributes to increased prostate cancer risk, the increase in risk is quite small.\n", "Due to its relatively severe side effects and toxicity, EMP has rarely been used in the treatment of prostate cancer. This is especially true in Western countries today. As a result, and also due to the scarce side effects of gonadotropin-releasing hormone modulators (GnRH modulators) like leuprorelin, EMP was almost abandoned. However, encouraging clinical research findings resulted in renewed interest of EMP for the treatment of prostate cancer.\n", "Breast cancer – One study indicates nitric oxide (NO) is able to have a cytostatic effect on the human breast cancer cell line MDA-MB-231. Not only does nitric oxide stop cell growth, the study shows that it can also induce apoptosis after the cancer cells have been exposed to NO over 48 hours\n", "Patients with cancer are at higher risk of venous thromboembolism and LMWHs are used to reduce this risk. The CLOT study, published in 2003, showed that, in patients with malignancy and acute venous thromboembolism, dalteparin was more effective than warfarin in reducing the risk of recurrent embolic events. Use of LMWH in cancer patients for at least the first 3 to 6 months of long-term treatment is recommended in numerous guidelines and is now regarded as a standard of care.\n", "Several studies have been completed on the health of the population of surrounding communities. While it has been established that people from Seveso exposed to TCDD are more susceptible to certain rare cancers, when all types of cancers are grouped into one category, no statistically significant excess has yet been observed. This indicates that more research is needed to determine the true long-term health effects on the affected population.\n" ]
why can dishwashers both wash and dry dishes, but clothes washers cannot wash and dry clothes?
Because clothes can't be dried as easily by just making them super hot the way a dishwasher does: dishes don't absorb water, and they also don't burn. Dual machines for clothes can and do exist, but they're more expensive and more prone to failure. Since the two jobs are really quite different (and plenty of clothes can be machine washed but not machine dried), it just makes more sense to buy them separately.
[ "Dishwashing or dish washing, also known as washing up, is the process of cleaning cooking utensils, dishes, cutlery and other items to prevent foodborne illness. This is either achieved by hand in a sink using dishwashing detergent or by using a dishwasher and may take place in a kitchen, utility room, scullery or elsewhere. In Britain to do the washing up also includes to dry and put away. There are cultural divisions over rinsing and drying after washing.\n", "Combination washer dryers are popular among those living in smaller urban properties as they only need half the amount of space usually required for a separate washing machine and clothes dryer, and may not require an external air vent. Additionally, combination washer dryers allow clothes to be washed and dried \"in one go\", saving time and effort from the user. Many washer dryer combo units are also designed to be portable so it can be attached to a sink instead of requiring a separate water line.\n", "Dish washing is usually done using an implement for the washer to wield, unless done using an automated dishwasher. Commonly used implements include cloths, sponges, brushes or even steel wool. As fingernails are often more effective than soft implements like cloths at dislodging hard particles, washing simply with the hands is also done and can be effective as well. Dishwashing detergent is also generally used, but bar soap can be used acceptably, as well. Rubber gloves are often worn when washing dishes by people who are sensitive to hot water or dish-washing liquids. According to dermatologists, the use of protective gloves is highly recommended whenever working with water and cleaning products, since some chemicals may damage the skin, or allergies may develop in some individuals. Dish gloves are also worn by those who simply don't want to touch the old food particles. Many people also wear aprons.\n", "Commercial dishwashers often have significantly different plumbing and operations than a home unit, in that there are often separate spray arms for washing and rinsing/sanitizing. The wash water is heated with an in-tank electric heat element and mixed with a cleaning solution, and is used repeatedly from one load to the next. The wash tank usually has a large strainer basket to collect food debris, and the strainer may not be emptied until the end of the day's kitchen operations.\n", "A dishwasher is a machine for cleaning dishware and cutlery automatically. Unlike manual dishwashing, which relies largely on physical scrubbing to remove soiling, the mechanical dishwasher cleans by spraying hot water, typically between , at the dishes, with lower temperatures used for delicate items.\n", "Hand dishwashing detergents utilize surfactants to play the primary role in cleaning. The reduced surface tension of dishwashing water, and increasing solubility of modern surfactant mixtures, allows the water to run off the dishes in a dish rack very quickly. However, most people also rinse the dishes with pure water to make sure to get rid of any soap residue that could affect the taste of the food.\n", "Hand dishwashing is generally performed in the absence of a dishwashing machine, when large \"hard-to-clean\" items are present, or through preference. Some dishwashing liquids can harm household silver, fine glassware, anything with gold leaf, disposable plastics, and any objects made of brass, bronze, cast iron, pewter, tin, or wood, especially when combined with hot water and the action of a dishwasher. 
When dishwashing liquid is used on such objects it is intended that they be washed by hand.\n" ]
Are there waves of air on top of our atmosphere like waves of water on the surface of the ocean?
Well, sort of. There's no real top to our atmosphere the way there is a surface of the ocean - it just sort of gradually thins out. With that said, though, both experience the same kind of [gravity waves](_URL_0_). Note these are not at all the same as *gravitational* waves you'd see around a black hole - similar name, very different phenomena. Gravity waves are essentially waves driven by a buoyancy force. In the ocean, you see them manifest as surface waves; in the atmosphere, they can sometimes be seen as undulations in clouds. Gravity waves in the ocean break when they hit the beach. In the atmosphere, they tend to propagate upwards, breaking when the air gets so thin that it can't really carry them any more. There's good evidence to show that quite a few upper atmospheres are warmer than expected due to gravity waves breaking and depositing their energy at those locations.
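A quick, illustrative calculation (my own back-of-envelope numbers, not taken from the answer or the excerpts below) shows the buoyancy timescale behind atmospheric gravity waves, via the Brunt-Vaisala frequency:

```python
import math

g = 9.81          # gravitational acceleration, m/s^2
theta = 290.0     # assumed potential temperature, K (typical lower troposphere)
dtheta_dz = 3e-3  # assumed potential-temperature gradient, K/m

# Brunt-Vaisala (buoyancy) frequency: N = sqrt((g / theta) * d(theta)/dz).
# A vertically displaced air parcel oscillates at this rate.
N = math.sqrt(g / theta * dtheta_dz)
print(f"N ~ {N:.4f} rad/s, period ~ {2 * math.pi / N / 60:.0f} min")
# ~0.01 rad/s: a nudged parcel bobs up and down with a ~10 minute period,
# which is why the cloud undulations these waves produce are so slow and broad.
```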
[ "Most people think of waves as a surface phenomenon, which acts between water (as in lakes or oceans) and the air. Where low density water overlies high density water in the ocean, internal waves propagate along the boundary. They are especially common over the continental shelf regions of the world oceans and where brackish water overlies salt water at the outlet of large rivers. There is typically little surface expression of the waves, aside from slick bands that can form over the trough of the waves.\n", "BULLET::::- Ocean surface waves - are surface waves that occur on the free surface of the ocean. They usually result from wind, and are also referred to as wind waves. Some waves can travel thousands of miles before reaching land.\n", "A series of surface waves can be generated due to large-scale displacement of the ocean water. These can be caused by sub-marine landslides, seafloor deformations due to earthquakes, or the impact of a large meteorite.\n", "Wind waves, as their name suggests, are generated by wind transferring energy from the atmosphere to the ocean's surface, and capillary-gravity waves play an essential role in this effect. There are two distinct mechanisms involved, called after their proponents, Phillips and Miles.\n", "In the work of Phillips, the ocean surface is imagined to be initially flat (\"glassy\"), and a turbulent wind blows over the surface. When a flow is turbulent, one observes a randomly fluctuating velocity field superimposed on a mean flow (contrast with a laminar flow, in which the fluid motion is ordered and smooth). The fluctuating velocity field gives rise to fluctuating stresses (both tangential and normal) that act on the air-water interface. The normal stress, or fluctuating pressure acts as a forcing term (much like pushing a swing introduces a forcing term). If the frequency and wavenumber formula_45 of this forcing term match a mode of vibration of the capillary-gravity wave (as derived above), then there is a resonance, and the wave grows in amplitude. As with other resonance effects, the amplitude of this wave grows linearly with time.\n", "One of the main causes of hydro acoustic noise from fully submerged lifting surfaces is the unsteady separated turbulent flow near the surface's trailing edge that produces pressure fluctuations on the surface and unsteady oscillatory flow in the near wake.The relative motion between the surface and the ocean creates a turbulent boundary layer (TBL) that surrounds the surface. The noise is generated by the fluctuating velocity and pressure fields within this TBL.\n", "In fluid dynamics, wind waves, or wind-generated waves, are water surface waves that occur on the free surface of the oceans and other bodies (like lakes, rivers, canals, puddles or ponds). They result from the wind blowing over an area of fluid surface. Waves in the oceans can travel thousands of miles before reaching land. Wind waves on Earth range in size from small ripples, to waves over high.\n" ]
If computers/electronics short circuit due to water damage, and if pure water does not carry current, could an electronic technically run under pure water?
Yes, it's technically possible, but the hazard that water poses extends beyond simply shorting circuits. Water can be corrosive to a lot of the different metals and chemicals on a circuit board, especially where it contacts metal carrying current. However, a circuit would most certainly be able to survive much longer in distilled water than it would in tap water. One interesting fact about distilled water is that it is still slightly conductive! Even 100% pure water conducts a little, because its self-ionization produces hydronium (and hydroxide) ions, though I'm not sure this would be conductive enough to short any circuits.
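To put a number on "slightly conductive", here is a rough estimate assuming only self-ionization (textbook constants at 25 °C; a sketch, not a measurement):

```python
# Estimate the conductivity of perfectly pure water at 25 C from
# self-ionization alone: 2 H2O <-> H3O+ + OH-, with
# [H3O+] = [OH-] = 1e-7 mol/L.
LAMBDA_H3O = 349.8e-4  # molar conductivity of H3O+, S*m^2/mol
LAMBDA_OH = 198.0e-4   # molar conductivity of OH-, S*m^2/mol
c = 1e-7 * 1000        # ion concentration, converted from mol/L to mol/m^3

kappa = c * (LAMBDA_H3O + LAMBDA_OH)  # conductivity in S/m
print(f"{kappa:.2e} S/m  (~{kappa * 1e4:.3f} uS/cm)")
# ~5.5e-06 S/m (~0.055 uS/cm): roughly a million times less conductive than
# seawater, and thousands of times less than typical tap water.
```

In practice, real "pure" water sitting in open air absorbs CO2 and ends up noticeably more conductive than this theoretical floor.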
[ "Water has been shown not to be a very reliable substance to store electric charge long term, so more reliable materials are used for capacitors in industrial applications. However water has the advantage of being self healing after a breakdown, and if the water is steadily circulated through a de-ionizing resin and filters, then the loss resistance and dielectric behavior can be stabilized. Thus, in certain unusual situations, such as the generation of extremely high voltage but very short pulses, a water capacitor may be a practical solution – such as in an experimental Xray pulser.\n", "Because of its high relative dielectric constant (~80), deionized water is also used (for short durations, when the resistive losses are acceptable) as a high voltage dielectric in many pulsed power applications, such as the Sandia National Laboratories Z Machine.\n", "The drawback to using water is the short length of time it can hold off the voltage, typically in the microsecond to ten microsecond (μs) range. Deionized water is relatively inexpensive and is environmentally safe. These characteristics, along with the high dielectric constant, make water an excellent choice for building large capacitors. If a way can be found to reliably increase the hold off time for a given field strength, then there will be more applications for water capacitors.\n", "It was known that water is a very good solvent for low ohmic electrolytes. However, the corrosion problems linked to water hindered, up to that time, the use of it in amounts larger than 20% of the electrolyte, the water-driven corrosion using the above-mentioned electrolytes being kept under control with chemical inhibitors that stabilize the oxide layer.\n", "Distilled water can be used in PC watercooling systems and Laser Marking Systems. The lack of impurity in the water means that the system stays clean and prevents a buildup of bacteria and algae. Also, the low conductance reduces the risk of electrical damage in the event of a leak. However, deionized water has been known to cause cracks in brass and copper fittings.\n", "It is also recommended to install residual-current devices for protection against electrocution. The greater danger associated with electrical shock in the water is that the person may be rendered immobile and unable to rescue themselves or to call for help and then drown.\n", "In addition, a stuck or malfunctioning water supply valve can deliver large amounts of water, causing extensive water damage if undetected for any period of time. A water alarm, possibly with an automatic water shutoff, can help prevent this malfunction from causing major problems.\n" ]
Why do wireless electronics only use 2.4 and 5ghz bands?
The relevant regulation can be found [in this Wikipedia page for ISM Band](_URL_0_). Your short-range consumer electronics are designed to operate in the ISM band because it does not require a license. The Wikipedia page for [frequency allocation](_URL_1_) will also give you an idea of what the other bands are used for.
[ "Because DECT specifications are different between countries, developers who use the same product across different countries have launched wireless headsets which use 2.4GHz RF as opposed to the 1.89 or 1.9 GHz in DECT. Almost all countries in the world have the 2.4 GHz band open for wireless communications, so headsets using this RF band is sellable in most markets. However, the 2.4 GHz frequency is also the base frequency for many wireless data transmission, i.e. Wireless LAN, Wi-Fi, Bluetooth..., the bandwidth may be quite crowded, so using this technology may be more prone to interference.\n", "With the increasing popularity of unrelated consumer devices operating on the same 2.4 GHz band, many providers have migrated to the 5GHz ISM band. If the service provider holds the necessary spectrum license, it could also reconfigure various brands of off the shelf Wi-Fi hardware to operate on its own band instead of the crowded unlicensed ones. Using higher frequencies carries various advantages:\n", "In 2009 802.11n was added to 802.11. It operates in both the 2.4 GHz and 5 GHz bands at a maximum data transfer rate of 600 Mbit/s. Most newer routers are able to utilise both wireless bands, known as dualband. This allows data communications to avoid the crowded 2.4 GHz band, which is also shared with Bluetooth devices and microwave ovens. The 5 GHz band is also wider than the 2.4 GHz band, with more channels, which permits a greater number of devices to share the space. Not all channels are available in all regions.\n", "Some devices with dual-band wireless network connectivity do not allow the user to select the 2.4 GHz or 5 GHz band (or even a particular radio or SSID) when using Wi-Fi Protected Setup, unless the wireless access point has separate WPS button for each band or radio; however, a number of later wireless routers with multiple frequency bands and/or radios allow the establishment of a WPS session for a specific band and/or radio for connection with clients which cannot have the SSID or band (e.g., 2.4/5 GHz) explicitly selected by the user on the client for connection with WPS (e.g. pushing the 5 GHz, where supported, WPS button on the wireless router will force a client device to connect via WPS on only the 5 GHz band after a WPS session has been established by the client device which cannot explicitly allow the selection of wireless network and/or band for the WPS connection method).\n", "Due to the intended nature of the 2.4 GHz band, there are many users of this band, with potentially dozens of devices per household. By its very nature, \"long range\" connotes an antenna system which can see many of these devices, which when added together produce a very high noise floor, whereby no single signal is usable, but nonetheless are still received. The aim of a long-range system is to produce a system which over-powers these signals and/or uses directional antennas to prevent the receiver \"seeing\" these devices, thereby reducing the noise floor.\n", "ANT, ZigBee, Bluetooth, Wi-Fi, and some cordless phones all use the 2.4 GHz band (as well as 868- and 915 MHz for regional variants in the latter's case), along with proprietary forms of wireless Ethernet and wireless USB.\n", "Wireless solutions make use of the 5.8 GHz range to avoid interference from the increasingly crowded 2.4GHz radio band, which is widely used by WLAN 802.11b/g, Bluetooth devices, Cordless phones and Microwave ovens. 
Therefore, 5.8 GHz solutions are becoming more and more popular for home video transmission, especially in North America and Australia. In the security and surveillance markets, especially for long range video transmissions, more people are starting to use the 5.8 GHz frequency for cleaner bandwidth and better outcomes.\n" ]
When was the last time a president was elected who was "filled in" during local ballots?
Just to clarify, as I think I understand what you're asking about, but I want to be sure, you're talking about a 'straight ticket' ballot [such as this one](_URL_0_)?
[ "The 1852 United States presidential election in Arkansas took place on November 2, 1852, as part of the 1852 United States presidential election. Voters chose four representatives, or electors to the Electoral College, who voted for president and vice president.\n", "The 1856 United States presidential election in Rhode Island took place on November 4, 1856, as part of the 1856 United States presidential election. Voters chose four representatives, or electors to the Electoral College, who voted for president and vice president.\n", "The 1852 United States presidential election in New York took place on November 2, 1852, as part of the 1852 United States presidential election. Voters chose 35 representatives, or electors to the Electoral College, who voted for President and Vice President.\n", "The 1856 United States presidential election in Connecticut took place on November 4, 1856, as part of the 1856 United States presidential election. Voters chose six representatives, or electors to the Electoral College, who voted for president and vice president.\n", "The United States Senate elections of 1914, with the ratification of the 17th Amendment in 1913, were the first time that all seats up for election were popularly elected instead of chosen by their state legislatures. These elections occurred in the middle of Democratic President Woodrow Wilson's first term.\n", "The 1852 United States presidential election in Rhode Island took place on November 2, 1852, as part of the 1852 United States presidential election. Voters chose four representatives, or electors to the Electoral College, who voted for President and Vice President.\n", "An election for a seat in the United States House of Representatives took place in California's 12th congressional district on November 5, 1946, the date set by law for the elections for the 80th United States Congress. In the 12th district election, the candidates were five-term incumbent Democrat Jerry Voorhis, Republican challenger Richard Nixon, and former congressman and Prohibition Party candidate John Hoeppel. Nixon was elected with 56% of the vote, starting him on the road that would, almost a quarter century later, lead to the presidency.\n" ]
does the music you listen to in your childhood affect your future personality?
I truly think it has a great effect. I grew up listening to a lot of '60s and '70s rock; I still listen to it, and it's shaped a lot of who I am and how I see the world. I love the lyrics, sound, expressionism and aesthetic. And I don't think I could've gotten through the dark periods of my life without the wisdom those songs instilled in me throughout my whole childhood. I sometimes feel as though I'm from that era but reincarnated lol. Long live the hippie movement ✌️
[ "Enculturation affects music memory in early childhood before a child's cognitive schemata for music is fully formed, perhaps beginning at as early as one year of age. Like adults, children are also better able to remember novel music from their native culture than from unfamiliar ones, although they are less capable than adults at remembering more complex music.\n", "Music is a key factor in the socialization of children. Children and adolescents often turn to music lyrics as an outlet away from loneliness or as a source of advice and information. The results of a study through \"A Kaiser Family Foundation Study\" in 2005 showed that 85% of youth ages 8–18 listen to music each day. While music is commonly thought of as only a means of entertainment, studies have found that music is often chosen by youth because it mirrors their own feelings and the content of the lyrics is important to them. Numerous studies have been conducted to research how music influences listeners behaviors and beliefs. For example, a study featured in the \"Journal of Youth and Adolescence\" found that when compared to adolescent males who did not like heavy metal music, those who liked heavy metal had a higher occurrence of deviant behaviors. These behaviors included sexual misconduct, substance abuse and family issues.\n", "Studies indicate that the ability to understand emotional messages in music starts early, and improves throughout child development. Studies investigating music and emotion in children primarily play a musical excerpt for children and have them look at pictorial expressions of faces. These facial expressions display different emotions and children are asked to select the face that best matches the music's emotional tone. Studies have shown that children are able to assign specific emotions to pieces of music; however, there is debate regarding the age at which this ability begins.\n", "Studies done on children who have had a musical background, have shown that it increases brain function as well as brain stimulation. When children are exposed to music from other countries and cultures, they are able to learn about the instrument while at the same time being educated about a different part of the world.\n", "Children’s musical interest may vary from exploring a specific instrument to listening to a type of musical literature that the child finds interesting because of his or her cultural background. In other words, early childhood musical interest lies with the involvement that the child is actively engaged in the learning milieu. Morin’s article suggests that in order for students to develop a personal interest during the exploration of music, they need opportunities and experiences that have been aligned by the educator as developmentally appropriate. In essence, Morin communicates about the importance of giving young children ample opportunities to explore, manipulate, and play in the classroom.\n", "Music influences many regions of the brain including those associated with emotional and creative areas has the power to evoke emotion and memories from deep in the past, so it is understandable that Alzheimer's patients can recall musical memories from many decades prior given the richness and vividness of these memories. Music memory can be preserved for those living with Alzheimer's Disease and brought forth through various techniques of music therapy. 
Areas of the brain influenced by music are one of the final parts of the brain to degenerate in the progression of Alzheimer's Disease.\n", "Music also affects socially-relevant memories, specifically memories produced by nostalgic musical excerpts (e.g., music from a significant time period in one’s life, like music listened to on road trips). Musical structures are more strongly interpreted in certain areas of the brain when the music evokes nostalgia. The interior frontal gyrus, substantia nigra, cerebellum, and insula were all identified to have a stronger correlation with nostalgic music than not. Brain activity is a very individualized concept with many of the musical excerpts having certain effects based on individuals’ past life experiences, thus this caveat should be kept in mind when generalizing findings across individuals.\n" ]
why is it impossible to fold a piece of paper in half more than eight times?
For a regular A4 sheet, at eight folds it's 256 layers thick, and since the paper is so small at that point, there isn't enough room left to make a fold around all 256 layers. However, if you just get a bigger piece of paper, then you can keep going, even though you are folding it in half each time. _URL_0_
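You can see how quickly the length requirement explodes using Gallivan's single-direction folding formula (quoted in the excerpt below); this sketch assumes ordinary ~0.1 mm paper:

```python
import math

def min_length(t: float, n: int) -> float:
    """Gallivan's minimum sheet length for n single-direction folds,
    in the same units as the thickness t."""
    return (math.pi * t / 6) * (2**n + 4) * (2**n - 1)

t = 1e-4  # assumed paper thickness: 0.1 mm, in metres
for n in (7, 8, 12):
    print(f"{n} folds -> {min_length(t, n):8.1f} m needed")
# 7 folds ->  ~0.9 m, 8 folds -> ~3.5 m, 12 folds -> ~879 m.
# An A4 sheet is only ~0.3 m long, so single-direction folding runs out of
# length well before eight folds (alternating directions stretches this a bit).
```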
[ "The maximum number of times an incompressible material can be folded has been derived. With each fold a certain amount of paper is lost to potential folding. The loss function for folding paper in half in a single direction was given to be formula_4, where \"L\" is the minimum length of the paper (or other material), \"t\" is the material's thickness, and \"n\" is the number of folds possible. The distances \"L\" and \"t\" must be expressed in the same units, such as inches. This result was derived by Gallivan in 2001, who also folded a sheet of paper in half 12 times, contrary to the popular belief that paper of any size could be folded at most eight times. She also derived the equation for folding in alternate directions.\n", "In January 2002, while a junior in high school, Gallivan demonstrated that a single piece of toilet paper 4000 ft (1200 m) in length can be folded in half twelve times. This was contrary to the popular conception that the maximum number of times any piece of paper could be folded in half was seven. She calculated that, instead of folding in half every other direction, the least volume of paper to get 12 folds would be to fold in the same direction, using a very long sheet of paper. A special kind of $85-per-roll toilet paper in a set of six met her length requirement. Not only did she provide the empirical proof, but she also derived an equation that yielded the width of paper or length of paper necessary to fold a piece of paper of thickness \"t\" any \"n\" number of times.\n", "The paper size is 'double folio', with two pages printed on each side (four pages per sheet). After printing the paper was folded once to the size of a single page. Typically, five of these folded sheets (10 leaves, or 20 printed pages) were combined to a single physical section, called a quinternion, that could then be bound into a book. Some sections, however, had as few as four leaves or as many as 12 leaves. Some sections may have been printed in a larger number, especially those printed later in the publishing process, and sold unbound. The pages were not numbered. The technique was not new, since it had been used to make blank \"white-paper\" books to be written afterwards. What was new was determining \"beforehand\" the correct placement and orientation of each page on the five sheets to result in the correct sequence when bound. The technique for locating the printed area correctly on each page was also new.\n", "Origami paper is used to fold \"origami\", the art of paper folding. The only real requirement of the folding medium is that it must be able to hold a crease, but should ideally also be thinner than regular paper for convenience when multiple folds over the same small paper area are required (e.g. such as would be the case if creating an origami bird's \"legs\", \"feet\", and \"beak\").\n", "In 1786, German physics professor Georg Lichtenberg found that any sheet of paper whose long edge is times longer than its short edge could be folded in half and aligned with its shorter side to produce a sheet with exactly the same proportions as the original. This ratio of lengths of the longer over the shorter side guarantees that cutting a sheet in half along a line results in the smaller sheets having the same (approximate) ratio as the original sheet. When Germany standarised paper sizes at the beginning of the 20 century, they used Lichtenberg's ratio to create the \"A\" series of paper sizes. Today, the (approximate) aspect ratio of paper sizes under ISO 216 (A4, A0, etc.) 
is 1:. \n", "In double parallel folds the paper is folded in half and then folded in half again with a fold parallel to the first fold. To allow for proper nesting the two inside folded panels are smaller than the two outer panels.\n", "In the mathematics of paper folding, map folding and stamp folding are two problems of counting the number of ways that a piece of paper can be folded. In the stamp folding problem, the paper is a strip of stamps with creases between them, and the folds must lie on the creases. In the map folding problem, the paper is a map, divided by creases into rectangles, and the folds must again lie only along these creases.\n" ]
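The Lichtenberg excerpt above is easy to check numerically; a minimal sketch using the nominal ISO 216 dimensions (rounded to whole millimetres, hence the small deviations from √2):

```python
import math

# Halving an A-series sheet across its long edge keeps the ~1:sqrt(2) ratio.
sizes_mm = {"A3": (297, 420), "A4": (210, 297), "A5": (148, 210)}
for name, (short, long_side) in sizes_mm.items():
    print(f"{name}: {long_side / short:.4f} (target {math.sqrt(2):.4f})")
```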
if lockpicking guides and tools are widely available, why are so few houses broken into by lockpicking?
It's far easier and more efficient to break a window or kick in a door.
[ "Posession of lockpicking tools, such as a bump key, are highly regulated by criminal law in four states in the United States, and are considered prima facie evidence of a crime in another four states in the United States. They are generally legal in the remaining states within the U.S.\n", "In some countries, such as Japan, lock-picking tools are illegal for most people to possess, but in many others, they are available and legal to own as long as there is no intent to use them for criminal purposes.\n", "Historically, locksmiths constructed or repaired an entire lock, including its constituent parts. The rise of cheap mass production has made this less common; the vast majority of locks are repaired through like-for-like replacements, high-security safes and strongboxes being the most common exception. Many locksmiths also work on any existing door hardware, including door closers, hinges, electric strikes, and frame repairs, or service electronic locks by making keys for transponder-equipped vehicles and implementing access control systems.\n", "The mechanism makes it easy to construct locks that can be opened with multiple different keys: \"blank\" discs with a circular hole are used, and only notches shared by the keys are employed in the lock mechanism. This is commonly used for locks of common areas such as garages in apartment houses.\n", "Rogues knew a good deal about lock-picking long before locksmiths discussed it among themselves, as they have lately done. If a lock, let it have been made in whatever country, or by whatever maker, is not so inviolable as it has hitherto been deemed to be, surely it is to the interest of honest persons to know this fact, because the dishonest are tolerably certain to apply the knowledge practically; and the spread of the knowledge is necessary to give fair play to those who might suffer by ignorance.\n", "The oldest type of lock used in the United Kingdom and Ireland. It is of a basic design using (usually) a single lever and a sliding bolt. Wards can be used for additional security. They are not used where high security is required. Most older locks were large, some as big as 40 cm by 25 cm.\n", "Tubular pin tumbler locks are generally considered to be safer and more resistant to picking than standard locks. This is primarily because they are often seen on coin boxes for vending machines and coin-operated machines, such as those used in a laundromat. However, the primary reason this type of lock is used in these applications is that it can be made physically shorter than other locks.\n" ]
Does light accelerate up to the speed of light, or is it instantly at the speed of light as soon as it is released from an electron?
They don't start off at zero, and there's no acceleration. They start at c and always travel at c. This is because, due to special relativity, any massless particle can only ever move at c; any other speed isn't physically allowed. Source: [adamsolomon](_URL_0_)
[ "According to Einstein's theory of special relativity, as an electron's speed approaches the speed of light, from an observer's point of view its relativistic mass increases, thereby making it more and more difficult to accelerate it from within the observer's frame of reference. The speed of an electron can approach, but never reach, the speed of light in a vacuum, \"c\". However, when relativistic electrons—that is, electrons moving at a speed close to \"c\"—are injected into a dielectric medium such as water, where the local speed of light is significantly less than \"c\", the electrons temporarily travel faster than light in the medium. As they interact with the medium, they generate a faint light called Cherenkov radiation.\n", "The second postulate of Einstein's theory of special relativity states that the speed of light is invariant, regardless of the velocity of the source from which the light emanates. The extinction theorem (essentially) states that light passing through a transparent medium is simultaneously extinguished and re-emitted by the medium itself. This implies that information about the velocity of light from a moving source might be lost if the light passes through enough intervening transparent material before being measured. All measurements previous to the 1960s intending to verify the constancy of the speed of light from moving sources (primarily using moving mirrors, or extraterrestrial sources) were made only after the light had passed through such stationary material — that material being that of a glass lens, the terrestrial atmosphere, or even the incomplete vacuum of deep space. In 1961, Fox decided that there might not yet be any conclusive evidence for the second postulate: \"This is a surprising situation in which to find ourselves half a century after the inception of special relativity.\" Regardless, he remained fully confident in special relativity, noting that this created only a \"small gap\" in the experimental record.\n", "However, in an electron microscope, the accelerating potential is usually several thousand volts causing the electron to travel at an appreciable fraction of the speed of light. A SEM may typically operate at an accelerating potential of 10,000 volts (10 kV) giving an electron velocity approximately 20% of the speed of light, while a typical TEM can operate at 200 kV raising the electron velocity to 70% the speed of light. We therefore need to take relativistic effects into account. The relativistic relation between energy and momentum is E=pc+mc and it can be shown that,\n", "In the Bohr Model, an  electron has a velocity given by formula_40, where is the atomic number, formula_41 is the fine-structure constant, and is the speed of light. In non-relativistic quantum mechanics, therefore, any atom with an atomic number greater than 137 would require its 1s electrons to be traveling faster than the speed of light. Even in the Dirac equation, which accounts for relativistic effects, the wave function of the electron for atoms with formula_42 is oscillatory and unbounded. The significance of element 137, also known as untriseptium, was first pointed out by the physicist Richard Feynman. Element 137 is sometimes informally called feynmanium (symbol Fy). However, Feynman's approximation fails to predict the exact critical value of  due to the non-point-charge nature of the nucleus and very small orbital radius of inner electrons, resulting in a potential seen by inner electrons which is effectively less than . 
The critical  value which makes the atom unstable with regard to high-field breakdown of the vacuum and production of electron-positron pairs, does not occur until is about 173. These conditions are not seen except transiently in collisions of very heavy nuclei such as lead or uranium in accelerators, where such electron-positron production from these effects has been claimed to be observed. See Extension of the periodic table beyond the seventh period.\n", "where \"c\" is the speed of light, \"v\" that of the source, \"c' \" the resultant speed of light, and \"k\" a constant denoting the extent of source dependence which can attain values between 0 and 1. According to special relativity and the stationary aether, \"k\"=0, while emission theories allow values up to 1. Numerous terrestrial experiments have been performed, over very short distances, where no \"light dragging\" or extinction effects could come into play, and again the results confirm that light speed is independent of the speed of the source, conclusively ruling out emission theories.\n", "While electrodynamics holds that the speed of light \"in a vacuum\" is a universal constant (\"c\"), the speed at which light propagates in a material may be significantly less than \"c\". For example, the speed of the propagation of light in water is only 0.75\"c\". Matter can be accelerated beyond this speed (although still to less than \"c\") during nuclear reactions and in particle accelerators. Cherenkov radiation results when a charged particle, most commonly an electron, travels through a dielectric (electrically polarizable) medium with a speed greater than that at which light propagates in the same medium.\n", "The speed of light in a fluid is slower than the speed of light in vacuum, and it changes if the fluid is moving along with the light. In 1851, Fizeau measured the speed of light in a fluid moving parallel to the light using a interferometer. Fizeau's results were not in accord with the then-prevalent theories. Fizeau experimentally correctly determined the zeroth term of an expansion of the relativistically correct addition law in terms of as is described below. Fizeau's result led physicists to accept the empirical validity of the rather unsatisfactory theory by Fresnel that a fluid moving with respect to the stationary aether \"partially\" drags light with it, i.e. the speed is instead of , where is the speed of light in the aether, and is the speed of the fluid with respect to the aether.\n" ]
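The SEM/TEM figures in the excerpt above follow from standard relativistic kinematics; a small sketch (not tied to any particular instrument) reproduces the quoted ~20% and ~70% of c:

```python
import math

ELECTRON_REST_ENERGY_KEV = 511.0  # m*c^2 for an electron, ~511 keV

def beta(accelerating_kv: float) -> float:
    """v/c for an electron accelerated through the given potential:
    gamma = 1 + eV / (m c^2), then beta = sqrt(1 - 1/gamma^2)."""
    gamma = 1.0 + accelerating_kv / ELECTRON_REST_ENERGY_KEV
    return math.sqrt(1.0 - 1.0 / gamma**2)

print(f"SEM at 10 kV:  v ~ {beta(10):.2f} c")   # ~0.19c, i.e. about 20%
print(f"TEM at 200 kV: v ~ {beta(200):.2f} c")  # ~0.70c
```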
when i'm hungover why do i always crave greasy foods like pizza rather than foods that are better for me?
Your body is depleted of various electrolytes and calories, since alcohol zaps your blood sugar levels and dehydrates you; fatty, salty food is an efficient, albeit unhealthy, way to replenish those stores.
[ "Everyone relieves his weary limbs by partaking of dinner, but not to excess - for being filled to excess, even with bread on its own, gives rise to dissipation - rather, everyone receives a meal according to the varying condition of their bodies or their age. They do not serve dishes of different flavours, nor richer types of food, but feeding on bread and herbs seasoned with salt, they quench their burning thirst with a temperate kind of drink. Then, for either the sick, those advanced in age, or likewise those tired by a long journey, they provide some other pleasures of tastier food, for it is not to be dealt out to all in equal measure.\n", "BULLET::::- Due to the warmness in their body they soon digest food and as they don't have adequate amount of nutrient supply to provide the energy needs of the body they soon get hungry and irritated if they don't get enough food timely.\n", "Food addiction has some physical signs and symptoms. Decreased energy; not being able to be as active as in the past, not being able to be as active as others around, also a decrease in efficiency due to the lack of energy. Having trouble sleeping; being tired all the time such as fatigue, oversleeping, or the complete opposite and not being able to sleep such as insomnia. Other physical signs and symptoms are restlessness, irritability, digestive disorders, and headaches.\n", "Another cause of hunger is related to agricultural policy. Due to the heavy subsidization of crops such as corn and soybeans, healthy foods such as fruits and vegetables are produced in lesser abundance and generally cost more than highly processed, packaged goods. Because unhealthful food items are readily available at much lower prices than fruits and vegetables, low-income populations often heavily rely on these foods for sustenance. As a result, the poorest people in the United States are often simultaneously undernourished and overweight or obese. This is because highly processed, packaged goods generally contain high amounts of calories in the form of fat and added sugars yet provide very limited amounts of essential micronutrients. These foods are thus said to provide \"empty calories.\" \n", "A boy buys junk food from the school canteen every day. His teacher gets annoyed, as does one of his classmates. But when he moves to a new house, he finds a spellbook; one of the spells allows him to pass his obesity to others. So every day, he eats enough junk food to make him sick; whenever someone insults him, he casts the spell on them.\n", "Some mass-produced pizzas by fast food chains have been criticized as having an unhealthy balance of ingredients. Pizza can be high in salt, fat and calories (food energy). The USDA reports an average sodium content of 5,101 mg per pizza in fast food chains. There are concerns about negative health effects. Food chains have come under criticism at various times for the high salt content of some of their meals.\n", "He finishes the feast, and several other courses, vomiting profusely all over himself, his table, and the restaurant's staff throughout his meal, causing other diners to lose their appetite, and in some cases, throw up as well. Finally, after being persuaded by the smooth maître d' to eat a single \"wafer-thin mint\", his stomach begins to rapidly expand until it explodes: covering the restaurant and diners with viscera and partially digested food—even starting a \"vomit-wave\" among the other diners, who leave in disgust.\n" ]
why does this camera distortion happen?
That is buffeting. Something is shaking the back of the digital video camera, and you're seeing the effect of that vibration's frequency beating against the camera's 50-60 Hz frame rate.
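The beat-frequency explanation is just sampling arithmetic: a vibration faster than half the frame rate aliases down to a slow apparent wobble. A minimal sketch (the 118 Hz figure is an invented example, not a measurement):

```python
def apparent_hz(vibration_hz: float, frame_rate_hz: float) -> float:
    """Alias frequency seen when a vibration is sampled at the camera's
    frame rate: fold the vibration into the Nyquist band [0, fs/2]."""
    folded = vibration_hz % frame_rate_hz
    return min(folded, frame_rate_hz - folded)

# e.g. a (hypothetical) 118 Hz buffet filmed at 60 fps shows up as a
# slow 2 Hz wobble in the footage.
print(apparent_hz(118.0, 60.0))  # 2.0
```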
[ "BULLET::::- Distortion is an aberration that causes straight lines to curve. It can be troublesome for architectural photography and metrology (photographic applications involving measurement). Distortion tends to be noticeable in low cost cameras, including cell phones, and low cost DSLR lenses. It is usually very easy to see in wide angle photos. It can be now be corrected in software.\n", "This leads to noticeable distortion with perspective transformations (see figure – textures (the checker boxes) appear bent), especially as primitives near the camera. Such distortion may be reduced with subdivision.\n", "Distortion is caused by uneven shrinkage across a film's dimension and starts to warp or curl. This can be due to the difference in shrinkage between the film and emulsion layers, or areas of the film that start to shrink more than other areas. Temporary distortion can be reversed, but permanent distortion cannot.\n", "Distortion is caused by uneven shrinkage across a film's dimension and starts to warp or curl. This can be due to the difference in shrinkage between the film and emulsion layers, or areas of the film that start to shrink more than other areas. Temporary distortion can be reversed, but permanent distortion cannot.\n", "It logically follows that all film photography (now almost in disuse) distorted the image beheld by the eye, among other reasons because the film surface was flat in the manner of the picture plane. Artifactual characteristics of a camera lens may aggravate the distortion. This is demonstrated with a pinhole camera which has no lens but which produces the same distortion as described herein.\n", "Even if the image is sharp, it may be distorted compared to ideal pinhole projection. In pinhole projection, the magnification of an object is inversely proportional to its distance to the camera along the optical axis so that a camera pointing directly at a flat surface reproduces that flat surface. Distortion can be thought of as stretching the image non-uniformly, or, equivalently, as a variation in magnification across the field. While \"distortion\" can include arbitrary deformation of an image, the most pronounced modes of distortion produced by conventional imaging optics is \"barrel distortion\", in which the center of the image is magnified more than the perimeter (figure 3a). The reverse, in which the perimeter is magnified more than the center, is known as \"pincushion distortion\" (figure 3b). This effect is called lens distortion or image distortion, and there are algorithms to correct it.\n", "The popularity of amateur photography has made distorted photos made with cheap cameras so familiar that many people do not immediately realise the distortion. This \"distortion\" is relative only to the accepted norm of constructed perspective (where vertical lines in reality do not converge in the constructed image), which in itself is distorted from a true perspective representation (where lines that are vertical in reality would begin to converge above and below the horizon as they become more distant from the viewer).\n" ]
what do car fog lights *actually* do?
Fog lights produce a short but wide beam spread that illuminates the road close to the front of the vehicle. The driver can then see the edges of the road and slightly ahead without the blinding glare the primary headlights would create in heavy fog. My guess is that only those who have been in a very thick fog or snowstorm appreciate the value of fog lights -- the rest just have them for show.
[ "The respective purposes of front fog lamps and driving lamps are often confused, due in part to the misconception that fog lamps are necessarily selective yellow, while any auxiliary lamp that makes white light is a driving lamp. Automakers and aftermarket parts and accessories suppliers frequently refer interchangeably to \"fog lamps\" and \"driving lamps\" (or \"fog/driving lamps\").\n", "Front fog lamps provide a wide, bar-shaped beam of light with a sharp cutoff at the top, and are generally aimed and mounted low. They may produce white or selective yellow light, and were designed for use at low speed to increase the illumination directed towards the road surface and verges in conditions of poor visibility due to rain, fog, dust or snow.\n", "In most countries, weather conditions rarely necessitate the use of front fog lamps and there is no legal requirement for them, so their primary purpose is frequently cosmetic. They are often available as optional extras or only on higher trim levels of many cars. An SAE study has shown that in the United States more people inappropriately use their fog lamps in dry weather than use them properly in poor weather. Because of this, use of the fog lamps when visibility is not seriously reduced is often prohibited in most jurisdictions; for example, in New South Wales, Australia:\n", "Custom cars sometimes have indirect lighting underneath, glowing a color like green or purple which could not be confused with that of an emergency or other vehicle's normal lighting. These can be provided by strips of tubes of cold-cathode fluorescent lighting, or LEDs.\n", "Cars are typically fitted with multiple types of lights. These include headlights, which are used to illuminate the way ahead and make the car visible to other users, so that the vehicle can be used at night; in some jurisdictions, daytime running lights; red brake lights to indicate when the brakes are applied; amber turn signal lights to indicate the turn intentions of the driver; white-coloured reverse lights to illuminate the area behind the car (and indicate that the driver will be or is reversing); and on some vehicles, additional lights (e.g., side marker lights) to increase the visibility of the car. Interior lights on the ceiling of the car are usually fitted for the driver and passengers. Some vehicles also have a trunk light and, more rarely, an engine compartment light.\n", "Sometimes, the existing lighting on a vehicle is modified to create warning beacons. In the case of wig-wag lighting, this involves adding a device to alternately flash the high-beam headlights, or, in some countries, the rear fog lights. It can also involve drilling out other lights on the vehicle to add ‘hideaway’ or ‘corner strobes’.\n", "A fog machine, fog generator, or smoke machine is a device that emits a dense vapor that appears similar to fog or smoke. This artificial fog is most commonly used in professional entertainment applications, but smaller, more affordable fog machines are becoming common for personal use. Fog machines can also be found in use in a variety of industrial, training, and some military applications. Typically, fog is created by vaporizing proprietary water and glycol-based or glycerin-based fluids or through the atomization of mineral oil. This fluid (often referred to colloquially as \"fog juice\") vaporizes or atomizes inside the fog machine. Upon exiting the fog machine and mixing with cooler outside air the vapor condenses, resulting in a thick visible fog.\n" ]
is it true that consuming your own species' flesh can cause madness?
I think you’re thinking of Kuru. Here’s the definition: Kuru is a very rare disease. It is caused by an infectious protein (prion) found in contaminated human brain tissue. Kuru is found among people from New Guinea who practiced a form of cannibalism in which they ate the brains of dead people as part of a funeral ritual.
[ "Although the dynamic of violent fantasy in lust murders is understood, an individual's violence fantasy alone is not enough to determine if an individual has or has not engaged in lust murder. Moreover, to conclude that an individual is a violent psychopath because they have drawn multitudes of violent images is overreaching.\n", "Characters may gain insanity when they see or experience something that strains the way they understand the world or something that harms them in a way that’s difficult to accept. Events which can inflict insanity include coming back from the dead, suffering a grievous wound, witnessing the brutal death of a loved one, or seeing a 30-foot tall demon waddle across the countryside as slime-covered, fleshy monstrosities spill from its countless drooling maws.\n", "Madness, the non-legal word for insanity, has been recognized throughout history in every known society. Some traditional cultures have turned to witch doctors or shamans to apply magic, herbal mixtures, or folk medicine to rid deranged persons of evil spirits or bizarre behavior, for example. Archaeologists have unearthed skulls (at least 7000 years old) that have small, round holes bored in them using flint tools. It has been conjectured that the subjects may have been thought to have been possessed by spirits which the holes would allow to escape. However, more recent research on the historical practice of trepanning supports the hypothesis that this procedure was medical in nature and intended as means of treating cranial trauma.\n", "[we] find that Madness is, contrary to the opinion of some unthinking persons, as manageable as many other distempers, which are equally dreadful and obstinate, and yet are not looked upon as incurable, and that such unhappy objects ought by no means to be abandoned, much less shut up in loathsome prisons as criminals or nuisances to the society.\n", "Teratophilia is classified as paraphilia. Rather than view the condition as a kink, defenders of teratophilia believe it allows people to see beauty outside of societal standards. Among other things it has been suggested, that monsters can function as an escapist fantasy for some straight women since the monster is able to embody masculine attributes without presenting itself as a man, which may embody trauma and terror in extreme cases, or aggravating patriarchal arrangements in the least.\n", "Can madness be described? Is it possible to express the pain that it entails? In 1994, when she was about to fall prey to her illness, Khady Sylla met Aminta Ngom, who exhibited her madness freely, without fear of provocation. During her years of suffering, Aminta was her window to the world.\n", "Some spiders, such as \"Pholcus phalangioides\", will prey on their own kind when food is scarce. Also, females of \"Phidippus johnsoni\" have been observed carrying dead males in their fangs. This behavior may be triggered by aggression, where females carry over hostility from their juvenile state and consume males just as they would prey. Sih and Johnson surmise that non-reproductive cannibalism can occur due to a remnant of an aggression trait in juvenile females. Known as the \"aggressive spillover hypothesis\", this tendency to unselectively attack anything that moves is cultivated by a positive correlation between hostility, foraging capability, and fecundity. Aggression at a young age leads to an increase in prey consumption and as such, a larger adult size. 
This behavior \"spills over\" into adulthood, and shows up as a nonadaptive trait that manifests itself through adult females preying on males of their same species.\n" ]