question (string, lengths 3–300) | answer (string, lengths 9–2.77k) | context (sequence of 7 passages) |
---|---|---|
why are services like uber and airbnb considered by some to be disruptive to the economy? | Hotels and cab companies are regulated and taxed; they have to follow certain rules in order to keep their operating license. If I rent you my house for a short stay or pick you up and drive you around, the government doesn't get any tax revenue from that and I'm not bound by the same licensing requirements. Because the hotels and cabs I'd be competing against do have to pay taxes and follow those regs, I'm operating with an unfair advantage. Of course I can charge less than Yellow Cab; I don't have to pay for official inspections or cab medallions. | [
"Uber, Airbnb, and other companies have had drastic effects on infrastructures such as road congestion and housing. Major cities such as San Francisco and New York City have arguably become more congested due to their use. According to transportation analyst Charles Komanoff, \"Uber-caused congestion has reduced traffic speeds in downtown Manhattan by around 8 percent\".\n",
"However, in a report published in January 2017, Carl Benedikt Frey found that while the introduction of Uber had not led to jobs being lost, but had caused a reduction in the incomes of incumbent taxi drivers of almost 10%. Frey found that the \"sharing economy\", and Uber in particular, has had substantial negative impacts on workers wages.\n",
"In \"Drivers of Disruption? Estimating the Uber Effect\", Frey found that while the introduction of Uber had not led to jobs being lost, but had caused a reduction in the incomes of incumbent taxi drivers of almost 10 percent. On Al Jazeera he called the TFL decision to restrict Uber in London a massive transfer of consumer surplus from millions of users to a few taxi drivers.\n",
"Uber is not an example of disruption because it did not originate in a low-end or new market footholds. One of the conditions for the business to be considered disruptive according to Clayton M. Christensen is that the business should originate on a) low-end or b) new-market footholds. Instead, Uber was launched in San Franscisco, a large urban city with an established taxi service and did not target low-end customers or created a new market (from the consumer perspective). In contrast, UberSELECT, an option that provides luxurious cars such as limousine at a discounted price, is an example of disruption innovation because it originates from low-end customers segment - customers who would not have entered the traditional luxurious market.\n",
"In 2015, in close collaboration with the Taxi Workers Alliance, NYCC launched a campaign to transform workplace in the access, or temporary work, recognizing that companies like Uber were shortchanging its low-income workforce. The campaign argues that the company evades payroll taxes by designating their workers as \"independent contractors\" even though Uber sets its own prices and rules for ride-sharing transactions, and that by finding a loophole around the National Labor Relations Act, they fail to give their workers basic labor protections, such as a minimum wage, overtime compensation, and unemployment compensation. One recent study showed that these drivers made as little as $2.89 per hour after Uber cut fares. The organization has also partnered with Amazon workers in their successful push for a $15/hour minimum wage. \n",
"Uberisation has, as of yet, taken place in a limited but growing amount of industries. For example, with the advent of Airbnb, the hospitality industry has been transformed to a large extent, estimated by industry analysts to have a total annual value, just in New York City, of over US $2.1 billion. While uberisation has been criticised as potentially catalysing a chaotic shift by undermining existing corporate models in the hospitality and taxi industries, existing companies in industries such as marketing can use the phenomenon to reduce expenses and provide more specialised services for customers.\n",
"Transportation infrastructure mirrors telecommunication facilities; by impeding transportation for individuals in a city or region, the economy will slightly degrade over time. Successful cyber-attacks can impact scheduling and accessibility, creating a disruption in the economic chain. Carrying methods will be impacted, making it hard for cargo to be sent from one place to another. In January 2003 during the \"slammer\" virus, Continental Airlines was forced to shut down flights due to computer problems. Cyberterrorists can target railroads by disrupting switches, target flight software to impede airplanes, and target road usage to impede more conventional transportation methods. In May 2015, a man, Chris Roberts, who was a cyberconsultant, revealed to the FBI that he had repeatedly, from 2011 to 2014, managed to hack into Boeing and Airbus flights' controls via the onboard entertainment system, allegedly, and had at least once ordered a flight to climb. The FBI, after detaining him in April 2015 in Syracuse, had interviewed him about the allegations.\n"
] |
What is the definition of a "Great Power" and what makes a country one? | I was a history and international relations major as an undergrad, and here is where the two disciplines meet. The term "Great Power" comes from the Realist tradition in international relations. Without going too far down the rabbit hole of explaining what Realists believe, they essentially view international politics as a struggle for survival between nation-states and they see power rather than ideology as the key variable that leads nations into conflict (ideology may be an important reason why a war is worth fighting for the common man, but the conflicts are ultimately driven by competition for security between nations rather than conflicts over ideology.)
Realists like John Mearsheimer would define great powers generally as those which have a significant military capability (including latent capability) and an influence on affairs that extends beyond its immediate region. Great powers can also extend their territorial influence beyond their defined borders in the absence of another power stopping them from doing so. There's not really a hard-and-fast way to differentiate great powers from lesser powers, but a nation that stands a realistic chance of defeating another great power in a war is generally going to be considered a great power.
The term was much more relevant in the absence of *superpowers*, which are essentially magnified great powers capable of projecting power over vast regions of the globe. *Great Powers* are much more important when there are powerful states with great military capabilities but no clear hegemon dominating a region. Historically, Western Europe has often been characterized by competition among Great Powers including Russia, England, France, the Netherlands, Germany, and Spain. Certain of these countries have dropped off the list of Great Powers at different points as their ability to influence affairs beyond their borders lessened and their military capabilities became weakened. Now, some people would characterize Germany, Japan, the UK, France, and Russia as Great Powers (although these are all arguable), but most IR Realists would say that the United States is the sole Superpower in the world, rendering Great Power status fairly useless. | [
"A great power is a nation or state that, through its great economic, political and military strength, is able to exert power and influence not only over its own region of the world, but beyond to others.\n",
"BULLET::::- Great power: In historical mentions, the term \"great power\" refers to the states that have strong political, cultural and economical influence over nations around them and across the world.\n",
"A great power is a sovereign state that is recognized as having the ability and expertise to exert its influence on a global scale. Great powers characteristically possess military and economic strength, as well as diplomatic and soft power influence, which may cause middle or small powers to consider the great powers' opinions before taking actions of their own. International relations theorists have posited that great power status can be characterized into power capabilities, spatial aspects, and status dimensions.\n",
"Much effort in academic and popular writing is devoted to deciding which countries have the status of \"power\", and how this can be measured. If a country has \"power\" (as influence) in military, diplomatic, cultural, and economic spheres, it might be called a \"power\" (as status). There are several categories of power, and inclusion of a state in one category or another is fraught with difficulty and controversy. In his famous 1987 work, \"The Rise and Fall of the Great Powers,\" British-American historian Paul Kennedy charts the relative status of the various powers from AD 1500 to 2000. He does not begin the book with a theoretical definition of a \"great power\"; however he does list them, separately, for many different eras. Moreover, he uses different working definitions of a great power for different eras. For example:\n",
"Wight begins: \"Power politics is a colloquial phrase for international politics.\" He explains that states exploit power to achieve expansion and dominance; \"every dominant power aspires... to become a universal empire.\" For diplomatic reasons, \"dominant powers\" are euphemized as \"Great Powers ... who wish to monopolise (sic) the right to create international conflict\". Great Powers win and lose their status through violence, and are defined by their ability to wage war; they are decreasing in number, but those remaining are increasing in size.\n",
"Formal or informal acknowledgment of a nation's great power status has also been a criterion for being a great power. As political scientist George Modelski notes, \"The status of Great power is sometimes confused with the condition of being powerful. The office, as it is known, did in fact evolve from the role played by the great military states in earlier periods... But the Great power system institutionalizes the position of the powerful state in a web of rights and obligations.\"\n",
"Power tends to confuse itself with virtue and a great nation is particularly susceptible to the idea that its power is a sign of God's favor, conferring upon it a special responsibility for other nations—to make them richer and happier and wiser, to remake them, that is, in its own shining image. Power confuses itself with virtue and tends also to take itself for omnipotence. Once imbued with the idea of a mission, a great nation easily assumes that it has the means as well as the duty to do God's work.\n"
] |
Just how credible is the Abiotic Oil Theory vs. the organic algae/zoo-plankton theory? | [This review from Resource Geology shoots it down pretty hard.](_URL_0_) Really, the most obvious criticism is that there haven't been any big oil discoveries that can be conclusively credited to this hypothesis.
This hypothesis might have been worth consideration back in the 1950s, when the Russians came up with it, but it doesn't mesh with what we now know about the world. | [
"One of the main counter arguments to the abiotic theory is the existence of biomarkers in petroleum. These chemical compounds can be best explained as residues of biogenic organic matter. They have been found in all oil and gas accumulations tested so far and suggest that oil has a biological origin and is generated from kerogen by pyrolysis.\n",
"E. N. Harvey (1932) was among the first to propose how bioluminescence could have evolved. In this early paper, he suggested that proto-bioluminescence could have arisen from respiratory chain proteins that hold fluorescent groups. This hypothesis has since been disproven, but it did lead to considerable interest in the origins of the phenomenon. Today, the two prevailing hypotheses (both concerning marine bioluminescence) are the ones put forth by Seliger (1993) and Rees et al. (1998).\n",
"The abiogenic hypothesis regained some support in 2009 when researchers at the Royal Institute of Technology (KTH) in Stockholm reported they believed they had proven that fossils from animals and plants are not necessary for crude oil and natural gas to be generated. In his 2014 publication \"Chemistry of the Climate System\", German chemist Detlev Moller documents sufficient reliable evidence to show that both processes can be shown to co-exist, that they're not mutually exclusive.\n",
"The wide-ranged biological purposes of bio-luminescence include but are not limited to attraction of mates, defense against predators, and warning signals. In the case of bioluminescent bacteria, bio-luminescence mainly serves as a form of dispersal. It has been hypothesized that enteric bacteria (bacteria that survive in the guts of other organisms) - especially those prevalent in the depths of the ocean - employ bio-luminescence as an effective form of distribution. After making their way into the digestive tracts of fish and other marine organisms and being excreted in fecal pellets, bioluminescent bacteria are able to utilize their bio-luminescent capabilities to lure in other organisms and prompt ingestion of these bacterial-containing fecal pellets. The bio-luminescence of bacteria thereby ensures their survival, persistence, and dispersal as they are able to enter and inhabit other organisms.\n",
"Ctenophores may balance marine ecosystems by preventing an over-abundance of copepods from eating all the phytoplankton (planktonic plants), which are the dominant marine producers of organic matter from non-organic ingredients.\n",
"Research into algae for the mass-production of oil focuses mainly on microalgae (organisms capable of photosynthesis that are less than 0.4 mm in diameter, including the diatoms and cyanobacteria) as opposed to macroalgae, such as seaweed. The preference for microalgae has come about due largely to their less complex structure, fast growth rates, and high oil-content (for some species). However, some research is being done into using seaweeds for biofuels, probably due to the high availability of this resource.\n",
"A chemical basis for the abiotic petroleum process is the serpentinization of peridotite, beginning with methanogenesis via hydrolysis of olivine into serpentine in the presence of carbon dioxide. Olivine, composed of Forsterite and Fayalite metamorphoses into serpentine, magnetite and silica by the following reactions, with silica from fayalite decomposition (reaction 1a) feeding into the forsterite reaction (1b).\n"
] |
what is happening when the body develops a cauliflower ear? | The ear fills with blood and fluid that calcifies and hardens over time if it isn't drained and taken care of right away. | [
"The most common cause of cauliflower ear is blunt trauma to the ear leading to a hematoma which, if left untreated, eventually heals to give the distinct appearance of cauliflower ear. The structure of the ear is supported by a cartilaginous scaffold consisting of the following distinct components: the helix, antihelix, concha, tragus, and antitragus. The skin that covers this cartilage is extremely thin with virtually no subcutaneous fat while also strongly attached to the perichondrium, which is richly vascularized to supply the avascular cartilage.\n",
"Cauliflower ear is an irreversible condition that occurs when the external portion of the ear is hit and develops a blood clot or other collection of fluid under the perichondrium. This separates the cartilage from the overlying perichondrium that supplies its nutrients, causing it to die and resulting in the formation of fibrous tissue in the overlying skin. As a result, the outer ear becomes permanently swollen and deformed, resembling a cauliflower.\n",
"Cauliflower ear is a blood clot that forms under the skin in the ear, causing there to be a large bump in the ear; the bump tends to be extremely hard. To develop cauliflower ear one must be hit in the ear many times or hit hard for it to form into a blood clot. When having cauliflower ear it is important to get the ear drained of liquid that has built up. Otherwise the ear will require surgery to return to normal shape and size. The best way to prevent cauliflower is to wear headgear. This will protect the ears from taking hard hits.\n",
"The components of the ear involved in cauliflower ear are the outer skin, the perichondrium, and the cartilage. The outer ear skin is tightly adherent to the perichondrium because there is almost no subcutaneous fat on the anterior of the ear. This leaves the perichondrium relatively exposed to damage from direct trauma and shear forces, created by a force pushing across the ear like a punch, and increasing the risk of hematoma formation. In an auricular hematoma, blood accumulates between the perichondrium and cartilage. The hematoma mechanically obstructs blood flow from the perichondrium to the avascular cartilage. This lack of perfusion puts the cartilage at risk for becoming necrotic and/or infected. If left untreated, disorganized fibrosis and cartilage formation will occur around the aforementioned cartilaginous components.\n",
"A 2005 study achieved successful regrowth of cochlea cells in guinea pigs. However, the regrowth of cochlear hair cells does not imply the restoration of hearing sensitivity, as the sensory cells may or may not make connections with neurons that carry the signals from hair cells to the brain. A 2008 study has shown that gene therapy targeting Atoh1 can cause hair cell growth and attract neuronal processes in embryonic mice. Some hope that a similar treatment will one day ameliorate hearing loss in humans.\n",
"Recessive, dominant, X-linked, or mitochondrial genetic mutations can affect the structure or metabolism of the inner ear. Some may be single point mutations, whereas others are due to chromosomal abnormalities. Some genetic causes give rise to a late onset hearing loss. Mitochondrial mutations can cause SNHL i.e. m.1555AG, which makes the individual sensitive to the ototoxic effects of aminoglycoside antibiotics.\n",
"An anecdotal medical approach is to install lidocaine liniment 3% or gel 2% into the ear canal. Somehow this creates a vagus nerve-triggering reflex through its extensions to the external ear and tympanus (ear drum). The effect can be immediate, and also have lasting effect after the lidocaine effect expires after about two hours.\n"
] |
Have there been any major Civil Rights movements in the US which ultimately failed totally and completely? | Well, there's the modern pederasty movement. By pederasty I mean movements defending adult - adolescent sexual relationships, especially of the homosexual kind. It arose during the 60s and 70s, along with other parts of the sexual revolution and civil rights movement. NAMBLA (North American Man Boy Love Association) was founded in 1978 and describes itself as a political, educational and civil rights association whose goal is to end "the extreme oppression of men and boys in mutually consensual relationships". So they (the few members) describe themselves as being part of civil rights, but I don't know how much other civil rights movements would agree with that (I think not much, at least judging from the statements I've read from gay civil rights associations, which try to dissociate their image from them as much as possible). | [
"Most civil rights movements relied on the technique of civil resistance, using nonviolent methods to achieve their aims. In some countries, struggles for civil rights were accompanied, or followed, by civil unrest and even armed rebellion. While civil rights movements over the last sixty years have resulted in an extension of civil and political rights, the process was long and tenuous in many countries, and many of these movements did not achieve or fully achieve their objectives.\n",
"The civil rights movement marked an enormous change in American social, political, economic and civic life. It brought with it boycotts, sit-ins, nonviolent demonstrations and marches, court battles, bombings and other violence; prompted worldwide media coverage and intense public debate; forged enduring civic, economic and religious alliances; and disrupted and realigned the nation's two major political parties.\n",
"Movements for civil rights in the United States include noted legislation and organized efforts to abolish public and private acts of racial discrimination against African Americans and other disadvantaged groups between 1954 and 1968, particularly in the southern United States. It is sometimes referred to as the Second Reconstruction era, alluding to the unresolved issues of the Reconstruction Era (1863–77).\n",
"The civil rights movement (1896–1954) was a long, primarily nonviolent series of events to bring full civil rights and equality under the law to all Americans. The era has had a lasting impact on United States society, in its tactics, the increased social and legal acceptance of civil rights, and in its exposure of the prevalence and cost of racism.\n",
"Noted achievements of the Civil Rights Movement include the judicial victory in the \"Brown v. Board of Education\" case that nullified the legal article of \"separate but equal\" and made segregation legally impermissible, and the passages of the 1964 Civil Rights Act, . that banned discrimination in employment practices and public accommodations, passage of the Voting Rights Act of 1965 that restored voting rights, and passage of the Civil Rights Act of 1968 that banned discrimination in the sale or rental of housing.\n",
"While the Civil Rights Movement was ongoing, several anti-integration groups founded to attempt to stop the racial desegregation and removals of discrimination which were beginning to occur. One such network of organizations was the White Citizens' Councils, which founded in 1954, after the landmark U.S. Supreme Court ruling \"Brown v. Board of Education\", that same year, that stated state laws permitting white and black segregation were found to be unconstitutional. While the decision effectively overturned the \"Plessy v. Ferguson\" decision of 1896, which allowed states to segregate schools stating \"separate educational facilities are inherently unequal\", there was no true enforcement of this ruling in the south for many years. Blatant discrimination continued to take place for decades via Jim Crow laws, which had mandated racial segregation in all public facilities in the states of the former Confederate States of America, and continued to be enforced in many ways until 1965.\n",
"Civil rights movements are a worldwide series of political movements for equality before the law, that peaked in the 1960s. In many situations they have been characterized by nonviolent protests, or have taken the form of campaigns of civil resistance aimed at achieving change through nonviolent forms of resistance. In some situations, they have been accompanied, or followed, by civil unrest and armed rebellion. The process has been long and tenuous in many countries, and many of these movements did not, or have yet to, fully achieve their goals, although the efforts of these movements have led to improvements in the legal rights of some previously oppressed groups of people, in some places.\n"
] |
if someone were pushed into a bottomless hole, what would be the first thing that killed them and how long would it take? | If they could theoretically fall forever, my guess would be dehydration would kill them eventually. | [
"I was entombed once for 6 long hours. It seemed like 6 years. There were no visible means of getting out either – we had just to wait. I was once right next to a cave-in when my fire boss was buried alive. As we were working and chatting a big stone twice as big as a trunk came tumbling down on my mate from overhead, doubling him like a jack-knife. It squeezed his face right down on the floor. God knows I wasn't strong enough to lift that rock alone, but by superhuman efforts I did. This gave him a chance to breathe and then I shouted. Some men 70 yards away heard me and came and got him out alive. A chap who worked beside me was killed along with 71 others at Udston, and all they could identify him with was his pin leg. I wasn't there that day.\n",
"BULLET::::- March 10, 1999 - A caver climbing the Incredible pit became tangled in multiple ropes and was stranded 140 feet off the cave floor underneath water falling into the pit. The incident resulted in fatality due to hypothermia.\n",
"On March 1, 1991, Indiana caver, Christopher Yeager, made a fatal mistake while rappelling the 23-meter drop in Chevé Cave. Eleven days after the accident, Yeager was buried in a passage near where he fell. On February 8, 1992, almost a year after his death, a team of Yeager’s caving friends used ropes and pulleys to pull his body to the surface which was returned to his family in Indiana.\n",
"The fourth and final event involves jumping into a tub filled with water from a highly elevated plank. But as Pooch leaps, the tub somehow moves, and therefore an official sends him back up. And when Pooch plunges again, everyone below moves away instead of breaking his fall. Pooch plummets into the ground, creating a deep hole as a result. Surprisingly, a Tibetan man from the hole picks up and places him back on the surface.\n",
"Holes can occur for a number of reasons, including natural processes and intentional actions by humans or animals. Holes in the ground that are made intentionally, such as holes made while searching for food, for replanting trees, or postholes made for securing an object, are usually made through the process of digging. Unintentional holes in an object are often a sign of damage. Potholes and sinkholes can damage human settlements.\n",
"The rescue team all agreed that there was no possibility of the men left below ground being alive. Two explosions, blackdamp (locally called choak-damp), fire and the lethal afterdamp made any rescue attempt impossible. The suggestion was made that the pit be stopped up to extinguish the fire. However, local recollections of three men who had survived for 40 days in a pit near Byker led to shouts of \"Murder\" and obstruction.\n",
"A Hole Lot of Trouble is a 1969 British short comedy film. Lasting only twenty-seven minutes, it charts the efforts of a group of workmen trying to dig a hole. It was written and directed by Francis Searle and starred Arthur Lowe, Victor Maddern and Bill Maynard. It became available on DVD in the UK in 2015.\n"
] |
Can one neuron within the human brain have several types of neurotransmitters that bind to it? For instance, can a neuron that usually allows binding by GABA neurotransmitters also allow binding from other types of neurotransmitters, such as NMDA, DA, 5-HT, etc? | Yes. Neurons can have, for instance, both excitatory and inhibitory inputs, mostly mediated through glutamate and GABA. | [
"A neuron of a given kind (e.g. a thalamic cell) cannot be functionally replaced by one of another type (e.g. an inferior ollivary cell) even if their synaptic connectivity and the type of neurotransmitter outputs are identical. (The difference is that the intrinsic electrophysiological properties of thalamic cells are extraordinarily different from those of inferior olivary neurons).\n",
"There are literally hundreds of different types of synapses. In fact, there are over a hundred known neurotransmitters, and many of them have multiple types of receptors. Many synapses use more than one neurotransmitter—a common arrangement is for a synapse to use one fast-acting small-molecule neurotransmitter such as glutamate or GABA, along with one or more peptide neurotransmitters that play slower-acting modulatory roles. Molecular neuroscientists generally divide receptors into two broad groups: chemically gated ion channels and second messenger systems. When a chemically gated ion channel is activated, it forms a passage that allows specific types of ions to flow across the membrane. Depending on the type of ion, the effect on the target cell may be excitatory or inhibitory. When a second messenger system is activated, it starts a cascade of molecular interactions inside the target cell, which may ultimately produce a wide variety of complex effects, such as increasing or decreasing the sensitivity of the cell to stimuli, or even altering gene transcription.\n",
"Since many of the same neurotransmitters are found in the ENS as the brain, it follows that myenteric neurons can express receptors for both peptide and non-peptide (amines, amino acids, purines) neurotransmitters. Generally, expression of a receptor is limited to a subset of myenteric neurons, with probably the only exception being expression of nicotinic cholinergic receptors on all myenteric neurons. One receptor that has been targeted for therapeutic reasons has been the 5-hydroxytryptamine (5-HT) receptor. Activating this pre-synaptic receptor enhances cholinergic neurotransmission and can stimulate gastrointestinal motility.\n",
"According to a rule called Dale's principle, which has only a few known exceptions, a neuron releases the same neurotransmitters at all of its synapses. This does not mean, though, that a neuron exerts the same effect on all of its targets, because the effect of a synapse depends not on the neurotransmitter, but on the receptors that it activates. Because different targets can (and frequently do) use different types of receptors, it is possible for a neuron to have excitatory effects on one set of target cells, inhibitory effects on others, and complex modulatory effects on others still. Nevertheless, it happens that the two most widely used neurotransmitters, glutamate and GABA, each have largely consistent effects. Glutamate has several widely occurring types of receptors, but all of them are excitatory or modulatory. Similarly, GABA has several widely occurring receptor types, but all of them are inhibitory. Because of this consistency, glutamatergic cells are frequently referred to as \"excitatory neurons\", and GABAergic cells as \"inhibitory neurons\". Strictly speaking, this is an abuse of terminology—it is the receptors that are excitatory and inhibitory, not the neurons—but it is commonly seen even in scholarly publications.\n",
"A neuron made of two coupled oscillators, one having a fixed and the other having a tunable natural frequency, has been shown able to run logic gates such as XOR that conventional sigmoid neurons cannot. \n",
"There are two types of receptors that neurotransmitters interact with on a post-synaptic neuron. The first types of receptors are ligand-gated ion channels or LGICs. LGIC receptors are the fastest types of transduction from chemical signal to electrical signal. Once the neurotransmitter binds to the receptor, it will cause a conformational change that will allow ions to directly flow into the cell. The second types are known as G-protein-coupled receptors or GPCRs. These are much slower than LGICs due to an increase in the amount of biochemical reactions that must take place intracellularly. Once the neurotransmitter binds to the GPCR protein, it causes a cascade of intracellular interactions that can lead to many different types of changes in cellular biochemistry, physiology, and gene expression. Neurotransmitter/receptor interactions in the field of neuropharmacology are extremely important because many drugs that are developed today have to do with disrupting this binding process.\n",
"By the end of the 20th century, it became clear that the human brain operates with more than a dozen neurotransmitters and a large number of neuropeptides and hormones. The relationships between these different chemical systems are complex as some of them suppress and some of them induce each other's release during neuronal exchanges. This complexity of relationships devalues the old approach of assigning “inhibitory vs. excitatory” roles to neurotransmitters: the same neurotransmitters can be either inhibitory or excitatory depending on what system they interact with. It became clear that an impressive diversity of neurotransmitters and their receptors is necessary to meet a wide range of behavioural situations, but the links between temperament traits and specific neurotransmitters are still a matter of research. Several attempts were made to assign specific (single) neurotransmitters to specific (single) traits. For example, dopamine was proposed to be a neurotransmitter of the trait of Extraversion, noradrenaline was linked to anxiety, and serotonin was thought to be a neurotransmitter of an inhibition system. These assignments of neurotransmitter functions appeared to be an oversimplification when confronted by the evidence of much more diverse functionality. \n"
] |
how do google glasses work if i can't focus on anything within three inches of my eyes? | You can't focus on anything within three inches of your eyes because the lens in your eye can't accommodate (become stronger) well enough. You can only make the lens so strong and it turns out that the shortest focal point you can get with just your eyes is around that distance, so you can't focus on anything closer than that.
What Glass does is focus the light for you. A little projector projects light onto a prism (the glass thing) and the prism will actually focus the light onto your retina, so your eye lens doesn't have to accommodate.
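As a rough way to see why there is a closest distance you can focus at all, here is a standard thin-lens sketch (not part of the original answer; the symbols are just the usual textbook ones):

    \frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i}

The lens-to-retina distance d_i is fixed by the size of your eyeball, so focusing on a closer object (smaller d_o) needs a shorter focal length f, i.e. a stronger lens. Accommodation can only make the lens so strong, which puts a floor on d_o: that floor is the near point, the few-inch limit described above.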
[See this infographic.](_URL_0_) | [
"Google Glass is a brand of smart glasses—an optical head-mounted display designed in the shape of a pair of eyeglasses. It was developed by X (previously Google X) with the mission of producing a ubiquitous computer. Google Glass displayed information in a smartphone-like, hands-free format. Wearers communicated with the Internet via natural language voice commands.\n",
"Other than the touchpad, Google Glass can be controlled using just \"voice actions\". To activate Glass, wearers tilt their heads 30° upward (which can be altered for preference) or simply tap the touchpad, and say \"O.K., Glass.\" Once Glass is activated, wearers can say an action, such as \"Take a picture\", \"Record a video\", \"Hangout with [person/Google+ circle]\", \"Google 'What year was Wikipedia founded?'\", \"Give me directions to the Eiffel Tower\", and \"Send a message to John\" (many of these commands can be seen in a product video released in February 2013). For search results that are read back to the user, the voice response is relayed using bone conduction through a transducer that sits beside the ear, thereby rendering the sound almost inaudible to other people.\n",
"In June 2014, Google Glass' ability to acquire images of a patient's retina (\"Glass Fundoscopy\") was publicly demonstrated for the first time at the Wilmer Clinical Meeting at Johns Hopkins University School of Medicine by Dr. Aaron Wang and Dr. Allen Eghrari. This technique was featured on the cover of the Journal for Mobile Technology in Medicine for January 2015. Doctors Phil Haslam and Sebastian Mafeld demonstrated the first application of Google Glass in the field of interventional radiology. They demonstrated how Google Glass could assist a liver biopsy and fistulaplasty, and the pair stated that Google Glass has the potential to improve patient safety, operator comfort, and procedure efficiency in the field of interventional radiology.\n",
"The Google Glass prototype resembled standard eyeglasses with the lens replaced by a head-up display. In mid-2011, Google engineered a prototype that weighed ; by 2013 they were lighter than the average pair of sunglasses.\n",
"At 8 mm and 16 mm, respectively, the lens is able to focus on an area and in width. Although the official close focus distance is , Scott Gietler reported that by using the spot-focus mode, rather than multiple focus points, he was able to achieve a minimum working distance (glass to subject).\n",
"In the event that a spectacle wearer cannot obtain the eye relief that they require, some cameras and microscopes allow prescription lenses to be fitted onto their eyepieces. In this way, the user can temporarily dispense with glasses in favor of the lens mounted on the optics. Although this method does not afford good incidental vision for the field around them, it might still be of use to some.\n",
"Current bifocals and progressive lenses are static, in that the user has to change their eye position to look through the portion of the lens with the focal power corresponding to the distance of the object. This usually means looking through the top of the lens for distant objects and down through the bottom of the lens for near objects. Adjustable focus eyeglasses have one focal length, but it is variable without having to change where one is looking.\n"
] |
the difference between functions, methods, objects, classes, and oop languages. | A *function* is a block of code which runs some commands and can be called on from elsewhere in the code.
example:
    def function():
        print("I do something.")

    # now we can call the function from other code, like this, and it will always print "I do something".
    function()
A *class* represents an object. A thing. It can contain both *data* and *functions that operate on the data*.
A function which is attached to a class is called a *method*. It can only be called through the class.
So, for example:
    class Car():
        def __init__(self):
            self.model = "Honda"
            self.color = "red"

        def drive(self, destination):
            # do something about driving to the destination
            print("Driving the", self.color, self.model, "to", destination)

    henry = Car()
    henry.drive("the store")
In this example, I have a "Car" class with two attached methods. One of them, __init__, is something used by Python to set up the object when it is created. The other one, drive(), does something about driving.
Then I created an *instance* of the Car class and called the method 'drive' on the instance.
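To make the function-versus-method distinction a bit more concrete, here is a minimal sketch that just continues the Car example above (the destination strings are only illustrations):

    # attributes set in __init__ live on each instance
    print(henry.model)   # Honda
    print(henry.color)   # red

    # a method is just a function stored on the class; for an ordinary method,
    # these two calls do exactly the same thing
    henry.drive("the beach")
    Car.drive(henry, "the beach")

When you write henry.drive(...), Python looks up drive on the Car class and passes henry in as the self argument for you.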
So: function() is a 'function', meaning a block of code that can be called from elsewhere and run when it is called. 'Car' is a *class*, which is a collection of data and functions that operate on that data; the functions contained within a class are called 'methods'. So *drive* is a function contained within Car, meaning it is a method. | [
"Like Smalltalk, in Objective-C, class methods are simply methods called on the class object, hence a class's class methods must be defined as instance methods in its metaclass. Because different classes can have different sets of class methods, each class must have its own separate metaclass. Classes and metaclasses are always created as a pair: the runtime has functions codice_85 and codice_86 to create and register class-metaclass pairs, respectively.\n",
"In many object-oriented languages, classes are the main means of encapsulation and modularity; each class defines a namespace and controls which definitions are externally visible. Further, classes in many languages define an indivisible unit that must be used as a whole. For example, using a codice_23 concatenation function requires importing and compiling against all of codice_23.\n",
"Every class \"implements\" (or \"realizes\") an interface by providing structure and behavior. Structure consists of data and state, and behavior consists of code that specifies how methods are implemented. There is a distinction between the definition of an interface and the implementation of that interface; however, this line is blurred in many programming languages because class declarations both define and implement an interface. Some languages, however, provide features that separate interface and implementation. For example, an abstract class can define an interface without providing implementation.\n",
"Class-based languages, or, to be more precise, typed languages, where subclassing is the only way of subtyping, have been criticized for mixing up implementations and interfaces—the essential principle in object-oriented programming. The critics say one might create a bag class that stores a collection of objects, then extend it to make a new class called a set class where the duplication of objects is eliminated. Now, a function that takes an object of the bag class may expect that adding two objects increases the size of a bag by two, yet if one passes an object of a set class, then adding two objects may or may not increase the size of a bag by two. The problem arises precisely because subclassing implies subtyping even in the instances where the principle of subtyping, known as the Liskov substitution principle, does not hold. Barbara Liskov and Jeannette Wing formulated the principle succinctly in a 1994 paper as follows:\n",
"Computer programming languages having notions of either functions as the core module (see Functional programming) or functions as objects provide excellent examples of loosely coupled programming. Functional languages have patterns of Continuations, Closure, or generators. See Clojure and Lisp as examples of function programming languages. Object-oriented languages like Smalltalk and Ruby have code blocks, whereas Eiffel has agents. The basic idea is to objectify (encapsulate as an object) a function independent of any other enclosing concept (e.g. decoupling an object function from any direct knowledge of the enclosing object). See First-class function for further insight into functions as objects, which qualifies as one form of first-class function.\n",
"In computer science, a programming language is said to have first-class functions if it treats functions as first-class citizens. This means the language supports passing functions as arguments to other functions, returning them as the values from other functions, and assigning them to variables or storing them in data structures. Some programming language theorists require support for anonymous functions (function literals) as well. In languages with first-class functions, the names of functions do not have any special status; they are treated like ordinary variables with a function type. The term was coined by Christopher Strachey in the context of \"functions as first-class citizens\" in the mid-1960s.\n",
"In object-oriented programming, a class is an extensible program-code-template for creating objects, providing initial values for state (member variables) and implementations of behavior (member functions or methods). In many languages, the class name is used as the name for the class (the template itself). The name for the default constructor of the class (a subroutine that creates objects), and as the type of objects generated by instantiating the class; these distinct concepts are easily conflated.\n"
] |
What technological breakthroughs between 15th century to the 19th century were required for the creation of pistols and rifles? | Depending on the specific cutoff date you've got, I'd argue that smokeless gunpowder in 1886 is the single most important breakthrough. Smokeless powder paved the way for firearms as we know them today. Smokeless powder burns slightly differently than black powder (rapid burn rather than low explosive) and is significantly cleaner on firing. Although the immediate effect is apparent in the name - less smoke means it's easier to see on the battlefield - the more significant benefit was the significantly cleaner result of firing, meaning guns no longer had to be designed around regular cleaning after fewer than a hundred rounds. This in turn allowed bores and bullets to decrease in size (as the utility of large bores with respect to getting more shots between having to clean was gone) while velocities went up, allowing for flatter-shooting, longer-ranged weapons. Smokeless powder also proved far more conducive to autoloading systems thanks to higher pressures and cleaner burns, which made machineguns a truly viable weapon on the battlefield.
Prior to that, percussion caps and brass drawing were very important steps. Percussion caps at first allowed for a quicker and more convenient way to prime a gun compared to a flint lock, but they ultimately provided the foundation for the primers in every modern cartridge. Meanwhile, brass drawing provided an effective and consistent means to contain cartridges that could be mass-produced. Unlike paper cartridges, they left no residue in the chamber after firing, and unlike rolled brass cartridges, they were sturdier and easier to produce. | [
"During World War I, the Austrians introduced the world's first machine pistol the Steyr Repetierpistole M1912/P16. The Germans also experimented with machine pistols, by converting various types of semi-automatic pistols to full-auto, leading to the development of the first practical submachine gun. During World War II, machine pistol development was more or less ignored as the major powers were focused on mass-producing submachine guns. After the war, machine pistols development was limited and only a handful of manufacturers would develop new designs, with varying degrees of success.\n",
"The first primitive firearms were invented about 1250 AD in China when the man-portable fire lance (a bamboo or metal tube that could shoot ignited gunpowder) was combined with projectiles such as scrap metal, broken porcelain, or darts/arrows.\n",
"In 1825 he designed the first of the large caliber, short barreled pistols that would lead to considerable wealth and fame for himself. Using the basic flintlock action in common usage at the time, the pistols were muzzle loading single shots, or in some cases, double barreled in an over-under manner.\n",
"The ancestor to the modern minigun was a hand cranked mechanical device invented in the 1860s by Richard Jordan Gatling. Gatling later replaced the hand-cranked mechanism of a rifle-caliber Gatling gun with an electric motor, a relatively new invention at the time. Even after Gatling slowed down the mechanism, the new electric-powered Gatling gun had a theoretical rate of fire of 3,000 rounds per minute, roughly three times the rate of a typical modern, single-barreled machine gun. Gatling's electric-powered design received U.S. Patent #502,185 on July 25, 1893. Despite Gatling's improvements, the Gatling gun fell into disuse after cheaper, lighter-weight, recoil and gas operated machine guns were invented; Gatling himself went bankrupt for a period.\n",
"The first successful machine-gun designs were developed in the mid-19th century. The key characteristic of modern machine guns, their relatively high rate of fire and more importantly mechanical loading, first appeared in the Model 1862 Gatling gun, which was adopted by the United States Navy. These weapons were still powered by hand; however, this changed with Hiram Maxim's idea of harnessing recoil energy to power reloading in his Maxim machine gun. Dr. Gatling also experimented with electric-motor-powered models; this externally powered machine reloading has seen use in modern weapons as well.\n",
"Ferdinand Ritter von Mannlicher produced the first successful design for a semi-automatic rifle in 1885, and by the early 20th century, many manufacturers had introduced semi-automatic shotguns, rifles and pistols. \n",
"In 1857, Smith and Wesson formed another Smith & Wesson company, this time to produce a pistol with interchangeable parts, a repeating action, a revolving magazine, metallic cartridges, and an open cylinder. They developed more firearms using their own patents along with patents and licenses bought from other gunsmiths.\n"
] |
What was the impact of the Albigensian Crusade on the centralization of France ? | Hi there - unfortunately we have had to remove your question, because [/r/AskHistorians isn't here to do your homework for you](_URL_0_). However, our rules DO permit people to ask for help with their homework, so long as they are seeking clarification or resources, rather than the answer itself.
If you have indeed asked a homework question, you should consider resubmitting a question more focused on finding resources and seeking clarification on confusing issues: tell us what you've researched so far, what resources you've consulted, and what you've learned, and we are more likely to approve your question. Please see this [Rules Roundtable](_URL_1_) thread for more information on what makes for the kind of homework question we'd approve. Additionally, if you're not sure where to start in terms of finding and understanding sources in general, we have a six-part series, "[Finding and Understanding Sources](_URL_2_)", which has a wealth of information that may be useful for finding and understanding information for your essay. Finally, other subreddits are likely to be more suitable for help with homework - try looking for help at /r/HomeworkHelp.
Alternatively, if you are not a student and are not doing homework, we have removed your question because it resembled a homework question. It may resemble a common essay question from a prominent history syllabus or may be worded in a broad, open-ended way that feels like the kind of essay question that a professor would set. Professors often word essay questions in order to provide the student with a platform to show how much they understand a topic, and these questions are typically broader and more interested in interpretations and delineating between historical theories than the average /r/AskHistorians question. If your non-homework question was incorrectly removed for this reason, we will be happy to approve your question if you **wait for 7 days** and then ask a less open-ended question on the same topic. | [
"As a result of the Albigensian Crusade, there were only a small number of French recruits for the Fifth and Sixth crusades. Strayer argues that the Albigensian Crusade increased the power of the French monarchy and made the papacy more dependent on it. This would eventually lead to the Avignon Papacy.\n",
"The Albigensian Crusade was launched in 1209 to eliminate the heretical Cathars of Occitania (the south of modern-day France). It was a decade-long struggle that had as much to do with the concerns of northern France to extend its control southwards as it did with heresy. In the end, both the Cathars and the independence of southern France were exterminated.\n",
"The Albigensian Crusade or the Cathar Crusade (1209–1229) was a 20-year military campaign initiated by Pope Innocent III to eliminate Catharism in Languedoc, in southern France. The Crusade was primarily prosecuted by the French crown and it promptly took on a political flavour, resulting not only in a significant reduction in the number of practising Cathars, but also in a realignment of the County of Toulouse in Languedoc, bringing it into the sphere of the French crown and diminishing the distinct regional culture and high level of influence of the Counts of Barcelona.\n",
"The Albigensian Crusade had begun in 1209, ostensibly against the Cathar heretics of southern France and Languedoc in particular, though it soon became a contest between lords of northern France and those of Occitania in the south. The first phase from 1209 to 1215 was quite successful for the northern forces, but this was followed by a series of local rebellions from 1215 to 1225 that undid many of these earlier gains. There followed the seizure of Avignon and Languedoc.\n",
"The Albigensian Crusade was initiated by the Kingdom of France at the behest of Pope Innocent III. Its purpose was to squash the growing Cathar movement, which flourished mainly in the Languedoc region of southern France. The immediate cause was the killing of the papal legate, Pierre de Castelnau. The Crusaders set out in the summer of 1209. After several military victories, they were able to capture many towns without a fight before arriving at Minerve. After the fall of Carcassonne, papal legate Arnaud Amalric, who had led troops during the Massacre at Béziers, was replaced as commander of the Crusader force by Simon de Montfort, 5th Earl of Leicester, although Amalric continued to accompany the army.\n",
"The Albigensian Crusade or the Cathar Crusade (1209–1229; , ) was a 20-year military campaign initiated by Pope Innocent III to eliminate Catharism in Languedoc, in southern France. The Crusade was prosecuted primarily by the French crown and promptly took on a political flavour, resulting in not only a significant reduction in the number of practising Cathars, but also a realignment of the County of Toulouse in Languedoc, bringing it into the sphere of the French crown and diminishing the distinct regional culture and high level of influence of the Counts of Barcelona.\n",
"In the 13th century, the spiritual beliefs of the area were challenged by the See of Rome and the region became attached to the Kingdom of France following the Albigensian Crusade (1208–1229). This crusade aimed to put an end to what the Church considered the Cathar heresy, and enabled the Capetian dynasty to extend its influence south of the Loire. As part of this process, the former principalities of Trencavel (the Viscounty of Albi, Carcassona, Besièrs, Agde and Nimes) were integrated into the Royal French Domain in 1224. The Counts of Toulouse followed them in 1271. The remaining feudal enclaves were absorbed progressively up to the beginning of the 16th century; the County of Gévaudan in 1258, the County of Melgueil (Mauguiò) in 1293, the Lordship of Montpellier in 1349 and the Viscounts of Narbonne in 1507.\n"
] |
why does the box of my ps4 say it comes with 500 gb when it only comes with 407.2 gb? | GB can technically mean 2 different things. One is the computer definition. That works off binary and powers of 2. So, there are 8 bits in 1 byte. Then we start counting bytes by doubling 1, 2, 4, 8, 16, 32, 64, 128, 256, 512 and 1024. 1024 bytes is a kilobyte (KB). If you do the same thing, counting KB, 1024 KB is a megabyte (MB). If you do the same thing, counting MB, 1024 MB is a gigabyte (GB) and so on to terabytes (TB), etc.
Now, in non-computer contexts, the prefixes kilo, mega, giga and tera mean a thousand, a million, a billion and a trillion. So, while computers themselves use the first system to count up space, the manufacturers can use the definition of "a billion bytes" for GB on the packaging without the FTC coming down on them for lying in their advertising.
The bigger the hard drives get, the bigger the difference between the 2 methods of calculation gets.
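As a quick sketch of that arithmetic (a minimal example; the exact amount a console sets aside for its own system software varies, so that part is an assumption):

    # "500 GB" on the box is counted in decimal units (powers of 1,000)
    decimal_bytes = 500 * 10**9

    # the operating system counts in binary units (powers of 1,024)
    binary_gib = decimal_bytes / 2**30
    print(round(binary_gib, 2))   # ~465.66

    # the further drop to the roughly 407 GB you actually see is storage the
    # console reserves for its own system software (assumed here; it varies)

So roughly 34 GB of the gap comes purely from the two counting methods, before any space is reserved.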
And, because this went to court a couple of times a while back, you'll find that the packaging itself usually has fine print explaining this in legalese. | [
"CECH-4000B consoles (those with hard drives) weigh approximately , while the CECH-4000A weighs approximately . Both are roughly 25% smaller and about 20% lighter than the original PS3 Slim. This version has a sliding disc cover rather than the slot-loading drive found on previous PlayStation 3 consoles (Similar to the Sony BRAVIA KDL22PX300, which includes a built in PlayStation 2).\n",
"The unit is sold in many hard drive sizes, 20, 40, 60, & 80 GB and the AV4100 which is 100 GB in size. The actual size of the unit depends on the capacity, the AV420 (20 GB) was smaller than all of the other models and also has a smaller screen size.\n",
"For \"2.5″\" bays, actual dimensions are wide, between and high, and deep. However, most laptops have drive bays smaller than the 15 mm specification. 2.5″ hard drives may range from 7 mm to 15 mm in height, there are two sizes that appear to be prominent. 9.51 mm size drives are predominantly used by laptop manufacturers, however at present 2.5″ Velociraptor and some higher capacity drives (above 1 TB), are 15 mm in height. The greater height of the 15 mm drives allow more platters and therefore greater data capacities. Many laptop drive bays are designed to be removable trays in which the drives are mounted, to ease removal and replacement.\n",
"In September 2012 at the Tokyo Game Show, Sony announced that a new, slimmer PS3 redesign (CECH-4000) was due for release in late 2012 and that it would be available with either a 250 GB or 500 GB hard drive. Three versions of the \"Super Slim\" model were revealed: one with a 500 GB hard drive, a second with a 250 GB hard drive which is not available in PAL regions, and a third with a 12 GB flash storage that was available in PAL regions, and in Canada. The storage of 12 GB model is upgradable with an official standalone 250 GB hard drive. A vertical stand was also released for the model. In the United Kingdom, the 500 GB model was released on September 28, 2012; and the 12 GB model was released on October 12, 2012. In the United States, the PS3 Super Slim was first released as a bundled console. The 250 GB model was bundled with the \"Game of the Year\" edition of \"\" and released on September 25, 2012; and the 500 GB model was bundled with \"Assassin's Creed III\" and released on October 30, 2012. In Japan, the black colored Super Slim model was released on October 4, 2012; and the white colored Super Slim model was released on November 22, 2012. The Super Slim model is 20 percent smaller and 25 percent lighter than the Slim model and features a manual sliding disc cover instead of a motorized slot-loading disc cover of the Slim model. The white colored Super Slim model was released in the United States on January 27, 2013 as part of the \"Instant Game Collection Bundle\". The \"Garnet Red\" and \"Azurite Blue\" colored models were launched in Japan on February 28, 2013. The \"Garnet Red\" version was released in North America on March 12, 2013 as part of the \"\" bundle with 500 GB storage and contained \"God of War: Ascension\" as well as the \"God of War Saga\". The \"Azurite Blue\" model was released as a GameStop exclusive with 250GB storage.\n",
"Critics noted that about half of the internal storage on the S4's 16 GB model was taken up by its system software, using 1 GB more than the S III and leaving only 8.5 to 9.15 GB for the storage of other data, including downloaded apps (some of which cannot be moved to the SD card). Samsung initially stated that the space was required for the S4's new features, but following a report regarding the issue on the BBC series \"Watchdog\", Samsung stated that it would review the possibility of optimising the S4's operating system to use less local drive space in a future update. Storage optimizations were brought in an update first released in June 2013, which frees 80 MB of internal storage and restores the ability to move apps to the device's microSD card.\n",
"Initially, 250 GB hard drives were only available through third-party manufacturers or through the purchase of a special-edition Xbox 360 console bundle, but from 2010, it was being sold as a separate accessory in Japan, North America, and the UK. Currently, the 320 GB hard drive is only available as part of either limited/special edition Xbox 360 S bundles or as a separate purchase for Xbox 360 S consoles; it is not available for original Xbox 360 models. Of the total storage capacity, approximately 6 GB is reserved for system use; around 4 GB of that portion is reserved for game title caching and other hard drive-specific elements in games that support the hard drive and an additional 2 GB is reserved for use by the Xbox 360 backwards-compatibility software. This leaves users with approximately 14, 54, 114, 244, or 314 GB (displayed as 14, 52, 107, 228, or 292 GiB) of free space on the drive. Depending on the market, the hard drive comes preloaded with content, such as videos and Xbox Live Arcade games or demos.\n",
"On February 22, 2012, Barnes & Noble released the Nook Tablet 8 GB at US $199 to compete with the Kindle Fire. The differences from the 16 GB model are: 512 MB RAM and 8 GB of internal storage, of which 5 GB is available for user content and 1 GB is reserved for NOOK Store content. On August 12, 2012, Barnes & Noble lowered the price to $179. On November 4, 2012, the price was further reduced to US $159.\n"
] |
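The prefix mismatch described above is simple arithmetic, so here is a minimal Python sketch of it. The capacities and prefixes below are illustrative examples only, not figures drawn from any particular product.

```python
# Decimal (manufacturer) vs. binary (operating-system) reading of the same prefix.
# The relative gap grows with every prefix step, which is why it is far more
# noticeable on today's large drives than it was on small ones.

PREFIXES = [
    ("kilo", 10**3,  2**10),
    ("mega", 10**6,  2**20),
    ("giga", 10**9,  2**30),
    ("tera", 10**12, 2**40),
]

for name, decimal, binary in PREFIXES:
    gap = 100 * (1 - decimal / binary)
    print(f"1 {name}byte (decimal) = {decimal / binary:.4f} {name}bytes (binary), "
          f"{gap:.2f}% smaller as the OS reports it")

# A concrete (illustrative) case: a drive marketed as 500 GB.
print(f"500 GB on the box -> {500 * 10**9 / 2**30:.1f} GiB reported by the OS")
```

The gap is about 2% at the kilo step, 7% at giga, and 9% at tera, which is why the shortfall feels larger on bigger drives.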
Why didn't the Germans bombard southern England with artillery? | They did, to an extent. The Germans had quite a bit of cross-channel artillery at Calais, which they used to fire on Kent for years. Economically though, it probably wasn't all that useful. The German guns and their barrels and ammunition probably cost more than the damage they did to anything on land. The firing rate of the German guns was often less than one round per hour, and obviously they didn't have any spotters to help them zero in on targets.
On the other hand, the cross-channel guns were highly useful for attacking British shipping passing through the straits of Dover. The Germans had radar there, and anything passing through the straits could expect to be fired upon. | [
"Artillery bombardments were to be co-ordinated with infantry attacks, with various types of artillery given suitable targets for the cumulative destruction of field defences and the killing of German infantry. Heavy artillery and mortars were to be used for the destruction of field fortifications, howitzers and light mortars for the destruction of trenches, machine-gun and observation posts; heavy guns and mortars to destroy fortified villages and concrete strong points. Longer-range guns were to engage German artillery with counter-battery fire, to deprive German infantry of artillery support during the attack, when French infantry were at their most vulnerable. Wire cutting was to be performed by field artillery, firing high explosive (HE) shells and supported by specialist wire-cutting sections of infantry, which would go out the night before an attack. During the attack, the field artillery would fire a linear barrage on trenches and the edges of woods and villages. Infantry tactics were to be based on reconnaissance, clear objectives, liaison with flanking units and the avoidance of disorganisation within attacking units. General attacks would need to be followed by the systematic capture of remaining defences for jumping-off positions in the next general attack.\n",
"The British method of attack by 1916 was to fire an intense bombardment on the German front trenches just before zero hour, then lift the bombardment to the next trench, then the next according to a timetable. Before the barrage lifted, infantry were to creep as close as possible to the bombardment, considered to be from the trench and to attack as soon as the shellfire lifted. The destructive effect of the bombardment was said by Haig and Rawlinson, to be such that nothing could live in the target area and that infantry would only have to occupy the ground,\n",
"In May 1917 the Germans began using heavy bombers against England using Gotha G.IV and later supplementing these with \"Riesenflugzeuge\" (\"giant aircraft\"), mostly from the Zeppelin-Staaken firm. The targets of these raids were industrial and port facilities and government buildings, but few of the bombs hit military targets, most falling on private property and killing civilians. Although the German strategic bombing campaign against Britain was the most extensive of the war, it was largely ineffective, in terms of actual damage done. Only 300 tons of bombs were dropped, resulting in material damage of £2,962,111 damage, 1,414 dead and 3,416 injured, these figures including those due to shrapnel from the anti-aircraft fire. In the autumn of 1917, however, over 300,000 Londoners had taken shelter from the bombing, and industrial production had fallen.\n",
"BULLET::::- The third German attempt to bomb England using airships failed when a lone naval Zeppelin encountered a gale over the North Sea and was blown out of control over Nieuwpoort, Belgium where Belgian antiaircraft gunners shot her down.\n",
"German strategic bombing during World War I struck Warsaw, Paris, London and other cities. Germany led the world in Zeppelins, and used these airships to make occasional bombing raids on military targets, London and other British cities, without great effect. Later in the war, Germany introduced long range strategic bombers. Damage was again minor but they forced the British air forces to maintain squadrons of fighters in England to defend against air attack, depriving the British Expeditionary Force of planes, equipment, and personnel badly needed on the Western front.\n",
"Germany was the first country to organize regular air attacks on enemy infrastructure with the Luftstreitkräfte. In World War I, it used its zeppelins (airships) to drop bombs on British cities. At that time, Britain did have aircraft, though her airships were less advanced than the zeppelins and were very rarely used for attacking; instead, they were usually used to spy on German U-boats (submarines).\n",
"Although the coastal towns of NE England were bombarded by the German Navy on 16 December 1914 (Raid on Scarborough, Hartlepool and Whitby) and by Zeppelins in January and June 1915, it became clear that a fullscale German invasion of Britain was unlikely, while the armies in the field required large numbers of engineers. The Fortress Engineer units therefore began organising field companies for overseas service.\n"
] |
What is chemically happening when pasta sauce stains tupperware? | Without having a source to hand, I would say that nothing is happening on a chemical level.
It is just the carotenoid pigment (specifically lycopene) that makes tomatoes red getting stuck to the plastic.
Since lycopene is fat-soluble, you should try to rub the stain out with oil (just normal cooking oil). I have learned that this should help. | [
"Lycopene is the pigment in tomato-containing sauces, turning plastic cookware orange, and is insoluble in water. It can be dissolved only in organic solvents and oils. Because of its nonpolarity, lycopene in food preparations will stain any sufficiently porous material, including most plastics. To remove this staining, the plastics can be soaked in a solution containing a small amount of household bleach.\n",
"The addition of alkaline materials to such dishes as pasta makes them feel slippery in the mouth and on the fingers; they also develop a yellow color and are more elastic than ordinary noodles. Various flours such as ordinary all-purpose white flour, bread flour, and semolina flour can be used, with somewhat varying results. \n",
"The resulting powder or paste is mixed with water, or more often broth, and simmered until it is pungent and very thick. It is most often prepared in a \"cazuela\" () or a thick heavy clay cauldron and stirred almost constantly to prevent burning. The thickness of the sauce has prompted some, such as Mexican-food authority Patricia Quintana, to claim it is too substantial to be called a sauce. However, like a sauce, it is always served over something and never eaten alone. \"Mole poblano\" is most traditionally served with turkey, but it and many others are also served with chicken, pork, or other meats (such as lamb).\n",
"Before the mixing process takes place, semolina particles are irregularly shaped and present in different sizes. Semolina particles become hydrated during mixing. The amount of water added to the semolina is determined based on the initial moisture content of the flour and the desired shape of the pasta. The desired moisture content of the dough is around 32% wet basis and will vary depending on the shape of pasta being produced.\n",
"Short pasta pieces fall on the shaker conveyor and powerful hot air is blown to them immediately after the extrusion. This reduces the moisture content by 5% and prevents the pieces from sticking and flattening. Shaker then carries the product through tiers with dry hot air and buckets collect the pasta and spread them on the upper tier of the multi-tier drying unit. This unit has four areas which periods of intense moisture extraction alternately followed by periods of rest occur at eight drying/stabilizing cycle in total. Process ends in cold air chamber for stabilizing.\n",
"The pasta is cooked in moderately salted boiling water. The guanciale is briefly fried in a pan in its own fat. A mixture of raw eggs (or yolks), grated Pecorino (or a mixture with Parmesan), and a good amount of ground black pepper is combined with the hot pasta either in the pasta pot or in a serving dish, but away from direct heat, to avoid curdling the egg. The fried guanciale is then added, and the mixture is tossed, creating a rich, creamy sauce with bits of meat spread throughout. Although various shapes of pasta can be used, the raw egg can only cook properly with a shape that has a sufficiently large ratio of surface area to volume, such as the long, thin types fettucine, linguine, or spaghetti.\n",
"Starch becomes soluble in water when heated. The granules swell and burst, the semi-crystalline structure is lost and the smaller amylose molecules start leaching out of the granule, forming a network that holds water and increasing the mixture's viscosity. This process is called starch gelatinization. During cooking, the starch becomes a paste and increases further in viscosity. During cooling or prolonged storage of the paste, the semi-crystalline structure partially recovers and the starch paste thickens, expelling water. This is mainly caused by retrogradation of the amylose. This process is responsible for the hardening of bread or staling, and for the water layer on top of a starch gel (syneresis).\n"
] |
why my dog loves his collar so much? | It probably feels odd to have it off. If he wears it all the time, it's like you wearing a necklace every day: you'd notice when it wasn't there and feel a bit 'off'. | [
"William Harrison, in his description of England during 1586, describes the type as: “... Mastiff, tie dog, or band dog, so called because many of them are tied up in chains and strong bonds in the daytime, for doing hurt abroad, which is a huge dog, stubborn, uglier, eager, burthenouse of bodie, terrible and fearful to behold and often more fierce and fell than any Archadian or Corsican cur ...”\n",
"A dog collar is a piece of material put around the neck of a dog. A collar may be used for restraint, identification, fashion, or protection. Identification tags and medical information are often placed on dog collars. Collars are often used in conjunction with a leash for restraining a dog. A better alternative to a dog collar is a dog harness, as collars are purely around the neck, causing a dog restrained in a collar to have severe pressure put on its trachea when it pulls, and slip out easier if it is too loose, yet collars are still the more common form of directing dogs.\n",
"Upon arriving home, Billy opens his present. It is a dog collar, which he politely places round the Snowdog's neck. The Snowdog turns into a real live dog that matches the one that Billy asked for. They both bid the Snowman a fond farewell and retire for the night.\n",
"Patsy Adam-Smith suggests that the couplet on Bob's collar may not be unique. She notes that correspondence in an \"Adelaide paper\" recalled seeing an 18th-century book which described a dog working with a fire brigade. The picture notes a similar couplet, \"Stop me not but onward let me jog, for I am Bob, the London Firemans Dog.\"\n",
"A wolf collar is normally made out of metals such as iron. The length of the spikes can be quite long, but styles differ in different places. The dogs that normally wore the collars were ones used to protect livestock from attack by wolves. The purpose of the collar is to protect the dog wearing it when it has to fight the wolves. The collar base protects the dog's throat and carotid arteries, while the spikes are intended to deter bites to the neck or even injure wolves trying to do so. There are some tales that suggest that dogs were only given them after they had killed their first wolf; however, these are normally considered to be inaccurate.\n",
"A widely reported case in 2008 concerned a gothic couple, Dani Graves and his fiancée Tasha Maltby, who wears a dog collar and lead. A driver had refused them travel and made comments to them, allegedly saying \"We don't let freaks and dogs like you on.\"\n",
"The little dog symbolizes fidelity (fido), loyalty, or can be seen as an emblem of lust, signifying the couple's desire to have a child. Unlike the couple, he looks out to meet the gaze of the viewer. The dog could also be simply a lap dog, a gift from husband to wife. Many wealthy women in the court had lap dogs as companions. So, the dog could reflect the wealth of the couple and their position in courtly life.\n"
] |
When I was in the USA I noticed the First World War memorials were dedicated to soldiers who died in The Great War of 1917-18. Why is it not described as 1914-18? | This seems too obvious, but wouldn't it be because the United States only joined the war in 1917? | [
"The Second World War that broke out in 1939 consumed the attention of a new generation. Across most of the theatres of conflict, the participants attempted to respect the memorials to World War I. After the Second World War there was no equivalent mass construction of memorials to the war dead; instead, often local World War I memorials were adapted for use instead: additional names might be inscribed to the existing lists. In some cases, this resulted in memorials losing their exclusive focus on World War I. The Tomb of the Unknown Soldier in Washington, for example, was expanded in 1950s to include corpses from the Second World War and Korea War, broadening the memorial's remit to commemorate most modern wars. In other cases, such as the Australian War Memorial, begun in the inter-war years but only opened in 1941, an essentially new memorial was formed to honour the multiple conflicts.\n",
"Several events during the 1920s influenced the creation of this memorial to an Unknown of the American Revolution. One was the memorialization of soldiers who had died in World War I and remained unidentified. On the second anniversary of the signing of the treaty that ended World War I, Armistice Day 1920, memorials to unknown soldiers were dedicated in Great Britain and France. The United States dedicated its memorial to an unknown soldier of that war at Arlington National Cemetery on November 11, 1921. The sarcophagus-style monument that now sits atop the burial vault of the Tomb of the Unknowns was added in 1932.\n",
"A small number of memorializations were made during the war, mainly as ship and place names. After the war, Robert E. Lee said on several occasions that he was opposed to any monuments, as they would, in his opinion, \"keep open the sores of war\". Nevertheless, monuments and memorials continued to be dedicated shortly after the American Civil War. Many more monuments were dedicated in the years after 1890, when Congress established the first National Military Park at Chickamauga and Chattanooga, and by the turn of the twentieth century, five battlefields from the Civil War had been preserved: Chickamauga-Chattanooga,\n",
"Following World War I the Soldiers' Memorial Stone was erected in 1921 to commemorate those townfolk who had been killed. The names of those who fell in World War II and the Vietnam War were subsequently added.\n",
"Finally, 60 years after the end of the Second World War, a memorial is now in place by the new Bull Ring to commemorate the civilians who died during air raids, naming all those who were killed or gave up their lives protecting the city.\n",
"Two war memorials were erected in the town after World War I to commemorate the hundreds of men from the town who lost their lives in the conflict. The memorial park was opened in 1922 in honour of those killed in World War I.\n",
"The huge losses of the American Civil War saw the first really large group of sculptural war memorials, as well as many monuments for individuals. Among the most artistically outstanding is the Memorial to Robert Gould Shaw and the all-black 54th Regiment by Augustus Saint-Gaudens in Boston, with a second cast in the National Gallery of Art, Washington. The even larger losses of World War I led even small communities in most nations involved to raise some form of memorial, introducing the widespread use of the form to Australia, Canada and New Zealand, the sudden increase in demand leading to a boom for sculptors of public art. Even more than in painting, the war brought a crisis in style, as much public opinion felt the traditional heroic styles inappropriate. One of the most successful British memorials is the starkly realist Royal Artillery Memorial in London, the masterpiece of Charles Sargeant Jagger, who had been wounded three times in the war and spent most of the next decade commemorating it. In the defeated nations of Germany and Austria controversy, which had a political aspect, was especially fierce, and a number of memorials considered excessively modern were removed by the Nazis, whose own memorials, such as the Tannenberg Memorial were removed after World War II. Other solutions were to make memorials more neutral, as in the repurposed Neue Wache in Berlin, since rededicated to different groups several times, and the dignified architectural forms of the Cenotaph in London (widely imitated) and the German Laboe Naval Memorial; tombs of the Unknown Warrior and eternal flames were other ways of avoiding controversy. Some, like the Canadian National War Memorial, and most French memorials, were content to update traditional styles.\n"
] |
Ionization Question - Ionization Energy | For electrons in atoms, the electrons are either bound or unbound depending on their total energy relative to the potential well ([a Morse potential](_URL_0_)). If the electron in question has total energy less than the dissociation energy (the ionization energy), then its energy is quantized to discrete states (your E1, E2, E3, for example). As the energy gets closer to the dissociation energy, the density of states (literally, the number of states per unit of energy) increases dramatically, so there are many states near the dissociation energy. Beyond the dissociation energy, the levels are so close together that they form a continuum of states, in which an electron can exist with any amount of energy. Because of this, **a photon that imparts enough energy to push the electron's total energy above the dissociation energy will lead to dissociation** (the short worked example after this entry puts numbers on it). | [
"The ionization potential is the minimum amount of energy required to remove one electron from each atom in a mole of isolated, neutral and gaseous atom. The \"first ionization energy\" is the energy required to remove the first electron, and generally the \"nth ionization energy\" is the energy required to remove the atom's \"n\"th electron, after the (\"n\"−1) electrons before it has been removed. Trend-wise, ionization energy tends to increase while one progresses across a period because the greater number of protons (higher nuclear charge) attract the orbiting electrons more strongly, thereby increasing the energy required to remove one of the electrons. Ionization energy and ionization potentials are completely different. The potential is an intensive property and it is measured by \"volt\"; whereas the energy is an extensive property expressed by \"eV\" or \"kJ/mole\".\n",
"Above threshold ionization (ATI) is an extension of multi-photon ionization where even more photons are absorbed than actually would be necessary to ionize the atom. The excess energy gives the released electron higher kinetic energy than the usual case of just-above threshold ionization. More precisely, the system will have multiple peaks in its photoelectron spectrum which are separated by the photon energies, this indicates that the emitted electron has more kinetic energy than in the normal (lowest possible number of photons) ionization case. The electrons released from the target will have approximately an integer number of photon-energies more kinetic energy. In intensity regions between 10 W/cm and 10 W/cm, each of MPI, ATI, and barrier suppression ionization can occur simultaneously, each contributing to the overall ionization of the atoms involved.\n",
"The first ionization energy is the energy it takes to remove one electron from an atom, the second ionization energy is the energy it takes to remove a second electron from the atom, and so on. For a given atom, successive ionization energies increase with the degree of ionization. For magnesium as an example, the first ionization energy is 738 kJ/mol and the second is 1450 kJ/mol. Electrons in the closer orbitals experience greater forces of electrostatic attraction; thus, their removal requires increasingly more energy. Ionization energy becomes greater up and to the right of the periodic table.\n",
"Above-threshold ionization (ATI) is an extension of multi-photon ionization where even more photons are absorbed than actually would be necessary to ionize the atom. The excess energy gives the released electron higher kinetic energy than the usual case of just-above threshold ionization. More precisely, The system will have multiple peaks in its photoelectron spectrum which are separated by the photon energies, this indicates that the emitted electron has more kinetic energy than in the normal (lowest possible number of photons) ionization case. The electrons released from the target will have approximately an integer number of photon-energies more kinetic energy.\n",
"In physics and chemistry, ionization energy (American English spelling) or ionisation energy (British English spelling), denoted \"E\", is the minimum amount of energy required to remove the most loosely bound electron, the valence electron, of an isolated neutral gaseous atom or molecule. It is quantitatively expressed as \n",
"While the term ionization energy is largely used only for gas-phase atomic or molecular species, there are a number of analogous quantities that consider the amount of energy required to remove an electron from other physical systems.\n",
"Ionization energy is also a periodic trend within the periodic table organization. Moving left to right within a period, or upward within a group, the first ionization energy generally increases, with some exceptions such as aluminum and sulfur in the table above. As the nuclear charge of the nucleus increases across the period, the atomic radius decreases and the electron cloud becomes closer towards the nucleus.\n"
] |
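To put numbers on the bolded claim above, here is a minimal Python sketch. It assumes hydrogen's 13.6 eV ground-state ionization energy and a few example wavelengths, so it is an illustration rather than a general treatment of the answer's argument.

```python
# Can a photon of a given wavelength ionize a hydrogen atom?
# E_photon = h * c / wavelength. Below the 13.6 eV threshold only the discrete
# bound states are reachable; above it the electron is freed into the continuum
# and keeps the surplus as kinetic energy.

H = 6.626e-34         # Planck constant, J*s
C = 2.998e8           # speed of light, m/s
EV = 1.602e-19        # joules per electronvolt
IONIZATION_EV = 13.6  # hydrogen ground-state ionization energy (assumed example)

def photon_outcome(wavelength_nm: float) -> str:
    energy_ev = H * C / (wavelength_nm * 1e-9) / EV
    if energy_ev < IONIZATION_EV:
        return (f"{wavelength_nm:6.1f} nm -> {energy_ev:5.2f} eV: "
                "below threshold, only discrete bound states reachable")
    surplus = energy_ev - IONIZATION_EV
    return (f"{wavelength_nm:6.1f} nm -> {energy_ev:5.2f} eV: "
            f"ionizes, freed electron keeps {surplus:.2f} eV of kinetic energy")

for wl in (121.6, 90.0, 50.0):  # example wavelengths
    print(photon_outcome(wl))
```

Any wavelength short enough to clear the threshold ionizes the atom; there is no requirement that the photon energy match a particular level once the electron is in the continuum.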
why is china airlines from taiwan? why not call it taiwan airlines instead? | Because Taiwan's official name is the Republic of China (ROC).
Also, China Airlines was founded back when the Taiwanese government still considered itself the government of China (though exiled).
| [
"As Republic of China (Taiwan)'s flag carrier, China Airlines has been affected by disputes over the political status of Republic of China (Taiwan), and under pressure from the Communist Party of China, was barred from flying into a number of countries maintaining diplomatic relations with the People's Republic of China (\"China\"). As a result, in the mid-1990s, China Airlines subsidiary Mandarin Airlines took over some of its Sydney and Vancouver international routes. Partly as a way to avoid the international controversy, in 1995 China Airlines unveiled its \"plum blossom\" logo, replacing the national flag, which had previously appeared on the tail fins (empennage), and the aircraft livery from the red-white-blue national colors on the fuselage of its aircraft. The plum blossom (\"Prunus mume\") is Taiwan's National Flower.\n",
"China Airlines is the official flag carrier airline of Taiwan. The Taiwanese government refers to its state as the Republic of China and considers itself to be the legitimate, non-Communist leadership-in-exile of all of China since the Communist overthrow in the mid-1950s. The name of the airline carries the message of the long-lasting and ongoing cultural and political conflict between Communist \"mainland\" China (PRC) and Taiwan, that the Republic of China is the \"true\" China and that the state commonly referred to as \"China\" is illegitimate and usurped control of the country from the rightful leadership. Similarly, the PRC counters this message by having named one of its largest international carriers \"Air China\" to reinforce the PRC's claim to be the legitimate of the \"two Chinas.\" The implicit conflict between the two states is likely lost on the majority of the general public outside the immediate region, such as the United States and Europe, but the strategic use of using advertising and targeting the international community through tourism is apparent on both sides.\n",
"China Airlines (CAL) () is the national carrier of Taiwan (officially the Republic of China, hence the \"China\" name), and one of its two major airlines along with EVA Air. It is headquartered in Taiwan Taoyuan International Airport and operates over 1400 flights weekly (including 91 pure cargo flights) to 102 cities across Asia, Europe, North America and Oceania. Carrying over 19 million passengers and 5700 tons of cargo in 2017, the carrier was the 33rd and 10th largest airline in the world in terms of passenger revenue per kilometer (RPK) and freight RPK, respectively. China Airlines has three airline subsidiaries: China Airlines Cargo, a member of SkyTeam Cargo, operates a fleet of freighter aircraft and manages its parent airline's cargo-hold capacity; Mandarin Airlines operates flights to domestic and low-demand regional destinations; Tigerair Taiwan is a low-cost carrier established by China Airlines and Singaporean airline group Tigerair Holdings, but is now wholly owned by China Airlines Group.\n",
"Regular flights between Mainland China (PRC) and Taiwan (ROC) started in July 2009. Due to the political status of Taiwan, all Air China airframes that operate flights to and from Taiwan are required to cover the flag of the People's Republic of China on the fuselage, including a number of Airbus A320s, A330s, A340s, Boeing 777-200s, and Boeing 747-400BDSFs.\n",
"This was due to political sensitivities, as national airlines operating flights to the People's Republic of China were not permitted to fly to Taiwan. Similar arrangements were made by other airlines, such as Japan Airlines and Qantas.\n",
"The six mainland Chinese airlines originated in three cities in Mainland China: Beijing (Air China, Hainan Airlines), Shanghai (China Eastern Airlines), and Guangzhou (China Southern Airlines, Xiamen Airlines). All Air China's flight are operated by Shandong Airlines' aircraft to avoid Air China's livery which features the \"Five Star Red Flag\".\n",
"China Airlines is Republic of China's (Taiwan) largest airline, operating regular flights to over 90 destinations worldwide. China Airlines features full passenger and dedicated cargo operations to North America, Asia, Europe, and Oceania. \n"
] |
Planets revolve in an elliptical trajectory around the sun - but what are the foci? | One of the foci is, strictly, the centre of mass of the system rather than the centre of the Sun itself, but effectively the difference is minute (the rough estimate after this entry shows how small). The other focus doesn't have any astronomical meaning. | [
"The heliocentric ecliptic system describes the planets' orbital movement around the Sun, and centers on the barycenter of the solar system (i.e. very close to the center of the Sun). The system is primarily used for computing the positions of planets and other solar system bodies, as well as defining their orbital elements.\n",
"In addition, the orbital ellipse itself precesses in space, in an irregular fashion, completing a full cycle every 112,000 years relative to the fixed stars. Apsidal precession occurs in the plane of the ecliptic and alters the orientation of the Earth's orbit relative to the ecliptic. This happens primarily as a result of interactions with Jupiter and Saturn. Smaller contributions are also made by the sun's oblateness and by the effects of general relativity that are well known for Mercury.\n",
"In addition to lunisolar precession, the actions of the other planets of the Solar System cause the whole ecliptic to rotate slowly around an axis which has an ecliptic longitude of about 174° measured on the instantaneous ecliptic. This so-called planetary precession shift amounts to a rotation of the ecliptic plane of 0.47 seconds of arc per year (more than a hundred times smaller than lunisolar precession). The sum of the two precessions is known as the general precession.\n",
"The orbits of planets around the Sun do not really follow an identical ellipse each time, but actually trace out a flower-petal shape because the major axis of each planet's elliptical orbit also precesses within its orbital plane, partly in response to perturbations in the form of the changing gravitational forces exerted by other planets. This is called perihelion precession or apsidal precession.\n",
"In both Hipparchian and Ptolemaic systems, the planets are assumed to move in a small circle called an \"epicycle\", which in turn moves along a larger circle called a \"deferent\". Both circles rotate clockwise and are roughly parallel to the plane of the Sun's orbit (ecliptic). Despite the fact that the system is considered geocentric, each planet's motion was not centered on the Earth but at a point slightly away from the Earth called the \"eccentric\". The orbits of planets in this system are similar to epitrochoids.\n",
"Periastron precession is the rotation of a planet's orbit within the orbital plane, i.e. the axes of the ellipse change direction. In the Solar System, perturbations from other planets are the main cause, but for close-in exoplanets the largest factor can be tidal forces between the star and planet. For close-in exoplanets, the general relativistic contribution to the precession is also significant and can be orders of magnitude larger than the same effect for Mercury. Some exoplanets have significantly eccentric orbits, which makes it easier to detect the precession. The effect of general relativity can be detectable in timescales of about 10 years or less.\n",
"The direction of precession is opposite the direction of revolution. For a typical prograde orbit around Earth (that is, in the direction of primary body's rotation), the longitude of the ascending node decreases, that is the node precesses westward. If the orbit is retrograde, this increases the longitude of the ascending node, that is the node precesses eastward. This nodal progression enables heliosynchronous orbits to maintain a nearly constant angle relative to the Sun.\n"
] |
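A rough Python estimate of how far that centre of mass (barycentre) sits from the Sun's centre. The masses and orbital distances are approximate textbook values, and the two-body simplification ignores every other planet.

```python
# Distance of the Sun-planet barycentre from the Sun's centre in a two-body
# approximation: r = a * m_planet / (m_sun + m_planet), compared with the
# Sun's radius. Rounded textbook values; other planets are ignored.

M_SUN = 1.989e30  # kg
R_SUN = 6.96e8    # m, solar radius

PLANETS = {
    # name: (mass in kg, mean orbital distance in m) -- approximate values
    "Earth":   (5.97e24,  1.496e11),
    "Jupiter": (1.898e27, 7.785e11),  # the extreme case among the planets
}

for name, (mass, a) in PLANETS.items():
    offset = a * mass / (M_SUN + mass)
    print(f"{name}: barycentre ~{offset / 1e3:,.0f} km from the Sun's centre "
          f"= {offset / R_SUN:.4f} solar radii")
```

For the Earth the offset is only a few hundred kilometres, deep inside the Sun, which is the "minute" difference the answer refers to; Jupiter is the one planet heavy enough to pull the two-body barycentre just outside the solar surface.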
how does tidal energy not break conservation of energy? | In short, tidal energy is drawn from the Earth's rotation: the Moon is constantly slowing it down. Eventually the Earth and Moon will become tidally locked, always facing each other (the short calculation after this entry shows how much rotational energy has already been lost). | [
"Tidal power, also called tidal energy, is a form of hydropower that converts the energy obtained from tides into useful forms of power, mainly electricity. The potential of tidal wave energy becomes higher in certain regions by local effects such as shelving, funnelling, reflection and resonance.\n",
"This type of energy does not produce waste that is harmful to the environment and does not require high maintenance. Unlike the solar and wind energy models, tidal energy is quite stable because the tide of the day can be accurately predicted. The disadvantage of this type of energy is that it requires a large amount of investment in equipment and construction and at the same time changes the natural conditions of a very large area. \n",
"Tidal Energy has an expensive initial cost which may be one of the reasons tidal energy is not a popular source of renewable energy. It is important to realize that the methods for generating electricity from tidal energy is a relatively new technology. It is projected that tidal power will be commercially profitable within 2020 with better technology and larger scales. Tidal Energy is however still very early in the research process and the ability to reduce the price of tidal energy can be an option. The cost effectiveness depends on each site tidal generators are being placed. To figure out the cost effectiveness they use the Gilbert ratio, which is the length of the barrage in metres to the annual energy production in kilowatt hours (1 kilowatt hour = 1 KWH = 1000 watts used for 1 hour).\n",
"Although not yet widely used, tidal energy has potential for future electricity generation. Tides are more predictable than the wind and the sun. Among sources of renewable energy, tidal energy has traditionally suffered from relatively high cost and limited availability of sites with sufficiently high tidal ranges or flow velocities, thus constricting its total availability. However, many recent technological developments and improvements, both in design (e.g. dynamic tidal power, tidal lagoons) and turbine technology (e.g. new axial turbines, cross flow turbines), indicate that the total availability of tidal power may be much higher than previously assumed, and that economic and environmental costs may be brought down to competitive levels.\n",
"Another physical limitation is the energy available in the tidal fluctuations of the oceans, which is about 0.6 EJ (exajoule). Note this is only a tiny fraction of the total rotational energy of the Earth. Without forcing, this energy would be dissipated (at a dissipation rate of 3.7 TW) in about four semi-diurnal tide periods. So, dissipation plays a significant role in the tidal dynamics of the oceans. Therefore, this limits the available tidal energy to around 0.8 TW (20% of the dissipation rate) in order not to disturb the tidal dynamics too much. \n",
"BULLET::::- Tidal power, also called tidal energy, is a form of hydropower that converts the energy of tides into useful forms of power - mainly electricity, dynamic tidal power, tidal lagoons, tidal barrages\n",
"Because the Earth's tides are ultimately due to gravitational interaction with the Moon and Sun and the Earth's rotation, tidal power is practically inexhaustible and classified as a renewable energy resource. Movement of tides causes a loss of mechanical energy in the Earth–Moon system: this is a result of pumping of water through natural restrictions around coastlines and consequent viscous dissipation at the seabed and in turbulence. This loss of energy has caused the rotation of the Earth to slow in the 4.5 billion years since its formation. During the last 620 million years the period of rotation of the earth (length of a day) has increased from 21.9 hours to 24 hours; in this period the Earth has lost 17% of its rotational energy. While tidal power will take additional energy from the system, the effect is negligible and would only be noticed over millions of years.\n"
] |
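Using the figures quoted in the last context passage (day length going from roughly 21.9 to 24 hours over the last 620 million years), a minimal Python sketch reproduces the quoted ~17% loss of rotational energy. It assumes a rigid Earth with a constant moment of inertia of roughly 8.0e37 kg·m², which is an approximation.

```python
import math

# Rotational kinetic energy of the Earth, E = 0.5 * I * omega**2.
# Treating the moment of inertia as constant, the fractional energy loss
# depends only on how much the day has lengthened.

I_EARTH = 8.0e37  # kg*m^2, approximate moment of inertia of the Earth

def rotational_energy(day_hours: float) -> float:
    omega = 2 * math.pi / (day_hours * 3600.0)  # angular speed in rad/s
    return 0.5 * I_EARTH * omega**2

e_then = rotational_energy(21.9)  # ~620 million years ago (figure from the context)
e_now = rotational_energy(24.0)

print(f"Then: {e_then:.3e} J   Now: {e_now:.3e} J")
print(f"Fraction of rotational energy lost: {1 - e_now / e_then:.1%}")  # ~17%
```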
does putting "i do not own this song" on youtube videos actually prevent it from getting taken down from the record label? | Nope! Some people think it's polite though, which is kind of strange, because the people are pretty much saying "Yes, I knowingly violated the copyright on your product, but at least I'm not claiming it's mine" | [
"Watson chose to remove the song from the iTunes Store, claiming the commission's advice was censorship. John Key commented that the song and its music video was, \"quite professionally done. It was anti-us but as a parody it was okay.\"\n",
"In August 2008, Judge Jeremy Fogel of the Northern District of California ruled in \"Lenz v. Universal Music Corp.\" that copyright holders cannot order a deletion of an online file without determining whether that posting reflected \"fair use\" of the copyrighted material. The case involved Stephanie Lenz, a writer and editor from Gallitzin, Pennsylvania, who made a home video of her thirteen-month-old son dancing to Prince's song Let's Go Crazy and posted the video on YouTube. Four months later, Universal Music, the owner of the copyright to the song, ordered YouTube to remove the video under the Digital Millennium Copyright Act. Lenz notified YouTube immediately that her video was within the scope of fair use, and she demanded that it be restored. YouTube complied after six weeks, rather than the two weeks required by the Digital Millennium Copyright Act. Lenz then sued Universal Music in California for her legal costs, claiming the music company had acted in bad faith by ordering removal of a video that represented fair use of the song. On appeal, the Court of Appeals for the Ninth Circuit ruled that a copyright owner must affirmatively consider whether the complained of conduct constituted fair use before sending a takedown notice under the Digital Millennium Copyright Act, rather than waiting for the alleged infringer to assert fair use. 801 F.3d 1126 (9th Cir. 2015). \"Even if, as Universal urges, fair use is classified as an 'affirmative defense,' we hold—for the purposes of the DMCA—fair use is uniquely situated in copyright law so as to be treated differently than traditional affirmative defenses. We conclude that because 17 U.S.C. § 107 created a type of non-infringing use, fair use is \"authorized by the law\" and a copyright holder must consider the existence of fair use before sending a takedown notification under § 512(c).\"\n",
"The song had not been intended for release and Jackson's record label Sony Music Entertainment gained the support of the late entertainer's estate and its lawyers in their endeavor to have the track removed from the Internet on the basis of copyright infringement. Some of their attempts at removal were successful, though individuals continued to upload the audio, one clip garnering 20,000 views within hours.\n",
"An independent test in 2009 uploaded multiple versions of the same song to YouTube, and concluded that while the system was \"surprisingly resilient\" in finding copyright violations in the audio tracks of videos, it was not infallible. The use of Content ID to remove material automatically has led to controversy in some cases, as the videos have not been checked by a human for fair use.\n",
"An aide to Rep. Smith said, \"This bill does not make it a felony for a person to post a video on YouTube of their children singing to a copyrighted song. The bill specifically targets websites dedicated to illegal or infringing activity. Sites that host user content—like YouTube, Facebook, and Twitter—have nothing to be concerned about under this legislation.\"\n",
"If the process of clearing the rights to the song is prohibitively expensive for the home video distributor, or clearance is refused by the copyright holders of the original song, the affected song is either replaced with a similar one, or the footage containing the copyrighted song is edited out. In a few cases, television shows, with extensive use of copyrighted music whose cost of \"after-market\" licensing is high, are withheld from release on DVD; notable examples include \"The Wonder Years\", \"WKRP in Cincinnati\", \"Third Watch\" (beyond its first two seasons), and \"Cold Case\", some of which were eventually released after long delays. Sony Entertainment cancelled the planned October 2007 DVD release of \"Dark Skies\" for that reason, but it was eventually released on January 18, 2011 through Shout! Factory.\n",
"\"Don't Download This Song\" references several court cases related to the RIAA and copyright infringement of music. Among these are lawsuits against \"a grandma\" (presumably Gertrude Walton, who was sued for copyright infringement six months after dying) and a \"7-year-old girl\" (presumably a reference to Tanya Andersen's daughter sued at age 10 for alleged copyright infringements made at the age of 7), as well as Lars Ulrich's strong stance against copyright infringement of music in the days of Napster. The song also challenges the RIAA's claim that file sharing prevents the artists from profiting from their work, as the song argues that they are still very financially successful via their recording contracts: (\"Don't take away money from artists just like me/How else can I afford another solid-gold Humvee?\"). Mention is also made of Tommy Chong's time spent in prison.\n"
] |
what damage can be done if someone gets access to your wi-fi password? | There are definitely a few guides out there detailing how to beef up your network's security, check them out when you have a chance (on mobile, can't link any at the moment).
There are a few implications when someone has access to your network:
* If they're doing something bandwidth-heavy (gaming, streaming HD, etc.), it can slow down the connection for other devices on the network.
* If they're doing illegal things, you are the first person your ISP will come after or warn.
* There is software out there that lets people "sniff" the traffic other devices on the same network send back and forth. They can't necessarily watch your screen, but the software can inspect that traffic and steal cookies, passwords, and other data being transferred.
ELI5 Version:
* You start taking a shower on the first floor, but only have lukewarm water because someone else in your house has been running the hot water in the shower on the second floor for the past hour.
* I stole something with your name on it, started beating people with it, and left the object with your name on it behind as the initial piece of evidence.
* You snuck into a dark closet with your friend, shut the door, and told them who your crush is. But before you came in, someone else snuck in, hid, and overheard you telling your friend who you have a crush on. They now have your sensitive information. | [
"On April 28th, 2017 the Tokyo District Court ruled that accessing a wireless LAN network without authorization is not a crime, even if the network is protected with a password. In a case brought before the court involved a man named Hiroshi Fujita, who was accused of accessing a neighbors wi-fi network without authorization and sending virus-infected emails, and then using that to steal internet banking information and send funds to his own bank account without authorization. Hiroshi was found guilty of most of what he was accused of and sentenced to 8 years in prison. Regarding the unauthorized access of wireless networks, prosecutors argued that wi-fi passwords fall under the category of \"secrets of wireless transmission\" (無線通信の秘密) and that therefore obtaining and using passwords without permission of the network operator would fall under the category of unauthorized use of wireless transmission secrets, which is prohibited by law. However, the court ruled that the defendant is not guilty, stating in their ruling that wi-fi passwords do not fall under that category and therefore the unauthorized obtainment of passwords and subsequent accessing of protected wireless networks is not a crime.\n",
"it also provides opportunities for misuse. In particular, as the Internet of Things spreads widely, cyber attacks are likely to become an increasingly physical (rather than simply virtual) threat. If a front door's lock is connected to the Internet, and can be locked/unlocked from a phone, then a criminal could enter the home at the press of a button from a stolen or hacked phone. People could stand to lose much more than their credit card numbers in a world controlled by IoT-enabled devices. Thieves have also used electronic means to circumvent non-Internet-connected hotel door locks.\n",
"Any unattended device can be vulnerable to a network evil maid attack. If the attacker knows the victim's device well enough, they can replace the victim's device with an identical model with a password-stealing mechanism. Thus, when the victim inputs their password, the attacker will instantly be notified of it and be able to access the stolen device's information.\n",
"Another security consideration is the ability of malicious software to spoof dialogs that look like legitimate security confirmation requests. If the user were to input credentials into a fake dialog, thinking the dialog was legitimate, the malicious software would then know the user's password. If the Secure Desktop or similar feature were disabled, the malicious software could use that password to gain higher privileges.\n",
"An attacker can try to eavesdrop on Wi-Fi communications to derive information (e.g. username, password). This type of attack is not unique to smartphones, but they are very vulnerable to these attacks because very often the Wi-Fi is the only means of communication they have to access the internet. The security of wireless networks (WLAN) is thus an important subject. Initially, wireless networks were secured by WEP keys. The weakness of WEP is a short encryption key which is the same for all connected clients. In addition, several reductions in the search space of the keys have been found by researchers. Now, most wireless networks are protected by the WPA security protocol.\n",
"An attacker can send a deauthentication frame at any time to a wireless access point, with a spoofed address for the victim. The protocol does not require any encryption for this frame, even when the session was established with Wired Equivalent Privacy (WEP) for data privacy, and the attacker only needs to know the victim's MAC address, which is available in the clear through wireless network sniffing.\n",
"Local Wi-Fi networks may be configured with varying levels of security enabled. Using a Wired Equivalent Privacy (WEP) password, the attacker running Firesheep must have the password, but once this has been achieved (a likely scenario if a coffee shop is asking all users for the same basic password) they are able to decrypt the cookies and continue their attack. In addition, the WEP protocol has been proven to have severe flaws which allow attackers to decrypt WEP traffic very quickly, even without the password. However, using Wi-Fi Protected Access (WPA or WPA2) encryption offers individual user isolation, preventing the attacker from using Firesheep from decrypting cookies sent over the network even if the Firesheep user has logged into the network using the same password. An attacker would be able to manually retrieve and decrypt another user's data on a WPA-PSK connection, if the key is known and the attacker was present at the time of the handshake, or if they send a spoofed de-authenticate packet to the router, causing the user to re-authenticate and allow the attacker to capture the handshake. This attack would not work on WPA-Enterprise networks as there is no single password (the 'Pre Shared Key' in PSK).\n"
] |
why do politicians and the media never just call people liars? | Because that can be considered defamatory, leading the person to file, and win, a lawsuit against you. This will have the effect of making them look like a victim, and making you look like an asshole while paying them a bunch of money. | [
"The practice is not always referred to as a \"liars table,\" but that term appears across the United States, including Alabama, Florida, Iowa, Maine, Mississippi, Texas, and Ohio. The word \"liars\" refers to the idea that the men are lying or gossiping about local social or political happenings.\n",
"\"Showing some subjects up as liars is the very worst thing to do, because their determination not to lose face will only make them stick harder to the lie. For these it is necessary to provide loopholes by asking questions which let them correct their stories without any direct admission to lying\".\n",
"If \"there is no such thing as an honest politician\", this need not mean that all politicians are liars, but just that they are often not in a position to know or reveal the \"complete picture\" and thus express \"selected\" truths relevant to their actions, rather than all possible truths that could be told. In that sense, it is quite possible to be a \"principled\" politician – if that was not so, then (arguably) \"all\" politicians are opportunists. Yet if all politicians are opportunists—as many cynics believe—it becomes difficult to explain a politician's professional \"motivations\" . Namely, if their purpose is based \"only\" or \"primarily\" on self-interest—disregarding higher principles, which is the hallmark of opportunism—then politics is \"the least likely vocation\", since it requires that politicians serve a collective interest or cause bigger than themselves. They would then be better off in a line of business where they can just pursue their own interest to the full. If they are able to be politicians, they could easily do so. The question is then why they don't, if indeed only out to serve themselves.\n",
"Therefore, by manipulating a given piece of information as either included within the target or compared against, the same information can have different consequences for judgments. For example, thinking of a politician involved in a scandal (such as Eliot Spitzer) may make people believe that politicians in general are more corrupt because the corrupt exemplar is information that is included within the representation of \"politicians\". In short, people would be left thinking \"they are all like Spitzer\". Paradoxically, at the same time every individual politician that is rated may seem more honest, because for these judgments, the exemplar is used as the standard of comparison. In this case, people are left thinking \"he (or she) is not as bad as Spitzer\".\n",
"This does not mean that voters make poor and biased decisions: rather that in carrying out their everyday responsibilities (like working and taking care of a family), many people do not have the time to devote to researching every aspect of a candidate's policies. So many people find themselves making rational decisions meaning they let others who are more versed in the subject do the research and they form their opinion based on the evidence provided. They are being rationally ignorant not because they don't care but because they simply do not have the time.\n",
"In \"Lying\", neuroscientist Sam Harris argues that lying is negative for the liar and the person who's being lied to. To say lies is to deny others access to reality, and often we cannot anticipate how harmful lies can be. The ones we lie to may fail to solve problems they could have solved only on a basis of good information. To lie also harms oneself, makes the liar to distrust the person who's being lied to. Liars generally feel bad for it and sense a loss of sincerity, authenticity, integrity. Harris defends that honesty allows you to have deeper relationships, and to bring all dysfunction in one's life to the surface.\n",
"\"We Were Liars\" focuses on the theme of self-acceptance, family morals, and the possibly-deadly consequences of one's mistakes. It is centered on the wealthy, seemingly perfect Sinclair family, who spend every summer gathered on their private island. However, not every summer is the same—when something happens to Cadence during the summer of her fifteenth year, the four \"Liars\" (Cadence, Johnny, Gat and Mirren) re-emerge two years later to prompt Cadence to remember the incident.\n"
] |
string theory and m-theory | String theory: every fundamental particle is really a tiny one-dimensional string, and the different ways a string can vibrate show up as the different particles and forces of nature.
I can't even imagine how someone would explain M-theory to a five-year-old...
| [
"In physics string theory is an attempt to describe general relativity and quantum mechanics with a single mathematical model. Although it is an attempt to model our universe it takes place in a space with more dimensions than the four of spacetime that we are familiar with. In particular a number of string theories take place in a ten-dimensional space, adding an extra six dimensions. These extra dimensions are required by the theory, but as they cannot be observed are thought to be quite different, perhaps compactified to form a six-dimensional space with a particular geometry too small to be observable.\n",
"String theory is a broad and varied subject that attempts to address a number of deep questions of fundamental physics. String theory has been applied to a variety of problems in black hole physics, early universe cosmology, nuclear physics, and condensed matter physics, and it has stimulated a number of major developments in pure mathematics. Because string theory potentially provides a unified description of gravity and particle physics, it is a candidate for a theory of everything, a self-contained mathematical model that describes all fundamental forces and forms of matter. Despite much work on these problems, it is not known to what extent string theory describes the real world or how much freedom the theory allows in the choice of its details. \n",
"Speaking at the string theory conference at the University of Southern California in 1995, Edward Witten of the Institute for Advanced Study suggested that the five different versions of string theory might be describing the same thing seen from different perspectives. He proposed a unifying theory called \"M-theory\", in which the \"M\" is not specifically defined but is generally understood to stand for \"membrane\". The words \"matrix\", \"master\", \"mother\", \"monster\", \"mystery\" and \"magic\" have also been claimed. M-theory brought all of the string theories together. It did this by asserting that strings are really one-dimensional slices of a two-dimensional membrane vibrating in 11-dimensional spacetime. Vibrations of higher-dimensional objects (as in three-dimensional vibrating blob or sphere or even more possible dimensions) are certainly a part of M-theory, but the basic theory of branes is still in progress. Higher-dimensional objects are much harder to mathematically calculate than a point in classical physics or a one-dimension string in string theory or two-dimensional membranes in M-theory.\n",
"String theory has been used to construct a variety of models of particle physics going beyond the standard model. Typically, such models are based on the idea of compactification. Starting with the ten- or eleven-dimensional spacetime of string or M-theory, physicists postulate a shape for the extra dimensions. By choosing this shape appropriately, they can construct models roughly similar to the standard model of particle physics, together with additional undiscovered particles. One popular way of deriving realistic physics from string theory is to start with the heterotic theory in ten dimensions and assume that the six extra dimensions of spacetime are shaped like a six-dimensional Calabi–Yau manifold. Such compactifications offer many ways of extracting realistic physics from string theory. Other similar methods can be used to construct realistic or semi-realistic models of our four-dimensional world based on M-theory.\n",
"String theory is a theoretical framework that attempts to reconcile gravity and quantum mechanics. In string theory, the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how strings propagate through space and interact with each other. In a given version of string theory, there is only one kind of string, which may look like a small loop or segment of ordinary string, and it can vibrate in different ways. On distance scales larger than the string scale, a string will look just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In this way, all of the different elementary particles may be viewed as vibrating strings. One of the vibrational states of a string gives rise to the graviton, a quantum mechanical particle that carries gravitational force.\n",
"String theory is a model of physics where all \"particles\" that make up matter are composed of strings (measuring at the Planck length) that exist in an 11-dimensional (according to M-theory, the leading version) or 12-dimensional (according to F-theory) universe. These strings vibrate at different frequencies that determine mass, electric charge, color charge, and spin. A string can be open (a line) or closed in a loop (a one-dimensional sphere, like a circle). As a string moves through space it sweeps out something called a \"world sheet\". String theory predicts 1- to 10-branes (a 1-brane being a string and a 10-brane being a 10-dimensional object) that prevent tears in the \"fabric\" of space using the uncertainty principle (e.g., the electron orbiting a hydrogen atom has the probability, albeit small, that it could be anywhere else in the universe at any given moment).\n",
"In physics, string theory is a theoretical framework in which the point-like particles of particle physics are replaced by one-dimensional objects called strings. It describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the graviton, a quantum mechanical particle that carries gravitational force. Thus string theory is a theory of quantum gravity.\n"
] |
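As a quick sense of the scale involved (the sources above put the strings near the Planck length), here is a minimal worked value from the fundamental constants; this is standard dimensional analysis, not something specific to any one string model:

```latex
\ell_P = \sqrt{\frac{\hbar G}{c^{3}}}
       \approx \sqrt{\frac{(1.05\times 10^{-34}\,\mathrm{J\,s})\,(6.67\times 10^{-11}\,\mathrm{m^{3}\,kg^{-1}\,s^{-2}})}{(3.0\times 10^{8}\,\mathrm{m/s})^{3}}}
       \approx 1.6\times 10^{-35}\ \mathrm{m}
```

That is roughly twenty orders of magnitude smaller than a proton, which is why the strings and the compactified extra dimensions described above are far too small to observe directly.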
Was Bohemia very bohemian? | hell yeah dude, it was pretty much all bohemians! ... with a bunch of germans thrown in
on a more serious note, these threads are probably what you are looking for:
_URL_0_
_URL_1_ | [
"Bohemians were associated with unorthodox or anti-establishment political or social viewpoints, which often were expressed through free love, frugality, and—in some cases—simple living or voluntary poverty. A more economically privileged, wealthy, or even aristocratic bohemian circle is sometimes referred to as \"haute bohème\" (literally \"high Bohemia\").\n",
"An Australian-born poet, Louis Esson, was one of the first to label this melange a Bohemia. In 1916, he wrote a friend to say, \"We have deserted Greenwich Village and the haunts of the Bohemians and have landed near the centre of Broadway (between 65th and 66th streets). As a matter of fact, our present abode is much more Bohemian than Washington Square; at least it is New York's Bohemia. We have a big room in the Lincoln Square Arcade, with steam-heat, electric light, piano, bath, ice-box, elevator, etc.\" In October of that year, an article in the \"New York Times\" contrasted the downtown Bohemia in Greenwich Village with an unexpected Bohemia uptown that was both new and \"perhaps more democratic.\"\n",
"The term bohemianism emerged in France in the early 19th century when artists and creators began to concentrate in the lower-rent, lower class, Romani neighborhoods. \"Bohémien\" was a common term for the Romani people of France, who were mistakenly thought to have reached France in the 15th century via Bohemia (the western part of modern Czech Republic).\n",
"Bohemia, currently a part of the Czech Republic, became famous for its beautiful and colourful glass during the Renaissance. The history of Bohemian glass started with the abundant natural resources found in the countryside.\n",
"In modern use, the term \"Bohemian\" is applied to people who live unconventional, usually artistic, lives. The adherents of the \"Bloomsbury Group\", which formed around the Stephen sisters, Vanessa Bell and Virginia Woolf in the early 20th century, are among the best-known examples. The original \"Bohemians\" were travellers or refugees from central Europe (hence, the French \"bohémien\", for \"gypsy\").\n",
"A Bohemian () is a resident of Bohemia, a region of the Czech Republic or the former Kingdom of Bohemia, a region of the former Crown of Bohemia (lands of the Bohemian Crown). In English, the word \"Bohemian\" was used to denote the Czech people as well as the Czech language before the word \"Czech\" became prevalent in the early 20th century.\n",
"San Francisco journalist Bret Harte first wrote as \"The Bohemian\" in \"The Golden Era\" in 1861, with this persona taking part in many satirical doings, the lot published in his book \"Bohemian Papers\" in 1867. Harte wrote, \"Bohemia has never been located geographically, but any clear day when the sun is going down, if you mount Telegraph Hill, you shall see its pleasant valleys and cloud-capped hills glittering in the West...\"\n"
] |
why do most foods, drinks, etc. have to be refrigerated after one use? what happens to the contents after just one use? | In many cases, the contents have been pasteurized. So they're "clean" and "free" of bacteria sealed as they are. The second you open it, they become exposed to bacteria. Refrigeration slows the growth of bacteria. A rough growth sketch follows the sources below. | [
"This can also be done by using reusable items such as thermoses for daily coffee or plastic containers for water and other cold beverages rather than disposable ones. If that option isn't available, it is best to properly recycle the disposable items after use. When one household recycles at least half of their household waste, they can save 1.2 tons of carbon dioxide annually.\n",
"One way to address this is to increase product longevity; either by extending a product’s first life or addressing issues of repair, reuse and recycling. Reusing products, and therefore extending the use of that item beyond the point where it is discarded by its first user is preferable to recycling or disposal, as this is the least energy intensive solution, although it is often overlooked.\n",
"People sometimes defrost frozen foods at room temperature because of time constraints or ignorance; such foods should be promptly consumed after cooking or discarded and never be refrozen or refrigerated since pathogens are not killed by the freezing process.\n",
"Because Oloroso Sherries have already been through years of oxidative aging, they can be safely stored for years before opening. Once opened, Oloroso will begin to slowly lose some of its aroma and flavor but can be kept, corked and refrigerated, for up to two months after opening. The older the Oloroso, the longer it will stay perfect for consumption, as much as 12 months.\n",
"Items deemed resellable are displayed for purchase in stores. Savers also has a recycling program and attempts to recycle any reusable items that cannot be sold at the stores, as well as any items that do not sell over a period of time to make room for fresh merchandise. Savers has buyers for its recyclables throughout the world and attempts to keep as much donated product out of the waste stream as possible.\n",
"The vegetables that need storage should be carefully considered, since not all produce can be stored together because some release ethylene, which can accelerate ripening or reduce postharvest quality. Like any device for storing food, the ECCs should be kept clean. The surface of the interior cooling space should be sponged off regularly.\n",
"Reusing current materials uses even less energy than recycling. Reusing is preferred to recycling because it eliminates the cost of transport to a recycling plant, sorting, re-manufacturing, distributing, and there are no wages needed to be paid to employees for doing these tasks. Reusable containers only have to be manufactured once for hundreds or thousands of uses (such as a water bottle used every day for years), and the energy cost between uses is approximately that of cleaning the container with soap and water, a negligible expense compared to sorting, melting down, and pouring the material into a mold again, for example. Reusing containers could, in theory, replace recyclable containers and one-use containers, if made out of a durable enough material. There are inconveniences that go along with reusing materials however. Some of these inconveniences include having to clean the containers between uses, carrying around full or empty containers, and they require a time commitment due to having to hold on to them instead of throwing them away.\n"
] |
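To make "refrigeration slows the growth of bacteria" concrete, here is a minimal growth sketch. The doubling times and starting count are illustrative assumptions (real bacteria and storage conditions vary enormously), not measured values:

```python
# Exponential growth: N(t) = N0 * 2 ** (t / doubling_time)
def population(n0, hours, doubling_time_hours):
    """Cells after `hours` of unchecked exponential growth."""
    return n0 * 2 ** (hours / doubling_time_hours)

n0 = 100  # assumed starting cells in a freshly opened container
print(f"{population(n0, 24, 0.5):.2e}")   # ~30 min doubling at room temperature: ~2.8e16 cells
print(f"{population(n0, 24, 12.0):.0f}")  # ~12 h doubling in the fridge: only 400 cells
```

The point is not the specific numbers but the shape: cooling stretches the doubling time, so the same 24 hours produces a few doublings instead of dozens.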
how can people sell video game merch or art on etsy? | It’s almost certainly not legal, but they probably aren’t making enough to be on anyone’s radar. There is also the slim chance that they are doing really well for themselves and managed to get some sort of license.
_URL_0_ | [
"Players can optionally purchase cosmetic color palettes and tools from the game's virtual store. They are purchased with ducks, a virtual currency that they get from other players or with microtransactions, which, once made, gives the player access to Drawception Gold. Which gives the ability to create Draw First games and award ducks to others as a way to reward helpful players.\n",
"artFido is an online auction and shopping website in which people and businesses buy and sell works of art worldwide. In addition to its auction-style listings, the website also includes ordinary fixed-price shopping.\n",
"GameTZ.com is an online trading community established in late 1996 which allows people to trade video games, books, music, movies, and other items through negotiating with other traders from countries worldwide. Once a trade is completed, a record is created on the site for future reference.\n",
"• The Art in Video Games - French inspiration: the exhibition opened on September 25, 2015. It showcases the work of artists from French video game studios, such as Ubisoft, Spiders, Arkane, Osome, and Swing Swing Submarine, presenting more than 800 artworks: drawings and preparatory sketches, watercolors, sculptures and digital paintings. Emmanuel Ethis, in his contribution to the Nouvel Observateur, says that \"a video game is indeed a Total Art, because if it is ludique by nature, it also carries the sovereign ambition of being recorded in a connotated History, rich in correspondences and references to all art forms that preceded it, that we discover thanks to Jean-Jacques Launier, curator of the exhibition dedicated to French inspiration in the Art in the video games.\"\n",
"PlayerAuctions is an online platform for players of massively multiplayer online games (MMO) to buy, sell and trade digital assets such as in-game currency, items, accounts, and power leveling services. The site is a neutral marketplace that supports player-to-player trading for popular MMOs such as \"RuneScape\", \"Old School RuneScape\", \"World of Warcraft\", \"CSGO\", \"PUBG\", \"Path of Exile\", \"League of Legends\" and over 400 other games.\n",
"While Fan Art is mostly used for drawn art, Game Art HQ also considers cosplay, sculptures and crafts as artworks relevant for the site. Additionally, there are interviews with Game Producers, Artists and Cosplay Models done from time to time. The website aims to promote art by video game companies as well as helping independent artists to get more exposure. While there are many Communities supporting and showing fan art, there are strong guidelines and quality standards on Game Art HQ.\n",
" is a game in which players aim to complete a 3D animated picture of a Nintendo video game by gathering its pieces. If a player encountered on StreetPass possesses any pieces the player does not have, the player can choose one of their pieces to add to their own. The player may also use Play Coins to buy random pieces for their existing panels, although it won't always be a new piece. After the December 2011 update, new puzzles became available which included four or eight pink squares in the center. Pink pieces are distributed to players via SpotPass and can only be gathered via StreetPass; they cannot be bought using Play Coins. Some puzzles have more pieces than others, making them harder to complete.\n"
] |
What are the ergonomic effects of sleeping without a pillow? | I think this was linked before when this topic came up. _URL_0_ | [
"Orthopaedic pillows are regarded as therapeutic pillows based on claims that they can help relieve various conditions including sleep apnoea, snoring, insomnia, breathing difficulty, blood circulation problems, acid reflux, gastroesophageal reflux disease, lower back pain, sciatica pain, neck pain, whiplash, rotator cuff injury, amongst others.\n",
"Pillow fights are known to occur during children's sleepovers. Since pillows are usually soft, injuries rarely occur. The heft of a pillow can still knock a young person off balance, especially on a soft surface such as a bed, which is a common venue. In earlier eras, pillows would often break, shedding feathers throughout a room. Modern pillows tend to be stronger and are often filled with a solid block of artificial filling, so breakage occurs far less frequently.\n",
"Many times pillow fights occur during children's sleepovers. Since pillows are usually soft, injuries rarely occur. The heft of a pillow can still knock a young person off balance, especially on a soft surface such as a bed, which is a common venue. In earlier eras, pillows would often break, shedding feathers throughout a room. Modern pillows tend to be stronger and are often filled with a solid block of artificial filling, so breakage occurs far less frequently.\n",
"Certain herbs used in these type sachet \"sleep pillows\", like hops, have a soporific and a slight narcotic effect. These herb filled sachets are even called \"dreamtime pillows\". There are formulas using rosemary seeds to fill sachets and these are to be hung in a bedroom to promote sleep. The traditional method to treat insomnia with herb filled sachets of hops or lavender is to place them in, under or near your sleeping pillow. The \"dream pillow\" or \"sleep pillow\" sachet concept has been used for decades to help overcome sleeplessness. These \"sleep pillows\" have a therapeutic effect and hops as an ingredient to this type of sachet are considered best at inducing sleep. One type of \"sleep pillow\" sachet recipe by herb and flower author Penny Black calls for violets, rose petals, rosemary, tonka bean, vanilla bean, and a drop of lemon oil.\n",
"A pillow is a support of the body at rest for comfort, therapy, or decoration. Pillows are used by many species, including humans. Some types of pillows include throw pillows and decorative pillows. Pillows that aid sleeping are a form of bedding that supports the head and neck. Other types of pillows are designed to support the body when lying down or sitting. There are also pillows that consider human body shape for increased comfort during sleep. Decorative pillows used on beds, couches or chairs are sometimes referred to as cushions.\n",
"In a study conducted with depressed and healthy adults and were able to show that in healthy subjects, dreaming was a way to positively influence mood and cope with stress at night. Dreams of depressed persons, however, might deteriorate their mood further. This study's interesting results are limited in generalizability due to the small sample and the lack of reported dreams by depressed patients.\n",
"The earliest recorded use of the modern human device dates back to the civilizations of Mesopotamia around 7,000 BC. During this time, only the wealthy used pillows. The number of pillows symbolized status so the more pillows one owned the more affluence they held. Pillows have long been produced around the world in order to help solve the reoccurring problem of neck, back, and shoulder pain while sleeping. Besides for comfort, the pillow was also used for keeping bugs and insects out of people's hair, mouth, nose, and ears while sleeping.\n"
] |
why can websites appear to be down for me but be online for everyone else? | Unless a website uses different servers for different people (very unlikely within the same region), this shouldn't normally happen. If a website is down for you but not for others, the problem is usually on your end, such as your internet connection or a stale DNS cache. A quick diagnostic sketch follows the sources below. | [
"Along with proximity and time, physical appearance is another factor about the internet that is of no importance. Like previously mentioned in the anonymity paragraph, people are unable to see the physical characteristics of the person or persons that they are interacting with on the internet. This allows people to talk to others that they would normally not talk to if they had actually seen the person face to face. As a result, people are able to connect on a more meaningful level and are able to create closer relationships that are not just about physical attraction. This is also considered to be a very positive aspect about the internet.\n",
"An online reputation is the perception that one generates on the Internet based on their digital footprint. Digital footprints accumulate through all of the content shared, feedback provided and information that created online. Due to the fact that if someone has a bad online reputation, he can easily change his pseudonym, new accounts on sites such as eBay or Amazon are usually distrusted. If an individual or company wants to manage their online reputation, they will face many more difficulties. This is why a merchant on the web having a brick and mortar shop is usually more trusted.\n",
"For example, suppose there is a dating website where members scan the profiles of other members to see if they look interesting. For privacy reasons, this site hides everybody's real name and email. These are kept secret on the server. The only time a member's real name and email are in the browser is when the member is signed in, and they can't see anyone else's.\n",
"An online reputation is the perception that one generates on the Internet based on their digital footprint. Digital footprints accumulate through all of the content shared, feedback provided and information that is created online. Due to the fact that if someone has a bad online reputation, he can easily change his pseudonym, new accounts on sites such as eBay or Amazon are usually distrusted. If an individual or company wants to manage their online reputation, they will face many more difficulties.\n",
"Anonymity is a major feature that internet communication can provide. Not only are you not able to see the person's face that you are emailing and or communicating with, but they are also not able to see your face. This can be a very positive feature for those that are socially anxious and or have a social anxiety disorder because it eliminates the idea of being publicly humiliated and or embarrassed, which is something that most people who are socially anxious are very worried about. As a result, people with social anxiety are more inclined to open up, which allows them to get closer and form more relationships with others.\n",
"Various websites on the internet contain material that some deem offensive, distasteful or explicit, which may often be not of the user's liking. Such websites may include internet, shock sites, hate speech or otherwise inflammatory content. Such content may manifest in many ways, such as pop-up ads and unsuspecting links.\n",
"Many websites are targeted at audience with different languages and localized for different countries. This can cause a lot of duplicate content or near duplicate content, as well as targeting issues with users from search engines.\n"
] |
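Since the answer above says the problem is usually on your end, here is a minimal diagnostic sketch in Python: it checks whether your machine can resolve the site's name and then open a TCP connection to it. The hostname is a placeholder, not one from the original thread:

```python
import socket

def check_site(hostname, port=443, timeout=5):
    """Return a short status string for a DNS + TCP connectivity check."""
    try:
        ip = socket.gethostbyname(hostname)  # DNS resolution (may hit a stale local cache)
    except socket.gaierror as exc:
        return f"DNS lookup failed: {exc}"
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return f"Resolved to {ip} and connected on port {port}"
    except OSError as exc:
        return f"Resolved to {ip} but could not connect: {exc}"

print(check_site("example.com"))  # placeholder hostname
```

If DNS fails here but the site loads for other people, the issue is likely your resolver or cache rather than the site itself.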
why is it that if you drop something electrical into a pool it affects the whole pool, but if you drop something electrical into the ocean, it doesn't electrocute the whole ocean | The pool ≠ the ocean, in any way.
The electrical thing you throw in the pool doesn't affect the whole pool either. Electricity is very, very good at finding the path of least resistance and following that. That path is almost never through a human or a fish. Those metal drains and grounded lights are just a better path. A rough falloff estimate follows the sources below. | [
"Besides boats and dockside power hookups, several other potential causes exist. Lightning strikes over or near water have caused electric shock drownings. Faulty hydroelectric generators or damaged underwater power lines can cause leakage currents, potentially creating a hazard. In general, anything electrically active that comes in contact with water has the potential to create leakage currents and contribute to this type of safety hazard.\n",
"There is no visible warning to electrified water. Swimmers will be able to feel the electricity if the current is substantial. If the swimmers notice any unusual tingling feeling or symptoms of electrical shock, it is highly likely that stray currents exist and everyone needs to get out. Swimmers should always swim away from the suspected current source. In most cases this means swimming away from docks and boats and toward another safer portion of the shoreline.\n",
"Potential differences between pool water and railings, or shower facilities and grounded drain pipes are not uncommon as a result of neutral to earth voltages (NEV), and can be a major nuisance, but are usually not life-threatening. However, contact voltage resulting from damaged insulation on a current carrying conductor can be very dangerous, and can lead to shock or electrocution. Such a condition can arise spontaneously from mechanical, thermal, or chemical stress on insulation materials, or from unintentional damage from digging activity, freeze-frost seizing, corrosion and collapse of conduit, or even workmanship issues.\n",
"Electric shock drownings are most commonly caused by improper electrical connections on boats and docks. By law, all connections near water are required to have working ground fault circuit interruption technology, GFCI. These devices break the electrical circuit if any stray current fails to return to the source connection. If GFCI devices are missing or faulty, it is possible for current to leak into the water. If a system is leaking current into the water, appliances will likely function as normal without any indication of a problem. Correctly functioning GFCI and ELCI devices will instantaneously detect the problem and disconnect the power source. \n",
"When transmitting electric signals in aquatic environment, the physical and chemical nature of the surroundings can make big differences to signal transmission. Environmental factors that might impose influences include solute concentration, temperature, and background electrical noise (lightning or artificial facilities), etc.\n",
"The discussion above is in terms of charged droplets falling. The inductive charging effects occur while the water stream is continuous. This is because the flow and separation of charge occurs already when the streams of water approach the rings, so that when the water passes through the rings there is already net charge on the water. When drops form, some net charge is trapped on each drop as gravity pulls it toward the like-charged container.\n",
"This effect operates similarly to the patterns made by sunlight on the bottom of a pool, the difference is that the light is bent at the contact point with the water while the shock wave is distorted by density variations (e.g. due to temperature variations) in the atmosphere. Variations of wind can cause a similar effect. This will disperse the shock wave at some places and focus it at others. For powerful shock waves this can cause damage farther than expected; the shock wave energy density will decrease beyond expected values based on uniform geometry falloff for weak shock or acoustic waves, as expected at large distances).\n"
] |
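To see why the current cannot "electrocute the whole ocean," here is a very idealized falloff estimate: treat the source as a point electrode in a large uniform conducting volume, so the current spreads over spheres and the current density drops with the square of the distance. The 10 A figure is an assumption for illustration, and this ignores the surface, the seabed, and the actual return path:

```latex
J(r) = \frac{I}{4\pi r^{2}}, \qquad
J(1\,\mathrm{m}) = \frac{10\,\mathrm{A}}{4\pi\,(1\,\mathrm{m})^{2}} \approx 0.8\ \mathrm{A/m^{2}}, \qquad
J(100\,\mathrm{m}) \approx 8\times 10^{-5}\ \mathrm{A/m^{2}}
```

A few metres away the current density is already tiny, which is the quantitative version of "the electricity takes the nearby low-resistance paths."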
how does the current "competitive healthcare market" benefit the patient? | The idea is that the insurance companies will compete with each other, and this will drive down prices for the patient. Sort of like the cell phone companies (Sprint, Verizon, AT&T, etc.). Unfortunately, this hasn’t really happened, and I’m unsure if it ever will. | [
"Improving access, coverage and quality of health services depends on the ways services are organized and managed, and on the incentives influencing providers and users. In market-based health care systems, for example such as that in the United States, such services are usually paid for by the patient or through the patient's health insurance company. Other mechanisms include government-financed systems (such as the National Health Service in the United Kingdom). In many poorer countries, development aid, as well as funding through charities or volunteers, help support the delivery and financing of health care services among large segments of the population.\n",
"Free-market advocates claim that the health care system is \"dysfunctional\" because the system of third-party payments from insurers removes the patient as a major participant in the financial and medical choices that affect costs. The Cato Institute claims that because government intervention has expanded insurance availability through programs such as Medicare and Medicaid, this has exacerbated the problem. According to a study paid for by America's Health Insurance Plans (a Washington lobbyist for the health insurance industry) and carried out by PriceWaterhouseCoopers, increased utilization is the primary driver of rising health care costs in the U.S. The study cites numerous causes of increased utilization, including rising consumer demand, new treatments, more intensive diagnostic testing, lifestyle factors, the movement to broader-access plans, and higher-priced technologies. The study also mentions cost-shifting from government programs to private payers. Low reimbursement rates for Medicare and Medicaid have increased cost-shifting pressures on hospitals and doctors, who charge higher rates for the same services to private payers, which eventually affects health insurance rates.\n",
"Multiple factors are driving healthcare providers to dramatically improve business processes and operations as the United States healthcare industry embarks on the necessary migration from a largely fee-for service, volume-based system to a fee-for-performance, value-based system. Prescriptive analytics is playing a key role to help improve the performance in a number of areas involving various stakeholders: payers, providers and pharmaceutical companies.\n",
"Primary healthcare results in better health outcomes, reduced health disparities and lower spending, including on avoidable emergency department visits and hospital care. With that being said, primary care physicians are an important component in ensuring that the healthcare system as a whole is sustainable. However, despite their importance to the healthcare system, the primary care position has suffered in terms of its prestige in part due to the differences in salary when compared to doctors that decide to specialize. In a 2010 national study of physician wages conducted by the UC Davis Health System found that specialists are paid as much as 52 percent more than primary care physicians, even though primary care physicians see far more patients.\n",
"Healthcare markets are known for a lack of price transparency, which affects the health care prices in the United States. Consumers are unable to make health care decisions based on cost due to a lack of free market, a system in which price transparency is essential. CDHPs cannot effectively decrease health care costs without the ability for consumers to compare prices prior to use.\n",
"Enthoven has argued that integrated delivery systems — networks of health care organizations under a parent holding company that provide a continuum of health care services — align incentives and resources better than most healthcare delivery systems, leading to improved medical care quality while controlling costs. \n",
"An integrated healthcare delivery system will give individuals the ability to better manage their health and access high quality clinical care. It will provide cost-effective healthcare, excellence in service and support strong clinical research.\n"
] |
How can burning wood (carbon) generate UV radiation? | Do you expect a lot of UV for some reason?
The thermal emission will contain tiny amounts of UV. In principle, chemical reactions can directly lead to UV emission as well, but I'm not aware of specific reactions that would occur in a wood fire. A rough blackbody estimate follows the sources below. | [
"For instance, upon harvesting, wood (as a carbon-rich material) can be immediately burned or otherwise serve as a fuel, returning its carbon to the atmosphere, \"or\" it can be incorporated into construction or a range of other durable products, thus sequestering its carbon over years or even centuries.\n",
"The environmental impact of using wood as a fuel depends on how it is burnt. Higher temperatures result in more complete combustion and less noxious gases as a result of pyrolysis. Some may regard the burning of wood from a sustainable source as carbon-neutral. A tree, over the course of its lifetime, absorbs as much carbon (or carbon dioxide) as it releases when burnt.\n",
"The Fraction of Absorbed Photosynthetically Active Radiation (FAPAR, sometimes also noted fAPAR or fPAR) is the fraction of the incoming solar radiation in the Photosynthetically Active Radiation spectral region that is absorbed by a photosynthetic organism, typically describing the light absorption across an integrated plant canopy. This biophysical variable is directly related to the primary productivity of photosynthesis and some models use it to estimate the assimilation of carbon dioxide in vegetation.\n",
"Black carbon particles (a component of soot) originating from combustion processes have been known for some time to absorb sunlight and warm the atmosphere, and pollution controls have been put into place to reduce their emissions and their effects.\n",
"Black carbon is primarily released by high-temperature combustion and brown carbon is emitted mainly by biomass combustion. These two are the two most important light absorbing substances in the atmosphere. The climate and radiative transfer are highly impacted by the absorptive properties of these substances.\n",
"As fossil fuels, burning wood causes greenhouse effect gases. However, wood is a renewable source of energy. A sustainable heat system would be to use solar heat in the summer, and the minimum of wood in the winter, thanks to maximum insulation.\n",
"The incomplete combustion of fossil fuels (such as diesel) and wood releases black carbon into the air. Though black carbon, most of which is soot, is an extremely small component of air pollution at land surface levels, the phenomenon has a significant heating effect on the atmosphere at altitudes above two kilometers (6,562 ft). Also, it dims the surface of the ocean by absorbing solar radiation.\n"
] |
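To put a number on "tiny amounts of UV," here is a minimal sketch that integrates Planck's law for an idealized blackbody at a wood-flame-like temperature. The 1300 K figure is an assumption, and real flames are not perfect blackbodies, so treat the result as an order-of-magnitude estimate only:

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # Planck constant, speed of light, Boltzmann constant (SI)

def planck(wavelength_m, temp_k):
    """Blackbody spectral radiance (Planck's law)."""
    return (2.0 * h * c**2 / wavelength_m**5) / np.expm1(h * c / (wavelength_m * k * temp_k))

T = 1300.0                                   # assumed flame temperature in kelvin
lam = np.linspace(100e-9, 100e-6, 200_000)   # 100 nm to 100 um
spectrum = planck(lam, T)
d_lam = lam[1] - lam[0]
total = spectrum.sum() * d_lam               # simple rectangle-rule integral over the whole band
uv = spectrum[lam <= 400e-9].sum() * d_lam   # UV portion, up to 400 nm
print(f"UV fraction at {T:.0f} K: {uv / total:.1e}")  # of order 1e-9, i.e. vanishingly small
```

So the glow of a fire is overwhelmingly infrared and visible; the UV tail exists but is a minuscule fraction of the emitted power.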
james holmes killed 12 people and injured 70, but is charged with 24 counts of first degree murder and 140 counts of attempted first degree murder. why does he have 2 charges for every murder/attempted murder he did? | From a ways down that page:
> For each person killed in the shooting, Holmes was charged with one count of murder with deliberation and one count of murder with extreme indifference.
The count arithmetic is sketched after the sources below. | [
"On July 16, after jury deliberations, Holmes was found guilty of twenty-four counts of first-degree murder, 140 counts of attempted first-degree murder, one count of possessing illegal explosives, and a sentence enhancement of a crime of violence. The two murder convictions for each death were first-degree murder or attempted murder after deliberation, and first-degree murder or attempted murder with extreme indifference.\n",
"Holmes confessed to the shooting but pleaded not guilty by reason of insanity. Arapahoe County prosecutors sought the death penalty for Holmes. The trial began on April 27, 2015. On July 16, he was convicted of 24 counts of first-degree murder, 140 counts of attempted first-degree murder, and one count of possessing explosives. On August 7, he was sentenced to life in prison without the possibility of parole. On August 26, he was given twelve life sentences, one for every person he killed; he also received 3,318 years for the attempted murders of those he wounded and for rigging his apartment with explosives.\n",
"On July 16, after deliberating for over twelve hours, the jurors found Holmes guilty on all twenty-four counts of first-degree murder, 140 counts of attempted first-degree murder, one count of possessing explosives, and a sentence enhancement of a crime of violence. They began deciding his sentence on July 22. The court expected the sentencing phase to last for one month. Holmes declined to make an allocution statement. On July 23, the jury ruled that Holmes acted in a cruel manner, was lying in wait, and ambushed his victims during the shooting, which constitute as aggravating factors. However, the jurors decided that Holmes did not intend to kill children when he opened fire.\n",
"Thomas Roundtree, Ernest Bell and William Duncan were all found not guilty for the Capital Murder of Victor Arbuckle, however 1 week later they were found guilty of armed offences carrying sentences ranging from 6 to 10 years.\n",
"Holmes was found guilty of four counts of murder in the first degree and six counts of attempted murder and executed in May 1896 at the age of 34. His total number of victims has been estimated at around 200. However Erik Larson, who wrote extensively about Holmes in \"The Devil in the White City\" (2003), thought this was a gross exaggeration. Holmes himself confessed to 27 murders, unquestionably he killed nine times. \n",
"Holmes was arrested shortly after the shooting and was jailed without bail while awaiting trial. Following this, he was hospitalized after attempting suicide several times while in jail. Holmes entered a plea of not guilty by reason of insanity, which was accepted. His trial began on April 27, 2015, and on August 24 he was sentenced to 12 consecutive life sentences plus 3,318 years without parole.\n",
"The five men charged, who later all pleaded guilty or were convicted of the murder, had over fifty prior convictions for offences including armed robbery, assault, larceny, car theft, breaking and entering, drug use, escaping lawful custody, receiving stolen goods and rape.\n"
] |
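The count arithmetic follows directly from the two murder theories charged per victim, as described in the sources above:

```latex
12\ \text{people killed} \times 2\ \text{theories} = 24\ \text{first-degree murder counts}, \qquad
70\ \text{people injured} \times 2\ \text{theories} = 140\ \text{attempted-murder counts}
```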
What is the current state of knowledge on the long-term effects of caffeine on productivity/well-being? | What do you mean by long-term effects on productivity and well-being? There are tons of studies on the long-term biological effects of chronic caffeine consumption. Some of them say caffeine can be good, others say caffeine can be bad. It depends on the area/system of the body, and it depends on the methodology/outcome measures used. Caffeine also has a lot of metabolites, and numerous factors (tobacco use, alcohol use, liver health, etc.) make a big difference in the byproducts produced during caffeine metabolism.
As for long-term effects on productivity and well-being? Those are really broad terms and the answer really depends on what exactly you're asking. There are a host of studies showing that caffeine improves performance on certain tasks requiring working memory, selective and sustained attention, memory encoding, and processing speed (mostly cognitive abilities in the immediate moment). There are many other studies suggesting that caffeine could disrupt more long term memory consolidation and retrieval, and could have a negative impact on some language functions (namely, word retrieval).
The thing to remember is that caffeine is a nonselective adenosine antagonist. While its half-life is typically between 4 and 6 hours in a healthy adult (LOTS of other factors play into speed of metabolism, and this number can be much higher in some people), there are some studies suggesting the actual effect on cognitive alertness and attention may be much shorter, on the order of 15 minutes or so. This is why drugs that mimic the adenosine antagonistic properties of caffeine haven’t been used in treating ADHD.
Hope this helps somewhat. If you clarify your question, perhaps I can provide more information. (A simple half-life decay sketch follows the sources below.) | [
"The effects of caffeine on short-term memory (STM) are controversial. Findings are inconsistent, as many effects of caffeine sometimes impair short-term and working memory, whereas the other studies indicate enhancing effects. Increasing our capacities of STM and working memory only seem to have beneficial impacts upon our daily lives. Increasing our memory capacities would result in retaining more information for extended periods of time and encoding information from STM to long-term memory. However, the research consensus indicates an inhibitory effect, reducing the capacity of our short-term memory and working memory.\n",
"Positive effects of caffeine on long-term memory have been shown in a study analyzing habitual caffeine intake of coffee or tea in addition to consuming other substances. Their effect on cognitive processes was observed by performing numerous cognitive tasks. Words were presented and delayed recall was measured. Increased delayed recall was demonstrated by individuals with moderate to high habitual caffeine intake (mean 710 mg/week) as more words were successfully recalled compared to those with low habitual caffeine intake (mean 178 mg/week). Therefore, improved performance in long-term memory was shown with increased habitual caffeine intake due to better storage or retrieval. A similar study assessing effects of caffeine on cognition and mood resulted in improved delayed recall with caffeine intake. A dose-response relationship was seen as individuals were able to recall more words after a period of time with increased caffeine. Improvement of long-term memory with caffeine intake was also seen in a study using rats and a water maze. In this study, completion of training sessions prior to performing numerous trials in finding a platform in the water maze was observed. Caffeine was consumed by the rats before and after the training sessions. There was no effect of caffeine consumption before the training sessions; however, a greater effect was seen at a low dosage immediately afterward. In other words, the rats were able to find the platform faster when caffeine was consumed after the training sessions rather than before. This implies that memory acquisition was not affected, while increases in memory retention were.\n",
"Caffeine has been shown to have positive, negative, and no effects on long-term memory. When studying the effects of this and any drug, potential ethical restraints on human study procedures may lead researchers to conduct studies involving animal subjects in addition to human subjects.\n",
"As previously stated, the most pronounced effect of caffeine on memory appears to be on middle-aged subjects (26-64). None of the studies provide reasoning for why this group would be most affected, but one could hypothesize that because of cognitive decline due to age, caffeine has a powerful effect on brain chemistry (although this would suggest the older the person, the stronger the effect of caffeine). Furthermore, this age group is most likely to be the largest consumer of caffeine. The main studies reporting this finding show that at low, acute doses of caffeine consumption, working memory only slightly affects those in this age group, while no effect is observed for younger or older subjects. The authors conclude that larger doses may be needed to produce results that are supported by previous literature, and this is an avenue for further research. Furthermore, it is argued that consumption of caffeine generally aids cognitive performance for this age group, as long one does not exceed the recommended dose of 300 mg per day.\n",
"A 2011 review found that increased caffeine intake was associated with a variation in two genes that increase the rate of caffeine catabolism. Subjects who had this mutation on both chromosomes consumed 40 mg more caffeine per day than others. This is presumably due to the need for a higher intake to achieve a comparable desired effect, not that the gene led to a disposition for greater incentive of habituation.\n",
"When consumed in moderation, caffeine can have many beneficial effects. However, over the course of several years, chronic caffeine consumption can produce various long-term health deficits in individuals, \"including permanent changes in brain excitability\". As previously stated, long-term effects are most often seen in adolescents who regularly consume excess amounts of caffeine. This can affect their neuroendocrine functions and increase the risk of anxiety-disorder development.\n",
"The caffeine content in the daily recommended dose of Dexatrim products ranges from 50–400 mg per day. There are a number of studies showing that caffeine has a short-term stimulatory effect on basal metabolic rate. However, in 1992, in a double-blind placebo controlled study, caffeine (at a dose of 200 mg daily) was found to be no more effective in promoting weight loss as compared to a placebo. Potential side effects of caffeine may include insomnia, anxiety, gastrointestinal discomfort, diarrhea, headaches and abnormal heart beat.\n"
] |
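As a concrete illustration of the half-life figure in the answer above, here is a minimal first-order elimination sketch. The 5-hour half-life (picked from the quoted 4–6 hour range) and the 100 mg dose are illustrative assumptions:

```python
def caffeine_remaining(dose_mg, hours, half_life_hours=5.0):
    """First-order elimination: C(t) = C0 * (1/2) ** (t / half-life)."""
    return dose_mg * 0.5 ** (hours / half_life_hours)

for t in (0, 5, 10, 24):
    print(f"{t:>2} h after a 100 mg dose: {caffeine_remaining(100, t):5.1f} mg remaining")
```

With these assumptions roughly 3–4 mg is still circulating a full day later, which hints at why caffeine taken late in the day can linger into the night, and why individual metabolic differences matter so much.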
Was there a particular flag that the Union used during the American Civil War other than the traditional 34-star flag? | I assume you're asking was there a flag that didn't have stars representing the Confederate States? In that case, no, not officially. Even when West Virginia broke away from Virginia, the official flag gained a [35th star](_URL_1_) (which still included Virginia and all the other rebelling states). In the view of the US, you can't actually secede from the Union (there was a [Supreme Court Case](_URL_0_) affirming this). They viewed the Confederacy as a group of rebelling states rather than an actual nation. Creating a flag without those states could be seen as official acknowledgment that they were in fact their own country, which as you said, would be a little counter-intuitive on their part. | [
"The first official flag of the Confederate States of America – called the \"Stars and Bars\" – originally had seven stars, representing the first seven states that initially formed the Confederacy. As more states joined, more stars were added, until the total was 13 (two stars were added for the divided states of Kentucky and Missouri). During the First Battle of Bull Run, (First Manassas) it sometimes proved difficult to distinguish the Stars and Bars from the Union flag. To rectify the situation, a separate \"Battle Flag\" was designed for use by troops in the field. Also known as the \"Southern Cross\", many variations sprang from the original square configuration. Although it was never officially adopted by the Confederate government, the popularity of the Southern Cross among both soldiers and the civilian population was a primary reason why it was made the main color feature when a new national flag was adopted in 1863. This new standard – known as the \"Stainless Banner\" – consisted of a lengthened white field area with a Battle Flag canton. This flag too had its problems when used in military operations as, on a windless day, it could easily be mistaken for a flag of truce or surrender. Thus, in 1865, a modified version of the Stainless Banner was adopted. This final national flag of the Confederacy kept the Battle Flag canton, but shortened the white field and added a vertical red bar to the fly end.\n",
"As the American Civil War was approaching, a flag was initially proposed for the state. It was modeled after the first flag of the Confederate States, albeit with the seal in the canton in lieu of stars representing the states.\n",
"Designed by William Porcher Miles, the chairman of the Flag and Seal Committee of the Confederate Provisional Congress, the flag now generally known as the \"Confederate flag\" was initially proposed, and rejected, as the national flag in 1861. The design was instead adopted as a battle flag by the Army of Northern Virginia (ANV) under General Robert E. Lee.\n",
"In 1863, the Confederate States of America adopted a new flag that played on the popularity of the Confederate Battle Flag, using a pure white field with the Battle Flag displayed in a canton in a position equivalent to the stars on the Flag of the United States. The design lasted until March 1865, when concerns about its being mistaken for a flag of truce when the flag was not completely flying necessitated the addition of a broad red band on the fly edge.\n",
"During the solicitation for a second Confederate national flag, many different types of designs were proposed, nearly all based on the battle flag, which by 1863 had become well-known and popular among those living in the Confederacy. The Confederate Congress specified that the new design be a white field \"...with the union (now used as the battle flag) to be a square of two-thirds the width of the flag, having the ground red; thereupon a broad saltire of blue, bordered with white, and emblazoned with mullets or five-pointed stars, corresponding in number to that of the Confederate States.\"\n",
"The 1879 flag was introduced by Georgia state senator Herman H. Perry and was adopted to memorialize Confederate soldiers during the American Civil War. Perry was a former colonel in the Confederate army during the war, and he presumably based the design on the First National Flag of the Confederacy, commonly known as the Stars and Bars. Over the years the flag was changed by adding and altering a charge on the vertical blue band at the hoist. The original 1879 design featured a solid blue band with no additional emblems.\n",
"The second national flag was later adapted as a naval ensign, using a shorter 2:3 ratio than the 1:2 ratio adopted by the Confederate Congress for the national flag. This particular battle ensign was the only example taken around the world, finally becoming the last Confederate flag lowered in the Civil War; this happened aboard CSS \"Shenandoah\" in Liverpool, England on November 7, 1865.\n"
] |
If you were on a spaceship in the absolute black void of space, how could you measure your speed without any points of reference? | You couldn't, using only local measurement. That is the whole point of relativity - there is no difference in local physics based on how fast you are moving (no preferred frame of reference).
You could measure the *difference* in your speed by keeping track of your instantaneous acceleration and integrating it (a small numerical sketch of this follows the sources below).
For external references, you could use Doppler shift of spectral lines in the distant stars. | [
"There's no way you can visualize the speed. There's nothing you can see to see how fast you're going. You have no depth perception. If you're in a car driving down the road and you close your eyes, you have no idea what your speed is. It's the same thing if you're free falling from space. There are no signposts. You know you are going very fast, but you don't feel it. You don't have a 614-mph wind blowing on you. I could only hear myself breathing in the helmet.\n",
"If a spaceship travels to a planet one light-year (as measured in the Earth's rest frame) away from Earth at high speed, the time taken to reach that planet could be less than one year as measured by the traveller's clock (although it will always be more than one year as measured by a clock on Earth). The value obtained by dividing the distance traveled, as determined in the Earth's frame, by the time taken, measured by the traveller's clock, is known as a proper speed or a proper velocity. There is no limit on the value of a proper speed as a proper speed does not represent a speed measured in a single inertial frame. A light signal that left the Earth at the same time as the traveller would always get to the destination before the traveller.\n",
"In 2004, Espen Gaarder Haug published a theory he titled \"SpaceTime-Finance\" in \"Wilmott\" magazine to show how a series of finance calculations had to be adjusted to avoid arbitrage when hypothetically traveling at very high velocities relative to other observers (traders). He illustrated how such necessary adjustments already could be measurable at the speed of the space shuttle, but also that such calculations and adjustments were of little or no practical relevance today since we all are moving at the nearly same speed relative to the enormous speed of light.\n",
"In particular, the physical experience of an observer who whizzes by a gravitating object (such as a star or a black hole) at nearly the speed of light can be modelled by an \"impulsive\" pp-wave spacetime called the Aichelburg–Sexl ultraboost.\n",
"BULLET::::- At the test track, teams had to drive one lap around the track while obeying three different speed limits. If they went 3 km/h over or under the limit, they would have to start again. The twist was that only the team member \"not\" driving would be able to see the speedometer.\n",
"The rear of the device had a separate rotating calculator, where if the ship's speed was set against the 60-minute guide mark, then the distance travelled at any time 0–60 minutes could be read off against the logarithmic time scale.\n",
"BULLET::::- Imagine you are watching a rocket take off nearby and measuring the distance it has traveled once each second. In the first couple of seconds your measurements may be accurate to the nearest centimeter, say. However, 5 minutes later as the rocket recedes into space, the accuracy of your measurements may only be good to 100 m, because of the increased distance, atmospheric distortion and a variety of other factors. The data you collect would exhibit heteroscedasticity.\n"
] |
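A minimal numerical sketch of the "integrate your own acceleration" idea from the answer above, i.e. dead reckoning from onboard accelerometer samples. The sample values are made up, and this ignores sensor drift and relativistic corrections:

```python
def delta_velocity(accel_samples, dt):
    """Trapezoidal integration of acceleration samples (m/s^2) taken every dt seconds."""
    dv = 0.0
    for a0, a1 in zip(accel_samples, accel_samples[1:]):
        dv += 0.5 * (a0 + a1) * dt
    return dv

# Made-up accelerometer readings during a short 4-second burn, sampled at 1 Hz
samples = [0.0, 2.0, 2.0, 2.0, 0.0]
print(delta_velocity(samples, dt=1.0), "m/s change relative to the frame you started in")
```

Note this only ever gives the change in velocity since you started counting; per the answer, no purely local measurement can assign you an absolute speed.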
how do different antibiotics target different parts of the body? | The thing about antibiotics is that they only affect bacteria, which are very different from your cells (if you’re 16: bacteria are prokaryotes and your cells are eukaryotes). So as the previous reply said, the antibiotics disperse throughout your body and attack the bacteria... all of them. Including the good ones in your gut. That’s why a common side effect of antibiotics is the runs (the poops, the scoots, diarrhea, etc.) | [
"Antibiotics are commonly classified based on their mechanism of action, chemical structure, or spectrum of activity. Most target bacterial functions or growth processes. Those that target the bacterial cell wall (penicillins and cephalosporins) or the cell membrane (polymyxins), or interfere with essential bacterial enzymes (rifamycins, lipiarmycins, quinolones, and sulfonamides) have bactericidal activities. Protein synthesis inhibitors (macrolides, lincosamides, and tetracyclines) are usually bacteriostatic (with the exception of bactericidal aminoglycosides). Further categorization is based on their target specificity. \"Narrow-spectrum\" antibiotics target specific types of bacteria, such as gram-negative or gram-positive, whereas broad-spectrum antibiotics affect a wide range of bacteria. Following a 40-year break in discovering new classes of antibacterial compounds, four new classes of antibiotics have been brought into clinical use in the late 2000s and early 2010s: cyclic lipopeptides (such as daptomycin), glycylcyclines (such as tigecycline), oxazolidinones (such as linezolid), and lipiarmycins (such as fidaxomicin).\n",
"BULLET::::- Antibiotics are usually administered intravenously, but they may also be infused directly into the peritoneum. The empiric choice of broad-spectrum antibiotics often consist of multiple drugs, and should be targeted against the most likely agents, depending on the cause of peritonitis (see above); once one or more agents grow in cultures isolated, therapy will be target against them.\n",
"The following is a list of antibiotics. The highest division is between antibiotics is bactericidal and bacteriostatic. Bactericidals kill bacteria directly, whereas bacteriostatics prevent them from dividing. However, these classifications are based on laboratory behavior. In practice, both can effectively treat a bacterial infection.\n",
"There are many different routes of administration for antibiotic treatment. Antibiotics are usually taken by mouth. In more severe cases, particularly deep-seated systemic infections, antibiotics can be given intravenously or by injection. Where the site of infection is easily accessed, antibiotics may be given topically in the form of eye drops onto the conjunctiva for conjunctivitis or ear drops for ear infections and acute cases of swimmer's ear. Topical use is also one of the treatment options for some skin conditions including acne and cellulitis. Advantages of topical application include achieving high and sustained concentration of antibiotic at the site of infection; reducing the potential for systemic absorption and toxicity, and total volumes of antibiotic required are reduced, thereby also reducing the risk of antibiotic misuse. Topical antibiotics applied over certain types of surgical wounds have been reported to reduce the risk of surgical site infections. However, there are certain general causes for concern with topical administration of antibiotics. Some systemic absorption of the antibiotic may occur; the quantity of antibiotic applied is difficult to accurately dose, and there is also the possibility of local hypersensitivity reactions or contact dermatitis occurring.\n",
"Bacterial infections may be treated with antibiotics, which are classified as bacteriocidal if they kill bacteria, or bacteriostatic if they just prevent bacterial growth. There are many types of antibiotics and each class inhibits a process that is different in the pathogen from that found in the host. An example of how antibiotics produce selective toxicity are chloramphenicol and puromycin, which inhibit the bacterial ribosome, but not the structurally different eukaryotic ribosome. Antibiotics are used both in treating human disease and in intensive farming to promote animal growth, where they may be contributing to the rapid development of antibiotic resistance in bacterial populations. Infections can be prevented by antiseptic measures such as sterilising the skin prior to piercing it with the needle of a syringe, and by proper care of indwelling catheters. Surgical and dental instruments are also sterilised to prevent contamination by bacteria. Disinfectants such as bleach are used to kill bacteria or other pathogens on surfaces to prevent contamination and further reduce the risk of infection.\n",
"Antibiotics are often not needed. If used they should target enteric organisms (e.g. Enterobacteriaceae), such as \"E. coli\" and \"Bacteroides\". This may consist of a broad spectrum antibiotic; such as piperacillin-tazobactam, ampicillin-sulbactam, ticarcillin-clavulanate (Timentin), a third generation cephalosporin (e.g.ceftriaxone) or a quinolone antibiotic (such as ciprofloxacin) and anaerobic bacteria coverage, such as metronidazole. For penicillin allergic people, aztreonam or a quinolone with metronidazole may be used.\n",
"Bacterial infections may be treated with antibiotics, which are classified as bacteriocidal if they kill bacteria or bacteriostatic if they just prevent bacterial growth. There are many types of antibiotics and each class inhibits a process that is different in the pathogen from that found in the host. For example, the antibiotics chloramphenicol and tetracyclin inhibit the bacterial ribosome but not the structurally different eukaryotic ribosome, so they exhibit selective toxicity. Antibiotics are used both in treating human disease and in intensive farming to promote animal growth. Both uses may be contributing to the rapid development of antibiotic resistance in bacterial populations. Phage therapy can also be used to treat certain bacterial infections.\n"
] |
Does a positive correlation exist between the length of a gestational period and the intelligence of the birthed animal? | Your comparison of a fish to an elephant is a little broad. But generally I would say no, the gestational period does not translate to inherent intelligence. The length of 'childhood' or child rearing may be a better indicator of intelligence. Certain animals have a long gestational period and hit the ground running (literally). Certain creatures are helpless when they are born and can only persist when taught proper environmental and cultural skills and behaviors.
I promise you that if you investigate, you will see that species that child-rear display more complex or 'intelligent' behaviors than those that don't. You can also expect, as a general trend, that the longer the child-rearing period, the more complex the behaviors will be. Remember, though, that time does not scale evenly for all species. | [
"The time at which insemination occurs during the oestrus cycle has been found to affect the sex ratio of the offspring of humans, cattle, hamsters, and other mammals. Hormonal and pH conditions within the female reproductive tract vary with time, and this affects the sex ratio of the sperm that reach the egg.\n",
"Various other factors can come into play in determining the duration of gestation. For humans, male fetuses normally gestate several days longer than females and multiple pregnancies gestate for a shorter period. Ethnicity in humans is also a factor that may lengthen or shorten gestation. In dogs there is a positive correlation between a longer gestation time and a small litter size.\n",
"As of 2006 results from studies in humans had found conflicting evidence regarding the effect of prenatal exposure to hormones and psychosexual outcomes; Gooren noted in 2006 that studies in subprimate mammals are invalid measures of human sexual differentiation, as sex hormones follow a more \"on-off\" role in sex-typed behavior than is found in primates.\n",
"Nearly all mammals display sex-dimorphic reproductive and sexual behavior (e.g., lordosis and mounting in rodents). Much research has made it clear that prenatal and early postnatal androgens play a role in the differentiation of most mammalian brains. Experimental manipulation of androgen levels in utero or shortly after birth can alter adult reproductive behavior.\n",
"However, differences between species' physiology and gestation times mean findings in animals may not apply to humans. Mice, rats, and rabbits have shorter gestational times, so experimenters must continue giving drugs after they are born to more closely model human gestation; however this introduces more differences. Animals and humans metabolize drugs at different rates, and drugs that are highly teratogenic in animals may not be in humans and vice versa. Animals cannot be used to measure differences in abilities such as reasoning that are only found in humans. \n",
"Precopulatory mechanisms determine who father an offspring prior to sex. Male-male competition is the biggest precopulatory mechanism in mammals. Sexual dimorphism is a result of male-male competition that is easily seen in species.\n",
"The relationship between fertility and intelligence has been investigated in many demographic studies; there is no conclusive evidence of a positive or negative correlation between human intelligence and fertility rate.\n"
] |
If I urinated on an electrified fence, would it shock me? | If you can get close enough to produce a steady stream then yes, you would get shocked by the fence, because urine is electrically conductive, but you have to avoid 'fragmentation' of your urine.
[Here's](_URL_1_) a vid of a guy doing it, and [here's](_URL_0_) a Mythbusters vid of the same effect but using the 3rd rail instead of a fence. I know it's not a scientific paper or anything, but I hope it sufficiently demonstrates what's happening. | [
"There is no visible warning to electrified water. Swimmers will be able to feel the electricity if the current is substantial. If the swimmers notice any unusual tingling feeling or symptoms of electrical shock, it is highly likely that stray currents exist and everyone needs to get out. Swimmers should always swim away from the suspected current source. In most cases this means swimming away from docks and boats and toward another safer portion of the shoreline.\n",
"An \"underground fence\" is an electronic system to prevent pets from leaving a yard. A buried wire around the area to be used is energized with coded signals. A shock collar on the pet receives these signals. When the pet approaches the buried fence line, the collar makes a warning sound and then gives the pet a harmless electric shock. One popular brand claims more than three million installations. \n",
"Its disadvantages include the potential for the entire fence to be disabled due to a break in the conducting wire, shorting out if the conducting wire contacts any non-electrified component that may make up the rest of the fence, power failure, or forced disconnection due to the risk of fires starting by dry vegetation touching an electrified wire. Other disadvantages can be lack of visibility and the potential to shock an unsuspecting human passer-by who might accidentally touch or brush the fence.\n",
"Most modern fences emit pulses of high voltage at a given interval of time, and don't take into account whether there is an animal or person touching the conductive wires, except for the voltage multiplier based electric fence charger that stores high voltage potential and dumps its charges as soon as a conductive load (grounded animal/person) touches the wires.\n",
"An electric fence is a barrier that uses electric shocks to deter animals and people from crossing a boundary. The voltage of the shock may have effects ranging from discomfort to death. Most electric fences are used today for agricultural fencing and other forms of animal control, although they are frequently used to enhance the security of sensitive areas, such as military installations, prisons, and other security sensitive places; places exist where lethal voltages are used.\n",
"Unauthorized persons climbing on power pylons or electrical apparatus are also frequently the victims of electrocution. At very high transmission voltages even a close approach can be hazardous, since the high voltage may arc across a significant air gap.\n",
"Electric fences are barriers that uses electric shocks to deter animals or people from crossing a boundary. The voltage of the shock may have effects ranging from uncomfortable, to painful or even lethal. Most electric fencing is used today for agricultural fencing and other forms of animal control purposes, though it is frequently used to enhance security of restricted areas, and there exist places where lethal voltages are used.\n"
] |
how do we make extremely, extremely high frame-per-second cameras? | The sensors in most cameras are perfectly capable of capturing at a higher frame rate than what they are normally being used for, but the challenge is getting the data and putting it somewhere fast enough. If your storage subsystem is too slow, you won't be able to ingest the flood of incoming data fast enough. Some of the fastest high-speed cameras have ridiculous amounts of RAM to initially capture the video, and then they take a minute or two to dump that to a slower hard drive or SSD afterwards. (A rough back-of-the-envelope data-rate sketch follows below, after the context passages.)
Another issue with super extreme high speed cameras is light sensitivity of the pixels, as well as cooling of the sensor. When you get into the *really* high speed territory, the individual pixels in the sensor have less time to gather light before the next frame, so you have to use extremely bright external lighting, or even sunlight to get a usable video. The sensors that are capable of such fast frame rates require additional cooling which makes the cameras bulky and loud. | [
"In 2010 researchers built a camera exposing each frame for two trillionths of a second (picoseconds), for an effective frame rate of half a trillion fps (femto-photography). Modern high-speed cameras operate by converting the incident light (photons) into a stream of electrons which are then deflected onto a photoanode, back into photons, which can then be recorded onto either film or CCD.\n",
"Digital cameras use a 1-dimensional array sensor to take 1-pixel-wide sequential images of the finish line. Since only a single line of the CCD is read out at a time, the frame rates can be very high (up to 10,000 frames per second). Unlike a film based photo finish, there is no delay from developing the film, and the photo finish is available immediately. They may be triggered by a laser or photovoltaic means.\n",
"BULLET::::- The world’s fastest receive-only 2-D camera has been demonstrated, capturing up to 100 billion frames per second. It is hoped this new system will improve the understanding of very fast biological interactions and chemical processes.\n",
"A high-speed camera equipment (capable of producing 1000 frames a second) was used to shoot possibly the first known instance (in feature films) of following a bullet's trajectory with high-speed cameras.\n",
"The rear-facing camera has an 8-megapixel back-illuminated sensor with a maximum aperture of f/2, autofocus, an LED flash dubbed HTC Smart Flash with three levels of brightness (determined by distance from the subject), and a dedicated imaging chip. With a startup time of 0.7 seconds and 0.2 seconds per shot, it beats even the Samsung Galaxy Nexus in camera speed. The camera can record 1080p video at 24 frames per second and 10 megabit/s in h.264 with the baseline profile. It can take four photos per second while recording video. It also has slow motion video capture and playback (768 × 432 pixels). Shooting modes include High Dynamic Range (HDR) and panorama. \n",
"Cameras capable of high continuous shooting rates are much desired when the subjects are in motion, as in sports photography, or where the opportunities are brief. Rather than anticipate the action precisely, photographers can simply start shooting from right before they believe the action will occur, giving a high chance of at least one frame being acceptable. Most modern digital SLR cameras have continuous shooting rates of between 3 and 8 frames per second, although very high end cameras such as the Canon EOS-1D X Mark II are capable of 14 frames per second with full autofocus, or 16 frames per second when in mirror lock-up mode. The Panasonic Lumix DMC-GH2 is capable of recording 40 still images per second in burst mode, at a slightly reduced resolution. In March 2014, Nikon claims its Nikon 1 V3 mirrorless interchangeable-lens camera has the world's fastest burst mode of 20fps Auto Focus tracking and 60fps at the first shot autofocus, both in 18.4MP full resolution. The claim is among digital cameras with interchangeable lenses (including (its) DSLR).\n",
"Most modern digital camera backs use CCD or CMOS matrix sensors. The matrix sensor captures the entire image frame at once, instead of incrementing scanning the frame area through the prolonged exposure. For example, Phase One produces a 39 million pixel digital camera back with a 49.1 x 36.8 mm CCD in 2008. This CCD array is a little smaller than a frame of 120 film and much larger than a 35 mm frame (36 x 24 mm). In comparison, consumer digital cameras use arrays ranging from 36 x 24 mm (full frame on high end consumer DSLRs) to 1.28 x 0.96 mm (on camera phones) CMOS sensor.\n"
] |
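As a rough illustration of the data-rate problem described in the camera answer above: every figure below (resolution, bit depth, frame rate, and the SSD/RAM bandwidth numbers) is an assumed round value chosen for the example, not the spec of any real camera.

```python
# Back-of-the-envelope data rate for a hypothetical high-speed camera.
# All values are illustrative round numbers, not real product specs.

width, height = 1280, 720     # assumed frame size in pixels
bits_per_pixel = 12           # assumed raw sensor bit depth
fps = 10_000                  # assumed frame rate

bytes_per_frame = width * height * bits_per_pixel / 8
data_rate_gb_s = bytes_per_frame * fps / 1e9
print(f"Raw data rate: {data_rate_gb_s:.1f} GB/s")        # ~13.8 GB/s

# Rough sustained-write figures for comparison (also assumptions):
ssd_write_gb_s = 3.0    # fast NVMe SSD
ram_write_gb_s = 25.0   # DDR4/DDR5-class memory bandwidth

print("Keeps up with SSD in real time:", data_rate_gb_s <= ssd_write_gb_s)  # False
print("Fits within a RAM buffer:      ", data_rate_gb_s <= ram_write_gb_s)  # True
```

The exact numbers do not matter; the point is only that the raw rate lands far above what a disk can sustain but within reach of a RAM buffer, which is why such cameras record to memory first and offload afterwards.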
why are most (not all) military personnel right wing and anti-obama/universal anything if they are part of a government run, universal healthcare providing, free almost everything military? | The majority of America's military volunteers come from regions in the US that are majority conservative. That makes for a majority-conservative military. That being said, I've met plenty of liberals who served in the military, who are pro-gun, pro-gay, and anti-war. | [
"The American system is a mix of public and private insurance. The government provides insurance coverage for approximately 53 million elderly via Medicare, 62 million lower-income persons via Medicaid, and 15 million military veterans via the Veteran's Administration. About 178 million employed by companies receive subsidized health insurance through their employer, while 52 million other persons directly purchase insurance either via the subsidized marketplace exchanges developed as part of the Affordable Care Act or directly from insurers. The private sector delivers healthcare services, with the exception of the Veteran's Administration, where doctors are employed by the government.\n",
"The American system is a mix of public and private insurance. The government provides insurance coverage for approximately 53 million elderly via Medicare, 62 million lower-income persons via Medicaid, and 15 million military veterans via the Veteran's Administration. About 178 million employed by companies receive subsidized health insurance through their employer, while 52 million other persons directly purchase insurance either via the subsidized marketplace exchanges developed as part of the Affordable Care Act or directly from insurers. The private sector delivers healthcare services, with the exception of the Veteran's Administration, where doctors are employed by the government.\n",
"Military commissaries serve to provide discounted groceries and household goods to many members within the Department of Defense regardless of which country they are located in. Eligible patrons include active-duty personnel in all services, retirees of all services, Guard and Reserve personnel, and immediate family members of service personnel. Beyond providing members with discounted pricing, commissaries also provide employment for many family members of service personnel. This is especially significant in overseas locations where acquiring a job could prove difficult for U.S. citizens. \n",
"In the US, direct government funding of health care is limited to Medicare, Medicaid, and the State Children's Health Insurance Program (SCHIP), which cover eligible senior citizens, the very poor, disabled persons, and children. The federal government also runs the Veterans Administration, which provides care directly to retired or disabled veterans, their families, and survivors through medical centers and clinics.\n",
"Public programs provide the primary source of coverage for most seniors and also low-income children and families who meet certain eligibility requirements. The primary public programs are Medicare, a federal social insurance program for seniors (generally persons aged 65 and over) and certain disabled individuals; Medicaid, funded jointly by the federal government and states but administered at the state level, which covers certain very low income children and their families; and CHIP, also a federal-state partnership that serves certain children and families who do not qualify for Medicaid but who cannot afford private coverage. Other public programs include military health benefits provided through TRICARE and the Veterans Health Administration and benefits provided through the Indian Health Service. Some states have additional programs for low-income individuals. In 2011, approximately 60 percent of stays were billed to Medicare and Medicaid—up from 52 percent in 1997.\n",
"In the United States, the chief public health officer is the Surgeon General of the United States and many states have their own state surgeons general. Moreover, three of the U.S. military services have their own surgeon general, namely the Surgeon General of the United States Army, Surgeon General of the United States Navy, and Surgeon General of the United States Air Force.\n",
"The United States has a two-tier health system, but most of the population cannot gain access to the public provision tiers. Healthcare provided directly by the government is limited to military and veteran families and to certain Native American tribes. Certain cities and towns also provide free care directly but only to those who cannot afford to pay. Medicare, Medicaid, and the State Children's Health Insurance Program pay for health care obtained at private facilities but only for the elderly, disabled, and children in poor families. Since enacting the Patient Protection and Affordable Care Act in 2010, Medicaid has been substantially expanded, and federal subsidies are available for low- to middle-income individuals and families to purchase private health insurance.\n"
] |
Is there archaeological support for the stereotype of Roman infanticide as sex selection? | I can't speak for Roman society generally, but I am familiar with one specific case of sex-selective infanticide from Tel Ashkelon, Israel. We excavated a Roman/Byzantine bathhouse in Grid 38 (you can read the publication report for free at _URL_2_, just download the massive PDF of volume 1 and you can find some descriptions of the "baby drain" on page 295, and the publication of the DNA analysis on page 537), and found literally hundreds of infant skeletons in a drain underneath the bathhouse. Why were hundreds of dead babies thrown into the drain of a bathhouse? Why were they nearly all male? (Answer: it may also have been an illegal brothel). I have sat on this drain to do paperwork many times, and am excellent friends with the person who oversaw the excavation of the infant remains. Part of the drain is actually still there, it being made of Roman concrete and all. [This](_URL_1_) is a picture of the drain as it was excavated. [This](_URL_0_) is a picture of it basically as it is today, taken by one of my colleagues. The drain is the concrete thing that all the people are standing on; they are standing on the same thing the guy is standing on in the other picture. | [
"Sex selection may be one of the contributing factors of infanticide. In the absence of sex-selective abortion, sex-selective infanticide can be deduced from very skewed birth statistics. The biologically normal sex ratio for humans at birth is approximately 105 males per 100 females; normal ratios hardly ranging beyond 102–108. When a society has an infant male to female ratio which is significantly higher or lower than the biological norm, and biased data can be ruled out, sex selection can usually be inferred.\n",
"Maternal infanticide differs from other varieties of infanticide in that the resource competition and sexual selection hypotheses (see other sections) must be rejected. Resource competition and sexual selection are ruled out because it is the mother that is performing the infanticide, not another female.\n",
"Female infanticide is the deliberate killing of newborn female children. In countries with a history of female infanticide, the modern practice of sex-selective abortion is often discussed as a closely related issue. Female infanticide is a major cause of concern in several nations such as China, India and Pakistan. It has been argued that the low status in which women are viewed in patriarchal societies creates a bias against females.\n",
"This form of infanticide represents a struggle between the sexes, where one sex exploits the other, much to the latter's disadvantage. It is usually the male who benefits from this behavior, though in cases where males play similar roles to females in parental care the victim and perpetrator may be reversed (see Bateman's principle for discussion of this asymmetry).\n",
"This hypothesis suggests the adaptive advantage for women who had hidden estrus would be a reduction in the possibility of infanticide by men, as they would be unable to reliably identify, and kill, their rivals' offspring. This hypothesis is supported by recent studies of wild Hanuman langurs, documenting concealed ovulation, and frequent matings with males outside their fertile ovulatory period. Heistermann et al. hypothesize that concealed ovulation is used by women to confuse paternity and thus reduce infanticide in primates. He explains that as ovulation is always concealed in women, men can only determine paternity (and thus decide on whether to kill the woman's child) probabilistically, based on his previous mating frequency with her, and so he would be unable to escape the possibility that the child might be his own, even if he were aware of promiscuous matings on the woman's part.\n",
"Evolutionary psychology has proposed several theories for different forms of infanticide. Infanticide by stepfathers, as well as child abuse in general by stepfathers, has been explained by spending resources on not genetically related children reducing reproductive success (See the Cinderella effect and Infanticide (zoology)). Infanticide is one of the few forms of violence more often done by women than men. Cross-cultural research has found that this is more likely to occur when the child has deformities or illnesses as well as when there are lacking resources due to factors such as poverty, other children requiring resources, and no male support. Such a child may have a low chance of reproductive success in which case it would decrease the mother's inclusive fitness, in particular since women generally have a greater parental investment than men, to spend resources on the child.\n",
"The occurrence of infanticide seems to vary within rodent species between parents. For example, male meadow voles and house mice can be classed as either 'infanticidal' or 'non-infanticidal' depending on their history with other litters they have sired, although studies have shown that females do not discriminate between these classes when choosing a mate. Furthermore, recent studies in rodents have shown that infanticide is influenced by various hormones such as: prolactin, corticosterone, and progesterone.\n"
] |
how do courts decide who to send to white collar prison? | There's a point system that takes into account a number of things (age, gender, crime committed, whether the person is an escape risk, and whether they have violent tendencies). The more points the convict gets, the higher the level of prison security they get. (An illustrative sketch of such a point-scoring scheme follows below, after the context passages.) | [
"According to human rights groups, black jails are a growing industry. The system includes so-called \"interceptors\" (截访者, literally \"inquiry-stopper\"), or \"black guards\", often sent by local or regional authorities, who abduct petitioners and hold them against their will or bundle them onto a bus to send them back to where they came from. Non-government sources have estimated the number of black jails in operation to be between 7 and 50. The facilities may be located in state-owned hotels, hostels, hospitals, psychiatric facilities, residential buildings, or government ministry buildings, among others.\n",
"Allegedly, local officials, with the tolerance of public security authorities, establish the black jails as a way to ensure that complainants are detained, punished, and sent home so that these officials will not suffer demerits under rules that impose bureaucratic penalties when there is a large flow of petitioners from their areas. Black jails are used to protect government officials at the county, municipal, and provincial levels from financial and career advancement penalties. Unpublished local government documents describe penalties levied against local officials who fail to take decisive action when petitioners from their geographical area seek legal redress in provincial capitals and Beijing. The operators of black jails allegedly receive from those local-level governments daily cash payments of 150 yuan (US$22) to 200 yuan (US$29) per person.\n",
"The three prisoners are ordered to stand in a straight line facing the front, with A in front and C at the back. They are told that there will be two black hats and three white hats. One hat is then put on each prisoner's head; each prisoner can only see the hats of the people in front of him and not on his own. The first prisoner that is able to announce the color of his hat correctly will be released. No communication between the prisoners is allowed.\n",
"The jailer seats three of the men into a line. B faces the wall, C faces B, and D faces C and B. The fourth man, A, is put behind a screen (or in a separate room). The jailer gives all four men party hats. He explains that there are two black hats and two white hats, that each prisoner is wearing one of the hats, and that each of the prisoners see only the hats in front of him but neither on himself nor behind him. The fourth man behind the screen can't see or be seen by any other prisoner. No communication among the prisoners is allowed.\n",
"The black jail is a U.S. military detention camp established in 2002 inside Bagram Air Base, Afghanistan. Distinct from the main prison of the Bagram Internment Facility, the \"Black Jail\" is run by the U.S. Defense Intelligence Agency and U.S. Special Operations Forces. There are numerous allegations of abuse associated with the prison, including beatings, sleep deprivation and forcing inmates into stress positions. U.S. authorities refuse to acknowledge the prison's existence. The facility consists of individual windowless concrete cells, each illuminated by a single light bulb glowing 24 hours a day. Its existence was first reported by journalist Anand Gopal and confirmed by many subsequent investigations.\n",
"Black jails have no official or legal status, differentiating them from detention centers, the criminal arrest process, or formal sentencing to jail or prison. They are in wide use in Beijing, in particular, and serve as holding locations for the many petitioners who travel to the central Office of Letters and Calls to petition.\n",
"In May 2010, the PRC authorities officially passed new regulations in an attempt to nullify evidence gathered through violence or intimidation in their official judicial procedures, and to reduce the level of torture administered to prisoners already in jails. Little is known, however, about whether or how procedures were modified in black jails, which are not officially part of the judicial system. The move came after a public outcry following the revelation that a farmer, convicted for murder based on his confession under torture, was in fact innocent. The case came to light only when his alleged victim was found alive, after the defendant had spent ten years in prison. International human rights groups gave the change a cautious welcome.\n"
] |
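Purely as an illustration of the point-and-threshold classification the answer above describes: the factors, weights, and cutoffs below are invented for the example and are not the actual federal custody classification instrument.

```python
# Hypothetical point-based security classification (illustrative only;
# weights and thresholds are invented, not the real scoring form).

def classify(age: int, violent_history: bool, escape_risk: bool,
             offense_severity: int) -> str:
    """Sum weighted risk factors, then map the total to a security level."""
    points = 0
    points += 4 if age < 25 else 0   # younger inmates tend to score higher
    points += 6 if violent_history else 0
    points += 5 if escape_risk else 0
    points += offense_severity       # e.g. 1 (minor fraud) .. 7 (severe)

    if points <= 5:
        return "minimum security (camp, the 'white collar' tier)"
    if points <= 11:
        return "low security"
    if points <= 17:
        return "medium security"
    return "high security"

# A non-violent 50-year-old fraud offender with no escape risk:
print(classify(age=50, violent_history=False, escape_risk=False, offense_severity=2))
```

Only the shape of the scheme (score several risk factors, then bucket the total) is meant to carry over; the real form and weights differ.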
How have small speakers (cellphones, beats pill) improved in quality so much recently? | Hoffman's iron law of speaker design/performance dictates that you can only pick 2 out of the following three things:
small enclosure size, high efficiency, and deep bass.
That means that in order to chase deep bass in a tiny enclosure, phone and mobile speaker makers have most likely sacrificed speaker electrical efficiency. Given the impressive gains in class D amplification efficiency and technology (which would offset the reduction of speaker/driver efficiency) over the last decade or so, this doesn't seem that implausible. The other side of the equation is that power handling and excursion of a small speaker must also improve, a challenge that can be met with good driver design, smart equalization, and materials science.
Now Hoffman's Iron Law is not hard cast - it's slightly malleable/ductile and can be stretched using several shortcuts such as passive radiators (which are functionally the same as vents/reflex, but have much smaller volume requirements), BMR and other high tech drivers, virtual bass and other DSP algorithms.
I would say that the biggest driver of everything going on here is simply consumer dollars - people want better sound from their mobile devices, and large manufacturers are now willing to spend good money in R & D and engineering talent trying to achieve differentiation (as opposed to side projects by independent speaker builders with limited resources).
Whatever the driver is, consumers are the beneficiary. I personally recommend the UE Mini Boom and the UE Boombox as my go-to mobile Bluetooth speakers - they have the hardest-hitting bass in their respective classes. (A rough numerical sketch of the iron-law trade-off follows below, after the context passages.) | [
"Business magazine \"Bloomberg Businessweek\" suggests that caution is in order with regard to high-resolution audio: \"There is reason to be wary, given consumer electronics companies’ history of pushing advancements whose main virtue is to require everyone to buy new gadgets.\"\n",
"Most designs produce high quality sound, even though some audiophiles consider chip-based amplifiers to be inferior to their discrete counterparts. The chips have been designed to incorporate a number of desirable features, including excellent power supply rejection ratio, fast response, accurate bias current, over-temperature protection and short circuit protection.\n",
"Hirsch helped draft the Institute of High Fidelity standards that made it easier for consumers to compare audio equipment. Bob Ankosko, an editor-in-chief at \"Sound & Vision\", said \"Julian Hirsch was one of the most influential writers ever in consumer electronics.\"\n",
"With the new Pill+ however, reviewers show better favor to the sound quality of the speaker. The Verge has called it a \"refinement on the recognizable Beats Pill look\" instead of a \"radical redesign\". It also says that the new Pill+ is the best sounding speaker from the Pill lineup. Similarly, \"Wired\" magazine points out that the sound of the new speaker is \"much, much improved.\n",
"Vince valued excellence in products and found those that lasted functionally beyond its supposed designated lifespan, and continued to excel, were particularly endearing. And while he vowed that he would only develop superior, premium speakers, he adamantly wanted to provide this quality at affordable prices.\n",
"Their claimed unique sales proposition is that they augment the volume, or hearability, of a speaker or musician, but not the quality of the sound. They supposedly achieve this flat response through digital signal processing (DSP) so that profile changes introduced by loud speakers and other audio components are compensated for and thus eliminated.\n",
"Some audio quality enhancing features, such as Voice over LTE and HD Voice have appeared and are often available on newer smartphones. Sound quality can remain a problem due to the design of the phone, the quality of the cellular network and compression algorithms used in long distance calls. Audio quality can be improved using a VoIP application over WiFi. Cellphones have small speakers so that the user can use a speakerphone feature and talk to a person on the phone without holding it to their ear. The small speakers can also be used to listen to digital audio files of music or speech or watch videos with an audio component, without holding the phone close to the ear.\n"
] |
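To put rough numbers on the trade-off in the answer above: Hoffman's Iron Law is usually quoted as saying that a speaker's maximum efficiency scales with box volume times the cube of its low-frequency cutoff. The sketch below uses arbitrary illustrative volumes and cutoffs, not measurements of any real product.

```python
# Relative-efficiency comparison using the Hoffman's Iron Law proportionality:
#   efficiency ∝ box_volume * cutoff_frequency**3
# Absolute values are meaningless here; only the ratios matter.

def relative_efficiency(box_litres: float, cutoff_hz: float) -> float:
    return box_litres * cutoff_hz ** 3

bookshelf = relative_efficiency(box_litres=20.0, cutoff_hz=50.0)  # assumed reference box
pill      = relative_efficiency(box_litres=0.3,  cutoff_hz=50.0)  # same bass, tiny box

print(f"Tiny enclosure keeps ~{pill / bookshelf:.1%} of the efficiency")   # ~1.5%

# Raising the cutoff claws some of that back (the cube matters a lot):
pill_higher_cutoff = relative_efficiency(box_litres=0.3, cutoff_hz=90.0)
print(f"With a 90 Hz cutoff instead: ~{pill_higher_cutoff / bookshelf:.1%}")  # ~8.7%
```

This is the answer's point in numbers: give up the big box and you either accept a higher bass cutoff or burn efficiency and buy it back with amplifier power, driver excursion, and DSP tricks.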
At what point and location did the English language split among the use of the article "the" before "hospital"? | This is not a historical development as such. Nor is it usage that can be explained with a hard-and-fast rule, as there is some dialectal variation with respect to it.
What we are observing, essentially, is that English nouns require an article where they are countable, singular and concrete (“I found *a* quarter”, but not “I found quarter”) and do *not* require an article where they are abstract and uncountable (“the boy has spirit” is acceptable), or countable and plural (“the boy has legs” is acceptable).
But the former category can give way to the latter, in particular in cases where what is otherwise or previously a count noun is treated in an abstract fashion which construes it as uncountable. The extreme case of this is a word such as “heaven” or “hell”, which, for conceptual reasons, cannot be enumerated in a given cultural context. One goes “to heaven” rather than “to the heaven”, as to qualify *which* heaven one is referring to, or how many, would be nonsensical (in a majority of English Christian contexts), making a countable use of the word impossible, and the interpretation of the word as uncountable the natural development. And indeed, in a less extreme case, when we say we go “to church”, we are implying our attending the uncountable abstraction of the church concept, rather than a specific edifice which is therefore countable. Though in this case, both approaches coexist, as they often do. Which nouns conventionally see this usage or these changes in countability is, however, as I say, subject to dialectal variation. | [
"The grammar of the word differs slightly depending on the dialect. In the United States, \"hospital\" usually requires an article; in the United Kingdom and elsewhere, the word normally is used without an article when it is the object of a preposition and when referring to a patient (\"in/to the hospital\" vs. \"in/to hospital\"); in Canada, both uses are found.\n",
"The trend toward using vernacular languages for medical writing began in the 12th century, and grew increasingly in the later Middle Ages. The many vernacular translations of the \"Trotula\" were therefore part of a general trend. The first known translation was into Hebrew, made somewhere in southern France in the late 12th century. The next translations, in the 13th century, were into Anglo-Norman and Old French. And in the 14th and 15th centuries, there are translations in Dutch, Middle English, French (again), German, Irish, and Italian. Most recently, a Catalan translation of one of the \"Trotula\" texts has been discovered in a 15th-century medical miscellany, held by the Biblioteca Riccardiana in Florence. This fragmentary translation of the \"De curis mulierum\" is here collated by the copyist (probably a surgeon making a copy for his own use) with a Latin version of the text, highlighting the differences.\n",
"of the College of Physicians of Philadelphia, Henry wrote in 1905 that \"It is the first edition of the first medical dictionary.\" By the time of Antonio Guaineri and Savonarola, this work was used alongside others by Oribasius, Isidore of Seville, Mondino dei Liuzzi, Serapion, and Pietro d'Abano. Then, as now, writers struggled with the terminology used in various translations from earlier Greek, Latin, Hebrew, and Arabic works. Later works by Jacques Desparts and Jacopo Berengario da Carpi continued building on the \"Synonyma\".\n",
"The first authoritative and full-featured English dictionary, the \"Dictionary of the English Language\", was published by Samuel Johnson in 1755. To a high degree, the dictionary standardized both English spelling and word usage. Meanwhile, grammar texts by Lowth, Murray, Priestly, and others attempted to prescribe standard usage even further.\n",
"Hospitals in medieval Scotland can be dated back to the 12th century. From c. 1144 to about 1650 many hospitals, bedehouses and \"Maisons Dieu\" were built in Scotland. There are many terms that apply to, or describe a \"Hospital\". The origin of the English term, \"Hospital\", is probably from the French or Latin. English and European terms for Hospital appear to have a common root. \"Hospital\" - from the Latin – \"a place of rest for guests\". Other terms are recognized. Almshouse; Bede House; Chantry ; God's House ; Infirmary ; Spital ; Domus hopitalis Sancti Spiritus (Lat) ; Gasthuis (Ger) ; Godshuis (Dut) ; Hôpital (Fr) ; Hôtel-Dieu (Fr) ; Krankenhaus(Ger) ; Maison Dieu (Fr) ; Ospedale (It) ; Sjukhus(Swe) ; Xenodochium(Gk). Records provide evidence of more than 180 Hospitals in Scotland. The term \"spit(t)al\" or \"temple/ templar\" may also indicate land endowed by churches or monasteries as well as sites associated with the Knights Templar and the Knights Hospitallers. Many hospitals were in the north east of Scotland in the cities of Dundee, Old Aberdeen and Aberdeen and across Aberdeenshire.\n",
"Doctor is an academic title that originates from the Latin word of the same spelling and meaning. The word is originally an agentive noun of the Latin verb \"\" 'to teach'. It has been used as an academic title in Europe since the 13th century, when the first Doctorates were awarded at the University of Bologna and the University of Paris. Having become established in European universities, this usage spread around the world. Contracted \"Dr\" or \"Dr.\", it is used as a designation for a person who has obtained a Doctorate (e.g. PhD). In many parts of the world it is also used by medical practitioners, regardless of whether or not they hold a doctoral-level degree.\n",
"In 1598 an Italian–English dictionary by John Florio was published. It was the first English dictionary to use quotations (\"illustrations\") to give meaning to the word; in none of these dictionaries so far were there any actual definitions of words. This was to change, to a small extent, in schoolmaster Robert Cawdrey's \"Table Alphabeticall\", published in 1604. Though it contained only 2,449 words, and no word beginning with the letters \"W\", \"X\", or \"Y\", this was the first monolingual English dictionary.\n"
] |
How was the iconography of the Confederacy reframed into something that's treated as honorable/worthy of obsession? | Civil War memory is something I write a lot about, so I'd point you to [this older answer of mine](_URL_0_) which focuses more on the evolution of Confederate statuary than the Lost Cause itself, but I think does speak well to your question, although I'm of course happy to do my best with any follow-ups you may have. | [
"Beginning in 2015 and accelerating in 2017, a national controversy grew over the prominent positions of monuments and memorials to the Confederacy in many public spaces across the United States, and particularly in the American South. In this context, the statues of Confederate notables along the university's South Mall that Coppini had designed for the Littlefield Fountain attracted increased public criticism.\n",
"Beginning in 2015 and accelerating in 2017, a national controversy grew over the prominent positions of monuments and memorials to the Confederacy in many public spaces across the United States, and particularly in the American South. In this context, the statues of Confederate notables along the university's South Mall that Coppini had designed for the Littlefield Fountain attracted increased public criticism.\n",
"Beginning in 2015 and accelerating in 2017, a national controversy grew over the prominent positions of monuments and memorials to the Confederacy in many public spaces across the United States, and particularly in the American South. In this context, the statues of Confederate notables along the university's South Mall that Coppini had designed for the Littlefield Fountain attracted increased public criticism.\n",
"Beginning in 2015 and accelerating in 2017, a national controversy grew over the prominent positions of monuments and memorials to the Confederacy in many public spaces across the United States, and particularly in the American South. In this context, the statues of Confederate notables along the university's South Mall that Coppini had designed for the Littlefield Fountain attracted increased public criticism.\n",
"Beginning in 2015 and accelerating in 2017, a national controversy grew over the prominent positions of monuments and memorials to the Confederacy in many public spaces across the United States, and particularly in the American South. In this context, the statues of Confederate notables along the university's South Mall that Coppini had designed for the Littlefield Fountain attracted increased public criticism.\n",
"Beginning in 2015 and accelerating in 2017, a national controversy grew over the prominent positions of monuments and memorials to the Confederacy in many public spaces across the United States, and particularly in the American South. In this context, the statues of Confederate notables along the university's South Mall that Coppini had designed for the Littlefield Fountain attracted increased public criticism.\n",
"Beginning in 2015 and accelerating in 2017, a national controversy grew over the prominent positions of monuments and memorials to the Confederacy in many public spaces across the United States, and particularly in the American South. In this context, the statues of Confederate notables along the university's South Mall that Coppini had designed for the Littlefield Fountain attracted increased public criticism, as did a dedication inscribed on a wall along the west edge of the fountain complex, which honored the Confederate cause along with American participation in World War I.\n"
] |
What happens to the body when your cortisol levels are constantly too high? | This is a really broad question, since excessive cortisol in the human body can have a lot of implications. I'll just talk about one of them.
One area of your brain that has a lot of cortisol receptors is the hippocampus. There is some evidence that excess cortisol can cause the hippocampus to be damaged in various ways. Individuals with excess cortisol have been shown to have smaller hippocampi, suggesting that certain cells called pyramidal cells in the hippocampus likely atrophy due to cortisol activity. Another idea is that cortisol in the hippocampus suppresses neurogenesis, or the formation of new neurons. Both of these are likely causes of depression, and SSRIs work to reverse both of these effects of excess cortisol in the hippocampus.
Extremely high cortisol over a short time can also impair memory. This is why individuals often can't remember times when they're extremely emotional.
| [
"Elevated levels of total cortisol can also be due to estrogen found in oral contraceptive pills that contain a mixture of estrogen and progesterone, leading to Pseudo-Cushing's syndrome. Estrogen can cause an increase of cortisol-binding globulin and thereby cause the total cortisol level to be elevated. However, the total free cortisol, which is the active hormone in the body, as measured by a 24-hour urine collection for urinary free cortisol, is normal.\n",
"Increased levels of catecholamines and cortisol can cause a hypermetabolic state that can last for years. This is associated with increased cardiac output, metabolism, a fast heart rate, and poor immune function.\n",
"Another study found that physical stress caused increased cortisol:DHEAS (dehydroepianodrosterone sulphate) molar ratios which may contribute to reduced immunity, especially in the elderly for whom cortisol:DHEAS ratios are already increased. This is because DHEAS levels decrease with age while cortisol levels do not. This high ratio was found to suppress the activity of neutrophils and raise susceptibility for infection.\n",
"In more specific studies looking at the link between cortisol levels and psychological phenomena, it has been found that chronic stressors such as life-threatening situations (example: diseases), depression, and social or economic hardship correlate with significantly higher cortisol levels. In situations where a subject undergoes induced anxiety, high cortisol levels correspond with experiencing more physiological symptoms of nervousness, such as increased heart rate, sweating, and skin conductance. Additionally, a negative correlation was discovered between baseline levels of cortisol and aggression. Salivary cortisol levels can thus provide insight into a number of other psychological processes.\n",
"Stress can cause high levels of the following hormones: norepinephrine, leptin, NPY, nitrite, ACTH and adrenomedullin. Elevated levels of adenosine, adrenaline, cortisol and dopamine in the blood can produce fatigue, depression, behavior changes, heart disease, weight problems, diabetes, and skin diseases. It also decreases the immune response, which can lead to heartburn and stomach ulcers.\n",
"Normal cortisol level can be explained by the strong negative feedback mechanism of cortisol on hypothalamus-pituitary axis system. That is, in the beginning, 17,20-lyase deficiency will block synthesis of sex steroid hormones, forcing the pathways to produce more cortisol. However, the initial excess of cortisol is rapidly corrected by negative feedback mechanism—high cortisol decreases secretion of adrenocorticotropic hormone (ACTH) from zona fasciculata of adrenal gland. Thus, there is no mineralocorticoid overproduction. Also, there is no adrenal hyperplasia.\n",
"Cortisol is a stress hormone secreted by the adrenal gland, which makes up part of the hypothalamic-pituitary-adrenal (HPA) axis. It is typically released at periods of high stress designed to help the individual cope with stressful situations. Cortisol secretion results in increased heart rate and blood pressure and the temporary shut down of metabolic processes such as digestion, reproduction, growth, and immunity as a means of conserving energy for the stress response. Chronic release of cortisol over extended periods of time caused by long-term high stress can result in: \n"
] |
how do electromagnetic pulses (emp) destroy electronics and is it possible to deploy it in bombs for warfare? | It is basically a very strong signal that is capable of frying weaker systems. It can be used for warfare, but its use is somewhat limited by the fact that military hardware is tough and most known systems are not big enough or thorough enough to take down civilian areas with any effectiveness. It would take a nuclear-sized blast to get anywhere, and by that point you're already nuking them. | [
"An electromagnetic pulse (EMP) is a burst of electromagnetic radiation. Nuclear explosions create a pulse of electromagnetic radiation called a nuclear EMP or NEMP. Such EMP interference is known to be generally disruptive or damaging to electronic equipment. If a single nuclear weapon \"designed to emit EMP were detonated 250 to 300 miles up over the middle of the country it would disable the electronics in the entire United States.\"\n",
"The pulse is powerful enough to cause moderately long metal objects (such as cables) to act as antennas and generate high voltages due to interactions with the electromagnetic pulse. These voltages can destroy unshielded electronics. There are no known biological effects of EMP. The ionized air also disrupts radio traffic that would normally bounce off the ionosphere.\n",
"Nuclear and large conventional explosions produce radio frequency energy. The characteristics of the EMP will vary with altitude and burst size. EMP-like effects are not always from open-air or space explosions; there has been work with controlled explosions for generating electrical pulse to drive lasers and railguns.\n",
"During the Iraq War, electromagnetic weapons, including high power microwaves, were used by the U.S. military to disrupt and destroy Iraqi electronic systems and may have been used for crowd control. Types and magnitudes of exposure to electromagnetic fields are unknown.\n",
"In the case of electromagnetic side-channel attacks, attackers are often looking at electromagnetic radiation emitted by computing devices, which are made up of circuits. Electronic circuits consist of semiconducting materials upon which billions of transistors are placed. When a computer performs computations, such as encryption, electricity running through the transistors create a magnetic field and electromagnetic waves are emitted.\n",
"During the Starfish Prime high-altitude nuclear test in 1962, an unexpected effect was produced which is called a nuclear electromagnetic pulse. This is an intense flash of electromagnetic energy produced by a rain of high energy electrons which in turn are produced by a nuclear bomb's gamma rays. This flash of energy can permanently destroy or disrupt electronic equipment if insufficiently shielded. It has been proposed to use this effect to disable an enemy's military and civilian infrastructure as an adjunct to other nuclear or conventional military operations against that enemy. Because the effect is produced by high altitude nuclear detonations, it can produce damage to electronics over a wide, even continental, geographical area.\n",
"An energetic EMP can temporarily upset or permanently damage electronic equipment by generating high voltage and high current surges; semiconductor components are particularly at risk. The effects of damage can range from imperceptible to the eye, to devices literally blowing apart. Cables, even if short, can act as antennas to transmit pulse energy to equipment.\n"
] |
why do headphones sound tinny until you put them on? | Bass waves dissipate over the shortest distance, while higher-pitched waves will still reach your ears. Your ears are best at picking up and discerning those higher-pitched sounds because they are most like the sounds you would normally be hearing. All of this is assuming that the headphone speakers are very small and produce a relatively low decibel level. | [
"The outer shells of in-ear headphones are made up of a variety of materials, such as plastic, aluminum, ceramic and other metal alloys. Because in-ear headphones engage the ear canal, they can be prone to sliding out, and they block out much environmental noise. Lack of sound from the environment can be a problem when sound is a necessary cue for safety or other reasons, as when walking, driving, or riding near or in vehicular traffic.\n",
"Supra-aural headphones or on-ear headphones have pads that press against the ears, rather than around them. They were commonly bundled with personal stereos during the 1980s. This type of headphone generally tends to be smaller and lighter than circumaural headphones, resulting in less attenuation of outside noise. Supra-aural headphones can also lead to discomfort due to the pressure on the ear as compared to circumaural headphones that sit around the ear. Comfort may vary due to the earcup material.\n",
"Active noise-cancelling headphones use a microphone, amplifier, and speaker to pick up, amplify, and play ambient noise in phase-reversed form; this to some extent cancels out unwanted noise from the environment without affecting the desired sound source, which is not picked up and reversed by the microphone. They require a power source, usually a battery, to drive their circuitry. Active noise cancelling headphones can attenuate ambient noise by 20 dB or more, but the active circuitry is mainly effective on constant sounds and at lower frequencies, rather than sharp sounds and voices. Some noise cancelling headphones are designed mainly to reduce low-frequency engine and travel noise in aircraft, trains, and automobiles, and are less effective in environments with other types of noise.\n",
"This model also suffers from a whine on the headphone and microphone jacks that are located on the left of the unit. This is because of shared space with the leftmost fan, and the spinning of said fan causes interference. There is no known fix than to otherwise use a USB, FireWire/1394 or PCMCIA-based audio device or card for sound output.\n",
"Open-back headphones have the back of the earcups open. This leaks more sound out of the headphone and also lets more ambient sounds into the headphone, but gives a more natural or speaker-like sound, due to including sounds from the environment.\n",
"Electrostatic and piezoelectric noise can also become an issue in exotic headphone systems, if the headphones have a relatively high input impedance compared to traditional speakers which have a nominal impedance of 8 Ohms. This is where a careful choice of insulating materials can make a difference. This type of noise is often perceived as snap, crackle and pop when mechanically manipulating or handling the headphone cord. It is often hard to tell, without actual measurements if the source of this noise is electronic or mechanical in nature.\n",
"These early headphones used moving iron drivers, with either single-ended or balanced armatures. The common single-ended type used voice coils wound around the poles of a permanent magnet, which were positioned close to a flexible steel diaphragm. The audio current through the coils varied the magnetic field of the magnet, exerting a varying force on the diaphragm, causing it to vibrate, creating sound waves. The requirement for high sensitivity meant that no damping was used, so the frequency response of the diaphragm had large peaks due to resonance, resulting in poor sound quality. These early models lacked padding, and were often uncomfortable to wear for long periods. Their impedance varied; headphones used in telegraph and telephone work had an impedance of 75 ohms. Those used with early wireless radio had more turns of finer wire to increase sensitivity. Impedance of 1000 to 2000 ohms was common, which suited both crystal sets and triode receivers. Some very sensitive headphones, such as those manufactured by Brandes around 1919, were commonly used for early radio work.\n"
] |
Were Serbs exceptionally effective in the war against Austria during WWI? | Serbia's army had experience from the [Balkan Wars](_URL_0_), unlike the Austrians who were quite green. Austrian troops were better equipped, but had far less patriotism due to the fact that most of them weren't Austrian, but Hungarian, Czech, Slovak, etc. The Serbians also could match the Austrians in terms of numbers, since the bulk of Austria's army was engaged with Russia for most of the war. The land itself isn't exactly a bunch of flat open plains, and favored the defender. All-in-all, it isn't a surprise that Serbia performed how they did. | [
"The Serbs beat back an Austro-Hungarian invasion in August, at the Battle of Cer. It marked the first Allied victory over the Central Powers in World War I. Potiorek was humiliated by the defeat and was determined to resume the assault against the Serbs. He was given permission in September to launch another invasion of Serbia provided that he \"[did not] risk anything that might lead to a further fiasco.\" Under pressure from the Russians to launch their own offensive and keep as many Austro-Hungarian troops as possible away from the Eastern Front, the Serbs invaded Bosnia in September with the help of Chetnik irregulars but were repulsed after a month of fighting in what came to be known as the Battle of the Drina. Bojović was wounded during the battle and was replaced by Živojin Mišić as commander of the Serbian 1st Army.\n",
"In late 1915, however, German generals were given control and invaded Serbia with Austrian and Bulgarian forces. The Serbian army hastily retreated west but only 70,000 made it through, and Serbia became an occupied land. Disease was rampant, but the Austrians were pragmatic and paid well for food supplies, so conditions were not harsh. Instead Austria tried to depoliticize Serbia, to minimize violence, and to integrate the country into the Empire. Nevertheless, Serbian nationalism remained defiant and many young men slipped out to help rebuild the Serbian army in exile.\n",
"The exact role played by Serbian officials in the assassination of Archduke Franz Ferdinand is still debated but despite complying with most of their demands, Austria-Hungary invaded on 28 July 1914. While Serbia successfully repulsed the Austro-Hungarian army in 1914, it was exhausted by the two Balkan Wars and unable to replace its losses of men and equipment. In 1915, Bulgaria joined the Central Powers and by the end of the year, a combined Bulgar-Austrian-German army occupied most of Serbia. Between 1914–1918, Serbia suffered the greatest proportional losses of any combatant, with over 25% of all those mobilised becoming casualties; including civilians and deaths from disease, over 1.2 million died, nearly 30% of the entire population.\n",
"Austria-Hungary viewed the irredentist movements of South Slavs, as promoted by Serbia, to be a threat to the unity of the nation. Following the assassination, Austria sought to inflict a military blow on Serbia to demonstrate strength and so Serbia would be more cautious about supporting Yugoslav nationalism. However, it was wary of the reaction of the Russian Empire, who were a major supporter of Serbia, so sought a guarantee from its ally Germany that it would support Austria in any conflict. Germany guaranteed its support, but urged Austria to attack quickly, while world sympathy for the murdered heir was high, in order to localize the war and avoid drawing in Russia. Some German leaders believed that growing Russian economic power would change the balance of power between the two nations, that a war was inevitable, and that Germany would be better off if a war happened soon. However, rather than a quick attack with available military forces, Austrian leaders deliberated into mid-July before deciding that it would give Serbia a harsh ultimatum on 23 July and would not attack without a full mobilisation of its army that could not be accomplished before 25 July 1914.\n",
"Austria invaded and fought the Serbian army at the Battle of Cer and Battle of Kolubara beginning on 12 August. Over the next two weeks, Austrian attacks were thrown back with heavy losses, which marked the first major Allied victories of the war and dashed Austro-Hungarian hopes of a swift victory. As a result, Austria had to keep sizeable forces on the Serbian front, weakening its efforts against Russia. Serbia's defeat of the Austro-Hungarian invasion of 1914 has been called one of the major upset victories of the twentieth century. The campaign saw the very first use of medical evacuation by the Serbian army in autumn of 1915 and anti-aircraft warfare in the spring of 1915 after an Austrian plane was shot down with ground-to-air fire.\n",
"The Balkan Wars strained the German/Austro-Hungarian alliance. The attitude of the German government to Austrian requests of support against Serbia was initially both divided and inconsistent. After the German Imperial War Council of 8 December 1912, it was clear that Germany was not ready to support Austria-Hungary in a war against Serbia and her likely allies.\n",
"The 28 June 1914 assassination of Austro-Hungarian heir presumptive Archduke Franz Ferdinand precipitated Austria-Hungary's declaration of war against Serbia. The conflict quickly attracted the involvement of all major European countries, pitting the Central Powers against the Entente coalition and starting World War I. After the entry of the Ottoman Empire into the war on the side of the Central Powers (November 1914), the decisive factor in the Balkans became the attitude of Bulgaria. Bulgaria occupied a strategically important position on the Serbian flank and its intervention on either side of the belligerents would be decisive. Bulgaria and Serbia had fought each continuously in the previous thirty years: following the Serbo-Bulgarian War of 1885 hostilities continued in the form of an undeclared war during the Macedonian Struggle. The area of north-western Macedonia then belonging to the Ottoman Empire became the arena of the ethnic violence between the ethnic Serb population represented by the Serbian Chetnik Organization and ethnic Bulgarians from the Internal Macedonian Revolutionary Organization (IMRO). IMRO also engaged in hostilities with ethnic Greeks and their supporters in the rest of Macedonia.\n"
] |
the d & d alignment system, particularly the distinction between "neutral good/evil" and "chaotic good/evil." | It helps to just view the alignments one axis at a time -- lawful, neutral, chaotic; good, neutral, evil.
Lawful means you will follow certain rules.
Chaotic means everything is random -- rules are made to be broken.
Neutral is somewhere in between these two; really, most people you meet in real life would fall in the neutral spectrum.
Good means you put others above yourself.
Evil means you willingly harm others, either for your own good or because you have been told to.
Neutral will generally not kill innocents, but certainly will not sacrifice themselves for others.
Neutral good would be someone who puts others above themselves, but isn't following a set pattern to it -- not helping others because their god said to, just because they want to. Neutral evil would be similar -- not killing just for the heck of it, but not killing just to obey a higher order.
Chaotic good is a character who is out for the greater good, but feels "the man" gets in the way, so (s)he will buck the rules constantly. Chaotic evil is one of those who just wants to watch the world burn.
Check the [Wikipedia](_URL_0_) page; it gives a pretty good overview of each of the nine types. A small code sketch of the two-axis grid follows this entry's context list. | [
"\"D&D\" 4th Edition, released in 2008, reduced the number of alignments to five: lawful good, good, evil, chaotic evil, and unaligned. In that edition, \"good\" replaced neutral good and did not encompass chaotic good; \"evil\" replaced neutral evil and did not encompass lawful evil; \"unaligned\" replaced true neutral and did not encompass lawful neutral and chaotic neutral.\n",
"Alignment is slightly more muddied than in other official settings. Evil beings of traditionally good races and good beings of traditionally evil races are encouraged; but alignment definition remains true to D&D standards, with good and evil retaining their meanings. However, the situation often arises in the campaign world that oppositely aligned characters will side with each other briefly if a threat looms over all, and also both good and evil characters will infiltrate each other's organizations for purposes of espionage.\n",
"The conflict of good versus evil is a common motif in \"D&D\" and other fantasy fiction. Although player characters can adventure for personal gain rather than from altruistic motives, it is generally assumed that the player characters will be opposed to evil and will tend to fight evil creatures.\n",
"The \"D&D\" alignment system is occasionally referenced as a system of moral classification in other contexts. \"Salon\" television critic Heather Havrilesky, while reviewing the HBO television series \"True Blood\", analyzed the program's characters in terms of \"D&D\" alignments and identified protagonist Sookie Stackhouse as chaotic good, her vampire boyfriend Bill Compton as lawful neutral, Eric Northman as lawful evil, and Lafayette Reynolds as chaotic neutral. In \"Hostiles and Calamities\", the 11th episode of season 7 of \"The Walking Dead\" television series, the character Eugene Porter makes a reference to the \"D&D\" alignment system when describing himself as \"...not good. I’m not lawful, neutral, or chaotic.\" The alignment chart Internet meme humorously categorizes various items in a three-by-three grid.\n",
"When the rules for Third Edition \"D&D\" were updated to version 3.5, the grimlock again appeared in the first \"Monster Manual\" source book, but its description, abilities and illustration were reprinted verbatim from Third Edition, with the only exception being that their Alignment was changed from \"Always neutral evil\" to \"Often neutral evil\".\n",
"\"AD&D 2nd Edition\", released in 1988, retained the two-axis system. In that edition, a character who performs too many actions outside their alignment can find their alignment changed, and is penalized by losing experience points, making it harder to reach the next level. \"D&D\" 3rd Edition, released in 2000, kept the same alignment system.\n",
"The 1977 release of the \"Dungeons & Dragons Basic Set\" introduced a second axis of good, implying altruism and respect for life, vs evil, implying selfishness and no respect for life. As with the law-vs-chaos axis, a neutral position exists between the extremes. Characters and creatures could be lawful and evil at the same time (such as a tyrant), or chaotic but good (such as Robin Hood).\n"
] |
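The two-axis view in the answer above can be made concrete with a tiny illustration. The following is a minimal, hypothetical Python sketch: the axis labels and the special-casing of the centre cell as "true neutral" follow common usage, not any official rules text.

```python
from itertools import product

# The two independent axes the answer describes.
LAW_AXIS = ["lawful", "neutral", "chaotic"]   # attitude toward rules and order
MORAL_AXIS = ["good", "neutral", "evil"]      # attitude toward other people

def alignment_name(law: str, moral: str) -> str:
    """Combine one value from each axis into the familiar alignment name."""
    if law == "neutral" and moral == "neutral":
        return "true neutral"  # the centre of the grid is usually given this name
    return f"{law} {moral}"

# Crossing the two axes yields the classic 3x3 grid of nine alignments.
for law, moral in product(LAW_AXIS, MORAL_AXIS):
    print(alignment_name(law, moral))
```

Reading the output row by row reproduces the nine combinations the answer walks through, which is exactly the "one axis at a time" view it recommends.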
why can my dog eat shit and be fine, but not grapes or chocolate? | Because dog shit doesn't contain a compound (Theobromine) that is toxic to dogs. It *may* contain pathogens that make them sick, though. | [
"Dogs have around 1,700 taste buds compared to humans with around 9,000. The sweet taste buds in dogs respond to a chemical called furaneol which is found in many fruits and in tomatoes. It appears that dogs do like this flavor and it probably evolved because in a natural environment dogs frequently supplement their diet of small animals with whatever fruits happen to be available. Because of dogs' dislike of bitter tastes, various sprays, and gels have been designed to keep dogs from chewing on furniture or other objects. Dogs also have taste buds that are tuned for water, which is something they share with other carnivores but is not found in humans. This taste sense is found at the tip of the dog's tongue, which is the part of the tongue that he curls to lap water. This area responds to water at all times, but when the dog has eaten salty or sugary foods the sensitivity to the taste of water increases. It is proposed that this ability to taste water evolved as a way for the body to keep internal fluids in balance after the animal has eaten things that will either result in more urine being passed or will require more water to adequately process. It certainly appears that when these special water taste buds are active, dogs seem to get an extra pleasure out of drinking water, and will drink copious amounts of it.\n",
"While chocolate contains the chemical compound theobromine in levels that are toxic to some mammals, carob contains none, and it also has no caffeine, so it is sometimes used to make chocolate-like treats for dogs. Carob pod meal is also used as an energy-rich feed for livestock, particularly for ruminants, though its high tannin content may limit this use.\n",
"The consumption of grapes and raisins presents a potential health threat to dogs. Their toxicity to dogs can cause the animal to develop acute renal failure (the sudden development of kidney failure) with anuria (a lack of urine production) and may be fatal.\n",
"BULLET::::- Chocolate is a common cause of poisoning in dogs. The toxic principles in chocolate are theobromine and caffeine. Baker's Chocolate is the most dangerous form since it contains higher concentrations of these drugs, followed by semi-sweet, dark, and then milk chocolate. Signs include vomiting, diarrhea, tremors, difficulty walking, seizures, and heart problems.\n",
"Grapes and raisins can cause acute kidney failure in dogs (see also grape and raisin toxicity in dogs). The exact mechanism is unknown, nor is there any means to determine the susceptibility of an individual dog. While as little as one raisin can be toxic to a susceptible dog, some other dogs have eaten as much as a pound of grapes or raisins at a time without ill effects. The affected dog usually vomits a few hours after consumption and begins showing signs of kidney failure three to five days later. A mycotoxin is suspected to be involved, but one has not been found in grapes or raisins ingested by affected dogs. The reason some dogs develop kidney failure following ingestion of grapes and raisins is not known. The most common pathological finding is proximal renal tubular necrosis.\n",
"The consumption of grapes and raisins presents a potential health threat to dogs. Their toxicity to dogs can cause the animal to develop acute kidney injury (the sudden development of kidney failure) with anuria (a lack of urine production). The phenomenon was first identified by the Animal Poison Control Center (APCC), run by the American Society for the Prevention of Cruelty to Animals (ASPCA). Approximately 140 cases were seen by the APCC in the one year from April 2003 to April 2004, with 50 developing symptoms and seven dying.\n",
"A typical dog will normally experience great intestinal distress after eating less than of dark chocolate, but will not necessarily experience bradycardia or tachycardia unless it eats at least a half a kilogram (1.1 lb) of milk chocolate. Dark chocolate has 2 to 5 times more theobromine and thus is more dangerous to dogs. According to the Merck Veterinary Manual, approximately 1.3 grams of baker's chocolate per kilogram of a dog's body weight (0.02 oz/lb) is sufficient to cause symptoms of toxicity. For example, a typical baker's chocolate bar would be enough to bring about symptoms in a dog. Of course, baking chocolate is rarely consumed directly due to its unpleasant taste, but other dark chocolates' canine toxicities may be extrapolated based on this figure. Given access, dogs frequently consume chocolate at toxic levels because they like the taste of chocolate products and are capable of finding and eating quantities much larger than typical human servings. There are reports that mulch made from cacao bean shells is dangerous to dogs and livestock.\n"
] |
how credits were added to film | Assuming you are talking about the older days before computers, credits were often printed onto a sheet which was attached to two rollers, kinda like a treadmill. Then they could just film it.
They could also layer films over one another to superimpose credits on a live action scene. | [
"Then, early in the 1930s, the more progressive motion picture studios started to change their approach in presenting their screen credits. The major studios took on the challenge of improving the way they introduced their movies. They made the decision to present a more complete list of credits to go with a higher quality of artwork to be used in their screen credits.\n",
"Credits for motion pictures often include the name of any locales (i.e., cities, states, and countries if outside of the US) used to film scenes, as well as any organizations not related to the production (e.g., schools, government entities, military bases, etc.) that played a role in the filming.\n",
"Film credits were almost universally unheard of in 1910 and 1911, because the film's actors were unidentified fans would often come up with their own names for prominent actors. Film studios were reluctant to credit the actors because other studios might hire or the actors could demand a higher wage. Due to public demand and interest, studios began adding credits to their film lists in later productions.\n",
"Unlike today's independently produced movies where on-screen credits are given to any and all participants, the sparse credits of \"big-studio\" films of the post-war period were usually limited to famous actors, music composers and studio executives. Even so, Comstock's work for Warner Brothers was notable enough to garner credits for many of his movies.\n",
"Up until the 1970s, closing credits for films usually listed only a reprise of the cast members with their roles identified, or even simply just said \"The End,\" requiring opening credits to normally contain the details. For instance, the title sequence of the 1968 film \"Oliver!\" runs for about three-and-a-half minutes, and while not listing the complete cast, does list nearly all of its technical credits at the beginning of the film, all set against a background of what appear to be, but in fact are not, authentic 19th-century engravings of typical London life. The only credit at film's end is a listing of most of the cast, including cast members not listed at the beginning. These are set against a replay of some of the \"'Consider Yourself\" sequence.\n",
"The \"credits,\" or \"end credits,\" is a list that gives credit to the people involved in the production of a film. Films from before the 1970s usually start a film with credits, often ending with only a title card, saying \"The End\" or some equivalent, often an equivalent that depends on the language of the production. From then onward, a film's credits usually appear at the end of most films. However, films with credits that end a film often repeat some credits at or near the start of a film and therefore appear twice, such as that film's acting leads, while less frequently some appearing near or at the beginning only appear there, not at the end, which often happens to the director's credit. The credits appearing at or near the beginning of a film are usually called \"titles\" or \"beginning titles.\" A post-credits scene is a scene shown after the end of the credits. \"Ferris Bueller's Day Off\" has a post-credit scene in which Ferris tells the audience that the film is over and they should go home.\n",
"In the creative arts, credits are an acknowledgement of those who participated in the production. They are often shown at the end of movies and on CD jackets. In film, video, television, theater, etc., \"credits\" means the list of actors and behind-the-scenes staff who contributed to the production.\n"
] |
If light has properties of waves, would it be possible to phase-cancel two laser beams? If yes, what would happen? If no, why not? | Yes. This is called interference and is a property of all waves. The prime apparatus that demonstrates laser beam interference is a Michelson interferometer. Basically, a laser beam is split into two different laser beams (so that they are coherent, because significant interference requires coherence), which travel along different paths, are recombined using mirrors, and then hit a camera or a screen. One of the paths is a different length or passes through a different material so that one of the laser beams acquires a phase lag. On the screen, you get a series of dark and light rings (called an interference pattern). The dark rings are where the two laser beams are out of phase and cancel each other (called destructive interference). However, energy is not destroyed. Rather, energy is redirected to the areas with constructive interference (the bright rings).
Another approach is the double-slit setup. You send a single laser beam through two slits, which turns it into two coherent laser beams. These laser beams interfere, producing a pattern of light and dark bars. The two-beam intensity formula behind both setups is sketched after this entry's context list. | [
"It is possible to arrange multiple beams of laser light such that destructive quantum interference suppresses the vacuum fluctuations. Such a squeezed vacuum state involves negative energy. The repetitive waveform of light leads to alternating regions of positive and negative energy.\n",
"For example, in the case of a hologram, illuminating the grating with just the reference beam causes the reconstruction of the original signal beam. When two coherent laser beams (usually obtained by splitting a laser beam by the use of a beamsplitter into two, and then suitably redirecting by mirrors) cross inside a photorefractive crystal, the resultant refractive index grating diffracts the laser beams. As a result, one beam gains energy and becomes more intense at the expense of light intensity reduction of the other. This phenomenon is an example of two-wave mixing. In this configuration, Bragg diffraction condition is automatically satisfied.\n",
"A laser beam generally approximates much more closely to a monochromatic source, and it is much more straightforward to generate interference fringes using a laser. The ease with which interference fringes can be observed with a laser beam can sometimes cause problems in that stray reflections may give spurious interference fringes which can result in errors.\n",
"For interference lithography to be successful, coherence requirements must be met. First, a spatially coherent light source must be used. This is effectively a point light source in combination with a collimating lens. A laser or synchrotron beam are also often used directly without additional collimation. The spatial coherence guarantees a uniform wavefront prior to beam splitting. Second, it is preferred to use a monochromatic or temporally coherent light source. This is readily achieved with a laser but broadband sources would require a filter. The monochromatic requirement can be lifted if a diffraction grating is used as a beam splitter, since different wavelengths would diffract into different angles but eventually recombine anyway. Even in this case, spatial coherence and normal incidence would still be required.\n",
"If the gain (amplification) in the medium is larger than the resonator losses, then the power of the recirculating light can rise exponentially. But each stimulated emission event returns an atom from its excited state to the ground state, reducing the gain of the medium. With increasing beam power the net gain (gain minus loss) reduces to unity and the gain medium is said to be saturated. In a continuous wave (CW) laser, the balance of pump power against gain saturation and cavity losses produces an equilibrium value of the laser power inside the cavity; this equilibrium determines the operating point of the laser. If the applied pump power is too small, the gain will never be sufficient to overcome the cavity losses, and laser light will not be produced. The minimum pump power needed to begin laser action is called the \"lasing threshold\". The gain medium will amplify any photons passing through it, regardless of direction; but only the photons in a spatial mode supported by the resonator will pass more than once through the medium and receive substantial amplification.\n",
"If instead of oscillating independently, each mode operates with a fixed phase between it and the other modes, the laser output behaves quite differently. Instead of a random or constant output intensity, the modes of the laser will periodically all constructively interfere with one another, producing an intense burst or pulse of light. Such a laser is said to be 'mode-locked' or 'phase-locked'. These pulses occur separated in time by , where \"τ\" is the time taken for the light to make exactly one round trip of the laser cavity. This time corresponds to a frequency exactly equal to the mode spacing of the laser, .\n",
"It is possible, using nonlinear optical processes, to exactly reverse the propagation direction and phase variation of a beam of light. The reversed beam is called a \"conjugate\" beam, and thus the technique is known as optical phase conjugation (also called \"time reversal\", \"wavefront reversal\" and is significantly different from \"retroreflection\").\n"
] |
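As a worked complement to the answer in this entry, here is the standard two-beam interference formula in textbook notation (a minimal sketch; the symbols are mine, not taken from the answer), which makes the energy bookkeeping explicit:

```latex
% Intensity of two superposed coherent beams with phase difference \Delta\phi:
I(\Delta\phi) = I_1 + I_2 + 2\sqrt{I_1 I_2}\,\cos\Delta\phi
% For two equal beams, I_1 = I_2 = I_0:
I(\Delta\phi) = 2 I_0 \left( 1 + \cos\Delta\phi \right)
% Dark fringe:   \Delta\phi = \pi \;\Rightarrow\; I = 0      (destructive)
% Bright fringe: \Delta\phi = 0   \;\Rightarrow\; I = 4 I_0  (constructive)
% Averaged over the fringe pattern, \langle I \rangle = 2 I_0, so no energy is
% destroyed -- it is only redistributed from the dark rings to the bright ones.
```

The same formula covers both the Michelson and the double-slit cases; only the way the phase difference is produced changes.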
Do all terrestrial bodies which experience a planetary wobble and orbit a star have four seasons? | The Earth's wobble (precession of the equinoxes) doesn't cause the seasons. The seasons are due to the axial tilt and Earth's orbit around the Sun.
"Seasons" isn't an astronomical term. Any planet whose axis of rotation is tilted with respect to its orbital plane will have solstices and equinoxes. If you wanted to, you could define four seasons between those solstices and equinoxes. That's not quite the same thing as "the four seasons we experience", though, since the seasons (in terms of weather and biosphere) don't have to follow the equinoxes and solstices. Also, a planet with only very slight axial tilt will have only very slight changes in insolation throughout the year. | [
"The planets' orbits are chaotic over longer timescales, in such a way that the whole Solar System possesses a Lyapunov time in the range of 2–230 million years. In all cases this means that the position of a planet along its orbit ultimately becomes impossible to predict with any certainty (so, for example, the timing of winter and summer become uncertain), but in some cases the orbits themselves may change dramatically. Such chaos manifests most strongly as changes in eccentricity, with some planets' orbits becoming significantly more—or less—elliptical.\n",
"The orbits of many of the minor bodies of the Solar System, such as comets, are often heavily perturbed, particularly by the gravitational fields of the gas giants. While many of these perturbations are periodic, others are not, and these in particular may represent aspects of chaotic motion. For example, in April 1996, Jupiter's gravitational influence caused the period of Comet Hale–Bopp's orbit to decrease from 4,206 to 2,380 years, a change that will not revert on any periodic basis.\n",
"The more distant planets retrograde more frequently, as they do not move as much in their orbits while Earth completes an orbit itself. The center of the retrograde motion occurs when the body is exactly opposite the sun, and therefore high in the ecliptic at local midnight. The retrogradation of a hypothetical extremely distant (and nearly non-moving) planet would take place during a half-year, with the planet's apparent yearly motion being reduced to a parallax ellipse.\n",
"A planetary object that orbits a star with high orbital eccentricity may spend only some of its year in the CHZ and experience a large variation in temperature and atmospheric pressure. This would result in dramatic seasonal phase shifts where liquid water may exist only intermittently. It is possible that subsurface habitats could be insulated from such changes and that extremophiles on or near the surface might survive through adaptions such as hibernation (cryptobiosis) and/or hyperthermostability. Tardigrades, for example, can survive in a dehydrated state temperatures between and . Life on a planetary object orbiting outside CHZ might hibernate on the cold side as the planet approaches the apastron where the planet is coolest and become active on approach to the periastron when the planet is sufficiently warm.\n",
"In all cases this means that the position of a planet along its orbit ultimately becomes impossible to predict with any certainty (so, for example, the timing of winter and summer become uncertain), but in some cases the orbits themselves may change dramatically. Such chaos manifests most strongly as changes in eccentricity, with some planets' orbits becoming significantly more—or less—elliptical.\n",
"Like the Pythagoreans Hicetas and Ecphantus, Heraclides proposed that the apparent daily motion of the stars was created by the rotation of the Earth on its axis once a day. This view contradicted the accepted Aristotelian model of the universe, which said that the Earth was fixed and that the stars and planets in their respective spheres might also be fixed. Simplicius says that Heraclides proposed that the irregular movements of the planets can be explained if the Earth moves while the Sun stays still.\n",
"Over time, the orbits of planets will decay due to gravitational radiation, or planets will be ejected from their local systems by gravitational perturbations caused by encounters with another stellar remnant.\n"
] |
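To put a number on the answer's point that a small axial tilt gives only small seasonal changes, here is a minimal sketch of the standard relation between tilt and the latitude of the subsolar point (assuming a circular orbit traversed at a uniform rate; the symbols are my own labels, not taken from the answer):

```latex
% Solar declination \delta (latitude of the subsolar point) over one orbit,
% for a planet with axial tilt \varepsilon and orbital period P:
\sin\delta(t) = \sin\varepsilon \,\sin\!\left( \frac{2\pi t}{P} \right)
% The subsolar point therefore swings between latitudes -\varepsilon and
% +\varepsilon over the year: solstices occur at the extremes of \delta,
% equinoxes where \delta = 0. A small tilt \varepsilon means only a small
% seasonal shift in where sunlight falls most directly, hence weak "seasons".
```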
why do japanese swords only have one edge? | Japanese swordfighting styles are suited to slashing with a sharp, curved edge, which worked against the style of armour (or lack of armour) popular at the time. European medieval-style swords are double-edged with a point to penetrate the heavy, plated armour that was used. The armour is thick, but its many joints and separations can be penetrated by a point and by heavier, "blunt" strikes.
"Over time, however, the curved single-edged sword became so dominant a style in Japan that \"tou\" and \"ken\" came to be used interchangeably to refer to swords in Japan and by others to refer to Japanese swords. For example, the Japanese typically refer to Japanese swords as 日本刀 \"nihontō\" (\"Japanese \"tou\"\" i.e. \"Japanese (single-edged) blade\"), while the character \"ken\" 剣 is used in such terms as kendo and kenjutsu. Modern formal usage often uses both characters in referring to a collection of swords, for example, in naming the Japanese Sword Museum 日本美術刀剣博物館.\n",
"Unlike western knives, Japanese knives are often only single ground, meaning that they are sharpened so that only one side holds the cutting edge. As shown in the image, some Japanese knives are angled from both sides, while others are angled only from one side with the other side of the blade being flat. It was traditionally believed that a single-angled blade cuts better and makes cleaner cuts, though requiring more skill to use than a blade with a double-beveled edge. Generally, the right-hand side of the blade is angled, as most people use the knife with their right hand. Left-handed models are rare and must be specially ordered and custom made.\n",
"Japanese swords were carried in several different ways, varying throughout Japanese history. The style most commonly seen in \"samurai\" movies is called \"buke-zukuri\", with the katana (and \"wakizashi\", if also present) carried edge up, with the sheath thrust through the \"obi\" (sash).\n",
"The legitimate Japanese sword is made from Japanese steel \"Tamahagane\". The most common lamination method the Japanese sword blade is formed from is a combination of two different steels: a harder outer jacket of steel wrapped around a softer inner core of steel. This creates a blade which has a hard, razor sharp cutting edge with the ability to absorb shock in a way which reduces the possibility of the blade breaking when used in combat. The \"hadagane\", for the outer skin of the blade, is produced by heating a block of raw steel, which is then hammered out into a bar, and the flexible back portion. This is then cooled and broken up into smaller blocks which are checked for further impurities and then reassembled and reforged. During this process the billet of steel is heated and hammered, split and folded back upon itself many times and re-welded to create a complex structure of many thousands of layers. Each different steel is folded differently, in order to provide the necessary strength and flexibility to the different steels. The precise way in which the steel is folded, hammered and re-welded determines the distinctive grain pattern of the blade, the \"jihada\", (also called \"jigane\" when referring to the actual surface of the steel blade) a feature which is indicative of the period, place of manufacture and actual maker of the blade. The practice of folding also ensures a somewhat more homogeneous product, with the carbon in the steel being evenly distributed and the steel having no voids that could lead to fractures and failure of the blade in combat.\n",
"A , literally translating into \"small or short \"tachi\" (sword)\", is one of the traditionally made Japanese swords (\"nihontō\") used by the samurai class of feudal Japan. Kodachi are from the early Kamakura period (1185–1333) and are in the shape of a tachi. Kodachi are mounted in tachi style but with a length of less than 60 cm. They are often confused with wakizashi, due to their length and handling techniques. However, their construction is what sets the two apart, as kodachi are a set length while wakizashi are forged to complement the wielder's height or the length of their katana. As a result, the kodachi was too short to be called a sword properly but was also too long to be considered a dagger, thus it is widely considered a primary short sword, unlike the tantō or the Wakizashi which would act as a secondary weapon that was used alongside a longer blade.\n",
"Japanese swords were often forged with different profiles, different blade thicknesses, and varying amounts of grind. \"Wakizashi\", for instance, were not simply scaled-down versions of \"katana\"; they were often forged in \"hira-zukuri\" or other such forms which were very rare on other swords.\n",
"The sword has long held a significance in Japanese culture from the reverence and care that the samurai placed in their weapons. The earliest swords in Japan were straight, based on early Chinese \"jian\". Curved blades became more common at the end of the 8th century, with the importation of the curved forging techniques of that time. The shape was more efficient when fighting from horseback. Japanese swordsmanship is primarily two-handed wherein the front hand pushes down and the back hand pulls up while delivering a basic vertical cut. The samurai often carried two swords, the longer \"katana\" and the shorter \"wakizashi\", and these were normally wielded individually, though use of both as a pair did occur.\n"
] |
how does professional poker work? | I am not a professional poker player, but I do know three pros personally. They are not big name pros, but they do earn a modest living (think mid-five figures) playing poker.
One of them got his start by making the final table of a large multi-table tournament at a casino with a smallish ($200 or so) entry fee. He then ran that 25k up quickly and has settled into a routine playing $2-$5 hold'em, 2-5 Pot Limit Omaha, and slightly larger games when they are available at his home casino.
The other one saved his money working an 8-5 job until he had a decent amount of money to make a go at it (he said it was $10,000) and then ran that money up playing similar limits live and online.
They also invest their poker profits in other players that they know are above average.
You mention elimination in your question. I think you may be referring to tournament poker, which is only one of many, many forms of poker. Tournament poker is definitely one way to make money when playing poker professionally, but the pros that I am aware of make most of their money playing "cash" or "ring" games where you sit at a table with set limits on starting cash (typically at least 100 times the small and big blinds) and play against other players. So a 2-5 table might have a $200 minimum requirement and a $500 maximum, though many casinos do raise the maximum requirement to as much as twice that. | [
"Poker is a popular card game that combines elements of chance and strategy. There are various styles of poker, all of which share an objective of presenting the least probable or highest-scoring hand. A poker hand is usually a configuration of five cards depending on the variant, either held entirely by a player or drawn partly from a number of shared, community cards. Players bet on their hands in a number of rounds as cards are drawn, employing various mathematical and intuitive strategies in an attempt to better opponents.\n",
"Poker is a family of gambling games in which players bet into a pool, called the pot, value of which changes as the game progresses that the value of the hand they carry will beat all others according to the ranking system. Variants largely differ on how cards are dealt and the methods by which players can improve a hand. For many reasons, including its age and its popularity among Western militaries, it is one of the most universally known card games in existence.\n",
"Poker is a family of card games that combines gambling, strategy, and skill. All poker variants involve betting as an intrinsic part of play, and determine the winner of each hand according to the combinations of players' cards, at least some of which remain hidden until the end of the hand. Poker games vary in the number of cards dealt, the number of shared or \"community\" cards, the number of cards that remain hidden, and the betting procedures.\n",
"A poker run is an organized event in which participants, usually using motorcycles, all-terrain vehicles, boats, snowmobiles, horses, on foot or other means of transportation, must visit five to seven checkpoints, drawing a playing card at each one. The object is to have the best poker hand at the end of the run. Having the best hand and winning is purely a matter of chance. The event has a time limit, however the individual participants are not timed.\n",
"All casinos and most home games play poker by what are called table stakes rules, which state that each player starts each deal with a certain stake, and plays that deal with that stake. A player may not remove money from the table or add money from their pocket during the play of a hand. In essence, table stakes rules creates a maximum and a minimum buy-in amount for cash game poker as well as rules for adding and removing the stake from play. A player also may not take a portion of their money or stake off the table, unless they opt to leave the game and remove their entire stake from play. Players are not allowed to hide or misrepresent the amount of their stake from other players and must truthfully disclose the amount when asked.\n",
"In the poker game of Texas hold 'em, a starting hand consists of two \"hole cards\", which belong solely to the player and remain hidden from the other players. Five community cards are also dealt into play. Betting begins before any of the community cards are exposed, and continues throughout the hand. The player's \"playing hand\", which will be compared against that of each competing player, is the best 5-card poker hand available from his two hole cards and the five community cards. Unless otherwise specified, here the term \"hand\" applies to the player's two hole cards, or \"starting hand\".\n",
"The game of poker involves not only an understanding of probability, but also the competence of reading and analyzing the body language of the opponents. A key component of poker is to be able to \"cheat\" the opponents. To spot these cheats, players must have the ability to spot the individual \"ticks\" of their opponents. Players also have to look out for signs that an opponent is doing well.\n"
] |
how do completely torn ligaments such as atfl heal? | It's fairly complicated, but the simple answer is that cells communicate. A cell can tell other cells where it is, what type of cell it is, and if it's in some kind of "distress". There is also a 3-step response when tissue tears, and the first step is basically inflammation. In this step, a ton of different cell types (tissue, stem, blood, immune, etc.) rush to the site of trauma and they all have different jobs. This is where a lot of communicating occurs, and your body is essentially trying to figure out what happened and how it can best be fixed.
Cells that make up what’s left of the ligament will communicate, and with the help of other cell types, will slowly undergo mitosis and other cellular processes to repair the ligament. | [
"Ulnar collateral ligament reconstruction, also known as Tommy John surgery (TJS), is a surgical graft procedure where the ulnar collateral ligament in the medial elbow is replaced with either a tendon from elsewhere in the patient's body, or with one from a dead donor. The procedure is common among collegiate and professional athletes in several sports, particularly in baseball.\n",
"The ligament is an uncalcified elastic structure comprised in its most minimal state of two layers: a lamellar layer and a fibrous layer. The lamellar layer consists entirely of organic material (a protein and collagen matrix), is generally brown in color, and is elastic in response to both compressional and tensional stresses. The fibrous layer is made of aragonite fibers and organic material, is lighter in color and often iridescent, and is elastic only under compressional stress. The protein responsible for the elasticity of the ligament is abductin, which has enormous elastic resiliency: this resiliency is what causes the valves of the bivalve mollusk to open when the adductor muscles relax.\n",
"Ligaments attach bone to bone or bone to tendon, and are vital in stabilizing joints as well as supporting structures. They are made up of fibrous material that is generally quite strong. Due to their relatively poor blood supply, ligament injuries generally take a long time to heal.\n",
"A slow and chronic deterioration of the ulnar collateral ligament can be due to repetitive stress acting on the ulna. At first, pain can be bearable and can worsen to an extent where it can terminate an athlete’s career. The repetitive stress placed on the ulna causes micro tears in the ligament resulting in the loss of structural integrity over time. The acute rupture is less common compared to the slow deterioration injury. The acute rupture occurs in collisions when the elbow is in flexion such as that in a wrestling match or a tackle in football. The ulnar collateral ligament distributes over fifty percent of the medial support of the elbow. This can result in an ulnar collateral ligament injury or a dislocated elbow causing severe damage to the elbow and the radioulnar joints.\n",
"Ligaments attach bone to bone, and are vital in stabilizing joints as well as supporting structures. They are made up of fibrous material that is generally quite strong. Due to their relatively poor blood supply, ligament injuries generally take a long time to heal.\n",
"Repair of a complete, full-thickness tear involves tissue suture. The method currently in favor is to place an anchor in the bone at the natural attachment site, with resuture of torn tendon to the anchor. If tissue quality is poor, mesh (collagen, Artelon, or other degradable material) may be used to reinforce the repair. Repair can be performed through an open incision, again requiring detachment of a portion of the deltoid, while a mini-open technique approaches the tear through a deltoid-splitting approach. The latter may cause less injury to muscle and produce better results. Contemporary techniques now use an all arthroscopic approach.\n",
"The ligament is sometimes described as consisting of two marginal bands and a thinner intervening portion, the two bands being attached respectively to the apex and the base of the coracoid process, and joining together at the acromion.\n"
] |
How do you continue studying history after graduation? | Keep up with the big journals, and go to some of the big conferences. You'll stay up-to-date with the most recent research, and get to keep interacting with people who hold a similar academic interest.
Once you have your JD, there's always the option of (potentially) writing academically on classical law on the side. Additionally, if you have the option for electives, take some on ancient law if they're available to you.
Hope this helps a little. Happy reading! | [
"Initially, graduate students usually rotate through the laboratories of several faculty researchers, after which the student commits to joining a particular laboratory for the remainder of his or her education. The remaining time is spent conducting original research under the direction of the principal investigator to complete and publish a dissertation. Unlike undergraduate and professional schools, there is no set time period for graduate education. Students graduate once a thesis project of significant scope to justify the writing of their dissertation has been completed, a point that is determined by the student's principal investigator as well as his or her faculty advisory committee. The average time to graduation can vary between institutions, but most programs average around 5–6 years.\n",
"Courses required to be taken in order to graduate are four years of English, three years of mathematics, two years of United States history, one year of world history/geography; three years of science; one year of fine, practical and/or performing arts, 1/2 year of digital technology, 1/2 year of financial literacy, one year of a business, life science, or vocational course, two years of a World Language, and four years of physical education/health.\n",
"Upon graduation, graduates will receive one of the following degrees: Master of Law (Politics & International Relations, Law & Society); Master of Economics (Economics & Management); Master of History (History & Archaeology); Master of Literature (Literature and Culture); or Master of Philosophy in China Studies (Philosophy & Religion)The first cohort of scholars have taken a range of paths. Roughly 30% continued to Ph.D. level studies at esteemed universities, while others are employed by Goldman Sachs, McKinsey & Company, Google, J.P. Morgan & Co., the Associated Press, the Boston Consulting Group, General Electric, HNA Group, NEO blockchain, Bank of Korea, the Chinese Ministry of Commerce, and more. All Yenching Scholars write a Master's thesis under the guidance of an adviser and defend it orally before an academic committee. In addition to a fully funded scholarship, scholars also receive a monthly stipend of $500 and round-trip airfare.\n",
"To graduate, all students complete four years each of Upper School-level English, Mathematics, Science, and History and three years of a single foreign language. Students are also required to take Music and Arts Appreciation, Logic and Rhetoric, three years of Theology, one year of Philosophy and four semesters of Physical Education.\n",
"Post graduation, students have pursued professional careers in law, medicine, nursing, pharmacy, business, and others. Graduates pursued their advanced education at some of the most recognized universities across the nation including Harvard, Columbia, Cornell, Princeton, Yale, New York University, Yeshiva University, SUNY colleges, CUNY colleges, etc. Most graduates spend a year studying in Israel.\n",
"Courses required to be taken in order to graduate are 4 years of English, 3 years of mathematics, 2 years of United States history, 1 year of world history/geography; 3 years of science; 1 year of fine, practical and/or performing arts, 1 year of digital technology, 1 year of business, life science, vocational course, 2 years of a World Language, 4 years of physical education/health, and 1/2 year of career exploration or development. All students must pass the State High School Proficiency Assessment called HSPA to graduate.\n",
"The first academic year began in 2010/2011 when 40 students enrolled in the first year of the study program of history. In October 2011, the University started its programs in psychology and sociology. First graduates of the undergraduate study program in history were promoted in 2014.\n"
] |
Aside from the obvious (algebra, chess, etc.), how did Western science benefit from encounters with Islam and the Middle East during the Crusades? | In my understanding, that old idea of information being transmitted through the Christian East has been rather debunked. Most of the things that the Islamic World transmitted to the West came through Spain, not Syria and Palestine. The eastern contacts were more important for economic reasons, moving goods into the Mediterranean that originated in the Far East and the Middle East.
"Medieval Islam's receptiveness to new ideas and heritages helped it make major advances in medicine during this time, adding to earlier medical ideas and techniques, expanding the development of the health sciences and corresponding institutions, and advancing medical knowledge in areas such as surgery and understanding of the human body, although many Western scholars have not fully acknowledged its influence (independent of Roman and Greek influence) on the development of medicine.\n",
"During the Middle Ages, there was frequently an exchange of works between Byzantine and Islamic science. The Byzantine Empire initially provided the medieval Islamic world with Ancient and early Medieval Greek texts on astronomy, mathematics and philosophy for translation into Arabic as the Byzantine Empire was the leading center of scientific scholarship in the region at the beginning of the Middle Ages. Later as the Caliphate and other medieval Islamic cultures became the leading centers of scientific knowledge, Byzantine scientists such as Gregory Choniades, who had visited the famous Maragheh observatory, translated books on Islamic astronomy, mathematics and science into Medieval Greek, including for example the works of Ja'far ibn Muhammad Abu Ma'shar al-Balkhi, Ibn Yunus, Al-Khazini (who was of Byzantine Greek descent but raised in a Persian culture), Muhammad ibn Mūsā al-Khwārizmī and Nasīr al-Dīn al-Tūsī (such as the \"Zij-i Ilkhani\" and other Zij treatises) among others.\n",
"During the Middle Ages, there was frequently an exchange of works between Byzantine and Islamic science. The Byzantine Empire initially provided the medieval Islamic world with Ancient and early Medieval Greek texts on astronomy, mathematics and philosophy for translation into Arabic as the Byzantine Empire was the leading center of scientific scholarship in the region at the beginning of the Middle Ages. Later as the Caliphate and other medieval Islamic cultures became the leading centers of scientific knowledge, Byzantine scientists such as Gregory Choniades, who had visited the famous Maragheh observatory, translated books on Islamic astronomy, mathematics and science into Medieval Greek, including for example the works of Ja'far ibn Muhammad Abu Ma'shar al-Balkhi, Ibn Yunus, Al-Khazini (who was of Byzantine Greek descent but raised in a Persian culture), Muhammad ibn Mūsā al-Khwārizmī and Nasīr al-Dīn al-Tūsī (such as the \"Zij-i Ilkhani\" and other Zij treatises) among others.\n",
"The medieval Muslims took a keen interest in the study of astrology: partly because they considered the celestial bodies to be essential, partly because the dwellers of desert-regions often travelled at night, and relied upon knowledge of the constellations for guidance in their journeys. After the advent of Islam, the Muslims needed to determine the time of the prayers, the direction of the Kaaba, and the correct orientation of the mosque, all of which helped give a religious impetus to the study of astronomy and contributed towards the belief that the heavenly bodies were influential upon terrestrial affairs as well as the human condition. The science dealing with such influences was termed astrology (Arabic: علم النجوم \"Ilm an-Nujūm\"), a discipline contained within the field of astronomy (more broadly known as علم الفلك \"Ilm al-Falak\" 'the science of formation [of the heavens]'). The principles of these studies were rooted in Arabian, Persian, Babylonian, Hellenistic and Indian traditions and both were developed by the Arabs following their establishment of a magnificent observatory and library of astronomical and astrological texts at Baghdad in the 8th century.\n",
"Some scholars, including Makdisi, have argued that early medieval universities were influenced by the madrasas in Al-Andalus, the Emirate of Sicily, and the Middle East during the Crusades. Norman Daniel, however, views this argument as overstated. Roy Lowe and Yoshihito Yasuhara have recently drawn on the well-documented influences of scholarship from the Islamic world on the universities of Western Europe to call for a reconsideration of the development of higher education, turning away from a concern with local institutional structures to a broader consideration within a global context.\n",
"In the early centuries of Islam the most important points of contact between the Latin West and the Islamic world from an artistic point of view were Southern Italy and Sicily and the Iberian peninsula, which both held significant Muslim populations. Later the Italian maritime republics were important in trading artworks. In the Crusades Islamic art seems to have had relatively little influence even on the Crusader art of the Crusader kingdoms, though it may have stimulated the desire for Islamic imports among Crusaders returning to Europe.\n",
"Islamic medicine preserved, systematized and developed the medical knowledge of classical antiquity, including the major traditions of Hippocrates, Galen and Dioscorides. During the post-classical era, Islamic medicine was the most advanced in the world, integrating concepts of ancient Greek, Roman and Persian medicine as well as the ancient Indian tradition of Ayurveda, while making numerous advances and innovations. Islamic medicine, along with knowledge of classical medicine, was later adopted in the medieval medicine of Western Europe, after European physicians became familiar with Islamic medical authors during the Renaissance of the 12th century.\n"
] |
Are artificial food dyes different than dyes used in craft supplies? | Dyes that are approved for use in food or hygiene products have undergone testing to various degrees in order to ensure that they are non-toxic in the quantities you'd find in those products. Crafts supplies have no such regulations in place, and there is no telling what materials are present in the dyes or pigments used. The best guarantee you can hope for is that they aren't toxic merely by being in their presence. For example, the glass containers you can buy for dirt cheap at a craft store are often full of lead, and they will usually say that they are not meant for the storage of food or drinks.
Unfortunately, we are a long way off from knowing what the specific effect of food dyes on children with ADHD is. The state of the field is that researchers are still trying to establish that there even is a reproducible link between food dyes and hyperactivity. If that research is successful in nailing down a precise link, other scientists can begin work on figuring out exactly how that effect comes about. | [
"The primary source of dye, historically, has been nature, with the dyes being extracted from animals or plants. Since the mid-19th century, however, humans have produced artificial dyes to achieve a broader range of colors and to render the dyes more stable to washing and general use. Different classes of dyes are used for different types of fiber and at different stages of the textile production process, from loose fibers through yarn and cloth to complete garments.\n",
"Many synthesized dyes were easier and less costly to produce and were superior in coloring properties when compared to naturally derived alternatives. Some synthetic food colorants are diazo dyes. Diazo dyes are prepared by coupling of a diazonium compound with a second aromatic hydrocarbons. The resulting compounds contain conjugated systems that efficiently absorb light in the visible parts of the spectrum, i.e. they are deeply colored. The attractiveness of the synthetic dyes is that their color, lipophilicity, and other attributes can be engineered by the design of the specific dyestuff. The color of the dyes can be controlled by selecting the number of azo-groups and various substituents. Yellow shades are often achieved by using acetoacetanilide. Red colors are often azo compounds. The pair indigo and indigo carmine exhibit the same blue color, but the former is soluble in lipids, and the latter is water-soluble because it has been fitted with sulfonate functional groups.\n",
"One other class that describes the role of dyes, rather than their mode of use, is the food dye. Because food dyes are classed as food additives, they are manufactured to a higher standard than some industrial dyes. Food dyes can be direct, mordant and vat dyes, and their use is strictly controlled by legislation. Many are azo dyes, although anthraquinone and triphenylmethane compounds are used for colors such as green and blue. Some naturally occurring dyes are also used.\n",
"In order to achieve dyes with sufficient order parameter, researchers have synthesized novel dyes which themselves are liquid crystalline in character to function as display ingredients. The challenges that researchers face are : (1) high purity (2) small quantities and (3) high efficiency. A few companies have overcome these challenges, such as Mitsui Toatsu in Japan and Merck in the U.K.\n",
"The majority of natural dyes are derived from plant sources: roots, berries, bark, leaves, wood, fungi and lichens. Most dyes are synthetic, i.e., are man-made from petrochemicals. Other than pigmentation, they have a range of applications including organic dye lasers, optical media (CD-R) and camera sensors (color filter array).\n",
"The United States Government and the European Union certify a small number of synthetic chemical colourings to be used in food. These are usually aromatic hydrocarbons, or azo dyes, made from petroleum. The most common ones are:\n",
"The UK FSA commissioned a study of six food dyes (tartrazine, Allura red, Ponceau 4R, Quinoline Yellow, sunset yellow, carmoisine (dubbed the \"Southampton 6\")), and sodium benzoate (a preservative) on children in the general population, who consumed them in beverages. The study found \"a possible link between the consumption of these artificial colours and a sodium benzoate preservative and increased hyperactivity\" in the children; the advisory committee to the FSA that evaluated the study also determined that because of study limitations, the results could not be extrapolated to the general population, and further testing was recommended.\n"
] |
How does the UV Catastrophe relate to the quantization of energy? | The basic problem can be thought of like this: in classical thermodynamics there is the [equipartition theorem](_URL_0_), which means that each mode has the same (finite) average energy. The electromagnetic field has an infinity of modes, hence the problem. The standard formulas that make this explicit are sketched after this entry's context list.
edit: corralled some runaway words | [
"The term \"ultraviolet catastrophe\" was first used in 1911 by Paul Ehrenfest, but the concept originated with the 1900 statistical derivation of the Rayleigh–Jeans law. The phrase refers to the fact that the Rayleigh–Jeans law accurately predicts experimental results at radiative frequencies below 10 GHz, but begins to diverge with empirical observations as these frequencies reach the ultraviolet region of the electromagnetic spectrum. Since the first appearance of the term, it has also been used for other predictions of a similar nature, as in quantum electrodynamics and such cases as ultraviolet divergence.\n",
"The ultraviolet catastrophe results from the equipartition theorem of classical statistical mechanics which states that all harmonic oscillator modes (degrees of freedom) of a system at equilibrium have an average energy of formula_1.\n",
"The colourful term \"ultraviolet catastrophe\" was given by Paul Ehrenfest in 1911 to the paradoxical result that the total energy in the cavity tends to infinity when the equipartition theorem of classical statistical mechanics is (mistakenly) applied to black-body radiation. But this had not been part of Planck's thinking, because he had not tried to apply the doctrine of equipartition: when he made his discovery in 1900, he had not noticed any sort of \"catastrophe\". It was first noted by Lord Rayleigh in 1900, and then in 1901 by Sir James Jeans; and later, in 1905, by Einstein when he wanted to support the idea that light propagates as discrete packets, later called 'photons', and by Rayleigh and by Jeans.\n",
"At the higher end of the ultraviolet range, the energy of photons becomes large enough to impart enough energy to electrons to cause them to be liberated from the atom, in a process called photoionisation. The energy required for this is always larger than about 10 electron volt (eV) corresponding with wavelengths smaller than 124 nm (some sources suggest a more realistic cutoff of 33 eV, which is the energy required to ionize water). This high end of the ultraviolet spectrum with energies in the approximate ionization range, is sometimes called \"extreme UV.\" Ionizing UV is strongly filtered by the Earth's atmosphere).\n",
"The ultraviolet catastrophe, also called the Rayleigh–Jeans catastrophe, was the prediction of late 19th century/early 20th century classical physics that an ideal black body (also blackbody) at thermal equilibrium will emit radiation in all frequency ranges, emitting more energy as the frequency increases. By calculating the total amount of radiated energy (i.e., the sum of emissions in all frequency ranges), it can be shown that a blackbody is likely to release an arbitrarily high amount of energy. This would cause all matter to instantaneously radiate all of its energy until it is near absolute zero - indicating that a new model for the behaviour of blackbodies was needed.\n",
"The UV Index is a number linearly related to the intensity of sunburn-producing UV radiation at a given point on the earth's surface. It cannot be simply related to the irradiance (measured in W/m) because the UV of greatest concern occupies a spectrum of wavelength from 295 to 325 nm, and shorter wavelengths have already been absorbed a great deal when they arrive at the earth's surface. Skin damage from sunburn, however, is related to wavelength, the shorter wavelengths being much more damaging. The UV power spectrum (expressed as watts per square metre per nanometre of wavelength) is therefore multiplied by a weighting curve known as the erythemal action spectrum, and the result integrated over the whole spectrum. This gave Canadian scientists a weighted figure (sometimes called Diffey-weighted UV irradiance, or DUV, or erythemal dose rate) typically around 250 mW/m in midday summer sunlight. So, they arbitrarily divided by 25 mW/m to generate a convenient index value, essentially a scale of 0 to 11+ (though ozone depletion is now resulting in higher values, as mentioned above).\n",
"The name comes from the earliest example of such a divergence, the \"ultraviolet catastrophe\" first encountered in understanding blackbody radiation. According to classical physics at the end of the nineteenth century, the quantity of radiation in the form of light released at any specific wavelength should increase with decreasing wavelength—in particular, there should be considerably more ultraviolet light released from a blackbody radiator than infrared light. Measurements showed the opposite, with maximal energy released at intermediate wavelengths, suggesting a failure of classical mechanics. This problem eventually led to the development of quantum mechanics.\n"
] |
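A worked form of the divergence described in the answer above, using the standard blackbody formulas (with $k_B$ the Boltzmann constant, $h$ Planck's constant, $c$ the speed of light, and $T$ the temperature): classically, the Rayleigh–Jeans spectral energy density gives every mode the equipartition energy $k_B T$, so the total energy integrated over all frequencies diverges,

$$u_{\mathrm{RJ}}(\nu,T) = \frac{8\pi\nu^{2}}{c^{3}}\,k_{B}T, \qquad \int_{0}^{\infty} u_{\mathrm{RJ}}(\nu,T)\,\mathrm{d}\nu = \infty,$$

while Planck's quantized form only lets a mode of frequency $\nu$ carry energy in units of $h\nu$, which freezes out the high-frequency modes and keeps the total finite:

$$u_{\mathrm{Planck}}(\nu,T) = \frac{8\pi\nu^{2}}{c^{3}}\,\frac{h\nu}{e^{h\nu/k_{B}T}-1}, \qquad \int_{0}^{\infty} u_{\mathrm{Planck}}(\nu,T)\,\mathrm{d}\nu = \frac{8\pi^{5}(k_{B}T)^{4}}{15\,(hc)^{3}}.$$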
Is it possible to condition your own bladder to hold more liquid? | The feeling of needing to urinate stems from mechanoreceptors in the bladder; it's certainly possible to develop a tolerance to the urge to urinate and so increase the length of time between urinations. Drugs such as tolterodine and various bladder-training techniques have been shown to help increase the volume stored within the bladder, but those studies were in patients with overactive bladders. | [
"Emptying the bladder is one of the defense mechanisms of this tortoise. This can leave the tortoise in a very vulnerable condition in dry areas, and it should not be alarmed, handled, or picked up in the wild unless in imminent danger. If it must be handled, and its bladder is emptied, then water should be provided to restore the fluid in its body.\n",
"Wraparound bladders are favored by some divers because they make it easier to maintain upright attitude on the surface. However, some designs have a tendency to squeeze the diver's torso when inflated, and they are often bulky at the sides or front when fully inflated. Back inflation BCs are less bulky at the sides but may have a tendency to float the diver tilted forward on the surface depending on weight and buoyancy distribution, which presents a possible hazard in an emergency if the diver is unconscious or otherwise unable to keep his or her head above the water.\n",
"BULLET::::- Redundant bladders may be inadvertently filled, either by unintended action of the diver, or by malfunction of the filling mechanism, and if the failure is not recognized and dealt with promptly, this may result in a runaway uncontrolled ascent, with associated risk of decompression sickness. There is a risk that the diver will not recognise which bladder is full and attempt to dump from the wrong one. The risk can be reduced by ensuring that the filling mechanisms are clearly distinguishable by both feel and position, and not connecting a low pressure supply hose to the reserve until needed, so it is impossible to add gas by accident.\n",
"If an incontinence is due to overflow incontinence, in which the bladder never empties completely, or if the bladder cannot empty because of poor muscle tone, past surgery, or spinal cord injury, a catheter may be used to empty the bladder. A catheter is a tube that can be inserted through the urethra into the bladder to drain urine. Catheters may be used once in a while or on a constant basis, in which case the tube connects to a bag that is attached to the leg. If a long-term (or indwelling) catheter is used, urinary tract infections may occur.\n",
"Dual bladder buoyancy compensators are considered both unnecessary and unsafe. Unnecessary in that there are alternative methods available to a correctly rigged diver to compensate for a defective BC, and unsafe in that there is no obvious way to tell which bladder is holding air, and a leak into the secondary bladder may go unnoticed until the buoyancy has increased to the extent that the diver is unable to stop the ascent, while struggling to empty the air from the wrong bladder. Monitoring the air content of two bladders is unnecessary additional task loading, which distracts attention from other matters.\n",
"Means of holding the bladder walls apart to encourage drying between uses are available, such as a plastic frame that collapses to pass through the fill opening, but expands inside the bladder to hold the sides apart even near the corners.\n",
"The swim bladder (or gas bladder) is an internal organ that contributes to the ability of a fish to control its buoyancy, and thus to stay at the current water depth, ascend, or descend without having to waste energy in swimming. The bladder is found only in the bony fishes. In the more primitive groups like some minnows, bichirs and lungfish, the bladder is open to the esophagus and doubles as a lung. It is often absent in fast swimming fishes such as the tuna and mackerel families. The condition of a bladder open to the esophagus is called physostome, the closed condition physoclist. In the latter, the gas content of the bladder is controlled through a rete mirabilis, a network of blood vessels effecting gas exchange between the bladder and the blood.\n"
] |
Why were the Anglo-Saxons one of the only Germanic groups who didn’t assimilate into the cultures they conquered? | Who says that they didn't? Robin Fleming argues in *Britain After Rome* that the idea of the Anglo-Saxons as a purely Germanic culture is misguided and not supported by the evidence we have available through archaeology. She points to the blend of clothing and jewelry styles that emerged following "Anglo-Saxon" migration to Britain as evidence that the two cultures were assimilating into something different from either one that came before. She views this process as a more or less peaceful one. While there was some endemic violence inherent to the time period, she does not see evidence for the mass violence that is often assumed to have accompanied the Germanic migration into Britain.
However, Peter Heather offers another explanation that is worth mentioning. He posits that, due to the fragmented and small-scale nature of migration into Britain, combined with a fluid cultural identity among the native British, there was little reason for them to hold onto their culture in certain parts of Britain, so the population assimilated into the new Germanic one.
Also worth bearing in mind is that the label "Anglo-Saxon," as applied to the migrants themselves, is misleading. While many of the Germanic people who came to England did come from Jutland or Saxony, others came from Norway, Frisia, Ireland (not even a Germanic people!), and Sweden. Note too that the process of assimilation on the continent was not as smooth as you might imagine. For example, Frankish Law (or Salic Law) maintained legal distinctions between Franks and Romans for centuries after the Franks took control of northern Gaul. Even though the populations "assimilated" in the end, we should not imagine that this process was quick, easy, or a foregone conclusion. | [
"The Franks and the Anglo-Saxons were unlike the other Germanic peoples in that they entered the Western Roman Empire as Pagans and were forcibly converted to Chalcedonian Christianity by their kings, Clovis I and Æthelberht of Kent (see also Christianity in Gaul and Christianisation of Anglo-Saxon England). The remaining tribes – the Vandals and the Ostrogoths – did not convert as a people nor did they maintain territorial cohesion. Having been militarily defeated by the armies of Emperor Justinian I, the remnants were dispersed to the fringes of the empire and became lost to history. The Vandalic War of 533–534 dispersed the defeated Vandals. Following their final defeat at the Battle of Mons Lactarius in 553, the Ostrogoths went back north and (re)settled in south Austria.\n",
"The southward expansion of the East Germanic tribes pushed many other Germanic and Iranian peoples towards the Roman Empire, spawning the Marcomannic Wars in the 2nd century AD. Another East Germanic tribe were the Herules, who according to 6th century historian Jordanes were driven from modern-day Denmark by the Danes, who were an offshoot of the Swedes. The migration of the Herules is thought to have occurred around 250 AD. The Danes would eventually settle all of Denmark, with many its former inhabitants, including the Jutes and Angles, settling Britain, becoming known as the Anglo-Saxons. The Old English story Beowulf is a testimony to this connection. Meanwhile, Norway was inhabited by a large number of North Germanic tribes and divided into a score of petty kingdoms.\n",
"The Anglo-Saxons were a mix of invaders, migrants and acculturated indigenous people. Even before the withdrawal of the Romans, there were Germanic people in Britain who had been stationed there as \"foederati\". The migration continued with the departure of the Roman army, when Anglo-Saxons were recruited to defend Britain; and also during the period of the Anglo-Saxon first rebellion of 442. They settled in small groups covering a handful of widely dispersed local communities, and brought from their homelands the traditions of their ancestors. There are references in Anglo-Saxon poetry, including Beowulf, that show some interaction between pagan and Christian practices and values. There is enough evidence from Gildas and elsewhere that it is safe to assume some continuing form of Christianity survived. The Anglo-Saxons took control of Sussex, Kent, East Anglia and part of Yorkshire; while the West Saxons founded a kingdom in Hampshire under Cerdic, around 520.\n",
"The Anglo-Saxons' arrival is the most hotly disputed of events, and the extent to which they killed, displaced, or integrated with the existing society is still questioned. What is clear is that a separate Anglo-Saxon society, which would eventually become England with a more Germanic feel, was set up in the south east of the island. These new arrivals had not been conquered by the Romans but their society was perhaps similar to that of Britain. The main difference was their pagan religion, which the surviving northern areas of non-Saxon rule sought to convert to Christianity. During the 7th century these northern areas, particularly Northumbria, became important sites of learning, with monasteries acting like early schools and intellectuals such as Bede being influential. In the 9th century Alfred the Great worked to promote a literate, educated people and did much to promote the English language, even writing books himself. Alfred and his successors unified and brought stability to most of the south of Britain that would eventually become England.\n",
"More broadly, early Medieval Germanic peoples were often assimilated into the \"walha\" substrate cultures of their subject populations. Thus, the Burgundians of Burgundy, the Vandals of Northern Africa, and the Visigoths of France and Iberia, lost some Germanic identity and became part of Romano-Germanic Europe. For the Germanic Visigoths in particular, they had intimate contact with Rome for two centuries before their domination of the Iberian Peninsula and were accordingly permeated by Roman culture. Likewise, the Franks of Western Francia form part of the ancestry of the French people.\n",
"The various Germanic peoples of the Migrations period eventually spread out over a vast expanse stretching from contemporary European Russia to Iceland and from Norway to North Africa. The migrants had varying impacts in different regions. In many cases, the newcomers set themselves up as overlords of the pre-existing population. Over time, such groups underwent ethnogenesis, resulting in the creation of new cultural and ethnic identities (e.g., the Franks and Gallo-Romans becoming the French). Thus, many of the descendants of the ancient Germanic peoples do not speak Germanic languages, as they were to a greater or lesser degree assimilated into the cosmopolitan, literate culture of the Roman world. Even where the descendants of Germanic peoples maintained greater continuity with their common ancestors, significant cultural and linguistic differences arose over time, as is strikingly illustrated by the different identities of Christianized Saxon subjects of the Carolingian Empire and pagan Scandinavian Vikings.\n",
"The demise of Vulgar Latin in the face of Anglo-Saxon settlement is very different from the fate of the language in other areas of Western Europe which were subject to Germanic migration, like France, Italy and Spain, where Latin and the Romance languages continued. The likely reason is that in Britain there was a greater collapse in Roman institutions and infrastructure, leading to a much greater reduction in the status and prestige of the indigenous Romanised culture; and so the indigenous people were more likely to abandon their languages, in favour of the higher-status language of the Anglo-Saxons.\n"
] |
why do my muscles hurt after using them? | Lactic acid build-up within the muscle may cause pain. Muscle tightness may also cause pain. | [
"As a result of this effect, not only is the soreness reduced, but other indicators of muscle damage, such as swelling, reduced strength and reduced range of motion, are also more quickly recovered from. The effect is mostly, but not wholly, specific to the exercised muscle: experiments have shown that some of the protective effect is also conferred on other muscles.\n",
"The mechanism of delayed onset muscle soreness is not completely understood, but the pain is ultimately thought to be a result of microtrauma – mechanical damage at a very small scale – to the muscles being exercised.\n",
"The pain is caused by the inadequate blood flow to the muscle tissue, the inflammation from the resulting cell damage, and the release of cell contents. Muscle spasms, caused by the lack of blood to the muscle tissue, are also painful.\n",
"Soreness might conceivably serve as a warning to reduce muscle activity to prevent injury or further injury. With delayed onset muscle soreness (DOMS) caused by eccentric exercise (muscle lengthening), it was observed that light concentric exercise (muscle shortening) during DOMS can cause initially more pain but was followed by a temporary alleviation of soreness – with no adverse effects on muscle function or recovery being observed. Furthermore eccentric exercise during DOMS was found to not exacerbate muscle damage, nor did it have an adverse effect on recovery – considering this, soreness is not necessarily a warning sign to reduce the usage of the affected muscle. However it was observed that a second bout of eccentric exercise within one week of the initial exercise did lead to decreased muscle function immediately afterwards.\n",
"In the long term, the loss of muscle function can have additional effects from disuse, including atrophy of the muscle. Immobility can lead to pressure sores, particularly in bony areas, requiring precautions such as extra cushioning and turning in bed every two hours (in the acute setting) to relieve pressure. In the long term, people in wheelchairs must shift periodically to relieve pressure. Another complication is pain, including nociceptive pain (indication of potential or actual tissue damage) and neuropathic pain, when nerves affected by damage convey erroneous pain signals in the absence of noxious stimuli. Spasticity, the uncontrollable tensing of muscles below the level of injury, occurs in 65–78% of chronic SCI. It results from lack of input from the brain that quells muscle responses to stretch reflexes. It can be treated with drugs and physical therapy. Spasticity increases the risk of contractures (shortening of muscles, tendons, or ligaments that result from lack of use of a limb); this problem can be prevented by moving the limb through its full range of motion multiple times a day. Another problem lack of mobility can cause is loss of bone density and changes in bone structure. Loss of bone density (bone demineralization), thought to be due to lack of input from weakened or paralysed muscles, can increase the risk of fractures. Conversely, a poorly understood phenomenon is the overgrowth of bone tissue in soft tissue areas, called heterotopic ossification. It occurs below the level of injury, possibly as a result of inflammation, and happens to a clinically significant extent in 27% of people.\n",
"Physical exercise may cause pain both as an immediate effect that may result from stimulation of free nerve endings by low pH, as well as a delayed onset muscle soreness. The delayed soreness is fundamentally the result of ruptures within the muscle, although apparently not involving the rupture of whole muscle fibers.\n",
"Owens states that chronic pain remains to be one of the most common among medical complaints. Delos Therapy focuses on the principle that with repetitive motion and wear and tear of muscle tissue, the muscles become tight and fibrotic, causing common symptoms of pain, stiffness, and weakness. This fibrosis is not visible on conventional imaging, such as, MRIs or X-rays; and the fibrosis is getting missed diagnostically by mainstream medicine. Conventional treatments for such tightness and pain include stretching, strengthening, and/or medication management with opioids. Although beneficial in some cases, Owens believes this is not a complete therapy and is largely focused on symptoms rather than the root cause.\n"
] |
Why does my vision change when I focus intently on anything around me? | When you [stabilize an image on your retina](_URL_1_) for a long time, you adapt to portions of the image and stop noticing / seeing them. The auditory equivalent is when you do not notice the hum of a light or a fan until you pay attention to it again. Normally, your eyes are moving many times a second, even when you are fixating on something, in order to provide some change in the sensory input to a portion of your retina. This is called a [microsaccade](_URL_0_). | [
"Changes in spatial attention can occur with the eyes moving, overtly, or with the eyes remaining fixated, covertly. Within the human eye only a small part, the fovea, is able to bring objects into sharp focus. However, it is this high visual acuity that is needed to perform actions such as reading words or recognizing facial features, for example. Therefore, the eyes must continually move in order to direct the fovea to the desired goal. Prior to an overt eye movement, where the eyes move to a target location, covert attention shifts to this location. However, it is important to keep in mind that attention is also able to shift covertly to objects, locations, or even thoughts while the eyes remain fixated. For example, when a person is driving and keeping their eyes on the road, but then, even though their eyes do not move, their attention shifts from the road to thinking about what they need to get at the grocery store. The eyes may remain focused on the previous object attended to, yet attention has shifted.\n",
"A similar effect is found when people track moving objects with their eyes. The changing retinal image is referenced with the muscle movements of the eye resulting in the same type of retinal/body-centered alignment. This is one more process that helps the brain properly encode the relationships needed to deal with our changing perception, and also serves as a verification that the proper physical movements are being made.\n",
"The brain's ability to see three-dimensional objects depends on proper alignment of the eyes. When both eyes are properly aligned and aimed at the same target, the visual portion of the brain fuses the forms into a single image. When one eye turns inward, outward, upward, or downward, two different pictures are sent to the brain. This causes loss of depth perception and binocular vision. There have also been some reports of people that can \"control\" their afflicted eye. The term is from Greek \"exo\" meaning \"outward\" and \"trope\" meaning \"a turning\".\n",
"By this intellectual operation, comprehending every effect in our sensory organs as having an external cause, the external world arises. With vision, finding the cause is essentially simplified due to light acting in straight lines. We are seldom conscious of the process that interprets the double sensation in both eyes as coming from one object; that turns the upside down impression, and that adds depth to make from the planimetrical data stereometrical perception with distance between objects.\n",
"As the eye shifts its gaze from looking through the optical center of the corrective lens, the lens-induced astigmatism value increases. In a spherical lens, especially one with a strong correction whose base curve is not in the best spherical form, such increases can significantly impact the clarity of vision in the periphery.\n",
"The visual system in the brain is too slow to process that information if the images are slipping across the retina at more than a few degrees per second. Thus, to be able to see while we are moving, the brain must compensate for the motion of the head by turning the eyes. Another specialisation of visual system in many vertebrate animals is the development of a small area of the retina with a very high visual acuity. This area is called the fovea, and covers about 2 degrees of visual angle in people. To get a clear view of the world, the brain must turn the eyes so that the image of the object of regard falls on the fovea. Eye movement is thus very important for visual perception, and any failure can lead to serious visual disabilities. To see a quick demonstration of this fact, try the following experiment: hold your hand up, about one foot (30 cm) in front of your nose. Keep your head still, and shake your hand from side to side, slowly at first, and then faster and faster. At first you will be able to see your fingers quite clearly. But as the frequency of shaking passes about 1 Hz, the fingers will become a blur. Now, keep your hand still, and shake your head (up and down or left and right). No matter how fast you shake your head, the image of your fingers remains clear. This demonstrates that the brain can move the eyes opposite to head motion much better than it can follow, or pursue, a hand movement. When your pursuit system fails to keep up with the moving hand, images slip on the retina and you see a blurred hand.\n",
"For example, when looking out of the window at a moving train, the eyes can focus on a moving train for a short moment (by stabilizing it on the retina), until the train moves out of the field of vision. At this point, the eye is moved back to the point where it first saw the train (through a saccade).\n"
] |
what is going to make future 5g internet faster than current 4g networks? | Well, a mobile network CEO or tech specialist recently described how the 5G network will work. It will be closer to Skynet in the sense that the network will be smarter, faster, more organized, and better equipped with newer-generation technology that enables speeds up to a gigabit. There will be bigger, thicker cables to every cell tower, so connectivity will be more widespread and more reliable. The network will be smarter in the sense that it can tell when your battery is low and start pinging your phone less, and more organized since there will likely be a prioritization system that gives business lines higher priority than our own consumer lines. Generally, the network will be smarter and more efficient for every cent spent on it. | [
"5G succeeds 4G LTE wireless technology; developments have been focused on enabling low-latency communications, and promises of a minimum peak network speed of 20 gigabits per/second (20 times faster than the equivalent on 4G LTE networks), and uses within Internet of things and smart city technology.\n",
"A new \"mobile broadband\" technology emerging in the United Kingdom is 4G which hopes to replace the old 3G technology currently in use and could see download speeds increased to 300Mbit/s. The company EE have been the first company to start developing a full scale 4G network throughout the United Kingdom. This was later followed by other telecommunications companies in the UK such as O2 (Telefónica) and Vodafone.\n",
"By 2009, it had become clear that, at some point, 3G networks would be overwhelmed by the growth of bandwidth-intensive applications like streaming media. Consequently, the industry began looking to data-optimized 4th-generation technologies, with the promise of speed improvements up to 10-fold over existing 3G technologies. The first two commercially available technologies billed as 4G were the WiMAX standard (offered in the U.S. by Sprint) and the LTE standard, first offered in Scandinavia by TeliaSonera.\n",
"Installation of a trans-Indian Ocean backbone cable in 2009 has, in theory, made Internet access much more readily available in Dar in particular and in East Africa in general. However, roll-out to end-users is slow, partly because of spotty telephone line coverage at the moment provided by the Tanzania Telecommunications Company Limited, partly due to the substantial prices and long contracts demanded for purchase of bandwidth for small ISPs. Mobile-telephone access to the Internet via 3G and 3.75G is still relatively expensive. 4G is making its way through major cities and towns with plans to go countrywide in the advanced planning stages.\n",
"3G networks have taken this approach to a higher level, using different underlying technology but the same principles. They routinely provide speeds over 300kbit/s. Due to the now increased internet speed, internet connection sharing via WLAN has become a workable reality. Devices which allow internet connection sharing or other types of routing on cellular networks are called also cellular routers.\n",
"In November 2014, ViewQwest unveiled plans for a 2Gbit/s fibre broadband service for households in Singapore, offering the country's fastest internet connection in the market. In March 2015, the service was officially launched making it the world's fastest home broadband plan alongside Japan.\n",
"As of 2015, the maximum plan for their connection is now at 1Gbit/s, while plans for lower speeds are scheduled for upgrades in the near future. As of 2017, they are aggressively increasing network presence in an attempt to improve internet speed and services, decried as one of the worst in Asia, apart from rivalry from other companies.\n"
] |
How was life as a Carthaginian compared to life as a Roman? | So there's this incident where Claudius is headed to what is now England in a ship. He gets spotted by a Carthaginian ship, and it's one group of rowers against the other. Claudius argues the reason his rowers won (and escaped) was that they were free men, while the Carthaginian rowers were slaves. But that was much later, and we're talking a very different Carthage than the one during the Punic wars.
Not only that, to believe the argument, you have to trust the ancient sources, and the modern one (Graves, in this case). For your basic question, "Was Rome really militaristic," the answer can only be yes. Was Carthage a democracy? That's a modern question, which may not actually be relevant in ancient terms.
They *did* have election of kings, but we would describe it as an oligarchy. Look up "Tribunal of 104" if you're interested. Your average Carthaginian citizen was more interested in trade than fighting, so they depended heavily on mercenaries from subjugated provinces for their military. The struggles for power would have been familiar to any Roman: political murders, bought offices, intrigue and deceit. Both systems thought of themselves as republics.
One other problem: most of the writers we base our view on were actually foreigners, in many cases hostile foreigners. It's difficult, under such circumstances, to make real assertions. But some basic things are clear: Rome had a plunder economy, while Carthage was based slightly more on trade. Land power vs. sea power. Citizen military vs. mercenaries. All those are oversimplifications, but have some truth to them. The modern concept of freedom can't be said to apply.
I haven't researched the "rage on the TW forums," but if they're pro-Rome, they're probably misreading. Arguing that the Romans were a positive influence is another modern simplification. They made life suck for any non-Roman area (such is the nature of a plunder economy), and for the majority of Romans themselves. Their whole system was based on the idea that "We're going to kick your ass and take all your stuff." | [
"The Punic Wars with Carthage had a particularly marked effect on Roman viticulture. In addition to broadening the cultural horizons of the Roman citizenry, Carthaginians also introduced them to advanced viticultural techniques, in particular the work of Mago. When the libraries of Carthage were ransacked and burned, among the few Carthaginian works to survive were the 26 volumes of Mago's agricultural treatise, which was subsequently translated into Latin and Greek in 146 BC. Although his work did not survive to the modern era, it has been extensively quoted in the influential writings of Romans Pliny, Columella, Varro and Gargilius Martialis.\n",
"The Carthaginian republic was one of the longest-lived and largest states in the ancient Mediterranean. Reports relay several wars with Syracuse and finally, Rome, which eventually resulted in the defeat and destruction of Carthage in the Third Punic War. The Carthaginians were Phoenician settlers originating in the Mediterranean coast of the Near East. They spoke Canaanite, a Semitic language, and followed a local variety of the ancient Canaanite religion.\n",
"The Carthaginians were rivals to the Greeks and Romans. Carthage fought the Punic Wars, three wars with Rome: the First Punic War (264 to 241 BC), over Sicily; the Second Punic War (218 to 201 BC), in which Hannibal invaded Europe; and the Third Punic War (149 to 146 BC). Carthage lost the first two wars, and in the third it was destroyed, becoming the Roman province of Africa, with the Berber Kingdom of Numidia assisting Rome. The Roman province of Africa became a major agricultural supplier of wheat, olives, and olive oil to imperial Rome via exorbitant taxation. Two centuries later, Rome brought the Berber kingdoms of Numidia and Mauretania under its authority. In the 420's AD, Vandals invaded North Africa and Rome lost her territories. The Berber kingdoms subsequently regained their independence.\n",
"\"Many of the Carthaginian institutions are excellent. The superiority of their constitution is proved by the fact that the common people remain loyal to the constitution; the Carthaginians have never had any rebellion worth speaking of, and have never been under the rule of a tyrant.\"\n",
"While the Carthaginians had been busy at Geronium, Fabius had left Minucius in charge of the Roman army with instructions to follow the ‘Fabian Strategy’ and journeyed to Rome to observe religious duties. He possibly also had engaged in political bickering because of his unpopularity among the Roman citizens. Minucius, who had always advocated a more forward strategy against Hannibal, moved down from the hills after a few days and set up a new camp in the plain of Larinum to the north of Geronium. The Romans then began harassing the Carthaginian foragers from their new camp as Minucius sought to provoke Hannibal into battle. Hannibal in response moved near the Roman camp from Geronium with two thirds of his army, built a temporary camp, and occupied a hill overlooking the Roman camp with 2,000 Libyphoenician pikemen. The mobility of the Carthaginians was restricted at this time as their cavalry horses were being rested. This had also deprived Hannibal of his best weapon against the Romans, a fact which would come into play soon. Minucius promptly attacked with his light infantry, driving back the pikemen posted on the hill, and moved his camp to the top of the captured hill.\n",
"The Carthaginian town came under Roman hegemony after the Punic Wars. In 46, the town was the first in Africa to ally itself with Julius Caesar during his civil war. The same year, the Battle of Ruspina was a victory for Pompey's ally T. Labienus.\n",
"The Carthaginians were totally unprepared for Rome's actions. The garrisons of Lilybaeum, Drepana and Hamilcar’s army at Eryx held fast, but without supplies from Carthage they could not hold out indefinitely. Now that Rome had seized the initiative with a battle ready fleet blockading Carthaginian holdings in Sicily, without warships the unescorted Carthaginian supply ships would fall prey to the Romans. \n"
] |
How did the United States of America arrive at their valuation of Greenland in 1946? Could the area have been worth the cost of purchase in terms of economic output, or was the value purely strategic? | Initially, America very much wanted it for strategic reasons. The GIUK Gap was hugely important. Specifically, it was important to the Soviet Union's submarine fleet.
If you look at the terms of the [Montreux Convention](_URL_2_), it was impossible to "sneak" a submarine through Turkish waters. If you look at a map of the Baltic Sea, or more specifically the [waters around Denmark](_URL_0_), it's similarly unlikely that you could ever sneak a submarine past even a semi-aware detection network. Denmark, of course, was one of the founding dozen of NATO.
This means that if the Soviets wanted to conduct any submarine operations with any degree of stealth, they needed to be based out of Murmansk on the Barents Sea or somewhere in East Asia (Vladivostok, Petropavlovsk on Kamchatka, Magadan, or Sovetskaya Gavan). Realistically, if the Soviets wanted their submarines to remain undetected, they could really only use those five ports. And if you wanted to operate in the Atlantic, you weren't going to put your HQ on the northwestern coast of the Pacific. You were going to put it in Murmansk.
This, effectively, meant that any ships the Soviets sent to the Atlantic had to pass through the GIUK Gap. And it would be relatively easy to detect (and subsequently track or shadow) them if you had assets in the area beforehand. And it's kind of hard to be all sneaky and such when the USN is dropping [practice depth charges on you](_URL_4_). They couldn't do any meaningful damage, of course, but it's an implicit threat: the USN was basically saying, "we could sink you at any time."
As for the overall economic value of Greenland: that's up in the air. We [already see](_URL_1_) Greenland being exploited for hydrocarbons. The USGS released a review of hydrocarbons in the region [here](_URL_3_). I believe they described it as "a genuinely stupid amount of dead plants buried in the sea floor." (Okay, that wasn't their exact phrasing.) (._.)
Theoretically, yes, Greenland would've paid for itself. Eventually. If the estimates actually pan out. | [
"Following World War II, the United States developed a geopolitical interest in Greenland, and in 1946 the United States offered to buy Greenland from Denmark for $100,000,000, but Denmark refused to sell.\n",
"Following World War II, the United States developed a geopolitical interest in Greenland, and in 1946 the United States offered to buy the island from Denmark for $100,000,000. Denmark refused to sell it. In the 21st century, the United States, according to WikiLeaks, remains highly interested in investing in the resource base of Greenland and in tapping hydrocarbons off the Greenlandic coast.\n",
"During World War II, when Denmark was occupied by Nazi Germany, the United States briefly controlled Greenland for battlefields and protection. In 1946, the United States offered to buy Greenland from Denmark for $100 million ($1.2 billion today) but Denmark refused to sell it. Several politicians and others have in recent years argued that Greenland could hypothetically be in a better financial situation as a part of the United States; for instance mentioned by professor Gudmundur Alfredsson at University of Akureyri in 2014. One of the actual reasons behind U.S. interest in Greenland could be the vast natural resources of the island. According to WikiLeaks, the U.S. appears to be highly interested in investing in the resource base of the island and in tapping the vast expected hydrocarbons off the Greenlandic coast.\n",
"The Louisiana Purchase gave the United States over a million square miles of previously French territory for the price of $15 million. The Purchase was ratified by the U.S. Senate on October 20, 1803, and the new land subsequently doubled the size of the United States and opened the door to a new period of westward expansion. In 1902, President Theodore Roosevelt signed a bill to subsidize the Louisiana Purchase Exposition, which would become known as the St. Louis World Fair of 1904. Of the $5 million paid to the fair by the government, $250,000 was in the form of commemorative gold dollar coins.\n",
"1962 brought about the construction of , which was the largest commercial vessel built in the United States at the time, and became the first ship to transit the Northwest Passage to the Alaska North Slope oil fields. The Bainbridge was launched in that year, but not without accusations from the government that Bethlehem overcharged the Navy, as the costs increased from almost $ (equivalent to $ in today's dollars) in 1959 to a negotiated $ (equivalent to $ in today's dollars) three years later, down from an estimate of $ (equivalent to $ in today's dollars) before then, although there was a $ (equivalent to $ in today's dollars) discrepancy in the yard. After the end of the strike mentioned above, the yard was accused by the government of overcharging for the first nuclear frigate, and the Long Beach. The shipyard later made up for the losses of $ (equivalent to $ in today's dollars) by crediting on other contracts that were being offered.\n",
"BULLET::::- Louisiana Purchase Bicentennial was commemorated with a 37-cent stamp issued on April 30, 2003, in New Orleans, Louisiana. The Purchase doubled the size of the United States, it became one of the largest countries in the world, and the most fertile lands of the continent were opened to American settlement. It is often called the greatest real estate deal in history, \"with a stroke of a pen\". The stamp was designed by Richard Sheaff and illustrated by Garin Baker. Sennett Security Products printed the stamp in gravure process in pressure-sensitive panes of twenty; 54 million were issued. An image of the stamp is available at Arago online at the link in the footnote.\n",
"BULLET::::- The Louisiana Purchase of 1803, also known as the \"Great Land Acquisition\", is often seen as one of the most important events in American history after the Declaration of Independence. At the time it had a total cost of $15 million, and it was financed in three ways. First by a down payment of $3 million, in gold by the U.S. government, followed by two loans, one by the London-based Barings Bank, and one by the Amsterdam-based Hope Bank. The original receipt still exists and is currently property of the Dutch ING Group, which has its headquarters in Amsterdam.\n"
] |
has the physiological damage caused by trauma been the same throughout history? | No. Trauma has been present throughout human history. The difference is that nowadays we are allowed to actually speak about our traumas and help them heal, whereas in the past the attitude was mostly 'why are you acting like this, stop it'. Or if it was mentioned, it was mentioned in vague terms that do not always translate well to modern ears. The deadly sin of sloth, for example, initially didn't really refer to simple laziness. It referred to apathy and loss of interest in life, exactly the sort of symptoms commonly associated with depression (potentially due to former trauma). | [
"Historical trauma is described as collective emotional and psychological damage throughout a person's lifetime and across multiple generations. Examples of historical trauma can be seen through the Wounded Knee Massacre of 1890, where over 200 unarmed Lakota were killed, and the Dawes Allotment Act of 1887, when American Indians lost four-fifths of their land.\n",
"Historical Trauma (HT), or Historical Trauma Response (HTR), can manifest itself in a variety of psychological ways. However, it is most commonly seen through high rates of substance abuse, alcoholism, depression, anxiety, suicide, domestic violence, and abuse within afflicted communities. The effects and manifestations of trauma are extremely important in understanding the present-day conditions of afflicted populations.\n",
"Traumatic brain injuries vary in their mechanism of injury, producing a blunt or penetrating trauma resulting in a primary and secondary injury with excitotoxicity and relatively wide spread neuronal death. Due to the overwhelming number of traumatic brain injuries as a result of the War on Terror, tremendous amounts of research have been placed towards a better understanding of the pathophysiology of traumatic brain injuries as well as neuroprotective interventions and possible interventions prompting restorative neurogenesis. Hormonal interventions, such as progesterone, estrogen, and allopregnanolone have been examined heavily in recent decades as possible neuroprotective agents following traumatic brain injuries to reduce the inflammation response stunt neuronal death. In rodents, lacking the regenerative capacity for adult neurogenesis, the activation of stem cells following administration of α7 nicotinic acetylcholine receptor agonist, PNU-282987, has been identified in damaged retinas with follow-up work examining activation of neurogenesis in mammals after traumatic brain injury. Currently, there is no medical intervention that has passed phase-III clinical trials for use in the human population.\n",
"Survivors of war trauma or childhood maltreatment are at increased risk for trauma-spectrum disorders such as post-traumatic stress disorder (PTSD). In addition, traumatic stress has been associated with alterations in the neuroendocrine and the immune system, enhancing the risk for physical diseases. Traumatic experiences might even affect psychological as well as biological parameters in the next generation, i.e. traumatic stress might have trans generational effects. So currently there is a new field trying to explain how epigenetic processes, which represent a pivotal biological mechanism for dynamic adaptation to environmental challenges, might contribute to the explanation of the long-lasting and intergenerational effects of trauma. In particular, epigenetic alterations in genes regulating the hypothalamus–pituitary–adrenal axis as well as the immune system have been observed in survivors of childhood and adult trauma.\n",
"The trauma triad of death is a medical term describing the combination of hypothermia, acidosis and coagulopathy. This combination is commonly seen in patients who have sustained severe traumatic injuries and results in a significant rise in the mortality rate. Commonly, when someone presents with these signs, damage control surgery is employed to reverse the effects.\n",
"Scholarship and data suggests that violence has declined. Since World War II, there has been a decline in battle deaths and since the Cold War, there has been a decline in conflict. Recently, scholars have started to question this long-held belief.\n",
"Historical trauma, and its manifestations, are seen as an example of Transgenerational trauma (though the existence of transgenerational trauma itself is disputed). For example, a pattern of maternal abandonment of a child might be seen across three generations, or the actions of an abusive parent might be seen in continued abuse across generations. These manifestations can also stem from the trauma of events, such as the witnessing of war, genocide, or death. For these populations that have witnessed these mass level traumas (e.g., war, genocide, colonialism), several generations later these populations tend to have higher rates of disease.\n"
] |
Why was the USS Indianapolis sailing without an escort when she was sunk? | It's normal for a cruiser to operate alone without a destroyer escort in some circumstances. A heavy cruiser is an important asset, but it's not a capital ship that will shift the balance of naval power if lost. Destroyers were always in high demand for various roles in World War Two, and there were usually not enough to go around.
Anti-submarine weaponry in World War Two was generally only effective *after* the submarine was detected. An escorting destroyer would not have been able to detect the submarine and prevent the Indianapolis's loss (high speed reduces the ability of surface ships to detect submarines), although it may have meant a ship was present to rescue survivors or counterattack the submarine. A heavy cruiser is much faster than a submarine (surfaced or submerged), and high speed plus a zig-zag course (which the Indianapolis should have been following but wasn't) will generally provide as much protection as possible against the submarine's first salvo of torpedoes.
The main reason to avoid including destroyers as escorts for a heavy cruiser is range. Destroyers have a much shorter operating range than cruisers, especially at high speeds, and the voyage from Honolulu to the Marianas would have been uncomfortably close to the maximum operating range of a WWII destroyer. Destroyers accompanying major task forces have to periodically refuel from supply ships or larger warships, which is time-consuming and creates a moment of vulnerability.
| [
"USS \"Indianapolis\" (CL/CA-35) was a heavy cruiser of the United States Navy. At 00:15 on 30 July 1945, she was struck on her starboard side by two Type 95 torpedoes, one in the bow and one amidships, from the Japanese submarine , captained by Commander Mochitsura Hashimoto, who initially thought he had spotted the . The explosions caused massive damage. \"Indianapolis\" took on a heavy list, (the ship had a great deal of added armament and gun firing directors added as the war went on and was top heavy) and settled by the bow. Twelve minutes later, she rolled completely over, then her stern rose into the air, and she plunged down. Some 300 of the 1,195 crewmen went down with the ship. With few lifeboats and many without lifejackets, the remainder of the crew was set adrift.\n",
"The \"Indianapolis\" had been on a secret mission, and due to a communications error, had not been reported as overdue (or missing). An estimated 900 men survived the sinking, but spent days floating in life jackets trying to fight off sharks. While only 317 were rescued out of a crew of 1199 who were aboard the \"Indianapolis\", Claytor's actions were widely credited by survivors with preventing an even greater loss of life.\n",
"In 1996, sixth-grade student Hunter Scott began his research on the sinking of \"Indianapolis\" for a class history project, an assignment which eventually led to a United States Congressional investigation. In October 2000, the United States Congress passed a resolution that Captain McVay's record should state that \"he is exonerated for the loss of \"Indianapolis\"\"; President Bill Clinton signed the resolution. The resolution noted that, although several hundred ships of the US Navy were lost in combat during World War II, McVay was the only captain to be court-martialed for the sinking of his ship. In July 2001, the United States Secretary of the Navy ordered McVay's official Navy record cleared of all wrongdoing.\n",
"\"Indiana\" thereafter withdrew to escort the carrier task force overnight. While operating off the islands in the early hours of 1 February, \"Indiana\" collided with \"Washington\". The ships were blacked out to prevent Japanese observers from spotting them, and in the darkness, \"Indiana\" turned in front of \"Washington\". \"Indiana\" was badly damaged, with the starboard propeller shaft destroyed and significant damaged inflicted on the belt armor and torpedo defense system. The ship had some of armor plating torn from her hull, and \"Washington\" had a section of her bow ripped away and lodged into \"Indiana\"s side. The accident killed three men and injured another six aboard \"Indiana\", one of whom later died. A subsequent inquiry into the accident placed the blame on \"Indiana\", faulting her crew for failing to inform the other ships in the unit about her course changes.\n",
"Shortly thereafter, the fires broke through the flight deck and heat and smoke made the ship's bridge unusable. At 10:46, Admiral Nagumo transferred his flag to the light cruiser . \"Akagi\" stopped dead in the water at 13:50 and her crew, except for Captain Taijiro Aoki and damage-control personnel, was evacuated. She continued to burn as her crew fought a losing battle against the spreading fires. The damage-control teams and Captain Aoki were evacuated from the still floating ship later that night.\n",
"At 00:15 on 30 July, \"Indianapolis\" was struck on her starboard side by two Type 95 torpedoes, one in the bow and one amidships, from the Japanese submarine , captained by Commander Mochitsura Hashimoto, who initially thought he had spotted the . The explosions caused massive damage. \"Indianapolis\" took on a heavy list (the ship had had a great deal of armament and gun-firing directors added as the war went on, and was therefore top-heavy) and settled by the bow. Twelve minutes later, she rolled completely over, then her stern rose into the air, and she plunged down. Some 300 of the 1,195 crewmen aboard went down with the ship. With few lifeboats and many without life jackets, the remainder of the crew was set adrift.\n",
"The wreck of \"Indianapolis\" is in the Philippine Sea. In July–August 2001, an expedition sought to find the wreckage through the use of side-scan sonar and underwater cameras mounted on a remotely operated vehicle. Four \"Indianapolis\" survivors accompanied the expedition, which was not successful. In June 2005, a second expedition was mounted to find the wreck. \"National Geographic\" covered the story and released it in July. Submersibles were launched to find any sign of wreckage. The only objects ever found, which have not been confirmed to have belonged to \"Indianapolis\", were numerous pieces of metal of varying size found in the area of the reported sinking position (this was included in the National Geographic program \"Finding of the USS \"Indianapolis\"\").\n"
] |
What adaptations do humans have that allow them to remain balanced without a tail? | During the time our tails were disappearing (and they still are, if you look at our skeletons), we began to evolve a sense called Equilibrioception—or balance.
Equilibrioception makes use of a variety of sensory input to keep us from falling over while walking or standing:
1. Visual cues, like the horizon and the horizontal angle of local references (e.g. flat surfaces, the level of others' eyes).
2. Vestibular system—there are specialized, liquid-filled canals in our ears that contain super-sensitive hairs that can track the internal movement of the liquid when the head changes position. This gives us information on the angular and rotational movements of our head, much like a smartphone's accelerometer.
3. Proprioception—the body's own perception of where it is in space, made possible by special nerves located within the joints and muscles attached to our skeletal system. These nerves give our limbs and joints a general sense of their distance relative to each other, and also clue them in on some motion/acceleration information by sensing the physical effort currently being exerted by muscles (e.g. during running, jumping, etc.). (A rough engineering analogy for fusing signals like these is sketched after this entry.) | [
"Manx (and other tail-suppressed breeds) do not exhibit problems with balance, Balance is controlled primarily by the inner ear. In cats, dogs and other large-bodied mammals, balance involves but is not dependent upon the tail (contrast rats, for whom the tail is a quite significant portion of their body mass).\n",
"Animal tails are used in a variety of ways. They provide a source of locomotion for fish and some other forms of marine life. Many land animals use their tails to brush away flies and other biting insects. Some species, including cats and kangaroos, use their tails for balance; and some, such as New World monkeys and opossums, have what are known as prehensile tails, which are adapted to allow them to grasp tree branches.\n",
"Their forelegs were shortened, but their hind legs were elongated. While this anatomy is reminiscent of small kangaroos and jerboas, suggesting a jumping locomotion, the structure of the tarsal bones hints at a specialization for terrestrial running. Perhaps these animals were capable of both modes of locomotion; running slowly in search for food, and jumping quickly to avoid threats. Additionally, the Messel specimens feature a surprisingly long tail, unique among modern placental mammals, formed by 40 vertebrae and probably used for balance.\n",
"Evolution has provided the human body with two distinct features: the specialization of the upper limb for visually guided manipulation and the lower limb's development into a mechanism specifically adapted for efficient bipedal gait. While the capacity to walk upright is not unique to humans, other primates can only achieve this for short periods and at a great expenditure of energy. The human adaption to bipedalism is not limited to the leg, however, but has also affected the location of the body's center of gravity, the reorganisation of internal organs, and the form and biomechanism of the trunk. In humans, the double S-shaped vertebral column acts as a great shock-absorber which shifts the weight from the trunk over the load-bearing surface of the feet. The human legs are exceptionally long and powerful as a result of their exclusive specialization for support and locomotion — in orangutans the leg length is 111% of the trunk; in chimpanzees 128%, and in humans 171%. Many of the leg's muscles are also adapted to bipedalism, most substantially the gluteal muscles, the extensors of the knee joint, and the calf muscles.\n",
"Humans, like most of the other apes, lack external tails, have several blood type systems, have opposable thumbs, and are sexually dimorphic. The comparatively minor anatomical differences between humans and chimpanzees are a result of human bipedalism. One difference is that humans have a far faster and more accurate throw than other animals. Humans are also among the best long-distance runners in the animal kingdom, but slower over short distances. Humans' thinner body hair and more productive sweat glands help avoid heat exhaustion while running for long distances.\n",
"All adaptations have a downside: horse legs are great for running on grass, but they can't scratch their backs; mammals' hair helps temperature, but offers a niche for ectoparasites; the only flying penguins do is under water. Adaptations serving different functions may be mutually destructive. Compromise and makeshift occur widely, not perfection. Selection pressures pull in different directions, and the adaptation that results is some kind of compromise.\n",
"Most bipedal animals move with their backs close to horizontal, using a long tail to balance the weight of their bodies. The primate version of bipedalism is unusual because the back is close to upright (completely upright in humans), and the tail may be absent entirely. Many primates can stand upright on their hind legs without any support. \n"
] |
"Rubbing Alcohol" is the main ingredient in most skin care (and other) products but we've all been told to basically not use it for anything except an antiseptic. Why? | If I remember correctly, there are two big reasons for this:
First, when the skin dries up, it flakes off and ends up in the pores, which will cause more blemishes than when you originally started.
Second, because it dries up so much, your skin will try to regain a balance and then overproduce oils, creating a long-term problem. | [
"All rubbing alcohols are unsafe for human consumption: isopropyl rubbing alcohols do not contain the ethyl alcohol of alcoholic beverages; ethyl rubbing alcohols are based on denatured alcohol, which is a combination of ethyl alcohol and one or more bitter poisons that make the substance toxic.\n",
"Product labels for rubbing alcohol include a number of warnings about the chemical, including the flammability hazards and its intended use only as a topical antiseptic and not for internal wounds or consumption. It should be used in a well-ventilated area due to inhalation hazards. Poisoning can occur from ingestion, inhalation, absorption, or consumption of rubbing alcohol.\n",
"Rubbing alcohol refers to either isopropyl alcohol (propan-2-ol) or ethanol based liquids, or the comparable British Pharmacopoeia defined surgical spirit, with isopropyl alcohol products being the most widely available. Rubbing alcohol is undrinkable even if it is ethanol based, due to the bitterants added.\n",
"They also have many industrial and household uses. The term \"rubbing alcohol\" has become a general non-specific term for either isopropyl alcohol (isopropanol) or ethyl alcohol (ethanol) rubbing-alcohol products.\n",
"Surfactants are commonly found in soaps and detergents. Solvents like alcohol are often used as antimicrobials. They are found in cosmetics, inks, and liquid dye lasers. They are used in the food industry, in processes such as the extraction of vegetable oil. \n",
"The term \"rubbing alcohol\" came into prominence in North America in the mid-1920s. The \"original\" rubbing alcohol was literally used as a liniment for massage; hence the name. This original rubbing alcohol was rather different from today's precisely formulated surgical spirit; in some formulations it was perfumed and included different additives, notably a higher concentration of methyl salicylate.\n",
"Alcohol-based hand rubs are extensively used in the hospital environment as an alternative to antiseptic soaps. Hand-rubs in the hospital environment have two applications: hygienic hand rubbing and surgical hand disinfection. Alcohol based hand rubs provide a better skin tolerance as compared to antiseptic soap.\n"
] |
When did the US Government begin doubting that China/Taiwan would ever retake the Chinese mainland? | The US was never really under any illusions that the nationalists would somehow turn things around in the Civil War after they retreated to Taiwan. Even before the end of WW2 there had been multiple American observers and experts in China who had reported the Communists enjoyed much broader popularity than the Nationalists, and by 1949 the GMD was very obviously overwhelmed.
Before the Chinese entered the Korean War, the US was expecting an invasion by the mainland to finish things off, and the US government had diplomatically indicated that they weren't going to do anything about it. Only when the Chinese entered the Korean War did the US send the 7th Fleet to the Taiwan Strait to prevent the invasion. They also began to provide the Taiwanese military with equipment and weapons.
After the war the US certainly hoped that the CCP would crumble, but their support of Taiwan was based on denying the CCP territory, and especially on keeping China's UN security council vote out of the hands of the Communists. There was no real belief that Taiwan could attack the CCP. | [
"On 16 December 1978, U.S. President Jimmy Carter announced that the U.S. would sever its official relationship with the Republic of China as of 1 January 1979. It was the most serious challenge to the Taiwan government since it lost its seat at the United Nations to the People's Republic of China in 1971. President Chiang Ching-kuo immediately postponed all elections without a definite deadline for its restoration. Tangwai, which had won steadily expanding support, was strongly frustrated and disappointed about Chiang's decision since it suspended the only legitimate method they could use to express their opinions.\n",
"Since the end of the Chinese civil war in 1949, the Republic of China was limited to Taiwan (taken from Japan in 1945, ceded by Qing China in 1895 - although renounced in 1952) and a few islands near Fujian, while the People's Republic of China controlled mainland China, and since 1950 also the island of Hainan. Both Chinese governments claimed sovereignty over all of China, and regard the other government as being in rebellion. Until 1971, the Republic of China was a permanent member of the UN Security Council with veto power. Since then, however, it was excluded in favor of the People's Republic of China, and since 1972, it was also excluded from all UN-subcommittees. Since the death of Chiang Kai-shek in 1975, Republic of China no longer aggressively asserts its exclusive mandate and most of the world's nations have since broken their official diplomatic ties with Republic of China (except for 21 nations including Holy See as of 2008). Nevertheless, most nations, as well as the People's Republic government, continue to maintain unofficial relations.\n",
"The United States did not formally recognize the People's Republic of China (PRC) for 30 years after its founding. Instead, the US maintained diplomatic relations with the Republic of China government on Taiwan, recognizing it as the sole legitimate government of China.\n",
"Moreover, Japan formally surrendered its claim to sovereignty over Taiwan on 28 April 1952, thus calling into serious doubt the authority of Japan to formally make such an assignment regarding the status of Taiwan over three months later on 5 August 1952. Indeed, British and American officials did not recognize any transfer of Taiwan's sovereignty to \"China\" in either of the post-war treaties.\n",
"On the other hand, this meant China lost the opportunity to reunify Taiwan. Initially, the United States had abandoned the KMT and expected that Taiwan would fall to Beijing anyway, so the basic U.S. policy was to \"wait and see\" on the assumption that Taiwan's fall to Communist China was inevitable. However, the North Korean invasion of South Korea, in the context of the Cold War, meant U.S. President Truman intervened again and dispatched the Seventh Fleet to \"neutralize\" the Formosa (Taiwan) Strait.\n",
"During the Pacific War, the United States and China were allies against Japan. In October 1945, a month after Japan's surrender, representatives of Chiang Kai-shek, on behalf of the Allied Powers, were sent to Formosa to accept the surrender of Japanese troops. However, during the period of the 1940s, there was no recognition by the United States Government that Taiwan had ever been incorporated into Chinese national territory. Chiang continued to remain suspicious of America's motives.\n",
"BULLET::::- Taiwanese historian pointed out: After World War II ended, Republic of China officials went to Taiwan to accept the surrender of Japanese forces on behalf of the Allied Powers. Although they claimed that it was \"Taiwan Retrocession\", it was actually a provisional military occupation and was not a transfer of territories of Taiwan and Penghu. A transfer of territory requires a conclusion of an international treaty in order to be valid. But before the government of the Republic of China was able to conclude a treaty with Japan, it was overthrown by the Chinese Communist party and fled its territory. Consequently, that attributed to the controversy of the \"Undetermined Status of Taiwan\" and the controversy over \"Taiwan Retrocession\".\n"
] |
How do physics and astronomy undergrad majors differ? | They are very similar, but you'd do better to get a degree in physics if you're really interested in high-level astronomy. Astronomy degrees can focus too much on what may or may not be relevant to your interests. It's better to get a broad understanding of physics, rather than an astronomy-based understanding of it. Your appreciation and understanding of astronomy will only benefit.
If you plan on going to grad school, many people recommend a math degree with either a dual degree in physics, or at least a minor. Almost everyone regrets not taking more math. | [
"The courses for physics major have much higher level than those two case that had been talked above. At the beginning of the college. Their courses have few difference with the physics courses for the general education of science major. After the first year, the physics majors need to go up and study many deeper knowledge. The first change of the course is that the scale of the courses is much more smaller than before due to the different major of students in high grade. And for the content of the course, the quantitative analysis is really important. Meanwhile, there is usually a solid of homework. The grades of the students are largely decided by the homework and exams. The non-academic part, such as particitation, discussion, would have little weight. Each year, there are some specific required courses. But students usually can make some change due to their own ability. Students who are enthusiastic can take the graduate level course in senior year. There are also some purely lab courses, which teach students how to do the advanced experiment and write the lab report.\n",
"Students may major in either natural sciences, music, visual arts, or humanities, though they study most subjects (those which are not related to their area of interest) in mixed classes. The science students choose one main subject, such as physics, chemistry, or biology, and they must also learn computer science and/or another subject.\n",
"A standard undergraduate physics curriculum consists of classical mechanics, electricity and magnetism, non-relativistic quantum mechanics, optics, statistical mechanics and thermodynamics, and laboratory experience. Physics students also need training in mathematics (calculus, differential equations, linear algebra, complex analysis, etc.), and in computer science.\n",
"There are two paths to earning a bachelor's degree (SB) in physics from MIT. The first, \"Course 8 Focused Option\", is for students intending to continue studying physics in graduate school. The track offers a rigorous education in various fields in fundamental physics including classical and quantum mechanics, statistical physics, general relativity, electrodynamics, and higher mathematics. \n",
"United States undergraduate physics curriculum, since many students who plan to continue to graduate school apply during the first half of the fourth year. It consists of 100 five-option multiple-choice questions covering subject areas including classical mechanics, electromagnetism, wave phenomena and optics, thermal physics, relativity, atomic and nuclear physics, quantum mechanics, laboratory techniques, and mathematical methods. The table below indicates the relative weights, as asserted by ETS, and detailed contents of the major topics.\n",
"Undergraduate physics curricula in American universities includes courses for students choosing an academic major in physics, as well as for students majoring in other disciplines for whom physics courses provide essential prerequisite skills and knowledge. The term \"physics major\" can refer to the academic major in physics or to a student or graduate who has chosen to major in physics.\n",
"The School of Physics offers a bachelor's degree in both pure and Applied Physics plus both master's and doctoral degrees in several fields. These degrees are technically granted by the School's parent organization, the Georgia Tech College of Sciences, and often awarded in conjunction with other academic units within Georgia Tech. The graduate program was initiated under Joseph Howey's leadership and the undergraduate program grew in stature to become one of the larger departments in the United States. Howey remained at the helm of the School of Physics for 28 years.\n"
] |
how do octopuses avoid giving themselves brain damage? | They don't really have a localized brain, like we do. Rather, it is spread throughout their body, with a lot of neural tissue in the tentacles. | [
"The octopus (along with cuttlefish) has the highest brain-to-body mass ratios of all invertebrates; it is also greater than that of many vertebrates. It has a highly complex nervous system, only part of which is localised in its brain, which is contained in a cartilaginous capsule. Two-thirds of an octopus's neurons are found in the nerve cords of its arms, which show a variety of complex reflex actions that persist even when they have no input from the brain. Unlike vertebrates, the complex motor skills of octopuses are not organised in their brain via an internal somatotopic map of its body, instead using a nonsomatotopic system unique to large-brained invertebrates.\n",
"A benthic (bottom-dwelling) octopus typically moves among the rocks and feels through the crevices. The creature may make a jet-propelled pounce on prey and pull it towards the mouth with its arms, the suckers restraining it. Small prey may be completely trapped by the webbed structure. Octopuses usually inject crustaceans like crabs with a paralysing saliva then dismember them with their beaks. Octopuses feed on shelled molluscs either by forcing the valves apart, or by drilling a hole in the shell to inject a nerve toxin. It used to be thought that the hole was drilled by the radula, but it has now been shown that minute teeth at the tip of the salivary papilla are involved, and an enzyme in the toxic saliva is used to dissolve the calcium carbonate of the shell. It takes about three hours for \"O. vulgaris\" to create a hole. Once the shell is penetrated, the prey dies almost instantaneously, its muscles relax, and the soft tissues are easy for the octopus to remove. Crabs may also be treated in this way; tough-shelled species are more likely to be drilled, and soft-shelled crabs are torn apart.\n",
"Primarily, the octopus situates itself in a shelter where a minimal amount of its body is presented to the external water, which would pose a problem for an organism that breathes solely through its skin. When it does move, most of the time it is along the ocean or sea floor, in which case the underside of the octopus is still obscured. This crawling increases metabolic demands greatly, requiring they increase their oxygen intake by roughly 2.4 times the amount required for a resting octopus. This increased demand is met by an increase in the stroke volume of the octopus' heart.\n",
"A science-based report from the University of British Columbia to the Canadian Federal Government has been quoted as stating \"The cephalopods, including octopus and squid, have a remarkably well developed nervous system and may well be capable of experiencing pain and suffering.\"\n",
"Avoidance learning in octopuses has been known since 1905. Noxious stimuli, for example electric shocks, have been used as \"negative reinforcers\" for training octpuses, squid and cuttlefish in discrimination studies and other learning paradigms. Repeated exposure to noxious stimuli can have long-term effects on behaviour. It has been shown that in octopuses, electric shocks can be used to develop a passive avoidance response leading to the cessation of attacking a red ball.\n",
"Octopuses are highly intelligent, possibly more so than any other order of invertebrates. The level of their intelligence and learning capability are debated, but maze and problem-solving studies show they have both short- and long-term memory. Octopus have a highly complex nervous system, only part of which is localized in their brain. Two-thirds of an octopus' neurons are found in the nerve cords of their arms. Octopus arms show a variety of complex reflex actions that persist even when they have no input from the brain. Unlike vertebrates, the complex motor skills of octopuses are not organized in their brain using an internal somatotopic map of their body, instead using a non-somatotopic system unique to large-brained invertebrates. Some octopuses, such as the mimic octopus, move their arms in ways that emulate the shape and movements of other sea creatures.\n",
"Octopus remained in prison until the Martian invasion decimated much of Chicago and allowed CyberFace to assert control. The police tried to get Octopus to help them bring down CyberFace but he escaped and gathered up the remains of his former pawn when his unstable body blew up. Octopus took this disembodied head and attached it to the body of BrainiApe.\n"
] |
why did we go from round headphone wires to flat ones? | i'm not sure what you mean. all headphones i've bought in the past few years have round wires. can you give an example of these "flat" wires? | [
"Owing to the fact that a round wire will create air gaps that are not electrically used, the fill factor is always smaller than one. In order to achieve higher fill factors, rectangular or flat wire can be used. This can be wound on flat or upright.\n",
"Early speaker cable was typically stranded copper wire, insulated with cloth tape, waxed paper or rubber. For portable applications, common lampcord was used, twisted in pairs for mechanical reasons. Cables were often soldered in place at one end. Other terminations were binding posts, terminal strips, and spade lugs for crimp connections. Two-conductor ¼-inch tip-sleeve phone jacks came into use in the 1920s and '30s as convenient terminations.\n",
"One of the reasons behind the adoption of that particular design was that it was cheap to make, with the flat pins being able to be easily stamped out of sheet brass, in contrast to round pins or thicker rectangular ones used in other countries. This was also a consideration when the Chinese authorities officially adopted the design in relatively recent times, despite the considerable inroads the British plug had made, because of its use in Hong Kong. The Chinese socket is normally mounted with the earth pin at the top. This is considered to offer some protection should a conductive object fall between the plug and the socket. The Chinese CPCS-CCC (Chinese 10 A/250 V) plugs and socket-outlets are almost identical, differing by only 1 mm longer pins and installed \"upside down\". Though AS 3112 plugs will physically connect, they may not be electrically compatible to the Chinese 220 V standards. Originally there was no convention as to the direction of the earth pin. Often it was facing upwards, as socket-outlets in China now do but it could also be downwards or horizontal, in either direction.\n",
"A short-barrelled version of the phone plug was used for 20th century high-impedance mono headphones, and in particular those used in World War II aircraft. These have become rare. It is physically possible to use a normal plug in a short socket, but a short plug will neither lock into a normal socket nor complete the tip circuit.\n",
"Historically, all wire was round. Advances in technology now allow the manufacture of jewelry wire with different cross-sectional shapes, including circular, square, and half-round. Half round wire is often wrapped around other pieces of wire to connect them. Square wire is used for its appearance: the corners of the square add interest to the finished jewelry. Square wire can be twisted to create interesting visual effects.\n",
"First introduced in 1965, the Trimline included a lighted dial and was encased in a sleek, curved plastic housing that took up much less space than earlier Western Electric telephones. However, the glass-smooth and shallowly-curved plastic handset proved difficult to retain between cheek and shoulder for hands-free communication without slipping, and this problem was never corrected over the life of the model line. Cushioned clamp-on adaptors were manufactured and sold by third parties to make it easier to cradle the handset, but these add-ons would greatly compromise the aesthetic appearance of the telephone.\n",
"Disadvantages of single wire operation such as crosstalk and hum from nearby AC power wires had already led to the use of twisted pairs and, for long distance telephones, four-wire circuits. Users at the beginning of the 20th century did not place long distance calls from their own telephones but made an appointment to use a special sound proofed long distance telephone booth furnished with the latest technology.\n"
] |
American and Russian submarines during WW2 | During WWII the United States Navy used the Mark 14 torpedo, which had a speed of 46 knots. Now, it is unclear what class of Soviet submarine was being used, but I can assure you that it would have been hopelessly outmatched in terms of submerged speed. From what I found, the submerged top speeds of most submarines of that era topped out at around 10-14 knots (a quick closing-speed calculation is sketched after this entry). And actually, that torpedo would be able to catch the fastest submarine in the world, the Soviet K-222, which had a top speed of 44.7 knots and was commissioned in 1969. | [
"During the Cold War, the United States and the Soviet Union maintained large submarine fleets that engaged in cat-and-mouse games. This continues today, on a much-reduced scale. The Soviet Union suffered the loss of at least four submarines during this period: \"K-129\" was lost in 1968 (which the CIA attempted to retrieve from the ocean floor with the Howard Hughes-designed ship named Glomar Explorer), \"K-8\" in 1970, \"K -219\" in 1986 (subject of the film \"Hostile Waters\"), and \"Komsomolets\" (the only Mike class submarine) in 1989 (which held a depth record among the military submarines—1000 m, or 1300 m according to the article K-278). Many other Soviet subs, such as \"K-19\" (first Soviet nuclear submarine, and first Soviet sub at North Pole) were badly damaged by fire or radiation leaks. The United States lost two nuclear submarines during this time: USS \"Thresher\" and \"Scorpion\". The Thresher was lost due to equipment failure, and the exact cause of the loss of the Scorpion is not known.\n",
"During the Cold War, the US and the Soviet Union maintained large submarine fleets that engaged in cat-and-mouse games. The Soviet Union lost at least four submarines during this period: was lost in 1968 (a part of which the CIA retrieved from the ocean floor with the Howard Hughes-designed ship \"Glomar Explorer\"), in 1970, in 1986, and in 1989 (which held a depth record among military submarines—). Many other Soviet subs, such as (the first Soviet nuclear submarine, and the first Soviet sub to reach the North Pole) were badly damaged by fire or radiation leaks. The US lost two nuclear submarines during this time: due to equipment failure during a test dive while at its operational limit, and due to unknown causes.\n",
"Submarines of World War II represented a wide range of capabilities with many types of varying specifications produced by dozens of countries. The principle countries engaged in submarine warfare during the war were Germany, Italy, Japan, the United States, United Kingdom and the Soviet Union. The Italian and Soviet fleets were the largest. While the German and US fleets fought anti-shipping campaigns (in the Atlantic and Pacific respectively), the British and Japanese submarines were mostly engaged against enemy warships.\n",
"During World War I, the Russian subs operated together with the British submarine flotilla in the Baltic against the German Navy. This all changed with the October Revolution and the Finnish Civil War.\n",
"The American Holland-class submarines, also AG class or A class, were Holland 602 type submarines used by the Imperial Russian and Soviet Navies in the early 20th century. The small submarines participated in the World War I Baltic Sea and Black Sea theatres and a handful of them also saw action during World War II.\n",
"Japan, the United States, Great Britain, The Netherlands, and Australia all employed anti-submarine forces in the Pacific Theater during World War II. Because the Japanese Navy tended to utilize its submarines against capital ships such as cruisers, battleships and aircraft carriers, U.S. and Allied anti-submarine efforts concentrated their work in support of fleet defense.\n",
"As Cold War tensions increased, the United States Navy formed modernized hunter-killer groups in anticipation of potential use of Soviet submarines to intercept North American shipping to European NATO allies. As modern anti-submarine aircraft became too large to operate from escort carriers, s were reclassified as anti-submarine warfare carriers (CVS). Some second world war destroyers were reclassified as escort destroyers (DDE) with guns and torpedoes replaced by RUR-4 Weapon Alpha or hedgehog. Operational doctrine anticipated each CVS would be accompanied by eight DDEs. Four DDEs would provide a close screen for the CVS while the other four attacked submarines detected by aircraft. The cost of Vietnam War combat operations prevented replacement of these ASW ships when they reached the end of their design life. Newly operational SOSUS and shore-based Lockheed P-3 Orion maritime patrol aircraft assumed the mid-ocean ASW search and attack role of the disappearing CVS hunter-killer groups.\n"
] |
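To put that speed gap in numbers, here is a rough, illustrative closing-speed estimate (not from the original answer). The speeds are the figures quoted in the answer above; the 2,000-yard initial separation is an arbitrary assumption.

```python
# Rough intercept estimate for a 46-knot torpedo chasing a 14-knot submarine.
# 1 knot = 1 nautical mile/hour; 1 nautical mile is about 2025.37 yards.
YARDS_PER_MIN_PER_KNOT = 2025.37 / 60

torpedo_speed_kt = 46      # Mark 14 torpedo speed as quoted in the answer
submarine_speed_kt = 14    # generous WW2-era submerged top speed
separation_yards = 2000    # assumed range at the moment of firing

closing_speed_kt = torpedo_speed_kt - submarine_speed_kt      # 32 knots
minutes_to_intercept = separation_yards / (closing_speed_kt * YARDS_PER_MIN_PER_KNOT)

print(f"Closing speed: {closing_speed_kt} kt")
print(f"Time to close {separation_yards} yards: ~{minutes_to_intercept:.1f} minutes")
# ~1.9 minutes: a fleeing submarine cannot open the range against the torpedo.
```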
How was construction in ancient cities prioritized? | Dur Sharrukin was a brand-new capital city built by Sargon II from 716 BC to 706 BC. The outer walls measured 1.76 km x 1.635 km, had 157 towers and seven gates. There was a large barracks built in the southwest quarter of the city. The palace and three important temples were built on a terrace on the northern edge of the city. There was a small working-class residential district near the ceremonial core of the city. However, more than eighty percent of the land inside the city wall remained undeveloped. Sargon II soon died, and the Assyrian court, which had just recently moved into Dur Sharrukin, moved to another capital city. | [
"The construction of cities was the end product of trends which began in the Neolithic Revolution. The growth of the city was partly planned and partly organic. Planning is evident in the walls, high temple district, main canal with harbor, and main street. The finer structure of residential and commercial spaces is the reaction of economic forces to the spatial limits imposed by the planned areas resulting in an irregular design with regular features. Because the Sumerians recorded real estate transactions it is possible to reconstruct much of the urban growth pattern, density, property value, and other metrics from cuneiform text sources.\n",
"The pre-Classical and Classical periods saw a number of cities laid out according to fixed plans, though many tended to develop organically. Designed cities were characteristic of the Minoan, Mesopotamian, Harrapan, and Egyptian civilisations of the third millennium BC (see Urban planning in ancient Egypt). The first recorded description of urban planning appears in the Epic of Gilgamesh: \"Go up on to the wall of Uruk and walk around. Inspect the foundation platform and scrutinise the brickwork. Testify that its bricks are baked bricks, And that the Seven Counsellors must have laid its foundations. One square mile is city, one square mile is orchards, one square mile is claypits, as well as the open ground of Ishtar's temple.Three square miles and the open ground comprise Uruk. Look for the copper tablet-box, Undo its bronze lock, Open the door to its secret, Lift out the lapis lazuli tablet and read.\" \n",
"Factors such as wealth and high population densities in cities forced the ancient Romans to discover new architectural solutions of their own. The use of vaults and arches, together with a sound knowledge of building materials, enabled them to achieve unprecedented successes in the construction of imposing infrastructure for public use. Examples include the aqueducts of Rome, the Baths of Diocletian and the Baths of Caracalla, the basilicas and Colosseum. These were reproduced at a smaller scale in most important towns and cities in the Empire. Some surviving structures are almost complete, such as the town walls of Lugo in Hispania Tarraconensis, now northern Spain. The administrative structure and wealth of the empire made possible very large projects even in locations remote from the main centers, as did the use of slave labor, both skilled and unskilled.\n",
"Architecture developed significantly in the 2nd century BC with the arrival of the Romans, who called the Iberian Peninsula Hispania. Conquered settlements and villages were often modernised following Roman models, with the building of a forum, streets, theatres, temples, baths, aqueducts and other public buildings. An efficient array of roads and bridges was built to link the cities and other settlements.\n",
"There is evidence of urban planning and designed communities dating back to the Mesopotamian, Indus Valley, Minoan, and Egyptian civilizations in the third millennium BCE. Archeologists studying the ruins of cities in these areas find paved streets that were laid out at right angles in a grid pattern. The idea of a planned out urban area evolved as different civilizations adopted it. Beginning in the 8th century BCE, Greek city states were primarily centered on orthogonal (or grid-like) plans. The ancient Romans, inspired by the Greeks, also used orthogonal plans for their cities. City planning in the Roman world was developed for military defense and public convenience. The spread of the Roman Empire subsequently spread the ideas of urban planning. As the Roman Empire declined, these ideas slowly disappeared. However, many cities in Europe still held onto the planned Roman city center. Cities in Europe from the 9th to 14th centuries, often grew organically and sometimes chaotically. But in the following centuries some newly created towns were built according to preconceived plans, and many others were enlarged with newly planned extensions. From the 15th century on, much more is recorded of urban design and the people that were involved. In this period, theoretical treatises on architecture and urban planning start to appear in which theoretical questions are addressed and designs of towns and cities are described and depicted. During the Enlightenment period, several European rulers ambitiously attempted to redesign capital cities. During the Second French Republic, Baron Georges-Eugène Haussmann, under the direction of Napoleon III, redesigned the city of Paris into a more modern capital, with long, straight, wide boulevards.\n",
"Social elements such as wealth and high population densities in cities forced the ancient Romans to go discover new (architectural) solutions of their own. The use of vaults and arches together with a sound knowledge of building materials, for example, enabled them to achieve unprecedented successes in the construction of imposing structures for public use. Examples include the aqueducts of Rome, the Baths of Diocletian and the Baths of Caracalla, the basilicas and perhaps most famously of all, the Colosseum. They were reproduced at smaller scale in most important towns and cities in the Empire. Some surviving structures are almost complete, such as the town walls of Lugo in Hispania Tarraconensis, or northern Spain.\n",
"Cities, characterized by population density, symbolic function, and urban planning, have existed for thousands of years. In the conventional view, civilization and the city both followed from the development of agriculture, which enabled production of surplus food, and thus a social division of labour (with concomitant social stratification) and trade. Early cities often featured granaries, sometimes within a temple. A minority viewpoint considers that cities may have arisen without agriculture, due to alternative means of subsistence (fishing), to use as communal seasonal shelters, to their value as bases for defensive and offensive military organization, or to their inherent economic function. Cities played a crucial role in the establishment of political power over an area, and ancient leaders such as Alexander the Great founded and created them with zeal.\n"
] |
why do most unknown calls i receive on my home phone end up having no one on the other end? | They're usually from call centres where they have a machine automatically call numbers on a pre-assigned list. Then, when a voice is detected on your end, someone is connected on their end to talk to you. It saves them time, even if it does result in people going "Huh. No one there" and hanging up before they get a chance. (A rough sketch of this "predictive dialer" logic follows this entry.) | [
"Many of these towns have in fact refused to merge, leaving callers with more digits to dial when making local calls. This is partially balanced by not having to dial an area code for the neighboring city.\n",
"Because the SIT is well known in many countries, callers can understand that their call has failed, even though they do not understand the language of the recorded announcement (e.g., when calling internationally) instead of assuming the recording is voicemail or some other intended function.\n",
"Accidental calls are often cited as being one of the more annoying consequences of cell phone usage. Given the haphazard nature of inadvertent dialing, most actual misconnections do not result from the selection of random numbers. Instead, pocket dialing frequently triggers the \"recently dialed\" and \"contact\" lists that are contained within modern cell phones. The caller is frequently unaware that the call has taken place, whereas the recipient of the call often hears background conversation and background noises such as the rustling of clothes. Due to the dialing of common numbers, the recipient is likely to know the caller, and may overhear conversations that the caller would not want them to hear.\n",
"The phone rings again with no answer, and then the line is cut. No one can get a signal on their cell phones. The group decide to leave but when they run out to the car they see that it is missing. A van pulls up in the driveway, scaring the group back into the house. As everyone tries to get a signal on their phones again, all the power in the house goes out. Miriam finally gets a signal on her phone and dials 9-1-1, but the call drops out.\n",
"Despite its common usage to address people who call with no one answering the phone, the \"here\" here is semantically contradictory to one's absence. Nevertheless, this is considered normal for most people as speakers have to project themselves as answering the phone when in fact they are not physically. \n",
"Few families had telephones, relying instead on phone booths located about 100 feet apart. When a phone call would come, whoever was closest at the moment would answer, while the neighborhood children would run to see who the call was for, then pass the word to that person.\n",
"In addition to the inconvenience and embarrassment that may result from an erroneously dialed number, the phenomenon can have other consequences including using up a phone user's airtime minutes. Accidental calls, if not hung up immediately, tie up the recipient's phone line. If this is a landline, the recipient may have difficulty in disconnecting the call in order to use the phone, as networks sometimes define a timeout period between the recipient hanging up and the call actually being cleared.\n"
] |
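The silent calls described above are a side effect of predictive dialing: the dialer calls more numbers than there are free agents, betting that only a fraction will answer. Below is a minimal, purely illustrative sketch of that logic; the function name, parameters, and default rates are hypothetical and not taken from any real dialer software.

```python
import random

def predictive_dial(numbers, free_agents, overdial_factor=1.5, answer_rate=0.3):
    """Dial more lines than there are free agents; answered calls that find
    no free agent produce the silent 'no one there' experience."""
    lines_to_dial = int(free_agents * overdial_factor)
    connected, silent = 0, 0
    for _number in numbers[:lines_to_dial]:
        if random.random() < answer_rate:   # the callee picks up
            if free_agents > 0:
                free_agents -= 1
                connected += 1              # an agent takes the call
            else:
                silent += 1                 # dead air, then the dialer hangs up
    return connected, silent

# Example: 100 numbers in the queue, 10 agents free right now.
print(predictive_dial(list(range(100)), free_agents=10))
```

Any overdial factor above 1.0 trades agent idle time for some fraction of answered calls that reach no free agent, which is exactly the silence the question describes.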
Why is Zeta(0) equal to -1/2? | This is because the zeta function is not defined to be equal to that sum on the entire complex plane; it is only given by the sum you're looking at when Re(s) > 1. The Riemann Zeta function is actually defined to be the analytic continuation of the function given by the sum in the half plane Re(s) > 1.
There isn't one very easy proof of the fact that ζ(0) = -1/2 without first developing some tools about the zeta function. If you're willing to assume some equations, then [Wikipedia has a clean proof](_URL_0_). (A short numerical check is also sketched after this entry.) | [
"The Riemann zeta function ζ(\"s\") is a function whose argument \"s\" may be any complex number other than 1, and whose values are also complex. It has zeros at the negative even integers; that is, ζ(\"s\") = 0 when \"s\" is one of −2, −4, −6, ... These are called its \"trivial zeros\". However, the negative even integers are not the only values for which the zeta function is zero. The other ones are called \"non-trivial zeros\". The Riemann hypothesis is concerned with the locations of these non-trivial zeros, and states that:\n",
"The Riemann zeta function is one of the most significant functions in mathematics because of its relationship to the distribution of the prime numbers. The zeta function is defined for any complex number with real part greater than 1 by the following formula:\n",
"where ζ is the Riemann zeta function. Keeping Grandi's series in mind, this relation explains why ζ(0) = −⁄; see also 1 + 1 + 1 + 1 + · · ·. The relation also implies a much more important result. Since \"η\"(\"z\") and (1 − 2) are both analytic on the entire plane and the latter function's only zero is a simple zero at \"z\" = 1, it follows that ζ(\"z\") is meromorphic with only a simple pole at \"z\" = 1.\n",
"Note that the ratio of the zeta functions is well defined, even for \"n\" \"s\" − 1 because the series representation of the zeta function can be analytically continued. This does not change the fact that the moments are specified by the series itself, and are therefore undefined for large \"n\".\n",
"Zeta (uppercase Ζ, lowercase ζ; , , classical or \"zē̂ta\"; \"zíta\") is the sixth letter of the Greek alphabet. In the system of Greek numerals, it has a value of 7. It was derived from the Phoenician letter zayin . Letters that arose from zeta include the Roman Z and Cyrillic З.\n",
"In mathematics, the arithmetic zeta function is a zeta function associated with a scheme of finite type over integers. The arithmetic zeta function generalizes the Riemann zeta function and Dedekind zeta function to higher dimensions. The arithmetic zeta function is one of the most-fundamental objects of number theory.\n",
"At rational arguments the Hurwitz zeta function may be expressed as a linear combination of Dirichlet L-functions and vice versa: The Hurwitz zeta function coincides with Riemann's zeta function ζ(\"s\") when \"q\" = 1, when \"q\" = 1/2 it is equal to (2−1)ζ(\"s\"), and if \"q\" = \"n\"/\"k\" with \"k\" 2, (\"n\",\"k\") 1 and 0 \"n\" \"k\", then\n"
] |
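As a concrete illustration of the answer above (not part of the original answer): one standard route to ζ(0) uses the Dirichlet eta function relation quoted in the contexts, ζ(s) = η(s) / (1 − 2^(1−s)). Since η(0) = 1/2 (the Abel sum of Grandi's series 1 − 1 + 1 − 1 + · · ·), the analytically continued value is ζ(0) = (1/2)/(1 − 2) = −1/2. A minimal numerical check, assuming the third-party mpmath library (any arbitrary-precision zeta implementation would do):

```python
from mpmath import mp, zeta, altzeta

mp.dps = 25  # work with 25 significant digits

# The analytically continued Riemann zeta function at s = 0:
print(zeta(0))                                # -0.5

# Cross-check via the Dirichlet eta (alternating zeta) relation
# zeta(s) = eta(s) / (1 - 2**(1 - s)), valid for s != 1:
s = mp.mpf(0)
print(altzeta(s) / (1 - mp.mpf(2)**(1 - s)))  # also -0.5, since eta(0) = 1/2
```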
Why does the metal from meteorites have such a distinctive zig-zag pattern? | That pattern, called a [Widmanstatten pattern](_URL_1_) is due to the crystallization of iron and nickel minerals in the meteorite cooling very slowly. Here, 'very slowly' means a few hundred or thousand degrees C every *million years*. This slow cooling allows for large crystals of these minerals to form. They are actually interlaced crystals of two different alloys of iron and nickel. One type basically grows within the other type. [Here's an excellent review that explains the formation](_URL_0_).
The patterns are visible when meteorites are cut, polished, and etched using nitric acid or ferric chloride. These chemicals dissolve different minerals at different rates so you can eat away at one of the alloys more than the other, giving contrast to the two regions. | [
"When an iron meteorite is forged into a tool or weapon, the Widmanstätten patterns remain, but become stretched and distorted. The patterns usually cannot be fully eliminated by blacksmithing, even through extensive working. When a knife or tool is forged from meteoric iron and then polished, the patterns appear in the surface of the metal, albeit distorted, but they tend to retain some of the original octahedral shape and the appearance of thin lamellae criss-crossing each other. Pattern-welded steels such as Damascus steel also bear patterns, but they are easily discernible from any Widmanstätten pattern.\n",
"This type of rock formation and weathering process has happened in many other places locally and throughout the world, but what makes Meteora's appearance special is firstly, the uniformity of the sedimentary rock constituents deposited over millions of years leaving few signs of vertical layering, and secondly, the localised abrupt vertical weathering.\n",
"The meteorite was formed from nebular dust and gas during the early formation of the Solar System. It is a \"stony\" meteorite, as opposed to an \"iron,\" or \"stony iron,\" the other two general classes of meteorite. Most Allende stones are covered, in part or in whole, by a black, shiny crust created as the stone descended at great speed through the atmosphere as it was falling towards the earth from space. This causes the exterior of the stone to become very hot, melting it, and forming a glassy \"fusion crust.\"\n",
"As meteoroids are heated during atmospheric entry, their surfaces melt and experience ablation. They can be sculpted into various shapes during this process, sometimes resulting in shallow thumbprint-like indentations on their surfaces called regmaglypts. If the meteoroid maintains a fixed orientation for some time, without tumbling, it may develop a conical \"nose cone\" or \"heat shield\" shape. As it decelerates, eventually the molten surface layer solidifies into a thin fusion crust, which on most meteorites is black (on some achondrites, the fusion crust may be very light colored). On stony meteorites, the heat-affected zone is at most a few mm deep; in iron meteorites, which are more thermally conductive, the structure of the metal may be affected by heat up to below the surface. Reports vary; some meteorites are reported to be \"burning hot to the touch\" upon landing, while others are alleged to have been cold enough to condense water and form a frost.\n",
"The crystalline patterns become visible when the meteorites are cut, polished, and acid etched, because taenite is more resistant to the acid. In the picture shown, the broad white bars are \"kamacite\" (dimensions in the mm-range), and the thin line-like ribbons are \"taenite\". The dark mottled areas are called \"plessite\".\n",
"In 1808, he independently discovered some metallographic patterns, now called Widmanstätten patterns in iron meteorites, by flame-heating a slab of Hraschina meteorite. The different iron alloys of meteorites oxidized at different rates during heating, causing color and luster differences.\n",
"Due to the heterogeneous structure of Seymchan, there are two types of specimens: with or without olivine crystals. It is worthy to note that the specimen pictured to the left shows an interesting, seldom seen feature of iron meteorites. The Widmanstätten pattern on the left hand side of the specimen is visibly bent. This is caused by the shearing of the meteorite as it broke up during atmospheric entry and serves as testimony of the violent experience a meteor is subject to as it falls through the atmosphere.\n"
] |
How does parasitism between a host and a parasite of different domains work? | They aren't really tinkering with the code so much as producing an environment that induces the host to do something. It's more like coaxing a pearl to form around a bead than genetic engineering (though the analogy isn't perfect). | [
"Parasitism is a kind of symbiosis, a close and persistent long-term biological interaction between a parasite and its host. Unlike commensalism and mutualism, the parasitic relationship harms the host, either feeding on it or, as in the case of intestinal parasites, consuming some of its food. However, parasites are different from saprophytes. Because parasites interact with other species, they can readily act as vectors of pathogens, causing disease. Predation is by definition not a symbiosis, as the interaction is brief, but the entomologist E. O. Wilson has characterised parasites as \"predators that eat prey in units of less than one\".\n",
"Parasites follow a wide variety of evolutionary strategies, placing their hosts in an equally wide range of relationships. Parasitism implies host–parasite coevolution, including the maintenance of gene polymorphisms in the host, where there is a trade-off between the advantage of resistance to a parasite and a cost such as disease caused by the gene.\n",
"As it forces its way into the host cell, the parasite forms a parasitophorous vacuole (PV) membrane from the membrane of the host cell. The PV encapsulates the parasite, and is both resistant to the activity of the endolysosomal system, and can take control of the host's mitochondria and endoplasmic reticulum.\n",
"In evolutionary biology, parasitism is a relationship between species, where one organism, the parasite, lives on or in another organism, the host, causing it some harm, and is adapted structurally to this way of life. The entomologist E. O. Wilson has characterised parasites as \"predators that eat prey in units of less than one\". Parasites include protozoans such as the agents of malaria, sleeping sickness, and amoebic dysentery; animals such as hookworms, lice, mosquitoes, and vampire bats; fungi such as honey fungus and the agents of ringworm; and plants such as mistletoe, dodder, and the broomrapes. There are six major parasitic strategies of exploitation of animal hosts, namely parasitic castration, directly transmitted parasitism (by contact), trophically transmitted parasitism (by being eaten), vector-transmitted parasitism, parasitoidism, and micropredation.\n",
"Parasitism is a relationship between species, where one organism, the parasite, lives on or in another organism, the host, causing it some harm, and is adapted structurally to this way of life. The parasite either feeds on the host, or, in the case of intestinal parasites, consumes some of its food.\n",
"A parasite may be passively transported into a nest by a group member or may actively search for the nest; once inside, parasite transmission can be vertical (from mother to daughter colony into the next generation) or horizontally (between/within colonies). In eusocial insects, the most frequent defence against parasite uptake into the nest is to prevent infection during and/or after foraging, and a wide range of active and prophylactic mechanisms have evolved to this end.\n",
"In a parasitic relationship, the parasite benefits while the host is harmed. Parasitism takes many forms, from endoparasites that live within the host's body to ectoparasites and parasitic castrators that live on its surface and micropredators like mosquitoes that visit intermittently. Parasitism is an extremely successful mode of life; as many as half of all animals have at least one parasitic phase in their life cycles, and it is also frequent in plants and fungi. Moreover, almost all free-living animal species are hosts to parasites, often of more than one species.\n"
] |
We all used to eat boogers: what effect could/did that have? | Unfortunately I can't find the study, but a while ago (2-3 years ago) I ran across an article suggesting that eating boogers was a pseudo-vaccination, killing whatever got caught and allowing the inert form to be ingested to create antibodies. However, due to the levels of pollution in most places, it's probably not too healthy. | [
"Boredom, stress, habit and addiction are all possible causes of cribbing and wind-sucking. It was proposed in a 2002 study that the link between intestinal conditions such as gastric inflammation or colic and abnormal oral behavior was attributable to environmental factors. There is evidence that stomach ulcers may be correlated to a horse becoming a cribber.\n",
"Mucophagy has also been referred to as a \"tension phenomenon\" based on children's ability to function in their environment. The different degrees of effectively fitting in socially may indicate psychiatric disorders or developmental stress reactions. However, most parents view these habits as pathological issues. Moreover, Andrade and Srihari cited a study performed by Sidney Tarachow of the State University of New York which reported that people who ate their boogers found them \"tasty.\"\n",
"BULLET::::- Purging: May use laxatives, diet pills, ipecac syrup, or water pills to flush food out of their system after eating or may engage in self-induced vomiting though this is a more common symptom of bulimia.\n",
"When Boog becomes sick from eating too many candy bars, events quickly spiral out of control, as the two raid the town's grocery store. Elliot escapes before Boog is caught by a friend of Beth's, police officer Gordy. At the nature show, Elliot being chased by Shaw, sees Boog, which \"attacks\" him. This causes the whole audience to panic. Shaw attempts to shoot Boog and Elliot, but Beth sedates them both with a tranquilizer gun just before Shaw fires his gun. Shaw flees before Gordy can arrest him for shooting a gun in the town. The two troublemakers are banned from the town and into the Timberline National Forest, only three days before open season starts, but they are relocated above the waterfalls, where they will be safe from the hunters.\n",
"Stefan Gates in his book \"Gastronaut\" discusses eating dried nasal mucus, and says that 44% of people he questioned said they had eaten their own dried nasal mucus in adulthood and said they liked it. As mucus filters airborne contaminants, eating it could be thought to be unhealthy; Gates comments that \"our body has been \"built\" to consume snot\", because the nasal mucus is normally swallowed after being moved inside by the motion of the cilia. Friedrich Bischinger, a lung specialist at Privatklinik Hochrum in Innsbruck, says that nose-picking and eating could actually be beneficial for the immune system.\n",
"Stefan Gates in his book \"Gastronaut\" discusses eating dried nasal mucus, and says that 44% of people he questioned said they had eaten their own dried nasal mucus in adulthood and said they liked it. As mucus filters airborne contaminants, eating it could be thought to be unhealthy; Gates comments that \"our body has been \"built\" to consume snot\", because the nasal mucus is normally swallowed after being moved inside by the motion of the cilia. Friedrich Bischinger, a lung specialist at Privatklinik Hochrum in Innsbruck, says that nose-picking and eating could actually be beneficial for the immune system.\n",
"Adult flukes are known to be quite harmless, as they do not attack on the host tissue. It is the immature flukes which are most damaging as they get attached to the intestinal wall, literally and actively sloughing off of the tissue. This necrosis is indicated by haemorrhage in faeces, which in turn is a sign of severe enteritis. Under such condition the animals become anorexic and lethargic. It is often accompanied by pronounced diarrhoea, dehydration, oedema, polydipsia, anaemia, listlessness and weight loss. In sheep profuse diarrhoea usually develops two to four weeks after initial infection. If infection is not properly attended death can ensue within 20 days, and in a farm mortality can be very high. In fact there are intermittent reports of mortality as high as 80% among sheep and cattle. Sometimes chronic form is also seen with severe emaciation, anaemia, rough coat, mucosal oedema, thickened duodenum and oedema in the sub maxillary space. The terminally sick animals lie prostrate on the ground, completely emaciated until they die. In buffalos, severe haemorrhage was found to be associated with liver cirrhosis and nodular hepatitis.\n"
] |
when you jump into a cold lake (say 60°f or ~15°c) why does the water no longer feel cold after about 5 minutes? | Your body has mechanisms in place to warm you up should you be in a cold environment. Dilated blood vessels provide a flush of warmth, and shivering also produces warmth. By doing this, your body can increase its internal temperature and keep you a bit more comfortable. Stay in too long though, and your vessels will end up constricting, because your body deems it's too cold and ends up preserving heat for your vital organs.
However, if you stay in that cold water for too long, it'll cool your blood and, by association, the rest of your body, and that's how hypothermia happens. Stay comfortable, but stay warm. Old people can die of hypothermia simply by falling onto a cold floor without getting help getting up. | [
"Winter swimming can be dangerous to people who are not used to swimming in very cold water. After submersion in cold water the cold shock response will occur, causing an uncontrollable gasp for air. This is followed by hyperventilation, a longer period of more rapid breathing. The gasp for air can cause a person to ingest water, which leads to drowning. As blood in the limbs is cooled and returns to the heart, this can cause fibrillation and consequently cardiac arrest. The cold shock response and cardiac arrest are the most common causes of death related to cold water immersion.\n",
"Heat transfers very well into water, and body heat is therefore lost extremely quickly in water compared to air, even in merely 'cool' swimming waters around 70F (~20C). A water temperature of can lead to death in as little as one hour, and water temperatures hovering at freezing can lead to death in as little as 15 minutes. This is because cold water can have other lethal effects on the body, so hypothermia is not usually a reason for drowning or the clinical cause of death for those who drown in cold water.\n",
"BULLET::::- Water at near-freezing temperatures is less dense than slightly warmer water - maximum density of water is at about 4°C - so when near freezing, water may be slightly warmer at depth than at the surface.\n",
"Cold shock response is the physiological response of organisms to sudden cold, especially cold water, and is a common cause of death from immersion in very cold water, such as by falling through thin ice. The immediate shock of the cold causes involuntary inhalation, which if underwater can result in drowning. The cold water can also cause heart attack due to vasoconstriction; the heart has to work harder to pump the same volume of blood throughout the body, and for people with heart disease, this additional workload can cause the heart to go into arrest. A person who survives the initial minute after falling into cold water can survive for at least thirty minutes provided they do not drown. The ability to stay afloat declines substantially after about ten minutes as the chilled muscles lose strength and co-ordination.\n",
"The unusual density curve and lower density of ice than of water is vital to life—if water were most dense at the freezing point, then in winter the very cold water at the surface of lakes and other water bodies would sink, the lake could freeze from the bottom up, and all life in them would be killed. Furthermore, given that water is a good thermal insulator (due to its heat capacity), some frozen lakes might not completely thaw in summer. The layer of ice that floats on top insulates the water below. Water at about 4 °C (39 °F) also sinks to the bottom, thus keeping the temperature of the water at the bottom constant (see diagram).\n",
"Cold shock response is the physiological response of organisms to sudden cold, especially cold water, and is a common cause of death from immersion in very cold water, such as by falling through thin ice. The immediate shock of the cold causes involuntary inhalation, which if underwater can result in drowning. The cold water can also cause heart attack due to vasoconstriction; the heart has to work harder to pump the same volume of blood throughout the body, and for people with heart disease, this additional workload can cause the heart to go into arrest. A person who survives the initial minute of trauma after falling into icy water can survive for at least thirty minutes provided they don't drown. However, the ability to perform useful work like staying afloat declines substantially after ten minutes as the body protectively cuts off blood flow to \"non-essential\" muscles.\n",
"Care should be taken when winter swimming in swimming pools and seas near the polar regions. The chlorine added to water in swimming pools and the salt in seawater allow the water to remain liquid at sub-zero temperatures. Swimming in such water is significantly more challenging and dangerous. The experienced winter swimmer Lewis Gordon Pugh swam near the North Pole in water and suffered a frostbite injury in his fingers. It took him four months to regain sensation in his hands.\n"
] |
what's the difference between nuclear and thermonuclear? | Usually "nuclear weapons" refers to basic uranium- or plutonium-based one-stage bombs that function by triggering a single fission reaction, which causes the heavy fissile material to "break" apart and release large amounts of energy.
Thermonuclear is used to describe fusion devices, which usually fuse hydrogen into helium and release energy that way. A thermonuclear weapon uses an initial fission explosion to "kickstart" a second-stage fusion explosion; that's why hydrogen bombs are usually referred to as "thermonuclear bombs".
TL;DR
- Nuclear = Fission = Breaks apart atoms to generate energy
- Thermonuclear = Fusion = Joins atoms of element A into element B to generate energy.
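For concreteness, one representative reaction of each kind (these particular fission products and the quoted energies are standard textbook examples, not tied to any specific weapon design):

```latex
% Fission: a neutron splits a heavy nucleus into lighter fragments plus more neutrons.
n \;+\; {}^{235}\mathrm{U} \;\longrightarrow\; {}^{141}\mathrm{Ba} \;+\; {}^{92}\mathrm{Kr} \;+\; 3\,n \;+\; \sim 200\ \mathrm{MeV}

% Fusion: two light nuclei (deuterium and tritium) merge into helium.
{}^{2}\mathrm{H} \;+\; {}^{3}\mathrm{H} \;\longrightarrow\; {}^{4}\mathrm{He} \;+\; n \;+\; 17.6\ \mathrm{MeV}
```

In both cases the released energy comes from the change in nuclear binding energy, i.e. from a small loss of mass via $E = \Delta m\,c^{2}$.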
| [
"A thermonuclear weapon, or fusion weapon, is a second-generation nuclear weapon design. Its greater sophistication over pure fission weapons may afford it vastly greater destructive power than first-generation atomic bombs, a more compact size, a lower mass or a combination of these benefits. Characteristics of nuclear fusion reactions make possible the use of non-fissile depleted uranium as the weapon's main fuel, thus allowing more efficient use of scarce fissile material (U-235 and Pu-239).\n",
"Nuclear thermal propulsion systems (NTR) are based on the heating power of a fission reactor, offering a more efficient propulsion system than one powered by chemical reactions. Current research focuses more on nuclear electric systems as the power source for providing thrust to propel spacecraft that are already in space. \n",
"As thermonuclear weapons represent the most efficient design for weapon energy yield in weapons with yields above , virtually all the nuclear weapons of this size deployed by the five nuclear-weapon states under the Non-Proliferation Treaty today are thermonuclear weapons using the Teller–Ulam design.\n",
"This list of nuclear power systems in space includes nuclear power systems that were flown to space, or launched in an attempt to reach space. Examples of nuclear power systems include radioisotope heater units (RHU), Radioisotope thermoelectric generators (RTG), thermionic converters, and fission reactors. Initial total spacecraft power is provided as electrical energy (We) or thermal energy (Wt), depending on the intended application.\n",
"Compared to fission weapons, thermonuclear designs are exceedingly complex, and staged weapons in particular are so complex that only five countries (USA, Russia, France, UK, China) have created them in more than 70 years of research. The fuels for an H-bomb are also far more difficult to create. Several countries with long-standing nuclear weapons programs, such as India and Pakistan, are suspected of striving towards a hybrid or \"boosted\" design instead, which is easier. Since both fusion weapons and hybrid designs can at times be referred to as \"hydrogen bombs\", it cannot be said with certainty at present, what type of weapon North Korea may have been referring to in any given test. At present, analysts are skeptical of the 2016 test being a staged thermonuclear design, while noting that the most recent test, in 2017, was considerably more powerful. In 2018, North Korea had offered and was reportedly preparing for inspections at nuclear and missile sites.\n",
"Of the four basic types of nuclear weapon, the first, pure fission, uses the first of the three nuclear reactions above. The second, fusion-boosted fission, uses the first two. The third, two-stage thermonuclear, uses all three.\n",
"A thermonuclear weapon is a type of nuclear bomb that releases energy through the combination of fission and fusion of the light atomic nuclei of deuterium and tritium. With this type of bomb, a thermonuclear detonation is triggered by the detonation of a fission type nuclear bomb contained within a material containing high concentrations of deuterium and tritium. Weapon yield is typically increased with a tamper that increases the duration and intensity of the reaction through inertial confinement and neutron reflection. Nuclear fusion bombs can have arbitrarily high yields making them hundreds or thousands of times more powerful than nuclear fission.\n"
] |
Do electric motors in cars have a limited lifespan? | Not forever. AC motors have bearings which will need periodic replacement, windings with relatively delicate insulation over many turns of thin wire, and a metal cage with a rotor. Stick all this in a bouncy box that accelerates and brakes constantly. Sure, most of these issues can be minimized with good engineering, but they are still issues. I've seen motors last many years, but a car is a pretty tough environment with pretty harsh and variable demands. Motors will fail. I'd say ten to fifteen years of trouble-free running will be a good and attainable result.
This is just the motor I'm talking about; all the rest of the running gear (suspension, CV joints, etc.) will have the same issues as in a car with an internal combustion engine.
Source: electrician. Been working with AC motors and speed controllers for years.
| [
"In very small vehicles, the power demand decreases, so human power can be employed to make a significant improvement in battery life. Two such commercially made vehicles are the Sinclair C5 and TWIKE.\n",
"Its life cycle is usually far greater than a purely electronic UPS, up to 30 years or more. But they do require periodic downtime for mechanical maintenance, such as ball bearing replacement. In larger systems redundancy of the system ensures the availability of processes during this maintenance. Battery-based designs do not require downtime if the batteries can be hot-swapped, which is usually the case for larger units. Newer rotary units use technologies such as magnetic bearings and air-evacuated enclosures to increase standby efficiency and reduce maintenance to very low levels.\n",
"Electric motors are more efficient than internal combustion engines in converting stored energy into driving a vehicle. However, they are not equally efficient at all speeds. To allow for this, some cars with dual electric motors have one electric motor with a gear optimised for city speeds and the second electric motor with a gear optimised for highway speeds. The electronics select the motor that has the best efficiency for the current speed and acceleration. Regenerative braking, which is most common in electric vehicles, can recover as much as one fifth of the energy normally lost during braking. Efficiency increases when renewable electricity is used\n",
"The two electric motors will have a combined power output of and of torque. The car will have claimed acceleration figures of in a sub 4.0 seconds time and in 1.5 seconds, along with a top speed of . Maximum performance will be accessible regardless of battery charge. A prototype was tested at the Nurbürgring to ensure that the car delivers linear power despite hard usage.\n",
"All-electric have lower maintenance costs as compared to internal combustion vehicles, since electronic systems break down much less often than the mechanical systems in conventional vehicles, and the fewer mechanical systems on board last longer due to the better use of the electric engine. Electric cars do not require oil changes and other routine maintenance checks.\n",
"Tesla said in February 2009 that the ESS had expected life span of seven years/, and began selling pre-purchase battery replacements for about one third of the battery's price today, with the replacement to be delivered after seven years. Tesla says the ESS retains 70% capacity after five years and of driving, assuming driven each year. A July 2013 study found that after , Roadster batteries still had 80%–85% capacity and the only significant factor is mileage (not temperature).\n",
"Thanks to the on-demand torque output of the electric motor, the EV1 could accelerate from in 6.3 seconds, and from in eight seconds. The car's top speed was electronically limited to . At the time of release, the lead-acid battery-equipped EV1 was the only electric car produced which met all of the United States Department of Energy's EV America performance goals.\n"
] |
Were white British (and Dominion) troops' relationships with non-white troops from the colonies generally positive? | Quite a few from the West Indies served as aircrew
_URL_0_
It should be noted that these aircrew didn't serve in segregated squadrons and would often be the only non-white person in their crew, let alone the squadron. For example:
_URL_2_
Otherwise it should be noted that many colonial troops like the Indian Army, King's African Rifles and Gurkhas had been in existence for an extended period of time and had developed their own set of loyalties and traditions. In this sense, their attitude towards white troops (and vice versa) would have been no different to business as usual.
_URL_1_
Actually, the only major full-scale mutiny by colonial troops was the Indian Rebellion of 1857, and even then substantial numbers of colonial troops remained loyal (mainly Sikhs) | [
"All World War I belligerents with colonial possessions went to great lengths to recruit soldiers from their colonies. Germany was the only one of the Central Powers with substantial overseas possessions; it used numerous non-white troops to defend her colonies. Regardless of German attitudes toward the indigenous inhabitants of German colonies, Germany's lack of control of the sea lanes would have made it nearly impossible for the German Army to bring any substantial number of colonial troops to European battlefields. Notwithstanding the exact circumstances, most Germans quickly came to view non-white Allied troops with disdain and were contemptuous of the Allies' willingness to use these troops in Europe. \n",
"Though it was one of the few combatant territories not to raise fighting men through conscription, proportional to white population, Southern Rhodesia contributed more manpower to the British war effort than any other dominion or colony, and more than Britain itself. White troops numbered 5,716, about 40% of white men in the colony, with 1,720 of these serving as commissioned officers. The Rhodesia Native Regiment enlisted 2,507 black soldiers, about 30 black recruits scouted for the Rhodesia Regiment, and around 350 served in British and South African units. Over 800 Southern Rhodesians of all races lost their lives on operational service during the war, with many more seriously wounded.\n",
"Soldiers of the East-India Company, British Raj and Princely States in the Indian subcontinent were crucial in securing and defending Hong Kong as a crown colony for Britain. Examples of troops from the Indian sub-continent include the 1st Travancore Nair Infantry, 59th Madras Native Infantry, 26th Bengal Native Infantry, 5th Light Infantry, 40th Pathans, 6th Rajputana Rifles, 11th Rajputs, 10th Jats, 72nd Punjabis, 12th Madras Native Infantry, 38th Madras Native Infantry, Indian Medical Service, Indian Hospital Corps, Royal Indian Army Service Corps, etc. Large contingents of troops from India were garrisoned in Hong Kong right from the start of British Hong Kong and until after World War II. Contributions by the Indian military services in Hong Kong suffer from the physical decay of battle-sites, destruction of documentary archives and sources of information, questionable historiography, conveniently lopsided narratives, unchallenged confabulation of urban myths and incomplete research within academic circles in Hong Kong, Britain and India. Despite high casualties among troops from the British Raj during the Battle of Hong Kong, their contributions are either minimised or ignored. The use of generic words such as \"Allied\", \"British\", \"Commonwealth\" fails to highlight that a significant number of soldiers who defended Hong Kong were from India. Commonwealth War Graves Commission (CWGC) Sai Wan War Cemetery references the graves of Indian troops as \"Commonwealth\" soldiers. War office records about the Battle of Hong Kong are yet to be fully released online. Transcripts of proceedings from war tribunals held in Hong Kong from 1946 to 1948 by British Military Courts remain mostly confined to archives and specialised museums.\n",
"The British colonies deployed fewer numbers of black African troops. Thousands of white South Africans and Rhodesians saw service in the Middle East and the Mediterranean, while black soldiers generally were assigned to logistics formations. Some black regiments however did see combat, such as the King's African Rifles in the conquest of Madagascar from Vichy France in 1942, and the thousands of men of two West African divisions that fought with the British 14th Army against the Japanese in Burma. World War II was to have a profound effect on attitudes and developments in the African colonies. As various colonial reports note:\n",
"Proportional to white population, Southern Rhodesia had contributed more personnel to the British armed forces in World War I than any of the Empire's dominions or colonies, and more than Britain itself. About 40% of white males in the colony, 5,716 men, put on uniform, with 1,720 doing so as commissioned officers. Black Southern Rhodesians were represented by the 2,507 soldiers who made up the Rhodesia Native Regiment, the roughly 350 who joined the British East Africa Transport Corps, British South Africa Police Mobile Column and South African Native Labour Corps, and the few dozen black scouts who served with the 1st and 2nd Rhodesia Regiments in South-West and East Africa. Southern Rhodesians killed in action or on operational duty numbered over 800, counting all races together—more than 700 of the colony's white servicemen died, while the Rhodesia Native Regiment's black soldiers suffered 146 fatalities.\n",
"Carmichael is noted for recognising the value and usefulness of incorporating native Caribbean troops into the British Army. In 1797, he wrote that they were not only critical militarily, but their strength and stamina had been proven by their having to carry British soldiers through the heat and over the rocks at the Battle of Grenada. He campaigned for the right of slave soldiers to give evidence at Military tribunals. White and black soldiers alike were brutally flogged for violating military rules, but Carmichael found a more humane method to be equally as effective: During his eleven years as Lieutenant-Colonel of the 2nd West India Regiment, Carmichael instead demoted native offenders to a position resembling that of a common field slave – deprived of weapons and appointments and employed only on fatigue duties.\n",
"Although they were in the Cape Colony at the time, no units from the Australian colonies were involved in the Black Week between 10–17 December, in which Britain suffered three successive defeats at the Battle of Stormberg, the Battle of Magersfontein, and the Battle of Colenso. The Boers knew that Empire forces would be sent to reinforce the British positions, and so sought to strike quickly against them.\n"
] |
the science of coffee | Coffee is a solution - in the same way that mixing salt with water gives you salt water, coffee is bits of coffee mixed in with water.
When you grind a coffee bean, there are some parts of the bean that can dissolve in hot water, and other parts that can't. What you're tasting, then, is the bits that *can* dissolve into the water, leaving the bits that can't dissolve behind in the filter. | [
"CofFEE states that it seeks to undertake and promote research into the goals of full employment, price stability and achieving an economy that delivers equitable outcomes for all. Its main focus is on macroeconomics, labour economics, regional development and monetary economics.\n",
"\"Coffee: A Comprehensive Guide to the Bean, the Beverage, and the Industry\", senior editor and contributor, Rowman and Littlefield, 2013. The book won a prize from Gourmand Magazine as the best published on coffee in the U.S. in 2013. Named by Library Journal as one of the best reference works of 2013.\n",
"Coffee is a brewed drink prepared from the roasted seeds of several species of an evergreen shrub of the genus \"Coffea\". The two most common sources of coffee beans are the highly regarded \"Coffea arabica\", and the \"robusta\" form of the hardier \"Coffea canephora\". Coffee plants are cultivated in more than 70 countries. Once ripe, coffee \"berries\" are picked, processed, and dried to yield the seeds inside. The seeds are then roasted to varying degrees, depending on the desired flavor, before being ground and brewed to create coffee.\n",
"The Birth of Coffee is a transmedia project which includes a book of words and images, a photographic exhibit, and a website. It focuses on the people worldwide who grow and produce coffee. The project illustrates how coffee – combined with the volatile locations where it grows and labor-intensive growing processes – often shapes those people's lives.\n",
"Research for their second project, The Birth of Coffee, began in 1996. The aim of this project is to help the average coffee-drinker to be aware of the difficult process that laborers must endure in order to grow and produce coffee. A book of their findings from this expedition, \"The Birth of Coffee\", was published by Random House in 2001.\n",
"There are more than 1,000 chemical compounds in coffee, and their molecular and physiological effects are areas of active research in food chemistry. There are a large number of ways to organize coffee compounds. The major texts in the area variously sort by effects on flavor, physiology, pre- and post-roasting effects, growing and processing effects, botanical variety differences, country of origin differences, and many others. Interactions between compounds also is a frequent area of taxonomy, as are the major organic chemistry categories (Protein, carbohydrate, lipid, etc.) that are relevant to the field. In the field of aroma and flavor alone, Flament gives a list of 300 contributing chemicals in green beans, and over 850 after roasting. He lists 16 major categories to cover those compounds related to aroma and flavor.\n",
"CoffeeScript is a programming language that transcompiles to JavaScript. It adds syntactic sugar inspired by Ruby, Python and Haskell in an effort to enhance JavaScript's brevity and readability. Specific additional features include list comprehension and pattern matching.\n"
] |
Does anyone have examples of national anthems that were later abolished/replaced? | I believe the German National Anthem was changed during the Nazi era.
The Soviet anthem had the words replaced, but kept the rather stirring melody.
S. Africa replaced "Die Stem van Suid-Afrika", but kept a verse of it in the new anthem.
Canada stopped using God Save the Queen.
Czechoslovakia's anthem was split (like everything else) right down the middle, but this is a weak example as it was originally two songs that were fused together (like everything else). | [
"Adoption of national anthems prior to the 1930s was mostly by newly formed or newly independent states, such as the First Portuguese Republic (\"A Portuguesa\", 1911), the Kingdom of Greece (\"Hymn to Liberty\", 1865), the First Philippine Republic (\"Marcha Nacional Filipina\", 1898), Lithuania (\"Tautiška giesmė\", 1919), Weimar Germany (\"Deutschlandlied\", 1922), Republic of Ireland (\"Amhrán na bhFiann\", 1926) or Greater Lebanon (\"Lebanese National Anthem\", 1927).\n",
"The national anthem had two official versions. The original version which was in use from 1815 to 1898 was written to honor a king. The second version which was in use from 1898 to 1932 was rewritten and used to honor Queen Wilhelmina.\n",
"Despite the belief that it was adopted as the national anthem in 1866, no such recognition has ever been officially accorded. A kind of official recognition came in 1893, when King Oscar II rose in honor when the song was played. In 2000 a Riksdag committee rejected as \"unnecessary\" a proposal to give the song official status. The committee concluded that the song has been established as the national anthem by the people, not by the political system, and that it is preferable to keep it that way.\n",
"The last attempts to change the anthem were first during the administration of General Juan Velasco Alvarado who attempted to change the second and third stanzas. In similar form to previous attempts, it was imposed during official ceremonies and in schools and during the administration of General President Francisco Morales Bermudez the last stanza was sung instead of the first. But these attempts also had no success and the original anthem was once again sung when his successor Fernando Belaunde Terry became President in 1980.\n",
"Due to the fact that the traditional vocal adaptation composed by Alberto Nepomuceno for Joaquim Osorio Duque Estrada's lyrics of the national anthem was made official in 1971, other vocal arrangements (as well as other instrumental arrangements departing from the one recognized in law) are unofficial. Because of that, for the remainder of the Military Regime era (that lasted until 1985), the playing of the anthem with any artistic arrangement that departed from the official orchestration and vocal adaptation was prohibited, and there was strict vigilance regarding the use of the National Symbols and the enforcement of this norm. Since the redemocratization of the country, far greater artistic liberty has been allowed regarding renderings of the national anthem. Singer Fafá de Belém's interpretation of the national anthem (initially criticized during the final days of the Military Regime, but now widely accepted), is an example of that. In any event, although the use of different artistic arrangements for the anthem is now permitted (and although the statutory norms that prohibited such arrangements are no longer enforced, on the grounds of constitutional freedom of expression), a rendering of the national anthem is only considered fully official when the statutory norms regarding the vocal adaptation and orchestration are followed. However, the traditional vocal adaptation composed by Alberto Nepomuceno was so well established by the time it became official that the interpretations of the national anthem that depart from the official orchestration or from the official vocal adaptation are few. Indeed, although other arrangements are now allowed, the traditional form tends to prevail, so that, with few exceptions, even celebrity singers tend to only lend their voices to the singing of the official vocal adaptation by Alberto Nepomuceno.\n",
"Most of the best-known national anthems were written by little-known or unknown composers such as Claude Joseph Rouget de Lisle, composer of \"La Marseillaise\" and John Stafford Smith who wrote the tune for \"The Anacreontic Song\", which became the tune for the U.S. national anthem, \"The Star-Spangled Banner.\" The author of \"God Save the Queen\", one of the oldest and most well known anthems in the world, is unknown and disputed.\n",
"Although \"God Save The Queen\" ceased to be played at official occasions, no replacement was adopted or used as a national anthem immediately after the declaration of a republic. It was only in 1974 that \"Rise, O Voices of Rhodesia\", sung to the tune of \"Ode to Joy\", was adopted as the national anthem, after unsuccessful attempts to find an original melody.\n"
] |
How did calibers in uneven numbers come about? Like 152mm, 37mm and 76mm? | The general reason for these odd numbers is that they are conversions from before the metric system: 37mm is about 1.5 inches, 76mm is about 3 inches, 88mm is about 3.5 inches, and 152mm is about 6 inches (1 inch = 25.4 mm, so 3 in = 76.2 mm and 6 in = 152.4 mm). Early tank and anti-tank guns are particularly prone to this because many were adopted from naval guns like the [US WWII 76mm](_URL_0_). However, every country has a different specific reason for specific weapons keeping the old non-metric calibers, and I don't know enough to explain why the Soviets or Germans, for example, kept the odd calibers. | [
"Gun calibers have standardized around a few common sizes, especially in the larger range, mainly due to the uniformity required for efficient military logistics. Shells of 105 and 155 mm for artillery and 105mm and 120 mm for tank guns in NATO. Artillery shells of 122, 130 and 152 mm, and tank gun ammunition of 100, 115, or 125 mm caliber remain in use in Eastern Europe, Western Asia, Northern Africa, and Eastern Asia. Most common calibers have been in use for many years, since it is logistically complex to change the caliber of all guns and ammunition stores.\n",
"Gunsmiths and armament companies also employed the -inch line (the \"decimal line\"), in part owing to the importance of the German and Russian arms industries. These are now given in terms of millimeters, but the seemingly arbitrary 7.62 mm caliber was originally understood as a 3-line caliber (as with the 1891 Mosin–Nagant rifle). The 12.7 mm caliber used by the M2 Browning machine gun was similarly a 5-line caliber.\n",
"The following table lists some of the commonly used calibers where both metric and US customary are used as equivalents. Due to variations in naming conventions, and the whims of the cartridge manufacturers, bullet diameters can vary widely from the diameter implied by the name. For example, a difference of 0.045 in (1.15 mm) occurs between the smallest and largest of the several cartridges designated as \".38 caliber\". \n",
"The 1960s ushered a new generation of assault rifles with the introduction of smaller calibers. U.S. military analysis of combat during the Second World War showed that a greater volume of fire at shorter ranges was more significant than long range accuracy. They decided that a smaller caliber would be more effective in most conditions, because the soldier could carry more ammunition. In 1963, United States adopted the M16 Rifle and the smaller 5.56×45mm cartridge to replace the M14 Rifle and larger 7.62×51mm. In 1980, NATO adopted the 5.56mm as the standard issue rifle cartridge.\n",
"The 7\"/44 caliber gun Mark 1 (spoken \"seven-inch-forty-four--caliber\") and 7\"/45 caliber gun Mark 2 (spoken \"seven-inch-forty-five--caliber\") were used for the secondary batteries of the United States Navy's last generation of pre-dreadnought battleships, the and . The caliber was considered, at the time, to be the largest caliber weapon suitable as a rapid-fire secondary gun because its shells were the heaviest that one man could handle alone.\n",
"The standard calibers used by the world's militaries tend to follow worldwide trends. These trends have significantly changed during the centuries of firearm design and re-design. Muskets were normally chambered for large calibers, such as .50 or .59, with the theory that these large bullets caused the most damage.\n",
"Historically, ammunition rounds designed in the United States were denoted by their caliber in inches (e.g., .45 Colt and .270 Winchester.) Two developments changed this tradition: the large preponderance of different cartridges using an identical caliber and the international arms trade bringing metric calibers to the United States. The former led to bullet diameter (rather than caliber) often being used to describe rounds to differentiate otherwise similar rounds. A good example is the .308 Winchester, which fires the same .30-caliber projectile as the .30-06 Springfield and the .300 Savage. Occasionally, the caliber is just a number close to the diameter of the bullet, like the .220 Swift, .223 Remington and .222 Remington Magnum, all of which actually have .22 caliber or bullets.\n"
] |
if the majority of people are right handed, why does the fork go on the left when setting a table? | Because you want to be manipulating the sharp, dangerous, pointy knife with your dominant hand. Which is why the dinner knife is on the right, and that leaves the fork to be on the left. | [
"The fork may be used in the American style (in the left hand while cutting and in the right hand to pick up food) or the European Continental style (fork always in the left hand). (See Fork etiquette) The napkin should be left on the seat of a chair only when leaving temporarily. Upon leaving the table at the end of a meal, the napkin is placed loosely on the table to the left of the plate.\n",
"Forks are sometimes designated as right or left. Here, the \"handedness\" is from the point of view of an observer facing upstream. For instance, Steer Creek has a left tributary which is called Right Fork Steer Creek.\n",
"In much of the world, pointing with the index finger is considered rude or disrespectful, especially pointing to a person. Pointing with the left hand is taboo in some cultures. Pointing with an open hand is considered more polite or respectful in some contexts. In Nicaragua, pointing is frequently done with the lips in a \"kiss shape\" directed towards the object of attention.\n",
"The right hand rule is in widespread use in physics. A list of physical quantities whose directions are related by the right-hand rule is given below. (Some of these are related only indirectly to cross products, and use the second form.)\n",
"Proper right and proper left are conceptual terms used to unambiguously convey relative direction when describing an image or other object. The \"proper right\" hand of a figure is the hand that would be regarded by that figure as its right hand. In a frontal representation, that appears on the left as the viewer sees it, creating the potential for ambiguity if the hand is just described as the \"right hand\".\n",
"Pointing is a gesture specifying a direction from a person's body, usually indicating a location, person, event, thing or idea. It typically is formed by extending the arm, hand, and index finger, although it may be functionally similar to other hand gestures. Types of pointing may be subdivided according to the intention of the person, as well as by the linguistic function it serves. \n",
"A left-handed individual may be known as a southpaw, particularly in a sports context. It is widely accepted that the term originated in the United States, in the game of baseball. Ballparks are often designed so that batters are facing east, so that the afternoon or evening sun does not shine in their eyes. This means that left-handed pitchers are throwing with their south-side arm. The \"Oxford English Dictionary\" lists a non-baseball citation for \"south paw\", meaning a punch with the left hand, as early as 1848, just three years after the first organized baseball game, with the note \"(orig. U.S., in Baseball).\" A left-handed advantage in sports can be significant and even decisive, but this advantage usually results from a left-handed competitor's unshared familiarity with opposite-handed opponents. Baseball is an exception since batters, pitchers, and fielders in certain scenarios are physically advantaged or disadvantaged by their handedness. Some baseball players like Christian Yelich of the Milwaukee Brewers bat left-handed and throw right-handed.\n"
] |
why do some people leak pee when they sneeze? | The release of your bladder is controlled by muscles. Those muscles can, for a variety of reasons, be weakened. If those muscles are weak, a sudden jolt like a sneeze can dislodge them for a moment, releasing a small amount of pee. | [
"There is much debate about the true cause and mechanism of the sneezing fits brought about by the photic sneeze reflex. Sneezing occurs in response to irritation in the nasal cavity, which results in an afferent nerve fiber signal propagating through the ophthalmic and maxillary branches of the trigeminal nerve to the trigeminal nerve nuclei in the brainstem. The signal is interpreted in the trigeminal nerve nuclei, and an efferent nerve fiber signal goes to different parts of the body, such as mucous glands and the thoracic diaphragm, thus producing a sneeze. The most obvious difference between a normal sneeze and a photic sneeze is the stimulus: normal sneezes occur due to irritation in the nasal cavity, while the photic sneeze can result from a wide variety of stimuli. Some theories are below. There is also a genetic factor that increases the probability of photic sneeze reflex. The C allele on the rs10427255 SNP is particularly implicated in this although the mechanism is unknown by which this gene increases the probability of this response.\n",
"Some people may sneeze during the initial phases of sexual arousal. Doctors suspect that the phenomenon might arise from a case of crossed wires in the autonomic nervous system, which regulates a number of functions in the body, including \"waking up\" the genitals during sexual arousal. The nose, like the genitals, contains erectile tissue. This phenomenon may prepare the vomeronasal organ for increased detection of pheromones.\n",
"When sniffed, snuff often causes a sneeze, though this is often seen by snuff-takers as the sign of a beginner. This is not uncommon; however, the tendency to sneeze varies with the person and the particular snuff. Generally, drier snuffs are more likely to do this. For this reason, sellers of snuff often sell handkerchiefs. Slapstick comedy and cartoons have often made use of snuff's sneeze-inducing properties.\n",
"Peeps (voiced by Richard McGonagle) is a giant floating eyeball who apparently runs the surveillance company that's named after him. Benson bought his products to keep Mordecai and Rigby from slacking off, but they manage to constantly evade him, causing Benson to accidentally summon him to the park to watch over everybody and as a result, he cannot leave until they die (due to the contract that Benson signed without even reading it). After spooking everyone with his gazes, Mordecai challenges him to a staring contest in which Peeps must leave if Mordecai wins, but if Peeps wins, he will harvest their eyes. However, Peeps cheats using numerous eyes but Rigby cheats back using a laser light that causes him to lose the staring contest, setting him on fire and crashing into the lake. He is blinded in the process, and is last seen taken to the hospital.\n",
"In Japan, \"Tashiro\" is a slang word. Tashiro refers to acts of peeping and taking sneak shots. Origin of the term derives from Masashi Tashiro, a famous celebrity who was prosecuted for filming up a woman's skirt in addition to later being arrested for peeping through the bathroom window of a man's house.\n",
"PEEP is a pressure that an exhalation has to bypass, in effect causing alveoli to remain open and not fully deflate. This mechanism for maintaining inflated alveoli helps increase partial pressure of oxygen in arterial blood, and an increase in PEEP increases the PaO.\n",
"Sneezing typically occurs when foreign particles or sufficient external stimulants pass through the nasal hairs to reach the nasal mucosa. This triggers the release of histamines, which irritate the nerve cells in the nose, resulting in signals being sent to the brain to initiate the sneeze through the trigeminal nerve network. The brain then relates this initial signal, activates the pharyngeal and tracheal muscles and creates a large opening of the nasal and oral cavities, resulting in a powerful release of air and bioparticles. The powerful nature of a sneeze is attributed to its involvement of numerous organs of the upper body – it is a reflexive response involving the face, throat, and chest muscles.\n"
] |
I'd want to understand how and why Scandinavia became Christianized. | I'll yield to better historians, but my understanding was that it had more to do with trade and politics than natural spiritual inclinations. By the time it happened, Scandinavia had been increasingly in contact with Christian Europe and needed commercial contacts. The era of plunder and conquest was ending as more of the Christian kingdoms became better defended from attack. It became more politically expedient to join them than beat them.
It's not as if this has no precedent in history. Christianity and Islam themselves sprang from pagan converts. | [
"The Christianization of Scandinavia, as well as other Nordic countries and the Baltic countries, took place between the 8th and the 12th centuries. The realms of Denmark, Norway and Sweden (Sweden is an 11th or 12th century merger of the former countries Götaland and Svealand) established their own Archdioceses, responsible directly to the Pope, in 1104, 1154 and 1164, respectively. The conversion to Christianity of the Scandinavian people required more time, since it took additional efforts to establish a network of churches. The Sami remained unconverted until the 18th century. Newer archaeological research suggests there were Christians in Götaland already during the 9th century, it is further believed Christianity came from the southwest and moved towards the north.\n",
"The Christianization of Scandinavia started in the 8th century with the arrival of missionaries in Denmark and it was at least nominally complete by the 12th century, although the Samis remained unconverted until the 18th century. In fact, although the Scandinavians became nominally Christian, it would take considerably longer for actual Christian beliefs to establish themselves among the people. The old indigenous traditions that had provided security and structure since time immemorial were challenged by ideas that were unfamiliar, such as original sin, the Immaculate Conception, the Trinity and so forth. Archaeological excavations of burial sites on the island of Lovön near modern-day Stockholm have shown that the actual Christianization of the people was very slow and took at least 150–200 years, and this was a very central location in the Swedish kingdom. Thirteenth-century runic inscriptions from the bustling merchant town of Bergen in Norway show little Christian influence, and one of them appeals to a Valkyrie. At this time, enough knowledge of Norse mythology remained to be preserved in sources such as the Eddas in Iceland.\n",
"The Nordic world first encountered Christianity through its settlements in the (already Christian) British Isles and through trade contacts with the eastern Christians in Novgorod and Byzantium. By the time Christianity arrived in Scandinavia it was already the accepted religion across most of Europe. It is not well understood how the Christian institutions converted these Scandinavian settlers, in part due to a lack of textual descriptions of this conversion process equivalent to Bede's description of the earlier Anglo-Saxon conversion. However, it appears that the Scandinavian migrants had converted to Christianity within the first few decades of their arrival. After Christian missionaries from the British Isles—including figures like St Willibrord, St Boniface, and Willehad—had travelled to parts of northern Europe in the eighth century, Charlemagne pushed for Christianisation in Denmark, with Ebbo of Rheims, Halitgar of Cambrai, and Willeric of Bremen proselytizing in the kingdom during the ninth century. The Danish king Harald Klak converted (826), likely to secure his political alliance with Louis the Pious against his rivals for the throne. The Danish monarchy reverted to Old Norse religion under Horik II (854 – c. 867).\n",
"Christianity in Scandinavia came later than most parts of Europe. In Denmark Harald Bluetooth Christianized the country around 980. The process of Christianization began in Norway during the reigns of Olaf Tryggvason (reigned 995 AD–c.1000 AD) and Olaf II Haraldsson (reigned 1015 AD–1030 AD). Olaf and Olaf II had been baptized voluntarily outside of Norway. Olaf II managed to bring English clergy to his country. Norway's conversion from the Norse religion to Christianity was mostly the result of English missionaries. As a result of the adoption of Christianity by the monarchy and eventually the entirety of the country, traditional shamanistic practices were marginalized and eventually persecuted. Völvas, practitioners of seid, a Scandinavian pre-Christian tradition, were executed or exiled under newly Christianized governments in the eleventh and twelfth centuries.\n",
"Scandinavian individuals came into contact with Christianity already before the fall of the Roman Empire, but historian Ian N. Wood writes that the \"Christianisation of Scandinavia took the Church into relatively unknown areas\". According to Alcuin, an Anglo-Saxon monk, Willibrord, who had proselytized among the Frisians, tried to convert Ongendus, King of the Danes, in the early , but failed. From the 820s, the Frankish monarchs tried to take advantage of internal strifes to increase their influence in Denmark. After being dethroned and exiled from Denmark, King Harald Klak sought refugee in the Carolingian Empire and agreed to be baptised in 826. Harald Klak returned to Denmark, accompanied by Ansgar, a Frankish monk from the Corbie Abbey. During the next two years, Ansgar carried out missionary activities in Denmark. He even bought young boys to teach them for missionary work. However, Harald Klak was again dethroned in 827, and Ansgar left Denmark. \n",
"Anders Winroth, in his book \"The Conversion of Scandinavia\", tries to make sense of this problem by arguing that there was a \"long process of assimilation, in which the Scandinavians adopted, one by one and over time, individual Christian practices.\" Winroth certainly does not say that Olaf was not Christian, but he argues that we cannot think of any Scandinavians as quickly converting in a full way as portrayed in the later hagiographies or sagas. Olaf himself is portrayed in later sources as a saintly miracle-working figure to help support this quick view of conversion for Norway, although the historical Olaf did not act this way, as seen especially in the skaldic verses attributed to him.\n",
"While some Swedish areas had Christian minorities in the 9th century, Sweden was, because of its geographical location in northernmost Europe, not Christianized until around AD 1000, around the same time as the other Nordic countries, when the Swedish King Olof was baptized. This left only a modest gap between the Christianization of Scandinavia and the Great Schism, however there are some Scandinavian/Swedish saints who are venerated eagerly by many Orthodox Christians, such as St. Olaf. However, Norse paganism and other pre-Christian religious systems survived in the territory of what is now Sweden later than that; for instance the important religious center known as the Temple at Uppsala at Gamla Uppsala was evidently still in use in the late 11th century, while there was little effort to introduce the Sami of Lapland to Christianity until considerably after that.\n"
] |
topology/topological manifold | Topology: the study of the geometric properties of a space/surface, and of which properties are (or are not) preserved when that shape/space is changed through a continuous deformation (i.e. no rips/tears and no gluing); or the rules governing a specific topological space or manifold.
Topological Manifold: a space (such as a surface, or a group of surfaces) that locally looks like ordinary flat Euclidean space; it may also carry extra structure (such as a metric), and can be characterized by properties such as its number of holes/openings, etc.
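Stated slightly more formally (these are the standard textbook definitions, with technical conditions such as Hausdorffness omitted; background rather than part of the original answer):

```latex
% A topological n-manifold: every point has a neighbourhood that looks like R^n.
\forall\, p \in M \;\; \exists\ \text{open } U \ni p \ \text{and a homeomorphism } \varphi : U \to V \subseteq \mathbb{R}^{n}.

% Two spaces count as "the same" to a topologist when they are homeomorphic:
X \cong Y \;\iff\; \exists\, f : X \to Y \ \text{continuous and bijective, with } f^{-1} \ \text{continuous}.
```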
For example, a doughnut and a coffee mug belong to the same manifold (a solid with a single hole), since one can be deformed into the other without changing certain properties. A topologist would then look at what happens during the transformation to things like the distance between points and to a circle drawn on the surface (does it get bigger, smaller, etc.). | [
"While topological spaces can be extremely varied and exotic, many areas of topology focus on the more familiar class of spaces known as manifolds. A \"manifold\" is a topological space that resembles Euclidean space near each point. More precisely, each point of an -dimensional manifold has a neighborhood that is homeomorphic to the Euclidean space of dimension . Lines and circles, but not figure eights, are one-dimensional manifolds. Two-dimensional manifolds are also called surfaces, although not all surfaces are manifolds. Examples include the plane, the sphere, and the torus, which can all be realized without self-intersection in three dimensions, and the Klein bottle and real projective plane, which cannot (that is, all their realizations are surfaces that are not manifolds).\n",
"In mathematics, low-dimensional topology is the branch of topology that studies manifolds, or more generally topological spaces, of four or fewer dimensions. Representative topics are the structure theory of 3-manifolds and 4-manifolds, knot theory, and braid groups. This can be regarded as a part of geometric topology. It may also be used to refer to the study of topological spaces of dimension 1, though this is more typically considered part of continuum theory.\n",
"Geometric topology is a branch of topology that primarily focuses on low-dimensional manifolds (i.e. spaces of dimensions 2, 3, and 4) and their interaction with geometry, but it also includes some higher-dimensional topology. Some examples of topics in geometric topology are orientability, handle decompositions, local flatness, crumpling and the planar and higher-dimensional Schönflies theorem.\n",
"In topology, a branch of mathematics, a topological manifold is a topological space (which may also be a separated space) which locally resembles real \"n\"-dimensional space in a sense defined below. Topological manifolds form an important class of topological spaces with applications throughout mathematics. All manifolds are topological manifolds by definition, but many manifolds may be equipped with additional structure (e.g. differentiable manifolds are topological manifolds equipped with a differential structure). When the phrase \"topological manifold\" is used, it is usually done to emphasize that the manifold does not have any additional structure, or that only the \"underlying\" topological manifold is being considered. Every manifold has an \"underlying\" topological manifold, gotten by simply \"forgetting\" any additional structure the manifold has.\n",
"In mathematics, differential topology is the field dealing with differentiable functions on differentiable manifolds. It is closely related to differential geometry and together they make up the geometric theory of differentiable manifolds.\n",
"In topology and related branches of mathematics, a topological space may be defined as a set of points, along with a set of neighbourhoods for each point, satisfying a set of axioms relating points and neighbourhoods. The definition of a topological space relies only upon set theory and is the most general notion of a mathematical space that allows for the definition of concepts such as continuity, connectedness, and convergence. Other spaces, such as manifolds and metric spaces, are specializations of topological spaces with extra structures or constraints. Being so general, topological spaces are a central unifying notion and appear in virtually every branch of modern mathematics. The branch of mathematics that studies topological spaces in their own right is called point-set topology or general topology.\n",
"BULLET::::- Topology is the field concerned with the properties of geometric objects that are unchanged by continuous mappings. In practice, this often means dealing with large-scale properties of spaces, such as connectedness and compactness.\n"
] |
When did the concept of "refugees" arise? It seems that in the past if your country was at war and you were a male of fighting age you would stay. When did men start leaving their country's conflicts? Is this a modern concept or are there examples of this happening throughout history? | This depends on your definition of 'refugees'.
In 1951, a convention was held in Geneva to give an official definition to the term, and from then on it was possible to declare whether a person was a refugee or not. [source: [UNHCR official site](_URL_0_)]
However, before that there were already large population movements caused by war, famine and other forms of destruction which made people's homelands inhospitable to them.
In China, one of the earliest records of such a wide-scale migration would be during the Spring and Autumn period, when the Yue 越 king Gou Jian 勾践 destroyed the Wu 吴 kingdom. Due to the demeaning treatment that he had suffered under the Wu king previously, Gou Jian was determined to eliminate Wu utterly. The Wu people were therefore forced to cross the sea to the eastern islands, in what is now modern-day Japan. Later contacts between the Han dynasty and the Japanese islands record that the Wa 倭 people claimed direct descent from king Taibo 泰伯 of Wu, and often spoke with a Wu accent and adhered to Wu customs, further supporting the theory of them being former refugees of the Chinese Wu. [source: *the Book of Han* 汉书, *Discourse on Balance* 论衡] | [
"As the war ended, these people found themselves facing an uncertain future. Allied military and civilian authorities faced considerable challenges resettling them. Since the reasons for displacement varied considerably, the Supreme Headquarters Allied Expeditionary Force classified individuals into a number of categories: evacuees, war or political refugees, political prisoners, forced or voluntary workers, Organisation Todt workers, former forces under German command, deportees, intruded persons, extruded persons, civilian internees, ex-prisoners of war, and stateless persons.\n",
"The first modern definition of international refugee status came about under the League of Nations in 1921 from the Commission for Refugees. Following World War II, and in response to the large numbers of people fleeing Eastern Europe, the UN 1951 Refugee Convention adopted (in Article 1.A.2) the following definition of \"refugee\" to apply to any person who: \"owing to well-founded fear of being persecuted for reasons of race, religion, nationality, membership of a particular social group or political opinion, is outside the country of his nationality and is unable or, owing to such fear, is unwilling to avail himself of the protection of that country; or who, not having a nationality and being outside the country of his former habitual residence as a result of such events, is unable or, owing to such fear, is unwilling to return to it.\"\n",
"After the Second World War, some British soldiers are guarding a theatre in Germany containing various refugees and prisoners trying to work out what to do with them. However, the displaced people, after uniting against fascism for five years, begin to disintegrate into their own ancient feuds: Serb against Croat, Pole against Russian, resistance fighter against collaborator and everyone against the Jews. Two people, Jan and Lily, begin a romance and decide to wed. However, one of the refugees is diagnosed with bubonic plague.\n",
"The word evacuation or \"evakuatsiia\" in 1941 was a somewhat new word that some described as \"terrible and unaccustomed\". For others, it was simply not used. \"Refugee\" or \"bezhenets\" was far too familiar given the country's history of war. During World War II refugee was replaced by evacuees.The shift in wording showed the government's resignation to the displacement of its citizens. The reasons for controlling the displaced population varied. Despite some preferring to consider themselves evacuees the term referred to different individuals. Some were of the “privileged elite\" class. Those who fell under this category were scientists, specialized workers, artists, writers and politicians. These elite individuals were evacuated to the rear of the country. The other portion of the evacuated were met with a suspicious eye. The evacuation process despite the Soviets best efforts, was far from organized. The state considered the majority of those heading east as suspicious. Since a large majority of the population were self evacuees they had not been assigned a location for displacement. Officials feared the disorder made it easy for deserters to flee. Evacuees who did not fall under the “privileged elite” title were are also suspected of potentially contaminating the rest of the population both epidemically and ideologically.\n",
"BULLET::::- The term displaced person (DP) was first widely used during World War II and the resulting refugee outflows from Eastern Europe, when it was used to specifically refer to one removed from their native country as a refugee, prisoner or a slave laborer. Most of the victims of war, political refugees and DPs of the immediate post-Second World War period were Ukrainians, Poles, other Slavs, as well as citizens of the Baltic states – Lithuanians, Latvians, and Estonians, who refused to return to Soviet-dominated eastern Europe. A.J. Jaffe claimed that the term was originally coined by Eugene M. Kulischer. The meaning has significantly broadened in the past half-century.\n",
"The term \"refugee\" sometime applies to people who might fit the definition outlined by the 1951 Convention, were it applied retroactively. There are many candidates. For example, after the Edict of Fontainebleau in 1685 outlawed Protestantism in France, hundreds of thousands of Huguenots fled to England, the Netherlands, Switzerland, South Africa, Germany and Prussia. The repeated waves of pogroms that swept Eastern Europe in the 19th and early 20th centuries prompted mass Jewish emigration (more than 2 million Russian Jews emigrated in the period 1881–1920). Beginning in the 19th century, Muslim people emigrated to Turkey from Europe. The Balkan Wars of 1912–1913 caused 800,000 people to leave their homes. Various groups of people were officially designated refugees beginning in World War I.\n",
"The conflict and political instability during World War II led to massive numbers of refugees (see World War II evacuation and expulsion). In 1943, the Allies created the United Nations Relief and Rehabilitation Administration (UNRRA) to provide aid to areas liberated from Axis powers, including parts of Europe and China. By the end of the War, Europe had more than 40 million refugees. UNRRA was involved in returning over seven million refugees, then commonly referred to as displaced persons or DPs, to their country of origin and setting up displaced persons camps for one million refugees who refused to be repatriated. Even two years after the end of War, some 850,000 people still lived in DP camps across Western Europe. DP Camps in Europe Intro, from: \"DPs Europe's Displaced Persons, 1945–1951\" by Mark Wyman After the establishment of Israel in 1948, Israel accepted more than 650,000 refugees by 1950. By 1953, over 250,000 refugees were still in Europe, most of them old, infirm, crippled, or otherwise disabled.\n"
] |
Violations of the equivalence principle? | > Is this credible?
Yes, quite. It's a nice paper.
> What does it mean if the equivalence principle really is violated?
Absolutely nothing. The equivalence principle rests on the principle of locality, and it holds whenever that principle is in effect. But the principle of locality is an approximation; it's violated by certain phenomena in both ordinary "first quantisation" mechanics and in "second quantisation" field theory. If locality doesn't hold, the equivalence principle doesn't either … which is less a *violation* of equivalence than a demonstration of the fact that equivalence depends on locality, which we knew already.
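For reference, the textbook (weak) form of the principle — background only, not a result of the paper under discussion — is just the statement that inertial and gravitational mass coincide, so free-fall motion is independent of what is falling:

```latex
% Newtonian statement of the weak equivalence principle:
m_{\mathrm{i}}\,\ddot{\vec{x}} \;=\; -\,m_{\mathrm{g}}\,\nabla\Phi
\qquad\Longrightarrow\qquad
\ddot{\vec{x}} \;=\; -\,\nabla\Phi \quad \text{when } m_{\mathrm{i}} = m_{\mathrm{g}},
% so a sufficiently small, freely falling laboratory cannot detect the field locally.
```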
No, the interesting thing about this paper isn't that equivalence is violated when locality is violated. The interesting thing is that it's possible to *restore equivalence* even without locality. As the gravitational field gets stronger — that is, as you get closer to the event horizon of a black hole *that's only present in the paper to be the source of a gravitational field of arbitrary strength so please let's not turn this into another godawful black hole party* — the apparent violation of equivalence vanishes. *That's* interesting, and serves as yet more evidence in favour of the notion that quantum field theory and general relativity already, separately, comprise a complete quantum theory of gravity; we just have to work out the details.
Insultingly condescending summary: The violation of equivalence is expected. The *restoration* of equivalence in the strong-field limit isn't expected, and comes as a pleasant surprise. | [
"Equivalence allows for simplifying the constraint store by replacing some constraints with simpler ones; in particular, if the third constraint in an equivalence rule is codice_93, and the second constraint is entailed, the first constraint is removed from the constraint store. Inference allows for the addition of new constraints, which may lead to proving inconsistency of the constraint store, and may generally reduce the amount of search needed to establish its satisfiability.\n",
"The above-mentioned variants of the equivalence principle aim to guarantee the transition of General Relativity to Special Relativity in a certain reference frame. However, only the particular \"weakest\" and \"weak\" equivalence principles are true. \n",
"The equivalence principle is one of the corner-stones of gravitation theory. Different formulations of the equivalence principle are labeled \"weakest\", \"weak\", \"middle-strong\" and \"strong.\" All of these formulations are based on the empirical equality of inertial mass, gravitational active and passive charges.\n",
"The equivalence principle, explored by a succession of researchers including Galileo, Loránd Eötvös, and Einstein, expresses the idea that all objects fall in the same way, and that the effects of gravity are indistinguishable from certain aspects of acceleration and deceleration. The simplest way to test the weak equivalence principle is to drop two objects of different masses or compositions in a vacuum and see whether they hit the ground at the same time. Such experiments demonstrate that all objects fall at the same rate when other forces (such as air resistance and electromagnetic effects) are negligible. More sophisticated tests use a torsion balance of a type invented by Eötvös. Satellite experiments, for example STEP, are planned for more accurate experiments in space.\n",
"The \"rule of equivalence\" is verified when the code behavior matches the original concept. This equivalence may break down in many cases. Integer overflow breaks the equivalence between the mathematical integer concept and the computerized approximation of the concept.\n",
"The equivalence principle was properly introduced by Albert Einstein in 1907, when he observed that the acceleration of bodies towards the center of the Earth at a rate of 1\"\"g\"\" (\"g\" = 9.81 m/s being a standard reference of gravitational acceleration at the Earth's surface) is equivalent to the acceleration of an inertially moving body that would be observed on a rocket in free space being accelerated at a rate of 1\"g\". Einstein stated it thus:\n",
"The optical equivalence theorem in quantum optics asserts an equivalence between the expectation value of an operator in Hilbert space and the expectation value of its associated function in the phase space formulation with respect to a quasiprobability distribution. The theorem was first reported by George Sudarshan in 1963 for normally ordered operators and generalized later that decade to any ordering. \n"
] |
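A minimal sketch, in standard textbook notation rather than anything taken from the paper under discussion, of what "equivalence depends on locality" means: in Riemann normal coordinates at an event p the metric is Minkowski and its first derivatives vanish, so strictly pointlike (local) physics cannot see gravity, while the curvature term that survives at second order is exactly what a finite-sized or otherwise non-local probe couples to.

```latex
\[
g_{\mu\nu}(p) = \eta_{\mu\nu}, \qquad
\partial_\lambda g_{\mu\nu}\big|_{p} = 0, \qquad
\Gamma^{\mu}{}_{\alpha\beta}\big|_{p} = 0,
\]
\[
g_{\mu\nu}(x) = \eta_{\mu\nu}
  - \tfrac{1}{3}\,R_{\mu\alpha\nu\beta}(p)\,x^{\alpha}x^{\beta}
  + \mathcal{O}\!\left(|x|^{3}\right).
\]
```

The first line is the equivalence principle as a statement about a single point; the quadratic curvature term is the piece no choice of coordinates can remove, so anything that probes a finite region (or is intrinsically non-local) can register an apparent "violation" without contradicting general relativity.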
how come most 3d games render at 60 fps while it takes a few seconds to render a textureless cube in blender? | There are two ways to render graphics on the screen.
[Rasterization](_URL_0_) - Which is used in video games. The very simple version of this is the world is made up of triangles and all you have to do is figure out if a triangle is visible and if a pixel is in the triangle or not. Many graphics cards have electronics designed to do this over and over again very quickly. Compare this with...
[Ray Tracing](_URL_3_) - Which is used for static 3D rendering. This **calculates the path light travels for every pixel on your screen** back to the light source. The objects don't have to be triangles and are often expressed as mathematical solids. This gives you, for all intents and purposes, unlimited resolution and detail depending on how much time and CPU power you want to throw at it. Because ray tracers use complex math, the CPU brute-forces the tracing calculations. In fact, with Blender, when doing ray tracing, you don't use any of the 3D capability of your graphics card at all.
Upshot:
Rasterizer: "Hey 3D card, draw and fill 532 triangles the make that make up this [isohedron](_URL_2_) and texture it to make it look like sphere
Ray Tracer: "Hey CPU calculate how light will reflect on a [sphere](_URL_1_) of a volume of 4/3*πr^3"
One takes much more time than the other, but it also makes the result much more realistic at any scale. (A minimal ray-sphere intersection sketch follows the context passages below.) | [
"Processing of 3D graphics is computationally expensive, especially in real-time games, and poses multiple limits. Levels have to be processed at tremendous speeds, making it difficult to render vast skyscapes in real-time. Additionally, real-time graphics generally have depth buffers with limited bit-depth, which puts a limit on the amount of details that can be rendered at a distance.\n",
"In the case of 3D graphics, rendering may be done slowly, as in pre-rendering, or in realtime. Pre-rendering is a computationally intensive process that is typically used for movie creation, while real-time rendering is often done for 3D video games which rely on the use of graphics cards with 3D hardware accelerators.\n",
"Rendering for interactive media, such as games and simulations, is calculated and displayed in real time, at rates of approximately 20 to 120 frames per second. In real-time rendering, the goal is to show as much information as possible as the eye can process in a fraction of a second (a.k.a. \"in one frame\": In the case of a 30 frame-per-second animation, a frame encompasses one 30th of a second).\n",
"The advantage of pre-rendering is the ability to use graphic models that are more complex and computationally intensive than those that can be rendered in real-time, due to the possibility of using multiple computers over extended periods of time to render the end results. For instance, a comparison could be drawn between rail-shooters \"Maximum Force\" (which used pre-rendered 3D levels but 2D sprites for enemies) and \"Virtua Cop\" (using 3D polygons); \"Maximum Force\" was more realistic looking due to the limitations of \"Virtua Cop's\" 3D engine, but \"Virtua Cop\" has actual depth (able to portray enemies close and far away, along with body-specific hits and multiple hits) compared to the limits of the 2D sprite enemies in \"Maximum Force\".\n",
"Vaa3D is able to render 3D, 4D, and 5D data (X,Y,Z,Color,Time) quickly. The volume rendering is typically at the scale of a few gigabytes and can be extended to the scale of terabytes per image set. The visualization is made fast by using OpenGL directly.\n",
"It was the first game to feature high resolution 3D texture mapping, a feature which was not seen on other platforms until the Dreamcast over three years later. \"Rave Racer\" ran at a resolution of 640x480 and 60 frames per second.\n",
"When performing basic 3D-rendering with only texture mapping and no other advanced features, ViRGE's pixel throughput was somewhat faster than the best software-optimized (host-based CPU) 3D-rendering of the era, and with better (16bpp) color fidelity. But when additional rendering operations were added to the polygon load (such as perspective-correction, Z-depth fogging, and bilinear filtering), rendering throughput dropped to the speed of software-based rendering on an entry-level CPU. 3D-rendering on the high-end VRAM based ViRGE/VX (988) was even slower than the less expensive ViRGE/325, due to the VX's slower core and memory clock rates. The upgraded ViRGE/DX and ViRGE/GX models did improve 3D rendering performance, but by the time of their introduction they were still unable to distinguish the ViRGE family in an already crowded 3D market.\n"
] |
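A minimal Python sketch, under the simplification that the scene is a single analytic sphere, of the per-pixel work a ray tracer does and a rasteriser skips: solve the ray-sphere quadratic for every pixel instead of filling pre-transformed triangles in hardware. `ray_hits_sphere` and its arguments are illustrative names, not part of Blender or any real renderer.

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Return the nearest intersection distance t > 0 along the ray, or None on a miss.

    Substituting the ray origin + t*direction into the sphere equation gives a
    quadratic a*t^2 + b*t + c = 0; a negative discriminant means the ray misses.
    (The camera-inside-the-sphere case is ignored for brevity.)
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                          # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)   # smaller root = closer intersection
    return t if t > 0.0 else None            # only count hits in front of the ray

# One solve like this (plus shading, shadow and bounce rays) runs for every pixel,
# which is why a CPU ray tracer is slow next to hardware-accelerated triangle filling.
if __name__ == "__main__":
    print(ray_hits_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # prints 4.0
```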
how do lithium ion batteries work? | A lithium ion battery uses charged lithium particles (ions) to move electricity from one end of the battery to another. As energy leaves the battery, these lithium ions move from the negative side of the battery to the positive side, forming a conductive lithium layer that releases electricity. When all the ions are on the positive side of the battery, the battery is spent and no longer releases electricity. When the battery is put in a charger, the sides flip temporarily, and the addition of electrical energy to the lithium causes the ions to move back to the negative side of the battery, making the battery ready for use again.
Because of these properties, lithium-ion batteries are among the most common rechargeable batteries for home electronics. (A toy charge/discharge sketch follows the context passages below.) | [
"Lithium-ion batteries store chemical energy in reactive chemicals at the anodes and cathodes of a cell. Typically, anodes and cathodes exchange lithium (Li+) ions through a fluid electrolyte that passes through a porous separator which prevents direct contact between the anode and cathode. Such contact would lead to an internal short circuit and a potentially hazardous uncontrolled reaction. Electric current is usually carried by conductive collectors at the anodes and cathodes to and from the negative and positive terminals of the cell (respectively).\n",
"A lithium-ion battery or Li-ion battery (abbreviated as LIB) is a type of rechargeable battery. Lithium-ion batteries are commonly used for portable electronics and electric vehicles and are growing in popularity for military and aerospace applications. It was developed by John Goodenough, Rachid Yazami and Akira Yoshino in the 1980s, building on a concept proposed by M Stanley Whittingham in the 1970s.\n",
"Today’s lithium ion batteries have high power density (fast discharge) and high energy density (hold a lot of charge). It can also develop dendrites, similar to splinters, that can short-circuit a battery and lead to a fire. Aluminum also transfers energy more efficiently. Inside a battery, atoms of the element — lithium or aluminum — give up some of their electrons, which flow through external wires to power a device. Because of their atomic structure, lithium ions can only provide one electron at a time; aluminum can give three at a time. Aluminum is also more abundant than lithium, lowering material costs.\n",
"The thin film lithium ion battery can serve as a storage device for the energy collected from renewable sources with a variable generation rate, such as a solar cell or wind turbine. These batteries can be made to have a low self discharge rate, which means that these batteries can be stored for long periods of time without a major loss of the energy that was used to charge it. These fully charged batteries could then be used to power some or all of the other potential applications listed below, or provide more reliable power to an electric grid for general use.\n",
"In general lithium ions move between the anode and the cathode across the electrolyte. Under discharge, electrons follow the external circuit to do electric work and the lithium ions migrate to the cathode. During charge the lithium metal plates onto the anode, freeing at the cathode. Both non-aqueous (with LiO or LiO as the discharge products) and aqueous (LiOH as the discharge product) Li-O batteries have been considered. The aqueous battery requires a protective layer on the negative electrode to keep the Li metal from reacting with water.\n",
"In the batteries lithium ions move from the negative electrode to the positive electrode during discharge and back when charging. Li-ion batteries use an intercalated lithium compound as one electrode material, compared to the metallic lithium used in a non-rechargeable lithium battery. The batteries have a high energy density, no memory effect (other than LFP cells) and low self-discharge. They can however be a safety hazard since they contain a flammable electrolyte, and if damaged or incorrectly charged can lead to explosions and fires. Samsung were forced to recall Galaxy Note 7 handsets following lithium-ion fires, and there have been several incidents involving batteries on Boeing 787s.\n",
"A lithium-ion flow battery is a flow battery that uses a form of lightweight lithium as its charge carrier. The flow battery stores energy separately from its system for discharging. The amount of energy it can store is determined by tank size; its power density is determined by the size of the reaction chamber.\n"
] |
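A toy Python sketch of the picture in the answer above: a fixed pool of lithium ions shuttling between two electrodes, spent when they have all reached the positive side and restored by charging. The class name `LiIonCell`, its fields, and the 3000 mAh figure are invented for illustration; this is not a real electrochemical model.

```python
class LiIonCell:
    """Toy model: a fixed pool of Li+ ions shuttling between two electrodes.

    Discharging moves ions from the negative electrode to the positive one and
    delivers charge to the load; charging pushes them back. Real electrochemistry
    (voltage curves, losses, ageing) is deliberately ignored.
    """

    def __init__(self, capacity_mah=3000.0):
        self.capacity_mah = capacity_mah
        self.at_negative_mah = capacity_mah   # fully charged: ions sit on the negative side
        self.at_positive_mah = 0.0

    def discharge(self, amount_mah):
        moved = min(amount_mah, self.at_negative_mah)
        self.at_negative_mah -= moved
        self.at_positive_mah += moved
        return moved                           # charge actually delivered to the load

    def charge(self, amount_mah):
        moved = min(amount_mah, self.at_positive_mah)
        self.at_positive_mah -= moved
        self.at_negative_mah += moved
        return moved

    @property
    def state_of_charge(self):
        return self.at_negative_mah / self.capacity_mah


cell = LiIonCell()
cell.discharge(3000.0)
print(cell.state_of_charge)   # 0.0 -> all ions at the positive electrode, cell is "spent"
cell.charge(1500.0)
print(cell.state_of_charge)   # 0.5 -> half the ions pushed back by the charger
```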
i turned on my old guitar amp with nothing plugged in and it started playing a radio station. how is this happening? | Radio waves are stupid easy to pick up on any basic consumer amplifier. I've picked up radio stations on PC speakers before. Somewhere along the line, the radio signal is inadvertently translated to an electrical signal that your system can amplify. You don't need a loose wire, just an unshielded system. (A crude demodulation sketch follows the context passages below.)
Adding to that, I think the FCC mandates that consumer electronics must accept radio interference. | [
"\"Turning Up the Radio\" is a song by the American rock band Weezer from their studio album \"Death to False Metal\". Its genesis came about in 2008 when Weezer frontman Rivers Cuomo used YouTube to source ideas for creating a song using video submissions from other users of the platform.\n",
"The players themselves were manufactured by CBS Electronics. According to the official Chrysler press release of September 12, 1955, \"Highway Hi-Fi plays through the speaker of the car radio and uses the radio's amplifier system. The turntable for playing records, built for Chrysler by CBS-Columbia, is located in a shock-proof case mounted just below the center of the instrument panel. A tone arm, including sapphire stylus and ceramic pick up, plus storage space for six long-play records make up the unit.\" A button controlled whether you listened to the radio or the records. A proprietary 0.25-mil (i.e., or a quarter of a \"thou\") stylus was used with an unusually high stylus pressure of to prevent skipping or skating despite normal car vibrations.\n",
"The whole thing came through the famous \"listen mic\" on the SSL console. The SSL had put this massive compressor on it because the whole idea was to hang one mic in the middle of the studio and hear somebody talking on the other side. And it just so happened that we turned it on one day when Phil [Collins] was playing his drums. And then I had the idea of feeding that back into the console and putting the noise gate on, so when he stopped playing it sucked the big sound of the room into nothing.\n",
"\"There Ain't Nothin' Wrong with the Radio\" is a moderate up-tempo novelty song. In it, the male narrator describes how old and run-down his car is but explains that he continues to drive it because \"there ain't nothin' wrong with the radio\" — specifically, \"there ain't a country station that [he] can't tune in\". The song features electric guitar and fiddle accompaniments.\n",
"A radio pack is mainly used for musicians such as guitarists and singers for live performances. It is a small radio transmitter that is either placed in the strap or in the pocket. The receiver is connected to an amp or PA system and the user simply connects the transmitter into the instrument. This means that there is no wires in the way. By using a wireless system, musicians are free to move around the stage. This has meant that more elaborate stage shows are now possible, with musicians performing a long way from the amplifier or speakers. \n",
"Sharon Aguilar reminisced, “Plugged [my guitar] right into the amp; no time for a pedalboard… no fancy in-ear monitors or anything like that… More energy… it ended up being probably my favorite live show that we’ve ever done.”\n",
"\"There Ain't Nothin' Wrong with the Radio\" is a song co-written and recorded by American country music artist Aaron Tippin. It was released in February 1992 as the first single from his album \"Read Between the Lines\". The song is not only his first Number One hit on the country music charts but also his longest-lasting at three weeks.\n"
] |
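A rough numpy sketch of the accidental demodulation hinted at above: a nonlinear stage (an overdriven input, a diode-like junction) half-wave rectifies a strong AM carrier picked up by unshielded wiring, and any stray low-pass filtering recovers the audio envelope, which the amp then amplifies like any other input. The frequencies, modulation depth, and filter constant here are illustrative, not measurements of a real amp.

```python
import numpy as np

fs = 200_000                                  # sample rate of the simulation, Hz
t = np.arange(0, 0.02, 1 / fs)                # 20 ms of signal
audio = np.sin(2 * np.pi * 1_000 * t)         # stand-in for the station's 1 kHz programme
carrier = np.sin(2 * np.pi * 50_000 * t)      # stand-in for an AM broadcast carrier
am = (1 + 0.5 * audio) * carrier              # what the unshielded wiring picks up

rectified = np.maximum(am, 0.0)               # a nonlinear stage acts like a crude half-wave rectifier

# One-pole low-pass (RC-style) filter: keeps the audio-rate envelope, discards the carrier.
alpha = 0.05                                  # roughly a 1.6 kHz cutoff at this sample rate
recovered = np.zeros_like(rectified)
for i in range(1, len(rectified)):
    recovered[i] = recovered[i - 1] + alpha * (rectified[i] - recovered[i - 1])

# The recovered signal roughly tracks the original programme, so the amp has
# something audio-shaped to amplify even with nothing plugged in.
corr = np.corrcoef(recovered[500:], audio[500:])[0, 1]
print(f"correlation between recovered signal and original audio: {corr:.2f}")
```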