question (string, 3-300 chars) | answer (string, 9-2.77k chars) | context (sequence of 7 passages) |
---|---|---|
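If this dataset were pulled down with the Hugging Face `datasets` library, each row would expose the three fields above directly. The snippet below is only an illustrative sketch: the identifier `"path/to/this-dataset"` is a placeholder, not the card's real name.

```python
# Illustrative only: "path/to/this-dataset" is a placeholder identifier.
from datasets import load_dataset

ds = load_dataset("path/to/this-dataset", split="train")

row = ds[0]
print(row["question"])         # string, 3-300 characters
print(row["answer"])           # string, 9-2.77k characters
print(len(row["context"]))     # sequence of 7 supporting passages
for passage in row["context"]:
    print(passage[:80])        # preview the first 80 characters of each passage
```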
Was Joseph Smith a convicted con-man? | Disorderly conduct is a really broad charge in NY, even today, where it's defined as
> A person is guilty of disorderly conduct when, with intent to cause public inconvenience, annoyance or alarm, or recklessly creating a risk thereof:
> 1. He engages in fighting or in violent, tumultuous or threatening behavior; or
> 2. He makes unreasonable noise; or
> 3. In a public place, he uses abusive or obscene language, or makes an obscene gesture; or
> 4. Without lawful authority, he disturbs any lawful assembly or meeting of persons; or
> 5. He obstructs vehicular or pedestrian traffic; or
> 6. He congregates with other persons in a public place and refuses to comply with a lawful order of the police to disperse; or
> 7. He creates a hazardous or physically offensive condition by any act which serves no legitimate purpose.
It's a bit hard to gather case law from the 1820s, but in that period of time, anything causing excessive noise (see Conrad v. Williams, 6 Hill 444 (1844)) or creating public strife (see Duffy v. People, 6 Hill 75 (1843); Cowden v. Wright, 24 Wend. 429 (1840)) was considered disorderly conduct. With such broad common-law examples, it would not be surprising that a lot of people's conduct fell within the prima facie definition. So the mere fact that he was convicted on a disorderly conduct charge doesn't say a whole lot about him. I can't comment on how it ties into the greater scheme of things during his rise. | [
"Joseph Smith III was an ardent opponent of the practice of plural marriage throughout his life. For most of his career, Smith denied that his father had been involved in the practice and insisted that it had originated with Brigham Young. Smith served many missions to the western United States where he met with and interviewed associates and women claiming to be widows of his father, who attempted to present him with evidence to the contrary. In the end, Smith concluded that he was \"not positive nor sure that [his father] was innocent\" and that if, indeed, the elder Smith had been involved, it was still a false practice. However, many members of Community of Christ, and some of the groups that were formerly associated with it are still not convinced that Joseph Smith III's father did indeed engage in plural marriage, and feel that the evidence that he did so is largely flawed.\n",
"Vogel argues in the biography that Joseph Smith was a pious fraud—that Smith essentially invented his religious claims for what he believed were noble, faith-promoting purposes. Vogel identifies the roots of the pious fraud in the conflict between members of the Smith family, who were divided between the skepticism and universalism of Joseph Smith, Sr., and the more mainstream Protestant faith of Lucy Mack Smith. Vogel interweaves the history of Joseph Smith with interpretation of the Book of Mormon, which is read as springing from the young man's psychology and experiences.\n",
"Joseph Smith and five others were charged with treason under Missouri law in 1838, spending over five months in prison, but escaped while awaiting trial. Joseph Smith and Hyrum Smith were later charged with \"treason against the government and people of the State of Illinois.\"\n",
"In the summer of 1842 the excommunicated Dr. Bennett began to attack Joseph Smith via letters published in Springfield, Illinois. In the letter published 15 July 1842, Bennett included a claim that Sarah Pratt had virtuously rejected Joseph Smith's efforts to make her a spiritual wife while Orson Pratt was in Europe. In reaction to this, Joseph Smith explained that Sarah had participated in an affair with Dr. John C. Bennett, a claim supported by affidavits produced by non-Mormon Sheriff Jacob B. Backenstoes and Sarah's erstwhile landlords, Stephen Goddard and Zeruiah Goddard. At the time, Sarah maintained a public silence regarding the matter. Nancy Rigdon and Pamelia Michael rejected Bennett's accusations involving them. Meanwhile, Martha Brotherton produced a damning affidavit involving Joseph Smith at Bennett's request.\n",
"Joseph Smith — a self-professed \"lawful\" heir (an heirship long hidden \"from the world with Christ in God\") to the princely thrones of the house of Judah and the house of Joseph — believing that he had been sent to the earth in its final dispensation, or in \"the fulness of times,\" discovered (as he records in his history) the earth's peoples living in a degenerate, apostate world over which the power of the great Enemy (Satan/Leviathan) had grown monstrously strong. Having in early spring 1820 been given his prophetic calling in vision by God the Father's personal appearance with His Messiah Son (JSH–1), the boy-prophet Joseph (later a recipient of both the lesser and higher priesthoods under the hands of the resurrected John the Baptist and Christ's chief apostles Peter, James and John) went forth into the world, endowed with great authority and power from on high. His divinely decreed mission was to \"restore\" the fullness of God's kingdom with its saving ordinances and primordial doctrines that would bring \"efficacy\" to the Davidic Messiah's redeeming atonement made at the meridian of time, and thus consummate salvation for God's chosen people and for all the world, if they would believe, repent, accept His eternal covenant and everlasting gospel, and endure faithfully to the end.\n",
"Joseph Smith was never “convicted” or even tried on charges of sorcery, crystal ball gazing or fortune telling. In 1826 a written complaint was filed against Smith as a “disorderly person.” This resulted in what is referred to as the “1826 trial” of Joseph Smith. The charge was “glass looking,” in reference to Smith’s use of a stone to assist in the search for treasure during the time that he worked for Josiah Stowell. Contradictory accounts of the trial exist, and the outcome is not specified.\n",
"In 1844, in jail awaiting trial for treason charges, Joseph Smith was killed by an armed mob. Hyrum Smith, his presumed successor, was killed in the same incident. Smith had not indisputably established who was next in line as successor to President of the Church. Several claimants to the role of church president emerged during the succession crisis that ensued.\n"
] |
how do viruses like hiv transmit through fluids like blood, breast milk, semen, and vaginal secretions? | I'm not sure what you are asking. The viruses are present within those secretions, having spread there just as they spread through any other tissue. Because they are present in those fluids, they are carried to whatever those fluids come into contact with. | [
"Hepatitis B, hepatitis C, and hepatitis D are transmitted when blood or mucous membranes are exposed to infected blood and body fluids, such as semen and vaginal secretions. Viral particles have also been found in saliva and breastmilk. However, kissing, sharing utensils, and breastfeeding do not lead to transmission unless these fluids are introduced into open sores or cuts.\n",
"Twenty-seven different viruses have been identified in semen. Information on whether or not transmission occurs or whether the viruses cause disease is uncertain. Some of these microbes are known to be sexually transmitted. Those found in semen are listed by the CDC.\n",
"Horizontal transmission is the most common mechanism of spread of viruses in populations. Transmission can occur when: body fluids are exchanged during sexual activity, e.g., HIV; blood is exchanged by contaminated transfusion or needle sharing, e.g., hepatitis C; exchange of saliva by mouth, e.g., Epstein–Barr virus; contaminated food or water is ingested, e.g., norovirus; aerosols containing virions are inhaled, e.g., influenza virus; and insect vectors such as mosquitoes penetrate the skin of a host, e.g., dengue.\n",
"Transmission is believed to be by contact with the blood and body fluids of those infected with the virus, as well as by handling raw bushmeat such as bats and monkeys, which are important sources of protein in West Africa. Infectious body fluids include blood, sweat, semen, breast milk, saliva, tears, feces, urine, vaginal secretions, vomit, and diarrhea.\n",
"Semen can transmit many sexually transmitted diseases and pathogens, including viruses like HIV and Ebola. Swallowing semen carries no additional risk other than those inherent in fellatio. This includes transmission risk for sexually transmitted diseases such as human papillomavirus (HPV) or herpes, especially for people with bleeding gums, gingivitis or open sores. Viruses in semen survive for a long time once outside the body.\n",
"HIV is spread primarily by unprotected sex (including anal and oral sex), contaminated blood transfusions, hypodermic needles, and from mother to child during pregnancy, delivery, or breastfeeding. Some bodily fluids, such as saliva and tears, do not transmit HIV. Methods of prevention include safe sex, needle exchange programs, treating those who are infected, pre- and post-exposure prophylaxis, and male circumcision. Disease in a baby can often be prevented by giving both the mother and child antiretroviral medication. There is no cure or vaccine; however, antiretroviral treatment can slow the course of the disease and may lead to a near-normal life expectancy. Treatment is recommended as soon as the diagnosis is made. Without treatment, the average survival time after infection is 11 years.\n",
"More than 30 different bacteria, viruses, and parasites can be transmitted through sexual activity. Bacterial STIs include chlamydia, gonorrhea, and syphilis. Viral STIs include genital herpes, HIV/AIDS, and genital warts. Parasitic STIs include trichomoniasis. While usually spread by sex, some STIs can be spread by non-sexual contact with donor tissue, blood, breastfeeding, or during childbirth. STI diagnostic tests are usually easily available in the developed world, but this is often not the case in the developing world.\n"
] |
How can we be sure that planets light years away will still be there when we get there? | You can't. But the odds are pretty good for a planet to last a billion years or so. We can tell what stage the star is at in its lifecycle. But yeah... especially if it's taking you thousands of years to get there, the planet may have been destroyed or chucked out of its orbit (especially in a binary star system). | [
"Astrophysicist Sten Odenwald stated that the basic problem is that through intensive studies of thousands of detected exoplanets, most of the closest destinations within 50 light years do not yield Earth-like planets in the star's habitable zones. Given the multitrillion-dollar expense of some of the proposed technologies, travelers will have to spend up to 200 years traveling at 20% the speed of light to reach the best known destinations. Moreover, once the travelers arrive at their destination (by any means), they will not be able to travel down to the surface of the target world and set up a colony unless the atmosphere is non-lethal. The prospect of making such a journey, only to spend the rest of the colony's life inside a sealed habitat and venturing outside in a spacesuit, may eliminate many prospective targets from the list.\n",
"In a 2009 interview with the Discovery Channel, Mike Brown noted that, while it is not impossible that the Sun has a distant planetary companion, such an object would have to be lying very far from the observed regions of the Solar System to have no detectable gravitational effect on the other planets. A Mars-sized object could lie undetected at 300 AU (10 times the distance of Neptune); a Jupiter-sized object at 30,000 AU. To travel 1000 AU in two years, an object would need to be moving at 2400 km/s – faster than the galactic escape velocity. At that speed, any object would be shot out of the Solar System, and then out of the Milky Way galaxy into intergalactic space.\n",
"Since one can not travel faster than light, one might conclude that a human can never travel farther from Earth than 40 light years if the traveler is active between the ages of 20 and 60. One would easily think that a traveler would never be able to reach more than the very few solar systems which exist within the limit of 20–40 light years from the earth. But that would be a mistaken conclusion. Because of time dilation, a hypothetical spaceship can travel thousands of light years during the pilot's 40 active years. If a spaceship could be built that accelerates at a constant 1\"g\", it will, after a little less than a year, be travelling at almost the speed of light as seen from Earth. This is described by:\n",
"So, for Corot, due to the maximum duration of 6 months of observation for each star field, only planets closer to their stars than 0.3 Astronomical Units (less than the distance between the Sun and Mercury) can be detected, therefore generally not in the so-called habitable zone. The Kepler mission (NASA) has continuously observed the same field for many years and thus had the ability to detect Earth sized planets located farther from their stars.\n",
"Since one might not travel faster than light, one might conclude that a human can never travel further from the Earth than 40 light-years if the traveler is active between the age of 20 and 60. A traveler would then never be able to reach more than the very few star systems which exist within the limit of 20–40 light-years from the Earth. This is a mistaken conclusion: because of time dilation, the traveler can travel thousands of light-years during their 40 active years. If the spaceship accelerates at a constant 1 g (in its own changing frame of reference), it will, after 354 days, reach speeds a little under the speed of light (for an observer on Earth), and time dilation will increase their lifespan to thousands of Earth years, seen from the reference system of the Solar System, but the traveler's subjective lifespan will not thereby change. If the traveler returns to the Earth, they will land thousands of years into the Earth's future. Their speed will not be seen as higher than the speed of light by observers on Earth, and the traveler will not measure their speed as being higher than the speed of light, but will see a length contraction of the universe in their direction of travel. And as the traveler turns around to return, the Earth will seem to experience much more time than the traveler does. So, while their (ordinary) coordinate speed cannot exceed \"c\", their proper speed (distance as seen by Earth divided by their proper time) can be much greater than \"c\". This is seen in statistical studies of muons traveling much further than \"c\" times their half-life (at rest), if traveling close to \"c\".\n",
"Finally, one planet was found and deemed habitable. It was named Atlas. At that time, scientists discovered how to make objects travel at almost the speed of light. However, even with the relativistic effects of near light-speed travel and medical advances in longevity, the planet was still too far away for any human to reach in one lifetime.\n",
"At 4.2 light-years (1.3 parsecs, 40 trillion km, or 25 trillion miles) away from Earth, the closest potentially habitable exoplanet is Proxima Centauri b, which was discovered in 2016. This means it would take more than 18,100 years to get there if a vessel could consistently travel as fast as the Juno spacecraft (250,000 kilometers per hour or 150,000 miles per hour). In other words, it is currently not feasible to send humans or even probes to search for biosignatures outside of our solar system. Given this fact, the only way to search for biosignatures outside of our solar system is by observing exoplanets with telescopes.\n"
] |
what/ who are gypsies? | The Romani (also spelled Romany; /ˈroʊməni/, /ˈrɒ-/), or Roma, are a traditionally itinerant ethnic group living mostly in Europe and the Americas, who originate from the northwestern regions of the Indian subcontinent.[28][29] The Romani are widely known among English-speaking people by the exonym "Gypsies" (or "Gipsies"). However, according to many Romani people and academics who study them, the word has been tainted by its use as a racial slur and a pejorative connoting illegality and irregularity.[30][31][32][33][34][35][36] Other exonyms are Ashkali and Sinti.
Romani are dispersed, with their concentrated populations in Europe — especially Central, Eastern and Southern Europe including Turkey, Spain and Southern France. They originated in Northern India and arrived in Mid-West Asia, then Europe, around 1,000 years ago,[37] either separating from the Dom people or, at least, having a similar history;[38] the ancestors of both the Romani and the Dom left North India sometime between the sixth and eleventh century.[37]
Since the nineteenth century, some Romani have also migrated to the Americas. There are an estimated one million Roma in the United States;[4] and 800,000 in Brazil, most of whose ancestors emigrated in the nineteenth century from eastern Europe. Brazil also includes some Romani descended from people deported by the government of Portugal during the Inquisition in the colonial era.[39] In migrations since the late nineteenth century, Romani have also moved to other countries in South America and to Canada.[40]
More info at: _URL_0_ | [
"One theory suggests that the name ultimately derives from a form \"ḍōmba-\" 'man of low caste living by singing and music', attested in Classical Sanskrit. Many also believe that Gypsies are descendants of Dalit because of the word zingaro (ατσίγγανος) (\"untouchable\") that was used to designate gypsies in Greece . An alternative view is that the ancestors of the Romani were part of the military in Northern India. When there were invasions by Sultan Mahmud Ghaznavi and these soldiers were defeated, they were moved west with their families into the Byzantine Empire between AD 1000 and 1030.\n",
"BULLET::::- Pure Gypsies meant the itinerant people of a non-European race. While originally from India and with a language related to Sanskrit, and thus possessing true Aryan origins, Ritter rejected this argument for all but approximately ten percent of the Roma and Sinti population.\n",
"BULLET::::- \"Gyptians:\" A boat-dwelling, transient social group in Lyra's world. They live according to their own customs and traditions, outside mainstream society. They are reminiscent of \"Gypsies\" (Roma). Our word \"Gypsy\" is derived from the \"(mistaken)\" belief that Gypsies were Egyptian in origin.\n",
"The worldwide used name for Gypsies to identify themselves is the term \"Htom\", which in the Romani language means a man. The words Rom, Dom and Lom are used to describe Romani peoples who diverged in the 6th century. Several tribes moved as far as Western Europe and are called Rom, while the ones who remained in Persia and Turkey are called Dom.\n",
"On September 5, 2012, Sun News Network host Ezra Levant broadcast a commentary \"The Jew vs. the Gypsies\" on \"The Source\", in which he accused the Romani people as a group of being criminals and said: \"These are gypsies, a culture synonymous with swindlers. The phrase gypsy and cheater have been so interchangeable historically that the word has entered the English language as a verb: he gypped me. Gypsies are not a race. They're a shiftless group of hobos. They rob people blind. Their chief economy is theft and begging. For centuries these roving highway gangs have mocked the law and robbed their way across Europe.\"\n",
"BULLET::::- Part-Gypsies were classified by Ritter as persons who had one or two Gypsies among their grandparents. Further, a person was classed as part-Gypsy if two or more of his grandparents are part-Gypsy. Often it meant mixed-race individuals of Gypsy plus Jenische lineage.\n",
"On September 5, 2012, Levant broadcast a commentary that he titled \"The Jew vs. the Gypsies\" on \"The Source\", in which he accused the Romani people as a group of being criminals. Levant said, \"These are gypsies, a culture synonymous with swindlers. The phrase gypsy and cheater have been so interchangeable historically that the word has entered the English language as a verb: he gypped me. Gypsies are not a race. They’re a shiftless group of hobos. They rob people blind. Their chief economy is theft and begging. For centuries these roving highway gangs have mocked the law and robbed their way across Europe.\"\n"
] |
death by hanging | It depends on whether it's done properly. Hanging is supposed to snap your neck, severing your spinal cord and killing you fairly quickly.
Often (especially when done by amateurs), if it isn't done right (the drop isn't big enough, the knot isn't tied right, whatever) and the neck isn't snapped, the person will suffocate slowly. | [
"Hanging, as it was practised in 1817, was particularly cruel and inefficient. The story predates the adoption of the \"long drop,\" calculated to end the condemned person's life quickly by breaking the neck. During this time, all deaths by hanging are caused by slow choking. In the novel, James Botting, the executioner at Newgate Prison, accepts bribes from the prisoners to tug on their legs and quicken their demise.\n",
"Hanging is the suspension of a person by a noose or ligature around the neck. The \"Oxford English Dictionary\" states that hanging in this sense is \"specifically to put to death by suspension by the neck\", though it formerly also referred to crucifixion and death by impalement in which the body would remain \"hanging\". Hanging has been a common method of capital punishment since medieval times, and is the primary execution method in numerous countries and regions. The first known account of execution by hanging was in Homer's \"Odyssey\" (Book XXII). In this specialised meaning of the common word \"hang\", the past and past participle is \"hanged\" instead of \"hung\".\n",
"Hanging is divided into suspension hanging and the much rarer drop hanging — this last can kill in various ways. Suicide attempters who survive either because the cord or ligature point breaks or because they are discovered and cut down, can face a range of serious injuries, including cerebral anoxia (which can lead to permanent brain damage), laryngeal fracture, cervical spine fracture, tracheal fracture, pharyngeal laceration, and carotid artery injury. Ron M. Brown writes that hanging has a \"fairly imperspicuous and complicated symbolic history\". There are commentaries on hanging in antiquity, and it has various cultural interpretations. Throughout history, numerous famous people have committed suicide by hanging.\n",
"In general, there are two ways of performing suicide by hanging: suspension hanging (the suspension of the body at the neck) and drop hanging (a calculated drop designed to break the neck). Manual strangulation and suffocation may also be considered together with hanging.\n",
"The vast majority of deaths by hanging in the UK and US are suicides, although there are some cases involving erotic asphyxiation. Homicides may be disguised as a hanging suicide. Features that suggest that the death is a homicide include the ligature marks being under the larynx, scratch marks on the ligature, and the presence of significant injury on the skin of the neck.\n",
"Suicide by hanging is the act of intentionally killing oneself via suspension from an anchor-point or ligature point (e.g. an overhead beam or hooked up ) by a ligature or by jumping from a height with a noose around the head \n",
"The Code of Criminal Procedure (1898) called for the method of execution to be hanging. The same method was adopted in the Code of Criminal Procedure (1973). Section 354(5) of the above procedure reads as \"When any person is sentenced to death, the sentence shall direct that the person be hanged by the neck till the person is dead.\" The hanging method is long drop, the method devised by William Marwood in Britain. The person has their neck snapped as they fall through the trapdoor and is left hanging until they are dead.\n"
] |
What is the driving force of the solar cycle? Does it significantly affect total solar output? | The Sun is a giant rotating ball of plasma. Plasma is very electrically conductive, so the currents in the Sun naturally give rise to strong magnetic fields. This "solar dynamo" is a complex phenomenon because the Sun's poles and equator rotate at different speeds, causing the magnetic field to evolve in an unusual fashion.
Typically, for our Sun the Solar magnetic field will evolve over a 22-year cycle (give or take), alternating between periods of relative quiescence while it is established at one pole, then evolving into a chaotic jumble with a peak of sunspot activity, then flipping poles and reprising the same track again. Often this is just called an 11-year cycle, since mostly we don't care about whether the Solar magnetic field is "upside down" or not.
During the periods where the magnetic fields on the Sun are chaotic, this causes the formation of sunspots, Solar flares, etc., and changes the overall output of the Sun, since sunspots are a slightly different temperature than the rest of the Sun. Additionally, it's known that the Sun can enter a period when it stops alternating the magnetic field and stops producing sunspots (a so-called "Maunder Minimum" period).
These changes affect the total energy output of the Sun to a small degree (within about 1% or less), but they also change the Solar wind environment, which impacts cosmic ray flux at Earth and cloud formation, which in turn has a complex impact on Earth's climate. There is a lot of suspicion that the Maunder Minimum was the cause of a general global cooling at around that time, but overall the evidence is sketchy and such conclusions are more speculation than fact. | [
"The solar cycle also modulates the flux of short-wavelength solar radiation, from ultraviolet to X-ray and influences the frequency of solar flares, coronal mass ejections and other solar eruptive phenomena.\n",
"The solar cycle is an approximately 11-year period of varying solar activity including solar maximum where the solar wind is strongest and solar minimum where the solar wind is weakest. Galactic cosmic rays create a continuous radiation dose throughout the Solar System that increases during solar minimum and decreases during solar maximum (solar activity). The inner and outer radiation belts are two regions of trapped particles from the solar wind that are later accelerated by dynamic interaction with the Earth's magnetic field. While always high, the radiation dose in these belts can increase dramatically during geomagnetic storms and substorms. Solar proton events (SPEs) are bursts of energetic protons accelerated by the Sun. They occur relatively rarely and can produce extremely high radiation levels. Without thick shielding, SPEs are sufficiently strong to cause acute radiation poisoning and death.\n",
"Geography affects solar energy potential because areas that are closer to the equator have a greater amount of solar radiation. However, the use of photovoltaics that can follow the position of the Sun can significantly increase the solar energy potential in areas that are farther from the equator. Time variation effects the potential of solar energy because during the nighttime there is little solar radiation on the surface of the Earth for solar panels to absorb. This limits the amount of energy that solar panels can absorb in one day. Cloud cover can affect the potential of solar panels because clouds block incoming light from the Sun and reduce the light available for solar cells.\n",
"Variations in the solar output have effects on climate, less through the usually quite small effects on insolation and more through the relatively large changes of UV radiation and potentially also indirectly through modulation of cosmic ray radiation. The 11-year solar cycle measurably alters the behaviour of weather and atmosphere, but decadal and centennial climate cycles are also attributed to solar variation.\n",
"The Sun is the predominant source of energy input to the Earth. Both long- and short-term variations in solar intensity are known to affect global climate. Solar output varies on shorter time scales, including the 11-year solar cycle and longer-term modulations.\n",
"The net effect during periods of enhanced solar magnetic activity is increased radiant solar output because faculae are larger and persist longer than sunspots. Conversely, periods of lower solar magnetic activity and fewer sunspots (such as the Maunder Minimum) may correlate with times of lower irradiance.\n",
"The Sun's magnetic field structures its atmosphere and outer layers all the way through the corona and into the solar wind. Its spatiotemporal variations lead to various measurable solar phenomena. Other solar phenomena are closely related to the cycle, which serves as the energy source and dynamical engine for the former.\n"
] |
how do some websites keep you from navigating back? | Usually, this is done by sending people to a page, which immediately redirects to another page. So, if you click back on the second page, it just sends you back to the first page, which immediately redirects you again.
Usually, if you click back fast enough, you can get past it and eventually make it back to where you were. (A minimal sketch of this redirect trick follows the quoted passages below.) | [
"The first layer of defense is a captcha page where the user is prompted to verify he is a real person and not a bot or tool. Solving the captcha will create a cookie that permits access to the search engine again for a while. After about one day the captcha page is removed again.\n",
"Delete messages: A user can delete a message in a one-on-one chat or a group chat. But this can be done within an hour of sending the message. Group admins have access to delete any message at any point in time. \n",
"This user identification procedure has received many criticisms, especially from people with disabilities, but also from other people who feel that their everyday work is slowed down by distorted words that are difficult to read. It takes the average person approximately 10 seconds to solve a typical CAPTCHA.\n",
"Another technique used consists of using a script to re-post the target site's CAPTCHA as a CAPTCHA to a site owned by the attacker, which unsuspecting humans visit and correctly solve within a short while for the script to use.\n",
"Users may wish to remove browsing history data or stop it being collected (at least temporarily). They may want or need to do this to try to prevent other people who have full access to the computer they are using (such as their parents, spouse, manager, or law enforcement officials) from seeing confidential information about websites they have visited.\n",
"Many of these techniques require the attacker to tamper with the keypad, wait for the unsuspecting user to enter the combination, and return at a later time to retrieve the information. These techniques are sometimes used by members of intelligence or law enforcement agencies, as they are often effective and surreptitious. \n",
"Stop and account\" is a little-known standard operating procedure, rather than a power, of the police, under Recommendation 61 (Rec.61); it is not a statutory procedure like stop and search. It applies to people on foot in a public place. There is no power to force a person to stop, or to detain them. The decision to \"request\" a person to \"stop and account\" is left to the discretion of the individual officer; there is no guidance on this. Unlike stop and search, there is no requirement for \"reasonable suspicion\". There is no actual requirement on a police officer, beyond identifying themself as such; no need to tell the persons stopped why they are being asked to account for themselves, or to say that they are free to leave without answering questions. However, police forces have procedures governing stops. The Metropolitan Police use the acronym \"WISER\": show Warrant card; state Identify and police Station; explain person's Entitlement to a copy of the stop record; give Reason for the stop. While a record must be made of every stop, there is no requirement for police forces to keep statistics on number of stops or ethnicity of people stopped, according to the College of Policing.\n"
] |
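As a minimal sketch of the redirect trick described in the answer above (assuming Python's standard-library HTTP server; the `/landing` and `/content` paths and the port are made up for illustration), the interstitial page answers every visit with an immediate redirect, so pressing Back simply lands you on the redirecting page again:

```python
# Sketch of a "back-button trap": /landing never shows content, it only
# redirects to /content, so going Back to /landing bounces you forward again.
# Paths and port are illustrative assumptions, not taken from the answer.
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectTrap(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/landing":
            self.send_response(302)                  # immediate server-side redirect
            self.send_header("Location", "/content")
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<p>Destination page. Try pressing Back.</p>")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), RedirectTrap).serve_forever()
```

Client-side variants do much the same thing with a meta refresh or a JavaScript redirect on the landing page; either way, clicking Back quickly enough can outrun the redirect, as the answer notes.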
I've heard a lot that pre-modern soldiers generally did not aim to kill, or would purposefully miss their target so as not to kill. Was this also a problem when melee weapons and archery were the dominant means of warfare? | You may be interested in a [previous answer](_URL_0_) I wrote to a similar question. | [
"As happens, the Army's men often had the weapons to fight the \"last\" war by the time of the following conflict. Most of the 19th century weapons were technologically obsolete at their introduction or within five years, and despite the apparently exhaustive testing many inadequate weapons were issued.\n",
"Traditionally, soldiers (infantry and cavalry alike) and officers had carried swords for both personal protection and use in combat. The development of firearms in the mid-14th century changed the way battles were fought, and by the late-15th century it was no longer especially practical to close to hand-to-hand combat range to engage one's opponents, owing to the prevalence of pikes and musket-fire (pike and shot) on the battlefield.\n",
"The development of firearms rendered bows obsolete in warfare, although efforts were sometimes made to preserve archery practice. In England and Wales, for example, the government tried to enforce practice with the longbow until the end of the 16th century. This was because it was recognized that the bow had been instrumental to military success during the Hundred Years' War. Despite the high social status, ongoing utility, and widespread pleasure of archery in Armenia, China, Egypt, England and Wales, America, India, Japan, Korea, Turkey and elsewhere, almost every culture that gained access to even early firearms used them widely, to the neglect of archery. Early firearms were inferior in rate-of-fire, and were very sensitive to wet weather. However, they had longer effective range and were tactically superior in the common situation of soldiers shooting at each other from behind obstructions. They also required significantly less training to use properly, in particular penetrating steel armor without any need to develop special musculature. Armies equipped with guns could thus provide superior firepower, and highly trained archers became obsolete on the battlefield. However, the bow and arrow is still an effective weapon, and archers have seen action in the 21st century. Traditional archery remains in use for sport, and for hunting in many areas.\n",
"Since human beings are lacking in the natural weapons possessed by other predators, humans have a long history of making tools to overcome this shortcoming. The evolution of hunting weapons shows an ever-increasing ability to extend the hunter's reach, while maintaining the ability to produce disabling or lethal wounds, allowing the hunter to capture the game. \n",
"The rural men used weapons at hand, often nothing more than lances, before they gained guns. When they gained firearms, they adapted their combat tactics. As the 19th century advanced, the increased number of fighters had to rely on less expensive weapons; they used spears combined with sabers, and failing that, the most primitive weapons, including indigenous \"bolas.\"\n",
"The advent of firearms eventually rendered bows obsolete in warfare. Despite the high social status, ongoing utility, and widespread pleasure of archery, almost every culture that gained access to even early firearms used them widely, to the relative neglect of archery.\n",
"\"In times of difficulty men had to carry their guns while they followed the plough… the nation or the people that had lost the fighting instinct was sure to be swamped by others who possessed that instinct.\" \n"
] |
Why was Gustav III of Sweden so determined to aid Louis XVI during the French Revolution? | Multiple reasons. First Gustav III was a king who believed in the idea of 'Enlightened Absolutism'. He'd instituted absolute monarchy (one of two quite short periods of that in Swedish history) in a coup d'etat, inspired strongly by Louis's absolutism, and Gustav in his coup strongly curtailed the power of the Swedish parliament. Gustav wanted to be hip to his age ('enlightened') but firmly believed in his - and other kings' - divine right to rule. (and indeed a huge stickler for rules and traditional formality in general, to the extent that other rulers like his cousin Catherine the Great found him annoying to deal with) His 'enlightened' ideas were - for instance - things like abolishing torture, but not popular rule.
Second, Gustav (as many others in Europe at that time) was a huge Francophile and admirer of France and the French monarchy, Versailles and all that. His court used French a lot more than the Swedish language (and the late 18th century is the one period in Swedish language history when loanwords from French dominated).
Third, per the above he didn't have so much to gain from aiding Louis as he stood to lose by _not_ doing so. Louis's downfall started earlier but Gustav III still ended up with the dubious honor of being murdered first.
| [
"When Gustav made war on Russia and did poorly he was assassinated by a conspiracy of nobles angry that he tried to restrict their privileges for the benefit of the peasants. Under King Charles XIII, Sweden joined various coalitions against Napoleon, but was badly defeated and lost much of its territory, especially Finland and Pomerania. The king was overthrown by the army, which in 1810 decided to bring in one of Napoleon's marshals, Bernadotte, as the heir apparent. He had a Jacobin background and was well-grounded in revolutionary principles, but put Sweden in the coalition that opposed Napoleon. He served as a quite conservative king Charles XIV John of Sweden (1818–44).\n",
"In Sweden, King Gustav III (reigned 1771–92) was an enlightened despot, who weakened the nobility and promoted numerous major social reforms. He felt the Swedish monarchy could survive and flourish by achieving a coalition with the newly emerged middle classes against the nobility. He was close to King Louis XVI so he was disgusted with French radicalism. Nevertheless, he decided to promote additional antifeudal reforms to strengthen his hand among the middle classes. When the king was assassinated in 1792 his brother Charles became regent, but real power was with Gustaf Adolf Reuterholm, who bitterly opposed the French Revolution and all its supporters. Under King Gustav IV Adolf, Sweden joined various coalitions against Napoleon, but was badly defeated and lost much of its territory, especially Finland and Pomerania. The king was overthrown by the army, which in 1810 decided to bring in one of Napoleon's marshals, Bernadotte, as the heir apparent and army commander. He had a Jacobin background and was well-grounded in revolutionary principles, but put Sweden in the coalition that opposed Napoleon. Bernadotte served as a quite conservative king Charles XIV John of Sweden (1818–44).\n",
"Following the uprising against the French monarchy in 1789, Gustav pursued an alliance of princes aimed at crushing the insurrection and re-instating his French counterpart, King Louis XVI, offering Swedish military assistance as well as his leadership. In 1792 he was mortally wounded by a gunshot in the lower back during a masquerade ball as part of an aristocratic-parliamentary coup attempt, but managed to assume command and quell the uprising before succumbing to sepsis 13 days later, a period during which he received apologies from many of his political enemies. Gustav's immense powers were placed in the hands of a regency under his brother Prince Carl and Gustaf Adolf Reuterholm until his son and successor Gustav IV Adolf reached adulthood in 1796. The Gustavian autocracy thus survived until 1809, when his son was ousted in another coup d'état, which definitively established parliament as the dominant political power.\n",
"Gustav was a vocal opponent of what he saw as the abuse of political privileges seized by the nobility since the death of King Charles XII. Seizing power from the government in a coup d'état, called the Swedish Revolution, in 1772 that ended the Age of Liberty, he initiated a campaign to restore a measure of Royal autocracy, which was completed by the Union and Security Act of 1789, which swept away most of the powers exercised by the Swedish Riksdag (parliament) during the Age of Liberty, but at the same time it opened up the government for all citizens, thereby breaking the privileges of the nobility.\n",
"The Swedish nobles were about this time violently opposed to the king, who, by the aid of the other orders of the state, had wrested their power from them and was now ruling despotically. This dislike was increased by the \"coup d'état\" of 1789 and by the king's known desire to interfere in favor of Louis XVIII in France. Anckarström, a man of strong passions and violent temper, resolved upon the assassination of Gustav and communicated his intention to other disaffected nobles, including Counts Horn and Ribbing.\n",
"After Choiseul's dismissal in 1770, Vergennes was sent to Sweden with instructions to help the pro-French party of The Hats with advice and money. The coup by which King Gustav III secured power (19 August 1772) was a major diplomatic triumph for France and brought to an end the Swedish Age of Liberty.\n",
"Gustav next aimed at forming a league of princes against the revolutionary government in France, and subordinated every other consideration to this goal. His profound knowledge of popular assemblies enabled him, alone among contemporary sovereigns, to gauge the scope of the French Revolution accurately from the first. He was hampered, however, by financial restrictions and lack of support from the other European Powers. Then, after the brief Diet of Gävle on 22 January – 24 February 1792, he fell victim to a widespread political conspiracy among his aristocratic enemies.\n"
] |
High heels were primarily worn by men for the first 700 years after they were invented, changing to being primarily worn by women in the 17th century. What triggered this change? Was there a time when both genders commonly wore them? | I've actually got a past answer that deals with this question: [How did heels became a purely feminine thing, after it was first used on shoes in the 16th century by noble or rich men?](_URL_0_) (For more on the Great Masculine Renunciation I mention in it, try [here](_URL_1_).) It's on the short side, though, so there is certainly room for someone else to write a fresh response. | [
"The girdles and buckled belts that were popular in the fifth and sixth century, with tools and personal items suspended from the belt, have gone out of fashion by the tenth century. Women wear simple ankle shoes and slippers in the tenth and eleventh century. Archaeological evidence suggests that a variety of shoe styles were available to women during this period.\n",
"In the 19th century, the woman's double suit was common, comprising a gown from shoulder to knees plus a set of trousers with leggings going down to the ankles. In the first half of the 19th century the top became knee-length while an ankle-length drawer was added as a bottom. By the second half of the 19th century, in France, the sleeves started to vanish, the bottom became shorter to reach only the knees and the top became hip-length and both became more form fitting. In the 1900s women wore wool dresses on the beach that were made of up to of fabric.\n",
"It has not been popular for men to wear high heels since the late 18th century. Some men see the cultural norm, which often mandates that women must wear heels to look professional, as completely unproblematic. However, women report that they are often painful to walk in, and commonly result in negative side effects to joints and veins after prolonged use.\n",
"Modern high heels were brought to Europe by emissaries of Shāh Abbās I of Persia in the early 17th century. Men wore them to imply their upper-class status; only someone who did not have to work could afford, both financially and practically, to wear such extravagant shoes. Royalty such as King Louis XIV wore heels to impart status. As the shoes caught on, and other members of society began donning high heels, elite members ordered their heels to be made even higher to distinguish themselves from lower classes. Authorities even began regulating the length of a high heel's point according to social rank. Klaus Carl includes these lengths in his book \"Shoes\": \"½ inch for commoners, 1 inch for the bourgeois, 1 and ½ inches for knights, 2 inches for nobles, and 2 and ½ inches for princes.\"” As women took to appropriating this style, the heels’ width changed in another fundamental way. Men wore thick heels, while women wore skinny ones. Then, when Enlightenment ideals such as science, nature, and logic took hold of many European societies, men gradually stopped wearing heels. After the French Revolution in the late 1780s, heels, femininity, and superficiality all became intertwined. In this way, heels became much more associated with a woman's supposed sense of impracticality and extravagance.\n",
"During the 16th century, royalty such as Catherine de Medici and Mary I of England began wearing high-heeled shoes to make them look taller or larger than life. By 1580, men also wore them, and a person with authority or wealth might be described as, \"well-heeled\". In modern society, high-heeled shoes are a part of women's fashion and are widespread in certain countries around the world.\n",
"Historically, high heels were used by aristocratic women for cosmetic reasons, to raise their height or to keep their feet and long dresses clean. The style was then subject to sumptuary laws. In more modern times, stiletto heels have been restricted when they might damage the floor surface or cause accidents.\n",
"During the Middle Ages, both men and women wore pattens in Europe, commonly seen as the predecessor of the modern high-heeled shoe, while menial classes usually wore hand-made footwear made from whatever materials were available. Going barefoot was seen as a mark of poverty and the lowest social class, as well as being the mark of a prisoner. In the 15th century, chopines were created in Turkey and were usually 7–8 inches (17.7–20.3 cm) high. These shoes became popular in Venice and throughout Europe as a status symbol revealing wealth and social standing. During the 16th century royalty, such as Catherine de Medici and Mary I of England, started wearing high-heeled shoes to make them look taller or larger than life. By 1580, even men wore them, and a person with authority or wealth was often referred to as \"well-heeled\".\n"
] |
What was the draw weight of a historical Yumi (Japanese bow)? | You would do far better to search on military forums, as there are no historical records giving an actual figure. There is no direct data reliable enough to draw a conclusion, unlike for Chinese, Turkish, Middle Eastern, and European bows that still survive or were measured with quantitative draw weights. Not to mention that there are so many variables, such as arrow tip, armour, range, penetration of various materials, the ability to wound versus kill, and so on.
| [
"Among Filipino swords, the most distinguishing characteristic of the Kampilan is its huge size. At about 36 to 40 inches (90 to 100 cm) long, it is much larger than other Filipino swords, and is thought to be the longest, though smaller versions (sometimes called the \"kampilan bolo\") exist. A notable exception would be the \"panabas\", another Philippine longsword, of which an unusually large example could measure up to four feet in length.\n",
"The wodao () is a Chinese sword from the Ming Dynasty. It is typically long and slender, but heavy, with a curved back and sharp blade. It bears a strong resemblance to the Tang sword, zhanmadao, Tachi or Odachi in form. Extant examples show a handle approximately 25.5 cm long, with a gently curved blade 80 cm long. The Japanese samurai warriors were also adept with the wodao.\n",
"The , the largest kofun in Japan, are believed to have been constructed over a period of 20 years in the mid 5th century during the Kofun Period. While it cannot be accurately confirmed, it is commonly accepted that the tomb was built for the late Emperor Nintoku. The Imperial Household Agency of Japan treats it as such.\n",
"An authentic tachi that was manufactured in the correct time period averaged 70–80 centimeters (27 9/16 - 31 1/2 inches) in cutting edge length (\"nagasa\") and compared to a katana was generally lighter in weight in proportion to its length, had a greater taper from hilt to point, was more curved with a smaller point area.\n",
"BULLET::::- The yumi (longbow), reflected in the art of \"kyūjutsu\" (lit. the skill of the bow) was a major weapon of the Japanese military. Its usage declined with the introduction of the tanegashima (Japanese matchlock) during the Sengoku period, but the skill was still practiced at least for sport. The yumi, an asymmetric composite bow made from bamboo, wood, rattan and leather, had an effective range of if accuracy was not an issue. On foot, it was usually used behind a (), a large, mobile wooden shield, but the yumi could also be used from horseback because of its asymmetric shape. The practice of shooting from horseback became a Shinto ceremony known as \"yabusame\" ().\n",
"The a Nanboku-chō period Japanese sword (\"Tantō\"), with a length of 27.4 cm. It was made in 1367 and donated to the temple by Yanagisawa Yoshiyasu on the occasion of the 133th memorial services for Takeda Shigen. It was designated a Important Cultural Property of Japan on March 26, 1915.\n",
"An (large/great sword) or nodachi (野太刀, field sword) was a type of traditionally made Japanese sword (日本刀, nihontō) used by the samurai class of feudal Japan. The Chinese equivalent and 'cousin' for this type of sword in terms of weight and length is the miao dao, and the Western battlefield equivalent (though less similar) is the longsword or claymore.\n"
] |
How much land does it take to support one human being? | > The minimum amount of agricultural land necessary for sustainable food security, with a diversified diet similar to those of North America and Western Europe (hence including meat), is 0.5 of a hectare per person. This does not allow for any land degradation such as soil erosion, and it assumes adequate water supplies. Very few populous countries have more than an average of 0.25 of a hectare. It is realistic to suppose that the absolute minimum of arable land to support one person is a mere 0.07 of a hectare–and this assumes a largely vegetarian diet, no land degradation or water shortages, virtually no post-harvest waste, and farmers who know precisely when and how to plant, fertilize, irrigate, etc. [FAO, 1993]
From the FAO (the Food and Agriculture Organization of the United Nations).
Note: 0.07 hectare ≈ 0.17 acres. (A quick back-of-the-envelope check of these figures follows the quoted passages below.) | [
"For example, there were 12 billion hectares of biologically productive land and water on this planet in 2008. Dividing by the number of people alive in that year, 6.7 billion, gives a biocapacity of 1.8 global hectares per person . This assumes that no land is set aside for other species that consume the same biological material as humans.\n",
"BULLET::::- Ownership of tax-paying land sized at least one-fourth of a \"veli\" (about an acre and a half). The land-owning requirement was reduced to one-eighth \"veli\" for people who had learned at least one Veda and one Bhashya.\n",
"Since its founding, Landesa has had one specific goal: securing land rights for the world's poorest people. This is because more than two billion people lives in extreme poverty, surviving on $2 a day or less. Of those, more than 75 percent live in rural areas and rely on agriculture for their sustenance. Most do not have secure rights to land and therefore, limited opportunity to build a better future for themselves and their family. True ownership of land in the developing world determines access to shelter, income, education, healthcare, and improves economic and nutritional security.\n",
"In 2007, the average biologically productive area per person worldwide was approximately 1.8 global hectares (gha) per capita. The U.S. footprint per capita was 9.0 gha, and that of Switzerland was 5.6 gha, while China's was 1.8 gha. The WWF claims that the human footprint has exceeded the biocapacity (the available supply of natural resources) of the planet by 20%. Wackernagel and Rees originally estimated that the available biological capacity for the 6 billion people on Earth at that time was about 1.3 hectares per person, which is smaller than the 1.8 global hectares published for 2006, because the initial studies neither used global hectares nor included bioproductive marine areas.\n",
"Eventually, as per Municipal Resolution on 1989 asserts Mr. Agustin “Usting” Mendez Tuboranon Local or Municipal Hero of our time. Apparently, series of donation from portion of track land in order to entice people and capture to live.\n",
"In comparison, based on a world population of seven billion, the world's inhabitants, as a loose crowd taking up ten square feet (one square metre) per person (Jacobs Method), would occupy a space a little larger than Delaware's land area.\n",
"In advocating the case of the persons thus dispossessed, it is a right, and not a charity ... [Government must] create a national fund, out of which there shall be paid to every person, when arrived at the age of twenty-one years, the sum of fifteen pounds sterling, as a compensation in part, for the loss of his or her natural inheritance, by the introduction of the system of landed property. And also, the sum of ten pounds per annum, during life, to every person now living, of the age of fifty years, and to all others as they shall arrive at that age.\n"
] |
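As a quick back-of-the-envelope check of the FAO figures quoted in the answer (a sketch; the only added constant is the standard hectare-to-acre conversion):

```python
# Back-of-the-envelope check of the FAO land-per-person figures quoted above.
ACRES_PER_HECTARE = 2.47105          # standard conversion factor

diversified_diet_ha = 0.5            # ha per person, diversified (meat-including) diet
minimal_vegetarian_ha = 0.07         # ha per person, absolute minimum (FAO, 1993)

print(round(minimal_vegetarian_ha * ACRES_PER_HECTARE, 2))  # 0.17 acres, matching the note
print(round(100 / diversified_diet_ha))                      # ~200 people per km^2 (1 km^2 = 100 ha)
print(round(100 / minimal_vegetarian_ha))                    # ~1429 people per km^2
```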
Louis XIV and absolute monarchy | The standard Europe 101 argument (and I just gave a lecture on it last week) is:
- by building a state bureaucracy of non-noble, paid staff (instead of offices that would be handed to aristocrats)
- by having loyal administrators, *Intendants*, travel to distant areas and keep an eye on things;
- and by building Versailles... a huge symbol of status and power... and then inviting the most powerful nobles in the realm to come live there. (and distribute "favors," such as the opportunity to dress him in the morning, which translated into access.)
But a cooler, much more sophisticated argument--too complex for my 101 class, unfortunately-- is in William Beik's book on Absolutism, which basically shows how it's just as much a *bottom up* process... namely, all the aforementioned things (bureaucratic state-building, Versailles) were going on, of course,
... but the key factor became when the great nobles (Anjou, Berry, etc) started to perceive this growing "state" of Louis XIV's monarchy as an *opportunity*--as an avenue to advance their own interests.
So, imagine the Duc de Berry in competition with the Duc d'Anjou; instead of a futile (and often deadlocked) direct struggle of rallying their own supporters, resources, etc., they could now turn to the growing state--and use it to press their claims to their own advantage.
Short version: once the great nobles saw that they could *use* Louis XIV for their own interests, they increasingly made Louis the de-facto "center" of power... and thereafter could be increasingly coopted *by* him...
| [
"Louis XIV, known as the \"Sun King\", reigned over France from 1643 until 1715 although his strongest period of personal rule did not begin until 1661 after the death of his Italian chief minister Cardinal Mazarin. Louis believed in the divine right of kings, which asserts that a monarch is above everyone except God, and is therefore not answerable to the will of his people, the aristocracy, or the Church. Louis continued his predecessors' work of creating a centralized state governed from Paris, sought to eliminate remnants of feudalism in France, and subjugated and weakened the aristocracy. By these means he consolidated a system of absolute monarchical rule in France that endured until the French Revolution. However, Louis XIV's long reign saw France involved in many wars that drained its treasury.\n",
"Though some historians doubt it, Louis XIV of France (1638–1715) is often said to have proclaimed \"L'état, c'est moi\" (\"I am the State!\"). Although often criticized for his extravagances, such as the Palace of Versailles, he reigned over France for a long period, and some historians consider him a successful absolute monarch. More recently, revisionist historians have questioned whether Louis' reign should be considered 'absolute', given the reality of the balance of power between the monarch and the nobility.\n",
"For much of the reign of Louis XIV, who was known as the \"Sun King\" (French: \"le Roi Soleil\"), France stood as the leading power in Europe, engaging in three major wars—the Franco-Dutch War, the War of the League of Augsburg, and the War of the Spanish Succession—and two minor conflicts—the War of Devolution, and the War of the Reunions. Louis believed in the Divine Right of Kings, the theory that the King was crowned by God and accountable to him alone. Consequently, he has long been considered the archetypal absolute monarch. Louis XIV continued the work of his predecessor to create a centralized state, governed from the capital to sweep away the remnants of feudalism that persisted in parts of France. He succeeded in breaking the power of the provincial nobility, much of which had risen in revolt during his minority called the Fronde, and forced many leading nobles to live with him in his lavish Palace of Versailles.\n",
"While often considered a tyrant and a warmonger (especially in England), Louis XIV was not in any way a despot in the 20th-century sense. The traditional customs and institutions of France limited his power and in any case, communications were poor and no national police force existed.\n",
"Louis XIV (Louis Dieudonné; 5 September 16381 September 1715), known as Louis the Great (') or the Sun King ('), was King of France from 14 May 1643 until his death in 1715. His reign of 72 years and 110 days is the longest recorded of any monarch of a sovereign country in European history. In the age of absolutism in Europe, Louis XIV's France was a leader in the growing centralisation of power.\n",
"Absolute monarchy in France slowly emerged in the 16th century and became firmly established during the 17th century. Absolute monarchy is a variation of the governmental form of monarchy in which the monarch holds supreme authority and where that authority is not restricted by any written laws, legislature, or customs. In France, Louis XIV was the most famous exemplar of absolute monarchy, with his court central to French political and cultural life during his reign.\n",
"The monarchy reached its peak during the 17th century and the reign of Louis XIV. By turning powerful feudal lords into courtiers at the Palace of Versailles, Louis XIV's personal power became unchallenged. Remembered for his numerous wars, he made France the leading European power. France became the most populous country in Europe and had tremendous influence over European politics, economy, and culture. French became the most-used language in diplomacy, science, literature and international affairs, and remained so until the 20th century. France obtained many overseas possessions in the Americas, Africa and Asia. Louis XIV also revoked the Edict of Nantes, forcing thousands of Huguenots into exile.\n"
] |
Can stars spit out elements we've yet to discover? | Currently, for transuranium elements, the heavier the atomic weight, the shorter the half-life. So, by simple extrapolation, heavier nuclei shouldn't really exist. Now, it has been theorized that there is an ["island of stability"](_URL_0_), where extremely heavy nuclei suddenly become stable. We are currently unable to test this hypothesis with our accelerators, so it's highly controversial. Also, it's worth noting that unless the half-life of an element is millions of years, it's challenging to find it in nature, since it has already mostly decayed (a rough decay-fraction sketch is included after the excerpts below). | [
"The presence of heavy elements in a star's light-spectrum is another potential biosignature; such elements would (in theory) be found if the star was being used as an incinerator/repository for nuclear waste products.\n",
"BULLET::::- 1957 – William Alfred Fowler, Margaret Burbidge, Geoffrey Burbidge, and Fred Hoyle, in their 1957 paper \"Synthesis of the Elements in Stars\", show that the abundances of essentially all but the lightest chemical elements can be explained by the process of nucleosynthesis in stars.\n",
"In collaboration with American physicist William Fowler and British astronomer Fred Hoyle, he and his wife were co-authors of \"Synthesis of the Elements in Stars\", a fundamental paper on stellar nucleosynthesis published in 1957. It is commonly referred to as the BFH paper after the initials of the surnames of the four authors. This paper describes the process of stars burning lighter elements into successively heavier atoms which then are expelled to form other structures in the universe, including other stars and planets.\n",
"In 1850, he \"lost\" a star that he had been observing, which Lt. Matthew Maury, the superintendent of the Observatory, claimed was evidence for a 9th planet (Pluto had not yet been discovered). In 1878, however, CHF Peters, director of the Hamilton College Observatory in New York, showed that the star had not in fact vanished, and that the previous results had been due to human error.\n",
"Schmidt is currently leading the SkyMapper telescope Project and the associated Southern Sky Survey, which will encompass billions of individual objects, enabling the team to pick out the most unusual objects. In 2014 they announced the discovery of the first star which did not contain any iron, indicating that it is a very primitive star, probably formed during the first rush of star formation following the Big Bang.\n",
"In 1957, the BFH group showed the famous result that all of the elements except the very lightest, are produced by nuclear processes inside stars. For this they received the Warner Prize in 1959. In her later research she was one of the first to measure the masses, compositions, and rotation curves of galaxies and was one of the pioneers in the spectroscopic study of quasars.\n",
"The light observed from the star was emitted when the universe was about 30% of its current age of 13.8 billion years. Kelly suggested that similar microlensing discoveries could help them identify the earliest stars in the universe. The star no longer exists as a blue supergiant, given the known lifetime of such stars.\n"
] |
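A minimal sketch of the decay arithmetic behind the answer above: the fraction of a nuclide surviving after the Earth's roughly 4.5-billion-year history, using N/N0 = (1/2)^(t/T). The half-lives used here are illustrative placeholders, not values for specific isotopes.

```python
# Rough sketch (illustrative half-lives, not real isotopes): how much of a
# radionuclide survives after ~4.5 billion years, via N/N0 = 0.5 ** (t / T).
EARTH_AGE_YEARS = 4.5e9  # approximate age of the Earth

def fraction_remaining(half_life_years, elapsed_years=EARTH_AGE_YEARS):
    return 0.5 ** (elapsed_years / half_life_years)

for half_life in (1e6, 1e8, 7e8):  # 1 Myr, 100 Myr, 700 Myr -- placeholder values
    print(f"half-life {half_life:.0e} yr: {fraction_remaining(half_life):.2e} of the original remains")
```

With a million-year half-life essentially nothing survives the age of the Earth, which is why such elements would have to be made artificially rather than found in ores.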
how do photos taken on an iphone geotag without service? | It simply uses the phone's built-in GPS (which only requires a view of the sky) and puts the phone's current location into the metadata of the picture (a short sketch of reading that metadata back out is included after the excerpts below). | [
"Many smartphones automatically geotag their photos by default. Photographers who prefer not to reveal their location can turn this feature off. Additionally smartphones can use their GPS to geotag photos taken with an external camera.\n",
"Contributors gather imagery with their smartphones using an Android or iOS app. It is also possible to upload images captured with other cameras. The OpenStreetCam app supports using an OBD-II dongle plugged into the vehicle; in concert with the mobile device's GPS, OSC can derive more accurate image locations. The app also recognizes and processes street signs in real time while capturing imagery. Once the imagery is recorded, it is uploaded, processed, and published to the website.\n",
"Geotagged photos may be visually stamped with their GPS location information using software tools. A stamped photo affords universal and cross-platform viewing of the photo's location, and offers the security of retaining that location information in the event of metadata corruption, or if file metadata is stripped from a photo, e.g. when uploading to various online photo sharing communities.\n",
"Geotagging a photo is the process in which a photo is marked with the geographical identification of the place it was taken. Most technology with photo taking capabilities are equipped with GPS system sensors that routinely geotag photos and videos. Crowdsourced data available from photo-sharing services have the potentiality of tracking places. Geotagging can reveal the footprints and behaviors of travelers by utilizing spatial proximity of geo-tagged photos that are shared online, making it possible to extract travel information relating to a particular location. Instagram, Flickr, and Panoramio are a few services that provide the option of geotagging images. Flickr has over 40 million geotagged photos uploaded by 400 thousand users, and still growing at a rapid pace. Some sites including Panoramio and Wikimedia Commons show their geocoded photographs on a map, helping the user find pictures of the same or nearby objects from different directions.\n",
"MapWith.Us implemented automatic geotagging (auto-geotagging) via cell phones in late 2007. The free mobile application allows users at remote locations to create real-time Web-based maps by uploading and geotagging photos with cell phones. The application works by utilizing several different capabilities within a modern cell phone. When an image is captured by the cell phone camera, the built-in GPS tags the image with the present location. The images are afterwards uploaded via the cell phone's Internet data connection to the MapWith.Us website, where they are compiled into photo album \"map articles\".\n",
"Cyber-Shot models such as the DSC-HX20V, DSC-HX90V and the DSC-HX200V have a built-in GPS so the user can have their photos automatically geotagged as they are being taken. The feature can also serve as a compass as it shows the user's position on the camera screen.\n",
"A geotagged photograph is a photograph which is associated with a geographical location by geotagging. Usually this is done by assigning at least a latitude and longitude to the image, and optionally altitude, compass bearing and other fields may also be included.\n"
] |
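To make the metadata point concrete, here is a minimal sketch of reading GPS tags back out of a photo's EXIF data. It assumes a reasonably recent version of the Pillow library, and `photo.jpg` is a hypothetical file name; this is not iPhone-specific code, just the standard EXIF GPS fields most cameras and phones write.

```python
# Minimal sketch: read the GPS coordinates embedded in a JPEG's EXIF metadata.
# Assumes the Pillow library; "photo.jpg" is a hypothetical file name.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def to_degrees(dms, ref):
    """Convert EXIF (degrees, minutes, seconds) plus an N/S/E/W ref to a signed decimal."""
    value = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -value if ref in ("S", "W") else value

exif = Image.open("photo.jpg").getexif()
gps_raw = exif.get_ifd(0x8825)  # 0x8825 is the standard GPSInfo IFD tag
gps = {GPSTAGS.get(tag, tag): val for tag, val in gps_raw.items()}

if gps:
    lat = to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    print(f"Photo was taken at roughly {lat:.5f}, {lon:.5f}")
else:
    print("No GPS metadata found")
```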
why is a flat tax regressive? | Let's look at two people:
* A makes $50,000 and spends $25,000.
* B makes $100,000 and spends $40,000.
If you have a flat consumption tax of 50%, A pays $12,500 and B pays $20,000. The effective tax rate (the amount paid in tax relative to income) is 25% for A and 20% for B. Since A pays more as a percentage of his income despite making less, it is a regressive tax.
It works this way because as people make more money, they spend less of it as a percentage of their income. The more income you have, the less you have to spend to stay alive and the more you can save. (The arithmetic above is worked through in a short sketch after the excerpts below.) | [
"Where deductions are allowed, a 'flat tax' is a progressive tax with the special characteristic that, above the maximum deduction, the marginal rate on all further income is constant. Such a tax is said to be marginally flat above that point. The difference between a true flat tax and a marginally flat tax can be reconciled by recognizing that the latter simply excludes certain types of income from being defined as taxable income; hence, both kinds of tax are flat on taxable income.\n",
"A regressive tax is a tax imposed in such a manner that the average tax rate (tax paid ÷ personal income) decreases as the amount subject to taxation increases. \"Regressive\" describes a distribution effect on income or expenditure, referring to the way the rate progresses from high to low, so that the average tax rate exceeds the marginal tax rate. In terms of individual income and wealth, a regressive tax imposes a greater burden (relative to resources) on the poor than on the rich: there is an inverse relationship between the tax rate and the taxpayer's ability to pay, as measured by assets, consumption, or income. These taxes tend to reduce the tax burden of the people with a higher ability to pay, as they shift the relative burden increasingly to those with a lower ability to pay.\n",
"Taxes other than the income tax (for example, taxes on sales and payrolls) tend to be regressive. Hence, making the income tax flat could result in a regressive overall tax structure. Under such a structure, those with lower incomes tend to pay a \"higher\" proportion of their income in total taxes than the affluent do. The fraction of household income that is a return to capital (dividends, interest, royalties, profits of unincorporated businesses) is positively correlated with total household income. Hence a flat tax limited to wages would seem to leave the wealthy better off. Modifying the tax base can change the effects. A flat tax could be targeted at income (rather than wages), which could place the tax burden equally on all earners, including those who earn income primarily from returns on investment. Tax systems could utilize a flat sales tax to target all consumption, which can be modified with rebates or exemptions to remove regressive effects (such as the proposed Fair Tax in the U.S.).\n",
"Flat tax implementations \"without\" the provision of a negative income tax actually need an \"additional\" effort in order to \"avoid\" negative taxation. For such a tax, the exemption only can be paid after knowing the earned income. Flat tax implementations \"with\" negative income tax allow the payment or crediting of the income tax at any interval, independent of the amount of the actual income.\n",
"Modified flat taxes have been proposed which would allow deductions for a very few items, while still eliminating the vast majority of existing deductions. Charitable deductions and home mortgage interest are the most discussed examples of deductions that would be retained, as these deductions are popular with voters and are often used. Another common theme is a single, large, fixed deduction. This large fixed deduction would compensate for the elimination of various existing deductions and would simplify taxes, having the side-effect that many (mostly low income) households will not have to file tax returns.\n",
"Flat tax benefits higher income brackets progressively due to decline in marginal value. For example, if a flat tax system has a large per-citizen deductible (such as the \"Armey\" scheme below), then it is a progressive tax. As a result, the term Flat Tax is actually a shorthand for the more proper marginally flat tax.\n",
"In jurisdictions where tax rates are progressive - meaning that income taxes as a percentage of income are higher for higher incomes or tax brackets, resulting in a higher marginal tax rate - this often results in lower taxes paid, regardless of the time value of money.\n"
] |
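Here is the arithmetic from the answer above, worked through as a tiny sketch (the incomes, spending levels, and the 50% consumption-tax rate are simply the example numbers used in that answer):

```python
# Effective tax rate = tax paid / income, for the two example earners above.
def effective_rate(income, spending, consumption_tax=0.50):
    tax_paid = spending * consumption_tax
    return tax_paid, tax_paid / income

for name, income, spending in (("A", 50_000, 25_000), ("B", 100_000, 40_000)):
    tax, rate = effective_rate(income, spending)
    print(f"{name}: pays ${tax:,.0f} in tax, an effective rate of {rate:.0%} of income")
# A: pays $12,500 in tax, an effective rate of 25% of income
# B: pays $20,000 in tax, an effective rate of 20% of income
```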
Regarding the media treatment of President Kennedy's affairs in the 60's. Is this really accurate? | A lot of it was that the press really, really liked Kennedy. You don't smear people you like. No president since has been anywhere near as popular with the press, not even Obama, though he has come closest. | [
"President John F. Kennedy, whose government was the main sponsor of Diệm's regime, learned of Đức's death when handed the morning newspapers while he was talking to his brother, Attorney General Robert F. Kennedy, on the phone. Kennedy reportedly interrupted their conversation about segregation in Alabama by exclaiming \"Jesus Christ!\" He later remarked that \"no news picture in history has generated so much emotion around the world as that one.\" U.S. Senator Frank Church (D-ID), a member of the Senate Foreign Relations Committee, claimed that \"such grisly scenes have not been witnessed since the Christian martyrs marched hand in hand into the Roman arenas.\"\n",
"Following the assassination of President Kennedy on 22 November 1963, the BBC screened \"Dr Finlay's Casebook\" as part of its regular programming. There were reportedly over 2,000 phone calls and 500 letters and telegrams complaining about the decision.\n",
"Cochran was the main anchor of ABC's break in coverage of the Assassination of President Kennedy on November 22, 1963. Cochran announced the death of President Kennedy as \"confirmed\" and ABC News ran a graphic showing Kennedy's picture and the dates 1917-1963 after a wire service report came to him that \"government sources in Washington\" had stated the President was dead, something both CBS' Walter Cronkite and NBC's Bill Ryan chose not to do. This wire report came to Cochran several minutes before assistant press secretary Malcolm Kilduff officially announced the President's death. \n",
"Minster provided the narration for the controversial Central television documentary \"The Men Who Killed Kennedy\", which outlined various theories concerning the assassination of the American president John F. Kennedy.\n",
"Immediately following the announcement of the assassination of President Kennedy, Kelman spearheaded the national remote coverage of all events emanating from the Kennedy Compound in Hyannis Port, Massachusetts.\n",
"A week following the assassination of John F. Kennedy, a journalist visits Jacqueline Kennedy for an interview at her home in Hyannis Port regarding her husband's legacy. After Jackie reflects upon her 1961 televised tour of The White House, the journalist turns to inquiries about John F. Kennedy’s assassination and its aftermath for Jackie and her family. She talks about events shortly prior to the assassination, before describing her shock and horror in reaction. Members of the White House close to the newly sworn-in President Lyndon Johnson and his wife Lady Bird are seen comforting Jackie in the aftermath onboard Air Force One. Robert F. Kennedy soon appears and shares her grief, escorting her back to Washington. Jackie expresses her deep concern for the well-being of her children in adjusting to the loss of their father. \n",
"The series, run by director and executive producer Dawn Porter, looks at Kennedy's political lore, including his 1968 presidential campaign, which ended with his assassination on June 5, 1968. The show uses archival footage during Kennedy's time as Attorney General and Senator, as well as conversations he had with his brothers, President John F. Kennedy and Senator Ted Kennedy.\n"
] |
To what extent was the spread of Islam bloodless? | You're absolutely right that the expansion of territory under the control of Muslims happened very quickly and through military conquest! However, scholars differentiate between *Arabization* and *Islamicization* (or Islamization) of conquered territories. Generally, conquest preceded the gradual adoption of Arab culture, which in turn preceded the bulk of local conversions to Islam.
Here are three earlier answers of mine that address the complexity of Arabization, Islamicization, the status of "peoples of the Book", and the possibilities of forced conversion:
* [What does it mean that the Middle East was 'Arabized' over the course of history?](_URL_0_)
* [How much of a financial burden was the jizya on non-Muslims?](_URL_1_)
* [Can we blame the West for radical Islam/its ideology? Has Islam always been violent?](_URL_2_)
Hopefully this will get you started! | [
"Besides the spread of Islam through Arabia by prophets it spread through trade routes like the Silk Road and through conflicts of war. Through the Silk road traders and members of the early Muslim faith were able to go to countries such as China and create mosques around 627 C. E. As men from the Middle East came to China they would get married to these Asian women, which led to a spreading of the faith and traditions of Islam in multiplicities. The Crusades in the 9th and 10th centuries encouraged the spread of Islam through the invasions of Latin Christian soldiers and Muslim soldiers into each other's lands. The whole conflict began on the premises of a Holy Land and which group of people owned these lands that led to these foes invading their respective lands. As the religion itself spread so did its implications of ritual, such as prayer.\n",
"In 614, the Muslims were on their way to the hills of Mecca to offer prayer with Muhammad, when a group of polytheists observed them. They began to abuse and fight them. Sa`ad beat a polytheist and shed his blood, reportedly becoming the first Muslim to shed blood in the name of Islam.\n",
"Historically, the Orthodox Church and the non-Chalcedonians were among the first peoples to have contact with Islam, which conquered Roman/Byzantine Syria-Palestine and Egypt in the 7th century, and fought many battles against Islamic conquests. The Qur'an itself records its concurrent observations regarding the Roman world in Surah al-Rum. The main contact with Islam however, came after the conquest of the Seljuk Turks of Roman/Byzantine Anatolia in the 13th century.\n",
"The spread of Islam was initially driven by increasing trade links outside of the archipelago. Traders and the royalty of major kingdoms were usually the first to convert to Islam. Dominant kingdoms included Mataram in Central Java, and the sultanates of Ternate and Tidore in the Maluku Islands to the east. By the end of the 13th century, Islam had been established in North Sumatra; by the 14th in northeast Malaya, Brunei, the southern Philippines and among some courtiers of East Java; and the 15th in Malacca and other areas of the Malay Peninsula. Although it is known that the spread of Islam began in the west of the archipelago, the fragmentary evidence does not suggest a rolling wave of conversion through adjacent areas; rather, it suggests the process was complicated and slow.\n",
"Prior to dismemberment of the Ottoman Empire, the population of the area comprising modern Israel, the West Bank, and Gaza Strip was not exclusively Muslim. Under the Empire's rule in the mid-16th century, there were no more than 10,000 Jews in Palestine, making up around 5% of the population. By the mid-19th century, Turkish sources recorded that 80% of the population of 600,000 was identified as Muslim, 10% as Christian Arab and 5–7% as Jewish.\n",
"There exist different views among scholars about the spread of Islam. Islam began in Arabia and from 633 AD until the late 10th century it was spread through conquests, far-reaching trade and missionary activity.\n",
"Initially, the spread of Islam was slow and gradual. Though historical documents are incomplete, the limited evidence suggests that the spread of Islam accelerated in the 15th century, as the military power of Melaka Sultanate in Malay Peninsular today Malaysia and other Islamic Sultanates dominated the region aided by episodes of Muslim coup such as in 1446, wars and superior control of maritime trading and ultimate markets. During 1511, Tome Pires found animists and Muslims in the north coast of Java. Some rulers were Islamized Muslims, others followed the old Hindu and Buddhist traditions. By the reign of Sultan Agung of Mataram, most of the older Hindu-Buddhist kingdoms of Indonesia, had at least nominally converted to Islam. The last one to do so was Makassar in 1605. After the fall of Majapahit empire, Bali became the refuge for the Hindu upper class, Brahmins and their followers that fled from Java, thus transferring the Hindu culture of Java to Bali. Hinduism and Buddhism remained extant in some areas of East Java where it syncretized with animism. Their traditions also continued in East and Central Java where they earlier held a sway. Animism was also practiced in remote areas of other islands of Indonesia.\n"
] |
can you get in trouble for streaming movies online for free? | Any authority is much, much more likely to go after the host of the stream than an individual user who watches it. Legal? No, it's not. Going to get you into trouble? Unlikely. | [
"The film can also be rented via DVD on Netflix as of March 17, 2010. When asked by a media provider on behalf of Netflix in April 2010 if she would also offer the film via the company's on-demand streaming service in exchange for a limited amount of money, Paley requested that the film be streamed either DRM-free or with an addendum telling viewers where the film was available for download. When the internet television service refused to meet these conditions due to a \"No Bumpers\" policy, Paley refused to accept their offer, citing her desire to uphold her principled opposition to DRM.\n",
"A 2009 court case, \"United States v. Dove,\" ruled that the content industry equation of lost sales with illegal downloads is not valid, with the judge noting \"Those who download movies and music for free would not necessarily purchase those movies and music at the full purchase price... although it is true that someone who copies a digital version of a sound recording has little incentive to purchase the recording through legitimate means, it does not necessarily follow that the downloader would have made a legitimate purchase if the recording had not been available for free.\"\n",
"It is possible to watch movies without an internet connection, using Download & Play, available on a PC through MEO Go. In 2014, the services was refreshed with an improved image, faster navigation and new features with additional content and information, and a more accessible user experience. MEO VideoClube offers multiple payment options including a monthly invoice and the prepaid MEO VideoClube card.\n",
"Video Unlimited allowed users to purchase or rent videos. Purchases and rentals could be made online through the Sony Entertainment Network store, through the PlayStation Store on PlayStation 3, PlayStation 4, and PlayStation Vita, through the Video Unlimited store on many Sony Blu-ray disc players and Bravia TVs or via an Xperia smartphone and tablet app. The services provided customers with an easy and accessible way to watch and discover new movies and TV. Through the wide range of options that Video Unlimited has, a user could choose from new movies, classic movies or to keep up with a TV series. The use of logging in enabled the customer to easily choose and watch videos from anywhere, on devices that are compatible with Video Unlimited. Video Unlimited was later replaced by PlayStation Video.\n",
"At present, this case has received considerable attention within the motion picture industry and DVD streaming websites. It has been cited as justification to close certain movie streaming websites. During October 2011, Zediva took down their DVD streaming service and agreed to pay $1.8 million to the Motion Picture Association. Zediva argued that it serves a similar function as rental stores like Blockbuster who don't need a licensing agreement to rent movies. Zediva only rented DVD's to one customer at a time and did not make DVD copies. The Motion Picture Association contended that this type of streaming was illegal and in violation of copyright law. In August 2011, the U.S. District court Judge John Walter ordered a preliminary injunction against Zediva, shutting down their service. The Columbia Pictures v. Redd Horne ruling played a key role in this case.\n",
"Netflix, an online movie streaming website founded in 1997, is an example of how an online business has affected a B&M businesses such as video rental stores. After Netflix and similar companies became popular, traditional DVD rental stores such as Blockbuster LLC went out of business. Customers preferred to be able to instantly watch movies and TV shows using \"streaming\", without having to go to a physical rental store to rent a DVD, and then return to the store to give the DVD back. \"The rapid rise of online film streaming offered by the likes of Lovefilm and Netflix made Blockbuster's video and DVD [rental] business model practically obsolete.'\n",
"A billion-dollar campaign called Total Access was introduced in 2007 as a strategy against Netflix. Through Blockbuster Online customers could rent a DVD online and receive a new movie for free when they returned it to a Blockbuster store. While it was a major success every free movie cost the company two dollars, but the hope was that it would attract enough new subscribers to cover the loss. Netflix felt threatened, and Hastings approached Antioco with a suggestion to buy Blockbuster's online business. In return, a new system would be introduced where customers could return their movies to a Blockbuster store. Before the deal could be realized, board member Carl Icahn intervened, refusing to let the company lose more money through Total Access. Antioco was pushed out in July and replaced with James Keyes, who rejected Hasting's proposal, raised the price of online DVD rentals and put an end to the free movie deal. As a consequence, Blockbuster Online's previously massive growth quickly stopped. Antioco's departure reportedly also involved continued controversy over his compensation. He left with a $24.7 million severance package.\n"
] |
How does a radio not pick up old signals? | Old radio signals from space? Radio telescopes *do* pick up radio waves from long ago because it takes so long for the wave to travel through space.
Old radio signals from Earth? They are absorbed and destroyed soon after they are created. For instance, a radio broadcast tower sends out waves. These waves travel through the air without being affected much. But soon after they reach the ground, people, antennas, and such, they are quickly absorbed. A little bit of the radio waves may be reflected around the local terrain for a fraction of a second, but with each reflection, the intensity of the wave greatly drops. It is possible to tell the difference between the primary wave and a significantly-delayed reflection of the primary wave from a microsecond ago because the primary wave is much stronger. The reflected signals act essentially as noise, degrading the primary signal. With the right set-up, the reflected signals can interfere quite a bit with the primary signal. But there are no radio broadcasts from 50 years ago still bouncing around along Earth's surface, if that's what you had in mind (a toy sketch of how quickly reflections fade is included after the excerpts below). | [
"Unlike modern AM radio stations that transmit a continuous radio frequency, whose amplitude (power) is modulated by an audio signal, the first radio transmitters transmitted information by wireless telegraphy (radiotelegraphy), the transmitter was turned on and off (on-off keying) to produce different length pulses of unmodulated carrier wave signal, \"dots\" and \"dashes\", that spelled out text messages in Morse code. As a result, early radio receiving apparatus merely had to detect the presence or absence of the radio signal, not convert it to audio. The device that did this was called a detector. The coherer was the most successful of many detector devices that were tried in the early days of radio.\n",
"Some conventional radios use, or have an option for, a talk-back-on-scan function. If the user transmits when the radio is in a scan mode, it may transmit on the last channel received instead of the selected channel. This may allow users of multi-channel radios to reply to the last message without looking at the radio to see which channel it was on. Without this feature, the user would have to use the channel selector to switch to the channel where the last message occurred. (This option can cause confusion and users must be trained to understand this feature.)\n",
"The primitive spark gap radio transmitters used during the first three decades of radio (1886-1916) could not transmit audio (sound) and instead transmitted information by wireless telegraphy; the operator switched the transmitter on and off with a telegraph key, creating pulses of radio waves to spell out text messages in Morse code. So the radio receiving equipment of the time did not have to convert the radio waves into sound like modern receivers, but merely detect the presence or absence of the radio signal. The device that did this was called a detector. The first widely used detector was the coherer, invented in 1890. The coherer was a very poor detector, insensitive and prone to false triggering due to impulsive noise, which motivated much research to find better radio wave detectors.\n",
"The first radio transmitters could not transmit audio (sound) like modern AM and FM transmitters, and instead transmitted information by radiotelegraphy; the transmitter was turned on and off rapidly using a switch called a telegraph key, creating different length pulses of radio waves (\"dots\" and \"dashes\") which spelled out text messages in Morse code. Marconi used several types of station:\n",
"Radios with DTMF decoders may monitor all system traffic or remain muted until called, depending on the system design. When the radio receives the correct digit string, it may momentarily buzz or sound a Sonalert. An indicator light may turn on and remain latched on. In most systems, the radio's receive audio would latch on after receiving a valid digit string if normally muted.\n",
"The \"spark\" radio transmitters during Collins time could not transmit sound (audio) as modern AM and FM radio transmitters do. This was because the discharge of a spark cannot produce continuous waves, but only damped waves. Instead they transmitted information by telegraphy, the operator turned the transmitter off and on by tapping on a switch called a telegraph key to produce different length pulses of damped radio waves, to spell out text messages in Morse code. By the last years of the century, many wireless researchers such as Reginald Fessenden, Ernst Ruhmer, William Dubilier, Quirino Majorana, and Valdemar Poulsen were working to develop continuous wave transmitters which could be modulated to carry sound, radiotelephony. \n",
"Due to historical reasons, all commonly used modulations are based on an idea of minimal modification of the radio itself, usually just connecting the external speaker or headphone output directly to the transmit microphone input and receiver audio output directly to the computer microphone input. Upon adding a \"turn the transmitter on\" output signal (\"PTT\") for transmitter control, one has made a \"radio modem\".\n"
] |
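As a toy illustration of why delayed reflections stop mattering, here is a minimal sketch; the 10 dB loss per bounce is an assumed, purely illustrative figure, not a measured property of any real terrain.

```python
# Toy model: how much weaker an "echo" is after repeated reflections, assuming
# each bounce costs an illustrative 10 dB (i.e., roughly 90% of the power).
def echo_attenuation_db(bounces, loss_per_bounce_db=10.0):
    return loss_per_bounce_db * bounces

for n in range(1, 6):
    print(f"after {n} bounces: {echo_attenuation_db(n):.0f} dB weaker than the direct wave")
```

After only a handful of bounces the echo is tens of thousands of times weaker than the direct signal, so it registers as faint noise rather than as a replay of an old broadcast.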
what does „star collapses under its own gravity” really mean when star dies? | The gravity of the star constantly pulls all of its mass toward the center. While the star is still fusing, the heat and radiation in its plasma push particles apart and force the star to expand. The outward push and the inward pull of gravity eventually reach a relative equilibrium that keeps the shape and size of the star. When the reactions stop, the outward push weakens, gravity overtakes it, and the star is crushed.
It isn't a perfect overview, but hopefully it's helpful (the standard balance condition is written out after the excerpts below). | [
"The gravitational collapse of a star is a natural process that can produce a black hole. It is inevitable at the end of the life of a star, when all stellar energy sources are exhausted. If the mass of the collapsing part of the star is below the Tolman–Oppenheimer–Volkoff (TOV) limit for neutron-degenerate matter, the end product is a compact star — either a white dwarf (for masses below the Chandrasekhar limit) or a neutron star or a (hypothetical) quark star. If the collapsing star has a mass exceeding the TOV limit, the crush will continue until zero volume is achieved and a black hole is formed around that point in space.\n",
"The collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. The result is one of the various types of compact star. Which type forms depends on the mass of the remnant of the original star left after the outer layers have been blown away. Such explosions and pulsations lead to planetary nebula. This mass can be substantially less than the original star. Remnants exceeding are produced by stars that were over before the collapse.\n",
"Stars already lose a small flow of mass via solar wind, coronal mass ejections, and other natural processes. Over the course of a star's life on the main sequence this loss is usually negligible compared to the star's total mass; only at the end of a star's life when it becomes a red giant or a supernova is a large proportion of material ejected. The star lifting techniques that have been proposed would operate by increasing this natural plasma flow and manipulating it with magnetic fields.\n",
"Most stars will eventually come to a point in their evolution when the outward radiation pressure from the nuclear fusions in its interior can no longer resist the ever-present gravitational forces. When this happens, the star collapses under its own weight and undergoes the process of stellar death. For most stars, this will result in the formation of a very dense and compact stellar remnant, also known as a compact star.\n",
"Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. For stars this usually occurs either because a star has too little \"fuel\" left to maintain its temperature through stellar nucleosynthesis, or because a star that would have been stable receives extra matter in a way that does not raise its core temperature. In either case the star's temperature is no longer high enough to prevent it from collapsing under its own weight.\n",
"A Star Fell from Heaven is a 1936 British comedy film directed by Paul Merzbach and starring Joseph Schmidt, Florine McKinney and Billy Milton. It was made at Elstree Studios. It was a remake of the 1934 Austrian film of the same name which had also starred Schmidt.\n",
"At what is called the death of the star (when a star has burned out its fuel supply), it will undergo a contraction that can be halted only if it reaches a new state of equilibrium. Depending on the mass during its lifetime, these stellar remnants can take one of three forms:\n"
] |
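For reference, the balance described in the answer above is the standard textbook hydrostatic-equilibrium condition (a general relation, not something specific to that answer): at every radius the outward pressure gradient must support the weight of the overlying gas.

```latex
% Hydrostatic equilibrium: the pressure gradient balances gravity at each radius r.
\frac{dP}{dr} = -\,\frac{G\, m(r)\, \rho(r)}{r^{2}}
```

Here P is the pressure, ρ(r) the local density, m(r) the mass enclosed within radius r, and G the gravitational constant. When fusion ends, the pressure term can no longer keep up and the star contracts.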
How much radiation would you be exposed to holding weapons-grade plutonium in your hand? | It depends on how much of it you’re holding, and whether or not criticality is reached. The “common” isotopes of plutonium decay primarily by alpha emission, but Pu always comes clad in other metals, which the alpha particles can’t penetrate. Some fraction of the time, it will undergo spontaneous fission instead, which will also result in the emission of neutrons and gamma rays. These will in general penetrate through the cladding and give you a dose.
As long as there is no criticality reached, the dose rate won’t be *too* high. People can handle amounts on the order of a few kilograms of weapons-grade plutonium (I personally have done so) without receiving a dangerous dose.
You don’t just hold bare Pu in your bare hands though, the Pu is cladded with some other metal (like zirconium), and you generally wear gloves when handling it. The gloves are not very heavy-duty, as they’re not used to shield radiation. Instead they’re used to mitigate the spread of radioactive contamination. When you’re done handling the material, your hands will generally be tested for contamination, and once you’ve been given the all-clear, you remove the gloves and dispose of them.
So it’s really not as dangerous as you think. The real danger is in criticality accidents, but as long as you’re handling less than the critical mass with your given amount of moderation, then criticality can’t be reached, by definition. A fun fact about it though is that for kilogram amounts of Pu, the cladding actually feels warm to the touch due to the radiation (most of which doesn’t penetrate the cladding). | [
"BULLET::::- While demonstrating his technique to visiting scientists at Los Alamos, Canadian physicist Louis Slotin manually assembled a critical mass of plutonium. A momentary slip of a screwdriver caused a prompt critical reaction. Slotin died on May 30 from massive radiation poisoning, with an estimated dose of 1,000 rads (rad), or 10 grays (Gy). Seven observers, who received doses as high as 166 rads, survived, yet three died within a few decades from conditions believed to be radiation-related.\n",
"The of plutonium which did not undergo fission and the of fission products were scattered. Plutonium is not a biological hazard unless ingested or inhaled, and its alpha radiation cannot penetrate skin. Once inside the body it is significantly toxic both radiologically and chemically, having a heavy metal toxicity on a par with that of arsenic. Estimates based on the Manhattan Project's \"tolerance dose\" of one microgram of plutonium per worker put 10.6 pounds at the equivalent of about five billion tolerable doses.\n",
"BULLET::::- Those tests in the air scattered some plutonium over the entire globe; this great dilution of the plutonium has resulted in the threat to each exposed person being very small as each person is only exposed to a very small amount.\n",
"BULLET::::- Evans, Robley D. (December 1962), \"Remarks on the Maximum Permissible Deposition of Plutonium in Man, and the Safety Factors in the Pivot Point Radiation Protection Guide of 0.1 µc of Radium in Man\", \"Health Physics\" 8 (6): 751-752\n",
"In a 1975 study of the eighteen people who received plutonium injections in Manhattan Project experiments, CAL-1 (Albert Stevens) was shown to have received by far the highest dose to his bones and liver, calculated as 580 and 1460 rad, respectively. The dose of 580 rad was calculated based on the \"average skeletal dose\" contributed from the two radionuclides Pu-238 (575 rad) and Pu-239 (7.7 rad). This was then converted to the bone's surface dose, which was 7,420 rad. Stevens's absorbed dose was almost entirely based on the Pu-238 in his system. One of the findings of the 1975 study was that Stevens and five others injected with plutonium had endured \"doses high enough to be considered carcinogenic. However, no bone tumors have yet appeared.\" The word \"yet\" reflected the fact that four other subjects were still alive in 1975.\n",
"By measuring the concentration of sodium-24, created by a neutron activation whereby sodium-23 nuclei were rendered radioactive by absorbing neutrons from the accident, it was possible to deduce the dose received by the technicians. According to the STA, Hisashi Ouchi was exposed to 17 sieverts (Sv) of radiation, Masato Shinohara received 10 Sv, and Yutaka Yokokawa 3 Sv. By comparison, a dose of .05 sieverts is the maximum allowable annual dose for Japanese nuclear workers. A dose of 8 Sv (800 rem) is normally fatal and more than 10 Sv almost invariably so. Normal background radiation amounts to an annual exposure of about 3 mSv (millisieverts). There were 56 plant workers whose exposures ranged up to 23 mSv and a further 21 workers received elevated doses when draining the precipitation tank. Seven workers immediately outside the plant received doses estimated at 6–15 mSv (combined neutron and gamma effects).\n",
"The \"Vixen\" experimental tests used TNT to blow up simulated nuclear warheads containing plutonium-239. In total, \"Vixen B\" scattered 22.2 kg of plutonium around the Maralinga test site known as Taranaki, in particles of widely divergent size. Plutonium is not particularly dangerous externally - it emits alpha particles which are stopped by of air, or the dead layer of skin cells on the body, and is not a very intensive source of radiation, due to its long half-life of 24,000 years. It is most dangerous when it enters the body, in the worst case by breathing, and therefore tiny particles, often the result of such explosion testing, are the worst threat. The extreme biological persistence of plutonium's radioactive contamination and the cancer threat posed by alpha radiation occurring internally together establish plutonium's dangers.\n"
] |
Breathing fumes of dry ice, bad for you? | A little bit of it, not that bad. But I once made the mistake of accidentally inhaling some really concentrated fumes, while bent over double trying to get some of the last chunks of the stuff out of a usually closed container. **River** of blood out my nose in under a second. Do not try at home. | [
"Dry ice sublimates at , at Earth atmospheric pressures. This extreme cold makes the solid dangerous to handle without protection due to burns caused by freezing (frostbite). While generally not very toxic, the outgassing from it can cause hypercapnia (abnormally elevated carbon dioxide levels in the blood) due to buildup in confined locations.\n",
"Prolonged exposure to dry ice can cause severe skin damage through frostbite, and the fog produced may also hinder attempts to withdraw from contact in a safe manner. Because it sublimes into large quantities of carbon dioxide gas, which could pose a danger of hypercapnia, dry ice should only be exposed to open air in a well-ventilated environment. For this reason, dry ice is assigned the S-phrase in the context of laboratory safety. Industrial dry ice may contain contaminants that make it unsafe for direct contact with foodstuffs. Tiny dry ice pellets used in dry ice blast cleaning do not contain oily residues.\n",
"Storing yam at low temperature reduces the respiration rates. However, temperatures below cause damage through chilling, causing a breakdown of internal tissues, increasing water loss and yam's susceptibility to decay. The symptoms of chilling injury are not always obvious when the tubers are still in cold storage. The injury becomes noticeable as soon as the tubers are restored to ambient temperatures.\n",
"Dry ice or solid carbon dioxide dramatically contains low temperature solid, which is –78.5 °C, and frostbite production. For safety, touching is restricted with bare skin. Handling dry ice with the appropriate gloves are crucial for protection. Waste contaminated dry ice should be rinsed and left in the fume hood or good ventilator area to allow it to sublime. The vaporization of dry ice can cause irritation to human if ingestion or inhalation occurred. Dry ice can cause headache and shortness of breath in case where a large amount of CO gets into the human respiratory tract resulting in the depletion of oxygen.\n",
"Dry ice can be used to arrest and prevent insect activity in closed containers of grains and grain products, as it displaces oxygen, but does not alter the taste or quality of foods. For the same reason, it can prevent or retard food oils and fats from becoming rancid.\n",
"A number of products are available for cold testing, each with varying melting points. Although household ice (0°C) is cheap and easy to obtain, it is not as accurate as colder products. Dry ice (-78°c) can be used, however there have been concerns regarding the damaging effects of using something so cold in the oral cavity despite evidence to suggest that dry ice has no negative impact on mucosal or tooth structure . Refrigerant sprays, such as ethyl chloride (-12.3°C), 1,1,1,2-tetrafluoroethane (-26.5°C) or a propane/butane/isobutane gas mixture are further commonly used cold tests. Cold testing is thought to stimulate Type Aδ fibres in the pulpal tissue, which elicit a short, sharp pain. \n",
"Freezing drizzle is extremely dangerous to aircraft, as the supercooled water droplets will freeze onto the airframe, degrading aircraft performance considerably. The loss of American Eagle Flight 4184 on October 31, 1994, has been attributed to ice buildup due to freezing drizzle aloft.\n"
] |
is the sugar in chocolate the same as the sugar in fruit? | No, they are different sugars.
The (added) sugar in chocolate is typically sucrose; the characteristic sugar in fruit is fructose (though fruit also contains glucose and some sucrose).
However, in practice the body handles them much the same -- sucrose is simply glucose bonded to fructose -- so a comparable amount of sucrose is 'the same' as that amount of fructose.
Remember, you're not just eating a teaspoon of sugar in either case. Eating fruit also means eating fiber, which tends to mitigate blood glucose spikes in a way that chocolate (or soda) won't. | [
"BULLET::::- Fructose, or fruit sugar, occurs naturally in fruits, some root vegetables, cane sugar and honey and is the sweetest of the sugars. It is one of the components of sucrose or table sugar. It is used as a high-fructose syrup, which is manufactured from hydrolyzed corn starch that has been processed to yield corn syrup, with enzymes then added to convert part of the glucose into fructose.\n",
"a. Fructose is not the only sugar found in fruits. Glucose and sucrose are also found in varying quantities in various fruits, and sometimes exceed the fructose present. For example, 32% of the edible portion of a date is glucose, compared with 24% fructose and 8% sucrose. However, peaches contain more sucrose (6.66%) than they do fructose (0.93%) or glucose (1.47%).\n",
"Chocolate is made from the fermented, roasted and ground beans of the tropical cacao tree. In America, cocoa refers to ground cacao beans. Chocolate is the combination of cocoa, cocoa butter, sugar and other ingredients (milk, flavorings, and emulsifiers)and they are sweet.\n",
"Fructose, or fruit sugar, is a simple ketonic monosaccharide found in many plants, where it is often bonded to glucose to form the disaccharide sucrose. It is one of the three dietary monosaccharides, along with glucose and galactose, that are absorbed directly into blood during digestion. Fructose was discovered by French chemist Augustin-Pierre Dubrunfaut in 1847. The name \"fructose\" was coined in 1857 by the English chemist William Allen Miller. Pure, dry fructose is a sweet, white, odorless, crystalline solid, and is the most water-soluble of all the sugars.\n",
"Raw chocolate is chocolate which is produced in a raw or minimally-processed form. It is made from unroasted (sun-dried) cacao beans and cold pressed cacao butter. A variety of crystalline and liquid sweeteners may be used, including: coconut sugar, coconut nectar, xylitol, agave nectar, maple syrup, and stevia. Cane sugar and other highly processed sugars are not used. Dairy products are not added to raw chocolate, therefore it is usually vegan. Soy is also usually avoided – soy lecithin is often used in processed chocolate. It is also naturally gluten-free.\n",
"Chocolate is a typically sweet, usually brown, food preparation of \"Theobroma cacao\" seeds, roasted, ground, and often flavored. Pure, unsweetened chocolate contains primarily cocoa solids and cocoa butter in varying proportions. Much of the chocolate currently consumed is in the form of sweet chocolate, combining chocolate with sugar. Milk chocolate is sweet chocolate that additionally contains milk powder or condensed milk. White chocolate contains cocoa butter, sugar, and milk, but no cocoa solids. Dark chocolate is produced by adding fat and sugar to the cacao mixture, with no milk or much less than milk chocolate.\n",
"Since sugar came to the Americas sometime after chocolate did, chocolate was said to have an acquired taste as it comes off as bitter without added sweetener. The Spaniards created a drink consisting of chocolate, vanilla, and other spices which was served chilled. This drink cannot be compared to modern-day hot chocolate as it was very spicy and bitter, contrasting with the modern notion of very sweet, warm chocolate.\n"
] |
how is it that the leaders in the house can hold back a vote (regarding the us budget)? | For one, spending bills have to originate in the House. Also, the House Rules Committee recently amended the rules to only allow the majority leader, Eric Cantor, to put bills before the House. This rule is temporary, but it is the reason why moderate Republicans have not allied with Democrats to pass a clean continuing resolution.
If you are wondering how/why this is legal or allowed, the Constitution allows the House to set its own rules with a simple majority vote, and the House has operated this way (majority party screwing minority party) for a long, long time. It's not democratic, but it is both legal and with precedent.
Anyone with better knowledge of parliamentary procedure please feel free to correct the above. | [
"Both majority and minority blocs in Congress have used the lack of quorum in defeating bills that they don't want to be passed without putting it to a vote. After an election during the lame-duck session, quorums are notoriously difficult to muster, more so in the House of Representatives as winning incumbents may opt to go on vacation, and defeated incumbents may opt to not to show up.\n",
"Both houses of the United States Congress have refused to seat new members based on Article I, Section 5 of the United States Constitution which states that, \"Each House shall be the judge of the elections, returns and qualifications of its own members, and a majority of each shall constitute a quorum to do business; but a smaller number may adjourn from day to day, and may be authorized to compel the attendance of absent members, in such manner, and under such penalties as each House may provide.\" This had been interpreted that members of the House of Representatives and of the Senate could refuse to recognize the election or appointment of a new representative or senator for any reason, often political heterodoxy or criminal record. \n",
"A two-thirds vote is required to pass a budget, and in both the original budget negotiations and in the attempt to revise the budget no political party by itself had enough votes to pass a budget. The majority Democrats fought to minimize cuts to programs, while most of the minority Republicans refused to accept any tax increase. The original budget was put together by Democrats and some Republicans using spending cuts, internal borrowing, and accounting maneuvers.\n",
"If a government cannot get its appropriation (budget) legislation passed by the House of Representatives, or the House passes a vote of \"no confidence\" in the government, the Prime Minister is bound by convention to immediately advise the Governor-General to dissolve the House of Representatives and hold a fresh election.\n",
"Congress can, and often does, work on its own proposals independently of the President. The congressional budget resolutions are under the jurisdiction of the United States House Committee on the Budget and the United States Senate Committee on the Budget. Traditionally, after both houses pass a budget resolution, selected representatives and senators negotiate a conference report to reconcile differences between the House and the Senate versions. The conference report, in order to become binding, must be approved by both the House and Senate. Because the budget resolution is a concurrent resolution, it is not signed by the President and \"does not have statutory effect; no money can be raised or spent pursuant to it\".\n",
"The majority leader must assess the risk in deciding to fill the tree. Some senators will reject a bill if they feel they have not been given an adequate opportunity to offer amendments. For example, Senator Susan Collins voted against the 2010 Defense Authorization Bill although she largely supported the substance of the bill, citing the filling of the amendment tree by Senate Majority Leader Harry Reid. Reid used this tactic during the Consolidated Appropriations Act, 2014 Senate floor debate, preventing amendments that would have removed the provisions that rolled back Section 716 (derivatives guarantees by the FDIC) of the Dodd-Frank legislation.\n",
"The House of Representatives is the principal legislative body. It consists of a maximum 596 representatives with 448 are directly elected through FPTP and another 120 elected through proportional representation in 4 nationwide districts while the President can appoint up to 28 . The House sits for a five-year term but can be dissolved earlier by the President. The Constitution reserves fifty percent of the House may force the resignation of the executive cabinet by voting a motion of censure. For this reason, the Prime Minister and his cabinet are necessarily from the dominant party or coalition in the assembly. In the case of a president and house from opposing parties, this leads to the situation known as cohabitation.\n"
] |
is saudi arabia's rejection of its security council seat anything other than symbolic? | I am not sure what would qualify this to you as a big deal. It is in protest of the veto members blocking Saudi's attempts to sanction Bashar Al-Asad. The security council seats are highly coveted as it signifies prestige and international influence for a country.
That region of the world will still have representation on the Security Council, as the council is divided into regions (Asia-Pacific 3, Africa 3, Eastern Europe 2, Latin America 2, Europe/other 5, permanent members included in the count). Members are elected to two-year terms. The Saudis declined their election to this council, so someone else from the region will be elected -- although one must question why Saudi Arabia campaigned for the seat in the first place, only to turn it down.
If you are questioning what the power of a rotating seat on the Security Council actually is, then that's a different question. The Security Council members discuss and vote on matters of international peace and security (e.g. peacekeeping missions, sanctions, etc.). The 5 permanent members have the power of veto. Saudi Arabia turned down the opportunity to participate and vote on such matters.
Here is a good article from the NYPost:
_URL_0_ | [
"Following the vote, Saudi Arabia, despite winning, declined to take the seat citing the UNSC's \"double standards\" in being allegedly ineffective in regards to the Israeli–Palestinian conflict, nuclear disarmament in the Middle East and putting an end to the Syrian civil war. This was the first time a state had rejected a Security Council seat. Saudi Arabia's refusal of the seat surprised both United Nations diplomats and some observers inside the country, where the announcement of the election had been received favorably. The Gulf Cooperation Council supported Saudi Arabia's bid. In addition, Saudi intelligence chief Prince Bandar bin Sultan suggested a distancing of Saudi Arabia–United States relations as a result of the same issue over the Syrian civil war, amongst other reasons. On 12 November, Saudi Arabia formally declined the seat, advising the Secretary-General that it \"would not be in a position to take the seat on the Security Council to which it was elected.\"\n",
"Khalid bin Salman, Saudi Arabia’s ambassador to United States said the reason for his taking part in Warsaw summit was, “\"to take a firm stand against forces that threaten the future of peace and security,\" in particular Iranian.\n",
"Prince Saud said in 2004 that Saudi Arabia would like to reduce its dependence on U.S.-dominated security arrangements. In July 2004, he claimed the real source of problems in the Middle East were not Muslims but \"injustice and deprivation inflicted in the region\". In August 2007, he denied allegations that terrorists were travelling from Saudi Arabia to Iraq and claimed it was vice versa.\n",
"The Council of Political and Security Affairs of Saudi Arabia is one of two Subcabinets of the Kingdom of Saudi Arabia, the other being the Council of Economic and Development Affairs. It is led by its Chairman Mohammad bin Salman. The Council is composed of the head of Intelligence and nine ministers. All members of the Council are appointed by royal decree. It was established by King Salman to replace the National Security Council in January 2015.\n",
"According to an October 2009 diplomatic cable from the U.S. Embassy in Riyadh, the Al Saud family described the Council as a \"codification of the unwritten rules that have governed the selection of Saudi rulers since the passing of King Abdulaziz in 1953.\"\n",
"The ministers were of the view that the current composition of the UN Security Council is not representative of the current world scenario. They highlighted the need for bringing about reforms which would make the Security council reflect the contemporary realities. Toward this end, they emphasised the need for expansion of Security council membership in the permanent as well as non-permanent categories.\n",
"The National Security Council was formed on 16 October 2005 by the newly crowned King Abdullah in response to major geopolitical shifts in the Middle East region. The occupation of Iraq made the region \"a center for reconstruction, globalization and reorganization\" with the entry of the United States as a major player. In addition to its regional influence in the Arabian Peninsula, Saudi Arabia is one of the leading actors in the Islamic world and has a central role in global energy policy.\n"
] |
how is maintaining a high credit card balance bad for your credit score, even though you constantly pay off 10x the monthly payment at a time? | A credit score is all about saying how responsible you are with credit. If you have a $10,000 credit limit but routinely carry a $9,000 balance, that tells other creditors that you run it pretty close to the edge and that you borrow a lot of money and take time to pay it back; this balance-to-limit ratio is called credit utilization (a small utilization sketch follows this entry's context list). They don't care *how much* you pay each month; that doesn't even show up on a credit report. They care that you pay on time, and that you pay off your debts.
EDIT: Think about it this way. Would you rather lend money to a friend who is constantly borrowing that money right back from you and always owes you a ton of money, or would you rather lend money to a friend who will pay it all back within a short time and usually doesn't owe you very much? | [
"Because a significant portion of the FICO score is determined by the ratio of credit used to credit available on credit card accounts, one way to increase the score is to increase the credit limits on one's credit card accounts.\n",
"Getting a higher credit limit can help a credit score. The higher the credit limit on the credit card, the lower the utilization ratio average for all of a borrower's credit card accounts. The utilization ratio is the amount owed divided by the amount extended by the creditor and the lower it is the better a FICO rating, in general. So if a person has one credit card with a used balance of $500 and a limit of $1,000 as well as another with a used balance of $700 and $2,000 limit, the average ratio is 40 percent ($1,200 total used divided by $3,000 total limits). If the first credit card company raises the limit to $2,000, the ratio lowers to 30 percent, which could boost the FICO rating.\n",
"Credit scores assess the likelihood that a borrower will repay a loan or other credit obligation based on factors like their borrowing and repayment history, the types of credit they have taken out and the overall length of their credit history. The higher the score, the better the credit history and the higher the probability that the loan will be repaid on time. When creditors report an excessive number of late payments, or trouble with collecting payments, the score suffers. Similarly, when adverse judgments and collection agency activity are reported, the score decreases even more. Repeated delinquencies or public record entries can lower the score and trigger what is called a negative credit rating or adverse credit history.\n",
"Most U.S. credit cards are quoted in terms of nominal annual percentage rate (APR) compounded daily, or sometimes (and especially formerly) monthly, which in either case is not the same as the effective annual rate (EAR). Despite the \"annual\" in APR, it is not necessarily a direct reference for the interest rate paid on a stable balance over one year.\n",
"According to the experts at MyFico.com, credit scores are enhanced by having multiple credit cards, the use of credit cards, and having installment loans. However, financially secure individuals who do not use multiple credit cards and/or self-finance installment type expenses may be inaccurately assessed a lower credit score.\n",
"A typical mistaken belief about credit scoring is that the only trait that matters is whether you have actually made payments on time as well as satisfied your monetary obligations in a prompt way. While payment background is essential, however it still just composes just over one-third of the credit rating score. Furthermore, the repayment background is only shown in your credit history.\n",
"Low introductory credit card rates are limited to a fixed term, usually between 6 and 12 months, after which a higher rate is charged. As all credit cards charge fees and interest, some customers become so indebted to their credit card provider that they are driven to bankruptcy. Some credit cards often levy a rate of 20 to 30 percent after a payment is missed. In other cases, a fixed charge is levied without change to the interest rate. In some cases universal default may apply: the high default rate is applied to a card in good standing by missing a payment on an unrelated account from the same provider. This can lead to a snowball effect in which the consumer is drowned by unexpectedly high interest rates. Further, most card holder agreements enable the issuer to arbitrarily raise the interest rate for any reason they see fit. First Premier Bank at one point offered a credit card with a 79.9% interest rate; however, they discontinued this card in February 2011 because of persistent defaults.\n"
] |
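The utilization math behind this answer (and the $1,200-of-$3,000 worked example in the quoted passage above) is simple enough to sketch. Here is a minimal illustration in Python; the function name and card numbers are made up for the example, and this is not how FICO or any bureau actually computes a score, it only shows the balance-to-limit ratio the quoted passage describes:

```python
def utilization(balances, limits):
    """Overall credit utilization: total owed divided by total credit available."""
    return sum(balances) / sum(limits)

# Two cards: $500 owed on a $1,000 limit, $700 owed on a $2,000 limit.
before = utilization([500, 700], [1000, 2000])   # 1200 / 3000 = 0.40

# Same balances after the first card's limit is raised to $2,000.
after = utilization([500, 700], [2000, 2000])    # 1200 / 4000 = 0.30

print(f"utilization before: {before:.0%}, after: {after:.0%}")
```

Carrying $9,000 against a $10,000 limit, as in the answer above, works out to 90% utilization, which is why it reads as riskier than the size of the monthly payments themselves.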
how do construction workers put together a crane | I assume you mean a tower crane. That is done piece by piece, with a mobile crane helping initially. You put it together with the horizontal beam relatively close to the ground, and then the crane can raise itself up and add more vertical segments.
Look at [_URL_0_](_URL_0_). The self-raising part starts at 4 minutes.
You can see a real video at [_URL_1_](_URL_1_) | [
"Cast Load: Crane has the ability to produce cast loaded explosives utilizing various production lines with mixing, melting, and holding kettles. We have the capability to produce bombs, mines, shock test charges, demolition charges, shape charges, burster tubes, underwater sound signals, cluster bombs and projectiles.\n",
"The construction coordinator, or construction company, provides all tools and equipment apart from small hand tools specific to a craftsman's work, such as screw guns, paint brushes and plastering trowels. This makes logistics and efficiency the responsibility of the construction manager and leaves each crew member as fluid freelancers to be hired and off hired at extremely short notice throughout the production.\n",
"Bents are generally pre-assembled, either at the timber framing company's shop or at the construction site. After the basic post and beam structure of the frame has been set in place, the bents are then lifted and simply dropped into place one by one by the crane. Next, the workers bring in additional members, purlins, which tie them together and give the frame a more rigid structure. This process is very safe and efficient, as it allows a crew to assemble a large portion of the frame without ever stepping off the ground. This, in turn, minimizes the amount of time that the crew must spend several stories in the air clambering along beams not much wider than their own feet.\n",
"Cranes are used to tilt the concrete elements from the casting slab to a vertical position. The slabs are then most often set onto a foundation and secured with braces until the structural steel and the roof diaphragm is in place.\n",
"Before construction can begin, the structural ironworkers have to put together cranes in order to lift the steel columns, beams, and girders according to structural blueprints. To hoist the steel, structural ironworkers use cables connected to the crane to lift the beams onto the steel columns. A rope called a tagline is attached to the beams so an ironworker can control them when needed. The crane hoists steel into place, and the ironworkers position the beams with spud wrenches to align bolt holes. Then the beams can be bolted to the steel columns. This process is continued until there are no beams or columns left to construct the structure. Structural ironworkers also erect joist girders, bar joists, and trusses, and also install metal decking.\n",
"BULLET::::- In pre-cast bridges, the concrete segment is constructed on the ground, and then transported and hoisted into place. As the new segment is suspended in place by the crane, workers install steel reinforcing that attaches the new segment to preceding segments. Each segment of the bridge is designed to accept connections from both preceding and succeeding segments.\n",
"Overhead cranes are commonly used in the refinement of steel and other metals such as copper and aluminium. At every step of the manufacturing process, until it leaves a factory as a finished product, metal is handled by an overhead crane. Raw materials are poured into a furnace by crane, hot metal is then rolled to specific thickness and tempered or annealed, and then stored by an overhead crane for cooling, the finished coils are lifted and loaded onto trucks and trains by overhead crane, and the fabricator or stamper uses an overhead crane to handle the steel in his factory. The automobile industry uses overhead cranes to handle raw materials. Smaller workstation cranes, such as jib cranes or gantry cranes, handle lighter loads in a work area, such as CNC mill or saw.\n"
] |
why is it so easy for a person to believe in a complex conspiracy such as the illuminati? | It's easy for people to believe conspiracy theories because they are basically shortcuts to explain the world. Understanding the Federal Reserve is complicated; understanding how power is allocated and what motivates people in power is complicated. Accepting that cancer rates are rising due to known carcinogens, even though we rarely know exactly which ones, isn't satisfying. Understanding that cancer rates are rising because people are living longer isn't satisfying either.
Basically, conspiracy theories fulfill two basic needs: feeling smarter than other people, and replacing uninteresting work (i.e. studying a dry subject) with interesting work (e.g. studying aliens).
If you don't like the explanation for something, you can explain it however you want and be outraged at whoever you want. That's appealing to people. Chemtrails cause it, and Ronald Reagan started chemtrails. It's arbitrary what "it" even is. The person gets to feel smart, explain a mystery and blame whoever they want without putting any actual work into understanding the problem. | [
"Many conspiracy theories propose that world events are being controlled and manipulated by a secret society calling itself the Illuminati. Conspiracy theorists have claimed that many notable people were or are members of the Illuminati. Presidents of the United States are a common target for such claims.\n",
"On a psychological level, conspiracist ideation—belief in conspiracy theories—can be harmful or pathological, and is highly correlated with paranoia and Machiavellianism. Conspiracy theories once limited to fringe audiences have become commonplace in mass media, emerging as a cultural phenomenon of the late 20th and early 21st centuries.\n",
"Another study titled \"Dead and Alive: Beliefs in Contradictory Conspiracy Theories\" managed to show that, not only will cranks be attracted to and believe in numerous conspiracy theories all at once, but will continue to do so even if the theories in question are completely and utterly incompatible with one another. For instance, the study showed that: \"... the more participants believed that Princess Diana faked her own death, the more they believed that she was murdered [and that] ... the more participants believed that Osama Bin Laden was already dead when U.S. special forces raided his compound in Pakistan, the more they believed he is still alive,\" and that \"Hierarchical regression models showed that mutually incompatible conspiracy theories are positively associated because both are associated with the view that the authorities are engaged in a cover-up\".\n",
"People formulate conspiracy theories to explain, for example, power relations in social groups and the perceived existence of evil forces. Proposed psychological origins of conspiracy theorising include projection; the personal need to explain \"a significant event [with] a significant cause;\" and the product of various kinds and stages of thought disorder, such as paranoid disposition, ranging in severity to diagnosable mental illnesses. Some people prefer socio-political explanations over the insecurity of encountering random, unpredictable, or otherwise inexplicable events.\n",
"Conspiracy theorists of the Christian right, starting with British revisionist historian Nesta Helen Webster, believe there is an ancient occult conspiracy—started by the first mystagogues of Gnosticism and perpetuated by their alleged esoteric successors, such as the Kabbalists, Cathars, Knights Templar, Hermeticists, Rosicrucians, Freemasons, and, ultimately, the Illuminati—which seeks to subvert the Judeo-Christian foundations of the Western world and implement the New World Order through a one-world religion that prepares the masses to embrace the imperial cult of the Antichrist. More broadly, they speculate that globalists who plot on behalf of a New World Order are directed by occult agencies of some sort: unknown superiors, spiritual hierarchies, demons, fallen angels or Lucifer. They believe that these conspirators use the power of occult sciences (numerology), symbols (Eye of Providence), rituals (Masonic degrees), monuments (National Mall landmarks), buildings (Manitoba Legislative Building) and facilities (Denver International Airport) to advance their plot to rule the world.\n",
"The political scientist Michael Barkun, discussing the usage of \"conspiracy theory\" in contemporary American culture, holds that this term is used for a belief that explains an event as the result of a secret plot by exceptionally powerful and cunning conspirators to achieve a malevolent end. According to Barkun, the appeal of conspiracism is threefold:\n",
"Studies such as \"Belief in Conspiracy Theories\" state that conspiracy theories relating to the assassination of JFK, the moon landing and the September 11th attacks are united by a common thread: distrust of the government-endorsed story. This leads the believer to attach other conspiracies as well. Someone with a distrust of the government will likely reject any and all stories or reports directly issued by state agencies or other authorities that are seen as part of the establishment. Thus, any conspiracy will seem more plausible to the conspiracy theorist because this fits with their worldview.\n"
] |
what causes a drug addict's veins to deteriorate when i've been donating plasma for months with no deterioration? | [Here's my arm after 2 years of shooting H 3-4 times a day](_URL_0_). Only been about every second day for the last 2 months though cause I'm on Suboxone. So it's not always too bad.
It's not because drug users 'don't know what they're doing' as mentioned here. 99% of the time I get it right on the first try, and I've hit someone who nurses have a hard time with. And safely injecting is a big concern for drug addicts. | [
"Alcoholic cirrhosis caused by alcohol abuse is treated by abstaining from alcohol. Treatment for hepatitis-related cirrhosis involves medications used to treat the different types of hepatitis, such as interferon for viral hepatitis and corticosteroids for autoimmune hepatitis. Cirrhosis caused by Wilson's disease, in which copper builds up in organs, is treated with chelation therapy (for example, penicillamine) to remove the copper.\n",
"Drugs continue to be taken off the market due to late discovery of hepatotoxicity. Due to its unique metabolism and close relationship with the gastrointestinal tract, the liver is susceptible to injury from drugs and other substances. 75% of blood coming to the liver arrives directly from gastrointestinal organs and then spleen via portal veins that bring drugs and xenobiotics in near-undiluted form. Several mechanisms are responsible for either inducing hepatic injury or worsening the damage process.\n",
"These drugs work by increasing nitric oxide levels in the blood and inducing coronary vasodilation which will allow for more coronary blood flow due to a decreased coronary resistance, allowing for increased oxygen supply to the vital organs (myocardium). The nitric oxide increase in the blood resulting from these drugs also causes dilation of systemic veins which in turn causes a reduction in venous return, ventricular work load and ventricular radius. All of these reductions contribute to the decrease in ventricular wall stress which is significant because this causes the demand of oxygen to decrease. In general organic nitrates decrease oxygen demand and increase oxygen supply. It is this favourable change to the body that can decrease the severity of ischemic symptoms, particularly angina.\n",
"Excessive alcohol consumption is a significant cause of hepatitis and is the most common cause of cirrhosis in the U.S. Alcoholic hepatitis is within the spectrum of alcoholic liver disease. This ranges in order of severity and reversibility from alcoholic steatosis (least severe, most reversible), alcoholic hepatitis, cirrhosis, and liver cancer (most severe, least reversible). Hepatitis usually develops over years-long exposure to alcohol, occurring in 10 to 20% of alcoholics. The most important risk factors for the development of alcoholic hepatitis are quantity and duration of alcohol intake. Long-term alcohol intake in excess of 80 grams of alcohol a day in men and 40 grams a day in women is associated with development of alcoholic hepatitis (1 beer or 4 ounces of wine is equivalent to 12g of alcohol). Alcoholic hepatitis can vary from asymptomatic hepatomegaly (enlarged liver) to symptoms of acute or chronic hepatitis to liver failure.\n",
"Addiction to alcohol, as with any drug of abuse tested so far, has been correlated with an enduring reduction in the expression of GLT1 (EAAT2) in the nucleus accumbens and is implicated in the drug-seeking behavior expressed nearly universally across all documented addiction syndromes. This long-term dysregulation of glutamate transmission is associated with an increase in vulnerability to both relapse-events after re-exposure to drug-use triggers as well as an overall increase in the likelihood of developing addiction to other reinforcing drugs. Drugs which help to re-stabilize the glutamate system such as N-acetylcysteine have been proposed for the treatment of addiction to cocaine, nicotine, and alcohol.\n",
"A variety of addictive drugs produce an increase in reward-related dopamine activity. Stimulants such as nicotine, cocaine and methamphetamine promote increased levels of dopamine which appear to be the primary factor in causing addiction. For other addictive drugs such as the opioid heroin, the increased levels of dopamine in the reward system may only play a minor role in addiction. When people addicted to stimulants go through withdrawal, they do not experience the physical suffering associated with alcohol withdrawal or withdrawal from opiates; instead they experience craving, an intense desire for the drug characterized by irritability, restlessness, and other arousal symptoms, brought about by psychological dependence.\n",
"The controversy behind the company emerged as a result of the drugs that they made and how they carried high potential for addiction. The most commonly abused medications that the company produces are MS Contin and OxyContin. Both can be abused by crushing, chewing, snorting, or injecting the dissolved product. These ingestion methods create a significant risk to the abuser; they can result in overdose and death. Drug-seeking tactics that addicts undergo to obtain the medication include \"doctor shopping\", which is visiting a number of different physicians to obtain additional prescriptions and refusal to follow up with appropriate examinations. Along with the high potential for abuse among people without prescriptions, there is also a risk for physical dependency and reduced reaction or drug desensitization for patients that are prescribed them. Nevertheless, strong analgesic drugs remain indispensable to patients suffering from severe acute and cancer pain.\n"
] |
what happens to the Carbon-14 that decays inside of us? | It pretty much does what you think it might. It changes into nitrogen, which changes its chemical properties. Carbon typically makes 4 bonds while nitrogen prefers 3, so if the bonds aren't destroyed by the nuclear reaction itself (these reactions can give off a fairly large amount of energy), the atom is left as a nitrogen cation with 4 bonds, which will likely react immediately to form a more stable structure. You'll also have an extra electron floating around if it doesn't exit your body, which can cause a few problems of its own.
As a result, whatever had the carbon-14 in it would most likely not be the same compound it started as. However, since carbon-14 is both in low abundance and has a fairly long half-life (about 5,730 years), the rate at which this occurs in your body is so low that you will most likely replace the carbon through natural processes before it can actually do anything. Any individual cell might only see a handful of these events in an entire lifetime. Over the course of 100 years only about 1-2% of the C-14 that was present when you were born will have decayed (a quick back-of-the-envelope check of that figure follows this entry's context list), and your body's carbon is over 99.9% C-12 and C-13.
And even in the event that it does decay, it's fairly likely the damage will be so small that your body will either remove the damaged area or may just end up ignoring it altogether.
C-14 will never do enough damage fast enough, but on the other hand fast-decaying isotopes in large enough quantities could do damage. The most common threat is iodine-131, which has a half-life of about 8 days. Since your thyroid uses iodine for its processes (and not much iodine is needed, I might add), you can absorb enough to start killing thyroid cells. Between the radiation given off and the change in chemical makeup, it absolutely messes with your thyroid. Needless to say, this is a very bad thing. | [
"Nitrogen-14 is the source of naturally-occurring, radioactive, carbon-14. Some kinds of cosmic radiation cause a nuclear reaction with nitrogen-14 in the upper atmosphere of the Earth, creating carbon-14, which decays back to nitrogen-14 with a half-life of 5,730 ± 40 years.\n",
"By emitting an electron and an electron antineutrino, one of the neutrons in the carbon-14 atom decays to a proton and the carbon-14 (half-life of 5,700 ± 40 years) decays into the stable (non-radioactive) isotope nitrogen-14.\n",
"On the other hand, carbon-14 naturally decays by radioactive beta decay, whereby one neutron is transmuted into a proton with the emission of an electron and an anti-neutrino. Thus the atomic number increases by 1 (\"Z\": 6→7) and the mass number remains the same (\"A\" = 14), while the number of neutrons decreases by 1 (\"N\": 8→7). The resulting atom is nitrogen-14, with seven protons and seven neutrons:\n",
"Carbon-14 (C) is a naturally occurring radioisotope, created in the upper atmosphere (lower stratosphere and upper troposphere) by interaction of nitrogen with cosmic rays. It is found in trace amounts on Earth of 1 part per trillion (0.0000000001%) or more, mostly confined to the atmosphere and superficial deposits, particularly of peat and other organic materials. This isotope decays by 0.158 MeV β emission. Because of its relatively short half-life of 5730 years, C is virtually absent in ancient rocks. The amount of C in the atmosphere and in living organisms is almost constant, but decreases predictably in their bodies after death. This principle is used in radiocarbon dating, invented in 1949, which has been used extensively to determine the age of carbonaceous materials with ages up to about 40,000 years.\n",
"Carbon-11 or C is a radioactive isotope of carbon that decays to boron-11. This decay mainly occurs due to positron emission; however, around 0.19–0.23% of the time, it is a result of electron capture. It has a half-life of 20.334 minutes.\n",
"BULLET::::- It will possess long life because it will run on radioactivity which takes an enormous amount of time to decay. The half life of C-14 is 5,730 years, so it will take that long to lose 50% of its power.\n",
"Carbon-14 has a long half-life of 5,730±40 years. Its maximum specific activity is 0.0624 Ci/mmol (2.31 TBq/mol). It is used in applications such as radiometric dating or drug tests. C-14 labeling is common in drug development to do ADME (absorption, distribution, metabolism and excretion) studies in animal models and in human toxicology and clinical trials. Since tritium exchange may occur in some radiolabeled compounds, this does not happen with C-14 and may thus be preferred.\n"
] |
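The "about 1-2% over 100 years" figure in the answer above can be checked directly from the 5,730-year half-life quoted in the passages. A quick sketch in Python (the variable names are just for illustration):

```python
HALF_LIFE_YEARS = 5730   # C-14 half-life, as quoted in the passages above
years = 100

remaining = 0.5 ** (years / HALF_LIFE_YEARS)   # fraction of original C-14 left
decayed = 1 - remaining

print(f"decayed after {years} years: {decayed:.2%}")   # roughly 1.2%
```

The same expression with `years = 5730` gives 50%, as expected for a single half-life.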
why can't we simply restart the brain? | Brain-dead doesn't mean the software in your brain has crashed and requires a reboot. Brain death means the hardware of your brain has been chemically destroyed. It's not usable anymore. | [
"BULLET::::- Dying ReLU problem: ReLU neurons can sometimes be pushed into states in which they become inactive for essentially all inputs. In this state, no gradients flow backward through the neuron, and so the neuron becomes stuck in a perpetually inactive state and \"dies\". This is a form of the vanishing gradient problem. In some cases, large numbers of neurons in a network can become stuck in dead states, effectively decreasing the model capacity. This problem typically arises when the learning rate is set too high. It may be mitigated by using leaky ReLUs instead, which assign a small positive slope to the left of \"x\" = 0.\n",
"Modern science has discovered a way to rejuvenate people. It is just like immortality, but the rejuvenation process causes the human brain to be \"restarted\"–effectively losing all its former memories so the recipient starts life anew, like a blank slate.\n",
"Due to the lack of understanding of the brain this technique of destroying neurons may have a much larger effect on the patient than just the removal of the intended memories. Due to this complex nature of the brain treatment that would stun the neurons instead of destroying them could be another approach that could be taken.\n",
"Mechanisms for recovery differ from patient to patient. Some mechanisms for recovery occur spontaneously after damage to the brain, whereas others are caused by the effects of language therapy. FMRI studies have shown that recovery can be partially attributed to the activation of tissue around the damaged area and the recruitment of new neurons in these areas to compensate for the lost function. Recovery may also be caused in very acute lesions by a return of blood flow and function to damaged tissue that has not died around an injured area. It has been stated by some researchers that the recruitment and recovery of neurons in the left hemisphere opposed to the recruitment of similar neurons in the right hemisphere is superior for long-term recovery and continued rehabilitation. It is thought that, because the right hemisphere is not intended for full language function, using the right hemisphere as a mechanism of recovery is effectively a \"dead-end\" and can lead only to partial recovery.\n",
"Endogenous regeneration in the brain is the ability of cells to engage in the repair and regeneration process. While the brain has a limited capacity for regeneration, endogenous neural stem cells, as well as numerous pro-regenerative molecules, can participate in replacing and repairing damaged or diseased neurons and glial cells. Another benefit that can be achieved by using endogenous regeneration could be avoiding an immune response from the host.\n",
"- Researched information for article through internet and medical booksde\"/ They become unresponsive to any disciplinary actions that may typically be used. Their brain reaches a state of sensory overload and any new information, such as conversation to alter their current state of mind, becomes ineffective. They need to calm themselves and let their brains slow down.\n",
"A common method of deactivation when studying brain function is ablation of neural tissue, but there are several drawbacks. The exact location and extent of ablation, whether caused by chemicals or lesions, can only be defined post mortem. If the ablation occurred in an undesired location or has deactivated more of the tissue than intended, the time and resources were already been spent while obtaining results unrelated to the designed investigation. Also, ablation permanently deactivates the section of interest due to damage or removal of the neural tissue. Since the tissue cannot be reactivated, control measures that can be directly compared to the deactivation-induced effects cannot be obtained. Comparisons must be made between animals, which will have inherent differences, so internal double dissociations are not possible. Another major drawback in using ablation to deactivate tissue is that because the brain is plastic, while animals are recovering from ablation surgery, the cerebral cortex is able to modify the neural networking by activating new connections or strengthening pre-existing ones. This could cause the resulting behavior in the investigation to appear normal even though part of the animal’s brain has been deactivated, and then investigators would not be able to tell the contribution of the deactivated section to normal function. To overcome many of these drawbacks, cortical cooling devices may be used instead of ablation. \n"
] |
what is that yellow foil around space probes and what is its function? | The gold and silver colored sheets you see are often a single layer of aluminized polyimide with the silver aluminum side facing in. The yellowish-gold color of the polyimide on the outside gives the satellite the appearance of being wrapped in gold.
Multi-layer insulation is used on satellites primarily for thermal control and protects the delicate on-board instruments from the extreme temperatures of space. Depending on its orbit, a satellite can experience temperatures from below -200°F to well above 300°F, sometimes at the same time! Not to mention the high temperatures the onboard instruments can produce. | [
"Foil is commonly used in household applications. It is also useful in survival situations, because the reflective surface reduces the degree of hypothermia caused by thermal radiation (see space blanket).\n",
"An aluminum electrolytic capacitor with a non-solid electrolyte always consists of two aluminum foils separated mechanically by a spacer, mostly paper, which is saturated with a liquid or gel-like electrolyte. One of the aluminum foils, the anode, is etched (roughened) to increase the surface and oxidized (formed). The second aluminum foil, called the \"cathode foil\", serves to make electrical contact with the electrolyte. A paper spacer mechanically separates the foils to avoid direct metallic contact. Both foils and the spacer are wound and the winding is impregnated with liquid electrolyte. The electrolyte, which serves as cathode of the capacitor, covers the etched rough structure of the oxide layer on the anode perfectly and makes the increased anode surface effectual. After impregnation the impregnated winding is mounted in an aluminum case and sealed.\n",
"For use in space, polyimide (e.g. kapton, UPILEX) substrate is usually employed due to its resistance to the hostile space environment, large temperature range (cryogenic to −260 °C and for short excursions up to over 480 °C), low outgassing (making it suitable for vacuum use) and resistance to ultraviolet radiation. Aluminized kapton, with foil thickness of 50 and 125 µm, was used e.g. on the Apollo Lunar Module. The polyimide gives the foils their distinctive amber-gold color.\n",
"The second aluminum foil in the electrolytic capacitor, called the \"cathode foil\", serves to make electrical contact with the electrolyte. This foil has a somewhat lower degree of purity, about 99.8%. It is always provided with a very thin oxide layer, which arises from the contact of the aluminum surface with the air in a natural way. In order to reduce the contact resistance to the electrolyte and to make it difficult for oxide formation during discharging, the cathode foil is alloyed with metals such as copper, silicon, or titanium. The cathode foil is also etched to enlarge the surface.\n",
"A foil is an architectural device based on a symmetrical rendering of leaf shapes, defined by overlapping circles of the same diameter that produce a series of cusps to make a lobe. Typically, the number of cusps can be three (\"trefoil\"), four (\"quatrefoil\"), five (\"cinquefoil\"), or a larger number (\"multifoil\"). Foil motifs may be used as part of the heads and tracery of window lights, complete windows themselves, the underside of arches, in heraldry, within panelling, and as part of any decorative or ornament device. Foil types are commonly found in Gothic and Islamic architecture.\n",
"A second aluminum foil strip, called the \"cathode foil\", serves to make electrical contact with the electrolyte. The spacer separates the foil strips to avoid direct metallic contact which would produce a short circuit. Lead wires are attached to both foils which are then rolled with the spacer into a wound cylinder which will fit inside an aluminum case or \"can\". The winding is impregnated with liquid electrolyte. This provides a reservoir of electrolyte to extend the lifetime of the capacitor. The assembly is inserted into an aluminum can and sealed with a plug. Aluminum electrolytic capacitors with non-solid electrolyte have grooves in the top of the case, forming a vent, which is designed to split open in the event of excessive gas pressure caused by heat, short circuit, or failing electrolyte.\n",
"The tip of the electric foil terminates in a button assembly that generally consists of a barrel, plunger, spring, and retaining screws. The circuit is a \"normally closed\" one, meaning that at rest there is always a complete power circuit; depressing the tip breaks this circuit, and the scoring apparatus illuminates an appropriate light. Color-coding is used: white or yellow indicates hits not on the valid target area, and either red or green indicate hits on the valid target area (red for one fencer, green for the other).\n"
] |
If I filled up a water bottle underwater in the deep part of the ocean, and brought it back up again, would the bottle explode? | **Short answer:** Probably not, but it depends on the quality of your plastic bottle.
**Long answer:** Gases and liquids behave a little differently when under pressure. An ideal gas, for example, responds dramatically to compression: its pressure and temperature change a lot as its volume shrinks.
Water is different - in fact, it's often taken to be an *incompressible fluid*, meaning that its density doesn't really change as the pressure on it increases. This means that you have to squeeze really, really hard on water to change its volume even a small amount. If you filled your water bottle at the very bottom of the ocean and brought it up, the change in pressure would only have a minute effect on the volume of the fluid (a rough bulk-modulus estimate follows this entry's context list). Provided your bottle is some sort of crazy cool pressure vessel that can handle deep-sea pressures without getting crushed, it's certainly capable of handling the water once it's back at the surface. | [
"The bottle, more precisely a metal or plastic cylinder, is lowered on a cable into the ocean, and when it has reached the required depth, a brass weight called a \"messenger\" is dropped down the cable. When the weight reaches the bottle, the impact tips the bottle upside down and trips a spring-loaded valve at the end, trapping the water sample inside. The bottle and sample are then retrieved by hauling in the cable.\n",
"An early-20th-century \"bottom\" (or seabed) drift bottle design by George Parker Bidder III involved weighting a bottle with a long copper wire that causes it to sink until the wire trails upon the sea bottom, at which time the bottle tends to remain a few inches above the bottom to be moved by the bottom current. A mushroom-shaped seabed drifter design has also been used. Seabed drifters are designed to be scooped up by a trawler or wash up on shore. Water pressure pressing on the cork or other closure was thought to keep a bottle better sealed; some designs included a wooden stick to stop the cork from imploding. Vessels of less scientific designs have survived for extended periods, including a baby food bottle a ginger beer bottle, and a 7-Up bottle.\n",
"The Ekman water bottle is a sea water temperature sample device. The cylinder is dropped at the desired depth, the trap door below is opened to let the water enter and then closed tightly. This can be repeated at different depths as each sample goes to a different chamber of the insulated bottle.\n",
"The bottle is filled with water, inverted, and placed into the pneumatic trough already containing water. The outlet tube from the gas-generating apparatus is inserted into the opening of the bottle so that gas can bubble up through it, displacing the water within. \n",
"At the lower position the process was reversed. Here the water pressure was strong enough to press the box tightly into position against the exit opening. Another rack and pinion (again operated from above) lifted the outer gate, the levels were equalised again, the inner door on the box was swung open and the boat floated out. Apart from the inevitable small leakages, no significant amount of water had been used in the process.\n",
"When the gas bubble's diameter equaled the water depth, , it hit the sea floor and the sea surface simultaneously. At the bottom, it started digging a shallow crater, ultimately deep and wide. At the top, it pushed the water above it into a \"spray dome\", which burst through the surface like a geyser. Elapsed time since detonation was four milliseconds.\n",
"In the Indian Ocean, a man (Robert Redford) wakes to find water flooding his boat. He has collided with a wayward shipping container, ripping a hole in the hull. He uses a sea anchor to dislodge the container, then changes course to tilt the boat away from the hole. He patches the hole and uses the manual bilge pump to remove the water from the cabin.\n"
] |
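To put a rough number on "minute effect": for a liquid, the fractional volume change is approximately the pressure change divided by the bulk modulus. Below is a back-of-the-envelope sketch in Python using assumed textbook values (seawater density ~1025 kg/m^3, bulk modulus of water ~2.2 GPa, and ~11,000 m as roughly the deepest point in the ocean); none of these numbers come from the answer itself.

```python
rho = 1025        # kg/m^3, assumed seawater density
g = 9.81          # m/s^2, gravitational acceleration
depth = 11_000    # m, roughly the deepest part of the ocean (assumed)
K = 2.2e9         # Pa, approximate bulk modulus of water (assumed)

delta_p = rho * g * depth       # pressure difference between depth and surface
expansion = delta_p / K         # fractional volume change on the way back up

print(f"pressure at depth: {delta_p / 1e6:.0f} MPa (~{delta_p / 101325:.0f} atm)")
print(f"fractional expansion back at the surface: {expansion:.1%}")   # ~5%
```

So even coming up from the deepest trench the water would expand by only a few percent, versus a factor of roughly a thousand for a gas under the same pressure change, which is why a sturdy bottle filled at depth is unlikely to burst, though a completely rigid, brim-full one would still feel some stress.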
Can you tell the age of someone by their DNA? | To some extent, yes. In the S-phase of the cell cycle, when the cell is preparing to undergo mitosis, the chromosomes containing our DNA are duplicated by the DNA-polymerase enzyme.
The telomeres are strands of nucleotide bases at the end of the chromosomes where no genes are located. These are used to provide the location for RNA primers, allowing DNA polymerase to synthesise the lagging strand of the DNA.
As the first part of this video _URL_1_ shows, a small nucleotide sequence is lost at each replication, but since the telomeres are not involved in the translation of DNA to protein, they are expendable. However, they will continue to get shorter with each replication, and this shortening is widely believed to be responsible for part of the ageing process in eukaryotes.
By combining knowledge of DNA replication rates in various tissues with the measured telomere length, an estimate of age can be provided (a toy version of such an estimate follows this entry's context list).
Sources:
_URL_0_
_URL_2_
Campbell Biology, Pearson Education | [
"The determination of an individual's age by anthropologists depends on whether or not the individual was an adult or a child. The determination of the age of children, under the age of 21, is usually performed by examining the teeth. When teeth are not available, children can be aged based on which growth plates are sealed. The tibia plate seals around age 16 or 17 in girls and around 18 or 19 in boys. The clavicle is the last bone to complete growth and the plate is sealed around age 25. In addition, if a complete skeleton is available anthropologists can count the number of bones. While adults have 206 bones, the bones of a child have not yet fused resulting in a much higher number.\n",
"As new numbers were introduced over time, it is possible to recognize the age of a number: The oldest GSM numbers start with 1390…, the second oldest 1380… and 1300… Keeping the same number over time is somewhat associated with stability and reliability of the owner. As the fourth digit was introduced later, thus it is 0 for all old numbers. In further extensions, non-139,138,130 numbers were introduced. The fifth to seventh digit sometimes relates to age and location.\n",
"The 1841 census recorded people's names, age, sex, occupation, and if they were born in the county of their residence, and if they were born anywhere other than in England and Wales. Children under 15 were to have their age recorded accurately, while those over 15 were to be rounded down to the nearest 5 years so, for example, someone aged 63 should be recorded as aged 60. However, not all enumerators followed this instruction and exact ages may have been recorded.\n",
"The strong effects of age on DNA methylation levels have been known since the late 1960s. A vast literature describes sets of CpGs whose DNA methylation levels correlate with age, e.g. The first robust demonstration that DNA methylation levels in saliva could generate accurate age predictors was published by a UCLA team including Steve Horvath in 2011 (Bocklandt et al 2011). The labs of Trey Ideker and Kang Zhang at the University of California, San Diego published the Hannum epigenetic clock (Hannum 2013), which consisted of 71 markers that accurately estimate age based on blood methylation levels. \n",
"Therefore, rather than lumping together all people who have been defined as old, some gerontologists have recognized the diversity of old age by defining sub-groups. One study distinguishes the young old (60 to 69), the middle old (70 to 79), and the very old (80+). Another study's sub-grouping is young-old (65 to 74), middle-old (75–84), and oldest-old (85+). A third sub-grouping is \"young old\" (65–74), \"old\" (74–84), and \"old-old\" (85+). Describing sub-groups in the 65+ population enables a more accurate portrayal of significant life changes.\n",
"In May 2010, \"Solopos\" reported that census enumerators recorded Gotho's age as 142, which would have made him 20 years older than the verified oldest recorded person, Jeanne Calment. \"Liputan 6\" reported that his estimated age was 140, and that he could not remember his date of birth but claimed to remember the construction of a sugar factory in Sragen in 1880.\n",
"Blood tissue from five other female Syndrome X cases (whose average age was 6.3 years) turned out to be age appropriate according to a biomarker of aging known as epigenetic clock. The mean epigenetic age of the five pure Syndrome X subjects was 6.7 years (standard error=1.0) which is not significantly different from the mean chronological age of 6.3 years (standard error=1.8). Notably, the oldest pure Syndrome X case had an epigenetic age of 14.5 years which was 3.2 years older than her true chronological age. It is not yet known whether the epigenetic age of other tissues is also age appropriate in these cases.\n"
] |
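A toy version of the telomere-based estimate described in the answer above, sketched in Python. The constants are assumed ballpark figures for blood cells (not taken from the answer or the quoted passages), and real estimates of this kind carry error bars of a decade or more:

```python
# Illustration only: a crude linear model of telomere shortening with age.
LENGTH_AT_BIRTH_BP = 11_000   # assumed average telomere length at birth (base pairs)
LOSS_PER_YEAR_BP = 40         # assumed average shortening per year (base pairs)

def estimate_age(measured_length_bp):
    """Rough age estimate from a measured average telomere length."""
    return (LENGTH_AT_BIRTH_BP - measured_length_bp) / LOSS_PER_YEAR_BP

print(estimate_age(9_000))   # ~50 years under these assumed constants
```

In practice, DNA-methylation-based "epigenetic clocks" like the Horvath and Hannum clocks described in the quoted passages are the more accurate way to estimate age from DNA, since telomere length varies a great deal between individuals of the same age.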
In video games and popular media, the Sengoku Period of Japan are often characterized as a period of conflict between Japanese clans which were small but were of equal economic or military strength with each other. Was this accurate? | Well...it really depends on the specific clan, battle, and war. When we are talking about local strongman vs local strongman, it's probably not too far off to assume each side only had a few hundred, or at most a couple of thousand. However, things could be quite large and lopsided. For instance, the overall engagement at Nagashino was, according to the Chronicles of Lord Nobunaga, 38,500 for the Oda and Tokugawa and 15,000 for the Takeda, with the decisive engagement at Shitaragahara 32,000 vs 12,000. And this is the *lower* end of pre-modern sources. And engagements after Yamazaki got stupidly large because of the wide range of resources and clans mobilized.
On the other hand... it's my experience that media often *exaggerate* the strength disparity to make for a better story, so I'm not sure which media you're referring to. | [
"\"Sengoku BASARA\" takes place during the Sengoku period, or Warring States period, of feudal Japan during which Japan was split into many minor states battling over power and land. The game features two historical warlords as the main protagonists: Date Masamune and Sanada Yukimura.\n",
"The Sengoku Period is marked by social upheaval, political intrigue and near-constant military conflict. Less than a century after the end of the Nanboku-chō Wars, peace under the relatively weak Ashikaga shogunate was disrupted by the outbreak of the Ōnin War (1467–1477). This was a civil war between the Ashikaga shogunate and numerous daimyō. The ancient capital of Kyoto was converted into a battlefield and a heavily fortified city that suffered severe destruction.\n",
"The Genpei War (1180–1185) between the Minamoto and Taira clans, and the Nanboku-chō Wars (1336–1392) between the Northern and Southern Imperial Courts are the primary conflicts that define these developments during what is sometimes called Japan's medieval period.\n",
"Like other games in the series, the game reinvents the story based on the Sengoku period of Japan, a period where Japan was ruled by powerful \"daimyōs\" and where constant military conflict and much political intrigue happened that lasted from the middle of 16th century to the beginning of 17th century. However, the game has a slightly extended time frame compared to the previous game; while \"Samurai Warriors 2\" is mostly focused on the events leading to the great battle of Sekigahara, this game also covers the events beforehand.\n",
"The Sengoku period literally derives its name from the Japanese for \"warring states\". It was a militarily and politically turbulent period, with nearly constant military conflict which lasted roughly from the middle of the 15th century to the beginning of the 17th century, and which during which there were also developments in \"renga\" and \"waka\" poetry.\n",
"The Ōnin War, starting in 1467, was the prelude to over a century of civil war in Japan, and the stimulus for a reorganization of the warrior monks. Unlike the Jōkyū War and Mongol invasions of the 13th century, the Ōnin War was fought primarily in Kyoto, and thus the warrior monks could no longer remain non-violent and neutral.\n",
"The Ōnin War, which broke out in 1467, marked the beginning of nearly 150 years of widespread warfare (called the Sengoku period) between \"daimyōs\" (feudal lords) across the entire archipelago. For the duration of the Ōnin War (1467–1477), and into the Sengoku period, the entire city of Kyoto became a battlefield, and suffered extensive damage. Noble family mansions across the city became increasingly fortified over this ten-year period, and attempts were made to isolate the city as a whole from the marauding armies of samurai that dominated the landscape for over a century.\n"
] |
if humans have a night and day circadian clock, why have i (and others) been a night owl since birth? | I'm right there with you. Our knowledge of sleep is still very limited (as is our knowledge of pretty much anything that has to do with the brain), but there is a large range of classified sleep disorders: insomnia, night terrors, narcolepsy, things you've probably heard of before. [Here's a good Wikipedia article on Delayed Sleep Phase Disorder](_URL_2_), which I'm guessing will sound pretty familiar to you.
DSPD (and its opposite, [Advanced Sleep Phase Syndrome](_URL_1_)) is an *uncontrollable* shift in your sleep cycle. People with DSPD commonly go to sleep well past midnight. They find it difficult to keep a "normal" schedule, especially with standardized work hours being 9-5. If left on their own, they will settle into a regular (albeit shifted) sleep schedule. This is more common in adolescents (possibly as high as 7%) and less common in adults (around 0.15%). If the DSPD does not disappear after adolescence/early adulthood, it will be a lifelong condition.
There are some treatments, both medication and non-medication. Light therapy, sleep phase chronotherapy, melatonin, and modafinil are the more common ones. A significant thing to note: with DSPD you may find that people will label you as lazy. While that may be true, it is entirely separate from the condition itself. DSPD is a shift in how a person is able to fall asleep and wake up, not their ability to get out of bed--though it is far easier to get out of bed when your body wakes up naturally after a full night's rest.
You may find that you have difficulty feeling tired after a late night on Reddit. If you're looking at a screen late at night, the blue light (which is the majority of the light coming out of the screen usually) disrupts your sleep cycle by suppressing melatonin, [proportional to the light intensity and length of exposure](_URL_0_). I would recommend installing a program like [f.lux](_URL_3_) to lower the amount of blue light coming out of your screen late at night. | [
"The opposite of a night owl is an early bird – a lark as opposed to an owl – which is someone who tends to begin sleeping at a time that is considered early and also wakes early. Researchers traditionally use the terms \"morningness\" and \"eveningness\" for the two chronotypes or diurnality and nocturnality in animal behavior. In several countries, especially in Scandinavia, early birds are called \"A-people\" and night owls are called \"B-people\".\n",
"Usually, people who are night owls stay awake past midnight, and extreme night owls may stay awake until just before or even after dawn. Night owls tend to feel most energetic just before they go to sleep at night. \n",
"The genetic make-up of the circadian timing system underpins the difference between early and late chronotypes, or early birds and night owls. While it has been suggested that circadian rhythms may change over time, including dramatic changes that turn a morning lark to a night owl or vice versa, evidence for familial patterns of early or late waking would seem to contradict this, and individual changes are likely on a smaller scale.\n",
"The opposite of the lark is the owl, often awake at night. A person called a night owl is someone who usually stays up late and may feel most awake in the evening and at night. Researchers have traditionally used the terms \"morningness\" and \"eveningness\" to describe these two phenotypes.\n",
"A night owl, evening person or simply owl, is a person who tends to stay up until late at night, or the early hours of the morning. Night owls who are involuntarily unable to fall asleep for several hours after a normal time may have delayed sleep phase syndrome. \n",
"Modern humans often find themselves desynchronized from their internal circadian clock, due to the requirements of work (especially night shifts), long-distance travel, and the influence of universal indoor lighting. Even if they have sleep debt, or feel sleepy, people can have difficulty staying asleep at the peak of their circadian cycle. Conversely they can have difficulty waking up in the trough of the cycle. A healthy young adult entrained to the sun will (during most of the year) fall asleep a few hours after sunset, experience body temperature minimum at 6 a.m., and wake up a few hours after sunrise.\n",
"\"Night Watch\" hinted that animals do not always follow the same rules as humans or Others when it comes to the Twilight. In the first book of \"Night Watch\" entitled \"Story One: Destiny\", Anton explains that \"For cats there is no [human] world or Twilight—they live in all the worlds at once.\"\n"
] |
Should the Atlantropa project (the partial drying of the Mediterranean sea) be realised, wouldn't the newly freed land be too salty for agriculture ? | It's not easy but the Dutch have been doing it for a long time now.
Youtube clip, 19 minutes:
_URL_0_
Long read about the Dutch desalination and reclamation of land:
_URL_1_ | [
"In 1985 a new development proposal was formulated that would see additional land taken from the Mediterranean, and integrate as an urban park, a beach of three kilometers, as well as residential, commercial and tertiary zones. The first works for its revitalization began in 2006 and consisted of work of depollutions and fillings, the coast of Sfax being affected by the discharges of the phosphate industry .\n",
"In 2006, the proposal has also generated some concern by the chairman of Egypt's Suez Canal Authority, which believed that the canal will increase seismic activity in the region, provide Israel with water for cooling its nuclear reactor near Dimona, develop settlements in the Negev Desert, and increase well salinity. However, as proposed, most of the desalinated water is expected to be used by Jordan and the Palestinians. Under the current proposal, water sufficient only to prevent the Dead Sea from dehydrating will flow through the system, preventing salt water flow into wells. The World Bank study recommended re-routing the conduit to avoid the geological faults of the Araba Valley.\n",
"A new desalination plant to be built near the Jordanian tourist resort of Aqaba would convert salt water from the Red Sea into fresh water for use in southern Israel and southern Jordan; each region would get eight billion to 13 billion gallons a year. This process would produces about as much brine as a waste product; the brine would be piped more than 100 miles to help replenish the Dead Sea, already known for its high salt content. This would reinforce the status of the Dead Sea as an important economic resource to both nations, in multiple areas including tourism, industry and business.\n",
"A new desalination plant to be built near the Jordanian tourist resort of Aqaba would convert salt water from the Red Sea into fresh water for use in southern Israel and southern Jordan; each region would get eight billion to 13 billion gallons a year. This process would produces about as much brine as a waste product; the brine would be piped more than 100 miles to help replenish the Dead Sea, already known for its high salt content. This would reinforce the status of the Dead Sea as an important economic resource to both nations, in multiple areas including tourism, industry and business,\n",
"One major part of the plan includes the private sector development of a $3 billion, 166 km-long (103-mile) canal system along the Arava, known as the Two Seas Canal. This engineering scheme would bring Red Sea water to the Dead Sea and could provide additional projects and benefits to the region and increase cooperation between Israelis, Jordanians, and Palestinians, through greater development and economic integration. Some environmentalists have criticized the plan, saying that rehabilitation of the Jordan River would be a better way to save the Dead Sea, and would bring less disruption.\n",
"A water development project outlined by him became later known as the \"Lowdermilk plan\" The Plan was recommended by the United States in 1944 and is still of importance for the later National Water Carrier of Israel. Parts of it, as the use of the Litani River to irrigate the Negev desert have been controversial. As early as 1946 the Church of Scotland presbytery in Jerusalem submitted a memorandum against the plan, as they feared it would spoil the sanctity of the Sea of Galilee. The suggestion of refilling the Dead Sea through a Mediterranean–Dead Sea Canal was based on earlier approaches and is still of importance.\n",
"Completion of the «Tangier-Mediterranean» project will have important economic effects in terms of jobs, creation of added value and foreign investment. Its particular position on the Straits of Gibraltar, at the crossing of two major maritime routes, and 15 km from the European Union will enable it to serve a market of hundreds of millions of consumers through the industrial and commercial free zones which will be run by well-known private operators. It will also win part of the strong growth market of container transshipment and become the leading hub for cereal transshipment, a facility which is non-existent in the north-west African region at present.\n"
] |
why streaming porn on my tablet loads 10 times faster than any other video streaming domain. | There are a LOT of people using SFW video streaming domains like Netflix and YouTube. Compared to the relatively small amount of data that non-video content takes up, video is an absolutely massive amount of traffic, so the servers and networks behind those services carry a far heavier load. Porn sites get a lot of traffic too, but I'd be willing to wager that, at least within the US (because copyrighted material isn't as available in other countries), sites like Netflix, YouTube, and Hulu see much higher demand. Factor in that most content watched on porn sites is lower resolution/bitrate, and it's a lot easier to stream a 480p 5-minute video than a 1080p 55-minute episode of Orange is the New Black. | [
"The amount of data used by video streaming services depends on the quality of the video. Thus, Android Central breaks down how much data is used (on a smartphone) with regards to different video resolutions. According to their findings, per hour video between 240p and 320p resolution uses roughly 0.3GB. Standard video, which is clocked in at a resolution of 480p, uses approximately 0.7GB per hour. \n",
"A broadband speed of 2 Mbit/s or more is recommended for streaming standard definition video without experiencing buffering or skips, especially live video, for example to a Roku, Apple TV, Google TV or a Sony TV Blu-ray Disc Player. 5 Mbit/s is recommended for High Definition content and 9 Mbit/s for Ultra-High Definition content. Streaming media storage size is calculated from the streaming bandwidth and length of the media using the following formula (for a single user and file) requires a storage size in megabytes which is equal to length (in seconds) × bit rate (in bit/s) / (8 × 1024 × 1024). For example, one hour of digital video encoded at 300 kbit/s (this was a typical broadband video in 2005 and it was usually encoded in a 320 × 240 pixels window size) will be:\n",
"When streaming over-the-top (OTT) content and video on demand, systems do not typically recognize the specific size, type, and viewing rate of the video being streamed. Video sessions, regardless of the rate of views, are each granted the same amount of bandwidth. This bottlenecking of content results in longer buffering time and poor viewing quality. Some solutions, such as upLynk and Skyfire’s Rocket Optimizer, attempt to resolve this issue by using cloud-based solutions to adapt and optimize over-the-top content.\n",
"With the use of mobile devices increasing so rapidly, and almost half of the traffic on mobile internet networks being accounted for by video sessions, mobile service providers have begun to recognize the need to provide higher quality video access while using the lowest possible bandwidth.\n",
"Due to the exclusion of pornography from the App Store, YouPorn and others changed their video format from Flash to H.264 and HTML5 specifically for the iPad. In an e-mail exchange with Ryan Tate from Valleywag, Steve Jobs claimed that the iPad offers \"freedom from porn\", leading to many upset replies including Adbustings in Berlin by artist Johannes P. Osterhoff and in San Francisco during WWDC10.\n",
"muvee announced CODEN in 2010 to enable fast trimming of HD videos in under-powered phones. Previously, it was deemed impossible to edit video in low-mid range feature phones. In higher end Smartphones, although there is more memory and processor power, video is captured in up to 4K, hence the pixels involved are growing exponentially faster than the CPU power.\n",
"A common problem with video recordings is the action jumps, instead of flowing smoothly, due to low frame rate. Though getting faster all the time, ordinary PCs are not yet fast enough to play videos and simultaneously capture them at professional frame rates, \"i.e.\" 30 frame/s. For many cases, high frame rates are not required. This is not generally an issue if simply capturing desktop video, which requires far less processing power than video playback, and it is very possible to capture at 30 frame/s. This varies depending on desktop resolution, processing requirements needed for the application that is being captured, and many other factors.\n"
] |
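The storage-size formula quoted in the second context passage above is easy to sanity-check. Here is a minimal Python sketch, assuming only the numbers given in that passage (the one-hour 300 kbit/s example and the 5 Mbit/s HD recommendation); the function name is just illustrative:

```python
# Streaming storage size, per the formula quoted above:
# size_MB = length_seconds * bit_rate_bps / (8 * 1024 * 1024)
def stream_size_mb(length_seconds: float, bit_rate_bps: float) -> float:
    return length_seconds * bit_rate_bps / (8 * 1024 * 1024)

# One hour of video at 300 kbit/s (the example from the passage):
print(stream_size_mb(3600, 300_000))     # ~128.7 MB

# For comparison, an hour at the 5 Mbit/s recommended for HD content:
print(stream_size_mb(3600, 5_000_000))   # ~2145.8 MB, roughly 2.1 GB
```

The second figure is only a rough comparison point, but it illustrates the answer's claim that long, high-bitrate streams move far more data than short, low-resolution clips.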
Russian roulette - what is the origin? Has it actually been played? Are there testimonials from survivors? | One of the first mentions of Russian Roulette in literature was in an 1840 novel by the Russian poet Lermontov, "A Hero of Our Time" (full ebook available on [Gutenberg Project](_URL_1_); the scene is in the last chapter of the book).
Since Lermontov was a Russian officer who served in the Caucasus, and at least some facts/stories in the novel (which is a work of fiction) were autobiographical, the Russian Roulette story might have some real background behind it.
EDIT: I did some additional research and found some obscure references in a biography of the Russian general Mikhail Skobelev (Russian only, Google Books link [here](_URL_0_); unsure if it was ever translated), who lived 1843–1882 and was famous for his service in one of the many Russo-Turkish wars, in the 1870s. The book mentions that Skobelev was aware of the risky game his officers played and unofficially approved of it as a display of valor and bravery, but was forced, by a special order from Emperor Alexander II, to punish it severely by demoting the officers involved to common soldiers (officers were mostly nobility, soldiers were mostly peasants, so this demotion would be quite shameful). The book fails to reference any sources, though, and I was also unable to find any traces of such a law or order.
But if those facts are true, it all fits quite well. In the early 1800s, during Lermontov's time, the Roulette appeared among officers in the Caucasus (note that Lermontov describes the game but never actually calls it Russian Roulette); by the late 1870s it is well known, has its name "the Roulette", and is popular enough to require special action from the Emperor and generals to stop its spread among officers. | [
"Russian Roulette is the third studio album by American hip hop producer and recording artist The Alchemist, the album was released on July 17, 2012. The project is constructed from samples of Soviet music (hence the title), making it a concept album. Featured artists on the project consists of acts such as Evidence, Fashawn, Roc Marciano, Action Bronson, Guilty Simpson, Danny Brown, Schoolboy Q, Big Twins.\n",
"Russian Roulette is the debut mini album from South Korean girl group Spica. It was released on February 8, 2012 with the song of same name as the promotional song. The EP was re-released on March 29, 2012 with the name \"Painkiller\". The song of same name was used to promote the re-release.\n",
"\"Russian Roulette\" received generally favorable reviews from critics upon its release, both internationally and in the group's native country, South Korea. It also gained attention for its unique music video semi-inspired by \"The Simpsons'\" fictional animated television series, \"Itchy & Scratchy\" that masks violent and almost lethal pranks the members pull on each other in a bright, bubbly and seemingly fun-filled video. \"Dazed Digital\" placed it at number six on their 20 Best K-pop Tracks of the Year and it won Best Music Video at the 2016 Melon Music Awards.\n",
"\"Russian Roulette\" () is a song by South Korean girl group Red Velvet and was released as a single for their third extended play of the same name. Written by Jo Yun Gyeong, it is primarily a synth-pop song which lyrically compares the process of winning someone's heart to a game of Russian roulette. It was released on September 7, 2016 by S.M. Entertainment along with an accompanying music video.\n",
"BULLET::::- In 2001, in their debut album \"Ompa til du dør\", Norwegian band Kaizers Orchestra included numerable references to Russian roulette, most notably in the songs \"Rulett\", \"Fra sjåfør til passasjer\", and \"Resistansen\".\n",
"\"Russian Roulette\" was composed by Albi Albertsson, Belle Humble and Markus Lindell who previously worked with the group for their last single \"One Of These Nights\" while its lyrics were penned by Jo Yun Gyeong. The song was written before Red Velvet's debut, and was first heard by the members when they were still trainees, unaware that they will one day record and release it.\n",
"Russian Roulette EP is an EP by Ed Harcourt, released on 5 May 2009 worldwide by Dovecote Records. The EP was made available as a digital download-only release, and as a limited edition 512-megabyte USB stick in the shape of a bullet. The USB stick features album artwork, photos, and seven unreleased music videos. Official promo-only CDs were also released for radio promotion. The song \"Caterpillar\" was written by Ed about his newborn daughter Roxy, and the time she spent in an incubator shortly after her birth. Harcourt said, \"It's the first song I've written about her. She was a little ill and we [Ed and his wife Gita] waited for her in the hospital for the chrysalis so we could take her home.\" The EP is also dedicated to her.\n"
] |
What is the best book to explain the evidence and the argument for climate change? | There is a wealth of resources that deal with climate change.
For a brief summary, I think the [booklet published by the Australian Academy of Science](_URL_0_) does a fair job. There are probably many similar booklets out there.
If you want more details, the website [Skeptical Science](_URL_1_) is an extraordinary resource. Not only do they have concise explanations (at different technical levels) of many climate change phenomena and myths, they are also constantly addressing claims from the contrarian community.
The [IPCC 4th Assessment Report by Working Group I](_URL_2_) is actually a somewhat lengthy but good resource. If you don't want to read all of it, try the Technical Summary or FAQ chapters. | [
"The book presents an in-depth analysis and refutation of climate change denial, going over several arguments point-by-point and disproving them with peer-reviewed evidence from the scientific consensus for climate change. The authors assert that those denying climate change engage in tactics including cherry picking data purported to support their specific viewpoints, and attacking the integrity of climate scientists. They use social science theory to examine the phenomenon of climate change denial in the wider public, and call this phenomenon a form of pathology.\n",
"The online magazine londonbookreview.com remarked, \"For those who believe that the argument about the causes of climate change have been settled may find this a difficult book to read. But those who retain an open mind may find this an interesting read, even if it is only to confirm that the science is far from being settled.\"\n",
"The book includes 36 short essays predicting the consequences of global warming and has been translated into over twenty languages. The book reviews evidence of historical climate change and attempts to compare this with the current era. The book argues that if atmospheric carbon dioxide levels continue to increase at current rates, the resulting climate change will cause mass species extinctions. The book also asserts that global temperatures have already risen enough to cause the annual monsoon rains in the Sahel region of Africa to diminish, causing droughts and desertification. This in turn, according to Flannery, has contributed to the conflict in the Darfur region through competition for disappearing resources. Further consequences, argued in the book, include increasing hurricane intensity, and decline in the health of coral reefs.\n",
"Michiko Kakutani argues in \"The New York Times\" that the book's \"roots as a slide show are very much in evidence. It does not pretend to grapple with climate change with the sort of minute detail and analysis\" given by other books on the topic \"and yet as a user-friendly introduction to global warming and a succinct summary of many of the central arguments laid out in those other volumes, \"An Inconvenient Truth\" is lucid, harrowing and bluntly effective.\"\n",
"Massie has said that the evidence behind the scientific consensus on climate change is not compelling. On the topic of climate change, Massie said \"there's a conflict of interest for some of the people doing the research. I think some people are trying to integrate backwards, starting with the answer and working the other way. I think the jury is still out on the contribution of our activities to the change in the earth's climate\". In 2013, he implied that cold weather undercut the argument for climate change, tweeting \"Today's Science Committee Hearing on Global Warming canceled due to snow\". During a 2019 House Oversight Committee hearing on the impact of climate change, Massie suggested that concerns over rising carbon dioxide levels were exaggerated, asking a witness, former senator John Kerry, why carbon dioxide levels millions of years ago were higher despite the non-presence of humans. CNN and \"The Washington Post\" described Massie's exchange with the witness as \"surreal\" and \"bizarre\".\n",
"In the book, Plimer asserts that the current theory of human-induced global warming is not in accord with history, archaeology, geology or astronomy and must be rejected, that promotion of this theory as science is fraudulent, and that the current alarm over climate change is the result of bad science. He argues that climate models focus too strongly on the effects of carbon dioxide, rather than factoring in other issues such as solar variation, the effect of clouds, and unreliable temperature measurements.\n",
"The book is critical of political efforts to address climate change and argues that extreme environmental changes are inevitable and unavoidable. Meteorologists have a huge amount to gain from climate change research, the book claims, and they have narrowed the climate change debate to the atmosphere, whereas the truth is more complex. Money would be better directed to dealing with problems as they occur rather than making expensive and futile attempts to prevent climate change.\n"
] |
do birds pee? | Water is a pretty heavy material, so if birds were to have a bladder it would significantly affect weight distribution. So instead of converting amino acids to urea, which needs to dissolve in a lot of water to be removed, they convert them to uric acid, which forms a paste when only a small amount of water is added. Lizards do the same, since they live in dry, water-sparse regions.
"This is a vocal bird in the breeding season, with constant calling as the crazed tumbling display flight is performed by the male. The typical contact call is a loud, shrill \"pee-wit\" from which they get their other name of peewit. Displaying males usually make a wheezy \"pee-wit, wit wit, eeze wit\" during their display flight, these birds also make squeaking or mewing sounds.\n",
"Other animals also visit hummingbird feeders. Bees, wasps, and ants are attracted to the sugar-water and may crawl into the feeder, where they may become trapped and drown. Orioles, woodpeckers, bananaquits, raccoons and other larger animals are known to drink from hummingbird feeders, sometimes tipping them and draining the liquid. In the southwestern United States, two species of nectar-drinking bats (\"Leptonycteris yerbabuenae\" and \"Choeronycteris mexicana\") visit hummingbird feeders to supplement their natural diet of nectar and pollen from saguaro cacti and agaves.\n",
"This is a terrestrial species, feeding on seeds and insects on the ground. It is notoriously difficult to see, keeping hidden in crops, and reluctant to fly, preferring to creep away instead. Even when flushed, it keeps low and soon drops back into cover. Often the only indication of its presence is the distinctive \"wet-my-lips\" repetitive song of the male. The call is uttered mostly in the mornings, evenings and sometimes at night. It is a strongly migratory bird, unlike most game birds.\n",
"BULLET::::- Steady When hunting upland birds, a flushing dog should be steady to wing and shot, meaning that he sits when a bird rises or a gun is fired. He does this in order to mark the fall and to avoid flushing other birds when pursuing a missed bird.\n",
"BULLET::::- Steady When hunting upland birds, a flushing dog should be steady to wing and shot, meaning that he sits when a bird rises or a gun is fired. He does this in order to mark the fall and to avoid flushing other birds when pursuing a missed bird.\n",
"BULLET::::- Steady When hunting upland birds, a flushing dog should be steady to wing and shot, meaning that he sits when a bird rises or a gun is fired. He does this in order to mark the fall and to avoid flushing other birds when pursuing a missed bird.\n",
"Pigeons and doves are placed in their taxonomic groups based predominantly on structural characteristics. Pigeons feed their young by regurgitation and suck water while their beak is immersed. Males and females divide incubation duties.\n"
] |
Are there the equivalent of speech impediments in sign language? | I've worked with a lot of impaired students, and some of the students who sign have physical disabilities with their hands/arms, or cognitive disabilities that affect their motor skills. So yes, trying to understand their signing is a lot like trying to understand someone with a severe speech impediment.
"For people who have hearing difficulties, sign language is sometimes employed to communicate. Sign language makes use of a combination of hand gestures, facial expressions, and body postures. Similar to speech, it has its own grammar and linguistic structure and may vary from each deaf community around the world.\n",
"In deaf patients who use manual language (such as American Sign Language), damage to the left hemisphere of the brain leads to disruptions in their signing ability. Paraphasic errors similar to spoken language have been observed; whereas in spoken language a phonemic substitution would occur (e.g. \"tagle\" instead of \"table\"), in ASL case studies errors in movement, hand position, and morphology have been noted. Agrammatism, or the lack of grammatical morphemes in sentence production, has also been observed in lifelong users of American Sign Language who have left hemisphere damage. The lack of syntactic accuracy shows that the errors in signing are not due to damage to the motor cortex, but rather are a manifestation of the damage to the language-producing area of the brain. Similar symptoms have been seen in a patient with left hemisphere damage whose first language was British Sign Language, further showing that damage to the left hemisphere primarily hinders linguistic ability, not motor ability. In contrast, patients who have damage to non-linguistic areas on the left hemisphere have been shown to be fluent in signing, but are unable to comprehend written language.\n",
"Speech disorders or speech impediments are a type of communication disorder where 'normal' speech is disrupted. This can mean stuttering, lisps, etc. Someone who is unable to speak due to a speech disorder is considered mute.\n",
"There is a common misconception that sign languages are somehow dependent on spoken languages: that they are spoken language expressed in signs, or that they were invented by hearing people. Similarities in language processing in the brain between signed and spoken languages further perpetuated this misconception. Hearing teachers in deaf schools, such as Charles-Michel de l'Épée or Thomas Hopkins Gallaudet, are often incorrectly referred to as \"inventors\" of sign language. Instead, sign languages, like all natural languages, are developed by the people who use them, in this case, deaf people, who may have little or no knowledge of any spoken language.\n",
"On the other side, sign language is a fully developed and autonomous language which individuals can learn with relative ease. It can be used to express a whole range of things which are impossible for individuals who can utilize only a limited amount of words. The drawback, however, is that deaf individuals sometimes totally depend on signing, and can barely communicate with people who do not know sign language.\n",
"Many languages, even not normally null-subject languages, omit the subject pronoun in imperative sentences, as usually occurs in English (see below). Details of the syntax of imperative sentences in certain other languages, and of differences between affirmative and negative imperatives, can be found in some of the other specific language sections below.\n",
"Words in sign languages, unlike those in spoken ones, are made not of sequential units but of spatial configurations of subword unit arrangements, the spatial analogue of the sonic-chronological morphemes of spoken language. These words, like spoken ones, are learnt by imitation. Indeed, rare cases of compulsive sign-language echolalia exist in otherwise language-deficient deaf autistic individuals born into signing families. At least some cortical areas neurobiologically active during both sign and vocal speech, such as the auditory cortex, are associated with the act of imitation.\n"
] |
why do we have to move our eyes when remembering something? | I'm not a neurologist or psychiatrist, but I can tell you how it kind of works for me. When I move my eyes to remember, it's more like I'm looking away from what I was focused on rather than visually focusing on something new. It's kind of like daydreaming, when you can be staring off into space without really seeing what you're looking at. Looking away just helps me look inward to figure something out (working out a problem, going through memories, etc.).
"Primarily, experimenters recorded eye movements while participants studied a series of photos. Individuals were then involved in a recognition task in which their eye movements were recorded for the second time. From the previous tasks, it was discovered that eye fixations, maintaining a visual gaze on a single location, were more clustered for remembering rather than knowing tasks. This suggests that remembering is associated with encoding a specific salient component of an item whereas recognition is activated by an augmented memory for this part of the stimulus.\n",
"In situations where one's physical appearance and actions are important (for example, giving a speech in front of an audience), the memory of that situation will likely be remembered in the observer perspective. This is due to the general trend that when the focus of attention in a person's memory is on themselves, they will likely see themselves from someone else's point of view. This is because, in \"center-of-attention\" memories, the person is conscious about the way they are presenting themselves and instinctively try to envision how others were perceiving them.\n",
"To usefully remember something, people must later recognize that they've seen it before and correctly remember the context in which it was seen. With age, the ability to discriminate between new and previous events starts to fail, and errors in recalling experiences become more common. Larry Jacoby of New York University completed a study in 1999 showing how common these errors can become and giving a better understanding why recognition errors are particularly common in Alzheimer's disease. In Jacoby's study, participants were given two lists of words: one to read and one that would be said out loud to them. All subjects were then given a \"test\" list which contained some words they had read, some they had heard, and some new words; the subjects had to determine which words were which. Jacoby found that university students and 75-year-olds were equally likely to correctly recognize whether or not the word had been presented, but 75-year-olds are much more likely to mistake whether the word was spoken or read.\n",
"However, there is another way of functioning, which Gattegno called retention. An example of retention is the reception of sensory images. When we look at something – a street, a film, a person, a fine view – photons move from what we are contemplating and enter our eyes to strike the retina. When we listen to something, we create auditory images in a similar way, that is, through energy that enters our system, rather than energy we allocate from inside, to memorize an arbitrary item. To retain an auditory or visual image, we have to use perhaps only an insignificant amount of our own to retain it; the amount is so small we are not aware of any effort. Such images are easily acquired and generally remain for long periods. We all have experiences similar to the following examples Gattegno offered:\n",
"For sequential tasks, eye-gaze movement occurs during important kinematic events like changing the direction of a movement or when passing perceived landmarks. This is related to the task-search-oriented nature of the eyes and their relation to the movement planning of the hands and the errors between motor signal output and consequences perceived by the eyes and other senses that can be used for corrective movement. The eyes have a tendency to \"refixate\" on a target to refresh the memory of its shape, or to update for changes in its shape or geometry in drawing tasks that involve the relating of visual input and hand movement to produce a copy of what was perceived. In high accuracy tasks, when acting on greater amounts of visual stimuli, the time it takes to plan and execute movement increases linearly, per Fitts's law.\n",
"Using PET studies and word stimuli, Endel Tulving found that remembering is an automatic process. It is also well documented that a hemispheric asymmetry occurs in the PFC: When encoding memories, the Left Dorsolateral PFC (LPFC) is activated, and when retrieving memories, activation is seen in the Right Dorsolateral PFC (RPFC).\n",
"There is evidence suggesting that different processes are involved in remembering something versus knowing whether it is familiar. It appears that \"remembering\" and \"knowing\" represent relatively different characteristics of memory as well as reflect different ways of using memory.\n"
] |
How strong is the electric current in Earth's core that produces Earth's magnetic field? | [Here](_URL_0_) is a discussion on this topic; the first comment used the formula for a current loop and came up with a value of around 10^8 amps (a rough version of that calculation is sketched below). The comments note that this uses the field strength at the surface when it should be at the core, so more like 10^11 amps.
I’m not really sure how valid an estimate this is; as I understand it, dynamo theory says the motion of the liquid iron itself is what causes the field, not the current through the liquid. So we’d be looking for equations that relate the motion of the liquid to field strength, as opposed to equations for current. I would think that also kinda throws out the concept of current flow. Could be way off on that though. | [
"The main part of Earth's magnetic field is generated in the core, the site of a dynamo process that converts the kinetic energy of thermally and compositionally driven convection into electrical and magnetic field energy. The field extends outwards from the core, through the mantle, and up to Earth's surface, where it is, approximately, a dipole. The poles of the dipole are located close to Earth's geographic poles. At the equator of the magnetic field, the magnetic-field strength at the surface is , with a magnetic dipole moment of at epoch 2000, decreasing nearly 6% per century. The convection movements in the core are chaotic; the magnetic poles drift and periodically change alignment. This causes secular variation of the main field and field reversals at irregular intervals averaging a few times every million years. The most recent reversal occurred approximately 700,000 years ago.\n",
"The Earth and most of the planets in the Solar System, as well as the Sun and other stars, all generate magnetic fields through the motion of electrically conducting fluids. The Earth's field originates in its core. This is a region of iron alloys extending to about 3400 km (the radius of the Earth is 6370 km). It is divided into a solid inner core, with a radius of 1220 km, and a liquid outer core. The motion of the liquid in the outer core is driven by heat flow from the inner core, which is about , to the core-mantle boundary, which is about . The heat is generated by potential energy released by heavier materials sinking toward the core (planetary differentiation, the iron catastrophe) as well as decay of radioactive elements in the interior. The pattern of flow is organized by the rotation of the Earth and the presence of the solid inner core.\n",
"Dynamo theory suggests that convection in the outer core, combined with the Coriolis effect, gives rise to Earth's magnetic field. The solid inner core is too hot to hold a permanent magnetic field (see Curie temperature) but probably acts to stabilize the magnetic field generated by the liquid outer core. The average magnetic field strength in Earth's outer core is estimated to be 25 Gauss (2.5 mT), 50 times stronger than the magnetic field at the surface.\n",
"The Earth's magnetic field is believed to be generated by electric currents in the conductive iron alloys of its core, created by convection currents due to heat escaping from the core. However the process is complex, and computer models that reproduce some of its features have only been developed in the last few decades.\n",
"Earth's magnetic field, also known as the geomagnetic field, is the magnetic field that extends from the Earth's interior out into space, where it interacts with the solar wind, a stream of charged particles emanating from the Sun. The magnetic field is generated by electric currents due to the motion of convection currents of molten iron in the Earth's outer core: these convection currents are caused by heat escaping from the core, a natural process called a geodynamo. The magnitude of the Earth's magnetic field at its surface ranges from 25 to 65 microteslas (0.25 to 0.65 gauss). As an approximation, it is represented by a field of a magnetic dipole currently tilted at an angle of about 11 degrees with respect to Earth's rotational axis, as if there were a bar magnet placed at that angle at the center of the Earth. The North geomagnetic pole, currently located near Greenland in the northern hemisphere, is actually the south pole of the Earth's magnetic field, and conversely.\n",
"Earth's magnetic field, predominantly dipolar at its surface, is distorted further out by the solar wind. This is a stream of charged particles leaving the Sun's corona and accelerating to a speed of 200 to 1000 kilometres per second. They carry with them a magnetic field, the interplanetary magnetic field (IMF).\n",
"The Earth's magnetic field strength was measured by Carl Friedrich Gauss in 1832 and has been repeatedly measured since then, showing a relative decay of about 10% over the last 150 years. The Magsat satellite and later satellites have used 3-axis vector magnetometers to probe the 3-D structure of the Earth's magnetic field. The later Ørsted satellite allowed a comparison indicating a dynamic geodynamo in action that appears to be giving rise to an alternate pole under the Atlantic Ocean west of South Africa.\n"
] |
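The current-loop estimate referenced in the answer above can be reproduced from the field at the centre of a circular loop, B = μ0·I / (2R), solved for I. This is only a back-of-the-envelope sketch: a single loop is a crude stand-in for the geodynamo, and the rounded field strengths and radii are assumptions consistent with the answer and the context passages, not measured inputs.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def loop_current(b_tesla: float, radius_m: float) -> float:
    """Current in a single circular loop giving field B at its centre: B = mu0*I/(2R)."""
    return 2 * radius_m * b_tesla / MU_0

# ~50 microtesla surface field with Earth's radius (~6.4e6 m):
print(f"{loop_current(50e-6, 6.4e6):.1e} A")   # ~5e8 A, the ~10^8 A figure in the answer

# ~25 gauss (2.5 mT) outer-core field with the core radius (~3.5e6 m):
print(f"{loop_current(2.5e-3, 3.5e6):.1e} A")  # ~1.4e10 A, closer to the larger figure quoted
```

Neither number should be read as more than an order-of-magnitude exercise, which fits the caveat in the answer that dynamo theory describes the field in terms of fluid motion rather than a simple current loop.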
A foolish question on my part, but it's bothering me (Relating to stellar evolution). | All other things being equal, more mass would mean longer lifetime, but all other things are not equal. The luminosity of a star increases much faster than its mass; it typically goes something like (mass)^3.5 (the exponent varies somewhat depending on the mass of the star), and so the increased rate of fusion wins out over the presence of more mass to go through.
Why is this? The rate at which the star undergoes fusion is fixed by the star hitting an equilibrium between the gravitational forces pulling matter inward and the pressure pushing outwards resulting from the fusion at the core. A more massive star will require a greater outward pressure to achieve balance, and so it will compress to achieve a greater rate of fusion at the center than a lower mass star would have (the lifetime scaling this implies is sketched below). | [
"Stellar evolution is not studied by observing the life of a single star, as most stellar changes occur too slowly to be detected, even over many centuries. Instead, astrophysicists come to understand how stars evolve by observing numerous stars at various points in their lifetime, and by simulating stellar structure using computer models.\n",
"BULLET::::- To permit a better understanding of the more rapid stages of stellar evolution (such as the classification, frequency, correlations and directly observed attributes of rare fundamental changes and of cyclical changes). This has to be achieved by detailed examination and re-examination of a great number of objects over a long period of operation. Observing a large number of objects in the galaxy is also important to understand the dynamics of our galaxy.\n",
"In the late phase of stellar evolution, massive stars like Betelgeuse exhibit high rates of mass loss, possibly as much as 1 every 10,000 years, resulting in a complex circumstellar environment that is constantly in flux. In a 2009 paper, stellar mass loss was cited as the \"key to understanding the evolution of the universe from the earliest cosmological times to the current epoch, and of planet formation and the formation of life itself\". However, the physical mechanism is not well understood. When Schwarzschild first proposed his theory of huge convection cells, he argued it was the likely cause of mass loss in evolved supergiants like Betelgeuse. Recent work has corroborated this hypothesis, yet there are still uncertainties about the structure of their convection, the mechanism of their mass loss, the way dust forms in their extended atmosphere, and the conditions which precipitate their dramatic finale as a type II supernova. In 2001, Graham Harper estimated a stellar wind at 0.03 every 10,000 years, but research since 2009 has provided evidence of episodic mass loss making any total figure for Betelgeuse uncertain. Current observations suggest that a star like Betelgeuse may spend a portion of its lifetime as a red supergiant, but then cross back across the H-R diagram, pass once again through a brief yellow supergiant phase and then explode as a blue supergiant or Wolf-Rayet star. \n",
"In stellar astronomy, the Algol paradox is a paradoxical situation when elements of a binary star seem to evolve in discord with the established theories of stellar evolution. A fundamental feature of these theories is that the rate of evolution of stars depends on their mass: The greater the mass, the faster this evolution, and the more quickly it leaves the main-sequence, entering either a subgiant or giant phase.\n",
"In interpreting the cosmogonic role of stellar associations Ambartsumian's went on to argue the necessity of change to existing views on star formation; namely, for dissolution and against condensation. Clearly, his early work on the expansion of the planetary nebulae and the fact that those objects result from the ejection of the outer layers of stars-the dynamical dissolution of star clusters via evaporation and of associations-created a logical background for his radical conclusion that the basic evolutionary processes in the Universe are not contraction and condensation. On the contrary, they are always outgoing from some denser state of matter. Thus, he postulated the existence of proto-stars, i.e. superdense objects as progenitors of stars, nebulae, diffuse matter, etc.\n",
"Struve's belief in the widespread existence of life and intelligence in the Universe stemmed from his studies of slow-rotating stars. Many stars, including the Sun, spin at a much lower rate than was predicted by contemporary theories of early stellar evolution. The reason for this, claimed Struve, was that they were surrounded by planetary systems which had carried away much of the stars' original angular momentum. So numerous were the slow-spinning stars that Struve estimated, in 1960, there might be as many as 50 billion planets in our Galaxy alone. As to how many might harbor intelligent life, he wrote:\n",
"As a science, the study of astronomy is somewhat hindered in that direct experiments with the properties of the distant universe are not possible. However, this is partly compensated by the fact that astronomers have a vast number of visible examples of stellar phenomena that can be examined. This allows for observational data to be plotted on graphs, and general trends recorded. Nearby examples of specific phenomena, such as variable stars, can then be used to infer the behavior of more distant representatives. Those distant yardsticks can then be employed to measure other phenomena in that neighborhood, including the distance to a galaxy.\n"
] |
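The mass-luminosity argument in the answer above translates directly into a lifetime scaling: fuel grows roughly like M while luminosity grows roughly like M^3.5, so lifetime goes like M / M^3.5 = M^-2.5. A small sketch, assuming a round 10-billion-year main-sequence lifetime for a 1-solar-mass star and treating the exponent as only approximate:

```python
SOLAR_LIFETIME_YEARS = 1.0e10  # rough main-sequence lifetime of a 1-solar-mass star (assumed)

def main_sequence_lifetime(mass_solar: float, exponent: float = 3.5) -> float:
    """Lifetime ~ fuel / burn rate ~ M / M**exponent = M**(1 - exponent), scaled to the Sun."""
    return SOLAR_LIFETIME_YEARS * mass_solar ** (1 - exponent)

for m in (0.5, 1.0, 2.0, 10.0):
    print(f"{m:>4} solar masses -> ~{main_sequence_lifetime(m):.1e} years")
# A 10-solar-mass star comes out around 3e7 years: far shorter-lived than the Sun,
# even though it starts with ten times the fuel.
```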
how can some parts of the universe be so far that light hasn't reached them, since nothing can travel faster than light? | Because space itself can expand faster than the speed of light; matter moving through space cannot (a quick Hubble-law estimate of where that crossover happens is sketched below). | [
"Some parts of the universe are too far away for the light emitted since the Big Bang to have had enough time to reach Earth or its scientific space-based instruments, and so lie outside the observable universe. In the future, light from distant galaxies will have had more time to travel, so additional regions will become observable. However, due to Hubble's law, regions sufficiently distant from the Earth are expanding away from it faster than the speed of light (special relativity prevents nearby objects in the same local region from moving faster than the speed of light with respect to each other, but there is no such constraint for distant objects when the space between them is expanding; see uses of the proper distance for a discussion) and furthermore the expansion rate appears to be accelerating due to dark energy. Assuming dark energy remains constant (an unchanging cosmological constant), so that the expansion rate of the universe continues to accelerate, there is a \"future visibility limit\" beyond which objects will \"never\" enter our observable universe at any time in the infinite future, because light emitted by objects outside that limit would never reach the Earth. (A subtlety is that, because the Hubble parameter is decreasing with time, there can be cases where a galaxy that is receding from the Earth just a bit faster than light does emit a signal that reaches the Earth eventually.) This future visibility limit is calculated at a comoving distance of 19 billion parsecs (62 billion light-years), assuming the universe will keep expanding forever, which implies the number of galaxies that we can ever theoretically observe in the infinite future (leaving aside the issue that some may be impossible to observe in practice due to redshift, as discussed in the following paragraph) is only larger than the number currently observable by a factor of 2.36.\n",
"By thinking of photons of light as ants crawling along the rubber rope of space between the galaxy and us, we can see that just as the ant can eventually reach the end of the rope, so light from distant galaxies, even some that appear to be receding at a speed greater than the speed of light, can eventually reach Earth, given sufficient time.\n",
"He goes on to explain that in Albert Einstein's special theory of relativity, the rule that nothing can go faster than the speed of light, does not apply to galaxies in an expanding universe. He further explains that while that rule may apply in a preexisting static space, it does \"not\" apply in an expanding universe. In an expanding cosmic universe galaxies can travel away from each other at speeds in excess of the speed of light. As a result of this there are galaxies so far away that the light from them have not reached earth yet and therefore we don't even know they exist. Ferris refers to this phenomenon as a boundless steadily increasing and changing universe with a changing density.\n",
"This puzzle has a bearing on the question of whether light from distant galaxies can ever reach us given the metric expansion of space. The universe is expanding, which leads to increasing distances to other galaxies, and galaxies that are far enough away from us will have an apparent relative motion greater than the speed of light. It might seem that light leaving such a distant galaxy could never reach us.\n",
"The proper distance—the distance as would be measured at a specific time, including the present—between Earth and the edge of the observable universe is 46 billion light-years (14 billion parsecs), making the diameter of the observable universe about 93 billion light-years (28 billion parsecs). The distance the light from the edge of the observable universe has travelled is very close to the age of the Universe times the speed of light, , but this does not represent the distance at any given time because the edge of the observable universe and the Earth have since moved further apart. For comparison, the diameter of a typical galaxy is 30,000 light-years (9,198 parsecs), and the typical distance between two neighboring galaxies is 3 million light-years (919.8 kiloparsecs). As an example, the Milky Way is roughly 100,000–180,000 light-years in diameter, and the nearest sister galaxy to the Milky Way, the Andromeda Galaxy, is located roughly 2.5 million light-years away.\n",
"According to the current understanding of physics, an object within space-time cannot exceed the speed of light, which means an attempt to travel to any other galaxy would be a journey of millions of earth years via conventional flight.\n",
"From the ship's frame, the acceleration would continue at the same rate. However, due to Lorentz contraction, the galaxy around the ship would appear to become squashed in the direction of travel, and a destination many light years away would appear to become much closer. Traveling to this destination at subluminal speeds would become practical for the onboard travellers. Ultimately, from the ship's frame, it would be possible to reach anywhere in the observable universe, without the ship ever accelerating to light speed.\n"
] |
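One way to make the answer above concrete is Hubble's law, v = H0 · d, which the first context passage leans on: beyond some proper distance the recession speed formally exceeds c. A quick sketch, assuming a round value of H0 of about 70 km/s per megaparsec (the passages do not give an exact value):

```python
H0 = 70.0            # Hubble constant, km/s per megaparsec (assumed round value)
C = 299_792.458      # speed of light, km/s
LY_PER_MPC = 3.26e6  # light-years per megaparsec (approximate)

# Distance at which Hubble-law recession reaches the speed of light:
d_mpc = C / H0
print(f"~{d_mpc:.0f} Mpc, roughly {d_mpc * LY_PER_MPC / 1e9:.0f} billion light-years")
# ~4300 Mpc, about 14 billion light-years: galaxies farther away than this recede
# faster than light, yet, as the passages note, some of their light can still reach us.
```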
Is it possible or even part of the theory of evolution that a population produces multiple beneficial mutations at once? | This happens; people tend to follow one mutation down the generations only for simplicity of demonstration, but of course every gene evolves in parallel. Many thousands of genes are all evolving at the same time.
Mathematically, however, the most common mutations are minor, insignificant ones that do not affect reproductive capacity, so any strongly beneficial mutation will be diluted within a population. There can also be detrimental mutations that may be passed on. Thus the timescales are still very long. | [
"The probability that amount of mutation will go to 0 with the next generation is increased by using non-uniform mutation operator. It keeps the population from stagnating in the early stages of the evolution. It tunes solution in later stages of evolution. This mutation operator can only be used for integer and float genes.\n",
"Mutations can have many different effects upon an organism. It is generally believed that the majority of non-neutral mutations are deleterious, which means that they will cause a decrease in the organism's overall fitness. If a mutation has a deleterious effect, it will then usually be removed from the population by the process of natural selection. Sexual reproduction is believed to be more efficient than asexual reproduction in removing those mutations from the genome.\n",
"Mutations are stochastic and typically occur randomly across genes. Mutation rates for single nucleotide sites for most organisms are very low, roughly 10 to 10 per site per generation, though some viruses have higher mutation rates on the order of 10 per site per generation. Among these mutations, some will be neutral or beneficial and will remain in the genome unless lost via genetic drift, and others will be detrimental and will be eliminated from the genome by natural selection.\n",
"There are several problems not seen in the above. First, mutations occur as random events. Second, the chance that any site in the genome varies is different from the next site, a very good example is the codons for amino acids, the first two nt in a codon may mutate at 1 per billion years, but the third nt may mutate 1 per million years. Unless scientist study the sequence of a great many animals, particularly those close to the branch being examined, they generally do not know what the rate of mutation for a given site. Mutations do occur at 1st and 2nd positions of codons, but in most cases these mutations are under negative selection and so are removed from the population over small periods of time. In defining the rate of evolution in the anchor one has the problem that random mutation creates. For example, a rate of .005 or .010 can also explain 24 mutations according to the binomial probability distribution. Some of the mutations that did occur between the two have reverted, hiding an initially higher rate. Selection may play into this, a rare mutation may be selective at point X in time, but later climate may change or the species migrates and it is not longer selective, and pressure exerted on new mutations that revert the change, and sometimes the reversion of a nt can occur, the greater the distance between two species the more likely this is going to occur. In addition, from that ancestral species both species may randomly mutate a site to the same nucleotide. Many times this can be resolved by obtaining DNA samples from species in the branches, creating a parsimonious tree in which the order of mutation can be deduced, creating branch-length diagram. This diagram will then produce a more accurate estimate of mutations between two species. Statistically one can assign variance based on the problem of randomnicity, back mutations, and parallel mutations (homoplasies) in creating an error range.\n",
"His argument is as follows. The minimal rate of human mutation is estimated to be 100 new mutations per generation. According to Sanford, Kimura's curve shows that most mutations have a near-neutral effect, and are furthermore slightly deleterious. As such, they cause a genetic rust unstoppable by natural selection. Therefore, the main claim is that the rise of random genetic mutations is too unnoticeable to be affected by natural selection, yet harmful enough to cause the gradual extinction of any species through time.\n",
"One advantage of directed evolution is that the mutations do not have to be completely random; instead they can be random enough to discover unexplored potential, but not so random as to be inefficient. The number of possible mutation combinations is astronomical, but instead of just randomly trying to test as many as possible, Arnold integrates her knowledge of biochemistry to narrow down the options, focusing on introducing mutations in areas of the protein that are likely to have the most positive effect on activity and avoiding areas in which mutations would likely be, at best, neutral and at worst, detrimental (such as disrupting proper protein folding).\n",
"BULLET::::- These other mutations are expected only through successive sweeps of mutants with a fitness advantage. One can only expect multiple mutants to arise if each mutation is independently beneficial, and not in cases where the mutations are individually neutral but together advantageous. Successive takeovers are the only reliable way for evolution to proceed in a chemostat.\n"
] |
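One of the passages above cites an estimate of about 100 new mutations per human generation. That figure follows from a very low per-site mutation rate applied across a genome of a few billion base pairs; the rate and genome size below are assumed round values for illustration, not numbers taken from the passages:

```python
# Rough consistency check: per-site rate x number of sites ~ new mutations per generation
per_site_rate = 1.2e-8     # assumed human per-site mutation rate per generation
diploid_sites = 2 * 3.2e9  # assumed diploid human genome size in base pairs

expected_new_mutations = per_site_rate * diploid_sites
print(round(expected_new_mutations))  # ~77, the same order as the ~100 quoted above
```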
what is the difference between gaming fps, where it's normally around 60, and slow-motion fps in videos, where it says 1000 fps. | The 1000 FPS in the video is referring to the number of frames per second the camera captures, not the number of FPS the video is playing back.
When you're playing LoL and getting 60 FPS, the game is rendering 60 FPS, and displaying it at 60 FPS.
The 1000 FPS camera is capturing 1000 frames in a second, and then the video is displaying it at, say, 60 FPS.
If you have 1000 frames of footage but are only showing 60 per second, an action that would normally take one second takes 1000/60 seconds, about 16.7 seconds, to play back. This is why it appears to be in slow motion (the arithmetic is sketched below). | [
"Because both film speeds have been used in 25-fps regions, viewers can face confusion about the true speed of video and audio, and the pitch of voices, sound effects, and musical performances, in television films from those regions. For example, they may wonder whether the Jeremy Brett series of Sherlock Holmes television films, made in the 1980s and early 1990s, was shot at 24 fps and then transmitted at an artificially fast speed in 25-fps regions, or whether it was shot at 25 fps natively and then slowed to 24 fps for NTSC exhibition.\n",
"It is clear that a fast response time and high refresh rate is desired in order to display smooth motion. A framerate of 60 frames per second (FPS) is generally the minimum acceptable framerate in a video game for enthusiasts, with some enthusiasts preferring 144 FPS or even 165 FPS, to match the refresh rate of their monitor (144 Hz or 165 Hz, respectively). Some gaming monitors can be overclocked to achieve even higher refresh rates. Apart from the primary display, some enthusiasts choose to use a secondary display or more to their PC. Many players game using 3 monitors, which requires 3 times the graphics performance.\n",
"At the time of announcement, Nikon claimed that it features the world's fastest autofocus, with 10 fps—even during videos—based on hybrid autofocus (phase detection/contrast-detect AF with AF-assist illuminator), as well as the world's fastest continuous shooting speed (60 fps) among all cameras with interchangeable lenses. Slow-motion movies can be captured in up to 1200 fps with reduced resolution. Its inbuilt intervalometer enables time-lapse photography.\n",
"For 30-fps standards, a process called \"3:2 pulldown\" is used. One film frame is transmitted for three video fields (lasting 1½ video frames), and the next frame is transmitted for two video fields (lasting 1 video frame). Two film frames are thus transmitted in five video fields, for an average of 2½ video fields per film frame. The average frame rate is thus 60 ÷ 2.5 = 24 frames per second, so the average film speed is nominally exactly what it should be. (In reality, over the course of an hour of real time, 215,827.2 video fields are displayed, representing 86,330.88 frames of film, while in an hour of true 24-fps film projection, exactly 86,400 frames are shown: thus, 29.97-fps NTSC transmission of 24-fps film runs at 99.92% of the film's normal speed.) Still-framing on playback can display a video frame with fields from two different film frames, so any difference between the frames will appear as a rapid back-and-forth flicker. There can also be noticeable jitter/\"stutter\" during slow camera pans (telecine judder).\n",
"BULLET::::- The film can be shot at 24 frames per second. In this case, when transmitted in its native region, the film may be accelerated to 25 fps according to the analog technique described above, or kept at 24 fps by the digital technique described above. When the same film is transmitted in regions that use a nominal 30-fps television standard, there is no noticeable change in speed, tempo, and pitch.\n",
"Film, at its native 24 FPS rate could not be displayed without the necessary pulldown process, often leading to \"judder\": To convert 24 frames per second into 60 frames per second, every odd frame is repeated, playing twice; Every even frame is tripled. This creates uneven motion, appearing stroboscopic. Other conversions have similar uneven frame doubling. Newer video standards support 120, 240, or 300 frames per second, so frames can be evenly multiplied for common frame rates such as 24 FPS film and 30 FPS video, as well as 25 and 50 FPS video in the case of 300 FPS displays. These standards also support video that's natively in higher frame rates, and video with interpolated frames between its native frames. Some modern films are experimenting with frame rates higher than 24 FPS, such as 48 and 60 FPS.\n",
"Featuring a new 14 megapixel image sensor and further increased autofocus (hybrid autofocus with phase detection/contrast-detect AF and AF-assist illuminator) speed to 15 frames per second (fps), the maximum continuous shooting speed stays at 60 fps for up to 40 frames.\n"
] |
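The arithmetic in the answer above is worth spelling out once. A tiny sketch using the answer's own numbers (1000 fps capture, 60 fps playback); the function name is just illustrative:

```python
def slow_motion(capture_fps: float, playback_fps: float, real_seconds: float = 1.0):
    """How long `real_seconds` of captured action lasts when played back."""
    frames = capture_fps * real_seconds       # frames recorded
    playback_seconds = frames / playback_fps  # time needed to show them all
    return frames, playback_seconds

frames, seconds = slow_motion(1000, 60)
print(int(frames), round(seconds, 1))  # 1000 frames, ~16.7 s of playback for 1 s of action
# A game rendering at 60 fps and displayed at 60 fps has a factor of 1: real time.
```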
If you traveled back in time, wouldn't your being there mean that more mass has entered the universe than was originally there? | So you're asking "if you violate the laws of physics, wouldn't that mean you've violated the laws of physics?"
Yes. | [
"Namely, that from the perspective of the point of origin of the Big Bang, according to Einstein's equations of the 'stretching factor', time dilates by a factor of roughly 1,000,000,000,000, meaning one trillion days on earth would appear to pass as one day from that point, due to the stretching of space. When applied to the estimated age of the universe at 13.8 billion years, from the perspective of the point of origin, the universe today would appear to have just begun its sixth day of existence, or if the universe is 15 billion years old from the perspective of earth, it would appear to have just completed its sixth day.\n",
"\"There was a big debate as to whether the millions of other galaxies in the universe were accelerating away from each other or moving away at a constant rate, or whether they were actually coming back in on themselves. The scientists originally thought that everything would eventually come back in on itself and implode, but what's actually happening is that they are accelerating away at a rate of 6 x 7.\"\n",
"If the expansion of the universe continues and it stays in its present form, eventually all but the nearest galaxies will be carried away from us by the expansion of space at such a velocity that our observable universe will be limited to our own gravitationally bound local galactic cluster. In the very long term (after many trillions – thousands of billions – of years, cosmic time), the Stelliferous Era will end, as stars cease to be born and even the longest-lived stars gradually die. Beyond this, all objects in the universe will cool and (with the possible exception of protons) gradually decompose back to their constituent particles and then into subatomic particles and very low level photons and other fundamental particles, by a variety of possible processes.\n",
"At higher speeds, the time on board will run even slower, so the astronaut could travel to the center of the Milky Way (30,000 light years from Earth) and back in 40 years ship-time. But the speed according to Earth clocks will always be less than 1 light year per Earth year, so, when back home, the astronaut will find that more than 60 thousand years will have passed on Earth.\n",
"In this context, the role of time in the Second Law of Thermodynamics is curious alone because the irreversible increase of entropy in the universe, as a principle that can be verified through daily observations, as something equivalent of the irreversibility of time, is enough proof of the big bang theory. That is to say, since the increase of entropy in time is continuous and irreversible, one would arrive at the prime oneness if one could travel backward in time.\n",
"Many models of the Universe suggest that there was an inflationary epoch in the early history of the Universe when space expanded by a large factor in a very short amount of time. If this expansion was not symmetric in all directions, it may have emitted gravitational radiation detectable today as a gravitational wave background. This background signal is too weak for any currently operational gravitational wave detector to observe, and it is thought it may be decades before such an observation can be made.\n",
"Because of the difference of mass, the activity of life inside of a living thing's atoms would undergo many millennia before enough time passes for that living thing to take a single step. Raëlians believe the universe is infinite in time and space and lacks a center. Because of this, one could not imagine where an ethereal soul would go.\n"
] |
why don't we clean up satellite debris? | High cost, low benefit. | [
"\"Space debris\" usually refers to the remains of spacecraft that have either fallen to Earth or are still orbiting Earth. Space debris may also consist of natural components such as chunks of rock and ice. The problem of space debris has grown as various space programs have left legacies of launches, explosions, repairs, and discards in both low Earth orbit and more remote orbits. These orbiting fragments have reached a great enough proportion to constitute a hazard to future space launches of both satellite and manned vehicles. Various government agencies and international organizations are beginning to track space debris and also research possible solutions to the problem. While many of these items, ranging in size from nuts and bolts to entire satellites and spacecraft, may fall to Earth, other items located in more remote orbits may stay aloft for centuries. The velocity of some of these pieces of space junk have been clocked in excess of 17,000 miles per hour (27,000 km/h). A piece of space debris falling to Earth leaves a fiery trail, just like a meteor.\n",
"The geo graveyard belt orbital regime is valuable as a storage and disposal location for space debris after their useful economic life is completed as geosynchronous communication satellites. Artificial satellites are left in space because the economic cost of removing the debris would be high, and current public policy does not require nor incentivize rapid removal by the party that first inserted the debris in outer space and thus created a negative externality for others—a placing of the cost onto them.\n",
"A consensus of speakers at a meeting in Brussels on 30 October 2012 organized by the Secure World Foundation (a U.S. think tank) and the French International Relations Institute reported that removal of the largest debris would be required to prevent the risk to spacecraft becoming unacceptable in the foreseeable future (without any addition to the inventory of dead spacecraft in LEO). Removal costs and legal questions about ownership and the authority to remove defunct satellites have stymied national or international action. Current space law retains ownership of all satellites with their original operators, even debris or spacecraft which are defunct or threaten active missions.\n",
"One public policy proposal to deal with growing space debris is a \"one-up/one-down\" launch license policy for Earth orbits. Launch vehicle operators would have to pay the cost of debris mitigation. They would need to build the capability into their launch vehicle-robotic capture, navigation, mission duration extension, and substantial additional propellant – to be able to rendezvous with, capture and deorbit an existing derelict satellite from approximately the same orbital plane.\n",
"An alternative that has been proposed for years is to introduce the capability to retrieve derelict objects for near-space clean up and then either deorbit the satellite or do some sort of in-space recycling of the satellite materials. Several technical approaches have been proposed, but there has been no legal framework to date that has required satellite operators to clean up the negative externality of their derelict satellites. New approaches offer the technical prospect of markedly reducing the cost of object capture and deorbit with the implementation of a one-up/one-down launch license regime to Earth orbits that would require satellite operators to remove one spacecraft for each one deployed.\n",
"Initially, the term space debris referred to the natural debris found in the solar system: asteroids, comets, and meteoroids. However, with the 1979 beginning of the NASA Orbital Debris Program, the term also refers to the debris (alt. space waste or space garbage) from the mass of defunct, artificially created objects in space, especially Earth orbit. These include old satellites and spent rocket stages, as well as the fragments from their disintegration and collisions.\n",
"In order to deal with human-caused space debris, Busek proposed in 2014 a remotely controlled vehicle to rendezvous with debris, capture it, and attach a smaller deorbit satellite to the debris, then drag the debris/smallsat-combination, by means of a tether, to the desired location. The larger sat would then tow the debris/smallsat combination to either deorbit or move it to a higher graveyard orbit by means of electric propulsion. The larger satellite is named the \"ORbital DEbris Remover\", or \"ORDER\" which will carry over 40 SUL (\"Satellite on an Umbilical Line\") deorbit sats plus sufficient propellant for the large number of orbital maneuvers required to effect a 40-satellite debris removal mission over many years. Busek is projecting the cost for such a space tug to be .\n"
] |
why does food that should be warm seem to taste worse when cold? also why does it seem cold when left out for too long, when it should only be room temperature? | When it's warm, it has more flavor. Some liquids evaporate and give off that aroma, and some liquids coat your tongue better.
Food left out *is* cold, compared to the temperature it's served at and the temperature of your body. Food is usually heated above 100 degrees F. Your body is in the 90s. Room temperature is usually 15-25 degrees lower than your body. Go lick something with the same heat transfer rate as oil and it will seem just as cold. If you had two pieces of food and put one in the fridge and left the other out, it'd be pretty obvious the refrigerated one is much colder when you tasted it.
Edit: fixed "it" to "out" | [
"Temperature can be an essential element of the taste experience. Food and drink that—in a given culture—is traditionally served hot is often considered distasteful if cold, and vice versa. For example, alcoholic beverages, with a few exceptions, are usually thought best when served at room temperature or chilled to varying degrees, but soups—again, with exceptions—are usually only eaten hot. A cultural example are soft drinks. In North America it is almost always preferred cold, regardless of season.\n",
"Foods that spoil easily, such as meats, dairy, and seafood, must be prepared a certain way to avoid contaminating the people for whom they are prepared. As such, the rule of thumb is that cold foods (such as dairy products) should be kept cold and hot foods (such as soup) should be kept hot until storage. Cold meats, such as chicken, that are to be cooked should not be placed at room temperature for thawing, at the risk of dangerous bacterial growth, such as \"Salmonella\" or \"E. coli\".\n",
"The experience of eating favored foods with a cold often disappoints. This is because congestion blocks nasal passageways through which air and flavor molecules enter and exit, thus temporarily reducing retronasal smell capacity.\n",
"Some Turks believe that cold foods, such as ice cream, will cause illnesses – such as sore throats and the common cold; it is held that consumption of warm liquid while consuming ice cream will counteract these effects.\n",
"Tasteless food items, also called watery, are cold and wet. Every insipid food item such as lettuce, dairy products such as yoghurt, or doogh (a yogurt-based beverage) and citrus fruits which are not too much sour or sweet are cold and wet.\n",
"Throw out foods that have been warmer than for more than 2 hours. If there is any doubt at all about the length of time the food has been defrosted at room temperature, it should be thrown out. Freezing does not destroy microbes present in food. Freezing at 0 °F does inactivate microbes (bacteria, yeasts and molds). However, once food has been thawed, these microbes can again become active. Microbes in thawed food can multiply to levels that can lead to foodborne illness. Thawed food should be handled according to the same guidelines as perishable fresh food.\n",
"Food with freezer burn, though dried and wrinkled, is safe to eat. However, food afflicted with freezer burn may have an unpleasant flavour. In most cases, it is sufficient to remove the parts affected by freezer burn.\n"
] |
Info on English Lancegays seems to be impossible to find; are there any sources that I can be pointed towards as to how these were used? Any treatises or manuals as to how they'd be used in combat? | > [...] there is no identified archaeological evidence: nothing that can show us exactly how long or heavy a lancegay was, what the diameter of the shaft was, what shape of head it had, or whether it had heads at both ends, as is sometimes claimed. There is also no detailed description of a lancegay in any known written source; and there is no instance in any of the visual media that can indubitably be identified as a representation of the weapon. As is so often the case with weapons terms in the Middle Ages, writers of literature and other documentary sources assume that one knows what technical words denote, making it unnecessary for them to supply explanations or descriptions. As a result, everything we believe we know about the lancegay has to be *deduced* from what is said about it.
-David Scott-Macnab, [Sir John Fastolf and the Diverse Affinities of the Medieval Lancegay](_URL_0_)
There is no surviving Fechtbuch or manual with a section on the lancegay, although there are surviving treatises (or sections thereof) on spears and lances, such as [Fiori de'i Liberi](_URL_1_), and the techniques involved would presumably have been similar. | [
"A number of manuscripts covering longsword combat and techniques dating from the 13th–16th centuries exist in German, Italian, and English, providing extensive information on longsword combatives as used throughout this period. Many of these are now readily available online.\n",
"\"Conquests of the Longbow\" is based on tremendous historical and cultural research, for detail within the story's setting and puzzles. The game manual lists twenty-eight volumes in the bibliography, including \"Robin Hood\" by J.C. Holt, \"The Outlaws of Medieval Legend\" by Maurice Keen, and \"The White Goddess\" by Robert Graves. The manual includes essays by Marx outlining the history of the legend and the approximate dates at which different characters were incorporated into the \"Robin Hood\" legend, such as Friar Tuck and Marian in the 15th century. Guy of Gisbourne is mentioned but absent from the game. Other essays cover the tree lore, early British history, and video game piracy. \n",
"The lance fournie (French: \"equipped lance\") was a medieval equivalent to the modern army squad that would have accompanied and supported a man-at-arms (a heavily armoured horseman popularly known as a \"knight\") in battle. These units formed companies under a captain either as mercenary bands or in the retinue of wealthy nobles and royalty. Each lance was supposed to include a mixture of troop types (the men-at-arms themselves, lighter cavalry, infantry, and even noncombatant pages) that would have guaranteed a desirable balance between the various components of the company at large; however, it is often difficult to determine the exact composition of the lance in any given company as the available sources are few and often centuries apart.\n",
"What is known of combat with the longsword comes from artistic depictions of battle from manuscripts and the Fechtbücher of Medieval and Renaissance Masters. Therein the basics of combat were described and, in some cases, depicted. The German school of swordsmanship includes the earliest known longsword Fechtbuch, a manual from approximately 1389, known as GNM 3227a. This manual, unfortunately for modern scholars, was written in obscure verse. It was through students of Liechtenauer, like Sigmund Ringeck, who transcribed the work into more understandable prose that the system became notably more codified and understandable. Others provided similar work, some with a wide array of images to accompany the text.\n",
"BULLET::::- Caferro, William (2013). “Edward Despenser, The Green Knight and the Lance Formation: Englishmen in Florentine Military Service” in The Hundred Years War, part III, edited by L. J. Andrew Villalon and Donald Kagay (Leiden: Brill): 85-104.\n",
"Cornwell, a master of action-packed historical fiction, returns with the fourth book in his Grail Quest series (after Heretic), a vivid, exciting portrayal of medieval warfare as the English and French butcher each other at the Battle of Poitiers in 1356 during the Hundred Years War. Nobody writes battle scenes like Cornwell, accurately conveying the utter savagery of close combat with sword, ax, and mace, and the gruesome aftermath. English archer Sir Thomas of Hookton, called the Bastard by his enemies, leads a band of ruthless mercenaries in France. When the French hear of the existence of the sword of Saint Peter, “another Excalibur,” they must possess it for its legendary mystical powers, but the English have other ideas. Thomas is ordered by his lord, earl of Northampton, to find the sword first and begins, with his men, a perilous journey of raiding and plundering across southern France, fighting brutal warlords, cunning churchmen, with betrayal everywhere, and French and Scottish knights who vow to kill Thomas for reasons that have nothing to do with the sword. With surprising results, Thomas and his men reach the decisive Battle of Poitiers, a vicious melee that killed thousands, unseated a king, and forced a devastating and short peace on a land ravaged by warfare. Agent: Toby Eady Associates, U.K.. (Jan.) \n",
"The most obvious comparison is the scarce extent of surviving manuscripts. While there are many Italian and comparatively numerous German manuscripts, there are only three English Longsword treatises. Additionally, the English sources are without illustration, so they are text only. This makes them more difficult to interpret. The last challenging factor is that they have largely not been scanned. Despite this, there are some dedicated HEMA Historical European Martial Arts practitioners, in the United Kingdom, and in Australia (largely associated with the Stoccata School of defence) dedicated to the study of the English longsword form.\n"
] |
why does blood turn brown after it dries? | When the liquid from the blood has seeped into the bandaid, the only things left outside are the dead red blood cells. That is what you are seeing.
Also: the brown colour of poo is caused by the dead red blood cells filtered out by your liver. The contents of your guts before this is added are grey. | [
"Freshly dried bloodstains are a glossy reddish-brown in color. Under the influence of sunlight, the weather or removal attempts, the color eventually disappears and the stain turns gray. The surface on which it is found may also influence the stain's color.\n",
"The color of red blood cells is due to the heme group of hemoglobin. The blood plasma alone is straw-colored, but the red blood cells change color depending on the state of the hemoglobin: when combined with oxygen the resulting oxyhemoglobin is scarlet, and when oxygen has been released the resulting deoxyhemoglobin is of a dark red burgundy color. However, blood can appear bluish when seen through the vessel wall and skin. Pulse oximetry takes advantage of the hemoglobin color change to directly measure the arterial blood oxygen saturation using colorimetric techniques. Hemoglobin also has a very high affinity for carbon monoxide, forming carboxyhemoglobin which is a very bright red in color. Flushed, confused patients with a saturation reading of 100% on pulse oximetry are sometimes found to be suffering from carbon monoxide poisoning.\n",
"The color blood red is a dark shade of the color red meant to resemble the color of human blood (which is composed of oxygenated red erythrocytes, white leukocytes, and yellow blood plasma) by cinnabar, a quick silver thermometer analogue display. It is the iron in hemoglobin specifically that gives blood its red color. The actual color ranges from crimson to a dark brown-blood depending on how oxygenated the blood is, and may have a slightly orange hue. Deoxygenated blood, which circulates closer to the body's surface and which is therefore generally more likely to be seen than oxygenated blood, issues from bodily veins in a dark red state, but quickly oxygenates upon exposure to air, turning a brighter shade of red. This happens more quickly with smaller volumes of blood such as a pinprick and less quickly from cuts or punctures that cause greater blood flows such as a puncture in the basilic vein: all blood collected during a phlebotomy procedure is deoxygenated blood, and it does not usually have a chance to become oxygenated upon leaving the body. Arterial blood, which is already oxygenated, is also already a brighter shade of red— this is the blood see from a pulsating neck, arm, or leg wound, and it does not change color upon exposure to air. The color \"blood red\", therefore, covers both these states: the darker deoxygenated color and the brighter oxygenated one. Also, dried blood often has a darker, rust-colored quality: all dried blood has been oxygenated and then desiccated, causing the cells within it to die. This blood is often darker than either shade of red that can be seen in fresh blood. \n",
"The color of human blood ranges from bright red when oxygenated to a darker red when deoxygenated. It owes its color to hemoglobin, to which oxygen binds. Deoxygenated blood is darker due to the difference in shape of the red blood cell when hemoglobin binds to it (oxygenated) verses does not bind to it (deoxygenated). Human blood is never blue!\n",
"The red colour compound betanin is not broken down in the body, and in higher concentrations may temporarily cause urine or stools to assume a reddish colour, in the case of urine a condition called beeturia. Although harmless, this effect may cause initial concern due to the visual similarity to what appears to be blood in the stool, hematochezia (blood passing through the anus, usually in or with stool) or hematuria (blood in the urine).\n",
"Brown induration is fibrosis and hemosiderin pigmentation of the lungs due to long standing pulmonary congestion (chronic passive congestion).Occurs with mitral stenosis and left sided heart failure . Pathology .. The lung vessels are congested with blood and this leads to pulmonary edema when plasma escapes in alveolar spaces . Rupture of congested capillaries leads to release of hemosiderin from damaged Red Blood cells . When alveolar macrophages engulf hemosiderin they are called heart failure cells . Death of heart failure cells in their journey back to lung tissue with subsequent hemosiderin release leads to lung fibrosis .\n",
"The blood's red color is due to the spectral properties of the hemic iron ions in hemoglobin. Each human red blood cell contains approximately 270 million of these hemoglobin molecules. Each hemoglobin molecule carries four heme groups; hemoglobin constitutes about a third of the total cell volume. Hemoglobin is responsible for the transport of more than 98% of the oxygen in the body (the remaining oxygen is carried dissolved in the blood plasma). The red blood cells of an average adult human male store collectively about 2.5 grams of iron, representing about 65% of the total iron contained in the body.\n"
] |
What would Gettysburg battlefield have looked like in the 1950s? (story behind question inside) | The Gettysburg National Military Park was established and gained its initial protection status in 1863, about a hundred years before your father visited. The park received federal protection in 1893, was designated a National Park in 1895, and was added to the National Register of Historic Places in 1966. Keeping in mind that the park had been established and protected in some fashion or another for at least 90 years before your father visited, it's extremely unlikely that he found a cannon ball just resting on the surface somewhere and absconded with it.
There were a variety of different field guns deployed by both sides during the battle, and the size and composition of the ammunition used were likewise diverse. Shot and bolts used in the battle would have been composed of solid iron or bronze, and would have weathered pretty poorly in the open.
I suppose it's possible he could have lifted a ball from a stacked display, but I'd count it as unlikely that a school-aged boy could effectively remove and conceal a cannon ball successfully. They can be quite heavy.
On the off chance that your father did remove a cannon ball from the field, or from a display, it would have been a punishable violation of the National Historic Preservation Act after 1966, the Historic Sites Act after 1935, and the Antiquities Act after 1906. More than likely it would have been a violation of any number of local laws as well. | [
"A copy of the Gettysburg Cyclorama was displayed in an 1894 tent at The Angle, and during reunions in 1887, 1913 (50th battle anniversary), and 1938 (75th); battle veterans shook hands over the rock wall at The Angle. The nearby field along the Emmitsburg Road was also the site of Gettysburg Battlefield camps after the American Civil War such as Eisenhower's 1918 Camp Colt, the 1938 Army Camp with the Secretary of War's quarters, and a World War II POW stockade.\n",
"The Gettysburg Battlefield is the area of the July 1–3, 1863, military engagements of the Battle of Gettysburg within and around the borough of Gettysburg, Pennsylvania. Locations of military engagements extend from the site of the first shot at Knoxlyn Ridge on the west of the borough, to East Cavalry Field on the east. A military engagement prior to the battle was conducted at the Gettysburg Railroad trestle over Rock Creek, which was burned on June 27.\n",
"The Battle of Gettysburg is a 1913 American silent drama film directed by Charles Giblyn and Thomas H. Ince. \"The Battle of Gettysburg\" is based on the American Civil War battle of the same name. The film is now considered to be lost, although some battlefield footage was used by Mack Sennett in his comedy \"Cohen Saves the Flag\", which was shot on location alongside this production. However, there are claims that \"The Battle of Gettysburg\" was screened in France in 1973.\n",
"Gettysburg is a 1993 American epic war film about the Battle of Gettysburg in the American Civil War. Written and directed by Ronald F. Maxwell, the film was adapted from the historical novel \"The Killer Angels\" by Michael Shaara. It features an ensemble cast, including Tom Berenger as James Longstreet, Jeff Daniels as Joshua Chamberlain, Martin Sheen as Robert E. Lee, Stephen Lang as George Pickett, and Sam Elliot as John Buford.\n",
"Gettysburg is a 2011 American Civil War television documentary film directed by Adrian Moat that was first aired on May 30, 2011 (Memorial Day) on History. This two-hour documentary film, narrated by actor Sam Rockwell, commenced a week of programming by the History channel honoring and commemorating the 150th Anniversary of the American Civil War. \"Gettysburg\" showcases the horror of the pivotal 1863 Battle of Gettysburg by following the stories of eight men as they put their lives on the line to fight for what they believed in.\n",
"The monuments of the Gettysburg Battlefield commemorate the July 1 to 3, 1863 Battle of Gettysburg in the American Civil War. Most are located within Gettysburg National Military Park; others are on private land at battle sites in and around Gettysburg, Pennsylvania. Together, they represent \"one of the largest collections of outdoor sculpture in the world.\"\n",
"The Gettysburg battlefield, dedicated by President Lincoln who presented his iconic Gettysburg Address there in November 1863, contains hundreds of memorials to the regiments that fought there. Army veterans created the Gettysburg Battlefield Memorial Association in 1864, making it one of the earliest historic preservation organizations in the U.S. The battlefield is under the control of the national park Service and is a major tourist destination.\n"
] |
how does someone bet that a country's credit rating will fall and make money? | It's like betting on any futures.
The better the credit rating, the more a piece of paper that says someone owes you money is worth. If you expect the credit rating to go up, you would buy more; if you expect it to go down, you would sell what you have.
The complicated part came when someone figured out they could sell more than they have. If they are really sure the value of a bond will drop, they can essentially make a deal to owe someone that bond in return for money; then, when it does drop, they can buy it at the lower price to hand over to the other party and come away with a profit. Of course, if the value goes up instead of down, they're in trouble. (A small numeric sketch of such a short sale follows this entry's sources.) | [
"Defenders of credit rating agencies complain of the market's lack of appreciation. Argues Robert Clow, \"When a company or sovereign nation pays its debt on time, the market barely takes momentary notice ... but let a country or corporation unexpectedly miss a payment or threaten default, and bondholders, lawyers and even regulators are quick to rush the field to protest the credit analyst's lapse.\" Others say that bonds assigned a low credit rating by rating agencies have been shown to default more frequently than bonds that receive a high credit rating, suggesting that ratings still serve as a useful indicator of credit risk.\n",
"A credit rating is issued by a credit rating agency (CRA). A credit rating assigned to U.S. sovereign debt is an expression of how likely the assigning CRA thinks it is that the U.S. will pay back its debts. A credit rating assigned to U.S. sovereign debt also influences the interest rates the U.S. will have to pay on its debt; if its debtholders know the debt will be paid back, they do not have to price the chance of default into the interest rate. However, it should be noted that these ratings sometimes measure different things; for instance Moody's considers the expected value of the debt in the event of a default in addition to the probability of default. Some lenders also have contractual requirements only to hold debt above a certain credit rating.\n",
"The information in a credit report is sold by credit agencies to organizations that are considering whether to offer credit to individuals or companies. It is also available to other entities with a \"permissible purpose\", as defined by the Fair Credit Reporting Act. The consequence of a negative credit rating is typically a reduction in the likelihood that a lender will approve an application for credit under favorable terms, if at all. Interest rates on loans are significantly affected by credit history; the higher the credit rating, the lower the interest, while the lower the credit rating, the higher the interest. The increased interest is used to offset the higher rate of default within the low credit rating group of individuals.\n",
"A credit rating is an evaluation of the credit risk of a prospective debtor (an individual, a business, company or a government), predicting their ability to pay back the debt, and an implicit forecast of the likelihood of the debtor defaulting.\n",
"Several credit rating agencies around the world have downgraded their credit ratings of the U.S. federal government, including Standard & Poor's (S&P) which reduced the country's rating from AAA (outstanding) to AA+ (excellent) on August 5, 2011.\n",
"Major credit rating agencies give out the sovereign credit rating of each nation as an absolute grade – see list of countries by credit rating. A particular nation's rating score is independent of the performance of other nation. But in the comparative rating index of sovereigns (CRIS) introduced by India, performance of one nation is compared with all other nations. Perhaps it was the first sovereign rating index by any country in the world. This solves the limitations of the existing credit rating system. An example of comparative rating is the percentile score—the way GATE results are at times given. If a student is described as belonging to the 99th percentile, it clearly says something about this student’s performance vis-à-vis other students.\n",
"A 2010 International Monetary Fund study concluded that ratings were a reasonably good indicator of sovereign-default risk. However, credit rating agencies were criticized for failing to predict the 1997 Asian financial crisis and for downgrading countries in the midst of that turmoil. Similar criticisms emerged after recent credit downgrades to Greece, Ireland, Portugal, and Spain, although credit ratings agencies had begun to downgrade peripheral Eurozone countries well before the Eurozone crisis began.\n"
] |
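The answer above walks through a bond short sale in words; here is a minimal, hypothetical numeric sketch of the same idea. The function name, prices, and borrow fee are all made-up illustration values, not anything from the original answer or any real market data:

```python
# Minimal sketch of shorting a bond: sell a borrowed bond now, buy it back
# later to return it, and keep (or eat) the price difference.

def short_sale_pnl(sell_price, buyback_price, borrow_fee=0.0):
    """Profit (negative = loss) from selling a borrowed bond and later
    repurchasing it to hand back to the lender."""
    return sell_price - buyback_price - borrow_fee

# Credit rating falls and the bond's price drops: the short seller profits.
print(short_sale_pnl(sell_price=100.0, buyback_price=80.0, borrow_fee=1.0))   # 19.0

# Rating improves instead and the price rises: the short seller takes a loss.
print(short_sale_pnl(sell_price=100.0, buyback_price=115.0, borrow_fee=1.0))  # -16.0
```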
why is no-mouth cpr what everyone is told to do now? | The breaths aren't worth doing. Keeping the heart pumping (doing the compressions) is far more important. This is because while people will lose consciousness from carbon dioxide build-up in a couple of minutes, the average person actually has enough oxygen in their blood to stay alive for a while (nearly 20 minutes) if the heart is pumping. There's really a caveat that after about ten minutes you need to start doing rescue breaths if they're not breathing. But the idea is that in most "man on the street" rescue efforts, professionals show up and take over before the person would actually die of lack of oxygen. So for an amateur trying to operate in a high-stress situation they probably aren't super practiced in, keep it as simple as possible to have the best effect. Add to that, rescue breaths have less oxygen in them anyway, since your lungs have already absorbed some of it when you inhaled.
"Tube weaning is contraindicated in children who do not have a safe swallowing response. It is not recommended if there is a high possibility of an upcoming surgery or intervention that will require further usage of a feeding tube.\n",
"Mouth-to-mouth resuscitation, a form of artificial ventilation, is the act of assisting or stimulating respiration in which a rescuer presses his or her mouth against that of the victim and blows air into the person's lungs. Artificial respiration takes many forms, but generally entails providing air for a person who is not breathing or is not making sufficient respiratory effort on his/her own. It is used on a patient with a beating heart or as part of cardiopulmonary resuscitation (CPR) to achieve the internal respiration.\n",
"Tube weaning program is specifically designed for premature infants and children who are fed via a nasogastric, nasojejunal, gastrostomy or jejunostomy tube. The treatment is performed either when the feeding tube is no longer needed or if children experience side effects and poor response to enteral feeding. The program is suitable both for primary weaning and children that had been unsuccessfully weaned in the past. It is especially recommended for children that are struggling with oral feeding or have developed tube dependency.\n",
"Augmentation pharyngoplasty is a kind of plastic surgery for the pharynx (soft tissue at the back of the mouth) when the tissue at the back of the mouth is not able to close properly. It is typically used to correct speech problems in children with cleft palate. It may also be used to correct problems from a tonsillectomy or because of degenerative diseases. After the surgery, patients have an easier time pronouncing certain sounds, such as 'p' and 't', and the voice may have a less nasal sound.\n",
"Mouth-to-mouth resuscitation is also part of cardiopulmonary resuscitation (CPR) making it an essential skill for first aid. In some situations, mouth to mouth is also performed separately, for instance in near-drowning and opiate overdoses. The performance of mouth to mouth in its own is now limited in most protocols to health professionals, whereas lay first aiders are advised to undertake full CPR in any case where the patient is not breathing sufficiently.\n",
"Mouth-to-mouth resuscitation is a part of most protocols for performing cardiopulmonary resuscitation (CPR) making it an essential skill for first aid. In some situations, mouth-to-mouth resuscitation is also performed separately, for instance in near-drowning and opiate overdoses. The performance of mouth-to-mouth resuscitation on its own is now limited in most protocols to health professionals, whereas lay first aiders are advised to undertake full CPR in any case where the patient is not breathing sufficiently.\n",
"For a long time before it was formalised, it had been known by doctors and midwives that mouth to mouth resuscitation could be useful in bringing a lifeless newborn around. In 1946, during the middle of a polio outbreak, an anesthesiologist, James Elam, applied this principle to an older child in an emergency situation. Elam described the event in his own words as \"I was browsing around to get acquainted with the ward when along the corridor came a gurney racing – a nurse pulling it and two orderlies pushing it, and the kid on it was blue. I went into total reflex behaviour. I stepped out in the middle of the corridor, stopped the gurney, grabbed the sheet, wiped the copious mucous off his mouth and face, … sealed my lips around his nose and inflated his lungs. In four breaths he was pink.\"\n"
] |
why don't newly pressed vinyl records use the whole side? | I can tell you that some recording lathes (the machine that cuts the master disc that the record stampers are made from) are adjustable...to make the tracks closer together. This is done to increase the amount of music on the side. However, some lathes are better than others, and some engineers (who run the lathes) tend to be conservative. The more bass-heavy (low notes) the music is, the more room it takes up (really!)
I would like to say that in your case, they wanted to cut the best possible record, and so gave themselves lots of room when cutting the master. | [
"The composition of vinyl used to press records (a blend of polyvinyl chloride and polyvinyl acetate) has varied considerably over the years. Virgin vinyl is preferred, but during the 1970s energy crisis, it became commonplace to use recycled vinyl. Sound quality suffered, with increased ticks, pops, and other surface noises. Other experiments included reducing the thickness of LPs, leading to warping and increased susceptibility to damage. Using a biscuit of 130 grams of vinyl had been the standard. Compare these to the original Columbia 12-inch LPs (ML 4001) at around 220 grams each. Besides the standard black vinyl, specialty records are also pressed on different colors of PVC/A or picture discs with a card picture sandwiched between two clear sides. Records in different novelty shapes have also been produced.\n",
"In the 1950s and 1960s, it was common for record labels to press relatively heavy records on new or \"virgin\" vinyl. During the economic downturn of the 1970s, the cost of record pressing increased, and many record labels cut costs by pressing lightweight recordings from recycled materials, which were impure. Recycled vinyl pressings have more pops, clicks, and surface noise.\n",
"Most vinyl decals are not reusable, although some reusable vinyl types are available. They use a different adhesive on the rear which means that they can be re-positioned a couple of times before the adhesive wears out. Vinyl stickers at a large size can be very difficult to apply as they can tear, stretch and stick back on themselves. Traditional decals are made from pvc plastic and cut from a single colour using a vinyl cutter or laser cutter. It is possible to print a full colour image onto vinyl and then contour cut around it. Block cut vinyls come in many different finishes from glitter, to metallic, to mirror effect. They can also be supplied as blackboard or whiteboard finish and cut to shape to create a wall decal.\n",
"Records themselves became an art form because of the large surface onto which graphics and books could be printed, and records could be molded into unusual shapes, colors, or with images (picture discs). The turntable remained a common element of home audio systems well after the introduction of other media, such as audio tape and even the early years of the compact disc as a lower-priced music format. However, even though the cost of producing CDs fell below that of records, CDs remained a higher-priced music format than either cassettes or records. Thus, records were not uncommon in home audio systems into the early 1990s.\n",
"With vinyl records, there will be some loss in fidelity on each playing of the disc. This is due to the wear of the stylus in contact with the record surface. Magnetic tapes, both analog and digital, wear from friction between the tape and the heads, guides, and other parts of the tape transport as the tape slides over them. The brown residue deposited on swabs during cleaning of a tape machine's tape path is actually particles of magnetic coating shed from tapes. Sticky-shed syndrome is a prevalent problem with older tapes. Tapes can also suffer creasing, stretching, and frilling of the edges of the plastic tape base, particularly from low-quality or out-of-alignment tape decks.\n",
"Since most vinyl records contain up to 30% recycled vinyl, impurities can accumulate in the record and cause even a brand-new record to have audio artifacts such as clicks and pops. Virgin vinyl means that the album is not from recycled plastic, and will theoretically be devoid of these impurities. In practice, this depends on the manufacturer's quality control.\n",
"Vinyl records can be warped by heat, improper storage, exposure to sunlight, or manufacturing defects such as excessively tight plastic shrinkwrap on the album cover. A small degree of warp was common, and allowing for it was part of the art of turntable and tonearm design. \"Wow\" (once-per-revolution pitch variation) could result from warp, or from a spindle hole that was not precisely centered. Standard practice for LPs was to place the LP in a paper or plastic inner cover. This, if placed within the outer cardboard cover so that the opening was entirely within the outer cover, was said to reduce ingress of dust onto the record surface. Singles, with rare exceptions, had simple paper covers with no inner cover.\n"
] |
Does the US meet traditional definitions of an empire? | It depends on what qualities defines an Empire, really. This is quite a hard question to answer for sure.
The element that is most iffy is the central idea that an Empire not only has the original culture subjugate others, but then attempts to create a state that actively manages all of those cultures at once. In the case of the USA, it never began as a single-culture enterprise at the time in which it was integrating other cultures, it seems to me. Almost like the Imperial part was done in reverse order. | [
"The term \"American Empire\" refers to the United States' cultural ideologies and foreign policy strategies. The term is most commonly used to describe the U.S.'s status since the 20th century, but it can also be applied to the United States' world standing before the rise of nationalism in the 20th century. The United States is not traditionally recognized as an empire, in part because the U.S. adopted a different political system from those that previous empires had used. Despite these systematic differences, the political objectives and strategies of the United States government have been quite similar to those of previous empires. Due to this similarity some scholars confess: \"When it walks like a duck, talks like a duck, it's a duck.\" Academic, Krishna Kumar, argues the distinct principles of nationalism and imperialism may result in common practice; that is, the pursuit of nationalism can often coincide with the pursuit of imperialism in terms of strategy and decision making. Throughout the 19th century, the United States government attempted to expand its territory by any means necessary. Regardless of the supposed motivation for this constant expansion, all of these land acquisitions were carried out by imperialistic means. This was done by financial means in some cases, and by military force in others. Most notably, the Louisiana Purchase (1803), the Texas Annexation (1845), and the Mexican Cession (1848) highlight the imperialistic goals of the United States during this “modern period” of imperialism. The U.S. government has stopped pursuing additional territories since the mid 20th century. However, some scholars still consider U.S. foreign policy strategies to be imperialistic. This idea is explored in the \"contemporary usage\" section.\n",
"Many – perhaps most-- scholars have decided that that the United States lacks the key essentials of an empire. For example while there are American military bases all over, the American soldiers do not rule over the local people, and the United States government does not send out governors or permanent settlers like all the historic empires did. Harvard historian Charles S. Maier has examined the America-as-Empire issue at length. He says the traditional understanding of the word \"empire\" does not apply because the United States does not exert formal control over other nations nor engage in systematic conquest. The best term is that the United States is a \"hegemon.\" Its enormous influence through high technology, economic power, and impact on popular culture gives it an international outreach that stands in sharp contrast to the inward direction of historic empires. \n",
"Many – perhaps most-- scholars have decided that that the United States lacks the key essentials of an empire. For example while there are American military bases all over, the American soldiers do not rule over the local people, and the United States government does not send out governors or permanent settlers like all the historic empires did. Harvard historian Charles S. Maier has examined the America-as-Empire issue at length. He says the traditional understanding of the word \"empire\" does not apply because the United States does not exert formal control over other nations nor engage in systematic conquest. The best term is that the United States is a \"hegemon.\" Its enormous influence through high technology, economic power, and impact on popular culture gives it an international outreach that stands in sharp contrast to the inward direction of historic empires. \n",
"Analogously, the name \"empire\" is also used to refer to non-European entities, such as the Chinese Empire and the Japanese Empire, or give the title of emperor to those like the Negus of Ethiopia, the Shah of Persia, and the Sultan of Morocco. In most cases, this is a \"diplomatic courtesy.\" Since the Cold War, it has also been common to refer to the two rival superpowers as the American Empire and the Soviet Empire.\n",
"The name of \"empire\" has been applied to types of political entities that have not had a universal function (theocratic or Caesaropapist), but to those with a global, secularized one. This has been possible in geostrategic terms for the first time since the coming about of a global economy. Although the first empires to form (the Portuguese Empire and Spanish Empire in the 16th century) in their day did not refer to themselves as empires, (the Spanish self defined, in providentialist terms, as the Catholic Monarchy), the name typically has been applied by historiography (which applies \"empire\" to any political form of the past with multinational dimensions: Turk Empire, Mongol Empire, Inca Empire).\n",
"Sometimes, an empire is a semantic construction, such as when a ruler assumes the title of \"emperor\". That ruler's nation logically becomes an \"empire\", despite having no additional territory or hegemony. Examples of this form of empire are the Central African Empire, or the Korean Empire proclaimed in 1897 when Korea, far from gaining new territory, was on the verge of being annexed by the Empire of Japan, the last to use the name officially. Among the last of the empires in the 20th century were the Central African Empire, Ethiopia, Vietnam, Manchukuo, Germany, and Korea.\n",
"\"Empire\" elaborates a variety of ideas surrounding constitutions, global war, and class. Hence, the Empire is constituted by a monarchy (the United States and the G8, and international organizations such as NATO, the International Monetary Fund or the World Trade Organization), an oligarchy (the multinational corporations and other nation-states) and a democracy (the various non-government organizations and the United Nations). Part of the book's analysis deals with \"imagin[ing] resistance\", but \"the point of Empire is that it, too, is \"total\" and that resistance to it can only take the form of negation - \"the will to be against\". The Empire is total, but economic inequality persists, and as all identities are wiped out and replaced with a universal one, the identity of the poor persists.\n"
] |
Why was there such a regression in technology from the time of the ancient Greeks and Romans to the Middle Ages? | Hiya, not discouraging any new answers coming in, but I think you might find the [part of our FAQ about the so-called "Dark Ages"](_URL_0_) helpful and interesting. Scroll down a tiny bit when you open the link :)
"During the growth of the ancient civilizations, ancient technology was the result from advances in engineering in ancient times. These advances in the history of technology stimulated societies to adopt new ways of living and governance.\n",
"In summary, Rome contributed numerous advances in technology to the Ancient World. However, it is also viewed that \"the ancient world under the domination of Rome [in fact] reached a kind of climax in the technological field [as] many technologies had advanced as far as possible with the equipment then available\". This concept of perfecting the unperfected was a theme that governed Roman technological supremacy throughout its 1,470 year reign. Ideas that had already been invented or designed: like the pontoon bridge, aqueduct, and military surgery, were constructed or utilized to perfection by Roman innovators. It's the innovation of technology that contributed to Rome's military success.\n",
"After the absorption of the ancient Greek city states into the Roman Republic in 146 BC, the highly advanced Greek technology began to spread across many areas of Roman influence and supplement the Empire. This included the military advances that the Greeks had made, as well as all the scientific, mathematical, political and artistic developments.\n",
"After the absorption of the Ancient Greek city-states into the Roman Republic in 146 BC, the highly advanced Greek technology began to spread across many areas of Roman influence. This included the great military machine advances the Greeks had made (most notably by Dionysus of Syracuse), as well as all the scientific, mathematical, political and artistic developments.\n",
"In the history of technology and ancient science during the growth of the ancient civilizations, ancient technological advances were produced in engineering. These advances stimulated other societies to adopt new ways of living and governance. Sometimes, technological development was sponsored by the state.\n",
"Ancient Rome boasted impressive technological feats, using many advancements that were lost in the Middle Ages and not rivaled again until the 19th and 20th centuries. An example of this is insulated glazing, which was not invented again until the 1930s. Many practical Roman innovations were adopted from earlier Greek designs. Advancements were often divided and based on craft. Artisans guarded technologies as trade secrets.\n",
"The Roman Empire was one of the most technologically advanced civilizations of antiquity, with some of the more advanced concepts and inventions forgotten during the turbulent eras of Late Antiquity and the early Middle Ages. Gradually, some of the technological feats of the Romans were rediscovered and/or improved upon during the Middle Ages and the beginning of the Modern Era; with some in areas such as civil engineering, construction materials, transport technology, and certain inventions such as the mechanical reaper, not improved upon until the 19th century. The Romans achieved high levels of technology in large part because they borrowed technologies from the Greeks, Etruscans, Celts, and others.\n"
] |
what is it about apples that makes us have so much variety compared to other fruits? | In many cases there are in fact many different kinds of a fruit or vegetable. We just either don't sell them, or they got bred out in favor of what was considered better looking or tasting. For example, there are multiple types of bananas, oranges (and berries in general, actually), etc. There's also variety in vegetables, with various kinds of greens, peas, corn, carrots, etc.
Carrots are a prime example of breeding out different types. The orange one you see today was created mostly by cross-breeding, not naturally.
As for fruits that only have one type, that's mostly because they only grow in a select few places under fairly strict weather requirements, leaving little room for mutations. | [
"Apples are fruits commonly studied by researchers due to their high phenolic content, which make them highly susceptible to enzymatic browning. In accordance with other findings regarding apples and browning activity, a correlation has been found between high phenolic amount and enzymatic activity of apples. This provides a hope for food industries in an effort to genetically modify foods to decrease polyphenol oxidase activity and thus decrease browning. An example of such accomplishments in food engineering is in the production of Arctic Apples. These apples, engineered by \"Okanagan Specialty Fruits Inc,\" are a result of gene splicing, a technique that has allowed for the reduction in polyphenol oxidase.\n",
"BULLET::::- Genetic consistency: Apples are notorious for their genetic variability, even differing in multiple characteristics, such as, size, color, and flavor, of fruits located on the same tree. In the commercial farming industry, consistency is maintained by grafting a scion with desired fruit traits onto a hardy stock.\n",
"Apples are not native to North America, but today the North American continent boasts the greatest diversity of apples in the world. Part of this is due to \"Johnny Appleseed,\" real name John Chapman. Chapman spent 48 years travelling all along the American northwest spreading apple seeds and planting trees. While apples come in literally thousands of varieties, the majority of the apple market is based on three: Red Delicious, Golden Delicious, and Granny Smith.\n",
"At least two tongue-in-cheek scientific studies have been conducted on the subject, each of which concluded that apples can be compared with oranges fairly easily and on a low budget and the two fruits are quite similar.\n",
"Many apples grow readily from seeds. However, more than with most perennial fruits, apples must be propagated asexually by grafting to obtain the sweetness and other desirable characteristics of the parent. This is because seedling apples are an example of \"extreme heterozygotes\", in that rather than inheriting genes from their parents to create a new apple with parental characteristics, they are instead significantly different from their parents, perhaps to compete with the many pests. Triploid cultivars have an additional reproductive barrier in that 3 sets of chromosomes cannot be divided evenly during meiosis, yielding unequal segregation of the chromosomes (aneuploids). Even in the case when a triploid plant can produce a seed (apples are an example), it occurs infrequently, and seedlings rarely survive.\n",
"Apples are a rich source of various phytochemicals including flavonoids (e.g., catechins, flavanols, and quercetin) and other phenolic compounds (e.g., epicatechin and procyanidins) found in the skin, core, and pulp of the apple; they have unknown health value in humans.\n",
"Commercially popular apple cultivars are soft but crisp. Other desirable qualities in modern commercial apple breeding are a colorful skin, absence of russeting, ease of shipping, lengthy storage ability, high yields, disease resistance, common apple shape, and developed flavor. Modern apples are generally sweeter than older cultivars, as popular tastes in apples have varied over time. Most North Americans and Europeans favor sweet, subacid apples, but tart apples have a strong minority following. Extremely sweet apples with barely any acid flavor are popular in Asia, especially the Indian Subcontinent .\n"
] |
In medieval times, was it common for average citizens to go about their daily lives with weapons? | The answer to this somewhat depends on your meaning of 'armed'. For example, early medieval English fashion called for the carrying of a *Seax*, a long, single-bladed knife that was worn horizontally hanging from a belt on the waist, roughly the size of a large kitchen knife. The Saxons derive their demonym from this, but it was carried by Angles as well. Early medieval English culture is essentially one of ostentatious display; from horse tackle, to brooches, to jewellery, wealth and status are displayed in personal ornamentation, and the Anglo-Saxons are famed for their metalwork featuring ornate patterns and inlaid gems. For those who could not afford gold and gems, burnished brass and glass could be used, much like costume jewellery today. Part and parcel of this culture of display, therefore, would have been the regular wearing of a *seax*, both as an indicator of wealth and as a practical tool for everyday life.
While the *seax* was technically a weapon, it was predominantly a hunting or utility weapon; in warfare the main weapon of the Anglo-Saxons would have been the spear or javelin, and these are unlikely to have been carried socially. Swords are a slightly different beast again; although they are weapons of war, the time and skill necessary to make a sword mean that they are largely constrained to the nobility, and as such once again become objects indicative of wealth and status. As such, we might not expect to see a *thegn* or *ealdorman* "casually" wearing a sword, but we might if he was performing a civil action - presiding over a trial, say - as a signifier of his status. | [
"Through the medieval period, soldiers were responsible for supplying themselves, either through foraging, looting, or purchases. Even so, military commanders often provided their troops with food and supplies, but this would be provided in lieu of the soldiers' wages, or soldiers would be expected to pay for it from their wages, either at cost or even with a profit.\n",
"Through the medieval period (the 5th to 15th century in Europe), soldiers were responsible for supplying themselves, either through foraging, looting, or purchases. Even so, military commanders often provided their troops with food and supplies, but this would be provided in lieu of the soldiers' wages, or soldiers would be expected to pay for it from their wages, either at cost or even with a profit.\n",
"For much of the early medieval period kings had few functions except military ones. Kings made war and gave judgements (in consultation with local elders) but they did not govern in any sense of that word. From the sixth to the eleventh centuries the king moved about with an armed, mounted warband, a personal military retinue called a \"teulu\" that is described as a \"small, swift-moving, and close-knit group\". This military elite formed the core of any larger army that might be assembled. The relationships among the king and the members of his warband were personal, and the practice of fosterage strengthened those personal bonds.\n",
"As the Middle Ages came to an end, kings increasingly relied on professional soldiers to fill the bottom ranks of their armies instead of militiamen. Each of these professionals began their careers as a private. The private was a man who signed a private contract with the company commander, offering his services in return for pay. The money was raised through taxation; those yeomen (smallholding peasants) who did not fulfill their annual 40-day militia service paid a tax that funded professional soldiers recruited from the yeomanry. This money was handed to the company commanders from the royal treasury, the company commanders using the money to recruit the troops.\n",
"Particularly for kings, itineration was a vital part of governance, and in many cases kings would rely on the hospitality of their subjects for maintenance while on the road. This could be a costly affair for the localities visited; there was not only the large royal household to cater for, but also the entire royal administration. It was only towards the end of the medieval period, when means of communication improved, that households, both noble and royal, became more permanently attached to one residence.\n",
"As central governments grew in power, a return to the citizen and mercenary armies of the classical period also began, as central levies of the peasantry began to be the central recruiting tool. It was estimated that the best infantrymen came from the younger sons of free land-owning yeomen, such as the English archers and Swiss pikemen. England was one of the most centralized states in the Late Middle Ages, and the armies that fought the Hundred Years' War were mostly paid professionals.\n",
"Many executioners were professional specialists who traveled a circuit or region performing their duty, because executions were rarely very numerous. Within this region, a resident executioner would also administer non-lethal physical punishments, or apply torture. In medieval Europe, to the end of the early modern period, executioners were often knackers, since pay from the rare executions was not enough to live off.\n"
] |
why do dogs care about babies? | Domestic dogs view themselves as part of a human pack, the family. Since babies are part of the pack too, and the pack's alpha members (us) like them, they must be worth keeping (dog logic). | [
"They make fairly good watch dogs. When necessary, this dog will bark to alert its family that someone is nearby. This breed is typically good with other pets, especially when socialized at an early age. This dog gets along well with children, but it may be a good idea to socialize this breed at an early age as well as to supervise play time with children to make sure that the dog does not get hurt as a result of its small size.\n",
"They make great, affectionate family dogs, but do not trust strangers. Many can be very protective of their owners or property, so they sometimes bark when someone is coming. Owners should have patience with these Laikas because they can hold grudges for a long time. Also, they can be aggressive towards unknown dogs that come near their home, but should be friendly with dogs that they live with, or dogs away from their home. From a young age, they see small animals, such as squirrels, as potential game so they will more than likely go after them.\n",
"It is critical that human interaction takes place frequently and calmly from the time the puppies are born, from simple, gentle handling to the mere presence of humans in the vicinity of the puppies, performing everyday tasks and activities. As the puppies grow older, socialization occurs more readily the more frequently they are exposed to other dogs, other people, and other situations. Dogs who are well socialized from birth, with both dogs and other species (especially people), are much less likely to be aggressive or to suffer from fear-biting.\n",
"Humans typically have deep attachments to their dogs because dogs are adept at fulfilling emotionally supportive roles in people's lives which results in high levels of attachment. Dog owners who are single, childless, newly married, empty nesters, divorced, or in a second marriage tend to anthropomorphize their pets more often. Dogs can be emotional substitutes for family members such as children and spouses and they contribute to the moral maintenance of people who live alone.\n",
"Canaan dogs have a strong survival instinct. They are quick to react and wary of strangers, and will alert to any disturbances with prompt barking, thus making them excellent watchdogs. Though defensive, they are not aggressive and are very good with children within the family, but may be wary of other children or defensive when your child is playing with another child. They are intelligent and learn quickly, but may get bored with repetitive exercises or ignore commands if they find something of more interest.\n",
"Dogs have been used in research for decades and have been invaluable for treating many human and canine illnesses. Dogs contract many of the diseases humans do, from heart disease to cancer and they are also exposed to the same environment as humans. Canine research has led to many significant breakthroughs such as hip replacements, development of cancer treatments, and research in stem cells, diabetes, and Alzheimer's disease. Treatments for heartworms, parasites, and vaccinations against parvovirus, rabies, and canine distemper have also come from canine models.\n",
"A study conducted by J.S.J Odendaal in 2003 showed that when humans pet dogs, their bodies release oxytocin, a hormone associated with not only happiness, but bonding and affection as well. According to the social support theory, animals are a source of social support and companionship, which are necessary for well-being. Canines' social impact on humans is especially significant for those who tend to be more isolated, such as children with no siblings or elderly persons. In this view, the animal is part of our community and is an important determinant for psychological well-being.\n"
] |
Is dairy really that crucial to our diet? | As others have said, many, many perfectly healthy people live entirely without dairy (myself included).
> why dairy?
Government food recommendations are based as much on economics as health, if not more so. Cheap, high-yield staples form the basis of the recommended diet for reasons that are not purely about health. Health issues are considered, of course, but certainly not exclusively.
> Throughout my entire life, it's been pounded into my head to drink my milk.
Most of the popular conception about milk being necessary for calcium and so on comes from advertising. Most of that advertising is paid for by the dairy industry, and exists for the same reason that all other advertising does: profit.
edit: Sorry, not very AskSciencey. With nutrients, how absorbable they are is important; calcium merely being in a food doesn't mean it is useful to your body. Many vegetables are a good absorbable source of calcium[1], mostly leafy greens.
[1] _URL_0_ | [
"A 2009 scientific conference reported that despite the contribution of dairy products to the saturated fatty acid intake of the diet, there was no clear evidence that dairy food consumption is consistently associated with a higher risk of cardiovascular disease.\n",
"Dairy products are produced from the milk of mammals, usually but not exclusively cattle. They include milk, yogurt and cheese. Milk and its derivative products are a rich source of dietary calcium and also provide protein, phosphorus, vitamin A, and vitamin D. However, many dairy products are high in saturated fat and cholesterol compared to vegetables, fruits and whole grains, which is why skimmed products are available as an alternative.\n",
"Excessive consumption of dairy products can contribute significant amounts of cholesterol and saturated fat to the diet, which can increase the risk of heart disease, and cause other serious health problems.\n",
"BULLET::::- – dairy products are food produced from the milk of mammals. Dairy products are usually high energy-yielding food products. A production plant for the processing of milk is called a dairy or a dairy factory. Apart from breastfed infants, the human consumption of dairy products is sourced primarily from the milk of cows, yet goats, sheep, yaks, horses, camels, and other mammals are other sources of dairy products consumed by humans.\n",
"Changes in diet may help prevent the development of atherosclerosis. Tentative evidence suggests that a diet containing dairy products has no effect on or decreases the risk of cardiovascular disease.\n",
"Dairy products are one of the primary sources of dietary omega-7 fatty acids. However, the production of omega-7 fatty acids in cows is heavily diet-dependent. Specifically, a reduction in the proportion of herbage consumed by a cow is correlated with a significant decrease in the omega-7 fatty acid content of the cow’s milk. Rumenic and vaccenic acid concentrations declined significantly within one week of removing herbage from the cow’s diet, suggesting that modern dairy farming methods may lead to decreases in beneficial fatty acid content of dairy products.\n",
"Humans may consume dairy milk for a variety of reasons, including tradition, availability and nutritional value (especially minerals like calcium, vitamins such as B, and protein). Dairy milk substitutes may be expected to meet such standards, though there are no legal requirements for them to do so. This may result in additives being put into milk substitutes to compensate for the absence of certain vitamins, minerals and/or proteins. Infant formula, whether based on cow's milk, soy or rice, is usually fortified with iron and other dietary nutrients.\n"
] |
why does Norway have some of the world's highest gas prices despite being the 15th largest oil producer (1.9m bbl/day) | It's mainly because they tax it a lot. This is meant as a disincentive to drive as much, because of the negative effects of traffic and its impact on the local environment (bad air) and the global environment. It is also a major source of tax revenue for the government, which goes towards building infrastructure and other public goods.
It should also be noted that Norwegians have a high average income, making their purchasing power stronger than in most of the world, which makes the prices seem even higher when compared to countries with lower income per capita. (A purely illustrative breakdown of how fuel taxes stack up at the pump follows this row's context.) | [
"In 2011, Norway was the eighth largest crude oil exporter in the world (at 78Mt), and the 9th largest exporter of refined oil (at 86Mt). It was also the world's third largest natural gas exporter (at 99bcm), having significant gas reserves in the North Sea. Norway also possesses some of the world's largest potentially exploitable coal reserves (located under the Norwegian continental shelf) on earth.\n",
"Export revenues from oil and gas have risen to almost 50% of total exports and constitute more than 20% of the GDP. Norway is the fifth-largest oil exporter and third-largest gas exporter in the world, but it is not a member of OPEC. In 1995, the Norwegian government established the sovereign wealth fund (\"Government Pension Fund – Global\"), which would be funded with oil revenues, including taxes, dividends, sales revenues and licensing fees. This was intended to reduce overheating in the economy from oil revenues, minimise uncertainty from volatility in oil price, and provide a cushion to compensate for expenses associated with the ageing of the population.\n",
"Despite Norway maintaining its ranks among the 20 highest EPI countries, achieving a score of 86.9% and rank of 17th out of the 180 analysed in 2016, it is one of the world's largest oil exporter and has the largest sovereign fund of any country. In 2015, Norway produced 53.9 million tonnes of greenhouse gases (GHGs) noted as carbon dioxide emissions - 15.1 million tonnes were attributed to oil and gas extraction - accounting for the largest proportion of emissions than the other sources, e.g. energy supply, agriculture, road traffic. The total emissions of GHGs increased by 600,000 tonnes since 2014, with emissions from oil and gas extraction increasing by 83.3% since 1990. In more detail, a 25% increase CO emissions, 10% decrease in methane, 38% decrease in nitrous oxide; 44.7 million tonnes (Mt) was CO2, 5.5 Mt of CH4, 2.6 Mt of N20 (Figure 1).\n",
"The country maintains a combination of market economy and a Nordic welfare model with universal health care and a comprehensive social security system. Norway has extensive reserves of petroleum, natural gas, minerals, lumber, seafood, fresh water and hydropower. The petroleum industry accounts for around a quarter of the country's gross domestic product (GDP). On a per-capita basis, Norway is the world's largest producer of oil and natural gas outside the Middle East.\n",
"The country maintains a combination of market economy and a Nordic welfare model with universal health care and a comprehensive social security system. Norway has extensive reserves of petroleum, natural gas, minerals, lumber, seafood, fresh water and hydropower. The petroleum industry accounts for around a quarter of the country's gross domestic product (GDP). On a per-capita basis, Norway is the world's largest producer of oil and natural gas outside the Middle East.\n",
"Since World War II, Norway has experienced rapid economic growth, and is now amongst the wealthiest countries in the world. Norway is the world's third largest oil exporter after Russia and Saudi Arabia and the petroleum industry accounts for around a quarter of GDP. It has also rich resources of gas fields, hydropower, fish, forests, and minerals. Norway was the second largest exporter of seafood (in value, after China) in 2006. Other main industries include food processing, shipbuilding, metals, chemicals, mining and pulp and paper products. Norway has a Scandinavian welfare system and the largest capital reserve per capita of any nation.\n",
"By the mid-1990s Norway had become the world's second largest oil exporter (behind Saudi Arabia). The first commercially important discovery of petroleum on Norway's continental shelf was made at the Ekofisk field in the North Sea late in 1969, just as foreign oil companies were about to give up after four years of exploratory drilling. Intensified exploration increased reserves faster than production. Nevertheless, by the mid-1990s about half of export earnings and nearly one-tenth of government revenues came from offshore oil and gas, and these revenues continued to increase as the end of the century approached. It was estimated that the high rate of oil production could be sustained at least into the second decade of the 21st century, while that of natural gas was projected to increase dramatically and be sustained much longer.\n"
] |
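To make "they tax it a lot" concrete, here is a purely illustrative pump-price breakdown. All per-litre figures below are made-up round numbers, not actual Norwegian rates; the only structural assumption is the common layering of fuel taxation, in which excise duties (a road-use tax and a CO2 tax) are added to the product cost and VAT (assumed here at the standard 25%) is then charged on the total.

```latex
% Hypothetical, illustrative figures in NOK per litre
\text{pump price} \approx
  \big(\underbrace{6.0}_{\text{product cost}}
     + \underbrace{5.0}_{\text{road-use tax}}
     + \underbrace{1.5}_{\mathrm{CO_2}\ \text{tax}}\big)
  \times \underbrace{1.25}_{1 + \text{VAT}}
  \approx 15.6
```

On these invented numbers the petroleum product itself accounts for well under half of what the motorist pays, which is how a major oil producer can still have some of the world's highest pump prices.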
Is anyone creating atoms? Or is this even possible? | Most of the elements on the periodic table above uranium are synthetically created (a balanced example reaction is sketched after this row's context). | [
"Other exotic atoms have been created by replacing one of the protons, neutrons or electrons with other particles that have the same charge. For example, an electron can be replaced by a more massive muon, forming a muonic atom. These types of atoms can be used to test the fundamental predictions of physics.\n",
"Democritus believed that atoms are too small for human senses to detect, they are infinitely many, they come in infinitely many varieties, and that they have always existed. They float in a vacuum, which Democritus called the \"void\", and they vary in form, order, and posture. Some atoms, he maintained, are convex, others concave, some shaped like hooks, and others like eyes. They are constantly moving and colliding into each other. Democritus wrote that atoms and void are the only things that exist and that all other things are merely said to exist by social convention. The objects humans see in everyday life are composed of many atoms united by random collisions and their forms and materials are determined by what kinds of atom make them up. Likewise, human perceptions are caused by atoms as well. Bitterness is caused by small, angular, jagged atoms passing across the tongue; whereas sweetness is caused by larger, smoother, more rounded atoms passing across the tongue.\n",
"BULLET::::- Every object of creation is made of atoms (parmanu) which in turn connect with each other to form molecules (anu). Atoms are eternal, and their combinations constitute the empirical material world.\n",
"BULLET::::- Every object of creation is made of atoms (parmanu) which in turn connect with each other to form molecules (anu). Atoms are eternal, and their combinations constitute the empirical material world.\n",
"Superheavy atoms have all been created since the latter half of the 20th century, and are continually being created during the 21st century as technology advances. They are created through the bombardment of elements in a particle accelerator. For example, the nuclear fusion of californium-249 and carbon-12 creates rutherfordium-261. These elements are created in quantities on the atomic scale and no method of mass creation has been found.\n",
"Atoms are the smallest neutral particles into which matter can be divided by chemical reactions. An atom consists of a small, heavy nucleus surrounded by a relatively large, light cloud of electrons. Each type of atom corresponds to a specific chemical element. To date, 118 elements have been discovered or created.\n",
"An atom is the smallest constituent unit of ordinary matter that has the properties of a chemical element. Every solid, liquid, gas, and plasma is composed of neutral or ionized atoms. Atoms are extremely small; typical sizes are around 100 picometers (, a ten-millionth of a millimeter, or 1/254,000,000 of an inch). They are so small that accurately predicting their behavior using classical physics – as if they were billiard balls, for example – is not possible. This due to quantum effects. Current atomic models now use quantum principles to better explain and predict this behavior.\n"
] |
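The rutherfordium example quoted above can be written as a balanced nuclear equation. The check below uses nothing but conservation of mass number and atomic number; note that, as stated in the passage, the mass numbers only balance if no neutrons are emitted, whereas real fusion-evaporation reactions typically boil off a few neutrons and yield a slightly lighter isotope.

```latex
^{249}_{\;98}\mathrm{Cf} \;+\; ^{12}_{\;6}\mathrm{C} \;\longrightarrow\; ^{261}_{104}\mathrm{Rf}
\qquad
\text{mass numbers: } 249 + 12 = 261,
\qquad
\text{atomic numbers: } 98 + 6 = 104
```

Element 104 is rutherfordium, so the bookkeeping is consistent with the passage's claim that bombarding californium-249 with carbon-12 can produce a rutherfordium nucleus.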
Is there any rational explanation for these noises? | Are they heard in any rural areas? It sounds like traffic noise echoing off of buildings.
(It would be easy enough to drop in a sound to a film to further conspiracy theories, btw.) | [
"According to the NOAA description, it \"rises rapidly in frequency over about one minute and was of sufficient amplitude to be heard on multiple sensors, at a range of over .\" The NOAA's Dr. Christopher Fox did not believe its origin was man-made, such as a submarine or bomb, nor familiar geological events such as volcanoes or earthquakes. While the audio profile of Bloop does resemble that of a living creature, the source was a mystery both because it was different from known sounds and because it was several times louder than the loudest recorded animal, the blue whale.\n",
"Noise is a term often used to refer to an unwanted sound. In science and engineering, noise is an undesirable component that obscures a wanted signal. However, in sound perception it can often be used to identify the source of a sound and is an important component of timbre perception (see above).\n",
" the literature on misophonia was limited. Some small studies show that people with misophonia generally have strong negative feelings, thoughts, and physical reactions to specific sounds, which the literature calls \"trigger sounds\". These sounds are apparently usually soft, but can be loud. One study found that around 80% of the sounds were related to the mouth (eating, slurping, chewing or popping gum, whispering, etc.), and around 60% were repetitive. A visual trigger may develop related to the trigger sound. It also appears that a misophonic reaction can occur in the absence of an actual sound.\n",
"Misophonia's mechanism is not known, but it appears that, like hyperacusis, it may be caused by a dysfunction of the central auditory system in the brain and not of the ears. The perceived origin and context of the sound appears to be essential to trigger a reaction.\n",
"Infrasound is sound waves with frequencies lower than 20 Hz. Although sounds of such low frequency are too low for humans to hear, whales, elephants and other animals can detect infrasound and use it to communicate. It can be used to detect volcanic eruptions and is used in some types of music.\n",
"An ongoing low-frequency noise, audible only to some, is thought to originate somewhere near this town and is consequently sometimes known as the Taos Hum. Those who have heard the Hum usually hear it west of Taos near Tres Orejas. The Taos Hum was featured on the TV show \"Unsolved Mysteries\", and it was also briefly mentioned in an episode of \"The X-Files\". It was the basis for the TV series \"Criminal Minds\" episode \"Mixed Signals\".\n",
"Clinton Walker said \"Prehistoric Sounds\" was, \"an extraordinary record - one of the period's best bar none - a brooding, melancholic collision of electrically charged rock balladry and swooping, brassy arrangements. Broadly misunderstood, it meant nothing to no-one.\"\n"
] |
Bio: X-inactivation and x-linked disorders? | We evolved to handle two sets of every gene...except when it comes to the sex chromosomes. Having too much of certain proteins wreaks havoc during development (e.g. trisomy 21, having an extra chromosome 21 causes Down Syndrome), with most trisomies being fatal.
X-inactivation serves the purpose of making sure you only have one X chromosome doing its thing, because doubling the dose with these particular genes can cause a lot of problems.
In fact, [Klinefelter's Syndrome](_URL_0_) is what can develop in someone with XXY. Even though their extra X chromosome does get inactivated, it's apparently not completely inactivated and causes some issues from the doubling doses of particular gene expression. | [
"The X-linked form of MTM is the most commonly diagnosed type. Almost all cases of X-linked MTM occurs in males. Females can be \"carriers\" for an X-linked genetic abnormality, but usually they will not be clinically affected themselves. Two exceptions for a female with a X-linked recessive abnormality to have clinical symptoms: one is a manifesting carrier and the other is X-inactivation. A manifesting carrier usually has no noticeable problems at birth; symptoms show up later in life. In X-inactivation, the female (who would otherwise be a carrier, without any symptoms), actually presents with full-blown X-linked MTM. Thus, she congenitally presents (is born with) MTM.\n",
"Though only definitively diagnosable by genetic sequence testing, including a G band analysis, ATR-16 syndrome may be diagnosed from its constellation of symptoms. It must be distinguished from ATR-X syndrome, a very similar disease caused by a mutation on the X chromosome, and cases of alpha-thalassemia that co-occur with intellectual disabilities with no underlying genetic relationship.\n",
"X-linked SCID is a known pediatric emergency which primarily affects males. If the appropriate treatment such as intravenous immunoglobulin supplements, medications for treating infections or a bone marrow transplant is not administered, then the prognosis is poor. The patients with X-linked SCID usually die two years after they are born. For this reason, the diagnosis of X-linked SCID needs to be done early to prevent any pathogens from infecting the infant.\n",
"There are two theories on the mechanism Xce uses to affect inactivation. The first is that genomic differences in the Xce alleles alter the sequence of the long non-coding RNA that is an integral part of X chromosome inactivation. The second is that Xce acts as a binding site for dosage factors that will affect XIST gene and Tsix expression (long non-coding RNAs involved in X chromosome inactivation).\n",
"Various types of mutations found in ATRX have been found to be associated with ATR-X, including most commonly single-base missense mutations, as well as nonsense, frameshift, and deletion mutations. Characteristics of ATR-X include: microcephaly, skeletal and facial abnormalities, mental retardation, genital abnormalities, seizures, limited language use and ability, and alpha-thalassemia. The phenotype seen in ATR-X suggests that the mutation of ATRX gene causes the downregulation of gene expression, such as the alpha-globin genes. It is still unknown what causes the expression of the various characteristics of ATR-X in different patients.\n",
"ATR association can be separated into two groups. ATR-16 syndrome patients have a 1-2Mb deletion on the top of the chromosome 16 p-arm and are associated with a Mendelian inheritance of a-thalassemia. ATR-X syndrome patients have no deletion in chromosome 16, a-thalassemia is rare, and this syndrome is consistent with X-linked recessive inheritance. However, both groups have similar phenotypes. The phenotypes resulting from ATR-X are due to skewed x-inactivation. When X-inactivation occurs randomly, half of the cells in the carrier female would contain the abnormality. When X-inactivation is skewed, more than 50% of one X chromosome are becoming inactive, and if that X-chromosome is passed to a male, they will have a higher percent of heterochromatin. The ATR-X locus spans the control center Xist, which regulates X-inactivation. When there is a XH2 mutation in the ATR-X locus, this indicates Xist to inactivate the chromosome increasing the amount of heterochromatin in males.\n",
"\"The role of ATRX as a regulator of heterochromatin dynamics raises the possibility that mutations in \"ATRX\" may lead to downstream transcriptional effects across the complex of genes or repetitive regions involved in the global context of the disorder, in addition to explaining phenotypical differences in these patients. For example, \"ATRX\" mutations affect the expression of alpha-globin gene cluster, causing alpha-thalassemia\". \"ATRX\" interacts with the transcription co-factor \"DAXX\" and the alpha-globin gene cluster. Together they are all responsible for depositing the histone H3.3 at telomeric and pericentromeric regions. They are also responsible for regulating gene expression at these regions. \"ATRX\" is characterized by hypo- and hypermethylated regions. It's important to recognize that having a mutation in the \"ATRX\" gene does not necessarily guarantee that the patient has ATR-X syndrome. However, it is common within ATR-X patients to have global hypermethylation of usually unmethylated regions, like CpG islands and promoters. Several of the genes that undergo methylation changes are responsible for biosynthetic, metabolic, and methylation processes, and 42.5% of these genes are present in the telomeric and pericentromeric regions. A couple of these genes include: \"PRDM9\" and \"2-BHMT2\". PRDM9 encodes for a histone H3 lysine-4 trimethyltransferase, which is a known target for \"ATRX\", and \"2-BHMT2\" encodes for betaine-homocysteine methyltransferase, which catalyzes the methylation of homocysteine.\n"
] |
if we can have physician assisted suicides, why are there sometimes major malfunctions when administering the 'death penalty'? | Physician-assisted suicide often uses phenobarbital, something people would OD on back in the day. It's a single drug that gently knocks you out and kills you without any dramatic stuff. It basically amps up the brain receptor that says "chill out, neuron" so your whole brain chills out, till it knocks you out, and your respiratory drive chills out too and you die.
The death penalty doesn't use such a method. There's a cocktail of 3 drugs: one to knock you out, then one that relaxes your muscles, then one to stop your heart. Since your heart is a muscle, the anti-heart drug also messes with your muscles, so they can very dramatically spasm if the anti-muscle drug didn't work right. This is also complicated by many pharma companies refusing to provide the drug for moral reasons. Or the electric chair, which also doesn't always work. Or hanging, which can be ugly.
Death penalty uses more complicated and less reliable methods, physician assisted uses simpler and more reliable methods. | [
"Regardless of an alternative protocol, some death-penalty opponents have claimed that execution can be less painful by the administration of a single lethal dose of barbiturate. Supporters of the death penalty, however, state that the single-drug theory is a flawed concept. Terminally ill patients in Oregon who have requested physician-assisted suicide have received lethal doses of barbiturates. The protocol has been highly effective in producing a painless death, but the time to cause death can be prolonged. Some patients have taken days to die, and a few patients have actually survived the process and have regained consciousness up to three days after taking the lethal dose. In a California legal proceeding addressing the issue of the lethal injection cocktail being \"cruel and unusual,\" state authorities said that the time to death following a single injection of a barbiturate could be as much as 45 minutes.\n",
"The State of New York had enacted a prohibition against physician-assisted suicide, making it a crime for a physician to administer lethal medication or to otherwise knowingly and intentionally end the life of a patient, even a consenting, mentally competent, and terminally ill patient. \n",
"A study in 2000 found that Dutch physicians who intend to provide assistance with suicide sometimes end up administering a lethal medication themselves because of the patient's inability to take the medication or because of problems with the completion of physician-assisted suicide.\n",
"Terminal dehydration (also known as voluntary death by dehydration or VDD) has been described as having substantial advantages over physician-assisted suicide with respect to self-determination, access, professional integrity, and social implications. Specifically, a patient has a right to refuse treatment and it would be a personal assault for someone to force water on a patient, but such is not the case if a doctor merely refuses to provide lethal medication. Some physicians believe it might have distinctive drawbacks as a humane means of voluntary death. One survey of hospice nurses in Oregon (where physician-assisted suicide is legal) found that nearly twice as many had cared for patients who chose voluntary refusal of food and fluids to hasten death as had cared for patients who chose physician-assisted suicide. They also rated fasting and dehydration as causing less suffering and pain and being more peaceful than physician-assisted suicide. Patients undergoing terminal dehydration can often feel no pain, as they are often given sedatives and care such as mouth rinses or sprays There can be a fine line between terminal sedation that results in death by dehydration and euthanasia.\n",
"The most current version of the American Medical Association's Code of Ethics states that physician-assisted suicide is prohibited. It prohibits physician-assisted suicide because it is “fundamentally incompatible with the physician’s role as healer” and because it would be “difficult or impossible to control, and would pose serious societal risks”. \n",
"Physician-assisted suicide (PAS) is a highly controversial concept, only legal in a few countries. In PAS, physicians, with voluntary written and verbal consent from the patient, give patients the means to die, usually through lethal drugs. The patient then chooses to \"die with dignity,\" deciding on his/her own time and place to die. Reasons as to why patients choose PAS differ. Factors that may play into a patient's decision include future disability and suffering, lack of control over death, impact on family, healthcare costs, insurance coverage, personal beliefs, religious beliefs, and much more.\n",
"Physicians who are in favor of euthanasia state that to keep euthanasia or physician-assisted suicide (PAS) illegal is a violation of patient freedoms. They believe that any competent terminally-ill patient should have the right to choose death or refuse life-saving treatment. Suicide and assistance from their physician is seen as the only option those patients have. With the suffering and the knowledge from the doctor, this may also suggest that PAS is a humane answer to the excruciating pain.\n"
] |
what's the purpose of the num lock key on a full sized keyboard? | The first IBM PC keyboard had 83 keys. The numeric keypad doubled as the cursor control keys, and you used Num Lock to toggle between the two.
A few years later, IBM introduced what is now the familiar 101 key layout, with dedicated cursor control keys. Even though the Num Lock key was no longer strictly necessary, they left it in for backwards compatibility (some programs use it for other purposes) and because some people preferred the cursor control keys in the numeric keypad layout.
Even though that was almost 30 years ago, the layout has become so standard that no one has seen a need to change it. (A toy sketch of the Num Lock toggle idea follows this row's context.) | [
"The Num Lock key exists because earlier 84-key IBM PC keyboards did not have cursor control or arrows separate from the numeric keypad. Most earlier computer keyboards had separate number keys and cursor control keys; however, to reduce cost, IBM chose to combine the two in their early PC keyboards. Num Lock would be used to choose between the two functions. On some laptop computers, the Num Lock key is used to convert part of the main keyboard to act as a (slightly skewed) numeric keypad rather than letters. On some laptop computers, the Num Lock key is absent and replaced by the use of a key combination.\n",
"In a 102/105-key layout of this form, there would be an additional key to the right of the left shift key. This would be an additional backslash key (). Keyboards with 102 keys are not sold as standard, except by certain manufacturers who mistakenly group Israel into Europe, where 102 keyboards are the norm (most notable of the later group are Logitech and Apple).\n",
"Keying is done by one or two keys on the inside of the socket, which fit into grooves provided on the outside of the plug's shroud. The major keyway is 4 mm deep, and there is a corresponding flat protruding into the interior of the shroud to accommodate it. The width of the major keyway defines the current rating: 32 A plugs have a 5 mm wide groove, while 16 A plugs have an 8 mm groove, and will therefore fit into 32 A sockets but not vice versa.\n",
"The key was meant to lock all scrolling techniques, and is a vestige of the original IBM PC keyboard. In the original design, was intended to modify the behavior of the arrow keys. When the mode was on, the arrow keys would scroll the contents of a text window instead of moving the cursor. In this usage, is a toggling lock key like Num Lock or Caps Lock, which have a state that persists after the key is released.\n",
"Each key is meant to be used with screws of a specific socket size, with rather tight tolerances; so the tool is commonly sold in kits that include half a dozen or more keys of different sizes. Usually the size of the key increases with the size of the socket, but not necessarily in direct proportion.\n",
"A typical space bar key is very large, enough so that a thumb from either hand can use it, and is almost always found on the bottom row of standard keyboard layouts. Over time space bars have become narrower on computers to make way for keys such as control key and alt key.\n",
"The 84-key keyboard is a full-size keyboard, with standard 18.5mm key pitch between keys. The keyboard has also been coated with anti-bacterial Silver Nano ions. The touchpad supports multi-touch gestures.\n"
] |
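To illustrate the toggle described in the answer, here is a toy sketch in Python. It is not how any real keyboard firmware or OS driver works; the key names and the legend table are invented purely for the example.

```python
# Toy model of the original 83-key IBM PC keypad: each physical key carries two
# legends, and the Num Lock state decides which one a key press produces.

KEYPAD_LEGENDS = {
    "KP7": ("7", "HOME"),
    "KP8": ("8", "UP"),
    "KP4": ("4", "LEFT"),
    "KP6": ("6", "RIGHT"),
    "KP2": ("2", "DOWN"),
    "KP1": ("1", "END"),
}

def interpret(key: str, num_lock: bool) -> str:
    """Return the digit when Num Lock is on, the cursor action when it is off."""
    digit, cursor_action = KEYPAD_LEGENDS[key]
    return digit if num_lock else cursor_action

if __name__ == "__main__":
    print(interpret("KP8", num_lock=True))   # -> "8"
    print(interpret("KP8", num_lock=False))  # -> "UP"
```

On the later 101-key layout the cursor keys got dedicated physical keys, so the toggle became a convenience rather than a necessity, which matches the answer's point about backwards compatibility.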
how is it so hard for many people to even try to read or pronounce foreign names? | Part of how most people read is not by looking at each individual letter but by looking at the word as a whole, more like a single shape than a string of letters. If you've been reading "by shape" and have to switch to reading each individual letter, it's pretty jarring. Also, different languages pronounce the same letters differently, so combinations that make sense in Spanish won't make sense in English. | [
"The following is a list of common non-native pronunciations that English speakers make when trying to speak foreign languages. Many of these are due to transfer of phonological rules from English to the new language as well as differences in grammar and syntax that they encounter.\n",
"Users have the ability to hear their phrases spoken by native speakers, read simplified phonetic spelling to learn how best to pronounce them using their native syllables, and save phrases for later use. Additionally, users have the ability to create their own personal phrasebook that suits the needs of their travel experiences.\n",
"The four major components in the acquisition of a language are namely; listening, speaking, reading and writing. While most people have no difficulties in exercising these skills in their native language, doing so in a second or foreign language is not that easy. In the area of writing, research has found that foreign language learners find it painstaking to compose in the target language, producing less eloquent sentences and encountering difficulties in the revisions of their written work. However, these difficulties are not attributed to their linguistic abilities.\n",
"Still other words, including proper nouns such as names of people and places, are not only written as foreign words, but often given their native pronunciation too. For example, the French term \"mange tout\" (a type of pea) is often pronounced with a nasal vowel. To do otherwise, especially with a proper noun, is often considered a mispronunciation.\n",
"Foreign learners may have parallel problems. Learners from very many cultural backgrounds have difficulties with English dental fricatives, usually caused by interference with either sibilants or stops. Words with a dental fricative adjacent to an alveolar sibilant, such as \"clothes\" , \"truths\" , \"fifths\" , \"sixths\" , \"anesthetic\" , etc., are commonly very difficult for foreign learners to pronounce. Some of these words containing consonant clusters can also be difficult for native speakers, including those using the standard and pronunciations generally, allowing such accepted informal pronunciations of \"clothes\" as (a homonym of the verb \"close\") and \"fifth(s)\" as . \n",
"The users of foreign language wanted simply to note things of their interest in the literature of foreign languages. Therefore, this method focuses on reading and writing and has developed techniques which facilitate more or less the learning of reading and writing only. As a result, speaking and listening are overlooked.\n",
"In general the pronunciation of older people has priority; however, some people can actually get quite offended if they think the language is written the 'wrong' way. Some insist that the mission spelling should be used, others the Bani spelling, and still others the KKY (Saibai etc.) spelling, and still again others use mixes of two or three, or adaptations thereof. Some writers of the Mabuiag-Badhu dialect (Kalaw Lagaw Ya), for example, write mainly in the Mission system, sometimes use the diagraphs \"oe\", \"th\", \"dh\" (variant \"dth\") and sometimes use capital letters at the ends of words to show devoiced vowels, such as \"ngukI\" 'fresh water/drinking water, fruit juice' . In the Bani/Klokheid orthograophy \"nguki\" is written \"nguuki\", and in the other dialects the final vowel is either fully voiced, \"nguki\" ), or elided, \"nguk\" ).\n"
] |
if water is not compressible, why does pressure exist if a pipe has a limited volume of water? | Water is compressible; everything is. Water pressure is typically generated by having the storage tank at some height above the pipes. The static pressure in the pipes is proportional to that height (a worked example follows this row's context). | [
"On the other hand, liquids have little compressibility. Water, for example, will compress by only 46.4 parts per million for every unit increase in atmospheric pressure (bar). At around 4000 bar (400 megapascals or 58,000 psi) of pressure at room temperature water experiences only an 11% decrease in volume. Incompressibility makes liquids suitable for transmitting hydraulic power, because a change in pressure at one point in a liquid is transmitted undiminished to every other part of the liquid and very little energy is lost in the form of compression.\n",
"However, the negligible compressibility does lead to other phenomena. The banging of pipes, called water hammer, occurs when a valve is suddenly closed, creating a huge pressure-spike at the valve that travels backward through the system at just under the speed of sound. Another phenomenon caused by liquid's incompressibility is cavitation. Because liquids have little elasticity they can literally be pulled apart in areas of high turbulence or dramatic change in direction, such as the trailing edge of a boat propeller or a sharp corner in a pipe. A liquid in an area of low pressure (vacuum) vaporizes and forms bubbles, which then collapse as they enter high pressure areas. This causes liquid to fill the cavities left by the bubbles with tremendous localized force, eroding any adjacent solid surface.\n",
"When the pressure in one part of a system is reduced relative to another, the fluid in the higher pressure region will exert a force relative to the region of lowered pressure. Pressure reduction may be static, as in a piston and cylinder arrangement, or dynamic, as in the case of a vacuum cleaner when air flow results in a reduced pressure region.\n",
"The energy of the falling water creates negative pressure inside the pipe that is compensated by the air from the outside atmosphere provided through inlet. The air is compressed by surrounding water pressure (which increases under a column due to the discharge to atmospheric pressure). The pressure of the air delivered cannot exceed the hydraulic head of the discharge pipe of the separation chamber. \n",
"A pressure vessel [6] containing air cushions the hydraulic pressure shock when the waste valve closes, and it also improves the pumping efficiency by allowing a more constant flow through the delivery pipe. Although the pump could in theory work without it, the efficiency would drop drastically and the pump would be subject to extraordinary stresses that could shorten its life considerably. One problem is that the pressurized air will gradually dissolve into the water until none remains. One solution to this problem is to have the air separated from the water by an elastic diaphragm (similar to an expansion tank); however, this solution can be problematic in developing countries where replacements are difficult to procure. Another solution is to have a mechanism such as a snifting valve that automatically inserts a small bubble of air when the suction pulse described above reaches the pump. Another solution is to insert an inner tube of a car or bicycle tire into the pressure vessel with some air in it and the valve closed. This tube is in effect the same as the diaphragm, but it is implemented with more widely available materials. The air in the tube cushions the shock of the water the same as the air in other configurations does.\n",
"Water is approximately 800 times denser than air, and air is approximately 15,000 times more compressible than water. When water is filled with air bubbles, however, the fluid's density is very close to the density of water, but the compressibility will be the compressibility of air. This greatly reduces the speed of sound in the liquid. Wavelength is constant for a given volume of fluid, therefore the frequency (pitch) of the sound will decrease as long as gas bubbles are present.\n",
"where V is the volume of the medium, and dV is the volume decrease due to the pressure increase dp of the sound wave. When water is filled with air bubbles, the fluid density is essentially the density of water, and the air will contribute significantly to the compressibility. Crawford derived the relationship between fractional bubble volume and sound velocity in water, and hence the sound frequency in water, given as.\n"
] |
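As a quick worked example of the "proportional to height" point: the static pressure under a column of liquid is given by the hydrostatic relation P = ρgh. The 10 m head below is an arbitrary illustrative figure, not taken from the question.

```latex
P = \rho g h
  = 1000\,\mathrm{kg/m^3} \times 9.81\,\mathrm{m/s^2} \times 10\,\mathrm{m}
  \approx 98\,\mathrm{kPa} \approx 1\,\mathrm{bar}
```

So roughly every 10 m of tank elevation adds about one atmosphere of static pressure at the tap, regardless of how much water the pipe itself holds, which is why a nearly incompressible fluid in a limited volume can still be under substantial pressure.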
Were there any examples of Fascist states before 1930's Europe? | Fascism at the best of times is a vague description. As a Marxist, I see fascism as being a reaction to a crisis of state brought on by a crisis of capitalism, as such, to argue that fascist states existed before capitalism wouldn't make sense. The goal of a fascist state then is to try and maintain capital as a mode of production, even if this is an unconscious and only historical role. Have there been dictatorial states and rulers before fascism? Yes, but I think to describe them as being fascist would be taking fascism out of the specifics of the 20th century. A nice pamphlet you can read would be Gilles Dauve's [When Insurrections Die](_URL_0_). | [
"The first totalitarian state in the West was established in Italy. Unlike the Soviet Union however, this would be a Fascist rather than a Communist state. Fascism is a less organized ideology than Communism, but generally it is characterized by a total rejection of humanism and liberal democracy, as well as very intense nationalism, with government headed by a single all-powerful dictator. The Italian politician Benito Mussolini established the Fascist Party, from which Fascism derives its name following World War I. Fascists won support by many disillusioned Italians, angry over Italy's treatment following World War I. They also employed violence and intimidation against their political enemies. In 1922 Mussolini seized power by threatening to lead his followers on a march on Rome if he was not named prime minister. Although he had to share some power with the monarchy, Mussolini ruled as a dictator. Under his rule, Italy's military was built up and democracy became a thing of the past. One important diplomatic achievement of his reign, however, was the Lateran Treaty, between Italy and the Pope, in which a small part of Rome where St. Peter's Basilica and other Church property was located was given independence as Vatican City and the Pope was reimbursed for lost Church property. In exchange, the Pope recognized the Italian government.\n",
"After the Second World War, most fascist regimes were dismantled by the victors, with only those in Spain and Portugal surviving. Parties, movements or politicians who carried the label \"fascist\" quickly became political pariahs with many nations across Europe banning any organisations or references relating to fascism and Nazism. With this came the rise of Neo-Fascism, movements like the Italian Social Movement, Socialist Reich Party and Union Movement attempted to continue fascism's legacy but failed to become mass movements.\n",
"The first fascist country, Italy, was ruled by Benito Mussolini (\"Il Duce\") until he was dismissed and arrested on 25 July 1943. Mussolini was then rescued from prison by Germany, and was given made head of a state named \"Repubblica di Salò\" in northern Italy that continued to fight the allies alongside Germany.\n",
"Prior to and during the Second World War, Nazi Germany imposed numerous fascist/fascist related regimes across occupied Europe, these may not fully espouse the form of fascism established by Mussolini however they were authoritarian, nationalist, anti-communist and staunchly pro-Axis powers:\n",
"Between 1925 and 1943, Italy was a quasi-\"de jure\" Fascist dictatorship, as the constitution formally remained in effect without alteration by the Fascists, though the monarchy also formally accepted Fascist policies and Fascist institutions. Changes in politics occurred, consisting of the establishment of the Grand Council of Fascism as a government body in 1928, which took control of the government system, as well as the Chamber of Deputies being replaced with the Chamber of Fasci and Corporations as of 1939.\n",
"Fascism is a form of radical authoritarian nationalism in Europe shortly after the First World War. It dominated Italy (1923–43) and Nazi Germany (1933-45) and played a role in other countries. It was based in tightly organised local groups, all controlled from the top. It violently opposed to liberalism, Marxism, and anarchism, and tried to control all aspects of society. The foreign policy Militaristic and aggressive. Fascist Italy and Nazi Germany were critical allies in the second world war. Japan, with an authoritarian government that did not have a well-mobilised popular base, was allied with them to form the Axis.\n",
"Fascism first appeared in Brazil in 1922 with the foundation of the \"Legião do Cruzeiro do Sul\" and within ten years this had been followed by the \"Legião de Outubro\", the \"Partido Nacional Sindicalista\", the \"Partido Fascista Nacional\", the \"Legião Cearense do Trabalho\", the \"Partido Nacionalista\" of São Paulo, the \"Partido Nacional Regenerador\",and the \"Partido Socialista Brasileiro\", all minor groups that espoused some form of fascism However one of the most important fascist movements on the continent was Brazilian Integralism, which shared a heritage with Italian fascism as well as Integralismo Lusitano. At its peak the \"Ação Integralista Brasileira\", led by Plínio Salgado, claimed as many as 200,000 members although following coup attempts it faced a crackdown from the Estado Novo of Getúlio Vargas in 1937. Like the Portuguese Estado Novo that influenced it, Vargas' regime borrowed from fascism without fully endorsing it and in the end repressed those who advocated full fascism.\n"
] |
where did the misconception that radioactive waste glows green come from? | An early use of radioactive material was in luminescent applications. You've seen glow-in-the-dark watches as an example. These commonly involved a phosphor that glowed green via interaction with the radioactive substance.
This also led to [radium girls](_URL_0_) as an early terrifying example of what can go bad with radiation.
So you have a combination of 'green glow' and 'horrible radiation damage.' | [
"Near the facility, a dense cloud of radioactive dust killed off a large area of Scotch pine trees; the rusty orange color of the dead trees led to the nickname \"The Red Forest\" (\"Рудий ліс\"). The Red Forest was among the world's most radioactive places; to reduce the hazard, the Red Forest was bulldozed and the highly irradiated wood was buried, though the soil continues to emit significant radiation.\n",
"The green hue was a puzzle for astronomers in the early part of the 20th century because none of the known spectral lines at that time could explain it. There was some speculation that the lines were caused by a new element, and the name nebulium was coined for this mysterious material. With better understanding of atomic physics, however, it was later determined that the green spectrum was caused by a low-probability electron transition in doubly ionized oxygen, a so-called \"forbidden transition\". This radiation was all but impossible to reproduce in the laboratory at the time, because it depended on the quiescent and nearly collision-free environment found in the high vacuum of deep space.\n",
"When first isolated, it was observed that the green glow emanating from white phosphorus would persist for a time in a stoppered jar, but then cease. Robert Boyle in the 1680s ascribed it to \"debilitation\" of the air. Actually, it is oxygen being consumed. By the 18th century, it was known that in pure oxygen, phosphorus does not glow at all; there is only a range of partial pressures at which it does. Heat can be applied to drive the reaction at higher pressures.\n",
"The \"Green Run\" was a secret U.S. Government release of radioactive fission products on December 2–3, 1949, at the Hanford Site plutonium production facility, located in Eastern Washington. Radioisotopes released at that time were supposed to be detected by U.S. Air Force reconnaissance. Freedom of Information Act (FOIA) requests to the U.S. Government have revealed some of the details of the experiment. Sources cite of iodine-131 released, and an even greater amount of xenon-133. The radiation was distributed over populated areas, and caused the cessation of intentional radioactive releases at Hanford until 1962 when more experiments commenced.\n",
"Agent Green is the code name for a powerful herbicide and defoliant used by the U.S. military in its herbicidal warfare program during the Vietnam War. The name comes from the green stripe painted on the barrels to identify the contents. Largely inspired by the British use of herbicides and defoliants during the Malayan Emergency, it was one of the so-called \"Rainbow Herbicides\". Agent Green was only used between 1962 and 1964, during the early \"testing\" stages of the spraying program.\n",
"Because they contain mercury, many fluorescent lamps are classified as hazardous waste. The United States Environmental Protection Agency recommends that fluorescent lamps be segregated from general waste for recycling or safe disposal, and some jurisdictions require recycling of them.\n",
"The blue glow of a criticality accident can result from the fluorescence of the excited ions, atoms and molecules of air (mostly oxygen and nitrogen) falling back to unexcited states, which produces an abundance of blue light. This is also the reason electrical sparks in air, including lightning, appear electric blue. The smell of ozone was said to be a sign of high ambient radioactivity by Chernobyl liquidators.\n"
] |
Were there any non-medieval jousts, for example in Japan or Ancient Rome? | OP, I was waiting for 3rd or 4th level comments to mention this, but since there aren't any responses...
I actually saw a jousting tournament at a marina in southern France (Nice?), but rather than two knights on horseback, it was two teams of two in small boats: one standing on the prow with a shield & jousting pole, the other rowing. Per Wiki, I find that *[joute nautique](_URL_1_)* has not only existed in Southern France for centuries, but traces back to ancient [Egypt](_URL_0_), Greece and Rome. I'll leave further searching for you. | [
"Evidence of jousting is subsequently found in Ancient Greece. The Greeks introduced the practice into Sicily where the Latins, great lovers of all kinds of spectacle, immediately adopted it. Indeed, there are countless signs of jousting in the Roman Empire, especially during naumachia (literally \"naval combat\"). The latter featured naval re-enactments and other water-sports that took place in arenas designed to be flooded for the purpose. In all likelihood, the Romans introduced these types of games throughout their empire. Evidence for this comes from the description of a fête held at Strasbourg in 303 in honour of Emperor Diocletian. Some historians argue, however, for an introduction of the games from the foundation of Massilia, a Greek colony founded in 570BC and later to become the French city of Marseille.\n",
"The medieval joust has its origins in the military tactics of heavy cavalry during the High Middle Ages. Since the 15th century, jousting had become a sport (\"hastilude\") with less direct relevance to warfare, for example using separate specialized armour and equipment.\n",
"The medieval joust has its origins in the military tactics of heavy cavalry during the High Middle Ages. By the 14th century, many members of the nobility, including kings had taken up jousting to showcase their own courage, skill and talents, and the sport proved just as dangerous for a king as a knight, and from the 15th century on, jousting became a sport (\"hastilude\") without direct relevance to warfare.\n",
"Jousting is a martial game or \"hastilude\" between two horsemen wielding lances with blunted tips, often as part of a tournament. The primary aim was to replicate a clash of heavy cavalry, with each participant trying hard to strike the opponent while riding towards him at high speed, breaking the lance on the opponent's shield or jousting armour if possible, or unhorsing him. The joust became an iconic characteristic of the knight in Romantic medievalism. The participants experience close to three and a quarter times their body weight in G-forces when the lances collide with their armour.\n",
"There were several types of joust, including some regional preferences or rules. For example, in fourteenth-century Germany, distinction was made between the \"Hohenzeuggestech\", where the aim was to break the lance, and the \"Scharfrennen\", where knights sought to unhorse their opponents. These types called for different lances (light in the former, heavy in the latter), and saddles (where the \"Scharfrennen\" called for saddles without front or rear supports, which would impede the fall).\n",
"Jousting is based on the military use of the lance by heavy cavalry. It transformed into a specialised sport during the Late Middle Ages, and remained popular with the nobility in England and Wales, Germany and other parts of Europe throughout the whole of the 16th century (while in France, it was discontinued after the death of King Henry II in an accident in 1559). In England, jousting was the highlight of the Accession Day tilts of Elizabeth I and of James VI and I, and also was part of the festivities at the marriage of Charles I.\n",
"The oldest representations of water jousting have been found on bas-reliefs dating from the Ancient Egyptians (2780 – 2380BC). It would seem however, that these relate more to a form of brawling than a leisure activity; given that the jousters are wearing no form of protection and carry gaffes armed with two points at their end.\n"
] |
Has anyone come across any papers on attempts to elucidate how telomerase becomes upregulated in cancerous cells? | Try this one:
Wnt/β-catenin signaling regulates telomerase in stem cells and cancer cells.
Hoffmeyer K, Raggioli A, Rudloff S, Anton R, Hierholzer A, Del Valle I, Hein K, Vogt R, Kemler R.
Stem cells use Wnt signaling through receptors (Lgr5, for instance), which modulates β-catenin activity. Without Wnt, β-catenin is degraded by the SCF-βTrCP E3 ligase complex, but the moment Wnt binds to Lgr5 this degradation is inhibited: β-catenin moves to the nucleus and triggers accumulation of TCF/BCL9/β-catenin complexes there.
This then leads to a whole variety of signaling events that are associated with stem cell phenotype and behaviour, including upregulated telomerase activity.
[EDIT] Cancer cells, especially as the disease progresses, display a lot of so-called stemness, a phenotype that closely resembles stem cells. Wnt signaling plays an important part in many, many cancers, either directly by aberrant upregulation of Lgr5 or by disruption of the downstream regulation mechanisms. | [
"If increased telomerase activity is associated with malignancy, then possible cancer treatments could involve inhibiting its catalytic component, hTERT, to reduce the enzyme’s activity and cause cell death. Since normal somatic cells do not express TERT, telomerase inhibition in cancer cells can cause senescence and apoptosis without affecting normal human cells. It has been found that dominant-negative mutants of hTERT could reduce telomerase activity within the cell. This led to apoptosis and cell death in cells with short telomere lengths, a promising result for cancer treatment. Although cells with long telomeres did not experience apoptosis, they developed mortal characteristics and underwent telomere shortening. Telomerase activity has also been found to be inhibited by phytochemicals such as isoprenoids, genistein, curcumin, etc. These chemicals play a role in inhibiting the mTOR pathway via down-regulation of phosphorylation. The mTOR pathway is very important in regulating protein synthesis and it interacts with telomerase to increase its expression. Several other chemicals have been found to inhibit telomerase activity and are currently being tested as potential clinical treatment options such as nucleoside analogues, retinoic acid derivatives, quinolone antibiotics, and catechin derivatives. There are also other molecular genetic-based methods of inhibiting telomerase, such as antisense therapy and RNA interference.\n",
"This model of cancer in cell culture accurately describes the role of telomerase in actual human tumors. Telomerase activation has been observed in ~90% of all human tumors, suggesting that the immortality conferred by telomerase plays a key role in cancer development. Of the tumors without TERT activation, most employ a separate pathway to maintain telomere length termed Alternative Lengthening of Telomeres (ALT ). The exact mechanism behind telomere maintenance in the ALT pathway is unclear, but likely involves multiple recombination events at the telomere.\n",
"The ability to maintain functional telomeres may be one mechanism that allows cancer cells to grow \"in vitro\" for decades. Telomerase activity is necessary to preserve many cancer types and is inactive in somatic cells, creating the possibility that telomerase inhibition could selectively repress cancer cell growth with minimal side effects. If a drug can inhibit telomerase in cancer cells, the telomeres of successive generations will progressively shorten, limiting tumor growth.\n",
"Studies have shown that 90 percent of cancer cells contain large amounts of an enzyme called telomerase. Telomerase is an enzyme that replenishes the worn away telomeres by adding bases to the ends and thus renewing the telomere. A cancer cell has in essence turned on the telomerase gene, and this allows them to have an unlimited amount of divisions without the telomeres wearing away. Other kinds of cells that can surpass the Hayflick limit are stem cells, hair follicles, and germ cells. This is because they contain raised amounts of telomerase.\n",
"Telomerase activity is associated with the number of times a cell can divide playing an important role in the immortality of cell lines, such as cancer cells. The enzyme complex acts through the addition of telomeric repeats to the ends of chromosomal DNA. This generates immortal cancer cells. In fact, there is a strong correlation between telomerase activity and malignant tumors or cancerous cell lines. Not all types of human cancer have increased telomerase activity. 90% of cancers are characterized by increased telomerase activity. Lung cancer is the most well characterized type of cancer associated with telomerase. There is a lack of substantial telomerase activity in some cell types such as primary human fibroblasts, which become senescent after about 30–50 population doublings. There is also evidence that telomerase activity is increased in tissues, such as germ cell lines, that are self-renewing. Normal somatic cells, on the other hand, do not have detectable telomerase activity. Since the catalytic component of telomerase is its reverse transcriptase, hTERT, and the RNA component hTERC, hTERT is an important gene to investigate in terms of cancer and tumorigenesis.\n",
"Susceptibility to cancer seems counterintuitive because in many known cancers reactivation of telomerase is actually a required step for malignancy to evolve (see telomere). In a disease where telomerase is affected, it does not seem to follow that cancer would be a complication to result. The authors note the paradoxical nature of cancer predisposition in individuals who seem to lack one of the required components for cancer to form. It is thought that without functional telomerase, chromosomes will likely be attached together at their ends through the non-homologous end joining pathway. If this proves to be a common enough occurrence, malignancy even without telomerase present is possible.\n",
"Telomerase is often activated in cancer cells to enable cancer cells to duplicate their genomes indefinitely without losing important protein-coding DNA sequence. Activation of telomerase could be part of the process that allows cancer cells to become \"immortal\". The immortalizing factor of cancer via telomere lengthening due to telomerase has been proven to occur in 90% of all carcinogenic tumors \"in vivo\" with the remaining 10% using an alternative telomere maintenance route called ALT or Alternative Lengthening of Telomeres.\n"
] |
why do certain franchising companies intentionally limit their geographic distribution? (i.e. why is there no steak n shake or in n out in the northeast?) | Part of being a franchise is maintaining uniformity of goods. You can't have an In N Out unless you get their meat, bread, etc. The parent company doesn't want to deal with shipping food across the country or setting up regional distribution. | [
"Regional distributors appeared, offering pressing and distribution deals to the small labels that would reach all of the shops in a region. Shops preferred to deal with only a handful of distributors and so the small distributors agreed to also distribute each other's stock, segregating the market by the geography of the shops, rather than by the content or particular labels. This was the beginning of the idea behind the Cartel.\n",
"When McLamore and Edgarton's Burger King Corporation began a full franchising system in 1961, it relied on a regional franchising model where franchisees would purchase the right to open stores within a defined geographic region. These franchise agreements granted the company very little oversight control over its franchisees and resulted in issues of product quality control, store image and design and operations procedures.\n",
"Scale effects resulting from centralized acquisition purchase centres in the food supply chain favor large players such as big retailers or distributors in the food distribution market. This is due to the fact that they can utilize their strong market power and financial advantage over smaller players. Having both strong market power and greater access to the financial credit market meant that they can impose barriers to entry and cement their position in the food distribution market. This would result in a food distribution chain that is characterized by large players on one end and small players choosing niche markets to operate in on the other end. The existence of smaller players in specialized food distribution markets could be attributed to their shrinking market share and their inability to compete with the larger players due to the scale effects. Through this mechanism, globalization has displaced smaller role players.\n",
"A regional lockout may be enforced for several reasons, such as to stagger the release of a certain product, to avoid losing sales to the product's foreign publisher, to maximize the product's impact in a certain region through localization, to hinder grey market imports by enforcing price discrimination, or to prevent users from accessing certain content in their territory because of legal reasons (either due to censorship laws, or because a distributor does not have the rights to certain intellectual property outside their specified region).\n",
"Another mechanism troubling the specialized food distribution markets is the ability of distribution chains to possess their own brand. Stores with their own brand are able to combat price wars between competitors by lowering the price of their own brand, thus making consumers more likely to purchase goods from them.\n",
"Many retailers have tried to compete with showroomers by slashing their own prices. Independent businesses, however, are advised to counter showrooming by adding value via included services and other tactics, such as making information and reviews more readily available to customers so that they might not choose to seek it out online.\n",
"On the Internet, customers can directly contact the distributors. This has reduced the length of the chain to some extent by cutting down on middlemen. Some of the benefits are cost reduction and greater collaboration.\n"
] |
the problem with hipsters | Hipsters invented hipster hate as a way to make themselves appear more underground and oppressed. It is my belief that the majority of hipster hate posts are originated by hipsters.... | [
"Greif's efforts puts the term \"hipster\" into a socioeconomic framework rooted in the petit bourgeois tendencies of a youth generation unsure of their future social status. The cultural trend is indicative of a social structure with heightened economic anxiety and lessened class mobility.\n",
"Mark Greif, a founder of \"n+1\" and an Assistant Professor at The New School, in a \"New York Times\" editorial, states that \"hipster\" is often used by youth from disparate economic backgrounds to jockey for social position. He questions the contradictory nature of the label, and the way that no one thinks of themselves as a hipster: \"Paradoxically, those who used the insult were themselves often said to resemble hipsters—they wore the skinny jeans and big eyeglasses, gathered in tiny enclaves in big cities, and looked down on mainstream fashions and 'tourists'\". He believes the much-cited difficulty in analyzing the term stems from the fact that any attempt to do so provokes universal anxiety, since it \"calls everyone's bluff\". Like Arsel and Thompson, he draws from \"La Distinction\" by Pierre Bourdieu to conclude:\n",
"The term \"hipster\" in its current usage first appeared in the 1990s and became particularly prominent in the late 2000s and early 2010s, being derived from the earlier hipster movements of the 1940s. Members of the subculture typically do not self-identify as hipsters, and the word \"hipster\" is often used as a pejorative for someone who is pretentious or overly trendy; or as a stereotypical term that has been reclaimed and redefined by some as a term of pride and group identity. Some scholars contend that the contemporary hipster is a \"marketplace myth\" that has a complex, two-way relationship with the worldview and value system of indie-oriented consumers. The hipster subculture is considered part of Generation Y.\n",
"Elise Thompson, an editor for the LA blog \"LAist\" argues that \"people who came of age in the 70s and 80s punk rock movement seem to universally hate 'hipsters'\", which she defines as people wearing \"expensive 'alternative' fashion[s]\", going to the \"latest, coolest, hippest bar...[and] listen[ing] to the latest, coolest, hippest band\". Thompson argues that hipsters \"don't seem to subscribe to any particular philosophy ... [or] ... particular genre of music\". Instead, she argues that they are \"soldiers of fortune of style\" who take up whatever is popular and in style, \"appropriat[ing] the style[s]\" of past countercultural movements such as punk, while \"discard[ing] everything that the style stood for\".\n",
"Entering the 2000s, this look because associated with musical scenes including indie rock and emo gradually spreading to the hipster movement. The hipster movement is popular among people in their 20s and 30s whose style attempts to reject mainstream trends. The hipster movement embraced thrift store chic because of its love for vintage items, especially clothing. Items that became popular for indie girls included flowery cotton dresses, cardigans, keffiyehs, and eyeglasses chosen deliberately for their unfashionable connotations. Hipster-thrift-store-chic embraces nostalgia and irony by combining old trucker-caps and vintage bowling t-shirts with worn luxury goods like leather jackets, old military dress uniforms as a protest against the war in Iraq, or used business wear, such as tweed cloth sportcoats.\n",
"In Greenwich Village, New York City by the end of the 1950s, young counterculture advocates were widely called \"hips\" because they were considered \"in the know\" or \"cool\", as opposed to being \"square\". \n",
"In Greenwich Village in the early 1960s, New York City, young counterculture advocates were named \"hips\" because they were considered \"in the know\" or \"cool\", as opposed to being \"square\", meaning conventional and old-fashioned. In the April 27, 1961 issue of The Village Voice, \"An open letter to JFK & Fidel Castro\", Norman Mailer utilizes the term hippies, in questioning JFK's behavior. In a 1961 essay, Kenneth Rexroth used both the terms \"hipster\" and \"hippies\" to refer to young people participating in black American or Beatnik nightlife. According to Malcolm X's 1964 autobiography, the word \"hippie\" in 1940s Harlem had been used to describe a specific type of white man who \"acted more Negro than Negroes\". Andrew Loog Oldham refers to \"all the Chicago hippies,\" seemingly in reference to black blues/R&B musicians, in his rear sleeve notes to the 1965 LP \"The Rolling Stones, Now!\"\n"
] |
when we call or write our representatives in congress, what incentive do they have to listen to us instead of just doing what they wanted to do in the first place? | Because you voted them in and will choose whether or not they get re-elected in the next election. If your representative doesn't represent you very well, you're much less likely to vote for them next time. | [
"Even if Congress is composed of representatives elected by the people, it does not follow, except in a highly qualified sense, that in every exercise of its power of inquiry, the people are exercising their right to information. The members of respondent Committees should not invoke as justification in their exercise of power a right properly belonging to the people in general. This is because when they discharge their power, they do so as public officials and members of Congress.\n",
"Another responsibility of the role is persuading Members of Congress from their party to vote along with their party's Leadership, even when it is unpopular. They may also be charged with coordinating outreach to allied groups of lobbyists, corporations, or unions.\n",
"A major aspect of the role for a Senator and a representative consists of services to his or her constituency. Members receive thousands of letters, phone calls, and e-mails, with some expressing opinion on an issue, or displeasure with a member's position or vote. Often the incoming messages are not from concerned citizens but are barrages of electronic mail and interactive video designed to pressure the congressperson and his or her staff. Constituents request assistance with particular problems or ask questions. Members of Congress want to leave a positive impression on the constituent, rather than leave them disgruntled. Thus, their offices will often be responsive, and go out of their way to help steer the citizen through the intricacies of the bureaucracy. In this role, members and their staffers act as an ombudsman at the Federal level. This unofficial job has become increasingly time-consuming, and has significantly reduced the time that members have for the preparation or inspection of bills. Providing services helps congresspersons win elections and there are reports that some congresspersons compete actively to try to convince voters that they deliver the best services. It can make a difference in close races. For example, Erika Hodell-Cotti talked about how her congressperson, Frank Wolf, sent her letters when her children got awards; the congressperson helped her brothers win admission to the West Point Military Academy. Much of what citizens want is merely help with navigating government bureaucracies. Oftentimes citizens contact member offices that do not represent them. Because resources for helping non-constituents are limited, an additional component of constituent service becomes directing citizens to their assigned representative in Congress. \n",
"All congressional officials try to serve two distinct purposes which sometimes overlap––representing their constituents (local concerns) and making laws for the nation (national concerns). There has been debate throughout American history about how to straddle these dual obligations of representing the wishes of citizens while at the same time trying to keep mindful of the needs of the entire nation. Often, compromise is required.\n",
"Lobbyists routinely monitor how congressional officials vote, sometimes checking the past voting records of congresspersons. One report suggested that reforms requiring \"publicly recorded committee votes\" led to more information about how congresspersons voted, but instead of becoming a valuable resource for the news media or voters, the information helped lobbyists monitor congressional voting patterns. As a general rule, lawmakers must vote as a particular interest group wishes them to vote, or risk losing support.\n",
"What happens after debate stops depends on the legislature in question. In the United States Congress, bells are rung in the various congressional office buildings to indicate to members that their presence is required in their respective chambers. Members of the House use the same electronic system as is used for voting to register their presence; in the Senate, one of the clerks will read out a roll call of senators, who indicate their presence when called. In fact, if any Senator \"suggests the absence of a quorum,\" the Presiding Officer must direct the roll to be called. For practical purposes, a quorum call is a delaying measure that permits the Senate leadership to work out some difficulty or to await a Senator's arrival.\n",
"The people, by their Constitution, affirmatively posited, defined, and delimited all qualifications for standing in elections for membership in the Congress. The states, under the 9th and 10th amendments explicitly retain unto themselves the power to make the laws for the government and regulation of elections for federal offices that are apportioned to them (the states) by the US Constitution. Therefore, the people and the states together have the sole authority for the creation, production, and generation of candidate members of the US Congress through the operation of the laws of the several states and the articles and clauses of the US Constitution. Thus, the Congress itself is become a creation of and subordinate to this process. Congress's processes and procedures for the management, administration, and discipline of members (once they have taken the oath, been sworn, and entered upon the rolls) are constitutionally subordinate to the sovereignty of the people and the states respectively over the creation of the membership of Congress.\n"
] |
What exactly is asthma? Like what is it, how does it form, and how severe can it get? | Asthma is, essentially, a chronic inflammatory disease. After being exposed to an allergen, your body sensitizes itself to that allergen, so that in the future it garners a large immune response. In the early phase, you have cells that release a substance called histamine; this causes bronchoconstriction (basically your airways getting smaller) and makes it difficult for you to breathe all of your air out. So you get hyperinflated lungs, and people tend to hyperventilate. After this there is usually a later inflammatory phase that involves swelling of the airways. So you get airways that are smaller and more prone to collapse.
Asthma ranges from mild to extremely severe. Luckily, we have pretty good medications that help prevent attacks and quickly treat them when they do occur. | [
"Asthma is a common long-term inflammatory disease of the airways of the lungs. It is characterized by variable and recurring symptoms, reversible airflow obstruction, and easily triggered bronchospasms. Symptoms include episodes of wheezing, coughing, chest tightness, and shortness of breath. These may occur a few times a day or a few times per week. Depending on the person, they may become worse at night or with exercise.\n",
"Asthma is an obstructive lung disease where the bronchial tubes (airways) are extra sensitive (hyperresponsive). The airways become inflamed and produce excess mucus and the muscles around the airways tighten making the airways narrower. Asthma is usually triggered by breathing in things in the air such as dust or pollen that produce an allergic reaction. It may be triggered by other things such as an upper respiratory tract infection, cold air, exercise or smoke. Asthma is a common condition and affects over 300 million people around the world.\n",
"Asthma is thought to be caused by a combination of genetic and environmental factors. Environmental factors include exposure to air pollution and allergens. Other potential triggers include medications such as aspirin and beta blockers. Diagnosis is usually based on the pattern of symptoms, response to therapy over time, and spirometry. Asthma is classified according to the frequency of symptoms, forced expiratory volume in one second (FEV1), and peak expiratory flow rate. It may also be classified as atopic or non-atopic, where atopy refers to a predisposition toward developing a type 1 hypersensitivity reaction.\n",
"While asthma is a well-recognized condition, there is not one universal agreed upon definition. It is defined by the Global Initiative for Asthma as \"a chronic inflammatory disorder of the airways in which many cells and cellular elements play a role. The chronic inflammation is associated with airway hyper-responsiveness that leads to recurrent episodes of wheezing, breathlessness, chest tightness and coughing particularly at night or in the early morning. These episodes are usually associated with widespread but variable airflow obstruction within the lung that is often reversible either spontaneously or with treatment\".\n",
"Like other types of asthma, it is characterized by airway inflammation, reversible airways obstruction, and bronchospasm, but it is caused by something in the workplace environment. Symptoms include shortness of breath, tightness of the chest, coughing, sputum production and wheezing. Some patients may also develop upper airway symptoms such as itchy eyes, tearing, sneezing, nasal congestion and rhinorrhea.\n",
"Acute severe asthma is an acute exacerbation of asthma that does not respond to standard treatments of bronchodilators (inhalers) and corticosteroids. Symptoms include chest tightness, rapidly progressive dyspnea (shortness of breath), dry cough, use of accessory respiratory muscles, fast and/or labored breathing, and extreme wheezing. It is a life-threatening episode of airway obstruction and is considered a medical emergency. Complications include cardiac and/or respiratory arrest.\n",
"Asthma is characterized by recurrent episodes of wheezing, shortness of breath, chest tightness, and coughing. Sputum may be produced from the lung by coughing but is often hard to bring up. During recovery from an attack, it may appear pus-like due to high levels of white blood cells called eosinophils. Symptoms are usually worse at night and in the early morning or in response to exercise or cold air. Some people with asthma rarely experience symptoms, usually in response to triggers, whereas others may have marked reactivity and persistent symptoms.\n"
] |
What happens if an electron meets the nucleus of an atom? | _URL_0_
It has nothing to do with temperature. Electrons are already in the lowest energy state they can occupy relative to the nucleus. | [
"In the quantum mechanical model of the electron, there is a non-zero probability of finding the electron within the nucleus. During the internal conversion process, the wavefunction of an inner shell electron (usually an \"s\" electron) is said to penetrate the volume of the atomic nucleus. When this happens, the electron may couple to an excited energy state of the nucleus and take the energy of the nuclear transition directly, without an intermediate gamma ray being first produced. The kinetic energy of the emitted electron is equal to the transition energy in the nucleus, minus the binding energy of the electron to the atom.\n",
"A classical electron orbiting a nucleus experiences acceleration and should radiate. Consequently, the electron loses energy and the electron should eventually spiral into the nucleus. Atoms, according to classical mechanics, are consequently unstable. This classical prediction is violated by the observation of stable electron orbits. The problem is resolved with a quantum mechanical description of atomic physics, initially provided by the Bohr model. Classical solutions to the stability of electron orbitals can be demonstrated using Non-radiation conditions and in accordance with known physical laws.\n",
"This is also evident from phenomena like electron capture. Theoretically, in orbital models of heavy atoms, the electron orbits partially inside the nucleus (it does not \"orbit\" in a strict sense, but has a non-vanishing probability of being located inside the nucleus).\n",
"By the dawn of the 20th century, evidence required a model of the atom with a diffuse cloud of negatively charged electrons surrounding a small, dense, positively charged nucleus. These properties suggested a model in which electrons circle around the nucleus like planets orbiting a sun. However, it was also known that the atom in this model would be unstable: according to classical theory, orbiting electrons are undergoing centripetal acceleration, and should therefore give off electromagnetic radiation, the loss of energy also causing them to spiral toward the nucleus, colliding with it in a fraction of a second.\n",
"In 1913 Niels Bohr proposed a new model of the atom that included quantized electron orbits: electrons still orbit the nucleus much as planets orbit around the sun, but they are permitted to inhabit only certain orbits, not to orbit at any distance. When an atom emitted (or absorbed) energy, the electron did not move in a continuous trajectory from one orbit around the nucleus to another, as might be expected classically. Instead, the electron would jump instantaneously from one orbit to another, giving off the emitted light in the form of a photon. The possible energies of photons given off by each element were determined by the differences in energy between the orbits, and so the emission spectrum for each element would contain a number of lines.\n",
"Consider an electron of charge \"-e\" and an atomic nucleus with charge \"+Ze\", where \"Z\" is the number of protons in the nucleus. According to the Bohr model, if the electron were to approach and bond with the atom, it would come to rest at a certain radius \"a\". The electrostatic potential \"V\" at distance \"a\" from the ionic nucleus, referenced to a point infinitely far away, is:\n",
"Often, as an electron precipitates, it is directed into the upper atmosphere where it may collide with neutral particles, thus depleting the electron's energy. If an electron makes it through the upper atmosphere, it will continue into the ionosphere. Groups of precipitated electrons can change the shape and conductivity of the ionosphere by colliding with atoms or molecules (usually oxygen or nitrogen based particles) in the region. When colliding with an atom, the electron strips the atom of its other electrons creating an ion. Collisions with the air molecules also release photons which provide a dim \"aurora\" effect. Because this occurs at such a high altitude, humans in aircraft are not affected by the radiation.\n"
] |
if you want to heat an oven or stove to 175 degrees, does turning it way up to 400 degrees make it get to 175 faster? | For an electric oven, no. The temperature control is a thermostat, not an accelerator. If the oven is below the set point, the element is on; otherwise it's off. It will take the same amount of time regardless of the setting.
For a stove, it depends. A gas stove uses valves to control how much gas comes out, so turning it up will heat the pan faster. A regular electric stove turns the element on more if you turn the knob higher, so it will also heat faster. An infrared or ceramic cooktop works with a thermostat, as described above, so turning it up will not speed it up (a toy sketch of the on/off thermostat logic follows this entry). | [
"When there is a high temperature differential (e.g., when an air-source heat pump is used to heat a house with an outside temperature of, say, 0 °C (32 °F)), it takes more work to move the same amount of heat to indoors than on a milder day. Ultimately, due to Carnot efficiency limits, the heat pump's performance will decrease as the outdoor-to-indoor temperature difference increases (outside temperature gets colder), reaching a theoretical limit of 1.0 at −273 °C. In practice, a COP of 1.0 will typically be reached at an outdoor temperature around −18 °C (0 °F) for air source heat pumps.\n",
"Maximal possible conversion of heat to work, or exergy content of heat, depends on the temperature at which heat is available and the temperature level at which the reject heat can be disposed, that is the temperature of the surrounding. The upper limit for conversion is known as Carnot efficiency and was discovered by Nicolas Léonard Sadi Carnot in 1824. See also Carnot heat engine.\n",
"A complete cycle involves heating the oven to the required temperature, maintaining that temperature for the proper time interval for that temperature, turning the machine off and cooling the articles in the closed oven till they reach room temperature. The standard settings for a hot air oven are:\n",
"The burn temperature in modern stoves can increase to the point where secondary and complete combustion of the fuel takes place. A properly fired masonry heater has little or no particulate pollution in the exhaust and does not contribute to the buildup of creosote in the heater flues or the chimney. Some stoves achieve as little as 1 to 4 grams per hour. This is roughly 10% as much smoke than older stoves, and equates to nearly zero visible smoke from the chimney. This is largely achieved through causing the maximum amount of material to combust, which results in a net efficiency of 60 to 70%, as contrasted to less than 30% for an open fireplace. Net efficiency is defined as the amount of heat energy transferred to the room compared to the amount contained in the wood, minus any amount central heating must work to compensate for airflow problems.\n",
"In 1952 the \"Oregonian\" reported: \"Automobile cigarette lighters produced by the Rochester Automotive products division of General Motors are tested to reach a temperature of 1400 degrees in no less than 10 and no more than 12 seconds.\" \n",
"Instead of letting your central heating system cool down completely, so that you often have to keep switching it on for a short time to give your home a big blast of heat, it is best to keep your central heating running continuously with the central wall-mounted thermostat set at the lowest temperature at which you feel comfortable. Doing this could save you money because you will not be wasting so much fuel, especially if your home is well insulated.\n",
"The various standard phrases, to describe oven temperatures, include words such as \"cool\" to \"hot\" or \"very slow\" to \"fast\". For example, a \"cool oven\" has temperature set to 200 °F (90 °C), and a \"slow oven\" has a temperature range from 300-325 °F (150-160 °C). A \"moderate oven\" has a range of 350-375 °F (180-190 °C), and a \"hot oven\" has temperature set to 400-450 °F (200-230 °C). A \"fast oven\" has a range of 450-500 °F (230-260 °C) for the typical temperature.\n"
] |
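The electric-oven answer above describes plain on/off (bang-bang) thermostat control. Below is a minimal Python sketch of that idea, assuming a toy lumped heating model with invented heating and loss rates (not real oven physics); it only illustrates why the time to first reach 175 degrees is the same whether the dial is set to 175 or to 400.

```python
# Toy bang-bang oven model -- rates are invented for illustration, not measured.
def time_to_reach(target_temp, setpoint, ambient=70.0,
                  heat_rate=5.0, loss_rate=0.01, dt=0.1):
    """Return the (arbitrary-unit) time until the oven first hits target_temp.

    The element is fully ON below the setpoint and fully OFF above it, so the
    heating curve below the setpoint is identical for any setpoint at or above
    target_temp.
    """
    temp, t = ambient, 0.0
    while temp < target_temp:
        element_on = temp < setpoint  # a thermostat is a switch, not a throttle
        d_temp = (heat_rate if element_on else 0.0) - loss_rate * (temp - ambient)
        temp += d_temp * dt
        t += dt
    return t

print(time_to_reach(175, setpoint=175))  # same value...
print(time_to_reach(175, setpoint=400))  # ...as this one
```

A gas burner, by contrast, would be modelled by scaling heat_rate with the knob position, which is why turning a gas stove up really does heat the pan faster.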
How are scientists sure that the theory of a singularity at the centre of a black hole, is correct? | > how are scientists so sure that black holes have a singularity where all our understanding of physics breaks down rather than saying that the theory may have inaccuracies?
When they say "all our understanding of physics breaks down," that means the same thing in this context as "the theory may have inaccuracies." "The laws of physics break down," or some similar variant, is a pretty common hyperbole in this and other scenarios, but what is really meant is that the laws of physics we use break down. Inside a black hole, it's likely that a theory of quantum gravity will dominate, and we don't have a working (and experimentally verified) theory of quantum gravity.
*Classically* the existence of singularities is inescapable. The Hawking-Penrose singularity theorems guarantee that a singularity will form under reasonably physical conditions that come with the formation of a black hole in general relativity. However, nature is not fully classical. I'd wager most physicists working near the field suspect that this singularity will disappear completely in a full theory of quantum gravity. Physicists don't like singularities hanging around in their theories. | [
"In simple terms, he believes that the singularity in Einstein's field equation at the Big Bang is only an apparent singularity, similar to the well-known apparent singularity at the event horizon of a black hole. The latter singularity can be removed by a change of coordinate system, and Penrose proposes a different change of coordinate system that will remove the singularity at the big bang. One implication of this is that the major events at the Big Bang can be understood without unifying general relativity and quantum mechanics, and therefore we are not necessarily constrained by the Wheeler–DeWitt equation, which disrupts time. Alternatively, one can use the Einstein–Maxwell–Dirac equations.\n",
"One cannot predict what might come \"out\" of a big-bang singularity in our past, or what happens to an observer that falls \"in\" to a black-hole singularity in the future, so they require a modification of physical law. Before Penrose, it was conceivable that singularities only form in contrived situations. For example, in the collapse of a star to form a black hole, if the star is spinning and thus possesses some angular momentum, maybe the centrifugal force partly counteracts gravity and keeps a singularity from forming. The singularity theorems prove that this cannot happen, and that a singularity will always form once an event horizon forms.\n",
"While in a non-rotating black hole the singularity occurs at a single point in the model coordinates, called a \"point singularity\", in a rotating black hole, also known as a Kerr black hole, the singularity occurs on a ring (a circular line), known as a \"ring singularity\". Such a singularity may also theoretically become a wormhole.\n",
"Penrose demonstrated that once an event horizon forms, general relativity without quantum mechanics requires that a singularity will form within. Shortly afterwards, Hawking showed that many cosmological solutions that describe the Big Bang have singularities without scalar fields or other exotic matter (see \"Penrose–Hawking singularity theorems\"). The Kerr solution, the no-hair theorem, and the laws of black hole thermodynamics showed that the physical properties of black holes were simple and comprehensible, making them respectable subjects for research. Conventional black holes are formed by gravitational collapse of heavy objects such as stars, but they can also in theory be formed by other processes.\n",
"A similar situation occurs in general relativity with the gravitational singularity associated with the Schwarzschild solution that describes the geometry of a black hole. The curvature of spacetime at the singularity is infinite which is another way of stating that the theory does not describe the physical conditions at this point. It is hoped that the solution to this paradox will be found with a consistent theory of quantum gravity, something which has thus far remained elusive. A consequence of this paradox is that the associated singularity that occurred at the supposed starting point of the universe (see Big Bang) is not adequately described by physics. Before a theoretical extrapolation of a singularity can occur, quantum mechanical effects become important in an era known as the Planck time. Without a consistent theory, there can be no meaningful statement about the physical conditions associated with the universe before this point.\n",
"Rotating black holes have surfaces where the metric appears to have a singularity; the size and shape of these surfaces depends on the black hole's mass and angular momentum. The outer surface encloses the ergosphere and has a shape similar to a flattened sphere. The inner surface marks the \"radius of no return\" also called the event horizon; objects passing through this radius can never again communicate with the world outside that radius. However, neither surface is a true singularity, since their apparent singularity can be eliminated in a different coordinate system. Objects between these two horizons must co-rotate with the rotating body, as noted above; this feature can be used to extract energy from a rotating black hole, up to its invariant mass energy, \"Mc\".\n",
"General relativity predicts that any object collapsing beyond a certain point (for stars this is the Schwarzschild radius) would form a black hole, inside which a singularity (covered by an event horizon) would be formed. The Penrose–Hawking singularity theorems define a singularity to have geodesics that cannot be extended in a smooth manner. The termination of such a geodesic is considered to be the singularity.\n"
] |
eli 5 if there is so much junk and satellites orbiting earth, how come we never see any of it in the background of pictures taken from space? | Because there *isn't* that much.
Yes there is a lot of it, but the amount of space is *huge*. You're talking about a density like one Volkswagen Beetle in the state of Texas (a rough back-of-envelope version of this follows the entry). | [
"This is a list of satellite map images with missing or unclear data. Some locations on free, publicly viewable satellite map services have such issues due to having been intentionally digitally obscured or blurred for various reasons. For example, Westchester County, New York asked Google to blur potential terrorism targets (such as an amusement park, a beach, and parking lots) from its satellite imagery. There are cases where the censorship of certain sites was subsequently removed. For example, when Google Maps and Google Earth were launched, images of the White House and United States Capitol were blurred out; however, these sites are now uncensored.\n",
"Matters of weather play a large role in IMINT failure. While radar imaging can see through clouds, it is unlikely that a general satellite sweep could find something buried under a few feet of snow or in a frozen lake. Another problem with satellite imagery is that it is a simple snapshot in time. If the satellite that captures the image is not in a geo-synchronous orbit, there is a risk of the target not being there when the satellite passes over the area again. There is also the possibility of camouflage. For example, the entrance to an underground bunker may be camouflaged with foliage and it would take an arduous examination of the image to find the information needed. Another potential failure is a satellite being unavailable at the time needed because it is being used for other intelligence purposes, and the situation or event of interest is missed. Images can also be misinterpreted, generating misleading information and potentially supporting a bad decision.\n",
"A total of 29 pictures were taken, covering 70% of the far side. After the photography was complete the spacecraft resumed spinning, passed over the north pole of the Moon and returned towards the Earth. Attempts to transmit the pictures to the Soviet Union began on October 8 but the early attempts were unsuccessful due to the low signal strength. As Luna 3 drew closer to the Earth, a total of about 17 viewable but poor quality photographs were transmitted by 18 October. All contact with the probe was lost on 22 October 1959. The space probe was believed to have burned up in the Earth's atmosphere in March or April 1960. Another possibility was that it might have survived in orbit until 1962 or later.\n",
"All Apollo flights were heavily scheduled down to the minute. At the time this photo was taken, none of the astronauts was scheduled to do so. Thus this photo was taken quickly in a stolen moment. The astronaut who took the picture was weightless, and the continents were hard to see, and he took the photo quickly, which explains why he held the camera upside down compared to the north up orientation of all maps. If every photo on this roll of film is printed, and all of the photos on the roll of film are oriented the same way, then when viewed in sequence, to put feet down and heads up, this photo will have the south pole up, breaking the map convention.\n",
"Many pictures have been taken of the entire Earth by satellites launched by a variety of governments and private organizations. From high orbits, where half the planet can be seen at once, it is plainly spherical. The only way to piece together all the pictures taken of the ground from lower orbits so that all the surface features line up seamlessly and without distortion is to put them on an approximately spherical surface.\n",
"Space debris photographed in 1998 during the STS-88 mission has been widely claimed to be the Black Knight satellite. Space journalist James Oberg considers it probable that the photographs are of a thermal blanket that was confirmed as lost during an EVA by Jerry L. Ross and James H. Newman.\n",
"Patches may be large enough to be viewed by satellite. For example, when the Malaysian Flight MH370, disappeared in 2014, satellites were scanning the oceans surface for any sign of it, and instead of finding debris from the plane they came across floating garbage. The gyre contains approximately six pounds of plastic for every pound of plankton.\n"
] |
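The Beetle-in-Texas comparison above is a density argument, and it can be reproduced as back-of-envelope arithmetic. The Python sketch below uses assumed round numbers (about 30,000 tracked objects spread through a low-Earth-orbit shell from roughly 400 to 2,000 km altitude); these figures are illustrative assumptions, not values from the original answer.

```python
import math

EARTH_RADIUS_KM = 6371.0
tracked_objects = 30_000           # assumed count of tracked satellites/debris
inner = EARTH_RADIUS_KM + 400      # assumed bottom of the LEO shell (km)
outer = EARTH_RADIUS_KM + 2000     # assumed top of the LEO shell (km)

# Volume of the spherical shell that all of these objects share.
shell_volume_km3 = 4 / 3 * math.pi * (outer**3 - inner**3)
km3_per_object = shell_volume_km3 / tracked_objects
average_spacing_km = km3_per_object ** (1 / 3)

print(f"Shell volume:    {shell_volume_km3:.2e} km^3")
print(f"Per object:      {km3_per_object:.2e} km^3")
print(f"Typical spacing: ~{average_spacing_km:.0f} km between objects")
```

With these assumptions each object has tens of millions of cubic kilometres to itself, so neighbours are typically hundreds of kilometres apart, which is why orbital debris essentially never shows up in the background of photographs taken from space.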
What's past the cosmological horizon? | By definition, we can't know; the observable universe encompasses all that we can have any information about. But there's no particular reason to imagine it would be much different from the local area--it's just impossible to confirm.
"The cosmological horizon (also called the particle horizon or the light horizon) is the maximum distance from which particles can have traveled to the observer in the age of the Universe. This horizon represents the boundary between the observable and the unobservable regions of the Universe. The existence, properties, and significance of a cosmological horizon depend on the particular cosmological model.\n",
"A cosmological horizon is a measure of the distance from which one could possibly retrieve information. This observable constraint is due to various properties of general relativity, the expanding universe, and the physics of Big Bang cosmology. Cosmological horizons set the size and scale of the observable universe. This article explains a number of these horizons.\n",
"Cosmologists normally work with a given space-like slice of spacetime called the comoving coordinates, the existence of a preferred set of which is possible and widely accepted in present-day physical cosmology. The section of spacetime that can be observed is the backward light cone (all points within the cosmic light horizon, given time to reach a given observer), while the related term Hubble volume can be used to describe either the past light cone or comoving space up to the surface of last scattering. To speak of \"the shape of the universe (at a point in time)\" is ontologically naive from the point of view of special relativity alone: due to the relativity of simultaneity we cannot speak of different points in space as being \"at the same point in time\" nor, therefore, of \"the shape of the universe at a point in time\". However, the comoving coordinates (if well-defined) provide a strict sense to those by using the time since the Big Bang (measured in the reference of CMB) as a distinguished universal time.\n",
"In cosmology, the event horizon of the observable universe is the largest comoving distance from which light emitted \"now\" can ever reach the observer in the future. This differs from the concept of particle horizon, which represents the largest comoving distance from which light emitted in the \"past\" could have reached the observer at a given time. For events beyond that distance, light has not had time to reach our location, even if it were emitted at the time the universe began. How the particle horizon changes with time depends on the nature of the expansion of the universe. If the expansion has certain characteristics, there are parts of the universe that will never be observable, no matter how long the observer waits for light from those regions to arrive. The boundary past which events cannot ever be observed is an event horizon, and it represents the maximum extent of the particle horizon.\n",
"In the case of a horizon perceived by an occupant of a de Sitter universe, the horizon always appears to be a fixed distance away for a non-accelerating observer. It is never contacted, even by an accelerating observer.\n",
"The horizon problem results from the premise that information cannot travel faster than light. In a universe of finite age this sets a limit—the particle horizon—on the separation of any two regions of space that are in causal contact. The observed isotropy of the CMB is problematic in this regard: if the universe had been dominated by radiation or matter at all times up to the epoch of last scattering, the particle horizon at that time would correspond to about 2 degrees on the sky. There would then be no mechanism to cause wider regions to have the same temperature.\n",
"In this equation, \"a\" is the scale factor, \"c\" is the speed of light, and \"t\" is the age of the Universe. If (i.e., points arbitrarily as far away as can be observed), then no event horizon exists. If , a horizon is present.\n"
] |
why couldn't data be transmitted back to us beyond the event horizon of a black hole, i understand gravity prevents light from escaping, but how, and would it be a similar scenario for data? | Well, first off, the data would be travelling as some frequency of light, like radio. So, consider them one and the same.
The way it prevents light from escaping is that gravity actually bends space. Light travels along space, and when [space gets curved](_URL_0_), the light has a longer path to follow, so its journey takes a little longer.
In the case of a black hole, space gets bent so much that the light can't get out -- [kind of like a deep, deep hole](_URL_1_). | [
"Ever since Stephen Hawking suggested information is lost in an evaporating black hole once it passes through the event horizon and is inevitably destroyed at the singularity, and that this can turn pure quantum states into mixed states, some physicists have wondered if a complete theory of quantum gravity might be able to conserve information with a unitary time evolution. But how can this be possible if information cannot escape the event horizon without traveling faster than light? This seems to rule out Hawking radiation as the carrier of the missing information. It also appears as if information cannot be \"reflected\" at the event horizon as there is nothing special about it locally.\n",
"An object can cross through the event horizon of a black hole from the outside, and then fall rapidly to the central region where our understanding of physics breaks down. Since within a black hole the forward light-cone is directed towards the center and the backward light-cone is directed outward, it is not even possible to define time-reversal in the usual manner. The only way anything can escape from a black hole is as Hawking radiation.\n",
"- At the event horizon, formula_30 the speed of light shining outward away from the center of the black hole is formula_31 It cannot escape from the event horizon. Instead, it gets stuck at the event horizon. Since light moves faster than all others, matter can only move inward at the event horizon. Everything inside the event horizon is hidden from the outside world.\n",
"However, if the object passes too close to the central supermassive black hole, it will make a direct plunge across the event horizon. This will produce a brief violent burst of gravitational radiation which would be hard to detect with currently planned observatories. Consequently, the creation of EMRI requires a fine balance between objects passing too close and too far from the central supermassive black hole. Currently, the best estimates are that a typical supermassive black hole of , will capture an EMRI once every 10 to 10 years. This makes witnessing such an event in our Milky Way unlikely. However, a space based gravitational wave observatory like LISA will be able to detect EMRI events up to cosmological distances, leading to an expected detection rate somewhere between a few and a few thousand per year.\n",
"On the other hand, indestructible observers falling into a black hole do not notice any of these effects as they cross the event horizon. According to their own clocks, which appear to them to tick normally, they cross the event horizon after a finite time without noting any singular behaviour; in classical general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.\n",
"While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Hence any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon.\n",
"Black hole – mathematically defined region of spacetime exhibiting such a strong gravitational pull that no particle or electromagnetic radiation can escape from inside it. The theory of general relativity predicts that a sufficiently compact mass can deform spacetime to form a black hole. The boundary of the region from which no escape is possible is called the event horizon. Although crossing the event horizon has enormous effect on the fate of the object crossing it, it appears to have no locally detectable features. In many ways a black hole acts like an ideal black body, as it reflects no light. Moreover, quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is on the order of billionths of a kelvin for black holes of stellar mass, making it essentially impossible to observe.\n"
] |
I'm going to jail tomorrow and I want to read the best nonfiction books about History. What do you suggest? | I'm sorry to hear that. Are you planning to buy your books on the outside or use the library system?
For the Civil War, I would start with either (or both) James McPherson's Battle Cry of Freedom and Shelby Foote's The Civil War. Those are two mainstays that you should be able to find relatively easily. If either of those piques your interest, I can make further recommendations based on what you're interested in. Battles and campaigns? Politics and diplomacy? Economics and organization?
In any case, feel free to PM me if you'd like to discuss it more. | [
"In 2017, James wrote the foreword for the U.K. edition of \"The Crime Book\", with American crime author Cathy Scott writing the foreword for the U.S. edition. The nonfiction book, a volume in the \"Big Ideas Simply Explained\" series, was released by Dorling Kindersley (Penguin Random House) in April 2017 in the U.K. and May 2017 in the U.S..\n",
"The Crime Book (Big Ideas Simply Explained) is a non-fiction volume co-authored by American crime writers Cathy Scott, Shanna Hogan, Rebecca Morris, Canadian author and historian Lee Mellor, and United Kingdom author Michael Kerrigan, with a foreword for the U.S. edition by Scott and the U.K. edition by crime-fiction author Peter James. It was released by DK Books under its Big Ideas Learning imprint in May 2017.\n",
"This book is currently used in both college and Advanced Placement high school courses across the United States. The book is roughly 780 pages and includes the U.S. Constitution, U.S. Bill of Rights, outcomes of various elections throughout American history, and famous court cases. It is accompanied by a companion website that features practice test questions and detailed explanations on each chapter.\n",
"The program is guided by the Literature for Justice committee, an assemblage of well-known authors who are also experts, leaders, and advocates within the space of mass incarceration. This committee is tasked with the creation and selection of a reading list of five books annually to guide readers through this complex issue, with the hope that these texts will help shift public perception and understand of mass incarceration through the power of storytelling. Collectively, the selected books tell a story about America's carceral system and what it means for all Americans.\n",
"Scott co-authored \"The Crime Book\" volume with American crime writers Shanna Hogan, Lee Mellor, Rebecca Morris and British author Michael Kerrigan, with a foreword for the U.S. edition by Scott and the U.K. edition by author Peter James. It was released in April 2017 in the U.K. and May 2017 in the U.S. by Dorling Kindersley (Penguin Random House). In an August 2017 interview with \"Rolling Stone\" magazine, Scott explained the choice of stories for the book: \"We tried to include the famous ones, and then some lesser-known. They needed to be from across the world and across the years and across a variety of crimes.\"\n",
"The book is about common law in the United States, including torts, property, contracts, and crime. It is written as a series of lectures. It has gone out of copyright and is available in full on the web at Project Gutenberg.\n",
"The book launched with a first of its kind online \"Exhibit Hall\" that allows readers to review actual case materials such as: photographs and video from inside prisons, headshots of main characters, audio tapes from trial, autopsy reports, government documents, witness statements, crime scene photos, original police case file, defense motions, court rulings, and newspaper articles. The book's foreword was written by investigative journalist Bill Kurtis host of A&E Network's Investigative Reports, \"American Justice\", and \"Cold Case Files\". Bill Kurtis stated, \"This story should be issued with every passport.”\n"
] |
why do horses let humans ride them? | Being rideable was an important part of the domestication process of horses. Early on (and even still), horses that tolerated carrying loads were kept and bred. Horses that did not tolerate loads were not worth their weight in feed, so they were not kept and bred.
Via similar methods, we've also selected for dogs that provide companionship and cats that don't claw your arm off when you touch them. | [
"Horses are herd animals, with a clear hierarchy of rank, led by a dominant individual, usually a mare. They are also social creatures that are able to form companionship attachments to their own species and to other animals, including humans. They communicate in various ways, including vocalizations such as nickering or whinnying, mutual grooming, and body language. Many horses will become difficult to manage if they are isolated, but with training, horses can learn to accept a human as a companion, and thus be comfortable away from other horses. However, when confined with insufficient companionship, exercise, or stimulation, individuals may develop stable vices, an assortment of bad habits, mostly stereotypies of psychological origin, that include wood chewing, wall kicking, \"weaving\" (rocking back and forth), and other problems.\n",
"Horses are required to go on the bit in certain riding disciplines, such as dressage. However, all horses ridden on contact are generally encouraged to go on the bit, as this not only makes them more responsive to the rider's aids, but also allows them to move in a more athletic manner since the animal is raising its back and bringing its hocks further under its body.\n",
"Communication between human and horse is paramount in any equestrian activity; to aid this process horses are usually ridden with a saddle on their backs to assist the rider with balance and positioning, and a bridle or related headgear to assist the rider in maintaining control. Sometimes horses are ridden without a saddle, and occasionally, horses are trained to perform without a bridle or other headgear. Many horses are also driven, which requires a harness, bridle, and some type of vehicle.\n",
"Racing is the most popular form of animal-related sport, particularly horse racing. Some racing events directly involve humans as riders while others see the animals race alone. In some sports the rider is not directly riding the animal, instead being pulled along. Examples of this include harness racing, dogsled racing and popular ancient Greece and Roman Empire sport of chariot racing.\n",
"Horses do, in fact, have rights when it comes to being on the road, similarly to cyclists and runners who utilize the roadways. However, there are specific rules and regulations that they must abide, as well. Horseback riders must ride with traffic, as far to the right as possible on the roadway. However, many equestrians believe that riding against traffic is a safer way to use the roadway with a horse. Horses do best when they can see what is coming towards them, rather than guessing what is coming towards them from behind. Although this is not the same in each state in the United States of America, the vast majority follow this rule. From state to state, some statues and regulations vary. For example, the state of New York has a very set comprehensive list of rules for the use of horses on the road- both being ridden upon and being horse-driven vehicles. The state of Louisiana prohibits the riding of a horse on any asphalt-based road. There are many states that prohibit the driving or riding of horses on the right of way on a limited access highway like an interstate highway. A lot of the regulations are similar. They include only passing the horse-driven vehicle or horseback rider when it is safe to do so and prohibiting the use of any form of noise, such as a horn. In order to minimize the number of accidents that occur with horse and road distractions, people who are actively driving should try to be as cautious as possible to try and avoid these situations.\n",
"Horses communicate in various ways, including vocalizations such as nickering, squealing or whinnying; touch, through mutual grooming or nuzzling; smell; and body language. Horses use a combination of ear position, neck and head height, movement, and foot stomping or tail swishing to communicate. Discipline is maintained in a horse herd first through body language and gestures, then, if needed, through physical contact such as biting, kicking, nudging, or other means of forcing a misbehaving herd member to move. In most cases, the animal that successfully causes another to move is dominant, whether it uses only body language or adds physical reinforcement.\n",
"Some animals are used due to sheer physical strength in tasks such as ploughing or logging. Such animals are grouped as a draught or draft animal. Others may be used as pack animals, for animal-powered transport, the movement of people and goods. Some animals are ridden by people on their backs and are known as “mounts”; Alternatively, one or more animals in harness may be used to pull vehicles.\n"
] |
what are nfl coaches writing during a game on the sidelines? | They're writing down exactly how hard they're going to cut you next season, *Kelvin Benjamin*. | [
"BULLET::::- How is the choice sent onto the field? In the NFL, a player is in radio contact with the sidelines for a defined interval before each play. The team can send a substitute player onto the field who knows the play the coaches want to run. Personnel on the sidelines can call plays using hand signals or pictures. If the team has called a time-out, the coaches can give players detailed instructions; but there are only three time-outs per half, and they are usually needed for tactical reasons.\n",
"In the National Football League backup players, particularly the quarterback, are seen on the sidelines carrying a clipboard. Football analysts often use the notion of \"carrying a clipboard\" as an object of derision indicating that said football player is not good enough to play on the field.\n",
"First, the quarterback receives the football from the center. The quarterback then starts the play in one direction by appearing to hand the football to the fullback right behind the play side guard on a standard fullback dive play. The guard \"chips\" the 3-technique (defensive tackle) and blocks the play side (the side where the play is going) inside linebacker (usually called the \"mike\", or middle linebacker). The quarterback then reads the unblocked defensive lineman. If the lineman attacks the fullback, the quarterback pulls the ball from the fullback's gut and continues down the line, but if the defensive lineman goes outside to contain the play, he hands off inside to the fullback. The offensive tackle on the side of the play's direction does not block the defensive end and instead moves to block the first threat, usually the linebacker stacked behind the defensive end. In the traditional triple option the backside tailback will take a parallel course down the line of scrimmage keeping a three to five yard separation from the quarterback. If the defensive end comes inside toward the quarterback, he will pitch it outside to the trailing halfback. If the defensive end retains outside leverage and covers the trailing halfback closely, the quarterback will keep the ball and run upfield inside of the defensive end. The tailback to the play side is responsible for blocking one of the defensive backs, usually one of the deep safeties. The wide receiver (WR) to the play side is responsible for blocking the corner back assigned to cover them if the defense were playing man coverage.\n",
"The most popular running play employed in the spread is the read option. This play is also known as the zone read, QB choice, or QB wrap. A type of double option, the read option is a relatively simple play during which the offensive line zone blocks in one direction, ignoring defensive personnel, while the quarterback makes a single read (usually of the backside defensive end or linebacker) and decides whether to keep the ball (if the backside defender crashes down) or to hand off to the back (if the defender indicates that he will cover the quarterback).\n",
"The zone read, or shotgun veer play, is now widely used throughout all levels of college football. A running back is lined up adjacent to the quarterback, and, at the snap, the quarterback opens up facing the running back. He reads the end on the same side as the running back. The running back is performing effectively the same motion as the dive back in a conventional veer, except he runs at the defensive end on the opposite side of the field. If the unblocked end on the running back's side (who, in a sense, is being veered) moves up the field towards the crossing running back, the quarterback pulls the ball from the running back and sprints by the end. If the veered end is waiting at his original position, the quarterback gives the ball to the running back. Many different formations are employed, and as a general rule, the option being employed is the base offense for the team, and not as a wrinkle.\n",
"Unlike \"Sunday NFL Countdown\", \"NFL Primetime\", \"Monday Night Countdown\", \"NFL Insiders\", and \"NFL Live\", \"NFL Matchup\" gives fans an in-depth look at the NFL by breaking down the strategy and tactics—the \"X's and O's\", after the symbols commonly used by coaches to diagram plays—of every pro football game. The program's analysts do this through the exclusive use of team-supplied coaching footage, the same video coaches and players use each week to prepare game plans and strategy.\n",
"Sanders works at NFL Network as an analyst on a number of the network's shows. Prior to the Sunday night game, Sanders, alongside host Rich Eisen and Steve Mariucci, breaks down all the action from the afternoon games on \"NFL GameDay\". At the conclusion of all the action on Sunday, Sanders, Mariucci, Michael Irvin and host Fran Charles recap the day's action with highlights, analysis and postgame interviews. For the 2010 season, Sanders joined Eisen, Mariucci and Marshall Faulk on the road for \"Thursday Night Kickoff presented by Lexus\", NFL Network's two-hour pregame show leading into \"Thursday Night Football\". The group broadcasts live from the stadium two hours prior to all eight live \"Thursday Night Football\" games and returns for the Sprint halftime show and Kay Jewelers postgame show. Sanders also has a segment called \"Let's Go Primetime\" on NFL Network.\n"
] |
Why are things getting blue with the horizon? | It's the same reason that the sky is blue--Rayleigh scattering. This type of scattering only occurs for suspended particles much smaller than the wavelength of visible light (roughly a tenth of a micron or less in diameter). It preferentially scatters light with the shortest wavelengths.
So basically, when you look off into the distance like that, there are many, many tiny particles in the atmosphere. There may not be more per unit volume than there are around you all the time, but from your vantage point they have a larger effect because the light has to pass through more of them. That scatters the blue (shortest wavelength) light preferentially, just like the particles in the sky do, and so you see a similar shade of blue. You might see this more on a particularly hazy day or in a more polluted area, because there will be more tiny scattering particles in the air. This effect is pretty apparent in the hills of Los Angeles, where smog is a big problem. | [
"Another phenomenon that occurs is Rayleigh scattering in the atmosphere along one's line of sight: the horizon is typically 4–5 km distant and the air (being just above sea level in the case of the ocean) is at its densest. This mechanism would add a blue tinge to any distant object (not just the sea) because blue light would be scattered into one's line of sight.\n",
"When light hits the blue oceans or seas, some of it bounces back and enables the observer to physically see the water. However, some of the light also is reflected back up on to the bottoms of low-lying clouds and causes a dark spot to appear underneath some clouds. These clouds may be visible when the seas are not and can show alert and knowledgeable travelers the general direction of water. The dark clouds over open water have long been used by polar explorers and scientists to navigate in sea ice. For example, Arctic explorer Fridtjof Nansen and his assistant Hjalmar Johansen used the phenomenon to find lanes of water in their failed expedition to the North Pole, as did Louis Bernacchi and Douglas Mawson in Antarctica.\n",
"Light from the sky is a result of the Rayleigh scattering of sunlight, which results in a blue color perceived by the human eye. On a sunny day, Rayleigh scattering gives the sky a blue gradient, where it is darkest around the zenith and bright near the horizon. Light rays incoming from overhead encounters of the air mass that those coming along a horizontal path encounter. Hence, fewer particles scatter the zenithal sunbeam, and thus the light remains a darker blue. The blueness is at the horizon because the blue light coming from great distances is also preferentially scattered. This results in a red shift of the distant light sources that is compensated by the blue hue of the scattered light in the line of sight. In other words, the red light scatters also; if it does so at a point a great distance from the observer it has a much higher chance of reaching the observer than blue light. At distances nearing infinity, the scattered light is therefore white. Distant clouds or snowy mountaintops will seem yellow for that reason; that effect is not obvious on clear days, but very pronounced when clouds are covering the line of sight reducing the blue hue from scattered sunlight.\n",
"Lakes and oceans appear blue for several reasons. One is that the surface of the water reflects the color of the sky. While this reflection contributes to the observed color, it is not the sole reason.\n",
"It was described by observers as \"special for its colours around the horizon. There were wonderful oranges and reds all around, the clouds lit up, some dark in silhouette, some golden, glowing yellowy-orange in the distance. You could see the shadow approaching against the clouds and then rushing away as it left.\"\n",
"As the temperature of air increases, the index of refraction of air decreases, a side effect of hotter air being less dense. Normally this results in distant objects being shortened vertically, an effect that is easy to see at sunset where the sun is visible as an oval. In an inversion, the normal pattern is reversed, and distant objects are instead stretched out or appear to be above the horizon, leading to the phenomenon known as a Fata Morgana or mirage.\n",
"The \"true horizon\" is actually a theoretical line, which can only be observed when it lies on the sea surface. At many locations, this line is obscured by land, trees, buildings, mountains, etc., and the resulting intersection of earth and sky is called the \"visible horizon\". When looking at a sea from a shore, the part of the sea closest to the horizon is called the offing.\n"
] |
wine tasting | Good and bad are all a matter of opinion. The purpose of wine tasting is to figure out what YOU like, and what YOU find appealing. Price has little to do with whether or not a bottle is "good". Once you are able to recognize what is meant by those terms you will start to learn what you like. I don't care for oak-y wines, but I love a full-bodied dry cab sav... some people are the exact opposite. A lot of times your local wine store will host free tastings and will explain these things with examples so you can start recognizing the different characteristics... You should check it out if you are interested!
Edit: a word | [
"The sensory analysis of the wines was undertaken by professional wine tasters in order to establish the intangible features of wines being tested, such as sweet, dry, bitter, and so on. Two public tastings took place, in which tasters were asked to engage in discussion and note the effect of salt on the wines, as well as colour, nose and taste.\n",
"Wine tasting is the sensory examination and evaluation of wine. Wines contain many chemical compounds similar or identical to those in fruits, vegetables, and spices. The sweetness of wine is determined by the amount of residual sugar in the wine after fermentation, relative to the acidity present in the wine. Dry wine, for example, has only a small amount of residual sugar. Some wine labels suggest opening the bottle and letting the wine \"breathe\" for a couple of hours before serving, while others recommend drinking it immediately. Decanting (the act of pouring a wine into a special container just for breathing) is a controversial subject among wine enthusiasts. In addition to aeration, decanting with a filter allows the removal of bitter sediments that may have formed in the wine. Sediment is more common in older bottles, but aeration may benefit younger wines.\n",
"Thoroughly tasting a wine involves perception of its array of taste and mouthfeel attributes, which involve the combination of textures, flavors, weight, and overall \"structure\". Following appreciation of its olfactory characteristics, the wine taster savors a wine by holding it in the mouth for a few seconds to saturate the taste buds. By pursing ones lips and breathing through that small opening oxygen will pass over the wine and release even more esters. When the wine is allowed to pass slowly through the mouth it presents the connoisseur with the fullest gustatory profile available to the human palate.\n",
"Wine tasting is the sensory examination and evaluation of wine. While the practice of wine tasting is as ancient as its production, a more formalized methodology has slowly become established from the 14th century onwards. Modern, professional wine tasters (such as sommeliers or buyers for retailers) use a constantly evolving specialized terminology which is used to describe the range of perceived flavors, aromas and general characteristics of a wine. More informal, recreational tasting may use similar terminology, usually involving a much less analytical process for a more general, personal appreciation.\n",
"Although the practice of tasting is as old as the history of wine, the term \"tasting\" first appeared in 1519. The methodology of wine tasting was formalized by the 18th century when Linnaeus, Poncelet, and others brought an understanding of tasting up to date.\n",
"Criticism of the event suggested that wine tastings lacked scientific validity due to the subjectivity of taste in human beings. Indeed, the organizer of the competition, Steven Spurrier, said, \"The results of a blind tasting cannot be predicted and will not even be reproduced the next day by the same panel tasting the same wines.\" In one case it was reported that a \"side-by-side chart of best-to-worst rankings of 18 wines by a roster of experienced tasters showed about as much consistency as a table of random numbers.\"\n",
"Results contradicting the reliability of wine tasting in both experts and consumers have surfaced through scientific blind wine tasting, such as inconsistency in identifying wines based on region and price.\n"
] |
Is an object colder if it's moving faster? | It turns out that temperature really only has meaning when you talk about a set of molecules or atoms. The most reasonable description, I suppose, for a molecule would be that the amount of energy in each vibration should be related to the molecule's temperature. At relativistic speeds, the molecule would appear to be vibrating more slowly. However, the energy of those vibrations wouldn't change (i.e. a photon of the same energy will still excite one), so the temperature shouldn't change. (I will defer to a physicist if one comes along.)
My personal knowledge of this topic is in the area of crossed molecular beams. Generally speaking (i.e. at non-relativistic speeds), you can definitely produce molecules moving very quickly that are nonetheless very cold. You might also be interested to know that scientists [pretty](_URL_1_) [frequently](_URL_0_) produce beams of molecules traveling at supersonic speeds that are also very cold (that is, they have a very narrow distribution of energies.) | [
"At high speeds through the air, the object's kinetic energy is converted to heat through compression and friction. At lower speed, the object will lose heat to the air through which it is passing, if the air is cooler. The combined temperature effect of heat from the air and from passage through it is called the stagnation temperature; the actual temperature is called the recovery temperature. These viscous dissipative effects to neighboring sub-layers make the boundary layer slow down via a non-isentropic process. Heat then conducts into the surface material from the higher temperature air. The result is an increase in the temperature of the material and a loss of energy from the flow. The forced convection ensures that other material replenishes the gases that have cooled to continue the process.\n",
"An object at a different temperature from its surroundings will ultimately come to a common temperature with its surroundings. A relatively hot object cools as it warms its surroundings; a cool object is warmed by its surroundings. When considering how quickly (or slowly) something cools, we speak of its \"rate\" of cooling - how many degrees' change in temperature per unit of time.\n",
"As Newton's law of cooling states, the rate of cooling of an object - whether by conduction, convection, or radiation - is approximately proportional to the temperature difference Δ\"T\". Frozen food will warm up faster in a warm room than in a cold room. Note that the rate of cooling experienced on a cold day can be increased by the added convection effect of the wind. This is referred to as wind chill. For example, a wind chill of -20 °C means that heat is being lost at the same rate as if the temperature were -20 °C without wind.\n",
"BULLET::::- Heat transfer: If an object at one temperature is exposed to a medium of another temperature, the temperature difference between the object and the medium follows exponential decay (in the limit of slow processes; equivalent to \"good\" heat conduction inside the object, so that its temperature remains relatively uniform through its volume). See also Newton's law of cooling.\n",
"If bodies are prepared with separately microscopically stationary states, and are then put into purely thermal connection with each other, by conductive or radiative pathways, they will be in thermal equilibrium with each other just when the connection is followed by no change in either body. But if initially they are not in a relation of thermal equilibrium, heat will flow from the hotter to the colder, by whatever pathway, conductive or radiative, is available, and this flow will continue until thermal equilibrium is reached and then they will have the same temperature.\n",
"This behavior might be understood by a human observer as a creature that is 'alive' like an insect and 'restless', never stopping in its movement. The low velocity in regions of low temperature might be interpreted as a preference for cold areas.\n",
"According to the laws of thermodynamics, all particles of matter are in constant random motion as long as the temperature is above absolute zero. Thus the molecules and atoms which make up the human body are vibrating, colliding, and moving. This motion can be detected as temperature; higher temperatures, which represent greater kinetic energy in the particles, feel warm to humans who sense the thermal energy transferring from the object being touched to their nerves. Similarly, when lower temperature objects are touched, the senses perceive the transfer of heat away from the body as feeling cold.\n"
] |
how can developers of self driving cars ensure that there is no lag in the system? | The answer is a so-called Real-Time Operating System, which guarantees that data will be handled within a bounded, predictable amount of time.
And about the crashing, the trick is to have a very small, well-designed kernel while the rest of the system runs as userland threads on top of it: if a thread dies, it doesn't take out the whole computer, just that one piece stops working. | [
"Self-driving cars are potentially beneficial to the environment. They can be programmed to navigate the most efficient route and reduce idle time, which could result in less fossil fuel consumption and greenhouse gas (GHG) emissions. The same could be said true for the heavy machinery used in the heavy industry. AI can see the path clearly, whereas humans are prone to occasional errors.\n",
"The influence of ERB on drivers is not universal. There is evidence that as the complexity of driving tasks increases, the benefits of using a HUD are decreased, and in some circumstances, they are no longer statistically significant. The ERB is diminished, for example, when individuals are driving cognitively demanding vehicles, such as industrial vehicles, or when they are asked to multitask while driving. One study has shown that when placed in a cognitively demanding condition, individuals shift their focus from the road alone to focus on other tasks such as shifting gears or talking to others. Subsequently, a driver's ability to process HUD feedback requires diversion of attention, much akin to that which occurs whilst using a HDD.\n",
"Increases in the use of autonomous car technologies (e.g. advanced driver-assistance systems) is causing incremental shifts in the responsibility of driving, with the primary motivation of reducing the frequency of road accidents. Liability for incidents involving self-driving cars is a developing area of law and policy that will determine who is liable when a car causes physical damage to persons or property. As autonomous cars shift the responsibility of driving from humans to autonomous car technology, there is a need for existing liability laws to evolve in order to fairly identify the appropriate remedies for damage and injury. As higher levels of autonomy are commercially introduced (SAE automation levels 3 and 4), the insurance industry stands to see greater proportions of commercial and product liability lines, while personal automobile insurance shrinks.\n",
"The automation of vehicles could prove to have a substantial impact on the environment, although the nature of this impact could be beneficial or harmful depending on several factors. Because automated vehicles are much less likely to get into accidents compared to human-driven vehicles, some precautions built into current models (such as anti-lock brakes or laminated glass) would not be required for self-driving versions. Removing these safety features would also significantly reduce the weight of the vehicle, thus increasing fuel economy and reducing emissions per mile. Self-driving vehicles are also more precise with regard to acceleration and breaking, and this could contribute to reduced emissions. Self-driving cars could also potentially utilize fuel-efficient features such as route mapping that is able to calculate and take the most efficient routes. Despite this potential to reduce emissions, some researchers theorize that an increase of production of self-driving cars could lead to a boom of vehicle ownership and use. This boom could potentially negate any environmental benefits of self-driving cars if a large enough number of people begin driving personal vehicles more frequently.\n",
"Self-driving cars are already exploring the difficulties of determining the intentions of pedestrians, bicyclists, and animals, and models of behavior must be programmed into driving algorithms. Human road users also have the challenge of determining the intentions of autonomous vehicles, where there is no driver with which to make eye contact or exchange hand signals. Drive.ai is testing a solution to this problem that involves LED signs mounted on the outside of the vehicle, announcing status such as \"going now, don't cross\" vs. \"waiting for you to cross\".\n",
"When some car makers suggest or claim vehicles are \"self-driving\", when they are only partly automated, drivers risk becoming excessively confident, leading to crashes, while fully self-driving cars are still a long way off in the UK.\n",
"Many components contribute to the functioning of self-driving cars. These vehicles incorporate systems such as braking, lane changing, collision prevention, navigation and mapping. Together, these systems, as well as high performance computers, are integrated into one complex vehicle.\n"
] |
lag | What kind of lag? | [
"The Robertson Lag is an example of the systematic delay which the economy suffers from when conditions change and is named after the famous economist Dennis Robertson. This lag describes how a consumers change in income and wealth, a change in its consumption function, leads to a delayed change in its consumption.\n",
"A LAG (Link Aggregation Group) is a method of inverse multiplexing over multiple Ethernet links, thereby increasing bandwidth and providing redundancy. It is defined by the IEEE 802.1AX-2008 standard, which states, \"Link Aggregation allows one or more links to be aggregated together to form a Link Aggregation Group, such that a MAC client can treat the Link Aggregation Group as if it were a single link.\" This layer 2 transparency is achieved by the LAG using a single MAC address for all the device’s ports in the LAG group. LAG can be configured as either static or dynamic. Dynamic LAG uses a peer-to-peer protocol for control, called Link Aggregation Control Protocol (LACP). This LACP protocol is also defined within the 802.1AX-2008 standard.\n",
"Credential lag usually occurs for a user who is attempting to log into a system that relies on updating its cached or otherwise saved user credentials by conferring with Active directory or similar database.\n",
"The Lundberg lag, named after the Swedish economist Erik Lundberg, stresses the lag between changes in the demand and response in output. This is one lag which points out that business cycles do not follow a completely random fashion but can be explained with a few different important regularities.\n",
"In statistics and econometrics, a distributed lag model is a model for time series data in which a regression equation is used to predict current values of a dependent variable based on both the current values of an explanatory variable and the lagged (past period) values of this explanatory variable.\n",
"This phenomenon carries forward for several generations and is called population momentum, \"population inertia\" or \"population-lag effect\". This time-lag effect is of great importance to the growth rates of human populations.\n",
"There is as yet no conclusive explanation for the phenomenon of lag 1 sparing, although it is thought to be related to the first parallel stage of the two-stage system of stimulus selection and processing.\n"
] |
Why is it so common for stroke victims to have a speech impairment? | There are a lot of areas of the cortex that contribute to normal speech, and many of the common types of stroke damage at least one of them.
Eg,
* In the vast majority of people, expressive language is in the left inferior frontal area and receptive language is in the left perisylvian temporal area. A stroke involving the middle cerebral artery (MCA) or the internal carotid artery on the left can damage both; branch lesions of the MCA can take out one or the other.
* Proper mouth, tongue, palatal, and laryngeal articulation require function of the face area of the motor cortex (posterior inferior frontal lobe, MCA territory) as well as multiple cranial nerve nuclei in the medulla (vertebral and posterior inferior cerebellar artery territory) and the cerebellum (vert/basilar, PICA, AICA, SCA territories).
* Smaller "lacunar" strokes of the internal capsule, basal ganglia, or thalamus can cause clumsy hand/dysarthria syndrome with slurred speech. These are usually caused by lesions of the small, perpendicularly-branched lenticulostriate vessels of the MCA or of the small arterial branches of the posterior cerebral artery (PCA).
* In addition, strokes affecting the right hemisphere can lead to situations where word production is maintained, but with loss of normal prosody of speech.
* Strokes of the anterior cerebral arteries damaging the prefrontal cortex can cause abulia and akinetic mutism, wherein a person has the physical ability to speak but has no will or desire to.
This is leaving out hemorrhagic strokes, which tend to affect the pons, putamen, cerebellum, and thalamus (all areas damage to which can affect speech), and venous strokes/sinus thromboses.
Facial nerve lesions, which are often "idiopathic" or at least of unknown etiology, are sometimes due to vascular lesions or vasculitis: hence, a kind of peripheral nerve stroke. Paresis of facial muscles caused by facial neuropathy (often called "Bell's palsy") can also affect speech production. | [
"A stroke is one of the leading causes of disability in the United States. Following a stroke, 40% of stroke patients are left with moderate functional impairment and 15% to 30% have a severe disability as a result of a stroke. A neurogenic cognitive-communicative disorder is one result of a stroke. Neuro- meaning related to nerves or the nervous system and -genic meaning resulting from or caused by. Aphasia is one type of a neurogenic cognitive-communicative disorder which presents with impaired comprehension and production of speech and language, usually caused by damage in the language-dominant, left hemisphere of the brain. Aphasia is any disorder of language that causes the patient to have the inability to communicate, whether it is through writing, speaking, or sign language.\n",
"Disruption in self-identity, relationships with others, and emotional well-being can lead to social consequences after stroke due to the lack of ability to communicate. Many people who experience communication impairments after a stroke find it more difficult to cope with the social issues rather than physical impairments. Broader aspects of care must address the emotional impact speech impairment has on those who experience difficulties with speech after a stroke. Those who experience a stroke are at risk of paralysis which could result in a self disturbed body image which may also lead to other social issues.\n",
"According to the National Stroke Association, stroke is the third-leading cause of death and the leading cause of adult disability. It is thought that glutamate levels cause underlying ischemic damage during a stroke, and, thus, NAAG inhibition might be able to diminish this damage.\n",
"The most common cause of expressive aphasia is stroke. A stroke is caused by hypoperfusion (lack of oxygen) to an area of the brain, which is commonly caused by thrombosis or embolism. Some form of aphasia occurs in 34 to 38% of stroke patients. Expressive aphasia occurs in approximately 12% of new cases of aphasia caused by stroke.\n",
"It has also been found that empathy can be disrupted due to trauma in the brain such as a stroke. In most cases empathy is usually impaired if a lesion or stroke occurs on the right side of the brain. In addition to this it has been found that damage to the frontal lobe, which is primarily responsible for emotional regulation, can impact profoundly on a person's capacity to experience empathy toward another individual. People who have suffered from an acquired brain injury also show lower levels of empathy according to previous studies. In fact, more than 50% of people who suffer from a traumatic brain injury self-report a deficit in their empathic capacity. Again, linking this back to the early developmental stages of emotion, if emotional growth has been stunted at an early age due to various factors, empathy will struggle to infest itself in that individual's mind-set as a natural feeling, as they themselves will struggle to come to terms with their own thoughts and emotions. This is again suggestive of the fact that understanding one's own emotions is key in being able to identify with another individual's emotional state.\n",
"A silent stroke (or asymptomatic cerebral infarction) is a stroke that does not have any outward symptoms associated with stroke, and the patient is typically unaware they have suffered a stroke. Despite not causing identifiable symptoms, a silent stroke still causes damage to the brain and places the patient at increased risk for both transient ischemic attack and major stroke in the future. In a broad study in 1998, more than 11 million people were estimated to have experienced a stroke in the United States. Approximately 770,000 of these strokes were symptomatic and 11 million were first-ever silent MRI infarcts or hemorrhages. Silent strokes typically cause lesions which are detected via the use of neuroimaging such as MRI. The risk of silent stroke increases with age but may also affect younger adults. Women appear to be at increased risk for silent stroke, with hypertension and current cigarette smoking being amongst the predisposing factors.\n",
"A potential complication that may occur in children that suffer acute anemia with a hemoglobin count below 5.5 g/dl is silent stroke A silent stroke is a type of stroke that does not have any outward symptoms (asymptomatic), and the patient is typically unaware they have suffered a stroke. Despite not causing identifiable symptoms a silent stroke still causes damage to the brain, and places the patient at increased risk for both transient ischemic attack and major stroke in the future.\n"
] |
Indo-european and Peppers ? How is Sanskrit connected to European languages and what is up with ancient Indian peppers ? | Yes, Sanskrit is part of the Indo-European language family, and yes, its speakers migrated there from the steppes of what is now Ukraine and Kazakhstan. However, Indo-Iranian moved into its modern range by way of Bactria, not the Caucasus. There are libraries of literature on Indo-European, and the idea that it is "false and disproven" is itself a myth perpetrated by Hindu nationalists. Indic languages are very clearly part of the Indo-European family--this is obvious to anyone with some basic acquaintance with Latin, Greek, and Sanskrit, and has been known to scholars since the late 18th century. We also can be sure that Indo-European did not originate in India for a few reasons. First, India contains only one branch of Indo-European, while linguistic homelands generally contain or are proximate to several different branches. For example, Taiwan, the homeland of Austronesian, contains anywhere from 4 to 9 separate branches. Second, Indic languages (and this dates back as far as Sanskrit) contain features clearly derived from contact with Dravidian languages, like a distinction between dental stops (made with the tongue on the back of the teeth) and retroflex ones (where the tongue is curled back towards the hard palate) or a historic confusion of /l r/. Third, there's a pretty clear relationship between where language families show up geographically in the historic record and the features they share. [This map](_URL_0_) gives you a good picture of that. There's basically no linguistic evidence consonant with the idea.
I'll let someone who knows more about the Columbian Exchange tackle the question about peppers. | [
"The similarities between various European languages, Sanskrit and Persian were noted by Sir William Jones in his \"Third Anniversary Discourse\" to the Asiatic Society in 1786, after learning Sanskrit in India, concluding that all these languages originated from the same source. From his initial intuitions developed the establishment of the Indo-European language family, which consists of several hundred related languages and dialects. There are about 439 languages and dialects, according to the 2009 \"Ethnologue\" estimate, about half of these (221) belonging to the Indo-Aryan subbranch originating in South Asia. The Indo-European family includes most of the major current languages of Europe, the Iranian plateau, the northern half of the Indian Subcontinent, Sri Lanka and was also spoken in ancient Anatolia. With written attestations appearing since the Bronze Age in the form of the Anatolian languages and Mycenaean Greek, the Indo-European family is significant to the field of historical linguistics as possessing the second-longest recorded history, after the Afroasiatic family.\n",
"Other Indo-European languages related to Sanskrit include archaic and classical Latin (c. 600 BCE – 100 CE, old Italian), Gothic (archaic Germanic language, c. 350 CE), Old Norse (c. 200 CE and after), Old Avestan (c. late 2nd millennium BCE) and Younger Avestan (c. 900 BCE). The closest ancient relatives of Vedic Sanskrit in the Indo-European languages are the Nuristani language found in the remote Hindu Kush region of the northeastern Afghanistan and northwestern Himalayas, as well as the extinct Avestan and Old Persian—both Iranian languages. Sanskrit belongs to the satem group of the Indo-European languages.\n",
"Pepper is native to South Asia and Southeast Asia, and has been known to Indian cooking since at least 2000 BCE. J. Innes Miller notes that while pepper was grown in southern Thailand and in Malaysia, its most important source was India, particularly the Chera dynasty (Tamil dynasty) Malabar Coast, in what is now the state of Kerala. The lost ancient port city of Muziris in Kerala, famous for exporting black pepper and various other spices, gets mentioned in a number of classical historical sources. Peppercorns were a much-prized trade good, often referred to as \"black gold\" and used as a form of commodity money. The legacy of this trade remains in some Western legal systems that recognize the term \"peppercorn rent\" as a token payment for something that is, essentially, being given.\n",
"The Rigvedic Sanskrit is one of the oldest attestations of any Indo-Aryan languages, and one of the earliest attested members of the Indo-European languages. The discovery of Sanskrit by early European explorers of India led to the development of comparative Philology. The scholars of the 18th century were struck by the far reaching similarity of Sanskrit, both in grammar and vocabulary, to the classical languages of Europe. Intensive scientific studies that followed have established that Sanskrit and many Indian derivative languages belong to the family which includes English, German, French, Italian, Spanish, Celtic, Greek, Baltic, Armenian, Persian, Tocharian and other Indo-European languages.\n",
"Many consider William Jones, an Anglo-Welsh philologist and puisne judge in Bengal, to have begun Indo-European studies in 1786, when he postulated the common ancestry of Sanskrit, Latin, and Greek. However, he was not the first to make this observation. In the 1500s, European visitors to the Indian subcontinent became aware of similarities between Indo-Iranian languages and European languages, and as early as 1653 Marcus Zuerius van Boxhorn had published a proposal for a proto-language (\"Scythian\") for the following language families: Germanic, Romance, Greek, Baltic, Slavic, Celtic, and Iranian. In a memoir sent to the Académie des Inscriptions et Belles-Lettres in 1767 Gaston-Laurent Coeurdoux, a French Jesuit who spent all his life in India, had specifically demonstrated the analogy between Sanskrit and European languages. In the perspective of current academic consensus, Jones' work was less accurate than his predecessors', as he erroneously included Egyptian, Japanese and Chinese in the Indo-European languages, while omitting Hindi.\n",
"In order to explain the common features shared by Sanskrit and other Indo-European languages, the Indo-Aryan migration theory states that the original speakers of what became Sanskrit arrived in the Indian subcontinent from the north-west sometime during the early second millennium BCE. Evidence for such a theory includes the close relationship between the Indo-Iranian tongues and the Baltic and Slavic languages, vocabulary exchange with the non-Indo-European Uralic languages, and the nature of the attested Indo-European words for flora and fauna. The pre-history of Indo-Aryan languages which preceded Vedic Sanskrit is unclear and various hypotheses place it over a fairly wide limit. According to Thomas Burrow, based on the relationship between various Indo-European languages, the origin of all these languages may possibly be in what is now Central or Eastern Europe, while the Indo-Iranian group possibly arose in Central Russia. The Iranian and Indo-Aryan branches separated quite early. It is the Indo-Aryan branch that moved into eastern Iran and the south into the Indian subcontinent in the first half of the 2nd millennium BCE. Once in ancient India, the Indo-Aryan language underwent rapid linguistic change and morphed into the Vedic Sanskrit language.\n",
"The Sanskrit influence came from contacts with India since ancient times. The words were either borrowed directly from India or with the intermediary of the Old Javanese language. Although Hinduism and Buddhism are no longer the major religions of Indonesia, Sanskrit, which was the language vehicle for these religions, is still held in high esteem and is comparable with the status of Latin in English and other Western European languages. Sanskrit is also the main source for neologisms, these are usually formed from Sanskrit roots. The loanwords from Sanskrit cover many aspects of religion, art and everyday life.\n"
] |
Is it possible to determine the location at which a photo was taken based on the moon's position in the sky? | No, it won't tell you the location from which the photo was taken. What you'd actually need would be an image of the stars in the night sky, but even that would only narrow it down to a latitude, unless you knew the UTC time the photo was taken. | [
"Another method is to take two pictures of the Moon at exactly the same time from two locations on Earth and compare the positions of the Moon relative to the stars. Using the orientation of the Earth, those two position measurements, and the distance between the two locations on the Earth, the distance to the Moon can be triangulated:\n",
"Dennis di Cicco of \"Sky & Telescope\" magazine read about Elmore's results and tried verifying them. Di Cicco entered the position, direction, and time into a program that displayed the Moon's position, but the resulting position did not match the \"Moonrise\" image. Di Cicco was intrigued by the discrepancy. Working off and on over the next ten years, including a visit to the location, Di Cicco concluded in 1991 \"that Adams had been at the edge of the old roadbed, about 50 feet west of the spot on the modern highway that Elmore had identified\". Di Cicco's calculations determined that the image was taken at 4:49:20 p.m. on November 1, 1941. He reviewed his calculations with Elmore, who agreed with Di Cicco's result. Elmore had been misled by his computer monitor's distortion with an additional slight discrepancy in Adams' coordinates. In 1981, the IBM PC's CGA display did not have a 1:1 pixel aspect ratio; plotting software would have to compensate for that aspect ratio to make an isotropic plot.\n",
"Newhall wondered if the astronomical information in the photograph could provide the answer, so he approached David Elmore of the High Altitude Observatory in Boulder, Colorado. Focusing on the autumn months of 1941 through 1944, Elmore found 36 plausible dates for the image. Elmore determined a probable location and direction for the camera alongside the highway. Using that location information, he then plotted the Moon's apparent position on his computer screen for those dates to find a match. Elmore concluded that \"Moonrise\" was taken on October 31, 1941, at 4:03 p.m.\n",
"On May 23, 2007 digital photographs of the Moon during a near-occultation of Regulus were taken from two locations, in Greece and England. By measuring the parallax between the Moon and the chosen background star, the lunar distance was calculated.\n",
"By recording the instant when the Moon occults a background star, (or similarly, measuring the angle between the moon and a background star at a predetermined moment) the lunar distance can be determined, as long as the measurements are taken from multiple locations of known separation.\n",
"BULLET::::- NASA released the first photograph of the Earth as seen from the Moon, after Lunar Orbiter 1 transmitted a picture taken three days earlier. Ground control had decided to turn the orbiter's camera toward the Earth, just as the probe was about to travel toward the far side, in order to show both objects in the same photo. At the time, the Moon was between its perigee (August 17) and apogee (August 31) in relation to Earth and the \"first self-portrait of the Earth\" was taken at a distance of roughly 239,000 miles.\n",
"BULLET::::- The photo of Earth from the Moon, \"Earthrise\", was released to the public by NASA along with eight other spectacular photographs taken during the Apollo 8 mission. The display coincided with the first press conference (at Houston) by astronauts Borman, Lovell and Anders since their return to Earth, and the images were shown on live television, then repeated on evening newscasts around the world and published in the next day's newspapers. In addition to the famous view of a half-lit image of Earth were two pictures of craters on the Moon's far side from an altitude of ; a photo of the nearside craters Goclenius and Magelhaens; a view of the Mare Tranquillitatis where the first Earthmen would land in Apollo 11; and two other views of the Earth's Western Hemisphere.\n"
] |
How historically accurate is the quote: "The war wasn’t only about abolishing fascism, but to conquer sales markets. We could have, if we had intended so, prevented this war from breaking out without doing one shot, but we didn’t want to."? | > verify the quote
There's zero evidence that this was ever said by Churchill, Truman, or anyone else involved in WWII. If one is to presume that the premise of the quote is accurate (that is to say, that the war was fought by the Allies to "conquer sales markets"), why would they admit that after years of propaganda stating that the war was a war for freedom?
It's hard to source a negative, of course (aka, come up with a source saying that someone *didn't* say this particular line). The fact that there are no reputable sources that point towards Churchill or Truman actually uttering this line, however, should dispel its credibility.
> explain the context in which it was made, and
Presumably by some Neo-Nazi, though I can't say who first created this statement. It fits in well with the Nazi idea that the Germans were the target of an international Jewish financier conspiracy.
> also explain how historically true it was.
Completely false. It makes no sense - if Britain and France were keen to "conquer" the sales market that was Germany, why would they follow a policy of appeasement throughout the 30s? Why wouldn't they have just conquered Germany in the 20s, when it was weak - say, just extend the [occupation of the Ruhr](_URL_0_) to the entirety of the German nation?
The notion of one nation conquering another for economic gain is not absurd. After all, economic exploitation is one of the key features of imperialism. To portray Nazi Germany as a victim of imperialism, however, is absurd. The British and French were highly reluctant to go to war with Germany, but ended up doing so to prevent Germany from establishing hegemony over Europe. They couldn't have just pulled some strings and prevented Hitler from invading Poland without firing "one shot."
Your skepticism is well justified, and the fact that the quote only appears on sites that are apologetic to the Nazis or based on attention-grabbing headlines is telling. It's not history, it's fiction. | [
"\"Fascism is the concentrated expression of the general offensive undertaken by the world bourgeoisie against the proletariat... fascism [is] an expression of the decay and disintegration of the capitalist economy and as a symptom of the bourgeois state’s dissolution. We can combat fascism only if we grasp that it rouses and sweeps along broad social masses who have lost the earlier security of their existence and with it, often, their belief in social order... It will be much easier for us to defeat Fascism if we clearly and distinctly study its nature. Hitherto there have been extremely vague ideas upon this subject not only among the large masses of the workers, but even among the revolutionary vanguard of the proletariat and the Communists... The Fascist leaders are not a small and exclusive caste; they extend deeply into wide elements of the population.\n",
"The document referred to fascism and war as \"attempt of German monopoly capitalism, to \"overcome the economic crisis by means of a brutal, fascist dictatorship and an imperialist war\". This was to entrench German monopoly capital as a dominant power in the world. \n",
"'Professor Brocca seems to recognise that to fight fascism with the weapons fascists use is self-defeating. If we do as the fascists do then we only endorse fascism. To prevent fascism we have to prevent the desperation, the poverty, the chaos and the ignorance out of which fascism is produced'.\n",
"Marxists typically attributed the start of the war to imperialism. \"Imperialism,\" argued Lenin, \"is the monopoly stage of capitalism.\" He thought that monopoly capitalists went to war to control markets and raw materials.\n",
"\"Fascism, the more it considers and observes the future and the development of humanity quite apart from political considerations of the moment, believes neither in the possibility nor the utility of perpetual peace. It thus repudiates the doctrine of Pacifismborn of a renunciation of the struggle and an act of cowardice in the face of sacrifice. War alone brings up to its highest tension all human energy and puts the stamp of nobility upon the peoples who have courage to meet it.\"\n",
"War and killing were praised as the essence of manhood. A Fascist encyclopedia proclaimed, \"Nothing is ever won in history without bloodshed.\" This drew upon older themes, exulted in World War I, with injunctions that suffering was necessary for greatness. World War I was often cited in Fascist propaganda, with many prominent Fascists displaying many medals from the conflict. To such figures as Gabriele d'Annunzio, the return of peace meant only the return of the humdrum, while the ideal was still war, themes that Fascism drew into its propaganda. Mussolini, shortly before the seizure of power, proclaimed violence better than compromise and bargaining. Afterwards, there was a prolonged period where the absence of military action did not prevent the government from many belligerent statements. Interviews appearing in foreign press, where Mussolini spoke of wanting peace, had that portion censored out before appearing in Italian papers. The annexation of Albania was presented as a splendid act of aggression. In the run-up to World War II, Mussolini's claim he could field 8 million was quickly exaggerated to 9 million, and then to 12 million. The continually bellicose pose created an embarrassment with the outbreak of World War II, where failure to join the war would undermine the propaganda effect.\n",
"Although fascism is primarily a political ideology that stresses the importance of cultural and social issues over economics, fascism is generally supportive of a broadly capitalistic mixed economy. Fascism supports a state interventionism into markets and private enterprise, alongside a corporatist framework referred to as the \"third position\" that ostensibly aims to be a middle-ground between socialism and capitalism by mediating labour and business disputes to promote national unity. 20th century fascist regimes in Italy and Germany adopted large public works programs to stimulate their economies, state interventionism in largely private-sector dominated economies to promote re-armament and national interests. Scholars have drawn parallels between the American New Deal and public works programs promoted by fascism, arguing that fascism similarly arose in response to the threat of socialist revolution and similarly aimed to \"save capitalism\" and private property.\n"
] |
What is the relationship between biological nucleus and chemical nucleus? | The dictionary definition of a nucleus is the central and most important part of an object. Both biologists and chemists decided to use this term (in reference to the general meaning), but they refer to entirely different things.
The atomic nucleus (as you stated) is the central part of an atom and is composed of protons and neutrons.
The cellular nucleus is an organelle in cells which contains most of the cell's DNA (whether you'd call it the most central/important can be debated, but it certainly is essential).
The cellular nucleus is composed of many atoms, each of which has an atomic nucleus. | [
"The chemical and nuclear properties of the nucleus are determined by the number of protons, called the atomic number, and the number of neutrons, called the neutron number. The atomic mass number is the total number of nucleons. For example, carbon has atomic number 6, and its abundant carbon-12 isotope has 6 neutrons, whereas its rare carbon-13 isotope has 7 neutrons. Some elements occur in nature with only one stable isotope, such as fluorine. Other elements occur with many stable isotopes, such as tin with ten stable isotopes.\n",
"The number of protons in the nucleus, called the \"atomic number\", defines to which chemical element the atom belongs. For example, each copper atom contains 29 protons. The number of neutrons defines the isotope of the element. Atoms can attach to one or more other atoms by chemical bonds to form chemical compounds such as molecules or crystals. The ability of atoms to associate and dissociate is responsible for most of the physical changes observed in nature. Chemistry is the discipline that studies these changes.\n",
"An atomic nucleus is formed by a number of protons, \"Z\" (the atomic number), and a number of neutrons, \"N\" (the neutron number), bound together by the nuclear force. The atomic number defines the chemical properties of the atom, and the neutron number determines the isotope or nuclide. The terms isotope and nuclide are often used synonymously, but they refer to chemical and nuclear properties, respectively. Strictly speaking, isotopes are two or more nuclides with the same number of protons; nuclides with the same number of neutrons are called isotones. The atomic mass number, symbol \"A\", equals \"Z\"+\"N\". Nuclides with the same atomic mass number are called isobars. The nucleus of the most common isotope of the hydrogen atom (with the chemical symbol H) is a lone proton. The nuclei of the heavy hydrogen isotopes deuterium (D or H) and tritium (T or H) contain one proton bound to one and two neutrons, respectively. All other types of atomic nuclei are composed of two or more protons and various numbers of neutrons. The most common nuclide of the common chemical element lead, Pb, has 82 protons and 126 neutrons, for example. The table of nuclides comprises all the known nuclides. Even though it is not a chemical element, the neutron is included in this table.\n",
"The composition of an atomic nucleus is determined by the number of protons \"Z\" and the number of neutrons \"N\", which sum to mass number \"A\". The atomic number \"Z\" determines the position of an element in the periodic table, but the more than 3000 nuclides are commonly represented in a chart with \"Z\" and \"N\" for its axes and the half-life for radioactive decay indicated for each unstable nuclide (see figure). 252 nuclides are thought to be stable (having never been observed to decay), and these follow a general trend in which the number of neutrons rises more rapidly than the number of protons. The last element in the periodic table that has a stable isotope is lead (\"Z\" = 82), with stability generally decreasing in heavier elements. The half-lives of nuclei also decrease when there is a lopsided neutron-proton ratio, such that the resulting nuclei have too few or too many neutrons to be stable.\n",
"The number of protons in the atomic nucleus also determines its electric charge, which in turn determines the number of electrons of the atom in its non-ionized state. The electrons are placed into atomic orbitals that determine the atom's various chemical properties. The number of neutrons in a nucleus usually has very little effect on an element's chemical properties (except in the case of hydrogen and deuterium). Thus, all carbon isotopes have nearly identical chemical properties because they all have six protons and six electrons, even though carbon atoms may, for example, have 6 or 8 neutrons. That is why the atomic number, rather than mass number or atomic weight, is considered the identifying characteristic of a chemical element.\n",
"Typically, an atomic nucleus is a tightly bound group of protons and neutrons. However, in some nuclides, there is an overabundance of one species of nucleon. In some of these cases, a nuclear core and a halo will form.\n",
"The presence of a nucleus is one major difference between eukaryotes and prokaryotes. Some conserved nuclear proteins between eukaryotes and prokaryotes suggest that these two types had a common ancestor. Another theory behind nucleation is that early nuclear membrane proteins caused the cell membrane to fold inwardly and form a sphere with pores like the nuclear envelope.\n"
] |
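The context passages above define the mass number as A = Z + N and distinguish isotopes (same Z), isotones (same N), and isobars (same A). A minimal Python sketch of those relationships follows; it is purely illustrative, and the `Nuclide` class and example values are assumptions for demonstration, not part of the dataset or the cited passages.

```python
# Minimal sketch of the nuclide relationships described above:
# mass number A = Z + N, plus the isotope/isotone/isobar distinctions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Nuclide:
    name: str
    protons: int   # Z, the atomic number
    neutrons: int  # N, the neutron number

    @property
    def mass_number(self) -> int:
        # A = Z + N
        return self.protons + self.neutrons

def relation(a: Nuclide, b: Nuclide) -> str:
    """Classify two nuclides: isotopes share Z, isotones share N, isobars share A."""
    if a.protons == b.protons:
        return "isotopes"
    if a.neutrons == b.neutrons:
        return "isotones"
    if a.mass_number == b.mass_number:
        return "isobars"
    return "unrelated"

if __name__ == "__main__":
    c12 = Nuclide("carbon-12", protons=6, neutrons=6)
    c13 = Nuclide("carbon-13", protons=6, neutrons=7)
    n14 = Nuclide("nitrogen-14", protons=7, neutrons=7)
    b12 = Nuclide("boron-12", protons=5, neutrons=7)

    print(c12.mass_number)      # 12
    print(relation(c12, c13))   # isotopes (same Z = 6)
    print(relation(c13, n14))   # isotones (same N = 7)
    print(relation(c12, b12))   # isobars  (same A = 12)
```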
why aren’t patients scrubbed down the same as doctors are? | Having had multiple surgeries I gotta tell you: any hospital worth its merit sends you home with surgical wash to use heavily the night before and morning of surgery (2 separate showers). This reduces the risk of infection by 30-40%. Then the surgery site is cleaned heavily and separated by a sterile gown/cover.
With the site being clean it's more about not introducing new bacteria. So the surgeon has to scrub. | [
"Physicians sometimes use screening as a placebo for patients who wish to have some kind of care. The physician may recommend screening to placate the patient's demand for fast recovery in times when the recommended action would be to do nothing except wait. Research suggests that patients are more satisfied with their treatment when it is or seems expensive because patients believe that the more care they get, even if it is not necessary, then at least doing something is better than doing nothing.\n",
"Overscreening is a type of unnecessary health care. One study about unnecessary screening before surgery reported that physicians order unnecessary tests because of tradition in the practice of medicine, anticipation that other physicians will expect the test results when they see the patient, defensive medicine, worries that a surgery may be canceled if the test is not done, and lack of understanding about when a test is actually indicated.\n",
"Substitution factors can significantly affect the production of physician services and the availability of physicians to see more patients. For example, an accountant can replace some of the financial responsibilities for a physician who owns his or her own practice, allowing for more time to treat patients. Disposable supplies can substitute for labor and capital (the time and equipment needed to sterilize instruments). Sound record keeping by physicians can substitute for legal services by avoiding malpractice suits. However, the extent of substitution of physician production is limited by technical and legal factors. Technology cannot replace all skills possessed by physicians, such as surgical skill sets. Legal factors can include only allowing licensed physicians to perform surgeries, but nurses or doctors administering other surgical care.\n",
"Because individuals may feel better before the infection is eradicated, doctors must provide instructions to them so they know when it is safe to stop taking a prescription. Some researchers advocate doctors' using a very short course of antibiotics, reevaluating the patient after a few days, and stopping treatment if there are no clinical signs of infection.\n",
"Doctors perform surgery with the consent of the patient. Some patients are able to give better informed consent than others. Populations such as incarcerated persons, people living with dementia, the mentally incompetent, persons subject to coercion, and other people who are not able to make decisions with the same authority as a typical patient have special needs when making decisions about their personal healthcare, including surgery.\n",
"Improving patient hand washing has also been shown to reduce the rate of nosocomial infection. Patients who are bed-bound often do not have as much access to clean their hands at mealtimes or after touching surfaces or handling waste such as tissues. By reinforcing the importance of handwashing and providing santizing gel or wipes within reach of the bed, nurses were directly able to reduce infection rates. A study published in 2017 demonstrated this by improving patient education on both proper hand-washing procedure and important times to use sanitizer and successfully reduced the rate of enterococci and \"S. aureus\".\n",
"In law and politics, treating is the act of serving food, drink, and other refreshments to influence people for political gain, often shortly before an election. In various countries, treating is considered a form of corruption, and is illegal as such. However, as long as the supplying of refreshments is not part of a \"quid pro quo\" for votes, etc., it is often not illegal.\n"
] |