What's the hottest and coldest temperatures insects can survive in?
Because of their ability to withstand desiccation (removal of moisture), insects can recover from extreme temperature events, including being submerged in liquid helium. From a cursory look, roughly ±55 °C appears to be the general survivable range. References/further reading: [Cold](_URL_1_) and [Hot](_URL_0_).
[ "A number of insects have temperature and humidity sensors and insects being small, cool more quickly than larger animals. Insects are generally considered cold-blooded or ectothermic, their body temperature rising and falling with the environment. However, flying insects raise their body temperature through the action of flight, above environmental temperatures.\n", "Among the psychrophile insects, the Grylloblattidae or icebugs, found on mountaintops, have optimal temperatures between 1-4 °C. The wingless midge (Chironomidae) \"Belgica antarctica\" can tolerate salt, being frozen and strong ultraviolet, and has the smallest known genome of any insect. The small genome, of 99 million base pairs, is thought to be adaptive to extreme environments.\n", "BULLET::::- Temperature-based treatment- Since insects need specific temperatures to thrive, placing objects in a freezer that is -25 to -30 degrees Celsius for three days will kill off any remaining insects. Alternatively, placing objects in a space that is heated to 50 degrees Celius for an hour will also kill all stages of insect growth.\n", "Insects range in their size, structure, and general appearance but most use hibernacula. All insects are primarily exothermic. For this reason, extremely cold temperatures, such as those experienced in the winter season, outside of tropical locations, cause their metabolic systems to shut down; long exposure may lead to death. Insects survive colder winters through the process of overwintering, which occurs at all stages of development and may include migration or hibernation for different insects, the latter of which must be done in hibernacula. Insects that do not migrate must halt their growth to avoid freezing to death, in a process called diapause. Insects prepare to overwinter through a variety of mechanisms, such as using anti-freeze proteins or cryoprotectants in freeze-avoidant insects, like soybean aphids. Cryoprotectants are toxic, with high concentrations only tolerated at low temperatures. Thus, hibernacula are used to avoid sporadic warming and the risk of death due high concentrations of cryoprotectants at warmer temperatures. Freeze-tolerant insects, like second-generation corn-borers, can survive being frozen and therefore, undergo inoculative freezing. Hibernacula range in size and structure depending on the insects using them.\n", "Insect life-histories show adaptations to withstand cold and dry conditions. Some temperate region insects are capable of activity during winter, while some others migrate to a warmer climate or go into a state of torpor. Still other insects have evolved mechanisms of diapause that allow eggs or pupae to survive these conditions.\n", "Some dragonflies, including libellulids and aeshnids, live in desert pools, for example in the Mojave Desert, where they are active in shade temperatures between 18 and 45 °C (64.4 to 113 °F); these insects were able to survive body temperatures above the thermal death point of insects of the same species in cooler places.\n", "Insects are poikilothermic, but maintaining an adequate temperature range remains important. For example, mealworms thrive best when living close together, but this can lead to overheating if temperature is not controlled.\n" ]
why are local positions like coroner, surveyor, recorder, etc elected by the people, and why should the average person care about them?
The theory is that these officials' findings have (or used to have) a direct impact on tax collection, so the people should have a say in who gets the job, to prevent abuse of power. > Electing a coroner is a holdover from British Common Law, where the coroner’s job was to determine how and when people had died in order to collect taxes. [source](_URL_0_)
[ "Career civil servants (not temporary workers or politicians) are hired only externally on the basis of entrance examinations (). It usually consists of a written test; some posts may require physical tests (such as policemen), or oral tests (such as professors, judges, prosecutors and attorneys). The rank according to the examination score is used for filling the vacancies.\n", "Chartered surveyors in the core of the profession may offer mortgage valuations, homebuyer's survey and valuations, full building surveys, building surveyors' services, quantity surveying, land surveying, auctioneering, estate management and other forms of survey- and building-related advice. It is not usual for any individual member to have expertise in all areas, and hence partnerships or companies are established to create practices able to offer a wider spectrum of surveying services.\n", "Chartered surveyors in the core of the profession may offer mortgage valuations, homebuyer's survey and valuations, full building surveys, building surveyors' services, quantity surveying, land surveying, auctioneering, estate management and other forms of survey- and building-related advice. It is not usual for any individual member to have expertise in all areas, and hence partnerships or companies are established to create general practices able to offer a wider spectrum of surveying services.\n", "There is no qualifying examination, but appointees are required to be of good character and they are usually well established in the local community. People convicted of serious offences are considered unsuitable. Civil servants are usually only appointed where the performance of their official duties requires an appointment (i.e. ex-officio). Solicitors, people employed in legal offices, and members of the clergy are, as a matter of practice, not appointed because their occupation may cause a conflict of interest when exercising the duties of a peace commissioner. The fact that an applicant or nominee may be suitable for appointment does not, in itself, provide any entitlement to appointment as a peace commissioner because other factors, such as the need for appointments in particular areas, are taken into account.\n", "For many years the description of the number of tiers in UK local government arrangements has routinely ignored any current or previous bodies at the lowest level of authorities elected by the voters within their area such as parish (in England and Wales) or community councils; such bodies do not exist or have not existed in all areas.\n", "Local government consists of city and county officers, as well as people who are known as constitutional officers. The positions of these constitutional officers are provided for by the Virginia Constitution. Article 7, Section 4 of the Virginia constitution provides, \"There shall be elected by the qualified voters of each county and city a treasurer, a sheriff, an attorney for the Commonwealth, a clerk, who shall be clerk of the court in the office of which deeds are recorded, and a commissioner of revenue.\" The local constitutional offices are not appointed by the city or county. The Judges of the Circuit Court, the General District Court and the Juvenile and Domestic Relations District Court are appointed by the State legislature. The constitutional officers have salaries set by the state through its compensation board, although the locality may supplement the salaries. 
This structure allows those officers a measure of independence within the local government setting.\n", "Each county has a county seat, often a populous or centrally located community, where the county's governmental offices are located. Some of the services provided by the county include: law enforcement, circuit courts, social services, vital records and deed registration, road maintenance, and snow removal. County officials include sheriffs, district attorneys, clerks, treasurers, coroners, surveyors, registers of deeds, and clerks of circuit court; these officers are elected for four-year terms. In most counties, elected coroners have been replaced by appointed medical examiners. State law permits counties to appoint a registered land surveyor in place of electing a surveyor.\n" ]
why can't we cure genetic diseases with genetic engineering?
That's like saying "why can't we solve engineering problems with engineering?" It's one thing to spot and identify a problem, but a whole different thing to find ways to understand it, solve it, and get the tools you need at the precision you need. For anything medical you also need extensive studies to rule out (or rein in) side effects, find ways to make treatments somewhat affordable and easy to administer, and so on. Science/knowledge isn't the end of problem solving. It's the start.
[ "Genetic engineering could potentially fix severe genetic disorders in humans by replacing the defective gene with a functioning one. It is an important tool in research that allows the function of specific genes to be studied. Drugs, vaccines and other products have been harvested from organisms engineered to produce them. Crops have been developed that aid food security by increasing yield, nutritional value and tolerance to environmental stresses.\n", "The kind of technology used in genetic engineering is also being developed to treat people with genetic disorders in an experimental medical technique called gene therapy. However, here the new gene is put in after the person has grown up and become ill, so any new gene is not inherited by their children. Gene therapy works by trying to replace the allele that causes the disease with an allele that works properly.\n", "Genetic engineering has been applied in numerous fields including research, medicine, industrial biotechnology and agriculture. In research GMOs are used to study gene function and expression through loss of function, gain of function, tracking and expression experiments. By knocking out genes responsible for certain conditions it is possible to create animal model organisms of human diseases. As well as producing hormones, vaccines and other drugs genetic engineering has the potential to cure genetic diseases through gene therapy. The same techniques that are used to produce drugs can also have industrial applications such as producing enzymes for laundry detergent, cheeses and other products.\n", "Proponents of genetic engineering cite its ability to cure and prevent life-threatening diseases. Genetic engineering began in the 1970s when scientists began to clone and engineer genes. From this, scientists were able to create human insulin, the first-ever genetically-engineered drug. Because of this development, over the years scientists were able to create new drugs to treat devastating diseases. For example, in the early 1990s, a group of scientists were able to use a gene-drug to treat severe combined immunodeficiency in a young girl. \n", "I do not think the main problem with enhancement and genetic engineering is that they undermine effort and erode human agency. The deeper danger is that they represent a kind of hyperagency—a Promethean aspiration to remake nature, including human nature, to serve our purposes and satisfy our desires\".\n", "Genetic engineering could be used to cure diseases, but also to change physical appearance, metabolism, and even improve physical capabilities and mental faculties such as memory and intelligence. Ethical claims about germline engineering include beliefs that every fetus has a right to remain genetically unmodified, that parents hold the right to genetically modify their offspring, and that every child has the right to be born free of preventable diseases. For parents, genetic engineering could be seen as another child enhancement technique to add to diet, exercise, education, training, cosmetics, and plastic surgery. Another theorist claims that moral concerns limit but do not prohibit germline engineering.\n", "For some people, the biggest issue with genetic engineering is whether or not to seek out nd act on knowledge about genetic flaws. Double Helix attempts to explore the life saving and life destroying aspects of genetic engineering. 
The book proposes that, while Eli’s life was saved by avoiding the Huntington’s disease gene, his concept of life and self were destroyed when he found out he was genetically engineered to be a certain way.\n" ]
why are emergency services' two-way radio systems so fuzzy and unclear? shouldn't emergency services have crisp audio more than anyone else?
They use relatively long radio waves that can get through almost anything. The drawback is that those frequencies pick up a lot of minor interference and distortion.
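For a rough sense of scale (illustrative numbers of my own, not from the answer above): the wavelength follows from the carrier frequency as $\lambda = c/f$, so the VHF/UHF public-safety bands mentioned in the context below correspond to wavelengths of roughly a couple of metres down to well under a metre.

```latex
\lambda = \frac{c}{f}, \qquad
\lambda_{150\,\mathrm{MHz}} = \frac{3\times10^{8}\,\mathrm{m/s}}{150\times10^{6}\,\mathrm{Hz}} = 2\,\mathrm{m}, \qquad
\lambda_{450\,\mathrm{MHz}} = \frac{3\times10^{8}\,\mathrm{m/s}}{450\times10^{6}\,\mathrm{Hz}} \approx 0.67\,\mathrm{m}
```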
[ "Police radio and other public safety services such as fire departments and ambulances are generally found in the VHF and UHF parts of the spectrum. Trunking systems are often used to make most efficient use of the limited number of frequencies available. \n", "Emergency radios are generally designed to cover the standard AM and FM broadcasting bands, and weather radio in countries that provide that service. Basic shortwave radio coverage (for situations where local radio is out or not available) is less-common.\n", "BULLET::::- portable two-way radios, which transmit and receive on the same frequencies as the built in two-way radios, but are less powerful. Emergency workers can take these radios with them when they exit the vehicle. There are also systems (frequently referred to as mobile extenders or mobile repeaters) that allow the portable radios to be relayed through the vehicle's more powerful two way radio. Some emergency services encrypt their radio transmissions.\n", "One element that separates some emergency radios from other types of radios, is the ability to broadcast alerts via the Emergency Alert System, even when the radio sound is turned off. This is especially useful in areas where sudden storms, tornadoes, tsunamis or other fast-breaking emergencies can occur. Some emergency radios are designed to also charge other devices, such as cell phones or mp3 players, but this can vary widely.\n", "All officers also carry two-way radios registered to Airwave Solutions, a nationwide radio network in the UK on which police and other emergency rely. Based on the TETRA standard, the radio network is secure and fully protected against eavesdropping on transmissions, as well as allowing interoperability with other police services, fire brigades, as well as ambulance services. For the Lancashire Police, Motorola MTH800s are the radios of choice.\n", "Some agencies use commercial two-way radio as an adjunct to their own communications networks. One professional engineering evaluation of public safety radio systems explains that commercial systems such as Nextel's are not built to the same standards of coverage and non-blocking as public safety trunked systems. Like toy walkie talkies marketed to children, they are usable and helpful for non-urgent communications but should not be considered reliable enough for life safety uses. It is also true that most trunked radio system users are likely to hear busy signals, (error tones showing no channels are available,) for the first time during a large disaster. All systems have a finite capacity.\n", "BULLET::::- Two-way radio – One of the most important pieces of equipment in modern emergency medical services as it allows for the issuing of jobs to the ambulance, and can allow the crew to pass information back to control or to the hospital (for example a priority ASHICE message to alert the hospital of the impending arrival of a critical patient.) More recently many services worldwide have moved from traditional analog UHF/VHF sets, which can be monitored externally, to more secure digital systems, such as those working on a GSM system, such as TETRA.\n" ]
How were large, dangerous animals like bears hunted in the Middle Ages?
Unfortunately there isn't a lot of source material on bear hunting, as many of our late medieval 'hunting manuals' were produced in England... when the bear had long been extinct there. However, the very famous 14th-century hunting manual by [Gaston Fébus](_URL_0_), the [*Livre de chasse*](_URL_3_), features a chapter on hunting bear. In fact, Gaston is said to have died of a stroke while washing bear blood from his hands; this is a classic example of how we can't take the words of medieval writers about the aristocracy at face value - they are always burnishing the image of their subject. As count of Foix, Gaston would have hunted in the Pyrenees, where bears have roamed even until recently (and there is in fact a protest by farmers now against the planned re-introduction of bears). I don't know if his book has been translated into English - I've only seen editions of the various manuscript images with summaries. Generally speaking, large animals were harried by riders and hounds and driven into places that could be controlled: pits and traps. Hunters fought on horseback and on foot, using weapons at a distance: bows and spears. It seems aristocrats were less interested in demonstrating strength than in an ability to outwit their prey, whether bear, stag or wolf. If you want to write about medieval hunting, try to find these two books: * [John Cummins, The Art of Medieval Hunting: The Hound and the Hawk (Castle Books, 2003)](_URL_2_) * [Richard Almond, Medieval Hunting (The History Press, 2011)](_URL_1_)
[ "Bear hunting is the act of hunting bears. Bears have been hunted since prehistoric times for their meat and fur. In modern times they have been favoured by big game hunters due to their size and ferocity. Bear hunting has a vast history throughout Europe and North America, and hunting practices have varied based on location and type of bear.\n", "The Ainu usually hunted bear during the time of the spring thaw. At that time, bears were weak because they had not fed at all during long hibernation. Ainu hunters caught hibernating bears or bears that had just left hibernation dens. When they hunted bear in summer, they used a spring trap loaded with an arrow, called an \"amappo\". The Ainu usually used arrows to hunt deer. Also, they drove deer into a river or sea and shot them with arrows. For a large catch, a whole village would drive a herd of deer off a cliff and club them to death.\n", "In many parts of North Africa and the Middle East, large stone corrals were constructed to drive herds of gazelle into, making for an easy ambush. This method of hunting started in prehistoric time, and continued into the early part of the 20th century.\n", "Bears have been hunted since prehistoric times for their meat and fur; they have been used for bear-baiting and other forms of entertainment, such as being made to dance. With their powerful physical presence, they play a prominent role in the arts, mythology, and other cultural aspects of various human societies. In modern times, bears have come under pressure through encroachment on their habitats and illegal trade in bear parts, including the Asian bile bear market. The IUCN lists six bear species as vulnerable or endangered, and even least concern species, such as the brown bear, are at risk of extirpation in certain countries. The poaching and international trade of these most threatened populations are prohibited, but still ongoing.\n", "In the 19th century, as the settlers began increasingly moving west in pursuit of more land for ranching, bears were becoming increasingly more hunted as threats to livestock. In 1818, a “War of Extermination” against wolves and bears was declared in Ohio. Bear pelts were usually sold for 220 dollars in the 1860s. \n", "Bear hunts were ceremonial. Black bears were usually hunted in winter, where lighted poles were used to drive them from their dens. Grizzlies that lived on the valley floor were greatly feared and rarely hunted. \n", "In the first half of the 20th century, mechanized and overpoweringly efficient methods of hunting and trapping came into use in North America as well. Polar bears were chased from snowmobiles, icebreakers, and airplanes, the latter practice described in a 1965 \"New York Times\" editorial as being \"about as sporting as machine gunning a cow.\" Norwegians used \"self-killing guns\", comprising a loaded rifle in a baited box that was placed at the level of a bear's head, and which fired when the string attached to the bait was pulled. The numbers taken grew rapidly in the 1960s, peaking around 1968 with a global total of 1,250 bears that year.\n" ]
why doesn't facial hair get greasy?
Not as much oil is secreted by the face as by the top of the head.
[ "Greasy hair is a hair condition which is common in humans, one of four main types of hair conditioning— normal, greasy, dry and greasy dry. It is primarily caused by build-up of the natural secretion from the sebaceous glands in the scalp and is characterised by the continuous development of natural grease on the scalp. A chronic condition of greasy hair may often accompany chronic greasy skin conditions on the face and body and oily skin and acne. Excessive carbohydrate, fat and starch consumption can increase the likelihood of developing greasy hair and also poor personal hygiene and not washing the hair for a long duration will lead to a buildup of sebum in the hair follicles. Hair conditioners can decrease the likelihood of developing greasy hair after shampooing. Some cosmetics companies produce shampoos and conditioners specifically to deal with greasy hair and for oily or dry hair problems. Massaging the scalp and exposure to the sun can reduce the problem of greasy hair.\n", "After a hair has been shaved, it begins to grow back. Curly hair tends to curl into the skin instead of straight out the follicle, leading to an inflammation reaction. PFB can make the skin look itchy and red, and in some cases, it can even look like pimples. These inflamed papules or pustules can form especially if the area becomes infected.\n", "Hair coloring products are commonly used to cover gray, look more attractive or keep up with fashion, yet they pose a challenge for many women. Because of the length of time the hair dye must be on the hair to achieve deep, even results, it often seeps or drips down onto the hairline, ears or neck, causing unsightly and irritating stains on the skin. Dye users are not universally affected—some persons have a tendency to get stains while others do not—most likely due to the variations in lipid or natural oil composition on the skin surface from one person to the next.\n", "Malnutrition is also known to cause hair to become lighter, thinner, and more brittle. Dark hair may turn reddish or blondish due to the decreased production of melanin. The condition is reversible with proper nutrition.\n", "This condition can cause single or, less commonly, multiple white patches on the hair. Some mistake these white patches for simple birth marks. In poliosis there is decreased or absent melanin in the hair bulbs of affected hair follicles; the melanocytes of the skin are usually not affected. \n", "White piedra (or tinea blanca) is a mycosis of the hair caused by several species of fungi in the genus \"Trichosporon\". It is characterized by soft nodules composed of yeast cells and arthroconidia that encompass hair shafts.\n", "In some cases, gray hair may be caused by thyroid deficiencies, Waardenburg syndrome or a vitamin B deficiency. At some point in the human life cycle, cells that are located in the base of the hair's follicles slow, and eventually stop producing pigment. Piebaldism is a rare autosomal dominant disorder of melanocyte development, which may cause a congenital white forelock.\n" ]
why did ronda rousey look tonight like a complete amateur who had never set foot in an octagon?
She was not THAT great a fighter, just a dominant one in a growing sport (women's MMA). As the sport got bigger, the fighters got better. And she took a long break and had simply not improved in that time.
[ "Ronda Rousey began the Women's Bantamweight bout aggressively. Holly Holm, with her boxing ability, and from her southpaw stance, tagged Rousey repeatedly with her left hand while controlling the distance. Rousey attempted several clinches through the round, including an attempted armbar, but none had a decisive outcome. By the end of the first round, Rousey was bloodied and breathing heavily, with commentators describing the round as the first she had ever lost in her MMA career.\n", "After the interview, Roberts staggered to the ring with his trademark snake. However, upon reaching the ring, Roberts put the snake down and attempted to return backstage; he then reversed course, returned to ringside, and began greeting fans. Before entering the ring, Roberts grabbed a female fan and had her rub her hands on his bare chest. Later, Roberts removed the snake from its bag and simulated masturbation with it. The event's producers cut to wide-shots of the crowd during the incident, so that at-home viewers were unaware what was happening in the ring. Roberts eventually collapsed in the middle of the ring with the snake draped over his body; the prone Roberts then began to attempt to kiss the snake.\n", "The show is nominally remembered for its conclusion, originally slated to be a double-main event pitting Jake Roberts against Jim Neidhart, and King Kong Bundy against Yokozuna. Roberts' problems with drug and alcohol addiction had been well publicized in the preceding years, and his booking in the main event was meant to capitalize on the resurgence in popularity he had enjoyed as a result of his attempts at sobriety. Roberts' return to the ring was meant to be the high point of the evening, and the match responsible for generating the most publicity. However, Roberts suffered a relapse prior to the show and consumed a significant amount of alcohol before arriving. Prior to his match with Neidhart, Roberts had been scheduled to cut a promo in which he would taunt Neidhart. Due to his level of intoxication, Roberts' promo instead consisted of a slurred, incoherent rant consisting largely of wordplay based on the event's casino setting. One particular segment of the rant would go viral after it was posted on WrestleCrap:\n", "On the March 5 episode of \"Raw\", it was announced that Rousey will make her in–ring debut WrestleMania 34, WWE's flagship event, in a mixed tag team match pitting Rousey and Kurt Angle as her partner against Stephanie McMahon and Triple H. At the event, Rousey submitted McMahon with her trademark armbar submission hold to secure the win for her team. Her debut performance was widely praised by both fans and wrestling critics, with Dave Meltzer of the \"Wrestling Observer\" noting that she \"at no point looked out of her element, she was crisp in just about everything\", calling her performance \"one of the better pro wrestling debuts I've ever seen\". \"The Washington Post\" noted the positive fan reaction, stating \"The match exceeded expectations, with fans firmly behind Rousey\" and \"[fans were] surprised [at her] high–level coordination and quality of wrestling. Even those who were not agreed the match was entertaining.\"\n", "By 1983, Jones had largely retired from in-ring competition due to accumulated back injuries. That year, Jones developed a gimmick of wearing tuxedos, and created an angle in which he held a contest in which a large poster of himself dressed in a white tuxedo would be awarded as a prize to the winner. 
This led to a memorable episode of \"Mid-Atlantic Championship Wrestling\", in which the winner of the poster was revealed to be a young, attractive woman. As she walked onto the ringside set to claim her prize, she attempted to embrace Jones with a kiss as her way of thanking him; but Jones backed away quickly and proceeded to berate her violently. Rufus R. Jones then came to the lady's rescue, and was attacked by Paul. Paul then shoved the terrified young lady between himself and Rufus to block Rufus' defensive attack. This angle led to a brief feud between Paul Jones and Rufus R. Jones.\n", "In 1956 Arnova created a controversy when she appeared on the RAI television variety show \"La piazzetta\" wearing a tight leotard that made her appear semi-nude because of the lighting effects and the black-and-white system. The show was suspended and she was subsequently fired and banned from Italian television. She subsequently chose to leave showbusiness. \n", "As Rousey predicted, her bout with Tate was highly publicized in the months preceding it. Rousey had made her MMA debut in early 2011 and defeated all four of her opponents by first-round armbar submission. However, Tate did not believe that Rousey had earned a title shot, and felt that Rousey was largely gaining the opportunity due to being \"pretty.\" The two engaged in a variety of trash-talk, with Rousey stating that she was \"bored\" while watching Tate's win over Coenen. Ultimately, Tate and Rousey headlined a on March 3, 2012. This marked a then-rare occurrence of women being placed in the main event of an MMA card. The bout was televised on Showtime and introduced by Jimmy Lennon, Jr. Shortly after the fight began, Tate escaped Rousey's first armbar attempt and retaliated with strikes. After a back-and-forth session of grappling, Tate lost the title when Rousey secured a second armbar near the end of the first round, forcing her to submit.\n" ]
is hot weather the cause of higher violence rates?
Climate changes how people live, which affects the decisions people make and thereby indirectly affects violence. I seriously doubt you will find any evidence that heat directly affects the mind. The research has long shown that it is an indirect relationship; I have never come across a direct link between weather and the mind in my studies. Source: Master's in Human Relations (not claiming to be an expert, just that these types of studies were read and written about during my studies)
[ "Heat waves are the most lethal type of weather phenomenon in the United States. Between 1992 and 2001, deaths from excessive heat in the United States numbered 2,190, compared with 880 deaths from floods and 150 from hurricanes. The average annual number of fatalities directly attributed to heat in the United States is about 400. The 1995 Chicago heat wave, one of the worst in US history, led to approximately 739 heat-related deaths over a period of five days. Eric Klinenberg has noted that in the United States, the loss of human life in hot spells in summer exceeds that caused by all other weather events combined, including lightning, rain, floods, hurricanes, and tornadoes. Despite the dangers, Scott Sheridan, professor of geography at Kent State University, found that less than half of people 65 and older abide by heat-emergency recommendations like drinking lots of water. In his study of heat-wave behavior, focusing particularly on seniors in Philadelphia, Phoenix, Toronto, and Dayton, Ohio, he found that people over 65 \"don't consider themselves seniors.\" One of his older respondents said: \"Heat doesn't bother me much, but I worry about my neighbors.\"\n", "Although comparatively little reporting is made about the health effects of extraordinarily hot conditions, heat waves are responsible for more deaths annually than more energetic natural disasters such as lightning, rain, floods, hurricanes, and tornadoes. Supporting this conclusion, Karl Swanberg, a forecaster with the National Weather Service, reported that between 1936 and 1975, about 20,000 U.S. residents died of heat. \"Heat and solar radiation on average kill more U.S. residents each year than lightning, tornadoes, hurricanes, floods or earthquakes,\" said Karl Swanberg. This finding is also referenced in a publication of the National Oceanic and Atmospheric Administration, giving guidance on how to avoid health problems due to heat.\n", "In addition to physical stress, excessive heat causes psychological stress, to a degree which affects performance, and is also associated with an increase in violent crime. High temperatures are associated with increased conflict both at the interpersonal level and at the societal level. In every society, crime rates go up when temperatures go up, particularly violent crimes such as assault, murder, and rape. Furthermore, in politically unstable countries, high temperatures are an aggravating factor that lead toward civil wars.\n", "In the last 30–40 years, heat waves with high humidity have became more frequent and severe. Extremely hot nights have doubled in frequency. The area in which extremely hot summers are observed, has increased 50-100 fold. These changes are not explained by natural variability, and attributed by climate scientists to the influence of anthropogenic climate change. Heat waves with high humidity pose a big risk to human health while heat waves with low humidity lead to dry conditions that increase wildfires. The mortality from extreme heat is larger than the mortality from hurricanes, lightning, tornadoes, floods, and earthquakes together See also 2018 heat wave.\n", "Extreme weather events, such as heat waves, droughts and floods, are expected to increase in their frequency and intensity in the next hundred years due to climate change. 
Low socioeconomic status groups and racial and ethnic minorities are affected by heat-related illness at greater rates due to factors such as lack of access to air conditioning, lack of transportation, occupations that require outdoor work and the heat-island effect in urban neighborhoods.\n", "Although official definitions vary, a heat wave is generally defined as a prolonged period with excessive heat. Although heat waves do not cause as much economic damage as other types of severe weather, they are extremely dangerous to humans and animals: according to the United States National Weather Service, the average total number of heat-related fatalities each year is higher than the combined total fatalities for floods, tornadoes, lightning strikes, and hurricanes. In Australia, heat waves cause more fatalities than any other type of severe weather. As in droughts, plants can also be severely affected by heat waves (which are often accompanied by dry conditions) can cause plants to lose their moisture and die. Heat waves are often more severe when combined with high humidity.\n", "Pain and discomfort also increase aggression. Even the simple act of placing one's hands in hot water can cause an aggressive response. Hot temperatures have been implicated as a factor in a number of studies. One study completed in the midst of the civil rights movement found that riots were more likely on hotter days than cooler ones (Carlsmith & Anderson 1979). Students were found to be more aggressive and irritable after taking a test in a hot classroom (Anderson et al. 1996, Rule, et al. 1987). Drivers in cars without air conditioning were also found to be more likely to honk their horns (Kenrick & MacFarlane 1986), which is used as a measure of aggression and has shown links to other factors such as generic symbols of aggression or the visibility of other drivers.\n" ]
Why is there an absolute reference for rotation?
The discussion about there being no preferred reference frame is in the context of special relativity, which only considers unaccelerated motion. A rotation is always an accelerated motion, i.e. you need a centripetal force to "bend" the path of an object. This force is what makes different reference frames distinguishable. Note that you could still make the change of reference frame if you wanted to: people made quite accurate models with the sun revolving around the earth; they only became hugely complicated when all the other planets had to be taken into account as well. Also note that our solar system is revolving at incredible speed around the center of the Milky Way. We don't notice this in everyday life, because you can equally picture us as being in free fall in the gravitational potential of the Milky Way: Einstein's equivalence principle.
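A compact way to see this (a standard textbook formula, not something from the answer above): write Newton's second law in a frame rotating at constant angular velocity $\boldsymbol{\omega}$, and extra "fictitious" terms appear that an observer inside the frame can measure, which is what makes rotation detectable in an absolute sense.

```latex
m\,\ddot{\mathbf{r}}' \;=\; \mathbf{F}
\;-\; \underbrace{m\,\boldsymbol{\omega}\times\left(\boldsymbol{\omega}\times\mathbf{r}'\right)}_{\text{centrifugal}}
\;-\; \underbrace{2m\,\boldsymbol{\omega}\times\dot{\mathbf{r}}'}_{\text{Coriolis}}
```

A Foucault pendulum or the curved surface of water in Newton's spinning bucket is this equation made visible: no reference to anything outside the laboratory is needed.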
[ "A common situation in which noninertial reference frames are useful is when the reference frame is rotating. Because such rotational motion is non-inertial, due to the acceleration present in any rotational motion, a fictitious force can always be invoked by using a rotational frame of reference. Despite this complication, the use of fictitious forces often simplifies the calculations involved.\n", "An inertial reference frame (or inertial frame in short) is a frame in which all the physical laws hold. For instance, in a rotating reference frame, Newton's laws have to be modified because there is an extra Coriolis force (such frame is an example of non-inertial frame). Here, \"rotating\" means \"rotating with respect to some inertial frame\". Therefore, although it is true that a reference frame can always be chosen to be any physical system for convenience, any system has to be eventually described by an inertial frame, directly or indirectly. Finally, one may ask how an inertial frame can be found, and the answer lies in the Newton's laws, at least in Newtonian mechanics: the first law guarantees the existence of an inertial frame while the second and third law are used to examine whether a given reference frame is an inertial one or not.\n", "That a given frame is non-inertial can be detected by its need for fictitious forces to explain observed motions. For example, the rotation of the Earth can be observed using a Foucault pendulum. The rotation of the Earth seemingly causes the pendulum to change its plane of oscillation because the surroundings of the pendulum move with the Earth. As seen from an Earth-bound (non-inertial) frame of reference, the explanation of this apparent change in orientation requires the introduction of the fictitious Coriolis force.\n", "A rotating frame of reference is a special case of a non-inertial reference frame that is rotating relative to an inertial reference frame. An everyday example of a rotating reference frame is the surface of the Earth. (This article considers only frames rotating about a fixed axis. For more general rotations, see Euler angles.)\n", "For the concept of absolute rotation to be scientifically meaningful, it must be measurable. In other words, can an observer distinguish between the rotation of an observed object and their own rotation? Newton suggested two experiments to resolve this problem. One is the effects of centrifugal force upon the shape of the surface of water rotating in a bucket, equivalent to the phenomenon of rotational gravity used in proposals for manned spaceflight.\n", "Rotation formalisms are focused on proper (orientation-preserving) motions of the Euclidean space with one fixed point, that a \"rotation\" refers to. Although physical motions with a fixed point are an important case (such as ones described in the center-of-mass frame, or motions of a joint), this approach creates a knowledge about all motions. Any proper motion of the Euclidean space decomposes to a rotation around the origin and a translation. Whichever the order of their composition will be, the \"pure\" rotation component wouldn't change, uniquely determined by the complete motion.\n", "For the following formalism, the rotating frame of reference is regarded as a special case of a non-inertial reference frame that is rotating relative to an inertial reference frame denoted the stationary frame.\n" ]
how was the stuxnet virus created, and why was it so successful?
There's a YouTube video on this by a channel called "The Hungry Beast" or something similar; try looking it up. I'm in China at the moment and it's blocked for me, =/ _URL_0_ - someone linked it in another thread.
[ "\"Stuxnet\" is a computer worm discovered in June 2010 that is believed to have been created by the United States and Israel to attack Iran's nuclear facilities. It switched off safety devices, causing centrifuges to spin out of control. Stuxnet initially spreads via Microsoft Windows, and targets Siemens industrial control systems. While it is not the first time that hackers have targeted industrial systems, it is the first discovered malware that spies on and subverts industrial systems, and the first to include a programmable logic controller (PLC) rootkit.\n", "Stuxnet, discovered by Sergey Ulasen, initially spread via Microsoft Windows, and targeted Siemens industrial control systems. While it is not the first time that hackers have targeted industrial systems, nor the first publicly known intentional act of cyberwarfare to be implemented, it is the first discovered malware that spies on and subverts industrial systems, and the first to include a programmable logic controller (PLC) rootkit.\n", "Stuxnet is a computer worm discovered in June 2010. It initially spreads via Microsoft Windows, and targets Siemens industrial software and equipment. While it is not the first time that hackers have targeted industrial systems, it is the first discovered malware that spies on and subverts industrial systems, and the first to include a programmable logic controller (PLC) rootkit.\n", "Stuxnet is typically introduced into the supply network via an infected USB flash drive with persons with physical access to the system. The worm then travels across the cyber network, scanning software on computers controlling a programmable logic controller (PLC). Stuxnet introduces the infected rootkit onto the PLC modifying the codes and giving unexpected commands to the PLC while returning a loop of normal operation value feedback to the users.\n", "Kaspersky Lab experts at first estimated that Stuxnet started spreading around March or April 2010, but the first variant of the worm appeared in June 2009. On 15 July 2010, the day the worm's existence became widely known, a distributed denial-of-service attack was made on the servers for two leading mailing lists on industrial-systems security. This attack, from an unknown source but likely related to Stuxnet, disabled one of the lists and thereby interrupted an important source of information for power plants and factories. On the other hand, researchers at Symantec have uncovered a version of the Stuxnet computer virus that was used to attack Iran's nuclear program in November 2007, being developed as early as 2005, when Iran was still setting up its uranium enrichment facility.\n", "Stuxnet is typically introduced to the target environment via an infected USB flash drive. The worm then propagates across the network, scanning for Siemens Step7 software on computers controlling a PLC. In the absence of either criterion, Stuxnet becomes dormant inside the computer. If both the conditions are fulfilled, Stuxnet introduces the infected rootkit onto the PLC and Step7 software, modifying the codes and giving unexpected commands to the PLC while returning a loop of normal operations system values feedback to the users.\n", "Stuxnet is a malicious computer worm believed to be a jointly built American-Israeli cyber weapon. 
Although neither state has confirmed this openly, anonymous US officials speaking to \"The Washington Post\" claimed the worm was developed during the Obama administration to sabotage Iran’s nuclear program with what would seem like a long series of unfortunate accidents.\n" ]
why do girls do the "duck face?"
The ‘duck face’ started back when MySpace was still thriving, around 2005. The face is also attributed to the ‘MySpace pics’ trend, which I think are now called ‘selfies’. Usually teenagers would set their profile picture to something they took in the bathroom mirror, or with their arm extended in front of them. The ‘duck face’ is similar to the ‘fish pout’ lip shape that a lot of celebrities have, one of the more noted ones being [Angelina Jolie](_URL_0_). Girls thought this was cute, or cool, and tried to replicate it. Instead it usually comes off looking a little silly, as opposed to the cute/sexy effect they’re going for. It’s basically the result of people trying to ‘fit in’ to trends. Not a lot of people realize that people like Angelina Jolie and Lindsay Lohan are made up to look the way they do. It became a thing because young women were trying to emulate something they thought was stylish, unaware of how the people they got it from actually made themselves look that way.
[ "Duck face is a photographic pose, which is well known on profile pictures in social networks. Lips are pressed together as in a pout and often with simultaneously sucked in cheeks. The pose is most often seen as an attempt to appear alluring, but also as a self-deprecating, ironic gesture making fun of the pose. It may be associated with sympathy, attractiveness, friendliness or stupidity.\n", "Groening thought that it would be funny to have a baby character that did not talk and never grew up, but was scripted to show any emotions that the scene required. Maggie's comedic hallmarks include her tendency to stumble and land on her face while attempting to walk, and a penchant for sucking on her pacifier, the sound of which has become the equivalent of her catchphrase and was originally created by Groening during the Tracey Ullman period. In the early seasons of the show, Maggie would suck her pacifier over other characters' dialogue, but this was discontinued because the producers found it too distracting.\n", "BULLET::::- Duck (voiced by Tracy Ryan) is a female duck with yellow feathers, an orange beak and a long neck. Slow and smart at the same time, Duck is one who gets herself into comical situations. She lives in a nest, although in one episode, she expressed longing for a house and tried to live in a house boat. It floated downriver filled with frogs and Duck lived happily in her nest. She loves playing \"princess\" and pretend. She was hatched in a nest of chicks, because \"some eggs got mixed up\", and Little Bear taught her to fly when she was a duckling. She never has any ducklings of her own, but she is sometimes seen babysitting a group of them.\n", "Plucky Duck (voiced by Joe Alaskey at his \"normal\" age and Nathan Ruegger as a baby) is a young, green male duck in a white tank top. Much like his \"Looney Tunes\" mentor Daffy Duck, he is portrayed as greedy, selfish, and egotistical, often engaging in various schemes with the goal of personal glory. However, Plucky does have moments of heroism and goodwill and is more often than not shown to care about his friends and value their feelings. Plucky is friends with Hamton J. Pig and Buster Bunny (although he frequently annoys Buster, again much like Daffy does with Bugs). Plucky constantly pines for the love of Shirley McLoon though she has extremely little patience for him. Also like Daffy Duck, Plucky is capable of flying with his wings but very rarely does so.\n", "The three ducklings are noted for their identical appearances and personalities. A running joke involves the three sometimes even finishing each other's sentences. In the theatrical shorts, Huey, Dewey, and Louie often behave in a rambunctious and mischievous manner, and they sometimes commit retaliation or revenge on their uncle Donald Duck. In the comics, however, as developed by Al Taliaferro and Carl Barks, the young ducks are more usually portrayed as well-behaved, preferring to assist their uncle Donald Duck and great-uncle Scrooge McDuck in the adventure at hand. In the early Barks comics, the ducklings were still wild and unruly, but their character improved considerably due to their membership in the Junior Woodchucks and the good influence of their wise old great-grandmother Elvira Coot \"Grandma\" Duck. According to Don Rosa, Huey, Dewey and Louie became members of the Junior Woodchucks when they were around 11 years old.\n", "Queer Duck has cyan-colored feathers, a little spiky fringe, and may wear purple eye shadow. 
He wears a sleeveless rainbow top and, like almost everyone else in the series, does not wear trousers. This follows the tradition of semi-nudity of cartoon animals exemplified by Porky Pig, Donald Duck, Top Cat, etc. He is often shown to have two fingers and one thumb on each hand, though on occasion he has the three fingers and one thumb per hand that is typical of many contemporary cartoons.\n", "The drawings of Boston represent a duck's eye view of the city. Each of the individual ducklings is \"bored, inquisitive, sleepy, or they are scratching, talking over their backs one to another, running to catch up with the line\". Children identify with the ducklings because they behave as children do. The comforting message shows parents as caretakers, protectors, and teachers.\n" ]
Do cardiac muscle cells die?
[We used to think of cardiomyocytes as nondividing and persistent cells](_URL_0_): > For nearly a century, the general belief has been that the heart is a terminally differentiated post-mitotic organ in which the number of cardiomyocytes is established at birth with these cells persisting throughout the lifespan of the organ and organism. [...] Cardiomyocytes were deemed to live and function for nearly 100 years, or longer. Although unstated, the inevitable implication was that cardiomyocytes were judged to be immortal and to be killed only by pathologic processes occurring during the course of individuals’ lifespan. But now that view is changing in light of new research: > A recent study, based on retrospective [14C] birth dating of cardiac cells, has suggested that ~1% and ~0.45% replacement of myocytes occurs annually in the adult human heart at 25 and 75 years of age, respectively. [...] In contrast to the [14C] study in which only 12 pathologic hearts were examined, we have analyzed 74 normal human hearts from 19 to 104 years of age and documented that myocyte turnover in the female heart occurs at a rate of 10%, 14%, and 40% per year at 20, 60 and 100 years of age, respectively. Corresponding values in the male heart are 7%, 12%, and 32% per year, demonstrating that cardiomyogenesis involves a large and progressively increasing number of cells with aging. From 20 to 100 years of age, the myocyte compartment is replaced 15 times in women and 11 times in men. TL;DR: Yes, cardiac muscle cells die, and get replaced by progenitor cells and cardiac stem cells. However, that replacement potential is limited, which is why cardiac diseases such as myocardial infarction have such severe consequences.
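As a quick sanity check on those numbers, here is a minimal sketch (my own toy calculation, not code from the cited paper) that integrates the quoted female turnover rates over a lifetime:

```python
# Toy back-of-the-envelope check (my own sketch, not code from the cited paper):
# integrate the quoted female turnover rates (10%, 14%, 40% per year at ages
# 20, 60 and 100) with piecewise-linear interpolation to see how many times
# the myocyte pool would be replaced between ages 20 and 100.

ages = [20, 60, 100]
rates = [0.10, 0.14, 0.40]  # fraction of myocytes replaced per year

def rate_at(age: float) -> float:
    """Piecewise-linear interpolation of the annual replacement rate."""
    for (a0, r0), (a1, r1) in zip(zip(ages, rates), zip(ages[1:], rates[1:])):
        if a0 <= age <= a1:
            return r0 + (r1 - r0) * (age - a0) / (a1 - a0)
    raise ValueError("age outside the 20-100 range")

# Trapezoidal integration in 1-year steps
total = sum((rate_at(a) + rate_at(a + 1)) / 2 for a in range(20, 100))
print(f"Implied cumulative turnover from age 20 to 100: ~{total:.1f}x the myocyte pool")
```

This prints roughly 15.6x, consistent with the "replaced 15 times in women" figure quoted above.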
[ "After a myocardial infarction (MI), cardiac myocyte death can be triggered by necrosis, apoptosis, or autophagy, leading to thinning of the cardiac wall. The surviving cardiac myocytes either arrange in parallel or in series to each other, contributing to ventricular dilatation or ventricular hypertrophy, depending on the loading stress on the ventricular wall. Besides, reduced expression of V1 mysoin and L-type calcium channels on cardiac myocytes are also thought to cause cardiac remodelling. Under normal body conditions, fatty acid accounts for 60 to 90% of the energy supply of the heart. Post MI, as fatty acid oxidation decreases, it leads to reduced energy supply for the cardiac myocytes, accumulation of fatty acids to toxic levels, and dysfunction of mitochondria. These consequences also led to the increase in oxidative stress on the heart, causing the proliferation of fibroblasts, activation of metalloproteinases, and induction of apoptosis, which would be explained below. Besides, inflammatory immune response after MI also contributes to the above changes.\n", "Cardiac muscle cells or cardiomyocytes are the contracting cells which allow the heart to pump. Each cardiomyocyte needs to contract in coordination with its neighbouring cells - known as a functional syncytium - working to efficiently pump blood from the heart, and if this coordination breaks down then – despite individual cells contracting – the heart may not pump at all, such as may occur during abnormal heart rhythms such as ventricular fibrillation.\n", "Until recently, it was commonly believed that cardiac muscle cells could not be regenerated. However, a study reported in the April 3, 2009 issue of \"Science\" contradicts that belief. Olaf Bergmann and his colleagues at the Karolinska Institute in Stockholm tested samples of heart muscle from people born before 1955 who had very little cardiac muscle around their heart, many showing with disabilities from this abnormality. By using DNA samples from many hearts, the researchers estimated that a 4-year-old renews about 20% of heart muscle cells per year, and about 69 percent of the heart muscle cells of a 50-year-old were generated after he or she was born.\n", "Martin A. Samuels, MD, elaborates further on still another process of death, stating that with the release of adrenaline and an increased heart rate, sometimes catecholamines, stress hormones, will build up, leading to calcium channels opening and remaining open, resulting in an overflow of calcium into the system, killing off cells.\n", "Aged muscle cells in mammalian hearts constantly regenerate, albeit at a very low rate. Within 18 months around five percent of the heart muscle cells regenerated themselves originating from Sca-1 stem cells\n", "Cardiac fibroblasts make up more than half of all heart cells and are usually not able to conduct contractions (are not cardiogenic), but those reprogrammed were able to contract spontaneously. The significance is that fibroblasts from the damaged heart or from elsewhere, may be a source of functional cardiomyocytes for regeneration.\n", "Skeletal muscle is able to regenerate far better than cardiac muscle due to satellite cells, which are dormant in all healthy skeletal muscle tissue. There are three phases to the regeneration process. These phases include the inflammatory response, the activation, differentiation, and fusion of satellite cells, and the maturation and remodeling of newly formed myofibrils. 
This process begins with the necrosis of damaged muscle fibers, which in turn induces the inflammatory response. Macrophages induce phagocytosis of the cell debris. They will eventually secrete anti-inflammatory cytokines, which results in the termination of inflammation. These macrophages can also facilitate the proliferation and differentiation of satellite cells. The satellite cells re-enter the cell cycle to multiply. They then leave the cell cycle to self-renew or differentiate as myoblasts.\n" ]
Movies always make ancient warriors out to be huge guys with the kind of huge muscles you'd have to get in a gym. Basically what I'm asking is: on average, how buff or big were Spartan warriors or a knight in the Crusades?
They were indeed strong, but not with the sort of big bulky muscles Ahnold and his ilk have. Big muscles are developed through short movements such as lifting weights, which would have been seen as quite strange for a soldier of the past. Warriors had to be able to march for long distances bearing their combat load and then some, and still be able to fight at the end of the day. That sort of activity makes a body strong, but not with huge bulging muscles. Think of the difference between football and futbol players. The former can perform powerful actions, but typically over a shorter time span, while the latter can run around a field for 90 minutes straight. Both are powerful in their own right, but the wiry guy with the endurance is a closer somatotype to historical warriors. The Greeks, since you mention the Spartans, have helpfully provided us with many examples of their warriors' body shape in the form of [statues](_URL_0_) and [muscle cuirasses](_URL_2_). As you can see in those examples, Grecian warriors were muscular, yet quite lean. The Crusaders would have likely been primarily of the same body type, though I'm sure certain nobles and wealthy folk that were able to afford horses and didn't spend all of their free time fighting and training would be exceptions. Forensic analysis of medieval remains like [this poor chap here](_URL_3_) tells us that medieval knights were indeed quite strong. However, effigies like [these](_URL_1_), and there's hundreds more just like them, show us that the average knight was equally as lean as his ancient counterparts.
[ "As Robert Rushing defines it, peplum, \"in its most stereotypical form, [...] depicts muscle-bound heroes (professional bodybuilders, athletes, wrestlers, or brawny actors) in mythological antiquity, fighting fantastic monsters and saving scantily clad beauties. Rather than lavish epics set in the classical world, they are low-budget films that focus on the hero's extraordinary body.\" Thus, most sword-and-sandal films featured a superhumanly strong man as the protagonist, such as Hercules, Samson, Goliath, Ursus or Italy's own popular folk hero Maciste. In addition, the plots typically involved two women vying for the affection of the bodybuilder hero: the good love interest (a damsel in distress needing rescue), and an evil femme fatale queen who sought to dominate the hero.\n", "BULLET::::7. His physical powers are the result of extremely rigorous workouts in gym, unlike other super-heroes who have special superpowers ( for example- Parmanu, who relies on advanced science for his atomic superpower or Shakti who has goddess-like superpowers). He has even fought Kobi, a wolf-like superhero, who is much taller and stronger than him. He is the strongest superhero, if special powers aren't considered.\n", "The average grippli stands two and a half feet high and weighs 25–30 pounds, although particularly ancient warriors may have twice that height, with an equivalent increase in mass. Their bodies are essentially identical to frogs, but they have humanoid hands and prehensile feet.\n", "In the world of martial arts, there have been legends of many powerful warriors. The strongest to ever walk the Earth are West Side Principal, Chou Da Tong, Tian Shan Tong Lao, and the villain Dongfang Bu Bai, who engaged in a sex change in order to enhance his martial arts skills.\n", "The origin of strength athletics lies within prehistory. Testing each other in feats of physical prowess has been something humans have done throughout their existence. This is encapsulated in the modern Olympic motto of \"Swifter, higher, stronger\". There are records in many civilizations of feats of strength performed by great heroes, mythological or otherwise. In ancient western culture Greek heroes such as Heracles are blessed with great strength. In the Bible, figures with exceptional physical strength are described such as Samson and Goliath. Man's obsession with those who possess extraordinary strength is an ancient and persistent one.\n", "BULLET::::- Enforcers - A group of non super powered thugs who originally are enemies of Spider-Man. Although they technically have no super powers, they are formidable fighters, so athletic and skilled that their fighting prowess is almost as good as if they had super powers.\n", "The fighter, as part of the \"warrior\" group, was one of the standard character classes available in the second edition \"Player's Handbook\". The second edition \"Player's Handbook\" gives several examples of famous fighters from legend: Hercules, Perseus, Hiawatha, Beowulf, Siegfried, Cuchulain, Little John, Tristan, and Sinbad. The book also cites a number of great generals and warriors: El Cid, Hannibal, Alexander the Great, Charlemagne, Spartacus, Richard the Lionheart, and Belisarius.\n" ]
what income groups does a sales tax impact more?
Sales tax hits poorer people harder. An item worth $100 is 10% of a month's salary for a person earning $1,000/month; it is 1% for someone earning $10,000. Raise the price by $1 (however that is done, in this case a 1% sales tax), and the added cost is 0.1% of the former person's monthly income, but only 0.01% of the latter's. An income tax of 1%, on the other hand, would cost both of them 1% of their income; that is $10 for the former and $100 for the latter. That is what is usually considered "taxed at the same rate".
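A minimal sketch of that arithmetic, using only the figures from the example above (the function name and the printed comparison are purely illustrative):

```python
# Compares a flat 1% sales tax with a flat 1% income tax for two earners,
# using the $100 item and the $1,000 / $10,000 monthly incomes from above.

def share_of_income(amount, monthly_income):
    """Return `amount` as a percentage of `monthly_income`."""
    return 100 * amount / monthly_income

ITEM_PRICE = 100        # dollars
SALES_TAX_RATE = 0.01   # 1% sales tax adds $1 to the item for every buyer
INCOME_TAX_RATE = 0.01  # 1% income tax scales with what each person earns

for income in (1_000, 10_000):
    sales_tax = ITEM_PRICE * SALES_TAX_RATE     # $1, regardless of income
    income_tax = income * INCOME_TAX_RATE       # $10 or $100
    print(f"${income}/month: item = {share_of_income(ITEM_PRICE, income):.1f}% of income, "
          f"sales tax = {share_of_income(sales_tax, income):.2f}%, "
          f"income tax = {share_of_income(income_tax, income):.2f}%")
```

Running it shows the $1 of sales tax eating 0.10% of the lower income but only 0.01% of the higher one, while the income tax takes the same 1.00% share of both, which is the regressive-versus-flat distinction the answer is describing.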
[ "The top 1 percent of income-earners accounted for 52 percent of the income gains from 2009 to 2015, where income is defined as market income excluding government transfers, while their share of total income has more than doubled from 9 percent in 1976 to 20 percent in 2011. According to a 2014 OECD report, 80% of total pre-tax market income income growth went to the top 10% from 1975 to 2007.\n", "Tax expenditures (i.e., deductions, exemptions, and preferential tax rates) represent a major driver of inequality, as the top 20% get roughly 50% of the benefit from them, with the top 1% getting 17% of the benefit. For example, a 2011 Congressional Research Service report stated, \"Changes in capital gains and dividends were the largest contributor to the increase in the overall income inequality.\" CBO estimated tax expenditures would be $1.5 trillion in fiscal year 2017, approximately 8% GDP; for scale, the budget deficit historically has averaged around 3% GDP.\n", "CBO reported in November 2018 that all income groups significantly increased both their pre-tax and after-tax income from 1979 to 2015 in real terms (i.e., adjusted for inflation). For example, income after transfers and taxes was up 103% for the highest income quintile, 79% for the lowest income quintile, and 46% for the middle three quintiles measured together (21st to 80th percentiles). CBO also reported that the middle quintile (40th to 60th percentile) households, a proxy for the middle-class, earned an average of $58,500 in market income during 2015, representing a 12% share of the total market income. At the 1979 share of 16%, this figure would be $78,000 or $19,500 higher. After taxes and transfers, these middle-class households earned an average of $64,700, a 15% share. At the 1979 share of 16%, this figure would be $69,000 or $4,300 higher.\n", "CBO reported in January 2016 that: \"Tax expenditures are distributed unevenly across the income scale. When measured in dollars, much more of the tax expenditures go to higher-income households than to lower-income households. As a percentage of people’s income, tax expenditures are greater for the highest-income and lowest-income households than for\n", "Sales and income taxes behave differently due to differing definitions of tax base, which can make comparisons between the two confusing. Under the existing individual income plus employment (Social Security; Medicare; Medicaid) tax formula, taxes to be paid are included in the base on which the tax rate is imposed (known as \"tax-inclusive\"). If an individual's gross income is $100 and the sum of their income plus employment tax rate is 23%, taxes owed equals $23. Traditional state sales taxes are imposed on a tax base equal to the pre-tax portion of a good's price (known as \"tax-exclusive\"). A good priced at $77 with a 30% sales tax rate yields $23 in taxes owed. To adjust an inclusive rate to an exclusive rate, divide the given rate by one minus that rate (i.e. formula_1).\n", "The middle income worker's tax wedge is 46% and effective marginal tax rates are very high. Value-added tax is 24% for most items. Capital gains tax is 30-34% and corporate tax is 20%, about the EU median. Property taxes are low, but there is a transfer tax (1.6% for apartments or 4% for individual houses) for home buyers. There are high excise taxes on alcoholic beverages, tobacco, automobiles and motorcycles, motor fuels, lotteries, sweets and insurances. 
For instance, McKinsey estimates that a worker has to pay around 1600 euro for another's 400 euro service - restricting service supply and demand - though some taxation is avoided in the black market and self-service culture. Another study by Karlson, Johansson & Johnsson estimates that the percentage of the buyer’s income entering the service vendor’s wallet (inverted tax wedge) is slightly over 15%, compared to 10% in Belgium, 25% in France, 40% in Switzerland and 50% in the United States. Tax cuts have been in every post-depression government's agenda and the overall tax burden is now around 43% of GDP compared to 51.1% in Sweden, 34.7% in Germany, 33.5% in Canada, and 30.5% in Ireland.\n", "The subject of individual income tax is the real people. The meaning of income is the net amount of revenues derived by a person within a year. According to Income Tax Law, incomes may be listed such as:\n" ]
Why were the Germans in WWII so much more scared to surrender to the Soviets?
They were taught, brainwashed really, into believing that the Soviets were sub-human savages. Barbarians. The regime put out a lot of propaganda about Russian soldiers raping and pillaging and ignoring the laws of war in order to improve morale while fighting them. But it worked so well that many soldiers genuinely believed it. Maybe they were right in some cases, too. They thought they'd be tortured and executed by the Soviets immediately upon surrender.
[ "There were several reasons some Germans decided to end their lives in the last months of the war. First, by 1945, Nazi propaganda had created fear among some sections of the population about the impending military invasion of their country by the Soviets or Western Allies. Information films from the Reich Ministry of Public Enlightenment and Propaganda repeatedly chided audiences about why Germany must not surrender, telling the people they faced the threat of torture, rape, and death in defeat. These fears were not groundless, as many Germans were raped, mostly by Soviet soldiers, although many by Western Allied soldiers also. The number of rapes is disputed, but was certainly considerable – hundreds of thousands of incidents, according to most Western historians.\n", "One reason for the policy was that the Allies wished to avoid a repetition of the stab-in-the-back myth that arose in Germany after World War I, which attributed Germany's loss to betrayal by Jews, Bolsheviks, and Socialists. The myth was used by the Nazis in their propaganda. It was felt that an unconditional surrender would ensure that the Germans knew that they had lost the war themselves.\n", "On 9 May 1945 (Moscow time), Germany surrendered, meaning that if the Soviets were to honour the Yalta agreement, they would need to enter war with Japan by 9 August 1945. The situation continued to deteriorate for the Japanese, and they were now the only Axis power left in the war. They were keen to remain at peace with the Soviets and extend the Neutrality Pact, and they were also keen to achieve an end to the war. Since Yalta they had repeatedly approached, or tried to approach, the Soviets in order to extend the Neutrality Pact, and to enlist the Soviets in negotiating peace with the Allies. The Soviets did nothing to discourage these Japanese hopes, and drew the process out as long as possible (whilst continuing to prepare their invasion forces.) One of the roles of the Cabinet of Admiral Baron Suzuki, which took office in April 1945, was to try to secure any peace terms short of unconditional surrender. In late June, they approached the Soviets (the Neutrality Pact was still in place), inviting them to negotiate peace with the Allies in support of Japan, providing them with specific proposals and in return they offered the Soviets very attractive territorial concessions. Stalin expressed interest, and the Japanese awaited the Soviet response. The Soviets continued to avoid providing a response. The Potsdam Conference was held from 16 July to 2 August 1945. On 24 July the Soviet Union recalled all embassy staff and families from Japan. On 26 July the conference produced the Potsdam Declaration whereby Churchill, Harry S. Truman and Chiang Kai-shek (the Soviet Union was not officially at war with Japan) demanded the unconditional surrender of Japan. The Japanese continued to wait for the Soviet response, and avoided responding to the declaration.\n", "The Soviet Union lived in significant fear of a possible World War Three, and because of this, tended to have a hair trigger when it came to reacting to an event. Hence, the threat of a nuclear war was a very real possibility, even if the reasons behind it are misunderstood. The Soviet Union was ready to go at any time, especially after being scared many times for smaller reasons. On the night of September 1, 1983, a civilian Boeing 747 en route to South Korea passed into Soviet airspace near the Siberian coast. 
A Sukhoi Su-15 interceptor aircraft piloted by Gennadi Osipovich targeted the civilian aircraft and shot it down with two missiles. The Soviets claimed that they knew it was a civilian aircraft, however, they said it would be very easy to convert a civilian aircraft into an intelligence gathering platform. The Soviets claimed they believed they had a justification to shoot down this aircraft because they perceived it to be a hostile intruder. There was one American on board, Larry McDonald who was a United States House of Representative member. Oleg Gordievsky believes that the Soviet Union mistook the civilian airliner to be a United States Boeing RC-135, which is a reconnaissance gathering aircraft which looks very similar to a Boeing 747 due to the fact that it has four engines and a wide body similar to the airliner. This is refuted by the pilots of the attacking Soviet aircraft claiming that he knew it was a civilian jet, but he shot it down anyway because it could have been easily converted for reconnaissance. The attitude of the Soviets towards anything that might be perceived as a threat was devolving more and more towards a 'shoot first, ask questions later' mentality. While it may have been uncalled for, Soviets were on edge about everything at this point. This would prove to be incredibly dangerous in the impending strategic nuclear war exercises about to be conducted by the United States and its NATO allies.\n", "In February 1945 during the Yalta Conference the Soviet Union had agreed to enter the war against Japan 90 days after the surrender of Germany. At the time Soviet participation was seen as crucial to tie down the large number Japanese forces in Manchuria and Korea, keeping them from being transferred to the Home Islands to mount a defense to an invasion.\n", "The Germans were now not only starving, but running out of ammunition. Nevertheless, they continued to resist, in part because they believed the Soviets would execute any who surrendered. In particular, the so-called \"HiWis\", Soviet citizens fighting for the Germans, had no illusions about their fate if captured. The Soviets were initially surprised by the number of Germans they had trapped, and had to reinforce their encircling troops. Bloody urban warfare began again in Stalingrad, but this time it was the Germans who were pushed back to the banks of the Volga. The Germans adopted a simple defense of fixing wire nets over all windows to protect themselves from grenades. The Soviets responded by fixing fish hooks to the grenades so they stuck to the nets when thrown. The Germans had no usable tanks in the city, and those that still functioned could, at best, be used as makeshift pillboxes. The Soviets did not bother employing tanks in areas where the urban destruction restricted their mobility.\n", "The chiefs of staff were concerned that given the enormous size of Soviet forces deployed in Europe at the end of the war, and the perception that the Soviet leader Joseph Stalin was unreliable, there existed a Soviet threat to Western Europe. The Soviet Union had yet to launch its attack on Japanese forces, and so one assumption in the report was that the Soviet Union would instead ally with Japan if the Western Allies commenced hostilities.\n" ]
why can't people remember correct spellings of common words?
I type reasonably fast; frequently my misspellings happen when I am thinking of the right word but type a homonym, or when I start to type one word like "pay", swap what I want to say to "paid" mid-word, and just add the typical past tense suffix. Edit: I've been a fast reader for several decades and read mostly from context, so I'm terrible at proofreading.
[ "At the other extreme are languages such as English, where the pronunciations of many words simply have to be memorized as they do not correspond to the spelling in a consistent way. For English, this is partly because the Great Vowel Shift occurred after the orthography was established, and because English has acquired a large number of loanwords at different times, retaining their original spelling at varying levels. Even English has general, albeit complex, rules that predict pronunciation from spelling, and these rules are successful most of the time; rules to predict spelling from the pronunciation have a higher failure rate.\n", "One of the most effective ways to memorize spellings is to make up mnemonics to help remember them. A mnemonic is a memory trick which associates the thing that is to be remembered with something else to make it easier. For spelling, it can be the exaggerated pronunciation of a word, like \"indepenDENT\". Or it might be a silly sentence or visual image to help remember the word, like \"the independents all dented their cars with sledgehammers\". Or it might describe a key aspect of a word to help remember it, like \"all the vowels in \"cemetery\" are the same: three little \"e's\", each on its own little tombstone.\"\n", "People who use non-standard spelling often suffer from adverse opinions, as a person's mastery of standard spelling is often equated to his or her level of formal education or intelligence. Spelling is easier in languages with more or less consistent spelling systems such as Finnish, Serbian, Italian and Spanish, owing to the fact that, either, pronunciation in these languages has changed relatively little since the initial establishment of their spellings systems, or else that \"non-phonemic etymological\" spellings have been replaced with \"phonemic unetymological\" spellings as pronunciation changes. Predicting spelling is more difficult in languages in which pronunciation has changed significantly since the spelling was fixed, thus yielding a \"non-phonetic etymological\" spelling system such as Irish or French. These spelling systems are still 'phonemic' (rather than 'phonetic') since the pronunciation of a word can be systematically derived from the spelling, although the converse (i.e. spelling from pronunciation) may not be possible. English is an extreme example of a defective orthography in which the spelling cannot be systematically derived from pronunciation but also has the more unusual problem that pronunciation cannot be systematically derived from spelling.\n", "In some cases controlled vocabulary can enhance recall as well, because unlike natural language schemes, once the correct authorized term is searched, there is no need to search for other terms that might be synonyms of that term.\n", "A large number of easily noticeable spelling pronunciations occurs only in languages such as French and English in which spelling tends to not indicate the current pronunciation. Because all languages have at least some words which are not spelled as pronounced, even those such as Finnish with most words being written phonetically, spelling pronunciations can arise in any language in which most people obtain only enough education to learn how to read and write but not enough to understand when the spelling fails to indicate the modern pronunciation. 
In other words, when many people do not clearly understand the relationship between spelling and pronunciation, spelling pronunciations are common.\n", "Unlike spelling reforms, we can actually keep a word's original spelling intact but add pronunciation information to it, e.g. using diacritics. Phonetically Intuitive English is a Chrome browser extension that automatically adds such a pronunciation guide to English words on Web pages, for English-speaking children to recognize a written word's pronunciation and therefore map the written word to the mental word in his mind.\n", "Pronunciation respelling systems for English have been developed primarily for use in dictionaries. They are used there because it is not possible to predict with certainty the sound of a written English word from its spelling or the spelling of a spoken English word from its sound. So readers looking up an unfamiliar word in a dictionary may find, on seeing the pronunciation respelling, that the word is in fact already known to them orally. By the same token, those who hear an unfamiliar spoken word may see several possible matches in a dictionary and must rely on the pronunciation respellings to find the correct match.\n" ]
How do we know that physical constants such as G, C, etc. have not slowly changed over time?
Because if they did, it would have measurable effects that we could observe. We know, for example, that the gravitational constant varies by less than a few parts per trillion per year. [link](_URL_0_)
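Written as a bound, that figure just restates the answer's claim in symbols ("a few parts per trillion per year" means a fractional rate of change of order 10^-12 per year):

```latex
\left| \frac{\dot{G}}{G} \right| \;\lesssim\; \text{a few} \times 10^{-12}\ \mathrm{yr}^{-1}
```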
[ "In 1937, Paul Dirac and others began investigating the consequences of natural constants changing with time. For example, Dirac proposed a change of only 5 parts in 10 per year of Newton's constant \"G\" to explain the relative weakness of the gravitational force compared to other fundamental forces. This has become known as the Dirac large numbers hypothesis.\n", "Physicists have pondered whether the fine-structure constant is in fact constant, or whether its value differs by location and over time. A varying has been proposed as a way of solving problems in cosmology and astrophysics. String theory and other proposals for going beyond the Standard Model of particle physics have led to theoretical interest in whether the accepted physical constants (not just ) actually vary.\n", "Paul Dirac in 1937 speculated that physical constants such as the gravitational constant or the fine-structure constant might be subject to change over time in proportion of the age of the universe. Experiments can in principle only put an upper bound on the relative change per year. For the fine-structure constant, this upper bound is comparatively low, at\n", "Physical constant in the sense under discussion in this article should not be confused with other quantities called \"constants\" that are assumed to be constant in a given context without the implication that they are fundamental, such as the \"time constant\" characteristic of a given system, or material constants, such as the Madelung constant, electrical resistivity, and heat capacity.\n", "Some theorists (such as Dirac and Milne) have proposed cosmologies that conjecture that physical \"constants\" might actually change over time (e.g. a variable speed of light or Dirac varying-\"G\" theory). Such cosmologies have not gained mainstream acceptance and yet there is still considerable scientific interest in the possibility that physical \"constants\" might change, although such propositions introduce difficult questions. Perhaps the first question to address is: How would such a change make a noticeable operational difference in physical measurement or, more fundamentally, our perception of reality? If some particular physical constant had changed, how would we notice it, or how would physical reality be different? Which changed constants result in a meaningful and measurable difference in physical reality? If a physical constant that is not dimensionless, such as the speed of light, \"did\" in fact change, would we be able to notice it or measure it unambiguously? – a question examined by Michael Duff in his paper \"Comment on time-variation of fundamental constants\".\n", "By definition, fundamental physical constants are subject to measurement, so that their being constant (independent on both the time and position of the performance of the measurement) is necessarily an experimental result and subject to verification.\n", "The value of the constant \"G\" was first accurately determined from the results of the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798, although Cavendish did not himself calculate a numerical value for \"G\". This experiment was also the first test of Newton's theory of gravitation between masses in the laboratory. It took place 111 years after the publication of Newton's \"Principia\" and 71 years after Newton's death, so none of Newton's calculations could use the value of \"G\"; instead he could only calculate a force relative to another force.\n" ]
why is a bullet so deadly?
The bullet moves very fast, and though it only leaves a small hole, the shockwave disrupts and destroys a lot of tissue. Look at the various films where people shoot melons. These melons explode in all directions. Imagine flesh being subjected to those forces.
[ "BULLET::::- Is presumed to be toxic to humans because it falls within any one of the following categories when tested on laboratory animals (whenever possible, animal test data that has been reported in the chemical literature should be used):\n", "The bullet design can produce deep wounds while failing to pass through structural barriers thicker than drywall or sheet metal. These qualities make it less likely to strike unintended targets, such as people in another room during an indoor shooting. Also, when it strikes a hard surface from which a solid bullet would glance off, it fragments into tiny, light pieces and creates much less ricochet danger.\n", "BULLET::::- Several people share an intent to do serious harm, and the victim dies because of the action of \"any\" of those involved (for example, if another person goes \"further than expected\" or performs an unexpectedly lethal action).\n", "BULLET::::6. Lethal force/Deadly force – a force with a high probability of causing death or serious bodily injury. Serious bodily injury includes unconsciousness, protracted or obvious physical disfigurement, or protracted loss of or impairment to the function of a bodily member, organ, or the mental faculty. A firearm is the most widely recognized lethal or deadly force weapon, however, an automobile or weapon of opportunity could also be defined as a deadly force utility.\n", "BULLET::::2. In the absence of adequate data on human toxicity, is presumed to be toxic to humans because when tested on laboratory animals it has an LC value of not more than 5000 ml/m³. See 49CFR 173.116(a) for assignment of Hazard Zones A, B, C or D. LC values for mixtures may be determined using the formula in 49 CFR 173.133(b)(1)(i)\n", "Trauma from a gunshot wound varies widely based on the bullet, velocity, entry point, trajectory, and affected anatomy. Gunshot wounds can be particularly devastating compared to other penetrating injuries because the trajectory and fragmentation of bullets can be unpredictable after entry. Additionally, gunshot wounds typically involve a large degree of nearby tissue disruption and destruction due to the physical effects of the projectile correlated with the bullet velocity classification.\n", "Despite the ban on military use, hollow-point bullets are one of the most common types of bullets used by civilians and police, which is due largely to the reduced risk of bystanders being hit by over-penetrating or ricocheted bullets, and the increased speed of incapacitation.\n" ]
Did Arian Christians accuse "Orthodox" Christians of heresy?
In short, yes. Remember, "Arian" is a label applied in two different ways to a number of groups in the fourth century. Firstly, it's a label used by Athanasius from the 360s onwards to describe a number of groups that he opposes, and to attempt to *taint* them through guilt by association with Arius, who had already been condemned. "Arians" by this measure did not consider themselves "Arian". Secondly, it's a label used by later historians to name a number of these same groupings, but it has largely fallen out of favour in recent years since it is problematic. That said, groups that we call "Arian" considered themselves to be "orthodox" (though they did not necessarily use that word in the way that we do), and when they held sway they would bring ecclesiastical power to bear on 'orthodox' believers. I'll give some links to translated creedal documents to illustrate. At the Council of Antioch in 341, which was mostly composed of non-Nicenes, the bishops swore until they were blue in the face that they were not disciples of Arius ([First Creed](_URL_2_)). They also describe their faith as in conformity with the "evangelical and apostolic tradition", which is the kind of language marker we mean by "orthodox" ([Second Creed](_URL_1_)). [Sirmium 357](_URL_0_) very explicitly speaks *against* using the language of 'essence' to talk about the Father and Son, and is aimed squarely at rendering the Nicene creed and its adherents unorthodox. Coupled with these credal statements, councils dominated by non-Nicenes would regularly depose bishops of other theological persuasions, exiling them or otherwise punishing them. This explains, for instance, why Athanasius is continually deposed from Alexandria and goes into exile so often. His case is not isolated. Remember, 'heresy' is, from a historical perspective, an after-the-fact description of the losing side. 'Heretics' didn't think they were heretical; they thought they were right and the other guys were wrong.
[ "Other allegations of heresy have emerged among conservative Christians, such as that White has denied the Trinity, partly as a result of a video shared by Christian author Erick Erickson that shows White assenting to the viewpoint that Jesus Christ was not the only son of God, in contravention of the Nicene Creed. Erickson has stated:\n", "The Greek Orthodox Church and the Ecumenical Patriarchate of Constantinople were considered by the Ottoman governments as the ruling authorities of the entire Orthodox Christian population of the Ottoman Empire, whether ethnically Greek or not. Although the Ottoman state did not force non-Muslims to convert to Islam, Christians faced several types of discrimination intended to highlight their inferior status in the Ottoman Empire. Discrimination against Christians, particularly when combined with harsh treatment by local Ottoman authorities, led to conversions to Islam, if only superficially. In the 19th century, many \"crypto-Christians\" returned to their old religious allegiance.\n", "In Eastern Orthodox Christianity heresy most commonly refers to those beliefs declared heretical by the first seven Ecumenical Councils. Since the Great Schism and the Protestant Reformation, various Christian churches have also used the concept in proceedings against individuals and groups those churches deemed heretical. The Orthodox Church also rejects the early Christian heresies such as Arianism, Gnosticism, Origenism, Montanism, Judaizers, Marcionism, Docetism, Adoptionism, Nestorianism, Monophysitism, Monothelitism and Iconoclasm.\n", "Already in the 2nd century, Christians denounced teachings that they saw as heresies, especially Gnosticism but also Montanism. Ignatius of Antioch at the beginning of that century and Irenaeus at the end saw union with the bishops as the test of correct Christian faith. After legalization of the Church in the 4th century, the debate between Arianism and Trinitarianism, with the emperors favouring now one side now the other, was a major controversy.\n", "Nestorians held that the Council of Chalcedon proved the orthodoxy of their faith and had started persecuting non-Chalcedonian or monophysite Syrian Christians during the reign of Peroz I. In response to pleas for assistance from the Syrian Church, Armenian prelates issued a letter addressed to Persian Christians reaffirming their condemnation of the Nestorianism as heresy. \n", "With the adoption of Christianity by Constantine I (after Battle of Milvian Bridge, 312), heresy had become a political issue in the late Roman empire. Adherents of unconventional Christian beliefs not covered by the Nicene Creed like Novatianism and Gnosticism were banned from holding meetings, but the Roman emperor intervened especially in the conflict between orthodox and Arian Christianity, which resulted in the burning of Arian books.\n", "Most Arian creeds were written in the fourth century during the Arian controversy. Arian creeds generally represent the beliefs of those Christians opposed to the Nicene Creed. The Arian controversy began when Alexander of Alexandria accused a local presbyter, Arius, of heresy, in the late 310s and early 320s. It lasted until the proclamation of the Creed of Constantinople in 381. The opponents of Arius expressed their beliefs in the Nicene Creed. Arians expressed their beliefs in their own, Arian creeds. Advocates of Nicene Christianity and Arian Christianity debated and competed throughout the fourth century, each claiming to be the orthodox variant. 
Nicene Christians called their opponents, as a group, Arians although many of them differed significantly from the original doctrines of Arius, and many opponents of the Nicene Creed did not identify as Arians.\n" ]
USSR Causing Ukrainian Genocide, Mao Responsible for Famine, and Stalin Creating Consideration Camps: Did Any of These Happen?
They aren't fabrications. The Soviet famine of 1932-33 (what you refer to as the Ukrainian Genocide) was an actual famine triggered by a massive drought throughout the major wheat production centers in Ukraine and the North Caucasus. The debate lies in whether the famine was deliberate and the extent to which the Soviet government knew about it and was slow to react. More recent evidence points to poor relations between the Party and the peasants, and a badly botched effort by Soviet scientists to actually study the soil conditions, as central to understanding why the famine was so severe. Calling it solely a Ukrainian genocide would mean ignoring the affected regions of the North Caucasus, and the general impact on the country as a whole as a result of having much less wheat output. I know there is a book that some like to cite which claims the famine was made up by Nazis and Harvard as anti-Soviet propaganda, but that argument holds little to no water. As an aside, especially with Soviet history, stay far away from one Grover Furr. He isn't a historian, he's a nut who cares more about trying to protect the image of Stalin than actual research. Further reading: [jstor link to paper on party-peasant relations](_URL_1_) [paper on soil conditions and harvest statistics during the famine years](_URL_2_) The Chinese famine that resulted from Mao's Great Leap Forward had similar issues. Local party members lied about how much grain the villages could produce and vastly overestimated the yields. This led to grain requisition quotas that had devastating effects, as higher officials were unaware of how little was left once grain had been taken in proportion to the inflated reports. This was compounded by a large drought. In short, people had less food due to faulty reporting, and not enough water to grow food even to just feed themselves. Further reading: [Jstor paper on grain quotas and procurements](_URL_0_) Stalin and the camps: I think you meant to say concentration camps, not consideration camps. I admittedly am not an authority on the GULAG system, but it has been covered here before.
[ "In 1932, under the rule of the USSR, Ukraine experienced one of its largest famines when between 2.4 and 7.5 million peasants died as a result of a state sponsored famine. It was termed the Holodomor, suggesting that it was a deliberate campaign of repression designed to eliminate resistance to collectivization. Forced grain quotas imposed upon the rural peasants and a brutal reign of terror contributed to the widespread famine. The Soviet government continued to deny the problem and it did not provide aid to the victims nor did it accept foreign aid. Several contemporary scholars dispute the notion that the famine was deliberately inflicted by the Soviet government.\n", "The Soviet famine of 1932–33, called Holodomor in Ukrainian, claimed up to 10 million Ukrainian lives as peasants' food stocks were forcibly removed by Stalin's regime by the NKVD secret police. As elsewhere, the precise number of deaths by starvation in Ukraine may never be precisely known. That said, the most recent demographic studies suggest that over 4 million Ukrainians perished in the first six months of 1933 alone, a figure that increases if population losses from 1931, 1932 and 1934 are also included, along with those from adjacent territories inhabited primarily by Ukrainians (but politically part of the Russian Federated Soviet Socialist Republic), such as the Kuban.\n", "Within the Soviet Union, forced changes in agricultural policies (collectivization), confiscations of grain and droughts caused the Soviet famine of 1932–1933 in Ukraine, Northern Caucasus, Volga Region and Kazakhstan. The famine was most severe in the Ukrainian SSR, where it is often referenced as the Holodomor. A significant portion of the famine victims (3.3 to 7.5 million) were Ukrainians. Another part of the famine was known as Kazakh catastrophe, when more than 1.3 million ethnic Kazakhs (38% of all indigenous population) died. Many scholars say that the Stalinist policies that caused the famine may have been designed as an attack on the rise of Ukrainian nationalism and thus may fall under the legal definition of genocide (see Holodomor genocide question).\n", "The Holodomor famine has been frequently described as a deliberate \"Terror-Famine\" campaign organized by the Soviet authorities against the Ukrainian population. It resulted in deaths of millions of ethnic Ukrainians of starvation in peacetime. Entire nations and ethnic groups were collectively punished by the Soviet government under the guise of “alleged collaboration” with the enemy during World War II. At least nine distinct ethnic-linguistic groups, including Poles, Germans, Romanians, Hungarians, Greeks, Chechens, Koreans, and Kalmyks, were deported to remote unpopulated areas of Siberia and Kazakhstan. Population transfer in the Soviet Union led to millions of deaths from the inflicted hardships. Mass operations of the NKVD were needed to deport hundreds of thousands of people.\n", "In the former Soviet Union millions of men, women and children fell victims to the cruel actions and policies of the totalitarian regime. The Great Famine of 1932–1933 in Ukraine (Holodomor), took from 7 million to 10 million innocent lives and became a national tragedy for the Ukrainian people. In this regard, we note activities in observance of the seventieth anniversary of this Famine, in particular organized by the Government of Ukraine.\n", "Timothy D. 
Snyder, professor of history at Yale University, asserts that in 1933 \"Joseph Stalin was deliberately starving Ukraine\" through a \"heartless campaign of requisitions that began Europe's era of mass killing\". He argues the Soviets themselves \"made sure that the term \"genocide\", contrary to Lemkin's intentions, excluded political and economic groups\". Thus the Ukrainian famine can be presented as \"somehow less genocidal because it targeted a class, kulaks, as well as a nation, Ukraine\".\n", "According to several Western historians, Stalinist agricultural policies were a key factor in causing the Soviet famine of 1932–1933, which the Ukrainian government now calls the Holodomor, recognizing it as an act of genocide. Some scholars dispute the intentionality of the famine.\n" ]
eau de toilette, eau de perfume, eau de cologne
The major distinction between the three is the concentration of the actual fragrance that is dissolved in alcohol. Eau de toilette, toilet water, has anywhere from 5-15% concentration of the fragrance. Eau de parfum has anywhere from 10 to 20%. It's worth noting that parfum de toilette and eau de parfum are usually used synonymously, while parfum de toilette and eau de toilette are distinct from each other. Eau de cologne is kind of a special case. When someone says eau de cologne, they usually just mean a perfume with a concentration between 3 and 8%. Perfume mist also has a concentration of 3-8%. The current distinction between eau de cologne and perfume mist is that cologne is, in English-speaking countries, typically for men and perfume is typically for women. Previously (in roughly the 18th to 19th century, I think), eau de cologne specifically referred to perfume (with 3-8% concentration) that was a blend of citrus fruit notes and also happened to come from Cologne, Germany. Now, "classical cologne" is used to refer to this stuff. All of these concentrations are lower than perfume extract, which is usually 15-40%. Perfume extract is the stuff that comes in a tiny little bottle that costs like 150 dollars. Generally, it's worth the price because you don't use nearly as much to get a scent that's usually stronger while on your skin. The previous concentrations, as far as my experience goes, all come in some sort of spray bottle. The only social rules that I'm aware of, specifically in English-speaking countries (there could be different ones in other places), are: 1) perfume extract is not really acceptable as a gift to men, and 2) don't put too much on. Less is more. You can get away with putting on more eau de cologne than eau de parfum, since the concentrations are different. Disclaimer: as a previously avid cologne junkie, this is all information that I have been familiar with in the past, it's probably still pretty close, and I think I've plagiarized most of it from the Wikipedia article on perfume. So look there if you want to be sure.
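Since the ranges above overlap, a given oil concentration can legitimately fall under more than one label. Here is a tiny sketch that classifies a concentration using only the percentages given in this answer (the category names and boundaries are as stated above, not an official industry standard):

```python
# Maps a fragrance-oil concentration (percent) to the labels used in the
# answer above. The ranges overlap, so several labels may apply at once.
RANGES = {
    "perfume extract": (15, 40),
    "eau de parfum / parfum de toilette": (10, 20),
    "eau de toilette": (5, 15),
    "eau de cologne / perfume mist": (3, 8),
}

def labels_for(concentration_pct):
    """Return every label whose range contains the given concentration."""
    return [name for name, (low, high) in RANGES.items()
            if low <= concentration_pct <= high]

print(labels_for(12))  # ['eau de parfum / parfum de toilette', 'eau de toilette']
print(labels_for(6))   # ['eau de toilette', 'eau de cologne / perfume mist']
```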
[ "Eau de Cologne (; German: Kölnisch Wasser ; meaning \"Water from Cologne\"), or simply cologne, is a perfume originating from Cologne, Germany. Originally mixed by Johann Maria Farina (Giovanni Maria Farina) in 1709, it has since come to be a generic term for scented formulations in typical concentration of 2–5% and also more depending upon its type essential oils or a blend of extracts, alcohol, and water. \n", "The original \"Eau de Cologne\" is a spirit-citrus perfume launched in Cologne in 1709 by Giovanni Maria Farina (1685–1766), an Italian perfume maker from Santa Maria Maggiore Valle Vigezzo. In 1708, Farina wrote to his brother Jean Baptiste: \"I have found a fragrance that reminds me of an Italian spring morning, of mountain daffodils and orange blossoms after the rain\". He named his fragrance \"Eau de Cologne\", in honour of his new hometown.\n", "\"Demeter\" was founded by ex-Kiehl's perfumer Christopher Brosius, in 1993, as a project 'bottle' everyday odors into wearable personal colognes. The first three colognes that were created – Dirt, Grass and Tomato – were launched at New York department stores, Bergdorf Goodman and Henri Bendel, in 1996. The three scents sold well, which led to further introductions to the range and unusual fragrances such as Gin & Tonic, Baby Powder and Play-Doh joined the portfolio. \n", "Classical colognes first appeared in Europe in the 17th century. The first fragrance labeled a \"parfum\" extract with a high concentration of aromatic compounds was Guerlain's \"Jicky\" in 1889. Eau de Toilette appeared alongside parfum around the turn of the century. The EdP concentration and terminology is the most recent. Parfum de toilette and EdP began to appear in the 1970s and gained popularity in the 1980s.\n", "In the early 18th century, Johann Maria Farina (1685–1766), an Italian living in Cologne, Germany, created a fragrance. He named it \"Eau de Cologne\" (\"water from Cologne\") after his new home. Over the next century, the fragrance became increasingly popular.\n", "Cologne is also famous for Eau de Cologne (German: \"Kölnisch Wasser\"; lit: \"Water of Cologne\"), a perfume created by Italian expatriate Johann Maria Farina at the beginning of the 18th century. During the 18th century, this perfume became increasingly popular, was exported all over Europe by the Farina family and \"Farina\" became a household name for \"Eau de Cologne\". In 1803 Wilhelm Mülhens entered into a contract with an unrelated person from Italy named Carlo Francesco Farina who granted him the right to use his family name and Mühlens opened a small factory at Cologne's Glockengasse. In later years, and after various court battles, his grandson Ferdinand Mülhens was forced to abandon the name \"Farina\" for the company and their product. He decided to use the house number given to the factory at Glockengasse during the French occupation in the early 19th century, 4711. Today, original Eau de Cologne is still produced in Cologne by both the Farina family, currently in the eighth generation, and by Mäurer & Wirtz who bought the 4711 brand in 2006.\n", "The \"Eau de Cologne\" composed by Farina was used only as a perfume and delivered to \"nearly all royal houses in Europe\". His ability to produce a constantly homogeneous fragrance consisting of dozens of monoessences was seen as a sensation at the time. A single vial of this \"aqua mirabilis\" (Latin for miracle water) cost half the annual salary of a civil servant. 
When free trade was established in Cologne by the French in 1797, the success of \"Eau de Cologne\" prompted countless other businessmen to sell their own fragrances under the name of \"Eau de Cologne\". Giovanni Maria Farina's formula has been produced in Cologne since 1709 by Farina opposite the Jülichplatz and to this day remains a secret. His shop at Obenmarspforten opened in 1709 and is today the world's oldest fragrance factory.\n" ]
How did marsupials spread from the Americas to Australia if they evolved way after South America separated from Gondwana?
This seems to be a difficult question to answer, in part because Antarctica is difficult to study: fossil records are scarce, as are geological samples. There were apparently many active volcanoes along the Antarctic peninsula during the late cretaceous and early tertiary times, which, combined with the discovery of a [marsupial fossil on Seymour Island](_URL_2_), makes it plausible that there was a connection from South America to Antarctica. Keep in mind that the date for marsupial divergence is still fairly up in the air (I'm seeing numbers from [125 MYA](_URL_0_) to [65 MYA](_URL_1_)), and that their fossil records are fairly spotty.
[ "Marsupials reached Australia via Antarctica about 50 mya, shortly after Australia had split off. This suggests a single dispersion event of just one species, most likely a relative to South America's monito del monte (a microbiothere, the only New World australidelphian). This progenitor may have rafted across the widening, but still narrow, gap between Australia and Antarctica. In Australia, they radiated into the wide variety seen today. Modern marsupials appear to have reached the islands of New Guinea and Sulawesi relatively recently via Australia. A 2010 analysis of retroposon insertion sites in the nuclear DNA of a variety of marsupials has confirmed all living marsupials have South American ancestors. The branching sequence of marsupial orders indicated by the study puts Didelphimorphia in the most basal position, followed by Paucituberculata, then Microbiotheria, and ending with the radiation of Australian marsupials. This indicates that Australidelphia arose in South America, and reached Australia after Microbiotheria split off.\n", "South American marsupials have long been suspected to be ancestral to those of Australia, consistent with the fact that the two continents were connected via Antarctica in the early Cenozoic. Australia’s earliest known marsupial is \"Djarthia\", a primitive mouse-like animal that lived in the early Eocene about 55 million years ago (mya). \"Djarthia\" had been identified as the earliest known australidelphian, and this research suggested that the monito del monte was the last of a clade that included \"Djarthia\". This relationship suggests that the ancestors of the monito del monte might have reached South America by back-migration from Australia. The time of divergence between the monito del monte and Australian marsupials was estimated to have been 46 mya.\n", "Fossils found at Lightning Ridge, New South Wales, suggest that 110 million years ago (Ma), Australia supported a number of different monotremes, but did not support any marsupials. Marsupials appear to have evolved during the Cretaceous in the contemporary northern hemisphere, to judge from a 100-million-year-old marsupial fossil, \"Kokopellia\", found in the badlands of Utah. Marsupials would then have spread to South America and Gondwana. The first evidence of mammals (both marsupials and placental) in Australia comes from the Tertiary, and was found at a 55-million-year-old fossil site at Murgon, in southern Queensland. As Zealanded had rifted away at this time it explains the lack of ground dwelling marsupials and placental mammals in New Zealand's fossil record.\n", "Marsupials appear to have traveled via Gondwanan land connections from South America through Antarctica to Australia in the late Cretaceous or early Tertiary. One living South American marsupial, the monito del monte, has been shown to be more closely related to Australian marsupials than to other South American marsupials; however, it is the most basal australidelphian, meaning that this superorder arose in South America and then colonized Australia after the monito del monte split off. A 61-Ma-old platypus-like monotreme fossil from Patagonia may represent an Australian immigrant. Paleognath birds (ratites and South American tinamous) may have migrated by this route around the same time, more likely in the direction from South America to Australia/New Zealand. 
Other taxa that may have dispersed by the same route (if not by flying or floating across the ocean) are parrots, chelid turtles and (extinct) meiolaniid turtles.\n", "In Australia, terrestrial placental mammals disappeared early in the Cenozoic (their most recent known fossils being 55 million-year-old teeth resembling those of condylarths) for reasons that are not clear, allowing marsupials to dominate the Australian ecosystem. Extant native Australian terrestrial placental mammals (such as hopping mice) are relatively recent immigrants, arriving via island hopping from Southeast Asia.\n", "Furthermore, molecular data suggests that Notoryctemorphia separated from other marsupials around 64 million years ago. Although at this time South America, Antarctica and Australia were still joined the order evolved in Australia for at least 40-50 million years. The Riversleigh fossil material suggests that \"Notoryctes\" was already well adapted for burrowing and probably lived in the rainforest that covered much of Australia at that time. The increase in aridity at the end of Tertiary was likely one of the key contributing factors to the development of the current highly specialized form of marsupial mole. The marsupial mole had been burrowing long before the Australian deserts came into being.\n", "The roots of Australian marsupials are thought to trace back tens of millions of years to when much of the current Southern Hemisphere was part of the supercontinent of Gondwana; marsupials are believed to have originated in what is now South America and migrated across Antarctica, which had a temperate climate at the time. As soil degradation took hold, it is believed that the marsupials adapted to the more basic flora of Australia. According to Pemberton, the possible ancestors of the devil may have needed to climb trees to acquire food, leading to a growth in size and the hopping gait of many marsupials. He speculated that these adaptations may have caused the contemporary devil's peculiar gait. The specific lineage of the Tasmanian devil is theorised to have emerged during the Miocene, molecular evidence suggesting a split from the ancestors of quolls between 10 and 15 million years ago, when severe climate change came to bear in Australia, transforming the climate from warm and moist to an arid, dry ice age, resulting in mass extinctions. As most of their prey died of the cold, only a few carnivores survived, including the ancestors of the quoll and thylacine. It is speculated that the devil lineage may have arisen at this time to fill a niche in the ecosystem, as a scavenger that disposed of carrion left behind by the selective-eating thylacine. The extinct \"Glaucodon ballaratensis\" of the Pliocene age has been dubbed an intermediate species between the quoll and devil. Fossil deposits in limestone caves at Naracoorte, South Australia, dating to the Miocene include specimens of \"S. laniarius\", which were around 15% larger and 50% heavier than modern devils. Older specimens believed to be 50–70,000 years old were found in Darling Downs in Queensland and in Western Australia. It is not clear whether the modern devil evolved from \"S. laniarius\", or whether they coexisted at the time. Richard Owen argued for the latter hypothesis in the 19th century, based on fossils found in 1877 in New South Wales. Large bones attributed to \"S. moornaensis\" have been found in New South Wales, and it has been conjectured that these two extinct larger species may have hunted and scavenged. 
It is known that there were several genera of thylacine millions of years ago, and that they ranged in size, the smaller being more reliant on foraging. As the devil and thylacine are similar, the extinction of the co-existing thylacine genera has been cited as evidence for an analogous history for the devils. It has been speculated that the smaller size of \"S. laniarius\" and \"S. moornaensis\" allowed them to adapt to the changing conditions more effectively and survive longer than the corresponding thylacines. As the extinction of these two species came at a similar time to human habitation of Australia, hunting by humans and land clearance have been mooted as possible causes. Critics of this theory point out that as indigenous Australians only developed boomerangs and spears for hunting around 10,000 years ago, a critical fall in numbers due to systematic hunting is unlikely. They also point out that caves inhabited by Aborigines have a low proportion of bones and rock paintings of devils, and suggest that this is an indication that it was not a large part of indigenous lifestyle. A scientific report in 1910 claimed that Aborigines preferred the meat of herbivores rather than carnivores. The other main theory for the extinction was that it was due to the climate change brought on by the most recent ice age.\n" ]
why fort knox has so much gold stored within it.
The US dollar, like most currencies, used to be based on gold. So the government had to have a lot of gold stuck in its vaults to back up its currency. This isn't true anymore, but there's no good way to get rid of all that gold. If it were all dumped on the open market, the price would crash.
[ "The United States Bullion Depository, often known as Fort Knox, is a fortified vault building located next to the United States Army post of Fort Knox, Kentucky. It is operated by the United States Department of the Treasury. The vault is used to store a large portion of the United States' gold reserves as well as other precious items belonging to or in custody of the federal government. It currently holds roughly of gold bullion, over half of the Treasury’s stored gold. The depository is protected by the United States Mint Police.\n", "The United States Bullion Depository, often known as Fort Knox, is a fortified vault building located adjacent to the Fort Knox Army Post. It is operated by the Department of the Treasury. The vault currently stores over half of the United States' gold reserves. It is protected by the United States Mint Police, and is well known for it's physical security. \n", ", Fort Knox holds of gold reserves with a market value of US $1413.200*147341858.382/1000000000 round 1 billion. This represents of the gold reserves of the United States. As of 2017, the U.S. gold reserves of 8,133.5 metric tons was nearly as much as the next three countries combined. The next highest holdings were Germany's, whose gold reserves were 3,371.0 metric tons.\n", "The Fort Knox Gold Mine operates a single large pit. From 2001 - 2004 the True North Mine, a small satellite deposit was operated and the ore was processed through the Fort Knox mill. Production from Fort Knox is up to per day of low grade ore (1 gram per tonne), with two mineral processing streams (carbon in pulp for higher grade ore and heap leaching for lower grade ore). The mine is located from the city of Fairbanks via a combination of paved and unpaved roads. Surpassing 6 million ounces in 2011, Fort Knox is the single largest producer of gold in the history of the state of Alaska. Expected to run out of ore in 2021, the mine's life has been extended to 2027 following Kinross' $100 million expansion investment announced in 2018, increasing life-of-mine production by about 1.5 million gold equivalent ounces.\n", "The depository was built by the Treasury in 1936 on land transferred to it from Fort Knox. Early shipments of gold totaling almost 13,000 metric tons were escorted by combat cars of the 1st U.S. Cavalry Regiment to the depository. It has in the past safeguarded other precious items such as the Constitution of the United States and Declaration of Independence, as well as enough opium and morphine to meet the needs of the entire nation for one year.\n", "In 1974, Beter publicly stated that most of the gold in Fort Knox had been sold to European interests, at prices vastly below market rates. According to him, international speculators had dishonestly obtained the gold.\n", "In June 1935, the U.S. Treasury announced its intention to quickly build a gold depository on the grounds of Fort Knox, Kentucky. It's purpose was to store gold then kept being kept in the New York City Assay office and Philadelphia Mint. This was in keeping with a policy previously announced to move gold reserves away from coastal cities to areas less vulnerable to foreign military invasion. This policy had already led to the shipment of nearly of of gold from San Francisco to Denver. The initial plans were to be completed by August, and called for a building costing no more than $450,000 ().\n" ]
why can't we identify someone if we can't see their eyes?
We can identify people without seeing their eyes just fine if we know them well. Haven't you ever recognized someone with their back turned or semi-profile if their hair has partially fallen over their face so you can't see their eyes? Or how about someone asleep? You can't see their eyes then.
[ "Face recognition can be used not just to identify an individual, but also to unearth other personal data associated with an individual – such as other photos featuring the individual, blog posts, social networking profiles, Internet behavior, travel patterns, etc. – all through facial features alone. Concerns have been raised over who would have access to the knowledge of one's whereabouts and people with them at any given time. Moreover, individuals have limited ability to avoid or thwart face recognition tracking unless they hide their faces. This fundamentally changes the dynamic of day-to-day privacy by enabling any marketer, government agency, or random stranger to secretly collect the identities and associated personal information of any individual captured by the face recognition system. Consumers may not understand or be aware of what their data is being used for, which denies them the ability to consent to how their personal information gets shared.\n", "An individual is 'identified' if distinguished from other members of a group. In most cases an individual's name, together with some other information, will be sufficient to identify them, but a person can be identified even if their name is not known. Start by looking at the means available to identify an individual and the extent to which such means are readily available to you.\n", "The face is the feature which best distinguishes a person. Specialized regions of the human brain, such as the fusiform face area (FFA), enable facial recognition; when these are damaged, it may be impossible to recognize faces even of intimate family members. The pattern of specific organs, such as the eyes, or of parts of them, is used in biometric identification to uniquely identify individuals.\n", "When we see people, we recognise individuals, we make judgements about them, we draw conclusions about their age, their sex, their personality, their mood and their intentions. And, deliberately or not, we send signals to others - through our expressions, the way we dress and the way we modify our bodies.\n", "The ability to visually identify previous social partners is essential for successful interactions because it aids in recognizing which partners can and cannot be trusted. In humans, this is accomplished by facial recognition. Research suggests that humans are born with an innate ability to process other human faces. In one study, Pascalis, et al. (1995) found that four-day-old neonates (infants) prefer to look at their mothers' faces rather than at a stranger's. This finding suggests that neonates are able to remember, recognize, and differentiate between faces. Further research suggests that humans prefer to attend to faces rather than non-face alternatives. Such specialized processing for faces aids in the encoding of memory for people. This preference is one explanation for why humans are more proficient at memorizing faces than non-faces.\n", "There is an exception, however, in certain circumstances when it is needed to identify the girl or woman, such as taking a photo of her for an ID card or passport, or when there is an exam and it is feared that a girl may secretly replace another. In such cases, she has to show her face for identification\n", "Recognizing faces is one of the most common forms of pattern recognition. Humans are incredibly effective at remembering faces, but this ease and automaticity belies a very challenging problem. All faces are physically similar. 
Faces have two eyes, one mouth, and one nose all in predictable locations, yet humans can recognize a face from several different angles and in various lighting conditions. \n" ]
when a company offers a direct listing for a new stock, how is the initial price point determined?
In the case of a direct listing, the original shareholders put up their shares and determine the value themselves (no underwriters involved). It is much cheaper for the company but it is also considered extremely unsafe. There are no protections against the price swinging violently. This can lead to your shares not being purchased; low demand = no value and your stock tanks. Doing an IPO comes with the security of underwriter backing. In certain cases, where your product/offerings have an extremely solid future, or an already established market presence, you can do a direct listing and basically pop bottles because your product is the tits.
[ "To induce the shareholders of the target company to sell, the acquirer's offer price is usually at a premium over the current market price of the target company's shares. For example, if a target corporation's stock were trading at $10 per share, an acquirer might offer $11.50 per share to shareholders on the condition that 51% of shareholders agree. Cash or securities may be offered to the target company's shareholders, although a tender offer in which securities are offered as consideration is generally referred to as an \"exchange offer.\"\n", "The acquiring company essentially uses its own stock as cash to purchase the business. Each shareholder of the acquired company will receive a pre-determined number of shares from the acquiring company. Before the swap occurs each party must accurately value their company so that a fair swap ratio can be calculated. Valuation of a company is quite complicated. Not only does fair market value have to be determined, but the investment and intrinsic value needs to be determined as well.\n", "BULLET::::- Sale Price – The sale price is finalized through negotiations between the issuing company & the purchaser which is influenced by reputation of the promoters, project evaluation, prevailing market sentiment, prospects of off-loading these shares at a future date, etc.\n", "In certain circumstances, it may be possible for the management and the original owner of the company to agree a deal whereby the seller finances the buyout. The price paid at the time of sale will be nominal, with the real price being paid over the following years out of the profits of the company. The timescale for the payment is typically 3–7 years.\n", "If the market price of the stock falls below the mini-tender price before the offer closes, the bidder can cancel the offer or reduce the offer price. While a price change allows investors to withdraw their shares, this process is not automatic. The \"onus is on the investor\", as they (and not the bidder or broker) are responsible for acquiring the revised offer information and withdrawing their shares by the deadline.\n", "In a stock merger, the acquirer offers to purchase the target by exchanging its own stock for the target's stock at a specified ratio. To initiate a position, the arbitrageur will buy the target's stock and short sell the acquirer's stock. This process is called \"setting a spread\". The size of the spread positively correlates to the perceived risk that the deal will not be consummated at its original terms. The arbitrageur makes a profit when the spread narrows, which occurs when deal consummation appears more likely. Upon deal completion, the target's stock will be converted into stock of the acquirer based on the exchange ratio determined by the merger agreement. At this point in time, the spread will close. The arbitrageur delivers the converted stock into his short position to close his position.\n", "The initial share price is determined on the trade date. The final valuation of the shares is based on the closing price of the reference shares determined four days prior to maturity. If the investor is delivered physical shares, their value will be less than the initial investment.\n" ]
Are there any people alive today that can trace their ancestry back to ancient history?
There are plenty of people/dynasties that claim they "descend from antiquity" but there are no 'Western' claims that are accepted by historians and genealogists. There are a number of 'Eastern' claims that might one day be accepted, the oldest of which is that of Kung Tsui-chang, who claims to be the 79th-generation male descendant of Confucius (though probably with some adoption involved). The Japanese imperial family also has a reasonably strong claim, though their surviving records don't go back as far as their claim does.
[ "A 2013 study in \"Nature\" reported that DNA found in the 24,000-year-old remains of a young boy from the archaeological Mal'ta-Buret' culture suggest that up to one-third of the indigenous Americans may have ancestry that can be traced back to western Eurasians, who may have \"had a more north-easterly distribution 24,000 years ago than commonly thought\". \"We estimate that 14 to 38 percent of Native American ancestry may originate through gene flow from this ancient population\", the authors wrote. Professor Kelly Graf said,\n", "A 2013 study in \"Nature\" reported that DNA found in the 24,000-year-old remains of a young boy from the archaeological Mal'ta-Buret' culture suggest that up to one-third of indigenous Americans' ancestry can be traced back to western Eurasians, who may have \"had a more north-easterly distribution 24,000 years ago than commonly thought\" \"We estimate that 14 to 38 percent of Amerindian ancestry may originate through gene flow from this ancient population,\" the authors wrote. Professor Kelly Graf said,\n", "With advances in science able to trace individuals' ancestry via their DNA, according to a widely publicised study (December 2008) in the \"American Journal of Human Genetics\", modern Spaniards (and Portuguese) have an average admixture of 19.8 percent from ancestors originating in the Near East during historic times (i.e. Phoenicians, Carthaginians, Jews and Levantine Arabs) – compared to 10.6 percent of North African – Berber admixture. This proportion could be as high as 23% in the case of Latin Americans, however, according to a study published in Nature Communications. The possibly higher proportion of significant Jewish ancestry in the Latin American population could stem from increased emigration of Conversos to the New World to avoid persecution by the Spanish Inquisition.\n", "Another study, focused on the mtDNA (that which is inherited only through the maternal line), revealed that the indigenous people of the Americas have their maternal ancestry traced back to a few founding lineages from East Asia, which would have arrived via the Bering strait. According to this study, it is probable that the ancestors of the Native Americans would have remained for a time in the region of the Bering Strait, after which there would have been a rapid movement of settling of the Americas, taking the founding lineages to South America.\n", "Available genetic data show that the Clovis people are the direct ancestors of roughly 80% of all living Native American populations in North and South America, with the remainder descended from ancestors who entered in later waves of migration. As reported in February 2014, DNA from the 12,600-year-old remains of Anzick boy, found in Montana, has affirmed this connection to the peoples of the Americas. In addition, this DNA analysis affirmed genetic connections back to ancestral peoples of northeast Asia. This adds weight to the theory that peoples migrated across a land bridge from Siberia to North America.\n", "Another study, also focused on the mtDNA (that which is inherited through only the maternal line), revealed that the indigenous people of the Americas have their maternal ancestry traced back to a few founding lineages from East Asia, which would have arrived via the Bering strait. 
According to this study, it is probable that the ancestors of the Native Americans would have remained for a time in the region of the Bering Strait, after which there would have been a rapid movement of settling of the Americas, taking the founding lineages to South America.\n", "According to self-reported ancestry figures recorded by the ACS, the five largest ancestral groups in Starrucca in 2013 were Germans (28.8%), English (19.9%), Poles (10.5%), Irish (9.9%), and Dutch (6.8%). Those reporting American ancestry made up 2.6% of the population.\n" ]
Why were Quakers banned from the Massachusetts Bay Colony in the mid-1600s?
Quakers were a persecuted group even back in the UK- they emphasized a more personal, direct relationship with God, unmediated by clergy, which was viewed as blasphemous and a threat to the established power of the Church of England. Because the monarch was the head of the church, denying one's allegiance to the church was akin to disloyalty to the state, and so 'nonconformists' like the Quakers were heavily proscribed. Now, whereas 17th-century England was a state in which political and religious authority were heavily entwined, the Massachusetts Bay Colony was essentially a theocracy- although it was organized on republican lines, the franchise was limited to freemen, and one of the requirements for being a freeman was being a member of a Puritan church. This, too, was in the days when becoming a church member was a very serious business that involved an interrogation by the pastor to hunt down any hints of heterodoxy. Because of the religious dominance of the government, the code of laws was heavily based off of Puritan beliefs. People could be prosecuted for crimes as various as playing dice or breaking the Sabbath (to say nothing of people executed for 'witchcraft'). Christmas and May Day were banned. Back in England, the Quakers had been persecuted for their beliefs because blasphemy was seen as a threat to the social order. However, in the environment of Puritan New England, blasphemy was not just a religious injunction, but a civil offense. Simply by being within the colony and holding beliefs that were at odds with the established doctrine there, one was actively committing a crime, and it was on this basis that Quakers were banned. They took no half measures, either. Any Quakers found on a ship coming into the colony were immediately imprisoned before being banished. There were a number of individuals who persisted in coming into the colony over a period of years in the middle of the 1600s, and four of them were eventually publicly hanged for their beliefs. This was seen as the last straw by the authorities in England, who revoked the ban on Quakers and, not long afterwards, ended the colony's de facto independence by sending over a royal governor to enforce the Crown's laws.
[ "Of all the New England colonies, Massachusetts was the most active in persecuting the Quakers, but the Plymouth, Connecticut and New Haven colonies also shared in their persecution. When the first Quakers arrived in Boston in 1656 there were no laws yet enacted against them, but this quickly changed, and punishments were meted out with or without the law. It was primarily the ministers and the magistrates who opposed the Quakers and their evangelistic efforts. A particularly vehement persecutor, the Reverend John Norton of the Boston church, clamored for the law of banishment upon pain of death. He is the one who later wrote the vindication to England, justifying the execution of the first two Quakers in 1659.\n", "Small numbers of Quakers started arriving in New England by 1656. The Puritan-dominated General Court immediately enacted laws to discourage their activities. The new laws provided for harsh punishment to anyone who professed the \"heretical opinions\" of Quakers. They even punished ship captains who knowingly carried Quakers as passengers. However, these new measures brought heated debate within the General Court, as their passage was far from unanimous. The Deputies of the General Court, including Robert Pike, who represented the outlying areas, were much more likely to be sensitive to the issue of religious freedom and probably voted against the new laws. Nonetheless, numerous Quaker missionaries were punished by public whippings, banishment, and the threat of death if they returned to Massachusetts Bay Colony. Between 1659 and 1661, five Quakers, all of whom had returned to Boston to continue preaching publicly, were hanged.\n", "On 11 July 1656 they became the first Quakers to visit the English North American colonies, arriving at Boston in the Massachusetts Bay Colony on the \"Swallow\". There they met with fierce hostility from the Puritan population and the Deputy Governor of the colony, Richard Bellingham, as news of the heretical views of the Quakers had preceded them.\n", "On 11 July 1656 they became the first Quakers to visit the English North American colonies, arriving at Boston in the Massachusetts Bay Colony on the \"Swallow\". There they met with fierce hostility from the Puritan population and the Deputy Governor of the colony, Richard Bellingham, as news of the heretical views of the Quakers had preceded them.\n", "Despite those arguments a major factor is agreed to be that the Quakers were initially discouraged or forbidden to go to the major law or humanities schools in Britain due to the Test Act. They also at times faced similar discriminations in the United States, as many of the colonial universities had a Puritan or Anglican orientation. This led them to attend \"Godless\" institutions or forced them to rely on hands-on scientific experimentation rather than academia.\n", "Despite those arguments a major factor is agreed to be that the Quakers were initially discouraged or forbidden to go to the major law or humanities schools in Britain due to the Test Act. They also at times faced similar discriminations in the United States, as many of the colonial universities had a Puritan or Anglican orientation. This led them to attend \"Godless\" institutions or forced them to rely on hands-on scientific experimentation rather than academia.\n", "In 1656, not long before Prence became governor, Quakers began to arrive in New England in substantial numbers. 
The conservative leaders of the Puritan colonies were alarmed by what they saw as their heretical religious views. Massachusetts issued a call to the United Colonies for concerted action against them, and would ultimately take the hardest line against them, hanging four of them for repeated violations of banishment.\n" ]
how do torrent websites keep track of the seeds/peers of each torrent?
So a torrent has seeders (people who have the whole file and are actively sharing it) and leechers (people who are still downloading and don't yet have the whole file - colloquially, those who download more than they upload). The collection of the seeders and leechers is called a swarm. Different torrent sites use a tracker to organise the swarm. The tracker is what counts the number of people seeding and leeching: when you open a torrent, your client announces itself to the tracker, telling it which torrent you're after and whether you're still downloading (leeching) or have the complete file and are ready to upload to someone (seeding). The site just displays the tracker's running tally.
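If it helps to see it concretely, here is a minimal sketch of that announce step in the classic HTTP tracker convention (BEP 3), written in Python. The tracker URL, info hash and peer id below are made-up placeholders, and real clients also report byte counts and re-announce at an interval the tracker specifies.

```python
# Minimal sketch of a BitTorrent HTTP tracker announce (BEP 3).
# The tracker URL, info_hash and peer_id are hypothetical placeholders.
import urllib.parse


def build_announce_url(tracker_url, info_hash, peer_id, port, left, event):
    """Build the announce request a client sends to the tracker.

    'left' is how many bytes the client still needs: left == 0 means the
    client announces as a seeder, left > 0 means it is still leeching.
    The tracker tallies these announces to produce the seed/peer counts
    a torrent site displays.
    """
    params = {
        "info_hash": info_hash,   # 20-byte SHA-1 of the torrent's info dict
        "peer_id": peer_id,       # 20-byte client-chosen identifier
        "port": port,             # port the client listens on
        "uploaded": 0,
        "downloaded": 0,
        "left": left,
        "event": event,           # "started", "stopped", or "completed"
    }
    return tracker_url + "?" + urllib.parse.urlencode(params)


if __name__ == "__main__":
    url = build_announce_url(
        tracker_url="http://tracker.example.com/announce",  # hypothetical
        info_hash=b"\x12" * 20,                             # placeholder hash
        peer_id=b"-XX0001-" + b"0" * 12,                    # placeholder id
        port=6881,
        left=0,          # 0 bytes left => this client counts as a seeder
        event="started",
    )
    print(url)
    # A real tracker replies with a bencoded dictionary containing
    # "complete" (seeders), "incomplete" (leechers) and a "peers" list -
    # exactly the numbers a torrent index shows next to each torrent.
```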
[ "The algorithm applies to a scenario in which there is only one seed in the swarm. By permitting each downloader to download only specific parts of the files listed in a torrent, it equips peers to begin seeding sooner. Peers attached to a seed with super-seeding enabled therefore distribute pieces of the torrent file much more readily before they have completed download themselves. \n", "Users find a torrent of interest on a torrent index site or by using a search engine built into the client, download it, and open it with a BitTorrent client. The client connects to the tracker(s) or seeds specified in the torrent file, from which it receives a list of seeds and peers currently transferring pieces of the file(s). The client connects to those peers to obtain the various pieces. If the swarm contains only the initial seeder, the client connects directly to it, and begins to request pieces. Clients incorporate mechanisms to optimize their download and upload rates.\n", "Web \"seeding\" was implemented in 2006 as the ability of BitTorrent clients to download torrent pieces from an HTTP source in addition to the \"swarm\". The advantage of this feature is that a website may distribute a torrent for a particular file or batch of files and make those files available for download from that same web server; this can simplify long-term seeding and load balancing through the use of existing, cheap, web hosting setups. In theory, this would make using BitTorrent almost as easy for a web publisher as creating a direct HTTP download. In addition, it would allow the \"web seed\" to be disabled if the swarm becomes too popular while still allowing the file to be readily available. This feature has two distinct specifications, both of which are supported by Libtorrent and the 26+ clients that use it.\n", "The eXeem network used super-peers that were used to track torrents (as ordinary bittorrent trackers). These super-peers were also responsible for maintaining file lists, comments and ratings for part of the files in the network. When a peer that was tracking a torrent was closed or went down, a new peer was assigned to be the tracker for that particular torrent.\n", "Selecting a torrent from the search results list would take the user to another page listing the websites currently hosting the specified torrent (with which users would download files). As Torrentz used meta-search engines, users would be redirected to other torrent sites to download content (commonly KickassTorrents, which was considered safe to use).\n", "Web search engines allow the discovery of torrent files that are hosted and tracked on other sites; examples include The Pirate Bay, Torrentz, isoHunt and BTDigg. These sites allow the user to ask for content meeting specific criteria (such as containing a given word or phrase) and retrieve a list of links to torrent files matching those criteria. This list can often be sorted with respect to several criteria, relevance (seeders-leechers ratio) being one of the most popular and useful (due to the way the protocol behaves, the download bandwidth achievable is very sensitive to this value). Metasearch engines allow one to search several BitTorrent indices and search engines at once. DHT search engines monitors the DHT network, and indexes torrents via metadata exchange from peers. 
\n", "Users wishing to obtain a copy of a file typically first download a torrent file that describes the file(s) to be shared, as well as the URLs of one or more central computers called trackers that maintain a list of peers currently sharing the file(s) described in the .torrent file. In the original BitTorrent design, peers then depended on this central tracker to find each other and maintain the swarm. Later development of distributed hash tables (DHTs) meant that partial lists of peers could be held by other computers in the swarm and the load on the central tracker computer could be reduced. PEX allows peers in a swarm to exchange information about the swarm directly without asking (polling) a tracker computer or a DHT. By doing so, PEX leverages the knowledge of peers that a user is connected to by asking them for the addresses of peers that they are connected to. This is faster and more efficient than relying solely on one tracker and reduces the processing load on the tracker. It also keep swarms together when the tracker is down.\n" ]
Did the concept of zero exist in the Roman era?
The Romans didn't have a _number_ for zero. They definitely had the _concept_ though. They just used the word "nulla" to mean nothing. Either you had something, or it wasn't worth counting. Roman numerals are almost useless when doing math. Even simple addition and subtraction is clumsy and slow. I've heard it speculated (but don't have a source) that they did most calculations with abacus-like frames, where the numbers remained abstract, and then wrote down the result of their math in the traditional Roman numerals. That is, the numerals were for records, not arithmetic, making a numeral for "zero" even more useless. Why record the transaction if nothing happened? In modern systems, zero can be an important placeholder (think "100") but practical, day-to-day situations involving a true zero are pretty rare and most of those cases can be covered with nulla or maybe "free." As in all things, the Romans were also heavily influenced by Greek thinking and the Greeks liked to debate the philosophical nature of zero ("how can anything truly be nothing?"), which again shows that everyone was aware of the concept, just not sure of its utility. I've never heard anything about the Romans fearing pagan mathematical concepts. Until Constantine and Christianity, Romans wouldn't have even thought of other people as "pagans." Barbarians, certainly, but when you live in a polytheistic culture, pagan doesn't mean much. P.S. If someone has an example of a Roman source working out a sum, I'd really like to see it.
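To make the "records, not arithmetic" point concrete, here is a small illustrative sketch (my own, not anything from a Roman source): Roman notation is additive rather than positional, so nothing ever needs to "hold a place" the way the zeros in "100" do.

```python
# Illustrative sketch: converting integers to Roman numerals shows why no
# zero digit was needed - the notation is additive, not positional.
ROMAN = [
    (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
    (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
    (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
]


def to_roman(n: int) -> str:
    if n == 0:
        return "nulla"  # the word for "nothing", not a digit
    out = []
    for value, symbol in ROMAN:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)


# Positional "100" needs a zero placeholder in two columns;
# Roman notation just writes "C" - no placeholder required.
print(to_roman(100))   # C
print(to_roman(1994))  # MCMXCIV
print(to_roman(0))     # nulla
```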
[ "Neither the concept nor a symbol for zero existed in the system of Roman numerals. The Babylonian system of the BC era had used the idea of \"nothingness\" without considering it a number, and the Romans enumerated in much the same way. Wherever a modern zero would have been used, Bede and Dionysius Exiguus did use Latin number words, or the word \"nulla\" (meaning \"nothing\") alongside Roman numerals. Zero was invented in India in the sixth century, and was either transferred or reinvented by the Arabs by about the eighth century. The Arabic numeral for zero (0) did not enter Europe until the thirteenth century. Even then, it was known only to very few, and only entered widespread use in Europe by the seventeenth century.\n", "Another true zero was used in tables alongside Roman numerals by 525 (first known use by Dionysius Exiguus), but as a word, meaning \"nothing\", not as a symbol. When division produced 0 as a remainder, , also meaning \"nothing\", was used. These medieval zeros were used by all future medieval computists (calculators of Easter). An isolated use of their initial, N, was used in a table of Roman numerals by Bede or a colleague about 725, a true zero symbol.\n", "Another zero was used in tables alongside Roman numerals by 525 (first known use by Dionysius Exiguus), but as a word, \"nulla\" meaning \"nothing\", not as a symbol. When division produced zero as a remainder, \"nihil\", also meaning \"nothing\", was used. These medieval zeros were used by all future medieval calculators of Easter. The initial \"N\" was used as a zero symbol in a table of Roman numerals by Bede or his colleagues around 725.\n", "By 130 AD, Ptolemy, influenced by Hipparchus and the Babylonians, was using a symbol for 0 (a small circle with a long overbar) within a sexagesimal numeral system otherwise using alphabetic Greek numerals. Because it was used alone, not as just a placeholder, this Hellenistic zero was the first \"documented\" use of a true zero in the Old World. In later Byzantine manuscripts of his \"Syntaxis Mathematica\" (\"Almagest\"), the Hellenistic zero had morphed into the Greek letter Omicron (otherwise meaning 70).\n", "The ancient Greeks had no symbol for zero (μηδέν), and did not use a digit placeholder for it. They seemed unsure about the status of zero as a number. They asked themselves, \"How can nothing \"be\" something?\", leading to philosophical and, by the medieval period, religious arguments about the nature and existence of zero and the vacuum. The paradoxes of Zeno of Elea depend in large part on the uncertain interpretation of zero.\n", "BULLET::::- The Roman number system was very cumbersome because there was no concept of zero (or empty space). The concept of zero (which was also called \"cipher\"), which is now common knowledge, was alien to medieval Europe, so confusing and ambiguous to common Europeans that in arguments people would say \"talk clearly and not so far fetched as a cipher\". Cipher came to mean concealment of clear messages or encryption.\n", "The development of zero as a number is one of the most important developments in early mathematics. It was used as a placeholder by the Babylonians and Greek Egyptians, and then as an integer by the Mayans, Indians and Arabs. (See The history of zero for more information.)\n" ]
Was there any actual proof of genocide in Srebrenica and Zepa?
Discovery and excavation of mass graves in the region has been [going on for years.](_URL_0_)
[ "On 12 July 2010, at the 15th anniversary of the Srebrenica Massacre, Dodik declared that he acknowledges the killings that happened on the site, but does not regard what happened at Srebrenica as genocide, differing from the conclusions of the ICTY and of the International Court of Justice. \"If a genocide happened then it was committed against Serb people of this region where women, children and the elderly were killed en masse,\" Dodik said, in reference to eastern Bosnia.\n", "In 2001, the International Criminal Tribunal for the Former Yugoslavia (ICTY) judged that the 1995 Srebrenica massacre was an act of genocide. On 26 February 2007, the International Court of Justice (ICJ), in the \"Bosnian Genocide Case\" upheld the ICTY's earlier finding that the massacre in Srebrenica and Zepa constituted genocide, but found that the Serbian government had not participated in a wider genocide on the territory of Bosnia and Herzegovina during the war, as the Bosnian government had claimed.\n", "In the case of Tolimir, in the first degree verdict, the International Criminal Tribunal has concluded that genocide was committed in the enclave of Žepa, outside of Srebrenica. However, that conviction was overturned by the appeals chamber, which narrowed the crime of genocide only to Srebrenica.\n", "On 26 September 1997, Nikola Jorgić was found guilty by the Düsseldorf Oberlandesgericht (Higher Regional Court) on 11 counts of genocide involving the murder of 30 persons in the Doboj region, making it the first Bosnian Genocide prosecution. However, ICTY ruled out that genocide did not occur . Jorgić's appeal was rejected by the German Bundesgerichtshof (Federal Supreme Court) on 30 April 1999. The Oberlandesgericht found that Jorgić, a Bosnian Serb, had been the leader of a paramilitary group in the Doboj region that had taken part in acts of terror against the local Bosniak population carried out with the backing of the Serb leaders and intended to contribute to their policy of \"ethnic cleansing\".\n", "In addition, the Srebrenica massacre was the core issue of the landmark court case Bosnian Genocide case at the International Court of Justice through which Bosnia and Herzegovina accused Serbia and Montenegro of genocide. The ICJ presented its judgement on 26 February 2007, which concurred with ICTY's recognition of the Srebrenica massacre as genocide. It cleared Serbia of direct involvement in genocide during the Bosnian war, but ruled that Belgrade did breach international law by failing to prevent the 1995 Srebrenica genocide, and for failing to try or transfer the persons accused of genocide to the ICTY, in order to comply with its obligations under Articles I and VI of the Genocide Convention, in particular in respect of General Ratko Mladić. Citing national security, Serbia obtained permission from the ICTY to keep parts of its military archives out of the public eye during its trial of Slobodan Milošević, which may have decisively affected the ICJ's judgement in the lawsuit brought against Serbia by Bosnia-Herzegovina, as the archives were hence not on the ICTY's public record – although the ICJ could have, but did not, subpoena the documents themselves. 
Chief prosecutor's office, OTP, rejects allegations that there was a deal with Belgrade to conceal documents from the ICJ Bosnia genocide case.\n", "International Tribunal findings, have addressed two allegations of genocidal events, including the 1992 Ethnic Cleansing Campaign in municipalities throughout Bosnia, as well as the convictions found in regards to the Srebrenica Massacre of 1995 in which the tribunal found, \"Bosnian Serb forces committed genocide, they targeted for extinction, the 40,000 Bosnian Muslims of Srebrenica ... the trial chamber refers to the crimes by their appropriate name, genocide ...\". Individual convictions applicable to the 1992 Ethnic Cleansings have not been secured however. A number domestic courts and legislatures have found these events to have met the criteria of genocide, and the ICTY found the acts of, and intent to destroy to have been satisfied, the \"Dolus Specialis\" still in question and before the MICT, UN war crimes court. but ruled that Belgrade did breach international law by failing to prevent the 1995 Srebrenica genocide, and for failing to try or transfer the persons accused of genocide to the ICTY, in order to comply with its obligations under Articles I and VI of the Genocide Convention, in particular in respect of General Ratko Mladić.\n", "According to a 2013 study of the Srebrenica genocide, Geller \"is among the most vociferous revisionists\" of the genocide. She denies the genocide of Bosnians in Srebrenica, describing it as the \"Srebrenica Genocide Myth.\" She has defended Slobodan Milošević, who died while standing trial for war crimes in the Bosnian War, and denied the existence of Serbian concentration camps. She claimed that the 1999 NATO intervention in Kosovo was \"in order to pave the way for an Islamic state in the heart of Europe – Kosovo.\" \n" ]
What effect does the wobble of the earth's axis have on the weather?
You are on the correct trail in researching Milankovitch cycles. The question of whether it affects weather is really one of timescales. The characteristic timescale of axial precession (wobble) is 26,000 years, so any effect on our 'weather' would be on similar timescales. It is basically impossible for something which varies once every 26,000 years to affect weather which changes on a daily basis. Milankovitch cycles are relevant for climate changes over tens of thousands of years, but not weather.
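As a back-of-the-envelope illustration of that timescale argument (just arithmetic on the ~26,000-year figure above):

```python
# How much of one axial-precession cycle elapses over a single day of "weather"?
PRECESSION_PERIOD_YEARS = 26_000
DAYS_PER_YEAR = 365.25

fraction_per_day = 1 / (PRECESSION_PERIOD_YEARS * DAYS_PER_YEAR)
print(f"Fraction of a precession cycle per day: {fraction_per_day:.2e}")
# ~1e-7 of a cycle per day - invisible against day-to-day weather variability,
# but cumulative over tens of thousands of years it shifts the seasonal
# distribution of sunlight, which is why it matters for climate.
```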
[ "Retrograde motion, or retrogression, within the Earth's atmosphere is seen in weather systems whose motion is opposite the general direction of airflow, i.e. from east to west against the westerlies or from west to east through the trade wind easterlies.\n", "Surface friction allows the atmosphere to 'pick up' angular momentum from Earth in the case of retrograde rotation or release it to Earth in the case of superrotation. Averaging over longer time scales, no exchange of AAM with the solid Earth takes place. Earth and atmosphere are decoupled. This implies that the ground level zonal wind-component responsible for rigid rotation must be zero on the average. Indeed, the observed meridional structure of the climatic mean zonal wind on the ground shows westerly winds (from the west) in middle latitudes beyond about ± 30 latitude and easterly winds (from the east) in low latitudes—the trade winds—as well as near the poles\n", "BULLET::::- Does the internal mantle structure provide the resonance for the Chandler wobble of the earth's axis or is it some other external mechanism? No available motions seem to be coherent drivers for the wobble period of 433 days.\n", "The Coriolis effect causes Coriolis drift in a direction perpendicular to the Earth's axis; for most locations on Earth and firing directions, this deflection includes horizontal and vertical components. The deflection is to the right of the trajectory in the northern hemisphere, to the left in the southern hemisphere, upward for eastward shots, and downward for westward shots. The vertical Coriolis deflection is also known as the Eötvös effect. Coriolis drift is not an aerodynamic effect; it is a consequence of the rotation of the Earth.\n", "The common explanation for this celestial phenomenon is precession, the ‘wobbling’ rotating movement of the earth axis. Research into Sri Yukteswar’s explanation is being conducted by the Binary Research Institute.\n", "While it has to be maintained by changes in the mass distribution or angular momentum of the Earth's outer core, atmosphere, oceans, or crust (from earthquakes), for a long time the actual source was unclear, since no available motions seemed to be coherent with what was driving the wobble.\n", "The Earth's axis of rotation – and hence the position of the North Pole – was commonly believed to be fixed (relative to the surface of the Earth) until, in the 18th century, the mathematician Leonhard Euler predicted that the axis might \"wobble\" slightly. Around the beginning of the 20th century astronomers noticed a small apparent \"variation of latitude,\" as determined for a fixed point on Earth from the observation of stars. Part of this variation could be attributed to a wandering of the Pole across the Earth's surface, by a range of a few metres. The wandering has several periodic components and an irregular component. The component with a period of about 435 days is identified with the eight-month wandering predicted by Euler and is now called the Chandler wobble after its discoverer. The exact point of intersection of the Earth's axis and the Earth's surface, at any given moment, is called the \"instantaneous pole\", but because of the \"wobble\" this cannot be used as a definition of a fixed North Pole (or South Pole) when metre-scale precision is required.\n" ]
why does mitch mcconnell have so much power?
He is the Senate Majority Leader. The political party that he belongs to, the Republicans, have the majority of the seats in the Senate. According to the Senate's rules (which they get to make up themselves), the majority party gets to vote to elect a leader. That leader is given the power to schedule bills for votes. The Senate could vote to change those rules if they wanted, but they typically don't, because whoever has the majority of the seats will also have the most votes against changing the rules.
[ "McConnell has gained a reputation as a skilled political strategist and tactician. However, this reputation dimmed after Republicans failed to repeal the Affordable Care Act (Obamacare) in 2017 during consolidated Republican control of government.\n", "McConnell was elected to the Senate in 1984 and has been re-elected five times since. During the 1998 and 2000 election cycles, he was chairman of the National Republican Senatorial Committee. McConnell was elected as Majority Whip in the 108th Congress and was re-elected to the post in 2004. In November 2006, he was elected Senate Minority Leader; he held that post until 2015, when Republicans took control of the Senate and he became Senate Majority Leader.\n", "In 2018, McConnell called for entitlement cuts and raised concern about government deficits. He blamed the deficit on government spending, and dismissed criticisms of the tax cuts bill he passed the year prior, which added more than $1 trillion in debt.\n", "McConnell's role as Chief of Staff of the Air Force, as well as that of the other members of the Joint Chiefs of Staff during the Vietnam War, specifically under the administration of President Lyndon B. Johnson and Secretary of Defense Robert McNamara, has recently been the subject of significant historical research in the area of the relationships between senior military leaders (e.g., the JCS) and the civilian political leadership (e.g., the National Command Authority) and has increasingly become a topical discussion issue and object lesson for officers attending the nation's senior service colleges (i.e., Air War College, Army War College, Naval War College and National War College).\n", "McConnell was the second person to hold the position of Director of National Intelligence. He was nominated by President George W. Bush on January 5, 2007, and was sworn in at Bolling Air Force Base in Washington, D.C. on February 20, 2007. McConnell's appointment to the post was initially greeted with broad bipartisan support, although he has since attracted criticism for advocating some of the Bush administration's more controversial policies.\n", "McConnell was known as a pragmatist and a moderate Republican early in his political career but veered to the right over time. He led opposition to stricter campaign finance laws, culminating in the Supreme Court ruling that partially overturned the Bipartisan Campaign Reform Act (McCain-Feingold) in 2009. During the Obama presidency, McConnell worked to withhold Republican support for major presidential initiatives, made frequent use of the filibuster, and blocked many of Obama's judicial nominees, including Supreme Court nominee Merrick Garland. McConnell later described his decision to block the Garland nomination as \"the most consequential decision I've made in my entire public career\". Under McConnell's leadership of the Senate, Obama saw the fewest judicial nominees confirmed in the final two years of a presidency since 1951-1952. McConnell was included in the \"Time\" 100 list of the most influential people in the world in 2015.\n", "Scholars have also characterized Mitch McConnell's tenure as Senate Minority Leader and Senate Majority Leader during the Obama presidency as one where obstructionism reached all-time highs. Political scientists have referred to McConnell's use of the filibuster as \"constitutional hardball\", referring to the misuse of procedural tools in a way that undermines democracy. 
McConnell delayed and obstructed health care reform and banking reform, which were two landmark pieces of legislation that Democrats sought to pass (and in fact did pass) early in Obama's tenure. By delaying Democratic priority legislation, McConnell stymied the output of Congress. Political scientists Eric Schickler and Gregory J. Wawro write, \"by slowing action even on measures supported by many Republicans, McConnell capitalized on the scarcity of floor time, forcing Democratic leaders into difficult trade-offs concerning which measures were worth pursuing. That is, given that Democrats had just two years with sizeable majorities to enact as much of their agenda as possible, slowing the Senate’s ability to process even routine measures limited the sheer volume of liberal bills that could be adopted.\"\n" ]
what is a good and simple analogy for the big freeze theory?
Imagine you put 1000 ants randomly onto a little patch of soil 1 foot square. They're close together, they can work together to achieve things, maybe they'll even establish a new colony. It's thriving with activity. But now imagine you can magically grow that square foot of soil at an ever-expanding rate. After an hour it's now 10ft square. The ants can still cross the distance and cooperate, it just takes them longer and uses more energy. After two hours it's a mile square. Maybe some ants are still working together here and there but most just got separated by distance and are wandering around alone. After three hours it's a thousand miles square. Now each ant is so far apart from its neighbors that they'll likely never meet except in rare circumstances. After four hours it's a million miles square. The possibility that any two ants will ever meet and be able to do work together is vanishingly small. After five hours it's a hundred billion miles square. Each ant couldn't even cross the distance to the next ant if it walked for an entire lifetime. After six hours, all the ants are dead. Each one ran out of energy and just lay down on their separate, lonely, vast islands of isolation and expired. The whole scene is still. What was once thriving activity is now completely static and lifeless.
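If you want the analogy in numbers, here is a toy sketch (my own made-up rates, assuming the patch grows exponentially as a stand-in for accelerating expansion): once the gap to your neighbour stretches faster than you can walk, no amount of walking will ever close it.

```python
# Toy model of the expanding-patch analogy: ground stretches exponentially,
# the ant walks at a fixed speed. All numbers are illustrative.
import math


def time_to_reach(initial_gap_m, walk_speed_m_per_s, growth_rate_per_s):
    """Time for a walker to reach a neighbour across ground that stretches
    exponentially (gap ~ d0 * e^(r*t) if nobody walks), or None if the
    neighbour can never be reached."""
    d0, v, r = initial_gap_m, walk_speed_m_per_s, growth_rate_per_s
    # As a fraction of the stretching gap, the walker advances at
    # v / (d0 * e^(r*t)), which integrates to (v / (d0*r)) * (1 - e^(-r*t)).
    # That fraction can only reach 1 if v > d0 * r; otherwise the neighbour
    # sits beyond an "event horizon" and is unreachable forever.
    if v <= d0 * r:
        return None
    return -math.log(1.0 - d0 * r / v) / r


print(time_to_reach(1.0, 0.01, 0.001))  # ~105 s: still reachable
print(time_to_reach(1.0, 0.01, 0.02))   # None: can never be reached
```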
[ "The Big Freeze is a 1993 featurette-length film written and directed by Eric Sykes. The action centres on mishaps involving a father and son plumbing team attending to business in sub-zero temperatures at a retirement home in Finland. Like other Sykes directorial vehicles, the piece is a silent comedy with a star cast - here including Bob Hoskins, John Mills, Donald Pleasence and Spike Milligan.\n", "BULLET::::- Flexible Freeze is a budgeting approach pioneered by President George H. W. Bush as a means to cut government spending. Under this approach, certain programs would be affected by changes in population growth and inflation.\n", "Specifically, the Freeze's goal was to get the U.S. and the Soviet Union to simultaneously adopt a mutual freeze on the testing, production, and deployment of nuclear weapons and of missiles, as well as new aircraft designed primarily to deliver nuclear weapons. Much emphasis was put on the MX and Pershing II missiles. Randall Forsberg was the organizer who initiated this idea of the \"mutual, verifiable\" Freeze.\n", "In software engineering, a freeze is a point in time in the development process after which the rules for making changes to the source code or related resources become more strict, or the period during which those rules are applied. A freeze helps move the project forward towards a release or the end of an iteration by reducing the scale or frequency of changes, and may be used to help meet a roadmap.\n", "The Big Freeze is a scenario under which continued expansion results in a universe that asymptotically approaches absolute zero temperature. This scenario, in combination with the Big Rip scenario, is currently gaining ground as the most important hypothesis. It could, in the absence of dark energy, occur only under a flat or hyperbolic geometry. With a positive cosmological constant, it could also occur in a closed universe. In this scenario, stars are expected to form normally for 10 to 10 (1–100 trillion) years, but eventually the supply of gas needed for star formation will be exhausted. As existing stars run out of fuel and cease to shine, the universe will slowly and inexorably grow darker. Eventually black holes will dominate the universe, which themselves will disappear over time as they emit Hawking radiation. Over infinite time, there would be a spontaneous entropy decrease by the Poincaré recurrence theorem, thermal fluctuations, and the fluctuation theorem.\n", "The Big Freeze is a scenario under which continued expansion results in a universe that asymptotically approaches absolute zero temperature. This scenario, in combination with the Big Rip scenario, is currently gaining ground as the most important hypothesis. It could, in the absence of dark energy, occur only under a flat or hyperbolic geometry. With a positive cosmological constant, it could also occur in a closed universe. In this scenario, stars are expected to form normally for 10 to 10 (1–100 trillion) years, but eventually the supply of gas needed for star formation will be exhausted. As existing stars run out of fuel and cease to shine, the universe will slowly and inexorably grow darker. Eventually black holes will dominate the universe, which themselves will disappear over time as they emit Hawking radiation. Over infinite time, there would be a spontaneous entropy decrease by the Poincaré recurrence theorem, thermal fluctuations, and the fluctuation theorem.\n", "The Freeze was an Edinburgh punk band that lasted from 1976–1981. 
Wanting to do something darker and noisier, Gordon Sharp and David Clancy took ideas from The Freeze to a greater extreme; the result was the formation of Cindytalk in 1982.\n" ]
Hypothetical Exoplanet Analysis
Off the top of my head, the simplest example in that image of a mistake that I think would stand up is that it shows a star in the sky that is either very large or pretty close - and the brightness from it would certainly drown out the other stars in the black sky there. Even the Moon photos, despite the lack of atmosphere there, have the stars basically invisible because the brightness of the foreground totally overwhelms them. Some of it seems plausible - the star is quite large or close, so some liquid water is potentially possible; the rocks and terrain, meh, why not. It does look like the "camera" is sitting on a moon orbiting the gas giant. I could be wrong, but I think that if a gas giant were that close to a star, it would be quite warm indeed, which I think would lead to a much faster-moving atmosphere - meaning that the nice cloud bands wouldn't appear, but rather it would be more of a single color or smear at best. I am happy to stand corrected if that is wrong.
[ "NICMOS observed the exoplanet XO-2b at star XO-2, and a spectroscopy result was obtained for this exoplanet in 2012. This uses the spectroscopic abilities of the instrument, and in astronomy spectroscopy during a planetary transit (an exoplanet passes in front of star from the perspective of Earth) is a way to study that exoplanet's possible atmosphere.\n", "The discovery of the exoplanet (along with Kepler-62e) was announced in April 2013 by NASA as part of the \"Kepler\" spacecraft data release. The exoplanet was found by using the transit method, in which the dimming effect that a planet causes as it crosses in front of its star is measured. According to scientists, it is a potential candidate to search for extraterrestrial life, and was chosen as one of the targets to study by the Search for Extraterrestrial Intelligence (SETI) program.\n", "The Habitable Exoplanet Imaging Mission (HabEx) is a space telescope concept that would be optimized to search for and image Earth-size habitable exoplanets in the habitable zones of their stars, where liquid water can exist. HabEx would aim to understand how common terrestrial worlds beyond the Solar System may be and the range of their characteristics. It would be an optical, UV and infrared telescope that would also use spectrographs to study planetary atmospheres and eclipse starlight with either an internal coronagraph or an external starshade.\n", "Present day searches for exoplanets are insensitive to exoplanets located at the distances from their host star comparable to the semi-major axes of the gas giants in the Solar System, greater than about 5 AU. Surveys using the radial velocity method require observing a star over at least one period of revolution, which is roughly 30 years for a planet at the distance of Saturn. Existing adaptive optics instruments become ineffective at small angular separations, limiting them to semi-major axes larger than about 30 astronomical units. The high contrast of the Gemini Planet Imager at small angular separations will allow it to detect gas giants with semi-major axes of 5–30 astronomical units.\n", "Exoplanetology, or exoplanetary science, is an integrated field of astronomical science dedicated to the search for and study of exoplanets (extrasolar planets). It employs an interdisciplinary approach which includes astrobiology, astrophysics, astronomy, astrochemistry, astrogeology, geochemistry, and planetary science.\n", "The Exoplanet Archive serves photometric time-series data from surveys that aim to discover transiting exoplanets, such as the Kepler Mission and CoRoT. The database provides access to over 22 million light curves from space and ground-based exoplanet transit survey programs, including:\n", "Studies of exoplanets have measured atmospheric escape as a means of determining atmospheric composition and habitability. The most common method is Lyman-alpha line absorption. Much as exoplanets are discovered using the dimming of a distant star's brightness (transit), looking specifically at wavelengths corresponding to hydrogen absorption describes the amount of hydrogen present in a sphere around the exoplanet. This method indicates that the hot Jupiters HD209458b and HD189733b and Hot Neptune GJ436b are experiencing significant atmospheric escape.\n" ]
how do companies get away with badmouthing each other?
There is nothing illegal about using your competitor's name in an advertisement. But...you open yourself up to false advertising claims (not to mention mounds of legal costs) when you piss off a competitor. The phrase "compared to the leading brand" is often chosen to avoid hassle, but also because you don't want to give increased recognition to a competitor's brand. Typically only underdog brands engage in this strategy since it is perceived as risky for your brand (being a whiny bitch isn't always a good strategy). Typically suits come up under false advertising. The affected brand says that the advertisement is misleading or factually inaccurate (e.g. the ad suggests that coke will make you get laid more than pepsi but no evidence exists to back that up). Companies make these false claims all the time and they _could be_ regarded as false advertising, but...if you're using a competitor's name you're just that much more likely to get called out on it. It _was_ once illegal; it no longer is, though it still is in some countries.
[ "This type occurs whereby a dominant firm using dominant position to exploit consumers without losing them through conduct like price increase and production limitation. There is no legal definition of ‘exploitative abuse’ under Article 102 but it can be taken as ‘any conduct that directly causes harm to the customers of the dominant undertaking’. Without barriers to entry, the market is likely to be self-corrected by competition because monopoly profits will attract new competitors to enter the market. However, the Guidance does suggest that the Commission will intervene where the conduct is directly exploitative of consumers (for example, charging excessively high prices). Some examples of exploitative conduct include:\n", "A company engaging in corporate cannibalism is effectively competing against itself. There are two main reasons companies do this. Firstly, the company wants to increase its market share and is taking a gamble that introducing the new product will harm other competitors more than the company itself. Secondly, the company may believe that the new product will sell better than the first, or will sell to a different sort of buyer. For example, a company may manufacture cars, and later begin manufacturing trucks. While both products appeal to the same general market (drivers) one may fit an individual's needs better than the other. However, corporate cannibalism often has negative effects: the car manufacturer's customer base may begin buying trucks instead of cars, resulting in good truck sales, but not increasing the company's market share. There may even be a decrease. This is also called market cannibalization.\n", "Under \"Sec. 15 of RA 10667\", the entities (whether companies or individuals) are prohibited from abusing their dominant position by engaging in conduct that would substantially prevent, restrict or lessen competition. Such conduct includes predatory pricing, imposing barriers to entry in an anti-competitive manner, unfair exercise of monopsony power, among others\n", "Monopolies and oligopolies are often accused of, and sometimes found guilty of, anti-competitive practices. For this reason, company mergers are often examined closely by government regulators to avoid reducing competition in an industry. Although anti-competitive practices often enrich those who practice them, they are generally believed to have a negative effect on the economy as a whole, and to disadvantage competing firms and consumers who are not able to avoid their effects, generating a significant social cost. For these reasons, most countries have competition laws to prevent anti-competitive practices, and government regulators to aid the enforcement of these laws.\n", "The problem manifests itself in the ways middle managers discriminate against employees who they deem to be \"overqualified\" in hiring, assignment, and promotion, and repress or terminate \"whistleblowers\" who want to make senior management aware of fraud or illegal activity. This may be done for the benefit of the middle manager and against the best interest of the shareholders (or members of a non-profit organization).\n", "The legislation gives private companies the authority to go on the counter-offensive against hackers, meaning a company that was hacked could perform more assertive defensive measures than are currently allowed under the law. However, companies would not be allowed to hack back into other systems or manipulate systems for which they do not have consent to control.\n", "BULLET::::2. 
The company may wish to tie up lower-end competitors who might otherwise try to move up-market. If the company has been attacked by a low-end competitor, it often decides to counterattack by entering the low end of the market.\n" ]
why do cops use numbers like 10-4 to talk to each other instead of saying what’s actually happening?
Because there are multiple officers all trying to talk on the same frequency, it's good to be brief. Also, in case someone is listening in, it isn't immediately obvious what's going on.
[ "Ten-codes, officially known as ten signals, are brevity codes used to represent common phrases in voice communication, particularly by law enforcement and in Citizens Band (CB) radio transmissions. The police version of ten-codes is officially known as the APCO Project 14 Aural Brevity Code.\n", "In the United States and Canada, ten-digit dialing is the practice of including the area code of a telephone number when dialing to initiate a telephone call. When necessary, a ten-digit number may be prefixed with the trunk code \"1\", which is often referred to as \"11-digit dialing\" or \"national format\".\n", "Call signs with two (or more) digits in them can arise a number of ways. When the digits abut one another, it is important to distinguish which digit belongs to the prefix, which is the separating numeral, and which may belong to the suffix.\n", "The ten-codes are used only for voice communications, usually radio transmissions and denote commonly used phrases; for example 10-16 means \"domestic disturbance\" for some agencies. Use of ten-codes is intended for the clear, quick, and concise communication between law enforcement officers.\n", "Ten-codes, especially \"10-4\" (meaning \"understood\") first reached public recognition in the mid- to late-1950s through the popular television series \"Highway Patrol\", with Broderick Crawford. Crawford would reach into his patrol car to use the microphone to answer a call and precede his response with \"10-4\".\n", "A one or two digit number denotes a Sergeant (except in larger boroughs like Southwark where some sergeants are allocated three digit numbers), a three digit number denotes a Constable, a four digit number beginning with 5 denotes an officer of the Metropolitan Special Constabulary, unless they're attached to a 'Roads & Transport Policing Command' (RTPC) team, in which case the number will begin with an 8 and a four digit number beginning with 7 denotes a PCSO again unless they are attached to RTPC and they will start with a 6. Confusingly, MPS epaulettes display the letters \"over\" the digits, i.e. 81FH (a Sergeant based at Hammersmith) would show FH over 81 on their shoulder, which reads more like FH81 (the call sign of a panda car based there). Ranks above Sergeant do not have collar numbers - officers are identified by name (e.g. Inspector Smith, who may once have been PC 123 kg Smith).\n", "Because call whisper settings are specific to a particular non-geographic telephone number, the feature can allow the called party to identify which telephone number the caller has dialed. For instance, ten separate numbers can all be routing their inbound calls to a single destination number. However, each of the ten numbers could have their own customized call whisper message on them relating to the nature of the call i.e. the function of the number that has been dialed. Therefore, the agent answering will know the number that has been dialed and thus the appropriate way to answer the call.\n" ]
why the new star wars movie still isn't rated this close to its release date?
In this case I believe it is because they have not released the movie to critics, so as to avoid leaking plot details. Current rumor is that there is some big reveal regarding Luke that they don't want to give away in advance.
[ "All six \"Star Wars\" films were released by 20th Century Fox Home Entertainment on Blu-ray Disc on September 16, 2011 in three different editions, with \"A New Hope\" available in both a box set of the original trilogy and with all six films on \"Star Wars: The Complete Saga\", which includes nine discs and over 40 hours of special features. The original theatrical versions of the films were not included in the box set; however, the new 2011 revisions of the trilogy were leaked a month prior to release, inciting controversy the new changes made to these movies and causing an online uproar against Lucas.\n", "While initially only being released in a limited theatrical run, \"Star Wars\" was an unprecedented success for 20th Century Fox, soon becoming a blockbuster hit and expanding to a much wider release. It would eventually see many theatrical and home video re-releases.\n", "Worried that \"Star Wars\" would be beaten out by other summer films, such as \"Smokey and the Bandit\", 20th Century Fox moved the release date to May 25, the Wednesday before Memorial Day. However, fewer than 40 theaters ordered the film to be shown. In response, the studio demanded that theaters order \"Star Wars\" if they wanted the eagerly anticipated \"The Other Side of Midnight\" based on the novel by the same name.\n", "Roger Ebert gave the film two-and-a-half out of four stars, stating that while the actors were good, \"The Last Starfighter\" was \"not a terrifically original movie,\" but was nonetheless \"well-made\". \"Halliwell's Film Guide\" described the film as \"a surprisingly pleasant variation on the \"Star Wars\" boom, with sharp and witty performances from two reliable character actors and some elegant gadgetry to offset the teenage mooning.\" Gene Siskel included the film on his list of \"Guilty Pleasures\", describing it as \"a \"Star Wars\" rip-off, but the best one\". Over time it has developed a cult following.\n", "The release on May 19, 1999 of the first new \"Star Wars\" film in 16 years was accompanied by a considerable amount of attention. Few film studios released films during the same week: DreamWorks and Universal Studios released \"The Love Letter\" on May 21 and \"Notting Hill\" on May 28, respectively. \"The Love Letter\" was a commercial failure but \"Notting Hill\" fared better and followed \"The Phantom Menace\" closely in second place. Employment consultant firm Challenger, Gray & Christmas estimated that 2.2 million full-time employees missed work to attend the film, resulting in a loss of productivity. According to \"The Wall Street Journal\", so many workers announced plans to view the premiere that many companies closed on the opening day. Queue areas formed outside cinema theaters over a month before ticket sales began.\n", "In the opinion of several critics, the release of \"Star Wars\" marked a distinctive demographic shift among the audiences as well as altered trends in movie industry drastically which at the same time attributed to \"Sorcerer\"s financial and critical fiasco. Sean Macaulay notes that \"Star Wars\" changed the movie-going demography, considerably \"reset[ting] American cinema back to comforting fantasy\" According to reviewer Pauline Kael, \"Star Wars\" contributed to \"infantilizing the audience\" as well as \"obliterating irony, self-consciousness, and critical reflection\" and to Tom Shone, who drew from Kael, was impossible to compete with by Friedkin and \"Sorcerer\". 
Biskind also thought American movie-going demographic changed considerably since \"The French Connection\" and \"Sorcerer\" was \"too episodic, dark, and star challenged\" to achieve mainstream appreciation. RH Greene argues that \"Star Wars\", which in his opinion was \"pure escapism\", made intellectually demanding films like \"Sorcerer\" obsolete.\n", "\"\" is the seventh film in the \"Star Wars\" franchise, released ten years after . Co-written and directed by J. J. Abrams, the film stars Adam Driver, Daisy Ridley, John Boyega and Oscar Isaac in new roles, with Harrison Ford, Mark Hamill, and Carrie Fisher reprising their roles from the original trilogy which concluded in 1983. Prior to its release, the film was predicted by box office analysts to break records, citing the relative lack of competition owing to its date of release, being released in large formats such as IMAX in a high number of venues, and multi-generational appeal to both fans of the previous movies and children as success factors.\n" ]
What is "foaming at the mouth" and what exactly causes it?
Rabies causes, amongst other things, "hydrophobia," which, counter to what its name suggests, isn't a literal fear of water but rather an inability to swallow effectively. Many patients afflicted by rabies experience laryngospasm and pharyngeal or diaphragmatic spasms. The end result is an inability to effectively swallow even your own saliva, leading to drooling and spitting; as the disease progresses and you become increasingly dehydrated and decreasingly lucid, foam starts to form in your now-thick saliva as you attempt to spit. Source: work in healthcare, also _URL_0_
[ "In cuisine, foam is a gelling or stabilizing agent in which air is suspended. Foams have been present in many forms over the history of cooking, such as whipped cream, meringue and mousse. In these cases, the incorporation of air or another gas creates a lighter texture and a different mouth feel. Foams add flavor without significant substance, and thus allow cooks to integrate new flavors without changing the physical composition of a dish.\n", "\"Sea foam\", ocean foam, beach foam, or spume is a type of foam created by the agitation of seawater, particularly when it contains higher concentrations of dissolved organic matter (including proteins, lignins, and lipids) derived from sources such as the offshore breakdown of algal blooms. These compounds can act as surfactants or foaming agents. As the seawater is churned by breaking waves in the surf zone adjacent to the shore, the surfactants under these turbulent conditions trap air, forming persistent bubbles that stick to each other through surface tension. Sea foam is a global phenomenon and it varies depending on location and the potential influence of the surrounding marine, freshwater, and/or terrestrial environments. Due to its low density and persistence, foam can be blown by strong on-shore winds from the beach face inland.\n", "Oromandibular dystonia (OMD) is a form of focal dystonia that affects varying areas of the head and neck including the lower face, jaw, tongue and larynx. The spasms may cause the mouth to pull open, shut tight, or move repetitively. Speech and swallowing may be distorted. It is often associated with dystonia of the cervical muscles (Spasmodic Torticollis), eyelids (Blepharospasm), or larynx (Spasmodic Dysphonia).\n", "Foams consist of two phases, an aqueous phase and a gaseous (air) phase. Foams have been used in many forms in the history of cooking, for example: whipped cream, ice cream, cakes, meringue, soufflés, mousse and marshmallow. It has a unique light texture because of the tiny air bubbles and/or a different mouthfeel. In most of these products, proteins are the main surface active agents that help in the formation and stabilization of the dispersed gas phase. To create a protein-stabilized foam, it usually involves bubbling, whipping or shaking a protein solution and its foaming properties refers to its capacity to form a thin tenacious film at the gas-liquid interface for large amounts of gas bubbles to become incorporated and stabilized.\n", "A foaming agent is a material that facilitates formation of foam such as a surfactant or a blowing agent. A surfactant, when present in small amounts, reduces surface tension of a liquid (reduces the work needed to create the foam) or increases its colloidal stability by inhibiting coalescence of bubbles. A blowing agent is a gas that forms the gaseous part of the foam.\n", "Foam destabilization occurs for several reasons. First, gravitation causes drainage of liquid to the foam base, which Rybczynski and Hadamar include in their theory; however, foam also destabilizes due to osmotic pressure causes drainage from the lamellas to the Plateau borders due to internal concentration differences in the foam, and Laplace pressure causes diffusion of gas from small to large bubbles due to pressure difference. 
In addition, films can break under disjoining pressure, These effects can lead to rearrangement of the foam structure at scales larger than the bubbles, which may be individual (T1 process) or collective (even of the \"avalanche\" type).\n", "The name foamy virus can be attributed to the foamy appearance of the cells upon rapid lysation and syncytium formation, vacuolization and cellular death, which is also known as a cytopathic effect (CPE).\n" ]
Are trees from different families examples of convergent evolution? Or do many families of plants come from a tree-like ancestor?
So I've heard of arborescence (or tree-ness) being considered a convergent phenotype in George McGhee's book "Convergent Evolution: Limited Forms Most Beautiful". In fact, he argues that arborescence has evolved independently in 9 plant lineages. One example is ferns, which evolved tree-ness in 3 groups throughout history (only one group is still extant), each with different types of trunks! The main reason touted for the convergence of this trait is functional constraint. It just turns out that 'growing upward' is rather important, so many lineages will leverage their genetic or morphological resources to achieve that goal. This variety of mechanical mechanisms to realize the trait suggests that the tree form was likely independently selected upon numerous times. To answer your question more specifically, it depends on the taxonomic level we're talking about. I think that trees in the Fabaceae and Rosaceae might not really be convergence because they share a common tree form. However, the tree form of cycads compared to angiosperm trees would be an instance of convergence since they are truly independent and distinct. Edit: Upon re-reading, I realize some of my language is very teleological. I don't want to ascribe any teleology to these evolutionary processes; I'm just speaking in rhetorical shorthand. Please don't crucify me
[ "Genome-wide analysis of 11 clumps of \"P. trichocarpa\" trees reveals significant genetic differences between the roots and the leaves and branches of the same tree. The variation within a specimen is as much as found between unrelated trees. These results may be important in resolving debate in evolutionary biology regarding somatic mutation (that evolution can occur within individuals, not solely among populations), with a variety of implications.\n", "An evolutionary tree (of Amniota, for example, the last common ancestor of mammals and reptiles, and all its descendants) illustrates the initial conditions causing evolutionary patterns of similarity (e.g., all Amniotes produce an egg that possesses the amnios) and the patterns of divergence amongst lineages (e.g., mammals and reptiles branching from the common ancestry in Amniota). Evolutionary trees provide conceptual models of evolving systems once thought limited in the domain of making predictions out of the theory. However, the method of phylogenetic bracketing is used to infer predictions with far greater probability than raw speculation. For example, paleontologists use this technique to make predictions about nonpreservable traits in fossil organisms, such as feathered dinosaurs, and molecular biologists use the technique to posit predictions about RNA metabolism and protein functions. Thus evolutionary trees are evolutionary hypotheses that refer to specific facts, such as the characteristics of organisms (e.g., scales, feathers, fur), providing evidence for the patterns of descent, and a causal explanation for modification (i.e., natural selection or neutral drift) in any given lineage (e.g., Amniota). Evolutionary biologists test evolutionary theory using phylogenetic systematic methods that measure how much the hypothesis (a particular branching pattern in an evolutionary tree) increases the likelihood of the evidence (the distribution of characters among lineages). The severity of tests for a theory increases if the predictions \"are the least probable of being observed if the causal event did not occur.\" \"Testability is a measure of how much the hypothesis increases the likelihood of the evidence.\"\n", "The phylogenetic tree has the most recent common ancestor at the root, all the current species as the leaves, and intermediate nodes at each point of branching divergence. The branches are divided into segments (between one node and another node, a leaf, or the root). Each segment is assigned an ED score defined as the timespan it covers (in millions of years) divided by the number of species at the end of the subtree it forms. The ED of a species is the sum of the ED of the segments connecting it to the root. Thus, a long branch which produces few species will have a high ED, as the corresponding species are relatively distinctive, with few close relatives. ED metrics are not exact, because of uncertainties in both the ordering of nodes and the length of segments.\n", "Phylogenetic trees generated by computational phylogenetics can be either \"rooted\" or \"unrooted\" depending on the input data and the algorithm used. A rooted tree is a directed graph that explicitly identifies a most recent common ancestor (MRCA), usually an imputed sequence that is not represented in the input. Genetic distance measures can be used to plot a tree with the input sequences as leaf nodes and their distances from the root proportional to their genetic distance from the hypothesized MRCA. 
Identification of a root usually requires the inclusion in the input data of at least one \"outgroup\" known to be only distantly related to the sequences of interest.\n", "Phylogenetic trees are limited by the set of closely related genomes that are available, and results are dependent on BLAST search criteria. Because it is based on sequence similarity, it is often difficult for phylostratigraphy to determine whether a novel gene has emerged \"de novo\" or has diverged from an ancestral gene beyond recognition, for instance following a duplication event. This was pointed out by a study that simulated the evolution of genes of equal age and found that distant orthologs can be undetectable for the most rapidly evolving genes. When accounting for changes in the rate of evolution to portions of young genes that acquire selected functions, a phylostratigraphic approach was much more accurate at assigning gene ages in simulated data. A subsequent pair of studies using simulated evolution found that phylostratigraphy failed to detect an ortholog in the most distantly related species for 13.9% of \"D. melanogaster\" genes and 11.4% of \"S. cerevisiae\" genes. Similarly, a spurious relationship between a gene’s age and its likelihood to be involved in a disease process was claimed to be detected in the simulated data. However, a reanalysis of studies that used phylostratigraphy in yeast, fruit flies and humans found that even when accounting for such error rates and excluding difficult-to-stratify genes from the analyses, the qualitative conclusions were unaffected for all three studies. The impact of phylostratigraphic bias on studies examining various features of \"de novo\" genes (see below) remains debated.\n", "However, this classification was only slightly better supported than certain alternative interpretations. A phylogenetic analysis constructs thousands of family trees, each of which include hundreds of \"steps\" in evolution where analyzed traits are evolved, lost, or reacquired. The family tree with the fewest \"steps\", known as the most parsimonius tree (MPT), is generally considered to be the most accurate under the principle of Occam's razor. In the case of this analysis, the MPT considered \"Colobops\" to be a basal rhynchosaur. However, some family trees look completely different from the MPT despite only a being few evolutionary steps more complex. If new data is incorporated into the analysis, one of these alternative trees may become a new MPT, rewriting our knowledge of reptile classification in the process. The MPT given by Pritchard \"et al\". (2018) is given below:\n", "Phylogenetic trees were created to show the divergence of variants from their ancestors. The divergence of variant, H2A.X, from H2A occurred at multiple origins in a phylogenetic tree. Acquisition of the phosphorylation motif was consistent with the many origins of H2A that arose from an ancestral H2A.X. Finally, the presence of H2A.X and absence of H2A in fungi leads researchers to believe that H2A.X was the original ancestor of the histone protein H2A \n" ]
if i were able to attain enough money to hire a team capable of sending me into space, and buy the spacecraft itself, would anyone be legally allowed to stop me?
Goodness no! In fact, you'd probably get a prize if you were able to do something novel up there. We're actively encouraging civilian spaceflight through a number of initiatives, including X Prize-style competitions and general tax breaks for corporations even attempting it. Mind you, you'd have to clear your launch with air traffic control.
[ "It's possible to equip your space shuttle with cargo, crew, and energy. Then you will launch, pilot, and land it on a carrier. While orbiting the Earth you will deploy and maintain satellites, or build and visit a space station.\n", "Furthermore, if such services were unavailable by the end of 2010, NASA would've been forced to purchase orbital transportation services on foreign spacecraft such as the Russian Federal Space Agency's Soyuz and Progress spacecraft, the European Space Agency's Automated Transfer Vehicle, or the Japan Aerospace Exploration Agency's H-II Transfer Vehicle since NASA's own Crew Exploration Vehicle, since refocused, would not have been ready until 2014. NASA asserted that once COTS was operational, it would no longer procure Russian cargo delivery services. On May 22, 2012, Bill Gerstenmaier confirmed that NASA was no longer purchasing any cargo resupply services from Russia and would rely solely on the American CRS vehicles, the SpaceX Dragon and Orbital Sciences' Cygnus; with the exception of a few vehicle-specific payloads delivered on the European ATV and the Japanese HTV.\n", "Bruce F. Webster reviewed \"Rescue at Rigel\" in \"The Space Gamer\" No. 34. Webster commented that \"if you've got the money and the interest, buy it. In fact, if you've only got either the money or the interest, buy it - you'll be glad you did.\"\n", "A further option for rescue would be to use Russian Soyuz spacecraft. Nikolay Sevastyanov, director of the Russian Space Corporation Energia, was reported by \"Pravda\" as saying: \"If necessary, we will be able to bring home nine astronauts on board three Soyuz spacecraft in January and February of the next year\".\n", "By using modernised, tried-and-tested equipment rather than developing technology from scratch, the project is reportedly saving around $2 billion in development costs. The Russian Proton rocket, launched from the Baikonur Cosmodrome in Kazakhstan, is intended to be used to launch one of the spacecraft into space, where it will remain. Astronauts will use the Excalibur Almaz RRVs to get to and from the spacecraft.\n", "One question of special importance was whether NASA could have saved the astronauts had they known of the danger. This would have to involve either rescue or repair – docking at the International Space Station for use as a haven while awaiting rescue (or to use the Soyuz to systematically ferry the crew to safety) would have been impossible due to the different orbital inclination of the vehicles.\n", "The players now just sign a contract, and then gain access to Corp Inc. Warehouse. The player can build their own space ship from the available prefabricated materials and fly it to the Periphery (edge of the Galaxy). The player may lose money, but any gains made are theirs to keep, and Corporation Incorporated will pay a bonus for quick delivery.\n" ]
Regarding classical conditioning and using it to influence your sexual preferences..
I'm no psychologist, but preferences for women's body types vary markedly by culture, and even over time within the same culture. That suggests they aren't all hardwired, at least, and that a fair amount of conditioning goes into them, whether it's social reinforcement or something else. There are probably limits to what conditioning can accomplish, though, as some of it is almost certainly hardwired. There's probably a point where your brain will just give up and say "I can't fap to this", so Bea Arthur fetishism might not be as likely to take hold.
[ "Some explanations invoke classical conditioning. In several experiments, men have been conditioned to show arousal to stimuli like boots, geometric shapes or penny jars by pairing these cues with conventional erotica. According to John Bancroft, conditioning alone cannot explain fetishism, because it does not result in fetishism for most people. He suggests that conditioning combines with some other factor, such as an abnormality in the sexual learning process.\n", "Classical conditioning is a theory of learning discovered by physiologist, Ivan Pavlov. It supports assumptions that form the foundation of behaviorism. These basic ideas suggest that all learning occurs through interactions within the environment, and that environment shapes behavior.\n", "Classical conditioning (also known as Pavlovian or respondent conditioning) refers to a learning procedure in which a biologically potent stimulus (e.g. food) is paired with a previously neutral stimulus (e.g. a bell). It also refers to the learning process that results from this pairing, through which the neutral stimulus comes to elicit a response (e.g. salivation) that is usually similar to the one elicited by the potent stimulus.\n", "Together with operant conditioning, classical conditioning became the foundation of behaviorism, a school of psychology which was dominant in the mid-20th century and is still an important influence on the practice of psychological therapy and the study of animal behavior. Classical conditioning is a basic learning process, and its neural substrates are now beginning to be understood. Though it is sometimes hard to distinguish classical conditioning from other forms of associative learning (e.g. instrumental learning and human associative memory), a number of observations differentiate them, especially the contingencies whereby learning occurs. \n", "Classical conditioning focuses on using preceding conditions to alter behavioral reactions. The principles underlying classical conditioning have influenced preventative antecedent control strategies used in the classroom. Classical conditioning set the groundwork for the present day behavior modification practices, such as antecedent control. Antecedent events and conditions are defined as those conditions occurring before the behavior. Pavlov's early experiments used manipulation of events or stimuli preceding behavior (i.e., a tone) to produce salivation in dogs much like teachers manipulate instruction and learning environments to produce positive behaviors or decrease maladaptive behaviors. Although he did not refer to the tone as an antecedent, Pavlov was one of the first scientists to demonstrate the relationship between environmental stimuli and behavioral responses. Pavlov systematically presented and withdrew stimuli to determine the antecedents that were eliciting responses, which is similar to the ways in which educational professionals conduct functional behavior assessments. Antecedent strategies are supported by empirical evidence to operate implicitly within classroom environments. Antecedent-based interventions are supported by research to be preventative, and to produce immediate reductions in problem behaviors.\n", "In contrast, classical conditioning involves involuntary behavior based on the pairing of stimuli with biologically significant events. For example, sight of sweets may cause a child to salivate, or the sound of a door slam may signal an angry parent, causing a child to tremble. 
Salivation and trembling are not operants; they are not reinforced by their consequences, and they are not voluntarily \"chosen\".\n", "The influence of classical conditioning can be seen in emotional responses such as phobia, disgust, nausea, anger, and sexual arousal. A familiar example is conditioned nausea, in which the CS is the sight or smell of a particular food that in the past has resulted in an unconditioned stomach upset. Similarly, when the CS is the sight of a dog and the US is the pain of being bitten, the result may be a conditioned fear of dogs. An example of conditioned emotional response is conditioned suppression.\n" ]
kobe was accused of rape — the victim was battered, he was charged with a felony, they settled after a civil case, he issued an apology — but no one seems to care. why?
I think people have generally accepted that Kobe was *likely* guilty of adultery but that it was consensual sex and not rape. Rationale: 1.) Authorities chose not to pursue criminal rape charges against him. 2.) He publicly admitted to adultery but not rape. 3.) The civil case against him for rape was settled out of court with neither side admitting guilt or innocence. 4.) (This is probably the biggest thing...) There was not a string of similar allegations from other women or similar shady behavior. I think most people are about 95% sure, but there's just no way to know for certain at this point, so people will always have doubts.
[ "The Kobe Bryant sexual assault case began in July 2003 when the news media reported that the sheriff's office in Eagle, Colorado, had arrested professional basketball player Kobe Bryant in connection with an investigation of a sexual assault complaint filed by a 19-year-old hotel employee. Bryant had checked into The Lodge and Spa at Cordillera, a hotel in Edwards, Colorado, on June 30 in advance of having surgery near there on July 2 under Richard Steadman. The woman accused Bryant of raping her in his hotel room on July 1, the night before the surgery. \n", "After testimony from two experts, and new arguments about the case, the lawyers voted with a unanimous \"Not guilty\" verdict for all defendants. Among other deciding factors was the defense's evidence that the five men accused of the rape had been involved in violence on the other side of Honolulu (the near collision with the Peeples's car) near the time of the alleged attack on Massie and would not have been able to reach Waikiki in time to have also raped Massie as she described.\n", "The \"Los Angeles Times\" described Wood's role in the Kobe Bryant case: \"The woman allegedly raped by Kobe Bryant has hired renowned attorney L. Lin Wood, a libel specialist who has represented such clients as the family of JonBenet Ramsey, former Rep. Gary Condit and Richard Jewell in lawsuits against the media.\"\n", "On July 9, 2010 the Seattle City Attorney's office charged Rep. Simpson with one count of assault stemming from an alleged incident of domestic violence at Seattle Children's Hospital on May 22, 2010. According to news sources, \"A social worker told police she saw Simpson \"barrel\" into the room, push the former wife and shut the door. He 'closed the blinds and barricaded himself inside using his body' and was yelling inside, according to the report. Once he came out, he left the property, witnesses said.\"\n", "The nature of the case led to accusations that coaches and school officials knew about the rape and failed to report it. For example, several texts entered into evidence during the trial implied that Steubenville head coach Reno Saccoccia was trying to cover for the players, which led to nationwide outrage after he received a new contract as the district's administrative services director. In response, shortly after the sentences were handed down Ohio Attorney General Mike DeWine announced he would empanel a special grand jury to determine whether other crimes were committed—specifically, whether coaches and other school officials failed to report the rape even though Ohio law makes them mandated reporters.\n", "In January 2015, a victim was hospitalized following allegations of sexual assault at a party at a SAE Iowa State University party. The incident is under investigation by police and the chapter suspended the member suspected of the assault.\n", "Court records also detailed acts of physical abuse in which the girl reported Uaiyue “punished her by binding her to roof hooks in the garage with hay bailing twine and left her tied up for several days as punishment.”\n" ]
is the us going to war with iran?
You think the people in charge of our government have a cogent plan they're going to stick to? The Joint Chiefs of Staff probably know the answer to this about as well as I do. But we do have some world-class warmongers making some high-level decisions right now.
[ "Muravchik argued in the Washington Post that the United States should attack Iran, stating: \"Does this mean that our only option is war? Yes, although an air campaign targeting Iran's nuclear infrastructure would entail less need for boots on the ground than the war Obama is waging against the Islamic State, which poses far smaller a threat than Iran does.\"\n", "as well as U.S. involvement in a potential war between Israel and Iran over Iran’s nuclear program. The experience of wars in Iraq and Afghanistan is likely related to this declining desire to use force.\" Still as the report notes later 64% say Iran's nuclear problem is a critical threat to the United States and that \"Americans are...willing to take measures to counter the nuclear threat in both Iran and North Korea, but are much more guarded, stopping short of supporting military strikes.\"\n", "A document found in Zarqawi's safe house indicates that the group was trying to provoke the U.S. to attack Iran in order to reinvigorate the insurgency in Iraq and to weaken American forces in Iraq. \"The question remains, how to draw the Americans into fighting a war against Iran? It is not known whether America is serious in its animosity towards Iran, because of the big support Iran is offering to America in its war in Afghanistan and in Iraq. Hence, it is necessary first to exaggerate the Iranian danger and to convince America, and the West in general, of the real danger coming from Iran...\" The document then outlines six ways to incite war between the two nations. Some experts questioned the authenticity of the document.\n", "Military tensions between Iran and the United States escalated in 2019 as President Donald Trump deployed substantial military assets to the Persian Gulf based on alleged intelligence suggesting a planned \"campaign\" by Iran and allies against US forces and interests in the Persian Gulf and Iraq. This followed a between the two countries during Donald Trump's presidency, which included the withdrawal of the United States from the nuclear deal, the imposition of new sanctions against Iran, and the American designation of the IRGC as a terrorist organization.\n", "According to 2012 polls, a majority of Americans supported United States or Israeli military action against Iran. More recent polls report that Americans \"back a newly brokered nuclear deal with Iran by a 2-to-1 margin and are very wary of the United States resorting to military action against Tehran even if the historic diplomatic effort falls\". Organised opposition to a possible future military attack against Iran by the United States (US) and/or Israel is known to have started during 2005–2006. Beginning in early 2005, journalists, activists and academics such as Seymour Hersh, Scott Ritter, Joseph Cirincione and Jorge E. Hirsch began publishing claims that United States' concerns over the allege threat posed by the possibility that Iran may have a nuclear weapons program might lead the US government to take military action against that country. These reports, and the concurrent escalation of tensions between Iran and some Western governments, prompted the formation of grassroots organisations, including Campaign Against Sanctions and Military Intervention in Iran in the US and the United Kingdom, to advocate against potential military strikes on Iran. 
Additionally, several individuals, grassroots organisations and international governmental organisations, including the Director-General of the International Atomic Energy Agency, Mohamed ElBaradei, a former United Nations weapons inspector in Iraq, Scott Ritter, Nobel Prize winners including Shirin Ebadi, Mairead Corrigan-Maguire and Betty Williams, Harold Pinter and Jody Williams, Campaign for Nuclear Disarmament, Code Pink, the Non-Aligned Movement of 118 states, and the Arab League, have publicly stated their opposition to such an attack.\n", "In March 2012, a Reuters/Ipsos poll revealed that a majority of Americans, 56%, would support military action against Iran, even if it led to increased gas prices, if there was evidence demonstrating that Tehran was building nuclear weapons. 39% said that they opposed a military strike, while 62% of Americans said that they'd support Israel striking Iran over its nuclear program.\n", "In March 2012, a Reuters/Ipsos poll revealed that a majority of Americans, 56%, would support military action against Iran, even if it led to increased gas prices, if there was evidence demonstrating that Tehran was building nuclear weapons. 39% said that they opposed a military strike, while 62% of Americans said that they'd support Israel striking Iran over its nuclear program.\n" ]
The FDA says that "residual quantities of formaldehyde may be found in some current vaccines" How is that even remotely possible when formaldehyde is a gas?
Gases can be dissolved in liquids. Formaldehyde exists in equilibrium with methylene glycol when dissolved in water. [The equilibrium constant at standard conditions is on the order of 10^3](_URL_0_), so most of it will become methylene glycol, but not all of it.
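To put a rough number on "most of it, but not all," here's a minimal back-of-the-envelope sketch in Python. The simple two-species hydration model and the exact value of K are assumptions for illustration, based only on the order of magnitude quoted above:

```python
# Back-of-the-envelope estimate of how much dissolved formaldehyde remains
# as free HCHO versus hydrating to methylene glycol, CH2(OH)2.
# Assumed model:  HCHO(aq) + H2O  <=>  CH2(OH)2(aq)
# with K = [CH2(OH)2] / [HCHO] on the order of 1e3 (water is in large excess,
# so its activity is folded into K). The exact K value is an assumption.

K = 1.0e3  # assumed hydration equilibrium constant (dimensionless ratio)

fraction_free = 1.0 / (1.0 + K)    # fraction left as free formaldehyde
fraction_glycol = K / (1.0 + K)    # fraction converted to methylene glycol

print(f"free formaldehyde: {fraction_free:.3%}")    # ~0.100%
print(f"methylene glycol:  {fraction_glycol:.3%}")  # ~99.900%
```

On that toy model, roughly one part in a thousand of the dissolved material stays as free formaldehyde, which is consistent with "residual quantities" being detectable even though the bulk has hydrated.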
[ "In June 2011, the twelfth edition of the National Toxicology Program (NTP) Report on Carcinogens (RoC) changed the listing status of formaldehyde from \"reasonably anticipated to be a human carcinogen\" to \"known to be a human carcinogen.\" Concurrently, a National Academy of Sciences (NAS) committee was convened and issued an independent review of the draft United States Environmental Protection Agency IRIS assessment of formaldehyde, providing a comprehensive health effects assessment and quantitative estimates of human risks of adverse effects.\n", "The Food and Drug Administration (FDA) announced on October 8, 2010 that it \"was working with state and local organizations, as well as OSHA, to determine whether the products or ingredients would be likely to cause health problems under the intended conditions of use. The composition of the products and the labeling, including use instructions and any warning statements, will be factors in this determination. One safety issue we’ll be evaluating is whether formaldehyde may be released into the air after the product is applied to the hair and heated.\"\n", "FDA spokesman Mike Herndon says that, \"At this point, FDA sees no compelling need to analyze any more samples for acetaminophen.\" A third, independent lab also did not detect acetaminophen. It is not clear whether the FDA checked samples of the same brands of food in which acetaminophen was allegedly found because the initial laboratory could not initiate contact with the FDA or publicly identify the manufacturer because of client confidentiality. Only if the FDA contacted the lab could information be shared, and the FDA did not do this until June 12.\n", "Some manufacturers of products containing formaldehyde and methylene glycol have complained that the method of testing for formaldehyde—which does not distinguish between formaldehyde and methylene glycol—is not a reliable indicator of the toxicity of the product.\n", "There has been a reluctance for modern drug discovery programs to consider covalent inhibitors due to toxicity concerns. An important contributor has been the drug toxicities of several high-profile drugs believed to be caused by metabolic activation of reversible drugs For example, high dose acetaminophen can lead to the formation of the reactive metabolite N-acetyl-p-benzoquinone imine. Also, covalent inhibitors such as beta lactam antibiotics which contain weak electrophiles can lead to idiosyncratic toxicities (IDT) in some patients. It has been noted that many approved covalent inhibitors have been used safely for decades with no observed idiosyncratic toxicity. Also, that IDTs are not limited to proteins with a covalent mechanism of action. A recent analysis has noted that the risk of idiosyncratic toxicities may be mitigated through lower doses of administered drug. Doses of less than 10 mg per day rarely lead to IDT irrespective of the drug mechanism.\n", "An aqueous solution of formaldehyde can be useful as a disinfectant as it kills most bacteria and fungi (including their spores). It is used to produce killed vaccines. Formaldehyde releasers are used as biocides in personal care products such as cosmetics. Although present at levels not normally considered harmful, they are known to cause allergic contact dermatitis in certain sensitised individuals.\n", "2015 revisions added oxidizing materials to the existing 'Flammables' classification. 
The other major change allowed and encouraged labels to incorporate the GHS signal word, hazard pictograms, and hazard statements. This addition helped identify additional dangers when dealing with materials that fit into multiple categories, like Hydrogen sulfide, which is both flammable and toxic.\n" ]
How do bacteria get their energy?
[Metabolism](_URL_0_), like every other living creature. The two common sources of energy are light and the chemical bonds in food matter. Different bacteria have evolved to capitalize on both sources. The variations are fascinating. Wood, for example, is mostly cellulose, a polymer of sugar with lots of energy locked in its bonds, but humans can't unlock it, so we call it "fiber." Termites can't digest it either, but they thrive on it because they harbor bacteria in their guts that CAN digest wood fiber, and the bacteria's waste products feed the termite.
[ "Some species of bacteria obtain their energy by oxidizing various fuels while reducing arsenate to arsenite. Under oxidative environmental conditions some bacteria oxidize arsenite to arsenate as fuel for their metabolism. The enzymes involved are known as arsenate reductases (Arr).\n", "In respiring bacteria under physiological conditions, ATP synthase, in general, runs in the opposite direction, creating ATP while using the proton motive force created by the electron transport chain as a source of energy. The overall process of creating energy in this fashion is termed oxidative phosphorylation.\n", "The bacterial symbionts are chemosynthetic, gaining energy by oxidizing sulfide from the environment, and producing biomass by fixing carbon dioxide through the Calvin-Benson-Bassham cycle. The bacteria benefit from the symbiosis because the host animal can migrate between sulfide- and oxygen-rich regions of the sediment habitat, and the bacteria require both these chemical substances to produce energy. The hosts are believed to consume the bacteria as a food source, based on evidence from their stable carbon isotope ratios.\n", "Bacteria are responsible for the process of nitrogen fixation, which is the conversion of atmospheric nitrogen into nitrogen-containing compounds (such as ammonia) that can be used by plants. Autotrophic bacteria derive their energy by making their own food through oxidation, like the \"Nitrobacters\" species, rather than feeding on plants or other organisms. These bacteria are responsible for nitrogen fixation. The amount of autotrophic bacteria is small compared to heterotrophic bacteria (the opposite of autotrophic bacteria, heterotrophic bacteria acquire energy by consuming plants or other microorganisms), but are very important because almost every plant and organism requires nitrogen in some way.\n", "Microbial fuel cells can create energy when bacteria breaks down organic material, this process a charge that is transferred to the anode. Taking something like human saliva, which has lots of organic material, can be used to power a micro-sized microbial fuel cell. This can produce a small amount of energy to run on-chip applications. This application can be used in things like biomedical devices and cell phones.\n", "Bacteria either derive energy from light using photosynthesis (called phototrophy), or by breaking down chemical compounds using oxidation (called chemotrophy). Chemotrophs use chemical compounds as a source of energy by transferring electrons from a given electron donor to a terminal electron acceptor in a redox reaction. This reaction releases energy that can be used to drive metabolism. Chemotrophs are further divided by the types of compounds they use to transfer electrons. Bacteria that use inorganic compounds such as hydrogren, carbon monoxide, or ammonia as sources of electrons are called lithotrophs, while those that use organic compounds are called organotrophs. The compounds used to receive electrons are also used to classify bacteria: aerobic organisms use oxygen as the terminal electron acceptor, while anaerobic organisms use other compounds such as nitrate, sulfate, or carbon dioxide.\n", "Many cellulolytic bacteria use cellodextrins as their primary source of energy. The energy is obtained through the phosphorolytic cleavage of glycosidic bonds as well as the anaerobic glycolysis of the glucose monomers. Transport of cellodextrins across the cell membrane is usually an active process, requiring ATP.\n" ]
Is it possible that scientific constants exist because of the way we define units?
Dimensionful constants, like the speed of light or Planck's constant, are indeed dependent on the units we choose, and we can set them to 1 in some appropriate unit system (this is commonly done, for example, in quantum field theory or in relativity). But dimensionless constants, like the fine-structure constant, have the same value no matter which units we choose, so these can't be eliminated by choosing a certain unit system.
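As a concrete illustration, here's a minimal Python sketch that computes the fine-structure constant from hand-typed, rounded CODATA values (no particular constants library is assumed). The point is only that all the units cancel and a pure number comes out:

```python
import math

# Rounded CODATA values in SI units, typed in by hand for illustration.
e    = 1.602176634e-19   # elementary charge, coulombs
eps0 = 8.8541878128e-12  # vacuum permittivity, farads per metre
hbar = 1.054571817e-34   # reduced Planck constant, joule-seconds
c    = 2.99792458e8      # speed of light, metres per second

# Fine-structure constant: every unit cancels, leaving a pure number.
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)

print(alpha)      # ~0.0072973525...
print(1 / alpha)  # ~137.036
```

If you switched to Gaussian or natural units, the numerical values of e, ε₀, ħ and c would all change, but this combination would still come out near 1/137.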
[ "While there are several other physical constants, these three are given special consideration, because they can be used to define all Planck units and thus all physical quantities. The three constants are therefore used sometimes as a framework for philosophical study and as one of pedagogical patterns.\n", "Using dimensional analysis, it is possible to combine dimensional universal physical constants to define a system of units of measurement that has no reference to any human construct. Depending on the choice and arrangement of constants used, the resulting natural units may have useful physical meaning. For example, Planck units, shown in the table below, use \"c\", \"G\", \"ħ\", \"ε\", and \"k\" in such a manner to derive units relevant to unified theories such as quantum gravity.\n", "The question as to which constants are \"fundamental\" is neither straightforward nor meaningless, but a question of interpretation of the physical theory regarded as fundamental; as pointed out by , not all physical constants are of the same importance, with some having a deeper role than others.\n", "By definition, fundamental physical constants are subject to measurement, so that their being constant (independent on both the time and position of the performance of the measurement) is necessarily an experimental result and subject to verification.\n", "In the physical sciences, most physical constants such as the universal gravitational constant, and physical variables, such as position, mass, speed, and electric charge, are modeled using real numbers. In fact, the fundamental physical theories such as classical mechanics, electromagnetism, quantum mechanics, general relativity and the standard model are described using mathematical structures, typically smooth manifolds or Hilbert spaces, that are based on the real numbers, although actual measurements of physical quantities are of finite accuracy and precision.\n", "Certain universal dimensioned physical constants, such as the speed of light in a vacuum, the universal gravitational constant, Planck's constant, Coulomb's constant, and Boltzmann's constant can be normalized to 1 if appropriate units for time, length, mass, charge, and temperature are chosen. The resulting system of units is known as the natural units, specifically regarding these five constants, Planck units. However, not all physical constants can be normalized in this fashion. For example, the values of the following constants are independent of the system of units, cannot be defined, and can only be determined experimentally:\n", "Table 2 clearly defines Planck units in terms of the fundamental constants. Yet relative to other units of measurement such as SI, the values of the Planck units, other than the Planck charge, are only known \"approximately.\" This is due to uncertainty in the value of the gravitational constant \"G\" as measured relative to SI unit definitions.\n" ]
3 good questions about the derivatives of position. 1. can we perceive any besides velocity and acceleration?....
> you go from velocity to acceleration by squaring the t in d/t Well, given a function v(t), you differentiate with respect to time to get a(t). Ok, a perfectly valid function is v(t)=d/t with some constant d, which would mean a velocity that gradually approaches zero, but I doubt this is what you mean by d/t. > Are humans aware of any of the derivatives beyond just velocity and acceleration? What do they feel like? The derivative of acceleration is called [jerk](_URL_0_). So this, multiplied by the mass, is the rate of change of applied force. It's used in various engineering applications, as detailed in the article. > In partial response to my own first question, I imagine that it would get very hard to distinguish between each derivative. So if there are some that we're not aware of, are THESE the "tiny dimensions" that I hear physicists talking about? Not at all. The "tiny dimensions" are something else completely. With regards to the first part of your question, it's true for certain classes of functions that if you take loads and loads of derivatives you'll end up with zero in the end, but this is not true for all functions. For example, you can differentiate e^x as many times as you like and still get e^x. > Are there any other forces that are equivalent to one of the derivatives, such as with gravity & acceleration? Yep, jerk.
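To make the chain of derivatives concrete, here's a minimal SymPy sketch. The position function x(t) below is an arbitrary toy example, not anything physical:

```python
import sympy as sp

t = sp.symbols('t')

# Toy position function; any smooth x(t) would do, this one is arbitrary.
x = 3 * t**3 + sp.sin(t)

velocity     = sp.diff(x, t)      # 1st derivative: 9*t**2 + cos(t)
acceleration = sp.diff(x, t, 2)   # 2nd derivative: 18*t - sin(t)
jerk         = sp.diff(x, t, 3)   # 3rd derivative: 18 - cos(t)
snap         = sp.diff(x, t, 4)   # 4th derivative (jounce): sin(t)

print(velocity, acceleration, jerk, snap, sep='\n')

# The e^x point: differentiating exp(t) never gets you to zero.
print(sp.diff(sp.exp(t), t, 10))  # still exp(t)
```

The fourth derivative (snap/jounce, as the passages below call it) works the same way, and the last line demonstrates the e^x point: differentiating it never reaches zero.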
[ "If represents the position of an object at time , then the higher-order derivatives of have specific interpretations in physics. The first derivative of is the object's velocity. The second derivative of is the acceleration. The third derivative of is the jerk. And finally, the fourth derivative of is the jounce.\n", "BULLET::::2. An analysis of point-motion leads us to the conceptions of velocity and acceleration. Velocity is a proper measure of the manner in which position is instantaeously changing. Acceleration is a proper measure of how velocity itself is an increased change. It is found that a motion is fully determined. Theoretically, a complete description of the path and position at each instant of time may be deduced when the velocity in any one position and the acceleration for all positions is given.\n", "Notice that velocity always points in the direction of motion, in other words for a curved path it is the tangent vector. Loosely speaking, first order derivatives are related to tangents of curves. Still for curved paths, the acceleration is directed towards the center of curvature of the path. Again, loosely speaking, second order derivatives are related to curvature.\n", "For a position vector r that is a function of time \"t\", the time derivatives can be computed with respect to \"t\". These derivatives have common utility in the study of kinematics, control theory, engineering and other sciences.\n", "Time derivatives are a key concept in physics. For example, for a changing position formula_8, its time derivative formula_3 is its velocity, and its second derivative with respect to time, formula_5, is its acceleration. Even higher derivatives are sometimes also used: the third derivative of position with respect to time is known as the jerk. See motion graphs and derivatives.\n", "BULLET::::- Equate first and second derivatives to 0 to find the stationary points and inflection points respectively. If the equation of the curve cannot be solved explicitly for \"x\" or \"y\", finding these derivatives requires implicit differentiation.\n", "is the coordinate velocity, the derivative of position r with respect to coordinate time \"t\". (Throughout this article, overdots are with respect to coordinate time, not proper time). It is possible to transform the position coordinates to generalized coordinates exactly as in non-relativistic mechanics, r = r(q, \"t\"). Taking the total differential of r obtains the transformation of velocity v to the generalized coordinates, generalized velocities, and coordinate time\n" ]
what does cold/flu medicine do if you don't have a cold or flu?
One of the most commonly used ingredients is **guaifenesin**, an expectorant that thins and loosens mucus. Many opera singers use it daily to clear their sinuses. If you take too much you'll throw up, but it's not deadly. Your medicine may also use the decongestant **pseudoephedrine**. That will make your heart beat a little faster, almost like caffeine. A lot of cough syrup contains **acetaminophen/paracetamol** (different names, same drug). It's Tylenol, a mild pain reliever, and it will destroy your liver if you take too much or drink alcohol with it. **Dextromethorphan** is the cough suppressant, and at high doses that stuff will *fuck* you up. But not nearly as badly as... **Diphenhydramine**. This is what Benadryl is, and in large doses it will wreck you for a day. You will lose the ability to form coherent sentences, talk to people that aren't there, and just generally be completely delirious (the drug is actually classed as a deliriant). Source: I take cough syrup recreationally (unadvisable), and knowing what you're putting in your body is a necessity.
[ "There is little evidence to support that Cold-fx is effective in the common cold. All trials have been done by the manufacturer and there has been poor data reporting. According to Health Canada's Natural Health Product Directorate records, the company claims that it may \"help reduce the frequency, severity and duration of cold and flu symptoms by boosting the immune system\".\n", "A number of methods have been recommended to help ease symptoms, including adequate liquid intake and rest. Over-the-counter pain medications such as acetaminophen and ibuprofen do not kill the virus; however, they may be useful to reduce symptoms. Aspirin and other salicylate products should not be used by people under 16 with any flu-type symptoms because of the risk of developing Reye's Syndrome.\n", "Current evidence does not support its use for the prevention of the common cold. There is, however, some evidence that regular use may shorten the length of colds. It is unclear whether supplementation affects the risk of cancer, cardiovascular disease, or dementia. It may be taken by mouth or by injection.\n", "Scientists have argued that the product has not been tested for its ability to treat a cold after an individual has been infected. No studies have yet been performed to assess the possible long term side effects of taking the pills every day during the cold and flu season.\n", "There is no vaccine for the common cold. The primary methods of prevention are hand washing; not touching the eyes, nose or mouth with unwashed hands; and staying away from sick people. Some evidence supports the use of face masks. There is also no cure, but the symptoms can be treated. Zinc may reduce the duration and severity of symptoms if started shortly after the onset of symptoms. Nonsteroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen may help with pain. Antibiotics, however, should not be used and there is no good evidence for cough medicines.\n", "People with the flu are advised to get plenty of rest, drink plenty of liquids, avoid using alcohol and tobacco and, if necessary, take medications such as acetaminophen (paracetamol) to relieve the fever and muscle aches associated with the flu. In contrast, there is no enough evidence to support corticosteroids as add on therapy for influenza. It is advised to avoid close contact with others to prevent spread of infection. Children and teenagers with flu symptoms (particularly fever) should avoid taking aspirin during an influenza infection (especially influenza type B), because doing so can lead to Reye's syndrome, a rare but potentially fatal disease of the liver. Since influenza is caused by a virus, antibiotics have no effect on the infection; unless prescribed for secondary infections such as bacterial pneumonia. Antiviral medication may be effective, if given early (within 48 hours to first symptoms), but some strains of influenza can show resistance to the standard antiviral drugs and there is concern about the quality of the research. High-risk individuals such as young children, pregnant women, the elderly, and those with compromised immune systems should visit the doctor for antiviral drugs. Those with the emergency warning signs should visit the emergency room at once.\n", "Antibiotics have no effect against viral infections or against the viruses that cause the common cold. Due to their side effects, antibiotics cause overall harm but are still frequently prescribed. 
Some of the reasons that antibiotics are so commonly prescribed include people's expectations for them, physicians' desire to help, and the difficulty in excluding complications that may be amenable to antibiotics. There are no effective antiviral drugs for the common cold even though some preliminary research has shown benefits.\n" ]
how can something like a cellphone or computer possibly have any effect on a plane during takeoff?
Cell phones transmit in bands around 800 megahertz, and those transmissions can interfere with instrument landing systems as well as other navigation systems.
[ "\"The aircraft manufacturer's avionics representative advised that there was no likelihood that the operation of a computer, other electronic device or a cell phone would have affected the aircraft's flight instruments.\"\n", "Initial analysis from David Learmount, a \"Flight International\" editor, was that \"The aircraft had either a total or severe power loss and this occurred very late in the final approach because the pilot did not have time to tell air traffic control or passengers.\" Learmount went on to say that to land in just , the aircraft must have been near stalling when it touched down. The captain also reported the aircraft's stall warning system had sounded.\n", "With the loss of all power to the lights and flight altitude instruments, flying at night in instrument conditions, the pilots quickly became spatially disorientated and unable to know which inputs to the flight controls were necessary to keep the plane flying normally. Consequently, the crew lost complete control of the aircraft and crashed into the ocean in a steep nose-down angle, killing everyone on board. The flight-control system would not have been affected by the loss of electrical power, since it relied on hydraulic and mechanical lines, so it was concluded that loss of control was the result of the crew's inability to see around the cockpit. It was theorized that the backup electrical system had not been activated because the crew could not locate the switch for it in darkness.\n", "The most dangerous pilot-induced oscillations can occur during landing. Too much up elevator during the flare can result in the plane getting dangerously slow and threatening to stall. A natural reaction to this is to push the nose down harder than one pulled it up, but then the pilot ends up staring at the ground. An even larger amount of up elevator starts the cycle over again.\n", "Electromagnetic interference to aircraft systems is a common argument offered for banning mobile phones (and other passenger electronic devices) on planes. Theoretically, active radio transmitters such as mobile phones, walkie–talkies, portable computers or gaming devices may interfere with the aircraft. Non-transmitting electronic devices also emit electromagnetic radiation, although typically at a lower power level, and could also theoretically affect the aircraft electronics. Collectively, any of these may be referred to as portable electronic devices (PEDs).\n", "BULLET::::- Receiving aircraft typically have the probe in the front which present problems such as: sensitive avionics equipment (pitot static and angle of attack probes, etc.), can easily be damaged by the drogue, and FOD, including fuel or probe/drogue parts can be ingested into the plane's engines.\n", "The investigation did look at the possibility of electromagnetic interference and tested a similar aircraft using mobile phones. It concluded that there were \"no indications that aircraft systems were negatively affected by electromagnetic interference (EMI)\".\n" ]
How did the four Sunni Islamic school of thoughts settle to the current geography? Could this change in the future?
Mod note to OP and potential respondents: just a reminder that this sub *does not permit speculation*, so discussion here will have to exclude the last question (re: possible futures).
[ "After its beginnings in the 8th century based on Hellenistic geography, Islamic geography was patronized by the Abbasid caliphs of Baghdad. Various Islamic scholars contributed to its development, and the most notable include Al-Khwārizmī, Abū Zayd al-Balkhī (founder of the \"Balkhi school\"), and Abu Rayhan Biruni. \n", "The genesis of Shi’a Islam is rooted in the idea that the charismatic and politico-religious authority possessed by the prophet Muhammad was transferred to his biological descendants after his death in 632 CE. The resulting claim to the rightful leadership of the Muslim community (the ummah) was thus supposed to pass, in the form of the Imamate, to the descendants of Muhammad’s daughter Fatima (606–632) and her husband Ali b. Abi Talib (600–661). Yet the political reality in the decades after the Prophet’s death diverged from that vision. After the death of the prophet Mohammad in the year 632, a disagreement arose between the followers of Mohammad as to who should be appointed as the prophet's successor. The Muslims were split between two groups, those who supported Abu Bakr, a companion of the prophet, and those who supported Ali, the prophet's cousin and son-in law. Ultimately, Abu Bakr became the first caliph, and his followers are today known as Sunni Muslims. The followers of Ali are known as Shia Muslims. Abu Bakr served for two years, and appointed Umar as his successor in 634. Umar served as caliph for ten years, during which he was responsible for the rapid spread of Islam through military and territorial gains. Upon his death in 644, a council of Islamic leaders elected a new Caliph, Uthman ibn Affan, of the Umayyad family. However, in 656 supporters of Ali, who believed that a descendant of the prophet should lead the Muslim community, assassinated Uthman and installed Ali as the fourth caliph. Ali's reign was marred by numerous violent struggles between his supporters and the supporters of Muawiyah I, a relative of Uthman and the governor of Syria. When Ali was murdered in the year 661 by a supporter of Muawiyah, he became the first martyr of the Shia faith. Ali had two sons, Hasan ibn Ali and Husayn, who the Shia believe continued their father's struggle in different ways. Hasan renounced his right to the caliphate in a compromise with Mu'awiya, which the Shi'a view as Hasan's rational recognition of his own constraints at the time, but Husayn sought to restore the caliphate to the family of Ali by military means. When Muawiyah died in the 680 and his son Yazid I assumed the caliphate, Husayn renewed his efforts to regain the caliphate. On the advice of supporters in Kufa, a supposed stronghold of Shia support, Husayn and a small numbers of his family and supporters traveled to Kufa, camping out in nearby Karbala. However, the governor of Kufa was aware of Husayn's presence and sent close to 4000 troops to Karbala. After several days of failed negotiations, as Husayn refused to recognize Yazid as caliph, the governor's soldiers massacred Husayn and 72 of his men. This massacre, which occurred on the 10th day of the month of Muharram, elevated the martyrdom of Husayn to almost mythical levels in Shia belief.\n", "Much of this learning and development can be linked to geography. Even prior to Islam's presence, the city of Mecca had served as a centre of trade in Arabia, and the Islamic prophet Muhammad himself was a merchant. 
With the new Islamic tradition of the \"Hajj\", the pilgrimage to Mecca, the city became even more a centre for exchanging goods and ideas. The influence held by Muslim merchants over African-Arabian and Arabian-Asian trade routes was tremendous. As a result, Islamic civilization grew and expanded on the basis of its merchant economy, in contrast to the Europeans, Indians, and Chinese, who based their societies on an agricultural landholding nobility. Merchants brought goods and their Islamic faith to China, India, Southeast Asia, and the kingdoms of western Africa, and returned with new discoveries and inventions.\n", "Since the Al Saud succeeded in annexing Mecca in 1926 and the discovery of oil, Hanbali school of theology has benefited from the sponsorship of the Saudi state. Theology students from all over the world are educated in Saudi Arabia following this school of theology and Saudi-funded Dawah succeeded in attracting new followers all over the world. Since the beginning of the 20th-century, the school has therefore gained more acceptance and diffusion in the Islamic world.\n", "There is concern that Islamic fundamentalism is increasing as young citizens return to the country following Islamic theological studies abroad and seek to impose a stricter adherence to Islamic religious law on their family members and associates; in response, the Union Government has established a university to give young citizens the option of pursuing university studies in the country.\n", "Three years after his arrival in the city, Ibn Taymiyyah became involved in efforts to deal with the increasing Shia influence amongst Sunni Muslims. An agreement had been made in 1316 between the amir of Mecca and the Ilkhanate ruler Öljaitü, brother of Ghazan Khan, to allow a favourable policy towards Shi'ism in Mecca, a city that houses the holiest site in Islam, the Kaaba. Around the same time the Shia theologian Al-Hilli, who had played a crucial role in the Mongol rulers decision to make Shi'ism the state religion of Persia, wrote the book, \"Minhaj al-Karamah\"\" (\"The Way of Charisma'), which dealt with the Shia doctrine of the Imamate and also served as a refutation of the Sunni doctrine of the caliphate. To counter this Ibn Taymiyyah wrote his famous book, \"Minhaj as-Sunnah an-Nabawiyyah\", as a refutation of Al-Hilli's work.\n", "By the 11th century, the Muslim world had developed four major Sunni schools of Islamic jurisprudence (or \"fiqh\"), each with its own interpretations of Sharia: Hanbali, Maliki, Shafi and Hanafi. In Arabia, a preference for the Hanbali school was advocated by the Wahhabi movement, founded in the 18th century. Wahhabism, a strict form of Sunni Islam, was supported by the Saudi royal family (the Al Saud) and is now dominant in Saudi Arabia. From the 18th century, the Hanbali school therefore predominated in Nejd and central Arabia, the heartland of Wahhabi Islam. In the more cosmopolitan Hejaz, in the west of the peninsula, both the Hanafi and Shafi schools were followed.\n" ]
Were there Prisoner of War camps in the American Civil War? If so, what were they like? How were the prisoners treated?
It was bad in the South as the war dragged on. The North was very effective at limiting the South's ability to supply their troops and civilians. PoWs are pretty low on the list when you're rationing. Andersonville, by far the most notorious Civil War prison, housed nearly 33,000 men at its peak—one of the largest "cities" of the Confederacy. Inmates crowded into 26.5 acres (11 hectares) of muddy land, constructing "shebangs," or primitive shelters, from whatever material they could find. Lacking sewer or sanitation facilities, camp inmates turned "Stockade Creek" into a massive, disease-ridden latrine. Summer rainstorms would flood the open sewer, spreading filth. Visitors approaching the camp for the first time often retched from the stench. The prison's oppressive conditions claimed 13,000 lives by the war's end. That's not to say that the North was much better: Prisons often engendered conditions more horrible than those on the battlefield. The Union's Fort Delaware was dubbed "The Fort Delaware Death Pen," while Elmira prison in New York saw nearly a 25 percent mortality rate. The South's infamous Camp Sumter, or Andersonville prison, claimed the lives of 29 percent of its inmates. _URL_0_ It's a good article.
[ "In the early years of the Civil War, the north barracks were used to hold Confederate officers taken as prisoners of war pending transfer to other Union prisons such as Camp Johnson in Ohio, Fort Delaware or Fort Warren in Boston Harbor. Fort Columbus and Castle Williams also served as a temporary prisoner of war camp and confinement hospital for Confederate prisoners during the war. Major General William H. C. Whiting (CSA) died of dysentery in February 1865 in the post hospital shortly after his surrender at the Battle of Fort Fisher, North Carolina. He was the highest ranking Confederate officer to die as a prisoner of war.\n", "The first Confederate prisoners of war arrived at Camp Douglas on February 20, 1862 to find a camp but no real prison. They were housed for their first few days at the camp in the White Oak Square section along with newly–trained Union soldiers about to depart for service at the front. The army sent sick prisoners to the camp, where there were no medical facilities at the time, even though army staff were specifically warned not to do so. On February 23, 1862, the Union troops vacated the camp except for an inadequate force of about 40 officers and 469 enlisted men left to guard the prisoners. About 77 escapes were recorded at Camp Douglas by June 1862.\n", "In 1861, Camp Rathbun, near the town of Elmira, was established as a training camp at the beginning of the Civil War. As the Union troops who trained there were sent to their respective assignments, the camp emptied and in 1864 it was turned into the Elmira Prison prisoner-of-war camp. The facilities were not adequate to house the thousands of Confederate prisoners, and many succumbed to exposure, malnutrition, and smallpox and were subsequently interred at the cemetery.\n", "Lacking a means for dealing with large numbers of captured troops early in the American Civil War, the Union and Confederate governments relied on the traditional European system of parole and exchange of prisoners. While awaiting exchange, prisoners were confined to permanent camps.\n", "During and after the American Civil War, concentration camps located in Natchez, Mississippi were used to corral freed slaves. \"As slaves were being emancipated from the plantations, their route to freedom usually took them in the vicinity of the Union army forces. Unhappy with the slaves being freed, the army began recapturing the slaves and forced the men back into hard labor camps. The most notorious of the several concentration camps that were established was located in Natchez, MS.\"\n", "During the war, Fort McClellan became the temporary home for many captured enemy soldiers; a 3,000-capacity Prison Internment Camp for prisoners of war (POWs) was built in 1943. The camp also served to receive prisoners who would go on to three other POW camps in Alabama. At the end of the war in Europe, the camp at Fort McClellan held 2,546 men. A cemetery on the reservation marks 26 German and 3 Italian prisoners of war who died while in captivity.\n", "At the time of the Civil War, the concept of a prisoner of war camp was still new. It was as late as 1863 when President Lincoln demanded a code of conduct be instituted to guarantee prisoners of war the entitlement to food and medical treatment and to protect them from enslavement, torture, and murder. 
Andersonville did not provide its occupants with these guarantees; therefore, the prisoners at Andersonville, without any sort of law enforcement or protections, functioned more closely to a primitive society than a civil one. As such, survival often depended on the strength of a prisoner's social network within the prison. A prisoner with friends inside Andersonville was more likely to survive than a lonesome prisoner. Social networks provided prisoners with food, clothes, shelter, moral support, trading opportunities, and protection against other prisoners. One study found that a prisoner having a strong social network within Andersonville \"had a statistically significant positive effect on survival probabilities, and that the closer the ties between friends as measured by such identifiers as ethnicity, kinship, and the same hometown, the bigger the effect.\"\n" ]
Are there any theories which try to explain where the very first matter, mass, or "stuff" came from?
Current cosmological/physics theories can trace the universe back rather accurately to about 10^(-12) seconds or so after the big bang. All of our theories before that are speculative to some degree or another, so we can't say anything definitive about the universe at those points (let alone make sense of "before the big bang").
[ "All these theories imply that matter is a continuous substance. Two Greek philosophers, Leucippus (first half of the 5th century BC) and Democritus of Abdera (lived about 410 BC) came up with the notion that there were two real entities: atoms, which were small indivisible particles of matter, and the void, which was the empty space in which matter was located. Although all the explanations from Thales to Democritus involve matter, what is more important is the fact that these rival explanations suggest an ongoing process of debate in which alternate theories were put forth and criticized.\n", "However, the Newtonian picture was not the whole story. In the 19th century, the term \"matter\" was actively discussed by a host of scientists and philosophers, and a brief outline can be found in Levere. A textbook discussion from 1870 suggests matter is what is made up of atoms:Three divisions of matter are recognized in science: masses, molecules and atoms. A Mass of matter is any portion of matter appreciable by the senses. A Molecule is the smallest particle of matter into which a body can be divided without losing its identity. An Atom is a still smaller particle produced by division of a molecule. \n", "The idea that all matter is composed of elementary particles dates to at least the 6th century BC. The Jains in ancient India were the earliest to advocate the particular nature of material objects between 9th and 5th century BCE. According to Jain leaders like Parshvanatha and Mahavira, the ajiva (non living part of universe) consists of matter or \"pudgala\", of definite or indefinite shape which is made up tiny uncountable and invisible particles called \"permanu\". \"Permanu\" occupies space-point and each \"permanu\" has definite colour, smell, taste and texture. Infinite varieties of \"permanu\" unite and form \"pudgala\". The philosophical doctrine of atomism and the nature of elementary particles were also studied by ancient Greek philosophers such as Leucippus, Democritus, and Epicurus; ancient Indian philosophers such as Kanada, Dignāga, and Dharmakirti; Muslim scientists such as Ibn al-Haytham, Ibn Sina, and Mohammad al-Ghazali; and in early modern Europe by physicists such as Pierre Gassendi, Robert Boyle, and Isaac Newton. The particle theory of light was also proposed by Ibn al-Haytham, Ibn Sina, Gassendi, and Newton.\n", "For much of the history of the natural sciences people have contemplated the exact nature of matter. The idea that matter was built of discrete building blocks, the so-called particulate theory of matter, was first put forward in ancient India by Jains (~900–500 BC), followed by the Greek philosophers Leucippus (~490 BC) and Democritus (~470–380 BC).\n", "One challenge to the traditional concept of matter as tangible \"stuff\" came with the rise of field physics in the 19th century. Relativity shows that matter and energy (including the spatially distributed energy of fields) are interchangeable. This enables the ontological view that energy is prima materia and matter is one of its forms. On the other hand, the Standard Model of particle physics uses quantum field theory to describe all interactions. On this view it could be said that fields are prima materia and the energy is a property of the field.\n", "Bernard argued that matter, although caused by God, existed from all eternity. In the beginning, before its union with the Ideas, it was in a chaotic condition. 
It was by means of the native forms, which penetrate matter, that distinction, order, regularity, and number were introduced into the universe.\n", "BULLET::::- Search for a new state of matter called the quark–gluon plasma, which is believed to be the state of matter existing in the universe shortly after the Big Bang. PHENIX data suggest that a new form of matter has indeed been discovered, and that it behaves like a perfect fluid. PHENIX scientists are now working to study its properties.\n" ]
what is Stockholm syndrome and can children have it due to abusive parents?
Stockholm syndrome is when a captive person develops an attachment to their captor. For example, it has been debated whether dogs only love their owners because their owners feed and shelter them. I think children can develop Stockholm syndrome toward abusive parents; depending on their age, they probably do feel like prisoners.
[ "Stockholm syndrome is a \"contested illness\" due to doubt about the legitimacy of the condition. It has also come to describe the reactions of some abuse victims beyond the context of kidnappings or hostage-taking. Actions and attitudes similar to those suffering from Stockholm syndrome have also been found in victims of sexual abuse, human trafficking, terror, and political and religious oppression.\n", "Victims of the formal definition of Stockholm syndrome develop \"positive feelings toward their captors and sympathy for their causes and goals, and negative feelings toward the police or authorities\". These symptoms often follow escaped victims back into their previously ordinary lives.\n", "A research group led by Namnyak has found that although there is a lot of media coverage of Stockholm syndrome, there has not been a lot of professional research into the phenomenon. What little research has been done is often contradictory and does not always agree on what Stockholm syndrome is. The term has grown beyond kidnappings to all kinds of abuse. There is no clear definition of symptoms to diagnose the syndrome.\n", "Psychiatrist Frank Ochberg, who was responsible for defining the term \"Stockholm syndrome\", said he does not think Belle exhibits the trauma symptoms of prisoners suffering from the syndrome because she does not go through a period of feeling that she is going to die. Some therapists, while acknowledging that the pairing's relationship does not meet the clinical definition of Stockholm syndrome, argue that the relationship depicted is dysfunctional and abusive and does not model healthy romantic relationships for young viewers. Following this viewpoint, Constance Grady of Vox wrote that Jeanne-Marie Leprince de Beaumont's \"Beauty and the Beast\" was a fairy tale originally written to prepare young girls in 18th-century France for arranged marriages, and that the power disparity is amplified in the Disney version. Additionally, Anna Menta of \"Elite Daily\" argued that the Beast does not apologize to Belle for imprisoning, hurting, or manipulating her, and that his treatment of Belle is not painted as wrong.\n", "Stockholm syndrome is a condition which causes hostages to develop a psychological alliance with their captors during captivity. These alliances result from a bond formed between captor and captives during intimate time together, but they are generally considered irrational in light of the danger or risk endured by the victims. The FBI's Hostage Barricade Database System and Law Enforcement Bulletin indicate that roughly 8% of victims show evidence of Stockholm syndrome.\n", "The term \"stockholm syndrome\" refers to a psychological phenomenon in which hostages express empathy and sympathy and have positive feelings toward their captors, sometimes to the point of defending them. These feelings are generally considered irrational in light of the danger or risk endured by the victims, who essentially mistake a lack of abuse from their captors for an act of kindness.\n", "Typically, Stockholm syndrome develops in captives when they engage in \"face-to-face contact\" with their captors, and when captors make captives doubt the likelihood of their survival by terrorizing them into \"helpless, powerless, and submissive\" states. This enables captors to appear merciful when they perform acts of kindness or fail to \"beat, abuse, or rape\" the victims. 
Ideas like \"dominance hierarchies and submission strategies\" assist in devising explanations for the illogical reasoning behind the symptoms of those suffering from Stockholm syndrome as a result of any oppressive relationship. Partial activation of the capture-bonding psychological trait may lie behind battered woman syndrome, military basic training, fraternity hazing, and sex practices such as sadism/masochism or bondage/discipline.\n" ]
will the united states debt to china ever affect the us government in a negative way?
To China specifically? No. Owing money to China is no different from the US government owing money to me for money I've lent to it. The US government owes about $1.25 trillion to China and about $6.2 trillion to foreign sources overall. It doesn't really give China any influence over the US. Can debt overall affect the US government negatively if it takes on so much debt that people lose faith in its ability to pay it back, or it defaults on a loan? Yeah, totally, because then people won't lend it money as cheaply anymore.
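A quick back-of-the-envelope check using only the figures above (so, rough numbers):

$$\frac{\$1.25\ \text{trillion}}{\$6.2\ \text{trillion}} \approx 20\%$$

China holds roughly a fifth of the foreign-held portion, and an even smaller share of total US government debt, most of which is held domestically.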
[ "A significant number of economists and analysts dismiss any and all concerns over foreign holdings of United States government debt denominated in U.S. Dollars, including China's holdings. Critics of the \"excessive\" amount of US debt held by China acknowledge that the \"biggest effect of a broad-scale dump of US Treasuries by China would be that China would actually export fewer goods to the United States.\"\n", "The 112th United States Congress introduced legislation whose aim was the assessment of the implications of China’s ownership of U.S. debt. The subsequent Congressional Report of 2013 claimed that \"[a] potentially serious short-term problem would emerge if China decided to \"suddenly\" reduce their liquid U.S. financial assets significantly\" [emphasis in the original text], noting also that Federal Reserve System Chairman Ben Bernanke had, in 2007, stated that “because foreign holdings of U.S. Treasury securities represent only a small part of total U.S. credit market debt outstanding, U.S. credit markets should be able to absorb without great difficulty any shift of foreign allocations.\"\n", "BULLET::::- The government of the People's Republic of China is said to be nervous about the effect a Democrat-led Congress might have on its exports to the United States market and the possible controversy that could result because of the country's human rights record. Nancy Pelosi, who became the Speaker of the House, is a noted critic of Chinese policy. Concerns likely to be raised include the undervalued Chinese currency, blamed by some for the recent losses in the American manufacturing industry, and issues such as internet censorship, piracy, limited market access within China itself for companies based in the U.S., and religious freedom. The Chinese Foreign Ministry spokeswoman Jiang Yu stated that she hoped the United States would play a \"constructive role\" in maintaining \"sound, healthy and stable relations between China and the U.S.\".\n", "Since 2016, the US has executed multiple actions towards China in response to the above issues. First, the government has implemented economic tariffs on a variety of imports, such as photographic films. Also, it has heavily scrutinized Chinese-based companies that have committed economic offenses against its interests. For example, the US Commerce Department was extremely close to handing a large fine to the Chinese telecom company ZTE for violating US-Iranian sanctions. However, President Trump blocked this decision because he thought it would cause too many Chinese jobs to be lost, and severely decrease US-China relations.  While this instance displays the American government's ability to tackle any economic issues that may arise with China, it also exemplifies how inconsistent such reactions might be. On a related note, the US has exited from some multilateral approaches related to handling Chinese economic expansion, such as the TPP (Trans-Pacific Partnership). In order for the US to successfully achieve economic diplomacy with China, it must collaborate with Beijing and other nations to ensure certain conditions exist, such as low trade/investment barriers and respect of property rights for all companies interacting with China. The US-China economic diplomacy approach may been seen by some as not fully encouraging this progress so far. 
However, this is subject to change depending on what future political/economic decisions are made.\n", "In 2010, the United States had a $270 billion trade deficit with China (Chinese imports totalling $360 billion compared to only $90 billion in American exports) in part to what most U.S. economists warn as an undervaluation of the Chinese currency, Yuan, which in turn gives its exporters a significant advantage in the global economy.\n", "BULLET::::2. The trade dispute with the U.S., caused by the undervalued \"renminbi\", damages China’s most important bilateral relationship. New tariffs aimed at retaliating the undervalued currency are possible in the new United States Congress, as the U.S. House of Representatives passed legislation that would impose economic sanctions on China. “Chinese officials do not understand the intensity of anger in Washington and could face a backlash if they fail to mollify their critics,” according to analyst Jason Kindopp at the Eurasia Group.\n", "China is a major creditor and the second largest foreign holder of US public debt and has been critical of US deficits and fiscal policy, advising for policies that maintain the purchasing value of the dollar although it had little few options other than to continue to buy United States Treasury bonds. China condemned the US monetary policy of quantitative easing, responding to S&P's downgrade of U.S. credit rating, and advised the United States not to continue with the accumulation of debt, concluding with the statement that America cannot continue to borrow to solve financial problems.\n" ]
What were the consequences of Athens’ decisions, and how did their downfall affect Greece and ultimately leave it open for Philip II to conquer?
There is a significant amount of time between the dissolution of the Delian League (404 BC) and the conquest of Greece by Philip II (338 BC). In fact, there was time enough for the Athenians to start a Second Delian League (probably in 378 BC) *and for that League to fall too* (after the Social War of 357-355 BC). However, Athenian decisions certainly did play into Philip II's hand. Firstly, from 368 BC onward, the Athenians waged a costly war to recover the strategic city of Amphipolis in Thrace, which they had lost in 424 BC. They failed to take it, but antagonised the Macedonians in the process, and Philip eventually did conquer the town (and its mines and vast resources of timber). Secondly, the Athenian attempt to recover their power in the Aegean was initially successful, but their continued campaigning against Amphipolis made their new allies feel used for the sake of Athens' interests, leading to the Social War I just mentioned. This war was extremely costly for Athens, and decisively ruined its chances of uniting the Aegean in a new Athenian empire. Thirdly, Athens spent a great deal of resources trying to set up a pro-Athenian government on the island of Euboia, which brought it into conflict with the Thebans through most of the 340s BC. The Thebans themselves were already extremely weakened by the ongoing Third Sacred War (356-346 BC) against Phokis, in which Philip II became more and more involved after he conquered Thessaly. All of these wars and alliances are very complex, but the main point is that only major states like Thebes and Athens had the strength to form an alliance capable of stopping Philip, and thanks to their constant warring no one trusted them to have the interests of Greece at heart. Meanwhile, Philip encroached on Central Greek affairs more and more, while also expanding into the Hellespont, which was a direct threat to Athenian interests. In the war that followed, Athens' only major ally was actually Thebes, but their combined army was defeated at Chaironeia in 338 BC, and Philip made mainland Greece subject to Macedon.
[ "The Greek victories at Plataea and contemporaneous naval battle at Mycale had the result that never again would the Persian Empire launch an attack on mainland Greece. Afterwards, Persia pursued its policies by diplomacy, bribery and cajolement, playing one city state against another. But, by these victories, and through the Delian League, Athens was able to consolidate its power in the flowering of Athenian democracy in 5th century Athens, under the leadership of Pericles, son of Xanthippus.\n", "The final steps in the shift to empire may have been triggered by Athens' defeat in Egypt, which challenged the city's dominance in the Aegean and led to the revolt of several allies, such as Miletus and Erythrae. Either because of a genuine fear for its safety after the defeat in Egypt and the revolts of the allies, or as a pretext to gain control of the League's finances, Athens transferred the treasury of the alliance from Delos to Athens in 454–453 BC.\n", "After her eventual defeat by Sparta in 404 BC, Athens soon recovered and re-established her hegemony over Euboea, which was an essential source of grain for the urban population. The Eretrians rebelled again in 349 BC and this time the Athenians could not recover control. In 343 BC supporters of Philip II of Macedon gained control of the city, but the Athenians under Demosthenes recaptured it in 341 BC.\n", "Athens in fact partially recovered from this setback between 410-406 BC, but a further act of economic war finally forced her defeat. Having developed a navy that was capable of taking on the much-weakened Athenian navy, the Spartan general Lysander seized the Hellespont, the source of Athens' grain. The remaining Athenian fleet was thereby forced to confront the Spartans, and were decisively defeated. Athens had little choice but to surrender; and was stripped of her city walls, overseas possessions and navy. In the aftermath, the Spartans were able to establish themselves as the dominant force in Greece for three decades.\n", "The Athenians had been unable to conquer Amphipolis, which commanded the gold mines of Mount Pangaion. So Philip reached an agreement with Athens to lease the city to them after its conquest, in exchange for Pydna (lost by Macedon in 363). However, after conquering Amphipolis, Philip kept both cities (357). As Athens had declared war against him, he allied Macedon with the Chalkidian League of Olynthus. He subsequently conquered Potidaea, this time keeping his word and ceding it to the League in 356.\n", "The economic resources of the Athenian State were not excessive. All the glory of Athens in the Age of Pericles, its constructions, public works, religious buildings, sculptures, etc. would not have been possible without the treasury of the Delian League. The treasury was originally held on the island of Delos but Pericles moved it to Athens under the pretext that Delos wasn't safe enough. This resulted in internal friction within the league and the rebellion of some city-states that were members. Athens retaliated quickly and some scholars believe this to be the period wherein it would be more appropriate to discuss an Athenian Empire instead of a league.\n", "The main reasons for the eventual failure were structural. This alliance was only valued out of fear of Sparta, which evaporated after Sparta's fall in 371 BC, losing the alliance its sole 'raison d'etre'. 
The Athenians no longer had the means to fulfill their ambitions, and found it difficult merely to finance their own navy, let alone that of an entire alliance, and so could not properly defend their allies. Thus, the tyrant of Pherae was able to destroy a number of cities with impunity. From 360 BC, Athens lost its reputation for invincibility and a number of allies (such as Byzantium and Naxos in 364 BC) decided to secede.\n" ]
Why do galaxies appear as consistent objects, given their sheer scale?
Hundreds of thousands of years is actually quite a short period of time in astronomy. Even very short-lived stars last for millions of years. The distance to other galaxies is also much larger than their size. Andromeda, the closest large galaxy to us, is millions of light-years away. We can see many other objects that are *billions* of light-years away. On that scale, we definitely notice that closer galaxies are more evolved than more distant galaxies. But on the scale of an individual galaxy, a hundred thousand years is almost too small to notice.
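To put rough numbers on it (illustrative values, not taken from the answer above): light needs about 10^(5) years to cross a Milky-Way-sized galaxy, while even the shortest-lived massive stars burn for a few million years and Sun-like stars for about 10^(10) years, so

$$\frac{10^{5}\ \text{yr}}{\text{a few}\times 10^{6}\ \text{yr}} \approx \text{a few percent}, \qquad \frac{10^{5}\ \text{yr}}{10^{10}\ \text{yr}} = 10^{-5}.$$

The light-crossing time is at most a few percent of any stellar lifetime, which is why the far side of a galaxy, seen roughly 100,000 years "earlier" than the near side, still looks like it belongs to the same object.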
[ "Current models also predict that the majority of mass in galaxies is made up of dark matter, a substance which is not directly observable, and might not interact through any means except gravity. This observation arises because galaxies could not have formed as they have, or rotate as they are seen to, unless they contain far more mass than can be directly observed.\n", "However, due to the existence of long-range correlations, it is known that structures can be found in the distribution of galaxies in the universe that extend over scales larger than the homogeneity scale.\n", "The largest galaxies are giant ellipticals. Many elliptical galaxies are believed to form due to the interaction of galaxies, resulting in a collision and merger. They can grow to enormous sizes (compared to spiral galaxies, for example), and giant elliptical galaxies are often found near the core of large galaxy clusters.\n", "Populations of stars have been aging and evolving, so that distant galaxies (which are observed as they were in the early universe) appear very different from nearby galaxies (observed in a more recent state). Moreover, galaxies that formed relatively recently, appear markedly different from galaxies formed at similar distances but shortly after the Big Bang. These observations are strong arguments against the steady-state model. Observations of star formation, galaxy and quasar distributions and larger structures, agree well with Big Bang simulations of the formation of structure in the universe, and are helping to complete details of the theory.\n", "These are objects that appear to be a series of irregular clumps with no coherent structure. Many of these objects are simply nearby dwarf galaxies. Some of these objects are interacting galaxies, while others are small groups of galaxies. In both cases, many of the constituent galaxies are irregular galaxies. The superposition of two or more such irregular galaxies can easily look like a single larger irregular galaxy, which is why the Atlas of Peculiar Galaxies (and other catalogs) often classify these pairs and groups as single objects.\n", "Currently, astronomers know little about the shape and size of our galaxy relative to what they know about other galaxies; it is difficult to observe the entire Milky Way from the inside. A good analogy is trying to observe a marching band as a member of the band. Observing other galaxies is much easier because humans are outside those galaxies. Steven Majewski and his team planned to use SIM Lite to help determine not only the shape and size of the Galaxy but also the distribution of its mass and the motion of its stars.\n", "This structure is not only incredibly large, but also very dense; the galaxies located in each of the filaments are four times closer to each other than the universe's average. Before its discovery, astronomers had predicted the existence of a structure like this one. According to computer models, several of the most massive galaxies originated in structures like this. These galaxies are believed to have formed as a result of blobs like those constituent to this structure collapsing under their own gravity. Since the densest areas in the universe are thought to be the places where galaxies formed first, this structure may be one of the earliest to have formed. This structure may reveal when and how the first galaxies formed and could help us better understand how our own galaxy came to be.\n" ]
If space is ever expanding, do we see new/farther objects every day we take photos of the outer edges of space? Do we add on to the “observable universe” every day too?
Kind of. So, the universe as a whole does not have a boundary. The expansion of the universe is not the expansion of the *edge* of the universe, but rather everything getting further away from everything else - it's not expanding at the edges, it's expanding *everywhere*. In fact, the evidence is somewhat pointing towards the universe being *infinite* in size. However, the *observable* universe *is* expanding. The edge of the observable universe represents the distance that light could have travelled since the beginning of the universe. The universe is about 13 billion years old, so the light from the edge of the observable universe is the light that has travelled about 13 billion light years (although the universe has continued to expand since the light was emitted and the images we see from this light are from objects that are now much more than 13 billion light years away). This means that the images we see of the very edge of the observable universe are images of the very very young universe. This is the universe before there was any structure - it was a dense, uniform, opaque fluid of energy and particles. Because the universe was opaque, we can't actually see the light from the very beginning of the universe. Instead, we see the light that was emitted just as the universe got cool enough and sparse enough to become transparent. This light forms a background behind everything else in the universe. It has been redshifted over billions of years into microwave frequencies, which is why we call it the Cosmic Microwave Background, or the CMB. So, as the observable universe expands, what happens is that the CMB radiation that we see is from slightly further away. What's happening is that the CMB radiation was emitted *everywhere*, but because it takes time for light to reach us, every second we're seeing CMB radiation that was emitted from gas about a light-second more distant than what we received the previous second. This means we are technically seeing new parts of the observable universe, but it's not like we're seeing new galaxies turn up - we're just seeing slightly more distant "slices" of the CMB. The CMB is far too uniform over time for us to really see any change over decades though.
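To make the redshift bit concrete (using commonly quoted values that aren't in the answer above: the gas became transparent at roughly 3000 K, and the light has been stretched by a factor of about 1+z ≈ 1100 since then):

$$T_{\text{today}} \approx \frac{T_{\text{emitted}}}{1+z} \approx \frac{3000\ \text{K}}{1100} \approx 2.7\ \text{K}$$

A 2.7 K blackbody peaks at millimetre/microwave wavelengths, hence "Cosmic Microwave Background."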
[ "Trivia questions and urban legends often claim certain constructed objects or effects are so large they are visible from outer space. For example, a giant beaver dam in Canada was described as \"so large it is visible from outer space.\" \"Field and Stream\", a Canadian Magazine, wrote, \"How big? Big enough to be visible ... from outer space.\"\n", "Regardless of the overall shape of the universe, the question of what the universe is expanding into is one which does not require an answer according to the theories which describe the expansion; the way we define space in our universe in no way requires additional exterior space into which it can expand since an expansion of an infinite expanse can happen without changing the infinite extent of the expanse. All that is certain is that the manifold of space in which we live simply has the property that the distances between objects are getting larger as time goes on. This only implies the simple observational consequences associated with the metric expansion explored below. No \"outside\" or embedding in hyperspace is required for an expansion to occur. The visualizations often seen of the universe growing as a bubble into nothingness are misleading in that respect. There is no reason to believe there is anything \"outside\" of the expanding universe into which the universe expands.\n", "Big Picture’s first 21 issues have been firmly grounded on planet Earth. Now we’d like to look at an even bigger picture – the universe. Space biology looks at life in space from several perspectives: how it began, where it might be, and the effects of space as a rather extreme habitat.\n", "The size of the Universe is somewhat difficult to define. According to the general theory of relativity, far regions of space may never interact with ours even in the lifetime of the Universe due to the finite speed of light and the ongoing expansion of space. For example, radio messages sent from Earth may never reach some regions of space, even if the Universe were to exist forever: space may expand faster than light can traverse it.\n", "Even if the overall spatial extent is infinite and thus the universe cannot get any \"larger\", we still say that space is expanding because, locally, the characteristic distance between objects is increasing. As an infinite space grows, it remains infinite.\n", "Because we cannot observe space beyond the edge of the observable universe, it is unknown whether the size of the Universe in its totality is finite or infinite. Estimates for the total size of the universe, if finite, reach as high as formula_1 megaparsecs, implied by one resolution of the No-Boundary Proposal.\n", "In his introduction Boeke says that the essay originated with a school project at his Werkplaats Children's Community in Bilthoven. The idea was to draw pictures that would include ever-growing areas of space, to show how the earth is located in an unfathomably enormous universe. Boeke then writes that he realized the reverse process—creating graphics of tinier and tinier bits of reality—would reveal a world \"as full of marvels\" as the most gigantic reaches of outer space.\n" ]
Any good non-Roman sources on the Roman military? Talking about tactics, formations, appearance, etc.
Polybius is a Greek writing to Greeks, trying to explain how Rome came to dominate the Mediterranean. His 6th book is a systematic analysis of Rome's political and military practices, including the structure of the legion, recruitment, provisioning, the layout of the camp, etc. The rest of it is a military history starting with the 1st Punic War down to 146 BC, the year in which Rome destroyed Carthage and Corinth. Polybius lived for years in Rome as a hostage in the house of Paullus, apparently traveled widely in Italy, and was friends with Scipio Aemilianus, so he probably has a fairly good idea what he's talking about. [LacusCurtius](_URL_0_) has the Loeb translation up.
[ "The focus below is primarily on Roman tactics – the \"how\" of their approach to battle, and how it stacked up against a variety of opponents over time. It does not attempt detailed coverage of things like army structure or equipment. Various battles are summarized to illustrate Roman methods with links to detailed articles on individual encounters. \n", "BULLET::::- Ross Cowan, \"Roman Battle Tactics, 109 BC - AD 313\". Osprey, Oxford 2007. — The book clearly explains and illustrates the mechanics of how Roman commanders — at every level — drew up and committed their different types of troops for open-field battles.\n", "BULLET::::- — One volume history covering the Roman Army, which was the biggest most important part of its military. Goldsworthy covers the early Republican days down to the final Imperial era demise, tracing changes in tactics, equipment, strategy, organization etc. He notes the details of the military system such as training and battlefield tactics, as well as bigger picture strategy, and changes that impacted Roman arms. He assesses what made the Romans effective, and ineffective in each of the various eras.\n", "This article contains the summaries of the detailed linked articles on the historical phases above, Readers seeking discussion of the Roman army by theme, rather than by chronological phase, should consult the following articles:\n", "Roman armies of the Republic and early empire worked from a set tactical 'handbook', a military tradition of deploying forces that provided for few variations and was ignored or elaborated only on occasion.\n", "The tactics of the Roman military depended on the discipline of the soldiers, the equipment of the soldiers, the formation of the cohorts of a legion on the battlefield, and the terrain of the battlefield.\n", "The figures given by ancient historians for the size of the Roman army engaged in the battle are unlikely, as they are notorious for exaggerating figures. Contrary to Berresford Ellis's assertion, at the time, the Romans had only two legions. The number of legions was not increased to four until later in the century, during the Second Samnite War (326-304 BC), and the first record of four legions is for 311 BC. At that point, the Romans also had additional military commanders: the praetor, who had been instituted in 366 BC, and the proconsul, who was a consul who received an extension of his term of military command (the practice started in 327 BC). The first historical hints of the consuls leading more than one legion were for 299 BC (during a war with the Etruscans) and 297 BC, during the Third Samnite War (298-290 BC). The first explicit mention of a consul with two legions is for 296 BC. In 295 BC, the Romans deployed six legions; four led by the two consuls, fought a coalition of four peoples (the Samnites, Etruscans, Umbrians and Senone Gauls) in the huge Battle of Sentinum. Two were led to another front by a praetor. The battle of the Allia took place in the early days of Rome, when the Roman army was much smaller and its command structure was much simpler. The Roman army had only two legions, and the two consuls were the sole military commanders; each headed one legion. In addition, the battle occurred in the early history of the Roman Republic, while the consulship alternated with years when Rome was headed by military tribunes with consular power, often referred to as consular tribunes, instead, and 390 BC was a year in which six consular tribunes were in charge. 
Therefore, Berresford Ellis's assertion that the Romans at the battle of Allia had four legions, two for each of the two consuls, is doubly anachronistic. Moreover, the Roman legions had 6,000 men in only a few exceptional occasions. In the early days of the Republic, when the Battle of the Allia took place, it was 4,200. Later, it was 5,200 when at full strength (the legions were often under-strength). Accordingly, the Roman force at the battle was likely substantially smaller than estimated.\n" ]
why do cuts get white when you take a shower?
It's mostly wound fluid (exudate), a bodily fluid that largely contains plasma and white blood cells. It's there to help speed the process of tissue/skin repair.
[ "Another common problem is color bleeding. For example, washing a red shirt with white underwear can result in pink underwear. Often only similar colors are washed together to avoid this problem, which is lessened by cold water and repeated washings. Sometimes this blending of colors is seen as a selling point, as with madras cloth.\n", "This is a set of designs printed in special ink that evaporates from liquid that is absorbed from the wearer-specifically urine, near the area that is most commonly urinated. When the child does wet the pants, these designs smudge to the point that they fade completely to white. This is intended to be an incentive for staying dry and a way to discourage wetting, and to identify when he or she is wet. Such a feature was first sold to consumers in 2000.\n", "After scouring and bleaching, optical brightening agents (OBAs) are applied to make the textile material appear a more brilliant white. These OBAs are available in different tints such as blue, violet and red.\n", "Well done cuts, in addition to being brown, are drier and contain few or no juices. Note that searing (cooking the exterior at a high temperature) in no way \"seals in the juices\" – water evaporates at the same or higher rates as unseared meat. Searing does play an important role, however, in browning, a crucial contributor to flavor and texture.\n", "Bleaching improves whiteness by removing natural coloration and remaining trace impurities from the cotton; the degree of bleaching necessary is determined by the required whiteness and absorbency. Being a vegetable fiber, cotton will be bleached using an oxidizing agent, such as dilute sodium hypochlorite or dilute hydrogen peroxide. If the fabric is to be dyed a deep shade, then lower levels of bleaching are acceptable. However, for white bed sheets and medical applications, the highest levels of whiteness and absorbency are essential.\n", "White fabrics acquire a slight color cast after use (usually grey or yellow). Since blue and yellow are complementary colors in the subtractive color model of color perception, adding a trace of blue color to the slightly off-white color of these fabrics makes them appear whiter. Laundry detergents may also use fluorescing agents to similar effect. Many white fabrics are blued during manufacturing. Bluing is not permanent and rinses out over time leaving dingy or yellowed whites. A commercial bluing product allows the consumer to add the bluing back into the fabric to restore whiteness.\n", "Bleaching improves whiteness by removing natural coloration and remaining trace impurities from the cotton; the degree of bleaching necessary is determined by the required whiteness and absorbency. Cotton being a vegetable fibre will be bleached using an oxidizing agent, such as dilute sodium hypochlorite or dilute hydrogen peroxide. If the fabric is to be dyed a deep shade, then lower levels of bleaching are acceptable, for example. However, for white bed sheetings and medical applications, the highest levels of whiteness and absorbency are essential.\n" ]
why are firefighters called to the scene of an emergency even if there is no fire involved?
You could almost call firefighters the engineers of the emergency services. They carry ladders, lifting gear, cutting tools, winches, etc., so they get sent to crashes, floods and rescues where that kind of equipment and training is needed even when nothing is burning.
[ "Firefighters work closely with other emergency response agencies such as the police and emergency medical service. A firefighter's role may overlap with both. Fire investigators or fire marshals investigate the cause of a fire. If the fire was caused by arson or negligence, their work will overlap with law enforcement. Firefighters also frequently provide some degree of emergency medical service, in addition to working with full-time paramedics.\n", "Firefighting is the act of attempting to prevent the spread of and extinguish significant unwanted fires in buildings, vehicles, woodlands, etc. A firefighter suppresses fires to protect lives, property and the environment. \n", "Operating in the U.S. within the context of fire use, firefighters may only suppress fire that has become uncontrollable. Conversely, fires or portions of a fire that have previously been engaged by firefighters may be treated as fire use situation and be left to burn.\n", "Some fire stations are not regularly occupied, with the firefighting carried out by volunteer or retained firefighters. In this case, the firefighters are summoned to the fire station by siren, radio or pagers, where they will then deploy the fire engine. These fire stations may still have office space for the firefighters, a library of reference and other materials, and a \"trophy wall\" or case where the firefighters display memorabilia.\n", "Firefighters have sometimes been assaulted by members of the public while responding to calls. These kinds of attacks can cause firefighters to fear for their safety and may cause them to not have full focus on the situation which could result in injury to their selves or the patient.\n", "Firefighting is the act of extinguishing destructive fires. A firefighter fights these fires with the intent to prevent destruction of life, property and the environment. Firefighting is a highly technical profession, which requires years of training and education in order to become proficient. A fire can rapidly spread and endanger many lives; however, with modern firefighting techniques, catastrophe can usually be avoided. To help prevent fires from starting, a firefighter's duties include public education and conducting fire inspections. Because firefighters are often the first responders to victims in critical conditions, firefighters often also provide basic life support as emergency medical technicians or advanced life support as licensed paramedics. Firefighters make up one of the major emergency services, along with the emergency medical service, the police, and many others.\n", "Many areas had to be evacuated due to the fire as it approached populated areas. The fire was also fought by thousands of firefighters and some groups of specialized firefighters, called hot shotshot shots. Hot Shots are groups brought into an area to contain a wildfire. Eight helicopters with buckets attached filled with water mixed with a retardant were used to contain the fire, along with helitankers, helicopters with a built in container for water. Humans fighting the fire tried to contain the fire and prevent \"slop over\", where the fire crosses a control line to an unburned side. Lines of trees were cleared or \"sawcut\" so the fire had less fuel and by digging a fire line down into the mineral soil to stop the fire whenever it reached the ditch helped with containment. Firefighters even used the paths of past wildfires to help slow down the Beaver Creek Fire. 
They accomplished this by forcing the fire towards the boundary of the past fire and into a natural barrier. The last resort to stop the fire implemented by humans was the Governor of Idaho, Butch Otter, who declared the Beaver Creek Fire a state disaster area on August 14. This allowed the area more financial funds to fight the fire and more human resources, such as firefighters. The fire entered the Wood River Valley through Greenhorn Gulch, a canyon halfway between the towns of Hailey and Ketchum, and to some extent, through Deer Creek Canyon, just to the south. One home in Greenhorn was destroyed by the fire but more than 25 other homes were saved by the efforts of firefighting crews.\n" ]
How much of an impact did the ending of slavery have on the emancipation of women and women's rights in general? (USA)
Well, the idea of giving black *men* the vote gained traction pretty quickly; this was actually used to try to suppress the Women's Rights groups by telling them that they should be quiet and wait, as this was "the Negro's hour." Elizabeth Cady Stanton took a somewhat dim view of this idea on the grounds that black *women* were going to be denied their rights, and expressed her views on the matter in [a letter to the National Anti-Slavery Standard](_URL_0_) (scroll down; it's the third document in the PDF).
[ "Even as women played crucial roles in abolitionism, the movement simultaneously helped stimulate women's-rights efforts. A full 10 years before the Seneca Falls Convention, the Grimkés were travelling and lecturing about their experiences with slavery. As Gerda Lerner says, the Grimkés understood their actions' great impact. \"In working for the liberation of the slave,\" Lerner writes, \"Sarah and Angelina Grimké found the key to their own liberation. And the consciousness of the significance of their actions was clearly before them. 'We Abolition Women are turning the world upside down.\n", "Other luminaries such as Lydia Maria Child, Elizabeth Cady Stanton, Susan B. Anthony, Harriet Tubman, and Sojourner Truth all played important roles in abolitionism. But even beyond these well-known women, abolitionism maintained impressive support from white middle-class and some black women. It was these women who performed many of the logistical, day-to-day tasks that made the movement successful. They raised money, wrote and distributed propaganda pieces, drafted and signed petitions, and lobbied the legislatures. Though abolitionism sowed the seeds of the women's rights movement, most women became involved in abolitionism because of a gendered religious worldview, and the idea that they had feminine, moral responsibilities. For example, in the winter of 1831–1832, three women's petitions were written to the Virginia legislature, advocating emancipation of the state's slave population. The only precedent for such action was Catharine Beecher's organization of a petition protesting the Cherokee removal. The Virginia petitions, while the first of their kind, were by no means the last. Similar backing increased leading up to the Civil War.\n", "Women continued to be active in reform movements in the second half of the 19th century. In 1851, former slave Sojourner Truth gave a famous speech, called \"Ain't I a Woman?,\" at the Women's Convention in Akron, Ohio. In this speech she condemned the attitude that women were too weak to have equal rights with men, noting the hardships she herself had endured as a slave. In 1852, Harriet Beecher Stowe wrote the abolitionist book \"Uncle Tom's Cabin\". Begun as a serial for the Washington anti-slavery weekly, the National Era, the book focused public interest on the issue of slavery, and was deeply controversial for its strong anti-slavery stance at the time it was written. In writing the book, Stowe drew on her personal experience: she was familiar with slavery, the antislavery movement, and the Underground Railroad because Kentucky, across the Ohio River from Cincinnati, Ohio, where Stowe had lived, was a slave state. \"Uncle Tom's Cabin\" was a best-seller, selling 10,000 copies in the United States in its first week; 300,000 in the first year; and in Great Britain, 1.5 million copies in one year. Following publication of the book, Harriet Beecher Stowe became a celebrity, speaking against slavery both in America and Europe. She wrote \"A Key to Uncle Tom's Cabin\" in 1853, extensively documenting the realities on which the book was based, to refute critics who tried to argue that it was inauthentic; and published a second anti-slavery novel, \"Dred\", in 1856. 
Later, when she visited President Abraham Lincoln, the family's oral tradition states that he greeted her as \"the little lady who made this big war.\" Campaigners for other social changes, such as Caroline Norton who campaigned for women's rights, respected and drew upon Stowe's work.\n", "The relatively small women's rights movement of that time was closely associated with the American Anti-Slavery Society led by William Lloyd Garrison. The women's movement depended heavily on abolitionist resources, with its articles published in their newspapers and some of its funding provided by abolitionists. There was tension, however, between leaders of the women's movement and male abolitionists who, although supporters of increased women's rights, believed that a vigorous campaign for women's rights would interfere with the campaign against slavery. In 1860, when Anthony sheltered a woman who had fled an abusive husband, Garrison insisted that the woman give up the child she had brought with her, pointing out that the law gave husbands complete control of children. Anthony reminded Garrison that he helped slaves escape to Canada in violation of the law and said, \"Well, the law which gives the father ownership of the children is just as wicked and I'll break it just as quickly.\"\n", "The coming of the American Civil War ended the annual National Women's Rights Convention and focused women's activism on the issue of emancipation for slaves. The New York state legislature repealed in 1862 much of the gain women had made in 1860. Susan B. Anthony was \"sick at heart\" but could not convince women activists to hold another convention focusing solely on women's rights.\n", "Throughout the early 20th century, states enacted labor rights to advance social and economic progress. But despite the Clayton Act, and abuses of employers documented by the \"Commission on Industrial Relations\" from 1915, the Supreme Court struck labor rights down as unconstitutional, leaving management powers virtually unaccountable. In this \"Lochner era\", the Courts held that employers could force workers to not belong to labor unions, that a minimum wage for women and children was void, that states could not ban employment agencies charging fees for work, that workers could not strike in solidarity with colleagues of other firms, and even that the federal government could not ban child labor. It also imprisoned socialist activists, who opposed the fighting in World War One, meaning that Eugene Debs ran as the Socialist Party's candidate for President in 1920 from prison. Critically, the courts held state and federal attempts to create social security to be unconstitutional. Because they were unable to save in safe public pensions, millions of people bought shares in corporations, causing massive growth in the stock market. Because the Supreme Court precluded regulation for good information on what people were buying, corporate promoters tricked people into paying more than stocks were really worth. The Wall Street Crash of 1929 wiped out millions of people's savings. Business lost investment and fired millions of workers. Unemployed people had less to spend with businesses. Business fired more people. There was a downward spiral into the Great Depression. This led to the election of Franklin D. Roosevelt for President in 1932, who promised a \"New Deal\". Government committed to create full employment and a system of social and economic rights enshrined in federal law. 
But despite the Democratic Party's overwhelming electoral victory, the Supreme Court continued to strike down legislation, particularly the National Industrial Recovery Act of 1933, which regulated enterprise in an attempt to ensure fair wages and prevent unfair competition. Finally, after Roosevelt's second overwhelming victory in 1936, and Roosevelt's threat to create more judicial positions if his laws were not upheld, one Supreme Court judge switched positions. In \"West Coast Hotel Co v Parrish\" the Supreme Court found that minimum wage legislation was constitutional, letting the New Deal go on. In labor law, the National Labor Relations Act of 1935 guaranteed every employee the right to unionize, collectively bargain for fair wages, and take collective action, including in solidarity with employees of other firms. The Fair Labor Standards Act of 1938 created the right to a minimum wage, and time-and-a-half overtime pay if employers asked people to work over 40 hours a week. The Social Security Act of 1935 gave everyone the right to a basic pension and to receive insurance if they were unemployed, while the Securities Act of 1933 and the Securities Exchange Act of 1934 ensured buyers of securities on the stock market had good information. The Davis–Bacon Act of 1931 and Walsh–Healey Public Contracts Act of 1936 required that in federal government contracts, all employers would pay their workers fair wages, beyond the minimum, at prevailing local rates. To reach full employment and out of depression, the Emergency Relief Appropriation Act of 1935 enabled the federal government to spend huge sums of money on building and creating jobs. This accelerated as World War II began. In 1944, his health waning, Roosevelt urged Congress to work towards a \"Second Bill of Rights\" through legislative action, because \"unless there is security here at home there cannot be lasting peace in the world\" and \"we shall have yielded to the spirit of Fascism here at home.\"\n", "Women had increasingly played a larger role in the anti-slavery movement but could not take a direct role in Parliament. They sometimes formed their own anti-slavery societies. Many women were horrified that, under slavery, women and children were taken away from their families. In 1824, Elizabeth Heyrick published a pamphlet titled \"Immediate not Gradual Abolition\", in which she urged the immediate emancipation of slaves in the British colonies.\n" ]
How does a strong currency affect a country's economy?
A strong currency makes your exports comparatively more expensive, and vice versa. Consider this: the Canadian dollar is currently in the shitter, but Canadians have not all had their annual salaries adjusted, so the price of their products in Canadian dollars is unchanged. To Americans, though, the favorable exchange rate means a Canadian product costs fewer US dollars.
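To put rough numbers on that, here is a quick Python sketch; the CAD price and both exchange rates in it are invented purely for illustration, not real market figures:

```python
# Illustrative only: how the same CAD sticker price looks to a US buyer
# under a hypothetical strong vs. weak Canadian dollar.
price_cad = 100.0  # product price in Canadian dollars (unchanged for the seller)
rates_usd_per_cad = {"strong CAD": 0.95, "weak CAD": 0.72}  # invented example rates

for label, rate in rates_usd_per_cad.items():
    price_usd = price_cad * rate
    print(f"{label}: CAD {price_cad:.2f} costs a US buyer USD {price_usd:.2f}")

# A weaker home currency makes the export cheaper abroad; a stronger one
# makes it comparatively more expensive, which is the point of the answer.
```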
[ "BULLET::::7. Economic strength of a country: In general, high economic growth rates are not conducive to the local currency's performance in the foreign exchange market in the short term, but in the long run, they strongly support the strong momentum of the local currency.\n", "A strong currency helps domestic importers as their currency buys more, benefits foreign exporters as their exports garner more, hurts domestic exporters as there are not as many foreign buyers, and harms foreign importers as they cannot buy as much. A weak currency does the opposite of the above. They are summed up in the tables below.\n", "BULLET::::- Monetary dumping: a currency undergoes a devaluation when monetary authorities decide to intervene in the foreign exchange market to lower the value of the currency against other currencies. This makes local products more competitive and imported products more expensive (Marshall Lerner Condition), increasing exports and decreasing imports, and thus improving the trade balance. Countries with a weak currency cause trade imbalances: they have large external surpluses while their competitors have large deficits.\n", "Mainly in export-oriented economies, the effects of monetary policy transmission operating through exchange rate effects has been a major concern of their central banks. Expansionary monetary policy will cause the interest rate in a country to fall and deposits that are denominated in that domestic currency become less attractive than their foreign equivalents. As a result, the value of domestic deposits will fall compared to foreign deposits, which leads to a depreciation of the domestic currency. Since the value of the domestic currency is falling compared to foreign currencies, it now takes more of the domestic currency to buy a same amount in the foreign currency, and thus a depreciation of a currency is denoted be ↑e. As a result of this depreciation (domestic products become cheaper), net exports will rise and consequently so will aggregate spending. \n", "A currency that is naturally tied to a country’s major commodities can be beneficial if global demand for a commodity increases, naturally strengthening the value of the currency. As seen in Figure 1, as the demand for a commodity shifts out (higher demand) the price increases to p’. This increased demand also is likely to increase GDP, as more exports take place as demonstrated by the equation for GDP below.\n", "Other nations, including Iceland, Japan, Brazil, and so on have had a policy of maintaining a low value of their currencies in the hope of reducing the cost of exports and thus bolstering their economies. A lower exchange rate lowers the price of a country's goods for consumers in other countries, but raises the price of imported goods and services for consumers in the low value currency country.\n", "However, when a country is suffering from high unemployment or wishes to pursue a policy of export-led growth, a lower exchange rate can be seen as advantageous. From the early 1980s the International Monetary Fund (IMF) has proposed devaluation as a potential solution for developing nations that are consistently spending more on imports than they earn on exports. A lower value for the home currency will raise the price for imports while making exports cheaper. This tends to encourage more domestic production, which raises employment and gross domestic product (GDP). Such a positive impact is not guaranteed however, due for example to effects from the Marshall–Lerner condition. 
Devaluation can be seen as an attractive solution to unemployment when other options, like increased public spending, are ruled out due to high public debt, or when a country has a balance of payments deficit which a devaluation would help correct. A reason for preferring devaluation common among emerging economies is that maintaining a relatively low exchange rate helps them build up foreign exchange reserves, which can protect against future financial crises.\n" ]
What did the ancient people that built Stonehenge do when it was cloudy on the Solstices?
Since this involves a time before written records, this may not be the best subreddit for the question. At the same time, because it involves a time before written records, it is largely impossible to answer your question. I can say that, in my experience of winter in Britain and Ireland, the average day was indeed cloudy, but the sun more often than not appeared on the horizon as it rose, shone for a few minutes, and then rose into the bank of clouds. Stonehenge only "works" at the solstices at the moments of sunrise and sunset, not later when the sun is behind the clouds.
[ "The prehistoric monument of Stonehenge has long been studied for its possible connections with ancient astronomy. The site is aligned in the direction of the sunrise of the summer solstice and the sunset of the winter solstice. Archaeoastronomers have made a range of further claims about the site's connection to astronomy, its meaning, and its use.\n", "Although Stonehenge has become an increasingly popular destination during the summer solstice, with 20,000 people visiting in 2005, scholars have developed growing evidence that indicates prehistoric people visited the site only during the winter solstice. The only megalithic monuments in the British Isles to contain a clear, compelling solar alignment are Newgrange and Maeshowe, which both famously face the winter solstice sunrise.\n", "This is attested by physical remains in the layouts of late Neolithic and Bronze Age archaeological sites, such as Stonehenge in England and Newgrange in Ireland. The primary axes of both of these monuments seem to have been carefully aligned on a sight-line pointing to the winter solstice sunrise (Newgrange) and the winter solstice sunset (Stonehenge). It is significant that at Stonehenge the Great Trilithon was oriented outwards from the middle of the monument, i.e. its smooth flat face was turned towards the midwinter Sun.\n", "The first stone phase at Stonehenge has been dated to about 2600 BCE. Stone circles are still being made in Wales as part of the Eisteddfod movement, which incorporates this among other elements from the Druidic revival. Desert kites were used possibly by 3000 BCE; they fell out of use in the Neolithic as prey populations declined and the human population rose.\n", "Many ancient monuments were constructed with the passing of the solar year in mind; for example, stone megaliths accurately mark the summer or winter solstice (some of the most prominent megaliths are in Nabta Playa, Egypt; Mnajdra, Malta and at Stonehenge, England); Newgrange, a prehistoric human-built mount in Ireland, was designed to detect the winter solstice; the pyramid of El Castillo at Chichén Itzá in Mexico is designed to cast shadows in the shape of serpents climbing the pyramid at the vernal and autumnal equinoxes.\n", "At Stonehenge on midsummer morning, a phallic shadow is cast from the \"Heel\" stone; this penetrates to the centre of the monument where its tip touches the \"Goddess\" stone. In his book, \"Stonehenge, Avebury and Drombeg Stone Circle Deciphered\", he wrote that most stone circles were planned as sunrise fertility monuments, and so need to be studied at sunrise.\n", "Early efforts to date Stonehenge exploited changes in astronomical declinations and led to efforts such as H. Broome’s 1864 theory that the monument was built in 977 BC, when the star Sirius would have risen over Stonehenge's Avenue. Sir Norman Lockyer proposed a date of 1680 BC based entirely on an incorrect sunrise azimuth for the Avenue, aligning it on a nearby Ordnance Survey trig point, a modern feature. Petrie preferred a later date of 730 AD. The relevant stones were leaning considerably during his survey, and it was not considered accurate.\n" ]
If pets require protective collars to allow wounds to heal properly, how do wild animals recover from wounds?
They die a lot of the time from those types of wounds. The cones are generally used so the animal won't pick out the sutures. Any animal in the wild would probably die from a wound that needed sutures anyway.
[ "In order to prevent the animal from irritating a wound or removing stitches while self grooming, Elizabethan collars are used to either prevent the animal from licking/biting its wound or using its limbs to scratch their head or ears. The collar can also be used to restrain animals with self destructing habits, either from poor training or mental illness.\n", "As with the licking of wounds by people, wound licking by animals carries a risk of infection. Allowing pet cats to lick open wounds can cause cellulitis and sepsis due to bacterial infections. Licking of open wounds by dogs could transmit rabies if the dog is infected with rabies, although this is said by the CDC to be rare. Dog saliva has been reported to complicate the healing of ulcers. Another issue is the possibility of an allergy to proteins in the saliva of pets, such as Fel d 1 in cat allergy and Can f 1 in dog allergy. Cases of serious infection following the licking of wounds by pets include:\n", "Animals may chew on the wires and remove some of the insulation. In any of these cases if the damage is not noticed and the cord is not repaired or taken out of service, the damage can lead to arcing or a short circuit between the wires, which can ignite nearby materials. The exposed wires of an extension cord with damaged insulation can also present a shock hazard to people and animals.\n", "Cairn terriers shed very little, but should always be hand stripped. Using scissors or shears can ruin the dog's rugged outer coat after one grooming. Hand stripping involves pulling the old dead hair out by the roots. If done incorrectly, this can cause discomfort to the dog, causing it to shy away from future hand stripping. Removing the dead hair in this manner allows new growth to come in. This new growth helps protect the dog from water and dirt.\n", "Collars can be dangerous for pets that live in crates or which might get stuck in tree branches and that is why safety collars have been developed. There is a particular type of safety collar which is intended for both dogs and cats. Breakaway collars are especially designed to prevent the pet from choking or getting stuck because of their collar. They feature a clever design that releases quickly when a small amount of pressure is applied, such as the cat hanging from a tree branch. The clasp will release, which quickly gets the pet out of a possibly desperate situation. However, it is recommended that pets have their collar removed before sleeping in a wired crate.\n", "Antibiotics to prevent infection are recommended for dog and cat bites of the hand, and human bites if they are more than superficial. They are also recommended in those who have poor immune function. Evidence for antibiotics to prevent infection in bites in other areas is not clear.\n", "Wound licking is beneficial but too much licking can be harmful. An Elizabethan collar may be used on pet animals to prevent them from biting an injury or excessively licking it, which can cause a lick granuloma. These lesions are often infected by pathogenic bacteria such as \"Staphylococcus intermedius\". Horses that lick wounds may become infected by a stomach parasite, \"Habronema\", a type of nematode worm. The rabies virus may be transmitted between animals, such as the kudu antelopes by wound licking of wounds with residual infectious saliva.\n" ]
If time is nonexistent for a photon, how can it be emitted from something and never be absorbed?
Time isn't quite nonexistent for a photon. A photon simply has no perspective in the first place; there's no way of putting a coordinate system on the Universe that describes things as a photon would see them. In Science-Speak, there's no such thing as a photon's frame of reference, because there's no frame in which a photon can be at rest. Photons exist only in other observers' reference frames, and in those frames they travel at a definite speed, the speed of light.
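As a supplementary illustration (not part of the original answer), here is a tiny sketch of the Lorentz factor gamma = 1/sqrt(1 - v^2/c^2), which is what blows up as you try to push a reference frame toward the speed of light; the sample speeds are arbitrary:

```python
import math

c = 299_792_458.0  # speed of light in m/s

def lorentz_gamma(v):
    """Time-dilation factor for a reference frame moving at speed v < c."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

for fraction in (0.9, 0.99, 0.9999, 0.999999):
    print(f"v = {fraction} c  ->  gamma = {lorentz_gamma(fraction * c):,.1f}")

# gamma grows without bound as v approaches c, so there is no boost that
# lands you in a frame where light is at rest, which is the answer's point.
```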
[ "The process of emission and absorption of photons seemed to demand that the conservation of energy will hold at best on average. If a wave containing exactly one photon passes over some atoms, and one of them absorbs it, that atom needs to tell the others that they can't absorb the photon anymore. But if the atoms are far apart, any signal cannot reach the other atoms in time, and they might end up absorbing the same photon anyway and dissipating the energy to the environment. When the signal reached them, the other atoms would have to somehow recall that energy. This paradox led Bohr, Kramers and Slater to abandon exact conservation of energy. Heisenberg's formalism, when extended to include the electromagnetic field, was obviously going to sidestep this problem, a hint that the interpretation of the theory will involve wavefunction collapse.\n", "When a photon is absorbed, the electromagnetic field of the photon disappears as it initiates a change in the state of the system that absorbs the photon. Energy, momentum, angular momentum, magnetic dipole moment and electric dipole moment are transported from the photon to the system. Because there are conservation laws, that have to be satisfied, the transition has to meet a series of constraints. This results in a series of selection rules. It is not possible to make any transition that lies within the energy or frequency range that is observed.\n", "If a single photon approaches an atom which is receptive to it, the photon can be absorbed by the atom in a manner very similar to a radio wave being picked up by an aerial. At the moment of absorption the photon ceases to exist and the total energy contained within the atom increases. This increase in energy is usually described symbolically by saying that one of the outermost electrons \"jumps\" to a \"higher orbit\". This new atomic configuration is unstable and the tendency is for the electron to fall back to its lower orbit or energy level, emitting a new photon as it goes. The entire process may take no more than 1 x 10 seconds. The result is much the same as with reflective colour, but because of the process of absorption and emission, the substance emits a glow. According to Planck, the energy of each photon is given by multiplying its frequency in cycles per second by a constant (Planck’s constant, 6.626 x 10 erg seconds). It follows that the wavelength of a photon emitted from a luminescent system is directly related to the difference between the energy of the two atomic levels involved.\n", "In the case of photons, photons are bosons and can very easily and rapidly appear or disappear. Therefore, the chemical potential of photons is always and everywhere zero. The reason is, if the chemical potential somewhere was higher than zero, photons would spontaneously disappear from that area until the chemical potential went back to zero; likewise if the chemical potential somewhere was less than zero, photons would spontaneously appear until the chemical potential went back to zero. Since this process occurs extremely rapidly (at least, it occurs rapidly in the presence of dense charged matter), it is safe to assume that the photon chemical potential is never different from zero.\n", "An electron entering an electric field is accelerated, and therefore must lose part of its energy in the form of a photon via the Bremsstrahlung effect - the process by which a charged particle emits electromagnetic radiation when being decelerated upon passing an atom, for instance in a solid material. 
By exploiting the relativistic phenomena of time dilatation and length contraction, the NA63 experiment has shown that this process of photon emission is not instantaneous, but rather, takes time. Because the process takes time, the photon production can be influenced experimentally. For non-relativistic particles this time is so short that investigations are very difficult, if not excluded. But for the relativistic particles used by NA63, their time is ‘slowed’ by a factor of about half a million due to the relativistic effect of time dilatation, making investigations possible.\n", "If light (photons) of frequency ν passes through the group of atoms, there is a possibility of the light being absorbed by electrons which are in the ground state, which will cause them to be excited to the higher energy state. The rate of absorption is proportional to the radiation intensity of the light, and also to the number of atoms currently in the ground state, \"N\".\n", "In a process in which a photon is annihilated (absorbed), we can think of the photon as making a transition into the vacuum state. Similarly, when a photon is created (emitted), it is occasionally useful to imagine that the photon has made a transition out of the vacuum state. An atom, for instance, can be considered to be \"dressed\" by emission and reabsorption of \"virtual photons\" from the vacuum. The vacuum state energy described by is infinite. We can make the replacement:\n" ]
Coin from 1919; can somebody identify?
That's a British 3 pence coin featuring King George V. It appears to be in fine condition and could probably fetch anything from a pound up to 3-4 pounds at most. An uncirculated version of the same coin would be worth around 10 pounds.
[ "Coins were first issued in 1952 in denominations of 1, 3, 5, 10, 25, and 50 bani, with aluminum bronze for 1, 3, and 5 bani, and cupronickel for 10, 25, and 50 bani. These coins featured the state arms and name \"Republica Populara Româna\".\n", "Coins were issued in 1934 in denominations of 1-, 2, 3-, 5-, 10-, 15 and 20 kɵpejek, a Tuvanized name for the Russian kopeck, with banknotes issued in 1935 and 1940 in denominations of 1 to 25 akşa. The names \"kɵpejek\" and \"akşa\" are spelled in Jaꞑalif.\n", "The Maine State Museum website favors the view that the coin was found at the site and is therefore evidence of Norse presence on the North American continent, although the Museum states \"the most likely explanation for the coin's presence is that it was obtained by natives somewhere else, perhaps in Newfoundland where the only known New World Norse settlement has been found at L’Anse aux Meadows, and that it eventually reached the Goddard site through native trade channels\". The Maine State Museum describes it as \"the only pre-Columbian Norse artifact generally regarded as genuine found within the United States\".\n", "The Nova Constellatio coins are the first coins struck under the authority of the United States of America. These pattern coins were struck in early 1783, and are known in three silver denominations (1,000-Units, 500-Units, 100-Units), and one copper denomination (5-Units). All known examples bear the legend \"NOVA CONSTELLATIO\" with the exception of a unique silver 500-Unit piece.\n", "The first group of these coins reviewed by numismatists were 10 silver pieces and one bronze piece found in the mid-nineteenth century. By 1881 the number of coins had grown to 43, and many more have been found since. These coins were first attributed to Bar Kokhba by Moritz Abraham Levy in 1862 and Frederic Madden in 1864.\n", "In 1818, after acquiring permission from the Spanish government, he created the first coin that existed in Texas. On one side of the coin, Garza used his initials \"JAG\" (José Antonio de la Garza) along to the year of creation (1818). On the other side of the coin, he fixed a drawing of one star. \n", "The Half union (separate varieties known as J-1546 through J-1549) was a United States coin minted as a pattern, or a coin not approved for release, with a face value of fifty U.S. Dollars. It is often thought of as one of the most significant and well-known patterns in the history of the U.S. Mint. The basic design, featuring Liberty on the obverse, was slightly modified from the similar $20 \"Liberty Head\" Double Eagle, which was designed by James B. Longacre and minted from 1849 to 1907. \n" ]
A question about how colonization affected the languages of Africa.
This is in reference to the "how the languages are faring" question only, and is current-ish info; if it's violating the 20-year rule too badly, my apologies (I trust it will meet a speedy end in that case!). There are many, *many* languages in Africa; some of them are thriving (e.g., Yoruba, Kinyarwanda), others not. For *basic* information about the current language situation, you might find the [Ethnologue section for Africa](_URL_1_) useful -- while the specific population figures for many of the languages will be off a bit (because of sampling error, the agenda of the organization (missionaries), some out-of-date entries, etc.), it'll give you a general idea. If you move the cursor over regions of the map, it'll give you pop-up boxes with relative "vitality" counts for languages, and you can drill down to individual countries, etc. The [World Atlas of Languages](_URL_0_) may be useful as well, although it's focused more on specific linguistic characteristics.
[ "Throughout the long multilingual history of the African continent, African languages have been subject to phenomena like language contact, language expansion, language shift and language death. A case in point is the Bantu expansion, in which Bantu-speaking peoples expanded over most of Sub-Equatorial Africa, displacing Khoi-San speaking peoples from much of Southeast Africa and Southern Africa and other peoples from Central Africa. Another example is the Arab expansion in the 7th century, which led to the extension of Arabic from its homeland in Asia, into much of North Africa and the Horn of Africa.\n", "Linguistic, archeological and genetic evidence also indicates that during the course of the Bantu expansion, \"independent waves of migration of western African and East African Bantu-speakers into southern Africa occurred.\" In some places, Bantu language, genetic evidence suggests that Bantu language expansion was largely a result of substantial population replacement. In other places, Bantu language expansion, like many other languages, has been documented with population genetic evidence to have occurred by means other than complete or predominant population replacement (e.g. via language shift and admixture of incoming and existing populations). For example, one study found this to be the case in Bantu language speakers who are African Pygmies or are in Mozambique, while another population genetic study found this to be the case in the Bantu language speaking Lemba of Zimbabwe. Where Bantu was adopted via language shift of existing populations, prior African languages were spoken, probably from African language families that are now lost, except as substrate influences of local Bantu languages (such as click sounds in local Bantu languages).\n", "Scholars including Dellal (2013), Miraftab (2012) and Bamgbose (2011) have argued that Africa’s linguistic diversity has been eroded. Language has been used by western colonial powers to divide territories and create new identities which has led to conflicts and tensions between African nations.\n", "In trying to assess the legacy of colonization, some researchers have focused on the type of political and economic institutions that existed before the arrival of Europeans. Heldring and Robinson conclude that while colonization in Africa had overall negative consequences for political and economic development in areas that had previous centralized institutions or that hosted white settlements, it possibly had a positive impact in areas that were virtually stateless, like South Sudan or Somalia. In a complementary analysis, Gerner Hariri observed that areas outside Europe which had State-like institutions before 1500 tend to have less open political systems today. According to the scholar, this is due to the fact that during the colonization, European liberal institutions were not easily implemented. Beyond the military and political advantages, it is possible to explain the domination of European countries over non-European areas by the fact that capitalism did not emerge as the dominant economic institution elsewhere. As Ugo Pipitone argues, prosperous economic institutions that sustain growth and innovation did not prevail in areas like China, the Arab world, or Mesoamerica because of the excessive control of these proto-States on private matters.\n", "European exploration of tropical areas was aided by the New World discovery of quinine, the first effective treatment for malaria. 
Europeans suffered from this disease, but some indigenous populations had developed at least partial resistance to it. In Africa, resistance to malaria has been associated with other genetic changes among sub-Saharan Africans and their descendants, which can cause sickle-cell disease. In fact, the resistance of sub-Saharan Africans to malaria in the Southern United States contributed to the development of slavery in those regions.\n", "The primary evidence for this expansion is linguistic - a great many of the languages spoken across Sub-Equatorial Africa are remarkably similar to each other, suggesting the common cultural origin of their original speakers. The linguistic core of the Bantu languages, which comprise a branch of the Niger–Congo family, was located in the adjoining regions of Cameroon and Nigeria. However, attempts to trace the exact route of the expansion, to correlate it with archaeological evidence and genetic evidence, have not been conclusive; thus although the expansion is widely accepted as having taken place, many aspects of it remain in doubt or are highly contested.\n", "Because Indigenous nations were deemed to be \"uncivilized,\" European powers declared the territorial sovereignty of Africa as openly available, which initiated the Scramble for Africa in the late nineteenth century. With the continent of Africa conceptualized as effectively \"ownerless\" territory, Europeans positioned themselves as its redeemers and rightful colonial rulers. In the European colonial mindset, Africans were inferior and incapable of being \"civilized\" because they had failed to properly manage or exploit the natural resources available to them. As a result, they were deemed to be obstacles to capitalist investment, extraction, and production of natural resources in the construction of a new colonial empire and built environment. The immense diversity of the Indigenous peoples of Africa was flattened by this colonial perception, which labeled them instead as an \"unrepresentable nomadic horde of apprehensions that ran across European territories.\"\n" ]
How do people find out who a Reddit user is in real life?
Probably not, unless you mention doing something very illegal that warrants notifying the authorities, who can seize IP logs & shit.
[ "When various users answer a question they are awarded a certain number of points, the person who asked the question can then select the best answer and will be awarded points. This user can then ask questions on the website using the points that he/she was awarded. The users with most points are ranked daily with the option to display their real identities, if they choose to do so.\n", "To verify a profile, users must take multiple selfies which are then approved by a team of 5,000 moderators. The actual profile is generated with information from other social media sites including Facebook, Foursquare and Instagram. The user chooses whether they are looking for a date or a friend as well as ideal age range and how many places matches must have in contact before they can contact the user.\n", "Acquaintances of a celebrity (friends, managers, parents, siblings, etc.) are invited to answer a quiz about the celebrity to see if they know him/her as well as they think they do. For instance, they will have to watch videos of the celebrity and predict his/her next actions.\n", "The website allows users to create a profile with personal, faith-based, educational, and professional information, and upload pictures. Users can send instant messages to members of the opposite gender, and send them virtual gifts. Users can perform searches of the member database based on criteria such as age, religious sect, location, country of origin, piety, citizenship, language(s), marital status, education, and profession. The website also has a real time live chat feature.\n", "The users themselves determine their name, birthday, contacts, and address, and can choose who may see this information: everyone, registered users, only their contacts, or nobody. The username (first name and nickname) is for \"everyone\", \"contacts\" or \"nobody\" visible. Each user is informed about who has visited their profile. Anonymous visits are not possible.\n", "BULLET::::- Ones that reveal personality directly, often seen as quizzes like \"What kind of person are you?\", \"What kind of boy\" or \"boyfriend are you?\", \"What kind of girl\" or \"girlfriend are you?\", \"What\" (insert popular entertainment series here) \"character are you?\" (series such as Star Wars, Gilmore Girls, Pokémon, Metroid, or South Park)\n", "People tend to disclose more personal information about themselves (e.g. birthday, e-mail, address, hometown and relationship status) in their social networking profiles (Hew 2011). This personally identifiable information could be used by fraudsters to steal users' identities, and posting this information on social media makes it a lot easier for fraudsters to take control of it. \n" ]
Do Coronal Mass Ejections have a significant impact on the life of a star?
I assume you mean in terms of mass lost, and the answer is a resounding "NO". The mass lost in a typical CME from our Sun is [of the order of 10^15 grams](_URL_0_), while the mass of our Sun is of the order of 10^33 grams. So every CME from our Sun releases roughly one part in 10^18 of the Sun's mass. In human terms, if you weigh 100 kg (220 lbs), it would be like losing about 10^-13 grams of mass, a minuscule fraction of the mass of a single human hair.
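Here is the same back-of-the-envelope arithmetic spelled out in a short script; the CME and solar masses are the order-of-magnitude figures from the answer, and the mass of a single hair (roughly 0.6 mg) is my own rough assumption, used only for scale:

```python
# Back-of-the-envelope check of the numbers in the answer above.
cme_mass_g = 1e15        # typical CME mass, from the answer (grams)
sun_mass_g = 1e33        # order-of-magnitude solar mass from the answer (actual ~2e33 g)
person_mass_g = 100e3    # a 100 kg person, in grams
hair_mass_g = 6e-4       # ASSUMPTION: roughly 0.6 mg for a single human hair

fraction = cme_mass_g / sun_mass_g
print(f"CME / Sun mass fraction: {fraction:.0e}")

person_loss_g = person_mass_g * fraction
print(f"Equivalent loss for a 100 kg person: {person_loss_g:.0e} g")
print(f"...which is about {person_loss_g / hair_mass_g:.0e} of one hair's mass")
```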
[ "The ultra-fast coronal mass ejection of August 1972 is suspected of triggering magnetic fuses on naval mines during the Vietnam War, and would have been a life-threatening event to Apollo astronauts if it had occurred during a mission to the Moon.\n", "The first detection of a Coronal mass ejection (CME) as such was made on Dec 1 1971 by R. Tousey of the US Naval Research Laboratory using the 7th Orbiting Solar Observatory (OSO 7). Earlier observations of coronal transients or even phenomena observed visually during solar eclipses are now understood as essentially the same thing.\n", "The following contains a list of coronal mass ejections. A coronal mass ejection (CME) is a massive burst of solar wind and magnetic fields rising above the solar corona or being released into space. Most ejections originate from active regions on the Sun's surface, such as groupings of sunspots associated with frequent flares.\n", "The first detection of a Coronal mass ejection (CME) as such was made on December 1, 1971 by R. Tousey of the US Naval Research Laboratory using OSO 7. Earlier observations of coronal transients or even phenomena observed visually during solar eclipses are now understood as essentially the same thing.\n", "Although scientists previously announced that the magnetic fields of close-in exoplanets may cause increased stellar flares and starspots on their host stars, in 2019 this claim was demonstrated to be false in the HD 189733 system. The failure to detect \"star-planet interactions\" in the well-studied HD 189733 system calls other related claims of the effect into question.\n", "These stars are large enough to produce gamma rays with enough energy to create electron-positron pairs, but the resulting net reduction in counter-gravitational pressure is insufficient to cause the core-overpressure required for supernova. Instead, the contraction caused by pair-creation provokes increased thermonuclear activity within the star that repulses the inward pressure and returns the star to equilibrium. It is thought that stars of this size undergo a series of these pulses until they shed sufficient mass to drop below 100 solar masses, at which point they are no longer hot enough to support pair-creation. Pulsing of this nature may have been responsible for the variations in brightness experienced by Eta Carinae in 1843, though this explanation is not universally accepted.\n", "Stars already lose a small flow of mass via solar wind, coronal mass ejections, and other natural processes. Over the course of a star's life on the main sequence this loss is usually negligible compared to the star's total mass; only at the end of a star's life when it becomes a red giant or a supernova is a large proportion of material ejected. The star lifting techniques that have been proposed would operate by increasing this natural plasma flow and manipulating it with magnetic fields.\n" ]
What did the diet of North American 19th century lumberjacks consist of?
from Roy B. Clarkson's description of a logging camp in West Virginia circa 1909: > A typical evening meal consisted of boiled or roast beef or pork or steak, turnips, hanovers, tomatoes, potatoes, beans, hash, "light" bread or corn bread, and two different kinds of pie (quartered) and cake and cookies. The men were encouraged to eat all they wanted, and at the end of a hard day's work their appetites were prodigious... > > A typical breakfast consisted of hot biscuits, steak (well done or rare), fried eggs, fried potatoes, oatmeal, cake, donuts, prunes or other fruit, and coffee. It was not uncommon for a man to eat half a dozen eggs along with generous helps of other "vittles"... > > One of the ways the men had of relieving the boredom of their existence was to invent nicknames for each other and for the objects around them. Thus, biscuits were "cat-heads", donuts were "fried holes" or "doorknobs", meat was "sow-belly" or "long-hog", light bread was "punk", milk was "cow" or "white line", sugar was "sand train", prunes were "Rocky Mt. Huckleberries", coffee was "java" or "Arbuckles", apple butter was "Pennsylvania Salve", cooks were called "boilers", women cooks were rare but were called "she-boilers" or "Open bottom cooks". > > *Tumult on the Mountains* (1964) pp. 63-65 Clarkson said nothing about lunch, but one suspects that there was enough consumed in two meals to equal three, and that if a man stuffed something into his pocket at breakfast for a noon break nothing would be said. He also noted that a good cook made about $3.00 a day, and the loggers $1.75 to $2.00, but the cook worked seven days a week.
[ "The original Oʼodham diet consisted of regionally available wild game, insects, and plants. Through foraging, Oʼodham ate a variety of regional plants, such as: ironwood seed, honey mesquite, hog potato, and organ-pipe cactus fruit. While the Southwestern United States did not have an ideal climate for cultivating crops, Oʼodham cultivated crops of white tepary beans, papago peas, and Spanish watermelons. They hunted pronghorn antelope, gathered hornworm larvae, and trapped pack rats for sources of meat. Preparation of foods included steaming plants in pits and roasting meat on an open fire.\n", "This section of the country has some of the oldest known foodways in the land, with some recipes almost 400 years old. Native American influences are still quite visible in the use of cornmeal as an essential staple and found in the Southern predilection for hunting wild game, in particular wild turkey, deer, woodcock, and various kinds of waterfowl; for example, coastal North Carolina is a place where hunters will seek tundra swan as a part of Christmas dinner; the original English and Scottish settlers would have rejoiced at this revelation owing to the fact that such was banned amongst the commoner class in what is now the United Kingdom, and naturally, their descendants have not forgotten. Native Americans also consumed turtles and catfish, specifically the snapping turtle and blue catfish. Catfish are often caught with one's bare hands, gutted, breaded, and fried to make a Southern variation on English fish and chips and turtles are turned into stews and soups. Native American tribes of the region such as the Cherokee or Choctaw often cultivated or gathered local plants like pawpaw, maypop and several sorts of squashes and corn as food and spicebush, sassafras as spices, and the aforementioned fruits are still cultivated as food in the South. Maize is to this day found in dishes for breakfast, lunch and dinner in the form of grits, hoecakes, baked cornbread, and spoonbread, and nuts like the hickory, black walnut and pecan are commonly included in desserts and pastries as varied as mince pies, pecan pie, pecan rolls and honey buns (both are types of sticky bun), and quick breads, which were themselves invented in the South during the American Civil War. Peaches have been grown in this region since the 17th century and are a staple crop as well as a favorite fruit, with peach cobbler being a signature dessert.\n", "The traditional wild food is supplemented by store-bought items, most notably black loose leaf tea, which was introduced to the Athabaskan by traders in the 1800s and remains a staple among present day potlatches. Bannock, also known as fry bread, rolls, and salads are also served.\n", "The agave, especially \"Agave murpheyi\", was a major food source for the prehistoric indigenous people of the Southwestern United States. The Hohokam of southern Arizona cultivated large areas of agave.\n", "Indigenous peoples of the Great Plains and Canadian Prairies or Plains Indians relied heavily on American bison (American buffalo) as a food source. The meat was cut in thin slices and dried, either over a slow fire or in the hot sun, until it was hard and brittle which could last for months, making it a main ingredient to be combined with other foods, or eaten on its own. One such use could be pemmican, a concentrated mixture of fat and protein, and fruits such as cranberries, Saskatoon berries, blueberries, cherries, chokeberries, chokecherries, and currants were sometimes added. 
When asked to state traditional staple foods, a group of Plains elders identified \"prairie turnips, fruits (chokecherries, June berries, plums, blueberries, cranberries, strawberries, buffalo berries, gooseberries), potatoes, squash, dried meats (venison, buffalo, jack rabbit, pheasant, and prairie chicken), and wild rice\" as being these staple foods. Bison was a staple of Plains Indians' diets. Many parts were utilized and prepared in numerous ways, including: \"boiled meat, tripe soup perhaps thickened with brains, roasted intestines, jerked/smoked meat, and raw kidneys, liver, tongue sprinkled with gall or bile were eaten immediately after a kill. The animals that Great Plains Indians consumed, like bison, deer, and antelope, were grazing animals. Due to this, they were high in omega-3 fatty acids, an essential acid that many diets lack.\n", "Pemmican became the main source of food throughout the Bison trade nearing the end of the eighteenth century. The Hudson’s Bay Company relied on the food to provide a source of energy for their fur-traders who had relocated to areas that had a scarce food supply but plenty of fur. By the mid-1790s the first systems of small wintering posts had become massive food depots, with Montreal companies organizing depots in the Red River Valley. The Hudson’s Bay Company grew concerned during the turn of the century because their rival, the North West Company, was providing trappers in the north with enough Pemmican to allow them to specialize their hunting patterns to cater to the English trading posts rather than the HBC.\n", "South American cultures began domesticating llamas and alpacas in the highlands of the Andes circa 3500 BCE. These animals were used for both transportation and meat; their fur was shorn or collected to use to make clothing. Guinea pigs were also domesticated as a food source at this time.\n" ]
If I were in the center between two planets with the exact same gravity, would I be ripped apart? Or would I experience weightlessness?
Whether or not gravity can rip you apart depends on the tidal forces, i.e., the differences in gravitational attraction across a body's length. So the answer is "it depends." If you [make a graph](_URL_0_) of the combined potential of two equal point masses, -1/|r+1| - 1/|r-1| (in units where G and the masses are 1), you'd see it has a maximum midway between the two point sources, which means you'd be safe if you were small enough. For a single attractive source, the tidal acceleration across a target body of length L scales as L/r^3, and it would be interesting to see what modification you'd need for a body balanced in the center. Of course the center is unstable, so a push along the line joining the two planets will make you fall toward one or the other; then you can use the L/r^3 scaling to figure it out, ignoring the source you didn't fall toward.
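For anyone who wants to see the shape of that curve, here is a small toy-model sketch (G and both masses set to 1, planets at x = -1 and x = +1); it is an illustrative approximation along the axis joining the planets, not a rigorous treatment:

```python
import numpy as np

# Toy model: two equal point masses at x = -1 and x = +1, with G = M = 1.
def potential(x):
    """Combined gravitational potential along the axis joining the two masses."""
    return -1.0 / np.abs(x + 1.0) - 1.0 / np.abs(x - 1.0)

def accel(x):
    """Signed net gravitational acceleration along the axis."""
    return -(x + 1.0) / np.abs(x + 1.0) ** 3 - (x - 1.0) / np.abs(x - 1.0) ** 3

for x in np.linspace(-0.9, 0.9, 7):
    print(f"x = {x:+.2f}   potential = {potential(x):8.3f}   accel = {accel(x):+9.3f}")

# Differential (tidal) pull across a short body of half-length h at the midpoint:
h = 1e-3
print("tidal stretch across body at the centre:", accel(h) - accel(-h))
# The net pull at x = 0 is zero and the stretch across a short body is tiny,
# so a small object is safe there; but any nudge along the axis sends it
# toward one planet or the other, matching the answer's instability point.
```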
[ "The pull of gravity in LEO is only slightly less than on the Earth's surface. This is because the distance to LEO from the Earth's surface is far less than the Earth's radius. However, an object in orbit is, by definition, in free fall, since there is no force holding it up. As a result objects in orbit, including people, experience a sense of weightlessness, even though they are not actually without weight.\n", "A more basic manifestation of the same effect involves two bodies that are falling side by side towards the Earth. In a reference frame that is in free fall alongside these bodies, they appear to hover weightlessly – but not exactly so. These bodies are not falling in precisely the same direction, but towards a single point in space: namely, the Earth's center of gravity. Consequently, there is a component of each body's motion towards the other (see the figure). In a small environment such as a freely falling lift, this relative acceleration is minuscule, while for skydivers on opposite sides of the Earth, the effect is large. Such differences in force are also responsible for the tides in the Earth's oceans, so the term \"tidal effect\" is used for this phenomenon.\n", "Gravity at the altitude of the ISS is approximately 90% as strong as at Earth's surface, but objects in orbit are in a continuous state of freefall, resulting in an apparent state of weightlessness. This perceived weightlessness is disturbed by five separate effects:\n", "That is, being on the surface of the Earth is equivalent to being inside a spaceship (far from any sources of gravity) that is being accelerated by its engines. The direction or vector of acceleration equivalence on the surface of the earth is \"up\" or directly opposite the center of the planet while the vector of acceleration in a spaceship is directly opposite from the mass ejected by its thrusters. From this principle, Einstein deduced that free-fall is inertial motion. Objects in free-fall do not experience being accelerated downward (e.g. toward the earth or other massive body) but rather weightlessness and no acceleration. In an inertial frame of reference bodies (and photons, or light) obey Newton's first law, moving at constant velocity in straight lines. Analogously, in a curved spacetime the world line of an inertial particle or pulse of light is \"as straight as possible\" (in space \"and\" time). Such a world line is called a geodesic and from the point of view of the inertial frame is a straight line. This is why an accelerometer in free-fall doesn't register any acceleration; there isn't any.\n", "A body in free fall (which by definition entails no aerodynamic forces) near the surface of the earth has an acceleration approximately equal to 9.8 m s with respect to a coordinate frame tied to the earth. If the body is in a freely falling lift and subject to no pushes or pulls from the lift or its contents, the acceleration with respect to the lift would be zero. If on the other hand, the body is subject to forces exerted by other bodies within the lift, it will have an acceleration with respect to the freely falling lift. This acceleration which is not due to gravity is called \"proper acceleration\". On this approach, weightlessness holds when proper acceleration is zero.\n", "At a distance relatively close to Earth (less than 3000 km), gravity is only slightly reduced. As an object orbits a body such as the Earth, gravity is still attracting objects towards the Earth and the object is accelerated downward at almost 1g. 
Because the objects are typically moving laterally with respect to the surface at such immense speeds, the object will not lose altitude because of the curvature of the Earth. When viewed from an orbiting observer, other close objects in space appear to be floating because everything is being pulled towards Earth at the same speed, but also moving forward as the Earth's surface \"falls\" away below. All these objects are in free fall, not zero gravity.\n", "When the gravitational field is non-uniform, a body in free fall experiences tidal effects and is not stress-free. Near a black hole, such tidal effects can be very strong. In the case of the Earth, the effects are minor, especially on objects of relatively small dimensions (such as the human body or a spacecraft) and the overall sensation of weightlessness in these cases is preserved. This condition is known as microgravity, and it prevails in orbiting spacecraft.\n" ]
Why are some corn fields allowed to go brown before they are harvested?
Don't worry, you're not seeing waste; it's perfectly normal. You're seeing the corn plant, not the corn ear. The plant dries down and turns brown before the grain moisture gets low enough to harvest and store. If you harvest corn while it's wet and store it, it will rot. Most corn for cattle actually gets harvested early, while there's still some green on the plant.
[ "Field corn primarily grown for livestock feed and ethanol production is allowed to mature fully before being shelled off the cob before being stored in silos, pits, bins or grain \"flats\". Field corn can also be harvested as high-moisture corn, shelled off the cob and piled and packed like silage for fermentation; or the entire plant may be chopped while still very high in moisture with the resulting silage either loaded and packed in plastic bags, piled and packed in pits, or blown into and stored in vertical silos. \n", "In North America, field corn also known as \"grain\" corn, is corn (\"Zea mays\") grown for livestock fodder (silage), ethanol, cereal and processed food products. The principal field corn varieties are dent corn, flint corn, flour corn (also known as soft corn) which includes blue corn (\"Zea mays amylacea\"), and waxy corn.\n", "Often, green manure crops are grown for a specific period, and then plowed under before reaching full maturity in order to improve soil fertility and quality. Also the stalks left block the soil from being eroded.\n", "Maize kernels naturally occur in many colors, depending on the cultivar: from pale white, to yellow, to red and bluish purple. Likewise, corn meal and the tortillas made from it may be similarly colored. White and yellow tortillas are by far the most common, however.\n", "Varieties of blue corn cultivated in the Southwestern United States vary in their respective contents of anthocyanins, the polyphenol pigment giving the corn its unique color. Anthocyanins having the highest contents are cyanidin 3-glucoside (most abundant), pelargonidin and peonidin 3-glucoside.\n", "Black walnut is primarily a pioneer species similar to red and silver maple and black cherry. Because of this, black walnut is a common weed tree found along roadsides, fields, and forest edges in the eastern US. It will grow in closed forests, but is classified as shade intolerant, this means it requires full sun for optimal growth and nut production.\n", "To the Nahua, corn is known not only as a physical life sustainer, but also a spiritual sustainer. It is used at every meal and also feeds domesticated animals. It is the main source of income and is therefore an inevitable constituent in community *politics. Corn can be divided, by color, into four groups. White is superior for human consumption and is sold at the market. Yellow is resilient, thus grown in the dry season and is the corn of choice for domesticated animals. Black and red corns are grown in lesser amounts and are not preferred for eating.\n" ]
Why would we ever care about the distinction between a Newtonian fluid and a non-Newtonian fluid? (I put this with an engineering flair because I want to know if there's any practical use, not just a theoretical one.)
Imagine you're trying to fill a mold with a material. You'd likely want to do that as fast as you can. Well, what happens if the material doesn't behave like a Newtonian fluid? It becomes much more difficult to predict how the mold will fill without extensive experimentation. You can also get neat things like _URL_1_, which lets us do _URL_0_.
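To make that concrete, here is a minimal sketch comparing a Newtonian fluid with a shear-thinning power-law fluid; the viscosity, consistency index, and flow index values are made up for illustration, not properties of any particular material:

```python
# Apparent viscosity vs. shear rate: Newtonian fluid vs. a power-law
# (shear-thinning) fluid. All parameter values are invented for illustration.
mu_newtonian = 1.0   # Pa*s, constant viscosity of the Newtonian fluid
K, n = 1.0, 0.4      # power-law consistency and flow index (n < 1 means shear-thinning)

def apparent_viscosity_power_law(shear_rate):
    """Ostwald-de Waele model: tau = K * shear_rate**n, so mu_app = tau / shear_rate."""
    return K * shear_rate ** (n - 1.0)

for shear_rate in (0.1, 1.0, 10.0, 100.0):   # 1/s
    print(f"shear rate {shear_rate:6.1f} 1/s | Newtonian mu = {mu_newtonian:.2f} Pa*s"
          f" | power-law apparent mu = {apparent_viscosity_power_law(shear_rate):.3f} Pa*s")

# Filling a mold quickly means high shear rates; the power-law material's
# effective viscosity drops sharply with shear, so assuming a constant
# (Newtonian) viscosity would badly mispredict how the mold fills.
```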
[ "Newtonian fluids are the simplest mathematical models of fluids that account for viscosity. While no real fluid fits the definition perfectly, many common liquids and gases, such as water and air, can be assumed to be Newtonian for practical calculations under ordinary conditions. However, non-Newtonian fluids are relatively common, and include oobleck (which becomes stiffer when vigorously sheared), or non-drip paint (which becomes thinner when sheared). Other examples include many polymer solutions (which exhibit the Weissenberg effect), molten polymers, many solid suspensions, blood, and most highly viscous fluids.\n", "A Newtonian fluid (named after Isaac Newton) is defined to be a fluid whose shear stress is linearly proportional to the velocity gradient in the direction perpendicular to the plane of shear. This definition means regardless of the forces acting on a fluid, it \"continues to flow\". For example, water is a Newtonian fluid, because it continues to display fluid properties no matter how much it is stirred or mixed. A slightly less rigorous definition is that the drag of a small object being moved slowly through the fluid is proportional to the force applied to the object. (Compare friction). Important fluids, like water as well as most gases, behave—to good approximation—as a Newtonian fluid under normal conditions on Earth.\n", "A non-Newtonian fluid is a fluid whose flow properties differ in any way from those of Newtonian fluids. Most commonly the viscosity of non-Newtonian fluids is a function of shear rate or shear rate history. However, there are some non-Newtonian fluids with shear-independent viscosity, that nonetheless exhibit normal stress-differences or other non-Newtonian behaviour. Many salt solutions and molten polymers are non-Newtonian fluids, as are many commonly found substances such as ketchup, custard, toothpaste, starch suspensions, paint, blood, and shampoo. In a Newtonian fluid, the relation between the shear stress and the shear rate is linear, passing through the origin, the constant of proportionality being the coefficient of viscosity. In a non-Newtonian fluid, the relation between the shear stress and the shear rate is different, and can even be time-dependent. The study of the non-Newtonian fluids is usually called rheology. A few examples are given here.\n", "A non-Newtonian fluid is a fluid that does not follow Newton's law of viscosity, i.e. constant viscosity independent of stress. In non-Newtonian fluids, viscosity can change when under force to either more liquid or more solid. Ketchup, for example, becomes runnier when shaken and is thus a non-Newtonian fluid. Many salt solutions and molten polymers are non-Newtonian fluids, as are many commonly found substances such as custard, honey, toothpaste, starch suspensions, corn starch, paint, blood, and shampoo.\n", "More precisely, a fluid is Newtonian only if the tensors that describe the viscous stress and the strain rate are related by a constant viscosity tensor that does not depend on the stress state and velocity of the flow. If the fluid is also isotropic (that is, its mechanical properties are the same along any direction), the viscosity tensor reduces to two real coefficients, describing the fluid's resistance to continuous shear deformation and continuous compression or expansion, respectively.\n", "A Non-Newtonian fluid is a fluid which is different from the Newtonian fluid as the viscosity of non-Newtonian fluids is dependent on shear rate or shear rate history. 
In a non-Newtonian fluid, the relation between the shear stress and the shear rate is different and can even be time-dependent (Time Dependent Viscosity). Therefore, a constant coefficient of viscosity cannot be defined.\n", "Newton's law of viscosity is not a fundamental law of nature, but rather a constitutive equation (like Hooke's law, Fick's law, and Ohm's law) which serves to define the viscosity formula_10. Its form is motivated by experiments which show that for a wide range of fluids, formula_10 is independent of strain rate. Such fluids are called Newtonian. Gases, water, and many common liquids can be considered Newtonian in ordinary conditions and contexts. However, there are many non-Newtonian fluids that significantly deviate from this behavior. For example:\n" ]
Would supersonic air flowing across a wing fixed to the floor in a lab produce a constant sonic boom?
A sonic boom is the perceptual phenomenon caused by the passage of a shock wave over the human ear. Every object that is going faster than the speed of sound in a medium has an associated shock wave attached to or very near the object. This shock wave is always there once the object reaches a speed faster than Mach 1. In a stationary wind tunnel, with an object exposed to supersonic flow, the shock wave would be stationary relative to the object and therefore relative to the observer. If the observer were inside the tunnel, they would not hear a sonic boom unless they deliberately walked through the shock.
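As a supplementary sketch of the geometry (not something the answer above relies on), the weak-wave pattern around a small supersonic disturbance forms a cone with the Mach angle mu = arcsin(1/M); the Mach numbers below are arbitrary examples:

```python
import math

def mach_angle_deg(mach):
    """Half-angle of the Mach cone trailing a small supersonic disturbance."""
    if mach <= 1.0:
        raise ValueError("A Mach cone only forms for supersonic speeds (M > 1).")
    return math.degrees(math.asin(1.0 / mach))

for m in (1.1, 1.5, 2.0, 3.0):
    print(f"Mach {m}: Mach angle ~ {mach_angle_deg(m):.1f} degrees")

# In a supersonic wind tunnel the wave pattern is fixed relative to the model,
# so a stationary observer hears no repeated boom, consistent with the answer.
```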
[ "The aerodynamics of supersonic flight is called compressible flow because of the compression (physics) associated with the shock waves or \"sonic boom\" created by any object travelling faster than sound.\n", "The later shock waves are somewhat faster than the first one, travel faster and add to the main shockwave at some distance away from the aircraft to create a much more defined N-wave shape. This maximizes both the magnitude and the \"rise time\" of the shock which makes the boom seem louder. On most aircraft designs the characteristic distance is about , meaning that below this altitude the sonic boom will be \"softer\". However, the drag at this altitude or below makes supersonic travel particularly inefficient, which poses a serious problem.\n", "As an aircraft enters the transonic region close to the speed of sound, the acceleration of air over curved areas can cause the flow to go supersonic. This generates a shock wave and creates considerable drag, known as wave drag. The increase in drag is so rapid and powerful that it gives rise to the concept of a sound barrier.\n", "For the purpose of comparison, in supersonic flows, additional increased expansion may be achieved through an expansion fan, also known as a Prandtl–Meyer expansion fan. The accompanying expansion wave may approach and eventually collide and recombine with the shock wave, creating a process of destructive interference. The sonic boom associated with the passage of a supersonic aircraft is a type of sound wave produced by constructive interference.\n", "If the intensity of the boom can be reduced, then this may make even very large designs of supersonic aircraft acceptable for overland flight. Research suggests that changes to the nose cone and tail can reduce the intensity of the sonic boom below that needed to cause complaints. During the original SST efforts in the 1960s, it was suggested that careful shaping of the fuselage of the aircraft could reduce the intensity of the sonic boom's shock waves that reach the ground. One design caused the shock waves to interfere with each other, greatly reducing sonic boom. This was difficult to test at the time, but the increasing power of computer-aided design has since made this considerably easier. In 2003, a Shaped Sonic Boom Demonstration aircraft was flown which proved the soundness of the design and demonstrated the capability of reducing the boom by about half. Even lengthening the vehicle (without significantly increasing the weight) would seem to reduce the boom intensity.\n", "One of the many differences between supersonic and hypersonic flight concerns the interaction of the boundary layer and the shock waves generated from the nose of the aircraft. Normally the boundary layer is quite thin compared to the streamline of airflow over the wing, and can be considered separately from other aerodynamic effects. However, as the speed increases and the shock wave increasingly approaches the sides of the craft, there comes a point where the two start to interact and the flowfield becomes very complex. Long before that point, the boundary layer starts to interact with the air trapped between the shock wave and the fuselage, the air that is being used for lift on a waverider.\n", "At high-subsonic flight speeds, the local speed of the airflow can reach the speed of sound where the flow accelerates around the aircraft body and wings. The speed at which this development occurs varies from aircraft to aircraft and is known as the critical Mach number. 
The resulting shock waves formed at these points of sonic flow can result in a sudden increase in drag, called wave drag. To reduce the number and power of these shock waves, an aerodynamic shape should change in cross sectional area as smoothly as possible.\n" ]
What is the connection between Majorana Mass and a Majorana Particle?
If you give a Majorana mass to a "normal" four-component spinor, making it satisfy the Majorana *equation*, you obtain *two* particles. They are antiparticles of each other and must be electrically neutral. Only when you additionally impose the Majorana *condition*, namely that the spinor is its own charge conjugate, do you get one single Majorana particle, neutral and its own antiparticle.
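For readers who want the two ingredients above in symbols, the standard textbook forms (conventions for the charge-conjugation matrix C vary) are:

\[ \psi^{c} \equiv C \bar{\psi}^{T}, \qquad \psi^{c} = \psi \quad \text{(Majorana condition)} \]

\[ \mathcal{L}_{M} = -\tfrac{1}{2} m \left( \overline{\psi^{c}} \psi + \bar{\psi} \psi^{c} \right) \quad \text{(Majorana mass term)} \]

Because charge conjugation flips every additive charge, the condition can only be imposed on a field that carries none, which is why only an electrically neutral fermion can end up as a single Majorana particle that is its own antiparticle.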
[ "However, the right-handed sterile neutrinos introduced to explain neutrino oscillation could have Majorana masses. If they do, then at low energy (after electroweak symmetry breaking), by the seesaw mechanism, the neutrino fields would naturally behave as six Majorana fields, with three of them expected to have very high masses (comparable to the GUT scale) and the other three expected to have very low masses (below 1 eV). If right-handed neutrinos exist but do not have a Majorana mass, the neutrinos would instead behave as three Dirac fermions and their antiparticles with masses coming directly from the Higgs interaction, like the other Standard Model fermions.\n", "It is possible to include both Dirac and Majorana mass terms in the same theory, which (in contrast to the Dirac-mass-only approach) can provide a “natural” explanation for the smallness of the observed neutrino masses, by linking the right-handed neutrinos to yet-unknown physics around the GUT scale (see seesaw mechanism).\n", "If the neutrino is a Majorana particle, the mass may be calculated by finding the half-life of neutrinoless double-beta decay of certain nuclei. The current lowest upper limit on the Majorana mass of the neutrino has been set by KamLAND-Zen: 0.060–0.161 eV.\n", "Since Majorana masses of the right-handed neutrino are forbidden by symmetry, GUTs predict the Majorana masses of right-handed neutrinos to be close to the GUT scale where the symmetry is spontaneously broken in those models. In supersymmetric GUTs, this scale tends to be larger than would be desirable to obtain realistic masses of the light, mostly left-handed neutrinos (see neutrino oscillation) via the seesaw mechanism. These predictions are independent of the Georgi–Jarlskog mass relations, wherein some GUTs predict other fermion mass ratios.\n", "EXO measures the rate of neutrinoless decay events above the expected background of similar signals, to find or limit the double beta decay half-life, which relates to the effective neutrino mass using nuclear matrix elements. A limit on effective neutrino mass below 0.01 eV would determine the neutrino mass order. The effective neutrino mass is dependent on the lightest neutrino mass in such a way that that bound indicates the normal mass hierarchy.\n", "The precise mass of the neutrino is important not only for particle physics, but also for cosmology. The observation of neutrino oscillation is strong evidence in favor of massive neutrinos, but gives only a weak lower bound.\n", "In particle physics, majorons (named after Ettore Majorana) are a hypothetical type of Goldstone boson that are theorized to mediate the neutrino mass violation of lepton number or \"B\" − \"L\" in certain high energy collisions such as\n" ]
Can theories be proven or is it that they have just failed to be disproved?
You are correct. The general answer for the scientific method is that nothing can be proven, only disproven. However, once a hypothesis has passed enough experimental tests that the only other competing hypotheses are clearly wrong, the hypothesis becomes a "theory", meaning it has a great deal of acceptance in the scientific community.
[ "To the Sceptics, none of the contending theories, proposed by limited intellect, can be known to be true, since they are mutually contradictory. Also, any new theory is bound to contradict existing theories, and hence cannot be true. Hence nothing can be known to be true. Thus the Sceptics conclude that the contradictions of metaphysics and the impossibility of omniscience leads them to accept Scepticism.\n", "However, such theories are controversial and cannot be all true. Conclusive evidence proving or disproving them has never been presented and there is no consensus amongst scholars on whether or not such links exist.\n", "The Duhem–Quine thesis argues that no scientific hypothesis is by itself capable of making predictions. Instead, deriving predictions from the hypothesis typically requires background assumptions that several other hypotheses are correct; for example, that an experiment works as predicted or that previous scientific theory is sufficiently accurate. For instance, as evidence against the idea that the Earth is in motion, some people objected that birds did not get thrown off into the sky whenever they let go of a tree branch. Later theories of physics and astronomy, such as classical and relativistic mechanics could account for such observations without positing a fixed Earth, and in due course they replaced the static-Earth auxiliary hypotheses and initial conditions.\n", "In natural science, impossibility assertions (like other assertions) come to be widely accepted as overwhelmingly probable rather than considered proved to the point of being unchallengeable. The basis for this strong acceptance is a combination of extensive evidence of something not occurring, combined with an underlying theory, very successful in making predictions, whose assumptions lead logically to the conclusion that something is impossible.\n", "The Quine-Duhem thesis argues that it's impossible to test a single hypothesis on its own, since each one comes as part of an environment of theories. Thus we can only say that the whole package of relevant theories has been collectively falsified, but cannot conclusively say which element of the package must be replaced. An example of this is given by the discovery of the planet Neptune: when the motion of Uranus was found not to match the predictions of Newton's laws, the theory \"There are seven planets in the solar system\" was rejected, and not Newton's laws themselves. Popper discussed this critique of naïve falsificationism in Chapters 3 and 4 of \"The Logic of Scientific Discovery\". For Popper, theories are accepted or rejected via a sort of selection process. Theories that say more about the way things appear are to be preferred over those that do not; the more generally applicable a theory is, the greater its value. Thus Newton's laws, with their wide general application, are to be preferred over the much more specific \"the solar system has seven planets\".\n", "Arguments involving underdetermination attempt to show that there is no reason to believe some conclusion because it is underdetermined by the evidence. 
Then, if the evidence available at a particular time can be equally well explained by at least one other hypothesis, there is no reason to believe it rather than the equally supported rival, which can be considered observationally equivalent (although many other hypotheses may still be eliminated).\n", "Acceptance of a theory does not require that all of its major predictions be tested, if it is already supported by sufficiently strong evidence. For example, certain tests may be unfeasible or technically difficult. As a result, theories may make predictions that have not yet been confirmed or proven incorrect; in this case, the predicted results may be described informally with the term \"theoretical\". These predictions can be tested at a later time, and if they are incorrect, this may lead to the revision or rejection of the theory.\n" ]
Are there any examples of animals practicing medicine in the wild?
There are parrots in the amazon that eat fruit that is toxic to them. Every evening they fly to a riverbank with an exposed clay cliff and eat some of the clay. The minerals in the clay neutralise the poison. I wish I could remember more details, like the species of parrot and the specific type of fruit.
[ "Some animal parts used as medicinals can be considered rather strange such as cows' gallstones, hornet's nests, leeches, and scorpion. Other examples of animal parts include horn of the antelope or buffalo, deer antlers, testicles and penis bone of the dog, and snake bile. Some TCM textbooks still recommend preparations containing animal tissues, but there has been little research to justify the claimed clinical efficacy of many TCM animal products.\n", "Animals may also play a role, in particular in research. In traditional remedies, animals are extensively used as drugs. Many animals also medicate \"themselves\". Zoopharmacognosy is the study of how animals use plants, insects and other inorganic materials in self-medicatation. In an interview with the late Neil Campbell, Eloy Rodriguez describes the importance of biodiversity:\n", "Other 20th-century medical advances and treatments that relied on research performed in animals include organ transplant techniques, the heart-lung machine, antibiotics, and the whooping cough vaccine. Treatments for animal diseases have also been developed, including for rabies, anthrax, glanders, feline immunodeficiency virus (FIV), tuberculosis, Texas cattle fever, classical swine fever (hog cholera), heartworm, and other parasitic infections. Animal experimentation continues to be required for biomedical research, and is used with the aim of solving medical problems such as Alzheimer's disease, AIDS, multiple sclerosis, spinal cord injury, many headaches, and other conditions in which there is no useful \"in vitro\" model system available.\n", "There is not much information on the Sukuma tribes use of animals in their medicine.This is mostly because a lot of the research that has been done on the medicinal practices of this tribe have been plant based. A study was conducted in the Busega District of Tanzania, an area comprising the Serengeti Game Reserve and Lake Victoria, to determine which faunal resources healers use to treat illnesses within the community. 98 Community members (farmers, healers, fisherman, and cultural officers), aged 55 and older, were interviewed to obtain their knowledge on which animals were used to treat illnesses. These were the results of the study:\n", "Throughout the 20th century, research that used live animals has led to many other medical advances and treatments for human diseases, such as: organ transplant techniques and anti-transplant rejection medications, the heart-lung machine, antibiotics like penicillin, and whooping cough vaccine.\n", "An important concern is the case of medications, which are routinely tested on animals to ensure they are effective and safe, and may also contain animal ingredients, such as lactose, gelatine, or stearates. There may be no alternatives to prescribed medication or these alternatives may be unsuitable, less effective, or have more adverse side effects. Experimentation with laboratory animals is also used for evaluating the safety of vaccines, food additives, cosmetics, household products, workplace chemicals, and many other substances.\n", "Animals such as the fruit fly \"Drosophila melanogaster\" serve a major role in science as experimental models. Animals have been used to create vaccines since their discovery in the 18th century. Some medicines such as the cancer drug Yondelis are based on toxins or other molecules of animal origin.\n" ]
Can someone lend a neutral analysis of Mao, the Cultural Revolution, and the Great Leap Forward?
Well, I will be honest: I don't have anything more than an undergraduate's understanding of China, having taken a few classes on the subject, but I have a few things to say here. First, the problem with the article you linked is that it argues overwhelmingly from the position of "well, the evidence that this happened might not be true." The problem with that tactic is that it is something of a rabbit hole: you can argue almost anything from that position, but it doesn't prove that the reverse happened. Additionally, even if the number of people who died in the famine is disputed (estimates typically range from 30-50 million in most things I've seen), there does seem to be general agreement that such a famine occurred. Basically, I would say that unless some smoking-gun evidence appears suggesting this massive famine didn't occur, it is reasonable to assume it did. Oh, and Mao is still on the hook for the nightmare that was the Cultural Revolution.
[ "The official view aimed to separate Mao's actions during the Cultural Revolution from his \"heroic\" revolutionary activities during the Chinese Civil War and the Second Sino-Japanese War. It also separated Mao's personal mistakes from the correctness of the theory that he created, going as far as to rationalize that the Cultural Revolution contravened the spirit of Mao Zedong Thought, which remains an official guiding ideology of the Party. Deng Xiaoping famously summed this up with the phrase \"Mao was 70% good, 30% bad.\" After the Cultural Revolution, Deng affirmed that Maoist ideology was responsible for the revolutionary success of the Communist Party, but abandoned it in practice to favour \"Socialism with Chinese characteristics\", a very different model of state-directed market economics.\n", "Mao: A Reinterpretation is a biography of the Chinese communist revolutionary and politician Mao Zedong written by Lee Feigon, an American historian of China then working at Colby College. It was first published by Ivan R. Dee in 2002, and would form the basis of Feigon's 2006 documentary \"Passion of the Mao\".\n", "In the early 1960s, Cadart became a researcher at the (CERI) of Sciences Po and dedicated himself to the study of contemporary China. At a time when many sinologists considered Mao Zedong a visionary leader, Cadart cautioned that Mao's political campaigns, most notably the Cultural Revolution, were not the liberation movements that many European leftists had envisioned, but were merely political machinations for maintaining his own power. This was before Simon Leys published \"Les Habits neufs du président Mao\" in 1971, the famous work that exposed the destructions of Mao's Cultural Revolution.\n", "To make sense of the mass chaos caused by Mao's leadership in the Cultural Revolution while preserving the Party's authority and legitimacy, Mao's successors needed to lend the event a \"proper\" historical judgment. On June 27, 1981, the Central Committee adopted the \"\"Resolution on Certain Questions in the History of Our Party Since the Founding of the People's Republic of China\",\" an official assessment of major historical events since 1949.\n", "The Resolution frankly noted Mao's leadership role in the movement, stating that \"chief responsibility for the grave 'Left' error of the 'Cultural Revolution,' an error comprehensive in magnitude and protracted in duration, does indeed lie with Comrade Mao Zedong.\" It diluted blame on Mao himself by asserting that the movement was \"manipulated by the counterrevolutionary groups of Lin Biao and Jiang Qing,\" who caused its worst excesses. The Resolution affirmed that the Cultural Revolution \"brought serious disaster and turmoil to the Communist Party and the Chinese people.\"\n", "\"The Cultural Revolution in China\" is written from the perspective of trying to understand the thinking that lay behind the revolution, particularly Mao Zedong's preoccupations. Mao is seen as aiming to recapture a revolutionary sense in a population that had known only, or had grown used to, stable Communism, so that it could \"re-educate the Party\" (pp. 20, 27); to instill a realisation that the people needed the guidance of the Party and much as the other way round (p. 20); to re-educate intellectuals who failed to see that their role in society, like that of all other groups, was to 'Serve the People' (pp. 
33, 43); and finally to secure a succession, not stage-managed by the Party hierarchy or even by Mao himself but the product of interaction between a revitalised people and a revitalised Party (p. 26).\n", "BULLET::::- Significance: The meeting repeated Mao Zedong's assessment that Chinese economy was to take agriculture as basis to develop industry. The session's official communique also started to outline Mao Zedong's \"theory of continued revolution under proletarian dictatorship\" which led to the Cultural Revolution.\n" ]
Is there any peculiarity about the places a supercontinent splits (e.g. the Atlantic coastlines of Africa and South America) or is it just about the subterranean magma flow?
There are a couple of things that (potentially) contribute to where the rift system that breaks up a supercontinent will localize, but all of them will generally lead to the rift system initiating broadly in the center of the supercontinent and roughly coincident with where the supercontinent was joined together in the first place. In detail, the presence of the supercontinent contributes to (1) an accumulation of heat beneath the supercontinent from insulation of the mantle by the thick continental crust, which will be greatest roughly near the center of the continental mass, and (2) the development of a [geoid](_URL_1_) high within the supercontinent (and a corresponding geoid low in the surrounding ocean basin). In a very general sense, warmer earth materials mean weaker earth materials, so the first property would tend to make areas in the center of the supercontinent weaker. The geoid high represents an instability that likely drives (or at least contributes to) the breakup of the supercontinent. Once there is a force driving supercontinent breakup, the resulting rifts will localize where the continental crust is the weakest (i.e. if you start deforming any heterogeneous material, the weakest portion will start deforming first). As mentioned earlier, the center of the supercontinent may be warmer and weaker due to the insulation effect, but anything that contributes to a reduction in strength in a particular area may help to initiate a rift in that region. One of the primary sources of weakness is preexisting structures, meaning that the [sutures](_URL_0_) marking the locations where the constituent continental portions of plates were joined during supercontinent assembly are likely important 'guides' for the localization of rifts during breakup. Pangea is a good example, at least in the North America - Europe portion, as the rifting largely followed the location of the mountain ranges (e.g. the Appalachians, etc.) that were formed during the assembly of Pangea. For those interested in more details, there are a variety of review papers about the supercontinent cycle which discuss the breakup and assembly processes (and the variety of ideas related to them) in great detail, e.g. these papers [1](_URL_2_), [2](_URL_3_), or [3](_URL_4_).
[ "In plate tectonics theory during the breakup of a continent, three divergent boundaries form, radiating out from a central point (the triple junction). One of these divergent plate boundaries fails (see aulacogen) and the other two continue spreading to form an ocean. The opening of the south Atlantic Ocean started at the south of the South American and African continents, reaching a triple junction in the present Gulf of Guinea, from where it continued to the west. The NE-trending Benue Trough is the failed arm of this junction.\n", "During break-up of the supercontinent, rifting environments dominate. This is followed by passive margin environments, while seafloor spreading continues and the oceans grow. This in turn is followed by the development of collisional environments that become increasingly important with time. First collisions are between continents and island arcs, but lead ultimately to continent-continent collisions. This is the situation that was observed during the Paleozoic supercontinent cycle and is being observed for the Mesozoic–Cenozoic supercontinent cycle, still in progress.\n", "Many existing continental rift valleys are the result of a failed arm (aulacogen) of a triple junction, although there are two, the East African Rift and the Baikal Rift Zone, which are currently active, as well as a third which may be, the West Antarctic Rift. In these instances, not only the crust, but also entire tectonic plates, are in the process of breaking apart to create new plates. If they continue, continental rifts will eventually become oceanic rifts.\n", "The break-up of Sclavia, and possibly other continents or supercratons, can be linked to a global pulse of magmatic activity around 2.33–2.1 Gya probably caused by increased mantle plume activity. Related results of this mantle activity include the 2.3 Ga-old Precambrian dyke swarms in the Dharwar Craton in southern India which were emplaced in only five million years. Similar swarms have been found in what is today Antarctica, Australia, Finland, Greenland, and North America.\n", "It was the first supercontinent to form on Earth, all the continental crust on Earth came together and formed one giant land mass. This land mass was surrounded by an even larger ocean, known as Mirovia. There were about four smaller continents that collided and came together to form Rodinia. This event is called the Grenville Orogeny. This caused there to be mountain building along the areas of were continents collided. This is because the continental crust is not very dense so neither continent would sink or sub duct. This causes the formation of Fold and Thrust belts, similar to the Himalayas today.\n", "Supercontinents describe the merger of all, or nearly all, of the Earth's landmass into a single contiguous continent. In the Pangaea Ultima scenario, subduction at the western Atlantic, east of the Americas, leads to the subduction of the Atlantic mid-ocean ridge followed by subduction destroying the Atlantic and Indian basin, causing the Atlantic and Indian Oceans to close, bringing the Americas back together with Africa and Europe. As with most supercontinents, the interior of Pangaea Proxima would probably become a semi-arid desert prone to extreme temperatures.\n", "The geology of Duluth demonstrates the Midcontinent Rift, formed as the North American continent (Laurentia) began to split apart about 1.1 billion years ago. 
Continental rifting is a recurring process in the history of the earth that leads to break-up of continents and the formation of ocean basins. In the Lake Superior region, the upwelling of molten rock may have been the result of a hot spot that produced a dome over the Lake Superior area. As the earth's crust thinned, magma rose toward the surface. When insulated by overlying roof rock, the upwelling magma cooled slowly, and is therefore coarse-grained. These intrusions formed a sill some thick, primarily of gabbro, which is known as the Duluth Complex. In the areas where the rising magma erupted to the surface and cooled rapidly, basalt, the extrusive equivalent of gabbro, was formed.\n" ]
studying vs fun
Feel good now many times, or feel good once later in the future? Your brain prefers the former. It comes down to gratification frequency. The interval between dopamine hits is much shorter in video games, since you get rewarded instantly for your tasks, which feeds into a cycle; think of each time you pick up treasure or beat a hard boss. Studying, by contrast, offers a delayed dopamine reward that you won't get until you see your exam grade. That's why people often suggest creating your own rewards for finishing a study period (e.g., "I can keep watching my TV show if I finish writing this report or finish studying"). The same analogy applies to diets: eating a bag of chips is highly rewarding for the average person, while following a healthy diet only pays off in the long run, and unfortunately your brain prefers the former. This preference for instant gratification probably stems from primitive times, when taking the immediate reward was necessary for survival, long before modern civilization and technology.
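One common way to put that "many small rewards now versus one big reward later" trade-off into numbers is hyperbolic discounting. This model is an outside illustration rather than anything claimed above, and the numbers are purely hypothetical:

\[ V = \frac{A}{1 + kD} \]

Here A is the size of a reward, D the delay before it arrives, and k how steeply the future is discounted. With k = 1 per day, a reward worth 10 delivered immediately keeps its full value of 10, while a reward worth 100 delivered in 30 days is valued at only 100 / 31 ≈ 3.2, so a stream of small instant payoffs can easily out-compete a single large delayed one.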
[ "\"Fun School Specials\" is a set of educational games, created in 1993 by Europress Software, consisting of four different games. Upon demand, Europress designed each game specifically with a certain major topic to add depth to spelling, maths, creativity and science, respectively and comply fully with the National Curriculum.\n", "It has been suggested that games, toys, and activities perceived as fun are often challenging in some way. When a person is challenged to think consciously, overcome challenge and learn something new, they are more likely to enjoy a new experience and view it as fun. A change from routine activities appears to be at the core of this perception, since people spend much of a typical day engaged in activities that are routine and require limited conscious thinking. Routine information is processed by the brain as a \"chunked pattern\": \"We rarely look at the real world\", according to game designer Raph Koster, \"we instead recognize something we have chunked, and leave it at that. [...] One might argue that the essence of much of art is in forcing us to see things as they really are rather than as we assume them to be\". Since it helps people to relax, fun is sometimes regarded as a \"social lubricant\", important in adding \"to one's pleasure in life\" and helping to \"act as a buffer against stress\".\n", "The pleasure of fun can be seen by the numerous efforts to harness its positive associations. For example, there are many books on serious subjects, about skills such as music, mathematics and languages, normally quite difficult to master, which have \"fun\" added to the title.\n", "For children, fun is strongly related to play and they have great capacity to extract the fun from it in a spontaneous and inventive way. Play \"involves the capacity to have fun – to be able to return, at least for a little while, to never-never land and enjoy it.\"\n", "Interest in combining education with entertainment, especially in order to make learning more enjoyable, has existed for hundreds of years, with the Renaissance and Enlightenment being movements in which this combination was presented to students. Komenský in particular is affiliated with the “school as play” concept, which proposes pedagogy with dramatic or delightful elements.\n", "Fun School is a series of educational packages developed and published in the United Kingdom by Europress Software, initially as \"Database Educational Software\". The original Fun School titles were sold mostly by mail order via off-the-page adverts in the magazines owned by Database Publications. A decision was made to create a new set of programs, call the range Fun School 2, and package them more professionally so they could be sold in computer stores around the UK. Every game comes as a set of three versions, each version set to cater for a specific age range.\n", "A step forward in the flipped-mastery model would be to include gamification elements in the learning process. Gamification is the application of game mechanisms in situations not directly related to games. The basic idea is to identify what motivates a game and see how it can be applied in the teaching-learning model (in this case it would be Flipped-Mastery). The results of the Fun Theory research showed that fun can significantly change people's behavior in a positive sense, in the same way that it has a positive effect on education (Volkswagen, 2009).\n" ]
Would it be possible to make a device to see radio/TV/cell phone signals, the same way an infrared camera can "see" heat?
This is used a lot in astronomy [pic](_URL_0_)
[ "There are two infrared-based approaches. In one, an array of sensors detects a finger touching or almost touching the display, thereby interrupting infrared light beams projected over the screen. In the other, bottom-mounted infrared cameras record heat from screen touches.\n", "The use of an infrared sensor to detect position can cause some detection problems in the presence of other infrared sources, such as incandescent light bulbs or candles. This can be alleviated by using fluorescent or LED lights, which emit little to no infrared light, around the Wii. Innovative users have used other sources of IR light, such as a pair of flashlights or a pair of candles, as Sensor Bar substitutes. The Wii Remote picks up traces of heat from the sensor, then transmits it to the Wii console to control the pointer on your screen. Such substitutes for the Sensor Bar illustrate the fact that a pair of non-moving lights provide continuous calibration of the direction that the Wii Remote is pointing and its physical location relative to the light sources. There is no way to calibrate the position of the cursor relative to where the user is pointing the controller without the two stable reference sources of light provided by the Sensor Bar or substitutes. Third-party wireless sensor bars have also been released, which have been popular with users of Wii emulators since the official Sensor Bar utilizes a proprietary connector to connect to the Wii console.\n", "These systems utilize light waves to transmit sound from the transmitter to a special light sensitive receiver. The signal can be broadcast to a whole room through speakers or a person who wears an individual receiver. There must be a clear line of connection between the transmitter and receiver so that the light signal is not interrupted. The benefit of infrared systems is that they only work in the room where the transmitter and receiver are located resulting in significantly fewer issues with cross-over. These systems can be sensitive to external light sources or interfering objects.\n", "Modified digital cameras can detect some ultraviolet, all of the visible and much of the near infrared spectrum, as most digital imaging sensors are sensitive from about 350 nm to 1000 nm. An off-the-shelf digital camera contains an infrared hot mirror filter that blocks most of the infrared and a bit of the ultraviolet that would otherwise be detected by the sensor, narrowing the accepted range from about 400 nm to 700 nm.\n", "The use of infrared light and night vision devices should not be confused with thermal imaging, which creates images based on differences in surface temperature by detecting infrared radiation (heat) that emanates from objects and their surrounding environment.\n", "Remote sensing and thermographic cameras are sensitive to longer wavelengths of infrared (see ). They may be multispectral and use a variety of technologies which may not resemble common camera or filter designs. Cameras sensitive to longer infrared wavelengths including those used in infrared astronomy often require cooling to reduce thermally induced dark currents in the sensor (see Dark current (physics)). Lower cost uncooled thermographic digital cameras operate in the Long Wave infrared band (see Thermographic camera#Uncooled infrared detectors). These cameras are generally used for building inspection or preventative maintenance but can be used for artistic pursuits as well, such as this image of a cup of coffee.\n", "law enforcement, and medical applications. 
Night-vision devices using active near-infrared illumination allow people or animals to be observed without the observer being detected. Infrared astronomy uses sensor-equipped telescopes to penetrate dusty regions of space such as molecular clouds, detect objects such as planets, and to view highly red-shifted objects from the early days of the universe. Infrared thermal-imaging cameras are used to detect heat loss in insulated systems, to observe changing blood flow in the skin, and to detect overheating of electrical apparatus.\n" ]
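One quantitative reason a handheld "radio camera" is much harder to build than an infrared one, while radio astronomy manages it with huge dishes and arrays, is the diffraction limit. This is standard optics added for context, not something stated in the passages above:

\[ \theta \approx 1.22 \frac{\lambda}{D} \]

A thermal camera imaging \lambda ≈ 10 μm through a 1 cm aperture resolves about 1.2 milliradians, but a 2 GHz phone signal has \lambda ≈ 15 cm, so matching even that modest resolution would take an aperture on the order of 150 m. That is why "pictures" of radio sources come from large telescopes and interferometer arrays rather than pocket-sized sensors.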
Why is it difficult for the investigators to find the flight recorder from the downed Malaysia Airlines Flight 17, when there is supposed to be a beacon pin-pointing its location?
Finding the proverbial black box is simple if it's left where it is. The moment someone picks it up, removes the pinger, and sticks it on a truck, it becomes hard to find. Rebels (or Russians; I don't know that there's a practical way to distinguish) have been all over the site removing "stuff". The working hypothesis, based on the fact that rebels, or Russians aiding rebels, have shot down several aircraft in the past couple of weeks, is that whoever fired the missile that downed the plane probably thought it was a Ukrainian transport plane (probably rebels, since I think it's unlikely a professional Russian soldier would make such a stupid mistake). Ukrainian intelligence even offered up something they claim is an intercepted communiqué between rebels and Russian forces indicating as much.
[ "The aircraft's flight recorders were sent to the Australian Transport Safety Bureau for analysis. Initial statements by the authorities suggested that the pilots mistook a road for the airport's runway in low visibility.\n", "However in case the flight recorders shall become available to the western countries their data may be used for: Confirmation of no attempt by the intercepting aircraft to establish a radio contact with the intruder plane on 121.5 MHz and no tracers warning shots in the last section of the flight\n", "The aircraft was not required to be equipped with a flight data recorder, therefore a flight data recorder was not present. The cockpit voice recorder was burned to the point that the data inside was not usable. The National Transportation Safety Board used aircraft position data from air traffic control, the aircraft wreckage, survivor interviews, and weather information to find its probable cause.\n", "the helicopter was the video recorder and four video cassettes. The aircraft was not equipped with a flight data recorder (FDR) or a cockpit voice recorder (CVR) and investigators had hoped the tapes might have clues to the reason for the crash but apparently the recorder was not operating at the time of the incident.\n", "Flight data recorders (FDRs) and cockpit voice recorders (CVRs) in commercial aircraft continuously record information and can provide key evidence in determining the causes of an aircraft loss. The greatest depth from which a flight recorder has been recovered is , for the CVR of South African Airways Flight 295. Most flight recorders are equipped with Underwater locator beacons to assist searchers in recovering them from offshore crash sites, however these beacons run off a battery and eventually stop transmitting. For various reasons, a flight recorder cannot always be recovered, and many recorders that are recovered are too damaged to provide any data.\n", "Contrary to initial news reports, which stated that both flight recorders had been successfully read-out, the investigation team determined that the flight recorders could not be analyzed because the CVR had been inoperative for nine days leading up to the crash and the FDR was mysteriously found to have only recorded the first 14 minutes of the flight. \n", "Wreckage from the aircraft was not recovered, except for seat cushions and plywood bulkheads found floating near the accident site. Regulations at the time did not require flight recorders to be installed on the aircraft, and no cockpit voice recorder or flight data recorder was installed. Due to lack of evidence, the NTSB was unable to determine the probable cause of the accident. \n" ]
Foggy London vs polluted Beijing
fyi, you'll find some previous discussions on this in the FAQ * [Air pollution](_URL_0_)
[ "Due to Beijing's high-level of air pollution, there are various readings by different sources on the subject. Daily pollution readings at 27 monitoring stations around the city are reported on the website of the Beijing Environmental Protection Bureau (BJEPB). The American Embassy of Beijing also reports hourly fine particulate (PM2.5) and ozone levels on Twitter. Since the BJEPB and US Embassy measure different pollutants according to different criteria, the pollution levels and the impact to human health reported by the BJEPB are often lower than that reported by the US Embassy.\n", "In Jan 2013, only five days were not occupied by haze and fog. In Oct 2014, the air quality index in Beijing reached a peak of 470, far beyond the severe pollution level of 300; meanwhile, the situation was even more serious in the neighboring province of Hebei, whose PM2.5 particles climbed above 500 micrograms per cubic meter—northern China was blanketed by the heavy air pollution, forcing the Chinese authorities to raise its pollution alert from yellow to orange, which was the second highest.\n", "The issue of air pollution, which was widely discussed during the 2008 Summer Olympics, was cited as a likely factor in Beijing's bid. Beijing and Zhangjiakou suffer from severe air pollution which is worse during the winter. As of February 26, 2014, Beijing hit a dangerous particulate concentration of 537. A rate of 301-500 is marked as hazardous. An anti-pollution body expressed a concern that Beijing companies failing to meet regulations were moving their factories to Tianjin and Hebei.\n", "As air pollution in China is at an all-time high, several Hebei cities are among one of the most polluted cities and has one of the worst air quality in China. Reporting on China's airpocalypse has been accompanied by what seems like a monochromatic slideshow of the country's several cities smothered in thick smog. According to a survey made by \"Global voices China\" in February 2013, 7 cities in Hebei including Xingtai, Shijiazhuang, Baoding, Handan, Langfang, Hengshui and Tangshan, are among China's 10 most polluted cities. Xingtai ranked 1st in the list and is referred to has the worst air quality in all Chinese cities.\n", "Atmospheric scientists at Texas A&M University investigating the haze of polluted air in Beijing realized that their research led to a possible cause for the London event in 1952. \"By examining conditions in China and experimenting in a lab, the scientists suggest that a combination of weather patterns and chemistry could have caused London fog to turn into a haze of concentrated sulfuric acid.\"\n", "As air pollution in China is at an all-time high, several Hebei cities are among the most polluted in the country and Tangshan has some of the worst air quality in China. Reporting on China's airpocalypse has been accompanied by what seems like a monochromatic slideshow of the country's several cities smothered in thick smog. According to a survey made by \"Global voices China\" in February 2013, 7 cities in Hebei including Xingtai, Shijiazhuang, Baoding, Handan, Langfang, Hengshui and Tangshan, are among China's 10 most polluted cities.\n", "Other major air pollution, particularly in China, has been compared to the 1966 smog. Elizabeth M. 
Lynch, a New York City legal scholar, said that images of visible air pollution in Beijing from 2012 were \"gross\" but not \"that much different from pictures of New York City in the 1950s and 1960s\", specifically referring to the 1952, 1962, and 1966 smog events. Lynch wrote that the Chinese government's increased transparency on the issue was an encouraging sign that pollution in China could be regulated and abated just as it had in the United States. Similar comparisons between the 1966 smog and Chinese pollution in late 2012 appeared in \"Business Insider\" and \"Slate\". \"USA Today\" cited the 1966 smog after China issued its first \"red alert\" air quality warning in December 2015; the same month, an article in \"The Huffington Post\" used the 1966 smog to argue that China could follow the United States' model to regulate pollution.\n" ]
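For readers wondering how a PM2.5 concentration of roughly 500 μg/m³ relates to an "index" near 500, the US-style AQI quoted by the embassy is a piecewise linear interpolation between breakpoint concentrations; the formula is standard, and the band figures mentioned after it are commonly cited EPA values offered as illustration rather than quoted from these passages:

\[ I = \frac{I_{hi} - I_{lo}}{C_{hi} - C_{lo}} \left( C - C_{lo} \right) + I_{lo} \]

In the top "hazardous" bands, concentrations from about 250 to 500 μg/m³ map to index values of roughly 300 to 500, so the readings described above sit at or beyond the top of the scale.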
Did the Barbarians of the Classical Roman Era field any navies, and if so, were there ever any engagements between them and the Roman Empire?
The Veneti, a Gallic tribe from Brittany in modern France, posed quite a naval threat to Caesar during the Gallic Wars. They had several coastal citadels that couldn't be starved out by siege, as the strong Venetian (no, not *that* Venetian) navy protected the supply ships coming across the Channel from Britain. A fascinating aspect of these coastal cities was that at high tide they were islands and at low tide peninsulas, creating a problem for Caesar. At this point, the Romans had absolutely no naval power in the English Channel. To combat this, Caesar constructed a navy. However, the stormy seas of the English Channel and the sturdy oak ships of the Veneti were difficult for the Romans to overcome. The Romans did eventually defeat the Venetians, in the Battle of Morbihan, by cutting their halyards, causing their mainsails to fall and leaving the ships as sitting ducks for the Romans' excellent boarding capabilities.
[ "The barbarians comprised war bands or tribes of 10,000 to 20,000 people,[5] but in the course of 100 years they numbered not more than 750,000 in total, compared to an average 39.9 million population of the Roman Empire at that time. Although invasion was common throughout the time of the Roman Empire,[6] the period in question was, in the 19th century, often defined as running from about the 5th to 8th centuries AD.[7][8] The first invasions of peoples were made by Germanic tribes such as the Goths (including the Visigoths and the Ostrogoths), the Vandals, the Anglo-Saxons, the Lombards, the Suebi, the Frisii, the Jutes, the Burgundians, the Alemanni, the Scirii and the Franks; they were later pushed westward by the Huns, the Avars, the Slavs and the Bulgars.[9]\n", "Historians have postulated several explanations for the appearance of \"barbarians\" on the Roman frontier: weather and crops, population pressure, a \"primeval urge\" to push into the Mediterranean or the \"domino effect\" of the Huns falling upon the Goths who, in turn, pushed other Germanic tribes before them. Entire barbarian tribes (or nations) flooded into Roman provinces, ending classical urbanism and beginning new types of rural settlements. In general, French and Italian scholars have tended to view this as a catastrophic event, the destruction of a civilization and the beginning of a \"Dark Age\" that set Europe back a millennium. In contrast, German and English historians have tended to see Roman/Barbarian interaction as the replacement of a \"tired, effete and decadent Mediterranean civilization\" with a \"more virile, martial, Nordic one\".\n", "During the beginning of the 6th century, several barbarian tribes who eventually destroyed the Western Roman Empire in the 5th century eventually were recruited in the armies of the Eastern Roman Empire. Among them were the Heruli, who had deposed the last Western Roman Emperor Romulus Augustulus under their leader Odoacer in 476. Other barbarians included the Huns, who had invaded the divided Roman Empire during the second quarter of the 5th century under Attila, and the Gepids, who had settled in the Romanian territories north of the Danube River.\n", "While the barbarian invasions of the 4th century and later mostly occurred by land, some notable examples of naval conflicts are known. In the late 3rd century, in the reign of Emperor Gallienus, a large raiding party composed by Goths, Gepids and Heruli, launched itself in the Black Sea, raiding the coasts of Anatolia and Thrace, and crossing into the Aegean Sea, plundering mainland Greece (including Athens and Sparta) and going as far as Crete and Rhodes. In the twilight of the Roman Empire in the late 4th century, examples include that of Emperor Majorian, who, with the help of Constantinople, mustered a large fleet in a failed effort to expel the Germanic invaders from their recently conquered African territories, and a defeat of an Ostrogothic fleet at Sena Gallica in the Adriatic Sea.\n", "The Romans consistently allied themselves with certain barbarian groups outside the Empire, playing them out against rival barbarian tribes as a policy of \"divide and rule\", the barbarian allies being known as \"foederati\". Sometimes these groups were allowed to live within the Empire. Barbarians could also be settled within the Empire as \"dediticii\" or \"laeti\". The Romans could henceforth rely on these groups for military support or even as legionary recruits. 
One such group were the Burgundians, whom the Roman Emperor Honorius in 406 had invited to join the Roman Empire as foederati with a capital at Worms . The Burgundians were soon defeated by the Huns, but once again given land near Lake Geneva for Gundioc (r. 443-474) to establish a second federate kingdom within the Roman Empire in 443. This alliance was a contractual agreement between the two peoples. Gundioc's people were given one-third of Roman slaves and two-thirds of the land within Roman territory. The Burgundians were allowed to establish an independent federate kingdom within the Empire and received the nominal protection of Rome for their agreement to defend their territories from other outsiders. This contractual relationship between the guests, Burgundians, and hosts, Romans, supposedly provided legal and social equality. However, Drew argues that the property rights and social status of the guests may have given them disproportionate leverage over their hosts. More recently, Henry Sumner Maine argues that the Burgundians exercised \"tribe-sovereignty\" rather than complete territorial sovereignty.\n", "Until late in the fourth century the united Empire retained sufficient power to launch attacks against its enemies in Germania and in the Sassanid Empire. \"Receptio\" of barbarians became widely practiced: imperial authorities admitted potentially hostile groups into the Empire, split them up, and allotted to them lands, status, and duties within the imperial system. In this way many groups provided unfree workers (\"coloni\") for Roman landowners, and recruits (\"laeti\") for the Roman army. Sometimes their leaders became officers. Normally the Romans managed the process carefully, with sufficient military force on hand to ensure compliance, and cultural assimilation followed over the next generation or two.\n", "During the decline of the Roman Empire, irregulars made up an ever-increasing proportion of the Roman military. At the end of the Western Empire, there was little difference between the Roman military and the barbarians across the borders. \n" ]
If everyone says girls mature more quickly than boys, why do boys seem to have a higher sex drive than girls as teenagers?
Because sex drive and maturity have nothing to do with each other?
[ "Studies have shown that most high school girls are more interested in a relationship compared to high school boys, who are mostly interested in sex. Young women tend to be honest about their sexual encounters and experiences, while young men tend to lie more often about theirs. Another study shows that once a person has sex for their first time, it becomes less of an issue or big deal to future relationships or hook ups. During this study, it was shown that girls in high school do not care as much as boys do on having sex in a relationship. But, on the contrary, girls will have sex with their partner in order to match them.\n", "Though boys face fewer problems upon early puberty than girls, early puberty is not always positive for boys; early sexual maturation in boys can be accompanied by increased aggressiveness due to the surge of hormones that affect them. Because they appear older than their peers, pubescent boys may face increased social pressure to conform to adult norms; society may view them as more emotionally advanced, although their cognitive and social development may lag behind their appearance. Studies have shown that early maturing boys are more likely to be sexually active and are more likely to participate in risky behaviours.\n", "Some studies have also found that adolescents whose media diet was rich in sexual content were more than twice as likely as others to have had sex by the time they were 16. In a Kaiser Family Foundation study, 76 percent of teens said that one reason young people have sex is because TV shows and movies make it seem normal for teens. In addition to higher likelihoods that an adolescent exposed to sexual content in the media will engage in sexual behaviors, they are also have higher levels of intending to have sex in the future and more positive expectations of sex.\n", "Males reach the peak of their sex drive in their teens, while females reach it in their thirties. (speculation) The surge in testosterone hits the male at puberty resulting in a sudden and extreme sex drive which reaches its peak at age 15–16, then drops slowly over his lifetime. In contrast, a female's libido increases slowly during adolescence and peaks in her mid-thirties. Actual testosterone and estrogen levels that affect a person's sex drive vary considerably.\n", "When comparing the sexual self-concepts of adolescent girls and boys, researchers found that boys experienced lower sexual self-esteem and higher sexual anxiety. The boys stated they were less able to refuse or resist sex at a greater rate than the girls reported having difficulty with this. The authors state that this may be because society places so much emphasis on teaching girls how to be resistant towards sex, that boys do not learn these skills and are less able to use them when they want to say no to sex. They also explain how society's stereotype that boys are always ready to desire sex and be aroused may contribute to the fact that many boys may not feel comfortable resisting sex, because it is something society tells them they should want. Because society expects adolescent boys to be assertive, dominant and in control, they are limited in how they feel it is appropriate to act within a romantic relationship. Many boys feel lower self-esteem when they cannot attain these hyper-masculine ideals that society says they should. 
Additionally, there is not much guidance on how boys should act within relationships and many boys do not know how to retain their masculinity while being authentic and reciprocating affection in their relationships. This difficult dilemma is called the double-edged sword of masculinity by some researchers.\n", "One study from 1996 documented the interviews of a sample of junior high school students in the United States. The girls were less likely to state that they ever had sex than adolescent boys. Among boys and girls who had experienced sexual intercourse, the proportion of girls and boys who had recently had sex and were regularly sexually active was the same. Those conducting the study speculated that fewer girls say they have ever had sex because girls viewed teenage parenthood as more of a problem than boys. Girls were thought to be more restricted in their sexual attitudes; they were more likely than boys to believe that they would be able to control their sexual urges. Girls had a more negative association in how being sexually active could affect their future goals. In general, girls said they felt less pressure from peers to begin having sex, while boys reported feeling more pressure.\n", "Girls are more likely than boys to co-ruminate with their close friends, and co-rumination increases with age in children. Female adolescents are more likely to co-ruminate than younger girls, because their social worlds become increasingly complex and stressful. This is not true for boys, however as age differences are not expected among boys because their interactions remain activity focused and the tendency to extensively discuss problems is likely to remain inconsistent with male norms.\n" ]
If a person gains weight gradually, will their leg muscles (quadriceps, hamstrings, calves etc) grow proportionally to support the added weight?
They would have to, otherwise you wouldn't be able to walk. On the other hand, it depends on what kind of weight you are gaining. Obviously gaining muscle mass will cause your muscles to grow. On the other hand, gaining a lot of weight in the form of fat has other consequences. The fat gets deposited all over the place, including within the muscle, and certainly within the walls of your blood vessels, limiting blood flow. Your muscles may get stronger to carry the weight, but the restricted blood flow limits the ability of the muscle to perform for long periods of time, because it doesn't get enough oxygen for oxidative phosphorylation. While overweight people may have an increased amount of glycogen providing the muscle with a larger "fuel tank" if you will, the muscle still will quickly be forced to switch to anaerobic metabolism. This is less efficient, and causes lactic acid to build up (which gives you cramps). Your heart beats faster to deliver more blood to the tissue, but it isn't getting there effectively because of the fat clogging up the blood vessels. You start to hyperventilate (technically speaking this is a misnomer, it would likely be hyperpnea in this case) to compensate for the increased oxygen demand and the increased acid load created by lactic acid. **You will tire out more quickly** The consequence of this is that you walk less, and of course, without using the muscle, it gets weaker again, and you may end up with a weaker muscle as well as body fat that you cannot carry. **Your muscle ends up weaker in the long run, if you gain enough weight**. See: [*Muscle strength is inversely related to prevalence and incidence of obesity in adult men*](_URL_0_) In [this paper](_URL_1_), the results conclude that obese women are stronger than their lean counterparts, but only in terms of absolute strength. When you control for body weight, every muscle is weaker on a pound for pound basis in the obese women, with the exception of trunk flexors, which were stronger. Presumably, this might be because many obese women carry the weight in the abdomen and chest. Edit: tl;dr: Yes, gaining weight makes your legs stronger. But there is a diminishing return on this, and the growth is not linearly proportional. This does more harm than good.
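A tiny worked comparison (with made-up numbers, purely to illustrate the "pound for pound" point in the cited study) may help:

\[ \text{relative strength} = \frac{\text{force produced}}{\text{body mass}} \]

A hypothetical 100 kg person whose leg muscles produce 1,500 N is absolutely stronger than a 60 kg person producing 1,200 N, but relatively weaker: 15 N/kg versus 20 N/kg. The muscle does adapt to carry the extra load, just not in proportion to it, which is the "diminishing return" described above.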
[ "Individuals with this disorder typically experience progressive muscle weakness of the leg and pelvis muscles, which is associated with a loss of muscle mass (wasting). Muscle weakness also occurs in the arms, neck, and other areas, but not as noticeably severe as in the lower half of the body. Calf muscles initially enlarge during the ages of 5-15 (an attempt by the body to compensate for loss of muscle strength), but the enlarged muscle tissue is eventually replaced by fat and connective tissue (pseudohypertrophy) as the legs become less used (with use of wheelchair).\n", "Conversely, decreased use of the muscle results in incremental loss of mass and strength, known as muscular atrophy (see atrophy and muscle atrophy). Sedentary people often lose a pound or more of muscle annually.\n", "Temporary loss or impairment of proprioception may happen periodically during growth, mostly during adolescence. Growth that might also influence this would be large increases or drops in bodyweight/size due to fluctuations of fat (liposuction, rapid fat loss or gain) and/or muscle content (bodybuilding, anabolic steroids, catabolisis/starvation). It can also occur in those that gain new levels of flexibility, stretching, and contortion. A limb's being in a new range of motion never experienced (or at least, not for a long time since youth perhaps) can disrupt one's sense of location of that limb. Possible experiences include suddenly feeling that feet or legs are missing from one's mental self-image; needing to look down at one's limbs to be sure they are still there; and falling down while walking, especially when attention is focused upon something other than the act of walking.\n", "Thigh weights are the most reasonable form of resistance. The location of the mass more readily duplicates the natural fat-storage mechanism of the human body and being closer to the core. In leg raise exercises, it allows more activation of the hip flexors (and abdominals) without putting more strain on the quadriceps muscles for extension, making it good for sports-specific training on movements like knees and jumping. The greater area and safe location allow it to handle much more weight. For those with wide thighs, such as bodybuilders with large quadriceps, or people with large amounts of fat stores on the inner thigh, it may cause chafing. If worn on both legs, however, the chafing would be between the weights and only damage them, possibly only chafing with a lack of tightness.\n", "A common trend in modern technology is the push to create lightweight devices. A 1981 collection of studies on amputees showed a 30% increase in metabolic cost of walking for an able-bodied subject with 2-kg weights fixed to each foot. Correspondingly, transfemoral prostheses are on average only about one third of the weight of the limb they are replacing. However, the effect of added mass appears to be less significant for amputees. Small increases in mass (4-oz and 8-oz) of a prosthetic foot had no significant effect and, similarly, adding 0.68-kg and 1.34-kg masses to the center of the shank of transfemoral prostheses did not alter metabolic cost at any of the tested walking speeds (0.6, 1.0, and 1.5 m/s). In another study, muscular efforts were significantly increased with added mass, yet there was no significant impact on walking speeds and over half of the subjects preferred a prosthetic that was loaded to match 75% weight of the sound leg. 
In fact, it has been reported in several articles that test subjects actually prefer heavier prostheses, even when the load is completely superficial.\n", "Independent of strength and performance measures, muscles can be induced to grow larger by a number of factors, including hormone signaling, developmental factors, strength training, and disease. Contrary to popular belief, the number of muscle fibres cannot be increased through exercise. Instead, muscles grow larger through a combination of muscle cell growth as new protein filaments are added along with additional mass provided by undifferentiated satellite cells alongside the existing muscle cells.\n", "The loss of 10 pounds of muscle per decade is one consequence of a sedentary lifestyle. The adaptive processes of the human body will only respond if continually called upon to exert greater force to meet higher physiological demands.\n" ]
why do people's voices sound higher pitched in older recordings? were the vocal tastes for higher pitched voice in the past or was it due to the recording equipment used?
Bass frequencies require more energy to record and reproduce than higher ranges of audio do. Old recording equipment lacked the improvements we've made since then: the diaphragm in a modern microphone is far more sensitive and can capture a sound at much lower intensity than older equipment could. The primary factor, though, is production. Most music made these days is mixed for use on smaller speakers (headphones, laptops, smartphones, etc.) and is adjusted to try to be "louder" than the other guys. They also butcher old recordings, stripping out the dynamic range and level-matching everything for a "digitally remastered" sound that ruins classic albums.
[ "Some evidence exists of vocal fry becoming more common in the speech of young female speakers of American English in the early 21st century, but its frequency's extent and significance are disputed. Researcher Ikuko Patricia Yuasa suggests that the tendency is a product of young women trying to infuse their speech with gravitas by means of reaching for the male register and found that \"college-age Americans [...] perceive female creaky voice as hesitant, nonaggressive, and informal but also educated, urban-oriented, and upwardly mobile.\"\n", "Bone conduction is one reason why a person's voice sounds different to them when it is recorded and played back. Because the skull conducts lower frequencies better than air, people perceive their own voices to be lower and fuller than others do, and a recording of one's own voice frequently sounds higher than one expects.\n", "Like Crosby, they used the recording technique known as close miking where the microphone is less than from the singer's mouth. This produces a more intimate, less reverberant sound than when a singer is or more from the microphone. When using a pressure-gradient (uni- or bi-directional) microphone, it emphasizes low-frequency sounds in the voice due to the microphone's proximity effect and gives a more relaxed feel because the performer is not working as hard. The result is a singing style which diverged from the unamplified theater style of the musical comedies of the 1930s and 1940s.\n", "In early music, particularly that of the Renaissance, the use of the terms soprano, alto, tenor and bass for the voice parts should not be taken in the modern sense of defining which voices are to be used. This is because much 4-voice vocal music of the time possesses the narrower overall range typical of men's voice music with a countertenor on the top (soprano) part. Appropriate downward transposition based on chiavette should always be considered.\n", "Aside from a hoarse-sounding voice, changes to pitch and volume may occur with laryngitis. Speakers may experience a lower or higher pitch than normal, depending on whether their vocal folds are swollen or stiff. They may also have breathier voices, as more air flows through the space between the vocal folds (the glottis), quieter volume and a reduced range.\n", "In music, boys' voices, before they 'break' being of a soprano register (specifically known as treble) unlike adult men (in a choir usually tenor and bass), have been most sought-after, especially where female voices were considered inappropriate as often in church and certain theatrical music - this even led to the practice of physically trying to prevent their 'angelical' voices ever to break by surgically cutting short the hormonal drive to manhood: for centuries, castrato singers, who coupled adult strength and experience with a treble register, starred in contratenor parts, mainly in operatic styles.\n", "During historical periods when instrumental music rose in prominence (relative to the voice), there was a continuous tendency for pitch levels to rise. This \"pitch inflation\" seemed largely a product of instrumentalists competing with each other, each attempting to produce a brighter, more \"brilliant\", sound than that of their rivals. (In string instruments, this is not all acoustic illusion: when tuned up, they actually sound objectively brighter because the higher string tension results in larger amplitudes for the harmonics.) 
This tendency was also prevalent with wind instrument manufacturers, who crafted their instruments to play generally at a higher pitch than those made by the same craftsmen years earlier.\n" ]
Do mental illnesses run in families? Will they be the same mental illness or can they vary between each offspring?
There is a genetic component in some forms of mental illness, schizophrenia for example. There are also many environmental factors that can cause or worsen mental illness, such as physical or emotional abuse or neglect, malnutrition, and traumatic life events. There are usually many factors at play, and no two cases are exactly the same.
[ "A similar study on the mental health of single mothers attempted to answer the question, \"Are there differences in the prevalence of psychiatric disorders, between married, never-married, and separated/divorced mothers?\" Statistically, never married, and separated/divorced mothers had the highest regularities of drug abuse, personality disorder and PTSD. The family structure can become a trigger for mental health issues in single mothers. They are especially at risk for having higher levels of depressive symptoms.\n", "A number of psychiatric disorders are linked to a family history (including depression, narcissistic personality disorder and anxiety). Twin studies have also revealed a very high heritability for many mental disorders (especially autism and schizophrenia). Although researchers have been looking for decades for clear linkages between genetics and mental disorders, that work has not yielded specific genetic biomarkers yet that might lead to better diagnosis and better treatments.\n", "The most common mental illnesses in children include, but are not limited to, ADHD, autism and anxiety disorder, as well as depression in older children and teens. Having a mental illness at a younger age is much different from having one in your thirties. Children's brains are still developing and will continue to develop until around the age of twenty-five. When a mental illness is thrown into the mix, it becomes significantly harder for a child to acquire the necessary skills and habits that people use throughout the day. For example, behavioral skills don’t develop as fast as motor or sensory skills do. So when a child has an anxiety disorder, they begin to lack proper social interaction and associate many ordinary things with intense fear. This can be scary for the child because they don’t necessarily understand why they act and think the way that they do. Many researchers say that parents should keep an eye on their child if they have any reason to believe that something is slightly off. If the children are evaluated earlier, they become more acquainted to their disorder and treating it becomes part of their daily routine. This is opposed to adults who might not recover as quickly because it is more difficult for them to adapt.\n", "Some diseases are hereditary and run in families; others, such as infectious diseases, are caused by the environment. Other diseases come from a combination of genes and the environment. Genetic disorders are diseases that are caused by a single allele of a gene and are inherited in families. These include Huntington's disease, Cystic fibrosis or Duchenne muscular dystrophy. Cystic fibrosis, for example, is caused by mutations in a single gene called \"CFTR\" and is inherited as a recessive trait.\n", "Genetic disorders may also be complex, multifactorial, or polygenic, meaning they are likely associated with the effects of multiple genes in combination with lifestyles and environmental factors. Multifactorial disorders include heart disease and diabetes. Although complex disorders often cluster in families, they do not have a clear-cut pattern of inheritance. This makes it difficult to determine a person’s risk of inheriting or passing on these disorders. Complex disorders are also difficult to study and treat, because the specific factors that cause most of these disorders have not yet been identified. Studies which aim to identify the cause of complex disorders can use several methodological approaches to determine genotype-phenotype associations. 
One method, the genotype-first approach, starts by identifying genetic variants within patients and then determining the associated clinical manifestations. This is opposed to the more traditional phenotype-first approach, and may identify causal factors that have previously been obscured by clinical heterogeneity, penetrance, and expressivity.\n", "Any reproductive risks (e.g. a chance to have a child with the same diagnosis) can also be explored after a diagnosis. Many disorders cannot occur unless both the mother and father pass on their genes, such as cystic fibrosis; this is known as autosomal recessive inheritance. Other autosomal dominant diseases can be inherited from one parent, such as Huntington disease and DiGeorge syndrome. Yet other genetic disorders are caused by an error or mutation occurring during the cell division process (e.g. aneuploidy) and are not hereditary.\n", "In Sweden, Emma Fransson et al. have shown that children living with one single parent have worse well-being in terms of physical health behavior, mental health, peer friendships, bullying, cultural activities, sports, and family relationships, compared to children from intact families. As a contrast, children in a shared parenting arrangement that live approximately equal amount of time with their divorced mother and father have about the same well-being as children from intact families and better outcomes than children with only one custodial parent.\n" ]
How did Polar Bears survive the Medieval Warm Period?
It [wasn't that warm](_URL_0_) compared to today.
[ "In the Ice Age (which included warm spells), mammals such as the woolly mammoth, wild horse, giant deer, brown bear, spotted hyena, Arctic lemming, Norway lemming, Arctic fox, European beaver, wolf, Eurasian lynx, and reindeer flourished or migrated depending on the degree of coldness. The Irish brown bear was a genetically distinct (clade 2) brown bear from a lineage that had significant polar bear mtDNA. The closest surviving brown bear is Ursus arctos middendorffi in Alaska.\n", "To investigate the possibility of climatic cooling, scientists drilled into the Greenland ice cap to obtain core samples, which suggested that the Medieval Warm Period had caused a relatively milder climate in Greenland, lasting from roughly 800 to 1200. However, from 1300 or so the climate began to cool. By 1420, the \"Little Ice Age\" had reached intense levels in Greenland. Excavations of middens from the Norse farms in both Greenland and Iceland show the shift from the bones of cows and pigs to those of sheep and goats. As the winters lengthened, and the springs and summers shortened, there must have been less and less time for Greenlanders to grow hay. A study of North Atlantic seasonal temperature variability showed a significant decrease in maximum summer temperatures beginning in the late 13th century to early 14th century—as much as 6-8 °C lower than modern summer temperatures. The study also found that the lowest winter temperatures of the last 2,000 years occurred in the late 14th century and early 15th century. By the mid-14th century deposits from a chieftain’s farm showed a large number of cattle and caribou remains, whereas, a poorer farm only several kilometers away had no trace of domestic animal remains, only seal. Bone samples from Greenland Norse cemeteries confirm that the typical Greenlander diet had increased by this time from 20% sea animals to 80%.\n", "Polar bears rarely live beyond 25 years. The oldest wild bears on record died at age 32, whereas the oldest captive was a female who died in 1991, age 43. The causes of death in wild adult polar bears are poorly understood, as carcasses are rarely found in the species's frigid habitat. In the wild, old polar bears eventually become too weak to catch food, and gradually starve to death. Polar bears injured in fights or accidents may either die from their injuries or become unable to hunt effectively, leading to starvation.\n", "According to this hypothesis, a temperature increase sufficient to melt the Wisconsin ice sheet could have placed enough thermal stress on cold-adapted mammals to cause them to die. Their heavy fur, which helps conserve body heat in the glacial cold, might have prevented the dumping of excess heat, causing the mammals to die of heat exhaustion. Large mammals, with their reduced surface area-to-volume ratio, would have fared worse than small mammals.\n", "Polar bears appear on Jan Mayen, although in diminished numbers compared with earlier times. Between 1900 and 1920, there were a number of Norwegian trappers spending winters on Jan Mayen, hunting Arctic foxes in addition to some polar bears. But the exploitation soon made the profits decline, and the hunting ended. Polar bears are genetically distinguishable in this region of the Arctic from those living elsewhere.\n", "A portion of the time the Greenland settlements existed was during the Little Ice Age and the climate was, overall, becoming cooler and more humid. 
As climate began to cool and humidity began to increase, this brought longer winters and shorter springs, more storms and affected the migratory patterns of the harp seal. Pasture space began to dwindle and fodder yields for the winter became much smaller. This combined with regular herd culling made it hard to maintain livestock, especially for the poorest of the Greenland Norse. In spring, the voyages to where migratory harp seals could be found became more dangerous due to more frequent storms, and the lower population of harp seals meant that \"Nordrsetur\" hunts became less successful, making subsistence hunting extremely difficult. The strain on resources made trade difficult, and as time went on, Greenland exports lost value in the European market due to competing countries and the lack of interest in what was being traded.\n", "To investigate the possibility of climatic cooling, scientists drilled into the Greenland ice caps to obtain core samples. The oxygen isotopes from the ice caps suggested that the Medieval Warm Period had caused a relatively milder climate in Greenland, lasting from roughly 800 to 1200. However, from 1300 or so the climate began to cool. By 1420, we know that the \"Little Ice Age\" had reached intense levels in Greenland.\n" ]
Who were Turkish Sultans descended from?
As strange as it may seem, your story is essentially correct. Ottoman Sultans didn't marry, and instead had lowborn concubines. I'm not sure how many of the Sultans' mothers were European, but some were, yes. And indeed, since only the direct male line counts as the family of the Sultans, the vast majority of later Sultans' ancestors would have been commoners. As to whether the Sultans were white... I think this sort of shows your nationality, if I'm guessing right that you're American. In the Old World, Turks would generally be considered white. And those who don't agree with that wouldn't really think Bulgarians or Romanians are white either! After all, as the old proverb has it, "the wogs start at Calais".
[ "This is a list of the biological mothers of Ottoman sultans. There were thirty-six sultans of the Ottoman Empire in twenty-one generations. (During early days the title \"Bey\" was used instead of \"Sultan\") Throughout 623-years history the sultans were the members of the same house, namely the House of Ottoman (Turkish: \"Osmanlı Hanedanı\").\n", "The Ottoman dynasty or \"House of Osman\" ( 1280–1922) was unprecedented and unequaled in the Islamic world for its size and duration. The Ottoman sultan, pâdişâh or \"lord of kings\", served as the empire's sole regent and was considered to be the embodiment of its government, though he did not always exercise complete control. The Ottoman family was originally Turkish in its ethnicity, as were its subjects; however the kingship quickly acquired many different ethnicities through intermarriage with slaves and European nobility.\n", "The Çandarlı family was a prominent Turkish political family which provided the Ottoman Empire with five grand viziers during the 14th and 15th centuries. At the time, it was the second most important family after the Ottoman dynasty itself.\n", "In 1974, descendants of the dynasty were granted the right to acquire Turkish citizenship by the Grand National Assembly, and were notified that they could apply. Mehmed Orhan, son of Prince Mehmed Abdul Kadir of the Ottoman Empire, died in 1994, leaving the grandson of Ottoman Sultan Abdülhamid II, Ertuğrul Osman, as the eldest surviving member of the deposed dynasty. Osman for many years refused to carry a Turkish passport, calling himself a citizen of the Ottoman Empire. Despite this attitude, he put the matter of an Ottoman restoration to rest when he told an interviewer \"no\" to the question of whether he wished the Ottoman Empire to be restored. He was quoted as saying that \"democracy works well in Turkey.\" He returned to Turkey in 1992 for the first time since the exile, and became a Turkish citizen with a Turkish passport in 2002.\n", "Mehmed VI Vahideddin ( \"Meḥmed-i sâdis\", \"Vahideddin\", or ), who is also known as \"Şahbaba\" (meaning \"Emperor-father\") among his relatives, (14 January 1861 – 16 May 1926) was the 36th and last Sultan of the Ottoman Empire, reigning from July 4, 1918 until November 1, 1922 when the Ottoman Empire dissolved after World War I and became the nation of the Republic of Turkey on October 29, 1923. The brother of Mehmed V, he became heir to the throne after the 1916 suicide of Abdülaziz's son Şehzade Yusuf Izzeddin as the eldest male member of the House of Osman. He acceded to the throne after the death of Mehmed V. He was girded with the Sword of Osman on 4 July 1918, as the thirty-sixth \"padishah\". His father was Sultan Abdulmejid I and mother was Gülüstü Hanım (1830 – 1865), an ethnic Abkhazian, daughter of Prince Tahir Bey Çaçba and his wife Afişe Lakerba, originally named Fatma Çaçba. Mehmed was removed from the throne when the Ottoman sultanate was abolished in 1922.\n", "There were 36 Ottoman Sultans who ruled over the Empire, and each one was a direct descendant through the male line of the first Ottoman Sultan, Sultan Osman I. After the deposition of the last Sultan, Mehmet VI, in 1922, and the subsequent abolition of the Ottoman Caliphate in 1924, members of the Imperial family were forced into exile. 
Their descendants now live in many different countries throughout Europe, as well as in the United States, the Middle East, and since they have now been permitted to return to their homeland, many now also live in Turkey. When in exile, the family adopted the surname of Osmanoğlu\n", "His parents were descendents of Turkish and Albanian settlers in Macedonia. The ancestors had arrived in the Balkans with Ottoman armies several centuries ago, and established themselves in farming and trade. They belonged to a culturally distinct group, different from the Christians among whom they lived but highly westernized compared to the Turkish population that remained in Anatolia.\n" ]
The origins of the Abrahamic religions.
Can I ask a clarification question before my answer gets deleted? Are we allowed to use religious texts such as the Old Testament or the Koran as sources while acknowledging their disputed historicity? In particular, the Book of Joshua?
[ "Abrahamic religions are those religions deriving from a common ancient tradition and traced by their adherents to Abraham (circa 1900 BCE), a patriarch whose life is narrated in the Hebrew Bible/Old Testament, where he is described as a prophet (Genesis 20:7), and in the Quran, where he also appears as a prophet. This forms a large group of related largely monotheistic religions, generally held to include Judaism, Christianity, and Islam, and comprises over half of the world's religious adherents.\n", "Abrahamic religion spread globally through Christianity being adopted by the Roman Empire in the 4th century and Islam by the Islamic Empires from the 7th century. Today the Abrahamic religions are one of the major divisions in comparative religion (along with Indian, Iranian, and East Asian religions). The major Abrahamic religions in chronological order of founding are Judaism (the base of the other two religions) in the 7th century BCE, Christianity in the 1st century CE, and Islam in the 7th century CE.\n", "Abrahamic religions are those monotheistic faiths emphasizing and tracing their common origin to Abraham or recognizing a spiritual tradition identified with him. They constitute one of three major divisions in comparative religion, along with Indian religions (Dharmic) and East Asian religions (Taoic).\n", "In the study of comparative religion, the category of Abrahamic religions consists of the three monotheistic religions, Christianity, Islam and Judaism, which claim Abraham (Hebrew \"Avraham\" אַבְרָהָם; Arabic \"Ibrahim\" إبراهيم ) as a part of their sacred history. Smaller religions such as Bahá'í Faith that fit this description are sometimes included but are often omitted.\n", "The Abrahamic religions, also referred to collectively as Abrahamism, are a group of Semitic-originated religious communities of faith that claim descent from the Judaism of the ancient Israelites and the worship of the God of Abraham. The Abrahamic religions are monotheistic, with the term deriving from the patriarch Abraham (a major biblical figure from the Old Testament, which is recognized by Jews, Christians, Muslims, and others).\n", "According to Malhotra, Abrahamic religions are history-centric in that their fundamental beliefs are sourced from history – that God revealed his message through a special prophet and that the message is secured in scriptures. This special access to God is available only to these intermediaries or prophets and not to any other human beings. History-centric Abrahamic religions claim that we can resolve the human condition only by following the lineage of prophets arising from the Middle East. All other teachings and practices are required to get reconciled with this special and peculiar history. By contrast, the dharmic traditions—Hinduism, Buddhism, Jainism and Sikhism—do not rely on history in the same absolutist and exclusive way.\n", "The three major Abrahamic faiths (in chronological order of revelation) are Judaism, Christianity and Islam. Some strict definitions of what constitutes an Abrahamic religion include only these three faiths. However, there are many other religions incorporating Abrahamic doctrine, theology, genealogy and history into their own belief systems.\n" ]
Is the Multiverse a "God of the Gaps"-type explanation for something we don't understand?
I think that this links directly to the question of the place of falsifiability in science more broadly, which is something physicists are currently fighting over. [Some have argued](_URL_1_) that the multiverse theory, among others, fails a basic test of scientific rigor because it is not falsifiable. [Others](_URL_0_) have argued that falsifiability is unrelated to whether something is real or not, and that an idea shouldn't be rejected out of hand just because it's unfalsifiable. So some professional scientists would answer your question with a resounding "Yes!" while others would say "Of course not!" (My personal view is that the theories and their consequences should be explored but regarded with suspicion until they can be tested one way or the other, so both camps have a valid point. The ability to live with the tension of uncertainty is a virtue.)
[ "The multiverse is a series of parallel universes in many of the science fiction and fantasy novels and short stories written by Michael Moorcock. (Many other fictional settings also have the concept of a multiverse.) Central to these works is the concept of an Eternal Champion who has potentially multiple identities across multiple dimensions. The multiverse contains a legion of different versions of Earth in various times, histories, and occasionally, sizes. One example is the world in which his Elric Saga takes place. The multiplicity of places in this collection of universes include London, Melniboné, Tanelorn, the Young Kingdoms, and the Realm of Dreams.\n", "The concept of a multiverse is explored in various religious cosmologies that propose that the totality of existence comprises multiple or infinitely many universes, including our own. Usually, such beliefs include a creation myth, a history, a worldview and a prediction of the eventual fate or destiny of the world. The worldview discusses the current organizational form of our universe and may contain references to other supernatural world or worlds. These references have aided several esoteric practices, including contacts with spirit worlds, and activities concerning personal or inner spiritual development.\n", "Stoeger, Ellis, and Kircher (sec. 7) note that in a true multiverse theory, \"the universes are then completely disjoint and nothing that happens in any one of them is causally linked to what happens in any other one. This lack of any causal connection in such multiverses really places them beyond any scientific support\". Ellis (p29) specifically criticizes the MUH, stating that an infinite ensemble of completely disconnected universes is \"completely untestable, despite hopeful remarks sometimes made, see, e.g., Tegmark (1998).\"\n", "The Multiverse hypothesis proposes the existence of many universes with different physical constants, some of which are hospitable to intelligent life (see multiverse: anthropic principle). Because we are intelligent beings, it is unsurprising that we find ourselves in a hospitable universe if there is such a multiverse. The Multiverse hypothesis is therefore thought to provide an elegant explanation of the finding that we exist despite the required fine-tuning. (See for a detailed discussion of the arguments for and against this suggested explanation.)\n", "If the concept of the Multiverse is true, then other hierarchies exist in other Universes. It is believed in Theosopy that the concept of the Multiverse is true because Theosophists have stated that the Universal Logos has created many other cosmoses besides our own.\n", "The fictional Moorcock Multiverse, consisting of several universes, many layered dimensions, spheres, and alternative worlds, is the place where the eternal struggle between Law and Chaos, the two main forces of Moorcock's worlds, takes place. In all these dimensions and worlds, these forces constantly war for supremacy. Since the victory of Law or Chaos would cause the Multiverse either to become permanently static or totally formless, the Cosmic Balance enforces certain limits which the powers of Law and Chaos violate at their peril. Law, Chaos, and the Balance are active, but seemingly non-sentient, forces which empower various champions and representatives.\n", "Critics of the multiverse-related explanations argue that there is no independent evidence that other universes exist. 
Some criticize the inference from fine-tuning for life to a multiverse as fallacious, whereas others defend it against that challenge.\n" ]
why can't i, a nearsighted person, use a vr headset without my glasses? shouldn't everything still be clear since it's just a screen close to my eyes?
Glasses refocus light so that it lands correctly on the retina of a misshapen eye. Each person, and sometimes each eye, needs a different correction. If a person with "normal" sight wears someone's prescription glasses, everything looks distorted. The lenses in a VR headset also refocus light, but they are designed to simulate distance rather than to correct for eye shape. Wearing your glasses or contacts with the VR headset adds the correction your eyes need.
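To see why the headset's built-in lenses don't substitute for your prescription, here is a rough back-of-the-envelope sketch. The virtual image distance (about 1.5 m) and the example prescription (-2.5 D) are assumptions for illustration only; real headsets and eyes vary.

```python
# Rough sketch: why a nearsighted eye still sees blur in a VR headset.
# Assumed numbers for illustration only.

prescription_diopters = -2.5                    # example myopic prescription
far_point_m = 1 / abs(prescription_diopters)    # farthest distance this eye can focus
print(f"far point of this eye: {far_point_m:.2f} m")   # 0.40 m

virtual_image_m = 1.5   # assumed distance the headset lenses simulate
if virtual_image_m > far_point_m:
    print("virtual image lies beyond the far point -> blurry without correction")
```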
[ "VR headsets may regularly cause eye fatigue, as does all screened technology, because people tend to blink less when watching screens, causing their eyes to become more dried out. There have been some concerns about VR headsets contributing to myopia, but although VR headsets sit close to the eyes, they may not necessarily contribute to nearsightedness if the focal length of the image being displayed is sufficiently far away.\n", "BULLET::::- People experience 3-D virtual reality through glasses and contact lenses that beam images directly to their retinas (retinal display). Coupled with an auditory source (headphones), users can remotely communicate with other people and access the Internet.\n", "The technology differs from standard Virtual Reality. With standard Virtual Reality, people wear screens over their eyes, the headsets through which they are unable to see their own torso, arms or legs. Whereas in a VR CAVE, instead of going into the computer game, the person wears clear glasses and can see their own body, the computer-generated objects appearing to be in the real world with them.\n", "BULLET::::- These special glasses and contact lenses can deliver \"augmented reality\" and \"virtual reality\" in three different ways. First, they can project \"heads-up-displays\" (HUDs) across the user's field of vision, superimposing images that stay in place in the environment regardless of the user's perspective or orientation. Second, virtual objects or people could be rendered in fixed locations by the glasses, so when the user's eyes look elsewhere, the objects appear to stay in their places. Third, the devices could block out the \"real\" world entirely and fully immerse the user in a virtual reality environment.\n", "Arguably the most important facet in making a virtual environment seem real is an appeal to sight. A virtual reality headset, incorporating a head-mounted display (HMD) is placed in front of the eyes of a patient like a pair of sunglasses, enabling for complete visual attention. A virtual environment is then displayed. A system of motion sensors tracks movement of the patient's head. If the patient tilts, or turns his head to view another part of the room, the environment adjusts accordingly. This allows for a more realistic experience by limiting restrictions of the head.\n", "Using VRI for medical, legal and mental health settings is seen as controversial by some in the deaf community, where there is an opinion that it does not provide appropriate communication access—particularly in medical settings where the patient's ability to watch the screen or sign clearly to the camera may be compromised. This is balanced by many in the services and public services sectors who identify with the benefits of being able to communicate in otherwise impossible (and sometimes life-threatening) situations without having to wait hours for an interpreter to turn up, even if this initial contact is used just to arrange a further face-to-face appointment. Therefore, businesses and organizations contend that it meets or exceeds the minimum threshold for reasonable accommodation as its principle is built around offering \"reasonable adjustment\" through increasing initial accessibility.\n", "BULLET::::- By the same year, practical virtual reality glasses will be in use. The devices will work by beaming images directly onto the retinas of their users, creating large, three-dimensional floating images in the person's field of view. 
Such devices would provide a visual experience on par with a very large television, but would be highly portable, combining the best features of a portable video player and a widescreen TV. The glasses will deliver full-immersion virtual reality.\n" ]
why, when watching a live tv program, is there a delay on a screen in the shot when it shows the same program as being broadcast
Because there is a time delay between capturing video and displaying it, and the live view is capturing a display of itself. For any given frame, the light information from the camera gets converted to electrical signals, encoded, sent through the broadcast chain, decoded, and finally displayed elsewhere. That takes several frames' worth of time before the image pops up on the screen that the live camera can see, and this time difference is the delay you notice. You can get the same effect by pointing a webcam at a screen showing its own video feed: you'll get a repeating image of what you see, but the copies pop up one at a time because of the delay.
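As a toy illustration of how "several frames of time" adds up around the loop, here is a minimal sketch with made-up stage latencies (real broadcast chains differ and are usually much longer):

```python
# Toy latency budget for a camera -> broadcast -> screen loop.
# Stage values are illustrative assumptions, not measured figures.
FRAME_MS = 1000 / 30          # ~33 ms per frame at 30 fps

stages_ms = {
    "camera capture/readout": 33,
    "encoding":               66,
    "transmission":           100,
    "decoding + display":     66,
}

total_ms = sum(stages_ms.values())
print(f"total loop delay: {total_ms:.0f} ms "
      f"(~{total_ms / FRAME_MS:.1f} frames)")
# Each nested copy of the screen inside the shot lags by this amount again,
# which is why the repeated images appear one step behind each other.
```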
[ "In radio and television, broadcast delay is an intentional delay when broadcasting live material. Such a delay may be short (often seven seconds) to prevent mistakes or unacceptable content from being broadcast. Longer delays lasting several hours can also be introduced so that the material is aired at a later scheduled time (such as the prime time hours) to maximize viewership.\n", "Tape delay may also refer to the process of broadcasting an event at a later scheduled time because a scheduling conflict prevents a live telecast, or a broadcaster seeks to maximize ratings by airing an event in a certain timeslot. That can also be done because of time constraints of certain portions, usually those that do not affect the outcome of the show, are edited out or the availability of hosts or other key production staff only at certain times of the day, and it is generally applicable for cable television programs.\n", "If a user is interrupted while watching television, they can use Chase Play to 'pause' the television until they can keep watching. Initiating 'Chase Play' means that the user no longer has to wait until the show is over before finding out what they have missed. When returning from the interruption, 'Chase Play' allows the user to start viewing the program from precisely where they left off. The recording device will continue to capture the program in 'real time' until instructed to stop.\n", "Many modern digital television receivers, such as standalone TV sets and set-top boxes use sophisticated audio processing, which can create a delay between the time when the audio signal is received and the time when it is heard on the speakers. Since many of these TVs also cause delays in processing the video signal this can result in the two signals being sufficiently synchronized to be unnoticeable by the viewer. However, if the difference between the audio and video delay is significant, the effect can be disconcerting. Some TVs have a \"lip sync\" setting that allows the audio lag to be adjusted to synchronize with the video, and others may have advanced settings where some of the audio processing steps can be turned off.\n", "Because television is a field- rather than frame-based system, however, not all the information in the picture can be retained on film in the same way as it can on videotape. The time taken physically to move the film on by one frame and stop it so that the gate can be opened to expose a new frame of film to the two fields of television picture is much longer than the vertical blanking interval between these fields—so the film is still moving when the start of the next field is being displayed on the television screen. It is not possible to accelerate the film fast enough to get it there in time without destroying the perforations in the film stock—and the larger the film gauge used, the worse the problem becomes.\n", "\"Screen time was precious, and infusing every moment with the emotion [was the point], not just forming the pieces of the puzzle to tell the story, which is hard enough. If you’re going to take five seconds of screen time, you’d better damn well be sure that there’s an emotion there. It may be very, very subtle, but trust the audiences to pick up on that, because audiences do.\"\n", "By default, users can view program titles for multiple TV channels simultaneously, but, except for the currently selected or displayed channel, only for one time slot at a time. 
For the currently selected or displayed channel, the next three shows and their times are displayed in a \"paddle\" off to the right side of the vertical list. To see more time slots, users can use the BACK and NEXT buttons on the remote to scroll through time slots (this works on all menus) or can double press the askew 'square' button on the top row of the remote to view a more traditional TV channel grid guide (this only works for the Channels menu). While watching full-screen TV (with the Moxi Menu hidden), the Moxi Flip Bar (a bar that pops up on the bottom of the screen) provides programming information and ability to change channels by using the arrow keys on the remote.\n" ]
why do people's ears tend to get hot when they drink alcohol?
Alcohol causes your blood vessels to expand, which increases circulation. This is more noticeable where there is very little skin and muscle to hide the change, like your ears.
[ "Alcohol may worsen asthmatic symptoms in up to a third of people. This may be even more common in some ethnic groups such as the Japanese and those with aspirin-induced asthma. Other studies have found improvement in asthmatic symptoms from alcohol.\n", "Alcohol abuse can cause a susceptibility to infection after major trauma to the lungs / respiratory system. It creates an increased risk of aspiration of gastric acid, microbes from the upper part of the throat, decreased mucous-facilitated clearance of bacterial pathogens from the upper airway and impaired pulmonary host defenses. This increased colonization by pathogenic organisms, combined with the acute intoxicating effects of alcohol and the subsequent depression of the normally protective gag and cough reflexes, leads to more frequent and severe pneumonia from gram-negative organisms. Defects in the function of the upper airway's clearance mechanisms in alcoholic patients have been detected.\n", "Alcohol consumption increases the risk of hypothermia in two ways: vasodilation and temperature controlling systems in the brain. Vasodilation increases blood flow to the skin, resulting in heat being lost to the environment. This produces the effect of an individual \"feeling\" warm, when they are actually losing heat. Alcohol also affects the temperature-regulating system in the brain, decreasing the body's ability to shiver and use energy that would normally aid the body in generating heat. The overall effects of alcohol lead to a decrease in body temperature and a decreased ability to generate body heat in response to cold environments. Alcohol is a common risk factor for death due to hypothermia. Between 33% and 73% of hypothermia cases are complicated by alcohol.\n", "Repeated exposure to fatty alcohols produce low-level toxicity and certain compounds in this category can cause local irritation on contact or low-grade liver effects (essentially linear alcohols have a slightly higher rate of occurrence of these effects). No effects on the central nervous system have been seen with inhalation and oral exposure. Tests of repeated bolus dosages of 1-hexanol and 1-octanol showed potential for CNS depression and induced respiratory distress. No potential for peripheral neuropathy has been found. In rats, the no observable adverse effect level (NOAEL) ranges from 200 mg/kg/day to 1000 mg/kg/day by ingestion. There has been no evidence that fatty alcohols are carcinogenic, mutagenic, or cause reproductive toxicity or infertility. Fatty alcohols are effectively eliminated from the body when exposed, limiting possibility of retention or bioaccumulation.\n", "Alcohol also impairs and alters the functioning in the cerebellum, which affects both motor function and coordination. It has a notable inhibitory effect on the neurons of the cerebral cortex, affecting and altering thought processes, decreasing inhibition, and increasing the pain threshold. It also decreases sexual performance by depressing nerve centers in the hypothalamus. Alcohol also has an effect on urine excretion via inhibition of anti-diuretic hormone (ADH) secretion of the pituitary gland. Lastly, it depresses breathing and heart rate by inhibiting neuronal functioning of the medulla.\n", "Regular consumption of alcohol is associated with an increased risk of gouty arthritis and a decreased risk of rheumatoid arthritis. Two recent studies report that the more alcohol consumed, the lower the risk of developing rheumatoid arthritis. 
Among those who drank regularly, the one-quarter who drank the most were up to 50% less likely to develop the disease compared to the half who drank the least.\n", "A study conducted by French scientists showed that loud music leads to more alcohol consumption in less time. For three Saturday evenings researchers observed customers of two bars situated in a medium-sized city in the west of France. Participants included forty males aged between 18 and 25, who were unaware that they were subjects of a research. The study featured only those who ordered a glass of draft beer (25 cl. or 8 oz.). The lead researcher, Nicolas Guéguen, said that each year more than 70,000 people in France die from an increased level of alcohol consumption, which also leads to fatal car accidents.\n" ]
Why are older geological layers at the bottom and newer ones at the top?
An important concept is that of [isostatic subsidence](_URL_0_). Let's imagine a really simple scenario. Imagine a lake whose level is around sea level. It is receiving sediment from rivers draining mountains and other surrounding upland areas. Let's say that we deposit 1 cm of sediment in that lake. That 1 cm of sediment has a mass associated with it, which now adds to the total mass of the column of crust beneath it. This addition of mass causes a little more of the asthenosphere (a part of the mantle beneath the lithosphere which is weaker, behaves plastically and is able to "flow" on a geologic timescale) to be displaced beneath this column, so the column sinks a tiny bit, thus creating more space, referred to as "accommodation space", within this lake. Continue this process for a long time and you will progressively end up with a column of sediments that increases in age downward, but the surface of the earth (where sediment is being deposited in your simple lake) stays at about the same absolute elevation referenced to some external datum (like sea level). There is also some recycling happening as some areas are uplifted by processes like mountain building. So you may have millions of years of deposition in an area, and eventually this area may be involved in the formation of a mountain range, and then a decent portion of those rocks will be uplifted, eroded and then deposited in a basin.
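To put rough numbers on "the column sinks a tiny bit", here is the simplest local (Airy) isostasy balance. This is only a sketch with typical textbook densities; it ignores flexure, sediment compaction, and the water column.

```python
# First-order Airy isostasy: the added sediment load is compensated by
# displacing asthenosphere, so the column subsides.
# Typical illustrative densities (kg/m^3); real values vary.
rho_sediment      = 2500.0
rho_asthenosphere = 3200.0

deposited_cm = 1.0
subsidence_cm = deposited_cm * rho_sediment / rho_asthenosphere
net_surface_rise_cm = deposited_cm - subsidence_cm

print(f"subsidence:       {subsidence_cm:.2f} cm")        # ~0.78 cm
print(f"net surface rise: {net_surface_rise_cm:.2f} cm")  # ~0.22 cm
# Most of each new layer's thickness is absorbed by subsidence, which is how
# thick, downward-aging sediment piles accumulate while the depositional
# surface stays near the same elevation.
```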
[ "That new rock layers are above older rock layers is stated in the principle of superposition. There are usually some gaps in the sequence called unconformities. These represent periods where no new sediments were laid down, or when earlier sedimentary layers were raised above sea level and eroded away.\n", "When rock units are placed under horizontal compression, they shorten and become thicker. Because rock units, other than muds, do not significantly change in volume, this is accomplished in two primary ways: through faulting and folding. In the shallow crust, where brittle deformation can occur, thrust faults form, which causes deeper rock to move on top of shallower rock. Because deeper rock is often older, as noted by the principle of superposition, this can result in older rocks moving on top of younger ones. Movement along faults can result in folding, either because the faults are not planar or because rock layers are dragged along, forming drag folds as slip occurs along the fault. Deeper in the Earth, rocks behave plastically and fold instead of faulting. These folds can either be those where the material in the center of the fold buckles upwards, creating \"antiforms\", or where it buckles downwards, creating \"synforms\". If the tops of the rock units within the folds remain pointing upwards, they are called anticlines and synclines, respectively. If some of the units in the fold are facing downward, the structure is called an overturned anticline or syncline, and if all of the rock units are overturned or the correct up-direction is unknown, they are simply called by the most general terms, antiforms and synforms.\n", "The \"law of superposition\" states that a sedimentary rock layer in a tectonically undisturbed sequence is younger than the one beneath it and older than the one above it. This is because it is not possible for a younger layer to slip beneath a layer previously deposited. The only disturbance that the layers experience is bioturbation, in which animals and/or plants move things in the layers. however, this process is not enough to allow the layers to change their positions. This principle allows sedimentary layers to be viewed as a form of vertical time line, a partial or complete record of the time elapsed from deposition of the lowest layer to deposition of the highest bed.\n", "The oldest rocks are Lower Carboniferous limestones of Dinantian age, which form the core of the White Peak within the Peak District National Park. Because northern Derbyshire is effectively an uplifted dome of rock layers which have subsequently eroded back to expose older rocks in the centre of that dome, these are encircled by progressively younger limestone rocks until they in turn give way on three sides to Upper Carboniferous shales, gritstones and sandstones of Namurian age.\n", "An inlier is an area of older rocks surrounded by younger rocks. Inliers are typically formed by the erosion of overlying younger rocks to reveal a limited exposure of the older underlying rocks. Faulting or folding may also contribute to the observed outcrop pattern. A classic example from Great Britain is that of the inlier of folded Ordovician and Silurian rocks at Horton in Ribblesdale in North Yorkshire which are surrounded by the younger flat-lying Carboniferous Limestone. The location has long been visited by geology students and experts. 
Another example from South Wales is the Usk Inlier in Monmouthshire where Silurian age rocks are upfolded amidst Old Red Sandstone rocks of Devonian age.\n", "The oldest rocks are hard, grey limestones that make up the Carboniferous Limestone. These were laid down in a warm, shallow, subtropical sea and are rich in fossils, especially corals, crinoids and brachiopods. About 300 million years ago movements in the Earth's crust deformed and folded the rocks, as a result of which the rocks above the Carboniferous Limestone were worn away. Deposition resumed during the Triassic Period when the area was a desert with hills of limestone, and a dry plain where the Bristol Channel is now. Short violent storms caused flash-floods which carried debris down the hillsides and deposited it as alluvial fans of coarse, red conglomerate at the edge of the plain. These Triassic lie unconformably on the Carboniferous Limestone.\n", "Rocks normally decrease in porosity with age and depth of burial. Tertiary age Gulf Coast sandstones are in general more porous than Cambrian age sandstones. There are exceptions to this rule, usually because of the depth of burial and thermal history.\n" ]
Was American Artillery more effective and decisive than German Artillery in WW2? If so, why?
[This response by /u/vonadler is exactly what you are looking for.](_URL_0_)
[ "Although also frequently out-ranged by their German counterparts, American artillery built up a reputation for effectiveness and the infantry increasingly relied on the artillery to get them forward. The War Department General Staff ignored the Army Ground Force's recommendations for a powerful heavy artillery arm, authorizing only 81 medium and 54 heavy non-divisional artillery battalions instead of the 140 and 101 recommended by Army Ground Forces, only to have combat experience in Italy prove that air power could not substitute for heavy artillery. As a result, over 100 medium and heavy artillery battalions were activated in 1944, mostly through the conversion of coast artillery units.\n", "After the fall of France, the U.S. Army studied the reasons behind the effectiveness of the German campaign against the French and British forces. One aspect that was highlighted by this study was the use of self propelled artillery, however by 1941 there was little available in the U.S. Army's arsenal that could be used in such a role. The Army had a number of M1897A5 guns, sufficient for the mass-production for such a weapon, and the M3 half-track was coming into production. After some debate, the Army decided to place M1897A5 guns on the M3 half-track chassis, which was designated the T12 GMC. The M1897A5 gun was originally adapted for the M3 chassis by placing it in a welded box riveted to the chassis behind the driver's compartment. It was accepted by the Army on 31 October 1941.\n", "By the early 20th century, infantry weapons had become more powerful, forcing most artillery away from the front lines. Despite the change to indirect fire, cannon proved highly effective during World War I, directly or indirectly causing over 75% of casualties. The onset of trench warfare after the first few months of World War I greatly increased the demand for howitzers, as they were more suited at hitting targets in trenches. Furthermore, their shells carried more explosives than those of guns, and caused considerably less barrel wear. The German army had the advantage here as they began the war with many more howitzers than the French. World War I also saw the use of the Paris Gun, the longest-ranged gun ever fired. This calibre gun was used by the Germans against Paris and could hit targets more than away.\n", "Initially, anti-aircraft artillery guns of World War I were adaptations of existing medium-caliber weapons, mounted to allow fire at higher angles. By 1915, the German command realized that these were useless for anything beyond deterrence, even against the vulnerable balloons and slow-moving aircraft of the period. With the increase of aircraft performance, many armies developed dedicated AA guns with a high muzzle velocity – allowing the projectiles to reach greater altitudes. It was this muzzle velocity, combined with a projectile of high weight, that made the 8.8 cm Flak one of the great World War II anti-tank guns. The first such German gun was introduced in 1917, and it used the 8,8 cm caliber, common in the \"Kaiserliche Marine\" (navy).\n", "But changes have been made since past wars and in World War I, artillery was more accurate than before, although not as accurate as artillery one century newer. The tactics of artillery from previous wars were carried on, and still had similar success. 
Warships and battleships also carried large caliber guns that needed to be elevated to certain degrees to accurately hit targets, and they also had the similar drawbacks of land artillery.\n", "By the early 20th century, infantry weapons became more powerful and accurate, forcing most artillery away from the front lines. Despite the change to indirect fire, cannon still proved highly effective during World War I, causing over 75% of casualties. The onset of trench warfare after the first few months of World War I greatly increased the demand for howitzers, as they fired at a steep angle, and were thus better suited than guns at hitting targets in trenches. Furthermore, their shells carried larger amounts of explosives than those of guns, and caused considerably less barrel wear. The German army took advantage of this, beginning the war with many more howitzers than the French. World War I also marked the use of the Paris Gun, the longest-ranged gun ever fired. This caliber gun was used by the Germans to bombard Paris, and was capable of hitting targets more than away.\n", "Soon after the war, Ernst von Hoeppner wrote that German air units were overwhelmed by the number and aggression of British and French air crews, who gained air supremacy and reduced the \"to a state of impotence\". Hoeppner wrote that Anglo-French artillery-observation aircraft were their most effective weapon, operating in \"perfect accord\" with their artillery and \"annihilating\" the German guns, although the French aviators were superior to the British. Low-altitude flying for machine-gun attacks on German infantry had little practical effect but the depression of German infantry morale was much greater, leading to a belief that return fire had no effect on Allied aircraft and that all aeroplanes seen were British or French, which led to more demands from the infantry for air protection by the , despite their unsuitable aircraft.\n" ]
Neutron Star Density question. (Chemistry/Astrophysics)
The mass density of a neutron star is comparable to the mass density of the nucleus of an atom. In ordinary matter, the density is much less, because the mass of an atom is almost all in the nucleus, but the nucleus takes up only a tiny fraction of the volume of an atom, so the density of ordinary matter is vastly less than the density of the nucleus. In a neutron star, you basically have density comparable to what you'd get if you piled a bunch of nuclei as close to each other as you could.
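To make the comparison above concrete, here is a rough back-of-envelope sketch (not part of the original answer) of nuclear density versus ordinary matter. The empirical nuclear-radius relation r ≈ r0·A^(1/3) with r0 ≈ 1.2 fm, the choice of iron-56 as the example nucleus, and the bulk density of solid iron are assumptions made purely for illustration:

```python
# Back-of-envelope estimate: density of an atomic nucleus vs. ordinary matter.
# Assumptions (not from the answer above): empirical nuclear radius
# r = r0 * A**(1/3) with r0 ~ 1.2 fm, iron-56 as the example nucleus,
# and the nuclear mass approximated as A atomic mass units.
import math

U = 1.66053906660e-27        # atomic mass unit, kg
R0 = 1.2e-15                 # nuclear radius constant, m (~1.2 fm)
A = 56                       # mass number of iron-56

mass = A * U                              # nuclear mass, kg
radius = R0 * A ** (1 / 3)                # nuclear radius, m
volume = (4.0 / 3.0) * math.pi * radius ** 3

rho_nucleus = mass / volume               # comes out around 2.3e17 kg/m^3
rho_bulk_iron = 7.87e3                    # ordinary solid iron, kg/m^3

print(f"nuclear density   ~ {rho_nucleus:.1e} kg/m^3")
print(f"bulk iron density ~ {rho_bulk_iron:.1e} kg/m^3")
print(f"ratio             ~ {rho_nucleus / rho_bulk_iron:.0e}")
```

The ratio works out to a few times 10^13, which is the sense in which ordinary matter is "mostly empty": nearly all of an atom's mass sits in a nucleus occupying a tiny fraction of its volume, and neutron-star matter is roughly what you get when nuclei are packed edge to edge.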
[ "Neutron stars have overall densities of to ( to times the density of the Sun), which is comparable to the approximate density of an atomic nucleus of . The neutron star's density varies from about in the crust—increasing with depth—to about or (denser than an atomic nucleus) deeper inside. A neutron star is so dense that one teaspoon (5 milliliters) of its material would have a mass over , about 900 times the mass of the Great Pyramid of Giza. In the enormous gravitational field of a neutron star, that teaspoon of material would weigh , which is 15 times what the Moon would weigh if it were placed on the surface of the Earth. The entire mass of the Earth at neutron star density would fit into a sphere of 305m in diameter (the size of the Arecibo Observatory). The pressure increases from to from the inner crust to the center.\n", "The equation of state for a neutron star is not yet known. It is assumed that it differs significantly from that of a white dwarf, whose equation of state is that of a degenerate gas that can be described in close agreement with special relativity. However, with a neutron star the increased effects of general relativity can no longer be ignored. Several equations of state have been proposed (FPS, UU, APR, L, SLy, and others) and current research is still attempting to constrain the theories to make predictions of neutron star matter. This means that the relation between density and mass is not fully known, and this causes uncertainties in radius estimates. For example, a neutron star could have a radius of 10.7, 11.1, 12.1 or 15.1 kilometers (for EOS FPS, UU, APR or L respectively).\n", "As noted above, a mass distribution will emit gravitational radiation only when there is spherically asymmetric motion among the masses. A spinning neutron star will generally emit no gravitational radiation because neutron stars are highly dense objects with a strong gravitational field that keeps them almost perfectly spherical. In some cases, however, there might be slight deformities on the surface called \"mountains\", which are bumps extending no more than 10 centimeters (4 inches) above the surface, that make the spinning spherically asymmetric. This gives the star a quadrupole moment that changes with time, and it will emit gravitational waves until the deformities are smoothed out.\n", "Neutron star relativistic equations of state describe the relation of radius vs. mass for various models. The most likely radii for a given neutron star mass are bracketed by models AP4 (smallest radius) and MS2 (largest radius). BE is the ratio of gravitational binding energy mass equivalent to the observed neutron star gravitational mass of \"M\" kilograms with radius \"R\" meters,\n", "The seven objects seem to be the best laboratory to study neutron star atmospheres and, probably, internal structure. The holy grail of neutron star astrophysics is the determination of the equation of state (EOS) of matter at supra-nuclear densities. The most direct way of constraining the EOS is to measure simultaneously the neutron star mass and radius. If a neutron star emits blackbody radiation from its surface of radius formula_1 at homogeneous temperature formula_2, the received flux at distance formula_3 is:\n", "Neutron stars that can be observed are very hot and typically have a surface temperature of around . 
They are so dense that a normal-sized matchbox containing neutron-star material would have a weight of approximately 3 billion metric tons, the same weight as a 0.5 cubic kilometre chunk of the Earth (a cube with edges of about 800 metres). Their magnetic fields are between 10^8 and 10^15 (100 million to 1 quadrillion) times stronger than Earth's magnetic field. The gravitational field at the neutron star's surface is about 2 × 10^11 (200 billion) times that of Earth's gravitational field.\n", "A team of astronomers from Italy, Poland, and the U.K. reported in 2016 observations of the light emitted by a neutron star (pulsar RX J1856.5−3754). The star is surrounded by a very strong magnetic field (10G), and one expects birefringence from the vacuum polarization described by the Euler–Heisenberg Lagrangian. A degree of polarization of about 16% was measured and was claimed to be \"large enough to support the presence of vacuum birefringence, as predicted by QED\". Fan et al. pointed out that these results are uncertain due to the low accuracy of the star model and uncertainty in the direction of the neutron star's magnetization axis.\n" ]
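The blackbody-flux relation quoted in the context above lends itself to a quick numerical sketch. The input values below (R = 12 km, T = 10^6 K, d = 120 pc) are illustrative assumptions, not figures taken from the excerpts:

```python
# Illustrative use of the blackbody-flux relation F = sigma * T**4 * (R/d)**2
# for a neutron star of radius R and surface temperature T seen from distance d.
# All input values are assumed for the sake of the example.
SIGMA = 5.670374419e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
PARSEC = 3.0857e16           # metres per parsec

R = 12e3                     # assumed stellar radius, m
T = 1.0e6                    # assumed surface temperature, K
d = 120 * PARSEC             # assumed distance, m

flux = SIGMA * T ** 4 * (R / d) ** 2
print(f"received flux ~ {flux:.2e} W/m^2")   # of order 1e-13 W/m^2 here
```

Run in reverse, this is how a measured flux and temperature constrain the ratio R/d, and, given an independent distance estimate, the stellar radius that candidate equations of state must reproduce.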
why are addresses depicted the way they are, and not postcode/zipcode first? surely this would be easier?
Pretty much you answered it yourself. It's done that way because it's traditional. People haven't felt a need for change because there were no problems with the old way. The cost to promote the change, and the cost of potential delays or disruptions from confusion or inconsistency, would outweigh the negligible benefits.
[ "Postal designations for place names become \"de facto\" locations for their addresses, and as a result, it is difficult to convince residents and businesses that they are located in another city or town different from the \"preferred\" place name associated with their ZIP Codes. Because of issues of confusion and lack of identity, some cities, such as Signal Hill, California, (an enclave located entirely inside the separate city of Long Beach) have successfully petitioned the Postal Service to change ZIP Code boundaries or create new ZIP Codes so their cities become the \"preferred\" place name for addresses within the ZIP Code.\n", "The classic postal codes of the 1970s are not fine-grained, can't be used as location (converted to approximated latitude/longitude of the address). But modern digital maps can generate geocodes (e.g. Geohash), it can be used as a finer location code with the same number of digits, and without administrative cost. \n", "Initially, states divided into multiple area codes were assigned area codes with a \"1\" in the second position, while areas that covered entire states or provinces received codes with \"0\" as the middle digit; however, this rule was abandoned by the early 1950s. In order to distinguish seven-digit dialing from ten-digit dialing, central office codes were restricted to not having a \"0\" or \"1\" in the middle position. This was already common practice, because the system of using the initial letters of central office names did not assign letters to digits \"1\" and \"0\". Furthermore, area codes and central office codes could not start with \"0\" or \"1\", because \"0\" was used for operator assistance and a leading single pulse (\"i.e.\", the digit \"1\") was automatically ignored by most switching equipment of the time. In addition, the eight codes of the form \"N11\" (\"N = 2–9\") were reserved as service codes. The easily recognizable codes of the form \"N00\" were available in the numbering plan, but were not initially included in assignments.\n", "Most of the postal code systems are numeric; only a few are alphanumeric (i.e., use both letters and digits). Alphanumeric systems can, given the same number of characters, encode many more locations. For example, while a 2 digit numeric code can represent 100 locations, a 2 character alphanumeric code using ten numbers and twenty letters can represent 900 locations.\n", "Because of the way the segment address and offset are added, a single linear address can be mapped to up to 2 = 4096 distinct segment:offset pairs. For example, the linear address 08124h can have the segmented addresses 06EFh:1234h, 0812h:0004h, 0000h:8124h, etc.\n", "\"Acceptable\" place names are usually added to a ZIP Code in cases where the ZIP Code boundaries divide them between two or more cities, as in the case of Centennial. However, in many cases, only the \"preferred\" name can be used, even when many addresses in the ZIP Code are in another city. People sometimes must use the name of a post office rather than their own city.\n", "The mapcode system was designed specifically as a free, brand-less, international standard for representing any location on the surface of the Earth by a short, easy to recognize and remember “code”, usually consisting of between 4 and 7 letters and digits.\n" ]
In late medieval England, how were MPs chosen for the House of Commons?
Ah, the pre-reform House of Commons. As always, the answer is it depends which ones. Essentially you had 2 main forms of MPs: the knights of the shires that represented counties, and those that represented urban centres. Then you had various other MPs, such as those that represented the Cinque Ports (university constituencies, though, came rather later). Now that we've got that out of the way, it depends on the date. 1430 is a critical year, as that is when the 40 shilling freeholders act was passed. After that date you needed to have a freehold worth 40 shillings a year in rent in order to vote in county elections (prior to that point it is possible that all freeholders had the vote in theory). One big catch was that you had to vote in the county court. This meant that if you didn't live near it, voting wouldn't be very practical. An exception was Hampshire, where you could vote in Newport on the Isle of Wight as well as Winchester. Voting was public, so you might want to stick to candidates that the great and good of Winchester were happy with. So in practice such MPs were selected by those who met the voting requirements, were able to get to the county town and were prepared to openly support them. For MPs that represented towns, the system was largely left up to the authorities of the town in question, and a range of approaches were taken, from giving the vote to most householders, through just those that paid certain taxes, to just the members of the borough corporation. Of course this assumes that any MP was chosen at all. Southampton, for example, repeatedly failed to provide an MP (at the time it was expected to produce two). This makes more sense when you realise that Southampton was frequently a poor town and parliaments could be held in some fairly random places.
[ "The House of Commons of the Kingdom of England evolved from an undivided parliament to serve as the voice of the tax-paying subjects of the counties and of the boroughs. Knights of the shire, elected from each county, were usually landowners, while the borough members were often from the merchant classes. These members represented subjects of the Crown who were not Lords Temporal or Spiritual, who themselves sat in the House of Lords. The House of Commons gained its name because it represented communities (\"communes\"). Members of the Commons were all elected, while members of the upper house were summoned to parliament by the monarch, usually on the basis of a title which would be inherited after the holder's death, or because they held a position in the realm that warranted special recognition, such as the bishops of the English and Welsh dioceses. After the Reformation, these bishops were those of the Church of England. \n", "The division of the Parliament of England into two houses occurred during the reign of Edward III: in 1341 the Commons met separately from the nobility and clergy for the first time, creating in effect an Upper Chamber and a Lower Chamber, with the knights and burgesses sitting in the latter. They formed what became known as the House of Commons, while the clergy and nobility became the House of Lords. Although they remained subordinate to both the Crown and the Lords, the Commons did act with increasing boldness. During the Good Parliament of 1376, the Commons appointed Sir Peter de la Mare to convey to the Lords their complaints of heavy taxes, demands for an accounting of the royal expenditures, and criticism of the King's management of the military. The Commons even proceeded to impeach some of the King's ministers. Although Mare was imprisoned for his actions, the benefits of having a single voice to represent the Commons were recognized, and the office which became known as Speaker of the House of Commons was thus created. Mare was soon released after the death of King Edward III and in 1377 became the second Speaker of the Commons.\n", "In 1341 the Commons met separately from the nobility and clergy for the first time, creating what was effectively an Upper Chamber and a Lower Chamber, with the knights and burgesses sitting in the latter. This Upper Chamber became known as the House of Lords from 1544 onward, and the Lower Chamber became known as the House of Commons, collectively known as the Houses of Parliament.\n", "Montfort's Parliament of 1265 was the first parliament of England to include representatives chosen by the counties (or shires), the cities, and the boroughs, groups who eventually became the House of Commons, although to begin with Lords and Commons met all together, \n", "From 1295, (the Model Parliament) a form of this constituency on a narrower area, the Parliamentary borough of Salisbury, returned two MPs to the House of Commons of England Elections were held using the bloc vote system. This afforded the ability for wealthy male townsfolk who owned property rated at more than £2 a year liability in Land Tax to vote in the county and borough (if they met the requirements of both systems). 
The franchise (right to vote) in the town was generally restricted to male tradespersons and professionals within the central town wards; however, in medieval elections the electors would have been the aldermen.\n", "The procedure for filling a vacant seat in the House of Commons of England was developed during the Reformation Parliament of the 16th century by Thomas Cromwell; previously a seat had remained empty upon the death of a member. Cromwell devised a new election that would be called by the king at a time of the king's choosing. This made it a simple matter to ensure the seat rewarded an ally of the crown.\n", "The House of Lords developed from the \"Great Council\" (\"Magnum Concilium\") that advised the King during medieval times. This royal council came to be composed of ecclesiastics, noblemen, and representatives of the counties of England and Wales (afterwards, representatives of the boroughs as well). The first English Parliament is often considered to be the \"Model Parliament\" (held in 1295), which included archbishops, bishops, abbots, earls, barons, and representatives of the shires and boroughs.\n" ]