Why is the combustion engine still king of common transportation? It seems like we are so far along in knowledge and technological advances, and yet we are still using these semi-primitive ways to get around. Why isn't the hydrogen cell around more commonly? Or other little-to-no-emission types of engines? There are still a lot of problems with other types of motors. The most obvious is that energy storage is super easy (comparatively) for combustion engines. Hydrogen is difficult to store: it diffuses through everything and usually has to be kept at very high pressures and/or very low temperatures. Furthermore, hydrogen fuel cells are fickle high-tech things (and quite expensive). Electricity is also very difficult to store. The batteries are heavy and super expensive (a battery for an electric car costs on the order of $15,000 and lasts ~10 years; many cars are not worth that much after 10 years), take a long time to recharge, and do not get you very far compared to a full tank of gasoline. Batteries are also quite difficult to control, as evidenced by Boeing's recent fiasco with the Dreamliner. All in all, it is not surprising that combustion engines are still leading.
(Scientific method) Acceptable hypotheses: dark energy and phlogiston I'm not sure how to phrase my question simply, so please forgive me for trying to explain by example. I also don't have any background in physics beyond the AP level, so maybe there are just some technical details I'm not getting. Consider if you were an early chemist, researching the properties of then-generally-accepted phlogiston. As you burn wood, sulphur, coal -- most anything -- their masses decrease as they release phlogiston. So far, so good: the theory is supported. Now you burn magnesium, and, surprise! The mass increases. At this point, I believe, it would be bad form, as far as the theory goes, to explain that, while wood contains phlogiston, magnesium contains "dark phlogiston" with negative mass. It's an ugly, unparsimonious attempt to save a failing theory. As I understand it, dark energy works something like repulsive gravity. We observe that, for most anything, gravity seems to hold, but the universe seems to repel itself. So my question essentially is, why is it then OK to postulate dark energy? First, "dark energy" is just the placeholder name we give to whatever is responsible for the observed accelerating expansion of space. A lot of research remains to be done to figure out just what it is, but the expansion is definitely happening, and in the meantime we need some way to refer to the effect. Second, according to the general theory of relativity—our current best model of gravitation, wherein gravity is the curvature of spacetime—there's no reason the effect of gravity must be attractive. One of the strengths of the theory is that it can accommodate both the observed acceleration of massive bodies toward one another and the expansion of space on large scales.
Do black hole cores emit photons that just can't escape to the surface? We have no evidence of what happens beyond the event horizon of a black hole, so any answer is speculation. The math of general relativity suggests that the straight-line paths (geodesics) through spacetime along which all objects travel experience such extreme curvature due to gravity that beyond the event horizon, all paths lead to the center. That is, the concept of 'direction' ceases to have meaning because there is only one direction: in. We don't know what lies at the core of a black hole. It could be the hypothetical 'singularity', a point where all the numbers go to infinity and the normal laws of physics no longer apply. It could be that some as-yet-unknown physics steps in to halt the ultimate gravitational collapse into a singularity... in which case there would be some form of object there with a volume, like a much more extreme version of a neutron star (see 'Planck star' for one such example). We have very little idea of how such objects might behave.
What would happen if water's surface tension was much stronger? I'm interested in how this would change the world around us and everyday life. What if the surface tension doubled? What if it was increased by a factor of ten? Edit: Clarification. If the difference in surface tension was reflected in a difference in hydrogen bond strength, then changing it could have a huge impact on life. DNA pairs by hydrogen bonding, and protein structures also often contain regions that are H-bonded to other regions in the protein, helping them fold into their 3D structures. Changing the strength of H-bonding would probably affect these systems in a catastrophic manner.
Do mosquitoes have a preferred blood type? I'm asking because I'm usually the one in my dorm that gets surrounded by mosquitoes while my friends are relatively better off. Is it because of my blood type, or does body odor have something to do with it? Not blood type, but CO2 levels in your breath, your skin temperature, and lactic acid. If you exercise, your skin temperature and lactic acid will be up, attracting mosquitoes. Do you happen to work out more often than your dorm mates? https://abcnews.go.com/Health/things-make-mosquito-magnet/story?id=24676818
Why is alcohol universally toxic to living things? Generally speaking, alcohol (like ethanol) is a solvent, meaning it changes the solubility of cellular macromolecules. These can be lipids in cellular membranes or proteins within a cell, which can denature if their solubility is altered. As membrane integrity and protein function are essential for all living beings down to bacteria, alcohol is toxic for them/us all. In certain organisms (like humans, for example) alcohol is detoxified by enzymatic oxidation: first to acetaldehyde (in the case of ethanol, our drinking alcohol) and then to (acetic) acid. Acetaldehyde is very reactive, damaging proteins and even mutating DNA. Thus, too-high levels of this intermediate product are extremely toxic (and one of the causes of hangovers, btw).
Can a metal detector detect a higher amount of iron in our blood? For example, if I eat mussels, which contain high amounts of iron, am I more likely to set off a metal detector? The answer to your question is yes, but not from food intake. There is a disorder called hemochromatosis which results in a higher-than-normal iron content in the body. A normal person has ~4 grams of iron in their body, while someone suffering from hemochromatosis might have up to 50 grams. These extreme cases have been known to set off standard metal detectors, such as those found in airports. For reference, a 3 oz portion of mussels contains ~6 mg of iron. You would have to eat thousands of mussels in one sitting before the level of iron in your body would be high enough to set off a detector.
Was the cosmic background radiation ever visible? To be clear, the cosmic background radiation has been redshifted down to microwaves. So, assuming that someone was there to see it, was it ever visible to the naked eye when it was in the visible light spectrum? The cosmic background radiation dates from when the universe cooled to a temperature of about 3000 K (cool enough that atoms could form; atoms are electrically neutral, so the universe became transparent, and those photons have formed the background radiation from that point forward). If you look at this image, you can see the intensity of light at different wavelengths as a function of temperature. The bottom curve shows the distribution at 3000 K, and also shows the portion of that distribution in the visible spectrum. It was an appreciable amount of the background radiation.
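To put a rough number on "appreciable", here is a minimal sketch that integrates Planck's law at 3000 K and reports the fraction of the power falling in the visible band. The 380-750 nm limits and the sampling grid are my assumptions for the demo:

```python
import numpy as np

h = 6.626e-34   # Planck constant (J s)
c = 2.998e8     # speed of light (m/s)
kB = 1.381e-23  # Boltzmann constant (J/K)

def planck(wav, T):
    """Blackbody spectral radiance at wavelength wav (m) and temperature T (K)."""
    return (2 * h * c**2 / wav**5) / np.expm1(h * c / (wav * kB * T))

T = 3000.0                                # roughly the temperature at recombination
wav = np.linspace(50e-9, 50e-6, 200_000)  # wide band covering almost all the power
dw = wav[1] - wav[0]
total = (planck(wav, T) * dw).sum()

visible = (wav >= 380e-9) & (wav <= 750e-9)
in_visible = (planck(wav[visible], T) * dw).sum()

print(f"Fraction of 3000 K blackbody power in the visible band: {in_visible/total:.1%}")
```

The peak of a 3000 K blackbody sits in the near infrared (Wien's law gives about 970 nm), so only the short-wavelength tail is visible, but that tail still carries on the order of a tenth of the total power.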
Would a half gram of U-238 with a 0.2% concentration of decaying U-235 be safe to handle in a sealed glass jar? Um, why do you ask? (Calls FBI.) Yes. Natural uranium is 0.711% U-235, so you have some depleted uranium that is less radioactive than something you could theoretically find lying on the ground. Half a gram is not a large quantity. Perhaps someone will do the math for you, but it should be safe enough. The depleted uranium shells the US military uses contain kilograms of depleted uranium, and the military says they're safe to be around... you trust the military, don't you? Don't you? Under the "as low as reasonably achievable" dose guidelines you should only expose humans to radiation when there is a purpose for doing so, so you should keep this source in a lead box or similar and only handle it when necessary.
Why is optical interconnection faster? The main performance increase in fiber-optic interconnects compared to, say, copper is the bandwidth permitted by the medium, i.e., the amount of data that can be travelling through the cable at any given moment (comparable to the width of a pipe carrying water). This is due to fiber optics using beams of light to transmit the data, which operate at much higher frequencies than electrical signals in copper do. Another benefit of using light is the minimal interference over long cable runs: copper is very susceptible to all sorts of electromagnetic interference, meaning it is somewhat more likely that the data does not reach the destination as it was intended and may need to be re-sent.
Can time be subdivided infinitely? Similarly, is there a time measurement equivalent to the planck length? Nobody knows the answer to whether time can be subdivided infinitely. Yes, there is a Planck time, but whether this is the minimal unit of time is not known. The various Planck scales are simply the scales at which quantum field theory and general relativity must both be included, and so the theory of what happens on those scales depends on having a quantum theory of gravity (or whatever theory harmonizes quantum field theory and general relativity).
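For reference, the Planck time is simply the combination of fundamental constants with units of time (this is the standard definition, not a claim that anything special necessarily happens at that scale):

```latex
t_P = \sqrt{\frac{\hbar G}{c^5}} \approx 5.39 \times 10^{-44}\ \mathrm{s},
\qquad
\ell_P = c\, t_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.62 \times 10^{-35}\ \mathrm{m}.
```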
How do brain-eating amoebas (e.g. Naegleria fowleri, Balamuthia, etc.) know the way to the olfactory bulb after binding to the mucosa? The olfactory nerves exit the skull into your nose through the cribriform plate. It's just a thin bone with tons of tiny holes in it for the nerve fibers. Naegleria has a flagellated form, so if you accidentally "snort" it up your nose when you are in contaminated water, the protozoan can swim up and enter through the cribriform plate into your skull.
Is it advances in sheet metal molding that have enabled us to manufacture better-looking, modern cars? Or is there something fundamentally different about how we think about aesthetics? Like a cultural difference? The front-end differences are as AdShea mentioned. Also, there is a minimum amount of empty space above the engine to ensure a pedestrian's head cannot hit the top of the cylinder head. There are probably other things I'm not aware of, but those are certainly the bigger ones.
Efficiency of heaters: is a fan-forced element heater less efficient than an oil heater? Completely energy-sealed room vs. normal room. On the other hand, a heat pump (an air conditioner run in reverse, for heating in this case) is more effective than any other heater. Why? Because although oil/fan/electric heaters all turn 100% of the energy you invest into heat, heat pumps actually use your energy to move heat from outside and put it into the room. So, in addition to all the energy you spent being turned to heat (as all energy eventually does), there is more heat that came from the outside, resulting in >100% effective efficiency.
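The standard figure of merit here is the coefficient of performance (COP), the heat delivered per unit of work input. The Carnot bound below is textbook thermodynamics; the "2 to 4" range for real units is a rough ballpark, not a measured spec:

```latex
\mathrm{COP}_{\mathrm{heating}} = \frac{Q_{\mathrm{delivered}}}{W_{\mathrm{input}}}
\;\le\; \frac{T_{\mathrm{hot}}}{T_{\mathrm{hot}} - T_{\mathrm{cold}}}
```

For a 21 °C room (294 K) and 0 °C outdoor air (273 K) the ideal limit is 294/21 ≈ 14; real heat pumps typically deliver a COP of roughly 2 to 4, i.e., 200-400% in the "efficiency" language used above.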
Are visual inputs in the center of your retina processed and perceived faster than those in your retina's periphery? OK, so, perfect example: I'm sitting here in my car looking at my phone, scrolling up and down Reddit. In the upper left corner of my left eye, I can see my phone's reflection in my left front window. Now, I SWEAR TO GOD, I'm seeing the reflection in the corner of my eye happen slightly after the scrolling that I see happening on my phone in the center of my vision. Like maybe a few milliseconds after. Every time I scroll. Now, assuming light speed is too fast to notice this difference, my guess is the delay happens in my brain somewhere. I know the fovea is the most sensitive part of vision. But is it actually so important to survival that the brain processes information in the fovea faster than it processes information outside the fovea? Meaning, we become consciously aware of phenomena from our fovea before we become aware of phenomena from outside our foveas? So, am I crazy? Or am I really seeing the same event essentially happen twice in my head due to seeing it from different places in my eye? Signal transduction in wires is slower than light, sometimes by a fairly significant fraction. Signal transduction in nerves is even slower because nerves don't work like wires.
Total internal reflection question. Let's say the critical angle for an incident ray going from one medium to a medium with a lower index of refraction is 60 degrees. Any incident angle smaller than 60 degrees will simply refract through the second medium, while at exactly 60 degrees the refracted ray is parallel to the plane between the media. Now we also know that for incident angles greater than 60 degrees total internal reflection will occur, such that a 70-degree incident ray results in a 70-degree (with respect to normal) internally reflected ray. My question comes from what happens in the differential increase from 60 degrees up to some arbitrary value, like 70 degrees for example. At exactly 60 degrees incidence, the "resultant" ray is parallel to the surface junction; in other words it has an angle of 90 with respect to normal. But at even 61 degrees this resultant/reflected angle becomes 61 degrees. In this short change in incident angle (of just 1 degree) we lowered the resultant angle by 29. How did this happen? Is the resultant angle non-continuous for that region? Because it is continuous for all other values greater than the critical angle, right? E.g. 70-degree incidence results in 70-degree reflection, 80 for 80... all continuous up to 90, of course. I know this is a bit lengthy and maybe even a bit arbitrary... but I'm just curious; thanks anyway! At angles below the critical angle not all of the light will be transmitted into the second medium. There's something called the coefficient of reflection (and a related coefficient of transmission), which can be calculated with Fresnel's equations. See: http://en.wikipedia.org/wiki/Fresnel_equations If you take a look at the graph for the reflection coefficient you'll notice its growth near the critical angle when the second medium has a lower refractive index than the first. In other words, even before the critical angle, more and more of the light is being internally reflected. This happens in a nice smooth continuous way. The critical angle, however, is special because beyond it all light is reflected, as you know.
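To see that smooth growth numerically, here is a small sketch of the s-polarization Fresnel reflectance for a glass-to-air interface. The indices 1.5 and 1.0 are my example values (giving a critical angle near 41.8 degrees rather than the 60 in the question):

```python
import numpy as np

n1, n2 = 1.5, 1.0  # glass to air; total internal reflection needs n1 > n2
theta_c = np.degrees(np.arcsin(n2 / n1))

def reflectance_s(theta_i_deg):
    """Fresnel power reflectance for s-polarized light at incidence theta_i (deg)."""
    ti = np.radians(theta_i_deg)
    s = (n1 / n2) * np.sin(ti)
    if s >= 1.0:
        return 1.0                      # beyond the critical angle: all reflected
    tt = np.arcsin(s)                   # Snell's law gives the transmitted angle
    rs = (n1 * np.cos(ti) - n2 * np.cos(tt)) / (n1 * np.cos(ti) + n2 * np.cos(tt))
    return rs ** 2

for angle in (0, 20, 35, 40, 41, 41.5, 41.7, 42):
    print(f"{angle:5.1f} deg -> R = {reflectance_s(angle):.3f}")
print(f"critical angle = {theta_c:.2f} deg")
```

The reflectance climbs continuously to 1 as the angle approaches the critical angle, so nothing jumps discontinuously; what changes at the critical angle is only that the transmitted ray disappears.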
Do people in large, population-dense cities like Tokyo, Mexico City, and New York get sick more frequently, being exposed to more people on a daily basis? Related question: do they end up with stronger immune systems for the same reason? Short answer is yes and possibly. More people touching more areas allows for a greater spread of any bacteria. More hosts allow for a greater chance of mutation and more strains. Each time you contract a strain, even if it doesn't make you physically sick, your body creates antibodies, thus theoretically boosting your immune system in a way.
Has anything changed in the last 20 years to make the forecasting of weather more accurate? So I was sitting on my deck and I looked at my weather app at 11:30. It said that at 12:00 pm there would be a 100% chance of rain. The rain started at 11:50. Is this just a case of weather forecasts being more accurate closer to the event, or has something in the way that weather is forecast fundamentally improved? Now you can get customized forecasts based on your GPS coordinates instead of just your region or town. Radar and instrumentation have also improved over the years. Still, my weather app is wrong almost as often as it is right when it comes to rain forecasts.
How does cold welding work? What causes the metal to fuse? I recently read about how cold welding is possible in a vacuum while reading about some space type stuff, but I'm unable to dig up / understand the specifics behind this. If you have two metal surfaces that are extremely clean and well matched, you can compress them together very tightly to make a cold weld. The atoms at the interface start to diffuse and intermingle with one another, resulting in one solid piece of metal. If you cut a metal block in vacuum, you will reveal a fresh, non-oxidized metal surface. These cold welds work much better in a vacuum environment because no metal oxides form at the surface to block the diffusion.
In the double-slit experiment, the peaks and troughs cancel out from the light from the two slits. Why do the peak and trough from a single slit not cancel out on the display screen? I was just thinking of the double-slit experiment... We're told (and can see) in the experiments that the waves of light travelling offset from each other cancel out, or reinforce each other... If so, how come we can see any light coming from a single slit? How come it's not got a net brightness of zero, if the wave's troughs and peaks are hitting a screen? What about two light sources and two slits overlaid? Where do the waves go? Surely both sets of light create their own peaks and troughs that can still interact? It seems they only exist when light's "split" into two from a single source? It's all very confusing... The peaks and troughs from a single slit do cancel out; this produces what is called a diffraction pattern. It is not completely dark, but it does have alternating light and dark spots. In general, the intensity of light at a particular point is due to the amplitudes of light at that point from all possible sources. So, on an approximate level, you could think of a single slit as a rectangular grid of point sources of light. The brightness at some point on the screen is a result of the amplitudes from all these sources added up, after taking phase into account.
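This "add up the point sources with their phases" picture is easy to check numerically. The sketch below uses assumed example values (500 nm light, a 2 micron slit) and reproduces the textbook minima at slit_width · sin(theta) = m · wavelength:

```python
import numpy as np

wavelength = 500e-9   # green light (assumed example value)
slit_width = 2e-6     # 2 micron slit (assumed example value)
k = 2 * np.pi / wavelength

# model the slit as many coherent point sources spread across its width
sources = np.linspace(-slit_width / 2, slit_width / 2, 1000)

def intensity(theta):
    """Far-field intensity at angle theta: sum amplitudes with phase, then square."""
    phases = k * sources * np.sin(theta)   # extra path length of each source
    return abs(np.exp(1j * phases).sum()) ** 2

i0 = intensity(0.0)
for deg in (0, 5, 10, 14.5, 20, 30):
    print(f"theta = {deg:5.1f} deg   I/I0 = {intensity(np.radians(deg)) / i0:.4f}")

# the first dark fringe should sit where sin(theta) = wavelength / slit_width
print("predicted first minimum:",
      round(np.degrees(np.arcsin(wavelength / slit_width)), 1), "deg")
```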
How come the undersea earthquakes and plate movements don't release oil into the environment under the oceans? Oil seeps are naturally occurring undersea oil leaks. They occur in many areas, often with oil wells there too. Oil well operators will often try to blame their own leaks on naturally occurring leaks (and not fix their leaks). My main point: It doesn't take an earthquake to leak oil from the sea bottom. In seismically active areas, an earthquake may exacerbate the issue though.
How come we can't efficiently cool something quickly? I know there are things that can supercool stuff pretty much instantly, but how come there isn't a microwave equivalent for cold? Refrigerators/freezers usually take hours to cool something. The biggest reason is that it's much easier to turn useful energy into heat than it is to do the reverse. This is a direct consequence of the second law of thermodynamics, which can be restated in simple terms as: "You can't build an isolated device that converts its own heat into useful energy." On the other hand, the reverse process (converting other forms of energy into heat) is much easier and in fact essentially inevitable. Basically all the energy we use in some fashion will end up eventually being dissipated as heat. This means that we can take useful forms of energy, such as electricity, and pass it through a resistor to generate high temperatures as in an oven, or create a strong electromagnetic field as in a microwave, which can in turn quickly heat an object. However, you can never do the reverse!
What causes one chemical to be classified as addictive and another chemical with the same effects non-addictive? For instance, there are painkillers that are considered to be addictive and some that are not. What makes this so? I'll chip in. This is hard to explain but I'm gonna try my best. A lot of brain-altering substances work by blocking the reuptake of neurotransmitters. (You'd be shocked how many drugs do this!) So what makes the difference between your regular old Prozac (fluoxetine) and cocaine? Prozac is an SSRI, which means it blocks the brain from reuptaking serotonin. Cocaine is a dopamine reuptake inhibitor, which stops the brain from reabsorbing dopamine and effectively floods the brain with it. So people with serotonin deficits take Prozac to keep more serotonin available, and this isn't a huge deal because your brain is usually pretty good about dealing with the amount of serotonin. Dopamine is different because of how the hormone works: it really controls your moods, as opposed to serotonin, which controls the extent of your moods. See: http://upload.wikimedia.org/wikipedia/en/8/88/Dopamineseratonin.png Now, with cocaine your brain is flooded with dopamine and your body just can't handle coming back down from that, so typically people take another hit of cocaine so they don't experience the horrible withdrawal. Hit after hit, your body adjusts to the new level of dopamine; this is now considered normal for your body. Other addictive substances work by triggering other key areas of the brain or key hormones. You could write a thousand pages because chemical dependence is really complex... Did I help? This question would be better asked in neuroscience rather than medicine.
What's wrong with these homeopathy studies? I've always believed homeopathy to be a pseudo-science, but yesterday someone provided me with a link to studies on homeopathy that seem to show that it has valid medical applications. List of all the studies: Here are a couple of examples from the list: - Triple-blind trial, significant improvements in patients with chronic fatigue syndrome using homeopathic methods - Double-blind trial, found that homeopathy improved hayfever symptoms. Most of these studies show that homeopathy works at least better than a placebo, and in some cases leads to significant improvement. I don't understand how this can be the case. Homeopathy seems to violate everything I know about chemistry, but these studies show that it works. What's your take on these studies? The first study states in its abstract that the results were not statistically significant. In any clinical trial you have errors due to the random nature of human response to treatment, so your sample size must be large enough to show that there is a statistically significant difference between two treatments. Imagine flipping a coin 100 times, and it comes up heads 52 times and tails 48. Even though heads came up more, you wouldn't say that the coin is biased, because the difference is so small. This is basically the result they got in this study.
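The coin-flip intuition can be made exact with a binomial test. This is a minimal sketch (standard library only, doubling the one-sided tail, which is fine for a fair-coin null hypothesis):

```python
from math import comb

def two_sided_binomial_p(k, n):
    """Exact two-sided p-value for k heads in n flips of a fair coin:
    probability of a result at least as extreme, doubling the upper tail."""
    upper_tail = sum(comb(n, i) for i in range(k, n + 1)) * 0.5 ** n
    return min(1.0, 2 * upper_tail)

# 52 heads out of 100: p is about 0.76, so no evidence at all of a biased coin
print(f"p-value for 52/100 = {two_sided_binomial_p(52, 100):.3f}")

# it takes a much larger imbalance (or sample) before the difference is real
print(f"p-value for 65/100 = {two_sided_binomial_p(65, 100):.4f}")
```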
How accurate is radiation in fiction (e.g. irradiated cities)? Oftentimes in fiction or science fiction, the film, novel, or video game is set in a place where the setting is irradiated, such as a city that was hit by a nuclear bomb. How realistic (if at all) is it that certain areas could be more irradiated than others? Another example would be the Fallout games, where some areas have extremely lethal levels of radiation, despite the fact that the entire world was equally decimated. Also, some fiction has the survivors living underground and only occasionally make short trips to the surface; is it realistic to only need a gas mask for trips to the surface if the duration is short enough? At what point does greater protection become necessary? The portrayal of radiation in fiction is at best incredibly misleading and at worst outright false. Things like comic books and video games have helped contribute to the widespread (and misplaced, in my opinion) public fear of radiation. Most of the pop-media portrayals of radiation effects, like glowing green or mutating extra body parts, are nowhere near true. In almost any conceivable large radiation exposure scenario, the only widespread effect would be increases in the rate of cancer many years down the road. To respond to your specific questions, radiation due to nuclear fallout is in the form of radioactive isotopes. Each specific isotope also has its own specific chemical properties, and as a result could be concentrated in the ground, water, or air in different places depending on how it interacts with those materials. For instance, if a fallout product is water-soluble, it may tend to accumulate in waterways.
Do long-time smokers have a better chance of surviving smoke inhalation than non-smokers? Title says it all really. Are they better off in the situation of a house/building fire? Or worse off even? Or are they simply just as susceptible as non-smokers? Smokers need oxygen just as much as the rest of us. When they smoke, they are essentially replacing part of their inhale with smoke. Your question appears to be asking if one can build an 'immunity' to smoke inhalation that enables a smoker to go longer without air... It would be the exact opposite. Cigarettes cause a build-up of carbon monoxide in the bloodstream. This bonds with the haemoglobin of your red blood cells more readily than oxygen does. A smoker, particularly one who has just had a cigarette (and potentially caused this fire), will have a higher level of CO in the blood, meaning they would absorb less of the oxygen remaining around them. Smokers would suffocate first... but only by a small margin.
What are some examples of reversible chemical reactions? I know there are physical changes such as water changing its state from ice to liquid water which can be easily reversed (changing the liquid water back into solid, ice form). I remember reading in a science textbook that chemical reactions are not so easily reversed, however. Are there any examples of everyday chemical reactions that can be easily reversed? Dissolution is one that comes to mind. Dissolve salt in water and then cool the water down and the salt will begin to precipitate out. If that isn't "chemical" enough, then you can think of the same thing with carbon dioxide. When CO2 dissolves in water it follows the reaction CO2 + H2O ⇌ H2CO3, known as carbonic acid. This is a reversible reaction and is almost entirely dependent on the ambient pressure of CO2. When soda is bottled, high-pressure CO2 is injected, which forces the equilibrium to the right, dissolving the CO2 and producing a lot of carbonic acid. When you leave a soda out, there's very low CO2 in the atmosphere, so the carbonic acid breaks back down into CO2 and water and the CO2 enters the atmosphere. This is also the cause of ocean acidification: as global CO2 levels increase, the amount of CO2 dissolving in the ocean increases. This causes more carbonic acid to form, which reduces the pH of the ocean water.
Schroedinger equation and wavefunction. Schroedinger's equation contains the imaginary unit, i, and the solution (the wavefunction psi) is a complex function. Indeed, Roger Penrose devotes pages in his books to developing and commenting on the use of complex quantities in quantum physics... seemingly to say that somehow, complex quantities are needed to describe the physical world. In fact, a little math shows that the Schroedinger equation can be expressed as a pair of coupled real equations whose solutions are the real and the imaginary parts of the complex wavefunction, psi. Just substitute psi = psi_real + i*psi_imag into Schroedinger's equation to find the two real, coupled equations. (psi_real and psi_imag are, respectively, the real and the imaginary parts of psi.) So it seems to me that in principle there is no need to use complex quantities in quantum physics. Perhaps the use of complex quantities is simply a convenient mathematical tool? I am really puzzled why this is not mentioned by Penrose or other quantum physicists. Am I missing something important? "So it seems to me that in principle there is no need to use complex quantities in quantum physics." It's true that one could use two real-valued equations to represent the original complex values and terms. But that would be more difficult, the math would be more involved, and the outcome would be less intuitive. Consider this example: (1) f(y) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x y}\, dx. The above is the classic Fourier transform (time -> frequency). And here is its inverse (frequency -> time): (2) f(x) = \int_{-\infty}^{\infty} f(y)\, e^{2\pi i x y}\, dy. Equation (1) converts from the time domain to the frequency domain, and equation (2) converts from the frequency domain to the time domain. These equations rely on a property of the complex exponential that looks like this (Euler): e^{2\pi i \theta} = \cos(2\pi\theta) + i \sin(2\pi\theta). But without the use of complex numbers, this remarkable relationship is not available -- it might as well not exist. One would have to write out a series of discrete sine and cosine terms for each transform, and the mathematics would be much more involved. Read more about Euler's formula: "named after Leonhard Euler, a mathematical formula in complex analysis that establishes the deep relationship between the trigonometric functions and the complex exponential function."
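For concreteness, here is the decomposition the poster describes, written out (a standard exercise; u and v denote the real and imaginary parts of psi, and the Hamiltonian is assumed to have real coefficients):

```latex
i\hbar \frac{\partial \psi}{\partial t} = \hat{H}\psi, \qquad \psi = u + iv
\quad\Longrightarrow\quad
\hbar \frac{\partial u}{\partial t} = \hat{H} v, \qquad
\hbar \frac{\partial v}{\partial t} = -\hat{H} u.
```

The two real fields are locked together exactly the way the real and imaginary parts of a complex number are under multiplication by i, which is why the complex notation is the compact one.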
If I started a fire and then fed it nothing but damp cardboard, would the energy in the cardboard be enough to keep it going despite the water? Water has a quite high specific heat capacity (and latent heat), which means it takes a large amount of thermal energy to heat it to the point where it boils away. If the total heat absorbed by the water in the cardboard is greater than the thermal energy you get by burning the dried-up mass, you're not breaking even and the fire cools down. With enough water (or any coolant, for that matter), you can make the reaction no longer self-sustaining and kill the fire no matter how much fuel is left.
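A rough break-even estimate is easy to sketch; the figures below are ballpark assumptions (cardboard taken as similar to paper/wood in heating value), not measured values:

```python
# How wet can cardboard be and still sustain a fire, on energy grounds alone?
H_COMBUSTION = 16e6   # J per kg of dry cardboard (rough assumption)
C_WATER = 4186        # J per kg per K to heat liquid water
L_VAPOR = 2.26e6      # J per kg to boil water off
dT = 80               # heat water from ~20 C to 100 C

cost_per_kg_water = C_WATER * dT + L_VAPOR   # ~2.6 MJ per kg of water removed
break_even = H_COMBUSTION / cost_per_kg_water
print(f"Each kg of dry cardboard can in principle evaporate ~{break_even:.1f} kg of water")
```

So only very waterlogged cardboard (several times its dry mass in water) would quench the fire purely on energy grounds; in practice, heat losses and slow drying kill the flame well before that limit.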
Why does hot water sound different when it's poured? The viscosity of water changes significantly with temperature (hot water "flows easier" than cold water). Viscosity and speed of sound are related, and the propagation speed of sound waves is an important factor in the sound it makes when poured.
Why does gas-stove and electric stove cook foods differently, even though they are both indirect? IANAS but I imagine it's because of the distribution of heat on the pan, as well as the speed at which the temperature can change.
How powerful must a laser be in order to see the beam in air? I know it's a function of how hot the air gets before it starts to glow, so how powerful of a beam would it take to make it visible in daylight? EDIT: To clarify, I was watching a video of a Naval vessel firing a laser at a target. The beam was invisible, but the target exploded almost instantly. How powerful would that laser have to be for the beam to glow bright enough to see with the naked eye in broad daylight? Maybe you should clarify your question -- do you mean a hypothetical scenario in which the air is totally free of dust, or do you just mean in practice? In practice it depends on the color of the laser, but roughly speaking you can (barely) see the beam above around 20 mW or so in daylight.
If meditation is good for the brain, is the opposite of mediation(chaotic, unfocused thoughts) bad for the brain? I think you're going to have to explain your premise before anyone can answer your question. In what way do you think meditation is "good" for the brain? Do you have a source that states that meditation is "good for the brain" so we can form an objective definition for what that might mean?
How come we see the Milky Way almost as though we're outside of it? The galaxy is flattish and you can see along it... I'm not sure what you mean exactly. You are inside your room and you can still see the room.
Does the total volume of precipitation in the sky vary? If so, by how much? Unsure what you are asking. Are you asking if the amount of water vapor varies? If so, it clearly does because humidity varies widely over the Earth.
I know you can donate your body to science, but what about your death? It wasn't just about the legality of voluntary death, but about making sure you cop it while in the tube, which practically speaking means someone pushing a death button, in my mind.
When two people are infected with the same virus, are the T cells in our bodies that deal with the virus the same? I only have a simple understanding of how the immune system works, but I learnt about the lock-and-key model that describes the way in which T cells operate. From what I know, a specific T cell will attack the virus depending on the antigen (?). If this is the case, then would two separate immune systems identify the antigen in the same way, and thus would the T cells be the same? Nice to see some immunology on here! No. The concept of V(D)J recombination and there being "one epitope for one clonotype" of T cell was soundly struck down a few years ago. I, like you, however, was still taught the same dogma you've been taught. The clonal selection theory proposes that individual lymphocytes, through V(D)J recombination, are specific for one, and only one, antigen. For many years the concept of huge numbers of TCRs successfully providing immunity to all foreign peptides in a "one-clonotype–one-specificity" paradigm was accepted. The number of possible epitopes, however, greatly exceeds the number of T cells found within the human pool. A simple mathematical argument demonstrates that 10^15 T cells, matching a conservative estimate for the total number of foreign epitopes, would require a spleen the weight of a small car. Mason, therefore, posits that the "one clonotype-one specificity" paradigm is false; rather, each T cell is able to respond to more than one antigen. Indeed, he suggests each T cell is required to recognise on the order of 10^6 antigens: a cross-reactive "one clonotype-million specificities" hypothesis should be adopted. This means that when a T cell "recognises" a foreign epitope, is activated, and undergoes clonal expansion, it's recognising only 1 of a million possible steric combinations. So if someone else's T cells recognise the same epitope, there's nothing to stop it being a totally different bit of V(D)J recombination - and therefore a totally different TCR, and a totally different T cell. Does that make sense? Edit: Forgot to add the Don Mason paper: Mason, D. A very high level of crossreactivity is an essential feature of the T-cell receptor. Immunol. Today 19, 395–404 (1998).
Can I always use the Heisenberg uncertainty principle to understand diffraction? What happens in a single-slit diffraction experiment when the slit width "d" becomes narrower than the wavelength of the light? To put the question in more context: when I was an undergraduate I once heard that single-slit diffraction could be understood in terms of the Heisenberg uncertainty principle. The argument goes as follows. Imagine an incident plane wave of light at wavelength "λ" approaching a 1-dimensional slit aperture from the left, which I will define here as the "x" direction. The slit is of height "d" along the "y" direction, and for simplicity's sake is infinite along the z direction. Light must pass through the slit in the form of photons, and the Heisenberg uncertainty principle places limits on these photons' momenta. Specifically, we know that as a photon passes through the slit, its vertical position is known to within an uncertainty of about "d", so the vertical component of momentum, "p_y", is only defined to within a precision of about hbar / d. Now, because the incident light was coming in with wavelength λ, we know that each photon's total momentum must be roughly 2π hbar / λ. We can combine the two relationships to give the following relation: d sin(theta) = λ. (sin(theta) = transverse photon momentum / total photon momentum.) This is exactly the condition for the edge of the central peak in Fraunhofer diffraction. My question is this: What happens when the slit becomes narrower than λ? That is to say, what happens when the slit becomes so narrow that a photon passing through it must acquire an uncertainty in transverse momentum that is greater than the photon's total momentum was to begin with? At least two possibilities come to mind: (a) When the slit gets narrow, light simply cannot get through. This seems reasonable except for the fact that once you close down the slit, even a single photon that sneaks through would have to violate the uncertainty principle. Transmittance that is identically zero seems almost as difficult to swallow as a failure of the uncertainty principle. (b) When the slit becomes very narrow, it begins to resemble a cavity where photons can be up-converted to higher energy and momentum. Would this mean that forcing light to pass through a narrow slit changes its color? Thanks in advance for any thoughts. I wouldn't use the uncertainty principle to try to explain anything, to be honest. It's much better to look at the underlying principle, namely that of conjugate operators or, classically speaking, Fourier transform relationships. For a diffraction experiment, the resulting image in k-space (wave vector space) is the Fourier transform of the slit. Each k vector has a direction and magnitude. If you have the distribution of wave vectors, you can then calculate the image light with this distribution would make on a screen at some distance from the slit. What happens for a slit is simply that the whole thing gets broader and broader, with the minima going to larger and larger angles, while the intensity decreases due to the smaller amount of light coming through the slit. Fourier transform relations are much easier understood for frequency and time. Imagine a beat experiment with two frequencies separated by df. To be able to recognize that there are two lines here rather than one, you need to wait about a full beating period, 1/df. If the time you wait is shorter, you could say that it's impossible to resolve the difference between the two frequencies.
Are there still two frequencies present if the observation time is too short? Nope, there is a broadened line that depends on the temporal shape of your detection. If you make the time very short, you will hear a click (a signal with a wide frequency distribution), even from a sine wave.
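A quick numerical illustration of that time-frequency trade-off (the tone frequencies, sample rate, and window lengths are arbitrary choices for the demo):

```python
import numpy as np

f1, f2 = 440.0, 442.0        # two tones 2 Hz apart -> beat period 1/df = 0.5 s
rate = 8000.0                # samples per second

for duration in (0.1, 2.0):  # observation windows shorter and longer than 1/df
    t = np.arange(0.0, duration, 1.0 / rate)
    signal = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(t), 1.0 / rate)

    # count distinct strong local maxima near the two tones
    band = np.where((freqs > 420) & (freqs < 460))[0]
    peaks = [i for i in band[1:-1]
             if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]
             and spec[i] > 0.5 * spec[band].max()]
    print(f"window = {duration:3.1f} s (bin width {1/duration:.1f} Hz): "
          f"{len(peaks)} peak(s) resolved")
```

With the 0.1 s window the FFT bins are 10 Hz wide and the two tones merge into a single broadened line; with the 2 s window they show up as two clean peaks, exactly the behavior described above.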
Which flow type cools faster, laminar or turbulent? If a copper rod is being cooled by a fan, and depending on its position it can experience either turbulent or laminar flow, would the rod cool faster if the flow is more turbulent or more laminar? The position where the rod experiences laminar flow is closest to the cooling source, so intuitively this would cool faster, but experiment has shown that the further away from the source it is, and the more turbulent the flow, the faster it cools. Am I misunderstanding my results, or what is going on behind the physics? Laminar flow provides heat transfer only through conduction because in laminar flow the air is flowing in sheets with little mixing between them. A way to visualize this is as a deck of playing cards. The layer of air that touches the rod is heated. That layer also does not mix with the other layers of air above it. The heat can only be transferred from one layer to the next by contact (conduction). Since the energy transfer rate depends on the temperature difference, the sheet of air gets warmer along the rod and is removing heat at a slower rate than when it first began conducting heat from the rod. Turbulent flow has no sheets. This means that more fresh cold gas will contact the surface, resulting in a faster heat transfer rate due to a larger average temperature difference between the rod and the air.
What is the heaviest naturally forming element in the universe? You don't need an estimate. It's plutonium. No elements heavier than plutonium are known to occur naturally. That said, natural plutonium is so rare that for a long time it was thought to be only artificial. Uranium is the heaviest element created in appreciable quantities.
If a racquetball and a golf ball are hit with a golf club against the wall of a racquetball court, which one will have the most speed/force when returning from the wall? So here is the setup: you have a golf ball and a racquetball on separate golf tees inside a racquetball court. They are struck with a golf club at the wall and bounce back. My question is which would be traveling at a greater speed and have more force. My guess is that the answer has something to do with the bounciness (elasticity) of the balls. I would think that the racquetball would have more speed/force after hitting the wall because the golf ball is much less elastic, causing it to transfer most of its energy to the wall. Correct me if I'm wrong. I guess I derived this question from the "Jackass" boys. We were wondering which would hurt more, being struck by a racquetball or a golf ball. The link to the video is posted below. (Sorry for any crude language.) The golf-ball collision is more elastic than the racquetball one. The racquetball being squishier actually has nothing to do with its elasticity, and everything to do with its (lack of) stiffness. Both balls are almost completely elastic and return almost perfectly to their original shape after being deformed. But since the golf ball is less squishy, it deforms less, and loses less energy to heat; therefore it "springs" back into its original shape with a greater fraction of the original deforming force, which makes the collision more elastic.
What is the probability of TCP failing to detect an error? I am starting to learn about CRC and the TCP/IP protocol stack. Am I correct in thinking that, however small the chance may be, there is still a possibility that all of the error detection methods throughout the layers fail to discover a particular sequence of wrong bits? If so, is it an issue that programmers should take into consideration when dealing with highly critical data? The TCP checksum isn't meant to guarantee anything; it's a pretty weak test, especially when you're talking about massive amounts of data. This paper is pretty cool. Mostly you're relying on the fact that your data transmission protocols are pretty high-fidelity, so your base error rate is low, and therefore the probability of even needing a checksum is low, so the probability of checksum failure is low.
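For context, TCP's checksum is the 16-bit one's-complement Internet checksum (RFC 1071), and its weakness is easy to demonstrate. A minimal sketch (the payload string is just a made-up example):

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

segment = b"example TCP payload"
csum = internet_checksum(segment)
print(hex(csum))

# The weakness: any corruption that leaves the 16-bit sum intact goes
# unnoticed. Swapping two aligned 16-bit words is the classic example.
corrupted = segment[2:4] + segment[0:2] + segment[4:]
print(internet_checksum(corrupted) == csum)   # True -> error not detected
```

This is why highly critical data usually adds an end-to-end integrity check of its own (a CRC or a cryptographic hash) on top of what TCP provides.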
During a severe asthma attack, why can't the patient resolve it with endogeneous release of epinephrine? During exercise-induced asthma, asthmatics can 'treat' bronchoconstriction while they are exercising due to epinephrine release. But when they stop, they can have an attack minutes later because there is no further release of epinephrine. My question is: why can't these patients release epinephrine because they are panicking/in a high stress situation, especially one that they know can cause them to die? Is this because they 'run out' of epinephrine in the adrenals? Doctor here. Many asthma medications are Beta-2 receptor agonists--including albuterol. Beta-2 receptors are responsible for the smooth muscle relaxation that achieves symptomatic relief in asthmatics. Epinephrine is non-selective and can cause myriad effects that differ based on high vs. low dose administration. In short, it CAN be used in an emergency, but will cause many undesired effects. Physiologically, the same thing is true. We will experience an increased heart rate and blood pressure which will both contribute to an increase in the oxygen demand of our cardiac muscle... creating even more stress on the already taxed respiratory system.
How much potential energy could be produced using wind and solar means in the US if we took advantage of all the empty space? That, combined with geothermal (properly harnessed), would make charging for energy obsolete, as it would be so plentiful and ubiquitous. We're talking on the order of zettajoules (the whole Earth only uses about half a zettajoule per year).
Is there any difference between mass granted by the Higgs mechanism versus mass granted by the strong force? Forgive my terminology, but as I understand it, the Higgs field creates mass for elementary particles, but the vast majority of mass we see is a result of the strong force holding together bundles of quarks and nucleons. How is it that these two seemingly different mechanisms result in something that on our scale looks like exactly the same thing? Does mass created by the strong force interact with the Higgs field? Does mass created by the Higgs field interact with the curvature of spacetime? "Is there any difference between mass granted by the Higgs mechanism versus mass granted by the strong force?" No, there is no difference. "How is it that these two seemingly different mechanisms result in something that on our scale looks like exactly the same thing?" Because both mechanisms are doing the same thing: confining energy. The strong force confines the energy of particles with color charge into a small amount of space, and the Higgs mechanism confines the energy of particles that interact with the Higgs into a small amount of space. In both cases, mass essentially results from the relation E = mc^2, by virtue of the respective forces confining energetic particles and thus giving them some average rest frame. An advanced undergraduate-level exercise, which might help, is to show that a photon trapped in a massless mirror-box has an effective mass given by m = E/c^2. It doesn't matter what forces hold the mirror-box together; what matters is that energy is confined.
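A sketch of that exercise in the simplest case: let the "box" hold two photons of energy E/2 bouncing in opposite directions, so the total momentum cancels. The invariant mass of the system is then

```latex
p^{\mu}_{\mathrm{tot}}
= \left(\tfrac{E}{2c}, +\vec{p}\right) + \left(\tfrac{E}{2c}, -\vec{p}\right)
= \left(\tfrac{E}{c}, \vec{0}\right)
\quad\Longrightarrow\quad
m c^2 = \sqrt{E_{\mathrm{tot}}^2 - |\vec{p}_{\mathrm{tot}}|^2 c^2} = E .
```

Each photon is individually massless, yet the confined pair has rest mass E/c^2, which is exactly the point the answer makes about confinement.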
When would you clean with, say, vinegar or borax instead of dish soap or bleach or alcohol? What are the properties of common solvents used to clean things, and what about them makes one better or worse at certain jobs than another? Some of the substances you mention are solvents, and others aren't. Bleach and vinegar (acetic acid) chemically attack inorganic and some organic substances and destroy them. I don't know how borax works. Dish soap is a surfactant: it is a long molecule with one polar (hydrophilic/lipophobic) and one nonpolar (hydrophobic/lipophilic) end. One end attaches to fat and the other to water, allowing the water to engulf the fat and rinse it off. Water itself is a polar molecule thanks to its 104.5-degree bond angle. When it acts as a solvent, other polar substances such as salts will happily dissolve in it. Nonpolar molecules such as triglycerides (fat) will not dissolve very well. On the other hand there are less polar solvents such as alcohol, acetone, and diethyl ether. These are great for dissolving nonpolar substances but not polar molecules. Strangely enough, water and e.g. alcohol will happily mix with each other, and then you can dissolve all sorts of things, up to a point. This explains why wine and other alcoholic beverages can have such a complex taste: both solvents combine to dissolve more aromatic molecules. So you really have to pick the correct substance for the job. For permanent marker, try a nonpolar solvent. For fat, it's soap and water. Vinegar works well against calcified areas such as taps. Bleach will, well, bleach stuff.
Prior to the existence of pollinating insects, how did plants reproduce sexually? Pollen is a relatively recent invention. It evolved around the same time as the seed. The first types of pollination were almost certainly by wind -- there is clear evidence for this from fossil plants (their anatomy and morphology is consistent with wind pollination). Your question is more broadly about sexual reproduction. Pollination is just one means of bringing sperm -- indirectly -- to an egg. In plants this process (sperm to egg) is indirect, because the first stage of sex produces a distinct plant generation that is haploid, meaning that it is a multicellular generation with one chromosome set instead of two (just like sperm and eggs; but this haploid generation produces the sperm and eggs). In many seedless plants like mosses and ferns, these multicellular haploid generations (which are produced by unicellular sexual spores) are free-living on (or in) the soil. The soil is where sperm are released and they swim through water in the soil to the egg, usually in another haploid plant, to fertilize them there and produce a new diploid generation. One evolutionary innovation in seed plants is that the haploid generation that produces sperm, which is now reduced to a few cells (all wrapped up inside the pollen grain), travels through the air (sometimes water) using a pollinator, or the wind. Its 'partner' haploid generation, which produces one or more eggs, is wrapped up in the parental ovule (= pre-fertilized seed). In flowering plants the ovules (seeds) are further wrapped up inside parental tissue that later develops into fruit. When pollen grains arrive they have to produce tunneling pollen tubes to deliver sperm to eggs in ovules. In a few seed plants, like cycads, the sperm are released inside the ovule and swim directly to the eggs there. Edits: Grammar!
What is the math behind mining bitcoins? What math does the computer do exactly to mine bitcoins? How do they make it harder and harder to mine and calculate so that you always end up needing more processing power? Why was it once relatively easy and now it is almost impossible to do decent mining with a desktop computer? "What math does the computer do exactly to mine bitcoins?" To "mine" coins you find an input to the SHA-256 hash function that produces an output that is less than some specified value. Since SHA-256 is a cryptographically secure hash function, there is no way to predict the inputs that will produce a particular output. All you can do is try input after input until you find one that produces a small enough output. Then you report your finding to the blockchain and you are given a reward for your computational effort. "How do they make it harder and harder to mine and calculate so that you always end up needing more processing power? Why was it once relatively easy and now it is almost impossible to do decent mining with a desktop computer?" Bitcoin is designed so that the rate at which blocks are mined is constant. The way it does this is by adjusting how small the output of the SHA-256 hash function needs to be. If a whole lot of people are working hard, it simply makes the required output smaller so it's harder for everybody to actually mine a block. Because so many people are using custom hardware now, the protocol has adjusted to compensate. This means that mining a block is much more difficult than it was a few years ago.
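Here is a toy version of that search loop; it is a sketch only, since the real Bitcoin header layout and target encoding are more elaborate, and the `header` string below is a stand-in:

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Find a nonce whose double SHA-256 hash, read as an integer, falls
    below a target; each extra difficulty bit doubles the expected work."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        payload = header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

header = b"prev-block-hash|merkle-root|timestamp"  # stand-in for a real block header
for bits in (8, 12, 16, 18):
    print(f"difficulty {bits:2d} bits -> nonce {mine(header, bits)}")
```

Verifying a winning nonce takes one hash; finding it takes, on average, 2^difficulty_bits attempts, and that asymmetry is what the whole scheme rests on.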
Why does a photon have an electric field? Or rather, how does it have one? I think discussing the reverse question is more illuminating. From electromagnetic theory, we know the Lagrangian (an incredibly compact way to describe motion and energy in physics) for chargeless situations is ℒ = -¼ F_{ij} F^{ij}, where i and j cycle through time, x, y, z, and F_{ij} = (∂/∂x^i) A_j - (∂/∂x^j) A_i is where we find the kernel of physics we are interested in. We want to know about A_j, which is our electromagnetic four-potential, from which flow the electric and magnetic fields. In short, through a complicated relationship of how A_j changes, we get the equations of motion for the electromagnetic field. In classical electromagnetism, this is all fine and dandy. We can figure out the electric and magnetic fields of wires and magnets. The properties of the four-potential yield us Maxwell's equations. From this we even get a classical wave equation that describes classical light: ∂_i ∂^i A_j = 0. Light then becomes electromagnetic fields, in regions without charges, behaving in a way that obeys a wave equation. This light is continuous, though: you can always have a little more or a little less of it; there are no restrictions on the dynamics of this wave beyond the wave equation. However, something interesting happens when you try to quantize electrodynamics. The interesting thing is that to describe the quantum behavior, you're going to get terms which describe discrete changes in the electromagnetic field. Essentially, a restriction occurs that doesn't allow the electromagnetic field to behave completely arbitrarily. Another way to think of it is to picture the sound modes of an organ pipe. These restricted objects which contribute to motion in electromagnetism are called photons. So that is why electromagnetic fields have photons.
At what point does motion become sufficiently microscopic such that it becomes temperature? A motion contributes to the temperature when it is in thermal equilibrium. Size doesn't matter in itself. For translational motion, it will be in thermal equilibrium when it's randomly distributed. That is, the net motion of an object does not count towards its temperature. It doesn't matter if something is standing still or moving at 1000 m/s (and why would it? whether it's moving depends on what you're measuring relative to). When a moving object slams into a stationary one and comes to a stop, the kinetic energy of its motion gets redistributed and randomized, and the temperature increases.
Why did the Apollo astronauts not just parachute from space directly? Why did they have to use the heat shield and then open the parachutes? And could someone conceivably skydive from the ISS? It seems like it would be possible to just "float" down from space on a parachute... The ISS is travelling at around 28,000 km/h (17,000 mph) relative to the Earth's surface. To land on the surface, that kinetic energy needs to be shed. The spacesuit (EMU) plus manned maneuvering unit (MMU) have a combined mass of around 190 kg (420 lbs). Combine this with an 85 kg (185 lbs) astronaut and you are up to 275 kg. The kinetic energy that needs to be shed is: E = 0.5 m v^2 = 0.5 * 275 kg * (7750 m/s)^2 = 8.3x10^9 J ≈ 8 GJ. This is enough energy to run a 60 W bulb for over 4 years! Let's pick a time period of 10 hours (shuttle re-entry is around an hour) to dissipate this energy, and assume it is dissipated evenly over this period to the atmosphere. The power output for the process is: P = 8.3x10^9 J / (10 x 3600 s) ≈ 229 kW. A room heater is around 2 kW, so this power dissipation is the equivalent of wrapping 115 room heaters around the suit and leaving them running for 10 hours straight. So to do this in an hour is the equivalent of wrapping the spacesuit in over 1000 room heaters. Assuming you get a nudge to push you into the denser atmosphere, you have to shed 28,000 km/h (17,000 mph) by collision with the atmosphere. This is a toasty process. Formatting, TLDR, and last sentence added.
How dangerous is uranium/uranium oxide to handle? At 38:55 of the below video, it is said that people wear gloves when handling uranium to protect the uranium from being contaminated, rather than wearing gloves to protect themselves from the uranium. It is said that since uranium's half-life is in the billions of years, it isn't that radioactive. This sounds hard for me to believe, as I thought uranium was very dangerous to handle. Is it true that uranium isn't that radioactive? That gloves are worn to protect the uranium, and not the human? Also, is uranium oxide - which is what the pellets in the video are - the same as uranium in terms of safety? Uranium in its natural state is not particularly radioactive. U-238 is the most common isotope in uranium ore. U-235, the more radioactive isotope used in enriched and weapons-grade uranium, only accounts for about 0.7% of natural uranium ore. But even U-235 isn't terribly dangerous from a radiation standpoint. The larger concern when handling these materials is their inherent toxicity. For this reason they are always handled with gloves and similar protection. One would have to spend a long period of time in close proximity to a very large quantity of uranium in order to receive a dose of radiation that was any more notable than the typical background radiation we receive in everyday life. The perception of uranium as highly radioactive and dangerous comes from two sources. First, it is often thought of interchangeably with plutonium in this regard. Pure plutonium is significantly more radioactive and thus should be handled with much greater care, but even then, I believe the principal concern is toxicity, not radioactivity. Secondly, and more importantly, irradiated nuclear fuel is radioactive, and quite dangerous to interact with. This is probably what you're thinking of. Enriched uranium that has spent time as fuel in a nuclear reactor has undergone fission and been bombarded with particles, all creating numerous other materials within the fuel that make it very radioactive. Spent fuel like this is what we refer to when we talk of "nuclear waste", and it is quite dangerous. This is the material that conjures up images of technicians in bulky radiation suits, daintily holding on to glowing metal rods with a pair of tongs to avoid contact.
Trying to learn about ultrasound. I get that sound is reflected at an interface in proportion to the difference in acoustic impedance between the two materials, but what I can't find is a physical explanation for why that would be the case. Can someone help? I find it especially weird that it works in both directions equally; for instance, nearly 100% of sound is reflected going from air to solid AND solid to air. Why?? In one dimension, you can consider that your substances are a bit like a Newton's cradle. Air has very light balls on long strings; a solid has heavy balls on short strings. It's a bit of a simplification, but the math comes pretty close to being a discretised version of the wave equation. If a sound wave comes from air into a solid, it's like a light ball hitting a heavy one in an elastic collision. Conservation of momentum means that the light ball has to bounce back pretty fast, while the heavy ball only starts moving slowly. Most of the energy of the wave has been reflected back into the string of light balls. Vice versa, if a heavy ball hits a light ball, the light ball is bounced away pretty hard, but the heavy ball barely slows down. This allows it to swing way out on its string and then return with almost as much energy as it started out with. Again, most of the energy of the wave is reflected (although the wave in the less-dense medium has more amplitude than the original wave).
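The standard formula makes that two-directions symmetry explicit. For normal incidence on a boundary between acoustic impedances Z1 and Z2, the amplitude and intensity reflection coefficients are:

```latex
r = \frac{Z_2 - Z_1}{Z_2 + Z_1},
\qquad
R = r^2 = \left(\frac{Z_2 - Z_1}{Z_2 + Z_1}\right)^{2}.
```

Swapping Z1 and Z2 flips the sign of r (the reflected wave inverts) but leaves R unchanged, which is exactly why air-to-solid and solid-to-air reflect the same fraction of the energy. With Z for air around 400 rayl and for water around 1.5 Mrayl, R already works out to about 99.9%.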
How safe is handling a radioactive fragment of Chernobyl's nuclear fuel with bare hands? (video inside) In the video, the girl finds what is purportedly a radioactive fragment of Chernobyl's nuclear fuel. She handles it with her bare hands and even takes it home. Is it safe to handle this with bare skin? If not, what protective gear should have been worn? What if the fragment was eaten or inhaled? Thank you rennovak for the additional questions: What effect does it have, if any, on devices like a laptop, mobile phone, etc.? Is this fragment dangerous only from a nearby position, or could my whole building be in danger if I were to take it home and keep it? First off, I would be highly skeptical of any YouTube video claiming to have found something like that. It is easy to make stuff up on the internet. But let's assume that the video is accurate: she claims the source she found has a contact dose rate of several Sv per hour. It seems like the source is mostly alpha and beta decay (short-range charged particles) or low-energy gamma, since the dose rate is so much lower when the source is in the ground (2 uSv/hr). An alternative would be something like cesium or cobalt, which emits a more energetic gamma ray that can penetrate farther through the ground. We use a similar radioactive "seed" in cancer treatment, for a procedure called brachytherapy. Depending on what the exact isotope is, when we handle these sources we wear gloves to avoid having the isotope stick to our hands, and sit behind a small leaded glass panel to shield our torso from exposure. The hands are fairly radiation-insensitive, so you don't need to shield them nearly as much. Alpha particles are stopped by a few cm of air, and beta particles are stopped by a few cm of dense material like water or dirt. So if the video is accurate and that source is actually emitting several Sv/hr at contact, one would expect to see burns on the hands (very similar to sunburn) after handling the source for around an hour. Handling it for short periods of time isn't the smartest thing, but it probably won't lead to any immediate harm. From other parts of the video, it seems like the source is well-shielded by simple materials. So I doubt there was any real danger to any other occupants of a building that this source was in. If the source is shielded by a few centimeters of dirt, then the radiation detected by someone many meters away through walls will be negligible.
When I'm hungry, I smell food better. Is that my brain filtering information differently or is it my nose being "more active"? I don't experience this, but nonetheless it must be the former (your brain filtering the information differently). There is no difference in the number or type of odor receptor cells, or in the "activity" of your olfactory epithelium in the roof of your nose, between times when you are hungry and times when you are full. I am also going to go ahead and assume that your breathing patterns do not change (i.e., that you don't breathe more through your nose when hungry and through your mouth when not hungry).
Explain to me, in a nutshell, what Quantum Superposition is. First, you have to understand the concept of quantum states. Let me give you an example: Take an ordinary photon. Any photon. And now this photon passes through a vertical polarizing filter. Now, there are two states that this photon can be in: vertically polarized, or horizontally polarized. If it's vertically polarized, it will pass through the filter just fine, but if it's horizontally polarized, it will get absorbed. These are the two quantum states of the photon. If you shine an unpolarized beam of light through a polarizing filter, you will notice that the light is only half as intense after it passes through. This is because half the photons are blocked, and the other half are transmitted. Let's keep following this light. You know that all of the photons in it must be vertically polarized, since it just went through the filter. If you now put a horizontal filter after it, you won't be too surprised to discover that none of the photons make it through. What happens, though, if instead you put a filter at 45 degrees to vertical? Well, as it happens, half of the photons pass through again, and now your beam of light is polarized 45 degrees to vertical. This is because a vertically polarized beam of light can be described as a quantum superposition of photons that are either 45 degrees, or 135 degrees from vertical. The photons that are 45 degrees pass through the filter, while those at 135 degrees are absorbed. Each individual photon has a 50 percent probability of being either one, and you can't know which is which until they actually pass through the filter. So, if this illustration helps at all, a quantum superposition is where a system can be described as a sum of different possible states, weighted by their probabilities.
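Written in the usual state notation, the decomposition described above is a standard textbook identity:

```latex
% A vertically polarized photon expressed in the diagonal (45/135 degree) basis:
\[
|V\rangle = \tfrac{1}{\sqrt{2}}\,|45^\circ\rangle + \tfrac{1}{\sqrt{2}}\,|135^\circ\rangle
\]
% The probability of passing the 45-degree filter is the squared amplitude:
\[
P(45^\circ) = \left|\langle 45^\circ | V \rangle\right|^2
            = \left(\tfrac{1}{\sqrt{2}}\right)^2 = \tfrac{1}{2}
\]
```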
How small a particle can be affected by magnetism? What is the smallest individual particle that can be manipulated by magnetism? Would Magneto be able to individually manipulate iron molecules? Mass and volume have absolutely no effect on magnetic force. Magnetic force is proportional to charge, strength of the magnetic field, and velocity of the particle. An ideal particle with no mass and no volume could still be affected by magnetic force. So yes, theoretically Magneto could manipulate individual particles of iron.
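For reference, the force being described is the Lorentz force; writing it out makes the mass-independence explicit:

```latex
% Lorentz force on a particle of charge q moving at velocity v in field B:
\[
\vec{F} = q\,\vec{v} \times \vec{B}
\]
% Mass appears nowhere on the right-hand side; it only enters when you ask
% how much the particle accelerates in response, via a = F/m.
```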
How do we know how the Krebs cycle actually happens in the cell? How do scientists study microscopic biochemical mechanisms at the cellular level, like the Na/K pump, glycolysis, the Krebs cycle, etc.? In cycles the products keep changing/converting, so how do they keep track of all of them, considering they are so minute and present in such small amounts compared to average lab samples and experimentation? One powerful technique is isotopic labeling - a compound that is known to be part of a process is labeled with either a stable isotope or a radioactive one, then the label is tracked through the various transformations. For instance, 13C- or 14C-labeled pyruvate could be fed into the citric acid cycle, producing labeled citrate and subsequent constituents over time. Varying the position of the label can show which parts of molecules are being transferred through the cycle. In the mid-20th century radioactive labeling was particularly useful because the compounds could be detected through simple techniques like thin layer chromatography, whereas we can now observe stable isotope labels in a much more fine-grained fashion using LC-MS/MS.
Is there/was there a selective animal breeding program focused purely on intelligence? I ask because I have never heard of such a program, and the little bit of research I have done has only provided me with abstract information about the feasibility of the idea, not its implementation. We've been doing that for a while with dogs that need to be intelligent in some way to do their jobs. http://en.wikipedia.org/wiki/Border_Collie http://pets.webmd.com/dogs/features/how-smart-is-your-dog
Do insects experience pain in the same way we do? For instance, would a bee have a similar experience to a human if their leg was torn off? One problem with the question is that pain research often tosses "perception of potentially harmful stimuli" in the same pot as "suffering". Many very simple organisms have ways to perceive and react to things that might hurt them. A bee doesn't like to be squeezed and will sting in defense if it gets too bad. A simple coral will notice touch and withdraw from it. Even plants react to damage. What they don't have is the higher capacity to think and worry about those experiences. It happens, seems to be uncomfortable, the animal tries to get out of the situation, but they don't seem to develop a lasting "fear" or often even just a strategy to avoid the same situation from happening again. A big part of what turns pain into suffering is the brain's ability to simulate the future. We can expect that something will hurt, we can fear it, we can imagine how bad things could get. We worry. Simple animals don't. Context matters a lot to humans. The pain after a good workout can feel good, while the same amount of pain after an accident can have us suffering. It's very difficult to test for such processes in animals. You can check their bodies for special pain transmitting nerves, or chemical changes to damage, but that doesn't say anything about their ability to interpret the sensation beyond reflexes to move away.
Is it possible to diffract bacterium? In a similar fashion to electron diffraction, is it possible to diffract a bacterium? Actually, yes. I'm not sure about bacteria specifically, but a couple of techniques let us get that kind of data from other small things. Some of the earliest work done at the Stanford free-electron laser was obtaining the diffraction pattern from a single virus. This is an image of diffraction from a single viral particle. Additionally, soft X-rays have been used to image whole human cells using X-ray tomography (like a CT scan): http://www.ncbi.nlm.nih.gov/pubmed/23086890
Why are there so few species of mammals? It seems like mammals can have a lot of variance before we call them separate species. There are more species of frogs and toads than all species of mammals. Is Mammalia just a younger class? Are they better at breeding with more distant relatives? (And why are there so many species of bats? They take up like 20% of mammalian species.) So species arise through divergent evolution. Evolution essentially occurs as genetics are altered from parent to offspring, whether by genetic "mistakes" or by more purposeful routes such as combining DNA from multiple parents, as with sexual reproduction. More species would arise from a single progenitor species under various circumstances, most notably depending on the number of offspring produced and the rate at which the offspring are produced. So two extreme examples would be: Mammals: A broad generalization, but they tend to reproduce very slowly, reach reproductive age later, and be long lived, thus having fewer young throughout their lifetime. As a result, evolution of new species would occur much more slowly. Insects: Have massive numbers of young (thousands to hundreds of thousands in some cases), reach reproductive age quickly, and tend to be short lived. Thus, evolution of new species would proceed quite quickly in this group. This is a broad generalization and many other factors can affect the evolution of species, but in my opinion as a research biologist, this is likely the most impactful variable in reference to your question.
How would I give myself the best chance at becoming a fossil once I die? What sort of preparation, if any, would increase the chances of fossilization? Would calcium or other mineral supplements help? Where should I have my body buried? Near a river? In the Arctic? Would a casket hinder this goal? To be clear, I am not talking about mummification. I want my bones preserved in stone to be dug up millions of years from now. What you're interested in looking at is taphonomy, which is the study of what happens from the time an organism dies until it is discovered as a fossil. Here is a journal article about it. You'll find this stuff if you go through some taphonomy papers or sites like the ones I linked to above. There are some general conditions ideal for fossilization: Ideally you'd like the remains to be complete and articulated, and maybe preserve soft tissue like hair or stomach contents. You also presumably want the remains to be found someday. The first step is to have them buried quickly. You don't want them to be exposed to the elements or to things like scavengers. They could get swallowed by a collapsing sand dune or sink to the bottom of a lake that has a lot of fine-grained sediment being deposited in it. You want the environment to be low-energy, so no fast-moving water like rivers or strong ocean currents. You don't want big pieces of debris, like large pebbles, to potentially damage the remains. The smaller the sediment particles the better. Lithographic limestone is ideal. You want the environment to be low in oxygen to prevent decay. This will keep the skeleton together if nothing else. In ideal scenarios, this can preserve soft tissue. You want them to be in a fairly stable area. No major earthquakes or igneous activity that will disturb the strata or metamorphose it. It's no good if the limestone the remains are in turns to marble. You don't want them to erode out of a hillside too soon or have faulting break apart and overturn things. You don't want there to be a lot of freeze-thaw cycles, because when water gets into cracks and freezes it can cleave things apart. So I'd veto a river and the Arctic. I'm not sure about the effects of using a casket, although if you want the remains to permineralize then a sealed casket might not be ideal. If there are microbes or mold spores in there they could potentially cause damage. I'd suggest a lake or backwater lagoon that has an influx of sediment. If the bottom of the lagoon or lake is anoxic, even better. Swamps and peat bogs would work well. Or you could go for something terrestrial. Aim for a geologic region that is relatively stable. Less likely possibilities are volcanic ash or tar pits. Above all, there are no guarantees in the fossil record. Taphonomic bias is certainly an issue that paleontologists have to account for. There are Lagerstätten with absolutely exceptional preservation, but they're not common. Examples of great fossilization: the Solnhofen limestone, the Messel Shale, the Djadochta Formation (e.g. the Flaming Cliffs in Mongolia), the Jehol group in Liaoning Province, China, the Burgess Shale, Mazon Creek in Illinois, US, the Green River Formation, and the Chinle Formation (e.g. Petrified Forest National Park). Also, if you like this kind of stuff, you'll probably love the book The Dechronization of Sam Magruder, a fictional book by renowned paleontologist George Gaylord Simpson.
How high would the lunar return module of the Apollo lander be able to go if it left from the surface of the Earth? The ascent engine could only lift the ascent stage in lunar gravity, where it had 1/6 of its Earth weight. On Earth, the stage and fuel weighed around 10,000 pounds. The engine had a thrust of 3,500 pounds. So the answer to your question is that it would not even lift off.
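A quick thrust-to-weight check with the round figures quoted above makes this concrete:

```latex
% Thrust-to-weight ratio on the Moon vs. on Earth, using the figures above:
\[
\text{Moon:}\quad \frac{T}{W} = \frac{3{,}500\ \text{lbf}}{10{,}000\ \text{lbf} \times \tfrac{1}{6}} \approx 2.1
\qquad
\text{Earth:}\quad \frac{T}{W} = \frac{3{,}500\ \text{lbf}}{10{,}000\ \text{lbf}} = 0.35
\]
% A ratio below 1 means the engine cannot overcome the vehicle's weight,
% so on Earth the fully fueled ascent stage never leaves the ground.
```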
What causes water rings on cups? When you leave a glass, why do its contents end up in a ring around the glass, when there was no liquid outside the cup? Are you talking about water rings on the surface the glass was resting on? That's due to condensation, most prominent when you pour a cool drink into the glass. Water vapour from the air condenses onto the surface of your glass because it is cooler, and when enough water has condensed it rolls down the side of your glass to the bottom. That usually doesn't include the contents of whatever is in your cup. Of course, it's possible that, in the process of sipping your drink, some has gone outside the cup.
Can a stray balloon make it to space? No, for most definitions of space. Balloons don't really push themselves up; it's more that the denser air around them sinks underneath them and pushes them out of the way, thus up. As you get higher, there's less air to push the balloon higher. Eventually one of two things happens: the balloon expands as the outside pressure drops until it bursts, or it reaches an altitude where its overall density matches that of the surrounding air and it simply floats there - far below any sensible definition of space.
Why does my microwave mess up the (wi-fi) internet when it's on and is there any way to stop this? other than buying a new microwave Some quick googling landed me with the following information: Approximate frequency of a microwave oven: 2450 MHz (2.45 GHz). Frequency of the 802.11 b/g/n protocols for Wi-Fi: 2.4 GHz, with channels taking the range from 2.4 to 2.5 GHz. So you have a b, g, or n type router (probably g) and your microwave has poor shielding (probably old). The emissions from your microwave are interfering with the signal to/from your router because they are of the same frequency. (Other frequencies don't matter because they are filtered out.) This is happening because the channel that your router is using falls in the same frequency range as your microwave. The 2.4-2.5 GHz frequency band is divided into smaller 20 MHz channels that allow multiple wireless signals following the same protocol to exist in the same area. Since the oven frequency is an approximation I cannot say for sure what channel you are operating on, but the easiest fix would be to change the communications channel on your router. I've never done this so I can't say for sure how (I'm sure Google can). I would also advise getting a new microwave, as they are supposed to shield against emitting radio waves.
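As a sketch of why changing channels can help: the 2.4 GHz band's channel centers follow a simple, well-known formula (channel 14 is a Japan-only special case), so you can see which slice of spectrum each channel occupies and pick one away from the interference. The helper below is illustrative, not router configuration code.

```python
# 2.4 GHz Wi-Fi channel center frequencies (MHz). Channels 1-13 are spaced
# 5 MHz apart starting at 2412 MHz; each channel is roughly 20 MHz wide,
# so adjacent channels overlap heavily.
def channel_center_mhz(channel: int) -> int:
    if channel == 14:       # Japan-only special case
        return 2484
    return 2407 + 5 * channel

for ch in (1, 6, 11):       # the classic non-overlapping trio
    c = channel_center_mhz(ch)
    print(f"channel {ch}: {c - 10}-{c + 10} MHz")
# If the oven's leakage sits near one end of the band, moving the router
# to a channel at the other end (e.g. 1 vs. 11) can reduce the overlap.
```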
Why do we base attractiveness on such seemingly trivial factors? Why do fetishes exist? Couldn't a fat woman birth a child as efficiently as a skinny woman? Does the size of breasts serve any evolutionarily beneficial purpose? In short, WHY does the factor of attraction vary so much on seemingly trivial factors, ones that seem to serve minimal survival purposes? I'm going to be contrary to what other people in this thread are saying, and point out that things like attractiveness and beauty vary hugely over time and space. That is to say, different cultures have different concepts of beauty, whether it's different because it's somewhere else in the world, or different because it's a past version of your own culture. For instance, you ask about fat women or small-chested women, but fat women were considered attractive once, and small breasts were preferred once.
How is having such dependent offspring selectively advantageous for humans? Well, first of all, humans are extremely K-selected in comparison to most mammals. We put a lot of effort and care into each individual which is born. You can have a supremely dependent offspring when you put a lot of care into it. Additionally, our babies are born in what we call a secondarily altricial state. We are less developed at birth than what you'd see in most primate species. Basically, we are born earlier than would be expected with regard to our brain (and body) development. Much of this is related to brain size. We are born relatively large-brained, but have neonatal brains that are in a less developed state (relative to final brain size) than seen in our primate relatives. It has been hypothesized that this is due to the "obstetric dilemma," where you can only manage to squeeze a head of a certain size through the pelvic outlet (although there are some researchers who indicate that this might not necessarily be the case). It is also interesting to note that humans are really good at taking care of our kids after they are weaned. Most primates do not engage in significant direct care of offspring following weaning. Primate juveniles are pretty much on their own in terms of finding food following weaning -- and have correspondingly high rates of mortality. In contrast, humans engage in significant food sharing and direct care for individuals after weaning. This period, called "childhood" (see work by Barry Bogin and colleagues), not only increases offspring survival, but also reduces the interbirth interval by lowering the amount of time spent in lactation. As females demonstrate reproductive suppression during lactation, reducing this period allows us to resume cycling faster and to have additional kids while still having dependent offspring. In contrast, other ape species have extremely long interbirth intervals, as their period of lactation (and thus reproductive suppression) is longer than found among humans.
What's happening when you break in an engine, and how do different break-in methods affect the engine's performance and longevity? Also, what effect do cylinder coatings such as Nikasil or SCEM have on break-in? Breaking in an engine is about making sure that all of the bearings and moving parts settle in together and wear evenly. If you don't do it, the engine will fail sooner, but there are so many variables you can't really say for sure how much damage you'll do. You could end up with piston rings that don't seat correctly so the car will burn oil, or you could get piston slap (when the piston head rocks back and forth in the cylinder rather than sliding straight up and down). Camshaft lobes could wear incorrectly, leading to valves opening and closing at the wrong speed/time. A crankshaft bearing could spin and destroy itself, seizing the entire engine. The crankshaft itself could begin flexing as it rotates, and eventually snap. Every car manual I have ever read has said that you should break in an engine by running it at various RPM, and avoiding heavy loads for the first few thousand miles. Once an engine is broken in, if you take it apart, it has to go back together exactly the same way, as the parts have "mated" together. Engine assemblers use journaling marks to make sure that they have no excuse for mixing parts up or getting something backwards. Now, these are all pretty extreme examples. Modern machining is pretty good, and you have to abuse a car pretty hard during its break-in period to do damage that will be immediately apparent. If you don't hoon your car, but simply ignore the break-in procedures and drive to work and whatnot, you'll probably sell your car before anything goes wrong with the engine. In older engines, strictly following the break-in procedures was far more important due to less advanced materials and less consistent tolerances. I really don't know anything about Nikasil or SCEM, but I've got a car with a rotary engine, and Nikasil is mentioned as a material used in apex seals for them. The break-in procedure for my RX-8 was no different from my bog-standard Honda, so I doubt it changes much of anything. Someone with an ASE certification should be able to provide a lot more technical terms and info than me; you can find them on /r/mechanicadvice and /r/cartalk. You can find photos of horrible engine failures on /r/justrolledintotheshop.
If Gravity Is Not a Force, but the Curvature of 4D Spacetime, Why Do We Want to Unify It with the Other Fundamental Forces? Gravity always seemed to be the hardest force to tie into the bunch of the 4 fundamental ones, but gravity is also the only force for which we go out of the way to claim that it's just a result of curvature in 4D spacetime and that it isn't really a force. So, if it actually isn't a force, why are we so keen on unifying it with the three leftover forces in the pursuit of a "Theory of Everything"? Because this curvature is still describing a classical phenomenon. Think of it this way: Maxwell's equations describe electromagnetic interactions as interactions with fields and charges. That's the classical view. But quantum electrodynamics (QED) describes it as charged particles exchanging photons (quanta of the electromagnetic field). How do we know QED is better? Experiments see discrepancies from the classical interpretation. The curvature of space-time is related to the gravitational field, again a classical field. We expect this to be an exchange of gravitons between masses (quantum gravity), but we haven't noticed any discrepancies between reality and Einstein's equations yet, because those discrepancies would occur in very strong gravitational fields which we don't have access to.
[Biology] Is the inside of a resting neuron negative, or just MORE negative (a.k.a. less positive, but still positive) than the extracellular fluid surrounding it? I've been looking at YT videos and various websites for a while now and they seem to use vague language in describing this You may find the wording vague because, as with any other discussion of potential difference, it is the difference in electric potential that matters; the absolute value can be anything you want. We know that real cells are aqueous systems, and by the law of electroneutrality we know that the system is overall neutral to begin with - you can point at any random flask of solution and expect it to be basically electrically neutral. This means that if we were to set up an electrochemical gradient, say with NaCl, across some membrane in this system, it must do so via charge separation - one side of this membrane has an excess of Na+ and will be positive, the other an excess of Cl- and will be negative. So you can build an arbitrary membrane potential of -70 mV by having one compartment sitting at +35 mV, and the other at -35 mV, both relative to our original, electroneutral solution. In a one-ion scenario, the voltage difference is given by the Nernst equation. Real cell systems, however, are a lot more complex, and are full of charge-carrying species. So whenever you find a potential mentioned like this, the number only applies to the ions we're investigating. For example, in a cell there are potassium, sodium, chloride, sulphate, phosphate, carbonate, etc. ions, but not all of them are membrane-permeable. It is only the cell-permeable ones that we consider in coming up with the membrane potential - see the Goldman equation - because those are the only ones that can actually be used to do work. So basically, there are so many other species (such as charge-carrying proteins) that can neutralize any "net" charge, it's really not too meaningful to describe any given compartment as "positive" or "negative", because it could be either or both. The membrane potential exists as a local phenomenon that is only found across that membrane, generated by ions that can permeate that membrane. The only thing we can say for certain is that if you consider the solution in bulk, you can consider it basically neutral.
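A minimal sketch of the Nernst calculation mentioned above, using typical textbook ion concentrations (the exact numbers vary by cell type, so treat them as illustrative):

```python
# Nernst equilibrium potential for a single permeant ion species.
import math

R = 8.314      # J/(mol*K), gas constant
F = 96485.0    # C/mol, Faraday constant
T = 310.0      # K, body temperature

def nernst_mv(z: int, conc_out: float, conc_in: float) -> float:
    """Equilibrium potential (inside relative to outside), in millivolts."""
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out / conc_in)

# Potassium: ~5 mM outside, ~140 mM inside -> roughly -89 mV
print(f"E_K  = {nernst_mv(+1, 5.0, 140.0):.1f} mV")
# Sodium: ~145 mM outside, ~12 mM inside -> roughly +67 mV
print(f"E_Na = {nernst_mv(+1, 145.0, 12.0):.1f} mV")
```

Note that both values come out as differences across the membrane; nothing in the formula pins down the absolute potential of either compartment, which is exactly the point made above.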
silly math regarding supermassive black holes and average density. So, regarding the new largest ever black hole that's been discovered... I was doing some math. Badly, likely. Can someone proof this? This is not homework, and was done for fun on another thread. The diameter of the event horizon for the new black hole was estimated to be about the size of the orbit of Neptune, or 4 light-days, which would give us a radius of... r = 51,804,136,742,400 m (299,792,458 m/s x 60 sec x 60 min x 24 x 2) and pi is 3.14159265359, so the volume of the event horizon would be 26,486,747,264,016,772,492,836,761,361 cubic meters. If that's correct, we can move on. The current heavyweight comes in at 17 billion solar masses. 1.9891 x 10^30 kg (the sun's mass) x 17,000,000,000 = 33,814,700,000,000,000,000,000,000,000,000,000,000,000 kg. So, the average density of that event horizon volume would be 1,276,664,879,342 kg/m3 if my shitty math is correct. It probably isn't. Now, this conflicts with things I've heard, that the average densities of supermassive black holes are in fact very light, some even lower than water. That doesn't jive with my ad-hoc calculations. Any input? Now, this conflicts with things I've heard, that the average densities of supermassive black holes are in fact very light, some even lower than water. I thought this was totally wrong at first, but then I ran the numbers myself in Wolfram Alpha and got a density drastically lower than water. Things I never knew before! As for your math, I think you may have made an error in calculating the volume. Since the radius is about 5 x 10^13 m, and the volume V = (4 pi r^3)/3, the volume should be something like 6 x 10^41 cubic meters, which is way higher than what you have in your post. This refers only to the average density within the event horizon; it is thought that the mass actually all piles up in the center (possibly/probably in a singularity with infinite density). edit: in general, since the Schwarzschild radius of a black hole is about 3 km per solar mass, the radius is proportional to the mass and the volume is proportional to the cube of the mass, so higher-mass black holes will have lower average densities.
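For anyone who wants to rerun the numbers, here is a short sketch of the whole calculation using the Schwarzschild radius r_s = 2GM/c^2; since r_s grows linearly with mass, the average density falls off as 1/M^2:

```python
# Average density inside the event horizon of a Schwarzschild black hole.
import math

G = 6.674e-11        # m^3 / (kg s^2), gravitational constant
C = 2.998e8          # m/s, speed of light
M_SUN = 1.9891e30    # kg, solar mass

def schwarzschild_density(solar_masses: float) -> float:
    m = solar_masses * M_SUN
    r_s = 2 * G * m / C**2                    # Schwarzschild radius, m
    volume = (4.0 / 3.0) * math.pi * r_s**3   # horizon volume, m^3
    return m / volume                          # kg/m^3

print(schwarzschild_density(1))      # ~1.8e19 kg/m^3 for one solar mass
print(schwarzschild_density(17e9))   # ~0.06 kg/m^3, far less than water
```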
Sucking hot air out or sucking cool air in - what's a more effective way to cool down a room? I have a fan I put on my window sill, and I have the type of windows that open horizontally (so there's a large patch of open window that the fan does not cover). I'm guessing heat pump, but all the physics I know was from my chemistry courses so I can't say for certain. My reasoning is that the air in my room is hot and the air outside is cool, so having a fan that moves air along the gradient in a thermodynamically-favorable direction would take less work (from the fan) than the reverse. I don't know if there is a significant difference in air pressure between my room and the outside environment when a window is opened. I think that in the overall system, the total air pressure of the room vs. the outside world moves to equilibrium quicker than the temperatures do, possibly mitigating the effect of pressure (if my assumptions aren't too flawed). Also, the fact that the opening to the outside is not completely sealed by the fan may have an effect that I'm unaware of. Just something I've wondered about. Curious to know if someone has a more conclusive answer. Time to do some science! Given what I can tell about your current setup, I'd recommend putting the fan at the bottom of the window, and moving your bed/desk so it's pointing at you. Sure, it may seem obvious, but here's why: If you want to get really fancy, go get a smoke machine, smoke bomb, or something really dusty and cloud up your room before turning on the fan in a bunch of different positions. Take some video to see where the smoke goes and test your hypotheses about the fan's flow lines. For other people with hot room problems, if you have access to an attic, stick your fan in the ceiling to blow the hot air out. Drawing cool air in with this method can be even better than the solution I offer above, but it depends on your fan, room size, geometry, etc. Heat transfer problems are nasty that way: hard to solve analytically for complicated geometries and funky boundary conditions. But at least they're easy to experiment with! I look forward to hearing how it goes.
Why must the electrode of a pH meter never be removed from a solution while the device is on? Hello scientists! Every lab manual I've ever run across insists quite vigorously, often in italics, that one must never remove the pH meter's electrode from solution while the device is still on. I've dutifully followed these instructions, but answers as to why this is an imperative aren't easily available online. I'm sure there is a simple explanation, and the strong warning language makes me curious as to what it is. Presumably it ruins the device and/or data, but as a man of the world, I simply need more answers (which you lot are overqualified to provide.) What's the deal? I'm not too sure about this, but I think it might have something to do with drying out the electrode's glass coating, which would result in unreliable readings. So it should always remain in the solution, or in a specific container with electrolytes which keeps it moist and prevents it drying out. I could be wrong, though.
Why do larger elements (e.g. Moscovium) have such short lifespans - can they not remain stable? Why do they last incredibly short periods of time? Most of my question is explained in the title, but why do superheavy elements last for so short a time - do they not have a stable form in which we can observe them? Edit: Thanks to everyone who comments; your input is much appreciated! A contributing factor is that we probably haven't synthesised the most stable isotopes of many superheavy elements. The higher the atomic number, the greater the neutron/proton ratio required for stability, and since superheavy elements are synthesised by fusing two lighter ones together, it's hard to get enough neutrons. For example, the first isotope of copernicium (element 112) synthesised, in 1996, was Cn-277, with a half-life of under a millisecond. A few years later Cn-285 was synthesised, and that has a half-life of about 30 seconds. Still very short in human terms, but many thousands of times more stable than the first isotope discovered. It's likely the same will apply for the newest elements discovered, and indeed unconfirmed results indicate this. Even in the predicted "island of stability", half-lives are still likely to be minutes at best, though.
Longest earthly shadow possible? The Empire State Building blocked the sun from me from 20 miles away where I grew up... what's the longest possible shadow that you could possibly observe (and distinguish)? Strictly speaking, I think an object can cast an infinite shadow, albeit not for very long. Consider a single ray, moving from the sun to the earth, and assume that this ray misses the earth by a tiny margin, say 3 feet or so. If we ignore the effects of gravity, then this ray will move past the earth and out into space, and will travel (for all intents and purposes) an infinite distance. Now say that you place an object in the way of that ray. This means that the object will cast a shadow in the place where the ray was going to be, and since the ray was aimed in such a way as to travel an infinite distance, the shadow will be of infinite length as well. This is still true when gravity is brought into the picture; it just means we have to tweak the angles.
At what point, if at all, does an exotic species become a native species? Can an exotic species become so well integrated into the local ecosystem that it can be considered native to that system? Once a non-native species establishes a sustainable population, it's pretty much a part of the ecosystem. This happens with a lot of organisms. If it's overtaking established ecological niches, it's an invasive.
If an electron has a non-zero chance to be a very long distance from the atom, then doesn't the number of atoms in the universe make it a certainty that there is at least one electron out there orbiting meters away from its atom? Also, does this even matter, or is the whole concept of the electron as a point particle that "is" somewhere stupid? the whole concept of the electron as a point particle that "is" somewhere is stupid? I think that a better way of imagining the electron is as a localized wave. It does, after all, behave according to a modification of the wave equation. Although people tend to talk in terms of the probability of finding an electron over here or over there, most actual physical processes do not resolve the position of the electron to a volume much smaller than an atom. If you want to force the electron to collapse into a state where the entire wave is effectively localized into a very small volume, you have to hit it with something of very high energy, such as an x-ray or gamma ray. Getting to your main question, if you did shoot an x-ray at an atom but 'missed' by a few Bohr radii, you would still have a small chance of the x-ray scattering off of the electron. This probability decreases exponentially as one moves away from the center of the atom. When people talk of the electron as a point particle, what is meant is that there does not appear to be any internal structure, or if there is, it must be contained in a small volume with a radius under 10^-18 m. So far there are no major indications of internal structure.
Is there a common explanation why people who took LSD describe similar visual experiences, e.g. colorful, fractal, vibrating strings/stripes? LSD affects a specific neurotransmitter receptor in the brain called 5HT2a. Activation of this receptor leads to many changes in activity across the brain. A recent study looked at how taking LSD affects the part of the brain that processes visual information, using a technique called fMRI. They showed that brain activity in people who closed their eyes after taking LSD mirrored the brain's response when people saw real images. They figured this out by comparing patterns of eyes-closed fMRI activity under LSD vs. without LSD. Here's a quote from the article: "This result may indicate that under LSD, with eyes-closed, the early visual system behaves as if it were seeing spatially localized visual inputs."
How do you invent a programming language? I'm just curious how someone is able to write a programming language like, say, Java. How does the language know what any of your code actually means? Designing a computer language is a pretty tricky business, really. There are a lot of tradeoffs to be made, which explains why there are so dang many of them. When starting a new one from scratch, you ask yourself a lot of questions. Ultimately, the question that matters most is, "What do I want to be easy in this language?" You might even call it the First Question of Computing. That's only half the problem, however. To understand the second half, let's take a little detour into the mid 20th century, and look at computers themselves. Now, ever since the first computers came online, we brave and foolish folks who program them have had a vast number of varied answers to this question. Some folks wanted to make war simpler, some wanted to make intelligence simpler. But in general, the early computers were often single-purpose machines. Enter ENIAC, which is often called the first "general purpose" computer. All of a sudden, we had a machine which could do a lot of different things. This was exciting! And terrifying at the same time. How do you tell a computer the size of a small house that you want to calculate the logarithm of any number you give it, just as a simple example? The answer was to have a very small number of very simple instructions that the computer could perform, and then build up from this small instruction set, combining them in various orders, until you eventually make a "program" that does what you want. Amazingly, this still holds true today! Your typical PC running what's called the x86 instruction set is basically just performing a bunch of the same small(-ish) number of instructions over and over, until you get what you wanted to get. [As a brief aside, mathematicians had already attempted this reduction of an algorithm to the most basic set of operations and postulates - let's just say it didn't go so well, and both mathematicians and computer programmers are struggling with some fundamental problems that fell out of it even today.] One key feature of almost all instruction sets is their emphasis on arithmetic. There's a reason we call computers "computers", after all. The designers of the earliest computers answered the First Question of Computing with "I want math to be easy." So computers got really good at math, really quickly. Unfortunately, as the things we asked computers to do became more and more complex, it became very tedious to construct programs using that very small set of possible instructions. One particularly forward-thinking programmer decided one day to add a layer of indirection between the program writer and the machine. Basically, she decided to answer the First Question of Computing with, "I want to make programming easy." The first of the truly great computer programming languages, FORTRAN, was finally born. FORTRAN allows the programmer to type things like "do the following thing 10 times", written not in instruction-set codes, but in plain old English. This was an enormous step forward, but involved some sleight of hand behind the scenes. Basically, the FORTRAN compiler would read in the program which was nice to human eyes, and for each line of code, it would create a bunch of those instructions from the instruction set that preserved the intent of that line of code, but could now be executed by the machine. This truly was wizardry of the highest order.
Very much like a growing baby, FORTRAN changed and grew as the years went by, as different people asked it to answer the First Question of Computing in different ways. Computers started to get smaller and faster, and made their way into the home. All of a sudden, folks much like myself started to give different answers to the First Question of Computing. We were playing with the computer, exploring what it would let us do, what it could be pushed to do. With this large set of new things that people wanted to be able to do on a computer, a whole slew of new languages popped up. Some of them let you manipulate lists really easily, some of them let you manipulate hardware really easily. In each language, it was easy to do some things, but remember those tradeoffs I mentioned right at the beginning? They were right about to bite us programmers in the butt. In C, for instance, it is in fact very easy to manipulate hardware. Many operating systems are written in C for just this reason. Unfortunately, making it easy to manipulate hardware makes it really hard to manage your computer's memory, among other things. C programmers spend a lot of time worrying about where they stored this variable or that string, how to get rid of it, how to let other parts of the program know where it is. Needless to say, if you're not answering the First Question of Computing with "I want to make hardware manipulation easy", C is going to give you a rough ride. The designers of Java, for instance, answered the First Question of Computing with, "I want to make running on lots of different machines easy". While the jury may still be out on whether or not they succeeded, they did have a clear vision because they succinctly answered the First Question of Computing. (A few other global principles went into the design as well, of course.) Now for each of these new computer languages, you'd have a different grammar that defined what a legal line of code looks like, much like English grammar is different from Finnish grammar. Both let you speak and convey meaning, but they sound pretty darn different. What's the same, however, is that for each line of code in the "high-level" language, we use a compiler or interpreter to transform our friendly code into the kind of instructions the machine likes to read. This constant, this fundamental purpose of the compiler, is the second half of designing a computer language. First it parses your friendly code, then generates machine code. We can now hopefully answer what it means to create a new programming language. First, you need to answer the First Question of Computing. Once you have decided how you want to answer that question, then you write the grammar that fulfills your answer, and the compiler that translates your grammar to the grammar of the underlying machine instruction set. This process, this mapping between two different levels of representation - a map that preserves meaning - is far and away one of the most amazing ideas I've ever learned about. It has applications in a huge number of different endeavors, across all walks of life. The fact that you asked this question means you've taken your first step into a truly amazing journey. Stay curious :)
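To make the parse-then-generate idea above concrete, here is a deliberately tiny sketch of a "compiler" for arithmetic expressions. The stack-machine instruction set (PUSH/ADD/MUL/...) is invented for illustration, and Python's own parser stands in for the grammar half of the job:

```python
# A toy "compiler": parse friendly text, then emit machine-like instructions.
import ast

def compile_expr(source: str) -> list[str]:
    ops = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL", ast.Div: "DIV"}

    def emit(node) -> list[str]:
        if isinstance(node, ast.BinOp):
            # Post-order traversal: code for both operands, then the operator.
            return emit(node.left) + emit(node.right) + [ops[type(node.op)]]
        if isinstance(node, ast.Constant):
            return [f"PUSH {node.value}"]
        raise ValueError(f"unsupported syntax: {node!r}")

    # Reuse Python's parser for the "grammar" half of the job.
    return emit(ast.parse(source, mode="eval").body)

print(compile_expr("2 + 3 * 4"))
# ['PUSH 2', 'PUSH 3', 'PUSH 4', 'MUL', 'ADD']
```

Note how the output preserves the intent of the source (multiplication binds tighter than addition) while being expressed in a completely different, machine-friendly form - the meaning-preserving mapping described above, in miniature.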
If I put a candle in a box, then put a candle and a mirror in another box, will the one with the mirror be brighter inside? Thought about this as I put a mirror up in my room. I flicked on a flashlight in the dark and it shone at the mirror and reflected back into the room. Then I wondered if the room was actually any brighter now than it was without the mirror. Yeah, as JushiBlue pointed out, it depends where you're looking. Some of the first "flashlights" (or torches, you crazy Brits, you) were basically this principle. Stick a candle in some type of enclosure with a bunch of reflective mirrors to focus the light outward very brightly. That outward light is much brighter than just the candle by itself, but only because it's focused, much like with a lens. An easy way to think of it is that a candle in an open enclosure emits light radially; if you use mirrors you're "moving" some of the light that would have gone in other directions into the same direction, thus making it brighter!
Why are American aircraft carriers flat, while British aircraft carriers sloped upward? Surely there must be an internationally agreed upon method for launching an aircraft into the air from a very short distance, especially with the aircraft being so similar? What explains the disparity? British carriers are designed for Harrier jets which can take off under their own power off a sloped deck. Their engines can be turned downward to assist takeoff. US aircraft are launched off the carrier with a catapult that accelerates the plane from zero to flying in 2 seconds. US carriers can fly a range of aircraft using this system.
Is the latent heat of fusion of water altered with different crystalline structures of ice? (Ice formed under very high pressure vs STP) If so, is there a formula? The thought is, because ice formed under high pressure cannot optimize for hydrogen bonds (why it normally expands when frozen), that it would require less energy to undergo the phase transition Yes. Latent heat is the difference between states A & B at the conditions of phase transition, in terms of energy stored in the configuration of molecules (internal energy including sum of all intermolecular interactions). There are 18 known phases of crystalline solid water, so I don't think there's a simple formula I can find for you. The normal variant is hexagonal, but you also get cubic, tetragonal, rhombohedral, etc. Many of those won't have a "heat of fusion" because there won't be a direct phase boundary with liquid water, but there will be some latent heat associated with transition to other forms of ice.
Do stars have lagrange points? Since our star is orbiting the centre of our galaxy, does it have it's set of L-points? is there some far off L4/L5 position of just some colossal clump of debris never to have a star of it's own to orbit? If so, could a sufficiently large enough star hold entire other star systems in it's L4/L5? and should you be some astronomer to evolve in such a star system, would you be able to tell that you were in the L4/L5 of a larger body and not infact independently orbiting the central mass of the galaxy? would that have any implications? Thanks! For most of the galaxy the gravity of the center is very small relative to the gravity of all the other stars in the galaxy. So any stellar Lagrange point would be swamped by the effect of gravity from other stars.
Why do greenhouse gases not stop as much light from entering the atmosphere as they keep from exiting the atmosphere? I understand how greenhouse gases prevent light from exiting our atmosphere, but why do they not prevent an equal amount of light from entering? The big greenhouse gases on the Earth (water, carbon dioxide, methane, ozone, etc.) work because those molecules have specific bands of wavelengths where they absorb light. These are greenhouse gases because those bands are most effective at absorbing infrared light. The light coming into the Earth from the sun is (roughly) that of a blackbody at 5800 K. Thus, most of the light (energy) from the sun is in the visible wavelengths, and a lot of it gets through to the surface (the red part in the figure) because the greenhouse gas absorption bands aren't as effective in the visible (you can see in the image where the bands still work). Meanwhile, the Earth is much colder, something like 280 K. Its blackbody emission is therefore almost entirely in the infrared. This happens to be where the greenhouse gases are extremely effective at absorbing light, and therefore they don't let the Earth's light escape back into space (you can see lots of yellow where the greenhouse gases eat up all the light trying to escape and then reradiate it back at the Earth). Basically then, the greenhouse gases work because the wavelengths of light coming in are different from the wavelengths of light going out, and the greenhouse absorption bands are more effective at preventing the light from getting out than preventing the light from coming in.
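The wavelength mismatch can be checked with Wien's displacement law, which gives the peak emission wavelength of a blackbody at a given temperature; a quick sketch using the temperatures quoted above:

```python
# Wien's displacement law: peak wavelength of blackbody emission.
WIEN_B = 2.898e-3  # m*K, Wien's displacement constant

def peak_wavelength_um(temperature_k: float) -> float:
    return WIEN_B / temperature_k * 1e6  # micrometres

print(f"Sun  (5800 K): {peak_wavelength_um(5800):.2f} um")  # ~0.50 um, visible
print(f"Earth (280 K): {peak_wavelength_um(280):.1f} um")   # ~10 um, infrared
```

A factor of roughly twenty in wavelength separates the incoming and outgoing light, which is why absorption bands centered in the infrared can trap outgoing radiation while barely touching the incoming sunlight.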
The Mars Perseverance Rover's Parachute has an asymmetrical pattern to it. Why is that? Why was this pattern chosen? Image of Parachute: The asymmetry in the coloring makes it easier to study the video and assess the parachute's performance. In multi-chute systems, you'll see that each parachute has a different pattern so they can tell them apart. Edit: more explanation: the parachute is able to twist with respect to the vehicle (and therefore the camera). If there's any strange behavior in the parachute, they can track it visually and then go back and look at photos of the folded and packed chute, the fabrication process, etc, and the markings help them to make a direct comparison.
Now that a piece of wreckage from flight MH370 has been confirmed on Réunion Island, is it possible to use our understanding of the direction and speed of currents in the Indian Ocean to narrow down where it likely crashed? Confirmation the wreckage is from MH370 can be found here: The crash occurred 515 days ago. If this points an area out, where would it be? Is the area outlined by this method an improvement on the previous search perimeter, the one where search parties were operating? Are they even close? There's a pretty good summary here. What I took away from that was that it'd be insanely hard to pinpoint an exact starting location, since planes aren't really Lagrangian drifters and ocean flow is more turbulent than it is laminar. What say you, /u/sverdrupian?
What ever happened to the hole in the ozone layer? Are my poor little phytoplankton going to be okay? Since 1981 the United Nations Environment Programme has sponsored a series of reports on scientific assessment of ozone depletion, based on satellite measurements. The 2007 report showed that the hole in the ozone layer was recovering and the smallest it had been for about a decade.[62] The 2010 report found that "Over the past decade, global ozone and ozone in the Arctic and Antarctic regions is no longer decreasing but is not yet increasing... the ozone layer outside the Polar regions is projected to recover to its pre-1980 levels some time before the middle of this century... In contrast, the springtime ozone hole over the Antarctic is expected to recover much later."[63] http://en.wikipedia.org/wiki/Ozone_depletion
If energy can't be created or destroyed, how can it be here? Existence and creation are different things, but how can it be here if it couldn't be created? There are basically two options: 1) The total energy is constant, has always existed and will continue to exist forever (for a suitable definition of "always" and "forever", which might, as far as we know, include that time started after or with the big bang). 2) Energy is created in some process that we aren't sure about yet. One candidate for such a process that may violate energy conservation is cosmic expansion of a space with a positive vacuum energy. If the cosmological constant is indeed constant, but space is expanding, this implies that vacuum energy is actually being created. It's also possible that the total amount of energy was simply created at or near the big bang through some physical process we have no access to.
I thought electrons didn't actually orbit the nucleus, but this SciAm article talks about possible relativistic effects in large atoms from their "orbits". Please help me understand. My previous impression was that electrons sat in a "probability cloud" around the nucleus of the atom, and they didn't actually orbit the atom a la planets, as the old simplified models talked about. In the latest Scientific American article titled "Cracks in the Periodic Table" (Edit: the article is about the properties of synthesized superheavy atoms), there is a paragraph in the intro that says, "But as the atomic numbers -- the number of protons in a nucleus -- reached higher, some of the added elements no longer behaved the way the periodic law requires; that is, their chemical interactions, such as the types of bonds they form with other atoms, did not resemble those of other elements in the same column of the table. The reason is that some of the electrons orbiting the heaviest nuclei reach speeds that are a substantial fraction of the speed of light. They become, in physics parlance, "relativistic," causing the atoms' behavior to differ from what is expected from their position in the table." So, can someone help me understand exactly what orbiting means when it comes to electrons, and why there is a speed involved if it's a probability cloud? And if there is some sort of orbit involved, what path do the electrons follow, and how does the path twist into an orbital shape? Ah yes! That's the problem with describing them as a 'probability cloud'. See, if it was a cloud of electron density, one might wonder what quantum mechanics is for, since classical EM has no problems modeling a 'charge cloud', so to speak. But that's not the whole story. The probability of where the electrons are behaves in a "wave-like" manner. The 'cloud' doesn't change its position, but it still has kinetic energy; it's behaving much like a standing wave. The more tightly concentrated the cloud is, the higher the kinetic energy, and the more spread out it is, the lower the kinetic energy. It's very much at odds with classical physics - the electrons are moving, and yet the 'cloud', the probability of where they're likely to be (which is all you can say about their position), doesn't move. Not while the atom/molecule is in its ground state, anyway. Quantum particles don't follow classical trajectories. In fact, when you have a 'node' in an orbital (again, analogous to nodes in standing waves), that's an area where there's exactly zero probability density of finding the electron. They can get from one point to another without necessarily having to pass through intermediate points. Anyway - in heavier elements, the attraction from the nucleus is stronger, since it has a larger positive charge. The innermost electrons therefore get probability clouds more narrowly centered on the nucleus of the atom. But in response, they also get that much more kinetic energy. And they end up with relativistic effects on their momentum, because they have 'velocities' that start to get up to a significant fraction of light speed. (You normally only talk in momentum terms here, because 'velocity' is not a well-defined concept for something that doesn't follow a trajectory!) So relativistic effects become increasingly important for heavier elements. (It's what gives gold its yellow color and what allows your car to start, among other things)
What defines measurement which affects quantum outcomes? Or what is wrong with my understanding? It's my understanding that when one "measures" results in a quantum experiment, it will affect the results. That is, if I fire buckyballs through some slots, they will take every possible path and these paths will interfere and create an interference pattern. I may be mistaken, but I understand that measuring the buckyball impacts will affect the experiment's outcome. This must be my misunderstanding, as eyeballs and testing equipment (amplifiers) should not affect the interference pattern. But the original experiment consisting of a single buckyball resulted in a well-defined path, or so I thought? My base question, besides how my understanding is flawed, is what defines the measurement that will affect this outcome? Perception? Sensation? I'm very confused. Help! You are misunderstanding what it means to observe/measure in quantum physics. This has been addressed before (try this search too). It's worth checking out previous threads.
What happens to the light that strikes a solar cell but is not converted into electricity? If it's absorbed, it's turned into heat [just like any other light that is absorbed]; if not, it's sent packing back into space or wherever. Ironically, heating up a solar cell makes it less efficient, so the ideal solar cell is highly reflective at the wavelengths it can't convert.
What exactly are the new states of matter? We all know the big three, but now there's Bose-Einstein condensates and dropletons? I have read a few articles on them and their discovery but I still don't quite get what they are. Since this is a physics question, I have to say first: no one REALLY knows what they are. But in terms of my knowledge I believe I might be able to help on the BEC side. In a Bose-Einstein condensate, certain materials cooled to extremely low temperatures lose the energy in their atoms/molecules, and the matter no longer behaves as fermionic matter, which follows Fermi-Dirac statistics, obeys the Pauli exclusion principle, and has half-integer spins. Instead it now behaves as matter made of bosons, which do not obey the Pauli exclusion principle and have integer spins. One of the main properties that characterizes this would be no viscosity, making them as slippery as possible. Because the bosons that comprise this matter sit in the lowest quantum state, it additionally can behave strangely and therefore also represents what is called a macroscopic quantum phenomenon. Of course there are more technicalities and mathematics involved, but that's the basics (as far as I know). I'm sure someone with more knowledge could explain it better.
What is mild hyperexpansion of the lungs? I had chest X-ray done a while ago, and while flipping through my records, I noticed that the doctor had left a note "Lungs are mildly hyperexpanded..." What is it? You need to ask your doctor. We can't interpret exactly what he meant, and anyway the internet is a bad place to be looking for medical advice.
Does the heat index actually correlate well to a specific dry air temperature? With the heat rising in the northeast U.S., the topic of heat index keeps popping up on my radar. What I'm trying to figure out is whether a heat index of 110 deg F is actually comparable to a dry air temperature of 110 deg F. I'm basing this off of experiences I've personally had: living in NJ when the heat index was 105 deg F vs. an air temperature of 110 deg F I've experienced in Las Vegas. I felt significantly warmer in NJ than I did in Vegas, despite the heat index being lower in NJ. From what I've read, the correlation is based on relative humidity and the body's ability to reject heat to the atmosphere. Obviously perspiration is the primary method of heat removal, and it is inhibited when the ambient humidity increases. The amount of heat removal is apparently directly related to how hot we personally feel. I still don't understand how this determination is made, since I feel much warmer in a humid environment than in a warmer, drier one. OR, am I confusing the discomfort of sweat with heat, making this question useless?

The calculation is a curve fit to a specific set of data from the 1970s, but each individual experiences heat differently. The data behind the classic heat index assumed a 5'7" man weighing 147 pounds, wearing long pants and a short-sleeved shirt, walking 3 mph in the shade with a slight breeze. Your results may vary.
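For the curious, the curve fit the US National Weather Service uses is the Rothfusz (1990) regression, which is only intended to apply when the result comes out above roughly 80 °F. A quick sketch:

```python
def heat_index_f(T, R):
    """Rothfusz (1990) heat-index regression used by the US NWS.
    T: air temperature in deg F; R: relative humidity in percent.
    Only meaningful when the result is above ~80 F."""
    return (-42.379 + 2.04901523 * T + 10.14333127 * R
            - 0.22475541 * T * R - 6.83783e-3 * T**2
            - 5.481717e-2 * R**2 + 1.22874e-3 * T**2 * R
            + 8.5282e-4 * T * R**2 - 1.99e-6 * T**2 * R**2)

print(round(heat_index_f(90, 70)))   # ~106: muggy 90 F "feels like" ~106 F
print(round(heat_index_f(110, 10)))  # ~105: dry 110 F "feels like" ~105 F
```

Interestingly, a dry 110 °F day at about 10% humidity comes out near a heat index of 105 °F, so the formula itself says the two situations in the question should feel roughly the same; the difference you perceived is exactly the individual variation the fit can't capture.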
For something as medically useful and clinically important as lithium, how do we not yet know its mechanism of action?

Most drugs, especially those as old as lithium, were discovered serendipitously. In 1949, a researcher named John Cade wanted to see if any chemicals in urine from manic patients were toxic, so he dosed guinea pigs with urine samples from patients with mania. To solubilize uric acid crystals as a control, he used a lithium salt. He noticed it had a profoundly sedative effect. Initially he tried to chase down various chemicals causing the effect, before he realized it was the lithium itself acting as a tranquilizer. This started lithium's long history of use in the clinic and in many patent medicines, and it's one of the first drugs to have been used to treat psychiatric disorders. It was even in the original formulation of 7-Up, called "Bib-Label Lithiated Lemon-Lime Soda". Unfortunately, lithium has a low therapeutic index (a little is good, a little more is toxic), so use as a patent medicine or in drinks caused many cases of chronic lithium toxicity, presenting as tremor and mild confusion in most cases. Now its use is much more restricted and closely monitored. However, to this day it remains one of the standbys for treatment of mania, especially in bipolar disorder.

But none of this has answered your question, which was why we do not know how it works. The answer is relatively simple: we don't know how many things work, including the brain itself. Most treatments are evaluated for safety and efficacy, and a mechanism of action is not required for FDA approval (though this is changing as pharma increasingly demands one). Add to this the fact that lithium is mind-bendingly simple - it's literally just the ionic form of an element. You can find its salts in rocks. If I had to give you a complete list of what sodium, a closely related ion, does in our bodies, you and I would both tire of the endeavor long before we covered every topic in depth. And that relationship could be key, because though lithium is present in trace amounts in living things, it is not required, and you do quite well without it. Most theories on lithium's mechanism of action that I find plausible suggest subtle modifications of sodium ion channels in the brain. But this could be way off. I've been wrong before.
Why aren't siblings of the same gender genetic twins? Ignoring random mutation of gametes, why don't brothers with the same parents have the same genome? The way I understand it, each parent contributes half of the genetic code, in the form of half of the 46 chromosomes. If a specific set of chromosomes (e.g. the odd-numbered from dad, the even-numbered from mom) were in each gamete, copied from the parent's genome, the resulting offspring would be identical. If each ovum had a different set of chromosomes, that's a huge number of possible combinations (2^23, to be exact, for humans), and the fertilizing sperm would have to match up identically, otherwise bad things would probably happen. Not saying that they don't, but those bad things would be far more common. So, obviously I'm missing something, because my brother is not my twin genetically, but he managed to be granted a relatively normal set of chromosomes, just like me.

I think the main thing you are missing is the fact that humans are diploid. This means that we have two very similar copies of our DNA that together make up our entire genome. Each of the 46 chromosomes has a counterpart, making 23 pairs of closely matching chromosomes. When a gamete is formed, one chromosome from each of these pairs is sorted into the gamete, and only one of these pairs (the sex chromosomes, X and Y) determines sex. Thinking of it as 'mom gave me the odd ones and dad the even' is not really correct, and it may help to look at what's called a karyotype, which is just a visualization of the chromosomes. There are only 22 non-sex chromosome types, and every normal human has two of each. You are actually getting one full copy from your father and one full copy from your mother. This means that you are getting two copies of every gene, one from each parent, and this is a good thing, because one copy might have a deleterious mutation that could harm fitness. These non-sex chromosomes are what determine morphological features - what you look like: hair color, eye color, height, etc. There is very little information (comparatively) stored on the sex chromosomes (the Y especially is very small), which means that someone's sex is not closely linked to very many traits (hence, for the most part, you and your brother are different because you got different non-sex chromosomes from each parent). Some traits that are found on the sex chromosomes are sex-linked, like one particular form of red-green color blindness. And this is really just the beginning, because not only does one chromosome from each of the 23 pairs get randomly put into an egg or a sperm, but genes can also swap places with their counterparts on the paired chromosome (recombination).
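To put numbers on the independent-assortment part alone (deliberately ignoring recombination, which the last sentence above notes makes the real figure vastly larger):

```python
# Back-of-envelope: genetically distinct gametes from independent
# assortment alone, ignoring crossing over entirely.
pairs = 23
gametes_per_parent = 2 ** pairs                   # one pick per chromosome pair
offspring_combinations = gametes_per_parent ** 2  # one gamete from each parent

print(f"{gametes_per_parent:,} possible gametes per parent")           # 8,388,608
print(f"{offspring_combinations:,} possible offspring combinations")   # ~7.0e13
```

So even before recombination, the odds of two siblings drawing the same deal twice are about 1 in 70 trillion.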
If a spaceship is in the dead of space, unaffected by gravity, will firing the rockets increase the ship's speed indefinitely?

An object moving with constant proper acceleration will technically continue accelerating forever - the crew feel a steady push the whole time - but in any fixed frame its speed only asymptotically approaches c. In the launch frame, the speed after coordinate time t is v(t) = at / sqrt(1 + (at/c)^2).
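A quick sketch of that formula for a ship holding a comfortable 1 g of proper acceleration:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def coord_speed(a, t):
    """Launch-frame speed after coordinate time t (s) under
    constant proper acceleration a (m/s^2)."""
    return a * t / math.sqrt(1 + (a * t / C) ** 2)

g = 9.81
YEAR = 3.156e7  # seconds in a year
for years in (0.5, 1, 2, 5, 10):
    v = coord_speed(g, years * YEAR)
    print(f"{years:4} yr at 1 g -> v = {v / C:.4f} c")
```

After one year of 1 g thrust the ship is at roughly 0.72 c; after ten years it is above 0.99 c, still gaining speed, and still never reaching c.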
How does a ground/earth pin work on a plane? I was on an American Airlines flight and noticed that there were power plug holes on the backs of the seats, 110v 60hz, they had earthing ports, how did they work? The ground pin doesn't have to connect to the actual ground: what matters is that it connects the case of the electronic device to something else the user is touching, so the device and the user are at the same voltage. No voltage difference, no shock. In a car, for instance, the ground is the metal frame of the car. For an airplane, the ground pin presumably is in contact with the frame of the plane. The vehicle might have a very different voltage than the actual ground, but so long as all the objects the user can touch are at the same voltage, no shock can occur.