An old disk still capable of forming a planetary system
Publication date: 31 January 2013
Authors: E.A. Bergin, et al.
Copyright: Nature Publishing Group
From the masses of the planets orbiting the Sun, and the abundance of elements relative to hydrogen, it is estimated that when the Solar System formed, the circumstellar disk must have had a minimum mass of around 0.01 solar masses within about 100 astronomical units of the star. (One astronomical unit is the Earth-Sun distance.) The main constituent of the disk, gaseous molecular hydrogen, does not efficiently emit radiation from the disk mass reservoir, and so the most common measures of the disk mass are dust thermal emission and lines of gaseous carbon monoxide. Carbon monoxide emission generally indicates properties of the disk surface, and the conversion from dust emission to gas mass requires knowledge of the grain properties and the gas-to-dust mass ratio, which probably differ from their interstellar values. As a result, mass estimates vary by orders of magnitude, as exemplified by the relatively old (3-10 million years) star TW Hydrae, for which the range is 0.0005-0.06 solar masses. Here we report the detection of the fundamental rotational transition of hydrogen deuteride from the direction of TW Hydrae. Hydrogen deuteride is a good tracer of disk gas because it follows the distribution of molecular hydrogen and its emission is sensitive to the total mass. The detection of hydrogen deuteride, combined with existing observations and detailed models, implies a disk mass of more than 0.05 solar masses, which is enough to form a planetary system like our own.
An international research project, led by scientists from the University of Sheffield and with the participation of the Instituto de Astrofísica de Canarias (IAC), has discovered, using the HiPERCAM instrument of the Gran Telescopio Canarias at the Roque de los Muchachos Observatory (Garafía, La Palma), an ancient pulsating star in a double star system. The discovery, which is published in the journal Nature Astronomy, provides important information about how stars like our Sun evolve and eventually die. An international team of scientists, led by the University of Sheffield and with the participation of the Instituto de Astrofísica de Canarias, has discovered the first pulsating white dwarf star in an eclipsing binary system. This type of binary, or double star system, is made up of two stars orbiting each other and periodically passing in front of each other as seen from the Earth. One of the stars in the observed system is a white dwarf, the burnt-out core left behind when a star like the Sun dies. This particular white dwarf could provide key insights into the structure, evolution and death of these stars for the first time. Determining what a white dwarf is made of is not straightforward, because these objects pack about half of the mass of the Sun into something about the size of the Earth. This means that gravity is extremely strong on a white dwarf, around one million times stronger than here on Earth, so on the surface of a white dwarf an average person would weigh about 60,000,000 kg. "The gravity causes all of the heavy elements in the white dwarf to sink to the centre, leaving only the lightest elements at the surface, and so its true composition remains hidden underneath", explains Steven Parsons, researcher in the University of Sheffield's Department of Physics and Astronomy, who led the study. To probe the hidden structure of the white dwarf, the scientists used several complementary techniques. "This pulsating white dwarf we discovered is extremely important since we can use the binary motion and the eclipse to independently measure the mass and radius of this white dwarf, which helps us determine what it is made of", says Parsons. Most white dwarfs are thought to be made primarily of carbon and oxygen, but this particular white dwarf is made mostly of helium. The team think this is a result of its binary companion cutting off its evolution early, before it had a chance to fuse the helium into carbon and oxygen. "Even more interestingly, the two stars in this binary system have interacted with each other in the past, transferring material back and forth between them. We can see how this binary evolution has affected the internal structure of the white dwarf, something that we've not been able to do before for these kinds of binary systems", points out the astrophysicist. The scientists combined the study of eclipses with asteroseismology, a technique that involves measuring how fast sound waves travel through the white dwarf. To observe the rapid and subtle pulsations of the star, they used HiPERCAM, a revolutionary high-speed camera developed by a team led by Vikram S. Dhillon, astrophysicist from the University of Sheffield and affiliated researcher with the IAC.
This instrument, mounted on the 10.4 m Gran Telescopio Canarias (GTC) of the Roque de los Muchachos Observatory (Garafía, La Palma), can take one picture every millisecond, simultaneously in five different colours. "This exciting scientific discovery would not have been possible without the combination of the great light-gathering power of the GTC and the high-speed, multi-colour capability of HiPERCAM." The next step of the research is to continue observing the white dwarf to record as many pulsations as possible using HiPERCAM and the Hubble Space Telescope. Article: Steven G. Parsons et al. "A pulsating white dwarf in an eclipsing binary". Nature Astronomy, March 2020. DOI: 10.1038/s41550-020-1037-z
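As a rough order-of-magnitude check on the gravity figure quoted above, here is a minimal Python sketch of g = GM/R², using assumed round values (half a solar mass packed into an Earth-sized sphere). With these particular numbers the factor comes out at a couple of hundred thousand times Earth's gravity; more massive white dwarfs, which are also smaller, push the factor towards the million quoted above.

```python
# Surface gravity of a white dwarf: g = G * M / R^2.
# Assumed round values, for illustration only.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.99e30    # solar mass, kg
R_EARTH = 6.371e6  # Earth's radius, m
G_EARTH = 9.81     # Earth's surface gravity, m/s^2

g_wd = G * (0.5 * M_SUN) / R_EARTH**2
print(f"g ~ {g_wd:.2e} m/s^2, about {g_wd / G_EARTH:,.0f} times Earth's gravity")
```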
Ever dance with a comet in the pale sunlight? Today, after a 10-year, 3.5-billion-mile game of cosmic catch-up, the European Space Agency's Rosetta spacecraft has finally met Comet C-G and will waltz across the stars alongside it for the next 18 months, studying the comet's mysterious composition and gases, and will even drop a lander onto its pitted surface. After a flawlessly executed engine burn, Rosetta is now 62 miles (100 kilometers) from the comet's surface and will soon enter orbit around it. This historic event places a manmade robotic probe around a comet for the first time, with the pair now flying 251 million miles from Earth on their way toward the sun. Rosetta "gives you a front-seat, ride-along vision of what the comet's going to do and how a comet works," says the ESA's Rosetta project scientist Matt Taylor. "This is really a big leap forward. It's going to be an awesome ride. Stay tuned." Comet 67P/Churyumov-Gerasimenko, nicknamed Comet C-G, was discovered in 1969 by Ukrainian astronomers Klim Churyumov and Svetlana Gerasimenko and circles the sun once every 6.5 years. The probe, crafted with the cooperation of the United States and the European Space Agency, is reported to have cost $1.7 billion. Rosetta's small comet lander, the Philae, was named after a famous obelisk found near the Nile River that allowed for translation of the Egyptian language, much like the legendary Rosetta Stone. Rosetta has already captured some amazing photos of the duck-shaped comet during its rendezvous and determined that the surface is too warm to be totally covered in ice. As the pair nears the sun, a spectacular tail and halo will erupt in a cosmic lightshow to be documented in its entirety. Over the next six months, Rosetta will get down to the business of mapping and analyzing the comet's surface, dropping down to 20 miles or less from its crust. Rosetta was first launched back in 2004 and was required to complete a series of course corrections, burns and maneuvers in order to intersect Comet C-G. After buzzing by the asteroids Steins and Lutetia and cruising out toward Jupiter's orbit, the probe was put into hibernation in 2011 for nearly three years, then reawakened in January of this year for its big date with destiny. "We've arrived. Ten years we've been in the car waiting to get to scientific Disneyland, and we haven't even gotten out of the car yet and look at what's outside the window," said Mark McCaughrean, senior scientific adviser with the ESA's Directorate of Science and Robotic Exploration. "It's just astonishing."
Physicists love things that are simple. This may be one of the reasons that I think black holes are cool. Black holes form when you have something so dense that nothing can resist its own gravity: it collapses down, becoming smaller and smaller. Whatever formerly made up your object (usually, the remains of what made up a star) is crushed out of existence. It becomes infinitely compact, squeezed into an infinitely small space, such that you can say that whatever was there no longer exists. Black holes aren’t made of anything: they are just empty spacetime! Black holes are very simple because they are just vacuum. They are much simpler than tables, or mugs of coffee, or even spherical cows, which are all made up of things: molecules and atoms and other particles all wibbling about and interacting with each other. If you’re a fan of Game of Thrones, then you know the plot is rather complicated because there are a lot of characters. However, in a single glass of water there may be 10²⁵ molecules: imagine how involved things can be with that many things bouncing around, occasionally evaporating, or plotting to take over the Iron Throne and rust it to pieces! Even George R. R. Martin would struggle to kill off 10²⁵ characters. Black holes have no internal parts, they have no microstructure, they are just… nothing… (In case you’re the type of person to worry about such things, this might not quite be true in a quantum theory, but I’m just treating them classically here.) Since black holes aren’t made of anything, they don’t have a surface. There is no boundary, no crispy sugar shell, no transition from space to something else. This makes it difficult to really talk about the size of black holes: it is a question I often get asked when giving public talks. Black holes are really infinitely small if we just consider the point that everything collapsed to, but that’s not too useful. When we want to consider a size for a black hole, we normally use its event horizon. The event horizon is the point of no return. Once passed, the black hole’s gravity is inescapable; there’s no way out, even if you were able to travel at the speed of light (this is what makes them black holes). The event horizon separates the parts of the Universe where you can happily wander around from those where you’re trapped plunging towards the centre of the black hole. It is, therefore, a sensible measure of the extent of a black hole: it marks the region where the black hole’s gravity has absolute dominion (which is better than possessing the Iron Throne, and possibly even dragons). The size of the event horizon depends upon the mass of the black hole. More massive black holes have stronger gravity, so their event horizon extends further. You need to stay further away from bigger black holes! If we were to consider the simplest type of black hole, it’s relatively (pun intended) easy to work out where the event horizon is. The event horizon is a spherical surface, with radius $$r_s = \frac{2GM}{c^2}.$$ This is known as the Schwarzschild radius, as this type of black hole was first theorised by Karl Schwarzschild (who was a real hard-core physicist). In this formula, $M$ is the black hole’s mass (as it increases, so does the size of the event horizon); $G$ is Newton’s gravitational constant (it sets the strength of gravity); and $c$ is the speed of light (the same as in the infamous $E = mc^2$).
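If you would rather let a computer do the plugging-in, here is a minimal Python sketch of the formula above; the constants and masses are rough textbook values.

```python
# Schwarzschild radius: r_s = 2 * G * M / c^2.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8    # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Radius of the event horizon, in metres."""
    return 2 * G * mass_kg / c**2

for name, mass in [("Earth", 5.97e24),
                   ("Sun", 1.99e30),
                   ("Galactic centre black hole", 4e6 * 1.99e30)]:
    print(f"{name}: r_s = {schwarzschild_radius(mass):.3g} m")
```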
You can plug in some numbers to this formula (if you're anything like me, two or three times before getting the correct answer), to find out how big a black hole is (or equivalently, how much you need to squeeze something before it will collapse to a black hole). What I find shocking is that black holes are tiny! I mean it, they’re really small. The Earth has a Schwarzschild radius of 9 mm, which means you could easily lose it down the back of the sofa. Until it promptly swallowed your sofa, of course. Stellar-mass black holes are just a few kilometres across. For comparison, the Sun has a radius of about 700,000 km. For the massive black hole at the centre of our Galaxy, it is 10¹⁰ m, which does sound a lot until you realise that it’s less than 10% of Earth’s orbital radius, and it’s about four million solar masses squeezed into that space. The event horizon changes shape if the black hole has angular momentum (if it is spinning). In this case, you can get closer in, but the position of the horizon doesn’t change much. In the most extreme case, the event horizon is at a radius of $$r_g = \frac{GM}{c^2}.$$ Relativists like this formula, since it’s even simpler than the Schwarzschild radius (we don’t even have to remember the factor of two), and it’s often called the gravitational radius. It sets the scale in relativity problems, so computer simulations often use it as a unit instead of metres or light-years or parsecs or any of the other units astronomy students despair over learning. We’ve now figured out a sensible means of defining the size of a black hole: we can use the event horizon (which separates the part of the Universe where you can escape from the black hole, from that where there is no escape), and the size of this is around the gravitational radius $r_g$. An interesting consequence of this (well, something I think is interesting) is to consider the effective density of a black hole. Density is how much mass you can fit into a given space. In our case, we’ll consider the mass of the black hole and the volume of its event horizon. This would be something like $$\rho = \frac{M}{\tfrac{4}{3}\pi r_s^3} = \frac{3c^6}{32\pi G^3 M^2},$$ where I’ve used $\rho$ for density, and you shouldn’t worry about the factors of $\pi$ or $G$ or $c$: I’ve just put them in case you were curious. The interesting result is that the density decreases as the mass increases. More massive black holes are less dense! In fact, the most massive black holes, about a billion times the mass of our Sun, are less dense than water. They would float if you could find a big enough bath tub, and could somehow fill it without the water collapsing down to a black hole under its own weight… In general, it probably makes a lot more sense (and doesn’t break the laws of physics) if you stick with a rubber duck, rather than a black hole, as a bath-time toy. In conclusion, black holes might be smaller (and less dense) than you’d expect. However, this doesn’t mean that they’re not very dangerous. As Tyrion Lannister has shown, it doesn’t pay to judge someone by their size alone.
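For the curious, the same sketch extends to the density claim; again these are rough values, just enough to see the 1/M² scaling in action.

```python
import math

G = 6.674e-11    # m^3 kg^-1 s^-2
c = 2.998e8      # m/s
M_SUN = 1.99e30  # kg

def effective_density(mass_kg):
    """Mass divided by the volume enclosed by the Schwarzschild radius (kg/m^3)."""
    r_s = 2 * G * mass_kg / c**2
    return mass_kg / ((4 / 3) * math.pi * r_s**3)

# Water is ~1,000 kg/m^3; a billion-solar-mass black hole comes in well below it.
for suns in (10, 4e6, 1e9):
    print(f"{suns:.0e} solar masses: {effective_density(suns * M_SUN):.3g} kg/m^3")
```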
NASA scientists have confirmed the discovery of the first Earth-size planet orbiting at the right distance from its star to sit within the habitable zone. NASA’s Transiting Exoplanet Survey Satellite (TESS) found that the conditions on the exoplanet, named TOI 700 d, may just allow the presence of liquid water on the surface. According to NASA, TOI 700 is a small, cool ‘M dwarf’ star located just over 100 light-years away in the southern constellation Dorado. It has about 40 percent of the Sun’s mass and about half of its surface temperature. It was originally misclassified as being more similar to the Sun, which meant the planets would have been hotter and larger than they really are. The error was identified with the help of several researchers and a high school student, Alton Spencer, working with the TESS team. “When we corrected the star’s parameters, the sizes of its planets dropped, and we realized the outermost one was about the size of Earth and in the habitable zone,” said Emily Gilbert, a graduate student at the University of Chicago. “Additionally, in 11 months of data we saw no flares from the star, which improves the chances TOI 700 d is habitable and makes it easier to model its atmospheric and surface conditions,” she added. The findings were presented at the 235th meeting of the American Astronomical Society in Honolulu and three papers were submitted to scientific journals. TOI 700 d is the outermost planet of the system; it completes its orbit every 37 days and receives, from its star, 86 per cent of the energy that the Sun provides to Earth. It is believed that all the planets of the system are tidally locked to their star, meaning they rotate once per orbit. This phenomenon keeps one side constantly bathed in daylight. Joseph Rodriguez, an astronomer at the Center for Astrophysics, said that it’s a great addition to the legacy of a mission that helped confirm two of the TRAPPIST-1 planets and identify five more. “Given the impact of this discovery - that it is TESS’s first habitable-zone Earth-sized planet - we really wanted our understanding of this system to be as concrete as possible,” he said.
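As an illustration of where the 86 per cent figure comes from, here is a minimal Python sketch using the inverse-square law. The stellar luminosity and orbital distance below are approximate published values for TOI 700 and its planet d, assumed here only for the sake of the example:

```python
# Rough check of the quoted 86% insolation figure.
L_STAR = 0.023   # luminosity of TOI 700, in solar luminosities (approximate)
A_ORBIT = 0.163  # semi-major axis of TOI 700 d, in AU (approximate)

# Flux falls off as 1/d^2, so relative to Earth's insolation:
relative_insolation = L_STAR / A_ORBIT**2
print(f"TOI 700 d receives ~{relative_insolation:.0%} of Earth's insolation")
# Prints ~87%, close to the quoted 86 per cent.
```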
An aging but dependable NASA probe has tweaked its orbit around Mars to seek out warmer ground on the distant, red world. NASA reckons the infrared cameras aboard the Mars Odyssey spacecraft will work better over regions exposed to more sunlight, so it shifted the probe's orbit in order to steer its camera eyes away from the evening shade. The move took nearly eight months, but Odyssey's infrared camera now peers onto the planet during the Martian afternoon, earlier than it has during most of its seven-year mission. "The orbiter is now overhead at about 3:45 in the afternoon instead of 5 p.m., so the ground is warmer and there is more thermal energy for the camera's infrared sensors to detect," said Jeffrey Plaut, planetary scientist at NASA's Jet Propulsion Laboratory (JPL) in Pasadena, Calif., in a statement. The orbital switch increases the sensitivity of the camera, called the Thermal Emission Imaging System, to allow for better mapping of Martian minerals. It now senses infrared radiation from a warmer surface that is receiving more sunlight. A warmer Martian target Temperatures on Mars range from as low as minus 195 degrees Fahrenheit (-125 degrees Celsius) near the poles during the winter to 70 degrees Fahrenheit (20 degrees Celsius) at noon near the equator. The average Martian temperature is about minus 80 degrees Fahrenheit (-60 degrees Celsius). On Sept. 30, 2008, Odyssey fired its thrusters for six minutes, entering into a "drift" pattern that gradually changed its orbit and the time of day during which it was over the planet. On June 9 of this year, the spacecraft fired the thrusters again, this time for 5.5 minutes. The burn ended the drift pattern and locked the spacecraft into the new orbit, always above the planet in the mid-afternoon. "The maneuver went exactly as planned," said JPL's Gaylon McSmith, Odyssey mission manager. Afternoons on Mars This is not the first time the orbiter will see a sunnier side of Mars. Back in 2002, early in Odyssey's mission, it flew mid-afternoon passes over the planet and made important discoveries of minerals, including salt deposits that were apparently left behind by large bodies of water when they evaporated. "The new orbit means we can now get the type of high-quality data for the rest of Mars that we got for 10 or 20 percent of the planet during those early six months," said Philip Christensen, an Arizona State University researcher who is principal investigator for Odyssey's Thermal Emission Imaging System. The trade-off is that the new orbit will put one of the spacecraft's other instruments out of commission. The gamma-ray detector, one of a suite of three instruments that sense short light waves and neutrons, must be shut down or risk overheating. In 2002, the suite, called the Gamma Ray Spectrometer, made a dramatic discovery of large areas of water-ice near the Martian surface. The gamma-ray detector has also mapped the global deposits of many elements, such as iron, silicon and potassium. NASA launched the Mars Odyssey orbiter in 2001. The solar-powered spacecraft arrived at the red planet about a year later to begin its now seven-year Mars observation campaign.
A puzzling alien planet is the closest thing to an Earth twin in size and composition known beyond our solar system, though it's far too hot to support life, scientists say. The exoplanet Kepler-78b, whose supertight orbit baffles astronomers, is just 20 percent wider and about 80 percent more massive than Earth, with a density nearly identical to that of our planet, two research teams report in separate papers published online Wednesday in the journal Nature. "This is the planet that, in many respects, is the most like Earth that's been discovered outside our solar system," said Andrew Howard of the University of Hawaii at Manoa's Institute for Astronomy and lead author of one of the studies. "It has approximately the same size. It has the same density, which means it's made out of the same stuff as Earth, in all likelihood." Studying a lava world Kepler-78b, whose discovery was announced last month, orbits a sunlike star in the constellation Cygnus, about 400 light-years from Earth. The alien world circles 900,000 miles (1.5 million kilometers) or so from its parent star — just 1 percent of the distance between Earth and the sun — and completes one lap every 8.5 hours. Surface temperatures on Kepler-78b likely top 3,680 degrees Fahrenheit (2,000 degrees Celsius), Howard said. The planet was found by NASA's prolific Kepler space telescope, which has spotted nearly 3,600 potential exoplanets since its March 2009 launch. (Kepler was hobbled in May of this year when the second of its orientation-maintaining reaction wheels failed, but scientists are still sifting through the instrument's huge databases.) Kepler flagged alien worlds by noting the telltale brightness dips they caused when passing in front of, or transiting, their parent stars from the spacecraft's perspective. Kepler's measurements allow researchers to estimate an exoplanet's size but not its mass, meaning that other strategies are required to get a handle on a world's density and composition. One such method is the radial velocity technique, which measures the wobble in a host star's light induced by the gravitational pull of an orbiting planet. Both new studies employed this method to investigate the Kepler-78 system, with Howard's group using the HIRES spectrograph at Hawaii's Keck Observatory and another team, led by Francesco Pepe of the University of Geneva, relying on the new HARPS-N instrument on the Telescopio Nazionale Galileo in the Canary Islands. The two teams came to very similar conclusions. Howard's group determined Kepler-78b's mass to be 1.69 times that of Earth, while Pepe's team calculated it to be 1.86 times Earth's. The results of the Pepe-led study suggest a density of 5.57 grams per cubic centimeter for Kepler-78b, while those of Howard's team imply a density of 5.3 grams per cubic cm. These numbers agree to within the error range independently estimated by both teams, suggesting that they are quite accurate, Howard said. "The fact that we agree to within our errors — in science, that's basically as good as you can do," Howard told Space.com. Earth's density is about 5.5 grams per cubic cm, so Kepler-78b probably has an Earth-like composition, complete with a rocky interior and an iron core, both studies suggest. A mysterious origin The extremely tight orbit of Kepler-78b puzzles astronomers.
According to prevailing theory, the alien world shouldn't exist where it does, because its host star was significantly larger when the planet was taking shape. "It couldn't have formed in place because you can't form a planet inside a star," Dimitar Sasselov of the Harvard-Smithsonian Center for Astrophysics and a member of the Pepe-led team said in a statement. "It couldn't have formed further out and migrated inward, because it would have migrated all the way into the star. This planet is an enigma." What is clear, however, is that Kepler-78b's days are numbered. The planet will continue circling lower and lower until the immense gravity of its host star tears it apart, likely within 3 billion years or so. "Kepler-78b is going to end up in the star very soon, astronomically speaking," Sasselov said. The search for another Earth The hellishly hot Kepler-78b is not a good place to hunt for alien life. But the determination of its density marks a milestone in the ongoing search for a true "Earth twin" — a planet very much like Earth in size, composition and surface temperature. "The existence of Kepler-78b shows that, at the very least, extrasolar planets of Earth-like composition are not rare," astronomer Drake Deming of the University of Maryland wrote in an accompanying commentary article Wednesday in the same issue of Nature. Deming points to NASA's upcoming Transiting Exoplanet Survey Satellite mission, or TESS, which is slated to launch in 2017 to hunt for transiting planets around nearby stars (as contrasted with Kepler, whose gaze was more distant). "By focusing particularly on small stars cooler than the sun, TESS should find exo-Earths whose mass can be measured by trading the close-in orbit of Kepler-78b for more distant orbits around low-mass stars, approaching orbital zones where life is possible," Deming writes. "That trade-off probably cannot be pushed to the point of measuring an Earth twin orbiting once per year around a sun twin, but it will allow future scientific teams to probe habitable planets orbiting small stars."
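For readers who want to check the arithmetic, here is a minimal Python sketch converting the reported Earth-relative masses and the roughly 20 per cent larger radius into bulk densities. The published figures used each team's own radius estimate, so this only approximately recovers them:

```python
# Bulk density from Earth-relative mass and radius.
# Earth's mean density is taken as 5.51 g/cm^3.
EARTH_DENSITY = 5.51  # g/cm^3

def bulk_density(mass_earths, radius_earths):
    """Density of a planet given mass and radius in Earth units."""
    return EARTH_DENSITY * mass_earths / radius_earths**3

# Kepler-78b: ~20% wider than Earth; two mass estimates as reported above.
for team, mass in [("Howard et al.", 1.69), ("Pepe et al.", 1.86)]:
    print(f"{team}: ~{bulk_density(mass, 1.20):.1f} g/cm^3")
```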
This happens at around the time when the Moon's orbit carries it around the far side of the Earth as seen from the Sun, at around the same time that it passes full moon. The distance between the Earth and Moon will be 0.0027 AU (396,000 km). The exact positions of the Sun and Moon in the sky were given here in a table of right ascension, declination, constellation and angular size (table omitted); the coordinates were given in J2000.0. All times shown in EDT. The circumstances of this event were computed using the DE405 planetary ephemeris published by the Jet Propulsion Laboratory (JPL). This event was automatically generated by searching the ephemeris for planetary alignments which are of interest to amateur astronomers, and the text above was generated based on an estimate of your location.
14 Aug 2046 – The Moon at aphelion
16 Aug 2046 – Full Moon
19 Aug 2046 – The Moon at apogee
24 Aug 2046 – Moon at Last Quarter
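The page credits the JPL DE405 ephemeris for these numbers. For readers who want to reproduce this kind of calculation themselves, here is a minimal sketch using the open-source Skyfield library, which ships with DE405's successor DE421. This illustrates the method; it is not the page's own code, and the date used is the apogee date from the list above:

```python
from skyfield.api import load

ts = load.timescale()
eph = load('de421.bsp')          # JPL ephemeris (successor to DE405)
earth, moon = eph['earth'], eph['moon']

t = ts.utc(2046, 8, 19)          # Moon at apogee, per the list above
distance = earth.at(t).observe(moon).distance()
print(f"Earth-Moon distance: {distance.au:.4f} AU ({distance.km:.0f} km)")
```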
Earth 2.0? Artist's rendition of Gliese 581 c Canada's space telescope has spent the past two weeks straining for a glimpse of what an elite group of European astronomers claim is the first habitable planet discovered outside this solar system. The suitcase-sized Canadian satellite, called MOST, is the only instrument capable of quickly verifying the historic claim. The extrasolar planet -- or "exoplanet" -- is named Gliese 581 c, and is thus far the only other place in the universe believed capable of supporting liquid water, and therefore extraterrestrial life. It was discovered using the HARPS instrument at the European Southern Observatory's 3.6-meter telescope in La Silla, Chile. But even before the April 25 announcement splashed across the pages of newspapers, the European astronomers had quietly contacted the MOST mission control team at the University of British Columbia. The Europeans sought verification of their ground-based observation, because they hadn't actually "seen" the new exoplanet. Rather, they deduced its presence using the radial velocity method, in which the presence of a planet is inferred from how its mass causes its host star to wobble. "The radial velocity signal is quite low, and there is a lot of scatter," said Jaymie Mark Matthews, principal investigator in charge of the MOST mission. "There is justifiable skepticism within the exoplanet community about whether this planet really exists." To remove all doubt, astronomers need to catch a glimpse of the planet itself. MOST circles the Earth in a polar orbit, its 15-centimetre telescope unfettered by our murky atmosphere. Launched by the Canadian Space Agency in 2003 for a mere $10 million, MOST is the only space telescope sufficiently agile to re-point on relatively short notice. Rather than watch for wobbles, MOST detects what astronomers call "transits." Just as a mosquito passing in front of a light bulb blocks the light ever so slightly, an exoplanet passing between its star and Earth dims the amount of light reaching MOST. By measuring the minuscule reduction in light, the MOST team can estimate the size of the transiting object. "If we observe a transit, that will take away all ambiguity," Matthews said. "We'll know we're looking at a planet." And MOST will secure its role in an international adventure the likes of which this world has not witnessed since the era of Drake and Magellan -- a race to determine whether or not we are alone in the universe. To glimpse a passing planet Matthews figures the odds of MOST catching a fleeting glimpse of Gliese 581 c at about one in 30. That's because for MOST to observe a transit, the exoplanet's orbit must pass directly between its host star and Earth. Astronomers do not know the inclination of the Gliese 581 c orbit, but assume that orbital planes are inclined randomly throughout the universe. Because Gliese 581 c orbits much closer to its dim star than Earth does to the Sun, the geometry improves MOST's odds. MOST has already watched one potential transit period, and will observe several more before returning to its day job, stellar seismology. "We had our first chance earlier this week," Matthews told The Tyee. "We'll have another intense stakeout in less than two weeks." If MOST does catch a transit, astronomers will be able to combine MOST's data on the planet's size and speed with HARPS's observations of mass. "We would be the first to measure the density of an Earth-like planet. No one's ever been able to do that," Matthews said.
"We would be able to tell whether it was an ocean, or rocky." Even more important to the scientists, MOST will return baseline information about the star itself. Gliese 581 a is a red dwarf, a type of star that tends to be more turbulent than stars like the Sun. MOST will likely determine whether or not that "variability" is sufficient to mislead the HARP instrument. This is precisely what MOST -- an acronym for Microvariability and Oscillations of STars -- was built to study. Likewise, MOST could determine if the Gliese 581 system's habitable zone is actually habitable. If it splashes periodic waves of intense heat toward its companion planets, "that might not be all that great an environment for life to gain a foothold," Matthews said. "On the other hand, the star is remarkably quiescent for a red dwarf. That would be a good sign that that 581c could be a solid world that could support liquid water on the surface." "We're lucky we live near such a boring star," Matthews added. "If the Sun's energy varied the way some red dwarfs do, we wouldn't be here. Our climate would be changing even more dramatically than the global warming everyone is currently concerned about." Habitable, but not just like home What makes the Gliese 581 c discovery so remarkable is that of the 233 planets discovered thus far, it is the only one that does not suffer from what some astronomers call "the Goldilocks paradox." All the other exoplanets discovered to date are either too hot (because they orbit too close to their stars) or too cold (they orbit too far out) for water to remain a liquid, which is regarded as a precondition for the existence of life as we know it. Gliese 581 c is just right. Researchers predict its average temperature to be somewhere between 0 and 40 degrees Celsius. (The Earth's mean surface temperature is currently about 15 degrees Celsius, and is projected to increase between one and six degrees by 2100.) But even if it does prove to be a wet, rocky planet, Gliese 581 c probably wouldn't feel much like home. It's about 50 per cent larger than Earth. It's also five times as massive, so its gravitational pull would be greater. And because it completes a full orbit of its dwarf star every 13 days, birthdays would be celebrated more or less every other weekend. Gliese 581 c orbits 14 times closer to its host star than Earth orbits the Sun. To anyone standing on the surface of Gliese 581 c, the star would appear two or three times larger that than the sun does from Earth. "Red dwarfs are the Honda Civics of the Universe," Matthews said. "Being relatively dim, they don't use up their fuel at such a rapid rate. So they last a long time." If there are creatures living on Gliese 581 c, they probably see things differently than we do. Our eyes have evolved to be most sensitive to the strongest light frequencies emitted by our Sun. The Gliesians, noted Matthews, "would likely possess vision skewed to red or infrared, because that's what their star emits." The neighbourhood is bit different, too. Whereas Earth's solar system has eight full-patch planets -- plus Pluto, Xena and an icy crew of hang-arounds -- only three have been identified in the Gliese 581 system: The Neptune-sized 581b (15 times the mass of Earth) that orbits in only 5.4 days, the Earth-like 581c that orbits in 13 days, and 581d (eight times the mass of Earth) that orbits in 84 days. 
Modern-day Magellans If verified as a habitable planet, the discovery of Gliese 581 c will boost the reputation of the team led by Swiss astronomers Michel Mayor and Didier Queloz, who in 1995 became the first earthlings to identify an exoplanet. (See sidebar.) Gliese 581 c was identified with the HARPS (High Accuracy Radial velocity Planet Searcher) instrument, a precise spectrograph operated by the European Organisation for Astronomical Research in the Southern Hemisphere. "HARPS is a unique planet hunting machine," said Mayor, who serves as principal investigator. "We can say without doubt that HARPS has been very successful: Of the 13 known planets with a mass below 20 Earth masses, 11 were discovered with HARPS." "The discovery of Gliese 581 c is an important stepping stone," said UBC's Matthews. "Even if it turns out that this isn't a planet, we anticipate that HARPS will uncover more Super-Earths. Once we've got 15 or 20 to observe, the odds are good that one of them is going to transit." Matthews said MOST will likely release preliminary findings related to Gliese 581 c sometime next month. If Canada's diminutive space telescope observes a transit of Gliese 581 c, a next step would be to use a larger colour space telescope to determine whether there is water vapour in the exoplanet's atmosphere. (MOST is a black-and-white instrument.) The first identification of extrasolar water was recently deduced using infrared data from transits observed by the $3 billion Hubble Space Telescope. "This might or might not be it," Matthews said. "I have no doubt that within the next five to 10 years, we will find another Earth-like planet." Close, but still far away Gliese 581a is among the 100 closest stars to Earth. It's located only 20.5 light-years away in the constellation Libra ("the Scales"). "You could observe this star with an amateur telescope," Matthews said. "You could probably see it with a decent set of binoculars." Xavier Delfosse, a French member of the ESO team, released a statement declaring: "Because of its temperature and relative proximity, this planet will most probably be a very important target of the future space missions dedicated to the search for extra-terrestrial life." Delfosse added, "On the treasure map of the Universe, one would be tempted to mark this planet with an X." But getting there is another matter entirely. In order to visualize 20.5 light years, imagine a model in which the Sun is about the size of a cherry. The Earth would be a grain of sand, revolving around the cherry at a distance of one metre. Pluto would be 40 metres away. If the cherry is in Vancouver, the next closest star (Proxima Centauri) would be just south of Seattle. And Gliese 581 would be in San Francisco. These distances cripple fantasies of Star Trek–style travel. The fastest spacecraft ever launched left Earth last year. Travelling at 50,000 kilometres per hour, it will take nine and a half years to reach Pluto. That same spacecraft would require 350,000 years to reach Gliese 581 c. In order for that spacecraft to reach Gliese 581 c this decade, it would have had to have left Earth about the same time pygmy-sized hominids first stood erect. Radio communication is more practical. The non-governmental Search for Extraterrestrial Intelligence (SETI) listened for signs of life in the Gliese 581 system in 1995 using the Parkes Radio Telescope in Australia, and again in 1997 using the Green Bank Radio Telescope in West Virginia. No signal was detected.
Senior astronomer Seth Shostak has announced that SETI will listen to the system again this summer when the new Allen Telescope Array begins operations. Just as intriguing is the question of whether Gliese 581 c has been listening to Earth. Radio signals travel at the speed of light. So Gliese 581 c would only now be receiving radio and television programming broadcast in late 1986. "Walk Like an Egyptian" was the Bangles' hit single. Knight Rider was in its fourth (and final) season, and Alf had just premiered. "Knight Rider," Matthews mused. "If there is intelligent life on Gliese 581 c, that probably explains why they haven't contacted us."
eso1151 — Science Release
A Black Hole's Dinner is Fast Approaching
VLT spots cloud being disrupted by black hole
14 December 2011
Astronomers using ESO’s Very Large Telescope have discovered a gas cloud with several times the mass of the Earth accelerating fast towards the black hole at the centre of the Milky Way. This is the first time ever that the approach of such a doomed cloud to a supermassive black hole has been observed. The results will be published in the 5 January 2012 issue of the journal Nature. During a 20-year programme using ESO telescopes to monitor the movement of stars around the supermassive black hole at the centre of our galaxy (eso0846), a team of astronomers led by Reinhard Genzel at the Max-Planck Institute for Extraterrestrial Physics (MPE) in Garching, Germany, has discovered a unique new object fast approaching the black hole. Over the last seven years, the speed of this object has nearly doubled, reaching more than 8 million km/h. It is on a very elongated orbit and in mid-2013 it will pass at a distance of only about 40 billion kilometres from the event horizon of the black hole, a distance of about 36 light-hours. This is an extremely close encounter with a supermassive black hole in astronomical terms. This object is much cooler than the surrounding stars (only about 280 degrees Celsius), and is composed mostly of hydrogen and helium. It is a dusty, ionised gas cloud with a mass roughly three times that of the Earth. The cloud is glowing under the strong ultraviolet radiation from the hot stars around it in the crowded heart of the Milky Way. The current density of the cloud is much higher than that of the hot gas surrounding the black hole. But as the cloud gets ever closer to the hungry beast, increasing external pressure will compress the cloud. At the same time the huge gravitational pull from the black hole, which has a mass four million times that of the Sun, will continue to accelerate the inward motion and stretch the cloud out along its orbit. “The idea of an astronaut close to a black hole being stretched out to resemble spaghetti is familiar from science fiction. But we can now see this happening for real to the newly discovered cloud. It is not going to survive the experience,” explains Stefan Gillessen (MPE), the lead author of the paper. The cloud’s edges are already starting to shred and disrupt, and it is expected to break up completely over the next few years. The astronomers can already see clear signs of increasing disruption of the cloud over the period between 2008 and 2011. The material is also expected to get much hotter as it nears the black hole in 2013, and it will probably start to give off X-rays. There is currently little material close to the black hole, so the newly arrived meal will be the dominant fuel for the black hole over the next few years. One explanation for the formation of the cloud is that its material may have come from nearby young massive stars that are rapidly losing mass due to strong stellar winds. Such stars literally blow their gas away. Colliding stellar winds from a known double star in orbit around the central black hole may have led to the formation of the cloud. “The next two years will be very interesting and should provide us with extremely valuable information on the behaviour of matter around such remarkable massive objects,” concludes Reinhard Genzel. The black hole at the centre of the Milky Way is formally known as Sgr A* (pronounced Sagittarius A star).
It is the closest supermassive black hole known by far and hence is the best place to study black holes in detail. The observations were made using the NACO infrared adaptive optics camera and the SINFONI infrared spectrograph, both attached to the ESO Very Large Telescope in Chile. The centre of the Milky Way lies behind thick dust clouds that scatter and absorb visible light, and must be observed at infrared wavelengths where the clouds are more transparent. A light-hour is the distance that light travels in one hour. It is a little more than the distance from the Sun to the planet Jupiter in the Solar System. For comparison, the distance between the Sun and the nearest star is more than four light-years. The cloud will pass the black hole at less than ten times the distance from the Sun to Neptune. This effect is well known from the physics of fluids and can be seen when, for example, pouring syrup into a glass of water. The flow of syrup downwards through the water will be disrupted and the droplet will break apart — effectively diluting the syrup in the water. More information: This research was presented in a paper “A gas cloud on its way towards the super-massive black hole in the Galactic Centre”, by S. Gillessen et al., to appear in the 5 January 2012 issue of the journal Nature. The team is composed of S. Gillessen (Max-Planck-Institut für extraterrestrische Physik [MPE], Germany), R. Genzel (MPE; Department of Physics, University of California [UC], USA), T. Fritz (MPE, Germany), E. Quataert (Department of Astronomy, UC, USA), C. Alig (Universitätssternwarte der Ludwig-Maximilians-Universität [LMU], Germany), A. Burkert (MPE; LMU), J. Cuadra (Departamento de Astronomía y Astrofísica, Pontificia Universidad Católica de Chile, Chile), F. Eisenhauer (MPE), O. Pfuhl (MPE), K. Dodds-Eden (MPE), C. Gammie (Center for Theoretical Astrophysics, University of Illinois, USA), T. Ott (MPE). ESO, the European Southern Observatory, is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive astronomical observatory. It is supported by 15 countries: Austria, Belgium, Brazil, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory, and two survey telescopes. VISTA works in the infrared and is the world’s largest survey telescope, and the VLT Survey Telescope is the largest telescope designed to exclusively survey the skies in visible light. ESO is the European partner of ALMA, a revolutionary astronomical telescope and the largest astronomical project in existence. ESO is currently planning a 40-metre-class European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.
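As a quick unit check on the pericentre figure in the release (the 40 billion km is itself rounded, so the result lands near, not exactly on, the quoted 36 light-hours):

```python
# Convert the quoted closest-approach distance into light-hours.
C_KM_PER_S = 299_792.458     # speed of light, km/s
pericentre_km = 40e9         # ~40 billion km, as quoted in the release

light_hours = pericentre_km / (C_KM_PER_S * 3600)
print(f"{light_hours:.1f} light-hours")  # ~37, close to the quoted ~36
```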
Edmond Halley's magnificent prediction November 8, 1656. English astronomer and mathematician Edmond Halley was born on this date near London. He became the first to calculate the orbit of a comet, arguably the most famous of all comets today, named Comet Halley in his honor. He was also friends with Isaac Newton and contributed to Newton’s development of the theory of gravity, which helped establish our modern era of science, in part by removing all doubt that we live on a planet orbiting around a sun. When Comet Halley last appeared in Earth’s skies in 1986, it was met in space by an international fleet of spacecraft. This famous comet will return again in 2061 on its 76-year journey around the sun. It’s famous in part because it tends to be a bright comet in Earth’s skies; at the 1986 return, many people saw it. Also, because of the length of the comet’s orbit – 76 years – many on Earth will see it again. But, in Edmond Halley’s time, people didn’t know that comets were like planets in being bound in orbit by the sun. They didn’t know that some comets, like Comet Halley, return over and over. Comets were thought to pass only once through our solar system. In the year 1704, Halley had become a professor of geometry at Oxford University. The following year, he published A Synopsis of the Astronomy of Comets. The book contains the parabolic orbits of 24 comets observed from 1337 to 1698. It’s also in this book that Halley remarks on three comets that appeared in 1531, 1607, and 1682. He used Isaac Newton’s theories of gravitation and planetary motions to compute the orbits of these comets, finding remarkable similarities in their orbits. Then Halley made a leap and made what was, at that time, a stunning prediction. He said these three comets must in fact be a single comet, which returns periodically every 76 years. He then predicted the comet would return, saying: Hence I dare venture to foretell, that it will return again in the year 1758. Halley didn’t live to see his prediction verified. It was 16 years after his death that – right on schedule, in 1758 – the comet did return. The scientific world – and the public – were amazed. It was the first comet ever predicted to return. It’s now called Comet Halley, in honor of Edmond Halley. The 17th century was an exciting time to be a scientist in England. The scientific revolution gave birth to the Royal Society of London when Halley was only a child. Members of the Royal Society – physicians and natural philosophers who were some of the earliest adopters of the scientific method – met weekly. The first Astronomer Royal was John Flamsteed, who is remembered in part for the creation of the Royal Observatory at Greenwich, which still exists today. After entering Queen’s College in Oxford as a student in 1673, Halley was introduced to Flamsteed. Halley had the chance to visit him in his observatory on a few occasions during which Flamsteed encouraged him to pursue astronomy. At that time, Flamsteed’s project was to assemble an accurate catalog of the northern stars with his telescope. Halley thought he would do the same, but with stars of the Southern Hemisphere. His journey southward began in November 1676, even before he obtained his university degree. He sailed aboard a ship from the East India Company to the island of St. Helena, still one of the most remote islands in the world and the southernmost territory occupied by the British. His father and King Charles II financed the trip.
In spite of bad weather that made Halley’s work difficult, when he turned to sail back home in January 1678, he brought records of the longitude and latitude of 341 stars and many other observations including a transit of Mercury. Of the transit, he wrote: This sight … is by far the noblest astronomy affords. Halley’s catalog of southern stars was published by the end of 1678, and – as the first work of its genre – it was a huge success. No one had ever attempted to determine the locations of southern stars with a telescope before. The catalog was Halley’s glorious debut as an astronomer. In the same year, he received his M.A. from the University of Oxford and was elected a fellow of the Royal Society. Halley visited Isaac Newton in Cambridge for the first time in 1684. A group of Royal Society members, including physicist and biologist Robert Hooke, architect Christopher Wren and Isaac Newton, were trying to crack the code of planetary motion. Halley was the youngest to join the trio in their mission to use mathematics to describe how – and why – the planets move around the sun. They were all competing against one another to find the solution first, which was very motivating. Their problem was to find a mechanical model that would keep the planet orbiting around the sun without it escaping the orbit or falling into the star. Hooke and Halley determined that the solution to this problem would be a force that keeps a planet in orbit around a star and must decrease as the inverse square of its distance from the star, what we today know as the inverse-square law. Hooke and Halley were on the right track, but they were not able to create a theoretical orbit that would match observations, in spite of a monetary prize to be given by Wren. Halley visited Newton and explained the concept to him, also explaining that he couldn’t prove it. Newton, encouraged by Halley, developed Halley’s work into one of the most famous scientific works to this day, Mathematical Principles of Natural Philosophy, often referred to simply as Newton’s Principia. Halley is also known for his work in meteorology. He put his talent of giving meaning to great amounts of data to use by creating a map of the world in 1686. The map showed the most important winds above the oceans. It is considered to be the first meteorological chart to be published. Halley kept travelling and working on many other projects, such as attempting to link mortality and age in a population. This data was later used by actuaries for life insurance. In 1720, Halley succeeded Flamsteed and became the second Astronomer Royal at Greenwich. Bottom line: Astronomer Edmond Halley – for whom Halley’s Comet is named – was born on November 8, 1656.
“Justice is like the north star, which is fixed, and all the rest revolve about it.” Here at the Planetarium, we often proclaim, “The stars are not moving above you; it’s the Earth spinning below your feet.” Well, that’s only partly true. The Earth does rotate very quickly, but the stars move even faster! Stars are huge balls of hydrogen, mostly. Billions and billions of them whirl around the core of the Milky Way galaxy. At the Sun’s distance from the galaxy’s center, about 27,000 light years, stars speed along at 150 miles per second! Though racing rapidly, they still take about 230 million years to make one orbit around our galaxy. Galactic stars do not move perfectly together in one great cosmic circle; they have random motions. Take the stars of the Big Dipper in the 200,000-year timelapse in the GIF below. Notice the dates. The time starts at 100,000 BC and runs to 100,000 AD. Long ago and far into the future, the Big Dipper will not look like a Big Dipper. We do not observe this speedy motion simply because our lifetimes are comparatively short and the stars are extremely far away. How far? The closest star of the dipper is Megrez at 58 light years away, or 350 trillion miles. With present technology, our fastest spaceship travels at 10 miles per second. At that speed, it would take over a million years to reach Megrez! Conversely, imagine a car speeding past you a block away at 150 miles per second. You probably wouldn’t even notice it. At best, you might spot the briefest flash of light. You would never know it was a car. Our visual experience always depends on speed, size, and distance. The International Space Station (ISS) races along at five miles per second, or 17,000 miles per hour. At that speed, it orbits the Earth in 90 minutes. The astronauts are moving so quickly, they see a sunrise and sunset every 45 minutes. The ISS is as big as a football field, but it is 200 miles away from the surface of the Earth, so when we gaze into the night sky, the ISS looks like a fairly bright, slow-moving star. Downtown Milky Way Like most city downtowns, our galactic core is densely packed and full of bright lights. We live way out in a distant suburb. If we resided in downtown Milky Way, our night sky would be ablaze with a myriad of stars as bright as Venus. This discovery of our galactic location took a while. You might think it would be easy to see where downtown is -- simply look for a bright concentration of stars. This is no problem when we look at other galaxies. Take the Andromeda galaxy for example; its bright galactic middle is obvious to the telescope’s eye. One attempt to discern our position of residence was simply to count and map all the stars in every direction. Astronomer William Herschel (discoverer of Uranus) took on this project and, in 1785, drew a map of the Milky Way, placing the Sun nearly at the center. Around 1920, astronomer Harlow Shapley deduced our Solar System’s location by mapping the position and distance of globular star clusters. The results, while not perfect, suggested that the center of the galaxy lay in the direction of the constellation Sagittarius. Once again, however, astronomers did not see anything resembling a bright galactic core. The reason galaxy explorers were thrown off was the dark nebulae between the Sun and the galaxy’s center. These huge clouds of gas and dense dust obscure the core’s bright light. A breakthrough occurred when scientists realized they could see through the dark, dusty nebulae. How?
Just as we can detect a heat source in a dark room, astronomers see “light,” or energy, through the dark clouds with radio and infrared telescopes. Unknowingly, a confirmation of the galaxy’s center came from Karl Jansky’s radio antenna in 1931. He detected a constant radio source coming from the constellation of Sagittarius. Today astronomers have added X-ray telescopes to their investigative line-up and have confirmed the existence of a supermassive black hole at the center of the Milky Way galaxy. They have mapped stars orbiting the massive gravity well. This animation shows the location of stars for 18 years, from 1995 to 2013. Notice how they move faster as they near the black hole. Our galaxy’s central black hole is now called Sagittarius A*. It has the mass of over 4 million Suns packed into a space only 14 million miles across. (For comparison, our sun is a little less than one million miles wide.) Within one parsec, or 3.26 light years, of the galactic center, astronomers estimate there are 10 million stars! Within one parsec of the Sun, there are no other stars. It took us a while to find the location of downtown Milky Way. It turns out to be an extremely energetic and vibrant place... Maybe best if we keep our safe distance. Goodbye Opportunity — AKA Oppy Completing a marathon in 15 years may be rather slow, but if you are a rover on Mars, it’s quite the accomplishment! Last month, we wistfully said farewell to NASA’s Opportunity rover. The plucky robot roamed over the red planet for more than 28 miles (the distance from east Milwaukee to west Waukesha). Opportunity landed on Mars on January 25, 2004. The original plan called for only 90 Martian days (sols), or 92 Earth days. The rover far exceeded that conservative estimate by lasting over 5,000 sols. A raging dust storm on the red planet finally caused its end in June 2018. NASA had hopes of re-booting Opportunity, but to no avail. One of the biggest discoveries by Opportunity was the strange round rocks dubbed "blueberries" (below). Scientists are not certain of their exact origins but suggest they are evidence of past water on Mars. Some of the tweets NASA shared at Spirit and Oppy (@MarsRovers) are worth repeating: “To all the poets, the painters, the makers and dreamers who've reached out to say #ThanksOppy and explore with us, thank you. We love sharing space with you.” “#ThanksOppy for being the little rover that could! A planned 90-day mission to explore Mars turned into 15 years of ground-breaking discoveries and record-breaking achievements.” The rover Curiosity will celebrate seven years on Mars this August. It’s already approaching a half-marathon, or 13.1 miles. Who knows, maybe someday it will break Oppy’s milestone. Mars stays in the same place pretty much all March. Look for the red “star” in the southwest after sunset. By March 31, the red planet sets about 11:30 p.m. The Moon wanders by from March 10-13. Jupiter rises in the southeast about 2:30 a.m. local time at month’s start, and by 1:30 a.m. at month’s end. It still shines a considerable distance from its gas giant cousin Saturn. If your sky is far from city lights, you can look for the stars of Sagittarius. The Moon comes into view near the biggest planets from March 26-29. The Moon occults (eclipses) both Saturn and Pluto on March 29, but this is not visible from Wisconsin. Pluto is way too small and far away to see with the human eye, or even a fairly good telescope. Venus sinks even lower in March, but is so bright you can spot it very low in the southeast.
The Moon passes Venus twice in March; catch it from March 1-3 and from March 30 to April 1.
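The mass quoted above for Sagittarius A* comes from exactly the kind of orbit mapping described earlier in this piece. As a back-of-the-envelope check, Kepler's third law in solar units (mass in solar masses equals a³/P², with a in AU and P in years) recovers it from the orbit of the star S2. The period and semi-major axis below are illustrative round numbers, not fitted values.

```python
# Rough mass of Sgr A* from the orbit of the star S2 (illustrative values).
P_years = 16.0    # approximate orbital period of S2 [years]
a_au    = 1000.0  # approximate semi-major axis of S2's orbit [AU]

# Kepler's third law in solar units: M [M_sun] = a^3 / P^2
mass_solar = a_au**3 / P_years**2
print(f"Implied central mass: {mass_solar:.1e} solar masses")  # ~3.9e6
```

Two round numbers for a single star already land close to the "over 4 million Suns" figure quoted above.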
0.840273
3.441794
These days, astronomers all over the world sleep with one eye open, keeping a close watch on the supermassive black hole located at the center of our Milky Way Galaxy: Sagittarius A* (Sgr A*). A mysterious gas cloud called "G2" is on a collision course with our Galactic nucleus and may produce some fireworks in the near future (read all about it here). Imagine the excitement when on April 24 (2013), our daily observations performed with Swift's X-ray Telescope suddenly detected enhanced activity at the position of Sgr A*. An Astronomer's Telegram was promptly distributed to notify the astronomical community. To everybody's surprise, however, rapid follow-up observations at infrared and radio wavelengths did not detect anything out of the ordinary and instead suggested the supermassive black hole remained as quiet as always. The mystery was resolved when, the very next day, Swift's Burst Alert Telescope detected a very short (less than a second) and energetic burst of gamma-ray emission. Together with the detection of a pulsed X-ray signal using the brand-new high-energy telescope NuSTAR, this revealed that an otherwise dormant neutron star, located very close to the supermassive black hole, had been revived. This neutron star, named SGR J1745-29, has an extremely strong magnetic field and belongs to the rare class of "magnetars". So far it is only the magnetar that continues to show fireworks, whereas Sgr A* remains as quiet as it has ever been. Kennea et al. 2013, ApJ Letters 770, L24: Swift Discovery of a New Soft Gamma Repeater, SGR J1745-29, near Sagittarius A*. Paper link: ADS. Press item: Sky & Telescope feature.
0.822011
3.639002
Following a decade of travelling through space, and years of mission preparation before that, the European Space Agency's (ESA) Rosetta spacecraft is right on course for a superb mission. It has already returned intriguing views of its target, Comet 67P/Churyumov-Gerasimenko, and on Wednesday August 6 Rosetta will complete the last of a series of ten manoeuvres that will bring it to within 100km of the comet.

Reaching New Heights

Rosetta is set to be the first spacecraft to orbit a comet and, later this year, the first to deploy a small lander, called Philae, to touch down on a comet's surface. Comets contain some of the most primitive material in the solar system, unchanged since the energetic processes that built up the moons and planets. They can tell us what ingredients were around when the solar system formed 4.6 billion years ago, and they may have supplied Earth with the water and organic material needed for life to develop. Comet 67P's orbit takes it from just beyond the orbit of Jupiter to between the orbits of Mars and Earth. It probably originated in the Kuiper Belt, a region beyond Neptune, but was ejected at some stage. It belongs to the group of Jupiter-family comets, so called because the orbits of these comets are strongly influenced by Jupiter's gravity.

Are We There Yet?

The comet is currently situated between the orbits of Mars and Jupiter, about 500 million kilometres from the Sun, and it has taken Rosetta a very long time to catch up with it. Since launching from Europe's Spaceport at Kourou, French Guiana on 2 March 2004, Rosetta has travelled over six billion kilometres on a lengthy journey that included three fly-bys of Earth, one of Mars, and two bonus visits to asteroids in the asteroid belt: the small asteroid 2867 Steins, passed at a distance of 800km, and the considerably larger 21 Lutetia, seen from approximately 3,000km. It spent nearly the last three years of its journey in hibernation before waking up in January this year. Over the last couple of months, as the distance between Rosetta and the comet rapidly shrank, dramatic images have shown that Comet 67P/Churyumov-Gerasimenko is quite unlike the handful of comets previously observed up close in space. This comet, estimated to be about 4km across, is made up of two distinctly shaped pieces. It may be that a very slow collision fused two separate objects together. But it is also possible that this is a single object that has been warped out of shape by the gravitational pull of something large, or that its outer layers have been eroded away over the years, leaving just the most compact material behind. What is exciting is that we will soon know a lot more about this comet. But it is definitely not plain sailing from here. In fact, the challenges are, in some ways, just beginning.

Three-Sided Orbit?

The rendezvous puts Rosetta into a peculiar triangular orbit, seen in the movie below. Each Wednesday and Sunday, small thruster burns will bring the spacecraft around onto the next side of this triangle. Eventually, Rosetta will come within 30km of the surface of the comet, and from there the comet's feeble gravity ought to be able to take over and hold Rosetta in orbit. At that point, the surface will be mapped in great detail to find the best place to send Rosetta's lander, Philae. To carry out the deployment itself, Rosetta will have to come within just 2.5 kilometres of the comet's nucleus. But there is more.
Now is an exciting time to catch up with the comet, because Rosetta will be travelling alongside Comet 67P/Churyumov-Gerasimenko as it makes its journey around the Sun. It is on this trip towards the Sun that things really begin to happen for a comet. As the comet warms, ices near its surface turn to gas, creating a fuzzy atmosphere, the coma, around the comet; and as the gas escapes, it also carries dust particles with it. The warmer and more active the comet gets, the more gas and dust is released into space. This material forms the comet's tail and is pushed back from the comet by the pressure of solar radiation. Every comet evolves in its own peculiar way, depending on how compact it is, how much volatile material it contains, and which regions of the comet are being warmed by the Sun. Already, although Comet 67P/Churyumov-Gerasimenko is still beyond Mars, it is showing signs of outgassing water. ESA is not taking any chances with Rosetta: the spacecraft is currently travelling slightly ahead of the comet, staying out of the way of any outgassing material. It is sure to be an exciting and ambitious mission, seeing just how close the spacecraft can get to the comet, and the wonderful science it will do, while still keeping Rosetta out of harm's way.
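To get a feel for just how feeble the comet's gravity is, a circular-orbit estimate gives the speed Rosetta must fly at 30 km. The comet mass below is an assumption at the right order of magnitude (67P's mass had not yet been measured when this was written):

```python
import math

G       = 6.674e-11   # gravitational constant [m^3 kg^-1 s^-2]
M_comet = 1.0e13      # assumed mass of Comet 67P [kg] (order of magnitude)
r       = 30e3        # orbital radius [m]

v = math.sqrt(G * M_comet / r)  # circular orbital speed
print(f"Orbital speed at 30 km: {v:.2f} m/s")  # ~0.15 m/s, slower than walking
```

At a slow stroll's pace, even a small navigation error or a puff of outgassing could perturb the orbit, which is one reason the approach is flown so cautiously.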
0.843728
3.673066
The never-before-seen phenomenon was first noticed in January and has since been documented at least 11 times. Some of the dust expelled was blown out into the void of space, but the remainder was captured by Bennu's gravity, falling back down and settling on the asteroid's surface. Curiously, however, at least four larger chunks of debris have remained in orbit around Bennu, potentially forming micro-moons. At the moment, the eruptions pose more questions than the OSIRIS-REx team have answers. The origin of these plumes, and what exactly triggers them, remains a mystery. "The discovery of [the] plumes is one of the biggest surprises of my scientific career," said principal investigator Dante Lauretta of the University of Arizona. The OSIRIS-REx probe arrived in Bennu orbit in December 2018 to study the rock for additional information about the origins of the solar system. It is due to collect a rock sample using its extendable arm, but the sampling is proving more difficult than anticipated due to the number of large boulders on the surface.
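Whether ejected particles escape, orbit, or fall back hinges on Bennu's tiny escape velocity. A quick estimate, using approximate published values for Bennu's mass and radius, shows why even slowly moving pebbles can leave the surface or linger in orbit:

```python
import math

G = 6.674e-11   # gravitational constant [m^3 kg^-1 s^-2]
M = 7.3e10      # approximate mass of Bennu [kg]
R = 245.0       # approximate mean radius of Bennu [m]

v_esc = math.sqrt(2 * G * M / R)  # escape velocity from the surface
print(f"Escape velocity: {v_esc * 100:.0f} cm/s")  # ~20 cm/s
```

Anything ejected a little below that speed can loop around the asteroid before coming down, consistent with debris both raining back onto the surface and lingering in orbit.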
0.824605
3.282509
First Detection of Super-Earth Atmosphere

For the first time astronomers were able to analyse the atmosphere of an exoplanet in the class known as super-Earths. Using data gathered with the NASA/ESA Hubble Space Telescope and new analysis techniques, the exoplanet 55 Cancri e is revealed to have a dry atmosphere without any indications of water vapour. The results, to be published in the Astrophysical Journal, indicate that the atmosphere consists mainly of hydrogen and helium. The international team, led by scientists from University College London (UCL) in the UK, took observations of the nearby exoplanet 55 Cancri e, a super-Earth with a mass about eight times that of Earth. It is located in the planetary system of 55 Cancri, a star about 40 light-years from Earth. Using observations made with the Wide Field Camera 3 (WFC3) on board the NASA/ESA Hubble Space Telescope, the scientists were able to analyse the atmosphere of this exoplanet. This makes it the first detection of gases in the atmosphere of a super-Earth. The results allowed the team to examine the atmosphere of 55 Cancri e in detail and revealed the presence of hydrogen and helium, but no water vapour. These results were only made possible by exploiting a newly developed processing technique. "This is a very exciting result because it's the first time that we have been able to find the spectral fingerprints that show the gases present in the atmosphere of a super-Earth," explains Angelos Tsiaras, a PhD student at UCL, who developed the analysis technique along with his colleagues Ingo Waldmann and Marco Rocchetto. "The observations of 55 Cancri e's atmosphere suggest that the planet has managed to cling on to a significant amount of hydrogen and helium from the nebula from which it originally formed." Super-Earths like 55 Cancri e are thought to be the most common type of planet in our galaxy. They acquired the name "super-Earth" because they have a mass larger than that of the Earth but are still much smaller than the gas giants in the Solar System. The WFC3 instrument on Hubble has already been used to probe the atmospheres of two other super-Earths, but no spectral features were found in those previous studies. 55 Cancri e, however, is an unusual super-Earth, as it orbits very close to its parent star. A year on the exoplanet lasts for only 18 hours, and temperatures on the surface are thought to reach around 2000 degrees Celsius. Because the exoplanet orbits its bright parent star at such a small distance, the team was able to use new analysis techniques to extract information about the planet during its transits in front of the host star. Observations were made by scanning the WFC3 very quickly across the star to create a number of spectra. By combining these observations and processing them through analytic software, the researchers were able to retrieve the spectrum of 55 Cancri e embedded in the light of its parent star. "This result gives a first insight into the atmosphere of a super-Earth. We now have clues as to what the planet is currently like and how it might have formed and evolved, and this has important implications for 55 Cancri e and other super-Earths," said Giovanna Tinetti, also from UCL, UK. Intriguingly, the data also contain hints of the presence of hydrogen cyanide, a marker for carbon-rich atmospheres.
“Such an amount of hydrogen cyanide would indicate an atmosphere with a very high ratio of carbon to oxygen,” said Olivia Venot, KU Leuven, who developed an atmospheric chemical model of 55 Cancri e that supported the analysis of the observations. “If the presence of hydrogen cyanide and other molecules is confirmed in a few years' time by the next generation of infrared telescopes, it would support the theory that this planet is indeed carbon-rich and a very exotic place,” concludes Jonathan Tennyson, UCL. “Hydrogen cyanide, or prussic acid, is highly poisonous, so it is perhaps not a planet I would like to live on!”
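Results like this rest on the transit method: during a transit the planet blocks a fraction (Rp/Rs)² of the starlight, and the tiny wavelength dependence of that depth encodes the atmospheric spectrum. A sketch of the expected depth for 55 Cancri e, using approximate literature values for the two radii, shows how small the signal is:

```python
R_EARTH = 6.371e6  # Earth radius [m]
R_SUN   = 6.957e8  # solar radius [m]

R_planet = 2.0  * R_EARTH  # approximate radius of 55 Cancri e (~2 Earth radii)
R_star   = 0.94 * R_SUN    # approximate radius of the star 55 Cancri

depth = (R_planet / R_star) ** 2  # fraction of starlight blocked in transit
print(f"Transit depth: {depth * 1e6:.0f} ppm")  # ~380 ppm
```

The atmospheric signature is a small modulation on top of that roughly 0.04% dip, which is why the rapid-scanning technique and careful processing described above were needed.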
0.856535
3.919808
This craggy fantasy mountaintop enshrouded by wispy clouds looks like a bizarre landscape from Tolkien's "The Lord of the Rings" or a Dr. Seuss book, depending on your imagination. The NASA Hubble Space Telescope image, which is even more dramatic than fiction, captures the chaotic activity atop a three-light-year-tall pillar of gas and dust that is being eaten away by the brilliant light from nearby bright stars. The pillar is also being assaulted from within, as infant stars buried inside it fire off jets of gas that can be seen streaming from towering peaks. This turbulent cosmic pinnacle lies within a tempestuous stellar nursery called the Carina Nebula, located 7,500 light-years away in the southern constellation Carina. The image celebrates the 20th anniversary of Hubble's launch and deployment into an orbit around Earth. Scorching radiation and fast winds (streams of charged particles) from super-hot newborn stars in the nebula are shaping and compressing the pillar, causing new stars to form within it. Streamers of hot ionized gas can be seen flowing off the ridges of the structure, and wispy veils of gas and dust, illuminated by starlight, float around its towering peaks. The denser parts of the pillar are resisting being eroded by radiation much like a towering butte in Utah's Monument Valley withstands erosion by water and wind. Nestled inside this dense mountain are fledgling stars. Long streamers of gas can be seen shooting in opposite directions off the pedestal at the top of the image. Another pair of jets is visible at another peak near the center of the image. These jets (known as HH 901 and HH 902, respectively) are the signpost for new star birth. The jets are launched by swirling disks around the young stars, which allow material to slowly accrete onto the stars' surfaces. Hubble's Wide Field Camera 3 observed the pillar on Feb. 1-2, 2010. The colors in this composite image correspond to the glow of oxygen (blue), hydrogen and nitrogen (green), and sulfur (red). The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA's Goddard Space Flight Center in Greenbelt, Maryland, manages the telescope. The Space Telescope Science Institute (STScI) in Baltimore conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy, Inc., in Washington.
0.88852
3.413444
A depiction of the atomic structure of the helium atom. The darkness of the electron cloud corresponds to the line-of-sight integral over the probability function of the 1s electron orbital. The nucleus is shown schematically, with protons in purple and neutrons in pink. In reality, the nucleus (and the wavefunction of each of the nucleons) is also spherically symmetric. (For more complicated nuclei this is not the case.)

In an atom, positively charged protons and uncharged neutrons bind together to form the atomic nucleus; surrounding the nucleus is an electron cloud, far larger than the nucleus but far less dense, which contains the negatively charged electrons. An atom with equal numbers of protons and electrons is electrically neutral. The number of protons in an atom identifies its element, while the number of neutrons identifies the isotope.

The concept that matter is composed of discrete units and cannot be divided into arbitrarily tiny quantities has been around for thousands of years. The earliest references to the concept of atoms date back to ancient India in the 6th century BCE. The Nyaya and Vaisheshika schools developed elaborate theories of how atoms combined into more complex objects (first in pairs, then trios of pairs). References to atoms in the West emerge a century later with Leucippus, whose student, Democritus, systematized his views. In around 450 BCE, Democritus coined the term atomos, which meant "uncuttable". Though both the Indian and Greek concepts of the atom were based purely on philosophy, modern science has retained the name coined by Democritus. In 1803, John Dalton used the concept of atoms to explain why elements always react in simple proportions, and why certain gases dissolve better in water than others. He proposed that each element consists of atoms of a single, unique type, and that these atoms can join to each other to form chemical compounds. In 1827 the British botanist Robert Brown used a microscope to look at pollen grains floating in water. He called their erratic motion "Brownian motion". Albert Einstein would later demonstrate that this motion was due to the water molecules bombarding the grains. In 1897, J. J. Thomson, through his work on cathode rays, discovered the electron and its subatomic nature, which destroyed the concept of atoms as indivisible units. Later, Thomson also discovered the existence of isotopes through his work on ionized gases. Thomson believed that the electrons were distributed evenly throughout the atom, balanced by the presence of a uniform sea of positive charge. However, in 1909, the gold foil experiment was interpreted by Ernest Rutherford as suggesting that the positive charge of an atom and most of its mass are concentrated in a nucleus at the center of the atom (the Rutherford model), with the electrons orbiting it like planets around a sun. In 1913, Niels Bohr added quantum mechanics to this model, which now stated that the electrons were confined to clearly defined orbits and could jump between these, but could not freely spiral inward or outward in intermediate states. In 1926, Erwin Schrödinger, using Louis de Broglie's 1924 proposal that all particles behave to an extent like waves, developed a mathematical model of the atom that described the electrons as three-dimensional waveforms rather than point particles.
A consequence of using waveforms to describe electrons, pointed out by Werner Heisenberg a year later, is that it is mathematically impossible to obtain precise values for both the position and momentum of a particle at any point in time; this became known as the uncertainty principle. In this framework, for any given value of position one can only obtain a range of probable values for momentum, and vice versa. Although this model was difficult to visualize, it was able to explain many observations of atomic behavior that previous models could not, such as certain structural and spectral patterns of atoms bigger than hydrogen. Thus, the planetary model of the atom was discarded in favor of one that described orbital zones around the nucleus where a given electron is most likely to be found. Although the word atom originally denoted the smallest theoretically indivisible object, modern scientific experiments have found sub-atomic particles within the atom. The sub-atomic particles are:
- Electrons, which have a negative charge, a size too small to be measured with current techniques, and which are the lightest (i.e., least massive) of the three types of basic particles, with a mass of 9.11×10⁻³¹ kg.
- Protons, which have a positive charge, with a free mass about 1836 times that of the electron (1.67×10⁻²⁷ kg, though binding-energy changes can reduce this).
- Neutrons, which have no charge, a free mass about 1839 times that of the electron, and about the same physical size as protons (on the order of 2.5×10⁻¹⁵ m in diameter, although the "surface" of a proton or neutron is not sharply defined).
Protons and neutrons make up a dense, massive atomic nucleus, and are collectively called nucleons. The electrons form the much larger electron cloud surrounding the nucleus. Both protons and neutrons are themselves now thought to be composed of even more elementary particles, called quarks. Atoms of the same element have the same number of protons (called the atomic number). Within a single element, the number of neutrons may vary, determining the isotope of that element. The number of electrons associated with an atom is the most easily changed, owing to the lower binding energy of electrons. The number of protons (and neutrons) in the atomic nucleus may also change, via nuclear fusion, nuclear fission, bombardment by high-energy subatomic particles or photons, or certain (but not all) types of radioactive decay. In processes that change the number of protons in a nucleus, the atom becomes an atom of a different chemical element. Atoms are electrically neutral if they have an equal number of protons and electrons. Atoms with either a deficit or a surplus of electrons are called ions. Electrons that are furthest from the nucleus may be transferred to other nearby atoms or shared between atoms. By this mechanism atoms are able to bond into molecules and other types of chemical compounds, such as ionic and covalent network crystals.

Atoms and molecules

For gases and certain molecular liquids and solids (such as water and sugar), molecules are the smallest division of matter which retains chemical properties; however, there are also many solids and liquids which are made of atoms but do not contain discrete molecules (such as salts, rocks, and liquid and solid metals).
Thus, while molecules are common on Earth (making up all of the atmosphere and most of the oceans), most of the mass of the Earth (much of the crust, and all of the mantle and core) is not made of identifiable molecules, but rather represents atomic matter in other networked arrangements, all of which lack the particular type of small-scale interrupted order (i.e., small, strongly-bound collections of atoms held to other collections of atoms by much weaker forces) that is associated with molecular matter. Most molecules are made up of multiple atoms; for example, a molecule of water is a combination of two hydrogen atoms and one oxygen atom. The term "molecule" in gases has been used as a synonym for the fundamental particles of the gas, whatever their structure. This definition results in a few types of gases (for example inert elements that do not form compounds, such as neon) having "molecules" consisting of only a single atom. The first nuclei, including most of the helium and all of the deuterium in the universe, were theoretically created during big bang nucleosynthesis, about 3 minutes after the big bang. The first atoms were theoretically created 380,000 years after the big bang, during an epoch called recombination, when the universe cooled enough to allow electrons to become attached to nuclei. Since then, atoms have been combined in stars through the process of nuclear fusion to generate atoms up to iron. Some atoms such as 6Li are generated in space through cosmic ray spallation. Elements heavier than iron were generated in supernovae through the r-process and in AGB stars through the s-process. Some elements, such as lead, formed largely through the radioactive decay of heavier elements. Most of the atoms that currently make up the Earth and all its inhabitants were present in their current form in the nebula that formed the solar system. The rest are the result of radioactive decay, and their relative proportion can be used to determine the age of the Earth through radiometric dating. Most of the helium on Earth is a product of alpha decay. There are a few trace atoms on Earth that were not present at the beginning, nor are results of radioactive decay. Carbon-14 is continuously generated by cosmic rays in the atmosphere. Some atoms on Earth have been artificially generated, either deliberately or as by-products of nuclear reactors or explosions, including all the plutonium and technetium on the Earth. Various analogies have been used to demonstrate the minuteness of the atom:
- A human hair is about 1 million carbon atoms wide.
- A single drop of water contains about 2 sextillion atoms of oxygen (2 followed by 21 zeros, 2×10²¹) and twice as many hydrogen atoms.
- An HIV virion is the width of 800 carbon atoms and contains about 100 million atoms in total. An E. coli bacterium contains perhaps 100 billion atoms, and a typical human cell roughly 100 trillion atoms.
- A speck of dust might contain 3×10¹² (3 trillion) atoms.
- If an apple were magnified to the size of the Earth, the atoms in the apple would be approximately the size of the original apple.
- Matthew Champion, "Re: How many atoms make up the universe?", 1998.
- Gangopadhyaya, Mrinalkanti. Indian Atomism: History and Sources. Atlantic Highlands, New Jersey: Humanities Press, 1981. ISBN 0-391-02177-X.
- Prentice Hall Science Explorer (2002). Upper Saddle River, New Jersey, USA: Prentice-Hall, Inc. ISBN 0-13-054091-9.
Science textbook, page 32: "There are 2,000,000,000,000,000,000,000 (that's 2 sextillion) atoms of oxygen in one drop of water—and twice as many atoms of hydrogen."
- Richard Feynman (1995). Six Easy Pieces. The Penguin Group. ISBN 978-0-14-027666-4.
- Kenneth S. Krane (1987). Introductory Nuclear Physics.
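The drop-of-water figure from the textbook quote above is easy to verify with Avogadro's number; the only assumption is the size of a drop, taken here as 0.05 mL:

```python
AVOGADRO   = 6.022e23  # molecules per mole
MOLAR_MASS = 18.0      # molar mass of H2O [g/mol]
drop_grams = 0.05      # assumed mass of one drop of water [g]

molecules = drop_grams / MOLAR_MASS * AVOGADRO
print(f"Water molecules per drop: {molecules:.1e}")  # ~1.7e21
# One oxygen atom per molecule (~2e21) and twice as many hydrogens,
# matching the textbook figure for this assumed drop size.
```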
0.847225
3.64049
The Universe is, as we say, a collection of planets, stars, galaxies, and all other forms of matter and energy. According to astrophysicists' calculations, the diameter of the observable universe is about 93 billion light-years, but the full extent of the universe is still unknown. Here is a list of little-known facts which will blow your mind. A year on Venus is shorter than its day: The slowest-rotating planet in our Solar System is Venus. It is so slow that it takes 243 Earth days to rotate once on its axis but only 224 Earth days to complete its orbit. This means that on Venus, days last longer than years. A day on Mercury is longer than its year: Mercury revolves around the Sun faster than the other planets, which makes its year equivalent to 88 Earth days, while a solar day on Mercury lasts 176 Earth days. You cannot cry in space: Without gravity, tears don't flow downwards out of the eye and so do not wash irritants away. Instead, they accumulate into a little ball of liquid that hangs in the eye. A spoonful of a neutron star weighs about a billion tons: It is calculated that if you could collect a spoonful of matter from the center of a neutron star, it would weigh on the order of a billion tons. Looking into the night sky enables you to look back in time: The stars we see in the night sky are very far away from us. Their light has taken a long time to travel across space before reaching our eyes. This means that whenever we look out at the stars, we are actually seeing how they looked in the past. The Hubble telescope can help us look back billions of years into the past: The Hubble telescope enables us to look at very distant objects in the universe. One famous image, the Hubble Ultra Deep Field, was created from exposures taken in 2003 and 2004; it shows a tiny patch of sky in detail, contains about 10,000 galaxies, and acts as a portal back in time. You can watch the Big Bang on your television: Cosmic background radiation is the afterglow of the Big Bang, which started our universe 13.7 billion years ago. This cosmic echo exists throughout the universe, and we can catch a glimpse of it using an old-fashioned television set. When a television is not tuned to a station, we see black-and-white fuzz and hear crackling noise; around 1% of this interference is made up of cosmic background radiation. A massive body of water is floating in space: There is a gigantic water reservoir floating in space that holds 140 trillion times as much water as all the world's oceans.
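The neutron-star item is a good one to sanity-check. With an assumed density near the middle of published estimates (real values vary considerably with depth inside the star), a teaspoon works out at billions of tons:

```python
density  = 5e17  # assumed neutron-star density [kg/m^3]
teaspoon = 5e-6  # volume of a teaspoon [m^3]

mass_kg   = density * teaspoon
mass_tons = mass_kg / 1000.0  # metric tons
print(f"One teaspoon: {mass_tons:.1e} tons")  # ~2.5e9, billions of tons
```

The exact figure depends strongly on where in the star the matter comes from, but the billion-tons-per-spoonful order of magnitude holds.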
0.869905
3.31362
NASA - Goddard Space Flight Centre logo / Planetary Science Institute (animated) logo. Oct. 3, 2013. Ancient Mars could have been home to a type of supervolcano that affected its atmosphere and was partly responsible for the barren red planet scientists are exploring today. Image above: A false-color view of Eden Patera, a possible supervolcano on Mars. Image Credits: ASU/GSFC/JPL/NASA. Boffins from the Planetary Science Institute and the NASA Goddard Space Flight Centre have postulated that a vast circular basin on the Red Planet known as Eden Patera – which was previously thought to be an impact crater – is actually the remnant of a massive new type of volcano. Scientists had previously seen evidence of other types of newer eruptions, from so-called shield volcanoes, during the Red Planet's Hesperian geological time period. But the paper looks into an entirely new category of Martian volcanic construct: ancient supervolcanoes. When a "supervolcano" erupted, the scientists theorise, it left a volcanic "caldera" – a depression in the planet's surface – which looks like a crater. "This highly explosive type of eruption is a game-changer, spewing many times more ash and other material than typical, younger Martian volcanoes," Goddard's Jacob Bleacher said. "During these types of eruptions on Earth, the debris may spread so far through the atmosphere and remain so long that it alters the global temperature for years." Arabia Terra with potential volcanic calderas labeled. Image credit: Michalski et al. The researchers reckon that a large body of magma loaded with dissolved gas rose to the surface very quickly. They compared the supervolcano's eruption to a bottle of soda being shaken, blowing its contents far and wide across the Martian landscape. Because so much material explodes away, the depression left behind can collapse even further, making it look less like the site of a volcano. Similar eruptions happened on prehistoric Earth in places like Yellowstone National Park in the US and Lake Toba in Indonesia. Eden Patera is located in the Arabia Terra region of Mars, an area known for its impact craters. But PSI's Joseph Michalski began to suspect that it wasn't just another crater when he examined data from NASA's Mars Odyssey, Mars Global Surveyor and Mars Reconnaissance Orbiter spacecraft, as well as from the European Space Agency's Mars Express orbiter. He noticed that the "crater" was missing its rim and its ejecta – the melted rock that splashes out when an object hits a planet. The researchers say the new type of volcano explains a discrepancy: the lava and pyroclastic materials found on Arabia Terra and beyond could not be accounted for by the known shield volcanoes. Supervolcanoes on Mars. Bleacher identified features at the site that point to volcanism, such as the rock ledges that are usually left behind after a lava lake slowly drains, and the kinds of faults and valleys around the crater that are created when the ground collapses due to volcanic activity beneath the surface. The team also reckons that a few other basins nearby could be volcano remnants as well. "If just a handful of volcanoes like these were once active, they could have had a major impact on the evolution of Mars," Bleacher said. The full study, "Supervolcanoes within an ancient volcanic province in Arabia Terra, Mars", was published in Nature.
"Every decade or two someone proposes yet another otherwise previously unrecognized volcano on Mars," says space volcanology expert Larry Crumpler of the New Mexico Museum of Natural History and Science in Albuquerque. He calls the supervolcano "an interesting new idea about Martian highlands volcanism where none had been proposed before." However, both Crumpler and MIT's Maria Zuber (who calls the observations "well supported") caution that the supervolcanoes idea rests on interpretation of the Martian surface, which has a long history of misleading observers. "Like most remote-sensing studies it relies principally on circumstantial evidence," Crumpler says. "Nonetheless, it postulates an intriguing direction for future research regarding what was the wettest period in Martian geologic history." Images (mentioned), Video, Text, Credits: NASA / Planetary Science Institute / Nature Video.
0.855042
3.880576
Many of the planets discovered elsewhere in our galaxy are not like Earth, but rather more like Jupiter. Such gas giants, as far as we know, are not hospitable to life, but it has now been suggested that the moons of these planets could be habitable. If confirmed, it would suggest such locations could be the predominant sources of life in the universe, rather than worlds like our own Earth. 'If even some of these Jupiter-sized planets have moons, they might be the predominant sites of life in the universe,' Dr Sarah Ballard said. In particular, she focused on the world of Upsilon Andromedae d, a gas giant exoplanet about 10 times the mass of Jupiter, located 44 light-years from Earth. While the planet itself is not thought to be habitable, it is possible that a moon in its orbit - known as an exomoon - could be. And if you were to stand on the surface of such a moon, you would see 'beautiful tumultuous clouds on the Jovian planet' and 'incredibly complex cloud activity,' according to Dr Ballard. So far, no exomoons have been discovered, but given that six of the eight planets in our solar system have moons, most astronomers regard it as an inevitability rather than a possibility that one will be found. It might be possible to find one in data collected by Nasa's Kepler space telescope, or it may be necessary to wait for a more powerful planet-hunter to come online, such as the Transiting Exoplanet Survey Satellite (Tess), due to launch in 2017. Finding exomoons is a bit of a problem, though, as their mass and size are so much less than their host planet's. One technique that may prove successful is gravitational microlensing, which uses a foreground star to magnify a more distant one. The chance alignment can reveal exoplanets around a star, and could possibly even be used to spot a moon in orbit. And we are able to rule certain planets out - ones that are too close to their host star, like Mercury and Venus in our own solar system, are unable to cling on to natural satellites. But finding out whether moons are common in our galaxy will be key for the search for life, and could signal a change in goals for planet-hunters in the near future. 'The fact we reside on a single rocky hunk of rock, orbiting without a big brother planet, might be relatively unusual,' added Dr Ballard.
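The reason close-in planets cannot cling on to moons is the Hill sphere, the region where the planet's gravity beats the star's tidal pull: r_H = a(m/3M)^(1/3). A sketch with illustrative parameters (the values for Upsilon Andromedae d below are approximations, not measured moon orbits):

```python
def hill_radius_au(a_au, m_planet_mjup, m_star_msun):
    """Hill radius [AU] for a planet of given mass and orbital distance."""
    M_JUP_IN_MSUN = 9.54e-4  # Jupiter's mass in solar masses
    m_ratio = m_planet_mjup * M_JUP_IN_MSUN / (3.0 * m_star_msun)
    return a_au * m_ratio ** (1.0 / 3.0)

# Upsilon Andromedae d: ~10 Jupiter masses at ~2.5 AU around a ~1.3 M_sun star
print(f"{hill_radius_au(2.5, 10.0, 1.3):.2f} AU")   # ~0.34 AU of room for moons
# A hot Jupiter at 0.05 AU around a Sun-like star
print(f"{hill_radius_au(0.05, 1.0, 1.0):.4f} AU")   # ~0.0034 AU, almost no room
```

Moons also need to sit well inside the Hill radius (roughly the inner third to half) to remain bound over long timescales, which shrinks the usable zone for close-in planets even further.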
0.925252
3.760959
Our Sun is one of roughly 100 billion stars that make up the Milky Way Galaxy. Two-thirds of all stars are paired off, with a gravitational bond between the two stars. Such systems are known as stellar binaries. Although these binaries are very common in the galaxy, there is much yet to be learned about their formation, evolution, and interactions. The approach taken in this thesis is to produce simulated data representing the expected measurements that an observational astronomer would collect. We simulate three different binary star systems: an eclipsing binary, a spectroscopic binary, and a gravitational-wave-emitting binary. In the case of the eclipsing binary, we aim to create a graph of the amount of light received as a function of time. For the spectroscopic binary, we use fundamental physical principles to measure the velocity of each of the stars with respect to the Earth. Then, for the gravitational-wave-emitting binary, we generate a plot of the distortion of spacetime due to the orbital motion of the stellar binary. Using these generalized functions, a future researcher will be able to develop a statistical analysis program that combines all of the data from the models in an effort to learn more about the characteristics of the stellar binary. "Multimessenger Astronomy: Modeling Gravitational and Electromagnetic Radiations from a Stellar Binary System," Bridges: A Journal of Student Research: Vol. 6, Article 5. Available at: https://digitalcommons.coastal.edu/bridges/vol6/iss6/5
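The spectroscopic-binary piece of such a simulation reduces, for circular orbits, to two anti-phased sine curves whose amplitudes scale inversely with the stars' masses. A minimal sketch (not the thesis code; all parameters are assumed):

```python
import numpy as np

P     = 10.0  # orbital period [days]
K1    = 30.0  # radial-velocity semi-amplitude of star 1 [km/s]
q     = 0.5   # mass ratio m2/m1; the lighter star swings faster
gamma = 5.0   # systemic velocity of the binary's center of mass [km/s]

t = np.linspace(0.0, 2.0 * P, 500)       # two full orbits
phase = 2.0 * np.pi * t / P
v1 = gamma + K1 * np.sin(phase)          # star 1, toward/away from Earth
v2 = gamma - (K1 / q) * np.sin(phase)    # star 2, opposite phase, larger swing

print(f"Star 1 swings +/-{K1} km/s, star 2 +/-{K1 / q} km/s about {gamma} km/s")
```

Fitting measured velocities to curves like these yields the period, the mass ratio, and (with the inclination from an eclipse) the individual masses, exactly the combination of models the thesis aims to bring together.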
0.808035
3.035385
As high and low tides are the earth's oceans' response to the moon's gravity on a large scale, so, it has long been supposed, are women's menstrual cycles on a smaller scale. Whether we are greatly or hardly affected by our close (relatively speaking) cosmic neighbor, whether it depresses or impresses us, it has inspired humankind since we have had the capacity to observe its wanderings across the sky. Long before we understood that it revolves around the earth, as the earth does around the sun, we tried to explain its regular appearances and disappearances. In ancient Greek mythology, the moon was thought to represent the Goddess Selene riding her silver chariot across the firmament at night, just as her brother, Sun God Helios, moved across the day sky in his golden chariot. The scientific terms selenology and selenography (the astronomical study of the moon, and the study of the physical features of the moon, respectively) still commemorate that divine lady of the night. Our ancestors prayed or sacrificed to it, or joined wild canines in howling at it. January's first full moon is known as the Wolf Moon, named after the hungry wolves that vocally lament scarce food offerings in the midst of the coldest and darkest month of the year. The moon's monthly waxing and waning are obvious to anybody who takes the time to gaze at the heavens, and special celestial events, such as eclipses, have always brought out admirers, just as one did a few days ago, when a Wolf Moon, which was simultaneously a Super Moon (a full moon that appears larger and brighter because it occurs near perigee, the point of the moon's orbit closest to the earth), was involved in a total lunar eclipse, thereby being transformed into a Blood Moon (in which the fully eclipsed moon takes on a reddish color). I am not one to set my alarm for 3 o'clock in the morning to witness most astronomical happenings, but I like to be aware of the lunar cycles. Even before I learned about this month's planetary spectacle, I had sorted through some of my old moon photographs to prepare a blog post. As the sky in Colorado Springs was clear on January 20, and I did not have to set my alarm for the middle of the night, I was able to add a few additional lunar impressions to share with my fellow moon lovers.
0.851529
3.376221
The GPS radio occultation technique (Kursinski et al., 1997) uses satellite-to-satellite limb soundings between low-Earth orbiters and GNSS transmitters to measure the accumulated refraction of the signal as it propagates through the atmosphere. Based on the Doppler shift and the known occultation geometry, the so-called bending angle can be calculated and assigned to the ray's tangent (perigee) point, where most of the refraction occurs. The idea has its origin in extraterrestrial space missions of the Mariner program that explored the atmospheres of Mars (Fjeldbo and Eshleman, 1968) and Venus (Fjeldbo et al., 1971). The proof-of-concept GPS/MET mission, launched in 1995, was the first experiment on Earth's atmosphere (Kursinski et al., 1996); it established radio occultation as an important tool for weather forecasting, and the number of receiver platforms, as well as of new-generation transmitters, is constantly increasing. The most successful mission, FORMOSAT-3/COSMIC, which placed a constellation of six low-Earth orbiters in orbit in 2006, provides both ionospheric total electron density profiles for space weather applications and neutral-atmosphere soundings that are operationally assimilated into numerical weather prediction models in the form of bending angle or refractivity profiles (Cucurull et al., 2007; Poli et al., 2010). Since the COSMIC satellites have already exceeded their design lifetime, a follow-on mission is planned for launch into equatorial (expected in 2018) and polar orbits (under evaluation by NOAA's Observing System Simulation Experiments), which, together with the growing EUMETSAT constellation of MetOp satellites, will provide continuous retrievals.
Fig. Radio occultation event (source: cosmic.ucar.edu)
Fjeldbo, G., Eshleman, V. R. (1968). The atmosphere of Mars analyzed by integral inversion of the Mariner IV occultation data. Planetary and Space Science, 16(8), 1035-1059.
Fjeldbo, G., Kliore, A. J., Eshleman, V. R. (1971). The neutral atmosphere of Venus as studied with the Mariner V radio occultation experiments. The Astronomical Journal, 76, 123.
Kursinski, E. R., Hajj, G. A., Bertiger, W. I., Leroy, S. S. (1996). Initial results of radio occultation observations of Earth's atmosphere using the Global Positioning System. Science, 271(5252), 1107.
Kursinski, E. R., Hajj, G. A., Schofield, J. T., Linfield, R. P., Hardy, K. R. (1997). Observing Earth's atmosphere with radio occultation measurements using the Global Positioning System. Journal of Geophysical Research: Atmospheres, 102(D19), 23429-23465.
Cucurull, L., Derber, J. C., Treadon, R., Purser, R. J. (2007). Assimilation of global positioning system radio occultation observations into NCEP's global data assimilation system. Monthly Weather Review, 135(9), 3174-3193.
Poli, P., Healy, S. B., Dee, D. P. (2010). Assimilation of Global Positioning System radio occultation data in the ECMWF ERA-Interim reanalysis. Quarterly Journal of the Royal Meteorological Society, 136(653), 1972-1990.
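For a flavor of the underlying geometry: in a spherically symmetric atmosphere, the bending angle for impact parameter a follows from an Abel integral over the refractive-index profile, α(a) = −2a ∫ (d ln n/dx)/√(x² − a²) dx with x = n(r)·r (see, e.g., Kursinski et al., 1997). A numerical sketch with an assumed exponential refractivity profile, not an operational retrieval:

```python
import numpy as np

R_E = 6371e3  # Earth radius [m]
N0  = 300.0   # assumed surface refractivity [N-units]
H   = 7000.0  # assumed refractivity scale height [m]

def n_of_r(r):
    """Refractive index for an exponential refractivity profile."""
    return 1.0 + 1e-6 * N0 * np.exp(-(r - R_E) / H)

def bending_angle(tangent_height):
    """Bending angle [rad] via the forward Abel integral."""
    r = np.linspace(R_E + tangent_height, R_E + 120e3, 100_000)
    x = n_of_r(r) * r              # refractional radius n(r) * r
    a = x[0]                       # impact parameter at the tangent point
    dlnn_dx = np.gradient(np.log(n_of_r(r)), x)
    # skip the first sample to avoid the (integrable) singularity at x = a
    integrand = dlnn_dx[1:] / np.sqrt(x[1:] ** 2 - a ** 2)
    return -2.0 * a * np.trapz(integrand, x[1:])

print(f"{bending_angle(10e3) * 1e3:.1f} mrad")  # a few mrad at 10 km height
```

Operational processing inverts this relation (an inverse Abel transform) to recover refractivity, and from it temperature and humidity, from the measured bending-angle profile.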
0.881493
3.912244
Moon phase on 27 January 2095, Thursday, is Waning Gibbous, 21 days old. The Moon is in Libra. The previous main lunar phase is the Full Moon, 6 days earlier, on 20 January 2095 at 12:48. The Moon rises in the evening and sets in the morning. It is visible to the southwest and is high in the sky after midnight. The Moon is passing about ∠24° of the ♎ Libra tropical zodiac sector. The lunar disc appears visually 8.9% narrower than the solar disc; the Moon and Sun apparent angular diameters are ∠1783" and ∠1948". The next Full Moon is the Snow Moon of February 2095, 22 days later, on 19 February 2095 at 06:59. There is a low ocean tide on this date: the Sun and Moon gravitational forces are not aligned, but meet at a large angle, so their combined tidal force is weak. The Moon is 21 days old. Earth's natural satellite is moving from the middle to the last part of the current synodic month. This is lunation 1175 of the Meeus index, or 2128 of the Brown series. The length of the current lunation 1175 is 29 days, 11 hours and 55 minutes; it is 1 hour and 45 minutes longer than the next lunation, 1176. The length of the current synodic month is 49 minutes shorter than the mean length of a synodic month, but still 5 hours and 20 minutes longer than the shortest of the 21st century. The true anomaly of this lunation is ∠317.1°; at the beginning of the next synodic month it will be ∠337.7°. The length of upcoming synodic months will keep decreasing as the true anomaly gets closer to the value for a New Moon at the point of perigee (∠0° or ∠360°). It is 2 days after the point of apogee, on 25 January 2095 at 03:56 in ♍ Virgo. The lunar orbit is getting closer: the Moon is moving inward toward the Earth, and will keep this direction for the next 9 days, until it reaches the point of next perigee, on 6 February 2095 at 05:16 in ♓ Pisces. The Moon is 401 980 km (249 779 mi) away from Earth on this date; it moves closer over the next 9 days until perigee, when the Earth-Moon distance will reach 359 759 km (223 544 mi). It is 9 days after its ascending node, on 17 January 2095 at 23:02 in ♊ Gemini; the Moon is following the northern part of its orbit for the next 4 days, until it crosses the ecliptic from North to South at the descending node, on 1 February 2095 at 11:53 in ♐ Sagittarius. It is 9 days after the beginning of the current draconic month in ♊ Gemini; the Moon is moving from the beginning to the first part of it. It is 8 days after the previous North standstill, on 19 January 2095 at 00:03 in ♋ Cancer, when the Moon reached a northern declination of ∠24.116°. Over the next 5 days the lunar orbit moves southward to reach a southern declination of ∠-24.164° at the next southern standstill, on 2 February 2095 at 11:47 in ♑ Capricorn. After 8 days, on 4 February 2095 at 21:28 in ♒ Aquarius, the Moon will be in New Moon geocentric conjunction with the Sun, and this alignment forms the next Sun-Moon-Earth syzygy.
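The quoted apparent diameter can be reproduced directly from the quoted distance and the Moon's physical diameter, a handy consistency check on pages like this:

```python
import math

MOON_DIAMETER = 3474.8e3  # mean lunar diameter [m]
distance      = 401980e3  # Earth-Moon distance on this date [m]

angle_rad    = 2.0 * math.atan(MOON_DIAMETER / (2.0 * distance))
angle_arcsec = math.degrees(angle_rad) * 3600.0
print(f'{angle_arcsec:.0f}"')  # ~1783", matching the value quoted above
```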
0.848363
3.160539
Tracking down cosmic giants

Astronomers and astrophysicists hope to investigate galaxy clusters using the eROSITA space telescope. When eROSITA gets to work, it will have had a long journey. The X-ray telescope has already travelled a total of 1.5 million kilometres through space since its launch in July 2019 on board the SRG satellite. Its destination is a point on the extended Sun-Earth line, beyond Earth, where the combined gravity of the two bodies keeps a spacecraft orbiting in step with the Earth (the second Sun-Earth Lagrange point, L2). The SRG satellite will orbit this point while orbiting the sun together with Earth. It will then start to slowly rotate on its own axis: one complete rotation every four hours, for a period of four years. eROSITA will thus scan the entire sky once every six months. Researchers hope the telescope will allow them to investigate galaxy clusters, the largest gravitationally bound structures in the universe.

Explaining dark energy

These clusters of several thousand galaxies should help to explain the phenomenon of dark energy, a mysterious form of energy which counteracts gravity and contributes to the increasingly rapid expansion of the universe. Astronomers now hope to investigate 100,000 of these galaxy clusters. A team led by FAU astronomer Prof. Dr. Jörn Wilms from the Erlangen Centre for Astroparticle Physics (ECAP) and the Dr. Karl Remeis Observatory Bamberg has been involved in developing an X-ray telescope for this purpose. Researchers from several German universities were involved in the project, coordinated by the Max Planck Institute for Extraterrestrial Physics (MPE) and the German Aerospace Centre (DLR). They hope that eROSITA will not only allow them to identify and count galaxy clusters, but also make it possible to actually see inside these cosmic objects. The gas found there is so hot that it radiates in the medium X-ray range, thereby becoming visible to eROSITA. This allows comparisons to be drawn between galaxy clusters which are close and others which are further away. The further a galaxy cluster is from the observer, the older the light we see, showing an earlier stage of development. This allows conclusions to be drawn on the strength of the dark energy and how it has changed over time as the universe has developed. The astronomers also hope to use eROSITA to investigate roughly two million black holes in active galactic nuclei. The working group led by FAU astronomer Prof. Dr. Manami Sasaki from the Dr. Karl Remeis Observatory aims to investigate the remains of stellar explosions in our Milky Way.

Specialist software delivered by FAU

Seven mirror modules which can capture even the smallest X-ray signals are required in order to produce images of the sky with a better resolution than ever before. FAU expertise has contributed to the development and operation of the highly specialised detector. Wilms and his team have designed software for mathematical models which can be used to optimise the performance of such measuring devices in advance and monitor them during operation. 'Our highly specialised expertise is the reason we were able to get involved in such major projects,' explains Wilms. Normally, simulation programmes are developed specifically for each particular mission. Wilms' software has been designed in such a way, however, that it can be easily adapted for any space mission. 'The model reproduces the entire measuring process,' explains Wilms.
'Our objective, however, is to refine the model until the simulated measurements it gives are the same as the data actually recorded by the device.' Working towards this goal, the researchers have been feeding the simulation programme with data from earlier missions and astronomical events which they know will definitely occur. 'This basically allows us to predict what the device will record,' says Wilms. If an actual measurement differs from the simulation, this comes to the attention of the FAU researchers, who can then check whether an error has occurred or whether the telescope has made a discovery.

Rapid reaction times

Another task for which FAU is responsible is the near-real-time analysis for eROSITA. The X-ray telescope gathers data which are transmitted once a day to Earth and decoded with software designed by FAU. These are then evaluated straight away at the observatory in Bamberg. 'There are objects in the sky which have to be looked at without delay,' says Wilms. 'As it is not possible to alter eROSITA's observation routine, most data it collects are more likely to be interesting from the point of view of investigating the origins of the universe,' he explains. 'If, however, we record a source of light which is 100 times brighter than expected, then we may need to look into it in more detail without delay.' In instances such as these, the FAU researchers will use their software to try to find out what it is by comparing the source with older images. However, it is still worth taking another look without delay even if the source is known. 'Our colleagues might discover something new,' says Wilms. If so, scientists would then alert their colleagues throughout the world, and the NASA Hubble telescope could move to focus on this point. eROSITA will continue to turn stoically without interruption, and its X-ray eyes will only return to the same spot in another six months, by which time it may be too late.
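The near-real-time check Wilms describes boils down to comparing each downlinked measurement against the simulated expectation and flagging large excesses. A toy sketch of that logic (source names and count rates are invented for illustration):

```python
FLAG_FACTOR = 100.0  # flag sources ~100x brighter than the simulation predicts

def flag_transients(expected, observed, factor=FLAG_FACTOR):
    """Return sources whose observed rate far exceeds the expectation.
    Sources missing from the simulation are always flagged as new."""
    return [src for src, rate in observed.items()
            if rate > factor * expected.get(src, 0.0)]

expected = {"SRC-A": 1.2, "SRC-B": 0.4}    # simulated count rates [counts/s]
observed = {"SRC-A": 1.3, "SRC-B": 55.0}   # rates from today's downlink
print(flag_transients(expected, observed))  # ['SRC-B']: look at it right away
```

Anything flagged would then be compared against older images, just as described above, before alerting other observatories.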
0.889375
3.942388
Super-luminous supernovae are the brightest explosions in the Universe. In just a few months, a super-luminous supernova can release as much energy as our Sun will in its entire lifespan. And at its peak, it can be as bright as an entire galaxy. One of the most-studied super-luminous supernovae (SLSNe) is called SN 2006gy. Its origin is uncertain, but now Swedish and Japanese researchers say they might have figured out what caused it: a cataclysmic interaction between a white dwarf and its massive partner. SN 2006gy is about 238 million light-years away, in the constellation Perseus, in the spiral galaxy NGC 1260. It was discovered in 2006, as its name indicates, and has been studied by teams of astronomers using the Chandra X-ray Observatory, the Keck Observatory, and others. When SN 2006gy was discovered, Nathan Smith from UC Berkeley was leading a team of astronomers from UC and the University of Texas at Austin. "This was a truly monstrous explosion, a hundred times more energetic than a typical supernova," said Smith. "That means the star that exploded might have been as massive as a star can get, about 150 times that of our sun. We've never seen that before." Those types of stars mostly existed in the early Universe, astronomers thought at the time, so witnessing this one exploding gave astronomers a rare look at one aspect of the early Universe. It wasn't just the energy output from SN 2006gy that attracted attention: the SLSN displays some curious emission lines that have puzzled astronomers. Now a team of researchers think they've discovered what's behind SN 2006gy. Their paper, titled "A type Ia supernova at the heart of superluminous transient SN 2006gy", is published in the journal Science. The team includes researchers from Stockholm University in Sweden and colleagues at Kyoto University, the University of Tokyo, and Hiroshima University. The team saw emission lines of iron that only appeared about one year after the supernova. They explored several models to explain the phenomenon, and settled on one. "No one had tested to compare spectra from neutral iron, i.e. iron which has retained all its electrons, with the unidentified emission lines in SN 2006gy, because iron is normally ionized (one or more electrons removed). We tried it and saw with excitement how line after line lined up just as in the observed spectrum," says Anders Jerkstrand, Department of Astronomy, Stockholm University. "It became even more exciting when it quickly turned out that very large amounts of iron were needed to make the lines – at least a third of the Sun's mass – which directly ruled out some old scenarios and instead revealed a new one." The new scenario involves a star going supernova and interacting with a pre-existing dense shell of circumstellar material. According to the team's results, SN 2006gy started out as a double star. One star was a white dwarf similar in size to Earth. The second was a massive, hydrogen-rich star as large as our entire Solar System. The pair were in a tight orbit. The larger star was in the later stages of evolution, and was expanding as new fuel was ignited. As its envelope expanded, the white dwarf was drawn into the larger star, spiraling in towards the center. During the in-spiral of the white dwarf, the more massive star expelled some of its envelope. That happened less than a century before the supernova. Eventually, the white dwarf reached the center and became unstable. It then exploded as a Type Ia supernova.
When the supernova exploded, the ejected material slammed into the expelled envelope. That titanic collision produced SN 2006gy's extreme light output and curious emission lines. "That a Type Ia supernova appears to be behind SN 2006gy turns upside down what most researchers have believed," says Anders Jerkstrand. "That a white dwarf can be in close orbit with a massive hydrogen-rich star, and quickly explode upon falling to the centre, gives important new information for the theory of double star evolution and the conditions necessary for a white dwarf to explode." SN 2006gy was extremely bright, but others have come close. Another supernova, SN 2005ap, was brighter than SN 2006gy, but only at its peak, and that peak brightness lasted only a few days. Then there's SN 2015L (also called ASASSN-15lh), which was brighter still. Though it appeared to be a superluminous supernova, its nature is still disputed. At peak brightness, SN 2015L was 570 billion times brighter than the Sun, and 20 times brighter than the combined light emitted by the Milky Way.
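The opening claim, that a few months of SLSN output rival the Sun's entire lifetime of shining, checks out with round numbers. The radiated energy assumed below (~10^44 J) is an approximate literature-scale value for events like SN 2006gy, not a figure from this article:

```python
L_SUN  = 3.8e26          # solar luminosity [W]
T_SUN  = 10e9 * 3.15e7   # ~10-billion-year main-sequence lifetime [s]
E_SLSN = 1e44            # assumed radiated energy of a bright SLSN [J]

E_sun_lifetime = L_SUN * T_SUN
print(f"Sun over its lifetime: {E_sun_lifetime:.1e} J")        # ~1.2e44 J
print(f"SLSN / Sun ratio:      {E_SLSN / E_sun_lifetime:.2f}")  # order unity
```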
0.858481
4.082044
Its career as a planet lasted for less than half its year. Pluto was discovered late, in 1930, as our final planet, completing the Sun's brood of nine. (In hindsight it had been seen, but not recognized, as early as 1909.) But it was always an odd one, the runt of the litter, banished to the far-flung regions of the Solar System. By 2006 the International Astronomical Union had seen enough. It instigated a new class of objects, intermediate between planets and asteroids (or minor planets), and re-assigned Pluto to be the primary member of this class, called dwarf planets. There were good reasons to do so: Pluto was just too small, on too strange an orbit, and, worst of all, similar objects were being discovered. If Pluto was a planet, perhaps as many as 50 other objects out there could claim planethood too. NASA astronomers objected vehemently. NASA had just launched a mission to Pluto, called New Horizons. US Congress would not be well pleased if its money, awarded to investigate the last unexplored planet of the Solar System, ended up at a dwarf planet instead. The objections were overruled by a convincing vote. 76 years after being discovered and named, having travelled only a third of its 248-year orbit around the Sun, Pluto's brief summer as a planet came to an end. Still, this newly named dwarf planet, frozen beyond belief, is a fascinating object. It has everything: weather, enormous climate change, active geology and volcanoes, a large heart on its surface, a family of satellites, and a mysterious past. Maybe being a dwarf planet should be a badge of honour, worthy of every penny spent by NASA. As an interesting aside, the name Pluto was first suggested by Venetia Burney, 11 years old at the time and living in Oxford. Of course we don't know who named the original planets, but perhaps that was also the work of children. Children have curiosity and want to push their horizons. Space was made for them. This post will present our knowledge of Pluto, and its place in the Solar System, prior to the New Horizons encounter. Other (later) posts will be about New Horizons itself, and what we learned from the fly-by. Children of the Sun. Two decades ago, when everything was simpler, the Solar System had only two types of planets: the rocky or terrestrial planets (Mercury, Venus, Earth and Mars) and the gas giants (Jupiter, Saturn, Uranus, Neptune). Pluto as a solid body was classed with the rocky planets. The four rocky planets all have iron cores underneath a silicate mantle. They have only small amounts of the so-called volatiles: water, methane, CO2. There is a reason for this deficit. The planets formed from solid particles which came together. Each mineral has a temperature below which it can be a solid (there is no liquid phase in vacuum). For iron and silicates, this condensation happens when temperatures drop below a balmy 800-1000 Kelvin, but water, methane and CO2 freeze out only below a chilly 200 K. The rocky planets formed at a distance from the young Sun where the temperatures were in the range 300 to 600 Kelvin. So water and other volatiles were in the gas phase and absent from the solid particles which made up these planets. The 200-K temperature was reached somewhere within the asteroid belt. This is called the snow line. Further out, water became solid. Any 'rocky' planets forming out here would contain major amounts of water.
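The snow-line argument can be checked with a simple blackbody estimate: a fast-rotating dark body at a distance of d AU from the Sun equilibrates near T ≈ 278 K / √d. Setting T = 200 K puts the line near 2 AU, in the same neighbourhood as the asteroid belt (detailed disk models, which include effects this formula ignores, tend to shift it outward to roughly 2.7 AU):

```python
import math

T_1AU  = 278.0  # equilibrium temperature of a dark body at 1 AU [K]
T_SNOW = 200.0  # condensation temperature of water in vacuum [K]

# T = T_1AU / sqrt(d)  ->  d = (T_1AU / T)^2
d_snow = (T_1AU / T_SNOW) ** 2
print(f"Snow line estimate: {d_snow:.1f} AU")  # ~1.9 AU

# Temperature out at Pluto's average distance (~39.5 AU)
print(f"T at 39.5 AU: {T_1AU / math.sqrt(39.5):.0f} K")  # ~44 K
```

The same formula gives roughly 44 K at Pluto's average distance, consistent with the 40 K surface temperature quoted below.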
In reality, the boundary wasn’t as sharp as the name ‘snow line’ implies, and the fraction of water in planets slowly increases further out, being very low for Earth, larger for Mars, and larger still for many asteroids. The four gas giants are nowadays divided into two groups. Jupiter and Saturn, the monsters of the Solar System, are true gas giants. Uranus and Neptune, rather smaller but still 15 times heavier than the Earth, are considered water giants. They contain huge oceans in their mantles, above a rocky, Earth-like core. The moons of the large gas giants, being outside the snow line, also are a mixture of rock and ice, with more ice than found in a typical asteroid. In all these objects, ‘ice’ is mainly water ice but it also contains ammonia, CO2, and methane. How does Pluto compare to the rest of the family? If you like your information in numerical form, a Pluto factsheet is maintained by NASA. But the facts need interpreting. Pluto’s mass may seem impressive, but it is positively minute compared to the planets: it is a midget, only 0.2% of the Earth, and 20 times less than the smallest planet, Mercury. Embarrassingly, seven moons in the Solar System are larger than Pluto. The orbit is also strange for a planet, very elliptical, with the distance to the Sun varying by 50%. For comparison, for Earth it varies only by 3.5%, and for Mars (the most elliptical orbit among the planets) the variation is 18%. Pluto is in a different league. The orbit is also well outside the plane of the ecliptic where all the other planets are found. It really is an outcast. Pluto is just about large enough that during formation, the interior would have melted from the heat of the colliding fragments. The melting will have allowed the denser silicates to sink down, and Pluto is thus expected to have a rocky core, surrounded by a water mantle. There is also an atmosphere, mainly consisting of nitrogen, with a pressure of 10 microbar, comparable to the Earth’s atmosphere 100 kilometers above the surface. Temperatures at Pluto are around 40 Kelvin, or -233 C. (If you live in one of the seven countries following the Polish-born Daniel Fahrenheit, this is -388 F.) Pluto’s giant moon is called Charon. ‘Giant’ is relative, but Charon is large enough that some consider it a dwarf planet in its own right. Whereas Pluto has a diameter of 2370 km, Charon is 1200 km across. (For comparison, our Moon is 3475 kilometers across.) Four more moons were discovered in recent years. They are very much smaller, and a bit further from Pluto than Charon. Interestingly, the orbits are in resonance: the orbital periods scale approximately as 1:3:4:5:6, which is the only way such closely spaced moons can stay in stable orbits. Three of the four large moons of Jupiter (Io, Europa and Ganymede) are locked in the same kind of resonance. Charon takes 6.4 days to orbit Pluto (that is the ‘1’ in the 1:3:4… sequence above). Pluto’s own rotation period is also 6.4 days, as is Charon’s day. In other words, they are tidally locked, both always showing the same face to each other, just like our Moon does to us (but the Earth does not to the Moon). How did this complex system form? It seems unlikely that a dwarf planet so far out in the Solar System could pick up 5 satellites! There are too few free objects around there for such multi-dating. The circular orbits of the moons suggest that they formed in situ, around Pluto, and were not captured from abroad.
The large size of Charon is most easily explained as caused by a major impact during the formation of Pluto, which effectively split proto-Pluto in two. This impact would have been even more devastating (relatively speaking) than the one that formed our Moon. The other moons could have formed from debris from this impact. But could such a major impact have happened in such an empty region of the Solar System? Isn’t it a bit like a world-class collision in a road-less region in the desert heart of Australia, traversed by one vehicle per month? A clue comes from the densities of Pluto and Charon. These are around 1800 kg/m3, midway between ice and rock, meaning that both are very roughly equal parts ice and rock. This is very similar to Titan and Enceladus, moons of Saturn. The moons of Jupiter, in contrast, are denser and have more rock than ice. Remember that the further from the Sun a planet or moon formed, the higher the fraction of ice it contains (Earth has essentially none). Pluto will thus have formed somewhere in the general region of Saturn, much closer to the Sun than Pluto is now. During a chaotic phase in the early Solar System, a lot of smaller bodies were thrown out of this region by encounters with Saturn and Neptune. That explains the strange orbit of Pluto. The collision that formed Charon could also have happened during this period of chaos. Long before New Horizons was launched, we knew that Pluto has an interesting surface. Its brightness changed, such that the dwarf was considerably brighter when the side facing Charon was pointed towards us. There was a more reflective material on that side, and the most likely culprit was ice; nitrogen ice, to be precise, which is white at very low temperature, around 40 K. (When ‘warmed’ to 60 K, it becomes transparent, and at 63 K it evaporates.) The Hubble Space Telescope had a go at mapping tiny Pluto. It found a particularly bright region, neighbouring a very dark area. The dwarf planet also appeared to be changing: over a decade, the northern (darker) hemisphere was brightening, and the southern hemisphere fading. Over this time, the density of the puny atmosphere had doubled. Pluto was showing seasons! We now know that the bright region is the same as the ‘heart’ found by New Horizons. Discoveries often have history. Seasons of the heart Pluto’s seasons are funny. Of course, this far from the Sun’s warmth, the seasons change between winter and ice age; you would not expect to find anything resembling summer! On Earth the seasons are determined purely by the tilt of the polar axis. The angle between the orbit around the Sun and the Earth’s rotation is 23 degrees: this tilt causes the varying length of day, and the varying height of the Sun above the horizon. Within 23 degrees of the equator, the Sun can get to the zenith: these are the tropics. Within 23 degrees of the pole, there are times when the Sun doesn’t set or doesn’t rise, depending on the season: these are the polar regions. Pluto’s axial tilt is more than double that of Earth, at 57 degrees. This makes the seasons much more extreme. (The tilt is actually 123 degrees: the planet rotates upside-down and the Sun rises in the West. But the effect is the same as a tilt of 57 degrees.) As the dwarf moves around the Sun, at a certain point in its 248-year orbit the Sun will be above the equator: this is the equinox, when day and night are the same length everywhere on Pluto.
A quarter orbit along, the Sun is only 33 degrees from the north pole of Pluto: the southern quarter of the planet is now in perpetual darkness and the northern quarter in perpetual sunlight. Then the Sun moves back to the equator, and finally to close to the south pole, and the polar day and night are reversed. On Pluto, the ‘tropics’ (the name could be seen as optimistic) and the polar regions overlap: there is a region at intermediate latitudes where during the summer the Sun gets to the zenith, so it counts as the tropics, but where the Sun never sets; half an orbit later the same region has a perpetual polar night. It is both tropical and polar. At the moment, the Sun is not far from the equator and all of Pluto gets some sunlight. Spring has begun in the North, and the South is heading towards its century-long winter night. Pluto’s climate was designed for Sleeping Beauty. A second problem is that the distance to the Sun changes so much during the Pluto year. Closest approach occurs during the spring equinox in the North (or autumn equinox in the South). The northern summer and southern winter happen when Pluto is far from the Sun. The South therefore has more extreme temperatures – a colder winter and a warmer summer. The South has climate and the North has weather. Also, note that Pluto moves much more slowly when further from the Sun, so the seasons are of very unequal length. With Pluto just coming out of the northern winter, the expectation was that at the moment there is a nitrogen ice cap in the north, in the process of evaporating, while the southern ice cap has not yet started to reform. The atmosphere is therefore denser than usual. As Pluto moves away from the Sun, much of it will condense as frost and ice on the southern hemisphere. The ice cap resembles an anti-swallow, migrating from winter to winter. Is there life on Pluto? There are four groups of objects in the outer Solar System similar to Pluto, ranging from puny to Pluto-sized. Over a thousand objects are known, several of which are large enough to be dwarf planets. Together they are called the Kuiper belt. The first, and largest, group contains the ‘classical objects’: they have mildly elliptical orbits, around 42 to 45 AU (1 AU is the distance Earth-Sun; Pluto is at 39 AU), and most are close to the plane of the ecliptic where the planets are found. The second group has the same orbital period as Pluto but very elliptical orbits (each one different) and a variety of inclinations to the plane of the ecliptic. These are called the plutinos. Their orbits are in a 2:3 resonance with Neptune, meaning that for every three orbits of Neptune, they complete exactly two. Many cross the orbit of Neptune (as does Pluto), but because of this resonance they never come close or collide. There probably used to be more, but any that had different orbital periods were slowly removed by Neptune. Gravity is a bastard: it can be slow and weak, but it always wins. The third group is the ‘scattered disk’: these objects are much further out and their orbits are highly elliptical. The dwarf planet Eris, almost as big as Pluto, is a member of this group. They have clearly been thrown out here after coming too close to one of the giant planets, probably Jupiter, and coming off worst. Now they are the cricket/baseball out-fielders: hanging around, not doing much, waiting to catch the odd ball coming their way.
Finally, a few of the moons of the giant planets are similar to Pluto, including Phoebe (orbiting Saturn) and Triton (orbiting Neptune). Triton is in some ways the other Pluto. It orbits Neptune in the opposite direction to Neptune’s rotation. That indicates it hasn’t always been there but was captured by Neptune at some time. Triton is a bit larger than Pluto and has a slightly higher density: a bit more rock and a bit less ice; thus, it probably formed closer to the Sun, perhaps near Jupiter. Like Pluto, Triton has a nitrogen atmosphere. Triton was photographed during the Voyager 2 encounter, in 1989. It showed a surface that is a mixture of dirt and nitrogen ice: the lack of impact craters suggests that the ice is quite young (or re-formed regularly). Most excitingly, Voyager 2 found geysers on the surface, ejecting gas kilometers high. These were only seen where the Sun was directly above Triton, at the zenith. The geysers are probably from nitrogen: phreatic nitrogen volcanoes. Nitro-thermal activity, if you prefer. The geysers of Triton raise the question of how common volcanoes are, out there in Tevye’s frozen wastelands (from Sholem Aleichem’s Fiddler). All rocky planets have volcanoes (remember Venus). These planets produce liquid rock, which rises to the surface as lava, either effusively or explosively. But the outer Solar System has less rock, and more ice. Would you expect volcanoes? And to melt the magma, a heat source is needed. Does that even exist in these small, cold bodies? Io, the hellish moon of Jupiter, is a clear case in favour. It has the most volcanic surface in the Solar System; the heat is supplied by huge tides (on solid rock!) from its boss, Jupiter. Nearby, the asteroid Vesta once had a magma ocean, albeit only when it was young. But both of these bodies formed close to the snow line, and still are largely rock. Further out, where ice rules, volcanoes are very different. Here, on the ice worlds, you would expect volcanoes to erupt liquid water rather than liquid rock. Many of the moons are now believed to have water oceans deep underneath the surface, and these take up the role of a ‘water magma’. But water is not a good volcanic substance. When rock melts, it becomes less dense and the magma therefore rises up towards the surface. When ice melts, its density increases and the resulting liquid tries to sink. You know this from experience: ice floats but rocks sink. Down is the wrong direction for volcanic eruptions. Turning water into a gas does give the required upward pressure, perhaps even too much of it. This gasification can happen close to the surface and results in a Yellowstone: huge geysers erupting into the sky. (Under near-vacuum, liquid water does not exist and all eruptions become gaseous when reaching the surface.) The result is seen on Enceladus: geysers erupting from a sub-surface ocean, going so high that some water escapes altogether and ends up on other moons. The geysers of Enceladus are one of Brian’s wonders of the Solar System. Would you call this a volcano? When it goes this high, you might as well. (But to play the devil’s advocate, comets work the same way – would you call the jets that form their tails volcanic?) Even further out in the Solar System, the surfaces are mainly nitrogen ice. When this is transparent, sunlight can penetrate and heat the material underneath. This is probably what powers the geysers of Triton. At 40 K, nitrogen ice is white rather than transparent, and this underground heating does not work.
So one might expect that Pluto would not have such geysers. The ninth planet Let’s come back to the Kuiper belt. A paper published in early 2016 pointed out that the orbits of some of its most distant known objects were remarkably similar, with their closest approaches to the Sun occurring in the same area of space. This was a strange alignment which was very unlikely to have happened by accident. Models show that only the gravitational pull of another body could do this. This body would be perhaps 200 to 1000 times further from the Sun than we are, on an elliptical orbit with a period as long as 10,000 years, and could be as massive as Uranus or Neptune – large enough that there should be no doubt about its classification as a planet. The models appear quite convincing. We haven’t found it yet, and it will be very faint. But it does appear that the Solar System again has nine planets, not eight, and the final member, which occupies the vacancy left by demoted Pluto, is our third water giant. It cannot have formed as far out as it appears to be now: most likely it formed close to Saturn, and was ejected during an ill-advised close approach, in the years of chaos in the young Solar System. Now it lives in the Solar System’s Siberia, far from civilization but still out there after all these years. A true wanderer. If only we knew where it was, New Horizons could perhaps be redirected at it. It might take a century to get there, but the US Congress’ money would still be used for its original purpose: visiting the final planet of the Solar System. So we knew a fair amount about Pluto. And finally a spacecraft came to have a look. New Horizons was launched in 2006, still in the years of Pluto’s planethood, and traveled for nine years, with a quick fly-by of Jupiter on the way to pick up some extra speed. On 14 July 2015 it reached Pluto. Unable to carry enough fuel to stop, it flew past within hours. These hours transformed our knowledge. The dwarf planet became the world with the big heart. To be continued Over the years I have written very little about Katla. The reason for this is that Katla has done very little to merit an article. Here at Volcanocafé we have written a few posts about Katla, but all have been attempts to put facts up against all the alarmist trash that has been written over the years. This has changed lately, though, and before we start talking about recent activity we need to look a bit at Katla to get our facts straight. In other words, we need a historic background against which to judge what is happening, compared with previous eruptions. Below I will only write about eruptions that are known to really have happened; I am not including mini-eruptions, or eruptions only to be found in the heads of people with feverish minds. Background of Katla Katla is the third largest volcano in Iceland, with Bárdarbunga and Grímsvötn being slightly larger. All three of them come with slightly different “flavours”. Bárdarbunga is more into large effusive eruptions and small explosive eruptions. Grímsvötn is all over the map, producing explosive eruptions ranging from VEI-2 to VEI-6, and has had three known large effusive rifting fissure eruptions. Katla is more consistent, with predominantly large explosive eruptions from the caldera and one prolonged large effusive eruption. Katla has had 30 eruptions since 820, giving an average of 40 years between eruptions. That average, though, is just a statistical number that can obviously differ a lot. The longest repose time during that period was 100 years and the shortest well-dated repose time is 12 years.
As regards how explosive the eruptions have been, we get a pretty good picture from the records. I am here only using those eruptions from 820 onwards that have a classification in the Global Volcanism Program. Three eruptions had a Volcanic Explosivity Index (VEI) of 3, 14 eruptions had a VEI of 4, and 4 eruptions had a VEI of 5. The average size of the eruptions is why Katla has such a fearsome reputation, especially since it is located unusually close to settlements. After all, the volcano has an average eruption size that ranges somewhere between a medium-sized VEI-4 and a borderline VEI-5. During eruptions, between 0.05 and 5 cubic kilometers of ash is released, and here the bad news is that the ash is very fine-grained and needle-like, so the effects on air traffic during an eruption could be significant, depending on the weather pattern at the onset of the eruption. The greatest threat to the locals is the well-known very large jökulhlaups that come pouring out of the caldera during an eruption. These jökulhlaups are so large that they can remove entire farms, take out a long stretch of the national highway and usually change the entire landscape. During the last eruption the jökulhlaup transformed the entire coastline below Mýrdalsjökull, and the ash, mud and stones deposited added several square kilometers to the surface of Iceland. The extreme oddball among the eruptions is the Eldgjá fissure eruption that started in 934, lasted into 940 and deposited 18 cubic kilometers of fresh lava. In 2013 the earthquake pattern of Katla changed and recurrent brief episodes of small deep earthquakes started. Those earthquakes range between 20 and 30+ km depth and are a sign of magma moving upwards into the system of the volcano. During early 2016 the size and frequency of these earthquakes increased, indicating an increased rate of magma influx from depth. During all of the years that we have had instrumented recordings of earthquake sizes in Iceland, Katla has suffered a few M3+ earthquakes, with the record being M3.4. So it was a bit of a surprise last week when Katla banged off an earthquake that was M3.5. It was shallow, and the signature indicated that it was related to hydrothermal activity caused by fluid movement. And here we come to today’s earthquakes. During the night leading to today there was a brief and intriguing earthquake swarm in the northern part of the caldera. A spot that Henrik Lovén pointed to already in 2011 as being both the most likely spot for an eruption, and the most likely spot for a slightly larger eruption, in a series of articles he published here debunking the idea that Katla would erupt soon (it was during the Katla scare following Eyjafjallajökull). So, what makes this brief earthquake swarm so interesting? First of all we have to take the size into account. After all, we had a new record earthquake size last week, and now we had two earthquakes 20 seconds apart, each measuring M4.5. And since the seismic energy released grows by a factor of roughly 32 per whole magnitude step, each M4.5 earthquake released around 32 times more energy than last week’s M3.5. So the record was not only broken, it was shattered completely. Another way to look at it would be this: within a minute, Katla released more seismic energy than had previously been recorded by instruments for that volcano. Yes, those two earthquakes released more energy than all of the ten-thousand-plus recorded earthquakes at Katla combined – something to ponder indeed.
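For those who like the arithmetic spelled out, the standard Gutenberg-Richter energy-magnitude relation makes the comparison easy (the constants vary slightly between references, so treat the absolute joule values as ballpark figures):

```python
# Radiated seismic energy from magnitude, using the common relation
# log10(E) = 1.5*M + 4.8, with E in joules. Constants differ slightly
# between formulations, so the ratios are more trustworthy than the
# absolute values.

def seismic_energy_joules(magnitude: float) -> float:
    return 10.0 ** (1.5 * magnitude + 4.8)

e_m35 = seismic_energy_joules(3.5)  # last week's record quake
e_m45 = seismic_energy_joules(4.5)  # each of the two new quakes

print(f"M3.5: {e_m35:.2e} J, M4.5: {e_m45:.2e} J")
print(f"Ratio per whole magnitude step: {e_m45 / e_m35:.1f}x")  # ~31.6x
```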
Another thing is that during the hour prior to the two large earthquakes we had several episodes that can be interpreted as fluid movement, and one episode after. This would indicate that fluid started to move, putting pressure on the magma reservoir and causing two large tectonic-type earthquakes that in turn created a void that more fluid moved into. The depth of the events makes it highly unclear whether it was magma or hydrothermal fluid (super-hot water) that was on the move. After the event the earthquake swarm has continued with smaller earthquakes, the largest as I write being M3.3. Last week I wrote an article about Grímsvötn where I described my favorite method of modeling the likelihood of an upcoming eruption, the finite element threshold analysis modeling method. It is basically a way to try to calculate how much pressure increase a volcano can take before it ruptures like an old boiler tank. And like an old boiler tank, a volcano will creak and groan as it closes in on an explosion, and the number of creaks and groans will increase exponentially as the volcano approaches an eruption. For Katla we do not know how many creaks and groans there will be before an eruption becomes inevitable. I will here return to the known increase of magma influx from depth and the very sudden increase in energy released as earthquakes at Katla. In my view, a volcano that suddenly changes its pattern is about to do so in more ways than just being more seismically active. In my way of modeling, this is exactly the kind of sign a volcano would give as it comes close to the breaking point that the model predicts. I am certain that the volcano has reached the tipping point of no return. If things calm down now it will still be closer to an eruption, and if the activity continues or intensifies it is only a question of a relatively short time before we get the steady thrumming earthquake swarm that we know from other Icelandic eruption run-ups. Right now I would say that we are days to years away from an eruption, but if the activity continues or intensifies I would say we are days to weeks away. The change in behavior is, after all, that significant. In 2011 Henrik Lovén wrote this: “While a larger “proper” eruption of Katla in the VEI 3 – 5 range cannot be ruled out, I find one unlikely at present as the current activity mostly is in areas already depleted of evolved magmas by geologically speaking very recent major eruptions. Also there is little sign of the uplift required on GPS. If one were to occur, the odds for one towards the upper end of what Katla is able of ought to be better in the Eastern to Northern parts of the caldera.” This also follows the modeling prediction that an eruption is most likely to occur in a part of the caldera that has not recently erupted, since the pressure there should be higher – a pattern that is known for the Icelandic caldera volcanoes. In short, if I use my own model of prediction (and I should), it seems to say that Katla is nearing an eruption. Due to the lack of data prior to an eruption I can’t calculate exactly when it will occur, but if the current swarm continues over an extended period, or suddenly intensifies and continues, we should see an eruption in the not too distant future. For those interested in a more in-depth explanation of the finite element threshold method I recommend my previous article about Grímsvötn.
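As a toy illustration of that exponential “creak and groan” behaviour (with made-up numbers, not Katla data, and emphatically not the finite element threshold model itself), a log-linear fit to weekly quake counts would expose how fast the precursor activity accelerates:

```python
# Toy illustration of accelerating precursor seismicity: if weekly quake
# counts grow exponentially as a volcano nears failure, fitting a line to
# log(counts) gives the growth rate and hence a doubling time.
# The counts below are hypothetical, for illustration only.
import numpy as np

weeks = np.arange(8)
counts = np.array([3, 4, 6, 8, 13, 19, 30, 44])  # hypothetical weekly counts

slope, _ = np.polyfit(weeks, np.log(counts), 1)
print(f"Counts roughly double every {np.log(2) / slope:.1f} weeks")
```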
What are Inner Planets? In our Solar System, which consists of eight planets in total, there are four inner and four outer planets, with the asteroid belt between them. The inner, or Earth-like, planets are Mercury, Venus, Earth and Mars. They are not to be confused with the inferior planets, which are those closer to the Sun than Earth (so Mercury and Venus). This classification into inner and outer planets was made based on their shape, size, structure and number of moons. The inner planets have rocky compositions – their crusts are made up of minerals such as silicates, while their inner layers are made up of metals such as iron or nickel. The Earth is no exception and has the same characteristics. In addition, the inner planets have few or no moons, and none of them has a ring system like those seen around the outer planets, Neptune for example. Finally, they’re much smaller and less massive than the outer planets, but also warmer. This relative warmth, combined with the fact that all of the inner planets except Mercury have appreciable atmospheres, allows weather and climate to develop. What are Outer Planets? As opposed to the inner planets, the outer planets are further away from the Sun, thus making them much colder. They lie on the far side of the asteroid belt between Mars and Jupiter. The outer planets are Jupiter, Saturn, Uranus and Neptune. They should be distinguished from the superior planets, which are those further away from the Sun than the Earth, and which also include Mars. They aren’t rocky, but either gaseous or icy, and much more massive than all the inner planets combined. In fact, the outer planets account for 99% of all the mass that orbits the Sun, and Jupiter and Saturn alone are roughly 400 times more massive than the Earth. While the inner planets are mostly made up of minerals and metals, the outer ones are mostly hydrogen and helium, the two most common elements in the universe. All of them have rings, although Saturn’s famous rings are the most easily observed from Earth. Additionally, all of them have moons orbiting around them, with Jupiter’s moons Ganymede, Callisto, Io and Europa being the best-known examples. Similarities between Inner and Outer Planets It’s obvious from the differences covered above that there aren’t many similarities between these two types of planets. There are some, however, and they’re listed below:
- They’re all planets orbiting the Sun. This might seem like a trivial statement, but, with new extrasolar planets discovered each day, it’s becoming an increasingly important distinction.
- They all rotate about an axis, and all rotate in the same direction except for Venus and Uranus. Venus and Uranus are famous for their retrograde rotation, meaning they rotate clockwise, while all the other planets, both inner and outer, rotate counter-clockwise.
- They were all formed around the same time. There are several theories that try to describe the evolution of the Solar System, but most of them state that all of the planets formed in a relatively short window of time.
- They’re all ellipsoidal or spherical. They all look like balls, but are in fact slightly “squeezed” because they rotate around their axes, and thus are ellipsoidal.
They can usually be approximated as spherically symmetric without many consequences, though. At 2.3-3.3 AU from the Sun lies the so-called Asteroid Belt, which separates the inner from the outer planets. While the inner planets are rocky, relatively small and relatively warm, mostly with an atmosphere and without moons or rings, the outer planets are the complete opposite – they’re very large and massive, either gaseous or icy, mostly composed of hydrogen and helium, and all of them have rings and moons. This distinction is important as it tells us something about how the Solar System was formed and how it evolved over time.
There are many different names used to describe the activity of using an analog video camera or a digital USB camera to enhance the view through the lens or reflected from the mirror of a telescope. Electronic Assisted Astronomy, or EAA, is probably the most commonly used name and is the name used for a popular Cloudy Nights forum. But it is, in my opinion, a bit too general, since the "Electronic" part of the name could simply mean a tracking mount, Night Vision equipment (as it does for the CN forum), some form of computer control, a WiFi connection, etc. The term "Near Real Time Viewing" is also commonly used, since the effect is to see great detail in galaxies, nebulae, etc. in a matter of a few seconds or tens of seconds. Video astronomy was the name used in the early days since, until recently, the most commonly used cameras were by far analog video cameras like the Mallincam, Stellacam and Samsung, to name a few. But the name I like best is Camera Assisted Viewing, since it goes to the heart of the activity, which is to use a camera to collect light during short exposures of a few seconds or tens of seconds, thereby greatly enhancing the detail seen compared to looking through the same telescope with an eyepiece. The camera could be a video camera like those mentioned above, a digital camera like those from ASI, QHY or Starlight Xpress, or even a DSLR. Often software like Sharpcap will be combined with the camera to provide on-the-fly stacking and processing to further improve what can be seen in seconds to minutes. So, regardless of what we call this, I have wondered: where and how did it all begin? In the Beginning: Camcorders The image of Gil Miles using a CCTV camera in 1961 to view the Moon live on a TV screen signals the possibilities which were to come to fruition in the following decades. Video astronomy would not only allow amateur astronomers to observe much more than they could with an eyepiece, they could do this in more comfort and they could simultaneously share the view with others. This particular photo is amusing when one notices the formal attire Gil is wearing while comfortably seated in his armchair. We have come a long way in the last 58 years, in more ways than one. It was not until Sony introduced the first consumer camcorder in 1983, the Betamovie, that video astronomy finally came within reach of the average amateur astronomer. With the aid of a camcorder, amateur astronomers were able to enhance their views of the Moon and the planets and capture video to be viewed later or processed to make pretty images. However, because of the size of these early camcorders (they generally also housed a VHS or Beta tape recorder) they needed to be mounted onto a tripod or hand-held at the eyepiece of a telescope for afocal imaging. Over time camcorders became smaller, lighter and easier to use with a telescope, and some had detachable lenses so that the camera could be mounted at the prime focus of the telescope. But camcorders were still limited to bright objects like the planets, the Moon, double stars and lunar occultations because of their low light sensitivity (1-5 lux), short exposures (1/60th sec.) and lack of manual control of the exposure and gain settings. In addition, camcorders had auto focusing, which made them even more of a challenge to adapt to astronomy. Indeed, Dennis di Cicco reported in his review of the Canon L1 camcorder in Sky & Telescope, April 1992, that he could only record a trace of the bright core of the Orion Nebula through a 200mm f/1.8 lens.
Nonetheless, intrepid amateurs showed the advantages of adapting camcorders for astronomy as the age of Camera Assisted Viewing began. In 1994 the first commercial webcam, the QuickCam, was introduced for $99. In contrast to camcorders, webcams were not only inexpensive, they were also much smaller and lighter and more easily adapted to astronomy. And they came with a USB cable and software for easy connection to and control by a computer. The QuickCam and the Philips ToUcam were two of the more popular early webcams using Sony's 1/4" ICX098BQ CCD detector with 350K total 5.6um square pixels. In contrast to the camcorder, webcams had a maximum exposure of 200 msec, roughly twelve times longer than a camcorder's 1/60 sec. While offering the amateur astronomer improvements over the camcorder, the webcams did have their limitations. Since they were designed for terrestrial use, they required mechanical modification to enable attachment to a telescope. This involved surgery to remove the lens and add a C-mount adapter. And the small size of the detectors used in webcams greatly limited their field of view, sometimes making it difficult to find and center the object of interest. Ultimately, with sub-second exposures (0.2 sec.), small chips (1/4"), small pixels (5.6 microns) and poor sensitivity (initial lux values of 10 to 20), webcams were limited to bright celestial objects like the Moon and the planets. Despite this, webcams quickly made their mark as premier planetary imagers. By taking advantage of their 60 to 90 fps image capture rates, amateurs could collect thousands of images of a planet in a few minutes which they would post-process later. Using free software like Registax to automatically select and align the better images produced during brief moments of exceptional seeing, amateurs were able to produce images rivaling those taken by professional astronomers with much larger and better telescopes only a decade earlier. As this approach gained traction, the Quickcam and Unconventional Imaging Astronomy Group (QCUIAG) was formed in 1998 as an online forum where new methods, cameras, modifications and images could be shared among like-minded video astronomers. It did not take long for some enthusiasts to push the envelope, figure out how to modify these webcams to achieve longer exposures, and begin using them to successfully view and image some of the brighter Deep Sky Objects (DSOs) like star clusters, galaxies and bright nebulae. In 1999 Dave Allmon found that by clipping a single wire in the Connectix webcam he was able to take long exposures, which he demonstrated with images of 5 sec to 180 sec duration of the Andromeda galaxy. Unfortunately this capability was a quirk of the Connectix camera and could not be applied to other webcams. Then, in the summer of 2001, Steve Chambers is credited with pioneering a series of modifications to a wide variety of webcams allowing long exposures, replacement of the standard CCD with larger and more sensitive CCDs, and disabling of the amplifier to minimize noise. While amateur astronomy is flush with do-it-yourselfers, not everyone has the skills or desire to make their own modifications. Enter the commercially modified webcams from the likes of Atik (ATK-1C, ATK-2C), SAC (SAC7, SAC7b), Celestron (NexImage), Meade (Lunar and Planetary Imager and Deep Sky Imager), and Orion (Starshoot Solar System and Deep Space). The ATK-1C and NexImage were commercial modifications of the ToUcam.
All of the commercially modified webcams were designed with housings to fit into standard 1 1/4" eyepiece holders, many had passive cooling designs and some even had fans to better reduce the appearance of warm pixels. They also came with larger chips for larger fields of view and larger pixels for improved light sensitivity. And all provided longer exposures than unmodified webcams, pushing practical exposures into the range of minutes. This finally made it possible to create images of DSOs by collecting hundreds of exposures of a few tens of seconds which could later be stacked and processed with software like Registax. See, for instance, Stephen Chambers and Stephen J. Wainright, Sky & Telescope, Jan. 2004. At this point it was possible to do Camera Assisted Viewing of brighter DSOs with a webcam, but this limited capability was already being made moot by the availability of integrating video cameras from SBIG, Adirondack and Mallincam, leaving webcams as the mainstay for lucky imaging of the solar system rather than the key to Camera Assisted Viewing of DSOs. A complete review of webcams and their applications to astronomy can be found in Robert Reeves' 2006 book, "Introduction to Webcam Astrophotography". As the search for ever-improving capability evolved in the 90s, amateur video astronomers eventually began experimenting with low-light video surveillance cameras. While larger than webcams, these cameras came with higher sensitivity, detachable lenses and C-mounts allowing them to fit nicely into 1 1/4" eyepiece holders. They were typically used with a frame grabber and a VHS recorder to capture thousands of frames which could later be stacked in software to produce excellent solar system images – some say better than what could be obtained with long-exposure film cameras. In 1994 two amateur astronomers in upstate New York, John Cordiales and Jim Barot, saw the value in these video surveillance cameras and launched their own company, Adirondack Video Astronomy. One of their first products was the Astrovid 2000 in 1996, which sold for $595. This camera used a Sony 1/2" B&W CCD, the ICX038DLA, and had an exposure range of 1/60 sec to 1/10,000 sec, making it well suited to solar, lunar and planetary work and occultations, but not DSOs. A big advantage of the Astrovid 2000 was that it provided manual adjustment of the shutter, gain, gamma and contrast through a wired hand control, giving the needed control over the image settings and also making it possible to change camera settings without disturbing the telescope. David Moore published a detailed review of the Astrovid 2000 in Sky & Telescope, Aug. 1999. While still not ready for DSO viewing, John and Jim would soon play a pivotal role in reshaping the nature of video astronomy. Then, in 1998, a Texas company, Supercircuits, introduced two very inexpensive video surveillance cameras, the PC-23C and PC-33C, a B&W and a color model, respectively. Because these cameras could be purchased for only $80, they quickly became very popular in the video astronomy community. These re-badged Topica TP-505D/3 cameras from Taiwan used 1/3" Sony CCDs and had a maximum exposure of only 1/30 sec. Unlike the Astrovid 2000, the Supercircuits cameras had auto exposure and auto gain, making it challenging to obtain the needed exposure for small, high-contrast objects like the planets.
In his Skywatch March-April 1999 review of the PC-23C, Rod Mollise noted the excellent views he achieved of Saturn and Mars, including a clearly defined and razor-sharp view of the Cassini Division. But, he noted, "For 'real' deep sky work you do need an integrating CCD camera..." Even though these inexpensive Supercircuits cameras were not capable of viewing DSOs, they were quite capable of capturing images of the solar system, and the B&W version was a favorite tool for asteroid occultations. Similar to Topica, another Taiwanese company, Mintron, and a Japanese company, Watec, began developing and selling video surveillance cameras in the 90s. Cameras like the Watec 902H ($500) with its 1/2" Sony ICX249ALL CCD also found their way into the amateur astronomy community, even though the manufacturers were not initially aware of this application for their products. But with exposures still limited to 1/60 sec (NTSC) and 1/50 sec (PAL), these cameras were also not useful for viewing DSOs. Nonetheless, both Mintron and Watec cameras would be key to shaping the nature of Camera Assisted Viewing over the next decade, primarily through several innovative amateur astronomers who saw the value of this new technology. While the 90s brought a new technology, video surveillance cameras, to the forefront of amateur astronomy, maximum exposures of 1/30 sec limited the cameras, like the webcams, to the solar system, asteroid occultations, lunar meteor impacts, double stars and bright open clusters. To take advantage of this new branch of astronomy, a Yahoo Video Astro Group was formed by Jim Ferreira in the spring of 1999, providing an online forum for discussion of video cameras and equipment for lunar, solar and planetary astrophotography, and eventually DSOs. That forum continues to this day. Integrating Astro Video Cameras In the Beginning: The SBIG STV The first video camera designed and marketed for real-time viewing of DSOs is probably the SBIG STV. This dual-purpose guider and video camera was introduced in 1999 with the capability to perform exposures of 1 msec to 10 min. In their 2000 book, "Video Astronomy", Steve Massey, Thomas Dobbins and Eric Douglass called the arrival of the STV a "first glimpse into the exciting future of video astronomy." The longer exposures afforded by the STV suddenly made it possible to view images of DSOs in near real time without the need to capture thousands of individual frames for stacking and processing later. An exciting new age in amateur astronomy had arrived. The key to longer exposures is the ability of the camera to integrate the light captured by the CCD over many 1/60 sec (NTSC, or 1/50 sec PAL) exposures, effectively creating a single longer exposure. This long exposure is stored in the camera's buffer and continuously output to the monitor at standard video rates, 30 frames/sec NTSC or 25 frames/sec PAL. When a new long exposure is ready it is transferred to the buffer and output in the video feed for viewing until the next long exposure replaces it. Thus, one can view an image continuously on the monitor while the camera takes the next long exposure. The STV is an all-in-one system with a CCD camera attached by an electrical umbilical cord to a large control box just under 12" x 10" x 3". The camera uses a sensitive B&W 1/3" CCD sensor from Texas Instruments, the TC-237.
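The integration arithmetic is simple enough to sketch. Note that whether a given camera integrates fields (50/60 Hz) or whole frames (25/30 Hz) varies by model, which is why quoted maximum exposures for the same "x" factor can differ by a factor of two:

```python
# Maximum exposure of an integrating video camera: the integration factor
# divided by the video rate. Cameras that integrate fields use 60 Hz (NTSC)
# or 50 Hz (PAL); cameras that integrate whole frames use 30 or 25 Hz, which
# is why a 256x camera can be quoted at either ~4.3 s or ~8.5 s in NTSC.

def max_exposure_s(factor: int, rate_hz: float) -> float:
    return factor / rate_hz

for factor in (128, 256, 1024):
    print(f"{factor:>5}x: NTSC fields {max_exposure_s(factor, 60):5.1f} s, "
          f"PAL fields {max_exposure_s(factor, 50):5.1f} s")
# 128x -> 2.1 s (Stellacam); 1024x -> 17.1 s NTSC / 20.5 s PAL (LNtech300)
```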
The STV gives the user manual control of exposure, gain, brightness and contrast, providing complete control over the camera, unlike camcorders and security cameras like the Supercircuits PC-23C and similar cameras of the day. With the STV it was now possible to view all of the Messier objects with exposures in the range of a few seconds to minutes. In many ways the STV was way ahead of its time, considering that it also came with a thermoelectric cooler, automatic dark frame subtraction to minimize noise, complete control of the camera's settings, image display and image capture with or without a computer, and a feature called "Track and Accumulate". The "Track and Accumulate" feature allowed the camera to internally align and stack up to 10 frames in real time to dramatically improve the SNR, reducing noise and improving feature detail. The STV also had the ability to store images in the on-board internal memory for later display or download to an external computer. Optional accessories included a 5" B&W LCD monitor, a color filter wheel and a focal reducer with an extension tube capable of reducing an f/10 system to ~f/6 and f/3.75. The focal reducer could also turn the camera into a wide-field finder, the "eFinder", with a FOV of 2.7 degrees. Alan Dyer gave a thorough review of the SBIG STV in the Jan. 2001 Sky & Telescope. However, at a price of $1995 for the base unit and $2395 for the Deluxe with the LCD display, this camera was out of reach of many amateurs, who continued to look for a cheaper alternative. The STV was discontinued in 2006 due to the lack of key component availability, but used systems can occasionally be found for re-sale on Cloudy Nights. A Couple of Guys in Upstate N.Y.: The Stellacam In the meantime, a number of forward-looking amateur astronomers were working on adapting existing security video cameras from companies like Mintron and Watec for astronomical use. As mentioned above, Adirondack Video Astronomy was one of those, and although their Astrovid 2000 was not suitable for real-time viewing of DSOs, things changed in the fall of 2001 when they introduced the Stellacam for $595. While this camera had the same CCD as the Astrovid 2000, the key difference was the capability to integrate up to 128 x 1/60 sec image frames for a maximum exposure of 2.1 sec. This extended exposure capability, while not as long as that of the STV, was sufficient to make it possible to view many DSOs in real time that were not possible previously with cameras like the Astrovid 2000, Supercircuits PCs, modified webcams and camcorders. The Stellacam was in reality a re-badged Mintron MTV 12V1 with the IR filter removed to increase sensitivity at the important H-alpha wavelength. It also had a different rear panel than the stock MTV 12V1. Instead of the 5 buttons to navigate the camera's on-screen menu, the BNC video and S-Video connectors, a power input port and a green LED, the Stellacam had a single multi-pin connector for the cable to connect to a wired hand control. Unlike the more sophisticated hand control for the Astrovid 2000, the Stellacam's hand control was a small metal box with five buttons and a resistor network inside so the user could emulate the camera menu buttons on the back of the camera body. This box also had an input for camera power and a BNC connector for the video output from the camera. This enabled a single wire from the camera to the observer several feet away.
While the STV heralded a new world of possibilities, the Stellacam, even with its limited exposure, opened up that new world to many more astronomers at a third of the cost of the STV. The Stellacam was followed by the Stellacam EX in 2002 ($695) and the Stellacam II ($795) in 2003, further improving the depth and detail of DSOs which could be viewed in real time. The EX was another re-badged Mintron security camera, the MTV12V1Ex. It was still limited to a 2.1 sec exposure, but with the more sensitive 1/2" Sony ICX248 CCD it could provide much more detail than the original Stellacam. It came with the same hand control as the non-Ex camera. In his review of the astronomical video cameras available at the time in Sky & Telescope, Feb. 2003, Johnny Horne noted that both the Stellacam and Stellacam EX were "sensitive enough to put the Trifid Nebula directly on a TV screen in real time." With the Stellacam II, Adirondack switched manufacturers from Mintron to Watec, re-badging the Watec 120N, which had the even more sensitive Sony 1/2" ICX418 CCD. But the biggest improvement with this camera was the capability to increase internal stacking to 256 frames for a maximum exposure of 8.5 and 10.2 sec, respectively, for the NTSC and PAL versions of the camera. The Stellacam II was supplied with a much more sophisticated wired hand control than either the Astrovid or the Stellacam EX. Most likely the new hand control was manufactured by Watec itself, since one could purchase the exact same camera and remote directly from Watec. This remote had rotary knobs for adjustment of the exposure integration (1X, 2X, 4X, ... 256X) and gain, along with a sliding switch to change the gamma. An optional wireless hand control was available instead of the wired hand control, eliminating one extra wire from the camera to the observer. The last of the Stellacam series, the Stellacam III ($1295), based on the Watec 120N+, was released in 2006. It had the same CCD as the Stellacam II but now had an unlimited maximum exposure, which made the Stellacam III the ultimate camera an astronomer could ask for. Now the ability to view DSOs was limited more by the mount's polar alignment, tracking capability and light pollution than by the upper limit of the camera exposure. The Stellacam III had the same wired and wireless hand control options as the Stellacam II. But it also introduced thermoelectric cooling (TEC) of the sensor, with the addition of a Peltier cooler like the STV's to help reduce thermal noise. In his 2005 book, "Visual Astronomy Under Dark Skies", Antony Cooke called the Stellacam line of cameras "true astronomical video pioneers." Around 2010, the Stellacam line of cameras was taken over by John Lee's CosmoLogic Systems. CosmoLogic had provided the Peltier cooling for the Stellacam III and the wireless remote for Adirondack's camera. When Cordiales decided to leave the business, Lee stepped in, at least for a short time. It seems that the change in ownership, the fact that Stellacam never introduced a color camera and stiff competition from another innovative amateur astronomer ultimately led to the disappearance of the Stellacam line earlier this decade. Used Stellacams can still be found in the Cloudy Nights or Astromart classifieds, but have long since been supplanted by much more capable and even less expensive video cameras.
A Guy Even Further North Joins the Fun: The Mallincam In the meantime, up in Canada a fellow named Rock Mallin began tinkering with video cameras for astronomy in earnest in 1994 and introduced his first commercial camera, the Mallincam I (MC I), in 1999. This first camera also used a B&W 1/3" CCD and had a maximum 1/2 sec exposure, which limited its use to the solar system and lunar occultations. It was not until late 2002 or early 2003 that the Mallincam II was introduced, with the ability for exposures of 2.1 sec – long enough to display many DSOs in real time. It used the same Sony CCD as the Stellacam and the Mintron 12V1, and appears to be a modified version of the Mintron. It is likely the MC II that Terence Dickinson reviewed in SkyNews, Nov/Dec 2003, remarking that "On the deep sky, the Mallincam really shines." The MC I and MC II appear not to have had widespread distribution and may have only been used by a handful of amateurs in Canada. But this would change dramatically as Rock released a succession of analog video cameras with key improvements over the next decade. His MC II Color camera, introduced in late 2003 or early 2004, was the first commercially available long-exposure astronomy video camera with color capability and a major game changer for deep sky camera assisted viewing. The MC Hyper in 2004-2005 extended exposures up to 12 sec with its "hyper" circuitry. The Hyper Plus, introduced in 2005 and reviewed by Gary Kronk in Astronomy, July 2010, extended the maximum exposure to 56 sec and appears to be the first Mallincam camera with Peltier cooling. It is likely that with the Hyper and Hyper Plus cameras, the Mallincam line really took off. The VSS and VSS+ in 2007-2008 pushed the maximum exposure to just under 2 min and also were equipped with a Peltier cooler. With the Xtreme in 2009 the maximum exposure was now 100 min, certainly beyond any real-time viewing threshold. The B&W versions of the Hyper Plus, VSS, VSS+ and Xtreme all used the Sony ICX428, while the color versions used the ICX418, all in 1/2" format. The Xterminator, introduced in 2014 with the extremely sensitive Sony ICX828 CCD, was the last of the new analog cameras from Mallincam. Rock's cameras ranged in price from $249 for the Micro kit to $600 for the MC Jr Pro model, up to $1750 for the Xterminator. The Mallincams have been some of the most popular, if not the most popular, cameras in the post-Stellacam era. Many of these cameras are still available today, along with a line of digital CMOS cameras for both real-time viewing and astrophotography. From Down Under: The GSTAR While Stellacam and Mallincam were well established in the video astronomy community in North America, down under an Australian company launched by Steve Massey in 2002, MyAstroShop, began marketing its GSTAR line of astronomy video cameras. Steve eventually released three analog video cameras over the years, all re-badged Mintron cameras. The GSTAR EX was first, with the 1/2" Sony B&W ICX429AL CCD and a 2.56 sec maximum exposure. This was followed by the GSTAR EX Color, most likely a re-badged Mintron 72S85HP-EX with the 1/2" Sony ICX249AK and a 5.12 sec maximum exposure. The last video camera in the GSTAR series was the GSTAR EX2 ($579), with the 1/2" Sony B&W ICX429 CCD, also with a maximum exposure of 5.12 sec. It appears that all of the GSTAR analog video cameras are now discontinued but have been replaced with CMOS digital cameras as the hobby has moved in that direction.
Steve's cameras had their IR filters removed, like both the Stellacams and Mallincams. Unlike the Stellacams and Mallincams at that time, the GSTAR cameras could be controlled by a computer with the free GSTAR-COM software. An optional 10 meter RS232-to-DB9 cable was needed to connect the PC to the camera through the camera's Aux port. In addition to the camera shop, Steve has advanced the hobby by authoring several books, including two books on video astronomy in 2000 and 2009. The Stock Security Camera: Mintron, Watec and Samsung While it does not appear that Mintron actively marketed their cameras to the astronomy community, that did not stop astronomers from seeking out their latest cameras for such use. This included the MTV12V1 and 12V1-EX, which were used in the Stellacam, Mallincam and GSTAR lines of cameras, to name a few. Other Mintron security cameras, like the 12V6HC-EX color camera, also found their way to the back of telescopes through word of mouth on one astronomy forum or another. In Astronomy Now, Dec. 2007, Ade Ashford reported that he was able to see the dust lanes in the spiral arms of M31 with a Mintron 12V1-EX attached to his 105mm f/4.2 AstroScan. At some point, Watec began to realize that their cameras were being used for astronomy and began to advertise them as such. Their 120N and 120N+ with wired remotes were sold direct to astronomers as well as re-badged by Adirondack and sold as Stellacams. Like the Mintrons, amateurs sought out any promising camera from the likes of Watec over the years. In 2008 Samsung introduced its SDC425 video security camera with a 1/3" color Sony CCD and the capability for 4.2 sec exposure integration, for the comparatively low price of $175. In 2009 the Samsung SDC435 (later renamed the SCB-2000) was released with a 1/3" Sony ICX638 color CCD and an exposure of 8.5 sec for just $99. This became a very popular camera given its price and reasonable capability for deep sky viewing. The SCB4000, with the 1/2" Sony ICX428 CCD, was released in 2009 for $347, and it also had a maximum exposure of 8.5 sec. These cameras have a plastic IR filter placed in front of the CCD which is easily removed, since it is held in place with two small screws. The Samsungs were never marketed for astronomy and apparently never re-badged and resold to the hobby, but could be purchased direct from security video camera suppliers. They were much larger than the Mintron and Watec cameras and are still popular today, although the above versions are no longer in production. A Late Arrival: The LNtech300 In 2013, word spread of a new, small form factor security video camera manufactured in Hong Kong and available online from a number of Asian re-sellers. Like the Samsung, this camera was not directly marketed to the astronomy community, but awareness within the ranks soon made it a very popular camera. That, and the fact that it could be purchased for a mere $69 yet had the capability for exposures of 20.5 sec PAL and 17.1 sec NTSC. Not only that, but this camera had the capability to internally stack up to 5 successive images on the fly, a process called 3D-DNR, which greatly increased the SNR, smoothing out the noise and enhancing the detail. The LNtech300 used the 1/3" Sony ICX810 and ICX811 CCD for NTSC and PAL, respectively. Since this camera was not sold directly as an astronomy camera, one had to either ask the re-seller to remove the IR filter glued on top of the CCD or remove it oneself, which many did.
The main limitation of this camera was the fact that it used a 1/3" CCD at a time when most amateur astronomers were already used to the 1/2" format and were looking to the possibility of even larger sensors in the future. The main advantage was its low cost, making it appealing to anyone just entering the hobby. Mallincam came out with a rebranded version of the LNtech300 called the Micro Ex in late 2013. This camera was identical to the LNtech300 except in two important ways. First, the IR filter was already removed from the CCD. Second, the Micro had some internal wires spliced so that the Auto Iris connector could be used, with the proper cable, to control the camera menu from a computer running free software which emulated the buttons on the back of the camera. This was a very nice feature for those who wanted to use the camera with a computer. A DIYer could easily make the same modification to the LNtech300, which many did. In fact, one can find threads on the Cloudy Nights forum for this, and for another modification to provide WiFi control of the camera as well. End of the Line: The Revolution Imager The LNtech300 was re-badged and sold, along with several useful accessories, by Orange County Telescopes starting in Sept. 2015 as the Revolution Imager. It used the PAL version of the CCD, the ICX811, which provided a maximum exposure of 20.5 sec. While most camera suppliers included a C-mount adapter, a power cable or power transformer and maybe a video cable with their cameras, the Revolution Imager kit was unique in the completeness of the included accessories. For $299 you received the camera with the IR filter already removed, a C-mount adapter, power and video cables, plus an external 1.25" IR filter, a 0.5X focal reducer, a 7" LCD display, and a rechargeable battery, all neatly arranged in a padded soft carrying case. The user need only supply the telescope and a clear night sky. The Revolution Imager was reviewed by Rod Mollise in his Dec. 13, 2015 Astro Blog, where he concluded that the Revolution Imager was not only inexpensive but a "very capable camera." Unfortunately the LNtech300 supply from the Asian re-sellers was exhausted by late 2015 or early 2016, and with it the last of the Revolution Imager cameras. Fortunately, Mike at OCT was able to source a new analog video camera by May of 2016, which he called the Revolution Imager 2. This camera comes in a square rather than a rectangular case format, but with the same ICX811 sensor. However, the firmware for this camera differed from the RI 1's such that the maximum integration was 256x, not 1024x like the RI 1's; thus the longest exposure was 5.1 sec. However, the 3D-DNR allowed averaging of 6 successive frames instead of 5, which gives a 30.6 sec total averaged signal. The RI 2 is still available today from OCT and a number of astronomy retailers. And Then There Were: Orion, PD, Polaris, ITE, SC2000 While the cameras mentioned so far probably account for the vast majority of video cameras used for deep sky viewing, I would be remiss not to mention some others. One of these, Orion Telescopes, marketed their offerings as the StarShoot Deep Space Video Camera I and II. Both cameras were simply re-badged Mintron 72S85HN-Ex-R color cameras with a 1/2" Sony ICX248AKL (NTSC) or 249AKL (PAL) CCD and an integration of 256X. The only difference in the DSVC II appears to be an added serial interface for computer control.
Polaris USA marketed several different Mintron cameras without modification, including their most popular model, the Matrix (Mintron MTV12V1), and their color model, the Polaris DX-8263SL, a re-badged Mintron MTV63V1. ITE appears to have sold three different cameras: the Deep Sky Pro, a re-badged Mintron 12V1; the Deep Sky Pro EX, probably a re-badged Mintron 12V1Ex; and a color camera called the Color Eye Pro. In the U.K., Phil Dyer is still a major supplier of video cameras. These include the B&W Mintron MTV2285HC-Ex with a 5 sec exposure, which he calls the PD2285C-Ex. Phil also has a color camera which is a modified Huviron (Korea) security camera with an ICX638 (NTSC) or ICX639 (PAL) Sony CCD and up to 20 sec maximum exposure. Several of these cameras are discussed in Adrian Ashford's Dec 2003 Sky & Telescope article on integrating video cameras.

In early 2015 yet another deep sky video camera option came to light with a posting on the Cloudy Nights EAA forum by "photo444". He documented a DIY project to build a camera from the PC board containing the CCD and electronics, which he purchased from SecurityCamera2000 for just $23. A wired remote was included which was easily attached to the appropriate points on the camera board with push-on connectors. Originally, these board cameras came with the typical IR filter, which removes some of the useful IR light from deep sky objects. However, it was not difficult to remove by applying heat to the glass filter with a soldering iron. Fortunately, the DIY project became a lot easier when it became clear that one could request the board camera without the IR filter directly from the supplier. Then all that was required was to build a simple enclosure and attach the board camera, feed through the wire harness for the remote and connect a C-mount adapter to the face of the box. The original idea came from another poster on Cloudy Nights, "David B in NM", who had suggested it to "photo444". This board camera compared favorably in performance with the LNtech300. It used the 1/3" Sony ICX638 and ICX639 CCD and had a maximum integration of 1024X. This camera set off a wave of similar extremely inexpensive but capable DIY board cameras. More information and images can be found on the Cloudy Nights forum by searching "SC2000" on the EAA forum and by reading the blog on this web site: do-it-yourself-board-camera-for-35.html

It's Not All Hardware: Software Plays Its Part

Indeed, while the focus to this point has been on the development of the ever improving hardware that collects the photons necessary to view deep space wonders, software may be the unsung hero that has had its own ongoing role in the advancement of video astronomy. Software's contributions can be divided into two important categories: 1) camera control; and 2) image capture. A major challenge for newcomers when trying to operate one of these video cameras is the fact that the camera settings are designed for video surveillance applications, not for astronomy. Hence, the menus can look like Greek to the uninitiated, with many of the functions completely irrelevant to astronomy. On top of that, important camera settings can be buried several layers deep within the menu. A more in-depth discussion of this challenge can be found here: making-sense-of-video-camera-osd-menus.html. Fortunately, some very helpful individuals have come to the rescue and created free software which puts all of the relevant camera settings in an easy to view, understand and modify format.
Two such examples are the GSTAR-COM software developed for the GSTAR cameras and Stephan Lelonde's Mallincam Control software developed for the Mallincams. Both are free, and both require that the camera be connected to the PC with the appropriate cable, which can be obtained from the camera supplier. These simple, effectively organized menus leave out the camera functions one would never use for astronomy and make it much easier to work through the control settings needed for astronomical viewing.

While these programs made controlling the camera much easier, they did not provide for image capture and processing. As video cameras became more popular and amateur astronomers craved to go deeper, minimize noise, remove hot/warm pixels and save images, more sophisticated software began to appear.

In 2010 Robin Glover released a program called Sharpcap, originally developed to simplify camera control and image capture for webcams in place of existing programs like AMCAP. In 2012 Sharpcap was revised to work with additional cameras, a list which eventually grew to include Basler, ZWO, Starlight Xpress, QHY, Celestron, Point Grey, almost all webcams, cameras with an ASCOM driver and most frame grabbers. Sharpcap provides a simple layout with a live image along with camera controls, all in a single screen view. Over the years Robin has added new features to the program, making it extremely useful for real time image viewing, processing and capture. These include on-the-fly dark frame subtraction, flat frame correction, image stacking, histogram adjustments, polar alignment, plate solving, focusing aids and more. There are both free and licensed versions of Sharpcap, with the latter containing many of the more advanced features. Sharpcap may now be the most used software for real time viewing.

Chris Wakeman and Steve Massey developed a program called GSTAR-Capture in 1998 for automated and manual capture of video AVI files. It included the ability to capture single frames and had faint object enhancement features, including dark frame subtraction. A revised version, GSTAR 4 Capture, was released later which added a favorite object, location and equipment database and the ability to record Universal Time, RA, Dec, filter used, telescope, and focal length. This version also included a live histogram and occultation time stamping, plus the GSTAR-COM camera control, for a complete camera control and capture software package.

Also in 2012, William Koperwhats developed the Miloslick software ($49) for many of the Mallincam cameras (Xterminator, Xtreme, VSS, VSS+, etc.). Like Sharpcap, Miloslick provides simple to understand camera controls along with a live image view. And like Sharpcap, it provides image capture sequencing, on-the-fly dark frame subtraction, image stacking, histogram adjustments and more.

Lodestar Live is yet another similar program, developed by Paul Shear in 2014 specifically for the SX Lodestar, providing many of the same image capture and on-the-fly processing functions as Sharpcap and Miloslick, but only for the Lodestar. Eventually Starlight Xpress (SX) bought the software, renamed it Starlight Live and expanded its compatibility to several other SX cameras.
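To make concrete what this family of programs (and the cameras' own 3D-DNR mode) is doing, here is a minimal, hypothetical sketch of on-the-fly dark-frame subtraction and running-average stacking. It is not code from Sharpcap, GSTAR 4 Capture, or Miloslick; it simply shows why averaging N frames beats a single frame: the random noise falls roughly as 1/sqrt(N):

```python
import numpy as np

def live_stack(frames, dark):
    """Running average of dark-subtracted frames, yielded frame by frame.

    frames: iterable of 2-D numpy arrays from the capture device
    dark:   a master dark frame (average of frames taken with the cap on)
    """
    stack = None
    for n, frame in enumerate(frames, start=1):
        calibrated = frame.astype(np.float32) - dark  # removes hot pixels and thermal glow
        if stack is None:
            stack = calibrated
        else:
            # Incremental mean: random noise falls roughly as 1/sqrt(n)
            stack += (calibrated - stack) / n
        yield stack

# Toy demonstration with synthetic noisy frames:
rng = np.random.default_rng(0)
dark = np.full((480, 640), 10.0, dtype=np.float32)
frames = (dark + 5.0 + rng.normal(0, 8, (480, 640)).astype(np.float32)
          for _ in range(25))
for result in live_stack(frames, dark):
    pass
print(round(float(result.std()), 2))  # ~8 / sqrt(25) = 1.6
```

The toy run averages 25 frames of noise sigma 8 and lands near sigma 1.6, which is exactly the "go deeper with the same camera" effect that made these capture programs so valuable.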
The Digital Revolution

In 2015 Sony announced that it would cease production of CCDs by 2017 and concentrate on the less expensive CMOS technology, which was beginning to match CCD performance and is widely used in commercial digital cameras. While security-camera-based analog video cameras suitable for deep sky observing are still available from Mallincam, Mintron, Watec, Samsung, OCT and PD, no new analog astronomy cameras have come onto the market since the Revolution Imager 2, and it is not likely that any will. Instead there has been steady growth of the CMOS-based digital cameras from the likes of ZWO, QHY, Starlight Xpress, ATIK, Altair Astro and Rising Tech. Mallincam, GSTAR, and OCT also have new digital cameras for real time viewing as well. Almost all of these cameras use CMOS sensors, and all have much higher resolution and larger sensors than the analog cameras. When coupled with the latest software like Sharpcap (and others), real time viewing of the deep sky has evolved tremendously from its roots decades ago. That history is still being written, but you can read more about the early days of this digital revolution on my blog here: the-digital-revolution-in-camera-assisted-viewing.html

The Video Astronomy Innovator Hall of Fame

If I were to make a list of those who have played a significant role in shaping the last 20+ years of amateur video astronomy, the following would be my candidates:

Steve Chambers: For leading the charge for extended exposure webcams
Jim Ferreira: For creating the Yahoo Video Astro group and expanding the hobby
SBIG Team: For setting the bar in 1999 with the STV
John Cordiales & Jim Barot: For popularizing the modified security video camera
Rock Mallin: For the 1st color video camera and continuous camera innovation
Steve Massey: For his books on video astronomy and his GSTAR line of cameras
Mike Fowler at OCT: For providing a low cost video camera kit for the beginner
Stephan Lelonde: For providing one of the first camera control programs for free
Robin Glover: For Sharpcap with real time on-the-fly processing
William Koperwhats: For developing the Miloslick SW for the Mallincam cameras
Jim Turner: For creating the live broadcast site Night Skies Network in 2009
photo444 & David B in NM: For popularizing the DIY board camera
0.834742
3.128926
A European orbiter has confirmed increased production of methane – a gas that on Earth is typically produced by living organisms – on Mars, raising the possibility that living organisms could have produced it on another planet. The possibility of alien life on Mars has been the subject of multiple investigations by the different space programs around the world. In Nature Geoscience on Monday, scientists working with the European Space Agency's Mars Express orbiter reported that in the summer of 2013 the spacecraft detected methane within Gale Crater, a 96-mile-wide depression near the Martian equator. "Our finding constitutes the first independent confirmation of a methane detection," said Marco Giuranna, a scientist at the National Institute for Astrophysics in Italy, in an email. Dr. Giuranna is the principal investigator for the Mars Express instrument that made the measurements.

According to the researchers, the methane in the Martian atmosphere is most likely to have been created recently, because the gas decays quickly and has a relatively short lifetime. Calculations indicate that sunlight and other chemical reactions in the thin Martian atmosphere would break up the molecules within a few hundred years. The researchers suggest that methane on Mars could have been created by a geological process called "serpentinization." Or it could be a by-product of life – specifically of methanogens, microbes that release methane as waste and thrive in places with low oxygen, such as underground rocks and the digestive tracts of animals. The hopeful scientists argue that even if the methane were not produced by life, the hydrothermal systems implied by such a geologic process would still be a prime location to search for signs of life.

Interestingly, the detection confirmed by ESA coincided with data reported by NASA's Curiosity rover, which has been exploring that region since 2012. NASA, too, noted a significant rise of methane in the air in the summer of 2013, one that lasted for about two months. "It reaffirms the hypothesis that Mars is presently active," said Sushil Atreya, a planetary scientist at the University of Michigan and a member of the Curiosity science team. A newer European Mars spacecraft, the Trace Gas Orbiter, with a more sophisticated methane detector, has been in orbit since 2017, but no results have been reported so far.

The search for life on Mars also saw fresh hope when a group of scientists discovered what they thought were micro-organisms on Mars. The group has published a paper in the Journal of Astrobiology and Space Science Reviews saying that they may have found evidence of life currently living on Mars.

Read More: LIFE ON MARS: SCIENTISTS MAY HAVE FOUND LICHEN IN MARS; BUT THEY COULD BE WRONG

The researchers argue that a fungus-like 'growth' found on Mars is indicative of microbial life that could exist on the planet. The paper cites their observations of photos taken by the retired Opportunity rover. The left panoramic camera captured the image below on Sol 37 (the 37th Martian day), showing lobes that may be lichen growing on Mars. According to co-author Dr. Regina Dass of the Department of Microbiology, School of Life Sciences in India, the suspected microorganism has spores on the surrounding surface.
"There are no geological or other abiogenic forces on Earth which can produce sedimentary structures, by the hundreds, which have mushroom shapes, stems, stalks, and shed what looks like spores on the surrounding surface," she said. The authors of the paper offer the varying amount of methane on Mars as additional support for their claim. They said that the fact that there are measurable differences in the amount of methane in the atmosphere depending on the season adds credibility to the case for microbial life's existence on Mars. They explained: "On Earth, 90% of methane is produced biologically by living and decaying organisms and released as a waste product by prokaryotes and certain species of fungi. Terrestrial atmospheric methane levels also vary with the seasons and are directly attributed to biological activity." The researchers hypothesized that this phenomenon is like "breathing" for the planet: it exhales methane when things warm up and the supposed life wakes up; when it gets cold in the fall and winter, life 'goes to sleep' or is otherwise less active, resulting in lower methane. Nonetheless, the researchers admit that their study was inconclusive and that more discoveries are needed to confirm their hypothesis.
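The photochemical argument running through both articles is ordinary exponential decay: if sunlight destroys methane on an e-folding time of a few hundred years, then any methane detected today must have been released recently. A small sketch, with a 300-year lifetime assumed purely as a round number:

```python
import math

def methane_fraction_remaining(years: float, lifetime_years: float = 300.0) -> float:
    """Fraction of an initial methane release still airborne after `years`.

    lifetime_years is an assumed e-folding time for photochemical
    destruction; published Mars estimates are "a few hundred years".
    """
    return math.exp(-years / lifetime_years)

for t in (100, 300, 1000, 10000):
    print(f"{t:>6} yr: {methane_fraction_remaining(t):.4f}")
# After ~10,000 years essentially nothing survives, so a detection today
# implies a geologically (or biologically) recent source.
```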
0.878683
3.533859
Astronomers Discover The Closest Black Hole To Earth, 1,000 Light-Years Away

Astronomers noticed a mysterious hidden object in a nearby star system in the Milky Way galaxy. The object turned out to be the nearest black hole found yet, located in a star system that is visible to the naked eye, according to ESO. Located about 1,000 light-years away in the constellation of Telescopium, HR 6819 was originally thought to be a binary star system. The system was part of a study observing such systems with two stellar bodies. But during their observations, astronomers discovered a potential third body hidden in the star system. This elusive body turned out to be a stellar-mass black hole, about 2,500 light-years closer to the Sun than the next known black hole, according to National Geographic. The researchers, in their study published in Astronomy & Astrophysics, wrote, "The BH in HR 6819 probably is the closest known BH to the Sun."

Additionally, the star system HR 6819 is visible to the naked eye and can be viewed from the southern hemisphere on a dark, clear night without binoculars or a telescope. Co-author of the study Petr Hadrava, Emeritus Scientist at the Academy of Sciences of the Czech Republic, Prague, said the team was surprised "when we realised that this is the first stellar system with a black hole that can be seen with the unaided eye."

The study was based on observations of HR 6819 with the FEROS spectrograph on the MPG/ESO 2.2-metre telescope at ESO's La Silla Observatory in Chile. The team found that one of the two visible stars was orbiting a previously undetected object every 40 days, while the second visible star was farther away from this inner pair. By studying the visible star's orbit around the invisible body, the researchers were able to calculate the hidden object's mass to be four times that of our Sun. ESO scientist Thomas Rivinius, who led the study, concluded that "an invisible object with a mass at least 4 times that of the Sun can only be a black hole."

The newly detected body in HR 6819 does not interact violently with other bodies in its vicinity. Researchers also think that the binary star system LB-1 may harbor a similar mysterious third object, with a mass about 4.2 times that of our Sun. The existence of such stellar-mass black holes suggests a population of 'quiet' black holes. Most black holes discovered in our galaxy were found because they interact strongly with their environment and emit strong X-rays. The discovery of a black hole in HR 6819 hints at the possibility of more similar black holes in the Milky Way that don't interact with their environment. This can help scientists find more hidden stellar-mass black holes, as co-author Marianne Heida from ESO explained: "By finding and studying them we can learn a lot about the formation and evolution of those rare stars that begin their lives with more than about 8 times the mass of the Sun and end them in a supernova explosion that leaves behind a black hole."

Image Credit: ESO/L. Calçada
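The mass quoted above comes from standard binary-orbit dynamics. As an illustration of the underlying arithmetic, here is the spectroscopic mass function f(M) = P K^3 / (2 pi G), a strict lower limit on the unseen companion's mass. The 40-day period is from the study, but the radial-velocity semi-amplitude K used below is a made-up round number; the published analysis derives the full orbit:

```python
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
M_SUN = 1.989e30    # kg

def mass_function(period_days: float, k_m_per_s: float) -> float:
    """Spectroscopic mass function f(M) = P*K^3 / (2*pi*G), in solar masses.

    f(M) = M2^3 sin(i)^3 / (M1 + M2)^2 is a hard lower limit on the
    mass M2 of the unseen companion, whatever the inclination i.
    """
    period_s = period_days * 86400.0
    return period_s * k_m_per_s**3 / (2 * math.pi * G) / M_SUN

# 40-day orbit (from the study) with a hypothetical K ~ 60 km/s:
print(round(mass_function(40.0, 60e3), 2), "Msun lower limit on the companion")
```

With the visible star's mass and the orbital inclination folded in, a lower limit of order one solar mass grows into the ~4 solar-mass figure the team reports, safely above the maximum mass of a neutron star.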
0.894096
3.608511
Venus and Jupiter now appear as a brilliant "double star" in the evening sky. This was the scene last night, Sunday, June 28, as the two brightest planets in the sky appeared close to each other in the evening twilight. I shot the scene from the eastern shore of Little Fish Lake at a Provincial Park in southern Alberta bordering on the Handhills Conservation Area, which preserves northern native prairie grasses and an abundance of bird and wildlife species. The planetary conjunction culminates on June 30, when the two will appear very close to each other (less than a Moon diameter apart), creating the best evening conjunction of 2015. Look west on June 30 after sunset to see a brilliant "double star" in the dusk.

They've been building to this conjunction all month. On Tuesday, June 30, Venus and Jupiter appear at their closest in a stunning pairing in the evening twilight. That night the two worlds – the two brightest planets in the sky – appear just 20 arc minutes apart. That's 1/3rd of a degree and less than a Moon diameter. That's so close you'll be able to fit both planets into a high-magnification telescope field. However, it's not so close that you won't still be able to resolve the two worlds with your unaided eyes as separate objects shining in the twilight. In the chart above, the circle is a binocular field. Their proximity is merely an illusion: Venus and Jupiter lie along the same line of sight to us, but are in fact 825 million kilometres apart in space. If Tuesday looks to be cloudy, good consolation nights are June 29 and July 1 – Canada Day! – when Venus and Jupiter will be separated by 40 arc minutes, double their separation on June 30, but still very impressive. The last time we saw Venus and Jupiter close together in the evening sky was in mid-March 2012, when I shot the photo above. But at that time they passed a wide 3 degrees apart. This week they are just a fraction of a degree apart. They'll meet again later this year, but in the morning sky, on October 25, when Venus and Jupiter pass one degree from each other.

The summer solstice sky was filled with twilight glows, planets, and dancing Northern Lights. What a magical night this was. The evening started with the beautiful sight of the waxing crescent Moon lined up to the left of the star Regulus and the planets Jupiter and Venus (the brightest of the trio), all set in the late evening twilight. They are all reflected in the calm waters of a prairie lake. I shot the above photo at about 11 p.m., as late a twilight as we'll get. From here on, after solstice, the Sun sets sooner and the sky darkens earlier. Later, at about 12:30 a.m., as predicted by aurora apps and alert services, a display of Northern Lights appeared on cue to the north. It was never very bright to the eye, but the camera nicely picks up the wonderful colours of a solstice aurora. At this time of year the tall curtains reaching up into space catch the sunlight, with blue tints adding to the usual reds fringing the curtain tops, creating subtle shades of magenta and purple. The display made for a photogenic subject reflected in the lake waters.

The three brightest objects in the night sky gathered into a tidy triangle in the twilight. On Friday night, June 19, I chased around my area of southern Alberta, seeking clear skies to capture the grouping of the waxing crescent Moon with Venus and Jupiter. My first choice was the Crawling Valley reservoir and lake, to capture the scene over the water.
I got there in time to get into position on the east side of the lake and grab some shots. This was the result, but note the clouds! They were moving in quickly and soon formed a dramatic storm front. By the time I got back to the car and changed lenses, I was just able to grab the panorama below before the clouds engulfed the sky, and the winds were telling me to leave! I drove west toward home, taking a new highway and route back, and found myself back in clear skies as the storm headed east. I stopped by the only interesting foreground element I could find to make a composition, the fence, and grabbed the lead photo. Both it and the second image are "HDR" stacks of five exposures, to preserve detail in the dark foreground and bright sky. It was a productive evening under the big sky of the prairies. Each night Venus and Jupiter draw closer, heading toward conjunction on June 30.

This was Venus (right) and Jupiter (centre) with Regulus at left, in a cloudy twilight sky on Friday, June 12, as Venus and Jupiter converge toward their close conjunction in the evening sky on June 30. Be sure to watch each night as the two brightest planets in the sky creep closer and closer together. Mark June 19 and 20 on your calendar, as that's when the waxing crescent Moon will join the duo. I shot this from near Vulcan, Alberta, after delivering an evening program at the Trek Centre in Vulcan as a guest speaker. Clouds prevented us from seeing anything in the sky at the public event, but on my way home skies cleared enough to reveal the two bright planets in the twilight. I stopped at an abandoned farmyard I had scouted out earlier in the evening to serve as a photogenic backdrop. This is a high dynamic range stack of three bracketed exposures, one stop apart, to record detail in both the dark foreground and the bright sky.

Here was the scene on September 12, with Venus and the Moon in conjunction in the dawn sky. Orion stands above the trees, and at top is Jupiter amid the stars of Taurus. The star Sirius is just rising below Orion. And both the Moon, here overexposed of necessity, and Venus shine together below the clump of stars called the Beehive star cluster in Cancer. This was quite a celestial panorama in the morning twilight. This is a stack of two 2-minute exposures taken just as dawn's light was breaking, so I get the Milky Way and even a touch of Zodiacal Light in the scene, as well as the colours of twilight. Pity I can't avoid the lens flares!

This was the stunning scene in the dawn sky last Sunday – Venus, the Moon and Jupiter lined up above the Rockies. Orion is just climbing over the line of mountains at right, while the stars of Taurus shine just to the right of Jupiter at top. I shot this at the end of a productive dusk-to-dawn night of Perseid meteor photography. Being rewarded with a scene like this is always a great way to cap a night of astronomy.
0.904054
3.379187
An unknown object was spotted entering the solar system in August, likely from another star system. It was spotted by an amateur astronomer in Ukraine on 30 August. The Minor Planet Center, part of the Smithsonian Astrophysical Observatory, released a statement saying the object, called C/2019 Q4, has an orbit in the shape of a hyperbola and is moving too fast to be bound by the Sun's gravity, suggesting it hails from outside our solar system. Now, astronomers have taken the first photograph of the space rock.

The first time an interstellar object entered our solar system, astronomers and scientists caught it too late. That object, 'Oumuamua, was already leaving, which left them with very little time to study it. C/2019 Q4 could offer an opportunity for astronomers to compare notes with 'Oumuamua, the first interstellar object spotted in our solar system, in 2017.

The interstellar comet C/2019 Q4 in action. Image: Gemini Observatory/NSF/AURA

"This image was possible because of Gemini's ability to rapidly adjust observations and observe objects like this, which have very short windows of visibility," Andrew Stephens, coordinator of the observations by Gemini, said in a statement. The image captured by the Gemini Observatory shows details of C/2019 Q4: a fuzzy coma and tail, both features of comets not seen in 'Oumuamua. The pronounced tail in the image is indicative of outgassing, which is characteristic of comets. This is also the first time an interstellar visitor has shown an evident tail due to outgassing. 'Oumuamua, on the other hand, was a highly elongated asteroid-like rock with no evident outgassing.

Artist impression of the space object 'Oumuamua. Image credit: Wikimedia Commons

Currently, C/2019 Q4 is close to the Sun's position in our sky, which is hampering observations with a glow during twilight. But better opportunities to observe it will come over the next few months, according to the Gemini website, since the comet's hyperbolic path will bring it into more favorable conditions for observation. The first observation of the comet, made in August, was pre-published on arXiv.
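The "hyperbolic orbit" reasoning above reduces to an energy test: an object is unbound from the Sun if its specific orbital energy, v^2/2 - GM/r, is positive, which is equivalent to an orbital eccentricity greater than 1. A small sketch with illustrative numbers (not the comet's measured state vector):

```python
import math

GM_SUN = 1.327e20   # m^3/s^2, gravitational parameter of the Sun
AU = 1.496e11       # meters

def is_unbound(speed_m_s: float, r_m: float) -> bool:
    """True if specific orbital energy v^2/2 - GM/r is positive,
    i.e. the object is on a hyperbolic (interstellar) trajectory."""
    return 0.5 * speed_m_s**2 - GM_SUN / r_m > 0

# Illustrative values: ~32 km/s at ~3 AU, roughly the regime reported
# for C/2019 Q4 near discovery (assumed numbers, not the fitted orbit):
v, r = 32e3, 3 * AU
print(is_unbound(v, r))                                  # True
print(round(math.sqrt(2 * GM_SUN / r) / 1e3, 1), "km/s") # ~24.3 km/s escape speed
```

Anything moving faster than the local escape speed at its distance cannot be on a closed orbit around the Sun, which is what flagged both 'Oumuamua and C/2019 Q4 as visitors.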
0.911739
3.243831
A Compton telescope (or Compton camera) is a type of gamma-ray telescope used for the photon-energy range of roughly 1-30 MeV. The term reflects the presumption that Compton scattering will occur to assist in detecting a photon. A common design uses a sequence of two detectors: one to sense Compton scattering of the incoming gamma-ray photon, and a second to sense the already-scattered photon, which will have a somewhat lower energy. In both cases, the photon passes through a scintillator, a material that responds by producing electromagnetic radiation in or near the visible-light band, which is in turn measured by photon-detection devices such as photomultiplier tubes or photodiodes. The two events are recorded by the multiple detectors, yielding a set of energy and timing measurements from which the direction and energy of the incoming photon can be worked out to a substantial degree. The direction determination is somewhat degenerate: the source can lie anywhere along a circle on the celestial sphere. But gamma-ray bursts with multiple photons offer triangulation, yielding an angular resolution often a good bit better than a degree. At higher photon energies (above 30 MeV), other types of detectors are used.
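The direction reconstruction described above follows from the Compton scattering formula. If the first detector records an energy deposit E1 and the second absorbs the scattered photon with energy E2, the scattering angle satisfies cos(theta) = 1 - m_e c^2 (1/E2 - 1/(E1+E2)), placing the source on a cone of half-angle theta about the scatter direction. A minimal sketch, assuming ideal energy measurements:

```python
import math

ME_C2_KEV = 511.0  # electron rest energy in keV

def compton_cone_angle_deg(e1_kev: float, e2_kev: float) -> float:
    """Scattering angle from energies measured in a two-detector stack.

    e1_kev: energy deposited by Compton scattering in the first detector
    e2_kev: energy of the scattered photon absorbed in the second detector
    The incident photon's direction lies on a cone of this half-angle,
    which projects onto the sky as the circle mentioned in the text.
    """
    e_total = e1_kev + e2_kev
    cos_theta = 1.0 - ME_C2_KEV * (1.0 / e2_kev - 1.0 / e_total)
    return math.degrees(math.acos(cos_theta))

# A 1500 keV photon depositing 400 keV in the scatterer:
print(round(compton_cone_angle_deg(400.0, 1100.0), 1), "degrees")  # ~28.8
```

Intersecting the cones from many photons (or from several spacecraft, for a burst) collapses the circle to a point, which is where the sub-degree localizations come from.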
0.801821
3.46162
After a deluge of teasing press releases and premature speculation, we can finally share some Very Important NASA News: Today, the agency announced that a team of scientists has confirmed seven Earth-sized exoplanets orbiting TRAPPIST-1, a star located just 39 light-years away from our Sun. The six inner planets are very likely to be rocky, are roughly the same mass as Earth, and are thought to have surface temperatures comparable to our own planet's. Three of the planets may even be able to support liquid water and, perhaps, life. This discovery represents the largest number of Earth-sized planets, and the largest number of potentially habitable worlds, yet found around a single star. Both factors will make TRAPPIST-1 immensely appealing in the ongoing search for habitable worlds and life beyond Earth.

"This is the first time that so many planets of this kind are formed around the same star," Michaël Gillon, an astronomer at the Université de Liège and a co-author on the study published today in Nature, said in a press briefing. "[The planets] form a very complex system, [since] they're all very close to each other and very close to the star, which is very reminiscent of the moons around Jupiter."

In 2016, Gillon, along with astronomers Amaury Triaud, Emmanuël Jehin, and others, spotted three exoplanets orbiting TRAPPIST-1, classified as an "ultracool dwarf" star because it features surface temperatures under 4,400 degrees Fahrenheit. After following up on TRAPPIST-1 using instruments like NASA's Spitzer Telescope and ESO's Very Large Telescope, the team found four more exoplanets in the star system. All of the potentially Earth-like worlds were spotted using the transit method, which measures dips in a star's light output as a planetary body crosses in front of it from our line of sight.

The news has justifiably sent space geeks into a frenzy. "Finding several potential habitable planets per star is great news for our search for life," Lisa Kaltenegger, Director of the Carl Sagan Institute at Cornell University, told Gizmodo. In our solar system, Earth is situated squarely in the habitable zone where liquid water can form, while two other planets, Venus and Mars, skirt the inner and outer edge, respectively. According to models, the TRAPPIST-1 system contains three planets in the habitable zone, making it the record holder among stars we know of for rocky planets that could potentially support liquid water, Kaltenegger explained.

At this point, we have more questions than answers about these exoplanets. Hopefully, the James Webb Telescope, which launches next year, and the yet-to-be-completed Extremely Large Telescope will be able to tell us more about their atmospheres. This will be critical for determining whether or not the planets really can support liquid water and life. "If the star is active (as indicated by the X-ray flux) then [a planet in orbit] needs an ozone layer to shield its surface from the harsh UV that would sterilize the surface," Kaltenegger said. "If these planets do not have an ozone layer, life would need to shelter underground or in an ocean to survive—and/or develop strategies to shield from the UV."

One of the many questions from this discovery is, well, can we go there?
While star system Proxima Centauri is a more sensible choice for an interstellar voyage, since it also contains a rocky, habitable-zone planet and is much closer to Earth (4.22 light years away), the opportunity to find life on multiple worlds in the TRAPPIST-1 system increases its chances of a visit someday. “Finding many potential habitable planets around a star is definitely motivating,” Kaltenegger said.
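The transit method mentioned above rests on simple geometry: the fractional dip in the star's light equals the ratio of the planet's disk area to the star's. A quick sketch of why an ultracool dwarf like TRAPPIST-1 is such a rewarding target; the stellar radius used here (about 0.12 solar radii) is a rounded, assumed value:

```python
R_SUN_KM = 696_000.0
R_EARTH_KM = 6_371.0

def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    """Fractional flux dip during a transit: (Rp / Rs)^2."""
    return (r_planet_km / r_star_km) ** 2

# TRAPPIST-1 is tiny (~0.12 R_sun, an assumed round value), which is
# why Earth-sized transits are detectable there at all:
for name, r_star in (("Sun-like star", R_SUN_KM),
                     ("TRAPPIST-1", 0.12 * R_SUN_KM)):
    print(f"{name}: {transit_depth(R_EARTH_KM, r_star):.4%} dip")
# ~0.008% for a Sun-like star versus ~0.6% for TRAPPIST-1: nearly a
# factor of 70 deeper for the same Earth-sized planet.
```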
0.891943
3.426823
Artist's impression of BepiColombo at Mercury

The spacecraft heads to Mercury this week on a mission that may determine whether the planet closest to the Sun holds water. BepiColombo is one of ESA's most ambitious missions: it will send two orbiters to explore the scorching world, where the surface temperature rises to 450°C. Up to this point only two NASA spacecraft have been sent to Mercury: Mariner 10 in 1974-1975, and Messenger in 2011-2015. Scientists expect the expedition to answer questions posed by previous missions, such as whether the planet is able to hold water. Despite the dangerous proximity to the Sun, the planet's tilt suggests that some areas are permanently in shadow, where the temperature drops to -180°C, allowing ice to form. In 2012, Messenger found ice at the north pole of the planet, but could not determine whether it was water ice or sulfur. BepiColombo will also be able to detect the presence of water ice at the south pole. In addition, Messenger noticed organic matter on the planet. Researchers do not hope to find life there, but it may be possible to trace the origins of life's emergence on Earth. Scientists are also keen to get more information about the magnetic field of Mercury. It used to be thought that the planet was solid rock, frozen through, but previous missions discovered a magnetic field, which means the interior could contain molten metal. BepiColombo is named after the late Giuseppe "Bepi" Colombo, an Italian scientist and engineer at the University of Padua who played a leading role in the Mariner 10 mission. The spacecraft will launch from the European spaceport in Kourou (French Guiana); the estimated date is October 20. It will take 7 years to reach Mercury, so it will arrive at the planet in late 2025. Covering 5.2 billion miles, it will complete a complex series of flybys of the Earth, Venus and Mercury to slow down and avoid the powerful gravitational pull of the Sun. After launch, BepiColombo will return to Earth in two years to use our planet as a gravitational slingshot. The two orbiters, MPO (ESA) and MMO (Japan's JAXA), will separate to study Mercury for two years. Scientists are optimistic about the mission and expect that they will finally receive long-awaited answers about the mysterious nature of the innermost planet of the solar system.
0.864989
3.276343
It's equinox time again, and this year's March equinox took place today at precisely 5:14 a.m. GMT, or Universal Time (13:14 in Manila). The March equinox is also known as the "spring equinox" in the northern hemisphere and the "autumnal (fall) equinox" in the southern hemisphere, as this event marks the change of seasons – the beginning of spring in the northern part of the globe and autumn in the south. During an equinox, night and day are nearly exactly the same length – 12 hours – all over the world. This is the reason it's called an "equinox", derived from Latin, meaning "equal night". However, even if this is widely accepted, it isn't entirely true: in reality, equinoxes don't have exactly 12 hours of daylight. A good website for looking at sunrise and sunset times in Manila can be found here. The best one for checking the bearing (direction) of sunrise or sunset anywhere in the world is the US Naval Observatory.

A more precise definition of the equinox is the one astronomers give: an equinox is the moment when the sun arrives at one of the two intersection points of the ecliptic (the sun's path across the sky) and the celestial equator (Earth's equator projected onto the sky).

My plan today was to head to the top of a high place and catch the sun setting due west. Sadly, the weather was not very good and the visibility was terrible. Had I been able to see the sun, it would have set due west. Everyone always says that the sun rises in the east and sets in the west, but if we were really aware of our surroundings and more attuned to the sky we would realize that this is not true. In fact it is only true for two days out of the entire year, and those are the equinoxes. By studying the sun's position in the sky over the course of a year from the same location, one can notice that its rising and setting positions change more rapidly at some times of year than at others. How much the locations of sunrise and sunset shift throughout the year depends upon where your viewpoint is. However, irrespective of where you are on the globe, the Sun will always rise exactly east and set exactly west on the equinoxes (March 20/21 and September 20/21); a short calculation illustrating why appears after this article. On the other hand, near the solstices the sunrise position slows its change to close to a 'standstill' (the name 'solstice' being derived from the Latin for 'sun standing still').

# # #

Seasons Without Borders: Equinox March 2012

Wherever you are on 20 March, 2012, celebrate your season in the cycle of life with Astronomers Without Borders. Enjoy your own unique Equinox this year – and why not tell others about the experience? Being mindfully aware of your place on this moving Earth may bring out the storyteller and poet in you. AWB invites you to share your event reports and poems at the AWB Members' Blog and AWB Astropoetry Blog. Send your poems to: [email protected]. Global Astronomy Month 2012 (www.gam-awb.org) is merely a month away. Astronomers Without Borders (AWB) has organized three exciting events in March to do the warm-ups! Spread the word and join in.

# # #

"Hello Red Planet" 3-5 March 2012

Mars will come into opposition on March 3, 2012 in the constellation Leo, with its face fully illuminated by the Sun, and two days later, on March 5, 2012, the planet will make its closest approach to Earth during this apparition: 100.78 million km (0.6737 AU) – the best time to say "Hello" to the Red Planet.
"Conjunction of Glory" 13-15 March 2012

Venus and Jupiter, the two brightest planets in the sky, will be within 3 degrees of each other in the evening sky of 15 March 2012 at 10:37:46 UTC. This will be quite a spectacle, as both planets are very bright – and this will be a fantastic visual and photographic opportunity, as it's not often that you get the brightest planets in our Solar System so close together. The next Venus-Jupiter conjunction after this one falls on May 28, 2013.

"March Equinox 2012" 20 March 2012

The March equinox occurs at 05:14 UTC, Tuesday 20 March. The Sun will shine directly down on the Earth's equator and there will be nearly equal amounts of day and night throughout the world. This is also the first day of spring (Vernal Equinox) in the northern hemisphere and the first day of fall (Autumnal Equinox) in the southern hemisphere. Wherever you are on 20 March, 2012, celebrate your season in the cycle of life with Astronomers Without Borders. Enjoy your own unique Equinox this year – and why not tell others about the experience? To the stars! 🙂 More about GAM 2012:
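Returning to the geometry in the equinox article above: the claim that the Sun rises due east everywhere on the equinox drops out of one line of spherical trigonometry. At sunrise the azimuth A (measured from north) satisfies cos A = sin(dec)/cos(lat), so when the solar declination is zero, cos A = 0 and A = 90 degrees at every latitude. A small sketch, ignoring atmospheric refraction and the Sun's finite disk:

```python
import math

def sunrise_azimuth_deg(declination_deg: float, latitude_deg: float) -> float:
    """Azimuth of sunrise measured from north (90 = due east).
    Ignores refraction and the Sun's angular size."""
    dec = math.radians(declination_deg)
    lat = math.radians(latitude_deg)
    return math.degrees(math.acos(math.sin(dec) / math.cos(lat)))

for lat in (0.0, 14.6, 51.5, 65.0):   # equator, Manila, London, near-Arctic
    equinox = sunrise_azimuth_deg(0.0, lat)     # solar declination = 0
    solstice = sunrise_azimuth_deg(23.44, lat)  # June solstice declination
    print(f"lat {lat:5.1f}: equinox {equinox:5.1f}  June solstice {solstice:5.1f}")
# The equinox column reads 90.0 at every latitude, while the solstice
# sunrise direction swings strongly with latitude.
```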
0.868914
3.532455
Update: It looks like we didn't roll a 1 on the d20, and the satellites passed each other without an impact. But this will probably become a more common occurrence as the skies get more crowded.

Over sixty years of space exploration have left their mark in Low Earth Orbit (LEO), where thousands of objects create the risk of collisions. These objects include the spent first stages of rockets, fragments of broken-up spacecraft, and satellites that are no longer operational. As Donald Kessler predicted, the growing presence of "space junk" could result in regular collisions, leading to a cascading effect (aka Kessler Syndrome). This evening – on Wednesday, Jan. 29th – such a collision might take place. The satellites in question are the Infrared Astronomical Satellite (IRAS), an old space telescope launched by NASA, the Netherlands, and the UK; and the GGSE-4 gravitational experiment launched by the US Air Force. These two satellites run the risk of colliding when their orbits cross paths at 6:40 p.m. EST (3:40 p.m. PST), about 900 km (560 mi) above Pittsburgh, Pennsylvania.

There's something poignant and haunting about astronomers of centuries past documenting things in the sky whose nature they could only guess at. It's true in the case of Père Dom Anthelme, who in 1670 saw a star suddenly burst into view near the head of the constellation Cygnus, the Swan. The object was visible with the naked eye for two years, as it flared in the sky repeatedly. Then it went dark. We call that object CK Vulpeculae.

One of the worst things that can happen during an orbital mission is an impact. Near-Earth orbit is filled with debris and particulate matter moving at very high speeds. At worst, a collision with even the smallest object can have catastrophic consequences. At best, it can delay a mission as technicians on the ground try to determine the damage and correct for it. This was the case when, on August 23rd, the European Space Agency's Sentinel-1A satellite was hit by a particle while it orbited the Earth. After several days of reviewing the data from on-board cameras, ground controllers determined what the culprit was, identified the affected area, and concluded that it had not interrupted the satellite's operations.

The Sentinel-1A mission was the first satellite launched as part of the ESA's Copernicus program, the world's largest single Earth-observation program to date. Since it was deployed in 2014, Sentinel-1A has been monitoring Earth using its C-band Synthetic Aperture Radar, which allows for crystal-clear images regardless of weather or light conditions. In addition to tracking oil spills and mapping sea ice, the satellite has also been monitoring the movement of land surfaces. Recently, it provided invaluable insight into the earthquake in Italy that claimed at least 290 lives and caused widespread damage. These images were used by emergency aid organizations to assist in evacuations, and scientists have begun to analyze them for indications of how the quake occurred.

The first indication that something was wrong came on Tuesday, August 23rd, at 17:07 GMT (10:07 PDT, 13:07 EDT), when controllers noted a small power reduction. At the time, the satellite was at an altitude of 700 km, and slight changes in its orientation and orbit were also noticed. After conducting a preliminary investigation, the operations team at the ESA's control center hypothesized that the satellite's solar wing had suffered an impact with a tiny object.
After reviewing footage from the on-board cameras, they spotted a 40 cm hole in one of the solar panels, consistent with the impact of a fragment measuring less than 5 mm in size. However, the power loss was not sufficient to interrupt operations, and the ESA was quick to allay fears that this would result in any interruption of Sentinel-1A's mission. They also indicated that the object's small size meant there could be no advance warning of the impact. As Holger Krag, Head of the Space Debris Office at ESA's establishment in Darmstadt, Germany, said in an agency press release: "Such hits, caused by particles of millimeter size, are not unexpected. These very small objects are not trackable from the ground, because only objects greater than about 5 cm can usually be tracked and, thus, avoided by maneuvering the satellites. In this case, assuming the change in attitude and the orbit of the satellite at impact, the typical speed of such a fragment, plus additional parameters, our first estimates indicate that the size of the particle was of a few millimeters."

While it is not clear if the object came from a spent rocket or dead satellite, or was merely a tiny clump of rock, Krag indicated that they are determined to find out. "Analysis continues to obtain indications on whether the origin of the object was natural or man-made," he said. "The pictures of the affected area show a diameter of roughly 40 cm created on the solar array structure, confirming an impact from the back side, as suggested by the satellite's attitude rate readings."

In the meantime, the ESA expects that Sentinel-1A will be back online shortly and doing the job for which it was intended. Beyond monitoring land movements, land use, and oil spills, Sentinel-1A also provides up-to-date information to help relief workers around the world respond to natural disasters and humanitarian crises. The Sentinel-1 satellites, part of the European Union's Copernicus Program, are operated by ESA on behalf of the European Commission.

Even though it's said that the average human eye can discern from seven to ten million different values and hues of colors, in reality our eyes are sensitive to only a very small section of the entire electromagnetic spectrum, corresponding to wavelengths in the range of 400 to 700 nanometers. Above and below those ranges lie enormously diverse segments of the EM spectrum, from minuscule yet powerful gamma rays to incredibly long, low-frequency radio waves. Astronomers observe the Universe in all wavelengths because many objects and phenomena can only be detected in EM ranges other than visible light (which itself can easily be blocked by clouds of dense gas and dust). But if we could see in radio waves the same way we do in visible light waves – that is, with longer wavelengths being perceived as "red" and shorter wavelengths seen as "violet," with all the blues, greens, and yellows in between – our world would look quite different... especially the night sky, which would be filled with fantastic shapes like those seen above!

Created from observations made at the Very Large Array in New Mexico, the image above shows a cluster of over 500 colliding galaxies located 800 million light-years away called Abell 2256. An intriguing target of study across the entire electromagnetic spectrum, here Abell 2256 (A2256 for short) has had its radio emissions mapped to the corresponding colors our eyes can see.
Within an area about the same width as the full Moon, a space battle between magical cosmic creatures seems to be taking place! (In reality A2256 spans about 4 million light-years.) See a visible-light image of A2256 by amateur astronomer Rick Johnson here.

The VLA radio observations will help researchers determine what's happening within A2256, where multiple groups of galaxy clusters are interacting. "The image reveals details of the interactions between the two merging clusters and suggests that previously unexpected physical processes are at work in such encounters," said Frazer Owen of the National Radio Astronomy Observatory (NRAO).

An online simulator for galactic collisions (Adrian Price-Whelan/Columbia University)

Have you ever had the desire to build your own galaxies, setting your own physical parameters and including as many stars as you want, and then smash them together like two toy cars on a track? Well, now you can do just that from the comfort of your own web browser (and no waiting billions of years for the results!) This interactive online app by Adrian Price-Whelan lets you design a galaxy, including such parameters as star count, radius and dispersion rate, and then create a second galaxy to fling at it. Clicking and dragging on the black area will send the invading galaxy on its course, letting you watch the various results over and over again. (If those SMBHs hit, look out!)

Inset image: Hubble interacting galaxies UGC 9618, 450 million light-years away. Credit: NASA, ESA, the Hubble Heritage (STScI/AURA)-ESA/Hubble Collaboration, and A. Evans (University of Virginia, Charlottesville/NRAO/Stony Brook University)
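For a feel of what such a simulator computes under the hood, here is a minimal, hypothetical sketch in the spirit of the classic restricted "toy galaxy" experiments: massless test stars orbit one point-mass galaxy core while a second core flies past on a straight line, integrated with a leapfrog step. This is not Price-Whelan's code, and real interacting galaxies need self-gravity and dark matter halos; it only shows how tidal distortions emerge from simple gravity:

```python
import numpy as np

G = 1.0  # toy units throughout

def accel(pos, centers, masses, soft=0.05):
    """Gravitational acceleration on each test star from the galaxy cores."""
    a = np.zeros_like(pos)
    for c, m in zip(centers, masses):
        d = c - pos
        r2 = (d**2).sum(axis=1) + soft**2     # softened to avoid singularities
        a += G * m * d / r2[:, None]**1.5
    return a

rng = np.random.default_rng(1)
n = 500
# Test stars on circular orbits around galaxy 1 at the origin:
radius = 0.5 + 1.5 * rng.random(n)
phi = 2 * np.pi * rng.random(n)
pos = np.column_stack([radius * np.cos(phi), radius * np.sin(phi)])
vcirc = np.sqrt(G * 1.0 / radius)
vel = np.column_stack([-vcirc * np.sin(phi), vcirc * np.cos(phi)])

centers = np.array([[0.0, 0.0], [8.0, 4.0]])      # two galaxy cores
core_vel = np.array([[0.0, 0.0], [-1.0, -0.5]])   # the intruder flies in
masses = [1.0, 1.0]

dt = 0.01
for _ in range(2000):  # leapfrog (kick-drift-kick)
    vel += 0.5 * dt * accel(pos, centers, masses)
    pos += dt * vel
    centers += dt * core_vel  # cores on straight lines (a deliberate simplification)
    vel += 0.5 * dt * accel(pos, centers, masses)

print(pos.shape, "final star positions; scatter-plot them to see the tidal tails")
```

Plotting `pos` after the loop shows the bridges and tails familiar from Toomre & Toomre's 1972 restricted three-body experiments, the ancestors of browser toys like this one.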
0.862029
3.664352
The pyramid-shaped mountain Ahuna Mons on the dwarf planet Ceres has opened up some of its wonders to us as NASA's Dawn spacecraft captured precious information from orbit. Ceres is situated in the asteroid belt lying between Mars and Jupiter. The Dawn spacecraft slid into orbit a year ago, on March 6, with the aim of finding out more about the little-known dwarf planet. Ahuna Mons is one of the most conspicuous and well-documented features of Ceres; no other structure found on the dwarf planet matches its height (5 km on its steepest side, and an average height of 4 km) or its sharp definition. As Dawn has drawn closer and closer, we have been able to learn more about this landscape feature: it is not exactly pyramidal in shape. Ahuna Mons might seem to take the shape of a pyramid from afar, but in fact it resembles a dome with "smooth, steep walls". Some of its slopes reveal the presence of bright materials. New images of Ceres will be released at the 47th Lunar and Planetary Science Conference on March 22. Ceres was in the spotlight earlier because of mysterious bright spots on its surface in the Occator crater, another famous feature of Ceres. A number of theories have since been put forward to explain the occurrence and nature of these bright 'lights', from salt deposits to ice.
0.854545
3.321422
Astronomers find the most powerful explosion in the universe

Astronomers have found evidence of the most powerful explosion yet seen in the universe. Using data from ESA's XMM-Newton and NASA's Chandra X-ray space telescopes, the Murchison Widefield Array (MWA) in Australia, and the Giant Metrewave Radio Telescope (GMRT) in India, the team found that a "crater" of intergalactic scale was blasted out by a supermassive black hole in the Ophiuchus galaxy cluster, a collection of thousands of galaxies 390 million light-years from Earth.

There is big, really big, and so big that it's hard to wrap one's mind around the scale. In this case, the Ophiuchus cluster black hole explosion is so large that it seems almost unbelievable. "In some ways, this blast is similar to how the eruption of Mount St. Helens in 1980 ripped off the top of the mountain," says Simona Giacintucci of the Naval Research Laboratory in Washington, DC. "A key difference is that you could fit fifteen Milky Way galaxies in a row into the crater this eruption punched into the cluster's hot gas." Fifteen Milky Ways across would be equal to 1,575,000 light-years, well over half the distance between Earth and the Andromeda galaxy.

Sitting at the center of this intergalactic crater, and at the core of the galaxy cluster, is a supermassive black hole, which provided the energy for this incredible explosion of powerful gas jets and beams of X-rays and radio waves. In fact, it was the very size of the explosion that kept it from being believed when it was first seen. According to ESA, the first clue came in 2016, when a team of astronomers led by Norbert Werner examined X-ray data of the blast area collected by the Chandra observatory. They saw the curved edge in the resulting X-ray image, but rejected the idea of an explosion forming such a curve because of the vast amounts of energy involved. Then a later study led by Giacintucci found the same curved edge in data from the XMM-Newton observatory and combined this with radio observations from the MWA and GMRT, confirming the Chandra findings and showing that the edge marks the boundary of a region of radio-emitting gas. What was even more remarkable was that when they ran the numbers, they found the energy of the explosion was five times greater than that of the previous record holder, in the galaxy cluster MS0735.6+7421. "The radio data fit inside the X-rays like a hand in a glove," says Maxim Markevitch of NASA's Goddard Space Flight Center. "This is the clincher that tells us an eruption of unprecedented size occurred here."

Currently, the Ophiuchus black hole explosion is a "radio fossil." The explosion was driven by the black hole drawing in gas and dust from its host galaxy, most of which did not fall in; instead, the matter was slung away at nearly the speed of light in massive jets or beams. Eventually, this caused the gas around the cluster to slosh like wine in a wineglass, depriving the supermassive black hole of fuel to grow and produce new jets. "As is often the case in astrophysics we really need multi-wavelength observations to truly understand the physical processes at work," says team member Melanie Johnston-Hollitt of the International Centre for Radio Astronomy in Australia.
"Having the combined information from X-ray and radio telescopes has revealed this extraordinary source, but more data will be needed to answer the many remaining questions this object poses." The research was published in The Astrophysical Journal.
0.850188
3.962593
Mysterious Objects at the Edge of the Electromagnetic Spectrum The human eye is crucial to astronomy. Without the ability to see, the luminous universe of stars, planets and galaxies would be closed to us, unknown forever. Nevertheless, astronomers cannot shake their fascination with the invisible. Outside the realm of human vision is an entire electromagnetic spectrum of wonders. Each type of light from radio waves to gamma-rays reveals something unique about the universe. Some wavelengths are best for studying black holes; others reveal newborn stars and planets; while others illuminate the earliest years of cosmic history. NASA has many telescopes “working the wavelengths” up and down the electromagnetic spectrum. One of them, the Fermi Gamma-Ray Telescope orbiting Earth, has just crossed a new electromagnetic frontier. “Fermi is picking up crazy-energetic photons,” says Dave Thompson, an astrophysicist at NASA’s Goddard Space Flight Center. “And it’s detecting so many of them we’ve been able to produce the first all-sky map of the very high energy universe.” “This is what the sky looks like near the very edge of the electromagnetic spectrum, between 10 billion and 100 billion electron volts.” The light we see with human eyes consists of photons with energies in the range 2 to 3 electron volts. The gamma-rays Fermi detects are billions of times more energetic, from 20 million to more than 300 billion electron volts. These gamma-ray photons are so energetic, they cannot be guided by the mirrors and lenses found in ordinary telescopes. Instead Fermi uses a sensor that is more like a Geiger counter than a telescope. If we could wear Fermi’s gamma ray “glasses,” we’d witness powerful bullets of energy – individual gamma rays – from cosmic phenomena such as supermassive black holes and hypernova explosions. The sky would be a frenzy of activity. Before Fermi was launched in June 2008, there were only four known celestial sources of photons in this energy range. “In 3 years Fermi has found almost 500 more,” says Thompson.
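To put those electron-volt figures in perspective, photon energy and wavelength are tied together by E = hc/lambda. A quick sketch converting visible light and the ends of Fermi's range:

```python
HC_EV_NM = 1239.84  # h*c in eV*nm

def wavelength_nm(energy_ev: float) -> float:
    """Photon wavelength from its energy via E = h*c / lambda."""
    return HC_EV_NM / energy_ev

for label, e in (("red visible photon", 2.0),
                 ("violet visible photon", 3.0),
                 ("Fermi low end, 20 MeV", 20e6),
                 ("Fermi map floor, 10 GeV", 10e9),
                 ("Fermi map ceiling, 100 GeV", 100e9)):
    print(f"{label}: {wavelength_nm(e):.3g} nm")
# A 100 GeV photon has a wavelength of ~1.2e-8 nm, roughly a hundredth
# the diameter of a proton, which is why mirrors and lenses are useless
# and Fermi counts individual photons instead.
```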
0.803865
3.733891
The first exoplanet ever discovered around a Sun-like star was 51 Pegasi b, in 1995. It marked the slow beginning of what would soon become the 'exoplanet gold rush.' It meant that for the first time, we had the technological capacity to discover new worlds, and science fiction soon became science fact. 51 Pegasi b was also a very strange planet: a massive Jupiter-sized world orbiting very close to its home star. On one hand, it was this characteristic that made it much easier to detect. On the other, it showed us that we did not understand planetary system formation as well as we thought. Since this original discovery, we have dedicated telescopes to the search for new planets of all sizes, and are beginning to discover planets similar to our own home. As our technology improves it seems as if we are finding unusual new configurations on a weekly basis, everything from planets with rings that make Saturn's look tiny to fast orbiters that are being vaporized by the heat of their home star.

Now our technology has taken us a step further, and for the first time we have seen the light reflected off of 51 Pegasi b. In astronomy, light is everything. Photons bounce around the universe, and the study of their characteristics allows us to unlock the secrets of the cosmos. By directly detecting light from 51 Pegasi b, we can figure out the planet's mass, orbital inclination, what it's made of, what its atmosphere is made of, and how reflective it is. This is a truly deep study of a non-solar planet. Of course, the real difficulty in such a process is blocking out the light from the planet's parent star. This has been an issue for years in the detection of exoplanets in general. But with the development of better telescopes and more precise instruments, the starlight can be blocked more easily, allowing us a much clearer view of the reflecting planet. 51 Pegasi b is only 50 light years from Earth, making it a much easier target for astronomers than some of the more distant exoplanets. For this reason, it's a good starting point for the direct study of exoplanets, and paves the way for future exploration. I've said it before, but it bears repeating: we are in a renaissance of exoplanet science. Years from now this period will be talked about as a milestone in humanity's quest for the stars.
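The difficulty described above can be put in numbers: in reflected light, the planet-to-star flux ratio is roughly the geometric albedo times (Rp/a)^2, where a is the orbital distance. A rough sketch with assumed hot-Jupiter-like values for 51 Pegasi b; its radius and albedo are not pinned down by radial velocities alone, so these are placeholders:

```python
R_JUPITER_M = 7.149e7
AU_M = 1.496e11

def reflected_contrast(albedo: float, r_planet_m: float, a_m: float) -> float:
    """Planet/star flux ratio for reflected light: A_g * (Rp / a)^2."""
    return albedo * (r_planet_m / a_m) ** 2

# 51 Peg b orbits at roughly 0.05 AU; radius and albedo assumed:
c = reflected_contrast(albedo=0.5,
                       r_planet_m=1.2 * R_JUPITER_M,
                       a_m=0.05 * AU_M)
print(f"contrast ~ {c:.1e}")  # ~7e-5: the star outshines the planet by
                              # a factor of more than ten thousand
```

Even a contrast of one part in ten thousand is only reachable because the planet hugs its star; for an Earth at 1 AU the ratio plunges by several more orders of magnitude, which is why direct detections started with hot Jupiters.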
0.838691
3.505999
On Thursday, New Horizons mission scientists published five separate studies on Pluto's geology, atmosphere, and moon system in the journal Science. The papers revealed that the icy dwarf planet is far more diverse than scientists had expected. The latest studies were based on fresh data collected by NASA's New Horizons probe during its historic flyby of the tiny planet on July 14, 2015. It is the first time scientists have managed to closely analyze the new data and provide the public with a more accurate depiction of the former ninth planet from the sun. Shortly after the flyby, the mission team could only release information on the most striking features, like the 'heart' of Pluto, a region called Tombaugh Regio, and its different types of snow made of frozen carbon monoxide, methane, and nitrogen. But after eight months, researchers have learned that Pluto is not the dull planet they initially suspected after looking at the low-resolution imagery provided by the Hubble Space Telescope. Hubble's images couldn't render the geological diversity of the dwarf planet because its great distance from the sun leaves the space rock so dimly lit. Plus, since the dwarf planet is even smaller than the Moon, scientists had expected Pluto to be a "boring cratered ball," as researcher William M. Grundy at the Lowell Observatory in Arizona put it. Other researchers had imagined that Pluto might be geologically similar to Triton, one of Neptune's many moons. But both groups were wrong. Instead, NASA's probe captured a variety of landscapes, outcrops, and features. "The big surprise is that Pluto turned out so surprising," noted NASA's Jeffrey M. Moore, head of the New Horizons imaging team at Ames Research Center in California. According to one of the research papers, the planet is dotted with ice volcanoes that spewed liquid nitrogen. Plus, scientists have found a strange mountain, called Wright Mons, with a hole at its top. Planetary scientists said that they had never seen anything like it in the entire solar system. The recently published studies also described Pluto's gigantic moon Charon. As expected, Charon does not host the variety of ices found on Pluto's surface; its crust is mainly covered in water ice. Scientists explained that this is because the moon's weaker gravity doesn't allow it to hold on to carbon monoxide, methane, and nitrogen. Image Source: Wikimedia
That spiral galaxies have magnetic fields has been known for well over half a century (and predictions that they should exist preceded discovery by several years), and some galaxies’ magnetic fields have been mapped in great detail. But how did these magnetic fields come to have the characteristics we observe them to have? And how do they persist? A recent paper by UK astronomers Stas Shabala, James Mead, and Paul Alexander may contain answers to these questions, with four physical processes playing a key role: infall of cool gas onto the disk, supernova feedback (these two increase the magnetohydrodynamical turbulence), star formation (this removes gas and hence turbulent energy from the cold gas), and differential galactic rotation (this continuously transfers field energy from the incoherent random field into an ordered field). However, at least one other key process is needed, because the astronomers’ models are inconsistent with the observed fields of massive spiral galaxies. “Radio synchrotron emission of high energy electrons in the interstellar medium (ISM) indicates the presence of magnetic fields in galaxies. Rotation measures (RM) of background polarized sources indicate two varieties of field: a random field, which is not coherent on scales larger than the turbulence of the ISM; and a spiral ordered field which exhibits large-scale coherence,” the authors write. “For a typical galaxy these fields have strengths of a few μG. In a galaxy such as M51, the coherent magnetic field is observed to be associated with the optical spiral arms. Such fields are important in star formation and the physics of cosmic rays, and could also have an effect on galaxy evolution, yet, despite their importance, questions about their origin, evolution and structure remain largely unsolved.” This field in astrophysics is making rapid progress, with understanding of how the random field is generated having become reasonably well-established only in the last decade or so (it’s generated by turbulence in the ISM, modeled as a single-phase magnetohydrodynamic (MHD) fluid, within which magnetic field lines are frozen). On the other hand, the production of the large-scale field by the winding of the random fields into a spiral, by differential rotation (a dynamo), has been known for much longer. The details of how the ordered field in spirals formed as those galaxies themselves formed – within a few hundred million years of the decoupling of baryonic matter and radiation (that gave rise to the cosmic microwave background we see today) – are becoming clear, though testing these hypotheses is not yet possible, observationally (very few high-redshift galaxies have been studied in the optical and NIR, period, let alone have had their magnetic fields mapped in detail). “We present the first (to our knowledge) attempt to include magnetic fields in a self-consistent galaxy formation and evolution model. A number of galaxy properties are predicted, and we compare these with available data,” Shabala, Mead, and Alexander say. They begin with an analytical galaxy formation and evolution model, which “traces gas cooling, star formation, and various feedback processes in a cosmological context. 
The model simultaneously reproduces the local galaxy properties, star formation history of the Universe, the evolution of the stellar mass function to z ~1.5, and the early build-up of massive galaxies.” Central to the model are the ISM’s turbulent kinetic energy and the random magnetic field energy: the two equilibrate on timescales that are effectively instantaneous in cosmological terms. The drivers are thus the physical processes which inject energy into the ISM, and which remove energy from it. “One of the most important sources of energy injection into the ISM are supernovae,” the authors write. “Star formation removes turbulent energy,” as you’d expect, and gas “accreting from the dark matter halo deposits its potential energy in turbulence.” In their model there are only four free parameters – three describe the efficiency of the processes which add or remove turbulence from the ISM, and one how fast ordered magnetic fields arise from random ones. Are Shabala, Mead, and Alexander excited about their results? You be the judge: “Two local samples are used to test the models. The model reproduces magnetic field strengths and radio luminosities well across a wide range of low and intermediate-mass galaxies.” And what do they think is needed to account for the detailed astronomical observations of high-mass spiral galaxies? “Inclusion of gas ejection by powerful AGNs is necessary in order to quench gas cooling.” It goes without saying that the next generation of radio telescopes – EVLA, SKA, and LOFAR – will subject all models of magnetic fields in galaxies (not just spirals) to much more stringent tests (and even enable hypotheses on the formation of those fields, over 10 billion years ago, to be tested).
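To make that energy bookkeeping concrete, here is a minimal toy sketch of the balance the model describes. This is not the authors' actual code, and every coefficient below is a made-up illustrative number: supernovae and infalling gas inject turbulent energy, star formation drains it, and differential rotation winds a fraction of the random-field energy into an ordered field.

```python
# Toy energy budget for ISM turbulence and magnetic fields.
# All rates and coefficients are illustrative assumptions,
# not values from Shabala, Mead & Alexander.
dt = 1e-3                  # Gyr, integration step
e_turb, e_ord = 1.0, 0.0   # turbulent (~ random-field) and ordered-field energy

sn_injection = 2.0   # energy injected by supernova feedback per Gyr
infall = 0.5         # energy deposited by cooling/infalling gas per Gyr
sf_drain = 1.8       # energy removed with gas consumed by star formation per Gyr
winding = 0.3        # fraction of random-field energy ordered per Gyr (dynamo)

for _ in range(int(10 / dt)):               # evolve for 10 Gyr
    e_rand = e_turb                          # equipartition: random field tracks turbulence
    de_turb = (sn_injection + infall - sf_drain * e_turb - winding * e_rand) * dt
    e_ord += winding * e_rand * dt           # differential rotation orders the field
    e_turb = max(e_turb + de_turb, 0.0)

print(f"turbulent/random energy: {e_turb:.2f}, ordered-field energy: {e_ord:.2f}")
```

The qualitative behavior is the point here: a random field in rough equipartition with the turbulence, plus an ordered field that grows as the disk winds it up. The real model ties each term to the galaxy's cooling, star formation and feedback history.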
At the center of our galaxy, roughly 26,000 light-years from Earth, is the supermassive black hole (SMBH) known as Sagittarius A*. The powerful gravity of this object and the dense cluster of stars around it provide astronomers with a unique environment for testing physics under the most extreme conditions. In particular, it offers them a chance to test Einstein’s Theory of General Relativity (GR). For example, in the past thirty years, astronomers have been observing a star in the vicinity of Sagittarius A* (S2) to see if its orbit conforms to what is predicted by General Relativity. Recent observations made with the ESO’s Very Large Telescope (VLT) have completed an observation campaign that confirmed that the star’s orbit is rosette-shaped, once again proving that Einstein’s theory was right on the money! To put it simply, Dark Matter is not only believed to make up the bulk of the Universe’s mass but also acts as the scaffolding on which galaxies are built. But to find evidence of this mysterious, invisible mass, scientists are forced to rely on indirect methods similar to the ones used to study black holes. Essentially, they measure how the presence of Dark Matter affects stars and galaxies in its vicinity. Black holes are one of the most awesome and mysterious forces in the Universe. Originally predicted by Einstein’s Theory of General Relativity, these points in spacetime are formed when massive stars undergo gravitational collapse at the end of their lives. Despite decades of study and observation, there is still much we don’t know about this phenomenon. For example, scientists are still largely in the dark about how the matter that falls into orbit around a black hole and is gradually fed onto it (the accretion disk) behaves. Thanks to a recent study, in which an international team of researchers conducted the most detailed simulations of a black hole to date, a number of theoretical predictions regarding accretion disks have finally been validated. A little over a year ago, LIGO was taken offline so that upgrades could be made to its instruments, which would allow for detections to take place “weekly or even more often.” After completing the upgrades on April 1st, the observatory went back online and performed as expected, detecting two probable gravitational wave events in the space of two weeks. Since the 1970s, astronomers have theorized that at the center of our galaxy, about 26,000 light-years from Earth, there exists a supermassive black hole (SMBH) known as Sagittarius A*. Measuring an estimated 44 million km (27.3 million mi) in diameter and weighing in at roughly 4 million Solar masses, this black hole is believed to have had a profound influence on the formation and evolution of our galaxy. This discovery not only opened up an exciting new field of research, but has opened the door to many intriguing possibilities. One such possibility, according to a new study by a team of Russian scientists, is that gravitational waves could be used to transmit information. In much the same way as electromagnetic waves are used to communicate via antennas and satellites, the future of communications could be gravitationally-based. In August of 2017, astronomers made another major breakthrough when the Laser Interferometer Gravitational-Wave Observatory (LIGO) detected gravitational waves that were believed to be caused by the merger of two neutron stars.
Since that time, scientists at multiple facilities around the world have conducted follow-up observations to determine the aftermath of this merger, as well as to test various cosmological theories. For instance, in the past, some scientists have suggested that the inconsistencies between Einstein’s Theory of General Relativity and the nature of the Universe over large scales could be explained by the presence of extra dimensions. However, according to a new study by a team of American astrophysicists, last year’s kilonova event effectively rules out this hypothesis. In 1915, Albert Einstein published his famous Theory of General Relativity, which provided a unified description of gravity as a geometric property of space and time. This theory gave rise to the modern theory of gravitation and revolutionized our understanding of physics. Even though a century has passed since then, scientists are still conducting experiments that confirm his theory’s predictions. The new infrared observations collected by these instruments allowed the team to monitor one of the stars (S2) that orbits Sagittarius A* as it passed in front of the black hole – which took place in May of 2018. At the closest point in its orbit, the star was at a distance of less than 20 billion km (12.4 billion mi) from the black hole and was moving at a speed in excess of 25 million km/h (15 million mph) – almost three percent of the speed of light. Whereas the SINFONI instrument was used to measure the velocity of S2 towards and away from Earth, the GRAVITY instrument in the VLT Interferometer (VLTI) made extraordinarily precise measurements of the changing position of S2 in order to define the shape of its orbit. The GRAVITY instrument also created the sharp images that revealed the motion of the star as it passed close to the black hole. The team then compared the position and velocity measurements to previous observations of S2 made with other instruments, and checked these results against predictions made by Newton’s Law of Universal Gravitation, General Relativity, and other theories of gravity. As expected, the new results were consistent with the predictions made by Einstein over a century ago. As Reinhard Genzel, who in addition to being the leader of the GRAVITY collaboration was a co-author on the paper, explained in a recent ESO press release: “This is the second time that we have observed the close passage of S2 around the black hole in our galactic center. But this time, because of much improved instrumentation, we were able to observe the star with unprecedented resolution. We have been preparing intensely for this event over several years, as we wanted to make the most of this unique opportunity to observe general relativistic effects.” When observed with the VLT’s new instruments, the team noted an effect called gravitational redshift, where the light coming from S2 changed color as it drew closer to the black hole. This was caused by the very strong gravitational field of the black hole, which stretched the wavelength of the star’s light, causing it to shift towards the red end of the spectrum. The change in the wavelength of light from S2 agrees precisely with what Einstein’s field equations predicted.
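As a rough cross-check on that result, the expected fractional wavelength shift at closest approach combines the gravitational term GM/(rc²) with the special-relativistic transverse Doppler term v²/(2c²). A minimal sketch using the approximate figures quoted above (4 million solar masses, ~20 billion km, 25 million km/h):

```python
# Order-of-magnitude check of S2's redshift at closest approach.
# Inputs are the approximate values quoted in the article.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_SUN = 1.989e30     # kg

M = 4e6 * M_SUN      # Sagittarius A* mass (~4 million solar masses)
r = 20e9 * 1e3       # pericenter distance (~20 billion km, in meters)
v = 25e6 / 3.6       # ~25 million km/h converted to m/s

z_grav = G * M / (r * c**2)      # gravitational redshift
z_tdop = v**2 / (2 * c**2)       # transverse (special-relativistic) Doppler
print(f"z ~ {z_grav + z_tdop:.1e}")   # ~6e-4, i.e. a ~170 km/s velocity equivalent
```

With these rounded inputs the two terms are comparable, and their sum lands at the few-times-10⁻⁴ level that the GRAVITY team measured.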
As Frank Eisenhauer – a researcher from the Max Planck Institute for Extraterrestrial Physics, the Principal Investigator of GRAVITY and the SINFONI spectrograph, and a co-author on the study – indicated: “Our first observations of S2 with GRAVITY, about two years ago, already showed that we would have the ideal black hole laboratory. During the close passage, we could even detect the faint glow around the black hole on most of the images, which allowed us to precisely follow the star on its orbit, ultimately leading to the detection of the gravitational redshift in the spectrum of S2.” Whereas other tests have been performed that have confirmed Einstein’s predictions, this is the first time that the effects of General Relativity have been observed in the motion of a star around a supermassive black hole. In this respect, Einstein has been proven right once again, using one of the most extreme laboratories to date! What’s more, it confirmed that tests involving relativistic effects can provide consistent results over time and space. “Here in the Solar System we can only test the laws of physics now and under certain circumstances,” said Françoise Delplancke, head of the System Engineering Department at ESO. “So it’s very important in astronomy to also check that those laws are still valid where the gravitational fields are very much stronger.” In the near future, another relativistic test will be possible as S2 moves away from the black hole. This is known as the Schwarzschild precession, where the star’s elliptical orbit is expected to rotate slowly around the black hole. The GRAVITY Collaboration will be monitoring S2 to observe this effect as well, once again relying on the VLT’s very precise and sensitive instruments. As Xavier Barcons (the ESO’s Director General) indicated, this accomplishment was made possible thanks to the spirit of international cooperation represented by the GRAVITY collaboration and the instruments they helped the ESO develop: “ESO has worked with Reinhard Genzel and his team and collaborators in the ESO Member States for over a quarter of a century. It was a huge challenge to develop the uniquely powerful instruments needed to make these very delicate measurements and to deploy them at the VLT in Paranal. The discovery announced today is the very exciting result of a remarkable partnership.” And be sure to check out this video of the GRAVITY Collaboration’s successful test, courtesy of the ESO. When looking to study the most distant objects in the Universe, astronomers often rely on a technique known as gravitational lensing. Based on the principles of Einstein’s Theory of General Relativity, this technique involves relying on a large distribution of matter (such as a galaxy cluster or star) to magnify the light coming from a distant object, thereby making it appear brighter and larger. This technique has allowed for the study of individual stars in distant galaxies. In a recent study, an international team of astronomers used a galaxy cluster to study the farthest individual star ever seen in the Universe. Although it is normally too faint to observe, the presence of a foreground galaxy cluster allowed the team to study the star in order to test a theory about dark matter. For the sake of their study, Prof. Kelly and his associates used the galaxy cluster known as MACS J1149+2223 as their lens. Located about 5 billion light-years from Earth, this galaxy cluster sits between the Solar System and the galaxy that contains Icarus.
By combining Hubble’s resolution and sensitivity with the strength of this gravitational lens, the team was able to see and study Icarus, a blue giant. Icarus, named after the Greek mythological figure who flew too close to the Sun, has had a rather interesting history. At a distance of roughly 9 billion light-years from Earth, the star appears to us as it did when the Universe was just 4.4 billion years old. In April of 2016, the star temporarily brightened to 2,000 times its normal luminosity thanks to the gravitational amplification of a star in MACS J1149+2223. As Prof. Kelly explained in a recent UCLA press release, this temporarily allowed Icarus to become visible for the first time to astronomers: “You can see individual galaxies out there, but this star is at least 100 times farther away than the next individual star we can study, except for supernova explosions.” Kelly and a team of astronomers had been using Hubble and MACS J1149+2223 to magnify and monitor a supernova in the distant spiral galaxy when they spotted the new point of light not far away. Given the position of the new source, they determined that it should be much more highly magnified than the supernova. What’s more, previous studies of this galaxy had not shown the light source, indicating that it was being lensed. As Tommaso Treu, a professor of physics and astronomy in the UCLA College and a co-author of the study, indicated: “The star is so compact that it acts as a pinhole and provides a very sharp beam of light. The beam shines through the foreground cluster of galaxies, acting as a cosmic magnifying glass… Finding more such events is very important to make progress in our understanding of the fundamental composition of the universe.” In this case, the star’s light provided a unique opportunity to test a theory about the invisible mass (aka. “dark matter”) that permeates the Universe. Basically, the team used the pinpoint light source provided by the background star to probe the intervening galaxy cluster and see if it contained huge numbers of primordial black holes, which are considered a potential candidate for dark matter. These black holes are believed to have formed during the birth of the Universe and to have masses tens of times larger than the Sun. However, the results of this test showed that light fluctuations from the background star, which had been monitored by Hubble for thirteen years, disfavor this theory. If dark matter were indeed made up of tiny black holes, the light coming from Icarus would have looked much different. Since it was discovered in 2016 using the gravitational lensing method, Icarus has provided a new way for astronomers to observe and study individual stars in distant galaxies. In so doing, astronomers are able to get a rare and detailed look at individual stars in the early Universe and see how they (and not just galaxies and clusters) evolved over time. When the James Webb Space Telescope (JWST) is deployed in 2020, astronomers expect to get an even better look and learn so much more about this mysterious period in cosmic history. The Multiverse Theory, which states that there may be multiple or even an infinite number of Universes, is a time-honored concept in cosmology and theoretical physics. While the term goes back to the late 19th century, the scientific basis of this theory arose from quantum physics and the study of cosmological forces like black holes, singularities, and problems arising out of the Big Bang Theory.
One of the most burning questions when it comes to this theory is whether or not life could exist in multiple Universes. If indeed the laws of physics change from one Universe to the next, what could this mean for life itself? According to a new series of studies by a team of international researchers, it is possible that life could be common throughout the Multiverse (if it actually exists). Together, the research team sought to determine how the accelerated expansion of the cosmos could have affected the rate of star and galaxy formation in our Universe. This accelerated rate of expansion, which is an integral part of the Lambda-Cold Dark Matter (Lambda-CDM) model of cosmology, arose out of problems posed by Einstein’s Theory of General Relativity. As a consequence of Einstein’s field equations, physicists understood that the Universe would either be in a state of expansion or contraction since the Big Bang. In 1917, Einstein responded by proposing the “Cosmological Constant” (represented by Lambda), a force that “held back” the effects of gravity and thus ensured that the Universe was static and unchanging. Shortly thereafter, Einstein retracted this proposal when Edwin Hubble revealed (based on redshift measurements of other galaxies) that the Universe was indeed in a state of expansion. Einstein apparently went as far as to declare the Cosmological Constant “the biggest blunder” of his career as a result. However, research into cosmological expansion during the late 1990s caused his theory to be reevaluated. In short, ongoing studies of the large-scale Universe revealed that during the past 5 billion years, cosmic expansion has accelerated. As such, astronomers began to hypothesize the existence of a mysterious, invisible force that was driving this acceleration. Popularly known as “Dark Energy”, this force is also referred to as the Cosmological Constant (CC), since it is responsible for counteracting the effects of gravity. Since that time, astrophysicists and cosmologists have sought to understand how Dark Energy could have affected cosmic evolution. This is an issue since our current cosmological models predict that there should be more Dark Energy in our Universe than has been observed. However, accounting for larger amounts of Dark Energy would cause such a rapid expansion that it would dilute matter before any stars, planets or life could form. For the first study, Salcido and the team therefore sought to determine how the presence of more Dark Energy could affect the rate of star formation in our Universe. To do this, they conducted hydrodynamical simulations using the EAGLE (Evolution and Assembly of GaLaxies and their Environments) project – one of the most realistic simulations of the observed Universe. Using these simulations, the team considered the effects that Dark Energy (at its observed value) would have on star formation over the past 13.8 billion years, and an additional 13.8 billion years into the future. From this, the team developed a simple analytic model that indicated that Dark Energy – despite the difference in the rate of cosmic expansion – would have a negligible impact on star formation in the Universe. They further showed that the impact of Lambda only becomes significant when the Universe has already produced most of its stellar mass, and that it decreases the total density of star formation by only about 15%.
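For reference, the standard way Lambda enters the expansion history (textbook cosmology, not something specific to these papers) is through the Friedmann equation:

```latex
H^2 \equiv \left(\frac{\dot{a}}{a}\right)^2
  = \frac{8\pi G}{3}\,\rho \;-\; \frac{k c^2}{a^2} \;+\; \frac{\Lambda c^2}{3}
```

Because the matter density ρ dilutes as the Universe expands while the Λ term stays constant, dark energy only comes to dominate at late times, which is why varying Λ in these simulations matters little until most of the stellar mass has already formed.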
As Salcido explained in a Durham University press release: “For many physicists, the unexplained but seemingly special amount of dark energy in our Universe is a frustrating puzzle. Our simulations show that even if there was much more dark energy or even very little in the Universe then it would only have a minimal effect on star and planet formation, raising the prospect that life could exist throughout the Multiverse.” For the second study, the team used the same simulation from the EAGLE collaboration to investigate the effect of varying degrees of the CC on the formation of galaxies and stars. This consisted of simulating Universes with Lambda values ranging from 0 to 300 times the current value observed in our Universe. However, since the Universe’s rate of star formation peaked around 3.5 billion years before the onset of accelerating expansion (ca. 8.5 billion years ago, and 5.3 billion years after the Big Bang), increases in the CC had only a small effect on the rate of star formation. Taken together, these simulations indicated that in a Multiverse, where the laws of physics may differ widely, the accelerated cosmic expansion caused by more dark energy would not have a significant impact on the rates of star or galaxy formation. This, in turn, indicates that other Universes in the Multiverse would be just about as habitable as our own, at least in theory. As Dr. Barnes explained: “The Multiverse was previously thought to explain the observed value of dark energy as a lottery – we have a lucky ticket and live in the Universe that forms beautiful galaxies which permit life as we know it. Our work shows that our ticket seems a little too lucky, so to speak. It’s more special than it needs to be for life. This is a problem for the Multiverse; a puzzle remains.” However, the team’s studies also cast doubt on the ability of Multiverse Theory to explain the observed value of Dark Energy in our Universe. According to their research, if we do live in a Multiverse, we would expect to be observing as much as 50 times more Dark Energy than we are. Although their results do not rule out the possibility of the Multiverse, the tiny amount of Dark Energy we’ve observed would be better explained by the presence of an as-yet-undiscovered law of nature. As Professor Richard Bower, a member of Durham University’s Institute for Computational Cosmology and a co-author on the paper, explained: “The formation of stars in a universe is a battle between the attraction of gravity, and the repulsion of dark energy. We have found in our simulations that Universes with much more dark energy than ours can happily form stars. So why such a paltry amount of dark energy in our Universe? I think we should be looking for a new law of physics to explain this strange property of our Universe, and the Multiverse theory does little to rescue physicists’ discomfort.” These studies are timely since they come on the heels of Stephen Hawking’s final theory, which cast doubt on the existence of the Multiverse and proposed a finite and reasonably smooth Universe instead. Basically, all three studies indicate that the debate about whether or not we live in a Multiverse and the role of Dark Energy in cosmic evolution is far from over. But we can look forward to next-generation missions providing some helpful clues in the future. What’s more, all of these missions are expected to be gathering their first light sometime in the 2020s.
So stay tuned, because more information – with cosmological implications – will be arriving in just a few years’ time!
The bright face of our planet's Moon has been inspiring astronomers for centuries. Since the Moon is much closer to Earth than any other object in the night sky, views of its illuminated surface are amazingly detailed when observed through binoculars or a telescope. Intriguing craters, craggy mountains, and vast "seas" of ancient lava flows make for wonderful targets for backyard astronomers. With the affordable Orion MoonMap 260, you can easily identify, locate and learn about the many interesting features of Earth's only natural satellite. The MoonMap identifies over 260 of the most popular lunar surface features including craters, mountain ranges, valleys, rilles, "seas" and more. Each feature is cross-referenced in the MoonMap's tables with its official name, size and a brief description. In addition to lunar landscape features, all successful spacecraft landing sites from US Apollo, Surveyor and Ranger Probe missions as well as Soviet Luna Probe missions are also clearly identified on the map. We've designed the Orion MoonMap 260 with versatility in mind. Both a correct-image and reversed, or "mirror image," view of the Moon are included on the map, so it can be used when observing the Moon with unaided eyes, binoculars, or any refractor, reflector, or Cassegrain telescope. The map is conveniently laminated for use in almost any weather. Using the MoonMap 260 is easy. Simply find the reference number of the lunar feature you wish to identify on the map, and then look up its name using the map's numerical index. Conversely, if you wish to locate a specific named feature with your telescope or binoculars, use the map's numerical index to find the lunar feature's reference number, then locate the number and feature on the map. For ease of use, the reference numbers are roughly ordered from north to south (top to bottom). The Orion MoonMap 260 measures 25.25" x 11" when fully unfolded, and folds up to 8.5" x 11". The tri-fold MoonMap 260 is plastic laminated for long-lasting durability as well as protection against dew, dirt, and the occasional coffee spill. Whether you're an experienced amateur astronomer or a novice just beginning to explore the night sky, the MoonMap 260 will help you make each moonlit night more memorable. It's a must-have for any of our fellow "Luna"tics!
Homo sapiens now rivals the great forces of nature. Humanity is a prime driver of change of the Earth system. Industrialised societies alter the planet on a scale equivalent to an asteroid impact. This is how the Anthropocene – the proposed new geological period in which human activity profoundly shapes the environment – is often described in soundbites. But is it possible to formalise such statements mathematically? I think so, and believe doing this creates an unequivocal statement of the risks industrialised societies are taking at a time when action is vital. Following the maxim of keeping everything as simple as possible, but not simpler, Will Steffen from the Australian National University and I drew up an Anthropocene equation by homing in on the rate of change of Earth’s life support system: the atmosphere, oceans, forests and wetlands, waterways and ice sheets and fabulous diversity of life. For four billion years, the rate of change of the Earth system (E) has been a complex function of astronomical (A) and geophysical (G) forces plus internal dynamics (I): Earth’s orbit around the sun, gravitational interactions with other planets, the sun’s heat output, colliding continents, volcanoes and evolution, among others. That rate of change has been anything but steady of late. If we take a baseline of the last 7000 years, until recently, global temperature decreased at a rate of 0.01 °C per century. The current rate (last 45 years) is a rise of 1.7 °C per century – 170 times the baseline and in the opposite direction. The warmest 12 years since records began have all occurred since 1998. The rate of carbon emissions to the atmosphere is arguably the highest in 66 million years, when the (non-avian) dinosaurs slipped off this mortal coil. The staggering loss of biodiversity in recent decades prompted researchers in 2015 to argue that the Anthropocene marks the third stage in the evolution of Earth’s biosphere, following on from the microbial stage 3.5 billion years ago and the Cambrian explosion about 540 million years ago. Pulling this together, we conclude that the rate of change of the Earth system over the last 40 to 50 years is purely a function of industrialised societies (H). In the equation, astronomical and geophysical forces tend to zero because of their slow nature or rarity, as do internal dynamics, for now. All these forces still exert pressure, but currently on orders of magnitude less than human impact. This is a bold statement. But viewed this way, arguments about humans versus natural causes disappear. In 2016, Earth experienced a massive El Niño event affecting the global climate. But this is balanced by the cooler La Niña – taken together, the net rate of change of the Earth system resulting from these is zero over a decade or so.
False sense of security
We should be concerned. For the last 2.5 million years, Earth settled into a rather unusual period of potential instability as we rocked back and forth between ice ages and intervening warm periods, or interglacials. Far from living on a deeply resilient planet, we live on a planet with hair triggers. Industrialised societies are fumbling around with the controls, lulled into a false sense of security by the deceptive stability of the Holocene, the last 11,700 years. Remarkably and accidentally, we have ejected the Earth system from the interglacial envelope and are heading into uncharted waters.
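Written out in the spirit of the description above (my rendering of the prose, not necessarily the paper's exact notation), the Anthropocene equation reads:

```latex
\frac{dE}{dt} = f(A, G, I, H)
\;\;\longrightarrow\;\;
\frac{dE}{dt} \simeq f(H), \qquad A, G, I \to 0
```

That is, over the last four to five decades the astronomical, geophysical and internal terms are negligible beside the human one.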
While the rate of change of the Earth system needs to drop to zero as soon as possible, the next few years may determine the trajectory for millennia. Yet the dominant neoliberal economic systems still assume Holocene-like boundary conditions – endless resources on an infinite planet. Instead, we need “biosphere positive” Anthropocene economics, where economic development stores carbon rather than releasing it, enhances biodiversity rather than destroying it, and purifies waters and soils rather than polluting them. While it would seem imprudent to ignore the huge body of evidence pointing to profound risks, it comes at a challenging time geopolitically, when both fact-based world views and even international cooperation are questioned. Nowhere has this been clearer than in the US in recent weeks. It is perhaps surprising that in the 1990s, Stephen Bannon, White House strategist and ideologue, was CEO of Biosphere 2, a project in Arizona to create an artificial habitat for humans, partly to inform potential space colonisation missions. The delicate balance between humans and nature in Biosphere 2 collapsed into chaos and the experiment folded in 1994. While Biosphere 1 – Earth – is in no such short-term danger, societies are. The stakes could not be higher, yet critical knowledge and action needed for stability is in danger of becoming collateral damage in today’s war on facts. Ignorance and uncertainty are no longer rational excuses for inaction. Journal reference: The Anthropocene Review, doi: 10.1177/2053019616688022
The Tunguska event was a large explosion that occurred near the Podkamennaya Tunguska River in Yeniseysk Governorate (now Krasnoyarsk Krai), Russia, on the morning of 30 June 1908. The explosion over the sparsely populated Eastern Siberian Taiga flattened an estimated 80 million trees over an area of 2,150 km2 (830 sq mi) of forest. The explosion, which had the energy of roughly 185 Hiroshima bombs, is generally attributed to the air burst of a meteoroid. Soviet expeditions to the remote site near the Podkamennaya Tunguska River highlighted a lack of debris or craters on the ground; no impact crater has ever been found. The Italian scientist Luca Gasperini, from the University of Bologna, has claimed that Lake Cheko, 5 miles from the epicentre, fills the crater, but his analysis is strongly disputed by Russian academics. The object that caused the huge explosion is thought to have disintegrated at an altitude of 3 to 6 miles (5 to 10 km) rather than to have hit the surface of the Earth. The Siberian Times reports that scientists have now offered a startling new theory for the 1908 Tunguska event. "At present, there are over 100 hypotheses about the nature of the Tunguska phenomenon," says Sergei Karpov, leading researcher at the Kirensky Physics Institute in Krasnoyarsk. "They include the fall of a small asteroid measuring several dozen metres, consisting of typical asteroid materials, either metallic or stone, as well as ice." Karpov and his colleagues argue "that the Tunguska event was caused by an iron asteroid body, which passed through the Earth's atmosphere and continued to the near-solar orbit". The study by the Russian academics, published in the Monthly Notices of the Royal Astronomical Society, postulates that the destruction on the ground was "the result of a passing space body and its shock wave, rather than a direct impact".
• The meteor passed over 3,000 kilometres (1,865 miles) of the planet's surface at a lowest altitude of 10 to 15 kilometres (6.2 to 9.3 miles), they believe.
• It travelled at an exceptional speed of 20 kilometres per second (12.4 miles per second) before exiting into outer space, shedding about half of its more than three million tonnes of mass on the way.
• Calculations showed that the shock wave could have been created by a rapid increase in the space body's evaporation as it approached the Earth's surface – for a 200-metre (656-feet) meteor that would have been 500,000 tonnes per second.
• High-temperature plasma can create effects typical of an explosion, such as a shock wave. The new study showed that these could have been caused by the high-intensity light of the space body's head, which reached over 10,000 degrees Celsius at its lowest altitude in the Earth's atmosphere.
Calculations showed that the meteor flew over the epicentre for about one second, heating the forest to the point that it ignited. If the Tunguska space object consisted of iron, that would explain why there are no iron droplets at the epicentre: "they simply couldn't reach the planet's surface because of the velocity of the space body in the atmosphere and its surface temperature, exceeding several thousand degrees Celsius." This version is supported by the fact that there are no remnants of this body, and no craters on the surface of the Earth.
Dr Karpov said the new theory "can explain optical effects associated with a strong dustiness of the high layers of the atmosphere over Europe, which caused a bright glow of the night sky." The event produced shockwaves as far away as Britain, and dust from the explosion lit up the night sky across Europe and even America in its wake. (Source and Image: The Siberian Times)
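As a back-of-the-envelope check on the flyby idea, compare the body's kinetic energy with the quoted blast energy. This assumes a Hiroshima yield of about 15 kilotons of TNT (my assumption; the article gives only the "185 bombs" figure):

```python
# How much of the meteor's kinetic energy would the Tunguska blast require?
# Assumed: Hiroshima yield ~15 kt TNT; 1 kt TNT = 4.184e12 J.
mass = 3e9            # kg (~3 million tonnes, as quoted)
speed = 20e3          # m/s (20 km/s, as quoted)
kinetic = 0.5 * mass * speed**2

blast = 185 * 15 * 4.184e12   # ~185 Hiroshima bombs, in joules
print(f"kinetic energy: {kinetic:.1e} J, blast energy: {blast:.1e} J")
print(f"fraction deposited: {blast / kinetic:.1%}")   # roughly 2%
```

On these numbers, a grazing body would only need to shed a couple of percent of its kinetic energy into the atmosphere to power the blast, consistent with an object that kept going rather than one that was destroyed.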
A month ago, astronomers found, for the first time, an asteroid that definitely originated from outside our solar system. The object, 1I/ʻOumuamua, came screaming into our solar system at 60,000 mph, took a sharp turn around the Sun, and passed within 10 million miles of Earth on Oct 18 before beginning its long journey out of our solar system and back into interstellar space. Given its highly elongated and inclined orbit, ʻOumuamua was initially classified as a comet, but follow-up observations showed no sign of a coma, and so it was re-classified as an asteroid. Its discovery has prompted a flurry of short but exciting astronomical studies, and in our research group meeting this week, we discussed two: Ye et al. (2017) and Laughlin & Batygin (2017). In their study, Ye and colleagues describe their observations of ʻOumuamua’s brightness and color. Their color observations indicated that ʻOumuamua is slightly but not very red, unlike many icy bodies in our Kuiper Belt. This result suggests it either formed close to its original central star (and never had much ice) or spent time near enough to its original parent star to have baked off any ice. They also estimated that ʻOumuamua passed very near Earth’s orbit, close enough that, if any material were ejected from its surface, it might produce a meteor shower in a few hundred years. In their study, Laughlin and Batygin took a more theoretical tack and explored possible implications of ʻOumuamua for the existence of planets like the putative Planet Nine. ʻOumuamua almost definitely originated in a distant solar system and was ejected by a gravitational interaction with a planet in that system, and Laughlin and Batygin point out that most of the known exoplanet population would probably not be very good at ejecting objects like ʻOumuamua: these planets are so small and/or close to their host stars that they cannot easily liberate asteroids like ʻOumuamua from the host stars’ gravitational clutches. But, Laughlin and Batygin suggest, if there is a sizable population of largish (several Earth masses) planets several times farther from their host stars than Earth is from the Sun, then gravitational ejections of asteroids might occur frequently enough to explain objects like ʻOumuamua. Granted, they’re dealing with a sample size of one, but several all-sky surveys, like LSST and TESS, will arrive on the scene any day now. And we may very soon find other interstellar interlopers like ʻOumuamua. The galaxy is probably full of them. In case you didn’t hear, late last year, astronomers confirmed a planet around our nearest stellar neighbor, Proxima Centauri, a red-dwarf star just four light years from Earth. The planet is probably about 30% more massive than Earth, likely making its composition Earth-like, and it’s in the habitable zone of its star, at a distance of about 0.05 astronomical units (AU) – all of which make it an exciting prospect for follow-up studies. And just last week, Guillem Anglada and colleagues announced the further discovery of a debris disk around the star. The left figure up top shows the image, in radio wavelengths, of emission from the disk – the disk appears as the rainbow blob near the center, and the location of the host star Proxima is marked with a black cross. The disk appears to orbit between 1 and 4 AU from its host star, which would put it between the orbits of Earth and Jupiter if it were in our solar system.
However, since the red-dwarf star is so much smaller and cooler than our Sun, those orbital distances correspond to temperatures of only a few tens of kelvins, making Proxima’s disk more akin to our Kuiper belt than our main asteroid belt. The radio light we see from the disk is mostly due to thermal emission from dust. Using the above temperature estimate (and some other reasonable assumptions), Anglada and colleagues estimate (with large uncertainties) that Proxima’s disk has about one thirtieth the mass of Ceres in dust and a lunar mass in larger bodies – almost as much mass as our Kuiper belt. There’s also marginal evidence in the data for a larger and cooler disk as well, perhaps 30 times farther from the star than the inner disk, and for something perhaps even more interesting. In the right figure above, see the greenish blob just below and to the left of the rainbow blob? That (admittedly weak) signal could be emission from a ring system orbiting a roughly Saturn-mass planet about 1.6 AU distant from the star. The authors point out that there’s a small but non-zero chance that it’s actually just a background galaxy that photobombed their observations, a possibility that can be easily tested by looking at Proxima again in a few months. But if it turns out to be a ringed planet, it would be the first exo-ring system directly imaged (other systems show possible signs of rings). That would make Proxima an even more unusual planetary system, since small stars tend to have small planets, and I’m only familiar with one other red dwarf that hosts a big planet – NGTS-1, which hosts the hot Jupiter NGTS-1 b. But if there’s one thing that exoplanet astronomy has taught us in the last few decades, it’s to expect the unexpected. The diagram accompanying the paper shows the structure of the Proxima Centauri system suggested by Anglada and colleagues.
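As a rough check on those temperatures, dust in radiative equilibrium follows T ≈ 278 K · (L/L☉)^(1/4) / √(d/AU). A minimal sketch, assuming a luminosity of about 0.0017 L☉ for Proxima (an assumed value; the paper's own numbers may differ slightly):

```python
# Blackbody equilibrium temperature for dust around Proxima Centauri.
# The stellar luminosity below is an assumed value for illustration.
def t_equilibrium(l_star_lsun, d_au):
    """T ~ 278 K * L^(1/4) / sqrt(d), with L in solar units and d in AU."""
    return 278.0 * l_star_lsun**0.25 / d_au**0.5

for d in (1.0, 2.5, 4.0):    # span of the inner disk, in AU
    print(f"{d:.1f} AU: {t_equilibrium(0.0017, d):.0f} K")
# -> roughly 56, 36, and 28 K: "a few tens of kelvins," as stated
```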
I've written before about a serious problem looming for planetary exploration: the aging infrastructure of NASA's Deep Space Network (DSN). It is through the giant radio dishes of the DSN -- 34 or even 70 meters across -- located in California, Spain, and Australia that we send orders to our distant spacecraft, and receive the volumes of data that they return to Earth. Missions to close destinations like the Moon don't need the DSN; Lunar Reconnaissance Orbiter, for instance, sends its Terabytes of data through a dedicated 18-meter-diameter antenna in New Mexico. But everything that travels beyond Earth orbit has to compete for precious time on those great DSN antennas.
DSS-43, the 70-meter antenna at Canberra
DSS-43 is the largest steerable antenna in the southern hemisphere. Originally built to a diameter of 64 meters in 1973, it was expanded to 70 meters in 1987. It can transmit signals in the X and S radio bands and receive signals in the X, S, L, K, and Ku bands. It can operate safely in winds up to 72 kilometers per hour, and is built to survive winds of up to 160 kilometers per hour. And those antennas are getting old. The greatest of them, the 70-meter dishes, are around 40 years old. DSS-14 in Goldstone was built in 1966; DSS-43, in Canberra, in 1973; and DSS-63 in Madrid in 1974. The 70-meter dishes are unique assets; when one of them is taken offline for maintenance, it leaves the most distant missions high and dry for some part of the day. And even if they were in perfect condition, they are becoming obsolete. They communicate with spacecraft only in the longer-wavelength X and S radio bands and cannot be upgraded to the shorter-wavelength Ka radio band that is planned for use on future deep-space missions in order to multiply the amount of data that they can return to Earth by more than a factor of ten over previous deep-space missions. So I was very happy to see today's press release from NASA, announcing that they were breaking ground on three, count them, three new 34-meter-diameter "beam wave guide" dishes at the DSN station in Canberra, Australia, which will be capable of operating in the Ka band. The "beam wave guide" part refers to five mirrors that bounce the radio signals from the dish down to a below-ground electronics room. So when these things need maintenance, the maintenance is performed inside a climate-controlled, below-ground room rather than in the open air high up on an enormous dish -- something that will make maintenance and upgrading faster, easier, and cheaper. The 34-meter antennas can be used in concert, as an array, to substitute for a 70-meter antenna; Cassini already does some of its communications using arrayed 34-meter antennas. Construction of the three new antennas is expected to be complete in 2018. It's not just that Canberra has the fewest 34-meter antennas. Look at that little graph on the second slide: it shows you where the outer planets appear on the sky through the rest of this decade. Everything is south of the equator, so southernmost Canberra is going to be the one in the best position to communicate with them. Just Cassini and New Horizons can probably eat up most of Canberra's available capability. Infrastructure upgrades are never sexy projects; it's like replacing a highway bridge instead of building a new sports stadium.
But the DSN antennae are our bridges to our robotic spacecraft; it would be all too easy to take the DSN for granted until we wake up one morning to discover that a catastrophic failure has rendered us unable to get hard-won data back from space. I am sure that today's announcement covers just one line item from a whole laundry list of upgrades that are needed at the three DSN stations. I'm not exactly sure how to advocate for better support of the DSN, except by writing about it here. To all the folks who keep those giant dishes running, a hearty thanks! Without you we'd never be able to see the distant wonders of our solar system.
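A back-of-the-envelope illustration of the arraying mentioned above: collecting area scales with the square of dish diameter, so you can estimate how many 34-meter dishes it takes to stand in for a 70-meter one. (Real arrays also pay some signal-combining losses, which this sketch ignores.)

```python
# Relative collecting area of arrayed 34 m dishes vs. one 70 m dish.
import math

def dish_area(diameter_m):
    """Geometric collecting area of a circular dish."""
    return math.pi * (diameter_m / 2) ** 2

print(f"34 m dishes needed to match a 70 m dish: "
      f"{dish_area(70) / dish_area(34):.1f}")
# -> about 4.2, so the three new Canberra antennas plus an existing
#    34 m dish approach (but don't quite match) a 70 m aperture.
```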
Missing the planets this month? With Mars receding slowly to the west behind the Sun at dusk, the early evening sky is nearly devoid of planetary action in the month of November 2014. Stay up until about midnight local, however, and brilliant Jupiter can be seen rising to the east. Well placed for northern hemisphere viewers in the constellation Leo, Jupiter is about to become a common fixture in the late evening sky as it heads towards opposition next year in early February. An interesting phenomenon also reaches its climax, as we make the first of a series of passes through the ring plane of Jupiter’s moons this week on November 8th, 2014. This means that we’re currently in a season where Jupiter’s major moons not only pass in front of each other, but actually eclipse and occult one another on occasion as they cast their shadows out across space. These types of events are subtle and tough to see, owing to the relatively tiny size of Jupiter’s moons. Followers of the giant planet are familiar with the ballet performed by the four large Jovian moons of Io, Europa, Ganymede, and Callisto. This was one of the first things that Galileo documented when he turned his crude telescope towards Jupiter in early 1610. The shadows the moons cast back on the Jovian cloud tops are a familiar sight, easily visible in a small telescope. Errors in the predictions for such passages provided 17th century Danish astronomer Ole Rømer with a way to measure the speed of light, and handy predictions of the phenomena for Jupiter’s moons can be found here. Mutual occultations and eclipses of the Jovian moons are much tougher to see. The moons range in size from 3,121 km (Europa) to 5,262 km (Ganymede), which translates to 0.8”-1.7” in apparent diameter as seen from the Earth. This means that the moons only look like tiny +6th magnitude stars even at high magnification, though sophisticated webcam imagers such as Michael Phillips and Christopher Go have managed to actually capture disks and tease out detail on the tiny moons. What is most apparent during these mutual events is a slow but steady drop in combined magnitude, akin to that of an eclipsing variable star such as Algol; a worked example of estimating this dip follows below. Running video, Australian astronomer David Herald has managed to document this drop during the 2009 season (see the video above) and produce an effective light curve using LiMovie. Such events occur as we cross through the orbital planes of Jupiter’s moons. The paths of the moons do not stray more than one-half of a degree in inclination from Jupiter’s equatorial plane, which itself is tilted 3.1 degrees relative to the giant planet’s orbit. Finally, Jupiter’s orbit is tilted 1.3 degrees relative to the ecliptic. Plane crossings as seen from the Earth occur once every 5-6 years, with the last series transpiring in 2009, and the next set due to begin around 2020. Incidentally, the slight tilt described above also means that the outermost moon Callisto is the only moon that can ‘miss’ Jupiter’s shadow on in-between years. Callisto begins to do so once again in July 2016. Mutual events for the four Galilean moons come in six different flavors, depending on whether one moon occults or eclipses another and on how complete the event is. This month, Jupiter reaches western quadrature on November 14th, meaning that Jupiter and its moons sit 90 degrees from the Sun and cast their shadows far off to the side as seen from the Earth. This margin slims as the world heads towards opposition on February 6th, 2015, and Jupiter once again joins the evening lineup of planets.
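Estimating the expected dip in a mutual event is a nice exercise in magnitude arithmetic: add the moons' fluxes, then remove the hidden one. A small sketch with illustrative magnitudes (the +5.0 and +5.6 values are assumptions for the example, not predictions for a specific event):

```python
# Combined magnitude of two Jovian moons, and the dip when one is
# fully occulted. Input magnitudes are illustrative assumptions.
import math

def combined_mag(m1, m2):
    """Add the fluxes of two sources, convert back to a magnitude."""
    flux = 10 ** (-0.4 * m1) + 10 ** (-0.4 * m2)
    return -2.5 * math.log10(flux)

m_io, m_europa = 5.0, 5.6
before = combined_mag(m_io, m_europa)
after = m_io                      # Europa fully hidden behind Io
print(f"dip: {after - before:.2f} mag")   # ~0.5 magnitude, Algol-like
```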
Early November sees Jupiter rising around 1:00 AM local, about six hours prior to sunrise. Jupiter is also currently well placed for northern hemisphere viewers, crossing the constellation Leo. The Institut de Mécanique Céleste et de Calcul des Éphémérides (IMCCE), based in France, maintains an extensive page following the science and the circumstances for the previous 2009 campaign and the ongoing 2015 season. We also distilled down a table of key events for North America coming up through November and December. Fun fact: we also discovered during our research for this piece that these events can produce a total solar eclipse very similar to the near-perfect circumstances enjoyed on the Earth via our Moon. Note that this season also produces another triple shadow transit on January 24th, 2015. Observing and recording these fascinating events is as simple as running video at key times. If you’ve imaged Jupiter and its moons via our handy homemade webcam method, you also possess the means to capture and analyze the eclipses and occultations of Jupiter’s moons. Good luck, and let us know of your tales of astronomical tribulation and triumph!
An artist’s impression of the view from Proxima Centauri b, a newly discovered Earth-sized planet just four light-years away. It is unclear whether there is intelligent life in the universe, but astronomers are finding more and more Earth-like planets in the habitable zones of their respective stars. (Credit: NASA) Only a cosmic hop, skip and a jump away, an Earth-size planet orbits the star nearest to our sun, Proxima Centauri. Since the discovery of the exoplanet, known as Proxima Centauri b, in 2016, people have wondered whether it would be capable of sustaining life. Now, using computer models similar to those used to study climate change on Earth, researchers have found that under a wide range of conditions, Proxima Centauri b can sustain huge areas of liquid water on its surface, which may improve the prospects for its harboring living organisms. [9 Strange, Scientific Excuses for Why Humans Haven’t Found Aliens Yet] “The main message of our simulations is that there’s a decent chance that the planet would be habitable,” said Anthony Del Genio, a planetary scientist at the NASA Goddard Institute for Space Studies in New York City. Del Genio is also the lead author of a paper describing the new research, which was published Sept. 5 in the journal Astrobiology. Proxima Centauri is a small, cool red dwarf star only 4.2 light-years from the sun. Despite its proximity, scientists still know very little about Proxima Centauri’s planetary companion, other than that its mass is at least 1.3 times that of Earth and that it orbits its star every 11 days. Therefore, Del Genio and his colleagues had to make reasonable assumptions about the exoplanet Proxima Centauri b — namely, that it has an atmosphere and an ocean on its surface — for their work. Proxima Centauri b orbits in its star’s habitable zone, which means that it is at just the right distance to receive enough starlight to keep its surface above the freezing temperature of water. But this zone is very close to the star, Space.com, a Live Science sister site, reported. It is thus likely that the planet has become tidally locked due to gravitational forces. This means that the same side of Proxima Centauri b always faces its parent star, just as the moon always shows the same side to the Earth. Previous simulations, published in a 2016 paper in the journal Astronomy & Astrophysics, modeled a hypothetical atmosphere on Proxima Centauri b and suggested that the star-facing hemisphere of the exoplanet could bake under the intense glare while the far side stayed frozen. Therefore, only a circle of warm sea could exist on Proxima Centauri b — a scenario Del Genio’s team calls “eyeball Earth.” But the new simulations were more extensive than the previous ones; they also included a dynamic, circulating ocean, which was capable of transferring heat from one side of the exoplanet to the other very effectively. In some of the runs, the researchers also showed that the motions of the atmosphere and the ocean combined, so that “although the night side never sees starlight, there is a band of liquid water that is sustained around the equatorial region,” Del Genio told Live Science. He likened this spreading of heat to our own planet’s maritime climates. The American East Coast is balmier than it would otherwise be, he said, because the Gulf Stream carries warm water up from the tropics. In California, by contrast, ocean currents carry cold water down from the north, and the West Coast is colder than it would otherwise be, Del Genio added.
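A quick check on that habitable-zone claim: the stellar flux at the planet, relative to Earth's insolation, is (L/L☉)/(a/AU)². A minimal sketch, assuming a luminosity of about 0.0017 L☉ and an orbital distance of about 0.049 AU (both assumed values; the article quotes only the 11-day orbit):

```python
# Stellar flux at Proxima b relative to Earth's insolation.
# Luminosity and orbital distance are assumed values for illustration.
l_star = 0.0017      # Proxima Centauri's luminosity in solar units
a_orbit = 0.049      # orbital distance in AU

s_rel = l_star / a_orbit**2
print(f"flux relative to Earth: {s_rel:.2f}")   # ~0.7, inside the habitable zone
```

Receiving roughly 70 percent of Earth's sunlight puts the planet comfortably in the regime where surface liquid water is plausible, which is the starting point for the climate simulations described here.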
The team ran 18 separate simulation scenarios in total, looking at the effects of large continents, thin atmospheres, different atmospheric compositions, and even changes in the amount of salt in the ocean. In almost all of the models, Proxima Centauri b ended up with open ocean that persisted over at least part of its surface. “The larger the fraction of the planet with liquid water, the better the chances are that if there is life there, we can find evidence of that life with future telescopes,” Del Genio said. Ravi Kopparapu, a geoscientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, who was not involved in the study, agreed. “I find it exciting that some of these climate outcomes can be observed,” Kopparapu told Live Science. Next-generation facilities like the Extremely Large Telescope, under construction in Chile, should be able to detect the heat coming from Proxima Centauri b and differentiate among possible surface conditions, he added. Originally published on Live Science.
On September 12, 1959, the Soviet space probe Luna 2 was launched. It was the first spacecraft to reach the surface of the Moon and the first man-made object to land on another celestial body. On September 14, 1959, it successfully impacted the lunar surface east of Mare Imbrium, near the craters Aristides, Archimedes, and Autolycus. Luna 1, launched earlier in 1959 as part of the Luna program, had also been intended as an impactor. But, due to an incorrectly timed upper stage burn during its launch, it missed the Moon. However, while traveling through the outer Van Allen radiation belt, the spacecraft’s scintillator made observations indicating that a small number of high-energy particles exist in the outer belt. The measurements obtained during this mission provided new data on the Earth’s radiation belt and outer space. The Moon was found to have no detectable magnetic field. The first ever direct observations and measurements of the solar wind, a strong flow of ionized plasma emanating from the Sun and streaming through interplanetary space, were performed. The spacecraft also marked the first instance of radio communication at a distance of half a million kilometers. Luna 2, also known as Lunik 2, was similar in design to Luna 1, a spherical spacecraft with protruding antennas and instrument parts. The instrumentation was also similar, including scintillation counters, Geiger counters, a magnetometer, Cherenkov detectors, and micrometeorite detectors. The spacecraft also carried Soviet pennants. Two of them, located in the spacecraft, were sphere-shaped, with the surface covered by identical pentagonal elements. In the center of each sphere was an explosive charge intended to counteract the huge impact velocity – a very simple way to provide the last necessary delta-v so that the elements on the retro side of the sphere would not be vaporized. The spacecraft’s launch was originally scheduled for September 9, but the Blok I core stage failed to reach full thrust at ignition and was shut down. In the end, the flight was delayed by three days because the booster had to be removed from the pad and replaced by a different vehicle. The spacecraft took a direct path to the Moon, with a journey time of around 36 hours, and hit the Moon about 800 kilometers from the centre of the visible disk on September 13, 1959, at around 9 pm UT (just after midnight on September 14, Moscow time). At yovisto, you may be interested in an American news report on Luna 2. References and Further Reading: Related Articles in the Blog:
- The First American to walk in Space – Edward White
- The Deorbit of Russian Space Station MIR
- The Russian Space Shuttle
- Juri Gagarin – the first Man in Space
0.832948
3.619651
June 19, 2019 Leading Planetary Scientist Joins UA TUCSON, Ariz. — Amy Mainzer, one of the world's leading scientists in asteroid detection and planetary defense, will join the University of Arizona Lunar and Planetary Laboratory as a professor of planetary sciences this fall. Mainzer comes to the UA from the Science Division at NASA's Jet Propulsion Laboratory, where she has worked as a senior research scientist specializing in astrophysical instrumentation and infrared astronomy. "We are the only university in the world currently leading a NASA sample return mission to an asteroid, and Amy is among the top researchers on the study of asteroids," said University of Arizona President Robert C. Robbins. "Her expertise complements ours. I am very excited that she is joining our team at the Lunar and Planetary Laboratory." As principal investigator of NASA's Near-Earth Object Wide-field Infrared Survey Explorer mission, or NEOWISE, Mainzer has overseen the largest space-based asteroid-hunting project in history, resulting in the detection and characterization of an unprecedented number of asteroids and comets, including objects that could potentially pose a hazard to Earth at some point in the future. Mainzer also is the principal investigator of the proposed NASA Near-Earth Object Camera, or NEOCam, a next generation space telescope that would use a similar scientific approach to fulfill a mandate from the U.S. Congress to discover nearly all of the space rocks that could pose a significant threat to Earth. "Already a leader in this space, the UA is continuing to raise its profile as a Research 1 university in lunar and planetary sciences with the addition of Amy," said UA Interim Provost Jeff Goldberg. Mainzer pointed to the long-standing partnership between JPL and the UA on space missions since the days of the Voyager spacecraft in the late 1970s. "The UA has a strong track record of delivering missions on time and under budget, with OSIRIS-REx being just the latest example," Mainzer said. "We had PI-led instruments on JPL missions including Cassini, Mars Pathfinder, Galileo and the Phoenix lander, and we also have a long history of being leaders in ground-based planetary defense, with the Catalina Sky Survey and Spacewatch project discovering about half of all known near-Earth asteroids." NEOWISE has delivered physical data on an enormous number of minor planets, and efforts are underway to mine even more out of the dataset. To date, the project has resulted in the detection of more than 193,000 asteroids in infrared wavelengths – more than any other project in history – including more than 1,500 near-Earth objects, or NEOs. It has also discovered more than 30,000 new asteroids, including 285 NEOs and 28 comets. So far, NEOWISE data have been used to determine the numbers, sizes and orbital elements of NEOs, including potentially hazardous asteroids, as well as reveal the properties of other space rocks and comets. Mainzer holds a doctorate from the University of California, Los Angeles and a Master of Science degree from the California Institute of Technology. She graduated with honors from Stanford University with a Bachelor of Science. Prior to joining JPL in 2003, she worked as an engineer at Lockheed Martin, where she built the fine guidance camera for NASA’s Spitzer Space Telescope. Passionate about making science accessible to all, Mainzer serves as the curriculum adviser and on-camera host for the PBS Kids series "Ready Jet Go!" 
– a television show aimed at teaching space and Earth science to children ages 3-8 that airs in 176 countries around the world with nearly 300 million views. Mainzer also has appeared in numerous interviews for the History Channel, National Geographic, Discovery Channel, the BBC and other networks. In 2018, she received the NASA Exceptional Public Service medal for her work on near-Earth asteroids. Other awards include the NASA Exceptional Scientific Achievement Medal (2012), the NASA Exceptional Achievement Medal (2011), and several NASA group achievement awards for her contributions to the Spitzer, WISE and NEOWISE missions. The University of Arizona, a land-grant university with two independently accredited medical schools, is one of the nation's top 50 public universities, according to U.S. News & World Report. Established in 1885, the UA is widely recognized as a student-centric university and has been designated as a Hispanic Serving Institution by the U.S. Department of Education. The UA ranked in the top 25 in 2018 in research expenditures among all public universities, according to the National Science Foundation, and is a leading Research 1 institution with $687 million in annual research expenditures. The UA advances the frontiers of interdisciplinary scholarship and entrepreneurial partnerships as a member of the Association of American Universities, the 62 leading public and private research universities in the U.S. It benefits the state with an estimated economic impact of $4.1 billion annually.
0.918259
3.016683
Deep beneath the Antarctic ice sheet, sensors buried in a billion tons of ice—a cubic kilometer of frozen H2O—are searching for neutrinos. Not just any kind of neutrino, though. The IceCube South Pole Neutrino Observatory wants to discover the sources of ultrahigh-energy cosmic rays and thus solve one of science's oldest mysteries. Just one problem. These kinds of neutrinos are really difficult to detect. Drilled into the ice and left to freeze at depths between 1,450m and 2,450m beneath the surface of the South Pole, IceCube's sensors collect terabytes of raw data every day. But how does that data get processed and analyzed? As IceCube researcher Nathan Whitehorn explained, it isn't easy. "We collect...one neutrino from the atmosphere every ~ 10 minutes that we sort of care about, and one neutrino per month that comes from an astrophysical source that we care about a very great deal," Whitehorn wrote in an email. "Each particle interaction takes about 4 microseconds, so we have to sift through data to find the 50 microseconds a year of data we actually care about." "We have to sift through data to find the 50 microseconds a year of data we actually care about." Because IceCube can't see satellites in geosynchronous orbit from the pole, internet coverage only lasts for six hours a day, Whitehorn explained. The raw data is stored on tape at the pole, and a 400-core cluster makes a first pass at the data to cut it down to around 100GB/day. During the internet window, IceCube sends that 100GB/day via NASA's TDRS satellite system to the University of Wisconsin, Madison. "South Pole systems also try to monitor autonomously for really interesting things and send satellite phone SMS messages if they think they've got something," Whitehorn said. Cosmic rays were first discovered in 1912 by Austrian physicist Victor Hess, whose ascent in a hot-air balloon during a solar eclipse that year proved their existence. Scientists have been working to better understand Hess's discovery ever since. Cosmic rays bounce around a lot in space, so they don't point back to their sources. That's part of the reason their true origin has remained a mystery for so long. But their sources also produce high-energy neutrinos which, as it happens, fly straight. "Nothing else besides these cosmic ray interactions can produce these kinds of high-energy neutrinos, so detecting a neutrino source is a totally unambiguous detection of a cosmic ray source," Whitehorn wrote. The catch is that neutrinos are extremely difficult to detect. The IceCube data sets arrive at UW-Madison, where they are processed further, increasing the size by roughly a factor of three. "If the filtered data from the Pole amounts to ~36TB/year [this number was so incredible we had to double check it was not a typo -Ed.], the processed data amounts to near 100TB/year," Gonzalo Merino, the IceCube computing facilities manager at UW-Madison, wrote in an email. This data gets stored at UW-Madison, Merino wrote, and "all the data taken since the start of the detector construction is kept on disk so that it can be all analyzed in one go." In total, the IceCube project is storing around 3.5 petabytes (that's around 3.5 million gigabytes, give or take) in the UW-Madison data center as of this writing. A 4000-CPU dedicated local cluster crunches the numbers. Their storage system has to handle typical loads of "1-5GB/sec of sustained transfer levels, with thousands of connections in parallel," Merino explained.
"Keeping this data storage and data access services running with high performance and high availability I would say are one of our main challenges in the offline IceCube computing facilities," he added. Because the IceCube data is unique and irreplaceable, the project focuses not just on performance but also ensuring the integrity of the data in the long term. What if someone comes along in twenty years with a great idea no one's thought of yet? So the entire data set is stored in multi-petabyte off-site tape backup storage facilities at two different locations around the world. As for scientists? The hunt for the source of cosmic rays continues. "There are still very few detections, so we are still crossing out possibilities for cosmic-ray acceleration rather than confirming them, but this is nonetheless the first direct view of the accelerators that anyone has ever had," Whitehorn said. "And more neutrinos show up every month."
0.877576
4.014553
From: Canada-France-Hawaii Telescope Posted: Monday, November 27, 2006 Using the ESPaDOnS spectropolarimeter installed on the Canada-France-Hawaii telescope (Mauna Kea, Hawaii), an international team of researchers, led by two French astronomers (C. Catala, LESIA, Observatoire de Paris, and J.F. Donati, LATT, Observatoire Midi-Pyrénées), has just discovered a magnetic field on tau Bootis, a star orbited by a giant planet on a close-in orbit: the first ever detection of this kind! Up to now, only indirect clues pointed to the presence of magnetic fields on stars hosting giant extra-solar planets. This result opens major prospects, in particular the study of the interaction between the planet and the magnetosphere of its star. This discovery is published in a Letter to the journal MNRAS (Monthly Notices of the Royal Astronomical Society). The catalogue of extrasolar planets is growing continuously, containing today more than 200 objects, and the detection of these exoplanets has almost become routine. But what are the characteristics of the stellar hosts? How can we explain the formation of these planetary systems? And why have some of these giant exoplanets, the so-called 'hot Jupiters', migrated down to very close-in orbits? Astrophysicists suspect the magnetic field plays a crucial role in some of these questions. However, although indirect effects of magnetic fields have already been detected on stars hosting giant extrasolar planets, no direct measurement had ever been made until now. This first measurement of a magnetic field in a planet-hosting star has been obtained by an international team of astronomers with the ESPaDOnS spectropolarimeter installed on the Canada-France-Hawaii telescope. They detected the magnetic field of tau Bootis, a one-billion-year-old star with a mass of one and a half solar masses, located nearly 50 light-years from Earth. This cool and weakly active star, orbited by a giant planet of 4.4 Jupiter masses on a very close-in orbit at 0.049 AU (i.e. 5% of the Sun-Earth distance), possesses a magnetic field of a few gauss, just a little stronger than the Sun's, but with a more complex structure. Moreover, the astronomers have also measured the level of differential rotation of the star, a crucial parameter in the generation of magnetic fields. In the present case, the matter located at the equator rotates 18% faster than that located at the poles, lapping it by one full turn approximately every 15 days. By comparing the differential rotation of the star with the revolution of the giant extrasolar planet, the astronomers noticed that the planet is synchronized with stellar material located at about 45 degrees of latitude. This observation suggests very complex interactions between the magnetosphere of the star and its companion, perhaps similar to the interaction of the magnetosphere of Jupiter with its satellite Io, which gives rise to the so-called "Io torus". The data collected for this study are not sufficient to describe these interactions precisely, but this first measurement opens new prospects for detailed studies of star-planet systems. ESPaDOnS is a collaborative project funded by France (CNRS/INSU, Ministère de la Recherche, LATT - Observatoire Midi Pyrénées, Laboratoire d'Etudes Spatiales et d'Instrumentation en Astrophysique - Observatoire de Paris), Canada (NSERC), CFHT, and ESA (ESTEC/RSSD).
CFHT is operated by the National Research Council of Canada, the Institut National des Sciences de l’Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii. The magnetic field of the planet-hosting star tau Bootis C. Catala, J-F. Donati, E. Shkolnik, D. Bohlender, E. Alecian 2006, MNRAS, in press IMAGES AND TEXT are available at: http://www.cfht.hawaii.edu/News/TauBoo/ SCIENCE TEAM CONTACTS: Claude Catala, LESIA +33 1 45 07 78 75 Evelyne Alecian, LESIA +33 1 45 07 77 56 FOR MORE INFORMATION ON ESPaDOnS AT CFHT Nadine Manset, CFHT - [email protected] – 1-808 885 7944 Christian Veillet, CFHT - [email protected] – 1-808 938 3905 Press release written by Cyrille Baudouin, with the support of SF2A (Société Française d'Astronomie et d'Astrophysique). // end //
0.890946
3.961733
Brett Gladman (CITA) Continuing the established tradition in the field of speculative “fairy tales”, we postulate that our Solar System once had a set of several additional Earth-scale planets interior to the orbit of Venus. This would resolve a known issue that the energy and angular momentum of our inner-planet system is best explained by accreting the current terrestrial planets from a disk limited to 0.7-1.1 AU; in our picture the disk material closer to the Sun also formed planets, but they have since been destroyed. By studying the orbital stability of systems like the known Kepler systems, Volk and Gladman (companion abstract) demonstrate that orbital excitation and collisional destruction could be confined to just the inner parts of the system. In this scenario, our Mercury is the final remnant of the inner system’s destruction via a violent multi-collision (and/or hit-and-run disruption) process. This would provide a natural explanation for Mercury’s unusually high eccentricity and orbital inclination; it also fits into the general picture of long-timescale secular orbital instability, with Mercury’s current orbit being unstable on 5 Gyr time scales. The common decade spacing of instability time scales raises the intriguing possibility that this destruction occurred roughly 0.6 Gyr after the formation of our Solar System and that the lunar cataclysm is a preserved record of this apocalyptic event that began when slow secular chaos generated orbital instability in our former super-Earth system. - inner edge of terrestrial planet zone - Mercury is weird. - Why don’t we have a STIP (system of tightly-packed inner planets)? - surfing the edge of secular chaos - not clear how it got to $e^2 + i^2 \sim (0.25)^2$ (checked in the sketch below) - tough to strip mantle without it quickly falling right back - Asphaug & Reufer (2014): Mercury is the end state of a sequence of collisions. - Why is there an inner edge? - Wetherill 1978 (Protostars & Planets): E and L of terrestrial planets require an inner edge ~0.6 AU. - Historical way out: it’s too hot. - But modern studies indicate $T < 1500$K until much later. - If there is (collision) debris, where does it go? - radiation pressure: days - PR drag: kyr - meteoritic transfer: kyr-Myr - planetary interactions: ~10 Myr - $\rightarrow$ disappears quickly - if self-collisional, it will still disappear quickly - Secular architecture rearrangement - pump up to large $e$ - fast collisions (~50 km/s) - vapor production - “bullet factory” — erosion of remnants - Meng et al. 2014 (Science) - spike of hot dust around young star - decay ~1 yr
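As a quick check on the $e^2 + i^2 \sim (0.25)^2$ figure flagged in the notes, here is a one-liner in Python using Mercury's present-day orbital elements (e = 0.2056, i = 7.0°, with the inclination taken in radians); these are standard values, not numbers from the abstract.

import math

e = 0.2056             # Mercury's orbital eccentricity
i = math.radians(7.0)  # Mercury's orbital inclination, in radians

print(f"sqrt(e^2 + i^2) = {math.hypot(e, i):.3f}")  # ~0.239, i.e. roughly 0.25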
0.894001
3.904473
Jupiter got hit again...a big one...! Jul 21, 2009 0:01:21 GMT -6 Post by Chicago Astronomer Joe on Jul 21, 2009 0:01:21 GMT -6 New NASA Images Indicate Object Hits Jupiter Something BIG smashed into Jupiter recently...! Scientists have found evidence that another object has bombarded Jupiter, exactly 15 years after the first impacts by the comet Shoemaker-Levy 9. Following up on a tip by an amateur astronomer, Anthony Wesley of Australia, that a new dark "scar" had suddenly appeared on Jupiter, this morning between 3 and 9 a.m. PDT (6 a.m. and noon EDT) scientists at NASA's Jet Propulsion Laboratory in Pasadena, Calif., using NASA's Infrared Telescope Facility at the summit of Mauna Kea, Hawaii, gathered evidence indicating an impact. New infrared images show the likely impact point was near the south polar region, with a visibly dark "scar" and bright upwelling particles in the upper atmosphere detected in near-infrared wavelengths, and a warming of the upper troposphere with possible extra emission from ammonia gas detected at mid-infrared wavelengths. "We were extremely lucky to be seeing Jupiter at exactly the right time, the right hour, the right side of Jupiter to witness the event. We couldn't have planned it better," said Glenn Orton, a scientist at JPL. Orton and his team of astronomers kicked into gear early in the morning and haven't stopped tracking the planet. They are downloading data now and are working to get additional observing time on this and other telescopes. This image was taken at 1.65 microns, a wavelength sensitive to sunlight reflected from high in Jupiter's atmosphere, and it shows both the bright center of the scar (bottom left) and the debris to its northwest (upper left). "It could be the impact of a comet, but we don't know for sure yet," said Orton. "It's been a whirlwind of a day, and this on the anniversary of the Shoemaker-Levy 9 and Apollo anniversaries is amazing." Shoemaker-Levy 9 was a comet that had been seen to break into many pieces before the pieces hit Jupiter in 1994. Leigh Fletcher, a NASA postdoctoral fellow at JPL who worked with Orton during these latest observations said, "Given the rarity of these events, it's extremely exciting to be involved in these observations. These are the most exciting observations I've seen in my five years of observing the outer planets!" Amateur astronomer Anthony Wesley from Canberra, Australia captured an image of Jupiter on July 19 showing a possible new impact site. The observations were made possible in large measure by the extraordinary efforts of the Infrared Telescope Facility staff, including telescope operator William Golisch, who adroitly moved three instruments in and out of the field during the short time the scar was visible on the planet, providing the wide wavelength coverage. I recall the last time Jupiter got smacked. Comet Shoemaker-Levy 9 hit it from behind, and it wasn't until the impact site on Jupiter rotated toward Earth that we got a good look. They even broadcast the event live on PBS. I could even view it in my modest telescopes. (Two summers ago, I observed Jupiter, as I normally do...and noticed that one of the equatorial bands on the planet...was gone. I thought I was seeing things and asked my astro pal to look...and he said it was gone. Later, in about two days, astro sites were also recording the absence of the band.) Some unknown object kicking Oort Cloud debris inward toward the Sun...? A lone undiscovered comet...? An asteroid...? 2012...?
I don't know, but Jupiter does a good job in cleaning up debris.....
0.846985
3.403329
Data from ESA’s XMM-Newton X-ray observatory has revealed how supermassive black holes shape their host galaxies with powerful winds that sweep away interstellar matter. In a new study, scientists analysed eight years of XMM-Newton observations of the black hole at the core of an active galaxy known as PG 1114+445, showing how ultrafast winds—outflows of gas emitted from the accretion disk very close to the black hole—interact with the interstellar matter in central parts of the galaxy. These outflows have been spotted before but the new study clearly identifies, for the first time, three phases of their interaction with the host galaxy. “These winds might explain some surprising correlations that scientists have known about for years but couldn’t explain,” said lead author Roberto Serafinelli of the National Institute of Astrophysics in Milan, Italy, who conducted most of the work as part of his Ph.D. at the University of Rome Tor Vergata. “For example, we see a correlation between the masses of supermassive black holes and the velocity dispersion of stars in the inner parts of their host galaxies. But there is no way this could be due to the gravitational effect of the black hole. Our study for the first time shows how these black hole winds impact the galaxy on a larger scale, possibly providing the missing link.” Astronomers have previously detected two types of outflows in the X-ray spectra emitted by the active galactic nuclei, the dense central regions of galaxies known to contain supermassive black holes. The so-called ultra-fast outflows (UFOs), made of highly ionised gas, travel at speeds up to 40 percent of the speed of light and are observable in the vicinity of the central black hole. Slower outflows, referred to as warm absorbers, travel at much lower speeds of hundreds of km/s and have similar physical characteristics—such as particle density and ionisation—to the surrounding interstellar matter. These slower outflows are more likely to be detected at greater distances from the galaxy centres. In the new study, the scientists describe a third type of outflow that combines characteristics of the previous two: the speed of a UFO and the physical properties of a warm absorber. “We believe that this is the point when the UFO touches the interstellar matter and sweeps it away like a snowplough,” said Serafinelli. “We call this an ‘entrained ultra-fast outflow’ because the UFO at this stage is penetrating the interstellar matter. It’s similar to wind pushing boats in the sea.” This entraining happens at distances of tens to hundreds of light-years from the black hole. The UFO gradually pushes the interstellar matter away from the central parts of the galaxy, clearing it from gas and slowing down the accretion of matter around the supermassive black hole. While models have predicted this type of interaction before, the current study is the first to present actual observations of the three phases. “In the XMM-Newton data, we can see material at larger distances from the centre of the galaxy that hasn’t been disturbed yet by the inner UFO,” said co-author Francesco Tombesi of University of Rome Tor Vergata and NASA’s Goddard Space Flight Center. “We can also see clouds closer to the black hole, near the core of the galaxy, where the UFO has started interacting with the interstellar matter.” This first interaction happens many years after the UFO has left the black hole.
But the energy of the UFO enables the relatively small black hole to impact material far beyond the reach of its gravitational force. According to the scientists, supermassive black holes transfer their energy into the surrounding environment through these outflows and gradually clear the central regions of the galaxy from gas, which could then halt star formation. In fact, galaxies today produce stars far less frequently than they used to in the early stages of their evolution. “This is the sixth time these outflows have been detected,” said Serafinelli. “It’s all very new science. These phases of the outflow have previously been observed separately but the connection between them wasn’t clear up until now.” XMM-Newton’s unprecedented energy resolution was key to differentiating between the three types of features corresponding to the three types of outflows. In the future, with new and more powerful observatories such as ESA’s Advanced Telescope for High ENergy Astrophysics, Athena, astronomers will be able to observe hundreds of thousands of supermassive black holes, detecting such outflows more easily. Athena, which will be more than 100 times more sensitive than XMM-Newton, is scheduled for launch in the early 2030s. “Finding one source is great but knowing that this phenomenon is common in the Universe would be a real breakthrough,” said Norbert Schartel, XMM-Newton project scientist at ESA. “Even with XMM-Newton, we might be able to find more such sources in the next decade.” More data in the future will help unravel the complex interactions between the supermassive black holes and their host galaxies in detail and explain the decrease in star formation that astronomers observe to have taken place over billions of years. More information: Roberto Serafinelli et al. Multiphase quasar-driven outflows in PG 1114+445. Astronomy & Astrophysics (2019). DOI: 10.1051/0004-6361/201935275 Image: Artist’s impression showing how ultrafast winds blowing from a supermassive black hole interact with interstellar matter in the host galaxy, clearing its central regions from gas. Credit: ESA/ATG medialab
0.877288
4.125799
Very briefly some years ago, Mike Brown discovered the tenth planet in the solar system. This was in 2005; Brown, an astronomer at Caltech, had spotted an object that officially became known as Eris (he preferred the nickname Xena). Eris was about as big as Pluto, which was still a planet back then, and it orbited the sun at a distance nearly three times greater. But the existence of Eris raised troubling questions, such as: What’s a planet, exactly? And if Eris is a planet, why not also various other small spheres that orbit the sun? In the end, the International Astronomical Union categorized Eris as a dwarf planet—a polite phrase for “not a planet at all”—and, with Brown’s encouragement, Pluto was demoted, too. Instead of ten planets, the solar system now had eight. Brown still gets letters and late-night obscene calls from people who miss having a ninth planet, but he has no regrets. (In 2010, he wrote a book called “How I Killed Pluto and Why It Had It Coming.”) Last week he told me, “When all this happened, ten years ago, people would say, ‘Are there any other planets out there?’ And I would say, ‘Nope, that’s it. There are just eight planets, and we’ll never have any more.’ ” Brown now thinks he was wrong. Today, in The Astronomical Journal, Brown and his colleague Konstantin Batygin have published a paper with the title “Evidence for a Distant Giant Planet in the Solar System,” in which they make a persuasive case that there actually is a ninth planet out there. They have not observed it directly, only inferred its presence from the behavior of a handful of faraway objects, which have been caught in its gravitational sway. After more than a year of watching, calculating, and conducting computer simulations, Brown and Batygin write, “We motivate the existence of a distant, eccentric perturber.” As best they can determine, the perturber is perhaps ten times more massive than Earth, or roughly half as massive as Neptune, and it is very distant indeed. It follows an eccentric orbit, meaning one that is more elliptical than circular, and comes no closer to the sun than about two hundred and fifty astronomical units. (An astronomical unit is the distance from the sun to Earth, or ninety-three million miles. Jupiter is roughly five astronomical units from the sun, and Pluto averages nearly forty.) At its farthest, the new planet is between six hundred and twelve hundred astronomical units away; if the sun were on Fifth Avenue and Earth were one block west, Jupiter would be on the West Side Highway, Pluto would be in Montclair, New Jersey, and the new planet would be somewhere near Cleveland. It takes between twelve and twenty thousand years to go once around the sun. It is an ice giant, a lonely wanderer and the gravitational bully of the outer solar system. Brown and Batygin call it Planet Nine, and Jehoshaphat, and George. “We actually call it Fatty when we’re just talking to each other,” Brown said. Brown acknowledged that the history of astronomy is riddled with false hopes. Urbain Le Verrier, the French mathematician who correctly predicted the existence of Neptune, in 1846, also predicted the existence of a planet orbiting between the sun and Mercury. He called it Vulcan, and it turned out not to exist. Every few years, someone announces the discovery of Planet X, some large object that Galileo and four centuries of his descendants missed, only to retract it. 
“If somebody proposed this—if I picked up a newspaper and read a headline—my first reaction would be, Oh my God, these guys are crazy,” Brown said of his and Batygin’s finding. “But if somebody then looked at the evidence, they’d have a hard time disagreeing that the evidence is there.” Greg Laughlin, an astronomer at the University of California, Santa Cruz, and one of the few scientists who knew in advance of the paper, said, “It’s a very solid dynamical analysis. It’s top-notch. If anybody else was making this claim, you’d have to discount it to, at best, a one-per-cent chance of being there. But the combination of Mike Brown, who has a really solid observational sense of what’s out there, and Konstantin’s theoretical brilliance—if it’s out there, they’ve found it.” Alessandro Morbidelli, an astronomer and planetary scientist at the Observatoire de la Côte d’Azur, in Nice, France, and a referee for The Astronomical Journal, said, “This paper for the first time gives a smoking gun for the existence of an additional planet.” The possibility of Planet Nine illustrates just how much our knowledge of the solar system has expanded in recent decades. In 1992, astronomers observed the first evidence of the Kuiper Belt, a population of icy objects—more than a thousand, at latest count—orbiting the sun at a distance of between thirty and fifty astronomical units. That same year, astrophysicists began planning a mission to Pluto (technically a Kuiper Belt member itself). By the time the spacecraft, eventually called New Horizons, arrived at its destination, last July, Pluto was less an ending than a beginning. In the interim, Brown and his colleagues Chad Trujillo and David Rabinowitz had spotted Sedna, a small, icy object with an eccentric orbit well outside the Kuiper Belt; it comes no closer to the sun than seventy-six astronomical units and, at its farthest, strays more than nine hundred astronomical units away. “The very first clue that something else was out there was our discovery of Sedna,” Brown said. “That was the first object that didn’t quite fit any of the existing categories.” Sedna is typically referred to as an extreme Kuiper Belt object, although Brown has also placed it in the Oort Cloud, another reservoir of icy scraps, which is thought to occupy the utmost edges of the solar system. Several other objects like it have since emerged, with license-plate names like 2012 VP113, which was discovered two years ago by Trujillo and his colleague Scott Sheppard. (They nicknamed it “Biden.”) Such objects spend so much of their time so far away that only a few are visible during our lifetime. Of those that have been seen, however, what is remarkable is how closely their orbits align. Draw the solar system as you might view it from the top down—that is, perpendicular to the ecliptic, the plane on which the planets orbit. Neptune and all the planets within its orbit fit in a small circle; Sedna and the other extreme Kuiper Belt objects fan out to one side, their perihelia (the points of their orbits nearest the sun) overlapping. View the solar system from the side and it’s clear that Sedna and its kin share their own ecliptic, tilted at an odd angle from, and crossing, the main one.
In their 2014 paper, Trujillo and Sheppard noted that the similar orbits of these objects suggest "that an unknown massive perturbing body may be shepherding these objects into these similar orbital configurations.” A little-noted fact about planets, and one inherent to their definition, is that they do stuff: they have sufficient mass and gravity to affect other objects in the solar system. As Brown and Batygin began to look more closely at the alignment of Sedna, 2012 VP113, and the rest, they, too, began to think that a larger organizing force might be at work. “We started scratching our heads,” Brown said. “And after a long analysis, a year and a half of back-and-forth, we realized that the answer is—and we can’t come up with any other answer—that there’s a giant planet that is sculpting the orbits of these objects, forcing these objects into this one particular location. This one giant planet that’s very far away, in the very distant part of the solar system.” The probability that the alignment occurred randomly, they calculated, was 0.007 per cent. “That’s not exactly a good gamble,” Batygin said. At first, the pair envisaged a planet whose orbit encircled Sedna and the aligned objects, as a shepherd might its flock. (Astronomers refer to this model as “secular perturbation theory.”) But computer simulations made it clear that that arrangement didn’t explain the observable data. “We nearly gave up,” Batygin said. Instead, they posited a more interventionist prime mover: a giant planet on an eccentric orbit that crosses the objects’ orbits but is aligned against them, sending the planet far off in the opposite direction. This arrangement, too, seemed too peculiar to be real. But it allowed for a prediction: if there really was a planet with the size and orbit they had calculated, there should be a small class of Kuiper Belt objects in its path that have been tilted on their sides. A quick search through the data sets of the Minor Planet Center, at Harvard University, revealed precisely these objects, located precisely where they should be.
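The quoted orbit and period are mutually consistent under Kepler's third law, which for a body orbiting the sun reads P (in years) = a^(3/2), with the semi-major axis a in astronomical units. A short sketch with illustrative semi-major axes of 500 and 700 AU, values consistent with a ~250 AU perihelion and a 750-1,200 AU aphelion but not taken from the paper itself:

# Kepler's third law: orbital period in years = (semi-major axis in AU)**1.5
for a_au in (500, 700):
    print(f"a = {a_au} AU -> P ~ {a_au ** 1.5:,.0f} years")
# ~11,000-19,000 years, in line with the article's "twelve and twenty thousand".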
0.869576
3.8399
Astronomers have recently found a star speeding through the Milky Way at 3,728,227 mph. Even at that pace it will take about 100 million years to leave the galaxy, and astronomers have been puzzling over both its origin and its hurried departure. The team used the Anglo-Australian Telescope at the Australian National University's Siding Spring Observatory to spot the star and to study its path. The study, published in the Monthly Notices of the Royal Astronomical Society, traces the star's journey back to the centre of our galaxy. The star is traveling almost 10 times faster than typical stars in the Milky Way, such as the Sun. The astronomers were surveying the smaller galaxies that surround the Milky Way when they found the distant star, which appears to have been flung out by the supermassive black hole at our galaxy's centre. That black hole, named Sagittarius A* (Sgr A*), is about 4.2 million times more massive than our Sun. An encounter between a binary star system and such a black hole can end in tragedy for the system: the black hole can capture one star and kick the other out at high speed. The newly found star is about 29,000 light-years from Earth and is thought to have been ejected by the black hole around 5 million years ago. In astronomical terms, after leaving the Milky Way the star will travel through the emptiness of intergalactic space for eternity. The European Space Agency's Gaia satellite will help refine measurements of the star's position and velocity. In a parallel context, astronomers have been tracking a curious star located around 190 light-years from Earth in the constellation Libra, whose motion has been observed for the past 100 years. That star, moving at a speed of about 800,000 mph, is known as Methuselah or HD 140283, and it is one of the Universe's oldest known stars. In 2000, data from the European Space Agency's Hipparcos satellite suggested the star was 16 billion years old, older than the Universe itself. Astronomer Howard Bond of Pennsylvania State University later revised its age to about 14.5 billion years, with an uncertainty large enough to be consistent with the 13.8-billion-year age of the Universe derived from the cosmic microwave background. Since then, various studies have been carried out, and astronomers at the Kavli Institute for Theoretical Physics, among others, have used the cosmic microwave background to refine estimates of the Universe's age.
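For readers more at home in metric units, here is a quick conversion of the headline speed, plus the comparison behind the "almost 10 times faster" claim; the Sun's ~230 km/s orbital speed around the galactic centre is a standard reference value, not a number from the article.

mph = 3_728_227
km_per_hour = mph * 1.609344
km_per_second = km_per_hour / 3600

print(f"{km_per_hour:,.0f} km/h = {km_per_second:,.0f} km/s")      # ~6,000,000 km/h, ~1,700 km/s
print(f"ratio vs the Sun's ~230 km/s: {km_per_second / 230:.1f}x")  # of order ten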
0.884019
3.278433
The wonder of human technology, the Hubble Space Telescope, has proved its worth once again. After the marvelous image of a black hole, Hubble is giving the world a picture of two galaxies meeting for the first time. The two galaxies will merge one day, but at a slow pace; for now we can only watch them gravitate toward each other. Gas, dust, and stars are the threads that will ultimately bind the two galaxies together. The unusual duo is named UGC 2369, and one day we will speak of it as a single galaxy. To understand galaxies better, we must know that they belong to clusters or galactic groups. Within such groupings, two or more galaxies can merge and interact with each other, and gravity again has a say in the matter. Even when the objects don't collide, they attract each other and distort each other's shapes. Another phenomenon occurs when so-called fly-by galaxies create warps, tidal tails, or bars, all without contact ever being made. These tails and bars, in turn, help the galaxies form new stars. When a true merger happens, though, the impact is destructive. Merging galaxies are usually not of equal size, which makes the destruction greater. Note also that we rarely witness major mergers between galaxies, only minor ones; but according to researchers, one big event awaits in our own corner of the universe. The funny thing about big galaxies absorbing smaller ones is that even the biggest can turn out to be the next meal, and our Milky Way could one day become dinner for a more massive galaxy. Indeed, astronomers predict that our galaxy and Andromeda will collide one day. In the meantime, UGC 2369 is newly observed but already in an advanced stage of merging.
0.84389
3.057221
Data from the Herschel Space Observatory has provided the largest survey of cosmic dust across a wide range of galaxy types Tuesday 18 March 2014 An international team of astronomers has completed a benchmark study of more than 300 galaxies, producing the largest census of dust in the local Universe, the Herschel Reference Survey. Led by Dr Luca Cortese from the Centre for Astrophysics and Supercomputing at Swinburne University of Technology, the team used the Herschel Space Observatory to observe galaxies at far-infrared and sub-millimetre wavelengths, capturing the light directly emitted by dust grains. “These dust grains are believed to be fundamental ingredients for the formation of stars and planets, but until now very little was known about their abundance and physical properties in galaxies other than our own Milky Way,” Dr Cortese said. Cosmic dust is heated by starlight to temperatures of only a few tens of degrees above absolute zero, and can thus only be seen at far-infrared/sub-millimetre wavelengths. The two cameras on board the Herschel satellite, SPIRE and PACS, allowed astronomers to probe different frequencies of dust emission, which bear the imprints of the physical properties of the grains, and were therefore critical for this study. Although the SPIRE data were obtained three years ago, the team had to wait for the completion of the PACS survey last year. “The long wait was worthwhile, as the combination of the PACS and SPIRE data shows that the properties of grains vary from one galaxy to another – more than we originally expected,” Dr Cortese said. “As dust is heated by starlight, we knew that the frequencies at which grains emit should be related to a galaxy’s star formation activity. However, our results show that galaxies’ chemical history plays an equally important role.” By knowing all of these properties, astronomers can gain a thorough picture of the amount of dust in galaxies across the Universe. Co-author of the work, Dr Jacopo Fritz, from Gent University in Belgium, said: “This affects our ability to accurately estimate how much dust is in the Universe. It is particularly an issue for the most distant galaxies, which have a star formation and chemical history significantly different to the one in our own Milky Way.” The data obtained for the Herschel Reference Survey have been made publicly available to allow further studies of dust properties in nearby galaxies, located about 50 to 80 million light-years from Earth. Although the Herschel Space Telescope completed its mission in April 2013, the combination of data in the Herschel archive with future observations from the newly commissioned Atacama Large Millimeter/submillimeter Array (ALMA) in Chile will help astronomers to further unveil the mystery of cosmic dust in galaxies in the years to come. The team included researchers from Swinburne University of Technology, European Southern Observatory, University of Gent, Arcetri Observatory, Laboratory of Astrophysics of Marseille, University of Crete, Jodrell Bank Centre for Astrophysics, University of Cambridge, Institut d'Astrophysique de Paris, Padova Observatory, University of California, Heidelberg University, Cardiff University, University of Paris VII, Max-Planck-Institute for extragalactic astronomy, INAF-Roma, University of the Western Cape, Joint ALMA Observatory. The research is published in the Monthly Notices of the Royal Astronomical Society.
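Wien's displacement law makes the temperature-wavelength link concrete: grains a few tens of degrees above absolute zero radiate mostly in the far-infrared. A short sketch with illustrative grain temperatures:

WIEN_B = 2.898e-3  # Wien's displacement constant, in metre-kelvins

for temperature_k in (15, 25, 40):  # illustrative cold-dust temperatures
    peak_um = WIEN_B / temperature_k * 1e6
    print(f"T = {temperature_k:2d} K -> emission peaks near {peak_um:.0f} micrometres")
# roughly 70-200 micrometres: the far-infrared range probed by PACS and SPIRE.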
0.856175
4.005481
This discovery was made using NASA’s Chandra X-ray Observatory as well as NASA’s NuSTAR and CSIRO’s Australia Telescope Compact Array (ATCA), Joinfo.com reports. The close-in stellar couple – known as a binary – is located in the globular cluster 47 Tucanae, a dense cluster of stars in our galaxy about 14,800 light years from Earth. While astronomers have observed this binary for many years, it wasn’t until 2015 that radio observations with the ATCA revealed the pair likely contains a black hole pulling material from a companion star called a white dwarf, a low-mass star that has exhausted most or all of its nuclear fuel. New Chandra data of this system, known as X9, show that it changes in X-ray brightness in the same manner every 28 minutes, which is likely the length of time it takes the companion star to make one complete orbit around the black hole. Chandra data also show evidence for large amounts of oxygen in the system, a characteristic feature of white dwarfs. A strong case can, therefore, be made that the companion star is a white dwarf, which would then be orbiting the black hole at only about 2.5 times the separation between the Earth and the Moon. “This white dwarf is so close to the black hole that material is being pulled away from the star and dumped onto a disk of matter around the black hole before falling in,” said first author Arash Bahramian of the University of Alberta in Edmonton, Canada, and Michigan State University in East Lansing. “Luckily for this star, we don’t think it will follow this path into oblivion, but instead will stay in orbit.” Although the white dwarf does not appear to be in danger of falling in or being torn apart by the black hole, its fate is uncertain. “Eventually so much matter may be pulled away from the white dwarf that it ends up only having the mass of a planet,” said co-author Craig Heinke, also of the University of Alberta. “If it keeps losing mass, the white dwarf may completely evaporate.” How did the black hole get such a close companion? One possibility is that the black hole smashed into a red giant star, and then gas from the outer regions of the star was ejected from the binary. The remaining core of the red giant would form into a white dwarf, which becomes a binary companion to the black hole. The orbit of the binary would then have shrunk as gravitational waves were emitted, until the black hole started pulling material from the white dwarf. The gravitational waves currently being produced by the binary have a frequency that is too low to be detected with the Laser Interferometer Gravitational-Wave Observatory (LIGO), which has recently detected gravitational waves from merging black holes. Sources like X9 could potentially be detected with future gravitational wave observatories in space. An alternative explanation for the observations is that the white dwarf is partnered with a neutron star, rather than a black hole. In this scenario, the neutron star spins faster as it pulls material from a companion star via a disk, a process that can lead to the neutron star spinning around its axis hundreds of times every second. A few such objects, called transitional millisecond pulsars, have been observed near the end of this spinning-up phase. The authors do not favor this possibility as transitional millisecond pulsars have properties not seen in X9, such as extreme variability at X-ray and radio wavelengths. However, they cannot disprove this explanation.
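The gravitational-wave remark can be checked from the orbital period alone: a circular binary emits its dominant gravitational-wave signal at twice its orbital frequency. A minimal sketch using the 28-minute period reported above:

orbital_period_s = 28 * 60        # the 28-minute orbit seen in the Chandra data
f_gw_hz = 2.0 / orbital_period_s  # dominant emission at twice the orbital frequency

print(f"f_GW ~ {f_gw_hz * 1e3:.1f} mHz")
# ~1.2 mHz: orders of magnitude below LIGO's ~10 Hz sensitivity floor, but in
# the band targeted by proposed space-based gravitational-wave observatories.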
“We’re going to watch this binary closely in the future, since we know little about how such an extreme system should behave”, said co-author Vlad Tudor of Curtin University and the International Centre for Radio Astronomy Research in Perth, Australia. “We’re also going to keep studying globular clusters in our galaxy to see if more evidence for very tight black hole binaries can be found.” A paper describing these results was recently accepted for publication in the Monthly Notices of the Royal Astronomical Society and is available online.
0.900526
3.94681
Moon* ♎ Libra Moon phase on Friday 21 April 2073: Full Moon, 14 days old. The Moon is in Libra. The Moon rises at sunset and sets at sunrise. It is visible all night and is high in the sky around midnight. The Moon is passing through about ∠24° of the ♎ Libra tropical zodiac sector. The lunar disc appears visually 3.1% wider than the solar disc; the Moon and Sun apparent angular diameters are ∠1968" and ∠1909". This Full Moon is the Pink Moon of April 2073. There is a high Full Moon ocean tide on this date. The combined gravitational tidal force of the Sun and Moon acting on Earth is strong because of the Sun-Earth-Moon syzygy alignment. The Moon is 14 days old. Earth's natural satellite is moving through the middle part of the current synodic month. This is lunation 906 of the Meeus index, or 1859 of the Brown series. The current lunation, 906, is 29 days, 16 hours and 1 minute long, which is 2 hours and 25 minutes longer than the next lunation, 907. The current synodic month is 3 hours and 17 minutes longer than the mean synodic month, but still 3 hours and 46 minutes shorter than the longest of the 21st century. The true anomaly of this lunation is ∠192.2°; at the beginning of the next synodic month it will be ∠218.1°. The length of upcoming synodic months will keep decreasing as the true anomaly approaches the value of a New Moon at perigee (∠0° or ∠360°). The Moon reaches perigee on this date at 03:38, 14 days after the last apogee of 6 April 2073 at 05:19 in ♈ Aries. The lunar orbit then begins to widen: the Moon will move outward from Earth for the next 12 days, until it reaches the next apogee on 3 May 2073 at 14:50 in ♓ Pisces. At this perigee the Moon is 358 345 km (222 665 mi) from Earth, which is 4 163 km closer than the mean perigee distance but still 12 011 km farther than the closest perigee of the 21st century. Three days after its ascending node of 17 April 2073 at 19:11 in ♌ Leo, the Moon is following the northern part of its orbit for the next 9 days, until it crosses the ecliptic from north to south at the descending node on 30 April 2073 at 13:47 in ♒ Aquarius. Three days after the beginning of the current draconic month in ♌ Leo, the Moon is moving through its first part. It is 7 days after the previous north standstill of 14 April 2073 at 00:42 in ♋ Cancer, when the Moon reached a northern declination of ∠19.238°. Over the next 5 days the lunar orbit moves southward, reaching a southern declination of ∠-19.330° at the next southern standstill on 26 April 2073 at 14:23 in ♑ Capricorn. The Moon is in Full Moon geocentric opposition with the Sun on this date, and this alignment forms a Sun-Earth-Moon syzygy.
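The 3.1% figure follows directly from the two quoted apparent diameters; a one-line check using the values on this page:

moon_arcsec, sun_arcsec = 1968.0, 1909.0  # apparent angular diameters quoted above

print(f"lunar disc is {(moon_arcsec / sun_arcsec - 1) * 100:.1f}% wider than the solar disc")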
0.860247
3.172154
Breakthrough Listen Working Group Statement Dr. Steve Croft Dr. Andrew Siemion Dr. Jill Tarter With input from Sofia Sheikh This document, authored by the Breakthrough Listen Working Group, is meant to give scientific background on our work; outline our goals for the Workshop; and raise themes and questions that other Working Groups may or may not want to respond to as part of their Statements. SETI + Breakthrough Listen: Background In 1960, the first modern SETI experiment was undertaken in West Virginia by Frank Drake’s Project Ozma, looking at two stars using a single, tunable radio channel. The six decades since have seen huge advancements in technology, which now allow billion-channel radio spectrometers to search hundreds or even millions of targets for signs that intelligent life may have arisen not just on our own planet, but elsewhere. Breakthrough Listen is taking data with the Green Bank Telescope in West Virginia (the world’s largest steerable radio telescope), and the Parkes radio telescope in Australia. We are also undertaking a search for powerful lasers using the Automated Planet Finder (a robotic optical telescope equipped with cutting-edge spectrograph technology) in California. Listen has also signed collaborative agreements with the Jodrell Bank Observatory in the UK and the FAST radio telescope in China. Pilot programs are now being launched with MeerKAT and MWA, two of the precursor instruments to the international Square Kilometre Array telescope, which, when constructed in the next decade, will be the most powerful SETI instrument yet developed. As SETI scientists, we use sensitive instrumentation to pick up signals over a range of frequencies of electromagnetic radiation (usually focusing on visible light and the radio spectrum). Then we ask three questions: 1. Is there a signal present, or only noise? 2. Is the signal natural or artificial? And 3. If artificial, is it human-generated? To answer these questions, we tend to look for signals that are narrow in frequency, time, or both. SETI astronomers call any detectable indicator of technology a “technosignature.” We distinguish between the detection of an engineered (or artificial) signal and communication. The former does not necessarily imply the latter. We can’t be sure that extraterrestrial civilisations will choose to transmit narrow-band signals, or even that they will use radio or laser communications at all, but if they are deliberately trying to attract attention, a narrow-band radio or laser signal is a great way to do so. Both are capable of traversing interstellar and even intergalactic distances, and tend to stand out against the background of natural signals and noise primarily because they span just a narrow range of frequencies. Moreover, irrespective of the intentions of a putative extraterrestrial civilization, the detection of spectrally or temporally compressed electromagnetic radiation represents one of the best known means of remotely sensing an extraterrestrial technology, and by extension, an intelligent civilization. The third question is much more difficult. We’re looking for a needle in a haystack of human-generated interference: from cell phones, wifi, satellites, airplanes, and all of the accoutrements of modern technological life. And the kinds of signals we look for are based on extrapolating from our own technology. We imagine ETI might develop advanced methods of communication or propulsion with signatures that are detectable at large distances.
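To illustrate why a narrow-band signal makes such a good beacon, here is a toy example in Python, a sketch of the general idea rather than Breakthrough Listen's actual pipeline: a sinusoid far too weak to see in the raw time series concentrates all of its power into a single frequency bin, where it towers above the broadband noise floor. The sample rate and tone frequency are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
n, fs = 2**16, 1.0e6               # number of samples and an assumed 1 MHz sample rate
t = np.arange(n) / fs

tone_hz = 125_000.0                # hypothetical narrow-band signal; chosen to land
                                   # exactly on an FFT bin to keep the demo simple
tone = 0.05 * np.sin(2 * np.pi * tone_hz * t)  # amplitude is 5% of the noise sigma
noise = rng.normal(0.0, 1.0, n)

power = np.abs(np.fft.rfft(tone + noise)) ** 2
freqs = np.fft.rfftfreq(n, d=1 / fs)

peak = int(np.argmax(power))
print(f"peak at {freqs[peak]:.0f} Hz, "
      f"{power[peak] / np.median(power):.0f}x the median bin power")

Sample by sample the tone is invisible (its amplitude is one-twentieth of the noise), yet its FFT bin carries tens of times the median power. Real searches layer Doppler-drift corrections and interference rejection on top of this basic idea.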
We use a variety of methods to try to ascertain if a signal is coming from our local (human) environment, but mostly we do this by seeking signals that are localized at some position on the sky. If a star moves overhead over the course of the night, any apparently artificial signal transmitted by a technological civilization located in that star system should move with it (and should go away when the telescope is pointed elsewhere) and will thus stand out against local interferers that don’t show such behavior. In short, we are searching for signals that appear to be both non-local and artificial. You can read more about the Breakthrough Listen science program at U.C. Berkeley here, as well as watch a short video aimed at public audiences, “If We Heard from Aliens, What Would It Look Like?” Goals for the Workshop Three years into the Breakthrough Listen Initiative, the team at U.C. Berkeley’s SETI Research Center now seeks to meaningfully engage with scholars who think with and around SETI as a way to socially, historically, and philosophically analyze our own science search. Our main question is: What are we missing? We view Making Contact 2018 as the launch point for the Breakthrough Listen team to start intellectual collaborations with other researchers. Our team consists of expert astronomers, digital engineers, programmers, and data analysts, and we’re not seeking advice on the technical aspects of designing a SETI survey, or looking to speculate about extensions to the known laws of physics. But we are looking for help from experts outside our métier to explore possible motives, development, and sociology of extraterrestrial intelligence, the means by which an interaction with humanity might play out, and the implications for our society of a detection. We are open to unexpected viewpoints. This Workshop is a launching point for the Breakthrough Listen team to develop tools to critically engage questions and issues that surround SETI research. We hope that listening to thinkers from other disciplines will permit us to broaden the discussion surrounding the search for intelligent life beyond Earth. Themes and Questions for Working Groups We have generated some loose questions that you may or may not choose to respond to in your Working Group Statement. These are meant to be generative, rather than limiting, questions and themes, and the groups are welcome to respond to questions not directed at their disciplines. - How do fields outside data science and astronomy define intelligence, and how do they distinguish between “artificial” and “natural” or “biological” intelligence? - How do other fields define the word “artificial”? SETI scientists often use the word “artificial” to describe any signal produced by intelligent civilizations (including humans) and “natural” to mean anything not caused or affected by intelligent life. What does this language say about the way we view intelligence and technological capability vis-a-vis other processes in the universe? What does it say about the way we view human intelligence vs. the intelligence of other terrestrial life? - Why, or why not, might altruism, intelligence and technological capability be related? AI & Data Working Group - What do we mean by “communication,” by “technology,” and by “intelligence,” and is agreeing on a definition for these terms important or largely irrelevant to the success of the SETI endeavor?
- The Rio Scale was formulated as a tool to assess the likelihood that a claimed signal is due to ETI, assessing the physical reality of what’s been detected, the credibility of the claim, and the quality of the observational data. It is also intended to aid us in communicating the scientific method to the public. Work is currently ongoing to expand this to a “Rio Scale 2.0”. We want to avoid attributing meaning or intention to potential signals, but questions remain: What might we be missing? Does the Rio Scale fulfill our wish about creating a more “objective” scale? Anthropologists Working Group - In sifting for technosignatures, we’re basically looking for humans on steroids because we’re not very good at envisioning something we can’t conceive. We’re looking for a highly advanced version of ourselves that makes the same kinds of technology. How can we best prime ourselves to be sensitive to something at the edge of our conception of human “intelligence”? - How can we, or even should we, think through the problem of anthropocentrism that guides our search? Future Studies Working Group - SETI has been described as the “archaeology of the future.” If the lifespan of a typical communicating civilization is short, our chances of finding it are slim. Conversely, if civilizations live for millions of years, then if we make contact, it will likely be with a civilization much older than our own. How might our species develop in the far future, and what might this have in common with the development of civilizations in general? Must civilizations rise and fall, or does an inexorable path of development eventually lead to technologies beyond our wildest imaginings? Is understanding the motivations of an advanced ETI necessary for making a detection, never mind achieving communication? - Arthur C. Clarke suggested that “Any sufficiently advanced technology is indistinguishable from magic.” Nick Bostrom’s theory of “superintelligence” suggests that we may not even have the brain capacity to try to figure out how an advanced intelligence might choose to operate. Is it feasible or advisable to imagine, project, or speculate about an advanced civilization? How would their technology develop, and how would we achieve sufficient common ground to even recognize it, never mind communicate with it? What would a civilization a billion years ahead of ours even look like? What would its motives be? Why might it choose to make its presence known? History, Policy, & Ethics Working Group - How can BL better historically contextualize major SETI events like the birth of radio SETI, cyclical funding, and the launch of Breakthrough Listen? What might the historical/cultural/political context teach us about the way the search is being performed? - What are the potential ethical ramifications of making contact? - SETI’s underlying ethos has been that educating the public at large, even before a confirmed detection is made, is a good thing to do because we expect that a valid discovery will be incredibly important and disruptive. Is that what we should be doing? If so, how can we do that better? How do we do outreach that goes beyond merely educating the public about what SETI does? How can we communicate the awe and wonder of the deeply profound question that SETI work seeks to answer in the most inclusive and global way possible? Indigenous Studies Working Group - We acknowledge that, by and large, SETI endeavors have been directed by Westerners steeped in a particular knowledge tradition.
How can we engage a broader community, and ultimately the entire world? - Although we only know of one inhabited planet, there are nevertheless many species, and within species, many tribes and clans. What lessons from interspecies communication or from cross-cultural communication are SETI scientists not appreciating? Literature, Language, & Storytelling Working Group - What could drive an ETI to spread across the galaxy? Are there necessarily drivers for settlement at all? - Is an unsustainable growth in resource consumption inevitable for an ETI? - How do the stories we tell ourselves — about intelligence, gender, technology, nature, language — shape, or perhaps hinder, SETI projects? Situated Knowledges and Feminist Epistemology - Is the idea of growth / consumption / domination in SETI a product of certain aspects of human cultural history? What alternatives are there? - How can the idea of “situated knowledges” inform our search? How can we better theorize Others using this tool? How can it help us beware of our limits of knowledge?
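As promised at the top of this piece, here is a toy sketch of the on/off-source localization test. It is purely illustrative (the function, threshold, and example numbers are invented for this sketch, and real pipelines work on full dynamic spectra rather than single detection statistics), but it captures the core logic: a candidate survives only if it is present when the telescope points at the target and absent when it points away.

```python
from typing import List

def passes_on_off_test(on_snrs: List[float],
                       off_snrs: List[float],
                       threshold: float = 10.0) -> bool:
    """Keep a candidate only if it is detected in every ON-source
    pointing and undetected in every interleaved OFF-source pointing.

    on_snrs / off_snrs: signal-to-noise of the candidate in each
    observation of the target star (ON) and of nearby sky (OFF).
    """
    detected_on = all(snr >= threshold for snr in on_snrs)
    absent_off = all(snr < threshold for snr in off_snrs)
    return detected_on and absent_off

# Local interference shows up whether or not we point at the star:
print(passes_on_off_test([25.0, 30.1, 27.4], [24.3, 29.8, 26.9]))  # False
# A signal truly localized on the sky disappears in the OFF pointings:
print(passes_on_off_test([25.0, 30.1, 27.4], [0.8, 1.2, 0.5]))     # True
```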
When we look up at the sky at night, we don't only see incredible natural phenomena playing out before our eyes; we also see an enormous tapestry of our universe's past. The sky is a time machine of sorts. It transmits the light of long-dead stars, light that is often millions of years old, allowing us to gather knowledge on the history of our universe and how it was formed. Our understanding of the universe is also possible thanks to a rich history of astronomers observing it and gradually adding to our knowledge of the way celestial objects move. Here is a list of some of the most influential early astronomers throughout history.

1. Aristarchus of Samos (310-230 BC)

Aristarchus of Samos was an ancient Greek mathematician and astronomer who is credited with having created the first-known map of our solar system that placed the Sun at the center, with Earth as a planet revolving around it. Aristarchus also correctly proposed that Earth rotates around an axis and correctly stated that other stars were similar in nature to the Sun, only much farther away from Earth.

2. Eratosthenes (276-194 BC)

Eratosthenes became the Chief Librarian of the Great Library of Alexandria in ancient Greece. He is credited with having made some incredible calculations — especially considering the lack of tools at his disposal when compared with modern astronomers — that still hold up today. Eratosthenes calculated the distance between the Sun and Earth and was only off by a few percent when compared with modern measurements. Similarly, he gave an impressively accurate measurement of the circumference of the Earth. The ancient Greek astronomer is also recognized for having established the need for a leap day, for having calculated the tilt of the Earth's axis, and for having devised a map using meridians and parallels, which became the basis for indicating the position of stars in the star charts used in astronomy and navigation.

3. Hipparchus (190-120 BC)

Hipparchus is credited as the founder of trigonometry and spherical trigonometry. The ancient Greek astronomer and mathematician used this work to develop his theories on lunar motions, allowing him to become the first person to successfully predict solar eclipses. Aside from developing the first accurate models to describe the relative motions of the Sun and Moon, Hipparchus also compiled the first star catalog in the Western world and accidentally discovered the precession of the equinoxes.

4. Gan De (around 400-340 BC)

Gan De, together with his colleague Shi Shen, is the first individual in recorded history known to have compiled a star catalog. Star catalogs had been compiled earlier by unknown Babylonian astronomers, but Gan De is the first compiler whose name history has preserved. Also known as Lord Gan, Gan De made some of the first recorded observations of Jupiter. The Chinese astronomer and astrologer, who was born in the state of Qi, found ingenious ways to work around the technological limitations of the time. One method he used, for example, was to use a high tree branch to shield his eyes from the glare of Jupiter, allowing him to make a naked-eye observation of one of Jupiter's moons. A catalog compiled by Gan De and Shi Shen was discovered as part of the second-century BC Mawangdui Silk Texts. It included surprisingly accurate observations of the movements of Jupiter, Venus, and Mars.
5. Ptolemy (100-170 AD)

Ptolemy's scientific treatise, the Almagest, contains a comprehensive — for the time — star catalog, with detailed descriptions of 48 constellations as observed by the Greek astronomer and mathematician. Much of the Almagest was usefully formatted in convenient tables that made it easy to calculate the past and future positions of celestial objects.

6. Aryabhata (476-550 AD)

Unfortunately, much of Aryabhata's prodigious output has been lost to history. The Indian astronomer and mathematician was merely 23 years old when he wrote his most famous astronomical work, the Aryabhatiya. The original text was lost, meaning that much of what is known about his work today comes from the accounts of his contemporaries and later commentators. Among Aryabhata's achievements are the correct observation that the Earth rotates once around its axis every day, and that the apparent movement of the stars and the Moon across the night sky is a consequence of that rotation. Aryabhata calculated the length of the day as 23 hours, 56 minutes, and 4.1 seconds, in close agreement with the modern value for the sidereal day. He also calculated the length of the year as 365.25858 days, only about 3 minutes and 20 seconds longer than the modern value.

7. Nicolaus Copernicus (1473-1543)

Though earlier astronomers had claimed that the Sun was the center of the solar system, Nicolaus Copernicus finally shattered the popularly believed and incorrect notion that all celestial objects revolved around the Earth. Copernicus, of Poland, published his book De Revolutionibus Orbium Coelestium ("On the Revolutions of the Heavenly Spheres") when he was 70 and on his deathbed. Though his ideas didn't ignite the popular imagination until almost a hundred years later, his heliocentric model of the solar system is integral to our understanding of the universe today.

8. Galileo Galilei (1564–1642)

Galileo built on Copernicus's ideas to become one of the most important figures of the scientific revolution of the 17th century. Born in Pisa, Italy, Galileo was responsible for several scientific advancements on top of his work in astronomy: he studied pendulum motion and sketched a design for a pendulum clock, and he showed that all falling bodies fall at the same rate, regardless of mass. He also experimented with and helped to refine the technology behind telescopes. Thanks to this technology, the Italian astronomer is credited with having discovered Jupiter's four largest moons, known today as the Galilean moons. Galileo also helped popularize the Copernican heliocentric model of the solar system, which places the Sun at the center of our solar system. The Catholic Church of the time forced Galileo to recant his support for the heliocentric model and kept him under house arrest for the last nine years of his life.

9. Isaac Newton (1642–1727)

Isaac Newton was a famously reclusive figure whose work might easily never have received the recognition it deserved. Thankfully, it did, and he is now considered one of the most influential figures in the history of science. Aside from inventing calculus, his formulation of the three universal laws of motion and his theory of universal gravitation changed the course of modern science. Newton famously devised his theory of universal gravitation after seeing an apple fall from a tree at his home at Woolsthorpe Manor in England.
In 2010, a NASA astronaut carried a piece of that apple tree, which is old but still standing, aboard the space shuttle Atlantis on a mission to the International Space Station. The aeronautics and space industry owes a great debt to Newton's universal laws of motion. Incredible scientists like these might not come along often, but when they do, they build on knowledge accumulated through the small discoveries of the countless scientists who came before them. The work of Newton, Galileo, and others built the foundations for the fascinating work of modern scientists such as Albert Einstein and Stephen Hawking. Could the next great scientist to fundamentally change our understanding of the universe be living amongst us now?
Interview with Suzanne Aigrain, University of Oxford, by John Strachan

In this interview we ask expert exoplanet researcher Professor Suzanne Aigrain from Oxford University why she is a keen participant in exoplanet research, and what her views are on the prospects and difficulties of detecting small exoplanets such as those that may exist around Proxima Centauri.

Can you tell me what first got you interested in astronomy, and exoplanets in particular?

I have a background in physics, and started getting interested in astronomy during internships at the Institute of Astronomy in Cambridge while I was an undergrad. My interest in exoplanet research really came about after I graduated, when I spent 6 months as a trainee at the European Space Agency's ESTEC research centre in the Netherlands, before starting my PhD. This was an exciting time, as the first transiting exoplanet had just been discovered, and I was given the opportunity to work on this very hot topic. I was hooked, and decided to continue in this field of research for my PhD.

Detecting small exoplanets using the radial velocity method appears to be very difficult. The debate you have been involved in concerning the existence, or not, of a planet around Alpha Centauri is an example of one such case. Can you give your views on the main difficulties involved in the detection of small exoplanets using the radial velocity method?

There are many difficulties which have to be overcome in order for the radial velocity (RV) method to succeed in detecting small exoplanets. Here are a few of the most important:

- Instrumental precision: The radial velocity of the star has to be measured accurately enough. Current state-of-the-art RV spectrographs such as HARPS and HARPS-N have precision down to 1 m/s, or even slightly below. This is sufficient, with many measurements, to detect Earth-mass planets in the habitable zone of some small stars, such as M dwarfs. However, an Earth-mass planet in the habitable zone of a Sun-like star causes a variation of only 10 cm/s, over an entire year (see the back-of-the-envelope sketch at the end of this interview). With new calibration techniques such as laser combs, though, the precision of RV spectrographs is still improving, and the next generation of experiments such as ESPRESSO are likely to be able to achieve this precision.
- Stellar activity: The apparent RV of a star can vary even if there are no planets orbiting it. For example, most stars have some starspots (the stellar analogue of sunspots)—regions where the magnetic field of the star is particularly strong. Starspots appear darker than the rest of the star's surface, and as the star rotates, a spot will hide first a part of the surface that is moving towards the observer, then a part that is moving away. This will lead to an apparent change in RV that could easily be mistaken for a planet. There are all sorts of other effects, mainly due to magnetism and convection (the hot gas inside the star bubbling up to the surface and back down again), that can cause subtle and complex RV variations. To detect small planets despite these, we must model them in detail, and this is a very active area of research today.
- Patchy observations: Because our instruments are located on the Earth, we can only observe a given star during the night, when the sky is clear and the star is up in the sky. Additionally, in RV we typically observe one star at a time, so we must choose between the different targets we want to observe.
As a result, the observations of each star have many gaps, and this can make it even more difficult to distinguish between planetary and stellar signals.

- Comparing models: When we analyse the observations of a given star, we don't know in advance how many planets it has. To find out, we must try models with different numbers of planets and compare them, and at the same time we must also account for the activity of the star and the noise of our measurements. This, combined with the patchy nature of the data, makes analysing RV data a very complex, time-consuming and challenging process.

These difficulties came into play in the case of Alpha Centauri B. The observations were dominated by the signal from the companion star Alpha Centauri A (Alpha Centauri is a binary star), and by the activity of the star. There was also a tiny signal with a period of 3.2 days. This signal became stronger after modelling and subtracting the binary and activity contributions—strong enough for the authors of the original study to report the detection of an exoplanet. However, we ran some simulations using synthetic data with the same time sampling, noise properties, and the same kind of activity signal, but no planet, and the 3.2 day signal was still there. That means it couldn't have come from an exoplanet. We actually think it was an artefact of the time sampling that just got boosted by the complex modelling needed to remove the activity. Now that we know this sort of thing can happen, we can look out for it in future, hopefully avoiding a repeat of this sort of problem.

Do you think the daily observing of Proxima Centauri during the three months of the Pale Red Dot Campaign will significantly increase the chances of detection?

A daily observing strategy determined in advance will certainly help. This dense set of observations, captured over the coming months, will help with the modelling of the star's activity signal. We know that Proxima Centauri rotates slowly, so by gathering many observations in a short space of time, we should be less sensitive to the effects of starspots. If any possible planets are found during the new observations, we can then look back at the considerable data set of previous observations to confirm that they are real.

If the campaign does detect an exoplanet, what characteristics of the exoplanet do you think we will be able to determine from the observations, and what follow-up observations would you recommend?

If an exoplanet is detected by the radial velocity method, some of its orbital parameters will be determined, in particular the period and eccentricity of the exoplanet. We also get a lower limit on the mass of the exoplanet relative to the star. It is a lower limit only because we do not know the inclination of the orbit. If the exoplanet happens to transit across the disk of the star, then we will know that the orbit is edge-on, and thus we will obtain the true mass of the planet, as well as its radius relative to the star. Together these give us its mean density, from which we can tell something about its composition (mainly gaseous, mainly rocky, or something in between?). However, only a small fraction of exoplanets transit, so we'd have to be particularly lucky. For larger planets, with an extended gas envelope, observing the transits in multiple wavelengths can enable us to probe the composition of the atmosphere.
However, for an Earth-like planet, even with the best instruments available, such as the future James Webb Space Telescope (JWST), this kind of measurement may not be possible: its atmosphere may be too thin, and the exoplanet may be too cool to see features in its spectrum during a secondary eclipse (when the planet passes behind the star). We may have to wait until we are able to image the exoplanet directly, by blocking out the light of the star using something like a coronagraph. Some JWST instruments are equipped with a coronagraph, and there is also a project to launch a "starshade" which would act as an external coronagraph for JWST (a project called the New Worlds Explorer). If we were able to isolate the light of the planet, we could then extract its spectrum and learn about the temperature and composition of its atmosphere.

What do you think the impact of finding an Earth-like exoplanet around Proxima Centauri or other stars near to the Earth could be?

This would be a major discovery. It would confirm that such planets must be very common, as we already suspect based on statistical results derived from the Kepler mission: Dressing and Charbonneau (2013) estimated that 40-50% of M stars have at least one Earth-sized (up to 1.6 Earth radii) exoplanet in their habitable zone. It would give us an extra impetus to search for more exoplanets in the solar neighbourhood, and to invest in the technology we need to study them in detail. Knowing there are many other worlds potentially like our own in the Galaxy would also change the way we think of our own place in the universe.

What do you think the chances are that, if an exoplanet is detected, it is similar to Earth and that it may harbour life?

If Proxima Centauri hosted any planets substantially more massive than the Earth in its habitable zone, they should have been detected already during previous observations. Therefore, if a new planet is detected as part of the Pale Red Dot campaign, it is likely to be similar in mass to the Earth. Whether such a hypothetical exoplanet might harbour life, though, is very hard to know. Existing models of planet formation, and data from the Kepler satellite and associated RV follow-up, suggest that planets below 1.6 Earth radii are likely to be rocky and to have thin atmospheres, akin to Earth. Larger planets tend to have larger gas atmospheres and may be more like Neptune than the Earth. So if the planet's mass was small enough, it would probably be rocky. But whether it would also have developed life – that is anyone's guess!

The Pale Red Dot Campaign is one of several campaigns aimed at detecting exoplanets. Can you comment on any ongoing campaigns which you are particularly interested in and which may help to detect exoplanets in the solar neighbourhood?

There are very many exoplanet campaigns ongoing or just about to start—too many to list—and many of them are exciting. Two that I am particularly interested in at the moment are the K2 mission and the TERRA Hunting Experiment (THE) at the Isaac Newton Telescope (INT) in La Palma. K2 is the Kepler "Second Light" mission and uses the Kepler space observatory. What particularly interests me about K2 is that it is observing some nearby young open clusters, which are groups of young stars that formed out of the same cloud of gas and dust. This represents our first opportunity to search for young transiting planets and directly learn about their early evolution.
THE is an experiment, proposed by Didier Queloz from Cambridge University, which would involve installing a high-precision radial velocity spectrograph called HARPS3 (a copy of HARPS and HARPS-North) on the INT, and upgrading the telescope to be fully robotic. This instrument would then be used in a long-term campaign (5-10 years) to observe a small number of nearby solar-type stars every day. The use of a dedicated instrument over such a long period of time will greatly improve the chances of finding Earth-like exoplanets around these stars.

What do you see as the future for exoplanet detection and characterisation over the next ten years?

The first two purpose-built instruments for direct detection of exoplanets on large telescopes, SPHERE on the Very Large Telescope (VLT) and the Gemini Planet Imager (GPI) on the Gemini telescope, have just started large surveys for young, massive planets on wide orbits around nearby stars. In the next two to three years JWST, TESS (the Transiting Exoplanet Survey Satellite) and ESPRESSO will all come online, improving our ability to detect exoplanets and measure their properties from space and from the ground. The ongoing GAIA astrometric mission should also find many high-mass (>Jupiter-mass) long-period exoplanets in the solar neighbourhood, and the small photometric satellite CHEOPS will search for transits of previously known planets. By 2025 the PLATO space mission will search for exoplanets among relatively bright stars, with the aim of detecting Earth-sized planets in the habitable zone around solar-like stars. The European Extremely Large Telescope (E-ELT), with its large aperture, will be able to directly detect exoplanets, down to perhaps rocky-sized exoplanets, and using high-resolution spectrographs will be able to record a number of their spectra. In the very long term, the ultimate goal is to be able to directly image and take spectra of Earth-like planets around Sun-like stars, and search for signs of biological activity in their atmospheres. This will require a large space telescope with a state-of-the-art coronagraph, as recently proposed for example in the High Definition Space Telescope (HDST) report.

About the interviewee.

Professor Suzanne Aigrain is head of a research group at the University of Oxford which focuses on the detection and characterisation of exoplanets and their host stars. She was born and educated in France and moved to the UK for her undergraduate studies, where she has remained since, except for a 6-month spell at the European Space Agency's ESTEC research centre in the Netherlands just after finishing her undergraduate degree at Imperial College London. She completed her PhD on Planetary Transits and Stellar Variability at the University of Cambridge. Since then she has held postdoctoral positions at Cambridge and lectureship positions at the Universities of Exeter and Oxford. During this time she has worked as a Co-Investigator or Participating Scientist on major collaborations including CoRoT, Kepler, K2, TESS and PLATO. Her research group's website is www.splox.net and she occasionally tweets as @AirborneGrain.
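To put rough numbers on the instrumental-precision point made near the start of this interview, here is a back-of-the-envelope sketch of the standard two-body radial-velocity semi-amplitude formula. The formula itself is standard; the M-dwarf parameters at the end are illustrative round numbers, not a claim about any specific star.

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
M_EARTH = 5.972e24  # Earth mass, kg
DAY = 86400.0       # seconds

def rv_semi_amplitude(period_s, m_planet, m_star,
                      inclination=math.pi / 2, ecc=0.0):
    """Stellar reflex RV semi-amplitude K in m/s."""
    return ((2.0 * math.pi * G / period_s) ** (1.0 / 3.0)
            * m_planet * math.sin(inclination)
            / (m_star + m_planet) ** (2.0 / 3.0)
            / math.sqrt(1.0 - ecc ** 2))

# Earth analogue around a Sun-like star: ~0.09 m/s, the "10 cm/s" quoted above.
print(rv_semi_amplitude(365.25 * DAY, M_EARTH, M_SUN))

# Earth-mass planet on an ~11-day orbit around a ~0.12 M_sun M dwarf:
# ~1.2 m/s, within reach of HARPS-class precision with many measurements.
print(rv_semi_amplitude(11.2 * DAY, M_EARTH, 0.12 * M_SUN))
```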
Ceres keeps getting weirder. Those white spots on the surface we’ve been seeing for months are still mystifying, and we can now add another bizarro surface feature to the list: A huge 5 kilometer tall mountain sitting in the middle of an otherwise relatively flat part of the asteroid. Um. Why is that there? On Earth, mountains can form for several reasons. Continents crash together, creating wrinkles in the surface. That’s what the Himalayas are. Of course, Ceres doesn’t have plate tectonics! That wouldn’t form a solitary mountain anyway. Volcanoes? Well, we do see that happening on Earth. But we don’t see any other features like this at all nearby, making it unlikely to be from a weak spot in the crust. Devil’s Tower in Wyoming is similar to this feature, though; that tower may have been created by upwelling magma seeping into prehistoric sedimentary layers. But clearly that’s not going to happen on an asteroid! Sedimentary rocks would be, I expect, rather difficult to produce. Mountains on airless bodies like asteroids (or our Moon) can be made in several ways as well. Giant impacts have mountain ranges around their rim, created by rocks lifted up at the edge of the crater. But this mountain on Ceres is alone. Smaller craters can get central peaks, where the rock rebounds upward after the initial impact (similar to the drop that splashes up in the center of a glass when you pour milk). But there’s no obvious crater around this mountain. Maybe other forces filled it in, or subsequent impacts eroded it away. There’s evidence of landslides on the surface as well, which could eventually erase the features of a crater. This seems most likely to me. We’ve seen other craters on Ceres with central peaks, but I don’t think any yet this size. Given all the evidence, though, this is the way I’d lean. But I’m simply guessing. We’re just now seeing this strange feature, and it’ll be a while, I suspect, before planetary scientists can get enough data to understand it better. Note that Dawn, the spacecraft now orbiting Ceres that took this picture, is still in a relatively high surveying orbit, 4,400 km above the surface. It’ll be dropping down to get much higher resolution images in the coming months. Hopefully then we’ll get some definitive answers to these mysteries. Ceres is odd. We know there’s ice under the surface, and there’s evidence it also has geysers, eruptions of water, from its surface. That might explain the white spots, too, but there’s still a long way to go to figure all this out. Ceres is the largest asteroid in the asteroid belt (some call it a dwarf planet; I find the term not terribly useful). It’s unique in that sense, and big enough to have geological processes on it and in it we haven’t fully grasped yet. It’s not Earth, for sure, but it’s far more than a simple monolithic rock in space. It’s a world. And with a surface area of nearly 3 million square kilometers, there’s a lot of it to explore.
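As a quick sanity check on that closing figure, the surface area of a sphere with Ceres' mean radius (roughly 470 km, a round number assumed here rather than taken from the post) does indeed come out at just under 3 million square kilometers:

```python
import math

radius_km = 470.0  # approximate mean radius of Ceres (assumed round number)
area_km2 = 4.0 * math.pi * radius_km ** 2
print(f"{area_km2:,.0f} km^2")  # ~2,776,000 km^2 -- "nearly 3 million"
```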
The Habitable Planet: A Systems Approach to Environmental Science Earth’s Changing Climate Online Textbook Unit 12 // Section 1 For the past 150 years, humans have been performing an unprecedented experiment on Earth’s climate. Human activities, mainly fossil fuel combustion, are increasing concentrations of greenhouse gases (GHGs) in the atmosphere. These gases are trapping infrared radiation emitted from the planet’s surface and warming the Earth. Global average surface temperatures have risen about 0.7°C (1.4°F) since the early 20th century. Earth’s climate is a complex system that is constantly changing, but the planet is warmer today than it has been for thousands of years, and current atmospheric carbon dioxide (CO2) levels have not been equaled for millions of years. As we will see below, ancient climate records offer some clues about how a warming world may behave. They show that climate shifts may not be slow and steady; rather, temperatures may change by many degrees within a few decades, with drastic impacts on plant and animal life and natural systems. And if CO2 levels continue to rise at projected rates, history suggests that the world will become drastically hotter than it is today, possibly hot enough to melt much of Earth’s existing ice cover. Figure 1 depicts projected surface temperature changes through 2060 as estimated by NASA’s Global Climate Model. Figure 1. Surface air temperature increase, 1960 to 2060 Source: © National Aeronautics and Space Administration. Past climate changes were driven by many different types of naturally-occurring events, from variations in Earth’s orbit to volcanic eruptions. Since the start of the industrial age, human activities have become a larger influence on Earth’s climate than other natural factors. High CO2 levels (whether caused by natural phenomena or human activities) are a common factor between many past climate shifts and the warming we see today. Many aspects of climate change, such as exactly how quickly and steadily it will progress, remain uncertain. However, there is a strong scientific consensus that current trends in GHG emissions will cause substantial warming by the year 2100, and that this warming will have widespread impacts on human life and natural ecosystems. Many impacts have already been observed, including higher global average temperatures, rising sea levels (water expands as it warms), and changes in snow cover and growing seasons in many areas. A significant level of warming is inevitable due to GHG emissions that have already been released, but we have options to limit the scope of future climate change—most importantly, by reducing fossil fuel consumption (for more details, see Unit 10, “Energy Challenges”). Other important steps to mitigate global warming include reducing the rate of global deforestation to preserve forest carbon sinks and finding ways to capture and sequester carbon dioxide emissions instead of releasing them to the atmosphere. (These responses are discussed in Unit 13, “Looking Forward: Our Global Experiment.”) 2. Tipping Earth's Energy Balance Unit 12 // Section 2 Earth’s climate is a dynamic system that is driven by energy from the sun and constantly impacted by physical, biological, and chemical interactions between the atmosphere, global water supplies, and ecosystems (Fig. 2). Figure 2. Components and interactions of the global climate system Source: © Intergovernmental Panel on Climate Change 2001: Synthesis Report, SYR Figure 2-4. 
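The natural greenhouse effect described above can be made quantitative with the standard zero-dimensional energy balance (a supplementary worked equation, not part of the original unit; here \(S_0 \approx 1361\ \mathrm{W/m^2}\) is the solar constant, \(\alpha \approx 0.3\) the planetary albedo, and \(\sigma\) the Stefan-Boltzmann constant):

\[
T_e = \left( \frac{S_0 (1-\alpha)}{4\sigma} \right)^{1/4}
    \approx \left( \frac{1361 \times 0.7}{4 \times 5.67 \times 10^{-8}} \right)^{1/4} \mathrm{K}
    \approx 255\ \mathrm{K}.
\]

Earth's observed mean surface temperature is about 288 K, so the natural greenhouse effect keeps the surface roughly 33 K warmer than this airless-planet estimate; the anthropogenic warming discussed in this unit is a strengthening of that same blanket.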
As discussed in Unit 2, "Atmosphere," energy reaches Earth in the form of solar radiation from the sun. Water vapor, clouds, and other heat-trapping gases create a natural greenhouse effect by holding heat in the atmosphere and preventing its release back to space. In response, the planet's surface warms, increasing the heat emitted so that the energy released back from Earth into space balances what the Earth receives as visible light from the sun (Fig. 3). Today, with human activities boosting atmospheric GHG concentrations, the atmosphere is retaining an increasing fraction of energy from the sun, raising the earth's surface temperature. This extra impact from human activities is referred to as anthropogenic climate change. Figure 3. Earth's energy balance Source: Courtesy of Jared T. Williams. © Dan Schrag, Harvard University. Many GHGs, including water vapor, ozone, CO2, methane (CH4), and nitrous oxide (N2O), are present naturally. Others are synthetic chemicals that are emitted only as a result of human activity, such as chlorofluorocarbons (CFCs), hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), and sulfur hexafluoride (SF6). Important human activities that are raising atmospheric GHG concentrations include:
- fossil fuel combustion (CO2 and small quantities of methane and N2O);
- deforestation (CO2 releases from forest burning, plus lower forest carbon uptake);
- landfills (methane) and wastewater treatment (methane, N2O);
- livestock production (methane, N2O);
- rice cultivation (methane);
- fertilizer use (N2O); and
- industrial processes (HFCs, PFCs, SF6).
Measuring CO2 levels at Mauna Loa, Hawaii, and other pristine air locations, climate scientist Charles David Keeling traced a steady rise in CO2 concentrations from less than 320 parts per million (ppm) in the late 1950s to 380 ppm in 2005 (Fig. 4). Yearly oscillations in the curve reflect seasonal cycles in the northern hemisphere, which contains most of Earth's land area. Plants take up CO2 during the growing season in spring and summer and then release it as they decay in fall and winter. Figure 4. Atmospheric CO2 concentrations, 1958-2005 Source: © 2005. National Aeronautics and Space Administration. Earth Observatory. Global CO2 concentrations have increased by one-third from their pre-industrial levels, rising from 280 parts per million before the year 1750 to 377 ppm today. Levels of methane and N2O, the most influential GHGs after CO2, also increased sharply in the same time period (see Table 1 below). If there are so many GHGs, why does CO2 get most of the attention? The answer is a combination of CO2's abundance and its residence time in the atmosphere. CO2 accounts for about 0.04 percent of the atmosphere, substantially more than all other GHGs except for water vapor, which may comprise up to 7 percent depending on local conditions. However, water vapor levels vary constantly because so much of the Earth's surface is covered by water and water vapor cycles into and out of the atmosphere very quickly—usually in less than 10 days. Therefore, water vapor can be considered a feedback that responds to the levels of other greenhouse gases, rather than an independent climate forcing (footnote 1). Other GHGs contribute more to global climate change than CO2 on a per-unit basis, although their relative impacts vary with time. The global warming potential (GWP) of a given GHG expresses its estimated climate impact over a specific period of time compared to an equivalent amount by weight of carbon dioxide.
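To illustrate how GWP values are applied in practice, here is a minimal sketch (the function and the example inventory are invented for illustration; the GWP numbers are the 100-year values from Table 1 below) that converts a mixed basket of emissions into CO2-equivalent tons:

```python
# 100-year global warming potentials (Table 1 below).
GWP_100 = {"CO2": 1, "CH4": 23, "N2O": 296, "SF6": 22200}

def co2_equivalent(emissions_tons):
    """Convert per-gas emissions (tons) into total CO2-equivalent tons."""
    return sum(mass * GWP_100[gas] for gas, mass in emissions_tons.items())

# Hypothetical inventory: 1,000 t CO2, 10 t CH4, 1 t N2O.
print(co2_equivalent({"CO2": 1000, "CH4": 10, "N2O": 1}))  # 1000 + 230 + 296 = 1526
```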
For example, the current 100-year GWP for N2O is 296, which indicates that one ton of N2O will have the same global warming effect over 100 years as 296 tons of CO2. Internationally agreed GWP values are periodically adjusted to reflect current research on GHGs' behavior and impacts in the atmosphere. However, CO2 is still the most important greenhouse gas because it is emitted in far larger quantities than other GHGs. Atmospheric concentrations of CO2 are measured in parts per million, compared to parts per billion or per trillion for other gases, and CO2's atmospheric lifetime is 50 to 200 years, significantly longer than that of most GHGs. As illustrated in Table 1, the total extent to which CO2 has raised the global temperature (referred to as radiative forcing and measured in watts per square meter) since 1750 is significantly larger than the forcing from other gases.

| Gas | Pre-1750 concentration | Current tropospheric concentration | 100-year GWP | Atmospheric lifetime (years) | Increased radiative forcing (W/m²) |
| --- | --- | --- | --- | --- | --- |
| Carbon dioxide | 280 parts per million | 377.3 parts per million | 1 | Variable (up to 200 years) | 1.66 |
| Methane | 688-730 parts per billion | 1,730-1,847 parts per billion | 23 | 12 | 0.5 |
| Nitrous oxide | 270 parts per billion | 318-319 parts per billion | 296 | 114 | 0.16 |
| Tropospheric ozone | 25 | 34 | Not applicable due to short residence time | Hours to days | 0.35 |
| Industrial gases (HFCs, PFCs, halons) | 0 | Up to 545 parts per trillion | Ranges from 140 to 12,000 | Primarily between 5 and 260 years | 0.34 for all halocarbons collectively |
| Sulfur hexafluoride | 0 | 5.22 parts per trillion | 22,200 | 3,200 | 0.002 |

Table 1. Current greenhouse gas concentrations. Source: T.J. Blasing and Karmen Smith, "Current Greenhouse Gas Concentrations," Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, July 2006.

A look at current emissions underlines the importance of CO2. In 2003 developed countries emitted 11.6 billion metric tons of CO2, nearly 83 percent of their total GHG emissions. Developing countries' reported emissions were smaller in absolute terms, but CO2 accounted for a similarly large share of their total GHG output (footnote 2). In 2004, CO2 accounted for 85 percent of total U.S. GHG emissions, compared to 7.8 percent from methane, 5.4 percent from N2O, and 2 percent from industrial GHGs (footnote 3). These emissions from human activities may reshape the global carbon cycle. As discussed in Units 2 ("Atmosphere") and 3 ("Oceans"), roughly 60 percent of CO2 emissions from fossil fuel burning remain in the atmosphere, with about half of the remaining 40 percent absorbed by the oceans and half by terrestrial ecosystems. However, there are limits to the amount of anthropogenic carbon that these sinks can take up. Oceans are constrained by the rate of mixing between upper and lower layers, and there are physical bounds on plants' ability to increase their photosynthesis rates as atmospheric CO2 levels rise and the world warms. Scientists are still trying to estimate how much carbon these sinks can absorb, but it appears clear that oceans and land sinks cannot be relied on to absorb all of the extra CO2 emissions that are projected in the coming century. This issue is central to projecting future impacts of climate change because emissions that end up in the atmosphere, rather than being absorbed by land or ocean sinks, warm the earth. 3.
Climate Change: What the Past Tells Us Unit 12 // Section 3 Throughout much of its 4.5 billion year history, Earth’s climate has alternated between periods of warmth and relative cold, each lasting for tens to hundreds of millions of years. During the warmest periods, the polar regions of the world were completely free of ice. Earth also has experienced repeated ice ages—periods lasting for millions of years, during which ice sheets advanced and retreated many times over portions of the globe. During the most extreme cold phases, snow and ice covered the entire globe (for more details, see Unit 1, “Many Planets, One Earth”). From the perspective of geological time, our planet is currently passing through a relatively cold phase in its history and has been cooling for the past 35 million years, a trend that is only one of many swings between hot and cold states over the last 500 million years. During cold phases glaciers and snow cover have covered much of the mid-latitudes; in warm phases, forests extended all the way to the poles (Fig. 5). Figure 5. Ice sheet advance during the most recent ice age Source: Courtesy National Oceanic and Atmospheric Administration Paleoclimatology Program. Scientists have analyzed paleoclimate records from many regions of the world to document Earth’s climate history. Important sources of information about past climate shifts include: - Mineral deposits in deep sea beds. Over time, dissolved shells of microscopic marine organisms create layers of chalk and limestone on sea beds. Analyzing the ratio of oxygen-18 (a rare isotope) to oxygen-16 (the common form) indicates whether the shells were formed during glacial periods, when more of the light isotope evaporated and rained down, or during warm periods. - Pollen grains trapped in terrestrial soils. Scientists use radiocarbon dating to determine what types of plants lived in the sampled region at the time each layer was formed. Changes in vegetation reflect surface temperature changes. - Chemical variations in coral reefs. Coral reefs grow very slowly over hundreds or thousands of years. Analyzing their chemical composition and determining the time at which variations in corals’ makeup occurred allows scientists to create records of past ocean temperatures and climate cycles. - Core samples from polar ice fields and high-altitude glaciers. The layers created in ice cores by individual years of snowfall, which alternate with dry-season deposits of pollen and dust, provide physical timelines of glacial cycles. Air bubbles in the ice can be analyzed to measure atmospheric CO2 levels at the time the ice was laid down. Understanding the geological past is key to today’s climate change research for several reasons. First, as the next sections will show, Earth’s climate history illustrates how changing GHG levels and temperatures in the past shaped climate systems and affected conditions for life. Second, researchers use past records to tune climate models and see whether they are accurately estimating dynamics like temperature increase and climate feedbacks. The more closely a model can replicate past climate conditions, the more accurate its future predictions are likely to be. 4. Past Warming: The Eocene Epoch Unit 12 // Section 4 Scientists have looked far back in time to find a period when atmospheric GHG concentrations were as high as they could rise in coming decades if current emission trends continue. 
The Eocene epoch, which lasted from 55 million to 38 million years ago, was the most recent time when scientists think that CO2 was higher than 500 parts per million. Fossil evidence shows that Earth was far warmer during the Eocene than it is now. Tropical trees grew over much larger ranges to the north and south than they occupy today. Palm trees grew as far north as Wyoming and crocodiles swam in warm ocean water off Greenland. Early forms of modern mammals appeared, including small creatures such as cat-sized horses whose size made them well adapted to a warm climate (Fig. 6). Without ice cover at the poles, sea levels were nearly 100 meters higher than today. The deep ocean, which today is near freezing, warmed to over 12°C. Figure 6. Phenacodus, a sheep-sized herbivore found in the Eocene era Source: Courtesy Wikimedia Commons. Public Domain. Scientists cannot measure CO2 levels during the Eocene—there are no ice cores because there is no ice this old—but from indirect measurements of ocean chemistry, they estimate that atmospheric CO2 levels were three to ten times higher than pre-industrial levels (280 ppm). These concentrations were probably related to a sustained increase in CO2 released from volcanoes over tens of millions of years. Because this climate persisted for tens of millions of years, living species and the climate system had time to adapt to warm, moist conditions. If humans release enough GHGs into the atmosphere to create Eocene-like conditions in the next several centuries, the transition will be much more abrupt, and many living organisms—especially those that thrive in cold conditions—will have trouble surviving the shift. A troubling lesson from the Eocene is that scientists are unable to simulate Eocene climate conditions using climate models designed for the modern climate. When CO2levels are raised in the computer models to levels appropriate for what scientists think existed during the Eocene, global temperatures rise but high latitude temperatures do not warm as much as what scientists measure, particularly in winter. Some scientists believe that this is because there are unrecognized feedbacks in the climate system involving types of clouds that only form when CO2 levels are very high. If this theory is correct, the future climate could warm even more in response to the anthropogenic release of CO2 than most models predict. The beginning of the Eocene also hosted a shorter event that may be the best natural analog for what humans are doing to the climate system today. Fifty-five million years ago a rapid warming episode called the Paleocene-Eocene Thermal Maximum (PETM) occurred, in which Earth’s temperature rose by 5 to 6°C on average within 10,000 to 30,000 years. Several explanations have been proposed for this large, abrupt warming, all of which involve a massive infusion of GHGs into the atmosphere, resulting in a trebling or perhaps a quadrupling of CO2 concentrations, not unlike what is predicted for CO2 levels by 2100 (footnote 4). 5. Global Cooling: The Pleistocene Epoch Unit 12 // Section 5 During the Pleistocene epoch, which began about 2 million years ago, Earth’s average temperature has always been cold enough to maintain ice at high latitudes. But Pleistocene climate has not been constant: ice coverage has fluctuated dramatically, with continental ice sheets advancing and retreating over large parts of North America and Europe. 
These peak glacial periods are often referred to as "Ice Ages" or "Glacial Maxima." During the Pleistocene, Earth has experienced more than 30 swings between prolonged glacial periods and brief warmer interglacial phases like the one we live in today. Glacial advances and retreats shaped Earth's topography, soils, flora, and fauna (Fig. 7). During glaciation events, huge volumes of water were trapped in continental ice sheets, lowering sea levels as much as 130 meters and exposing land between islands and across continents. These swings often changed ocean circulation patterns. During the most extreme cold phases, ice covered up to 30 percent of the Earth's surface. Figure 7. Pleistocene glacial deposits in Illinois Source: Courtesy Illinois State Geological Survey. As glaciers advanced and retreated at high latitudes, ecosystems at lower latitudes evolved to adapt to prevailing climate conditions. In North America, just south of the advancing glaciers, a unique type of grass steppe supported distinctive cold-adapted fauna dominated by large mammals such as the mammoth, woolly rhinoceros, and dire wolf. Why did Pleistocene temperatures swing back and forth so dramatically? Scientists point to a combination of factors. One main cause is variations in Earth's orbit around the sun. These variations, which involve the tilt of the Earth's pole of rotation and the ellipticity of the Earth's orbit, have regular timescales of 23,000, 41,000, and 100,000 years and cause small changes in the distribution of solar radiation received on the Earth (footnote 5). The possibility that these subtle variations could drive changes in climate was first proposed by Scottish scientist James Croll in the 1860s. In the 1930s, Serbian astronomer Milutin Milankovitch developed this idea further. Milankovitch theorized that variations in summer temperature at high latitudes were what drove ice ages—specifically, that cool summers kept snow from melting and allowed glaciers to grow. However, changes in summer temperature due to orbital variations are too small to cause large climate changes by themselves. Positive feedbacks are required to amplify the small changes in solar radiation. The two principal feedbacks are changes in Earth's albedo (the amount of light reflected from the Earth's surface) from snow and ice buildup and in the amount of CO2 in the atmosphere. Ice core samples from the Vostok station and the European Project for Ice Coring in Antarctica (EPICA) document that CO2 levels have varied over glacial cycles. From bubbles trapped in the ice, scientists can measure past concentrations of atmospheric CO2. The ice's chemical composition can also be used to measure past surface temperatures. Taken together, these records show that temperature fluctuations through glacial cycles over the past 650,000 years have been accompanied by shifts in atmospheric CO2. GHG concentrations are high during warm interglacial periods and are low during glacial maxima. The ice cores also show that atmospheric CO2 concentrations never exceeded 300 parts per million—and therefore that today's concentration is far higher than what has existed for the last 650,000 years (Fig. 8). Figure 8. Vostok ice-core CO2 record Source: © Jean-Marc Barnola et al., Oak Ridge National Laboratory. One important lesson from ice cores is that climate change is not always slow or steady.
Records from Greenland show that throughout the last glacial period, from about 60,000 to 20,000 years ago, abrupt warming and cooling swings called Dansgaard-Oeschger, or D-O, events took place in the North Atlantic. In each cycle temperatures on ice sheets gradually cooled, then abruptly warmed by as much as 20°C, sometimes within less than a decade. Temperatures would then decline gradually over a few hundred to a few thousand years before abruptly cooling back to full glacial conditions. Similar climate fluctuations have been identified in paleoclimate records from as far away as China. These sharp flips in the climate system have yet to be explained. Possible causes include changes in solar output or in sea ice levels around Greenland. But they are powerful evidence that when the climate system reaches certain thresholds, it can jump very quickly from one state to another. At the end of the Younger Dryas—a near-glacial phase that started about 12,800 years ago and lasted for about 1,200 years—annual mean temperatures increased by as much as 10°C in ten years (footnote 6). 6. Present Warming and the Role of CO2 Unit 12 // Section 6 There is clear evidence from many sources that the planet is heating up today and that the pace of warming may be increasing. Earth has been in a relatively warm interglacial phase, called the Holocene Period, since the last ice age ended roughly 10,000 years ago. Over the past thousand years average global temperatures have varied by less than one degree—even during the so-called “Little Ice Age,” a cool phase from the mid-fourteenth through the mid-nineteenth centuries, during which Europe and North America experienced bitterly cold winters and widespread crop failures. Over the past 150 years, however, global average surface temperatures have risen, increasing by 0.6°C +/- 0.2°C during the 20th century. This increase is unusual because of its magnitude and the rate at which it has taken place. Nearly every region of the globe has experienced some degree of warming in recent decades, with the largest effects at high latitudes in the Northern Hemisphere. In Alaska, for example, temperatures have risen three times faster than the global average over the past 30 years. The 1990s were the warmest decade of the 20th century, with 1998 the hottest year since instrumental record-keeping began a century ago, and the ten warmest years on record have all occurred since 1990 (Fig. 9). Figure 9. Global temperature record Source: Courtesy Phil Jones. © Climactic Research Unit, University of East Anglia and the U.K. Met. Office Hadley Centre. As temperatures rise, snow cover, sea ice, and mountain glaciers are melting. One piece of evidence for a warming world is the fact that tropical glaciers are melting around the globe. Temperatures at high altitudes near the equator are very stable and do not usually fluctuate much between summer and winter, so the fact that glaciers are retreating in areas like Tanzania, Peru, Bolivia, and Tibet indicates that temperatures are rising worldwide. Ice core samples from these glaciers show that this level of melting has not occurred for thousands of years and therefore is not part of any natural cycle of climate variability. Paleoclimatologist Lonnie Thompson of Ohio State University, who has studied tropical glaciers in South America, Asia, and Africa, predicts that glaciers will disappear from Kilimanjaro in Tanzania and Quelccaya in Peru by 2020. 
“The fact that every tropical glacier is retreating is our warning that the system is changing.” Lonnie Thompson, Ohio State University Rising global temperatures are raising sea levels due to melting ice and thermal expansion of warming ocean waters. Global average sea levels rose between 0.12 and 0.22 meters during the 20th century, and global ocean heat content increased. Scientists also believe that rising temperatures are altering precipitation patterns in many parts of the Northern Hemisphere (footnote 7). Because the climate system involves complex interactions between oceans, ecosystems, and the atmosphere, scientists have been working for several decades to develop and refine General Circulation Models (also known as Global Climate Models), or GCMs, highly detailed models typically run on supercomputers that simulate how changes in specific parameters alter larger climate patterns. The largest and most complex type of GCMs are coupled atmosphere-ocean models, which link together three-dimensional models of the atmosphere and the ocean to study how these systems impact each other. Organizations operating GCMs include the National Aeronautic and Space Administration (NASA)’s Goddard Institute for Space Studies and the United Kingdom’s Hadley Centre for Climate Prediction and Research (Fig. 10). Source: © Crown copyright 2006, data supplied by the Met Office. Researchers constantly refine GCMs as they learn more about specific components that feed into the models, such as conditions under which clouds form or how various types of aerosols scatter light. However, predictions of future climate change by existing models have a high degree of uncertainty because no scientists have ever observed atmospheric CO2 concentrations at today’s levels. Modeling climate trends is complicated because the climate system contains numerous feedbacks that can either magnify or constrain trends. For example, frozen tundra contains ancient carbon and methane deposits; warmer temperatures may create positive feedback by melting frozen ground and releasing CO2 and methane, which cause further warming. Conversely, rising temperatures that increase cloud formation and thereby reduce the amount of incoming solar radiation represent negative feedback. One source of uncertainty in climate modeling is the possibility that the climate system may contain feedbacks that have not yet been observed and therefore are not represented in existing GCMs. Scientific evidence, including modeling results, indicates that rising atmospheric concentrations of CO2 and other GHGs from human activity are driving the current warming trend. As the previous sections showed, prior to the industrial era atmospheric CO2 concentrations had not risen above 300 parts per million for several hundred thousand years. But since the mid-18th century, CO2 levels have risen steadily. In 2007 the Intergovernmental Panel on Climate Change (IPCC), an international organization of climate experts created in 1988 to assess evidence of climate change and make recommendations to national governments, reported that CO2 levels had increased from about 280 ppm before the industrial era to 379 ppm in 2005. The present CO2 concentration is higher than any level over at least the past 420,000 years and is likely the highest level in the past 20 million years. 
During the same time span, atmospheric methane concentrations rose from 715 parts per billion (ppb) to 1,774 ppb and N2O concentrations increased from 270 ppb to 319 ppb. Do these rising GHG concentrations explain the unprecedented warming that has taken place over the past century? To answer this question scientists have used climate models to simulate climate responses to natural and anthropogenic forcings. The best matches between predicted and observed temperature trends occur when these studies simulate both natural forcings (such as variations in solar radiation levels and volcanic eruptions) and anthropogenic forcings (GHG and aerosol emissions) (Fig. 11). Taking these findings and the strength of various forcings into account, the IPCC stated in 2007 that Earth's climate was unequivocally warming and that most of the warming observed since the mid-20th century was "very likely" (meaning a probability of more than 90 percent) due to the observed increase in anthropogenic GHG emissions (footnote 9). Figure 11. Comparison between modeled and observed temperature rise since the year 1860 Source: © Intergovernmental Panel on Climate Change, Third Assessment Report, 2001. Working Group 1: The Scientific Basis, Figure 1.1. Aerosol pollutants complicate climate analyses because they make both positive and negative contributions to climate forcing. As discussed in Unit 11, "Atmospheric Pollution," some aerosols such as sulfates and organic carbon reflect solar energy back from the atmosphere into space, causing negative forcing. Others, like black carbon, absorb energy and warm the atmosphere. Aerosols also impact climate indirectly by changing the properties of clouds—for example, serving as nuclei for condensation of cloud particles or making clouds more reflective. Researchers had trouble explaining why global temperatures cooled for several decades in the mid-20th century until positive and negative forcings from aerosols were integrated into climate models. These calculations and observations of natural events showed that aerosols do offset some fraction of GHG emissions. For example, the 1991 eruption of Mount Pinatubo in the Philippines, which injected 20 million tons of SO2 into the stratosphere, reduced Earth's average surface temperature by up to 1.3°F annually for the following three years (footnote 10). But cooling from aerosols is temporary because they have short atmospheric residence times. Moreover, aerosol concentrations vary widely by region and sulfate emissions are being reduced in most industrialized countries to address air pollution. Although many questions remain to be answered about how various aerosols are formed and contribute to radiative forcing, they cannot be relied on to offset CO2 emissions in the future. 7. Observed Impacts of Climate Change Unit 12 // Section 7 Human-induced climate change has already had many impacts. As noted above, global average surface temperatures rose by 0.6°C +/- 0.2°C and sea levels rose by 0.12 to 0.22 meters during the 20th century.
Other observed changes in Earth systems that are consistent with anthropogenic climate change include: - Decreases by about two weeks in the duration of ice cover on rivers and lakes in the mid- and high latitudes of the Northern Hemisphere over the 20th century; - Decreases by 10 percent in the area of snow cover since satellite images became available in the 1960s; - Thinning by 40 percent of Arctic sea ice in late summer to early autumn in recent decades, and decrease by 10 to 15 percent in extent in spring and summer since the 1950s (Fig. 12); - Widespread retreat of non-polar glaciers; - Increases by about 1 to 4 days per decade in growing seasons in the Northern Hemisphere, especially at higher latitudes, during the last 40 years; and - Thawing, warming, and degrading of permafrost in some regions (footnote 11) Figure 12. Arctic sea ice coverage, 1979 and 2003 Source: ©National Aeronautic and Space Administration. The Earth is not warming uniformly. Notably, climate change is expected to affect the polar regions more severely. Melting snow and ice expose darker land and ocean surfaces to the sun, and retreating sea ice increases the release of solar heat from oceans to the atmosphere in winter. Trends have been mixed in Antarctica, but the Arctic is warming nearly twice as rapidly as the rest of the world; winter temperatures in Alaska and western Canada have risen by up to 3–4°C in the past 50 years, and Arctic precipitation has increased by about 8 percent over the past century (mostly as rain) (footnote 12). Observed climate change impacts are already affecting Earth’s physical and biological systems. Many natural ecosystems are vulnerable to climate change impacts, especially systems that grow and adapt slowly. For example, coral reefs are under serious stress from rapid ocean warming. Recent coral bleaching events in the Caribbean and Pacific oceans have been correlated with rising sea surface temperatures over the past century (footnote 13). Some natural systems are more mobile. For example, tree species in New England such as hemlock, white pine, maple, beech, and hickory have migrated hundreds of meters per year in response to warming and cooling phases over the past 8,000 years (footnote 14). But species may not survive simply by changing their ranges if other important factors such as soil conditions are unsuitable in their new locations. Insects, plants, and animals may respond to climate change in many ways, including shifts in range, alterations of their hibernation, migrating, or breeding cycles, and changes in physical structure and behavior as temperature and moisture conditions alter their immediate environments. A recent review of more than 40 studies that assessed the impacts of climate change on U.S. ecosystems found broad impacts on plants, animals, and natural ecosystem processes. Important trends included: - Earlier spring events (emergence from hibernation, plant blooming, and the onset of bird and amphibian breeding cycles); - Insect, bird, and mammal range shifts northward and to higher elevations; and - Changes in the composition of local plant and animal communities favoring species that are better adapted to warming conditions (higher temperatures, more available water, and higher CO2 levels). Because many natural ecosystems are smaller, more isolated, and less genetically diverse today than in the past, it may be increasingly difficult for them to adapt to climate change by migrating or evolving, the review’s authors concluded (footnote 15). 
This is especially true if climate shifts happen abruptly so that species have less response time, or if species are adapted to unique environments (Fig. 13).

Figure 13. Polar bear hunting on Arctic sea ice. Source: © Greenpeace/Beltra.

8. Other Potential Near-Term Impacts
Unit 12 // Section 8

In its 2007 assessment report the IPCC projected that global average surface temperatures for the years 2090 to 2099 will rise by 1.1 to 6.4°C over 1980 to 1999 values. The greatest temperature increases will occur over land and at high northern latitudes, with less warming over the southern oceans and the North Atlantic (footnote 16). This rate of warming, driven primarily by fossil fuel consumption, would be much higher than the changes that were observed in the 20th century and probably unprecedented over at least the past 10,000 years.

Based on projections like this, along with field studies of current impacts, scientists forecast many significant effects from global climate change in the next several decades, although much uncertainty remains about where these impacts will be felt worldwide and how severe they will be. Climate change is likely to alter hydrologic cycles and weather patterns in many ways, such as shifting storm tracks, increasing or reducing annual rainfall from region to region, and producing more extreme weather events such as storms and droughts (Fig. 14). While precipitation trends vary widely over time and area, total precipitation increased during the 20th century over land in high-latitude regions of the Northern Hemisphere and decreased in tropical and subtropical regions (footnote 17).

Figure 14. Flooding in New Orleans after Hurricane Katrina, 2005. Source: © National Oceanic and Atmospheric Administration.

Rising temperatures and changing hydrological cycles are likely to have many impacts, although it is hard to predict changes in specific regions—some areas will become wetter and some drier. Storm tracks may shift, causing accustomed weather patterns to change. These changes may upset natural ecosystems, potentially leading to species losses. They also could reduce agricultural productivity if new temperature and precipitation patterns are less than optimal for major farmed crops (for example, if rainfall drops in the U.S. corn belt). Some plant species may migrate north to more suitable ecosystems—for example, a growing fraction of the sugar maple industry in the northeastern United States is already moving into Canada—but soils and other conditions may not be as appropriate in these new zones.

Some natural systems could benefit from climate change at the same time that others are harmed. Crop yields could increase in mid-latitude regions where temperatures rise moderately, and winter conditions may become more moderate in middle and high latitudes. A few observers argue that rising CO2 levels will produce a beneficial global “greening,” but climate change is unlikely to increase overall global productivity. Research by Stanford University ecologist Chris Field indicates that elevated CO2 can actually keep plants from increasing their growth rates, perhaps by limiting their ability to use other components that are essential for growth, such as nutrients. This finding suggests that terrestrial ecosystems may take up less carbon in a warming world than they do today, not more. Undesirable species may also benefit from climate change.
Rising temperatures promote the spread of mosquitoes and other infectious disease carriers that flourish in warmer environments or are typically limited by cold winters (Fig. 15). Extreme weather events can create conditions that are favorable for disease outbreaks, such as loss of clean drinking water and sanitation systems. Some vectors are likely to threaten human health, while others can damage forests and agricultural crops.

Figure 15. Infectious diseases affected by climate change. Source: © Climate Change 1995: Impacts, Adaptations and Mitigation of Climate Change: Scientific-Technical Analyses, Working Group 2 to the Second Assessment Report of the IPCC, UNEP, and WMO, Cambridge University Press, 1996.

Melting of polar ice caps and glaciers is already widespread and is expected to continue throughout this century. Since the late 1970s Arctic sea ice has decreased by about 20 percent; in the past several years, this ice cover has begun to melt in winter as well as in summer, and some experts predict that the Arctic could be ice-free by 2100. Ice caps and glaciers contain some 30 million cubic kilometers of water, equal to about 2 percent of the volume of the oceans. Further melting of this land-based ice will drive continued sea-level rise and increase flooding and storm surge levels in coastal regions. Warmer tropical sea surface temperatures are already increasing the intensity of hurricanes, and this trend may accelerate as ocean temperatures rise (footnote 18). Stronger storms coupled with rising sea levels are expected to increase flooding damage in coastal areas worldwide. Some scientists predict that extreme weather events, such as storms and droughts, may become more pronounced, although this view is controversial. In general, however, shifting atmospheric circulation patterns may deliver “surprises” as weather patterns migrate and people experience types of weather that fall outside their range of experience, such as flooding at a level formerly experienced only every 50 or 100 years.

Human societies may already be suffering harmful impacts from global climate change, although it is important to distinguish climate influences from other socioeconomic factors. For example, financial damages from storms in the United States have risen sharply over the past several decades, a trend that reflects both intensive development in coastal areas and the impact of severe tropical storms in those densely populated regions. Human communities clearly are vulnerable to climate change, especially societies that are heavily dependent on natural resources such as forests, agriculture, and fishing; low-lying regions subject to flooding; water-scarce areas in the subtropics; and communities in areas that are subject to extreme events such as heat episodes and droughts. In general, developed nations have more adaptive capacity than developing countries because wealthier countries have greater economic and technical resources and are less dependent on natural resources for income.

And more drastic changes may lie in store. As discussed above, climate records show that the climate can swing suddenly from one state to another within periods as short as a decade. A 2002 report by the National Research Council warned that as atmospheric GHG concentrations rise, the climate system could reach thresholds that trigger sudden drastic shifts, such as changes in ocean currents or a major increase in floods or hurricanes (footnote 19).
“Just as the slowly increasing pressure of a finger eventually flips a switch and turns on a light, the slow effects of drifting continents or wobbling orbits or changing atmospheric composition may ‘switch’ the climate to a new state.”
Richard B. Alley, Chair, Committee on Abrupt Climate Change, National Research Council

How much the planet will warm in the next century, and what kind of impacts will result, depends on how high CO2 concentrations rise. In turn, this depends largely on human choices about fossil fuel consumption. Because fossil fuels account for 80 percent of global energy use, CO2 levels will continue to rise for at least the next 30 or 40 years, so additional impacts are certain to be felt. This means that it is essential both to mitigate global climate change by reducing CO2 emissions and to adapt to the changes that have already been set in motion. (For more on options for mitigating and adapting to climate change, see Unit 13, “Looking Forward: Our Global Experiment.”)

9. Major Laws and Treaties
Unit 12 // Section 9

Science plays a central role in international negotiations to address global climate change. In 1988, the World Meteorological Organization and the United Nations Environment Programme established the Intergovernmental Panel on Climate Change (IPCC), an organization composed of official government representatives that is charged with assessing scientific, technical, and socio-economic information relevant to understanding climate change risks, potential impacts, and mitigation and adaptation options (footnote 20). The IPCC meets regularly to review and assess current scientific literature and issues “assessment reports” at approximately five-year intervals (most recently in 2007). IPCC reports are adopted by consensus and represent a broad cross-section of opinion from many nations and disciplines regarding current understanding of global climate change science. The panel’s recommendations are not binding on governments, but its models and estimates are important starting points for international climate change negotiations.

The most broadly supported international agreement on climate change, the United Nations Framework Convention on Climate Change (FCCC), was opened for signature in 1992 and entered into force in 1994 (footnote 21). To date it has been ratified by 189 countries, including the United States. FCCC signatories pledge to work toward stabilizing atmospheric GHG concentrations “at a level that would prevent dangerous anthropogenic interference with the climate system,” but the Convention does not define that level. As a result, it has not been a significant curb on GHG emissions, although it creates a system for nations to report emissions and share other relevant information and for developed countries to provide financial and technical support for climate change initiatives to developing countries.

Recognizing that the FCCC commitments were not sufficient to prevent serious climate change, governments negotiated the Kyoto Protocol, which commits industrialized countries to binding GHG emission reductions of at least 5 percent below their 1990 levels by the period of 2008–2012 (footnote 22). The Protocol focuses on developed countries, reflecting the fact that they are the source of most GHGs emitted to date, although it allows developed countries to fulfill their reduction commitments partially through projects that reduce or avoid GHG emissions in developing countries.
The Kyoto Protocol entered into force in 2005 and has been ratified to date by 163 countries, representing 61.6 percent of developed countries’ GHG emissions. The United States signed the Protocol but has not ratified it. President George W. Bush argued that the economic impact of its assigned reductions (7 percent below 1990 levels) would be too severe and instead emphasized voluntary domestic reduction commitments. For all of the controversy that it has generated, the Kyoto Protocol alone will not reduce the threat of major climate change because it covers only 40 percent of global GHG emissions without U.S. participation, does not require emission reductions from rapidly developing countries, like India and China, that are major fossil fuel consumers, and only covers emissions through the year 2012. No single option has emerged yet as a follow-on, but analysts widely agree that the next phase of global action against climate change will have to take a longer-term approach, address the costs of reducing GHG emissions, and find ways to help developing countries reap the benefits of economic growth on a lower-carbon pathway than that which industrialized countries followed over the past 150 years. Continually improving our scientific understanding of climate change and its impacts will help nations to identify options for action.

10. Further Reading and Footnotes
Unit 12 // Section 10

Center for Health and the Global Environment, Harvard Medical School, Climate Change Futures: Health, Ecological, and Economic Dimensions (Boston, MA: Harvard Medical School, November 2005).

“The Discovery of Global Warming” (http://www.aip.org/history/climate). Created by Spencer Weart, author of the book of the same title, this site includes detailed essays on the history of climate change science, case studies, and links to relevant scientific and historical publications.

James E. Hansen, “Can We Still Avoid Dangerous Human-Made Climate Change?” February 10, 2006, http://www.columbia.edu/~jeh1/newschool_text_and_slides.pdf. In this speech and accompanying slides, a leading U.S. climate scientist makes the case for action to slow global climate change.

- Water vapor contributes to climate change through an important positive feedback loop: as the atmosphere warms, evaporation from Earth’s surface increases and the atmosphere becomes able to hold more water vapor, which in turn traps more thermal energy and warms the atmosphere further. It also can cause negative feedback when water in the atmosphere condenses into clouds that reflect solar radiation back into space, reducing the total amount of energy that reaches Earth. For more details, see National Oceanic and Atmospheric Administration, “Greenhouse Gases: Frequently Asked Questions.”
- Key GHG Data: Greenhouse Gas Emissions Data for 1990–2003 Submitted to the United Nations Framework Convention on Climate Change (Bonn: United Nations Framework Convention on Climate Change, November 2005), pp. 16, 28.
- U.S. Environmental Protection Agency, “The U.S. Inventory of Greenhouse Gas Emissions and Sinks: Fast Facts,” April 2006.
- John A. Higgins and Daniel P. Schrag, “Beyond Methane: Towards a Theory for the Paleocene-Eocene Thermal Maximum,” Earth and Planetary Science Letters, vol. 245 (2006), pp. 523–537.
- National Oceanic and Atmospheric Administration, Paleoclimatology Branch, “Astronomical Theory of Climate Change,” http://www.ncdc.noaa.gov/paleo/milankovitch.html; Spencer R.
Weart, The Discovery of Global Warming (Cambridge, MA: Harvard University Press, 2003), pp. 74–77.
- “Abrupt Climate Change,” Lamont-Doherty Earth Observatory, Columbia University.
- Intergovernmental Panel on Climate Change, Climate Change 2007: The Physical Science Basis, Summary for Policymakers (Cambridge, UK: Cambridge University Press, 2007), pp. 4–6.
- Ibid., pp. 2–3.
- Ibid., p. 8.
- U.S. Geological Survey, “Impacts of Volcanic Gases on Climate, the Environment, and People,” May 1997, http://pubs.usgs.gov/of/1997/of97-262/of97-262.html.
- IPCC, Climate Change 2001: Synthesis Report, Summary for Policymakers (Cambridge, UK: Cambridge University Press, 2001), p. 6.
- ACIA, Impacts of a Warming Arctic: Arctic Climate Impact Assessment (Cambridge, UK: Cambridge University Press, 2004), p. 12.
- J.E. Weddell, ed., The State of Coral Reef Ecosystems of the United States and Pacific Freely Associated States, 2005, NOAA Technical Memorandum NOS NCCOS 11 (Silver Spring, MD: NOAA/NCCOS Center for Coastal Monitoring and Assessment’s Biogeography Team, 2005), pp. 13–15.
- David R. Foster and John D. Aber, eds., Forests in Time: The Environmental Consequences of 1,000 Years of Change in New England (New Haven: Yale University Press, 2004), pp. 45–46.
- Camille Parmesan and Hector Galbraith, Observed Impacts of Global Climate Change in the U.S. (Arlington, VA: Pew Center on Global Climate Change, 2004), http://www.pewclimate.org/global-warming-in-depth/all_reports/observedimpacts/index.cfm.
- IPCC, Climate Change 2007: The Physical Science Basis, p. 749.
- United Nations Environment Programme, “Observed Climate Trends.”
- Kerry Emanuel, “Increasing Destructiveness of Tropical Cyclones Over the Past 30 Years,” Nature, vol. 436, August 4, 2005, pp. 686–88, and “Anthropogenic Effects on Tropical Cyclone Activity,” http://wind.mit.edu/~emanuel/anthro2.htm.
- National Research Council, Abrupt Climate Change: Inevitable Surprises (Washington, DC: National Academy Press, 2002).
Those are hot young stars in the Large Magellanic Cloud—one of the puppy-dog galaxies that follow the Milky Way around—photographed by the Hubble Space Telescope. (Detail cropped from a Wikipedia image.) Note that four rays seem to emanate from each of the brightest stars. The rays are not, of course, true beams of light radiating in the four cardinal directions. They are an artifact of the telescope’s structure: a diffraction pattern created by the four vanes of the “spider” that supports the secondary mirror within the barrel of the telescope. Many other telescopes have three-vane spiders that yield a six-pointed diffraction pattern.

Recently, in my lovable know-it-all manner, I was holding forth on the idea that this diffraction effect—a mere accident of instrumental design—might actually be the source of the familiar iconographic star, with its five or six angular points. In other words, we think of a star as something spiky, poking out in various directions, because we’re used to seeing telescopic images with this diffractive defect. At right is M. C. Escher’s interpretation of what stellar means. For other examples see the Hollywood Walk of Fame or the flags of the U.S. and the E.U. and those of more than 50 other countries, not to mention Texas.

Well, it turns out my cute idea about the cultural influence of telescopic photos is utterly bogus. If you need any evidence, the engraving reproduced below should suffice. It shows the muse Astronomia (a.k.a. Urania) pointing out the moon and stars to Ptolemy. The stars are five- or six-pointed scribbles that beg to be called asterisks. The engraving appears in the Margarita Philosophica of Gregor Reisch, published in 1504, which is a full century before Galileo turned his telescope to the heavens. Whatever those engraved stars are, they are not artifacts of telescope spider vanes.

The dictionary offers further evidence. For example, the starfish (genus Asterias, class Asteroidea) has had that name at least since 1538. And the asterisk—the typographical mark—has a citation in the OED going all the way back to 1382. These terms make sense only if the concept of a star was already associated in most people’s minds with a spiky polygon, rather than a dimensionless point of light in the night sky.

And that’s what puzzles me, because the stars really do appear to be dimensionless points of light. When I stare at the sky, I see some twinkling going on, but nowhere do I see pentagrams and hexagrams pinned to black velvet, or even the slightest hint of angularity. So where did this tradition get started? Did the Greek word ἀστήρ already convey a sense of symmetrical spikiness, so that ancient Athenians would have understood why we call certain flowers asters? Is the same iconography prevalent in other cultures, say in China? Those 50+ star-studded flags (including China’s) suggest that the conventional stellar icon is at least recognized globally, but they don’t tell us where and when it all began. After my telescopic theory fell apart, I had a second hypothesis, namely that the star icon might come from the symbol-happy world of astrology, but I’ve found no support for this idea either. So I throw the question out to the starry void: How did the star get its points?

Addendum 2011-12-16: The illuminating comments below on ancient Egyptian paintings of stars would appear to settle part of my question: Well over 2,000 years ago, at least some people were already drawing stars in much the same way a modern kindergartner does.
What I’d still like to know is why. Yes, there are many plausible just-so stories, but you’d think that someone at the time might have offered a word of explanation. The other day I spent a pleasant afternoon leafing through The History and Practice of Ancient Astronomy, by James Evans (New York: Oxford University Press, 1998). It’s quite a thorough introduction to Greek and Egyptian ideas about the sky, but I did not find an answer to my question about the points of stars. The astronomers of that period were engrossed in charting the positions and motions of the stars, but one gets the impression they had no interest whatever in the nature of those bright objects—what they look like up close, what they’re made of, why they shine. Of course I don’t really believe the ancients were so lacking curiosity. Surely Aristotle holds forth somewhere on the substance of the stars? But I haven’t found it yet.
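A side note on the spider-vane diffraction mechanism discussed at the top of the post: it is easy to reproduce numerically. The sketch below is my own addition, not the original author's; it computes the far-field (Fraunhofer) diffraction pattern of an annular aperture crossed by four vanes as the squared magnitude of its 2-D Fourier transform. The grid size, aperture radii, and vane width are arbitrary illustrative choices.

```python
import numpy as np

N = 512
y, x = np.indices((N, N)) - N // 2
r = np.hypot(x, y)

# Annular aperture: primary mirror with a central secondary obstruction.
aperture = ((r < 100) & (r > 20)).astype(float)

# Four spider vanes: a cross of thin opaque strips holding the secondary.
aperture[np.abs(x) < 2] = 0.0
aperture[np.abs(y) < 2] = 0.0

# Fraunhofer diffraction: the point-spread function is |FFT(aperture)|^2.
psf = np.abs(np.fft.fftshift(np.fft.fft2(aperture))) ** 2

# The spikes run perpendicular to the vanes: intensity well off the core
# is far higher along the axes than along the diagonals.
c = N // 2
print("axis:", psf[c, c + 60], "diagonal:", psf[c + 42, c + 42])
```

Run with three vanes at 120° instead, and the same computation yields six spikes, since each straight edge diffracts light into a pair of opposed rays; that matches the post's remark about three-vane spiders.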
In a recent test, NASA’s James Webb Space Telescope fully deployed its primary mirror into the same configuration it will have when in space. As Webb progresses towards liftoff in 2021, technicians and engineers have been diligently checking off a long list of final tests the observatory will undergo before being packaged for delivery to French Guiana for launch. Performed in early March, this procedure involved commanding the spacecraft’s internal systems to fully extend and latch Webb’s iconic 21-foot, 4-inch (6.5-meter) primary mirror, appearing just like it would after it has been launched to orbit. The observatory is currently in a cleanroom at Northrop Grumman Space Systems in Redondo Beach, California.

The difficulty and complexity of performing tests for Webb have increased significantly now that the observatory has been fully assembled. Special gravity-offsetting equipment was attached to Webb’s mirror to simulate the zero-gravity environment its mechanisms will have to operate in. Tests like these help safeguard mission success by physically demonstrating that the spacecraft is able to move and unfold as intended. The Webb team will deploy the observatory’s primary mirror only once more on the ground, just before preparing it for delivery to the launch site.

Performed in early March, this most recent test involved commanding the spacecraft’s internal systems to fully extend and latch Webb’s iconic 21-foot, 4-inch (6.5-meter) primary mirror into the same configuration it will have when in space. Credit: NASA/Sophia Roberts

A telescope’s sensitivity, or how much detail it can see, is directly related to the size of the mirror that collects light from the objects being observed. A larger surface area collects more light, just like a larger bucket collects more water in a rain shower than a small one. Webb’s mirror is the biggest of its kind that NASA has ever built. In order to perform groundbreaking science, Webb’s primary mirror needs to be so large that it cannot fit inside any rocket available in its fully extended form. Like the art of origami, Webb is a collection of movable parts employing applied material science that have been specifically designed to fold to a compact formation considerably smaller than when the observatory is fully deployed. This allows it to just barely fit within a 16-foot (5-meter) payload fairing, with little room to spare.

“Deploying both wings of the telescope while part of the fully assembled observatory is another significant milestone showing Webb will deploy properly in space. This is a great achievement and an inspiring image for the entire team,” said Lee Feinberg, optical telescope element manager for Webb at NASA’s Goddard Space Flight Center in Greenbelt, Maryland.

The evolving novel coronavirus (COVID-19) situation is causing significant impact and disruption globally. Given these circumstances, Webb’s Northrop Grumman team in California has resumed integration and testing work with reduced personnel and shifts until the Deployable Tower Assembly deployment planned for April. The project will then shut down integration and testing operations due to the lack of required NASA onsite personnel related to the COVID-19 situation. The project will reassess over the next couple of weeks and adjust decisions as the situation continues to unfold.

The James Webb Space Telescope will be the world’s premier space science observatory when it launches in 2021.
Webb will solve mysteries in our solar system, look beyond to distant worlds around other stars, and probe the mysterious structures and origins of our universe and our place in it. Webb is an international program led by NASA with its partners, ESA (European Space Agency) and the Canadian Space Agency.
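As a rough numerical illustration of the bucket analogy above (a sketch of my own, not from the article: it treats both primaries as ideal filled circles, ignoring segment gaps and obstructions, and the Hubble comparison is an added assumption):

```python
import math

def collecting_area_m2(diameter_m: float) -> float:
    """Light-collecting area of an idealized circular mirror, in m^2."""
    return math.pi * (diameter_m / 2.0) ** 2

webb = collecting_area_m2(6.5)    # Webb's 6.5 m primary
hubble = collecting_area_m2(2.4)  # Hubble's 2.4 m primary, for scale

print(f"Webb ~{webb:.0f} m^2 vs Hubble ~{hubble:.1f} m^2 "
      f"(~{webb / hubble:.0f}x the light-gathering area)")
```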
For decades, scientists have been of the belief that the Moon, Earth’s only natural satellite, was four and a half billion years old. According to this theory, the Moon was created from a fiery cataclysm produced by a collision between the Earth and a Mars-sized object (named Theia) roughly 100 million years after the formation of primordial Earth. But according to a new study by researchers from UCLA (who re-examined some of the Apollo Moon rocks), these estimates may have been off by about 40 to 140 million years. Far from simply adjusting our notions of the Moon’s proper age, these findings are also critical to our understanding of the Solar System and the formation and evolution of its rocky planets.

This study, titled “Early formation of the Moon 4.51 billion years ago”, was published recently in the journal Science Advances. Led by Melanie Barboni – a professor from the Department of Earth, Planetary, and Space Sciences at UCLA – the research team conducted uranium-lead dating on fragments of the Moon rocks that were brought back by the Apollo 14 astronauts. These fragments were of a mineral known as zircon, a silicate that contains trace amounts of radioactive elements (like uranium, thorium, and lutetium). As Kevin McKeegan, a UCLA professor of geochemistry and cosmochemistry and a co-author of the study, explained, “Zircons are nature’s best clocks. They are the best mineral in preserving geological history and revealing where they originated.”

By examining the radioactive decay of these elements, and correcting for cosmic ray exposure, the research team was able to get highly precise estimates of the zircon fragments’ ages. Using one of UCLA’s mass spectrometers, they were able to measure the rate at which the deposits of uranium in the zircon turned into lead, and the deposits of lutetium turned into hafnium. In the end, their data indicated that the Moon formed about 4.51 billion years ago, which places its birth within roughly the first 60 million years of the Solar System.

Previously, dating Moon rocks proved difficult, mainly because most of them contained fragments of many different kinds of rocks, and these samples were determined to be tainted by the effects of multiple impacts. However, Barboni and her team were able to examine eight zircons that were in good condition. More importantly, these silicate deposits are believed to have formed shortly after the collision between Earth and Theia, when the Moon was still an unsolidified mass covered in oceans of magma. As these oceans gradually cooled, the Moon’s body became differentiated between its crust, mantle and core. Because zircon minerals were formed during the initial magma ocean, uranium-lead dating reaches all the way back to a time before the Moon became a solidified mass. As Edward Young, a UCLA professor of geochemistry and cosmochemistry and a co-author of the study, put it, “Mélanie was very clever in figuring out the Moon’s real age dates back to its pre-history before it solidified, not to its solidification.”

These findings have not only determined the age of the Moon with a high degree of accuracy (and for the first time), they also have implications for our understanding of when and how rocky planets formed within the Solar System. By placing accurate dates on when certain bodies formed, we are able to understand the context in which they formed, which also helps to determine what mechanisms were involved.
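To sketch the uranium-lead clock described above: over an interval t, decay of uranium-238 builds up radiogenic lead-206 such that Pb/U = e^(λt) − 1, with decay constant λ ≈ 1.55125 × 10⁻¹⁰ per year. Inverting that relation for a hypothetical measured ratio (the ratio below is chosen for illustration and is not the paper's data) recovers an age near 4.51 billion years.

```python
import math

LAMBDA_U238 = 1.55125e-10  # decay constant of uranium-238, per year

def u_pb_age_years(pb206_u238: float) -> float:
    """Age implied by a measured radiogenic 206Pb/238U ratio: t = ln(1 + Pb/U) / lambda."""
    return math.log(1.0 + pb206_u238) / LAMBDA_U238

# Hypothetical ratio chosen to illustrate a ~4.51-billion-year age:
print(f"{u_pb_age_years(1.013) / 1e9:.2f} billion years")
```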
And this was just the first revelation produced by the research team, which hopes to continue studying the zircon fragments to see what they can learn about the Moon’s early history. Further Reading: UCLA
Two Wide-Angle Imaging Neutral-Atom Spectrometers (TWINS) are a pair of NASA instruments aboard two United States National Reconnaissance Office satellites in Molniya orbits. TWINS was designed to provide stereo images of the Earth's ring current. The first instrument, TWINS-1, was launched aboard USA-184 on 28 June 2006. TWINS-2 followed aboard USA-200 on 13 March 2008. Each instrument consists of an energetic neutral atom (ENA) imager and a Lyman-alpha detector. The ENA imager provides indirect remote sensing of the ring current ions, and the Lyman-alpha detector gives a measure of the neutral hydrogen cloud around the Earth, known as the geocorona. The TWINS prime mission lasted two years, from 2008 to 2010, and has been followed by an extended mission which is ongoing.

Launched as missions of opportunity aboard classified, non-NASA U.S. government spacecraft, TWINS conducts stereoscopic imaging of Earth's magnetosphere. By imaging the charge-exchange neutral atoms over a broad energy range (~1–100 keV) using two identical instruments on two widely spaced, high-altitude, high-inclination spacecraft, TWINS enables the 3-dimensional visualization and the resolution of large-scale structures and dynamics within the magnetosphere for the first time. In contrast to traditional space experiments, which make measurements at only one point in space, imaging experiments provide simultaneous viewing of different regions of the magnetosphere. Stereo imaging, as done by TWINS, takes the next step of producing 3-D images, and provides a leap ahead in our understanding of the global aspects of the terrestrial magnetosphere.

The ENA imagers observe energetic neutrals produced from the global magnetospheric ion population, over an energy range of 1 to 100 keV with high angular (4-degree) and time (about 1-minute) resolution. A Lyman-alpha geocoronal imager is used to monitor the cold exospheric hydrogen atoms that produce ENAs from ions via charge exchange. Complementing these imagers are detectors that measure the local charged-particle environment around the spacecraft. The offset in the orbital phases (apogees at different times) of TWINS 1 and TWINS 2 means that in addition to stereo ENA imaging for several hours twice per day, the two TWINS instruments also obtain essentially continuous magnetospheric observations.

The TWINS instrumentation is essentially the same as the MENA instrument on the IMAGE spacecraft. It consists of a neutral atom imager covering the ~1–100 keV energy range with 4°×4° angular resolution and 1-minute time resolution, and a simple Lyman-alpha imager to monitor the geocorona.

TWINS provides stereo imaging of the Earth's magnetosphere, the region surrounding the planet controlled by its magnetic field and containing the Van Allen radiation belts and other energetic charged particles. TWINS enables three-dimensional global visualization of this region, leading to greatly enhanced understanding of the connections between different regions of the magnetosphere and their relation to the solar wind.

Routine stereo imaging by TWINS began on 15 June 2008, during an extremely weak geomagnetic storm whose Dst index never fell below -40 nT, as compared to a nominal Dst of -100 nT for classification as a storm. During the TWINS prime mission (2008–2010), an extended and unprecedented solar minimum (from solar cycle 23) prevailed, bringing with it very calm magnetospheric conditions ranging from dead quiet to mildly disturbed.
During this time period TWINS observed numerous weak storms, roughly once every 27 days (corresponding to the solar rotation period), triggered by solar co-rotating interaction regions (CIRs). The strongest storm (which was still very mild) observed by TWINS during its prime mission was on 22 July 2009, with Dst reaching a moderate -79 nT. Throughout these extended quiet conditions TWINS images contained ENA signals from both high-altitude (ring current) and low-altitude emission (LAE) regions.

Name    | Launch name | Spacecraft | Launch date (UTC) | Launch site | Rocket           | Orbit
TWINS-1 | TWINS-A     | USA-184    | 28 June 2006      | VAFB SLC-6  | Delta IV-M+(4,2) | 1,138 km × 39,210 km × 63.2°
TWINS-2 | TWINS-B     | USA-200    | 13 March 2008     | VAFB SLC-3E | Atlas V 411      | 1,652 km × 38,702 km × 63.4°

See also: STEREO, two spacecraft launched into heliocentric orbit in 2006 to provide stereographic imagery of the Sun.
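As a quick consistency check on the orbits in the table (a back-of-the-envelope sketch; the mean Earth radius and gravitational parameter are standard values supplied here, not from the article):

```python
import math

MU_EARTH = 3.986e5  # km^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6371.0    # km, mean Earth radius (assumed)

def period_hours(perigee_alt_km: float, apogee_alt_km: float) -> float:
    """Orbital period from Kepler's third law, given perigee/apogee altitudes."""
    a_km = R_EARTH + (perigee_alt_km + apogee_alt_km) / 2.0  # semi-major axis
    return 2.0 * math.pi * math.sqrt(a_km**3 / MU_EARTH) / 3600.0

print(f"TWINS-1: {period_hours(1138.0, 39210.0):.1f} h")  # ~12 h
print(f"TWINS-2: {period_hours(1652.0, 38702.0):.1f} h")  # ~12 h
```

Both come out near 12 hours, the semi-synchronous period characteristic of Molniya orbits.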
What Are the Limits of Physical Law?

Chapters 3 through 5 describe the quest for new physical laws under circumstances in which it is already known that current understanding is incomplete. What of the familiar laws of physics that human beings use on a regular basis? Particle accelerators and telescopes, and bridges and airplanes, are designed and built through the confident application of principles that have been used and tested over centuries. However, this testing has, for the most part, taken place only under the physical conditions that are accessible to humans working on Earth, and there is an intense curiosity about whether these basic principles of physics are valid under more intense conditions. The opportunity provided by contemporary astrophysics is to subject essentially all of this “secure” physics to scrutiny in extreme environments, under pressures, temperatures, energies, and densities orders of magnitude greater than those that can be created within a terrestrial laboratory. What makes this opportunity so timely is a new generation of astronomical instruments (made possible by technological advances) that can measure with precision the conditions that exist in the extreme environments found in the universe.

There are two quite separate reasons for carrying out this program. The first is to check the basic assumptions made when analyzing exotic cosmic objects like white dwarfs, neutron stars, and black holes. For example, the wavelengths of the spectral lines that are emitted by common atoms on Earth have been measured with great precision. Do similar atoms orbiting a massive black hole at nearly the speed of light emit at exactly the same wavelengths? Although there is no strong reason today to doubt this assumption, it must be checked: If it did turn out to be false, then much of the current understanding of the evolving universe and its contents would be seriously undermined.

The second reason for subjecting the laws of physics to extreme tests is even more important—it affords us the chance to discover entirely new laws. For example, when Albert Einstein thought hard about the most basic principles of kinematics, not just in the everyday world but at speeds near that of light, he was led to the special theory of relativity, with its bizarre melding of space and time. Later, once particle accelerators were built, it became possible to give particles extremely high energies and thus to show that supposedly fundamental particles like the proton actually had substructure—quarks and gluons. Accelerators gave birth to the whole new field of particle physics, whose laws are so different from those of classical physics. It will be quite remarkable if more “new” physics is not uncovered by probing matter under more extreme conditions.

Fortunately, the approaches to satisfying these twin reasons are identical. The problem must be attacked from both ends: on the one hand, by using the universe as a giant cosmic laboratory and watching it perform experiments, and on the other, by carrying out controlled experiments on Earth that are tailored to simulate, as closely as possible, astrophysical conditions. Neither approach by itself is complete. The cosmic laboratory includes the astrophysicist only as a silent witness, a decoder of distant events from fragmentary clues, rather like a historian or an archaeologist. The experimental physicist has more immediate control but cannot recreate the extraordinary range of conditions that occur in the universe.
The two approaches are therefore complementary and should be pursued in parallel. Particle accelerators exist on Earth that can raise protons to energies 1,000 times greater than their energies at rest. Many trillions of particles can be accelerated, from which a few very rare and valuable events are culled. However, for the foreseeable future, building a terrestrial accelerator with sufficient energy to explore directly the unification of all the forces is inconceivable. By contrast, cosmic ray protons are created in distant astronomical sources with energies some 300 million times greater than those produced by the largest particle accelerators on Earth (see Box 6.1). These collide with atoms in the upper atmosphere, and the products of these collisions are observed on the ground as sprays of particles called air showers. Thus, cosmic ray protons can be used to explore physics at much higher energies, but only with rather primitive diagnostics. Both accelerators and cosmic ray experiments are needed to obtain a complete picture.

EXTREME COSMIC ENVIRONMENTS

Many cosmic environments for testing physical laws are associated with stars or their remnants. The interiors of stars have temperatures of several millions of degrees, hot enough to drive the nuclear reactions that make them shine. When a star’s fuel is exhausted, it shrinks under the pull of gravity, becoming even hotter in the process. Relatively small stars like the Sun come to rest as dense white dwarfs—one teaspoon of a white dwarf weighs several tons. Yet this density pales in comparison with that of neutron stars, formed by the spectacular supernova explosions of more massive stars, which have densities beyond that of nuclear matter (1,000 trillion times that of normal matter) and initial temperatures of over 100 billion degrees (see Box 6.2). Neutron stars themselves have a maximum mass (less than three solar masses), and the cores of the most massive stars that go supernova have no option but to collapse all the way to infinite density, forming black holes (see Box 6.3).

BOX 6.1 ULTRAHIGH-ENERGY COSMIC RAYS

“Cosmic ray” is the name given to high-energy particles arriving at Earth from space, including protons and nuclei. Of particular interest are the highest-energy particles, whose source is currently unknown. Such particles are so rare that their detection requires huge, many-square-kilometer arrays on the ground to collect air showers, the cascades of particles that are created as cosmic rays strike the upper atmosphere. The event rate is so low at the highest energies that it is still not clear whether the spectrum actually shows the high-energy cutoff predicted due to degradation by interactions with the cosmic microwave background radiation (see text). There is some evidence, still not conclusive, that the spectrum of cosmic rays extends past the cutoff energy of 5 × 10^19 eV. The handful of events above this energy are of exceptional interest because of their extraordinarily high energy coupled with the fact that they must come from relatively nearby sources, cosmologically speaking. Current data indicate that the flux of particles above the cutoff energy is only about five particles per square kilometer per century. The main challenge is therefore simply to collect a sufficiently large sample. Several experiments are under way or proposed to address this problem. Their aim is to discover a characteristic pattern that reveals the nature of the sources.
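To put the cutoff energy quoted in Box 6.1 into macroscopic terms (a back-of-the-envelope sketch of my own; the baseball mass and speed are assumed round numbers, and the 3 × 10^20 eV figure is roughly the energy of the most extreme events on record):

```python
EV_TO_J = 1.602e-19  # joules per electronvolt

cutoff_j = 5e19 * EV_TO_J   # the predicted spectral cutoff from Box 6.1
record_j = 3e20 * EV_TO_J   # roughly the most energetic events ever recorded
baseball_j = 0.5 * 0.145 * 30.0**2  # 145 g baseball at 30 m/s (assumed)

print(f"cutoff proton: {cutoff_j:.0f} J")    # ~8 J
print(f"record event:  {record_j:.0f} J")    # ~48 J
print(f"baseball:      {baseball_j:.0f} J")  # ~65 J
```

A single subatomic particle carrying the kinetic energy of a thrown baseball is what the main text means by energies "comparable to that of a well-hit baseball."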
An important technical aspect of all the new experiments is the atmospheric fluorescence technique, by which profiles of individual air showers can be observed from a relatively compact array of telescopes that track the trajectory across the sky—the so-called Fly’s Eye technique. The technique can be used alone or in hybrid mode with a giant array of particle detectors on the ground. The ultimate use of this technique would be to monitor huge areas of the atmosphere from space to detect giant cascades. To detect the high-energy neutrinos that may accompany the production of ultrahigh-energy cosmic rays, a large array of detectors deep in water or ice is needed to record the characteristic flashes of light from neutrino interactions while suppressing the background from low-energy cosmic rays that bombard Earth’s surface. Some strategies for detecting ultrahigh-energy cosmic rays and neutrinos are illustrated in Figure 6.1.1.

BOX 6.2 NEUTRON STARS AND PULSARS

When a star has exhausted its nuclear fuel, a runaway collapse of the core and ejection of the mantle in a supernova explosion mark its demise. In stars more than about 15 times the mass of the Sun, nothing can arrest the collapse of the core into a black hole. When the initial mass is some 6 to 15 times the solar mass, matter in the core is crushed only to nuclear density and the collapse stops. Electrons and protons combine to make neutrons and neutrinos, the neutrinos escape, and the remnant is a bizarre “nucleus” some 10 km in radius—a neutron star. Following the ejection of the mantle in a supernova, a neutron star is formed. Not all neutron stars have the same mass, but many have well-determined masses of around 1.4 solar masses. A neutron star of mass greater than about 3 solar masses cannot support itself against gravity, and collapse to a black hole is inevitable.

Neutron stars are hot, rapidly spinning, highly magnetized objects. Radiation is channeled and emitted in searchlight beams along the magnetic axes. When the spin and magnetic axes are not aligned, the beams rotate rapidly through the sky, giving rise to regular pulses as they briefly illuminate Earth. Such spinning neutron stars are called pulsars.

FIGURE 6.2.1 Schematic of the interior of a neutron star showing the layers of packed and compressed matter. Condensed matter physics can tell us many things about the forms that matter takes, except at the very center of the neutron star, where densities exceed those of nuclear matter.

The neutron star is as exotic an environment as one could wish for (Figure 6.2.1). On the star’s surface, a sugar cube would weigh as much as the Great Pyramid of Egypt. The surface is most likely made of metallic iron. Below, the pressure increases rapidly, and electrons are captured by protons to make increasingly neutron-rich nuclei. The nuclei become large droplets and take on strange shapes—strings, sheets, and tubes. This region, down to a depth of about 1 km, is whimsically known as the “pasta” regime. Below that, there are mostly neutrons, with a few protons and electrons. But the pressure continues to grow with depth, and it is possible that some of the electrons may be replaced by heavier particles called pions or kaons, which can combine into a collective state called a condensate. Finally, it is likely that a quark-gluon plasma may form at several times nuclear density at the very center. The response of nuclear matter to these incredible pressures is not well understood.
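The density figures in Box 6.2 are easy to verify from its own round numbers, a roughly 1.4-solar-mass star packed into a 10-km radius (a minimal sketch; the physical constants are standard):

```python
import math

M_SUN = 1.989e30   # kg
M_NS = 1.4 * M_SUN # typical neutron star mass quoted in Box 6.2
R_NS = 10.0e3      # m, the ~10 km radius quoted in Box 6.2

rho = M_NS / ((4.0 / 3.0) * math.pi * R_NS**3)
print(f"mean density:      {rho:.1e} kg/m^3")     # ~6.6e17 kg/m^3
print(f"relative to water: {rho / 1000.0:.1e}x")  # ~7e14, hundreds of trillions
```

This is consistent with the "1,000 trillion times that of normal matter" quoted in the main text.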
Measurements of the masses, radii, and surface temperatures of neutron stars provide a window onto their interiors and reveal much about how nuclear forces behave under extreme conditions.

Such collapses may also trigger a gamma-ray burst—an explosive burst of gamma rays lasting only a few seconds but with an apparent power that may for a brief instant approach the power of the entire visible universe. The pressures inside these sources may exceed a trillion trillion atmospheres, comparable to the pressure encountered in the expanding universe when it was only about 10 milliseconds old. (By contrast, the most powerful lasers for creating astrophysical conditions can only create pressures of about a billion trillion atmospheres.)

Far from marking the permanent death of a star, any of these compact objects may herald its rebirth in a more active form. This may happen, for example, if a remnant has a regular star as a binary companion and the star swells and dumps its gas onto the compact object. The gas may swirl around the compact object for a time on its way down, forming a hot accretion disk, moving with a speed approaching that of light, and emitting x rays, or it may settle onto the surface of the compact object, providing fresh fuel for nuclear explosions. Young, rapidly spinning neutron stars called pulsars can radiate very intense radio and gamma-ray radiation (see Box 6.2). Their power derives from the spin of the neutron star, which has a magnetic field over a million times larger than can be sustained on Earth and which acts like a giant electrical generator capable of producing over 1,000 trillion volts and more than 10 trillion amperes. Other pulsars are powered by gravity, through their accretion of matter from a nearby companion.

Even larger black holes are found in the nuclei of galaxies. Essentially all galaxies, including our own, harbor in their centers black holes with masses between a million and a billion times that of the Sun. These black holes are the engines for the hyperactive galactic nuclei called quasars, which for a time in the early universe outshone whole galaxies by up to a thousandfold. Quasars, too, are powered by accretion disks, fueled by gas supplied by their host galaxies. In addition, they often form “jets”—moving beams of high-energy particles and magnetic fields—which radiate across the whole spectrum from the longest radio waves to the highest-energy gamma rays. It appears that these jets are formed very close to the central black hole, but understanding in detail how they are formed remains a major puzzle.

One of the most intriguing questions involves cosmic rays, mentioned above. It is believed that cosmic rays are energized by the shock waves associated with cosmic explosions like supernovae. However, this explanation is challenged to account for the fastest particles, with individual energies comparable to that of a well-hit baseball; it may be that scientists are seeing evidence for completely new forms of matter.

BOX 6.3 BLACK HOLES

Any object whose radius becomes smaller than a certain value (called the Schwarzschild radius) is doomed to collapse to a singularity of infinite density. No known force of nature can overcome this collapse. The Schwarzschild radius (or event horizon) is proportional to the object’s mass and corresponds to that distance where even light cannot escape the gravitational pull of the central matter. Because of this property, these collapsed singularities are called black holes. Just outside the Schwarzschild radius, outwardly traveling photons can barely escape to infinity.
It has even been speculated that there could be “naked singularities,” which are not shrouded by an event horizon. Properly describing the geometry of space-time (which dictates the motion of both particles and light) near a black hole requires Einstein’s theory of general relativity. Particle as well as light trajectories become severely distorted, or curved, compared with those predicted by the Newtonian description. Geometrically, space near a black hole can be exactly described by simple formulae, the Kerr solutions to Einstein’s equations. The gravitational field depends on only two parameters, the mass and spin of the hole. The radius of the event horizon and the smallest orbit that matter can have without falling in depend on how fast the hole is spinning—the faster the spin, the closer material can get.

Einstein’s theory of general relativity is only just now beginning to be tested (e.g., by measurements made by the Rossi X-ray Timing Explorer) in regions of strong gravity, where gravitational forces accelerate matter to speeds close to the speed of light. The most straightforward test would be to observe matter directly as it swirls into a black hole, measuring the particle trajectories and comparing them with the predictions of theory. This may be possible someday with the extremely high resolution achievable with x-ray interferometers. However, nature has provided an indirect tracer of black holes at the centers of galaxies that is already being exploited. As heavy elements like iron spiral inward toward a black hole, they reach high orbital velocities and temperatures in the tens of millions of degrees. Transitions of electrons between discrete atomic energy levels generate radiation of specific wavelengths that can be observed in the x-ray band. The wavelengths of these spectral lines are altered by several effects: a Doppler shift due to each atom’s orbital velocity around the hole (the shift can be positive or negative, depending on whether the motion is toward or away from the observer); an overall shift to longer wavelengths that occurs for all radiation struggling to escape from black holes; and another longward shift due to the time-dilating effects of high-speed motion near the hole. The net result is that the lines are shifted and broadened, with “wings” whose shape depends on the atoms’ trajectories as determined by the geometry of space-time. Recent observations of broad lines from the nuclei of active galaxies believed to contain massive black holes are consistent with solutions in which the black hole is spinning rapidly.

Very-high-energy neutrinos, some of which may be produced by jets, can also be created, and experiments are on the threshold of being able to detect these, too. Finally, the most penetrating signals of all are gravitational waves, first predicted by Einstein in 1918 (see Box 6.4). Scientists know that these waves really do exist because the energy they carry off into space affects the orbits of binary pulsars in a manner that matches well with precision measurements of observed binary pulsar systems. However, they have not yet been detected directly. The most promising sources of gravitational waves, which involve the collisions and coalescences of large black holes, are even more luminous than gamma-ray bursts.
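A minimal numerical sketch of the Schwarzschild radius defined in Box 6.3, r_s = 2GM/c² (the formula and constants are standard; the example masses are chosen to span the stellar and supermassive ranges mentioned in the text):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius_m(mass_in_suns: float) -> float:
    """Event-horizon radius r_s = 2GM/c^2 for a mass given in solar masses."""
    return 2.0 * G * mass_in_suns * M_SUN / C**2

for m in (10.0, 1.0e6, 1.0e9):  # stellar-mass, galactic-center, quasar-scale
    print(f"{m:10.0e} solar masses -> r_s = {schwarzschild_radius_m(m):.2e} m")
```

Note the linear scaling with mass: a 10-solar-mass hole has a horizon of about 30 km, while a billion-solar-mass quasar engine has one of about 3 × 10^12 m, roughly 20 times the Earth-Sun distance.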
NEW CHALLENGES IN EXTREME ASTROPHYSICS

Four problem areas, drawn from the physics-astronomy interface, are ready for a concerted attack.

Black Holes and Strong Gravity

Isaac Newton’s theory of gravity has been superseded by Albert Einstein’s general theory of relativity, which is widely believed to be the “true” theory of gravity as long as quantum mechanical corrections are unimportant. The subtle differences between Newton’s and Einstein’s theories have been tested in the solar system by monitoring the motions of the Moon, the planets, and light. So far general relativity has passed every test with a quantitative precision that in several cases exceeds 1 part in 1,000. Outside the solar system, the first binary pulsar, PSR 1913+16, provided a test in stronger fields. By monitoring the arrival time of regular radio pulses from this source, it was possible to measure the orbital decay caused by the power lost as the two orbiting neutron stars radiated gravitational radiation. Theory and observation agree within about 3 parts per 1,000.

The tests of general relativity have been so significant that few scientists doubt its validity in the regimes probed. However, in both the solar system and pulsar tests, gravity is still relatively weak in the sense that the characteristic speeds of bodies are less than roughly one-thousandth the speed of light. Therefore the critical limit of the theory in which objects move at near-light speeds has not yet been tested. It is a basic tenet of physics that, if a physical law is truly understood and has been verified to very high accuracy in at least one location, it should be possible to use it anywhere it is claimed to be valid.

BOX 6.4 GRAVITATIONAL RADIATION AND GRAVITY-WAVE DETECTORS

The general theory of relativity posits that matter (and energy) introduces curvature into four-dimensional space-time and that matter moves in response to this curvature. The theory admits wave solutions in which gravitational ripples in the fabric of space-time propagate with the speed of light. Such waves are an inescapable consequence of general relativity (and indeed of most theories of gravity). However, there are also some very important differences from electromagnetic radiation. Electromagnetic waves accelerate individual charged particles, and this property underlies their detection in, for example, radio antennas. Similarly, gravitational waves are detected by measuring the relative acceleration induced between a pair of test masses as the wave passes by. In addition, when gravitational wave amplitudes are large, as in some cosmic sources, the wave energy of gravitational radiation itself becomes a source of gravity, which is not true for electromagnetic waves. This nonlinearity complicates the theory of wave generation, necessitating extensive numerical computation to calculate the expected wave intensity from a given source.

The best-understood sources of gravitational radiation are binary stars. Two white dwarfs or neutron stars in close orbit lose energy via gravitational radiation and spiral in toward each other. A good example is the first binary pulsar discovered, PSR 1913+16, which comprises two neutron stars in an 8-hour orbit. One of these neutron stars emits a radio pulse every 59 milliseconds, and by monitoring very accurately the arrival times of these pulses at Earth, radio astronomers have followed the inspiral and consequent change of orbital period. As the speeds of these neutron stars are much less than the speed of light, it is possible to compute their orbits very accurately.
The measurement agrees with general relativity to a precision of 3 parts per 1,000 and effectively rules out most other theories. So, in a sense, researchers have already verified the existence of gravitational waves. Testing the theory when gravity is strong requires measuring gravitational waves directly. There are several likely strong sources of gravitational radiation: inspiraling binary neutron stars, supernova explosions, and merging supermassive black holes in galactic nuclei. In all cases, the goal is to measure the gravitational wave profile as the mass falls together and compare it with nonlinear predictions using general relativity. A peculiarly relativistic effect called Lense-Thirring precession arises when space is “dragged” by the spinning black hole. Measuring this effect would also indicate how rapidly black holes spin. In general, if the comparison between observation and theory is successful, it will constitute an impressive validation of the fundamental theory of gravity. More exotic sources of gravitational radiation have also been proposed. A particularly important one is primordial gravity waves generated soon after the big bang in the early universe. One particular epoch proposed is that in which quarks changed into ordinary nucleons, the so-called quark-hadron transition. If this happened abruptly, in a manner similar to that by which water changes into steam, then the gravity-wave intensity may be detectable. Other such phase transitions in the early universe associated with the unification of the forces have also been discussed. Inflation is perhaps the most compelling source of gravitational waves from the early universe. Detection of gravity waves from the early universe would allow us to look back at extremely early times and to study physics that is simply not accessible in a terrestrial lab.

FIGURE 6.4.1 Aerial photographs of the interferometers used in the Laser Interferometer Gravitational Wave Observatory (LIGO). The long tunnels house the evacuated laser chambers, in which laser beams travel a 4-km path length. At the vertex of the L, and at the end of each of its arms, are test masses that hang from wires and that are outfitted with mirrors. Ultrastable laser beams traversing the vacuum pipes are used to detect the ultrasmall change in the separation of a pair of test masses caused by the passage of a gravitational wave. Images courtesy of the LIGO Laboratory.

Two distinct types of classical gravitational wave detector have been proposed. The first is a ground-based laser interferometer designed to measure tiny changes in the separations of pairs of test masses (suspended by wires) due to the passage of gravitational waves. A prominent example is the Laser Interferometer Gravitational Wave Observatory (LIGO), which comprises two sites (for redundancy), one in Washington state and the other in Louisiana (Figure 6.4.1). At each site, three test masses are spaced 4 kilometers apart; it is hoped eventually to measure relative displacements between the masses as small as 10^-19 meters. This facility, which began operation in 2002, will be especially sensitive to waves with periods in the range from 3 to 100 milliseconds and is therefore tuned to collapsing sources of stellar mass. To detect the gravitational radiation from the formation of more massive black holes, a larger detector that is sensitive to lower frequencies is required. Because of natural size limitations as well as seismic noise, such a detector would have to be deployed in space.
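To put the displacement sensitivity quoted above in dimensionless terms (a standard conversion, with strain taken as the fractional arm-length change):

    h ~ dL/L = (10^-19 m)/(4 x 10^3 m) ~ 2.5 x 10^-23,

which is the scale of strain such an interferometer must resolve.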
Studies for a space-based gravitational wave detector to complement LIGO are under way in the United States and in Europe (see Figures 6.4.2 and 6.4.3).

Thus, if general relativity is the correct theory of gravity, researchers already know what should happen when the field is strong, even though they have not tested it there yet. However, it is conceivable that general relativity may not be a comprehensive theory of gravity. Moreover, one of its most impressive predictions, the existence of cosmic points of no return—black holes—has not been fully tested. It is therefore imperative to test relativity where gravity is strong. There is no better cosmic laboratory than a black hole (see Box 6.3). It is now clear beyond all reasonable doubt that black holes are abundant in the universe. They appear to be present in the nuclei of most regular galaxies and to have masses ranging from millions to billions of times that of the Sun. In addition, much smaller black holes (5 to 15 solar masses) are being found, commonly in x-ray binary star systems in our galaxy. Recent evidence suggests a class of black holes with masses between 30 and a million times that of the Sun. Astrophysical black holes are defined by two parameters: (1) mass, which sets the size of the black-hole space-time, and (2) spin, which determines the detailed geometry of the black-hole space-time. Spin is important because, as noted in the case of pulsars, rotational energy provides a reservoir of extractable energy rather like a giant flywheel, and it can act as a prime mover for much of the dramatic high-energy emission associated with black holes. Observational knowledge of black holes has advanced remarkably in the last 5 years. Masses have been measured with increasing precision, and scientists are starting to understand how the sizes of black holes relate to their host galaxies or stellar companions. In addition, at least two approaches to measuring black hole spins appear to be promising, although they are yet to be convincingly exploited. Massive holes in galactic nuclei are often orbited by accretion disks of gas that spiral inward, eventually crossing the event horizon—the surface from which nothing, not even light, can escape. The spectral lines from atoms such as iron orbiting the black hole are quite broad because they are subjected to a variable Doppler shift relative to an Earth-based observer. They are also shifted to lower energy because photons lose energy in climbing out of the hole’s deep potential well. It turns out that gas can remain close to the black hole and produce such a strongly broadened and redshifted line only if the hole spins nearly as fast as possible. On this basis, at least some active galactic nucleus black holes are already thought to rotate very rapidly. It may be possible to use x-ray flares to map out the immediate environments of black holes only light-hours from their centers. This technique makes use of the fact that it takes a finite amount of time for high-energy x rays from the flare to travel to the disk and excite iron emission; different parts of the disk will therefore be observed at different times, allowing the space-time around the black hole to be probed. Alternatively, if astronomers are lucky enough to catch one, a star in orbit around a black hole could also provide a powerful probe of the space-time as it is drawn in and torn apart. There is a serious obstacle to carrying out this program.
The wavelengths and the strengths of all of the spectral lines emitted by the accretion disks, which are at very high temperatures, are simply not known. (In fact it is not yet possible to identify half of the lines in the solar spectrum.) Although the quantum mechanical principles necessary to calculate these effects are understood, the atoms are so complex in practice that it is necessary to mount a focused program in experimental laboratory astrophysics to make the most of existing observations of accretion disks. A second approach to measuring a black hole’s spin comes from monitoring the rapid quasi-periodic oscillations of the x-ray intensity from selected galactic binary sources. These are almost certainly influenced by both the strong deviations from Newtonian gravity that are present close to the event horizon, independent of the spin, and a peculiarly spin-dependent effect called the dragging of inertial frames (see Figure 6.1). Frame-dragging requires that all matter must follow the black hole’s spin close to the event horizon. In addition, if the matter follows an orbit that is inclined with respect to the black hole’s spin, the orbit plane must rapidly precess. Both effects change the oscillation frequencies. Although it has been argued that the consequences of both of these effects are already being observed (e.g., in quasi-periodic oscillations seen in neutron star accretion disks), neither approach is understood well enough to allow confidence that it is the spins that are being measured, let alone to permit tests of general relativity. An even bigger challenge is to image a black hole directly. Two ambitious approaches are currently under investigation. The first involves constructing a spaceborne x-ray interferometer to resolve the x-ray-emitting gas orbiting the black hole. By combining beams from x-ray telescopes far apart, it is possible in principle to produce images with microarcsecond angular resolution—300,000 times better than the best optical mirrors in space. This resolution is sufficient to permit seeing the event horizon of a supermassive black hole in the nucleus of a nearby galaxy. A second method involves submillimeter-wave interferometry, perhaps also carried out from space. This approach is useful for observing sources like our own galactic center, which, although not a powerful x-ray source, does emit submillimeter radiation bright enough to permit resolving the radio source that envelops the central black hole. A quite different and more comprehensive approach to testing general relativity is to measure directly the gravitational radiation emitted by a pair of merging compact objects. Ground-based facilities in Louisiana and Washington, the Laser Interferometer Gravitational Wave Observatory (LIGO), are designed to detect merging stellar-mass compact objects in nearby galaxies. To measure gravitational radiation from forming or merging massive (or intermediate-mass) black holes, it will be necessary to construct a facility in space. Nearly as difficult as building these observatories, however, is the task of computing the gravitational waveforms that are expected when two black holes merge. This is a major challenge in computational general relativity and one that will stretch computational hardware and software to the limits. However, a bonus is that the waveforms will be quite unique to general relativity, and if they are reproduced observationally, scientists will have performed a highly sensitive test of gravity in the strong-field regime.
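A rough sense of the scales involved can be had from a minimal sketch; the black hole mass and distance below are illustrative assumptions chosen for the example, not values from this report:

    import math

    G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30   # SI units
    PC = 3.086e16                                 # metres per parsec
    MUAS = math.radians(1.0) / 3600 / 1e6         # one microarcsecond in radians

    m = 1e9 * M_SUN        # assumed 10^9 solar-mass hole in a nearby active galaxy
    d = 20e6 * PC          # assumed distance of 20 Mpc

    r_s = 2 * G * m / c**2                 # Schwarzschild radius
    print(r_s / (c * 3600))                # ~2.7 light-hours: the "light-hours" scale above
    print(2 * r_s / d / MUAS)              # angular diameter ~2 microarcseconds,
                                           # hence the need for microarcsecond imaging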
Finally, neutron stars also provide an astrophysical laboratory for testing the predictions of general relativity in strong gravitational fields. The quasi-periodic oscillations (QPOs) seen in association with the accretion of material onto neutron stars may be useful in probing effects predicted by general relativity, such as Lense-Thirring precession.

Neutron Stars as Giant Atomic Nuclei

Atomic nuclei are held together by nuclear forces. At the simplest level these forces act between protons and neutrons, but at a more fundamental level they involve their constituent quarks and are mediated by the carriers of the strong force, the gluons. The gross structure of natural nuclei is comparatively well understood (with some conspicuous problems remaining, e.g., understanding the effect of the underlying quark structure on nuclei), because it is fairly well understood how nucleons interact when they are about 2 femtometers (10^-15 m) apart, as they are in normal nuclei. However, what happens when matter is compressed to greater baryon density or heated to a higher temperature? To address this question, a major facility, the Relativistic Heavy Ion Collider (RHIC), has been constructed at Brookhaven National Laboratory. Using this facility, it will be possible to collide heavy nuclei, like gold, so that they momentarily attain a density roughly 10 times that of nuclear matter. Under these circumstances, it may be possible to form a quark-gluon plasma—a denser version of the state of matter that is thought to have existed in the universe earlier than about 10 microseconds after the big bang. A strong experimental program at RHIC will be carried out over the next few years to see what the states of matter are at extreme energy density. Nature has performed a complementary experiment by making neutron stars in supernova explosions. Neutron stars are about one and a half times as massive as the Sun and have radii of about 10 km. They can be considered as giant nuclei, containing roughly 10^57 nucleons with an average density similar to that of normal nuclei. However, as gravity provides an additional strong attractive force, the densities at the centers of neutron stars are almost certainly well above nuclear, as in the heavy ion collisions. There is one crucial difference between colliding heavy ions and neutron stars. In the former case, the dense nuclear matter is extremely hot, about 10^12 K, whereas neutron stars usually have temperatures of less than 10^9 K, which from a nuclear standpoint is cold. Scientists are still quite unsure about the properties of cold matter at densities well above nuclear matter density. One good way to see what really happens is to measure the masses and radii of neutron stars with high precision. The masses of a handful are known to exquisite precision (1 part in 10,000), and a number of others are known to within a few percent, from the study of their binary orbits. Promising ways to measure neutron star radii involve high-resolution x-ray spectroscopy or the study of so-called quasi-periodic oscillations. The most direct approach is to observe the wavelengths and shapes of spectral lines formed within a hot neutron star atmosphere immediately following a thermonuclear explosion beneath the star’s surface. (These explosions, called x-ray bursts, are commonly observed from neutron stars that are fed with gaseous fuel at a high rate.)
The central wavelengths of the x-ray lines provide a measurement of the gravitational redshift—essentially the depth of the gravitational potential well—and the widths and strengths of the lines measure the rotation speed near the neutron star’s surface. Together with a good understanding of the neutron star atmosphere, these two quantities fix the mass and the radius. In addition, if the distance to the neutron star is known, it is possible to estimate the radius yet another way by knowing that the observed flux strength varies in proportion to the surface area of the star (although one must assume that the entire star is being observed). Given an accurate determination of the radius of a neutron star of known mass, it will be possible to constrain the compressibility of cold nuclear matter and thus the nature of its underlying composition and particle interactions. An even more direct approach to learning the composition of highly compressed nuclear matter involves neutron star cooling. Neutron stars are born hot inside supernovae, which also create shells of expanding debris known as supernova remnants (see Figures 6.2 and 6.3). The size of this remnant is a measure of the neutron star’s age, and the star’s own thermal emission yields its surface temperature. New theories in condensed matter physics can then be used to relate the surface temperature to the temperature inside. In sum, by observing neutron stars of different ages, astronomers can measure how fast they cool. It turns out that if the interior of a neutron star contains just neutrons and a small fraction of protons and electrons, it ought to cool quite slowly, but if it contains a significant fraction of protons or other particles like pions or kaons or even free quarks, it will cool much more quickly. Thus it is possible to learn about a neutron star’s interior simply by measuring its surface temperature. In addition, neutron star interiors are believed to be in a superfluid state, with their protons superconducting, and this too would influence the cooling. Neutron stars can serve as excellent cosmic laboratories for testing physical ideas in this new territory. As explained in Chapter 2, quantum electrodynamics (QED) is a highly quantitative quantum theory of the electromagnetic interaction of photons and matter. It makes predictions that have been tested with great precision in regimes accessible to laboratory study. In particular, it has been tested in static magnetic fields as large as roughly 10^5 G. However, ever since the discovery of pulsars, it has been known that fields as large as 10^12 G are commonly found on the surfaces of neutron stars. More recently it has been concluded that a subset of neutron stars, called “magnetars,” have magnetic field strengths of 10^14 to 10^15 G, well above the QED “critical” field, where the kinetic energy of an electron spiraling in the magnetic field exceeds its rest mass energy. QED should still be a correct description above this critical field, but the physics is quite different from what is normally considered. For example, when an x ray propagates through a vacuum endowed with a strong magnetic field, QED predicts that virtual electron-positron pairs will affect its propagation in such a way that the emergent x-ray radiation will become polarized. It may therefore be possible to observe QED at work in magnetars by observing x-ray polarization and mapping out the neutron star magnetic field.
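Several of the numbers quoted in this section can be reproduced with a canonical 1.4-solar-mass, 10-km star; the canonical values are assumptions of this sketch:

    import math

    G, c = 6.674e-11, 2.998e8
    M_SUN, M_NUC = 1.989e30, 1.675e-27          # kg; neutron mass
    HBAR, E_CH, M_E = 1.055e-34, 1.602e-19, 9.109e-31

    M, R = 1.4*M_SUN, 1.0e4                     # assumed 1.4 M_sun, 10 km star

    print(M/M_NUC)                              # ~1.7e57 nucleons, as quoted above
    rho = M/(4/3*math.pi*R**3)
    print(rho/2.3e17)                           # ~3x nuclear saturation density (2.3e17 kg/m^3)
    z = 1/math.sqrt(1 - 2*G*M/(R*c**2)) - 1
    print(z)                                    # surface gravitational redshift ~0.3
    B_crit = M_E**2*c**2/(E_CH*HBAR)            # QED critical field
    print(B_crit*1e4)                           # ~4.4e13 gauss (1 T = 1e4 G)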
Measuring x-ray polarization is difficult, but, encouragingly, it has recently become possible to measure the circular polarization of x rays from laboratory synchrotrons. Perhaps these techniques can also be used in space x-ray observatories.

Supernova Explosions and the Origin of the Heavy Elements

The big bang produced the lightest elements in the periodic table—hydrogen, helium, and lithium. Planets and people are made not only of these elements but also of carbon, nitrogen, oxygen, iron, and all the other elements in the periodic table. It is believed that the elements beyond lithium are made in the contemporary universe by stars and stellar explosions. There is a good understanding of the origin of elements up to the iron group (nickel, cobalt, and iron). The iron-group elements have the greatest binding energies, so the elements up to the iron group can be made by nuclear reactions that fuse two nuclei to make a heavier nucleus and release energy. However, producing the elements heavier than iron requires energy input, and astrophysicists have looked to stellar explosions as the likely production sites. The details of how the heaviest elements are made are still not fully known. Supernovae mark the violent deaths of the most massive main-sequence stars and also of close binaries with one highly condensed member (e.g., a white dwarf). These cataclysmic explosions, which can be seen far across the cosmos, can be used as markers of time and distance (see Figure 5.7 for a gallery). Supernovae occur because stars become unstable either as they evolve or as they accrete matter. A sufficiently massive main-sequence star can produce energy by successively combining elements up to iron. After nuclear burning has turned the core to iron-group elements, no more energy can be produced. This impasse triggers the collapse of the core and the explosion of the mantle in a supernova. Thermonuclear runaway also occurs when a white dwarf accretes too much mass from a main-sequence companion. Supernovae are classified theoretically by mechanism—core collapse or accretion—and observationally by whether hydrogen is present in the ejecta. Type I supernovae lack hydrogen; Type II show it. (The categories are further subdivided into Ia, Ib, Ic, II-P, II-L, II-n, and IIb according to the pattern of heavy elements ejected.) The two means of classification do not necessarily coincide, and we lack a detailed theoretical understanding of how to make the correspondence. Nevertheless, Type Ia supernovae are observed to have very similar intrinsic luminosities and have provided convincing evidence that the expansion of the universe is speeding up (see Chapter 5). Discovering whether Type Ia supernovae are truly a homogeneous class and learning what spread is to be expected in their properties are high-priority objectives of supernova research. Supernovae are clearly the factories in which the elements up to and slightly above the iron group of elements are made. Not only are the detailed abundances of the elements lighter than iron quantitatively understood with the aid of nuclear theory and laboratory data, but the telltale signatures of radioactive isotopes also are seen in the expanding shell of debris following a supernova explosion. When it comes to understanding the origin of elements much heavier than iron, however, scientists can reconstruct much of what must have happened, but the astrophysical factory has not been clearly identified.
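The statement that binding energy peaks at the iron group can be checked with the textbook semi-empirical (Bethe-Weizsäcker) mass formula; the coefficients below are one common fit, assumed here for illustration:

    # Semi-empirical binding energy per nucleon: why fusion stops paying off
    # near the iron group.  Coefficients in MeV; exact values vary between fits.
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18

    def binding(A, Z):
        B = aV*A - aS*A**(2/3) - aC*Z*(Z-1)/A**(1/3) - aA*(A-2*Z)**2/A
        if A % 2 == 0:                       # crude pairing term
            B += aP/A**0.5 if Z % 2 == 0 else -aP/A**0.5
        return B

    def most_stable_Z(A):                    # Z minimising the mass at fixed A
        return round(A / (2 + 0.015*A**(2/3)))

    peak = max(range(10, 250), key=lambda A: binding(A, most_stable_Z(A))/A)
    print(peak, binding(peak, most_stable_Z(peak))/peak)
    # peak binding energy per nucleon (~8.8 MeV) lands in the A ~ 56-62 iron group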
Intermediate-mass elements are made in a neutron-rich environment in which successive neutron captures occur slowly (the s-process), and neutron-rich nuclei undergo beta decay back to more stable elements. Still heavier nuclei must have been made by a succession of rapid neutron captures, referred to as the r-process. A dense, highly neutron-rich environment must exist for the r-process to occur. Also seen in the abundances are the traces of other mechanisms, including possible evidence of nucleosynthesis induced by neutrinos. The element fluorine, for example, can be made by neutrinos interacting with supernova debris. In fact, it is strongly suspected that supernovae, once again, must be the place where the remaining elements up to uranium are built, but there is no detailed understanding of how the process occurs. Resolving this problem requires observational data from supernova remnants, experimental data from both nuclear physics and neutrino physics, and the ability to make detailed, fully three-dimensional, theoretical calculations of supernova explosions. To begin with, theoretical models of supernovae are still incomplete. Simply producing a reliable “explosion” (in the computer) has proven to be an enormous challenge. Recently, the importance of convection driven by neutrino heating from the nascent neutron-star core was confirmed by numerical calculations. The key was to do calculations in two dimensions instead of one (convection in one dimension is impossible). However, not until it is possible to do a full three-dimensional calculation with full and complete physics will the combined role of rotation and convection be clear. A full three-dimensional calculation with proper inclusion of neutrino transport will require the terascale computing facilities that are just now being realized. There is reason to hope that such a calculation will distinguish the site of the r-process and at the same time illustrate the properties neutrinos must have to match what is currently known about the elements, resolving with a single stroke two important questions in modern physics. To make this step in computational prowess, however, theory will call upon experiment to provide solid ground for the r-process. In equal measure, progress will come from measurements in neutrino physics and in nuclear physics. Neutrino oscillations can dramatically alter the synthesis of the elements in a supernova, because the muon and tau neutrinos made in a supernova are much hotter (more energetic) than the electron neutrinos. Normally neutrino effects are muted because muon and tau neutrinos do not interact so easily with nuclei, while electron neutrinos are not produced so hot. But if oscillations scramble the identities, the hot muon and tau neutrinos can turn into hot electron neutrinos and readily disintegrate nuclei just built by the r-process. The nuclei built in rapid neutron capture lie at the boundary of nuclear stability, the neutron “drip line.” To trace the path of nucleosynthesis, researchers need to know the masses and lifetimes of nuclei far from the ones that can be reached with existing technology. The binding energy of such exotic nuclei can be calculated well for nuclei nearer the “valley of stability” (the region in the diagram of all possible nuclei described by their numbers of neutrons and protons where the most stable nuclei are found). How well those calculations serve in extrapolation to r-process nuclei is completely unknown.
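For reference, the identity scrambling mentioned above is governed, in the simplest two-flavor vacuum case, by the standard oscillation probability (a textbook formula; matter effects inside the supernova envelope modify it, so this is only the simplest version):

    P(nu_x -> nu_e) = sin^2(2*theta) * sin^2( 1.27 * dm^2[eV^2] * L[km] / E[GeV] ),

where theta is the mixing angle, dm^2 the mass-squared splitting, L the distance traveled, and E the neutrino energy.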
In the last few years it has been realized that these nuclei can be produced and measured in a two-stage facility that accelerates a beam, produces exotic isotopes, and then re-accelerates them for study. With a suitably designed facility, every r-process nucleus may be accessible for direct measurement. Finally, there will be in the coming decades the opportunity to observe directly the synthesis of heavy elements where it is believed that synthesis occurs—that is, in the explosions of stars. These explosions create radioactive nuclei, which decay over time, usually with the emission of a gamma ray of specific energy. Future sensitive high-energy x-ray and gamma-ray space experiments will allow these decays to be observed and monitored soon after the explosion, and they will allow the distribution of newly synthesized material in the matter expelled in the explosion to be mapped with high fidelity. These remnants can “glow” for tens of thousands of years in observable radiation. Such observations can be used to constrain the theoretical models for the explosions, directly measure the quantities of synthesized material, and observe how it gets distributed into the space between stars.

Cosmic Accelerators and High-Energy Physics

Earth is continuously bombarded by relativistic particles called cosmic rays, which are known to originate beyond the solar system. Cosmic rays with energies up to at least 10^14 eV are probably accelerated at the shock fronts associated with supernova explosions, and radio emissions and x rays give direct evidence that electrons are accelerated there to nearly the speed of light. However, the evidence that high-energy cosmic-ray protons and nuclei have a supernova origin is only circumstantial and needs confirmation. Most puzzling are the much higher energy cosmic rays with energies as large as 3 × 10^20 eV. In fact, it would seem that they ought not to exist at all, because traveling through the sea of CMB photons for longer than roughly 100 million years would rob them of their ultrahigh energy. Accounting for these particles—probably mostly protons—is one of the greatest challenges in high-energy astrophysics. Among the many suggested origins are nearby active galactic nuclei, gamma-ray bursts, and the decay of topological defects or other massive relics of the big bang. Protons are not the only type of ultrahigh-energy particle that might be observed from these sources—many models also imply the associated production of high-energy neutrinos. In models involving decaying massive particle relics from the big bang, such neutrinos would emerge from cascades of decaying quarks and gluons that set in at the energy scale of grand unification. In models involving particle acceleration, they could be produced in interactions of protons with dense photon fields or gas near the emitting object. The ability to detect high-energy neutrinos from energetic astrophysical sources would open an entirely new window onto the high-energy universe. In particular, since most sources are relatively transparent to their own neutrinos, these particles allow “seeing” the particle acceleration mechanism directly, deep inside the source. Studying neutrinos is difficult because they interact only through the weak force, so they usually pass through detectors without leaving a trace.
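Two quick numbers behind the statements above about 3 × 10^20 eV protons, using round textbook values (all inputs are assumptions of this sketch); the gyroradius result also anticipates the backtracking idea discussed below:

    # Back-of-envelope numbers for a 3e20 eV cosmic-ray proton
    E_CMB = 6e-4                   # typical CMB photon energy today, eV
    M_P, M_PI = 938.3e6, 139.6e6   # proton and charged-pion rest energies, eV

    # Head-on photo-pion threshold: E_th ~ m_pi*(m_p + m_pi/2)/(2*E_gamma)
    E_th = M_PI*(M_P + M_PI/2)/(2*E_CMB)
    print(f"{E_th:.1e} eV")        # ~1e20 eV: above this, protons lose energy rapidly

    # Gyroradius r = E/(e*B*c) for an ultrarelativistic proton in a microgauss field
    E_J = 3e20*1.602e-19                    # 3e20 eV in joules (tens of joules,
                                            # the "well-hit baseball" of the text)
    B, c, e = 1e-10, 2.998e8, 1.602e-19     # 1 microgauss in tesla
    r = E_J/(e*B*c)
    print(r/3.086e19)              # gyroradius in kpc, ~3e2 kpc: far larger than the
                                   # Galaxy, so arrival directions are barely bent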
One technique for achieving a large effective volume is to detect upward-moving muons created by neutrinos interacting in the material below the sensitive volume of the detector. Upward trajectories guarantee that the parent particles must be neutrinos, because no other particles can penetrate the whole of the Earth. The atmospheric neutrinos, whose behavior provides the current evidence that neutrinos have mass, are detected in a similar way, through their interactions with detectors in deep mines; so far, however, only upper limits have been achieved for energetic astrophysical neutrinos. Gamma-ray photons, a third type of high-energy particle, have been observed from the cosmos with energies as high as 50 TeV. As is the case for the high-energy cosmic rays, the sources of such energetic photons must all be relatively local on a cosmological scale, since photons of this energy also tend to be destroyed in traveling through space by combining with background infrared photons from starlight to create electron-positron pairs. Many of the highest-energy gamma rays are probably emitted as a by-product of the acceleration of the mysterious ultrahigh-energy cosmic rays. Whether they are produced in cascades initiated by high-energy protons or radiated by electrons could in principle be decided by determining whether or not the high-energy gamma rays are accompanied by neutrinos, a frequent by-product of high-energy proton collisions. Understanding the origin of the highest-energy particles will require better understanding of the sites where they are accelerated. Gamma-ray bursts produce flashes of high-energy photons, and theory predicts that very-high-energy neutrinos and cosmic rays will accompany the flash. Although significant progress in locating and studying gamma-ray bursts has been made recently, the sources of these enormous explosions are still a matter of debate. Another class of energetic sources, the highly variable but long-lived jets of active galactic nuclei, in some cases emit gamma rays with energies as high as 10 TeV, which directly implies the presence of charged particles of at least this energy. Although quite different in origin, both jets and gamma-ray bursts are thought to involve highly relativistic bulk motion, ultimately powered by accretion onto massive black holes. Scientists have only quite speculative theories to offer at this stage, but future observations, in particular of high-energy radiation, can provide important constraints. To date, much of the information about powerful cosmic accelerators has come from gamma-ray photons of all energies. Much more information may come from measuring the primary accelerated particles, as well as secondary photons and neutrinos. For example, the observation of a coincident gamma-ray and high-energy neutrino signal from a gamma-ray burst would directly test the existing theories of the shock mechanism in these sources. Identifying accelerated cosmic rays from a particular source is difficult, because intervening magnetic fields scramble the directions of charged particles as they travel. So far, it has only been possible to identify particles coming from solar flares. In contrast, neutrinos and photons, being neutral, are undeflected by magnetic fields and thus can be traced back to individual sources, provided they are bright enough. With projects currently under way or proposed, the ultimate goal of detecting high-energy protons, photons, and neutrinos from specific energetic sources may be within reach.
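The absorption of the most energetic photons mentioned above follows from the head-on pair-production threshold (a standard kinematic condition, worked here for the 50-TeV case):

    E_gamma * eps_bg >= (m_e c^2)^2,  so  eps_bg >= (0.511 MeV)^2 / 50 TeV ~ 5 x 10^-3 eV,

which corresponds to far-infrared background photons, exactly the starlight-heated backgrounds that limit how far such gamma rays can travel.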
At the highest energies, it may be possible to identify and study cosmic accelerators by backtracking to the accelerated protons themselves. This is possible because the amount of bending in a given magnetic field is inversely proportional to the energy of the particle, and the highest-energy particles must come from relatively nearby sources to avoid having been degraded by interaction with photons of the microwave background. The aim is to accumulate enough events so that the pattern of their arrival directions and energies will reveal the identity of the specific sources. Ultimately, the highest-energy cascades would be studied from space with detectors able to view a huge section of the atmosphere from above and thus overcome the extremely low occurrence rate of the highest-energy events. “Shower” detectors that view a sufficiently large volume of the atmosphere can also detect ultrahigh-energy neutrinos, which can make horizontal cascades starting deep in the atmosphere or even in the crust of Earth. Further understanding of the conditions within ultrahigh-energy sources may also come from measurements made on Earth. Although the conditions within these sources cannot be reproduced here in the laboratory, the behavior of bulk matter under unexplored regimes of pressure and temperature can be examined using high-performance lasers. It is already possible to sustain pressures of 10 million atmospheres and magnetic field strengths of 10 megagauss and to create, impulsively, electron-positron pair plasmas with relativistic temperatures at these facilities. These investigations are valuable because they provide a much stronger basis for scaling from the laboratory to cosmic sources. They can be particularly useful for understanding giant planets, the dynamics of various types of supernova explosions, and the relativistic flows and shock waves associated with quasars and gamma-ray bursts. It is possible to summarize the discussion above in the form of five fundamental questions that cut across the problem areas discussed as well as the issues identified in the previous chapters.

Did Einstein Have the Last Word on Gravity?

There is a striking opportunity to begin testing general relativity in the strong-field regime using observations of astrophysical black holes. Observations of disks and outflows would test the form of the standard Kerr geometry of a spinning black hole; those of coalescing black holes would test a far more intricate dynamical space-time. The needed observations include x-ray line Doppler shifts and linewidths from black holes, quasi-periodic fluctuations of x-ray intensity from oscillating accretion disks, and gravitational radiation from mergers of compact objects.

What Are the New States of Matter at Exceedingly High Density and Temperature?

Understanding the equation of state and phase transitions of dense nuclear matter is one of the great challenges in contemporary many-body physics. Cold neutron stars and hot supernova explosions provide two quite different ways to obtain unique experimental data and to test theoretical understanding. The opportunities include (1) measuring neutron-star radii from x-ray line gravitational redshifts and from absolute distance and x-ray intensity measurements, neutron-star rotation speeds from x-ray linewidths, x-ray timing measurements from quasi-periodic oscillations, and the cooling rate of neutron stars in expanding supernova remnants and (2) theoretical work on the nuclear equation of state and the transition between nuclear matter and the quark-gluon plasma (see Figure 6.4).
Is a New Theory of Matter and Light Needed at the Highest Energies?

The committee believes that QED is the most successful theory of physics and that there is, as yet, no good reason to doubt it within its domain of applicability. However, it has not been tested in environments in which the magnetic field strengths are very strong and the energy densities very great, nor have the applicable physical principles in these environments been elucidated. Observing the polarization of x rays from pulsars, magnetars, and perhaps gamma-ray bursts would allow just this.

How Were the Elements from Iron to Uranium Made?

The production of the light elements in the big bang and of the elements up to iron in supernovae is in quantitative agreement with observation. Beyond iron, the general conditions needed to make the elements seem clear, but the locale and means of production are unknown. Supernovae or neutron stars are thought to be likely sites for the origin of the heavy elements. By combining full three-dimensional calculations of supernova explosions in a terascale computation, experimental measurements of neutrino-oscillation physics, experimental data on the r-process and rp-process nuclei far from stability, and x-ray and gamma-ray observations of newly formed elements in supernovae, it may be possible to pin down the source of the heaviest elements.

How Do Cosmic Accelerators Work and What Are They Accelerating?

On both spectral and astrophysical grounds, it seems that ultrahigh-energy protons are formed in extremely powerful yet local sources. Perhaps these sources have already been identified with active galaxies or gamma-ray bursts. Alternatively, a completely new constituent of the universe could be involved, like a topological defect associated with the physics of grand unification. Only by observing many more of these particles, or perhaps the associated gamma rays, neutrinos, and gravitational waves, will scientists be able to distinguish these possibilities. To realize this opportunity, large cosmic-ray air shower detector arrays and observations of high-energy gamma rays and neutrinos will be needed, as described in Box 6.1.
Having completed its commissioning phase, the Advanced Rayleigh guided Ground-layer adaptive Optics System (ARGOS) facility is coming online for scientific observations at the Large Binocular Telescope (LBT). With six Rayleigh laser guide stars in two constellations and the corresponding wavefront sensing, ARGOS corrects the ground-layer distortions for both LBT 8.4 m eyes with their adaptive secondary mirrors. Under regular observing conditions, this set-up delivers a point spread function (PSF) size reduction by a factor of 2-3 compared to seeing-limited operation. With the two LUCI infrared imaging and multi-object spectroscopy instruments receiving the corrected images, observations in the near-infrared can be performed at high spatial and spectral resolution. We discuss the final ARGOS technical set-up and the adaptive optics performance. We show that imaging cases with ground-layer adaptive optics (GLAO) are enhancing several scientific programmes, from cluster colour-magnitude diagrams and Milky Way embedded star formation, to nuclei of nearby galaxies or extragalactic lensing fields. In the unique combination of ARGOS with the multi-object near-infrared spectroscopy available in LUCI over a 4 × 4 arcmin field of view, the first scientific observations have been performed on local and high-z objects. Those high spatial and spectral resolution observations demonstrate the capabilities now at hand with ARGOS at the LBT.
- Gravitational lensing: strong
- Instrumentation: Adaptive optics
- Instrumentation: high angular resolution
- Instrumentation: spectrographs
* Democritus
Democritus had a theory that all matter is composed of tiny unbreakable particles called atoms. He tried to break matter down into its smallest particles. His model was that matter would stop splitting in half when it reached its smallest piece. He called that an atom.
* John Dalton
In 1803, John Dalton proposed a theory. The theory had four parts to it: 1. Elements are made of identical atoms. 2. Atoms of different elements are physically different. 3. Compounds are formed by a combination of two or more different kinds of atoms. 4. A chemical reaction is a rearrangement of atoms. Dalton formed different compounds from their elements. Adding extra of one reactant made no difference. He tried a variety of different combinations to form new compounds. The ratios he measured were inaccurate, but given the technology of the time, his experiments came remarkably close to modern values.
* JJ Thomson
He tested the origin and properties of cathode rays using a cathode ray tube. His observation was that, with electricity, a metal plate produces a cathode ray. His model is called the plum pudding model, because it has negatively charged particles, named electrons, inside the atom. They can be removed or added to form an ion.
* Lord Ernest Rutherford
Lord Ernest Rutherford came from Nelson, New Zealand. In 1911, his reasoning was that if the plum pudding theory were correct, the atom should be like solid pudding, and small particles should not be able to pass through it. In his experiment, alpha particles were shot at a thin sheet of gold. Almost all alpha particles passed through, but some were deflected or bounced back.
* Niels Bohr
Niels Bohr came up with the observation that, since + and − charges attract, electrons on the outside should come crashing into the nucleus, cancelling both out. His experiment involved studying light coming from atoms that are “excited” by electricity or flame. The light is made of distinct colour lines called an emission spectrum. Electrons were absorbing and releasing specific amounts of energy. The theory was that electrons could only be found in fixed levels.
* James Chadwick
James Chadwick predicted that atoms are heavier than their electrons and protons alone can account for. His experiment involved looking at beryllium when it was hit by alpha particles, which caused a neutral beam to be emitted. His theory was that the nucleus of an atom contains neutrons.

The History of the Atom

In many ways our learning of the atom has influenced human knowledge. In addition to that, the history of the atom, and how it has been used, has equally affected human knowledge. This knowledge of the atom can help us improve life from day to day and work toward explaining the phenomenon known as the Theory of Everything. The founder of the atom was Democritus (460-370 B.C.), an ancient Greek philosopher whose goal in life was to explain the natural world. It was he who laid the basis on which the foundation of the atom was created. This foundation was his theory that all tangible objects in the world had a “primary matter” which was used in different variations to make everything that surrounds us. He also proposed that this “primary matter”, or atom, as he first established, had only three main differences; he argued that shape, size, and weight differed between atoms. All of the work that Democritus did is seen as a more or less basic description of an atom. Despite his uncanny accuracy with such limited technology, and without being able to do any experiments, the atom was ignored for the subsequent 2000 years.
The next essential scientist to be recorded as an innovator in the history of the atom was John Dalton (1766-1844). “Dalton’s theory can be summarized as follows: 1. Matter is composed of small particles called atoms. 2. All atoms of an element are identical, but are different from those of any other element. 3. During chemical reactions, atoms are neither created nor destroyed, but are simply rearranged. 4. Atoms always combine in whole number multiples of each other. For example, 1:1, 1:2, 2:3 or 1:3.”[1] As you can see from Dalton’s four main ideas, it is a decent expansion of Democritus’ first theory of the atom. J. J. Thomson was the next to take up the task of understanding the atom. He was the first of the scientists studying the atom to use equipment for experiments, such as a cathode ray tube. “When a potential is placed between the cathode (the negatively charged plate) and the anode (the positively charged plate) a ‘ray’ of electric current passes from one plate to the other. Thomson discovered that this ray was actually composed of particles.” The experiment he composed was a very crucial one in the history of the atom, for the reason that it discovered that there are negative charges inside of atoms. This is a very important model because it is the first atomic structure to recognize that there is more than the simple “different types of matter” way of thinking. The most recent of the atom innovators would be Rutherford and Bohr. I believe that these two scientists needed the work of Democritus, Dalton, and Thomson to get to the atomic structure and understanding that we have today. In Rutherford’s gold foil experiment we can see that the particles fired at the gold foil are partially deflected and some pass completely through the foil undisturbed; this of course could be monitored by the detecting screen put around the gold foil. Rutherford concluded that the previous model of the atom could not be entirely accurate, because if it were true that all atoms contained were negative and neutrally charged particles, then the particles fired at the foil should all have gone through without disruption. Rutherford could never have come to this conclusion if he did not have the previous model from J. J. Thomson. But Rutherford did have the model and information at that time, and he arranged a new model, which is very similar to the current one. Last, the most recent recorded scientist in history to alter the structure of the atom was Niels Bohr. His model has many aspects correct that we have today in the modern model of the atom. It was one of the first models to contain a nucleus with an orbit. Although it is not completely true that electrons “orbit” the nucleus, it is a much closer model to the current one than the plum pudding model. Bohr could never have accomplished what he did without the foundation that was laid before him, along with the help of J. J. Thomson. To conclude: “Bohr’s theory may be summarized in the following two statements: 1. Electrons can only occupy certain orbits or shells in an atom. Each orbit represents a definite energy for the electrons in it. 2. Light is emitted by an atom when an electron jumps from one of its allowed orbits to another. Since each orbit represents a definite electron energy, this electron jump, or transition, represents a definite energy jump. This change in electron energy leads to emission of light of a definite energy or wavelength.”
[3] To conclude, the understanding of the atom and its structure today has been largely improved from that of the days of Democritus. This has only been possible through the work done by the handful of select scientists in history. These scientists have worked off one another to eventually get to the model of the atom we have today, in addition to our current understanding of the atom and its properties. Many times in history we have used our knowledge of the atom to further technology and improve life around us. This can be seen throughout history, from common household items such as the smoke detector to the weapons of mass destruction used in nuclear war. In any event, it is possible to see in technology the use of our understanding of the atom. My first example of how the atom has been used and changed the world is a common household item, the smoke detector. The smoke detector, even though it may not seem at first glance to be such an advanced piece of technology, uses a process called ionization. Ionization is the physical process of converting an atom or molecule into an ion by adding or removing charged particles such as electrons or other ions. Smoke detectors specifically use this in a way in which two electrodes in a small ionization chamber maintain a steady current. Any time this current is interrupted, the alarm will trigger. This is useful because once smoke particles enter the ionization chamber they will attach to the ions and interrupt the current, thereby causing the alarm. Although there is no exact world event concerning the smoke detector, it has made modern life much safer. Next, our understanding of the atom and how it reacts with other atoms has helped us use gas, even crude oil, altering it into a variety of different shapes and forms for different uses. This crude oil consists mainly of carbon and hydrogen. These hydrocarbons have a very dense amount of raw energy inside of them. Because of this, many things are derived from crude oil, such as gasoline and diesel fuel. This fuel is obviously used for many things, from cooking food to launching spaceships to the moon. But there is still more that this crude oil can do; it is extremely versatile: by chemically cross-linking hydrocarbon chains you can get everything from synthetic rubber to nylon guitar strings to the plastic in Tupperware.[5] As you can see, this crude oil alone serves no purpose, but when we add in our understanding of the atom and how we can change and alter it, crude oil is one of our most valuable non-renewable resources. Finally, we have the atomic bomb, such as that used in World War II by America. The atomic bomb is a fairly simple weapon; it simply uses a process called nuclear fission. Nuclear fission is a process in which atoms release a great deal of energy due to their raw power and instability. These atomic bombs were fueled by uranium, which is the unstable substance in question. The uranium is so unstable it needs to break down into smaller atoms to be stable, and when it does, it has left-over energy. This energy is what actually makes the explosion from the bomb, and all that is happening is the breaking down of atoms and the releasing of unstable energy. The infamous world events, Little Boy and Fat Man, were the first two times that an atomic bomb was used in nuclear warfare. In this war, though, I seriously question the ethics of the U.S.A. America had the power to use the atomic bomb but did not restrain itself from using it.
If there is no restraint on the power that a country has, then there will be no civility. The casualties of the two bombs are estimated at around 200,000 people, most of whom were civilians. This many deaths of innocent people is unjust in any case. Also, the use of these bombs had a huge effect on Japan’s environment and economy. America did not seem to be at all shaken by the fact that it had caused this distress. But you can safely assume the economy took a huge blow, losing two cities and 200,000 citizens. Furthermore, the effect on the environment must have been a great loss as well. The two bombs together destroyed thousands of miles of land, including some forest and mountain. The bombs, in addition to people, killed thousands of animals, and destroyed the habitat for any future wildlife.

Work Cited
1. Democritus. (n.d.). A BRIEF HISTORY OF ATOMIC THEORY DEVELOPMENT. Retrieved September 26, 2010, from http://www.eoam.cc.ok.us/~rjones/Pages/online1014/chemistry/chapter_8/pages/history_of_atom.html#top%20anchor
2. Freudenrich, C. C. (2010). Crude Oil. In How Oil Refinery Works. Retrieved September 26, 2010, from New York Times website: http://science.howstuffworks.com/environmental/energy/oil-refining1.htm
3. Helmenstine, A. M. (2010). Ionization Detectors. In How Do Smoke Detectors Work? Retrieved September 26, 2010, from New York Times website: http://chemistry.about.com/cs/howthingswork/a/aa071401a.htm
4. A Planetary Model of the Atom. (n.d.). The Bohr Model. Retrieved September 26, 2010, from http://csep10.phys.utk.edu/astr162/lect/index.html
5. Rutherford’s Experiment and Atomic Model. (n.d.). Retrieved September 26, 2010, from Google website: http://www.daviddarling.info/encyclopedia/R/Rutherfords_experiment_and_atomic_model.html
6. Willis, B. (1999). Nuclear Fission. In Nuclear Bombs … How They Work. Retrieved September 26, 2010, from Worsley School website: http://www.worsleyschool.net/science/files/nuclear/bomb.html
Astronomy 101: Studying the Sun
Lesson 8: Visiting Close to Home
By Nick Greene. Updated March 06, 2017.

What Is a Solar System? Everyone knows we live in a neighborhood of space called the solar system. What is it, exactly? It turns out that our knowledge of our place in space is changing radically as we send spacecraft to explore it. It's doubly important to know what a solar system is as telescopes study planetary systems around other stars, as well. Let's examine the basics of the solar system. First, it consists of a star, orbited by planets or smaller rocky bodies. The gravitational pull of the star holds the system together. Our solar system consists of our sun, which is a star called Sol, eight planets including the one we live on, Earth, along with the satellites of those planets, a number of asteroids, comets, and other smaller objects. For this lesson, we'll concentrate on our star, the Sun.

The Sun
While some stars in our galaxy are nearly as old as the universe, about 13.75 billion years, our Sun is a second-generation star. It is only 4.6 billion years old. Some of its material came from former stars. Stars are designated by a letter and a number combination roughly according to their surface temperature. The classes from hottest to coolest are: W, O, B, A, F, G, K, M, R, N, and S. The number is a subcategory of each designation, and sometimes a third letter is added to refine the type even further. Our Sun is designated as a G2V star. Most of the time, the rest of us call it "the Sun" or "Sol". Astronomers describe it as a very ordinary star. Since its creation, our star has used up about half of the hydrogen in its core. Over the next 5 billion years or so, it will grow steadily brighter as more helium accumulates in its core. As the supply of hydrogen dwindles, the Sun's core must keep producing enough pressure to keep the Sun from collapsing in on itself. The only way it can do this is to increase its temperature. Eventually, it will run out of hydrogen fuel. At that point, the Sun will go through a radical change which will most likely result in the complete destruction of the planet Earth. First, its outer layers will expand and engulf the inner solar system. Those layers will then escape out to space, creating a ring-like nebula around the Sun. What's left of the Sun will light up that cloud of gases and dust, creating a planetary nebula. The remaining remnant of our star will shrink down to become a white dwarf, taking billions of years to cool.

Observing the Sun
Of course, astronomers study the Sun every day, using ground-based solar observatories and orbiting spacecraft specially designed to study our star. A very interesting phenomenon associated with the Sun is called an eclipse. It happens when our own Moon passes between the Earth and the Sun, blocking out all or part of the Sun from view. Warning: observing the Sun on your own can be quite dangerous. It should never be viewed directly, either with or without a magnifying device. Follow good viewing advice when observing the Sun.
Permanent damage can be done to your eyes in a fraction of a second unless proper precautions are taken. There are filters which can be utilized with many telescopes. Consult someone with a lot of experience before attempting solar viewing. Or better yet, go to an observatory or science center that offers solar viewing and take advantage of their expertise.

Sun Statistics:
diameter: 1,390,000 km
mass: 1.989e30 kg
temperature: 5,800 K (surface); 15,600,000 K (core)

In our next lesson, we'll take a closer look at the inner solar system, including Mercury, Venus, Earth, and Mars.

Assignment: Read more about star color classification, the Milky Way, and eclipses.

Next Lesson: Visiting Close to Home: The Inner Solar System (Lessons 9, 10). Edited and updated by Carolyn Collins Petersen.
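As a quick exercise with the statistics above, you can check the Sun's mean density for yourself (a simple sketch; the inputs are just the numbers quoted in the table):

    import math

    # Mean density of the Sun from the statistics quoted above
    diameter = 1.39e9          # metres (1,390,000 km)
    mass = 1.989e30            # kg
    density = mass / (4/3 * math.pi * (diameter/2)**3)
    print(density)             # ~1,410 kg/m^3, only about 1.4 times that of water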
Amazing what lurks below the ground… Underground laboratories are the main infrastructure for astroparticle physics. In a sense this discipline was born there. But also for the future these infrastructures will be of capital importance, as many experiments need to be performed underground. Underground laboratories will continue to play a major role in the next decades. What scientific topics are addressed underground? Just to name a few: direct searches for dark matter particles, double beta decay, proton decay, solar neutrinos, geoneutrinos, etc. All of this research needs to be performed underground. Of course, other forms of dark matter research can be done in space or in accelerator experiments.

10. SNO and SNOLAB - Sudbury, Canada
Solar neutrinos are small particles that are born in the Sun's thermonuclear fusion and travel near the speed of light. Right now, trillions and trillions of them are traveling through space. To get an idea of how many, take the earth's population and multiply by 25. That is approximately how many neutrinos are passing through your thumbnail every second. Scientists know they exist, but they are so small and travel so fast that seeing one is almost impossible. This is exactly why SNOLAB was built: deep underground in SNOLAB, the level of cosmic rays is reduced 10-million-fold. The Sudbury Neutrino Observatory (SNO for short) is 2 km (6,800 feet) deep in the ground in Ontario, Canada. The Observatory was created to study neutrinos – weakly interacting particles – using the rock above it to filter out cosmic radiation. This ensured that only neutrinos, which easily penetrate matter, were observed. Right next to the SNO stands the SNOLAB, a new expansion to the neutrino and dark matter research program.

9. DUSEL Lead - South Dakota, USA
In 2010 the DUSEL Program Advisory Committee summarized: We are impressed by the breadth and depth of the DUSEL science. The envisioned program in physics and astrophysics will address fundamental questions about the Universe and its fundamental laws, such as the question of why the universe contains matter but no antimatter, the nature of dark matter, the origin of neutrino mass, and the genesis of the chemical elements. … In addition, the Committee felt that the interdisciplinary laboratory, with sustained support, will provide unique scientific opportunities that engage and educate the next generation of scientists and engineers. In December 2010, the National Science Board (NSB), in its role as the oversight body for the National Science Foundation (NSF), unexpectedly decided to deny further NSF funding for the Deep Underground Science and Engineering Laboratory (DUSEL). As it did so, the NSB nevertheless expressed its interest in the scientific programs moving forward. The SURF Project, including NSF and DOE, has spent ten years forging the path for creating these experiments and providing the facilities necessary to lead the worldwide effort.

8. Aquarius Reef Base – Florida Keys, USA
Aquarius Reef Base, in the Florida Keys National Marine Sanctuary, is the world's only undersea research station. It is an underwater ocean laboratory deployed three and a half miles offshore, at a depth of 60 feet, at the base of one of the many beautiful coral reefs comprising the Florida Keys National Marine Sanctuary. Scientists live in Aquarius during ten-day missions, using saturation diving to study and explore the coastal ocean. Aquarius is owned by NOAA and is operated by the University of North Carolina Wilmington.
The technologically advanced lab also plays a vital role in underwater technology development and serves as a facility for NASA's astronaut training program.
7. Kola Super Deep Borehole – Kola, Russia
The Kola Super Deep Borehole is a Russian scientific project that began in May 1970 on the Kola Peninsula. The project's goal was to dig as deep as possible, with an initial depth goal of 49,000 ft. However, after reaching a depth of 40,000 ft. and encountering higher than expected temperatures, the drilling was halted and the project never resumed. This hole continues to provide a wealth of new information about our world and how it was formed, as well as knowledge about our universe.
6. Svalbard Seed Bank – Spitsbergen Island, Norway
In a remote mountainside on the Norwegian tundra sits the "doomsday vault," a backup against disaster — manmade or otherwise. Inside lives the last hope should the unthinkable occur: a global seedbank that could be used to replant the world. It's a modern day Noah's Ark, in other words, full not of animals but of plant life. The Vault is dug into the Platåberget or plateau mountain near the village of Longyearbyen, Svalbard — a group of islands north of mainland Norway. The arctic permafrost offers natural freezing for the seeds, while additional cooling brings the temperatures down to minus 0.4 degrees Fahrenheit. The facility is capable of holding four million samples and has two functions. The first is to be able to restore agriculture should a global disaster of some kind reduce or threaten crops in any or all parts of the world. Its second function is the continuous supply of viable genetic material to research for the development of new plant varieties suited to global change and water-restricted locations, a preventative step in our disaster preparation to ensure the future of our food supply.
5. Lake Vostok – Vostok Station, Antarctica
With tiny, living "time capsules" that survived the ages in total darkness, in freezing cold, and without food and energy from the sun, this lake is a living underwater scientific lab. Experts estimate that the lake water itself may have been isolated for as long as 15 million years. One of the most surprising and amazing discoveries of our time occurred in 1996 deep in the vast frozen wilderness of Antarctica. Russian scientists were drilling ice core samples. At just under 4,000 feet (1.2 km), the samples became clean. This at first baffled the scientific community, until they realized they had discovered the world's largest warm-water subglacial lake. Even more shocking was finding microbial life in this unforgiving place. Drilling had to cease immediately, as a lake buried that deep under ice is under enormous pressure. Had they broken through, or even cracked or weakened the ice enclosing the lake, the water would have been forced upwards and out. The impact of this would have been devastating. These problems, the location, and other concerns had everyone involved uncertain as to how to proceed. This is when NASA, realizing the potential applications for the enhancement of space travel, came up with a plan. With the greatest scientific minds from around the world coming to bear on the problem, it was not long before a Cryobot was conceptualized. The Cryobot is a probe that resembles a torpedo. It has a heated tip so that it can melt its way slowly towards the under-ice lake. The ice would refreeze behind it as it makes its way down, sealing it in and eliminating the possibility of depressurizing the site.
Once there, it would give itself a sanitizing bath and then release a remote-controlled robot (the Hydrobot) that could explore and capture samples.
4. Super-Kamiokande – Hida, Japan
Super-Kamiokande, located under Mount Ikeno near the Japanese city of Hida, is a super neutrino detector laboratory 3,281 feet (1,000 m) below ground. The tank that makes up the largest portion of the device is approximately 136 feet (41 m) tall and 129 feet (39 m) across and contains fifty thousand tons of ultra-pure water. Neutrinos pass through almost everything, including people, at close to the speed of light. In water, however, they leave a slight trail of light, called Cherenkov radiation; the purer the water, the more visible the trail. When a neutrino collides with the nucleus of an atom, the interaction produces a flash of light, leaving an imprint on a ring detector on the specialized wall of the tank. This is akin to taking a photograph of the event and allows scientists to study it. The reason the detector needs to be so deep underground is to block the cosmic rays that would otherwise swamp the rare neutrino signals in the experiments.
3. Gran Sasso – Gran Sasso Mountains, Italy
Laboratori Nazionali del Gran Sasso is a particle physics laboratory near the Gran Sasso mountain in Italy, about 120 km from Rome. In addition to a surface portion of the laboratory, there are extensive underground facilities beneath the mountain. According to its official website, the Gran Sasso lab is, as of 2006, the largest underground particle physics laboratory in the world. The experimental halls are covered by about 1400 m of rock, protecting the experiments from cosmic rays. Lurking in Italy's subterranean Gran Sasso National Laboratory, OPERA (Oscillation Project with Emulsion-tRacking Apparatus) detects neutrinos that are fired through the Earth from the European particle physics laboratory, CERN, near Geneva, Switzerland. As the particles hardly interact at all with other matter, they stream right through the ground, with only a very few striking the material in the detector and making a noticeable shower of particles. Gran Sasso uses lead from an ancient Roman shipwreck as a shield, donated by the National Archaeological Museum in Cagliari. This ancient lead is far less radioactive than lead that is mined today.
2. IceCube – South Pole
The IceCube Neutrino Observatory, built over a decade at a cost of $271 million, is buried over a kilometer under the South Pole, its strings of sensors extending deeper than the combined height of the world's tallest skyscrapers. This particle detector records the interactions of neutrinos, nearly massless sub-atomic particles. The IceCube Observatory is designed to detect a blue light, called Cherenkov radiation, created by the interactions of individual neutrinos crashing into ice atoms. Cherenkov radiation is generally considered to be the equivalent of a sonic boom for light. The telescope searches for neutrinos from the most violent astrophysical sources – events like exploding stars, gamma ray bursts, and cataclysmic phenomena involving black holes and neutron stars. IceCube is operated by the University of Wisconsin-Madison and the National Science Foundation, with funding provided by the United States, Belgium, Germany, and Sweden. Researchers from Barbados, Canada, Japan, New Zealand, Switzerland and the United Kingdom are also involved in the project.
1. CERN – Geneva, Switzerland
The European Organization for Nuclear Research (French: Organisation Européenne pour la Recherche Nucléaire), known as CERN, is an international organization whose purpose is to operate the world's largest particle physics laboratory. The lab is situated 110 meters below ground in northwest Geneva. The CERN organization is the largest gathering of scientists on the planet, incorporating the efforts of over 7,900 experts in particle physics from 580 universities and more than 80 nationalities.
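The neutrino-count comparison in the SNOLAB entry above is easy to sanity-check. Below is a minimal sketch, assuming a total solar neutrino flux at Earth of roughly 6×10^10 per square centimeter per second; the thumbnail area and population figures are likewise rough assumptions, not values from the article:

```python
# Rough sanity check of the neutrino-flux claim in the SNOLAB entry above.
# The flux value is the commonly quoted order of magnitude for the total
# solar neutrino flux at 1 AU; treat all inputs here as assumptions.

SOLAR_NU_FLUX_PER_CM2_S = 6e10   # neutrinos / cm^2 / s at Earth
THUMBNAIL_AREA_CM2 = 2.0         # assumed ~2 cm^2 thumbnail
WORLD_POPULATION = 8e9           # assumed present-day population

per_thumbnail = SOLAR_NU_FLUX_PER_CM2_S * THUMBNAIL_AREA_CM2
article_estimate = WORLD_POPULATION * 25   # the article's "population times 25"

print(f"flux-based estimate: {per_thumbnail:.1e} neutrinos/s")
print(f"article's estimate:  {article_estimate:.1e} neutrinos/s")
# Both land around 10^11 per second, so the article's comparison holds
# to within an order of magnitude.
```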
0.862112
3.101463
Galactic core outbursts are the most energetic phenomena taking place in the universe. During the early 1960s astronomers began to realize that the massive object that forms the core of a spiral or giant elliptical galaxy periodically becomes active, spewing out a fierce barrage of cosmic rays with a total energy output equal to hundreds of thousands of supernova explosions.(1, 2) The cosmic ray electron component of such an outburst is always accompanied by synchrotron emission, which consists of electromagnetic radiation ranging from radio wave frequencies on up to X-ray and gamma-ray frequencies. A survey has shown that roughly 15% – 20% of all spiral galaxies are currently seen in their active core explosion phase, during which they exhibit Seyfert-like characteristics. One example is the Seyfert galaxy NGC 1566 (Figure 1). In some galaxies these active emissions have been observed to equal the energy from billions of supernova explosions. The galaxies undergoing these more intense outbursts are sometimes designated as quasars, their core emission being so strong as to greatly exceed the stellar emission from the galaxy's disc, causing the galaxy to have a star-like or quasi-stellar appearance. One example is the spiral galaxy PG 0052+251 (Figure 2), whose active, quasar-like core is radiating 7 times as much energy as comes from all of the galaxy's stars (image courtesy of J. Bahcall and NASA). During the 1970s astronomers realized that the core of our own Galaxy (the Milky Way) has also had a history of recurrent outbursts, and that at periodic intervals it enters an active phase in which its rate of cosmic ray emission rises many orders of magnitude.(3) Sometimes designated as Sagittarius A*, the core is estimated to be about 4 million times as massive as our sun; see Figures 3 and 4 (mapped by the UCLA Galactic Center Group). But some of the larger, more mature galaxies can have core bodies that range up to billions of times the mass of our Sun. Conventional astronomy refers to these as "black holes," visualizing all of the galactic core's mass to be concentrated at a single dimensionless geometrical point. However, evidence suggests that galactic core mass does not exist in the form of a point singularity, but as a very dense supermassive star having a density similar to a neutron star or hyperon star. In the cosmology of subquantum kinetics, these non-singularity core masses are termed mother stars (see link for more information). Paul LaViolette, who is currently president and chief researcher of the Starburst Foundation, was the first to demonstrate that cosmic rays radiated from the active core of an exploding galaxy can penetrate far outside the galaxy's nucleus to bombard solar systems like our own residing in its peripheral spiral arm disk. He coined the term "galactic superwave" to refer to such a cosmic barrage. Galactic superwaves are a recent discovery. Until recently, astronomers believed galactic cores erupted very infrequently, every 10 to 100 million years.(1) They also believed that interstellar magnetic fields in the Galactic nucleus would trap the emitted particles in spiral orbits, causing them to reach the Earth very slowly.(4) For these reasons, most astronomers did not believe that core explosions in the Milky Way posed any immediate threat to the Earth.
However, in 1983 LaViolette presented evidence to the scientific community indicating that:(5 – 7)
• Galactic core explosions actually occur about every 13,000 – 26,000 years for major outbursts and more frequently for lesser events.
• The emitted cosmic rays escape from the core virtually unimpeded. As they travel radially outward through the Galaxy, they form a spherical shell that advances at very close to the speed of light.
Astronomical discoveries subsequently confirmed aspects of this superwave hypothesis; see Verified Prediction No. 2. For example, in 1985, astronomers discovered that Cygnus X-3, an energetic celestial source of cosmic rays, which is about the same distance from Earth as the Galactic Center (25,000 light years), showers the Earth with particles traveling at close to the speed of light, moving along essentially straight paths.(8) Later, scientists found the Earth is impacted, at sporadic intervals, with cosmic rays emitted from the X-ray pulsar Hercules X-1 (about 12,000 light years distant).(9, 10) The intervening interstellar medium has so little effect on these particles that their pulsation period of 1.2357 seconds is constant to within 300 microseconds. These findings are reason to be gravely concerned about the effects of a Galactic core explosion, because they imply that the cosmic rays generated can impact our planet virtually without warning, accompanying the light arriving from the initial core outburst.(5, 11, 12) A study of astronomical and geological data reveals that a superwave from our Galactic core impacted our solar system near the end of the last ice age, 11,000 to 16,000 years ago.(13, 14) This cosmic ray event spanned a period of several thousand years and climaxed between 15,900 and 12,000 years ago. Although far less intense than the PG 0052+251 quasar outburst, it nevertheless was able to substantially affect the Earth's climate and energize the Sun. Data obtained from polar ice core samples show evidence of this cosmic ray event as well as other cosmic ray intensity peaks from superwaves impacting the Earth at earlier times (Figure 5).(11, 15) [An explanation of how this cosmic ray intensity profile was calculated from published beryllium-10 data is presented in the update to Dr. LaViolette's dissertation and in the appendix of a paper preprint available for download.] Figure 6 shows the position in the Galaxy of the 15,900 years before 2000 (b2k) superwave when viewed at differing times following the time it passed through the solar system.(5) This elliptical shape of the event horizon is determined by the time it takes the cosmic ray electrons to travel radially outward from the Galactic center at the speed of light, plus the time it takes the synchrotron radiation generated by those cosmic ray electrons to reach us at the speed of light. As the superwave expands outward through the galaxy with the passage of millennia, the ellipticity of its event horizon progressively decreases. LaViolette found that the cosmic ray intensity along this ellipsoidal event horizon shell fits the galactic radio background distribution better than any other previous cosmic ray model. He also found that supernova explosion dates coincided with times when the superwave was passing the progenitor star's location, suggesting that superwaves trigger these explosions. (Figure 6 © P. LaViolette 2011.)
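The elliptical geometry described here follows directly from a constant light-travel-time sum, and is easy to reproduce. Below is a minimal sketch (not LaViolette's own calculation), taking the 25,000-light-year Galactic Center distance quoted above: every point whose core-to-point plus point-to-Earth path length equals the total elapsed light-travel time lies on an ellipse with foci at the core and at Earth, and the eccentricity falls as the superwave ages:

```python
import math

D_GC_LY = 25_000  # Galactic-center distance in light-years, from the text above

def horizon_ellipse(years_since_passage):
    """Semi-axes and eccentricity of the superwave 'event horizon'.

    Every point P we see glowing now satisfies
        (core -> P) + (P -> Earth) = c * (total elapsed time),
    a constant path-length sum -- the defining property of an ellipse
    with foci at the Galactic core and at Earth.
    """
    path_sum = D_GC_LY + years_since_passage  # in light-years, since c = 1 ly/yr
    a = path_sum / 2.0                        # semi-major axis
    c_foc = D_GC_LY / 2.0                     # center-to-focus distance
    b = math.sqrt(a**2 - c_foc**2)            # semi-minor axis
    return a, b, c_foc / a                    # eccentricity e = c/a

for dt in (0, 15_900, 100_000, 1_000_000):
    a, b, e = horizon_ellipse(dt)
    print(f"{dt:>9,} yr after passage: a = {a:>9,.0f} ly, e = {e:.3f}")
# Eccentricity runs from 1.0 (degenerate, at the moment of passage)
# down toward 0 (circular) as the shell expands -- the progressive
# decrease in ellipticity the text describes.
```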
The effects on the Sun and on the Earth's climate were not due to the superwave cosmic rays themselves, but to the cosmic dust that these cosmic rays transported into the solar system. Observations have shown that the solar system is presently passing through a dense cloud of cosmic dust and frozen debris associated with the North Polar Spur supernova remnant. This material is normally kept at bay by the outward pressure of the solar wind. But an impacting superwave cosmic ray volley would have overpowered the solar wind and pushed large quantities of this material into the interplanetary environment. The Sun would have become enveloped in a cocoon of dust that would have caused its spectrum to shift toward the infrared. Radiation backscattered from this cocoon would have caused the Sun's corona and photosphere to inflate, somewhat like that observed today in dust-choked stars called "T Tauri stars." In addition, the dust grains filling the solar system would have backscattered solar radiation onto the Earth, producing an "interplanetary hothouse effect" that would have substantially increased the influx of solar radiation to the Earth. These various solar effects caused atmospheric warming and inversion conditions that facilitated glacial growth, which brought on ice age conditions. On occasions when the solar radiation influx to the Earth became particularly high, the ice age climate warmed, initiating episodes of rapid glacial melting and continental flooding. Details of this scenario are described in the book Earth Under Fire,(12) in Paul LaViolette's Ph.D. dissertation,(16) as well as in a series of journal articles he has published.(6, 7, 13, 16, 18) LaViolette's prediction that there is a residual flow of interstellar dust currently entering the solar system from the Galactic center direction was later verified by data collected from the Ulysses spacecraft and by AMOR radar measurements made in New Zealand.(18) For a listing of related theory predictions and their verification, click here. Research suggests that the Sun was highly active between 16,000 and 11,000 years ago; see dissertation excerpt Chapter 4. LaViolette hypothesized that this extreme level of flaring activity resulted because the Sun was accreting dust and gas from its dust-congested surroundings during this superwave "storm interval". During this time the sun would have emitted super-sized solar proton events (SPEs), intense volleys of solar cosmic rays, and super coronal mass ejections (CMEs), immense spherical masses of coronal plasma. These would have been large enough to have posed an extreme hazard for life on Earth. There is evidence that one particularly tragic SPE impacted the Earth around 12,900 years ago, recorded in ocean sediments and polar ice as a spike in both atmospheric C-14 and nitrate ion concentration, the largest to occur during the entire Younger Dryas/Alleröd climatic period.(19) This event happened to coincide with the termination boundary of the two-millennium-long Pleistocene mass extinction, beyond which one finds few surviving Pleistocene mammals. This is believed to have been the worst animal extinction episode to occur since the extinction of the dinosaurs 65 million years ago. It is not much of an inductive leap to conclude that these two events were causally related. As LaViolette has shown, the 12,887 years b2k solar proton event would have been able to deliver a lethal radiation dose to the Earth's surface.
Its effects would have been particularly enhanced if, immediately prior to the event, the Earth's magnetic field had been weakened by the impact of a major coronal mass ejection. Solar cosmic rays in the CME plasma would have become trapped in the geomagnetic field to form storm-time radiation belts, and the ring current generated by these cosmic rays would have generated a strong magnetic field opposed to the Earth's field, substantially weakening its intensity.(5, 12) For more about solar-induced geomagnetic excursions, see dissertation excerpt Chapter 3 and Verified Prediction No. 10. A critique of the Firestone-West supernova comet theory is presented in the paper "The cause of the megafaunal extinction: Supernova or Galactic core outburst?"
[Figure © P. LaViolette 2011: a land tsunami overtaking a mammoth unawares.]
Today, tomorrow, next week, next year… sometime in the coming decades… our planet could once again be hit by an intense volley of Galactic cosmic rays. It will come cloaked and hidden from us, until the very moment it strikes. We live on the edge of a galactic volcano. Knowing neither the time, the magnitude, nor the severity of the next eruption or its impact on our environment, we stand unprepared to deal with this event, much less anticipate its arrival.
Their Effects on Life and Society
When cosmic rays from Galactic superwaves impact the Earth's atmosphere, they produce "electron cascades." Each primary cosmic ray generates millions of secondary high-energy electrons. Many of these particles scatter upwards and become trapped by the Earth's magnetic field to form radiation belts similar to those created by high-altitude nuclear explosions. In just one day, a major Galactic superwave event would inject into the geomagnetic field a particle energy equivalent to 1000 one-megaton hydrogen bomb explosions (10^25 ergs). At this rate, the energy delivered to the belts after one year would exceed 30,000 times the energy received from the most powerful solar cosmic ray storms observed in modern times. Such energized radiation belts could cause a global communications blackout by creating radio static and by permanently damaging critical electronic components of communication satellites. Air travel during such conditions would be extremely hazardous. The resulting atmospheric ionization would destroy the ozone layer and increase skin cancer rates, due to high levels of UV reaching the Earth's surface; the cosmic ray particles penetrating to ground level would significantly increase cell mutation rates. Galactic superwaves may also produce an intense electromagnetic pulse (EMP) whenever a cosmic ray front happens to strike the Earth's atmosphere. Galactic superwaves such as those that arrived during the last ice age could have generated pulses delivering tens of thousands of volts per meter in times as short as a billionth of a second, comparable to the early-time EMP signal from a high-altitude nuclear explosion (see Figure 7). In addition, there is the danger that a superwave could transport outlying cosmic dust into the solar system, which could seriously affect the Earth's climate, possibly triggering a new ice age. Although there is a small probability that the next superwave will be as catastrophic as the one at the end of the last ice age, even the less intense, more frequent events would be quite hazardous for the global economy. In March 2009, the U.S.
National Research Council published a report entitled Severe Space Weather Events: Understanding Societal and Economic Impacts; see also the March 2009 New Scientist for a summary. It describes hazards to modern society that could occur should we experience a large-magnitude solar storm, similar to the 1859 Carrington event solar flare. Many of the adverse effects the report describes are the same as those that would occur during the arrival of a superwave, even one of relatively low magnitude. The four-second extragalactic gamma ray burst that arrived in 1983 did have a measurable effect on radio transmissions used for global navigation and communication.(20) By comparison, the "minor" superwave events discussed above might have total energies hundreds of millions of times greater than this. (Image courtesy of NASA/TRACE.)
The December 26th, 2004 Earthquake/Tsunami and the December 27th, 2004 Gamma Ray Burst
Galactic Center activity occurs frequently between major superwave events. Astronomical observation indicates that during the last 6,000 years, the Galactic center has expelled 14 clouds of ionized gas.(21) See Figure 8 for dates. These outbursts may have produced minor superwave emissions with EMP effects comparable to those of major superwaves. About 80% of these bursts took place within 500 years of one another (Figure 9). With the most recent outburst occurring 700 years ago, there is a high probability of another one occurring in the near future. At present little research is being done on this important astronomical phenomenon. Nor are we prepared should a Galactic superwave suddenly arrive. International channels of communication are not in place to deal with the disasters that a superwave could bring upon us.
1. Burbidge, G. R. et al. "Evidence for the occurrence of violent events in the nuclei of galaxies." Reviews of Modern Physics 35 (1963): 947.
2. Burbidge, G. R. et al. "Physics of compact nonthermal sources III. Energetic considerations." Astrophysical Journal 193 (1974): 43.
3. Oort, J. H. "The Galactic Center." Annual Reviews of Astronomy & Astrophysics 15 (1977): 295.
4. Ginzburg, V. L., and Syrovatskii, S. I. The Origin of Cosmic Rays. New York: Pergamon Press, 1964, p. 207.
5. LaViolette, P. A. Galactic Explosions, Cosmic Dust Invasions, and Climatic Change. Ph.D. dissertation, Portland State University, Portland, Oregon, August 1983.
6. LaViolette, P. A. "The terminal Pleistocene cosmic event: Evidence for recent incursion of nebular material into the Solar System." Eos 64 (1983): 286. American Geophysical Union paper, Baltimore, Maryland.
7. LaViolette, P. A. "Elevated concentrations of cosmic dust in Wisconsin stage polar ice." Meteoritics 18 (1983): 336. Meteoritical Society paper, Mainz, Germany.
8. Marshak, M., et al. "Evidence for muon production by particles from Cygnus X-3." Physical Review Letters 54 (1985): 2079.
9. Dingus, B. L. et al. "High-energy pulsed emission from Hercules X-1 with anomalous air-shower muon production." Physical Review Letters 61 (1988): 1906.
10. Schwarzschild, B. "Are the ultra-energetic cosmic gammas really photons?" Physics Today (11) (1988): 17.
11. LaViolette, P. A. Earth Under Fire. Rochester, VT: Bear & Co., 1997, 2005.
12. LaViolette, P. A. "Cosmic ray volleys from the Galactic Center and their recent impact on the Earth environment." Earth, Moon, and Planets 37 (1987): 241.
13. Brown, R. L., and Johnston, K. J.
“The gas density and distribution within 2 parsecs of the Galactic Center.” Astrophysical Journal 268 (1983): L85.
14. Lo, K. Y., and Claussen, M. J. "High-resolution observations of ionized gas in central 3 parsecs of the Galaxy: possible evidence for infall." Nature 306 (1983): 647.
15. Raisbeck, G. M., et al. "Evidence for two intervals of enhanced 10Be deposition in Antarctic ice during the Last Glacial Period." Nature 326 (1987): 273.
16. LaViolette, P. A. "Evidence of high cosmic dust concentrations in Late Pleistocene polar ice." Meteoritics 20 (1985): 545.
17. LaViolette, P. A. "Galactic core explosions and the evolution of life." Anthropos 12 (1990): 239-255.
18. LaViolette, P. A. "Anticipation of the Ulysses interstellar dust findings." Eos 74(44) (1993): 510-511.
19. LaViolette, P. A. "Evidence for a solar cause of the Pleistocene mass extinction." 2011, accepted for publication.
20. Fishman, G. J., and Inan, U. S. "Observation of an ionospheric disturbance caused by a gamma-ray burst." Nature 331 (1988): 418.
21. Lacy, J. H., Townes, C. H., Geballe, T. R., and Hollenbach, D. J. "Observations of the motion and distribution of the ionized gas in the central parsec of the Galaxy. II." Astrophysical Journal 241 (1980): 132.
Disclaimer: The synopsis of the superwave theory presented here should not be regarded as a complete presentation of this theory for the purpose of scientific debate on the internet. Those interested in a rigorous presentation of the theory and its supporting evidence should consult the update of Paul LaViolette's Ph.D. dissertation (available in CD-ROM format) and his various papers, some of which are available for download at this website. His book Earth Under Fire is also a good resource but is written for a general audience and is not intended as the primary reference to rely on for scientific debate.
From The Starburst Foundation: http://starburstfound.org/galactic-cosmic-ray-volleys-a-coming-global-disaster/
0.896577
4.160776
Image credit: NASA/JPL
Scientists have concluded the part of Mars that NASA’s Opportunity rover is exploring was soaking wet in the past. Evidence the rover found in a rock outcrop led scientists to the conclusion. Clues from the rocks’ composition, such as the presence of sulfates, and the rocks’ physical appearance, such as niches where crystals grew, helped make the case for a watery history. “Liquid water once flowed through these rocks. It changed their texture, and it changed their chemistry,” said Dr. Steve Squyres of Cornell University, Ithaca, N.Y., principal investigator for the science instruments on Opportunity and its twin, Spirit. “We’ve been able to read the tell-tale clues the water left behind, giving us confidence in that conclusion.” Dr. James Garvin, lead scientist for Mars and lunar exploration at NASA Headquarters, Washington, said, “NASA launched the Mars Exploration Rover mission specifically to check whether at least one part of Mars ever had a persistently wet environment that could possibly have been hospitable to life. Today we have strong evidence for an exciting answer: Yes.” Opportunity has more work ahead. It will try to determine whether, besides being exposed to water after they formed, the rocks may have originally been laid down by minerals precipitating out of solution at the bottom of a salty lake or sea. The first views Opportunity sent of its landing site in Mars’ Meridiani Planum region five weeks ago delighted researchers at NASA’s Jet Propulsion Laboratory, Pasadena, Calif., because of the good fortune to have the spacecraft arrive next to an exposed slice of bedrock on the inner slope of a small crater. The robotic field geologist has spent most of the past three weeks surveying the whole outcrop, and then turning back for close-up inspection of selected portions. The rover found a very high concentration of sulfur in the outcrop with its alpha particle X-ray spectrometer, which identifies chemical elements in a sample. “The chemical form of this sulfur appears to be in magnesium, iron or other sulfate salts,” said Dr. Benton Clark of Lockheed Martin Space Systems, Denver. “Elements that can form chloride or even bromide salts have also been detected.” At the same location, the rover’s Moessbauer spectrometer, which identifies iron-bearing minerals, detected a hydrated iron sulfate mineral called jarosite. Germany provided both the alpha particle X-ray spectrometer and the Moessbauer spectrometer. Opportunity’s miniature thermal emission spectrometer has also provided evidence for sulfates. On Earth, rocks with as much salt as this Mars rock either have formed in water or, after formation, have been highly altered by long exposures to water. Jarosite may point to the rock’s wet history having been in an acidic lake or an acidic hot springs environment. The water evidence from the rocks’ physical appearance comes in at least three categories, said Dr. John Grotzinger, sedimentary geologist from the Massachusetts Institute of Technology, Cambridge: indentations called “vugs,” spherules and crossbedding. Pictures from the rover’s panoramic camera and microscopic imager reveal that the target rock, dubbed “El Capitan,” is thoroughly pocked with indentations about a centimeter (0.4 inch) long and one-fourth or less that wide, with apparently random orientations. This distinctive texture is familiar to geologists as marking the sites where crystals of salt minerals form within rocks that sit in briny water.
When the crystals later disappear, either by erosion or by dissolving in less-salty water, the voids left behind are called vugs, and in this case they conform to the geometry of possible former evaporite minerals. Round particles the size of BBs are embedded in the outcrop. From shape alone, these spherules might be formed from volcanic eruptions, from lofting of molten droplets by a meteor impact, or from accumulation of minerals coming out of solution inside a porous, water-soaked rock. Opportunity’s observations that the spherules are not concentrated at particular layers in the outcrop weigh against a volcanic or impact origin, but do not completely rule out those origins. Layers in the rock that lie at an angle to the main layers, a pattern called crossbedding, can result from the action of wind or water. Preliminary views by Opportunity hint the crossbedding bears hallmarks of water action, such as the small scale of the crossbedding and possible concave patterns formed by sinuous crestlines of underwater ridges. The images obtained to date are not adequate for a definitive answer. So scientists plan to maneuver Opportunity closer to the features for a better look. “We have tantalizing clues, and we’re planning to evaluate this possibility in the near future,” Grotzinger said. JPL, a division of the California Institute of Technology in Pasadena, manages the Mars Exploration Rover project for NASA’s Office of Space Science, Washington. For information about NASA and the Mars mission on the Internet, visit http://www.nasa.gov. Images and additional information about the project are also available at http://marsrovers.jpl.nasa.gov and http://athena.cornell.edu. Original Source: NASA/JPL News Release
0.814543
3.69686
Jupiter may be the biggest planet, but it sure seems to get picked on. On March 17, amateur astronomer Gerrit Kernbauer of Mödling, Austria, a small town just south of Vienna, was filming Jupiter through his 7.8-inch (200mm) telescope. Ten days later he returned to process the videos and discovered a bright flash of light at Jupiter’s limb.
Possible asteroid or comet impact on Jupiter on March 17
“I was observing and filming Jupiter with my Skywatcher Newton 200 telescope,” writes Kernbauer. “The seeing was not the best, so I hesitated to process the videos. Nevertheless, 10 days later I looked through the videos and I found this strange light spot that appeared for less than one second on the edge of the planetary disc. Thinking back to Shoemaker-Levy 9, my only explanation for this is an asteroid or comet that enters Jupiter’s high atmosphere and burned up/explode very fast.”
The flash certainly looks genuine, plus we know this has happened at Jupiter before. Kernbauer mentions the first-ever confirmed comet impact, which occurred in July 1994. Comet Shoemaker-Levy 9, shattered to pieces by strong tidal forces when it passed extremely close to the planet in 1992, returned two years later to collide with Jupiter — one fragment at a time. 21 separate fragments pelted the planet, leaving big, dark blotches in the cloud tops easily seen in small telescopes at the time.
Video of possible Jupiter impact flash by John McKeon on March 17, 2016
Not long after Kernbauer got the word out, a second video came to light, taken by John McKeon from near Dublin, Ireland, using his 11-inch (28 cm) telescope. And get this. Both videos were taken in the same time frame, making it likely they captured a genuine impact. With the advent of cheap video cameras, amateurs have kept a close eye on the planet, hoping to catch sight of more impacts. Two factors make Jupiter a great place to look for asteroid/comet collisions. First, the planet’s strong gravitational influence is able to draw in more comets and asteroids than smaller planets. Second, its powerful gravity causes small objects to accelerate faster, increasing their impact energy. According to Bad Astronomy blogger Phil Plait: “On average (and ignoring orbital velocity), an object will hit Jupiter with roughly five times the velocity it hits Earth, so the impact energy is 25 times as high.” Simply put, it doesn’t take something very big to create a big, bright bang when it slams into Jove’s atmosphere. It wasn’t long before the next whacking: 15 years, to be exact. On July 19, 2009, Australian amateur Anthony Wesley was the first to record a brand new dark scar near Jupiter’s south pole using a low-light video camera on his telescope. Although no one saw or filmed the impact itself, there was no question that the brand new spot was evidence of the aftermath: NASA’s Infrared Telescope Facility at Mauna Kea picked up a bright spot at the location in infrared light.
Jupiter impact event recorded by Christopher Go on June 3, 2010
Once we started looking closely, the impacts kept coming. Wesley hit a second home run on June 3, 2010 with video of an impact flash, later confirmed on a second video made by Christopher Go. This was quickly followed by another flash filmed by Japanese amateur astronomer Masayuki Tachikawa on August 20, 2010.
Jupiter impact flash on August 20, 2010 by Masayuki Tachikawa
Prior to this month’s event, amateur Dan Petersen visually observed an impact flash lasting 1-2 seconds in his 12-inch (30.5 cm) scope on September 10, 2012, which was also confirmed on webcam by George Hall. Keep ’em comin’!
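Plait's "25 times" figure is just the velocity-squared dependence of kinetic energy. Below is a rough sketch using assumed, illustrative impactor values (the mass and speeds are not from any of the observations described above; Jupiter's escape velocity of roughly 60 km/s is about five times Earth's):

```python
# Kinetic energy scales with the square of impact speed (E = m v^2 / 2),
# which is all that lies behind the "25 times the energy" quote above.
# The impactor mass and speeds below are illustrative assumptions only.

def impact_energy_joules(mass_kg, speed_km_s):
    v = speed_km_s * 1_000.0          # km/s -> m/s
    return 0.5 * mass_kg * v**2

mass = 1.0e9            # assumed small icy body, about a billion kilograms
v_earth = 12.0          # rough minimum impact speed at Earth (escape velocity)
v_jupiter = 60.0        # roughly five times higher at Jupiter

e_earth = impact_energy_joules(mass, v_earth)
e_jupiter = impact_energy_joules(mass, v_jupiter)
print(f"Earth:   {e_earth:.2e} J")
print(f"Jupiter: {e_jupiter:.2e} J  ({e_jupiter / e_earth:.0f}x)")  # -> 25x
```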
0.902435
3.705981
Scientists have detected traces of the earliest light in the universe, thought to emanate from the first stars formed after the Big Bang, billions of years ago. The new report, published in Nature on February 28, said researchers found the "fingerprint" of the universe's first light, imprinted on background radiation by hydrogen gas. "This is the first time we've seen any signal from this early in the Universe, aside from the afterglow of the Big Bang," Judd Bowman, an astronomer at Arizona State University who led the work, said in a statement. Following the Big Bang, physicists believe there was only darkness in the universe for about 180 million years, a period scientists call the cosmic "Dark Ages." As the universe expanded, the soup of ionized plasma created by the Big Bang slowly began to cool and form neutral hydrogen atoms, say physicists. Eventually these were pulled together by gravity and ignited to form stars. The new discovery is the closest scientists have ever come to observing that moment of "cosmic dawn." "It's very exciting to see our baby stars being born," Keith Bannister, astronomer at Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO), told CNN. "(Although) we can't see the stars themselves, we're seeing the effect they have on the gas around them." The discovery was made at a radio telescope in Western Australia, the Murchison Radio-astronomy Observatory, operated by the CSIRO. The telescope's remote location in rural Australia, inside a legislated "radio quiet zone," kept interference from other human-made devices to a minimum, CSIRO said in a statement. In their statement, CSIRO said Bowman and his team have been working to detect the signals for 12 years. Bannister said there would still need to be additional work done to confirm the findings of Bowman's team, but the discovery was still a milestone. "This is the very beginning of a very long journey. There's been a lot of work to prepare for this point and now it's been confirmed, everyone gets excited and more work will happen," Bannister said. "There's a whole bunch of different times in the universe which are still inaccessible to us with our current telescopes ... there's a lot more to explore."
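For background not spelled out in the article: signals from neutral hydrogen at this epoch are carried by its 21-cm hyperfine line, stretched to much lower frequencies by cosmic expansion. A small sketch of the conversion, assuming the commonly quoted redshift of z ≈ 17 for an epoch roughly 180 million years after the Big Bang:

```python
# Sketch: where in the radio spectrum a 21 cm signal from "cosmic dawn"
# lands. The redshift of ~17 is an assumed, commonly quoted value for
# roughly 180 million years after the Big Bang.

REST_FREQ_MHZ = 1420.4   # rest frequency of the 21 cm hydrogen line

def observed_frequency(z):
    """Observed frequency of the 21 cm line emitted at redshift z."""
    return REST_FREQ_MHZ / (1.0 + z)

for z in (0, 6, 17):
    print(f"z = {z:>2}: observed at {observed_frequency(z):7.1f} MHz")
# z = 17 lands near 79 MHz -- inside the FM radio band, one reason the
# experiment needed a legislated radio-quiet zone.
```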
0.825448
3.385148
Astronomers who study stars are providing a valuable assist to the planet-hunting astronomers pursuing the primary objective of NASA’s new TESS Mission. In fact, asteroseismologists — stellar astronomers who study seismic waves (or “starquakes”) in stars that appear as changes in brightness — often provide critical information for finding the properties of newly discovered planets. This teamwork enabled the discovery and characterization of the first planet identified by TESS for which the oscillations of its host star can be measured. The planet — TOI-197.01 (TOI is short for “TESS Object of Interest”) — is described as a “hot Saturn” in a recently accepted scientific paper. That’s because the planet is about the same size as Saturn and is also very close to its star, completing an orbit in just 14 days, and therefore very hot. The Astronomical Journal will publish the paper written by an international team of 141 astronomers. Daniel Huber, an assistant astronomer at the University of Hawaii at Manoa’s Institute for Astronomy, is the lead author of the paper. Steve Kawaler, a professor of physics and astronomy, and Miles Lucas, an undergraduate student, are co-authors from Iowa State University. “This is the first bucketful of water from the firehose of data we’re getting from TESS,” Kawaler said. TESS — the Transiting Exoplanet Survey Satellite, led by astrophysicists from the Massachusetts Institute of Technology — launched from Florida’s Cape Canaveral Air Force Station on April 18, 2018. The spacecraft’s primary mission is to find exoplanets, planets beyond our solar system. The spacecraft’s four cameras are taking nearly month-long looks at 26 vertical strips of the sky — first over the southern hemisphere and then over the northern. After two years, TESS will have scanned 85 percent of the sky. Astronomers (and their computers) sort through the images, looking for transits, the tiny dips in a star’s light caused by an orbiting planet passing in front of it. NASA’s Kepler Mission — a predecessor to TESS — looked for planets in the same way, but scanned a narrow slice of the Milky Way galaxy and focused on distant stars. TESS is targeting bright, nearby stars, allowing astronomers to follow up on its discoveries using other space and ground observations to further study and characterize stars and planets. In another paper recently published online by The Astrophysical Journal Supplement Series, astronomers from the TESS Asteroseismic Science Consortium (TASC) identified a target list of sun-like oscillating stars (many that are similar to our future sun) to be studied using TESS data — a list featuring 25,000 stars. Kawaler — who witnessed the launch of Kepler in 2009, and was in Florida for the launch of TESS (but a last-minute delay meant he had to miss liftoff to return to Ames to teach) — is on the seven-member TASC Board. The group is led by Jørgen Christensen-Dalsgaard of Aarhus University in Denmark. TASC astronomers use asteroseismic modeling to determine a host star’s radius, mass and age. That data can be combined with other observations and measurements to determine the properties of orbiting planets. In the case of host star TOI-197, the asteroseismologists used its oscillations to determine it’s about 5 billion years old and is a little heavier and larger than the sun. They also determined that planet TOI-197.01 is a gas planet with a radius about nine times the Earth’s, making it roughly the size of Saturn.
It’s also 1/13th the density of Earth and about 60 times the mass of Earth. Those findings say a lot about the TESS work ahead: “TOI-197 provides a first glimpse at the strong potential of TESS to characterize exoplanets using asteroseismology,” the astronomers wrote in their paper. Kawaler is expecting that the flood of data coming from TESS will also contain some scientific surprises. “The thing that’s exciting is that TESS is the only game in town for a while and the data are so good that we’re planning to try to do science we hadn’t thought about,” Kawaler said. “Maybe we can also look at the very faint stars — the white dwarfs — that are my first love and represent the future of our sun and solar system.”
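As a rough illustration of how asteroseismologists turn oscillations into a stellar mass and radius, the widely used solar-scaled relations combine the large frequency separation Δν, the frequency of maximum power νmax, and the effective temperature. The sketch below uses illustrative input values in the ballpark reported for TOI-197, not the paper's adopted parameters, and the simple relations omit the corrections a real analysis would apply:

```python
# Minimal sketch of the standard asteroseismic scaling relations:
#   dnu    ~ sqrt(M / R^3)        (large frequency separation)
#   nu_max ~ M * R^-2 * Teff^-0.5 (frequency of maximum power)
# Solving these for M and R in solar units:

NU_MAX_SUN = 3090.0   # muHz, solar frequency of maximum power
DNU_SUN = 135.1       # muHz, solar large frequency separation
TEFF_SUN = 5772.0     # K, solar effective temperature

def scaling_mass_radius(nu_max, dnu, teff):
    """Return (mass, radius) in solar units from the scaling relations."""
    r = (nu_max / NU_MAX_SUN) * (dnu / DNU_SUN) ** -2 * (teff / TEFF_SUN) ** 0.5
    m = (nu_max / NU_MAX_SUN) ** 3 * (dnu / DNU_SUN) ** -4 * (teff / TEFF_SUN) ** 1.5
    return m, r

# Illustrative subgiant values roughly like those reported for TOI-197:
m, r = scaling_mass_radius(nu_max=430.0, dnu=29.0, teff=5080.0)
print(f"M ~ {m:.2f} Msun, R ~ {r:.2f} Rsun")  # about 1 Msun and 2.8 Rsun
```

The output matches the article's description of a star a little heavier than the Sun but considerably larger, as expected for a star evolving off the main sequence.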
0.929578
3.912327
SOFIA Image of the Newborn Star Cluster W3A
Researchers using NASA’s Stratospheric Observatory for Infrared Astronomy (SOFIA) have captured new images of a recently born cluster of massive stars named W3A. The cluster is seen (inset) lurking in the depths of the large gas and dust cloud from which it formed. The larger image shows the overall structure of the W3 region, lying 6,400 light years away in the direction of the constellation Perseus, as seen at near-infrared wavelengths by the Spitzer Space Telescope. The inset image, composed of data obtained by SOFIA at mid-infrared wavelengths, zooms in on the violent interaction zone around the massive star cluster. The energetic radiation and strong winds from these stars will eventually shred and disperse their birth cloud, possibly triggering the formation of more stars in adjacent clouds. Astronomers using SOFIA aim to better understand the effects the largest stars in the cloud have on their smaller siblings and on the cycle of star birth. The SOFIA observations reveal the presence of some 15 massive stars in various stages of their birth process. Toward the left of the inset image, a small bubble (arrow) has been cleared out of the gas and dust by the most massive star in this cluster. This bubble is surrounded by a dense shell (green) of material in which some of the dust and all of the large molecules have been destroyed. That shell is surrounded by mostly untouched cloud material, traced by the red emission from cooler dust. Astronomers have evidence that the expansion of such bubbles around massive newly born stars acts to compress nearby material and trigger the condensation of more stars. Most stars in the Milky Way, including our sun, are thought to have formed in such violent environments. The processes involved are difficult to follow because light produced by these hot stars at visual and ultraviolet wavelengths can’t escape the surrounding clouds of interstellar material. Short-wavelength starlight absorbed by small dust grains and large molecules sets these clouds aglow at the longer infrared wavelengths observed by SOFIA, allowing astronomers to peer inside the clouds and study the internal structures and processes. The SOFIA observations were made using the FORCAST instrument (Faint Object Infrared Camera for the SOFIA Telescope; Principal Investigator Terry Herter, Cornell University). The data were analyzed and interpreted by the FORCAST team with Francisco Salgado and Alexander Tielens of the Leiden (Netherlands) Observatory plus SOFIA staff scientist James De Buizer. These data are subjects of papers presented at the 219th American Astronomical Society meeting in Austin, Texas, and papers submitted for publication in The Astrophysical Journal. The FORCAST camera combined with SOFIA’s large telescope allows the W3 region’s star formation to be probed at mid-infrared wavelengths with unprecedented spatial detail. The inset false color image combines radiation from fluorescing large molecules (wavelength of 7.7 microns, indicated as blue) and warm dust grains (19.7 microns, green; 37.1 microns, red). SOFIA is a Boeing 747SP aircraft extensively modified to carry a 17-ton reflecting telescope with an effective diameter of 2.5 meters (100 inches) to altitudes as high as 45,000 feet (14 km), above more than 99 percent of the water vapor in Earth’s atmosphere that blocks most infrared radiation from celestial sources.
SOFIA is a joint project of NASA and the German Aerospace Center (DLR), and is based and managed at NASA's Dryden Aircraft Operations Facility in Palmdale, Calif. NASA's Ames Research Center in Moffett Field, Calif., manages the SOFIA science and mission operations in cooperation with the Universities Space Research Association (USRA), headquartered in Columbia, Md., and the German SOFIA Institute (DSI) at the University of Stuttgart.
0.861712
3.839244
Target of Opportunity: Comet ISON
NASA’s Stratospheric Observatory for Infrared Astronomy (SOFIA) took off on a “target of opportunity” flight that included study of Comet ISON on Oct. 24, 2013. This was SOFIA's second opportunity to capture data on a comet, having previously studied Comet Hartley 2 in 2010. For the Comet ISON observations, the object was predicted to be very faint. The observatory’s flight path saw NASA’s highly modified 747SP airplane depart its home base at Palmdale, Calif., around 9:30 p.m. local time, fly east to Colorado, then turn northeast passing over the Canadian township of Pickle Lake, Ontario, then heading west over Medicine Hat, Alberta, where the observatory turned south and continued flying toward the United States. The comet observations began south of the Canadian border, above the border of Idaho and Montana at 43,000 feet. The entire non-stop flight took nearly 10 hours to complete. Comet ISON, a pristine chunk of primordial material from the Oort Cloud, recently entered the inner solar system for the very first time and is heading toward a close encounter with the sun. On Nov. 28, 2013, Thanksgiving Day, Comet ISON will reach perihelion, passing within 730,000 miles of the Sun. The comet was discovered in September 2012 by researchers Vitali Nevski and Artyom Novichonok using the International Scientific Optical Network’s (ISON) 0.4-meter (16-inch) telescope, and was named in honor of the institution. Aboard the Oct. 24 SOFIA flight was principal investigator Diane Wooden, who had proposed that Comet ISON be studied at three infrared wavelengths. Two of those wavelengths, 19.7 and 31.5 microns, cannot be seen from Earth-based telescopes because water vapor in Earth’s atmosphere blocks infrared energy from reaching the ground. It should be noted that the 31.5 micron wavelength was detected simultaneously with the 11.1 and 19.7 micron images. The third wavelength examined during the flight, 11.1 microns, allows the SOFIA observations to be tied to ground-based measurements with the Subaru Telescope located on Mauna Kea, Hawaii, and the Great Canary Telescope on the island of La Palma, in Spain’s Canary Islands. Currently, there are no space-borne telescopes operating at wavelengths longer than 4.5 microns, so SOFIA is at present the only telescope able to observe at these wavelengths. Wooden is working with a diverse team that will combine its numerous observations to better understand the composition of Comet ISON. Her observations aboard SOFIA were to measure the thermal emission from small and large dust grains in the coma of Comet ISON by measuring the mid-infrared wavelengths with the Faint Object InfraRed CAmera for the SOFIA Telescope (FORCAST) instrument. FORCAST collects infrared photons at wavelengths between 5 and 40 microns. “The long wavelength photometry, only possible from SOFIA, will allow us to measure the thermal emission from larger grains, which cannot be seen in scattered light at visible wavelengths,” said Wooden. “Compared with smaller submicron grains, larger grains have cooler temperatures and emit at longer wavelengths. Only the FORCAST instrument on SOFIA can obtain the longer wavelength photometry measurements that sample the thermal emission of the larger grains. By modeling the FORCAST photometry, we can constrain the grain size distribution and the dust mass.
The dust mass and the dust mass loss rate are among the most fundamental characterizations of a comet.”
Making the Observations
“We got onto the comet leg of the flight and clearly saw the extended coma in the fine guiding camera but not in the wide-angle camera,” Wooden said. “We acquired images at 11.1 microns and immediately saw the comet was faint. We then shifted to the 19.7-micron filter, where the comet was expected to be brighter. Soon we could see the comet was about as weak at 19.7 microns as it was at 11.1 microns, consistent with the expectations of a typical grain-size distribution. If the 19.7-micron images had a stronger signal, that would have meant there were more, larger grains. “These measurements are important because they serve as constraints or upper limits on the flux of thermal emissions from larger dust grains in the coma,” said Wooden. “Studying the dust’s thermal emission from SOFIA enables us to derive the grain size, its distribution, and the mass of the dust coming from the comet. This is a critical complement to studying the gases that are released, and thereby contributes significantly to understanding the origins of comets. “We learned that the comet is dust-poor not only for small grains, as already known by the weak scattered light at visible wavelengths, but also for larger grains detectable at these mid-IR wavelengths from SOFIA.”
Nicholas A. Veronico, SOFIA Public Affairs
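Wooden's point that cooler, larger grains emit at longer wavelengths can be made concrete with Wien's displacement law. A rough sketch follows; real dust grains are not perfect blackbodies, so these temperatures are indicative only:

```python
# Wien's displacement law: a blackbody at temperature T peaks at
# wavelength lambda = b / T. Converting the FORCAST wavelengths used in
# these observations into indicative grain temperatures:

WIEN_B_UM_K = 2898.0  # Wien's displacement constant, in micron-kelvins

def peak_temperature_K(wavelength_um):
    """Blackbody temperature whose emission peaks at the given wavelength."""
    return WIEN_B_UM_K / wavelength_um

for wl in (11.1, 19.7, 31.5):
    print(f"{wl:5.1f} microns -> T ~ {peak_temperature_K(wl):4.0f} K")
# ~261 K, ~147 K, and ~92 K: each longer wavelength samples progressively
# cooler (and typically larger) grains, which is why the 31.5 micron
# photometry, only possible from SOFIA, constrains the large-grain mass.
```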
0.837338
3.476562
Astronomers using the Hubble Space Telescope have made the first direct detection of the atmosphere of a planet orbiting a star outside our solar system. Their unique observations demonstrate that it is possible with Hubble and other telescopes to measure the chemical makeup of alien planet atmospheres and to potentially search for the chemical markers of life beyond Earth. The planet orbits a yellow, Sun-like star called HD 209458, located 150 light-years away in the constellation Pegasus. This artist's impression shows a dramatic close-up of the scorched extrasolar planet HD 209458b in its orbit 'only' 7 million kilometres from its yellow Sun-like star. The planet is a type of extrasolar planet known as a 'hot Jupiter'. Using the NASA/ESA Hubble Space Telescope, for the first time, astronomers have observed the atmosphere of an extrasolar planet evaporating off into space (shown in blue in this illustration). Much of this planet may eventually disappear, leaving only a dense core. Astronomers estimate the amount of hydrogen gas escaping HD 209458b to be at least 10 000 tonnes per second, but possibly much more. The planet may therefore already have lost quite a lot of its mass.
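An order-of-magnitude check on the quoted escape rate is straightforward. The sketch below assumes a 5-billion-year age and a planet mass of roughly 0.7 Jupiter masses; both are assumptions for illustration, and the article notes the true rate may be much higher than the quoted floor:

```python
# How much hydrogen would HD 209458b shed over its lifetime at the
# minimum quoted rate of 10,000 tonnes per second? Age and planet mass
# below are rough assumptions, not values from the press release.

RATE_KG_S = 1.0e7            # 10,000 tonnes/s in kg/s
AGE_S = 5e9 * 3.156e7        # ~5 billion years, in seconds
PLANET_MASS_KG = 1.3e27      # roughly 0.7 Jupiter masses

lost = RATE_KG_S * AGE_S
print(f"mass lost: {lost:.1e} kg ({100 * lost / PLANET_MASS_KG:.2f}% of planet)")
# ~1.6e24 kg, only about a tenth of a percent of the planet -- so losing
# "quite a lot" of its mass requires sustained rates well above the
# quoted minimum, consistent with the "possibly much more" caveat.
```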
0.840402
3.437946
'Chemical Laptop' Could Search for Signs of Life Outside Earth If you were looking for the signatures of life on another world, you would want to take something small and portable with you. That’s the philosophy behind the “Chemical Laptop” being developed at NASA’s Jet Propulsion Laboratory in Pasadena, California: a miniaturized laboratory that analyzes samples for materials associated with life. “If this instrument were to be sent to space, it would be the most sensitive device of its kind to leave Earth, and the first to be able to look for both amino acids and fatty acids,” said Jessica Creamer, a NASA postdoctoral fellow based at JPL. Like a tricorder from “Star Trek,” the Chemical Laptop is a miniaturized on-the-go laboratory, which researchers hope to send one day to another planetary body such as Mars or Europa. It is roughly the size of a regular computing laptop, but much thicker to make room for chemical analysis components inside. But unlike a tricorder, it has to ingest a sample to analyze it. “Our device is a chemical analyzer that can be reprogrammed like a laptop to perform different functions,” said Fernanda Mora, a JPL technologist who is developing the instrument with JPL’s Peter Willis, the project’s principal investigator. “As on a regular laptop, we have different apps for different analyses like amino acids and fatty acids.” Amino acids are building blocks of proteins, while fatty acids are key components of cell membranes. Both are essential to life, but can also be found in non-life sources. The Chemical Laptop may be able to tell the difference. What it’s looking for Amino acids come in two types: Left-handed and right-handed. Like the left and right hands of a person, these amino acids are mirror images of each other but contain the same components. Some scientists hypothesize that life on Earth evolved to use just left-handed amino acids because that standard was adopted early in life’s history, sort of like the way VHS became the standard for video instead of Betamax in the 1980s. It’s possible that life on other worlds might use the right-handed kind. “If a test found a 50-50 mixture of left-handed and right-handed amino acids, we could conclude that the sample was probably not of biological origin,” Creamer said. “But if we were to find an excess of either left or right, that would be the golden ticket. That would be the best evidence so far that life exists on other planets.” The analysis of amino acids is particularly challenging because the left- and right-handed versions are equal in size and electric charge. Even more challenging is developing a method that can look for all the amino acids in a single analysis. When the laptop is set to look for fatty acids, scientists are most interested in the length of the acids’ carbon chain. This is an indication of what organisms are or were present. How it works The battery-powered Chemical Laptop needs a liquid sample to analyze, which is more difficult to obtain on a planetary body such as Mars. The group collaborated with JPL’s Luther Beegle to incorporate an “espresso machine” technology, in which the sample is put into a tube with liquid water and heated to above 212 degrees Fahrenheit (100 degrees Celsius). The water then comes out carrying the organic molecules with it. The Sample Analysis at Mars (SAM) instrument suite on NASA’s Mars Curiosity rover utilizes a similar principle, but it uses heat without water. 
Once the water sample is fed into the Chemical Laptop, the device prepares the sample by mixing it with a fluorescent dye, which attaches the dye to the amino acids or fatty acids. The sample then flows into a microchip inside the device, where the amino acids or fatty acids can be separated from one another. At the end of the separation channel is a detection laser. The dye allows researchers to see a signal corresponding to the amino acids or fatty acids when they pass the laser. Inside a “separation channel” of the microchip, there are already chemical additives that mix with the sample. Some of these species will only interact with right-handed amino acids, and some will only interact with the left-handed variety. These additives will change the relative amount of time the left- and right-handed amino acids are in the separation channel, allowing scientists to determine the “handedness” of amino acids in the sample.
Testing for future uses
Last year the researchers did a field test at JPL’s Mars Yard, where they placed the Chemical Laptop on a test rover. “This was the first time we showed the instrument works outside of the laboratory setting. This is the first step toward demonstrating a totally portable and automated instrument that can operate in the field,” said Mora. For this test, the laptop analyzed a sample of “green rust,” a mineral that absorbs organic molecules in its layers and may be significant in the origin of life, said JPL’s Michael Russell, who helped provide the sample. “One ultimate goal is to put a detector like this on a spacecraft such as a Mars rover, so for our first test outside the lab we literally did that,” said Willis. Since then, Mora has been working to improve the sensitivity of the Chemical Laptop so it can detect even smaller amounts of amino acids or fatty acids. Currently, the instrument can detect concentrations as low as parts per trillion. Mora is currently testing a new laser and detector technology. Coming up is a test in the Atacama Desert in Chile, with collaboration from NASA’s Ames Research Center, Moffett Field, California, through a grant from NASA’s Planetary Science & Technology Through Analog Research (PSTAR) program. “This could also be an especially useful tool for icy-worlds targets such as Enceladus and Europa. All you would need to do is melt a little bit of the ice, and you could sample it and analyze it directly,” Creamer said. The Chemical Laptop technology has applications for Earth, too. It could be used for environmental monitoring — analyzing samples directly in the field, rather than taking them back to a laboratory. Uses for medicine could include testing whether the contents of drugs are legitimate or counterfeit. Creamer recently won an award for her work in this area at JPL’s Postdoc Research Day Poster Session.
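The 50-50 handedness test Creamer describes is usually expressed as an enantiomeric excess. A minimal sketch follows, with an illustrative decision threshold that is my assumption, not a mission specification:

```python
# The handedness test described above, written as a number: the
# enantiomeric excess. A racemic (50/50) mixture gives ee = 0, as expected
# for abiotic chemistry; terrestrial life strongly favors left-handed
# amino acids. The 0.2 threshold below is an illustrative choice.

def enantiomeric_excess(left, right):
    """ee = (L - R) / (L + R); the sign indicates the dominant hand."""
    return (left - right) / (left + right)

def interpret(ee, threshold=0.2):
    if abs(ee) < threshold:
        return "near-racemic: probably not biological in origin"
    hand = "left" if ee > 0 else "right"
    return f"strong {hand}-handed excess: candidate biosignature"

for l, r in ((50, 50), (55, 45), (95, 5)):
    ee = enantiomeric_excess(l, r)
    print(f"L={l:2d} R={r:2d} -> ee = {ee:+.2f}: {interpret(ee)}")
```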
0.855042
3.016429
Type Ia supernovae are used to measure distance in the Universe because they explode with nearly the same brightness, detonating when a white dwarf star consumes a specific amount of material from a binary companion. The accuracy of these distance measurements depends on the shape of the blast. New research indicates that Type Ia supernova explosions start out clumpy and uneven, but a second, spherical blast overwhelms the first, creating a smooth residue. This sets the limits of uncertainty on distance measurements that use Type Ia supernovae. Astronomers are reporting remarkable new findings that shed light on a decade-long debate about one kind of supernovae, the explosions that mark a star’s final demise: does the star die in a slow burn or with a fast bang? From their observations, the scientists find that the matter ejected by the explosion shows significant peripheral asymmetry but a nearly spherical interior, most likely implying that the explosion finally propagates at supersonic speed. These results are reported today in Science Express, the online version of the research journal Science, by Lifan Wang, Texas A&M University (USA), and colleagues Dietrich Baade and Ferdinando Patat from ESO. “Our results strongly suggest a two-stage explosion process in this type of supernova,” comments Wang. “This is an important finding with potential implications in cosmology.” Using observations of 17 supernovae made over more than 10 years with ESO’s Very Large Telescope and the McDonald Observatory’s Otto Struve Telescope, astronomers inferred the shape and structure of the debris cloud thrown out from Type Ia supernovae. Such supernovae are thought to be the result of the explosion of a small and dense star – a white dwarf – inside a binary system. As its companion continuously spills matter onto the white dwarf, the white dwarf reaches a critical mass, leading to a fatal instability and the supernova. But what sparks the initial explosion, and how the blast travels through the star, have long been thorny issues. The supernovae Wang and his colleagues observed occurred in distant galaxies, and because of the vast cosmic distances could not be studied in detail using conventional imaging techniques, including interferometry. Instead, the team determined the shape of the exploding cocoons by recording the polarisation of the light from the dying stars. Polarimetry relies on the fact that light is composed of electromagnetic waves that oscillate in certain directions. Reflection or scattering of light favours certain orientations of the electric and magnetic fields over others. This is why polarising sunglasses can filter out the glint of sunlight reflected off a pond. When light scatters through the expanding debris of a supernova, it retains information about the orientation of the scattering layers. If the supernova is spherically symmetric, all orientations will be present equally and will average out, so there will be no net polarisation. If, however, the gas shell is not round, a slight net polarisation will be imprinted on the light. “This study was possible because polarimetry could unfold its full strength thanks to the light-collecting power of the Very Large Telescope and the very precise calibration of the FORS instrument,” says Dietrich Baade. “Our study reveals that explosions of Type Ia supernovae are really three-dimensional phenomena,” he adds.
“The outer regions of the blast cloud are asymmetric, with different materials found in ‘clumps’, while the inner regions are smooth.” The research team first spotted this asymmetry in 2003, as part of the same observational campaign (ESO PR 23/03 and ESO PR Photo 26/05). The new, more extensive results show that the degree of polarisation and, hence, the asphericity, correlates with the intrinsic brightness of the explosion. The brighter the supernova, the smoother, or less clumpy, it is. “This has some impact on the use of Type Ia supernovae as standard candles,” says Ferdinando Patat. “This kind of supernova is used to measure the rate of acceleration of the expansion of the Universe, assuming these objects behave in a uniform way. But asymmetries can introduce dispersions in the quantities observed.” “Our discovery puts strong constraints on any successful models of thermonuclear supernova explosions,” adds Wang. Models have suggested that the clumpiness is caused by a slow-burn process, called ‘deflagration’, which leaves an irregular trail of ashes. The smoothness of the inner regions of the exploding star implies that at a given stage the deflagration gives way to a more violent process, a ‘detonation’, which travels at supersonic speeds – so fast that it erases all the asymmetries in the ashes left behind by the slower burning of the first stage, resulting in a smoother, more homogeneous residue. Original Source: ESO News Release
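To make the symmetry argument concrete: the net linear polarisation is computed from the Stokes parameters, P = sqrt(Q² + U²)/I, and a spherically symmetric source averages to zero. A minimal toy sketch (standard textbook formulas, not the FORS/VLT pipeline; the patch model is an illustrative assumption):

```python
import numpy as np

def degree_of_polarisation(I, Q, U):
    """Linear polarisation fraction P = sqrt(Q^2 + U^2) / I."""
    return np.hypot(Q, U) / I

def net_polarisation(position_angles, weights):
    """Toy ejecta model: brightness-weighted patches around the shell,
    each polarised tangentially (hence the factor of 2 in the angles)."""
    Q = np.sum(weights * np.cos(2 * position_angles))
    U = np.sum(weights * np.sin(2 * position_angles))
    I = np.sum(weights)
    return degree_of_polarisation(I, Q, U)

angles = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
print(net_polarisation(angles, np.ones_like(angles)))          # ~0: symmetric shell
print(net_polarisation(angles, 1 + 0.5 * np.cos(2 * angles)))  # 0.25: elongated shell
```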
0.86233
4.123317
This is the most powerful radio galaxy in our corner of the Universe, used as a point of departure for studying radio galaxies at great distances. At a redshift z=0.0565 (distance of about 211 Mpc or 700 million light-years), its nature remains remarkably mysterious. The first photographs of Cygnus A showed two clumps of luminous material, which led Walter Baade and Rudolph Minkowski to speculate that the radio emission was somehow linked to a galaxy collision. Others saw a poorly resolved version of Centaurus A, bisected by a thick dust lane. The HST image shown as an inset reveals much detail, but doesn't quite clear the matter up. We see dust and an odd Z-shaped pattern. In some regions, much of this light comes not from stars, but from gas ionized by the nucleus. This is a narrow-line radio galaxy, but infrared and polarization measurements show that from some directions it would appear as a broad-line object and perhaps as a quasar, so that there is plenty of radiation in some directions to light up the gas. Cygnus A is an excellent example of the Fanaroff-Riley (FR) type II radio sources, characterized by faint, very narrow jets, distinct lobes, and clear hot spots at the outer edges of the lobes, often where the jets intersect the outer edges. These are in general more powerful radio sources than the FR I objects seen in slide 13, with the difference being frequently attributed to faster (relativistic?) motion of the jet material in the stronger FR II sources. The radio/optical overlay highlights the extent of the radio source beyond the central galaxy, extending 140 kpc (500,000 light-years) if we view it from the side. Bob Fosbury provided the HST color composite image. The wide-field optical image was taken with the 0.9-m telescope at Kitt Peak National Observatory, courtesy of Frazer Owen (as described by Owen et al. 1997 in ApJLett 488, L15). The VLA map is from the NRAO CD-ROM "Images from the Invisible Universe", as presented by Perley, Dreher, and Cowan 1984 (ApJLett 285, L35). For the overlay, both optical and radio images were displayed with logarithmic intensity scales.
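As a sanity check on the quoted distance: at such a low redshift the Hubble-law approximation d ≈ cz/H₀ applies. A minimal sketch (the answer depends on the adopted Hubble constant; the page's 211 Mpc figure corresponds to a larger H₀ than the ~70 km/s/Mpc assumed here):

```python
C_KM_S = 299_792.458  # speed of light in km/s

def hubble_distance_mpc(z: float, h0_km_s_mpc: float = 70.0) -> float:
    """Low-redshift Hubble-law distance d ~ c*z / H0, in Mpc.
    Only valid for z << 1, as for Cygnus A (z = 0.0565)."""
    return C_KM_S * z / h0_km_s_mpc

d = hubble_distance_mpc(0.0565)
print(f"{d:.0f} Mpc ~ {d * 3.26:.0f} million light-years")
# ~242 Mpc with H0 = 70; the quoted 211 Mpc implies H0 ~ 80 km/s/Mpc
```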
0.857827
4.003435
We live in an amazing time when powerful computers are available, highly advanced space telescopes are active, and the average person still has the ability to contribute to science. In truth, these three benefits are all related to the development of powerful, and inexpensive, computers. They have combined to create the opportunity to solve a mystery. The star KIC 8462852, in the northern sky, was imaged by the Kepler Space Telescope during a mission to find distant planets. Unfortunately, it didn't meet the expected pattern, and so it was rejected as a candidate system containing orbiting planets. That might have been the end of the story for "KIC" except for the existence of amateur astronomers. The Planet Hunters, an online citizen group, was established with the intent of finding planets that might be missed by the more sophisticated searching efforts. When "KIC" was checked by the group, they identified certain traits that were unusual. First, the light emitted was quite ordinary over the time span of available observations, indicating a plain sun. Next, however, significant drops in the amount of light did occur. This might have indicated the existence of a distant world, except there were several problems. Too Much Light Loss Tabetha Boyajian and her team reviewed the information found by the Planet Hunters. Indeed, the light from the star did dip significantly. While this might have been caused by an orbiting world, there was too much blockage. Jupiter, for example, causes the Sun to dim by about 1% for a distant observer. The "KIC" dimming was far higher. This could, theoretically, be caused by an object much larger than Jupiter. Unfortunately, such large objects would emit light, becoming stars themselves. Not Regular Enough The observations of the star showed occasional dips in brightness. There was, however, no pattern detected. Normally, bodies that pass in front of suns do so in a regular orbit. They can be observed as dimming over the course of several hours. The strange sun was dimmed for days at a time. As well, the dimming was not regularly timed. This would seem to rule out an orbiting body as the cause. Not The Right Pattern Next, the observation of the dimming over time showed that the pattern was unusual. Rather than dimming quickly, the star faded over days and even wavered at times. When the brightness was restored, it was not uniform, nor similar to the earlier dimming. No other occurrence showed such behavior. Tabetha Boyajian (Tabby) described KIC 8462852 as the most mysterious object in the universe. The observations did not match any others obtained by the Kepler Space Telescope. The mission had imaged about 150,000 stars over a rather long length of time. Tabby showed how no other set of data was like "her" star. Most others showed events that blocked starlight quickly and regularly. All others showed much less significant blockages. Tabby's Star was mysterious, in many ways. Potential Causes of Tabby's Star Behaviors - A large body, such as Jupiter or Saturn, might have blocked emissions from Tabby's Star. This theory, however, requires that such an object be many times more massive than any other ever detected. Such an object would emit various types of radiation which have not been detected. - A body located between Earth and Tabby's Star may have caused the effects. This theory can be ruled out because such an object would dim many other stars over time. Such events have not been detected. - The data may be faulty. This possibility was rigorously examined.
Many thousands of observations obtained at the same time were not found to have any anomalies. Moreover, observations of Tabby's Star were taken over the course of several years. Only Tabby's Star showed these unique traits, among the group of 150,000 stars imaged during the mission. It has been theorized that an advanced alien culture may have established a mechanism that diverted starlight. This could potentially be observable as a reduction in emissions. Such an action has never been detected before and is beyond our own abilities. Various theories have been proposed that may match the observed data. These all require the existence of a distant civilization, one that has never been detected anywhere, yet. How You Can Help There is a vital need to fully observe the next obscuring event. This is no longer possible with Kepler as it has moved on to other projects. Instead, other platforms must be used. A crowd funding effort is underway to obtain the money necessary to perform such observations. This represents the chance for regular people to help solve a mystery in space. Doubtless, the observations will be made in the coming decades, but the funding program will speed the process considerably. Over 1,300 backers have pledged to support observing missions. You can get involved as well with a contribution as low as $5. Tabby mentions in her TED Talk that we know a lot about what this star is not. It is not a normal star being eclipsed by a planet, even one like Jupiter. It is not being obscured by dust or gas. It is not being blocked by a black hole. We need to fully observe another dimming event to obtain more data. This will give us the ability to develop more theories as to the cause. If we obtain the right information, we may even be able to theorize the correct cause. So far, all theories have been shown to have problems. The crowd funding effort may be able to obtain the funds necessary to perform a long period of observations. This stands the greatest chance of being able to capture information during the next event. Amateur astronomers have also answered the call. They have pointed many telescopes to watch further developments. There is a good chance that one of these may identify an event shortly after it begins. Unfortunately, the information obtained is patchy and not easily correlated. Observations from one amateur astronomer may not be compatible with those from others. An organized, professional observing program is required. This can be arranged, but there are significant costs involved. The funding drive will obtain the necessary money and begin such a scientifically important program. Tabby's Star is indeed mysterious. Out of 150,000 stars observed with a precision instrument, only this one exhibited such odd behaviors. All attempts to explain the cause have so far proven inadequate. It is important that we increase the amount of data available so that we may find the true cause of the mystery.
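To see why the dips were "too much light loss" for a planet: the standard transit-depth estimate is depth ≈ (Rp/R*)², ignoring limb darkening. A minimal sketch (the radii are standard reference values; the ~20% figure for KIC 8462852's deepest dips comes from the published Kepler analysis, not this article):

```python
R_SUN_KM = 696_000.0
R_JUPITER_KM = 71_492.0

def transit_depth(body_radius_km: float, star_radius_km: float) -> float:
    """Fractional dimming when a dark body crosses a stellar disk:
    depth ~ (R_body / R_star)**2, ignoring limb darkening."""
    return (body_radius_km / star_radius_km) ** 2

print(f"Jupiter transiting the Sun: {transit_depth(R_JUPITER_KM, R_SUN_KM):.1%}")
# ~1.1%. KIC 8462852's deepest dips reached roughly 20%, which would
# require an opaque body several times Jupiter's radius, stellar-sized.
```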
0.872575
3.769276
Today, a giant radio telescope in Western Canada is being turned on to begin its mission tracking the expansion of the universe. Scientists hope to find out more about the mysterious force behind this accelerating expansion: what they call dark energy. This makes up the majority of the universe, but we have no idea what it is. The new CHIME telescope will look for dark energy. Video: UBC Public Affairs/Youtube The $16-million Canadian Hydrogen Intensity Mapping Experiment, CHIME for short, is made of what looks like four 100-metre-long snowboard halfpipes stuck next to each other, and uses radio signals to detect hydrogen intensity. It's a way to measure how much the universe has expanded, and how quickly, in the slice of space that can be viewed from Canada. That data will be collected and processed into a 3D map of the sky. The project is a collaboration between several universities (the University of British Columbia, the University of Toronto, and McGill) and the National Research Council of Canada. I reached principal investigator Mark Halpern, astronomy professor at UBC, over the phone at the Dominion Radio Astrophysical Observatory in the British Columbia interior near Penticton, the site of the project. He had to take the call from a closed-off metal room to avoid producing too much radio interference, which could harm the integrity of the results. As is common with radio telescopes, no cellphones are allowed on the premises. Halpern said that, by mapping the spread of hydrogen in space, he and his team can identify existing galaxies to see if they're expanding outwards, and can tell how far away these objects are by how much their light has been stretched on its way to Earth (a measurement called redshift). "If we know those two things, expansion and distance, then we can put together an expansion history," he said. The universe hasn't stopped expanding since the Big Bang 13.8 billion years ago. In fact, the expansion is now happening at an accelerated rate, pushed by an enigmatic repulsive force called dark energy. Halpern said that there's twice as much dark energy as everything else in the entire universe combined, yet what it actually is remains a total mystery. "The plain truth is," he said, "we don't have a clue." Over the course of the next five years, this radio telescope will create the biggest 3D map ever made, and the team will use mainly cheap, commercially available technologies to do it, according to Halpern. The incoming radio signals are amplified by about a thousand receivers built from cellphone components, and the 3D maps are processed in part by computers with video-game-grade graphics cards. Video: UBC Public Affairs/Youtube. GIF: Jacob Dubé "I think we're lucky to live in a time where this technology is readily available," partly because of a booming consumer market for these products, Halpern said. "If we tried to do this a decade ago, every little bit of it would have been a custom-made part and unbelievably expensive." Instead of being shaped like a regular radio telescope's round satellite dish, CHIME was given the halfpipe design to be able to map out and observe several lines in the sky, as opposed to the specific dots that a typical telescope would focus on. Though the team won't have any results about the expansion history and dark energy for a few years, CHIME is also running shorter-term experiments, like measuring the signals of all known pulsars, as well as identifying mysterious fast radio bursts.
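The hydrogen mapping works because neutral hydrogen emits at a rest frequency of 1,420.4 MHz (the 21 cm line), and cosmic expansion shifts it to lower frequencies, so each observing frequency corresponds to a particular distance. A minimal sketch of that mapping (CHIME's 400-800 MHz band is public information; the code itself is illustrative):

```python
HI_REST_MHZ = 1420.4057  # rest frequency of the 21 cm hydrogen line

def redshift_at(observed_mhz: float) -> float:
    """Redshift z at which the 21 cm line is received at observed_mhz."""
    return HI_REST_MHZ / observed_mhz - 1.0

# CHIME's band spans 400-800 MHz:
print(f"z = {redshift_at(800):.2f} to {redshift_at(400):.2f}")
# -> z ~ 0.8 to 2.5, the era when dark energy began to dominate
```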
0.836352
3.931203
Thaumasia quadrangle (MC-25): 30–65° S, 60–120° W. The Thaumasia quadrangle is famous for showing a good example of possible rainfall and rivers in Mars's past, in pictures of Warrego Valles taken with Mariner 9 and the Viking Orbiters. Those early images revealed a network of branching valleys. They were clear evidence that Mars may have once been warmer, wetter, and perhaps had precipitation in the form of rain or snow. Before we saw those pictures, we believed Mars was just an old, dry desert. The Thaumasia quadrangle is one of a series of 30 quadrangle maps of Mars used by the United States Geological Survey (USGS). It is also referred to as MC-25 (Mars Chart-25). The Thaumasia quadrangle covers the area from 30° to 65° south latitude and 60° to 120° west longitude (240–300° E). It encompasses many different regions, or parts of many regions, that have classical names. The northern part includes the Thaumasia plateau. The southern part contains heavily cratered highland terrain and relatively smooth, low plains, such as Aonia Planum and Icaria Planum. Parts of Solis Planum, Aonia Terra, and Bosporus Planum are also found in this quadrangle. The east-central part includes Lowell Crater, named after Percival Lowell, who studied Mars with a telescope in Flagstaff, Arizona, and then went around the world promoting the idea that Mars was inhabited. The quadrangle's name comes from Thaumas, the god of the clouds and celestial apparitions. Gullies occur on steep slopes, especially on the walls of craters. Gullies are believed to be relatively young because they have few, if any, craters. Moreover, they lie on top of sand dunes, which themselves are considered to be quite young. Usually, each gully has an alcove, channel, and apron. Gullies were once thought to be caused by recent flowing water. However, with further extensive observations with HiRISE, it was found that many are forming or changing today, even though liquid water cannot exist under current Martian conditions. Faced with these new observations, scientists came up with other ideas to explain them. The consensus seems to be that although water may have helped form them in the past, today they are being produced by chunks of dry ice moving down steep slopes. There is evidence in this quadrangle that water may indeed have aided in the formation of gullies. Measurements of the altitudes and slopes of gullies support the idea that snowpacks or glaciers are associated with gullies. Steeper slopes have more shade, which would preserve snow. Higher elevations have far fewer gullies because ice would tend to sublimate more in the thin air of the higher altitude. This relationship seems to hold true in Thaumasia: the region is fairly high in elevation and has very few gullies; however, a few are present at lower elevations, like the one pictured below in Ross Crater. Gullies in Ross Crater, as seen by HiRISE under the HiWish program. Because the gullies are on the narrow rim of a crater and they start at different heights, this example is not consistent with the model of gullies being caused by aquifers. Many places on Mars have sand dunes. Some craters in Thaumasia show dark blotches in them. High-resolution photos reveal the dark markings to be dark sand dunes, which probably contain the igneous rock basalt. Brashear Crater, pictured below, is one crater with dark dunes. Mars Global Surveyor context image with box showing where next image is located.
Mariner 9 and Viking Orbiter images showed a network of branching valleys in Thaumasia called Warrego Valles. These networks are evidence that Mars may have once been warmer, wetter, and perhaps had precipitation in the form of rain or snow. A study with the Mars Orbiter Laser Altimeter, the Thermal Emission Imaging System (THEMIS) and the Mars Orbiter Camera (MOC) supports the idea that Warrego Valles was formed by precipitation. At first glance the valleys resemble river valleys on Earth. But sharper images from more advanced cameras reveal that the valleys are not continuous. They are very old and may have suffered from the effects of erosion. A picture below shows some of these branching valleys. Thaumasia is in the old southern highlands of Mars. As such, it is loaded with craters. Craters are important to scientists. The density of impact craters is used to determine the surface ages of Mars and other solar system bodies. The older the surface, the more craters present. Crater shapes can reveal the presence of ground ice. The area around craters may be rich in minerals. On Mars, heat from the impact melts ice in the ground. Water from the melting ice dissolves minerals, and then deposits them in cracks or faults that were produced by the impact. This process, called hydrothermal alteration, is a major way in which ore deposits are produced. The area around Martian craters may therefore be rich in useful ores for the future colonization of Mars. Studies on Earth have documented that such cracks are produced and that secondary mineral veins are deposited in them. Images from satellites orbiting Mars have detected cracks near impact craters. Great amounts of heat are produced during impacts, and the area around a large impact may take hundreds of thousands of years to cool. Many craters once contained lakes. Because some crater floors show deltas, we know that water had to be present for some time. Dozens of deltas have been spotted on Mars. Deltas form when sediment is washed in from a stream entering a quiet body of water. It takes a bit of time to form a delta, so the presence of one is exciting; it means water was there for a time, maybe for many years. Primitive organisms may have developed in such lakes; hence, some craters may be prime targets for the search for evidence of life on the Red Planet. East side of Douglass Crater, as seen by the CTX camera (on Mars Reconnaissance Orbiter). In the nearly half century of studying the Red Planet with orbiting satellites, much evidence has accumulated that water once flowed in river valleys on Mars. Curved channels have appeared in images from Mars spacecraft dating back to the early seventies with the Mariner 9 orbiter. Indeed, a study published in June 2017 calculated that the volume of water needed to carve all the channels on Mars was even larger than the proposed ocean that the planet may have had. Water was probably recycled many times from the ocean to rainfall around Mars. Latitude dependent mantle: Layers in a mantle deposit, as seen by HiRISE under the HiWish program. The mantle was probably formed from snow and dust falling during a different climate. Dust devil tracks: Dust devil tracks are very common on Mars, especially in certain seasons, and can be quite striking. Dust devils remove bright-colored dust from the Martian surface, thereby exposing a dark layer. A thin coating of fine bright dust covers most of the Martian surface.
When a dust devil goes by, it blows away the coating and exposes the underlying dark surface, creating tracks. It does not take much fine dust to cover those tracks: experiments in Earth laboratories demonstrate that a few tens of microns of dust is enough. The width of a single human hair ranges from approximately 20 to 200 microns (μm); consequently, the dust that covers dust devil tracks may be only the thickness of a human hair. Dust devils on Mars have been photographed both from the ground and from high overhead in orbit. They have even blown dust off the solar panels of two rovers on Mars, thereby greatly extending their useful lifetimes. The pattern of tracks has been shown to change every few months. A study that combined data from the High Resolution Stereo Camera (HRSC) and the Mars Orbiter Camera (MOC) found that some large dust devils on Mars have a diameter of 700 m and last at least 26 minutes.
Other views from Thaumasia
- Glaciers on Mars
- HiWish program
- How are features on Mars named?
- Martian features that are signs of water ice
- Martian gullies
- Periodic climate changes on Mars
- Rivers on Mars
- Davies, M.E., R.M. Batson, S.S.C. Wu. "Geodesy and Cartography" in Kieffer, H.H., B.M. Jakosky, C.W. Snyder, M.S. Matthews, eds. Mars. University of Arizona Press: Tucson, 1992.
- Blunck, J. 1982. Mars and its Satellites. Exposition Press. Smithtown, N.Y.
- Edgett, K., M.C. Malin, R.M.E. Williams, S.D. Davis. 2003. Polar- and middle-latitude martian gullies: A view from MGS MOC after 2 Mars years in the mapping orbit. Lunar Planet. Sci. 34, Abstract 1038. http://www.lpi.usra.edu/meetings/lpsc2003/pdf/1038.pdf
- Harrington, J.D., G. Webster. July 10, 2014. RELEASE 14-191 – NASA Spacecraft Observes Further Evidence of Dry Ice Gullies on Mars. NASA. http://www.nasa.gov/press/2014/july/nasa-spacecraft-observes-further-evidence-of-dry-ice-gullies-on-mars
- CNRS. Gullies on Mars sculpted by dry ice rather than liquid water. ScienceDaily, 22 December 2015. www.sciencedaily.com/releases/2015/12/151222082255.htm
- Dickson, J. et al. 2007. Martian gullies in the southern mid-latitudes of Mars: Evidence for climate-controlled formation of young fluvial features based upon local and global topography. Icarus 188, 315-323.
- Hecht, M. 2002. Metastability of liquid water on Mars. Icarus 156, 373-386. doi:10.1006/icar.2001.6794
- Carr, Michael H. 2006. The Surface of Mars. Cambridge University Press. ISBN 978-0-521-87201-0.
- Ansan, V. and N. Mangold. 2006. New observations of Warrego Valles, Mars: Evidence for precipitation and surface runoff. Icarus 54, 219-242.
- Osinski, G., J. Spray, and P. Lee. 2001. Impact-induced hydrothermal activity within the Haughton impact structure, arctic Canada: Generation of a transient, warm, wet oasis. Meteoritics & Planetary Science 36, 731-745.
- Pirajno, F. 2000. Ore Deposits and Mantle Plumes. Kluwer Academic Publishers. Dordrecht, The Netherlands.
- Head, J. and J. Mustard. 2006. Breccia Dikes and Crater-Related Faults in Impact Craters on Mars: Erosion and Exposure on the Floor of a 75-km Diameter Crater at the Dichotomy Boundary. Meteoritics & Planetary Science, Special Issue on Role of Volatiles and Atmospheres on Martian Impact Craters.
- Segura, T., O. Toon, A. Colaprete, K. Zahnle. 2001. Effects of Large Impacts on Mars: Implications for River Formation. American Astronomical Society, DPS meeting #33, #19.08.
- Segura, T., O. Toon, A. Colaprete, K. Zahnle. 2002. Environmental Effects of Large Impacts on Mars. Science 298, 1977-1980.
- Cabrol, N. and E. Grin. 2001. The Evolution of Lacustrine Environments on Mars: Is Mars Only Hydrologically Dormant? Icarus 149, 291-328.
- Fassett, C. and J. Head. 2008. Open-basin lakes on Mars: Distribution and implications for Noachian surface and subsurface hydrology. Icarus 198, 37-56.
- Fassett, C. and J. Head. 2008. Open-basin lakes on Mars: Implications of valley network lakes for the nature of Noachian hydrology.
- Wilson, J., A. Grant and A. Howard. 2013. Inventory of equatorial alluvial fans and deltas on Mars. 44th Lunar and Planetary Science Conference.
- Newsom, H., J. Hagerty, I. Thorsos. 2001. Location and sampling of aqueous and hydrothermal deposits in martian impact craters. Astrobiology 1, 71-88.
- Baker, V., et al. 2015. Fluvial geomorphology on Earth-like planetary surfaces: a review. Geomorphology 245, 149-182.
- Carr, M. 1996. Water on Mars. Oxford Univ. Press.
- Baker, V. 1982. The Channels of Mars. Univ. of Tex. Press, Austin, TX.
- Baker, V., R. Strom, V. Gulick, J. Kargel, G. Komatsu, V. Kale. 1991. Ancient oceans, ice sheets and the hydrological cycle on Mars. Nature 352, 589-594.
- Carr, M. 1979. Formation of Martian flood features by release of water from confined aquifers. J. Geophys. Res. 84, 2995-300.
- Komar, P. 1979. Comparisons of the hydraulics of water flows in Martian outflow channels with flows of similar scale on Earth. Icarus 37, 156-181.
- Luo, W., et al. 2017. New Martian valley network volume estimate consistent with ancient ocean and warm and wet climate. Nature Communications 8, 15766. doi:10.1038/ncomms15766
- Mustard, J., et al. 2001. Evidence for recent climate change on Mars from the identification of youthful near-surface ground ice. Nature 412, 411-414.
- Pollack, J., D. Colburn, F. Flaser, R. Kahn, C. Carson, D. Pidek. 1979. Properties and effects of dust suspended in the martian atmosphere. J. Geophys. Res. 84, 2929-2945. doi:10.1029/jb084ib06p02929
- Mars Exploration Rover Mission: Press Release Images: Spirit. http://marsrovers.jpl.nasa.gov/gallery/press/spirit/20070412a.html. Retrieved 7 August 2011.
- Reiss, D. et al. 2011. Multitemporal observations of identical active dust devils on Mars with High Resolution Stereo Camera (HRSC) and Mars Orbiter Camera (MOC). Icarus 215, 358-369.
- Lorenz, R. 2014. The Dune Whisperers. The Planetary Report 34 (1), 8-14.
- Lorenz, R., J. Zimbelman. 2014. Dune Worlds: How Windblown Sand Shapes Planetary Landscapes. Springer Praxis Books / Geophysical Sciences.
0.905503
3.70949
Almost every single galaxy has a gigantic monster at its centre. Some lurk quietly in the dark, waiting for their next victims to stray too close. Others are feeding messily as we speak, growing more and more massive as they swallow material ripped from their surroundings. These wild monsters are black holes, and when one of them feeds, it creates the brightest and most energetic objects in the Universe: active galactic nuclei! As the black hole pulls in gas and cosmic dust, the material forms a doughnut-shaped ring, like water being sucked down a drain. The ring spins faster and faster as it falls inwards, causing it to heat up to incredible temperatures. When this happens, the ring releases huge, powerful jets of light that are detected by our telescopes. So, when we look at one of these brilliant powerhouses, we expect to find a gigantic black hole at the centre of a hot dust ring, munching on its dinner. We don't expect to see it hiding in a blanket of cool dust. But that's just what has been observed around an active black hole! The cool dust is at about room temperature: much, much cooler than the rest of the dust, which is at about 700 degrees Celsius! The dust forms a cool, sooty wind that is blowing away from the black hole. These new findings are very odd: black holes need to pull in material to fuel them, but the intense energy created as they do this seems to be blowing material away! For now, this is another mystery about these extraordinary objects that we have yet to solve. Like most things in the Universe, including planets, galaxies and stars, there are many different types of active galactic nuclei. However, many of the 'differences' between the types are just due to the angle from which we view them. For example, there are 'blazars' and 'quasars', which we view straight down the jet. 'Seyferts', however, are viewed from the side of the jet.
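A rough way to see why dust at these two temperatures shows up so differently is Wien's displacement law, which gives the wavelength where a body of a given temperature glows brightest. A minimal sketch (the temperatures are the ones quoted above; the comparison is illustrative):

```python
WIEN_B_M_K = 2.898e-3  # Wien displacement constant, metre-kelvins

def peak_wavelength_microns(temp_kelvin: float) -> float:
    """Wavelength of peak blackbody emission, in microns."""
    return WIEN_B_M_K / temp_kelvin * 1e6

print(f"hot dust (~700 C = 973 K):  {peak_wavelength_microns(973):.1f} um")
print(f"cool dust (~20 C = 293 K):  {peak_wavelength_microns(293):.1f} um")
# ~3 um vs ~10 um: both in the infrared, which is why infrared
# observations can tell the hot doughnut apart from the cool, sooty wind.
```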
0.804114
3.750411
NASA - SPITZER Space Telescope logo. July 1, 2011 Galaxies once thought of as voracious tigers are more like grazing cows, according to a new study using NASA's Spitzer Space Telescope. Astronomers have discovered that galaxies in the distant universe continuously ingested their star-making fuel over long periods of time. This goes against previous theories that galaxies devoured their fuel in quick bursts after run-ins with other galaxies. "Our study shows the merging of massive galaxies was not the dominant method of galaxy growth in the distant universe," said Ranga-Ram Chary of NASA's Spitzer Science Center at the California Institute of Technology in Pasadena, Calif. "We're finding this type of galactic cannibalism was rare. Instead, we are seeing evidence for a mechanism of galaxy growth in which a typical galaxy fed itself through a steady stream of gas, making stars at a much faster rate than previously thought." Chary is the principal investigator of the research appearing in the Aug. 1 issue of the Astrophysical Journal. According to his findings, these grazing galaxies fed steadily over periods of hundreds of millions of years and created an unusual number of plump stars, up to 100 times the mass of our sun. "This is the first time that we have identified galaxies that supersize themselves by grazing," said Hyunjin Shim, also of the Spitzer Science Center and lead author of the paper. "They have many more massive stars than our Milky Way galaxy." Image above: This split view shows how a normal spiral galaxy in our local universe (left) might have looked back in the distant universe, when astronomers think galaxies would have been filled with larger populations of hot, bright stars (right). Image credit: NASA / JPL-Caltech / STScI. Galaxies like our Milky Way are giant collections of stars, gas and dust. They grow in size by feeding off gas and converting it to new stars. A long-standing question in astronomy is: where did distant galaxies that formed billions of years ago acquire this stellar fuel? The most favored theory was that galaxies grew by merging with other galaxies, feeding off gas stirred up in the collisions. Chary and his team addressed this question by using Spitzer to survey more than 70 remote galaxies that existed 1 to 2 billion years after the big bang (our universe is approximately 13.7 billion years old). To the surprise of the astronomers, these galaxies were blazing with what is called H alpha, radiation from hydrogen gas that has been hit with ultraviolet light from stars. High levels of H alpha indicate stars are forming vigorously. Seventy percent of the surveyed galaxies show strong signs of H alpha. By contrast, only 0.1 percent of galaxies in our local universe possess the signature. SPITZER Space Telescope Previous studies using ultraviolet-light telescopes found about six times less star formation than Spitzer, which sees infrared light. Scientists think this may be due to large amounts of obscuring dust, through which infrared light can sneak. Spitzer opened a new window onto the galaxies by taking very long-exposure infrared images of a patch of sky called the GOODS fields, for Great Observatories Origins Deep Survey. NASA's Jet Propulsion Laboratory in Pasadena manages the Spitzer Space Telescope mission for the agency's Science Mission Directorate in Washington. Science operations are conducted at the Spitzer Science Center. Caltech manages JPL for NASA.
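One reason an infrared telescope could catch this signature: H alpha is emitted at a rest wavelength of 656.3 nm, and for galaxies seen 1-2 billion years after the big bang the expansion of the universe stretches it well into the infrared. A minimal sketch (the redshift range for that epoch is a standard approximation, not a figure from the release):

```python
H_ALPHA_REST_UM = 0.6563  # rest wavelength of H alpha, microns

def observed_wavelength_um(z: float) -> float:
    """Cosmological redshift stretches wavelengths by a factor (1 + z)."""
    return H_ALPHA_REST_UM * (1.0 + z)

# z ~ 3-6 corresponds roughly to 1-2 billion years after the big bang:
for z in (3, 4, 5, 6):
    print(f"z = {z}: H alpha lands at {observed_wavelength_um(z):.2f} um")
# At z ~ 4-5 the line falls near Spitzer's 3.6 um infrared band,
# out of reach of ultraviolet/optical telescopes but able to sneak through dust.
```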
For more information about Spitzer, visit: http://www.nasa.gov/spitzer and http://spitzer.caltech.edu/ Images, Text, Credits: NASA / JPL-Caltech / STScI. Best regards, Orbiter.ch
0.850893
3.936458
Uranus Current Affairs - 2020 NASA's Voyager 2 has exited the heliosphere and entered interstellar space. It is now 11 billion miles from Earth. Information from the spacecraft, travelling at the speed of light, takes 16.5 hours to reach Earth. (Light from the Sun takes about 8 minutes to reach Earth.) The spacecraft joined its twin, Voyager 1, in interstellar space; the crossing was confirmed on December 10, 2018. The Voyagers were launched to study the outer solar system up close; Voyager 2 targeted Jupiter, Saturn, Uranus and Neptune. Until November 5, 2018, the spacecraft was surrounded by plasma flowing out from the Sun. This outflow, called the solar wind, envelopes the planets in the solar system. Then the plasma instrument, which senses the density, pressure and temperature of the surrounding plasma through the electrical current it produces, stopped detecting the solar outflow. This confirms that the spacecraft has left the heliosphere and entered interstellar space. - Voyager 2 is the only spacecraft to have studied all four giant planets, Jupiter, Saturn, Uranus and Neptune, up close. - It discovered the 14th moon of Jupiter. - It is the first human-made object to fly by Neptune. - Voyager 2 also discovered five moons of Neptune, identified a "Great Dark Spot" there, and found four rings around the planet. According to a study conducted by researchers from the University of Idaho, US, Uranus may have two tiny, previously undiscovered moons orbiting near two of the planet's rings. These moons were detected after the researchers analysed decades-old images of Uranus' icy rings taken by NASA's Voyager 2 spacecraft, which flew by the planet 30 years ago. What the researchers found: - A pattern in Uranus' rings was similar to moon-related structures in Saturn's rings called moonlet wakes. - They estimate the hypothesised moonlets in Uranus' rings may be four to 14 kilometres in diameter, as small as some identified moons of Saturn. How the researchers discovered these moons: - They analysed decades-old images of Uranus' icy rings taken by NASA's Voyager 2 spacecraft during its flyby of the planet 30 years ago. - During their analysis they noticed that the amount of ring material on the edge of the alpha ring – one of the brightest of Uranus' multiple rings – varied periodically. - They found a similar, even more promising pattern in the same part of the neighbouring beta ring. - They also analysed radio occultations, made when Voyager 2 sent radio waves through the rings to be detected back on Earth. - And they analysed stellar occultations, made when the spacecraft measured the light of background stars shining through the rings, which helped show how much material they contain. What is the significance of this research? Uranian moons are especially hard to spot because their surfaces are covered in dark material. These findings could help explain some characteristics of Uranus' rings, which are strangely narrow compared to Saturn's. If the moonlets exist, they may be acting as "shepherd" moons, helping to keep the rings from spreading out. Two of Uranus' 27 known moons, Cordelia and Ophelia, act as shepherds to Uranus' epsilon ring. - Uranus is the seventh planet from the Sun. It has the third-largest planetary radius and fourth-largest planetary mass in the Solar System.
- Uranus is similar in composition to Neptune; both are classified as "ice giants" to distinguish them from the gas giants. - It is primarily composed of hydrogen and helium, but it also contains more ices, such as water, ammonia, and methane, along with traces of other hydrocarbons. - Every planet in our solar system except Venus and Uranus rotates counter-clockwise as seen from above the North Pole; that is, from west to east.
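The quoted 16.5-hour signal delay follows directly from the distance; a quick check using the figures in the article (speed of light in miles per second):

```python
LIGHT_SPEED_MILES_S = 186_282  # speed of light, miles per second

def light_travel_hours(distance_miles: float) -> float:
    """One-way light (radio signal) travel time, in hours."""
    return distance_miles / LIGHT_SPEED_MILES_S / 3600

print(f"{light_travel_hours(11e9):.1f} hours")  # ~16.4 h for 11 billion miles
```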
0.862256
3.327752
Jupiter and Mars will make a close approach, passing within 0°47' of each other. From Ashburn, the pair will be difficult to observe as they will appear no higher than 15° above the horizon. They will be visible in the dawn sky, rising at 04:09 (EDT) – 1 hour and 53 minutes before the Sun – and reaching an altitude of 15° above the eastern horizon before fading from view as dawn breaks around 05:41. Jupiter will be at mag -1.9, and Mars at mag 1.6. Both objects will lie in the constellation Gemini. They will be a little too widely separated to fit comfortably within the field of view of a telescope, but will be visible to the naked eye or through a pair of binoculars. A graph of the angular separation between Jupiter and Mars around the time of closest approach is available here. The positions of the pair at the moment of closest approach will be as follows:
Object | Right Ascension | Declination | Constellation | Magnitude | Angular Size
(The table's values were not preserved in this copy.)
The coordinates above are given in J2000.0. The pair will be at an angular separation of 23° from the Sun, which is in Cancer at this time of year.
[Chart: The sky on 22 July 2013. Moon: 14 days old.]
All times shown in EDT. The circumstances of this event were computed using the DE405 planetary ephemeris published by the Jet Propulsion Laboratory (JPL). This event was automatically generated by searching the ephemeris for planetary alignments which are of interest to amateur astronomers, and the text above was generated based on an estimate of your location.
Related events:
19 Jun 2013 – Jupiter at solar conjunction
05 Jan 2014 – Jupiter at opposition
24 Jul 2014 – Jupiter at solar conjunction
06 Feb 2015 – Jupiter at opposition
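For readers who want to reproduce the separation figure from ephemeris coordinates, the standard great-circle formula applies. A minimal sketch (the RA/Dec inputs are placeholders, since the table's values did not survive in this copy):

```python
import math

def angular_separation_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation between two (RA, Dec) positions,
    all in degrees, via the spherical law of cosines."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

# Placeholder coordinates, NOT the event's real ephemeris values:
print(f"{angular_separation_deg(97.0, 22.5, 97.8, 22.2):.2f} deg")
# ~0.80 deg with these inputs; the quoted 0°47' corresponds to about 0.78 deg
```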
0.895513
3.33269
On December 7, 1995, NASA’s historic Galileo probe plunged into Jupiter’s atmosphere at 106,000 mph, relaying 58 minutes of data back to Earth before it was pulverized in the depths of the enormous planet’s crushing interior. In terms of atmospheric composition, some of what the probe measured met expectations. But there were also some surprises, one of the most baffling being that the region Galileo entered was drier than astrophysicists had anticipated. Jupiter’s 79 moons are mostly made of ice, so it had been assumed that the planet’s atmosphere would contain a considerable amount of water. But the 750-pound probe didn’t find it that day. Almost a quarter of a century later, experts are still debating how much water might be swirling within Jupiter’s howling atmosphere. Recent research by a national team of scientists – including Clemson University astrophysicist Máté Ádámkovics – indicates that the answer is … a lot. “By formulating and analyzing data obtained using ground-based telescopes, our team has detected the chemical signatures of water deep beneath the surface of Jupiter’s Great Red Spot,” said Ádámkovics, an assistant professor in the College of Science’s department of physics and astronomy. “Jupiter is a gas giant that contains more than twice the mass of all of our other planets combined. And though 99 percent of Jupiter’s atmosphere is composed of hydrogen and helium, even solar fractions of water on a planet this massive would add up to a lot of water – many times more water than we have here on Earth.” Ádámkovics’ collaborative research was recently featured in the Astronomical Journal, one of the world’s premier journals for astronomy. He was part of a team that included Gordon L. Bjoraker of NASA; Michael H. Wong and Imke de Pater of the University of California, Berkeley; Tilak Hewagama of the University of Maryland; and Glenn Orton of the California Institute of Technology. The paper was titled “The Gas Composition and Deep Cloud Structure of Jupiter’s Great Red Spot.” The team focused its sights on the Great Red Spot, a hurricane-like storm more than twice as wide as Earth that has been blustering in Jupiter’s skies for more than 150 years. The team searched for water by using radiation data collected by two instruments on ground-based telescopes: iSHELL on the NASA Infrared Telescope Facility and the Near Infrared Spectrograph on the Keck 2 telescope, both of which are located on the remote summit of Maunakea in Hawaii. iSHELL is a high-resolution instrument that can detect a wide range of gases across the color spectrum. Keck 2 is the most sensitive infrared telescope on Earth. The team found evidence of three cloud layers in the Great Red Spot, with the deepest cloud layer at 5-7 bars. A bar is a metric unit of pressure that approximates the average atmospheric pressure on Earth at sea level. Altitude on Jupiter is measured in bars because the planet doesn’t have an Earth-like surface from which to measure elevation. At about 5-7 bars – or about 100 miles below the cloud tops – is where the scientists believed the temperature would reach the freezing point for water. The deepest of the three cloud layers identified by the team was believed to be composed of frozen water. “The discovery of water on Jupiter using our technique is important in many ways. Our current study focused on the red spot, but future projects will be able to estimate how much water exists on the entire planet,” Ádámkovics said.
“Water may play a critical role in Jupiter’s dynamic weather patterns, so this will help advance our understanding of what makes the planet’s atmosphere so turbulent. And, finally, where there’s the potential for liquid water, the possibility of life cannot be completely ruled out. So, though it appears very unlikely, life on Jupiter is not beyond the range of our imaginations.” Clemson’s main role in the research was to use specially designed software to transform raw data into science-quality data that could be more easily analyzed and shared with scientists at Clemson and around the world. This type of work was performed this past spring by Rachel Conway, an undergraduate student in physics and astronomy who became involved in the project via Clemson’s Creative Inquiry program. “When I initially began, I started by running the data through. The code was already written and I was just plugging in new data sets and generating output files,” said Conway, a native of Watertown, Connecticut. “But then I began fixing errors and learning more about what was actually going on. I’m interested in everything and anything that’s out there, so learning more about what we don’t know is always cool.” NASA’s Juno spacecraft, which arrived at Jupiter in 2016 and will be orbiting and studying the planet until at least 2021, has revealed many secrets about a planet so large it almost became a star. Juno is also searching for water by using its own high-tech infrared spectrometer. If Juno’s observations match ground-based observations, then the latter can be applied not just to the Great Red Spot, but to all of Jupiter. The technique also can be used to study Saturn, Uranus and Neptune, our solar system’s three other gas planets. “Starting this fall, the next project will be to get a lot more data of this kind to measure not just one spot on Jupiter, but all over Jupiter,” said Ádámkovics, whose research focus is on the physics and chemistry of planet formation, planetary atmospheres and circumstellar disks. “To do this, we’ll be collecting many gigabytes of data with the new instrument, iSHELL, that works at a very high resolution and will complement Juno’s observations. The new part of this next project will be to write the automated software for all this data so that we can get a full picture of the planet’s water abundance.” This time around, Ádámkovics and Conway will have some new members on their Clemson team. Ádámkovics will add six to eight Creative Inquiry students to assist with analyzing the raw data. “In addition to physics students, we also have students who are computer scientists and who specialize in other fields,” Ádámkovics said. “We expect that these cross-disciplinary skill sets will complement each other by enhancing our effectiveness and efficiency. Jupiter still has many mysteries. But we’ve never been more ready or more able to solve them.” Publication: G. L. Bjoraker, et al., “The Gas Composition and Deep Cloud Structure of Jupiter’s Great Red Spot,” AJ, 2018; doi:10.3847/1538-3881/aad186
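To make "even solar fractions of water ... add up to a lot" concrete, a minimal back-of-the-envelope sketch (the 0.1% water mass fraction is an illustrative assumption, not a measured value; the ocean mass is a rough standard figure):

```python
JUPITER_MASS_KG = 1.898e27
EARTH_OCEAN_MASS_KG = 1.4e21  # approximate total mass of Earth's oceans

# Illustrative assumption: water is just 0.1% of Jupiter's mass.
water_mass_kg = 1e-3 * JUPITER_MASS_KG
print(f"~{water_mass_kg / EARTH_OCEAN_MASS_KG:,.0f} times Earth's oceans")
# ~1,356x: even a trace fraction of a gas giant dwarfs Earth's water.
```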
0.87438
3.985957
Since showing itself on August 14, 2013, a bright nova in the constellation Delphinus, now officially named Nova Delphini 2013, has brightened even more. As of this writing, the nova is at magnitude 4.4 to 4.5, meaning that for the first time in years there is a nova visible to the naked eye, if you have a dark enough sky. Even better, use binoculars or a telescope to see this “new star” in the sky. The nova was discovered by Japanese amateur astronomer Koichi Itagaki. When first spotted, it was at about magnitude 6, but it has since brightened. Here’s the light curve of the nova from the AAVSO (American Association of Variable Star Observers), which has also provided a binocular sequence chart. How and where to see the new nova? Below is a great graphic showing exactly where to look in the sky. Additionally, we’ve got some great shots from Universe Today readers around the world who have managed to capture stunning images of Nova Delphini 2013. You can see more graphics and more about the discovery of the nova in our original ‘breaking news’ article by Bob King. If you aren’t able to see the nova for yourself, there are a few online observing options: The Virtual Star Party team, led by UT’s publisher Fraser Cain, will try to get a view during the next VSP, on Sunday night on Google+, usually at this time of year about 10 pm EDT/0200 UTC on Monday mornings. If you’d like a notification for when it’s happening, make sure you subscribe to the Universe Today channel on YouTube. The Virtual Telescope Project, based in Italy, will have an online observing session on August 19, 2013 at 20:00 UTC; you can join astronomer Gianluca Masi at this link. The Slooh online telescope had an observing session yesterday (which you can see here), and we’ll post an update if they plan any additional viewing sessions. There’s no way to predict whether the nova will remain bright for more than a few days, and unfortunately the Moon is getting brighter and bigger in the sky (it will be full on August 20), so take the opportunity this weekend if you can to try to see the new nova. Now, enjoy more images from Universe Today readers: Ralf Vandebergh shared this video he was able to capture on his 10-year-old hand-held video camera as a “demonstration of the brightness of the nova and what is possible with even 10 year old technique from hand.”
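For scale, the brightening from roughly magnitude 6 at discovery to 4.4 corresponds to a simple flux ratio; a minimal sketch of the standard magnitude relation:

```python
def flux_ratio(mag_faint: float, mag_bright: float) -> float:
    """Brightness ratio for a magnitude difference: each magnitude
    is a factor of 100 ** (1/5) ~ 2.512 in flux."""
    return 10 ** (0.4 * (mag_faint - mag_bright))

print(f"{flux_ratio(6.0, 4.4):.1f}x brighter than at discovery")  # ~4.4x
```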
0.82337
3.17318