In the 20th century, Newton’s three-dimensional infinite universe seemed to have been destroyed. His laws of gravity had been usurped by the theory of general relativity. The ether had vanished. Quantum mechanics would soon proceed from theoretical thought experiments to the construction of particle accelerators and nuclear fission. At the same time, a good many surprises awaited scholars in the evolving fields of astrophysics and cosmology, surprises that would call into question some of the certitude with which so many had accepted this revised (and presumed final) quantum vision of creation. Newton had assumed that the system he observed was stable and changeless. Einstein believed—as the Baha’i teachings repeatedly say—that nothing in the universe is static: This phenomenal world will not remain in an unchanging condition even for a short while. Second after second it undergoes change and transformation. Every foundation will finally become collapsed; every glory and splendor will at last vanish and disappear, but the Kingdom of God is eternal and the heavenly sovereignty and majesty will stand firm, everlasting. Hence in the estimation of a wise man the mat in the Kingdom of God is preferable to the throne of the government of the world. – Abdu’l-Baha, Tablets of the Divine Plan, pp. 79-80. A scientific theory thus emerged that the entire universe might be either expanding or contracting, composing or decomposing, if, by definition, it could not be stationary. The confluence of a sequence of discoveries proved Einstein, and the Baha’i view, correct. An important part of the sequence begins with the Austrian physicist Christian Doppler, who in 1842 described how the motion of a source shifts the frequency of the waves an observer receives. Decades later, astronomers employing spectroscopes to examine spiral nebulae discovered that the light emitted from these nebulae shifted toward the red end of the spectrum—that is, the light assumed a lower frequency.
Were these nebulae at a constant distance, the spectral lines would remain constant; if the nebulae were approaching the observer, the light would shift toward the blue end of the spectrum and become higher in frequency. The “Doppler Effect” also holds true for sound: you can hear it in the way the pitch of a truck on the highway changes as it approaches, passes, and recedes. Then, in the early 1920s, while working at the Mount Wilson Observatory in California, Edwin Hubble calculated that some of these “nebulae” were actually distant galaxies outside our own Milky Way. This discovery in itself revised and expanded existing cosmological theory–that there are many galaxies, not just one. Shortly afterwards, Hubble discovered that the redshift in the light from these galaxies indicates that they are receding rapidly from our own galaxy. This observation seemed to confirm Einstein’s theory that the universe is not static, and it seemed to indicate as well that the universe is expanding. The more distant the galaxy, the more rapidly it speeds away from us. Indeed, the linear relation he observed between speed and distance became known as Hubble’s Law, and its constant of proportionality as the “Hubble Constant.” Of course, this same observation could also be seen as confirming Newtonian physics, since the force of gravity is proportional to the mass of an object and falls off with the square of the distance from it. For example, a spaceship must acquire “escape velocity” to break free of the Earth’s gravitational pull, but once it has sufficiently distanced itself from the planet, the same pull, though extant, has no appreciable effect. Of course, since no inertial frame of reference exists from which to assess this motion, we cannot assume that these galaxies are breaking free of attraction to the Milky Way galaxy. Likewise, according to the Newtonian model, they might be attracted to something larger that we cannot yet detect.
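The expansion law these observations established can be illustrated with a short back-of-the-envelope sketch. This is not from the text above: the modern round value H0 ≈ 70 km/s/Mpc and the small-redshift Doppler approximation are assumptions chosen purely for illustration.

```python
# Illustrative sketch of the Doppler relation and Hubble's law.
# H0 = 70 km/s/Mpc is a round modern value, assumed for illustration only.

C = 299_792.458      # speed of light, km/s
H0 = 70.0            # Hubble constant, km/s per megaparsec (assumed)

def recession_velocity(redshift):
    """Approximate recession velocity (km/s) from a small redshift z."""
    return C * redshift          # non-relativistic; valid only for z << 1

def hubble_distance(velocity_km_s):
    """Distance in megaparsecs implied by Hubble's law v = H0 * d."""
    return velocity_km_s / H0

z = 0.005                         # a modest example redshift
v = recession_velocity(z)         # ~1499 km/s
d = hubble_distance(v)            # ~21.4 Mpc

print(f"z = {z}: v ≈ {v:.0f} km/s, d ≈ {d:.1f} Mpc")
```

The farther the galaxy, the larger its redshift and implied recession speed, which is exactly the speed-distance relation Hubble measured.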
Reconciliation and Reciprocity To name and to quantify is clearly one expression of the fact that science—the desire to understand and describe reality—is inherent in humankind: Science is the first emanation from God toward man. All created beings embody the potentiality of material perfection, but the power of intellectual investigation and scientific acquisition is a higher virtue specialized to man alone. – Abdu’l-Baha, The Promulgation of Universal Peace, p. 49. Science as a body of study should thus never be perceived as hubris or vanity, but rather—especially in its more noble aspirations—as our perseverance in the face of what would otherwise seem an unapproachable reality, a cosmos so immeasurable, so vast and mysterious that it becomes tantamount to a metaphysical realm. We encounter this one stark message as we approach infinity in our search for some final encompassing entity or form in the macrocosmic view of creation. But however sophisticated we may become in our capacity to study the universe, it will ever be beyond any final or complete comprehension. Now, while this notion might discourage some scientists and even scientific thought itself, it is perfectly logical in the context of a physical creation whose very existence emulates in metaphorical guise the pre-existent ideas, forms, virtues, and verities whose essences abide only in the non-composite metaphysical reality of the world of the spirit. Therefore, if we can learn about one reality by understanding the other—and given the tenuous nature of speculations about the nature and origin of the cosmos—we would do well to turn our discussion to a brief examination of some of the problems with contemporary cosmological theories and how these can be informed, if not resolved, by guidance and verities as set forth in the Baha’i teachings: Reflect upon the inner realities of the universe, the secret wisdoms involved, the enigmas, the inter-relationships, the rules that govern all.
For every part of the universe is connected with every other part by ties that are very powerful and admit of no imbalance, nor any slackening whatever. In the physical realm of creation, all things are eaters and eaten: the plant drinketh in the mineral, the animal doth crop and swallow down the plant, man doth feed upon the animal, and the mineral devoureth the body of man. Physical bodies are transferred past one barrier after another, from one life to another, and all things are subject to transformation and change, save only the essence of existence itself—since it is constant and immutable, and upon it is founded the life of every species and kind, of every contingent reality throughout the whole of creation. – Abdu’l-Baha, Selections from the Writings of Abdu’l-Baha, p. 157.
Astronomers have used the gravitational warping of light, predicted by Einstein nearly a century ago, to measure the mass of a distant star for the first time.
- The effect of gravity on light can be used to measure the mass of objects in space
- Einstein said it would be impossible to observe this phenomenon with distant stars
- Study provides clues about the fate of our own Sun
The team, led by Kailash Sahu of the Space Telescope Science Institute in Baltimore, measured the mass of a white dwarf star called Stein 2051 B as it passed in front of another more distant star — an event Einstein thought would be impossible to observe. The findings, published in today's edition of Science, will help us understand more about these small dense stars and the ultimate fate of our Sun, which will become a white dwarf star when it burns out. Dr Sahu relied on Einstein's idea that the gravity of an object can bend and magnify rays of light. According to Einstein's general theory of relativity, an object passing in front of another bright object would bend the light of the more distant object and cause it to appear to move from its original position. In this "gravitational lensing", the distance apparently travelled by the background object depends on the mass of the object in front, which is curving the light. While gravitational lensing has been used by astronomers to study the galactic bulge in our Milky Way and other galaxies such as Andromeda and the Magellanic Clouds, this is the first time it has been used to study distant stars. 'No hope' of observing, said Einstein At the time Einstein came up with the idea of gravitational lensing, he did not think it would be possible to use it to observe distant stars. In a 1936 Science paper he wrote there would be "no hope" of observing it directly because stars were too far apart.
"He thought it would be practically a very difficult experiment to do, especially given the technology of the day," said Geraint Lewis, a cosmologist who works on gravitational lensing at the University of Sydney. "He had the mind of a scientist in the 1920s and 30s." In Einstein's time, telescopes were much less advanced so the chance of seeing one star pass in front of another — and being able to observe the lensing effects — was much lower. Even though these passings are rare, said Professor Lewis, these days we can look at a lot of stars in one go. "It's a one in a million chance, so what you do is you look at a million stars at a time," Professor Lewis said. Star catalogue helps search for candidates For their study Dr Sahu and team used data from star catalogues to project the positions of around 5,000 stars and look for any cases where they were likely to pass in front of fainter background stars. Once they discovered that Stein 2051 B — 18 light-years from Earth — was a perfect candidate, they were able to use the Hubble Space Telescope to observe the event. "It's hard work to do this kind of observation and separate the starlight from the white dwarf from the background star, and find the positions accurately. So it's pretty cool stuff," Professor Lewis said. "What these guys have done for the first time is looked at how the motion of a star changes as one passes in front of the other and used that to measure mass." Star reveals itself to be a 'bog standard' white dwarf Dr Sahu and his team found the mass of the white dwarf agreed with the Chandrasekhar limit, a threshold proposed by Indian astrophysicist Subrahmanyan Chandrasekhar that determines whether or not a white dwarf will collapse into a supernova. "When I found that the mass of the white dwarf is precisely in accordance with what Chandrasekhar had predicted in 1930 in his Nobel-prize winning theory of white dwarfs, I fell off my chair," Dr Sahu said. 
There have been varying theories about the structure of white dwarfs, Professor Lewis said, and there was some question about the nature of Stein 2051 B. The weighing of Stein 2051 B using gravitational lensing paves the way for using the method in other similar cases. Professor Lewis said the launch of the James Webb Space Telescope in October 2018 will enable very accurate predictions of when stars will pass in front of another. "Then we'll put all our telescopes on them and watch the passages because that's when we get the most information," he said.
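As a rough illustration of the scales involved, the angular Einstein radius of a point lens can be estimated from the standard lensing formula. The white dwarf's mass (about 0.68 solar masses) and its 18 light-year distance come from the article; the background star's distance is an invented placeholder, since the answer is insensitive to it once it is much larger than the lens distance.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg
LY = 9.461e15          # one light-year, m

def einstein_radius_mas(mass_kg, d_lens_m, d_source_m):
    """Angular Einstein radius of a point-mass lens, in milliarcseconds."""
    theta = math.sqrt(4 * G * mass_kg / C**2
                      * (d_source_m - d_lens_m) / (d_lens_m * d_source_m))
    return theta * 206_264.806 * 1000   # radians -> milliarcseconds

m = 0.68 * M_SUN        # reported mass of Stein 2051 B
d_l = 18 * LY           # lens distance, from the article
d_s = 5000 * LY         # assumed background-star distance (illustrative)

print(f"Einstein radius ≈ {einstein_radius_mas(m, d_l, d_s):.0f} mas")  # ≈ 32 mas
```

Tens of milliarcseconds is tiny, which is why resolving the apparent shift of the background star required the Hubble Space Telescope.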
The apparent moment of impact on the solar system's largest world. Image: Ethan Chappel. An amateur astronomer captured a flash in Jupiter's atmosphere last month that appeared as a bright dot nearly the size of Earth when compared with the gas giant planet. New analysis of the footage finds the brief flare was caused by a relatively small asteroid. A slow-motion GIF of the flash brightening and fading in the middle left of the planet. Image: Ethan Chappel / EPSC. Ethan Chappel recorded the 1.5-second flash on August 7 using a telescope in his backyard in Texas. At its peak, it matched the brightness of Jupiter's moon Io. Ramanakumar Sankar and Csaba Palotai of the Florida Institute of Technology (FIT) analyzed the data to estimate that the flash could have been caused by an impact from a stony-iron asteroid between 39 and 52 feet (12 and 16 meters) in diameter, or about the size of a large bus. The object probably had a mass of about 450 tons and released the equivalent of an explosion of 240 kilotons of TNT when it smashed into the upper atmosphere of Jupiter around 50 miles (80 km) above the planet's clouds. That's about half the energy released by the bolide that exploded over Russia as it broke apart in the atmosphere in 2013, unleashing a shock wave that shattered thousands of windows in the city of Chelyabinsk. These new results were presented Monday at a meeting of the European Planetary Society Congress in Geneva. Ricardo Hueso, a physicist at the University of the Basque Country in Spain, is one of the developers of an open source software package called DeTeCt specially designed to identify impacts on Jupiter. Chappel used DeTeCt to analyze his video of the August impact. Hueso said that the impact appears to be the second brightest of the six captured since 2010. "Most of these objects hit Jupiter without being spotted by observers on Earth," Hueso said.
"However, we now estimate 20 to 60 similar objects impact with Jupiter each year." If that's the case, that's an awful lot of light shows that no one is witnessing, including a few that may be even brighter than the planet-sized flash we saw last month.
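The reported figures can be sanity-checked with simple kinematics. The 450-ton mass is from the article; the entry speed of about 60 km/s (roughly Jupiter's escape velocity) is an assumption, so this is only an order-of-magnitude check, not a reconstruction of the FIT analysis.

```python
# Order-of-magnitude check on the reported impact energy.
mass_kg = 450_000            # ~450 tons, from the article
v_impact = 60_000            # m/s; roughly Jupiter's escape velocity (assumed)
KT_TNT = 4.184e12            # joules per kiloton of TNT

energy_kt = 0.5 * mass_kg * v_impact**2 / KT_TNT
print(f"Kinetic energy ≈ {energy_kt:.0f} kt TNT")   # ≈ 190 kt, near the reported 240 kt
```

A slightly higher entry speed (impactors typically exceed escape velocity) brings the estimate up to the published 240-kiloton figure.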
What are asteroids made of? While composed of metals, rocks, ices, and also many elements that are difficult to find and retrieve here on Earth — hence the growing interest in asteroid-mining missions — these drifting denizens of the Solar System have many different possible ways of forming. Some may be dense hunks of rock and metal, created during violent collisions and breakups of once-larger bodies, while others may be little more than loose clusters of gravel held together by gravity. Knowing how to determine the makeup of an asteroid is important to astronomers, not only to know its history but also to be better able to predict its behavior as it moves through space, interacting with other bodies — other asteroids, future exploration craft, radiation from the Sun, and potentially (although we hope not!) our own planet Earth. Now, using the European Southern Observatory’s New Technology Telescope (NTT) researchers have probed the internal structure of the 535-meter-long near-Earth asteroid Itokawa, and found out that different parts have greatly varying densities, possibly an indication of how it — and others like it — formed. From an ESO news release (Feb. 5): Using very precise ground-based observations, Stephen Lowry (University of Kent, UK) and colleagues have measured the speed at which the near-Earth asteroid (25143) Itokawa spins and how that spin rate is changing over time. They have combined these delicate observations with new theoretical work on how asteroids radiate heat. This small asteroid is an intriguing subject as it has a strange peanut shape, as revealed by the Japanese spacecraft Hayabusa in 2005. To probe its internal structure, Lowry’s team used images gathered from 2001 to 2013, by ESO’s New Technology Telescope (NTT) at the La Silla Observatory in Chile among others, to measure its brightness variation as it rotates. 
This timing data was then used to deduce the asteroid’s spin period very accurately and determine how it is changing over time. When combined with knowledge of the asteroid’s shape this allowed them to explore its interior — revealing the complexity within its core for the first time. “This is the first time we have ever been able to determine what it is like inside an asteroid,” explains Lowry. “We can see that Itokawa has a highly varied structure — this finding is a significant step forward in our understanding of rocky bodies in the Solar System.” The spin of an asteroid and other small bodies in space can be affected by sunlight. This phenomenon, known as the Yarkovsky-O’Keefe-Radzievskii-Paddack (YORP) effect, occurs when absorbed light from the Sun is re-emitted from the surface of the object in the form of heat. When the shape of the asteroid is very irregular the heat is not radiated evenly and this creates a tiny, but continuous, torque on the body and changes its spin rate. Lowry’s team measured that the YORP effect was slowly accelerating the rate at which Itokawa spins. The change in rotation period is tiny — a mere 0.045 seconds per year. But this was very different from what was expected and can only be explained if the two parts of the asteroid’s peanut shape have different densities. This is the first time that astronomers have found evidence for the highly varied internal structure of asteroids. Up until now, the properties of asteroid interiors could only be inferred using rough overall density measurements. This rare glimpse into the diverse innards of Itokawa has led to much speculation regarding its formation. One possibility is that it formed from the two components of a double asteroid after they bumped together and merged. Lowry added, “Finding that asteroids don’t have homogeneous interiors has far-reaching implications, particularly for models of binary asteroid formation.
It could also help with work on reducing the danger of asteroid collisions with Earth, or with plans for future trips to these rocky bodies.” This new ability to probe the interior of an asteroid is a significant step forward, and may help to unlock many secrets of these mysterious objects. Itokawa was discovered in 1998 by the LINEAR project. In August 2003 it was officially named after Hideo Itokawa, a Japanese rocket scientist. (Source) Watch a video below showing a rendering of Itokawa in motion made from spacecraft observations: Video credit: JAXA, ESO/L. Calçada/M. Kornmesser/Nick Risinger (skysurvey.org).
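To see how a change as tiny as 0.045 seconds per year becomes detectable across a 12-year run of images, note that a steady drift in period produces a rotational phase offset that grows quadratically with time. The sketch below assumes Itokawa's roughly 12.1-hour spin period, a figure not stated in the article.

```python
# How a tiny period drift accumulates into a measurable phase offset.
P0 = 12.13 * 3600           # spin period in seconds (~12.1 h; assumed, not in the article)
P_DOT = 0.045 / 3.156e7     # 0.045 s/yr expressed as s/s (dimensionless)
YEAR = 3.156e7              # seconds per year

def extra_rotation_deg(t_years):
    """Accumulated phase offset (degrees) relative to a constant-period model."""
    t = t_years * YEAR
    return 360.0 * P_DOT * t**2 / (2 * P0**2)

print(f"After 12 years: ≈ {extra_rotation_deg(12):.0f}° of accumulated offset")  # ≈ 19°
```

A drift of tens of degrees in rotational phase is easily visible in a lightcurve, which is why such a minuscule spin-up could be measured at all.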
The Fast Dance of Electron Spins – Spin-Flip Within Ten Femtoseconds The extremely fast spin-flip processes that are triggered by the light absorption of metal complexes were simulated in the investigation. © Sebastian Mai. Chemists investigate the interactions of metal complexes and light. When a molecule is hit by light, in many cases a so-called “photoinduced” reaction is initiated. This can be thought of as the interplay of electron motion and nuclear motion. First, the absorption of light energetically “excites” the electrons, which for instance can weaken some of the bonds. Subsequently, the much heavier nuclei start moving. If at a later point in time the nuclei assume a favorable constellation with respect to each other, the electrons can switch from one orbit to another one. Controlled by the physical effect of “spin-orbit coupling” the electron spin can flip in the same moment. This interplay of motion is the reason why spin-flip processes in molecules typically take quite long. However, computer simulations have shown that this is not the case in some metal complexes. For example, in the examined rhenium complex the spin-flip process already takes place within ten femtoseconds, even though in this short time the nuclei are virtually stationary – even light moves only three thousandths of a millimeter within this time. This knowledge is particularly useful for the precise control of electron spins, as, e.g., in quantum computers. Investigation is based on enormous computer power One of the biggest difficulties during the investigation was the huge amount of computer power that was required for the simulations. Although for small organic molecules one can nowadays carry out very accurate simulations already with a modest amount of computational effort, metal complexes present a much bigger challenge.
Among other reasons, this is due to the large number of atoms, electrons, and solvent molecules that need to be included in the simulations, but also because the electron spin can only be accurately described with equations from relativity theory. Altogether, the scientists from the Institute of Theoretical Chemistry spent almost one million computer hours at the Austrian supercomputer “Vienna Scientific Cluster” in the course of their study. This is equivalent to about 100 years of computer time on a typical personal computer. Reference: “Unconventional two-step spin relaxation dynamics of [Re(CO)3(im)(phen)]+ in aqueous solution” by Sebastian Mai and Leticia González, 27 September 2019, Chemical Science.
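The earlier claim that light covers only about three thousandths of a millimeter in ten femtoseconds is easy to verify:

```python
# Quick check: how far does light travel in ten femtoseconds?
C = 2.998e8                  # speed of light, m/s
t = 10e-15                   # ten femtoseconds, in seconds
d_mm = C * t * 1000          # distance in millimetres
print(f"Light travels ≈ {d_mm:.4f} mm in 10 fs")   # ≈ 0.003 mm, i.e. ~3 micrometres
```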
Cluster reveals inner workings of Earth's cosmic particle accelerator Using unprecedented in-situ data from ESA's Cluster mission, scientists have shed light on the ever-changing nature of Earth's shield against cosmic radiation, its bow shock, revealing how this particle accelerator transfers and redistributes energy throughout space. The new study used observations from two of the Cluster mission's four spacecraft, which flew in tight formation through Earth's bow shock, sitting just 7 kilometres apart. The data were gathered on 24 January 2015 at a distance of 90,000 kilometres from Earth, roughly a quarter of the way to the Moon, and reveal properties of the bow shock that were previously unclear due to the lack of such closely spaced in-situ measurements. When a supersonic flow encounters an obstacle, a shock forms. This is often seen in the universe: around stars, supernova remnants, comets, and planets – including our own. Shocks are known to be very efficient particle accelerators, and potentially responsible for creating some of the most energetic particles in the universe. The shock around the Earth, known as the bow shock, is our first line of defence against particles flooding inwards from the cosmos, and our nearest test-bed to study the dynamics of plasma shocks. It exists due to the high, supersonic speeds of solar wind particles, which create a phenomenon somewhat akin to the shock wave formed when a plane breaks the sound barrier. The new study, published today in Science Advances, reveals the mechanisms at play when this shock transfers energy from one type to another. "Earth's bow shock is a natural and ideal shock laboratory," says lead author Andrew Dimmock of the Swedish Institute of Space Physics in Uppsala, Sweden. "Thanks to missions like Cluster, we are able to place multiple spacecraft within and around it, covering scales from hundreds to only a few kilometres.
"This means we can pick apart how the shock changes in space and over time, something that's crucial when characterising a shock of this type." There are several types of shock, defined by the ways in which they transfer kinetic energy into other kinds of energy. In Earth's atmosphere, kinetic energy is transformed into heat as particles collide with one another – but the vast distances at play at our planet's bow shock mean that particle collisions cannot play such a role in energy transfer there, as they are simply too far apart. This type of shock is thus known as a collisionless shock. Such shocks can exist across a vast range of scales, from millimetres up to the size of a galaxy cluster, and instead transfer energy via processes involving plasma waves and electric and magnetic fields. "As well as being collisionless, Earth's bow shock can also be non-stationary," adds co-author Michael Balikhin of the University of Sheffield, UK. "In a way, it behaves like a wave in the sea: as a wave approaches the beach, it seems to grow in size as the depth decreases, until it breaks – this is because the crest of the wave moves faster than the trough, causing it to fold over and break. "This kind of 'breaking' occurs for waves of plasma, too, although the physics is somewhat more complicated." To investigate in detail the physical scales at which this wave breaking is initiated – something which was previously unknown – the researchers solicited a special campaign in which two of the four Cluster probes were moved to an unprecedentedly close separation of less than 7 km, gathering high-resolution data from within the shock itself. Analysing the data, the team found that the measurements of the magnetic field obtained by the two Cluster spacecraft differed significantly. 
This direct evidence that small-scale magnetic field structures exist within the broader extent of the bow shock indicates that they are key in facilitating the breaking of plasma waves, and thus the transfer of energy, in this portion of the magnetosphere. With sizes of a few kilometres, similar to the scales at which electrons rotate around the magnetic field lines, these structures are located in a particularly thin and variable part of the shock, where the properties of the constituent plasma and surrounding fields can change most drastically. "This part of the bow shock is known as the shock ramp, and can be as thin as a few kilometres – a finding that was also based on Cluster data a few years back," says co-author Philippe Escoubet, who is also ESA project scientist for the Cluster mission. Launched in 2000, Cluster's four spacecraft fly in formation around the Earth, making it the first space mission able to study, in three dimensions, the physical processes occurring within and in the near vicinity of the Earth's magnetic environment. "This kind of study really shows the importance of Cluster as a mission," adds Escoubet. "By achieving incredibly small spacecraft separations – seven kilometres as used in this study and even smaller, down to just three kilometres – Cluster is allowing us to probe our planet's magnetic environment at the smallest scales ever achieved. "This advances our understanding of Earth's bow shock and how it acts as a giant particle accelerator – something that is key in our knowledge of the high-energy universe."
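A shock forms because the solar wind outruns the fastest wave speed in the plasma ahead of it. As a rough sketch with typical near-Earth solar-wind values (assumed here, not taken from the study), the flow is several times faster than the fast magnetosonic speed:

```python
import math

MU0 = 4 * math.pi * 1e-7     # vacuum permeability, H/m
K_B = 1.381e-23              # Boltzmann constant, J/K
M_P = 1.673e-27              # proton mass, kg

# Typical (assumed) solar-wind values near Earth -- illustrative, not from the study:
n = 7e6                      # proton number density, per m^3
B = 5e-9                     # magnetic field strength, tesla
T = 1e5                      # proton temperature, kelvin
v_sw = 400e3                 # solar-wind speed, m/s

rho = n * M_P
v_alfven = B / math.sqrt(MU0 * rho)            # Alfven speed
c_sound = math.sqrt(5 / 3 * K_B * T / M_P)     # ion-acoustic (sound) speed
v_fast = math.sqrt(v_alfven**2 + c_sound**2)   # fast magnetosonic speed

print(f"Magnetosonic Mach number ≈ {v_sw / v_fast:.1f}")  # ≈ 7
```

A Mach number well above 1 is what makes the bow shock both inevitable and such an efficient particle accelerator.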
The celestial dragon is in its ascendancy this week for early-evening observers. Have you ever wondered why a particular group of stars was made into a certain constellation? Sometimes a star pattern suggests an object, creature or person. Other constellations portray mythological creatures such as unreal monsters. Draco, the Dragon, is one of these. Draco is almost entirely circumpolar – that is, it always remains above the horizon, never rising or setting for skywatchers at most mid-northern latitudes. But right now is the best evening season for tracing out the windings of this unusual beast's snakelike body. This week, between 8:30 and 9:00 p.m. local daylight time, he appears to pass between both the Little and Big Dippers, with his head raised high above Polaris, almost to the overhead point (called the zenith). The Dragon's head is the most conspicuous part of Draco: an irregular, albeit conspicuous quadrangle, not quite half the size of the Big Dipper's bowl. You can find it situated about a dozen degrees to the north and west of the brilliant blue-white star, Vega, the brightest of the three stars that make up the Summer Triangle (ten degrees is roughly equal to your clenched fist held at arm's length). Draco is a very ancient grouping. The earliest Sumerians considered these stars to represent the dragon Tiamat. Later it became one of the creatures that Hercules killed. One of Draco's tasks was to guard the garden of the Hesperides and its golden apples that Hercules was supposed to retrieve. In the stars, as Draco coils around Polaris we now see Hercules standing (albeit upside down) on Draco's head. The brightest star is Eltanin, a second-magnitude star, shining with an orange tinge. This star is famous for being the one with which the English astronomer James Bradley discovered the aberration of starlight – an astronomical phenomenon which produces an apparent motion of celestial objects – in the year 1728.
Interestingly, a number of temples in Ancient Egypt were apparently oriented toward this star. The faintest of the four stars in the quadrangle is Nu Draconis, a wonderful double star for very small telescopes. The two stars are practically the same brightness, both appearing just a trifle brighter than fifth magnitude and separated by just over one arc minute (or about 1/30 the apparent diameter of a full Moon). I first stumbled across Nu as a teenager in the Bronx, using low power on a four-and-a-quarter-inch Newtonian reflecting telescope. I likened it to a pair of tiny headlights. Check it out for yourself. The pole of the heavens is moving slowly among the constellations of the northern sky, once around a large circle. It is owing to a movement of the Earth for which the pull of both the Sun and Moon on our bulging equator is chiefly responsible, a movement known as "precession." This double attraction causes the Earth to wobble slightly, like a slowing-down top. While the tilt of the axis to the Earth's orbit remains the same (tilted 23.5 degrees from the equator), the axis itself describes a funnel-shaped motion, completing one rotation in about 25,800 years. This time span – one complete wobble – is called a "Great" or "Platonic" Year. Located in Draco's tail is the faint star Thuban. During the third millennium BC, the Earth's axis was pointed almost directly at this star. As such, Thuban was the North Star when the Pyramids were being built, some 5,000 years ago. Thuban was nearest to the North Pole of the sky about 2830 B.C. It then shone in the sky almost motionless in the north, near to where the current North Star, Polaris, now appears. Look roughly midway between the bowl of the Little Dipper and the star Mizar (where the Big Dipper's handle bends) and there you will find the former North Star. And thanks to the oscillating motion of precession, Thuban will again be the North Star some 20,000 years from now.
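The figures above are self-consistent: with a 25,800-year precession cycle and Thuban's closest approach to the pole around 2830 B.C., its next turn as North Star falls roughly 21,000 years from now. A minimal check (treating 2830 B.C. as year -2830 and ignoring calendar subtleties):

```python
# Rough consistency check of the precession timing in the article.
PRECESSION_PERIOD = 25_800       # years for one full wobble, from the article
THUBAN_LAST_POLE = -2830         # 2830 B.C., from the article
CURRENT_YEAR = 2024              # assumed "now" for the estimate

next_pole_year = THUBAN_LAST_POLE + PRECESSION_PERIOD   # ~22970 CE
years_from_now = next_pole_year - CURRENT_YEAR
print(f"Thuban is the pole star again around {next_pole_year} CE, "
      f"≈ {years_from_now:,} years from now")
```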
Joe Rao serves as an instructor and guest lecturer at New York's Hayden Planetarium. He writes about astronomy for The New York Times and other publications, and he is also an on-camera meteorologist for News 12 Westchester, New York.
Becoming an astronaut is a rare honor. The rigorous selection process, the hard training, and then… the privilege of going into space! It is something few human beings will ever be privileged enough to experience. But what about other species of animal that have gone into space? Are we not being just the slightest bit anthropocentric in singling out humans for praise? What about all those brave simians and mice that were sent into space? What about the guinea pigs and rats? And what of “Man’s Best Friend”, the brave canines that helped pave the way for “manned” spaceflight? During the 1950s and 60s, the Soviets sent over 20 dogs into space, some of which never returned. Here’s what we know about these intrepid canines who helped make humanity a space-faring race! During the 1950s and 60s, the Soviets and Americans found themselves locked in the Space Race. It was a time of intense competition as both superpowers attempted to outmaneuver the other and become the first to achieve spaceflight, conduct crewed missions to orbit, and eventually land crews on another celestial body (i.e. the Moon). Before crewed missions could be sent, however, both the Soviet space program and NASA conducted rigorous tests involving animal test subjects, as a way of gauging the stresses and physical tolls going into space would have. These tests were not without precedent, as animals had been used for aeronautical tests in previous centuries. For instance, in 1783, the Montgolfier brothers sent a sheep, a duck and a rooster when testing their hot air balloon to see what the effects would be. Between 1947 and 1960, the US launched several captured German V-2 rockets (which contained animal test subjects) to measure the effect traveling to extremely high altitudes would have on living organisms. Because of the shortage of rockets, they also employed high-altitude balloons. These tests were conducted using fruit flies, mice, hamsters, guinea pigs, cats, dogs, frogs, goldfish and monkeys.
The most famous test case was Albert II, a rhesus monkey that became the first monkey to go into space on June 14th, 1949. The Soviets felt that dogs would be the perfect test subjects, for several reasons. For one, it was believed that dogs would be more comfortable with prolonged periods of inactivity. The Soviets also selected female dogs (due to their better temperament) and insisted on stray dogs (rather than house dogs) because they felt strays would be better able to tolerate the extreme stresses of space flight. To prepare the dogs for their test flights, the Soviets confined them in small boxes of decreasing size for periods of between 15 and 20 days at a time. This was designed to simulate spending time inside the small safety modules that would house them for the duration of their flights. Other exercises designed to prepare the dogs for space flight included having them stand still for long periods of time. They also sought to get the dogs accustomed to wearing space suits, and made them ride in centrifuges that simulated the high acceleration experienced during launch. Between 1951 and 1956, the Russians conducted their first test flights using dogs. Using R-1 rockets, a total of 15 missions were flown, all suborbital in nature, reaching altitudes of around 100 km (60 mi) above sea level. The dogs that flew in these missions wore pressure suits with acrylic glass bubble helmets. The first to go up were Dezik and Tsygan, who both launched aboard an R-1 rocket on July 22nd, 1951. The mission flew to a maximum altitude of 110 km, and both dogs were recovered unharmed afterwards. Dezik made another sub-orbital flight on July 29th, 1951, with a dog named Lisa, although neither survived because their capsule’s parachute failed to deploy on re-entry.
Several more launches took place throughout the Summer and Fall of 1951, which included the successful launch and recovery of space dogs Malyshka and ZIB. In both cases, these dogs were substitutes for the original space dogs – Smelaya and Bolik – who ran away just before they were scheduled to launch. By 1954, space dogs Lisa-2 (“Fox” or “Vixen”, the second dog to bear this name after the first died) and Ryzhik (“Ginger”, because of the color of her fur) made their debut. Their mission flew to an altitude of 100 km on June 2nd, 1954, and both dogs were recovered safely. The following year, Albina and Tsyganka (“Gypsy girl”) were both ejected out of their capsule at an altitude of 85 km and landed safely. Between 1957 and 1960, 11 flights with dogs were made using the R-2A series of rockets, which flew to altitudes of about 200 km (124 mi). Three flights were made to an altitude of about 450 km (280 mi) using R-5A rockets in 1958. In the R-2 and R-5 rockets, the dogs were contained in a pressurized cabin. Those who took part in these launches included Otvazhnaya (“Brave One”), who made a flight on July 2nd, 1959, along with a rabbit named Marfusha (“Little Martha”) and another dog named Snezhinka (“Snowflake”). Otvazhnaya would go on to make 5 other flights between 1959 and 1960. By the late 1950s, and as part of the Sputnik and Vostok programs, Russian dogs began to be sent into orbit around Earth aboard R-7 rockets. On November 3rd, 1957, the famous space dog Laika became the first animal to go into orbit as part of the Sputnik-2 mission. The mission ended tragically, with Laika dying in flight. But unlike other missions where dogs were sent into suborbit, her death was anticipated in advance. It was believed Laika would survive for a full ten days, when in fact she died between five and seven hours into the flight. At the time, the Soviet Union claimed she died painlessly in orbit when her oxygen supply ran out.
More recent evidence, however, suggests that she died as a result of overheating and panic. This was due to a series of technical problems resulting from a botched deployment: the first was damage done to the thermal system during separation; the second was some of the satellite’s thermal insulation being torn loose. As a result of these two mishaps, temperatures in the cabin reached over 40 °C. The mission lasted 162 days before the orbit finally decayed and the craft fell back to Earth. Her sacrifice has been honored by many countries through a series of commemorative stamps, and she was hailed as a “hero of the Soviet Union”. Much was learned from her mission about the behavior of organisms during space flight, though it has been argued that what was learned did not justify the sacrifice. The next dogs to go into space were Belka (“Squirrel”) and Strelka (“Little Arrow”), whose flight took place on Aug. 19th, 1960, as part of the Sputnik-5 mission. The two dogs were accompanied by a grey rabbit, 42 mice, 2 rats, flies, and several plants and fungi, and all spent a day in orbit before returning safely to Earth. Strelka went on to have six puppies, one of which was named Pushinka (“Fluffy”). This pup was presented to President John F. Kennedy’s daughter (Caroline) by Nikita Khrushchev in 1961 as a gift. Pushinka went on to have puppies with the Kennedys’ dog (named Charlie), the descendants of which are still alive today. On Dec. 1st, 1960, space dogs Pchyolka (“Little Bee”) and Mushka (“Little Fly”) went into space as part of Sputnik-6. The dogs, along with another complement of various test animals, plants and insects, spent a day in orbit. Unfortunately, all died when the craft’s retrorockets experienced an error during reentry, and the craft had to be intentionally destroyed. Sputnik 9, which launched on March 9th, 1961, was crewed by spacedog Chernushka (“Blackie”) – as well as a cosmonaut dummy, mice and a guinea pig.
The capsule made one orbit before returning to Earth and making a soft landing using a parachute. Chernushka was safely recovered from the capsule. On March 25th, 1961, the dog Zvezdochka (“Starlet”), who was named by Yuri Gagarin, made one orbit on board the Sputnik-10 mission with a cosmonaut dummy. This practice flight took place less than three weeks before Gagarin’s historic flight on April 12th, 1961, in which he became the first man to go into space. After re-entry, Zvezdochka safely landed and was recovered. Spacedogs Veterok (“Light Breeze”) and Ugolyok (“Coal”) were launched on board a Voskhod space capsule on Feb. 22nd, 1966, as part of Cosmos 110. This mission, which spent 22 days in orbit before safely landing on March 16th, set the record for longest-duration spaceflight by dogs, a record that would not be broken by humans until 1971. To this day, the dogs that took part in the Soviet space and cosmonaut training program are seen as heroes in Russia. Many of them, Laika in particular, were put on commemorative stamps that enjoyed circulation in Russia and in many Eastern Bloc countries. There are also monuments to the space dogs in Russia. These include the statue that stands outside of Star City, the cosmonaut training facility in Moscow. Created in 1997, the monument shows Laika positioned behind a statue of a cosmonaut with her ears erect. The Monument to the Conquerors of Space, which was constructed in Moscow in 1964, includes a bas-relief of Laika along with representations of all those who contributed to the Soviet space program. On April 11, 2008, at the military research facility in Moscow where Laika was prepped for her mission to space, officials unveiled a monument of her poised inside the fuselage of a space rocket (shown at top). Because of her sacrifice, all future missions involving dogs and other test animals were designed to be recoverable.
Four other dogs died in Soviet space missions: Bars and Lisichka were killed on July 28th, 1960, when their R-7 rocket exploded shortly after launch, while Pchyolka and Mushka died when their space capsule was purposely destroyed after a failed re-entry to prevent foreign powers from inspecting it. However, their sacrifice helped to advance the safety and abort procedures that would be used for many decades to come in human spaceflight. We have written many interesting articles about animals and space flight here at Universe Today. Here’s Who was the First Dog to go Into Space?, What was the First Animal to go into Space?, What Animals Have been to Space?, Who was “Space Dog” Laika?, and Russian Memorial for Space Dog Laika. Astronomy Cast has an episode on space capsules.
In a few decades, the Breakthrough Starshot initiative hopes to send a sailcraft to the neighboring system of Alpha Centauri. Using a lightsail and a directed-energy (aka. laser) array, a tiny spacecraft could be accelerated to 20% the speed of light (0.2 c). This would allow Starshot to make the journey to Alpha Centauri and study any exoplanets there in just 20 years, thus fulfilling the dream of interstellar exploration within our lifetimes. Naturally, this plan presents a number of engineering and logistical challenges, one of which involves the transmission of data back to Earth. In a recent study, Starshot Systems Director Dr. Kevin L.G. Parkin analyzes the possibility of using a laser to transmit data back to Earth. This method, argued Parkin, is the most effective way for humanity to get a glimpse of what lies beyond our Solar System. If we want to travel to the stars, we’re going to have to be creative. Conventional rockets aren’t nearly powerful enough to allow us to journey across light-years in a reasonable time. Even nuclear rockets might not be enough. So what’s humanity to do? The answer could be a light sail. On June 25th, 2019, The Planetary Society‘s cubesat spacecraft known as LightSail 2 lifted off from the NASA Kennedy Space Center in Florida aboard a Falcon Heavy rocket. This was the second solar sail launched by the Society, the first (LightSail 1) having been sent into space in 2015. Like its predecessor, the purpose of this spacecraft is to demonstrate the technology that would allow for solar sails operating within Low Earth Orbit (LEO). Since reaching orbit, LightSail 2 has shown itself to be in good working order, as indicated by the Mission Control Dashboard recently introduced by The Planetary Society. In addition to establishing two-way communications with mission controllers and passing a battery of checkouts, the spacecraft also took its first pictures of Earth (and some selfies for good measure).
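The "20% of light speed, 20 years to Alpha Centauri" figures above are easy to sanity-check. Here is a minimal sketch; the distance and speed-of-light values are rounded, illustrative constants, not official Starshot numbers:

```python
# Rough sanity check of the Starshot figures quoted above.
# All constants are rounded, illustrative values.
C_KM_S = 299_792.458        # speed of light, km/s
FRACTION_OF_C = 0.2         # the quoted 20% of light speed
DISTANCE_LY = 4.37          # Alpha Centauri, light-years (approx.)

cruise_speed_km_s = FRACTION_OF_C * C_KM_S

# At a constant fraction f of light speed, travel time in years
# is simply (distance in light-years) / f.
travel_time_years = DISTANCE_LY / FRACTION_OF_C

print(f"cruise speed: {cruise_speed_km_s:,.0f} km/s")
print(f"one-way travel time: {travel_time_years:.1f} years")
```

This works out to roughly 60,000 km/s and about 22 years, in line with the "just 20 years" quoted once the brief acceleration phase is neglected.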
This week, we are joined by Dr. Bruce Betts, Chief Scientist and LightSail Program Manager for The Planetary Society. Prior to working on the LightSail program, Dr. Betts managed a number of flight instrument projects at the Planetary Society, including silica glass DVDs on the Mars Exploration Rovers and Phoenix lander and the LIFE biology experiment that flew on the Russian Phobos sample return mission, and he led a NASA grant studying microrovers assisting human exploration. Dr. Betts’ new children’s book, “Astronomy for Kids: How to Observe Outer Space with a Telescope, Binoculars, or Just Your Eyes!” is now available in time for holiday gift giving. Prior to joining the Planetary Society, Dr. Betts, a planetary scientist, studied planetary surfaces, including Mars, the Moon, and Jupiter’s moons, using infrared and other data, during his time at the San Juan Institute/Planetary Science Institute. Additionally, Dr. Betts spent three years at NASA headquarters managing planetary instrument development programs to design spacecraft science instruments. In 2015, Russian billionaire Yuri Milner established Breakthrough Initiatives, a non-profit organization dedicated to enhancing the search for extraterrestrial intelligence (SETI). In April of the following year, he and the organization he founded announced the creation of Breakthrough Starshot, a program to create a lightsail-driven “wafercraft” that would make the journey to the nearest star system – Proxima Centauri – within our lifetime. In the latest development, on Wednesday May 23rd, Breakthrough Starshot held an “industry day” to outline their plans for developing the Starshot laser sail. During this event, the Starshot committee issued a Request For Proposals (RFP) to potential bidders, outlining their specifications for the sail that will carry the wafercraft as it makes the journey to Proxima Centauri within our lifetimes.
As we have noted in several previous articles, Breakthrough Starshot calls for the creation of a gram-scale nanocraft being towed by a laser sail. This sail will be accelerated by an Earth-based laser array to a velocity of about 60,000 km/s (37,282 mps) – or 20% the speed of light (0.2 c). The concept builds upon the idea of a solar sail, a spacecraft that relies on the pressure of sunlight to push itself through space. At this speed, the nanocraft would be able to reach the closest star system to our own – Proxima Centauri, located 4.246 light-years away – in just 20 years’ time. Since its inception, the team behind Breakthrough Starshot has invested considerable time and energy addressing the conceptual and engineering challenges such a mission would entail. And with this latest briefing, they are now looking to move the project from concept to reality. In addition to being the Frank B. Baird, Jr. Professor of Science at Harvard University, Abraham Loeb is also the Chair of the Breakthrough Starshot Advisory Committee. As he explained to Universe Today via email: “Starshot is an initiative to send a probe to the nearest star system at a fifth of the speed of light so that it will get there within a human lifetime of a couple of decades. The goal is to obtain photos of exo-planets like Proxima b, which is in the habitable zone of the nearest star Proxima Centauri, four light years away. The technology adopted for fulfilling this challenge uses a powerful (100 gigawatt) laser beam pushing on a lightweight (1 gram) sail to which a lightweight electronics chip is attached (with a camera, navigation and communication devices).
The related technology development is currently funded at $100M by Yuri Milner through the Breakthrough Foundation.” “The scope of this RFP addresses the Technology Development phase – to explore LightSail concepts, materials, fabrication and measurement methods, with accompanying analysis and simulation that creates advances toward a viable path to a scalable and ultimately deployable LightSail.” As Loeb indicated, this RFP comes not long after another “industry day” that was related to the development of the laser technology – termed the “Photon Engine”. In contrast, this particular RFP was dedicated to the design of the laser sail itself, which will carry the nanocraft to Proxima Centauri. “The Industry Day was intended to inform potential partners about the project and the request for proposals (RFP) associated with research on the sail materials and design,” added Loeb. “Within the next few years we hope to demonstrate the feasibility of the required sail and laser technologies. The project will allocate funds to experimental teams who will conduct the related research and development work.” The RFP also addressed Starshot’s long-term goals and its schedule for research and development in the coming years. These include investing $100 million over the next five years to determine the feasibility of the laser and sail, investing the equivalent value of the European Extremely Large Telescope (EELT) from year 6 to year 11 to build a low-power prototype for space testing, and investing the equivalent value of the Large Hadron Collider (LHC) over a 20-year period to develop the final spacecraft. “The European Extremely Large Telescope (EELT) will cost on order of a billion [dollars] and the Large Hadron Collider cost was ten times higher,” said Loeb.
“These projects were mentioned to calibrate the scale of the cost for the future phases in the Starshot project, where the second phase will involve producing a demo system and the final step will involve the complete launch system.” The research and development schedule for the sail was also outlined, with three major phases identified over the next 5 years. Phase 1 (which was the subject of the RFP) would entail the development of concepts, models and subscale testing. Phase 2 would involve hardware validation in a laboratory setting, while Phase 3 would consist of field demonstrations. With this latest “industry day” complete, Starshot is now open for submissions from industry partners looking to help them realize their vision. Step A proposals, which are to consist of a five-page summary, are due on June 22nd and will be assessed by Harry Atwater (the Chair of the Sail Subcommittee) as well as Kevin Parkin (head of Parkin Research), Jim Benford (muWave Sciences) and Pete Klupar (the Project Manager). Step B proposals, which are to consist of a more detailed, fifteen-page summary, will be due on July 10th. From these, the finalists will be selected by Pete Worden, the Executive Director of Breakthrough Starshot. If all goes according to plan, the initiative hopes to launch the first lasersail-driven nanocraft toward Proxima Centauri in 30 years and see it arrive there in 50 years. So if you’re an aerospace engineer, or someone who happens to run a private aerospace firm, be sure to get your proposals ready! To learn more about Starshot, the engineering challenges they are addressing, and their research, follow the links provided to the BI page. To see the slides and charts from the RFP, check out Starshot’s Solicitations page. In April of 2016, Russian billionaire Yuri Milner announced the creation of Breakthrough Starshot.
As part of his non-profit scientific organization (known as Breakthrough Initiatives), the purpose of Starshot was to design a lightsail nanocraft that would be capable of reaching the nearest star system – Alpha Centauri (aka. Rigel Kentaurus) – within our lifetime. Since its inception, the scientists and engineers behind the Starshot concept have sought to address the challenges that such a mission would face. Similarly, there have been many in the scientific community who have also made suggestions as to how such a concept could work. The latest comes from the Max Planck Institute for Solar System Research, where two researchers came up with a novel way of slowing the craft down once it reaches its destination. To recap, the Starshot concept involves a small, gram-scale nanocraft being towed by a lightsail. Using a ground-based laser array, this lightsail would be accelerated to a velocity of about 60,000 km/s (37,282 mps) – or 20% the speed of light. At this speed, the nanocraft would be able to reach the closest star system to our own – Alpha Centauri, located 4.37 light-years away – in just 20 years’ time. Naturally, this presents a number of technical challenges – which include the possibility of a collision with interstellar dust, the proper shape of the lightsail, and the sheer energy requirements for powering the laser array. But equally important is the idea of how such a craft would slow down once it reached its destination. With no lasers at the other end to apply braking energy, how would the craft slow down enough to begin studying the system? With the help of IT specialist Michael Hippke, Max Planck researcher René Heller considered what would be needed for an interstellar mission to reach Alpha Centauri and provide good scientific returns upon its arrival. This would require that braking maneuvers be conducted once it arrived, so that the spacecraft would not overshoot the system in the blink of an eye.
As they state in their study: “Although such an interstellar probe could reach Proxima 20 years after launch, without propellant to slow it down it would traverse the system within hours. Here we demonstrate how the stellar photon pressures of the stellar triple Alpha Cen A, B, and C (Proxima) can be used together with gravity assists to decelerate incoming solar sails from Earth.” For the sake of their calculations, Heller and Hippke estimated that the craft would weigh less than 100 grams (3.5 ounces), and would be mounted on a sail measuring 100,000 m² (1,076,391 square feet) in surface area. Once these calculations were complete, Hippke adapted them into a series of computer simulations. Based on their results, they proposed an entirely new mission concept that does away with the need for lasers entirely. In essence, their revised concept calls for an Autonomous Active Sail (AAS) craft that would provide its own propulsion and stopping power. This craft would deploy its sail while in the Solar System and use the pressure of sunlight to accelerate it to high speeds. Once it reached the Alpha Centauri system, it would redeploy its sail so that incoming radiation from Alpha Centauri A and B would have the effect of slowing it down. An added bonus of this proposed maneuver is that the craft, once it had been decelerated to the point that it could effectively explore the Alpha Centauri system, could then use a gravity assist from these stars to reroute itself towards Proxima Centauri. Once there, it could conduct the first up-close exploration of Proxima b – the closest exoplanet to Earth – and determine what its atmospheric and surface conditions are like. Since the existence of this planet was first announced by the European Southern Observatory back in August of 2016, there has been much speculation about whether or not it could be habitable.
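To get a feel for the photon-pressure braking involved, here is a minimal sketch using the sail area and craft mass quoted above. The stellar luminosity (roughly 1.5 times the Sun's for Alpha Centauri A), the 1 au standoff distance, and the assumption of perfect reflection are all illustrative values chosen for this example, not figures from the Heller–Hippke paper:

```python
import math

# Hedged back-of-envelope: photon-pressure force on a large,
# perfectly reflecting sail near Alpha Centauri A.
C = 2.998e8            # speed of light, m/s
L_SUN = 3.828e26       # solar luminosity, W
AU = 1.496e11          # astronomical unit, m

L_star = 1.5 * L_SUN   # Alpha Cen A, approximate (assumption)
area_m2 = 1.0e5        # sail area from the study, m^2
mass_kg = 0.1          # the "<100 grams" figure, kg
d = 1.0 * AU           # standoff distance (assumption)

flux_w_m2 = L_star / (4 * math.pi * d**2)   # stellar flux at the sail
force_n = 2 * flux_w_m2 * area_m2 / C       # perfect reflection doubles the momentum transfer
decel_m_s2 = force_n / mass_kg

print(f"flux:  {flux_w_m2:,.0f} W/m^2")
print(f"force: {force_n:.2f} N")
print(f"decel: {decel_m_s2:.1f} m/s^2 (~{decel_m_s2 / 9.81:.1f} g)")
```

Even this generous setup yields only about 14 m/s² of braking, tiny against a 60,000 km/s approach speed, which is why the authors' scheme leans on an extremely lightweight craft, close stellar passes (where the 1/d² flux is far higher), and gravity assists from all three stars.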
Having a mission that could examine it to check for the telltale markers – a viable atmosphere, a magnetosphere, and liquid water on the surface – would surely settle that debate. As Heller explained in a press release from the Max Planck Institute, this concept presents quite a few advantages, but comes with its share of trade-offs – not the least of which is the time it would take to get to Alpha Centauri. “Our new mission concept could yield a high scientific return, but only the grandchildren of our grandchildren would receive it,” he said. “Starshot, on the other hand, works on a timescale of decades and could be realized in one generation. So we might have identified a long-term, follow-up concept for Starshot.” At present, Heller and Hippke are discussing their concept with Breakthrough Starshot to see if it would be viable. One individual who has looked over their work is Professor Avi Loeb, the Frank B. Baird Jr. Professor of Science at Harvard University, and the chairman of the Breakthrough Foundation’s Advisory Board. As he told Universe Today via email, the concept put forth by Heller and Hippke is worthy of consideration, but has its limitations: “If it is possible to slow down a spacecraft by starlight (and gravitational assist), then it is also possible to launch it in the first place by the same forces… If so, why is the recently announced Breakthrough Starshot project using a laser and not Sunlight to propel our spacecraft? The answer is that our envisioned laser array can push the sail with an energy flux that is a million times larger than the local solar flux. “In using starlight to reach relativistic speeds, one must use an extremely thin sail. In the new paper, Heller and Hippke consider the example of a milligram instead of a gram-scale sail. For a sail of area ten square meters (as envisioned in our Starshot concept study), the thickness of their sail must be only a few atoms.
Such a surface is orders of magnitude thinner than the wavelength of light that it aims to reflect, and so its reflectivity would be low. It does not appear feasible to reduce the weight by so many orders of magnitude and yet maintain the rigidity and reflectivity of the sail material. “The main constraint in defining the Starshot concept was to visit Alpha Centauri within our lifetime. Extending the travel time beyond the lifetime of a human, as advocated in this paper, would make it less appealing to the people involved. Also, one should keep in mind that the sail must be accompanied by electronics which will add significantly to its weight.” In short, if time is not a factor, we can envision that our first attempts to reach another solar system may indeed involve an AAS being propelled by sunlight and slowed down by starlight. But if we’re willing to wait centuries for such a mission to be completed, we might also consider sending rockets with conventional engines (possibly even crewed ones) to Alpha Centauri. And if we are intent on getting there within our own lifetimes, then a laser-driven sail or something similar will have to be the way to go. Humanity has spent over half a century exploring what’s in our own backyard, and some of us are impatient to see what’s next door! In 2015, Russian billionaire Yuri Milner founded Breakthrough Initiatives with the intention of bolstering the search for extra-terrestrial life. Since that time, the non-profit organization – which is backed by Stephen Hawking and Mark Zuckerberg – has announced a number of advanced projects. The most ambitious of these is arguably Project Starshot, an interstellar mission that would make the journey to the nearest star in just 20 years. This concept involves an ultra-light nanocraft that would rely on a laser-driven sail to achieve speeds of up to 20% the speed of light. Naturally, for such a mission to be successful, a number of engineering challenges have to be tackled first.
And according to a recent study by a team of international researchers, two of the most important issues are the shape of the sail itself and the type of laser involved. As they indicate in their study, titled “On the Stability of a Space Vehicle Riding on an Intense Laser Beam“, the team ran stability simulations on the concept, taking into account the nature of the wafer-sized craft (aka. StarChip), the sail (aka. Lightsail) and the nature of the laser itself. For the sake of these simulations, they also factored in a number of assumptions about Starshot’s design. These included the notion that the StarChip would be a rigid body (i.e. made up of solid material), that the circular sail would either be flat, spherical or conical (i.e. concave in shape), and that the surface of the sail would reflect the laser light. Beyond this, they played with multiple variations on the design, and came up with some rather telling results. As Dr. Elena Popova, the lead author on the paper, told Universe Today via email: “We considered different shapes of sail: a) spherical (coincides with parabolic for small sizes) as most appropriate for final configuration of nanocraft en route; b) conical; c) flat (simplest) (will be seen to be unstable so that even spinning of craft does not help).” What they found was that the simplest stable configuration would involve a sail that was spherical in shape. It would also require that the StarChip be tethered at a sufficient distance from the sail, one which would be longer than the curvature radius of the sail itself. “For the sail with almost flat cone shape we obtained similar stability condition,” said Popova. “The nanocraft with flat sail is unstable in every case. It simply corresponds to the case of infinite radius of curvature of the sail. Hence, there is no way to extend the center of mass beyond it.” As for the laser, they considered how the two main types of beam would affect stability.
This included uniform lasers that have a sharp boundary and “Gaussian” beams, which are characterized by high intensity in the middle that declines rapidly towards the edges. As Dr. Popova stated, they determined that in order to ensure stability – and that the craft wouldn’t be lost to space – a uniform laser was the way to go: “The nanocraft driven by intense laser beam pressure acting on its Lightsail is sensitive to the torques and lateral forces reacting on the surface of the sail. These forces influence the orientation and lateral displacement of the spacecraft, thus affecting its dynamics. If unstable, the nanocraft might even be expelled from the area of the laser beam. The most dangerous perturbations in the position of the nanocraft inside the beam and its orientation relative to the beam axis are those with direct coupling between rotation and displacement (‘spin-orbit coupling’).” In the end, these were very similar to the conclusions reached by Professor Abraham Loeb and his colleagues at Starshot. In addition to being the Frank B. Baird, Jr. Professor of Science at Harvard University, Prof. Loeb is also the chairman of the Breakthrough Foundation’s Advisory Board. In a study titled “Stability of a Light Sail Riding on a Laser Beam” (published on Sept. 29th, 2016), they too examined what was necessary to ensure a stable mission. This included the benefits of a conical vs. a spherical sail, and a uniform vs. a Gaussian beam. As Prof. Loeb told Universe Today via email: “We found that a parachute-shaped sail riding on a Gaussian laser beam is unstable… We show in our paper that a sail shaped as a spherical shell (like a large ping-pong ball) can ride in a stable fashion on a laser beam that is shaped like a cylinder (or 3-4 lasers that establish a nearly circular illumination).” As for the recommendations about the StarChip being at a sufficient distance from the LightSail, Prof. Loeb and his colleagues are of a different mind.
“They argue that in case you attach a weight to the sail that is sufficiently well separated from the parachute, you might make it stable,” he said. “Even if this is true, it is unclear that their proposal is useful, because such a configuration is rather complicated to build and launch.” These are just a few of the engineering challenges facing an interstellar mission. Back in September, another study was released that assessed the risk of collisions and how they might affect the Starshot mission. In this case, the researchers suggested that the sail have a layer of shielding to absorb impacts, and that the laser array be used to clear debris in the LightSail’s path. When Milner and the science team behind Starshot first announced their intention to create an interstellar spacecraft (in April 2016), they were met with a great deal of enthusiasm and skepticism. Understandably, many believed that such a mission was too ambitious, due to the challenges involved. But with every challenge that has been addressed, both by the Starshot team and outside researchers, the mission architecture has evolved. At this rate, barring any serious complications, we may be seeing an interstellar mission taking place within a decade or so. And, barring any hiccups in the mission, we could be exploring Alpha Centauri or Proxima b up close within our lifetime! Finding examples of intelligent life other than our own in the Universe is hard work. Between spending decades listening to space for signs of radio traffic – which is what the good people at the SETI Institute have been doing – and waiting for the day when it is possible to send spacecraft to neighboring star systems, there simply haven’t been a lot of options for finding extra-terrestrials. But in recent years, efforts have begun to simplify the search for intelligent life.
Thanks to the efforts of groups like the Breakthrough Foundation, it may be possible in the coming years to send “nanoscraft” on interstellar voyages using laser-driven propulsion. But just as significant is the fact that developments like these may also make it easier for us to detect extra-terrestrials that are trying to find us. Not long ago, Breakthrough Initiatives made headlines when they announced that luminaries like Stephen Hawking and Mark Zuckerberg were backing their plan to send a tiny spacecraft to Alpha Centauri. Known as Breakthrough Starshot, this plan involved a spacecraft the size of a refrigerator magnet being towed by a laser sail, which would be pushed by a ground-based laser array to speeds fast enough to reach Alpha Centauri in about 20 years. In addition to offering a possible interstellar space mission that could reach another star in our lifetime, projects like this have the added benefit of letting us broadcast our presence to the rest of the Universe. Such is the argument put forward by Philip Lubin, a professor at the University of California, Santa Barbara, and the brains behind Starshot. In a paper titled “The Search for Directed Intelligence” – which appeared recently in arXiv and will soon be published in REACH – Reviews in Human Space Exploration – Lubin explains how systems that are becoming technologically feasible on Earth could allow us to search for similar technology being used elsewhere – in this case, by alien civilizations. As Lubin shared with Universe Today via email: “In our SETI paper we examine the implications of a civilization having directed energy systems like we are proposing for both our NASA and Starshot programs. In this sense the NASA (DE-STAR) and Starshot arrays represent what other civilizations may possess.
In another way, the receive mode (Phased Array Telescope) may be useful to search and study nearby exoplanets.” Using these as a template, Lubin believes that other species in the Universe could be using this same kind of directed energy (DE) systems for the same purposes – i.e. propulsion, planetary defense, scanning, power beaming, and communications. And by using a rather modest search strategy, he and colleagues propose observing nearby star and planetary systems to see if there are any signs of civilizations that possess this technology. This could take the form of “spill-over”, where surveys are able to detect errant flashes of energy. Or they could be from an actual beacon, assuming the extra-terrestrials use DE to communicate. As is stated in the paper authored by Lubin and his colleagues: “There are a number of reasons a civilization would use directed energy systems of the type discussed here. If other civilizations have an environment like we do they might use DE systems for applications such as propulsion, planetary defense against “debris” such as asteroids and comets, illumination or scanning systems to survey their local environment, power beaming across large distances among many others. Surveys that are sensitive to these “utilitarian” applications are a natural byproduct of the “spill over” of these uses, though a systematic beacon would be much easier to detect.” According to Lubin, this represents a major departure from what projects like SETI have been doing during the last few decades. These efforts, which can be classified as “passive”, were understandable in the past, owing to our limited means and the challenges in sending out messages ourselves. For one, the distances involved in interstellar communication are incredibly vast. Even using DE, which moves at the speed of light, it would still take a message over 4 years to reach the nearest star, 1000 years to reach the Kepler planets, and 2 million years to the nearest galaxy (Andromeda).
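A minimal sketch of those one-way signal delays (the distances in light-years are my illustrative values, not figures taken from the paper):

```python
# At light speed, a signal covers one light-year per year, so the one-way
# delay in years is numerically the distance in light-years.
def one_way_delay_years(distance_ly):
    return distance_ly

def round_trip_years(distance_ly):
    """Minimum wait for a reply to a message sent at light speed."""
    return 2.0 * distance_ly

# Illustrative distances: Proxima ~4.24 ly, Kepler field ~1e3 ly, Andromeda ~2.5e6 ly.
for name, d in [("Proxima Centauri", 4.24),
                ("typical Kepler planet", 1.0e3),
                ("Andromeda", 2.5e6)]:
    print(f"{name}: {one_way_delay_years(d):,.0f} yr one-way, "
          f"{round_trip_years(d):,.0f} yr round-trip")
```

Even a one-way "hello" to the Kepler field outlives any human institution, which is the paper's point about passive listening versus looking for utilitarian spill-over.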
So aside from the nearest stars, these time scales are far beyond a human lifetime; and by the time the message arrived, far better means of communication would have evolved. Second, there is also the issue of the targets being in motion over the vast timescales involved. All stars have a transverse velocity relative to our line of sight, which means that any star system or planet targeted with a burst of laser communication would have moved by the time the beam arrived. So by adopting a pro-active approach, which involves looking for specific kinds of behavior, we could bolster our efforts to find intelligent life on distant exoplanets. But of course, there are still many challenges that need to be overcome, not the least of which are technical. But more than that, there is also the fact that what we are looking for may not exist. As Lubin and his colleagues state in one section of the paper: “What is an assumption, of course, is that electromagnetic communications has any relevance on times scales that are millions of years and in particular that electromagnetic communications (which includes beacons) should have anything to do with wavelengths near human vision.” In other words, assuming that aliens are using technology similar to our own is potentially anthropocentric. However, when it comes to space exploration and finding other intelligent species, we have to work with what we have and what we know. And as it stands, humanity is the only example of a space-faring civilization known to us. As such, we can hardly be faulted for projecting ourselves out there. Here’s hoping ET is out there, and relies on energy beaming to get things done. And, fingers crossed, here’s hoping they aren’t too shy about being noticed! For generations, human beings have fantasized about the possibility of finding extra-terrestrial life. And with our ongoing research efforts to discover new and exciting extrasolar planets (aka. 
exoplanets) in distant star systems, the possibility of actually visiting one of these worlds has received a real shot in the arm. Unfortunately, given the astronomical distances involved, not to mention the cost of mounting an expedition, doing so presents numerous significant challenges. However, Russian billionaire Yuri Milner and the Breakthrough Foundation – an international organization committed to exploration and scientific research – are determined to mount an interstellar mission to Alpha Centauri, our closest stellar neighbor, in the coming years. With the backing of such big name sponsors as Mark Zuckerberg and Stephen Hawking, his latest initiative (named “Project Starshot”) aims to send a tiny spacecraft to the Alpha Centauri system to search for planets and signs of life. Host: Fraser Cain (@fcain) Special Guest: This week we welcome Stephen Fowler, who is the Creative Director at InfoAge, the organization behind refurbishing the TIROS 1 dish and the Science History Learning Center and Museum at Camp Evans, Wall, NJ.
There has been some recent excitement about the claimed identification of a 400-solar-mass black hole. A team of scientists have recently published a letter in the journal Nature where they show how X-ray measurements of a source in the nearby galaxy M82 can be interpreted as originating from a black hole with a mass of around 400 times the mass of the Sun—from now on I’ll use M⊙ as shorthand for the mass of the Sun (one solar mass). This particular X-ray source is peculiarly bright and has long been suspected to potentially be a black hole with a mass of around 10^2 M⊙ to 10^4 M⊙. If the result is confirmed, then it is the first definite detection of an intermediate-mass black hole, or IMBH for short, but why is this exciting? Mass of black holes In principle, a black hole can have any mass. To form a black hole you just need to squeeze mass down into a small enough space. For something the mass of the Earth, you need to squeeze down to a radius of about 9 mm and for something about the mass of the Sun, you need to squeeze to a radius of about 3 km. Black holes are pretty small! Most of the time, things don’t collapse to form black holes because the materials they are made of are more than strong enough to counter-balance their own gravity. Stellar-mass black holes Only very massive things, where gravitational forces are immense, collapse down to black holes. This happens when the most massive stars reach the end of their lifetimes. Stars are kept puffy because they are hot. They are made of plasma where all their constituent particles are happily whizzing around and bouncing into each other. This can continue to happen while the star is undergoing nuclear fusion which provides the energy to keep things hot. At some point this fuel runs out, and then the core of the star collapses. What happens next depends on the mass of the core. The least massive stars (like our own Sun) will collapse down to become white dwarfs. In white dwarfs, the force of gravity is balanced by electrons.
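Those radii (about 9 mm for an Earth mass, about 3 km for a solar mass) come straight from the Schwarzschild formula; here is a quick sketch, using standard values for the constants and masses:

```python
# Schwarzschild radius r_s = 2GM/c^2: squeeze a mass inside this radius
# and nothing, not even light, can escape -- it is a black hole.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s

def schwarzschild_radius_m(mass_kg):
    return 2.0 * G * mass_kg / C**2

M_EARTH = 5.972e24   # kg
M_SUN = 1.989e30     # kg

print(schwarzschild_radius_m(M_EARTH))  # ~9 mm (8.9e-3 m)
print(schwarzschild_radius_m(M_SUN))    # ~3 km (2.95e3 m)
```

Since r_s scales linearly with mass, the 400 M⊙ black hole discussed here would have a radius of only about 1,200 km.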
Electrons are rather anti-social and dislike sharing the same space with each other (a concept known as the Pauli exclusion principle, which is a consequence of their exchange symmetry), hence they put up a bit of a fight when squeezed together. The electrons can balance the gravitational force for masses up to about 1.4 M⊙, known as the Chandrasekhar mass. After that they get squeezed together with protons and we are left with a neutron star. Neutron stars are much like giant atomic nuclei. The force of gravity is now balanced by the neutrons who, like electrons, don’t like to share space, but are less easy to bully than the electrons. The maximum mass of a neutron star is not exactly known, but we think it’s somewhere between 2 M⊙ and 3 M⊙. After this, nothing can resist gravity and you end up with a black hole of a few times the mass of the Sun. Collapsing stars produce the imaginatively named stellar-mass black holes, as they are about the same mass as stars. Stars lose a lot of mass during their lifetime, so the mass of a newly born black hole is less than the original mass of the star that formed it. The maximum mass of stellar-mass black holes is determined by the maximum size of stars. We have good evidence for stellar-mass black holes, for example from looking at X-ray binaries, where we see a hot disc of material swirling around the black hole. Massive black holes We also have evidence for another class of black holes: massive black holes, MBHs to their friends, or, if trying to sound extra cool, supermassive black holes. These may be 10^5 M⊙ to 10^10 M⊙. The strongest evidence comes from our own galaxy, where we can see stars in the centre of the galaxy orbiting something so small and heavy it can only be a black hole. We think that there is an MBH at the centre of pretty much every galaxy, like there’s a hazelnut at the centre of a Ferrero Rocher (in this analogy, I guess the Nutella could be delicious dark matter).
From the masses we’ve measured, the properties of these black holes are correlated with the properties of their surrounding galaxies: bigger galaxies have bigger MBHs. The most famous of these correlations is the M–sigma relation, between the mass of the black hole (M) and the velocity dispersion, the range of orbital speeds, of stars surrounding it (the Greek letter sigma, σ). These correlations tell us that the evolution of the galaxy and its central black hole are linked somehow; this could be just because of their shared history or through some extra feedback too. MBHs can grow by accreting matter (swallowing up clouds of gas or stars that stray too close) or by merging with other MBHs (we know galaxies merge). The rather embarrassing problem, however, is that we don’t know what the MBHs have grown from. There are really huge MBHs already present in the early Universe (they power quasars), so MBHs must be able to grow quickly. Did they grow from regular stellar-mass black holes or some form of super black hole that formed from a giant star that doesn’t exist today? Did lots of stellar-mass black holes collide to form a seed or did material just accrete quickly? Did the initial black holes come from somewhere else other than stars, perhaps they are leftovers from the Big Bang? We don’t have the data to tell where MBHs came from yet (gravitational waves could be useful for this). Intermediate-mass black holes However MBHs grew, it is generally agreed that we should be able to find some intermediate-mass black holes: black holes which haven’t grown enough to become MBHs. These might be found in dwarf galaxies, or maybe in globular clusters (giant collections of stars that formed together), perhaps even in the centre of galaxies orbiting an MBH. Finding some IMBHs will hopefully tell us about how MBHs formed (and so, possibly about how galaxies formed too). IMBHs have proved elusive. They are difficult to spot compared to their bigger brothers and sisters.
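The M–sigma relation mentioned above is a power law; schematically (the exponent varies between published fits, so the value here is only indicative):

```latex
% M-sigma relation: central black hole mass grows steeply with the
% velocity dispersion of the stars in the host galaxy's bulge.
M_{\mathrm{BH}} \;\propto\; \sigma^{\beta}, \qquad \beta \approx 4\text{--}5
```

The steepness of the power law is why even a modest difference in a galaxy's velocity dispersion corresponds to a large difference in the mass of its central black hole.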
Not finding any might mean we’d need to rethink our ideas of how MBHs formed, and try to find a way for them to either be born about a million times the mass of the Sun, or be guaranteed to grow that big. The finding of the first IMBH tells us that things are more like common sense would dictate: black holes can come in the expected range of masses (phew!). We now need to identify some more to learn about their properties as a population. In conclusion, black holes can come in a range of masses. We know about the smaller stellar-mass ones and the bigger massive black holes. We suspect that the bigger ones grow from smaller ones, and we now have some evidence for the existence of the hypothesised intermediate-mass black holes. Whatever their size though, black holes are awesome, and they shouldn’t worry about their weight.
I'm not sure this would work. The G and B stars orbit each other and each have their own planets. That works as long as the planets are much closer to each other than they are to the other star. Next, there are circumbinary gas giants. This is reasonable as long as the gas giants' orbits are much larger than the mutual orbit of the G star and B star. To have an O star orbiting in this system, its orbit would again have to be much bigger than the orbit of the circumbinary gas giants, otherwise those planets' orbits would not be stable. Finally, to have a circumtrinary comet belt, those comets' orbits would again have to be much larger than the mutual orbit of the O + G/B stars. I think this is all possible as long as the system is very very big. The trick is, every time I say an orbit is "much larger" than another, think 10 times wider. In your setup there are 5 levels of hierarchy: 1) G star planets, 2) G star - B star, 3) G+B star - gas giants, 4) O star - GB+gas giant, 5) comet belt around everything else. That means your system spans a factor of about 10^5 = 100,000 in size scales. If you add in the fact that there are multiple planets per system, which must be spread far-enough out to be stable, you're probably at about a factor of 1,000,000. In the local parts of our Galaxy, a system can't be larger than about 100,000 AU without getting torn apart by the Galactic gravitational field ("galactic tides" is the technical term). In the denser environments where stars form, that limit is somewhere between 100 and 10,000 AU depending on the environment. Where O stars form it's more like 100-1000 AU since those form in denser star-forming clouds. Your system is basically at the optimistic (local Galaxy) limit and implausible in the more constraining (star-forming region) limit. If your smallest orbit is 0.1 AU then your largest one is 100,000 AU.
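The back-of-the-envelope scaling above can be written out directly (the factor of 10 per level is this answer's rule of thumb, not a precise stability criterion):

```python
# Each level of hierarchy needs an orbit roughly 10x wider than the level
# nested inside it for the orbits to stay stable.
def size_ratio(levels, per_level=10.0):
    """Ratio of outermost to innermost orbit for a nested hierarchy."""
    return per_level ** levels

hierarchy_only = size_ratio(5)        # 5 levels -> factor of ~100,000
with_planet_spacing = size_ratio(6)   # extra ~10x for multiple planets per level

print(hierarchy_only, with_planet_spacing)
print(0.1 * with_planet_spacing)      # innermost 0.1 AU -> outermost ~100,000 AU
```

That outermost scale is exactly the ~100,000 AU galactic-tide limit, which is why the system only works in the most optimistic environment.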
I would recommend reducing the number of levels you have (where you have to make orbits bigger) and packing the levels you have with more planets/asteroids/comets. For example, here is a system with 8 stars but only three levels of hierarchy: And you can substitute one of the levels of stellar binarity for planets: Those images are from a very packed system I created a few months ago: https://planetplanet.net/2016/04/13/building-the-ultimate-solar-system-part-6-multiple-star-systems/ Here is another example with one planet and 5 Suns:
On July 14th, 2015, the New Horizons mission made history by conducting the first flyby of Pluto. This represented the culmination of a nine year journey, which began on January 19th, 2006 – when the spacecraft was launched from the Cape Canaveral Air Force Station. And before the mission is complete, NASA hopes to send the spacecraft to investigate objects in the Kuiper Belt as well. To mark the 11th anniversary of the spacecraft’s launch, members of the New Horizons team took part in a panel discussion hosted by the Johns Hopkins University Applied Physics Laboratory (JHUAPL) located in Laurel, Maryland. The event was broadcast on Facebook Live, and consisted of team members speaking about the highlights of the mission and what lies ahead for the NASA spacecraft. The live panel discussion took place on Thursday, Jan. 19th at 4 p.m. EST, and included Jim Green and Alan Stern – the director of the Planetary Science Division at NASA and the principal investigator (PI) of the New Horizons mission, respectively. Also in attendance were Glen Fountain and Helene Winters, New Horizons’ project managers; and Kelsi Singer, the New Horizons co-investigator. In the course of the event, the panel members responded to questions and shared stories about the mission’s greatest accomplishments. Among them were the many, many high-resolution photographs taken by the spacecraft’s Ralph and Long Range Reconnaissance Imager (LORRI) cameras. In addition to providing detailed images of Pluto’s surface features, they also allowed for the creation of the very first detailed map of Pluto. Though Pluto is not officially designated as a planet anymore – ever since the XXVIth General Assembly of the International Astronomical Union, where Pluto was designated as a “dwarf planet” – many members of the team still consider it to be the ninth planet of the Solar System. Because of this, New Horizons’ historic flyby was of particular significance.
As Principal Investigator Alan Stern – from the Southwest Research Institute (SwRI) – explained in an interview with Inverse, the first phase of humanity’s investigation of the Solar System is now complete. “What we did was we provided the capstone to the initial exploration of the planets,” he said. “All nine have been explored with New Horizons finishing that task.” Other significant discoveries made by the New Horizons mission include Pluto’s famous heart-shaped terrain – aka. Tombaugh Regio, named in honor of Pluto’s discoverer, Clyde Tombaugh. The western lobe of the heart, Sputnik Planum, turned out to be a young, icy plain that contains water ice flows adrift on a “sea” of frozen nitrogen. And then there was the discovery of the large mountain and possible cryovolcano located at the tip of the plain. The mission also revealed further evidence of geological activity and cryovolcanism, the presence of hydrocarbon clouds on Pluto, and conducted the very first measurements of how Pluto interacts with the solar wind. All told, over 50 gigabits of data were collected by New Horizons during its encounter and flyby with Pluto. And the detailed map which resulted from it did a good job of capturing all this complexity and diversity. As Stern explained: “That really blew away our expectations. We did not think that a planet the size of North America could be as complex as Mars or even Earth. It’s just tons of eye candy. This color map is the highest resolution we will see until another spacecraft goes back to Pluto.” After making its historic flyby of Pluto, the New Horizons team requested that the mission receive an extension to 2021 so that it could explore Kuiper Belt Objects (KBOs). This extension was granted, and for the first part of the Kuiper Belt Extended Mission (KEM), the spacecraft will perform a close flyby of the object known as 2014 MU69.
This remote KBO – which is estimated to be between 25 – 45 km (16-28 mi) in diameter – was one of two objects identified as potential targets for research, and the one recommended by the New Horizons team. The flyby, which is expected to take place in January of 2019, will involve the spacecraft taking a series of photographs on approach, as well as some pictures of the object’s surface once it gets closer. Before the extension ends in 2021, it will continue to send back information on the gas, dust and plasma conditions in the Kuiper Belt. Clearly, we are not finished with the New Horizons mission, and it is not finished with us! To check out footage from the live-streamed event, head on over to the New Horizons Facebook page. Further Reading: NASA
A joint team from EAPS and Massachusetts General Hospital is designing ways to detect and sequence DNA on the neighboring worlds of our own solar system. Mars, Europa, and Enceladus each present a direct opportunity to find life beyond Earth, since all demonstrate conditions that are (or were) conducive to habitability—at least in an Earth-centric sense, that is. Where there is ample evidence that Mars once held liquid water on its surface, Europa and Enceladus (moons of Jupiter and Saturn, respectively) are both now known to have subsurface liquid water oceans. While NASA’s Mars Curiosity Rover program is not directly tasked with looking for life, it does seek to assess the favorability of environmental conditions to support it; for future missions to Europa or Enceladus—and perhaps other Jovian or Saturnian moons like Io or Titan—a direct search for evidence of life is high on the agenda. But looking for life is a tricky business. Overcoming the hazards associated with the extreme environments of space, contamination, and false positives propels scientists and engineers to the very cutting edge of what is currently possible. Mars Reconnaissance Orbiter image: NASA/JPL Christopher Carr, a longtime research scientist with MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), is also a Research Fellow with the Department of Molecular Biology at Massachusetts General Hospital (MGH). With a passion for extra-terrestrial exploration, and a keen interest in the potential for humanity to begin colonizing another world, he serves as Science Principal Investigator (PI) for the Search for Extra-Terrestrial Genomes (SETG) instrument project. Led by co-PIs Maria Zuber, the E.A.
Griswold Professor of Geophysics at MIT and fellow member of EAPS, and Professor Gary Ruvkun of MGH, the interdisciplinary team behind SETG brings together researchers and scientists from academia and industry, with support from NASA, to develop an instrument that can isolate, detect, and classify any extant and preserved DNA or RNA-based organism. In the context of searching for life beyond Earth, Carr says, “With the advent and evolution of robust sequencing tools over the past couple of decades, the time is now ripe to take that technology off-planet, leveraging recent developments in in-situ biological testing, including portable DNA/RNA testing, to create an instrument for use by robotic missions.” In particular, SETG could be used to test the hypothesis that life on Mars, if it exists, may share a common ancestor with life on Earth. It is a widely accepted theory that the synthesis of complex organics, including nucleobases and ribose precursors, occurred early in the history of the solar system within the solar nebula. Scientists hypothesize comets and meteorites would have delivered these organics to multiple potentially habitable zones (among them Earth, Mars, Enceladus, and Europa) during the so-called Late Heavy Bombardment period, perhaps biasing the evolution of life towards utilization of similar informational polymers wherever life might have taken hold. Meteoritic exchange might also have produced shared ancestry, most plausibly for Earth and Mars. Researchers studying the origins of life have also identified common physicochemical scenarios that could lead to the earliest life forms using similar building blocks. “I think our best bet is to access the subsurface but this is going to be very hard,” says Carr. “We need to drill or otherwise access regions below the reach of space radiation that could destroy organic material. Perhaps we can seek out fresh impact craters. Such locations can expose material that wasn’t radiation-processed.
A fresh impact crater might also connect to a deeper subsurface via cave networks or lava tubes.” Field images courtesy: C. Carr. As for the “ocean worlds,” any search for signs of life on Enceladus would likely involve exploring its southern polar region either directly or by sampling plumes that jet material from the subsurface ocean into space. On Europa, it would likely involve seeking out “chaos regions”—spots conducive to exchange between the surface ice and interior ocean. Identifying life in these regions may also require improved methods to detect and sequence non-standard polymers, i.e. alternatives to DNA or RNA. Exploring these environments naturally presents some serious engineering challenges. For starters, it would require painstaking care to prevent contamination. Such protections would also be necessary to avoid false positives. While this is critical for many instruments, a sequencing-based approach can identify any Earth contamination via sequence similarity. And then there are the difficulties that operating a robotic mission in an extreme environment poses. On Mars, solar radiation and dust storms will always be an issue. But on Europa, there is added danger posed by Jupiter’s intense magnetic environment and resulting radiation. Exploring water plumes on Enceladus enters yet less-charted territory. The ability to conduct in-situ searches for life on other solar system bodies could help answer the question: is carbon-based life universal? Finding examples that come from environments other than Earth should also help prepare us for the kind of “close encounters” we can only assume the future will bring. Read more about the research: Story Image: EAPS scientist Chris Carr and his group use both lab and fieldwork to develop and test techniques and instruments which could one day be used in space to detect life on other planetary bodies in our solar system.
In the United States, mission selection has always been led by NASA. The methods of selection and naming have evolved over time, from a political vocation for the earliest missions (the will to be first) to missions with a scientific purpose (answering questions posed by researchers). Before the 1980s, missions were part of a program based on their target: Explorer and Vanguard (Earth orbit); Ranger and Surveyor (the Moon); Pioneer (the solar system in general; missions mentioned here: 10, 11, 12 = Pioneer Venus Orbiter, 13 = Pioneer Venus Multiprobe). Missions were named incrementally within their design series (Mariner 1, 2, 3…) and aimed to go ever further (flyby, orbiter, lander). From the 1980s to the 1990s, budgets were allocated to single or double missions with a specific objective, which could no longer fit into a program. They therefore often took an individual name, such as Magellan or Galileo. Since the 1990s, individual missions have been federated into budget categories. Discovery: the cheapest missions (less than $450 million); examples include Stardust, MESSENGER, and Mars InSight. Mars Scout: a Discovery sub-program of low-cost missions (less than $450 million) aimed exclusively at Mars, such as Phoenix and MAVEN. Lunar robotic precursor: robotic missions to prepare for the crewed Constellation lunar program (cancelled in 2010), including LRO, LCROSS, and LADEE. New Frontiers: medium-cost missions ($450 million to $1 billion), such as New Horizons and Juno. Flagship: the most expensive missions (over $1 billion), such as Curiosity and Cassini. Depending on the budget, a number of missions in each category are scheduled. Teams of researchers present their project to a committee, which selects missions based on the scientific priorities set by the decadal survey. Once selected, the missions are developed and take on an individual name. At the beginning of the space era, the USSR faced a reliability problem with its launchers and probes.
It therefore mass-produced and launched probes to compensate for this unreliability. In order to hide the many failures, missions were given a name only after launch, according to the trajectory followed. For example, a probe destined for Venus was called Kosmos in case of failure and Venera if successful. Mission numbers were then assigned incrementally. Sputnik: the name simply means satellite, and it covered all probes launched into low orbit at the beginning of the space era. Kosmos: the generic name given to all satellites that stayed in orbit or failed at launch. There are more than 2,500 of them, ranging from military satellites to non-operational space stations and space probes that failed to leave Earth orbit. Luna or Lunokhod: scientific exploration of the Moon; Lunokhod designates the rover missions. Zond: preparation for crewed lunar missions, though the name was also used to hide the failure of probes launched into interplanetary orbit (missions mentioned here: 1, 2). From the 1980s, the major Luna, Venera, Mars, and Zond programs came to an end. The number of missions decreased and shifted towards more varied targets. Missions took the name of their target (VEGA, PHOBOS) and sometimes the launch year (Mars 96). Rest of the world: the other space agencies of the world can launch missions but cannot organize a large exploration program. Selection and naming happen mission by mission, as at NASA in the 1980s and 1990s. Only the European ESA is currently trying to organize budget categories in the image of NASA’s.
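The NASA budget categories described earlier can be sketched as a small lookup table (the cost caps are the figures quoted in this article; the classifier helper and example lists are illustrative, not official):

```python
# NASA mission budget classes as described above. Caps in US dollars;
# Flagship has no cap (anything over $1 billion).
MISSION_CLASSES = {
    "Discovery":     {"cap_usd": 450e6, "examples": ["Stardust", "MESSENGER", "InSight"]},
    "Mars Scout":    {"cap_usd": 450e6, "examples": ["Phoenix", "MAVEN"]},
    "New Frontiers": {"cap_usd": 1e9,   "examples": ["New Horizons", "Juno"]},
    "Flagship":      {"cap_usd": None,  "examples": ["Curiosity", "Cassini"]},
}

def classify(cost_usd):
    """Cheapest class whose cost cap covers the mission (Mars Scout is a
    Discovery sub-program, so it is folded into Discovery here)."""
    if cost_usd <= 450e6:
        return "Discovery"
    if cost_usd <= 1e9:
        return "New Frontiers"
    return "Flagship"
```

The point of the table is that the class is fixed before the mission exists: a proposal competes within a budget envelope, and only the winning mission acquires an individual name.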
Asteroid Sample Headed for Earth Earth as seen on May 18 during Hayabusa flyby. Banner shows mosaic of images taken from Hayabusa of the Earth and moon (not to perspective size). Credit: JAXA The space and astronomy worlds have June 13 circled on the calendar. That’s when the Japan Aerospace Exploration Agency (JAXA) expects the sample return capsule of the agency’s technology demonstrator spacecraft, Hayabusa, to boomerang back to Earth. The capsule, along with its mother ship, visited a near-Earth asteroid, Itokawa, five years ago and has logged about 2 billion kilometers (1.25 billion miles) since its launch in May 2003. With the return of the Hayabusa capsule, targeted for June 13 at Australia’s remote Woomera Test Range in South Australia, JAXA will have concluded a remarkable mission of exploration — one in which NASA scientists and engineers are playing a contributing role. "Hayabusa will be the first space mission to have made physical contact with an asteroid and returned to Earth," said Tommy Thompson, NASA’s Hayabusa project manager from the Jet Propulsion Laboratory in Pasadena, Calif. "The mission and its team have faced and overcome several challenges over the past seven years. This round-trip journey is a significant space achievement and one which NASA is proud to be part of." Launched May 9, 2003, from the Kagoshima Space Center, Uchinoura, Japan, Hayabusa was designed as a flying testbed. Its mission: to research several new engineering technologies necessary for returning planetary samples to Earth for further study. With Hayabusa, JAXA scientists and engineers hoped to obtain detailed information on electrical propulsion and autonomous navigation, as well as an asteroid sampler and sample reentry capsule. The 510-kilogram (950-pound) Hayabusa spacecraft rendezvoused with asteroid Itokawa in September 2005. 
Over the next two-and-a-half months, the spacecraft made up-close and personal scientific observations of the asteroid’s shape, terrain, surface altitude distribution, mineral composition, gravity, and the way it reflected the Sun’s rays. On Nov. 25 of that year, Hayabusa briefly touched down on the surface of Itokawa. That was only the second time in history a spacecraft descended to the surface of an asteroid (NASA’s Near Earth Asteroid Rendezvous-Shoemaker spacecraft landed on asteroid Eros on Feb. 12, 2001). Hayabusa marked the first attempt to sample asteroid surface material. Artist’s concept of the "HAYABUSA" landing on the asteroid "ITOKAWA". Credit: Akihiro Ikeshita The spacecraft departed Itokawa in January 2007. The road home for the technology demonstrator has been a long one, with several anomalies encountered along the way. But now the spacecraft is so close to its home planet, and the Australian government, working closely with JAXA, has cleared the mission for landing. A team of Japanese and American navigators is guiding Hayabusa on the final leg of its journey. Together, they calculate the final trajectory correction maneuvers Hayabusa’s ion propulsion system must perform for a successful homecoming. "We have been collaborating with the JAXA navigators since the launch of the mission," said Shyam Bhaskaran, a member of JPL’s Hayabusa navigation team. "We worked closely with them during the descents to the asteroid, and now are working together to guide the spacecraft back home." To obtain the data they need, the navigation team frequently calls upon JAXA’s tracking stations in Japan, as well as those of NASA’s Deep Space Network, which has antennas at Goldstone, in California’s Mojave Desert; near Madrid, Spain; and near Canberra, Australia. In addition, the stations provide mission planners with near-continuous communications with the spacecraft to keep them informed on spacecraft health. 
"Our task is to help advise JAXA on how to best get a spacecraft traveling at 12.2 kilometers per second (27,290 miles per hour) to intersect a very specific target point 200 kilometers (120 miles) above the Earth," said Bhaskaran. "Once that is done, and the heat shield of the sample return capsule starts glowing from atmospheric friction, our job is done." A global view of the Asteroid Itokawa, with white box showing region where the Hayabusa spacecraft landed to collect samples. Credit: JAXA While atmospheric entry may be the end of the line for the team that has plotted the spacecraft’s every move for the past 2 billion kilometers, NASA’s involvement continues for the craft’s final 200 kilometers (120 miles), to the surface of the Australian Outback. A joint Japanese-U.S. team operating on the ground and in the air will monitor this most critical event to help retrieve the capsule and heat shield. "This is the second highest velocity re-entry of a capsule in history," said Peter Jenniskens, a SETI Institute scientist at NASA’s Ames Research Center in Moffett Field, Calif. "This extreme entry speed will result in high heating rates and thermal loads to the capsule’s heat shield. Such manmade objects entering with interplanetary speed do not happen every day, and we hope to get a ringside seat to this one." Jenniskens is leading an international team as it monitor the final plunge of Hayabusa to Earth using NASA’s DC-8 airborne laboratory, which is managed and piloted by a crew from NASA’s Dryden Flight Research Center, Edwards, Calif. The DC-8 flies above most clouds, allowing an unfettered line of sight for its instrument suite measuring the shock-heated gas and capsule surface radiation emitted by the re-entry fireball. The data acquired by the high-flying team will help evaluate how thermal protection systems behave during these super-speedy spacecraft re-entries. 
This, in turn, will help engineers understand what a sample return capsule returning from Mars would undergo. The Hayabusa sample return capsule re-entry observation will be similar to earlier observations by the DC-8 team of NASA’s Stardust capsule return, and the re-entry of the European Space Agency’s ATV-1 ("Jules Verne") automated transfer vehicle.

[Image: Hayabusa is anticipated to make its return to Earth on June 13. Credit: JAXA]

Soon after the sample return capsule touches down on the ground, Hayabusa team members will retrieve it and transport it to JAXA’s sample curatorial facility in Sagamihara, Japan. There, Japanese astromaterials scientists, assisted by two scientists from NASA and one from Australia, will perform a preliminary cataloging and analysis of the capsule’s contents. Studying the samples from an asteroid will help astrobiologists understand the role of these objects in delivering materials important for the origin of life to the early Earth. "This preliminary analysis follows the basic protocols used for Apollo moon rocks, Genesis and Stardust samples," said Mike Zolensky, a scientist at NASA’s Astromaterials Research and Exploration Science Directorate at the Johnson Space Center, Houston. "If this capsule contains samples from the asteroid, we expect it will take a year to determine the primary characteristics of the samples, and learn how to best handle them. Then the samples will be distributed to scientists worldwide for more detailed analysis." "The Japanese and NASA engineers and scientists involved in Hayabusa’s return from asteroid Itokawa are proud of their collaboration and their joint accomplishments," said Thompson. "Certainly, any samples retrieved from Itokawa will provide exciting new insights to understanding the early history of the solar system. This will be the icing on the cake, as this mission has already taught us so much."

For more information about the Hayabusa mission, visit: http://www.isas.jaxa.jp/e/enterp/missions/hayabusa/index.shtml

JUNE 14 UPDATE

[Image: The asteroid-sample return capsule and parachute in the Woomera Prohibited Area of the Australian Outback. Credit: JAXA]

The sample return capsule was ejected from the Hayabusa spacecraft three hours before reaching Earth. The sample canister flew through Earth’s atmosphere, while the spacecraft broke up in spectacular fashion over the Australian Outback. The capsule lay in the Woomera Prohibited Area until morning while Aboriginal elders determined that it had not landed in any indigenous sacred sites. Scientists were then permitted to collect the capsule. The director of the Woomera test range, Doug Gerrie, said the probe had completed a textbook landing in the South Australian desert. “They landed it exactly where they nominated they would.” The capsule may contain the first piece of asteroid ever brought to Earth. While the spacecraft flew near the asteroid and sent back data, scientists and engineers aren’t sure whether it succeeded in obtaining samples. It appears that Hayabusa landed briefly on the asteroid, but it is not certain that the “bullets” meant to stir up dust for the container to capture actually fired. The capsule will remain sealed until it arrives at the JAXA facility near Tokyo, and may remain unopened for weeks as it undergoes testing.
Neptune is the eighth planet from the Sun, making it the most distant in the solar system. This gas giant planet may have formed much closer to the Sun in early solar system history before migrating to its present position. Triton, Neptune's largest moon by far, is expected to be torn apart in approximately 3.6 billion years because of its tidal acceleration. Due to its blue coloration, Neptune was named after the Roman god of the sea. It takes Neptune 164.8 Earth years to orbit the Sun. On 11 July 2011, Neptune completed its first full orbit since its discovery in 1846. The wind on Neptune can reach speeds of 1,240 miles per hour. This is equal to three times the speed of Earth's worst hurricanes. The surface temperature on Neptune is -201 degrees Celsius. Even though Neptune has a greater mass than Uranus, it has a smaller diameter. The Voyager 2 spacecraft discovered the Great Dark Spot on Neptune in 1989. This spot was actually a storm system. In 1994 a new storm system was observed by the Hubble Space Telescope. Neptune spins on its axis very rapidly. Its equatorial clouds take 18 hours to make one rotation. This is because Neptune is not a solid body. Neptune is 3.9 times bigger than Earth and has 17 times as much mass. Pluto is the farthest planet from the sun (even though it is not technically considered a planet anymore), but for 20 years, beginning in 1979, it actually moved closer to the sun than Neptune because of its orbit. Neptune has 14 known moons, the most notable one being Triton. Neptune is a ball of gas and ice, probably with a rocky core. There’s no way you could actually stand on the surface of Neptune without just sinking in. However, if you could stand on the surface of Neptune, you would notice something amazing. The force of gravity pulling you down is almost exactly the same as the force of gravity you feel walking here on Earth. The gravity of Neptune is only 17% stronger than Earth gravity.
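The near-Earth-gravity claim can be sanity-checked from the mass and size ratios the article itself quotes, since surface gravity scales as M/R². A minimal sketch, assuming a radius ratio of roughly 3.9 as stated in the text; the result comes out close to, though not exactly, the 17% figure quoted, since the answer depends on exactly which radius values are used:

```python
# Back-of-envelope check of Neptune's surface gravity relative to Earth's.
# Surface gravity scales as mass / radius**2.
mass_ratio = 17.15    # Neptune mass / Earth mass (quoted in the text)
radius_ratio = 3.883  # Neptune radius / Earth radius (approx., my assumption)

gravity_ratio = mass_ratio / radius_ratio**2
print(f"Neptune surface gravity ~ {gravity_ratio:.2f} g")  # ~1.14 g
```

The point survives the rounding: despite having 17 times Earth's mass, Neptune's much larger radius leaves its surface gravity only slightly above 1 g.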
Although Neptune has rings, they are incomplete and as such are considered to be arcs. The atmosphere of Neptune is made of hydrogen and helium, with some methane. The methane absorbs red light, which makes the planet appear a lovely blue. High, thin clouds drift in the upper atmosphere. Neptune has dark spots similar to the Great Red Spot on Jupiter. These are areas of high atmospheric pressure which force clouds of methane gas high up into the atmosphere, appearing like cirrus (thin, wispy) clouds on Earth. However, these spots disappear and reappear on different parts of the planet, unlike Jupiter’s spot. Neptune's moon Triton was discovered by William Lassell only 17 days following Neptune's discovery. Neptune's equatorial circumference is 155,600 km. One of the largest storms ever seen was recorded in 1989. It was called the Great Dark Spot. It lasted about five years. Neptune has 14 moons. The most interesting moon is Triton, a frozen world that is spewing nitrogen ice and dust particles out from below its surface. It was likely captured by the gravitational pull of Neptune. It is probably the coldest world in the solar system. Neptune's mass is about 1.0241 × 10^26 kg, which is equal to 17.15 times the mass of Earth. Triton is slowly getting closer to Neptune. Eventually, it will get so close that it may get torn apart by Neptune’s gravity and possibly form rings more spectacular than Saturn’s. Neptune has an average surface temperature of -214°C (-353°F). The atmosphere on Neptune is made up of helium, methane and hydrogen. The coldest temperatures measured in the Solar System, -230°C (-382°F), have been recorded on Neptune’s moon, Triton. Neptune has its own heat source, which is a good thing because it only receives 1/900 of the sun's energy that the Earth receives. In Neptune’s atmosphere, there is a large white cloud that moves around rather quickly.
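The claim above that Triton may eventually be torn apart can be illustrated with a rough Roche-limit estimate: inside this distance, tidal forces exceed a fluid moon's self-gravity. The densities and orbital radius below are approximate published values, my assumptions rather than figures from this article:

```python
# Rough fluid-body Roche limit for Triton around Neptune:
#   d ~ 2.44 * R_planet * (rho_planet / rho_moon) ** (1/3)
R_NEPTUNE_KM = 24622      # Neptune mean radius (approx.)
RHO_NEPTUNE = 1638.0      # Neptune bulk density, kg/m^3 (approx.)
RHO_TRITON = 2061.0       # Triton bulk density, kg/m^3 (approx.)
TRITON_ORBIT_KM = 354759  # Triton's current orbital radius (approx.)

roche_km = 2.44 * R_NEPTUNE_KM * (RHO_NEPTUNE / RHO_TRITON) ** (1 / 3)
print(f"Roche limit ~ {roche_km:,.0f} km; Triton now orbits at {TRITON_ORBIT_KM:,} km")
```

Triton currently orbits several times farther out than this limit, which is why its tidal spiral inward takes billions of years to become destructive.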
The “scooting” of this cloud around the atmosphere has led it to be named “Scooter.” Neptune is the farthest planet from the sun, and it takes more than 164 Earth years to orbit the sun. Because of the methane gas in Neptune's atmosphere, the planet appears to be blue. This occurs partly because of the ability of atmospheric methane gas to absorb red light. Only one spacecraft has flown by Neptune. In 1989, the Voyager 2 spacecraft swept past the planet. It returned the first close-up images of the Neptune system. The NASA/ESA Hubble Space Telescope has also studied this planet, as have a number of ground-based telescopes. The stormiest planet in our solar system is Neptune. Neptune has a mass 17 times that of Earth, but only 1/19th that of Jupiter. Although Galileo recorded Neptune in his drawings in 1612, he mistook it for a fixed star, which is why he is not credited with Neptune's discovery. Some of the clouds on Neptune have such a high altitude that they cast shadows on the lower altitude clouds. It is not possible to see Neptune with the naked eye. If you try to find it with very strong binoculars or a telescope, you will see a small blue disk that looks very similar to Uranus.
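The 164.8-year orbital period quoted above is consistent with Kepler's third law, which for bodies orbiting the Sun reduces to T² = a³ with T in years and a in astronomical units. A quick check, assuming Neptune's semi-major axis of about 30.07 AU (a value not given in the article):

```python
# Kepler's third law for solar orbits: T (years) = a (AU) ** 1.5
a_neptune_au = 30.07  # Neptune's semi-major axis, approx. (my assumption)

period_years = a_neptune_au ** 1.5
print(f"Predicted period ~ {period_years:.1f} years")  # ~164.9, cf. 164.8 in the text
```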
CONTACT: Peter Timbie, (608) 890-2002, [email protected] DOWNLOAD IMAGE: https://uwmadison.box.com/fast-radio-bursts MADISON – With the help of the world’s largest steerable radio telescope, a team of researchers that includes a University of Wisconsin-Madison physicist has produced the first detailed portrait of a Fast Radio Burst – a brief but highly energetic pulse of radio waves from unknown sources in the distant universe. First detected about a decade ago, Fast Radio Bursts pack more energy than our sun emits over hundreds of thousands of years. So far, astrophysicists have conclusively detected only 15 such events. What causes the bursts is a mystery. But now, with the help of hundreds of hours of archived data from the National Science Foundation’s Green Bank Telescope, researchers have pieced together the first detailed picture of a Fast Radio Burst, indicating it originated inside a highly magnetized region of space dense with matter, such as a supernova remnant or the energetic environment of a stellar nursery. The new study is reported today (Dec. 2, 2015) in the journal Nature. “Nobody really knows what these things are,” explains Peter Timbie, a University of Wisconsin-Madison professor of physics and a co-author of the new report. According to Timbie, whose archived data from the Green Bank Telescope was used to help inform the study, astrophysicists now think that Fast Radio Bursts occur far more frequently than the scant evidence of their existence suggests, with detectable events possibly occurring thousands of times a day. The reason they weren’t detected in the volumes of data captured each day by the world’s radio telescopes is that there was no specific algorithm for separating these events from the many other types of phenomena radio astronomers are looking for.
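One reason such bursts can hide in archived data is radio dispersion: free electrons along the line of sight delay lower frequencies more, smearing the pulse across the observing band. A hedged sketch of the standard delay formula; the dispersion measure for FRB 110523 and the Green Bank band edges used here are approximate values I am assuming, not figures from this release:

```python
# Dispersion delay across a radio band:
#   dt (ms) ~ 4.149 * DM * (f_lo**-2 - f_hi**-2),
# with DM in pc cm^-3 and frequencies in GHz.
DM = 623.0             # dispersion measure for FRB 110523, approx.
f_lo, f_hi = 0.7, 0.9  # GHz, roughly the Green Bank band used

delay_ms = 4.149 * DM * (f_lo**-2 - f_hi**-2)
print(f"Arrival-time sweep across the band ~ {delay_ms / 1000:.1f} s")
```

A millisecond burst thus arrives spread over seconds, which is why search software must test many trial dispersion measures rather than look for a single sharp spike.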
Using new software of their own design, Kiyoshi Masui of the University of British Columbia and his colleague Jonathan Sievers of the University of KwaZulu-Natal in Durban, South Africa, identified a new Fast Radio Burst, named FRB 110523, from data first obtained and archived by Timbie and his Wisconsin colleague Christopher J. Anderson, as well as other radio astronomers. Timbie and Anderson work as part of a group attempting to sketch out the large-scale structure of the universe by three-dimensionally mapping the distribution of neutral hydrogen atoms in space. Some of their work depends on radio telescope observations such as those made at the Green Bank observatory. Masui and Sievers mined nearly 700 hours of archived data, identified the new Fast Radio Burst, and provided the most detailed record of one to date, this one originating an estimated 6 billion light years from Earth. “It was in the data, but we didn’t notice it,” says Timbie, explaining that the Fast Radio Burst flashes only briefly in a large volume of data and that as radio signals travel cosmological distances they are “smeared out.” Thus, the short, sharp signal of a Fast Radio Burst can be hiding in plain sight. The new data analysis software, notes Timbie, not only promises to make the discovery of Fast Radio Bursts a much more common occurrence, but is likely to continue to demystify objects astronomers have puzzled over for a decade. “We now have more information about the source than previous measurements,” Timbie observes. “Because of the nature of the pulse, we can say that it is in an environment where there is a lot of matter.” Such environments are consistent with things like supernova remnants or stellar nurseries, where dense concentrations of matter are continuously churned into new stars. # # # -Terry Devitt, (608) 262-8282, [email protected]
The search for extraterrestrial life should include what’s inside a planet An interdisciplinary team of researchers at the Carnegie Institution for Science is urging the scientific community to recognize the significance of a planet’s interior dynamics when investigating whether it is capable of hosting life. With the capabilities that we currently have available, it is common to assess a planet’s habitability by looking for signatures of life in the atmosphere. However, the Carnegie team argues that the planet’s innermost composition and activity must also be considered. “The heart of habitability is in planetary interiors,” said study co-author George Cody. On Earth, for example, plate tectonics are crucial for maintaining a surface climate where life can thrive. Furthermore, the convection that drives the planet’s magnetic field depends on the cycling of material between its surface and interior. Without its magnetic field, Earth would become uninhabitable. “We need a better understanding of how a planet’s composition and interior influence its habitability, starting with Earth,” said study co-author Anat Shahar. “This can be used to guide the search for exoplanets and star systems where life could thrive, signatures of which could be detected by telescopes.” Planets are formed from silicon, magnesium, oxygen, carbon, iron, and hydrogen. The abundance of these chemical elements, as well as the heating and cooling they experience in their youth, will affect the interior chemistry and other aspects of a planet such as its atmospheric composition. “One of the big questions we need to ask is whether the geologic and dynamic features that make our home planet habitable can be produced on planets with different compositions,” said study co-author Peter Driscoll. The Carnegie team concluded that the search for extraterrestrial life must be guided by a combination of astronomical observations, lab experiments of planetary interior conditions, and mathematical modeling. 
“Carnegie scientists are long-established world leaders in the fields of geochemistry, geophysics, planetary science, astrobiology, and astronomy,” said study co-author Alycia Weinberger. “So, our institution is perfectly placed to tackle this cross-disciplinary challenge.” The study is published in the journal Science. Image Credit: ESO/M. Kornmesser
The Nobel Prize in physics this year is split between two very different classes of discovery. Taken together, both sets of discoveries help us understand that the universe is bigger, stranger, and filled with more mystery than previously appreciated. One half of the $909,000 award goes to James Peebles, a theoretical cosmologist and professor emeritus at Princeton University, for his insights that led to the profoundly important conclusion that we have no idea what 95 percent of the universe is made of. The Nobel committee is awarding the other half to Swiss astronomers Michel Mayor, University of Geneva, and Didier Queloz, University of Geneva and University of Cambridge, for a single discovery: the first detection of a planet orbiting a star that is not our own. Their initial discovery has led to an explosion of exoplanet discoveries (planets in solar systems beyond our own). There are now 4,000 known exoplanets, with more being discovered all the time. Each is a new chance to understand the diversity of planets that inhabit the cosmos and a new chance to look for a world that looks like our own, possibly containing life.

Peebles’s work on cosmology helps describe the shape of the universe — and its largest mystery

Peebles is receiving the Nobel for decades of work describing the composition and structure of the early universe and for using those insights to describe how the universe looks today. One of the most consequential insights that comes from his work is the fact that ordinary matter, including planets, stars, and gas, makes up only 5 percent of the total mass and energy in the universe. That is, we only really know what 5 percent of the universe is made of. The rest consists of two mysterious components: One is dark matter, a substance that acts like a gravitational glue keeping galaxies from falling apart.
(The discovery of dark matter originates with the pioneering physicist Vera Rubin, who famously, frustratingly, never won a Nobel before death made her ineligible for one.) The other component is dark energy, which powers the expansion of the universe and pushes galaxies apart. Another of Peebles’s contributions to cosmology is making accurate predictions of the structure of the cosmic background radiation. You can think of the cosmic background radiation as the first light we can see from the dawn of time. It doesn’t come right from the big bang — at that time the universe was too dense for light to move freely in it — but from a period 400,000 years after it. What we can still see — in the form of microwave radiation — from this early time is the cosmic background radiation. It’s important, as it helps us understand the early conditions of the universe, but it also helps us understand the shape of the universe (it’s flat), among other properties. This work also informed the insight that 95 percent of the universe is dark matter and dark energy. “Peebles realised that the radiation’s temperature could provide information about how much matter was created in the Big Bang,” the Nobel committee says in a statement.

In 1995, Mayor and Queloz discovered a planet orbiting a star beyond our own

Mayor and Queloz, the other winners, made their discovery by simply looking at a star with a special type of telescope. In 1995, they were observing a star in the constellation Pegasus (specifically, 51 Pegasi) and noticed it wobble. Gravitational force seemed to be tugging on the star, and they could see this from the changing nature of the star’s light. When the star moves closer to Earth, its light becomes slightly bluer; when it moves farther away, its light becomes slightly redder (this shift is called the Doppler effect). “At that time I didn’t think at all it was a planet,” Queloz told the Guardian.
“I just thought something was wrong.” But Mayor and Queloz eventually figured out that what was pulling on the star was a planet with about half the mass of Jupiter. The planet is called 51 Pegasi b. It’s too small for astronomers to see directly with a telescope. But its gravitational influence on its home star is unmistakable. Unexpectedly, the astronomers found that 51 Pegasi b only takes four days to complete an orbit around its star. Astronomers hadn’t expected to find a planet so large orbiting so quickly and closely to its star. That insight alone was a discovery of a new type of planet, dubbed a “hot Jupiter.” (Basically, think of a gas giant like Jupiter and put it closer to the sun.) In a single discovery, astronomers suddenly knew of a whole new class of objects in the cosmos. This wasn’t the first exoplanet discovery. A team in 1992 found a pair of planets orbiting around a pulsar. But Mayor and Queloz discovered the first planet orbiting a conventional star. And that led to many more discoveries. Now, these planets are found all the time. Some are found via the method mentioned above. Other planets are found by observing stars and recording when they dim slightly — a star’s light dims slightly when a planet crosses in front of it. Astronomers can study the quality of that light and work out the size of the planet that crossed it as well as its distance from the star. Astronomers are most keenly searching for planets in a star’s “habitable zone” — a distance from the star where liquid water could conceivably exist. In 2018, NASA launched TESS — the Transiting Exoplanet Survey Satellite — into space to specifically look for even more planets. Each new planet we discover gives us a greater sense of what’s possible out there in the universe. And there are so many worlds yet to be discovered.
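The transit method mentioned above can be made concrete with a one-line estimate: the fractional dimming of the star equals (R_planet / R_star)². A sketch using generic Sun and Jupiter radii, which are illustrative values rather than measurements of any particular system:

```python
# Transit depth: fraction of starlight blocked when a planet crosses its star.
R_SUN_KM = 696_000    # solar radius, approx.
R_JUPITER_KM = 69_911 # Jupiter radius, approx.

depth = (R_JUPITER_KM / R_SUN_KM) ** 2
print(f"Transit depth ~ {depth:.1%}")  # ~1% for a Jupiter crossing a Sun-like star
```

An Earth-sized planet blocks roughly 100 times less light, which is why space telescopes like TESS are needed to catch the smaller worlds.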
Despite the similarities our world has with Venus, there is still much we don't know about Earth's "sister planet" and how it came to be. Thanks to its super-dense and hazy atmosphere, there are still unresolved questions about the planet's geological history. For example, despite the fact that Venus' surface is dominated by volcanic features, scientists have remained uncertain whether or not the planet is still volcanically active today. While the planet is known to have been volcanically active as recently as 2.5 million years ago, no concrete evidence has been found that there are still volcanic eruptions on Venus' surface. However, new research led by the USRA's Lunar and Planetary Institute (LPI) has shown that Venus may still have active volcanoes, making it the only other planet in the Solar System (other than Earth) that is still volcanically active today.

[Image: The volcanic peak Idunn Mons in the Imdr Regio area of Venus. The coloured overlay shows the heat patterns derived from surface brightness data collected by the Visible and Infrared Thermal Imaging Spectrometer, aboard the ESA's Venus Express spacecraft.]

This research, which appeared recently in the journal Science Advances, was led by Justin Filiberto – a staff scientist with the LPI. He was joined by fellow LPI researcher Allan H. Treiman, Martha Gilmore of Wesleyan University's Department of Earth and Environmental Sciences, and David Trang of the Hawai'i Institute of Geophysics and Planetology. The discovery that Venus once experienced a great deal of volcanic activity was made during the 1990s thanks to NASA's Magellan spacecraft. The radar imaging it provided of Venus' surface revealed a world dominated by volcanoes and lava flows. During the 2000s, the ESA followed up on this with their Venus Express orbiter, which shed new light on volcanic activity by measuring infrared light coming from the planet's surface at night.
This data allowed scientists to examine the lava flows on Venus' surface more closely and differentiate between the ones that were fresh and those that were altered. Unfortunately, the ages of lava eruptions and volcanoes on Venus were not known until recently since the alteration rate of fresh lava was not well constrained. For the sake of their study, Filiberto and his colleagues simulated Venus' atmosphere in their laboratory in order to investigate how Venus' lava flows would change over time. These simulations showed that olivine (which is abundant in basalt rock) reacts rapidly with an atmosphere like Venus' and would become coated with magnetite and hematite (two iron oxide minerals) within days. They also found that the near-infrared signature emitted by these minerals (which are consistent with the data obtained by the Venus Express mission) would disappear within days. From this, the team concluded that the lava flows observed on Venus were very young, which in turn would indicate that Venus still has active volcanoes on its surface. These results certainly bolster the case for Venus being volcanically active, but could also have implications for our understanding of the interior dynamics of terrestrial planets (like Earth and Mars) in general. As Filiberto explained: "If Venus is indeed active today, it would make a great place to visit to better understand the interiors of planets. For example, we could study how planets cool and why the Earth and Venus have active volcanism, but Mars does not. Future missions should be able to see these flows and changes in the surface and provide concrete evidence of its activity." In the near future, a number of missions will be bound for Venus to learn more about its atmosphere and surface conditions. These include India's Shukrayaan-1 orbiter and Russia's Venera-D spacecraft, which are currently in development and scheduled to launch by 2023 and 2026, respectively. 
These and other missions (which are still in the conceptual phase) will attempt to resolve the mysteries of Earth's "sister planet" once and for all. And in the process, they might be able to reveal a thing or two about our own!
Martian fire and ice

The geography of Mars continues to be a puzzle: The most recent models show that the Red Planet has probably always been an icy cold place. But the geographical features on its surface suggest that liquid water once flowed there. Dr. Itay Halevy of the Department of Earth and Planetary Sciences has now shown how the ice could have melted for short periods in the planet’s history, producing a sort of “Martian spring.” That spring, however, would have been anything but mild: The warming would have been the result of violent volcanic activity. Eruptions of such now-dormant volcanoes as Olympus Mons, the largest volcano in the Solar System, may have been hundreds of times the force of the average eruption on Earth - and may have lasted up to a decade. From what we know of Earthly eruptions, the quantity of gases spewed must have been enormous. Dr. Halevy and his colleagues assessed these amounts and created a simulation of the way that those sulfurous gases would have interacted with the dusty Martian atmosphere. Sulfur can warm as a greenhouse gas or cool by forming particles that shade the surface from the Sun’s rays. According to their calculations, the warming effects would have outweighed the added cooling, heating the surface just enough to allow water to flow at low latitudes - for dozens to hundreds of years at a time. It is during these repeated “brief” - in planetary terms - but intense wet periods that the surface of the planet was carved by flowing rivers and streams. Dr. Itay Halevy’s research is supported by the Helen Kimmel Center for Planetary Science; the Deloro Institute for Advanced Research in Space and Optics; and the Wolfson Family Charitable Trust. Dr. Halevy is the incumbent of the Anna and Maurice Boukstein Career Development Chair in Perpetuity.
It’s a fun sight to see when heavenly bodies have some fun time on their own. They just casually walk into view like they own the entire space. Just like this giant black hole pair photobombing the Andromeda Galaxy. Before we get to the fun part, let’s have some fast fun facts. The Andromeda Galaxy is a spiral-shaped galaxy that’s approximately 2.5 million light-years from Earth. This galaxy is also the nearest major galaxy to the Milky Way – where we are situated. Scientists have estimated that the Milky Way and Andromeda galaxies are expected to collide in more or less 4.5 billion years, merging to form a giant disc galaxy. Now, back to the story. Last November 20, astronomers from the University of Washington discovered a cosmic photobomb: a background object in images of the nearby Andromeda galaxy has revealed what could be the most tightly coupled pair of supermassive black holes ever seen. The cosmic photobomb was seen in optical images of M31, famously known as Andromeda, from the Hubble Space Telescope, along with X-ray data. They called it LGGS J004527.30+413254.3, or J0045+41 for short. Until recently, scientists thought that J0045+41 was an object within M31. While searching for a special type of star, they discovered that it was something else. The object they initially thought lay within Andromeda’s territory turned out to be roughly 1,000 times farther away: around 2.6 billion light-years from M31. Even more interesting, what appeared to be a star was something far stranger. It is actually a pair of giant black holes in close orbit around each other. Like two people dancing around, but in this case, two gigantic black holes. According to NASA/CXC/University of Washington/ESA, the estimated total mass for these two gigantic black holes is about 200 million times the mass of the sun. Imagine these two giants colliding. The collision of two black holes isn’t impossible, though.
According to NASA and Hubble, it’s possible for two black holes to collide. Once two black holes collide or merge, they become a single bigger one. When this happens, it can be violent: mergers give off tremendous energy and send massive ripples, or gravitational waves, through the space-time fabric of the universe. However, mergers are rare, and this one would most likely happen billions of years from now. So for now, let us enjoy these two black holes dancing around each other and hope that they won’t be merging anytime soon.
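For a sense of scale, the size of the event horizon implied by the quoted combined mass of roughly 200 million Suns can be estimated with the Schwarzschild radius, r_s = 2GM/c². This is a rough order-of-magnitude sketch using standard constants; the calculation is mine, not from the article:

```python
# Schwarzschild radius for the quoted ~200-million-solar-mass total.
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg
AU = 1.496e11     # astronomical unit, m

m_total = 2.0e8 * M_SUN
r_s = 2 * G * m_total / C**2
print(f"Schwarzschild radius ~ {r_s / AU:.1f} AU")  # roughly 4 AU
```

An event horizon about four times the Earth-Sun distance would comfortably swallow the inner solar system, yet is still tiny on galactic scales.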
A simple model shows that a rocky planet close to its star may solidify so slowly that its water is lost to space and the planet becomes desiccated, whereas a planet farther out may solidify quickly and retain its water. See Letter p.607

Earth and Venus were probably built from similar rocky materials, having been formed by similar mass-accretion processes. The giant impacts that characterize these processes are thought to melt the growing planets to some depth, producing one or more magma-ocean stages during which the silicate portion of the planet is melted before solidifying. Thus, there has been no reason to suspect that Venus and Earth differed through the first tens and probably hundreds of millions of years of the Solar System, and Venus is commonly thought to have lost its water through some later divergence from Earth-like evolution. On page 607 of this issue, Hamano et al. [1] present a simple model that might explain why rocky planets that have similar compositions but orbit at different distances from their host stars can end their magma-ocean stages with either an Earth-like wetness or a Venus-like dryness. This model does not require any later divergence to explain the differences between the planets. Almost 30 years ago, researchers showed how a dense steam atmosphere can be generated on a young, hot planet by the solidification of an impact-generated magma ocean [2]. After the upper troposphere (the lowest portion of the atmosphere) of the young planet has become saturated with steam, that atmospheric layer imposes a strict upper limit on outgoing radiation from the magma ocean — about 300 watts per square metre. Therefore, as soon as the magma ocean produces a steam-saturated troposphere, the cooling rate of the planet is controlled by this one simple limit. Previously, several groups had calculated that a magma ocean should solidify in just millions of years [3,4,5].
These calculations assumed that the planet had lower incoming heat flux from the star than outgoing heat flux from the magma ocean. The crucial feature of Hamano and colleagues' model is that some planets are close enough to their star for the incoming heat flux to be higher than the 300 W m−2 outgoing radiation limit, and thus the planet would be prevented from cooling at all until water was lost from the steam-saturated atmosphere. For planets close to their star, solidification and cooling together may take orders of magnitude longer than for more distant planets, perhaps as long as hundreds of millions of years. After solidification, cooling proceeds only as water is stripped from the hot, inflated atmosphere by hydrodynamic escape. The longer that solidification and cooling take, the more water is lost to space, and the drier the planet becomes. Thus, the distance of a terrestrial planet from its host star might produce an evolutionary dichotomy (Fig. 1). The authors further suggest that Earth solidified far enough from the Sun to have a net loss of planetary heat from the beginning, allowing it to solidify quickly. Earth's initial water inventory influenced the volume of only its initial oceans. Venus, however, may have had net heat flux into the planet, and its current dryness might be related to this early slow solidification and attendant atmospheric water loss, before cooling allowed the water in the steam atmosphere to cool and condense into liquid oceans. Recent work on geochemical tracers has indicated that Earth's mantle solidified and differentiated from a magma ocean more than 4.45 billion years ago6, probably around 4.52 billion years ago7, which agrees with rapid solidification. For Venus, however, there are insufficient geochemical data to perform this test. 
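The threshold logic above can be sketched numerically. This toy comparison uses the ~300 W per square metre steam-atmosphere limit quoted in the article; the present-day solar constant and an albedo of 0.3 are illustrative assumptions, not values from the paper:

```python
# Sketch of the cooling threshold described above: a magma-ocean planet
# with a steam-saturated troposphere can radiate at most ~300 W/m^2.
# If absorbed stellar flux exceeds that, the planet cannot cool.
# Albedo 0.3 and the modern solar constant are illustrative assumptions.
SOLAR_CONSTANT_1AU = 1361.0  # W/m^2 at Earth's distance
RADIATION_LIMIT = 300.0      # W/m^2, steam-atmosphere limit cited above
ALBEDO = 0.3

def absorbed_flux(distance_au):
    """Globally averaged absorbed stellar flux at a given orbital distance."""
    incident = SOLAR_CONSTANT_1AU / distance_au**2
    return incident * (1.0 - ALBEDO) / 4.0  # /4: sphere vs. cross-section

for name, d in [("Venus", 0.723), ("Earth", 1.0)]:
    f = absorbed_flux(d)
    regime = ("cannot cool -> slow solidification" if f > RADIATION_LIMIT
              else "net heat loss -> fast solidification")
    print(f"{name}: {f:6.1f} W/m^2 -> {regime}")
```

Even this crude version lands Venus above the limit and Earth below it, which is the dichotomy the model describes; the real calculation must also account for the fainter young Sun.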
Measurements of deuterium and hydrogen in the Venusian atmosphere indicate that the planet has lost a substantial amount of water over time8,9, but whether that loss occurred at the time of solidification or more recently is a matter of argument. The authors' model underscores the importance of the earliest accretion and solidification steps in determining the future evolution of the rocky planets. However, several crucial caveats need to be considered in applying this model. First, in extrapolating back in time, the faint young star's radiation level needs to be considered. Second, initial atmospheres might not all be water-rich; the rocky building blocks for some planets might have produced atmospheres rich in methane and hydrogen, instead of steam10. In the absence of a steam atmosphere, there would be no outgoing radiation limit to slow solidification and cooling. Third, forming an initial atmosphere above a magma ocean is not a simple process. The removal of volatile gases from magma might require a significant degree of supersaturation and might not occur until late in solidification. If this is so, then solidification would proceed to a high degree before a steam atmosphere formed and occluded heat flux. Although proximity to a star affects planetary water content, this is not the only parameter that dictates the habitability of a rocky planet — the planet's composition also has a strong influence on all aspects of habitability, such as bulk atmospheric composition, susceptibility to plate tectonics and formation of a shielding magnetic field. A challenge for the coming decades will be to make measurements of exoplanets that allow the testing of models for habitability, and these tests need to include composition. How do atmospheric species other than water affect the solidification rates of magma oceans? What atmospheric compositions would be expected in the wake of a slow solidification with substantial water loss? 
The habitability of Earth and the inhospitability of Venus may be the inevitable result of our planetary sibling order next to the Sun rather than later evolutionary bifurcations. If so, similar patterns of habitability are likely to be found in exoplanets.
References:
1. Hamano, K., Abe, Y. & Genda, H. Nature 497, 607–610 (2013).
2. Abe, Y. & Matsui, T. J. Geophys. Res. 90, C545–C559 (1985).
3. Abe, Y. Phys. Earth Planet. Int. 100, 27–39 (1997).
4. Zahnle, K. J., Kasting, J. F. & Pollack, J. B. Icarus 74, 62–97 (1988).
5. Elkins-Tanton, L. T. Earth Planet. Sci. Lett. 271, 181–191 (2008).
6. Mukhopadhyay, S. Nature 486, 101–104 (2012).
7. Touboul, M., Puchtel, I. S. & Walker, R. J. Science 335, 1065–1069 (2012).
8. Donahue, T. M., Hoffman, J. H., Hodges, R. R. Jr & Watson, A. J. Science 216, 630–633 (1982).
9. Zahnle, K. J. & Kasting, J. F. Icarus 68, 462–480 (1986).
10. Hashimoto, G. L., Abe, Y. & Sugita, S. J. Geophys. Res. 112, E05010 (2007).
(CNN)The first observation of a collision between neutron stars, detected in August 2017, created gravitational waves, light and heavy elements like gold and platinum. But astronomers have realized they also witnessed a kilonova, the kind of explosion that creates gold and platinum, the year before. The 2017 observation offered evidence for the theory that such massive explosions in space are responsible for creating large amounts of heavy elements. All of the gold and platinum found on Earth was likely created by ancient kilonovae that resulted from neutron star collisions. Because astronomers were able to make a direct observation in 2017, it changed what they expected a kilonova to look like. So they took their observations and looked back at other events initially thought to be something else. Specifically, they looked at an August 2016 gamma-ray burst. The event, named GRB160821B, was tracked minutes after detection by NASA's Neil Gehrels Swift Observatory. The 2017 event wasn't tracked in its initial hours, adding intrigue to the 2016 event. The new analysis of the 2016 event was published Tuesday in the journal Monthly Notices of the Royal Astronomical Society. "The 2016 event was very exciting at first. It was nearby and visible with every major telescope, including NASA's Hubble Space Telescope. But it didn't match our predictions--we expected to see the infrared emission become brighter and brighter over several weeks," said Eleonora Troja, study author and associate research scientist in the University of Maryland's Department of Astronomy. But the signal faded ten days after the event. When the team went back and compared the 2017 event to the 2016 event, "it was a nearly perfect match," Troja said. The team had observed a kilonova in 2016 without realizing it. The researchers now believe it was the result of a neutron star collision, although kilonovae can also result from the merger of a black hole and a neutron star.
The 2016 event detection doesn't have as much detail as the 2017 event, but the record from its first few hours revealed new insights about the kilonova's earliest stages. The astronomers were actually able to see the object that formed after the collision, something that wasn't possible for the 2017 event. "The remnant could be a highly magnetized, hypermassive neutron star known as a magnetar, which survived the collision and then collapsed into a black hole," said Geoffrey Ryan, study co-author and a Joint Space-Science Institute Prize Postdoctoral Fellow in the University of Maryland Department of Astronomy. "This is interesting, because theory suggests that a magnetar should slow or even stop the production of heavy metals, which is the ultimate source of a kilonova's infrared light signature. Our analysis suggests that heavy metals are somehow able to escape the quenching influence of the remnant object." Now, the researchers want to apply the insight they gained from this study to other previous events. This will also improve their observations of future events. "The very bright infrared signal from this event arguably makes it the clearest kilonova we have observed in the distant universe," Troja said. "I'm very much interested in how kilonova properties change with different progenitors and final remnants. As we observe more of these events, we may learn that there are many different types of kilonovae all in the same family, as is the case with the many different types of supernovae. It's so exciting to be shaping our knowledge in real time."
Nowhere have records been falling faster than in Antarctica. And what is shocking is that these records are all tied to cooling – and not warming. Antarctica has been setting new maximum sea ice records almost daily, and never has Antarctic sea ice been so extensive for so long since satellite measurements began some 35 years ago. The sea ice anomaly has averaged over 1 million square kilometers for over one year. Figure 1: Antarctic sea ice anomaly. The mean has been rising for three decades. Approximate mean bars added by author. Source: arctic.atmos.uiuc.edu Almost daily, observers of southern hemispheric sea ice have been hearing of a new record ice extent for the date. The overall trend is not something that can be regarded as a recent anomaly attributable to natural weather-like variability. Rather it really has to do with multidecadal trends that are unrelated to manmade CO2 emissions. Iciest Antarctic decade ever Figure 1 above shows how each south polar decade is icier than the one before. There is only one reason for this: there is less and less heat down there to prevent ice from forming, which seems to squarely contradict claims of a warming that is global. In total, the global (Arctic and Antarctic combined) sea ice mean for the last one and a half years has been above the long-term mean. Scientists are scrambling to find out what has gone wrong with their model calculations. It wasn't supposed to be so. Figure 1 also shows how Antarctic sea ice has remained steadfastly above normal for more than two and a half years, something that has never happened since satellite measurements began. Moreover, Anthony Watts here writes: "We are now on day 1001 of positive anomaly based on the 1979-2008 baseline." Scientists who have long claimed the globe is warming are baffled and a few are even so shocked that they are denying it altogether. The year of daily sea ice records 2014 for Antarctica has been especially icy.
Figure 2 below shows how 2014 daily sea ice extent, depicted by the bold red curve, has been setting new record daily highs for the last 5 months. Figure 2: Sea ice extent for 2014 depicted by red curve. Source: sunshinehours. Not only has the sea ice area expanded at the South Pole but so has volume, this according to Germany's Alfred Wegener Institute last October. In a German-language press release the Bremerhaven-based AWI concluded that "from various studies the total volume of the Antarctic sea ice has grown over the last years." Record low temperatures Not only are record sea ice extent and volume telling us that Antarctica is cooling dramatically; so are the thermometers. For example, last year the National Geographic reported here that NASA had recorded the coldest temperature ever on the Antarctic continent: -136°F. Also the University of Wisconsin, Madison reported that the South Pole Station saw a new record low temperature of -73.8°C (-100.8°F) on June 11, 2012, breaking the previous minimum temperature record of -73.3°C (-99.9°F) set in 1966. Just weeks ago CFACT reported that the British Antarctic Survey (BAS) saw a record cold of -55.4°C. In 2010 the Neumayer III station, operated by Germany's Alfred Wegener Institute, recorded the lowest temperature at their Antarctic location in their 29 years of operation. The mercury dropped to -50.2°C. Antarctic freeze-up still being denied Some global-warming-hypothesizing scientists are finding it difficult to cope with the reality of growing sea ice at the South Pole, some even insisting that it is shrinking, or that the expansion is a sign of warming and that the trend will reverse later in the future. But other experts (skeptics) think these scientists are naive to believe that a trace gas could control the entire global climate system. The climate system, they say, is far too complex and many other more potent factors are really at play.
And so these experts aren't at all surprised by what is going on. Willie Soon, professor of astrophysics and geosciences at the Solar and Stellar Physics (SSP) Division of the Harvard-Smithsonian Center for Astrophysics, says, "The South Pole is in sharp contradiction against the CO2 global warming scenarios which were supposed to melt most of the ice masses of the world." So much so, Soon says, that "it is still being denied by some professional scientists". Soon says that the freezing of the South Pole is "one fact that shows how much more we need to understand about how the Earth climate system can vary naturally and how different regions are inter-related to each other rather than insisting that all changes and variations must be caused by rising atmospheric CO2 alone." Antarctic sea ice should retreat when PDO and AMO cool Veteran meteorologist Joe Bastardi, who specializes in providing commercial clients with short-term and seasonal forecasts, also thinks other major global factors are at play – for one: ocean surface temperature oscillations in the Pacific and Atlantic, i.e. the Pacific Decadal Oscillation (PDO) and the Atlantic Multidecadal Oscillation (AMO): "The southern oceans around Antarctic are cool in a warm PDO, but warm in a cool PDO." Bastardi believes that once the PDO is well into the cool phase, the Antarctic sea ice will retreat. "If the southern ice cap does not shrink, then that will be a problem, but I have confidence it will." But Arctic will recover Conversely, Bastardi thinks that as the AMO enters its cool phase, in about 5-10 years, the Arctic will be well on its way to recovery. Already we see signs of that starting today. Joe has written an entire essay on this, and it will be posted tomorrow.
In summary: While scientists who have invested careers in tying climate change to CO2 are scrambling to explain the lack of cooperation by the sea ice at the South Pole, other experts think the anomaly is all part of longer term natural cycles which man is powerless to stop.
Aug 22, 2013 An electromagnetic phenomenon on the fringes of galaxy NGC 7793 is confounding astronomers because they insist on seeing it as a gravitational superforce. Explaining the jets of ionized particles often seen erupting from various objects in space ranks as one of the most difficult tasks facing modern astronomers. What force can create highly energetic particle emissions that span distances measured in light-years? What confines them into narrow beams? Hundreds of stellar jets have now been observed, but the prevailing theory of "compacted gravitational point sources" exciting gas and dust as they orbit does not address the existence of collimated jets. There is only one force that can hold such a matter stream together over those distances: magnetism. The only way to generate that magnetic confinement is through electricity flowing through space. In the past, astronomers observed coherent filaments from so-called "Herbig-Haro" stars, some more than 12 light-years long. Charged particles within the filaments were thought to exceed velocities of 500 kilometers per second. The finely knotted jets exceeded three times the distance from our Sun to the nearest star, Proxima Centauri. According to ESO's recent announcement, however, the jets from the NGC 7793 microquasar are several hundred light-years long. Most researchers try to account for narrowly confined jets by invoking words like "nozzle" or "high pressure", defying all that science knows about the behavior of gases in a vacuum. Some are even willing to acknowledge that magnetic fields might focus gases into narrow beams, although there is a commonly held opinion that magnetic fields are not important. Magnetic fields are only one part of the story, and failure to realize that electric currents create magnetic fields has led many physicists to model plasma in space without considering the flow of electricity.
Nobel laureate Hannes Alfvén, a pioneer in the field of plasma cosmology, stated that plasma is “too complicated and awkward” for the tastes of mathematicians. It is “not at all suited for mathematically elegant theories” and requires laboratory experiments. Alfvén observed that the plasma universe had become “the playground of theoreticians who have never seen a plasma in a laboratory. Many of them still believe in formulae which we know from laboratory experiments to be wrong”. He thought that the underlying assumptions of cosmologists “are developed with the most sophisticated mathematical methods and it is only the plasma itself which does not ‘understand’ how beautiful the theories are and absolutely refuses to obey them.” Stars are nodes in electrical circuits. Electromagnetic energy could be stored in the equatorial current sheets surrounding them until some trigger event causes them to switch into a polar discharge. The electric jet could receive its energy from a natural particle-accelerator, a “plasma double layer” with a strong electric field. Toroidal magnetic fields would form because of the polar plasma discharge, confining it into a narrow channel. Axial electric currents should be flowing along the jet’s entire length. Only electric fields can accelerate charged particles across interstellar space.
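The claim that currents necessarily produce magnetic fields is simply Ampère's law. A minimal sketch for a long straight current filament follows; the 1 A and 1 m values are arbitrary, chosen only to show the scale:

```python
import math

# Field of a long straight current filament (Ampere's law):
#   B = mu0 * I / (2 * pi * r)
# The 1-ampere / 1-metre inputs below are arbitrary illustrative values.
MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def field_tesla(current_amps, radius_m):
    """Magnetic field magnitude at distance radius_m from the filament."""
    return MU0 * current_amps / (2 * math.pi * radius_m)

print(field_tesla(1.0, 1.0))  # 2e-7 T
```

Whether such currents dominate astrophysical jets is the contested claim here; the law itself is standard electromagnetism.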
Listeners to The Current and my daily 4:20 p.m. chat with my hero, Mary Lucia, won't be hearing about what I'm about to tell you. Mary doesn't like my stories of space and the cosmos and will continue to give me grief about them, I presume, until we find a planet inhabited by nothing but pugs. But until we find out more about the universe, we probably won't discover that enchanted domain, which is why the announcement today is important: We have a new way of looking at the universe. An ultrahigh-energy cosmic neutrino has been captured deep under Antarctica. It's been traveling for four billion years, probably from a supermassive black hole at the center of a distant galaxy. "It's an achievement that opens a whole new way of looking at the universe," NPR's Joe Palca says. "Astronomy started when people looked at the night sky, and that's light hitting your eyes," says Naoko Kurahashi Neilson, an astrophysicist at Drexel University in Philadelphia and another member of the IceCube collaboration. "It's expanded from just visible light to X-rays and gamma rays, and also to infrared and radio waves," she says. But light waves and gamma rays and even radio waves are all what scientists call electromagnetic radiation. They differ in wavelength, but they're all from the same family. "And then here come neutrinos," Neilson says, "which is a completely different way to look at the universe. And gee, I wonder what we can see if we use this whole different way to look at the universe." Pugs, I'm guessing. Don't tell Mary. The experiment that captured the neutrino, called IceCube, is led by a professor from the University of Wisconsin in Madison. "This is the beginning of something," Francis Halzen said. "There's a whole part of astronomy that was a black box for us. I think we opened that up." Halzen was in Madison when the instruments in the Antarctic "pinged", indicating something had been detected. But he didn't think much of it, according to Madison.com.
The thing pings at least once a month. The energy level of this neutrino, however, stood out as the highest recorded for a particle at IceCube – about 46 times greater than the energy of protons in the world's most powerful particle accelerator, which is located in Europe. And two other telescopes, one stationed on the Canary Islands and one a NASA satellite in space, also detected the same blazar in the following weeks, further confirming that the distant galaxy's black hole powers neutrinos that travel through galaxies, stars and anything else in their path. "It was only when we started reading the other telegrams that this became an exciting event," Halzen said. Corroborating IceCube's observations with other telescopes marks a milestone in what scientists call multi-messenger astronomy. That's the question I get from Mary a lot and it's the one that the science community has a lot of difficulty explaining to mere mortals (who weren't around, by the way, when this thing started its travels to Earth). Researchers have crossed the threshold into precise neutrino astronomy, according to Darren Grant at the University of Alberta, who tells Live Science "there's a lot more to learn." "We'll never understand the origins of the universe without understanding neutrinos," Prof. Halzen said. Today's announcement provides a road map for doing that.
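A quick sanity check of the "46 times" comparison. The 6.5 TeV per-proton LHC beam energy is an assumption here (the standard 2017-18 run value; the article does not say which figure it uses):

```python
# Back-of-envelope check of the "46 times the LHC" comparison above.
# Assumption: LHC proton energy of 6.5 TeV (2017-18 run value).
EV_TO_JOULE = 1.602e-19  # joules per electronvolt

lhc_proton_tev = 6.5
neutrino_tev = 46 * lhc_proton_tev               # ~299 TeV
neutrino_joules = neutrino_tev * 1e12 * EV_TO_JOULE

print(f"{neutrino_tev:.0f} TeV = {neutrino_joules:.2e} J")
```

That is a macroscopic amount of energy, roughly that of a thrown pebble, carried by a single nearly massless particle.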
Science has enabled us to uncover many technologies and phenomena that might otherwise seem supernatural or shrouded in mystery. The black hole has been one of the biggest mysteries eluding scientists around the world for decades. Einstein's theory of general relativity made many predictions about the behavior of black holes, whose gravitational fields are strong enough to strain the known rules of physics and to not even allow light to escape. But for all the mathematical analysis, artistic descriptions, and graphic representations, no one had been able to capture a real picture of a black hole until recently. How was it made possible? In 2017, institutions from eight different places in the world joined hands to collect visual data on black holes through a network of eight ultra-sensitive telescopes, collectively named the Event Horizon Telescope (EHT). These telescopes were tasked with capturing the radiation from around black holes and imaging a black hole's periphery. The EHT observed the black hole in Messier 87 (M87) and Sgr A*, short for Sagittarius A*, which is at the center of our own galaxy. The data that led to the construction of the final picture of the black hole was accumulated over the course of five nights in April 2017. The telescopes that made up the EHT were sensitive to wavelengths of less than a millimeter. They recorded amounts of data so large they couldn't practically be transmitted over the internet, so the data had to be recorded on hard disks and flown to the Massachusetts Institute of Technology's Haystack Observatory in Massachusetts. Because weather made it nearly impossible for flights to leave with the data recorded by the telescope at the South Pole, that data took a year to arrive. Four separate groups, acting independently and without bias, put the data together to create the image of the black hole. All four images were consistent.
This is how we got to see the first real image of a black hole. The image above is of the black hole in M87. The quality of the picture of Sgr A*, which is in the Milky Way galaxy, was very poor and hence was not released to the public. What Makes This So Important? Black holes provide us with deep insight into phenomena that are yet to be explained by science. The data generated by the combined efforts of these 8 telescopes will also strengthen the theoretical framework that scientists have built over the several decades since we started researching black holes. By having a clearer understanding of our universe, we can succeed in breaking new barriers in science and technology. The picture of the black hole also reinforces the numerous hypotheses related to black holes and the theory of relativity. A lot of ground-breaking research into the behavior of black holes can result from this recent picture. Questions about the jets of luminous material which made it possible for us to visualize the black hole can also be researched more effectively. Since black holes are immensely compact and dense, capturing one in an image is extraordinarily difficult. This image demonstrates the technological advances we have made over the decades since we became aware of the existence of black holes in the universe. It also shows that more scientific breakthroughs can be made possible through collaboration and the joint efforts of institutions around the world. Note: Sgr A* stands for Sagittarius A*, the black hole at the center of the Milky Way Galaxy. Scientists estimate the mass of this black hole to be about 4 million solar masses. It is 26,000 light years away from the solar system.
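Using the figures in the note above, a rough estimate shows why an Earth-spanning telescope network was needed: the Sgr A* shadow spans only tens of microarcseconds on the sky. The shadow-diameter factor of roughly 5.2 Schwarzschild radii is a standard general-relativity result, not stated in this article:

```python
import math

# Rough angular size of the Sgr A* shadow, using the article's figures:
# 4 million solar masses at 26,000 light-years. The ~5.2 r_s shadow
# diameter is a standard GR result (an outside assumption here).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
LY = 9.461e15      # light-year, m

mass = 4e6 * M_SUN
distance = 26_000 * LY
r_s = 2 * G * mass / C**2      # Schwarzschild radius, ~1.2e10 m
shadow = 5.2 * r_s             # apparent shadow diameter, m
theta_rad = shadow / distance
theta_microarcsec = math.degrees(theta_rad) * 3600 * 1e6

print(f"{theta_microarcsec:.0f} microarcseconds")  # ~50
```

Resolving ~50 microarcseconds at millimeter wavelengths requires a baseline comparable to the diameter of the Earth, which is exactly what linking the eight telescopes provided.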
Scientific convention states there are eight planets in the solar system: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune. On August 24, 2006, Pluto ceased to be recognised as the ninth planet, following a change in definition. But debate has reignited over Planet Nine, for a mysterious celestial body posited to be out there could claim the title for itself. A puzzling planet could be hiding out on the distant edge of our solar system. And astronomers have just published new information about its appearance and whether it actually exists. Planet Nine could have five to 10 times the mass of Earth. And Planet Nine could be barrelling along an elongated orbit, peaking at 400 AU. An Astronomical Unit (AU) is the average distance from Earth to the Sun, approximately 93 million miles (150 million km). This orbit is also likely tilted 15 to 25 degrees off the main orbital plane of our solar system, in which most planets orbit. Planet Nine's existence is an idea back in vogue among astronomers since it was first seriously proposed in 2014. Proponents believe the planet exists because of patterns among objects in the Kuiper Belt, a ring of debris in the outer solar system. This space debris clumps together in ways suggesting gravity from something significant is exerting a force on it. And the evidence for Planet Nine's existence is beginning to build. A group of astrophysicists has calculated the probability of Planet Nine not existing at just 1 in 500. And this new research indicates Planet Nine's discovery is significantly closer than previously believed. The likeliest alternative explanation is an incomplete conception of the Kuiper Belt, with objects only appearing to cluster because of bias in the efforts to detect them. And a further possibility is that the clustering results from the "self-gravity" of the Kuiper Belt acting on its own objects, and not from some hidden planet's tug.
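For a sense of how remote such an orbit is, Kepler's third law gives the period: for a body orbiting the Sun, P in years equals a to the power 1.5, with a in AU. Treating the article's quoted 400 AU orbital peak as the semi-major axis is a simplifying assumption:

```python
# Kepler's third law for a Sun-orbiting body: P [years] = a^1.5, a in AU.
# Using the article's 400 AU figure as the semi-major axis is a
# simplifying assumption (it is quoted as the orbit's peak distance).
def orbital_period_years(semi_major_axis_au):
    return semi_major_axis_au ** 1.5

print(orbital_period_years(1))    # Earth: 1.0 year
print(orbital_period_years(400))  # ~8000 years
```

A single 8,000-year circuit of the Sun helps explain why such a massive planet could still be undiscovered: it barely moves against the background stars from one survey to the next.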
7 Remarkable Lessons from Messenger’s Mission to Mercury Final resting place of the Messenger probe on Mercury, where it crash landed on April 30 while traveling at 8,700 miles per hour. Credit: NASA/APL/CIW Through most of its life, NASA’s scrappy Messenger probe was something of a unsung hero. The first spacecraft ever to orbit Mercury didn’t have the you-are-there immediacy of a Mars rover, the daredevil appeal of landing on a comet, or the romance of visiting a beautiful ringed planet. But with today’s death–the result of a long-anticipated crash into the planet it studied–we can clearly see what an incredibly successful explorer Messenger was. Mercury has long been a solar-system enigma. It is not particularly small (roughly halfway in size between Mars and the moon), and it is not particularly far away (third closest planet to Earth after Mars and Venus), but the first planet from the sun is devilishly hard to study. Seen from Earth it hangs low in the sky; from space it hugs so close to the solar glare that the Hubble telescope cannot aim at it. Astronomers were so stymied that they didn’t even know how quickly Mercury rotated until 1965, when they found out, not by looking but by bouncing radar signals off its surface. Getting Messenger to Mercury wasn’t easy, either. After the 2004 launch, it took nearly 7 years of flight–including one Earth flyby, two Venus flybys, and three passes by Mercury–before the probe was able to enter orbit. Compare that to the 9-month travel time to Mars. The obstacle, once again, is the sun: Its gravitational pull tends to speed up any inward-traveling space probe, but getting into orbit requires a slow approach, making for a tricky maneuver. When Messenger finally arrived for good on March 18, 2011, it encountered a planet that we barely understood. That quickly changed. Within two years, Messenger mapped the full surface of Mercury, half of which had never been seen before. 
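The orbital-mechanics obstacle described above can be made concrete with the circular-orbit speed, v = sqrt(GM_sun/r): a probe falling inward toward Mercury must shed a large part of a much higher orbital speed before it can be captured. This is a sketch of the physics only, not Messenger's actual trajectory design:

```python
import math

# Circular heliocentric orbital speed, v = sqrt(GM_sun / r), to show why
# braking into Mercury orbit is hard: speed grows steeply closer to the Sun.
GM_SUN = 1.327e20  # standard gravitational parameter of the Sun, m^3/s^2
AU = 1.496e11      # astronomical unit, m

def circular_speed_kms(r_au):
    """Circular orbital speed around the Sun at distance r_au, in km/s."""
    return math.sqrt(GM_SUN / (r_au * AU)) / 1000.0

print(f"Earth   (1.000 AU): {circular_speed_kms(1.0):.1f} km/s")   # ~29.8
print(f"Mercury (0.387 AU): {circular_speed_kms(0.387):.1f} km/s") # ~47.9
```

Closing an ~18 km/s gap with limited fuel is why Messenger needed six planetary flybys over nearly seven years to bleed off speed gradually.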
As Mercury came into sharp focus for the first time, scientists began to appreciate the truly wonderful, exotic strangeness of the innermost planet. South pole of Mercury is color-coded to show how much sunshine each location receives. The dark, permanently shadowed crater bottoms are layered with icy deposits (shown in white). Credit: NASA/APL/CIW Mercury is covered with ice. Yes, the planet closest to the sun seems to be dotted with icy patches. It’s not as paradoxical as it sounds. Mercury’s axis points straight up and down, meaning that any deep craters near the north or south poles remain perpetually in shadow, in deep freeze. Messenger observations indicate that those craters contain up to a trillion tons of water ice, and appear to be coated in a blanket of organic (carbon-rich) compounds as well. Most likely the deposits consist of material that accumulated from billions of years of comet impacts. A much more intense version of that process may have helped fill Earth’s oceans and seeded our newborn planet with organic molecules that could have aided the origin of life. Mercury (left) is distinctly less reflective than the moon, perhaps because it is dusted with the sooty remains of dead comets. Credit: NASA/APL/CIW Mercury is painted black. A longstanding enigma about Mercury is that it is very dark, only about half as reflective as the moon. One new theory is that the rain of comets onto Mercury coated the planet with carbon–soot in essence–which would explain the odd surface coloration captured so dramatically by Messenger’s cameras. Debris from Comet Encke seems to cause seasonal meteor showers on Mercury. This artist’s concept incorporates a real Messenger image of the planet. Credit: NASA. Mercury has meteor showers. The abundant craters and comet debris attest to billions of years of battering, but the bombardment has hardly ended.
Thin clouds of calcium atoms periodically appear around the planet. A leading theory is that this strange chemical pattern is caused by a meteor shower that is too faint to observe directly. Those meteors may be bits of Comet Encke, which also causes the Taurid meteor showers on Earth. Enhanced-color Messenger image shows the bizarre “hollows” found around impact sites on Mercury, such as this impact basin called Raditladi. Credit: NASA/APL/CIW Mercury is young and restless. Many of the geologic features seen on Mercury are unlike anything seen elsewhere. One of the most peculiar is a scattered set of pitted regions, dubbed hollows, where chunks of the surface appear to be missing. These may be places where deposits of easily vaporized materials–such as potassium, chlorine, and sulfur–were exposed to solar heat by asteroid impacts, boiling away and leaving voids that then collapsed. The abundance of such “volatile” materials on Mercury was unexpected. Even more surprising, the hollows appear extremely recent, and may still be forming today. Mercury’s sodium atmosphere was detected by the spectrometers aboard Messenger; this data visualization shows how the sun sweeps Mercury’s atmosphere into a tail. Credit: NASA/APL/CIW Mercury has a tail like a comet. Intense radiation from the nearby sun (just 36 million miles away, on average) boils sodium and calcium atoms out of Mercury’s surface, creating an extremely thin atmosphere. Sunlight pressure and the solar wind then blow some of that atmosphere outward, creating a long, flowing tail. To someone standing on Mercury’s night side, the illuminated tail of sodium atoms would give off a dim orange glow, analogous to the light from a sodium vapor street light. A 400-mile-long cliff on Mercury, called Beagle Rupes, cuts across the desolate landscape. It is part of an enormous vertical fault that formed as the planet cooled and shrank, triggering mighty quakes. Credit: NASA/APL/CIW Mercury is shrinking.
Despite its searing surface–up to 800 degrees Fahrenheit during the day–Mercury is losing heat. Messenger observations show the dramatic consequences of that cooling: The whole planet is shriveling like a raisin, folding the surface into a network of cliffs called lobate scarps. The biggest of those scarps are two miles high and 1,000 miles long. Mercury's diameter today is 8 miles less than it was when the planet formed, scientists estimate. Such cooling and wrinkling will eventually happen to Earth as well, but not for billions of years because of our planet's much greater bulk.

Mercury may be a lot smaller than Earth, but it packs a mean iron punch: Its huge 3-layer metallic core, unique in the solar system, makes the planet exceptionally dense. Credit: Case Western Reserve

Mercury has an iron heart. Scientists have long known that the planet is oddly massive for its size. Messenger has clarified the reason why: Mercury has a huge, dense iron core, about 85 percent the radius of the planet as a whole. In essence, it is an iron ball wrapped in a thin veneer of rock. The middle part of the core is molten and dynamic, giving rise to a strong magnetic field. As for how the planet got that way…nobody knows. Perhaps an early giant impact ripped away most of its rocky outer layers, but the prevalence of easily boiled elements like sulfur argues against such a violent process. The correct answer will tell us a lot about how Earth and the other inner planets formed, but even Messenger hasn't been able to solve this puzzle. Hey, at least the spacecraft left some work for future visitors. Humans will be back in 2024, when the joint European-Japanese BepiColombo mission sets up camp around Mercury. Until then, an archive of more than 270,000 Messenger images and a trove of data will keep researchers plenty busy.

SOURCE http://blogs.discovermagazine.com 2015
(Santa Barbara, Calif.) -- An international team of scientists has observed four super-massive black holes at the centers of galaxies, which may provide new information on how these central black hole systems operate. Their findings are published in December's first issue of the journal Astronomy and Astrophysics. These accreting super-massive black holes at the centers of galaxies are called active galactic nuclei. For the first time, the group of four included a quasar with an active galactic nucleus, located more than a billion light years from Earth. The scientists used the two Keck telescopes on top of Mauna Kea in Hawaii. These are the largest optical/infrared telescopes in the world. The team also used the United Kingdom Infrared Telescope (UKIRT) to follow up the Keck observations and obtain current near-infrared images of the target galaxies. "Astronomers have been trying to see directly what exactly is going on in the vicinity of these accreting super-massive black holes," said co-author Robert Antonucci, a UC Santa Barbara astrophysicist. He explained that the nuclei of many galaxies show intense radiation from X-ray to optical, infrared, and radio, where the nucleus may exhibit a strong jet -- a linear feature carrying particles and magnetic energy out from a central super-massive black hole. Scientists believe these active nuclei are powered by accreting super-massive black holes. The accreting gas and dust are especially bright in the optical and infrared regions of the electromagnetic spectrum. Scientists can now separate the emission from the regions outside the black hole from that in the very close vicinity of the black hole. This is the location of the most interesting physical process, the actual swallowing of matter by the black hole.
"While not resolving this extremely small region directly, we can now better subtract the contribution from surrounding matter when we take a spectrum of the black hole and its surroundings, isolating the spectrum from the matter actually being consumed and lost forever by the hole," said Antonucci. To observe such a distant object sharply enough in infrared wavelengths requires the use of a telescope having a diameter of about 100 meters or more. Instead of building such a large infrared telescope, which is currently impossible, a more practical way is to combine the beams from two or more telescopes that are roughly 100 meters apart. This method, used in radio astronomy for decades, is new for the infrared part of the spectrum. This type of instrument is called a long-baseline interferometer. The Keck telescopes are separated by 85 meters and can be used as an interferometer. Combining the light from the telescopes allows astronomers to detect an interference pattern of the two beams and infer what the black hole vicinity looks like, explained first author Makoto Kishimoto, of the Max Planck Institute for Radio Astronomy in Bonn, Germany. Kishimoto and Antonucci have a longstanding research collaboration, which began with Kishimoto's post-doctoral fellowship with Antonucci in the UCSB Department of Physics a decade ago. Antonucci points out that most of the credit for this current work goes to Kishimoto. In 2003, astronomer Mark Swain at the Jet Propulsion Laboratory and his collaborators used the Keck Interferometer to observe the material accreting around one super-massive black hole, called NGC 4151. This is one of the brightest black holes in the optical and infrared wavelengths. The observations provided astronomers with the first direct clue about the inner region of a super-massive black hole system, said Antonucci. "The results looked puzzling in 2003," said Kishimoto. 
"But with the new data and with more external information, we are quite sure of what we are seeing." According to the team's results, the Keck Interferometer has just begun to resolve the outer region of an active galactic nucleus's accreting gas, where co-existing dust grains are hot enough to evaporate, transitioning directly from a solid to a gas. The W. M. Keck Observatory is a scientific partnership of the California Institute of Technology, the University of California, and NASA.
Gravitation is an important topic for CDS, AFCAT, Air Force Group X & Y Exam. Every year there are 1-2 questions asked from this topic. It is a very interesting and easy topic, therefore one can score good marks from it.

Physics: Important Notes on Gravitation

The Universal Law of Gravitation and Gravitational Constant

In the universe, every body attracts every other body with a force which is directly proportional to the product of their masses and inversely proportional to the square of the distance between them. This attraction force is known as the gravitational force:

F = G M1 M2 / R²

where M1 and M2 are the masses of the two bodies, R is the distance between them and G is the Gravitational Constant, G = 6.67 × 10⁻¹¹ N m² kg⁻².

1. Gravitational force is always attractive in nature.
2. Gravitational force is independent of the nature of the intervening medium.
3. Gravitational force is conservative in nature.
4. It is a central force, so it acts along the line joining the centers of the two interacting bodies, and it obeys the inverse square law.

Acceleration due to Gravity of the Earth and its Variation

The gravitational pull exerted by the earth is called gravity. The acceleration produced in a body due to the force of gravity is called the acceleration due to gravity (g):

g = G Me / Re²

where Me is the mass of the earth and Re is the radius of the earth. Let the density of the earth be ρ; then Me = (4/3) π Re³ ρ and the acceleration due to gravity of the earth is g = (4/3) π G ρ Re.

Variation of Acceleration due to gravity

Due to altitude (h): The acceleration due to gravity at a height h above the earth's surface is

g' = g Re² / (Re + h)² ≈ g (1 − 2h/Re) for h << Re

Thus the value of acceleration due to gravity decreases with the increase in height h.

Due to depth (d): The acceleration due to gravity at depth d below the earth's surface is

g' = g (1 − d/Re)

Thus the value of acceleration due to gravity decreases with the increase in depth d and becomes zero at the center of the earth.
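The surface value and the altitude and depth corrections can be sanity-checked numerically. A sketch using standard values for G and the Earth's mass and radius:

```python
# Numerical check of g and its variation with altitude and depth
# (standard textbook values; a sketch, not survey-grade geodesy).
G = 6.674e-11   # N m^2 kg^-2
Me = 5.972e24   # kg, mass of the Earth
Re = 6.371e6    # m, mean radius of the Earth

g_surface = G * Me / Re**2  # g = G Me / Re^2

def g_at_altitude(h):
    """g' = g (Re / (Re + h))^2 -- exact form; ~ g (1 - 2h/Re) for h << Re."""
    return g_surface * (Re / (Re + h))**2

def g_at_depth(d):
    """g' = g (1 - d/Re): falls linearly with depth, zero at the centre."""
    return g_surface * (1 - d / Re)

print(f"g at surface      : {g_surface:.2f} m/s^2")
print(f"g at 400 km up    : {g_at_altitude(400e3):.2f} m/s^2")  # roughly ISS altitude
print(f"g at 1000 km deep : {g_at_depth(1000e3):.2f} m/s^2")
print(f"g at the centre   : {g_at_depth(Re):.2f} m/s^2")
```

Note the asymmetry: g falls off as an inverse square above the surface but only linearly below it, which is why it reaches zero exactly at the centre.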
Variation of acceleration due to gravity (g') with distance from the center of the earth (R) follows from the two formulas above: it rises linearly from zero at the center to g at the surface, then falls off as 1/R² outside.

Due to rotation of the earth about its axis: The acceleration due to gravity at latitude λ is

g' = g − ω² Re cos² λ

where ω is the angular speed of rotation of the earth about its axis. At the equator, λ = 0°, g' = g − ω² Re. At the pole, λ = 90°, g' = g. Thus, the value of acceleration due to gravity increases from the equator to the pole due to the rotation of the earth. If the earth stops rotating about its axis (ω = 0), the value of g will increase everywhere except at the poles. But if there is an increase in the angular speed of the earth, then the value of g will decrease at all places except at the poles.

Kepler's Laws of Planetary Motion

To explain the motion of planets, Kepler formulated the following three laws.

1. Law of Orbits (First Law): The planets in the solar system revolve around the Sun in elliptical orbits, with the Sun located at one of the foci of the elliptical path of each planet.

2. Law of Areas (Second Law): The area swept per unit time by the position vector of a revolving planet with respect to the Sun remains the same irrespective of the position of the planet on its elliptical path. Kepler's second law follows from the law of conservation of angular momentum. Since the areal velocity of the planet is constant, when the planet is closer to the Sun on its elliptical path it moves faster, covering more path in the given time.

3. Law of Periods (Third Law): The square of the period of revolution of a planet around the Sun is proportional to the cube of the semimajor axis of its orbit: T² ∝ a³.

Gravitational Field and Potential Energy

Gravitational Field (E) - It is the space around a material body in which its gravitational pull can be experienced by other bodies.
The intensity of the gravitational field at a point due to a body of mass M, at a distance r from the center of the body, is E = GM/r².

Gravitational Potential (V) - The gravitational potential at a point in the gravitational field of a body is defined as the amount of work done in bringing a body of unit mass from infinity to that point: V = −GM/r. Gravitational potential (V) is related to the gravitational field (E) as E = −dV/dr.

Gravitational Potential Energy - The gravitational potential energy of a body at a point in the gravitational field of another body is defined as the amount of work done in bringing the given body from infinity to that point. The gravitational potential energy of mass m in the gravitational field of mass M at a distance r from it is U = −GMm/r.

Satellite and its Velocity

A satellite is a natural or artificial body describing an orbit around a planet under its gravitational attraction.

Escape velocity - The velocity an object needs in order to escape the earth's gravitational pull is known as the escape velocity of the earth: ve = √(2GMe/Re) = √(2gRe) ≈ 11.2 km/s.

Orbital velocity - The orbital velocity of a satellite revolving around the earth at a height h is vo = √(GMe/(Re + h)). When the satellite is orbiting close to the earth's surface, h << Re, the orbital velocity of the satellite is vo = √(gRe) ≈ 7.9 km/s. For a point close to the earth's surface the escape velocity and orbital velocity are related as ve = √2 vo.

Time period of a satellite - The time period is the time taken by a satellite to complete one revolution around the earth: T = 2π √((Re + h)³ / (GMe)). When the satellite is orbiting close to the earth's surface, h << Re, then T = 2π √(Re/g) ≈ 84.6 minutes.

Energy of an orbiting satellite - The kinetic energy of a satellite is K = GMem/(2r), the potential energy of a satellite is U = −GMem/r, and the total energy of the satellite is E = K + U = −GMem/(2r), where r = Re + h.

Geostationary Satellite - A satellite which revolves around the earth in its equatorial plane with the same angular speed and in the same direction as the earth rotates about its own axis is called a geostationary satellite.

1. They have a fixed height of 36000 km from the earth's surface.
2.
They revolve in an orbit oriented in the equatorial plane of the earth.
3. Their rotation is the same as that of the earth about its own axis, i.e., from west to east.
4. Their period of revolution around the earth is the same as that of the earth about its own axis.

Polar satellite - A satellite that revolves in a polar orbit is called a polar satellite.

1. These satellites have their orbit such that they pass over the north and the south pole once every 24 hours. They revolve around the earth along the meridian lines.
2. They are situated at an altitude much lower than the geostationary satellites (about 850 km).
3. They are therefore capable of providing more detailed information about clouds and storms.

Weightlessness - A body experiences weightlessness when it is unsupported, so that no force other than gravity acts on it. When an object is in free fall, accelerating at the acceleration due to gravity, it is said to be weightless because no supporting force acts on it.
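The escape velocity, close-orbit velocity and the geostationary height quoted in these notes all follow from the same two constants. A quick numerical check (standard textbook values for G, the Earth's mass and radius, and the sidereal day):

```python
import math

# Standard values (assumed, not from the notes themselves)
G = 6.674e-11   # N m^2 kg^-2
Me = 5.972e24   # kg
Re = 6.371e6    # m

v_escape = math.sqrt(2 * G * Me / Re)   # v_e = sqrt(2 G Me / Re)
v_orbital = math.sqrt(G * Me / Re)      # close-to-surface orbit; v_e = sqrt(2) v_o

# Geostationary radius from T^2 = 4 pi^2 r^3 / (G Me), T = one sidereal day
T_day = 86164.0  # seconds
r_geo = (G * Me * T_day**2 / (4 * math.pi**2)) ** (1 / 3)
h_geo = r_geo - Re

print(f"Escape velocity        : {v_escape / 1000:.1f} km/s")   # ~11.2
print(f"Orbital velocity       : {v_orbital / 1000:.1f} km/s")  # ~7.9
print(f"Geostationary altitude : {h_geo / 1000:.0f} km")        # ~36,000
```

The computed geostationary altitude of roughly 35,800 km matches the "fixed height of 36000 km" stated above.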
Astronomers now prefer to quote distances between stars in Parsecs, rather than the older Light Years. This is because it makes some calculations much simpler. The navigation instruments on Jane's eight-footer are all calibrated in Light Years. So why did Space Fleet decide to go back to the older unit? The answer is very simple. Look out of the window at some object in the distance, such as a tree on the horizon. Now move your head from side to side. The tree will appear to move around in the window. If you have something like a sextant which lets you measure angles very accurately, and you know how far you are moving your head, you can work out how far it is from you to the window. Parsecs work like that. As the Earth goes around the sun, nearer stars seem to wobble when compared against very distant ones. If you know the size of Earth's orbit, and use a telescope to measure the wobble, you can work out how far away the stars are. One problem. One of the constants involved in the definition of the parsec is the size of Earth's orbit around the sun. That's all right, unless you are on another planet circling another star. Jane's home planet, Mercia, orbits a type F star over two hundred light years away in the general direction of Alpha Cygni. On the other hand the Light Year is the distance light travels in 365.25 days, exactly. A day is defined as 24 x 60 x 60 seconds, and a second is defined in terms of the vibrations of the caesium atom. The point is that, if you know the definition, you can go anywhere in the universe and do an experiment to measure a light year and get the same result. And that's why Space Fleet work in light years.
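The head-wobble method described above is stellar parallax, and the parsec is defined so that the distance in parsecs is simply the reciprocal of the parallax angle in arcseconds. A minimal sketch (Proxima Centauri's ~0.768″ parallax is a standard catalogue value, not something from the story):

```python
def distance_parsec(parallax_arcsec):
    """d [pc] = 1 / p [arcsec] -- valid for the tiny angles involved."""
    return 1.0 / parallax_arcsec

LY_PER_PARSEC = 3.2616  # conversion to the light years on Jane's instruments

# Proxima Centauri wobbles by about 0.768 arcseconds
d_pc = distance_parsec(0.768)
print(f"{d_pc:.2f} pc = {d_pc * LY_PER_PARSEC:.2f} light years")
```

This is exactly the Earth-orbit dependence the story complains about: the 1 AU baseline is baked into the definition, so an observer on Mercia would measure different parallaxes for the same stars.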
Astronomers have discovered an alien solar system whose planets are arranged much like those in our own solar system, a find that suggests most planetary systems start out looking the same, scientists say. Researchers studying the star system Kepler-30, which is 10,000 light-years from Earth, found that its three known worlds all orbit in the same plane, lined up with the rotation of the star — just like the planets in our own solar system do. The result supports the leading theory of planet formation, which posits that planets take shape from a disk of dust and gas that spins around newborn stars. "In agreement with the theory, we have found the star's spin to be aligned with the planets," said study co-author Dan Fabrycky, of the University of California, Santa Cruz. "So this result is profound because it is basic data testing the standard planet formation theory." Interactions among planets can later throw such ordered arrangements out of whack, researchers added, creating the skewed orbits seen in many alien systems today.

Planets crossing starspots

The Kepler-30 system consists of three known extrasolar planets circling a sunlike star. All three worlds — Kepler-30b, Kepler-30c and Kepler-30d — are much larger than Earth, with two being even more massive than Jupiter. The three planets were detected in January by NASA's Kepler space telescope, which has spotted more than 2,300 potential alien worlds since its March 2009 launch. Kepler uses the "transit method," noting the telltale brightness dips caused when a planet crosses, or transits, a star's face from the telescope's perspective. In the new study, the scientists studied Kepler observations of the extrasolar system even more closely. Like our own sun, Kepler-30 has starspots, temporary blotches that appear dark because they're significantly cooler than the rest of the star's surface.
The research team determined that all three planets transited the same starspot repeatedly, showing that their orbits must be coplanar and aligned closely with the star's spin. In this sense, the Kepler-30 system looks like our cosmic neighborhood, which sports eight planets all lined up neatly along the sun's rotational equator. Both systems probably formed from a spinning disk of dust and gas, researchers said. Not all exoplanet systems are so well-ordered. For example, many so-called "hot Jupiters" — giant planets that sit very close to their host stars — have off-kilter or even retrograde orbits. But hot Jupiters likely weren't born this way; rather, they were probably knocked askew by gravitational run-ins with other planets. "We have excellent statistical properties of obliquities of stars that host hot Jupiters, and it seems to support the theories in which they are due to dynamical interactions among massive bodies," Fabrycky told SPACE.com via email. The study appears online today (July 25) in the journal Nature.

More data needed

The research team stressed that the new study is a starting point of sorts. Astronomers will need more data to get a better handle on planet formation processes. "Our work is suggestive, but we will need to observe a few more systems to show that indeed, for all coplanar systems similar to our own solar system, the star's spin is aligned with the planets," said lead author Roberto Sanchis-Ojeda of MIT. "So far, we have the solar system and Kepler-30; a few more systems will be helpful to be completely sure." Kepler should allow astronomers to study more such systems soon, Sanchis-Ojeda added. "So far, we have identified between five and 10 new systems where we think we can apply this method, but the number is likely to grow while more data is processed," he told SPACE.com via email. "We are confident that we will be able to test our predictions in the next few years."
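The transit method's brightness dips scale as the square of the planet-to-star radius ratio, which is why Kepler can distinguish "much larger than Earth" worlds from small ones. A rough sketch with standard radii (assuming a Sun-sized host star, which the article only describes as "sunlike"):

```python
# Transit depth ~ (Rp / Rstar)^2: the fraction of the star's disc the planet covers.
R_SUN_KM = 696_000
R_JUPITER_KM = 69_911
R_EARTH_KM = 6_371

def transit_depth(r_planet_km, r_star_km=R_SUN_KM):
    """Fractional dip in stellar brightness during a central transit."""
    return (r_planet_km / r_star_km) ** 2

print(f"Jupiter-size planet: {transit_depth(R_JUPITER_KM) * 100:.2f}% dip")  # ~1%
print(f"Earth-size planet  : {transit_depth(R_EARTH_KM) * 100:.4f}% dip")    # ~0.008%
```

A Jupiter-size world blocks about 1% of the starlight, while an Earth blocks roughly a hundredth of that, which is why the larger Kepler-30 planets were comparatively easy catches.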
How about four supernovae for the price of one? Using the Hubble Space Telescope, Dr. Patrick Kelly of the University of California-Berkeley along with the GLASS (Grism Lens Amplified Survey from Space) and Hubble Frontier Fields teams, discovered a remote supernova lensed into four copies of itself by the powerful gravity of a foreground galaxy cluster. Dubbed SN Refsdal, the object was discovered in the rich galaxy cluster MACS J1149.6+2223 five billion light years from Earth in the constellation Leo. It's the first multiply-lensed supernova ever discovered and one of nature's most exotic mirages. Gravitational lensing grew out of Einstein's theory of general relativity, wherein he predicted massive objects would bend and warp the fabric of spacetime. The more massive the object, the more severe the bending. We can picture this by imagining a child standing on a trampoline, her weight pressing a dimple into the fabric. Replace the child with a 200-pound adult and the surface of the trampoline sags even more. Similarly, the massive Sun creates a deep but invisible dimple in the fabric of spacetime. The planets feel this 'curvature of space' and literally roll toward the Sun. Only their sideways motion or angular momentum keeps them from falling straight into the solar inferno. Curved space created by massive objects also bends light rays. Einstein predicted that light from a star passing near the Sun or other massive object would follow this invisible curved spacescape and be deflected from an otherwise straight path. In effect, the object acts as a lens, bending and refocusing the light from the distant source into either a brighter image or multiple and distorted images. Also known as the deflection of starlight, nowadays we call it gravitational lensing.

Simulation of distorted spacetime around a massive galaxy cluster over time.

Turns out there are lots of these gravitational lenses out there in the form of massive clusters of galaxies.
They contain regular matter as well as vast quantities of the still-mysterious dark matter that makes up most of the matter in the universe. Rich galaxy clusters act like telescopes – their enormous mass and powerful gravity magnify and intensify the light of galaxies billions of light years beyond, making visible what would otherwise never be seen. Let's return to SN Refsdal, named for Sjur Refsdal, a Norwegian astrophysicist who did early work in the field of gravitational lensing. A massive elliptical galaxy in the MACS J1149 cluster "lenses" the 9.4-billion-light-year-distant supernova and its host spiral galaxy from background obscurity into the limelight. The elliptical's powerful gravity, besides distorting spacetime enough to bring the supernova into view, also distorts the shape of the host galaxy and splits the supernova into four separate, similarly bright images. To create such neat symmetry, SN Refsdal must be precisely aligned behind the galaxy's center. The scenario here bears a striking resemblance to Einstein's Cross, a gravitationally lensed quasar, where the light of a remote quasar has been broken into four images arranged about the foreground lensing galaxy. The quasar images flicker or change in brightness over time as they're microlensed by the passage of individual stars within the galaxy. Each star acts as a smaller lens within the main lens. Detailed color images taken by the GLASS and Hubble Frontier Fields groups show the supernova's host galaxy is also multiply-imaged by the galaxy cluster's gravity. According to their recent paper, Kelly and team are still working to obtain spectra of the supernova to determine if it resulted from the uncontrolled burning and explosion of a white dwarf star (Type Ia) or the cataclysmic collapse and rebound of a supergiant star that ran out of fuel (Type II).
The time light takes to travel to the Earth from each of the lensed images is different because each follows a slightly different path around the center of the lensing galaxy. Some paths are shorter, some longer. By timing the brightness variations between the individual images, the team hopes not only to constrain the distribution of bright matter vs. dark matter in the lensing galaxy and in the cluster, but also to use that information to determine the expansion rate of the universe. You can squeeze a lot from a cosmic mirage!
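The deflection Einstein predicted for starlight grazing a massive body can be computed directly: α = 4GM/(c²b), where b is the closest approach of the ray. A quick check with standard solar values reproduces the famous result from the 1919 eclipse tests:

```python
import math

# Light-bending angle for a ray grazing a mass: alpha = 4 G M / (c^2 b).
# Constants below are standard values, not taken from the article.
G = 6.674e-11        # N m^2 kg^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg
R_sun = 6.96e8       # m, grazing impact parameter (the Sun's radius)

alpha_rad = 4 * G * M_sun / (c**2 * R_sun)
alpha_arcsec = math.degrees(alpha_rad) * 3600
print(f"Deflection at the Sun's limb: {alpha_arcsec:.2f} arcseconds")  # ~1.75
```

Galaxy clusters bend light by much larger angles because, although the geometry is more spread out, their masses run to hundreds of trillions of suns.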
White light solar observation or imaging uses exactly the same setup as lunar imaging, with one very important addition: you have to put a solar filter in front of the telescope to reduce to a minimum the amount of sunlight entering the telescope. You have to reject 99.999% of the light and heat energy or you will cause irreparable damage to your eyes and equipment. There are several makes of solar filter on the market. The one I use most of the time is Baader astro solar film; you can buy it in a ready-made cell or just buy the film and make your own cell to hold it. The important thing when making your own cell is not to stretch the film. I also use a Seymour solar filter, but you have to be more careful with this as it's made on a glass substrate and will break if dropped. These filters are relatively inexpensive and give you the means to experience the sun's dynamic surface, which changes from hour to hour, sometimes from minute to minute when a flare occurs. In this respect the sun is far more exciting to observe than the moon with its static features. There is also the Herschel wedge, which uses a special prism to reject that 99.999% of light and heat and fits at the back of the telescope, so you do not use solar film in this case. Never look at the sun when aiming the telescope at it. Yes, this sounds like a daft statement: how can you point something at something else without looking at it? The answer of course is to use the telescope's shadow. When the shadow of the optical tube is round you're pretty well aimed at the sun. Never use the finder scope unless that too has a solar filter on the front of it; if not, it's better to remove it altogether, especially if it has no front cover, or you will burn out the cross hairs. When solar observing or imaging, you will have to focus on the sun's edge or limb so it appears sharp, and then focus on any sunspots until they are as clear as you can make them.
You can see the umbra and penumbra of sunspots and the rice-like granulation on the boiling surface. Image capture is the same as lunar imaging, using Firecapture and Registax. Sunlight is made of all the colours of the spectrum. These are the visible wavelengths of light. There are also invisible wavelengths that we can't see, from the ultraviolet at the short-wavelength end of the spectrum to the far infrared at the long-wavelength end. The crimson Hydrogen alpha (Ha) light at 6562.8 Ångstroms (656 nanometres) is what interests me. This is the narrow slice of the sun's spectrum where filaments, flares, spicules and prominences become visible. It's a constantly changing scene that brings to life the fact that the sun is a nuclear furnace. The equipment used for Ha viewing and imaging is fundamentally different from white light viewing and imaging, and it's far more expensive. There are dedicated Ha telescopes that can only be used for Ha solar work, ranging in cost from £900 to almost £28,000. There are also modules that can be used with ordinary telescopes to convert them into Ha solar scopes. Companies like Lunt, Coronado, Daystar Instruments and Solarscope make these modules – again the cost is over £1000. I use a Daystar Quark (Chromosphere model) that was purchased second hand for £650 and has served me well for a few years now. It goes into the back of the telescope like an eyepiece, and the eyepiece or camera inserts into the back of the module. Again, image capture is the same as for white light imaging, using Firecapture and Registax. A note on Newton's Rings – some cameras suffer from Newton's Rings, banding lines visible in the image. This is caused by reflective interference between two flat surfaces, in this case the back of the Quark and the front of the camera sensor. Removal of these lines is possible by tilting the camera with respect to the Quark. I use a tilt adapter from Rowan Astronomy as it completely eliminates the interference pattern.
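The "reject 99.999%" figure maps directly onto the neutral-density ratings filter makers quote: ND is minus the base-10 logarithm of the transmittance, so passing one part in 100,000 is ND 5, the rating commonly quoted for visual solar film (that mapping is the only assumption here):

```python
import math

# A filter that rejects 99.999% of the light transmits 1e-5 of it;
# manufacturers express this as neutral density ND = -log10(transmittance).
def neutral_density(transmittance):
    return -math.log10(transmittance)

visual_filter = 1 - 0.99999  # transmits 0.001% of the incoming light
print(f"Transmittance {visual_filter:.0e} -> ND {neutral_density(visual_filter):.1f}")
```

Imaging-grade films are sold at lower densities (and so pass more light for short exposures), which is another reason never to put an imaging filter in front of your eye.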
Yes! (*Fist pump*) The Hubble Space Telescope is back in business. After overcoming a few glitches in bringing the orbiting Hubble back online, engineers and scientists aimed the telescope’s prime camera, the Wide Field Planetary Camera 2 (WFPC2), at a pair of gorgeous-looking interacting galaxies called Arp 147. Scientists say the image demonstrates the camera is working exactly as it was before going offline, thereby scoring a “perfect 10” both for performance and beauty. And the two galaxies are oriented so they look like the number 10! How cool is that! Two anomalies in Hubble’s restart caused the B-side of the Science Instrument Control and Data Handling System (SI C&DH-B) and the Advanced Camera for Surveys (ACS) Solar Blind Channel (SBC) to return to a ‘safe hold’ status on October 16. Engineers worked through the problem, and on Oct. 25, the telescope’s science computer began to send commands to the WFPC2. What a relief! Additional commanding allowed engineers on the ground to assess the instrument’s state of health and verify the contents of the camera’s microprocessor memory. And so, this first “post-recovery” image shows the two interacting galaxies. The left-most galaxy, or the “one” in this image, is relatively undisturbed apart from a smooth ring of starlight. It appears nearly on edge to our line of sight. The right-most galaxy, resembling a zero, exhibits a clumpy, blue ring of intense star formation. The blue ring was most probably formed after the galaxy on the left passed through the galaxy on the right. Just as a pebble thrown into a pond creates an outwardly moving circular wave, a propagating density wave was generated at the point of impact and spread outward. As this density wave collided with material in the target galaxy that was moving inward due to the gravitational pull of the two galaxies, shocks and dense gas were produced, stimulating star formation. 
The dusty reddish knot at the lower left of the blue ring probably marks the location of the original nucleus of the galaxy that was hit. Arp 147 appears in the Arp Atlas of Peculiar Galaxies, compiled by Halton Arp in the 1960s and published in 1966. This picture was assembled from WFPC2 images taken with three separate filters. The blue, visible-light, and infrared filters are represented by the colors blue, green, and red, respectively. The galaxy pair was photographed on October 27-28, 2008. Arp 147 lies in the constellation Cetus, and it is more than 400 million light-years away from Earth. Source: Hubble Site
Assuming the Big Bang was 13.8 billion years ago: Is it possible to observe a galaxy's redshift showing distances slightly greater than that, like 48 billion light years away? I understand that in a static universe, light traveling in a straight line would only be visible at less than 46.6 Bly. But can Hubble's constant distort this number? Also, do we assume some error due to the light taking a non-linear path because of lensing? If so, is it possible to have an error of several hundred million years?

The edge of the observable universe is actually 46.6 billion light years away, despite the Big Bang being only 13.8 billion years ago. This is because the light which we are now receiving from the furthest visible material had to travel through ever-expanding space in between, being redshifted down into what we call the Cosmic Microwave Background Radiation (CMBR). There is light from slightly further out that we are technically receiving, but it has been redshifted essentially to infinity. For us to see anything further away than 46.6 Bly, it would have had to exist literally before time itself, or to have travelled faster than the speed of light: two highly improbable things.
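The ~46 Bly figure can be reproduced roughly by integrating the comoving distance in flat ΛCDM out to the surface of last scattering. The Planck-like parameter values below are my assumption (and radiation is neglected, which shaves off a little accuracy at high redshift), so treat this as a sketch of why 13.8 billion years of travel corresponds to 46 billion light years of present-day distance:

```python
import math

# Comoving distance in flat LambdaCDM: D = (c / H0) * integral of dz / E(z),
# with E(z) = sqrt(Om (1+z)^3 + OL).  Parameters are assumed Planck-like values.
H0 = 67.7            # km/s/Mpc
Om, OL = 0.31, 0.69  # matter and dark-energy density fractions
c = 299_792.458      # km/s

def E(z):
    return math.sqrt(Om * (1 + z)**3 + OL)

def comoving_distance_gly(z_max, steps=200_000):
    """Trapezoidal integration of (c/H0) * dz / E(z), converted to Gly."""
    dz = z_max / steps
    total = 0.5 * (1 / E(0) + 1 / E(z_max))
    for i in range(1, steps):
        total += 1 / E(i * dz)
    d_mpc = (c / H0) * total * dz
    return d_mpc * 3.2616e-3  # 1 Mpc = 3.2616e-3 billion light years

# Distance to the CMB (z ~ 1100) comes out close to the 46.6 Bly quoted above
print(f"Comoving distance to the CMB: {comoving_distance_gly(1100):.1f} Gly")
```

The integrand 1/E(z) falls off quickly, which is why pushing z_max beyond ~1100 adds only a small extra distance: most of the comoving horizon is accumulated at modest redshift.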
A white dwarf is a compact star. Its matter is squashed together: gravitation has pulled the atoms close together and stripped off their electrons. The mass of a white dwarf is similar to the mass of the Sun, but its volume is similar to that of the Earth. White dwarfs are the final evolutionary state of all stars whose mass is not high enough to become a neutron star. Over 97% of the stars in the Milky Way will become white dwarf stars. After the hydrogen-fusing lifetime of a main-sequence star ends, it will expand to a red giant which fuses helium to carbon and oxygen in its core. If a red giant does not have enough mass to reach the core temperatures needed to fuse carbon (around 1 billion K), inert carbon and oxygen will build up at its center. After shedding its outer layers to form a planetary nebula, it will leave behind the core, which is the white dwarf. The material in a white dwarf no longer undergoes fusion reactions, so the star has no source of energy. It is not supported by the heat of fusion against gravitational collapse. A star like our Sun will become a white dwarf when it has run out of fuel. Near the end of its life, it will go through a red giant stage, and then lose most of its gas, until what is left contracts and becomes a young white dwarf. White dwarfs were discovered in the 18th century. The first white dwarf star, called 40 Eridani B, was discovered on 31 January 1783 by William Herschel. It is part of a three-star system called 40 Eridani. The second white dwarf was discovered in 1862, but was at first thought to be a red dwarf. It was a small star near the star Sirius. This companion star, called Sirius B, had a surface temperature of about 25,000 kelvin, so it was thought of as a hot star. However, Sirius B was about 10,000 times fainter than the primary, Sirius A. Scientists have discovered that the mass of Sirius B is almost the same as that of the Sun. This means that once, Sirius B was a star similar to our own Sun.
In 1917, Adriaan van Maanen discovered a white dwarf which is called Van Maanen 2. It was the third white dwarf to be discovered. It is the closest white dwarf to Earth, except for Sirius B.

Radiation and temperature

A white dwarf has low luminosity (total amount of light given off) but a very hot core. The core might be 10^7 K, while the surface is only 10^4 K. A white dwarf is very hot when it is formed, but since it has no source of energy, it will gradually radiate away its energy and cool. This means that its radiation, which gives it a blue or white colour at the start, lessens over time. Over a very long time, a white dwarf will cool to temperatures at which it will no longer emit light. Unless the white dwarf gets matter from a companion star or some other source, its radiation comes from its stored heat, which is not replaced. White dwarfs cool slowly for two reasons. They have an extremely small surface area to radiate this heat from, so they cool gradually, remaining hot for a long time. Also, they are very opaque: the degenerate matter that makes up the bulk of a white dwarf stops light and other electromagnetic radiation, so radiation does not carry away much energy. Eventually, all white dwarfs will cool down into black dwarfs, so called because they lack the energy to create light. No black dwarfs exist yet, because it takes longer than the current age of the universe for a white dwarf to cool down. A black dwarf is what will be left of the star after all of its energy (heat and light) is used up. White dwarfs may re-ignite and explode as supernovas if they gain more material. There is a maximum mass for a white dwarf to remain stable, known as the Chandrasekhar limit. A dwarf might pull in material from a companion star, for example, bringing it over the Chandrasekhar limit. The extra mass would start a carbon-fusion reaction. Astronomers think this re-igniting might be the cause of Type Ia supernovas.

References
- Fontaine, G.; Brassard, P. & Bergeron, P. 2001. The potential of white dwarf cosmochronology. Publications of the Astronomical Society of the Pacific 113 (782): 409.
- Richmond, M. "Late stages of evolution for low-mass stars". Lecture notes, Physics 230, Rochester Institute of Technology. Retrieved 3 May 2007.
- Herschel, W. 1785. Catalogue of double stars. Philosophical Transactions of the Royal Society of London 75: 40–126.
- Mestel, L. 1952. On the theory of white dwarf stars. I. The energy sources of white dwarfs. Monthly Notices of the Royal Astronomical Society 112: 583.
- Kawaler, S.D. 1998. White dwarf stars and the Hubble Deep Field. Proceedings of the Space Telescope Science Institute Symposium. p. 252. ISBN 0-521-63097-5
- Opacity is the measure of impenetrability to electromagnetic or other radiation, especially visible light.
- Rincon, Paul 2014. Dead stars 'can re-ignite' and explode. BBC News Science & Environment.
Would dinosaurs have reached human-like intellect had the K-T extinction (an asteroid impact near the Yucatan peninsula) not occurred? One researcher believes so, and he believes that a dinosaur called Troodon would have evolved into a bipedal, human-like being. This is, of course, the old progressive-evolution shtick. It assumes that a man-like being is an inevitability, and that sentience is a foregone conclusion. This belief largely comes from Rushton’s citation of one Dale Russell, who studied the dinosaur Troodon:

Paleontologist Dale Russell (1983, 1989) quantified increasing neurological complexity through 700 million years of Earth history in invertebrates and vertebrates alike. The trend was increasing encephalization among the dinosaurs that existed for 140 million years and vanished 65 million years ago. Russell (1989) proposed that if they had not gone extinct, dinosaurs would have progressed to a large-brained, bipedal descendent. For living mammals he set the mean encephalization, the ratio of brain size to body size, at 1.00, and calculated that 65 million years ago it was only about 0.30. Encephalization quotients for living molluscs vary between 0.043 and 0.31, and for living insects between 0.008 and 0.045 but in these groups the less encephalized living species resemble forms that appeared relatively early in the geologic record, and the more encephalized species resemble those that appeared later. (Rushton, 1997: 294)

This argument is simple to rebut. What is being described is complexity. The simplest possible organisms are bacteria, which reside at the left wall of complexity. The left wall “induces right-skewed distributions”, whereas the right wall induces “left-skewed distributions” (Gould, 1996: 55). Knowing this, biological complexity is a foregone conclusion, sitting at the extreme end of the right tail of the distribution. 
I’ve covered this in my article Complexity, Walls, 0.400 Hitting and Evolutionary “Progress”. Talking about what Troodon may have looked like is a waste of time (highly, highly doubtful; the anthropocentric bias in the reconstruction was pretty strong). I’ve stated this a few times and I’ll state it yet again: without our primate body plan, our brains are pretty much useless. Our body needs our brain; our brain needs our body. Troodons would have stayed quadrupedal; they wouldn’t have gone bipedal. Russell claims that some dinosaurs, specifically Troodon, would eventually have reached the EQ of humans. Troodontids had EQs about 6 times higher than the average dinosaur, had fingers to grasp with, had small teeth, ate meat, and appeared to be social. Dale Russell claims that had the K-T extinction not occurred, Troodon would look similar to us, with a brain size around 1100 cc (the size of erectus before he went extinct). He produced a reconstruction of what he believes the dinosauroid Troodon would look like had they not died out 65 mya. When interviewed about the dinosauroid he imagined, he stated:

The “dinosauroid” was a thought experiment, based on an observable, general trend toward larger relative brain size in terrestrial vertebrates through geologic time, and the energetic efficiency of an upright posture in slow-moving, bipedal animals. It seems to me that such speculation remains acceptable, particularly if directed toward non-anthropoid anatomical configurations. However, I very nearly decided not to publish the exercise because of the damaging effects it might have had on the credibility of my work in general. Most people remained polite, although there were hostile reactions from those with “ultra-quantitative” and “ultra-intuitive” world views.

Why does it look so human? Why does he assume that the ‘ideal body plan’ is what we have? 
It seems extremely biased towards a humanoid morphology, just as other reconstructions are biased towards what we think about certain areas today and how people may have looked in our evolutionary past. Anthropocentric bias permeates deep into evolutionary thinking; this is one such example. In considering this thought experiment of a possible ‘bipedal dinosauroid’, we need to be realistic about its anatomy and morphology. Let’s accept Russell’s contention as true: that troodontids or other ‘highly encephalized species’ could have approached a human EQ, which he puts at 9.4, with troodontids at .34 (the highest), Archaeopteryx at .32, triconodonts (early extinct mammals of the Cretaceous) at .29, and Diademodon at .20 (Russell, 1983). Russell found that the troodontids had EQs 6 times higher than the average dinosaur, and from here he extrapolated that Troodon would have had a brain our size. However, Stephen Jay Gould argued the opposite in Wonderful Life, writing:

If mammals had arisen late and helped to drive dinosaurs to their doom, then we could legitimately propose a scenario of expected progress. But dinosaurs remained dominant and probably became extinct only as a quirky result of the most unpredictable of all events—a mass dying triggered by extraterrestrial impact. If dinosaurs had not died in this event, they would probably still dominate the large-bodied vertebrates, as they had for so long with such conspicuous success, and mammals would still be small creatures in the interstices of their world. This situation prevailed for one hundred million years, why not sixty million more? Since dinosaurs were not moving towards markedly larger brains, and since such a prospect may lay outside the capability of reptilian design (Jerison, 1973; Hopson, 1977), we must assume that consciousness would not have evolved on our planet if a cosmic catastrophe had not claimed the dinosaurs as victims. 
In an entirely literal sense, we owe our existence, as large reasoning mammals, to our lucky stars. (Gould, 1989: 318)

If a large brain was probably outside of reptilian design, then a dinosaur—or a descendant (Troodon included)—would never have reached human-like intelligence. Some may say that dinosaur descendants could have evolved brains our size, since birds supposedly have brains that lie outside of reptilian design. However, one of the most famous fossils ever found, Archaeopteryx, was within reptilian design, having feathers along with wings which would have been used for gliding (whether or not it flew is debated). Birds descend from theropods; Anchiornis and other older species are thought to be among the first birds. Most bird traits, such as bipedal posture, hinged ankles, hollow bones and the S-shaped neck, are features derived from their ancestors. If we didn’t exist, then if any organism were to come close to our intelligence, I would bet that some corvids would, seeing as they have a higher neuronal packing density and more interconnections compared to the “layered mammalian brain” (Olkowicz et al., 2016). Nick Lane, biochemist and author of the book The Vital Question: Evolution and the Origins of Complex Life, believes a type of intelligent octopus may have evolved, writing:

Wind back the clock to Cambrian times, half a billion years ago, when animals first exploded into the fossil record, and let it play forwards again. Would that parallel world be similar to our own? Perhaps the hills would be crawling with giant terrestrial octopuses. (Lane, 2015: 21)

We exist because we are primates. Our brains are scaled-up primate brains (Herculano-Houzel, 2009). Our primate morphology—along with our diet, sociality, and culture—is also why we came to take over the world. 
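For concreteness, EQ is just the ratio of observed brain mass to the brain mass expected for an animal of that body mass. A minimal sketch using Jerison's mammal-referenced formula (expected brain ≈ 0.12 × body^(2/3), masses in grams); the input masses are illustrative round numbers of mine, and the resulting scale differs from Russell's figures quoted above:

```python
def eq_jerison(brain_g, body_g):
    """Encephalization quotient: observed brain mass divided by the brain
    mass expected for an average mammal of that body mass (Jerison, 1973)."""
    expected = 0.12 * body_g ** (2.0 / 3.0)
    return brain_g / expected

# Illustrative round numbers (assumptions, not measurements from the post)
human_eq = eq_jerison(1_350, 65_000)  # ~1.35 kg brain, ~65 kg body
troodon_eq = eq_jerison(45, 50_000)   # ~45 g brain, ~50 kg body
print(f"human EQ ≈ {human_eq:.1f}, Troodon-like EQ ≈ {troodon_eq:.2f}")
```

The point of the quotient is that the denominator grows with body mass, so a large animal needs a disproportionately large brain just to hold its EQ steady.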
Our body plan—which, as far as we know, only evolved once—is why we have the ability to manipulate our environment and use our superior intelligence, which is due to the number of neurons in our cerebral cortex, the highest in the animal kingdom at 16 billion (Herculano-Houzel, 2009). Why postulate that a dinosaur could have looked anywhere close to us? This also ignores the fact that decimation and diversification ‘decide the fates’, so to speak, of the species on Earth. Survival during an extinction event is strongly predicated on chance (and size): the smaller an organism is, the more likely it is to survive an extinction event. Who’s to say that Troodon doesn’t go extinct through an act of contingency, say, 50 mya, even if the K-T extinction never occurred? In conclusion, the supposed ‘trend’ in brain size evolution is just random fluctuation—an inevitability, since life began at the left wall of complexity. Gould wrote about a drunkard’s walk in his book Full House (Gould, 1996), in which he illustrates the example of a drunkard walking away from a bar, with the bar wall being the left wall of complexity and the gutter being the right wall. The gutter will always be reached; and if he hits the wall, he will lean against the wall “until a subsequent stagger propels him in the other direction. In other words, only one direction of movement remains open for continuous advance—toward the gutter” (Gould, 1996: 150). I bring up this old example to illustrate but one salient point:

In a system of linear motion structurally constrained by a wall at one end, random movement, with no preferred directionality whatever, will inevitably propel the average position away from a starting point at the wall. The drunkard falls into the gutter every time, but his motion includes no trend whatever toward this form of perdition. 
Similarly, some average or extreme measure of life might move in a particular direction even if no evolutionary advantage, and no inherent trend, favor that pathway (Gould, 1996: 151).

We humans are lucky we are here. Contingencies of ‘just history’ are why we are here, and if we were not here—if the K-T extinction never occurred—and Troodon or another dinosaur species survived to the present day, they would not have reached our ‘level’ of intelligence. To believe so is to believe in teleological evolution, which certainly is not true. Anthropocentric bias runs deep in evolutionary biology and paleontology. People assume that since we are—according to some—the ‘pinnacle’ of evolution, we, or something like us, would eventually have evolved. Any ‘trends’ can be explained as life moving away from the left wall of complexity, with the left wall—the mode of life, the modal bacter—remaining unchanged. We are at the extreme tail of the distribution of complexity while bacteria are at the left wall. Complex life was inevitable, since bacteria, the simplest life, began at the left wall. And so these ‘trends’ in brain size are just that, increasing complexity, not any type of ‘progressive evolution’. Evolution just happens; natural selection occurs based on the local environment, not any inherent or intrinsic ‘progress’.

Gould, S. J. (1989). Wonderful Life: The Burgess Shale and the Nature of History. New York: Norton.
Gould, S. J. (1996). Full House: The Spread of Excellence from Plato to Darwin. New York: Harmony Books.
Herculano-Houzel, S. (2009). The human brain in numbers: a linearly scaled-up primate brain. Frontiers in Human Neuroscience, 3. doi:10.3389/neuro.09.031.2009
Lane, N. (2015). The Vital Question: Energy, Evolution, and the Origins of Complex Life. New York: W.W. Norton & Company.
Olkowicz, S., Kocourek, M., Lučan, R. K., Porteš, M., Fitch, W. T., Herculano-Houzel, S., & Němec, P. (2016). Birds have primate-like numbers of neurons in the forebrain. 
Proceedings of the National Academy of Sciences, 113(26), 7255-7260. doi:10.1073/pnas.1517131113
Rushton, J. P. (1997). Race, Evolution, and Behavior: A Life History Perspective. New Brunswick: Transaction.
Russell, D. A. (1983). Exponential evolution: Implications for intelligent extraterrestrial life. Advances in Space Research, 3(9), 95-103. doi:10.1016/0273-1177(83)90045-5
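Gould's drunkard's-walk argument, quoted above, is easy to verify numerically: give a walker unbiased ±1 steps and a reflecting wall on the left, and the average position still drifts away from the wall. A quick simulation (all parameters are arbitrary choices of mine):

```python
import random

def drunkard_walk(steps=1_000, wall=0):
    """Unbiased random walk with a reflecting wall on the left (the 'bar wall').
    Each step is +1 or -1 with equal probability; positions below the wall
    bounce back to it, as in Gould's image of the drunk leaning on the wall."""
    pos = wall
    for _ in range(steps):
        pos += random.choice((-1, 1))
        if pos < wall:
            pos = wall
    return pos

random.seed(42)  # arbitrary seed for repeatability
walks = [drunkard_walk() for _ in range(2_000)]
mean_pos = sum(walks) / len(walks)
print(f"mean final position ≈ {mean_pos:.1f} steps right of the wall")
```

No individual step favours the right, yet the mean final position sits well away from the wall, which is exactly the sense in which a rise in maximum complexity needs no built-in drive toward 'progress'.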
Giant world Jupiter becoming obvious in May’s twilit nights

With its lengthening days and increasingly twilit nights, May is hardly a vintage month for stargazing from Scotland’s latitudes. Official (nautical) darkness for Edinburgh lasts for more than five hours around midnight as the month begins but dwindles to nothing by the start of June and does not return until 12 July. Edinburgh’s sunrise/sunset times change from 05:30/20:51 on the 1st to 04:37/21:45 on the 31st, while the Moon is new on the 4th, at first quarter on the 12th, full on the 18th and at last quarter on the 26th. Our charts show Leo diving westwards as the Summer Triangle formed by Vega, Altair and Deneb is climbing in the east. After the Moon, our most obvious nighttime object is the planet Jupiter, which rises in the south-east 30 minutes before our map times and reaches less than 12° high in the south before dawn. In fact, look for the Moon above-right of Jupiter on the night of the 19th and closer to the planet’s left on the 20th. The giant world is now edging westwards against the stars of southern Ophiuchus and brightens from magnitude -2.4 to -2.6 as its distance falls from 678 million to 644 million km. The Jovian globe spans 45 arcseconds in mid-May and telescopes show that it is crossed by bands of cloud that lie parallel to its equator. The four principal moons of Jupiter are also easy targets, though sometimes one or more hide from view as they pass in front of, or behind, the disk or are eclipsed in Jupiter’s shadow. Saturn trails almost two hours behind Jupiter but is fainter at magnitude 0.5 to 0.3. It lies in Sagittarius, below the Teaspoon asterism, where it stands above the Moon but low down in the south-south-east before dawn on the 23rd. Always an impressive sight through a telescope, though not helped by its low altitude, its disk appears 18 arcseconds wide at mid-month, circled by rings that measure 40 by 16 arcseconds. 
Mercury and Venus are too deep in the morning twilight to be seen at present, though Mercury slips around the Sun’s far side on the 21st. The morning twilight also hinders views of the Eta-Aquarids meteor shower which peaks around the 6th-7th and brings swift meteors that stream from a point which hovers low in our east-south-eastern sky for two hours before sunrise. Mars sets a few minutes before our star map times and may be hard to spot low down in our west-north-western evening sky. It stands between the horns of Taurus on the 1st and shines at magnitude 1.6 to rival the star Elnath, which lies 5° above Mars and marks the tip of the Bull’s northern horn. Mars’ pinkish-orange hue is best appreciated through binoculars as the planet dims further to magnitude 1.8 and speeds 20° eastwards during May, crossing into Gemini at mid-month and sweeping only 0.2° north of the star cluster M35 (use binoculars) on the 19th. It recedes from 335 million to 363 million km during May but, at a mere 4 arcseconds in diameter, is too small to be of telescopic interest. Catch Mars above the slim earthlit Moon on the 7th. NASA’s InSight lander used its sensitive French-built seismometer to detect its first likely marsquake on 6 April. The faint vibrations are now being studied for clues as to Mars’ interior. Another instrument, a German heat probe designed to drill up to five metres into the surface, seems to have encountered a rock and is currently stalled well short of its target depth. The Plough looms directly overhead at nightfall and stands high in the west by our map times. If we extend a curving line along its handle, we reach the prominent star Arcturus which, at magnitude -0.05, is the brightest of all the stars in the sky’s northern hemisphere and, after Sirius, the second brightest (nighttime) star visible from Scotland, although both Vega and Capella come close. 
Classed officially as a red giant star, though more yellow-orange in hue, Arcturus is slightly more massive than our Sun and perhaps 50% older. As such, it has depleted the hydrogen used to power its core through nuclear fusion, progressed to fusing helium instead and inflated to 25 times the Sun’s radius and 170 times its luminosity. Eventually, after shedding its outer layers, it will settle down as a dim white dwarf star comparable in size to the Earth. At present, though, we admire it as the leading star in the constellation of Bootes which has been likened to a pale imitation of Orion or even an ice-cream cone. Bootes takes its name from the Greek for herdsman or plowman, apparently in relation to the seven stars of the Plough which were also known as the “Seven Oxen” in early times. Arcturus’ own name comes from the Greek for “guardian of the bear”, another reference to its role in following Ursa Major across the sky. In truth, it is something of a temporary guardian since it is rushing past our solar system at 122 km per second at a distance of 36.7 light years and will likely fade from naked-eye view within (only) half a million years as it tracks south-westwards in the direction of Virgo and the bright star Spica. It is in the north of Virgo, and roughly coincident with the “D” of the label for Denebola on our south star map, that we find the galaxy M87, the owner of the supermassive black hole whose image was released a few weeks ago. M87 is 54 million light years away and visible as a smudge in small telescopes.

Diary for 2019 May
Times are BST
5th 00h New moon
6th 15h Peak of Eta-Aquarids meteor shower
6th 23h Moon 2.3° N of Aldebaran
8th 01h Moon 3° S of Mars
11th 03h Moon 0.3° N of Praesepe
12th 02h First quarter
12th 16h Moon 3° N of Regulus
18th 22h Full moon
19th 18h Mars 0.2° N of star cluster M35 in Gemini
20th 18h Moon 1.7° N of Jupiter
21st 14h Mercury in superior conjunction
22nd 23h Moon 0.5° S of Saturn
26th 18h Last quarter
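Magnitude figures like those above convert to brightness ratios through Pogson's relation, in which five magnitudes correspond to a factor of exactly 100. A small sketch (the function name is mine):

```python
def flux_ratio(m1, m2):
    """How many times brighter an object of magnitude m1 appears than one
    of magnitude m2 (Pogson's relation: 5 magnitudes = a factor of 100)."""
    return 10.0 ** (0.4 * (m2 - m1))

# Jupiter at -2.6 versus Saturn at 0.5, the May magnitudes quoted above
print(f"Jupiter outshines Saturn by a factor of about {flux_ratio(-2.6, 0.5):.0f}")
```

The same relation explains why Arcturus at -0.05 comfortably beats every other northern-hemisphere star: each whole magnitude is a factor of roughly 2.512 in brightness.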
Giant planets hang low in evenings as Perseid meteors fly

Recent weeks have seen the Earth pass between the Sun and its two largest planets, the gas giants Jupiter and Saturn. Now they hang low in our evening sky, with Jupiter brighter than any star but less than 12° high in the south-south-west at nightfall as it sinks to set in the south-west one hour after our star map times. Saturn, one tenth as bright, trails 30° behind Jupiter and crosses our meridian a few minutes before the map times. With the exception of Mercury, these are our only naked eye planets. Both Venus and Mars are hidden on the Sun’s far side where Venus reaches its superior conjunction on the 14th. Mars stands at the far-point in its orbit of the Sun on the 26th and, by my reckoning, is further from the Earth on the 28th (400 million km) than it has been for 32 years. The Summer Triangle of bright stars, Deneb, Vega and Altair, fills the high southern sky at our map times as the Plough stands in the north-west and the “W” of Cassiopeia climbs high in the north-east. Below Cassiopeia is Perseus and the Perseids radiant, the point from which meteors of the annual Perseids shower appear to diverge as they disintegrate in the upper atmosphere at 59 km per second. The meteoroids, debris from Comet Swift-Tuttle, encounter the Earth between about 17 July and 24 August but arrive in their greatest numbers around the shower’s maximum, expected at about 08:00 BST on the 13th. Sadly, the bright moonlight around that date means that we may see only a fraction of the 80-plus meteors that an observer might count under ideal moonless conditions. It is just as well that Perseids include a high proportion of bright meteors prone to leaving glowing trains in their wake. Our best night is likely to be the 12th-13th as the radiant climbs to stand around 70° high in the east as the morning twilight takes hold. 
The Sun drops almost 10° lower in our midday sky during August as the sunrise/sunset times for Edinburgh change from 05:16/21:21 BST on the 1st to 06:14/20:10 BST on the 31st. New moon on the 1st is followed by first quarter on the 7th, full moon on the 15th, last quarter on the 23rd and new moon again on the 30th. In a month that sees Jupiter dim slightly from magnitude -2.4 to -2.2 and its distance increase from 691 million to 756 million km, its westerly motion in southern Ophiuchus slows to a halt and reverses at a so-called stationary point on the 11th. Its cloud-banded disk, around 41 arcseconds wide, remains a fascinating telescopic sight, particularly given the recent disruption to its Great Red Spot. Saturn recedes from 1,362 million to 1,409 million km and dims from magnitude 0.2 to 0.3 as it creeps westwards below the Teaspoon, a companion asterism to the Teapot of Sagittarius. Through a telescope, Saturn’s disk appears 18 arcseconds wide while the rings span 41 arcseconds and have their north face tipped at 25° towards the Earth. Catch the Moon close to Jupiter on the 9th and to the left of Saturn as the Perseids peak on the 12th-13th. Mercury stands between 2.5° and 5° high in the east-north-east one hour before Edinburgh’s sunrise from the 5th to the 22nd. It becomes easier to spot later in this period as it brightens from magnitude 1.0 to -1.2, though we need a clear horizon and probably binoculars to spot it. It is furthest from the Sun, 19°, on the 10th. The only constellation named for a musical instrument, Lyra the Lyre, stands high on the meridian as darkness falls. Its leading star, the white star Vega, is more than twice as massive as the Sun and 40 times more luminous, making it the second brightest star in our summer night sky (after Arcturus) at its distance of 25 light years (ly). 
Infrared studies show that Vega is surrounded by disks of dust, but whether this hints at planets coalescing or asteroids smashing together is a matter of controversy – perhaps a mixture of the two. Some 162 ly away and three Moon-breadths above-left of Vega is the interesting multiple star Epsilon, the Double Double. Binoculars show two almost-equal stars, but telescopes reveal that each of these is itself double. One of the four has its own dim companion and the whole system is locked together gravitationally, though the orbital motions are so slow that little change in their relative positions is noticeable over a lifetime. The more dynamic system, Beta Lyrae (see map), lies almost 1,000 ly away and has two main component stars that almost touch as they whip around each other in only 12.9 days. Tides distort both stars and, as they eclipse each other, Beta’s total brightness varies continuously between magnitudes of 3.2 and 4.4 – sometimes it can rival its neighbour Gamma while at others it can be less than half as bright. At a distance of 2,570 ly and 40% of the way from Beta to Gamma is the dim Ring Nebula or M57. At magnitude 8.8 and appearing through a telescope like a small smoke ring around one arcminute across, it surrounds a much fainter white dwarf star which is what remains of a Sun-like star that puffed away its atmosphere towards the end of its life. The Dumbbell Nebula, M27, lies further to the southeast in Vulpecula, some 3° north of the arrowhead of Sagitta the Arrow. At 1,230 ly, its origin is identical to that of the Ring though it is larger and brighter and readily visible through binoculars. 
Diary for 2019 August
Times are BST
1st 04h New moon
7th 19h First quarter
10th 00h Mercury furthest W of Sun (19°)
10th 00h Moon 2.5° N of Jupiter
11th 17h Jupiter stationary (motion reverses from W to E)
12th 11h Moon 0.04° S of Saturn
13th 08h Peak of Perseids meteor shower
14th 07h Venus in superior conjunction
15th 13h Full moon
17th 11h Mercury 0.9° S of Praesepe
23rd 16h Last quarter
24th 11h Moon 2.4° N of Aldebaran
26th 02h Mars farthest from Sun (249m km)
28th 13h Moon 0.6° N of Praesepe
30th 12h New moon
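Disk and ring sizes like those quoted above follow from the small-angle formula θ ≈ diameter/distance, converted to arcseconds. A quick check against Saturn's figures, assuming a textbook equatorial diameter of about 120,536 km (my number, not the column's):

```python
ARCSEC_PER_RADIAN = 206_264.8  # arcseconds in one radian

def angular_size_arcsec(diameter_km, distance_km):
    """Small-angle apparent size of a disk, in arcseconds."""
    return diameter_km / distance_km * ARCSEC_PER_RADIAN

# Saturn at roughly its mid-August distance of ~1,385 million km
print(f"Saturn's disk ≈ {angular_size_arcsec(120_536, 1.385e9):.1f} arcseconds")
```

The result agrees with the 18 arcseconds quoted in the column; the same formula with the ring system's larger diameter reproduces the ring span.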
TAURID METEOR SHOWER: Earth is entering a stream of debris from periodic Comet 2P/Encke, and this is causing the annual Taurid meteor shower. The shower has a broad maximum lasting from Nov. 5th through 12th. At most, only about 5 Taurids per hour streak across the sky, but what they lack in number they make up for in dazzle. Taurid meteors tend to be fireballs, very bright and slow. Look for them falling out of the constellation Taurus during the hours around midnight. [sky map] meteor images: from John Chumack of Dayton, Ohio MERCURY'S COMET-LIKE TAIL: The ultrathin atmosphere of Mercury is blown back by solar radiation pressure, forming an enormous comet-like tail. NASA's MESSENGER spacecraft flew through that tail on Sept. 29th and found it less enormous than it used to be. The following diagram compares the situation in Oct. 2008 vs. Sept. 2009: Red traces the distribution of sodium atoms detected by a spectrometer onboard MESSENGER. "The neutral sodium tail, so prominent in our first two flybys of Mercury, is now significantly reduced in extent," announced planetary scientist Ron Vervack at a NASA press conference yesterday. The material in Mercury's tail comes from the surface of the planet itself, which is blasted by solar wind and micrometeorites. During MESSENGER's recent flyby of Mercury, the net effect of solar radiation pressure was small, and the sodium atoms were not accelerated away from the sun as they were during the earlier flybys, resulting in a diminished planetary tail. That's space weather. Get the full story from Science@NASA. WHAT ARE THE ODDS? A ray of light leaves the sun, travels 93 million miles, bounces off some moondust, angles toward Earth, travels another quarter million miles to Switzerland, where it threads a 10-meter hole in the Alps and passes through the lens of an onlooker's digital camera. This series of seemingly improbable events actually happened on Oct. 29th. 
The onlooker, Ricklin Andreas of Elm, Switzerland, took a picture to prove it: "The full Moon was shining through Martin's hole--a natural gap in the rock of the Tschingelhorn," explains Andreas. What are the odds? It happens about twice a year. The sun itself shines through the gap on March 12/13 and Oct. 1/2. Likewise, the full (or nearly-full) Moons of March and October are in the right position to peek through the hole, although they don't do it on the same fixed dates as the sun because of complications caused by the Moon's 27.3-day, 5°-tilted orbit. Andreas happened to be in the right place at the right time. To see the improbable, keep looking up!
Vesta was discovered on March 29, 1807 by astronomer Heinrich Wilhelm Olbers, and is named after the virgin goddess of home and hearth from Roman mythology. With about twice the surface area of California, Vesta is the second largest object in the asteroid belt after the dwarf planet Ceres. Vesta has many unique surface features which intrigue scientists. Dawn found two colossal impact basins in Vesta’s southern hemisphere: the 310-mile (500 km) wide Rheasilvia basin, and the older 250-mile (400 km) wide Veneneia crater. The combined scar created by these two impacts was apparent even in Hubble Space Telescope images, which also discerned a peak in the center. Dawn’s data showed that Rheasilvia’s width is 95% of the mean diameter of Vesta and that it is about 12 miles deep. Its central peak rises 12-16 miles and is more than 100 miles wide, rivaling Mars' Olympus Mons as the largest mountain in the solar system. What happened to the one percent of Vesta that was propelled from its home during those impacts? The debris, ranging in size from sand and gravel to boulders and small asteroids (known as Vestoids), was ejected into space, where it began its own journey through the solar system. Scientists believe that about 6 percent of all meteorites we find on Earth are a result of this ancient impact in deep space.

Accomplishments at Vesta

Dawn mapped Vesta's geology, composition, cratering record and more. It determined Vesta's interior structure by measuring its gravity field. Together, this data has elucidated the formation and evolution of this small rocky world in the main asteroid belt. Dawn found a heavily cratered surface on Vesta, with rough topography that is transitional between planets and asteroids. In addition to creating an enormous hole at Vesta’s south pole, the two giant impacts caused planet-encircling trough systems to form. 
The Dawn mission confirmed that Vesta is the parent body of the howardite-eucrite-diogenite (HED) meteorites, via confident matches between lab-based measurements of HEDs and Dawn's measurements of the elemental composition of Vesta’s surface and its specific mineralogy. Dawn also found that Vesta's gravity field is consistent with the presence of an iron core around 140 miles in diameter, in agreement with the size predicted by HED-based differentiation models. Together, these results confirm that Vesta experienced pervasive, perhaps even global melting, implying that differentiation may be a common history for large planetesimals that condensed before short-lived heat-producing radioactive elements had decayed away. Surprisingly, pitted terrains and gullies were found in several young craters, interpreted as evidence of volatile releases and transient water flow. Vesta's composition is volatile-depleted, so these hydrated materials are likely exogenic.
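Whether impact debris leaves Vesta at all comes down to its modest escape velocity. A minimal sketch, assuming textbook values of roughly 2.59 × 10^20 kg for Vesta's mass and 263 km for its mean radius (my numbers, not from the page):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity_kms(mass_kg, radius_m):
    """Minimum speed, in km/s, for debris to escape a body's gravity: sqrt(2GM/R)."""
    return math.sqrt(2.0 * G * mass_kg / radius_m) / 1000.0

# Assumed Vesta mass (~2.59e20 kg) and mean radius (~263 km)
print(f"Vesta's escape velocity ≈ {escape_velocity_kms(2.59e20, 2.63e5):.2f} km/s")
```

At only a few hundred metres per second, ejecta from basin-forming impacts can exceed this threshold, which is how the Vestoids and the HED meteorites got loose.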
Geysers not fed by misty water vapour after all. As much as 50% of the plume shooting out of geysers on Saturn's moon Enceladus could be ice, a researcher revealed yesterday at a meeting of the American Geophysical Union in San Francisco, California. Previously, scientists had thought that only 10–20% of the plume was made up of ice, with the rest being water vapour. Some researchers think that the study, led by Andrew Ingersoll, a planetary scientist from California Institute of Technology in Pasadena, backs the idea that the plumes are caused by a sub-surface lake boiling off into space rather than the product of colder processes such as sublimation. Ingersoll based his estimate on a series of photos of Enceladus taken in 2006 by the Cassini spacecraft. That was a "very special time", he says, when two important events occurred simultaneously. Enceladus was perfectly backlit by the Sun, allowing ice particles in its geyser plumes to be easily observed. And at the same time, Cassini was in Saturn's shadow. "That allowed us to look back toward the Sun without blinding the instruments," Ingersoll says. The photos showed Enceladus at different points in its orbit in three wavelength bands — ultraviolet, visible and near-infrared. In combination, the images allowed Ingersoll's team to determine the size of the plume's ice particles as well as their concentration. Feeding the plume The team then calculated how fast new ice particles had to blast out of the moon's geysers for the plume to contain the amount of ice seen in the images. They found that Enceladus must be emitting at least 200 kilograms of ice per second — almost identical to the amount of water vapour other measurements had determined it to be emitting. That 1:1 ratio between ice and water vapour is a major constraint on how the geysers must be operating, Ingersoll says. 
In a paper accepted by the journal Icarus, he examined a scenario in which the geysers are fed by a misty vapour that sublimates from ice in underground chambers. But that doesn't fit the new data, he says. "You [would] get 1% ice, and 99% water vapour." One way to get more ice, Ingersoll says, is if the geyser chambers contain water. When a crack opens up, the water is exposed to the vacuum of space and starts to boil, he says. But much of the steam immediately freezes, "and you get a large fraction of solids" in the plume. Liquid water is an exciting idea for those hoping we might one day find life on Enceladus. But not everyone believes that water is needed to produce the plumes. Susan Kieffer, a planetary scientist from the University of Illinois at Urbana-Champaign, for instance, was lead author of the paper that found the plume contained only 10–20% ice [1]. However, she has no problem with Ingersoll's finding. "Andy had access to a whole new bunch of data," she says. But she's not about to concede that Ingersoll's finding requires water to be present. Rather, she has her own model, in which the geysers are fed by the explosive decompression of materials called clathrates, when cracks in the crust expose them to the vacuum. Clathrates are molecular-sized cages of a compound that can contain many other molecules. The clathrates that feed Enceladus's plumes could therefore encage the numerous other gases that make up about 10% of the plume. "Clathrates are garbage bins for storing gases," Kieffer says. When the clathrates break down, Kieffer argues, they would release not just water vapour, but also ice particles into the plume — likely enough to account for Ingersoll's data. "We can make a lot of ice in our model," she says.
Thus, of the three main theories for the formation of Enceladus's geysers — sublimation in cold, misty chambers; liquid water (that might sustain life) boiling into vacuum; and exploding clathrates — only the first seems to be ruled out by Ingersoll's find. The remaining debate is as alive as ever. "I think it's safe to say that there's years of debate left in it," says Carolyn Porco, head of the Cassini imaging team. Kieffer concurs. "This argument isn't going to go away as the result of one AGU meeting," she says. "It may hang around until there is another spacecraft, or until there is a definitive observation."

1. Kieffer, S. W. et al. Icarus (2009). doi:10.1016/j.icarus.2009.05.011
While the universe isn’t static, it does take a very, very long time for it to change in any noticeable way – especially for creatures like us, who live for only a tiny fraction of the smallest cosmic moments, relatively speaking. While that makes scientific studies such as cosmology a bit more difficult to manage, it can make everyday life a bit easier. For instance, in ancient times, it was very easy to measure the length of a day by the apparent arc of the sun across the sky. But measuring longer spans of time was much harder. And that’s where the moon comes in. The Earth’s sole natural satellite orbits the planet far more slowly than the planet rotates, and as it does, the sunlit portion of its surface visible from Earth changes from night to night – producing the phases of the moon. Those phases can be used to measure time across a longer period. So it makes sense that this ancient form of timekeeping would eventually be incorporated into human devices – namely, clocks and watches. But a moon phase complication is a little bit more (for lack of a better term) complex than that. As such, we’ve done the legwork for this new addition to our series on watch complications: an explanation of the origins, purpose, and functionality of moon phase watches.

History of the Moon Phase

In the modern era, we utilize a calendar system that was developed in Roman times. Originally proposed by and named after the Roman statesman Julius Caesar, this calendar – aptly called the Julian calendar – went into effect on January 1, 45 BC, and it remained the dominant calendar in the West for more than 1,600 years, until the Gregorian reform of 1582 made a minor adjustment to its leap-year rule. Long before that, as far back as 34,000 years ago, we used a different method to track the passing of long stretches of time: the phases of the moon.
As most folks probably know, the moon changes appearance over time from full to new and back again over the course of roughly 30 days. In fact, that’s the origin of what would later become the Julian months. And since the difference between a day and a year is a ratio of roughly 365 to 1, the month turned out to be a good intermediate unit to break up the difference. As such, it’s been one of the core time-telling units for about as long as mankind has been civilized – and probably longer. Believe it or not, the moon phase complication – as a stand-alone device – actually predates the origins of the clock as we know it by more than 1,700 years. Estimated to have been created around the 2nd century BC, the Antikythera mechanism (discovered in a shipwreck off the coast of an island of the same name) is an analog computing device developed in ancient Greece to predict astronomical events, such as eclipses and – that’s right – the changing phases of the moon. It wouldn’t, however, be incorporated into the functionality of actual clocks until much later – originally within the astronomical clocks built into churches and cathedrals, and later into standalone examples. The first instances of moon phase complications being incorporated into standalone clock functionality can be traced to Germany and England in the 16th century, where they were built into grandfather clocks – a luxury reserved for the incredibly wealthy. The complication was then miniaturized and incorporated into pocket watches and then, not long after, what we now know as the wristwatch. Now, moon phase watches are so common that watchmakers and brands have developed proprietary versions – their own personal signature on the classic time-telling mechanism. More impressive, perhaps, is the continuing goal to create the most accurate moon phase watch ever made.
You see, most standard moon phase complications are rendered inaccurate after approximately two to three years of functioning. During the 19th and 20th centuries, however, some tinkerers created versions that would stay accurate for about 122 years. Now, for a price, people can get their hands on moon phase watches that are accurate for literally thousands of years. The current record holder, however, has a complication that will remain accurate for over 2 million years.

Phases of the Moon: The Synodic Period

To understand how a moon phase watch works, it’s actually quite important to understand how the moon itself works. There are several defined periods for a lunar month, including synodic, sidereal, tropical, anomalistic, and draconic. For our purposes, the only one that matters is the synodic period, as it is the one that is tied directly to the moon phase complication. The synodic period of the moon is defined as the time it takes for the moon to complete a full cycle of its phases – from new moon to new moon. Now, the moon’s orbit around the Earth takes approximately 27.3 days to complete a full 360-degree arc (this is the sidereal period). The phases of the moon, however, take slightly longer, as the change in position of the Earth relative to the sun must be taken into account. That being said, the amount of time it takes to complete a full phase cycle is about 29.5 days. To be more exact, the synodic period is 29 days, 12 hours, and 44 minutes (29.53 days). Since clocks run on a 24-hour scale, there isn’t a simple fix for incorporating a fractional period into the functionality of a watch: 29.53 days don’t translate directly into watch functionality, because there are no partial days. Every day is 24 hours without fail.
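The arithmetic behind that sidereal/synodic gap is simple enough to verify. Here is a minimal sketch in Python, using standard textbook values for the sidereal month and the year (these figures are not from the article itself):

```python
# The Moon laps the stars every sidereal month, but the Sun's apparent
# position also drifts as Earth orbits; the phase cycle (synodic month)
# is one full lap of the Moon *relative to the Sun*.
SIDEREAL_MONTH = 27.321661  # days (textbook value)
TROPICAL_YEAR = 365.242190  # days (textbook value)

# Relative angular rates subtract, so the periods combine reciprocally:
synodic = 1.0 / (1.0 / SIDEREAL_MONTH - 1.0 / TROPICAL_YEAR)
print(f"Synodic month ≈ {synodic:.3f} days")  # ≈ 29.531 days
```

The result, 29.53 days, matches the 29 days, 12 hours, and 44 minutes quoted above.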
This is observable in the Julian calendar as leap years – every four years a day must be added to the calendar to compensate for the fact that the Earth actually travels around the sun in 365.25 days. Unfortunately, watches – complex mechanical devices – can’t simply have time added to and taken out of their functionality. So watchmakers had to get a little creative.

How Does It Work: An Imperfect Time

The first of the moon phase complications actually functioned relatively simply. A geared dial was added beneath the main dial of the clock, which had 59 teeth and would advance via a single mechanical finger one notch every 24 hours – with two moons on the disc, each half of the wheel very nearly mimicking one full 29.5-day synodic cycle. “Very nearly,” however, is the key phrase here, as the slight inaccuracy actually causes this complication to go out of sync every couple of years. Which means it has to be adjusted at fairly regular intervals. And that’s a pretty big flaw in the watch world. So, some clever minds had the idea to increase the number of teeth to 135 – which guarantees accuracy for about 122 years. For those counting, that’s longer than nearly any human has ever lived (the one exception: a French woman named Jeanne Calment, who lived to be 122 years and 164 days). And for those who like to pass down their heirlooms for generations, it means an approximate ratio of less than one correction per owner. Most high-end lunar phase watches still use this particular movement in their functionality. From there, a few smaller companies and obsessed makers have tinkered further, figuring out how to produce accuracy tolerances in the thousands of years. None, however, compare to the work of Andreas Strehler. Obsessed with accuracy, this master watchmaker has produced the Lune Exacte – a moon phase wristwatch with an accuracy of over 2 million years. As of yet, nobody has outdone this example.
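The 59-tooth and 135-tooth figures can be sanity-checked by comparing each mechanism's effective month length against the true synodic month. A rough sketch (the 29.53125-day effective month for the 135-tooth train is an assumption about how such trains are usually geared, not a figure from the article):

```python
TRUE_MONTH = 29.530588  # days, the true synodic month

def years_until_one_day_off(modeled_month: float) -> float:
    """Years until a mechanism modeling `modeled_month` drifts a full day."""
    error_per_cycle = abs(modeled_month - TRUE_MONTH)  # days of drift per month
    cycles = 1.0 / error_per_cycle                     # cycles until 1 day off
    return cycles * TRUE_MONTH / 365.2422

# 59-tooth wheel: one tooth per day, two moons -> effective 29.5-day month
print(f"59-tooth:  {years_until_one_day_off(29.5):.1f} years")      # ≈ 2.6 years
# 135-tooth train: effective 29.53125-day month (assumed gearing)
print(f"135-tooth: {years_until_one_day_off(29.53125):.0f} years")  # ≈ 122 years
```

Both results line up with the "every couple of years" and "about 122 years" figures in the text.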
And while it might seem absurd to create a complication that may very well outlive humanity, the achievement is an impressive one nonetheless.

Moon Phase Display

Finally, there’s the question of how moon phase watches are displayed. There are two answers. The short one is, every brand has figured out its own particular and artful way of displaying the lunar calendar – so it’s difficult to pin down. The longer answer is that they all have a few things in common, no matter the accuracy, who made them, or how old they are. Every lunar watch shows the phases of the moon through visual mimicry. That means, if you look up at the sky and see a crescent moon, you should look down at your watch and see the same crescent moon displayed upon its face. If you don’t, your watch needs adjusting. There are a couple of ways this could be done. The oldest required a dial with a cutout of roughly 160 degrees (apart from two curved humps jutting into the cutout) and a secondary dial underneath. The secondary dial features an image of the moon painted or gilded onto it. As the secondary dial rotates beneath the first, the moon shows through the cutout with the appearance of whatever phase it is in – be it a waxing crescent, full, or waning. And when the moon is not visible, that means it’s new. Nowadays, the same basic principle is followed, though there are some variations in regards to exactly how the phases are displayed. Certain watchmakers have found new and interesting twists to the way the moon is shown to the wearer, but they all get the same point across: illustrate through a graphic which phase the moon is in at any given time. And unless something happens to the Earth or the moon to drastically change their orbital relationship, we imagine moon phase watches will remain largely the same, as well.
While a moon phase complication might seem like a relatively easy thing to add to the function of a watch, it’s hardly ever the only additional feature. In fact, it’s typically regarded as a luxury on top of other, perhaps more useful complications – like a 24-hour dial, chronograph, diving bezel, etc. As such, watches with a moon phase complication tend to be on the more expensive side. And while there are some superb examples available in the thousands of dollars, that doesn’t mean there aren’t any for those with a slightly tighter budget. The three pictured above are all excellent examples of available moon phase watches.
Spiral galaxy ESO 510–13 revealed its twisted shape to the Hubble Space Telescope. Courtesy NASA and the Hubble Heritage Team. Sometimes when galaxies interact, they pull each other to pieces. Other times they just get a little bent out of shape. The warped disk of ESO 510–13 in eastern Hydra suggests that this giant spiral swallowed and absorbed a smaller galaxy in astronomically recent times and is still settling down after the meal. Many spirals, including the Milky Way and M31 in Andromeda, have at least slightly warped disks due to interactions with dwarf companions. In this case, our perfectly edge-on view reveals a strong warp dramatically. This true-color Hubble Space Telescope image, just released by the Hubble Heritage Project, shows ESO 510–13 in unprecedented clarity. The frame is 2.4 arcminutes wide, or 100,000 light-years across at the galaxy's distance of about 150 million light-years. Amateurs with moderately large telescopes can detect the galaxy as a tiny, 13th-magnitude smudge.
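The numbers in that caption are mutually consistent, as a quick small-angle check shows (a sketch; the 2.4 arcminutes and 150 million light-years are taken from the text above):

```python
import math

distance_ly = 150e6   # distance to ESO 510-13, from the caption
angle_arcmin = 2.4    # width of the Hubble frame, from the caption

# Small-angle approximation: physical size = distance * angle in radians
width_ly = distance_ly * math.radians(angle_arcmin / 60.0)
print(f"Frame width ≈ {width_ly:,.0f} light-years")  # ≈ 105,000 ly
```

That is within rounding of the "100,000 light-years across" quoted in the caption.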
At the centre of our galaxy, a black hole could be about to chow down on one of the biggest meals it's ever had - a mysterious gas cloud around three times the mass of Earth. A montage of the galactic centre by the Swift X-ray Telescope from 2006-2013. Credit: Nathalie Degenaar Astronomers using NASA's Swift telescope are waiting with bated breath to see if the cloud, known as G2, will be sucked into the supermassive black hole in a collision that will not only be a spectacular fireworks show, but could also help boffins see how fainter black holes feed in comparison to their brighter counterparts. "Everyone wants to see the event happening because it's so rare," said Nathalie Degenaar, who leads the imaging effort as a Hubble research fellow in the Department of Astronomy at the University of Michigan. G2 was discovered by astronomers in Germany in 2011 and was expected to hit the black hole Sagittarius A* (Sgr A) late last year. The smash hasn't happened yet, so boffins are now predicting an impact in the next few months. Sgr A is 26,000 light years away from us near the constellations Sagittarius and Scorpius and is the Milky Way's central supermassive black hole. Compared to other collapsed stars at the centres of elliptical and spiral galaxies, Sgr A is dim, despite having a mass at least four million times that of the Sun. "Given its size, this supermassive black hole is about a billion times fainter than it could be," Degenaar said. "Though it's sedate now, it was quite active in the past and still regularly produces brief X-ray flares today." "We think that the fainter ones are the majority, but it's very difficult to study those. We just can't see them. Ours is the only one we can study to understand what their role is in the universe." The gas cloud collision should give astronomers the rare chance to see how fainter supermassive black holes feed and if they consume matter in the same way as bright ones. 
Obviously, black holes themselves are invisible, but the material falling into them shines in X-rays. The Swift observatory is the only telescope providing daily updates at X-ray wavelengths of where the crash will most likely be and will be posting images online here. Astroboffins will be looking for a change in brightness to indicate the black hole is starting its meal, but they don't know yet just how luminous it will be as they have yet to figure out what kind of gas cloud G2 is. If it's all gas, the region could glow for years to come as Sgr A slowly swallows it. But if G2 is hiding an old star, the display could be less dramatic, with the black hole only getting a few sips of the gas while the star passes by, dense enough to escape the event horizon. "I would be delighted if Sagittarius A* suddenly became 10,000 times brighter. However it is possible that it will not react much—like a horse that won't drink when led to water," said Jon Miller, an associate professor of astronomy who also works on the project. "If Sagittarius A* consumes some of G2, we can learn about black holes accreting at low levels—sneaking midnight snacks. It is potentially a unique window into how most black holes in the present-day universe accrete."
Moon phase on 16 May 2004 (Sunday): Waning Crescent, 27 days old. The Moon is in Aries. The previous main lunar phase was the Last Quarter, 5 days earlier on 11 May 2004 at 11:04. The Moon rises after midnight to early morning and sets in the afternoon; it is visible in the early morning, low to the east. The lunar disc appears visually 4.6% narrower than the solar disc; the Moon and Sun apparent angular diameters are ∠1811" and ∠1897". The next Full Moon is the Strawberry Moon of June 2004, 17 days later on 3 June 2004 at 04:20. There is a low ocean tide on this date: the Sun and Moon gravitational forces are not aligned, but meet at a wide angle, so their combined tidal force is weak. The Moon is 27 days old, and Earth's natural satellite is moving from the second to the final part of the current synodic month. This is lunation 53 in the Meeus index, or 1006 in the Brown series. The length of the current lunation (53) is 29 days, 15 hours, and 31 minutes – 4 minutes shorter than the next lunation (54). The current synodic month is 2 hours and 47 minutes longer than the mean synodic month, but still 4 hours and 16 minutes shorter than the 21st century's longest. This New Moon's true anomaly is ∠129.7°; at the beginning of the next synodic month the true anomaly will be ∠158.3°. The lengths of upcoming synodic months will keep increasing, since the true anomaly is approaching the value for a New Moon at the point of apogee (∠180°). It is 10 days after the point of perigee, reached on 6 May 2004 at 04:29 in ♐ Sagittarius. The lunar orbit is getting wider as the Moon moves away from the Earth; it will keep this direction for the next 5 days, until it reaches the point of next apogee on 21 May 2004 at 12:02 in ♊ Gemini. The Moon is 395,710 km (245,883 mi) away from Earth on this date and moves farther away over the next 5 days until apogee, when the Earth–Moon distance will reach 406,262 km (252,440 mi).
It is 11 days after the Moon's descending node of 4 May 2004 at 15:00 in ♏ Scorpio; the Moon follows the southern part of its orbit for the next day, until it crosses the ecliptic from South to North at the ascending node on 17 May 2004 at 18:17 in ♉ Taurus. It is 26 days after the beginning of the current draconic month in ♉ Taurus, and the Moon is moving from the second to the final part of it. It is 8 days after the previous South standstill of 8 May 2004 at 08:08 in ♑ Capricorn, when the Moon reached a southern declination of ∠-27.623°. Over the next 5 days the lunar orbit moves northward, reaching a North declination of ∠27.595° at the next northern standstill on 22 May 2004 at 10:05 in ♋ Cancer. After 2 days, on 19 May 2004 at 04:52 in ♉ Taurus, the Moon will be in New Moon geocentric conjunction with the Sun, and this alignment forms the next Sun–Moon–Earth syzygy.
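The 27-day lunar age quoted above can be reproduced from the New Moon time given at the end of the passage and the mean synodic month; a minimal sketch (the mean month is an approximation, so the result is only good to a few hours):

```python
from datetime import datetime

MEAN_SYNODIC = 29.530588  # days, mean synodic month

next_new_moon = datetime(2004, 5, 19, 4, 52)  # from the text above
date = datetime(2004, 5, 16, 12, 0)           # midday, 16 May 2004

days_until_new = (next_new_moon - date).total_seconds() / 86400.0
age = MEAN_SYNODIC - days_until_new
print(f"Moon age ≈ {age:.1f} days")  # ≈ 26.8, consistent with a 27-day-old crescent
```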
Scientists will be given the chance to try out our planetary defence system as an asteroid the size of a house passes close to Earth. The asteroid, known as 2012 TC4, was first spotted by the Pan-STARRS observatory in Hawaii in 2012, but its orbit meant that it could not be tracked. Early observations indicated that it could pass as close as 4,200 miles from the Earth's surface - well within the ring of geostationary satellites - on Thursday, October 12. However, the latest observations by the European Southern Observatory's Very Large Telescope in Chile reveal that it will miss our planet by 27,000 miles - roughly one-eighth of the distance to the Moon - which is still very close in astronomical terms. The asteroid is estimated to be between 30 and 100 feet (10 to 30 metres) in size, and is travelling at about 30,000 mph (14 kilometres per second). If an asteroid of this size were to enter our atmosphere, it would have a similar effect to the Chelyabinsk meteor, which exploded in an air burst over Chelyabinsk Oblast, Russia, in February 2013. The meteor generated a bright flash and produced a hot cloud of dust and gas. The bulk of the object's energy was absorbed by the atmosphere, but some eyewitnesses felt intense heat from the fireball. Scientists plan to use the close flyby of 2012 TC4 as an opportunity to test out their planetary defence system, in preparation for a real asteroid threat. "Scientists have always appreciated knowing when an asteroid will make a close approach to and safely pass the Earth because they can make preparations to collect data to characterise and learn as much as possible about it," said NASA program scientist Michael Kelley. "This time we are adding in another layer of effort, using this asteroid flyby to test the worldwide asteroid detection and tracking network, assessing our capability to work together in response to finding a potential real asteroid threat."
While the main purpose of NASA's Planetary Defense Coordination Office is to track potentially hazardous asteroids and comets, the US space agency is also putting in place measures to deflect any space rocks that are found to be on a collision course with Earth. It is developing a special type of spacecraft called DART (Double Asteroid Redirection Test), which is about the size of a fridge, and can be fired at an asteroid with enough force to change its trajectory. NASA plans to test out DART on a pair of asteroids named Didymos A and B, which are scheduled to make a "distant approach" to Earth in October 2022. While small asteroids hit the Earth every day, larger ones like the Didymos twins could cause real problems if they hit us. And this is why NASA wants to use them as target practice. Using an on-board targeting system, DART will fly itself to Didymos B and smash into it at 3.7 miles per second, in an impact that will be visible from Earth-based observatories. The impact will theoretically change the speed and direction of the asteroid by just enough to shift it out of Earth's path.
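The deflection idea is plain momentum transfer: a light, fast impactor nudges a far heavier rock by a tiny amount. A back-of-envelope sketch follows; the spacecraft and asteroid masses here are illustrative assumptions, not figures from the article (only the 3.7 miles-per-second impact speed is):

```python
# Kinetic-impactor momentum transfer, perfectly inelastic head-on case.
# (Ejecta thrown off the crater can amplify the push in practice.)
m_spacecraft = 500.0      # kg -- assumed, a roughly fridge-sized probe
v_impact = 3.7 * 1609.34  # m/s, from the 3.7 miles per second quoted above
m_asteroid = 5.0e9        # kg -- assumed mass for a small asteroid moonlet

delta_v = m_spacecraft * v_impact / m_asteroid
print(f"Velocity change ≈ {delta_v * 1000:.2f} mm/s")
```

Even a sub-millimetre-per-second change, applied years before a predicted impact, shifts an asteroid's arrival position by thousands of kilometres.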
Lessons from the Deep Earth for the Search of Life in the Solar System and Beyond Space Telescope Science Institute (STScI) 3700 San Martin Drive Baltimore, MD 21218 12:00 PM - 1:30 PM The recent National Academies Report - the 2018 Astrobiology Science Strategy for the Search for Life in the Universe emphasized, among other major themes, the need for an expanded focus on investigation of subsurface environments and subsurface processes for our understanding of planetary evolution, habitability and the search for life. Our research program at Toronto focuses on Earth analog systems – in particular, deep fracture waters preserved on geologically long time scales in the Precambrian cratons of Canada, Fennoscandia, and South Africa. Science has long relied on fluid inclusions - microscopic time capsules of fluid and gas encased in host rocks and fracture minerals - to access preserved samples of ore-forming fluids, metamorphic fluids, and remnants of the ancient atmosphere and hydrosphere. Until recently, groundwaters were thought to reflect only much younger periods of water-rock interaction (WRI) and Earth history, due to dilution with large volumes of younger fluids recharging from surface hydrosphere. In the last 10-20 years, global investigations in the world’s oldest rocks have revealed groundwaters flowing at rates > L/min from fractures at km depth in Precambrian cratons. With mean residence times ranging from Ma to Ga at some sites, and in the latter case, geochemical signatures of Archean provenance, not only do these groundwaters provide unprecedented samples for investigation of the Earth’s ancient hydrosphere and atmosphere, they are opening up new lines of exploration of the history and biodiversity of extant life in the Earth’s subsurface. 
Rich in reduced dissolved gases such as CH4 and H2, these fracture waters have been shown to host extant microbial communities of chemolithoautotrophs dominated by H2-utilizing sulfate reducers and, in some cases, methanogens. Recent estimates of global H2 production via WRI including radiolysis and hydration of mafic/ultramafic rock (e.g. serpentinization) show that the Precambrian continents are a source of H2 for life on par with estimates of H2 production from WRI calculated for the Earth’s marine lithosphere. To date this extensive deep terrestrial habitable zone has been significantly under-investigated compared to the marine subsurface biosphere. Beyond Earth, these findings have relevance to understanding the role of chemical water-rock reactions in defining the potential habitability of the subsurface of Mars, as well as that of ocean worlds and icy bodies such as Europa and Enceladus. This talk will address some of the highlights of recent exploration of the energy-rich deep hydrogeosphere, and connections to deep subsurface life on Earth and to planetary exploration and astrobiology. Speaker: Barbara Sherwood Lollar (Department of Earth Sciences, University of Toronto)
Scientists think they’ve collected and identified seven space dust particles so small, a needle in a haystack doesn’t even begin to describe it. But even with the tiny size, these interstellar travelers could hold the answers to big questions about the universe. The announcement earlier this year that the specks had been identified is a story 15 years in the making, beginning with the 1999 launch of the unmanned Stardust spacecraft from Cape Canaveral. After a year of travel, Stardust arrived at its destination beyond the orbit of Mars, where its robotic arm extended a tennis racket-shaped collector unit into the interstellar dust stream. “It just captured particles as it would if you were holding up a piece of fly paper and catching flies, by waving it through the air,” says Andrew Westphal, an astronomer at UC Berkeley. It’s not quite as simple as that. For starters, the dust particles are moving at speeds approaching 12 miles per second. Something moving that fast hits something else, and there’s a good chance it vaporizes on impact. So NASA developed a high-tech “aerogel” substance to coat the collector’s surface. It acts like a foam landing pad, slowing the particles down and preserving them. After 200 days of exposure to the dust stream, the collector was hauled inside the spacecraft, where it was placed into a return capsule. Next stop: Utah. “I was there with my family, actually, about 2 o’clock in the morning, we watched it come in in the middle of a blizzard, and it was kind of an amazing experience to see this,” says Westphal. Safely back on Earth, his research team at UC Berkeley next had to try to pluck these little space dust particles out of the gel. There’s an emphasis here on the little. “To put this in some perspective, they are so small about a trillion of them would fit into a teaspoon,” says Westphal. That impossibly small size presented something of a problem. 
How do you parse through the gel to find and isolate the handful of particles researchers expected to find? Westphal and his team at Berkeley had an idea. If they could use a microscope to zoom in and take images of the gel from different angles, then they could upload those photos to a public website. The operation became known as ‘Stardust@Home,’ and eventually 30,000 volunteers logged on and scanned through more than a million images. “I want to emphasize, this was not something we made up for public outreach or for PR,” says Westphal. “We did this because this was the only way we knew how to approach the problem.” This group, some of whom spent hours and hours each day clicking through slides, came to be called Dusters. And two Dusters hit paydirt, discovering actual space dust in the gel. Five other particles were spotted in the lab, though some of these were only fragments left over from the collision with the collector. Each particle appears different under a microscope, with some containing the mineral olivine (better known as the gemstone peridot), as well as sulfur compounds. It will take years to confirm their origins, but if proven, harnessing these specks could be a major breakthrough for astronomers interested in the interstellar medium. “One goal in studying them is purely to understand them for their own sake, what they are and how they came to be that way,” says Bruce Draine, an astrophysicist at Princeton. “But these dust particles turn out to be very important players in the workings of a galaxy.” Figuring out this dust, Draine says, is like solving one clue in the crossword puzzle of the universe. If scientists answer this, they get a little closer to answering other, harder questions. Big picture stuff, like how our solar system was formed. “I kind of like to make the analogy to paleoanthropologists who go out to Africa and look for fossil hominids, our ancestors who lived 4.5 million years ago,” says Andrew Westphal.
“We are going back about a thousand times further in time to 4.5 billion years ago and looking at the kind of stuff that made up our solar system. So it really is about a search for our own origins.”
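Westphal's "trillion in a teaspoon" comparison quoted earlier implies a particle size, which a back-of-envelope check recovers (a sketch; the 5 mL teaspoon and cube-shaped grain are idealized assumptions):

```python
teaspoon_m3 = 5e-6    # a 5 mL teaspoon, in cubic metres (assumed)
n_particles = 1e12    # "about a trillion"

volume_each = teaspoon_m3 / n_particles  # m^3 per particle
edge = volume_each ** (1.0 / 3.0)        # edge of an equivalent cube
print(f"Particle size ≈ {edge * 1e6:.1f} micrometres")  # ≈ 1.7 µm
```

A size on the order of a micrometre is consistent with the micron-scale grains typically expected for interstellar dust.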
LCROSS: Crashing Craters

If everything goes as scheduled, the countdown to liftoff between June 17 and June 20 will mean NASA has launched a rocket intended to crash into the Moon — on purpose. The goal of the Lunar Crater Observation and Sensing Satellite (LCROSS) mission is to confirm the existence (or non-existence) of water ice on the Moon. LCROSS is being aimed at an existing crater at the Moon's South Pole. Because the crater is in permanent shadows, researchers believe it may be cold enough to have frozen ice. The rocket won't make contact for approximately four months. Those interested in monitoring the approach can follow the countdown clock on the NASA LCROSS site. In the interim, the Science Buddies' Craters and Meteorites project idea provides background information and gives students of all ages a concrete way to observe the formation of craters and the ways in which the size and density of the approaching object (e.g., meteor or LCROSS rocket) affect the resulting size of the crater. (Note: This project can be done with students as young as preschool!) According to NASA, when LCROSS's Centaur upper stage rocket makes impact, it may be possible to view the plume created when the rocket hits. The impact will potentially throw "tons of debris and potentially water ice and vapor above the lunar surface." Specialized instruments will analyze the contents of the plume, looking specifically for water (ice and vapor), hydrocarbons and hydrated materials. NASA expects that the plume may be visible from Earth for astronomers using amateur-grade telescopes with apertures as small as 10-12 inches. Amateur astronomers who are interested in officially logging their observations and contributing to the project can find out more at: http://groups.google.com/group/lcross_observation. With several months between launch and impact, there's plenty of time to get the necessary gear in place.
Ambitious students might even want to build their own telescope using Science Buddies' abbreviated Build Your Own Telescope project idea. (Note: If you or your students pursue this project, make sure the mirrors used are at least 12 inches in diameter.) For additional hands-on activities that tie in with principles of science and astronomy related to the LCROSS mission, check out the following PBS Kids' Design Squad activities for students age 9-12 (4th grade and up): - Build an air-powered rocket designed to hit a distant target in Launch It - Create a safe and cushioned astronaut landing zone in Touch Down - Configure a paper cup so it can travel a line and drop a marble onto a target in On Target The LCROSS spacecraft was designed and built by Northrop Grumman. The LCROSS payload, which weighs in just under 28 pounds and contains nine science instruments, was developed by NASA Ames Research Center, which will be managing and monitoring the mission.
This coming weekend presents the first window for 2013 to complete a challenge in the realm of backyard astronomy and visual athletics. With some careful planning, persistence, and just plain luck, you can join the vaunted ranks of those seasoned observers who’ve seen all 110 objects in the Messier catalog… in one night. Observing all of the objects in Messier’s catalog in a single night has become a bit of a sport over the last few decades for northern hemisphere observers, and several clubs and organizations now offer certificates for the feat. The catalog itself was a first attempt by French astronomer Charles Messier to catalog the menagerie of “faint fuzzies” strewn about the northern hemisphere sky. Not that Charles knew much about the nature of what he was seeing. The modern Messier catalog includes a grab bag collection of galaxies, nebulae, open and globular clusters and more down to magnitude +11.5, all above declination -35°. Charles carried out his observations from Paris, France at latitude +49° north. Unfortunately, this also means that the Messier catalog is the product of Charles Messier’s northern-based vantage point. The northernmost objects in the catalog are Messiers 81 & 82 at declination +69°, which never get above the horizon for observers south of latitude -21°. His initial publication of the catalog in 1774 contained 45 objects, and his final publication contained 103, with more objects added based on his notes after his death in 1817. (Fun fact: Messier is buried in the famous Père Lachaise Cemetery in Paris, site of other notable graves such as those of Chopin and Jim Morrison). There’s a fair amount of controversy over Messier’s motivations and methods for compiling his catalog. The standard mantra that will probably always be with us is that Messier was frustrated with stumbling across these objects in his hunt for comets and decided to catalog them once and for all.
He eventually discovered 13 comets in his lifetime, including Comet Lexell, which passed only 2.2 million kilometres from Earth in 1770. No one is certain where the modern tradition of the Messier Marathon arose, though it most likely had its roots in the amateur astronomy boom of the 1970s and was a fixture of many astronomy clubs by the 1980s. There are no Messier objects located between right ascension 21 hours 40 minutes and 23 hours 20 minutes, and only one (M52) between 23 hours 20 minutes and 0 hours 40 minutes. With the Sun reaching the “0 hour” equinoctial point on the March Vernal Equinox (falling on March 20th as reckoned in Universal Time for the next decade), all of the Messier objects are theoretically observable in one night from early March to early April. Taking the New Moon nearest to the March equinox into account, the best dates for a weekend Messier marathon for the remainder of the decade are as follows. Note that this year’s weekend is very nearly the earliest that it can occur. The optimal latitude for Messier marathoning is usually quoted as 25° north, about the latitude of Miami. It’s worth noting that 2013 is one of the very few years where the primary weekend falls on or before our shift one hour forward to Daylight Saving Time, occurring this year on March 10th for North America. Students of the Messier catalog will also know of the several controversies that exist within the list. For example, one wide double star in Ursa Major made its way into the catalog as Messier 40. There’s also been debate over the years as to the true identity of Messier 102, and most marathoners accept the galaxy NGC 5866 in its stead. Optics of the day weren’t the most stellar (bad pun intended), and this is evident in the inclusion of some objects but the omission of others. For example, it’s hard to imagine a would-be comet hunter mistaking the Pleiades (M45) for an icy interloper, but curiously, Messier omits the brilliant Double Cluster in Perseus.
It’s vital for Messier marathoners to run through objects in proper sequence. Most visual observers run these in groups, although Alex McConahay suggests in a recent April 2013 Sky & Telescope article that folks running a photographic marathon (see below) beware of wasting precious time crossing the celestial meridian (a maneuver which requires a telescope on a German Equatorial mount to “flip” sides) while hunting down objects. The unspoken “code of the skies” for visual Messier marathoners is that “Go-To” equipped scopes are forbidden. Part of the intended purpose of the exercise is to acquaint you with the night sky via star hopping to the target. You’ll want to nab M77 and M74 immediately after local dusk, or the marathon will be over before it starts. You’ll then want to move over to the Andromeda Galaxy and the collection of objects in its vicinity before scouring Orion and environs. From that point on, you can begin to slow down a bit and pace yourself through the galaxy groups in Coma Berenices and the Bowl of Virgo asterism. Another cluster of objects stretches across the sky past midnight along the plane of our Milky Way Galaxy from Sagittarius to Cygnus, and the final (and often most troublesome) targets to bag are the Messier objects in Aquarius and M30 in Capricornus just before dawn. Remember, dark skies, warm clothes, and hot coffee are your friends in this endeavor! There have been alternate rules or versions of Messier marathons over the years. Some imagers complete one-night photographic Messier marathons. There are even abbreviated or expanded versions of the feat. It is also possible to nab most of the Messier catalog with a good pair of binoculars under clear skies. Probably the most challenging version we’ve heard of is sketching all 110 Messier objects in one evening… you might be forgiven for using a Go-To enabled telescope to accomplish this!
Finally, just like running marathons, the question we often get is why. Some may eschew transforming the art of dark sky observing into a task of visual gymnastics. We feel that to run through this most famous of catalogs in an evening is a great way to learn the sky and practice the fast-disappearing art of star hopping. And hey, no one’s saying you can’t take a year or three to finish the Messier catalog… it’s a big universe, and the New General Catalog (NGC) and Index Catalog (IC), containing thousands of objects, will still be waiting. Have YOU seen all 110? – A perpetual listing of Messier marathon visibility by latitude by Tom Polakis. – An All Sky Map of the Messier catalog. – A handy priority list for a Messier marathon compiled by Don Machholz.
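The reason the marathon window clusters around the March equinox can be sketched numerically: a marathon is possible when the Sun sits inside the catalog's nearly Messier-free band of right ascension (roughly 21h40m to 0h40m, as described above), so every object clears the night sky. The model below is deliberately crude, assuming the Sun's RA drifts uniformly at 24 hours per 365.25 days; in reality the rate varies with the obliquity of the ecliptic.

```python
def sun_ra_hours(days_from_march_equinox):
    """Approximate solar RA in hours, assuming a uniform drift of
    24 hours of RA per 365.25 days (a crude linearization)."""
    return (days_from_march_equinox * 24.0 / 365.25) % 24.0

def in_messier_free_band(ra_h):
    """True if an RA (in hours) lies in the nearly object-free
    band from about 21h40m, through 0h, to about 0h40m."""
    return ra_h >= 21 + 40 / 60 or ra_h <= 40 / 60

print(in_messier_free_band(sun_ra_hours(0)))    # at the equinox: True
print(in_messier_free_band(sun_ra_hours(-7)))   # a week earlier: True
print(in_messier_free_band(sun_ra_hours(40)))   # in May: False
```

Even this toy model shows the window spans only a few weeks on either side of the equinox, which is why the New Moon nearest March 20th picks the marathon weekend.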
Feeling lucky? Events such as Comet Siding Spring’s close approach to Mars in October only happen about once every eight million years, according to NASA. And after we were treated to spectacular views from the agency’s spacecraft (see Curiosity and Opportunity and MAVEN, for example), we now have fresh pictures this month from an Indian mission. Also, NASA has released science results suggesting that the chemistry of Mars’ atmosphere could be changed forever by the close encounter. “The image in the center shows a streak … radiating out of the comet’s nucleus (out of frame), possibly indicating the jet from [the] comet’s nucleus,” the Indian Space Research Organisation wrote of the above image sequence on its Facebook mission page. “Usually jets represent outgassing activity from [the] vents of the comet-nucleus, releasing dust and ice crystals. The outgassing activity gradually increases as the comet moves closer to the Sun.” The comet’s dust likely produced a meteor shower or meteor storm when particles from it crashed into the upper atmosphere, which “literally changed the chemistry,” added Jim Green, director of NASA’s planetary science division, in a recent discussion highlighted on an agency blog. The agency says the dust created vaporized metals, which will eventually transform to dust or “meteoric smoke.” MAVEN (which stands for Mars Atmosphere and Volatile EvolutioN) will be monitoring the long-term effects. Possible results include high-altitude clouds or, at the most extreme, a permanent change to the chemistry of the atmosphere. Not a bad thing for a mission to study shortly after arriving at Mars. You can view more science results from NASA’s studies of Siding Spring in this recent Universe Today story from Bob King, which talks in more detail about the meteor shower, new layers in the Mars atmosphere and the omnipresent dust.
Just like out of a “Star Wars” movie, NASA is investigating the possibility of building a blimp-suspended city in the clouds high above Venus' searing-hot surface. The project, known as the High Altitude Venus Operational Concept (HAVOC), is a spacecraft designed by the Systems Analysis and Concepts Directorate at NASA Langley Research Center for the purpose of exploring Earth’s closest neighbor. “There’ve been plenty of robotic missions along the way that have been proposed to explore Venus,” Project Head Dale Arney told FoxNews.com. “This one [is] looking at what it would take to explore it with humans and what the feasibility looks like in that realm.” Despite Venus coming closer to Earth than Mars does (by tens of millions of miles, depending on orbit), space agencies have been focusing their exploration efforts primarily on the red planet, and for good reason. While Venus has a similar density and chemical composition to Earth, the surface conditions have led researchers to refer to the planet as the solar system’s version of Hell. The mean temperature is a balmy 863 degrees Fahrenheit, the clouds are made of sulfuric acid, and there are more volcanoes (totaling, in some estimates, over 1,000,000) than on any other planet in the solar system. The air pressure at the surface is also about 92 times that of Earth’s at sea level. Probes landing on the planet’s surface have only lasted, at most, two hours. The HAVOC project, created by Arney and Chris Jones, would get around this problem by staying high above these hellish conditions -- 30 miles above the surface, to be exact. First, a robotic probe would be sent to Venus to inspect the atmospheric conditions. Next, a crew would visit the planet’s orbit for a stay of 30 days, followed by a 30-day stay floating in the atmosphere. The primary feature of the concept is a 130 meter-long mobile blimp, its top covered with solar panels to utilize Venus’ close proximity to the sun.
The helium-filled, solar-powered craft would hover above the highly acidic cloud-line for 30 days as a crew gathers information about the planet’s atmosphere. While a permanent human presence in a blimp-suspended “cloud city” is the ultimate goal, Jones is quick to point out that they’re taking things one step at a time. “What we focused on in this study was understanding what an initial robotic and an initial, very short-term human mission would look like, and then just very notionally thought about what you could then build to beyond that -- something like a more permanent presence. But our primary focus was on understanding what kind of technology system it would take to do any kind of mission at all, mainly to do the science and test out the technology it would need in order to enable those kinds of missions.” A mission to Venus could be used as a test-run for crewed missions to Mars, the former taking 440 days using existing or near-term propulsion technology while a trip to the red planet would take 500 days at a minimum. Astronaut teams would also have the choice to abort a Venus mission and return to Earth immediately after arrival, whereas missions to Mars would have no such option: the crew would have to wait on the planet until just the right orbital alignment occurred for a safe return home. So when can we expect to see an actual mission? NASA currently has no plans to send humans to Venus, and according to the Langley branch’s head of public affairs, Michael Finneran, it may be a while before they do. “This is a visionary concept that is not being proposed for funding as a mission,” Finneran says. “If at some point NASA decided to fund a human mission to Venus, many concepts would be examined over a period of time before one was selected.”
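The basic physics behind a helium blimp at Venus can be sketched with the ideal gas law: Venus' CO2 atmosphere is denser than helium at the same pressure and temperature, so a helium envelope floats. The altitude conditions below (about 1 Earth atmosphere of pressure and 350 K, in the region the blimp would fly) are assumed round numbers for illustration, not HAVOC mission figures.

```python
R = 8.314  # J/(mol*K), universal gas constant

def gas_density(pressure_pa, molar_mass_kg, temp_k):
    """Ideal-gas density: rho = P * M / (R * T)."""
    return pressure_pa * molar_mass_kg / (R * temp_k)

# Assumed conditions high in Venus' atmosphere (illustrative values):
P = 101_325.0  # Pa, roughly Earth sea-level pressure
T = 350.0      # K, warm but far from the 863 F surface

co2 = gas_density(P, 0.044, T)     # Venus' atmosphere is mostly CO2
helium = gas_density(P, 0.004, T)  # helium in the envelope

lift_per_m3 = co2 - helium         # kg of buoyant lift per cubic metre
print(round(lift_per_m3, 2))
```

Under these assumed conditions each cubic metre of helium lifts on the order of 1.4 kg, which is why a 130-metre envelope can plausibly carry a crewed gondola; at Earth's surface the same calculation with N2/O2 air gives a similar figure, one reason this altitude band of Venus is sometimes called the most Earth-like environment off Earth.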
Humans explore Mars in “Distant Shores,” an illustration by NASA artist Pat Rawlins Cosmic rays from deep space could pose serious health risks to future astronauts on long-duration missions to Mars — even bringing on the memory-destroying symptoms of Alzheimer’s disease, according to the results of a new study from the University of Rochester Medical Center. While NASA has its sights set on the human exploration of Mars within the next several decades, even with the best propulsion technology currently available such a mission would take about three years. Within that time, crew members would be constantly exposed to large amounts of radiation that we are protected from here by Earth’s magnetic field and atmosphere. Some of this radiation comes in the form of protons from the Sun and can be blocked by adequate spacecraft shielding materials, but a much bigger danger comes from heavy high-energy particles that are constantly whipping across the galaxy, shot out of the hearts of exploding giant stars. “Because iron particles pack a bigger wallop it is extremely difficult from an engineering perspective to effectively shield against them. One would have to essentially wrap a spacecraft in a six-foot block of lead or concrete.” – M. Kerry O’Banion, M.D., Ph.D. While health risks from these high-mass, high-charged (HZE) particles have long been known, the exact nature of the damages they can cause to human physiology is still being researched — even more so now that Mars and asteroid exploration is on NASA’s short list. Now, a team from the University of Rochester Medical Center (URMC) in New York has announced the results of their research linking high-energy radiation — just like what would be encountered during a trip to Mars — to the degeneration of brain function, and possibly even the onset of Alzheimer’s disease. “Galactic cosmic radiation poses a significant threat to future astronauts,” said M. 
Kerry O’Banion, M.D., Ph.D., a professor in the University of Rochester Medical Center (URMC) Department of Neurobiology and Anatomy and the senior author of the study. “The possibility that radiation exposure in space may give rise to health problems such as cancer has long been recognized. However, this study shows for the first time that exposure to radiation levels equivalent to a mission to Mars could produce cognitive problems and speed up changes in the brain that are associated with Alzheimer’s disease.” In particular the team focused on iron ions, which are blasted into space by supernovae and are massive enough to punch through a spacecraft’s protective shielding. “Because iron particles pack a bigger wallop it is extremely difficult from an engineering perspective to effectively shield against them,” O’Banion said. “One would have to essentially wrap a spacecraft in a six-foot block of lead or concrete.” By exposing lab mice to increasing levels of radiation and measuring their cognitive ability, the researchers were able to determine the neurologically destructive nature of high-energy particles, which caused the animals to more readily fail cognitive tasks. In addition the exposed mice developed accumulations of a protein plaque within their brains, beta amyloid, the spread of which is associated with Alzheimer’s disease in humans. “These findings clearly suggest that exposure to radiation in space has the potential to accelerate the development of Alzheimer’s disease,” said O’Banion.
“This is yet another factor that NASA, which is clearly concerned about the health risks to its astronauts, will need to take into account as it plans future missions.” While Mars explorers could potentially protect themselves from cosmic radiation by setting up bases in caves, empty lava tubes or beneath rocky ledges, which would offer the sort of physical shielding necessary to stop dangerous HZE particles, that would obviously present a new set of challenges to astronauts working in an already alien environment. And there’s always the trip there (and back again) during which time a crew would be very much exposed. While this won’t — and shouldn’t — prevent a Mars mission from eventually taking place, it does add yet another element of danger that will need to be factored in and either dealt with from both health and engineering standpoints… or accepted as an unavoidable risk by all involved, including the public. How much risk will be considered acceptable for the human exploration of Mars — and beyond? (NASA/Pat Rawlings)
Tribute to Astrophysicist Vera Rubin, the “Mother” of Dark Matter
Astrophysicist Professor Vera Rubin, National Medal of Science awardee who confirmed the existence of dark matter, died on 25 December 2016. Dark matter is “the invisible material that makes up more than 90% of the mass of the universe.” Rubin’s pioneering work progressed from 1965 to the late 1970s. Her webpage describes the beginning of this discovery: “By the late 1970s, after Rubin and her colleagues had observed dozens of spirals, it was clear that something other than the visible mass was responsible for the stars’ motions. Analysis showed that each spiral galaxy is embedded in a spheroidal distribution of dark matter — a “halo.” The matter is not luminous, it extends beyond the optical galaxy, and it contains 5 to 10 times as much mass as the luminous galaxy. The stars’ response to the gravitational attraction of the matter produces the high velocities. As a result of Rubin’s groundbreaking work, it has become apparent that more than 90% of the universe is composed of dark matter.” Rubin’s research remained prolific until the early 2000s, as she continued to study various models for the composition of the dark halos. Among her most recent publications was an examination of the rotation curves of spiral galaxies. Until her retirement, Rubin worked at the Carnegie Institution for Science Department of Terrestrial Magnetism in Washington, D.C. She was awarded the National Medal of Science in 1993. She was also a member of the National Academy of Sciences, and in 1996 she received the Royal Astronomical Society’s Gold Medal, the first woman to do so since Caroline Herschel received it in 1828, 168 years earlier.
Neta Bahcall of Princeton University describes Rubin’s scientific significance: “A pioneering astronomer, the ‘mother’ of flat rotation curves and dark-matter, a champion of women in science, a mentor and role model to generations of astronomers.” Carnegie Science notes that Rubin’s impact extends far beyond her pioneering research: “She was an ardent feminist, advocating for women observers at the Palomar Observatory, women at the Cosmos Club, Princeton, and she even advised the Pope to have more women on his committee.” See Yonatan Zunger’s tribute to Professor Rubin in the linked post. Read some background on Rubin from Carnegie Science: https://carnegiescience.edu/news/vera-rubin-who-confirmed-%E2%80%9Cdark-matter%E2%80%9D-dies See Rubin’s biography and publications: https://home.dtm.ciw.edu/users/rubin/ #stemwomen #astrophysics #astronomy Originally shared by Yonatan Zunger And in the continuing march of the Angel of Death, I am sad to report that Vera Rubin died today at the age of 88. Rubin was most famous as the discoverer of dark matter: the invisible and still-mysterious substance which makes up 85% of the mass of the universe. Dark matter had been hypothesized back in the 1930’s, but it wasn’t until the 1970’s that it was finally observed. Rubin was studying distant galaxies when she noticed that the rotation speed of their outer edges didn’t jibe with the speed they should have based on the amount of visible matter. You can tell how fast something is moving relative to you using the Doppler effect: the same thing that makes a siren sound higher-pitched as it moves towards you and lower-pitched as it moves away. It works because sound looks like a sine wave of rising and dropping pressure, and pitch corresponds to the time between successive peaks.
When the source is moving towards you, the first peak emitted by the siren is already moving towards you at the speed of sound, but the second peak will get there sooner than expected, because it had the benefit of moving towards you at the siren’s speed for one more period and then being sent off at the speed of sound. This means that if you know the original pitch of the siren, you can even figure out how fast it’s moving based on the pitch you hear. The same trick works with light, only now instead of pitch, it’s color that depends on the time between peaks; things appear bluer when they approach, and redder when they recede. Since starlight contains a lot of easily measured standard lights in it – colors like those that Hydrogen and Helium emit when heated, and which have a very distinct pattern when viewed through a prism – we can measure the speed of distant stars and galaxies. And by comparing the speed of the left and right edges of a galaxy, you can tell how fast it’s spinning. But we’ve known how to calculate the orbits of stars since Kepler, and from the amount of light a galaxy emits, we can make a pretty good guess at how heavy it is. From that, you would conclude that the stars at the outside of a galaxy should be moving more slowly than the ones at its center, in a nicely predictable way. But that’s not what Rubin saw! Instead, she discovered that the stars at the outside were moving at the same speed as the ones at the center – something only possible if there was some extra, invisible mass pulling them. What Rubin discovered was that there is an invisible halo of “dark matter” surrounding each galaxy, nearly ten times as massive as the galaxy itself. It’s “dark” in the plainly literal sense: unlike stars, it’s not actively on fire and glowing. In the decades since, dark matter has become a core area of study in astrophysics. 
Using the same techniques and ever-more-sophisticated telescopes, including dedicated satellite observatories, we’ve mapped the presence and motion of dark matter in greater detail, and discovered that it’s far more mysterious than we first suspected. For example, we know it’s not made up of ordinary atomic or molecular stuff, because its dynamics are all wrong; neither is it made up of massive neutrinos or any other kind of matter we understand. (There’s also dark energy, an even more widespread and invisible field, discovered a few decades later. Unlike dark matter, which attracts things by gravity, dark energy seems to provide a universe-spanning, diffuse, but very distinctly measurable repulsive force. It’s even less understood than dark matter; most scientists suspect that if we understood these things well, we’d know a lot more about the nature of the universe.) Rubin therefore sits in the pantheon of the great astronomers of the 20th century. Alas, her death means she will not get the Nobel Prize that many have long argued she deserves: the prize cannot (by the terms of its founding grant) be awarded posthumously. But she remains one of the most important researchers in the field, and her work will continue to have a profound impact on our understanding of Nature for generations to come.
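Rubin's chain of reasoning (a Doppler shift gives a star's speed; comparing speed against radius reveals the mass pulling on it) can be sketched in a few lines. The wavelength and radius values below are illustrative numbers, not her actual measurements; the H-alpha rest wavelength of 656.28 nm is a standard spectral reference.

```python
C = 299_792_458.0  # speed of light, m/s

def radial_velocity(observed_nm, rest_nm):
    """Non-relativistic Doppler shift: v = c * (obs - rest) / rest.
    Positive means receding (redshift), negative approaching (blueshift)."""
    return C * (observed_nm - rest_nm) / rest_nm

def keplerian_speed(v_ref, r_ref, r):
    """Expected orbital speed if essentially all the mass sat at the
    galaxy's centre: v falls off as 1/sqrt(r)."""
    return v_ref * (r_ref / r) ** 0.5

# H-alpha (656.28 nm at rest) measured on a galaxy's receding edge:
v = radial_velocity(656.72, 656.28)
print(round(v / 1000))  # ~201 km/s

# Visible-mass prediction: speed should halve at four times the radius...
print(round(keplerian_speed(200.0, 5.0, 20.0)))  # 100 km/s
# ...but Rubin measured roughly flat curves (~200 km/s at both radii),
# implying unseen mass: the dark-matter halo.
```

The gap between the predicted falling curve and the observed flat one is, in essence, the dark-matter measurement.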
Could there be a collision of planets? No, there is no reason for such a fear. The planets and stars are extremely far away from each other, even if we don’t always realize it. Planetary collisions are pretty rare, especially in developed systems like ours. Our solar system is reasonably stable -- not perfectly so, but none of the planets is likely to hit another large object in the near future. About the worst thing that could happen would be an asteroid strike. An asteroid or comet impact can produce global-level effects, but even a mass extinction would be nothing compared to what a collision with a planet like Mercury could do to us. And since the asteroids likely to hit a planet are far too tiny to be called planets in any sense of the word, we wouldn't count those as planetary collisions. Generally, this seems to be the norm in developed systems. The extrasolar planetary systems we know about don't seem full of planets that are prone to collisions. In fact, you see all kinds of complicated motions that keep planets from colliding with one another. image source: wikimedia commons Collisions between planets did happen, but early in the Solar System's history. Our Moon was made by something Mars-sized smacking into the Earth very early on (over 4 billion years ago), and Mars has a giant, very old (about the same age, but ages are pretty uncertain once we get away from the Earth and Moon) crater that makes the northern part of the planet quite a bit lower than the southern part. So, when the inner solar system was forming, all four of the inner planets were probably getting hit a lot by comparable-sized objects (and smaller stuff as well). Thankfully, most of that stuff either collided with something or was eventually knocked out of the area by Jupiter, so it won't collide with a planet any more.
“The biggest problem with success is that it looks easy, especially for those of us who have nothing to do.” Thus spoke Jean-Jacques Dordain on Wednesday, November 12th, just moments after it had been confirmed that a tiny robot vehicle called Philae had safely landed on the surface of a comet half a billion kilometres away from Earth. That simple statement offers a subtle message on the huge achievement this landing represents. The Rosetta / Philae mission is the story of a 6 billion kilometre journey across space which has taken a decade to achieve, and which has involved some 20 countries. Yet the adventure is in many ways only now starting. The Rosetta mission actually started 21 years ago, in 1993, when it was approved as the European Space Agency’s first long-term science programme. The aim of the mission was to reach back in time to the very foundations of the solar system by rendezvousing with, and landing on, a comet as it travels through the solar system. Comets hold enormous scientific interest because they are, as far as can be determined, the oldest, most primitive bodies in the Solar System, preserving the earliest record of material from the nebula out of which our Sun and planets were formed. While the planets have gone through chemical and (in the case of places like Earth) environmental and geological change, comets have remained almost unchanged through the millennia. What’s more, they likely played an important role in the evolution of at least some of the planets. There is already substantial evidence that comets probably brought much of the water in today’s oceans – and they may even have provided the complex organic molecules that may have played a crucial role in the evolution of life here. The target for ESA’s attention is comet 67P/Churyumov–Gerasimenko (aka 67P/C-G), an odd-shaped body comprising two “lobes” joined together, which some in the media have at times referred to as the “rubber duck”.
The larger of the two lobes measures some 4.1×3.2×1.3 kilometres in size (2.55×1.99×0.8 miles) and the smaller some 2.5×2.5×2 kilometres (1.6×1.6×1.2 miles). It is a “short period” comet, orbiting the Sun once every 6.4 years and most likely originating in the Kuiper belt, a disk of material from the early history of the solar system orbiting the Sun at a distance of around 30-50 AU. The primary spacecraft in the mission, Rosetta, arrived in the vicinity of 67P/C-G on August 6th, 2014, becoming the first vehicle in history to successfully enter orbit around a comet. The major reason the mission took so long to reach the comet, having been launched in 2004, is that despite having a relatively short orbital period, 67P/C-G is travelling very fast and accelerating as it falls deeper into the Sun’s gravity well heading for perihelion (it is currently travelling at 18 kilometres (11.25 miles) a second and can reach velocities of 34 kilometres a second as it swings around the Sun). As it is impossible to launch a space vehicle at these velocities, Rosetta was launched on a trajectory which allowed it to fly by Earth twice (2005 and the end of 2007) and Mars once (early 2007), using the gravity of both planets to accelerate it and (in the case of the second Earth fly-by) swing it onto an orbit where it would “chase” and eventually catch the comet. Following its safe arrival, Rosetta settled into an orbit some 30 kilometres from the comet in September, and began looking for a suitable place where Philae might land – because until the craft actually arrived in orbit around 67P/C-G, no-one had any idea of what its surface might look like. On 15 September 2014, ESA announced that a region on the “head” of the “duck” had been selected for the landing, christening it Agilkia, a name chosen through a public contest.
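The speeds quoted above can be sanity-checked with the vis-viva equation. Assumptions in this sketch: the semi-major axis follows from the 6.4-year period via Kepler's third law, and the perihelion distance of about 1.24 AU and present distance of about 3.05 AU are treated as given round figures rather than precise ephemeris values.

```python
import math

GM_SUN = 1.32712440018e20  # Sun's gravitational parameter, m^3/s^2
AU = 1.495978707e11        # astronomical unit, m

def semi_major_axis_au(period_years):
    """Kepler's third law for solar orbits: a[AU] = P[yr]**(2/3)."""
    return period_years ** (2.0 / 3.0)

def vis_viva_speed(r_au, a_au):
    """Orbital speed v = sqrt(GM * (2/r - 1/a)), returned in km/s."""
    r, a = r_au * AU, a_au * AU
    return math.sqrt(GM_SUN * (2.0 / r - 1.0 / a)) / 1000.0

a = semi_major_axis_au(6.4)            # ~3.45 AU from the 6.4-year period
print(round(vis_viva_speed(1.24, a)))  # at perihelion: ~34 km/s
print(round(vis_viva_speed(3.05, a)))  # at ~3 AU from the Sun: ~18 km/s
```

Both figures land on the article's quoted speeds, which illustrates why the spacecraft needed a decade of gravity assists: chemical rockets cannot inject directly into such a fast orbit, so Rosetta had to borrow energy from Earth and Mars until its orbit matched the comet's.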
Further observations of the comet were carried out throughout September and October as an overall part of Rosetta’s mission and to gain as much information as possible on the landing site itself. At the same time the spacecraft started manoeuvring itself in closer to the comet, dropping its orbit to just 10 km, ready for Philae’s delivery. The landing operations commenced around 09:05 UT on Wednesday, November 12th, when Philae detached from Rosetta and started on its long, gentle descent. Immediately following the separation, and due to Rosetta’s orbit around the comet, contact was almost immediately lost with the lander, leading to a tense 2-hour wait before communications could be re-established. This happened on cue, with the lander reporting all was OK. Landing on a comet is no easy task. The gravity is almost non-existent, and there was a very real risk that Philae could, if it struck the surface of 67P/C-G too fast, simply bounce off. Hence the lander’s long, slow drop from the Rosetta spacecraft, which the ESA mission scientists dubbed “the seven hours of terror” in recognition of the famous “seven minutes of terror” which marked the arrival of NASA’s Mars Science Laboratory Curiosity on Mars. To prevent Philae bouncing off the surface of 67P/C-G, ESA engineers came up with two ingenious systems to try to ensure the craft could anchor itself in place. Each of the three landing legs has a small ice drill built into its “foot”, designed to activate as soon as a leg makes contact with the comet, effectively “screwing” the landing pad against the rock. Little harpoons would also be fired from under the vehicle, tethering it to the comet. The actual touch-down was expected to take place around 15:35 UT, or shortly thereafter. However, because of the distance involved, it would take the signal confirming touchdown a further 30 minutes to reach Earth. At 16:11 UT, the first telemetry showing Philae had landed was received by mission control in Darmstadt.
It prompted a Tweet from the Rosetta media team: It had initially been thought that the landing systems had performed as anticipated; however, as further telemetry came in, it became apparent that the harpoon system had not actually fired; indeed, the lander may have actually bounced very slightly, leading mission manager Stephan Ulamec to comment, “today we landed on a comet – twice!” While the failure of the harpoons, and a slight issue with the communications signal between the lander and the orbiting Rosetta, are not serious enough to compromise the mission, engineers are examining the potential to attempt a re-fire of the harpoons. However, assuming all goes well, Philae should quickly start an intensive 60-hour study of the comet. After this time, the primary battery system will be discharged, and the lander will switch to a rechargeable battery system which uses solar cells mounted on the sides of the vehicle. Both systems will allow Philae to conduct an on-the-spot analysis of the composition and structure of the comet’s surface and subsurface material. A drilling system will obtain samples down to 23 centimetres (9 inches) below the surface. These samples will be subject to spectrographic analysis to determine their chemical composition, including the presence of amino acid enantiomers. Other instruments will measure properties such as near-surface strength, density, texture, porosity, ice phases and thermal properties. In addition, instruments on the lander will study how the comet changes during the day-night cycle, and as it approaches the Sun. How long the lander survives depends on several factors. The comet is currently “falling” towards the Sun and will grow increasingly active as it does so. Already, the comet is close enough for dust and gas to be given off as it is warmed by the Sun, forming a faint coma around the rocky nucleus.
As this activity increases, there is a risk that this dust could coat the lander’s solar cells, preventing the batteries from recharging. Even if this doesn’t happen, it is likely that by mid-March 2015 the comet will be so close to the Sun that heat will overcome the lander. Meanwhile, Rosetta will also be carrying out an extensive study of the comet from its orbit, using a suite of instruments including cameras, spectrometers, and experiments that work at different wavelengths – infrared, ultraviolet, microwave, and radio. These will allow the vehicle to gather more high-resolution images of the comet and information about its shape, density, temperature, and chemical composition. In addition, Rosetta will analyse the gases and dust grains in the comet’s coma, which will become more and more active as the comet approaches perihelion and swings around the Sun. One thing that has come as something of a surprise is that not only is Rosetta able to look at, sniff and even – via Philae – scratch and tickle 67P/C-G, the mission has also been able to hear the comet, which appears to be singing to itself! In August 2014 Rosetta detected oscillations in the comet’s magnetic field, which give rise to an unusual “song” in the 40-50 millihertz range. These oscillations are far too low for the human ear to discern, so ESA issued a recording of the sound on November 11th, 2014, with the frequency increased by a factor of 10,000 to make it audible. It is hoped that Rosetta will continue to function right the way through perihelion in August 2015, and onwards to around December 2015. However, while 67P/C-G appears to be a lot less active than other comets visited by probes in the past – such as the famous Halley’s Comet “chase” and fly-by of 1986 – there is still a risk that significant outgassing from the comet might damage the vehicle or compromise its solar panels.
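The playback arithmetic behind the released recording is simple: scaling the observed 40-50 millihertz oscillations by ESA's factor of 10,000 lifts them squarely into the audible range. A minimal sketch:

```python
# Frequency scaling used to make the comet's "song" audible.
SPEEDUP = 10_000  # playback speed-up factor ESA applied to the recording

results = {}
for f_mhz in (40, 50):                 # observed magnetic-field oscillations, millihertz
    f_obs = f_mhz / 1000.0             # convert to hertz
    results[f_mhz] = f_obs * SPEEDUP   # frequency heard after the speed-up
    period_s = 1.0 / f_obs             # one cycle of the raw signal, in seconds
    print(f"{f_mhz} mHz -> {results[f_mhz]:.0f} Hz (raw period {period_s:.0f} s)")
```

A 40-50 mHz oscillation (one cycle every 20-25 seconds) becomes a 400-500 Hz tone, roughly the pitch range of a human voice.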
The Rosetta mission isn’t the first to study a comet; as noted above, Halley’s Comet, which orbits the Sun every 76 years, was visited by no fewer than 5 space vehicles – Europe’s Giotto, and two each from Russia and Japan – all of which flew by that comet in 1986, with Giotto coming to within 600 kilometres of the comet’s very active nucleus, having been “steered onto” the comet with data obtained by the Russian and Japanese missions. While it had not been expected to survive the encounter, Giotto did continue onwards after passing Halley’s Comet and flew by Comet Grigg-Skjellerup in 1992, prior to being switched off. NASA has also carried out three cometary missions, including the Stardust mission of 2004, which gathered samples from the tail of comet Wild 2 and returned them to Earth for analysis in 2006. However, and as noted, Rosetta marks the first time a vehicle has been successfully placed in orbit around a comet, and Philae the very first craft to land on a comet. About the Project Names The Rosetta project uses three interconnected names. The overall name, which is also that of the primary spacecraft, is taken from the Rosetta Stone, the granodiorite stele inscribed with a decree issued on behalf of King Ptolemy V in three scripts: Ancient Egyptian hieroglyphs, Demotic script and Ancient Greek. It offered an important key to understanding Egyptian hieroglyphs. Thus the name reflects the overall objectives of the mission, with comets perhaps being the key to our understanding of the ancient history of the solar system. Philae refers to a pair of islands just above the First Cataract of the Nile River near Aswan. Said to be the burial place of Osiris, the islands were the home of the Temple of Isis. It was here that the Philae Obelisk was found, on which was engraved a petition in Ancient Greek and Egyptian hieroglyphs, and which further assisted in the decipherment of hieroglyphs alongside the work with the Rosetta Stone.
Agilkia is the name of the island to which the Temple of Isis on Philae was relocated stone by stone in the 1960s to remove it from further risk of flooding due to continuing development of the Aswan dams. There will be more to come from Rosetta and Philae. In the meantime, I’ll leave you with a film on Philae’s landing, featuring music composed especially for the event by Vangelis. All images courtesy of the European Space Agency unless otherwise indicated.
Paleontologist: New theory on dinosaur extinction PRINCETON UNIVERSITY NEWS RELEASE Posted: September 26, 2003 As a paleontologist, Gerta Keller has studied many aspects of the history of life on Earth. But the question capturing her attention lately is one so basic it has passed the lips of generations of 6-year-olds: What killed the dinosaurs? The answers she has been uncovering for the last decade have stirred an adult-sized debate that puts Keller at odds with many scientists who study the question. Keller, a professor in Princeton's Department of Geosciences, is among a minority of scientists who believe that the story of the dinosaurs' demise is much more complicated than the familiar and dominant theory that a single asteroid hit Earth 65 million years ago and caused the mass extinction known as the Cretaceous-Tertiary, or K/T, boundary. Keller and a growing number of colleagues around the world are turning up evidence that, rather than a single event, an intensive period of volcanic eruptions as well as a series of asteroid impacts are likely to have stressed the world ecosystem to the breaking point. Although an asteroid or comet probably struck Earth at the time of the dinosaur extinction, it most likely was, as Keller says, "the straw that broke the camel's back" and not the sole cause. Perhaps more controversially, Keller and colleagues contend that the "straw" -- that final impact -- is probably not what most scientists believe it is. For more than a decade, the prevailing theory has centered on a massive impact crater in Mexico. In 1990, scientists proposed that the Chicxulub crater, as it became known, was the remnant of the fateful dinosaur-killing event and that theory has since become dogma. Keller has accumulated evidence, including results released this year, suggesting that the Chicxulub crater probably did not coincide with the K/T boundary.
Instead, the impact that caused the Chicxulub crater was likely smaller than originally believed and probably occurred 300,000 years before the mass extinction. The final dinosaur-killer probably struck Earth somewhere else and remains undiscovered, said Keller. These views have not made Keller a popular figure at meteorite impact meetings. "For a long time she's been in a very uncomfortable minority," said Vincent Courtillot, a geological physicist at Université Paris 7. The view that there was anything more than a single impact at work in the mass extinction of 65 million years ago "has been battered meeting after meeting by a majority of very renowned scientists," said Courtillot. The implications of Keller's ideas extend beyond the downfall of ankylosaurus and company. Reviving an emphasis on volcanism, which was the leading hypothesis before the asteroid theory, could influence the way scientists think about the Earth's many episodes of greenhouse warming, which mostly have been caused by periods of volcanic eruptions. In addition, if the majority of scientists eventually reduce their estimates of the damage done by a single asteroid, that shift in thinking could influence the current-day debate on how much attention should be given to tracking and diverting Earth-bound asteroids and comets in the future. Keller does not work with big fossils such as dinosaur bones commonly associated with paleontology. Instead, her expertise is in one-celled organisms, called foraminifera, which pervade the oceans and evolved rapidly through geologic periods. Some species exist for only a couple hundred thousand years before others replace them, so the fossil remains of short-lived species constitute a timeline by which surrounding geologic features can be dated. In a series of field trips to Mexico and other parts of the world, Keller has accumulated several lines of evidence to support her view of the K/T extinction. 
She has found, for example, populations of pre-K/T foraminifera that lived on top of the impact fallout from Chicxulub. (The fallout is visible as a layer of glassy beads of molten rock that rained down after the impact.) These fossils indicate that this impact came about 300,000 years before the mass extinction. The latest evidence came last year from an expedition by an international team of scientists who drilled 1,511 meters into the Chicxulub crater looking for definitive evidence of its size and age. Although interpretations of the drilling samples vary, Keller contends that the results contradict nearly every established assumption about Chicxulub and confirm that the Cretaceous period persisted for 300,000 years after the impact. In addition, the Chicxulub crater appears to be much smaller than originally thought -- less than 120 kilometers in diameter compared with the original estimates of 180 to 300 kilometers. Keller and colleagues are now studying the effects of powerful volcanic eruptions that began more than 500,000 years before the K/T boundary and caused a period of global warming. At sites in the Indian Ocean, Madagascar, Israel and Egypt, they are finding evidence that volcanism caused biotic stress almost as severe as the K/T mass extinction itself. These results suggest that asteroid impacts and volcanism may be hard to distinguish based on their effects on plant and animal life and that the K/T mass extinction could be the result of both, said Keller.
Night Sky Tonight Tonight, if your skies are clear, you may be able to see all five visible planets – Saturn, Jupiter, Mars, Venus, and Mercury – without the help of a telescope. These naked-eye planets have been known to us for centuries and are visible for most of the year, but during this season all five are in a position that allows us to view their natural beauty on a single night. You don’t have to stay up all night to catch the action; Mars stays out considerably longer than Venus and Mercury, and can be seen two hours after the two have set. Approximately four hours after sunset, Jupiter, the largest planet, rises from the east and is overhead by bedtime, giving you a great opportunity to view it from the house, even if your view is somewhat obstructed by buildings or tall trees. Saturn appears in the southeast a few hours before dawn. Let’s see what exactly will be up there during the coming weeks in January: The Quadrantids meteor shower should be visible from January 1st to the 6th, with its peak on the night of the 3rd and 4th, so I hope you don’t miss it. Radiating from a point in the constellation Bootes, near the handle of the Big Dipper, the Quadrantids may produce up to about 80 meteors per hour, which obviously makes the shower rather spectacular. Remember though, the full moon is very bright these days, so try looking in the direction of the North Star (Polaris), and use a star chart to help locate the radiant while keeping the moon out of your vision. Comet Lovejoy can also be seen this month in the southern skies just below Orion; a telescope or binoculars might come in handy if you want to see more details, such as the tail (which consists of dust and ions). Watch Comet Lovejoy at its closest approach to Earth on the 7th. Planets in the Night Sky As mentioned earlier, the month of January brings with it a bunch of celestial bodies visible in the sky at night.
The following planets will be visible to the unaided eye during this season. The planet Mercury is on its way to greatest elongation this month. It is visible 19 degrees away from the sun from the 14th of January, before the bright rays of the sun completely overshadow it. Because Mercury is located so close to the sun, it has the shortest window for observation, so anyone hoping to catch a glimpse will have to do so early, before the sun comes out, or just after sunset. Venus, otherwise known as The Evening Star, comes out 5 degrees above the horizon on the 1st of January, and moves to about 12 degrees by the last week of the month. Venus is one of the most popular targets for amateur astronomers as it shines brightly at a magnitude of about -3.8, easily visible from most locations. Mars is also visible for a few hours after Mercury sets, and on the night of 21st January there will be a very good opportunity to see Mars, Mercury and Venus all very close together. Jupiter and its Galilean moons (credit: Astronomy Now) Can you locate the constellation Leo? Jupiter sits to the right of the Sickle at a magnitude of -2.5. Its large Galilean moons can be seen circling the big planet. Watch as Jupiter’s large moons cast shadows across the face of the giant planet on Saturday 24th. On this day, three of the moons will travel across the heart of Jupiter, casting shadows as they pass. It is a spectacular view when seen through the lens of a telescope, and remember: if you miss it you’ll have to wait another 28 years. If you don’t mind spending the night outside, catch Jupiter as it completes a full rotation in about ten hours. Jupiter is actually the fastest-spinning planet in our solar system, and it presents wonderful sights for viewers. Saturn is visible early in the month in the southeast about 2 hours before dawn. Try to locate the constellations Scorpius and Libra; Saturn is located nearby.
As the month goes on, the planet will continue to move, and it should be visible for four hours by the end of the month. In conclusion, January 2015 presents many viewing opportunities for amateur astronomers, so prepare a viewing area in these coming weeks as we watch more planets and other celestial objects make the rounds in the skies. Capture some of these sights on camera, because it might be a while before you see the same alignments in our skies. Hopefully this night sky guide will assist you a bit.
Orbital Mechanics and Astrodynamics: Techniques and Tools for Space Missions Gerald R. Hintz To determine the arrival date at the target body. To provide a preliminary estimate of the amount of propellant to be carried onboard the spacecraft. To provide information on the capture orbit when the spacecraft arrives at the target body. Then, once the accelerations are given, it is necessary to use integral calculus in order to get from the second derivatives to the positions. In a more general context, where the mass may be changing with time, such as happens with an extended application of thrust to a vehicle, with the gradual reduction of weight as fuel is used up, or in cases of relativistic speeds, the force is given by the first derivative of momentum, but the principle is the same. In the case of the 2-body problem, where the only force involved is the gravitational attraction between the two bodies, it is frequently said that Newton was able to give a complete solution. That is not, strictly speaking, the case, if one means by "a solution" of a differential equation an expression for the unknown function whose derivatives appear in the equation. In this case, it would mean finding an expression for the position as a function of time. However, what Newton showed was that the orbit of each of the bodies lies on a conic section in a fixed inertial frame of reference, and in the case considered by Kepler, where the orbit is an ellipse, there is an explicit expression for the time as a function of the position. What one wants, of course, is x as a function of t, and much effort and ingenuity have gone into finding effective means of solving Kepler's equation for x in terms of t.
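Inverting Kepler's equation, M = E − e·sin E, for the eccentric anomaly E (from which position on the ellipse follows) has no closed-form solution, but Newton's method converges rapidly. A minimal sketch (the starting-guess convention is a common one, not taken from the text):

```python
import math

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Invert Kepler's equation M = E - e*sin(E) for the eccentric anomaly E
    using Newton's method. M in radians, 0 <= e < 1 (elliptical orbit)."""
    E = M if e < 0.8 else math.pi  # common starting guess
    for _ in range(max_iter):
        f = E - e * math.sin(E) - M      # residual of Kepler's equation
        fp = 1.0 - e * math.cos(E)       # derivative df/dE
        dE = f / fp
        E -= dE
        if abs(dE) < tol:
            break
    return E

# Example: mean anomaly of 1.0 rad on a fairly eccentric (e = 0.6) orbit.
E = solve_kepler(1.0, 0.6)
M_back = E - 0.6 * math.sin(E)  # substituting back recovers the mean anomaly
print(E, M_back)
```

Substituting the solved E back into Kepler's equation reproduces the input mean anomaly to machine precision, which is a handy self-check.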
Lagrange did extensive work on the problem, in the course of which he developed both Fourier series and Bessel functions, named after later mathematicians who investigated these concepts in greater detail. Both Laplace and Gauss made major contributions, and succeeding generations continued to work on the subject. When there are more than two bodies involved, the problem cannot be solved analytically; instead, the integration of positions from accelerations must be done numerically: now, with high-speed computers. So, numerical integral calculus is a major factor in spacecraft navigation. One may picture navigation as being the modeling of mother nature on a computer. At some time, with the planets in their orbits, a spacecraft is given a push outward into the solar system. Its subsequent orbit is then determined by the gravitational forces upon it due to the sun and planets. We compute these, step-by-step in time, seeing how the changing forces determine the motion of the spacecraft. This is very similar to what one may picture being done in nature. How does one get an accurate orbit in the computer? The spacecraft's orbit is measured as it progresses on its journey, and the computer model is adjusted in order to best fit the actual measurements. Here one uses another type of calculus: estimation theory. It involves changing the initial "input parameters" (starting positions and velocities) in the computer in order to make the "output parameters" (positions and velocities at subsequent times) match what is being measured: adjusting the computer model to better fit reality. Also in navigation, one must "reduce" the measurements. Usually, the measurements don't correspond exactly with the positions in the computer; one must apply a few formulae before a comparison can be made.
For instance, the positions in the computer represent the centers of mass of the different planets; a radar echo, however, measures the path from the radio antenna to the spot on a planet's surface from which the signal bounces back to earth. This processing involves the use of trigonometry, geometry, and physics. Finally, there is error analysis, or "covariance" calculus. In the initial planning stages of a mission, one is more interested in how accurately we will know the positions of the spacecraft and its target than in the exact positions themselves. With low accuracy, greater amounts of fuel are required, and it could be that some precise navigation would not even be possible. Covariance analysis takes into account (1) what measurements we will have of the spacecraft: how many and how good, (2) how accurately we will be able to compute the forces, and (3) how accurately we will know the position of the target. These criteria are then used in order to determine how closely we can deliver the spacecraft to the target. Again, poor accuracy will require more fuel to correct the trajectory once the spacecraft starts approaching its final target. One of the mathematical tools used to optimize some feature of a flight trajectory, such as fuel consumption or flight time, is a maximum principle introduced by Pontryagin. Despite its conceptual simplicity, huge engineering challenges have to be overcome. Our research combines innovative technology, modern orbital dynamics and systems engineering in a multi-disciplinary optimisation approach, in order to select the most advantageous design points of future solar power satellites. This research theme covers a number of topics related to the theoretical and numerical study of spacecraft orbital dynamics. We are particularly interested in spacecraft which have a large cross-sectional area with respect to their mass.
One family is made up of the so-called solar sails, in which solar radiation pressure is collected by a large lightweight reflective membrane deployed from the spacecraft. Solar sailing is a very promising technology for spacecraft propulsion. These devices provide a small, but continuous, acceleration to the spacecraft. As a consequence, the resulting orbits do not follow the well-known Keplerian laws, and the same happens when considering multi-body environments like the Sun-Earth or Sun-Moon systems. For these reasons, we are interested in techniques for space mission design and trajectory optimisation. We design attitude control and estimation algorithms. In the past, we developed detumbling and sun-tracking algorithms for UKube. Our main research interest is implementing efficient attitude control and estimation algorithms for small satellites. We are also investigating the control of swarms and constellations of spacecraft.
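The "step-by-step in time" modelling of nature in the computer described earlier can be sketched with a simple leapfrog integrator for a body orbiting the Sun, in normalised units (an illustrative toy, not any mission's actual integration scheme):

```python
import math

# Step-by-step integration of motion from forces: a body in the Sun's gravity,
# in normalised units where GM = 1 and a circular orbit of radius 1 has period 2*pi.
GM = 1.0

def accel(x, y):
    """Gravitational acceleration toward the origin (inverse-square law)."""
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

def integrate(x, y, vx, vy, dt, steps):
    """Kick-drift-kick leapfrog: a symplectic scheme with good energy behaviour."""
    ax, ay = accel(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
        x += dt * vx;        y += dt * vy          # drift
        ax, ay = accel(x, y)                       # new forces at the new position
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    return x, y, vx, vy

# Circular orbit: start at (1, 0) with speed 1; after one full period (2*pi)
# the body should return very close to its starting point.
steps = 10_000
dt = 2 * math.pi / steps
x, y, vx, vy = integrate(1.0, 0.0, 0.0, 1.0, dt, steps)
print(x, y)
```

Real navigation software adds perturbations from every planet and uses far more sophisticated integrators, but the compute-forces, advance-state loop is the same idea.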
In recent years, developments in the space industry and the capability of building, launching and injecting satellites into low orbit have been confined to a limited number of countries with such technology. In order to complete the entire cycle of the space industry, satellite navigation and control, which have been neglected since the beginning of space science activities, must be given special consideration. Orbit determination, in one sentence, is the application of a variety of techniques for estimating the orbits of objects such as the Moon, planets, and spacecraft. In dynamical astronomy, orbit determination is the process of determining orbital parameters from observations. In particular, orbit determination for solar-system bodies is the adjustment of noisy orbital observations, which contain random and systematic errors, to force models, and the estimation of model parameters from the observations (in order to obtain a mathematical model that describes the path of the celestial object before and after the observation epoch). To simplify, this process is divided into two parts: first, an initial orbit is estimated, and then corrections are made to the determined orbit. The purpose of initial orbit determination for an object moving around the Earth is to calculate the object's orbital parameters from a few observations; initial orbit determination is also used for recovering objects lost in space. To determine a precise orbit, it is necessary to determine the initial orbit with good accuracy, which indicates the importance of initial orbit determination. Different types of observations are used for initial orbit determination; they can be collected by ground stations and include azimuth angles, elevations, ranges, and range rates.
These observations are made by radar and telescope, because observations collected without instruments, with the naked eye, lack the precision and sensitivity needed to determine a space object's orbit; but since range observations are expensive and sometimes impossible to obtain, angular observations are used. In this paper, a new method is presented for extracting angular observations from an optical imaging system. The method is automatic and efficient, with real-time data-analysis capability, and is based on astronomical imaging with CCDs (charge-coupled devices). Images captured this way contain a great deal of information about stars, galaxies, satellite streaks, etc. An automatic method is presented for streak detection consisting of five steps: 1) image denoising, 2) extraction of star centres, 3) extraction of the astronomical coordinates of the stars (declination and right ascension), 4) matching between the astronomical and pixel coordinates of the stars, and 5) calculation of the satellite streak model. Then, using the extracted model, the coordinates of the beginning and end points of the streak are detected. From the celestial coordinates of the beginning and end points of the streak, the azimuth and elevation of the satellite at both epochs are determined. On the other hand, to evaluate the proposed method and the validity of the input parameters for initial orbit determination, the azimuth and elevation values of the beginning and end points of the streak can be calculated from a precise orbit file, and these results compared with those of the proposed method. The comparison indicates a difference of about milliseconds.
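The final step described above, turning the celestial coordinates (right ascension, declination) of the streak endpoints into azimuth and elevation, is a standard equatorial-to-horizontal transformation. A minimal sketch, ignoring refraction and using an assumed site latitude and local sidereal time (none of these numbers come from the paper):

```python
import math

def radec_to_altaz(ra_deg, dec_deg, lst_deg, lat_deg):
    """Convert equatorial (RA, Dec) to horizontal (azimuth, elevation).
    lst_deg is the local sidereal time in degrees; azimuth is measured
    from north through east. Refraction is ignored."""
    H = math.radians(lst_deg - ra_deg)  # hour angle
    dec = math.radians(dec_deg)
    lat = math.radians(lat_deg)

    # Components of the unit vector in the local (north, east, up) frame
    up = math.cos(dec) * math.cos(H) * math.cos(lat) + math.sin(dec) * math.sin(lat)
    north = -math.cos(dec) * math.cos(H) * math.sin(lat) + math.sin(dec) * math.cos(lat)
    east = -math.cos(dec) * math.sin(H)

    elev = math.degrees(math.asin(up))
    az = math.degrees(math.atan2(east, north)) % 360.0
    return az, elev

# Sanity check: an object on the meridian (hour angle 0), 10 degrees south of the
# zenith for a site at latitude 35 N, should sit due south at elevation 80.
az, elev = radec_to_altaz(ra_deg=120.0, dec_deg=25.0, lst_deg=120.0, lat_deg=35.0)
print(az, elev)
```

Applying this to the streak's two endpoints gives the azimuth/elevation pairs that feed the initial orbit determination.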
Astronomers have been analysing light signals from 2.5 million stars observed by the Sloan Digital Sky Survey, and have detected strange 'strobe-like' bursts coming from not one, but 234 stars. The pair went so far as to suggest that these light pulses "have exactly the shape of an Extra-Terrestrial Intelligence signal", and now Stephen Hawking's alien-hunting mission is on the case to confirm or disprove these claims. Let's be clear right off the bat — the claim by astronomers Ermanno Borra and Eric Trottier from Laval University in Canada that 234 extra-terrestrial civilisations might be beaming a coordinated light signal towards Earth based on anomalies in the data is extremely premature. It's also pretty irresponsible to be throwing the possibility of "Aliens!" around, given the fact that the paper has yet to be formally peer-reviewed, and replication of the results has not been attempted by an independent research team. It's also worth noting that researchers from the Breakthrough Listen project — funded by Stephen Hawking and Russian billionaire Yuri Milner, and run by the SETI (Search for Extraterrestrial Intelligence) Research Centre at the University of California Berkeley — say aliens are about the last thing they expect to find when they investigate these claims. But when the Universe — or let's be honest, human error — serves up something intriguing, it's almost always worth a second look. "The one in 10,000 objects with unusual spectra seen by Borra and Trottier are certainly worthy of additional study," the SETI Research Centre announced in a statement last week. "However, extraordinary claims require extraordinary evidence. It is too early to unequivocally attribute these purported signals to the activities of extraterrestrial civilisations." So let's run through what Borra and Trottier have actually found. A "preferred result?" 
Things start off a little wobbly, because as Shannon Hall explains for New Scientist, Borra had hypothesised back in 2012 that if an extraterrestrial civilisation wanted to contact us, it would make sense to beam laser pulses at us that look unnatural enough to warrant investigation. He said the kind of energy required to blast such a signal towards Earth from elsewhere in the galaxy "is not crazy", so he teamed up with Trottier to pore over the 2.5 million stars recorded by the Sloan Digital Sky Survey to see if any of them have produced such a signal. And there's our first warning signal — a good scientist knows not to approach data with a preferred result or preconceived notion in mind, because that can introduce bias, and the scientist could subconsciously (or otherwise) ignore information that goes against that. The pair reports that they detected the exact type of signal they had been looking for in some 234 stars. "We find that the detected signals have exactly the shape of an ETI (Extra-Terrestrial Intelligence) signal predicted in the previous publication and are therefore in agreement with this hypothesis," they conclude in a paper on pre-print website, arXiv.org. As Hall explains for New Scientist, if you take the aliens out of it, what they found was that the overwhelming majority of the 2.5 million stars are in the same spectral class as our Sun, but 234 of them are beaming pulses of the same periodicity — roughly 1.65 picoseconds — towards Earth. Could it be human or software error in data calibration or analysis? Absolutely, and the pair's conclusions have — not surprisingly — been met with a whole lot of criticism in the scientific community. How to verify "There is perhaps no bolder claim that one could make in observational astrophysics than the discovery of intelligent life beyond the Earth," director of the SETI Research Centre at Berkeley, Andrew Siemion, told Hall. 
"It's an incredibly profound subject — and of course that's why many of us devote our lives to the field and put so much energy into trying to answer these questions. But you can't make such definitive statements about detections unless you've exhausted every possible means of follow-up." That's why the SETI Research Centre and the Breakthrough Listen project have decided to get involved — they want to know what's really going on here. They explain that we already have internationally agreed-upon protocols if you want to find evidence of advanced life beyond Earth, which include independent verification using two or more telescopes, and "careful work" to determine false positive rates and rule out all other explanations. They've also established a 0 to 10 scale for quantifying detections of phenomena that may indicate the existence of advanced life beyond the Earth called the Rio Scale. They say the Borra-Trottier result is currently a 0 or 1 ("None/Insignificant") on this scale, but they're still determined to get to the bottom of things. "The Berkeley SETI Research Centre team has added several stars from the Borra and Trottier sample to the Breakthrough Listen observing queue on the 2.4-metre Automated Planet Finder (APF) optical telescope," they announced last week. "The capabilities of the APF spectrograph are well matched to those of the original detection, and these independent follow-up observations will enable us to verify or refute the reported detections." We'll have to wait and see what they find, but let's all be glad that if someone wants to throw claims of aliens around, they'd better be ready to answer to the SETI researchers first.
Centenary of cosmological constant lambda Insights into its 100-year history reveal how the cosmological constant was marginalised by physicists before being reinstated by astronomers to explain the accelerated expansion of the universe New York | Heidelberg, 11 July 2018 Physicists are now celebrating the 100th anniversary of the cosmological constant. On this occasion, two papers recently published in EPJ H highlight its role in modern physics and cosmology. Although the term was first introduced when the universe was thought to be static, today the cosmological constant has become the main candidate for representing the physical essence believed to be responsible for the accelerated expansion of our universe. Before becoming widely accepted, the cosmological constant was for decades the subject of many discussions about its necessity, its value and its physical essence. Today, there are still unresolved problems in understanding the deep physical nature of the phenomena associated with the cosmological constant. In his paper, Bohdan Novosyadlyj, affiliated with the National University of Lviv, Ukraine, explains how Albert Einstein introduced the cosmological constant in 1917 to make the then-accepted model of a static universe work. Its deep physical meaning, however, escaped Einstein. Following the discovery of evidence for a non-static universe in 1929, Einstein regretted introducing this constant into the equations of general relativity. Meanwhile, other scientists tried for decades to understand its physical meaning and establish its magnitude. When evidence for dark energy emerged from supernova observations in 1998 (the term itself was coined by Michael Turner), scientists began to consider alternatives to the cosmological constant for modelling it. In another paper, Cormac O'Raifeartaigh from Waterford Institute of Technology, Ireland, and colleagues present a detailed analysis of the 100-year history of the cosmological constant. 
Starting with static models of the universe, the paper explains how the constant became marginalised following the discovery of cosmic expansion. Subsequently, it was revived to address specific cosmic puzzles such as the timespan of expansion, the formation of galaxies and the red-shifts of quasars. More recently, the constant has acquired greater physical meaning, as it has helped to match further recent observations with theory. Specifically, it helped reconcile current theory with the recently observed phenomenon of dark energy, as evidenced by the measurement of present cosmic expansion using the Hubble Space Telescope, the measurement of past expansion using supernovae, and the measurement of the cosmic microwave background by balloon and satellite. “Century of Λ” by B. Novosyadlyj (2018), European Physical Journal H, DOI 10.1140/epjh/e2018-90007-y “One hundred years of the cosmological constant: from ‘superfluous stunt’ to dark energy” by C. O'Raifeartaigh, M. O'Keeffe, W. Nahm, and S. Mitton (2018), European Physical Journal H, DOI 10.1140/epjh/e2017-80061-7 For more information visit: www.epj.org Services for Journalists Sabine Lehr | Springer | Physics Editorial Department tel +49-6221-487-8336 | [email protected]
The Antipodal Moon By Andrew Hall Science has puzzled over the Moon more than any other body in space. It’s the only place mankind has walked and brought back a ton of rocks. Yet, some of its features puzzle scientists as much today as the day they discovered it wasn’t made of cheese. The biggest question is, why are the near and far-sides so different? The Moon is tidally locked to Earth and, because of that, presents one side to Earth at all times. The near-side is dominated by the smooth, dark maria that appear to be the result of enormous impacts that left seas of magma. The far side, however, has little maria, and is pockmarked with many more craters. The near-side crust is only 60 km thick, overlain with 3 to 5 km of regolith — the pulverized concrete-like dust and rock of the lunar topsoil. The far-side crust is much thicker; so much so, it is believed to be the cause of a significant offset to the Moon’s center of mass. The far-side crust is 100 km thick, covered in 10 to 15 km of regolith, and so extensively carpeted with craters that they often overlap. Also of note, the Moon exhibits remnant magnetism that portrays the exact same pattern of antipodal contrast from the near to far-side. The areas of highest contrast in crustal thickness and magnetism are skewed far to the north on the near-side, and far to the south on the far-side. They directly oppose each other. Standard theory has changed over the years when attempting to explain the Moon’s antipodal nature. At one time, theory held that Earth protected the Earth-facing side of the Moon from impact. That theory lasted until statistical analysis showed the tiny diameter of Earth in relation to the orbiting Moon could not have blocked more than 1 percent of incoming asteroids . . . hardly enough to notice, let alone explain the difference in crater density. 
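The shielding argument is easy to check with geometry: the fraction of isotropically arriving impactors that Earth could intercept is roughly the solid angle it subtends as seen from the Moon, divided by the full sphere. A quick order-of-magnitude sketch using mean values:

```python
import math

R_EARTH_KM = 6371.0      # mean Earth radius
MOON_DIST_KM = 384400.0  # mean Earth-Moon distance

# Half-angle subtended by Earth as seen from the Moon.
theta = math.asin(R_EARTH_KM / MOON_DIST_KM)

# Solid angle of that spherical cap, as a fraction of 4*pi steradians.
blocked_fraction = (1.0 - math.cos(theta)) / 2.0

print(f"{blocked_fraction:.2%}")  # ~0.01% — far below the 1 percent bound cited
```

So Earth blocks only a few thousandths of a percent of incoming directions, consistent with the statistical result quoted above.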
Current scientific thought maintains that the Moon was created when a Mars-sized body collided with the Earth, dislodging debris that coalesced into the Moon in Earth’s orbit over 4.5 billion years ago. A period of heavy bombardment then left the majority of impact craters and a molten interior about 3.9 billion years ago. Volcanism subsequently filled the impact basins on the near-side with lava, through cracks in the thinner crust, to solidify into the maria around 3.2 billion years ago. The far-side, having a thicker crust, experienced far less lava flow, therefore, preserving its craters. Another popular theory proposes that, because Earth was a molten mass of rock at the time of the early bombardment, its infrared glow and tidal forces heated the near-side of the Moon sufficiently to delay its freezing into solid rock. The far-side cooled much faster and preserved the craters left by the early bombardment. Each of these theories attempts to explain the difference in crater density using gravity, but fails to address why the far-side crust is thicker in the first place. A gravity model would predict the center of the Moon’s mass should face Earth. Some have speculated the Moon got turned around between bombardment and the end of lunar volcanism although no mechanism has been accepted that would cause that to occur. The EU community also has theories to explain lunar dichotomies. The approach it uses starts by asking more questions to get perspective on the problem. It is curious that the density of cratering on near and far-sides is not nearly as marked as the crater density at the poles. The following slides show the north and south poles, and the near and far-sides for comparison. Standard theories make little mention, if any, of this anomaly, let alone attempt to explain it. Another way to progress is to look at a more holistic picture. For instance, how do other planets and moons look? 
Mars also has spiral features at the poles, as highlighted by the polar ice caps on the north and south poles. Although the swirl is evident in the ice, it is a feature deeply sculpted in the underlying rock. Is a pattern beginning to appear? A look at Mars’ crustal thickness and crater density also shows similarities. Crater density is antipodal, too, as seen alternating from smooth to cratered hemispheres in these four views of Mars. Crustal density and remnant magnetism on Mars also follow the same patterns. It’s also apparent the remnant magnetism matches the dark, swirling deposits in the southern hemisphere, similar to how the moon’s magnetism matches its surface features. Like the Moon, the antipodal features of Mars are in direct opposition. Although there is less data to go on, other planets and moons show similar traits. The following slides show Mercury, Callisto, Enceladus, Triton and Pluto, all of which have one side smooth and one side with heavily cratered highlands. The demarcation is often stark, as exemplified in the photo below of a crater trail crossing boundaries on Ganymede. A scientific approach should attempt to address these similarities in a holistic fashion, looking for commonality of cause and effect. The standard methodology explains geological anomalies by conjuring a large body of mass to put some catalytic gravity into the explanation. According to conventional thinking, the antipodal crustal thickness on Mars is the result of an oblique impact that blasted away crust from the northern hemisphere early in Mars’ history. Because the impact struck the red planet a glancing blow, the whole crust didn’t melt, leaving the northern crust thinner than the southern crust. But the impact theories still leave geophysicists with the problem of explaining why both the Moon and Mars are magnetized on only one hemisphere. According to the most recent theory, the remnant magnetism is the result of past dynamo current from molten interiors. 
Neither the Moon nor Mars has an active core dynamo today. On the Moon, the dynamo theory is based on a circulating molten core after the Moon’s formation. It left a magnetic imprint that was subsequently wiped away on one hemisphere, when it re-melted after the heavy bombardment. Unfortunately, the age of the magnetized rock implies that the lunar dynamo had to still be going some 3.7 billion years ago, about 800 million years after the Moon’s formation. That is longer than expected for natural circulation to cool the molten interior. The Moon’s small core should have cooled off within a few hundred million years. So now, planetary scientists are seeking more gravitational forces to keep their model of the dynamo going. In the case of Mars, it’s never been understood why Mars’ northern hemisphere has virtually no magnetic field. Evidence suggests that the effect is an ancient feature that should have formed before the dynamo shut down, and well after the assumed impact event. Thus, it should be magnetized. Several ad hoc theories have been considered. Maybe the north lost its magnetism in the presence of water, or maybe there were impacts after the dynamo shut down that wiped out the north’s magnetism. Most recently, a theory proposed that impacts created differential temperatures, which allowed a single-hemisphere dynamo to form that magnetized only the southern half of the planet. Further ad hoc theories have been proposed with unknown massive bodies involved, as contemporary science tries to explain these scars found on Venus, Ganymede, Europa, Charon and Dione. Perhaps a gravitational paint brush . . . The evidence suggests that all of these planets and moons experienced severe electrical discharges from close contact with neighboring moons and planets during their creation when the solar system’s orbital dynamics were different. A revised electrical story could be told. 
The polar regions experienced cyclonic current events, similar to the polar aurora on Earth — though many orders of magnitude more energetic — from a dissimilarly charged body in proximity. The cratering and swirls were the result of electrical discharge and cathodic erosion at one pole, etching away the surface. The opposite pole experienced electrical discharge with anodic accumulation of material, forming a dome, drawing in matter from the nearby planet, as well as sweeping in dust from the eroding pole. As the current between bodies built at the poles, it coursed across the surface and through the interior, seeking conductive channels to short-circuit. Ionized dust created a thick plasma atmosphere, and flash-overs occurred, coursing across the mid-latitudes, scarring the face of the planet. The same effect can be witnessed today on a far more subtle scale on Earth. The polar aurora illustrate solar currents streaming into the atmosphere, and the continuous belt of thunderstorms across the equatorial latitudes show charge differentials building and discharging as violent lightning. Climate, seismic and volcanic effects wax and wane with the solar current. Natural electromagnetic forces in arcing current sheets differentiate charge potentials, eroding one hemisphere cathodically, while anodically depositing magnetized material on the other. It sorts material and preferentially deposits it in the kind of bewildering array that is actually seen correlating to these features. These energetic currents built mountains, raised volcanic blisters and tornadic electrical winds; melted bedrock and left craters, lava flows, rilles and canyons — the scars of tremendous thunderbolts. Dust and debris that blanketed one hemisphere trapped gases that burst through the layers of dust, adding simple craters and cones to an already chaotic array of lightning scars and impacts from falling debris. These processes continue even today. 
Mercury, Mars, the Moon, comets and even distant asteroids like Ceres exhibit ongoing electrical etching, spurts of glowing discharge and tails of ionic material in response to the solar current. The evidence is not only in these macro-features but at every level of detail. Below are several examples of anomalous planetary features that can be explained electrically, without tripping into contradictions, or stretching the probabilities, the physics, or the imagination. Much work remains to understand the physics of solar system formation, its temporal context and the orbital dynamics that caused events resulting in the types of morphology seen today. Building models and equations to better define first causes is the devil in the detail. Because EU theory provides an interdisciplinary, holistic approach, there is more evidence on which to rely. Events of electrical planetary exchange have been recorded in the history of mankind. Witnessed events of Mars and Venus in an electromagnetic embrace, and the consequences here on Earth, were recorded in the mythology that is collectively referred to as Thunderbolts of the Gods. Dave Talbott explains this aspect of the historical record in the series, Discourses on an Alien Sky. Antipodal is a consistent theme in planet morphology for all of the rocky planets and moons. The inherent dipolar nature of electromagnetism from the subatomic scale to the cosmic produces immense forces. It creates not only planets and moons, but stars, galaxies and the entire Electric Universe. For more reading on planetary features and EU theory on their formation, follow these links to related Thunderblogs and presentations: Did Van Gogh Paint This? by Andrew Hall Andrew Hall is an engineer and writer, who spent thirty years in the energy industry. He can be reached at [email protected] or https://andrewdhall.wordpress.com/ Unless otherwise captioned, all images courtesy of NASA, JPL and ESA. 
The ideas expressed in Thunderblogs do not necessarily express the views of T-Bolts Group Inc or The Thunderbolts Project™.
An interesting bit of recent astronomy news is the possible discovery of a black hole in a nearby star system, HR 6819. If confirmed, this black hole would be the closest black hole to us at “just" 1,000 light-years away. The announcement's language is tentative because the mass we derive for the unseen object depends on us knowing the mass of its companion visible star. HR 6819 is actually a triple-star system with the black hole and a hot blue star orbiting each other and another hot blue star orbiting them both. Stellar-mass black holes are usually found by looking for bright, luminous X-ray sources. Objects that produce a lot of X-rays are much rarer in a galaxy than those that produce a lot of ordinary visible light, such as stars. Supercompact and massive objects like neutron stars and black holes produce a lot of X-rays from the gas spilled onto them by a nearby companion star. The swirling gas forms a disk around the compact object and can reach many millions of degrees in temperature from all the gas friction in the disk that gets superconcentrated due to the small size and extreme gravity of the object. This gas is so hot that it produces a lot of X-rays. The visible star is hundreds of thousands of miles in diameter while a neutron star or black hole is only about the size of a city — very small in diameter! We can’t see the compact object directly. If the compact object has a visible star companion, we track the motion of the visible star as it orbits the gravitational balance point between it and the compact object. The visible star gets yanked around by the unseen compact object and we can derive the mass of the compact object by how much it yanks the visible star — bigger yank means the compact object is more massive. The black hole in HR 6819 was not found from the usual X-ray method. Astronomers noticed that one of the blue stars in the system had an extra wobble in its motion that indicated it was orbiting something. 
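The "yank" translates into numbers through the binary mass function: from the visible star's orbital period P and radial-velocity semi-amplitude K alone, f(M) = P·K³/(2πG) gives a strict lower limit on the unseen companion's mass. A sketch with hypothetical round values (these are not the published HR 6819 measurements):

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # solar mass, kg

def mass_function(period_days, k_kms):
    """Binary mass function f(M) = P*K^3 / (2*pi*G): a strict lower
    bound on the unseen companion's mass, in solar masses."""
    p = period_days * 86400.0  # orbital period in seconds
    k = k_kms * 1e3            # velocity semi-amplitude in m/s
    return p * k**3 / (2.0 * math.pi * G) / M_SUN

# Hypothetical illustration: a 40-day orbit with a 60 km/s wobble.
f_m = mass_function(40.0, 60.0)
print(f"companion is at least {f_m:.2f} solar masses")
```

Turning this lower bound into an actual companion mass requires the visible star's mass and the orbital inclination, which is exactly why an error in the assumed blue-star mass propagates into the black-hole claim.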
Whatever the star was orbiting was not producing any light we could see and the derived mass of the unseen object was four times the mass of the sun, much too large to be a neutron star. That leaves a black hole as the only option. However, if the assumed mass of the blue star is wrong, then the derived mass of the unseen object could be much smaller. So how do we figure out if the object is a black hole? We make a lot more observations of the two hot blue stars! The first black hole discovered was an X-ray system called Cygnus X-1 in 1964. It is 6,100 light-years away. Before HR 6819, the closest black hole was V616 Monocerotis discovered in 1986 at a distance of 5,100 light-years away. If you make a plot of the distances of Cygnus X-1, V616 Monocerotis and HR 6819 from us versus their discovery dates and use the same amount of “innovative thinking” exhibited in some websites that sprang up recently, you arrive at the conclusion that by the year 2030, we should discover a black hole in our solar system. The conclusion is stronger if you ignore the V616 Monocerotis data. A good example of a little knowledge being a dangerous thing. IN THE NIGHT SKY Back to reality: Venus is now dropping rapidly back toward the sun in our evening western sky as it moves to being between us and the sun on June 3. Those with very sharp eyes may be able to notice that Venus has a thin crescent shape even without using binoculars. By the end of May it will be almost one arc-minute tall but razor thin with just a few percent of its dayside visible to us. Before then, you’ll be able to see Mercury climbing up toward Venus over this coming week and be right next to Venus on the evening of May 21. As Venus drops down closer to the sun, Mercury will continue climbing upward and reach its greatest angular separation from the sun on June 4. During the last half of the month, you might be able to spot a fuzzy object without binoculars passing through Perseus and Auriga. 
That will be Comet SWAN, which will pass next to the bright star Capella at the end of the month. In the early morning sky, you’ll see Mars continue to brighten as we draw closer to it. We’re also catching up to Jupiter and Saturn, so they’re now appearing to move backward as we overtake them. They also are getting brighter.
Gibbous ♋ Cancer Moon phase on 19 November 2062, Sunday, is Waning Gibbous, 17 days old. Moon is in Cancer. Previous main lunar phase is the Full Moon, 2 days before, on 16 November 2062 at 20:48. Moon rises in the evening and sets in the morning. It is visible to the southwest and it is high in the sky after midnight. Moon is passing the first ∠3° of the ♋ Cancer tropical zodiac sector. The lunar disc appears visually 0.9% wider than the solar disc. Moon and Sun apparent angular diameters are ∠1960" and ∠1942". Next Full Moon is the Cold Moon of December 2062, after 26 days, on 16 December 2062 at 08:17. There is a low ocean tide on this date. Sun and Moon gravitational forces are not aligned, but meet at a big angle, so their combined tidal force is weak. The Moon is 17 days old. Earth's natural satellite is moving from the middle to the last part of the current synodic month. This is lunation 777 of the Meeus index or 1730 from the Brown series. Length of current lunation 777 is 29 days, 15 hours and 28 minutes. It is 2 hours and 28 minutes shorter than the length of the next lunation, 778. Length of the current synodic month is 2 hours and 44 minutes longer than the mean length of the synodic month, but it is still 4 hours and 19 minutes shorter than the 21st century's longest. This New Moon's true anomaly is ∠86.4°. At the beginning of the next synodic month the true anomaly will be ∠123°. The length of upcoming synodic months will keep increasing, since the true anomaly gets closer to the value of New Moon at the point of apogee (∠180°). The Moon reaches its point of perigee on this date at 18:46, 11 days after the last apogee on 7 November 2062 at 23:42 in ♑ Capricorn. The lunar orbit is starting to get wider: the Moon will be moving outward from the Earth for the next 16 days, until it reaches the point of next apogee on 5 December 2062 at 19:32 in ♒ Aquarius. This perigee Moon is 365 580 km (227 161 mi) away from Earth. 
It is 3 072 km closer than the mean perigee distance, but it is still 4 776 km farther than the closest perigee of the 21st century. 7 days after its ascending node on 12 November 2062 at 01:21 in ♓ Pisces, the Moon is following the northern part of its orbit for the next 5 days, until it crosses the ecliptic from North to South at its descending node on 24 November 2062 at 21:08 in ♍ Virgo. 7 days after the beginning of the current draconic month in ♓ Pisces, the Moon is moving from the beginning to the first part of it. At 04:48 on this date the Moon meets its North standstill point, reaching a northern declination of ∠28.390°. Over the next 13 days the lunar orbit will move in the opposite, southward direction to face a South declination of ∠-28.324° at its southern standstill point on 2 December 2062 at 12:24 in ♐ Sagittarius. After 11 days, on 30 November 2062 at 23:01 in ♐ Sagittarius, the Moon will be in New Moon geocentric conjunction with the Sun, and this alignment forms the next Sun-Moon-Earth syzygy.
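Numbers like "17 days old" and "lunation 777 of the Meeus index" follow from simple mean-value arithmetic: Meeus lunation 0 began with the New Moon of 6 January 2000, and the mean synodic month is 29.530588 days. A sketch using mean values only (real lunation lengths vary by hours, as the text above notes):

```python
from datetime import datetime

SYNODIC_DAYS = 29.530588
# New Moon of 6 January 2000, 18:14 UTC -- start of Meeus lunation 0.
LUNATION_ZERO = datetime(2000, 1, 6, 18, 14)

def moon_age_and_lunation(when):
    """Approximate Moon age (days since New Moon) and Meeus lunation
    number, using mean-synodic-month arithmetic only."""
    elapsed = (when - LUNATION_ZERO).total_seconds() / 86400.0
    lunation = int(elapsed // SYNODIC_DAYS)
    age = elapsed % SYNODIC_DAYS
    return age, lunation

age, lunation = moon_age_and_lunation(datetime(2062, 11, 19, 12, 0))
print(f"age ~{age:.1f} days, Meeus lunation {lunation}")
```

For 19 November 2062 this mean-value estimate lands on lunation 777 and an age of about 17 days, matching the page; precise ephemerides refine the fractional day.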
The Canadian Space Agency (CSA) has a long-standing tradition of innovation and technological development in space. Who can forget the Shuttle Remote Manipulator System (SRMS), more familiarly known as the “Canadarm”, which was essential to the Space Shuttle program? How about its successor, the Canadarm2, which is a crucial part of the International Space Station and even helped assemble it? Looking to the future, the CSA intends to play a similar role in humanity’s return to the Moon – which includes the creation of the Lunar Gateway and Project Artemis. To this end, the CSA recently awarded a series of contracts to private businesses and one university to foster the development of technologies that would assist with national and international efforts to explore the Moon. Check out this image of the Canadian Space Agency’s (CSA) Canadarm2 on the International Space Station. The CSA’s Dextre is attached to one end of the arm. The Canadarm2 played a vital role in assembling the ISS, while Dextre helps maintain the ISS, freeing astronauts from routine yet dangerous spacewalks, and allowing them to focus on science. Finding exoplanets is hard work. In addition to requiring seriously sophisticated instruments, it also takes teams of committed scientists; people willing to pore over volumes of data to find the evidence of distant worlds. Professor Kipping, an astronomer based at the Harvard-Smithsonian Center for Astrophysics, is one such person. Within the astronomical community, Kipping is best known for his work with exomoons. But his research also extends to the study and characterization of exoplanets, which he pursues with his colleagues at the Cool Worlds Laboratory at Columbia University. And what has interested him most in recent years is finding exoplanets around our Sun’s closest neighbor – Proxima Centauri. 
Kipping describes himself as a “modeler”, combining novel theoretical modeling with modern statistical data analysis techniques applied to observations. He is also the Principal Investigator (PI) of The Hunt for Exomoons with Kepler (HEK) project and a fellow at the Harvard College Observatory. For the past few years, he and his team have been taking the hunt for exoplanets to the local stellar neighborhood. The inspiration for this search goes back to 2012, when Kipping was at a conference and heard the news about a series of exoplanets being discovered around Kepler 42 (aka. KOI-961). Using data from the Kepler mission, a team from the California Institute of Technology discovered three exoplanets orbiting this red dwarf star, which is located about 126 light years from Earth. At the time, Kipping recalled how the author of the study – Professor Philip Steven Muirhead, now an associate professor at the Institute for Astrophysical Research at Boston University – commented that this star system looked a lot like our nearest red dwarf stars – Barnard’s Star and Proxima Centauri. In addition, Kepler 42’s planets were easy to spot, given that their proximity to the star meant that they completed an orbital period in about a day. Since they pass regularly in front of their star, the odds of catching sight of them using the Transit Method were good. As Prof. Kipping told Universe Today via email, this was the “ah-ha moment” that would inspire him to look at Proxima Centauri to see if it too had a system of planets: “We were inspired by the discovery of planets transiting KOI-961 by Phil Muirhead and his team using the Kepler data. The star is very similar to Proxima, a late M-dwarf harboring three sub-Earth sized planets very close to the star. 
It made me realize that if that system was around Proxima, the transit probability would be 10% and the star’s small size would lead to quite detectable signals.” In essence, Kipping realized that if such a planetary system also existed around Proxima Centauri, a star with similar characteristics, then they would be very easy to detect. After that, he and his team began attempting to book time with a space telescope. And by 2014-15, they had been given permission to use the Canadian Space Agency’s Microvariability and Oscillation of Stars (MOST) satellite. Roughly the same size as a suitcase, the MOST satellite weighs only 54 kg and is equipped with an ultra-high definition telescope that measures just 15 cm in diameter. It is the first Canadian scientific satellite to be placed in orbit in 33 years, and was the first space telescope to be entirely designed and built in Canada. Despite its size, MOST is ten times more sensitive than the Hubble Space Telescope. In addition, Kipping and his team knew that a mission to look for transiting exoplanets around Proxima Centauri would be too high-risk for something like Hubble. In fact, the CSA initially rejected their applications for this same reason. “MOST initially denied us because they wanted to look at Alpha Centauri following the announcement by Dumusque et al. of a planet there,” said Kipping. “So understandably Proxima, for which no planets were known at the time, was not as high priority as Alpha Cen. We never even tried for Hubble time, it would be a huge ask to stare HST at a single star for months on end with just a 10% chance for success.” By 2014 and 2015, they secured permission to use MOST and observed Proxima Centauri twice – in May of both years. From this, they acquired a month and a half's worth of space-based photometry, which they are currently processing to look for transits. As Kipping explained, this was rather challenging, since Proxima Centauri is a very active star – subject to star flares. 
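The geometric odds Kipping quotes are straightforward: for a randomly oriented orbit, the transit probability is roughly the stellar radius divided by the orbital distance, and the transit depth is the square of the planet-to-star radius ratio. A sketch using approximate published values for Proxima Centauri and an Earth-sized planet at roughly Proxima b's orbital distance (the numbers are assumptions for illustration, not from the article):

```python
R_SUN_KM = 695700.0  # solar radius
AU_KM = 1.496e8      # astronomical unit

# Approximate values (assumed for illustration):
R_STAR = 0.145 * R_SUN_KM  # Proxima Centauri's radius
A_ORBIT = 0.0485 * AU_KM   # orbital distance similar to Proxima b's
R_PLANET = 6371.0          # an Earth-sized planet, km

transit_probability = R_STAR / A_ORBIT    # chance the alignment is right
transit_depth = (R_PLANET / R_STAR) ** 2  # fractional dip in starlight

print(f"probability ~{transit_probability:.1%}, depth ~{transit_depth:.2%}")
```

For orbits roughly ten times tighter, as in the Kepler-42 system, the same formula gives odds near the 10% Kipping mentions; and a dip of a few tenths of a percent around so small a star is what makes the signal "quite detectable".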
“The star flares very frequently and prominently in our data,” he said. “Correcting for this effect has been one of the major obstacles in our analysis. On the plus side, the rotational activity is fairly subdued. The other issue we have is that MOST orbits the Earth once every 100 minutes, so we get data gaps every time MOST goes behind the Earth.” Their efforts to find exoplanets around Proxima Centauri are especially significant in light of the European Southern Observatory’s recent announcement about the discovery of a terrestrial exoplanet within Proxima Centauri’s habitable zone (Proxima b). But compared to the ESO’s Pale Red Dot project, Kipping and his team were relying on different methods. “Essentially, we seek planets which have the right alignment to transit (or eclipse) across the face of the star, whereas radial velocities look for the wobbling motion of a star in response to the gravitational influence of an orbiting planet. Transits are always less likely to succeed for a given star, because we require the alignment to be just right. However, the payoff is that we can learn way more about the planet, including things like its size, density, atmosphere and presence of moons and rings.” In the coming months and years, Kipping and his team may be called upon to follow up on the success of the ESO’s discovery. Having detected Proxima b using the Radial Velocity method, it now lies to astronomers to confirm the existence of this planet using another detection method. In addition, much can be learned about a planet through the Transit Method, which would be helpful considering all the things we still don’t know about Proxima b. This includes information about its atmosphere, which the Transit Method is often able to reveal through spectroscopic measurements. Suffice it to say, Kipping and his colleagues are quite excited by the announcement of Proxima b. As he put it: “This is perhaps the most important exoplanet discovery in the last decade. 
It would be bitterly disappointing if Proxima b does not transit though, a planet which is paradoxically so close yet so far in terms of our ability to learn more about it. For us, transits would not just be the icing on the cake, serving merely as a confirmation signal – rather, transits open the door to learning the intimate secrets of Proxima, changing Proxima b from a single, anonymous data point to a rich world where each month we would hear about new discoveries of her nature and character.” This coming September, Kipping will be joining the faculty at Columbia University, where he will continue in his hunt for exoplanets. One can only hope that those he and his colleagues find are also within reach! For decades, Canada has made significant contributions to the field of space exploration. These include the development of sophisticated robotics, optics, participation in important research, and sending astronauts into space as part of NASA missions. And who can forget Chris Hadfield, Mr. “Space Oddity” himself? In addition to being the first Canadian to command the ISS, he is also known worldwide as the man who made space exploration fun and accessible through social media. And in a recent statement, the Canadian Space Agency (CSA) has announced that it is looking for new recruits to become the next generation of Canadian astronauts. With two positions available, they are looking for applicants who embody the best qualities of astronauts, which includes a background in science and technology, exceptional physical fitness, and a desire to advance the cause of space exploration. Over the course of the past few decades, the Canadian Space Agency has established a reputation for the development of space-related technologies. In 1962, Canada deployed the Alouette satellite, which made it the third nation – after the US and USSR – to design and build its own artificial Earth satellite. 
And in 1972, Canada became the first country to deploy a domestic communications satellite, known as Anik 1 A1. Perhaps the best-known example of Canada’s achievements comes in the field of robotics, and goes by the name of the Shuttle Remote Manipulator System (aka. “the Canadarm”). This robotic arm was introduced in 1981, and quickly became a regular feature within the Space Shuttle Program. “Canadarm is the best-known example of the key role of Canada’s space exploration program,” said Maya Eyssen, a spokesperson for the CSA, via email. “Our robotic contribution to the shuttle program secured a mission spot for our nation’s first astronaut to fly to space – Marc Garneau. It also paved the way for Canada’s participation in the International Space Station.” Its successor, the Canadarm2, was mounted on the International Space Station in 2001, and has since been augmented with the addition of the Dextre robotic hand – also of Canadian design and manufacture. This arm, like its predecessor, has become a mainstay of operations aboard the ISS. “Over the past 15 years, Canadarm2 has played a critical role in assembling and maintaining the Station,” said Eyssen. “It was used on almost every Station assembly mission. Canadarm2 and Dextre are used to capture commercial space ships, unload their cargo and operate with millimeter precision in space. They are both featured on our $5 bank notes. The technology behind these robots also benefits those on Earth through technological spin-offs used for neurosurgery, pediatric surgery and breast-cancer detection.” In terms of optics, the CSA is renowned for the creation of the Advanced Space Vision System (SVS) used aboard the ISS. This computer-vision system uses regular 2D cameras located in the Space Shuttle Bay, on the Canadarm, or on the hull of the ISS itself – along with cooperative targets – to calculate the 3D position of objects around the station. 
But arguably, Canada’s most enduring contributions to space exploration have come in the form of its astronauts. Long before Hadfield was garnering attention with his rousing rendition of David Bowie’s “Space Oddity“, or performing “Is Someone Singing (ISS)” with The Barenaked Ladies and The Wexford Gleeks choir (via a video connection from the ISS), Canadians were venturing into space as part of several NASA missions. Consider Marc Garneau, a retired military officer and engineer who became the first Canadian astronaut to go into space, taking part in three flights aboard NASA Space Shuttles in 1984, 1996 and 2000. Garneau also served as the president of the Canadian Space Agency from 2001 to 2006 before retiring from active service and beginning a career in politics. And how about Roberta Bondar? As Canada’s first female astronaut, she had the additional honor of being designated the Payload Specialist for the first International Microgravity Laboratory Mission (IML-1) in 1992. Bondar flew on the NASA Space Shuttle Discovery during Mission STS-42 in 1992, during which she performed experiments in the Spacelab. And then there’s Robert Thirsk, an engineer and physician who holds the Canadian records for the longest single space flight (187 days 20 hours) and the most time spent in space (204 days 18 hours). All three individuals embodied the unique combination of academic proficiency, advanced training, personal achievement, and dedication that makes up an astronaut. And just like Hadfield, Bondar, Garneau and Thirsk have all retired and gone on to have distinguished careers as chancellors of academic institutions, politicians, philanthropists, noted authors and keynote speakers. All told, eight Canadian astronauts have taken part in sixteen space missions and been deeply involved in research and experiments conducted aboard the ISS. Alas, every generation has to retire sooner or later.
Now that they have made their contributions and moved on to other paths, the CSA is looking for two particularly bright, young, highly-motivated and highly-skilled people to step up and take their place. The recruitment campaign was announced this past Sunday, July 17th, by the Honourable Navdeep Bains – the Minister of Innovation, Science and Economic Development. Those who are selected will be based at NASA’s Johnson Space Center in Houston, Texas, where they will provide support for space missions in progress, and prepare for future missions. Canadian astronauts also periodically return to Canada to participate in various activities and encourage young Canadians to pursue an education in the STEM fields (science, technology, engineering and mathematics). As Eyssen explained, the goal of the recruitment drive is to maintain the best traditions of the Canadian space program as we move into the 21st century: “The recruitment of new astronauts will allow Canada to maintain a robust astronaut corps and be ready to play a meaningful role in future human exploration initiatives. Canada is currently entitled to two long-duration astronaut flights to the ISS between now and 2024. The first one, scheduled for November 2018, will see David Saint-Jacques launch to space for a six-month mission aboard the ISS. The second flight will launch before 2024. As nations work together to chart the next major international space exploration missions, our continued role in the ISS will ensure that Canada is well-positioned to be a trusted partner in humanity’s next steps in space. “Canada is seeking astronauts to advance critical science and research aboard the International Space Station and pave the way for human missions beyond the Station. Our international partners are exploring options beyond the ISS. This new generation of astronauts will be part of Canada’s next chapter of space exploration.
That may include future deep-space exploration missions.” The recruitment drive will be open from June 17th to August 15th, 2016, and the selected candidates are expected to be announced by next summer. This next class of Canadian astronaut candidates will start their training in August 2017 at the Johnson Space Center. The details can be found at the Canadian Space Agency‘s website, and all potential applicants are advised to read the campaign information kit before applying. Alongside its efforts to find the next generation of astronauts, the Canadian government’s 2016 annual budget has also provided the CSA with up to $379 million over the next eight years to extend Canada’s participation in the International Space Station through to 2024. Gotta keep reaching for those stars, eh? TORONTO, CANADA – NASA isn’t “reading too much” into a report that the Russians will spend $8 billion on the International Space Station through 2025, the head of the agency says. That date is five years past the current international agreements to operate the space station. The Russian announcement comes at a pivotal time for NASA, which is looking to extend operations on the station to at least 2024. Other space agency heads have not yet signed on. Russia is the major partner for NASA on the station, given that it operates several modules and sends astronauts to and from Earth on Soyuz spacecraft. When Deputy Prime Minister Dmitry Rogozin made the funding announcement, said NASA Administrator Charles Bolden, Rogozin was speaking of a budget request that is before the State Duma. The Duma is Russia’s lower house of government. “I am told that’s why he said that,” Bolden said at a press conference yesterday (Sept. 29) at the International Astronautical Congress, citing a conversation he had with Bill Gerstenmaier, NASA’s human exploration associate administrator.
“You shouldn’t read too much into that.” Other member agencies of the space station gave noncommittal responses when asked if they would sign on to an extension. “The [European] member states will be invited to give their views on what [to do] after 2020,” said Jean-Jacques Dordain, who heads the European Space Agency. He added that any extension would require a financial commitment, as an agreement without money is “only principles.” Similarly, Canadian Space Agency chief Walter Natynczyk said the money allocated to his agency will bring it through to 2020, but “we will have a look at the entire value proposition when we put a case before the government of Canada.” The Russian agreement with NASA came under scrutiny earlier this year as tensions erupted in Ukraine while Russian soldiers were in the country. This year, Crimea was annexed by Russia, drawing condemnation from several countries, including the United States. The central piece of the “pathfinder” backplane that will hold all the mirrors for NASA’s James Webb Space Telescope (JWST) has arrived at the agency’s Goddard Space Flight Center in Maryland for critical assembly testing on vital parts of the mammoth telescope. The pathfinder backplane arrived at Goddard in July and has now been hoisted into place onto a huge assembly stand inside Goddard’s giant cleanroom, where many key elements of JWST are being assembled and tested ahead of the launch scheduled for October 2018. The absolutely essential task of JWST’s backplane is to hold the telescope’s 18-segment, 21-foot-diameter primary mirror nearly motionless while floating in the utterly frigid space environment, thereby enabling the telescope to peer out into deep space for precise science-gathering measurements never before possible. Over the next several months, engineers will practice installing two spare primary mirror segments and one spare secondary mirror onto the center part of the backplane.
The purpose is to gain invaluable experience practicing the delicate procedures required to precisely install the hexagonal mirrors onto the actual flight backplane unit after it arrives. The telescope’s primary and secondary flight mirrors have already arrived at Goddard. The mirrors must remain precisely aligned in space in order for JWST to successfully carry out science investigations. While operating at extraordinarily cold temperatures between -406 and -343 degrees Fahrenheit, the backplane must not move more than 38 nanometers, approximately 1/1,000 the diameter of a human hair. The backplane and every other component must function and unfold perfectly and to precise tolerances in space, because JWST has not been designed for servicing or repairs by astronaut crews voyaging beyond low-Earth orbit into deep space, William Ochs, Associate Director for JWST at NASA Goddard, told me in an interview during a visit to JWST at Goddard. Watch this video showing movement of the pathfinder backplane into the Goddard cleanroom. Video Caption: This is a time-lapse video of the center section of the ‘pathfinder’ backplane for NASA’s James Webb Space Telescope being moved into the clean room at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. Credit: NASA/Chris Gunn The actual flight backplane is composed of three segments – the main central segment and a pair of outer wing-like parts which will be folded over into launch configuration inside the payload fairing of the Ariane V ECA booster rocket. The telescope will launch from the Guiana Space Center in Kourou, French Guiana in 2018. Both the backplane flight unit and the pathfinder unit, which consists only of the center part, are being assembled and tested by prime contractor Northrop Grumman in Redondo Beach, California. The test unit was then loaded into a C-5, flown to the U.S.
Air Force’s Joint Base Andrews in Maryland and unloaded for transport by trailer truck to NASA Goddard in Greenbelt, Maryland. JWST is the successor to the 24-year-old Hubble Space Telescope and will become the most powerful telescope ever sent to space. Webb is designed to look at the first light of the Universe and will be able to peer back in time to when the first stars and first galaxies were forming. The Webb Telescope is a joint international collaborative project between NASA, the European Space Agency (ESA) and the Canadian Space Agency (CSA). NASA has overall responsibility and Northrop Grumman is the prime contractor for JWST. Read my story about the recent unfurling test of JWST’s sunshade – here. Stay tuned here for Ken’s continuing Earth and planetary science and human spaceflight news. In a thrilling demonstration of space robotics, today the Dextre “hand” replaced a malfunctioning camera on the station’s Canadarm2 robotic arm. And the Canadian Space Agency gleefully tweeted every step of the way, throwing in jokes to describe what was happening above our heads on the International Space Station. “Dextre’s job is to reduce the risk to astronauts by relieving them of routine chores, freeing their time for science,” the Canadian Space Agency tweeted today (May 27). “Spacewalks are thrilling, inspiring, but can potentially be dangerous. They also take a lot of resources and time. So Dextre is riding the end of Canadarm2 today instead of an astronaut. And our inner child is still yelling out ‘Weeeee…!’ ” The complex maneuvers actually took a few days to accomplish, as the robot removed the broken camera last week and stowed it. Today’s work (performed by ground controllers) was focused on putting in the new camera and starting to test it. You can see some of the most memorable tweets of the day below.
The cookie you see in the first tweet is part of a tradition in Canada’s robotic mission control near Montreal, Que., where controllers have this snack on the day when they are doing robotic work in space. To close out their final week aboard the International Space Station, three of the six Expedition 39 crew members are completing their unloading tasks inside the docked commercial SpaceX Dragon cargo freighter and other duties while teams at Mission Control in Houston conduct delicate robotics work outside, with dazzling maneuvers of the Dextre robot to remove the last external experiment from the vessel’s storage trunk. See a dazzling gallery of photos of Dextre dangling outside the docked Dragon depot – above and below. On Monday, May 5, the robotics team at the NASA Mission Control Center at the Johnson Space Center in Houston carefully guided Canada’s Dextre robotic “handyman,” attached to the end of the 57-foot-long Canadarm2, to basically dig out the final payload item housed in the unpressurized trunk section at the rear of the SpaceX Dragon cargo vessel docked to the ISS. Dextre stands for “Special Purpose Dexterous Manipulator” and was contributed to the station by the Canadian Space Agency. It measures 12 feet tall and is outfitted with a pair of arms and an array of finely detailed tools to carry out intricate and complex tasks that would otherwise require spacewalking astronauts. The massive orbiting outpost was soaring some 225 miles above the home planet as Dextre’s work was in progress to remove the Optical PAyload for Lasercomm Science, or OPALS, from the Dragon’s trunk. The next step is to install OPALS on the Express Logistics Carrier-1 (ELC-1) depot at the end of the station’s port truss on Wednesday. Monday’s attempt was the second try at grappling OPALS. The initial attempt last Thursday “was unsuccessful due to a problem gripping the payload’s grapple fixture with the Special Purpose Dexterous Manipulator, or Dextre,” NASA reported.
This unmanned Dragon delivered about 4,600 pounds of cargo to the ISS, including over 150 science experiments, a pair of high-tech legs for Robonaut 2, a high-definition Earth-observing imaging camera suite (HDEV), the laser optical communications experiment (OPALS), and the VEGGIE lettuce-growing experiment, as well as essential gear, spare parts, crew provisions, food, clothing and supplies for the six-person crews living and working aboard in low Earth orbit, under NASA’s Commercial Resupply Services (CRS) contract. OPALS uses laser light instead of radio waves to beam back precisely guided data packages to ground stations. The use of lasers should greatly increase the amount of information transmitted over the same period of time, says NASA. The science experiments carried aboard Dragon are intended for research to be conducted by the crews of ISS Expeditions 39 and 40. Robotics teams had already pulled out the other payload item from the trunk, namely the HDEV imaging suite. It is already transmitting back breathtaking real-time video views of Earth from a quartet of video cameras pointing in different directions mounted on the station’s exterior. The SpaceX CRS-3 mission marks the company’s third resupply mission to the ISS under a $1.6 billion contract with NASA to deliver 20,000 kg (44,000 pounds) of cargo to the ISS during a dozen Dragon cargo spacecraft flights through 2016. After spending six months in space, Station Commander Koichi Wakata from Japan as well as NASA astronaut Rick Mastracchio and Russian cosmonaut Mikhail Tyurin will be departing the station in a week aboard their Soyuz TMA-11M spacecraft on May 13 at 6:33 p.m. EDT. They are scheduled to land some 3.5 hours later in the steppes of Kazakhstan at 9:57 p.m. (7:57 a.m. Kazakh time on May 14). The events will be carried live on NASA TV. To prepare for the journey home, the trio also completed fit checks on their Russian Sokol launch and entry suits on Monday.
Meanwhile, Dragon is also set to depart the station soon, on May 18, for a parachute-assisted splashdown and recovery by boats in the Pacific Ocean west of Baja California. Dragon has been docked to the station since arriving on Easter Sunday morning, April 20. It was grappled using Canadarm2 and berthed at the Earth-facing port of the Harmony module by Commander Wakata and flight engineer Mastracchio while working at the robotics work station inside the seven-windowed domed Cupola module. For the return trip, the Expedition 39 crew is also loading Dragon with precious science samples collected over many months from the crew’s research activities, as well as trash and no-longer-needed items. Stay tuned here for Ken’s continuing SpaceX, Orbital Sciences, commercial space, Orion, Chang’e-3, LADEE, Curiosity, Mars rover, MAVEN, MOM and more planetary and human spaceflight news. When there’s a Dragon spacecraft coming your way at the International Space Station, you’d better be ready to grapple it with a robotic arm. For if there’s a crash, you will face “a very bad day,” as astronaut David Saint-Jacques points out in this new video (also embedded below the jump). That’s why the Canadian (along with European Space Agency astronaut Andreas Mogensen) was doing robotics training this month at the Canadian Space Agency headquarters near Montreal. The most terrifying thing for astronauts must be the limited view as they do delicate maneuvers with the multi-million-dollar Canadarm2. “All you’ve got, really, while you’re working, is this workstation,” Saint-Jacques said. “You’ve got a couple of camera views to work from. You’ve got your hand controllers to move the arm, and you’ve got some computer displays, and a bunch of switches here on the left.” “That’s all you’ve got,” he added.
“You’ve really got to think ahead: how you’re going to maneuver this arm without crashing into anything.” The video is the latest in a training series by Mogensen, who will go to the International Space Station in 2015. Saint-Jacques — a fellow 2009 astronaut class selectee — has not been assigned to a flight yet (at least publicly). The first Canadarm, which cost about $100 million in late-1970s dollars, flew on the second shuttle flight in 1981. Canadarm2 was built for space station assembly in the 2000s, and is still used today for spacewalks. About six years ago, the Canadarm — Canada’s iconic robotic arm used in space — was almost sold to a company in the United States, along with other space technology from MacDonald, Dettwiler and Associates. The Canadian government blocked the sale and swiftly came out with a promise: a space policy to better support Canada’s industry. A lot has happened in six years. Policy-makers used to cite successor Canadarm2’s role in space station construction. Now the arm also does things that were barely imaginable in 2008 — namely, berthing commercial spacecraft such as SpaceX’s Dragon at the International Space Station. It shows how quickly space technology can change in half a decade. At 13 pages, there isn’t a lot of information in Canada’s framework yet to talk about, but there are some statements about government priorities. Keep the astronaut program going (which is great news after the success of Chris Hadfield). A heavy emphasis on private-sector collaboration. And a promise to keep funding Canada’s contribution to the James Webb Space Telescope, NASA’s next large observatory in space. These are the top priorities listed in the plan: Canada First: Serving Canada’s interests of “sovereignty, security and prosperity.” As an example: The country has a huge landmass that is sparsely populated, so satellites are regularly used to see what ship and other activity is going on in the territories.
This is a big reason why the Radarsat Constellation of satellites is launching in 2018. Working together globally: Canada has a tiny space budget ($488.7 million in 2013-14, $435.2 million in 2014-15 and $382.9 million in 2015-16), so it relies on other countries to get its payloads, astronauts and satellites into space. This section also refers to Canada’s commitment to the International Space Station, which (as with other nations) extends to at least 2024. That’s good news for astronauts Jeremy Hansen and David Saint-Jacques, who are waiting for their first trip there. Promoting Canadian innovation: The James Webb Space Telescope (to which Canada is contributing optics and a guidance system) is specifically cited here, along with the Canadarm. Priority areas are Canada’s historic strengths of robotics, optics, satellite communications, and space-based radar, as well as “areas of emerging expertise.” Inspiring Canadians: Basically a statement saying that the government will “recruit, and retain highly qualified personnel,” which in more real terms means that it will need to keep supporting Canadian space companies financially through contracts, for example, to make this happen. That last point in particular seemed to resonate with at least one industry group. “A long-term strategic plan for Canada’s space program is critical for our industry. In order to effectively invest in innovation, technology and product development, we rely heavily on knowing what the government’s priorities for the space program are,” stated Jim Quick, president of the Aerospace Industries Association of Canada (a major group that represents the interests of private space companies). While we wait for more details to come out, here’s some valuable background reading. The space-based volume of the Emerson Report (the findings of a government-appointed aerospace review released in 2012) called for more money for, and more stable funding of, the Canadian Space Agency, among other recommendations.
And here’s the government’s point-by-point response in late 2013. In response to funding: “The CSA’s total funding will remain unchanged and at current levels. The government will also leverage existing programs to better support the space industry.” Additionally, the CSA’s space technologies development program will be doubled to $20 million annually by 2015-16, which is still below the Emerson report’s recommendation of adding $10 million for each of the next three years. What are your thoughts on the policy? Let us know in the comments.
Do galaxies that leak ionizing photons have extreme outflows? The early universe underwent a few phase transitions. Initially, the universe was hot and highly ionized, but it cooled as it expanded, such that hydrogen recombined and released the Cosmic Microwave Background radiation. If this were the end of the story, the present-day universe would look very different. Neutral hydrogen absorbs photons blueward of Lyman alpha, and if the present universe were neutral we would not be able to observe galaxies in the far ultraviolet. Fortunately, we can observe galaxies blueward of Lyman alpha because the universe was reionized at some point early in its history (see the picture to the right for our best understanding of this timeline). An ionized universe is surprising because there must be an ionizing source (i.e. something must ionize the universe). Theoretically, this is challenging because even small amounts of neutral hydrogen within galaxies absorb nearly all of the ionizing photons. Some suggest that star formation can inject energy and momentum into the neutral gas within the galaxy and eject the gas as a large-scale galactic outflow. Removing this gas creates low-density paths through which ionizing photons "leak" out of galaxies. The goal of this work is to determine whether galaxies that are known to leak ionizing photons have extreme galactic outflows as compared to "normal" local galaxies. If you'd prefer to just skip right to the paper, it has been accepted for publication in Astronomy & Astrophysics; just click the button below. I have also worked with a graduate student (Simon Gazagnes) to characterize the escape fraction from the Lyman series (UV H I absorption lines).
Leakers have small silicon equivalent widths To determine the outflow properties of galaxies, we compiled a sample of all nine known local galaxies that leak ionizing photons and compared their Si II and Si III absorption line properties to those of galaxies that are unlikely to leak ionizing photons. Si II is an important tracer of partially neutral gas, and the equivalent width acts as a proxy for the strength of the transition. Unfortunately, the equivalent width is hard to interpret because a small equivalent width can be produced by a small amount of gas or by a very clumpy gas distribution. To the left I am plotting the Si II and Si III equivalent widths of galaxies that leak ionizing photons (red points) and galaxies that do not leak ionizing photons (blue points). Galaxies that leak ionizing photons have small silicon equivalent widths. This observational fact can act as a diagnostic of leaking galaxies, but its interpretation is challenging. Leakers' equivalent widths are largely set by their metallicities While the interpretation of equivalent widths is challenging, host galaxy properties give clues to what causes the small silicon equivalent widths. In previous work, I showed that the outflow equivalent width scales strongly with star formation rate (SFR) and stellar mass (M*). Since this new leaking sample spans a different range of SFRs and M*, we are able to further test these relations. The figure to the right shows that the equivalent width does not scale strongly with SFR, but it does scale strongly with metallicity. Not only does this have important consequences for leaking photons, it also provides tighter constraints on galactic outflows. The leakers' small silicon equivalent widths likely arise because they have low metallicities. This has important consequences for the early universe, when average metallicities were also much lower.
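The equivalent width discussed here is simply the integral of the fractional absorption depth over wavelength, EW = ∫ (1 - F/F_c) dλ. As a minimal sketch of how one might compute it from a sampled spectrum (the Gaussian line profile, depth, and wavelength values below are invented for illustration, not taken from the paper):

```python
import numpy as np

def equivalent_width(wavelength, flux, continuum):
    """EW = integral of (1 - F/F_c) d(lambda), via the trapezoid rule.
    Deeper or broader absorption gives a larger (positive) EW."""
    integrand = 1.0 - flux / continuum
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(wavelength))

# Toy Gaussian absorption line on a flat continuum; the rest wavelength
# loosely mimics Si II 1260, but all numbers are illustrative.
wl = np.linspace(1255.0, 1265.0, 2001)   # Angstroms
depth, center, sigma = 0.6, 1260.0, 0.5
flux = 1.0 - depth * np.exp(-0.5 * ((wl - center) / sigma) ** 2)

ew = equivalent_width(wl, flux, np.ones_like(wl))
# Analytic check for a Gaussian: EW = depth * sigma * sqrt(2*pi)
print(f"EW = {ew:.3f} Angstroms")
```

Note the degeneracy described above: a shallow line from smoothly distributed gas and a saturated line from clumpy gas can produce the same EW, which is exactly why the equivalent width alone is hard to interpret.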
Leakers do not have extreme outflows An important characteristic of galactic outflows is how fast the outflow is moving. To characterize the maximum measurable velocity, we measure the velocity at which the absorption profile reaches 90% of the continuum. This velocity correlates strongly with stellar mass (see plot to the left), and the leakers do not deviate statistically from the control sample trends. This demonstrates that the confirmed leakers do not have extreme outflows; rather, they have outflow properties similar to those of galaxies with similar stellar masses, star formation rates, and metallicities. The silicon equivalent width scales with Lyman alpha properties The key question in this work is what the outflow properties, as measured by the silicon absorption lines, tell us about ionizing photons escaping galaxies. It has been shown that the escape of Lyman alpha photons and ionizing photons are related, so we studied how the outflow properties scale with Lyman alpha properties. The velocity separation of double-peaked emitters correlates with silicon equivalent widths (see the plot to the right). Theory suggests that the peak separation is related to the neutral hydrogen column density, and this strong correlation could indicate that leakers have small Si II equivalent widths because they have low H I column densities. This suggests that galaxies leak ionizing photons because they have less neutral hydrogen gas. Here I have summarized a paper that compared the silicon outflow properties of galaxies that leak ionizing photons (red points in all the graphs above) with those that don't (blue points).
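The maximum velocity measurement mentioned above (the velocity at 90% of the continuum) can be sketched as reading off the most blueshifted point at which a normalized absorption profile still dips below 90% of the continuum level. The profile and numbers below are invented for illustration; a real measurement involves continuum fitting and noise handling that this sketch omits:

```python
import numpy as np

def v90(velocity, norm_flux, level=0.9):
    """Most blueshifted velocity at which the normalized absorption
    profile still dips below `level` of the continuum.
    Velocities are negative for blueshifted (outflowing) gas."""
    absorbed = norm_flux < level
    if not absorbed.any():
        return np.nan
    return velocity[absorbed].min()

# Toy outflow profile: Gaussian absorption centered at -150 km/s
v = np.linspace(-800.0, 400.0, 1201)     # km/s, 1 km/s spacing
flux = 1.0 - 0.7 * np.exp(-0.5 * ((v + 150.0) / 120.0) ** 2)

print(f"v90 = {v90(v, flux):.0f} km/s")
```

Because this picks up the extreme blue wing of the line, it is sensitive to how the continuum is normalized, which is one reason such measurements are compared against control samples rather than interpreted in isolation.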
The main points of the study are:
- Leakers do not have higher outflow velocities than non-leaking galaxies
- Leakers have smaller silicon equivalent widths than non-leaking galaxies
- The silicon equivalent width is strongly correlated with metallicity
- The silicon equivalent width is strongly correlated with the Lyman alpha velocity separation, which is suggested to correlate with neutral hydrogen column density
Hubble uncovers thousands of globular star clusters scattered among galaxies Gazing across 300 million light-years into a monstrous city of galaxies, astronomers have used NASA's Hubble Space Telescope to do a comprehensive census of some of its most diminutive members: a whopping 22,426 globular star clusters found to date. The survey, published in the November 9, 2018, issue of The Astrophysical Journal, will allow astronomers to use the globular cluster field to map the distribution of matter and dark matter in the Coma galaxy cluster, which holds over 1,000 galaxies that are packed together. Because globular clusters are much smaller than entire galaxies – and much more abundant – they are a much better tracer of how the fabric of space is distorted by the Coma cluster's gravity. In fact, the Coma cluster is one of the first places where observed gravitational anomalies were considered to be indicative of a lot of unseen mass in the universe – later to be called "dark matter." Among the earliest homesteaders of the universe, globular star clusters are snow-globe-shaped islands of several hundred thousand ancient stars. They are integral to the birth and growth of a galaxy. About 150 globular clusters zip around our Milky Way galaxy, and, because they contain the oldest known stars in the universe, were present in the early formative years of our galaxy. Some of the Milky Way's globular clusters are visible to the naked eye as fuzzy-looking "stars." But at the distance of the Coma cluster, its globulars appear as dots of light even to Hubble's super-sharp vision. The survey found the globular clusters scattered in the space between the galaxies. They have been orphaned from their home galaxies by galaxy near-collisions inside the traffic-jammed cluster. Hubble revealed that some globular clusters line up along bridge-like patterns. This is telltale evidence for interactions between galaxies where they gravitationally tug on each other like pulling taffy.
Astronomer Juan Madrid of the Australia Telescope National Facility in Sydney, Australia, first thought about the distribution of globular clusters in Coma when he was examining Hubble images that show the globular clusters extending all the way to the edge of any given photograph of galaxies in the Coma cluster. He was looking forward to more data from one of Hubble's legacy surveys, which was designed to obtain data on the entire Coma cluster and was called the Coma Cluster Treasury Survey. However, halfway through the program, in 2006, Hubble's powerful Advanced Camera for Surveys (ACS) had an electronics failure. (The ACS was later repaired by astronauts during a 2009 Hubble servicing mission.) To fill in the survey gaps, Madrid and his team painstakingly pulled numerous Hubble images of the galaxy cluster taken from different Hubble observing programs. These are stored in the Space Telescope Science Institute's Mikulski Archive for Space Telescopes in Baltimore, Maryland. He assembled a mosaic of the central region of the cluster, working with students from the National Science Foundation's Research Experience for Undergraduates program. "This program gives an opportunity to students enrolled in universities with little or no astronomy to gain experience in the field," Madrid said. The team developed algorithms to sift through the Coma mosaic images that contain at least 100,000 potential sources. The program used globular clusters' color (dominated by the glow of aging red stars) and spherical shape to eliminate extraneous objects – mostly background galaxies unassociated with the Coma cluster. Though Hubble has superb detectors with unmatched sensitivity and resolution, their main drawback is that they have tiny fields of view.
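The color-and-shape screening described above amounts to a pair of catalog cuts. Everything below is hypothetical: the column names, thresholds, and synthetic catalog are invented for illustration, not the team's actual selection criteria, but the sketch shows the general idea of filtering ~100,000 sources down to round, red candidates:

```python
import numpy as np

# Toy catalog: each source has a color index and a shape measure
# (ellipticity 0 = perfectly round). All values below are synthetic.
rng = np.random.default_rng(0)
n = 100_000
catalog = {
    "color": rng.normal(1.0, 0.5, n),        # redder = larger index
    "ellipticity": rng.uniform(0.0, 1.0, n), # 0 = circular source
}

def select_gc_candidates(cat, color_range=(0.8, 1.6), max_ellip=0.2):
    """Keep sources whose color matches an old, red stellar population
    and whose profile is close to spherical; thresholds are illustrative."""
    c = cat["color"]
    red_enough = (c > color_range[0]) & (c < color_range[1])
    round_enough = cat["ellipticity"] < max_ellip
    return red_enough & round_enough

mask = select_gc_candidates(catalog)
print(f"{mask.sum()} candidates out of {n} sources")
```

In a real survey these cuts would act on measured photometric colors and fitted light profiles, and the rejected sources would be dominated by elongated background galaxies, as the article notes.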
"One of the cool aspects of our research is that it showcases the amazing science that will be possible with NASA's planned Wide Field Infrared Survey Telescope (WFIRST) that will have a much larger field of view than Hubble," said Madrid. "We will be able to image entire galaxy clusters at once."
LOS ANGELES—When the Curiosity rover lifted off toward Mars, the spacecraft carried a few stowaways—278,000 bacterial spores, by NASA’s best estimate. That is sparkling clean, by spacecraft standards—the mission's components had been sterilized, wiped, baked and coddled in clean rooms to drastically reduce the bacterial burden. Mars missions such as Curiosity are subject to strict planetary protection policies intended to preserve habitats in the solar system that might harbor life of their own. After all, invasive species are a big enough problem on Earth, and one can only speculate about how terrestrial microorganisms would fare on Mars. That speculation is getting a bit more grounded, however. At a conference held here this week on the present-day habitability of Mars, numerous researchers described experiments carried out in Mars simulation chambers that can replicate some of the environmental conditions of the Red Planet. Perhaps most intriguingly, a new set of experiments described by Andrew Schuerger of the University of Florida indicate that three of the most hostile elements of the Martian environment—low pressure, low temperature, and a carbon dioxide atmosphere largely devoid of oxygen gas—are not insurmountable blockades for Earth organisms. On the contrary, some microbes don't just hunker down and hibernate but actually grow under such conditions. Schuerger, along with University of Florida colleague Wayne Nicholson and their collaborators, collected 24 microbial strains that have been found on spacecraft surfaces, in clean rooms and around Kennedy Space Center in Florida, as well as two extremophile species tolerant of hostile environments. The bacteria included common species such as Bacillus subtilis and Escherichia coli, but "our winner in this set of experiments," as Schuerger put it, was Serratia liquefaciens, a widespread generalist microbe. 
Most of the selected microbial species shut down at temperatures of zero degrees Celsius (which falls in the upper range of Martian surface temperatures), even without being subjected to low pressure or anoxic conditions. But S. liquefaciens succeeded not only in the low temperatures but also under the simultaneous exposure to a carbon dioxide–dominated atmosphere and Mars-like pressures of only seven millibars. (Sea-level atmospheric pressure on Earth is roughly 1,000 millibars.) The researchers reported their findings in January in the journal Astrobiology. Whereas S. liquefaciens actually grew under the trio of harsh conditions, the others did not perish—they simply lay dormant. "All of these bacteria were not killed by the conditions they were exposed to," Schuerger said. When returned to ambient laboratory conditions, the inactive bacterial species all resumed growth. In a separate study, bacteria pre-adapted to survive in frigid conditions fared even better. In a study published in December in the Proceedings of the National Academy of Sciences, Schuerger, Nicholson and their colleagues reported that bacteria isolated from the Siberian permafrost thrived in Mars-like conditions. Those species, from the genus Carnobacterium, actually seemed to favor the low-pressure conditions. "When they grew at zero [degrees C] under CO2, seven millibar atmospheres, they seemed to grow better, at higher rates, than under CO2 at 1,000 millibars or under oxygen at 1,000 millibars," Schuerger said. But bacteria need not hail from extreme habitats to flourish under Mars-like conditions. Schuerger shared preliminary, unpublished research during the conference that indicates that low-pressure, or hypobaric, environments actually stimulated the growth of microbes harvested from an unusual source: human saliva. In petri dishes incubated at low temperature under carbon dioxide atmospheres, the salivary flora failed to grow at Earth-like pressures. 
“Yet these hypobarophiles have popped out” under Mars-like pressures of seven millibars, he said. The specific organisms that thrived in hypobaric conditions have not yet been identified, Schuerger noted in an email, but “the human oral cavity is not a place that one would expect to find microbes that yield such a strange response.” Proving that some bacteria fare well under Mars-like pressures, temperatures and atmospheric compositions is nonetheless a long way from proving terrestrial life can flourish on Mars. Schuerger and his colleagues count 17 environmental factors on Mars that could be hostile to life, of which pressure, temperature and anoxia are only three. Two important threats to life that went unaddressed in the two bacterial studies were ultraviolet irradiation from sunlight, which on Earth is thankfully attenuated by ozone in our planet's atmosphere, and the extreme dryness of the Red Planet's surface. Schuerger noted that accurately simulating Martian desiccation would rapidly degrade the growth medium for the bacteria. "We had to reduce evaporation to carry out these experiments," he said. Which brings us back to those hundreds of thousands of spores on the Curiosity rover and its flight hardware. Even in light of the new research, the rover's landing site appears extremely unlikely to suffer contamination by terrestrial biology. Direct and reflected sunlight likely sterilized the outside of the rover within the first day or two of the mission, Schuerger said. And any survivors are unlikely to find purchase at the Curiosity landing site, Gale Crater. "Even if the UV radiation doesn't sterilize or kill off microbes on the outside of the vehicle or on the wheels, even if the microbes are dispersed, the extreme desiccating conditions of Gale Crater argue strongly against" the proliferation of stowaways from Earth, he added.
The Greatest Science Books of 2016
From the sound of spacetime to time travel to the microbiome, by way of polar bears, dogs, and trees.
By Maria Popova

I have long believed that E.B. White’s abiding wisdom on children’s books — “Anyone who writes down to children is simply wasting his time. You have to write up, not down.” — is equally true of science books. The question of what makes a great book of any kind is, of course, a slippery one, but I recently endeavored to synthesize my intuitive system for assessing science books that write up to the reader in a taxonomy of explanation, elucidation, and enchantment. Gathered here are exceptional books that accomplish at least two of the three, assembled in the spirit of my annual best-of reading lists, which I continue to consider Old Year’s resolutions in reverse — not a list of priorities for the year ahead, but a reflection on the reading most worth prioritizing in the year being left behind.

BLACK HOLE BLUES

In Black Hole Blues and Other Songs from Outer Space (public library), cosmologist, novelist, and unparalleled enchanter of science Janna Levin tells the story of the century-long vision, originated by Einstein, and half-century experimental quest to hear the sound of spacetime by detecting a gravitational wave. This book remains one of the most intensely interesting and beautifully written I’ve ever encountered — the kind that comes about once a generation if we’re lucky. Everything we know about the universe so far comes from four centuries of sight — from peering into space with our eyes and their prosthetic extension, the telescope. Now commences a new mode of knowing the cosmos through sound.
The detection of gravitational waves is one of the most significant discoveries in the entire history of physics, marking the dawn of a new era as we begin listening to the sound of space — the probable portal to mysteries as unimaginable to us today as galaxies and nebulae and pulsars and other cosmic wonders were to the first astronomers. Gravitational astronomy, as Levin elegantly puts it, promises a “score to accompany the silent movie humanity has compiled of the history of the universe from still images of the sky, a series of frozen snapshots captured over the past four hundred years since Galileo first pointed a crude telescope at the Sun.” Astonishingly enough, Levin wrote the book before the Laser Interferometer Gravitational-Wave Observatory (LIGO) — the monumental instrument at the center of the story, decades in the making — made the actual detection of a ripple in the fabric of spacetime caused by the collision of two black holes in the autumn of 2015, exactly a century after Einstein first envisioned the possibility of gravitational waves. So the story she tells is not that of the triumph but that of the climb, which renders it all the more enchanting — because it is ultimately a story about the human spirit and its incredible tenacity, about why human beings choose to devote their entire lives to pursuits strewn with unimaginable obstacles and bedeviled by frequent failure, uncertain rewards, and meager public recognition. Indeed, what makes the book interesting is that it tells the story of this monumental discovery, but what makes it enchanting is that Levin comes at it from a rather unusual perspective. She is a working astrophysicist who studies black holes, but she is also an incredibly gifted novelist — an artist whose medium is language and thought itself. This is no popular science book but something many orders of magnitude higher in its artistic vision, the impeccable craftsmanship of language, and the sheer pleasure of the prose. 
The story is structured almost as a series of short, integrated novels, with each chapter devoted to one of the key scientists involved in LIGO. With Dostoyevskian insight and nuance, Levin paints a psychological, even philosophical portrait of each protagonist, revealing how intricately interwoven the genius and the foibles are in the fabric of personhood and what a profoundly human endeavor science ultimately is. Scientists are like those levers or knobs or those boulders helpfully screwed into a climbing wall. Like the wall is some cemented material made by mixing knowledge, which is a purely human construct, with reality, which we can only access through the filter of our minds. There’s an important pursuit of objectivity in science and nature and mathematics, but still the only way up the wall is through the individual people, and they come in specifics… So the climb is personal, a truly human endeavor, and the real expedition pixelates into individuals, not Platonic forms. For a taste of this uncategorizably wonderful book, see Levin on the story of the tragic hero who pioneered gravitational astronomy and how astronomer Jocelyn Bell discovered pulsars. Time Travel: A History (public library) by science historian and writer extraordinaire James Gleick, another rare enchanter of science, is not a “science book” per se, in that although it draws heavily on the history of twentieth-century science and quantum physics in particular (as well as on millennia of philosophy), it is a decidedly literary inquiry into our temporal imagination — why we think about time, why its directionality troubles us so, and what asking these questions at all reveals about the deepest mysteries of our consciousness. I consider it a grand thought experiment, using physics and philosophy as the active agents, and literature as the catalyst. 
Gleick, who examined the origin of our modern anxiety about time with remarkable prescience nearly two decades ago, traces the invention of the notion of time travel to H.G. Wells’s 1895 masterpiece The Time Machine. Although Wells — like Gleick, like any reputable physicist — knew that time travel was a scientific impossibility, he created an aesthetic of thought which never previously existed and which has since shaped the modern consciousness. Gleick argues that the art this aesthetic produced — an entire canon of time travel literature and film — not only permeated popular culture but even influenced some of the greatest scientific minds of the past century, including Stephen Hawking, who once cleverly hosted a party for time travelers and when no one showed up considered the impossibility of time travel proven, and John Archibald Wheeler, who popularized the term “black hole” and coined “wormhole,” both key tropes of time travel literature. Gleick considers how a scientific impossibility can become such fertile ground for the artistic imagination: Why do we need time travel, when we already travel through space so far and fast? For history. For mystery. For nostalgia. For hope. To examine our potential and explore our memories. To counter regret for the life we lived, the only life, one dimension, beginning to end. Wells’s Time Machine revealed a turning in the road, an alteration in the human relationship with time. New technologies and ideas reinforced one another: the electric telegraph, the steam railroad, the earth science of Lyell and the life science of Darwin, the rise of archeology out of antiquarianism, and the perfection of clocks. When the nineteenth century turned to the twentieth, scientists and philosophers were primed to understand time in a new way. And so were we all. Time travel bloomed in the culture, its loops and twists and paradoxes. I wrote about Gleick’s uncommonly pleasurable book at length here. 
A very different take on time, not as cultural phenomenon but as individual psychological interiority, comes from German psychologist Marc Wittmann in Felt Time: The Psychology of How We Perceive Time (public library) — a fascinating inquiry into how our subjective experience of time’s passage shapes everything from our emotional memory to our sense of self. Bridging disciplines as wide-ranging as neuroscience and philosophy, Wittmann examines questions of consciousness, identity, happiness, boredom, money, and aging, exposing the centrality of time in each of them. What emerges is the disorienting sense that time isn’t something which happens to us — rather, we are time. One of Wittmann’s most pause-giving points has to do with how temporality mediates the mind-body problem. He writes: Presence means becoming aware of a physical and psychic self that is temporally extended. To be self-conscious is to recognize oneself as something that persists through time and is embodied. In a sense, time is a construction of our consciousness. Two generations after Hannah Arendt observed in her brilliant meditation on time that “it is the insertion of man with his limited life span that transforms the continuously flowing stream of sheer change … into time as we know it,” Wittmann writes: Self-consciousness — achieving awareness of one’s own self — emerges on the basis of temporally enduring perception of bodily states that are tied to neural activity in the brain’s insular lobe. The self and time prove to be especially present in boredom. They go missing in the hustle and bustle of everyday life, which results from the acceleration of social processes. Through mindfulness and emotional control, the tempo of life that we experience can be reduced, and we can regain time for ourselves and others. Perception necessarily encompasses the individual who is doing the perceiving. It is I who perceives. This might seem self-evident. 
Perception of myself, my ego, occurs naturally when I consider myself. I “feel” and think about myself. But who is the subject if I am the object of my own attention? When I observe myself, after all, I become the object of observation. Clearly, this intangibility of the subject as a subject — and not an object — poses a philosophical problem: as soon as I observe myself, I have already become the object of my observation.

WHEN BREATH BECOMES AIR

All life is lived in the shadow of its own finitude, of which we are always aware — an awareness we systematically blunt through the daily distraction of living. But when this finitude is made acutely imminent, one suddenly collides with awareness so acute that it leaves no choice but to fill the shadow with as much light as a human being can generate — the sort of inner illumination we call meaning: the meaning of life. That tumultuous turning point is what neurosurgeon Paul Kalanithi chronicles in When Breath Becomes Air (public library) — his piercing memoir of being diagnosed with terminal cancer at the peak of a career bursting with potential and a life exploding with aliveness. Partway between Montaigne and Oliver Sacks, Kalanithi weaves together philosophical reflections on his personal journey with stories of his patients to illuminate the only thing we have in common — our mortality — and how it spurs all of us, in ways both minute and monumental, to pursue a life of meaning. What emerges is an uncommonly insightful, sincere, and sobering revelation of how much our sense of self is tied up with our sense of potential and possibility — the selves we would like to become, those we work tirelessly toward becoming. Who are we, then, and what remains of “us” when that possibility is suddenly snipped? A generation after surgeon Sherwin Nuland’s foundational text on confronting the meaning of life while dying, Kalanithi sets out to answer these questions and their myriad fractal implications.
He writes: At age thirty-six, I had reached the mountaintop; I could see the Promised Land, from Gilead to Jericho to the Mediterranean Sea. I could see a nice catamaran on that sea that Lucy, our hypothetical children, and I would take out on weekends. I could see the tension in my back unwinding as my work schedule eased and life became more manageable. I could see myself finally becoming the husband I’d promised to be. And then the unthinkable happens. He recounts one of the first incidents in which his former identity and his future fate collided with jarring violence: My back stiffened terribly during the flight, and by the time I made it to Grand Central to catch a train to my friends’ place upstate, my body was rippling with pain. Over the past few months, I’d had back spasms of varying ferocity, from simple ignorable pain, to pain that made me forsake speech to grind my teeth, to pain so severe I curled up on the floor, screaming. This pain was toward the more severe end of the spectrum. I lay down on a hard bench in the waiting area, feeling my back muscles contort, breathing to control the pain — the ibuprofen wasn’t touching this — and naming each muscle as it spasmed to stave off tears: erector spinae, rhomboid, latissimus, piriformis… A security guard approached. “Sir, you can’t lie down here.” “I’m sorry,” I said, gasping out the words. “Bad … back … spasms.” “You still can’t lie down here.” I pulled myself up and hobbled to the platform. Like the book itself, the anecdote speaks to something larger and far more powerful than the particular story — in this case, our cultural attitude toward what we consider the failings of our bodies: pain and, in the ultimate extreme, death. We try to dictate the terms on which these perceived failings may occur; to make them conform to wished-for realities; to subvert them by will and witless denial. All this we do because, at bottom, we deem them impermissible — in ourselves and in each other. 
I wrote about the book at length here.

THE CONFIDENCE GAME

“Try not to get overly attached to a hypothesis just because it’s yours,” Carl Sagan urged in his excellent Baloney Detection Kit — and yet our tendency is to do just that, becoming increasingly attached to what we’ve come to believe because the belief has sprung from our own glorious, brilliant, fool-proof minds. How con artists take advantage of this human hubris is what New Yorker columnist and psychology writer Maria Konnikova explores in The Confidence Game: Why We Fall for It … Every Time (public library) — a thrilling psychological detective story investigating how con artists, the supreme masterminds of malevolent reality-manipulation, prey on our hopes, our fears, and our propensity for believing what we wish were true. Through a tapestry of riveting real-life con artist profiles interwoven with decades of psychology experiments, Konnikova illuminates the inner workings of trust and deception in our everyday lives. It’s the oldest story ever told. The story of belief — of the basic, irresistible, universal human need to believe in something that gives life meaning, something that reaffirms our view of ourselves, the world, and our place in it… For our minds are built for stories. We crave them, and, when there aren’t ready ones available, we create them. Stories about our origins. Our purpose. The reasons the world is the way it is. Human beings don’t like to exist in a state of uncertainty or ambiguity. When something doesn’t make sense, we want to supply the missing link. When we don’t understand what or why or how something happened, we want to find the explanation. A confidence artist is only too happy to comply — and the well-crafted narrative is his absolute forte. Konnikova describes the basic elements of the con and the psychological susceptibility into which each of them plays: The confidence game starts with basic human psychology.
From the artist’s perspective, it’s a question of identifying the victim (the put-up): who is he, what does he want, and how can I play on that desire to achieve what I want? It requires the creation of empathy and rapport (the play): an emotional foundation must be laid before any scheme is proposed, any game set in motion. Only then does it move to logic and persuasion (the rope): the scheme (the tale), the evidence and the way it will work to your benefit (the convincer), the show of actual profits. And like a fly caught in a spider’s web, the more we struggle, the less able to extricate ourselves we become (the breakdown). By the time things begin to look dicey, we tend to be so invested, emotionally and often physically, that we do most of the persuasion ourselves. We may even choose to up our involvement ourselves, even as things turn south (the send), so that by the time we’re completely fleeced (the touch), we don’t quite know what hit us. The con artist may not even need to convince us to stay quiet (the blow-off and fix); we are more likely than not to do so ourselves. We are, after all, the best deceivers of our own minds. At each step of the game, con artists draw from a seemingly endless toolbox of ways to manipulate our belief. And as we become more committed, with every step we give them more psychological material to work with. Needless to say, the book bears remarkable relevance to the recent turn of events in American politics and its ripples in the mass manipulation machine known as the media. “This is the entire essence of life: Who are you? What are you?” young Leo Tolstoy wrote in his diary. For Tolstoy, this was a philosophical inquiry — or a metaphysical one, as it would have been called in his day. But between his time and ours, science has unraveled the inescapable physical dimensions of this elemental question, rendering the already disorienting attempt at an answer all the more complex and confounding. 
In The Gene: An Intimate History (public library), physician and Pulitzer-winning author Siddhartha Mukherjee offers a rigorously researched, beautifully written detective story about the genetic components of what we experience as the self, rooted in Mukherjee’s own painful family history of mental illness and radiating a larger inquiry into how genetics illuminates the future of our species. Three profoundly destabilizing scientific ideas ricochet through the twentieth century, trisecting it into three unequal parts: the atom, the byte, the gene. Each is foreshadowed by an earlier century, but dazzles into full prominence in the twentieth. Each begins its life as a rather abstract scientific concept, but grows to invade multiple human discourses — thereby transforming culture, society, politics, and language. But the most crucial parallel between the three ideas, by far, is conceptual: each represents the irreducible unit — the building block, the basic organizational unit — of a larger whole: the atom, of matter; the byte (or “bit”), of digitized information; the gene, of heredity and biological information. Why does this property — being the least divisible unit of a larger form — imbue these particular ideas with such potency and force? The simple answer is that matter, information, and biology are inherently hierarchically organized: understanding that smallest part is crucial to understanding the whole. Among the book’s most fascinating threads is Mukherjee’s nuanced, necessary discussion of intelligence and the dark side of IQ.

THE POLAR BEAR

“In wildness is the preservation of the world,” Thoreau wrote 150 years ago in his ode to the spirit of sauntering. But in a world increasingly unwild, where we are in touch with nature only occasionally and only in fragments, how are we to nurture the preservation of our Pale Blue Dot?
That’s what London-based illustrator and Sendak Fellow Jenni Desmond explores in The Polar Bear (public library) — the follow-up to Desmond’s serenade to the science and life of Earth’s largest-hearted creature, The Blue Whale, which was among the best science books of 2015. The story follows a little girl who, in a delightful meta-touch, pulls this very book off the bookshelf and begins learning about the strange and wonderful world of the polar bear, its life, and the science behind it — its love of solitude, the black skin that hides beneath its yellowish-white fur, the built-in sunglasses protecting its eyes from the harsh Arctic light, why it evolved to have an unusually long neck and slightly inward paws, how it maintains the same temperature as us despite living in such extreme cold, why it doesn’t hibernate. Beyond its sheer loveliness, the book is suddenly imbued with a new layer of urgency. At a time when we can no longer count on politicians to protect the planet and educate the next generations about preserving it, the task falls solely on parents and educators. Desmond’s wonderful project alleviates that task by offering a warm, empathic invitation to care about, which is the gateway to caring for, one of the creatures most vulnerable to our changing climate and most needful of our protection. Look closer here.

THE BIG PICTURE

“We are — as far as we know — the only part of the universe that’s self-conscious,” the poet Mark Strand marveled in his beautiful meditation on the artist’s task to bear witness to existence, adding: “We could even be the universe’s form of consciousness.
We might have come along so that the universe could look at itself… It’s such a lucky accident, having been born, that we’re almost obliged to pay attention.” Scientists are rightfully reluctant to ascribe a purpose or meaning to the universe itself but, as physicist Lisa Randall has pointed out, “an unconcerned universe is not a bad thing — or a good one for that matter.” Where poets and scientists converge is the idea that while the universe itself isn’t inherently imbued with meaning, it is in this self-conscious human act of paying attention that meaning arises. Physicist Sean Carroll terms this view poetic naturalism and examines its rewards in The Big Picture: On the Origins of Life, Meaning, and the Universe Itself (public library) — a nuanced inquiry into “how our desire to matter fits in with the nature of reality at its deepest levels,” in which Carroll offers an assuring dose of what he calls “existential therapy” reconciling the various and often seemingly contradictory dimensions of our experience. With an eye to his life’s work of studying the nature of the universe — an expanse of space and time against the incomprehensibly enormous backdrop of which the dramas of a single human life claim no more than a photon of the spotlight — Carroll offers a counterpoint to our intuitive cowering before such magnitudes of matter and mattering: I like to think that our lives do matter, even if the universe would trundle along without us. I want to argue that, though we are part of a universe that runs according to impersonal underlying laws, we nevertheless matter. This isn’t a scientific question — there isn’t data we can collect by doing experiments that could possibly measure the extent to which a life matters. It’s at heart a philosophical problem, one that demands that we discard the way that we’ve been thinking about our lives and their meaning for thousands of years. 
By the old way of thinking, human life couldn’t possibly be meaningful if we are “just” collections of atoms moving around in accordance with the laws of physics. That’s exactly what we are, but it’s not the only way of thinking about what we are. We are collections of atoms, operating independently of any immaterial spirits or influences, and we are thinking and feeling people who bring meaning into existence by the way we live our lives. Carroll’s captivating term poetic naturalism builds on a worldview that has been around for centuries, dating back at least to the Scottish philosopher David Hume. It fuses naturalism — the idea that the reality of the natural world is the only reality, that it operates according to consistent patterns, and that those patterns can be studied — with the poetic notion that there are multiple ways of talking about the world and of framing the questions that arise from nature’s elemental laws. I wrote about the book at length here.

THE HIDDEN LIFE OF TREES

Trees are among the world’s oldest living organisms. Since the dawn of our species, they have been our silent companions, permeating our most enduring tales and never ceasing to inspire fantastical cosmogonies. Hermann Hesse called them “the most penetrating of preachers.” A forgotten seventeenth-century English gardener wrote of how they “speak to the mind, and tell us many things, and teach us many good lessons.” But trees might be among our lushest metaphors and sensemaking frameworks for knowledge precisely because the richness of what they say is more than metaphorical — they speak a sophisticated silent language, communicating complex information via smell, taste, and electrical impulses. This fascinating secret world of signals is what German forester Peter Wohlleben explores in The Hidden Life of Trees: What They Feel, How They Communicate (public library).
Wohlleben chronicles what his own experience of managing a forest in the Eifel mountains in Germany has taught him about the astonishing language of trees and how trailblazing arboreal research from scientists around the world reveals “the role forests play in making our world the kind of place where we want to live.” As we’re only just beginning to understand nonhuman consciousnesses, what emerges from Wohlleben’s revelatory reframing of our oldest companions is an invitation to see anew what we have spent eons taking for granted and, in this act of seeing, to care more deeply about these remarkable beings that make life on this planet we call home not only infinitely more pleasurable, but possible at all. Read more here.

BEING A DOG

“The act of smelling something, anything, is remarkably like the act of thinking itself,” the great science storyteller Lewis Thomas wrote in his beautiful 1985 meditation on the poetics of smell as a mode of knowledge. But, like the conditioned consciousness out of which our thoughts arise, our olfactory perception is beholden to our cognitive, cultural, and biological limitations. The 438 cubic feet of air we inhale each day are loaded with an extraordinary richness of information, but we are able to access and decipher only a fraction. And yet we know, on some deep creaturely level, just how powerful and enlivening the world of smell is, how intimately connected with our ability to savor life. “Get a life in which you notice the smell of salt water pushing itself on a breeze over the dunes,” Anna Quindlen advised in her indispensable Short Guide to a Happy Life — but the noticing eclipses the getting, for the salt water breeze is lost on any life devoid of this sensorial perception. Dogs, who “see” the world through smell, can teach us a great deal about that springlike sensorial aliveness which E.E.
Cummings termed “smelloftheworld.” So argues cognitive scientist and writer Alexandra Horowitz, director of the Dog Cognition Lab at Barnard College, in Being a Dog: Following the Dog Into a World of Smell (public library) — a fascinating tour of what Horowitz calls the “surprising and sometimes alarming feats of olfactory perception” that dogs perform daily, and what they can teach us about swinging open the doors of our own perception by relearning some of our long-lost olfactory skills that grant us access to hidden layers of reality. The book is a natural extension of Horowitz’s two previous books, exploring the subjective reality of the dog and how our human perceptions shape our own subjective reality. She writes: I am besotted with dogs, and to know a dog is to be interested in what it’s like to be a dog. And that all begins with the nose. What the dog sees and knows comes through his nose, and the information that every dog — the tracking dog, of course, but also the dog lying next to you, snoring, on the couch — has about the world based on smell is unthinkably rich. It is rich in a way we humans once knew about, once acted on, but have since neglected. Savor more of the wonderland of canine olfaction here.

I CONTAIN MULTITUDES

“I have observed many tiny animals with great admiration,” Galileo marveled as he peered through his microscope — a tool that, like the telescope, he didn’t invent himself but used in such a visionary way as to render it revolutionary. The revelatory discoveries he made in the universe within the cell are increasingly proving to be as significant as his telescopic discoveries in the universe without — a significance humanity has been even slower and more reluctant to accept than his radical revision of the cosmos.
That multilayered significance is what English science writer and microbiology elucidator Ed Yong explores in I Contain Multitudes: The Microbes Within Us and a Grander View of Life (public library) — a book so fascinating and elegantly written as to be worthy of its Whitman reference, in which Yong peels the veneer of the visible to reveal the astonishing complexity of life thriving beneath and within the crude confines of our perception. Artist Agnes Martin memorably observed that “the best things in life happen to you when you’re alone,” but Yong offers a biopoetic counterpoint in the fact that we are never truly alone. He writes: Even when we are alone, we are never alone. We exist in symbiosis — a wonderful term that refers to different organisms living together. Some animals are colonised by microbes while they are still unfertilised eggs; others pick up their first partners at the moment of birth. We then proceed through our lives in their presence. When we eat, so do they. When we travel, they come along. When we die, they consume us. Every one of us is a zoo in our own right — a colony enclosed within a single body. A multi-species collective. An entire world. All zoology is really ecology. We cannot fully understand the lives of animals without understanding our microbes and our symbioses with them. And we cannot fully appreciate our own microbiome without appreciating how those of our fellow species enrich and influence their lives. We need to zoom out to the entire animal kingdom, while zooming in to see the hidden ecosystems that exist in every creature. When we look at beetles and elephants, sea urchins and earthworms, parents and friends, we see individuals, working their way through life as a bunch of cells in a single body, driven by a single brain, and operating with a single genome. This is a pleasant fiction. In fact, we are legion, each and every one of us. 
Always a “we” and never a “me.” There are ample reasons to admire and appreciate microbes, well beyond the already impressive facts that they ruled “our” Earth for the vast majority of its 4.54-billion-year history and that we ourselves evolved from them. By pioneering photosynthesis, they became the first organisms capable of making their own food. They dictate the planet’s carbon, nitrogen, sulphur, and phosphorus cycles. They can survive anywhere and populate just about every corner of the Earth, from the hydrothermal vents at the bottom of the ocean to the loftiest clouds. They are so diverse that the microbes on your left hand are different from those on your right. But perhaps most impressively — for we are, after all, the solipsistic species — they influence innumerable aspects of our biological and even psychological lives. Yong offers a cross-section of this microbial dominion: The microbiome is infinitely more versatile than any of our familiar body parts. Your cells carry between 20,000 and 25,000 genes, but it is estimated that the microbes inside you wield around 500 times more. This genetic wealth, combined with their rapid evolution, makes them virtuosos of biochemistry, able to adapt to any possible challenge. They help to digest our food, releasing otherwise inaccessible nutrients. They produce vitamins and minerals that are missing from our diet. They break down toxins and hazardous chemicals. They protect us from disease by crowding out more dangerous microbes or killing them directly with antimicrobial chemicals. They produce substances that affect the way we smell. They are such an inevitable presence that we have outsourced surprising aspects of our lives to them. They guide the construction of our bodies, releasing molecules and signals that steer the growth of our organs. They educate our immune system, teaching it to tell friend from foe. They affect the development of the nervous system, and perhaps even influence our behaviour.
They contribute to our lives in profound and wide-ranging ways; no corner of our biology is untouched. If we ignore them, we are looking at our lives through a keyhole. In August, I wrote about one particularly fascinating aspect of Yong’s book — the relationship between mental health, free will, and your microbiome. HIDDEN FIGURES “No woman should say, ‘I am but a woman!’ But a woman! What more can you ask to be?” astronomer Maria Mitchell, who paved the way for women in American science, admonished the first class of female astronomers at Vassar in 1876. By the middle of the next century, a team of unheralded women scientists and engineers were powering space exploration at NASA’s Jet Propulsion Laboratory. Meanwhile, across the continent and in what was practically another country, a parallel but very different revolution was taking place: In the segregated South, a growing number of black female mathematicians, scientists, and engineers were steering early space exploration and helping America win the Cold War at NASA’s Langley Research Center in Hampton, Virginia. Long before the term “computer” came to signify the machine that dictates our lives, these remarkable women were working as human “computers” — highly skilled professional reckoners, who thought mathematically and computationally for their living and for their country. When Neil Armstrong set his foot on the moon, his “giant leap for mankind” had been powered by womankind, particularly by Katherine Johnson — the “computer” who calculated Apollo 11’s launch windows and who was awarded the Presidential Medal of Freedom by President Obama at age 97 in 2015, three years after the accolade was conferred upon John Glenn, the astronaut whose flight trajectory Johnson had made possible.
In Hidden Figures: The Story of the African-American Women Who Helped Win the Space Race (public library), Margot Lee Shetterly tells the untold story of these brilliant women, once on the frontlines of our cultural leaps and since sidelined by the selective collective memory we call history. Just as islands — isolated places with unique, rich biodiversity — have relevance for the ecosystems everywhere, so does studying seemingly isolated or overlooked people and events from the past turn up unexpected connections and insights to modern life. Against a sobering cultural backdrop, Shetterly captures the enormous cognitive dissonance the very notion of these black female mathematicians evokes: Before a computer became an inanimate object, and before Mission Control landed in Houston; before Sputnik changed the course of history, and before the NACA became NASA; before the Supreme Court case Brown v. Board of Education of Topeka established that separate was in fact not equal, and before the poetry of Martin Luther King Jr.’s “I Have a Dream” speech rang out over the steps of the Lincoln Memorial, Langley’s West Computers were helping America dominate aeronautics, space research, and computer technology, carving out a place for themselves as female mathematicians who were also black, black mathematicians who were also female. Shetterly herself grew up in Hampton, which dubbed itself “Spacetown USA,” amid this archipelago of women who were her neighbors and teachers. 
Her father, who had built his first rocket in his early teens after seeing the Sputnik launch, was one of Langley’s African American scientists in an era when words we now shudder to hear were used instead of “African American.” Like him, the first five black women who joined Langley’s research staff in 1943 entered a segregated NASA — even though, as Shetterly points out, the space agency was among the most inclusive workplaces in the country, with more than fourfold the percentage of black scientists and engineers than the national average. Over the next forty years, the number of these trailblazing black women mushroomed to more than fifty, revealing the mycelia of a significant groundswell. Shetterly’s favorite Sunday school teacher had been one of the early computers — a retired NASA mathematician named Kathleen Land. And so Shetterly, who considers herself “as much a product of NASA as the Moon landing,” grew up believing that black women simply belonged in science and space exploration as a matter of course — after all, they populated her father’s workplace and her town, a town whose church “abounded with mathematicians.” Embodying astronomer Vera Rubin’s wisdom on how modeling expands children’s scope of possibility, Shetterly reflects on this normalizing and rousing power of example: Building 1236, my father’s daily destination, contained a byzantine complex of government-gray cubicles, perfumed with the grown-up smells of coffee and stale cigarette smoke. His engineering colleagues with their rumpled style and distracted manner seemed like exotic birds in a sanctuary. They gave us kids stacks of discarded 11×14 continuous-form computer paper, printed on one side with cryptic arrays of numbers, the blank side a canvas for crayon masterpieces. 
Women occupied many of the cubicles; they answered phones and sat in front of typewriters, but they also made hieroglyphic marks on transparent slides and conferred with my father and other men in the office on the stacks of documents that littered their desks. That so many of them were African American, many of them my grandmother’s age, struck me as simply a part of the natural order of things: growing up in Hampton, the face of science was brown like mine. The community certainly included black English professors, like my mother, as well as black doctors and dentists, black mechanics, janitors, and contractors, black cobblers, wedding planners, real estate agents, and undertakers, several black lawyers, and a handful of black Mary Kay salespeople. As a child, however, I knew so many African Americans working in science, math, and engineering that I thought that’s just what black folks did. But despite the opportunities at NASA, almost countercultural in their contrast to the norms of the time, life for these courageous and brilliant women was no idyll — persons and polities are invariably products of their time and place. Shetterly captures the sundering paradoxes of the early computers’ experience: I interviewed Mrs. Land about the early days of Langley’s computing pool, when part of her job responsibility was knowing which bathroom was marked for “colored” employees. And less than a week later I was sitting on the couch in Katherine Johnson’s living room, under a framed American flag that had been to the Moon, listening to a ninety-three-year-old with a memory sharper than mine recall segregated buses, years of teaching and raising a family, and working out the trajectory for John Glenn’s spaceflight. I listened to Christine Darden’s stories of long years spent as a data analyst, waiting for the chance to prove herself as an engineer. 
Even as a professional in an integrated world, I had been the only black woman in enough drawing rooms and boardrooms to have an inkling of the chutzpah it took for an African American woman in a segregated southern workplace to tell her bosses she was sure her calculations would put a man on the Moon. And while the black women are the most hidden of the mathematicians who worked at the NACA, the National Advisory Committee for Aeronautics, and later at NASA, they were not sitting alone in the shadows: the white women who made up the majority of Langley’s computing workforce over the years have hardly been recognized for their contributions to the agency’s long-term success. Virginia Biggins worked the Langley beat for the Daily Press newspaper, covering the space program starting in 1958. “Everyone said, ‘This is a scientist, this is an engineer,’ and it was always a man,” she said in a 1990 panel on Langley’s human computers. She never got to meet any of the women. “I just assumed they were all secretaries,” she said. These women’s often impossible dual task of preserving their own sanity and dignity while pushing culture forward is perhaps best captured in the words of African American NASA mathematician Dorothy Vaughan: What I changed, I could; what I couldn’t, I endured. Dive in here. THE GLASS UNIVERSE Predating NASA’s women mathematicians by a century was a devoted team of female amateur astronomers — “amateur” being a reflection not of their skill but of the dearth of academic accreditation available to women at the time — who came together at the Harvard Observatory at the end of the nineteenth century around an unprecedented quest to catalog the cosmos by classifying the stars and their spectra. Decades before they were allowed to vote, these women, who came to be known as the “Harvard computers,” classified hundreds of thousands of stars according to a system they invented, which astronomers continue to use today. 
Their calculations became the basis for the discovery that the universe is expanding. Their spirit of selfless pursuit of truth and knowledge stands as a timeless testament to pioneering physicist Lise Meitner’s definition of the true scientist. Science historian Dava Sobel, author of Galileo’s Daughter, chronicles their unsung story and lasting legacy in The Glass Universe: How the Ladies of the Harvard Observatory Took the Measure of the Stars (public library). Sobel, who takes on the role of rigorous reporter and storyteller bent on preserving the unvarnished historical integrity of the story, paints the backdrop: A little piece of heaven. That was one way to look at the sheet of glass propped up in front of her. It measured about the same dimensions as a picture frame, eight inches by ten, and no thicker than a windowpane. It was coated on one side with a fine layer of photographic emulsion, which now held several thousand stars fixed in place, like tiny insects trapped in amber. One of the men had stood outside all night, guiding the telescope to capture this image, along with another dozen in the pile of glass plates that awaited her when she reached the observatory at 9 a.m. Warm and dry indoors in her long woolen dress, she threaded her way among the stars. She ascertained their positions on the dome of the sky, gauged their relative brightness, studied their light for changes over time, extracted clues to their chemical content, and occasionally made a discovery that got touted in the press. Seated all around her, another twenty women did the same. 
Among the “Harvard computers” were Antonia Maury, who had graduated from Maria Mitchell’s program at Vassar; Annie Jump Cannon, who catalogued more than 20,000 variable stars in a short period after joining the observatory; Henrietta Swan Leavitt, a Radcliffe alumna whose discoveries later became the basis for Hubble’s Law demonstrating the expansion of the universe and whose work was so valued that she was paid 30 cents an hour, five cents over the standard salary of the computers; and Cecilia Helena Payne-Gaposchkin, who became not only the first woman but the first person of any gender to earn a Ph.D. in astronomy. Helming the team was Williamina Fleming — a Scotswoman whom Edward Charles Pickering, the thirty-something director of the observatory, first hired as a second maid at his residence in 1879 before recognizing her mathematical talents and assigning her the role of part-time computer. Dive into their story here. WOMEN IN SCIENCE For a lighter companion to the two books above, one aimed at younger readers, artist and author Rachel Ignotofsky offers Women in Science: 50 Fearless Pioneers Who Changed the World (public library) — an illustrated encyclopedia of fifty influential and inspiring women in STEM since long before we acronymized the conquest of curiosity through discovery and invention, ranging from the ancient astronomer, mathematician, and philosopher Hypatia in the fourth century to Iranian mathematician Maryam Mirzakhani, born in 1977. True as it may be that being an outsider is an advantage in science and life, modeling furnishes young hearts with the assurance that people who are in some way like them can belong and shine in fields composed primarily of people drastically unlike them. It is this ethos that Ignotofsky embraces by being deliberate in ensuring that the scientists included come from a vast variety of ethnic backgrounds, nationalities, orientations, and cultural traditions.
There are the expected trailblazers who have stood as beacons of possibility for decades, even centuries: Ada Lovelace, who became the world’s first de facto computer programmer; Marie Curie, the first woman to win a Nobel Prize and to this day the only person awarded a Nobel in two different sciences; Jocelyn Bell Burnell, who once elicited the exclamation “Miss Bell, you have made the greatest astronomical discovery of the twentieth century!” (and was subsequently excluded from the Nobel she deserved); Maria Sibylla Merian, the 17th-century German naturalist whose studies of butterfly metamorphosis revolutionized entomology and natural history illustration; and Jane Goodall — another pioneer who turned her childhood dream into reality against tremendous odds and went on to do more for the understanding of nonhuman consciousness than any scientist before or since. Take a closer look here. * * * On December 2, I joined Science Friday alongside Scientific American editor Lee Billings to discuss some of our favorite science books of 2016: Published December 7, 2016
The habitable zone was commonly thought to start at a distance of 0.99 astronomical units (AU) from a sun-like star and end at around 1.7 to 2.0 AU. For reference, one AU is the distance from the Earth to the sun; 0.99 AU is approximately 92 million miles, barely closer than we are to our star. Any closer than that, the thinking went, and the planet would experience a runaway greenhouse effect as the intense heat from the star would boil away all the planet's surface water. However, a study published this week in Nature suggests moving the inner boundary of the habitable zone to 0.95 AU, or about 3.7 million miles closer to the host star. If this is true, then perhaps desert planets such as Arrakis, from the sci-fi novel Dune, might truly be capable of supporting life. While the new boundary estimate doesn't overturn everything we thought we knew about exoplanets, it does continue the trend of pushing the limits of where scientists think life beyond Earth could exist. In our own solar system, for example, the icy moons Europa and Enceladus (which orbit Jupiter and Saturn, respectively) fall outside the traditional habitable zone, but scientists still consider the moons to be top contenders in the search for extraterrestrial life, because the gravitational tug of each moon's host planet generates heat that could allow vast oceans of water to exist beneath the icy surface. Other studies have proposed that life could exist in the hot atmospheres of Venus-like planets, or even on the starless rogue planets that wander the galaxy. The paper's real innovation is that the new estimate relies on advanced computer modeling of the runaway greenhouse effect, thought to occur when a hot climate causes water to vaporize. The resulting water vapor traps outgoing heat, causing a temperature feedback loop—the world grows hotter and hotter until all of its surface water evaporates.
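The unit arithmetic behind the "3.7 million miles" figure is easy to check. A minimal sketch, using the standard IAU value for the astronomical unit (the constants are textbook values, not taken from the paper itself):

```python
# Sketch: convert the habitable-zone boundary shift from AU to miles.
AU_KM = 149_597_870.7        # one astronomical unit in kilometres (IAU 2012 value)
KM_PER_MILE = 1.609344

def au_to_miles(au: float) -> float:
    """Convert a distance in astronomical units to miles."""
    return au * AU_KM / KM_PER_MILE

old_inner = 0.99   # previous inner habitable-zone boundary, in AU
new_inner = 0.95   # revised boundary suggested by the Nature study, in AU

shift_miles = au_to_miles(old_inner - new_inner)
print(f"Inner boundary moves in by about {shift_miles / 1e6:.1f} million miles")
# prints: Inner boundary moves in by about 3.7 million miles
```

The 0.04 AU difference works out to roughly 3.7 million miles, matching the figure quoted in the article.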
To create the model, the team started with a 3D model that the Intergovernmental Panel on Climate Change uses to predict the impact of global warming. Then they added in astrophysical data to model planets that are much hotter than Earth. "The idea was to merge these tools that were available to create this climate model that is able to model very hot planets in very exotic contexts," says astrophysicist Jérémy Leconte of the Institut Pierre-Simon Laplace in Paris, a coauthor on the new paper. Previous attempts to define the temperature threshold that starts a runaway greenhouse effect depended on simple, one-dimensional models that mostly looked at water's ability to absorb heat. "They ignore the effect of the clouds, and the dynamic effects at equator and poles," says Leconte. The 3D model that Leconte and his colleagues created incorporates factors such as cloud coverage and the way air circulates from hot areas to cold. On Earth, air circulation patterns called Hadley cells form because of the differences in air temperature between the equator and the poles. The equator receives more sunlight, and so air there becomes hotter. As that hot air rises, it moves toward the cooler poles. In the process, it cools and creates dry areas such as the Sahara, which help to stabilize Earth's climate and prevent the runaway greenhouse effect. Similar circulation patterns form on planetary bodies where the heat is uneven—places like Mars or Titan, which has one hot side and one cold side. Because of the stabilizing effects of air circulation, Leconte's model shows the temperature threshold for runaway greenhouse to be higher than expected. That means planets can orbit closer to their stars and still be potentially habitable. Penn State physicist Ravi Kopparapu, who was not involved in the study, says the results were not dramatically different from his own recent estimates, which placed the inner boundary of the habitable zone at 0.97 AU.
But, he says, the study is the first of its kind and highlights the importance of using 3D models to understand the atmospheres and climates of Earth-like planets. The definition of the habitable zone's inner boundary will be especially important in the years to come, Kopparapu added, because current planet-hunting telescopes are best at spotting exoplanets that orbit close to their stars, rather than being situated comfortably within the habitable zone.
Welcome back to Messier Monday! In our ongoing tribute to the great Tammy Plotner, we take a look at the Messier 21 open star cluster. Enjoy! Back in the 18th century, famed French astronomer Charles Messier noted the presence of several “nebulous objects” in the night sky. Having originally mistaken them for comets, he began compiling a list of these objects so that other astronomers wouldn’t make the same mistake. Consisting of over 100 objects, the Messier Catalog has come to be viewed as a major milestone in the study of Deep Space Objects. One of these objects is Messier 21 (aka. NGC 6531), an open star cluster located in the Sagittarius constellation. A relatively young cluster that is tightly packed, this object is not visible to the naked eye. Hence it was not discovered until 1764, by Charles Messier himself. It is now one of the over 100 Deep Sky Objects listed in the Messier Catalog. At a distance of 4,250 light years from Earth, this group of 57 various magnitude stars all started life together about 4.6 million years ago as part of the Sagittarius OB1 stellar association. What makes this fairly loose collection of stars rather prized is its youth as a cluster, and the variation of age in its stellar members. Main sequence stars are easy enough to distinguish in a group, but low mass stars are a different story when it comes to separating them from older cluster members. As Byeong Park of the Korean Astronomy Observatory said in a 2001 study of the object: “In the case of a young open cluster, low-mass stars are still in the contraction phase and their positions in the photometric diagrams are usually crowded with foreground red stars and reddened background stars. The young open cluster NGC 6531 (M21) is located in the Galactic disk near the Sagittarius star forming region. The cluster is near to the nebula NGC 6514 (the Trifid nebula), but it is known that it is not associated with any nebulosity and the interstellar reddening is low and homogeneous.
Although the cluster is relatively near, and has many early B-type stars, it has not been studied in detail.” But study it in detail they did, finding 56 main sequence members, 7 pre-main sequence stars and 6 pre-main sequence candidates. But why did this cluster… you know, cluster in the way it did? As Didier Raboud, an astronomer from the Geneva Observatory, explained in his 1998 study “Mass segregation in very young open clusters“: “The study of the very young open cluster NGC 6231 clearly shows the presence of a mass segregation for the most massive stars. These observations, combined with those concerning other young objects and very recent numerical simulations, strongly support the hypothesis of an initial origin for the mass segregation of the most massive stars. These results led to the conclusion that massive stars form near the center of clusters. They are strong constraints for scenarii of star and stellar cluster formation.” “In the context of massive star formation in the center of clusters,” says Raboud, “it is worth noting that we observe numerous examples of multiple systems of O-stars in the center of very young OCs. In the case of NGC 6231, 8 stars among the 10 brightest are spectroscopic binaries with periods shorter than 6 days.” But are there any other surprises hidden inside? You bet! Try Be-stars, a class of rapidly rotating stars that end up becoming flattened at the poles. As Virginia McSwain of Yale University’s Department of Astronomy wrote in a 2005 study, “The Evolutionary Status of Be Stars: Results from a Photometric Study of Southern Open Clusters“: “Be stars are a class of rapidly rotating B stars with circumstellar disks that cause Balmer and other line emission. There are three possible reasons for the rapid rotation of Be stars: they may have been born as rapid rotators, spun up by binary mass transfer, or spun up during the main-sequence (MS) evolution of B stars.
To test the various formation scenarios, we have conducted a photometric survey of 55 open clusters in the southern sky. We use our results to examine the age and evolutionary dependence of the Be phenomenon. We find an overall increase in the fraction of Be stars with age until 100 Myr, and Be stars are most common among the brightest, most massive B-type stars above the zero-age main sequence (ZAMS). We show that a spin-up phase at the terminal-age main sequence (TAMS) cannot produce the observed distribution of Be stars, but up to 73% of the Be stars detected may have been spun-up by binary mass transfer. Most of the remaining Be stars were likely rapid rotators at birth. Previous studies have suggested that low metallicity and high cluster density may also favor Be star formation.” History of Observation: Charles Messier discovered this object on June 5th, 1764. As he wrote in his notes on the occasion: “In the same night I have determined the position of two clusters of stars which are close to each other, a bit above the Ecliptic, between the bow of Sagittarius and the right foot of Ophiuchus: the known star closest to these two clusters is the 11th of the constellation Sagittarius, of seventh magnitude, after the catalog of Flamsteed: the stars of these clusters are, from the eighth to the ninth magnitude, environed with nebulosities. I have determined their positions. The right ascension of the first cluster, 267d 4′ 5″, its declination 22d 59′ 10″ south. The right ascension of the second, 267d 31′ 35″; its declination, 22d 31′ 25″ south.” While Messier did separate the two star clusters, he assumed the nebulosity of M20 was also involved with M21. In this circumstance, we cannot fault him. After all, his job was to locate comets, and the purpose of his catalog was to identify those objects that were not.
In later years, Messier 21 would be revisited again by Admiral Smyth, who would describe it as follows: “A coarse cluster of telescopic stars, in a rich gathering galaxy region, near the upper part of the Archer’s bow; and about the middle is the conspicuous pair above registered, – A being 9, yellowish, and B 10, ash coloured. This was discovered by Messier in 1764, who seems to have included some bright outliers in his description, and what he mentions as nebulosity, must have been the grouping of the minute stars in view. Though this was in the power of the meridian instruments, its mean apparent place was obtained by differentiation from Mu Sagittarii, the bright star about 2 deg 1/4 to the north-east of it.” Locating Messier 21: Once you have become familiar with the Sagittarius region, finding Messier 21 is easy. It’s located just two and a half degrees northwest of Messier 8 – the “Lagoon Nebula” – and about a half a degree northeast of Messier 20 – the “Trifid Nebula“. If you are just getting started in astronomy, try starting at the teapot’s tip star (Lambda) “Al Nasl”, and starhopping in the finderscope northwest to the Lagoon. While the nebulosity might not show in your finder, the optical double 7 Sagittarii will. From there you will spot a bright cluster of stars two degrees due north. These are the stars embedded within the Trifid Nebula, and the small, compressed area of stars to its northeast is the open star cluster M21. It will show well in binoculars under most sky conditions as a small, fairly bright concentration and resolve well for all telescope sizes. And here are the quick facts, for your convenience: Object Name: Messier 21 Alternative Designations: M21, NGC 6531 Object Type: Open Star Cluster Right Ascension: 18 : 04.6 (h:m) Declination: -22 : 30 (deg:m) Distance: 4.25 (kly) Visual Brightness: 6.5 (mag) Apparent Dimension: 13.0 (arc min) We have written many interesting articles about Messier Objects here at Universe Today.
Here’s Tammy Plotner’s Introduction to the Messier Objects, M1 – The Crab Nebula, M8 – The Lagoon Nebula, and David Dickison’s articles on the 2013 and 2014 Messier Marathons.
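The quick facts above (distance ~4,250 light-years, apparent dimension ~13 arcminutes) are enough to estimate the cluster's physical size with the standard small-angle approximation. A rough sketch, using only the figures quoted in the article:

```python
import math

# Sketch: estimate M21's physical diameter from its distance and angular size.
distance_ly = 4250.0      # distance from the quick facts, in light-years
apparent_arcmin = 13.0    # apparent dimension from the quick facts, in arcminutes

theta_rad = math.radians(apparent_arcmin / 60.0)  # arcmin -> degrees -> radians
diameter_ly = distance_ly * theta_rad             # small-angle approximation

print(f"M21 spans roughly {diameter_ly:.0f} light-years")  # roughly 16 ly
```

So a cluster that appears less than half the Moon's width on the sky is, at that distance, a region some 16 light-years across.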
A red giant sheds its skin This ghostly image features a distant and pulsating red giant star known as R Sculptoris. Situated 1200 light-years away in the constellation of Sculptor, R Sculptoris is something known as a carbon-rich asymptotic giant branch (AGB) star, meaning that it is nearing the end of its life. At this stage, low- and intermediate-mass stars cool off, create extended atmospheres, and lose a lot of their mass — they are on their way to becoming spectacular planetary nebulae. While the basics of this mass-loss process are understood, astronomers are still investigating how it begins near the surface of the star. The amount of mass lost by a star actually has huge implications for its stellar evolution, altering its future, and leading to different types of planetary nebulae. As AGB stars end their lives as planetary nebulae, they produce a vast range of elements — including 50% of elements heavier than iron — which are then released into the Universe and used to make new stars, planets, moons, and eventually the building blocks of life. One particularly intriguing feature of R Sculptoris is its dominant bright spot, which looks to be two or three times brighter than the other regions. The astronomers that captured this wonderful image, using ESO’s Very Large Telescope Interferometer (VLTI), have concluded that R Sculptoris is surrounded by giant “clumps” of stellar dust that are peeling away from the shedding star. This bright spot is, in fact, a region around the star with little to no dust, allowing us to look deeper into the stellar surface. Image credit: ESO/M. Wittkowski (ESO), Very Large Telescope Interferometer, published 9 February 2018.
Since meteorites are samples of the universe outside our atmosphere, they are by definition awesome, excepting the occasional mass extinction they cause. But humans are knowingly creating the current age of mass extinction, so who are we to throw stones at non-sentient space rocks? A scientist named Clair Patterson (1922-1995) used meteorites to help determine the age of the earth. In studying them to learn about our home, he discovered a much closer and more personal problem: atmospheric lead. He was a geochemist who spent years developing and refining his doctoral advisor’s method for finding the age of the earth. He used uranium-lead, then lead-lead dating methods on meteorite samples, including the Canyon Diablo meteorite. (An iron meteorite that impacted in what is now Arizona around 50,000 years ago and left fragments all around the impact crater.) At the University of Chicago, Harrison Brown came up with a method for counting lead isotopes in rocks to calculate the age of the Earth. As Bill Bryson wrote in his A Short History of Nearly Everything, “Realizing that the work would be exceedingly tedious, he assigned it to young Clair Patterson as his dissertation project. Famously he promised that determining the age of the Earth with his new method would be ‘duck soup.’ In fact, it would take years.” Patterson started his work in 1948 and took it with him when he went to Caltech in 1952. It involved making very precise measurements in extremely old rocks. One problem was that they had trouble finding rocks old enough. (Odd as it is to think today, they didn’t yet know why surface rocks were younger than the planet.) So Patterson made the leap and assumed (correctly) that meteorites were leftovers from the creation of the solar system, so they would be the same age as the earth. In 1953 he finally got specimens of the Canyon Diablo meteorite and access to a mass spectrometer to study and age them.
Shortly afterwards he announced his findings that Earth was around 4.55 billion years old, which is still the number we use today. How did he date the samples? I'm not too good with radiometric dating, so here's a super short and simple version: For uranium-lead dating: Uranium is an unstable element that decays into lead. So by studying the ratio of uranium to lead atoms and using known rates of decay, you can tell how long the uranium has been decaying and date something really ancient to the time of its creation. Lead-lead dating is used less often now, but was an essential part of his study. The thing to know here is that elements have different isotopes. Isotopes are different versions of the same element. They've got almost the exact same properties except for slightly different weights. As Sam Kean puts it in The Disappearing Spoon, "Each type, or isotope, has a different atomic weight-204, 206, or 207. Some lead of all three types has existed since our supernova birth, but some has been created fresh by uranium. The catch is that uranium breaks down into only two of those types, 206 and 207. The amount of 204 is fixed, since no element breaks down into it." So to determine a sample's age he could compare the ratio of lead isotopes created by decay to those that occur naturally. He used three stony and two iron meteorites to determine the age of the earth. Why both types? Uranium doesn't mix with iron, but lead does. So iron meteorites retain the original lead isotope proportions, since they didn't contain any uranium to add new lead atoms. Another problem he kept running into was the sheer volume of lead contamination while he was doing his research. The meteorites were always contaminated with large amounts of atmospheric lead whenever they were exposed to air. So once he established the age of the Earth, he began to look at all this atmospheric lead.
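The uranium-lead arithmetic sketched above can be made concrete. This is a minimal illustration, not Patterson's actual procedure (his lead-lead work used isochrons across several meteorites and is more involved); the function name is mine, and the uranium-238 half-life is a standard published value.

```python
import math

# Uranium-238 decays (through a chain of short-lived steps) to lead-206.
U238_HALF_LIFE = 4.468e9               # years, standard value
LAMBDA = math.log(2) / U238_HALF_LIFE  # decay constant, per year

def u_pb_age(pb206_atoms, u238_atoms):
    """Age in years from radiogenic lead-206 vs. remaining uranium-238.

    N_U(t) = N_U(0) * exp(-LAMBDA * t), and each decayed uranium atom
    leaves behind one lead atom, so Pb/U = exp(LAMBDA * t) - 1.
    """
    return math.log(1 + pb206_atoms / u238_atoms) / LAMBDA

# Sanity check: equal amounts of radiogenic lead and remaining uranium
# means exactly one half-life has elapsed.
print(f"{u_pb_age(1.0, 1.0):.3e} years")
```

Given a measured Pb/U ratio in a mineral, the same formula dates the moment the mineral crystallized and locked in its uranium.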
He discovered that all research on lead's effects on humans had been funded by the corporations that made lead additives. It should not come as a shock, then, that the findings were neither truthful nor accurate. To quote Bryson again: "In one such study, a doctor who had no specialized training in chemical pathology undertook a five-year program in which volunteers were asked to breathe in or swallow lead in elevated quantities. Their urine and feces were tested. Unfortunately, as the doctor appears not to have known, lead is not excreted as a waste product. Rather, it accumulates in the bones and blood-that's what makes it so dangerous-and neither bone nor blood was tested. In consequence, lead was given a clean bill of health." Patterson wondered how much lead levels had increased over time. Industry said that the amount of lead in the environment had only doubled with its industrial usage. Patterson found that deep ocean water had 3-10 times less lead than surface water, while other metal ratios remained steady. Studying ice core samples from Greenland (plugs of ice frozen over the centuries that give a tiny sample of atmospheric conditions from the time when that snow or rain first fell) showed that lead levels started to rise steadily after it began to be used as an additive in gasoline. 90% of lead in the atmosphere was from lead in gasoline. He had found the source of the lead that was contaminating his samples, but he was concerned about the public health implications of all that lead and spent much of the rest of his life fighting to make sure the public knew about them. He was battling large and wealthy corporations and individuals, so many research centers were closed to him. (Plenty of them were supposed to be neutral, but then as now, money and corporate interests trump good science, due process, or the public interest. Especially when one director was a Supreme Court judge and another an influential member of the National Geographic Society.
) He became a liability to schools, as companies began to pressure the institutions he worked for to fire him or shut him up. He was excluded from a national research panel studying the effects of lead in 1971, which was particularly egregious since by then he was the foremost expert on the topic. It was through his efforts that there was a Clean Air Act of 1970 (though he reasonably felt it didn't do enough or act fast enough) and that leaded gasoline was finally taken off the market in 1986. He was also worried about the amounts of lead in soldering and paint. He found that lead levels were much higher in canned than in fresh foods. Again, it was a few orders of magnitude higher than what the food companies claimed. Appallingly, despite this, lead solder wasn't removed from food containers in America until 1993! Interestingly, some studies hint that lead poisoning might have been a contributing factor to the rise in crime starting during the postwar era. Thanks to him, while we still have a great deal more lead in our blood than those born before the twentieth century, by the late 1990s lead levels in blood had fallen 80% and were still falling. Clair Patterson died in 1995. Having looked up some of Patterson's obituaries, several of which completely leave out his work in the public interest, I think Bryson gave him a better one: "He didn't win a Nobel Prize for his work. Geologists never do. Nor, more puzzlingly, did he gain any fame or even much attention from a half century of consistent and increasingly selfless achievement. A good case could be made that he was the most influential geologist of the twentieth century."
The so-called Platonic Solids are regular polyhedra. "Polyhedra" is a Greek word meaning "many faces." There are five of these, and they are characterized by the fact that each face is a regular polygon, that is, a straight-sided figure with equal sides and equal angles: the tetrahedron has four triangular faces, four vertices, and six edges; the cube has six square faces, eight vertices, and twelve edges; the octahedron has eight triangular faces, six vertices, and twelve edges; the dodecahedron has twelve pentagonal faces, twenty vertices, and thirty edges; and the icosahedron has twenty triangular faces, twelve vertices, and thirty edges. It is natural to wonder why there should be exactly five Platonic solids, and whether there might conceivably be one that simply hasn't been discovered yet. However, it is not difficult to show that there must be five—and that there cannot be more than five. First, consider that at each vertex (point) at least three faces must come together, for if only two came together they would collapse against one another and we would not get a solid. Second, observe that the sum of the interior angles of the faces meeting at each vertex must be less than 360°, for otherwise they would not all fit together. Now, each interior angle of an equilateral triangle is 60°, hence we could fit together three, four, or five of them at a vertex, and these correspond to the tetrahedron, the octahedron, and the icosahedron. Each interior angle of a square is 90°, so we can fit only three of them together at each vertex, giving us a cube. (We could fit four squares together, but then they would lie flat, giving us a tessellation instead of a solid.) The interior angles of the regular pentagon are 108°, so again we can fit only three together at a vertex, giving us the dodecahedron. And that makes five regular polyhedra. What about the regular hexagon, that is, the six-sided figure?
Well, its interior angles are 120°, so if we fit three of them together at a vertex the angles sum to precisely 360°, and therefore they lie flat, just like four squares (or six equilateral triangles) would do. For this reason we can use hexagons to make a tessellation of the plane, but we cannot use them to make a Platonic solid. And, obviously, no polygon with more than six sides can be used either, because the interior angles just keep getting larger. The Greeks, who were inclined to see in mathematics something of the nature of religious truth, found this business of there being exactly five Platonic solids very compelling. The philosopher Plato concluded that they must be the fundamental building blocks—the atoms—of nature, and assigned to them what he believed to be the essential elements of the universe. He followed the earlier philosopher Empedocles in assigning fire to the tetrahedron, earth to the cube, air to the octahedron, and water to the icosahedron. To the dodecahedron Plato assigned the element cosmos, reasoning that, since it was so different from the others by virtue of its pentagonal faces, it must be what the stars and planets are made of. Although this might seem naive to us, we should be careful not to smile at it too much: these were powerful ideas, and led to real knowledge. As late as the 16th century, for instance, Johannes Kepler was applying a similar intuition to attempt to explain the motion of the planets. Early in his life he concluded that the distances of the orbits, which he assumed were circular, were related to the Platonic solids in their proportions. This model is represented in this woodcut from his treatise Mysterium Cosmographicum.
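The counting argument above fits in a few lines of code. Here is a brute-force sketch (variable names are mine): a regular p-gon has interior angle 180(p - 2)/p degrees, and a solid corner needs at least three faces whose angles sum to strictly less than 360°.

```python
# Enumerate pairs (p, q): faces are regular p-gons, and q of them meet at
# each vertex. A convex corner needs q >= 3 faces and a total angle < 360.
solids = []
for p in range(3, 8):                # triangle up to heptagon
    interior = 180 * (p - 2) / p     # interior angle of a regular p-gon
    for q in range(3, 8):
        if q * interior < 360:
            solids.append((p, q))

# (3,3) tetrahedron, (3,4) octahedron, (3,5) icosahedron,
# (4,3) cube, (5,3) dodecahedron -- exactly five.
print(solids)
```

Extending either range changes nothing: for p >= 6 even three faces already reach 360°, exactly as the hexagon discussion says.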
Only later in his life, after his friend the great astronomer Tycho Brahe bequeathed to him an enormous collection of astronomical observations, did Kepler finally reason to the conclusion that this model of planetary motion was mistaken, and that in fact planets moved around the sun in ellipses, not circles. It was this discovery that led Isaac Newton, less than a century later, to formulate his law of gravity—which governs planetary motion—and which ultimately gave us our modern conception of the universe. The beauty and interest of the Platonic solids continue to inspire all sorts of people, and not just mathematicians. For a look at how one artist used these shapes, you may wish to study the M.C. Escher Minitext.
Unique pair of hidden black holes discovered by XMM-Newton

22 April 2014. A pair of supermassive black holes in orbit around one another has been spotted by XMM-Newton. This is the first time such a pair has been seen in an ordinary galaxy. They were discovered because they ripped apart a star when the space observatory happened to be looking in their direction. Most massive galaxies in the Universe are thought to harbour at least one supermassive black hole at their centre. Two supermassive black holes are the smoking gun that the galaxy has merged with another. Thus, finding binary supermassive black holes can tell astronomers about how galaxies evolved into their present-day shapes and sizes.

[Artist's impression of a binary supermassive black hole system. Credit: ESA - C. Carreau]

To date, only a few candidates for close binary supermassive black holes have been found. All are in active galaxies where they are constantly ripping gas clouds apart, in the prelude to crushing them out of existence. In the process of destruction, the gas is heated so much that it shines at many wavelengths, including X-rays. This gives the galaxy an unusually bright centre, and leads to it being called active. The new discovery, reported by Fukun Liu, Peking University, Beijing, China, and colleagues, is important because it is the first to be found in a galaxy that is not active. "There might be a whole population of quiescent galaxies that host binary black holes in their centres," says co-author Stefanie Komossa, Max-Planck-Institut für Radioastronomie, Bonn, Germany. But finding them is a difficult task because in quiescent galaxies, there are no gas clouds feeding the black holes, and so the cores of these galaxies are truly dark. The only hope that the astronomers have is to be looking in the right direction at the moment one of the black holes goes to work, and rips a star to pieces. Such an occurrence is called a 'tidal disruption event'.
As the star is pulled apart by the gravity of the black hole, it gives out a flare of X-rays. In an active galaxy, the black hole is continuously fed by gas clouds. In a quiescent galaxy, the black hole is fed by tidal disruption events that occur sporadically and are impossible to predict. So, to increase the chances of catching such an event, researchers use ESA's X-ray observatory, XMM-Newton, in a novel way.

[XMM-Newton slew scans (2001-2010). Credit: ESA/A. Read (University of Leicester)]

Usually, the observatory collects data from designated targets, one at a time. Once it completes an observation, it slews to the next. The trick is that during this movement, XMM-Newton keeps the instruments turned on and recording. Effectively this surveys the sky in a random pattern, producing data that can be analysed for unknown or unexpected sources of X-rays. On 10 June 2010, a tidal disruption event was spotted by XMM-Newton in the galaxy SDSS J120136.02+300305.5. Komossa and colleagues were scanning the data for such events and scheduled follow-up observations just days later with XMM-Newton and NASA's Swift satellite. The galaxy was still spilling X-rays into space. It looked exactly like a tidal disruption event caused by a supermassive black hole, but as they tracked the slowly fading emission day after day, something strange happened. The X-rays fell below detectable levels between days 27 and 48 after the discovery. Then they re-appeared and continued to follow a more expected fading rate, as if nothing had happened. Now, thanks to Fukun Liu, the behaviour can be explained. "This is exactly what you would expect from a pair of supermassive black holes orbiting one another," says Liu. Liu had been working on models of black hole binary systems that predicted a sudden plunge to darkness and then a recovery, because the gravity of one of the black holes disrupted the flow of gas onto the other, temporarily depriving it of the fuel that fires the X-ray flare.
He found two configurations that could reproduce the observations of J120136. In the first, the primary black hole contained 10 million solar masses and was orbited by a black hole of about a million solar masses in an elliptical orbit. In the second solution, the primary black hole was about a million solar masses and in a circular orbit. In both cases, the separation between the black holes was relatively small: 0.6 milliparsecs, or about 2 thousandths of a light year. This is about the width of our Solar System. Being this close, the fate of this newly discovered black hole pair is sealed. They will radiate their orbital energy away, gradually spiralling together, until in about two million years' time they will merge into a single black hole. Now that astronomers have found this first candidate for a binary black hole in a quiescent galaxy, the search is inevitably on for more. XMM-Newton will continue its slew survey. This detection will also spur interest in a network of telescopes that search the whole sky for tidal disruption events. "Once we have detected thousands of tidal disruption events, we can begin to extract reliable statistics about the rate at which galaxies merge," says Komossa. There is another hope for the future as well. When binary black holes merge, they are predicted to release a massive burst of energy into the Universe, but mostly not in X-rays. "The final merger is expected to be the strongest source of gravitational waves in the Universe," says Liu. Gravitational waves are ripples in the space-time continuum. Astronomers around the world are currently building a new type of observatory to detect these ripples. ESA are also involved in opening this new window on the Universe. In 2015, ESA will launch LISA Pathfinder, which will test the technology necessary for building a gravitational wave detector that must be placed in space.
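The quoted separation is easy to double-check with standard unit conversions (the conversion factors below are textbook values, not taken from the article):

```python
PC_TO_LY = 3.2616    # light-years per parsec, standard value
PC_TO_AU = 206265    # astronomical units per parsec, standard value

sep_pc = 0.6e-3                  # 0.6 milliparsec, as quoted
sep_ly = sep_pc * PC_TO_LY       # ~0.002 light-years ("2 thousandths")
sep_au = sep_pc * PC_TO_AU       # ~124 AU, roughly the Solar System's width

print(f"{sep_ly:.4f} ly, {sep_au:.0f} AU")
```

Both figures match the article's description: about two thousandths of a light year, comparable to the scale of our Solar System.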
The search for elusive gravitational waves is also the theme for one of ESA's next large science missions, the L3 mission in the Cosmic Vision programme. In the meantime, XMM-Newton will continue to look out for the tidal disruption events that betray the presence of binary supermassive black hole candidates. "The innovative use of XMM-Newton's slew observations made the detection of this binary supermassive black hole system possible," says Norbert Schartel, ESA's XMM-Newton Project Scientist. "This demonstrates the important role that long-lasting space observatories have in detecting rare events that can potentially open new areas in astronomy." The results described in this article are reported in "A milli-parsec supermassive black hole binary candidate in the galaxy SDSS J120136.02+300305.5", by F.K. Liu, Shuo Li, and S. Komossa, published in the May 10 issue of The Astrophysical Journal, 2014, Volume 786; doi:10.1088/0004-637X/786/2/103. The European Space Agency's X-ray Multi-Mirror Mission, XMM-Newton, was launched in December 1999. It is the biggest scientific satellite to have been built in Europe and uses over 170 wafer-thin cylindrical mirrors spread over three high-throughput X-ray telescopes. Its mirrors are among the most powerful ever developed. XMM-Newton's orbit takes it almost a third of the way to the Moon, allowing for long, uninterrupted views of celestial objects.
This was written by Sarah Finne Robinson, WeSpire Founding Partner: I love to think about the ancient royal families who retained personal astronomers, responsible for tracking the movements of the celestial bodies in the skies for extraordinary events. Absent networked communications technology, the skies provided the ultimate wide-screen monitor: a vast, ethereal display in which to divine earthly events. For example, Vatican scholars agree that the Magi's journey to welcome Jesus was guided by an exceptionally bright star, dramatically viewed in a sky unencumbered by man-made light. Today's astronomers still work around the clock to take note of celestial phenomena. Take just one month, this May, for instance.
- May 9: Mercury completed a transit across the Sun. The transit of Mercury occurs only about 13 times in a century, and the next one takes place on November 11, 2019.
- May 21: Blue Moon — the third full moon of four this season. (By the seasonal definition, a "blue moon" is the third of four full moons in a single season; the more familiar definition is a second full moon within a single calendar month.) The next monthly Blue Moon will appear on January 31, 2018.
- May 22: Mars arrives at opposition to the Sun, and coincidentally will appear close to the spectacular blue moon. During this period, Mars shines at its brightest for 2016, i.e. magnitude minus 2.1, or nearly twice as bright as Sirius, the brightest star in the sky.
- Also on May 22: Mercury, one of four planets currently in retrograde, turns direct.
Perhaps the most astounding feature of all these spectacular events is that they are happening on a schedule, and in anticipated detail. And not only this month, but comme d'habitude. For instance earlier this year, five planets appeared in the dawn sky, starting on January 19 and continuing until February 20. This occurrence will repeat in August, for six days beginning on August 13, and at dusk instead of dawn.
Before this year, the five planets had not been together for more than a decade. These fantastic celestial events are not capricious; they are operating on a precise timetable and with a specificity that eclipses the most talented flight scheduler or soufflé chef. About twice a year, the moon passes in front of the Earth as seen from NASA's new Deep Space Climate Observatory (DSCOVR) satellite, which recorded one such event. "It is surprising how much brighter Earth is than the moon," says Adam Szabo, DSCOVR's project scientist. "Our planet is a truly brilliant object in dark space." Sky-gazing reminds us that the only place to live is exactly where we are. For all the mayhem, brutality, and depravity of this world, it is a miraculous, flabbergastingly hospitable haven. No offense to Elon Musk and other contemporary rocket expeditioners, but my bets are on this planet: the Earth is the only livable speck in a galaxy of untenable, uncomfortable, alien alternatives. Who on earth would dare to deny or interfere with a natural order that includes such rhythms? Herod, Saul and Priam in ancient times come to mind. Proud, powerful people who privileged their own gain and reaped disaster. Is it naive to hope that the most powerful climate change deniers might eventually take note of this short-sightedness? This evening, I recommend straightening your cervical vertebrae from constant tapping and tweeting on the phone, and gazing skyward for the rare sight of a full "blue" moon with tiny red Mars to the side. And let's hope our most influential leaders are doing the same, reminding themselves that there are forces operating on a scale monumentally larger and more enduring than our vulnerable human constructions, infinitesimally tiny and inconsequential by comparison. If you happen to know someone who might benefit from a stellar perspective, please invite them along. This post originally appeared on Huffington Post.
These small rocky worlds are thought to have been born in a disc of dust and gas that surrounded the Sun. As time went by, the dust grains snowballed into larger and larger rocks and boulders. The terrestrial planets we see today are the survivors of a prolonged, chaotic period of colossal impacts which left their surface imprints in the form of giant basins and craters. How can we piece together a planet's history since its formation? On Earth, the geological timeline is quite easy to determine, since we can analyse the rocks and minerals in laboratories. If you had 1 gram of pure radioactive nuclei, then after one half-life you would have only half a gram of them left. However, the material does not disappear. Instead, the radioactive atoms are replaced with their decay products. Sometimes the radioactive atoms are called parents and the decay products are called daughter elements. In this way, radioactive elements with half-lives we have determined can provide accurate nuclear clocks. By comparing how much of a radioactive parent element is left in a rock to how much of its daughter products have accumulated, we can learn how long the decay process has been going on and hence how long ago the rock formed. Table 1 summarizes the decay reactions used most often to date lunar and terrestrial rocks. When astronauts first flew to the Moon, one of their most important tasks was to bring back lunar rocks for radioactive age-dating. Until then, astronomers and geologists had no reliable way to measure the age of the lunar surface. Counting craters had let us calculate relative ages (for example, the heavily cratered lunar highlands were older than the dark lava plains), but scientists could not measure the actual age in years. Only when the first Apollo samples were dated did we learn that the Moon is an ancient, geologically dead world.
Using such dating techniques, we have been able to determine the ages of both Earth and the Moon: each was formed about 4.5 billion years ago. We should also note that the decay of radioactive nuclei generally releases energy in the form of heat. Although the energy from a single nucleus is not very large in human terms, the enormous numbers of radioactive nuclei in a planet or moon (especially early in its existence) can be a significant source of internal energy for that world. The ages of the surfaces of objects in the solar system can be estimated by counting craters: on a given world, a more heavily cratered region will generally be older than one that is less cratered. We can also use samples of rocks with radioactive elements in them to obtain the time since the layer in which the rock formed last solidified. The half-life of a radioactive element is the time it takes for half the sample to decay; we determine how many half-lives have passed by how much of a sample remains as the radioactive element and how much has become the decay product. In this way, we have estimated the age of the Moon and Earth to be roughly 4.5 billion years. We have rocks brought back from the Moon, meteorites, and rocks that we know came from Mars. We can then use radioactive age dating in order to date the ages of the surfaces when the rocks first formed (i.e. when they last solidified). We also have meteorites from asteroids and can date them, too. These are the surfaces that we can get absolute ages for. For the others, one can only use relative age dating such as counting craters in order to estimate the age of the surface and the history of the surface. The biggest assumption is that, to first order, the number of asteroids and comets hitting the Earth and the Moon was the same as for Mercury, Venus, and Mars. There is a lot of evidence that this is true. The bottom line is that the more craters one sees, the older the surface is.
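The half-life bookkeeping described above amounts to counting doublings in reverse. Here is a small sketch (the function name is mine, and the potassium-40 half-life is a standard value used only as an example, not a figure from the text):

```python
import math

def age_from_remaining(remaining_fraction, half_life_years):
    """Age of a sample from the fraction of the parent element still present.

    If a fraction f of the parent remains, then log2(1/f) half-lives have
    elapsed; multiplying by the half-life gives the age in years.
    """
    half_lives = math.log2(1 / remaining_fraction)
    return half_lives * half_life_years

# Example: potassium-40 has a half-life of about 1.25 billion years. A rock
# retaining a quarter of its original K-40 is two half-lives old:
print(age_from_remaining(0.25, 1.25e9))  # prints 2500000000.0
```

In practice geologists measure the daughter product as well (argon-40 for potassium-40), since the original amount of the parent is not known directly; the fraction remaining is inferred from the parent/daughter ratio.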
This can be read in two ways: why is it important to know the age of a planet, and how is age dating used in determining it? Based on our study of meteorites and rocks from the Moon, as well as modeling the formation of planets, it is pretty well established that all of the objects in the Solar System formed very quickly, about 4.5 billion years ago. When we age date a planet, we are actually just dating the age of the surface, not the whole planet. Determining the age of surfaces on Mars: we can get absolute ages only if we have rocks from that surface. For others, all we are doing is getting a relative age, using things like the formation of craters and other features on a surface. By studying other planets, we are learning more about our own planet: the effects of impacts and how they might affect us here on Earth, global climate change (Venus vs. Earth) and what could happen to Earth in an extreme case, and so on. From Wikipedia, radioactive decay is the process in which an unstable atomic nucleus spontaneously loses energy by emitting ionizing particles and radiation. This decay, or loss of energy, results in an atom of one type (called the parent nuclide) transforming to an atom of a different type (another element, or another isotope of the same element), named the daughter nuclide. For example: a carbon atom (the "parent") emits radiation and transforms to a nitrogen atom (the "daughter"). It is impossible to predict when a given atom will decay, but given a large number of similar atoms, the decay rate on average is predictable. Relative dating can only tell us that one thing (call it A) is younger than another thing (B) beneath it. We have no idea how much older thing B is, we just know that it's older.
That's why geologic time is usually diagramed in tall columnar diagrams like this. Just like a stack of sedimentary rocks, time is recorded in horizontal layers, with the oldest layer on the bottom, superposed by ever-younger layers, until you get to the most recent stuff on the tippy top. On Earth, we have a very powerful method of relative age dating: fossil assemblages. Paleontologists have examined layered sequences of fossil-bearing rocks all over the world, and noted where in those sequences certain fossils appear and disappear. When you find the same fossils in rocks far away, you know that the sediments that formed those rocks must have been laid down at the same time. The more fossils you find at a location, the more you can fine-tune the relative age of this layer versus that layer. Of course, this only works for rocks that contain abundant fossils. Conveniently, the vast majority of rocks exposed on the surface of Earth are less than a few hundred million years old, which corresponds to the time when there was abundant multicellular life here. Look closely at the Geologic Time Scale chart, and you might notice that the first three columns cover only a small fraction of Earth's history.
That last, pink Precambrian column, with its sparse list of epochal names, covers the first four billion years of Earth's history, more than three quarters of Earth's existence. Most Earth geologists don't talk about that much. Paleontologists have used major appearances and disappearances of different kinds of fossils on Earth to divide Earth's history -- at least the part of it for which there are lots of fossils -- into lots of eras and periods and epochs. When you talk about something happening in the Precambrian or the Cenozoic or the Silurian or Eocene, you are talking about something that happened when a certain kind of fossil life was present. Major boundaries in Earth's time scale happen when there were major extinction events that wiped certain kinds of fossils out of the fossil record. This is called the chronostratigraphic time scale -- that is, the division of time (the "chrono-" part) according to the relative position in the rock record (that's the "stratigraphy" part).
The science of paleontology, and its use for relative age dating, was well-established before the science of isotopic age-dating was developed. Nowadays, age-dating of rocks has established pretty precise numbers for the absolute ages of the boundaries between fossil assemblages, but there's still uncertainty in those numbers, even for Earth. In fact, I have sitting in front of me on my desk a two-volume work on The Geologic Time Scale, the product of an eight-year effort to fine-tune the correlation between the relative time scale and the absolute time scale. The Geologic Time Scale is not light reading, but I think that every Earth or space scientist should have a copy in his or her library -- and make that the latest edition. In the time since the previous geologic time scale was published, most of the boundaries between Earth's various geologic ages have shifted by a million years or so, and one of them (the Carnian-Norian boundary within the late Triassic epoch) has shifted by 12 million years.
With this kind of uncertainty, Felix Gradstein, editor of The Geologic Time Scale, suggests that we should stick with relative age terms when describing when things happened in Earth's history (emphasis mine): For clarity and precision in international communication, the rock record of Earth's history is subdivided into a "chronostratigraphic" scale of standardized global stratigraphic units, such as "Devonian", "Miocene", "Zigzagiceras zigzag ammonite zone", or "polarity Chron C25r". Unlike the continuous ticking clock of the "chronometric" scale (measured in years), the chronostratigraphic scale is based on relative time units in which global reference points at boundary stratotypes define the limits of the main formalized units, such as "Permian". The chronostratigraphic scale is an agreed convention, whereas its calibration to linear time is a matter for discovery or estimation. Got that? We can all agree (to the extent that scientists agree on anything) to the fossil-derived scale, but its correspondence to numbers is a "calibration" process, and we must either make new discoveries to improve that calibration, or estimate as best we can based on the data we have already. To show you how this calibration changes with time, here's a graphic developed from the previous version of The Geologic Time Scale, comparing the absolute ages of the beginning and end of the various periods of the Paleozoic era between the two editions. I tip my hat to Chuck Magee for the pointer to this graphic. Fossils give us this global chronostratigraphic time scale on Earth. On other solid-surfaced worlds -- which I'll call "planets" for brevity, even though I'm including moons and asteroids -- we haven't yet found a single fossil. Something else must serve to establish a relative time sequence. That something else is impact craters.
Earth is an unusual planet in that it doesn't have very many impact craters -- they've mostly been obliterated by active geology. Venus, Io, Europa, Titan, and Triton have a similar problem. On almost all the other solid-surfaced planets in the solar system, impact craters are everywhere. The Moon, in particular, is saturated with them. We use craters to establish relative age dates in two ways. If an impact event was large enough, its effects were global in reach. For example, the Imbrium impact basin on the Moon spread ejecta all over the place. Any surface that has Imbrium ejecta lying on top of it is older than Imbrium. Any craters or lava flows that happened inside the Imbrium basin or on top of Imbrium ejecta are younger than Imbrium. Imbrium is therefore a stratigraphic marker -- something we can use to divide the chronostratigraphic history of the Moon. The other way we use craters to age-date surfaces is simply to count the craters. At its simplest, surfaces with more craters have been exposed to space for longer, so are older, than surfaces with fewer craters. Of course the real world is never quite so simple. There are several different ways to destroy smaller craters while preserving larger craters, for example. Despite problems, the method works really, really well. Most often, the events that we are age-dating on planets are related to impacts or volcanism. Volcanoes can spew out large lava deposits that cover up old cratered surfaces, obliterating the cratering record and resetting the crater-age clock.
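The "more craters means older" logic above can be sketched in a few lines. This is a minimal illustration, not any mission team's pipeline; the surface names, crater counts, and areas below are invented for the example.

```python
# Hedged sketch: ordering surfaces by crater density ("more craters = older").
# The surface names and counts below are made up for illustration only.

def relative_age_order(surfaces):
    """Return surfaces sorted oldest-first by crater density (craters per km^2)."""
    return sorted(surfaces, key=lambda s: s["craters"] / s["area_km2"], reverse=True)

units = [
    {"name": "cratered highlands", "craters": 1200, "area_km2": 5.0e4},
    {"name": "young lava plain",   "craters": 30,   "area_km2": 4.0e4},
    {"name": "old mare surface",   "craters": 400,  "area_km2": 8.0e4},
]

for u in relative_age_order(units):
    print(u["name"], round(u["craters"] / u["area_km2"], 5))
```

A lava flow that resurfaces a unit simply resets its count to near zero, which is why the "young lava plain" sorts last despite sitting among ancient terrain.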
Pulses from a rapidly spinning neutron star are delayed slightly as they head toward Earth, passing through the distorted space around a companion white dwarf. That delay allowed researchers to calculate the mass of the pulsar. Image: B. Saxton, NRAO/AUI/NSF. Astronomers have found the most massive neutron star yet discovered: a rapidly rotating pulsar in lockstep orbit with a white dwarf, cramming 2.17 solar masses into a city-size sphere just 30 kilometres (18.6 miles) across. The pulsar appears to be close to the tipping point between matter's ability to resist the crush of gravity versus collapse into a black hole. "Neutron stars are as mysterious as they are fascinating," said Thankful Cromartie, a graduate student at the University of Virginia and a pre-doctoral fellow at the National Radio Astronomy Observatory in Charlottesville, Virginia. She is first author of a paper accepted by Nature Astronomy. "These city-sized objects are essentially ginormous atomic nuclei. They are so massive that their interiors take on weird properties. Finding the maximum mass that physics and nature will allow can teach us a great deal about this otherwise inaccessible realm in astrophysics." Neutron stars and their fast-spinning cousins – pulsars – are formed in supernova explosions ...
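The delay the article describes is the Shapiro delay. A rough sketch of its scale, assuming the standard circular-orbit expression dt = -(2·G·m_c/c³)·ln(1 - sin i · cos φ); the companion mass, inclination, and orbital phase below are illustrative numbers, not the published fit for this system.

```python
import math

# Hedged sketch of the Shapiro-delay scaling: pulses grazing a companion of
# mass m_c are delayed by roughly dt = -(2*G*m_c/c^3) * ln(1 - sin(i)*cos(phi)).
# The parameter values chosen below are illustrative, not measured values.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def shapiro_delay(m_companion_kg, inclination_deg, orbital_phase_deg):
    s = math.sin(math.radians(inclination_deg))
    return -(2 * G * m_companion_kg / c**3) * math.log(
        1 - s * math.cos(math.radians(orbital_phase_deg)))

# Near-edge-on orbit (i ~ 89 deg), hypothetical ~0.3 solar-mass white dwarf:
dt = shapiro_delay(0.3 * M_SUN, 89.0, 5.0)
print(f"delay ~ {dt * 1e6:.1f} microseconds")
```

Because the delay scales directly with the companion mass, timing the pulses through many orbits pins down that mass, and Kepler's laws then yield the pulsar's.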
Moon phase on 11 September 2085 (Tuesday): Last Quarter, 22 days old; the Moon is in ♊ Gemini. Seen from Earth, the illuminated fraction of the Moon's surface is 44% and getting smaller. The exact moment of this Last Quarter phase is 10 September 2085 at 21:07 UTC. The Moon rises at midnight and sets at noon; it is visible to the south in the morning. The lunar disc appears visually 4.4% narrower than the solar disc: the Moon's and Sun's apparent angular diameters are ∠1824″ and ∠1907″. The next Full Moon is the Hunter Moon of October 2085, 21 days later on 3 October 2085 at 08:53. There is a low ocean tide on this date: the Sun's and Moon's gravitational forces are not aligned, but meet at a large angle, so their combined tidal force is weak. The Moon is 22 days old, and Earth's natural satellite is moving through the last part of the current synodic month. This is lunation 1059 of the Meeus index, or 2012 of the Brown series. The length of lunation 1059 is 29 days, 15 hours and 56 minutes -- the year's longest synodic month of 2085, 3 minutes longer than the next lunation, 1060. The current synodic month is 3 hours and 12 minutes longer than the mean synodic month, but still 3 hours and 51 minutes shorter than the 21st century's longest. This lunation's true anomaly is ∠180.2°; at the beginning of the next synodic month it will be ∠204.1°. The lengths of upcoming synodic months will keep decreasing as the true anomaly approaches the value for a New Moon at perigee (∠0° or ∠360°). This date is 7 days after the point of perigee of 3 September 2085 at 13:43 in ♒ Aquarius. The lunar orbit is getting wider as the Moon moves outward from Earth; it will keep this direction for the next 5 days, until it reaches the next apogee on 16 September 2085 at 12:30 in ♌ Leo. The Moon is 392 977 km (244 185 mi) away from Earth on this date.
The Moon moves farther out for the next 5 days until apogee, when the Earth-Moon distance will reach 406 253 km (252 434 mi). The Moon is at its descending node in ♊ Gemini at 15:17 on this date, crossing the ecliptic from North to South; it will follow the southern part of its orbit for the next 14 days, meeting the ascending node on 26 September 2085 at 04:15 in ♐ Sagittarius. Twelve days after the beginning of the current draconic month in ♐ Sagittarius, the Moon is moving from the middle to the last part of it. One day earlier, at the North standstill of 10 September 2085 at 21:10 in ♊ Gemini, the Moon reached a northern declination of ∠23.764°; over the next 14 days the lunar orbit moves southward to reach a South declination of ∠−23.613° at the next southern standstill on 25 September 2085 at 14:25 in ♐ Sagittarius. After 7 days, on 19 September 2085 at 01:07 in ♍ Virgo, the Moon will be in New Moon geocentric conjunction with the Sun, an alignment that forms the next Sun-Moon-Earth syzygy.
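The 44% illumination quoted earlier follows from simple geometry. A minimal sketch, assuming the standard relation k = (1 + cos(phase angle))/2, where the phase angle is the Sun-Moon-Earth angle; the 97° figure below is an illustrative value chosen to reproduce 44%, not taken from the ephemeris.

```python
import math

# Hedged sketch: the illuminated fraction of the lunar disc follows
#   k = (1 + cos(phase_angle)) / 2,
# where the phase angle is the Sun-Moon-Earth angle. At Last Quarter the
# phase angle is ~90 deg, so k ~ 0.5; a phase angle a few degrees larger
# (the Moon here is 22 days old, just past quarter) gives the quoted 44%.

def illuminated_fraction(phase_angle_deg):
    return (1 + math.cos(math.radians(phase_angle_deg))) / 2

print(illuminated_fraction(90.0))   # Last Quarter, exactly half lit
print(illuminated_fraction(97.0))   # roughly 44% illuminated
```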
A team of international astrophysicists led by ANU has shown how most of the antimatter in the Milky Way forms. Antimatter is material composed of the antiparticle partners of ordinary matter - when antimatter meets with matter, they quickly annihilate each other to form a burst of energy in the form of gamma-rays. Scientists have known since the early 1970s that the inner parts of the Milky Way galaxy are a strong source of gamma-rays, indicating the existence of antimatter, but there had been no settled view on where the antimatter came from. ANU researcher Dr Roland Crocker said the team had shown that the cause was a series of weak supernova explosions over millions of years, each created by the convergence of two white dwarfs which are ultra-compact remnants of stars no larger than two suns. "Our research provides new insight into a part of the Milky Way where we find some of the oldest stars in our galaxy," said Dr Crocker from the ANU Research School of Astronomy and Astrophysics. Dr Crocker said the team had ruled out the supermassive black hole at the centre of the Milky Way and the still-mysterious dark matter as being the sources of the antimatter. He said the antimatter came from a system where two white dwarfs form a binary system and collide with each other. The smaller of the binary stars loses mass to the larger star and ends its life as a helium white dwarf, while the larger star ends as a carbon-oxygen white dwarf. "The binary system is granted one final moment of extreme drama: as the white dwarfs orbit each other, the system loses energy to gravitational waves causing them to spiral closer and closer to each other," Dr Crocker said. He said once they became too close the carbon-oxygen white dwarf ripped apart the companion star whose helium quickly formed a dense shell covering the bigger star, quickly leading to a thermonuclear supernova that was the source of the antimatter. The research is published in Nature Astronomy.
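The gamma-ray signature mentioned above comes from annihilation energy. As a hedged aside (the arithmetic below is standard physics, not part of the ANU study): when a positron meets an electron, the pair's rest-mass energy E = mc² emerges as two photons of about 511 keV each, the line that flags antimatter in the inner Milky Way.

```python
# Hedged aside: rest-mass energy of one electron (or positron), E = m c^2,
# expressed in keV -- the characteristic annihilation gamma-ray energy.

M_E = 9.109e-31      # electron mass, kg
C = 2.998e8          # speed of light, m/s
EV = 1.602e-19       # joules per electron-volt

photon_energy_kev = M_E * C**2 / EV / 1e3
print(f"{photon_energy_kev:.0f} keV per photon")
```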
The IceCube Neutrino Observatory is a particle detector at the South Pole that records the interactions of a nearly massless subatomic particle called the neutrino. IceCube searches for neutrinos from the most violent astrophysical sources: events like exploding stars, gamma-ray bursts, and cataclysmic phenomena involving black holes and neutron stars. In 2013, IceCube made headlines with its breakthrough discovery of neutrinos of astrophysical origin. IceCube continues its mission to understand the sources of these mysterious messengers from outer space. This includes a vigorous multiwavelength program supported by IceCube members who also do research in gamma-ray astronomy. IceCube also studies the neutrino itself. IceCube and its low-energy extension DeepCore cover an energy range from about ten GeV up to the PeV scale. IceCube measures the atmospheric neutrino spectrum from the lower energies, where neutrinos oscillate, to energies as large as 100 TeV, with statistics of more than 100,000 events per year. It measures the atmospheric oscillation parameters in an energy range that exceeds existing data by an order of magnitude, thus opening a new window on neutrino physics. Other priorities include the observation of the next Galactic supernova and the search for dark matter. Multiple search strategies for dark matter have resulted in world-best limits on the interaction cross section of dark matter with ordinary matter in large classes of models.
- Characterization of cosmic neutrino flux - Neutrino particle physics from atmospheric neutrino oscillations; search for sterile neutrinos and neutrino physics beyond the Standard Model - Origin of cosmic rays, measurements of spectrum and composition in the energy region between the “knee” and the “ankle” in the spectrum - Detection of neutrino bursts from galactic supernovae - Search for dark matter - Development of a next-generation neutrino telescope WIPAC has the primary responsibility for IceCube maintenance and operations through a cooperative agreement with NSF. UW–Madison personnel work closely with the IceCube Collaboration on all aspects of detector operations and maintenance. Early on, personnel contributed substantially to sensor design, construction, and installation during IceCube construction. Major construction was completed in December 2010, and regular operation with the full detector started in May 2011. Recent scientific analyses have focused on point sources and the search for (and discovery of) a diffuse flux of cosmic neutrinos, the all-flavor atmospheric neutrino flux, supernova detection, cosmic-ray anisotropies, neutrino oscillations, sterile neutrinos, and PeV gamma rays. Personnel also contribute to both detector maintenance and operations and to data analysis. These efforts include simulations, modeling ice properties, laboratory characterization of the optical sensors, and creating and improving tools that have increased the sensitivity of the instrument each year. We are also involved in the optimization of drilling and the enhancement of the digital optical sensors and data systems for a next-generation detector.
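The atmospheric oscillations mentioned above follow, to a good approximation, the standard two-flavor survival formula P = sin²(2θ)·sin²(1.27·Δm²·L/E). A minimal sketch with typical atmospheric-sector parameters (the values below are illustrative, not IceCube's published fit):

```python
import math

# Hedged sketch of the two-flavor oscillation formula:
#   P(nu_mu -> nu_tau) = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
# with dm2 in eV^2, baseline L in km, energy E in GeV. The parameter values
# are typical atmospheric-sector numbers chosen for illustration.

def osc_prob(E_gev, L_km, dm2_ev2=2.5e-3, sin2_2theta=1.0):
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_gev) ** 2

# Up-going atmospheric neutrinos crossing the full Earth (L ~ 12,700 km):
for E in (10.0, 25.0, 100.0):
    print(E, round(osc_prob(E, 12700.0), 3))
```

The oscillation phase scales as L/E, which is why Earth-crossing neutrinos in the ten-GeV range (DeepCore's window) show strong oscillation effects while 100 TeV astrophysical events do not resolve them.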
Select Publications
- Searches for Sterile Neutrinos with the IceCube Detector, Physical Review Letters 117 (2016) 071801; e-print arXiv:1605.01990 [astro-ph.HE]
- Evidence for Astrophysical Muon Neutrinos from the Northern Sky with IceCube, Physical Review Letters 115 (2015) 081102; e-print arXiv:1507.04005 [astro-ph.HE]
- Evidence for High-Energy Extraterrestrial Neutrinos at the IceCube Detector, Science 342 (2013) 1242856, 22 November 2013; DOI: 10.1126/science.1242856
- First Observation of PeV-energy Neutrinos with IceCube, Physical Review Letters 111 (2013) 021103; e-print arXiv:1304.5356 [astro-ph.HE]
- An Absence of Neutrinos Associated with Cosmic-Ray Acceleration in Gamma-Ray Bursts, Nature 484 (2012) 351-354, 19 April 2012
This column first ran in The Tablet in July 2007; we first ran it here in 2015. In Alicante, on Spain’s Mediterranean coast, a group of us planetary astronomers held a workshop [in 2007] on how asteroids respond to the massive collisions that can lead to their catastrophic disruption. Just north of us, in Valencia, sailors from Switzerland and New Zealand were vying for the America’s Cup. The connection between elegant million-dollar yachts and exploding asteroids? The equations of fluid dynamics. I’ve loved sailing since my childhood. I spent my summers capsizing sailboards on Lake Huron and my winters reading too much Arthur Ransome. As a student in the early 1970s I competed on MIT’s sailing team (the Charles River was indeed “dirty water” especially back then), and attended lectures in their ocean engineering department on the challenges of designing the best shape for a hull that could slip through the water with a minimum of friction while still providing the resistance to leeward slip that lets a sailboat claw its way into the wind. The America’s Cup has long been powered by design advances, and intrigue, as teams compete to keep their own secrets while spying out the advances of their opponents. There are rivalries in the planetary science community as well, but our field has been more cooperative than the competitive world of sailing. The [July 2007] workshop centered around comparing different computer codes for modeling what happens when two asteroids, perhaps a hundred kilometers across, collide at speeds approaching a hundred thousand kilometers per hour. No experiment can reproduce such conditions, but at least in the lab we can begin to measure things like the mechanical properties of rock, and try to estimate at what point the force of gravity is more important than the strength of the rock in holding the asteroid together in the face of such collisions.
At that point, the rock flows like a fluid and computer hydrocodes can be invoked that solve the same equations used by sailboat designers – but under very different conditions. Two fascinating new results from this workshop have given us reason to question our results to date. A classic assumption has been that collisions need to reach a certain minimum energy before asteroids will break apart; but lab experiments now suggest that rock can accumulate damage over many impacts, so that (for example) after nine previous collisions, each with a tenth of the disruptive energy, an asteroid can be set up to fall apart with the tenth such impact. One could draw a moral analogy, perhaps: though a thousand venial sins don’t add up to one mortal sin, they certainly weaken your moral fibre enough to make you more susceptible to catastrophic failure when a stronger challenge comes along! Another disquieting result for the computer modelers was the confession from a long-standing guru that his code, solving the motions of a hundred thousand points inside a model asteroid, turns out not to be nearly detailed enough. He suspects now that tens of millions of points need to be followed, which requires a degree of computing power that no lab has – yet. The challenge comes from the nature of fluid flow. We know the equations; but we can’t solve them directly. Computers can only approximate them, and a slight error in our starting guesses can change the outcome completely. It’s what makes the weather so hard to predict. It’s what keeps us puzzling over the formation, and disruption, of planets. And it’s one reason, of course, why we actually need to build the boats and race them. Of course, that’s the least of the reasons. There are other intellectual puzzles, and faster ways to move through water. But the urge to know about the formation of the planets, like the urge to sail, comes ultimately from the human heart. 
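The damage-accumulation result described above can be caricatured in a toy model. This is a hedged sketch only: the linear damage law, the threshold, and the energies below are invented for illustration, not taken from the lab experiments or workshop codes.

```python
# Hedged toy model of damage accumulation: each sub-catastrophic impact chips
# away a fraction of the rock's remaining strength, so ten hits at a tenth of
# the one-shot disruption energy can still break the body. The linear damage
# law and the threshold are invented for illustration.

def survives(impact_energies, disruption_energy, tol=1e-9):
    """Linear damage accumulation: the body fails once summed fractional
    damage reaches 1 (a small tolerance absorbs floating-point error)."""
    damage = sum(e / disruption_energy for e in impact_energies)
    return damage < 1.0 - tol

E_star = 1.0e15  # J, hypothetical one-shot disruption energy

print(survives([0.1 * E_star] * 9, E_star))    # nine sub-critical hits: True
print(survives([0.1 * E_star] * 10, E_star))   # the tenth one breaks it: False
```

Real damage mechanics is of course nonlinear and history-dependent; the point of the sketch is only that sub-threshold impacts need not be harmless.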
Success in both comes from human judgments as much as from clever calculations. And the race is won by whoever best approaches the pattern of the One who gave us sea and sky. I don't get to do much sailing nowadays...
In geography, latitude is a geographic coordinate that specifies the north–south position of a point on the Earth's surface. Latitude is an angle (defined below) which ranges from 0° at the Equator to 90° (North or South) at the poles. Lines of constant latitude, or parallels, run east–west as circles parallel to the equator. Latitude is used together with longitude to specify the precise location of features on the surface of the Earth. On its own, the term latitude should be taken to be the geodetic latitude as defined below. Briefly, geodetic latitude at a point is the angle formed by the vector perpendicular (or normal) to the ellipsoidal surface from that point, and the equatorial plane. Also defined are six auxiliary latitudes which are used in special applications. Two levels of abstraction are employed in the definition of latitude and longitude. In the first step the physical surface is modeled by the geoid, a surface which approximates the mean sea level over the oceans and its continuation under the land masses. The second step is to approximate the geoid by a mathematically simpler reference surface. The simplest choice for the reference surface is a sphere, but the geoid is more accurately modeled by an ellipsoid. The definitions of latitude and longitude on such reference surfaces are detailed in the following sections. Lines of constant latitude and longitude together constitute a graticule on the reference surface. The latitude of a point on the actual surface is that of the corresponding point on the reference surface, the correspondence being along the normal to the reference surface which passes through the point on the physical surface. 
Latitude and longitude together with some specification of height constitute a geographic coordinate system as defined in the specification of the ISO 19111 standard.[a] Since there are many different reference ellipsoids, the precise latitude of a feature on the surface is not unique: this is stressed in the ISO standard which states that "without the full specification of the coordinate reference system, coordinates (that is latitude and longitude) are ambiguous at best and meaningless at worst". This is of great importance in accurate applications, such as a Global Positioning System (GPS), but in common usage, where high accuracy is not required, the reference ellipsoid is not usually stated. In English texts the latitude angle, defined below, is usually denoted by the Greek lower-case letter phi (φ or ϕ). It is measured in degrees, minutes and seconds or decimal degrees, north or south of the equator. For navigational purposes positions are given in degrees and decimal minutes. For instance, The Needles lighthouse is at 50° 39.734′N 001° 35.500′W. The precise measurement of latitude requires an understanding of the gravitational field of the Earth, either to set up theodolites or to determine GPS satellite orbits. The study of the figure of the Earth together with its gravitational field is the science of geodesy. This article relates to coordinate systems for the Earth: it may be extended to cover the Moon, planets and other celestial objects by a simple change of nomenclature. Latitude on the sphere The graticule on the sphere The graticule is formed by the lines of constant latitude and constant longitude, which are constructed with reference to the rotation axis of the Earth. The primary reference points are the poles where the axis of rotation of the Earth intersects the reference surface.
Planes which contain the rotation axis intersect the surface at the meridians; and the angle between any one meridian plane and that through Greenwich (the Prime Meridian) defines the longitude: meridians are lines of constant longitude. The plane through the centre of the Earth and perpendicular to the rotation axis intersects the surface at a great circle called the Equator. Planes parallel to the equatorial plane intersect the surface in circles of constant latitude; these are the parallels. The Equator has a latitude of 0°, the North Pole has a latitude of 90° North (written 90° N or +90°), and the South Pole has a latitude of 90° South (written 90° S or −90°). The latitude of an arbitrary point is the angle between the equatorial plane and the normal to the surface at that point: the normal to the surface of the sphere is along the radius vector. The latitude, as defined in this way for the sphere, is often termed the spherical latitude, to avoid ambiguity with the geodetic latitude and the auxiliary latitudes defined in subsequent sections of this article. Named latitudes on the Earth Besides the equator, four other parallels are of significance:
- Arctic Circle: 66° 34′ (66.57°) N
- Tropic of Cancer: 23° 26′ (23.43°) N
- Tropic of Capricorn: 23° 26′ (23.43°) S
- Antarctic Circle: 66° 34′ (66.57°) S
The plane of the Earth's orbit about the Sun is called the ecliptic, and the plane perpendicular to the rotation axis of the Earth is the equatorial plane. The angle between the ecliptic and the equatorial plane is called variously the axial tilt, the obliquity, or the inclination of the ecliptic, and it is conventionally denoted by i. The latitude of the tropical circles is equal to i and the latitude of the polar circles is its complement (90° − i). The axis of rotation varies slowly over time and the values given here are those for the current epoch.
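The relation just stated (tropics at latitude i, polar circles at 90° − i) can be checked in two lines:

```python
# Small check of the relation stated above: the tropics sit at latitude i
# (the obliquity) and the polar circles at its complement, 90 - i.
# i = 23.43 deg is the current-epoch value quoted in the text.

i = 23.43  # obliquity of the ecliptic, degrees

tropic_of_cancer = i
arctic_circle = 90.0 - i
print(tropic_of_cancer, arctic_circle)
```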
The time variation is discussed more fully in the article on axial tilt.[b] The figure shows the geometry of a cross-section of the plane perpendicular to the ecliptic and through the centres of the Earth and the Sun at the December solstice when the Sun is overhead at some point of the Tropic of Capricorn. The south polar latitudes below the Antarctic Circle are in daylight, whilst the north polar latitudes above the Arctic Circle are in night. The situation is reversed at the June solstice, when the Sun is overhead at the Tropic of Cancer. Only at latitudes in between the two tropics is it possible for the Sun to be directly overhead (at the zenith). On map projections there is no universal rule as to how meridians and parallels should appear. The examples below show the named parallels (as red lines) on the commonly used Mercator projection and the Transverse Mercator projection. On the former the parallels are horizontal and the meridians are vertical, whereas on the latter there is no exact relationship of parallels and meridians with horizontal and vertical: both are complicated curves. Meridian distance on the sphere On the sphere the normal passes through the centre and the latitude (φ) is therefore equal to the angle subtended at the centre by the meridian arc from the equator to the point concerned. If the meridian distance is denoted by m(φ), then m(φ) = Rφ (with φ in radians), where R denotes the mean radius of the Earth. R is equal to 6,371 km or 3,959 miles. No higher accuracy is appropriate for R since higher-precision results necessitate an ellipsoid model. With this value for R the meridian length of 1 degree of latitude on the sphere is 111.2 km (69.1 statute miles) (60.0 nautical miles). The length of 1 minute of latitude is 1.853 km (1.151 statute miles) (1.00 nautical miles), while the length of 1 second of latitude is 30.8 m or 101 feet (see nautical mile).
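The spherical figures quoted above follow directly from m(φ) = Rφ with R = 6371 km:

```python
import math

# Reproducing the spherical-Earth figures quoted above: with R = 6371 km, the
# meridian arc lengths for 1 degree, 1 minute, and 1 second of latitude.

R = 6371.0  # mean Earth radius, km

deg = R * math.pi / 180          # km per degree of latitude
minute = deg / 60                # km per minute of latitude
second = minute / 60 * 1000      # metres per second of latitude
print(round(deg, 1), round(minute, 3), round(second, 1))
```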
Latitude on the ellipsoid In 1687 Isaac Newton published the Philosophiæ Naturalis Principia Mathematica, in which he proved that a rotating self-gravitating fluid body in equilibrium takes the form of an oblate ellipsoid. (This article uses the term ellipsoid in preference to the older term spheroid.) Newton's result was confirmed by geodetic measurements in the 18th century. (See Meridian arc.) An oblate ellipsoid is the three-dimensional surface generated by the rotation of an ellipse about its shorter axis (minor axis). "Oblate ellipsoid of revolution" is abbreviated to 'ellipsoid' in the remainder of this article. (Ellipsoids which do not have an axis of symmetry are termed triaxial.) Many different reference ellipsoids have been used in the history of geodesy. In pre-satellite days they were devised to give a good fit to the geoid over the limited area of a survey but, with the advent of GPS, it has become natural to use reference ellipsoids (such as WGS84) with centre at the centre of mass of the Earth and minor axis aligned to the rotation axis of the Earth. These geocentric ellipsoids are usually within 100 m (330 ft) of the geoid. Since latitude is defined with respect to an ellipsoid, the position of a given point is different on each ellipsoid: one cannot exactly specify the latitude and longitude of a geographical feature without specifying the ellipsoid used. Many maps maintained by national agencies are based on older ellipsoids, so one must know how the latitude and longitude values are transformed from one ellipsoid to another. GPS handsets include software to carry out datum transformations which link WGS84 to the local reference ellipsoid with its associated grid. The geometry of the ellipsoid The shape of an ellipsoid of revolution is determined by the shape of the ellipse which is rotated about its minor (shorter) axis. Two parameters are required. One is invariably the equatorial radius, which is the semi-major axis, a. 
The other parameter is usually (1) the polar radius or semi-minor axis, b; or (2) the (first) flattening, f; or (3) the eccentricity, e. These parameters are not independent: they are related by simple identities such as f = (a − b)/a and e² = f(2 − f). Many other parameters (see ellipse, ellipsoid) appear in the study of geodesy, geophysics and map projections but they can all be expressed in terms of one or two members of the set a, b, f and e. Both f and e are small and often appear in series expansions in calculations; they are of the order 1/300 and 0.0818 respectively. Values for a number of ellipsoids are given in Figure of the Earth. Reference ellipsoids are usually defined by the semi-major axis and the inverse flattening, 1/f. For example, the defining values for the WGS84 ellipsoid, used by all GPS devices, are - a (equatorial radius): 6378137.0 m exactly - 1/f (inverse flattening): 298.257223563 exactly from which are derived - b (polar radius): 6356752.3142 m - e2 (eccentricity squared): 0.00669437999014 The difference between the semi-major and semi-minor axes is about 21 km (13 miles) and as a fraction of the semi-major axis it equals the flattening; on a computer monitor the ellipsoid could be sized as 300 by 299 pixels. This would barely be distinguishable from a 300-by-300-pixel sphere, so illustrations usually exaggerate the flattening. Geodetic and geocentric latitudes The graticule on the ellipsoid is constructed in exactly the same way as on the sphere. The normal at a point on the surface of an ellipsoid does not pass through the centre, except for points on the equator or at the poles, but the definition of latitude remains unchanged as the angle between the normal and the equatorial plane. The terminology for latitude must be made more precise by distinguishing: - Geodetic latitude: the angle between the normal and the equatorial plane. The standard notation in English publications is φ. This is the definition assumed when the word latitude is used without qualification.
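The derived WGS84 values quoted above can be reproduced from the two defining constants using b = a(1 − f) and e² = f(2 − f):

```python
# The parameter relations described above, checked against the WGS84 defining
# constants quoted in the text: b = a*(1 - f) and e^2 = f*(2 - f).

a = 6378137.0               # semi-major axis, m (WGS84, defined exactly)
inv_f = 298.257223563       # inverse flattening (WGS84, defined exactly)

f = 1.0 / inv_f
b = a * (1.0 - f)           # polar radius, ~6356752.3142 m
e2 = f * (2.0 - f)          # first eccentricity squared, ~0.00669437999014

print(round(b, 4), round(e2, 14))
```

This is why reference ellipsoids list only a and 1/f: everything else is derivable.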
The definition must be accompanied by a specification of the ellipsoid. - Geocentric latitude: the angle between the radius (from centre to the point on the surface) and the equatorial plane. (Figure below). There is no standard notation: examples from various texts include θ, ψ, q, φ′, φc, φg. This article uses θ. - Spherical latitude: the angle between the normal to a spherical reference surface and the equatorial plane. - Geographic latitude must be used with care. Some authors use it as a synonym for geodetic latitude whilst others use it as an alternative to the astronomical latitude. - Latitude (unqualified) should normally refer to the geodetic latitude. The importance of specifying the reference datum may be illustrated by a simple example. On the reference ellipsoid for WGS84, the centre of the Eiffel Tower has a geodetic latitude of 48° 51′ 29″ N, or 48.8583° N and longitude of 2° 17′ 40″ E or 2.2944°E. The same coordinates on the datum ED50 define a point on the ground which is 140 metres (460 feet) distant from the tower. A web search may produce several different values for the latitude of the tower; the reference ellipsoid is rarely specified. Length of a degree of latitude The meridian distance from the equator to latitude φ is m(φ) = ∫₀^φ M(φ′) dφ′, where M(φ) is the meridional radius of curvature. The distance from the equator to the pole is m(π/2); for WGS84 this distance is 10001.965729 km. The evaluation of the meridian distance integral is central to many studies in geodesy and map projection. It can be evaluated by expanding the integral by the binomial series and integrating term by term: see Meridian arc for details. The length of the meridian arc between two given latitudes is given by replacing the limits of the integral by the latitudes concerned.
The length of a small meridian arc is given by δm(φ) = M(φ) δφ. The table below shows how the lengths of a degree of latitude and a degree of longitude vary with latitude:

Latitude   1° of latitude   1° of longitude
0°         110.574 km       111.320 km
15°        110.649 km       107.550 km
30°        110.852 km       96.486 km
45°        111.132 km       78.847 km
60°        111.412 km       55.800 km
75°        111.618 km       28.902 km
90°        111.694 km       0.000 km

When the latitude difference is 1 degree, corresponding to π/180 radians, the arc distance is about 111 km. The distance in metres (correct to 0.01 metre) between latitudes φ − 0.5 degrees and φ + 0.5 degrees on the WGS84 spheroid is approximately 111132.954 − 559.822 cos 2φ + 1.175 cos 4φ. A graph in the original article illustrates the variation of both a degree of latitude and a degree of longitude with latitude. The nautical mile Historically a nautical mile was defined as the length of one minute of arc along a meridian of a spherical earth. An ellipsoid model leads to a variation of the nautical mile with latitude. This was resolved by defining the nautical mile to be exactly 1,852 metres. However, for all practical purposes distances are measured from the latitude scale of charts. As the Royal Yachting Association says in its manual for day skippers: "1 (minute) of Latitude = 1 sea mile", followed by "For most practical purposes distance is measured from the latitude scale, assuming that one minute of latitude equals one nautical mile". There are six auxiliary latitudes that have applications to special problems in geodesy, geophysics and the theory of map projections: - Geocentric latitude - Parametric (or reduced) latitude - Rectifying latitude - Authalic latitude - Conformal latitude - Isometric latitude The definitions given in this section all relate to locations on the reference ellipsoid but the first two auxiliary latitudes, like the geodetic latitude, can be extended to define a three-dimensional geographic coordinate system as discussed below. The remaining latitudes are not used in this way; they are used only as intermediate constructs in map projections of the reference ellipsoid to the plane or in calculations of geodesics on the ellipsoid.
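The tabulated degree lengths can be checked against a commonly quoted WGS84 series, 111132.954 − 559.822 cos 2φ + 1.175 cos 4φ metres (stated here as the standard approximation; the coefficients are assumptions from the usual reference, not derived in the text):

```python
import math

# Hedged check of the tabulated degree-of-latitude values against the standard
# WGS84 approximation (metres per degree of latitude centred on phi):
#   111132.954 - 559.822*cos(2*phi) + 1.175*cos(4*phi)

def degree_of_latitude_m(phi_deg):
    p = math.radians(phi_deg)
    return 111132.954 - 559.822 * math.cos(2 * p) + 1.175 * math.cos(4 * p)

for phi in (0, 45, 90):
    print(phi, round(degree_of_latitude_m(phi) / 1000, 3))  # km
```

A degree of latitude is slightly longer near the poles than at the equator because the flattened ellipsoid is least curved there, so one must travel farther to turn the normal through one degree.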
Their numerical values are not of interest. For example, no one would need to calculate the authalic latitude of the Eiffel Tower. The expressions below give the auxiliary latitudes in terms of the geodetic latitude, the semi-major axis, a, and the eccentricity, e. (For inverses see below.) The forms given are, apart from notational variants, those in the standard reference for map projections, namely "Map projections: a working manual" by J. P. Snyder. Derivations of these expressions may be found in Adams and online publications by Osborne and Rapp.
Geocentric latitude
The geocentric latitude is the angle between the equatorial plane and the radius from the centre to a point on the surface. The relation between the geocentric latitude (θ) and the geodetic latitude (φ) is derived in the above references as
  tan θ = (1 − e²) tan φ.
The geodetic and geocentric latitudes are equal at the equator and at the poles but at other latitudes they differ by a few minutes of arc. Taking the value of the squared eccentricity as 0.0067 (it depends on the choice of ellipsoid), the maximum difference φ − θ may be shown to be about 11.5 minutes of arc at a geodetic latitude of approximately 45° 6′.[c]
Parametric (or reduced) latitude
The parametric or reduced latitude, β, is defined by the radius drawn from the centre of the ellipsoid to that point Q on the surrounding sphere (of radius a) which is the projection parallel to the Earth's axis of a point P on the ellipsoid at latitude φ. It was introduced by Legendre and Bessel who solved problems for geodesics on the ellipsoid by transforming them to an equivalent problem for spherical geodesics by using this smaller latitude. Bessel's notation, u(φ), is also used in the current literature. The parametric latitude is related to the geodetic latitude by:
  tan β = √(1 − e²) tan φ.
The alternative name arises from the parameterization of the equation of the ellipse describing a meridian section.
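The quoted maximum can be verified numerically from the standard relation tan θ = (1 − e²) tan φ between geocentric and geodetic latitude. A sketch, using the rounded value e² = 0.0067 given in the text:

```python
import math

e2 = 0.0067  # squared eccentricity, the rounded value quoted in the text

def geocentric(phi):
    """Geocentric latitude theta from geodetic latitude phi (radians)."""
    return math.atan((1 - e2) * math.tan(phi))

# Scan geodetic latitudes in 0.001-degree steps for the maximum of phi - theta
best_phi, best_diff = max(
    ((phi, phi - geocentric(phi))
     for phi in (math.radians(d / 1000) for d in range(1, 90000))),
    key=lambda pair: pair[1])

print(f"max difference {math.degrees(best_diff) * 60:.2f} arcmin "
      f"at {math.degrees(best_phi):.3f} deg")  # about 11.5 arcmin near 45.1 deg
```

A brute-force scan is crude but adequate here; the footnote's suggestion of differentiation gives the same answer analytically (the maximum falls where tan φ = (1 − e²)^(−1/2)).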
In terms of Cartesian coordinates p, the distance from the minor axis, and z, the distance above the equatorial plane, the equation of the ellipse is:
  p²/a² + z²/b² = 1.
The Cartesian coordinates of the point are parameterized by
  p = a cos β,  z = b sin β.
Cayley suggested the term parametric latitude because of the form of these equations.
Rectifying latitude
The rectifying latitude, μ, is the meridian distance scaled so that its value at the poles is equal to 90 degrees or π/2 radians:
  μ(φ) = (π/2) m(φ)/m_p,
where the meridian distance from the equator to a latitude φ is m(φ) (see Meridian arc) and the length of the meridian quadrant from the equator to the pole (the polar distance) is m_p = m(π/2). Using the rectifying latitude to define a latitude on a sphere of radius R = 2m_p/π defines a projection from the ellipsoid to the sphere such that all meridians have true length and uniform scale. The sphere may then be projected to the plane with an equirectangular projection to give a double projection from the ellipsoid to the plane such that all meridians have true length and uniform meridian scale. An example of the use of the rectifying latitude is the equidistant conic projection (Snyder, Section 16). The rectifying latitude is also of great importance in the construction of the Transverse Mercator projection.
Authalic latitude
The authalic (Greek for "same area") latitude, ξ, gives an area-preserving transformation to a sphere; the radius of the sphere is chosen so that its surface area equals that of the ellipsoid.
Conformal latitude
The conformal latitude, χ, gives an angle-preserving (conformal) transformation to the sphere. The conformal latitude defines a transformation from the ellipsoid to a sphere of arbitrary radius such that the angle of intersection between any two lines on the ellipsoid is the same as the corresponding angle on the sphere (so that the shape of small elements is well preserved). A further conformal transformation from the sphere to the plane gives a conformal double projection from the ellipsoid to the plane. This is not the only way of generating such a conformal projection.
For example, the 'exact' version of the Transverse Mercator projection on the ellipsoid is not a double projection. (It does, however, involve a generalisation of the conformal latitude to the complex plane.)
Isometric latitude
The isometric latitude, ψ, is used in the development of the ellipsoidal versions of the normal Mercator projection and the Transverse Mercator projection. The name "isometric" arises from the fact that at any point on the ellipsoid equal increments of ψ and longitude λ give rise to equal distance displacements along the meridians and parallels respectively. The graticule defined by the lines of constant ψ and constant λ divides the surface of the ellipsoid into a mesh of squares (of varying size). The isometric latitude is zero at the equator but rapidly diverges from the geodetic latitude, tending to infinity at the poles. The conventional notation is given in Snyder (page 15):
  ψ(φ) = ln tan(π/4 + φ/2) + (e/2) ln[(1 − e sin φ)/(1 + e sin φ)].
For the normal Mercator projection (on the ellipsoid) this function defines the spacing of the parallels: if the length of the equator on the projection is E (units of length or pixels) then the distance, y, of a parallel of latitude φ from the equator is
  y(φ) = (E/2π) ψ(φ).
The isometric latitude ψ is closely related to the conformal latitude χ:
  ψ(φ) = ln tan(π/4 + χ/2).
Inverse formulae and series
The formulae in the previous sections give the auxiliary latitude in terms of the geodetic latitude. The expressions for the geocentric and parametric latitudes may be inverted directly but this is impossible in the four remaining cases: the rectifying, authalic, conformal, and isometric latitudes. There are two methods of proceeding. The first is a numerical inversion of the defining equation for each and every particular value of the auxiliary latitude. The methods available are fixed-point iteration and Newton–Raphson root finding. The other, more useful, approach is to express the auxiliary latitude as a series in terms of the geodetic latitude and then invert the series.
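Numerical inversion by fixed-point iteration, one of the two methods just mentioned, can be illustrated on the isometric latitude. The sketch below assumes the WGS84 eccentricity and the Snyder-style definition ψ(φ) = ln tan(π/4 + φ/2) − e·artanh(e sin φ), an equivalent rewriting of the log-ratio form:

```python
import math

e = math.sqrt(0.00669438)  # WGS84 eccentricity (assumed)

def isometric(phi):
    """Isometric latitude psi(phi) on the ellipsoid."""
    return (math.log(math.tan(math.pi / 4 + phi / 2))
            - e * math.atanh(e * math.sin(phi)))

def geodetic_from_isometric(psi, tol=1e-14):
    """Invert psi -> phi by fixed-point iteration on the defining equation."""
    # Start from the spherical (e = 0) inverse, the Gudermannian function
    phi = 2 * math.atan(math.exp(psi)) - math.pi / 2
    while True:
        # Rearranged definition: tan(pi/4 + phi/2) = exp(psi + e*artanh(e sin phi))
        phi_new = (2 * math.atan(math.exp(psi + e * math.atanh(e * math.sin(phi))))
                   - math.pi / 2)
        if abs(phi_new - phi) < tol:
            return phi_new
        phi = phi_new

phi0 = math.radians(48.8583)             # Eiffel Tower latitude from the text
psi0 = isometric(phi0)
print(math.degrees(geodetic_from_isometric(psi0)))  # recovers ~48.8583
```

The iteration contracts quickly because the eccentricity correction is small (the error shrinks by a factor of order e² per step).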
Such series are presented by Adams who uses Taylor series expansions and gives coefficients in terms of the eccentricity. Osborne derives series to arbitrary order by using the computer algebra package Maxima and expresses the coefficients in terms of both eccentricity and flattening. The series method is not applicable to the isometric latitude and one must use the conformal latitude in an intermediate step.
Numerical comparison of auxiliary latitudes
The plot to the right shows the difference between the geodetic latitude and the auxiliary latitudes other than the isometric latitude (which diverges to infinity at the poles) for the case of the WGS84 ellipsoid. The differences shown on the plot are in arc minutes. In the Northern hemisphere (positive latitudes), θ ≤ χ ≤ μ ≤ ξ ≤ β ≤ φ; in the Southern hemisphere (negative latitudes), the inequalities are reversed, with equality at the equator and the poles. Although the graph appears symmetric about 45°, the minima of the curves actually lie between 45° 2′ and 45° 6′. Some representative data points are given in a table of the differences β − φ, ξ − φ, μ − φ, χ − φ and θ − φ (not reproduced here). The conformal and geocentric latitudes are nearly indistinguishable, a fact that was exploited in the days of hand calculators to expedite the construction of map projections. To first order in the flattening f, the auxiliary latitudes can be expressed as ζ = φ − C f sin 2φ, where the constant C takes on the values [1⁄2, 2⁄3, 3⁄4, 1, 1] for ζ = [β, ξ, μ, χ, θ].
Latitude and coordinate systems
The geodetic latitude, or any of the auxiliary latitudes defined on the reference ellipsoid, constitutes with longitude a two-dimensional coordinate system on that ellipsoid. To define the position of an arbitrary point it is necessary to extend such a coordinate system into three dimensions.
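The first-order rule ζ = φ − C f sin 2φ quoted above can be checked against the exact geocentric relation tan θ = (1 − e²) tan φ, for which C = 1. A sketch, assuming the WGS84 flattening f = 1/298.257223563:

```python
import math

f = 1 / 298.257223563        # WGS84 flattening (assumed)
e2 = f * (2 - f)

def theta_exact(phi):
    """Exact geocentric latitude from tan(theta) = (1 - e^2) tan(phi)."""
    return math.atan((1 - e2) * math.tan(phi))

def theta_first_order(phi):
    """First-order approximation theta ~ phi - f sin(2 phi), i.e. C = 1."""
    return phi - f * math.sin(2 * phi)

# Worst-case error of the first-order formula over 0..89 degrees
err = max(abs(theta_exact(math.radians(d)) - theta_first_order(math.radians(d)))
          for d in range(90))
print(f"{math.degrees(err) * 3600:.2f} arcsec")  # residual is O(f^2), ~2 arcsec
```

The residual of a couple of arcseconds, against a full effect of about 11.5 arcminutes, shows why the single-term rule is a useful mnemonic for all five auxiliary latitudes.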
Three latitudes are used in this way: the geodetic, geocentric and parametric latitudes are used in geodetic coordinates, spherical polar coordinates and ellipsoidal coordinates respectively.
Geodetic coordinates
At an arbitrary point P consider the line PN which is normal to the reference ellipsoid. The geodetic coordinates P(φ,λ,h) are the latitude and longitude of the point N on the ellipsoid and the distance PN. This height differs from the height above the geoid or a reference height such as that above mean sea level at a specified location. The direction of PN will also differ from the direction of a vertical plumb line. The relation of these different heights requires knowledge of the shape of the geoid and also the gravity field of the Earth.
Spherical polar coordinates
The geocentric latitude θ is the complement of the polar angle θ′ in conventional spherical polar coordinates in which the coordinates of a point are P(r,θ′,λ), where r is the distance of P from the centre O, θ′ is the angle between the radius vector and the polar axis and λ is longitude. Since the normal at a general point on the ellipsoid does not pass through the centre it is clear that points P′ on the normal, which all have the same geodetic latitude, will have differing geocentric latitudes. Spherical polar coordinate systems are used in the analysis of the gravity field.
Ellipsoidal coordinates
The parametric latitude can also be extended to a three-dimensional coordinate system. For a point P not on the reference ellipsoid (semi-axes OA and OB) construct an auxiliary ellipsoid which is confocal (same foci F, F′) with the reference ellipsoid: the necessary condition is that the product ae of semi-major axis and eccentricity is the same for both ellipsoids. Let u be the semi-minor axis (OD) of the auxiliary ellipsoid. Further let β be the parametric latitude of P on the auxiliary ellipsoid.
The set (u,β,λ) defines the ellipsoidal coordinates. These coordinates are the natural choice in models of the gravity field for a rotating ellipsoidal body. The relations between the above coordinate systems, and also Cartesian coordinates, are not presented here. The transformation between geodetic and Cartesian coordinates may be found in Geographic coordinate conversion. The relation of Cartesian and spherical polars is given in Spherical coordinate system. The relation of Cartesian and ellipsoidal coordinates is discussed in Torge.
Astronomical latitude
Astronomical latitude (Φ) is the angle between the equatorial plane and the true vertical at a point on the surface. The true vertical, the direction of a plumb line, is also the direction of the gravity acceleration, the resultant of the gravitational acceleration (mass-based) and the centrifugal acceleration at that latitude. Astronomical latitude is calculated from angles measured between the zenith and stars whose declination is accurately known. In general the true vertical at a point on the surface does not exactly coincide with either the normal to the reference ellipsoid or the normal to the geoid. The angle between the astronomic and geodetic normals is usually a few seconds of arc but it is important in geodesy. The true vertical differs from the normal to the geoid because the geoid is an idealized, theoretical shape "at mean sea level". Points on the real surface of the earth are usually above or below this idealized geoid surface, and there the true vertical can vary slightly. Also, the true vertical at a point at a specific time is influenced by tidal forces, which the theoretical geoid averages out.
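The geodetic-to-Cartesian transformation referred to above has a simple closed form. A sketch, assuming WGS84 constants and reusing the Eiffel Tower coordinates quoted earlier in the article (height taken as zero for illustration):

```python
import math

a = 6378137.0                  # WGS84 semi-major axis (assumed)
f = 1 / 298.257223563          # WGS84 flattening (assumed)
e2 = f * (2 - f)

def geodetic_to_cartesian(lat_deg, lon_deg, h=0.0):
    """Geodetic (phi, lambda, h) to Cartesian (X, Y, Z) in metres."""
    phi, lam = math.radians(lat_deg), math.radians(lon_deg)
    N = a / math.sqrt(1 - e2 * math.sin(phi) ** 2)  # prime-vertical radius
    X = (N + h) * math.cos(phi) * math.cos(lam)
    Y = (N + h) * math.cos(phi) * math.sin(lam)
    Z = (N * (1 - e2) + h) * math.sin(phi)
    return X, Y, Z

# Eiffel Tower geodetic coordinates from the text, with h = 0
X, Y, Z = geodetic_to_cartesian(48.8583, 2.2944)
print(X, Y, Z)
```

Note that the geocentric latitude recovered from (X, Y, Z), namely arcsin(Z/r), comes out smaller than the geodetic input, consistent with the few-minute θ < φ difference discussed earlier.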
Astronomical latitude is not to be confused with declination, the coordinate astronomers use in a similar way to specify the angular position of stars north/south of the celestial equator (see equatorial coordinates), nor with ecliptic latitude, the coordinate that astronomers use to specify the angular position of stars north/south of the ecliptic (see ecliptic coordinates). - Altitude (mean sea level) - Bowditch's American Practical Navigator - Cardinal direction - Circle of latitude - Declination on celestial sphere - Degree Confluence Project - Geodetic datum - Geographic coordinate system - Geographical distance - Great-circle distance - History of latitude measurements - Horse latitudes - List of countries by latitude - Natural Area Code - Orders of magnitude (length) - World Geodetic System - The current full documentation of ISO 19111 may be purchased from http://www.iso.org but drafts of the final standard are freely available at many web sites, one such is available at the following CSIRO - The value of this angle today is 23°26′11.9″ (or 23.43664°). - An elementary calculation involves differentiation to find the maximum difference of the geodetic and geocentric latitudes. - The Corporation of Trinity House (10 January 2020). "1/2020 Needles Lighthouse". Notices to Mariners. Retrieved 24 May 2020. - Newton, Isaac. "Book III Proposition XIX Problem III". Philosophiæ Naturalis Principia Mathematica. Translated by Motte, Andrew. p. 407. - National Imagery and Mapping Agency (23 June 2004). "Department of Defense World Geodetic System 1984" (PDF). National Imagery and Mapping Agency. p. 3-1. TR8350.2. Retrieved 25 April 2020. - Torge, W. (2001). Geodesy (3rd ed.). De Gruyter. ISBN 3-11-017072-8. - Osborne, Peter (2013). "Chapters 5,6". The Mercator Projections. doi:10.5281/zenodo.35392. for LaTeX code and figures. - Rapp, Richard H. (1991). "Chapter 3". Geometric Geodesy, Part I. Columbus, OH: Dept. 
of Geodetic Science and Surveying, Ohio State Univ. hdl:1811/24333. - "Length of degree calculator". National Geospatial-Intelligence Agency. Archived from the original on 2013-01-28. Retrieved 2011-02-08. - Hopkinson, Sara (2012). RYA day skipper handbook - sail. Hamble: The Royal Yachting Association. p. 76. ISBN 9781905104949. - Snyder, John P. (1987). Map Projections: A Working Manual. U.S. Geological Survey Professional Paper 1395. Washington, DC: United States Government Printing Office. Archived from the original on 2008-05-16. Retrieved 2017-09-02. - Adams, Oscar S. (1921). Latitude Developments Connected With Geodesy and Cartography (with tables, including a table for Lambert equal area meridional projection) (PDF). Special Publication No. 67. US Coast and Geodetic Survey. (Note: Adams uses the nomenclature isometric latitude for the conformal latitude of this article (and throughout the modern literature).) - Legendre, A. M. (1806). "Analyse des triangles tracés sur la surface d'un sphéroïde". Mém. Inst. Nat. Fr. 1st semester: 130–161. - Bessel, F. W. (1825). "Über die Berechnung der geographischen Längen und Breiten aus geodätischen Vermessungen". Astron. Nachr. 4 (86): 241–254. Bibcode:1825AN......4..241B. doi:10.1002/asna.18260041601. Translation: Karney, C. F. F.; Deakin, R. E. (2010). "The calculation of longitude and latitude from geodesic measurements". Astron. Nachr. 331 (8): 852–861. arXiv:0908.1824. Bibcode:2010AN....331..852K. doi:10.1002/asna.201011352. - Cayley, A. (1870). "On the geodesic lines on an oblate spheroid". Phil. Mag. 40 (4th ser): 329–340. doi:10.1080/14786447008640411. - Karney, C. F. F. (2013). "Algorithms for geodesics". J. Geodesy. 87 (1): 43–55. arXiv:1109.4448. Bibcode:2013JGeod..87...43K. doi:10.1007/s00190-012-0578-z. - "Maxima computer algebra system". Sourceforge. - Hofmann-Wellenhof, B.; Moritz, H. (2006). Physical Geodesy (2nd ed.). ISBN 3-211-33544-7. 
Astronomers have found a weird exoplanet that rains iron at night. The daytime side of this world, dubbed WASP-76 b, isn’t any less hellish, either. Temperatures can reach up to 4,300 degrees Fahrenheit (2,400 degrees Celsius), hot enough to vaporize metal. “One could say that this planet gets rainy in the evening, except it rains iron,” University of Geneva astronomer David Ehrenreich, who led the new study, said in a press release. WASP-76 b is slightly smaller than Jupiter and sits some 640 light-years from Earth in the constellation Pisces. Its horrifying weather is caused by its truly extreme orbit. Gas giant worlds like WASP-76 b are called hot Jupiters because they orbit uncomfortably close to their home stars; in this case, roughly ten times closer than Mercury is to our sun. That proximity leaves WASP-76 b “tidally locked” to its star, with one side perpetually baking in light and the other stuck in eternal darkness. WASP-76 b’s daytime side gets hit with thousands of times more radiation than Earth receives from the sun. This scorching radiation vaporizes iron on the dayside. Winds driven by the extreme temperature differences then push the metal around the planet to the nighttime hemisphere. There, far cooler temperatures let the iron condense into drops and fall as a strange rain. “Surprisingly, however, we do not see iron vapor on the other side of the planet in the morning,” University of Geneva researcher Christophe Lovis said in a media release. “The conclusion is that the iron has condensed during the night. In other words, it rains iron on the night side of this extreme exoplanet.” It is the first time astronomers have detected this kind of day-to-night chemical variation on a hot Jupiter like WASP-76 b. 
Researchers studied the planet using the European Southern Observatory’s Very Large Telescope (VLT) in Chile. Specifically, the discovery was made possible thanks to an instrument called the Echelle SPectrograph for Rocky Exoplanets and Stable Spectroscopic Observations (ESPRESSO). Astronomers originally planned to use this VLT instrument to study Earth-like planets around stars like our sun. However, they suspected that the VLT’s enormous size would also make it excellent for studying the atmospheres of other exoplanets. It turns out they were right. Their discovery of iron rain on WASP-76 b was made during ESPRESSO’s first-ever science observations. And that means there are likely many more weird worlds out there just waiting to be discovered. “What we have now is a whole new way to trace the climate of the most extreme exoplanets,” Ehrenreich said.
There’s already a strange story behind comet 41P/Tuttle-Giacobini-Kresák, or just 41P: it took almost 100 years to identify, and it occasionally flares. And now, its spin is rapidly slowing down. Scientists across the world observed 41P when it approached Earth in 2017; it was close enough and bright enough to see with binoculars. One team of scientists, from the University of Maryland, watched the comet’s rotation rate drop rapidly, from one rotation every 20 hours to one every 46 hours. This is a larger change than any yet measured in a comet’s rotation, and it could help scientists learn more about how comets evolve over time. Comets are ancient hunks of ice and dust that orbit the Sun—41P’s orbit takes around five years. Typically, things that revolve around the Sun constantly rotate, like our own Earth does, around a single axis. But comets aren’t planets. Gas streaming off of these rocks produces torque, or a rotational force. Think about popping a hole in a balloon tied to a fan blade—it could make the fan speed up or slow down. Torque that induces a spin speedup can cause the comet to fall apart, but slowdowns can cause the entire comet to change its spinning direction. Imagine if one day, the Earth started rotating with a different axis; entire seasons would be different, there would be more or less sunlight depending on where you lived—things would be weird. The researchers used telescopes at Lowell Observatory in Arizona and a telescope on the Swift satellite to watch the comet and its stream of gas. The extreme slowdown could have been due to outgassing in a sort-of sweet spot, like hitting a tether-ball in the exact right place to slow down its spin. Their modeling also predicted that the comet could have slowed down by yet another factor of two, to a period of 100 hours, and would have been spinning much faster during earlier appearances near Earth. They published their research in Nature today. 
Jessica Agarwal, researcher from the Max Planck Institute for Solar System Research, points out that this means a few things: one, the comet might have once spun much faster, explaining some bright outbursts other astronomers have seen in past visits, according to a commentary also published in Nature. On the other hand, if the comet continues to slow, it may soon wobble like a top and spin in another direction—exposing different parts to different levels of heating from the sun. She notes that observations from 2017 to its next visit in 2022 “could document this yet-to-be-seen phase of cometary evolution, and reveal valuable information about the nature of comets and other planetary bodies.” In other words, maybe other comets act weird like this, too. Scientists just haven’t observed them, yet.
The atmosphere of Mars is less than 1% of Earth’s, so it does not protect the planet from the Sun’s radiation nor does it do much to retain heat at the surface. It consists of 95% carbon dioxide, 3% nitrogen, 1.6% argon, and the remainder is trace amounts of oxygen, water vapor, and other gases. Also, it is constantly filled with small particles of dust (mainly iron oxide), which give Mars its reddish hue. Scientists believe that the atmosphere of Mars is so negligible because the planet lost its magnetosphere about 4 billion years ago. A magnetosphere would channel the solar wind around the planet. Without one, the solar wind interacts directly with the ionosphere, stripping away atoms and lowering the density of the atmosphere. These ionized particles have been detected by multiple spacecraft as they trail off into space behind Mars. As a result, the surface atmospheric pressure can be as low as 30 Pa (pascals), with an average of 600 Pa, compared to Earth’s average of 101,300 Pa. The atmosphere extends to about 10.8 km, about 4 km farther than Earth’s. This is possible because the planet’s gravity is weaker and does not hold the atmosphere as tightly. A relatively large amount of methane has been found in the atmosphere of Mars, an unexpected find at a concentration of 30 ppb. The methane occurs in large plumes in different areas of the planet, which suggests that it was released in those general areas. Data seem to suggest that there are two main sources for the methane: one appears to be centered near 30° N, 260° W, with the second near 0°, 310° W. It is estimated that Mars produces 270 tons of methane per year. Under the conditions on Mars, methane breaks down in as little as six months (Earth time). In order for the methane to exist in the detected quantities, there must be a very active source under the surface. Volcanic activity, comet impacts, and serpentinization are the most probable causes. Methanogenic microbial life is a very remote alternative source. 
The atmosphere of Mars will pose a great number of obstacles for human exploration of the planet. It prevents liquid water from persisting on the surface, allows radiation levels that humans can barely tolerate, and would make it difficult to grow food even in a greenhouse. NASA and other space agencies are confident that they will be able to engineer solutions for these problems within the next 30 years, though. Good luck to them. Of course, we have written many articles about Mars’ atmosphere. Here’s an article about how the planet once held enough moisture for drizzle or dew. And here’s an article about the Mars methane mystery.
Astronomers have discovered that a distant galaxy — seen from Earth with the aid of a gravitational lens — appears like a cosmic ring, thanks to the highest resolution images ever taken with the Atacama Large Millimeter/submillimeter Array (ALMA). Forged by the chance alignment of two distant galaxies, this striking ring-like structure is a rare and peculiar manifestation of gravitational lensing as predicted by Albert Einstein in his theory of general relativity. Gravitational lensing occurs when a massive galaxy or cluster of galaxies bends the light emitted from a more distant galaxy, forming a highly magnified, though much distorted image. In this particular case, the galaxy known as SDP.81 and an intervening galaxy line up so perfectly that the light from the more distant one forms a nearly complete circle as seen from Earth. Discovered by the Herschel Space Observatory, SDP.81 is an active star-forming galaxy nearly 12 billion light-years away, seen at a time when the Universe was only 15 percent of its current age. It is being lensed by a massive foreground galaxy that is a comparatively nearby 4 billion light-years away. “Gravitational lensing is used in astronomy to study the very distant, very early Universe because it gives even our best telescopes an impressive boost in power,” said ALMA Deputy Program Scientist Catherine Vlahakis. “With the astounding level of detail in these new ALMA images, astronomers will now be able to reassemble the information contained in the distorted image we see as a ring and produce a reconstruction of the true image of the distant galaxy.” The new SDP.81 images were taken in October 2014 as part of ALMA’s Long Baseline Campaign, an essential program to test and verify the telescope’s highest resolving power, achieved when the antennas are at their greatest separation: up to 15 kilometers apart. 
The highest resolution image of SDP.81 was made by observing the relatively bright light emitted by cosmic dust in the distant galaxy. This striking image reveals well-defined arcs in a pattern that hints at a more complete, nearly contiguous ring structure. Other slightly lower-resolution images, made by observing the faint molecular signatures of carbon monoxide and water, help complete the picture and provide important details about this distant galaxy. Though this intriguing interplay of gravity and light in SDP.81 has been studied previously by other observatories, including radio observations with the Submillimeter Array and the Plateau de Bure Interferometer, and visible light observations with the Hubble Space Telescope, none has captured the remarkable details of the ring structure revealed by ALMA. “The exquisite amount of information contained in the ALMA images is incredibly important for our understanding of galaxies in the early Universe,” said astronomer Jacqueline Hodge with the National Radio Astronomy Observatory in Charlottesville, Va. “Astronomers use sophisticated computer programs to reconstruct lensed galaxies’ true appearance. This unraveling of the bending of light done by the gravitational lens will allow us to study the actual shape and internal motion of this distant galaxy much more clearly than has been possible until now.” For these observations, ALMA achieved an astounding maximum resolution of 23 milliarcseconds, which is about the same as seeing the rim of a basketball hoop atop the Eiffel Tower from the observing deck of the Empire State Building. “It takes a combination of ALMA’s high resolution and high sensitivity to unlock these otherwise hidden details of the early Universe,” said ALMA Director Pierre Cox. “These results open a new frontier in astronomy and prove that ALMA can indeed deliver on its promise of transformational science.” SDP.81 is one of five targets selected for study during the ALMA Long Baseline Campaign. 
The others include the protoplanetary disk HL Tau, the asteroid Juno, the star Mira, and the quasar 3C138. Papers describing these publicly available data and the overall outcome of the ALMA Long Baseline Campaign are to be published in the Astrophysical Journal Letters. # # # The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc. The Atacama Large Millimeter/submillimeter Array (ALMA), an international astronomy facility, is a partnership of the European Organisation for Astronomical Research in the Southern Hemisphere (ESO), the U.S. National Science Foundation (NSF) and the National Institutes of Natural Sciences (NINS) of Japan in cooperation with the Republic of Chile. ALMA is funded by ESO on behalf of its Member States, by NSF in cooperation with the National Research Council of Canada (NRC) and the National Science Council of Taiwan (NSC) and by NINS in cooperation with the Academia Sinica (AS) in Taiwan and the Korea Astronomy and Space Science Institute (KASI). ALMA construction and operations are led by ESO on behalf of its Member States; by the National Radio Astronomy Observatory (NRAO), managed by Associated Universities, Inc. (AUI), on behalf of North America; and by the National Astronomical Observatory of Japan (NAOJ) on behalf of East Asia. The Joint ALMA Observatory (JAO) provides the unified leadership and management of the construction, commissioning and operation of ALMA. Contact: Charles Blue, Public Information Officer (434) 296-0314; [email protected]
Where does the world begin and where does it end? In many creation stories the Earth has well-defined edges. In early Mesopotamian mythology it is a flat disk floating in the ocean and surrounded by a circular sky. The Hopi people of northeastern Arizona envision it as a series of layered worlds, of which humans have emerged into the fourth tier, escaping from the turmoil below through a hollow reed in the Grand Canyon. The ancient Greeks were probably the first to light on the idea of the Earth as science understands it today: as a sphere, and therefore without an end point on its surface. Through time, science and exploration have changed where frontiers lie and how those frontiers are imagined. Such inquiries have shown them to be tenuous and unstable. Frontiers share the qualities of both a boundary and a threshold: they at once define, delineate, and exclude, but also act as permeable borders through which one may pass into a strange country and be transformed by the experience, or through which the unknown passes into the realm of the familiar. The Greeks obtained much of their evidence for the theory of a spherical earth from observing the heavens. The Earth, noted Aristotle, casts a circular shadow on the moon during a lunar eclipse, and constellations on the southern horizon rise in the sky as you travel south. But at least one line of reasoning followed from a hard slog on the ground, even if its logical foundations were shakier. Elephants were a prized weapon of war in Alexander the Great’s conquest of the Persian Empire. In Egypt, his successors went to enormous pains to import them from parts of Africa, far to the south and west. If there were elephants to the east and elephants to the south and west, and the north was icebound, the Greeks reasoned, didn’t that show that the Earth was elephants all the way round? 
Seeing the Earth as round rather than as flat, or in some other form, affected the Greeks’ perception of frontiers. At that expansive phase of their history, distant lands might be unknown but they were not necessarily beyond reach. There were no restrictions on how far men might travel, only the limitations that they imposed on their own endeavors—a notion that has persisted in the West for most of the last 500 years. Today, with the Earth mapped, imaged and charted down to the last square foot, the frontier is supposedly in outer space: Mars, the moons of Jupiter, and beyond. But humans’ power to transform themselves and their environment suggests that the most important contemporary frontiers lie in the realm of inner space, in the possibilities for conceptual and moral transformation. It is at these boundaries that our future will be decided. Europeans redrew the frontiers of their world after the collapse of Greek and Roman civilization. This is not a figure of speech. They produced beautiful world maps, or mappae mundi, to represent a mythic and religious order circumscribed by God rather than by proto-scientific enquiry. The new vision lasted more than 1,000 years and can be seen today in a fine example from about 1300 A.D., kept at Hereford cathedral in England. The world, round but flat, is centered on the holy city of Jerusalem. Europe and the Mediterranean are distorted but just about recognizable to the modern eye. (East is at the top of the map.) In most of the rest of the map, especially the peripheries beyond land and sea where few Europeans had ventured, wonders dwell—strange beasts, hybrids, and men. Intricate lettering on the map reveals the Lynx, a wolf-like creature that sees through walls and produces a valuable carbuncle in its secret parts. The Manticor, in India, has a lion’s body, a scorpion’s tail, a triple row of teeth in a man’s face, and the voice of a Siren. 
Semi-humans such as the Phanesii, bat-like people with enormous drooping ears, live in Asia, as do the Spopodes, who have horses’ feet. The king of the Agriophani Ethiopes has one eye in his forehead, and his people feed on the flesh of panthers and lions. The Gangines of India live exclusively on the scent of forest apples and die instantly if they perceive any other smell. The Arimaspians fight with griffins for diamonds. Fully human but utterly foreign, and terrifying, are the Scythians: they love war, drink the blood of their enemies from the gushing wounds they inflict, and make cups from the skulls of the vanquished. The Hyperboreans, by contrast, are the happiest race of men. They live without quarreling and without sickness for as long as they like. Only when they are tired of living do they throw themselves from a prominent rock into the sea. Like the books of beasts, called bestiaries, that were created in the same period, mappae mundi offered instruction and wonder to their viewers. They showed possibilities and alternatives beyond distant frontiers, but also warned of the terrors awaiting those who were overly curious. Wonder was permissible so long as one did not stray too far from revealed and unchanging doctrine. Curiosity had been condemned by St Augustine as concupiscentia oculorum—lust of the eyes. In the 15th century, however, a revolution in thinking and behavior began to take root. In 1417 the scholar Poggio Bracciolini discovered a manuscript of De Rerum Natura by the Roman poet Lucretius, which propagated the explosive idea that everything is made of atoms (the one scientific idea, Richard Feynman later said, he would pick to survive the collapse of civilization). 
The revival of ancient knowledge, combined with new reports from adventurous travelers, contributed to a growing sense of man’s importance and potential. The Erdapfel, or Earth apple, embodies this change. It is a terrestrial globe made in Nuremberg in 1492. In a particularly dramatic break with the medieval European past, and showing knowledge that the ancients never possessed, the Erdapfel features the entire African coast and the route around the Cape to the other side of the world, where the Portuguese monarchy, for whom the globe was made, hoped to find fabulous wealth. Gone are the fanciful creatures and races of men. The emphasis is on the practical—on showing navigable routes to the far East that would bypass the Islamic world. Unless they fail to notice altogether—something that happens surprisingly often—most people who look at the Erdapfel today are struck by one thing: there is no North or South America. Sail west from Lisbon and your landfall is in Japan. But this and other inaccuracies, such as the distorted shape of the African continent, pale beside the confidence that the Erdapfel proclaims in the human capacity to reach any part of the world, no matter how remote, and to find prizes there for the taking. There is a risk of over-romanticizing the age of discovery and the scientific revolution that followed. The empires of early modern Europe could be as cruel as any in history, and many leading figures in the emerging field of natural philosophy had little time for the sense of wonder that is palpable in medieval mappae mundi. Francis Bacon, who is widely regarded as the father of the scientific method, was dismissive of wonder, calling it “broken knowledge.” His aim was the betterment of the condition of humanity, or—not quite the same thing—European adventurers. Results, not frills. But the cumulative experience of the scientific age suggests that greater knowledge does not abolish wonder. 
Discoveries are often more amazing than the mysteries they resolve. They uncover depths that are more beautiful and far stranger than anything we had previously imagined, and motivate ever greater endeavors. They have expanded our sense of the possible, with breakthroughs that redraw not only the location of the frontier but also its significance. The geologists who disinterred “deep time” at the turn of the 19th century illustrate the power of scientific discovery to precipitate profound conceptual shifts. Their analysis of rocks and fossils shone a light backwards into the dark, and revealed prehistory’s almost unimaginably vast contours with precision. For those accustomed to thinking of the world as only a few thousand years old, the discovery of “deep time” was shocking, like a swoop into stereoscopic vision for someone who has previously only seen in two dimensions and suddenly finds himself on a high promontory above a chasm. “The mind [grew] giddy by looking so far into the abyss of time,” wrote John Playfair, a friend of the geological pioneer James Hutton, in 1788. The past, it became clear, was another country, or rather several. There had been periods lasting millions of years filled with monstrous creatures such as mosasaurs (huge and ferocious marine reptiles) and pterosaurs (enormous flying lizards). The anatomist Georges Cuvier, who named both creatures, wrote in 1812 that he and his successors would “burst the limits of time” by making prehistory legible, just as astronomers had “burst the limits of space” by making the solar system knowable to human beings who were confined to one small planet. Charles Darwin’s theory of natural selection, first described in detail in his 1859 book On the Origin of Species, followed elegantly from the discovery of deep time. 
The English naturalist was not the first to argue that species evolved and changed, but every previous account had failed to provide a sufficiently detailed and robust explanation of how this happened, and orthodoxy remained on the side of the immutability of forms. Prehistoric monsters were interpreted as special creations of a different age. Darwin’s insight was that small gradations over what were inconceivably long periods of time, at least to humans, could lead cumulatively to profound transformations that turned monsters into familiar animals. Darwin’s vision was of life as both astonishingly fecund and productive, and as a field of relentless war and destruction. But it was not, in the end, bleak. The Origin concludes, famously, with a declaration that there is grandeur in the view of life in which “from so simple a beginning endless forms most beautiful and most wonderful have been, and are being evolved.” Darwin’s theory indicated that, contrary to Christian teaching and 300 years of scientific orthodoxy, humanity was animal in origin and wholly continuous with (though not the same as) the rest of nature. It broke down boundary conditions accepted by most philosophers and scientists since Descartes. Not all of Darwin’s contemporaries were receptive to this message, however. On seeing an orangutan named Jenny at the London Zoo in 1842, Queen Victoria declared it “frightful and painfully and disagreeably human.” Cousin to half the crowned heads of Europe though she may have been, here was one frontier the young Queen was not willing to breach on grounds of consanguinity. A few years before Jenny received Victoria, Darwin had observed the more hirsute of the two in her cage, writing: Let man visit Ouranoutang in domestication, hear its expressive whine, see its intelligence when spoken to; as if it understands every word said; see its affection to those it knew; see its passion & rage, sulkiness, & very actions of despair; ... 
and then let him boast of his proud pre-eminence. But the gulf between accepting humanity’s biological proximity to other creatures, and appreciating their unique social, moral, and cognitive worlds, was not an easy one for Darwin’s successors to bridge. When the zoologist Donald Griffin wrote in 1976 that biologists should investigate “the possibility that mental experiences occur in animals and have important impacts on their behaviors,” it was still a radical suggestion. In the decades since, numerous studies have proven him right. Many of the characteristics thought to be important for higher consciousness, such as brain size and a sense of self, turn out not to be unique to humans. Last year, leading neuroscientists signed what they called “The Cambridge Declaration on Consciousness,” stating that “humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all [sic] mammals and birds, and many other creatures, including octopuses, also possess [them].” Working memory and episodic memory are widespread among animals, as are social inclinations born of environmental pressures that favor their evolution. The distinction between cognition and emotion is also increasingly seen as a false one. Crows and other members of the corvid family have self-awareness and a theory of mind. Octopuses can solve some problems as well as 3-year-old children, not to mention perform feats of dexterity far beyond the scope of humans. Chimpanzees grieve for non-related individuals, and records of their reactions to stimuli such as a majestic waterfall and the birth of a baby chimp suggest that they may be capable of a sense of wonder. The inner lives of our fellow creatures offer a frontier of wonder and beauty to explore as wide and deep as the sea, and they change humanity’s sense of its own borders. 
The animal kingdom is a symphony of mental activity and other intelligent processes of which we apprehend only a small part. Take bird song: what sounds to our ears like simple single notes resolves at slow speed and lower pitch into dense, subtle tone-poems. Dental hygiene was not well advanced in 17th-century Holland. One trembles to think of the sight and smell that greeted Antonie van Leeuwenhoek when, in 1683, the cloth merchant and gentleman microscopist gazed into the mouths of two old men who had never cleaned their teeth in their lives. But his curiosity yielded astonishing results. Looking at samples taken from the men’s mouths under his microscope, he found “an unbelievably great company of living animalcules, a-swimming more nimbly than any I had ever seen up to this time ... [A]ll the water… seemed to be alive.” Van Leeuwenhoek was, of course, describing bacteria. It was the first glimpse of a domain whose nature and importance scientists are, even now, only beginning to appreciate. For the first 300 years after their discovery, bacteria and other microorganisms were studied with a view to understanding their role as agents of disease and decay. Only in recent decades has a fuller, more nuanced picture emerged—one that challenges our sense of the frontiers between beings, and within the category of life itself. The microbiologist Lynn Margulis had her pathbreaking paper on symbiosis rejected by about 15 leading journals before it was published in 1967. She argued that the complex cells of protists, plants, and animals resulted from earlier and simpler organisms merging and cooperating. The ancestors of chloroplasts and mitochondria, the organelles in plants and animal cells that provide them energy, were once free-living bacteria that larger organisms then swallowed. But instead of becoming lunch, the bacteria took up residence, like Jonah in the belly of the whale. 
Unlike Jonah, however, they paid for their keep by performing a new role as ‘batteries.’ Today the evidence for Margulis’s theory of endosymbiosis, as it has become known, is overwhelming. The physician and essayist Lewis Thomas captured the essential point in an essay published in the 1970s, proposing “some biomythology.” A bestiary for modern times, he argued, should be a micro-bestiary, since microbes teach us an essential lesson: “There is a tendency for living things to join up, establish linkages, live inside each other, return to earlier arrangements, get along whenever possible.” The implications of this insight are deeply personal. In the last decade, science has attempted to reckon with the power of the bacteria that live on us and, mostly, in us, in a miniature ecosystem known as the microbiome. It has been shown to play a role in digestion and immune response, and there is mounting evidence that the composition of the microbiome affects cognition and the risk of contracting illnesses like heart disease. The abundance of its constituents is astonishing: there may be around 500 trillion bacteria with us at any one time, outnumbering our own cells by 10 to 1. During a lifetime we excrete five elephants’ weight of them. Micro-organisms are, quite literally, part of our make-up. The human body is littered with scars from the viral attacks on cellular life that have been going on since at or near its beginning. Some 8 percent of our DNA is made up of remains of endogenous retroviruses that invaded us in the past. The ascendancy of the microbial domain and its coevolution with the human body raises the question of where the human ends and the microbe begins. A viral infection is thought likely to have given rise to the placenta in the ancestor of all mammals. Without it, we might still be laying eggs. 
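As a quick sanity check, the two microbiome figures quoted above can be combined; both inputs are the essay's round numbers, not independent measurements, and together they imply a human cell count in line with commonly cited estimates of a few tens of trillions:

```python
# Both values are the essay's round figures, used here only to check
# that they are mutually consistent.
bacteria = 500e12   # "around 500 trillion bacteria with us at any one time"
ratio = 10          # "outnumbering our own cells by 10 to 1"

human_cells = bacteria / ratio
print(f"{human_cells:.0e}")  # 5e+13, i.e. ~50 trillion human cells
```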
The abyssal plain beneath much of the world’s ocean is, at first sight, a dark, silty no-place. But it harbors 2.9×10^29 single-celled organisms—10 million trillion for every human on the planet. Even the deepest trenches, 11,000 meters below the surface, teem with microbial life. At volcanic “black smokers” on the sea bed, in almost total darkness and scorching temperatures, chemosynthetic bacteria and archaea use hydrogen from the vents to support assemblages no fabulist could have dreamed of: gutless worms taller than men, crabs with hairy claws, and bleached octopuses that bear a striking resemblance to Marge Simpson. Microbes have been found in stupendous abundance in the most unlikely of places. Tourists may be familiar with some of the spectacles that they create on land, such as the Grand Prismatic Spring in Yellowstone National Park, a round lake whose rainbow rim has been painted by heat-loving bacteria. Yet fewer know of the strange phenomena in the Antarctic interior, far inland from a frontier that is almost impenetrable to most multi-cellular life forms. At the Blood Falls in east Antarctica, sulfur- and iron-eating microbes buried in the oxygen-parched ice give the surface water a bright sanguine hue. Beneath the icy crust of Lake Untersee, to the north and west, cyanobacteria find enough light to photosynthesize and build pinnacles and cones that may resemble some of the earliest forms of life on Earth. Moving to Antarctica’s South Shetland Islands, below the tip of South America, bacteria living 15 meters beneath the permafrost have been found to be capable of surviving gamma-ray radiation exposures 5,000 times greater than any other known organism. Such discoveries have recast our sense of where the frontiers congenial to life might lie—including those beyond Earth. 
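The sea-floor census above (2.9 × 10^29 cells) can be checked against the per-person figure. Taking a world population of roughly seven billion — an assumption, since the essay does not state one — the total and the "10 million trillion per human" agree to within an order of magnitude:

```python
seafloor_cells = 2.9e29   # single-celled organisms beneath the abyssal plain
population = 7e9          # assumed world population at the time of writing

per_person = seafloor_cells / population
# ~4.1e+19: a few tens of millions of trillions, the same order of
# magnitude as the essay's round "10 million trillion" figure.
print(f"{per_person:.1e}")
```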
The idea that we might find living things in space long predates the discovery of a microorganism such as Halorubrum lacusprofundi, whose adaptations to the cold are, in the words of one researcher, likely to allow it “to survive not only in Antarctica, but elsewhere in the universe.” In the preface to the work that inspired van Leeuwenhoek, the Micrographia of 1665, Robert Hooke wrote “there may be yet invented several other helps for the eye, as much exceeding those already found, as those do the bare eye, such as by which we may perhaps be able to discover living Creatures in the Moon, or other Planets.” Technology may finally be catching up with Hooke’s prediction. Recent observations indicate that there may be as many as 17 billion stars in the Milky Way galaxy which are orbited by Earth-sized planets at life-friendly temperatures. Within 10 years, the James Webb Space Telescope may be able to tell us if the atmospheres of those nearby exhibit the chemical signatures of a biosphere. If extraterrestrial life does exist, how “weird” might it be? The adjective can be used in a semi-precise way to mean any life form with which, unlike everything we know of on Earth, we do not share a common ancestor. On the principle that life can evolve or endure where there is a flow of energy to be harvested, one of the most statistically likely places is in the vicinity of white dwarf stars—common enough objects in the universe—where collisions with dark matter will continue to provide a steady trickle of energy until the universe is 10^25 years old, or about 10,000 trillion times as long as it took life to appear on Earth. Life on these stars, if it were to exist, would have a very slow metabolism and rate of consciousness, taking 1,000 years to complete a single thought. For the moment, we have a sample size of precisely one from which to draw conclusions. 
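The time-scale comparison above works out if one takes roughly a billion years as the time life took to appear on Earth — a round assumption, since the essay does not give that figure itself:

```python
horizon = 1e25        # years until the white dwarfs' dark-matter trickle ends
life_appeared = 1e9   # assumed ~1 billion years for life to arise on Earth

ratio = horizon / life_appeared
print(f"{ratio:.0e}")  # 1e+16, i.e. 10,000 trillion — matching the text
```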
Seeking to anticipate just how different life might look, however, requires us to look again at what we think we know—and, often, realize just how little we understand so far. There is no such place as home, and we live there. Here and now on Earth, life itself may be facing a frontier unlike any other it has known before. The manipulation of organisms to forge “creatures born from an idea, not an ancestor,” as an editorial in Nature put it just before the 200th anniversary of Darwin’s birth, challenges the foundations of the distinction between organic and inorganic forms. Genomics and genetic engineering have made it possible to swap traits and capabilities between species, or insert them into newly created species, to spur adaptation at vastly quicker speeds than have ever been possible. In 2010 a team headed by Hamilton Smith and Craig Venter announced, to much fanfare, that they had created life from scratch. The claim was not entirely as it seemed: What they had done was to make a copy of the genome of a pre-existing microbe, and put it inside the cell walls of another one. Their experiments are the most visible of those pointing to the ways in which it might be possible to reprogram the code of life. An inventive riff on DNA and RNA molecules, known as “xenonucleic acid” or “XNA,” can, it is claimed, carry genetic information and do all the things its organic counterparts can, and more. The physicist Freeman Dyson has dreamed of a future in which anyone and everyone is able to manipulate the building blocks of life, and of a new generation of entrepreneurs and artists who will be “writing genomes as fluently as Blake and Byron wrote verses,” making the planet “beautiful as well as fertile, hospitable to hummingbirds as well as to humans.” 
Whatever one thinks of the plausibility and desirability of Dyson’s vision, the artists and inventors who are exploring this biological boundary herald a growing awareness of the fact that life’s borders are shifting beneath us. Artistic collaborators Oron Catts and Ionat Zurr have made Guatemalan worry dolls out of “semi-living” human tissue. Dutch designer Joris Laarman has built a lamp illuminated by hamster cells modified with firefly DNA. Christian Bök, the Canadian poet, aims to engineer a life-form so that its genetic code becomes “not only a durable archive for storing a poem, but also an operant machine for writing” one, generating a protein-based “literary” product he calls the Xenotext. Humans need not be exempt from this tinkering, although proposals to adapt our minds and bodies have proved controversial. Julian Savulescu, professor of practical ethics at Oxford University, advocates intervening chemically and neurologically to mold personalities into forms that are more conducive to civilization’s survival. “Unless you believe that evolution provided just the perfect number of psychopaths in our community and just the right level of selfishness within different individuals,” he writes, “you should believe that we should change that natural distribution for the better.” Savulescu’s detractors counter that the triggers for antisocial behavior lie in social and political conditions, and it is there that the interventions need to be made. How far could this go? A recent report from the U.S. 
National Science Foundation suggests that by 2020, not only will we be fitter, happier, and more productive, but that people from all backgrounds and of all ranges of ability will “acquire valuable new knowledge and skills more reliably and quickly.” By 2030, “fast, broadband interfaces between the human brain and machines will transform work in factories, control cars, and enable new sports, art forms, and modes of interaction between people.” Most of us will probably welcome many of the foreseeable changes. Who would object to 100 or even 120 years of healthy, “augmented” life? A human/post-human frontier may now be coming into view. Those who support the idea of “the singularity,” the emergence of man-machine hybrids with vastly greater intelligence than our own, tend to believe that humanity will imminently and irreversibly cross that threshold. Nietzsche, for one, would be happy: “What is glorious in Man is that he is only a bridge.” What is notable about the post-human frontier is that the very thing that keeps us on this side of the divide—our limited cognitive capacities—makes it impossible to know what lies beyond it once it is crossed. Perhaps the view would be different from the other side; perhaps it would not seem like a side at all, but rather the commencement of a history for which humanity is prehistory. For now, however, we have no choice but to live within the bounds of what our minds allow, limited by our ability to grasp the mathematics in which the universe appears to be written. In many respects, our frontiers as conscious individuals and as communities are and will remain extremely circumscribed. For all the miracle that is the brain, with each of its 85 billion neurons connected to an average 7,000 others and collectively performing several hundred trillion operations per second, our attention is necessarily finite. Vladimir Nabokov put it well: Reality is a very subjective affair. 
I can only define it as a kind of gradual accumulation of information: and as specialization… Take a lily: … more real to a naturalist than it is to an ordinary person. But it is still more real to a botanist. And yet another stage of reality is reached with the botanist who is a specialist in lilies. You can get nearer and nearer, so to speak, to reality but you never get near enough because reality is an infinite succession of steps, levels of perception, false bottoms, and hence unquenchable, unattainable. You can know more and more about one thing but you can never know everything about one thing… So we live surrounded by more or less ghostly objects. Even our most precise conceptions of reality are only fleeting glimpses of what Richard Feynman called “the inconceivable nature of nature.” But this can be a counsel of joy, not despair. Over time we can nurture a consciousness that is open to perception of the most beautiful and surprising patterns around us. Preserving a capacity to appreciate wondrous phenomena, however, demands facing up to urgent ecological challenges, for which individual enhancement will not suffice. An enhanced jerk, after all, is still a jerk. At a time of proliferating discoveries and innovations, humanity’s most important frontier is not scientific or geographic, but moral and political. The concept of the Anthropocene, as many geologists now call the epoch in which we live, distills the dilemma. It proceeds from evidence that humans now exert a massive and decisive influence on the planet and its ecosystems, with consequences that will reach far into the future. People sometimes object to the term on the grounds that it suggests humans are in control. But influence is not the same as control. Humans are no more in control of the planet than a heavy smoker is in control of his lungs. The Anthropocene is likely to be a time of rapid and unpredictable environmental change. 
Rapid, because the conditions that underpin all life are changing faster than they ever have at any point over tens or even hundreds of millions of years. Unpredictable, because we struggle to foresee the likely consequences of these changes, and because we cannot be sure how people will react to them in the future. We don’t know where some tipping points may be or, indeed, whether they will really unfold as models predict. Such future calamities are Rumsfeldian “unknown unknowns.” One way to get a handle on how things might go, however, is to look at times in the past when the Earth system has been subjected to considerable pressure. Paul Wignall, professor of palaeoenvironments at the University of Leeds, says that humans are adding greenhouse gases at a rate and for a duration that are similar to the massive burst of volcanic activity that kicked off the end of the Permian period some 252 million years ago. On that occasion, about nine-tenths of life on Earth was destroyed. The event has been called “The Great Dying” and it was the greatest catastrophe in the history of life. Ecosystems took more than 10 million years to recover afterwards, with a very different mix of plants and animals. Today’s world is very different, and disaster on such a scale is not inevitable. But there are other factors besides greenhouse gases to think about. The degradation and erosion of fertile soil, on which civilizations throughout history have depended, is a huge problem in many parts of the world. Toxics, plastics, and other man-made chemicals are likely to have damaging, long-term effects on human and animal health. In the global ocean—seven-tenths of the planet’s surface and 90 percent of its habitable space—acidification, pollution, temperature rise, overfishing, and other stresses may, singly and in combination, prove highly destabilizing. 
Most scientists think a global mass extinction of species is already underway. Loss of biodiversity is a systemic phenomenon; focusing conservation efforts on residual pristine landscapes, protected areas behind artificial frontiers, treats the symptoms and not the causes. Yet technological advances may open new ways to intervene and even to revive what has disappeared. The technical hurdles to recreating recently extinct animals, and even those that died out longer ago, such as the mammoth, have largely been solved. But de-extinction efforts raise more questions than they answer. However powerful our astonishment, delight, or horror at what may be technically feasible, we should not lose sight of the context in which such experiments, if they ever happen, will take place. Species only thrive in an appropriate environment, and individuals from highly social species need a pre-existing social group. “It’s the ecology, stupid,” might be an appropriate mantra for our times. Ignorance is unavoidable, but willful ignorance and reluctance to perform careful experiments is not. As the physicist John Archibald Wheeler said, “our whole problem is to make the mistakes as fast as possible.” Once upon a time the Earth really was elephants all the way around. Straight-tusked elephants, closely related to the Indian elephant, roamed all over Europe and much of northern Asia until about 50,000 years ago. Ancient mastodons and gomphotheres—members of the same order of creatures as living elephants—were widespread in the Americas until the arrival of man. The extirpation of elephants from China occurred at the dawn of the historical period. Today, African and Asian elephants are increasingly endangered in the wild. Perhaps, one day, an abundance of elephants in the most surprising places will be part of our world again. Elephants do not always make easy neighbors, and can crush crops and people with their bulk. 
But their intelligence, compassion, and playfulness are a reminder that there are other worlds beyond humanity’s din. Perhaps all that we are and all that we treasure exists at a frontier. Humans are creatures of uncertainty, created—at least to date—largely by natural selection acting upon random mutations. Yet here and now on Earth, we should not dismiss our potential to innovate more intelligently and benignly in the future than has been the case in the past. We may yet find a sense of dwelling and celebration, through art and story, and what they have to teach. In Italo Calvino’s novel Invisible Cities, Marco Polo concludes his tales to the Great Khan: The inferno of the living is not something that will be; if there is one, it is what is already here, the inferno where we live every day, that we form by being together. To escape suffering in it ... seek and learn to recognize who, in the midst of inferno, are not inferno, then make them endure, give them space. Medieval Europeans believed the universe to be much smaller and of vastly shorter duration than we now know it to be. But some of them still dreamed magnificently, devoting their lives to the construction of great cathedrals that, for the most part, they would never live to see completed. We can recreate something of their world in ours. A replica of a 12th-century mappa mundi was recently made in England using gold leaf, black ink derived from oak galls, and paint made from ground-up lapis lazuli, malachite, and dragon’s blood, the red extract of a plant root. The gold was formed within stars; the rest by great, creating nature here on Earth. Today, with our vastly greater knowledge and capabilities, what else might we yet conceive? Caspar Henderson is the author of The Book of Barely Imagined Beings: A 21st Century Bestiary (2013, University of Chicago Press). This article originally appeared in the Fall 2013 Nautilus Quarterly.
Newborn stars spew material into the surrounding gas and dust, creating a surreal landscape of glowing arcs, blobs and streaks — and ESO’s Very Large Telescope (VLT) has caught some of them on candid camera. This new image, released today, hails from NGC 6729, a nearby star-forming region in the constellation Corona Australis. The stellar nursery NGC 6729 (RA 19h 01m 54.1s; dec -36° 57′ 12″) is part of one of the closest stellar nurseries to Earth and therefore one of the best studied. The new VLT image gives a close-up view of a section of this strange and fascinating region. The data were selected from the ESO archive by Sergey Stepanenko of the Ukraine, as part of the Hidden Treasures competition. The 2010 competition gave amateur astronomers the chance to search through ESO’s astronomical archives in the hope of finding a well-hidden gem that needed polishing. Participants vied for prizes, including a free trip to see the VLT in Chile for the overall winner. Stepanenko’s picture of NGC 6729 was ranked third. Stars form deep within molecular clouds and the earliest stages of their development cannot be seen in visible-light telescopes because they are shrouded in so much dust. Although very young stars at the upper left of the image cannot be seen directly, the havoc they have wreaked on their surroundings dominates the picture. High-speed jets of material that travel away from the baby stars at velocities as high as one million kilometers per hour are slamming into the surrounding gas and creating shock waves. The shocks cause the gas to shine and create the strangely-colored glowing arcs and blobs known as Herbig–Haro objects. The astronomers George Herbig and Guillermo Haro were not the first to see one of the objects that now bear their names, but they were the first to study the spectra of these strange objects in detail. 
They realized that these were not just clumps of gas and dust that reflected light or glowed under the influence of ultraviolet light from young stars, but a new class of objects associated with ejected material in star-forming regions. In this view, the Herbig–Haro objects form two lines marking out the probable directions of ejected material. One stretches from the upper left to the lower center, ending in the bright, circular group of glowing blobs and arcs at the lower center. The other starts near the upper left edge of the picture and extends towards the center right. The peculiar sabre-shaped bright feature at the upper left is probably mostly due to starlight being reflected from dust and is not a Herbig–Haro object. The enhanced-color picture was created from images taken using the VLT’s FORS1 instrument. Images were taken through two different filters that isolate the light coming from glowing hydrogen (shown as orange) and glowing ionized sulphur (shown as blue). The different colors in different parts of this violent star formation region reflect different conditions — for example, where ionized sulphur is glowing brightly (blue features) the velocities of the colliding material are relatively low — and help astronomers to unravel what is going on in this dramatic scene.
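The two-filter, enhanced-color scheme described above (hydrogen mapped to orange, ionized sulphur to blue) can be sketched in a few lines of Python. This is a simplified illustration, not ESO's actual FORS1 processing pipeline, and the frames here are synthetic stand-ins for real exposures:

```python
import numpy as np

def two_filter_composite(h_alpha, s_ii):
    """Build an RGB composite from a hydrogen-line frame and an
    ionized-sulphur frame, mimicking the orange/blue mapping
    described for the NGC 6729 image (a sketch, not ESO's pipeline)."""
    def normalize(img):
        img = img.astype(float)
        lo, hi = img.min(), img.max()
        return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

    h, s = normalize(h_alpha), normalize(s_ii)
    rgb = np.zeros(h.shape + (3,))
    rgb[..., 0] = h        # red channel: glowing hydrogen
    rgb[..., 1] = 0.5 * h  # half-strength green turns hydrogen orange
    rgb[..., 2] = s        # blue channel: glowing ionized sulphur
    return rgb

# Tiny random frames stand in for the two real filter exposures.
h_frame = np.random.rand(64, 64)
s_frame = np.random.rand(64, 64)
composite = two_filter_composite(h_frame, s_frame)
print(composite.shape)  # (64, 64, 3)
```

Real narrowband processing also involves calibration, alignment, and nonlinear stretching, but the channel assignment is the essence of the color scheme.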
Does the worsening galactic cosmic radiation environment observed by CRaTER preclude future manned deep-space exploration? That is the conclusion of a recently published paper, which posits that the recent decrease in solar activity has led to an increased incidence of cosmic rays, which deliver a dangerous radiation dose. That may just put a damper on anyone interested in organizing manned exploration of the Red Planet. The Sun and its solar wind are currently exhibiting extremely low densities and magnetic field strengths, representing states that have never been observed during the space age. The highly abnormal solar activity between cycles 23 and 24 has caused the longest solar minimum in over 80 years and continues into the unusually small solar maximum of cycle 24. As a result of the remarkably weak solar activity, we have also observed the highest fluxes of galactic cosmic rays in the space age, and relatively small solar energetic particle events. We use observations from the Cosmic Ray Telescope for the Effects of Radiation (CRaTER) on the Lunar Reconnaissance Orbiter (LRO) to examine the implications of these highly unusual solar conditions for human space exploration. We show that while these conditions are not a show-stopper for long duration missions (e.g., to the Moon, an asteroid, or Mars), galactic cosmic ray radiation remains a significant and worsening factor that limits mission durations. Very interesting and, at least for me, counterintuitive. I would have thought: less solar activity means less radiation. But it seems that the solar wind normally has the effect of reducing the amount of dangerous cosmic radiation that can reach the inner solar system. And on that point, the article notes: While particles and radiation from the Sun are dangerous to astronauts, cosmic rays are even worse, so the effect of a solar calm is to make space even more radioactive than it already is.
NASA’s Dawn spacecraft has captured new images of dwarf planet Ceres. As NASA’s Dawn spacecraft closes in on Ceres, new images show the dwarf planet at 27 pixels across, about three times better than the calibration images taken in early December. These are the first in a series of images that will be taken for navigation purposes during the approach to Ceres. Over the next several weeks, Dawn will deliver increasingly sharp images of the dwarf planet, leading up to the spacecraft’s capture into orbit around Ceres on March 6. The images will continue to improve as the spacecraft spirals closer to the surface during its 16-month study of the dwarf planet. “We know so much about the solar system and yet so little about dwarf planet Ceres. Now, Dawn is ready to change that,” said Marc Rayman, Dawn’s chief engineer and mission director, based at NASA’s Jet Propulsion Laboratory in Pasadena, California. The best images of Ceres so far were taken by NASA’s Hubble Space Telescope in 2003 and 2004. These most recent images from Dawn, taken on January 13, 2015, at about 80 percent of Hubble’s resolution, are not quite as sharp. But Dawn’s images will surpass Hubble’s resolution at the next imaging opportunity, which will be at the end of January. “Already, the [latest] images hint at first surface structures such as craters,” said Andreas Nathues, lead investigator for the framing camera team at the Max Planck Institute for Solar System Research, Gottingen, Germany. Ceres is the largest body in the main asteroid belt, which lies between Mars and Jupiter. It has an average diameter of 590 miles (950 kilometers), and is thought to contain a large amount of ice. Some scientists think it’s possible that the surface conceals an ocean. Dawn’s arrival at Ceres will mark the first time a spacecraft has ever visited a dwarf planet.
“The team is very excited to examine the surface of Ceres in never-before-seen detail,” said Chris Russell, principal investigator for the Dawn mission, based at the University of California, Los Angeles. “We look forward to the surprises this mysterious world may bring.” The spacecraft has already delivered more than 30,000 images and many insights about Vesta, the second most massive body in the asteroid belt. Dawn orbited Vesta, which has an average diameter of 326 miles (525 kilometers), from 2011 to 2012. Thanks to its ion propulsion system, Dawn is the first spacecraft ever targeted to orbit two deep-space destinations. JPL manages the Dawn mission for NASA’s Science Mission Directorate in Washington. Dawn is a project of the directorate’s Discovery Program, managed by NASA’s Marshall Space Flight Center in Huntsville, Alabama. The University of California at Los Angeles (UCLA) is responsible for overall Dawn mission science. Orbital Sciences Corp. in Dulles, Virginia, designed and built the spacecraft. The Dawn framing cameras were developed and built under the leadership of the Max Planck Institute for Solar System Research, Gottingen, Germany, with significant contributions by German Aerospace Center (DLR), Institute of Planetary Research, Berlin, and in coordination with the Institute of Computer and Communication Network Engineering, Braunschweig. The Framing Camera project is funded by the Max Planck Society, DLR, and NASA/JPL. The Italian Space Agency and the Italian National Astrophysical Institute are international partners on the mission team.
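A quick back-of-envelope check of the numbers quoted above: if Ceres' 950-kilometer disc spans 27 pixels in the January image, each pixel covers roughly 35 kilometers of the surface.

```python
diameter_km = 950    # Ceres' average diameter, from the article
pixels_across = 27   # apparent size in the January 13, 2015 image

km_per_pixel = diameter_km / pixels_across
print(round(km_per_pixel, 1))  # 35.2 km per pixel
```

That coarse scale is why only the largest surface structures, such as big craters, were just starting to hint at themselves in these navigation frames.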
Bill Snyder creates his images by peering into a realm that no human can see with the naked eye. An amateur astronomer, Snyder is fascinated by the universe and cosmos spinning around us. He says, ‘It was hard to explain what I could see through my telescope, so I started taking pictures to show people what was out there.’ Astrophotography is a laborious process that takes many hours, sometimes months, to create. He works with a high-tech camera that is mounted on his telescope and specialized software that tracks objects or phenomena in the solar system and records the image. Each exposure is time-lapsed and shot through specific filters, and the frames are stacked by the software. A single image of a faraway nebula is actually a composite photograph; sometimes upwards of 40 exposures are overlaid upon one another. These images are essentially black and white before Snyder adds the color in Photoshop. The results are vividly colored photographs of the universe beyond our small planet. Snyder plans out each image before he sets out to shoot the night sky. Researching other images, such as the famous space photographs of the Hubble telescope, he forms an idea of what composition he would like from the universe. Although his home base is in Connellsville, Snyder works with telescopes all around the world. He refers to this as ‘remote imaging,’ where he controls the telescopes via the Internet. The composition of his images is dependent on a variety of factors: the time of year, the weather, and where objects are in the sky. It can take months to create an image, and time can be short when the universe is constantly moving and shifting as the earth rotates. Always a creative person, Snyder’s background is not in art or photography, or even science. He started shooting these images because of his passion for astronomy. It was when he started to print them that he discovered his artistic side when it came to color correction and how they appear on paper as opposed to on the computer screen.
In print, the images do appear slightly different than they do online, but the effect is startling. Some images appear 3-dimensional when viewed in print, the cosmos shifting subtly depending on where you stand when viewing it. Snyder says that he never expected to become an artist, but emphatically believes that astrophotography is indeed a form of art like any other. This type of astrophotography is not used for scientific data as much as it is presented for aesthetic value. Despite this, the astrophotography is both educational and beautiful. Snyder allows us a glimpse into the mysteries of the cosmos, things that are real and yet intangible and stunning to behold. Bill Snyder will be at this year’s Dollar Bank Three Rivers Arts Festival from Wednesday, June 12 – Sunday, June 16 at booth #76 in the Artist’s Market. For more information and to order prints, please visit his website at http://billsnyderastrophotography.com Originally posted June 4, 2013 by Emily O’Donnell at http://www.3riversartsfest.org/2013/06/emerging-artist-profile-bill-snyder
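The stacking step Snyder describes, combining dozens of aligned sub-exposures so that random noise averages out, can be approximated with a per-pixel median. This is a generic sketch with synthetic frames, not his specific software or data:

```python
import numpy as np

def median_stack(frames):
    """Combine aligned sub-exposures by taking the per-pixel median,
    a common way to suppress random noise and transient artefacts
    such as satellite trails (a generic sketch, not Snyder's tools)."""
    return np.median(np.stack(frames, axis=0), axis=0)

# Forty synthetic noisy frames of the same made-up target.
rng = np.random.default_rng(0)
target = np.ones((32, 32))
frames = [target + rng.normal(0, 0.5, target.shape) for _ in range(40)]

stacked = median_stack(frames)
single_noise = np.std(frames[0] - target)
stacked_noise = np.std(stacked - target)
print(stacked_noise < single_noise)  # True: stacking suppresses noise
```

Real astrophotography pipelines also register (align) the frames and apply dark and flat calibration before stacking, but the noise-averaging principle is the same.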
A 3D map of our galaxy’s space dust was created by scientists in the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab). One of the goals of this 3D map is to carefully chart the dust that exists in our galaxy in order to clear up the deep-space view and measure the accelerating expansion rate of the universe. What is space dust? Cosmic dust is typically the debris of stars that died billions of years ago. Collectively, this stardust forms clouds that either become planets or new stars. When it fails to do so, it obstructs astronomers' view of celestial objects such as other planets and stars. It also becomes difficult for astrophysicists to see deep into space to learn more about the history, evolution, and formation of our universe. The Earth, for example, is a gigantic lump of space dust, which begs the question: what am I? Well, we too are space dust, but we can consider ourselves an extra-special piece of dust with our complex and well-structured chemical composition. Planetary dust within the Earth’s atmosphere produces those iconic shades of orange and red during sunrises and sunsets. A similar phenomenon happens in outer space, where dust causes other celestial objects and galaxies to glow red in the sky, hiding them and distorting their distances. The 3D space dust map This 3D dust map stretches out as far as thousands of light-years in our Milky Way galaxy. One of its main goals is to aid the Dark Energy Spectroscopic Instrument (DESI) project, led by Berkeley Lab, in measuring the rate of the universe’s accelerating expansion when it kicks off in 2019. Project DESI aims to map out more than 30 million distant galaxies, but if dust is neglected, it will cause warping on the map. The project is led by Edward F.
Schlafly and used data from the Pan-STARRS sky survey in Hawaii and from a different survey, known as APOGEE, at Apache Point Observatory in New Mexico. A technique called infrared spectroscopy was used to slice through the dust that naturally hides celestial objects and to give a more accurate description of a star's color. The video below provides a 3D animation of space dust encompassing thousands of light-years through and out of the Milky Way's galactic plane. Despite the thoroughness of this 3D dust map, there is still “one-third of the galaxy that’s missing,” said Schlafly. A number of anomalies were found when the 3D space dust map was created, about which Schlafly said, "The message to me is that we don’t yet know what’s going on. I don’t think the existing (models) are correct, or they are only right at the very highest densities." Once all the cosmic dust is accounted for, astronomers and enthusiasts like me can confidently believe that one day we will be able to discover some of our universe’s mysteries.
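Dust maps like this one are typically expressed as reddening, E(B-V), along each line of sight. Converting that to visual extinction uses the textbook relation A_V = R_V * E(B-V), with R_V about 3.1 for diffuse interstellar dust. The relation and the example star below are illustrative textbook assumptions, not figures from Schlafly's paper:

```python
R_V = 3.1  # standard diffuse-ISM value (an assumed textbook constant)

def visual_extinction(e_bv):
    """Convert reddening E(B-V), in magnitudes, to visual
    extinction A_V via the textbook relation A_V = R_V * E(B-V)."""
    return R_V * e_bv

def dereddened_magnitude(observed_v, e_bv):
    """Recover a star's intrinsic V magnitude once the dust column
    toward it is read off a 3D map like the one described above."""
    return observed_v - visual_extinction(e_bv)

# A hypothetical star: observed V = 14.2 mag behind E(B-V) = 0.30 mag.
print(round(dereddened_magnitude(14.2, 0.30), 2))  # 13.27
```

This is exactly why a missing or mismeasured dust column "warps" a survey like DESI: an uncorrected 0.93 magnitudes of dimming would be misread as extra distance.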
Remember that planet discovered near Alpha Centauri almost exactly a year ago? As you may remember, it’s the closest system to Earth, making some people speculate about how quickly we could get a spacecraft in that general direction. Four light years is close in galactic terms, but it’s a little far away for the technology we have now — unless we wanted to wait a few tens of thousands of years for the journey to complete. Meanwhile, we can at least take pictures of that star system. The Hubble Space Telescope team has released a new picture of Alpha Centauri’s sister star, Proxima Centauri. While Proxima is technically the closest star to Earth apart from the sun, it’s too faint to be seen by the naked eye, which is not all that surprising given it is only an eighth of the sun’s mass. Sometimes, however, it gets a little brighter. “Proxima is what is known as a ‘flare star’, meaning that convection processes within the star’s body make it prone to random and dramatic changes in brightness,” stated the Hubble European Space Agency Information Centre. “The convection processes not only trigger brilliant bursts of starlight but, combined with other factors, mean that Proxima Centauri is in for a very long life.” How long? Well, consider the following: the universe is about 13.8 billion years old and Proxima is expected to remain in middle age for another four TRILLION years. Plenty of time for us to send a spacecraft over there if we’re patient enough. (The universe itself is expected to last a while, as Wise Geek explains.) The picture was nabbed with Hubble’s Wide Field and Planetary Camera 2, with neighbouring stars Alpha Centauri A and B out of the frame.
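That trillion-year lifespan follows from the steep mass-luminosity relation: low-mass stars burn their fuel very slowly. A rough textbook scaling is t ~ 10 Gyr * (M/M_sun)^(-2.5). For Proxima's roughly one-eighth solar mass, this rule alone already gives about two trillion years; fully convective red dwarfs last longer still because they burn nearly all of their hydrogen, which is how estimates reach the four trillion years quoted above. A sketch of the scaling (an order-of-magnitude rule, not a stellar-evolution model):

```python
def ms_lifetime_gyr(mass_solar, t_sun_gyr=10.0, exponent=-2.5):
    """Rough main-sequence lifetime from the textbook scaling
    t ~ t_sun * (M / M_sun)**(-2.5). Order-of-magnitude only:
    it ignores that red dwarfs burn nearly all their hydrogen."""
    return t_sun_gyr * mass_solar ** exponent

proxima = ms_lifetime_gyr(0.12)  # ~1/8 of a solar mass, per the article
print(round(proxima))  # roughly 2000 Gyr, i.e. about two trillion years
```

The same formula shows why massive stars die young: a 10-solar-mass star gets only tens of millions of years.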
Discovered by Sir William Herschel in March 1781, ice giant Uranus is the penultimate planet of the Solar System and currently well placed for observation in the constellation of Pisces. Despite being four times the diameter of Earth, its immense distance from the Sun (19 Astronomical Units, or 2,870 million kilometres) means that most visual observers consider discerning its tiny 3.7-arcsecond, magnitude +6, blue-green disc in backyard telescopes achievement enough. However, ambitious owners of large instruments equipped with CCD cameras may care to emulate observers in France and Australia who have recently succeeded in imaging enormous storms raging on the planet at infrared and visual wavelengths. “The weather on Uranus is incredibly active,” said Imke de Pater, professor and chair of astronomy at the University of California, Berkeley, and leader of the team that detected eight large storms on Uranus’s northern hemisphere when observing the planet with adaptive optics on the W. M. Keck Observatory in Hawaii on 5th and 6th August 2014. One event was the brightest storm ever seen on Uranus at 2.2 microns, a wavelength that reveals clouds just below the tropopause, where the pressure ranges from about 300 to 500 mbar, equivalent to half the atmospheric pressure at the surface of the Earth. The storm accounted for 30 percent of all light reflected by the rest of the planet at this wavelength. “This type of activity would have been expected in 2007, when Uranus’s once every 42-year equinox occurred and the Sun shined directly on the equator,” states co-investigator Heidi Hammel of the Association of Universities for Research in Astronomy. “But we predicted that such activity would have died down by now. Why we see these incredible storms now is beyond anybody’s guess.” Amateur astronomers soon learned of the bright Uranian storms.
Australian amateur Anthony Wesley of Murrumbateman, NSW, succeeded in imaging storm features on 19th September and 2nd October 2014 with his 16-inch Newtonian. Marc Delcroix in France processed amateur pictures and confirmed the discovery of a bright spot on an image by fellow French amateur Régis De-Bénedictis, then in others taken by amateurs in September and October 2014. Delcroix had his own chance to photograph it with the Pic du Midi one-metre telescope where, on 4th October, “I caught the feature when it was transiting, and I thought, ‘Yes, I got it!'” he said. Delcroix, who works for an auto parts supplier in Toulouse, has observed with his backyard telescope since 2006 and has a particular interest in Jupiter. He has used the Pic du Midi telescope occasionally since 2012. “I was thrilled to see such activity on Uranus. Getting details on Mars, Jupiter or Saturn is now routine, but seeing details on Uranus and Neptune are the new frontiers for us amateurs and I did not want to miss that,” said Delcroix. “I was so happy to confirm myself these first amateur images on this bright storm on Uranus, feeling I was living a very special moment for planetary amateur astronomy.” Uranus’s atmosphere is composed mainly of hydrogen and helium with a blue tint caused by small amounts of methane, which condenses into highly reflective clouds of methane ice when it rises into the cold upper atmosphere. Since the planet has no internal source of heat, its atmospheric activity is believed to be driven solely by sunlight, which is now weak in the northern hemisphere, so astronomers were surprised when these observations showed such intense activity. Interestingly, the extremely bright storm seen by the 10-metre Keck II telescope in the near infrared is not the one seen by the amateurs.
De Pater’s colleague Larry Sromovsky, a planetary scientist at the University of Wisconsin, Madison, identified the amateur spot as one of the few features on the Keck Observatory images from 5th August that was only seen at 1.6 microns, and not at 2.2 microns. The 1.6-micron light is emitted from deeper within the Uranian atmosphere, which means that this feature is below the uppermost cloud layer of methane ice. “These unexpected observations remind us keenly of how little we understand about atmospheric dynamics in outer planet atmospheres,” wrote De Pater, Sromovsky, Hammel and Pat Fry of the University of Wisconsin in their report delivered to a meeting of the American Astronomical Society’s Division for Planetary Sciences in Tucson, Arizona on 12th November.
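The tiny 3.7-arcsecond disc quoted at the top of this piece follows directly from the small-angle formula theta = D/d. The distance is the figure given in the article; Uranus' equatorial diameter of roughly 51,100 km is an assumed round number consistent with "four times the diameter of Earth":

```python
import math

diameter_km = 51_100   # Uranus' equatorial diameter (assumed round figure)
distance_km = 2_870e6  # 19 AU, the distance quoted in the article

theta_rad = diameter_km / distance_km          # small-angle approximation
theta_arcsec = math.degrees(theta_rad) * 3600  # radians -> arcseconds
print(round(theta_arcsec, 1))  # 3.7, matching the quoted disc size
```

For comparison, Jupiter spans ten times that angle or more at opposition, which is why amateur detail on Uranus and Neptune is such a frontier.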