NASA’s GRAIL app has specific information about the science and purpose of the mission, and features daily mission news updates, images of the spacecraft during assembly and testing, videos and a countdown timer to launch. The act of two or more aircraft flying together in a disciplined, synchronized manner is one of the cornerstones of military aviation, as well as just about any organized air show. But as amazing as the U.S. Navy's elite Blue Angels or the U.S. Air Force's Thunderbirds are to behold, they remain essentially landlocked, anchored if you will, to our planet and its tenuous atmosphere. What if you could take the level of precision of these great aviators to, say, the moon? "Our job is to ensure our two GRAIL spacecraft are flying a very, very accurate trail formation in lunar orbit," said David Lehman, GRAIL project manager at NASA's Jet Propulsion Laboratory in Pasadena, Calif. "We need to do this so our scientists can get the data they need." Essentially, trail formation means one aircraft (or spacecraft in this case), follows directly behind the other. Ebb and Flow, the twins of NASA's GRAIL (Gravity Recovery And Interior Laboratory) mission, are by no means the first to synch up altitude and "air" speed while zipping over the craters, mountains, hills and rills of Earth's natural satellite. That honor goes to the crew of Apollo 10, who in May 1969 performed a dress rehearsal for the first lunar landing. But as accurate as the astronauts aboard lunar module "Snoopy" and command module "Charlie Brown" were in their piloting, it is hard to imagine they could keep as exacting a position as Ebb and Flow. "It is an apples and oranges comparison," said Lehman. "Lunar formation in Apollo was about getting a crew to the lunar surface, returning to lunar orbit and docking, so they could get back safely to Earth. For GRAIL, the formation flying is about the science, and that is why we have to make our measurements so precisely." 
As the GRAIL twins fly over areas of greater and lesser gravity at 3,600 mph (5,800 kilometers per hour), surface features such as mountains and craters, and masses hidden beneath the lunar surface, can influence the distance between the two spacecraft ever so slightly. How slight a distance change can be measured by the science instrument beaming invisible microwaves back and forth between Ebb and Flow? How about one-tenth of one micron? Put another way, the GRAIL twins can detect a change in their separation of just 0.000004 inches (0.00001 centimeters), a small fraction of the width of a human hair. For those of you who are hematologists or vampires (we are not judging here), any change in separation between the twins greater than one half of a red corpuscle will be duly noted aboard the spacecraft's memory chips for later downlinking to Earth. Working together, Ebb and Flow will make these measurements while flying over the entirety of the lunar surface. This raises the question: why would scientists care about a change of distance between two spacecraft as infinitesimal as half a red corpuscle, a quarter million miles from Earth? "Mighty oaks from little acorns grow – even in lunar orbit," said Maria Zuber, principal investigator of the GRAIL mission from the Massachusetts Institute of Technology, Cambridge. "From the data collected during these minute distance changes between spacecraft, we will be able to generate an incredibly high-resolution map of the moon's gravitational field. From that, we will be able to understand what goes on below the lunar surface in unprecedented detail, which will in turn increase our knowledge of how Earth and its rocky neighbors in the inner solar system developed into the diverse worlds we see today." Getting the GRAIL twins into a hyper-accurate formation from a quarter million miles away gave the team quite a challenge. Launched together on Sept. 10, 2011, Ebb and Flow went their separate ways soon after entering space. 
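The ranging measurement described above comes down to tracking the phase of a microwave carrier between the two spacecraft. A minimal sketch, assuming a Ka-band carrier near 32 GHz (the actual GRAIL processing chain is far more elaborate than this illustration):

```python
import math

# Sketch: carrier phase change produced by a change in separation.
# Assumes a Ka-band carrier near 32 GHz; illustrative only.
C = 299_792_458.0  # speed of light, m/s

def phase_shift_rad(delta_separation_m, carrier_hz=32e9):
    """Phase change (radians) caused by a path-length change."""
    wavelength_m = C / carrier_hz
    return 2 * math.pi * delta_separation_m / wavelength_m

# A 0.1 micron change in separation shifts the carrier phase by
# only ~67 microradians -- a tiny but measurable fraction of a cycle.
dphi = phase_shift_rad(1e-7)
```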
Three-and-a-half months and 2.5 million miles (4 million kilometers) later, Ebb entered lunar orbit. Flow followed the next day (New Year's Day 2012). "Being in lunar orbit is one thing; being in the right lunar orbit for science can be something else entirely," said Joe Beerer, GRAIL's mission manager from JPL. "The twins' initial orbit carried them as close to the lunar surface as 56 miles (90 kilometers) and as far out as 5,197 miles (8,363 kilometers), and each revolution took approximately 11.5 hours to complete. They had to go from that to a science orbit of 15 by 53 miles (24.5 by 86 kilometers), one that takes just 114 minutes to complete." Reducing and refining Ebb and Flow's orbits efficiently and precisely required the GRAIL team to plan and execute a series of trajectory modification burns for each spacecraft. And each maneuver had to be just right. "Because each one of these maneuvers was so important, we did a lot of planning and testing for each," said Beerer. "Over eight weeks, we did nine maneuvers with Ebb and 10 with Flow to establish the science formation. We would literally be watching our screens for a signal telling us about an Ebb rocket burn, then go into a meeting about the next burn for Flow. Our schedule was very full." Today, the calendar for GRAIL's flight team remains a busy one with the day-to-day operations of keeping NASA's lunar twins in synch. But as busy as the team gets, they still have time to peer skyward. "Next time you look up and see the moon, you might want to take a second and think about our two little spacecraft flying formation, zooming from pole to pole at 3,600 mph," said Lehman. "They're up there, working together, flying together, getting the data our scientists need. As far as I'm concerned, they're putting on quite a show." NASA's Jet Propulsion Laboratory in Pasadena, Calif., manages the GRAIL mission for NASA's Science Mission Directorate, Washington. 
The Massachusetts Institute of Technology, Cambridge, is home to the mission's principal investigator, Maria Zuber. The GRAIL mission is part of the Discovery Program managed at NASA's Marshall Space Flight Center in Huntsville, Ala. Lockheed Martin Space Systems in Denver built the spacecraft. JPL is a division of the California Institute of Technology in Pasadena.
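The orbit periods Beerer quotes can be sanity-checked with Kepler's third law; a quick sketch using standard values for the Moon's gravitational parameter and mean radius:

```python
import math

GM_MOON = 4902.8   # km^3/s^2, lunar gravitational parameter
R_MOON = 1737.4    # km, mean lunar radius

def period_minutes(peri_alt_km, apo_alt_km):
    """Orbital period from periapsis/apoapsis altitudes via Kepler's third law."""
    a = R_MOON + (peri_alt_km + apo_alt_km) / 2  # semi-major axis, km
    return 2 * math.pi * math.sqrt(a**3 / GM_MOON) / 60

initial_min = period_minutes(90, 8363)   # about 11.5 hours
science_min = period_minutes(24.5, 86)   # about 114 minutes
```

Both quoted figures fall straight out of the altitudes given in the article.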
Some stars end their lives with a huge explosion called a supernova. The most famous supernovae are the result of a massive star exploding, but a white dwarf, the remnant of an intermediate-mass star like our Sun, can also explode. This can occur if the white dwarf is part of a binary star system. The white dwarf accretes material from the companion star, and at some point it might explode as a type Ia supernova. Because of the uniform and extremely high brightness (about 5 billion times brighter than the Sun) of type Ia supernovae, they are often used for distance measurements in astronomy. However, astronomers are still puzzled by how these explosions are ignited. Moreover, these explosions only occur about once every 100 years in any given galaxy, making them difficult to catch. An international team of researchers led by Ji-an Jiang, a graduate student of the University of Tokyo, and including researchers from the University of Tokyo, the Kavli Institute for the Physics and Mathematics of the Universe (IPMU), Kyoto University, and the National Astronomical Observatory of Japan (NAOJ), tried to solve this problem. To maximize the chances of finding a type Ia supernova in the very early stages, the team used Hyper Suprime-Cam mounted on the Subaru Telescope, a combination that can capture an ultra-wide area of the sky at once. They also developed a system to detect supernovae automatically in the heavy flood of data during the survey, which enabled real-time discoveries and timely follow-up observations. They discovered over 100 supernova candidates in one night with Subaru/Hyper Suprime-Cam, including several supernovae that had exploded only a few days earlier. In particular, they captured a peculiar type Ia supernova within a day of its explosion. Its brightness and color variation over time are different from those of any previously discovered type Ia supernova. They hypothesized that this object could be the result of a white dwarf with a helium layer on its surface. 
Igniting the helium layer would lead to a violent chain reaction and cause the entire star to explode. This peculiar behavior can be fully explained with numerical simulations calculated using the supercomputer ATERUI. "This is the first evidence that robustly supports a theoretically predicted stellar explosion mechanism!" said Jiang. This result is a step towards understanding the origin of type Ia supernovae. The team will continue to test their theory against other supernovae by detecting more and more supernovae just after the explosion. The details of their study were published in Nature on October 5.
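The "uniform brightness" property mentioned above is exactly what makes type Ia supernovae usable as distance markers: comparing a known absolute magnitude with the observed apparent magnitude gives the distance directly. A sketch, taking M ≈ -19.3 as a typical type Ia peak absolute magnitude (an illustrative value, not one from this study):

```python
# Distance from the distance modulus: m - M = 5 * log10(d / 10 pc).
# M = -19.3 is a typical type Ia peak absolute magnitude (illustrative).

def distance_pc(apparent_mag, absolute_mag=-19.3):
    """Distance in parsecs implied by apparent and absolute magnitudes."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A type Ia seen at apparent magnitude 15 lies roughly 72 megaparsecs away.
d = distance_pc(15.0)
```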
It’s a cornerstone of modern physics that nothing in the Universe is faster than the speed of light (c). However, Einstein’s theory of special relativity does allow for instances where certain influences appear to travel faster than light without violating causality. These are what is known as “photonic booms,” a concept similar to a sonic boom, where spots of light are made to move faster than c. And according to a new study by Robert Nemiroff, a physics professor at Michigan Technological University (and co-creator of Astronomy Picture of the Day), this phenomenon may help shine a light (no pun intended) on the cosmos, helping us to map it with greater efficiency. Consider the following scenario: if a laser is swept across a distant object – in this case, the Moon – the spot of laser light will move across the object at a speed greater than c. Basically, the collection of photons is accelerated past the speed of light as the spot traverses both the surface and depth of the object. The resulting “photonic boom” occurs in the form of a flash, which is seen by the observer when the speed of the light drops from superluminal to below the speed of light. It is made possible by the fact that the spots contain no mass, and so do not violate the fundamental laws of Special Relativity. Another example occurs regularly in nature, where beams of light from a pulsar sweep across clouds of space-borne dust, creating a spherical shell of light and radiation that expands faster than c when it intersects a surface. Much the same is true of fast-moving shadows, whose speed is not restricted to the speed of light if the surface is angled. At a meeting of the American Astronomical Society in Seattle, Washington earlier this month, Nemiroff shared how these effects could be used to study the universe. “Photonic booms happen around us quite frequently,” said Nemiroff in a press release, “but they are always too brief to notice. 
Out in the cosmos they last long enough to notice — but nobody has thought to look for them!” Superluminal sweeps, he claims, could be used to reveal information on the 3-dimensional geometry and distance of stellar bodies like nearby planets, passing asteroids, and distant objects illuminated by pulsars. The key is finding ways to generate them or observe them accurately. For the purposes of his study, Nemiroff considered two example scenarios. The first involved a beam being swept across a scattering spherical object – i.e. spots of light moving across the Moon and pulsar companions. In the second, the beam is swept across a “scattering planar wall or linear filament” – in this case, Hubble’s Variable Nebula. In the former case, asteroids could be mapped out in detail using a laser beam and a telescope equipped with a high-speed camera. The laser could be swept across the surface thousands of times a second and the flashes recorded. In the latter, shadows are observed passing between the bright star R Monocerotis and reflecting dust, at speeds so great that they create photonic booms that are visible for days or weeks. This sort of imaging technique is fundamentally different from direct observations (which rely on lens photography), radar, and conventional lidar. It is also distinct from Cherenkov radiation – electromagnetic radiation emitted when charged particles pass through a medium at a speed greater than the speed of light in that medium. A case in point is the blue glow emitted by an underwater nuclear reactor. Combined with the other approaches, it could allow scientists to gain a more complete picture of objects in our Solar System, and even distant cosmological bodies. Nemiroff’s study has been accepted for publication by the Publications of the Astronomical Society of Australia, and a preliminary version is available online at arXiv Astrophysics.
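The spot-speed arithmetic behind these "photonic booms" is simple: a spot swept at angular rate ω across a surface at distance d moves at v = ω·d, so superluminal motion only requires ω > c/d. A sketch using the Moon's mean distance as an illustrative target:

```python
# Minimum angular sweep rate for a laser spot on a surface at distance d
# to move faster than light: omega > c / d. No mass or information moves
# across the surface, so special relativity is not violated.
C = 299_792_458.0          # speed of light, m/s
MOON_DISTANCE_M = 3.844e8  # mean Earth-Moon distance, m

def min_sweep_rate(distance_m):
    """Angular rate (rad/s) above which the swept spot is superluminal."""
    return C / distance_m

omega = min_sweep_rate(MOON_DISTANCE_M)  # about 0.78 rad/s, roughly 45 deg/s
```

Flicking a laser pointer across the Moon by hand easily exceeds this rate, which is why the swept-spot example recurs in discussions of apparent faster-than-light motion.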
The search for life elsewhere in the universe is one of the most compelling aspects of modern science. Given its scientific importance, significant resources are devoted to this young science of astrobiology, ranging from rovers on Mars to telescopic observations of planets orbiting other stars. The holy grail of all this activity would be the actual discovery of alien life, and such a discovery would likely have profound scientific and philosophical implications. But extraterrestrial life has not yet been discovered, and for all we know may not even exist. Fortunately, even if alien life is never discovered, all is not lost: simply searching for it will yield valuable benefits for society. Why is this the case? First, astrobiology is inherently multidisciplinary. To search for aliens requires a grasp of, at least, astronomy, biology, geology, and planetary science. Undergraduate courses in astrobiology need to cover elements of all these different disciplines, and postgraduate and postdoctoral astrobiology researchers likewise need to be familiar with most or all of them. By forcing multiple scientific disciplines to interact, astrobiology is stimulating a partial reunification of the sciences. It is helping to move 21st-century science away from the extreme specialisation of today and back towards the more interdisciplinary outlook that prevailed in earlier times. By producing broadminded scientists, familiar with multiple aspects of the natural world, the study of astrobiology therefore enriches the whole scientific enterprise. It is from this cross-fertilization of ideas that future discoveries may be expected, and such discoveries will comprise a permanent legacy of astrobiology, even if they do not include the discovery of alien life. It is also important to recognise that astrobiology is an incredibly open-ended endeavour. 
Searching for life in the universe takes us from extreme environments on Earth, to the plains and sub-surface of Mars, the icy satellites of the giant planets, and on to the all-but-infinite variety of planets orbiting other stars. And this search will continue regardless of whether life is actually discovered in any of these environments or not. The range of entirely novel environments opened to investigation will be essentially limitless, and so has the potential to be a never-ending source of scientific and intellectual stimulation.

The cosmic perspective

Beyond the more narrowly intellectual benefits of astrobiology are a range of wider societal benefits. These arise from the kinds of perspectives – cosmic in scale – that the study of astrobiology naturally promotes. It is simply not possible to consider searching for life on Mars, or on a planet orbiting a distant star, without moving away from the narrow Earth-centric perspectives that dominate the social and political lives of most people most of the time. Today, the Earth is faced with global challenges that can only be met by increased international cooperation. Yet around the world, nationalistic and religious ideologies are acting to fragment humanity. At such a time, the growth of a unifying cosmic perspective is potentially of enormous importance. In the early years of the space age, the then US ambassador to the United Nations, Adlai Stevenson, said of the world: “We can never again be a squabbling band of nations before the awful majesty of outer space.” Unfortunately, this perspective is yet to sink deeply into the popular consciousness. On the other hand, the wide public interest in the search for life elsewhere means that astrobiology can act as a powerful educational vehicle for the popularisation of this perspective. 
Indeed, it is only by sending spacecraft out to explore the solar system, in large part for astrobiological purposes, that we can obtain images of our own planet that show it in its true cosmic setting. In addition, astrobiology provides an important evolutionary perspective on human affairs. It demands a sense of deep, or big, history. Because of this, many undergraduate astrobiology courses begin with an overview of the history of the universe. This begins with the Big Bang and moves successively through the origin of the chemical elements, the evolution of stars, galaxies, and planetary systems, the origin of life, and evolutionary history from the first cells to complex animals such as ourselves. Deep history like this helps us locate human affairs in the vastness of time, and therefore complements the cosmic perspective provided by space exploration. I think there is an important political implication inherent in this perspective: as an intelligent technological species, that now dominates the only known inhabited planet in the universe, humanity has a responsibility to develop international social and political institutions appropriate to managing the situation in which we find ourselves. There is a well-known aphorism, widely attributed to the Prussian naturalist Alexander von Humboldt, to the effect that “the most dangerous worldview is the worldview of those who have not viewed the world”. Humboldt was presumably thinking about the mind-broadening potential of international travel. But familiarity with the cosmic and evolutionary perspectives provided by astrobiology, powerfully reinforced by actual views of the Earth from space, can surely also act to broaden minds in such a way as to make the world less fragmented and dangerous. 
In concluding his monumental Outline of History in 1925, HG Wells famously observed: “Human history becomes more and more a race between education and catastrophe.” Such an observation appears especially germane to the geopolitical situation today, where apparently irrational decisions, often made by governments (and indeed by entire populations) seemingly ignorant of broader perspectives, may indeed lead our planet to catastrophe.
This is the second in a three-part series on the search for extraterrestrial life. On November 16, 1974, astronomers at the Arecibo radio telescope in Puerto Rico broadcast a powerful signal into outer space. Aiming their transmitter at a star cluster on the edge of our galaxy, they sent out a series of pings — 1,679 of them, to be exact. Why that number? They knew 1,679 was unusual: it is the product of 23 and 73, each a prime number, divisible only by one and itself. Such a product would be unlikely to occur in nature. So the scientists hoped that if any aliens intercepted their broadcast, the number would show them that the pings were an intentional signal. It might then help them decode the hidden message those pings contained (including pictures of DNA, the solar system and a stick figure). Searching for aliens may sound like science fiction. Yet for many scientists, it has become serious business. Here we meet three who are using math in their quest to find other living beings in our universe. One is calculating the likelihood of finding life on other planets. Another is trying to figure out where best to beam a “hello” to E.T. The third is looking for a common language with extraterrestrials — and it will likely be numbers.

If we could talk to the aliens

Douglas Vakoch has spent a lot of time thinking about what he’d like to say to E.T. He is president of METI International in San Francisco, Calif. (METI stands for Messaging Extraterrestrial Intelligence.) His group is focused on broadcasting signals to outer space in the hope of contacting a civilization on some other world. Vakoch wants to use bright lights, such as lasers or perhaps a powerful radio telescope like the one at Arecibo (Air-eh-SEE-boh). But the big question: How could he write a message that aliens would understand? “We don’t expect the extraterrestrials to be speaking English or German,” Vakoch explains. 
“So we look to mathematics as a universal language.” The idea is simple. You need to understand math to build things. Any world advanced enough to have the technology to pick up our signals should also know how to work with numbers. It’s not a new idea. Back in the 1820s, when astronomers still thought there might be little green men living on the moon, they suggested using geometry — the math of shapes — to communicate with them. One scientist suggested planting trees or using mirrors to draw an enormous triangle in Siberia, a part of Russia. Another proposed digging a giant trench in the shape of a circle and filling it with kerosene. Then someone would light it on fire at night so that it would be visible from space. For these scientists, math was a way to show the aliens not only that we were here, but also that we were intelligent. Vakoch’s plan is a little closer to what the Arecibo scientists tried in 1974. Back then, they used a binary system: two signals at slightly different frequencies. By sending out the signals in a series of bursts that form a pattern, scientists could create a kind of code, or draw pictures. The Arecibo team used its code to send a dense message. It included pictures. Vakoch would start with something simpler: counting. His first message would be seven signals at the same frequency: “ping-ping-ping-ping-ping-ping-ping.” Next, he’d send seven signals again but using two frequencies, like this: “ping-pong-pong-pong-pong-pong-ping.” He’d repeat that sequence four more times, then finish with seven “pings” again. If you draw that pattern on a piece of paper, you’ll see what the aliens will see if they decode his message: a box. Next, Vakoch would add a third frequency to the code. By dropping in the third frequency at different places in the box, he could count numbers up to 25. By using a binary system — a way of representing numbers by combining zeros and ones — he could count into the millions. 
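Vakoch's two-frequency "box" is easy to visualise; a short sketch that renders each ping as '#' and each pong as '.', following the burst pattern described above:

```python
# Seven bursts per row: one all-ping row, five ping-pong-...-ping rows,
# and a closing all-ping row -- the decoded picture is a box outline.
rows = ["#" * 7] + ["#" + "." * 5 + "#"] * 5 + ["#" * 7]
picture = "\n".join(rows)
print(picture)
```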
(The binary system is commonly used here on Earth. It can be found encoding the data in every computer.) Once he introduced the code, Vakoch could then use it to send information. For instance, he might try to transmit the periodic table of the elements. It would list chemicals by their atomic numbers. This would show the aliens that we understand what the universe is made of. Another message might contain the Fibonacci sequence. This is a series of numbers that increase, with each successive number being the sum of the two before it. It’s a pattern that commonly appears both in nature and human art. Even though he’s speaking in numbers, Vakoch wants to do more than count at the aliens. For him, math is just a tool to establish more meaningful communication. In the end, he says, “I want to know something about their culture, their society, their value system and what they see as beautiful.”

Stay on target

So you want to talk to an alien. Just point your transmitter at the nearest star system and press “send,” right? Wrong, says Philip Lubin. He’s a physicist at the University of California, Santa Barbara who works on directed-energy systems. These are powerful lasers that could be used to flash signals at other stars. While radio signals spread out as they travel across space, lasers are tightly focused. That means it’s important to aim them precisely. Being off by just a few degrees to either side could cause the signal to miss its target. Big as a star is, hitting it with a laser is not easy. For one thing, when you look at a star in the sky, you’re seeing light that has been traveling through space for years — maybe thousands of years. “What you see is where the star was,” Lubin says. But while its light was traveling to Earth, the star has moved. So you have to project your message into the direction where you think that star will be when your message is due to arrive. 
And don’t forget, it will take years for the light from Lubin’s lasers to travel through space in the other direction. And that star is still moving. “It’s like taking a flashlight and trying to shine it at a spacecraft flying by,” he says. “If you want to shine your flashlight at it and have it hit it, you have to know something about the trajectory of the spacecraft.” Astronomers use math to determine proper motion — a measurement of how objects in outer space change their apparent position in our sky. To do this, the scientists calculate the object’s angle relative to Earth. Next, they figure out how fast it’s moving and in what direction. Many astronomical objects are so distant that those angles are measured in arcseconds or even smaller milli-arcseconds, tiny units describing angles far smaller than one degree. By calculating proper motion, Lubin can figure out where a star system will be when his signal arrives. “You have to figure out not only where the star is now, but where it will be in the future,” he emphasizes.

Is anybody out there?

For many scientists, trying to communicate with aliens is jumping the gun. They are asking more basic questions: Are we alone in the universe? What are the odds that life exists anywhere else? These scientists use math to figure out whether Earth is likely to be a lonely outpost in space, or one of many inhabited worlds in a universe teeming with life. More than 50 years ago, astronomer Frank Drake devised an equation to estimate the number of extraterrestrial civilizations whose signals we might pick up from Earth. To get this number, he multiplied many factors. These included the rate at which new stars form, the number of stars with planets that host life and the number of life-bearing planets where that life would be intelligent. Just one problem: Almost all of the variables in this now-famous “Drake Equation” are still unknown. 
“It’s not an equation that you can make predictions with,” says Avi Loeb. “It’s an equation that summarizes what we don’t know.” Loeb is a physicist at Harvard University in Cambridge, Mass. He decided to look at the search for extraterrestrial life from a different perspective. Instead of asking how much life exists in the universe, he wanted to know when in the history of the universe life would be most likely to develop. For this, Loeb developed an equation of his own. It looks at different types of stars, the rate at which they form and how long they live. When he crunched the numbers, Loeb came up with a surprising conclusion: On the scale of cosmic time, the glory days when the universe is full of life might still be far ahead of us. Many scientists had assumed that life most likely would occur in star systems similar to our own. After all, we know our sun can support life. If life exists elsewhere in the universe, sun-like stars are probably where we would find it, right? Those sun-like stars usually burn out after some 6 billion years, Loeb knew. Yet there are stars that live longer. Some very small ones can survive for around 10 trillion years! And many of these small stars have planets. Might these planets also support life? “If the answer is yes, then we know we [on Earth] are premature,” he says. Stars like our sun burn out quickly. So when our sun and its kin are gone, “the life that will remain is life around low-mass stars,” he argues. These are those tiny stars. Unlike the Drake Equation, Loeb’s math contains only one unknown variable: whether low-mass stars can host life. He hopes other scientists will investigate that question in the decades ahead. “Once we know that, it can be folded into my equation,” he says. Scientists in the search for extraterrestrial intelligence, or SETI, know they are unlikely to meet a Vulcan or Klingon in their lifetime. Still, they are excited to explore our universe for signs of life. 
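The Drake Equation mentioned above simply multiplies seven factors, N = R* × fp × ne × fl × fi × fc × L. A sketch; every input below is an illustrative guess, since, as the article notes, almost all of these factors remain unknown:

```python
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime_yr):
    """N, the estimated number of detectable civilizations in the galaxy:
    star-formation rate x fraction of stars with planets x habitable
    planets per system x fractions developing life, intelligence, and
    technology x average lifetime of a signalling civilization."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime_yr

# One arbitrary, optimistic set of inputs yields N = 5 civilizations:
n = drake(1.0, 0.5, 2.0, 0.5, 0.1, 0.1, 1000.0)
```

Changing the guessed inputs swings N across many orders of magnitude, which is exactly Loeb's point about the equation summarizing what we don't know.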
Whether they’re figuring out the odds that we’re alone, or writing messages to aliens and beaming them out to other worlds, they couldn’t carry out this search without turning to numbers. This is one in a series on careers in science, technology, engineering and mathematics made possible with generous support from Arconic Foundation.
Hello, Space: The Golden Record

In 1977, two spacecraft were launched from Earth, never to return: Voyager 1 and Voyager 2. They would travel so fast that eventually, they’d leave the solar system behind altogether, and become the first interstellar objects ever made. On board each Voyager was a Golden Record – a greeting to anyone out there that should find it. It was the least scientific element of the Voyager program, but its cultural importance is still felt today. This is the story of how humankind waved hello into the infinite void.

The Decision to Talk to the Stars

The Voyager program started life as a “Grand Tour” of the outer planets of the solar system. It capitalised on a rare orbital alignment that would bring Jupiter, Saturn, Uranus and Neptune into close proximity, allowing a space probe to visit all of them in a single trip. It would give humans the first close look at the giant planets and reveal amazing, unexpected discoveries – like active volcanoes and the tantalising prospect of liquid water on the moons of Jupiter and Saturn. The flight path required enormous speeds – Voyager 1 was propelled to the fastest continuous speed of any human-made object. Although it is continuously slowing down, Voyager 1 has a current speed of around 38,000 mph – more than enough to carry it into interstellar space. It’s well over 13 billion miles from home, the farthest any human-made object has ever travelled. Voyager 2 is a little slower, so it’s a couple of billion miles behind: the final encounter with Neptune resulted in a net loss of speed and a sharp change of course. It’s still going fast enough to escape the solar system – and will carry on travelling, like its twin Voyager 1, for as long as the path ahead of it is clear. That could be forever – or until it’s found. It was known from the start of the mission that the Voyagers would leave the solar system. 
The cultural significance of this was embraced by Carl Sagan, perhaps the most important celebrity scientist in modern history. The wandering spacecraft would be an ageless, floating monument to human civilisation – which he likened to a message in a bottle. If they were found by intelligent extraterrestrials, how would we tell them who we were, where we came from and how we lived? Carl Sagan oversaw the creation of the Golden Record – a time capsule to be attached to each of the Voyagers.

Message in a Bottle

The message was encoded in diagrams and audio, data and images. It would require a species with intelligence to decode it, and to devise a way to play a record (unless they’d already been making their own music players). The reverse side of the Golden Record is engraved with instructions on how to play it, a diagram of where it came from and universal units of time. Nobody knew if a civilisation in receipt of the record would have eyes, or ears – and even if they did, would they be able to translate it? And why a record? Was there an assumption that aliens would be into vinyl? Not exactly – at the time, records were the best available method of audio playback (and a lot of people would say that’s still true today). As far as physical storage goes, a record is incredibly robust: magnetic tape and hard drives would be stripped bare by the unimaginable magnetic field of Jupiter, and space radiation would destroy film or printed images, which would bleach and fade. Even vinyl would be damaged beyond repair by the environment of space – so a specially made, copper record was designed, to be plated with pure gold: it won’t tarnish or show a hint of degradation for at least a billion years. If it avoids impacts or getting melted by a star, it may last as long as the universe itself – and still be playable. The record is also date stamped – but not with the day, month or year: these concepts are meaningless to anything other than humans. 
Instead, an ultra-pure sample of the isotope uranium-238 is embedded in the record’s cover. It has a very predictable rate of decay, meaning the record can be dated in the same way ancient things on our own planet are dated. So, this message in a bottle tells the discoverer where it came from, how to use it and how old it is. What about when they play the record? A greeting was recorded in 55 languages. Chuck Berry, the great works of Bach and Mozart, obscure folk songs from around the planet and even whale song feature on the record. Sounds of the planet, from human laughter and animal vocalisations to vehicles and tools being used, are also included. NASA released the non-musical audio content on Soundcloud, where anyone can listen to it for free. There were images encoded on the record too. They depict people from around the world, how we eat and drink, our homes, how our babies grow and what we can do physically. There are images of Earth from space, Jupiter and even diagrams of our DNA. It’s a broad look at us and our world. The sounds and images cast into space are a poignant reminder to ourselves that we’re human – and that we want to be liked. The fact that we’re showing only our best side was an eerie foreshadowing of how we tend to behave on social media: we haven’t shared the ideological ugliness, the wars and the destruction of our natural world. Only our very best. The Golden Record should be the benchmark of what our best is: a peaceful hand, reaching across infinity, waiting to be taken. Even if it’s never found, never played and never understood – we have committed the best of ourselves to the stars. Let’s commit the same to our home world.
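The uranium clock on the record's cover works by simple exponential decay. A minimal sketch of how a finder could read it, using the known ~4.47-billion-year half-life of uranium-238 (the isotope electroplated onto the covers); the 90% figure in the example is an illustrative assumption, not a measurement:

```python
import math

U238_HALF_LIFE_YR = 4.468e9  # half-life of uranium-238, in years


def age_from_fraction_remaining(fraction: float) -> float:
    """Infer elapsed time from the fraction of U-238 still undecayed.

    Exponential decay: N(t) = N0 * 2**(-t / T_half),
    which rearranges to  t = T_half * log2(N0 / N(t)).
    """
    return U238_HALF_LIFE_YR * math.log2(1.0 / fraction)


# A finder measuring that 90% of the original uranium remains could
# date the record to roughly 680 million years after launch.
print(f"{age_from_fraction_remaining(0.9):.3e} years")
```

The same relation is how radiometric dating works on Earth, just with isotopes suited to shorter or longer timescales.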
NASA’s Hubble Telescope Detects Smallest Clumps Of Dark Matter Dark matter is the non-luminous matter thought to make up about 85% of the matter in the Universe. According to the Cold Dark Matter (CDM) theory, all the galaxies in the Universe are surrounded by regions filled with dark matter. Now, observations made by NASA’s Hubble Space Telescope have found physical evidence that confirms one of the predictions of the theory. Tommaso Treu from the University of California, Los Angeles (UCLA) said: “We made a very compelling observational test for the cold dark matter model and it passes with flying colours.” Cold, in the context of CDM, refers to the slow speed of the particles that make up dark matter. The theory describes dark matter as being made of subatomic particles distinct from the electrons, protons, and neutrons that make up normal matter. Because these particles move slowly, they come together to form clumps of dark matter. While it is not possible to observe dark matter directly, astronomers can detect its presence indirectly by observing the effects of its gravity on stars and galaxies. But the Hubble research team used a different technique, observing eight distant quasars. A quasar is an extremely luminous active galactic nucleus, which the team used as a ‘streetlight’ to illuminate the dark matter along the line of sight. Gravity from galaxies in the foreground magnifies the quasars’ light through gravitational lensing, allowing scientists to detect dark matter clumps between the quasars and the telescope.
Astronomers have previously detected dark matter clumps surrounding medium and large galaxies; now, with Hubble, even smaller clumps of dark matter have been observed. The new observations show that dark matter is even colder at smaller scales, as Anna Nierenberg from NASA's Jet Propulsion Laboratory explains: "Astronomers have carried out other observational tests of dark matter theories before, but ours provides the strongest evidence yet for the presence of small clumps of cold dark matter. By combining the latest theoretical predictions, statistical tools, and new Hubble observations, we now have a much more robust result than was previously possible." Image Credit: NASA/ESA/ D. Player (STScI)
If our Solar System had a ‘hot Jupiter’ that migrated inward after Mars, Earth and Venus had formed, would any of the terrestrial planets have survived? It’s a question worth pondering given how many hot Jupiters we’ve turned up, and it bears directly on how these planets form in the first place. One possibility is formation in situ, close to the parent star. But there is also an argument for migration, with planets forming in cooler regions further out in the system and migrating inward as a result of interactions with the protoplanetary disk or other planets. Perhaps the planet known as K2-33b can help us with some of this. It is no more than 11 million years old, in an orbit that produces a transit every 5.4 days. With follow-up observations by the MEarth arrays on Mount Hopkins (AZ) and at the Cerro Tololo Inter-American Observatory in Chile, researchers led by Andrew Mann (University of Texas at Austin) have been able to determine that K2-33b is a Neptune-class world some five times the size of Earth, orbiting at a distance of about 8 million kilometers. The host is an M-class star several million years old. “Young stars tend to be very blotchy, with starspots that can mimic a transiting planet. Our observations ruled out stellar activity and proved that the Kepler signal came from a bona fide planet,” says Elisabeth Newton of the Harvard-Smithsonian Center for Astrophysics (CfA), co-author of a study slated to appear in the Astronomical Journal. “We were also able to measure the planet’s size and orbit more accurately.” High resolution imaging using the Keck II instrument and Doppler spectroscopy at McDonald Observatory in Texas also confirmed the planetary nature of the detection. Two teams went to work independently on this world, the second led by Trevor David (Caltech), using data from the W. M. Keck Observatory in Hawaii to validate the planet.
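The quoted 5.4-day period and ~8-million-kilometer distance hang together via Kepler's third law. A quick consistency check; the 0.5-solar-mass figure for the M-dwarf host is an assumption chosen for illustration, not a value taken from the papers:

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # solar mass, kg


def semi_major_axis(period_s: float, star_mass_kg: float) -> float:
    """Kepler's third law: a^3 = G * M * T^2 / (4 * pi^2)."""
    return (G * star_mass_kg * period_s**2 / (4 * math.pi**2)) ** (1 / 3)


period = 5.4 * 86400  # the 5.4-day orbit, in seconds
a = semi_major_axis(period, 0.5 * M_SUN)  # assumed ~0.5 M_sun host
print(f"~{a / 1e9:.1f} million km")
```

With that assumed stellar mass the orbit comes out around seven million kilometers, the same order as the article's ~8 million km; the exact figure depends on the host star's true mass.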
The hope of both is that K2-33b will help us understand planet formation, particularly since the parent star still retains portions of its disk material, a fact confirmed by the Spitzer instrument. Caltech’s David comments: “Astronomers know that star formation has just completed in this region, called Upper Scorpius, and roughly a quarter of the stars still have bright protoplanetary disks. The remainder of stars in the region do not have such disks, so we reasoned that planet formation must be nearly complete for these stars, and that there would be a good chance of finding young exoplanets around them.” Image: K2-33b, shown in this illustration, is one of the youngest exoplanets detected to date and makes a complete orbit around its star in about five days. These two characteristics combined provide new directions for planet-formation theories. K2-33b could have formed on a farther out orbit and quickly migrated inward. Alternatively, it could have formed in situ. Credit: NASA/JPL-Caltech/R. Hurt. Given K2-33b’s proximity to its star, migration would have occurred early indeed, or else the planet is an indication that giant planets actually form this close to their host. Mann’s team points to these possibilities and sketches a course of future research. From the paper, which is to appear in The Astronomical Journal: The upper limit on K2-33b’s age provided by its ≃11 Myr stellar host suggests that it either migrated inwards via disk migration or formed in-situ, as planet-star and planet-planet interactions work on much longer timescales… This discovery makes it unlikely that such long-term dynamical interactions are responsible for all close-in planets. However, it is difficult to draw conclusions about the dominant migration or formation mechanism for close-in planets given the sample size and incomplete understanding of our transit-search pipeline’s completeness. 
That’s a telling point, for selection effects may be at work here — K2-33b may have an atypical history that made its detection easier for a planet of this age. To learn more, we need to widen the search: A full search of all young clusters and stellar associations surveyed by the K2 mission, with proper treatment of detection completeness is underway. This, along with improved statistics provided by the TESS and PLATO missions, will provide an estimate of the planet occurrence rate as a function of time. Trends (or a lack of trends) in this occurrence rate could set constraints on planetary migration timescales. Publishing in Nature, Trevor David’s team takes note of K2-33b’s peculiarities, which could indeed point to the planet being an outlier: Interestingly, large planets are rarely found close to mature low-mass stars; fewer than 1% of M-dwarfs host Neptune-sized planets with orbital periods of < 10 days, while ∼ 20% host Earth-sized planets in the same period range. This may be a hint that K2-33b is still contracting, losing atmosphere, or undergoing radial migration. Future observations may test these hypotheses, and potentially reveal where in the protoplanetary disk the planet formed. What we do have indisputable evidence for is that a large planet can be found at a small orbital distance not long after the dissipation of the system’s nebular gas. Given the short timescales available here, the paper argues, tidal circularization of an eccentric planet or planet-planet or planet-star interactions cannot explain K2-33b’s current orbit. Formation in place or migration from within the gas disk remain as possibilities. We have a lot of work ahead to figure out just how unusual this planet is, and whether or not it is still in the process of adjusting its orbit. 
The papers are Mann et al., “Zodiacal Exoplanets in Time (ZEIT) III: A short-period planet orbiting a pre-main-sequence star in the Upper Scorpius OB Association,” accepted at The Astronomical Journal (preprint); and David et al., “A Neptune-sized transiting planet closely orbiting a 5–10-million-year-old star,” Nature, published online 20 June 2016 (abstract).
The International Space Station (ISS) has been forced to alter trajectory numerous times over the years, but not for any scientific or logistical reason — it was necessary to avoid collisions with space junk. The day of simply stepping out of the way could be coming to an end, though. Researchers from Japan’s Riken Computational Astrophysics Laboratory have proposed a system that could blast dangerous space debris out of the sky before it comes close to the ISS. Scientists estimate there are nearly 3,000 tons of space junk in orbit of the Earth. These are pieces of rocket boosters, decoupling rings, and smaller objects like screws or paint chips. Many of these objects have very low mass, but they can be moving upwards of 20,000 miles per hour relative to the station. That can mean a lot of impact energy. Anything larger than 0.4 inches is considered to be dangerous to the ISS, and even a single breach of the station’s hull could mean big trouble. Thus far, the protocol for dealing with a potential impact has been to give the station a nudge (with enough warning), and for the crew to take shelter in a docked ship that can return to Earth in the event of a collision. The new, more aggressive approach is focused on the Extreme Universe Space Observatory (EUSO), scheduled to be installed on Japan’s ISS module in 2017. This is not by design a space-junk-killing piece of equipment. It’s intended to monitor the atmosphere for ultraviolet emissions caused by cosmic rays. But astrophysicist Toshikazu Ebisuzaki says it could also be used to precisely track nearby space junk that could pose a danger to the station. The business end of the proposed laser system would be a Coherent Amplification Network (CAN) laser that can focus a single powerful beam on a piece of debris. The laser would vaporize the surface of the target, causing a plume of plasma to push the object away from the station and toward the atmosphere.
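The "lot of impact energy" is easy to quantify with the standard kinetic-energy formula. A back-of-envelope sketch; the one-gram paint-chip mass is an illustrative assumption, while the 20,000 mph closing speed is the figure quoted above:

```python
# Kinetic energy of a tiny debris fragment at orbital closing speed.
# KE = 1/2 * m * v^2. The 1-gram mass is an illustrative assumption.
mass_kg = 0.001              # ~1 gram paint chip (assumed)
speed_ms = 20_000 * 0.44704  # 20,000 mph converted to m/s (~8,940 m/s)

kinetic_energy_j = 0.5 * mass_kg * speed_ms**2
print(f"~{kinetic_energy_j / 1000:.0f} kJ")
```

That works out to roughly 40 kJ, about the energy released by ten grams of TNT, from a fleck of paint, which is why even sub-centimeter debris is treated as a hull threat.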
The full-scale version of this system would use a 100,000-watt ultraviolet CAN laser capable of firing 10,000 pulses per second. That would give it a range of about 60 miles, which should be more than enough distance to keep the station safe. This is still just a proposal, but a test version of the laser might be deployed to the station in a few years. This would probably be a more modest system to prove the idea is viable — perhaps just a 10-watt laser capable of 100 pulses per second. A miniature EUSO telescope to go with the test laser has already been accepted as a project for the ISS and could be delivered as soon as 2017 or 2018. If this system proves effective on the ISS, space agencies could put it on satellites that could sweep larger parts of the skies clear of debris. Such a satellite could start in a high orbit of 620 miles and gradually spiral downward several miles each month, blasting junk along the way. In several years, one such satellite could eliminate many of the dangerous objects from orbit.
If you do a search of articles on Universe Today, you’ll find that a large number of our posts reference the Sloan Digital Sky Survey. SDSS is a comprehensive survey to map the sky, using a dedicated 2.5 meter telescope equipped with a 125-megapixel digital camera and spectrographs. Since 2000, SDSS has created terabytes of data that include thousands of deep, multi-color images, covering more than one-quarter of the sky. SDSS is literally changing the way astronomers do their work, and represents a thousand-fold increase in the total amount of data that astronomers have collected to date. In a new book, “A Grand and Bold Thing: An Extraordinary New Map of the Universe Ushering in a New Era of Discovery,” science journalist Ann Finkbeiner tells the story of how SDSS came about (frighteningly, the survey almost didn’t happen), delving into some of the discoveries made as a result of this survey, and sharing how even armchair astronomers are now probing the far reaches of the Universe with SDSS. SDSS has measured the distances to nearly one million galaxies and over 100,000 quasars to create the largest ever three-dimensional maps of cosmic structure. It also spawned one of our favorite citizen science projects: Galaxy Zoo. For three years, Ann Finkbeiner researched and interviewed astronomers to get the story behind SDSS, to tell the little-known story of this grand project, and how it soon grew into a far vaster undertaking than founder Jim Gunn could have imagined. The book is extremely readable, and Finkbeiner captures the personalities who brought the project to life. If you thought Earth-based observing was passé, this book will make you re-think the future of astronomy. Finkbeiner is a freelance science writer who has been covering astronomy and cosmology for over two decades. She has written feature articles for Science, Sky & Telescope, Astronomy, and more, with columns for USA Today and Defense Technology International.
She is co-author of The Guide to Living with HIV Infection (Johns Hopkins University Press, 1991; sixth edition, 2006), which won the American Medical Writers Association book award. She is also author of “After the Death of a Child,” and “The Jasons,” which won the American Institute of Physics’ Science Writing Award in 2008. Below is a Q & A with Finkbeiner about “A Grand and Bold Thing.” Q: What made you first want to write this book? A: I was finishing a magazine article about the Sloan Digital Sky Survey just as I was beginning the interviews for a book—The Jasons—for which no one at all wanted to talk to me. But the Sloanies I was interviewing were so happy about what they were doing, so intense about it all, and so open (they even showed me their gazillion archived emails) that writing a book about them felt like it would be a blessed relief, like leaving boot camp and going to a good block party. I was writing the magazine article in the first place because I’d attended a talk by Jim Gunn at Johns Hopkins, and while I listened, I realized I hadn’t heard any news from him for a long time. So afterward, I asked him why he’d gone off radar. He told me he’d been working on getting a survey going, using a little 2.5 meter telescope, and I wasn’t impressed. I thought it was an odd use of his splendid capabilities. I was impressed later, though, when he stayed off radar and I found out that other excellent scientists were doing the same. I started wondering why they were giving up their careers for a sky survey. Q: Has perception of the project changed from the time you first started writing about it until now? A: Between the time I first heard about it—in the late 1990s—and the present, the perception of the project changed dramatically: today, it’s hard to overstate its importance. But astronomers’ early reactions to the survey were what mine had been: Little telescope. Not spectacular resolution. Can’t go very deep in the past.
Astronomers who knew the value of a survey and Jim’s reputation for building nearly-perfect instruments were quicker to see the potential, but the project’s many, many management problems led to the community taking pot shots at the Sloanies. Then when funding agencies started refusing to give astronomers money because the Sloan was going to do their pet projects better than they would, Sloan became a dirty word. Now, astronomers say it changed the way they do their work. Q: What do you think have been the most important benefits of the Sloan Survey’s completion? A: The Sloan was, and still is, the only systematic, beautifully-calibrated survey of the sky and everything in it. And it’s the first survey to be digital. Astronomy before Sloan was photographic, meaning you were at a rich university that owned a telescope, you decided which objects in the sky you liked and took photographs of them, and kept them for yourself. If you wanted to use the only survey of the sky, you bought expensive photographs of it. After the Sloan, you download the objects you want to study onto your computer for free. So whether you’re an astronomer or a regular person, you can study anything you want to with some of the most trustworthy data going. And if you don’t want to learn astronomy jargon and query languages, you can go to GalaxyZoo.com and join the 300,000 people doing astronomy on the Internet using this data. The Sloan has democratized astronomy. It’s made “citizen science” real. And it’s about to become redundant because it triggered a population of other newer, bigger surveys. Q: What do you think the story of the Sloan Survey tells us about current cosmological thought? A: Before Sloan, cosmology was seen as a fluffy science: the universe is big, distant, and hard to observe, so the phrase “precision cosmology” would have been an in-joke. But Sloan’s data is so comprehensive and exquisite that precision cosmology is now the norm. 
Before the Sloan, cosmology was fractured into many fields whose relation to each other wasn’t obvious and wasn’t being studied. Sloan found all kinds of things in all areas of astronomy: asteroids in whole families, stars that had only been theories, star streams around the Milky Way, the era when quasars were born, the evolution of galaxies, the structure of the universe on the large scale, and compelling evidence for dark energy. So after the Sloan, cosmologists began seeing the universe as a whole, as a single system with parts that interact and evolve. Q: Work like this costs an enormous amount of money, but doesn’t yield the sort of practical results the average American can see. What is the best argument to continue funding science like this? A: The main Sloan survey cost $85 million over 10 or 15 years. In the realm of government budgets, that’s spare change. It cost so little partly because the scientists gave their time for free—they had university salaries already. And since this free time came at the expense of their own research and personal reputations, they’re a case study in altruism. In addition, the universe is mankind’s most fundamental context; and astronomy and cosmology have, I think, some of the appeal of philosophy and religion. Put scientific intelligence together with altruism and questions of origin and place in the universe, throw in beautiful pictures, and I’d give it money in a minute. Q: There are a lot of good stories behind the making of the Survey. What are some of your personal favorites? A: My all-time favorite is Galaxy Zoo, which started when a couple of Sloanies needed to know which galaxies were spirals, which were ellipticals, and which were irregular. But Sloan had a million galaxies, which is a lot for any human to sort through: computers are no good at identifying shapes, humans are superb at it. So the Sloanies put the million galaxies on the internet, asked for help, and within a day, their computer server melted. 
There are now 300,000 Galaxy Zooites of all ages, all levels of education, from all over the world, and they’ve gone way beyond classifying shapes. Hanny van Arkel, a Dutch primary school teacher, found a strange blue object the Zooites called Hanny’s Voorwerp, and after follow-ups with X-ray, ultraviolet, and radio telescopes (not to mention the Hubble Space Telescope), the Voorwerp turned out to be a place in an enormous cloud of gas which was being hit by a hard X-ray jet from a galactic-sized black hole. Zooites also found a new kind of greenish, round galaxy, and then found enough of them that they’re now officially called Green Pea galaxies. Green Peas turn out to be small, nearby, previously unknown galaxies in which stars are being born at a furious rate. Then Zooites went off and taught themselves serious astronomical techniques and began collecting and studying irregular galaxies; astronomers knew of 161 irregulars, the Zooites found 19,000 of them and called their project Do It Ourselves. I also love Jim Gunn’s professional trajectory from fame to invisibility, and while invisible, his fight-starting and progress-impeding insistence on doing everything as well as it can possibly be done. When Jim started the Sloan, he was extremely famous and highly respected. He walked away from his own research and spent the next 30 years (he’s still doing it) first putting together the collaboration, then building the camera, while also overseeing and micromanaging every detail of every piece of hardware, software, and politics. He’s a perfectionist whose motto is: “if you don’t do it right to begin with, you’ll have to do it again, no matter what the bloody cost and schedule says.” He caused no end of arguments, particularly when the “young astronomers” involved adopted the same motto.
The perfectionism was finally controlled, on the surface anyway, by a remarkable project manager, but Jim and the young astronomers kept doing it right on their own time and without permission. The Sloan’s whole value today is that it’s nearly perfect, and this precision has enabled much of its most important contributions. Jim’s now nominally retired and in any case, has turned the survey over to the young astronomers who have, in their turn, turned it over to the whole astronomical community and to the public. Q: One thing that might surprise readers is how “political” scientists sometimes have to be in working with their colleagues, other institutions, and even asking for funding. Why is this, and has it always been this way? A: It’s been that way ever since science stopped being a gentleman’s hobby—Jim’s phrase, “gentleman astronomers in their coats and ties”—and began getting funding from foundations and the government. The amount of funding is limited and everyone has to compete for the same small, fixed pot. It’s hair-raising. The astronomical community solves this brilliantly: they find out what everybody else is doing, then they do something different and complementary, and finally they get together and tell the funders what the community’s priorities are. The result is that astronomy keeps getting funded. Meanwhile, individual astronomers are free to be competitive and dog-eat-dog, just as their human nature requires. Q: What do you hope readers take away from this book? A: The joy and entertainment of watching these impressively intelligent and persistent guys fumble around until they’ve done something remarkable.
Although scientists are increasingly using pint-size satellites sometimes no larger than a loaf of bread to gather data from low-Earth orbit, they have yet to apply the less-expensive small-satellite technology to observe physical phenomena far from terra firma. Jaime Esper, a technologist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, however, is advancing a CubeSat concept that would give scientists that capability. Dubbed the CubeSat Application for Planetary Entry Missions (CAPE), the concept involves the development of two modules: a service module that would propel the spacecraft to its celestial target and a separate planetary entry probe that could survive a rapid dive through the atmosphere of an extraterrestrial planet, all while reliably transmitting scientific and engineering data. Esper and his team are planning to test the stability of a prototype entry vehicle — the Micro-Reentry Capsule (MIRCA) — this summer during a high-altitude balloon mission from Fort Sumner, New Mexico. “The CAPE/MIRCA concept is like no other CubeSat mission,” Esper said. “It goes the extra step in delivering a complete spacecraft for carrying out scientific investigations. We are the only researchers working on a concept like this.” Under his concept, the CAPE/MIRCA spacecraft, including the service module and entry probe, would weigh less than 11 pounds (4.9 kilograms) and measure no more than 4 inches (10.1 centimeters) on a side. After being ejected from a canister housed by its mother ship, the tiny spacecraft would unfurl its miniaturized solar panels or operate on internal battery power to begin its journey to another planetary body. Once it reached its destination, the sensor-loaded entry vehicle would separate from its service module and begin its descent through the target’s atmosphere. It would communicate atmospheric pressure, temperature, and composition data to the mother ship, which then would transmit the information back to Earth.
The beauty of CubeSats is their versatility. Because they are relatively inexpensive to build and deploy, scientists could conceivably launch multiple spacecraft for multi-point sampling — a capability currently not available with single planetary probes that are the NASA norm today. Esper would equip the MIRCA craft with accelerometers, gyros, thermal and pressure sensors, and radiometers, which measure specific gases; however, scientists could tailor the instrument package depending on the targets, Esper said. Balloon Flight to Test Stability The first step in realizing the concept is demonstrating a prototype of the MIRCA design during a balloon mission this summer. According to the plan, the capsule, manufactured at NASA’s Wallops Flight Facility on Virginia’s Eastern Shore, would be dropped from the balloon gondola at an altitude of about 18.6 miles (30 kilometers) to test the design’s aerodynamic stability and operational concept. During its free fall, MIRCA is expected to reach speeds of up to Mach 1, roughly the speed of sound. “If I can demonstrate the entry vehicle, I then could attract potential partners to provide the rest of the vehicle,” Esper said, referring to the service module, including propulsion and attitude-control subsystems. He added that the concept might be particularly attractive to universities and researchers with limited resources. In addition to the balloon flight, Esper said he would like to drop the entry vehicle from the International Space Station perhaps as early as 2016 — a test that would expose the capsule to spaceflight and reentry heating conditions and further advance its technology-readiness level. “The balloon drop of MIRCA will in itself mark the first time a CubeSat planetary entry capsule is flight tested, not only at Goddard, but anywhere else in the world,” he said. 
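A drag-free estimate shows why a transonic drop speed is plausible from that release altitude. This is a sketch under stated simplifications: it ignores air resistance entirely (which in reality caps the capsule's descent near Mach 1) and treats gravity as constant over the drop.

```python
import math

g = 9.81              # m/s^2, treated as constant over the drop
drop_height = 30_000  # release altitude in meters (~18.6 miles)

# In a vacuum the capsule would reach v = sqrt(2 * g * h) ...
v_vacuum = math.sqrt(2 * g * drop_height)

# ... but atmospheric drag limits the real descent to roughly the
# local speed of sound, ~300 m/s in the cold stratosphere.
mach_1_stratosphere = 300.0
print(f"vacuum free-fall: {v_vacuum:.0f} m/s "
      f"(Mach {v_vacuum / mach_1_stratosphere:.1f} equivalent)")
```

The vacuum figure of well over 700 m/s makes clear that drag, not the available altitude, is what holds the drop to "up to Mach 1."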
“That in turn enables new opportunities in planetary exploration not available to date, and represents a game-changing opportunity for Goddard.”
New research links the odd and unexplained six-degree tilt of our Sun to an undiscovered planet in the outer reaches of our solar system. It’s even more evidence that Planet Nine is for real. A new paper published in the Astrophysical Journal posits the hypothesis that a large and distant planet at the outer reaches of the solar system is causing the unusual tilt of our sun. All planets in our solar system orbit in roughly the same flat plane (give or take a few degrees), but that plane is tilted six degrees with respect to the sun’s equator. The reason for this crooked angle hasn’t been explained. However, as Caltech’s Konstantin Batygin, Mike Brown, and graduate student Elizabeth Bailey show in their new study, a large planet far, far away could produce this very effect. Earlier this year, Batygin and Brown rocked the science world when they presented evidence pointing to the existence of an undiscovered planet—one about 10 times the mass of Earth and with an orbital period of around 15,000 years. The smoking gun was the unlikely orbital configuration of celestial objects in orbit beyond the Kuiper Belt—configurations that could only be explained through the presence of a large gravitational body in the outer reaches of the solar system. In their latest study, the researchers claim that Planet Nine’s gravitational effects are also being felt at the very core of the solar system. “Because Planet Nine is so massive and has an orbit tilted compared to the other planets, the solar system has no choice but to slowly twist out of alignment,” noted Bailey in a statement. The solar system’s tilt has troubled astronomers for years, and they’ve been unable to come up with satisfactory explanations. Normally, because other planets in the solar system reside along a flat plane, their angular momentum helps to keep the whole disk spinning smoothly.
But Planet Nine, and its unusual—albeit hypothetical—orbit, is adding a wobble to our solar system—one 4.5 billion years in the making. Previous calculations suggest that Planet Nine’s orbit is about 30 degrees off kilter from the other planets’ orbital plane. Intriguingly, given the hypothesized size, distance, and orbital angle of Planet Nine, a six-degree stellar tilt fits perfectly. “It continues to amaze us; every time we look carefully we continue to find that Planet Nine explains something about the solar system that had long been a mystery,” said Batygin. Looking ahead, astronomers would like to figure out how Planet Nine achieved its strange and distant orbit. One theory is that Jupiter kicked it out as the gas giant migrated inwards in the early days of the solar system. Astronomers also need direct evidence of the camera-shy Planet Nine in the form of an actual sighting. Encouragingly, Brown and Batygin are working with astronomers to do exactly that.
Distortion to the orbits of asteroids beyond Pluto implies a mystery planet is tugging at them, claims an astronomer. The evidence for ‘Planet X’ – the mysterious hypothesised planet on the edge of our solar system – has taken a new turn thanks to the mathematics of a noted astronomer. Rodney Gomes, an astronomer at the National Observatory of Brazil in Rio de Janeiro, says the irregular orbits of small icy bodies beyond Neptune imply that a planet four times the size of Earth is swirling around our sun in the fringes of the solar system. Planet X – perhaps mis-named now that Pluto has been demoted to a dwarf planet – has been widely hypothesised for decades, but has never been proven. The hypothetical planet – four times the size of Earth – would float beyond Neptune and Pluto and cause disturbances in the Kuiper belt. Gomes measured the orbits of 92 Kuiper belt objects – small bodies and dwarf planets – and said that six appeared to be tugged off-course compared to their expected orbits. He told astronomers at the American Astronomical Society on Tuesday that the most likely reason for the irregular orbits was a ‘planetary-mass solar companion’ – a distant body of planetary size that is powerful enough to move the Kuiper belt objects. He suggested the planet would be four times bigger than Earth – around the size of Neptune – and would be 140 billion miles from the sun, or about 1,500 times further than the Earth. Alternatively, an object the size of Mars on an irregular orbit that brought it to within five billion miles of the sun – close to Neptune’s orbit – could be the solution. However, due to the distances involved, it will be tough for Earthbound astronomers to catch a glimpse of the hypothetical newest member of our solar system. Even non-planet Pluto is hard to spot thanks to the distances involved. While other astronomers are on the astronomical fence, they have applauded his methods.
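A quick unit check on the figures above — a sketch; the 92.96-million-mile astronomical unit is the standard mean Earth-sun distance, not a number from the article:

```python
# 1,500 times the Earth-sun distance, expressed in miles.
AU_MILES = 92.96e6          # standard mean Earth-sun distance in miles

distance_miles = 1_500 * AU_MILES
print(f"{distance_miles / 1e9:.0f} billion miles")  # ~139 billion
```

This lands within rounding of the "140 billion miles" quoted in the article.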
Rory Barnes, from the University of Washington, told National Geographic that Gomes ‘has laid out a way to determine how such a planet could sculpt parts of our solar system. ‘So while, yes, the evidence doesn’t exist yet, I thought the bigger point was that he showed us that there are ways to find that evidence. ‘I don’t think he really has any evidence that suggests it is out there.’ Hal Levison, from the Southwest Research Institute in Boulder, Colorado, said: ‘It seems surprising to me that a [solar] companion as small as Neptune could have the effect he sees. ‘[But] I know Rodney, and I’m sure he did the calculations right.’ The previous ninth planet, Pluto, is one of the largest of the Kuiper belt dwarf planets, at some 1,400 miles wide. It was downgraded by the International Astronomical Union in 2006 for failing to meet all the criteria of a ‘planet’, namely that its mass is not sufficient to clear its orbit of surrounding objects. Reference: Daily Mail
The mysterious asteroid Vesta may well have more surprises in store. Despite past observations suggesting that Vesta is nearly bone dry, newly published research indicates that about half of the giant asteroid is cold and dark enough that water ice could theoretically exist below the battered surface. Scientists working at NASA’s Goddard Space Flight Center in Greenbelt, Md., and the University of Maryland have derived the first models of Vesta’s average global temperatures and illumination by the Sun, based on data obtained from the Hubble Space Telescope. “Near the north and south poles, the conditions appear to be favorable for water ice to exist beneath the surface,” says Timothy Stubbs of NASA’s Goddard Space Flight Center in Greenbelt, Md., and the University of Maryland, Baltimore County. The research by Timothy Stubbs and Yongli Wang, of the Goddard Planetary Heliophysics Institute at the University of Maryland, was published in the January 2012 issue of the journal Icarus. If any water lurks beneath Vesta, it would most likely exist at least 10 feet (3 meters) below the north and south poles, because the models predict that the poles are the coldest regions on the giant asteroid and the equatorial regions are too warm. If proven, the existence of water ice at Vesta would have vast implications for the formation and evolution of the tiny body and would upend current theories. The surface of Vesta is not cold enough for ice to survive all the time because, unlike the Moon, it probably does not have any significant permanently shadowed craters where water ice could stay frozen on the surface indefinitely. Even the huge 300-mile-diameter (480-kilometer) crater at the south pole is not a good candidate for water ice, because Vesta is tilted 27 degrees on its axis, a bit more than Earth’s tilt of 23 degrees. By contrast, the Moon is tilted only 1.5 degrees and possesses many permanently shadowed craters.
NASA’s LCROSS impact mission proved that water ice exists inside permanently shadowed lunar craters. The models predict that the average annual temperature around Vesta’s poles is below minus 200 degrees Fahrenheit (145 kelvins). Water ice is not stable above that temperature in the top 10 feet of Vestan soil, or regolith. At the equator and in a band stretching to about 27 degrees north and south in latitude, the average annual temperature is about minus 190 degrees Fahrenheit (150 kelvins), which is too high for the ice to survive. “On average, it’s colder at Vesta’s poles than near its equator, so in that sense, they are good places to sustain water ice,” says Stubbs in a NASA statement. “But they also see sunlight for long periods of time during the summer seasons, which isn’t so good for sustaining ice. So if water ice exists in those regions, it may be buried beneath a relatively deep layer of dry regolith.” Vesta is the second most massive asteroid in the main asteroid belt between Mars and Jupiter. NASA’s Dawn asteroid orbiter is the very first mission to Vesta; it achieved orbit in July 2011 for a yearlong mission. Dawn is currently circling Vesta at its lowest planned orbit. The three science instruments are snapping pictures, and the spectrometers are collecting data on the elemental and mineralogical composition of Vesta. The onboard GRaND spectrometer in particular could shed light on the question of whether water ice exists at Vesta. So far no water has been detected, but the best data are yet to come. In July 2012, Dawn fires up its ion thrusters and spirals out of orbit to begin the journey to Ceres, the largest asteroid of them all. Ceres is believed to harbor huge caches of water, either as ice or in the form of oceans, and is a potential habitat for life.
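The Fahrenheit and kelvin figures quoted above can be cross-checked with the standard temperature conversion; a minimal sketch:

```python
# Convert the quoted Fahrenheit temperatures to kelvins to check the
# rounded kelvin values in the text.
def fahrenheit_to_kelvin(f):
    return (f - 32.0) * 5.0 / 9.0 + 273.15

poles   = fahrenheit_to_kelvin(-200)   # ~144 K, quoted as "below ... 145 kelvins"
equator = fahrenheit_to_kelvin(-190)   # ~150 K
print(f"poles:   {poles:.0f} K")
print(f"equator: {equator:.0f} K")
```
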
As explained in earlier blogs, I have discovered, through extensive experience in the field, that Newtonian reflectors can render beautiful, colour-free images of a wide variety of celestial objects, from the Moon and bright planets to a wealth of bodies beyond the solar system. In my own niche area of double star observing, I found that a simple 130mm f/5 Newtonian reflector resolved tighter pairs than a 90mm ED refractor and indeed, as deduced from my previous field notes, also exceeded the performance of a very fine 102mm f/15 classical achromat on good nights of seeing. Refractors have a well-earned reputation for garnering very stable images at the eyepiece, a consequence of the decent height of the entrance pupil above the ground, glass properties (crown & flint doublet objectives in particular), less intense tube currents, and relatively small apertures which are quite often immune to the vagaries of the atmosphere. In comparison, reflectors can be rather temperamental. With their need for precise collimation and greater tendency to manifest thermal effects, coupled to the (often) larger apertures employed in the field, Newtonians typically (but not always) serve up images that are significantly less stable from moment to moment. As I also explained before, this is not really a big issue for a seasoned observer, who has more than enough patience to wait out or simply ignore the various bugs that attend the use of a good Newtonian telescope. That having been said, I elected to investigate some simple modifications that could potentially ameliorate the effects of tube currents in the 130mm f/5 Newtonian, so as to stabilise its images as much as possible. This led me to insulate the inside of the thin, rolled aluminium tube that houses the optics of this small Newtonian.
My researches led me to explore the properties of cork, one of the best natural and renewable insulators from the Creation. I have heard many a yarn recounted by veteran observers that lining the inner tube with cork can dramatically dampen the effects of tube currents in Newtonian and compound telescopes. The idea seems to have originated sometime in the late 19th or early 20th century, but in more recent years some amateurs, whose work I trust, have also recommended cork as a suitable insulator for their telescope tubes. The theory is fairly simple: the thin aluminium tube is an excellent conductor and radiator of heat. Indeed, under a clear sky, the temperature of a metal tube rarely tracks the ambient air temperature perfectly; it can often fall a few degrees below the surrounding air during radiative cooling in the field. But by lining the inside of the tube with some kind of insulating material, one can keep this temperature differential between the aluminium tube and the ambient air to a minimum. This should create more stable images, especially at the highest powers, which would in turn make their visual study more profitable, as well as increasing their aesthetic appeal.

Materials & Methods

I checked out what types of cork were available and settled on the purchase of self-adhesive cork sheet, which arrived promptly from the seller. Next, both the primary and secondary optics as well as the focuser were completely removed from the aluminium tube, which was then lined with the cork sheet. Initially, I had intended simply to paint the cork a flat black colour, but I was unconvinced that it was really dark enough to compare with regular flocking material. I therefore elected to cover the cork with the flocking material, which, in effect, would act like a double layer of insulation.
After lining the tube with the cork overlaid with the flocking material, I also lined the drawtube of the focuser with more flocking material before putting the telescope back together again. I was very pleased with the light-dampening properties of the instrument during daylight hours and noted that the images served up by the telescope were a little more contrasty than before the flocking material was added. Finally, I was ready to study the images of a variety of high resolution targets to see whether or not this tube insulation worked in practice. My tests were carried out over a number of winter evenings, where the ambient temperatures sometimes fell to −10C. Most of my observations were conducted on the evenings of January 18, 19, 20 and 24, but also included some shorter vigils during more unsettled spells. The telescope was given time to cool off to near ambient before commencement of observations. The targets included some tricky double and multiple stars — theta Aurigae, delta Geminorum, iota Cassiopeiae — as well as easier subjects like Castor A & B. In each case, I charged the telescope with a power of between 260x and 406x diameters (so between 52 and 80x per inch of aperture) and studied the images as they moved across the field of view. The carefully focused stellar images were very impressively presented in the telescope and appeared significantly calmer (read less susceptible to thermal degradation) as they moved across the field, their forms morphing significantly less than I had previously noted in the uninsulated tube. Indeed, during these vigils I enjoyed some of the finest images yet garnered from this modest telescope. Specifically, the stellar Airy disks were much more in keeping with those I have enjoyed during the milder months of spring and summer. I was able to pick off faint and close companions much more easily and efficiently than I can remember when using the same optics in previous winters.
Insulating the inner metal tube most definitely improved the images from moment to moment, allowing me to enjoy their perfect forms for longer. My correspondence with some highly experienced observers also alerted me to other ways tube currents can be minimised or even completely abated in Newtonians, including housing the optics in an oversized tube, constructing non-cylindrical tubes (think hexagonal designs) and using active mirror cooling. A combination of all these strategies has been shown to improve image stability in reflectors, and each is well worth investigating in its own right. They will surely make an already good telescope into an excellent one. I intend to insulate my larger Newtonians in the same way in due course. The author would like to thank Martin Mobberley and Garyth64 for interesting discussions on cork.
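As a quick sanity check on the magnifications quoted above — a sketch, not part of the original post — the powers can be restated per inch of aperture:

```python
# Restate the quoted powers for the 130 mm Newtonian per inch of aperture.
APERTURE_MM = 130.0
MM_PER_INCH = 25.4

aperture_in = APERTURE_MM / MM_PER_INCH                      # ~5.1 inches
per_inch = {power: power / aperture_in for power in (260, 406)}
for power, ppi in per_inch.items():
    print(f"{power}x -> {ppi:.0f}x per inch")
```

The exact values come out near 51x and 79x per inch; the round figures of 52 and 80 in the text follow from treating the aperture as a flat 5 inches.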
I bet you didn’t know that when you shine a flashlight, creating a beam of white light, within that beam all the colors of the rainbow exist. That simple white beam of light is actually a combination of purple, blue, green, yellow, orange and red colored beams. If you take that beam of white light and send it through a glass prism, the different colors will show themselves vividly as they exit at different angles. Rainbows are created in a similar manner.

How Rainbows Are Formed

A rainbow is formed when the sun’s rays hit raindrops. Light from the sun passing through droplets of moisture in the air takes on the familiar arched shape that we see as a colorful rainbow in the sky. Did you know that red will always be the outermost color and the darker blue to violet will always be the innermost color? Sunlight is refracted as it enters a raindrop, which causes the different wavelengths (colors) of visible light to separate. Longer wavelengths of light (red) are bent the least, while shorter wavelengths (violet) are bent the most. A double rainbow is rarer, though still quite common. Double rainbows can only be seen when you have your back to the sun and it is raining in front of you. Also, the sun must be less than 42 degrees above the horizon. (That’s why morning and evening are the best times to view a double rainbow.) In a double rainbow, a second arc may be seen above and outside the primary arc, and has the order of its colors reversed. (Red faces inward toward the other rainbow, in both rainbows.) This second rainbow is caused by light reflecting twice inside water droplets. The region between a double rainbow is dark, and is known as Alexander’s band.
The reason for this dark band is that, while light below the primary rainbow comes from droplet reflection, and light above the upper (secondary) rainbow also comes from droplet reflection, there is no mechanism for the region between a double rainbow to show any light reflected from water drops. Wondering what a double rainbow means?
- It is a sign from the cosmic Universe that you are about to have something great fall into your lap.
- It is a sign that one good thing will lead to another.

Rainbows Close To Home

Rainbows over Lake Superior are very common. I see them often through the summer months, since there is quite a temperature variation here in Duluth, Minnesota, which affects the moisture content in the air. Lake level is 600 feet lower than the hillside which surrounds the city, so it can be quite cool down by the lake and 30 to 40 degrees warmer on top of the hill. This makes it nice in August, as there is no such thing as a hot muggy day in downtown Duluth. We’ve all heard the fable of the leprechaun’s pot of gold being located at the end of the rainbow. Sadly, I can personally debunk that particular fable. The atmosphere within my community had taken on a warm rose color. The winds were calm. The only sounds were those of chirping birds carrying on as if nothing strange was taking place. As the interesting color seemed to persist, the thought that it might be caused by a rainbow really didn’t enter my mind. I thought more along the lines of an impending storm, because the color permeated the air at ground level, rather than something you would see off in the distance. I thought it was a bit strange. As I drove away, leaving our subdivision, it took only about a block to exit from the rose-colored atmosphere and return to normal conditions. I continued on to work. However, when I was a little farther down the road, I looked back toward my home, and it was unmistakable.
The end of the arch, created by a very large rainbow extending well over the western part of the city, was directly over my street. As sad as I was to discover that great wealth was not coming my way, since there was no pot of gold at the end of the rainbow, it was still the only time in my life that I’ve been able to say I’ve been inside a rainbow!

Different Types Of Rainbows

The following resources will help you understand the different types of rainbows:
- Variations Of Rainbows
- Moonbows: Lunar Rainbows
- 5 Types Of Rainbows
- Rainbows & Other Optical Illusions In The Sky
- All About Rainbows
- No, Sun Dogs Are Not Rainbows
- Optical Effects Of Rainbows
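The 42-degree figure mentioned above is not arbitrary: it falls out of the geometry of a light ray refracting into a spherical drop, reflecting once off the back, and refracting out again. A minimal sketch of that calculation, assuming a refractive index of about 1.333 for water:

```python
# Find the angle of minimum deviation for the classic one-internal-reflection
# ray path through a water drop; the rainbow appears at 180 deg minus that.
import math

N_WATER = 1.333

def deviation_deg(incidence_deg):
    i = math.radians(incidence_deg)
    r = math.asin(math.sin(i) / N_WATER)   # refraction into the drop
    # two refractions plus one internal reflection:
    return 180.0 + 2.0 * incidence_deg - 4.0 * math.degrees(r)

# scan incidence angles in 0.1-degree steps
min_dev = min(deviation_deg(i / 10.0) for i in range(1, 900))
print(f"primary rainbow angle ~ {180.0 - min_dev:.1f} degrees")  # ~42
```

Because red and violet light have slightly different refractive indices, each color has its own minimum-deviation angle, which is why the colors fan out.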
Material from the sun may have caused Comet 67P/Churyumov-Gerasimenko to flare up nearly 100 times brighter than average in some parts of the visual spectrum, new research reports. At about the same time that charged solar particles slammed into Comet 67P, the European Space Agency's (ESA) Rosetta spacecraft observed that the icy wanderer dramatically brightened. Initially, scientists assumed that unusual effect came from jets of material within the comet. However, newly released observations of 67P suggest that a burst of charged particles from the sun, known as a coronal mass ejection (CME), could have caused the change. "The [brightening] was characterized by a substantial increase in the hydrogen, carbon and oxygen emission lines that increased by roughly 100 times their average brightness on the night of Oct. 5 and 6, 2015," John Noonan told Space.com. Noonan, who just completed his undergraduate degree at the University of Colorado at Boulder, presented the research at the Division for Planetary Sciences meeting in Pasadena, California, in October. After reading a report of a CME that hit 67P at the same time, Noonan realized that the increased emissions from water, carbon dioxide and molecular oxygen observed by Rosetta’s R-Alice instrument could all be explained by the collision of the comet with material jettisoned from the sun. "This doesn't yet rule out that an outburst could have happened, but it looks possible that all of the emissions could have been caused by the CME impact," Noonan said. Rosetta entered orbit around Comet 67P in August 2014, making detailed observations until the probe deliberately crashed into the icy body at the end of its mission in September 2016. So Rosetta was tagging along when Comet 67P made its closest pass to the sun in August 2015. (Such "perihelion passages" occur once every 6.45 years — the time it takes the icy object to circle the sun.)
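That 6.45-year figure pins down the comet's average distance from the sun. A small check via Kepler's third law (a sketch, using the AU-and-years form of the law for orbits around the sun):

```python
# Kepler's third law: a^3 = P^2 with a in AU and P in years.
period_years = 6.45
a_au = period_years ** (2.0 / 3.0)
print(f"semi-major axis ~ {a_au:.2f} AU")  # ~3.46 AU
```
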
As 67P neared the sun, newly warmed jets began to release gas from the surface, building up the cloud of debris around the nucleus known as the coma. Jets continued to spout throughout Rosetta's observations as different regions of the comet rotated into sunlight. Such spouts were initially credited with the extreme brightening that took place in October 2015. In addition to warming the comet, the sun also interacted with it through its solar wind, the constant rush of charged particles streaming into space in all directions. Occasionally, the sun also blows off the collections of plasma and charged particles known as CMEs. When CMEs collide with Earth, they can interact with the planet’s magnetic field to create dazzling auroral displays; this interaction can also damage power grids and satellites. Niklas Edberg, a scientist on the Rosetta Plasma Consortium Ion and Electron Spectrometer (RPC/IES) instrument on the spacecraft, and his colleagues recently reported that RPC/IES observed a CME impact on Rosetta at the same time as the bizarre brightening. The ESA/NASA Solar and Heliospheric Observatory (SOHO) spacecraft detected the CME as it left the sun on Sept. 30, 2015. According to Edberg, the CME compressed the plasma material around the comet. Because Rosetta was orbiting within the coma, the probe hadn't sampled any material streaming from the solar wind since the previous April, and wasn’t expected to do so for several more months. When the CME slammed into the comet, however, the coma was compressed and Rosetta briefly tasted part of the solar wind once again.
"This suggests that the plasma environment had been compressed significantly, such that the solar wind ions could briefly reach the detector, and provides further evidence that these signatures in the cometary plasma environment are indeed caused by a solar wind event, such as a CME," Edberg and his team wrote in their study, which was published in the journal Monthly Notices of the Royal Astronomical Society in September 2016.

Forces at play

For Noonan, the realization that a CME had impacted the comet at the same time as its unusual brightening had an illuminating effect. "I read this [Edberg et al.] paper and realized that the substantial increase in electron density could account for the increased emissions from the coma that R-Alice observed, and set about testing what the density of the coma's water, carbon dioxide and molecular oxygen components would have to be to match what we saw," Noonan said. Charged particles from the CME may have excited cometary material, causing it to release photons, he added. Some of the observed changes could be created only by interacting electrons, causing what Noonan called "unique fingerprints" that let the scientists know electrons were impacting the material. Of special importance was a transition of the oxygen line in the spectra, a change that can only be caused by electron impact. "During the course of the CME, we saw this line increase in strength roughly a hundredfold," Noonan said. The charged particles were unlikely to have come from the ordinary solar wind, which Noonan said would be blocked from penetrating this deep into the coma. While CMEs have been observed hitting other comets, those events have only been viewed remotely. From such great distances, only large-scale changes in the comets' comas and tails could be observed, Edberg said. Over the course of its two-year mission at Comet 67P, Rosetta's close orbit allowed it to observe other CMEs interacting with the comet, but Noonan said none were as noticeable as the event of Oct. 5-6, 2015.
"Prior to Rosetta, these electron impact emissions had never been observed around a comet, and it was these emissions that gave away that the CME might be a factor in causing them," Noonan said. He cautioned that it isn't a given that the influx of charged particles caused the bizarre brightening, which could still have been caused by jets of material. "At this point, we are still working to understand exactly what was the cause, to see if it was the CME, an outburst, or both, that caused the emission," Noonan said. Given the timing of the impact, however, it is unlikely that the flare-up was the result of gas released by jets alone. "There are more forces at play than just a higher density of gas," Noonan said.
Plate tectonics (from the Late Latin: tectonicus, from the Ancient Greek: τεκτονικός, lit. 'pertaining to building') is a scientific theory describing the large-scale motion of seven large plates and the movements of a larger number of smaller plates of the Earth's lithosphere, since tectonic processes began on Earth between 3.3 and 3.5 billion years ago. The model builds on the concept of continental drift, an idea developed during the first decades of the 20th century. The geoscientific community accepted plate-tectonic theory after seafloor spreading was validated in the late 1950s and early 1960s. The lithosphere, which is the rigid outermost shell of a planet (the crust and upper mantle), is broken into tectonic plates. The Earth's lithosphere is composed of seven or eight major plates (depending on how they are defined) and many minor plates. Where the plates meet, their relative motion determines the type of boundary: convergent, divergent, or transform. Earthquakes, volcanic activity, mountain-building, and oceanic trench formation occur along these plate boundaries (or faults). The relative movement of the plates typically ranges from zero to 100 mm annually. Tectonic plates are composed of oceanic lithosphere and thicker continental lithosphere, each topped by its own kind of crust. Along convergent boundaries, subduction, or one plate moving under another, carries the lower one down into the mantle; the material lost is roughly balanced by the formation of new (oceanic) crust along divergent margins by seafloor spreading. In this way, the total surface of the lithosphere remains the same. This prediction of plate tectonics is also referred to as the conveyor belt principle. Earlier theories, since disproven, proposed gradual shrinking (contraction) or gradual expansion of the globe. Tectonic plates are able to move because the Earth's lithosphere has greater mechanical strength than the underlying asthenosphere. 
Lateral density variations in the mantle result in convection; that is, the slow creeping motion of Earth's solid mantle. Plate movement is thought to be driven by a combination of the motion of the seafloor away from spreading ridges due to variations in topography (the ridge is a topographic high) and density changes in the crust (density increases as newly formed crust cools and moves away from the ridge). At subduction zones the relatively cold, dense oceanic crust is "pulled" or sinks down into the mantle over the downward convecting limb of a mantle cell. Another proposed driver is the tidal forces exerted by the Sun and Moon. The relative importance of each of these factors and their relationship to each other is unclear, and still the subject of much debate. The outer layers of the Earth are divided into the lithosphere and asthenosphere. The division is based on differences in mechanical properties and in the method for the transfer of heat. The lithosphere is cooler and more rigid, while the asthenosphere is hotter and flows more easily. In terms of heat transfer, the lithosphere loses heat by conduction, whereas the asthenosphere also transfers heat by convection and has a nearly adiabatic temperature gradient. This division should not be confused with the chemical subdivision of these same layers into the mantle (comprising both the asthenosphere and the mantle portion of the lithosphere) and the crust: a given piece of mantle may be part of the lithosphere or the asthenosphere at different times depending on its temperature and pressure. The key principle of plate tectonics is that the lithosphere exists as separate and distinct tectonic plates, which ride on the fluid-like (visco-elastic solid) asthenosphere. Plate motions range from a typical 10–40 mm/year (Mid-Atlantic Ridge; about as fast as fingernails grow) up to about 160 mm/year (Nazca Plate; about as fast as hair grows).
The driving mechanism behind this movement is described below. Tectonic lithosphere plates consist of lithospheric mantle overlain by one or two types of crustal material: oceanic crust (in older texts called sima, from silicon and magnesium) and continental crust (sial, from silicon and aluminium). Average oceanic lithosphere is typically 100 km (62 mi) thick; its thickness is a function of its age: as time passes, it conductively cools and subjacent cooling mantle is added to its base. Because it is formed at mid-ocean ridges and spreads outwards, its thickness is therefore a function of its distance from the mid-ocean ridge where it was formed. For a typical distance that oceanic lithosphere must travel before being subducted, the thickness varies from about 6 km (4 mi) at mid-ocean ridges to greater than 100 km (62 mi) at subduction zones; for shorter or longer distances, the subduction zone (and therefore also the mean) thickness becomes smaller or larger, respectively. Continental lithosphere is typically about 200 km thick, though this varies considerably between basins, mountain ranges, and stable cratonic interiors of continents. The location where two plates meet is called a plate boundary. Plate boundaries are commonly associated with geological events such as earthquakes and the creation of topographic features such as mountains, volcanoes, mid-ocean ridges, and oceanic trenches. The majority of the world's active volcanoes occur along plate boundaries, with the Pacific Plate's Ring of Fire being the most active and widely known today. These boundaries are discussed in further detail below. Some volcanoes occur in the interiors of plates, and these have been variously attributed to internal plate deformation and to mantle plumes. As explained above, tectonic plates may include continental crust or oceanic crust, and most plates contain both.
For example, the African Plate includes the continent and parts of the floor of the Atlantic and Indian Oceans. The distinction between oceanic crust and continental crust is based on their modes of formation. Oceanic crust is formed at sea-floor spreading centers, and continental crust is formed through arc volcanism and accretion of terranes through tectonic processes, though some of these terranes may contain ophiolite sequences, which are pieces of oceanic crust considered to be part of the continent when they exit the standard cycle of formation and spreading centers and subduction beneath continents. Oceanic crust is also denser than continental crust owing to their different compositions. Oceanic crust is denser because it has less silicon and more heavy elements ("mafic") than continental crust ("felsic"). As a result of this density stratification, oceanic crust generally lies below sea level (for example most of the Pacific Plate), while continental crust buoyantly projects above sea level (see isostasy for an explanation of this principle).

Types of plate boundaries

Three types of plate boundaries exist, with a fourth, mixed type, characterized by the way the plates move relative to each other. They are associated with different types of surface phenomena. The different types of plate boundaries are:
- Divergent boundaries (Constructive) occur where two plates slide apart from each other. At zones of ocean-to-ocean rifting, divergent boundaries form by seafloor spreading, allowing for the formation of new ocean basin. As the ocean plate splits, the ridge forms at the spreading center, the ocean basin expands, and finally, the plate area increases, causing many small volcanoes and/or shallow earthquakes. At zones of continent-to-continent rifting, divergent boundaries may cause new ocean basins to form as the continent splits, spreads, the central rift collapses, and ocean fills the basin.
Active zones of mid-ocean ridges (e.g., the Mid-Atlantic Ridge and East Pacific Rise) and continent-to-continent rifting (such as Africa's East African Rift Valley and the Red Sea) are examples of divergent boundaries.
- Convergent boundaries (Destructive) (or active margins) occur where two plates slide toward each other to form either a subduction zone (one plate moving underneath the other) or a continental collision. At zones of ocean-to-continent subduction (e.g. the Andes mountain range in South America, and the Cascade Mountains in the western United States), the dense oceanic lithosphere plunges beneath the less dense continent. Earthquakes trace the path of the downward-moving plate as it descends into the asthenosphere; a trench forms, and as the subducted plate is heated it releases volatiles, mostly water from hydrous minerals, into the surrounding mantle. The addition of water lowers the melting point of the mantle material above the subducting slab, causing it to melt. The magma that results typically leads to volcanism. At zones of ocean-to-ocean subduction (e.g. the Aleutian Islands, Mariana Islands, and the Japanese island arc), older, cooler, denser crust slips beneath less dense crust. This motion causes earthquakes and a deep trench to form in an arc shape. The upper mantle of the subducted plate then heats and magma rises to form curving chains of volcanic islands. Deep marine trenches are typically associated with subduction zones, and the basins that develop along the active boundary are often called forearc basins. Closure of ocean basins can occur at continent-to-continent boundaries (e.g., the Himalayas and Alps): collision between masses of granitic continental lithosphere; neither mass is subducted; plate edges are compressed, folded, and uplifted.
- Transform boundaries (Conservative) occur where two lithospheric plates slide, or perhaps more accurately, grind past each other along transform faults, where plates are neither created nor destroyed.
The relative motion of the two plates is either sinistral (left side toward the observer) or dextral (right side toward the observer). Transform faults commonly offset segments of spreading centers. Strong earthquakes can occur along these faults. The San Andreas Fault in California is an example of a transform boundary exhibiting dextral motion.
- Plate boundary zones occur where the effects of the interactions are unclear, and the boundaries, usually occurring along a broad belt, are not well defined and may show various types of movements in different episodes.

Driving forces of plate motion

It has generally been accepted that tectonic plates are able to move because of the relative density of oceanic lithosphere and the relative weakness of the asthenosphere. Dissipation of heat from the mantle is acknowledged to be the original source of the energy required to drive plate tectonics through convection or large scale upwelling and doming. The current view, though still a matter of some debate, asserts that as a consequence, a powerful source of plate motion is generated by the excess density of the oceanic lithosphere sinking in subduction zones. When new crust forms at mid-ocean ridges, this oceanic lithosphere is initially less dense than the underlying asthenosphere, but it becomes denser with age as it conductively cools and thickens. The greater density of old lithosphere relative to the underlying asthenosphere allows it to sink into the deep mantle at subduction zones, providing most of the driving force for plate movement. The weakness of the asthenosphere allows the tectonic plates to move easily towards a subduction zone. Although subduction is thought to be the strongest force driving plate motions, it cannot be the only force, since there are plates such as the North American Plate which are moving yet are nowhere being subducted. The same is true for the enormous Eurasian Plate.
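The cooling-and-thickening-with-age relationship just described can be sketched numerically. A minimal illustration using the standard half-space cooling model (the thermal diffusivity and the 2.32 prefactor are textbook values, not taken from this article):

```python
import math

def oceanic_lithosphere_thickness_km(age_myr, kappa=1.0e-6):
    """Thermal boundary-layer (lithosphere) thickness for ocean floor of a
    given age, from the half-space cooling model: h ~ 2.32 * sqrt(kappa * t).
    kappa is the thermal diffusivity in m^2/s (a typical mantle value)."""
    age_seconds = age_myr * 1.0e6 * 365.25 * 24 * 3600  # Myr -> s
    return 2.32 * math.sqrt(kappa * age_seconds) / 1000.0  # m -> km

# Young sea floor near a ridge is thin; old sea floor approaching a
# trench has thickened to roughly 100 km, as the text states.
for age in (1, 25, 80):
    print(age, round(oceanic_lithosphere_thickness_km(age), 1))
```

Because thickness grows as the square root of age, the oldest, coldest lithosphere is also the thickest and densest, which is what makes slab pull at subduction zones so effective.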
The sources of plate motion are a matter of intensive research and discussion among scientists. One of the main points is that the kinematic pattern of the movement itself should be separated clearly from the possible geodynamic mechanism that is invoked as the driving force of the observed movement, as some patterns may be explained by more than one mechanism. In short, the driving forces advocated at the moment can be divided into three categories based on their relationship to the movement: mantle dynamics related, gravity related (the main driving force accepted nowadays), and Earth rotation related. For much of the last quarter century, the leading theory of the driving force behind tectonic plate motions envisaged large scale convection currents in the upper mantle, which can be transmitted through the asthenosphere. This theory was launched by Arthur Holmes and some forerunners in the 1930s and was immediately recognized as supplying the mechanism that the continental drift originally discussed in the papers of Alfred Wegener in the early years of the century had lacked. Nevertheless, it was long debated in the scientific community, because the prevailing view still envisaged a static Earth without moving continents, up until the major breakthroughs of the early sixties. Two- and three-dimensional imaging of Earth's interior (seismic tomography) shows a varying lateral density distribution throughout the mantle. Such density variations can be material (from rock chemistry), mineral (from variations in mineral structures), or thermal (through thermal expansion and contraction from heat energy). The manifestation of this varying lateral density is mantle convection from buoyancy forces. How mantle convection directly and indirectly relates to plate motion is a matter of ongoing study and discussion in geodynamics. Somehow, this energy must be transferred to the lithosphere for tectonic plates to move.
There are essentially two main types of forces that are thought to influence plate motion: friction and gravity.
- Basal drag (friction): Plate motion driven by friction between the convection currents in the asthenosphere and the more rigid overlying lithosphere.
- Slab suction (gravity): Plate motion driven by local convection currents that exert a downward pull on plates in subduction zones at ocean trenches. Slab suction may occur in a geodynamic setting where basal tractions continue to act on the plate as it dives into the mantle (although perhaps to a greater extent acting on both the under and upper side of the slab).
Lately, the convection theory has been much debated, as modern techniques based on 3D seismic tomography still fail to recognize these predicted large scale convection cells. Alternative views have been proposed. In the theory of plume tectonics, followed by numerous researchers during the 1990s, a modified concept of mantle convection currents is used. It asserts that super plumes rise from the deeper mantle and are the drivers or substitutes of the major convection cells. These ideas find their roots in the early 1930s in the works of Beloussov and van Bemmelen, which were initially opposed to plate tectonics and placed the mechanism in a fixist framework of vertical movements. Van Bemmelen later modified the concept in his "Undulation Models" and used it as the driving force for horizontal movements, invoking gravitational forces away from the regional crustal doming. These theories find resonance in the modern theories which envisage hot spots or mantle plumes which remain fixed and are overridden by oceanic and continental lithosphere plates over time and leave their traces in the geological record (though these phenomena are not invoked as real driving mechanisms, but rather as modulators). The mechanism is nowadays still advocated, for example, to explain the break-up of supercontinents during specific geological epochs.
It also still has numerous followers amongst the scientists involved in the theory of Earth expansion. Another theory is that the mantle flows neither in cells nor large plumes but rather as a series of channels just below the Earth's crust, which then provide basal friction to the lithosphere. This theory, called "surge tectonics", became quite popular in geophysics and geodynamics during the 1980s and 1990s. Recent research, based on three-dimensional computer modeling, suggests that plate geometry is governed by a feedback between mantle convection patterns and the strength of the lithosphere. Forces related to gravity are invoked as secondary phenomena within the framework of a more general driving mechanism such as the various forms of mantle dynamics described above. In modern views, gravity is invoked as the major driving force, through slab pull along subduction zones. Gravitational sliding away from a spreading ridge: According to many authors, plate motion is driven by the higher elevation of plates at ocean ridges. As oceanic lithosphere is formed at spreading ridges from hot mantle material, it gradually cools and thickens with age (and thus adds distance from the ridge). Cool oceanic lithosphere is significantly denser than the hot mantle material from which it is derived, and so with increasing thickness it gradually subsides into the mantle to compensate for the greater load. The result is a slight lateral incline with increasing distance from the ridge axis. This force is regarded as a secondary force and is often referred to as "ridge push". This is a misnomer, as nothing is "pushing" horizontally and tensional features are dominant along ridges. It is more accurate to refer to this mechanism as gravitational sliding, since the topography across the totality of the plate can vary considerably and the topography of spreading ridges is only the most prominent feature.
Other mechanisms generating this gravitational secondary force include flexural bulging of the lithosphere before it dives underneath an adjacent plate which produces a clear topographical feature that can offset, or at least affect, the influence of topographical ocean ridges, and mantle plumes and hot spots, which are postulated to impinge on the underside of tectonic plates. Slab-pull: Current scientific opinion is that the asthenosphere is insufficiently competent or rigid to directly cause motion by friction along the base of the lithosphere. Slab pull is therefore most widely thought to be the greatest force acting on the plates. In this current understanding, plate motion is mostly driven by the weight of cold, dense plates sinking into the mantle at trenches. Recent models indicate that trench suction plays an important role as well. However, the fact that the North American Plate is nowhere being subducted, although it is in motion, presents a problem. The same holds for the African, Eurasian, and Antarctic plates. Gravitational sliding away from mantle doming: According to older theories, one of the driving mechanisms of the plates is the existence of large scale asthenosphere/mantle domes which cause the gravitational sliding of lithosphere plates away from them (see the paragraph on Mantle Mechanisms). This gravitational sliding represents a secondary phenomenon of this basically vertically oriented mechanism. It finds its roots in the Undation Model of van Bemmelen. This can act on various scales, from the small scale of one island arc up to the larger scale of an entire ocean basin. Alfred Wegener, being a meteorologist, had proposed tidal forces and centrifugal forces as the main driving mechanisms behind continental drift; however, these forces were considered far too small to cause continental motion as the concept was of continents plowing through oceanic crust. 
Therefore, in the last edition of his book in 1929, Wegener changed his position and asserted that convection currents are the main driving force of plate tectonics. However, in the plate tectonics context (accepted since the seafloor spreading proposals of Heezen, Hess, Dietz, Morley, Vine, and Matthews (see below) during the early 1960s), the oceanic crust is suggested to be in motion with the continents, which caused the proposals related to Earth rotation to be reconsidered. In more recent literature, these driving forces are:
- Tidal drag due to the gravitational force the Moon (and the Sun) exerts on the crust of the Earth
- Global deformation of the geoid due to small displacements of the rotational pole with respect to the Earth's crust
- Other smaller deformation effects of the crust due to wobbles and spin movements of the Earth's rotation on a smaller time scale
Forces that are small and generally negligible are:
- The Coriolis force
- The centrifugal force, which is treated as a slight modification of gravity
For these mechanisms to be valid overall, systematic relationships should exist all over the globe between the orientation and kinematics of deformation and the geographical latitudinal and longitudinal grid of the Earth itself. Ironically, the systematic studies of these relations in the second half of the nineteenth century and the first half of the twentieth century underline exactly the opposite: that the plates had not moved in time, that the deformation grid was fixed with respect to the Earth's equator and axis, and that gravitational driving forces were generally acting vertically and caused only local horizontal movements (the so-called pre-plate tectonic, "fixist theories"). Later studies (discussed below on this page), therefore, invoked many of the relationships recognized during this pre-plate tectonics period to support their theories (see the anticipations and reviews in the work of van Dijk and collaborators).
Of the many forces discussed in this paragraph, tidal force is still highly debated and defended as a possible principal driving force of plate tectonics. The other forces are only used in global geodynamic models not using plate tectonics concepts (therefore beyond the discussions treated in this section) or proposed as minor modulations within the overall plate tectonics model. In 1973, George W. Moore of the USGS and R. C. Bostrom presented evidence for a general westward drift of the Earth's lithosphere with respect to the mantle. They concluded that tidal forces (the tidal lag or "friction") caused by the Earth's rotation and the forces acting upon it by the Moon are a driving force for plate tectonics. As the Earth spins eastward beneath the Moon, the Moon's gravity ever so slightly pulls the Earth's surface layer back westward, just as proposed by Alfred Wegener (see above). In a more recent 2006 study, scientists reviewed and advocated these earlier proposed ideas. It has also been suggested in Lovett (2006) that this observation may explain why Venus and Mars have no plate tectonics, since Venus has no moon and Mars' moons are too small to have significant tidal effects on the planet. In a recent paper, it was suggested that, on the other hand, it can easily be observed that many plates are moving north and eastward, and that the dominantly westward motion of the Pacific Ocean basins derives simply from the eastward bias of the Pacific spreading center (which is not a predicted manifestation of such lunar forces). In the same paper the authors admit, however, that relative to the lower mantle, there is a slight westward component in the motions of all the plates. They demonstrated, though, that the westward drift, seen only for the past 30 Ma, is attributed to the increased dominance of the steadily growing and accelerating Pacific Plate. The debate is still open.
Relative significance of each driving force mechanism

The vector of a plate's motion is a function of all the forces acting on the plate; however, therein lies the problem regarding the degree to which each process contributes to the overall motion of each tectonic plate. The diversity of geodynamic settings and the properties of each plate result from the impact of the various processes actively driving each individual plate. One method of dealing with this problem is to consider the relative rate at which each plate is moving as well as the evidence related to the significance of each process to the overall driving force on the plate. One of the most significant correlations discovered to date is that lithospheric plates attached to downgoing (subducting) plates move much faster than plates not attached to subducting plates. The Pacific plate, for instance, is essentially surrounded by zones of subduction (the so-called Ring of Fire) and moves much faster than the plates of the Atlantic basin, which are attached (perhaps one could say 'welded') to adjacent continents instead of subducting plates. It is thus thought that forces associated with the downgoing plate (slab pull and slab suction) are the driving forces which determine the motion of plates, except for those plates which are not being subducted. This view however has been contradicted by a recent study which found that the actual motions of the Pacific Plate and other plates associated with the East Pacific Rise do not correlate mainly with either slab pull or slab push, but rather with a mantle convection upwelling whose horizontal spreading along the bases of the various plates drives them along via viscosity-related traction forces. The driving forces of plate motion continue to be active subjects of on-going research within geophysics and tectonophysics.
Development of the theory

Around the start of the twentieth century, various theorists unsuccessfully attempted to explain the many geographical, geological, and biological continuities between continents. In 1912 the meteorologist Alfred Wegener described what he called continental drift, an idea that culminated fifty years later in the modern theory of plate tectonics. Wegener expanded his theory in his 1915 book The Origin of Continents and Oceans. Starting from the idea (also expressed by his forerunners) that the present continents once formed a single land mass (later called Pangea), Wegener suggested that these separated and drifted apart, likening them to "icebergs" of low density granite floating on a sea of denser basalt. Supporting evidence for the idea came from the dove-tailing outlines of South America's east coast and Africa's west coast, and from the matching of the rock formations along these edges. Confirmation of their previous contiguous nature also came from the fossil plants Glossopteris and Gangamopteris, and the therapsid or mammal-like reptile Lystrosaurus, all widely distributed over South America, Africa, Antarctica, India, and Australia. The evidence for such an erstwhile joining of these continents was patent to field geologists working in the southern hemisphere. The South African Alex du Toit put together a mass of such information in his 1937 publication Our Wandering Continents, and went further than Wegener in recognising the strong links between the Gondwana fragments. Wegener's work was initially not widely accepted, in part due to a lack of detailed evidence. The Earth might have a solid crust and mantle and a liquid core, but there seemed to be no way that portions of the crust could move around. Distinguished scientists, such as Harold Jeffreys and Charles Schuchert, were outspoken critics of continental drift.
Despite much opposition, the view of continental drift gained support and a lively debate started between "drifters" or "mobilists" (proponents of the theory) and "fixists" (opponents). During the 1920s, 1930s and 1940s, the former reached important milestones proposing that convection currents might have driven the plate movements, and that spreading may have occurred below the sea within the oceanic crust. Concepts close to the elements now incorporated in plate tectonics were proposed by geophysicists and geologists (both fixists and mobilists) like Vening-Meinesz, Holmes, and Umbgrove. One of the first pieces of geophysical evidence that was used to support the movement of lithospheric plates came from paleomagnetism. This is based on the fact that rocks of different ages show a variable magnetic field direction, evidenced by studies since the mid–nineteenth century. The magnetic north and south poles reverse through time, and, especially important in paleotectonic studies, the relative position of the magnetic north pole varies through time. Initially, during the first half of the twentieth century, the latter phenomenon was explained by introducing what was called "polar wander" (see apparent polar wander) (i.e., it was assumed that the north pole location had been shifting through time). An alternative explanation, though, was that the continents had moved (shifted and rotated) relative to the north pole, and each continent, in fact, shows its own "polar wander path". During the late 1950s it was successfully shown on two occasions that these data could show the validity of continental drift: by Keith Runcorn in a paper in 1956, and by Warren Carey in a symposium held in March 1956. 
The second piece of evidence in support of continental drift came during the late 1950s and early 60s from data on the bathymetry of the deep ocean floors and the nature of the oceanic crust such as magnetic properties and, more generally, with the development of marine geology which gave evidence for the association of seafloor spreading along the mid-oceanic ridges and magnetic field reversals, published between 1959 and 1963 by Heezen, Dietz, Hess, Mason, Vine & Matthews, and Morley. Simultaneous advances in early seismic imaging techniques in and around Wadati–Benioff zones along the trenches bounding many continental margins, together with many other geophysical (e.g. gravimetric) and geological observations, showed how the oceanic crust could disappear into the mantle, providing the mechanism to balance the extension of the ocean basins with shortening along its margins. All this evidence, both from the ocean floor and from the continental margins, made it clear around 1965 that continental drift was feasible and the theory of plate tectonics, which was defined in a series of papers between 1965 and 1967, was born, with all its extraordinary explanatory and predictive power. The theory revolutionized the Earth sciences, explaining a diverse range of geological phenomena and their implications in other studies such as paleogeography and paleobiology. In the late 19th and early 20th centuries, geologists assumed that the Earth's major features were fixed, and that most geologic features such as basin development and mountain ranges could be explained by vertical crustal movement, described in what is called the geosynclinal theory. Generally, this was placed in the context of a contracting planet Earth due to heat loss in the course of a relatively short geological time. Since that time many theories were proposed to explain this apparent complementarity, but the assumption of a solid Earth made these various proposals difficult to accept. 
The discovery of radioactivity and its associated heating properties in 1895 prompted a re-examination of the apparent age of the Earth. This had previously been estimated by its cooling rate under the assumption that the Earth's surface radiated like a black body. Those calculations had implied that, even if it started at red heat, the Earth would have dropped to its present temperature in a few tens of millions of years. Armed with the knowledge of a new heat source, scientists realized that the Earth would be much older, and that its core was still sufficiently hot to be liquid. By 1915, after having published a first article in 1912, Alfred Wegener was making serious arguments for the idea of continental drift in the first edition of The Origin of Continents and Oceans. In that book (re-issued in four successive editions up to the final one in 1936), he noted how the east coast of South America and the west coast of Africa looked as if they were once attached. Wegener was not the first to note this (Abraham Ortelius, Antonio Snider-Pellegrini, Eduard Suess, Roberto Mantovani and Frank Bursley Taylor preceded him just to mention a few), but he was the first to marshal significant fossil and paleo-topographical and climatological evidence to support this simple observation (and was supported in this by researchers such as Alex du Toit). Furthermore, when the rock strata of the margins of separate continents are very similar it suggests that these rocks were formed in the same way, implying that they were joined initially. For instance, parts of Scotland and Ireland contain rocks very similar to those found in Newfoundland and New Brunswick. Furthermore, the Caledonian Mountains of Europe and parts of the Appalachian Mountains of North America are very similar in structure and lithology. However, his ideas were not taken seriously by many geologists, who pointed out that there was no apparent mechanism for continental drift. 
Specifically, they did not see how continental rock could plow through the much denser rock that makes up oceanic crust. Wegener could not explain the force that drove continental drift, and his vindication did not come until after his death in 1930.

Floating continents, paleomagnetism, and seismicity zones

As it was observed early on that although granite existed on continents, the seafloor seemed to be composed of denser basalt, the prevailing concept during the first half of the twentieth century was that there were two types of crust, named "sial" (continental type crust) and "sima" (oceanic type crust). Furthermore, it was supposed that a static shell of strata was present under the continents. It therefore looked apparent that a layer of basalt (sima) underlies the continental rocks. However, based on abnormalities in plumb line deflection by the Andes in Peru, Pierre Bouguer had deduced that less-dense mountains must have a downward projection into the denser layer underneath. The concept that mountains had "roots" was confirmed by George B. Airy a hundred years later, during study of Himalayan gravitation, and seismic studies detected corresponding density variations. Therefore, by the mid-1950s, the question remained unresolved as to whether mountain roots were clenched in surrounding basalt or were floating on it like an iceberg. During the 20th century, improvements in and greater use of seismic instruments such as seismographs enabled scientists to learn that earthquakes tend to be concentrated in specific areas, most notably along the oceanic trenches and spreading ridges. By the late 1920s, seismologists were beginning to identify several prominent earthquake zones parallel to the trenches that typically were inclined 40–60° from the horizontal and extended several hundred kilometers into the Earth.
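The "mountain roots" idea can be made quantitative. A minimal sketch of the Airy isostasy balance (the density values are typical textbook figures, not taken from this article): a column of light crust floating on denser mantle must extend downward far enough that every column carries the same mass.

```python
def airy_root_km(elevation_km, rho_crust=2.8, rho_mantle=3.3):
    """Depth of the compensating crustal root (km) beneath topography of a
    given elevation, in the Airy isostasy model. Densities in g/cm^3 are
    typical values for continental crust and uppermost mantle.
    Mass balance: elevation * rho_crust = root * (rho_mantle - rho_crust)."""
    return elevation_km * rho_crust / (rho_mantle - rho_crust)

# A 5 km high range needs a root roughly 28 km deep, so mountains
# extend much farther down into the mantle than they rise above it.
print(airy_root_km(5.0))
```

Because the density contrast between crust and mantle is small, the root is several times deeper than the mountain is high, which is exactly what the Himalayan gravity and seismic studies confirmed.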
These zones later became known as Wadati–Benioff zones, or simply Benioff zones, in honor of the seismologists who first recognized them, Kiyoo Wadati of Japan and Hugo Benioff of the United States. The study of global seismicity greatly advanced in the 1960s with the establishment of the Worldwide Standardized Seismograph Network (WWSSN) to monitor the compliance of the 1963 treaty banning above-ground testing of nuclear weapons. The much improved data from the WWSSN instruments allowed seismologists to map precisely the zones of earthquake concentration worldwide. Meanwhile, debates developed around the phenomenon of polar wander. Since the early debates of continental drift, scientists had discussed and used evidence that polar drift had occurred because continents seemed to have moved through different climatic zones during the past. Furthermore, paleomagnetic data had shown that the magnetic pole had also shifted during time. Reasoning in an opposite way, the continents might have shifted and rotated, while the pole remained relatively fixed. The first time the evidence of magnetic polar wander was used to support the movements of continents was in a paper by Keith Runcorn in 1956, and successive papers by him and his students Ted Irving (who was actually the first to be convinced of the fact that paleomagnetism supported continental drift) and Ken Creer. This was immediately followed by a symposium in Tasmania in March 1956. In this symposium, the evidence was used in the theory of an expansion of the global crust. In this hypothesis, the shifting of the continents can be simply explained by a large increase in the size of the Earth since its formation. However, this was unsatisfactory because its supporters could offer no convincing mechanism to produce a significant expansion of the Earth. 
Certainly there is no evidence that the moon has expanded in the past 3 billion years; other work would soon show that the evidence was equally in support of continental drift on a globe with a stable radius. From the 1930s up to the late 1950s, works by Vening-Meinesz, Holmes, Umbgrove, and numerous others outlined concepts that were close or nearly identical to modern plate tectonics theory. In particular, the English geologist Arthur Holmes proposed in 1920 that plate junctions might lie beneath the sea, and in 1928 that convection currents within the mantle might be the driving force. Often, these contributions are forgotten because:
- At the time, continental drift was not accepted.
- Some of these ideas were discussed in the context of abandoned fixist ideas of a deforming globe without continental drift or an expanding Earth.
- They were published during an episode of extreme political and economic instability that hampered scientific communication.
- Many were published by European scientists and at first not mentioned or given little credit in the papers on sea floor spreading published by the American researchers in the 1960s.

Mid-oceanic ridge spreading and convection

In 1947, a team of scientists led by Maurice Ewing, utilizing the Woods Hole Oceanographic Institution's research vessel Atlantis and an array of instruments, confirmed the existence of a rise in the central Atlantic Ocean, and found that the floor of the seabed beneath the layer of sediments consisted of basalt, not the granite which is the main constituent of continents. They also found that the oceanic crust was much thinner than continental crust. All these new findings raised important and intriguing questions. The new data that had been collected on the ocean basins also showed particular characteristics regarding the bathymetry. One of the major outcomes of these datasets was that all along the globe, a system of mid-oceanic ridges was detected.
An important conclusion was that along this system, new ocean floor was being created, which led to the concept of the "Great Global Rift". This was described in the crucial paper of Bruce Heezen (1960), which would trigger a real revolution in thinking. A profound consequence of seafloor spreading is that new crust was, and still is, being continually created along the oceanic ridges. Therefore, Heezen advocated the so-called "expanding Earth" hypothesis of S. Warren Carey (see above). So, still the question remained: how can new crust be continuously added along the oceanic ridges without increasing the size of the Earth? In reality, this question had been solved already by numerous scientists during the forties and the fifties, like Arthur Holmes, Vening-Meinesz, Coates and many others: The crust in excess disappeared along what were called the oceanic trenches, where so-called "subduction" occurred. Therefore, when various scientists during the early sixties started to reason on the data at their disposal regarding the ocean floor, the pieces of the theory quickly fell into place. The question particularly intrigued Harry Hammond Hess, a Princeton University geologist and a Naval Reserve Rear Admiral, and Robert S. Dietz, a scientist with the U.S. Coast and Geodetic Survey who first coined the term seafloor spreading. Dietz and Hess (the former published the same idea one year earlier in Nature, but priority belongs to Hess who had already distributed an unpublished manuscript of his 1962 article by 1960) were among the small handful who really understood the broad implications of sea floor spreading and how it would eventually agree with the, at that time, unconventional and unaccepted ideas of continental drift and the elegant and mobilistic models proposed by previous workers like Holmes. In the same year, Robert R. Coats of the U.S. Geological Survey described the main features of island arc subduction in the Aleutian Islands. 
His paper, though little noted (and even ridiculed) at the time, has since been called "seminal" and "prescient". It also shows that the work on island arcs and mountain belts performed and published by European scientists during the 1930s through the 1950s was applied and appreciated in the United States as well. If the Earth's crust was expanding along the oceanic ridges, Hess and Dietz reasoned, like Holmes and others before them, it must be shrinking elsewhere. Hess followed Heezen in suggesting that new oceanic crust continuously spreads away from the ridges in a conveyor belt–like motion. Using the mobilistic concepts developed before, he correctly concluded that many millions of years later, the oceanic crust eventually descends along the continental margins, where oceanic trenches—very deep, narrow canyons—are formed, e.g. along the rim of the Pacific Ocean basin. The important step Hess made was to propose convection currents as the driving force in this process, arriving at the same conclusions as Holmes had decades before, with the only difference that the thinning of the ocean crust was now explained by Heezen's mechanism of spreading along the ridges. Hess therefore concluded that the Atlantic Ocean was expanding while the Pacific Ocean was shrinking. As old oceanic crust is "consumed" in the trenches (like Holmes and others, he thought this was done by thickening of the continental lithosphere, not, as now understood, by underthrusting of the oceanic crust itself into the mantle on a larger scale), new magma rises and erupts along the spreading ridges to form new crust. In effect, the ocean basins are perpetually being "recycled," with the creation of new crust and the destruction of old oceanic lithosphere occurring simultaneously.
Thus, the new mobilistic concepts neatly explained why the Earth does not get bigger with sea floor spreading, why there is so little sediment accumulation on the ocean floor, and why oceanic rocks are much younger than continental rocks. Beginning in the 1950s, scientists like Victor Vacquier, using magnetic instruments (magnetometers) adapted from airborne devices developed during World War II to detect submarines, began recognizing odd magnetic variations across the ocean floor. This finding, though unexpected, was not entirely surprising, because it was known that basalt—the iron-rich volcanic rock making up the ocean floor—contains a strongly magnetic mineral (magnetite) and can locally distort compass readings. This distortion was recognized by Icelandic mariners as early as the late 18th century. More important, because the presence of magnetite gives the basalt measurable magnetic properties, these newly discovered magnetic variations provided another means to study the deep ocean floor. When newly formed rock cools, such magnetic materials record the Earth's magnetic field at the time. As more and more of the seafloor was mapped during the 1950s, the magnetic variations turned out not to be random or isolated occurrences, but instead revealed recognizable patterns. When these magnetic patterns were mapped over a wide region, the ocean floor showed a zebra-like pattern: one stripe with normal polarity and the adjoining stripe with reversed polarity. The overall pattern, defined by these alternating bands of normally and reversely polarized rock, became known as magnetic striping. It was published by Ron G. Mason and co-workers in 1961, who, however, did not find an explanation for these data in terms of sea floor spreading, as Vine, Matthews and Morley would a few years later. The discovery of magnetic striping called for an explanation.
In the early 1960s scientists such as Heezen, Hess and Dietz had begun to theorise that mid-ocean ridges mark structurally weak zones where the ocean floor was being ripped in two lengthwise along the ridge crest (see the previous paragraph). New magma from deep within the Earth rises easily through these weak zones and eventually erupts along the crest of the ridges to create new oceanic crust. This process, at first called the "conveyor belt hypothesis" and later seafloor spreading, operating over many millions of years, continues to form new ocean floor all across the 50,000 km-long system of mid-ocean ridges. Only four years after the maps with the "zebra pattern" of magnetic stripes were published, the link between sea floor spreading and these patterns was correctly made, independently by Lawrence Morley and by Fred Vine and Drummond Matthews, in 1963; it is now called the Vine–Matthews–Morley hypothesis. This hypothesis linked the patterns to geomagnetic reversals and was supported by several lines of evidence:
- the stripes are symmetrical around the crests of the mid-ocean ridges;
- at or near the crest of the ridge, the rocks are very young, and they become progressively older away from the ridge crest;
- the youngest rocks at the ridge crest always have present-day (normal) polarity;
- stripes of rock parallel to the ridge crest alternate in magnetic polarity (normal-reversed-normal, etc.), suggesting that they were formed during different epochs documenting the (already known from independent studies) normal and reversal episodes of the Earth's magnetic field.
By explaining both the zebra-like magnetic striping and the construction of the mid-ocean ridge system, the seafloor spreading hypothesis (SFS) quickly gained converts and represented another major advance in the development of the plate-tectonics theory.
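Because each reversal boundary in the stripe pattern has a known age from the geomagnetic polarity timescale, its distance from the ridge crest directly yields a spreading rate. A minimal sketch of the arithmetic (the reversal ages are the accepted timescale values; the distances are illustrative, not survey data):

```python
# Estimate half-spreading rates from magnetic stripe boundaries.
# Each entry pairs a reversal age (Myr, geomagnetic polarity timescale)
# with the distance (km) of that reversal boundary from the ridge crest.
# Distances are illustrative values, not measurements.
boundaries = [
    (0.78, 16),   # Brunhes–Matuyama reversal
    (2.58, 52),   # Matuyama–Gauss reversal
    (3.58, 71),   # Gauss–Gilbert reversal
]

for age_myr, dist_km in boundaries:
    # km per Myr is numerically equal to mm per yr
    half_rate = dist_km / age_myr
    print(f"{age_myr:5.2f} Myr boundary at {dist_km:3d} km -> "
          f"{half_rate:.1f} mm/yr half-spreading rate")
```

Since the pattern is symmetric about the crest, the full separation rate of the two plates is twice the half rate.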
Furthermore, the oceanic crust now came to be appreciated as a natural "tape recording" of the history of the geomagnetic field reversals (GMFR) of the Earth's magnetic field. Today, extensive studies are dedicated to calibrating the normal-reversal patterns in the oceanic crust on the one hand against known timescales derived from the dating of basalt layers in sedimentary sequences (magnetostratigraphy) on the other, to arrive at estimates of past spreading rates and plate reconstructions.
Definition and refining of the theory
After all these considerations, Plate Tectonics (or, as it was initially called, "New Global Tectonics") became quickly accepted in the scientific world, and numerous papers followed that defined the concepts:
- In 1965, Tuzo Wilson, who had been a promoter of the sea floor spreading hypothesis and continental drift from the very beginning, added the concept of transform faults to the model, completing the classes of fault types necessary to make the mobility of the plates on the globe work out.
- A symposium on continental drift was held at the Royal Society of London in 1965, which must be regarded as the official start of the acceptance of plate tectonics by the scientific community, and whose abstracts are issued as Blackett, Bullard & Runcorn (1965). In this symposium, Edward Bullard and co-workers showed with a computer calculation how the continents along both sides of the Atlantic would best fit to close the ocean, which became known as the famous "Bullard's Fit".
- In 1966 Wilson published the paper that referred to previous plate tectonic reconstructions, introducing the concept of what is now known as the "Wilson Cycle".
- In 1967, at the American Geophysical Union's meeting, W. Jason Morgan proposed that the Earth's surface consists of 12 rigid plates that move relative to each other.
- Two months later, Xavier Le Pichon published a complete model based on six major plates with their relative motions, which marked the final acceptance by the scientific community of plate tectonics.
- In the same year, McKenzie and Parker independently presented a model similar to Morgan's, using translations and rotations on a sphere to define the plate motions.
Plate Tectonics Revolution
Implications for biogeography
Continental drift theory helps biogeographers to explain the disjunct biogeographic distribution of present-day life found on different continents but having similar ancestors. In particular, it explains the Gondwanan distribution of ratites and the Antarctic flora.
Reconstruction is used to establish past (and future) plate configurations, helping determine the shape and make-up of ancient supercontinents and providing a basis for paleogeography.
Defining plate boundaries
Current plate boundaries are defined by their seismicity. Past plate boundaries within existing plates are identified from a variety of evidence, such as the presence of ophiolites that are indicative of vanished oceans.
Past plate motions
Various types of quantitative and semi-quantitative information are available to constrain past plate motions. The geometric fit between continents, such as between west Africa and South America, is still an important part of plate reconstruction. Magnetic stripe patterns provide a reliable guide to relative plate motions going back into the Jurassic period. The tracks of hotspots give absolute reconstructions, but these are only available back to the Cretaceous. Older reconstructions rely mainly on paleomagnetic pole data, although these constrain only the latitude and rotation, not the longitude. Combining poles of different ages for a particular plate to produce apparent polar wander paths provides a method for comparing the motions of different plates through time.
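The "rotations on a sphere" in these plate models rest on Euler's theorem: any rigid displacement of a plate over the globe is equivalent to a rotation about some pole through the Earth's centre, so a plate's motion can be summarized by a pole and an angle. A minimal sketch of applying one such rotation to a point, using Rodrigues' rotation formula (the pole and point coordinates are made-up illustrative values):

```python
import math

def to_xyz(lat_deg, lon_deg):
    """Geographic coordinates (degrees) -> unit vector."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def to_latlon(v):
    """Unit vector -> geographic coordinates (degrees)."""
    x, y, z = v
    return math.degrees(math.asin(z)), math.degrees(math.atan2(y, x))

def rotate(point, pole, angle_deg):
    """Rotate `point` about the Euler `pole` by `angle_deg` (Rodrigues' formula)."""
    a = math.radians(angle_deg)
    px, py, pz = point
    kx, ky, kz = pole
    # cross product k x p
    cx = ky * pz - kz * py
    cy = kz * px - kx * pz
    cz = kx * py - ky * px
    dot = kx * px + ky * py + kz * pz
    cos_a, sin_a = math.cos(a), math.sin(a)
    return (px * cos_a + cx * sin_a + kx * dot * (1 - cos_a),
            py * cos_a + cy * sin_a + ky * dot * (1 - cos_a),
            pz * cos_a + cz * sin_a + kz * dot * (1 - cos_a))

# Rotating an equatorial point 30 degrees about an axis through the
# north pole simply shifts its longitude by 30 degrees.
pole = to_xyz(90, 0)        # rotation axis through the north pole
point = to_xyz(0, 20)       # a point at 0N, 20E
lat, lon = to_latlon(rotate(point, pole, 30))
print(f"{lat:.1f}N, {lon:.1f}E")   # prints "0.0N, 50.0E"
```

Plate reconstructions chain such finite rotations: composing the rotations for successive time intervals carries a sampled point, or a whole plate outline, back to its past position.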
Additional evidence comes from the distribution of certain sedimentary rock types, faunal provinces shown by particular fossil groups, and the position of orogenic belts.
Formation and break-up of continents
The movement of plates has caused the formation and break-up of continents over time, including the occasional formation of a supercontinent that contains most or all of the continents. The supercontinent Columbia or Nuna is thought to have formed roughly 2.0 to 1.8 billion years ago and to have broken up roughly 1.5 to 1.3 billion years ago. The supercontinent Rodinia is thought to have formed about 1 billion years ago and to have embodied most or all of Earth's continents, and broken up into eight continents around 600 million years ago. The eight continents later re-assembled into another supercontinent called Pangaea; Pangaea broke up into Laurasia (which became North America and Eurasia) and Gondwana (which became the remaining continents). Depending on how they are defined, there are usually seven or eight "major" plates: African, Antarctic, Eurasian, North American, South American, Pacific, and Indo-Australian. The latter is sometimes subdivided into the Indian and Australian plates. The current motion of the tectonic plates is today determined by remote sensing satellite data sets, calibrated with ground station measurements.
Other celestial bodies (planets, moons)
The appearance of plate tectonics on terrestrial planets is related to planetary mass, with more massive planets than Earth expected to exhibit plate tectonics. Earth may be a borderline case, owing its tectonic activity to abundant water (silica and water form a deep eutectic). Venus shows no evidence of active plate tectonics. There is debatable evidence of active tectonics in the planet's distant past; however, events taking place since then (such as the plausible and generally accepted hypothesis that the Venusian lithosphere has thickened greatly over the course of several hundred million years) have made constraining the course of its geologic record difficult.
However, the numerous well-preserved impact craters have been utilized as a dating method to approximately date the Venusian surface (since there are thus far no known samples of Venusian rock to be dated by more reliable methods). The derived dates are dominantly in the range of a few hundred million years, although older ages have also been calculated. This research has led to the fairly well accepted hypothesis that Venus has undergone an essentially complete volcanic resurfacing at least once in its distant past, with the last event taking place approximately within the range of estimated surface ages. While the mechanism of such an impressive thermal event remains a debated issue in Venusian geosciences, some scientists are advocates of processes involving plate motion to some extent. One explanation for Venus's lack of plate tectonics is that on Venus temperatures are too high for significant water to be present. The Earth's crust is soaked with water, and water plays an important role in the development of shear zones. Plate tectonics requires weak surfaces in the crust along which crustal slices can move, and it may well be that such weakening never took place on Venus because of the absence of water. However, some researchers remain convinced that plate tectonics is or was once active on this planet. Mars is considerably smaller than Earth and Venus, and there is evidence for ice on its surface and in its crust. In the 1990s, it was proposed that the Martian crustal dichotomy was created by plate tectonic processes. Scientists today disagree, and think that it was created either by upwelling within the Martian mantle that thickened the crust of the Southern Highlands and formed Tharsis, or by a giant impact that excavated the Northern Lowlands. Observations of the magnetic field of Mars by the Mars Global Surveyor spacecraft in 1999 revealed patterns of magnetic striping on that planet.
Some scientists interpreted these as requiring plate tectonic processes, such as seafloor spreading. However, their data fail a "magnetic reversal test", which is used to see if they were formed by flipping polarities of a global magnetic field. Some of the satellites of Jupiter have features that may be related to plate-tectonic style deformation, although the materials and specific mechanisms may be different from plate-tectonic activity on Earth. On 8 September 2014, NASA reported finding evidence of plate tectonics on Europa, a satellite of Jupiter—the first sign of subduction activity on a world other than Earth. On Earth-sized planets, plate tectonics is more likely if there are oceans of water. However, in 2007, two independent teams of researchers came to opposing conclusions about the likelihood of plate tectonics on larger super-Earths, with one team saying that plate tectonics would be episodic or stagnant and the other team saying that plate tectonics is very likely on super-Earths even if the planet is dry.
- Atmospheric circulation – The large-scale movement of air, a process which distributes thermal energy about the Earth's surface
- Conservation of angular momentum
- Geological history of Earth – The sequence of major geological events in Earth's past
- GPlates – Open-source application software for interactive plate-tectonic reconstructions
- List of plate tectonics topics
- List of submarine topographical features – Oceanic landforms and topographic elements
- Supercontinent cycle – Quasi-periodic aggregation and dispersal of Earth's continental crust
- Tectonics – The processes that control the structure and properties of the Earth's crust and its evolution through time
- Little, Fowler & Coulson 1990.
- University of the Witwatersrand (2019). "Drop of ancient seawater rewrites Earth's history: Research reveals that plate tectonics started on Earth 600 million years before what was believed earlier". ScienceDaily.
Archived from the original on 6 August 2019. Retrieved 11 August 2019.
- Read & Watson 1975.
- Scalera & Lavecchia 2006.
- Stern, Robert J. (2002). "Subduction zones". Reviews of Geophysics. 40 (4): 1012. Bibcode:2002RvGeo..40.1012S. doi:10.1029/2001RG000108.
- Zhen Shao 1997, Hancock, Skinner & Dineley 2000.
- Turcotte & Schubert 2002, p. 5.
- Turcotte & Schubert 2002.
- Foulger 2010.
- Schmidt & Harbert 1998.
- Meissner 2002, p. 100.
- "Plate Tectonics: Plate Boundaries". platetectonics.com. Archived from the original on 16 June 2010. Retrieved 12 June 2010.
- "Understanding plate motions". USGS. Archived from the original on 16 May 2019. Retrieved 12 June 2010.
- Grove, Timothy L.; Till, Christy B.; Krawczynski, Michael J. (8 March 2012). "The Role of H2O in Subduction Zone Magmatism". Annual Review of Earth and Planetary Sciences. 40 (1): 413–39. Bibcode:2012AREPS..40..413G. doi:10.1146/annurev-earth-042711-105310. Retrieved 14 January 2016.
- Mendia-Landa, Pedro. "Myths and Legends on Natural Disasters: Making Sense of Our World". Archived from the original on 2016-07-21. Retrieved 2008-02-05.
- van Dijk 1992, van Dijk & Okkes 1991.
- Holmes, Arthur (1931). "Radioactivity and Earth Movements" (PDF). Transactions of the Geological Society of Glasgow. 18 (3): 559–606. doi:10.1144/transglas.18.3.559. Archived (PDF) from the original on 2019-10-09. Retrieved 2014-01-15.
- Tanimoto & Lay 2000.
- Van Bemmelen 1976.
- Van Bemmelen 1972.
- Segev 2002.
- Maruyama 1994.
- Yuen et al. 2007.
- Wezel 1988.
- Meyerhoff et al. 1996.
- Mallard, Claire; Coltice, Nicolas; Seton, Maria; Müller, R. Dietmar; Tackley, Paul J. (2016). "Subduction controls the distribution and fragmentation of Earth's tectonic plates". Nature. 535 (7610): 140–43. Bibcode:2016Natur.535..140M. doi:10.1038/nature17992. ISSN 0028-0836. PMID 27309815. Archived from the original on 2016-09-24. Retrieved 2016-09-15.
- Spence 1987, White & McKenzie 1989.
- Conrad & Lithgow-Bertelloni 2002.
- Spence 1987, White & McKenzie 1989, Segev 2002.
- "Alfred Wegener (1880–1930)". University of California Museum of Paleontology. Archived from the original on 2017-12-08. Retrieved 2010-06-18.
- Neith, Katie (April 15, 2011). "Caltech Researchers Use GPS Data to Model Effects of Tidal Loads on Earth's Surface". Caltech. Archived from the original on October 19, 2011. Retrieved August 15, 2012.
- Ricard, Y. (2009). "2. Physics of Mantle Convection". In David Bercovici; Gerald Schubert (eds.). Treatise on Geophysics: Mantle Dynamics. 7. Elsevier Science. p. 36. ISBN 978-0-444-53580-1.
- Glatzmaier, Gary A. (2013). Introduction to Modeling Convection in Planets and Stars: Magnetic Field, Density Stratification, Rotation. Princeton University Press. p. 149. ISBN 978-1-4008-4890-4.
- van Dijk 1992, van Dijk & Okkes 1990.
- Moore 1973.
- Bostrom 1971.
- Scoppola et al. 2006.
- Torsvik et al. 2010.
- Rowley, David B.; Forte, Alessandro M.; Rowan, Christopher J.; Glišović, Petar; Moucha, Robert; Grand, Stephen P.; Simmons, Nathan A. (2016). "Kinematics and dynamics of the East Pacific Rise linked to a stable, deep-mantle upwelling". Science Advances. 2: e1601107. doi:10.1126/sciadv.1601107.
- Hughes 2001a.
- Wegener 1929.
- Wegener 1966, Hughes 2001b.
- Runcorn 1956.
- Carey 1956.
- See for example the milestone paper of Lyman & Fleming 1940.
- Korgen 1995, Spiess & Kuperman 2003.
- Kious & Tilling 1996.
- Frankel 1987.
- Joly 1909.
- Thomson 1863.
- Wegener 1912.
- "Pioneers of Plate Tectonics". The Geological Society. Archived from the original on 23 March 2018. Retrieved 23 March 2018.
- Stein & Wysession 2009, p. 26.
- Carey 1956; see also Quilty 2003.
- Holmes 1928; see also Holmes 1978, Frankel 1978.
- Lippsett 2001, Lippsett 2006.
- Heezen 1960.
- Dietz 1961.
- Hess 1962.
- Mason & Raff 1961, Raff & Mason 1961.
- Vine & Matthews 1963.
- See summary in Heirtzler, Le Pichon & Baron 1966.
- Wilson 1963.
- Wilson 1965.
- Wilson 1966.
- Morgan 1968.
- Le Pichon 1967.
- McKenzie & Parker 1967.
- Casadevall, Arturo; Fang, Ferric C. (1 March 2016). "Revolutionary Science". mBio. 7 (2): e00158–16. doi:10.1128/mBio.00158-16. PMC 4810483. PMID 26933052.
- Moss & Wilson 1998.
- Condie 1997.
- Lliboutry 2000.
- Van Kranendonk, Martin J. (2011). "Onset of Plate Tectonics". Science. 333 (6041): 413–14. Bibcode:2011Sci...333..413V. doi:10.1126/science.1208766. PMID 21778389.
- Pappas, S. (21 Sept 2017). "Plate Tectonics May Have Begun a Billion Years After Earth's Birth". LiveScience report of PNAS research. Archived from the original on 2017-09-23. Retrieved 2017-09-23.
- Torsvik, Trond Helge. "Reconstruction Methods". Archived from the original on 23 July 2011. Retrieved 18 June 2010.
- Torsvik 2008.
- Butler 1992.
- Scotese, C.R. (2002-04-20). "Climate History". Paleomap Project. Archived from the original on 15 June 2010. Retrieved 18 June 2010.
- Zhao 2002, 2004.
- Valencia, O'Connell & Sasselov 2007.
- Kasting 1988.
- Bortman, Henry (2004-08-26). "Was Venus alive? 'The Signs are Probably There'". Astrobiology Magazine. Archived from the original on 2010-12-24. Retrieved 2008-01-08.
- Sleep 1994.
- Zhong & Zuber 2001.
- Andrews-Hanna, Zuber & Banerdt 2008.
- Wolpert, Stuart (August 9, 2012). "UCLA scientist discovers plate tectonics on Mars". Yin, An. UCLA. Archived from the original on August 14, 2012. Retrieved August 13, 2012.
- Connerney et al. 1999, Connerney et al. 2005.
- Harrison 2000.
- Dyches, Preston; Brown, Dwayne; Buckley, Michael (8 September 2014). "Scientists Find Evidence of 'Diving' Tectonic Plates on Europa". NASA. Archived from the original on 4 April 2019. Retrieved 8 September 2014.
- Soderblom et al. 2007.
- Valencia, Diana; O'Connell, Richard J. (2009). "Convection scaling and subduction on Earth and super-Earths".
Earth and Planetary Science Letters. 286 (3–4): 492–502. Bibcode:2009E&PSL.286..492V. doi:10.1016/j.epsl.2009.07.015.
- van Heck, H.J.; Tackley, P.J. (2011). "Plate tectonics on super-Earths: Equally or more likely than on Earth". Earth and Planetary Science Letters. 310 (3–4): 252–61. Bibcode:2011E&PSL.310..252V. doi:10.1016/j.epsl.2011.07.029.
- O'Neill, C.; Lenardic, A. (2007). "Geological consequences of super-sized Earths". Geophysical Research Letters. 34: L19204. Bibcode:2007GeoRL..3419204O. doi:10.1029/2007GL030598.
- Stern, Robert J. (July 2016). "Is plate tectonics needed to evolve technological species on exoplanets?". Geoscience Frontiers. 7 (4): 573–580. doi:10.1016/j.gsf.2015.12.002.
- Butler, Robert F. (1992). "Applications to paleogeography" (PDF). Paleomagnetism: Magnetic domains to geologic terranes. Blackwell. ISBN 978-0-86542-070-0. Archived from the original (PDF) on 17 August 2010. Retrieved 18 June 2010.
- Carey, S.W. (1958). "The tectonic approach to continental drift". In Carey, S.W. (ed.). Continental Drift – A symposium, held in March 1956. Hobart: Univ. of Tasmania. pp. 177–363. Expanding Earth from pp. 311–49.
- Condie, K.C. (1997). Plate tectonics and crustal evolution (4th ed.). Butterworth-Heinemann. p. 282. ISBN 978-0-7506-3386-4. Retrieved 2010-06-18.
- Foulger, Gillian R. (2010). Plates vs Plumes: A Geological Controversy. Wiley-Blackwell. ISBN 978-1-4051-6148-0.
- Frankel, H. (1987). "The Continental Drift Debate". In H.T. Engelhardt Jr; A.L. Caplan (eds.). Scientific Controversies: Case Studies in the Resolution and Closure of Disputes in Science and Technology. Cambridge University Press. ISBN 978-0-521-27560-6.
- Hancock, Paul L.; Skinner, Brian J.; Dineley, David L. (2000). The Oxford Companion to The Earth. Oxford University Press. ISBN 978-0-19-854039-7.
- Hess, H.H. (November 1962). "History of Ocean Basins" (PDF). In A.E.J. Engel; Harold L. James; B.F. Leonard (eds.). Petrologic studies: a volume in honor of A.F.
Buddington. Boulder, CO: Geological Society of America. pp. 599–620.
- Holmes, Arthur (1978). Principles of Physical Geology (3 ed.). Wiley. pp. 640–41. ISBN 978-0-471-07251-5.
- Joly, John (1909). Radioactivity and Geology: An Account of the Influence of Radioactive Energy on Terrestrial History. Journal of Geology. 18. London: Archibald Constable. p. 36. Bibcode:1910JG.....18..568J. doi:10.1086/621777. ISBN 978-1-4021-3577-4.
- Kious, W. Jacquelyne; Tilling, Robert I. (February 2001). "Historical perspective". This Dynamic Earth: the Story of Plate Tectonics (Online ed.). U.S. Geological Survey. ISBN 978-0-16-048220-5. Retrieved 2008-01-29. Abraham Ortelius in his work Thesaurus Geographicus... suggested that the Americas were 'torn away from Europe and Africa... by earthquakes and floods... The vestiges of the rupture reveal themselves, if someone brings forward a map of the world and considers carefully the coasts of the three [continents].'
- Lippsett, Laurence (2006). "Maurice Ewing and the Lamont-Doherty Earth Observatory". In William Theodore De Bary; Jerry Kisslinger; Tom Mathewson (eds.). Living Legacies at Columbia. Columbia University Press. pp. 277–97. ISBN 978-0-231-13884-0. Retrieved 2010-06-22.
- Little, W.; Fowler, H.W.; Coulson, J. (1990). Onions C.T. (ed.). The Shorter Oxford English Dictionary: on historical principles. II (3 ed.). Clarendon Press. ISBN 978-0-19-861126-4.
- Lliboutry, L. (2000). Quantitative geophysics and geology. Eos Transactions. 82. Springer. p. 480. Bibcode:2001EOSTr..82..249W. doi:10.1029/01EO00142. ISBN 978-1-85233-115-3. Retrieved 2010-06-18.
- McKnight, Tom (2004). Geographica: The complete illustrated Atlas of the world. New York: Barnes and Noble Books. ISBN 978-0-7607-5974-5.
- Meissner, Rolf (2002). The Little Book of Planet Earth. New York: Copernicus Books. p. 202. ISBN 978-0-387-95258-1.
- Meyerhoff, Arthur Augustus; Taner, I.; Morris, A.E.L.; Agocs, W.B.; Kamen-Kaye, M.; Bhat, Mohammad I.; Smoot, N.
Christian; Choi, Dong R. (1996). Donna Meyerhoff Hull (ed.). Surge tectonics: a new hypothesis of global geodynamics. Solid Earth Sciences Library. 9. Springer Netherlands. p. 348. ISBN 978-0-7923-4156-7.
- Moss, S.J.; Wilson, M.E.J. (1998). "Biogeographic implications from the Tertiary palaeogeographic evolution of Sulawesi and Borneo" (PDF). In Hall R; Holloway JD (eds.). Biogeography and Geological Evolution of SE Asia. Leiden, The Netherlands: Backhuys. pp. 133–63. ISBN 978-90-73348-97-4.
- Oreskes, Naomi, ed. (2003). Plate Tectonics: An Insider's History of the Modern Theory of the Earth. Westview. ISBN 978-0-8133-4132-3.
- Read, Herbert Harold; Watson, Janet (1975). Introduction to Geology. New York: Halsted. pp. 13–15. ISBN 978-0-470-71165-1. OCLC 317775677.
- Schmidt, Victor A.; Harbert, William (1998). "The Living Machine: Plate Tectonics". Planet Earth and the New Geosciences (3 ed.). p. 442. ISBN 978-0-7872-4296-1. Archived from the original on 2010-01-24. Retrieved 2008-01-28.
- Schubert, Gerald; Turcotte, Donald L.; Olson, Peter (2001). Mantle Convection in the Earth and Planets. Cambridge: Cambridge University Press. ISBN 978-0-521-35367-0.
- Stanley, Steven M. (1999). Earth System History. W.H. Freeman. pp. 211–28. ISBN 978-0-7167-2882-5.
- Stein, Seth; Wysession, Michael (2009). An Introduction to Seismology, Earthquakes, and Earth Structure. Chichester: John Wiley & Sons. ISBN 978-1-4443-1131-0.
- Sverdrup, H.U.; Johnson, M.W.; Fleming, R.H. (1942). The Oceans: Their physics, chemistry and general biology. Englewood Cliffs: Prentice-Hall. p. 1087.
- Thompson, Graham R. & Turk, Jonathan (1991). Modern Physical Geology. Saunders College Publishing. ISBN 978-0-03-025398-0.
- Torsvik, Trond Helge; Steinberger, Bernhard (December 2006). "Fra kontinentaldrift til manteldynamikk" [From Continental Drift to Mantle Dynamics].
Geo (in Norwegian). 8: 20–30. Archived from the original on 23 July 2011. Retrieved 22 June 2010. Translation: Torsvik, Trond Helge; Steinberger, Bernhard (2008). "From Continental Drift to Mantle Dynamics" (PDF). In Trond Slagstad; Rolv Dahl Gråsteinen (eds.). Geology for Society for 150 years – The Legacy after Kjerulf. 12. Trondheim: Norges Geologiske Undersokelse. pp. 24–38. Archived from the original (PDF) on 23 July 2011. [Norwegian Geological Survey, Popular Science]
- Turcotte, D.L.; Schubert, G. (2002). "Plate Tectonics". Geodynamics (2 ed.). Cambridge University Press. pp. 1–21. ISBN 978-0-521-66186-7.
- Wegener, Alfred (1929). Die Entstehung der Kontinente und Ozeane [The Origin of Continents and Oceans] (4 ed.). Braunschweig: Friedrich Vieweg & Sohn Akt. Ges. ISBN 978-3-443-01056-0.
- Wegener, Alfred (1966). The origin of continents and oceans. Biram John (translator). Courier Dover. p. 246. ISBN 978-0-486-61708-4.
- Winchester, Simon (2003). Krakatoa: The Day the World Exploded: August 27, 1883. HarperCollins. ISBN 978-0-06-621285-2.
- Yuen, David A.; Maruyama, Shigenori; Karato, Shun-Ichiro; Windley, Brian F., eds. (2007). Superplumes: Beyond Plate Tectonics. Dordrecht: Springer. ISBN 978-1-4020-5749-6.
- Maruyama, Shigenori (1994). "Plume tectonics". Journal of the Geological Society of Japan. 100: 24–49. doi:10.5575/geosoc.100.24.
- Andrews-Hanna, Jeffrey C.; Zuber, Maria T.; Banerdt, W. Bruce (2008). "The Borealis basin and the origin of the martian crustal dichotomy". Nature. 453 (7199): 1212–15. Bibcode:2008Natur.453.1212A. doi:10.1038/nature07011. PMID 18580944.
- Blackett, P.M.S.; Bullard, E.; Runcorn, S.K., eds. (1965). A Symposium on Continental Drift, held on 28 October 1965. Philosophical Transactions of the Royal Society A. 258. The Royal Society of London. p. 323.
- Bostrom, R.C. (31 December 1971).
"Westward displacement of the lithosphere". Nature. 234 (5331): 536–38. Bibcode:1971Natur.234..536B. doi:10.1038/234536a0.
- Connerney, J.E.P.; Acuña, M.H.; Wasilewski, P.J.; Ness, N.F.; Rème, H.; Mazelle, C.; Vignes, D.; Lin, R.P.; Mitchell, D.L.; Cloutier, P.A. (1999). "Magnetic Lineations in the Ancient Crust of Mars". Science. 284 (5415): 794–98. Bibcode:1999Sci...284..794C. doi:10.1126/science.284.5415.794. PMID 10221909.
- Connerney, J.E.P.; Acuña, M.H.; Ness, N.F.; Kletetschka, G.; Mitchell, D.L.; Lin, R.P.; Rème, H. (2005). "Tectonic implications of Mars crustal magnetism". Proceedings of the National Academy of Sciences. 102 (42): 14970–75. Bibcode:2005PNAS..10214970C. doi:10.1073/pnas.0507469102. PMC 1250232. PMID 16217034.
- Conrad, Clinton P.; Lithgow-Bertelloni, Carolina (2002). "How Mantle Slabs Drive Plate Tectonics". Science. 298 (5591): 207–09. Bibcode:2002Sci...298..207C. doi:10.1126/science.1074161. PMID 12364804. Archived from the original on September 20, 2009.
- Dietz, Robert S. (June 1961). "Continent and Ocean Basin Evolution by Spreading of the Sea Floor". Nature. 190 (4779): 854–57. Bibcode:1961Natur.190..854D. doi:10.1038/190854a0.
- van Dijk, Janpieter; Okkes, F.W. Mark (1990). "The analysis of shear zones in Calabria; implications for the geodynamics of the Central Mediterranean". Rivista Italiana di Paleontologia e Stratigrafia. 96 (2–3): 241–70.
- van Dijk, J.P.; Okkes, F.W.M. (1991). "Neogene tectonostratigraphy and kinematics of Calabrian Basins: implications for the geodynamics of the Central Mediterranean". Tectonophysics. 196 (1): 23–60. Bibcode:1991Tectp.196...23V. doi:10.1016/0040-1951(91)90288-4.
- van Dijk, Janpieter (1992). "Late Neogene fore-arc basin evolution in the Calabrian Arc (Central Mediterranean). Tectonic sequence stratigraphy and dynamic geohistory. With special reference to the geology of Central Calabria". Geologica Ultraiectina. 92: 288. Archived from the original on 2013-04-20.
- Frankel, Henry (July 1978).
"Arthur Holmes and continental drift". The British Journal for the History of Science. 11 (2): 130–50. doi:10.1017/S0007087400016551. JSTOR 4025726.
- Harrison, C.G.A. (2000). "Questions About Magnetic Lineations in the Ancient Crust of Mars". Science. 287 (5453): 547a. doi:10.1126/science.287.5453.547a.
- Heezen, B. (1960). "The rift in the ocean floor". Scientific American. 203 (4): 98–110. Bibcode:1960SciAm.203d..98H. doi:10.1038/scientificamerican1060-98.
- Heirtzler, James R.; Le Pichon, Xavier; Baron, J. Gregory (1966). "Magnetic anomalies over the Reykjanes Ridge". Deep-Sea Research. 13 (3): 427–32. Bibcode:1966DSROA..13..427H. doi:10.1016/0011-7471(66)91078-3.
- Holmes, Arthur (1928). "Radioactivity and Earth movements". Transactions of the Geological Society of Glasgow. 18 (3): 559–606. doi:10.1144/transglas.18.3.559.
- Hughes, Patrick (8 February 2001). "Alfred Wegener (1880–1930): A Geographic Jigsaw Puzzle". On the Shoulders of Giants. Earth Observatory, NASA. Retrieved 2007-12-26. "... on January 6, 1912, Wegener... proposed instead a grand vision of drifting continents and widening seas to explain the evolution of Earth's geography."
- Hughes, Patrick (8 February 2001). "Alfred Wegener (1880–1930): The origin of continents and oceans". On the Shoulders of Giants. Earth Observatory, NASA. Retrieved 2007-12-26. "By his third edition (1922), Wegener was citing geological evidence that some 300 million years ago all the continents had been joined in a supercontinent stretching from pole to pole. He called it Pangaea (all lands), ..."
- Kasting, James F. (1988). "Runaway and moist greenhouse atmospheres and the evolution of Earth and Venus". Icarus. 74 (3): 472–94. Bibcode:1988Icar...74..472K. doi:10.1016/0019-1035(88)90116-9. PMID 11538226.
- Korgen, Ben J. (1995). "A voice from the past: John Lyman and the plate tectonics story" (PDF). Oceanography. 8 (1): 19–20. doi:10.5670/oceanog.1995.29. Archived from the original (PDF) on 2007-09-26.
- Lippsett, Laurence (2001). "Maurice Ewing and the Lamont-Doherty Earth Observatory". Living Legacies. Retrieved 2008-03-04. - Lovett, Richard A (24 January 2006). "Moon Is Dragging Continents West, Scientist Says". National Geographic News. - Lyman, J.; Fleming, R.H. (1940). "Composition of Seawater". Journal of Marine Research. 3: 134–46. - Maruyama, Shigenori (1994), "Plume tectonics.", Journal of the Geological Society of Japan, 100: 24–49, doi:10.5575/geosoc.100.24 - Mason, Ronald G.; Raff, Arthur D. (1961). "Magnetic survey off the west coast of the United States between 32°N latitude and 42°N latitude". Bulletin of the Geological Society of America. 72 (8): 1259–66. Bibcode:1961GSAB...72.1259M. doi:10.1130/0016-7606(1961)72[1259:MSOTWC]2.0.CO;2. ISSN 0016-7606. - Mc Kenzie, D.; Parker, R.L. (1967). "The North Pacific: an example of tectonics on a sphere". Nature. 216 (5122): 1276–1280. Bibcode:1967Natur.216.1276M. doi:10.1038/2161276a0. - Moore, George W. (1973). "Westward Tidal Lag as the Driving Force of Plate Tectonics". Geology. 1 (3): 99–100. Bibcode:1973Geo.....1...99M. doi:10.1130/0091-7613(1973)1<99:WTLATD>2.0.CO;2. ISSN 0091-7613. - Morgan, W. Jason (1968). "Rises, Trenches, Great Faults, and Crustal Blocks" (PDF). Journal of Geophysical Research. 73 (6): 1959–182. Bibcode:1968JGR....73.1959M. doi:10.1029/JB073i006p01959. - Le Pichon, Xavier (15 June 1968). "Sea-floor spreading and continental drift". Journal of Geophysical Research. 73 (12): 3661–97. Bibcode:1968JGR....73.3661L. doi:10.1029/JB073i012p03661. - Quilty, Patrick G.; Banks, Maxwell R. (2003). "Samuel Warren Carey, 1911–2002". Biographical memoirs. Australian Academy of Science. Archived from the original on 2010-12-21. Retrieved 2010-06-19. This memoir was originally published in Historical Records of Australian Science (2003) 14 (3). - Raff, Arthur D.; Mason, Roland G. (1961). "Magnetic survey off the west coast of the United States between 40°N latitude and 52°N latitude". 
Bulletin of the Geological Society of America. 72 (8): 1267–70. Bibcode:1961GSAB...72.1267R. doi:10.1130/0016-7606(1961)72[1267:MSOTWC]2.0.CO;2. ISSN 0016-7606. - Runcorn, S.K. (1956). "Paleomagnetic comparisons between Europe and North America". Proceedings, Geological Association of Canada. 8 (1088): 7785. Bibcode:1965RSPTA.258....1R. doi:10.1098/rsta.1965.0016. - Scalera, G. & Lavecchia, G. (2006). "Frontiers in earth sciences: new ideas and interpretation". Annals of Geophysics. 49 (1). doi:10.4401/ag-4406. - Scoppola, B.; Boccaletti, D.; Bevis, M.; Carminati, E.; Doglioni, C. (2006). "The westward drift of the lithosphere: A rotational drag?". Geological Society of America Bulletin. 118 (1–2): 199–209. Bibcode:2006GSAB..118..199S. doi:10.1130/B25734.1. - Segev, A (2002). "Flood basalts, continental breakup and the dispersal of Gondwana: evidence for periodic migration of upwelling mantle flows (plumes)". EGU Stephan Mueller Special Publication Series. 2: 171–91. Bibcode:2002SMSPS...2..171S. doi:10.5194/smsps-2-171-2002. - Sleep, Norman H. (1994). "Martian plate tectonics" (PDF). Journal of Geophysical Research. 99 (E3): 5639. Bibcode:1994JGR....99.5639S. CiteSeerX 10.1.1.452.2751. doi:10.1029/94JE00216. - Soderblom, Laurence A.; Tomasko, Martin G.; Archinal, Brent A.; Becker, Tammy L.; Bushroe, Michael W.; Cook, Debbie A.; Doose, Lyn R.; Galuszka, Donna M.; Hare, Trent M.; Howington-Kraus, Elpitha; Karkoschka, Erich; Kirk, Randolph L.; Lunine, Jonathan I.; McFarlane, Elisabeth A.; Redding, Bonnie L.; Rizk, Bashar; Rosiek, Mark R.; See, Charles; Smith, Peter H. (2007). "Topography and geomorphology of the Huygens landing site on Titan". Planetary and Space Science. 55 (13): 2015–24. Bibcode:2007P&SS...55.2015S. doi:10.1016/j.pss.2007.04.015. - Spence, William (1987). "Slab pull and the seismotectonics of subducting lithosphere" (PDF). Reviews of Geophysics. 25 (1): 55–69. Bibcode:1987RvGeo..25...55S. doi:10.1029/RG025i001p00055. 
- Spiess, Fred; Kuperman, William (2003). "The Marine Physical Laboratory at Scripps" (PDF). Oceanography. 16 (3): 45–54. doi:10.5670/oceanog.2003.30. Archived from the original (PDF) on 2007-09-26. - Tanimoto, Toshiro; Lay, Thorne (7 November 2000). "Mantle dynamics and seismic tomography". Proceedings of the National Academy of Sciences. 97 (23): 12409–110. Bibcode:2000PNAS...9712409T. doi:10.1073/pnas.210382197. PMC 34063. PMID 11035784. - Thomson, W (1863). "On the secular cooling of the earth". Philosophical Magazine. 4 (25): 1–14. doi:10.1080/14786446308643410. - Torsvik, Trond H.; Steinberger, Bernhard; Gurnis, Michael; Gaina, Carmen (2010). "Plate tectonics and net lithosphere rotation over the past 150 My" (PDF). Earth and Planetary Science Letters. 291 (1–4): 106–12. Bibcode:2010E&PSL.291..106T. doi:10.1016/j.epsl.2009.12.055. hdl:10852/62004. Archived from the original (PDF) on 16 May 2011. Retrieved 18 June 2010. - Valencia, Diana; O'Connell, Richard J.; Sasselov, Dimitar D (November 2007). "Inevitability of Plate Tectonics on Super-Earths". Astrophysical Journal Letters. 670 (1): L45–L48. arXiv:0710.0699. Bibcode:2007ApJ...670L..45V. doi:10.1086/524012. - Van Bemmelen, R.W. (1976), "Plate Tectonics and the Undation Model: a comparison.", Tectonophysics, 32 (3): 145–182, Bibcode:1976Tectp..32..145V, doi:10.1016/0040-1951(76)90061-5 - Van Bemmelen, R.W. (1972), "Geodynamic Models, an evaluation and a synthesis.", Developments in Geotectonics, 2, Elsevies Publ. Comp., Amsterdam, 1972, 267 Pp. - Vine, F.J.; Matthews, D.H. (1963). "Magnetic anomalies over oceanic ridges". Nature. 199 (4897): 947–949. Bibcode:1963Natur.199..947V. doi:10.1038/199947a0. - Wegener, Alfred (6 January 1912). "Die Herausbildung der Grossformen der Erdrinde (Kontinente und Ozeane), auf geophysikalischer Grundlage" (PDF). Petermanns Geographische Mitteilungen. 63: 185–95, 253–56, 305–09. Archived from the original (PDF) on 5 July 2010. - Wezel, F.-C. 
(1988), "The origin and evolution of arcs.", Tectonophysics, 146 (1–4) - White, R.; McKenzie, D. (1989). "Magmatism at rift zones: The generation of volcanic continental margins and flood basalts". Journal of Geophysical Research. 94: 7685–729. Bibcode:1989JGR....94.7685W. doi:10.1029/JB094iB06p07685. - Wilson, J.T. (8 June 1963). "Hypothesis on the Earth's behaviour". Nature. 198 (4884): 849–65. Bibcode:1963Natur.198..925T. doi:10.1038/198925a0. - Wilson, J. Tuzo (July 1965). "A new class of faults and their bearing on continental drift" (PDF). Nature. 207 (4995): 343–47. Bibcode:1965Natur.207..343W. doi:10.1038/207343a0. Archived from the original (PDF) on August 6, 2010. - Wilson, J. Tuzo (13 August 1966). "Did the Atlantic close and then re-open?". Nature. 211 (5050): 676–81. Bibcode:1966Natur.211..676W. doi:10.1038/211676a0. - Zhen Shao, Huang (1997). "Speed of the Continental Plates". The Physics Factbook. Archived from the original on 2012-02-11. - Zhao, Guochun, Cawood, Peter A., Wilde, Simon A., and Sun, M. (2002). "Review of global 2.1–1.8 Ga orogens: implications for a pre-Rodinia supercontinent". Earth-Science Reviews. 59 (1): 125–62. Bibcode:2002ESRv...59..125Z. doi:10.1016/S0012-8252(02)00073-9.CS1 maint: multiple names: authors list (link) - Zhao, Guochun, Sun, M., Wilde, Simon A., and Li, S.Z. (2004). "A Paleo-Mesoproterozoic supercontinent: assembly, growth and breakup". Earth-Science Reviews (Submitted manuscript). 67 (1): 91–123. Bibcode:2004ESRv...67...91Z. doi:10.1016/j.earscirev.2004.02.003.CS1 maint: multiple names: authors list (link) - Zhong, Shijie; Zuber, Maria T. (2001). "Degree-1 mantle convection and the crustal dichotomy on Mars" (PDF). Earth and Planetary Science Letters. 189 (1–2): 75–84. Bibcode:2001E&PSL.189...75Z. CiteSeerX 10.1.1.535.8224. doi:10.1016/S0012-821X(01)00345-4. 
|The Wikibook Historical Geology has a page on the topic of: Plate tectonics: overview| |Wikimedia Commons has media related to Plate tectonics.| - This Dynamic Earth: The Story of Plate Tectonics. USGS. - Understanding Plate Tectonics. USGS. - An explanation of tectonic forces. Example of calculations to show that Earth Rotation could be a driving force. - Bird, P. (2003); An updated digital model of plate boundaries. - Map of tectonic plates. - MORVEL plate velocity estimates and information. C. DeMets, D. Argus, & R. Gordon. - Plate Tectonics on In Our Time at the BBC
0.886997
3.5579
Mauna Kea (John Davies) |Site of the largest concentration of telescopes in the northern hemisphere, the extinct (as far as we know) volcano of Mauna Kea, on the island of Hawaii, is home to almost a dozen optical, near-infrared, sub-millimetre and radio telescopes. When I first visited the summit, in late 1980, the only telescopes present were the UH 88-inch and 24-inch, the CFHT, NASA's IRTF and the recently-completed UKIRT: add to that list the JCMT, CSO, two Kecks, Subaru and a variety of radio dishes.| The view toward Mauna Loa from UKIRT (2/11/99) |UKIRT is a 3.8 metre telescope, optimised for near-infrared work, but figured to sufficient accuracy that it can be used at optical wavelengths. It used to have the only pneumatically-driven chopping secondary in the business, but that arrangement was decided to be a bad thing, and it's much better now. See the Joint Astronomy Centre for up-to-date performance statistics.| The CFHT dome |Reputedly the tallest point on Mauna Kea, the CFHT houses a 3.6-metre telescope. The web-site is here.| Gemini (north) and the UH 88-inch dome from CFHT |CFHT also provides pictures of the summit weather, using a camera pointed toward the dome housing the Gemini-North 8-metre (scheduled to start observing in January, 2000 - or so).| The IRTF with Maui in the background The IRTF dome |The IRTF is a 3.0 metre telescope, optimised for infrared observing, and devoted to planetary observing for about 50% of the time. Pictures courtesy of the IRTF gallery.| Working from left to right, the telescope buildings enclose the JCMT and CSO (both in millimetre valley); Subaru, Keck I and Keck II on the far ridge; the UH 24-inch, UKIRT, the IRTF (actually on a peak by itself, behind the main ridge), the UH 88-inch, Gemini and the CFHT dome.
The island of Maui lies in the background The two Kecks, photographed by Richard Wainscoat from a helicopter over Mauna Kea |The Keck 10-metre telescopes are currently the largest optical telescopes in the world - and will remain the leaders for the foreseeable future. Operated jointly by the California Institute of Technology and the University of California, these two telescopes have a light grasp four times higher than the Palomar 200-inch, and can take advantage of the excellent conditions on Mauna Kea (pace the sky in the CFHT photograph). At an altitude of 14,000 feet, the observatories are well above much of the water in the Earth's atmosphere, and a fair amount of the oxygen (barometric pressure is about 2/3rds that at sea level). The night-time temperature is almost always between -2 and +2 Centigrade - not your customary picture of Hawaii, but good for astronomy. (Of course, nowadays most astronomers using the Kecks do so from the comfort of the Visitor's Quarters in Waimea - the only telescope control room where you can nip out for a Big Mac between exposures.) See here for general information. |The outstanding characteristic of the Keck telescopes is the mirror construction - a mosaic of 36 hexagonal mirrors, each machined to the appropriate curvature, each 2 metres across, and combining to give light grasp equivalent to a 10-metre diameter mirror. The largest dimension across the mirror is closer to 11 metres, in fact, allowing speckle interferometry to achieve that resolution.| Another view of the summit A scenic view of the two Kecks and the Subaru enclosure with Comet Hale-Bopp, by John Davies HST in orbit |Just about the only optical telescope which receives regular use at an altitude of more than 14,000 feet, HST is a 2.4 metre telescope equipped with imaging and spectroscopic instrumentation. See the STScI site for further documentation.|
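The "light grasp four times higher than the Palomar 200-inch" claim follows directly from aperture area, which scales as the square of mirror diameter. A minimal sketch of that arithmetic (the diameters are standard values, not taken from the page):

```python
# Light grasp (light-gathering power) scales with collecting area,
# i.e. with the square of the aperture diameter.
def light_grasp_ratio(d1_m, d2_m):
    """Ratio of collecting areas of two circular apertures."""
    return (d1_m / d2_m) ** 2

keck_d = 10.0               # Keck effective aperture, metres
palomar_d = 200 * 0.0254    # Palomar 200-inch converted to metres (~5.08 m)

ratio = light_grasp_ratio(keck_d, palomar_d)
print(f"Keck vs Palomar light grasp: {ratio:.1f}x")  # comes out close to 4x
```

The ratio lands at about 3.9, which is where the "four times" figure comes from.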
Our galaxy, the Milky Way, is destined to collide with its largest neighbor, a sparkling collection of stars called the Andromeda galaxy. This cataclysm has been foretold by well-known physics, and astronomers know that when the space dust clears, neither galaxy will look the same: Within a billion years or so of first contact, the two will merge and form a much larger, elliptical galaxy. But new measurements of stars within Andromeda, made by the European Space Agency’s Gaia space telescope, are changing predictions for when, and exactly how, that collision will go down. As astronomers report in the Astrophysical Journal, the originally predicted crash date of 3.9 billion years from now has been pushed back by about 600 million years. And instead of a head-on collision, astronomers are predicting more of an initial glancing blow—kind of like knocking into a neighbor’s rear-view mirror. “The overall picture is not too different,” says study author Roeland van der Marel of the Space Telescope Science Institute. “But the exact orbital pathways are different.” Is that good news? It sounds like this collision is still inevitable. It is inevitable. Andromeda, which is currently 2.5 million light-years away, is hurtling toward the Milky Way at nearly 250,000 miles an hour. Astronomers have known this since Vesto Slipher first aimed a telescope at Andromeda and measured the galaxy’s motion in 1912. (He didn’t know it was a galaxy at the time, when conventional wisdom suggested it was a nebulous cloud inside the Milky Way. Needless to say, Slipher’s calculations suggested that idea needed revising). Later, astronomers using the Hubble Space Telescope were able to measure the sideways motion of Andromeda, which determines whether the galaxies are destined for a direct hit or a cosmic brush-pass. Using those observations, in 2012 van der Marel and his team forecast a head-on collision in roughly 3.9 billion years—a prediction they’ve just revised. 
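As a back-of-envelope check (my own sketch, not from the study): dividing the quoted distance by the quoted speed gives a naive constant-speed travel time. The real merger estimate comes out shorter, since gravity accelerates the two galaxies as they approach.

```python
# Naive constant-speed closing time for Andromeda, using the figures
# quoted in the article: 2.5 million light-years at 250,000 mph.
LY_KM = 9.4607e12                        # kilometres per light-year
distance_km = 2.5e6 * LY_KM
speed_kms = 250_000 * 1.60934 / 3600     # mph -> km/s (~112 km/s)

seconds = distance_km / speed_kms
years = seconds / 3.156e7                # seconds per year
print(f"Naive constant-speed travel time: {years/1e9:.1f} billion years")
```

This comes out near 6.7 billion years, comfortably longer than the 4.5-billion-year prediction; the difference is the gravitational acceleration the simple estimate ignores.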
“It is interesting, even though it is in some ways a fairly minor modification of what was known previously,” says Brant Robertson of the University of California, Santa Cruz. What did Gaia do differently from Hubble? Gaia took a good look at 1,084 of the brightest stars within Andromeda and measured their motions. Then, van der Marel and his team averaged those observations and calculated Andromeda’s rotation rate for the first time, as well as making new calculations of the galaxy’s side-to-side movement. That latter observation is “fiendishly difficult to make at these distances,” says Julianne Dalcanton of the University of Washington. With those new numbers, the team re-derived Andromeda’s trajectory using computer models. And when they put the galaxy on fast-forward, it took a slightly different, more tangential path toward the Milky Way, delaying the eventual collision and delivering more of a side-swipe than a face punch. Now, predictions suggest that initial boop will occur 4.5 billion years from now, which Dalcanton says is not surprising. “Since we’re talking billions of years here,” she says, “even slight changes in the current motions can play out very differently when ‘fast forwarded’ over eons.” So, how will this galactic smackdown play out? At their first close approach, the two galaxies will be about 420,000 light-years apart, or far enough from one another that their glittering disks will not interact. However, galaxies are embedded in a large amount of dark matter, and as the Milky Way and Andromeda pass one another, those dark haloes will snag. “That causes friction, which causes them to slow down and lose energy—and fall back together,” van der Marel says. In other words, the galaxies will U-turn and actually collide, pass through one another, whip around, and collide again. This will happen over and over until eventually those collisions have sculpted them into a single galaxy. What does this mean for Earth? 
As was true for the original prediction, this merger won’t mean much of anything at all to any earthly lifeforms that still exist in 4.5 billion years. Space is big and stars are far apart, and even when galaxies collide, individual stars rarely crash into one another. “We would still find ourselves orbiting the sun on a more randomly oriented orbit within a large elliptical galaxy,” van der Marel says. Still, the cosmic light show that will unfold overhead promises to be pretty spectacular. As the two galaxies approach one another, Andromeda will grow bigger and bigger in the night sky, eventually distorting into a deformed spiral as the Milky Way’s gravity tugs on it. Then, as the galaxies begin boomeranging and smashing together, compressed gases will ignite bursts of new star formation. “That’s when it really looks pretty on the sky,” van der Marel says. The question is whether anything on Earth’s surface will still be alive to notice. By that point, the sun will be well on its way to becoming a red giant star, which is a natural stage in stellar evolution. As that happens, it will brighten and balloon outward, engulfing Mercury and Venus and turning Earth into a roasted bit of planetary charcoal.
Uranus and Neptune cannot be seen with the naked eye. Mercury and Venus are only seen near the hours of sunrise and sunset because they are closer to the Sun than the Earth. Mercury is the planet closest to the Sun. It orbits the Sun once every 88 Earth-days. Mercury has very little atmosphere because its gravity is so weak due to its small size. Like the Moon, Mercury's lack of an atmosphere means that it is struck often by other bodies, leaving its surface cratered. If it had a thicker atmosphere, these objects would burn up before they reached the surface. Another similarity between Mercury and the Moon is its slow rotation: Mercury turns on its axis just three times for every two of its 88-day orbits. Venus is the second-closest planet to the Sun, orbiting it every 224.7 Earth days. After Earth's Moon, it is the brightest object in the night sky. Venus is one of the four terrestrial planets, meaning that, like the Earth, it is a rocky body. In size and mass, it is very similar to the Earth, and is often described as its 'twin'. The diameter of Venus is only 650 km less than the Earth's, and its mass is 80% of the Earth's. However, conditions on the Venusian surface differ radically from those on Earth, due to its dense carbon dioxide atmosphere. The enormously CO2-rich atmosphere generates a strong greenhouse effect that raises the surface temperature to over 400 °C. This makes Venus' surface hotter than Mercury's, even though Venus is nearly twice as distant from the Sun and receives only about 25% of the solar irradiance that Mercury does. Earth is the only planet in the Solar System known to support life. Its atmosphere protects the Earth's life forms by absorbing ultraviolet solar radiation, moderating temperature extremes, transporting water vapor, and providing useful gases. The atmosphere is also one of the principal components in determining the weather and climate of the Earth. Mars is the fourth planet from the Sun in our solar system.
Mars is also known as "The Red Planet" due to the reddish appearance it has when seen from Earth at night. Mars has two moons, Phobos and Deimos, which are small and oddly-shaped and are possibly captured asteroids. Until the first flyby of Mars by Mariner 4 in 1965, it was thought that Mars had channels of liquid water. We now know that these channels do not exist. Still, of any planet in our solar system after the Earth, Mars is the most likely to harbor liquid water. Like Earth, it has seasons, and its rotational period is nearly the same as our own. It has the highest mountain in the solar system, Olympus Mons, the largest canyon in the solar system, Valles Marineris, and polar ice caps. Jupiter is the fifth planet from the Sun and by far the largest within the solar system. Jupiter is usually the fourth brightest object in the sky (after the Sun, the Moon and Venus); however at times Mars appears brighter than Jupiter. Jupiter is 2.5 times more massive than all the other planets combined. Jupiter also has the fastest rotation rate of any planet within the solar system, making a complete rotation on its axis in slightly less than ten hours, which results in an equatorial bulge easily seen through an Earth-based amateur telescope. Jupiter is perpetually covered with a layer of clouds, and it may not have any solid surface in that the density may simply increase gradually as you move towards the core. Its best known feature is the Great Red Spot, a storm larger than Earth. Saturn is the sixth planet from the Sun. It is a gas giant (also known as a Jovian planet, after the planet Jupiter), the second-largest planet in the solar system after Jupiter. Saturn is probably best known for its planetary rings, which make it one of the most visually remarkable objects in the solar system. Saturn is the only one of the Solar System's planets less dense than water, with an average density of 0.69 g/cm³.
This means that Saturn would float if you had a large enough body of water to place it in. Like Jupiter, it radiates more energy into space than it receives from the Sun. Saturn has a large number of moons. The precise figure is uncertain as the orbiting chunks of ice in Saturn's rings are all technically moons, and it is difficult to draw a distinction between a large ring particle and a tiny moon. Seven of the moons are massive enough to have collapsed into a spheroid under their own gravitation. Saturn's most noteworthy moon is Titan, the only moon in the solar system to have a dense atmosphere. Uranus is the seventh planet from the Sun. It is a gas giant, the third largest by diameter and fourth largest by mass. Uranus is composed primarily of gas and various ices. The atmosphere is about 85% hydrogen, 15% helium and traces of methane, while the interior is richer in heavier elements, most likely compounds of oxygen, carbon, and nitrogen, as well as rocky materials. This is in contrast to Jupiter and Saturn which are mostly hydrogen and helium. One of the most distinctive features of Uranus is its axial tilt of ninety-eight degrees. Consequently, for part of its orbit one pole faces the Sun continually while the other pole faces away. At the other side of Uranus' orbit the orientation of the poles towards the Sun is reversed. Between these two extremes of its orbit the Sun rises and sets around the equator normally. Neptune is the outermost gas giant in our solar system. It orbits the Sun once every 165 years. It is the fourth largest planet by diameter and the third largest by mass; Neptune is more massive than its near twin Uranus as its stronger gravitational field has compressed it to a higher density. Neptune's atmosphere is primarily composed of hydrogen and helium, with traces of methane that account for the planet's blue appearance. Neptune also has the strongest winds of any planet in the solar system, with estimates as high as 1550 MPH (2,500 km/h). 
Discovered on September 23, 1846, Neptune is notable for being the only planet discovered based on mathematical prediction rather than regular observations. Perturbations in the orbit of Uranus led astronomers to deduce Neptune's existence. One difference between Neptune and Uranus is the level of meteorological activity. Uranus is visually quite bland, while Neptune's high winds come with notable weather phenomena. The Great Dark Spot, a cyclonic storm system the size of Asia, was captured by Voyager 2 in the 1989 flyby. The storm resembled the Great Red Spot of Jupiter, but was shown to have disappeared in June 1994. However, a newer image of the planet taken by the Hubble Space Telescope on November 2, 1994, revealed that a smaller storm similar to its predecessor had formed over Neptune’s Northern Hemisphere. Unique among the gas giants is the presence of high clouds casting shadows on the opaque cloud deck below. Pluto is no longer considered a planet by astronomers, though it was classed as one between its discovery in 1930 and 2006 when it was reclassified a dwarf planet. It has an eccentric orbit that is highly inclined with respect to the other planets and takes it closer to the Sun than Neptune during a portion of its orbit. It is much smaller than any of the eight planets and indeed is smaller than several of their moons. Pluto itself has a large moon named Charon; two small moons were discovered in 2005, and their names (Hydra and Nix) were announced in June 2006.
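The orbital periods quoted above (Mercury's 88 days, Neptune's 165 years) all follow from Kepler's third law, T² = a³ when T is in years and a in astronomical units. A quick sketch, using standard semi-major axes not given in the text:

```python
# Kepler's third law in Sun-centred units: period in years equals
# semi-major axis in AU raised to the 3/2 power.
def period_years(a_au):
    return a_au ** 1.5

mercury_days = period_years(0.387) * 365.25   # Mercury: a = 0.387 AU (standard value)
neptune_years = period_years(30.07)           # Neptune: a = 30.07 AU (standard value)
print(f"Mercury: {mercury_days:.0f} days, Neptune: {neptune_years:.0f} years")
```

Both numbers reproduce the periods stated in the text to the precision quoted.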
A neutron star is a cold, collapsed star with nuclear density. A particular neutron star has a mass twice that of our Sun with a radius of 12.0 km. (a) What would be the weight of a 100-kg astronaut standing on its surface? (b) What does this tell us about landing on a neutron star? (a) How far from the center of Earth would the net gravitational force of Earth and the Moon on an object be zero? (b) Setting the magnitudes of the forces equal should result in two answers from the quadratic. Do you understand why there are two positions, but only one where the net force is zero? How far from the center of the Sun would the net gravitational force of Earth and the Sun on a spaceship be zero? Calculate the values of g at Earth’s surface for the following changes in Earth’s properties: (a) its mass is doubled and its radius is halved; (b) its mass density is doubled and its radius is unchanged; (c) its mass density is halved and its mass is unchanged. Suppose you can communicate with the inhabitants of a planet in another solar system. They tell you that on their planet, whose diameter and mass are and , respectively, the record for the high jump is 2.0 m. Given that this record is close to 2.4 m on Earth, what would you conclude about your extraterrestrial friends’ jumping ability? (a) Suppose that your measured weight at the equator is one-half your measured weight at the pole on a planet whose mass and diameter are equal to those of Earth. What is the rotational period of the planet? (b) Would you need to take the shape of this planet into account? A body of mass 100 kg is weighed at the North Pole and at the equator with a spring scale. What is the scale reading at these two points? Assume that at the pole. Find the speed needed to escape from the solar system starting from the surface of Earth. Assume there are no other bodies involved and do not account for the fact that Earth is moving in its orbit. [Hint: Equation 13.6 does not apply.
Use Equation 13.5 and include the potential energy of both Earth and the Sun. Consider the previous problem and include the fact that Earth has an orbital speed about the Sun of 29.8 km/s. (a) What speed relative to Earth would be needed and in what direction should you leave Earth? (b) What will be the shape of the trajectory? A comet is observed 1.50 AU from the Sun with a speed of 24.3 km/s. Is this comet in a bound or unbound orbit? An asteroid has speed 15.5 km/s when it is located 2.00 AU from the sun. At its closest approach, it is 0.400 AU from the Sun. What is its speed at that point? Space debris left from old satellites and their launchers is becoming a hazard to other satellites. (a) Calculate the speed of a satellite in an orbit 900 km above Earth’s surface. (b) Suppose a loose rivet is in an orbit of the same radius that intersects the satellite’s orbit at an angle of . What is the velocity of the rivet relative to the satellite just before striking it? (c) If its mass is 0.500 g, and it comes to rest inside the satellite, how much energy in joules is generated by the collision? (Assume the satellite’s velocity does not change appreciably, because its mass is much greater than the rivet’s.) A satellite of mass 1000 kg is in circular orbit about Earth. The radius of the orbit of the satellite is equal to two times the radius of Earth. (a) How far away is the satellite? (b) Find the kinetic, potential, and total energies of the satellite. After Ceres was promoted to a dwarf planet, we now recognize the largest known asteroid to be Vesta, with a mass of and a diameter ranging from 578 km to 458 km. Assuming that Vesta is spherical with radius 520 km, find the approximate escape velocity from its surface. (a) Given the asteroid Vesta which has a diameter of 520 km and mass of , what would be the orbital period for a space probe in a circular orbit of 10.0 km from its surface? (b) Why is this calculation marginally useful at best? 
What is the orbital velocity of our solar system about the center of the Milky Way? Assume that the mass within a sphere of radius equal to our distance away from the center is about 100 billion solar masses. Our distance from the center is 27,000 light years. (a) Using the information in the previous problem, what velocity do you need to escape the Milky Way galaxy from our present position? (b) Would you need to accelerate a spaceship to this speed relative to Earth? Circular orbits in Equation 13.10 for conic sections must have eccentricity zero. From this, and using Newton’s second law applied to centripetal acceleration, show that the value of in Equation 13.10 is given by where L is the angular momentum of the orbiting body. The value of is constant and given by this expression regardless of the type of orbit. Using the technique shown in Satellite Orbits and Energy, show that two masses and in circular orbits about their common center of mass, will have total energy . We have shown the kinetic energy of both masses explicitly. (Hint: The masses orbit at radii and , respectively, where . Be sure not to confuse the radius needed for centripetal acceleration with that for the gravitational force.) Given the perihelion distance, p, and aphelion distance, q, for an elliptical orbit, show that the velocity at perihelion, , is given by . (Hint: Use conservation of angular momentum to relate and , and then substitute into the conservation of energy equation.) Comet P/1999 R1 has a perihelion of 0.0570 AU and aphelion of 4.99 AU. Using the results of the previous problem, find its speed at aphelion. (Hint: The expression is for the perihelion. Use symmetry to rewrite the expression for aphelion.)
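Two of the problems above can be sketched numerically. This is my own worked sketch, not the textbook's solution; constants are standard values, and the relations used are Newton's law of gravitation, W = GmM/r², and the circular-orbit speed, v = √(GM/r).

```python
import math

G = 6.674e-11        # gravitational constant, N m^2 / kg^2
M_SUN = 1.989e30     # solar mass, kg
M_EARTH = 5.972e24   # Earth mass, kg
R_EARTH = 6.371e6    # Earth mean radius, m

# Neutron-star problem (a): weight of a 100-kg astronaut on a
# 2-solar-mass star of radius 12.0 km, from W = G m M / r^2.
weight = G * 100 * (2 * M_SUN) / (12.0e3) ** 2
print(f"Surface weight: {weight:.2e} N")   # ~10^14 N: landing is out of the question

# Space-debris problem (a): circular orbital speed 900 km above
# Earth's surface, from v = sqrt(G M / r).
r = R_EARTH + 900e3
v = math.sqrt(G * M_EARTH / r)
print(f"Orbital speed at 900 km altitude: {v:.0f} m/s")   # roughly 7.4 km/s
```

The astronomically large surface weight answers part (b) of the neutron-star problem by itself.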
(NASA) – Thanks to NASA’s Kepler and Spitzer Space Telescopes, scientists have made the most precise measurement ever of the radius of a planet outside our solar system. The size of the exoplanet, dubbed Kepler-93b, is now known to an uncertainty of just 74 miles (119 kilometers) on either side of the planetary body. The findings confirm Kepler-93b as a “super-Earth” that is about one-and-a-half times the size of our planet. Although super-Earths are common in the galaxy, none exist in our solar system. Exoplanets like Kepler-93b are therefore our only laboratories to study this major class of planet. With good limits on the sizes and masses of super-Earths, scientists can finally start to theorize about what makes up these weird worlds. Previous measurements, by the Keck Observatory in Hawaii, had put Kepler-93b’s mass at about 3.8 times that of Earth. The density of Kepler-93b, derived from its mass and newly obtained radius, indicates the planet is in fact very likely made of iron and rock, like Earth. “With Kepler and Spitzer, we’ve captured the most precise measurement to date of an alien planet’s size, which is critical for understanding these far-off worlds,” said Sarah Ballard, a NASA Carl Sagan Fellow at the University of Washington in Seattle and lead author of a paper on the findings published in the Astrophysical Journal. “The measurement is so precise that it’s literally like being able to measure the height of a six-foot tall person to within three quarters of an inch — if that person were standing on Jupiter,” said Ballard. Kepler-93b orbits a star located about 300 light-years away, with approximately 90 percent of the sun’s mass and radius. The exoplanet’s orbital distance — only about one-sixth that of Mercury’s from the sun — implies a scorching surface temperature around 1,400 degrees Fahrenheit (760 degrees Celsius). Despite its newfound similarities in composition to Earth, Kepler-93b is far too hot for life. 
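The iron-and-rock conclusion can be sanity-checked with simple arithmetic (my own sketch, not from the article): mean density scales as mass over radius cubed, so in Earth units it is the mass ratio divided by the cube of the radius ratio, times Earth's mean density. Earth's density and diameter here are standard values.

```python
# Rough density estimate for Kepler-93b from the quoted figures:
# mass ~3.8 Earth masses (Keck), diameter ~18,800 km (Kepler + Spitzer).
EARTH_DENSITY = 5.51        # g/cm^3, standard value
EARTH_DIAMETER_KM = 12_742  # standard value

mass_ratio = 3.8
radius_ratio = 18_800 / EARTH_DIAMETER_KM   # ~1.48, the "super-Earth" size

density = mass_ratio / radius_ratio ** 3 * EARTH_DENSITY
print(f"Mean density: {density:.1f} g/cm^3")  # denser than Earth -> iron and rock
```

The result, around 6.5 g/cm³, is consistent with the rocky, iron-rich composition the article describes.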
To make the key measurement about this toasty exoplanet’s radius, the Kepler and Spitzer telescopes each watched Kepler-93b cross, or transit, the face of its star, eclipsing a tiny portion of starlight. Kepler’s unflinching gaze also simultaneously tracked the dimming of the star caused by seismic waves moving within its interior. These readings encode precise information about the star’s interior. The team leveraged them to narrowly gauge the star’s radius, which is crucial for measuring the planetary radius. Spitzer, meanwhile, confirmed that the exoplanet’s transit looked the same in infrared light as in Kepler’s visible-light observations. These corroborating data from Spitzer — some of which were gathered in a new, precision observing mode — ruled out the possibility that Kepler’s detection of the exoplanet was bogus, or a so-called false positive. Taken together, the data boast an error bar of just one percent of the radius of Kepler-93b. The measurements mean that the planet, estimated at about 11,700 miles (18,800 kilometers) in diameter, could be bigger or smaller by about 150 miles (240 kilometers), the approximate distance between Washington, D.C., and Philadelphia. Spitzer racked up a total of seven transits of Kepler-93b between 2010 and 2011. Three of the transits were snapped using a “peak-up” observational technique. In 2011, Spitzer engineers repurposed the spacecraft’s peak-up camera, originally used to point the telescope precisely, to control where light lands on individual pixels within Spitzer’s infrared camera. The upshot of this rejiggering: Ballard and her colleagues were able to cut in half the range of uncertainty of the Spitzer measurements of the exoplanet radius, improving the agreement between the Spitzer and Kepler measurements. 
“Ballard and her team have made a major scientific advance while demonstrating the power of Spitzer’s new approach to exoplanet observations,” said Michael Werner, project scientist for the Spitzer Space Telescope at NASA’s Jet Propulsion Laboratory, Pasadena, California.
It is very easy to spot LEO satellites during dusk or dawn. I am wondering if satellites further out, in a geosynchronous orbit, are also visible. If visible at all, these would of course appear more stationary than any LEO satellites. No, and the reason is simple enough. GEO is at an altitude of 35,786 kilometres (22,236 mi) above the Earth's equator, and no satellites in geostationary or geosynchronous (GSO) orbit are large enough to reflect sufficient light towards the observer with their truss and solar panels to be visible to the naked eye from the surface of the Earth. They're simply too far away, and atmospheric diffraction doesn't help either, further blurring small, faint objects of high apparent magnitude. If you're extremely lucky with weather and other conditions from where you're observing (especially light pollution, described e.g. by the Bortle scale, which should be as low as possible to detect such faint objects), you might be able to see some with powerful binoculars or a hobbyist-grade telescope, as claimed on e.g. this website. I'd imagine, though, that it would only be possible from high altitudes, where atmospheric effects are much smaller and there shouldn't be much light pollution. Observing them while they transit a brighter background object shouldn't help much either, again due to diffraction. If you're incredibly lucky though (just musing with infinitesimally remote chances here), a foreground object in lower orbit or the upper atmosphere could momentarily align with a GEO satellite, and you might, might be able to observe a slight lensing effect on its body, if the foreground object had magnifying optical properties, say a burst of translucent propellant ejected from a rocket's nozzle. But what are the chances of that happening? No, but they are easily seen with a small telescope on a sturdy mount. March and September are the best times. Use an app to help you.
My favorite way is to keep M11, the Wild Duck Cluster, in view with a medium power eyepiece. Every few minutes, a "star" will slowly track through the southern edge!
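For reference, the GEO altitude quoted above falls straight out of Kepler's third law: an orbital period equal to one sidereal day fixes the orbital radius. A quick check (standard constants, not taken from the answer):

```python
import math

# Kepler's third law: r = (GM * T^2 / (4 * pi^2))**(1/3) for a circular orbit
# whose period T equals one sidereal day.
GM_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
T_SIDEREAL = 86164.1        # one sidereal day, s
R_EARTH = 6378.1e3          # Earth's equatorial radius, m

r = (GM_EARTH * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000
print(f"GEO altitude ~ {altitude_km:.0f} km")  # ~ 35,786 km
```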
20 APRIL 2020 By now, we have discovered hundreds of stars with multiple planets orbiting them scattered throughout the galaxy. Each one is unique, but a system orbiting the star HD 158259, 88 light-years away, is truly special. The star itself is about the same mass as the Sun and a little larger – a minority in our exoplanet hunts. It's orbited by six planets: a super-Earth and five mini-Neptunes. After monitoring it for seven years, astronomers have discovered that all six of those planets are orbiting HD 158259 in almost perfect orbital resonance. This discovery could help us to better understand the mechanisms of planetary system formation, and how they end up in the configurations we see. Orbital resonance is when the orbits of two bodies around their parent body are closely linked, as the two orbiting bodies exert gravitational influence on each other. In the Solar System, it's pretty rare in planetary bodies; probably the best example is Pluto and Neptune. These two bodies are in what is described as a 2:3 orbital resonance. For every two laps Pluto makes around the Sun, Neptune makes three. It's like bars of music being played simultaneously, but with different time signatures – two beats for the first, three for the second. Orbital resonances have also been identified in exoplanets. But each planet orbiting HD 158259 is in an almost 3:2 resonance with the next planet out away from the star, also described as a period ratio of 1.5. That means for every three orbits each planet makes, the next one out completes two. Using measurements taken using the SOPHIE spectrograph and the TESS exoplanet-hunting space telescope, an international team of researchers led by astronomer Nathan Hara of the University of Geneva in Switzerland was able to precisely calculate the orbits of each planet. They're all very tight.
Starting closest to the star – the super-Earth, revealed by TESS to be around twice the mass of Earth – the orbits are 2.17, 3.4, 5.2, 7.9, 12, and 17.4 days. These produce period ratios of 1.57, 1.51, 1.53, 1.51, and 1.44 between each pair of planets. That’s not quite perfect resonance – but it’s close enough to classify HD 158259 as an extraordinary system. And this, the researchers believe, is a sign that the planets orbiting the star did not form where they are now. “Several compact systems with several planets in, or close to, resonances are known, such as TRAPPIST-1 or Kepler-80,” explained astronomer Stephane Udry of the University of Geneva. “Such systems are believed to form far from the star before migrating towards it. In this scenario, the resonances play a crucial part.” That’s because these resonances are thought to result when planetary embryos in the protoplanetary disc grow and migrate inwards, away from the outer edge of the disc. This produces a chain of orbital resonance throughout the system. Then, once the remaining gas of the disc dissipates, this can destabilise the orbital resonances – and this could be what we’re seeing with HD 158259. And those tiny differences in the orbital resonances could tell us more about how this destabilisation is occurring. “The current departure of the period ratios from 3:2 contains a wealth of information,” Hara said. “With these values on the one hand, and tidal effect models on the other hand, we could constrain the internal structure of the planets in a future study. In summary, the current state of the system gives us a window on its formation.” The research has been published in Astronomy & Astrophysics.
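The period ratios quoted above can be reproduced directly from the orbital periods (the periods here are the rounded values given in the article, so the last decimal place of each ratio may differ slightly from the published figures):

```python
# Orbital periods of HD 158259's six planets, in days, innermost first.
periods = [2.17, 3.4, 5.2, 7.9, 12, 17.4]

# Ratio of each planet's period to that of the next planet in.
ratios = [outer / inner for inner, outer in zip(periods, periods[1:])]
print([round(r, 2) for r in ratios])
# Every ratio sits near 1.5 — close to, but not exactly, a 3:2 resonance.
```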
What does the Mayan Calendar have to do with 2012? Before answering this question, it's best to understand a little more about who the Maya were, and also what the Mayan Calendar is and how it works. The Maya were an early Mesoamerican civilization first established during 2000 BCE - 250 CE (the Preclassic period). Mayan cities reached their highest developmental peak during 250 - 900 CE (the Classic period), and continued throughout the Postclassic until the Spanish conquests, which occurred in the late 16th century. Often considered one of the greatest civilizations in North America, this remarkable culture is well recognized for its impact on Central American culture in general; and with their advanced development in art, writing, architecture, agriculture, astronomy, astrology, and mathematics, one can understand why. This culture is noted for having the only known fully developed written language of pre-Columbian America, along with spectacular artwork and monumental architecture; the Maya built magnificent cities, pyramid structures and sacred stone temples. The intricate monuments found within their cities were used for ritual and ceremonial purposes. Most astounding are the Maya's sophisticated mathematical systems and extremely accurate astronomical observations. The Maya's knowledge and use of science and mathematics helped them produce extremely accurate astronomical calculations based on naked-eye observations of the heavens; they were able to predict celestial events such as eclipses and solstices. Their charting of the celestial bodies is superior, or at least equal, to that of any other early civilization that charted these movements without a telescope. Their seemingly primitive use of numbers is actually quite complex and astonishingly accurate when compared to our modern-day understanding of concepts such as "time". Numbers played an important role in Mayan society and were essential to the workings of the Mayan calendrical system.
The Maya are noted for their development and use of the concept of zero in mathematics, and they also worked with large numbers reaching well into the hundreds of millions. Among their many types of calendars, the most important included a 260-day cycle, a 365-day cycle which approximated the solar year, a cycle which recorded the lunation periods of the Moon, and a cycle which tracked the synodic period of Venus. The Long Count calendar was used in most of Mesoamerica to track longer cycles of time. The Maya invented a system of writing that used pictographs to represent sounds. In order to record their observations of the world around them, and of their society, the Maya kept a record-keeping system that depended upon this symbolic, hieroglyphic written language. The Long Count calendar was used for tracking and recording celestial observations and historical events throughout time. The elaborate and symbolic writing was either painted or carved onto artwork such as pottery and statues, and also onto monuments such as the pyramids and temples. Books were also created from folded fig-tree bark, whose attached folded leaves formed the pages. In the early 16th century AD the Spanish conquistadores arrived. The conquest mainly took place in the northern and central Yucatán Peninsula and in various regions of the Guatemalan highlands, but would eventually spread out for the purpose of colonization and to obtain political rule over much of the western hemisphere. This Spanish conquest of Yucatán was a far more difficult and lengthier process than the similar conquests of the Aztec and Inca Empires. Concerning the Maya regions, the Spanish were initially motivated by rumors of precious metals such as gold and silver; however, most of the Maya lands were actually very poor in such resources, and the conquistadores turned their attention instead to central Mexico and Peru, which were rich in them.
Nevertheless, the conquests continued in the Mayan Yucatán/Guatemalan regions, with the Spanish attempting to force the Maya to convert to Christianity and to pledge loyalty to the King of Spain. Eventually the Spanish invaders destroyed most of the Maya's library of books, burning what they claimed to be the "work of the devil" and thus erasing most of the recorded Mayan history. Three of the books that managed to survive are known as "codices" and still remain today; a possible fragment of a fourth codex also exists. The four codices that exist today are known as the Madrid Codex, the Dresden Codex, the Paris Codex, and the Grolier Codex. What is known today about the Mayan calendars has been pieced together through extensive research; this work has been carried out by Mayan scholars based on archaeological remains, evidence obtained through the sacred codex books, and of course, by that which has been provided by the Maya culture that still thrives today in Central America. The Maya were never truly defeated. The Maya are well known for their extremely accurate and complex calendar systems and almanacs, which date back to around 600 BCE (before the common era). Early Mesoamerican calendars did not originate with the Maya, but their civilization fully developed and refined them into a more sophisticated system; the Mayan calendar is the best documented and the most completely understood. These calendrical systems were also adopted by other Mesoamerican nations such as the Aztecs and the Toltec. There were three important calendars used by the Maya and the many other Mesoamerican civilizations: these are referred to today as the Tzolk'in, the Haab', and the much-talked-about Long Count calendar that is connected to the year 2012. The Tzolk'in, a sacred 260-day count, pairs the numbers 1 through 13 with a repeating cycle of 20 named days; each number along with its named day can occur only once within the entire 260-day cycle.
After every unique combination of number-plus-day was cycled through ("13 Ajaw" being the 260th day), the Tzolk'in count was completed, reset, and would start over again at number one, day one ("1 Imix'"). Here is the complete list of the Tzolk'in day names and their meanings:
1 Imix' - waterlily, crocodile
2 Ik' - wind
3 Ak'b'al - darkness, night, early dawn
4 K'an - maize
5 Chikchan - celestial snake
6 Kimi - death, transformer
7 Manik' - deer
8 Lamat - Venus, star
9 Muluk - jade, water, offering
10 Ok - dog
11 Chuwen - monkey
12 Eb' - rain
13 B'en - green/young maize, seed
14 Ix - jaguar
15 Men - eagle
16 Kib' - wisdom, wax
17 Kab'an - earth
18 Etz'nab' - flint
19 Kawak - storm
20 Ajaw - lord, ruler, sun
The Tzolk'in calendar played a major role in Maya society because it determined when important events would or should take place, such as community rituals, religious observances, sacred ceremonies, celebrations, and other major events. The Tzolk'in calendar was also commonly used for fortune-telling and divination. Each of the named days in the Tzolk'in count had its own omens and associations. An individual's personal character traits and even their destiny could be determined by the day of their birth. The concept is very similar to that of astrology; the Maya were excellent astrologers themselves. The Tzolk'in is still used today within several Maya communities in the Guatemalan highlands. Its use is quickly spreading throughout the surrounding area, despite opposition from Evangelical Christian converts who have erased its use from many of the communities. The significance of this 260-day cycle is not completely understood; it does not seem to relate to any solar or lunar cycle, and its origin and purpose remain something of a mystery. The numbers 13 and 20 were considered sacred to the Maya, and 13 multiplied by 20 equals 260.
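The number-name pairing described above can be sketched in a few lines (a hypothetical helper of my own, using the day names listed in the article):

```python
# The Tzolk'in pairs the numbers 1-13 with a cycle of 20 day names,
# producing 260 unique number+name combinations before repeating.
DAY_NAMES = ["Imix'", "Ik'", "Ak'b'al", "K'an", "Chikchan", "Kimi", "Manik'",
             "Lamat", "Muluk", "Ok", "Chuwen", "Eb'", "B'en", "Ix", "Men",
             "Kib'", "Kab'an", "Etz'nab'", "Kawak", "Ajaw"]

def tzolkin_day(n):
    """Return the Tzolk'in designation of day n (1-based) in the 260-day cycle."""
    number = (n - 1) % 13 + 1          # numbers cycle 1..13
    name = DAY_NAMES[(n - 1) % 20]     # names cycle through all 20
    return f"{number} {name}"

print(tzolkin_day(1))    # 1 Imix'
print(tzolkin_day(260))  # 13 Ajaw — the 260th and final combination
print(tzolkin_day(261))  # 1 Imix' — the count resets
```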
The Tzolk'in may also have been created to correspond with the cycles of the planet Venus or with the human gestation period, which is around 260 days. It may also have been used for agricultural purposes, corresponding with the planting and harvesting cycles. The Haab' is a civil calendar based on a 365-day cycle which approximates the solar year. The Haab' count consists of 18 named "months" of 20 days each, plus a period of five "nameless days" at the end of the cycle. This 365-day Haab' cycle is also referred to as a "Vague Year" because the actual solar year is about a quarter of a day longer (roughly 365.25 days). The Maya knew that the Haab' was shorter than the actual tropical year, but they did not feel the need to adjust their calendar to reflect this the way we do today with the leap year. The five intercalated "nameless days" added to the end of the Haab' cycle were called the Wayeb' (or Uayeb), a period often referred to as "no time". The Maya dreaded the Wayeb' and considered it to be extremely unlucky and even dangerous. Setting aside these five days of "no time", the Haab' cycle becomes a 360-day count. The last day of each month made use of the Maya's concept of zero; instead of being numbered 20, it was represented by a "sign" that indicated the day of "seating" of the month to follow. The day of seating was also associated with a deity whose influence would be felt for the entire 20-day period. The 20th day of each Haab' month can thus be considered "day 0" of the next month, followed by its days 1-19, whose last day is in turn day 0 of the month after, and so on. This is because the Maya felt that the influence of any particular span of time is felt before it actually begins, and also for a short while after its completion.
Here are the Haab' month names and their meanings:
1 Pop - mat
2 Wo - black conjunction
3 Sip - red conjunction
4 Sotz' - bat
5 Sek - unknown
6 Xul - dog
7 Yaxk'in - new sun
8 Mol - water
9 Ch'en - black storm
10 Yax - green storm
11 Sak - white storm
12 Keh - red storm
13 Mak - enclosed
14 K'ank'in - yellow sun
15 Muwan - owl
16 Pax - planting time
17 K'ayab' - turtle
18 Kumk'u - granary
19 Wayeb' - five unlucky days
The Haab' calendar is somewhat similar to the Gregorian calendar used today, i.e., there are monthly cycles within the yearly cycle. As in the Tzolk'in calendar, each Haab' day has its own hieroglyph and number to represent it. The Haab' calendar was used for economic and accounting activities in the society, and also for agricultural purposes. It would take 52 years before the two cycles (the Haab' and the Tzolk'in) would again meet to produce any individual combination, or "Calendar Round date". Each day in the 260-day Tzolk'in count also had a position in the 365-day Haab' count. When the calendars are used together like this, each unique day, such as "1 K'an 5 Wayeb" for example, cannot return until the next Calendar Round cycle; this is the equivalent of 18,980 days. Since the Calendar Round was based on the synchronization of the Haab' and the Tzolk'in, the combined cycles of both repeat exactly every 52 Haab' cycles. The Calendar Round date was satisfactory enough to be considered the "official date" in Maya society, considering that 52 years was above the general life expectancy of an individual at that time. But of course the Maya understood that the cyclical nature of time existed long before birth and would continue to exist long after death. An accurate and sophisticated way to calculate time in a more linear fashion was needed in order to record historical events and important cycles of time. This brings us to yet another calendar, one used to calculate time periods that exceeded the 52-year Calendar Round cycle.
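The length of the Calendar Round is simply the point at which the 260-day Tzolk'in and the 365-day Haab' realign, i.e. the least common multiple of the two cycle lengths (18,980 days, or 52 Haab' years):

```python
from math import gcd

# The Calendar Round repeats when the 260-day Tzolk'in and the
# 365-day Haab' realign: the least common multiple of the two lengths.
TZOLKIN, HAAB = 260, 365

calendar_round_days = TZOLKIN * HAAB // gcd(TZOLKIN, HAAB)
print(calendar_round_days)             # 18980 days
print(calendar_round_days // HAAB)     # 52 Haab' years
print(calendar_round_days // TZOLKIN)  # 73 Tzolk'in cycles
```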
This is the much-talked-about "Long Count" calendar that provides a synchronistic connection to the year 2012. The Long Count calendar was especially important in Mesoamerican culture. It was used for tracking longer durations of time that exceeded the 52-year Calendar Round cycle. In this calendar, each individual day is identified by counting the number of days that have elapsed since the end of the last Maya Great Cycle. The entire Long Count calendar is made up of smaller cycles that occur within ever larger cycles of elapsed time; the higher cycles could record time periods spanning thousands of years. Full knowledge of this complex calendar system was known only to the ruling Maya elite and was guarded as a sacred source of great power. The Maya inscribed Long Count dates - actually the picture-glyphs that represented the numbers - on their many artifacts and monuments. The Long Count was displayed vertically from top to bottom; the count would start with the Baktun at the top and then descend in order, with each numerical coefficient following a base-20 pattern with the exception of the Winal, which cycles to 18:
20 K'atuns = 1 Bak'tun (144,000 days)
20 Tuns = 1 K'atun (7,200 days)
18 Uinals = 1 Tun (360 days)
20 K'ins = 1 Uinal/Winal (20 days)
The Maya name for a day is K'in (one day of the sun). The starting point of the current Long Count cycle began on August 11, 3114 BCE. This is considered day 0 in the Long Count calendar and would be displayed as 13.0.0.0.0 (13 Baktun, 0 Katun, 0 Tun, 0 Uinal, 0 Kin). Each Long Count date also has its corresponding Calendar Round date; the Calendar Round date for August 11, 3114 BCE is 4 Ajaw 8 Kumk'u. The Maya thought of time as being cyclical in nature; in other words, time is considered to be a circle consisting of recurring cycles or "ages". The ending of a Great Cycle (i.e., the ending of one age and the start of a new one) had much significance for the Maya.
The Long Count can be considered a more "linear" look at time since it counts the number of days that have passed since a fixed starting point, similar to the way our current Gregorian calendar does. Thirteen Baktuns complete one "Great Cycle", equivalent to 5,200 Tuns of 360 days each, or about 5,125 tropical years; the 13 Baktuns of the Long Count span a period of exactly 1,872,000 days. Many Mayan rituals were concerned with the completion and recurrence of the various cycles. The calculations on this page use the correlation constant 584283 to convert Maya Long Count dates to Gregorian dates and vice versa. The Long Count calendar was designed to always work the same way while still being accurate, even millions of years into the future or backwards into the past. The Maya understood and approached time in a unique way, and their calendars reflected this understanding. Before tackling the calendar system, it's best to understand the way the Maya used numbers. Since no calculators or computers were available in ancient Mesoamerica, a system of bars and dots was used for counting. A dot represented a value of one, a bar represented a value of five, and zero was represented by a shell-shaped glyph. Numbers were written vertically from bottom to top, and each successive position represented a value twenty times greater than the one below it. Advanced mathematical calculations reaching into the millions and beyond could be carried out with this system. The Long Count calendar uses a base-20 (vigesimal) numeral system rather than the base-10 decimal system mostly used today in the West. In the Maya system, for example, 0.0.0.1.5 is equal to 25 and 0.0.0.2.0 is equal to 40. When performing basic mathematics the Maya would use a pure base-20 numbering, but when it came to the cycles of time within a calendar system a modified base-20 system was used.
One Tun can be considered a "year" of 360 days; this is the Haab' year of 18 months of 20 days each, but without the addition of the five-day Wayeb'. For example, in 0.0.1.0.0 the "1" represents 360 days. Long Count dates found on architectural structures and on several Mesoamerican artifacts were often represented by hieroglyphic images (pictographs) arranged vertically; the calendrical cycles, along with corresponding celestial events, are documented in the Maya codices. A Long Count date consists of 5 digits: #.#.#.#.# The Baktun is the first digit in the Long Count sequence, the Katun the second, the Tun the third, the Uinal (or Winal) the fourth, and the Kin the fifth and last. 1 Kin = 1 day; the rest of the calendar is dependent upon this day count. In the Long Count, every unit of a given position represents twenty times the unit of the position preceding it, with the exception of the Winal, which represents multiples of 18×20 instead (i.e. 1 Tun = 18 Uinals = 360 days). Since the Long Count calendar follows a modified base-20 scheme, each number position cycles from 0-19. This base-20 pattern is consistent with one exception: the Winal unit resets after counting only to 18 (0-17). The base-20 pattern holds consistently only if the Tun is taken as the primary unit of measurement instead of the Kin; the K'in and Winal units represent the number of days within the Tun. The Long Count 0.0.1.0.0 represents 360 days, not 400 as it would in a pure base-20 count. Consider the Long Count date 12.19.15.17.18 - 5 Etz'nab 1 Muwan (12 Baktun, 19 Katun, 15 Tun, 17 Uinal, 18 Kin; "5 Etz'nab 1 Muwan" is the Calendar Round date).
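The mixed-radix place values just described, plus the correlation constant mentioned earlier, are all that is needed to turn a Long Count into a Gregorian date. A minimal sketch (function names are my own; it assumes the 584283 correlation and only handles CE-era dates, since Python's `date` cannot represent 3114 BCE):

```python
from datetime import date

# Place values of the five Long Count positions, most significant first.
# Note the mixed radix: 1 tun = 18 uinals; every other step is base 20.
PLACE_VALUES = [144_000, 7_200, 360, 20, 1]  # baktun, katun, tun, uinal, kin

GMT_CORRELATION = 584_283  # Julian Day Number of the Long Count epoch

def long_count_to_days(lc):
    """Days elapsed since the epoch for a Long Count tuple like (12, 19, 16, 0, 1)."""
    return sum(digit * value for digit, value in zip(lc, PLACE_VALUES))

def long_count_to_gregorian(lc):
    """Convert a CE-era Long Count date to a Gregorian calendar date."""
    jdn = GMT_CORRELATION + long_count_to_days(lc)
    # date(2000, 1, 1) has ordinal 730120 and Julian Day Number 2451545.
    return date.fromordinal(jdn - 2_451_545 + 730_120)

print(long_count_to_days((0, 0, 0, 1, 5)))  # 25, as in the text above
print(long_count_to_days((0, 0, 1, 0, 0)))  # 360, one tun
# 13.0.0.0.0 read as 13 completed baktuns, i.e. 1,872,000 elapsed days:
print(long_count_to_gregorian((13, 0, 0, 0, 0)))  # 2012-12-21
```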
The next day would be 12.19.15.17.19 - 6 Kawak 2 Muwan. The individual day count (the Kin) is always the first to cycle, counting up until 19 is reached and then starting again at 0; when that happens, the number immediately to its left increases in value until it too reaches the end of its cycle, and so on up into the higher cycles. Still following the example above, the next day would be 12.19.16.0.0 - 7 Ajaw 3 Muwan. The Kin has reset to zero, so the 17 (which has also completed its cycle) resets to zero as well. The Winal does not turn to 18 because it cycles from 0-17, not 0-19. The 15 has changed to 16 because the position to its right has turned to zero. In other words, once a number reaches the end of its cycle it begins again at zero, but only after the number to its immediate right has cycled to zero. The next day in the count would be 12.19.16.0.1 (which is January 12, 2009 when converted to the Gregorian calendar using the correlation constant 584283).
The cycles of the Long Count calendar:
1 K'in = 1 day
1 Uinal = 20 K'ins (20 days / 1 "month")
1 Tun = 18 Uinals (360 days / roughly 1 solar year)
1 K'atun = 20 Tuns (7,200 days)
1 B'aktun = 20 K'atuns (144,000 days)
The Maya had even larger base-20 units - the Piktun, Kalabtun, K'inchiltun, and Alautun. These higher-order cycles would create digit counts well into the millions and beyond, but they were rarely used:
1 Piktun = 20 Bak'tuns / 8,000 Tuns (2,880,000 days)
1 Kalabtun = 20 Piktuns / 160,000 Tuns (57,600,000 days)
1 K'inchiltun = 20 Kalabtuns / 3,200,000 Tuns (1,152,000,000 days)
1 Alautun = 20 K'inchiltuns / 64,000,000 Tuns (23,040,000,000 days)
The 13th Baktun. Now let's consider the Long Count date 12.19.19.17.18.
The next day in this Long Count would be 12.19.19.17.19. On the day after that, the Kin position will cycle to zero; this also brings the 17 to completion, so it too cycles to zero, and, as we can see, so do the Tun, the Katun, and the Baktun positions. So the Long Count date after 12.19.19.17.19 becomes 13.0.0.0.0. This day in the Gregorian calendar is December 21, 2012. Remember, the Long Count system is a consecutive count of days since the ending of the last Great Cycle and the start of the current cycle of 13 Baktuns. So on December 21, 2012 a total of 1,872,000 days will have elapsed since the beginning of the current cycle on August 11, 3114 BCE. Thus the 13-Baktun Great Cycle of the Long Count comes to completion, and the next Maya Great Cycle begins. The ancient Maya (and the many Maya who still exist today) would consider this to be a time of great change, the beginning of a new cycle or era of time - the ending of a past cycle and the beginning of a new one. "Perhaps the greatest gift the Mayan culture has given our world is the Zero Point to the Precessional Cycle of 26,000 years."
What time is it anyway? The Maya believed that time was cyclical in nature. The Long Count calendar can be considered one giant Great Cycle of time containing smaller cycles of time that can be measured relative to one another. Modern calendars are based on a somewhat linear concept of time (time is considered to be constantly moving forward in a straight line leading into infinity), but the Maya thought of time as a "circle" where cycles repeat themselves over and over again infinitely, instead of strictly traveling out along an infinite straight line, never to repeat.
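The carry rule walked through above - every position rolls over at 20 except the Uinal, which rolls over at 18 - can be sketched as a small increment function (a hypothetical helper, not a historical algorithm):

```python
# Incrementing a Long Count by one day with the mixed-radix carry rule:
# each position cycles 0-19, except the uinal, which cycles 0-17.
RADICES = [20, 20, 20, 18, 20]  # baktun, katun, tun, uinal, kin

def next_day(lc):
    """Return the Long Count tuple for the day after lc."""
    digits = list(lc)
    for i in range(4, -1, -1):   # carry from the kin leftwards
        digits[i] += 1
        if digits[i] < RADICES[i]:
            break                # no carry needed; done
        digits[i] = 0            # position completed its cycle; carry on
    return tuple(digits)

print(next_day((12, 19, 15, 17, 19)))  # (12, 19, 16, 0, 0)
print(next_day((12, 19, 19, 17, 19)))  # (13, 0, 0, 0, 0) — December 21, 2012
```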
Other ancient cultures - the Inca, Hopi, and other Native American tribes, plus the Babylonians, ancient Greeks, Hindus, Buddhists, Jains, and others - share this "wheel of time" concept that regards time as cyclical and consisting of repeating ages. Each day in the Mayan calendar had a specific meaning and was associated with their religious beliefs; the same can be said for the larger cycles of time that would take years to pass. Many rituals and ceremonies, as well as other major aspects of Mayan society, were based upon the calendrical cycles. You are probably familiar with some of the cycles that exist, such as the human gestation cycle, the seasonal and weather cycles, the birth/death cycle, and so on. The Maya did indeed also understand the concept of linear time (past-present-future). This is one of the main purposes of the Long Count calendar: it counts linear cycles of time within the greater cycles, based upon the number of days that have elapsed since a fixed starting point. By using the Long Count calendar, cultural and historical events could be recorded in a linear relationship to one another; celestial events such as eclipses could be predetermined; and dates could be recorded over extremely long periods of time. With the Long Count, a day that occurred hundreds of thousands of years in the past, or that lies far ahead in the future, could be pinpointed at any given time. Given the cyclical nature of time, the influence of individual cycles would have a particular association, influence, or impact on the universe and all that is contained within it at that time. The major cycles within the Long Count could determine when major events in the past had occurred and also when those events or influences would occur again in the future. The Maya believed that in order to understand the past, one would need to understand the cyclical influences that create the present.
In turn, by knowing and understanding present influences one could see the cyclical influences that were to come in the future. Linear patterns can be identified within these cycles of time with the Baktun, the Katun, the Tun, the Winal, and the Kin. The Kin, or one day of the sun, is the day count which is the basis for counting the entire span of days within a Long Count date. The Long Count calendar enabled one to identify a unique date that occurred within the "Great Cycle" of time. The Maya considered one Great Cycle to consist of thirteen Baktuns, a linear count of 1,872,000 days (5,200 Tuns, or close to 5,125 years). One Great Cycle is equivalent to 13 Baktuns, and one Baktun is equivalent to 144,000 days; 144,000 multiplied by 13 = 1,872,000 days. So this 1,872,000-day period (13 Baktuns) was considered an "era" or "age" of time, but was more commonly called a "Great Cycle" of time. The Maya are often credited as being the first people to establish a chronological record of dates that begins with a "fixed day" in the past; this fixed day was the starting point of the calendar, and each day was numbered accordingly as time progressed. This starting point marks the beginning of recorded time and also indicates the date when they believed the world had come to an end and was recreated. This starting point (August 11, 3114 BC) marks the beginning of a new Great Cycle and is often referred to as the "Mayan Epoch" or day "zero". The ending of one cycle and the beginning of the next would mark a period of extraordinary celebration for the Maya, because it would signify a tremendous period of change - a transition or transformation of the planet (perhaps of the universe), and of the collective consciousness itself. The complete significance of this date is unknown, but the Maya may have thought of this starting point as marking the beginning of the Mayan creation era, or the beginning of a Great Cycle in time.
This date is also believed to be the beginning of what the Mayans called the "fifth creation". In December 2012 there will occur a unique conjunction of the winter solstice sun with the crossing point of the galactic equator and the ecliptic, and it is debated whether the Maya deliberately targeted this future date to fall at the close of the 13-baktun era. The Maya considered the cyclical nature of time to be reflected in the natural laws of the universe; they believed that these cycles would repeat themselves infinitely through time, and that even civilizations of people would rise and fall in a cyclical progression according to these cyclical laws of nature; thus, they could be predicted ahead of time. On December 20, 2012 the Maya Long Count date will be: 12.19.19.17.19 (12 baktun 19 katun 19 tun 17 uinal 19 kin). On December 21, 2012 the Maya Long Count date will be: 13.0.0.0.0. This is the ending of a Great Cycle of 1,872,000 days that have elapsed since its beginning on August 11th, 3114 BCE. It is also interesting to note that 2012 is a leap year in the Gregorian calendar. There is also debate among Maya scholars and astronomers about which is the correct correlation constant (Gregorian date) that marks the start of the cycle (13.0.0.0.0). When converting dates using the correlation constant 584283, we see that the current Baktun cycle began on August 11th, 3114 BC in the Gregorian calendar and completes on December 21, 2012; this is the most widely accepted correlation. When using the correlation constant 584285 instead, the beginning would be August 13, 3114 BC and the cycle would be complete on December 23, 2012.
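The place values underlying these Long Count dates (1 winal = 20 kin, 1 tun = 18 winal = 360 days, 1 katun = 20 tun, 1 baktun = 20 katun = 144,000 days) make the conversion a simple positional computation. A minimal sketch in Python; the function name is our own, and days are assumed to be counted from the epoch as day 0:

```python
def days_to_long_count(days):
    """Convert a day count since the Mayan epoch (day 0) into the five
    Long Count places: baktun, katun, tun, winal, kin."""
    # Place values in days: baktun=144000, katun=7200, tun=360, winal=20, kin=1
    baktun, rest = divmod(days, 144_000)
    katun, rest = divmod(rest, 7_200)
    tun, rest = divmod(rest, 360)
    winal, kin = divmod(rest, 20)
    return (baktun, katun, tun, winal, kin)

# The last day of the Great Cycle and the day that completes it:
print(days_to_long_count(1_871_999))  # (12, 19, 19, 17, 19) -> 20 December 2012
print(days_to_long_count(1_872_000))  # (13, 0, 0, 0, 0)     -> 21 December 2012
```

Adding a correlation constant (such as 584283) to the day count would give the corresponding Julian Day Number, which is how Long Count dates are matched to the Gregorian calendar.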
This date is not as widely accepted by Mayan scholars after comparing the archaeological evidence that contains inscribed Long Count dates. Other Mayan scholars even use completely different correlation constants from the ones given above. It can all get very confusing. No matter which date is correct, there is no doubt that the period between December 20-25th of 2012 marks the ending of one cycle and the beginning of the next. Either way, this would be a very extraordinary time considering the synchronistic astronomical events that are also taking place around the same time. The Maya would have recognized this as a new era of time, a "new age", and it would signify that a great change would take place; a "shifting" of energy, if you will. And it is also no coincidence that, according to astrology (something the Maya were quite developed in), we are nearing the peak of the "Age of Aquarius" around this time. The date December 21st, 2012 A.D. represents an extremely close conjunction of the Winter Solstice Sun with the crossing point of the Galactic Equator (the equator of the Milky Way) and the Ecliptic (the path of the Sun). The ancient Maya civilization recognized this crossing point as the Sacred Tree. This event has been manifesting very slowly over many thousands of years. It will come to completion at exactly 11:11 am GMT. "The Gregorian calendar was created (as was daylight savings time) as a way to disassociate people from the natural cyclical rhythms of their bodies and the Earth. Y2K was a planned scam to divert attention from the truth and to make jaded skeptics of us all. In 2012 one cycle ends and another begins; it does not signify 'the end of the world' so much as the beginning of a new cycle of consciousness." A solstice is either of the two events of the year when the sun is at its greatest distance from the equatorial plane.
The name is derived from Latin sol (sun) and sistere (to stand still), because at the solstice the Sun stands still in declination; that is, its declination reaches a maximum or a minimum. The term solstice can also be used in a wider sense, as the date (day) on which such a passage happens. The solstices, together with the equinoxes, are related to the seasons. In some languages they are considered to start or separate the seasons; in others they are considered to be center points (in English, in the Northern Hemisphere, for example, the period around the June solstice is known as midsummer, and Midsummer's Day is 24 June, now two or three days after the solstice). The Winter Solstice will occur at 11:11 am Universal Time on December 21, 2012. In The Message of the Sphinx, the Sphinx is described as a type of "clock" telling the "First Time" and the "Last Time"; the "Last Time" will occur that day at 11:11 pm Cairo/Giza time, or 9:11 pm Universal Time. The fact that the Winter Solstice occurs that day at 11:11 am clocks the year 2012 into place. In general, the Judaeo-Christian concept, based on the Bible, is that time is linear, with a beginning, the act of creation by God. The Christian view assumes also an end, the eschaton, expected to happen when Christ returns to earth in the Second Coming to judge the living and the dead. This will be the consummation of the world and time. St Augustine's City of God was the first developed application of this concept to world history. The Christian view is that God is uncreated and eternal, so that He and the supernatural world are outside time and exist in eternity. Christian Science defines time as "error" or illusion.
Ancient cultures such as the Incan, Mayan, Hopi, and other Native American tribes, plus the Babylonian, Ancient Greek, Hindu, Buddhist, Jain, and others, have a concept of a wheel of time that regards time as cyclical, consisting of repeating ages that happen to every being of the Universe between birth and extinction.
Ø-scillation: oscillating chemistry in zero gravity and beyond. How did life originate? Nobody knows. Life might not even be native to our Earth; it might have come from asteroids or the interstellar medium. While pioneering laboratory studies recently made progress for prebiotic (origin-of-life) chemistry, the question arises whether such reactions would also work in zero-gravity environments. With Ø-scillation, we aim to achieve the first steps towards an answer using a proxy reaction: how does zero (and hyper-) gravity affect the reaction rates in oscillating reactions? In this study, we develop a robust, compact, and simplified version of the Briggs-Rauscher experiment, an oscillating chemical reaction often called the Iodine clock, which cycles through amber and blue colors. The hypothesis to be tested is that different gravity environments do not alter the reaction rates of the involved chemistry. If this can be confirmed, we might just be able to add another piece to the puzzle of life. How did life originate? A short question, but one that has driven humankind for millennia. Still, nobody knows the answer. The beginning exploration of space and the rise of exoplanet science, with its discovery of temperate worlds, open up a new avenue in the search for the origin of life. Multidisciplinary research and pioneering laboratory studies have shown potential pathways for abiogenesis, forming precursors for RNA (e.g. Powner et al., 2009; Patel et al., 2015; Xu et al., 2018; Rimmer et al., 2018). But do the mechanisms working in a laboratory, and proposed to work on a young Earth, also work in more extreme environments? What if life has not actually formed on Earth? Some theories postulate that life might have formed on asteroids and was then distributed onto Earth (e.g. Ciesla & Sandford, 2012).
Moreover, in our search for habitable exoplanets, one can wonder if and how life might have originated on very extreme planets, ones ranging from the size and low gravity of the Moon to so-called Super-Earths, which can have up to 8 times the surface gravity of our Earth. Our goal is to study whether this Earth-bound prebiotic chemistry could also work in extreme environments ranging from zero G to multiple G, by investigating how altered gravity affects the mixing and reaction rates, accelerating or decelerating the chemistry. To answer this, we are designing a set of experiments to be conducted over the next 10 years which will help us answer this question. In our first proposal in this series, we want to make use of the unique opportunity provided by a parabolic zero-gravity flight, which will let us experience a continuous variation of gravity environments. In doing so, we will build on and expand the scope of previous experiments (e.g. Fujieda et al., 1996, 1999). In this study, we develop a robust, compact, and simplified version of the Briggs-Rauscher experiment, an oscillating chemical reaction often called the Iodine clock. The hypothesis to be tested is that different gravity environments do not change the reaction rates of this oscillating chemistry experiment. Based on the three main components potassium iodate, hydrogen peroxide, and starch, the reaction changes color from blue to amber in a cyclic way. Each color cycle takes about 20 seconds, and a total of circa 10 cycles are completed. The full experiment will run for about 3 minutes. See Video 1 for one cycle of the reaction, hosted in our flight setup. The final assembly and detailed design drawings can be seen in Figs. 1-3. A full overview of the reactions is given by Schematic 1.
In the parabolic flight, we will be able to experimentally measure the reaction rates by monitoring the color change over all 10 cycles, which will let us iterate through 2-3 flight parabolas. We will record the color change using optical video cameras mounted onto the experimental setup (Fig. 3). In the post-processing and video analysis, we will extract the color information from each video pixel and record a time series of the color expression. We will mount two TERMITES sensors on the setup, which will record the 3D acceleration, temperature, pressure, and more during flight (Fig. 4). We will then cross-correlate these color time series with the TERMITES' accelerometer time series (Fig. 5). Our experimental setup allows for 2 x 4 experiments, whereby 4 experiments will be conducted at the same time, allowing us to assess the standard deviation intrinsic to the experiment. Finally, we will compare the outcome of the 2 x 4 flight experiments with our ground-based laboratory studies, to see whether any significant changes have occurred. Our laboratory studies will be conducted at 1 G (Earth gravity), as well as in centrifuges to assess the dependence on 1-2 G environments. Video 1: One full cycle of the Briggs-Rauscher reaction, changing color from blue to amber and back to blue within 20 seconds; hosted in our final setup. Credit: Maximilian N. Günther. Figure 1: The full design, with the acrylic block at its center, the mounting stage and camera above, and everything illuminated by an LED strip. Credit: Maximilian N. Günther. Schematic 1: The steps taking place during the Briggs-Rauscher reaction. Figure 2: The CAD modeling and resulting design drawing for the clear acrylic block. Hosting the Petri dishes and syringes, this is the heart of the setup. Credit: Maximilian N. Günther, Joel Villaseñor, Richard Hall. Figure 3: The CAD modeling and resulting design drawing for the camera mounting stage. Credit: Maximilian N. Günther. Figure 4: A TERMITES sensor, which we will use to monitor the 3D acceleration, temperature, pressure, and more during flight. Credit: Carson Smuts. Figure 5: Example of TERMITES data taken on a Zero-G flight in 2018. The orange curve shows the acceleration in the z-direction, reflecting the gravity experienced during the parabolas. Credit: Carson Smuts. Maximilian N. Günther, with Matt Carney, Juliana Cherston, Maggie Cobletz, Ariel Ekblaw, Natalia Guerrero, Richard Hall, Corinna Kufner, Xin Liu, Janusz Petkowski, Sukrit Ranjan, Paul Rimmer, Jessica Santivañéz, Dimitar Sasselov, Sara Seager, Clara Sousa-Silva, Carson Smuts, Valentina Sumini, Zoe R. Todd, Joel Villaseñor. Ciesla, F. J., & Sandford, S. A. (2012). Organic Synthesis via Irradiation and Warming of Ice Grains in the Solar Nebula. Science, 336(6080), 452-454. https://doi.org/10.1126/science.1217291 Fujieda, S., Mogami, Y., Zhang, W., & Araiso, T. (1996). Experimental System Assembled for Studying the Chemical Oscillation Behavior of Belousov-Zhabotinskii Reactions in Microgravity. Analytical Sciences, 12(5), 815-818. https://doi.org/10.2116/analsci.12.815 Fujieda, S., Mogami, Y., Moriyasu, K., & Mori, Y. (1999). Nonequilibrium/nonlinear chemical oscillation in the virtual absence of gravity. Advances in Space Research, 23(12), 2057-2063. https://doi.org/10.1016/S0273-1177(99)00163-5 Powner, M. W., Gerland, B., & Sutherland, J. D. (2009). Synthesis of activated pyrimidine ribonucleotides in prebiotically plausible conditions. Nature, 459(7244), 239-242. https://doi.org/10.1038/nature08013 Patel, B. H., Percivalle, C., Ritson, D. J., Duffy, C. D., & Sutherland, J. D. (2015). Common origins of RNA, protein and lipid precursors in a cyanosulfidic protometabolism. Nature Chemistry, 7(4), 301-307. https://doi.org/10.1038/nchem.2202 Rimmer, P. B., Xu, J., Thompson, S. J., Gillen, E., Sutherland, J. D., & Queloz, D. (2018).
The origin of RNA precursors on exoplanets. Science Advances, 4, eaar3302. https://doi.org/10.1126/sciadv.aar3302 Xu, J., Ritson, D. J., Ranjan, S., Todd, Z. R., Sasselov, D. D., & Sutherland, J. D. (2018). Photochemical reductive homologation of hydrogen cyanide using sulfite and ferrocyanide. Chemical Communications, 54(44), 5566–5569.
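The video post-processing described in the experiment plan (extracting a color time series from the frames and cross-correlating it with the TERMITES accelerometer record) can be outlined in a few lines of Python. This is an illustrative sketch only, using synthetic toy data rather than the team's actual pipeline; the blue-minus-red reduction is our own crude proxy for the amber/blue oscillation:

```python
import numpy as np

def color_time_series(frames):
    """Reduce a stack of RGB video frames (T, H, W, 3) to one scalar per
    frame: the mean blue-minus-red excess across all pixels."""
    frames = np.asarray(frames, dtype=float)
    return (frames[..., 2] - frames[..., 0]).mean(axis=(1, 2))

def cross_correlate(color, accel):
    """Normalized cross-correlation between the color signal and an
    equally sampled accelerometer signal, as a function of lag."""
    c = (color - color.mean()) / (color.std() + 1e-12)
    a = (accel - accel.mean()) / (accel.std() + 1e-12)
    return np.correlate(c, a, mode="full") / len(c)

# Toy data: a 20-sample-period oscillation in the blue channel of tiny
# 4x4 frames, and an accelerometer trace with the same period.
t = np.arange(200)
frames = np.zeros((200, 4, 4, 3))
frames[..., 2] = (np.sin(2 * np.pi * t / 20) > 0)[:, None, None]
accel = np.sin(2 * np.pi * t / 20)

xc = cross_correlate(color_time_series(frames), accel)
peak_lag = int(xc.argmax()) - (len(t) - 1)  # lag of maximum correlation
```

With real data the interesting question is whether the color period shifts systematically with the measured g-level, which this kind of lag analysis can expose.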
How did the Practice of Naming Stars Begin? These civilizations, most of which were dominated by the worship of gods of different elements and blessings, began naming stars in reverence and in worship. You may have heard of Polaris and Sirius, as well as Vega, Betelgeuse, and many others. These are all stars which were named by the Greeks, the Arabs, and the Romans. These names are still used today, and many have been added to that list. However, the expanded list is now made of numbered names rather than names attributed to gods or deities. In the 1950s the astronomical community established a practice of naming stars with numbers, and with good reason: naming each one individually would be too large an undertaking. Other celestial objects like asteroids and comets continue to be named to this day. Mostly, they are named after the discoverers themselves, and sometimes for some well-known individual. The Star Registry The naming of stars has a very long history. Ever since the earliest times, cultures and civilizations all over the world have named the brightest stars in the night sky as symbols of power and as praise to their deities. Quite a few names, such as Sirius, have remained in common use through different times and civilizations. However, with the development and advancement of astronomy through the ages, there has been a need for a universal code to name stars without much diligence and thought given to deities and gods. Also, since thousands of stars are discovered every decade, it is impractical to keep naming them for deities, which are quite limited in number. Astronomers and scholars during the Renaissance tried to produce entire registries of star names based on certain rules to solve this problem.
Some of the earliest examples of this practice came from the methods and tools introduced by Johann Bayer in the Uranometria atlas of 1603. Bayer used lower-case Greek letters to label the stars in each constellation. Following this convention, Alpha was generally the name given to the brightest star of a constellation; hence you have names like Alpha Centauri and Alpha Cygni. As you might have guessed, Beta was the name given to the next-brightest star, and so on. The scheme ran into difficulties quickly, however, as new stars were discovered at a rapid pace thanks to the improved instruments in the hands of astronomers and astrophysicists alike. Alphanumeric Designations & Faint Stars Since the brightest stars in the night sky were named in ancient times, the stars named today are much fainter than those named before. The Bayer and Flamsteed schemes already cover the bright stars pretty completely. As astronomers discover new stars in the night sky, it is standard practice to identify them with an alphanumeric designation. Far from being just practical, these designations are deemed necessary due to the thousands and even millions of objects discovered by each survey. Variable Stars As the name suggests, these are stars that vary in their brightness. Since these stars are different from the typical stars identified in the night sky, the naming process is different. The process for naming them was first proposed by the German astronomer Friedrich Wilhelm Argelander. Since Bayer's scheme utilized the letters A to Q for stars in each constellation, Argelander built on the scheme and used the remaining letters R to Z for variable stars. However, as with all previous naming schemes, this one also grew obsolete with the numbers of stars being discovered thanks to improving astronomical tools and methods.
The cataloguing of variable stars has gone through many changes and has finally settled into a consistent naming scheme today. Designations depend on the order of discovery. The name of the first variable star discovered in a constellation starts with the letter R, followed by the Latin genitive of the constellation name, such as R Andromedae. The second begins with S, the third with T, and this convention is followed until the letter Z. Once Z is reached, double-letter names are used, running from RR through ZZ and then from AA through QZ, with the second letter never preceding the first and the letter J omitted throughout. Binary Systems and Multiple Systems Stars in binary or multiple systems actually outnumber the single-star systems observed so far. This means that the iconic sunset that Luke Skywalker observed in the original Star Wars is a more common occurrence in the universe than previously thought. Binary and multiple star systems are labeled in a variety of different ways. They are usually given capital letters from the Latin alphabet, appended to the star's common colloquial name, Bayer name, Flamsteed designation, or catalogue number. For example, the brightest star in the sky, Sirius, has a white dwarf companion which is catalogued as each of the following: Sirius B, Alpha Canis Majoris B, and HD 48915 B. Novae are named very differently compared to normal stars. Novae are stars that have exploded as a result of having used up their atomic fuel of hydrogen and helium. Because of this, the nuclear reactions inside the star can no longer be sustained, so the star explodes, creating even heavier elements which go on to form planets. Designations are assigned to novae based on the constellation they are in, together with the year in which the event took place. Supernovae are designated in a related way.
They are named for the year in which they occurred, with the prefix SN and an uppercase letter at the end. However, in a year filled with many such events, a double lowercase letter combination is used to end the designation.
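The variable-star lettering described above is mechanical enough to generate in code. A minimal sketch in Python (the generator function and its name are our own illustration, not an official tool), assuming the conventional ordering in which 334 lettered names precede the numbered V335-style designations:

```python
from itertools import islice

def variable_star_designations(genitive):
    """Yield variable-star names for a constellation in discovery order:
    R..Z, then RR..ZZ, then AA..QZ (second letter never precedes the
    first, J omitted), then V335, V336, ... indefinitely."""
    alphabet = "ABCDEFGHIKLMNOPQRSTUVWXYZ"  # 25 letters, J omitted
    names = list("RSTUVWXYZ")               # 9 single-letter names
    # RR..ZZ block: both letters from R..Z, second >= first.
    tail = alphabet[alphabet.index("R"):]
    names += [a + b for i, a in enumerate(tail) for b in tail[i:]]
    # AA..QZ block: first letter A..Q, second >= first.
    head = alphabet[:alphabet.index("R")]
    names += [a + alphabet[j] for i, a in enumerate(head)
              for j in range(i, len(alphabet))]
    for name in names:                      # 334 lettered designations
        yield f"{name} {genitive}"
    n = len(names) + 1                      # numbered names start at V335
    while True:
        yield f"V{n} {genitive}"
        n += 1

first_twelve = list(islice(variable_star_designations("Andromedae"), 12))
# R Andromedae, S Andromedae, ..., Z Andromedae, RR Andromedae, ...
```

Running the counts confirms why the numbered names begin at V335: 9 single letters, 45 names in the RR..ZZ block, and 280 in the AA..QZ block make 334 in total.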
LOG#027. Accelerated motion in SR. Posted: 2012/09/03 Hi, everyone! This is the first article in a thread of 3 discussing accelerations in the background of special relativity (SR). They are dedicated to Neil Armstrong, first man on the Moon! Indeed, accelerated motion in relativity has some interesting and sometimes counterintuitive results, in particular those concerning interstellar journeys whose velocities are close to the speed of light (i.e. they "are approaching" c). Special relativity is a theory considering the equivalence of every inertial frame (reference frames moving with constant relative velocity are said to be inertial frames), as it should be clear by now, after my relativistic posts! So, in principle, there is nothing said about the relativity of accelerations, since accelerations are not relative in special relativity (they are not relative even in Newtonian physics/Galilean relativity). However, this fact does not mean that we can not study accelerated motion in SR. The kinematical framework of SR itself allows us to solve that problem. Therefore, we are going to study uniformly (a.k.a. constantly) accelerating particles in SR in this post! First question: What does "constant acceleration" mean in SR? A constant acceleration in the S-frame would give any particle/object a superluminal speed after a finite time in non-relativistic physics! So, of course, that can not be the case in SR. And it is not, since we studied how accelerations transform according to SR! They transform in a non-trivial way! Moreover, a force growing beyond all limits would be required for a "massive" particle (rest mass $m_0$). Suppose this massive particle (e.g. a rocket, an astronaut, a vehicle, ...) is at rest at the initial time $t=0$, and it accelerates in the x-direction (to keep the analysis and the equations simple!). In addition, suppose there is an observer left behind on Earth (the S-frame), while the instantaneous rest frame of the moving particle is the S'-frame.
The main answer of SR to our first question is that we can only have a constant acceleration in the so-called instantaneous rest frame of the particle. We will call that acceleration "proper acceleration", and we will denote it by the letter $\alpha$. In fact, in many practical problems, especially those studying rocket-ships, the acceleration is generally given the same magnitude as the gravitational acceleration on Earth ($\alpha = g \approx 9.8\,\mathrm{m/s^2}$). Second question: What is the observed acceleration in the different frames? If the instantaneous rest frame S' is an inertial reference frame during some tiny time $dt'$, at the initial moment it has the same velocity as the particle (rocket, ...) in the S-frame, but it is not accelerated, so the velocity in the S'-frame vanishes at that time: $v' = 0$. Since the acceleration of the particle is, in the S'-frame, the proper acceleration, we get: $\dfrac{dv'}{dt'} = \alpha$. Using the transformation rules for accelerations in SR we have studied, we get that the instantaneous acceleration in the S-frame is given by $a = \dfrac{dv}{dt} = \dfrac{\alpha}{\gamma^3}$. Since the relative velocity between S and S' is always the same as the moving particle's velocity in the S-frame, the following equation holds: $\gamma = \left(1 - \dfrac{v^2}{c^2}\right)^{-1/2}$. We do know that, due to time dilation ($dt = \gamma\,dt'$), the particle's velocity in the S-frame obeys $\dfrac{dv}{dt} = \alpha\left(1-\dfrac{v^2}{c^2}\right)^{3/2}$. We can now integrate this equation: $\displaystyle\int_0^v \dfrac{dv}{(1-v^2/c^2)^{3/2}} = \int_0^t \alpha\,dt \;\Longrightarrow\; \dfrac{v}{\sqrt{1-v^2/c^2}} = \alpha t$. The final result is: $v(t) = \dfrac{\alpha t}{\sqrt{1+\left(\alpha t/c\right)^2}}$. We can check some limiting cases of this relativistic result for uniformly accelerated motion in SR. 1st. Short time limit ($\alpha t \ll c$): $v(t) \approx \alpha t$. This is the celebrated nonrelativistic result, with initial speed equal to zero (we required that hypothesis in our discussion above). 2nd. Long time limit ($\alpha t \gg c$): in this case, the number one inside the root is very tiny compared with the term depending on the acceleration, so it can be neglected to get $v(t) \approx c$. So, we see that you can not get a velocity higher than the speed of light within the SR framework at constant acceleration!
Furthermore, we can use the definition of relativistic velocity in order to integrate the associated differential equation, and to obtain the travelled distance as a function of $t$, i.e. $x(t)$, as follows: $x(t) = \displaystyle\int_0^t v(t)\,dt = \int_0^t \dfrac{\alpha t\,dt}{\sqrt{1+(\alpha t/c)^2}}$. We can perform the integral with the aid of the following known result (see, e.g., a mathematical table, use a symbolic calculator, or calculate the integral yourself): $\displaystyle\int \dfrac{t\,dt}{\sqrt{1+\lambda^2 t^2}} = \dfrac{\sqrt{1+\lambda^2 t^2}}{\lambda^2} + C$. From this result, and the previous equation, we get the so-called relativistic path-time law for uniformly accelerated motion in SR: $x(t) = \dfrac{c^2}{\alpha}\left(\sqrt{1+\left(\dfrac{\alpha t}{c}\right)^2} - 1\right)$. For consistency, we observe that in the limit of short times the terms in the big brackets approach $1 + \frac{1}{2}(\alpha t/c)^2$, so we obtain the nonrelativistic path-time relationship $x(t) = \frac{1}{2}\alpha t^2$. In the limit of long times, the terms inside the brackets can be approximated by $\alpha t/c$, and then the final result becomes $x(t) \approx ct$. Note that the velocity is not actually equal to the speed of light; this result is a good approximation whenever the time is "big enough", i.e., it only works asymptotically for "long times"! And finally, we can write out the transformation of acceleration between the two frames in an explicit way: $a(t) = \alpha\left(1+\left(\dfrac{\alpha t}{c}\right)^2\right)^{-3/2}$. Check 1: For short times, $a \approx \alpha$, i.e., the non-relativistic result, as we expected! Check 2: For long times, $a \to 0$. As we could expect, the velocity increases in such a way that its rate of increase "saturates" and the speed of light is not surpassed. The fact that the speed of light can not be surpassed or exceeded is the unifying "theme" throughout special relativity, and it rests on the "noncompact" nature of the Lorentz group due to the $\gamma$ factor, since it would become infinite at v=c for massive particles.
It is inevitable: as time passes, a relativistic treatment is indispensable, as the next figures show. The next table is also remarkable (it can easily be built with the formulae we have seen so far and any available software). Let us review the 3 main formulae up to this moment: $v(t) = \dfrac{\alpha t}{\sqrt{1+(\alpha t/c)^2}}$, $x(t) = \dfrac{c^2}{\alpha}\left(\sqrt{1+(\alpha t/c)^2}-1\right)$, $a(t) = \alpha\left(1+(\alpha t/c)^2\right)^{-3/2}$. We have calculated these results in the S-frame; it is also important and interesting to calculate the same quantities in the S'-frame of the moving particle. The proper time is defined as: $\tau = \displaystyle\int_0^t \sqrt{1-\dfrac{v^2}{c^2}}\,dt = \int_0^t \dfrac{dt}{\sqrt{1+(\alpha t/c)^2}}$. We can perform the integral as before. Finally, the proper time (time measured in the S'-frame) as a function of the elapsed time on Earth (S-frame) and the acceleration is given by the very important formula: $\tau = \dfrac{c}{\alpha}\,\mathrm{arcsinh}\left(\dfrac{\alpha t}{c}\right)$. And now, let us set $y = \alpha t/c$; therefore we can write the above equation in the following way: $\dfrac{\alpha\tau}{c} = \mathrm{arcsinh}(y)$. Remember now, from our previous math survey, that $\mathrm{arcsinh}(y) = \ln\left(y+\sqrt{1+y^2}\right)$, so we can invert the equation in order to obtain t as a function of the proper time: $t = \dfrac{c}{\alpha}\sinh\left(\dfrac{\alpha\tau}{c}\right)$. Inserting this last equation into the relativistic path-time equation for the uniformly accelerated body in SR, we obtain: $x(\tau) = \dfrac{c^2}{\alpha}\left(\cosh\left(\dfrac{\alpha\tau}{c}\right)-1\right)$. Similarly, we can calculate the velocity-proper time law. The previous equations yield $\gamma = \cosh\left(\dfrac{\alpha\tau}{c}\right)$, and thus the velocity-proper time law becomes $v(\tau) = c\tanh\left(\dfrac{\alpha\tau}{c}\right)$. Remark: this last result is compatible with a rapidity factor $\varphi = \alpha\tau/c$. Remark (II): $a(\tau) = \alpha\cosh^{-3}\left(\dfrac{\alpha\tau}{c}\right)$. From this, we can read the reason why we said before that constant acceleration is "meaningless" unless we fix a certain proper time in the S'-frame, since whenever we select a proper time, this last relationship gives us the "constant" acceleration observed from the S-frame after the transformation. Of course, from the S-frame, as this function shows, the acceleration is not "constant"; it is only "instantaneously" constant. We have to take care in relativity with the meaning of words. Mathematics is easy and clear, and generally speaking more precise than "words"; common language is generally fuzzy unless we explain what we mean!
As the final part of this log entry, let us summarize the time-proper time, velocity-proper time, acceleration-proper time and distance-proper time laws for the S'-frame: $t(\tau) = \dfrac{c}{\alpha}\sinh\left(\dfrac{\alpha\tau}{c}\right)$, $v(\tau) = c\tanh\left(\dfrac{\alpha\tau}{c}\right)$, $a(\tau) = \alpha\cosh^{-3}\left(\dfrac{\alpha\tau}{c}\right)$, $x(\tau) = \dfrac{c^2}{\alpha}\left(\cosh\left(\dfrac{\alpha\tau}{c}\right)-1\right)$. My last paragraph in this post is related to expressing the acceleration in a system of units where space is measured in light-years (we take c = 300000 km/s) and time in years (we take 1 yr = 365 days). It will be useful in the next 2 posts: in these units, $\alpha = g \approx 9.8\,\mathrm{m/s^2}$ corresponds to $\alpha \approx 1.03\,\mathrm{ly/yr^2}$. Another choice is $\alpha = 1\,\mathrm{ly/yr^2} \approx 9.5\,\mathrm{m/s^2}$, so there is not a big difference between these two cases with terrestrial-like gravity/acceleration.
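The closed-form laws above are easy to evaluate numerically. A short sketch in Python (constants and function names are our own illustration) that checks the short- and long-time behaviour for a proper acceleration of 1 g:

```python
import math

C = 299_792_458.0          # speed of light, m/s
G = 9.8                    # proper acceleration alpha = g, m/s^2
YEAR = 365 * 24 * 3600.0   # one year in seconds

def velocity(t, alpha=G):
    """Coordinate velocity v(t) = alpha*t / sqrt(1 + (alpha*t/c)^2)."""
    return alpha * t / math.sqrt(1.0 + (alpha * t / C) ** 2)

def distance(t, alpha=G):
    """Path-time law x(t) = (c^2/alpha) * (sqrt(1 + (alpha*t/c)^2) - 1)."""
    return (C ** 2 / alpha) * (math.sqrt(1.0 + (alpha * t / C) ** 2) - 1.0)

def proper_time(t, alpha=G):
    """Proper time tau(t) = (c/alpha) * arcsinh(alpha*t/c)."""
    return (C / alpha) * math.asinh(alpha * t / C)

# One year of coordinate time at 1 g: already noticeably relativistic.
t = YEAR
print(f"v/c after 1 yr : {velocity(t) / C:.3f}")         # below 1, always
print(f"tau after 1 yr : {proper_time(t) / YEAR:.3f} yr")  # less than 1 yr
```

For short times the functions reproduce $v \approx \alpha t$ and $x \approx \frac{1}{2}\alpha t^2$, while for arbitrarily large $t$ the velocity stays strictly below $c$, exactly as the limits derived above demand.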
Astronomers looked in the right place at the right time and saw something that had only been seen in science fiction: a planet orbiting the two suns of its alien solar system. Scientists using the planet-hunting Kepler telescope spotted the Saturn-size world in a star system they call Kepler-16, and today they reveal their discovery in the journal Science. It's the first exoplanet ever spotted orbiting both stars in a binary star system, evoking images of Star Wars' Luke Skywalker gazing into the twin sunset on Tatooine. The Kepler-16 suns are both smaller than ours: One, orange in color, is 69 percent as massive as our sun, and the other, which appears red, is 20 percent as massive. The two stars orbit each other every 41 days. Life on a two-sun planet is difficult to imagine, says Nader Haghighipour, an astronomer at the NASA Astrobiology Institute at the University of Hawaii at Manoa. Sometimes, both stars would have set, and there would be total darkness on the planet. Other times, one sun would be up, and then the other would rise, turning things extremely bright. "The concept of day and night, dark and light, would have totally different meanings," Haghighipour, who was not involved in the study, says. Our own single-star system is actually an oddball; most solar systems have two or more stars. But until now, astronomers have had to exclude this enormous group of star systems from the search for Earth-like planets. That's because one of the primary ways astronomers search for exoplanets is through the "wobble" method—faraway planets are so hard to see, scientists instead look to stars for a telltale wobble that could be evidence of a planet's gravitational pull. When there are two stars, the two enormous masses pull on each other and mask the signatures of planets. In this case, though, astronomers got lucky: They caught sight of the Kepler-16 planet as it passed directly between the telescope and its two suns. 
It's the first direct evidence of a planet orbiting two stars, according to study lead author Laurance Doyle. "It's a whole new kind of planetary system," he says. The discovery happened only because the planet and both its suns were lined up on a plane with Earth, so that the scientists were at the right angle to see the planet as it crossed its suns. Of the 150,000 stars the Kepler spacecraft is watching, scientists chose a few thousand potential two-star systems to keep an eye on. It just so happened that the planet and the two stars lined up with the telescope's line of sight. "The team had to be smart to choose their initial targets properly, then keep their fingers crossed," Haghighipour says. In the past two years, Haghighipour says, there have been a couple of other announcements of planets orbiting two stars, but they turned out to be false claims. "This one is for real," he says. "It's a very important and profound discovery." Even if we could travel the 200 light-years to Kepler-16b, it would be no vacation destination. The planet's temperature falls between about minus 70 to minus 100 degrees Fahrenheit. At its warmest, Doyle says, the planet is comparable to "a nippy winter in Antarctica." Plus, the newly discovered planet is about the size of Saturn (95 times more massive than Earth), and like our solar system's sixth planet, it would have a thick, gaseous atmosphere. As a result, the Kepler-16 planet probably isn't a good place to look for life. But there is one hope for life-hunters, Doyle says: "If it has a moon, it has a shot." Planets of Kepler-16's size can accommodate moons as big as Earth. If the moon were large enough to hold an atmosphere, Haghighipour says, that could allow the greenhouse effect to take hold and even out the temperatures across the moon. (Saturn's moon Titan, for example, has a thick atmosphere—though it's made of compounds like methane and ethane that aren't terribly hospitable to life as we know it.) 
If the hypothetical moon were too big, however, it would attract a lot of gas and become too hot; plus, it would exert too much pressure on its core for it to have continental plates, which help to cycle carbon dioxide on Earth and make life possible. The right moon for life would be no smaller than half the size of Earth and no bigger than about three Earths, says Haghighipour. Doyle and his team are searching for this moon, but so far have been unsuccessful. Whether or not a moon is found, the discovery of this planet means that scientists can now begin to search for other planets orbiting two stars by using the same methods. Doyle is confident that there are many similar planets out there. "These are rare in the galaxy, but it's a big galaxy," he says. "Back of the envelope, I'd say there are two million more."
From: ESA Venus Express Mission Posted: Thursday, May 15, 2008 Venus Express has detected the molecule hydroxyl on another planet for the first time. This detection gives scientists an important new tool to unlock the workings of Venus's dense atmosphere. Hydroxyl, an important but difficult-to-detect molecule, is made up of one hydrogen atom and one oxygen atom. It has been found in the upper reaches of the Venusian atmosphere, some 100 km above the surface, by Venus Express's Visible and Infrared Thermal Imaging Spectrometer, VIRTIS. The elusive molecule was detected by turning the spacecraft away from the planet and looking along the faintly visible layer of atmosphere surrounding the planet's disc. The instrument detected the hydroxyl molecules by measuring the amount of infrared light that they give off. The band of atmosphere in which the glowing hydroxyl molecules are located is very narrow; it is only about 10 km wide. By looking at the limb of the planet, Venus Express looked along this faint atmospheric layer, increasing the signal strength by a factor of about 50. Hydroxyl is thought to be important for any planet's atmosphere because it is highly reactive. On Earth it has a key role in purging pollutants from the atmosphere and is thought to help stabilise the carbon dioxide in the martian atmosphere, preventing it from converting to carbon monoxide. On Mars it is also thought to play a vital role in sterilising the soil, making the top layers hostile to microbial life. The reactive molecule has been seen around comets, but the method of production there is thought to be completely different from the way it forms in planetary atmospheres. "Because the venusian atmosphere had not been studied extensively before Venus Express arrived on the scene, we have not been able to confirm much of what our models tell us by observing what is actually happening. 
This detection will help us refine our models and learn much more," says one of the Principal Investigators of the VIRTIS experiment, Giuseppe Piccioni, from the Istituto di Astrofisica Spaziale e Fisica Cosmica in Rome, Italy. On Earth, the glow of hydroxyl in the atmosphere has been shown to be closely linked to the abundance of ozone. From this study, the same is thought to be true at Venus. Now, scientists can set about estimating the amount of ozone in the planet's atmosphere. Venus Express has shown that the amount of hydroxyl at Venus is highly variable. It can change by 50% from one orbit to the next and this may be caused by differing amounts of ozone in the atmosphere. "Ozone is an important molecule for any atmosphere, because it is a strong absorber of ultraviolet radiation from the Sun," says Piccioni. The amount of the radiation absorbed is a key parameter driving the heating and dynamics of a planet's atmosphere. On Earth, it heats the stratosphere (layer of the atmosphere) making it stable and protecting the biosphere from harmful ultraviolet rays. Computer models will now be able to tell how this jump and drop in ozone levels over short intervals affects the restless atmosphere of that world. "Venus Express has already shown us that Venus is much more Earth-like than once thought. The detection of hydroxyl brings it a step closer," says Piccioni. He and his colleagues are only reporting the initial detection from a few orbits in their latest paper. They are working on the analysis of data from about 50 other orbits and more observations will follow. Notes for editors: First detection of hydroxyl in the atmosphere of Venus by G. Piccioni et al. has been published in today's issue of Astronomy & Astrophysics Letters. 
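The roughly 50-fold limb enhancement described above can be checked with simple geometry: a tangential line of sight through a thin shell is far longer than the vertical path through it. A minimal sketch, assuming the layer altitude and thickness quoted in the article and a standard value for Venus's radius (the exact enhancement depends on viewing geometry and the layer's vertical profile, so this is only an order-of-magnitude cross-check):

```python
import math

R_VENUS_KM = 6052.0   # mean radius of Venus (standard value, not from the article)
ALT_KM = 100.0        # altitude of the hydroxyl layer, from the article
THICK_KM = 10.0       # thickness of the glowing layer, from the article

def limb_enhancement(radius_km, alt_km, thick_km):
    """Ratio of the tangential (limb) path length through a thin
    spherical shell to the vertical path length through it."""
    r_inner = radius_km + alt_km      # bottom of the shell
    r_outer = r_inner + thick_km      # top of the shell
    # Chord length of a tangent ray grazing the bottom of the shell:
    tangential = 2.0 * math.sqrt(r_outer**2 - r_inner**2)
    return tangential / thick_km      # vertical path length = shell thickness

print(round(limb_enhancement(R_VENUS_KM, ALT_KM, THICK_KM)))
```

This crude estimate gives a factor of about 70, the same order as the factor of ~50 quoted in the release.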
For more information: Giuseppe Piccioni, VIRTIS co-Principal Investigator, IASF-INAF, Rome, Italy Email: Giuseppe.Piccioni @ iasf-roma.inaf.it Hakan Svedhem, ESA Venus Express Project Scientist Email: Hakan.Svedhem @ esa.int // end //
The weird asteroid that looks like a human skull is set to fly by Earth once again. Only two months to go until the Spooky Asteroid, or Skull Asteroid, gives astronomers a chance to get another look at it. The giant asteroid was first spotted on October 10, 2015, by the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) at Hawaii's Haleakala Observatory and has been lurking in the solar system ever since. Officially called Asteroid 2015 TB145, the Halloween Asteroid measures roughly 2,100 feet (640 meters) across. On October 31, 2015, it came within just 300,000 miles (480,000 kilometres) of Earth, slightly further away than the moon's orbit. Now, after three years, the Skull Asteroid, aka Asteroid 2015 TB145, is set to pass by the blue planet again, though this time the flyby won't fall directly on Halloween. Astronomers expect the flyby, though not as dramatic as the last one, to occur in mid-November, and researchers are still looking forward to it. "Although this approach shall not be so favourable, we will be able to obtain new data which could help improve our knowledge of this mass and other similar masses that come close to our planet," researcher Pablo Santos-Sanz of the Institute of Astrophysics of Andalusia (IAA-CSIC) told SINC back in 2017. He also added that the Spooky Asteroid "is currently 3.7 astronomical units away from Earth, that is, 3.7 times the average distance from the Earth to the Sun. It has a magnitude of 26.5, which means it is only visible from Earth using very large telescopes or space telescopes." The co-author of this research, which was published in February 2017 in the journal Astronomy & Astrophysics, Thomas G. Müller from the Max-Planck-Institut für extraterrestrische Physik (Germany), said that after 2018, Asteroid 2015 TB145 will next be seen on Halloween in the year 2088, "when the object approaches Earth to a distance of about 20 lunar distances." 
"The encounter on Halloween's day 2015 was the closest approach of an object of that size since 2006, and the next known similar event is the passage of 137108 (1999 AN10) on August 7, 2027. Later, 99942 Apophis will follow on April 13, 2029, with an Earth passage at approximately 0.1 lunar distances," he added. According to the researchers, this asteroid may actually be an extinct comet that lost its water and other volatile materials after many laps around the sun. There is some debate about the asteroid's shape, however: many experts believe it doesn't really depict a skull, and that the resemblance is just the human mind's tendency to find patterns where there aren't any. Still, the weird asteroid looks spooky enough to many, and scientists will again have an opportunity to study its characteristics.
For a short time, this New Horizons Long Range Reconnaissance Imager (LORRI) frame of the ‘Wishing Well’ star cluster, taken December 5, 2017, was the farthest image ever made by a spacecraft, breaking a 27-year record set by Voyager 1. About two hours later, New Horizons broke the record again. Credit: JHU APL / NASA / SwRI NASA’s New Horizons spacecraft recently turned its telescopic camera toward a field of stars, snapped an image — and made history. The routine calibration frame of the ‘Wishing Well’ galactic open star cluster, made by the Long Range Reconnaissance Imager (LORRI) on December 5, was taken when New Horizons was 6.12 billion kilometres, or 40.9 astronomical units, from Earth — making it, for a time, the image taken at the greatest distance from Earth. An astronomical unit is the average distance between the Earth and the Sun, about 150 million kilometres. New Horizons was even farther from home than NASA’s Voyager 1 was when it captured the famous ‘Pale Blue Dot’ image of Earth. That picture was part of a composite of 60 images looking back at the Solar System, taken on February 14, 1990, when Voyager was 6.06 billion kilometres, or about 40.5 astronomical units, from Earth. Voyager 1’s cameras were turned off shortly after that portrait, leaving its distance record unchallenged for more than 27 years. LORRI broke its own record just two hours later with images of Kuiper Belt objects 2012 HZ84 and 2012 HE85 — further demonstrating how nothing stands still when you’re covering more than 1.1 million kilometres of space each day. “New Horizons has long been a mission of firsts — first to explore Pluto, first to explore the Kuiper Belt, fastest spacecraft ever launched,” said New Horizons principal investigator Alan Stern, of the Southwest Research Institute in Boulder, Colorado. 
“And now, we’ve been able to make images farther from Earth than any spacecraft in history.”
Distance and speed
New Horizons is just the fifth spacecraft to speed beyond the outer planets, so many of its activities set distance records. On December 9 it carried out the most-distant course-correction manoeuvre ever, as the mission team guided the spacecraft toward a close encounter with a KBO named 2014 MU69 on January 1, 2019. That New Year’s flight past MU69 will be the farthest planetary encounter in history, happening 1.6 billion km beyond the Pluto system — which New Horizons famously explored in July 2015. During its extended mission in the Kuiper Belt, which began in 2017, New Horizons is aiming to observe at least two dozen other KBOs, dwarf planets and ‘Centaurs,’ former KBOs in unstable orbits that cross the orbits of the giant planets. Mission scientists study the images to determine the objects’ shapes and surface properties, and to check for moons and rings. The spacecraft is also making nearly continuous measurements of the plasma, dust and neutral-gas environment along its path. The New Horizons spacecraft is healthy and is currently in hibernation. Mission controllers at the Johns Hopkins Applied Physics Laboratory in Laurel, Maryland, will bring the spacecraft out of its electronic slumber on June 4 and begin a series of system checkouts and other activities to prepare New Horizons for the MU69 encounter. Adapted from information issued by JHU APL / NASA / SwRI.
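The distance and speed figures quoted above are easy to cross-check. A small sketch converting the article's kilometre figures into astronomical units, and its daily travel figure into a speed (the AU length is the standard IAU value, not from the article):

```python
AU_KM = 149_597_871  # one astronomical unit in kilometres (standard IAU value)

def km_to_au(km):
    """Convert a distance in kilometres to astronomical units."""
    return km / AU_KM

# Distances from the article:
print(round(km_to_au(6.12e9), 1))  # New Horizons at the record image: ~40.9 AU
print(round(km_to_au(6.06e9), 1))  # Voyager 1 at the 'Pale Blue Dot': ~40.5 AU

# "more than 1.1 million kilometres of space each day" expressed as a speed:
print(round(1.1e6 / 86_400, 1))    # ~12.7 km/s
```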
Every die-hard fan of the scientific method knows that Karl Popper was a baller. While his achievements clearly extend far beyond analysis of the scientific method alone, he is arguably best known for his work on empirical falsification. In essence, the idea behind his argument is that a theory is only any good if there exists a direct and clear experimental/observational way to demonstrate that it is incorrect. In other words, it is more important to point out avenues in which your theory can be wrong than to flaunt all the possible ways it could be right. Why am I writing about this? Mike and I just spent a week at 14,000ft on the Big Island directly searching for Planet Nine, and I’ve been thinking a lot about how Popper’s falsifiability criteria apply to the Planet Nine hypothesis… Obviously, if we search the entire sky at sufficient depth and don’t find Planet Nine, then we are plainly wrong. But I don’t think this is going to happen. Instead, I think we (or some other group) are going to detect Planet Nine on a timescale considerably shorter than a decade - maybe even this year if we/they get lucky. Which begs the question: if a planet beyond Neptune is found, how would we proceed to determine that the Planet Nine theory is actually right? Figure 1. Mike and I at the telescope - where colors don't exist. I’m sure this question sounds incredibly stupid, so let me back up a bit. The Batygin & Brown 2016 AJ paper is by no means the first to predict a trans-Neptunian planet with a semi-major axis of a few hundred astronomical units. That accolade goes to George Forbes, who in 1880 proposed a planet located at ~300AU, based upon an analysis of the clustering of the aphelion distances of periodic comets (sound familiar?). Since then, a trans-Neptunian planet has been re-proposed over and over again, which brings us to the problem at hand: whose trans-Neptunian planet theory is right and whose is wrong? 
In my view, there is a very clear and intelligible way to answer this question. Each proposition of a trans-Neptunian planet is uniquely defined by (i) the data it aims to explain and (ii) the dynamical mechanism that sculpts the observations. So in order to be deemed correct, the discovered planet must match both of these specifications of the theorized planet. Figure 2. The current observational census of distant KBOs. When it comes to the Planet Nine hypothesis, point (i) is well-established: Planet Nine is invoked to explain (1) physical clustering of distant Kuiper belt orbits, (2) the perihelion detachment of long-period KBOs such as Sedna and VP113, as well as (3) the origin of nearly-perpendicular orbits of centaurs in the solar system. Embarrassingly, until recently our understanding of the “machinery” behind how Planet Nine generates these observational signatures has been incomplete. That is, although we have plenty of numerical experiments to demonstrate that Planet Nine can nicely reproduce the observed solar system, the theory that underlies these simulations has remained largely elusive. The good news is that this is no longer a problem. In a recently accepted paper that I co-authored with Alessandro “Morby” Morbidelli, the theory of Planet Nine is characterized from semi-analytical grounds. So, for the first time, we not only know what Planet Nine does to the distant Kuiper belt, but we understand how it does it. The first lingering question that Morby and I tackled is that of stability: how do the distant Kuiper belt objects avoid being thrown out of the solar system by close encounters with Planet Nine, when their orbits intersect? Turns out, the answer lies in an orbital clockwork mechanism known as mean motion resonance (MMR). When a Kuiper belt object is locked into an MMR with Planet Nine, it completes an integer number of orbits per (some other) integer number of orbits of Planet Nine. 
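That period commensurability pins down where resonant KBOs can live: by Kepler's third law, a fixed period ratio fixes the semi-major axis ratio. A quick illustrative sketch (the 500 AU semi-major axis for Planet Nine below is a placeholder of my own for demonstration, not a value from the paper):

```python
def resonant_sma(a_planet_au, period_ratio):
    """Semi-major axis of a body whose orbital period is `period_ratio`
    times the planet's, via Kepler's third law (a is proportional to T^(2/3))."""
    return a_planet_au * period_ratio ** (2.0 / 3.0)

A_P9_AU = 500.0  # assumed Planet Nine semi-major axis, illustrative only

# A KBO locked in, e.g., a 3:2 resonance completes 3 orbits per 2 orbits
# of Planet Nine, so its period is 2/3 of Planet Nine's:
for name, ratio in [("3:2", 2 / 3), ("2:1", 1 / 2), ("3:1", 1 / 3)]:
    print(f"{name}: a = {resonant_sma(A_P9_AU, ratio):.0f} AU")
```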
This strict rationality of the orbital periods allows the bodies to exchange orbital energy in a coherent fashion, and ultimately avoid collisions. But how do such configurations arise in nature? Remarkably, the answer in this case is “by chance.” When the Kuiper belt first formed, a staggering number (roughly 30 Earth masses worth) of small, icy asteroid-like bodies were thrown out into the distant realm of the solar system by Neptune (for the interested reader, see papers about the Nice model here and here). Most of these objects were not fortunate enough to accidentally land into mean motion resonances with Planet Nine and were ejected from the solar system. However, the few that were, survive in the distant Kuiper belt to this day, and comprise the anti-aligned cluster of orbits that we observe. As a demonstration of this point, check out the simulated orbital period distribution of surviving Kuiper belt objects in one of our idealized simulations, and note that all distant bodies have orbital periods in rational ratios with that of Planet Nine: All of this said, the full picture is of course not as clear-cut. Within the context of our most realistic calculations of distant Kuiper belt evolution, the clustered KBOs chaotically hop between resonances, instead of staying put. Still, the qualitative framework provided by analysis of isolated resonances holds well, even in our most computationally expensive simulations. Ok so this resolves the question of how Kuiper belt objects survive, but it leaves open the question of why their orbits are clustered together. Intriguingly, a qualitatively different dynamical mechanism - known as secular interactions (see here for a neat discussion) - is responsible for the orbital confinement that we see. Plainly speaking, over exceedingly long periods of time (e.g. hundreds of orbits), Planet Nine and the Kuiper belt objects it perturbs will see each other in almost every possible configuration along their respective orbits. 
Thus, their long-term evolution behaves as if the mass of Planet Nine has been smeared over its orbital trajectory, and its gravitational field torques the elliptical orbit of the test particle. The plot below shows the eccentricity-longitude of perihelion portrait of this secular dynamic inside the 3:2 mean motion resonance, where the background color scale and contours have been computed analytically and the orange curve represents a trajectory drawn from a numerical simulation. Figure 4. Eccentricity-perihelion diagram showing the secular trajectories of stable KBOs trapped in a 3:2 MMR with P9. Indeed, the fact that the semi-analytic theory predicts looped trajectories that cluster around a P9 longitude of perihelion offset of 180 degrees implies that the raising of perihelion distances (i.e. lowering of eccentricities) of long-period KBOs and anti-aligned orbital confinement are actually the same dynamical effect. In other words, the reason that objects such as Sedna and VP113 have orbits that are not attached to Neptune is because they are entrained in the peculiar anti-aligned secular dynamic with Planet Nine. Finally, there is the puzzle of the highly inclined orbits. Whenever one sees cycling of orbital inclination and eccentricity, there is a temptation to invoke the Kozai-Lidov mechanism as the answer. In the case of Planet Nine, however, the high-inclination dynamics are keenly distinct from those facilitated by the Kozai-Lidov effect. Perhaps the most obvious reason why the dynamics we observe in numerical simulations is not the Kozai-Lidov effect is that in our calculations, highly inclined KBOs develop the highest eccentricities when their orbits are perpendicular to the plane of Planet Nine’s orbit, in direct contrast with perpendicular and circular orbits entailed by the Kozai-Lidov effect. So if it’s not the Kozai-Lidov resonance, then what is it? 
As it turns out, the high-inclination dynamics induced by Planet Nine are characterized by the bounded oscillation of an octupole-order secular angle, which is equal to the difference between the longitude of perihelion of the KBO relative to that of Planet Nine and twice the KBO argument of perihelion. How could we have ever thought it was anything else?… The plot below shows the high-inclination secular resonant trajectories executed by test-particles in our simulation plotted in canonical action-angle coordinates, with the observed objects shown in orange. Examining this plot closely, one detail that I’m reminded of is the fact that the few high-inclination large semi-major axis centaurs that we know of are actually the “freaks” of the overall population, since they all have perihelia on the order of ~10AU. Certainly, detecting a sample of these objects with perihelia well beyond 30AU would be immensely useful to further constraining the parameters of the model. With the above rambling in mind, I will admit that all I’ve mentioned here is an introductory account of the paper. As such, it represents a considerable simplification of our actual calculations, so if you want to better understand the full picture, I can only urge you to read the paper itself. Importantly, however, the work presented in this manuscript not only provides us with a better understanding of how the observed census of distant KBOs has been sculpted by Planet Nine, it finally places the P9 hypothesis within the framework of Popper’s demand for falsifiability, and sincerely allows for the confrontation of the Planet Nine theory with the observational search. The final step is now to find it.
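For concreteness, the critical angle described in the text can be written out directly. A minimal sketch (my own transcription of the verbal definition above; the function name and argument names are mine):

```python
def p9_secular_angle(varpi_kbo_deg, varpi_p9_deg, omega_kbo_deg):
    """Octupole-order secular angle as described in the text: the KBO's
    longitude of perihelion relative to Planet Nine's, minus twice the
    KBO's argument of perihelion, wrapped into [0, 360) degrees."""
    theta = (varpi_kbo_deg - varpi_p9_deg) - 2.0 * omega_kbo_deg
    return theta % 360.0

# Illustrative (made-up) orbital angles, in degrees:
print(p9_secular_angle(330.0, 150.0, 40.0))  # -> 100.0
```

Bounded oscillation (libration) of this angle about a fixed value, rather than full circulation through 360 degrees, is what marks a body as trapped in the high-inclination secular resonance.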
Everyone has wondered how the Earth came to have the shape and size it does. According to scientists, Earth has a long history of collisions with other celestial objects, and research suggests those collisions helped the planet grow to its present size. The study has been published in the journal Nature Geoscience, and it argues that the way scientists previously calculated Earth's growth from that violence has its faults. According to the scientists, some kind of chaotic event caused the moon to form. Scientists aren't entirely sure how the moon formed; however, they have been working on a few theories that could explain how the moon began to orbit the Earth. After the moon formed, Earth was bombarded by celestial objects that scientists refer to as "planetesimals", a name for the building blocks of planets. Such objects are still found in the universe today: in young solar systems, dust lumps together into blocks, which then stick together and form planets. Planetesimals are quite large, reaching roughly 600 miles across, which is larger than a typical asteroid. So far, scientists have attributed up to 0.5% of the mass of our planet to all the planetesimals that hit it during its formation. They arrived at this figure through analyses of elements such as gold and platinum, metals which our planet received from space and which are located in the mantle, the region beneath Earth's surface. Scientists say they confirmed this fall that those elements were created through collisions between extremely dense neutron stars. 
However, the scientists who conducted the research believe that model is not accurate: the original model underestimates the amount of those metals that sank all the way to Earth's core or was blasted back into space. After repeating the calculations with the new estimates, the authors of the study concluded that planetesimals could have delivered up to five times more of Earth's mass than the old model suggested. The planetesimals could also explain why scientists struggled with old rock samples, which didn't display the expected chemical fingerprint, further complicating the debate about the moon's formation. The new study could solve that mystery and also explain how other objects helped form Earth.
The final supermoon of the year is set to rise over Lancashire this evening. The full moon in May is also known as the “flower moon”, because it signifies the flowers that bloom during the month. According to Royal Observatory Greenwich, other common names include the hare moon, the corn planting moon and the milk moon. The celestial event is expected to be visible early in the morning as well as after sunset as the moon rises in the south-east. Greg Brown, an astronomer at the Royal Observatory, said: “Technically the exact moment of full moon is 11.45am, however the moon will not be visible in the sky in the UK at that time.” The moon will appear bigger than usual on Thursday morning (May 7) as well as when it rises at around 8.44pm in the evening. The exact time that the moon will rise varies slightly across the UK, but by no more than around 10 minutes. This full moon will also be a supermoon, meaning it will be about 6% larger than a typical full moon and around 14% bigger than a micromoon, which is when the moon is at its furthest point from Earth. This event is set to be the third and final supermoon of this year. The next supermoon will be visible in April 2021.
What is a supermoon?
As the moon races around our planet, it follows a path that is not perfectly round. As a result, the moon is closer to or farther from our planet every single night. When the moon reaches the closest point of its orbit to Earth, it is at the so-called lunar perigee. When it reaches the farthest point, it is at the so-called lunar apogee. If a full moon happens to fall within 90 percent of lunar perigee, it is known as a supermoon. Dr Brown added: “The moon’s orbit around the Earth is not entirely circular, instead a slightly flattened circle or ellipse. “As such, it is sometimes closer to and sometimes further away from the Earth. “While definitions vary, a supermoon typically occurs when a full moon coincides with the moon being within the closest 10% of its orbit.”
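The "closest 10% of its orbit" rule quoted above can be made concrete. A small sketch using typical lunar perigee and apogee distances (the specific distance values are illustrative standard figures, not from the article):

```python
PERIGEE_KM = 356_500  # typical lunar perigee distance (illustrative)
APOGEE_KM = 406_700   # typical lunar apogee distance (illustrative)

def is_supermoon(distance_km, perigee_km=PERIGEE_KM, apogee_km=APOGEE_KM):
    """A full moon counts as a supermoon if it falls within the closest
    10% of the moon's distance range (the 'within 90% of perigee' rule)."""
    cutoff = perigee_km + 0.10 * (apogee_km - perigee_km)
    return distance_km <= cutoff

# Angular diameter scales as 1/distance, so a perigee full moon looks
# bigger than an apogee micromoon by roughly:
print(f"{APOGEE_KM / PERIGEE_KM - 1:.0%}")  # 14%, matching the article
print(is_supermoon(359_000))                # True
```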
The supposed discovery of the precise position of a fast radio burst (FRB) captured in near real-time has been called into question, with scientists saying it was never an FRB in the first place. Instead, researchers from Harvard say what was observed came from an active galactic nucleus powered by a supermassive black hole. FRBs, first discovered in 2007, are mysterious bursts that emanate from deep space and last for just a few milliseconds. Their source, however, remains unknown. Most FRBs found so far have been identified in data, long after the event itself has passed. This has meant scientists have been unable to trace it back to a location or event, and that follow-up observations are impossible. In February, a study published in Nature claimed to have recorded an FRB – dubbed FRB 150418 – just a few seconds after it hit the telescope. This allowed the team to identify the location, host galaxy and redshift of the FRB. The latter meant they could get a distance – six billion light years. Identifying the location allowed the team to hypothesise about the source, suggesting it was the afterglow of a merger event, where two stars orbiting each other come together. However, following the announcement scientists began to question the research. In a study that has been accepted for publication in Astrophysical Journal Letters, Peter Williams and Edo Berger have now said FRB 150418 was not one of these mysterious events, but something far more "mundane". Williams and Berger looked at the supposed host galaxy with highly sensitive telescopes that allowed them to monitor the galaxy. Their observations showed that instead of fading away, as an afterglow should, there was a persistent and varying radio source that regularly reached the brightness observed in the Nature study. "What the other team saw was nothing unusual," states Berger. "The radio emission from this source goes up and down, but it never goes away. 
That means it can't be associated with the fast radio burst." Instead, the scientists say the purported FRB comes from dual jets being blasted out of a supermassive black hole, creating a constant source of radio waves. The apparent flickering is the result of 'scintillation' – the same process that makes stars appear to twinkle. "Part of the scientific process is investigating findings to see if they hold up. In this case, it looks like there's a more mundane explanation for the original radio observations," Williams said, adding there is still some way to go before establishing the true source of FRBs. "Right now the science of fast radio bursts is where we were with gamma-ray bursts 30 years ago. We saw these things appearing and disappearing, but we didn't know what they were or what caused them," he said. "Now we have firm evidence for the origins of both short and long gamma-ray bursts. With more data and more luck, I expect that we'll eventually solve the mystery of fast radio bursts too."
The meteoroid seen over the UK on September 21, 2012 has created quite a sensation – make that several sensations. First, the bright object(s) in the night sky were seen across a wide area by many people, and the brightness and duration – 40 to 60 seconds reported and videoed by some observers – had some experts wondering if the slow moving light-show might have been caused by space junk. But analysis by satellite tracker Marco Langbroek revealed this was likely an Aten asteroid; Atens have orbits that often cross the Earth’s orbit, but their average distance from the Sun is less than 1 AU, the distance from the Earth to the Sun. Atens are fairly unusual, making this a rare event. But then came another analysis that seemed to be so crazy, it might have been true: this meteoroid may have skipped like a stone in and out of Earth’s atmosphere, where it slowed enough to orbit the Earth until appearing as another meteor over Canada, just a few hours after it was seen over the UK and northern Europe. How amazing that would have been! And there was much speculation about this possibility. But, it turns out, after more details emerged and further investigation ensued, it is not possible that the space rock could have boomeranged around the world and been seen again 2½ hours later over Canada. However, the current thinking is that at least one or two of the largest pieces retained enough velocity that they went into an elliptical Earth orbit, and went perhaps a half an orbit around Earth. “At first it seemed natural to consider a possible dynamical linkage (between the UK and Canadian meteors), partly because the precise location and time over Quebec/Ontario was not well-known early on,” said aerospace engineer and meteor expert Robert Matson, in an email to Universe Today. 
Matson worked extensively with Esko Lyytinen, a member of the Finnish Fireball Working Group of the Ursa Astronomical Association, to analyze the possible connection between the September 21 UK fireball, and the Quebec fireball that followed about 2½ hours later. At first, the time of the fireball sighting over southeastern Canada and northeastern USA was in doubt, but two Canadian all-sky cameras from the Western Meteor Physics Group captured the meteor, providing an accurate time. “And once I triangulated the location to a spot between Ottawa and Montreal, a linkage to the UK fireball was no longer possible due to the longitude mismatch,” Matson said. Additionally, the 153-minute time difference between meteors places a strict limit on the maximum longitude difference for a “skipping” meteoroid of roughly 38 degrees. This would put the final perigee well off the coast of Newfoundland, south of Greenland, Matson added. More facts emerged, sounding the death knell for the connection between the two. “Independent of the longitude mismatch, triangulation of the Canadian videos revealed that the entry angle was quite steep over Quebec – quite at odds with what an orbiting remnant from a prior encounter would have had,” Matson said. “So the meteors are not only unrelated, their respective asteroid sources would have been in different solar orbits.” Image of fireball taken on Feb. 25, 2004 by the Elginfield CCD camera from the University of Western Ontario. Another duo of astronomers from the British Astronomical Association, John Mason and Nick James concurred, also noting the shallow angle of the UK fireball, in addition to its slow speed. “We get velocities of 7.8 and 8.5 km/s and a height of 62 km ascending,” they wrote in the BAA blog. 
“These velocities and the track orientation and position are not at all consistent with ongoing speculation that there is a connection between this fireball and a fireball seen in south-eastern Canada/north-eastern USA 155 minutes later.” But did parts of the meteoroid survive and skip out of the atmosphere? “Nearly all of the fragments of the meteoroid did just come in for good during and shortly after the UK passage, but at least one or two of the largest pieces retained enough velocity that they went into elliptical earth orbit,” Matson said. “The perigee of that orbit was a little over 50 km above the UK. The apogee would have been half an orbit later, possibly thousands of kilometers above the South Pacific, south of New Zealand.” Just how high the apogee altitude was depends on how much the meteoroid decelerated over the UK, Matson added. “This is why Esko, myself and others are very interested in determining the velocity of those fragments after they passed through perigee,” he said. “Below 7.9 km/sec, and they never get back out of the atmosphere; between 7.9 and 11.2 km/sec, they go into orbit — and we believe a couple of the biggest pieces were in the lower half of this range.” But Matson said that if any remnant or remnants of the UK fireball did “skip” out of the atmosphere, they certainly had to come back in for good somewhere on the planet. “It is even remotely possible that it happened over Quebec,” Matson said. “But the laws of orbital mechanics do not allow an aerobraked fragment of the UK meteoroid to reenter over Quebec only 2½ hours later. 
It would have to be more than 4 hours later to line up with Quebec.” The most likely scenario, Matson said, is that the surviving portion(s) of the UK meteoroid came in for good less than 2½ hours later, with the only possible locations during that window being the North Atlantic, Florida, Cuba, Central America, the Pacific, New Zealand, Australia, the Indian Ocean, the Arabian Peninsula, Turkey or southern Europe. Of these, the northern hemisphere locations would be favored. So perhaps we haven’t heard the last of this meteoroid! As crazy as the bouncing bolide sounds, it has happened before, according to Kelly Beatty at Sky and Telescope, who noted at least one instance where a large meteoroid streaked across the sky and then returned to interplanetary space. This sighting took place over the Rocky Mountains in broad daylight on August 10, 1972, and the meteoroid came as close as 35 miles (57 km) above Earth’s surface before skipping back out into space. Beatty added that its velocity was too fast for it to be captured and return again. You can read more of Phil Plait’s analysis of the UK fireball as a possible Aten asteroid at Bad Astronomy. Hat tip: Luke Dones. This article was updated on 10/9/12.
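Matson's velocity thresholds and the 38-degree longitude limit are both quick arithmetic checks. The sketch below is ours, not from the article; it uses round values (a 50 km perigee altitude, Earth's mean radius, and the ~1436-minute sidereal day) purely for illustration.

```python
import math

# Standard gravitational parameter of Earth (m^3/s^2) and mean radius (m)
GM = 3.986004418e14
R_EARTH = 6371e3

def circular_velocity(alt_m):
    """Minimum speed to remain in a circular orbit at the given altitude."""
    return math.sqrt(GM / (R_EARTH + alt_m))

def escape_velocity(alt_m):
    """Speed needed to leave Earth entirely from the given altitude."""
    return math.sqrt(2 * GM / (R_EARTH + alt_m))

# At the ~50 km perigee quoted for the surviving fragments:
v_circ = circular_velocity(50e3) / 1000   # ~7.9 km/s, the "stay in orbit" floor
v_esc = escape_velocity(50e3) / 1000      # just under the 11.2 km/s surface escape speed

# Earth's rotation during the 153 minutes between the two fireballs
# (sidereal day ~= 1436 minutes) gives Matson's ~38-degree longitude limit.
max_lon_shift = 153 / 1436 * 360

print(f"orbit: {v_circ:.1f} km/s, escape: {v_esc:.1f} km/s, "
      f"max longitude shift: {max_lon_shift:.1f} deg")
```

The numbers line up with the quoted 7.9–11.2 km/s window and the roughly 38-degree bound.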
The solar system has a new most-distant family member. Scientists using ground-based observatories have discovered an object that is believed to have the most distant orbit found beyond the known edge of our solar system. Named 2012 VP113, the object -- possibly a dwarf planet -- was observed and analyzed with a grant from NASA. A dwarf planet is an object in orbit around the sun that is large enough to have its own gravity pull itself into a spherical, or nearly round, shape. The detailed findings are published in the March 27 edition of Nature. “This discovery adds the most distant address thus far to our solar system’s dynamic neighborhood map,” said Kelly Fast, discipline scientist for NASA's Planetary Astronomy Program, Science Mission Directorate (SMD) at NASA Headquarters, Washington. “While the very existence of the inner Oort Cloud is only a working hypothesis, this finding could help answer how it may have formed.” The observations and analysis were led and coordinated by Chadwick Trujillo of the Gemini Observatory in Hawaii and Scott Sheppard of the Carnegie Institution in Washington. They used the National Optical Astronomy Observatory’s 13-foot (4-meter) telescope in Chile to discover 2012 VP113. The telescope is operated by the Association of Universities for Research in Astronomy, under contract with the National Science Foundation. The Magellan 21-foot (6.5-meter) telescope at Carnegie’s Las Campanas Observatory in Chile was used to determine the orbit of 2012 VP113 and obtain detailed information about its surface properties. “The discovery of 2012 VP113 shows us that the outer reaches of our solar system are not an empty wasteland as once was thought,” said Trujillo, lead author and astronomer. “Instead, this is just the tip of the iceberg telling us that there are many inner Oort Cloud bodies awaiting discovery. 
It also illustrates how little we know about the most distant parts of our solar system and how much there is left to explore.” Our known solar system consists of the rocky planets like Earth, which are close to the sun; the gas giant planets, which are further out; and the frozen objects of the Kuiper belt, which lie just beyond Neptune's orbit. Beyond this, there appears to be an edge to the solar system where only one object, Sedna -- somewhat smaller than Pluto -- was previously known to remain for its entire orbit. But the newly found 2012 VP113 has an orbit that stays even beyond Sedna's, making it the most distant known orbit in the solar system. Sedna was discovered beyond the Kuiper Belt edge in 2003, and it was not known whether Sedna was unique, as Pluto once was thought to be before the Kuiper Belt was discovered in 1992. With the discovery of 2012 VP113, Sedna is no longer unique, and 2012 VP113 is likely the second known member of the hypothesized inner Oort cloud. The outer Oort cloud is the likely origin of some comets. “The search for these distant inner Oort cloud objects beyond Sedna and 2012 VP113 should continue, as they could tell us a lot about how our solar system formed and evolved," says Sheppard. Sheppard and Trujillo estimate that about 900 objects with orbits like those of Sedna and 2012 VP113, and with sizes larger than 621 miles (1,000 km), may exist. 2012 VP113 is likely one of hundreds of thousands of distant objects that inhabit the region of our solar system scientists refer to as the inner Oort cloud. The total population of the inner Oort cloud is likely bigger than those of the Kuiper Belt and main asteroid belt. “Some of these inner Oort cloud objects could rival the size of Mars or even Earth,” said Sheppard. 
“This is because many of the inner Oort cloud objects are so distant that even very large ones would be too faint to detect with current technology.” 2012 VP113’s closest orbital point to the sun brings it to about 80 times the distance of the Earth from the sun, a measurement referred to as an astronomical unit, or AU. The rocky planets and asteroids exist at distances ranging between 0.39 and 4.2 AU. Gas giants are found between 5 and 30 AU, and the Kuiper belt (composed of hundreds of thousands of icy objects, including Pluto) ranges from 30 to 50 AU. In our solar system there is a distinct edge at 50 AU. Until 2012 VP113 was discovered, only Sedna, with a closest approach to the sun of 76 AU, was known to stay significantly beyond this outer boundary for its entire orbit. Both Sedna and 2012 VP113 were found near their closest approach to the sun, but they both have orbits that go out to hundreds of AU, at which point they would be too faint to discover. The similarity in the orbits found for Sedna, 2012 VP113 and a few other objects near the edge of the Kuiper Belt suggests the new object’s orbit might be influenced by the potential presence of a yet-unseen planet perhaps up to 10 times the size of Earth. Further studies of this deep space arena will continue. For more details on the new dwarf planet, visit:
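The AU figures above amount to a simple classification: an object belongs with Sedna and 2012 VP113 only if even its perihelion (closest approach to the sun, q) stays beyond the ~50 AU edge. A minimal sketch, using the q values quoted in the article; Pluto's perihelion is a standard value added for contrast, and the function name is ours.

```python
KUIPER_BELT_EDGE_AU = 50  # the "distinct edge" of the solar system at 50 AU

# Perihelion distances in AU (Sedna and 2012 VP113 as quoted in the article;
# Pluto added for contrast -- it dips inside the Kuiper Belt's inner region)
perihelion_au = {
    "Pluto": 29.7,
    "Sedna": 76,
    "2012 VP113": 80,
}

def beyond_kuiper_edge(q_au):
    """True if even the orbit's closest approach stays outside the 50 AU edge."""
    return q_au > KUIPER_BELT_EDGE_AU

for name, q in perihelion_au.items():
    tag = "inner Oort cloud candidate" if beyond_kuiper_edge(q) else "within the known solar system"
    print(f"{name}: q = {q} AU -> {tag}")
```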
It is a tale of two satellites, a shared destination, and two very different missions. Here on Earth, one lunar orbiter prepares to begin its voyage to the moon. Meanwhile, 235,000 miles away in space, the other plummets from orbit, ending its mission in a heap on the lunar surface. This week at Cape Canaveral, Florida, NASA will launch its Lunar Reconnaissance Orbiter (LRO). Presently strapped to the top of an Atlas V rocket at the Kennedy Space Center, it is the $579 million opening salvo of the space agency's "Vision for Space Exploration," the series of missions initially intended to return Americans to the moon, then eventually take them farther into space. John Keller, a deputy scientist with the LRO mission, says the objective of the project is to determine whether it's safe--and whether essential resources like water exist--to proceed with the plan to colonize the moon. The Kaguya orbiter, launched by the Japanese space agency (JAXA) in late 2007, had strictly scientific objectives. The agency set out to answer some of the moon's remaining unsolved mysteries, not to mention be the first to map the moon using the latest in digital imaging technology. "LRO is not a science mission," Jim Garvin, chief scientist at the Goddard Space Flight Center and one of LRO's founding fathers, told Popular Mechanics. "It has high science value, but it was conceived to provide engineering parameters for our eventual manned return to the moon." Though the orbiters share a few similar instruments--both, for example, boast high-resolution cameras and laser altimeters to provide unprecedented, richly detailed topographic models of the lunar surface--the two missions' distinct objectives mean that even seemingly comparable devices actually differ significantly. "The scientific community is awaiting the tremendous data sets that will come from each of these missions," Keller says. 
But while the Kaguya data will be extremely useful to NASA, Garvin adds, "What we need to know is the terrain at civil engineering scales; temperatures, which they're not mapping; hydrogen, as a resource, at a few miles scale; imaging at the scale of a rock that will break a lander leg. When you start putting all those things together, it's beyond what you want in a general science mission."
Kaguya's objective: To obtain scientific data on lunar origins and evolution, and to develop the technology for future lunar exploration.
LRO's objective: To find safe landing sites, locate potential resources, characterize the radiation environment, and demonstrate new technology.
Kaguya's mission length: 21 months (September 2007 to June 2009).
LRO's mission length: 1 to 4 years (a 1-year exploration goal, with the possibility of an extended mission lasting 3 years).
Lunar Radiation: A Charged Particle Spectrometer on Kaguya collected data on high-energy particles as they peppered the moon, so that scientists might forecast radiation from cosmic rays.
Magnetic Anomalies: Perched at the end of a 12-meter mast, the Lunar Magnetometer obtained measurements of the varied direction, strength and intensity of the moon's magnetic fields, providing the data to produce the most detailed maps of the moon's magnetic anomalies.
Gravity Fields: Measurements of the interference in signals sent between Kaguya, a pair of sub-satellites (Okina and Ouna), and radio dishes on Earth provided data on the moon's gravity, and created the first complete maps of the entire moon's gravitational make-up.
Lunar History: The Lunar Radar Sounder, a device emitting low-frequency (5 MHz) radio pulses into the moon, was used to analyze stratification below the surface, thus providing data for better understanding the moon's tectonic past.
HDTV: Kaguya employed a high-definition television camera to film the first-ever HD video of the lunar surface, and also capture a full Earth-rise as it orbited the moon. 
Topography: A suite of imaging equipment, the Terrain Camera (TC) and Multi Band Imager (MI), swept over the surface in a continuous "push-broom" fashion. The TC comprised two one-dimensional telescopes and captured black-and-white images with an unprecedented resolution of 10 m/pixel. At the same time, a Laser Altimeter attached to the orbiter sent a constant laser pulse to the surface. By timing the pulse's round trip between the orbiter and the surface, the altimeter collected precise data used to create the first-ever "global, accurate and precise topographic map of the Moon."
Measuring Radiation: Two devices will measure the moon's volatile radiation environment. Similar to Kaguya's Charged Particle Spectrometer, the Lunar Explorer Neutron Detector (LEND) will measure the neutron flux produced by the barrage of cosmic rays that constantly showers the lunar surface. But LRO goes a step further with the Cosmic Ray Telescope for the Effects of Radiation (CRaTER). It will not only detect incoming solar particles as they pass the orbiter, but also carry a layer of Tissue Equivalent Plastic, specially engineered to simulate human tissue and measure the biological effects of particle bombardment.
Search for Ice: The LRO has several instruments aimed at determining whether or not water, in the form of ice, exists on the moon. Diviner, a radiometer, will create the first global temperature survey of the lunar surface, detecting cold traps where ice may exist. According to Garvin, it will "tell us where the super cold places are, and how cold they really are. No other mission is doing that, but it's really a fundamental question." A new technology demonstration called Mini-RF, a small antenna attached to LRO, will send radio waves to the moon's poles; the signals that return will then be analyzed to determine whether ice is trapped in the poles' deep, unmapped craters. 
The Lyman-alpha Mapping Project (LAMP) will measure the faint reflection of light created by stars and hydrogen atoms in space to determine the composition of the moon's permanently unlit regions.
Search for Landing Sites: While Kaguya's cameras and altimeter created topographic models of exceptional, never-before-seen clarity and detail, NASA aims to better them. Its Lunar Orbiter Laser Altimeter (LOLA) will provide terrain-mapping data for choosing future landing sites. "It's not like the ones that have flown before," Garvin says. "It will actually map things at a spatial scale of just a few meters, and 10 cm vertically. That's the scale [with which] we map ice sheets on the Earth." The LRO's camera package consists of one wide-angle lens with 100 m/pixel resolution, along with a pair of narrow-angle scopes with 50 cm/pixel resolution. Together they will capture extremely detailed views--objects as small as 1 meter will be visible. "The cameras on the now impacted Japanese mission make mapping at a scale of like 15 meters, 20 meters, not 50 centimeters," Garvin says. "We will be able to make integrated maps of landing site regions that will predict safety and allow for better design of future landing systems. We massively over-designed Apollo because we had to. Now, we can be smarter."
(Scientists work on the Lunar Reconnaissance Orbiter. Image from NASA)
(An illustration depicting Japan's Kaguya satellite. Image from JAXA/Kaguya)
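Both missions' laser altimeters rest on the same timing principle described above: distance is the round-trip time of a light pulse times c/2. A minimal sketch of that arithmetic, with an illustrative 50 km orbital altitude (the real instruments add pulse detection, orbit knowledge, and many corrections):

```python
C = 299_792_458  # speed of light, m/s

def range_from_round_trip(t_seconds):
    """Distance to the surface, given a laser pulse's round-trip time."""
    return C * t_seconds / 2

def round_trip_time(range_m):
    """Round-trip time of a pulse reflected from the given distance."""
    return 2 * range_m / C

# A pulse returning from 50 km below takes about a third of a millisecond:
t = round_trip_time(50e3)
print(f"round trip from 50 km: {t * 1e6:.0f} microseconds")

# LOLA's quoted ~10 cm vertical scale corresponds to resolving the
# round-trip time to better than a nanosecond:
dt_for_10cm = round_trip_time(0.10)
print(f"timing needed for 10 cm: {dt_for_10cm * 1e9:.2f} ns")
```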
In 1977, as preparations were being made for the launch of the two unmanned Voyager spacecraft, Alan Lomax was contacted by Carl Sagan. Sagan had been tapped by NASA to chair a committee to gather images, sounds, and songs that would represent Earth on a set of phonographic records — to be affixed to the outside of both spacecraft along with styli and graphic instructions on playing them — and he hoped Lomax would help make the musical selections. Alan ultimately suggested fifteen of the twenty-seven performances that were launched with the probes on what are now popularly known as the "Voyager golden records." While their initial objective was to explore Jupiter and Saturn, Voyager 2 (launched August 20, 1977) and Voyager 1 (launched September 5, 1977) have traveled farther from Earth than any other man-made object. As of their 36th anniversary, they're in the "heliosheath" — the outermost layer of the heliosphere, where the solar wind is slowed by the pressure of interstellar gas — and, still transmitting data, their mission is now to extend humankind's exploration to the furthest reaches of the solar system and, if possible, beyond. The copper-plated discs were the equivalent of four sides of a 33⅓ rpm 12" LP. One of the sides was to hold digital scientific information — largely diagrams and pictures — as well as human voices in 55 languages, including greetings by then U.N. Secretary General Kurt Waldheim and President Jimmy Carter, and a selection of natural sounds: the sea, wind, thunder, birds, whales, laughter. The other three sides were devoted to the diversity of Earth's music. "Present[s] from a small, distant world," as Jimmy Carter described the records therein, addressing the imagined recipients — be they extraterrestrials or future humans. "A token of our sounds, our science, our images, our music, our thoughts and our feelings. We are attempting to survive our time so we may live into yours." 
The golden record's musical inclusions, however, were not initially so diverse, as the committee had drawn solely on the Western classical canon. Dr. Sagan thus asked Alan Lomax to participate in the selection process. Lomax had just finished compiling an anthology of world song*, in which he and his colleagues had chosen 700 pieces that they felt most effectively illustrated the breadth and depth of human musical style, and Alan ultimately contributed fifteen of the twenty-seven final performances that were featured on the Voyager record. In Murmurs of Earth, a book recounting the Voyager experience, Sagan writes that it was Lomax "who was a persistent and vigorous advocate for including ethnic music even at the expense of Western classical music. He brought pieces so compelling and beautiful that we gave in to his suggestions more often than I would have thought possible. There was, for example, no room for Debussy among our selections, because Azerbaijanis play bagpipes and Peruvians play panpipes and such exquisite pieces had been recorded by ethnomusicologists known to Lomax." In a letter to Sagan, and with his Cantometrics research fresh in mind, Lomax explained that musical style is reflective of social and economic development — making it possible to explore the evolution of human culture through musical recordings — and he listed the following categories as a "good map" of mankind's main performance style traditions, adding that they served as the fundamental criteria for his selections:
- African gatherer
- Australian gatherer
- North America
- Maritime Pacific
- Black Africa
- European Peasant
- Middle East
- South Asia
- East Asia
- Southeast Asia
- Mercantile/industrial Europe
- Latin America
Here are the selections ultimately included on the Voyager record. Items in bold signify Lomax's selections.
- Bach, Brandenburg Concerto No. 2 in F, First Movement, Munich Bach Orchestra, Karl Richter, conductor. 4:40
- Java, court gamelan, "Kinds of Flowers," recorded by Robert Brown. 4:43
- Senegal, percussion, recorded by Charles Duvelle. 2:08
- Zaire, Pygmy girls' initiation song, recorded by Colin Turnbull. 0:56
- Australia, Aborigine songs, "Morning Star" and "Devil Bird," recorded by Sandra LeBrun Holmes. 1:26
- Mexico, "El Cascabel," performed by Lorenzo Barcelata and the Mariachi Mexico. 3:14
- "Johnny B. Goode," written and performed by Chuck Berry. 2:38
- New Guinea, men's house song, recorded by Robert MacLennan. 1:20
- Japan, shakuhachi, "Tsuru No Sugomori" ("Crane's Nest"), performed by Goro Yamaguchi. 4:51
- Bach, "Gavotte en rondeaux" from the Partita No. 3 in E major for Violin, performed by Arthur Grumiaux. 2:55
- Mozart, The Magic Flute, Queen of the Night aria, no. 14. Edda Moser, soprano. Bavarian State Opera, Munich, Wolfgang Sawallisch, conductor. 2:55
- Georgian S.S.R., chorus, "Tchakrulo," collected by Radio Moscow. 2:18
- Peru, panpipes and drum, collected by Casa de la Cultura, Lima. 0:52
- "Melancholy Blues," performed by Louis Armstrong and his Hot Seven. 3:05
- Azerbaijan S.S.R., bagpipes, recorded by Radio Moscow. 2:30
- Stravinsky, Rite of Spring, Sacrificial Dance, Columbia Symphony Orchestra, Igor Stravinsky, conductor. 4:35
- Bach, The Well-Tempered Clavier, Book 2, Prelude and Fugue in C, No. 1. Glenn Gould, piano. 4:48
- Beethoven, Fifth Symphony, First Movement, the Philharmonia Orchestra, Otto Klemperer, conductor. 7:20
- Bulgaria, "Izlel je Delyo Hagdutin," sung by Valya Balkanska. 4:59
- Navajo Indians, Night Chant, recorded by Willard Rhodes. 0:57
- Holborne, Paueans, Galliards, Almains and Other Short Aeirs, "The Fairie Round," performed by David Munrow and the Early Music Consort of London. 1:17
- Solomon Islands, panpipes, collected by the Solomon Islands Broadcasting Service. 1:12
- Peru, wedding song, recorded by John Cohen. 0:38
- China, ch'in, "Flowing Streams," performed by Kuan P'ing-hu. 7:37
- India, raga, "Jaat Kahan Ho," sung by Surshri Kesarbai Kerkar. 3:30
- "Dark Was the Night," written and performed by Blind Willie Johnson. 3:15
- Beethoven, String Quartet No. 13 in B flat, Opus 130, Cavatina, performed by Budapest String Quartet. 6:37
In 1990, the Voyager probes moved beyond the orbit of Pluto (then, of course, still considered a planet) and entered empty space. It will be 40,000 years before they make a close approach to any other planetary system and, as Sagan frankly stated, "the spacecraft will be encountered and the record played only if there are advanced spacefaring civilizations in interstellar space. But the launching of this bottle into the cosmic ocean says something very hopeful about life on this planet."
*This anthology appeared in Lomax's Cantometrics: A Method In Musical Anthropology, EMC Press, University of California, Berkeley, California, 1977.
Geometry (Ancient Greek: γεωμετρία; geo- “earth”, -metron “measurement”) is a branch of mathematics concerned with questions of shape, size, relative position of figures, and the properties of space. Geometry is one of the oldest mathematical sciences. Initially a body of practical knowledge concerning lengths, areas, and volumes, in the 3rd century BC geometry was put into an axiomatic form by Euclid, whose treatment—Euclidean geometry—set a standard for many centuries to follow. Archimedes developed ingenious techniques for calculating areas and volumes, in many ways anticipating modern integral calculus. The field of astronomy, especially mapping the positions of the stars and planets on the celestial sphere and describing the relationship between movements of celestial bodies, served as an important source of geometric problems during the next one and a half millennia. A mathematician who works in the field of geometry is called a geometer. The introduction of coordinates by René Descartes and the concurrent development of algebra marked a new stage for geometry, since geometric figures, such as plane curves, could now be represented analytically, i.e., with functions and equations. This played a key role in the emergence of infinitesimal calculus in the 17th century. Furthermore, the theory of perspective showed that there is more to geometry than just the metric properties of figures: perspective is the origin of projective geometry. The subject of geometry was further enriched by the study of the intrinsic structure of geometric objects that originated with Euler and Gauss and led to the creation of topology and differential geometry. In Euclid’s time there was no clear distinction between physical space and geometrical space. Since the 19th-century discovery of non-Euclidean geometry, the concept of space has undergone a radical transformation, and the question arose as to which geometrical space best fits physical space. 
With the rise of formal mathematics in the 20th century, ‘space’ (and ‘point’, ‘line’, ‘plane’) also lost its intuitive content, so today we have to distinguish between physical space, geometrical spaces (in which ‘space’, ‘point’ etc. still have their intuitive meaning) and abstract spaces. Contemporary geometry considers manifolds, spaces that are considerably more abstract than the familiar Euclidean space, which they only approximately resemble at small scales. These spaces may be endowed with additional structure, allowing one to speak about length. Modern geometry has multiple strong bonds with physics, exemplified by the ties between pseudo-Riemannian geometry and general relativity. One of the youngest physical theories, string theory, is also very geometric in flavour. While the visual nature of geometry makes it initially more accessible than other parts of mathematics, such as algebra or number theory, geometric language is also used in contexts far removed from its traditional, Euclidean provenance (for example, in fractal geometry and algebraic geometry). The recorded development of geometry spans more than two millennia. It is hardly surprising that perceptions of what constituted geometry evolved throughout the ages. Geometry originated as a practical science concerned with surveying, measurements, areas, and volumes. Among its notable accomplishments one finds formulas for lengths, areas and volumes, such as the Pythagorean theorem, the circumference and area of a circle, the area of a triangle, and the volumes of a cylinder, a sphere, and a pyramid. A method of computing certain inaccessible distances or heights based on the similarity of geometric figures is attributed to Thales. The development of astronomy led to the emergence of trigonometry and spherical trigonometry, together with the attendant computational techniques. Euclid took a more abstract approach in his Elements, one of the most influential books ever written. 
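The classical formulas alluded to above can be written out explicitly, for a right triangle with legs a, b and hypotenuse c, a circle of radius r, and solids with base area B, height h:

```latex
a^{2} + b^{2} = c^{2}                     % Pythagorean theorem
C = 2\pi r, \qquad A_{\text{circle}} = \pi r^{2}   % circumference and area of a circle
A_{\text{triangle}} = \tfrac{1}{2}\,b\,h           % area of a triangle
V_{\text{cylinder}} = \pi r^{2} h, \qquad
V_{\text{sphere}} = \tfrac{4}{3}\pi r^{3}, \qquad
V_{\text{pyramid}} = \tfrac{1}{3}\,B\,h            % volumes of cylinder, sphere, pyramid
```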
Euclid introduced certain axioms, or postulates, expressing primary or self-evident properties of points, lines, and planes. He proceeded to rigorously deduce other properties by mathematical reasoning. The characteristic feature of Euclid’s approach to geometry was its rigor, and it has come to be known as axiomatic or synthetic geometry. At the start of the 19th century the discovery of non-Euclidean geometries by Gauss, Lobachevsky, Bolyai, and others led to a revival of interest, and in the 20th century David Hilbert employed axiomatic reasoning in an attempt to provide a modern foundation of geometry. Ancient scientists paid special attention to constructing geometric objects that had been described in some other way. Classical instruments allowed in geometric constructions are those with compass and straightedge. However, some problems turned out to be difficult or impossible to solve by these means alone, and ingenious constructions using parabolas and other curves, as well as mechanical devices, were found.
Numbers in geometry
In ancient Greece the Pythagoreans considered the role of numbers in geometry. However, the discovery of incommensurable lengths, which contradicted their philosophical views, made them abandon (abstract) numbers in favor of (concrete) geometric quantities, such as length and area of figures. Numbers were reintroduced into geometry in the form of coordinates by Descartes, who realized that the study of geometric shapes can be facilitated by their algebraic representation. Analytic geometry applies methods of algebra to geometric questions, typically by relating geometric curves and algebraic equations. These ideas played a key role in the development of calculus in the 17th century and led to discovery of many new properties of plane curves. Modern algebraic geometry considers similar questions on a vastly more abstract level. 
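Descartes' idea can be seen in the simplest example: in coordinates, a circle of radius r centered at the origin becomes an equation, and questions about the curve become questions in algebra:

```latex
x^{2} + y^{2} = r^{2}
% The line y = x meets this circle where 2x^{2} = r^{2},
% i.e. at x = \pm r/\sqrt{2}: an intersection found purely algebraically.
```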
Geometry of position
Even in ancient times, geometers considered questions of relative position or spatial relationship of geometric figures and shapes. Some examples are given by inscribed and circumscribed circles of polygons, lines intersecting and tangent to conic sections, the Pappus and Menelaus configurations of points and lines. In the Middle Ages new and more complicated questions of this type were considered: What is the maximum number of spheres simultaneously touching a given sphere of the same radius (kissing number problem)? What is the densest packing of spheres of equal size in space (Kepler conjecture)? Most of these questions involved ‘rigid’ geometrical shapes, such as lines or spheres. Projective, convex and discrete geometry are three sub-disciplines within present day geometry that deal with these and related questions. Leonhard Euler, in studying problems like the Seven Bridges of Königsberg, considered the most fundamental properties of geometric figures based solely on shape, independent of their metric properties. Euler called this new branch of geometry geometria situs (geometry of place), but it is now known as topology. Topology grew out of geometry, but turned into a large independent discipline. It does not differentiate between objects that can be continuously deformed into each other. The objects may nevertheless retain some geometry, as in the case of hyperbolic knots.
Geometry beyond Euclid
For nearly two thousand years since Euclid, while the range of geometrical questions asked and answered inevitably expanded, basic understanding of space remained essentially the same. Immanuel Kant argued that there is only one, absolute, geometry, which is known to be true a priori by an inner faculty of mind: Euclidean geometry was synthetic a priori. 
This dominant view was overturned by the revolutionary discovery of non-Euclidean geometry in the works of Gauss (who never published his theory), Bolyai, and Lobachevsky, who demonstrated that ordinary Euclidean space is only one possibility for the development of geometry. A broad vision of the subject of geometry was then expressed by Riemann in his 1854 inauguration lecture Über die Hypothesen, welche der Geometrie zu Grunde liegen (On the hypotheses on which geometry is based), published only after his death. Riemann’s new idea of space proved crucial in Einstein’s general relativity theory, and Riemannian geometry, which considers very general spaces in which the notion of length is defined, is a mainstay of modern geometry. Where the traditional geometry allowed dimensions 1 (a line), 2 (a plane) and 3 (our ambient world conceived of as three-dimensional space), mathematicians have used higher dimensions for nearly two centuries. Dimension has gone through stages of being any natural number n, possibly infinite with the introduction of Hilbert space, and any positive real number in fractal geometry. Dimension theory is a technical area, initially within general topology, that discusses definitions; in common with most mathematical ideas, dimension is now defined rather than an intuition. Connected topological manifolds have a well-defined dimension; this is a theorem (invariance of domain) rather than anything a priori. The issue of dimension still matters to geometry, in the absence of complete answers to classic questions. Dimensions 3 of space and 4 of space-time are special cases in geometric topology. Dimension 10 or 11 is a key number in string theory. Research may bring a satisfactory geometric reason for the significance of 10 and 11 dimensions. The theme of symmetry in geometry is nearly as old as the science of geometry itself. 
The circle, regular polygons and platonic solids held deep significance for many ancient philosophers and were investigated in detail by the time of Euclid. Symmetric patterns occur in nature and were artistically rendered in a multitude of forms, including the bewildering graphics of M. C. Escher. Nonetheless, it was not until the second half of the 19th century that the unifying role of symmetry in the foundations of geometry was recognized. Felix Klein’s Erlangen program proclaimed that, in a very precise sense, symmetry, expressed via the notion of a transformation group, determines what geometry is. Symmetry in classical Euclidean geometry is represented by congruences and rigid motions, whereas in projective geometry an analogous role is played by collineations, geometric transformations that take straight lines into straight lines. However, it was in the new geometries of Bolyai and Lobachevsky, Riemann, Clifford and Klein, and Sophus Lie that Klein’s idea to ‘define a geometry via its symmetry group’ proved most influential. Both discrete and continuous symmetries play a prominent role in geometry, the former in topology and geometric group theory, the latter in Lie theory and Riemannian geometry. A different type of symmetry is the principle of duality in projective geometry (see Duality (projective geometry)), among other fields. This is a meta-phenomenon which can roughly be described as follows: in any theorem, exchange point with plane, join with meet, lies in with contains, and you will get an equally true theorem. A similar and closely related form of duality exists between a vector space and its dual space. Modern geometry is the title of a popular textbook by Dubrovin, Novikov and Fomenko first published in 1979 (in Russian). At close to 1000 pages, the book has one major thread: geometric structures of various types on manifolds and their applications in contemporary theoretical physics. 
A quarter century after its publication, the differential geometry, algebraic geometry, symplectic geometry and Lie theory presented in the book remain among the most visible areas of modern geometry, with multiple connections with other parts of mathematics and physics.
History of geometry
The earliest recorded beginnings of geometry can be traced to ancient Mesopotamia, Egypt, and the Indus Valley from around 3000 BCE. Early geometry was a collection of empirically discovered principles concerning lengths, angles, areas, and volumes, which were developed to meet some practical need in surveying, construction, astronomy, and various crafts. The earliest known texts on geometry are the Egyptian Rhind Papyrus and Moscow Papyrus, the Babylonian clay tablets, and the Indian Shulba Sutras, while the Chinese had the work of Mozi, Zhang Heng, and the Nine Chapters on the Mathematical Art, edited by Liu Hui. South of Egypt the ancient Nubians established a system of geometry including early versions of sun clocks. Until relatively recently (i.e. the last 200 years), the teaching and development of geometry in Europe and the Islamic world was based on Greek geometry. Euclid’s Elements (c. 300 BCE) was one of the most important early texts on geometry, in which he presented geometry in an ideal axiomatic form, which came to be known as Euclidean geometry. The treatise is not, as is sometimes thought, a compendium of all that Hellenistic mathematicians knew about geometry at that time; rather, it is an elementary introduction to it; Euclid himself wrote eight more advanced books on geometry. We know from other references that Euclid’s was not the first elementary geometry textbook, but the others fell into disuse and were lost. In the Middle Ages, mathematics in medieval Islam contributed to the development of geometry, especially algebraic geometry and geometric algebra. Al-Mahani (b. 
853) conceived the idea of reducing geometrical problems such as duplicating the cube to problems in algebra. Thābit ibn Qurra (known as Thebit in Latin) (836–901) dealt with arithmetical operations applied to ratios of geometrical quantities, and contributed to the development of analytic geometry. Omar Khayyám (1048–1131) found geometric solutions to cubic equations, and his extensive studies of the parallel postulate contributed to the development of non-Euclidean geometry. The theorems of Ibn al-Haytham (Alhazen), Omar Khayyam and Nasir al-Din al-Tusi on quadrilaterals, including the Lambert quadrilateral and Saccheri quadrilateral, were the first theorems on elliptic geometry and hyperbolic geometry, and along with their alternative postulates, such as Playfair's axiom, these works had a considerable influence on the development of non-Euclidean geometry among later European geometers, including Witelo, Levi ben Gerson, Alfonso, John Wallis, and Giovanni Girolamo Saccheri. In the early 17th century, there were two important developments in geometry. The first, and most important, was the creation of analytic geometry, or geometry with coordinates and equations, by René Descartes (1596–1650) and Pierre de Fermat (1601–1665). This was a necessary precursor to the development of calculus and a precise quantitative science of physics. The second geometric development of this period was the systematic study of projective geometry by Girard Desargues (1591–1661). Projective geometry is the study of geometry without measurement, just the study of how points align with each other. Two developments in geometry in the 19th century changed the way it had been studied previously. These were the discovery of non-Euclidean geometries by Lobachevsky, Bolyai and Gauss, and the formulation of symmetry as the central consideration in the Erlangen Programme of Felix Klein (which generalized the Euclidean and non-Euclidean geometries). 
Two of the master geometers of the time were Bernhard Riemann, working primarily with tools from mathematical analysis and introducing the Riemann surface, and Henri Poincaré, the founder of algebraic topology and the geometric theory of dynamical systems. As a consequence of these major changes in the conception of geometry, the concept of "space" became something rich and varied, and the natural background for theories as different as complex analysis and classical mechanics. Euclidean geometry has become closely connected with computational geometry, computer graphics, convex geometry, discrete geometry, and some areas of combinatorics. Momentum was given to further work on Euclidean geometry and the Euclidean groups by crystallography and the work of H. S. M. Coxeter, and can be seen in theories of Coxeter groups and polytopes. Geometric group theory is an expanding area of the theory of more general discrete groups, drawing on geometric models and algebraic techniques. Differential geometry has been of increasing importance to mathematical physics due to Einstein's postulate in general relativity that the universe is curved. Contemporary differential geometry is intrinsic, meaning that the spaces it considers are smooth manifolds whose geometric structure is governed by a Riemannian metric, which determines how distances are measured near each point, rather than being a priori parts of some ambient flat Euclidean space. Topology and geometry The field of topology, which saw massive development in the 20th century, is in a technical sense a type of transformation geometry, in which transformations are homeomorphisms. This has often been expressed in the form of the dictum 'topology is rubber-sheet geometry'. Contemporary geometric topology and differential topology, and particular subfields such as Morse theory, would be counted by most mathematicians as part of geometry. Algebraic topology and general topology have gone their own ways. 
The field of algebraic geometry is the modern incarnation of the Cartesian geometry of co-ordinates. From the late 1950s through the mid-1970s it underwent major foundational development, largely due to the work of Jean-Pierre Serre and Alexander Grothendieck. This led to the introduction of schemes and greater emphasis on topological methods, including various cohomology theories. One of the seven Millennium Prize problems, the Hodge conjecture, is a question in algebraic geometry. The study of low-dimensional algebraic varieties (algebraic curves, algebraic surfaces and algebraic varieties of dimension 3, "algebraic threefolds") is far advanced. Gröbner basis theory and real algebraic geometry are among the more applied subfields of modern algebraic geometry. Arithmetic geometry is an active field combining algebraic geometry and number theory. Other directions of research involve moduli spaces and complex geometry. Algebro-geometric methods are commonly applied in string and brane theory.
- List of geometers
- List of geometry topics
- List of important publications in geometry
- List of mathematics articles
- Flatland, a book written by Edwin Abbott Abbott about two- and three-dimensional space, to understand the concept of four dimensions
- Interactive geometry software
- Why 10 dimensions?
- Shulba Sutras
- A geometry course from Wikiversity
- Unusual Geometry Problems
- The Math Forum — Geometry
- Nature Precedings — Pegs and Ropes Geometry at Stonehenge
- The Mathematical Atlas — Geometric Areas of Mathematics
- "4000 Years of Geometry", lecture by Robin Wilson given at Gresham College, 3 October 2007 (available for MP3 and MP4 download as well as a text file)
- Finitism in Geometry at the Stanford Encyclopedia of Philosophy
- The Geometry Junkyard
- Interactive Geometry Applications (Java and Cabri 3D)
- Interactive geometry reference with hundreds of applets
- Dynamic Geometry Sketches (with some Student Explorations)
This information originally retrieved from http://en.wikipedia.org/wiki/Geometry on Wednesday 27th July 2011 8:01 pm EDT. Now edited and maintained by ManufacturingET.org
After an unprecedented observation, astronomers have directly measured the distance from Earth to a star-forming region at the other end of the Milky Way galaxy. The distance of 66,000 light-years goes far beyond the previous record for such a measurement and could soon help scientists understand the extent of our galaxy. The record-breaking measurement came with the application of a 180-year-old technique known as trigonometric parallax, first used in 1838 to measure the distance to a star in the constellation Cygnus. The idea revolves around measuring the relative change in the sky position of a celestial object by observing it from two distant points, such as opposite sides of Earth's orbit around the Sun. Once the angle of the position change has been noted, principles of trigonometry can be applied to calculate the distance to the object in question. As the National Radio Astronomy Observatory (NRAO) points out, a smaller angle indicates a greater distance. Using this technique, scientists calculated the distance to a star-forming region called G007.47+00.05 by observing the precise change in its position with the Very Long Baseline Array (VLBA), a system of 10 radio telescopes spread across North America, Hawaii, and the Caribbean. The observations were made in 2014 and 2015, and the findings were published in the journal Science. The distance measured, 66,000 light-years, reaches well past the bright centre of our galaxy, which is 27,000 light-years from our planet. It also breaks the previous record for a parallax measurement, which was about 36,000 light-years. Because the Milky Way is a spiral galaxy with many arms (one of which contains our solar system), it is difficult to map its structure and shape without traveling hundreds of thousands of light-years outwards to see it face-on. However, measuring the distance to star-forming regions like this one could help with the process of mapping. 
In fact, scientists plan to use the same technique to build a complete picture of the Milky Way over the next 10 years, along with answers to other questions about its actual shape and structure. "Most of the stars and gas in our Galaxy are within this newly-measured distance from the Sun. With the VLBA, we now have the capability to measure enough distances to accurately trace the Galaxy's spiral arms and learn their true shapes," Alberto Sanna of the Max Planck Institute for Radio Astronomy (MPIfR) in Germany said in a statement.
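For readers curious about the arithmetic, the parallax-to-distance conversion is simple enough to sketch in a few lines of Python. The conversion constant is standard; the roughly 49-microarcsecond angle used below is back-computed from the quoted distance for illustration, and is not a figure taken from the study.

```python
# Trigonometric parallax sketch: by definition, an object showing a
# parallax of 1 arcsecond (half the apparent shift over the 2 AU
# baseline of Earth's orbit) lies at a distance of exactly 1 parsec.
LY_PER_PARSEC = 3.26156  # light-years per parsec

def distance_light_years(parallax_arcsec: float) -> float:
    """Distance implied by a parallax angle given in arcseconds."""
    return (1.0 / parallax_arcsec) * LY_PER_PARSEC

# Illustrative, back-computed value: a distance of ~66,000 light-years
# corresponds to a parallax of roughly 49 microarcseconds.
print(distance_light_years(49.4e-6))  # ≈ 66,000 light-years
```

The tiny size of that angle is the whole story: halving the parallax doubles the inferred distance, which is why pushing the record outward demands ever more precise instruments like the VLBA.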
Using something known as "gravitational lensing," the ALMA telescope in Chile was able to capture amazing images of a far-off galaxy. A massive telescope sitting in the Chilean desert has captured some stunning images of a far, far away galaxy in the shape of a ring, thanks to an Einsteinian principle known as "gravitational lensing." Astronomers used the Atacama Large Millimeter Array (ALMA) radio telescope to peer deep into space, and it was able to spot this galaxy thanks to gravitational lensing, in which light is bent around a cluster of galaxies between us and the distant galaxy, allowing us to see far deeper into space than we otherwise could, according to a Business Standard report. It also explains the ring-like shape of the galaxy, making it look like something out of a Tolkien novel: it's not actually ring-shaped; that's just how it appears to us due to the distortion by the galaxy cluster between us and it. The observations are even more detailed than those of the Hubble Space Telescope, despite the fact that the latter enjoys views of space unobstructed by our atmosphere. However, thanks to the clear night sky over the Chilean desert and its incredibly sensitive instruments, the ALMA telescope was able to capture the image. The study accompanying the image was published in Astrophysical Journal Letters. Scientists used ALMA's Long Baseline Campaign to reconstruct the image, taking advantage of the massive collecting area the telescope offers. Thanks to spectral information from ALMA, they were able to measure both the mass and the rotation of the galaxy. The galaxy appears to contain gas that is unstable and collapsing in on itself, which is likely to result in more star formation.
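For a sense of the scales involved, the angular size of an Einstein ring from a point-mass lens can be estimated with the standard thin-lens formula. The sketch below uses purely illustrative numbers (a hypothetical 10^13-solar-mass cluster halfway to a source; none of these values are from the study), and for simplicity treats distances as straight path lengths rather than the cosmological angular-diameter distances a real analysis would use.

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
PC = 3.0857e16      # metres per parsec
M_SUN = 1.989e30    # solar mass, kg

def einstein_radius_arcsec(lens_mass_kg: float, d_lens_pc: float,
                           d_source_pc: float) -> float:
    """Angular Einstein radius (arcsec) of a point-mass lens:
    theta_E = sqrt(4GM/c^2 * D_ls / (D_l * D_s))."""
    d_l = d_lens_pc * PC
    d_s = d_source_pc * PC
    d_ls = d_s - d_l  # simplification: ignores cosmological geometry
    theta_rad = math.sqrt(4 * G * lens_mass_kg / C**2 * d_ls / (d_l * d_s))
    return math.degrees(theta_rad) * 3600

# Hypothetical example: a 1e13 solar-mass cluster 1 Gpc away, lensing a
# source 2 Gpc away, produces a ring several arcseconds across.
print(round(einstein_radius_arcsec(1e13 * M_SUN, 1e9, 2e9), 1))
```

Even a crude estimate like this shows why only very massive foregrounds (galaxy clusters, not single stars) produce rings wide enough for a telescope like ALMA to resolve.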
also called the Dog Star, the brightest star in the night sky and one of the closest to the Earth. A binary, or double, star, Sirius is also one of the 57 stars of celestial navigation. It is the alpha, or brightest, star in the constellation Canis Major, which is located in the Southern Hemisphere. The name Canis Major means “larger dog” and refers to the imagined shape of the constellation. Sirius is located 25 degrees southeast of Orion’s belt. In the Northern Hemisphere, it is visible during the evening in the winter and early spring, and at dawn in midsummer. In the Southern Hemisphere, it is visible at dawn in the early spring and midsummer. It is at its highest in the sky at a 10:00 pm observation on February 16. Sirius represents one of Orion’s hunting dogs. The other hunting dog is Procyon, the alpha star in the constellation Canis Minor. The name Sirius is thought to come from a Greek word meaning “trembling,” “sparkling,” or “scorching.” Because the star is positioned relatively low in the night sky, the Earth’s thick atmosphere usually modulates its brightness, making it appear to twinkle a great deal. Although Sirius’ color is now classified as blue-white, ancient records note its color as reddish. The ancient Egyptians called this star Sothis and regarded it as the Nile Star because it heralded the Egyptian new year and the annual midsummer flooding of the Nile River delta. The Egyptians designed many of their temples so Sirius could be viewed from the inner chambers. The ancient Arabs called Sirius Al Shira, “the Leader,” because of its dominant brightness. Despite the fact that the sun and Sirius are separated by a number of light-years, during the summer, Sirius appears to rise with the sun at dawn. This led people in the Middle Ages to associate the Dog Star with the excessive heat of summer. Thus, the hottest days of summer came to be called the dog days of summer. 
The star’s path was studied and chronicled from 1834 to 1844 by German astronomer Friedrich W. Bessel, who concluded by indirect observation that Sirius had a companion star. In 1862 American astronomer and telescope maker Alvan Graham Clark was the first to actually observe the companion star. The companion star is usually called Sirius B (the Pup). Sirius has a mass about 2.4 times that of the sun, and its diameter is nearly twice that of the sun; Sirius B, a white dwarf star, has a mass nearly equal to that of the sun, but its diameter is roughly 50 times smaller than the sun’s. White dwarfs are very dense stars that can be four million times as dense as water. On the Earth’s surface, matter from a white dwarf would weigh about 120,000 tons per cubic foot. Because of its tremendous density, Sirius B exerts a strong gravitational pull on its sister star, causing it to travel on a helical path through space. About once every 50 years, Sirius and Sirius B make a complete orbit around one another, though their orbits are of considerable eccentricity. The total mass of the binary system of these two stars is 3.29 times the solar mass.
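The quoted 50-year period and 3.29-solar-mass total are enough, via Kepler's third law, to back out the pair's mean separation. A minimal sketch follows; the formula is standard, but the implied separation of about 20 AU is our own back-of-envelope figure, not a number from the article.

```python
# Kepler's third law for a binary, in solar units:
#   M1 + M2 = a^3 / P^2
# with the semi-major axis a in AU and the orbital period P in years.
def semimajor_axis_au(total_mass_solar: float, period_years: float) -> float:
    """Mean separation (AU) implied by a binary's total mass and period."""
    return (total_mass_solar * period_years ** 2) ** (1.0 / 3.0)

# Using the article's figures: P ≈ 50 yr, total mass ≈ 3.29 solar masses.
print(round(semimajor_axis_au(3.29, 50.0), 1))  # ≈ 20.2 AU
```

That separation, roughly the distance from the Sun to Uranus, is why Bessel could detect the companion's gravitational tug on Sirius decades before any telescope could resolve the faint Pup itself.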
For science, or for life? FRANKFURT. Should regulations for environmental protection be valid beyond our solar system? Currently, extraterrestrial forms of life are deemed worth protecting only if they can be scientifically investigated. But what about the numerous, presumably lifeless planets whose oxygen atmospheres open up the possibility of their settlement by terrestrial life forms? Theoretical physicist Claudius Gros from Goethe University has taken a closer look at this issue. On Earth, environmental protection has the primary goal of ensuring the availability of clean water and clean air for human beings in the future. Human interests usually also take precedence when it comes to protecting more developed animals and plants. Lower life forms such as bacteria, on the other hand, are considered worthy of protection only in exceptional cases. Claudius Gros, professor for theoretical physics at Goethe University, has now investigated the degree to which norms for the protection of planets can be derived analogously from issues that arise in environmental protection on Earth. The international COSPAR agreements on space research stipulate that space missions must ensure that any existing life (such as possibly on the Jupiter moon Europa) or traces of previous life forms (perhaps on Mars) are not contaminated, so that they remain intact for scientific purposes. The protection of extraterrestrial life as valuable in and of itself is not stipulated. The COSPAR guidelines apply to our solar system. But to what extent should they be applied to planetary systems beyond our solar system (exoplanets)? This will become a relevant issue with the advent of launch pads for miniature interstellar space probes, such as those under development by the "Breakthrough Starshot" initiative. Gros argues that the protection of exoplanets for the use of humankind could not be justified. 
Apart from fly-bys, we could carry out scientific studies only with space probes able to slow down in an alien solar system. Using the best technology available today, this would require magnetic sails and missions lasting thousands of years, at the least. According to Gros, the protection of exoplanets would also be irrelevant if these planets were lifeless, even if they were otherwise habitable. This probably includes planet systems such as the Trappist-1 system, whose central star is an M-dwarf star. Planets orbiting in the habitable zone of an M-dwarf star have a dense oxygen atmosphere that was formed through physical processes before cooling. Whether life can develop on such planets is questionable. Free oxygen acts corrosively on prebiotic reaction cycles, which are considered prerequisites for the origin of life. “Whether there is another way for life to form on these oxygen planets is an open question at this time," says Gros. “If not, we would find ourselves living in a universe in which most of the habitable planets are lifeless, and thus suitable for settlement by terrestrial life forms," he adds. Publications: Claudius Gros: Why planetary and exoplanetary protection differ: The case of long duration Genesis missions to habitable but sterile M-dwarf oxygen planets, Acta Astronautica 2019, in press. https://arxiv.org/abs/1901.02286 Claudius Gros: Developing Ecospheres on Transiently Habitable Planets: The Genesis Project, Astrophysics and Space Science 361, 324 (2016) http://link.springer.com/article/10.1007/s10509-016-2911-0
Nov 12, 2013 Triton is one of the most mysterious objects in the solar system, with plumes of nitrogen spewing from frozen “geysers” near its south pole. Of Neptune’s thirteen satellites, all but one are irregular in shape and mostly rock and ice. Triton is farthest out and is much larger than the others, with a diameter of 2700 kilometers, making it a planet-sized object. Triton rotates in a clockwise, circular direction (looking down from its north pole), with an axial tilt of 157 degrees. Neptune and its other complement of moons all rotate in a counterclockwise direction. Summertime temperature on Triton is a balmy 38 Kelvin, or just above absolute zero. On August 20, 1977, NASA launched the Voyager 2 mission on a multiyear journey to the outer Solar System. Twelve years after launch, on August 25, 1989, Voyager 2 was the first spacecraft to return close up images of Neptune, the most distant planet from the Sun. Since NASA has “downgraded” Pluto to a Kuiper Belt Object, similar to Eris and Sedna, Neptune is now the eighth, and last, “official” planet. While Neptune provided stunning images and a wealth of scientific data that will keep astrophysicists busy for years, the most interesting and controversial aspect of the mission was the discovery of huge, cryogenic “ice volcanoes” on Triton. Several types of this structure were found in the southern region of the moon. The geysers leave behind “wind streaks” and fan-shaped, dark deposits near the point of eruption and are presumed to be liquid nitrogen mixed with frozen methane, as well as organic materials. The energy source for these eruptive features is unknown, but the conventional reason is that the Sun is providing the heat. JPL scientists write: “Trapping of solar radiation in a translucent, low-conductivity surface layer (in a solid-state greenhouse), which is subsequently released in the form of latent heat of sublimation, could provide the required energy. 
Both the classical solid-state greenhouse consisting of exponentially absorbed insolation in a gray, translucent layer of solid nitrogen, and the ‘super’ greenhouse consisting of a relatively transparent solid-nitrogen layer over an opaque, absorbing layer are plausible candidates.” “Insolation”, or exposure to the Sun’s rays, is an unsatisfactory explanation for these features. Since the Sun is approximately 4496.7 million kilometers from Triton, the overall solar radiation that impacts its surface must be incredibly weak. Radiant flux falls off with the square of the distance from the source, so Triton receives only about 10^-3 of the solar radiation that reaches Earth, a mere 1.5 watts per square meter. In a previous Picture of the Day, an examination of the arachnoid features and “dalmatian spots” on Mars associated them with electric discharges. The features on Mars are indicative of “particle beams” impacting the extremely cold carbon dioxide frost and subliming it to gas. As was noted: “If the dark spotting on Mars’ south polar ice is indeed caused by charged particle streams, one of the first things we should look for is an active response of the surface to these events. Since the dark spotting is occurring in the Martian south polar spring, that would be the time to look for signs of energetic activity–not unlike the so-called “volcanic” plumes of Jupiter’s closest moon Io, or the “geysers” of Saturn’s moon Enceladus.” To this may now be added the dark spotting, wind streaks and geysers on Triton.
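That flux figure is easy to check with the inverse-square law. Here is a minimal sketch, assuming the standard solar constant of about 1361 W/m² at 1 AU and roughly 30.1 AU for Neptune's (and hence Triton's) distance from the Sun; these reference values are not taken from the article itself.

```python
SOLAR_CONSTANT_W_M2 = 1361.0  # mean solar flux at 1 AU (Earth's distance)
TRITON_DISTANCE_AU = 30.1     # approximate Sun-Neptune distance

def solar_flux(distance_au: float) -> float:
    """Solar flux (W/m^2) at a given distance, via the inverse-square law."""
    return SOLAR_CONSTANT_W_M2 / distance_au ** 2

print(round(solar_flux(TRITON_DISTANCE_AU), 2))  # ≈ 1.5 W/m^2
```

At 30 AU the flux is diluted by a factor of about 900, which reproduces the article's figure of roughly 1.5 watts per square meter, about a thousandth of what Earth receives.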
July 19, 2016 – NASA and its partners are getting ready to launch a rocket-borne camera to the edge of space at 12:36 p.m. Mountain time on July 19, 2016, on its second flight to study the sun. The clarity of images returned will provide scientists around the world with clues to one of the biggest questions in heliophysics – why the sun’s atmosphere, or corona, is so much hotter than its surface. The precision instrument, called the High Resolution Coronal Imager, or Hi-C, will fly aboard a Black Brant sounding rocket, lifting off July 19 from White Sands Missile Range in New Mexico. “Our team, which includes partners foreign and domestic, have done incredible work to get us ready for launch,” said Jonathan Cirtain, principal investigator for Hi-C and manager of the Science Research Office at NASA’s Marshall Space Flight Center in Huntsville, Alabama. “The instrument has demonstrated the power of high resolution coronal imaging and we expect to demonstrate that conclusively July 19. We are ‘go’ for launch.” Scientists anticipate Hi-C’s reflight will deliver images that could help explain why the corona is so hot. The sounding rocket will fly into space for only five minutes of observation time — but those five minutes can provide key information, because the observations occur up above Earth’s atmosphere, which blocks the extreme ultraviolet sun rays that hold information necessary to untangle the coronal heating mystery. During its first flight in July 2012, Hi-C captured the highest-resolution images ever taken of the million-degree solar corona, revealing previously unseen magnetic activity. For decades, scientists have suspected that activity in the sun’s magnetic field is heating the corona. “The magnetic field plays a crucial role in dictating the structure of the sun’s atmosphere,” said Cirtain. 
“It also acts as a conduit for mass and energy to flow into the solar corona and solar wind – some of it heading toward Earth as powerful solar flares that can disrupt radio and GPS communications. It’s critical to understand the process by which the sun releases these bursts of energy.” The telescope, the centerpiece of a payload weighing 464 pounds and measuring 10-feet long, is designed to observe a large, active region in the sun’s corona in fine detail. The telescope will acquire data for five minutes, taking about one image every five seconds. Partners associated with the development of the Hi-C telescope include the Smithsonian Astrophysical Observatory in Cambridge, Massachusetts; the University of Central Lancashire in England; and Lockheed Martin’s Solar Astrophysical Laboratory in Palo Alto, California. Hi-C is supported through NASA’s Sounding Rocket Program at the agency’s Wallops Flight Facility on Wallops Island, Virginia, which is managed by NASA’s Goddard Space Flight Center in Greenbelt, Maryland. NASA’s Heliophysics Division manages the sounding-rocket program for the agency.
From the various excited scientific reports, I deduce that the black hole at the centre of the M87 galaxy poses no immediate threat to us on Earth. Since it's some 55 million light years away, there are no more visions of being sucked away into a bottomless pit. A pit? No, that's not possible, because a black hole has no bottom as far as we know. Also, it's out there in the universe, so it could well be Elysium, heaven, etc, therefore being the converse of the netherworld. Perhaps that is what it means when there is no space-time and all human constructs wither away, leaving you with, well, eternity. When those beautiful 'golden doughnut' photographs of the M87 black hole were released on Wednesday, the sheer do-ability of such a capture overwhelmed us. Apart from the fact that such a cosmic gobbler exists, that is. And not just one, but several thousand in our very own Milky Way, the one at its very centre being a massive black hole at least 30,000 light years away from our solar system. But so many questions are floating around, especially among non-scientists like me. In an interview for ET in 2008, I had asked astrophysicist Michio Kaku, who has played such a big role in popularising science, if smashing particles in a laboratory, like at the European Organisation for Nuclear Research, would create black holes. Black holes come in all sizes, replied Kaku. You have tigers and small domestic cats. They are both from the cat family. One is dangerous, the other is harmless. Similarly, you have big black holes the size of stars and you have subatomic small holes whose energy can barely light a bulb, which are totally harmless. In fact, Earth is being hit by such particles all the time, and nothing happens. Now, that was very reassuring. But Kaku added: if space is a fabric, then of course fabrics can have ripples, which we have now seen directly. But fabrics can also rip. Then the question is, what happens when the fabric of space and time is ripped by a black hole? Gulp. 
That's not a very comforting thought, however fascinating it may be. Frankly, I can't say I really understand what that means. Perhaps, inside a black hole, there is another universe, one that we can't even conceptualise. Who knows: if one manages to reach a black hole's 'event horizon' (a region in space-time beyond which events cannot affect an outside observer, a kind of 'point of no return') and then 'disappear', maybe one would turn 'immortal', ethereal, attain satori (the Japanese Buddhist term for 'awakening', or total comprehension) or, as Stephen Hawking theorised, have one's 'information' 'smeared' on the event horizon as one falls in.
The observations are the first to get a detailed look at the Trojans' colors: both the leading and trailing packs are made up of predominantly dark, reddish rocks with a matte, non-reflecting surface. What's more, the data verify the previous suspicion that the leading pack of Trojans outnumbers the trailing bunch. The new results offer clues in the puzzle of the asteroids' origins. Where did the Trojans come from? What are they made of? WISE has shown that the two packs of rocks are strikingly similar and do not harbor any "out-of-towners," or interlopers, from other parts of the solar system. The Trojans do not resemble the asteroids from the main belt between Mars and Jupiter, nor the Kuiper belt family of objects from the icier, outer regions near Pluto. "Jupiter and Saturn are in calm, stable orbits today, but in their past, they rumbled around and disrupted any asteroids that were in orbit with these planets," said Tommy Grav, a WISE scientist from the Planetary Science Institute in Tucson, Ariz. "Later, Jupiter re-captured the Trojan asteroids, but we don't know where they came from. Our results suggest they may have been captured locally. If so, that's exciting because it means these asteroids could be made of primordial material from this particular part of the solar system, something we don't know much about." Grav is a member of the NEOWISE team, the asteroid-hunting portion of the WISE mission. The first Trojan was discovered on Feb. 22, 1906, by German astronomer Max Wolf, who found the celestial object leading ahead of Jupiter. Christened "Achilles" by the astronomer, the roughly 81-mile-wide (130-kilometer-wide) chunk of space rock was the first of many asteroids detected to be traveling in front of the gas giant. Later, asteroids were also found trailing behind Jupiter. 
The asteroids were collectively named Trojans after the legend in which Greek soldiers hid inside a giant horse statue to launch a surprise attack on the Trojan people of the city of Troy. "The two asteroid camps even have their own 'spy,'" said Grav. "After having discovered a handful of Trojans, astronomers decided to name the asteroids in the leading camp after the Greek heroes and the ones in the trailing camp after the heroes of Troy. But each of the camps already had an 'enemy' in their midst, with asteroid 'Hector' in the Greek camp and 'Patroclus' in the Trojan camp." Other planets were later found to have Trojan asteroids riding along with them too, such as Mars, Neptune and even Earth, where WISE recently found the first known Earth Trojan: http://www.jpl.nasa.gov/news/news.php?release=2011-230 . Before WISE, the main uncertainty defining the population of Jupiter Trojans was just how many individual chunks were in these clouds of space rock and ice leading Jupiter, and how many were trailing. It is believed that there are as many objects in these two swarms leading and trailing Jupiter as there are in the entirety of the main asteroid belt between Mars and Jupiter. To put this and other theories to bed requires a well-coordinated, well-executed observational campaign. But there were many things in the way of accurate observations -- chiefly, Jupiter itself. The orientation of these Jovian asteroid clouds in the sky in the last few decades has been an impediment to observations. One cloud is predominantly in Earth's northern sky, while the other is in the southern, forcing ground-based optical surveys to use at least two different telescopes. The surveys generated results, but it was unclear whether a particular result was caused by the problems of having to observe the two clouds with different instruments, and at different times of the year. Enter WISE, which roared into orbit on Dec. 14, 2009.
The spacecraft's 16-inch (40-centimeter) telescope and infrared cameras scoured the entire sky looking for the glow of celestial heat sources. From January 2010 to February 2011, about 7,500 images were taken every day. The NEOWISE project used the data to catalogue more than 158,000 asteroids and comets throughout the solar system. "By obtaining accurate diameter and surface reflectivity measurements on 1,750 Jupiter Trojans, we increased by an order of magnitude what we knew about these two gatherings of asteroids," said Grav. "With this information, we were able to more accurately than ever confirm there are indeed almost 40 percent more objects in the leading cloud." Trying to understand the surface or interior of a Jovian Trojan is also difficult. The WISE suite of infrared detectors was sensitive to the thermal glow of the objects, unlike visible-light telescopes. This means WISE can provide better estimates of their surface reflectivity, or albedo, in addition to more details about their visible and infrared colors (in astronomy "colors" can refer to types of light beyond the visible spectrum). "Seeing asteroids with WISE's many wavelengths is like the scene in 'The Wizard of Oz,' where Dorothy goes from her black-and-white world into the Technicolor land of Oz," said Amy Mainzer, the principal investigator of the NEOWISE project at NASA's Jet Propulsion Laboratory in Pasadena, Calif. "Because we can see farther into the infrared portion of the light spectrum, we can see more details of the asteroids' colors, or, in essence, more shades or hues." The NEOWISE team has analyzed the colors of 400 Trojan asteroids so far, allowing many of these asteroids to be properly sorted according to asteroid classification schemes for the first time. "We didn't see any ultra-red asteroids, typical of the main belt and Kuiper belt populations," said Grav. 
"Instead, we find a largely uniform population of what we call D-type asteroids, which are dark burgundy in color, with the rest being C- and P-type, which are more grey-bluish in color. More research is needed, but it's possible we are looking at some of the oldest material known in the solar system." Scientists have proposed a future space mission to the Jupiter Trojans that would gather the data needed to determine their age and origins. The results were presented today at the 44th annual meeting of the Division for Planetary Sciences of the American Astronomical Society in Reno, Nev. Two studies detailing this research have been accepted for publication in the Astrophysical Journal. JPL managed and operated WISE for NASA's Science Mission Directorate. The spacecraft was put into hibernation mode in 2011, after it scanned the entire sky twice, completing its main objectives. Edward Wright of UCLA is the principal investigator. The mission was selected competitively under NASA's Explorers Program, managed by the agency's Goddard Space Flight Center in Greenbelt, Md. The science instrument was built by the Space Dynamics Laboratory in Logan, Utah. The spacecraft was built by Ball Aerospace & Technologies Corp. in Boulder, Colo. Science operations and data processing take place at the Infrared Processing and Analysis Center at the California Institute of Technology in Pasadena. Caltech manages JPL for NASA. More information is online at http://www.nasa.gov/wise , http://wise.astro.ucla.edu and http://jpl.nasa.gov/wise . News Media Contact: Whitney Clavin, 818-354-4673, Jet Propulsion Laboratory, Pasadena, Calif.
We know from observation that galaxies seem to move away in an accelerated way. This observation was synthesized in the famous "Hubble Law". To explain this accelerated escape, an entity was proposed that became known as "Dark Energy". Subsequently, it was also observed that stars orbit galaxies at a much higher velocity than calculated, as if there were much more matter inside the galaxies than the matter observed. To explain this phenomenon another entity was proposed that became known as "Dark Matter". However, despite many efforts, no other evidence of either "Dark Energy" or "Dark Matter" has been found. So we are proposing a new explanation that replaces these two "dark" entities with a single hypothesis: the "Decreasing Universe" Hypothesis. By reducing the number of hypotheses, from two to one, we will be in agreement with Occam's Razor; beyond that, as we shall see, we can also derive the "Hubble Law". In the "Decreasing Universe" model it is established that the gravitational field causes a space contraction that can be detected by an observer who is not subjected to such a field. In this way, all objects within this space are also contracted, especially the instruments of these observers. In particular, our planet is subjected, to a greater or lesser degree, to various fields: the gravitational field of the Earth itself, the Sun, the Moon, the distant galaxies, and so on. If, for example, we have a measuring instrument such as a ruler of length "L" (= 1 meter), then an observer who suffers no gravitational influence will notice that this ruler will, with time, decrease in size. Of course, for observers subject to these fields there will be no change, since all the space and everything immersed in it contracts at the same time, so there will be no change in the measurements made by them.
For example, here on Earth, a table 2 m long will, after millions of years, still measure 2 m, as the table shrinks in the same proportion as the measuring ruler and, locally, no difference can be observed. However, in intergalactic space the gravitational field is practically null, and therefore that space does not suffer the same contraction that we here on Earth are undergoing. Thus, in the absence of a considerable gravitational field in intergalactic space, the space between us and a distant galaxy will not contract in the same proportion as our own terrestrial space is contracting. Consider, for example, the enormous time a photon emitted by a distant galaxy takes to reach us. In this long period of time, which may be billions of years from the emission of the photon until it reaches our planet, our space, and our rulers, will be reduced in size compared to the original size they had when the photon was emitted, from the point of view of an observer who is not subject to such a gravitational field. This reduction of our local space and of the size of our rulers will cause the measured distance to the star to appear larger than it was at the time the photon was emitted (even if its actual distance did not change in that period). 2. Defining Some Concepts Let us call "Local Space" (= "LS") a region of space that is subject to a non-negligible gravitational field and thus suffers spatial contraction. Let us call "Local Observer" (= "LO") an observer who belongs to an "LS" and is therefore subject, along with his instruments, to the spatial contraction. For example, the planet Earth is an "LS" and we are "LO"s. Let us call "Outer Space" (= "OS") a region of space that is subject to only a very weak, negligible gravitational field. Let us call "Sidereal Observer" (= "SO") an observer located in such a spatial region. For example, observers in the intergalactic region would be "SO"s.
To clarify these ideas, we may think that observers in the "OS" (= "SO") play a role similar to observers in an inertial frame, as opposed to observers located in the "LS", who would play a role analogous to observers in a non-inertial frame. 2.1. Exemplifying the Concepts Consider, at an arbitrary initial instant "t0" in the "LS", a ruler of length "L0" that an "LO" uses to make its measurements. Suppose that at this moment "t0" an "SO", in intergalactic space, takes the measure of this same ruler "L0" as the standard measure for his own measurements. Then, at time "t0", both observers ("LO" and "SO") will consider the pattern "L0" to be of the same size. However, the "LS" will continue to contract in relation to the "OS". The "LO" will not notice the variation of the ruler "L0" because both its measuring ruler and everything in its "LS" decrease in the same proportion. However, the "SO", after a time "t" ("t" > "t0"), will see the ruler of the "LO" decrease to a smaller size "L". Let us call "Tj" (Jocaxian Time) the time period (Δt = t − t0) necessary for an "SO" to see the space (and the ruler) of the "LO" contract to half the size it had at time t0. That is, to shrink to a size L = L0/2 at time t0 + Tj. 2.2. Some Simplifications Before we continue, we will consider that: - If the gravitational field is constant, the time required for the "LS" (and all that is contained in it) to contract to half its size, called Tj, as measured by an "SO", will also be constant. We will also consider that the galaxies, from the point of view of an "SO", are not necessarily rapidly moving away from each other. For simplicity we will calculate the effects of "Dark Energy" and "Dark Matter" only as a result of our gravitational contraction, keeping their distances constant (from the point of view of an "SO").
Also, we will not consider the effect of time dilation due to the gravitational force in the "LS" in relation to the "OS". 3. Local Space Contraction Formula ("LS") We can mathematically translate the concepts we saw above into the following formula (space contraction from the point of view of an "SO"):

(E1) L(t) = L0 · 2^(−Δt/Tj)

where: L(t) = measure of L0 in the "LS" by a Sidereal Observer; t0 = initial time (arbitrary); L0 = length measured at t = t0; Tj = Jocaxian Time; Δt = t − t0. Note that for an "LO", L = L0 (always!), that is, the size of the ruler does not change with time in the "LS". At each period "Tj" of time, our space (and our rulers) contracts by half (from the point of view of an "SO"). If we define the "Jocaxian Factor"

(E2) Fj(Δt) = 2^(Δt/Tj)

we can rewrite (E1) as L(t) = L0 / Fj(Δt). We can also rewrite the same Jocaxian Factor (Fj) in a more friendly way:

(E3) Fj(Δt) = e^(ln2 · Δt/Tj)

4. "Dark Energy" Effect Of course, if intergalactic space does not contract, and if our measuring scale decreases in size, then this intergalactic space should seem to us larger, in the same proportion as our measuring scale contracts. If, for example, at t = t0, we measure the distance to a galaxy "X" with our ruler of length L0 as being D0, then after a time "Tj" our ruler will measure half of its initial size L0, and therefore, when we ("LO") measure the distance to that galaxy, we will measure it as being 2·D0. We should note that within our "LS" measured sizes do not change, as everything decreases along with our rulers, but the "OS" does not contract like our "LS". So we will have the illusion that the galaxy "X" is moving away from us. This is what we can call the "Dark Energy Effect". 5. Apparent Distance Formula The measured distance is inversely proportional to the length of the measurement pattern. We can synthesize this idea mathematically with the following formula (see Appendix A):

(E4) D(Δt) = D0 · Fj(Δt) (distance formula with the Jocaxian Factor)

"t0" is the time at which the photon was emitted by the galaxy.
"t" is the time at which Earth received this photon. "D0" is the distance we would measure from Earth to the galaxy at time "t0". "D(Δt)" is the distance we measure from Earth to the galaxy after a time "Δt". Δt = t − t0. As Fj(Δt) grows exponentially with time (E3), the Earth's distance from the galaxy will also appear to increase exponentially with time. 6. Hubble's Law With the formula of the apparent distance (E4) we can calculate the apparent recession speed of distant galaxies:

(E6) V(t) = dD/dt = (ln2/Tj) · D0 · Fj(Δt) = (ln2/Tj) · D

But Hubble's law is exactly like this:

(E7) V = H0 · D (H0 = Hubble's constant and D is the distance to the galaxy)

As (E6) = (E7), we can now determine Tj:

(E8) Tj = ln2 / H0

Substituting (E8) into (E3) we will have:

(E9) Fj(Δt) = e^(H0·Δt) (Jocaxian Factor in terms of the Hubble constant)

which provides us:

(E10) D(Δt) = D0 · e^(H0·Δt) (apparent distance formula in terms of the Hubble constant)

If we want to calculate the real distance from Earth to the galaxy, using the measurements that our rulers had at the time the photon was emitted (at t = t0), then:

(E11) Δt = D0 / c (time for a photon emitted from the galaxy to reach us, where c = speed of light)

From (E11) and (E10) we will have:

(E12) D = D0 · e^(H0·D0/c) (apparent distance of a galaxy as a function of the actual distance)

As H0 = 2.2e−18 s−1, we can replace it in (E8) and find Tj ≈ 3.15e17 s. That is, the Jocaxian Time, the time necessary for our space to contract in half, is about 10 billion years. We can now find the contraction rate of our space for every billion years: 1 − 2^(−0.1) ≈ 7%. For every 1 billion years, our space (and our rulers) contracts about 7% of its original size. It is interesting to note that this value (7%) corresponds exactly to the contraction rate calculated from the "Redshift" of the galaxy NGC3034. Currently the apparent distance of "NGC3034" is about 11e6 light years (about 1e23 meters).
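As a quick numerical check of the Jocaxian Time from (E8), here is a short Python sketch using the value H0 = 2.2e−18 s−1 quoted in the text (the seconds-per-year constant is the usual round approximation):

```python
import math

H0 = 2.2e-18              # Hubble constant in 1/s, as used in the text
SECONDS_PER_YEAR = 3.156e7

# (E8): Tj = ln(2) / H0, the time for local space to contract to half its size
Tj_seconds = math.log(2) / H0
Tj_years = Tj_seconds / SECONDS_PER_YEAR
print(f"Jocaxian Time: {Tj_years:.2e} years")        # ~1e10 years, i.e. ~10 billion

# Fractional contraction over one billion years: 1 - 2^(-1e9/Tj)
rate = 1 - 2 ** (-1e9 / Tj_years)
print(f"Contraction per billion years: {rate:.1%}")  # ~7%
```

Both printed values match the figures quoted in the text (Tj of about 10 billion years, and roughly 7% contraction per billion years).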
Applying (E12) to the galaxy NGC3034, and knowing that H0/c = 8e−27 m−1, we will have the following equation for the real distance D0 of the galaxy NGC3034:

1e23 = D0 · e^(8e−27 · D0) (equation for the real distance of the galaxy NGC3034)

Using a solver we obtain for the real distance D0 = 9e22; that is, this galaxy is about 10% closer to Earth than it appears to be. 7. Dark Matter Dark Matter can also be observed as an effect of our spatial contraction. As nomenclature, we will suppress the subscripts "obs" of the measurements observed here from Earth. So we simplify: Vobs = V (rotation speed observed); Dobs = D (distance observed); Robs = R (radius observed); Wobs = W (angular speed observed); Mobs = M (mass observed). Suppose that from Earth we now observe a star circling the periphery of a galaxy that is at an observed (apparent) distance "D" from our planet. According to Figure 1, which shows how we observe a star rotating around a distant galaxy, the observed radius "R" of the galaxy's orbit will be proportional to this distance.
(E15) R = D · sin(θ) (orbit radius as a function of the observed distance and angle)

As we saw earlier, at the time the photon was emitted the real distance would be D0, so:

(E16) R0 = D0 · sin(θ) (real orbit radius as a function of the actual distance and angle)

From (E9) and (E11) we can define the Jocaxian Factor of the Galaxy:

(E17) Fjg = e^(H0·D0/c) (Jocaxian Factor of the Galaxy)

So, we will have:

(E18) R0 = R / Fjg (real radius as a function of the Jocaxian Factor of the Galaxy)

If M is the observed mass of the galaxy where the star orbits, V is its tangential velocity observed from the Earth, and G is the gravitational constant, we will have:

(E19) V = sqrt(G·M / R) (equation of velocity as a function of mass and radius)

In terms of the angular velocity W, we have:

(E20) V = W · R (equation of velocity as a function of angular velocity and radius)

From (E19) and (E20) we derive:

(E21) W² = G·M / R³ (equation of the angular velocity as a function of mass and radius)

From (E21), at t = t0, we have:

(E22) W0² = G·M0 / R0³ (equation of the angular motion as a function of the real radius and real mass)

The angular velocity does not change with the observed distance, since the time interval between two emitted photons is the same interval as when they arrive at Earth. So:

(E23) W = W0 (the angular speed is the same for an "LO" as for an "SO")

From (E20), (E22) and (E23) we find:

(E25) V = W0 · R = sqrt(G·M0·Fjg³ / R) (equation of the tangential velocity as a function of the baryonic mass and apparent radius)

Comparing (E25) with (E19) we conclude that:

(E26) M = M0 · Fjg³ (apparent mass as a function of the actual mass)

From Earth we observe a mass M for the galaxy larger than the mass at t = t0. The "Dark Matter" effect will then be the difference between M and the real mass M0:

(E27) M − M0 = M0 · (e^(3·H0·D0/c) − 1) (dark matter equation as a function of the Hubble constant and the actual distance)

If we adopt the "Decreasing Universe", where the gravitational field shrinks the space that it crosses, we find that the accelerated separation of the galaxies, often explained by the so-called "Dark Energy", is a kind of "illusion" resulting from this space contraction and is, therefore, unnecessary.
The "Dark Matter", on the other hand, can also be explained by the same effect of the gravitational contraction of our space, since the radius of a galaxy is observed as greater than it really is; consequently, the translation speed of a star is seen as above what is expected from the observed baryonic mass, providing the false impression that there is an extra, invisible matter responsible for the effect. Appendix A. Derivation of the Distance Formula from the Local Contraction Formula. At t = t0 we take as the measurement standard, for both observers, the measure "L0", for example, L0 = 1 meter. Thus, all distances will be taken as a number that multiplies the pattern L0; then both observers, sidereal and terrestrial, measure the same distance to a given galaxy:

(A1) D(t0) = D0 (D0 is the distance measured to the galaxy taking the measurement pattern L0)

After a time t (> t0), from the point of view of a sidereal observer, terrestrial space has shrunk, and the ruler L0 has decreased to L according to the spatial contraction formula (E1): L = L0 · 2^(−Δt/Tj). As, in our hypothesis, from the point of view of the Sidereal Observer the galaxy does not move away, the length covered must be the same, that is:

(A2) D · L = D0 · L0 (D is the distance to the galaxy measured by the terrestrial observer according to the pattern L)

We must keep in mind that for the local terrestrial observer L = L0, since he does not perceive his own contraction. From (A1) and (A2) we have D = D0 · (L0/L): from the point of view of the sidereal observer, the distance is not altered. Using (E1), we have finally:

D(t) = D0 · 2^(Δt/Tj) = D0 · Fj(Δt) (measured distance to the galaxy according to the terrestrial observer, as a function of time)

References: The Equivalence Principle and the End of the Dark Energy. Solver for Equations: https://www.mathway.com/Algebra
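To put a number on the dark-matter effect described above, here is a minimal Python sketch of the apparent-mass relation as reconstructed from the text's captions (apparent mass M = M0 · Fjg³, with Fjg = e^(H0·D0/c)). The galaxy distance used is an arbitrary illustrative value, not one taken from the text:

```python
import math

H0 = 2.2e-18   # Hubble constant in 1/s (value used in the text)
c = 3.0e8      # speed of light in m/s
LY = 9.46e15   # one light year in meters

def apparent_mass_ratio(D0_meters):
    """Return M/M0 = Fjg^3 = exp(3*H0*D0/c) for a galaxy at real distance D0."""
    Fjg = math.exp(H0 * D0_meters / c)  # Jocaxian Factor of the Galaxy
    return Fjg ** 3                     # apparent mass over real mass

# Hypothetical galaxy 100 million light years away (illustrative only)
ratio = apparent_mass_ratio(100e6 * LY)
print(f"M / M0 = {ratio:.4f}")
```

The ratio grows exponentially with the galaxy's real distance D0, so under this model the "missing mass" inferred from rotation speeds would be larger for more distant galaxies.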
Almost weightless and able to pass through the densest materials with ease, neutrinos seem to defy the laws of nature. But these mysterious particles may hold the key to our deepest questions about the universe, says physicist Heinrich Päs. In The Perfect Wave, Päs serves as our fluent, deeply knowledgeable guide to a particle world that tests the boundaries of space, time, and human knowledge. The existence of the neutrino was first proposed in 1930, but decades passed before one was detected. Päs animates the philosophical and scientific developments that led to and have followed from this seminal discovery, ranging from familiar topics of relativity and quantum mechanics to more speculative theories about dark energy and supersymmetry. Many cutting-edge topics in neutrino research—conjectures about the origin of matter, extra-dimensional spacetime, and the possibility of time travel—remain unproven. But Päs describes the ambitious projects under way that may confirm them, including accelerator experiments at CERN and Fermilab, huge subterranean telescopes designed to detect high-energy neutrino radiation, and the Planck space observatory scheduled to investigate the role of neutrinos in cosmic evolution. As Päs’s history of the neutrino illustrates, what is now established fact often sounded wildly implausible and unnatural when first proposed. The radical side of physics is both an exciting and an essential part of scientific progress, and The Perfect Wave renders it accessible to the interested reader.
Our planet is pretty amazing and unique. It's the only planet in our solar system that has life, as far as we know, and it's also the prettiest. (We may be biased here, but you should always be biased towards your mother's beauty.) There's always something new to learn and discover about this living rock hurtling through space that we all share, so here are 25 Shocking Facts You Never Knew About Earth! Most people know that Earth is the only planet in our Solar System with an atmosphere that readily supports life (oxygen & water). What most people don't realize is that Earth is one of only four terrestrial planets (meaning it's rocky at the surface). Venus, Mars and Mercury are the other three. Every 100 years, the Earth's rotation slows by approximately 2 milliseconds. We're slowing down. Surprisingly, we haven't explored much of Earth. About 71% of the Earth's surface is covered by water, and we've barely explored the oceans. In fact, less than 10% (some say as little as 5%) of the oceans have been explored. Over 200,000 marine species have been identified in the 10% that's been explored, so just imagine what's left down there that we have no idea about. Check out more about Earth's oceans in 25 Surprising Things You Might Not Know About Earth's Oceans. Despite most of the Earth's surface being covered in water, 68% of the fresh water on Earth is permanently frozen as ice caps and glaciers. The Earth isn't perfectly round. It's slightly football shaped due to its constant spinning. So despite the perfect sphere we so often see depicted, it's actually a little squished. Mount Everest isn't technically the highest point on Earth. Oops. Since the Earth isn't perfectly round, anything along the equator is slightly "higher" or closer to space than objects further from the equator. So Mount Chimborazo in Ecuador is over 9,000 feet "shorter" than Everest when measuring in feet from sea level.
However, due to the "bump" of the Equator, its top is actually about a mile and a half further from the center of the Earth than the top of Everest. There are no true black flowers. The planet just doesn't grow them. They're all very deep shades of purple or red, some so much so that our eyes perceive them as black, but they aren't true black. The largest earthquake ever recorded happened on May 22nd, 1960 in southern Chile near Valdivia. It's referred to as the "Great Chilean Earthquake," coming in at a magnitude of 9.5. A Great Bristlecone Pine in California is thought to be the oldest living organism on Earth at an estimated 5,067 years old. It has no name. More well known, but younger, is a named tree of the same species, Methuselah, which is 4,850 years old. Tides exist because of the Moon. No, really. The moon's orbit controls sea levels, which results in...tides. Moonquakes - like earthquakes, but on the moon! - also can affect the tides. No moon, no tides. So think twice before you threaten to steal or blow up our dear Luna, okay? The largest mountain range and the deepest valley are both under the ocean. The Mariana Trench is seven miles deep - that's seven miles below the ocean's surface, kids - and three people have been to the bottom of it. Despite the insane pressure of all that water, things still live down there. Yet despite these highest highs and lowest lows, the Earth is pretty smooth. Considering how big it is - 24,901 mi circumference - all those mountains and canyons, when taken into account, amount to only about 1/5000th of the total circumference. Which means if the Earth was small enough to hold, it would seem as smooth as a bowling ball. Antarctica is one of the best places to find meteorites. This isn't because it gets more, but rather because they're pretty easy to find there, due to the lack of vegetation and lots of snow. More meteorites have been found in Antarctica than anywhere else.
Were all that ice in Antarctica to melt, sea levels would rise 200ft across the Earth. For reference, the highest point in all of Florida is only 312ft above sea level. The magnetic poles of Earth are moving. They have moved before. They will move again. Eventually they will be fully switched from when they started, and then move back. It's not the end of the world. There are five main layers to the Earth's atmosphere - Exosphere, Thermosphere, Mesosphere, Stratosphere & Troposphere. The higher you go, the thinner it gets - so it's the most dense in the lowest layer, the troposphere, which is where weather happens. Learn more about the Earth's atmosphere in our list of 25 Facts About The Earth's Atmosphere That Are Truly Majestic. Earth has boiling rivers. In the Peruvian rainforest, a legit shaman cares for and protects the sacred healing site of Mayantuyacu. Mayantuyacu has a 4-mile river named Shanay-timpishka whose temperatures reach up to 196 degrees Fahrenheit, though in some parts it will actually boil. Um, if you fall in, you die. At least 30 different places on Earth have sand dunes that...sing? They sing and croak, and it sounds like something between a swarm of bees and chanting monks. I'm sure you'd like us to tell you why now, but...nobody is sure. The Earth's tectonic plates are constantly shuffling around each other, causing earthquakes, tsunamis, and forming mountains. They ALSO play a very important role in the carbon cycle, which means carbon-based life forms continue to do pretty well here. Due to the amount of heavy elements in Earth's makeup - lead, uranium, fruitcake - Earth is the most dense planet in the Solar System, giving it the highest surface gravity of any terrestrial object (planets, dwarf planets, or moons) in the solar system. (Also the best John Mayer song ever, Gravity.) Climate overall tends to shift from really really hot to really really cold.
There have been at least 5 MAJOR Ice Ages throughout the history of the planet, and technically we're still living at the tail end of the last one, which started a little over 3 million years ago and peaked about 20,000 years ago. According to scientists, Ice Ages start slowly and end abruptly, sometimes warming globally as much as 20°F over the course of only a few years! In the last 100,000 years alone, the Earth has experienced at least 24 of these rapid temperature changes. I like big Moons and I cannot lie. Earth's Moon - which doesn't have an official name like other planets' moons - is huge compared to the size of Earth. Most scientists think this is because the Moon used to be a PART of Earth. The theory goes that there was a violent separation (possibly several) millions of years ago of the rocks that eventually became the Moon. She just wants to stay close to home, aw. The softest mineral on Earth is talc. Yes, talc as in talcum powder, which we use in cosmetics and on babies' bums, as well as in ceramic glaze and paper production. There's a place where it thunderstorms every night. Northwestern Venezuela is home to where the Catatumbo River meets Lake Maracaibo, and it is here that, every night, a thunderstorm happens. They can last up to ten hours and average nearly 30 lightning strikes per minute. 40,000 tons of space dust falls on our planet annually. It's made of oxygen, nickel, iron, carbon, and other elements. It's literally stardust. The planet is covered in it. We breathe it in. It's pretty cool to think about.
The more you go to space, the cheaper it gets. Hurtling through space is easy. But getting started? Chemical propellants are great for an initial push, but your precious kerosene will burn up in a matter of minutes. After that, expect to reach the moons of Jupiter in, oh, five to seven years. Propulsion needs a radical new method. But before you break into outer space, a rogue bit of broke-ass satellite comes from out of nowhere and caps your second-stage fuel tank. No more rocket. Launch adapters, lens covers, even a fleck of paint can punch a crater in critical systems. Whipple shields—layers of metal and Kevlar—can protect against the bitsy pieces, but nothing can save you from a whole satellite. Thousands of satellites orbit Earth, most of them dead in the air. So starting now, all satellites will have to fall out of orbit on their own. That might be a century hence—or a lot sooner if space war breaks out and someone, like China, starts shooting satellites down. Essential to the future of space travel: world peace. The Deep Space Network, a collection of antenna arrays in California, Australia, and Spain, is the only navigation tool for space. Everything from student-project satellites to the New Horizons probe meandering through the Kuiper Belt depends on it to stay oriented. But as more and more missions take flight, the network is getting congested. The switchboard is often busy. So in the near term, NASA is working to lighten the load. Atomic clocks on the crafts themselves will cut transmission time in half, allowing distance calculations with a single downlink. And higher-bandwidth lasers will handle big data packages, like photos or video messages. The farther rockets go from Earth, however, the less reliable this method becomes. Sure, radio waves travel at light speed, but transmissions to deep space still take hours.
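To get a feel for why deep-space transmissions "take hours", here is a small Python sketch of one-way light time at a few illustrative distances (the distance figures are round approximations chosen for this example, not values from the article, and the real Earth-to-target distances vary with orbital positions):

```python
C_KM_PER_S = 299_792  # speed of light in km/s

def one_way_light_time_hours(distance_km):
    """Time in hours for a radio signal, traveling at light speed, to cover distance_km."""
    return distance_km / C_KM_PER_S / 3600

# Rough, illustrative distances from Earth
targets = {
    "Mars (average)": 225e6,
    "Jupiter (average)": 778e6,
    "Kuiper Belt (~40 AU)": 40 * 149.6e6,
}
for name, km in targets.items():
    print(f"{name}: {one_way_light_time_hours(km):.2f} h one way")
```

A round trip doubles these figures, which is why a single-downlink distance calculation (enabled by an onboard atomic clock) saves so much time compared with a two-way exchange.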
Deep space is also awash in radiation: galactic cosmic rays and charged particles streaming from the sun. When these particles knock into the atoms of aluminum that make up a spacecraft hull, their nuclei blow up, emitting yet more superfast particles called secondary radiation. A better solution? One word: plastics. NASA is testing plastics that can mitigate radiation in spaceships or space suits. Or how about this word: magnets. Scientists on the Space Radiation Superconducting Shield project are working on a magnesium diboride superconductor that would deflect charged particles away from a ship. It works at — degrees Celsius, which is balmy for superconductors, but it helps that space is already so damn cold. Lettuce got to be a hero last August, when astronauts aboard the ISS ate leaves they had grown in orbit for the first time. But large-scale gardening in zero g is tricky. Also, existing vehicles are cramped. Some veggies are already pretty space-efficient (ha!). Proteins, fats, and carbs could come from a more diverse harvest—like potatoes and peanuts. GMOs could help here too. Water will need to be recycled as well; one researcher likens the process to how your small intestine recycles what you drink. Weightlessness wrecks the body: It makes certain immune cells unable to do their jobs, and red blood cells explode. It gives you kidney stones and makes your heart lazy. Artificial gravity would fix all that. Creating an environment that can sustain human life in the almost total absence of gravity, electrical outlets, or oxygen takes a lot of experimentation. We compiled 30 common items that were invented for use in the race for space. Unlike many early inventions that have since been abandoned, these are employed daily to save lives, improve environmental sustainability, and keep humans healthy. After NASA developed scratch-resistant astronaut helmets, the agency licensed the technology to Foster-Grant Corporation, which continued experimenting with scratch-resistant plastics; such coatings now cover most sunglasses and prescription lenses.
Needing to monitor astronauts' vital signs in space, the Goddard Space Flight Center created monitoring systems that have been adapted to regulate blood sugar levels and release insulin as needed. The polymers created for use in space suits have been valuable in creating flame-retardant, heat-resistant suits for firefighters. Newer suits also feature circulating coolant to keep firefighters from succumbing to heat, and advanced breathing systems modeled after astronaut life support systems. A cordless drill that Black & Decker developed with NASA for collecting lunar samples led to the creation of the ultra-light, compact, cordless DustBuster. Technology used to track astronauts' eyes during periods in space, in order to assess how humans' frames of reference are affected by weightlessness, has become essential for use during LASIK surgery; the device tracks a patient's eye positions for the surgeon. Shock absorbers designed to protect equipment during space shuttle launches are now used to protect bridges and buildings in areas prone to earthquakes. Out of a need to power space missions, NASA has invented, and consistently improved, photovoltaic cells, sharing the advancements with other companies to accelerate the technology. Decades ago, NASA developed filtration systems that utilized iodine and cartridge filters to ensure that astronauts had access to safe, tasteless water; this filtering technology is now standard. A fibrous material developed for spacecraft parachute shrouds is stronger than steel and adds thousands of miles of life to radial tires.
This interested me. Astronomers in Baltimore (led by Christine Chen) have managed to image what they believe to be a Kuiper Belt around another star. HD 181327 in the constellation of Pictor is a young Sun-like star 165 light years away, and you have to admit, this image makes some compelling evidence. While I like to theorise that all stars are likely to exhibit similar features to the Sun (like planets, starspots, asteroid belts…) it’s nice to know that these things are actually being confirmed by observations! Stars, as I’ve said before, are dusty things. Lots of stars are swathed in a circle of dust. It’s actually quite easy to spot; the dust is heated by the star and shines brightly in infrared. Better than that, you can tell a lot about what’s there by the wavelength of the light given off. More importantly, you can look at the light that isn’t given off. The green blotches on that image are where water molecules have absorbed light in the infrared at around 63 microns. That far from the star, that water would almost definitely be ice, encrusted into comets and other objects. Impressively, this team have used a lot of scopes. Hubble, Gemini South and Spitzer were used to confirm this! Three of the big ones… Evidently, their proposal writing skills are formidable. They also suggest that this icy dust needs to be replenished, and that perhaps that big blotch to the left of the star is the remnant of a recent collision, causing all of that ice. Those green clouds do seem to be trailing the blotch. It might be wild speculation, but… Personally, I quite like the idea that all of that dust might be gravitationally bound to a planet, leaving a trail in its wake. The trailing cloud does seem like it could be being swept up at a Lagrangian L5 point (like the asteroids that share Jupiter’s orbit). Could an ice giant like Neptune do that? Those ice clouds are around 90 AU from the central star, so around three times as far out as Neptune.
Though it is vaguely reminiscent of the sub-nebula in orbit around AB Aurigae. But there’s no clearing. Mind you, that could be because these ice clouds are being swept up but not actually accreted. I don’t know. Exoplanets aren’t exactly my specialty, even if dust is. But it’s an interesting idea. The paper, sans any flagrant conjecture on my part, is available on arXiv: [Chen et al. (2008)]
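The geometry in the post is easy to sanity-check with Kepler's third law, assuming HD 181327 has roughly one solar mass (the post only calls it Sun-like, so treat this as a back-of-envelope estimate):

```python
# Kepler's third law in convenient units (T^2 = a^3 for T in years, a in AU,
# around a 1-solar-mass star): orbital period of the dust ring at 90 AU,
# and the ratio to Neptune's orbit.
a_dust = 90.0       # dust ring semi-major axis, AU (from the post)
a_neptune = 30.06   # Neptune's semi-major axis, AU

period = lambda a: a ** 1.5   # orbital period in years

print(f"dust ring: {period(a_dust):.0f} yr, Neptune: {period(a_neptune):.0f} yr")
print(f"distance ratio: {a_dust / a_neptune:.1f}x")
```

Anything shepherding dust out there would take roughly 850 years per orbit, so any trailing-cloud structure would evolve very slowly on human timescales.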
Ten years ago, a rocket slammed into the moon. The impact sent a plume of lunar material from the moon’s south pole flying out into space. For a few minutes, the spacecraft that had unleashed the rocket coasted through the mist, its instruments absorbing as much data as they could. Amid the molecules of methane, ammonia, carbon dioxide, and other compounds, the spacecraft detected something wonderfully familiar: water. Not liquid water, but grains of water ice. The discovery helped reshape our understanding of Earth’s satellite. Though scientists had long believed that the moon was quite dry, they had begun to harbor suspicions that water might lurk somewhere in its shadowy regions. The excavated material showed them they were right to wonder. It wasn’t much, but it was enough to suggest there was a lot more. This is where NASA wants to go next, to craters along the moon’s south pole untouched by sunlight. Jim Bridenstine, the agency’s administrator, brings up water ice almost every time he talks about the Artemis program, the Trump administration’s effort to return astronauts to the moon in the next five years. The hope is that future spacefarers could mine the ice as a resource for their moon bases. “We know that there’s hundreds of millions of tons of water ice on the surface of the moon,” Bridenstine says. Sometimes he says there’s hundreds of billions of tons. But Bridenstine doesn’t know that—not for sure. No one does. Out in the cosmos, water is actually everywhere, usually in the form of ice. Its signature has been found beneath the surface of Mars, in the atmospheres of exoplanets, and inside dusty interstellar clouds. It is not strange to find water beyond Earth, though our planet, cloaked in oceans, certainly wears it best. The moon had a dry reputation from the very beginning. According to the leading theory, it formed from the debris of a collision between Earth and a Mars-size object about 4.5 billion years ago. 
The impact was so fiery that scientists suspected that any water, whether it came from Earth or the mystery object, would have boiled away for good. But in the late 1990s, an orbiting spacecraft flew over the poles and detected an abundance of hydrogen, which, when combined with oxygen, forms water. The hydrogen felt like a bread crumb beckoning scientists to follow. The moon’s axis has a very small tilt, which means sunlight never reaches some polar regions. The cold and dark conditions, some scientists predicted, could protect any water ice from the sun’s destructive glare. Scientists tested this theory, a decade later, with force. The Lunar Crater Observation and Sensing Satellite (LCROSS) arrived at the moon in 2009 with a rocket booster, now empty, that had helped launch it into space. Hurled down to the surface, the projectile exhumed those grains of pure water ice, hidden in darkness for perhaps billions of years, and lofted them into the light. Other experiments around that time provided more evidence of lunar water. Scientists detected hints of water in Apollo samples of volcanic glass, a remnant of the moon’s fiery beginnings. One astronomer, poring over images of the south pole, noticed that some spots on the surface shared similarities with minerals that require water to form. Even Cassini, a mission bound for Saturn, caught something. Cassini had turned toward the moon—a nice dry target—to calibrate its instruments on its way out and picked up some contamination that later turned out to be a signal for water. It was becoming very difficult to ignore the new story of the moon. Today, scientists believe the moon harbors water inside and out. There is likely an ancient reservoir deep in its interior, and when the moon was young and molten, water escaped through volcanoes, froze in the vacuum of space, and rained down on the surface in beads of glass, the kind that astronauts later collected. 
The glass is found beyond the moon’s polar regions, but the trapped water is difficult to extract, says Anthony Colaprete, a planetary scientist at NASA’s Ames Research Center who led the LCROSS mission. “The glass needs to be heated to very high temperatures to drive the water out,” Colaprete says. The surface, even the parts that receive sunlight, is sprinkled with traces of water. This water arises when charged hydrogen atoms from the sun strike the lunar regolith, breaking oxygen bonds in the soil and joining with the freed oxygen atoms to produce water. The final product is only a few molecules deep, though—no use for thirsty astronauts. The stuff that future astronauts really want lies in the shadows, inside craters where the sun never shines. Like Earth, the moon was probably bombarded with water-bearing asteroids and comets in its early days. Without an atmosphere to disintegrate them, the objects smashed into bits on the surface. Particles of water ice, newly exposed, scattered. The ice that drifted into darkened craters, or even small shadows cast by boulders, survived. “Once a molecule got into one of these areas, it could never get out,” says Carle Pieters, a planetary scientist at Brown University who oversaw a mineralogy instrument on an Indian robotic mission to the moon in 2008. NASA and commercial space companies have set their sights on these mysterious, enduring reservoirs at the south pole, and they have based their estimates of the supply on the past decade of exploration. Bridenstine’s estimate, according to his office, likely comes from NASA’s chief scientist, Jim Green. Several scientists tell me the estimate is not outlandish, but they stress that it is merely a range. “If you take 10 scientists in a room, you get multiple answers,” says Thomas Zurbuchen, the NASA associate administrator who leads the agency’s science programs. So what if astronauts were on the moon right now, and they could rappel into one of these craters and see the water ice for themselves? 
They would probably find something like “a dirty snowbank,” says Lindy Elkins-Tanton, a planetary scientist at Arizona State University. “It’s going to be a mess,” she says. “It’s going to be water and sulfur oxides and ammonia, and it’s going to be shards of rock and glass from the moon, and then it’s going to be a lot of organic materials.” Astronauts would have to figure out how to extract lunar dust, metals, and other materials that could be harmful if consumed. “There’s been a little bit of an assumption that we can use the water-purification systems that we use here on Earth,” Elkins-Tanton says. “The water we’re going to find on the moon is not like any water we’ve ever had to process on Earth.” To pin down the nature of water ice on the moon, scientists need fresh data from new missions. NASA wants to send landers and rovers to probe the silvery terrain for answers. But even the most advanced rovers could miss signs of water, says Rick Elphic, a planetary scientist at Ames who is working on science instruments for the moon program. Spacecraft observations have shown that while total darkness provides the right conditions for water ice, it doesn’t guarantee the presence of water ice. And direct hits from meteors can scramble the shadowed regions that do have ice. “As time goes by, impacts and micrometeoroid particles would continually erode, remove, bury, and redistribute the icy material,” Elphic says. “Occasional impacts would act like a hole punch, removing areas of ice-bearing material.” The cosmic hole-punching, over the course of hundreds of millions of years, could mold a landscape of icy islands, with barren rock in between. “These ice-bearing islands might be few and far between, and very irregularly shaped,” Elphic says. A NASA rover could drive for miles and not detect a single crystal. In a world where funding didn’t matter and physics cooperated, scientists would send machines crawling all over the lunar surface, from the equator to the poles. 
But moon missions are difficult and expensive, even without any people on board. Just last month, an Indian spacecraft designed to explore water ice near the south pole stopped transmitting right before touchdown, and hasn’t been heard from since. For now, scientists must contend with what they have: hints and traces, the decade-old data from a plume of disentombed particles. Until rovers—or people—start roaming the bottom of the moon with drills and chisels, the basins of water ice that Bridenstine preaches about remain in the realm of daydreams.
2.4.2 Routine Measurement Orbit

A routine SCIAMACHY orbit starts above the northern hemisphere with an observation of the rising sun. In order to also acquire light from the sparsely illuminated atmosphere at the limb in the direction of the rising sun, a sequence of limb measurements precedes each sun occultation measurement. Once the sun has risen, it is tracked by the ESM for the complete pass through the SO&C window. After about 175 sec the sun leaves the limb TCFoV at the upper edge. In order to fully exploit the high spatial resolution during occultation, measurement data must be read out at a high rate in the SO&C window. Until the passage of the sub-solar point, a series of matching limb/nadir observations is executed. At the sub-solar point, generally close to the descending node crossing, the sun reaches its highest elevation relative to ENVISAT. Whether a sub-solar measurement is actually executed depends on whether a sub-solar calibration opportunity has been assigned by ENVISAT. Because the Ka-band antenna in its operational position vignettes the sub-solar TCFoV, only 3 orbits per day offer sub-solar opportunities, of which nominally one has to be selected. Another sequence of matching limb/nadir measurements follows. Above the southern hemisphere, the moon becomes visible during the monthly moon visibility period; otherwise matching limb/nadir observations continue. The rising moon is observed similarly to the rising sun, from bottom to top of the limb TCFoV. A series of limb/nadir observations concludes the illuminated part of the SCIAMACHY orbit. Because the instrument still views sunlight in the nadir direction after the projected ground-track in the flight direction has already seen sunset, the final measurements in this phase are only of the nadir type.
When ENVISAT enters the eclipsed part of the orbit, dedicated eclipse observations can be executed until SCIAMACHY moves towards another sunrise and the orbit sequence starts again (fig. 2-15).
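The measurement sequence described above can be summarized as an ordered list of phases. A sketch, with informal phase labels of my own (these are not SCIAMACHY command mnemonics):

```python
# Illustrative ordering of the routine orbit sequence described in the text.
ROUTINE_ORBIT = [
    ("limb",       "pre-occultation limb scans toward the rising sun"),
    ("sun_occ",    "sun occultation, tracked through the SO&C window (~175 s)"),
    ("limb_nadir", "matching limb/nadir pairs until the sub-solar point"),
    ("sub_solar",  "optional calibration (3 opportunities/day, nominally 1 used)"),
    ("limb_nadir", "matching limb/nadir pairs over the southern hemisphere"),
    ("moon_occ",   "moon occultation during the monthly visibility period"),
    ("nadir",      "nadir-only scans while only the nadir view still sees sunlight"),
    ("eclipse",    "dedicated eclipse observations until the next sunrise"),
]

for phase, description in ROUTINE_ORBIT:
    print(f"{phase:>10}: {description}")
```

The sequence then wraps around: the eclipse phase ends at the next sunrise, restarting the cycle.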
64,000 mph asteroid was fastest on record

At about 14:51 GMT on April 22, 2012, a fireball was seen throughout the western United States, accompanied by a loud booming sound heard over much of California's Sierra Nevada mountains around Lake Tahoe. Scientists have now carried out a thorough analysis of the meteorite and found that it was the fastest meteor ever recorded at 28.6 km/s (64,000 mph). The Sutter's Mill meteorite (now its official name) was by some definitions a small asteroid, roughly 2.5 to 4 meters (8-13 ft) in size and weighing in the neighborhood of 40 metric tons (88,000 lb). The fireball was described as green, lasting 5 seconds or so, and had a luminosity of magnitude -18 to -20, meaning that it appeared midway in brightness between the Sun and the Moon. It was reported by observers to be bright enough to dazzle the eye, and subsequent analysis has found that it was made up of a rare type of carbonaceous chondrite seldom seen before. "It sounded like a sonic boom but longer,” said Alan Ehrgott, who lives in the Sutter's Mill area. “It seemed to last 45 seconds. It stopped me in my tracks." As a result of its enormous velocity, the asteroid entered the atmosphere with a kinetic energy of about 4 kilotons (compared to the Hiroshima bomb's 12.5 kilotons). The detonation occurred at an altitude of 47.6 km (30 miles), some 30 km (18 miles) north of Merced, CA – a largely uninhabited area about 130 km (80 miles) south of Coloma, where Sutter's Mill is located. A massive recovery effort has resulted in finding 0.9 kg (2.0 lb) of meteorites related to the Sutter's Mill asteroid. The recovery effort was greatly assisted by records of the fall as seen by several weather radars, which allowed analysis of the asteroid's trajectory and also directly detected a fall of stones near Coloma.
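The quoted energy follows directly from the mass and speed given above; a quick check of the arithmetic:

```python
# Sanity check of the article's figures: kinetic energy of a ~40 t body
# at 28.6 km/s, expressed in kilotons of TNT (1 kt = 4.184e12 J).
mass_kg = 40_000.0      # ~40 metric tons
speed_ms = 28_600.0     # 28.6 km/s

KT_TNT_J = 4.184e12
kinetic_j = 0.5 * mass_kg * speed_ms ** 2

print(f"{kinetic_j / KT_TNT_J:.1f} kt TNT")  # close to the quoted ~4 kt
```

The result is about 3.9 kt, consistent with the "about 4 kilotons" in the text.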
Three fragments of the meteorite were found just two days after impact – a fortunate circumstance, as quantities of highly soluble minerals were found in the recovered material and a heavy rain on April 27 would probably have degraded these valuable minerals. The recovered fragments had a dark, crumbly texture showing that the asteroid was a primitive carbonaceous chondrite – stony meteorites that contain significant quantities of carbon-containing materials. In particular, it is a CM chondrite, characterized by large amounts of water (~10 percent) and the presence of multiple amino acids, although in smaller quantities than in some carbonaceous chondrites. These amino acids are made up of both right- and left-handed varieties, indicating that they are not the result of terrestrial contamination (amino acids produced by living organisms are entirely left-handed). This indicates that it is a remnant from the formation of the solar system, virtually unaltered since that time (4.5 billion years ago). The analysis showed that the asteroid had been subjected to intense solar radiation and cosmic rays for no more than about 100,000 years, so it was probably a recent fragment from a collision in the asteroid belt. "The meteorite was a jumbled mess of rocks, called a regolith breccia, that originated from near the surface of a primitive asteroid," said NASA's Derek Sears. Some pieces had experienced heating up to 300º C (570º F), a level thought unlikely to be due to the flash of heat they experienced in the atmosphere. The asteroid contained an unusual mineral called oldhamite (calcium sulfide), usually associated with another form of chondrite. This is being taken as an indication that "primitive and highly evolved asteroids collided with each other even at early times when the debris accumulated that now makes the meteorite matrix," according to Mike Zolensky, a mineralogist at NASA’s Johnson Space Center.
The various pictures, videos, and radar tracks of the Sutter's Mill asteroid allowed NASA scientists to determine the orbit it followed before striking the Earth. That analysis showed it had an orbit reaching nearly out to Jupiter. It came from the asteroid belt, probably thrown onto its Earth-crossing orbit by Jupiter. That orbit was in a three-to-one resonance with Jupiter, meaning that the asteroid orbited the sun three times for each revolution of Jupiter.
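The three-to-one resonance pins down the asteroid's semi-major axis via Kepler's third law; a short check of what that implies:

```python
# An asteroid completing three orbits per Jupiter orbit has period T_J / 3;
# Kepler's third law (a = T^(2/3) for T in years, a in AU around the Sun)
# then gives its semi-major axis.
T_JUPITER = 11.86               # Jupiter's orbital period, years

T_ast = T_JUPITER / 3.0         # ~3.95 yr
a_ast = T_ast ** (2.0 / 3.0)    # semi-major axis, AU

print(f"period {T_ast:.2f} yr, semi-major axis {a_ast:.2f} AU")
```

The result, about 2.5 AU, is the location of the 3:1 Kirkwood gap in the asteroid belt; with a highly eccentric orbit, the aphelion can still reach nearly out to Jupiter, consistent with the article.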
In 2018, millions of people around the world caught glimpses of the planet Mars, discernible as a bright red dot in the summer’s night skies. Every 26 months or so, the red planet reaches a point in its elliptical orbit closest to Earth, setting the stage for exceptional visibility. This proximity also serves as an excellent opportunity for launching exploratory Mars missions, the next of which will occur in 2020 when a global suite of rovers will take off from Earth. The red planet was hiding behind the overcast, drizzling Boston sky on Oct. 11, when Mars expert John Grotzinger gave audiences a different perspective, taking them through an exploration of Mars' geologic history. Grotzinger, the Fletcher Jones Professor of Geology at Caltech and a former professor in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS), also used the eighth annual John Carlson Lecture to talk to the audience gathered at the New England Aquarium about the ongoing search for life on Mars. Specializing in sedimentology and geobiology, Grotzinger has made significant contributions to understanding the early environmental history of the Earth and Mars and their habitability. In addition to involvement with the Mars Exploration Rover (MER) mission and the High Resolution Imaging Science Experiment (HiRISE) onboard the Mars Reconnaissance Orbiter (MRO), Grotzinger served as project scientist of the Mars Science Laboratory mission, which operates the Curiosity roving laboratory. Curiosity explores the rocks, soils, and air of the Gale Crater to find out whether Mars ever hosted an environment that was habitable for microbial life during its nearly 4.6-billion-year history. “What I’d like to do is give you a very broad perspective of how we as scientists go about exploring a planet like Mars, with the rather audacious hypothesis that there could have been once life there,” he told the audience. 
“This is a classic mission of exploration where a team of scientists heads out into the unknown.” “Simple one-celled microorganisms we know have existed on Earth for the last three-and-a-half billion years — a long time. They originated, they adapted, they evolved, and they didn’t change very much until you had the emergence of animals just 500 million years ago,” Grotzinger said. “For basically 3 billion years, the planet was pretty much alone with microbes. So, the question is: Could Mars have done something similar?” Part of the research concerning whether or not Mars ever hosted ancient life involves identifying the environmental characteristics necessary for the survival of living organisms, including liquid water. Currently, the thin atmosphere around Mars prevents the accumulation of a standing body of water, but that may not always have been the case. Topographic features documented by orbiters and landers suggest the presence of ancient river channels, deltas and possibly even an ocean on Mars, “just like we see on Earth,” Grotzinger said. “This tells us that, at least, for some brief period of time if you want to be conservative, or maybe a long period of time, water was there [and] the atmosphere was denser. This is a good thing for life.” To describe how scientists search for evidence of past habitability on Mars, Grotzinger told the story of stratigraphy — a discipline within geology that focuses on the sequential deposition and layering of sediments and igneous rocks. The changes that occur layer-to-layer indicate shifts in the environmental conditions under which different layers were deposited. In that manner, interpreting stratigraphic records is simple, he said. “It’s like reading a book. You start at the bottom and you get to the first chapter, and you get to the top and you get to the last chapter,” Grotzinger said. 
“Sedimentary rocks are records of environmental change … what we want to do is explore this record on Mars.” While Grotzinger and Curiosity both continue their explorations of Mars, scientists from around the world are working on pinpointing new landing sites for future Mars rovers, which will expand the search for ancient life. This past summer, the SAM (Sample Analysis at Mars) instrument aboard the Curiosity rover detected evidence of complex organic matter in Gale Crater, a discovery which further supports the notion that Mars may have been habitable once. “We know that Earth teems with life and we have enough of a fossil record to know that it’s been that way since we get to the oldest, well-preserved rocks on Earth. But yet, when you go to those rocks, you almost never find evidence of life,” Grotzinger said, leaving space for hope. “And that’s because, in the conversion of the sedimentary environment to the rock, there are enough mineralogic processes that are going on that the record of life gets erased. And so, I think we’re going to have to try over and over again.” Following the lecture, members and friends of EAPS attended a reception in the main aquarium that featured some of the research currently taking place in the department. Posters and demonstrations were arranged around the aquarium’s cylindrical 200,000-gallon tank simulating a Caribbean coral reef, and attendees were able to chat with presenters and admire aquatic life while learning about current EAPS projects. 
EAPS graduate student, postdoc, and research scientist presenters included Tyler Mackey, Andrew Cummings, Marjorie Cantine, Athena Eyster, Adam Jost, and Julia Wilcots from the Bergmann group; Kelsey Moore and Lily Momper from the Bosak group; Eric Beaucé, Ekaterina Bolotskaya, and Eva Golos from the Morgan group; Jonathan Lauderdale and Deepa Rao from the Follows group; Sam Levang from the Flierl group; Joanna Millstein and Kasturi Shah from the Minchew group; and Ainara Sistiaga, Jorsua Herrera, and Angel Mojarro from the Summons group. The John H. Carlson Lecture series communicates exciting new results in climate science to general audiences. Free of charge and open to the general public, the annual lecture is made possible by a generous gift from MIT alumnus John H. Carlson to the Lorenz Center in the Department of Earth, Atmospheric and Planetary Sciences. Anyone interested in joining the invitation list for next year’s Carlson Lecture is encouraged to contact Angela Ellis.
Astronomers recently announced that the nearby star Proxima Centauri hosts an Earth-sized planet in its habitable zone. Proxima Centauri is a small, cool, red dwarf star only one-tenth as massive and one-thousandth as luminous as the Sun. However, new research shows that it is Sun-like in one surprising way: it has a regular cycle of starspots. Like cosmic ballet dancers, the stars of the Pleiades cluster are spinning, but all at different speeds. By watching these stellar dancers, NASA’s Kepler space telescope has helped amass the most complete catalogue of rotation periods for stars in a cluster. This information can provide insight into where and how planets form around these stars, and how such stars evolve. Astronomers searching for the galaxy’s youngest planets have found compelling evidence for one unlike any other, a newborn “hot Jupiter” whose outer layers are being torn away by the star it orbits every 11 hours. Dubbed “PTFO8-8695 b,” the suspected planet orbits a star about 1,100 light-years from Earth and is at most twice the mass of Jupiter. Contradicting the long-standing idea that large Jupiter-mass planets take a minimum of 10 million years to form, astronomers have just announced the discovery of a giant planet in close orbit around a 2 million-year-old star that still retains a disc of circumstellar gas and dust. CI Tau b is at least eight times larger than Jupiter and 450 light-years from Earth. Astrophysicists from Germany and America have for the first time measured the rotation periods of stars in a cluster nearly as old as the Sun. It turns out that these stars spin once in about twenty-six days — just like our Sun. This discovery significantly strengthens what is known as the solar-stellar connection, a fundamental principle that guides much of modern solar and stellar astrophysics. 
Astronomers have used interferometry to create a time-lapse of the nearby star zeta Andromedae over one of its 18-day rotations that show starspots — sunspots outside our solar system. The pattern of spots on the star is very different from their typical arrangement on our Sun, challenging current theories of how stars’ magnetic fields influence their evolution. Nearly four billion years ago, life arose on Earth. Life appeared because our planet had a rocky surface, liquid water, a blanketing atmosphere and a protective magnetic field. A new study of the young, Sun-like star Kappa Ceti shows that a magnetic field plays a key role in making a planet conducive to life.
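Rotation periods like the ones above are read off photometry: a starspot rotating in and out of view modulates the star's brightness at the rotation period, which shows up as the dominant peak in a periodogram. A toy sketch with synthetic, evenly sampled data (real Kepler light curves have gaps and would use a Lomb-Scargle periodogram instead of a plain FFT):

```python
import numpy as np

# Synthetic light curve: a ~1% spot modulation at a 26-day rotation period
# (like the Sun-age cluster stars above), plus photometric noise.
rng = np.random.default_rng(0)
dt = 0.5                                      # days between samples
t = np.arange(0.0, 400.0, dt)
true_period = 26.0
flux = 1.0 + 0.01 * np.sin(2 * np.pi * t / true_period)
flux = flux + 0.002 * rng.standard_normal(t.size)

# Periodogram via FFT (even sampling assumed). The strongest nonzero
# frequency corresponds to the rotation period.
power = np.abs(np.fft.rfft(flux - flux.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=dt)
best = freqs[1:][np.argmax(power[1:])]        # skip the zero-frequency bin
print(f"recovered period: {1.0 / best:.1f} days")
```

The frequency resolution is set by the baseline (400 days here), so the recovered period lands on the nearest frequency bin to the true 26 days.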
CERN congratulates James Peebles, Michel Mayor and Didier Queloz on the award of the Nobel Prize in physics “for contributions to our understanding of the evolution of the universe and Earth’s place in the cosmos”. Peebles receives the prize “for theoretical discoveries in physical cosmology” and Mayor and Queloz are recognised “for the discovery of an exoplanet orbiting a solar-type star”. Cosmology studies the universe’s origin, structure and ultimate fate. Peebles’ theoretical framework of cosmology, developed since the mid-1960s, is the foundation of our knowledge of the cosmos today. Thanks to his seminal theoretical work, physicists now have a model that can describe the universe from its earliest moments to the present day, and into the distant future. Meanwhile, Mayor and Queloz have explored our cosmic neighbourhood and announced in 1995 the first discovery of an exoplanet – a planet outside our Solar System – orbiting a solar-type star in the Milky Way. The discovery of this exoplanet, dubbed 51 Pegasi b, was a milestone in astronomy and has since led to the discovery of more than 4000 exoplanets in our galaxy. Particle physics, like cosmology and astronomy, seeks to understand what the universe is made of and how it works. But instead of telescopes on the ground and in space, CERN uses particle accelerators to probe the building blocks of the universe. Although cosmological observations indicate that the universe is mostly made of dark energy and dark matter, in addition to a small amount of ordinary matter, physicists have yet to find out the particular nature of these two dark constituents. Experiments at CERN are trying to hunt down new, unknown particles that could make up dark matter and shed light on the observed evolution of the universe. For example, the CMS and NA64 experiments have recently reported new results on searches for dark photons, and the ATLAS experiment on a search for light supersymmetric particles. 
At the same time, theoretical physicists at CERN help guide these experiments and prompt new ones with their research into high-energy physics, cosmology and related fields. “The properties of dark matter, the need for Einstein’s cosmological constant, and the existence of dark energy still pose considerable empirical problems,” explained Peebles in a CERN interview in 2016. “I think that the new generation of experiments will drive us to a deeper understanding and a reconsideration of our previous ideas about these topics and I am looking forward to that.”
In nuclear physics and particle physics, the strong interaction is the mechanism responsible for the strong nuclear force, and is one of the four known fundamental interactions, with the others being electromagnetism, the weak interaction, and gravitation. At the range of 10⁻¹⁵ m (1 femtometer), the strong force is approximately 137 times as strong as electromagnetism, a million times as strong as the weak interaction, and 10³⁸ times as strong as gravitation. The strong nuclear force holds most ordinary matter together because it confines quarks into hadron particles such as the proton and neutron. In addition, the strong force binds these neutrons and protons to create atomic nuclei. Most of the mass of a common proton or neutron is the result of the strong force field energy; the individual quarks provide only about 1% of the mass of a proton. The strong interaction is observable at two ranges and mediated by two force carriers. On a larger scale (about 1 to 3 fm), it is the force (carried by mesons) that binds protons and neutrons (nucleons) together to form the nucleus of an atom. On the smaller scale (less than about 0.8 fm, the radius of a nucleon), it is the force (carried by gluons) that holds quarks together to form protons, neutrons, and other hadron particles. In the latter context, it is often known as the color force. The strong force inherently has such a high strength that hadrons bound by the strong force can produce new massive particles. Thus, if hadrons are struck by high-energy particles, they give rise to new hadrons instead of emitting freely moving radiation (gluons). This property of the strong force is called color confinement, and it prevents the free "emission" of the strong force: instead, in practice, jets of massive particles are produced.
In the context of atomic nuclei, the same strong interaction force (that binds quarks within a nucleon) also binds protons and neutrons together to form a nucleus. In this capacity it is called the nuclear force (or residual strong force). So the residuum from the strong interaction within protons and neutrons also binds nuclei together. As such, the residual strong interaction obeys a distance-dependent behavior between nucleons that is quite different from that when it is acting to bind quarks within nucleons. Additionally, the nuclear force binding energies involved in nuclear fusion differ from those involved in nuclear fission. Nuclear fusion accounts for most energy production in the Sun and other stars. Nuclear fission allows for decay of radioactive elements and isotopes, although it is often mediated by the weak interaction. Artificially, the energy associated with the nuclear force is partially released in nuclear power and nuclear weapons, both in uranium or plutonium-based fission weapons and in fusion weapons like the hydrogen bomb. The strong interaction is mediated by the exchange of massless particles called gluons that act between quarks, antiquarks, and other gluons. Gluons are thought to interact with quarks and other gluons by way of a type of charge called color charge. Color charge is analogous to electromagnetic charge, but it comes in three types (±red, ±green, ±blue) rather than one, which results in a different type of force, with different rules of behavior. These rules are detailed in the theory of quantum chromodynamics (QCD), which is the theory of quark-gluon interactions. Before the 1970s, physicists were uncertain as to how the atomic nucleus was bound together. It was known that the nucleus was composed of protons and neutrons and that protons possessed positive electric charge, while neutrons were electrically neutral.
By the understanding of physics at that time, positive charges would repel one another and the positively charged protons should cause the nucleus to fly apart. However, this was never observed. New physics was needed to explain this phenomenon. A stronger attractive force was postulated to explain how the atomic nucleus was bound despite the protons' mutual electromagnetic repulsion. This hypothesized force was called the strong force, which was believed to be a fundamental force that acted on the protons and neutrons that make up the nucleus. It was later discovered that protons and neutrons were not fundamental particles, but were made up of constituent particles called quarks. The strong attraction between nucleons was the side-effect of a more fundamental force that bound the quarks together into protons and neutrons. The theory of quantum chromodynamics explains that quarks carry what is called a color charge, although it has no relation to visible color. Quarks with unlike color charge attract one another as a result of the strong interaction, and the particle that mediates this is called the gluon.

Behavior of the strong force

The word strong is used since the strong interaction is the "strongest" of the four fundamental forces. At a distance of 1 femtometer (1 fm = 10⁻¹⁵ meters) or less, its strength is around 137 times that of the electromagnetic force, some 10⁶ times as great as that of the weak force, and about 10³⁸ times that of gravitation. The strong force is described by quantum chromodynamics (QCD), a part of the standard model of particle physics. Mathematically, QCD is a non-Abelian gauge theory based on a local (gauge) symmetry group called SU(3). The force carrier particle of the strong interaction is the gluon, a massless boson. Unlike the photon in electromagnetism, which is neutral, the gluon carries a color charge.
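The "137 times" figure has a concrete origin: the electromagnetic coupling strength is the fine-structure constant α ≈ 1/137, while the strong coupling at femtometer scales is of order 1, so the ratio of strengths is roughly 1/α. A minimal sketch (constants are CODATA values; taking the strong coupling to be exactly 1 is a deliberate order-of-magnitude simplification, not a value from the text):

```python
import math

# CODATA values, SI units
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

# Fine-structure constant: alpha = e^2 / (4*pi*eps0*hbar*c)
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha = 1/{1/alpha:.3f}")  # 1/137.036

# With the strong coupling taken as ~1 at ~1 fm (a rough
# simplification), the strong/electromagnetic ratio is ~1/alpha.
ratio = 1.0 / alpha
print(f"strong/EM ratio ~ {ratio:.0f}")  # ~137
```

This is why "137" recurs in both the opening paragraph and the strength comparison here: it is simply the inverse fine-structure constant.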
Quarks and gluons are the only fundamental particles that carry non-vanishing color charge, and hence they participate in strong interactions only with each other. The strong force is the expression of the gluon interaction with other quark and gluon particles. All quarks and gluons in QCD interact with each other through the strong force. The strength of interaction is parameterized by the strong coupling constant. This strength is modified by the gauge color charge of the particle, a group theoretical property. The strong force acts between quarks. Unlike all other forces (electromagnetic, weak, and gravitational), the strong force does not diminish in strength with increasing distance between pairs of quarks. After a limiting distance (about the size of a hadron) has been reached, it remains at a strength of about 10,000 newtons (N), no matter how much farther the distance between the quarks. As the separation between the quarks grows, the energy added to the pair creates new pairs of matching quarks between the original two; hence it is impossible to create separate quarks. The explanation is that the amount of work done against a force of 10,000 newtons is enough to create particle-antiparticle pairs within a very short distance of that interaction. The very energy added to the system required to pull two quarks apart would create a pair of new quarks that will pair up with the original ones. In QCD, this phenomenon is called color confinement; as a result only hadrons, not individual free quarks, can be observed. The failure of all experiments that have searched for free quarks is considered to be evidence of this phenomenon. The elementary quark and gluon particles involved in a high energy collision are not directly observable. The interaction produces jets of newly created hadrons that are observable. 
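The pair-creation argument above can be checked with back-of-the-envelope arithmetic: work equals force times distance, and a constant ~10,000 N acting over femtometer scales supplies energies comparable to light-hadron rest masses. The 2 fm stretch below is an illustrative choice, not a value from the text:

```python
# Work done pulling a quark pair apart against the roughly constant
# ~10,000 N confining force quoted in the text.
F = 1.0e4             # confining force, N
d = 2.0e-15           # illustrative separation increase, m (2 fm)
J_PER_EV = 1.602176634e-19

work_J = F * d                       # joules
work_MeV = work_J / J_PER_EV / 1e6   # mega-electron-volts
print(f"work over 2 fm ~ {work_MeV:.0f} MeV")

# The neutral pion (a light quark-antiquark pair) has a rest energy
# of ~135 MeV, so this stretch already supplies roughly enough energy
# to create a new pair, which is why free quarks are never seen.
```

The result, on the order of 100 MeV over a couple of femtometers, is exactly the scale of the lightest quark-antiquark bound states.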
Those hadrons are created, as a manifestation of mass-energy equivalence, when sufficient energy is deposited into a quark-quark bond, as when a quark in one proton is struck by a very fast quark of another impacting proton during a particle accelerator experiment. However, quark–gluon plasmas have been observed.

Residual strong force

It is not the case that every quark in the universe attracts every other quark in the above distance-independent manner. Color confinement implies that the strong force acts without distance-diminishment only between pairs of quarks, and that in collections of bound quarks (hadrons), the net color-charge of the quarks essentially cancels out, resulting in a limit of the action of the forces. Collections of quarks (hadrons) therefore appear nearly without color-charge, and the strong force is therefore nearly absent between those hadrons. However, the cancellation is not quite perfect, and a residual force (described below) remains. This residual force does diminish rapidly with distance, and is thus very short-range (effectively a few femtometers). It manifests as a force between the "colorless" hadrons, and is sometimes known as the strong nuclear force or simply nuclear force. The nuclear force acts between hadrons, known as mesons and baryons. This "residual strong force", acting indirectly, transmits gluons that form part of the virtual π and ρ mesons, which, in turn, transmit the force between nucleons that holds the nucleus (beyond protium) together. The residual strong force is thus a minor residuum of the strong force that binds quarks together into protons and neutrons. This same force is much weaker between neutrons and protons, because it is mostly neutralized within them, in the same way that electromagnetic forces between neutral atoms (van der Waals forces) are much weaker than the electromagnetic forces that hold electrons in association with the nucleus, forming the atoms.
Unlike the strong force itself, the residual strong force does diminish in strength, and it in fact diminishes rapidly with distance. The decrease is approximately as a negative exponential power of distance, though there is no simple expression known for this; see Yukawa potential. The rapid decrease with distance of the attractive residual force and the less-rapid decrease of the repulsive electromagnetic force acting between protons within a nucleus cause the instability of larger atomic nuclei, such as all those with atomic numbers larger than 82 (the element lead). Although the nuclear force is weaker than the strong interaction itself, it is still highly energetic: transitions produce gamma rays. The mass of nuclei is significantly different from the masses of the individual nucleons. This mass defect is due to the potential energy associated with the nuclear force. Differences between mass defects power nuclear fusion and nuclear fission. The so-called Grand Unified Theories (GUT) aim to describe the strong interaction and the electroweak interaction as aspects of a single force, similarly to how the electromagnetic and weak interactions were unified by the Glashow–Weinberg–Salam model into the electroweak interaction. The strong interaction has a property called asymptotic freedom, wherein the strength of the strong force diminishes at higher energies (or temperatures). The theorized energy where its strength becomes equal to the electroweak interaction is the grand unification energy. However, no Grand Unified Theory has yet been successfully formulated to describe this process, and Grand Unification remains an unsolved problem in physics. If GUT is correct, after the Big Bang and during the electroweak epoch of the universe, the electroweak force separated from the strong force. Accordingly, a grand unification epoch is hypothesized to have existed prior to this.
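The Yukawa potential mentioned above makes the short range concrete: it has the form V(r) ∝ −e^(−r/λ)/r, where the range λ is set by the mass of the exchanged meson (for the pion, the reduced Compton wavelength is about 1.4 fm). A small sketch with an arbitrary overall coupling (the coupling value is a placeholder, not a physical fit):

```python
import math

# Range parameter: the pion's reduced Compton wavelength, ~1.4 fm.
LAMBDA_FM = 1.4

def yukawa(r_fm, g2=1.0):
    """Magnitude of the Yukawa potential, g2 * exp(-r/lambda) / r.
    g2 is a placeholder coupling; units are arbitrary."""
    return g2 * math.exp(-r_fm / LAMBDA_FM) / r_fm

# The exponential factor makes the residual force die off far faster
# than a bare 1/r (Coulomb-like) tail:
for r in (1.0, 2.0, 3.0, 5.0):
    rel = yukawa(r) / yukawa(1.0)
    print(f"r = {r:.0f} fm: strength relative to 1 fm = {rel:.4f}")
```

By 5 fm the potential has fallen to about one percent of its 1 fm value, which is why nuclei hold together at femtometer scales while neighboring atoms feel essentially no nuclear force at all.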
Our solar system’s shadowy ninth (dwarf) planet was the subject of furious speculation and a frantic search for almost a century before it was finally discovered by Clyde Tombaugh in 1930. And remarkably, Pluto’s reality was deduced using a heady array of reasoning, observation and no small amount of imagination. The 18th and 19th centuries were thick with astronomical discoveries; not least were the planets Uranus and Neptune. The latter, in particular, was predicted by comparing observed perturbations in the orbit of Uranus to what was expected. This suggested the gravitational influence of another nearby planet. John Couch Adams and Urbain-Jean-Joseph Le Verrier calculated the orbit of Neptune by comparing these perturbations in Uranus’ orbit to those of the other seven known planets. Neptune was hence discovered in the predicted location in 1846. Soon after this, French physicist Jacques Babinet proposed the existence of an even more distant planet, which he named Hyperion. Le Verrier wasn’t convinced, stating that there was “absolutely nothing by which one could determine the position of another planet, barring hypotheses in which imagination played too large a part”. Despite that lack of evidence for perturbations in Neptune’s orbit, many predicted the existence of a ninth planet over the next 80 years. Frenchman Gabriel Dallet called it “Planet X” in 1892 and 1901, and the famed American astronomer William Henry Pickering proposed “Planet O” in 1908. In addition to the perturbations of known planets there were other hypotheses that foretold unknown bodies beyond Neptune. In the 19th century, it was understood that many comets had highly elliptical orbits that swung past the outer planets at their farthest points from the sun. It was believed that these planets diverted the comets into their eccentric orbits. In 1879 the French astronomer Camille Flammarion predicted a planet with an orbit 24 times that of Earth’s based on comet measurements. 
Using the same method, George Forbes, professor of astronomy at Glasgow University, confidently announced in 1880 that “two planets exist beyond the orbit of Neptune, one about 100 times, the other about 300 times the distance of the earth from the sun”. Depending on how the calculations were done, the results predicted anything from one to four planets. Other predictions were based on what can be described as numerical curiosities or speculations. One of these was the now-discredited Bode’s law, a sort of Fibonacci sequence for planets. The American mathematician Benjamin Peirce was not a fan, claiming that “fractions which express the law of vegetable growth” were more accurate than Bode’s law. As well as these earnest astronomers, the trans-Neptunian planet idea attracted cranks and visionaries. An interesting contribution came in 1875 from Count Oskar Reichenbach, who accused Le Verrier and Adams of conspiring to conceal the locations of two trans-Neptunian planets. Theories and calculations were all well and good, but many hoped to actually see the hitherto invisible planet(s). From the late 1800s new powerful telescopes equipped with the latest dry-plate photographic technologies were employed to search for undiscovered planets. Amateur astronomers such as Isaac Roberts and William Edwards Wilson used the predictions of George Forbes to search the skies, taking many hundreds of photographic plates in the process. They found no lurking trans-Neptunian planets. The professionals fared no better. Edward Charles Pickering, director of the Harvard Observatory and William’s brother, spent around ten years from 1900 searching using his own data and those of earlier astronomers such as Dallet, all to no avail. In 1906 a new approach was introduced by the veteran astronomer Percival Lowell. Although best known to us for his (mistaken) observations of canals on Mars, Lowell brought a new rigour to analysing the orbit of Uranus based on observational data from 1750 to 1903.
With these improved calculations, hope for a visual fix on the elusive planet was renewed. With the aid of the brothers Vesto and Earl Slipher, Lowell spent the rest of his life scanning photographic plates with a hand magnifier and finally with a Zeiss blink comparator. In September 1919 William Pickering kicked off another search for “Planet O” based on deviations in Neptune’s orbit. Milton L. Humason, from the Mount Wilson Observatory in California, started a search based on these new predictions as well as Lowell’s and Pickering’s 1909 predictions. This search again failed to find any new planets. Pickering continued to publish articles on hypothetical planets but by 1928 he had become discouraged. As part of Lowell’s legacy, the Lowell Observatory built a special astrographic telescope. It was completed in 1929, and under Vesto Slipher’s direction, a young assistant was assigned to take and examine the photographs of the farthest reaches of the solar system. His name was Clyde Tombaugh. This was grim, unglamorous work. Each plate was exposed for an hour or more, with Tombaugh adjusting the telescope precisely to keep pace with the slowly turning sky. Today a computer would make the comparisons, but in 1929 they were made by eye, manually flicking between two images. Stars would remain motionless while other bodies would seem to jump between views. Some images would have 40,000 stars, others up to 1 million. Nearly a year had elapsed when, on February 18, 1930, two images fifteen times fainter than Neptune were found among 160,000 stars on the photographic plates. The discovery was confirmed by examining earlier images. On February 20 the planet was observed to be yellowish, rather than bluish like Neptune. The new planet had revealed its true colours at last. Slipher waited until March 13 to announce the discovery. This was both Lowell’s birthday and the anniversary date of the discovery of Uranus.
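Blinking plates by eye is, in modern terms, image differencing: anything fixed between exposures cancels, and anything that moved shows up twice, once as a deficit and once as an excess. A toy sketch with an invented star field (the grid size, star count and the mover's pixel positions are all made up for illustration):

```python
import numpy as np

SIZE = 64
rng = np.random.default_rng(0)

# A star field shared by both "plates": stars do not move between them.
plate1 = np.zeros((SIZE, SIZE))
for y, x in rng.integers(0, SIZE, size=(50, 2)):
    plate1[y, x] = 1.0
plate2 = plate1.copy()

# A faint moving body, present at different pixels on each plate.
plate1[10, 10] += 0.3   # the "planet" on plate 1
plate2[10, 13] += 0.3   # shifted three pixels by the time of plate 2

# Blinking = differencing: the fixed stars cancel exactly, leaving
# only the object's two positions.
diff = plate2 - plate1
moved = np.argwhere(np.abs(diff) > 0.1)
print(moved.tolist())  # [[10, 10], [10, 13]]
```

The same cancellation Tombaugh performed by eye across hundreds of thousands of stars reduces here to a single array subtraction.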
The announcement set off a worldwide rush to observe and photograph the new planet. Now that astronomers, amateur and professional alike, knew what they were looking for, it turned out that Pluto had been hiding in plain view. Re-examination of Humason’s plates showed four images of Pluto from his 1919 survey, and there were many others. On March 14, an Oxford librarian read the news to his 11-year-old granddaughter Venetia Burney, who suggested the name Pluto. It was also suggested independently in a letter by William Henry Pickering. To complete the circle, some of Clyde Tombaugh’s remains are in a canister attached to the New Horizons spacecraft. Most people alive today would not remember a universe without Pluto. And from 2015, its patterned surface will enter our visual vocabulary of the planets. Once seen, it can never again be unseen. Planet X, welcome to our world. Now, as Pluto retreats into the distance, the slow trickle of data can begin. Sent to us at a rate of just 1 kilobit a second, it will take months to receive it all, and astronomers around the world are waiting on tenterhooks to get their hands on the data. Like our own Earth, Pluto has an oversized satellite, Charon. It was discovered back in 1978 and is more than half the diameter of its parent. But how did this satellite system come to be? And why the striking similarity to our double-planet? If we look at the great majority of satellites in our solar system we find that they can be split into two groups. First, we have those that we think formed around their host planet like miniature planetary systems, mimicking the process of planet formation itself. These regular satellites most likely accreted from disks of material around the giant planets as those planets gobbled up material from the proto-planetary disk from which they formed. This explains the orbits of those satellites – perfectly aligned with the equator of their hosts and moving on circular orbits.
Then we have the irregular satellites. These are (with a couple of noteworthy exceptions) tiny objects, and move on a wide variety of orbits that are typically great distances from their host planets. These, too, are easily explained – thought to be captured from the debris moving around the solar system late in its formation, relics of the swarm of minor bodies from which the planets formed. By contrast, our moon and Pluto’s Charon are far harder to explain. Their huge size, relative to their host, argues against their forming like the regular satellites. Likewise, their orbits are tilted both to the plane of the equator and to the plane of the host body’s orbit around the sun. It also seems very unlikely they were captured – that just doesn’t fit with our observations. The answer to this conundrum, in both cases, is violent. Like our moon, Charon (and by extension Pluto’s other satellites) are thought to have been born in a giant collision, so vast that it tore their host asunder. This model does a remarkable job of explaining the makeup of our own moon, and fits what we know (so far) about Pluto and its satellites. Pluto and its moons will therefore be the second shattered satellite system we’ve seen up close, and the results from New Horizons will be key to interpreting their formation. Studying the similarities and differences between Pluto and Charon will teach us a huge amount about that ancient cataclysmic collision. We already know that Pluto and Charon are different colours, but the differences likely run deeper. If Pluto was differentiated at the time of impact (in other words, if it had a core, mantle and crust, like the Earth) then Charon should be mostly comprised of material from the crust and mantle (like our moon). So it will be less dense and chemically different to Pluto. The same goes for Pluto’s other moons: Nix, Hydra, Styx and Kerberos. The most exciting discoveries from New Horizons will likely be those we can’t predict. 
Every time we visit somewhere new, the unexpected discoveries are often the most scientifically valuable. When we first visited Jupiter, 36 years ago, we found that its moon Io was a volcanic hell-scape. We also found that Europa hosts a salty ocean, buried beneath a thick ice cap. Both of these findings were utterly unexpected. At Saturn, we found the satellite Mimas looked like the Death Star and another, Iapetus, like a two-tone cricket ball, complete with a seam. Uranus had a satellite, Miranda, that looked like it had been shattered and reassembled many times over, while Neptune’s moon Triton turned out to be dotted with cryo-volcanoes that spew ice instead of lava. The story continues for the solar system’s smaller bodies. The asteroid Ida, visited by Galileo on its way to Jupiter, has a tiny moon, Dactyl. Ceres, the dwarf planet in the asteroid belt, has astonishingly reflective bright spots upon its surface. Pluto, too, will have many surprises in store. There have already been a few, including the heart visible in the latest images (see top) – possibly the most eye-catching feature to date. The best is doubtless still to come. Despite the difficulties posed by being more than four and a half billion kilometres from home, New Horizons is certain to revolutionise our understanding of the Pluto system. The data it obtains will shed new light on the puzzle of our solar system’s formation and evolution, and provide our first detailed images of one of the system’s most enigmatic objects. But the story doesn’t end there. Once Pluto recedes into the distance, New Horizons will continue to do exciting research. The craft has a limited amount of fuel remaining, nowhere near enough to turn drastically, but enough to nudge it towards another one or two conveniently placed targets. Since the launch of New Horizons, astronomers have been searching for suitable targets for it to visit as it hurtles outward through the Edgeworth-Kuiper belt, en-route to the stars.
In October 2014, as a result of that search, three potential targets were identified. Follow-up observations of those objects narrowed the list of possible destinations to two, known as 2014 MU69 (the favoured target) and 2014 PN70. The final decision on which target to aim for will be taken after New Horizons has left Pluto far behind, but we can expect to keep hearing about the spacecraft for years to come. What an amazing time for space exploration. The picture of the solar system from my childhood is now complete, as seen in this great family portrait produced by Ben Gross, a research fellow at the Chemical Heritage Foundation, and distributed via Twitter. I love this image because it shows each world in close-up, using some of the latest pictures from space exploration. As we celebrate seeing Pluto for the first time, it’s remarkable to think that this completes a 50 year task. It has been NASA that has provided the first close-up views of all these worlds. Here’s the rundown: But science never stays still. When New Horizons left Earth…
Guest post by Roy W. Spencer, Ph.D. I’ll admit to being a skeptic when it comes to other skeptics’ opinions on the potential effects of sunspot activity on climate. Oh, it’s all very possible I suppose, but I’ve always said I’ll start believing it when someone shows a quantitative connection between variations in global cloud cover (not temperature) and geomagnetic activity. Maybe my skepticism is because I never took astronomy in college. Or, maybe it’s because I can’t see or feel cosmic rays. They sound kind of New Age to me. After all, I can see sunlight, and I can feel infrared radiation…but cosmic rays? Some might say, “Well, Roy, you work with satellite microwave data, and you can’t see or feel those either!” True, but I DO have a microwave oven in my kitchen…where’s your cosmic ray oven? Now…where was I? Oh, yeah. So, since I’ve been working with 9 years of global reflected sunlight data from the CERES instrument flying on NASA’s Terra satellite, last night I decided to take a look at some data for myself. The results, I will admit, are at least a little intriguing. The following plots show detrended time series of monthly running 5-month averages of (top) CERES reflected shortwave deviations from the average seasonal cycle, and (bottom) monthly running geomagnetic Ap index values from the NOAA Space Weather Prediction Center. As I understand it, the Ap index is believed to be related to the level of cosmic ray activity reaching the Earth. (I will address the reason for detrending below). Note that there is some similarity between the two plots. If we do a scatterplot of the data (below), we get an average linear relationship of about 0.05 W per sq. meter increase in reflected sunlight per 1 unit decrease in Ap index. This is at least qualitatively consistent with a decrease in solar activity corresponding to an increase in cloud cover.
(I’ve also shown a 2nd order polynomial fit (curved line) in the above plot for those who think they see a nonlinear relationship there.) But just how big is this linear relationship seen in the above scatterplot? From looking at a 70-year plot of Ap data (originally from David Archibald), we see that the 11-year sunspot cycle modulates the Ap index by at least 10 units. Also, there are fairly routine variations on monthly and seasonal time scales of about 10 Ap units, too: When the 10 Ap unit variations are multiplied by the 0.05 scale factor, it suggests about a 0.5 W per sq. meter modulation of global reflected sunlight during the 11 year solar cycle (as well as in monthly and yearly variations of geomagnetic activity). I calculate that this is a factor of 10 greater than the change in reflected sunlight that results from the 0.1% modulation of the total solar irradiance during the solar cycle. At face value, that would mean the geomagnetic modulation of cloudiness has about 10 times the effect on the amount of sunlight absorbed by the Earth as does the solar cycle’s direct modulation of the sun’s output. It also rivals the level of forcing due to anthropogenic greenhouse gas emissions, but with way more variability from year to year and decade to decade. (Can anyone say, “natural climate variability”?) Now, returning to the detrending of the data. The trend relationship between CERES reflected sunlight and the Ap index is of the opposite sign to that seen above. This suggests that the trend in geomagnetic activity during 2000-2008 cannot explain the trend in global reflected sunlight over the same period of time. However, the ratio of the trends is very small: +0.004 Watts per sq. meter per unit Ap index, rather than -0.045. So, one can always claim that some other natural change in cloud cover is overpowering the geomagnetic modulation of cloudiness.
With all kinds of climate forcings all mingled in together, it would be reasonable to expect a certain signal to emerge more clearly during some periods, and less clearly during other periods. I also did lag correlation plots of the data (not shown), and there is no obvious lag in the correlation relationship. All of this, of course, assumes that the observed relationship during 2000-2008 is not just by chance. There is considerable autocorrelation in the reflected sunlight and geomagnetic data, which I have made even worse by computing monthly running 5-month averages (the correlation strengths increased with averaging time). So, there are relatively few degrees of freedom in the data collected during 2000-2008, which increases the probability of getting a spurious relationship just by chance. All of the above was done in a few hours, so it is far from definitive. But it IS enough for me to keep an open mind on the subject of solar activity affecting climate variations. As usual, I’m just poking around in the data and trying to learn something…while also stirring up some discussion (to be enjoyed on other blogs) along the way. UPDATE (12:30 p.m. 10 December 2009) There is a question on how other solar indices compare to the CERES reflected sunlight measurements. The following lag correlation chart shows a few of them. I’m open to suggestions on what any of it might mean.
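The processing chain Spencer describes (remove the linear trend, smooth with a monthly running 5-month average, then fit a line to the scatterplot) is straightforward to express in code. A sketch using synthetic stand-in series, since the CERES and Ap data themselves are not reproduced here; the noise levels and the built-in -0.05 W/m² per Ap-unit relationship are invented for illustration:

```python
import numpy as np

def detrend(y):
    """Subtract the least-squares linear trend from a series."""
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)
    return y - (slope * t + intercept)

def running_mean(y, window=5):
    """Centered running mean, mimicking the 5-month smoother."""
    return np.convolve(y, np.ones(window) / window, mode="valid")

rng = np.random.default_rng(42)
n = 108  # about 9 years of monthly data, as in the CERES record

# Synthetic Ap-like index, and a reflected-shortwave series built to
# contain a -0.05 W/m^2 per Ap-unit signal plus a trend and noise.
ap = 15 + 5 * np.sin(np.arange(n) / 6.0) + rng.normal(0, 2, n)
sw = -0.05 * ap + 0.01 * np.arange(n) + rng.normal(0, 0.05, n)

ap_s = running_mean(detrend(ap))
sw_s = running_mean(detrend(sw))
slope = np.polyfit(ap_s, sw_s, 1)[0]
print(f"fitted slope: {slope:.3f} W/m^2 per Ap unit")
```

Detrending both series before regressing matters here for the reason the post gives: the 2000-2008 trends have the opposite sign to the month-to-month relationship, so leaving them in would mix the two effects into one misleading slope.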
Physicists are gearing up to send a re-engineered science instrument originally designed for lofty balloon flights high in Earth’s atmosphere to the International Space Station next week to broaden their knowledge of cosmic rays, subatomic particles traveling on intergalactic routes that could hold the key to unlocking mysteries about supernovas, black holes, pulsars and dark matter. Fastened in the cargo bay of a SpaceX Dragon capsule, the cosmic ray observatory will be robotically connected to a port outside the space station’s Japanese Kibo laboratory for a three-year science campaign sampling cosmic rays, particles accelerated to nearly the speed of light by violent and mysterious forces in the distant Universe. First discovered more than a century ago, most cosmic rays are blocked by the atmosphere from reaching Earth’s surface, requiring scientists to send up detectors on high-altitude balloon flights or space missions. Their name is a misnomer. Cosmic rays are not a form of light like gamma-rays or X-rays, but bits of matter sent careening through space by powerful forces elsewhere in our galaxy and beyond. “Cosmic rays are direct samples of matter from outside our solar system, possibly from the most distant reaches of the Universe,” said Eun-Suk Seo, lead scientist on the Cosmic Ray Energetics and Mass, or CREAM, instrument and a professor of physics at the University of Maryland. Scientists have flown variants of the CREAM instrument seven times on balloon research missions, logging more than six months of flight time. Engineers modified the existing science payload for the rigors of spaceflight, finishing the instrument for as little as $10 million to $20 million, Seo said, a fraction of the cost of a standalone space mission or an instrument developed from scratch. Changes to the balloon-borne instrument included making the on-board electronics more robust against radiation, and ensuring the package could survive the shaking of a rocket launch. 
Dozens of stacked layers of silicon pixels, carbon targets, tungsten planes and scintillating fibers will detect particles, ranging from subatomic units of relatively light hydrogen to heavy iron, coming from deep space and determine their mass, charge and trajectory. Each cosmic ray comes with its own backstory, and the particles will reveal clues about their origins as they collide with the matter inside CREAM’s detector. Scientists will trace the shower of secondary particles generated by each cosmic ray’s crash into the instrument’s cross section of pixels and targets. The most energetic cosmic rays can penetrate all the way to Earth’s surface, but detectors on the ground only pick up the leftovers generated from collisions with oxygen and nitrogen atoms in the atmosphere, producing “air showers” of secondary particles that rain down on the planet. “The original cosmic rays, for you to detect them, you have to fly an instrument in space,” Seo said. “That’s what we are doing. We identify (cosmic rays) particle-by-particle, tell what they are, how much energy they have, and characterize them. We (sample) them directly before they are broken up in the atmosphere.” CREAM will be sensitive to cosmic rays with higher energies than previous cosmic ray detectors flown in space, including the $2 billion Alpha Magnetic Spectrometer delivered to the space station on the second-to-last space shuttle flight in 2011. “What CREAM is going to do is to extend the direct measurements to the highest energies possible, to energies that are capable of generating these gigantic air showers that can reach all the way to the ground,” Seo said. Huge explosions like stellar supernovae, along with extreme gravitational forces from other cosmic phenomena, send cosmic rays shooting through space at mind-boggling velocities approaching the speed of light. One of the CREAM instrument’s chief objectives is to study where the particles come from.
NASA’s Fermi Gamma-ray Space Telescope proved some cosmic rays come from the expanding debris remnants of supernovae, but the case is still open for other types of cosmic rays. “It’s generally believed that cosmic rays originate in supernovae,” Seo said. “There are other possible contributions or accelerators, pulsars, colliding galaxies, black holes, AGNs (active galactic nuclei).” But some cosmic rays are believed to be too energetic to be accelerated by supernovae. “A supernova is very powerful, but still it’s a finite engine,” Seo said. Subatomic particles like protons are the most common type of cosmic ray at lower energies, and cosmic rays become rarer as scientists look at higher energies. But balloon science campaigns found the drop-off in particle detections at higher energies is not as steep as predicted, a result known as spectral hardening. “At high energies that are in our energy range … there are more cosmic rays than were expected from the simple supernova acceleration scenario,” Seo said. Comparisons of two types of particles — protons and helium — suggest low-energy and high-energy cosmic rays could come from different sources. “At lower energies, we already know protons are the most dominant component, but as you approach this acceleration limit you expect to see this composition change,” Seo said. “But this hasn’t been observed yet because we are not able to do the direct measurements at that higher energy. With CREAM, we are [able] to explore these higher energies to actually observe such composition changes to confirm such a supernova acceleration scenario.” Seo said CREAM will build up statistics on the flux, or arrival rate, of high-energy cosmic rays with continuous observations not possible on a short-duration balloon flight. “By utilising the space station, we can increase our exposure by an order of magnitude,” Seo said.
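The spectral hardening Seo describes can be illustrated with a toy broken power law. This is a sketch only: the spectral indices and break energy below are illustrative round numbers, not CREAM's measured values. The point is that if the slope flattens from E^-2.7 to E^-2.6 above some break, the excess over the simple single-power-law extrapolation grows with energy.

```python
def single_power_law(E, norm=1.0, index=2.7):
    """Flux expected from a simple single-power-law acceleration picture (E in TeV)."""
    return norm * E ** -index

def hardened(E, norm=1.0, index=2.7, break_E=0.2, index_above=2.6):
    """Toy broken power law: a shallower (harder) slope above break_E, continuous at the break."""
    if E <= break_E:
        return norm * E ** -index
    return norm * break_E ** -index * (E / break_E) ** -index_above

# The fractional excess over the single power law grows with energy;
# at 100 TeV (500x the assumed break) it is (500)**0.1, roughly 1.9.
excess_at_100TeV = hardened(100.0) / single_power_law(100.0)
```

Measuring that slowly growing excess is exactly why long exposure at the highest energies matters: the particles are rare, so only accumulated statistics can distinguish the two curves.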
“In other words, every day on the station, we will increase the statistics, and as the statistical uncertainties get reduced, we can detect higher energies than before.” One way physicists say cosmic rays could be born is during collisions between particles of dark matter, a mysterious substance that makes up about 27 percent of all the mass and energy in the Universe. Only 5 percent of the Universe is regular matter — stuff we can see and touch — while the rest is dark energy, an enigmatic force that helps drive the expansion of the Universe. “The question of whether these are from an exotic source like dark matter has generated lots of excitement, but for us to actually know whether there is some exotic source like dark matter, or an astrophysical source like a pulsar … we will need a lot more understanding of cosmic rays,” Seo said. Scientists from the United States, South Korea, France and Mexico are part of the CREAM project. The instrument weighs about 2,773 pounds (1,258 kilogrammes) inside the Dragon spacecraft’s payload trunk. Liftoff from NASA’s Kennedy Space Center in Florida is scheduled for Aug. 14. “It’s a very exciting time for us in high-energy particle astrophysics, and the long development road of CREAM culminating in this space station mission has been a world-class success story,” Seo said. Follow Stephen Clark on Twitter: @StephenClark1.
Combining the assets of multiple telescopes in the technique known as interferometry has a long pedigree. Using a cluster of small telescopes rather than a single gigantic one is a way to achieve high resolution at sharply lower costs. Take a look at this list of astronomical interferometers working from the visible to the infrared and you’ll see how widely spread the technique has become as we’ve moved from earlier long wavelength observations (including the Very Large Array and MERLIN) toward optical installations and submillimeter interferometers and, now under construction, the Atacama Large Millimeter Array. Observing Earth-like planets from space has often been studied in terms of a space-based array, with separated spacecraft operating in tandem, as in the infrared interferometer concept shown in this image (Credit: JPL). Both the now stalled Terrestrial Planet Finder and the canceled Darwin mission from ESA were looking at interferometry concepts that would have used a technique called nulling to reduce the light of a central star so that the planets orbiting it could be studied. New work may give these ideas renewed life and improve the concept. For Stefan Martin (JPL) and A.J. Booth (Sigma Space Corp., Lanham, MD) have created a nulling interferometer that combines the light from four different telescopes to achieve effects no previous nulling techniques have been able to equal. Says Martin: “Our null depth is 10 to 100 times better than previously achieved by other systems. This is the first time someone has cross-combined four telescopes, set up in pairs, and achieved such deep nulls. 
It’s extreme starlight suppression…And because this system makes the light from the star appear 100 million times fainter, we would be able to see the planet we’re looking for quite clearly.” 100 million times fainter is impressive indeed, and should be weighed against the fact that at infrared wavelengths, host stars can emit 10 million times more infrared than the terrestrial world being sought. The device, under study at JPL, is shown below. Image: From left to right: JPLers Felipe Santos Fregoso, Piotr Szwaykowski, Kurt Liewer and Stefan Martin with the nulling interferometer testbed at JPL, where the device is built and refined. Image credit: NASA/JPL-Caltech. The ideal would be to find a terrestrial world circling a nearby star, one whose atmosphere we could hope to study with spectroscopic techniques, looking for the signatures of life. We have to find such planets first, of course, for of the 463 planets now known, none could support life as we know it. But this could change literally overnight as the three ongoing studies of Alpha Centauri A and B continue, with every prospect that we’ll have answers about planets there within the next few years. How Martin and Booth’s technique would cope with the tricky situation around the central binary stars there I don’t know, but rocky worlds around either of them would give even more impetus to the search for other ‘Earths’ in the interstellar neighborhood. A recent paper on the Planet Detection Testbed is Martin et al., “Demonstration of the Exoplanet Detection Process Using Four-Beam Nulling Interferometry,” Aerospace Conference 2009 IEEE (abstract).
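For a sense of what a 100-million-fold suppression demands, here is a sketch using the standard leading-order expression for the null depth of a two-beam nuller, N ≈ (δφ² + δa²)/4, where δφ is the phase error in radians and δa the fractional amplitude mismatch. The specific numbers are illustrative, not figures from the JPL testbed.

```python
import math

def null_depth(phase_err_rad, amp_mismatch=0.0):
    """Leading-order null depth for a two-beam nuller with small errors."""
    return (phase_err_rad ** 2 + amp_mismatch ** 2) / 4.0

def phase_budget(target_null):
    """Phase error allowed to reach target_null, assuming perfect amplitude matching."""
    return 2.0 * math.sqrt(target_null)

# A 1e-8 null (starlight 100 million times fainter) allows only ~2e-4 rad
# of phase error; at an assumed 10-micron observing wavelength that is an
# optical path difference of lambda * dphi / (2*pi), about 0.3 nanometres.
dphi = phase_budget(1e-8)
path_err_nm = 10e-6 * dphi / (2 * math.pi) * 1e9
```

Sub-nanometre path control across four combined beams is what makes deep nulling such a demanding optical engineering problem.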
LIGO makes waves with gravitational announcement, and Australian telescopes follow up

Several weeks ago, physicists at The Laser Interferometer Gravitational-Wave Observatory (LIGO) made waves with the announcement that gravitational waves – ripples in space time caused by a violent cosmic event taking place in the distant Universe – had finally been observed 100 years after Albert Einstein predicted their existence. Australian researchers and telescopes played an exciting part in the follow-up observations, showcasing the capabilities of the Square Kilometre Array precursors located in the Western Australian outback. Research published last month describes the follow-up program that began soon after the gravitational wave candidate was first identified by LIGO in September 2015. Within two days of the trigger, 21 teams responded to the alert and began observations with satellites and ground-based telescopes around the world. Over the next three months, observations were performed using facilities spanning from radio to gamma-ray wavelengths. The Australian radio telescopes involved in the EM follow-up are both located at CSIRO’s Murchison Radio-astronomy Observatory (MRO) – the Australian SKA Pathfinder (ASKAP) and the Murchison Widefield Array (MWA). Both instruments offer a wide field of view, high sensitivity and quick response times, and are complemented by high-speed supercomputing capabilities, making them valuable additions to the LIGO/Virgo collaboration. With its huge field of view, the MWA was the first radio telescope in the world to respond to the call from LIGO to hunt down the source of the unconfirmed gravitational wave detection. Not long after, ASKAP swung into action, using the first of its antennas equipped with a Phased Array Feed (PAF) receiver to make multiple observations of the trigger region over the course of the following week.
The challenge of conducting follow-up work for gravitational wave surveys is that the position of the source is not well known and is located somewhere within a large region of sky. For telescopes with a large field of view, like ASKAP and MWA, this is a great advantage. “LIGO gives us two things: the area of sky to look at and the time the trigger started,” explains Keith Bannister, one of the CSIRO astronomers involved in the ASKAP follow-up. “With a wide field of view, good sensitivity and the 1 GHz observing frequency, ASKAP is an ideal instrument for finding EM counterparts. And, we now have the infrastructure in place so that when LIGO detects a trigger, we have the capability to do a follow-up observation.” “We’re yet to detect anything, but just to be involved in this program is a great thrill,” he continues. “It’s exciting to think how much more we’ll be able to contribute with the full ASKAP telescope, when all 36 antennas are installed with Mk II PAFs.” The MWA, which has been in full operations since mid-2013, has cooperative agreements with LIGO and other telescopes to follow up time-critical astronomical events such as gravitational wave events and gamma-ray bursts. Having no moving parts, the MWA is very agile and can be observing the sky moments after receiving the alert. “The MWA automatically accepts alert messages from other instruments like LIGO and begins observing the source in less than eight seconds,” MWA Director Dr Randall Wayth of Curtin University (ICRAR) said. “Although there was no detection for this first event, we’re excited about being part of the international electromagnetic follow-up to these events.” The MWA is currently undergoing an expansion phase that will double its angular resolution on the sky. This increased resolution will help localise the sources of GW events when a follow-up detection occurs. “It’s been fantastic to follow up this momentous event with the MWA.
The unique capabilities of the telescope really shone in this, allowing us to dominate with our coverage of the sky. The hardest part has been keeping this quiet for the last few months,” said MWA project scientist, Professor David Kaplan. The sentiment is echoed by ASKAP Project Scientist Lisa Harvey-Smith, who notes that two of the ASKAP Survey Science Projects, VAST and CRAFT, are dedicated to transient searches. “It is very exciting for ASKAP to be involved in the international efforts to detect electromagnetic emission from this gravitational wave event. Our PAF receivers make ASKAP a very powerful instrument for detecting radio waves from these events, and our science teams are looking forward to continuing this work as part of the Early Science program that will start this year.” The SKA will look at different gravitational waves from those detected by LIGO. While LIGO has detected gravitational waves as they pass through the Earth, the SKA will detect gravitational waves as they pass through our galaxy, helping us confirm our model of the expansion of the Universe and the formation of galaxies themselves. See https://www.skatelescope.org/news/ligo-ska-gw/ for more.
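The wide-field advantage described in this follow-up campaign is easy to quantify with rough numbers. This sketch uses assumed, order-of-magnitude figures (the first event's initial localisation region was of order several hundred square degrees, and the fields of view below are round numbers, not official instrument specifications):

```python
import math

def pointings_needed(error_region_sq_deg, fov_sq_deg):
    """Rough number of pointings to tile a localisation region, ignoring overlap."""
    return math.ceil(error_region_sq_deg / fov_sq_deg)

ERROR_REGION = 600.0  # sq. degrees, order of magnitude for an early GW localisation

# Assumed fields of view in sq. degrees (illustrative only):
for name, fov in [("narrow-field imager (~1 deg^2)", 1.0),
                  ("ASKAP-class survey telescope (~30 deg^2)", 30.0),
                  ("MWA-class wide-field array (~600 deg^2)", 600.0)]:
    print(name, "->", pointings_needed(ERROR_REGION, fov), "pointings")
```

A telescope that can cover the whole error box in one or a handful of pointings can respond within the short window in which any radio counterpart might be detectable, which is why instruments like the MWA and ASKAP are natural partners for LIGO alerts.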
by Robert Bee

Purely for the heck of it and because I enjoy calculating things, I had a go at using basic formulae, together with the data provided by the Event Horizon Telescope project (which produced the amazing image of the black hole in galaxy M87), to calculate the angular size (as seen from Earth) of the black hole (actually its event horizon) and also its mass in solar masses. I was interested to see if my humble calculations aligned with the experts’ figures.

NOTE: These are my personal calculations done to the best of my mathematical ability. If any reader can find an error in them, I would greatly appreciate your feedback so I might make a correction.

So, here is the data provided by the EHT team:
EH Diameter: D = 38 x 10^9 km
Black Hole (BH) Mass: M = 6.5 x 10^9 M0 (M0 is 1 solar mass)
Distance from Earth to M87: d = 53.49 x 10^6 light years. 1 light year = 9.461 x 10^12 km
Gravitational Constant: G = 6.67 x 10^(-11) m^3/kg/s^2

Angular size of the Event Horizon:
Let the angular size of the EH be α. Using the geometric formula D(km) = α (radians) x d (km), then α (radians) = D/d.
Using the numbers above (in appropriate units):
α (radians) = (38 x 10^9 km)/(53.49 x 10^6 x 9.461 x 10^12 km) = 0.075089 x 10^(-9) radians
Now 1 arcsec = pi/648,000 rad. Reversing this, we get: 1 rad = 206,264.806 arc sec.
Therefore, the apparent angular size of the Event Horizon, seen from Earth, is:
α = 0.075089 x 10^(-9) x 206,264.806 = 15.49 x 10^(-6) arc seconds.

That’s a very small angle. To put that into perspective, if we assume that a human hair has a thickness of 0.1mm (which is average for a human hair), how far away would you have to place a human hair for its thickness to have the same angular size as the M87 Event Horizon? Using the same technique as above:
Distance d x α = 0.1mm = 10^(-4) m. That is, d = 10^(-4)/α.
But for the EH, α = 0.075089 x 10^(-9) radians.
Substituting, we get: d = 10^(-4) m/(0.075089 x 10^(-9)) = 13.32 x 10^5 m = 1,332 km.
Try to imagine that. I can’t. That’s more than the distance from Sydney to Adelaide.

Now to calculate the mass of the black hole. There is a formula for the escape velocity of an object (say a rocket) from a planet:
v = sq.root(2GM/r), or v = (2GM/r)^(1/2)
where G is the gravitational constant, M is the mass of the planet, and r is the distance of the object from the centre of the planet. This same formula applies to the escape velocity from a black hole, but that velocity is set at greater than the speed of light c. The radius of the Event Horizon of a black hole is called the Schwarzschild Radius, and its value is calculated by setting the escape velocity equal to c. So, by rearranging the equation for the escape velocity, we get an expression for the Schwarzschild radius of:
Rs = 2GM/c^2
Now the EHT project has given us Rs (half of 38 billion km = 19 x 10^9 km = 19 x 10^12 m) and we know G and c. So let’s calculate the mass of the black hole.
M = (Rs x c^2)/2G = [(19 x 10^12) x (3 x 10^8)^2]/[2 x 6.67 x 10^(-11)] = 12.8186 x 10^39 kg
Now the mass of our Sun M0 = 1.9891 x 10^30 kg. Divide this into the value above for M and you get:
M = 6.44 x 10^9 (or 6.44 billion) times the mass of our Sun M0.
That’s very close to the reported value of the black hole’s mass of 6.5 billion solar masses. And I think that’s pretty cool.
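The arithmetic above drops straight into a few lines of Python for anyone who wants to check it, using the article's own inputs and rounded constants:

```python
# Reproduce the article's two calculations with its own numbers.
D_km = 38e9                     # event-horizon diameter from the EHT team, km
d_ly = 53.49e6                  # distance from Earth to M87, light years
KM_PER_LY = 9.461e12
RAD_TO_ARCSEC = 206264.806

alpha_rad = D_km / (d_ly * KM_PER_LY)        # angular size in radians
alpha_uas = alpha_rad * RAD_TO_ARCSEC * 1e6  # micro-arcseconds

# Distance at which a 0.1 mm human hair subtends the same angle:
hair_km = (1e-4 / alpha_rad) / 1000.0

# Mass from the Schwarzschild radius Rs = 2GM/c^2, rearranged to M = Rs*c^2/(2G):
G, c = 6.67e-11, 3.0e8
Rs_m = 19e12                    # half of 38 billion km, in metres
M_kg = Rs_m * c**2 / (2 * G)
M_solar = M_kg / 1.9891e30
```

Running this gives an angular size of about 15.49 micro-arcseconds, a hair distance of about 1,332 km, and a mass of about 6.44 billion solar masses, matching the figures worked out by hand above.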
Willy Wonka had his great glass elevator, but soon, the rest of us may be able to take an out-of-this-world ride of our very own. Launching rockets to the moon is a very expensive process, because the amount of fuel and power needed to break through the Earth’s gravitational force is huge, and for more than 100 years scientists have been debating the concept of a permanent elevator that travels between the Earth and the Moon. Can you imagine that? Recently, two university students, Zephyr Penoyre from Cambridge University and Emily Sandford from Columbia University, have proposed the idea of a Spaceline that seems more attainable and a lot cheaper than even a space elevator! So just what is the Spaceline? Imagine a really (really, really) long cable, about as thin as the lead in your pencil, extending all the way down from the Moon to tens of thousands of kilometres above the surface of the Earth. This line, which will be made from Kevlar (the same, super tough, almost unbreakable material used to make bulletproof vests) will cover the 200,000 miles from the Moon to a point near geosynchronous orbit above the Earth, past where the gravity of the Earth and the Moon balance each other out (think of a see-saw when both people on it weigh the exact same amount!). This will make sure that the cable remains stable enough to transport materials and even people between Earth and the Moon. The transport would be done via solar-powered capsules that would run along the length of the cable. Do we really need a Spaceline? What would it do? By constructing a Spaceline, the cost of sending materials into space, outside the gravitational pull of the Earth, would become significantly cheaper. Rockets would only have to be launched up to the base of the Spaceline, without the need to break through the Earth’s gravity at all.
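For the curious, the balance point mentioned above can be estimated by setting the Earth's and Moon's gravitational pulls equal along the Earth-Moon line. This is a simplification (it ignores the system's rotation, so it gives the static balance point rather than the true L1 Lagrange point), but it shows where the "see-saw" tips:

```python
import math

M_EARTH = 5.972e24          # kg
M_MOON = 7.342e22           # kg
EARTH_MOON_KM = 384_400.0   # mean centre-to-centre distance

# Setting G*M_e/r^2 = G*M_m/(d-r)^2 and solving for r gives:
#   r = d / (1 + sqrt(M_m / M_e))
r_balance_km = EARTH_MOON_KM / (1 + math.sqrt(M_MOON / M_EARTH))
# The pulls cancel roughly 346,000 km from Earth's centre, about 90% of
# the way to the Moon, which is why a cable anchored on the Moon can
# hang "down" toward Earth once it extends past that point.
```

So most of the cable's length sits on the Moon's side of the balance point, and only the lower end dangles into the region where Earth's gravity dominates.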
From the base of the Spaceline, where, in theory, a base could be built since the forces of gravity cancel each other out, creating a stable environment for construction, materials and people could be transferred to the solar-powered capsules that would carry them to the Moon. Similarly, geological materials from the Moon could be shipped back down to Earth via the Spaceline and used in construction and other areas. The base could serve as a centre where rockets for deeper space exploration could be built, and since they would also not have to think about overcoming the Earth’s gravity, the cost of space exploration and even space travel for humans would dramatically reduce. Sounds pretty cool, right? Is it going to happen? In reality this idea is still a long way from coming to fruition, but this research paper is definitely a step in the right direction. How much will it cost to make this Spaceline? What is particularly exciting is that the cost of this entire contraption would be about 1 billion dollars, which would be recovered by approximately 53 trips to the Moon. Things to think about: What about space junk – could debris damage this Spaceline? And perhaps more importantly, how much more space junk will we be generating in space? Do you fancy a trip to the Moon? It just might become possible! Written by: Disha Mirchandani. Disha is a former lawyer turned freelance content writer. She is a fitness enthusiast and amateur aerialist with her own fitness photo-blog as well.
Methane's Secrets, From Diamonds to Neptune

Hydrocarbons from the Earth make up the oil and gas that heat our homes and fuel our cars. The study of the various phases of molecules formed from carbon and hydrogen under high pressures and temperatures, like those found in the Earth’s interior, helps scientists understand the chemical processes occurring deep within planets, including Earth. New research from a team led by Carnegie’s Alexander Goncharov homes in on the hydrocarbon methane (CH4), which is one of the most abundant molecules in the universe. Despite its ubiquity, methane’s behavior under the conditions found in planetary interiors is poorly understood due to contradictory information from various modeling studies. The work is published in Nature Communications. Lead author Sergey Lobanov explains: "Our knowledge of physics and chemistry of volatiles inside planets is based mainly on observations of the fluxes at their surfaces. High-pressure, high-temperature experiments, which simulate conditions deep inside planets and provide detailed information about the physical state, chemical reactivity, and properties of the planetary materials, remain a big challenge for us." For example, methane’s melting behavior is known only below 70,000 times normal atmospheric pressure (7 GPa). The ability to observe it under much more extreme conditions would provide fundamental information for planetary models. Moreover, its reactivity under extreme conditions also needs to be understood. Previous studies provided little information about methane’s chemical reactivity under pressure and temperature conditions similar to those found in the deep Earth. This led to the assumption that methane is the main hydrocarbon phase of carbon, oxygen, and hydrogen-containing fluid in some parts of the Earth’s mantle. But the team’s work shows that it is necessary to question this assumption.
Using high-pressure experimental techniques, the team–including Carnegie’s Lobanov, Xiao-Jia Chen, Chang-Sheng Zha, and Ho-Kwang "Dave" Mao–was able to examine methane’s phases and reactivity under a range of temperatures and pressures mimicking environments found beneath Earth’s surface. At pressures reaching 790,000 times normal atmospheric pressure (80 GPa), methane’s melting temperature is still below 1,900 degrees Fahrenheit. This suggests that methane is not a solid under any conditions met deep within most planets. What’s more, its melting point is even lower than melting temperatures of water (H2O) and ammonia (NH3), other very important components in the interiors of giant planets. As the temperature increases above about 1,700 degrees Fahrenheit, methane becomes more chemically reactive. First, it partly disassociates into elemental carbon and hydrogen. Then, with further temperature increases, light hydrocarbon molecules start to form. Pressure also affects the composition of the carbon-hydrogen system, with heavy hydrocarbons becoming apparent at pressures above 250,000 times atmospheric pressure (25 GPa), indicating that under deep mantle conditions the environment is likely methane poor. These findings have implications both for Earth’s deep chemistry and for the geochemistry of icy gas giant planets such as Uranus and Neptune. The team argues that this reactivity may play a role in the formation of ultradeep diamonds deep within the mantle. They assert that their findings should be taken into account in future models of the interiors of Neptune and Uranus, which are believed to have mantles consisting of a mixture of methane, water, and ammonia components. Studying the interior of the Earth is important in understanding how physical processes affect the habitability of our planet. 
The chemistry and environment of the subsurface of our planet is also of interest as a habitat for unique forms of life that live deep below the ground, and possibly independent of energy from the Sun.
I’ve talked quite a bit about planets around other stars, known as exoplanets. Most of the exoplanets we’ve discovered are Neptune-sized worlds, but we’ve found exoplanets smaller than Mercury. But in terms of size, that is about our limit given current technology. Given what we understand about our own solar system, we would expect that these exoplanetary systems also have smaller objects, including asteroids and comets. We haven’t observed any exo-asteroids, but we have detected exocomets. This is pretty remarkable, since most of the exoplanets we’ve discovered have been found indirectly, through things like transit data. We have only imaged a few of the larger exoplanets, and then only as a small blur. So how can we detect comets around other stars? Even though comets are much smaller than planets, they vent dust and gas when they are active, which produces the coma and tail of a comet. It is the vented gas that we can detect. For example, an upcoming article in Astronomy and Astrophysics looked at data from the High Accuracy Radial Velocity Planet Searcher (HARPS).1 The main goal of HARPS is to measure the Doppler motion of stars, and to do that it needs to make good observations of the line spectra from stars. These line spectra can be used to “fingerprint” the various elements and molecules that exist in the star. In this case the data was from a star known as HD 172555, which is a young star where a planetary system is still forming. This means it still has a disk of gas and dust around it. The HARPS telescope looked at line spectra from this circumstellar disk, and the team found that a few of the spectral lines were transient, sometimes visible in the spectrum of the star and sometimes not. When the team measured the Doppler shift of these transient lines, they didn’t match the overall Doppler shift of the star. This means the lines were not due to some change in the star itself.
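The Doppler measurement described here rests on the simple non-relativistic relation v = c·Δλ/λ. A sketch with illustrative numbers (the Ca II K line is a commonly used tracer in this kind of spectroscopy, but the wavelength shift below is made up for the example, not a value from the HD 172555 data):

```python
C_KM_S = 299_792.458  # speed of light, km/s

def radial_velocity(rest_wavelength_nm, observed_shift_nm):
    """Non-relativistic Doppler shift: v = c * delta_lambda / lambda."""
    return C_KM_S * observed_shift_nm / rest_wavelength_nm

# A hypothetical 0.05 nm redward shift of the Ca II K line at 393.37 nm
# corresponds to gas falling toward the star at roughly 38 km/s.
v = radial_velocity(393.37, 0.05)
```

A transient absorption line whose velocity differs from the star's own by tens of kilometres per second is exactly the signature of gas from an evaporating body crossing the line of sight.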
The most likely explanation is that these transient lines occur when gaseous material passes in front of the star, thus absorbing some of the starlight at particular wavelengths. This is exactly the type of thing you would expect if a comet passes in front of its star. Thus we have observational evidence of comets around other stars. One of the interesting things about the exocomets detected so far is that they are all part of young solar systems. We typically think of comets as icy remnants from the formation of our solar system, the leftover bits that never became part of planets. That’s true for the comets of our system, because ours is an older solar system. But as a planetary system forms, clumps of dust and ice form into comets and protoplanetary asteroids, many of which collide to form planets over time. These exocomets are not remnants of an old solar system, but rather the seeds of new ones. Kiefer, Flavien, et al. “Exocomets in the circumstellar gas disk of HD 172555.” Astronomy & Astrophysics 561 (2014): L10.
Elusive Mercury is second evening star alongside Venus

Orion is striding proudly across the meridian as darkness falls, but, even before the twilight dims, we have our best chances this year to spot Mercury low down in the west and close to the more familiar brilliant planet Venus. Both evening stars lie within the same field-of-view in binoculars for much of March, so the fainter Mercury should be relatively easy to locate using Venus as a guide. Provided, of course, that we have an unobstructed horizon. Mercury never strays far from the Sun’s glare, making it the most elusive of the naked-eye planets – indeed, it is claimed that many astronomers, including Copernicus, never saw it. Blazing at magnitude -3.9, Venus hovers only 9° above Edinburgh’s western horizon at sunset on the 1st and sets 64 minutes later. Mercury, one tenth as bright at magnitude -1.3, lies 2.0° (four Moon-breadths) below and to its right and may be glimpsed through binoculars as the twilight fades. Mercury stands 1.1° to the right of Venus on the 3rd and soon becomes a naked eye object as both planets stand higher from night to night, remaining visible until later in the darkening sky. By the 15th, Mercury lies 4° above-right of Venus and at its maximum angle of 18° from the Sun, although it has more than halved in brightness to magnitude 0.2. The slender young Moon sits 5° below-left of Venus on the 18th and 11° above-left of the planetary pairing on the 19th. Earthshine, “the old Moon in the new Moon’s arms”, should be a striking sight over the following few evenings. On the 22nd, the 30% illuminated Moon creeps through the V-shaped Hyades star cluster and hides (occults) Taurus’ leading star Aldebaran between 23:31 and 00:14 as they sink low into Edinburgh’s west-north-western sky. Falling back towards the Sun, Mercury fades sharply to magnitude 1.4 by the 22nd when it passes 5° right of Venus and becomes lost from view during the following week.
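The brightness comparison above (Venus at magnitude -3.9, Mercury "one tenth as bright" at -1.3) follows directly from Pogson's relation, in which a difference of five magnitudes corresponds to a flux factor of exactly 100:

```python
def brightness_ratio(m_fainter, m_brighter):
    """Pogson's relation: flux ratio for a given magnitude difference."""
    return 10 ** (0.4 * (m_fainter - m_brighter))

# Venus (-3.9) versus Mercury (-1.3): a difference of 2.6 magnitudes,
# giving a ratio of about 11, i.e. Mercury is roughly one tenth as bright.
ratio = brightness_ratio(-1.3, -3.9)
```

The same relation explains why Mercury's fade from magnitude 0.2 to 1.4 later in the month ("more than halved in brightness" and then some) makes it so hard to pick out of the twilight.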
At the month’s end, Venus stands 15° high at sunset and sets two hours later. The Sun climbs 12° northwards in March to cross the sky’s equator at the vernal equinox at 16:15 on the 20th, which is five days before we set our clocks forward at the start of British Summer Time. Sunrise/sunset times for Edinburgh change from 07:04/17:47 GMT on the 1st to 06:46/19:49 BST (05:46/18:49 GMT) on the 31st. The Moon is full on the 2nd, at last quarter on the 9th, new on the 17th, at first quarter on the 24th and full again on the 31st. Orion is sinking to our western horizon at our star map times while the Plough, the asterism formed by the brighter stars of Ursa Major, is soaring high in the east towards the zenith. To the south of Ursa Major, and just reaching our meridian, is Leo which is said to represent the Nemean lion strangled by Hercules (aka Heracles) in the first of his twelve labours. Leo appears to be facing west and squatting in a similar pose to that of the lions at the foot of Nelson’s Column in Trafalgar Square. Leo’s Sickle, the reversed question mark that curls above Leo’s brightest star Regulus, outlines its head and mane and contains the famous double star Algieba whose two component stars, both much larger than our Sun, take more than 500 years to orbit each other and may be seen through a small telescope. Regulus, itself, is occulted as they sink towards Edinburgh’s western horizon at 06:02 on the morning of the 1st. Jupiter, easily our brightest morning object, rises at Edinburgh’s east-south-eastern horizon at 00:47 GMT on the 1st and at 23:41 BST (22:41 GMT) on the 31st, climbing to pass around 17° high in the south some four hours later. Brightening from magnitude -2.2 to -2.4, it is slow moving in Libra, being stationary on the 9th when its motion reverses from easterly to westerly. Jupiter is obvious below the Moon on the 7th when a telescope shows the Jovian disk to be 40 arcseconds wide. 
If we look below and to the left of Jupiter in the south before dawn, the three objects that catch our attention are the red supergiant star Antares in Scorpius and, further from Jupiter, the planets Mars and Saturn. Mars lies in southern Ophiuchus, between Antares and Saturn, and is heading eastwards into Sagittarius and towards a conjunction with Saturn in early April. The angle between the two planets falls from 17° to only 1.5° this month as Mars brightens from magnitude 0.8 to 0.3 and its distance falls from 210 million to 166 million km. Mars’ disk swells from 6.7 to 8.4 arcseconds, becoming large enough for surface detail to be visible through decent telescopes. Sadly, Mars (like Saturn) is so far south and so low in Scotland’s sky that the “seeing” is unlikely to be crisp and sharp. Incidentally, on the morning of the 19th Mars passes between two of the southern sky’s showpiece objects, being a Moon’s breadth below the Trifid Nebula and twice this distance above the Lagoon Nebula. Both glowing clouds of hydrogen, dust and young stars appear as hazy patches through binoculars but are stunning in photographs. Saturn, creeping eastwards just above the Teapot of Sagittarius, improves from magnitude 0.6 to 0.5 and has a 16 arcseconds disk set within its superb rings which span 37 arcseconds at midmonth and have their northern face tipped towards us at 26°. The waning Moon lies above-left of Mars on the 10th and close to Saturn on the 11th.

Diary for 2018 March

Times are GMT until March 25, BST thereafter.
1st 06h Moon occults Regulus (disappears at 06:02 for Edinburgh)
2nd 01h Full moon
4th 14h Neptune in conjunction with Sun
5th 18h Mercury 1.4° N of Venus
7th 07h Moon 4° N of Jupiter
9th 10h Jupiter stationary (motion against stars reverses from E to W)
9th 11h Last quarter
10th 01h Moon 4° N of Mars
11th 02h Moon 2.2° N of Saturn
15th 15h Mercury furthest E of Sun (18°)
17th 13h New moon
18th 01h Mercury 4° N of Venus
18th 18h Moon 8° S of Mercury
18th 19h Moon 4° S of Venus
20th 16:15 Vernal equinox
23rd 00h Moon occults Aldebaran (23:31 to 00:14 for Edinburgh)
24th 16h First quarter
25th 01h Start of British Summer Time
27th 02h Moon 1.8° S of star cluster Praesepe in Cancer
31st 14h Full moon

This is a slightly revised version, with added diary, of Alan’s article published in The Scotsman on February 28th 2018, with thanks to the newspaper for permission to republish here.

Jupiter conspicuous at opposition in Leo

The Sun, now climbing northwards at its fastest pace for the year, crosses the equator of the sky at 04:30 GMT on the 20th, the time of our vernal equinox. It then rises due east and sets due west, and day and night are equal in length around the globe. The Sun’s progress means that nightfall comes rapidly later, an effect that appears to enjoy a step-change when we set our clocks forward to British Summer Time on the 27th, though, in this instance, the daylight we gain in the evening is lost in the morning. It is noticeable, too, that the stars at nightfall are shifting quickly to the west. Orion, for example, dominates in the south as darkness falls at present, but has tumbled well over into the south-west by the month’s end. The Plough is nearing the zenith at our map times and it is the squat figure of Leo the Lion and the prominent planet Jupiter that dominate our southern sky. Jupiter is edging westwards beneath Leo’s hindquarters and passes just below the fourth magnitude star Sigma Leonis over the first few days of the month.
Above and to its left is Denebola, the Lion’s tail, while further west (right) is Leo’s leading star Regulus in the handle of the Sickle. Algieba (see chart) appears as a glorious double star through a telescope. Jupiter comes to opposition on the 8th when it stands opposite the Sun so that it rises in the east at sunset and is unmistakable as it climbs through our south-eastern evening sky to pass 40° high on Edinburgh’s meridian in the middle of the night. Eleven times wider than the Earth and yet with a day lasting under ten hours, it is 664 million km distant at opposition and shines at magnitude -2.5, more than twice as bright as any star other than the Sun. View Jupiter through binoculars or a telescope, and the fun really begins. Binoculars show its four main moons, Io, Europa, Ganymede and Callisto, which change their relative positions to east and west of the planet’s disk from night to night as they orbit almost directly above the equator. Were it not for Jupiter’s glare, we could see all four of these with the naked eye. With numerous sulphurous volcanoes, Io is the most geologically active body we know, while Europa is the only one of the four to be smaller than our Moon and is thought to harbour a deep ocean of water beneath its icy crust. This makes it so irresistible as a potential home for life that the US Congress has urged NASA to add a lander craft to a planned mission to Europa over the next decade. The Jovian disk appears 44 arcseconds wide when we view it through a telescope at present. Even a small telescope shows its main cloud belts but the smaller cloud features that indicate Jupiter’s rotation are more of a challenge. The famous Great Red Spot in the southern hemisphere is a storm that has raged for at least 185 years but is now shrinking noticeably. By the time Jupiter is sinking in the west before dawn, the two brightest objects low in the south are Mars and Saturn. 
Mars stands 18° to the right of Saturn and is slightly the brighter of the two at present – their magnitudes being 0.3 and 0.5 respectively, with both of them outshining the red supergiant star Antares in Scorpius which lies more than 5° lower and between them. The Moon stands above-left of Mars on the 1st, above Saturn on the 2nd, and above and between them both on the 29th. This month Saturn improves only slightly to magnitude 0.4 and hardly moves in southern Ophiuchus, being stationary in position on the 25th. Mars, tracking eastwards from Libra to Scorpius, more than doubles in brightness to magnitude -0.5 as it approaches from 161 million to 118 million km. It also swells in diameter from 9 to 12 arcseconds and telescopes are starting to show surface features, including its north polar cap. There is no comparison, though, with the beauty of Saturn whose superb rings have their north face tipped Earthwards at 26°, near their maximum tilt, and stretch across 38 arcseconds. Saturn’s disk is 17 arcseconds wide and has much more subdued cloud belts than Jupiter. Although Venus is brilliant at magnitude -3.9, we have slim hopes of seeing it deep in our south-eastern twilight for just a few more mornings. Mercury, already lost from view, reaches superior conjunction on the Sun’s far side on the 23rd. The sunrise/sunset times for Edinburgh change from 07:03/17:48 GMT on the 1st to 06:45/19:50 BST (05:45/18:50 GMT) on the 31st. The Moon is at last quarter on the 1st, new on the 9th, at first quarter on the 15th, full on the 23rd and at last quarter again on the 31st. New moon on the 9th brings the first and best of this year’s four eclipses when a total eclipse of the Sun occurs along a path that travels eastwards across Indonesia before swinging north-eastwards over the Pacific to end to the north of Hawaii. Surrounding areas enjoy a partial eclipse but there is nothing to see from Europe. 
The Moon skims through the outer and lighter shadow of the Earth during a penumbral lunar eclipse on the 23rd. Also best seen over the Pacific, it is partly visible from most of the Americas and eastern Asia, but only a minor fading of the southern part of the Moon may be expected.
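The apparent sizes quoted through these notes follow from simple geometry: an object's angular diameter in arcseconds is its true diameter divided by its distance, scaled by the 206,265 arcseconds in a radian. A quick check of Jupiter's opposition figures above, assuming an equatorial diameter of about 142,984 km (a standard value, not stated in the article):

```python
ARCSEC_PER_RADIAN = 206265  # small-angle conversion factor

def angular_diameter_arcsec(diameter_km, distance_km):
    # Small-angle approximation: theta ~ D / d, in arcseconds.
    return ARCSEC_PER_RADIAN * diameter_km / distance_km

# Jupiter at opposition: ~142,984 km across and 664 million km away,
# matching the 44 arcseconds quoted in the article.
print(round(angular_diameter_arcsec(142_984, 664e6), 1))  # 44.4
```

The same sum reproduces the later figure of a 37 arcseconds disk at 786 million km.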
Saturn at full tilt as Comet Halley’s meteors fly

Our charts capture the sky in transition between the stars of summer, led by the Summer Triangle of Deneb, Vega and Altair in the west, and the sparkling winter groups heralded by Taurus and the Pleiades star cluster climbing in the east. Indeed, if we look out before dawn, as Venus blazes in the east, we see a southern sky centred on Orion that mirrors that of our spectacular February evenings. October also brings our second opportunity this year to spot debris from Comet Halley. As the ashes of the Cassini spacecraft settle into Saturn, the planet reaches a milestone in its 29-year orbit of the Sun when its northern hemisphere and rings are tilted towards us at their maximum angle of 27.0° this month. In practice, our view of the rings’ splendour is compromised at present by the planet’s low altitude. Although it shines at magnitude 0.5 and is the brightest object in its part of the sky, Saturn hovers very low in the south-west at nightfall and sets around 80 minutes before our map times. The rings span 36 arcseconds at mid-month while its noticeably rotation-flattened disk measures 16 arcseconds across the equator and 14 arcseconds pole-to-pole. Catch it below and to the right of the young crescent Moon on the 24th. The Sun moves 11° further south of the equator this month as sunrise/sunset times for Edinburgh change from 07:16/18:48 BST (06:16/17:48 GMT) on the 1st to 07:18/16:34 GMT on the 31st, after we set our clocks back on the 29th. Jupiter is now lost in our evening twilight as it nears the Sun’s far side on the 26th. Saturn is not alone as an evening planet, though, for both Neptune and Uranus are well placed. They are plotted on our southern chart in Aquarius and Pisces respectively but we can obtain more detailed and helpful diagrams of their position via a Web search for a Neptune or Uranus “finder chart” – simply asking for a “chart” is more likely to lead you to astrological nonsense.
Neptune, dimly visible through binoculars at magnitude 7.8, lies only 0.6° south-east (below-left) of the star Lambda Aquarii at present and tracks slowly westwards to sit a similar distance south of Lambda by the 31st. It lies 4,346 million km away on the 1st and its bluish disk is a mere 2.3 arcseconds wide. Uranus reaches opposition on the 19th when it stands directly opposite the Sun and 2,830 million km from Earth. At magnitude 5.7 it is just visible to the unaided eye in a good dark sky, and easy through binoculars. Currently 1.3° north-west of the star Omicron Piscium and also edging westwards, it shows a bluish-green 3.7 arcseconds disk if viewed telescopically. North of Aquarius and Pisces are Pegasus and Andromeda, the former being famous for its relatively barren Square while the fuzzy smudge of the Andromeda Galaxy, M31, lies 2.5 million light years away and is easy to glimpse through binoculars if not always with the naked eye. Mercury slips through superior conjunction on the Sun’s far side on the 8th and is out of sight. Venus remains resplendent at magnitude -3.9 in the east before dawn though it does rise later and stand lower each morning. On the 1st, it rises for Edinburgh at 04:44 BST (03:44 GMT) and climbs to stand 20° high at sunrise. By the month’s end, it rises at 05:30 GMT and is 13° high at sunrise. Against the background stars, it speeds from Leo to lie 5° above Virgo’s star Spica by the 31st. Mars is another morning object, though almost 200 times dimmer at magnitude 1.8 as it moves from 2.6° below-left of Venus on the 1st to 16° above-right of Venus on the 31st. The pair pass within a Moon’s breadth of each other on the 5th and 6th when Venus appears 11 arcseconds in diameter and 91% sunlit and Mars (like Uranus) is a mere 3.7 arcseconds wide. Comet Halley was last closest to the Sun in 1986 and will not return again until 2061. 
Twice each year, though, the Earth cuts through Halley’s orbit around the Sun and encounters some of the dusty debris it has released into its path over past millennia. The resulting pair of meteor showers are the Eta Aquarids in early-May and the Orionids later this month. Although the former is a fine shower for watchers in the southern hemisphere, it yields only the occasional meteor in Scotland’s morning twilight. The Orionids are best seen in the morning sky, too, and produce fewer than half the meteors of our main annual displays. This time the very young Moon offers no interference during the shower’s broad peak between the 21st and 23rd. In fact, Orionids appear throughout the latter half of October as they diverge from a radiant point in the region to the north and east of the bright red supergiant star Betelgeuse in Orion’s shoulder and close to the feet of Gemini. Note that they streak in all parts of the sky, not just around the radiant. Orionids begin to appear when the radiant rises in the east-north-east at our map times, building in number until it passes around 50° high in the south before dawn. Under ideal conditions, with the radiant overhead in a black sky, as many as 25 fast meteors might be counted in one hour, with many leaving glowing trains in their wake. Rates were considerably higher than this between 2006 and 2009, so there is the potential for another pleasant surprise. This is a slightly revised version of Alan’s article published in The Scotsman on September 30th 2017, with thanks to the newspaper for permission to republish here.

Brilliant Venus plunges into the evening twilight

Stargazers will be hoping for better weather as Orion and the stars of winter depart westwards in our evening sky, Venus dives into the evening twilight and around the Sun’s near side, while all the other bright planets are on view too.
Indeed, Venus has the rare privilege of appearing as both an evening star and a morning star over several days, provided our western and eastern horizons are clear. Orion still dominates our southern sky at nightfall as Leo climbs in the east and the Plough balances on its handle in the north-east. The Sun’s northwards progress and our lengthening days mean that by nightfall at the month’s end Orion has drifted lower into the south-west, halfway to his setting-point in the west. He is even lower in the west-south-west by our star map times when it is the turn of Leo to reach the meridian and the Plough to be almost overhead. Leo’s leading star, Regulus, sits at the base of the Sickle of Leo, the reversed question-mark of stars from which meteors of the Leonids shower stream every November. The star Algieba in the Sickle (see chart) appears as a glorious double star through a telescope. Its components are larger and much more luminous than our Sun and lie almost 5 arcseconds apart, taking some 510 years to orbit each other. The pair lie 130 light years away and are unrelated to the star less than a Moon’s breadth to the south which is only half as far from us. The Sun travels northward across the equator at 10:28 GMT on the 20th, the moment of the vernal (spring) equinox in our northern hemisphere. On this date, nights and days are of roughly equal length around the globe. Sunrise/sunset times for Edinburgh change from 07:04/17:47 GMT on the 1st to 06:46/17:49 BST (05:46/18:49 GMT) on the 31st after we set our clocks forwards to BST on the morning of the 26th. The lunar phases change from first quarter on the 5th to full on the 12th, last quarter on the 20th and new on the 28th. Look for the young earthlit Moon well to the left of the brilliant magnitude -4.6 Venus on the 1st when telescopes show the planet’s dazzling crescent to be 47 arcseconds in diameter and 16% sunlit. 
Venus’ altitude at sunset plummets from 29° in the west-south-west on that day to only 7° in the west on the 22nd as its diameter swells to 59 arcseconds and the phase shrinks to only 1% – indeed, a few keen-sighted people might be able to discern its crescent with the naked eye and this is certainly easy to spot through binoculars. Venus dims to magnitude -4.0 by the time it sweeps 8° north of the Sun and only 42 million km from the Earth at its inferior conjunction on the 25th. This marks its formal transition from the evening to the morning sky, but because it passes so far north of the Sun as it does every eight years or so, Venus is already visible in the predawn before we lose it in the evening. In fact, it is 7° high in the east at sunrise on the 22nd, and it only gets better as the month draws to its close. Before Venus exits our evening sky, it meets Mercury as the latter begins its best spell as an evening star this year. On the 20th, the small innermost planet lies 10° to the left of Venus, shines at magnitude -1.2 and sets at Edinburgh’s western horizon 78 minutes after the Sun. By the 29th, it is 10° high forty minutes after sunset and shines at magnitude -0.4, easily visible through binoculars and 8° to the right of the very young Moon. Mars, near the Moon on the 1st and again on the 30th, dims from magnitude 1.3 to 1.5 this month as it tracks from Pisces into Aries. By the month’s end, it lies to the left of Aries’ main star Hamal and sets at our map times. It is now more than 300 million km away and its disk, less than 5 arcseconds across, is too small to be of interest telescopically. The Moon has another encounter with the Hyades star cluster on the night of the 4th-5th, hiding several of its stars but setting for Scotland before it reaches Taurus’ main star Aldebaran. The latter, though, is occulted later as seen from most of the USA. The Moon passes just below Regulus on the night of the 10th-11th and meets the planet Jupiter on the 14th. 
Jupiter, conspicuous at magnitude -2.3 to -2.5, rises in the east at 21:37 GMT on the 1st and only 31 minutes after Edinburgh’s sunset on the 31st. Now edging westwards above the star Spica in Virgo, it is unmistakable as it climbs through our south-eastern sky to cross the meridian in the small hours and lie in the south-west before dawn. Its disk, 43 arcseconds wide at mid-month, shows parallel cloud bands through almost any telescope, while its four moons may be glimpsed through binoculars as they orbit from one side to the other. Saturn, the last of the night’s planets, rises in the south-east at 03:44 GMT on the 1st and almost two hours earlier by the 31st. Improving very slightly from magnitude 0.5 to 0.4 during March, it is the brightest object about 10° above the south-south-eastern horizon before dawn. Look for it 4° below-left of the Moon on the 20th. This is a slightly-revised version of Alan’s article published in The Scotsman on February 28th 2017, with thanks to the newspaper for permission to republish here.

Venus highest and brightest as evening star

If you doubt that February offers our best evening sky of the year, then consider the evidence. The unrivalled constellation of Orion stands astride the meridian at 21:00 GMT tonight, and two hours earlier by February’s end. Around him are arrayed some of the brightest stars in the night sky, including Sirius, the brightest, and Capella, the sixth brightest which shines yellowish in Auriga near the zenith. This month also sees Venus, always the brightest planet, reach its greatest brilliancy and stand at its highest as an evening star. By our map times, a little later in the evening, Orion has progressed into the south-south-west and Sirius, nipping at his heel as the Dog Star in Canis Major, stands lower down on the meridian.
All stars twinkle as their light, from effectively a single point in space, is refracted by turbulence in the Earth’s atmosphere, but Sirius’ multi-hued scintillation is most noticeable simply because it is so bright. On the whole, planets do not twinkle since their light comes from a small disk and not a point. I mentioned two months ago how Sirius, Betelgeuse at Orion’s shoulder and Procyon, the Lesser Dog Star to the east of Betelgeuse, form a near-perfect equilateral triangle we dub the Winter Triangle. Another larger but less regular asterism, the Winter Hexagon, can be constructed around Betelgeuse. Its sides connect Capella, Aldebaran in Taurus, Rigel at Orion’s knee, Sirius, Procyon and Castor and Pollux in Gemini, the latter pair considered jointly as one vertex of the hexagon. Aldebaran, found by extending the line of Orion’s Belt up and to the right, just avoids being hidden (occulted) by the Moon on the 5th. At about 22:20 GMT, the northern edge of the Moon slides just 5 arcminutes, or one sixth of the Moon’s diameter, below and left of the star. Earlier that evening, the Moon occults several stars of V-shaped Hyades cluster which, together with Aldebaran, form the Bull’s face. Sunrise/sunset times for Edinburgh change from 08:07/16:46 on the 1st to 07:06/17:45 on the 28th. The Moon is at first quarter on the 4th and lies to the west of Regulus in Leo when full just after midnight on the night of the 10th/11th. It is then blanketed by the southern part of the Earth’s outer shadow in a penumbral lunar eclipse. The event lasts from 22:34 until 02:53 with an obvious dimming of the upper part of the Moon’s disk apparent near mid-eclipse at 00:33. This time, the Moon misses the central dark umbra of the shadow where all direct sunlight is blocked by the Earth, but only by 160 km or 5% of its diameter. 
Following last quarter on the 18th, the Moon is new on the 26th when the narrow track of an annular solar eclipse crosses the south Atlantic from Chile and Argentina to southern Africa. Observers along the track see the Moon’s ink-black disk surrounded by a dazzling ring of sunlight while neighbouring regions, but not Europe, enjoy a partial eclipse of the Sun. Venus, below and to the right of the crescent Moon as the month begins, stands at its highest in the south-west at sunset on the 11th and 12th and blazes at magnitude -4.6, reaching its greatest brilliancy on the 17th. It stands above and to the right of the slim, impressively-earthlit Moon again on the 28th. Viewed through a telescope, Venus’ dazzling crescent swells in diameter from 31 to 47 arcseconds and the illuminated portion of the disk shrinks from 40% to 17%. Indeed, steadily-held binoculars should be enough to glimpse its shape. This month its distance falls from 81 million to 53 million km as it begins to swing around its orbit to pass around the Sun’s near side late in March. Mars stands above and to the left of Venus but is fainter and dimming further from magnitude 1.1 to 1.3 during February. It appears closest to Venus, 5.4°, on the 2nd but the gap between them grows to 12° by the 28th as they track eastwards and northwards through Pisces. Both set before our map times at present but our charts pick them up at midmonth as they pass below-left of Algenib, the star at the bottom-left corner of the Square of Pegasus. Mars shrinks below 5 arcseconds in diameter this month so few surface details are visible telescopically. This is certainly not the case with Jupiter, whose intricately-detailed cloud-banded disk swells from 39 to 42 arcseconds. We do need to wait, though, for two hours beyond our map times for Jupiter to rise in the east and until the pre-dawn hours for it to stand at its highest in the south.
Second only to Venus, it shines at magnitude -2.1 to -2.3 and lies almost 4° due north of Virgo’s leading star Spica where it appears stationary on the 6th when its motion switches from easterly to westerly. Look for the two below-left of the Moon on the 15th and to the right of the Moon on the 16th. Saturn is a morning object, low down in the south-east after it rises for Edinburgh at 05:25 on the 1st and by 03:48 on the 28th. At magnitude 0.6 to 0.5, it stands on the Ophiuchus-Sagittarius border where it is below-right of the waning Moon on the 21st. It is a pity that telescopic views are hindered by its low altitude because Saturn’s disk, 16 arcseconds wide, is set within wide-open rings which measure 16 by 36 arcseconds and have their northern face tipped 27° towards the Earth. Mercury remains too deep in our south-eastern morning twilight to be seen this month. This is a slightly-revised version of Alan’s article published in The Scotsman on January 31st 2017, with thanks to the newspaper for permission to republish here.

Moon between Venus and Mars on the 2nd

The new year opens with the Moon as a slim crescent in our evening sky, its light insufficient to hinder observations of the Quadrantids meteor shower. Lasting from the 1st to the 6th, the shower is due to reach its maximum at about 15:00 GMT on the 3rd. Perhaps because of the cold weather, or a lingering hangover from Hogmanay, this may be the least appreciated of the year’s top three showers. It can, though, yield more than 80 meteors per hour under the best conditions, with some blue and yellow and all of medium speed. It can also produce some spectacular events – I still recall a Quadrantids fireball many years ago that flared to magnitude -8, many times brighter than Venus. Although Quadrantids appear in all parts of the sky, perspective means that their paths stream away from a radiant point in northern Bootes.
Plotted on our north map, this glides from left to right low across our northern sky during the evening and trails the Plough as it climbs through the north-east later in the night. The shower’s peak is quite narrow so the optimum times for meteor-spotting are before dawn on the 3rd, when the radiant stands high in the east, and during the evening of that day when Quadrantids may follow long trails from north to south across our sky. Mars and Venus continue as evening objects, improving in altitude in our south-south-western sky at nightfall and, in the case of Venus, becoming still more spectacular as it brightens from magnitude -4.3 to -4.6. Mars, more than one hundred times fainter, dims from magnitude 0.9 to 1.1 but is obvious above and to Venus’ left, their separation falling from 12° to 5° during the month as they track eastwards and northwards from Aquarius to Pisces. On the evening of the 1st, Mars stands only 18 arcminutes, just over half a Moon’s breadth, above-left of the farthest planet Neptune though, since the latter shines at magnitude 7.9, we will need binoculars if not a telescope to glimpse it. At the time, Neptune, 4,556 million km away, is a mere 2.2 arcseconds wide if viewed telescopically and Mars appears 5.7 arcseconds across from a range of 246 million km. On that evening, the young Moon lies 8° below and right of Venus, while on the 2nd the Moon stands directly between Mars and Venus. The pair lie close to the Moon again on the 31st. As its distance falls from 115 million to 81 million km this month, Venus swells from 22 to 31 arcseconds in diameter and its disk changes from 56% to 40% sunlit. In theory, dichotomy, the moment when it is 50% illuminated like the Moon at first quarter, occurs on the 14th. 
However, the way sunlight scatters in its dazzling clouds means that Venus usually appears to reach this state a few days early when it is an evening star – a phenomenon Sir Patrick Moore named the Schröter effect after the German astronomer who first reported it. Venus stands at its furthest to the east of the Sun, 47°, on the 12th. The Sun climbs 6° northwards during January and stands closer to the Earth in early January than at any other time of the year. At the Earth’s perihelion at 14:00 GMT on the 4th the two are 147,100,998 km apart, almost 5 million km less than at aphelion on 3 July. Obviously, it is not the Sun’s distance that dictates our seasons, but rather the Earth’s axial tilt away from the Sun during winter and towards it in summer. Sunrise/sunset times for Edinburgh change from 08:43/15:49 on the 1st to 08:09/16:44 on the 31st. The Moon is at first quarter on the 5th, full on the 12th, at last quarter on the 19th and new on the 28th. The Moon lies below the Pleiades on the evening of the 8th and to the left of Aldebaran in Taurus on the next night. Below and left of Aldebaran is the magnificent constellation of Orion with the bright red supergiant star Betelgeuse at his shoulder. Soon in astronomical terms, but perhaps not for 100,000 years, Betelgeuse will disintegrate in a supernova explosion. The relic of a supernova witnessed by Chinese observers in AD 1054 lies 15° further north and just 1.1° north-west of Zeta Tauri, the star at the tip of Taurus’ southern horn. The 8th magnitude oval smudge we call the Crab Nebula contains a pulsar, a 20km wide neutron star that spins 30 times each second. The conspicuous planet in our morning sky is Jupiter which rises at Edinburgh’s eastern horizon at 01:27 on the 1st and at 23:37 on the 31st. Creeping eastwards 4° north of Spica in Virgo, it brightens from magnitude -1.9 to -2.1 and is unmistakable in the lower half of our southern sky before dawn.
Catch it just below the Moon on the 19th when a telescope shows its cloud-banded disk to be 37 arcseconds broad at a distance of 786 million km. We need just decent binoculars to check out the changing positions of its four main moons. Saturn, respectable at magnitude 0.5, stands low in our south-east before dawn, its altitude one hour before sunrise improving from 3° to 8° during the month. Look to its left and slightly down from the 6th onwards to glimpse Mercury. This reaches 24° west of the Sun on the 19th and brightens from magnitude 0.9 on the 6th to -0.2 on the 24th when the waning earthlit Moon stands 3° above Saturn. This is a slightly-revised version of Alan’s article published in The Scotsman on December 31st 2016, with thanks to the newspaper for permission to republish here.

November nights end with planets on parade

With the return of earlier and longer nights, astronomy enthusiasts have plenty to observe in November. As in October, though, the real highlight is the parade of bright planets in our eastern morning sky. The first to appear is Jupiter which rises above Edinburgh’s eastern horizon at 02:04 GMT on the 1st and by 00:35 on the 30th. More conspicuous than any star, it brightens from magnitude -1.8 to -2.0 this month as it moves 4° eastwards in south-eastern Leo. It lies 882 million km away and appears 33 arcseconds wide through a telescope when it stands 4° to the left of the waning Moon on the 6th. Following close behind Jupiter at present is the even more brilliant Venus. This rises 34 minutes after Jupiter on the 1st and stands 5° below and to its left as they climb 30° into the south-east before dawn. In fact, the two were only 1° apart in a spectacular conjunction on the morning of October 26 and Venus enjoys an even closer meeting with the planet Mars over the first few days of November. On the 1st, Venus blazes at magnitude -4.3 and lies 1.1° to the right of Mars, some 250 times fainter at magnitude 1.7.
The pair are closest on the 3rd, with Venus only 0.7° (less than two Moon-breadths) below-right of Mars, before Venus races down and to Mars’ left as the morning star sweeps east-south-eastwards through the constellation Virgo. Catch Mars and Venus 2° apart on the 7th as they form a neat triangle with the Moon, a triangle that contains Virgo’s star Zavijava. Venus lies only 4 arcminutes above-left of the star Zaniah on the 13th, and 1.1° below-left of Porrima on the 18th. The final morning of the month finds it 4° above-left of Virgo’s leading star Spica. By then Mars is 14° above and to the right of Venus and 1.3° below-right of Porrima, while Jupiter is another 19° higher and to their right. Venus dims slightly to magnitude -4.2 during November, its gibbous disk shrinking as seen through a telescope from 23 to 18 arcseconds as its distance grows from 110 million to 142 million km. Mars improves to magnitude 1.5 and is only 4 arcseconds wide as it approaches from 329 million to 296 million km. Neither Mercury nor Saturn is observable during November as they reach conjunction on the Sun’s far side on the 17th and 30th respectively. More than 15° above and to the right of Jupiter is Leo’s leading star Regulus, while curling like a reversed question-mark above this is the Sickle of Leo from which meteors of the Leonids shower diverge between the 15th and 20th. The fastest meteors we see, these streak in all parts of the sky and are expected to be most numerous, albeit with rates of under 20 per hour, during the morning hours of the 18th. The Sun plunges another 7.5° southwards during November as sunrise/sunset times for Edinburgh change from 07:19/16:33 GMT on the 1st to 08:18/15:44 on the 30th. The Moon is at last quarter on the 3rd, new on the 11th, at first quarter on the 19th and full on the 25th. As the last of the evening twilight fades in early November, the Summer Triangle formed by bright stars Vega, Deneb and Altair fills our high southern sky.
By our star map time of 21:00 GMT, the Triangle has toppled into the west to be intersected by the semicircular border of both charts – the line that arches overhead from east to west and separates the northern half of our sky from the southern. The maps show the Plough in the north as it turns counterclockwise below Polaris, the Pole Star, while Cassiopeia passes overhead and Orion rises in the east. The Square of Pegasus is high in the south with Andromeda stretching to its left as a quintet of watery constellations arcs across our southern sky below them. These are Capricornus the Sea Goat, Aquarius the Water Bearer, Pisces the (Two) Fish, Cetus the Water Monster and Eridanus the River. One of Pisces’ fish lies to the south of Mirach and is joined by a cord to another depicted by a loop of stars dubbed the Circlet below the Square. Like the rest of Pisces, they are dim and easily swamped by moonlight or street-lighting. Just to the left of the Circlet, one of the reddest stars known is easily visible through binoculars. TX Piscium is a giant star some 760 light years away and has a surface temperature of perhaps 3,200°C compared with our Sun’s 5,500°C. Omega Piscium, to the left of the Circlet, is notable because it sits only two arcminutes east of the zero-degree longitude line in the sky – making it one of the closest naked-eye stars to the celestial equivalent of our Greenwich Meridian. The celestial counterparts of latitude and longitude are called declination and right ascension. Declination is measured northwards from the sky’s equator while right ascension is measured eastwards from the point at which the Sun crosses northwards over the equator at the vernal equinox. That point lies 7° to the south of Omega but drifts slowly westwards as the Earth’s axis wobbles over a period of 26,000 years – the effect known as precession. Below Pisces lies Cetus, the mythological beast from which Perseus rescued Andromeda.
Its brightest stars, Menkar and Deneb Kaitos, are both orange-red giants, the latter almost identical in brightness to Polaris at magnitude 2.0. Another, Mira, takes 11 months to pulsate between naked-eye and telescopic visibility and is currently near its minimum brightness.
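The disk sizes quoted above for Jupiter and Venus follow from simple small-angle geometry: angular size is physical diameter divided by distance. As a rough cross-check (the equatorial diameters below are standard values, not given in the articles):

```python
import math

def angular_diameter_arcsec(diameter_km, distance_km):
    """Apparent angular size via the small-angle approximation."""
    return math.degrees(diameter_km / distance_km) * 3600.0

# Standard equatorial diameters (assumed, not from the articles):
# Jupiter ~142,984 km at the quoted 882 million km -> ~33 arcsec
# Venus ~12,104 km at 110 and 142 million km -> ~23 and ~18 arcsec
print(round(angular_diameter_arcsec(142_984, 882e6)))  # 33
print(round(angular_diameter_arcsec(12_104, 110e6)))   # 23
print(round(angular_diameter_arcsec(12_104, 142e6)))   # 18
```

The same one-liner reproduces the 37-arcsecond figure for Jupiter at 786 million km, so the quoted disk widths are mutually consistent.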
An intriguing hint of a certain type of gamma-ray light at the center of the Milky Way might be a product of elusive dark matter — or it might not be. For the past several years, scientists have debated whether the light is really there, and what it means. Now, researchers are petitioning the management team of NASA's Fermi Gamma-Ray Space Telescope, the observatory that saw the light, to change its observing strategy to determine once and for all whether the signal really exists. However, even if there are extra gamma-ray photons coming from the center of the galaxy, scientists are a ways from knowing whether the photons were made by dark matter. Theories suggest some mysterious form of matter that can't be seen or touched is rife throughout the universe, making its presence known only through its gravitational pull. The leading theory behind this dark matter posits that it's made of a new kind of fundamental particle called a WIMP (weakly interacting massive particle). [Graphic: Dark Matter Explained] Because WIMPs are thought to be their own antiparticle (antimatter is a mirror version of normal matter that annihilates ordinary particles when it meets them), if two WIMPs were to collide, they would destroy each other on the spot. These explosions, which should be more common toward the center of the galaxy where dark matter would be densest, would likely create new particles that would give rise to gamma-ray photons of a precise energy. That light is what Fermi might have seen. "It's pretty ambiguous — it could be a statistical fluke, it could be a systematic effect or it could be a true signal," said Christoph Weniger, an astrophysicist at the University of Amsterdam in the Netherlands. "Right now, there are signs of all three." Weniger is lead author of a recent white paper suggesting the Fermi telescope spend more time looking toward the center of the Milky Way in search of this feature. 
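The "precise energy" of such annihilation photons follows directly from kinematics: two slow-moving WIMPs annihilating to two photons put all of their rest energy into the pair, so each photon carries the WIMP mass, while in a photon-plus-Z-boson channel the photon energy is shifted down by the Z mass. A minimal sketch of the inference (the Z mass is a standard value, assumed here, not from the article):

```python
import math

M_Z = 91.19  # Z boson mass in GeV (standard value, assumed)

def wimp_mass_from_line(e_gamma_gev, channel="gg"):
    """WIMP mass implied by a gamma-ray line at energy e_gamma_gev (GeV).

    chi chi -> gamma gamma: each photon carries the WIMP rest energy,
    so m_chi = E_gamma.
    chi chi -> gamma Z: momentum conservation gives
    E_gamma = m_chi - M_Z**2 / (4 * m_chi), solved here for m_chi.
    """
    if channel == "gg":
        return e_gamma_gev
    return (e_gamma_gev + math.sqrt(e_gamma_gev**2 + M_Z**2)) / 2.0

# A 130 GeV line implies a ~130 GeV WIMP (two-photon channel)
# or a ~144 GeV WIMP (photon-plus-Z channel).
```

This is why a confirmed line at 130 GeV would pin down not just the existence of dark matter but a specific particle mass.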
The paper was submitted in response to a call for alternative Fermi observing strategies by the telescope's project scientist Julie McEnery, an astrophysicist at NASA's Goddard Space Flight Center in Greenbelt, Md.

A new strategy

Fermi was launched in June 2008, and has been surveying the entire sky evenly since then. Although one of its goals is to learn more about dark matter, the observatory is used for many areas of research, including spinning stars called pulsars and glowing supermassive black holes in other galaxies, both of which emit gamma-ray light. Weniger's proposal recommends that Fermi observe the center of the galaxy whenever it is visible, which would more than double the rate at which it collects data from this part of the sky. However, the intent is not to divert too much time away from other projects. "We are very concerned about having a negative impact on other people's science projects," said Harvard University astrophysicist Doug Finkbeiner, a co-author of the white paper. "We're just really trying to do what's right for the project." Fermi is funded to continue operating through at least 2016, potentially offering plenty of time to settle the question of the galactic center light. "We think if we started a new observing strategy immediately, we could have the answer by 2015," Finkbeiner told SPACE.com. Armed with more data from the center of the galaxy, the scientists hope to determine if there really is an excess of gamma-ray light in the particular energy range — 130 gigaelectron volts (GeV) — that Fermi has seen hints of so far. It's possible that these hints are just a statistical fluctuation, and with more data, the excess will go away. It's also possible that Fermi's data really do show an excess of these photons, but that they're due to some artifact in the instrument — a systematic error.
"We've already gone through a lot of hypotheses about what could be wrong with the instrument, and all of them fail in some way or another," Finkbeiner said. "Something unlikely has happened here. Either a very unlikely statistical fluctuation, some kind of problem in the instrument that is masking itself in some unlikely way, or we have 130 GeV photons. They're all actually very unlikely, but one of them still happened." "In my opinion, the most important issue is to rule out the possibility that the line feature in the data might be instrumental in origin," said Simona Murgia, an astrophysicist at the University of California, Irvine and a member of the Fermi collaboration galactic center analysis team. "Additional data from modified observations would help understand this better." The situation is also complicated by a second, apparently unrelated, potential indication of dark matter in the Fermi data. In addition to 130-GeV photons, scientists have seen an excess of lower-energy gamma rays in the range of 2-3 GeV. While this signal is strong enough to rule out the chance that it's a statistical fluctuation, it could also be caused by regular astrophysical sources, such as pulsars. But if the 130-GeV signal persists and can't be attributed to a systematic error, then astronomers may have found the first proof that dark matter exists, and a look at what it's made of. "If it is a real line, it would be a 'smoking gun' of dark matter," said University of California, Irvine astrophysicist Kevork Abazajian, who's studied the other, lower-energy 2-3 GeVFermi gamma-ray signal. The proposed observing strategy would not shed much light on his feature, but it would help resolve the higher-energy signal, Abazajian said. "They make a pretty compelling case," said Dan Hooper, an astronomer at the Fermi National Accelerator Laboratory in Batavia, Ill., and the University of Chicago who has also studied the lower-energy gamma-ray signal. 
"It would be great to have some more data from this direction of the sky, and the downsides of their proposed strategy seem minimal." Hooper said he was skeptical that the signal Weniger and his team are chasing is actually dark matter, but that more data would help settle the matter. Other projects are currently chasing after dark matter in different ways. The Alpha Magnetic Spectrometer (AMS), a particle detector attached to the outside of the International Space Station, is also looking for signs of dark-matter annihilation explosions in space. The first data from that experiment, announced in April, show a hint of evidence that could be caused by dark matter, but the findings are very preliminary. And if they do end up pointing toward dark matter, they suggest a different mass of WIMP from the Fermi results, so the two results aren't necessarily complementary. Other experiments hope to catch dark-matter particles directly, on the very rare occasions they do collide with normal matter particles. Such detectors — which include the XENON Dark Matter Project in Italy, the LUX (Large Underground Xenon) experiment in South Dakota, and the SuperCDMS (Cryogenic Dark Matter Search) experiment in Minnesota — are buried deep underground, where almost nothing but dark matter can reach them. None has found definitive results so far. The team behind the new Fermi proposal said it's likely what Fermi's seeing isn't dark matter — but they'd rather know for sure. "I think you always hope a little bit, but then you have to remember: You're a scientist; you just want to get to the truth," Finkbeiner said. "If the truth is there's a 130-GeV WIMP, then that will be fantastic; we'll understand something new about physics." And if that's not the case, then they'll know it's time to move on, he said. At least they will have left no stone unturned. Copyright 2013 SPACE.com, a TechMediaNetwork company. All rights reserved. 
This material may not be published, broadcast, rewritten or redistributed.
Back in 2011, astronomers observed a huge burst of plasma expelled from the sun crashing into Jupiter, resulting in giant and colorful auroras on the planet. These auroras were the first time researchers were able to observe directly how solar storms can trigger X-ray auroras on Jupiter. According to the data, this phenomenon was eight times brighter than the usual auroras surrounding the planet. A new study featured in the Journal of Geophysical Research – Space Physics explains how wind coming from the sun disturbs Jupiter’s magnetosphere as it smashes into the planet. New observations found that such a disturbance has a great effect on Jupiter’s auroras, so studying them will allow scientists to learn more about how the sun interacts indirectly with the largest planet in the solar system. Astronomers can observe Jupiter’s auroras thanks to the Chandra X-ray Observatory, a telescope that scans the universe in X-rays rather than visible light. The observations scientists made with Chandra spanned 11 hours in October 2011. Based on the data it collected, scientists then built a 3D model that highlighted the X-ray emissions. While they were observed only in X-rays, these auroras might be strong enough to be seen in visible light, given the right conditions. According to John Clarke, a scientist who wasn’t involved in this study, “X-ray auroras would also emit at other wavelengths.” The auroras we have on Earth – also known as the northern or southern lights – are also caused by sun particles smashing into the planet’s magnetic field. As they are pulled toward the planet’s poles, some of these particles reach the upper atmosphere, where they cause neutral particles to glow in reds, greens, and purples. But the auroras on Jupiter aren’t just caused by the sun.
Astronomer Tom Cravens, who also didn’t participate in the study, explains that solar activity is not responsible for “the main Jovian aurora, at lower latitudes, and mainly ultraviolet.” The cause for those auroras is Jupiter’s rotation and the fact that the magnetospheric plasma moves in line with this rotation. This discovery will also help the scientific team working on the Juno spacecraft, which should reach Jupiter in July this year. Juno is mainly designed to collect more data about the interaction between the sun and Jupiter, so it’s essential that we know more about how solar storms disrupt the planet’s magnetosphere.
Washington D.C. [USA], May 11 (ANI): Several recent observations of Mars have hinted that it might presently harbour liquid water, a requirement for life as we know it. However, in a new paper in Nature Astronomy, a team of researchers have shown that stable liquids on present-day Mars are not suitable environments for known terrestrial organisms. "Life on Earth, even extreme life, has certain environmental limits that it can withstand," noted Dr Edgard G Rivera-Valentin, a Universities Space Research Association (USRA) scientist at the Lunar and Planetary Institute (LPI) and lead author of the investigation. "We investigated the distribution and chemistry of stable liquids on Mars to understand whether these environments would be suitable to at least extreme life on Earth." Due to Mars' low temperatures and extremely dry conditions, should a liquid water droplet be placed on Mars, it would nearly instantaneously either freeze, boil, or evaporate away. That is unless that droplet had dissolved salts in it. Such salt water, or brine, would have a lower freezing temperature and would evaporate at a slower rate than pure liquid water. Because salts are found across Mars, brines could form there. "We saw evidence of brine droplets forming on the strut of the Phoenix lander, where they would have formed under the warmed spacecraft environment", noted Dr German Martinez, a USRA scientist at the LPI, co-investigator of the Mars 2020 Perseverance rover, and co-author of the study. Further, some Martian salts can undergo a process called deliquescence. When salt is at the right temperature and relative humidity, it will take in water from the atmosphere to become a salty liquid. "We've been conducting experiments under Martian simulated conditions at the University of Arkansas for many years now to study these types of reactions. 
Using what we've learned in the lab, we can predict what will likely happen on Mars," says Dr Vincent Chevrier, co-author of the investigation at the University of Arkansas. The team of researchers used laboratory measurements of Mars-relevant salts along with Martian climate information from both planetary models and spacecraft measurements. They developed a model to predict where, when, and for how long brines are stable on the surface and shallow subsurface of Mars. They found that brine formation from some salts can lead to liquid water over 40 per cent of the Martian surface but only seasonally, during 2 per cent of the Martian year. "In our work, we show that the highest temperature a stable brine will experience on Mars is -48°C (-55° F). This is well below the lowest temperature we know life can tolerate," says Dr Rivera-Valentin. "For many years we have worried about contaminating Mars with terrestrial life as we have sent spacecraft to explore its surface. These new results reduce some of the risks of exploring Mars," noted Dr Alejandro Soto at the Southwest Research Institute and co-author of the study. "We have shown that on a planetary scale the Martian surface and shallow subsurface would not be suitable for terrestrial organisms because liquids can only form at rare times, and even then, they form under harsh conditions. However, there might be unexplored life on Earth that would be happy under these conditions," added Dr Rivera-Valentin. More environmental data from Mars, such as through the upcoming Mars 2020 mission to Jezero crater, along with further exploration of Earth's biome may shed some light on the potential for finding life on Mars today. (ANI)
The Almagest is a 2nd-century Greek-language mathematical and astronomical treatise on the apparent motions of the stars and planetary paths, written by Claudius Ptolemy. The Almagest is one of the most influential scientific texts of all time; it canonized a geocentric model of the Universe that was accepted for more than 1200 years from its origin in Hellenistic Alexandria, in the medieval Byzantine and Islamic worlds, and in Western Europe through the Middle Ages and early Renaissance until Copernicus. It is also a key source of information about ancient Greek astronomy. Ptolemy set up a public inscription at Canopus, Egypt, in 147 or 148. N. T. Hamilton found that the version of Ptolemy’s models set out in the Canopic Inscription was earlier than the version in the Almagest. Hence the Almagest could not have been completed before about 150, a quarter-century after Ptolemy began observing. The Syntaxis Mathematica consists of thirteen sections, called books. As with many medieval manuscripts that were hand-copied or, particularly, printed in the early years of printing, there were considerable differences between various editions of the same text, as the process of transcription was highly personal. An example illustrating how the Syntaxis was organized is given below. It is a Latin edition printed in 1515 at Venice by Petrus Lichtenstein.
- Book I contains an outline of Aristotle’s cosmology: on the spherical form of the heavens, with the spherical Earth lying motionless at the center, with the fixed stars and the various planets revolving around the Earth. Then follows an explanation of chords with a table of chords; observations of the obliquity of the ecliptic (the apparent path of the Sun through the stars); and an introduction to spherical trigonometry.
- Book II covers problems associated with the daily motion attributed to the heavens, namely risings and settings of celestial objects, the length of daylight, the determination of latitude, the points at which the Sun is vertical, the shadows of the gnomon at the equinoxes and solstices, and other observations that change with the spectator’s position. There is also a study of the angles made by the ecliptic with the vertical, with tables.
- Book III covers the length of the year and the motion of the Sun. Ptolemy explains Hipparchus’ discovery of the precession of the equinoxes and begins explaining the theory of epicycles.
- Books IV and V cover the motion of the Moon, lunar parallax, the motion of the lunar apogee, and the sizes and distances of the Sun and Moon relative to the Earth.
- Book VI covers solar and lunar eclipses.
- Books VII and VIII cover the motions of the fixed stars, including precession of the equinoxes. They also contain a star catalogue of 1022 stars, described by their positions in the constellations, together with ecliptic longitude and latitude. Ptolemy states that the longitudes (which increase due to precession) are for the beginning of the reign of Antoninus Pius (138 AD), whereas the latitudes do not change with time. The constellations north of the zodiac and the northern zodiac constellations (Aries through Virgo) are in the table at the end of Book VII, while the rest are in the table at the beginning of Book VIII.
The brightest stars were marked first magnitude (m = 1), while the faintest visible to the naked eye were sixth magnitude (m = 6). Each numerical magnitude was considered twice the brightness of the following one, which is a logarithmic scale. (The ratio was subjective as no photodetectors existed.) This system is believed to have originated with Hipparchus. The stellar positions are of Hipparchan origin, despite Ptolemy’s claim to the contrary.
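The factor-of-two steps described above make the magnitude scale logarithmic, and they sit close to the modern convention, in which five magnitudes correspond to exactly a factor of 100 in brightness (about 2.512 per step). A small sketch comparing the two conventions (Pogson's modern calibration is a standard fact, not from this text):

```python
def brightness_ratio(m_faint, m_bright, step=100 ** 0.2):
    """Flux ratio between two magnitudes for a given per-magnitude factor."""
    return step ** (m_faint - m_bright)

# Ptolemy's rough factor-of-two steps: 1st vs 6th magnitude -> 2**5 = 32
ptolemaic = brightness_ratio(6, 1, step=2.0)
# Modern (Pogson) scale: 1st vs 6th magnitude -> exactly a factor of 100
modern = brightness_ratio(6, 1)
```

So a naked-eye catalog spanning magnitudes 1 through 6 covers roughly a factor of 30 to 100 in apparent brightness, depending on which convention is used.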
Ptolemy identified 48 constellations: the 12 of the zodiac, 21 to the north of the zodiac, and 15 to the south.
- Book IX addresses general issues associated with creating models for the five naked-eye planets, and the motion of Mercury.
- Book X covers the motions of Venus and Mars.
- Book XI covers the motions of Jupiter and Saturn.
- Book XII covers stations and retrograde motion, which occurs when planets appear to pause, then briefly reverse their motion against the background of the zodiac. Ptolemy understood these terms to apply to Mercury and Venus as well as the outer planets.
- Book XIII covers motion in latitude, that is, the deviation of planets from the ecliptic.
The cosmology of the Syntaxis includes five main points, each of which is the subject of a chapter in Book I. What follows is a close paraphrase of Ptolemy’s own words from Toomer’s translation.
- The celestial realm is spherical and moves as a sphere.
- The Earth is a sphere.
- The Earth is at the center of the cosmos.
- The Earth, in relation to the distance of the fixed stars, has no appreciable size and must be treated as a mathematical point.
- The Earth does not move.

The star catalog

As mentioned, Ptolemy includes a star catalog containing 1022 stars. He says that he “observed as many stars as it was possible to perceive, even to the sixth magnitude“, and that the ecliptic longitudes are for the beginning of the reign of Antoninus Pius (138 AD). But calculations show that his ecliptic longitudes correspond more closely to around 58 AD. He states that he found that the longitudes had increased by 2° 40′ since the time of Hipparchos. This is the amount of axial precession that occurred between the time of Hipparchos and 58 AD. It appears therefore that Ptolemy took a star catalog of Hipparchos and simply added 2° 40′ to the longitudes. Many of the longitudes and latitudes have been corrupted in the various manuscripts.
Most of these errors can be explained by similarities in the symbols used for different numbers. For example, the Greek letters Α and Δ were used to mean 1 and 4 respectively, but because these look similar copyists sometimes wrote the wrong one. In Arabic manuscripts, there was confusion between for example 3 and 8 (ج and ح). (At least one translator also introduced errors. Gerard of Cremona, who translated an Arabic manuscript into Latin around 1175, put 300° for the latitude of several stars. He had apparently learned from Moors, who used the letter “sin” for 300, but the manuscript he was translating came from the East, where “sin” was used for 60.) Even without the errors introduced by copyists, and even accounting for the fact that the longitudes are more appropriate for 58 AD than for 137 AD, the latitudes and longitudes are not very accurate, with errors of large fractions of a degree. Some errors may be due to atmospheric refraction causing stars that are low in the sky to appear higher than where they really are. A series of stars in Centaurus is off by a couple of degrees, including the star we call Alpha Centauri. These were probably measured by a different person or persons from the others, and in an inaccurate way.

Ptolemy’s planetary model

Ptolemy assigned the following order to the planetary spheres, beginning with the innermost:
- Moon
- Mercury
- Venus
- Sun
- Mars
- Jupiter
- Saturn
- Sphere of fixed stars

Other classical writers suggested different sequences. Plato (c. 427 – c. 347 BC) placed the Sun second in order after the Moon. Martianus Capella (5th century AD) put Mercury and Venus in motion around the Sun. Ptolemy’s authority was preferred by most medieval Islamic and late medieval European astronomers. Ptolemy inherited from his Greek predecessors a geometrical toolbox and a partial set of models for predicting where the planets would appear in the sky. Apollonius of Perga (c. 262 – c. 190 BC) had introduced the deferent and epicycle and the eccentric deferent to astronomy.
Hipparchus (2nd century BC) had crafted mathematical models of the motion of the Sun and Moon. Hipparchus had some knowledge of Mesopotamian astronomy, and he felt that Greek models should match those of the Babylonians in accuracy. He was unable to create accurate models for the remaining five planets. The Syntaxis adopted Hipparchus’ solar model, which consisted of a simple eccentric deferent. For the Moon, Ptolemy began with Hipparchus’ epicycle-on-deferent, then added a device that historians of astronomy refer to as a “crank mechanism”. He succeeded in creating models for the other planets, where Hipparchus had failed, by introducing a third device called the equant. Ptolemy wrote the Syntaxis as a textbook of mathematical astronomy. It explained geometrical models of the planets based on combinations of circles, which could be used to predict the motions of celestial objects. In a later book, the Planetary Hypotheses, Ptolemy explained how to transform his geometrical models into three-dimensional spheres or partial spheres. In contrast to the mathematical Syntaxis, the Planetary Hypotheses is sometimes described as a book of cosmology. Ptolemy’s comprehensive treatise of mathematical astronomy superseded most older texts of Greek astronomy. Some were more specialized and thus of less interest; others simply became outdated by the newer models. As a result, the older texts ceased to be copied and were gradually lost. Much of what we know about the work of astronomers like Hipparchus comes from references in the Syntaxis. The first translations into Arabic were made in the 9th century, with two separate efforts, one sponsored by the caliph Al-Ma’mun. Sahl ibn Bishr is thought to be the first Arabic translator. By this time, the Syntaxis was lost in Western Europe or only dimly remembered.
Henry Aristippus made the first Latin translation directly from a Greek copy, but it was not as influential as a later translation into Latin made by Gerard of Cremona from the Arabic (finished in 1175). Gerard translated the Arabic text while working at the Toledo School of Translators, although he was unable to translate many technical terms such as the Arabic Abrachir for Hipparchus. In the 12th century, a Spanish version of the Almagest was produced, which was later translated under the patronage of Alfonso X. In the 15th century, a Greek version appeared in Western Europe. The German astronomer Johannes Müller (known, from his birthplace of Königsberg, as Regiomontanus) made an abridged Latin version at the instigation of the Greek churchman Johannes, Cardinal Bessarion. Around the same time, George of Trebizond made a full translation accompanied by a commentary that was as long as the original text. George’s translation, done under the patronage of Pope Nicholas V, was intended to supplant the old translation. The new translation was a great improvement; the new commentary was not, and aroused criticism. The Pope declined the dedication of George’s work, and Regiomontanus’s translation had the upper hand for over 100 years. During the 16th century, Guillaume Postel, who had been on an embassy to the Ottoman Empire, brought back Arabic disputations of the Almagest, such as the works of al-Kharaqī, Muntahā al-idrāk fī taqāsīm al-aflāk (“The Ultimate Grasp of the Divisions of Spheres”, 1138/9). *This article was originally published at en.wikipedia.org.
The Fermi-LAT collaboration has published its fourth source catalog, named 4FGL. Based on eight years of data, it contains 5064 celestial objects emitting gamma rays at energies around 1 GeV, adding more than 2000 high-energy sources to the previous collection (published in 2015). More than one fourth of the objects are of unknown nature, calling for numerous follow-up studies. Although its volume is modest compared to the billions of sources listed in optical catalogs, the 4FGL catalog is by far the deepest in gamma-ray astronomy and serves as a reference to the entire domain. The catalog, coordinated by a researcher at the Astrophysics Department (AIM Laboratory) of CEA-Irfu at Paris-Saclay, is accessible on line at the NASA Fermi web site. In parallel, the 4LAC census of active galactic nuclei (coordinated by a researcher at CNRS/CENBG) is also made available to the community.

Surveying the gamma-ray sky continuously

Fermi is a NASA satellite launched in June 2008, which carries the LAT (Large Area Telescope), a wide field telescope collecting gamma rays from 30 MeV to 1 TeV. It has been scanning the whole sky every three hours since August 2008. This continuous survey results in a catalog of sources provided to the scientific community, now at its fourth update called 4FGL. More than two thousand sources were discovered since the previous catalog (3FGL) published in 2015. This improvement was made possible by doubling the observing time (8 years for 4FGL), understanding the detector better (Pass 8 data), improving the analysis methods, and modeling at higher resolution the interstellar emission of our Milky Way, which forms a complex background (see figure below) from which individual sources are difficult to sort out. The catalog provides for each source its localization, its energy spectrum, its temporal variability (which is common) and its counterpart at other wavelengths when it can be found.
Most sources (62%) are blazars, giant black holes at the center of faraway galaxies whose powerful jets of particles point toward us (making the jets much brighter). A smaller fraction (5%) is made of pulsars, magnetized neutron stars rotating very fast, with a period less than one second, down to a few milliseconds. An even smaller fraction (3%) is shared between other types of sources located in our Milky Way including supernova remnants, which are suspected of accelerating the cosmic rays producing the interstellar emission. Add a drop (2%) of other extragalactic sources, radio and starburst galaxies. The rest (28%) could not be associated with known objects, partly because the catalogs of counterparts are not deep enough, and because of confusion in the disk of the Milky Way. New types of sources are probably lurking among those unidentified sources, but it is difficult today to tell them apart from the commoners.

A pool of sources for follow-up studies

This work has already been cited 150 times and will be, like the three previous catalogs, a starting point for many follow-up studies. The first of those is the 4LAC catalog of the 2863 active galactic nuclei detected by Fermi (see press release at CENBG), detailing their emission over the entire electromagnetic spectrum from radio waves to gamma rays. Another application is the search for new pulsars, which can be set apart from blazars based on their curved spectrum and very weak variability. In a few years, the Fermi-LAT catalog will be the main pool of targets for the Cherenkov observatory CTA, currently under construction, which will detect gamma rays at even higher energies (around 1 TeV). The LAT telescope keeps working and the results are regularly updated. The first incremental 4FGL catalog (Data Release 2) covers 10 years of data and is now on line at the NASA Fermi Science Support Center. The two additional years of observations allowed finding some 700 new gamma-ray sources.
Most are close to the detection limit, but a few are very variable blazars whose jet became active over the last two years. The next incremental catalog (DR3) will cover 12 years of observations and should be available in early 2021. In the longer run, the Fermi group at Saclay keeps improving the model of interstellar emission in order to increase the reliability of faint sources in the Milky Way. The LAT telescope has no identified successor to this day. It will be hard to beat!
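One way to see how the curved-spectrum criterion mentioned above separates pulsar candidates from blazars: fit a parabola to the spectrum in log-log space, where a power law is a straight line and a pulsar-like cutoff bends downward. The spectra below are invented toy examples for illustration, not catalog data:

```python
import numpy as np

def log_curvature(energies_gev, fluxes):
    """Quadratic coefficient of a parabola fit in log-log space.

    Near zero for a power law; clearly negative for a spectrum
    with an exponential cutoff.
    """
    coeffs = np.polyfit(np.log10(energies_gev), np.log10(fluxes), deg=2)
    return coeffs[0]  # highest-degree coefficient first

e = np.logspace(-1, 1, 15)                  # 0.1 to 10 GeV
blazar_like = e ** -2.2                     # pure power law
pulsar_like = e ** -1.5 * np.exp(-e / 3.0)  # power law with a ~3 GeV cutoff

# log_curvature(e, blazar_like) comes out near zero, while
# log_curvature(e, pulsar_like) is strongly negative,
# flagging the curved, pulsar-like spectrum.
```

Real classification pipelines combine such spectral-shape measures with variability indices, since blazars flare strongly while pulsars are steady emitters.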
Spitzer spots new neighbours

NASA has announced the discovery of a nearby alien solar system. The space agency’s Spitzer Space Telescope has revealed the first known system of seven Earth-size planets around a single star. Three of the planets are located in the habitable zone, the region around the star where a rocky planet would likely have liquid water. The discovery sets a new record for greatest number of habitable-zone planets found around a single star outside our Solar System. “This discovery could be a significant piece in the puzzle of finding habitable environments, places that are conducive to life,” said Thomas Zurbuchen, associate administrator of NASA’s Science Mission Directorate in Washington. “Answering the question ‘are we alone’ is a top science priority and finding so many planets like these for the first time in the habitable zone is a remarkable step forward toward that goal.” The alien system is about 40 light-years (378 trillion km) from Earth in the constellation Aquarius. This exoplanet system is called TRAPPIST-1, named for The Transiting Planets and Planetesimals Small Telescope (TRAPPIST) in Chile. In May 2016, researchers using TRAPPIST announced the discovery of three planets in the system. Assisted by several ground-based telescopes, including the European Southern Observatory's Very Large Telescope, Spitzer confirmed the existence of two of these planets and discovered five additional ones, increasing the number of known planets in the system to seven. The new results have been published in the journal Nature. The team was able to precisely measure the sizes of the seven planets and developed estimates of the masses of six of them, allowing their densities to be inferred. Based on these derived densities, all of the TRAPPIST-1 planets are likely to be rocky. Further observations will reveal whether they are rich in water, and potentially reveal whether any could have liquid water on their surfaces.
All seven of the TRAPPIST-1 planets orbit closer to their host star than Mercury is to our Sun, but because the TRAPPIST-1 star is cooler, classified as an ultra-cool dwarf, liquid water could still survive on planets orbiting very close to it. If a person were standing on the surface of one of the planets, they could gaze up and see neighbouring planets in such detail that they could spot geological features or even clouds, as they would sometimes appear larger than the moon in Earth's sky. “The seven wonders of TRAPPIST-1 are the first Earth-size planets that have been found orbiting this kind of star,” said Michael Gillon, lead author of the paper and the principal investigator of the TRAPPIST exoplanet survey. “It is also the best target yet for studying the atmospheres of potentially habitable, Earth-size worlds.”
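The "likely rocky" inference from measured sizes and masses is just bulk-density arithmetic: treat the planet as a uniform sphere and divide mass by volume. A minimal sketch of the calculation follows; the Earth values are standard constants, and the example planet is purely illustrative, not one of the TRAPPIST-1 measurements.

```python
import math

EARTH_MASS_KG = 5.972e24
EARTH_RADIUS_M = 6.371e6

def bulk_density(mass_kg, radius_m):
    """Bulk density (kg/m^3) of a planet treated as a uniform sphere."""
    volume = (4.0 / 3.0) * math.pi * radius_m**3
    return mass_kg / volume

# Illustrative example: a planet with Earth's mass and radius comes out
# around 5.5 g/cm^3, the density of a rocky, iron-cored world. A
# water-rich world of the same size would come out far lower.
rho = bulk_density(EARTH_MASS_KG, EARTH_RADIUS_M)
print(rho)
```

Comparing a density computed this way against rock, ice, and iron benchmarks is, in outline, how "likely to be rocky" verdicts are reached.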
Astrophysical Journal 866, 131 Link to Article [DOI: 10.3847/1538-4357/aadf34] Space Telescope Science Institute, 3700 San Martin Dr., Baltimore, MD 21218, USA 1I/’Oumuamua is the first interstellar interloper to have been detected. Because planetesimal formation and ejection of predominantly icy objects are common by-products of the star and planet formation processes, in this study we address whether 1I/’Oumuamua could be representative of this background population of ejected objects. The purpose of the study of its origin is that it could provide information about the building blocks of planets in a size range that remains elusive to observations, helping to constrain planet formation models. We compare the mass density of interstellar objects inferred from its detection to that expected from planetesimal disks under two scenarios: circumstellar disks around single stars and wide binaries, and circumbinary disks around tight binaries. Our study makes use of a detailed study of the PanSTARRS survey volume; takes into account that the contribution from each star to the population of interstellar planetesimals depends on stellar mass, binarity, and planet presence; and explores a wide range of possible size distributions for the ejected planetesimals, based on solar system models and observations of its small-body population. We find that 1I/’Oumuamua is unlikely to be representative of a population of isotropically distributed objects, favoring the scenario that it originated from the planetesimal disk of a young nearby star whose remnants are highly anisotropic. Finally, we compare the fluxes of meteorites and micrometeorites observed on Earth to those inferred from this population of interstellar objects, concluding that it is unlikely that one of these objects is already part of the collected meteorite samples.
Previously, astronomers had been under the impression that the heavy elements — gold, platinum, lead, uranium, etc — came from supernova explosions. But now scientists have announced a new theory for these highly valuable elements, this one involving two ultra-dense neutron stars and one spectacularly violent, grossly expensive collision. We Are All Made of Stars Essentially, we’re all here today because some star somewhere in space exploded once upon a time. Down in the interior of stars, the high pressure and heat cooks up elements like carbon and oxygen atoms (the stuff we’re made of). So when it inevitably comes time for that star to die, that explosion shoots out all of the ingredients for life as we know it. This explanation didn’t, however, quite manage to explain where the denser elements got their start. Because while the majority of light elements come with a fairly simple recipe, a heavier one like gold requires 79 protons, 79 electrons and 118 neutrons — that’s a hell of a lot of ingredients, which is why it takes these absurdly dense neutron stars, which come packing way more atomic supplies, to give us all those beautiful, heavy, glittering goods. What’s a Neutron Star? When a massive star undergoes a Type II, Type Ib, or Type Ic supernova — or in other words, when its core is essentially crushed by the force of its own gravity — there are two potential outcomes. It can either turn into a black hole or emerge from its supernova cocoon as a neutron star. To get the latter, you’d need to start with a star about 4 to 8 times the size of our sun. Once the star burns off enough nuclear fuel to the point that the core can no longer support itself, gravity finally wins and collapses the core with enough force to cause protons and electrons to assimilate. Which creates neutrons. Which, as you may have already guessed, is where neutron stars get their name. 
To get an idea of just how dense a neutron star is, a mere teaspoon of the stuff would weigh about 10 billion tons. (Of course, if you actually did extract a teaspoon of neutronium goo, you’d lose all that wonderful gravitational force holding everything together, and the whole thing would immediately explode into a giant mass of neutrons about the size of a planet that would then break down to its individual proton and electron parts. To put it bluntly, you, dear reader, would die. But that’s neither here nor there.) When Two Neutron Stars Love Each Other Very Much… So under most circumstances, these insanely dense dead stars will float around the universe doing no one any harm. But in binary star systems, the two are destined to collide. And this is what NASA’s Swift space telescope observed during an all-sky survey on June 3. After seeing a flash of light called a short gamma-ray burst (GRB) far, far away in the constellation Leo, astronomers were quickly able to deduce (with the help of a few theoretical models) that what they were seeing was the radioactive afterglow from a gargantuan mass of heavy metals created in the wake of a neutron star collision. Previously, scientists had only been able to hypothesize that GRBs were the result of two colliding neutron stars, but now we have actual proof. When they make contact, several exciting things happen very quickly. Most of the material actually collapses to form a black hole. Some of the material then gets sucked into the black hole. That is the event that causes the gamma-ray burst. Some of the material gets spewed out into space. That material, since it came from neutron stars, is very rich in neutrons, and as a result, is very efficient at forming these heavy elements, including gold. 
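That teaspoon figure is a one-line order-of-magnitude calculation: multiply a neutron-star density by a teaspoon's volume. Both numbers in this sketch are rough assumptions (quoted neutron-star densities vary by a factor of a few), not measured values.

```python
NEUTRON_STAR_DENSITY_KG_M3 = 5e17  # assumed core density, order-of-magnitude
TEASPOON_M3 = 5e-6                 # a teaspoon is about 5 millilitres

mass_kg = NEUTRON_STAR_DENSITY_KG_M3 * TEASPOON_M3
mass_gigatonnes = mass_kg / 1e12   # 1 gigatonne (billion metric tons) = 1e12 kg

# A few billion metric tons -- the same ballpark as the "about 10 billion
# tons" quoted above; the answer scales directly with the assumed density.
print(mass_gigatonnes)
```

The point of the exercise is not the exact figure but the scale: any plausible density gives a teaspoon mass measured in billions of tons.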
And considering how many particles these neutron stars have pushed together (literally until they can be pushed together no more), it makes sense that two of them combined would be able to make quite a bit of gold — enough to equal about 20 times the mass of Earth, to be more specific. Which is also enough to fill around 100 trillion oil tankers. But hey, gold isn’t everyone’s thing. Neutron stars get that — they also produce about eight times that amount of platinum. Artist’s impression of neutron star collision via NASA. But while mind-boggling in quantity, it’s not quite gold as you imagine it; what you’re getting from a neutron star collision is atomized gold. It won’t make it into your hands or onto your teeth (you do you) until it finds a big cloud of particles. These will eventually get shoved together by gravity and come out as a beautiful solar system. Then, as the gold particles come together and the planet applies geological pressure, the particles will coalesce and, after about 1 billion years, become something you can see with your naked eye and subsequently covet. With this new theory, it’s incredibly likely that all of our beautiful gold originates from this massively violent destructive force. Which is an incredibly cool thought. So friends, when you go home tonight, make sure to hug your gold tight and thank it for being here — it’s had a pretty rough ride. [Sydney Morning Herald, NASA, National Geographic]
I’ll be working my way through John D. Barrow, The Book of Universes: Exploring the Limits of the Cosmos (New York and London: W. W. Norton and Company, 2011). At the time he wrote the book, Dr. Barrow was a professor of mathematical sciences and the director of the Millennium Mathematics Project at the University of Cambridge, in the United Kingdom. When I was a school kid, we all knew that there was absolutely nothing remarkable about the planet on which we live. It’s an ordinary place in an unremarkable galaxy, right? Move along. There’s nothing to see here. But this turns out not to be true. There are a number of remarkable things about Earth. Here is just one of them: As the Earth orbits the Sun, the line through the Earth’s North and South Poles around which it rotates each day is not perpendicular to the line its orbit traces. It is tilted away from the vertical at about 23.5 degrees. This has many remarkable consequences: it is the reason for the seasons. If there were no tilt then there would be no seasonal changes to climate; if the tilt were much larger then the seasonal variations would be far more dramatic. (5) In the first chapter of his book, Dr. Barrow surveys some of the early theories of the solar system — which is to say, given the way that things were viewed in the ancient world, of the universe altogether — including those of Aristotle, of Ptolemy and his enormously influential second century AD Almagest, and of Giovanni Riccioli in his 1651 Almagestum Novum (“The New Almagest”), as well as that of Copernicus. Getting things right turns out to have been very, very difficult. The problem of understanding the motions of the planets wasn’t a simple one, because those motions aren’t simple. It is not easy to understand the universe just by looking at it. We are confined to the surface of a particular type of planet in orbit, with others, around a middle-aged star. 
As a result, what we see in the night sky is strongly determined by where we are located on the Earth’s surface, when we look, and by any preconceptions we might have about where we should be in the grand scheme of things. Our world view predetermines our world model. (19-21) He is making a point with regard to astronomy that I’ve made several times with respect to broader issues that are even more difficult to pin down with “objective data”: We see things from a certain perspective, from within a particular time, and facts don’t simply interpret themselves. They take their meanings, to a considerable extent, from the way they’re embedded in theories or world views.
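Barrow's axial-tilt example can be made concrete with a toy formula: the Sun's declination (how far north or south of the celestial equator it stands) swings between roughly +23.5° and -23.5° over the year, and that swing is the seasons. The sketch below assumes a circular orbit and pins the June solstice to day 172 of the year; both are simplifications for illustration.

```python
import math

AXIAL_TILT_DEG = 23.5  # Earth's obliquity, as quoted in the passage above

def solar_declination_deg(day_of_year):
    """Toy model of the Sun's declination: a cosine that peaks at the
    June solstice (taken as day 172) and bottoms out six months later."""
    phase = 2 * math.pi * (day_of_year - 172) / 365.25
    return AXIAL_TILT_DEG * math.cos(phase)

# With zero tilt the declination would be 0 all year: no seasons at all.
print(solar_declination_deg(172))   # June solstice: +23.5
print(solar_declination_deg(355))   # December solstice: about -23.5
```

A larger tilt simply scales the amplitude of this curve, which is the sense in which "seasonal variations would be far more dramatic."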
A new Johns Hopkins University study has shown that the presence of oxygen in a planet’s atmosphere is not a sure sign that there is life on other planets. In the search for life in other solar systems, researchers have accepted the observation of oxygen as an indication of the presence of life on other planets. This new study recommends a reconsideration of that rule. The findings have been published in the journal ACS Earth and Space Chemistry. The researchers simulated the atmospheres of planets beyond the solar system in the lab, and successfully created organic compounds and oxygen in the absence of life, showing that oxygen does not necessarily indicate life. The researchers tested nine different gas mixtures, based on predictions for super-Earth and mini-Neptune type exoplanet atmospheres. The Earth’s atmosphere Oxygen makes up 20 percent of the Earth’s atmosphere. It is considered one of the most robust biosignature gases in Earth’s atmosphere. In the search for life beyond the Earth’s solar system, however, it is far less clear how different energy sources initiate chemical reactions and how those reactions can create biosignatures such as oxygen. Previously, other researchers have run photochemical models on computers to predict what exoplanet atmospheres might be able to create. Life on other planets Chao He, assistant research scientist in the Johns Hopkins University Department of Earth and Planetary Sciences and the study’s first author, said: “Our experiments produced oxygen and organic molecules that could serve as the building blocks of life in the lab, proving that the presence of both doesn’t definitively indicate life. Researchers need to more carefully consider how these molecules are produced.” Explaining further, He added: “People used to suggest that oxygen and organics being present together indicates life, but we produced them abiotically in multiple simulations. This suggests that even the co-presence of commonly accepted biosignatures could be a false positive for life.”
While scientists have long known that dark matter exists, they have never been able to touch it. That could change, however, with the development of what might one day be the biggest and most capable dark matter experiment in the world. The Department of Energy (DOE) and the National Science Foundation (NSF) last week announced funding for the second-generation Large Underground Xenon (LUX) experiment, dubbed LUX-ZEPLIN (ZonEd Proportional scintillation in LIquid Noble gases). UC Santa Barbara physics professor Harry Nelson is the scientific leader of the LUX-ZEPLIN (LZ) collaboration, and UCSB physicists, who led the design and building of LUX’s ultrapure water tank, will design a new element for LZ. The LUX site, located a mile deep in the Black Hills of South Dakota at the Sanford Underground Research Facility, will also be used for the LZ experiment. When completed, LZ will be the largest dark matter detector in the world. Dark matter — the predominant form of matter in the universe and so named because it neither emits nor absorbs light — has been observed only through its gravitational effects on galaxies and clusters of galaxies. However, physicists do not know what constitutes dark matter. The leading theoretical candidates for dark matter are weakly interacting massive particles (WIMPs), so-called because they rarely interact with ordinary matter except through gravity. Finding WIMPs is the aim of both LUX and LZ. The LZ detector — 20 times bigger than LUX’s — will utilize seven tonnes of active liquid xenon. Xenon is a chemical element found in trace amounts in Earth’s atmosphere. If a WIMP strikes a xenon atom in the detector, it recoils from other xenon atoms and emits photons (light) and electrons. The electrons are drawn upward by an electric field and interact with a thin layer of xenon gas at the top of the tank, releasing more photons. The light signals are detected by 488 photomultiplier tubes, which are deployed above and below the liquid xenon. 
The locations of photon signals — one at the collision point, the other at the top of the tank — can be pinpointed to within a few millimeters. The energy of the interaction can be precisely measured from the brightness of the signals, which ensures that each WIMP event’s unique signature of position and energy will be precisely recorded. Experiments such as LUX and LZ are situated deep underground in order to shield the detectors from cosmic rays, which can produce false results. However, radiation from the natural decay of uranium and thorium in the surrounding material, which can also produce false results, still remains. To combat this, the LZ detector will employ additional layers of particle detection outside the seven tonnes of liquid xenon in the heart of the detector. The UCSB group will design the outer detector to contain scintillator liquid, a clear oil that lights up when a neutron or gamma ray interacts with it. All of the LZ detector is immersed in a large tank of ultrapure water. Michael Witherell, UCSB’s vice chancellor of research and a professor of physics, is the current project manager for LZ’s outer detector. “We are designing nine big acrylic (Plexiglas) vessels to hold 27 tons of scintillator liquid,” he said. “Once the vessels are built, we have to get them into the tanks and assemble them very carefully,” Witherell explained. “We have to fill the outer detector with the scintillator liquid simultaneously with the water tank because we have to keep the pressure even on both sides. This is a mechanical engineering challenge, and UCSB is well-equipped to meet it because we have good mechanical engineers working with our LZ group.” LZ will be more sensitive to dark matter than the ultimate LUX result by a factor of 100 and more sensitive than the present LUX result by a factor of 500. 
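The two-signal position reconstruction described above can be sketched in a few lines: the prompt flash at the collision point and the delayed flash at the top of the tank are separated by the electron drift time, and multiplying that delay by the drift speed gives the depth of the interaction. The drift speed below is an assumed, illustrative value for liquid xenon, not an LZ specification.

```python
DRIFT_SPEED_MM_PER_US = 1.5  # assumed electron drift speed in liquid xenon

def interaction_depth_mm(s1_time_us, s2_time_us):
    """Depth of an interaction below the liquid surface, inferred from
    the delay between the prompt (S1) and delayed (S2) light signals."""
    drift_time_us = s2_time_us - s1_time_us
    return DRIFT_SPEED_MM_PER_US * drift_time_us

# A delayed signal arriving 200 microseconds after the prompt one
# implies the interaction happened ~300 mm below the liquid surface.
print(interaction_depth_mm(0.0, 200.0))
```

The horizontal position comes from the pattern of light across the photomultiplier tubes at the top of the tank; together the two give the few-millimetre localization quoted above.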
“We project that a three-year run of the LZ experiment will achieve a sensitivity close to fundamental limits from the cosmic ray neutrino background,” said Nelson, who helped design, build and fill the sophisticated water tank that houses the LUX experiment and will also be used in the LZ detector. “Our dream would be after about a year’s worth of data that there would be a signal of dark matter,” Nelson said. In fact, LZ’s greatly improved sensitivity to dark matter may one day allow scientists to observe up to five events over a three-year experiment period. “That’s how rare a dark matter event is,” Witherell said. The schedule for building LZ will depend on when the funds become available, but according to Nelson, the DOE and NSF intend to fully fund the project. That means that UCSB could be ready to start building the biggest components in 2015 and get to the point of bringing the detector into operation in early 2018.
A “supermoon” rises near the Lincoln Memorial on March 19, 2011, in Washington, D.C. Image Credit: NASA/Bill Ingalls A last act in a cosmic play, the third and final full supermoon of 2014 graces the night sky this weekend. And on Monday evening, a Canary Island telescope will webcast its arrival. Although the full phase of the moon officially occurs at 9:38 p.m. EDT on Monday, September 8, it will be at its closest point to Earth 22 hours earlier. So sky-watchers will get to see the lunar disk at its largest on Sunday, September 7 at 11:38 p.m. EDT, when the silvery orb will be just 222,698 miles (358,398 kilometers) from Earth. Astronomers say, though, that only the most keen-eyed observers will notice that the moon will appear 15 percent brighter and 7 percent larger than the run-of-the-mill full moon. The supermoon that occurred on August 10 was the closest and brightest of the lunar triad this year, when it passed within only 221,765 miles (356,896 kilometers) of Earth. In terms of celestial mechanics, what is happening during a full moon? The moon orbits the Earth on an egg-shaped orbit, with our planet sitting a bit off center. This means that once a month in its orbit, the moon reaches its closest point to Earth, known as its perigee. This is when the moon looks the largest in diameter. At the same time, the moon is also at the point in its 28-day-long orbit around the Earth that it passes opposite the Sun. When viewed from the Earth, the moon will be fully illuminated, or “full.” But because the Earth moves around the Sun, the exact position in the moon’s orbit where it reaches its full phase changes. So what this means for sky-watchers is that every once in a while, perigee and a full moon coincide. We like to call it a supermoon, but astronomers prefer to call the event by a less catchy name, a perigee full moon. 
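The "15 percent brighter and 7 percent larger" figures fall straight out of the distance ratio: apparent diameter scales inversely with distance, and brightness scales with apparent area, i.e. with the square of that ratio. A quick check using the perigee distance quoted above and the Moon's average distance (an assumed standard value):

```python
MEAN_DISTANCE_KM = 384_400  # average Earth-Moon distance (standard value)
PERIGEE_KM = 358_398        # the September 7 distance quoted above

ratio = MEAN_DISTANCE_KM / PERIGEE_KM
size_gain = ratio - 1            # apparent diameter grows with 1/distance
brightness_gain = ratio**2 - 1   # brightness grows with apparent area

print(f"{size_gain:.0%} larger, {brightness_gain:.0%} brighter")  # 7% larger, 15% brighter
```

The same arithmetic explains why the difference is hard to notice by eye: a 7 percent change in diameter is well below what casual observation can pick out without a side-by-side comparison.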
“It’s the marriage of the two occurrences when we get a brighter and larger-than-normal full moon,” said Geza Gyuk, astronomer at the Adler Planetarium in Chicago. “While this is nothing special from a science perspective, it is no doubt very poetical and very romantic.” See for Yourself When is the best time to catch the event? A full moon is visible, weather permitting, all night. The exact moment of perigee and the exact moment of fullness don’t matter too much, says Gyuk. “Just find a time that is convenient and where you can spend a few minutes just looking and appreciating,” he said. “Try and look for the moon when it is near the horizon, that’s when it gives an extra thrill, as it appears larger and more colorful than when it is overhead.” The moon will appear to rise above the local eastern horizon just after local sunset and will set at sunrise in the west. Starting at 9:30 p.m. EDT, the Slooh observatory on the Canary Islands will webcast the full moon. Webcast courtesy of Slooh. These rising and setting times are also when photo hounds can get the best lunar portraits because the moon is perched just above foreground objects, like houses, trees, and bodies of water. “The setup isn’t too important, but I’d recommend something with not too large a field of view or the moon will simply seem too tiny,” said Gyuk. “Slightly after sunset, when the moon is low in the sky and the sky is darkening, is very dramatic for viewing and photography.” What causes the moon to look bigger at the horizon? This is really still a mystery of sorts to scientists. It is clearly an optical illusion, because cameras show the moon as precisely the same size, regardless of where it is in the sky. However, it is a convincing illusion. 
According to Gyuk, some research has suggested it’s because, at the horizon, we can compare it to objects we are familiar with, while others have claimed it’s because, as a species, we are tuned to pay more attention to things on the horizon that could pose more of a threat compared with those above. “No flying lions on the savanna,” he added. While most professional astronomers may tire of hearing of the supermoon phenomenon, which has really gone viral in the last few years, some experts like Gyuk actually welcome the interest. “I don’t think astronomers necessarily scoff at the publicity. They may be a little bemused, but it is wonderful that people take an interest in what is going on in the heavens,” he explained. “Anything that gets people looking up and wondering is great in my book!”
It was a brief but historic tap on the surface of the asteroid Ryugu, at 7:29 am Japan time on 22 February. The Hayabusa2 spacecraft touched down at its target location, where it shot a projectile at the surface. The probe then began ascending to its ‘parking’ altitude, the Japan Aerospace Exploration Agency said in a series of tweets. The mission team is now waiting to confirm whether the spacecraft collected a sample from the space rock, Nature reported. If successful, it will be only the second time in history that a probe has collected a sample from an asteroid, after a predecessor mission, Hayabusa, did so in 2005. The manoeuvre was considered one of the high-risk highlights of this mission — Ryugu’s surface is strewn with boulders that could damage the craft. The mission aims to return samples of asteroid material back to Earth at the end of 2020 for study. “The touchdown has progressed very smoothly,” said Satoshi Hosoda, from the Japanese space agency JAXA, in a webcast. “Those people who were involved were very tense but then we saw nice smiles and they were all hugging each other.” Yuichi Tsuda, the mission project manager, later confirmed at a press conference that the sequence for the projectile firing to collect samples had happened as planned. As the probe gently touched down, a bullet fired into the surface, kicking up sand, pebbles and fragments of rock into a collection chamber, called a sampler horn. If this failed, the horn has teeth that can raise surface material to the probe. Hayabusa2 began its slow fall towards Ryugu around 26 hours earlier, starting from an altitude of 20 kilometres, where it had been hovering. (Ryugu’s weak gravity means it is difficult for objects to remain in orbit around it.) The spacecraft autonomously guided its descent, ready to abort by gently pushing itself upwards if any anomalies were to occur. 
As the asteroid’s surface slowly rotated below it, Hayabusa2 locked onto a ‘target marker’ — essentially a small, reflective beanbag it had previously deployed to the surface, Nature writes. Mission scientists had dubbed the target location L08-E1. They had selected it as one of the ‘least bad’ options on the asteroid, which is almost entirely covered with rocks of varying sizes. Hitting a boulder during a touchdown manoeuvre could have disastrous consequences for the mission, Nature adds. Hayabusa2 launched in late 2014 and arrived at Ryugu — an object only 1 kilometre wide in an orbit not far from Earth’s — in June 2018. JAXA then mapped the surface in detail, and later selected the sites for a multi-pronged assault on the space rock. The mothership craft has already released three small probes onto the surface, from where they beamed back pictures in September and October.
Our Milky Way is a frugal galaxy. Supernovas and violent stellar winds blow gas out of the galactic disk, but that gas falls back onto the galaxy to form new generations of stars. In an ambitious effort to conduct a full accounting of this recycling process, astronomers were surprised to find a surplus of incoming gas. “We expected to find the Milky Way’s books balanced, with an equilibrium of gas inflow and outflow, but 10 years of Hubble ultraviolet data has shown there is more coming in than going out,” said astronomer Andrew Fox of the Space Telescope Science Institute, Baltimore, Maryland, lead author of the study to be published in The Astrophysical Journal. Fox said that, for now, the source of the excess inflowing gas remains a mystery. One possible explanation is that new gas could be coming from the intergalactic medium. But Fox suspects the Milky Way is also raiding the gas “bank accounts” of its small satellite galaxies, using its considerably greater gravitational pull to siphon away their resources. Additionally, this survey, while galaxy-wide, looked only at cool gas, and hotter gas could play a role, too. The new study reports the best measurements yet for how fast gas flows in and out of the Milky Way. Prior to this study, astronomers knew that the galactic gas reserves are replenished by inflow and depleted by outflow, but they did not know the relative amounts of gas coming in compared to going out. The balance between these two processes is important because it regulates the formation of new generations of stars and planets. Astronomers accomplished this survey by collecting archival observations from Hubble’s Cosmic Origins Spectrograph (COS), which was installed on the telescope by astronauts in 2009 during its last servicing mission. Researchers combed through the Hubble archives, analyzing 200 past ultraviolet observations of the diffuse halo that surrounds the disk of our galaxy. 
The decade’s worth of detailed ultraviolet data provided an unprecedented look at gas flow across the galaxy and allowed for the first galaxy-wide inventory. The gas clouds of the galactic halo are only detectable in ultraviolet light, and Hubble is specialized to collect detailed data about the ultraviolet universe. “The original Hubble COS observations were taken to study the universe far beyond our galaxy, but we went back to them and analyzed the Milky Way gas in the foreground. It’s a credit to the Hubble archive that we can use the same observations to study both the near and the more distant universe. Hubble’s resolution allows us to simultaneously study local and remote celestial objects,” noted Rongmon Bordoloi of North Carolina State University in Raleigh, North Carolina, a co-author on the paper. Because the galaxy’s gas clouds are invisible, Fox’s team used light from background quasars to detect these clouds and their motion. Quasars, the cores of active galaxies powered by well-fed black holes, shine like brilliant beacons across billions of light-years. When the quasar’s light reaches the Milky Way, it passes through the invisible clouds. The gas in the clouds absorbs certain frequencies of light, leaving telltale fingerprints in the quasar light. Fox singled out the fingerprint of silicon and used it to trace the gas around the Milky Way. Outflowing and inflowing gas clouds were distinguished by the Doppler shift of the light passing through them — approaching clouds are bluer, and receding clouds are redder. Currently, the Milky Way is the only galaxy for which we have enough data to provide such a full accounting of gas inflow and outflow. 
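The inflow/outflow bookkeeping rests on the classical Doppler formula, v = c · Δλ/λ: a silicon absorption line shifted blueward of its rest wavelength marks an approaching cloud, a redward shift a receding one. A minimal sketch follows; the wavelengths are hypothetical, and equating "approaching" with "inflow" is the same simplification the passage above makes.

```python
C_KM_S = 299_792.458  # speed of light in km/s

def radial_velocity_km_s(observed_nm, rest_nm):
    """Non-relativistic Doppler velocity from an absorption line's shift.
    Negative = blueshifted (approaching); positive = redshifted (receding)."""
    return C_KM_S * (observed_nm - rest_nm) / rest_nm

def classify_cloud(observed_nm, rest_nm):
    """Count approaching clouds as inflow and receding ones as outflow."""
    return "inflow" if radial_velocity_km_s(observed_nm, rest_nm) < 0 else "outflow"

# Hypothetical line shifted slightly blueward of its rest wavelength:
print(classify_cloud(120.60, 120.65))  # inflow
```

Summing the gas carried by each class of cloud across the 200 sightlines is, in outline, how the survey arrived at its unbalanced books.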
“Studying our own galaxy in detail provides the basis for understanding galaxies across the universe, and we have realized that our galaxy is more complicated than we imagined,” said Philipp Richter of the University of Potsdam in Germany, another co-author on the study. Future studies will explore the source of the inflowing gas surplus, as well as whether other large galaxies behave similarly. Fox noted that there are now enough COS observations to conduct an audit of the Andromeda galaxy (M31), the closest large galaxy to the Milky Way. The Hubble Space Telescope is a project of international cooperation between ESA (the European Space Agency) and NASA. NASA’s Goddard Space Flight Center in Greenbelt, Maryland, manages the telescope. The Space Telescope Science Institute (STScI) in Baltimore, Maryland, conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy in Washington, D.C.
This week, $3 million was awarded by the Breakthrough Prize to Sergio Ferrara, Daniel Z. Freedman and Peter van Nieuwenhuizen, the discoverers of the theory of supergravity, part of a special award separate from their yearly Fundamental Physics Prize. There’s a nice interview with Peter van Nieuwenhuizen on the Stony Brook University website about his reaction to the award. The Breakthrough Prize was designed to complement the Nobel Prize, rewarding deserving researchers who wouldn’t otherwise get the Nobel. The Nobel Prize is only awarded to theoretical physicists when they predict something that is later observed in an experiment. Many theorists are instead renowned for their mathematical inventions, concepts that other theorists build on and use but that do not by themselves make testable predictions. The Breakthrough Prize celebrates these theorists, and while it has also been awarded to others whom the Nobel committee could not or did not recognize (various large experimental collaborations, Jocelyn Bell Burnell), this has always been the physics prize’s primary focus. The Breakthrough Prize website describes supergravity as a theory that combines gravity with particle physics. That’s a bit misleading: while the theory does treat gravity in a “particle physics” way, unlike string theory it doesn’t solve the famous problems with combining quantum mechanics and gravity. (At least, as far as we know.) It’s better to say that supergravity is a theory that links gravity to other parts of particle physics, via supersymmetry. Supersymmetry is a relationship between two types of particles: bosons, like photons, gravitons, or the Higgs, and fermions, like electrons or quarks. In supersymmetry, each type of boson has a fermion “partner”, and vice versa. In supergravity, gravity itself gets a partner, called the gravitino. Supersymmetry links the properties of particles and their partners together: both must have the same mass and the same charge. 
In a sense, it can unify different types of particles, explaining both under the same set of rules. In the real world, we don’t see bosons and fermions with the same mass and charge. If gravitinos exist, then supersymmetry would have to be “broken”, giving them a high mass that makes them hard to find. Some hoped that the Large Hadron Collider could find these particles, but now it looks like it won’t, so there is no evidence for supergravity at the moment. Instead, supergravity’s success has been as a tool to understand other theories of gravity. When the theory was proposed in the 1970s, it was thought of as a rival to string theory. Instead, over the years it consistently managed to point out aspects of string theory that the string theorists themselves had missed, for example noticing that the theory needed not just strings but higher-dimensional objects called “branes”. Now, supergravity is understood as one part of a broader string theory picture. In my corner of physics, we try to find shortcuts for complicated calculations. We benefit a lot from toy models: simpler, unrealistic theories that let us test our ideas before applying them to the real world. Supergravity is one of the best toy models we’ve got, a theory that makes gravity simple enough that we can start to make progress. Right now, colleagues of mine are developing new techniques for calculations at LIGO, the gravitational-wave observatory. If they hadn’t worked with supergravity first, they would never have discovered these techniques. The discovery of supergravity by Ferrara, Freedman, and van Nieuwenhuizen is exactly the kind of work the Breakthrough Prize was created to reward. Supergravity is a theory with deep mathematics, rich structure, and wide applicability. There is of course no guarantee that such a theory describes the real world. What is guaranteed, though, is that someone will find it useful.
Although dark matter has never been directly detected, scientists strongly believe that it is an all-pervasive reality in galaxy clusters, accounting for about 85 percent of all matter in the universe. Their conviction is based on astrophysical observations such as unexplained gravitational forces, which, obviously, can’t come from nothing. While they can see the powerful gravitational effects of the so-called dark matter, they can’t really see the matter itself; hence, the name. Well, they now have a reason to rejoice as a new study claims to have found a way to track the dark matter. Using deep-space imagery captured by the Hubble Telescope, astronomers Mireia Montes (School of Physics, University of New South Wales, Australia) and Ignacio Trujillo (Instituto de Astrofísica de Canarias, La Laguna, Tenerife, Spain) were able to see the invisible matter in an unprecedented light, literally. This is a press release from our latest work where we find that the diffuse light in clusters of galaxies is a good tracer of how dark matter distributes in those clusters. Check it! https://t.co/GTf6MwIbBB — Mireia Montes (@mireiamontesq) December 23, 2018 The powerful data from the Hubble Frontier Fields program of the NASA/ESA-owned telescope enabled Montes and Trujillo to demonstrate that the faint light emitted by abandoned stars in galaxy clusters, known as intracluster light (ICL), actually follows the distribution of dark matter within them. “We have found a way to ‘see’ dark matter,” said Montes, who’s the lead author of the joint study published online in the Monthly Notices of the Royal Astronomical Society. “We have found that very faint light in galaxy clusters, the intracluster light, maps how dark matter is distributed,” she added. Could the faint glow between the members of massive galaxy clusters be a clue to the nature of dark matter?
A new study explores the tantalizing possibilities: https://t.co/mCqP0JXotJ #Hubble #HubbleScience #NASA #DarkMatter pic.twitter.com/2F0pltrqQE — HubbleTelescope (@HubbleTelescope) December 20, 2018 Experts believe that this new dark matter-hunting technique will lead the way to more discoveries about this mysterious phenomenon that permeates almost all of the universe. “There are exciting possibilities that we should be able to probe in the upcoming years by studying hundreds of galaxy clusters,” says study co-author Ignacio Trujillo. A galaxy cluster is a collection of galaxies bound together by their mutual gravitational attraction. Our own Milky Way is part of a ‘Local Group’ of 54 galaxies spanning 10 million light-years, which in turn is part of a much larger group of hundreds of thousands of galaxies known as the Laniakea Supercluster, spanning 500 million light-years. Due to the strong galactic influences at play in a cluster, stars are sometimes torn away from their home galaxy and drift aimlessly through the cluster. These galactic orphans emit the faint intracluster light, or ICL, discussed earlier, which aligns with the gravitational pull of the invisible dark matter; that alignment is how Montes and Trujillo were able to locate it. “The reason that intracluster light is such an excellent tracer of dark matter in a galaxy cluster is that both the dark matter and these stars forming the intracluster light are free-floating on the gravitational potential of the cluster itself—so they are following exactly the same gravity,” says Montes.
“We have found a new way to see the location where the dark matter should be, because you are tracing exactly the same gravitational potential,” she said, adding, “We can illuminate, with a very faint glow, the position of dark matter.” In the past, astronomers have used “gravitational lensing models” to follow the distribution of dark matter within clusters, a complex, time-consuming method. Montes and Trujillo compared the distribution of ICL they discovered with previous dark matter maps created through the gravitational lensing method and found that the two distribution patterns were essentially identical. “These stars have an identical distribution to the dark matter, as far as our current technology allows us to study,” Montes said. Montes and Trujillo’s method is simpler, more efficient and faster because all that is needed is deep-space imagery. This new method will now make it possible for astronomers to study many more clusters in the shortest possible time. “This method puts us in the position to characterize, in a statistical way, the ultimate nature of dark matter,” Montes said. “The idea for the study was sparked while looking at the pristine Hubble Frontier Field images,” said Trujillo. “The Hubble Frontier Fields showed intracluster light in unprecedented clarity,” he said, adding that “the images were inspiring.” “Still, I did not expect the results to be so precise. The implications for future space-based research are very exciting,” he added.
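To make the "compared the distributions" step concrete: one standard way to quantify how well two sets of contours agree is the Modified Hausdorff Distance, the average distance from each point of one contour to the nearest point of the other, taken in the worse of the two directions. (Treat the attribution as an assumption; the exact statistic used in the paper is not stated in this article.) A minimal pure-Python sketch with toy point sets rather than real ICL or lensing maps:

```python
import math

def directed_avg_dist(a_pts, b_pts):
    """Mean distance from each point in a_pts to its nearest neighbour in b_pts."""
    total = 0.0
    for ax, ay in a_pts:
        total += min(math.hypot(ax - bx, ay - by) for bx, by in b_pts)
    return total / len(a_pts)

def modified_hausdorff(a_pts, b_pts):
    """Modified Hausdorff Distance between two contours given as 2-D point sets."""
    return max(directed_avg_dist(a_pts, b_pts), directed_avg_dist(b_pts, a_pts))

# Toy example: a unit square versus the same square shifted by one unit.
square = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
shifted = [(x + 1.0, y) for x, y in square]
```

Identical contours give a distance of zero; the smaller the number, the better the ICL contour traces the lensing-derived mass contour.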
The “trick” to understanding gravity is to think in terms of pushing rather than pulling. The graviton is a theoretical particle thought to deliver the force of gravity. We will discuss it further below. Inertia is the subject of Isaac Newton’s First Law of Motion. If the platinum slab is moving, it may not be there when you would have otherwise collided with it, and you would fall below it. Now the slab is no longer protecting you from force B, but rather from force A above. So, the graviton balance pushes you back upward. Lather, rinse, repeat, and you are now in orbit around the slab. While air & water push down on the outside of your body, gravitons push you from the inside by physically crashing into the nuclei of your atoms. So, as you are floating in the void of space, each one of your atoms is being constantly bombarded by a stream of gravitons coming from every angle such that they all balance out and you don’t move. But is space really filled, wall-to-wall, with gravitons? Why not? One thing I think everybody agrees upon is that gravity is everywhere. There is never a moment when gravity releases its grip on you. It is always there; morning, noon, and night; 24x7x365; forever. Sometimes light is there, but many times it is pitch dark. Some things are magnetic, but most things are not. Gravity is everywhere; all the time; like god. Gravity is the invincible iron fist of the universe. That’s why it makes sense to compare it to air pressure and water pressure as I have done in the cartoons. Physicists describe gravity as the weakest force, but meanwhile our entire galaxy is being sucked into The Great Attractor at 1.3 million mph! (The speed of light is 670.6 million mph.) Because gravity is everywhere at all times, so must be gravitons – or some other thing that mediates the gravitational force. It’s just that simple. And there is precedent. If we can have a Cosmic Microwave Background, why couldn’t we have a Cosmic Graviton Background?
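A quick sanity check on the speeds quoted above, converting from km/s to mph and expressing the Great Attractor infall (the 1.3 million mph figure is taken from the text) as a fraction of light speed:

```python
# Standard conversion factors; only the 1.3 million mph figure comes from the text.
C_KM_S = 299_792.458     # speed of light, km/s
KM_PER_MILE = 1.609344   # kilometres per statute mile

def kms_to_mph(v_kms):
    """Convert a speed in km/s to miles per hour."""
    return v_kms / KM_PER_MILE * 3600.0

light_mph = kms_to_mph(C_KM_S)         # about 670.6 million mph, matching the text
attractor_mph = 1.3e6                  # quoted galactic infall speed
frac_of_c = attractor_mph / light_mph  # roughly 0.2% of the speed of light
```

So the "invincible iron fist" still moves galaxies at only a fifth of a percent of light speed.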
The Michelson-Morley experiment found no evidence of an ether, but gravitons are massless; speaking of which… The graviton is similar to the photon in that it has no mass, but still packs a punch. If somebody shines a flashlight on your back, you don’t feel the flimsy photons crashing into your skin. But if you forget your sunblock when you go out in the sun, then you will definitely feel the effects of being hit by too many photons. So, just because something doesn’t have mass doesn’t mean that it can’t act on you. In the Photoelectric Effect photons crash into electrons and send them flying. So, it is entirely plausible that gravitons can push us around, provided there are enough of them, and that does indeed seem to be the case. Here is a quote from Barak Shoshany: “A typical gravitational wave is composed of roughly 1,000,000,000,000,000 gravitons per cubic centimeter.” So, there are plenty of gravitons around. And we know that gravity acts on the insides of our bodies, like when your heart needs to overcome gravity to pump blood up to your brain. So, the idea that there is a perpetual tug-of-war on our atoms created by an ever-present swarm of gravitons is not so far-fetched. While Sir Isaac Newton invented the most famous theory of gravity, he did not put forth a mechanism for the transmission of gravitational force from one body to another. He thought that it would be irresponsible to do so given the state of scientific knowledge at the time. Basically, Newton didn’t know, and refused to pretend that he did (read more here). So, Newton’s theory was what we might call math-y. He invented a mathematical system that mapped pretty closely to reality, but was not a description of reality itself. Nevertheless, it was still a massive advancement in scientific knowledge. In fact, even today scientists prefer to use Newton’s equations in many cases because they are much easier than Einstein’s. Some people claim that Einstein debunked the very concept of gravity itself.
According to Einstein, the Sun does not pull the Earth but rather bends space such that the Earth has no choice but to follow the freshly-plowed fabric of space. But Einstein never defined the characteristics of that fabric. We are told that it is just a thing that bends. In this respect, Einstein’s theory is just as math-y as Newton’s. But again, that’s not a bad thing, and Einstein’s math can do many things that Newton’s cannot. But the idea that bodies bend space couldn’t possibly be true. For example, consider this diagram from space.com that is frequently used to explain gravity, and the explanation below: “Imagine setting a large body in the center of a trampoline. The body would press down into the fabric, causing it to dimple. A marble rolled around the edge would spiral inward toward the body, pulled in much the same way that the gravity of a planet pulls at rocks in space.” This explanation would only make sense if the trampoline were the top of the universe. In our actual three-dimensional universe, wouldn’t the body be distorting space in all directions equally? Why would the marble spiral downward instead of upward, or to the left instead of the right? And keep in mind that the sun is moving through space at 43,000 miles per hour, where it allegedly plows a tunnel like one of Elon Musk’s boring machines, just bending space like crazy. Then the planets fall down the gravity well, following the sun. But then, at perihelion, somehow the planets get out in front of the sun. I suppose that if you rolled a bowling ball behind a boring machine it would catch up, but could it then blast through the rock itself and get out in front of the machine before orbiting back around behind? That doesn’t seem likely, but then again, we have no definition for the fabric of space. How fast does the bent space unbend? If gravitational waves propagate at the speed of light, that doesn’t seem to be enough time for Pluto to squeak by before the tunnel closes.
Or maybe it doesn’t close, and the universe is like a gigantic gopher warren. And what happens to the bent space? Let’s say that you have a Star Trek-style warp engine in your car. Now, you want to drive from New York to Los Angeles, so you step on the gas, space is warped such that fly-over country is shoved down into a giant crevice, and New York & L.A. wind up right next door. You get out of your car, hop over the crevice, and head to the beach without a care in the world for the millions of people that you just massacred. Furthermore, as the Earth falls down the path plowed by the Sun, what is corralling it onto that path? The bowling ball would follow the boring machine because of the way the rock had been shaped. But what is being shaped in space? Whatever it is would have to be enormously durable to be able to corral an entire planet. Clearly, bending space cannot be a real thing. The energy required to bend space that contained matter would be unimaginable. And if you think that the Earth could be bent and then snap back into place like a rubber band, with no damage, then I submit that you might be a crazy person. The “mass bends space” model is shockingly lacking in common sense, not to mention, detail. The whole thing just doesn’t make any sense. Now if you want to say that mass bends the gravity flux that permeates the universe, then you would be onto something. A laser weapon concentrates a stream of trillions of photons on a target to heat it up and eventually burn a hole through it. So, photons have been weaponized. What about gravitons? In the cartoons above, the platinum slab could be considered a weapon fired at our stick figure. And indeed, if you could cause a sufficiently large mass to materialize near an enemy ship, you could cause it to crash. But then again why not just smack it directly? Of course, moving such a mass around would require an enormous amount of energy, which could no doubt be better used in a more efficient weapon.
Who knows what more we will learn about gravitons, but the prospects for anti-gravity vehicles and gravity weapons seem bleak. Astronomers agree that gravity causes stars, planets, moons, etc. to be spherical. However, they don’t explain exactly why all the molecules “pull toward a common center of gravity.” In my theory, the explanation is simple: the gravitons from an infinite (or perhaps spherical) universe converging toward any given point are all of equal strength, therefore molding matter into spherical shapes. But if the body is spinning fast enough, like the Earth does, the sphere can get flattened to various degrees by the centrifugal effect of its rotation. Another problem with Einstein’s model: if the Sun plows a tunnel through space, and the Earth rolls down that tunnel, how does the Moon follow the Earth? Is the Earth plowing another tunnel inside of the one that the Sun plowed? And while the Moon is falling toward the Earth, how does it manage to pull on the Earth’s oceans? In my theory, this is an easy problem: as the Moon flies over the oceans, it shields them from incoming streams of gravity that were pushing down on the oceans, thus enabling the tidal effect. And then there is the whole matter (ha, ha) of Dark Matter. Einstein’s theory of gravity is so far off on a galactic scale that enormous amounts of Dark Matter have to be theoretically added to galaxies to make the gravitational numbers come out right. It’s a rather huge elephant in the room. Instead of having a small army of scientists scrambling to discover the mythical Dark Matter, why not just admit that the theory is wrong? As Wikipedia put it: “In philosophy of science, dark matter is an example of an auxiliary hypothesis, an ad hoc postulate that is added to a theory in response to observations that falsify it.” Update: scientists have found a galaxy, exactly one, that conforms to Einstein’s theory.
Ironically, this makes Einstein look even more wrong because it blows up the whole elaborate theory of Dark Matter that scientists were expecting to fix Einstein’s theory just as soon as one of those slippery Dark-Matter particles was finally caught. The idea that large numbers of particles can sail through your body is supported by neutrinos, which are far less massive than electrons and have no charge. So, they pass through matter very easily and rarely collide with atoms. Billions pass through you every day. (This page has a photo of a neutrino colliding with a hydrogen proton.) Photons can’t pass through your body, but gravitons can. So, there is a graviton flux. Photons are the quanta of the electromagnetic field. Similarly, gravitons are the quanta of the gravitational field, making the graviton a gauge boson. Ironically, bosons were conceived by Satyendra Nath Bose and the abolisher of gravity himself, Albert Einstein.
The international Gemini Observatory composite color image of the planetary nebula CVMP 1 imaged by the Gemini Multi-Object Spectrograph on the Gemini South telescope on Cerro Pachón in Chile. Credit: The international Gemini Observatory/NSF’s National Optical-Infrared Astronomy Research Laboratory/AURA 20-Second zoom out from the core of the planetary nebula CVMP 1. Credit: The international Gemini Observatory/NSF’s National Optical-Infrared Astronomy Research Laboratory/AURA. Gemini Observatory Image Release The latest image from the international Gemini Observatory showcases the striking planetary nebula CVMP 1. This object is the result of the death throes of a giant star and is a glorious but relatively short-lived astronomical spectacle. As the progenitor star of this planetary nebula slowly cools, this celestial hourglass will run out of time and will slowly fade from view over many thousands of years. Located roughly 6500 light-years away in the southern constellation of Circinus (The Compass), this astronomical beauty formed during the final stages of its progenitor star’s life. CVMP 1 is a planetary nebula; it emerged when an old red giant star blew off its outer layers in the form of a tempestuous stellar wind. As this cast-aside stellar atmosphere sped outwards into interstellar space, the hot, exposed core of the progenitor star began to energize the ejected gases and cause them to glow. This formed the beautiful hourglass shape captured in this observation from the international Gemini Observatory, a program of NSF’s National Optical-Infrared Astronomy Research Laboratory. Planetary nebulae like CVMP 1 are formed by only certain stars — those with a mass somewhere between 0.8 and 8 times that of our own Sun. Less massive stars will gently fizzle out, transitioning into white dwarfs at the end of their long lives, whereas more massive stars live fast and die young, ending their lives in gargantuan explosions known as supernovae.
For stars lying between these extremes, however, the final stretch of their lives results in a striking astronomical display such as the one seen in this image. Unfortunately, the spectacle provided by a planetary nebula is as brief as it is glorious; these objects typically persist for only 10,000 years — a tiny stretch of time compared to the lifespan of most stars, which lasts billions of years. These short-lived planetary nebulae come in myriad shapes and sizes, and several particularly striking forms are well known, such as the Helix Nebula, captured in a 2003 image that combined OIR Lab facilities at Kitt Peak National Observatory with the Hubble Space Telescope. The great diversity of shapes stems from the diversity of progenitor star systems, whose characteristics can greatly influence the ensuing planetary nebula. The presence of companion stars, orbiting planets, or even the rotation of the original red giant star can help determine the shape of a planetary nebula, but we don’t yet have a detailed understanding of the processes sculpting these beautiful astronomical fireworks displays. But CVMP 1 is intriguing for more than just its aesthetic value. Astronomers have found that the gases making up the hourglass are highly enriched with helium and nitrogen, and that CVMP 1 is one of the largest planetary nebulae known. These clues together suggest that CVMP 1 is highly evolved, making it an ideal object to help astronomers understand the later lives of planetary nebulae. Astronomical measurements have revealed the characteristics of CVMP 1’s central star. By measuring the light emitted from the gas in the planetary nebula, astronomers infer that the temperature of the central star is at least 130,000 degrees C (230,000 degrees F). Despite this scorching temperature, the star is doomed to steadily cool over thousands of years.
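As a quick check on the quoted conversion, 130,000 degrees C works out to about 234,000 degrees F, consistent with the "roughly 230,000 F" figure above:

```python
def celsius_to_fahrenheit(c):
    """Standard Celsius-to-Fahrenheit conversion: F = C * 9/5 + 32."""
    return c * 9.0 / 5.0 + 32.0

# The central star's quoted lower-limit temperature.
star_f = celsius_to_fahrenheit(130_000)   # 234,032 F, i.e. roughly 230,000 F
```

(At these temperatures the +32 offset is negligible; multiplying by 1.8 is essentially the whole conversion.)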
Eventually, the light it emits will have too little energy to ionize gas in the planetary nebula, causing the striking hourglass shown in this image to fade from view. The international Gemini Observatory comprises telescopes in the northern and southern hemispheres, which together can access the entire night sky. Similar to many large observatories, a small fraction of the observing time of the Gemini telescopes is set aside for the creation of color images that can share the beauty of the Universe with the public. Objects are chosen for their aesthetic appeal — such as this striking celestial hourglass. Despite their name, planetary nebulae have nothing to do with planets. This misnomer originates from the round, planet-like appearance of these objects when viewed through early telescopes. As telescopes improved, the striking beauty and stellar origin of planetary nebulae became more obvious, but their original name has persisted. The mass range above in turn implies that our own Sun will form a planetary nebula after burning through its hydrogen fuel, around 5 billion years from now.
You’ve all heard me talk about watching the Moon occult a bright star. That’s when we get a great example of stellar parallax from our Earthly viewpoint! But did you know that there are several other heavenly bodies that can cause an occultation that’s easy to view through an amateur telescope if you just know when and where to look? Then let’s take this opportunity to check it out… On the night of August 3/4, 2009, Leonard Ellul-Mercer of Malta caught this while watching Jupiter! What you’re seeing is a time-lapse animation of the mighty Jove occulting HIP 107302, a 6th magnitude star you might know better as 45 Capricorni. How many of us may have glanced at something like that while making a cursory observation of the planet and taken it for a Galilean moon? OK… it’s sixth magnitude, so probably not a lot of you; and even then, you might not have watched long enough to know it would occult. (Besides, there are a whole lot of cool things in that image. Watch the GRS float by, followed by the mushroom impact cloud and the whirl of the moons!) So how do you go about getting predictions? There’s a wonderful set of worldwide resources that you can find through the International Occultation Timing Association (IOTA). This page will take you to their main frame where you can branch into several areas – including how to time occultations and submit your information. To find information on occultations by planets and asteroids for other areas of the world, be sure to visit the IOTA European section, too! While you might watch an occultation just for fun, if you do decide to contribute your timing information, you’re doing real science. By studying exactly the point in time when a star disappears and reappears, astronomers are able to take more accurate measurements of a planet or asteroid’s size and shape – and better calculate their distances at any given time.
It’s a way to engage in new types of complementary research that doesn’t require multi-million-dollar equipment and gives back useful, pertinent scientific data. After all, you might possibly discover a new moon of Jupiter – or one too small to be seen by your telescope – in just this way! Even a momentary dimming of a star might mean there’s something more there than meets the eye. Enjoy your voyage of discovery! There are four major lunar events coming up during the month of August, including another Jupiter/star event for Europe. Get out there and have fun!
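To see why the timings matter, here is the basic reduction in sketch form: the length of the chord a star traces across an occulting asteroid is simply the occultation duration multiplied by the speed of the asteroid's shadow across the observer (which comes from the ephemeris). Combine chords from several observers and you get the body's profile. The numbers below are hypothetical, purely for illustration:

```python
def chord_length_km(disappear_s, reappear_s, shadow_speed_km_s):
    """Length of the chord the star traced across the occulting body.

    disappear_s / reappear_s: timed events in seconds (any common clock);
    shadow_speed_km_s: ground speed of the asteroid's shadow, from the ephemeris.
    """
    return (reappear_s - disappear_s) * shadow_speed_km_s

# Hypothetical observation: a 6.5-second occultation by an asteroid whose
# shadow sweeps the ground at 15 km/s implies a 97.5 km chord.
chord = chord_length_km(0.0, 6.5, 15.0)
```

This is why even a stopwatch-and-telescope timing constrains an asteroid's size to a few kilometres.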
Naiad is one of the 13 moons of Neptune. It was not discovered until 1989, through studying photos taken by the Voyager 2 probe. Thus, the Voyager Science Team is credited with its discovery. It was the last moon discovered by the probe, which helped scientists find five moons altogether. The last five new moons were discovered in the first decade of the 21st century. The satellite was given its official name on September 16, 1991. At first the satellite was designated S/1989 N 6. Neptune’s moons are named after figures from mythology that have to do with the Roman god Neptune – or its Greek equivalent, Poseidon – or the oceans. The irregular satellites of Neptune are named after the Nereids, the daughters of Nereus and Doris, who are Poseidon’s attendants in Greek mythology. Naiad was named after a type of nymph in Greek mythology that presided over brooks, streams, wells, springs – all bodies of fresh water. Naiad is the closest satellite to the planet Neptune, orbiting about 48,230 kilometers from the planet’s center. Naiad is a very small satellite with a diameter of only approximately 58 kilometers. That is about one-sixtieth the size of the Earth’s Moon. Naiad’s mass is so small that it is only 0.00001% of the Moon’s mass. It takes Naiad less than one day – seven hours and six minutes, to be precise – to orbit Neptune because of its proximity to its planet. With a decaying orbit, the satellite may crash into Neptune or be ripped apart and become part of one of its planetary rings. In astronomical terms, this may happen relatively soon. Naiad is an irregularly shaped satellite, which some have compared to a potato. In one of the pictures Voyager 2 took of it, the moon appears to be elongated because of smearing in the picture. Astronomers believe that the moon is made up of fragments from Neptune’s original satellites, some of which were destroyed when Neptune’s gravity captured Triton as a satellite.
They do not think the moon has changed at all geologically since it was formed. Since the Voyager 2 probe passed by Neptune, the planet and its satellites have been studied by many observatories as well as the Hubble Space Telescope and the Keck telescope. Although scientists have been trying to observe Naiad and some of the other small irregular moons, they still do not know very much about the satellite. This is especially true because Naiad and similar satellites are so small. Astronomy Cast has an episode on Neptune you will want to see.
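Naiad's roughly seven-hour orbit follows directly from Kepler's third law. The sketch below uses standard textbook values for Neptune's gravitational parameter and Naiad's semi-major axis (these are assumptions, not figures taken from the article):

```python
import math

# Assumed standard values, not from the article:
GM_NEPTUNE = 6.8365e15   # Neptune's gravitational parameter, m^3 / s^2
A_NAIAD = 48_227e3       # Naiad's semi-major axis from Neptune's center, m

# Kepler's third law for a circular-ish orbit: T = 2 * pi * sqrt(a^3 / GM)
period_s = 2.0 * math.pi * math.sqrt(A_NAIAD**3 / GM_NEPTUNE)
period_h = period_s / 3600.0   # close to the quoted seven hours and six minutes
```

The tiny moon's mass is irrelevant here; for a satellite this small, the period depends only on Neptune's mass and the orbital radius.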
Chain of Five Sub-Neptune Planets Citizen scientists have discovered five exoplanets orbiting the star K2-138. The worlds orbit the K-type dwarf star with periods of 2.35 days, 3.56 days, 5.40 days, 8.26 days, and 12.76 days, caught in a resonant chain in which each planet finishes about 3 orbits for every 2 orbits of the world just beyond it. Jessie Christiansen (Caltech) presented the result January 11th at the winter American Astronomical Society meeting. The exoplanets, found as part of the Exoplanet Explorers project hosted on Zooniverse, are all sub-Neptunes; only the innermost one is potentially rocky. There also might be a sixth planet farther out, at 42 days, which would maintain the resonant pattern if there are two planets between it and the inner five. Read more in Caltech’s press release. Camille M. Carlisle Swarm of Clouds Whizzing from Milky Way’s Center Astronomers have detected more than 100 clouds of neutral hydrogen — together containing the equivalent of a million Suns’ worth of mass — flying away from our galaxy’s center. The clouds have an average velocity of 330 km/s (740,000 mph). They seem to be blowing out through the same region as the Fermi bubbles, two gargantuan, 25,000-light-year-tall lobes that stick out from the Milky Way’s center. Discovered by F. Jay Lockman (Green Bank Observatory) and his colleagues while following up on 2013 work, the clouds found so far reach at least 5,000 light-years above and below the disk. The team hopes that the objects will serve as markers to measure the speed of the galactic wind they ride and potentially help reveal what created the Fermi bubbles in the first place. Read more in Green Bank’s press release. Camille M.
Carlisle Comet 41P's Spin Is Slowing Down During the most recent apparition of Comet 41P/Tuttle-Giacobini-Kresák (which our readers photographed in abundance), ground-based telescopes gauged its spin before its closest approach to the Sun on April 9th to have a rotational period of about 20 hours. But when the newly renamed Neil Gehrels Swift telescope observed the comet on May 7–9, after its closest approach, the rotation period had lengthened to between 46 and 60 hours — more than twice as long! (The large range in measurements is due to the uncertainties caused by the comet fading as it retreats from the Sun.) Dennis Bodewits (University of Maryland) announced the results at the Washington, D.C., meeting of the American Astronomical Society and in the January 11th Nature. Swift observations also showed that more than half of the small comet’s surface was covered in sunlight-activated jets, compared to the roughly 3% coverage typical of most comets. The jets likely pushed the comet around, slowing its spin. If the slowing trend continues, the comet could have a rotation period of more than 100 hours by now, which is slow enough that it could start tumbling around on more than one axis. The comet is now behind the Sun from Earth's perspective, but when it emerges on the other side, Bodewits's team plans to update rotation measurements using the Hubble Space Telescope. Read more in NASA’s press release.
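The 3:2 resonant chain reported for K2-138 earlier in this roundup is easy to check numerically: each quoted period divided by the next one in sits just above 1.5, and three more 3:2 steps beyond the outermost confirmed planet land near the tentative 42-day candidate. A small sketch using the reported periods:

```python
# Orbital periods reported for K2-138, in days.
periods = [2.35, 3.56, 5.40, 8.26, 12.76]

# Ratio of each period to the one just inside it; all hover near 3/2 = 1.5.
ratios = [outer / inner for inner, outer in zip(periods, periods[1:])]

# If two unseen planets continued the chain, the slot three 3:2 steps beyond
# 12.76 days would fall at about 43 days, near the 42-day candidate.
predicted_outer = periods[-1] * 1.5 ** 3
```

Note the ratios drift slightly above exact 3:2, a common feature of observed resonant chains, so the 42-day candidate being a bit under the naive 43-day prediction is not surprising.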
[ I am reviving the Bowler Hat Science blog as a quick way to link all my new publications. Subscribe to the feed to keep up with all my stories! ] From Scientific American: Many major discoveries in astronomy began with an unexplained signal: pulsars, quasars and the cosmic microwave background are just three out of many examples. When astronomers recently discovered x-rays with no obvious origin, it sparked an exciting hypothesis. Maybe this is a sign of dark matter, the invisible substance making up about 85 percent of all the matter in the universe. If so, it hints that the identity of the particles is different from what the prevailing models predict. The anomalous x-rays, spotted by the European Space Agency’s orbiting XMM–Newton telescope, originate from two different sources: the Andromeda Galaxy and the Perseus cluster of galaxies. The challenge is to determine what created those x-rays, as described in a study published last month in Physical Review Letters. (See also an earlier study published in The Astrophysical Journal.) The signal is real but weak and astronomers must now determine whether it is extraordinary or has a mundane explanation. If that can be done, they can set about the work of identifying what kind of dark matter might be responsible. [Read more at Scientific American ] The biggest black holes in the Universe reside at the centers of the largest galaxies. However, a new study suggests they may be proportionally even larger, compared with other galaxies. Brightest cluster galaxies (BCGs) are huge galaxies found in the middle of galaxy clusters, where they grew by merging with and absorbing smaller galaxies. However, based on their X-ray and radio luminosity, their black holes may have grown much bigger—perhaps as much as ten times the mass found in previous estimates. That means the largest black holes in the Universe are perhaps 60 billion times the mass of the Sun…or more.
A recent study has used an independent means of estimating black hole masses, based on their brightness in X-rays and radio light. J. Hlavacek-Larrondo, A. C. Fabian, A. C. Edge, and M. T. Hogan examined the massive central galaxies in 18 galaxy clusters and found that previous measurements could be off by as much as a factor of ten. In other words, if the luminosity-based measurements are correct, a black hole currently believed to be 6 billion times the mass of the Sun could actually be 60 billion times more massive. That leaves two possibilities: either black holes in bright cluster galaxies behave differently by producing more light than we think they should, or the biggest black holes in the Universe might be astoundingly ultramassive. [Read more…] I write articles and posts on a lot of different topics, both for my own blog and at Ars Technica. Many of those subjects drift pretty far from my putative area of expertise, but occasionally I get to write about something I know pretty well. To wit: last week, a group of researchers using the 4-meter Blanco telescope at Cerro Tololo (best known for its use in discovering dark energy in 1998) have measured distances to galaxy clusters very precisely. (Here’s my galaxy cluster primer, written as a podcast for 365 Days of Astronomy.) Their study, as with a major chunk of my thesis work, was intended to pin down the effect of dark energy—cosmic acceleration—on galaxy cluster formation and evolution. In particular, if dark energy’s effects change over time, that would have a profound influence on the number and size of galaxy clusters that form in a given era. To get a handle on this, we need a detailed census of clusters, dating back to the earliest times. A new survey of galaxy clusters marks the beginning of a promising effort to map the birth and growth of galaxy clusters back to relatively early times. 
Jeeseon Song and colleagues used optical and infrared telescopes to measure the distances of 158 bright clusters in a large patch of the southern sky, looking back in time to when the Universe was less than one-third its current age. These observations provide the beginnings of a history of galaxy cluster evolution, which should help constrain models of dark energy. [Read more…]
On the origin of … in the lattice Universe We introduce a perfect isotropic lattice, which is purely imaginary, and which we will call the cosmic lattice. Its free energy of deformation is developed per unit volume; it depends linearly on volume expansion and quadratically on volume expansion, shear strain and torsional rotation deformations, and allows us to deduce Newton’s equation of the lattice. We are also interested in the propagation of waves in the cosmic lattice. Quite surprising phenomena appear, such as a longitudinal mode coupled to the propagation of linearly polarized transverse waves, which disappears for circularly polarized transverse waves. There is also the possibility of propagation of longitudinal waves. But under certain conditions of expansion, the longitudinal propagation mode disappears in favor of localized vibration modes of the expansion. Among the surprising behaviors that may be present in a cosmic lattice is the curvature of wave rays by a volume expansion gradient due to the presence of a strong topological singularity of expansion. This curvature can lead to the formation of “black holes” absorbing all waves passing in their vicinity, or impenetrable “white holes” pushing all waves away from their vicinity. Considering a finite imaginary sphere of a cosmic lattice, we can introduce the concept of “cosmological evolution” of the lattice, assuming that one injects a certain amount of kinetic energy inside the lattice. In this case, the lattice shows strong temporal variations of its volume expansion, which can be modeled very simplistically by assuming that the volume expansion remains perfectly homogeneous throughout the lattice during its evolution. We start by showing that we can separate the field of volume expansion from the other fields in the Newton equation of a cosmic lattice in the case where the concentrations of point defects are constant.
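The quadratic free energy described above can be sketched schematically. The notation below is illustrative only and not taken from the source: write \(\tau\) for the volume expansion, \(\alpha_i\) for the shear strains and \(\omega_i\) for the torsional rotation deformations, with moduli \(K_0,\dots,K_3\):

```latex
F_d \;=\; -K_0\,\tau \;+\; K_1\,\tau^{2} \;+\; K_2\sum_i \alpha_i^{2} \;+\; K_3\sum_i \omega_i^{2}
```

The single linear term in \(\tau\) and the three quadratic terms mirror the stated dependence: linear in volume expansion, quadratic in expansion, shear and torsional rotation.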
Then we use these results to obtain the Maxwell equations of evolution of a lattice in the case where the volume expansion can be treated as constant. We apply here the Lorentz transformation to topological singularities in motion in order to obtain, in the absolute frame of the lattice, the fields of dynamical distortions and velocities associated with screw and edge dislocations, localized rotation charges, twist loops and edge loops moving at relativistic speed. From these fields, their total energy will be calculated. The total energy is the sum of the potential energy stored by the dynamic distortions of the lattice created by the presence of the moving charge and the kinetic energy stored in the lattice by the movement of said charges. The total energy will be shown to satisfy a relativistic dynamics. We will show with the Lorentz transformation that a relativistic force term acts on the charges of rotation in movement, a term that is perfectly analogous to the Lorentz force in electromagnetism. In a perfect cosmic lattice satisfying , all microscopic topological singularities such as dislocation lines and dislocation/disclination loops satisfy Lorentz transformations based on the transverse wave velocity. As a consequence, a localized cluster of topological singularities which interact with each other via their rotation fields is also subject globally to the Lorentz transformations. On this basis, we discuss the analogies which exist between our theory of the perfect cosmic lattice and Special Relativity. We discuss among others the role of “aether” that the lattice plays vis-à-vis a cluster of singularities in movement interacting via their rotation fields. We show that this notion of “aether” gives us a completely new perspective on the theory of Special Relativity, as well as a very elegant explanation of the famous twin paradox of Special Relativity.
Here, we study in detail the gravitational interactions of twist disclination loops (TL), which yield a strong analogy with Newtonian gravitation in the far field but exhibit differences in the near field. We will also exhibit a dependence of the gravitational constant on the volume expansion of the lattice. Next, we focus on the Maxwell formulation of the equations of evolution, which corresponds to the expression of the local laws of physics, such as electromagnetism, as seen by an imaginary Grand Observer (GO). We focus on a hypothetical local observer we call the Homo Sapiens observer (HS), who would be linked to a local framework and himself composed of clustered singularities of the lattice. This HS observer only knows of local measures made with local rods and local clocks constituting his local reference frame. It will be shown that, for such an observer, the Maxwell equations become invariant with respect to volume expansion. There then appears a relativistic notion of time for the local HS observers, which presents a strong analogy with time in Einstein’s theory of General Relativity. We will discuss in detail the analogies, the differences and the advantages when compared to General Relativity. Here, we will look at the very short distance gravitational interaction between a twist disclination loop (TL) and an edge dislocation loop (EL) due to the charge of curvature of the EL and the charge of rotation of the TL and their respective perturbations of the field of expansion. We show that this interaction between charges of rotation and curvature corresponds to a repulsive force at very short distance that scales in when the loops are separated, but that it is an attractive force between the two loops when they form a dispiration. This gravitational interaction between a charge of rotation and a charge of curvature presents numerous analogies with the famous ‘weak force’ of particle physics.
In a lattice universe, it is possible to imagine a scenario of cosmological evolution of the topological singularities which form after the big bang. This scenario explains the formation of galaxies, the phenomenon astrophysicists call dark matter, the disappearance of anti-matter from the Universe, the formation of massive black holes in the centers of galaxies of matter, the formation of stars and the formation of neutron stars during the gravitational collapse of matter. Intuitively, we can see that Quantum Mechanics (QM) could be linked to the existence of dynamical solutions of Newton’s second partial equation of the cosmic lattice, in the form of temporal fluctuations of the field of expansion, associated with the topological singularities of the cosmic lattice when it is without longitudinal waves in the domain . Here, we will show a wave function directly deduced from Newton’s second partial derivative equation for the perturbations of expansion of the lattice, which is intimately linked to the moving topological singularities of the lattice, whether they are clusters of singularities or isolated single loops. We will thus give a rather simple ‘wave interpretation’ of quantum mechanics: the quantum wave function represents the amplitude and phase of gravitational fluctuations coupled to topological singularities. This interpretation implies that the square of the amplitude of the normalized wave function is indeed linked to the probability of presence of the topological singularity which is associated with it! At the same time, we will recover the Heisenberg uncertainty principle, the QM notions of bosons, fermions and indistinguishability, the Pauli exclusion principle, as well as a physical understanding of intriguing phenomena such as entanglement and quantum decoherence. Here, we will find a solution of Newton’s second partial equation in the torus around an SDL.
We will show that there are no static solutions to this equation and that, as a consequence, we have to search for a dynamic solution for the gravitational perturbations of expansion in the immediate vicinity of the loop. This dynamic solution turns out to be a quantized movement of rotation of the loop on itself. This solution satisfies Newton’s second partial equation, which becomes in this case the Schrödinger equation, as we have seen in the previous chapter! This movement of rotation of the loop about itself is nothing other than the “spin” of the loop, and we can show that a magnetic moment is associated with it, which corresponds exactly to the magnetic moment of particle physics! Furthermore, we will show that, within our theory, this is a real movement of rotation, and that it does not infringe on special relativity, contrary to what the early pioneers of quantum mechanics thought of spin! We have shown previously that the perfect lattice presents strong analogies with the great theories of modern physics, namely the equations of electromagnetism, general relativity, special relativity, black holes, cosmology, dark energy and quantum mechanics, and that we can have three types of basic topological loop singularities which possess respectively the analogue of an electrical charge, an electric dipole moment or a curvature charge by flexion. It should be noted that the curvature charge is unique to our theory, and explains rather simply several mysterious phenomena, such as the weak coupling force of two topological loops, dark matter, galactic black holes, and the disappearance of anti-matter. Here, we will strive to find and describe the ingredients which could explain, on the basis of topological singularities, the existence of the standard model of particle physics.
In other words, we will strive to find mechanisms that generate the fundamental particles such as leptons and quarks, explain what could cause three generations of these fundamental particles, and identify where the strong force which binds quarks into baryons and mesons could come from. This chapter does not pretend to give an elaborate theory or a final, quantitative solution to explain the standard model of particle physics, but rather to show with a few specific arguments that it could be the choice of a microscopic structure of the lattice that answers the questions of the standard model. Here we will show that a full ‘zoology’ of loops appears in a well-chosen structure of the lattice, and that it resembles the elementary particles of the standard model. We will show the presence of an asymptotic strong force which could bind the topological loops!
BRITE Constellation / Austria BRITE (BRIght-star Target Explorer) Constellation / BRITE Austria, UniBRITE BRITE (BRIght-star Target Explorer) / CanX-3 (Canadian Advanced Nanosatellite eXperiment-3) is a low-cost Austrian/Canadian constellation of nanosatellites - a collaborative science demonstration mission of the University of Toronto, Institute for Aerospace Studies/Space Flight Laboratory (UTIAS/SFL), the Graz University of Technology (TU Graz), and the Institute of Astronomy at the University of Vienna, both of Austria, with Sinclair Interplanetary, and Ceravolo Optical Systems Ltd., Ontario, as subcontractors. 1) 2) 3) 4) 5) 6) 7) 8) 9) The objective is to make photometric observations of some of the apparently brightest stars in the sky and to examine these stars for variability (low-level oscillations and temperature variations). The observations will have a precision at least 10 times better than achievable using ground-based observations; the payload will be packaged inside a CanX-class nanosatellite (CanX-3). The mission's science team includes collaborators from Canada and Austria: the University of British Columbia (UBC), l'Université de Montréal, the University of Toronto, and the University of Vienna (Universität Wien). The BRITE constellation aims to be a modest-sized instrument operating in low Earth orbit, above the effects of the atmosphere, capable of fulfilling the science objectives. It consists of at least two nanosatellites, each equipped with a small-lens telescope, able to observe the brightest stars in the sky to visual magnitude 3.5, over a FOV of 24º, with a sampling time of up to 15 minutes once per satellite orbit (typically 100 minutes), and with a differential brightness measurement accurate to at least 0.1% per sample (all numbers are minimum requirements; actual performance may be better).
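For scale, the 0.1%-per-sample requirement can be converted to millimagnitudes with the standard Pogson relation; this conversion is a generic astronomy formula, not something specified in the mission description:

```python
import math

def flux_error_to_mmag(rel_flux_err):
    """Convert a relative flux error into a magnitude error (Pogson scale), in mmag."""
    return 2.5 / math.log(10) * rel_flux_err * 1000.0

# BRITE's minimum requirement of 0.1% per sample:
print(round(flux_error_to_mmag(0.001), 2))  # ≈ 1.09 mmag
```

So the minimum requirement corresponds to roughly millimagnitude differential photometry per sample, which is indeed far beyond what ground-based photometry of such bright stars achieves through the atmosphere.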
A cluster of four satellites is needed to improve the duty cycle and would obtain color information (with two satellites having blue and two having red filters).
Table 1: Science requirements for a bright star photometry mission
Selection of a constellation: There are many reasons for using a constellation of nanosatellites over a single satellite. For one, each telescope will be optimized to work with only one color filter. By collecting color and intensity data from at least two different BRITE satellites, each with a different filter, the science capacity of BRITE is greatly enhanced. Color provides temperature information that helps astronomers to better identify modes of oscillation. The BRITE telescope is being designed to have no moving parts (low cost and risk mitigation). This implies that filter changes cannot be done with a single BRITE telescope in orbit. By having two nanosatellites in the same orbit, each with a different filter, the development costs can be minimized by keeping each BRITE satellite identical except for the telescope filter and by reducing nonrecurring engineering costs. Another reason to have a constellation of nanosatellites is that multiple satellites observing the same region of interest can increase the overall duty cycle of observation beyond what a single nanosatellite can provide. Since the stars of interest are located in all parts of the sky, a continuous viewing zone for all targets of interest is not possible. - Having two pairs of BRITE satellites in two slightly different orbits that have different viewing times for the same region of interest doubles the duty cycle and significantly improves the spectral window. A final reason for using a constellation of nanosatellites is the mitigation of risk. By having multiple, low-cost satellites instead of a single large satellite, the associated launch risk is greatly reduced.
Additionally, the failure of any single BRITE satellite would not end the mission; the other satellites in the BRITE-Constellation would still achieve all the primary science objectives (Ref. 3). The primary satellite design requirement calls for precise attitude control. The strict photometric error tolerance places a high-precision requirement on the ADCS to ensure BRITE will always keep the regions of interest surrounding each star within the same pixel areas on the imager.
Table 2: Minimum mission requirements for the BRITE-Constellation platform (Ref. 3)
The BRITE constellation will be the world's first nanosatellite constellation dedicated to an astronomy mission.
Table 3: Some background on the early phases of the BRITE Constellation 10)
Extension of the Nanosatellite Constellation: 1) The University of Vienna and FFG/ALR (Austria's space agency) are financing the development of two BRITE satellites, and development is nearing completion. 2) SRC/PAS (Space Research Center/Polish Academy of Sciences) of Warsaw, Poland, will be preparing two additional satellites. The first Polish satellite, BRITE-PL 1, will be a modified version of the original SFL design. The second Polish satellite, BRITE-PL 2, will include the significant changes to be implemented by SRC/PAS. The Polish technical participation in the BRITE project is supported by the Ministry of Science and Higher Education. Participation in the BRITE consortium gives Poland the possibility of launching its first scientific satellite into space. The Polish participation in the BRITE consortium was established in October 2009 by SRC/PAS and NCAC/PAS (Nicolaus Copernicus Astronomical Center/Polish Academy of Sciences). 13) 3) The CSA (Canadian Space Agency) is also funding two satellites in the constellation. In January 2011, CSA signed an agreement to support the BRITE constellation by contributing two satellites targeting a launch in 2012.
The two Canadian satellites will join the four other satellites funded by the Austrian and Polish governments. 14) The operation of three pairs of BRITE nanosats will significantly improve the coverage of the parameter space addressed in this proposal (compared to only one BRITE), and in particular the statistical significance of the science conclusions to be drawn. The best configuration would be to have two launches, each with a pair of BRITE nanosats carrying each kind of filter, and each orbit pair separated in the sky as much as possible. This is the rationale behind the BRITE constellation of four individual satellites. Each BRITE satellite utilizes a number of innovative technologies including miniature reaction wheels, a star tracker and an optical telescope, all sized and designed around the generic CanX (Canadian Advanced Nanosatellite eXperiment) bus of UTIAS/SFL (University of Toronto, Institute for Aerospace Studies/Space Flight Laboratory). For a better overview of the BRITE constellation, the eoPortal will create a separate file for the spacecraft of each consortium participant when the information becomes available, with this file being the description of the BRITE Austria constellation. The initial BRITE constellation is based on pioneering Canadian space technology, built in partnership (joint venture) with Austrian institutions. The Austrian share of the project (BRITE-Austria and UniBRITE) is funded by the Austrian Space Program, representing the first Austrian satellites. • The University of Vienna (Austria) has funded UTIAS/SFL to build one satellite, called UniBRITE. • The Graz University of Technology (TU Graz) is also cooperating with UTIAS/SFL to produce a further nanosatellite of the constellation, called BRITE-Austria, funded by FFG/ALR.
Each of these two BRITE satellites will have a different filter, one will be equipped with a 390-460 nm (blue-spectrum) bandpass filter and the other will be equipped with a 550-700 nm (red spectrum) bandpass filter. Table 4: BRITE constellation consisting of 6 nanosatellites operating in pairs, all are based on GNB of UTIAS/SFL All spacecraft in the constellation use the GNB (Generic Nanosatellite Bus) platform, also referred to as CanX-3, developed at UTIAS/SFL (of CanX-2 heritage). The first BRITE satellite, UniBRITE, is being built by SFL for the University of Vienna. The second BRITE satellite, BRITE-Austria, is being developed by the Graz University of Technology with assistance (components) from SFL. The BRITE team has asked the Canadian Space Agency (CSA) to complete the BRITE constellation with funding for two Canadian BRITE nanosatellites. Figure 1: Artist's rendition of the BRITE-Austria nanosatellite in orbit (image credit: TU Graz) 17) The GNB is a modular spacecraft bus designed around a 20 cm x 20 cm x 20 cm cube form factor that provides all basic functional capabilities for a wide range of nanosatellite missions (up to 12 kg and potentially bigger), and provides a platform for state-of-the-art, high-performance applications not previously achievable with nanosatellites. A typical GNB consists of one ARM7 housekeeping computer, two ARM7 computers for attitude/propulsion control and payload operations, CMOS imagers, a power system with triple-junction solar cells and Lithium-ion batteries, passive thermal control, UHF uplink, a 32-256 kbit/s S-band downlink, and a 1 arcmin three-axis attitude control system consisting of tiny reaction wheels, sun sensors, magnetometer and star tracker. 18) 19) EPS (Electrical Power Subsystem): The GNB design incorporates a direct energy-transfer power system utilizing between four and eight solar cells on each face and battery storage to supply 10 W peak and 5.6 W nominal for the satellite. 
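To illustrate why the blue (390-460 nm) and red (550-700 nm) bandpasses carry temperature information, here is a rough sketch comparing blackbody flux in the two bands. The crude rectangle-rule integration and the example temperatures are illustrative assumptions, not mission numbers:

```python
import math

H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann (SI)

def planck(lam_m, temp_k):
    """Blackbody spectral radiance B_lambda at wavelength lam_m (meters)."""
    return (2 * H * C**2 / lam_m**5) / (math.exp(H * C / (lam_m * K * temp_k)) - 1)

def band_flux(lo_nm, hi_nm, temp_k, n=200):
    """Rectangle-rule integral of the Planck curve over a bandpass in nm."""
    step = (hi_nm - lo_nm) / n
    return sum(planck((lo_nm + (i + 0.5) * step) * 1e-9, temp_k)
               for i in range(n)) * step

def blue_to_red(temp_k):
    """Flux ratio between BRITE-like blue and red bandpasses."""
    return band_flux(390, 460, temp_k) / band_flux(550, 700, temp_k)

# A hot B-type-like star puts relatively more light in the blue bandpass
# than a cooler Sun-like star does:
print(blue_to_red(20000) > blue_to_red(6000))  # True
```

The ratio of the two bands rises monotonically with temperature, which is the basic reason a blue/red satellite pair can distinguish pulsation modes that a single band cannot.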
Each BRITE nanosatellite contains three processor boards: the OBC board handles housekeeping and communications, a second computer is used for all ADCS support functions, while the third board is used for the science payload and its data handling. Communication between the boards occurs through serial peripheral interfaces. Each processor board is based around an ARM7TDMI processor with a code memory of 256 kB and 2 MB of triple-voting, hardware EDAC (Error Detection And Correction) protected SRAM to store program variables and data. Longer-term payload data storage is provided in 256 MB of flash memory. The on-board computer uses a multi-threaded operating system developed at SFL. Figure 2: Illustration of the CanX-3/BRITE spacecraft (image credit: UTIAS/SFL) ADCS (Attitude Determination and Control Subsystem): The spacecraft is 3-axis stabilized with 1 arcmin stability. The attitude control software implements an Extended Kalman Filter that uses the various attitude sensors to predict and correct the attitude of the spacecraft. The attitude sensor suite in each GNB spacecraft comprises a three-axis magnetometer, six sun sensors (each consisting of a phototransistor and digital pixel arrays for coarse and fine attitude determination), and a star tracker. Three magnetorquers provide coarse attitude control and momentum dumping capability. Three orthogonal reaction wheels perform fine attitude control. A reaction wheel for nanosatellites was developed by Sinclair Interplanetary of Toronto in collaboration with SFL. The wheel fits within a box of 5 cm x 5 cm x 4 cm, has a mass of 185 g, and consumes only 100 mW of power at nominal speed. A maximum torque of 2 mNm is provided (30 mNms momentum capacity). No pressurized enclosure is required, and the motor is custom made in one piece with the flywheel. The reaction wheel technology is scalable. CanX-3/BRITE is the first mission to use this actuator (3 wheels).
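The flight EKF is a multi-sensor, three-axis estimator; purely as an illustration of the predict/correct cycle it implements, here is a deliberately simplified scalar Kalman filter. The noise figures and angles are made up for the example (the 1-arcmin measurement noise loosely echoes the pointing class of the bus):

```python
import random

random.seed(1)

def kalman_attitude(rates, meas, dt=2.0, q=1e-6, r=(1 / 60) ** 2):
    """Scalar predict/correct loop (degrees): propagate an angle with a
    'gyro'-like body rate, then correct with noisy 'star tracker' angles."""
    theta, p = 0.0, 1.0
    for w, z in zip(rates, meas):
        # Predict: integrate the body rate and grow the covariance
        theta += w * dt
        p += q
        # Correct: blend in the star-tracker angle measurement
        k = p / (p + r)
        theta += k * (z - theta)
        p *= 1 - k
    return theta

true_angle = 5.0                    # hypothetical target attitude, degrees
n = 200
rates = [0.0] * n                   # spacecraft holding attitude (zero rate)
meas = [true_angle + random.gauss(0, 1 / 60) for _ in range(n)]  # 1 arcmin noise
est = kalman_attitude(rates, meas)
print(abs(est - true_angle) < 0.01)  # True: converges well inside 1 arcmin
```

The real filter fuses magnetometer, sun sensor and star tracker data in quaternion form; the scalar version above only shows why the covariance-weighted update suppresses measurement noise over many cycles.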
20) 21) UniBRITE and BRITE-Austria use the ComTech/AeroAstro MST (Miniature Star Tracker) as their primary sensor for attitude determination. The update rate of the MST limits the cadence of the attitude determination and control cycle to 0.5 Hz. This slow cadence has a significant impact on the overall attitude performance, but it can be mitigated through the use of a high-bandwidth attitude filter and controller. 22) The attitude subsystem of the BRITE satellites is among the most critical spacecraft systems in ensuring mission success. For massive stars, the periods of light variations are on the scale of hours to months; therefore, the satellites will perform 15-minute observations of multiple target star fields each orbit. Upon returning to a previously imaged target, the attitude system is required to hold the point-spread function of the imaged stars to within 3 pixels of the original point locations, in order to trim out the pixel-to-pixel variations of the telescope detector. This stringent requirement implies one arcminute pointing control with long-duration attitude stability and rapid reacquisition. Until recent advances in the miniaturization of attitude hardware, these requirements were insurmountable on a nanosatellite scale. Despite these advancements, a major challenge for the BRITE satellites was to characterize and tune these attitude components (reaction wheels and star trackers, in particular), as they operate at the edge of the required performance envelope. Further, novel attitude estimation and control techniques were applied, which were essential given both the hardware complement and its limitations (Ref. 22). Figure 3: Photo of the MST Star Tracker (image credit: TU Graz) Reaction wheels: The BRITE satellites make use of the Sinclair-SFL 30 mNms reaction wheels.
These highly capable reaction wheels have over five years of flight heritage onboard the CanX-2 satellite, an additional three years of heritage aboard the AISSat-1 satellite, and continue to operate without incident. The reaction wheels are capable of storing more than 30 mNms of angular momentum and delivering torques up to 2 mNm. Figure 4: Photo of three GNB miniature reaction wheels (image credit: Sinclair Interplanetary) Figure 5: Photo of the magnetometer (image credit: TU Graz) RF communications: Each CanX-3/BRITE nanosatellite is capable of full-duplex communications with the ground. The uplink uses UHF, while an S-band transmitter with BPSK modulation is used in the downlink. Each satellite can also support a VHF beacon. The expected data volume is 2-8 MB/day. Table 5: Summary of the CanX-3 / BRITE-Austria / TUGSat-1 spacecraft bus specifications Figure 6: Block diagram of the BRITE/CanX-3 nanosatellite (image credit: UTIAS/SFL, TU Graz) Figure 7: Exploded view of the CanX-3 nanosatellite showing the structural elements (image credit: UTIAS/SFL, TU Graz) Figure 8: Photo of the TUGSat-1 / BRITE-Austria nanosatellite (image credit: TU Graz) Launch: The first two nanosatellites of the BRITE/CanX-3 constellation [Austrian mission: BRITE-Austria (CanX-3B) and UniBRITE (CanX-3A)] were launched as secondary payloads on Feb. 25, 2013 on the PSLV-C20 vehicle of ISRO/Antrix from Sriharikota, India. The primary payload on this mission was SARAL (Satellite with Argos and AltiKa), a collaborative mission of ISRO and CNES. 23) 24) 25) To facilitate rapid launches, SFL has adopted an approach of building customizable separation systems for any nanosatellite. These separation systems can be integrated with the satellites prior to launch-site delivery and hence make launch coordination easier. The SFL XPOD (Experimental Push Out Deployer) separation system interfaces the GNB-based spacecraft to practically any launch vehicle.
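Two of the figures quoted for the mission can be sanity-checked with textbook formulas: the roughly 100-minute period of a ~775 km circular orbit follows from Kepler's third law, and the 2-8 MB/day data volume fits comfortably in a single pass at the 256 kbit/s S-band rate. The constants below are standard values, not from the source:

```python
import math

MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6378.137e3   # Earth's equatorial radius, m

def circular_period_min(alt_km):
    """Keplerian period of a circular orbit at the given altitude, in minutes."""
    a = R_EARTH + alt_km * 1e3
    return 2 * math.pi * math.sqrt(a**3 / MU) / 60.0

def downlink_minutes(mbytes, kbit_s=256):
    """Minutes needed to downlink a day's data at the quoted S-band rate."""
    return mbytes * 8 * 1000 / kbit_s / 60.0

print(circular_period_min(775))   # ≈ 100.3 min, consistent with the quoted 100.32
print(downlink_minutes(8))        # ≈ 4.2 min for a worst-case 8 MB day
```

Even the maximum daily volume needs only a few minutes of S-band contact, so a small ground-station network is sufficient.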
Spacecraft up to 12 kg may be accommodated in existing XPOD designs. UTIAS/SFL refers to the XPOD launch of the nanosatellites as NLS-8 (Nanosatellite Launch Service-8): UniBRITE (NLS-8.1), BRITE-Austria (NLS-8.2) and AAUSAT3 (NLS-8.3). Figure 9: Illustration of the nanosatellite (GNB) separation system (XPOD Duo), image credit: UTIAS/SFL The six secondary payloads manifested on this flight were: • Sapphire (Space Surveillance Mission of Canada), a minisatellite with a mass of 148 kg. • NEOSSat (Near-Earth Object Surveillance Satellite), a microsatellite of Canada with a mass of 74 kg. • BRITE-Austria/TUGSat-1 (Graz University of Technology), Austria, a nanosatellite with a mass of ~6.5 kg. • UniBRITE (Technical University of Vienna), Austria, a nanosatellite with a mass of ~6.5 kg. • AAUSat-3 (Aalborg University CubeSat-3), a student-developed nanosatellite (1U CubeSat) of AAU, Aalborg, Denmark. The project is sponsored by DaMSA (Danish Maritime Safety Administration). • STRaND-1 (Surrey Training, Research and Nanosatellite Demonstrator), a 3U CubeSat (nanosatellite) of SSTL (Surrey Satellite Technology Limited) and the USSC (University of Surrey Space Centre), Guildford, UK. STRaND-1 has a mass of ~4.3 kg. Orbit: Sun-synchronous near-circular orbit, mean altitude = ~775 km, inclination = 98.6295º, orbital period of 100.32 minutes, LTAN (Local Time on Ascending Node) = 6:00 hours. Note: SARAL (and also the BRITE constellation) will fly in the same orbit as Envisat, to ensure continuity of altimetry observations in the long term. On the other hand, the local time of passage over the equator will be different due to specific coverage requirements for the instruments of the Argos system. Figure 10: This image shows the various spacecraft on the PSLV-C20 upper stage (image credit: UTIAS/SFL, Ref.
25) Mission status of the BRITE constellation: • October 2018: On 25 February 2013 the first Austrian satellites, TUGSAT-1/BRITE-Austria and its sister UniBRITE, were launched. They are part of the world's first nanosatellite constellation, called BRITE (BRIght Target Explorer), dedicated to the observation of the brightness variations of massive luminous stars. The constellation consists of five nearly identical spacecraft from Austria, Canada, and Poland. - 37 star field observations, typically lasting for up to 180 days, have been completed; five are currently ongoing. So far, 502 stars have been observed and 4.2 million photometric measurements have been made by BRITE Constellation. The selection of star fields is in the hands of the Executive BRITE Science Team (BEST); the operational teams are responsible for mission planning, configuration and control of the spacecraft. - Although the CCD sensors of the spacecraft experience degradation by radiation, a method called chopping was introduced to mitigate these effects. Although the design and testing did not strictly follow ECSS standards, care was taken not to compromise on testing. In particular, substantial effort was dedicated to system-level testing, often under-represented in nanosatellite projects. In addition, 1000 hours of burn-in tests were carried out. - Since BRITE uses the science S-band in the downlink at rates up to 256 kbit/s and amateur radio UHF spectrum in the uplink, interference problems started in late 2013. These problems were cured by upgrading the communications system. - Five and a half years of successful operation of a constellation of five nanosatellites in an international context, with three ground stations in Europe and Canada, show that a science mission with demanding requirements can be conducted reliably and with excellent scientific results. - Operational procedures allowed optimization of battery lifetime and thermal conditions in the spacecraft.
The chopping method introduced during the mission proved to mitigate the radiation-induced degradation effects on the CCDs (hot pixels and warm columns). This was made possible by the excellent ADCS performance, which is better than specified and is in the 1 arcminute range. - An analysis of the performance of the Austrian BRITE satellites, which have been in space for the longest period, indicates that they can be operated for at least two more years. - Scientifically, the BRITE Constellation: a) comprised the collection of high-precision brightness measurements (6 months long) for more than 500 stars in two colors, b) allowed the detailed investigation of pulsations driven by gravity modes in one of the brightest Slowly Pulsating B stars, c) enabled the modeling of spot-induced rotational variations in the presence of pulsations, d) led to a unique study of the 'heartbeat' phenomenon of close interacting binary systems, and e) provided unprecedented details of the interaction between stellar pulsations, circumstellar disks and shells. - This proves that well-engineered and carefully tested nanosatellites can be operated over several years while meeting stringent requirements. • October 25, 2017: A Canadian-led international team of astronomers recently discovered that spots on the surface of a supergiant star are driving huge spiral structures in its stellar wind (Figure 11). Their results are published in a recent edition of Monthly Notices of the Royal Astronomical Society. 27) 28) - Massive stars are responsible for producing the heavy elements that make up all life on Earth. At the end of their lives they scatter the material into interstellar space in catastrophic explosions called supernovae - without these dramatic events, our solar system would never have formed. - Zeta (ζ) Puppis is an evolved massive star known as a 'supergiant'. It is about sixty times more massive than our sun, and seven times hotter at the surface.
Massive stars are rare, and usually found in pairs called 'binary systems' or small groups known as 'multiple systems'. Zeta Puppis is special, however, because it is a single massive star, moving through space alone, at a velocity of about 60 km/s. "Imagine an object about sixty times the mass of the Sun, travelling about sixty times faster than a speeding bullet!" the investigators say. Dany Vanbeveren, professor at Vrije Universiteit Brussel, gives a possible explanation as to why the star is travelling so fast; "One theory is that Zeta Puppis has interacted with a binary or a multiple system in the past, and been thrown out into space at an incredible velocity". - Using a network of 'nanosatellites' from the "BRIght Target Explorer" (BRITE) space mission, astronomers monitored the brightness of the surface of Zeta Puppis over a six-month period, and simultaneously monitored the behavior of its stellar wind from several ground-based professional and amateur observatories. - Tahina Ramiaramanantsoa (PhD student at the Université de Montréal and member of the Centre de Recherche en Astrophysique du Québec; CRAQ) explains the authors' results: "The observations revealed a repeated pattern every 1.78 days, both at the surface of the star and in the stellar wind. The periodic signal turns out to reflect the rotation of the star through giant 'bright spots' tied to its surface, which are driving large-scale spiral-like structures in the wind, dubbed CIRs (Co-rotating Interaction Regions)." - "By studying the light emitted at a specific wavelength by ionized helium from the star's wind," continued Tahina, "we clearly saw some 'S' patterns caused by arms of CIRs induced in the wind by the bright surface spots!". 
In addition to the 1.78-day periodicity, the research team also detected random changes on timescales of hours at the surface of Zeta Puppis, strongly correlated with the behavior of small regions of higher density in the wind known as "clumps" that travel outward from the star. "These results are very exciting because we also find evidence, for the first time, of a direct link between surface variations and wind clumping, both random in nature", comments investigating team member Anthony Moffat, emeritus professor at Université de Montréal, and Principal Investigator for the Canadian contribution to the BRITE mission. - After several decades of puzzling over the potential link between the surface variability of very hot massive stars and their wind variability, these results are a significant breakthrough in massive star research, essentially owing to the BRITE nanosats and the large contribution by amateur astronomers. "It is really exciting to know that, even in the era of giant professional telescopes, dedicated amateur astronomers using off-the-shelf equipment in their backyard observatories can play a significant role at the forefront of science", says investigating team member Paul Luckas from the International Centre for Radio Astronomy Research (ICRAR) at the University of Western Australia. Paul is one of six amateur astronomers who intensively observed Zeta Puppis from their homes during the observing campaign, as part of the 'Southern Amateur Spectroscopy initiative'. - The physical origins of the bright surface spots and the random brightness variations discovered in Zeta Puppis remain unknown at this point, and will be the subject of further investigations, probably requiring many more observations using space observatories, large ground-based facilities, and small telescopes alike. Figure 11: Artist’s impression of the hot massive supergiant Zeta Puppis. 
The rotation period of the star indicated by the new BRITE observations is 1.78 d, and its spin axis is inclined by (24 ± 9)º with respect to the line of sight (image credit: Tahina Ramiaramanantsoa) • May 2016: The BRITE-Constellation has been successfully observing the night sky’s most luminous stars since late 2013. Over 300 stars have been studied during the course of 15 completed campaigns, with each star assigned to a PI and tied to a unique observation proposal. To date, data sets corresponding to the first 10 of these campaigns have been released to BRITE PIs. Within only a short time since the first data sets were released, notable scientific outputs have already been produced. Three papers have been published in refereed journals with many more in the pipeline. With a huge volume of data already being studied on the ground and with the capabilities of the constellation expanding all the time, the future looks bright for further discovery. 29) - BRITE is a unique and stand-out mission in many ways. It is the first and only mission to conduct two-color spaceborne photometry, which is critical for stellar pulsation mode identification (observed pulsation amplitudes vary across measurement bands) and has already paid dividends by revealing previously unobserved aspects of stellar properties (Ref. 32). The fact that BRITE is a constellation enables uninterrupted and overlapping coverage of star fields, allowing the mission to resolve stellar pulsation modes with periods from scales of hours to months. The BRITE satellites are believed to be the first nanosatellites dedicated to astronomy, the first orbiting astronomy constellation of any size ever launched, and the first spacecraft at this scale to achieve arc-second level pointing. - As a result of its many accomplishments, the mission has been a great success. The constellation produces a quality of output which meets and exceeds both requirements and expectations.
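The value of two-color photometry for mode identification comes down to measuring the same pulsation mode's amplitude in each band and comparing. A minimal sketch with synthetic blue/red light curves; the 8 mmag versus 5 mmag amplitudes (a 1.6x blue/red ratio) are arbitrary demo values, not a BRITE result:

```python
import numpy as np

def amplitude_at(t, y, period):
    """Least-squares amplitude of a sinusoid of known period in y(t)."""
    w = 2.0 * np.pi / period
    A = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
    (a, b, _), *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.hypot(a, b))

# Synthetic simultaneous blue/red photometry of a single pulsation mode,
# sampled every ~29 min for 30 days (hypothetical cadence).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 30.0, 1500)          # days
P = 0.5                                   # mode period [days]
blue = 8.0 * np.sin(2 * np.pi * t / P) + rng.standard_normal(t.size)  # mmag
red = 5.0 * np.sin(2 * np.pi * t / P) + rng.standard_normal(t.size)   # mmag

ratio = amplitude_at(t, blue, P) / amplitude_at(t, red, P)
print(f"blue/red amplitude ratio: {ratio:.2f}")   # → ~1.6
```

In practice it is this band-to-band amplitude ratio (and phase difference), compared against pulsation models, that helps discriminate between candidate mode geometries.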
Although BRITE-Constellation has already swollen to a size not dreamed of when it was first conceived, the mission continues to garner interest in participation from other nations. Attracted by the enticing nature of the science, the success of the transfer programs with Austria and Poland, and the relatively low entry cost, six other nations have to date expressed interest in contributing additional satellites to the constellation. In anticipation of potential future expansion, discussions have taken place on how those satellites could best be used to further enhance the science, as the addition of BRITE-Austria did years before. At the top of the list is a BRITE satellite capable of observing in the ultraviolet, an addition which would push this already highly valuable mission into entirely new territory. - Observation targets are selected by the BRITE Executive Science Team (BEST), which comprises twelve voting members (four per member country) and six non-voting members. 30) Once per year BEST convenes to generate observation schedules and strategies (based on the evaluation of the scientific merit of submitted target proposals), review mission status, and provide overall management and control of the mission. Spacecraft operations, which are heavily automated, are managed part-time by small engineering teams residing at each of the respective mission control centers. - Spanning the two and a half years since the BRITE constellation began nominal science operations, fifteen observation campaigns have been executed. A Lambert projection of the night sky, illustrating the star fields observed (as well as those planned in the near future) by the BRITE constellation, is shown in Figure 12, where the ovals represent the field of view of the imager at each field. Within the figure, it is evident that the majority of the observed fields are clustered in a single plane.
This is not a coincidence, as the greatest density of stars (which includes BRITE’s target luminous stars) lies within the Milky Way’s galactic plane. • Feb. 5, 2016: The BRITE Constellation of five nanosatellites is operational in 2016. — The constellation is revealing new information about a well-studied star, Alpha Circini. A data analysis by the BRITE Constellation Team shows behavior in this star that has not been observed before, according to Werner Weiss from the University of Vienna, Austrian Principal Investigator for BRITE and lead author of the paper: “BRITE-Constellation shows complex behavior in Alpha Cir due to both rotation and pulsation. Moreover, that behavior is different when observed in different colors. This result clearly demonstrates the power of BRITE-Constellation and the unique science that is possible using these tiny two-color precision instruments in space.” 31) 32) - The BRITE-Constellation is a coordinated mission of five nanosatellites in LEO, each hosting an optical telescope of 3 cm aperture feeding an uncooled CCD, and observing selected targets in a 24º FOV. Each nanosatellite is equipped with a single filter; three have a red filter (central wavelength ~620 nm) and two have a blue filter (central wavelength ~420 nm). The satellites have overlapping coverage of the target fields to provide two-color, time-resolved photometry. - The five nanosatellites, each of ~7 kg, are designated as: BRITE-Austria and UniBRITE (Austria), BRITE-Lem and BRITE-Heweliusz (Poland) and BRITE-Toronto (Canada). A sixth nanosat, BRITE-Montreal, did not detach from its launch vehicle. Figure 13: Location of α Circini in the southern constellation Circinus (image credit: BRITE collaboration) - The BRITE data are especially valuable to astronomers because of their two-color observations. For stars, color and temperature go hand-in-hand.
Having the ability to examine stars in different colors, with data taken every few minutes for up to six months, is providing new insights into their inner workings. - Using these precision instruments, the BRITE-Constellation’s mission is to perform a survey of the most luminous stars in the Earth’s sky via a branch of astronomy called asteroseismology – literally, the study of “starquakes”. Typically massive and short-lived, these stars dominate the ecology of the Universe and are responsible for seeding the interstellar medium with the “heavy” elements critical for the formation of planetary systems and organic life. In short, BRITE studies classes of stars that, billions of years ago, made life on Earth possible. - With an apparent magnitude of 3.19, Alpha Cir is the brightest star in the southern constellation Circinus and belongs to the class of stars known as rapidly oscillating Ap stars. The star was observed by four of the BRITE satellites from March to August 2014 and will be observed again in 2016. It is hoped that these new observations will provide both a better understanding of its complex behavior and a chance to confirm a new oddity about this already peculiar star: that its rate of rotation is slowing. • On Feb. 25, 2015, the BRITE-Austria and UniBRITE nanosatellites completed 2 years on orbit. BRITE-Austria is operated from Graz while UniBRITE is operated from Toronto. 33) - BRITE-Constellation is the world‘s first nanosatellite constellation dedicated to astronomy. Five components of BRITE-Constellation are operational. • Oct. 2014: Of the five operational spacecraft, UniBRITE and BRITE-Toronto are operated by UTIAS/SFL, from the Toronto Earth station. BRITE-Austria is operated by the Technical University of Graz and the Polish BRITEs are operated by the Polish Space Research Center and the Copernicus Astronomical Center.
34) - Because UniBRITE and BRITE-Toronto are based on SFL’s GNB (Generic Nanosatellite Bus), with which SFL has considerable on-orbit experience, commissioning of the core hardware complement was conducted at an accelerated pace. Within two weeks UniBRITE had commissioned everything but the payload and the fine-pointing hardware/algorithms. As UniBRITE was one of the first BRITE spacecraft in orbit, a few notable hurdles were encountered and overcome before regular science operations were achieved in September 2013. — By the launch of BRITE-Toronto in June 2014, operations were running very smoothly and, tellingly, within eight days of its launch BRITE-Toronto had been fully commissioned and was collecting science data (Ref. 34). - Fine Pointing Stability and Accuracy: The performance of fine pointing on BRITE is typically assessed using high-cadence star-field images taken by the payload. Shown in Figure 14 is the motion of the centroid of a stellar PSF, over a single 15-minute observation, for both UniBRITE and BRITE-Toronto. In each plot the red circle represents the 78” requirement and the green circle the RMS error. Clearly, even for UniBRITE with the AA-MST, performance (45”) is well within requirements, and quite consistent with the simulation results. BRITE-Toronto, with the ST-16 star tracker, is almost four times better than UniBRITE over a similar time frame, with an RMS performance of 12”. - BRITE not only needs to point stably during an observation, but also over the span of several months as the target is lost and reacquired. To assess pointing performance over a longer period, BRITE instrument scientist Rainer Kuschnig of the University of Vienna compiled several weeks of UniBRITE and BRITE-Toronto data and found the long-term RSS values of the standard deviations in each axis to be 52” for UniBRITE and 15” for BRITE-Toronto.
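The RMS pointing numbers quoted above can be reproduced from payload centroid time series. A sketch with synthetic centroid data, using the 26.52 arcsec/pixel plate scale from the instrument description; the jitter level is an assumption chosen to mimic UniBRITE's ~45" performance:

```python
import numpy as np

ARCSEC_PER_PIXEL = 26.52   # BRITE photometer plate scale

def pointing_rms_arcsec(x_pix, y_pix):
    """RMS radial excursion of a stellar centroid about its mean, in arcsec."""
    dx = (x_pix - x_pix.mean()) * ARCSEC_PER_PIXEL
    dy = (y_pix - y_pix.mean()) * ARCSEC_PER_PIXEL
    return float(np.sqrt(np.mean(dx**2 + dy**2)))

# Hypothetical 15-minute centroid series (1 s cadence) with ~45" jitter;
# the per-axis sigma is set so the radial RMS lands near 45".
rng = np.random.default_rng(2)
sigma_pix = 45.0 / ARCSEC_PER_PIXEL / np.sqrt(2.0)
x = 100.0 + sigma_pix * rng.standard_normal(900)
y = 200.0 + sigma_pix * rng.standard_normal(900)

rms = pointing_rms_arcsec(x, y)
print(f"RMS pointing error: {rms:.1f} arcsec (requirement: 78)")
```

The same statistic computed per axis over weeks of data, then root-sum-squared, gives the long-term figures (52" and 15") quoted above.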
Hence, in both cases, from field acquisition to reacquisition, the spacecraft is able to return the stars to the same set of pixels, and the observed pointing stability is significantly better than the requirement. • UniBRITE commenced routine science observations of the Orion star-field in October 2013. Starting in December, BRITE-Austria joined UniBRITE on the campaign providing the mission’s first two-color photometric data. Conveniently, in late 2013, MOST (Microvariability and Oscillations of STars) was also observing Eta-Orionis (a quadruple system with an eclipsing binary pair) and, for a time, all three spacecraft were performing simultaneous observations. Through the simultaneous observations with MOST, the team was able to confirm that, to first order, the quality of the BRITE data was as good as expected. A minimally post-processed light-curve of Eta-Orionis, taken by UniBRITE and with a binary transit clearly evident, is shown in Figure 15 (Ref. 34). Figure 15: Light curve from Eta-Orionis, from UniBRITE data (image courtesy of Rainer Kuschnig, University of Vienna). The large black squares are the means of the vertical groups of small red squares. The light curve is entirely consistent with simultaneous observations from MOST on the same star (image credit: University of Vienna, UTIAS/SFL) • Summer 2014: The commissioning of the Austrian BRITEs, being the first to launch, was complete by October 2013, and since then they have been routinely collecting science data, each orbit. The first field investigated, from October 2013 to March 2014, was Orion. Once Orion had set, the spacecraft moved on to observing the field Centaurus, while also observing secondary target field Sagittarius in tandem (Ref. 10). - While commissioning of such a cutting-edge mission met with some hurdles, all now have been cleared, resulting in a quality of science generation which meets and exceeds both requirements and expectations. 
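The orbit-averaged points in light curves like Figure 15 (the black squares, each the mean of one orbit's worth of samples) are straightforward to compute. A sketch assuming a ~101-minute orbit and hypothetical exposure counts; the stellar signal and noise levels are invented for the demo:

```python
import numpy as np

ORBIT_MIN = 101.0   # approximate BRITE orbital period in minutes (assumption)

def bin_by_orbit(t_min, flux):
    """Average a light curve within each spacecraft orbit.

    t_min: observation times in minutes; returns (bin centers, bin means).
    """
    orbit = np.floor(t_min / ORBIT_MIN).astype(int)
    centers = np.array([t_min[orbit == k].mean() for k in np.unique(orbit)])
    means = np.array([flux[orbit == k].mean() for k in np.unique(orbit)])
    return centers, means

# Hypothetical raw photometry: 30 exposures in the first ~20 minutes of
# each of 10 orbits; slow stellar variation plus white noise.
rng = np.random.default_rng(3)
t = np.concatenate([k * ORBIT_MIN + np.linspace(0.0, 20.0, 30) for k in range(10)])
f = 1.0 + 0.005 * np.sin(2 * np.pi * t / (24 * 60)) + 0.01 * rng.standard_normal(t.size)

tc, fm = bin_by_orbit(t, f)
print(f"{fm.size} orbit means; point-to-point scatter "
      f"{np.std(f):.4f} -> {np.std(fm):.4f}")
```

Averaging N exposures per orbit suppresses white noise by roughly 1/sqrt(N) while preserving variability slower than the orbital period, which is why the binned black squares trace the transit so much more cleanly than the raw red points.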
• Feb. 2014: The BRITE constellation is the world‘s first nanosatellite constellation dedicated to an astronomy mission. 35) - The first 3 members of the BRITE constellation are on orbit, operating nominally - The planned constellation will be completed this year - Scientific & mission requirements fully met - Scientific data collection under way. • Activities in the period summer – fall 2013 (Ref. 35). - Payload characterization (TUGSAT-1): Verification of PSF (Point Spread Function), CCD sensitivity, identification of hot pixels on CCD (removal by software). - Attitude control system optimization (UniBRITE) - Reduction of commissioning time - Fine pointing achieved in November 2013, performance better than specification - Science data collection since November. • October 1, 2013: Calibration of attitude sensors of BRITE-Austria is ongoing to increase the fine pointing performance. In addition, payload characterization and hot pixel variations of the instrument's CCD sensor are investigated (Ref. 39). Table 6: On-orbit attitude performance of a nanosatellite telescope (Ref. 22) • In Sept. 2013, UniBRITE and BRITE-Austria started to observe stars in the Orion constellation. While testing and optimization efforts are still ongoing, science data are also collected regularly. In Figure 16, the exposure time was set to 1 second. The outer circle marks the unvignetted field of view of the instrument, which has a diameter of about 24º.
• In September 2013, the commissioning of BRITE-Austria was not yet completed, as the fine tuning and in-orbit calibration of sensor parameters, as well as the performance characterization of the instrument, were still ongoing. 36) - Nevertheless, the ground station network and distributed software concept had already been used and validated. On the one hand, contacts with UniBRITE were successfully established from Graz; on the other hand, during a breakdown of the ground station in Graz, BRITE-Austria was commanded through the ground station in Warsaw, Poland. - Furthermore, during commissioning it was successfully shown that the station in Graz is able to run autonomously and that remote access enables operators to monitor and control the ground station’s and satellite’s behavior even while off-site. Figure 17: Ground coverage of BRITE-Austria for the ground station network (image credit: TU Graz) • July 15, 2013: BRITE-Austria is currently performing the transition to fine pointing. The satellite is in good overall health and ready to start observations of bright stars (Ref. 39). • On March 28, 2013, the UniBRITE nanosatellite of the University of Vienna experienced a very close encounter with OSCAR-15 [aka UoSat-4, a microsatellite of SSTL which was launched on January 22, 1990 as a secondary payload to SPOT-2 of CNES from Kourou. OSCAR-15 experienced an on-board electronics failure shortly after launch and is no longer operational]. The UniBRITE project team experienced some anxious moments during the predicted close flyby of OSCAR-15. 37) • March 23, 2013: After successful detumbling, the TUGSat-1 spacecraft was put into coarse pointing mode. During the 404th orbit the first star image was taken by the scientific instrument (telescope) in coarse 3-axis pointing. The first star image, showing Delta Corvus B9V (magnitude 2.95), was downloaded and analyzed by the experts from the Institute of Astrophysics in Vienna.
Initial assessment of the payload performance indicates that the specifications are met. 38) Figure 18: Image of Delta Corvus B9V, mag = 2.95 (left); point spread function of the brightness distribution (right), image credit: TU Graz, University of Vienna (Ref. 35) • March 5, 2013: TUGSAT-1 completed its 100th orbit. All subsystems tested so far show excellent health status. At present the attitude control system is being checked out. Detumbling of the spacecraft is planned for the coming days. • A successful contact with the TUGSat-1 (BRITE-Austria) nanosatellite was established during the first orbital pass over Graz (3 hours after launch). This constituted the start of the commissioning phase of the satellite, which is expected to last for 3 months. 39) The objective is to examine the apparently brightest stars in the sky for variability using the technique of precise differential photometry on time scales of hours and longer. The constellation of four nanosatellites is divided into two pairs, with each member of a pair having a different optical filter. The requirements call for observation of a region of interest by each nanosatellite in the constellation for 100 days or longer. 40) The science payload of each nanosatellite consists of a five-lens telescope with an aperture of 30 mm and the interline-transfer progressive-scan CCD detector KAI 11002-M from Kodak with 11 Mpixels, along with a baffle to reduce stray light. The optical elements are housed inside the optical cell and are held in place by spacers. The photometer has a resolution of 26.52 arcsec/pixel and a field of view of 24º. The mechanical design of the blue and red instruments is nearly identical; only the dimensions of the lenses are different (Ref. 6 and Ref. 7).
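A quick consistency check of the optical parameters just quoted: with a 26.52 arcsec/pixel plate scale, the 24º unvignetted field corresponds to roughly 3,260 pixels across. The 4008 x 2672 pixel format used below is the published full resolution of the Kodak KAI-11002 and is an assumption here:

```python
# Plate-scale arithmetic for the BRITE photometer, from the figures quoted
# above (26.52 arcsec/pixel, 24 deg unvignetted field). The 4008 x 2672
# pixel format is the published KAI-11002 resolution (assumed).
PLATE_SCALE = 26.52            # arcsec per pixel
NX, NY = 4008, 2672            # KAI-11002 active pixels (assumed)

fov_x_deg = NX * PLATE_SCALE / 3600.0       # field along the long axis
fov_y_deg = NY * PLATE_SCALE / 3600.0       # field along the short axis
circle_px = 24.0 * 3600.0 / PLATE_SCALE     # pixel diameter of the 24 deg circle

print(f"chip field: {fov_x_deg:.1f} x {fov_y_deg:.1f} deg")
print(f"24 deg unvignetted circle spans {circle_px:.0f} pixels")
```

With these assumed chip dimensions the 24º circle overfills the short axis of the detector, so the chip samples only part of the unvignetted field; in practice only small sub-rasters around the target stars need to be read out and downlinked.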
Figure 19: Illustration of the BRITE telescope and baffle (image credit: UTIAS/SFL, TU Graz) Figure 20: The optics design of the photometer (image credit: UTIAS/SFL, Ceravolo) The effective wavelength range of the instrument is limited in the red by the sensitivity of the detector and in the blue by the transmission properties of the glass used for the lenses. The filters were designed such that for a star of 10,000 K (the average temperature of all BRITE target stars) both filters would generate the same amount of signal on the detector. The blue filter covers a wavelength range of 390-460 nm and the red filter 550-700 nm; both are assumed to have a maximum transmission of 95%. Table 7: Characteristics of the Kodak KAI 11002-M CCD detector The photometer instrument has a mass of ≤ 0.9 kg and a power consumption of ≤ 3.5 W. The instrument uses a custom set of electronics to operate the imager. The electronics include four A/D converters (14 bit) to convert the analog pixel values, and 32 MB of memory to temporarily hold a full-frame image. The imager and memory timing and signals are controlled using a CPLD (Complex Programmable Logic Device). Figure 21: Schematic layout of the CCD detector array (image credit: University of Vienna) Figure 22: The BRITE photometer and star tracker (image credit: TU Graz) Figure 23: Photo of the CCD detector (image credit: TU Graz) Ground segment of the BRITE constellation: All participants in the BRITE constellation have their own ground station. 41) • Graz, Austria, TUG (Mission Control for BRITE-Austria and UniBRITE) • TUV (Technical University Vienna), a second station is located at the ITC of TUV • Toronto, Canada, UTIAS/SFL (Mission Control for BRITE/CanX-3). A ground station at UTIAS/SFL has been operating since early 2003. It has served as the template design and concept for two other ground stations (one in Vienna and one in Vancouver) which communicate with the MOST satellite.
Furthermore, this station has now been used regularly for communicating with three other satellites in orbit. This well-proven equipment is also the baseline for BRITE-Constellation ground-space communications. At UTIAS/SFL, an additional ground station will be installed in 2011 which will also support BRITE-Constellation operations. • Warsaw, Poland (Mission Control for BRITE-PL). The ground station is located at the CAC (Nicolaus Copernicus Astrophysical Center) in Warsaw. All stations will track and collect data from all BRITE nanosatellites. • Distributed automatic ground station operations • Science teams can retrieve verified raw data from servers. Ground Station Network: For operating the BRITE constellation, a ground station network is used. The advantages of the network are increased availability and redundancy (Ref. 42). The general operations concept foresees that each master ground station is in charge of its own satellite(s) and tracks other satellites only in case of unavailability of the corresponding master station or in case of emergency. The master station is responsible for its own satellite(s) and is the only one actually controlling the spacecraft. If other stations contact a satellite, they normally act as relay stations, uplinking incoming commands from the master station and forwarding downlinked data to the master station. In case of failure of a master station, its duties can be temporarily taken over by another station in the network. An example of ground station network operations and data flow for BRITE-Austria is shown in Figure 24. While all stations can establish contact with the satellite, the entire data flow is handled by the Graz ground station as the satellite’s master station. The data flow is handled by a distributed ground software concept. BRITE-Austria ground station: A major driver for the ground station design is the amount of science data to be downloaded.
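Sizing such a downlink is simple arithmetic. Using the BRITE-Austria figures from the mission documentation (about 10 MByte of science data per day over a 32 kbit/s link, with roughly one hour of total contact time per day):

```python
# Downlink budget check for BRITE-Austria, using figures from the
# mission description (assumed here: decimal megabytes).
DATA_PER_DAY_BYTES = 10e6      # ~10 MByte/day of science data
DOWNLINK_BPS = 32_000          # 32 kbit/s downlink

download_minutes = DATA_PER_DAY_BYTES * 8 / DOWNLINK_BPS / 60
print(f"time to downlink one day of data: {download_minutes:.1f} min")
# → 41.7 min, comfortably inside ~60 min of daily ground contact
```

The result matches the "about 42 minutes" quoted for the station design, leaving margin for protocol overhead, retransmissions, and passes lost to weather or conflicts.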
For BRITE-Austria, the data amount to be downloaded is up to 10 MByte/day. The contact times with the satellite are limited to about 10-12 minutes/pass and a total contact time of roughly 1 hour/day. A downlink data rate of 32 kbit/s allows the daily data to be downloaded in about 42 minutes, providing sufficient margin. 42) In addition, the ground station in Graz serves as mission control for BRITE-Austria. It is responsible for the spacecraft and shall guarantee data integrity and storage of raw satellite data. The ground station antennas are a 3 m parabolic meshed antenna for S-band (35 dBi gain) and an 18 element, circularly polarized cross-Yagi antenna for UHF (16 dBi gain). Both antennas are mounted on the same tower and are controlled by the same azimuth and elevation rotators, achieving the same tracking performance for uplink and downlink. Figure 25: Block diagram of the BRITE-Austria ground station (image credit: TU Graz) Figure 26: The orbits and ground station network of the BRITE-constellation (image credit: UTIAS/SFL) 1) A. F. J. Moffat, W. W. Weiss, S. M. Rucinski, R. E. Zee, M. H. van Kerkwijk, S. W. Mochnacki, J. M. Matthews, J. R. Percy, P. Ceravolo, C. C. Grant, “The Canadian BRITE NanoSatellite Mission,” Proceedings of ASTRO 2006 - 13th CASI (Canadian Aeronautics and Space Institute) Canadian Astronautics Conference, Montreal, Quebec, Canada, April 25-27, 2006 2) N. C. Deschamps, C. C. Grant, D. G. Foisy, R. E. Zee, A. F. J. Moffat, W. W. Weiss, ”The BRITE Space Telescope: A Nanosatellite Constellation for High-Precision Photometry of Bright Stars,” Proceedings of the 20th Annual AIAA/USU Conference on Small Satellites, Logan, UT, Aug. 14-17, 2006, paper: SSC06-X-1, URL: http://www.utias-sfl.net/docs/brite-ssc-2006.pdf 3) N. C. Deschamps, C. C. Grant, D. G. Foisy, R. E. Zee, A. F. J. Moffat, W. W.
Weiss, ”The BRITE Space Telescope: Using a Nanosatellite Constellation to Measure Stellar Variability in the Most Luminous Stars,” Proceedings of the 57th IAC/IAF/IAA (International Astronautical Congress), Valencia, Spain, Oct. 2-6, 2006, IAC-06-B5.2.7, URL: http://www.utias-sfl.net/docs/brite-iac-2006.pdf 4) O. Koudelka, G. Egger, B. Josseck, N. Deschamps, C. Grant, D. Foisy, R. Zee, W. Weiss, R. Kuschnig, A. Scholtz, W. Keim, “TUGSAT-1 / BRITE-Austria - The First Austrian Nanosatellite,” Proceedings of the 57th IAC/IAF/IAA (International Astronautical Congress), Valencia, Spain, Oct. 2-6, 2006, IAC-06-B5.2.06, URL: http://www.utias-sfl.net/docs/brite-iac-2006b.pdf 8) O. Koudelka, G. Egger, B. Josseck, N. Deschamps, C. Grant, D. Foisy, R. Zee, W. Weiss, R. Kuschnig, A. Scholtz, W. Keim, “TUGSAT-1 / BRITE-Austria - The First Austrian Nanosatellite,” Acta Astronautica, Vol. 64, 2009, pp. 1144-1149 9) Otto F. Koudelka, “The BRITE Nanosatellite Constellation,” Proceedings of the 50th Session of Scientific & Technical Subcommittee of UNCOPUOS, Vienna, Austria, Feb. 11-22, 2013, URL: http://www.oosa.unvienna.org/pdf/pres/stsc2013/tech-61E.pdf 10) Karan Sarda, C. Cordell Grant, Monica Chaumont, Seung Yun Choi, Bryan Johnston-Lemke, Robert E. Zee, “On-Orbit Performance of the Bright Target Explorer (BRITE) Nanosatellite Astronomy Constellation,” Proceedings of the AIAA/USU Conference on Small Satellites, Logan, Utah, USA, August 2-7, 2014, paper: SSC14-XII-6, URL: http://digitalcommons.usu.edu 11) Piotr Orleanski, Rafał Graczyk, Mirosław Rataj, Aleksander Schwarzenberg-Czerny, Tomasz Zawistowski, Robert E. Zee, “BRITE-PL – the first Polish scientific satellite,” Proceedings of the 4th Microwave & Radar Week, MRW-2010, Vilnius, Lithuania, June 14-18, 2010 12) Otto Koudelka, Werner Weiss, “BRITE-Austria, TUGSat-1,” UN/Austria/ESA Symposium on Small Satellite Programs for Sustainable Development: Payloads for Small Satellite Programs, Sept.
21-24, 2010, Graz, Austria 13) “The first Polish scientific satellite BRITE-PL will help in understanding the inner structure of brightest stars in our 14) “Canada adds two satellites to BRITE Constellation,” January 19, 2011 16) “Der erste österreichische Satellit - TUGSAT-1 / BRITE-Austria,” URL: http://www.tugsat.tugraz.at/info/press/press-release 17) Otto F. Koudelka, “The BRITE Nanosatellite Constellation,” Proceedings of the 49th Session of UNCOPUOS-STSC (UN Committee on the Peaceful Uses of Outer Space - Scientific and Technical Subcommittee), Vienna, Austria, Feb. 6-17, 2012, URL: http://www.oosa.unvienna.org/pdf/pres/stsc2012/tech-47E.pdf 18) F. M. Pranajaya, Robert E. Zee, “Generic Nanosatellite Bus for Responsive Mission,” 5th Responsive Space Conference, Los Angeles, CA, USA, April 23-26, 2007, AIAA-RS5 2007-5005, URL: http://www.responsivespace.com/Papers/RS5 19) Guy de Carufel, “Assembly, Integration and Thermal Testing of the Generic Nanosatellite Bus,” Thesis submitted for the degree of Master of Applied Science, University of Toronto, 2009, URL: https://tspace.library.utoronto.ca/bitstream 20) D. Sinclair, C. C. Grant, R. E. Zee, “Developing, Flying and Evolving a Canadian Microwheel Reaction Wheel - Lessons Learned,” Proceedings of ASTRO 2010, 15th CASI (Canadian Aeronautics and Space Institute) Conference, Toronto, Canada, May 4-6, 2010 21) D. Sinclair, C. C. Grant, R. E. Zee, “Enabling Reaction Wheel Technology for High Performance Nanosatellite Attitude Control,” Proceedings of the 21st Annual AIAA/USU Conference on Small Satellites, Logan, UT, USA, Aug. 13-16, 2007, SSC07-X-3, URL: http://www.sinclairinterplanetary.com/SSC07-X-3.pdf 22) Bryan Johnston-Lemke, Karan Sarda, Cordell C. Grant, Robert E. Zee, “BRITE-Constellation: On-Orbit Attitude Performance of a Nanosatellite Telescope,” Proceedings of the 64th International Astronautical Congress (IAC 2013), Beijing, China, Sept. 23-27, 2013, paper: IAC-13-C1.1.4 O. Koudelka, R. Kuschnig, M.
Wenger, ”Operational Experience with a Nanosatellite Science Mission,” Proceedings of the 69th IAC (International Astronautical Congress), Bremen, Germany, 1-5 October 2018, paper: IAC-18.B4.3.6, URL: https://iafastro.directory/iac/proceedings 27) ”Spots on Supergiant Star Drive Spirals in Stellar Wind,” Royal Astronomical Society, 25 Oct. 2017, URL: https://www.ras.org.uk/news-and-press 28) Tahina Ramiaramanantsoa, Anthony F. J. Moffat, Robert Harmon, Richard Ignace, Nicole St-Louis, Dany Vanbeveren, Tomer Shenar, Herbert Pablo, Noel D. Richardson, Ian D. Howarth, Ian R. Stevens, Caroline Piaulet, Lucas St-Jean, Thomas Eversberg, Andrzej Pigulski, Adam Popowicz, Rainer Kuschnig, Elżbieta Zocłońska, Bram Buysschaert, Gerald Handler, Werner W. Weiss, Gregg A. Wade, Slavek M. Rucinski, Konstanze Zwintz, Paul Luckas, Bernard Heathcote, Paulo Cacella, Jonathan Powles, Malcolm Locke, Terry Bohlsen, André-Nicolas Chené, Brent Miszalski, Wayne L. Waldron, Marissa M. Kotze, Enrico J. Kotze, Torsten Böhm, ”BRITE-Constellation high-precision time-dependent photometry of the early-O-type supergiant Zeta Puppis unveils the photospheric drivers of its small- and large-scale wind structures,” Monthly Notices of the Royal Astronomical Society (MNRAS), stx2671, https://doi.org/10.1093/mnras/stx2671, published 13 October 2017 29) Karan Sarda, C. Cordell Grant, Robert E. Zee, ”Three stellar years (and counting) of precision photometry by the BRITE astronomy constellation,” Proceedings of the 14th International Conference on Space Operations (SpaceOps 2016), Daejeon, Korea, May 16-20, 2016, URL: http://arc.aiaa.org/doi/book/10.2514/MSPOPS16 30) W. W. Weiss, S. M. Rucinski, A. F. J. Moffat, A. Schwarzenberg-Czerny, O. F. Koudelka, C. C. Grant, R. E. Zee, R. Kuschnig, St. Mochnacki, J. M. Matthews, P. Orleanski, A. Pamyatnykh, A. Pigulski, J. Alves, M. Guedel, G. Handler, G. A. Wade, K.
Zwintz, and the CCD and Photometry Tiger Teams, ”BRITE-Constellation: Nanosatellites for Precision Photometry of Bright Stars,” Publications of the Astronomical Society of the Pacific, Vol. 126, No. 940, June 2014, pp. 573-585, URL: http://arxiv.org/pdf/1406.3778v1.pdf 32) W. W. Weiss, H.-E. Fröhlich, A. Pigulski, A. Popowicz, D. Huber, R. Kuschnig, A. F. J. Moffat, J. M. Matthews, H. Saio, A. Schwarzenberg-Czerny, C. C. Grant, O. Koudelka, T. Lüftinger, S. M. Rucinski, G. A. Wade, J. Alves, M. Guedel, G. Handler, St. Mochnacki, P. Orleanski, B. Pablo, A. Pamyatnykh, T. Ramiaramanantsoa, J. Rowe, G. Whittaker, T. Zawistowski, E. Zocłonska, K. Zwintz, ”The roAp star α Circini as seen by BRITE-Constellation,” Astronomy & Astrophysics manuscript No. alCirArchiv (© ESO 2016), January 20, 2016, URL: http://arxiv.org/pdf/1601.04833v1.pdf 33) O. Koudelka, M. Unterberger, P. Romano, W. Weiss, R. Kuschnig, “BRITE Constellation – 2 Years in Orbit,” 52nd session of the Scientific and Technical Subcommittee, UNOOSA (United Nations Office for Outer Space Affairs), Vienna, Austria, Feb. 2-13, 2015 34) C. Cordell Grant, Karan Sarda, Monica Chaumont, Robert E. Zee, “On-Orbit Performance of the BRITE Nanosatellite Astronomy Constellation,” Proceedings of the 65th International Astronautical Congress (IAC 2014), Toronto, Canada, Sept. 29 - Oct. 3, 2014, paper: IAC-14-B4.2.3 35) O. Koudelka, M. Unterberger, P. Romano, W. Weiss, R. Kuschnig, “BRITE – One Year in Orbit,” Proceedings of the 51st Session of Scientific & Technical Subcommittee of UNCOPUOS, Vienna, Austria, Feb. 11-22, 2014, URL: http://www.unoosa.org/pdf/pres/stsc2014/tech-45E.pdf 36) Manuela Unterberger, Patrick Romano, Michael Bergmann, Rainer Kuschnig, Otto Koudelka, “Experience in Commissioning and Operations of the BRITE-Austria Nanosatellite Mission,” Proceedings of the 64th International Astronautical Congress (IAC 2013), Beijing, China, Sept.
23-27, 2013, paper: IAC-13-B6.2.9 37) “UniBRITE nach Schrecksekunde in bester Verfassung!,” March 28, 2013, URL: http://medienportal.univie.ac.at/uniview/forschung 40) A. Kaiser, S. Mochnacki, W. W. Weiss, “BRITE-Constellation: Simulation of Photometric Performance,” Communications in Asteroseismology, Volume 152, January, 2008 41) Alexander M. Beattie, Daniel D. Kekez, Andrew Walker, Robert E. Zee, “Evolution of Multi-Mission Nanosatellite Ground Segment Operations,” Proceedings of SpaceOps 2012, The 12th International Conference on Space Operations, Stockholm, Sweden, June 11-15, 2012, URL: http://www.spaceops2012.org/proceedings/documents/id1292362-Paper-001.pdf Romano, Manuela Unterberger, Otto Koudelka, “BRITE-Austria Ground Segment and Distributed Operations Concept,” Proceedings of the 63rd IAC (International Astronautical Congress), Naples, Italy, Oct. 1-5, 2012, paper: IAC-12-B4.3.9 The information compiled and edited in this article was provided by Herbert J. Kramer from his documentation of: ”Observation of the Earth and Its Environment: Survey of Missions and Sensors” (Springer Verlag) as well as many other sources after the publication of the 4th edition in 2002. - Comments and corrections to this article are always welcome for further updates ([email protected]).
THE HINDU PRIME MERIDIAN

The Hindu astronomers generally state the dimensions of the manda and sighra epicycles of a planet in terms of degrees and minutes, where a degree stands for the 360th part of the planet's mean orbit and a minute for the 60th part of a degree. The author of the present work, following Aryabhata I, has stated here the dimensions of the manda and sighra epicycles of the planets in terms of degrees, after dividing them by 4½. This division has evidently been made to simplify calculation. These epicycles will be required in the next chapter in finding the true longitudes of the planets.

Position of the Sun's apogee and the epicycles of the Sun and the Moon:

22. (The longitude of) the Sun's apogee, in degrees, is 70 plus 8; his epicycle is 3, and that of the Moon 7.

The previous remark applies to these epicycles also.

Position of the Hindu prime meridian:

23. The line which passes through Lanka, Vatsyapura, Avanti, Sthanesvara, and "the abode of the gods" is the prime meridian.

Lanka in Hindu astronomy denotes the place where the meridian of Ujjain (latitude 23°11'N, longitude 75°52'E from Greenwich) intersects the equator. It is one of the four hypothetical cities on the equator, called Lanka, Romaka, Siddhapura and Yamakoti (or Yavakoti). Lanka is described in the Surya-siddhanta as a great city (mahapuri) situated on an island (dvipa) to the south of Bharata-varsa (India). The island of Ceylon, which bears the name Lanka, however, is not the astronomical Lanka, as the former is about six degrees to the north of the equator. Vatsyapura is the same place as the Vatsagulma of the Maha-Bhaskariya. It may be identified with the town of Basim or Wasim (pronounced as Basim or Vasim), situated at a distance of 52 miles from the city of Akola.

1 See infra, ii. 9-10, 11-13, etc. 2 Cf. MBh, vii. 12(i), 16. 3 Cf. MBh, ii. 1-2; xii. 37, 39; ii. 1-2.
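Taking the divisor as 4½, the stated epicycle values can be multiplied back to recover the conventional measures in degrees. A minimal arithmetic sketch; the products below are my own check, not figures stated in the text:

```python
# The commentary says epicycle dimensions were stated after division by 4.5.
# Multiplying the verse-22 values back by 4.5 gives the conventional
# circumferences in degrees (an illustrative check, not from the text).
stated = {"Sun": 3, "Moon": 7}   # epicycles as given in verse 22
DIVISOR = 4.5
conventional = {body: value * DIVISOR for body, value in stated.items()}
print(conventional)  # {'Sun': 13.5, 'Moon': 31.5}
```

These recovered values, 13½° for the Sun and 31½° for the Moon, are the manda-epicycle dimensions familiar from Aryabhata I's school, consistent with the commentary's point that the division was made only to simplify calculation.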
Every year throughout its 4.5-billion-year life, ice volcanoes on the dwarf planet Ceres generate enough material to fill a movie theater—13,000 cubic yards, according to a new study. The study marks the first time researchers have calculated a rate of cryovolcanic activity from observations—and the findings help solve a mystery about Ceres’ missing mountains, researchers say. Discovered in 2015 by NASA’s Dawn spacecraft, the 3-mile-tall ice volcano Ahuna Mons rises over the surface of Ceres. Still geologically young, the mountain is at most 200 million years old, meaning that—though it is no longer erupting—it was active in the recent past. Is Ahuna Mons a loner? Ahuna Mons’ youth and loneliness presented a mystery. It seemed unlikely that Ceres had lain dormant for eons and then suddenly erupted in one place. But if other ice volcanoes had risen out of the Cerean surface in ages past, where are they now? Why is Ahuna Mons so alone? Researchers set out to answer these questions. They report their findings in Nature Astronomy. In an earlier paper the researchers published last year, they theorized that a natural process called “viscous relaxation” erased evidence of older volcanoes on the dwarf planet. Viscous materials, like honey or putty, can begin as a thick blob, but the weight of the blob causes it to ooze into a flatter shape over time. “Rocks don’t do that under normal temperatures and timescales, but ice does,” says Michael Sori, a planetary scientist at the University of Arizona. Because Ceres is made of both rock and ice, Sori pursued the theory that formations on the dwarf planet flow and move under their own weight, similar to how glaciers move on Earth. The formations’ composition and temperature would affect how quickly they relax into the surrounding landscape. The more ice in a formation, the faster it flows; the lower the temperature, the slower it flows. Though Ceres never grows warmer than -30 degrees Fahrenheit, the temperature varies across its surface.
“Ceres’ poles are cold enough that if you start with a mountain of ice, it doesn’t relax,” Sori says. “But the equator is warm enough that a mountain of ice might relax over geological timescales.” Computer simulations showed that Sori’s theory was viable. Model cryovolcanoes at the poles of Ceres remained frozen in place for eternity. At other latitudes on the dwarf planet, model volcanoes began life tall and steep, but grew shorter, wider, and more rounded as time passed. To test whether the computer simulations had played out in reality, Sori scoured topographic observations from the Dawn spacecraft, which has been orbiting Ceres since 2015, to find landforms that matched the models. Across the 1 million square miles of Cerean surface, Sori and his team found 22 mountains, including Ahuna Mons, that looked exactly like the simulation’s predictions. “The really exciting part that made us think this might be real is that we found only one mountain at the pole,” Sori says. 1 volcano every 50 million years Though it is old and battered by impacts, the polar mountain, dubbed Yamor Mons, has the same overall shape as Ahuna Mons. It is five times wider than it is tall, giving it an aspect ratio of 0.2. Mountains found elsewhere on Ceres have lower aspect ratios, just as the models predicted: they are much wider than they are tall. By matching the real mountains to the model mountains, Sori was able to determine the age of many of them. Researchers studied the volcanoes’ topography to estimate their volume, and by combining age and volume, Sori’s team was able to calculate the rate at which cryovolcanoes form on Ceres. “We found that one volcano forms every 50 million years,” Sori says. This amounts to an average of more than 13,000 cubic yards of cryovolcanic material each year—enough to fill a movie theater or four Olympic-sized swimming pools.
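The comparisons above are easy to verify with a quick sketch. The Olympic-pool volume of roughly 2,500 cubic meters is a standard figure assumed here, not a number from the study:

```python
# Check the article's comparison: ~13,000 cubic yards of cryovolcanic
# material per year vs. four Olympic pools (pool volume is an assumed
# standard ~2,500 m^3, converted to cubic yards).
M3_TO_YD3 = 1.30795
pool_yd3 = 2500 * M3_TO_YD3        # one Olympic pool in cubic yards
yearly_yd3 = 13_000                # article's average annual output

pools_per_year = yearly_yd3 / pool_yd3
print(round(pools_per_year, 1))    # → 4.0, matching the article

# Yamor Mons is five times wider than it is tall:
aspect_ratio = 1 / 5
print(aspect_ratio)                # → 0.2
```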
This is much less volcanic activity than what is seen on Earth, where rocky volcanoes generate more than 1 billion cubic yards of material in a year. In addition to being less productive, volcanic eruptions on Ceres are tamer than those on Earth. Instead of explosive eruptions, cryovolcanoes create the icy equivalent of a lava dome: the cryomagma—a salty mix of rocks, ice, and other volatiles such as ammonia—oozes out of the volcano and freezes on the surface. Most of the once-mighty cryovolcanoes on Ceres likely formed this way before they relaxed away. The causes of cryovolcanic eruptions on Ceres are still a mystery, but future research might yield answers, as signs of ice volcanoes have been spotted on other bodies in the solar system as probes have flown by. Ceres is the first cryovolcanic body a mission has orbited, but Europa and Enceladus, moons of Jupiter and Saturn, are likely candidates for cryovolcanism, as are Pluto and its moon Charon. Europa is of special interest because researchers believe it has liquid oceans trapped below a thick icy shell, which some believe may be dotted with ice volcanoes. “There might be similarities between Europa and Ceres, but we need to send the next mission there before we can say for sure,” Sori says. NASA’s Dawn Guest Investigator Program funded the work. Source: University of Arizona
Bold banding may be a common feature of brown dwarf skies. Scientists have spotted evidence of Jupiter-like stripes in the thick atmosphere of a nearby brown dwarf, a new study reports — and this evidence was gathered in a novel way. Brown dwarfs are bigger than planets but not big enough to host fusion reactions in their interiors. For this reason, these curious objects are also known as "failed stars." NASA's recently retired Spitzer Space Telescope previously detected banding patterns on multiple brown dwarfs, by tracking in detail how the objects' brightness varied over time. But in this new study, scientists inferred banding via polarimetry, the measurement of polarized light. Polarized light oscillates in the same direction rather than in the multiple, random directions of "normal" light. Polarimetric instruments take advantage of this alignment, much as polarized sunglasses do to reduce the glare of light from Earth's star. The study team used a polarimetric instrument on the European Southern Observatory's Very Large Telescope (VLT) in Chile to study the brown dwarf Luhman 16A, which is about 30 times heftier than Jupiter. The failed star is part of a brown-dwarf binary; it and its similar-sized partner, Luhman 16B, are the nearest such pair to Earth, a mere 6 light-years away. The VLT instrument, known as NaCo, detected an excess of polarization in the brown dwarf's light. That's a strong indication of atmospheric banding, researchers said. After all, the light was unpolarized when it was first emitted deep within Luhman 16A, becoming polarized by scattering off haze particles high in the brown dwarf's skies. In a uniform, unbanded atmosphere, this polarization would average out into an unpolarized glow, Caltech representatives explained in a video about the new results. The scientists further interpreted the VLT observations using sophisticated computer models of Luhman 16A's thick atmosphere.
The combined work suggests that the brown dwarf is striped, perhaps with two major, broad bands, researchers said. "Polarimetry is the only technique that is currently able to detect bands that don't fluctuate in brightness over time," study lead author Maxwell Millar-Blanchaer, a postdoctoral astronomy researcher at the California Institute of Technology (Caltech) in Pasadena, said in a statement. "This was key to finding the bands of clouds on Luhman 16A, on which the bands do not appear to be varying." The team's models also show that Luhman 16A probably has patches of very turbulent weather, as Jupiter and other gas-giant planets do. "We think these storms can rain things like silicates or ammonia," study co-author Julien Girard, of the Space Telescope Science Institute in Baltimore, said in the same statement. "It's pretty awful weather, actually." The new study marks the first time polarimetry has been used to understand clouds on an object beyond the solar system, team members said. Similar techniques can be used to study other brown dwarfs, and next-generation telescopes in space and on the ground could bring exoplanets into play as well. Polarimetry can help characterize planetary surfaces, potentially allowing scientists to spot liquid water on some alien worlds, study team members said. "Polarimetry is receiving renewed attention in astronomy," co-author Dimitri Mawet, an astronomy professor at Caltech and a senior research scientist at the Jet Propulsion Laboratory, which Caltech manages for NASA, said in the same statement. "Polarimetry is a very difficult art, but new techniques and data analysis methods make it more precise and sensitive than ever before, enabling groundbreaking studies on everything from distant supermassive black holes, newborn and dying stars, brown dwarfs and exoplanets, all the way down to objects in our own solar system," Mawet said. The new study has been accepted for publication in The Astrophysical Journal.
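The quantity at the heart of such a measurement can be illustrated with the standard Stokes-parameter formula for the degree of linear polarization. A generic sketch, not the study's actual pipeline, and the numbers are hypothetical rather than the Luhman 16A measurement:

```python
import math

def degree_of_linear_polarization(I, Q, U):
    """Fractional linear polarization, sqrt(Q^2 + U^2) / I (standard Stokes formula)."""
    return math.hypot(Q, U) / I

# Hypothetical Stokes values: a small net signal, as a banded atmosphere
# would leave; a perfectly uniform disk averages Q and U toward zero.
p = degree_of_linear_polarization(I=1.0, Q=2e-4, U=1.5e-4)
print(p)  # ≈ 2.5e-4, i.e. a few hundredths of a percent
```

Detecting such tiny fractional signals is why the article calls polarimetry "a very difficult art": the instrument must separate a net polarization of this order from the unpolarized glare of the source.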