On January 19, 2006, the "fastest spacecraft ever launched," the New Horizons space probe, lifted off from Cape Canaveral, Florida on a mission to Pluto. In February 2007 it collected data from Jupiter as it flew by, using the giant planet for a gravity assist on its way to Pluto. On July 14, 2015, New Horizons made its closest approach to Pluto. Scientists have now had an opportunity to review the initial data and pictures from New Horizons about the dwarf planet, and many have been forced into a stunning but unavoidable admission: they've been wrong about Pluto for a long time.
What they found on Pluto was not at all what they were expecting to find. Scientists were expecting to find it heavily cratered, “a flat, dead world similar to our moon.” Instead, what they actually found was:
- Icy Volcanoes
- The heart-shaped area (right side partially faded) visible in the picture above, named Sputnik Planum, of which scientists note "…this Texas-sized basin of ice appears to be boiling." Planetary scientist Jani Radebaugh likens it to "a lava lake in slow motion" made of nearly frozen nitrogen cooled until it has the texture of toothpaste.
- Other areas feature a young-looking surface, with none of the record of crater bombardment that was expected. "These features are very, very young…Pluto is active today. That's the headline," says planetary scientist Dan Durda.
- An active geology driven by heat
- and "there's pretty good circumstantial evidence that Pluto has a massive ocean in its interior," says New Horizons mission principal investigator Alan Stern.
While scientists are willing to admit to being wrong when confronted with objective data like that supplied by their own instruments aboard the New Horizons space probe, it is unlikely that they are willing to acknowledge error with regard to the lessons below, save the first, which they cannot deny without being accused of being science deniers.
Red giants are massive stars nearing the end of their lives, usually 5-10 billion years old depending on the size of the star. Supergiants are much larger and much younger, usually 50 million years old or less, very young for a star. These giants form when a star runs out of unfused protons and its core becomes composed of alpha particles (helium nuclei). The hydrogen in the outer layers is not hot enough to fuse, so the star cools and the outer layers fall inward toward the core. An inner shell then becomes hot enough to fuse hydrogen, but this radiation is weaker; as heat from the core expands the star and this weaker radiation becomes the main source of light, the star becomes a red giant. If the star is a giant, it will soon cast away its layers to become a planetary nebula, hiding its now extremely dense core, an Earth-sized white dwarf. If the star is a supergiant, it eventually heats up enough to fuse alpha particles into larger nuclei such as carbon, nitrogen, and oxygen. When the alphas are used up, neon, magnesium, silicon, and sulfur are made, which in turn become iron. However, iron fusion consumes more energy than it releases, and with no outward force, gravity destroys the star. The star sheds its layers in a monumental explosion, a Type II supernova. The force of the explosion fuses iron, making all of the elements heavier than iron. If the remaining core has 1.5 to 3 times the mass of the Sun, it becomes an extremely dense, city-sized neutron star, made of neutrons. If it is more massive, it becomes a black hole, which punctures the space-time continuum.
The event of an exploding star is popularly known as a supernova. When that initial explosion occurs, it creates a shockwave that scientists call the "shock breakout." We knew this must exist, but until now we had never seen it. A new study led by a team of international researchers, however, has produced the first-ever observation of the early flash of an exploding star, in visible light no less.
Using the planet-hunting Kepler space telescope, the cohort of astronomers looking for supernovae analyzed light captured by Kepler over a three-year span covering over 500 different galaxies at various distances, amounting to about 50 trillion stars. They stumbled upon the explosions of two red supergiants: KSN 2011a (about 300 times bigger than our sun, and a relative stone's throw away at 700 million light-years) and KSN 2011d (500 times bigger than the sun, and a little farther at 1.2 billion light-years).
It’s extremely difficult to capture an event like this. It’s the equivalent of looking up and watching a plane crash occur before your eyes at just the right moment.
“In order to see something that happens on timescales of minutes, like a shock breakout, you want to have a camera continuously monitoring the sky,” said Peter Garnavich, an astrophysicist at the University of Notre Dame and lead author of the new study, in a NASA news release. “You don’t know when a supernova is going to go off, and Kepler’s vigilance allowed us to be a witness as the explosion began.”
In a lucky happenstance, the astronomers were able to watch not one, but two different supernovae as KSN 2011a and KSN 2011d both exploded in a burst of cataclysmic energy.
Strangely enough, no shock breakout was observed in the smaller star. The research team suspects this is because KSN 2011a was surrounded by enough gas to mask the shockwave as it reached the star’s surface.
The research team says studying these kinds of rare yet violent events can help us better understand the nature of how cosmic dust and energy scatters across the universe — especially in our own Milky Way galaxy. Heavy metals and other elements are expelled by supernovae and travel great distances to lead to the formation of other planets, including Earth.
Union astronomique internationale (UAI)
|Formation||28 July 1919|
|Membership||National members from 82 countries|
|President||Ewine van Dishoeck|
|General Secretary||Maria Teresa Lago|
The International Astronomical Union (IAU; French: Union astronomique internationale, UAI) is an international association of professional astronomers, at the PhD level and beyond, active in professional research and education in astronomy. Among other activities, it acts as the recognized authority for assigning designations and names to celestial bodies (stars, planets, asteroids, etc.) and any surface features on them.
The IAU is a member of the International Science Council (ISC). Its main objective is to promote and safeguard the science of astronomy in all its aspects through international cooperation. The IAU maintains friendly relations with organizations that include amateur astronomers in their membership. The IAU has its head office on the second floor of the Institut d'Astrophysique de Paris in the 14th arrondissement of Paris.
The IAU has many working groups. For example, the Working Group for Planetary System Nomenclature (WGPSN) maintains the astronomical naming conventions and planetary nomenclature for planetary bodies, and the Working Group on Star Names (WGSN) catalogs and standardizes proper names for stars. The IAU is also responsible for the system of astronomical telegrams, which are produced and distributed on its behalf by the Central Bureau for Astronomical Telegrams. The Minor Planet Center also operates under the IAU, and is a "clearinghouse" for all non-planetary or non-moon bodies in the Solar System.
The IAU was founded on 28 July 1919, at the Constitutive Assembly of the International Research Council (now the International Science Council) held in Brussels, Belgium. Two subsidiaries of the IAU were also created at this assembly: the International Time Commission seated at the International Time Bureau in Paris, France, and the International Central Bureau of Astronomical Telegrams initially seated in Copenhagen, Denmark. The 7 initial member states were Belgium, Canada, France, Great Britain, Greece, Japan, and the United States, soon to be followed by Italy and Mexico. The first executive committee consisted of Benjamin Baillaud (President, France), Alfred Fowler (General Secretary, UK), and four vice presidents: William Campbell (USA), Frank Dyson (UK), Georges Lecointe (Belgium), and Annibale Riccò (Italy). Thirty-two Commissions (referred to initially as Standing Committees) were appointed at the Brussels meeting and focused on topics ranging from relativity to minor planets. The reports of these 32 Commissions formed the main substance of the first General Assembly, which took place in Rome, Italy, 2-10 May 1922. By the end of the first General Assembly, ten additional nations (Australia, Brazil, Czecho-Slovakia, Denmark, the Netherlands, Norway, Poland, Romania, South Africa, and Spain) had joined the Union, bringing the total membership to 19 countries. Although the Union was officially formed eight months after the end of World War I, international collaboration in astronomy had been strong in the pre-war era (e.g., the Astronomische Gesellschaft Katalog projects since 1868, the Astrographic Catalogue since 1887, and the International Union for Solar research since 1904).
The first 50 years of the Union's history are well documented. Subsequent history is recorded in the form of reminiscences of past IAU Presidents and General Secretaries. Twelve of the fourteen past General Secretaries in the period 1964-2006 contributed their recollections of the Union's history in IAU Information Bulletin No. 100. Six past IAU Presidents in the period 1976-2003 also contributed their recollections in IAU Information Bulletin No. 104.
As of 1 August 2019, the IAU includes a total of 13,701 individual members, who are professional astronomers from 102 countries worldwide. 81.7% of all individual members are male, while 18.3% are female, among them the union's former president, Mexican astronomer Silvia Torres-Peimbert.
Membership also includes 82 national members, professional astronomical communities representing their country's affiliation with the IAU. National members include the Australian Academy of Science, the Chinese Astronomical Society, the French Academy of Sciences, the Indian National Science Academy, the National Academies (United States), the National Research Foundation of South Africa, the National Scientific and Technical Research Council (Argentina), KACST (Saudi Arabia), the Council of German Observatories, the Royal Astronomical Society (United Kingdom), the Royal Astronomical Society of New Zealand, the Royal Swedish Academy of Sciences, the Russian Academy of Sciences, and the Science Council of Japan, among many others.
The sovereign body of the IAU is its General Assembly, which comprises all members. The Assembly determines IAU policy, approves the Statutes and By-Laws of the Union (and amendments proposed thereto) and elects various committees.
The right to vote on matters brought before the Assembly varies according to the type of business under discussion. The Statutes divide such business into two categories: issues of a primarily scientific nature, on which voting is restricted to individual members, and all other matters (such as budget and Statute revisions), on which voting is restricted to national members.
On budget matters (which fall into the second category), votes are weighted according to the relative subscription levels of the national members. A second category vote requires a turnout of at least two-thirds of national members in order to be valid. An absolute majority is sufficient for approval in any vote, except for Statute revision which requires a two-thirds majority. An equality of votes is resolved by the vote of the President of the Union.
Since 1922, the IAU General Assembly has met every three years, with the exception of the period between 1938 and 1948 due to World War II. After a Polish request in 1967, and by a controversial decision of the then-President of the IAU, an Extraordinary IAU General Assembly was held in September 1973 in Warsaw, Poland, to commemorate the 500th anniversary of the birth of Nicolaus Copernicus, soon after the regular 1973 GA had been held in Sydney, Australia.
|Ist IAU General Assembly (1st)||1922||Rome, Italy|
|IInd IAU General Assembly (2nd)||1925||Cambridge, England, United Kingdom|
|IIIrd IAU General Assembly (3rd)||1928||Leiden, Netherlands|
|IVth IAU General Assembly (4th)||1932||Cambridge, Massachusetts, United States|
|Vth IAU General Assembly (5th)||1935||Paris, France|
|VIth IAU General Assembly (6th)||1938||Stockholm, Sweden|
|VIIth IAU General Assembly (7th)||1948||Zürich, Switzerland|
|VIIIth IAU General Assembly (8th)||1952||Rome, Italy|
|IXth IAU General Assembly (9th)||1955||Dublin, Ireland|
|Xth IAU General Assembly (10th)||1958||Moscow, Soviet Union|
|XIth IAU General Assembly (11th)||1961||Berkeley, California, United States|
|XIIth IAU General Assembly (12th)||1964||Hamburg, West Germany|
|XIIIth IAU General Assembly (13th)||1967||Prague, Czechoslovakia|
|XIVth IAU General Assembly (14th)||1970||Brighton, England, United Kingdom|
|XVth IAU General Assembly (15th)||1973||Sydney, New South Wales, Australia|
|XVIth IAU General Assembly (16th)||1976||Grenoble, France|
|XVIIth IAU General Assembly (17th)||1979||Montreal, Quebec, Canada|
|XVIIIth IAU General Assembly (18th)||1982||Patras, Greece|
|XIXth IAU General Assembly (19th)||1985||New Delhi, India|
|XXth IAU General Assembly (20th)||1988||Baltimore, Maryland, United States|
|XXIst IAU General Assembly (21st)||1991||Buenos Aires, Argentina|
|XXIInd IAU General Assembly (22nd)||1994||The Hague, Netherlands|
|XXIIIrd IAU General Assembly (23rd)||1997||Kyoto, Kansai, Japan|
|XXIVth IAU General Assembly (24th)||2000||Manchester, England, United Kingdom|
|XXVth IAU General Assembly (25th)||2003||Sydney, New South Wales, Australia|
|XXVIth IAU General Assembly (26th)||2006||Prague, Czech Republic|
|XXVIIth IAU General Assembly (27th)||2009||Rio de Janeiro, Brazil|
|XXVIIIth IAU General Assembly (28th)||2012||Beijing, China|
|XXIXth IAU General Assembly (29th)||2015||Honolulu, Hawaii, United States|
|XXXth IAU General Assembly (30th)||2018||Vienna, Austria|
|XXXIst IAU General Assembly (31st)||2021||Busan, South Korea|
Commission 46 is a Committee of the Executive Committee of the IAU, playing a special role in the discussion of astronomy development with governments and scientific academies. The IAU is affiliated with the International Council of Scientific Unions (ICSU), a non-governmental organization representing a global membership that includes both national scientific bodies and international scientific unions, which often encourages countries to become members of the IAU. The Commission seeks the development, dissemination, and improvement of astronomical education. Part of Commission 46 is the Teaching Astronomy for Development (TAD) program, active in countries where there is currently very little astronomical education. Another program, the Galileo Teacher Training Program (GTTP), is a project of the International Year of Astronomy 2009; together with Hands-On Universe, it concentrates resources on education activities for children and schools designed to advance sustainable global development. GTTP is also concerned with the effective use and transfer of astronomy education tools and resources into classroom science curricula. A strategic plan for the period 2010-2020 has been published.
High value for Hubble constant from two gravitational lenses
The expansion rate of the Universe today is described by the so-called Hubble constant, and different techniques have come to inconsistent results about how fast our Universe actually expands. An international team led by the Max Planck Institute for Astrophysics (MPA) has now used two gravitational lenses as new tools to calibrate the distances to hundreds of observed supernovae and thus measure a fairly high value for the Hubble constant. While the uncertainty is still relatively large, the value is higher than that inferred from the cosmic microwave background.
Gravitational lensing describes the fact that light is deflected by large masses in the Universe, just like a glass lens bends a light ray on Earth. In recent years, cosmologists have increasingly used this effect to measure distances by exploiting the fact that, in a multiple-image system, an observer will see photons arriving from different directions at different times due to the difference in optical path lengths for the various images. This measurement thus gives a physical size of the lens, and comparing it to the observed size in the sky gives a geometric distance estimate called the "angular diameter distance". Such distance measurements in astronomy are the basis for measurements of the Hubble constant, named after the astronomer Edwin Hubble, who found a linear relationship between the redshifts (and thus the expansion velocities) and the distances of galaxies (a relationship also found independently by Georges Lemaître).
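Hubble's linear relationship can be sketched numerically: for small redshifts the recession velocity is approximately c·z, so a redshift plus an independently measured distance yields an estimate of the Hubble constant. The galaxy values below are hypothetical illustrations, not measurements from this article.

```python
# Hubble's law: recession velocity is proportional to distance, v = H0 * d.
C_KM_S = 299_792.458  # speed of light in km/s

def hubble_constant(z, d_mpc):
    """Estimate H0 (km/s/Mpc) from a small redshift z and a distance in Mpc.
    For z << 1 the recession velocity is approximately c * z."""
    return C_KM_S * z / d_mpc

# A hypothetical nearby galaxy at redshift 0.01 with an assumed 40 Mpc distance:
print(round(hubble_constant(0.01, 40.0), 1))  # ~75 km/s/Mpc
```

In practice the hard part is the distance, which is exactly what the lensing time delays provide.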
“There are multiple ways to measure distances in the Universe, based on our knowledge of the object whose distance is being measured,” explains Sherry Suyu (MPA/TUM), who is a world expert in using gravitational lensing for determining the Hubble constant. “A well-known technique is the luminosity distance using supernovae explosions; however, they must adopt an external calibrator of the absolute distance scale. With our analysis of gravitational lens systems we can provide a completely new, independent anchor for this method.”
The team used two strong gravitational lens systems B1608+656 and RXJ1131 (see Figure 1). In each of these systems, there are four images of a background galaxy with one or two foreground galaxies acting as lenses. This relatively simple configuration allowed the scientists to produce an accurate lensing model and thus measure the angular diameter distances to a precision of 12 to 20% per lens. These distances were then applied as anchors to 740 supernovae in a public catalogue (Joint Light-curve Analysis dataset).
“By construction, our method is insensitive to the details of the assumed cosmological model,” states Inh Jee (MPA), who did the statistical analysis and combined the supernova data with the lensing distances. “We get a fairly high result for the Hubble constant and although our measurement has a larger uncertainty than other direct methods, this is dominated by statistical uncertainty because we use only two lens systems.”
The value for the Hubble constant based on this new analysis is about 82 +/- 8 km/s/Mpc. This is consistent with values derived from the distance ladder method, which uses different anchors for the supernova data, as well as with values from time-delay distances, where other gravitational lensing systems were used to determine the Hubble constant directly.
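For scale, a Hubble constant of 82 km/s/Mpc corresponds to a "Hubble time" (the rough expansion timescale 1/H0) of about 12 billion years. A sketch of the unit conversion, using standard values for the megaparsec and the gigayear:

```python
# Rough expansion timescale ("Hubble time") implied by H0 = 82 km/s/Mpc.
KM_PER_MPC = 3.0857e19    # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16  # seconds in a billion years

H0 = 82.0  # km/s/Mpc, central value from this analysis
hubble_time_gyr = KM_PER_MPC / H0 / SECONDS_PER_GYR
print(round(hubble_time_gyr, 1))  # ~11.9 Gyr
```

A higher H0 means a faster expansion and thus a shorter inferred timescale, which is one reason the tension with the cosmic microwave background value matters.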
“Again this new measurement confirms that there seems to be a systematic difference in values for the Hubble constant derived directly from local or intermediate sources and indirectly from the cosmic microwave background,” states Eiichiro Komatsu, director at MPA, who oversaw this project. “If confirmed by further measurements, this discrepancy would call for a revision of the standard model of cosmology.”
Variability in B1608+656
Variability observed in the lens system B1608+656; the labels are the same as in Figure 1. The arrows denote a flare seen at different times in the four images.
Titania is the largest of the 27 known moons of Uranus, and both the planet and its moons are among the most distant worlds in our solar system. Uranus is the seventh planet from the sun, and all of the objects in its realm receive little sunlight. The distance of the moons made them difficult to study in detail until the era of space technology.
Nearly 200 years after its discovery, the Voyager 2 spacecraft flew by Titania and found signs that it was potentially geologically active. Finding a moon with so many large fault valleys surprised and delighted scientists. The crustal breaks, which run in two directions, indicate that Titania has undergone some form of tectonic disruption.
Titania follows suit with many of the moons of Uranus in that it is a kind of neutral gray color. The moon also has some highly reflective deposits along the valley walls facing the sun which could possibly be frost.
- Orbits: Uranus
- Discovered By: William Herschel
- Discovery Date: January 11, 1787
- Diameter: 1,577.8 km
- Mass: 3.42 × 10^21 kg (4.7% Moon)
- Orbital Period: 8.7 days
- Orbit Distance: 436,300 km
- Surface Temperature: -203 °C
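The fact-sheet numbers above can be cross-checked against each other. The quoted mass works out to the stated 4.7% of the Moon's mass (the lunar mass used below is a standard reference value, not from the text), and the diameter and mass together imply a mean density of roughly 1.7 g/cm³, consistent with the mixed ice-and-rock composition discussed later in the article:

```python
import math

# Cross-check of the fact sheet: mass ratio to the Moon, and the mean
# density implied by the listed diameter and mass.
MOON_MASS_KG = 7.342e22  # standard reference value (assumption, not from text)

mass_kg = 3.42e21
diameter_km = 1577.8

ratio = mass_kg / MOON_MASS_KG               # ≈ 0.047, i.e. the quoted 4.7%
radius_m = diameter_km / 2 * 1000.0
volume_m3 = 4.0 / 3.0 * math.pi * radius_m**3
density = mass_kg / volume_m3                # bulk density in kg/m^3
print(round(ratio * 100, 1), round(density))  # 4.7 (%) and ~1660 kg/m^3
```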
Famed British astronomer William Herschel discovered Titania on January 11, 1787. On the same day he also discovered the second largest moon of Uranus, Oberon. Not long after, he reported that he had discovered four additional satellites, although the scientific community didn't take that report seriously until fifty years later, when telescopes became more refined and other astronomers could see the moons Herschel reported.
Titania was named after a character in William Shakespeare’s 16th century play, “A Midsummer Night’s Dream.” In the play, Titania is the queen of the fairies.
Formation, Structure and Surface:
Scientists have observed Titania for a long time and theorize that it is composed of roughly equal parts ice and rock. The rock may include organic compounds and carbonaceous materials. This picture was later supported by infrared spectroscopic observations from 2001-2005, which showed that the surface of Titania has crystalline water ice, and by Titania's unusually high density for one of Uranus' moons.
Scientists also think that Titania may be different in that it has an icy mantle that surrounds a rocky core. Researchers estimate that the radius of the core would be around 320 mi/520 km and would make up about 66% of the moon’s radius and 58% of its mass.
The state of the icy mantle isn't known; however, if the ice contains ammonia, it would act as an antifreeze and allow mushy or liquid water. If these conditions exist, the chances increase that a layer of liquid ocean sits at the core-mantle boundary. Scientists estimate that if it exists, this ocean would be around 31 mi/50 km thick.
The features on Titania range from areas that have suffered impact craters to faults and valleys. The surface does show some rather large impact basins, however, a majority of them are smaller. Near the top of the moon there is a big, double-walled crater, but many of the craters seem to be partially submerged. This surface feature indicates that there has been activity to resurface or cover the impacts, making the surface young.
Titania's diameter is 1,578 km, making it the largest of all of the moons of Uranus and the eighth largest moon in the entire solar system. Of the five major moons, it is also the second farthest from its parent planet.
Titania's orbit is tilted slightly relative to Uranus' equator. Its orbital and rotational periods are both 8.7 days, meaning that, like many moons, Titania is tidally locked, with only one side ever facing Uranus.
Uranus is also slightly tilted and orbits the sun on its side. All of the moons of Uranus orbit the planet on the equatorial plane and therefore they all have extreme seasonal cycles. Both the southern and northern poles experience 42 year cycles of either complete sunlight or complete darkness.
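As a sanity check on the orbital figures above, an approximately circular orbit at the listed orbit distance of 436,300 km, completed once every 8.7 days, implies a mean orbital speed of about 3.6 km/s, matching the 3.64 km/s figure quoted elsewhere in this article:

```python
import math

# Mean orbital speed from orbit distance and period, assuming a circular orbit.
orbit_radius_km = 436_300
period_s = 8.7 * 86_400  # 8.7 days in seconds

speed_km_s = 2 * math.pi * orbit_radius_km / period_s
print(round(speed_km_s, 2))  # ~3.65 km/s
```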
Atmosphere and Magnetosphere:
Titania doesn’t have any atmosphere or a magnetosphere.
Titania does have a carbon cycle of sorts, in which carbon dioxide is created in the polar regions. When a pole faces the sun, it heats up and the carbon dioxide migrates to the equatorial area and the other pole. Titania can't hold onto any gases that are created, so they escape into space.
Could Life Exist?
The scientific community has established criteria that are required for a planet or moon to support life. Liquid water is one of the requirements, but even if Titania has some form of liquid water under its icy surface, it doesn't meet the rest of the requirements.
- Due to Titania’s tilt, each of Titania’s poles experience 42 years in the sun and 42 years in the dark.
- Scientists believe that Titania’s high density indicates that it was created either from the leftover debris when Uranus was first formed or from the debris of a possible collision with Uranus that caused the tilt of the planet to its side.
- The exact materials that make up Titania are unknown, but due to the high density it’s thought that Titania is made up of mostly water ice combined with some rocky materials.
- Researchers don’t know if Titania has any geological processes still occurring, but the presence of the large canyons on the icy crust may be the result of the freezing process right after the moon was formed. The process of freezing would have expanded the moon around 7% causing the surface to crack and fissures to be exposed.
- An unusual situation occurs because Uranus and its moons orbit on their sides: the moons get more solar radiation at their poles than at their equators. Any carbon dioxide that has collected at the sunlit pole is warmed, and when it reaches about -188 °C it sublimates and migrates to the equatorial region and the opposite pole. When the opposite pole faces the sun, the process is reversed. This gives Titania a kind of "carbon cycle."
- It’s believed that the original surface of Titania has been hidden or covered over due to resurfacing.
- Scientists are unsure whether the resurfacing process of Titania is due to tectonic movement because so much of the surface is covered in the ejecta from impacts that have hidden any impact craters.
- Given that Uranus and all of its moons are tilted on their side, the fact that Titania has an almost circular orbit is unusual. Titania takes 8.7 days to complete its orbit and travels at an average speed of 3.64 km/second.
- For over fifty years, the astronomer William Herschel was the only one that had observed Titania. Herschel’s telescope seemed to be better than others during his time, but even then, astronomers in his day didn’t believe him.
- Titania has a huge canyon named Messina Chasma that extends 927 mi/1,492 km. The chasm begins at the moon’s equator and stretches almost to its south pole.
- The Messina chasm is made up of two normal faults that cut across many impact craters.
- The Messina chasm is named after a location in the William Shakespeare play “Much Ado About Nothing.”
The Voyager 2 spacecraft located an additional 10 moons around Uranus when it visited in 1986. Its flyby of Titania came during the dark winter phase of the northern hemisphere, which was the same for all of the moons, so Voyager 2 couldn't see much detail and imaged only part of the southern hemisphere.
Astronomers are using high-powered Earth-based and space telescopes to carry out additional research on Titania, Uranus, and many of the other moons orbiting Uranus. Using these telescopes, scientists located three additional moons (Mab, Cupid, and Margaret) in 2003.
Voyager 2 did its flyby of Titania in 1986. The probe was only able to photograph 40% of the moon's surface due to the darkened winter conditions, and only 24% of it at the precision required for geological mapping.
Facts about Titania Moon for Kids:
- Titania is intermediate in brightness compared to the other moons of Uranus, somewhere in the middle between the darkness of Oberon and Umbriel and the brightness of Miranda and Ariel.
- Titania seems to match all of the moons of Uranus in that its geology shows a combination of resurfacing and impact craters. It’s believed that the smoothness of the resurfacing is due to the settling of materials from impacts.
- Scientists that have studied Titania recognize that it has three geological feature classes that include: craters, faults, and canyons.
- Geologists sometimes call canyons grabens, and faults are sometimes called scarps.
- The largest crater on Titania is named Gertrude, after Hamlet’s mother.
- The canyons on Titania vary from 12-31 mi/20-50 km in width and 2-5 km in depth.
- The most noticeable chasm on Titania is the Messina Chasm which is around 930 mi/1,500 km.
- Scientists think that the canyons may be the youngest of Titania’s geological features since they cut through pre-existing craters.
- Other surface features on Titania are named after characters in William Shakespeare's plays, including Ursula, Jessica, and Imogen, characters in Much Ado About Nothing, The Merchant of Venice, and Cymbeline.
- Titania has the presence of carbon dioxide but it is believed to only exist on a seasonal basis. Scientists believe that other gases might be present including methane and nitrogen.
- Titania’s weak gravity lets any gases escape into space and therefore it doesn’t have an atmosphere.
Kim Stanley Robinson’s 1997 “Blue Mars” includes a description of a colony that exists on Titania where human beings have adapted to the low light levels and gravity.
In Earth 2160, the ED (Eurasian Dynasty) has a military prison located on Titania. It is later destroyed by LC forces that have gone rogue.
In the series "The Expanse," Titania is the location of humanity's furthest outpost.
Pink Floyd's 1967 song "Astronomy Domine" includes a line that mentions Titania.
First discovered in 2008, WASP 14b is an interesting exoplanet. It is roughly seven times as massive as Jupiter, but only 30% larger, making it among the densest known exoplanets. Recently, it was the target of observations from the Spitzer space telescope which was able to pick out the infrared radiation emitted by the planet and is giving astronomers new clues to how the atmospheres of Hot Jupiters function, contradicting expectations based on observations of other exoplanet atmospheres.
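The density claim above can be checked with a quick estimate: seven Jupiter masses packed into a radius only 30% larger than Jupiter's implies a bulk density several times Jupiter's own ~1.33 g/cm³. Jupiter's mass and radius below are standard reference values, not figures from the article:

```python
import math

# Implied bulk density of WASP 14b from the mass and size quoted in the text.
M_JUP_KG = 1.898e27   # Jupiter's mass (standard reference value)
R_JUP_M = 6.9911e7    # Jupiter's mean radius (standard reference value)

mass = 7.0 * M_JUP_KG      # "roughly seven times as massive as Jupiter"
radius = 1.3 * R_JUP_M     # "only 30% larger"
density = mass / (4.0 / 3.0 * math.pi * radius**3)  # kg/m^3
print(round(density))  # ~4200 kg/m^3
```

Since density scales as mass over radius cubed, a 7x mass at 1.3x radius gives 7/1.3³ ≈ 3.2 times Jupiter's density, which is why WASP 14b ranks among the densest known exoplanets.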
Images of the system were taken by a team of astronomers led by Jasmina Blecic and Joseph Harrington at the University of Central Florida. The team took images through three filters, which allowed them to analyze the light at specific wavelengths. The brightness in each one was then compared to predictions made by models of atmospheres which included molecules such as H2O, CO, CH4, TiO, and VO as well as more typical atmospheric gases like hydrogen, oxygen, and nitrogen.
While the small number of filters didn't allow the team to conclusively match a specific model, they were able to confidently rule out some possible characteristics. In particular, the team ruled out the presence of a layer of atmosphere that changes sharply in temperature from the regions directly around it, known as a "thermal inversion layer". This comes as quite a surprise, since observations of other hot Jupiters have consistently shown evidence of just such a layer. It was believed that all hot-Jupiter-type exoplanets should feature them if their atmospheres contained TiO or VO, molecules which filter out visible light: if present at a specific altitude, that layer of absorption would create a sudden shift in temperature. The lack of this layer supports a 2009 study which suggested that such heavy molecules should settle out of the atmosphere and not be responsible for thermal inversion layers. But this leaves astronomers with a fresh puzzle: if those molecules don't cause them, then what does?
The team also found that the planet was brighter than expected near full phase, which suggests that it is not as capable of redistributing its heat as some other exoplanets. The team also confirmed that the planet has a notably elliptical orbit, despite being close enough to the star that tidal forces should circularize the orbit. The astronomers who originally discovered the planet postulated that this may be due to the presence of another planet whose recent interaction placed WASP 14b into its present orbit.
Astronomers have come to the conclusion that solid planets, similar in size to Earth and orbiting around a red star, could not sustain life as we know it.
The search for life in the Universe currently interests many scientists, as well as much of the general public. Unfortunately, such a mission, however noble it may seem, is an extremely complex task. First, we must not forget that the Universe is extremely large and that, to date, only a fraction of the sky we see at night has been surveyed by scientists. Second, there is the problem of what exactly we are searching for: although we seek extraterrestrial life, scientists largely operate on assumptions drawn from how life evolved, and what its signs look like, here on Earth.
In a recent study, astronomers have narrowed this search down by eliminating an entire class of planets. Starting from the exoplanet LHS 3844b, which is 48 light-years from our planet in the constellation Indus, a rocky world only 30% larger than Earth that orbits a red star, scientists have concluded that it is too hot to retain an atmosphere and, by implication, to support life.
The possibility that these Earth-like planets orbiting a red star could host life is a debate that has been taking place for some years in the academic world. However, this study on planet LHS 3844b could put an end to this debate.
The study concluded that exoplanets that always show the same face to their star, the way the Moon does to Earth, could not maintain an atmosphere suitable for life.
“To have life as we know it, you need liquid water,” explains Abraham Loeb, an astronomer at the Harvard-Smithsonian Center for Astrophysics. He added that liquid water can exist on the surface of a planet only in the presence of an atmosphere.
This finding could be used to redefine the conditions that a planet must meet in order to sustain life on it. | 0.814357 | 3.157852 |
NASA's Voyager 2 probe, on its way to interstellar space, has detected an increase in cosmic rays that originate outside the solar system, according to the agency's Jet Propulsion Laboratory.
Voyager 2, launched in 1977, is about 17.7 billion kilometers from Earth, more than 118 times the distance from Earth to the Sun. And it will become the second human-made object, after Voyager 1, to enter interstellar space after it exits the heliosphere.
A cosmic ray instrument on the probe measured about a 5 percent increase in the rate of cosmic rays in late August, compared to early August. Another instrument on the probe detected a similar increase in higher-energy cosmic rays, fast-moving particles that originate outside the solar system.
Some of these cosmic rays are blocked by the heliosphere, so mission planners expect that Voyager 2 will measure an increase in the rate of cosmic rays as it approaches and crosses the boundary of the heliosphere, according to NASA's recent statement.
In May 2012, Voyager 1 experienced an increase in the rate of cosmic rays approximately three months before it crossed the heliopause and entered interstellar space.
However, the increase in cosmic rays is not a definitive sign that the probe is about to cross the heliopause.
Voyager 2 is in a different location in the "heliosheath" (the outer region of the heliosphere) than Voyager 1 was, and the differences between these locations mean that Voyager 2 may experience a different exit timeline than Voyager 1, partly because the heliopause moves inward and outward during the Sun's 11-year activity cycle.
Fancy seeing the sky in neutrino? Supermassive black holes and enormous stellar explosions may give up their secrets now that neutrinos from space can be detected.
The South Pole IceCube neutrino observatory has seen a handful of ghostly high-energy neutrinos that almost certainly came from outer space, opening up the skies for neutrino astronomy.
“We are witnessing the birth of this field,” says Dan Hooper, a theoretical astrophysicist at Fermilab in Batavia, Illinois, who is not a member of IceCube.
Last month, the IceCube collaboration published news of the detection of two high-energy neutrinos, each with an energy of about one petaelectronvolt. These neutrinos, discovered by accident a year ago and nicknamed Bert and Ernie, prompted the collaboration to go back and look at their data in more detail.
The new analysis, reported today at the IceCube Particle Astrophysics symposium at the University of Wisconsin-Madison, has raised the stakes.
IceCube, which monitors a cubic kilometre of ice at the South Pole, saw 26 more neutrinos of about 50 teraelectronvolts between May 2010 and May 2012. There is only a 0.004 per cent chance that these 28 detections are a statistical fluke. “This is not a statistical fluctuation,” says Francis Halzen, leader of the IceCube collaboration at the University of Wisconsin-Madison.
Up to half of the observed events could be so-called atmospheric neutrinos, produced when cosmic rays smash into the upper atmosphere, but the rest must be coming from outside our solar system, according to IceCube team member Thomas Gaisser of the University of Delaware in Newark.
A key indication of this is the distribution of the different types of neutrinos. Neutrinos come in three flavours: muon, tau and electron. As they travel through space, they can change, or oscillate, from one type to another. Atmospheric processes produce twice as many muon neutrinos as electron neutrinos. However, the events detected by IceCube suggest that the different types are coming in equal numbers. “This is typical of a neutrino beam that has oscillated over a very long distance,” says Halzen.
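The averaging effect Halzen describes can be sketched numerically. This is a rough illustration, not the IceCube analysis: it assumes standard three-flavor mixing with approximate best-fit mixing angles and the CP phase set to zero. Over very long baselines the oscillation terms wash out, and an atmospheric-style 2:1 muon-to-electron mix arrives as roughly equal thirds of each flavor.

```python
import numpy as np

# Standard PMNS parametrization with delta_CP = 0 (an assumption);
# mixing angles are approximate best-fit values, also an assumption.
th12, th23, th13 = np.radians([33.4, 49.0, 8.6])
s12, c12 = np.sin(th12), np.cos(th12)
s23, c23 = np.sin(th23), np.cos(th23)
s13, c13 = np.sin(th13), np.cos(th13)

U = np.array([
    [ c12 * c13,                s12 * c13,                 s13      ],
    [-s12 * c23 - c12 * s23 * s13,  c12 * c23 - s12 * s23 * s13,  s23 * c13],
    [ s12 * s23 - c12 * c23 * s13, -c12 * s23 - s12 * c23 * s13,  c23 * c13],
])

# With delta_CP = 0 the matrix is real, so |U|^2 is just U squared elementwise.
P = U**2

# Averaged (fully decohered) oscillation probability over a long baseline:
# P(alpha -> beta) = sum_i |U_alpha,i|^2 |U_beta,i|^2
P_avg = P @ P.T

source = np.array([1/3, 2/3, 0.0])  # 1 electron : 2 muon : 0 tau at the source
arriving = source @ P_avg           # flavor mix at the detector
print(arriving)                     # roughly equal thirds
```

Running this gives fractions near 1/3 for each flavor, which is the signature Halzen points to.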
Being chargeless, neutrinos zip from a source direct to Earth without being waylaid. This makes them useful for investigating supernovae, mysterious objects called microquasars and active galactic nuclei – galaxies powered by supermassive black holes, all of which are thought to produce neutrinos.
“If you have something that looks into the hearts of these beasts, maybe it’ll help untangle what’s going on,” says John Learned at the University of Hawaii, who is not part of IceCube.
Neutrino astronomy could also pinpoint the sources of cosmic rays, which are thought to be produced by the same processes as neutrinos. Because they are charged, cosmic rays get bent by intervening magnetic fields. This has made it impossible to work out their source by the time they reach Earth.
“IceCube being able to see the sky in neutrinos for the first time is absolutely going to transform how we view cosmic ray physics,” says Hooper.
Neutrinos are also expected to be produced in regions such as the centre of the Milky Way, where dark matter particles are thought to accumulate in large numbers and smash into each other. Now that IceCube has shown it can detect high-energy astrophysical neutrinos, the next step is to work out their direction. Neutrinos with a specific energy coming from the galactic centre would be an indirect detection of dark matter.
While the latest IceCube neutrino detections are too few to provide information about their direction, Halzen says that his team is sitting on a goldmine of unanalysed data collected after May 2012. “We will have lots of events to work with, and we will figure this out, if not this year then next year,” he says. “You cannot imagine the excitement in the collaboration.”
Scientist sees evidence of planet formation in narrow rings of other solar systems
Narrow dense rings of comets are coming together to form planets on the outskirts of at least three distant solar systems, astronomers have found in data from a pair of NASA telescopes.
Estimating the mass of these rings from the amount of light they reflect shows that each of these developing planets is at least the size of a few Earths, according to Carey Lisse, a planetary scientist at the Johns Hopkins University Applied Physics Laboratory (APL) in Laurel, Maryland.
Over the past few decades, using powerful NASA observatories such as the Infrared Telescope Facility in Hawaii and the Spitzer Space Telescope, scientists have found a number of young debris disk systems with thin but bright outer rings composed of comet-like bodies at 75 to 200 astronomical units from their parent stars—about two to seven times the distance of Pluto from our own sun. The composition of the material in these rings varies from ice-rich (seen in the Fomalhaut and HD 32297 systems) to ice-depleted but carbon rich (the HR 4796A system).
Presenting his results today at the American Astronomical Society's Division for Planetary Sciences meeting in Provo, Utah, Lisse said scientists are especially intrigued by the red dust ring surrounding HR 4796A, which shows unusually tight form for an infant solar system.
Lisse traces the extreme red color to the burnt-out rocky organic remains of comets, a result of the system's ring being close enough to the star that they have all boiled off. The researchers don't see red ring dust in Fomalhaut or HD 32297, but instead see normal bluish comet dust containing ices—because these systems' rings are far enough out that their comets are cold and mostly stable.
"The narrow confines of these rings is still a great puzzle—you don't typically see this kind of tight order in such a young system," Lisse said. "Usually, material is moving every which way before an exoplanetary system gets cleaned out and settles down so that planetary bodies rarely cross each other's path, like in our present-day solar system."
After eliminating other possibilities due to the lack of primordial circumstellar gas seen in these systems, Lisse and his co-authors have attributed the tight structure to multiple coalescing bodies "shepherding" material through the rings.
"Comets crashing down onto these growing planet surfaces would kick up huge clouds of fast-moving, ejected 'construction dust,' which would spread over the system in huge clouds," Lisse said. "The only apparent solution to these issues is that multiple mini-planets are coalescing in these rings, and these small bodies, with low kick-up velocities, are shepherding the rings into narrow structures—much in the same way many of the narrow rings of Saturn are focused and sharpened."
This is a paradigm shift, he added, because instead of building a planet from one big construction site, it's coming from many small ones, which will eventually merge their work into the final product. Recent studies have yielded similar theories about the formation of the giant gas planets Uranus and Neptune, that each had multiple "subcores" that were eventually covered by thick atmospheres.
In Fomalhaut and HD 32297, researchers expect that millions of comets are contributing to form the cores of ice giant planets like Uranus and Neptune—although without the thick atmospheres enveloping the cores of Uranus and Neptune, since the primordial gas disks that would form such atmospheres are gone. In HR 4796A, with its warmer dust ring, even the ices normally found in the rings' comets evaporated over the last million years or so, leaving behind core building blocks that are rich only in leftover carbon and rocky materials.
"These systems appear to be building planets we don't see in our solar system—large multi-Earth mass ones with variable amounts of ice, rock and refractory organics," Lisse said. "This is very much like the predicted recipe for the super-Earths seen in abundance in the Kepler planet survey."
"Much still has to happen, though, before these rings could become planets the size of the gas giants," he continued. "Why it's taking so long to make outer planets in these systems—after their primordial gas disks have been stripped away—is a big mystery."
Lisse, C. M., Sitko, M. L., Marengo, M., et al. (2017), "Infrared Spectroscopy of HR 4796A's Bright Outer Cometary Ring + Tenuous Inner Hot Dust Cloud," Astronomical Journal (in press).
Back in 1993, Carl Sagan encountered a puzzle. The Galileo spacecraft spotted flashes coming from Earth, and nobody could figure out what they were. They called them ‘specular reflections’ and they appeared over ocean areas but not over land.
The images were taken by the Galileo space probe during one of its gravitational-assist flybys of Earth. Galileo was on its way to Jupiter, and its cameras were turned back to look at Earth from a distance of about 2 million km. This was all part of an experiment aimed at finding life on other worlds. What would a living world look like from a distance? Why not use Earth as an example?
Fast-forward to 2015, when the National Oceanic and Atmospheric Administration (NOAA) launched the Deep Space Climate Observatory (DSCOVR) spacecraft. DSCOVR's job is to orbit Earth from a million miles away and to warn us of dangerous space weather. NASA has a powerful instrument aboard DSCOVR called the Earth Polychromatic Imaging Camera (EPIC).
Every hour, EPIC takes images of the sunlit side of Earth, and these images can be viewed on the EPIC website. (Check it out, it’s super cool.) People began to notice the same flashes Sagan saw, hundreds of them in one year. Scientists in charge of EPIC started noticing them, too.
One of the scientists is Alexander Marshak, DSCOVR deputy project scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. At first, he noticed them only over ocean areas, the same as Sagan did 25 years ago. Only after Marshak began investigating them did he realize that Sagan had seen them too.
Back in 1993, Sagan and his colleagues wrote a paper discussing the results from Galileo’s examination of Earth. This is what they said about the reflections they noticed: “Large expanses of blue ocean and apparent coastlines are present, and close examination of the images shows a region of [mirror-like] reflection in ocean but not on land.”
Marshak surmised that there could be a simple explanation for the flashes. Sunlight hits a smooth part of an ocean or lake, and reflects directly back to the sensor, like taking a flash-picture in a mirror. Was it really that much of a mystery?
When Marshak and his colleagues took another look at the Galileo images showing the flashes, they found something that Sagan missed back in 1993: The flashes appeared over land masses as well. And when they looked at the EPIC images, they found flashes over land masses. So a simple explanation like light reflecting off the oceans was no longer in play.
“We found quite a few very bright flashes over land as well,” he said. “When I first saw it I thought maybe there was some water there, or a lake the sun reflects off of. But the glint is pretty big, so it wasn’t that.”
But something was causing the flashes, something reflective. Marshak and his colleagues, Tamas Varnai of the University of Maryland, Baltimore County, and Alexander Kostinski of Michigan Technological University, thought of other ways that water could cause the flashes.
The primary candidate was ice particles high in Earth’s atmosphere. High-altitude cirrus clouds contain tiny ice platelets that are horizontally aligned almost perfectly. The trio of scientists did some experiments to find the cause of the flashes, and published their results in a new paper published in Geophysical Research Letters.
“Lightning doesn’t care about the sun and EPIC’s location.” – Alexander Marshak, DSCOVR Deputy Project Scientist
As their study details, they first catalogued all of the reflective glints that EPIC found over land: 866 of them in the 14 months from June 2015 to August 2016. If these flashes were caused by specular reflection, they would only appear at locations on the globe where the angle between the Sun and Earth matched the angle between the DSCOVR spacecraft and Earth. As they catalogued the 866 glints, they found that the angles did match.
This ruled out something like lightning as the cause of the flashes. But as they continued their work plotting the angles, they came to another conclusion: the flashes were sunlight reflecting off of horizontal ice crystals in the atmosphere. Other instruments on DSCOVR confirmed that the reflections were coming from high in the atmosphere, rather than from somewhere on the surface.
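The angle-matching test can be sketched with a little vector geometry. This is a toy illustration, not the team's actual code: for a horizontally oriented reflector (an ice platelet, or a calm water surface), a specular glint toward the spacecraft is geometrically possible only when the local vertical bisects the directions to the Sun and to the spacecraft.

```python
import numpy as np

def glint_possible(sun_dir, sat_dir, local_up, tol_deg=1.0):
    """For a horizontal mirror, the surface normal (the local vertical)
    must bisect the incoming Sun ray and the outgoing ray toward the
    spacecraft. Check that condition to within tol_deg degrees."""
    sun = sun_dir / np.linalg.norm(sun_dir)
    sat = sat_dir / np.linalg.norm(sat_dir)
    up = local_up / np.linalg.norm(local_up)
    required_normal = sun + sat               # bisector of the two rays
    required_normal /= np.linalg.norm(required_normal)
    angle = np.degrees(np.arccos(np.clip(required_normal @ up, -1.0, 1.0)))
    return angle < tol_deg

up = np.array([0.0, 0.0, 1.0])
# Sun and spacecraft symmetric about the vertical: glint geometry holds.
print(glint_possible(np.array([1.0, 0, 1]), np.array([-1.0, 0, 1]), up))
# Spacecraft well away from the mirror direction: no glint.
print(glint_possible(np.array([1.0, 0, 1]), np.array([1.0, 0, 0.2]), up))
```

The study's version of this check used the actual Sun-Earth-DSCOVR geometry at each glint location; the principle is the same.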
“The source of the flashes is definitely not on the ground. It’s definitely ice, and most likely solar reflection off of horizontally oriented particles.” -Alexander Marshak, DSCOVR Deputy Project Scientist
Mystery solved. But as is often the case in science, answering one question leads to a couple of other questions. Could detecting these glints somehow be used in the study of exoplanets? That's one for the space science community to answer.
As for Marshak, he’s an Earth scientist. He’s investigating how common these horizontal ice particles are, and what effect they have on sunlight. If that impact is measurable, then it could be included in climate modelling to try to understand how Earth retains and sheds heat. | 0.879163 | 3.600788 |
Researchers have identified nitrogen previously thought to be “missing” in comets, helping to solve a longstanding mystery about the icy space rocks. In analyzing the Comet 67P/Churyumov-Gerasimenko, which was visited and studied by the European Space Agency’s Rosetta spacecraft, scientists have uncovered significant amounts of ammonium salts that ended up revealing this “missing” nitrogen. Our solar system — which includes our sun and all of the planets and objects like comets and asteroids — formed from the condensation of a gaseous cloud known as the solar nebula. Scientists have long thought that the nitrogen-to-carbon ratio (N/C) of the sun should be roughly the same in comets, which formed in the cold outer reaches of the solar nebula far from the sun.
Source: Solar system mystery finally solved, thanks to salty space rock | 0.851489 | 3.535561 |
This course introduces the concept of atomic structure and covers the following topics:
- atoms, subatomic particles
- atomic number and mass number
- various atomic theories
Matter is anything that has mass and takes up space (has volume). Everything around us – the food we eat, the air we breathe, clouds, stars, plants, animals, water, dust – is made up of matter.
Characteristics of particles of matter are the following:
- Particles of matter are very small and are normally not visible to the naked eye.
- Matter particles keep moving continuously.
- The kinetic energy associated with the continuous motion of the particles is directly proportional to their temperature.
- The particles of matter attract each other.
- The attracting force of the particles keeps the particles together; however, the strength of the attracting force varies from one kind of matter to another.
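The third point above can be made concrete with the standard kinetic-theory relation: the average translational kinetic energy per particle is (3/2)·k_B·T, so it is directly proportional to absolute temperature. A minimal sketch:

```python
k_B = 1.380649e-23  # Boltzmann constant, J/K

def avg_kinetic_energy(T):
    # Average translational kinetic energy per particle: (3/2) * k_B * T
    return 1.5 * k_B * T

print(avg_kinetic_energy(300))  # ~6.2e-21 J at room temperature
# Doubling the absolute temperature doubles the average kinetic energy:
print(avg_kinetic_energy(600) / avg_kinetic_energy(300))
```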
Atoms are the building blocks of matter. The word ‘atom’ has been derived from the Greek word ‘a-tomio’ which means ‘non-divisible’. The diversity of chemical behavior of different elements is due to the differences in the internal structure of atoms of these elements.
Atoms are composed of three types of particles: protons, neutrons, and electrons (Figure 1). At the center of an atom is the nucleus, which is made up of protons and neutrons.
- Protons are positively charged particles.
- Neutrons are about the size of protons but have no charge.
- Electrons are negatively charged particles that orbit the nucleus.
Table showing Differences in Electron, Proton and Neutron

|Property|Electron|Proton|Neutron|
|---|---|---|---|
|Definition|Negatively charged sub-atomic particle found in an atom|Positively charged sub-atomic particle found in an atom|Neutral sub-atomic particle found in an atom|
|Location in the atom|Orbits the nucleus|Inside the nucleus|Inside the nucleus|
|Relative charge|1 –|1 +|0|
|Reactions|Takes part in both chemical and nuclear reactions|Takes part in nuclear reactions|Takes part only in nuclear reactions|
|Relative mass|≈ 0 atomic mass units (amu)|1 amu|Very close to 1 amu|
|Mass (kg)|9.109 × 10⁻³¹|1.673 × 10⁻²⁷|1.675 × 10⁻²⁷|
|Discovered by|J. J. Thomson|Ernest Rutherford|James Chadwick|
The positive charge on the nucleus is due to protons. The atomic number (Z) is the number of protons present in the nucleus. For example, a hydrogen atom has one proton, so its atomic number is 1.

The mass number (A) of an atom is the total number of protons plus neutrons in its nucleus (Figure 2). It is the sum of the atomic number (Z) and the number of neutrons (N): A = Z + N.
Atoms with same atomic number (number of protons), but different mass numbers (number of protons and neutrons) are called isotopes. They occur naturally or can be produced artificially.
Isotopes are separated through mass spectrometry.
For example, the simplest and commonest form of hydrogen (Figure 3) has a nucleus that consists of a single proton; it is the only atom with no neutrons: its mass number is 1.
A rarer form of hydrogen known as deuterium has one proton and one neutron: its mass number is 2.
A third form of hydrogen known as tritium has one proton and two neutrons: its mass number is 3.
The stability of a nucleus depends on the ratio of protons to neutrons in it. Atoms with an unstable nucleus are called radioactive atoms. They decay spontaneously, emitting alpha, beta, or gamma rays, until they reach stability. For example, uranium has three naturally occurring isotopes: uranium-234, uranium-235, and uranium-238. Since each atom of uranium has 92 protons, these isotopes must have 142, 143 and 146 neutrons respectively.
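The N = A − Z bookkeeping used above for hydrogen and uranium can be sketched in a few lines:

```python
def neutrons(mass_number, atomic_number):
    # N = A - Z: neutron count is the mass number minus the proton count
    return mass_number - atomic_number

# Hydrogen isotopes (Z = 1)
for name, A in [("protium", 1), ("deuterium", 2), ("tritium", 3)]:
    print(name, neutrons(A, 1))

# Uranium isotopes (Z = 92): 142, 143 and 146 neutrons
for A in (234, 235, 238):
    print(f"U-{A}:", neutrons(A, 92), "neutrons")
```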
Around 500 BC, the Indian philosopher Maharishi Kanad postulated that matter has an indivisible smallest part, which he named ‘pramanu.’
Atomic theory has been revised over the years as scientists discovered more about atoms (Figure 4). Atomic isotopes and the inter-conversion of mass and energy have been added to the theory. In addition, the discovery of subatomic particles showed that atoms can be divided into smaller parts.
Dalton’s Atomic Theory
John Dalton (1766-1844) is the scientist credited with proposing the atomic theory in 1808; the atomic mass unit is named the dalton in his honor. Dalton developed the law of multiple proportions building on the work of Antoine Lavoisier and Joseph Proust.
Dalton’s atomic theory states the following:
- All matter, whether an element, a compound, or a mixture is composed of small particles called atoms. Atoms are indivisible and indestructible.
- All atoms of a given element are identical in mass and properties.
- Compounds are formed by a combination of two or more different kinds of atoms. In each compound, the relative number and kinds of atoms are constant.
- A chemical reaction is a rearrangement of atoms.
Discovery of Electrons
The electron was discovered by J. J. Thomson in 1897, while he was studying the properties of cathode rays.

He constructed a partially evacuated glass tube, i.e. one with most of the air pumped out (Figure 5). He then applied a high voltage between two electrodes at either end of the tube, and detected a stream of particles traveling from the negatively charged electrode (cathode) to the positively charged electrode (anode). The stream is called a cathode ray, and the tube is called a cathode ray tube.
Cathode ray particles have the following properties:

- They travel in straight lines.
- They are independent of the material composition of the cathode.

When an electric field is applied across the path of a cathode ray, the ray deflects toward the positively charged plate. Hence, cathode rays consist of negatively charged particles.
Discovery of Nucleus
Ernest Rutherford discovered the nucleus of the atom in 1911. His gold foil experiment contradicted Thomson’s atomic model (Figure 6). In the experiment, Rutherford directed a high-energy stream of α-particles from a radioactive source at a thin sheet of gold (about 100 nm thick).

To study the deflection of the α-particles, he placed a fluorescent zinc sulphide screen around the thin gold foil. He observed that the majority of the α-particles passed through the sheet without any deflection, some were deflected through very small angles, and a few bounced back through nearly 180°.
Rutherford proposed the atomic structure of elements based on his observations. According to the Rutherford atomic model:
- The positive charge and most of the mass of an atom are concentrated in an extremely small volume called the nucleus
- Negatively charged electrons surround the nucleus of the atom, revolving around it at very high speed in circular paths called orbits
- The electrons and the nucleus are held together by a strong electrostatic force of attraction
Bohr’s Atomic Model
In 1913, Niels Bohr proposed his quantized shell model of the atom. His theory explained how electrons can have stable orbits around the nucleus.
According to this “planetary” model, electrons encircle the nucleus of the atom in orbits (Figure 8). Each orbit has a definite energy and is called an energy shell or energy level. When the electron is in one of these orbits, its energy is fixed. Orbits further from the nucleus exist at higher energy levels and vice-versa.
When an electron jumps from a higher energy level to lower energy level, it emits energy, and when an electron absorbs sufficient energy it jumps from a lower energy level to a higher energy level. | 0.828152 | 3.397763 |
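As a worked illustration of these energy levels: in the Bohr model of hydrogen, the level energies are E_n = −13.6 eV / n² (13.6 eV being the hydrogen ground-state binding energy), and the photon emitted in a jump carries the energy difference. The hc ≈ 1240 eV·nm approximation below is a standard shortcut for converting photon energy to wavelength.

```python
# Bohr-model hydrogen energy levels: E_n = -13.6 eV / n^2
def energy_level(n):
    return -13.6 / n**2  # in electron-volts

# Photon emitted when an electron drops from n=3 to n=2
# (this transition produces the red H-alpha line)
photon_ev = energy_level(3) - energy_level(2)  # energy released, positive
wavelength_nm = 1240 / photon_ev               # lambda = hc/E, hc ~ 1240 eV*nm
print(photon_ev)        # ~1.89 eV
print(wavelength_nm)    # ~656 nm, in the red part of the spectrum
```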
The average velocity of a gas molecule depends on the temperature of the gas, and at room temperature it is comparable to that of a speeding bullet, well below the "escape velocity" needed for escaping Earth's gravity. However, that is just an average: actual velocities are expected to be distributed around that average, following the "Maxwellian distribution" first derived by James Clerk Maxwell, whom we meet again in the discovery of the three color theory of light (section #S-4) and the prediction of electromagnetic waves (section #S-5). According to Maxwell's theory, a few molecules always move fast enough to escape, and if they happen to be near the top of the atmosphere, moving upwards and avoiding any further collisions, such molecules can be lost.
For Earth, the loss is too slow to matter, but with the Moon, having only 1/6 of the surface gravity, it can be shown that any atmosphere would be lost within geological time. The planet Mercury, only slightly larger, also lacks any atmosphere, while Mars, with 1/3 the Earth's surface gravity, only retains a very thin atmosphere.
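The comparison can be sketched numerically, using the Maxwellian mean speed v = √(8kT/πm) and the escape velocity v_esc = √(2GM/R). The constants and body parameters below are standard reference values; the nitrogen-at-288 K choice is just a representative example.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
amu = 1.66054e-27   # atomic mass unit, kg

def mean_speed(T, molar_mass_amu):
    # Mean speed of a Maxwellian gas: v = sqrt(8 k T / (pi m))
    m = molar_mass_amu * amu
    return math.sqrt(8 * k_B * T / (math.pi * m))

def escape_velocity(mass_kg, radius_m):
    return math.sqrt(2 * G * mass_kg / radius_m)

v_n2 = mean_speed(288, 28)                    # N2 at ~15 C: ~470 m/s
v_earth = escape_velocity(5.972e24, 6.371e6)  # ~11.2 km/s
v_moon = escape_velocity(7.342e22, 1.737e6)   # ~2.4 km/s
print(v_n2, v_earth, v_moon)
# Earth: escape speed is ~24x the mean molecular speed, so losses are negligible.
# Moon: only ~5x, so the fast tail of the Maxwellian bleeds away over geologic time.
```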
Water evaporates easily and once in gas form, is quickly lost by the same process. That suggested the "maria" could not possibly be oceans, though their name remains. They actually turned out to be basaltic flows, hardened lava which long ago flowed out of fissures on the Moon; no present-day volcanism on the Moon has been reliably identified. The vast majority of craters may date back to the early days of the solar system, because the lava of the maria has very few craters on it, suggesting it flooded and obliterated older ones.
The picture of a dry Moon was reinforced by Moon rocks brought back by US astronauts. Earth rocks may contain water bound chemically ("water of hydration"), but not these. Water, of course, would be essential to any human outpost on the Moon. Yet small amounts of water may still exist, brought by comets which occasionally hit the Moon. All this water is sure to evaporate in the heat of the collision, but some of it may re-condense in deep craters near the Moon's pole, which are permanently in the shade and therefore extremely cold. Observations by the "Clementine" spacecraft suggest that one such crater may indeed contain a layer of ice. | 0.817599 | 3.923242 |
The origins of the Solar System’s heavy elements such as gold and platinum have been a source of great interest to astronomers. One of the most popular theories is that they were scattered into space by neutron star collisions.
New research, however, has found another origin: an oft-overlooked type of star explosion, or supernova. These, the researchers assert, could be responsible for at least 80 percent of the heavy elements in the Universe.
The particular type in question is the collapsar supernova, produced by rapidly spinning stars more than 30 times the mass of the Sun; these explode in spectacular fashion before collapsing into black holes.
“Our research on neutron star mergers has led us to believe that the birth of black holes in a very different type of stellar explosion might produce even more gold than neutron star mergers,” said physicist Daniel Siegel of the University of Guelph.
The neutron star collision detection in 2017 brought the first solid evidence that such collisions produce heavy elements. In the electromagnetic data produced by GW 170817, scientists detected, for the first time, the production of heavy elements including gold, platinum and uranium.
As we previously reported, this happens because a powerful explosion, such as a supernova or stellar merger, can trigger the rapid neutron-capture process, or r-process – a series of nuclear reactions in which atomic nuclei collide with neutrons to synthesise elements heavier than iron.
The reactions need to happen quickly enough that radioactive decay doesn’t have a chance to occur before more neutrons are added to the nucleus, which means it needs to happen where there are a lot of free neutrons floating about, such as an exploding star.
In the case of GW 170817, these r-process elements were detected in the disc of material that bloomed out around the neutron stars after they had merged. While working on understanding the physics of this, Siegel and his team realised that the same phenomenon might occur in association with other cosmic explosions.
So, using supercomputers, they simulated the physics of collapsar supernovae. And, boy did they ever strike gold.
“Eighty percent of these heavy elements we see should come from collapsars,” Siegel said.
“Collapsars are fairly rare in occurrences of supernovae, even more rare than neutron star mergers – but the amount of material that they eject into space is much higher than that from neutron star mergers.”
Moreover, the quantities and distribution of these elements produced in the simulation were “astonishingly similar” to what we have here on Earth, he noted.
So does that mean that 0.3 percent of Earth’s r-process elements didn’t come from a neutron star collision 4.6 billion years ago, as a different team of astronomers found earlier this year? Well, not necessarily. Under the parameters of Siegel’s simulations, up to 20 percent of these elements could still have come from neutron star and black hole smash-ups.
The team hopes the James Webb Space Telescope, currently slated for a 2021 launch, could shed more light on the matter. Its sensitive instruments could detect the radiation pointing to a collapsar supernova in a distant galaxy, as well as elemental abundances across the Milky Way.
“Trying to nail down where heavy elements come from may help us understand how the galaxy was chemically assembled and how the galaxy formed,” Siegel said.
“This may actually help solve some big questions in cosmology as heavy elements are a nice tracer.”
The research has been published in the journal Nature.
One year later, the impact of the surprise Russian meteor explosion is still being felt all over the world.
On Feb. 15, 2013, a 65-foot-wide (20 meters) asteroid detonated in the skies over the Russian city of Chelyabinsk, causing millions of dollars of damage and injuring 1,500 people. The dramatic event served as a wake-up call, many scientists say, alerting the world to the dangers posed by the millions of space rocks that reside in Earth's neck of the cosmic woods.
"These types of events are no longer hypothetical," David Kring, of the Lunar and Planetary Institute in Houston, said in December at the annual fall meeting of the American Geophysical Union (AGU) in San Francisco. "We've been up here talking about these types of things for years, but now the entire world understands that they can be real." [Photos: Russian Meteor Explosion of Feb. 15, 2013]
Caught off guard
The asteroid that caused the Russian fireball came streaking into Earth's atmosphere shortly after dawn one year ago today, exploding about 14 miles (23 kilometers) above the ground.
The blast generated a shock wave that hit the city of Chelyabinsk within a minute or two, breaking thousands of windows. (Shards of flying glass caused most of the injuries.)
Chelyabinsk "was the first asteroid-impact disaster in human history," Clark Chapman, of the Southwest Research Institute in Boulder, Colo., said at the December AGU meeting. "Nobody was killed, but nonetheless, the early estimates of the total damage of several tens of millions of dollars ranks it with a typical United States presidentially-declared major disaster."
Adding to the celestial drama, the Chelyabinsk impact occurred on the same day that a 100-foot-wide (30 m) space rock called 2012 DA14 cruised within 17,200 miles (27,000 km) of Earth, coming closer than many communications satellites circling our planet.
Scientists knew about 2012 DA14 and had predicted its close approach. But the Russian fireball caught everybody off-guard, as the asteroid that caused it had escaped detection until its dying day.
And there are plenty of other space rocks like the Chelyabinsk object out there, cruising unnamed and unknown through the dark depths of space. Indeed, scientists have catalogued just 10,600 near-Earth asteroids out of a total population believed to number in the millions.
For decades, researchers have been saying that they need more money and more instruments to start filling in the big gaps on the near-Earth asteroid map. And Chelyabinsk gave them a powerful example with which to augment their argument.
The events of Feb. 15, 2013 got the attention of power brokers as well as the general public, said David Morrison of NASA's Ames Research Center and the SETI (Search for Extraterrestrial Intelligence) Institute.
"There was a planetary defense conference that was, by coincidence, scheduled for two months after Chelyabinsk," Morrison said during a public lecture in Silicon Valley in November. "We had had mostly geeky engineers and scientists at these conferences — until this one, when two high-ranking people from FEMA [the Federal Emergency Management Agency] came and spent the whole time there to begin to study what the civil-defense or disaster implications would be of impacts like this."
And a few weeks after the fireball, he added, the Russian and United States militaries began talking about how to work together to find and defend Earth against hazardous asteroids.
Further, the U.S. Congress held several hearings about planetary defense in the aftermath of Chelyabinsk, and the Obama administration asked Congress to double NASA's asteroid-hunting budget, to $40 million.
Finally, last June, NASA announced that it was launching an asteroid "Grand Challenge," which would solicit ideas from industry, academia and the general public about the best ways to detect potentially hazardous asteroids and prevent them from hitting Earth.
The extra attention could help new instruments such as the privately funded Sentinel Space Telescope get off the ground. The nonprofit B612 Foundation is developing the infrared Sentinel, which it plans to launch to a Venus-like orbit in 2018. From there, the scope should be able to spot 500,000 new asteroids in less than six years of operation, officials say.
"We have the technology to deflect asteroids, but we cannot do anything about the objects we don’t know exist," B612 Foundation chairman and CEO Ed Lu, a former NASA astronaut, wrote in a blog post shortly after Chelyabinsk. | 0.875993 | 3.418663 |
It seems like years ago — November 29, 2005, to be exact — that a Japanese spacecraft named Hayabusa touched down on a small asteroid in the hope of grabbing samples of its dusty surface and returning them to Earth. Had the mission gone according to plan, the precious bits from asteroid 25143 Itokawa would have reached waiting scientists in June 2007.
But the flight of Hayabusa, Japanese for "falcon," has been anything but nominal. In fact, it's been more of a train wreck.
The craft was nearly lost during its grab-and-go encounter due to a series of malfunctions that should have doomed the spacecraft. But it hung on, despite suffering a massive fuel leak, battery failure, and being incommunicado for two months. Then its attitude-control system failed. The loss of three of its four xenon-powered engines meant it would take three extra years to get the crippled craft home, nursed every step of the way by its dedicated team of engineers.
Well, folks, Hayabusa is almost home. Late word from project manager Jun'ichiro Kawaguchi is that the sole remaining engine was commanded to shut down on March 27th, having gently accelerated the craft by 900 miles per hour (400 m per second) over the past year and nudged it onto a trajectory that will pass within several thousand miles of Earth. "What is left is a series of trajectory corrections," Kawaguchi explains, "and the project team is finalizing the preparations for them."
Barring an 11th-hour setback, in mid-June a small, 38-pound (17-kg) descent capsule will separate from the main spacecraft and slam into the atmosphere over south-central Australia. The larger craft will then maneuver to avoid Earth. Streaking through the darkness at 7.6 miles (12.2 km) per second, the capsule should parachute to the ground somewhere along a target zone, measuring 60 by 10 miles (100 by 15 km), in the remote Woomera Test Range.
After whisking it back to a clean room at the Japan Aerospace Exploration Agency (JAXA), scientists will carefully open the 16-inch-wide (40-cm) capsule to learn, finally, whether it contains any asteroidal bits. It's hardly a sure thing — despite sitting on Itokawa's surface for 30 minutes, Hayabusa failed to fire two small tantalum pellets designed to kick surface material into a collection cone.
Hayabusa's successful return would be a Big Deal in Japan, and plans for the welcome-home party are well under way. Kawaguchi has been careful not to divulge the exact date publicly, pending the engine shutdown and a sign-off from Australian authorities. "It is not at the beginning of June, and it is not at the end of June," he teases. JAXA has produced an informative 21-minute video about the mission, in English, that you can view here. There's even a dramatic movie treatment: Hayabusa: Back to the Earth.
Because spacecraft rarely come down through the atmosphere so fast — Earth-orbiting satellites fall in about a third slower — there's plenty of scientific interest in the reentry itself. The capsule should create an artificial fireball beginning at an altitude of about 120 miles (200 km) and hit a peak brightness of magnitude -6.7 (several times brighter than Venus) before deploying its parachute.
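Astronomical magnitudes are logarithmic, so the "several times brighter than Venus" comparison can be checked directly. A quick sketch, assuming Venus near its maximum brightness at about magnitude -4.6 (an illustrative value, not from the article):

```python
def brightness_ratio(m1, m2):
    """Flux ratio of object 1 to object 2 from apparent magnitudes.

    Each 5-magnitude step corresponds to a factor of 100 in flux,
    so the ratio is 100**((m2 - m1) / 5)."""
    return 100 ** ((m2 - m1) / 5)

# Capsule's predicted peak magnitude of -6.7 vs. Venus at roughly -4.6:
ratio = brightness_ratio(-6.7, -4.6)
print(f"The capsule would appear about {ratio:.1f}x brighter than Venus")
```

A difference of 2.1 magnitudes works out to roughly a factor of seven in brightness, which matches the "several times brighter" description.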
For the past year, meteor specialist Peter Jenniskens (SETI Institute) has been organizing an international team to observe the capsule's arrival from a instrument-packed DC-8 jet flying near the recovery zone. Jenniskens mounted a similar effort for the return of the Stardust sample capsule in January 2006.
Will Hayabusa, despite all its problems, make it back to Earth? Will the capsule contain hard-won bits of asteroid Itokawa? Will Kawaguchi and his team get a ticker-tape parade through downtown Tokyo? Stay tuned for the final chapter of this remarkable mission! | 0.819169 | 3.341051 |
A new technique that relies on identifying stellar twins yields a novel way to measure distances to the stars.
Paula Jofré was roughly 39,000 feet above the Atlantic Ocean when she had an idea. In between bouts of turbulence, she pondered a question her colleagues had posed earlier: What could they learn from nearby stars with identical spectra? Jofré’s revelation answered the question simply: their distances.
Stars with identical spectra will share other identical characteristics, like their intrinsic brightnesses — and comparing intrinsic to apparent brightness is a tell-tale sign of distance.
The idea relies on an age-old relation and was so simple that when she rushed home, she expected to find it within one of her textbooks. But when she couldn’t find it referenced anywhere, she ran a quick test and proved that her theory would work. “The day after I went to Gerry Gilmore, my boss in Cambridge, and told him the story,” recalls Jofré. “He just said, ‘Beautiful! You made my day.’”
Two months after that fateful flight, Jofré and her colleagues published a new method of measuring the distances to stars that had previously been too far away to assess reliably. The article appeared in the Monthly Notices of the Royal Astronomical Society on August 25, 2015.
Cosmic Rulers to the Stars
By the 1600s, astronomers understood that light obeyed the “inverse-square law.” If two stars have the same absolute brightness, but one is twice as far away, it appears one-fourth as bright as the nearby one. So relative distances are easy to measure, but the problem is determining the closer star’s distance in the first place.
The most accurate “cosmic yardstick” used today doesn’t rely on a star’s intrinsic brightness but rather its parallax — the tiny back-and-forth motion that it makes with respect to background stars as Earth loops around the Sun. The closer the star is to Earth, the more pronounced its shift. So this method can only be applied to stars in our immediate neighborhood, because for very distant stars the shift is too tiny to measure reliably. The Gaia satellite, which launched in December 2013, will be able to measure a star’s parallax 10 times better than before. It will also chart 1 billion stars. But that colossal number is only 1% of the stars in the Milky Way Galaxy.
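The parallax rule itself is simple: distance in parsecs is the reciprocal of the parallax angle in arcseconds. A minimal sketch, using Proxima Centauri's roughly 0.7685-arcsecond parallax as a familiar example (a value not given in the article):

```python
def parallax_distance_pc(parallax_arcsec):
    """Distance in parsecs from annual parallax in arcseconds (d = 1/p)."""
    if parallax_arcsec <= 0:
        raise ValueError("parallax must be positive")
    return 1.0 / parallax_arcsec

PC_IN_LY = 3.2616  # light-years per parsec (approximate)

d_pc = parallax_distance_pc(0.7685)
print(f"{d_pc:.3f} pc = {d_pc * PC_IN_LY:.2f} light-years")
```

The tiny angle is the whole problem: halving the parallax doubles the distance, so measurement errors blow up quickly for faraway stars.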
For more distant stars, astronomers have to rely on models based on a star’s temperature, surface gravity, or chemical composition. Astronomers might watch stars that vary in brightness or wait for stars to explode. These characteristics hint at a star’s absolute magnitude and allow astronomers to roughly determine its distance.
But these indirect methods can lead to fuzzy results, so astronomers are always on the hunt for new, more precise methods.
A New Cosmic Ruler: Stellar Twins
Jofré’s method looks at stellar twins. Although these stars come from different stellar nurseries (in fact, they might be hundreds of light-years away from each other), their identical spectra imply identical luminosities. Then, if the nearer star’s distance is known via parallax measurements, the inverse-square law makes quick work of determining how much farther it is to the more distant twin.
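The twin method reduces to one line of arithmetic: since the twins share a luminosity, the distance ratio follows from the apparent-magnitude difference. A numeric sketch with hypothetical values (a 50-parsec parallax distance for the near star and made-up magnitudes):

```python
def twin_distance(d_near, m_near, m_far):
    """Distance to the farther of two stellar twins.

    Identical spectra imply identical luminosities, so by the
    inverse-square law the flux ratio fixes the distance ratio:
    d_far / d_near = sqrt(F_near / F_far) = 10**((m_far - m_near) / 5).
    """
    return d_near * 10 ** ((m_far - m_near) / 5)

# Near twin at 50 pc (known from parallax), appearing 5 magnitudes
# brighter than its distant twin:
print(twin_distance(50.0, 6.0, 11.0))  # -> 500.0 parsecs
```

A 5-magnitude gap is exactly a factor of 100 in flux, hence a factor of 10 in distance.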
“It's an exceptionally simple yet powerful idea,” says co-author Andrew Casey (University of Cambridge).
In just two months, Jofré and her colleagues analyzed 536 stable, Sun-like stars for which high-resolution spectra were available. She and co-author Thomas Mädler (University of Cambridge, UK) worked almost every evening when their children were finally tucked into bed. “I would come to work exhausted,” Jofré says, “but excited to talk to [my colleagues] about the progress.”
Within those 536 stars, the researchers found 175 pairs of spectroscopic twins. And for each set of twins, one star had a reliable parallax measurement. With that in hand, they could easily calculate the distance to the other with the inverse-square method.
Their technique showed just a 7.5% difference with known parallax measurements, which in turn have an uncertainty of about 3.5%. So their method might not be quite as accurate, but the uncertainty doesn’t increase for more distant stars — a nagging problem with parallax-based determinations.
“Most of what we know about astrophysics is limited by our inability to accurately measure stellar distances,” says Casey. The size of the galaxy, the size of the universe, and the acceleration of the universe all hinge on accurately measuring distances. “That's why the billion-dollar Gaia mission was launched: to map out the positions of a billion stars in the Milky Way,” continues Casey. “But Gaia can't solve everything.”
Most Milky Way stars lie beyond Gaia's reach, and in a few years Gaia will stop running completely. “In the long-term future, other distance methods will be needed again,” says Jofré.
P. Jofré et al. “Climbing the Cosmic Ladder with Stellar Twins.” Monthly Notices of the Royal Astronomical Society. August 25, 2015. | 0.857141 | 3.976496 |
Located in the constellation of Perseus, a mere 750 light-years from Earth, a young protostar is busy spewing forth copious amounts of water. Embedded in a cloud of gas and dust, the hundred-thousand-year-old infant is blasting out this elemental life ingredient from both poles like an open hydrant – and its fast-moving droplets may be seeding our Universe…
“If we picture these jets as giant hoses and the water droplets as bullets, the amount shooting out equals a hundred million times the water flowing through the Amazon River every second,” said Lars Kristensen, a postdoctoral astronomer at Leiden University in the Netherlands and lead author of the new study detailing the discovery, which has been accepted for publication in the journal Astronomy & Astrophysics. “We are talking about velocities reaching 200,000 kilometers [124,000 miles] per hour, which is about 80 times faster than bullets flying out of a machine gun.”
To capture the quicksilver signature of hydrogen and oxygen atoms, the researchers employed the infrared instruments aboard the European Space Agency’s Herschel Space Observatory. Once the atoms were located, they were traced back to the star where they formed at just a few thousand degrees Celsius. But like water hitting hot blacktop, once the droplets encounter the outpouring of 180,000-degree-Fahrenheit (100,000-degree-Celsius) gas jets, they turn back into gas. “Once the hot gases hit the much cooler surrounding material – at about 5,000 times the distance from the sun to Earth – they decelerate, creating a shock front where the gases cool down rapidly, condense, and reform as water,” Kristensen said.
Like kids of all ages playing with squirt guns, this exciting discovery would appear to be a normal part of a star “growing up” – and may very well have been part of our own Sun’s distant past. “We are only now beginning to understand that sun-like stars probably all undergo a very energetic phase when they are young,” Kristensen said. “It’s at this point in their lives when they spew out a lot of high-velocity material – part of which we now know is water.”
Just like filling summer days with fun, this “star water” may well be enhancing the interstellar medium with life-giving fundamentals… even if that “life” is the birth of another star. The water-jet phenomenon seen in Perseus is “probably a short-lived phase all protostars go through,” Kristensen said. “But if we have enough of these sprinklers going off throughout the galaxy – this starts to become interesting on many levels.”
Skip the towel. I’ll let the Sun dry me off.
Original Story Source: National Geographic. | 0.89041 | 3.717822 |
Image credit: NASA
NASA has tested a new high-power ion engine which could give future spacecraft significantly more thrust to accomplish exploration of the solar system. The High Power Electric Propulsion (HiPEP) ion engine should eventually be 10 times as powerful as NASA’s Deep Space 1 ion engine which was tested a few years ago. An engine like this will probably power the JIMO probe allowing it to go into and out of orbit around several of Jupiter’s moons and map them in great detail.
NASA’s Project Prometheus recently reached an important milestone with the first successful test of an engine that could lead to revolutionary propulsion capabilities for space exploration missions throughout the solar system and beyond.
The test involved a High Power Electric Propulsion (HiPEP) ion engine. The event marked the first in a series of performance tests to demonstrate new high-velocity and high-power thrust needed for use in nuclear electric propulsion (NEP) applications.
“The initial test went extremely well,” said Dr. John Foster, the principal investigator of the HiPEP ion engine at NASA’s Glenn Research Center (GRC), Cleveland. “The test involved the largest microwave ion thruster ever built. The use of microwaves for ionization would enable very long-life thrusters for probing the universe,” he said.
The test was conducted in a vacuum chamber at GRC. The HiPEP ion engine was operated at power levels up to 12 kilowatts and over an equivalent range of exhaust velocities from 60,000 to 80,000 meters per second. The thruster is being designed to provide seven-to-ten-year lifetimes at high fuel efficiencies of more than 6,000 seconds of specific impulse, a measure of how much thrust is generated per pound of fuel. By contrast, the Space Shuttle main engines have a specific impulse of 460 seconds.
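Specific impulse converts directly to effective exhaust velocity through standard gravity, which is a quick way to check the figures above. A sketch assuming the usual relation v_e = Isp × g0:

```python
G0 = 9.80665  # standard gravity, m/s^2

def exhaust_velocity(isp_seconds):
    """Effective exhaust velocity (m/s) from specific impulse: v_e = Isp * g0."""
    return isp_seconds * G0

# HiPEP at ~6,000 s vs. a Shuttle main engine at ~460 s:
print(f"HiPEP: {exhaust_velocity(6000):,.0f} m/s")  # close to 59 km/s
print(f"SSME:  {exhaust_velocity(460):,.0f} m/s")   # under 5 km/s
```

The 6,000-second figure yields roughly 59,000 m/s, consistent with the quoted 60,000–80,000 m/s exhaust-velocity range.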
The HiPEP thruster operates by ionizing xenon gas with microwaves. At the rear of the engine is a pair of rectangular metal grids that are charged with 6,000 volts of electric potential. The force of this electric field exerts a strong electrostatic pull on the xenon ions, accelerating them and producing the thrust that propels the spacecraft. The rectangular shape, a departure from the cylindrical ion thrusters used before, was designed to allow for an increase in engine power and performance by means of stretching the engine. The use of microwaves should provide much longer life and ion-production capability compared to current state-of-the-art technologies.
This new class of NEP thrusters will offer substantial performance advantages over the ion engine flown on Deep Space 1 in 1999. Overall improvements include up to a factor of 10 or more in power; a factor of two to three in fuel efficiency; a factor of four to five in grid voltage; a factor of five to eight in thruster lifetime; and a 30 percent improvement in overall thruster efficiency. GRC engineers will continue testing and development of this particular thruster model, culminating in performance tests at full power levels of 25 kilowatts.
“This test represents a huge leap in demonstrating the potential for advanced ion technologies, which could propel flagship space exploration missions throughout the solar system and beyond,” said Alan Newhouse, Director, Project Prometheus. “We commend the work of Glenn and the other NASA Centers supporting this ambitious program.”
HiPEP is one of several candidate propulsion technologies under study by Project Prometheus for possible use on the first proposed flight mission, the Jupiter Icy Moons Orbiter (JIMO). Powered by a small nuclear reactor, electric thrusters would propel the JIMO spacecraft as it conducts close-range observations of Jupiter’s three icy moons, Ganymede, Callisto and Europa. The three moons could contain water, and where there is water, there is the possibility of life.
Development of the HiPEP ion engine is being carried out by a team of engineers from GRC; Aerojet, Redmond, Wash.; Boeing Electron Dynamic Devices, Torrance, Calif.; Ohio Aerospace Institute, Cleveland; University of Michigan, Ann Arbor, Mich.; Colorado State University, Fort Collins, Colo.; and the University of Wisconsin, Madison, Wis.
Original Source: NASA News Release | 0.851557 | 3.421218 |
It’s not easy getting to Mercury. At its closest, the innermost planet of the Solar System might be only 77 million km away from Earth. It will, however, require more energy for the European-Japanese spacecraft BepiColombo to get to its destination than it would reaching the dwarf planet Pluto, orbiting between 4.4 and 7.3 billion km away from the Sun.
The spacecraft’s cruise through the inner Solar System takes seven years before its two orbiters, ESA’s Mercury Planetary Orbiter and the Mercury Magnetospheric Orbiter of the Japanese Aerospace Exploration Agency, can be placed into the correct orbits around the scorched rocky planet. In comparison, Solar Orbiter, which heads even closer to the Sun, within the orbit of Mercury, needs just under two years to reach its destination.
Why, then, does it take so long for BepiColombo? Travelling from Earth toward the Sun, BepiColombo must constantly brake against the Sun's gravitational pull in order to reach Mercury with just the right velocity to enter a stable orbit around the planet. To slow down, BepiColombo uses a combination of the solar electric propulsion system aboard the Mercury Transfer Module, one of the components of the mission, and nine gravity-assist flybys of Earth, Venus and Mercury.
ESA's interactive ‘Where is Bepi’ tool allows you to explore BepiColombo’s trajectory, including its flybys, and follow the position of the spacecraft on every single day of its journey.
Credit: University of Colorado at Boulder
Astronomers have caught a supermassive black hole in a distant galaxy snacking on gas and then "burping" -- not once, but twice.
The galaxy under study, called SDSS J1354+1327 (J1354 for short), is about 800 million light-years from Earth. The team used observations from NASA's Hubble Space Telescope, the Chandra X-ray Observatory, as well as the W.M. Keck Observatory in Mauna Kea, Hawaii, and the Apache Point Observatory (APO) near Sunspot, New Mexico.
Chandra detected a bright, point-like source of X-ray emission from J1354, a telltale sign of the presence of a supermassive black hole millions or billions of times more massive than our Sun. The X-rays are produced by gas heated to millions of degrees by the enormous gravitational and magnetic forces near the black hole. Some of this gas will fall into the black hole, while a portion will be expelled in a powerful outflow of high-energy particles.
By comparing X-ray images from Chandra and visible-light (optical) images from Hubble, the team determined that the black hole is located in the center of the galaxy, the expected address for such an object. The X-ray data also provide evidence that the supermassive black hole is embedded in a heavy veil of dust and gas.
The results indicate that in the past, the supermassive black hole in J1354 appears to have consumed, or accreted, large amounts of gas while blasting off an outflow of high-energy particles. The outflow eventually switched off then turned back on about 100,000 years later. This is strong evidence that accreting black holes can switch their power output off and on again over timescales that are short compared to the 13.8-billion-year age of the universe.
"We are seeing this object feast, burp, and nap, and then feast and burp once again, which theory had predicted," said Julie Comerford of the University of Colorado (CU) at Boulder's Department of Astrophysical and Space Science, who led the study. "Fortunately, we happened to observe this galaxy at a time when we could clearly see evidence for both events."
So why did the black hole have two separate meals? The answer lies in a companion galaxy that is linked to J1354 by streams of stars and gas produced by a collision between the two galaxies. The team concluded that clumps of material from the companion galaxy swirled toward the center of J1354 and then were eaten by the supermassive black hole.
The team used optical data from Hubble, Keck, and APO to show that electrons had been stripped from atoms in a cone of gas extending some 30,000 light-years south from the galaxy's center. This stripping was likely caused by a burst of radiation from the vicinity of the black hole, indicating that a feasting event had occurred. To the north they found evidence for a shock wave, similar to a sonic boom, located about 3,000 light-years from the black hole. This suggests that a burp occurred after a different clump of gas had been consumed roughly 100,000 years later.
"This galaxy really caught us off guard," said CU Boulder doctoral student Rebecca Nevin, a study co-author who used data from APO to look at the velocities and intensities of light from the gas and stars in J1354. "We were able to show that the gas from the northern part of the galaxy was consistent with an advancing edge of a shock wave, and the gas from the south was consistent with an older outflow from the black hole."
Our Milky Way galaxy's supermassive black hole has had at least one burp. In 2010, another research team discovered a Milky Way belch using observations from the orbiting Fermi Gamma-ray Space Telescope to look at the galaxy edge on. Astronomers saw gas outflows dubbed "Fermi bubbles" that shine in the gamma-ray, X-ray, and radio-wave portions of the electromagnetic spectrum.
"These are the kinds of bubbles we see after a black hole feeding event," said CU postdoctoral fellow Scott Barrows. "Our galaxy's supermassive black hole is now napping after a big meal, just like J1354's black hole has in the past. So we also expect our massive black hole to feast again, just as J1354's has."
Other co-authors on the new study include postdoctoral fellow Francisco Muller-Sanchez of CU Boulder, Jenny Greene of Princeton University, David Pooley from Trinity University, Daniel Stern from NASA's Jet Propulsion Laboratory in Pasadena, California, and Fiona Harrison from the California Institute of Technology. | 0.825031 | 4.047867 |
The Milky Way has about 100 billion stars, most of which formed when our galaxy was half its current age. Over time, the star formation rate has slowed considerably in our galaxy. CNRS researchers and their international colleagues [1] provide a new explanation for this phenomenon by showing that stellar winds from massive stars disturb the gas clouds in which stars like the Sun form, slowing their formation. Using NASA’s SOFIA observatory, scientists mapped the “footprint” left by stellar winds on the gas clouds of the Orion Nebula (see image). In particular, they were able to measure the amount of energy deposited in the cloud with unprecedented accuracy. These results reveal that the influence of stellar winds is even greater than that of supernovae, which are considered to be among the most violent phenomena in the universe. The study was published on January 7, 2019 in the journal Nature.
[1] Institut de recherche en astrophysique et planétologie (CNRS/Université Toulouse III Paul Sabatier), Institut de Radioastronomie Millimétrique, Leiden Observatory, Institute of Physics – University of Cologne, Instituto de Física Fundamental (CSIC), Telespazio Vega UK Ltd for ESA/ESAC, Universities Space Research Association/SOFIA, NASA Ames Research Center, Department of Astronomy – University of Maryland and Max-Planck Institute for Radio Astronomy.
- Article: C. Pabst, R. Higgins, J. R. Goicoechea, D. Teyssier, O. Berné, E. Chambers, M. Wolfire, S. Suri, R. Güsten, J. Stutzki, U. U. Graf, C. Risacher, A. G. G. M. Tielens, “Disruption of the Orion Molecular Core 1 by the stellar wind of the massive star θ1 Ori C.” Nature, January 7, 2019. http://dx.doi.org/10.1038/s41586-018-0844-1
- SOFIA Science Center press release: Lifting the Veil on Star Formation in the Orion Nebula
- Olivier Berné, [email protected] | 0.867756 | 3.175783 |
They say good things come to those who wait. Never was this more exemplified than this evening after several hours in bitterly cold conditions on Culloden moor with my video telescope. The cold made setup and targeting much more fraught than usual, and the small gas stove I’d balanced precariously beside the monitor did little to help.
However, near the end of my session I hit the jackpot when this stunning image of the Whirlpool galaxy, over 23 million light years away, materialised from the video screen.
This image is a true testament to the power of video astronomy and the huge effective increase in aperture it lends to amateur telescopes. Dust lanes and connective spiral arms are clearly in evidence here. The best eyepiece views of the Whirlpool I’ve seen have only really resolved the two central cores of the interacting galaxies. You generally need a scope of 16 inches or more to reveal dust tendrils in this much detail.
This is how the Earl of Rosse sketched the galaxy back in 1845 with his monstrous 72-inch reflector from the grounds of Birr Castle in Ireland.
Of course, back then these structures were given the loose classification of ‘nebulae’ and were assumed to be part of our local galaxy. It wasn’t until the 1920s, when Edwin Hubble observed Cepheid variable stars within each bright core of the Whirlpool, that this image was understood to show two distinct but interacting galaxies, the larger of which has been estimated to be 35% the size of our own Milky Way galaxy.
M51 is still a hot target for professional astronomers, not least because of the black hole that exists within the heart of the larger galaxy. This central region is undergoing rapid stellar changes and star formation. | 0.809752 | 3.043154 |
Physics opens a Pandora’s box every time physicists take a step toward understanding it better. Quantum physics, also known as quantum mechanics, is the physics of the very small, such as atoms and subatomic particles. The first quantum theory was introduced by Max Karl Ernst Ludwig Planck (Max Planck) at the very end of the 19th century, and he won the Nobel Prize in 1918 for his discovery of energy quanta. Today quantum theory rests on many principles, but three stand out as truly revolutionary: quantized properties, particles of light, and waves of matter. So, what are these?
Planck was chasing an explanation for the distribution of colors emitted across the spectrum in the glow of red-hot and white-hot objects, such as light-bulb filaments. In 1900, while making physical sense of the equation he had derived to describe this distribution, he found that it implied only certain combinations of colors were emitted, specifically those that were whole-number multiples of some base value. Mysteriously, the colors were quantized! No one expected this, because light was supposed to act like a wave, which means the color values should form a continuous spectrum. What was forbidding atoms from producing the colors between these whole-number multiples?
It was so strange that Planck regarded quantization as nothing more than a mathematical trick. “If a revolution occurred in physics in 1900, nobody seemed to notice it. Planck was no exception,” wrote Helge Kragh in his article “Max Planck: the Reluctant Revolutionary.” Planck’s equation contained a number that would become very important for the future development of quantum mechanics; it is now called Planck’s constant.
Quantization also helped explain other mysteries of physics. In 1907, Albert Einstein used Planck’s hypothesis to explain why the temperature of a solid changes by different amounts when the same quantity of heat is added, depending on its starting temperature. Since the early 1800s, the science of spectroscopy had shown that different elements emit and absorb specific colors of light, known as spectral lines.
Although spectroscopy was a reliable method for determining the elements contained in objects such as distant stars, scientists were perplexed about why each element gave off those specific lines in the first place. Johannes Rydberg derived an equation that described the spectral lines emitted by hydrogen, but nobody could explain why the equation worked.
That changed in 1913, when Niels Bohr applied Planck's quantization to the planetary model of the atom that Ernest Rutherford had introduced in 1911, in which electrons orbit the nucleus much as planets orbit the Sun. Bohr proposed that electrons were restricted to special orbits around an atom's nucleus. They could jump between these orbits, and the energy released by the jumps produced the specific colors of light observed as spectral lines. This is how quantized properties grew from a mere mathematical trick into a founding principle of quantum mechanics.
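Rydberg's equation mentioned above, which the Bohr model finally explained, can be sketched in a few lines of Python. The constant and the transitions used here are standard textbook values, not taken from this article:

```python
# Rydberg formula for hydrogen: 1/lambda = R * (1/n_low^2 - 1/n_high^2)
RYDBERG = 1.0968e7  # Rydberg constant for hydrogen, per metre

def hydrogen_wavelength_nm(n_low, n_high):
    """Wavelength (nm) emitted when an electron drops from n_high to n_low."""
    inv_lambda = RYDBERG * (1.0 / n_low**2 - 1.0 / n_high**2)
    return 1e9 / inv_lambda

# The Balmer series (drops to n=2) gives hydrogen's visible spectral lines:
for n in (3, 4, 5):
    print(f"n={n} -> 2: {hydrogen_wavelength_nm(2, n):.0f} nm")
```

The three printed wavelengths (roughly 656, 486, and 434 nm) are the red, blue-green, and violet hydrogen lines that spectroscopists had catalogued long before anyone could explain them.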
Particles of Light
In 1905, Albert Einstein published a paper, "Concerning a Heuristic Point of View Toward the Emission and Transformation of Light," in which he proposed that light travels not as a wave but as packets of energy quanta. Each packet, he argued, could be generated or absorbed only as a whole, specifically when an atom jumps between quantized vibration rates. This also applied, as was shown some years later, when an electron jumps between quantized orbits. Under this model, Einstein's energy quanta carried the energy difference of the jump; that energy difference, divided by Planck's constant, determined the color of light carried by the quanta.
With this new way of visualizing light, Einstein offered insight into the behavior of nine different phenomena, including the specific colors Planck had described being emitted from a light-bulb filament. It also explained how certain colors of light could eject electrons from a metal surface, a phenomenon known as the photoelectric effect.
However, Einstein wasn't completely justified in taking this leap, according to Stephen Klassen, an associate professor of physics at the University of Winnipeg. In his 2008 paper, "The Photoelectric Effect: Rehabilitating the Story for the Physics Classroom," Klassen states that Einstein's energy quanta aren't necessary for explaining all nine of those phenomena: certain mathematical treatments of light as just a wave are capable of explaining both the specific colors Planck described being emitted from a light-bulb filament and the photoelectric effect.
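Einstein's picture of the photoelectric effect reduces to one relation: the ejected electron's kinetic energy is the photon energy hf minus the metal's work function. A minimal sketch, using illustrative textbook values (sodium's work function and the two wavelengths are my assumptions, not from the article):

```python
H = 6.626e-34    # Planck's constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

def ejected_ke_ev(wavelength_nm, work_function_ev):
    """Kinetic energy (eV) of an ejected electron, or None if no emission occurs."""
    photon_ev = H * C / (wavelength_nm * 1e-9) / EV
    surplus = photon_ev - work_function_ev
    return surplus if surplus > 0 else None

# Sodium (work function ~2.28 eV): violet light ejects electrons, red light cannot,
# no matter how intense the beam; only the photon's frequency matters.
print(ejected_ke_ev(400, 2.28))  # roughly 0.82 eV
print(ejected_ke_ev(700, 2.28))  # None
```

That intensity-independent cutoff frequency is exactly the feature a pure wave picture struggles with, which is why the photoelectric effect became the standard classroom argument for light quanta.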
The word "photon" for these energy quanta became popular roughly 20 years after Einstein's paper, thanks to Arthur Compton's 1923 work showing that light scattered by an electron beam changed in color. This proved that particles of light, photons, were indeed colliding with particles of matter, the electrons, confirming Einstein's hypothesis. By now it was clear that light can behave both as a particle and as a wave, placing light's wave-particle duality at the foundation of quantum mechanics.
Waves of Matter
Everyone was fairly sure that all matter existed in the form of particles; the evidence had been building slowly since the discovery of the electron in 1896. But the demonstration of light's wave-particle duality made scientists question whether matter was limited to acting only as particles. Perhaps wave-particle duality could ring true for matter as well. The first scientist to make substantial headway with this reasoning was Louis de Broglie, who in 1924 used the equations of special relativity.
He showed that particles can exhibit wave-like characteristics and waves particle-like ones. Then, in 1925, two scientists, working along separate lines of mathematical thinking, applied de Broglie's reasoning to explain how electrons whirl around in atoms, a phenomenon that was unexplainable using the equations of classical mechanics. In Germany, the physicist Werner Heisenberg, together with Max Born and Pascual Jordan, accomplished this by developing matrix mechanics. The Austrian physicist Erwin Schrödinger built a closely related theory known as wave mechanics. Schrödinger showed in 1926 that the two approaches were equivalent, though Wolfgang Pauli, a Swiss physicist, had sent Jordan an unpublished result showing that matrix mechanics was more complete.
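De Broglie's relation itself is one line: wavelength equals Planck's constant divided by momentum. A short sketch with illustrative inputs (the chosen electron speed and the baseball are my examples, not from the article):

```python
H = 6.626e-34           # Planck's constant, J*s
M_ELECTRON = 9.109e-31  # electron mass, kg

def de_broglie_wavelength_m(mass_kg, speed_m_s):
    # de Broglie relation: wavelength = h / momentum
    return H / (mass_kg * speed_m_s)

# An electron at 1% of light speed: wavelength near atomic spacing (~2.4e-10 m),
# which is why electron beams diffract off crystals like waves.
print(de_broglie_wavelength_m(M_ELECTRON, 3e6))

# A 0.145 kg baseball at 40 m/s: around 1e-34 m, far too small to ever observe,
# which is why everyday matter never shows its wave nature.
print(de_broglie_wavelength_m(0.145, 40))
```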
The Schrödinger-Heisenberg model of the atom, in which each electron acts as a wave, sometimes described as a cloud, around the nucleus, replaced the Rutherford-Bohr model. One stipulation of the new model was that the ends of the wave that forms an electron must meet.
In "Quantum Mechanics in Chemistry, 3rd Ed.", Melvin Hanna writes, "The imposition of the boundary conditions has restricted the energy to discrete values." A consequence of this stipulation is that only whole numbers of crests and troughs are allowed, which explains why some properties are quantized. In the Schrödinger-Heisenberg model of the atom, electrons obey a "wave function" and occupy "orbitals" rather than the circular orbits of the Rutherford-Bohr model.
Atomic orbitals come in a variety of shapes, ranging from spheres to dumbbells to daisies. In 1927, Walter Heitler and Fritz London developed wave mechanics further to explain how atomic orbitals could combine to form molecular orbitals, effectively showing why atoms bond to one another to form molecules. This was yet another problem that had been unsolvable using the mathematics of classical mechanics. These insights gave rise to the field of quantum chemistry.
This is how quantum physics helped us understand such small but important things. What of its future? Quantum mechanics as a distinct research discipline largely wound down in the 1960s with the arrival of quantum field theory, von Neumann's rigorous formalization of mixed states, and the theorems discovered by John Stewart Bell, with the first playing the largest role. Together these represent extensions of traditional quantum mechanics to a great many things: quantum field theory predicts the existence of particle spin from first principles and accommodates special relativity beautifully, while Bell's theorems establish that quantum mechanics cannot be reproduced by any classical theory unless information is allowed to travel faster than light. So the actual future is this: what remain now are not questions for quantum mechanics itself, but questions about the application of quantum mechanics.
Jan 30, 2013
What takes place in thunderstorms on Earth is most likely a smaller version of large scale phenomena.
“I have always believed that astrophysics should be the extrapolation of laboratory physics, that we must begin from the present Universe and work our way backward to progressively more remote and uncertain epochs.”
— Hannes Alfvén
Previous Picture of the Day articles discussed electric fields that build up in and around thunderstorms. Since Earth is electrically charged, it maintains an electric field at its surface of between 50 and 200 volts per meter. In other words, for every meter of altitude the voltage increases by that measure.
Electric fields beneath thunderstorms increase to 10,000 volts per meter because the storms and the Earth act like the plates of a capacitor, storing electrical energy from the surrounding environment. A "wind" of charged particles blows toward the developing storm, pulling neutral air molecules along with the current and creating powerful updrafts that can occasionally rise into the stratosphere. Once the storm reaches a critical threshold, the stored energy is released as a lightning bolt.
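The capacitor analogy can be given rough numbers with the standard expression for the energy density of an electric field, u = ½ε₀E². The field strengths below are the ones quoted in the surrounding text; the calculation itself is a textbook sketch, not from the article:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def field_energy_density(field_v_per_m):
    # Energy density of an electric field: u = 1/2 * eps0 * E^2, in J/m^3
    return 0.5 * EPS0 * field_v_per_m ** 2

fair_weather = field_energy_density(100)     # ~4.4e-8 J/m^3
under_storm = field_energy_density(10_000)   # ~4.4e-4 J/m^3
print(under_storm / fair_weather)  # a 100x stronger field stores 10,000x the energy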
Thunderstorms act like “particle accelerators,” launching massive discharges upward to space, as well as downward to ground. The upward strokes are known as red sprites and blue jets but are not easy to detect, since they last just a few milliseconds and are at high altitude.
Red sprites are massive, diffuse flashes above active thunderstorms, coinciding with normal lightning strokes. They can be single events, or multiple, with filaments above and below, often extending to altitudes close to 100 kilometers. Some of the largest sprites contain dozens of individual smaller sprites, covering horizontal distances of 50 kilometers, with a volume of 10,000 cubic kilometers.
Blue jets are distinct from sprites, since they propagate upward in narrow cones that disappear at an altitude of about 50 kilometers. They are also more powerful because the electric discharges are confined within a smaller spatial volume. Geophysicists are beginning to realize that sprites and jets are part of every moderate to large storm system and are an essential component in Earth’s electric circuit.
Electric Universe theorists propose that what is observed on other planets, within galaxies, or in free space should be used as examples of what can occur on Earth, as opposed to using our planet to model the Universe. We are part of a cosmic “ecology” that maintains a coherent physical aspect, so that aspect ought to apply here.
The European Space Agency's (ESA) International Gamma-Ray Astrophysics Laboratory (INTEGRAL) was launched from the Baikonur Cosmodrome on October 17, 2002. It is the first space-based observatory that can simultaneously study objects in gamma rays, X-rays, and visible light. One of INTEGRAL's major finds was the 2008 observation of an extreme X-ray source at the center of a remote galaxy cluster.
X-ray emissions are far too intense to be generated from hot gas in the cluster, so “shockwaves must be rippling through the gas.” Astrophysicists suggested that the shockwaves had “turned the galaxy into a giant particle accelerator.”
The temperature of gases in the cluster core was measured at 100 million Kelvin. Researchers think that electrons accelerated by shockwaves traveling through the cluster gas generate the intense X-rays. The shockwaves are said to be created when two galaxy clusters “collide and merge.”
By referring to material with a temperature of 100 million Kelvin as “hot gas,” ESA scientists are highlighting their complete ignorance of plasma and its behavior. No atom can remain intact at such temperatures: electrons are stripped from the nuclei and powerful electric fields develop. The gaseous matter becomes plasma, capable of conducting electricity and forming double layers.
Nobel laureate Hannes Alfvén maintained that double layers are a unique celestial object, and that intense X-ray and gamma ray sources could be due to double layers “shorting out” and exploding. Double layers can accelerate charged particles up to enormous energies in a variety of frequencies, forming “plasma beams.” If the double layer breaks the circuit, the double layer may explode, drawing electricity from the entire circuit and discharging more energy than was contained in the double layer.
Double layers dissipate when they accelerate particles and emit radiation, so they must be powered by external sources. Birkeland currents are theorized to transmit electric power over many light-years through space, perhaps over thousands of light-years, so they are most likely the power source for the extreme X-ray generator in Ophiuchus.
So-called “particle accelerators” in thunderstorms and galaxy clusters are most likely manifestations of Birkeland currents pouring electricity into double layers. Sprites and jets exhibit filamentary structure, as does terrestrial lightning. Streamers of plasma can be seen flowing through galaxy clusters. In time, it may become evident that the scaleable nature of the plasma Universe reveals itself through electrical events both large and small.
Moon* ♓ Pisces
Moon phase on 5 September 2055 (Sunday) is Full Moon; the 14-day-old Moon is in Pisces.
Moon rises at sunset and sets at sunrise. It is visible all night and it is high in the sky around midnight.
Moon is passing about ∠7° of ♓ Pisces tropical zodiac sector.
Lunar disc appears visually 6.5% narrower than solar disc. Moon and Sun apparent angular diameters are ∠1783" and ∠1903".
This Full Moon is the Harvest Moon of September 2055.
There is high Full Moon ocean tide on this date. Combined Sun and Moon gravitational tidal force working on Earth is strong, because of the Sun-Earth-Moon syzygy alignment.
The Moon is 14 days old. Earth's natural satellite is moving through the middle part of current synodic month. This is lunation 688 of Meeus index or 1641 from Brown series.
Length of current 688 lunation is 29 days, 8 hours and 5 minutes. This is the year's shortest synodic month of 2055. It is 25 minutes shorter than next lunation 689 length.
Length of current synodic month is 4 hours and 39 minutes shorter than the mean length of synodic month, but it is still 1 hour and 30 minutes longer, compared to 21st century shortest.
This lunation true anomaly is ∠336.4°. At the beginning of next synodic month true anomaly will be ∠352.8°. The length of upcoming synodic months will keep decreasing since the true anomaly gets closer to the value of New Moon at point of perigee (∠0° or ∠360°).
12 days after point of perigee on 24 August 2055 at 04:11 in ♍ Virgo. The lunar orbit is getting wider, while the Moon is moving outward from Earth. It will keep this direction for the next 3 days, until it gets to the point of next apogee on 8 September 2055 at 13:59 in ♈ Aries.
Moon is 402 042 km (249 817 mi) away from Earth on this date. Moon moves farther next 3 days until apogee, when Earth-Moon distance will reach 406 177 km (252 387 mi).
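As a cross-check, the apparent angular diameter quoted earlier on this page (∠1783″) follows directly from this Earth-Moon distance and the Moon's mean radius (about 1737.4 km, a standard reference value not stated on the page):

```python
import math

MOON_RADIUS_KM = 1737.4  # mean lunar radius (standard reference value)

def angular_diameter_arcsec(distance_km):
    # The disc subtends twice the half-angle whose sine is radius/distance.
    return 2 * math.degrees(math.asin(MOON_RADIUS_KM / distance_km)) * 3600

print(round(angular_diameter_arcsec(402_042)))  # distance quoted above -> ~1783"
```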
2 days after its descending node on 3 September 2055 at 07:35 in ♒ Aquarius, the Moon is following the southern part of its orbit for the next 12 days, until it will cross the ecliptic from South to North in ascending node on 17 September 2055 at 19:18 in ♌ Leo.
15 days after beginning of current draconic month in ♌ Leo, the Moon is moving from the second to the final part of it.
4 days after previous South standstill on 31 August 2055 at 23:31 in ♑ Capricorn, when Moon has reached southern declination of ∠-20.442°. Next 10 days the lunar orbit moves northward to face North declination of ∠20.530° in the next northern standstill on 15 September 2055 at 17:00 in ♋ Cancer.
The Moon is in Full Moon geocentric opposition with the Sun on this date and this alignment forms Sun-Earth-Moon syzygy. | 0.860247 | 3.100539 |
DSCOVR satellite to keep a weather eye on solar storms
Sunday's delayed launch means that NOAA's Deep Space Climate Observatory (DSCOVR) will wait at least a day before it can take up its job of helping warn of potentially damaging solar flares. If Monday's rescheduled liftoff goes as planned, the unmanned spacecraft will be on its way to a point between the Earth and the Sun, where it will act as a space weather observatory and early warning station.
The Sun regularly goes through periods of great activity as it throws off massive solar flares many times larger than the Earth. Most of these flares blast into empty space, but some end up heading for our planet. When these reach us, they interact with the Earth's magnetic field to produce solar or geomagnetic storms that disrupt communications, satellites, GPS, transportation, and power grids, as well as posing an increased radiation hazard to aircraft flying in the polar regions. US government estimates place the potential damages from a large storm at up to US$2 trillion.
Despite this threat, Earth's defenses are surprisingly sparse. At the moment, the only solar weather satellite that can currently provide real-time warnings is the Advanced Composition Explorer (ACE), which was launched in 1997 and is at the end of its service life. If it fails, there would be little or no warning before a solar flare struck.
DSCOVR, formerly known as Triana, is a first step toward remedying this. It began in the late 1990s as an Earth observation satellite. Though DSCOVR was built, it ended up in storage when the mission was canceled in 2001. There it remained until NOAA and the US Air Force took it out of mothballs in 2008. It was then refurbished by NASA and equipped with updated instruments, while the Air Force offered to foot the bill for a SpaceX Falcon 9 launch vehicle to put it into space on a five-year mission to provide warnings of incoming solar flares approaching the Earth.
Fitted with two deployable solar arrays, a propulsion module, a boom, and a high-gain antenna, DSCOVR carries a battery of instruments for monitoring solar weather. These include the Solar Wind Plasma Sensor (Faraday Cup) and Plasma-Magnetometer (PlasMag), which measure solar wind velocity and magnetic field intensity; the National Institute of Standards and Technology Advanced Radiometer (NISTAR), for measuring the power of electromagnetic radiation reflected and emitted from the entire sunlit face of the Earth; the Electron Spectrometer (ES), for high temporal resolution solar wind observations; and the Pulse Height Analyzer (PHA), for real-time measurements of particle events that might affect DSCOVR's electronics.
But the party piece of DSCOVR is the Earth Polychromatic Imaging Camera (EPIC), a 30 cm (11.8 in) telescope operating in the ultraviolet and visible spectrum. Its job is to take images of the sunlit side of the Earth to study weather, climate, and pollution. With a resolution between 25 and 35 km (15.5 to 21.7 mi), it is the first satellite capable of sending back a single high-resolution image of Earth instead of building one up from a mosaic of images.
DSCOVR is scheduled to be launched atop a SpaceX Falcon 9 v 1.1 launch vehicle from Cape Canaveral on Monday. If successful, it will spend about 110 days traveling to its destination, which is the Sun-Earth Lagrangian point 1 (L1), 1.5 million km (930,000 mi) from Earth. This is the point between the Earth and the Sun where the gravitational forces balance out, so the satellite remains on station. It allows the satellite to always stay sunward of Earth at a distance that provides 45 to 30 minutes of warning, depending on the speed of the incoming solar particles.
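The quoted 45-to-30-minute warning window is just distance over speed. A quick sketch, assuming illustrative solar-wind speeds of about 550 and 830 km/s (my values, not from the article):

```python
# Warning time from L1 is simply distance divided by solar-wind speed.
L1_DISTANCE_KM = 1_500_000  # Sun-Earth L1 distance from Earth, as quoted above

def warning_minutes(wind_speed_km_s):
    return L1_DISTANCE_KM / wind_speed_km_s / 60

# Illustrative stream speeds (assumed here, not from the article):
print(round(warning_minutes(550)))  # slow stream -> ~45 minutes of warning
print(round(warning_minutes(830)))  # fast stream -> ~30 minutes
```

Faster incoming particles mean less warning, which is why the article gives a range rather than a single figure.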
The video below outlines the DSCOVR mission.
Update: DSCOVR was successfully launched on February 11, 2015 from Cape Canaveral at 6:03 EST.
This month's Full Moon is traditionally known as the Flower Moon, Corn Planting Moon or the Milking Moon. But the Moon will also be "super" this week, meaning astronomers expect it to be bigger and brighter than usual. Although the term Supermoon is not scientific, the event is popular and draws in large crowds of stargazers.
What is a Supermoon?
Supermoons are loosely defined as Full Moons that occur within 90 percent of lunar perigee - that is, the point of the Moon's orbit closest to Earth.
Because the Moon follows an elliptic path around the planet, every night it is closer or farther from our planet.
During a Supermoon, the lunar orb may appear up to 14 percent larger and 30 percent brighter than usual.
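Those percentages follow from comparing a full moon at perigee with one at apogee. A quick sketch, assuming rough extreme orbital distances (typical published values, not taken from this article):

```python
# Compare a full moon at perigee (closest) with one at apogee (farthest).
PERIGEE_KM = 356_500  # assumed rough extreme values for the lunar orbit
APOGEE_KM = 406_700

size_ratio = APOGEE_KM / PERIGEE_KM   # apparent diameter scales as 1/distance
brightness_ratio = size_ratio ** 2    # apparent brightness scales as 1/distance^2

print(f"{(size_ratio - 1) * 100:.0f}% larger")        # -> 14% larger
print(f"{(brightness_ratio - 1) * 100:.0f}% brighter")  # -> 30% brighter
```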
NASA's Gordon Johnston said: "For 2020, the four full Moons from February through May meet this 90 percent threshold."
When is the next Supermoon?
The upcoming Supermoon coincides with the May Flower Moon, which peaks on Thursday, May 7.
Here in the UK, the Full Moon will peak in brightness during the day so you will have to wait until the evening for the Supermoon to appear.
When viewed from London, the Moon will creep over the horizon at about 8.44am BST.
Astronomer Deborah Byrd of EarthSky.org said: "The Moon appears full to the eye for two to three nights.
"However, astronomers regard the moon as full at a precisely defined instant, when the Moon is exactly 180-degrees opposite the sun in ecliptic longitude."
How to see the Supermoon in the UK?
When viewed from London, the Moon will rise in the east-southeast skies.
The Moon will sit fairly low on the horizon, heading in a west-southwest direction.
Eagle-eyed astronomers will notice a bright star up and to the left of the Moon - the gas giant Jupiter.
The Moon will then set the following morning at about 6.09am BST.
You will also have a chance to watch the Supermoon online and from the comfort of your home.
Courtesy of the Virtual Telescope Project in Italy, the spectacle will be broadcast live online and free of charge.
The YouTube stream will start at 7.30pm BST (6.30pm UTC) on May 7.
Astrophysicist Gianluca Masi said: "The Supermoon is back for the fourth and last time this year, to celebrate the Flower Full Moon.
"The Virtual Telescope Project will bring to you the show of this Supermoon while it rises and shines above the legendary skyline of Rome.
"Join our free, live webcast: you just need a computer/tablet/smartphone and an internet connection.
"We will admire our satellite rising above the breathtaking skyline of Rome, the Eternal City. It will be an unforgettable experience."
The Supermoon will also be broadcast live online by the robotic telescope service Slooh.
The Slooh stream will start at midnight on May 7 (11pm UTC on May 6).
Slooh said: "Slooh's team of expert guest will explain everything there is to know about Supermoons and, while viewers watch the live feeds from Slooh's observatories in the Canary Islands and Chile, our special guests will delve into the impact the Moon has on the Earth and the natural world." | 0.896291 | 3.252611 |
Note: written during the 2001 sci.astro debates.
Expanded with Light Speed Limitation section during IRC Session.
Planet X and the 12th Planet are one and the same.
Speed, in space, is a relative thing. Your submarines move more slowly
than your cars because they deal with more drag. Likewise, objects shot
into space or incoming feel little distress when out where the atmosphere
is negligible, and tend to heat up and burn when in the thick of Earth's
atmosphere. Thus, objects in space have no ill effects from a
high speed, other than what they might encounter. What might that be, in
the case of Planet X, which we have described as traversing the solar
system from one side of Saturn's orbit to the other in 3 short months [Note: see 2003 Date explanation, as this was part of the
May 15, 2003 white lie].
- Gravity Draw from the Sun
- Human scientists who deal with gravity as some mysterious "force",
unexplained except by the math that describes it, would be
boggled by the path of Planet X we have described. An object comes on,
and depending upon its speed it will either pass by a gravity draw, with
an "escape velosity", or be drawn in to crash, ultimately, on the
surface of the gravity draw or into some sort of circular or eliptical
orbit. So the theory goes. Apply the particle explanation to the force
of gravity, as we have described it, and you have another scenario,
which by the way explains why your Moon remains up there when
according to Newton it should not. Planet X is, of course, drawn by the
gravity pull of the Sun, and thus its periodic passage. But it is also
pushed away by the gravity particle streams emitted by the Sun, which
can be described as a fire hose of force, meeting the fire hose of force
from Planet X itself. They buffer away from each other, forcing the
speeding Planet X to bypass the Sun, at a distance based on
its mass and the mass of the Sun. The reducing mass of the Sun explains
why Planet X is coming closer, during its passage, at the present time,
than its past passages which were through the Asteroid Belt.
- Perturbations from Earth or Other Planets
- This is a variable that depends on speed as well as mass. By the time
Planet X enters the solar system, its speed toward the Sun ensures that
it will move past any other planet, including Jupiter, that it may come
close to. Should Jupiter stand directly in the path of Planet X during a
passage, this would cause a perturbation on other planets that
would temporarily change their paths, but they would both resume
essentially the same orbit or path after the encounter. The speed of
Planet X ensures this, as does the significant mass of both these
planets. Were Planet X to encounter a smaller object, such as occurred
in the Asteroid Belt in the past, it would either be treated like a
meteor or if large enough to engage the Repulsion Force of gravity,
become a moon satellite of Planet X as many objects have. The pelting to
pieces that occurred in the Asteroid Belt was due to collisions of
objects not of significant size to invoke the Repulsion Force. Small
planets, passing close to Planet X during its high-speed passage, might
become a satellite moon, or be pelted to pieces by one of Planet X's
trailing moons, though this has by chance not occurred except in the
heavily crowded Asteroid Belt, which contained some 24 planets and
various moons of same prior to the past passages.
- Solar Wind
- The effect on Planet X is, as with meteors entering your atmosphere,
peripheral, so that the outer edges of the atmosphere are altered,
peeled off in the worst case, and need to be rebuilt from the oceans
that cover most of Planet X. This same atmosphere rebuilding occurs
after the passage on Earth, from its oceans, as we have described.
Temporarily, the clouds are lower on Earth, but the adjustment is
remarkably quick, so that survivors are unaware of anything other than a
lower cloud cover during the first few months.
- Light Speed Limitations
- In the dozen or so years prior to a passage, Planet X speeds up from
almost a standstill to a zoom, toward the foci it is approaching.
Imagine the Earth without atmosphere, and a rock some miles overhead.
What is the speed limit on this rock as it plummets? There is no
limit in space, only that which mankind assumes. During math discussions
on sci.astro, it has been surmised that the speed of Planet X approaches
the speed of light during its most rapid approach, and this astonishes
those in the discussion. Why is it assumed that light is the fastest
thing in the universe, re travel? Man thinks this because it is
something he can measure. He is aware of such a small percentage of
matter and energy about him that to say that he comprehends 1% of what
the universe is composed of would be an overstatement. Our space travel,
in 4th Density and even 3rd Density, is faster than light, and we do not
melt. Man does not understand, so we cannot give him satisfaction in our
explanations. Suffice it to say that our explanation is correct,
and Planet X travels rapidly into our midst, thence the Repulsion Force
is invoked, thence it floats past between the Earth and Sun.
All rights reserved: [email protected] | 0.89118 | 3.657407 |
Space Telescopes / Keyhole
Keyhole satellites have always been of special interest because of their proposed similarity to the Hubble Space Telescope (HST). The big difference is that Keyholes are military satellites used for Earth observation instead of space research. For the astrophotographer, the interesting fact is that the orbits of Keyhole satellites are highly elliptical, which means they can sometimes be observed during very low passes above the surface, increasing the chance to capture more detail. But as stated on the page 'Satellites in their last Orbits', it is important to realize that due to the high angular speed of objects passing in low orbits, it is technically a bigger challenge to ensure that images of these objects are free of motion blur. This is especially important when objects are tracked fully manually, the technique that I use in all cases.
The Keyhole satellites are a family of reconnaissance satellites launched by the US National Reconnaissance Office (NRO) since the mid-seventies. They are known under different code names, but often named Evolved Enhanced CRYSTAL (EEC). Observers recognize them mostly as KH-11 KENNAN satellites. The later versions are newer types often classified as KH-12, Advanced KENNAN or Improved Crystal. It is often said that the Keyhole satellites are similar to the Hubble Space Telescope. According to sources they were shipped in similar containers, and a NASA history stated that the Hubble used military Keyhole technology to minimize costs. In 2011, NRO offered NASA two surplus Keyhole satellites. On that occasion, NASA engineers were able to inspect the KH hardware. It was reported that the focal length of the Keyholes was shorter than that of the Hubble. The shorter telescopes enabled the military to oversee larger areas on Earth. This gain of the Keyholes over the astronomical Hubble was interesting for NASA, as it could assist research in different astronomical fields. In 2010, I managed to photograph some of the Keyhole satellites on rare favorable occasions.
The image below shows the USA-129 satellite captured on September 4, 2010 from a range of 336 kilometers, taken during a pass not far from the lowest point of its orbit, even below the orbit of the International Space Station. The image is presented in both positive and negative. The model on the right features a proposed representation of the telescope's main shape, without the solar panels.
On the left we see the actual image taken with a 10-inch aperture telescope; on the right, one of the proposed models developed by insiders. There seem to be incredible similarities. Note especially the thicker bright part of the telescope tube below. Some segments in the tube also appear to be visible. Interesting are some elements mounted on the satellite that could be satellite dishes or solar panels. The biggest difference from the model is the big structure on the right side of the tube, which is most likely a solar panel. Although the smaller structure on the left side of the tube suggests something like a dish antenna, it is probably the other solar panel. Other images taken a day later with the same setup under different circumstances seem to confirm the second solar panel. This and other images of USA-129 also suggest an aperture door like the Hubble's. This is the structure visible at the top of the image, but it could also be some other element that we do not know.
These are the first known detail images of USA-245, at the time of writing, the most recent Keyhole spacecraft in the program captured with a 25 cm telescope on August 5, 2015 near perigee from a distance of 289 kilometers while it was 277 kilometers above the ground. At the time of imaging, the satellite had been in space for almost 2 years. In these images, different elements such as the telescope tube and the thick compartment for the instrument bay can be clearly seen. The thicker instrument unit appears bright in nearly all ground based images of Keyhole satellites. Also, there is a hint of solar panels. USA-245 or NROL-65 was launched from Vandenberg Air Force Base at 18:03 UTC (11:03 local time) on 28 August 2013.
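As a rough plausibility check (not from the article), the Rayleigh diffraction limit suggests what level of detail such an observation can achieve: a 25 cm aperture at visible wavelengths resolves features of order a metre at this satellite's 289 km range. The 550 nm wavelength is my assumption:

```python
def resolution_m(aperture_m, range_km, wavelength_nm=550):
    # Rayleigh criterion: smallest resolvable angle ~ 1.22 * lambda / D (radians)
    theta = 1.22 * wavelength_nm * 1e-9 / aperture_m
    return theta * range_km * 1000  # projected size at the satellite's range

print(round(resolution_m(0.25, 289), 2))  # ~0.78 m detail at 289 km
```

Sub-metre resolution is just enough to distinguish the tube, the instrument bay, and the panels on a school-bus-sized spacecraft, consistent with what the images show.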
Comparing these USA-245 observations with those of USA-129 earlier on this page, shows many similarities in visible detail.
We can conclude that Keyhole satellites show in all these observations at least clearly a telescope tube, consisting of an longer thinner part and a shorter thicker part. The thicker part is the telescope main mirror housing that shows up bright in all images, indicating that this is a highly reflective element. The thicker part seems clearly flanked by 2 elements that appear to be solar panels but their visibility depends on observing conditions like angle and illumination.
Processings of USA-245 images on August 5, 2015. The different parts of the telescope tube are well recognizable
Processing of one of the USA-245 frames taken on August 5, 2015 that clearly shows an indication of visible solar panels or other types of panels
Below: Unprocessed raw frames of the USA-245 imaging session on August 5, 2015 | 0.832578 | 3.271315 |
Astronomers just discovered two massive bubbles in a faraway galaxy, NGC 3079. But they're not like the bubbles we see here on Earth when you blow air through a straw into a glass of water; rather, these bubbles are made of highly charged particles and gases trapped together by some cataclysmic cosmic event.
NGC 3079 is located 67 million light years from Earth and has been the subject of much research by astronomers. The two bubbles have been measured to be roughly the same size as one another, one being 4,900 light years in diameter and the other 3,600 light years. For comparison, our Sun's influence reaches out only about 2 light years, which makes these superbubbles roughly 2,000 times larger than our solar system.
Using the Chandra X-ray Observatory, astronomers found that right around the rims of these cosmic superbubbles there is a natural particle accelerator: particles are being accelerated to great speeds, reaching energies about 100 times greater than what the Large Hadron Collider (LHC) here on Earth is capable of producing.
Something even more interesting is that these superbubbles are not a rarity in our universe; in fact, our very own galaxy has superbubbles too, called Fermi Bubbles. That's right, the Milky Way galaxy has bubbles as well, first detected in 2010.
Astronomers believe these superbubbles can form in two different ways. The first is through the fall of matter into a supermassive black hole, such as the one found at the center of our galaxy. When this happens, highly charged particles and radiation are released back into space. This can drive strong winds and energy outflows that disrupt other matter in the galaxy, and the bubbles may form. The second way is through very strong stellar winds from hot young stars. These winds are not only fast and powerful but also carry excited electrons and charged particles that can interact with nearby matter, possibly causing the formation of these cosmic bubbles.
Knowing that these bubbles accelerate particles to immense speeds, scientists believe they may be tied to something known as Ultra-High Energy Cosmic Rays (UHECRs), which have been recorded all throughout our universe and in fact are not rays at all but highly charged particles.
Might these cosmic particle accelerators give us the answers that scientists have been looking for through the experiments being done in Geneva, Switzerland (where the LHC is located)? Could these superbubbles be what produces the UHECRs that we see throughout the universe? And can we find a way to predict when these energetic events will occur and harvest their energy for interstellar travel?
A team of astronomers shocked the world back in 2016 when they revealed evidence of an Earth-sized exoplanet in the habitable zone of our nearest stellar neighbor, a star called Proxima Centauri. Scientists are hunting for a second planet in this system—and maybe, maybe, they’ve found something.
A team of researchers led by Raffaele Gratton at the INAF - Astronomical Observatory of Padua are reporting the results of a search for this second planet, using images obtained by the SPHERE instrument on the Very Large Telescope in Chile. They detected some signal unlikely to be caused by random noise alone, though they’re blunt about their results, as per their abstract: “we did not obtain a clear detection.” They hope that more observations will soon confirm or rule out the signal.
“What we are looking at is essentially a spot,” Gratton explained to Gizmodo. “It’s more or less the same as when you look at a planet of the solar system illuminated by the Sun, it reflects the light of the Sun and we see that. But this object is not near the Sun. It’s near another star, and this makes things much more difficult, because it is very far from us and very close to the star.”
Scientists have long speculated about the possibility of other planets around Proxima Centauri. But this past January, astronomers led by INAF-Astrophysical Observatory of Turin astronomer Mario Damasso first spotted evidence of a second exoplanet. The starlight seemed to contain a periodic signal demonstrating a change in the star’s velocity, possibly from the gravitational influence of a second planet much farther away from the star than the first—around 1.48 AU, or around 1.48 times the average distance between Earth and the Sun. Damasso joined Gratton’s team, and the group dug into images taken by the Very Large Telescope during a four-year exoplanet survey called SHINE.
The analysis attempted to separate what could be a signal from the noise of background stars. And there was some evidence of a signal—certainly not enough evidence to be a detection, but a speck of light whose behavior seemed unlikely to have come from noise alone. If the speck were a planet, it would be 7.2 or 8.6 times the mass of Earth, depending on how they interpreted the data. They even speculated, based on their calculations, that such a planet could have had a system of rings or dust clouds surrounding it, according to the paper to be published in Astronomy and Astrophysics, which we noticed thanks to a tweet by Lee Billings.
Alycia Weinberger, a Carnegie Institution of Washington astronomer who was not part of the study, told Gizmodo it isn’t yet time to get too excited. She called the paper a valiant effort but said she had some reservations about how the team calculated their signal-to-noise ratios. There are also still potential background sources in some of the data that the team used for comparison.
Meanwhile, both Weinberger and Meredith MacGregor, assistant professor at the University of Colorado Boulder who was not involved in the study, pointed out that the analysis relies on evidence of the presence of a dust disk around Proxima Centauri. However, more recent observations from the ALMA observatory in Chile failed to find any evidence of that disk. Therefore, MacGregor told Gizmodo in an email, she was very skeptical of the result.
Guillem Anglada-Escudé, another astronomer not involved in the study who led the team that discovered the first planet around Proxima, told Gizmodo that he liked the study but stressed the preliminary nature of the work. He was excited about the potential for follow-up observation. “These were survey images that were not necessarily optimized for this target in particular,” he told Gizmodo. In principle, if observatories resume operations, a targeted observation could confirm or refute the presence of the planet.
Gratton stressed that this is not an announcement of a discovery; it’s just the outcome of an analysis that neither confirms nor rules out the existence of another planet. But with more analysis, we may have an answer soon.
Optical Calibration Target
The target plate is a flat rectangle of known color and brightness, fixed to the spacecraft so the instruments on the movable scan platform (cameras, infrared instrument, etc.) can point to a predictable target for calibration purposes.
Photopolarimeter Subsystem (PPS)
The Photopolarimeter Subsystem uses a 0.2 m telescope fitted with filters and polarization analyzers. It covers eight wavelengths in the region between 235 nm and 750 nm (2350–7500 Å). The experiment is designed to determine the physical properties of particulate matter in the atmospheres of Jupiter and Saturn and in the rings of Saturn by measuring the intensity and linear polarization of scattered sunlight at those eight wavelengths. The experiment will also provide information on the texture and probable composition of the surfaces of the satellites of Jupiter and Saturn, and on the properties of the sodium cloud around Io. During the planetary encounters a search for optical evidence of electrical discharges (lightning) and auroral activity will also be conducted.
Japan green lights first ever mission to sample a Martian moon
JAXA, Japan's space agency, is moving ahead with a first-of-a-kind mission to explore the two moons of the Mars system, Phobos and Deimos. All going to plan, the Martian Moon Exploration (MMX) mission will return to Earth with the first ever samples of a Martian moon by the end of the decade, which scientists hope may offer a few clues about Mars' formation and its watery past.
The Japanese government officially approved the development phase of the MMX mission this week, which means the scientists working on it will now turn their attention to building the hardware and software needed for a pioneering return journey to the Red Planet. The spacecraft will begin with a one-year orbit of Mars, and then turn its attention to its pair of moons, which are of considerable interest to the science community.
This is because their formation is the source of much debate. Phobos and its smaller sibling Deimos have the appearance of asteroids, and may have been collected from the asteroid belt and swung inwards by Mars' gravity. An alternative theory is that they formed as a result of some kind of large impact with Mars. Either way, scientists expect the moons to serve as valuable time capsules full of ancient materials ejected by Mars over billions of years, offering insights into the formation of bodies and water transport within the Martian system.
The Japan Aerospace Exploration Agency's MMX spacecraft will use 11 onboard instruments to study both Phobos and Deimos from orbit, but will then close in on Phobos for landing and sample collection, similar to Japan's Hayabusa2 spacecraft that is currently returning to Earth with a sample collected from the asteroid Ryugu.
Where Hayabusa2 collected a tiny sample of 0.1 g from the surface, the MMX spacecraft has lofty aims of collecting a 10-g (0.35-oz) sample. It will do this using a corer instrument capable of burrowing 2 cm (0.8 in) into the surface, before leaving and carrying the sample back to Earth. If it is successful, it will mark the first ever round-trip to the Martian system.
Another aspect of the mission is the consideration of a future human base in the area. By visiting Mars and its moons, the MMX spacecraft will demonstrate the technology needed to enter and escape Mars' gravitational well, land and traverse the surface of a small, low-gravity body, and deploy scientific instruments. In addition, the mission will assess the radiation it encounters, which remains a key consideration for future Mars missions.
“Humans can realistically explore the surfaces of only a few objects and Phobos and Deimos are on that list,” notes NASA Chief Scientist, Jim Green. “Their position orbiting about Mars may make them a prime target for humans to visit first before reaching the surface of the Red Planet, but that will only be possible after the results of the MMX mission have been completed.”
The MMX mission is slated for launch in 2024 and is expected to arrive at Mars in 2025. With departure scheduled for 2028, all going to plan the spacecraft will return to Earth with its sample in tow in 2029.
Alpha Centauri, the closest star system to Earth at just 4.37 light years away, is thought to have a pretty good chance of possessing a habitable planet. That’s enough to have spurred Stephen Hawking and Russian billionaire Yuri Milner to sponsor a $100 million-plus endeavor to send laser-powered nanocraft into deep space to look for extraterrestrials and other signs of habitability in the region.
That project, called the Breakthrough Starshot initiative, apparently won’t be the only privately backed foray investigating Alpha Centauri. Meet Project Blue: a new venture announced today whose goal is to build and launch a lightweight telescope into Earth orbit by 2019 specifically to observe Alpha Centauri.
What makes Alpha Centauri so special? It’s not simply that it’s close to us; it also possesses the kind of ingredients astronomers look for when assessing the potential for a star system to host habitable worlds. The binary star system is pretty stable, and boasts twice the size of a typical “goldilocks zone” (the region around a star where liquid surface water could exist). Currently, we know of no planets that orbit Alpha Centauri, but some scientists think there is as high as an 85 percent chance that Alpha Centauri possesses an Earth-like planet.
Project Blue, sponsored by a scientific consortium led by the BoldlyGo Institute and Mission Centaur, will look to directly image Alpha Centauri in visible light and look for signs of Earth-like planets in the area between 2019 and 2022. The telescope would be about the size of a refrigerator and have support from NASA’s experts. Direct imaging of any planets would help scientists follow up with data collection related to biosignatures like carbon dioxide and oxygen.
The big question, of course, is how much this will cost. And right now, there is no clear answer. BoldlyGo Institute CEO Jon Morse told Space.com, “We will have, hopefully, some further announcements that we’re not quite ready to talk about.” He said funding could range between $10 million and $50 million over the entire mission’s lifetime, a broad range, but about 33 percent of the cost of a typical NASA mission with similar investigative goals. The consortium expects to raise money from sponsorships, community involvement, and other unspecified methods.
An easier way of getting funding would be to go straight through NASA and submit a mission proposal. That’s exactly the route NASA scientists Ruslan Belikov and Eduardo Bendek have taken with their Alpha Centauri Exoplanet Satellite (ACESat) design. Even before Breakthrough Starshot, the pair were pushing forward the idea of putting together a space telescope just to study Alpha Centauri.
Unfortunately, while it would be a lot easier to fund ACESat through NASA, there is still a 15 percent chance that there is nothing worth looking at in the goldilocks zones of either Alpha Centauri star. That’s a 15 percent chance of failure, something that gives the risk-averse NASA great pause in greenlighting ACESat.
Project Blue would essentially bypass those concerns — yet it remains to be seen whether they will find enough money to meet the 2019 deadline.
There’s also, of course, the question of whether Project Blue could search for signs of life on Proxima b, the potentially habitable world orbiting the nearby star Proxima Centauri (4.22 light-years away). The answer is no. Proxima Centauri is a small and fairly dim red dwarf, and its planet orbits far too close to the star for the Project Blue telescope, ACESat, or most other low-cost instruments to properly study it.
Nevertheless, if Project Blue or another telescope can be successfully built and launched, we might finally have a shot at determining whether Earth 2.0 resides just a stone’s throw away.
In the first half, guest host Jimmy Church (email) welcomed theoretical archaeo-astronomer Walter Cruttenden, who discussed the recent scientific discoveries of a possible new planet as well as "gravity waves." Ten years ago, Cruttenden presented the theory that, based on the effects on the outer objects in the solar system, as well as the strangely inclined and elliptical orbit of Pluto, that there had to be another large gravitational attractor besides the sun affecting the solar system. "There's multiple items that point to a large object out there," he said, and added that the theory of another large, but unseen object beyond Pluto was even being discussed just after the time of its discovery in 1930.
Cruttenden spoke extensively about Mike Brown, the astronomer leading the research team at Caltech who originally led the effort to demote Pluto to "dwarf planet" status, and who has since tried to locate a suspected giant planet in the outer solar system. As more objects were discovered by Brown and his team, they noticed that they were lining up in an unusual way that could not be explained by the gravitational influence of the Sun or any of the outer planets. Since the existence of a new ninth planet has been proposed, Cruttenden noted that there are probably "a thousand different astronomers" now searching for it. Cruttenden noted that the object may be covered in dark material so that it would be very difficult for space telescopes such as Hubble or Kepler to observe.
He also discussed the possibility that the object may be a small, very old star that is locked in a binary system with the Sun. "I think stars like companions as much as people do," he joked. In addition to the two celestial motions (Earth's rotation and orbit around the Sun) which cause day and night and the seasons, Cruttenden proposed that the discovery of a third motion (around a distant binary star) may lead to a new way of looking at the history of the planet. There are hints in ancient literature about a cosmic 10- to 20,000 year cycle, which is the proposed orbital period for the unknown planet or star, he added.
The importance of the recent discovery of "gravity waves" was also discussed. Cruttenden, who said that it "gives us a new way to look at the universe," explained that up to now we have been looking at the stars by visible light, radio waves, and other forms of electromagnetic radiation, and with some excitement, noted that we may now be able to "see things we didn't know existed" by observing the effects of these waves.
During the second half, Jimmy and listener Don discussed the possibility that our reality is a simulation in a larger universe. Jimmy then proposed that an alien civilization may have uploaded their consciousness into a kind of computer software and then "downloaded" it into a biological entity when they found a suitable planet to explore, such as Earth. Later, Carl asked whether Einstein's theories were correct or viable, and "DC" from the Bronx checked in to express her concern over the rapid and unchecked development of artificial intelligence.
Bumper music from Friday February 12, 2016
Midnight Express (The Chase)
Back on the Chain Gang
Breaking the Law
I Fought The Law
Folsom Prison Blues
Breaking The Chains
Take the Money and Run
Steve Miller Band
Band on the Run
Paul McCartney & Wings
Is there definitive evidence for an expanding universe?
Evolution out of the ‘Dark Ages’
Published: 19 August 2014 (GMT+10)
Expansion of the universe is fundamental to the big bang cosmology. No expansion means no big bang. By projecting cosmological expansion backwards in time, they assert, one will, hypothetically, come to a time where all points are the same. Since these points are all there is, then it logically follows that there is no space or time ‘before’ this moment. It is the singularity, and we cannot use language couched in concepts of time when no time (or space) exists.
Yet there are Christians who use this presumed fact as evidence in support of the Genesis 1 account and even for the existence of God Himself. They argue that only God could have started the big bang. Though it is true the universe does need a first cause, it is an enormous leap into the unknown to suppose that the big bang story is that which is described in the Genesis 1 narrative. The sequence of events is nothing like it. See The big bang is not a Reason to Believe!
At the end of the 1920s, Edwin Hubble made a significant discovery. He found a proportionality between the amount by which the spectral lines in the light coming from relatively nearby galaxies are redshifted1 (z) and their distances (r) from Earth. That relationship is now called the Hubble Law c z = H0 r, where c is the speed of light and H0 is Hubble’s famous constant of proportionality.
The Hubble Law has since been extended to very great redshifts (therefore by inference, distances) in the cosmos, via the redshift-distance relationship. At small redshifts, and by interpretation at small distances, this becomes precisely the Hubble Law.
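For illustration, the low-redshift linear form of the Hubble Law above can be turned into a distance estimate. This is a sketch only: the value H0 = 70 km/s/Mpc is an assumed, commonly quoted figure, not one taken from this article.

```python
# Sketch: distance implied by the low-redshift Hubble Law, cz = H0 * r.
# H0 = 70 km/s/Mpc is an assumed value, used here purely for illustration.

C_KM_S = 299_792.458   # speed of light in km/s
H0 = 70.0              # Hubble constant in km/s/Mpc (assumed)

def hubble_distance_mpc(z):
    """Distance r in megaparsecs implied by redshift z via cz = H0 * r."""
    return C_KM_S * z / H0

# A low-redshift spiral at z = 0.02 (like the one mentioned later on this
# page) comes out at roughly 86 Mpc under this assumed H0:
r = hubble_distance_mpc(0.02)
```

Note that this only converts a redshift into a distance under the Hubble Law; as the article stresses, it says nothing by itself about the physical mechanism producing the redshift.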
Redshifts have been interpreted as a velocity of recession, i.e. that galaxies are moving through space. And that the recession implies expansion of the universe. But Hubble, up to the time of his death, was not so convinced of this interpretation. He was open to the possibility that there could be another mechanism to explain redshifts.
In 1935 Hubble wrote:2
“… the possibility that red-shift may be due to some other cause, connected with the long time or distance involved in the passage of the light from the nebula to observer, should not be prematurely neglected.” [emphasis added]
Yet the ‘true’ cosmologists nowadays ‘know’ that redshifts mean that the galaxies are essentially stationary in space and they are dragged apart as the universe expands. This is called cosmological expansion. But whether or not motion of galaxies through space or expansion of space is the correct interpretation, is there any strong evidence for expansion of the universe, of any kind?
To test expansion
Once the distance to a target galaxy exceeds several million light years, methods of measuring distance in the universe, apart from the Hubble Law, become extremely problematic. Generally at large redshifts the redshift-distance relationship, the large-scale extension of the Hubble Law, is used, so that redshift then is a proxy for distance. However, it may well be true that the Hubble Law applies, as a method of determining distance, but that the mechanism for generating the redshifts is, as yet, unknown.3 In other words it may not be the result of expansion of the universe, yet it may still give us a measure of cosmic distance back to the source galaxies.
It should be reiterated that the Hubble Law itself, though derivable from General Relativity, is not sufficient grounds to conclude that redshifts are a reliable proxy for distance in the universe. One counter-example where an astronomical object with a very large redshift (z ~ 2) is seen ejected towards us out of the nucleus of a relatively low redshift spiral galaxy (z ~ 0.02) is sufficient to prove that the Hubble Law as a method of determining distance is not so robust. High redshift should always mean great distance if the Hubble Law is true, hence this counter-example calls into doubt the notion of cosmological expansion. See Big-bang-defying giant of astronomy passes away.4
The way to test the idea of the expanding universe is to look for a parameter that would be different as a function of distance, and hence a function of historically elapsed time, in an expanding universe as compared to a static one. One such parameter is the angular size of galaxies; another is the surface brightness of galaxies. Angular size is not easy to test because you would first need to establish a standard size galaxy that you observe at different redshifts, but surface brightness is somewhat easier to test for.
In these tests you assume redshift is a proxy for distance. You don’t need to know why. But if the test for expansion fails, it must then lead to the conclusion that either the universe is not expanding, i.e. it is static, or, redshifts cannot be used as a proxy for distance. These types of tests have been performed and I have published a summary of those results in Does observational evidence indicate the universe is expanding?—part 2: the case against expansion. (See also the table at the end of the article).
Hubble Ultra-Deep Field
“Universe is Not Expanding After All, Scientists Say” was the headline of one online news site.5 This was in relation to a peer-reviewed paper6 published on a study of the surface brightness of about a thousand galaxies as a function of redshift. The method, first proposed by Tolman in 1935,2 is independent of any particular cosmological model of the structure or history of the universe. It only relies on the fact that if the universe is expanding, and therefore more distant galaxies are at greater redshifts, then their surface brightness would be expected to be much lower than in a static universe. The assumption was made that redshift is a measure of distance in both the expanding and static universe, applying a simple Hubble Law for that relationship in the static universe case, but without a mechanism why the Hubble Law would hold.
Strong agreement with a static universe was found for galaxies at extremely high redshifts, out to z ~ 5 in the Hubble Ultra-Deep Field. The Hubble Ultra-Deep Field (HUDF) is a survey in which the Hubble Space Telescope and some Earth-based telescopes observed galaxies out to the limit of the visible universe, hence at very high redshifts. Though this study by itself is not definitive, the evidence was found to be inconsistent with an expanding universe: surface brightness was independent of redshift, which favours a static universe.
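The prediction at the heart of the Tolman test can be sketched numerically. The (1+z)^-4 dimming of bolometric surface brightness in an expanding universe is the standard textbook result; the code below is an illustration of that result, not something taken from the paper under discussion.

```python
# Tolman test sketch: in an expanding universe, the bolometric surface
# brightness of identical galaxies falls as (1+z)^-4, whereas in a
# static Euclidean universe it is independent of distance.

def expanding_dimming_factor(z):
    """Factor by which bolometric surface brightness is reduced at
    redshift z in an expanding universe, relative to z = 0."""
    return (1.0 + z) ** 4

# At z = 5, the limit of the HUDF sample discussed above, an expanding
# universe predicts dimming by a factor of (1 + 5)**4 = 1296, while a
# static universe predicts no dimming at all.
factor = expanding_dimming_factor(5.0)
```

The steep fourth-power dependence is what makes the test attractive: the two models diverge enormously at high redshift, so even rough surface-brightness measurements can in principle discriminate between them.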
But surely the best evidence for expansion of the universe is simply to look at many galaxies at successive redshifts, i.e. at z = 10, 9, 8, 7, 6, etc.? Since redshift is meant to represent different past epochs or ‘time steps’, a change in the number density of galaxies with redshift should show evidence of expansion over time.
By counting the number of galaxies at each redshift one should observe the density of galaxies to decrease as the universe expands. This means one would expect to see a systematic trend of lower density or concentration of galaxies at lower redshifts. Going from higher to lower redshift in an expanding universe implies going from an earlier time period to a more recent time period.
Figure 1 illustrates the presumed cosmic history of the universe according to the standard big bang cosmology. Redshift is shown across the top and the corresponding time, counting backwards from the alleged big bang, supposedly 13.7 billion years ago, is shown across the bottom axis. ‘Recombination’ is the name given to the hypothetical period when atoms condensed out of the hot-yet-cooling plasma from the big bang fireball. After that there supposedly ensued a period of cosmic ‘Dark Ages’ before the first stars and hence galaxies formed. Following that was the alleged period called ‘Reionization’ when neutral hydrogen atoms in the intergalactic medium were ionized, by starlight as stars and galaxies began to form, and hence the intergalactic medium became transparent to electromagnetic radiation (i.e. it was no longer in darkness). This means that today, we should be able to detect emissions from those ionized atoms.
It is now claimed from studies of HUDF galaxies that we observe galaxy formation at the beginning of the era of Reionization. The latter is supposed to have occurred in the period between z = 12 and z = 8 as indicated in Figure 1 (between dotted lines labelled Hubble 2012 and Hubble 2009). Prior to this Reionization period all radiation at wavelengths consistent with the states of neutral hydrogen was absorbed and hence it is labelled the cosmic ‘Dark Ages’. These atoms, according to the theory, absorbed rather than emitted light, hence the label.
After the Dark Ages the ensuing galaxy formation process is described:
“Current models for galaxy formation follow the picture in which dark matter halos form by collisionless collapse, after which baryons fall into these potential wells, are heated to virial temperature, and then cool and condense at the centers of the halos to form galaxies as we know them. In short, baryons fall into the gravitational potentials of ‘halos’ of dark matter at the same time that those halos grow in size, hierarchically aggregating small clumps into larger ones.”7 [emphasis added]
The authors here, writing on the alleged early history of the big bang universe, write as if they have definitive knowledge of dark matter providing the necessary gravitational energy to collapse the hydrogen gas into stars and galaxies. Understand please that dark matter here is essential to overcome the problem of naturalistic galaxy formation. No dark matter, no galaxy formation! More on that later but this is how the story goes.
Look at these epochs or redshifts and you will see a growth of structure so a progressive increase in density of galaxies from a redshift of z = 12 towards z = 8. Remember, decreasing redshift (z) implies the forward arrow of time from the alleged big bang towards the present. Then after z = 8 you should see a decrease in density due to the expansion of the universe. All the while the universe was supposedly expanding but the growth of structure (i.e. size and numbers of galaxies per unit volume) in the period of redshift z = 12 to z = 8 outweighs the dissipating effect of the expansion.
This is illustrated in Figure 2, assembled from 10 years of observational data from the Hubble Space Telescope, called the Hubble eXtreme Deep Field (XDF). There you see an increase in density from the more distant hence ‘more than 9 billion years’ frame to the middle frame labelled ‘5 billion to 9 billion years’ and then a decrease in density towards the closest or more recent frame labelled ‘less than 5 billion years’.
This is what one new study claims.8 In the XDF data they identified 7 galaxies at redshift z ~ 9, 1 galaxy at z ~ 10 and 1 galaxy at z ~ 11. They studied ultra-violet (UV) light emission, which is assumed to indicate star formation (hence growth of structure or ‘evolution’) and concluded,
“ … an accelerated evolution beyond z ~ 8, and signify a very rapid build-up of galaxies with MUV < −17.7 mag within only ~200 Myr from z ~ 10 to z ~ 8, in the heart of cosmic reionization.”
This means that, on extremely scant evidence, it was concluded that there was an accelerated evolution of galaxy size, based on the few bright galaxies observed between redshifts z ~ 8 and z ~ 10 and their UV light emissions. But this is not even the real issue.
The real issue is that ‘evolution’ is the ‘catch all’ used to explain everything. By adjusting the evolution rate of accumulation of size and galaxy density one can adjust the model to fit the data—to fit any data. Put it this way: if the expansion rate of the universe appears to be too slow, and there is a faster build-up of galaxies with decreasing redshift than expected, one simply adjusts the evolution rate to compensate. Just turn the evolution ‘knob’ by the appropriate amount!
But if the dark matter, which does not interact with any normal matter, was not present in the first place no evolution could occur, since no galaxies would grow and the expanding universe model would be in serious trouble, because there would be no galaxies in their universe.
Is the evidence really consistent with an expanding universe or not? Well, it is equivocal.9,10 What remains is a ‘good’ story based on observations that rely on unprovable assumptions, that are either consistent with an expanding universe or in conflict with the idea. But when in conflict with that story the ‘knobs are turned’ in the standard big bang model such that it can be made to fit any evidence. Evolution of galaxy size is used to counter those apparently contradictory lines of evidence. This means that whatever observations are proffered, one way or another, an alternate explanation can always be found. Unfortunately this is the very nature of big bang cosmology. No wonder a substantial and growing number of even secular physicists and cosmologists are frustrated by what they say is its totally unwarranted stranglehold on thinking—even extending to denial of publication/funding of alternative notions, regardless of quality. See Secular scientists blast the big bang.
Comment on this article (added by John Hartnett 8 September 2014)
My (non-biblical-creationist) friend Hilton Ratcliffe, a South African astronomer and author, posted the following on FaceBook while sharing the link to a mirror of this article on my own site. The following (published with his consent) highlights the true nature of the battle. It’s not science but philosophy and ideology.
References and notes
- i.e. shifted towards the red end of the spectrum. Return to text.
- Hubble, E. and Tolman, R.C., Two methods of investigating the nature of nebular red-shift, Astrophys. J. 82:302–307, 1935. Return to text.
- Marmet, L. On the Interpretation of Red-Shifts: A Quantitative Comparison of Red-Shift Mechanisms, marmet.org. Return to text.
- See also Hartnett, J.G., Universe: Expanding or static?, biblescienceforum.com. Return to text.
- Universe is Not Expanding After All, Controversial Study Suggests, 23 May 2014, sci-news.com. Return to text.
- Lerner, E.J., Falomo, R., and Scarpa, R., UV surface brightness of galaxies from the local universe to z ~ 5, Int. J. Mod. Phys. D, DOI: 10.1142/S0218271814500588, 2014; available at arxiv.org. Return to text.
- Ratra, B., and Vogeley, M.S., The Beginning and Evolution of the Universe, Pub. Astron. Soc. Pac. 120(865):235–265, 2008. Return to text.
- Oesch, P.A., et al., Probing the dawn of galaxies at z ∼ 9–12: New constraints from HUDF12/XDF and CANDELS data, Astrophys. J. 773:75, 2013. Return to text.
- Hartnett, J. G., Does observational evidence indicate the universe is expanding? part 1: the case for time dilation, J. Creation 25(3):109–114, December 2011; creation.com/expanding-universe-1. Return to text.
- Hartnett, J. G., Does observational evidence indicate the universe is expanding? part 2: the case against expansion, J. Creation 25(3):115–120, December 2011; creation.com/expanding-universe-2. Return to text. | 0.856016 | 3.316468 |
The Juno probe successfully entered orbit around Jupiter on July 4th, 2016, after a five-year journey. The spacecraft reached Jupiter with no hiccups along the way to the largest planet in the Solar System, and will enable humanity to see what has never been seen of this complex region, collecting data which could greatly inform scientific theory. The key focus of Juno is to understand the evolution and origins of Jupiter, and to peer behind its dense veil of clouds. This mission has been a hallmark of success, efficiency, and the use of increasingly amazing technology.
The mission overview for the Juno probe is multifaceted, and a continued investment in awareness of the varied complexity of our corner of the Milky Way. The four key mission objectives of Juno are to determine how much water is in Jupiter's atmosphere; map the composition, temperature, and cloud motions of the deep atmosphere; chart the planet's magnetic and gravity fields to reveal its internal structure; and explore the magnetosphere near the poles, including the auroras.
This presents a fantastic challenge balanced with incredible opportunity, toward which the NASA program has been working since the mission's inception. Jupiter is magnificently large: “It has an equatorial radius of 71,492 km, which is 11.2 times larger than Earth's...In fact, it accounts for 2.5 times as much mass as all of the other planets in the Solar System” (Coffey). Preliminary study of the environment and context of Jupiter discovered that the planet is constantly feeding off of its moons to create a little-known ring system. The rings are so faint that it wasn't until the flyby of Voyager 1 that they were discovered. The “Main” ring is about 7,000 km wide and has an outer boundary 129,130 km from the center of the planet (Coffey). No doubt, now that Juno is in place armed with a clear camera and spectral array, humanity will begin to get a much clearer picture of this diverse planet and its surrounding space.
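Those quoted figures can be sanity-checked with a few lines of arithmetic. The sketch below is illustrative only; Earth's equatorial radius (6,378 km) is an assumption supplied here, not a value given in the text.

```python
# Back-of-the-envelope check of the size comparison quoted above (Coffey).
JUPITER_RADIUS_KM = 71_492
EARTH_RADIUS_KM = 6_378   # assumed equatorial radius of Earth

radius_ratio = JUPITER_RADIUS_KM / EARTH_RADIUS_KM
# Treating both planets as spheres, volume scales as the cube of the radius.
volume_ratio = radius_ratio ** 3

print(f"Radius ratio: {radius_ratio:.1f}")                 # ~11.2, as quoted
print(f"Earth volumes inside Jupiter: {volume_ratio:.0f}") # ~1,400
```

The cube of the quoted 11.2 radius ratio is roughly 1,400, which is the familiar "over a thousand Earths would fit inside Jupiter" comparison.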
Juno has been specially equipped to probe beneath the dense cloud cover which hides the mysteries of this giant. This fundamental purpose is what gave Juno its name, taken from Roman and Greek mythology. In this pantheon, the god Jupiter purposely drew a veil of clouds around himself to hide his mischievous deeds. Jupiter's wife, the goddess Juno, was the only one able to see through the veil of clouds and reveal Jupiter's true nature (NASA). The name helps keep the mission objective in perspective.
Theories of solar system formation still have a lot of holes, which scientists hope to fill with data collection and analysis. Scientists believe, “Underneath its dense cloud cover, Jupiter safeguards secrets to the fundamental processes and conditions that governed our solar system during its formation” (NASA). Understanding the evolutionary path of any body in the solar system can go a long way towards filling in those gaps. As such, theories about solar system formation all begin with the collapse of a giant cloud of gas and dust, or nebula, most of which formed the infant sun. Like the sun, Jupiter is mostly hydrogen and helium, so it must have formed early, capturing most of the material left after our star came to be. (NASA)
Currently, scientists do not understand how this occurred. Questions abound, and only more specific data can help point analysts in the right direction. Among the questions Juno hopes to help answer: “Did a massive planetary core form first and gravitationally capture all that gas, or did an unstable region collapse inside the nebula, triggering the planet's formation? Differences between these scenarios are profound” (NASA). After the journey to the planet, Juno entered orbit following a 35-minute engine burn that slowed the probe. This occurred at 8:53 p.m. PDT (11:53 p.m. EDT) Earth time on Monday, July 4th, 2016. A fortuitous day: ‘Independence Day always is something to celebrate, but today we can add to America’s birthday another reason to cheer – Juno is at Jupiter,’ said NASA Administrator Charlie Bolden. ‘And what is more American than a NASA mission going boldly where no spacecraft has gone before? With Juno, we will investigate the unknowns of Jupiter’s massive radiation belts to delve deep into not only the planet’s interior, but into how Jupiter was born and how our entire solar system evolved.’ (NASA)
The flight plan was right on target; completing this first part of the mission required only small planned adjustments: shifting Juno's attitude to point the main engine in the correct direction, then increasing Juno's rotation rate from 2 to 5 revolutions per minute (RPM) to help stabilize the spacecraft (NASA). These adjustments were needed because of the incredible gravity of Jupiter, which greatly affected Juno. Rick Nybakken, Juno project manager from NASA's Jet Propulsion Laboratory, explains, "as planned, we are deep in the gravity well of Jupiter and accelerating. Even after we begin firing our rocket motor, Jupiter will continue to pull us, making us go faster and faster until we reach the time of closest approach. The trick is, by the end of our burn, we will slow down just enough to get into the orbit we want." (NASA)
The orbital capture maneuver, called Jupiter orbit insertion (JOI), was the final aspect of the Juno cruise which required special attention and supreme technical precision. Rick Nybakken, Juno project manager from JPL, reported, “The spacecraft worked perfectly, which is always nice when you’re driving a vehicle with 1.7 billion miles on the odometer” (NASA). Progress is being made at phenomenal rates in all areas of human technology and application. During the time between launching Juno and its arrival, the team figured out how to boost productivity. Bolton shares, “official science collection phase begins in October, but we’ve figured out a way to collect data a lot earlier than that. Which when you’re talking about the single biggest planetary body in the solar system is a really good thing” (NASA). It can only be suspected that this is the first of many fine-tunings and applications of space technology which will reap incalculable rewards.
The Juno mission was launched from Cape Canaveral Air Force Station in Florida in August 2011. Juno is an integral part of NASA’s New Frontiers Program, which is managed at NASA's Marshall Space Flight Center in Huntsville, Alabama, part of the agency’s Science Mission Directorate. The Juno spacecraft cost $1.1 billion and represents the finest space tech available (Kofsky). This is a relatively short mission plan, as “Juno's mission is planned to last for a much shorter period, as it is currently being targeted to impact Jupiter in February 2018” (Howell). Scientists involved believe that this short time will be enough to gather the data they need for a better understanding of the planet, and Juno will likely be the first of many specialized probes. Juno is well prepared for its journey with the finest instrumentation (Chang).
The first of many firsts: Juno captured video footage of the four Galilean moons (Callisto, Europa, Ganymede and Io) orbiting Jupiter. This greatly excited the scientists who had been anticipating it. “In all of history, we've really never been able to see the motion of any heavenly body against another,” Juno principal investigator Scott Bolton said Monday night during a news conference at NASA's Jet Propulsion Laboratory after Juno’s successful arrival (Wall). Scientists call Jupiter the king of the solar system, and to be able to see the king’s court after so long is very fulfilling. Much mystery is about to be uncovered through Juno’s efforts.
Currently the Juno probe is skimming the cloudy atmosphere of Jupiter, and is ready to begin transmitting close-up views and data of the giant. The first images have already revealed surprises, as “Jupiter's second-largest moon, Callisto, appeared dimmer than initially thought” (Kofsky). Unlike past probes, Juno will take a series of risky dives beneath Jupiter’s intense radiation belts, where it will study the gas giant from as close as 2,600 miles above the planet's cloud tops. Galileo, the last mission to the gas giant, which ended in 2003, spent most of its mission five times farther away than Juno will get. (Kofsky)
This is the closest flyby of Jupiter yet, which may explain why the mission plan is so short. That distance is highly risky, and may not leave the probe unscathed and able to function optimally. No matter what occurs, the data gained will help specialize future probes. However, close-up shots of the gas giant must be worth it, as “There's also the mystery of its Great Red Spot. Recent observations by the Hubble Space Telescope revealed the centuries-old monster storm in Jupiter's atmosphere is shrinking” (Kofsky). So far the project has been an incredible success. The project organizers are joyous: “Juno sang to us and it was a song of perfection,” JPL project manager Rick Nybakken said during a post-mission briefing. The possibilities for the growth of awareness, and for a stronger sense of how humanity fits into the diverse solar system, may be just around the corner.
The Jupiter probe Juno offers the promise of incredible new views of the great giant, its moons, and its subtle ring system. The space program has become highly specialized, and the technology it employs to harvest data on some of the most complex questions humanity faces is incredibly refined and becoming more so every day. Within the short time Juno spends in Jupiter's orbit, our understanding of the planet, its origins, and its environment will no doubt be considerably advanced. How this data will apply to lingering and pressing questions about the origins of life on Earth remains to be seen, but there is no doubt it will be an amazing show.
Chang, Alicia. “NASA Spacecraft Reaches Jupiter.” U.S. News, 5 July 2016. Retrieved from: http://www.usnews.com/news/news/articles/2016-07-04/nasas-juno-spacecraft-prepares-for-cosmic-date-with-jupiter
Coffey, Jerry. “How many Earths can fit in Jupiter?” Universe Today, 24 Dec. 2015. Retrieved from: http://www.universetoday.com/65365/how-many-earths-can-fit-in-jupiter/
Howell, Elizabeth. “Juno Spacecraft: NASA's New Mission To Jupiter.” Space.com, 5 July 2016. Retrieved from: http://www.space.com/32742-juno-spacecraft.html
Kofsky, Michael. “NASA's Juno poised to begin transmitting close-up views of Jupiter.” USA Today, 5 July 2016. Retrieved from: http://www.usatoday.com/story/tech/2016/07/05/nasas-juno-probe-enters-jupiters-orbit/86697540/
NASA. “Juno.” Nasa.gov, 2016. Retrieved from: https://www.nasa.gov/mission_pages/juno/main/index.html
NASA. “Juno Spacecraft in orbit around mighty Jupiter.” Nasa.gov, 4 July 2016. Retrieved from: https://www.missionjuno.swri.edu/news/juno_spacecraft_in_orbit_around_mighty_jupiter
Wall, Mike. “Jupiter Probe Captures First-Ever View of Moons Moving.” Space.com, 5 July 2016. Retrieved from: http://www.space.com/33350-nasa-juno-jupiter-moons-video.html | 0.896643 | 3.369939 |
Just nine months after its launch, NASA’s Transiting Exoplanet Survey Satellite (TESS) has found at least eight planets, with more than 300 planetary candidates waiting in the wings.
A bizarre planet at least 23 times the mass of Earth, unveiled on 7 January, is among the confirmed planets—some of which have been reported before.
The newly described planet whizzes around its star on a stretched-out orbit once every 36 days, says Xu Chelsea Huang, a TESS scientist at the Massachusetts Institute of Technology (MIT) in Cambridge. Even stranger, there are hints that another planet not much bigger than Earth is orbiting closer to the star.
How a small inner planet stays on that path as a bigger planet lurches on an elliptical orbit around the same star is a mystery. “This is the most extreme system with this type of architecture,” Huang says. “We don’t know how that could form.” The star is known as HD 21749 and lies 16 parsecs (53 light years) from Earth in the constellation Reticulum.
Huang reported the findings at a meeting of the American Astronomical Society in Seattle, Washington.
A catalogue of oddities
TESS’s other discoveries include a super-hot world, LHS 3844 b, that whirls around its star—a red dwarf only 15% the size of the Sun—every 11 hours. Details on another 20 to 30 planets discovered by TESS are on the verge of being published, Huang says.
TESS works better than team members had dared to dream, says George Ricker, a physicist at MIT and the mission’s principal investigator. Its four cameras can see objects 20% fainter, and focus more sharply, than originally expected.
The spacecraft also does more than hunt planets. Mission scientists have studied 101 stars that brightened suddenly, probably because they were exploding supernovae, says Michael Fausnaugh, an astronomer at MIT. Because TESS stares non-stop at one slice of the sky for 27 days, then moves to a neighbouring slice, it captures an unprecedented view of these exploding stars as they brighten and then dim.
“Based on the brightness and shape of that flare, there’s a lot of science that can be done,” Fausnaugh says. For instance, astronomers can scrutinize the way in which the light increases for clues to the type of star that exploded to create a particular flash. TESS discovered six supernovae in just its first month of observing; its predecessor, NASA’s Kepler space telescope, discovered five over the course of four years, Fausnaugh says.
TESS is in the process of scanning the entire southern sky, after which it will turn and canvass the northern sky.
The spacecraft could conceivably keep working for decades, Ricker says. His team is now writing a proposal to NASA asking that TESS’s mission be extended past its initial two years. The deadline for that proposal is 1 February—but the ongoing partial US government shutdown means Ricker isn’t sure how that timing could change.
This article is reproduced with permission and was first published on January 8, 2019. | 0.892668 | 3.833169 |
Venus is often referred to as Earth’s twin planet (evil twin planet is more like it, when you consider the scorching temperatures). It has almost the same size, mass, gravity and overall composition. The composition of Venus is pretty similar to Earth's, with a core of metal, a mantle of liquid rock, and an outer crust of solid rock.
Unfortunately, scientists have no direct knowledge of Venus' composition. Here on Earth, scientists use seismometers to study how seismic waves from earthquakes propagate through the planet. How these waves bounce and turn inside the Earth tells scientists about its composition. Since the surface of Venus is hot enough to melt lead, and no spacecraft has survived on the surface for longer than a few hours, there just isn't comparable information about Venus' internal composition.
Scientists can calculate the density of Venus, though. Since it’s similar to Earth, and the other terrestrial planets, scientists guess that the internal structure of Venus is similar to Earth. One of the big differences between our two planets, however, is the lack of plate tectonics on Venus. For some reason, plate tectonics on Venus shut down billions of years ago. This has prevented the interior of Venus from losing as much heat as the Earth does, and could be the reason Venus doesn’t have an internally generated magnetic field.
Before spacecraft missions were sent to Venus, scientists had no idea what the composition of Venus was. They could calculate the planet’s density, but the surface of Venus was obscured by dense clouds. Spacecraft equipped with radar were able to penetrate the thick clouds and map out features on the planet’s surface, showing that it has impact craters and ancient volcanoes. It’s believed that Venus went through some kind of global resurfacing event about 300-500 million years ago, which is the age of the planet’s surface (calculated by the number of impact craters).
The crust of Venus is thought to be about 50 km thick, and composed of silicate rocks. Beneath that is the mantle, which is thought to be about 3,000 km thick. The composition of the mantle is unknown. And then at the center of Venus is a solid or liquid core of iron and nickel. Since Venus doesn’t have a global magnetic field, scientists think that the planet doesn’t have convection in its core. The planet doesn’t have a large difference in temperature between the inner and outer core, so the metal doesn’t flow around and generate a magnetic field.
We have written many articles about Venus for Universe Today. Here’s an article about Venus’ wet, volcanic past, and here’s an article about how Venus might have had continents and oceans in the ancient past.
We have recorded a whole episode of Astronomy Cast that’s only about planet Venus. Listen to it here, Episode 50: Venus. | 0.847719 | 3.880533 |
The more distant a galaxy is in Space, the more ancient it is in Time. For this reason, extremely far-flung, ancient galaxies are usually too faint to be observed, even by astronomers using the largest and most powerful telescopes. But Nature has supplied astronomers with a gift–gravitational lensing. Gravitational lenses can bend, warp, and distort streaming light in such a way that a distant object may be magnified by the gravity of a foreground object (the lens), thus making the background lensed galaxy easier for astronomers to see. In January 2018, an international team of astronomers led by Dr. Harald Ebeling from the University of Hawaii (Manoa) announced their important discovery of one of the most extreme examples of magnification of a faraway object by a gravitational lens. Using the Hubble Space Telescope (HST) to survey a sample of giant clusters of galaxies, the team located a far-off galaxy, dubbed eMACSJ1341-QG-1, that is magnified 30 times thanks to the distortion of Spacetime created by a foreground massive galaxy cluster, which warps its traveling streams of light.
The term gravitational lensing itself refers to the path that traveling light takes when it is deflected. This occurs when the mass of an object situated in the foreground warps the light streaming out from a more distant object located in the background. The light does not have to be visible light–it can be any form of electromagnetic radiation. As a result of gravitational lensing, light beams that would normally not be observable are bent in such a way that their paths wander toward the observer. Conversely, light can also be bent in such a manner that its beams wander away from the observer.

There are three kinds of gravitational lenses: strong lenses, weak lenses, and microlenses. The differences between the three types have to do with the position of the background object that is emitting the light, the foreground lens that is bending the light, and the position of the observer–as well as the shape and mass of the foreground lens. The foreground object determines how much of the background object's light will be warped, as well as where this light will wander on its path through Spacetime.
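The strength of a lens is often summarized by its Einstein radius, θ_E = √(4GM/c² · D_ls/(D_l·D_s)) for an idealized point-mass lens. The sketch below uses illustrative round numbers, not measured values for any real cluster, and ignores the subtleties of cosmological distances:

```python
import math

# Point-mass lens sketch: theta_E = sqrt(4*G*M/c^2 * D_ls / (D_l * D_s)).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg
GPC = 3.086e25         # metres per gigaparsec

M = 1e14 * M_SUN                             # a rich galaxy cluster (assumed)
D_l, D_s, D_ls = 1 * GPC, 2 * GPC, 1 * GPC   # lens, source, lens-to-source

theta_e = math.sqrt(4 * G * M / C**2 * D_ls / (D_l * D_s))  # radians
arcsec = math.degrees(theta_e) * 3600
print(f"Einstein radius: about {arcsec:.0f} arcseconds")     # ~20 arcsec
```

Background sources whose images fall near this angular radius are the ones that get strongly stretched and magnified, which is why massive clusters make such effective natural telescopes.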
The Cosmos that we observe today sparkles with the fabulous flames of billions and billions of stars populating the more than 100 billion galaxies inhabiting the relatively small part of the Universe that we are able to observe. We cannot observe whatever may exist beyond the cosmological horizon–or edge–of the observable Universe, because light streaming out from luminous objects inhabiting those unimaginably remote regions has not had sufficient time to reach us since the Big Bang. This is due to the expansion of the Universe. The speed of light–the universal speed limit–has made it impossible for us to observe what may exist beyond the cosmological horizon of our visibility. When we look deep into Space, we look back in Time. This is because the more distant a shining object is in Space, the longer its light has taken to reach us.

No known signal in the Universe can travel faster than light in a vacuum, and the light flowing out from far-flung celestial objects cannot travel to us faster than this universal speed limit will permit. It is impossible to locate an object in Space without also locating it in Time. Hence, the term Spacetime. Time is the fourth dimension. The three spatial dimensions that characterize our familiar world are back-and-forth, side-to-side, and up-and-down.
Gravitational lensing was predicted by Albert Einstein in his Theory of General Relativity (1915), and has since been observed many times by astronomers. Einstein's first theory of Relativity, the Special Theory of Relativity (1905), describes a Spacetime that is often likened to an artist's canvas. The artist paints points and lines on this canvas, which represents the stage where the universal drama is being played–and not the drama itself. The supreme achievement uniting the stage with the drama came a decade later with the Theory of General Relativity–where Space becomes a star in the universal drama itself. Space tells mass how to move, and mass tells Space how to curve. Spacetime is as elastic as a child's outdoor trampoline. Imagine a little girl tossing a heavy bowling ball onto the trampoline. The ball represents a heavy mass, like that of a star. It creates a dimple–or a "gravitational well"–in the stretchy elastic fabric of the trampoline. Now, if the little girl then throws a handful of marbles onto the trampoline, they will wander down curved paths around the "star"–as if they were planets in orbit around a real star. If the bowling ball is then removed, the marbles will take straight paths instead of curved ones. The marbles–or "planets"–travel according to the more massive "star's" warping of the flexible fabric of the trampoline, which represents Spacetime. The stage and the drama are united. The drama will continue until the show's final curtain.

The first gravitational lens was discovered in 1979, and today lensing offers astronomers a superb view of the extremely dim Universe soon after its mysterious birth. Gravitational lensing was first verified during a solar eclipse in 1919, when background stars were seen to be offset in exactly the way that Albert Einstein had predicted. Astronomers now use these celestial magnifying glasses to learn about remote objects that would otherwise be so faint as to be almost invisible. Indeed, far-flung and ancient galaxies may well reveal to astronomers a treasure trove of information about our own Milky Way Galaxy. These distortions of Spacetime, caused by massive objects closer to Earth, can also be used by astronomers to study nearby stars and their retinues of planets.
Extreme Celestial Magnifying Glass Detects Dim Galaxies In The Primeval Universe
Gravitational lensing can dramatically magnify far-flung celestial sources in the primeval Universe, as long as there is a sufficiently massive foreground object situated between the background source and the prying eyes of curious astronomers.

Clusters of galaxies are vast concentrations of dark matter and searing-hot gas surrounding hundreds–or even thousands–of individual galaxies. All of the constituent galaxies belonging to a cluster are bound together by their mutual gravitational attraction. These clusters are of great value to astronomers because they serve as powerful gravitational lenses. By functioning as magnifying glasses for faint objects that would otherwise be hidden behind them, massive galaxy clusters can serve as natural telescopes that allow astronomers to observe objects long ago and far away in Spacetime–sources that would otherwise be beyond the reach of telescopes.

Dark matter is a ghostly and invisible form of matter that is thought to be composed of exotic non-atomic particles that do not interact with light or any other form of electromagnetic radiation–which is why it is transparent. The weird dark matter is much more abundant than the misnamed "ordinary" atomic matter that makes up our familiar world–the world that we can see. The so-called "ordinary" atomic matter is the stuff of stars, planets, moons, and people, and it accounts for literally all of the elements listed in the Periodic Table.
There is an image depicting the quiescent galaxy eMACSJ1341, as captured by the HST. The photograph shows a yellow dotted line tracing the boundaries of the galaxy's gravitationally lensed image. An inset at the top left of the picture shows what eMACSJ1341 would look like if it were observed directly, without the aid of the foreground cluster lens. The very dramatic amplification and distortion due to the intervening massive galaxy cluster can readily be seen.

Quiescent galaxies are those in which star-birth has all but ceased completely. Therefore, quiescent galaxies represent the final phase of galaxy evolution. This is what makes eMACSJ1341 intriguingly unusual. Galaxies as ancient and far-flung as eMACSJ1341 are usually young enough not to have depleted their supply of star-birthing fuel. For this reason, learning why eMACSJ1341 has stopped producing newborn stars became a significant scientific quest.
Dr. Ebeling and his colleagues, working with the data acquired from HST, are continuing their research using both the HST data and ground-based instruments. The astronomers are carrying out further analysis of the lens model, removing distortions from the magnified image.

"The very high magnification of the image provides us with an unprecedented opportunity to investigate the stellar populations of the distant object and, ultimately, to reconstruct its undistorted shape and properties," commented study team member Dr. Johan Richard in a January 31, 2018 University of Hawaii press release. Dr. Richard, who performed the lensing calculations, is of the University of Lyon in France.

Even though similarly extreme gravitational magnifications have been observed previously, this discovery sets a new record for the magnification of a rare quiescent galaxy. Dr. Ebeling explained in a January 31, 2018 University of Hawaii press release that "We specialize in finding extremely massive clusters that act as natural telescopes and have already discovered many exciting cases of gravitational lensing. This discovery stands out, though, because the enormous magnification provided by eMACSJ1341 allows us to study in detail a very rare kind of galaxy."

Representing the final phase of galaxy evolution, quiescent galaxies are a substantial population in the local Universe. "However, as we look at more distant galaxies, we are also looking back in time, so we are seeing objects that are younger and should not yet have used up their fuel supply. Understanding why this galaxy has already stopped forming stars may give us vital clues about the processes that govern how galaxies evolve," explained Dr. Mikkel Stockmann, a study team member from the University of Copenhagen and an expert in galaxy evolution.
If you're a fan of the Fantastic Four: Rise of the Silver Surfer movie, this news might be exciting. An unknown object has entered the solar system, and it is likely to have come from an alien world.
This object was first spotted by amateur astronomer Gennady Borisov in Ukraine on 30 August. The Minor Planet Center, part of the Smithsonian Astrophysical Observatory, released a statement saying that the object, now called C/2019 Q4, has an orbit in the shape of a hyperbola. It is moving too fast to be held by the sun's gravity, which would mean that it hails from outside our solar system. However, more observations are needed to confirm this theory. If it turns out to be true, then the object will be given a name that starts with 2I (denoting the second interstellar object detected).
Here's a remarkable animation of #gb00234, which may be our second known interstellar visitor, taken by astronomer Gennady Borisov - who discovered the object.
- Jonathan O'Callaghan (@Astro_Jonny) September 11, 2019
The first time an interstellar object entered our solar system, astronomers caught it too late. The asteroid, Oumuamua, was already leaving, which left them with very little time to study it.
In an interview with Business Insider, Olivier Hainaut, the astronomer who studied the first interstellar asteroid, said: "We had to scramble for telescope time. This time we are ready."
This object, by contrast, has been detected while still entering the system. This means we will have a lot more time to study it.
Hainaut said, "The main difference from Oumuamua and this one is that we got it a long, long time in advance."
Plans to study it
An early image of C/2019 Q4 shows that it is followed by a small tail or a halo of dust, which could mean that it is a comet. Comet tails are made up of gas and dust particles and are usually on the opposite side of the sun (Could it not be an exhaust plume from an interstellar engine?).
Since they have time, scientists will study the rock in detail. They are trying to plot the trajectory of the object. This will also help determine if this space rock is of interstellar origin or if it originated within our Solar System.
How will they do that, you might ask?
When they plot the orbit of the object, the shape gives the answer. If the orbit is elliptical, the object originated inside the solar system and is in a closed orbit, bound to the sun.

If it is a hyperbolic orbit, then the object is an outsider, on an open-ended trajectory.
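That elliptical-versus-hyperbolic test can be sketched numerically. The sign of an object's specific orbital energy (equivalently, whether its eccentricity exceeds 1) decides whether it is bound to the Sun; the speeds and distances below are illustrative, not fitted values for C/2019 Q4.

```python
GM_SUN = 1.327e20  # standard gravitational parameter of the Sun, m^3/s^2
AU = 1.496e11      # astronomical unit, m

def orbit_type(speed_m_s: float, distance_m: float) -> str:
    """Classify an orbit from heliocentric speed and distance."""
    specific_energy = speed_m_s**2 / 2 - GM_SUN / distance_m
    if specific_energy < 0:
        return "elliptical (bound: born inside the solar system)"
    return "hyperbolic (unbound: an interstellar visitor)"

# Earth moves ~29.8 km/s at 1 au, well under escape speed there -> bound.
print(orbit_type(29_800, AU))
# An object doing ~33 km/s while still ~3 au out exceeds escape speed there.
print(orbit_type(33_000, 3 * AU))
```

In practice astronomers fit the full set of orbital elements to many position measurements, which is why more observations were needed before the interstellar label could be confirmed.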
Hainaut told Business Insider, "Here we have something that was born around another star and travelling toward us. It's the next best thing to sending a probe to a different solar system."
The object will pass Mars and if it is an interstellar object then scientists will be able to study it till it leaves our solar system in 2021. | 0.905899 | 3.087293 |
But it is a thesis too radical for our times and one that no scientist is likely to feel the courage to entertain.
It is geostasis. In short, a stationary Earth or, at least, a revolving but not orbiting Earth.
Nuts? Well, maybe.
Yet this would most easily explain Airy's failure, the Michelson result and the Sagnac result and would dispense with any need to posit a constant for the speed of light. Nor would it require any of the contradictions of SR and GR. Curved space-time reverts to the academic museum of curios from whence it ought never to have escaped.
One past astronomer of international fame, faithfully reproducing what he observed in the night sky, did produce a model of the solar system which accords completely with what is observed.
He was the famous Danish Astronomer and mentor of Johannes Kepler, his pupil, namely Tycho Brahe, Astronomer-Imperial to the Holy Roman Emperor.
Tycho Brahe was a Danish astronomer of Renaissance times. His system - even today - accords most accurately with what you see in the night sky.
Kepler inherited all Tycho’s calculations but later falsified them to fit his new-fangled ellipse theory.
Nevertheless, it is the Tychonian system which, even today, remains the most true to what is observed in the night sky. It explains, for instance, the apparent reversal of motion of Venus and Mercury which the Copernican system explains only by the use of epicycles.
There is a common misconception that the Copernican model did away with the need for epicycles. This is not true. The Copernican model could not explain all the details of planetary motion on the celestial sphere without epicycles. Indeed, the Copernican system required more than the Ptolemaic system. However, the Tychonian system requires none at all.
In the Tychonian system, the Moon and the Sun revolve around the Earth. Mercury, Venus, Mars, Jupiter, and Saturn revolve around the Sun, but the orbits of all save Mercury and Venus encompass also the Earth. Thus, from the perspective of the Earth, both Mercury and Venus appear, at one point in their sub-orbit round the Sun, to go into reverse as the Sun orbits the Earth.
But what of Stellar Parallax? This was the proof, surely, of heliocentrism?
Distance measurement by parallax is a special case of the principle of triangulation. The careful measurement of the length of one baseline can fix the scale of an entire triangulation network.
In parallax, the triangle is extremely long and narrow, and by measuring both its shortest side (the motion of the observer) and the small top angle (the other two being close to 90 degrees), the length of the long sides (in practice considered to be equal) can be determined.
Two different measurements are taken six months apart, to allow for the presumed 186 million miles that the earth travels in its alleged orbit during that time period, from one side of the sun to the other.
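The triangulation arithmetic described above reduces, on the conventional interpretation, to a one-line rule: distance in parsecs is the reciprocal of the parallax angle in arcseconds. A minimal sketch; the 0.768-arcsecond value is the published parallax of Proxima Centauri, used here purely as a worked example.

```python
# Parallax distance rule: with a 2 AU baseline (two measurements taken
# six months apart), d [parsec] = 1 / p [arcsec].

def distance_parsecs(parallax_arcsec):
    return 1.0 / parallax_arcsec

def distance_lightyears(parallax_arcsec):
    return distance_parsecs(parallax_arcsec) * 3.2616  # light years per parsec

print(round(distance_lightyears(0.768), 2))  # ~4.25 light years
```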
Parallax is the appearance that stars move in the sky in relation to each other, or in other words, more “distant” stars shift every six months in relation to nearby stars that are allegedly closer. This is put forth as the evidence that the earth revolves around the sun annually.
Stellar parallax proves only that either the Earth is moving with respect to the stars, or that the stars are moving with respect to the Earth. In other words, it simply does not resolve the issue.
There is a problem, however: most of the star data that is catalogued by NASA exhibits negative parallax; in other words, most of the stars do not shift in the direction required to support the heliocentric model.
In reality, it may be that the relationship between Sun and Earth, governed by multi-body dynamics with gravitational forces interacting to produce regular movement, cannot be determined by those who inhabit one of the bodies without some static background, which, of course, does not exist.
This is perhaps the reason why the late Sir Fred Hoyle felt able to say that the heliocentric model is as good as any other but no better (The Intelligent Universe, page 17).
Science can hypothesize and demolish an hypothesis but science cannot dogmatize. Doubts remain about each of the planetary system hypotheses and they cannot be resolved by reference to unscientific dogma.
In matters of science, where there is no room for dogma, no final “court” and no final magisterial authority, the received wisdom is even more vulnerable to disproof. Yet many hypotheses have been treated as a kind of Delphic Oracle above and beyond any kind of challenge.
The result is that the real truth, whatever it may be, about the motion of the planets will, for the time being, remain obscured by wholly inappropriate scientific dogma rather than real science.
A generation that slavishly follows the media and fashion is unlikely to see the point.
Perhaps, after all, that much-maligned philosopher, theologian and scientist, St Robert Bellarmine, has still much yet to teach us?
In the mid-1990s the U.S. embarked on a new strategy for exploring the Red Planet. In response to the 1993 failure of the Mars Observer mission—a billion-dollar, decade-in-the-making probe that mysteriously lost contact with ground controllers just before it was scheduled to go into orbit around the planet—NASA administrator Daniel Goldin decided to shift to smaller, less expensive spacecraft and create a sustained exploration campaign by sending one or two probes to Mars at every launch opportunity. (These opportunities come every two years or so, when Earth and Mars are properly aligned.) The new strategy spread out the inherent risk of interplanetary travel and ensured that the engineering experience and scientific data acquired by one mission could be rapidly used by the next. The approach has proved a brilliant success, putting three NASA spacecraft into orbit around Mars and three rovers on the planet’s surface (Pathfinder, Spirit and Opportunity). The Phoenix Mars Lander, which left Earth in August, is expected to reach the Red Planet next May, and NASA plans to launch the Mars Science Lab in 2009.
Subsequent missions are in jeopardy, however. Alan Stern, associate administrator for NASA’s Science Mission Directorate, warned in July that at least one of the future Mars probes may have to be scrapped to free up funding for a much costlier mission, tentatively scheduled for the 2018–2020 period, that would collect samples of Martian rock and bring them to Earth. Moreover, highly placed scientists and program leaders report that the new plan may actually require the sacrifice of all other Mars spacecraft after 2009.
Putting aside the question of whether the redirected funds would actually be devoted to the Mars Sample Return (MSR) mission, such a reorganization would be a very bad idea. A one-shot mission to bring Martian rocks to Earth for laboratory analysis is not really a good way to address the central question of Mars science. The Red Planet is a critical test bed for the hypothesis that life is likely to arise wherever the appropriate physical conditions—notably, the presence of liquid water—prevail on a planet for a sufficiently long time. Scientists now know that Mars probably had standing bodies of water on its surface between three billion and four billion years ago, when there was already plentiful microbial life on Earth. Because asteroid and comet impacts facilitate the transfer of rocks between Mars and Earth, the discovery of microfossils on the Martian surface would not in itself prove that life arose independently on Mars. To settle the question, researchers would need to find living organisms on the planet and examine their biochemistry. These organisms, if they exist, are most likely to be found in groundwater. Thus, the most important goal of the exploration program is to identify sites on Mars where groundwater is within practical drilling distance of the surface. This task can best be done not with an MSR mission but with a comprehensive scouting program involving orbiters, rovers, drillers and robotic aircraft with ground-penetrating radar.
Furthermore, even if one concedes considerable importance to the MSR mission, it is doubtful whether the reorganization plan is the right way to get there. If NASA halts its Mars exploration for a decade, all the best people will leave the team and be replaced by those who enjoy drawing charts and schedules. Instead of wrecking the current Mars program and hoping for the best, the space agency should build on it. New orbital spacecraft and aircraft should extend the reconnaissance of the Red Planet and identify sites containing potential fossils and near-surface groundwater. With such discoveries building well-justified public interest, NASA will be able to ask for extra funding to add an MSR mission to the queue. While the space agency is preparing the mission, it can send rovers and drillers to the most promising sites and cache samples that could reveal the truth about life on Mars. In addition to providing major scientific discoveries, such a mission might well give NASA the boost it needs to send human explorers to the Red Planet.
The Mars exploration program is one of the brightest jewels in the crown of American science; indeed, it represents one of the great cultural accomplishments of contemporary human civilization. It should not be discarded lightly. Rather than breaking from it, we should build on it. That is the way to Mars.
Before moving on to discuss how the present atmosphere of Venus might be made breathable for humans, there is one additional topic, related to the collisional impacts issue discussed earlier, that should be addressed. This concerns the incredibly slow rotation rate of Venus, equivalent to a slothful 243 Earth days (see Table 7.1).
To a certain extent, the slow rotation may be a nonissue, but many researchers have suggested it might be desirable to increase the Venusian spin to something like that of Earth. Here the idea is that many terrestrial plants and crops do not grow well in permanent daylight conditions. This is a problem that can presumably be solved by genetic modification, and we should also note that initially all the food crops will be grown in artificial environments where the outside light can be easily controlled. If one does wish to produce a more rapidly spinning Venus, however, then it had better be done early on in the terraforming stage, before human colonization has begun.
Perhaps the simplest way to both cool Venus and induce an artificial day/night cycle shorter than the natural one (a period equivalent to 116.75 Earth days) is to place a louvered sunshade or variable transparency parasol at the Venusian L1 point. A circular parasol located at the Venusian L1 point would need to be about 25,000 km in diameter in order to completely obscure the Sun. This is a colossal size, over twice that of the planet itself, and a poignant reminder of just how complex the engineering and material resource requirements for terraforming Venus will be. By introducing a variable transparency or louvered system the sunlight levels on the daylight hemisphere of Venus could be turned on and off as required. On the night-side hemisphere of Venus, however, one would have to use either a series of orbital mirrors to reflect sunlight onto inhabited regions, or rely solely on artificial lighting.
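As a sanity check on the 25,000 km figure, the minimum shade size follows from simple geometry: to keep all of Venus in full shadow, a parasol at L1 must out-span the planet by the Sun's angular size over the L1 distance. A rough sketch, using standard constants and the Hill-sphere approximation for the L1 distance:

```python
# Minimum diameter of a Sun-Venus L1 sunshade that puts the whole
# planet in umbra. L1 distance from Venus uses d = a * (m / 3M)^(1/3).

A_VENUS = 1.082e8   # Sun-Venus distance, km
R_VENUS = 6052.0    # radius of Venus, km
R_SUN = 6.957e5     # solar radius, km
M_VENUS = 4.867e24  # mass of Venus, kg
M_SUN = 1.989e30    # solar mass, kg

# Distance of the L1 point sunward of Venus (Hill-sphere approximation)
d_l1 = A_VENUS * (M_VENUS / (3 * M_SUN)) ** (1.0 / 3.0)  # ~1.0e6 km

# Angular radius of the Sun as seen from Venus, in radians
theta_sun = R_SUN / A_VENUS

# Shade must out-span Venus by theta_sun over the L1 distance
shade_diameter = 2 * (R_VENUS + d_l1 * theta_sun)
print(round(shade_diameter))  # ~25,000 km
```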
The most basic spin-up mechanism would be one resulting from glancing impacts. This is not a mechanism that is active now, but when the Solar System was newly formed (4.56 billion years ago) and the planets themselves were still growing through accretion, there were many large, multithousand kilometer-sized objects moving along dynamically unstable orbits around the Sun. Indeed, the origin of Earth's Moon, the large obliquity of Venus along with its slow rotation rate, the relatively large iron core of Mercury, and the high obliquity of Uranus are all attributed to offcenter impacts from large proto-planetary bodies that occurred late in the planetary formation stage.
Arranging collisions with large KBOs, several hundred kilometers in diameter, is probably the most straightforward way to increase the spin rate of Venus.8 Indeed, there is much to recommend the collisional method for partially denuding the Venusian atmosphere, spinning up the planet, and potentially generating Venusian moons and/or an equatorial debris shade. Certainly, the directed-impacts method smacks of a rather Neanderthal approach, but it is nonetheless a highly practical way of achieving some of the desired initial goals in the terraforming of Venus. There is a ready supply of large KBOs in the outer Solar System to perform the task at hand, and the essential technical means of altering and guiding an impactor's orbit are already known to us in principle. The practical ability to realize the required engineering, however, is still likely to be many centuries away—a further indication, if one was actually still needed, that the terraforming of Venus will not be a quick or easy task.
Directed collisions are by no means the only ways by which our descendants might spin-up Venus or for that matter an asteroid, satellite, or other planet. Freeman Dyson (Institute for Advanced Studies, Princeton), who is never afraid of thinking both big and bold, suggested in the mid-1960s that an electric motor arrangement could be engineered to increase the spin rate of a planet. His starting point for the idea came about from thinking about how the existence of a technically advanced extraterrestrial society might be recognized. This line of thinking eventually led Dyson to the idea of what are now called Dyson spheres and the concept of what are known as Kardashev Type II civilizations.9
Since an advanced civilization would undoubtedly require a vast resource of raw material, it seemed reasonable to conclude that it would develop the means of disassembling large asteroids and possibly even planets (planets, presumably, that is, not required for terraforming). Rather than disassemble such objects by direct mining, Dyson reasoned that it would be simpler for the asteroid or planet to disassemble itself. This self-destructive step could be achieved, he argues, by inducing rapid spin. Indeed, if the centrifugal force due to rotation exceeds the tensile strength of the material body, then the body will literally fly apart, and this is what Dyson had in mind. In a somewhat medieval sense, it might be said that this mechanism literally flogs itself to death.
The full physical details of the Dyson motor need not concern us here,10 but the idea is to turn the planet (or asteroid) into a giant electric motor. Indeed, by generating very specific magnetic field topologies around the object to be spun up and by placing numerous electrical generators in orbit around it, the planet/asteroid will behave like a massive armature. At this stage, suffice it to say, the end result of the engineering is that angular momentum is transferred to the entrapped body, and it will begin to spin faster.
Eventually the spin limit, the point at which the object flies apart, will be achieved, and the assorted pieces can then be captured and further processed into building material.
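The "spin limit" can be estimated without any detail of the Dyson motor itself: for a strengthless (rubble-pile) body, break-up sets in when centrifugal acceleration at the equator equals self-gravity, giving a critical rotation period that depends only on bulk density. A back-of-envelope sketch (the densities are illustrative round numbers):

```python
# Critical (break-up) rotation period for a strengthless body:
# P = sqrt(3*pi / (G * rho)). Spinning faster than this, the body
# literally flies apart, as described above.
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def breakup_period_hours(density_kg_m3):
    return math.sqrt(3 * math.pi / (G * density_kg_m3)) / 3600.0

# A rocky rubble pile (~2500 kg/m^3) flies apart below roughly two
# hours, which is why few asteroids are seen spinning faster than that.
print(round(breakup_period_hours(2500), 2))  # ~2.09
```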
Dyson's idea is certainly elegant, but it seems overly complicated. Although the directed-collision approach can achieve the same end goals more simply than the Dyson motor in the asteroid or small moon disruption cases, the physical destruction of a planetary-sized object, should this ever be desired, may well have to proceed by a method such as that proposed by Dyson.
On May 14, 2009, ten years ago today, an Ariane 5 rocket lifted off from the Guiana Space Center in Kourou, French Guiana, carrying two spacecraft, Herschel and Planck.
The two spaceborne observatories, Herschel and Planck, built by Thales Alenia Space as prime contractor for the European Space Agency (ESA), were designed to further our understanding of the Universe. One of the largest contracts ever signed for a scientific mission, Herschel and Planck were designed to operate from an orbit at Lagrange Point 2 (L2), about 1.5 million kilometers (a million miles) from the Earth.
These two spaceborne telescopes were designed to unveil some of the Universe’s most closely guarded secrets, namely the formation of stars and galaxies (Herschel) and the big bang (Planck). Behind this daunting endeavor was a project team counting more than 500 people, as well as over 90 subcontractors from 17 countries. Virtually all TAS facilities were involved in the project, including Cannes, Turin, Madrid, Charleroi, Milan, Toulouse, Rome and L’Aquila, along with Bristol and Zurich, which would join Thales Alenia Space a few years later.
Thales Alenia Space invested the sum total of its expertise and passion in this project, leading up to the launch of the two spacecraft from the Guiana Space Center, Europe’s Spaceport in French Guiana (South America). Once in orbit, Herschel and Planck could start their scientific missions.
The Herschel space telescope was able to observe cold and dust-laden regions of the Universe that were inaccessible to other telescopes at the time. It studied the birth of galaxies and how stars were formed, as well as the gas and dust clouds that would eventually become stars, proto-planetary disks and complex organic molecules in the tails of comets. In particular, Herschel was the first spacecraft to study the complete spectrum of wavelengths in the far infrared bandwidth.
The Planck scientific observatory was designed to study the cosmic background noise, or in other words the fossil radiation from the “first light” of the Universe, emitted some 380,000 years after the big bang, or about 13.8 billion years ago. Planck delivered vital information concerning the creation of the Universe and our own Solar System. For instance, it detected a number of areas where stars are about to be born, or are just starting their development cycle.
Herschel and Planck carried out their missions to perfection, and even beyond the call of duty, since Planck exceeded its specified mission lifetime by 18 months and Herschel by 6 months, finishing at the end of October 2013 and the end of April 2013, respectively. They are still considered two of the most complex scientific spacecraft ever built in Europe.
This performance earned kudos from the global scientific community and also won awards from the leading astronautics associations 3AF in France (Association Aéronautique et Astronautique de France) in 2010 and AIAA in the United States (American Institute of Aeronautics and Astronautics) in 2015. From the industry standpoint, the program won the 2014 Gold Medal from the International Project Management Association (IPMA) in the category “Mega Sized Projects”.
For Thales Alenia Space, the Herschel/Planck program augured well for future success in the scientific sphere, as the company subsequently won both the Euclid contract and part of the Plato mission. Both of these missions entail requirements and orbits which are similar to those on Herschel/Planck.
Date: May 14, 2019
ann15059 — Announcement
ALMA Greatly Improves Capacity to Search for Water in Universe
Band 5 receivers achieve first fringes
17 July 2015
After more than five years of development and construction, ALMA successfully opened its eyes on another frequency range after obtaining the first fringes with a Band 5 receiver, specifically designed to detect water in the local Universe. Band 5 will also open up the possibility of studying complex molecules in star-forming regions and protoplanetary discs, and detecting molecules and atoms in galaxies in the early Universe, looking back about 13 billion years.
ALMA observes the Universe in radio waves: light that is invisible to the human eye. The weak electromagnetic glow from space is captured by the array of 66 antennas, each with diameters up to twelve metres. Their receivers transform this weak radiation into an electrical signal.
To span a broad range of frequencies, each ALMA antenna is equipped with up to ten different receivers, each one specially designed to cover a specific range of wavelengths. The new Band 5 receiver is the eighth type to be integrated and covers a range of wavelengths from 1.4 to 1.8 millimetres (frequencies from 163 to 211 GHz), probing a part of the electromagnetic spectrum that has only been poorly explored before.
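As a quick consistency check, the quoted wavelength and frequency limits are two expressions of the same band, related by wavelength = c / frequency:

```python
# Convert the Band 5 frequency limits (GHz) to wavelengths (mm).
C = 299_792_458.0  # speed of light, m/s

def ghz_to_mm(freq_ghz):
    return C / (freq_ghz * 1e9) * 1e3  # metres -> millimetres

print(round(ghz_to_mm(163), 2))  # ~1.84 mm (long-wavelength edge)
print(round(ghz_to_mm(211), 2))  # ~1.42 mm (short-wavelength edge)
```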
“Band 5 will open up new possibilities to explore the Universe and bring new discoveries,” explains ESO’s Gianni Marconi, who is responsible for the integration of Band 5. “The frequency range of this receiver includes an emission line of water that ALMA will be able to study in nearby regions of star formation. The study of water is, of course, of intense interest because of its role in the origin of life.”
With Band 5 ALMA will also be able to probe the emission from ionised carbon from objects seen soon after the Big Bang, opening up the possibility of probing the earliest epoch of galaxy formation. “This band will also enable astronomers to study young galaxies in the early Universe about 500 million years after the Big Bang,” added Gianni Marconi.
The Band 5 receivers were originally designed and prototyped by Onsala Space Observatory's Group for Advanced Receiver Development (GARD) at Chalmers University of Technology in Sweden, in collaboration with the Rutherford Appleton Laboratory, UK, and ESO, under the European Commission supported Framework Programme FP6 (ALMA Enhancement). After having successfully tested the prototypes, the first production-type receivers were built and delivered to ALMA by a consortium of NOVA and GARD in the first half of 2015. Two receivers were used for the first light. The remainder of the 73 receivers ordered, including spares, will be delivered between now and 2017.
ESO placed the European contract for the cryogenically cooled receivers with NOVA, the research school for astronomy in the Netherlands, in partnership with Onsala Space Observatory’s Advanced Receiver Development group. NRAO built the high-precision local oscillators that tune the receivers, so that the output from all antennas can be precisely combined to make high-resolution images.
The Atacama Large Millimeter/submillimeter Array (ALMA), an international astronomy facility, is a partnership of ESO, the US National Science Foundation (NSF) and the National Institutes of Natural Sciences (NINS) of Japan in cooperation with the Republic of Chile. ALMA is funded by ESO on behalf of its Member States, by NSF in cooperation with the National Research Council of Canada (NRC) and the National Science Council of Taiwan (NSC) and by NINS in cooperation with the Academia Sinica (AS) in Taiwan and the Korea Astronomy and Space Science Institute (KASI).
ALMA construction and operations are led by ESO on behalf of its Member States; by the National Radio Astronomy Observatory (NRAO), managed by Associated Universities, Inc. (AUI), on behalf of North America; and by the National Astronomical Observatory of Japan (NAOJ) on behalf of East Asia. The Joint ALMA Observatory (JAO) provides the unified leadership and management of the construction, commissioning and operation of ALMA.
ESO Public Information Officer
Garching bei München, Germany
Tel: +49 89 3200 6655
Cell: +49 151 1537 3591
About the Announcement
Of all the 100 billion stars in our galaxy, the one closest to us might just be one that supports alien life, if reports of a new discovery are to be believed, according to the Daily Mail.
A newly-spotted planet in our galactic neighbourhood might have the right conditions for life, according to reports.
Scientists spotted the planet, which is believed to be 'Earth-like', orbiting the star Proxima Centauri, the nearest stellar neighbour to our sun.
The researchers are due to unveil the discovery later this month and apparently believe it orbits its star at a distance that could favour life - the so-called habitable zone.
Proxima Centauri is part of the Alpha Centauri star system just 4.2 light years from our own solar system.
According to Der Spiegel, the European Southern Observatory (ESO) will announce the finding at the end of August.
ESO spokesman Richard Hook said he is aware of the report, but refused to confirm or deny it.
Proxima Centauri is the closest star to our own. A planet orbiting the star would be the closest exoplanet to Earth.
Discovered in 1915, Proxima Centauri is one of three stars in the Alpha Centauri system, which is mainly visible from the southern hemisphere.
The planet is thought to be in the star's 'habitable zone' - an area around a star in which an orbiting planet's surface could hold liquid water.
For over a century, astronomers have known about Proxima Centauri and believed it to be part of a triple star system, along with Alpha Centauri A and B.
Located just 0.237 light years from the binary pair, this low-mass red dwarf star is also 0.12 light years closer to Earth, making it the closest star system to our own.
The report gave no further details.
Nasa has announced the discovery of new planets in the past, but most of those worlds were either too hot or too cold to host water in liquid form, or were made of gas, like our Jupiter and Neptune, rather than of rock, like Earth or Mars.
Last year, the US space agency unveiled an exoplanet that it described as Earth's 'closest-twin'.
Named Kepler 452b, the planet is about 60 per cent larger than Earth and could have active volcanoes, oceans, sunshine like ours, twice as much gravity and a year that lasts 385 days.
But at a distance of 1,400 light-years away, humankind has little hope of reaching this Earth-twin any time soon.
In comparison, the exoplanet orbiting Proxima Centauri, if confirmed, is just 4.24 light-years away.
This is a mere stepping stone in relation to the scale of the universe but still too far away for humans to reach in present-generation chemical rockets.
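To put that "too far" in numbers, here is a rough sketch of the crossing time at a Voyager-class speed of about 17 km/s (an illustrative figure, not from the article):

```python
# Travel time to Proxima Centauri at a fixed cruise speed.
LY_KM = 9.4607e12  # kilometres per light year
DIST_LY = 4.24     # distance to Proxima Centauri, light years

def travel_time_years(speed_km_s):
    seconds = DIST_LY * LY_KM / speed_km_s
    return seconds / (365.25 * 24 * 3600)

# Even at ~17 km/s the crossing takes roughly 75,000 years.
print(round(travel_time_years(17)))
```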
Exoplanets: "This illustration depicts stars with more than one planet. The planets eclipse or transit their host star from the vantage point of the observer, an angle called edge-on," National Geographic says.
Launched in 2009, NASA's $591 million Kepler Space Telescope has now added 715 newly verified planets orbiting nearby stars to its tally. "We've hit the motherlode; we've got a veritable exoplanet bonanza," says Kepler co-leader Jack Lissauer of NASA's Ames Research Center in Moffett Field, California.
The newly announced exoplanets reinforce the view that most solar systems around sunlike stars have smaller-size planets. Most of those planets range in width from Earth-size (on the smaller side) to Neptune-size (on the larger). That's quite a change from the Jupiter-size planets that were often spotted orbiting nearby stars during the early planet searches that started in 1995. "Nature likes to make small planets," says the Massachusetts Institute of Technology's Sara Seager, who was not part of the discovery team but commented on the findings at a Wednesday NASA briefing.
Four of the newly discovered planets orbit around their stars in "habitable zones"—regions where temperatures are just right for oceans, which bring with them the possibility of life. But the four planets are all a little more than twice the width of Earth, which may make their atmospheres unfriendly to life as we know it. (See: "Earth-Size Planets Come in Two Flavors.")
The newly discovered 715 planets orbit in solar systems around 305 stars, mostly ones the size of the sun or smaller. Many of the planets orbit in what is beginning to be seen as a more typical solar system, in which the largest planet is Neptune-size and a bevy of smaller Earth-size planets orbit close-in to their star and close to one another. "These new Kepler results are very helpful in filling out the statistics of solar systems," says Princeton's Adam Burrows, who was not part of the discovery team. "The goal is to see how typical is our own solar system, and ones unlike it."
Our solar system might be typical in some ways and atypical in others, I suspect, just as there are variations among humans and within nature on the planet we call home. This raises the question of why there should not be symmetry or statistical variations in our universe, known or unknown. If there are planets that can support life, human or otherwise, it would confirm decades of science fiction and popular TV shows and films. This would not only be exciting, but would also have the added benefit of placing humans in their proper place.
You can read more of this article at [NatGeo]
Brief and bright, fast and furious, uncommonly pure. Fast radio bursts have baffled astronomers since their discovery six years ago, but now they are revealing their true nature.
Two new studies suggest that FRBs are as common as dirt, that they produce more energy in a millisecond than the sun does in a million years, and that their single, intense flash of radio waves may be created when a neutron star is severed from its magnetic field as it collapses into a black hole. This explanation has also led to a more evocative name – blitzars – after the German word blitz for lightning.
In 2007, Duncan Lorimer and David Narkevic of West Virginia University in Morgantown discovered the first fast radio burst. Although many pulsars – spinning magnetic stars – give off brief periodic flashes of radio waves, the "Lorimer burst" was a singular event, lasting just a few milliseconds. What's more, it seemed to originate a few billion light years away, much more distant than the furthest pulsars that we can detect, suggesting that it must be exceptionally powerful.
Today Dan Thornton of the University of Manchester, UK, and colleagues have taken the total count to six in one fell swoop. In the journal Science, they report the discovery of four more FRBs using the 64-metre radio telescope in Parkes, Australia.
With this many FRBs to observe, they were able to glean more details than previous studies did. The extent to which the radio waves are slowed by electrons in space reveals that they have travelled for billions of light years, confirming that FRBs are exceptionally powerful. The team concludes that they emit as much energy in a few milliseconds as the sun does in a million years.
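The "slowing by electrons" mentioned above is quantified by the dispersion measure (DM): lower radio frequencies arrive later, and the size of the lag tracks the column of free electrons along the line of sight, and hence roughly the distance. A minimal sketch, with an illustrative DM of the order reported for the Thornton et al. bursts (not a fitted value):

```python
# Cold-plasma dispersion delay between two observing frequencies:
# delay [ms] = 4.149 * DM * (f_lo^-2 - f_hi^-2), DM in pc/cm^3, f in GHz.
K_DM_MS = 4.149  # dispersion constant in these units

def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    return K_DM_MS * dm * (f_lo_ghz**-2 - f_hi_ghz**-2)

# DM ~ 1000 pc/cm^3 swept across a 1.2-1.5 GHz band:
print(round(dispersion_delay_ms(1000, 1.2, 1.5)))  # ~1037 ms of sweep
```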
Their comparative analysis also suggests that FRBs are frequent, with one being produced in every galaxy in the universe roughly every 1000 years. The fact that only six have been detected so far reflects their very short lifetime and the fact that astronomers can’t keep an eye on the whole sky all the time.
So what could give rise to an FRB? Several ideas have been put forward, including a collision between neutron stars – the ultradense, magnetic remains of supernova explosions – but most theories fail to explain why FRBs emit purely at radio wavelengths. Other energetic cosmic explosions shine across a much broader spectrum, including in visible light, X-rays and gamma rays.
But another study, posted online today, details a scenario that can account for this. Heino Falcke of Radboud University in Nijmegen, the Netherlands, and Luciano Rezzolla of the Max Planck Institute for Gravitational Physics in Potsdam, Germany, suggest that an FRB occurs when the supernova explosion of a giant star leaves behind a compact neutron star that is slightly overweight.
The object’s own gravity would cause it to collapse into a black hole, the researchers say, if not for the centrifugal effect of its fast rotation. But within a few million years, the interaction of the neutron star’s magnetic field with the surrounding interstellar material slows the spin. Eventually, gravity wins and the star turns into a black hole after all.
“When the black hole forms, the magnetic field will be cut off from the star and snap like rubber bands,” explains Falcke. “This can produce the observed giant radio flashes.” By contrast, other types of radiation, which would come from the star itself rather than around it, cannot escape the gravitational collapse.
Thornton doesn’t want to comment on Falcke and Rezzolla’s idea before the paper has been accepted for publication in a refereed journal. “Our favourite explanation for FRBs is a giant burst from a magnetar – a highly magnetised type of neutron star,” he says. These explosions are expected to happen more or less as frequently as the mysterious radio bursts.
But Falcke and Rezzolla have already coined the term “blitzar” to describe FRBs.
Earth’s second emissary to interstellar space, Voyager 2, is phoning home with new views of the solar system’s ragged edge. But what it sees could be very different to what its predecessor glimpsed, revealing new details of the sun’s immediate neighbourhood.
Voyager 2 has reached the heliosheath, the beginning of the end of the solar system. If the experience of its twin, Voyager 1, is anything to go by, Voyager 2 is about two-thirds of the way to the heliopause – the outer edge of the sun’s influence, also considered to be where interstellar space begins. Voyager 1 crossed this boundary two years ago this week, according to NASA and most Voyager scientists. Not everyone agrees, though, because readings sent back by Voyager 1 left a little room for doubt.
One clue that Voyager 1 had passed the heliopause was that its instruments measured a slowing, sparser solar wind. That’s not happening yet for Voyager 2, says Rob Decker at Johns Hopkins University in Maryland.
That could be because the sun’s sphere of influence isn’t a sphere. Solar radiation blows a bubble of charged particles about 15 billion kilometres in radius, but the sun’s motion through the galaxy gives that bubble a windsock shape, with a rounded part in the direction of travel and a tail trailing behind. Voyager 1 is moving in the same direction as the sun, but Voyager 2 – 3 billion kilometres behind – is headed more sideways and down.
In addition to the sun’s motion, particles and plasma from interstellar space might be deforming the bubble, Decker says. As a result, it could take longer for Voyager 2 to reach interstellar space – or it could happen sooner, notes Ed Stone, chief Voyager scientist at NASA. Voyager 2 crossed the termination shock, another physical boundary signifying the heliosheath, about 1.5 billion kilometres before anyone expected, he says, so it’s hard to make firm predictions about what it will do in the future.
When Voyager 2 does cross the heliopause, its exit will be definitive, Decker and Stone say. Voyager 1’s plasma sensor broke down sometime in the 1980s, but the younger probe’s still works. The sensor will detect the change from the sun’s sphere of influence, which is warm and less dense, to the interstellar medium, which is cold and denser by a factor of 40. That means Voyager 2’s observations will be much clearer.
“We’re very fortunate to have a second spacecraft,” Stone says.
Journal reference: The Astrophysical Journal, DOI: 10.1088/0004-637X/792/2/126
* Paper title: Optical Emission of the Ultraluminous X-Ray Source NGC 5408 X-1: Donor Star or Irradiated Accretion Disk?
* Authors: F. Grisé, P. Kaaret, S. Corbel, H. Feng, D. Cseh, L. Tao
* First author’s affiliation: University of Iowa
Continuums seem to be the name of the game in astronomy. On more than one occasion, astronomers have defined discrete subclasses for a type of phenomenon only to later discover objects which populate an intermediate space between their original classifications (great examples of this are galaxies and supernovae). So… what about black holes?
All black holes discovered to date currently fall into one of only two classifications: Stellar Mass Black Holes and Super Massive Black Holes (SMBHs). Stellar mass black holes are formed during the core collapse of massive stars and typically range from ~1 – 30 M☉. SMBHs, on the other hand, are located in the centers of many (perhaps most) galaxies and have masses upward of 10⁵ – 10⁶ M☉. As the more astute among you may have noticed, there is currently a gap of several orders of magnitude in the distribution of known black hole masses. It is still an open debate whether this gap is real, or if there is an as-yet-unconfirmed subclass of black holes with masses between 10² and 10⁴ M☉. These hypothetical objects have been termed Intermediate Mass Black Holes (IMBHs).
Central to this debate is a class of objects known as Ultra-Luminous X-ray Sources (ULXs). ULXs are (you guessed it…) incredibly bright X-ray emitting objects. In order to comprehend how ‘ultra’ these sources really are, we first need to define a few terms: the Eddington Luminosity of an object represents the point at which the force outward due to radiation pressure balances the force inward due to gravity. For systems (such as X-ray binaries) where the main source of radiation is accretion onto a central object one can therefore define the Eddington accretion rate as the accretion rate which would cause the object to radiate at its Eddington Luminosity. As it turns out, assuming ULXs radiate isotropically, their luminosities actually exceed the Eddington Luminosity for stellar mass black holes.
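To make the scale of that argument concrete, here is a back-of-envelope sketch. The constant 1.26×10³⁸ erg/s per solar mass is the standard Eddington value for spherical accretion of ionized hydrogen; the 10⁴⁰ erg/s ULX luminosity is an assumed illustrative figure, not a measurement from this paper:

```python
# Back-of-envelope Eddington check. L_EDD_PER_MSUN is the standard value
# for spherical accretion of ionized hydrogen; the ULX luminosity below
# is an assumed illustrative figure, not taken from the paper.

L_EDD_PER_MSUN = 1.26e38  # erg/s per solar mass

def eddington_luminosity(mass_msun):
    """Eddington luminosity (erg/s) for an accretor of the given mass."""
    return L_EDD_PER_MSUN * mass_msun

ulx_luminosity = 1e40  # erg/s, bright-ULX scale, assuming isotropic emission
min_mass = ulx_luminosity / L_EDD_PER_MSUN
print(f"A ~10 M_sun black hole caps out near {eddington_luminosity(10):.2e} erg/s")
print(f"Sub-Eddington emission at 1e40 erg/s needs >~ {min_mass:.0f} M_sun")
```

A ~10 M☉ stellar black hole falls roughly an order of magnitude short of such a luminosity, which is exactly why sub-Eddington accretion would require a black hole in the IMBH mass range.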
It is therefore possible that ULXs actually harbor IMBHs. In this scenario, ULXs would simply represent more extreme X-ray binaries: an accreting black hole binary system, radiating below its Eddington luminosity, with a black hole mass greater than approximately 100 M☉. This theory is not the only configuration that could explain ULXs, however. If the radiation is beamed towards us, the total luminosity of these objects may be far lower than what we calculate assuming isotropic emission, and it also may be possible for some stellar mass black holes to accrete at super-Eddington rates. Attempts to directly measure the mass of a ULX compact object by measuring the motion of a companion star have thus far been unsuccessful. Current investigations into the nature of ULXs therefore hinge on characterizing the emission across the full electromagnetic spectrum, from X-ray to radio.
The current paper obtains simultaneous Hubble Space Telescope and Chandra X-ray Observatory observations of a particular ULX: NGC 5408 X-1 (see Figure 1). The simultaneous observations are critical because the authors aim to create a physical model which explains the emission emanating from the ULX between the optical and X-ray bands, and several ULXs have been shown to be time-variable sources. Figure 2, below, shows the X-ray, ultraviolet, optical, and near-infrared flux from the source with four physical models overlaid.
The blue and green lines represent two versions of a modified accretion disc model that has been extrapolated to optical wavelengths. You can see that both models fall an order of magnitude or more below the observed fluxes in the UV and optical bands. The red line, on the other hand, represents an irradiated disc model. In this model, some of the X-ray emission from the central source is absorbed by the material in the accretion disc itself and then re-radiated at longer wavelengths. The authors find that all of the observed flux is consistent with this model. They caution, however, that this is by no means conclusive. The orange model represents the UV/optical emission expected from a B0I supergiant (notice the blackbody-like shape). You can see that this model is also consistent with the Hubble data.
Thus, the authors conclude that their observations alone cannot determine whether the optical emission is due to a donor star or an irradiated accretion disc. They note, however, that such distinctions could be made with further monitoring. If the optical emission were due to re-radiated light from the central object, one would expect any variations in optical and X-ray emission to be correlated. One would NOT, however, expect any correlation if the optical emission were due to a donor star. They also emphasize that certain supercritical accretion models are not consistent with the presence of an irradiated disc, and thus confirming the nature of the optical emission in NGC 5408 X-1 will help to unravel the true nature of some ULXs.
Bright spiral galaxy NGC 3169 appears to be unraveling in this cosmic scene, played out some 70 million light-years away just below bright star Regulus toward the faint constellation Sextans.
Its beautiful spiral arms are distorted into sweeping tidal tails as NGC 3169 (left) and neighboring NGC 3166 interact gravitationally, a common fate even for bright galaxies in the local universe.
The picture spans 20 arc minutes, or about 400,000 light-years at the group’s estimated distance, and includes smaller, dimmer NGC 3165 at the right. NGC 3169 is also known to shine across the spectrum from radio to X-rays, harboring an active galactic nucleus that is likely the site of a supermassive black hole.
Dr. Duncan Brown
"Gravitational-wave Astronomy with the Laser Interferometer Gravitational-wave Observatory"
Almost all of our knowledge of astronomy and astrophysics comes from observing the Universe with electromagnetic waves. Gravitational waves are one of the most remarkable predictions of Einstein's theory of General Relativity. These waves are "ripples in the curvature of spacetime" which carry information about the changing gravitational fields of distant objects. Gravitational waves are analogous to electromagnetic waves, but because the coupling between gravity and matter is so much weaker than the coupling between light and matter, it is very difficult to generate detectable gravitational waves. Generating waves strong enough to be detectable with current technology requires extremely dense, massive objects, such as black holes and neutron stars, moving at speeds close to the speed of light. The first gravitational-wave detections will open a new window on the Universe and establish the field of gravitational-wave astronomy.
The U.S. Laser Interferometer Gravitational-wave Observatory (LIGO) and its French-Italian counterpart Virgo are presently searching for gravitational waves. I will review the status of the search for waves emitted during the final moments of binary systems containing black holes and neutron stars. I will describe how information from numerical modeling of binary black holes is being used to improve current and future searches and discuss how observations of these systems will bring us new knowledge of both fundamental physics and astrophysics. | 0.807953 | 3.799803 |
The World From on High
A medieval map combined with a view of the Earth from space is a reminder of humanity’s ancient desire to chart the world from above.
On February 11th, 2016 the British astronaut Tim Peake tweeted a picture from aboard the International Space Station (ISS). Captioned ‘a copy of one of the oldest maps in Britain, now exploring the newest frontier here in space’, it showed a facsimile of the English Hereford map (c.1300), on display at the city’s cathedral, where it has been for at least 400 years.
The 700-year-old Hereford map offers a unique insight into what medieval people thought the Earth looked like. The map, drawn onto a single piece of vellum measuring 5ft 2in by 4ft 4in, is oriented in the true sense of the word with east (Latin oriens) at the top and has Jerusalem at its centre. The map is of the type known as the T-O: the O of the inhabited world is divided into the three known continents – Europe, Africa and Asia – by the watery T formed by the intersection of the rivers Don and Nile and the Mediterranean, whose position in the middle (medius) of the Earth (terra) is clear on Peake’s photograph.
With its 1,091 inscriptions, the map was an attempt to summarise human knowledge in fields as diverse as geography, ethnography, zoology and history. The inscription that shows the location of Hereford has at some point been rubbed away and rewritten, probably as a result of generations of viewers marking their place in the world with their fingers.
What is striking about Peake’s image is how alike the two views of the Earth are. Both show us circles of lands indented with darker bays and seas. Though ‘the newest frontier here in space’ has only recently become accessible to us, the longing for an orbital view of the Earth is not new.
In his Phaedo, Plato, in the fourth century BC, described the Earth from above as a patchwork of terrains and vegetations, stitched together like a leather ball. Another legend, from the fourth century AD, has Alexander the Great build a flying machine – made from a basket tethered to two griffins – which he used to fly to such a height that the Earth looked to him like a threshing floor encircled by a serpent. Mathematical proofs of the sphericity of the Earth have been around since antiquity and medieval people were aware that they inhabited a globe. The English mystic Julian of Norwich (c.1342-c.1416) wrote that she was shown the Earth in a vision: she held it – as round as a ball and about the size of a hazelnut – in the palm of her hand.
The most influential orbital view of the Earth, however, appeared in Cicero’s De re publica (54-51 BC). This dialogue on Roman politics concluded with the dream of the Roman military tribune Scipio Aemilianus (185-129 BC), in which he is visited by his deceased grandfather, the renowned general Scipio Africanus, and taken up to the sky. From ‘a high place full of stars, shining and splendid’ Scipio observes the cosmos and its stellar workings: the Milky Way appeared as a circle of brightest white, the stars were orbs far exceeding the Earth in size and the moon shone with light borrowed from the sun. Scipio despairs at the smallness of the Earth and, even more, that of the Roman Empire, exclaiming that ‘the Earth itself appeared to me so small, that it grieved me to think of our empire, with which we cover but a point, as it were, of its surface’.
Scipio’s cosmic vision became a cornerstone for cosmographical thought in the High Middle Ages through Macrobius’s Commentary on the Dream of Scipio, written in the early fifth century. Macrobius explained Cicero’s literary allusions to the shape and nature of the cosmos, employing a series of maps and diagrams to show the spherical Earth in relation to the celestial sphere – the convexity of the night sky on which the stars appear to turn around the Earth – and delineate the globe into climatic zones based on latitude. The Hereford map belongs to a similar intellectual world: its circle of lands does not represent a flat disk world, but rather the inhabited part of the northern hemisphere.
Peake’s view of the Earth has been anticipated for millennia. Antique and medieval thinkers lacked the technological but not the imaginative means to put themselves into orbit and look down on the Earth. Looking out of the window of the ISS onto a world where, in the astronomer Carl Sagan’s words, ‘everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives’, gives us a share of the breathless wonder with which the ancients mapped their world.
Dale Kedwards is a historian of medieval maps at the University of Zurich. | 0.871739 | 3.115403 |
China’s unmanned Yutu rover, also called Jade Rabbit, has found an entirely new basaltic rock on the moon. The rover, carried by the Chang’e-3 lander, launched in 2013. It has been exploring an ancient flow of volcanic lava in the Mare Imbrium on the moon. The new material is unlike anything collected by the American and Soviet missions during the 1960s and 1970s. The discovery was published Tuesday in the journal Nature Communications.
This rock is intermediate in titanium
China’s unmanned mission Chang’e-3 put down the Yutu rover on a geologically young lava flow that was formed about 3 billion years ago. The basaltic rock identified by the rover had “unique compositional characteristics.” The study is expected to throw new light on the origins of the Earth’s nearest neighbor. It was a surprise for planetary scientists and geochemists.
Basalts collected by the US and Soviet missions were either high or low in titanium. But the new substance is intermediate in titanium and very rich in iron oxide. Bradley Jolliff of Washington University, the only American in the Chinese team, told the Guardian that the diversity suggests the moon’s upper mantle is far less uniform in composition than our planet’s. Researchers can see how the moon’s volcanism changed over time by correlating chemistry with age.
Yutu rover may help reveal the composition of the moon’s interior
The moon is believed to have formed when a Mars-sized body called Theia crashed into Earth early in the history of the solar system. The debris and rocks from the collision coalesced and cooled to form the moon. But radioactive elements in the interior heated up the rock under the crust for about 500 million years. As a result, volcanic lava slurped into impact craters to form the “seas” or maria.
Since volcanic activity brings minerals from the center to the surface of a planetary body, understanding the volcanic rocks could help researchers determine the lunar composition. Minerals in molten rock usually crystallize at different temperatures. So, the surface rock may offer clues about the deep interior of the moon. | 0.841593 | 3.55845 |
Crescent ♏ Scorpio
Moon phase on 1 January 2095, Saturday, is Waning Crescent; the 24-day-old Moon is in Scorpio.
The previous main lunar phase was the Last Quarter, 2 days earlier on 29 December 2094 at 20:27.
The Moon rises between midnight and early morning and sets in the afternoon. It is visible in the early morning, low in the east.
The Moon is passing through about ∠11° of the ♏ Scorpio tropical zodiac sector.
The lunar disc appears visually about 7.4% narrower than the solar disc; the Moon's and Sun's apparent angular diameters are ∠1807" and ∠1951".
The next Full Moon is the Wolf Moon of January 2095, 19 days from now, on 20 January 2095 at 12:48.
There is a low ocean tide on this date. The Sun's and Moon's gravitational forces are not aligned but meet at a wide angle, so their combined tidal force is weak.
The Moon is 24 days old. Earth's natural satellite is moving from the second into the final part of the current synodic month. This is lunation 1174 of the Meeus index, or 2127 of the Brown series.
The current lunation, 1174, is 29 days, 13 hours and 43 minutes long – 1 hour and 48 minutes longer than the next lunation, 1175.
The current synodic month is 59 minutes longer than the mean synodic month, but still 6 hours and 4 minutes shorter than the 21st century's longest.
This lunation's true anomaly is ∠287.7°. At the beginning of the next synodic month it will be ∠317.1°. The length of upcoming synodic months will keep decreasing as the true anomaly approaches the value for a New Moon at the point of perigee (∠0° or ∠360°).
It is 4 days after the point of apogee on 28 December 2094 at 08:42 in ♍ Virgo. The lunar orbit is getting closer, with the Moon moving toward Earth. It will keep this direction for the next 7 days, until it reaches the next perigee on 9 January 2095 at 02:21 in ♒ Aquarius.
The Moon is 396 613 km (246 444 mi) from Earth on this date. It moves closer over the next 7 days until perigee, when the Earth-Moon distance will reach 364 985 km (226 791 mi).
Ten days after its ascending node on 21 December 2094 at 16:09 in ♊ Gemini, the Moon is following the northern part of its orbit for the next 3 days, until it crosses the ecliptic from north to south at the descending node on 5 January 2095 at 02:54 in ♐ Sagittarius.
Ten days after the beginning of the current draconic month in ♊ Gemini, the Moon is moving from its beginning into its first part.
It is 9 days after the previous north standstill on 22 December 2094 at 16:45 in ♋ Cancer, when the Moon reached a northern declination of ∠24.122°. Over the next 4 days the lunar orbit moves southward, to reach a southern declination of ∠-24.121° at the next southern standstill on 6 January 2095 at 01:32 in ♑ Capricorn.
After 4 days, on 6 January 2095 at 09:33 in ♑ Capricorn, the Moon will be in New Moon geocentric conjunction with the Sun, an alignment that forms the next Sun-Moon-Earth syzygy.
Welcome back to our series on Colonizing the Solar System! Today, we take a look at the largest of Saturn’s Moons – Titan, Rhea, Iapetus, Dione, Tethys, Enceladus, and Mimas.
From the 17th century onward, astronomers made some profound discoveries around the planet Saturn, which they believed was the most distant planet of the Solar System at the time. Christiaan Huygens and Giovanni Domenico Cassini were the first, spotting the largest moons of Saturn – Titan, Tethys, Dione, Rhea and Iapetus. More discoveries followed, and today what we recognize as the Saturn system includes 62 confirmed satellites.
What we know of this system has grown considerably in recent decades, thanks to missions like Voyager and Cassini. And with this knowledge have come multiple proposals for how Saturn’s moons might someday be colonized. In addition to boasting the only body other than Earth to have a dense, nitrogen-rich atmosphere, there are also abundant resources in this system that could be harnessed.
Much like the idea of colonizing the Moon, Mars, the moons of Jupiter, and other bodies in the Solar System, the idea of establishing colonies on Saturn’s moons has been explored extensively in science fiction. At the same time, scientific proposals have been made that emphasize how colonies would benefit humanity, allowing us to mount missions deeper into space and ushering in an age of abundance!
Examples in Fiction:
The colonization of Saturn has been a recurring theme in science fiction over the decades. For example, in Arthur C. Clarke’s 1976 novel Imperial Earth, Titan is home to a human colony of 250,000 people. The colony plays a vital role in commerce, where hydrogen is taken from the atmosphere of Saturn and used as fuel for interplanetary travel.
In Piers Anthony’s Bio of a Space Tyrant series (1983-2001), Saturn’s moons have been colonized by various nations in a post-diaspora era. In this story, Titan has been colonized by the Japanese, whereas Saturn has been colonized by the Russians, Chinese, and other former Asian nations.
In the novel Titan (1997) by Stephen Baxter, the plot centers on a NASA mission to Titan which must struggle to survive after crash landing on the surface. In the first few chapters of Stanislaw Lem’s Fiasco (1986), a character ends up frozen on the surface of Titan, where they are stuck for several hundred years.
In Kim Stanley Robinson’s Mars Trilogy (1996), nitrogen from Titan is used in the terraforming of Mars. In his novel 2312 (2012), humanity has colonized several of Saturn’s moons, which includes Titan and Iapetus. Several references are made to the “Enceladian biota” in the story as well, which are microscopic alien organisms that some humans ingest because of their assumed medicinal value.
As part of his Grand Tour Series, Ben Bova’s novels Saturn (2003) and Titan (2006) address the colonization of the Cronian system. In these stories, Titan is being explored by an artificially intelligent rover which mysteriously begins malfunctioning, while a mobile human Space Colony explores the Rings and other moons.
In his book Entering Space: Creating a Spacefaring Civilization (1999), Robert Zubrin advocated colonizing the outer Solar System, a plan which included mining the atmospheres of the outer planets and establishing colonies on their moons. In addition to Uranus and Neptune, Saturn was designated as one of the largest sources of deuterium and helium-3, which could drive the pending fusion economy.
He further identified Saturn as being the most important and most valuable of the three, because of its relative proximity, low radiation, and excellent system of moons. Zubrin claimed that Titan is a prime candidate for colonization because it is the only moon in the Solar System to have a dense atmosphere and is rich in carbon-bearing compounds.
On March 9th, 2006, NASA’s Cassini space probe found possible evidence of liquid water on Enceladus, which was confirmed by NASA in 2014. According to data derived from the probe, this water emerges from jets around Enceladus’ southern pole, and is no more than tens of meters below the surface in certain locations. This would make collecting water considerably easier than on a moon like Europa, where the ice sheet is several km thick.
Data obtained by Cassini also pointed towards the presence of volatile and organic molecules. And Enceladus also has a higher density than many of Saturn’s moons, which indicates that it has a larger average silicate core. All of these resources would prove very useful for the sake of constructing a colony and providing basic operations.
In October of 2012, Elon Musk unveiled his concept for a Mars Colonial Transporter (MCT), which was central to his long-term goal of colonizing Mars. At the time, Musk stated that the first unmanned flight of the Mars transport spacecraft would take place in 2022, followed by the first manned MCT mission departing in 2024.
In September 2016, during the 2016 International Astronautical Congress, Musk revealed further details of his plan, which included the design for an Interplanetary Transport System (ITS) and estimated costs. This system, which was originally intended to transport settlers to Mars, had evolved in its role to transport human beings to more distant locations in the Solar System – which could include the Jovian and Cronian moons.
Compared to other locations in the Solar System – like the Jovian system – Saturn’s largest moons are exposed to considerably less radiation. For instance, Jupiter’s moons Io, Ganymede and Europa are all subject to intense radiation from Jupiter’s magnetic field – ranging from about 8 to 3,600 rem per day. This amount of exposure would be fatal (or at least very hazardous) to human beings, requiring that significant countermeasures be in place.
In contrast, Saturn’s radiation belts are significantly weaker than Jupiter’s – with an equatorial field strength of 0.2 gauss (20 microtesla) compared to Jupiter’s 4.28 gauss (428 microtesla). This field extends from about 139,000 km from Saturn’s center out to a distance of about 362,000 km – compared to Jupiter’s, which extends to a distance of about 3 million km.
Of Saturn’s largest moons, Mimas and Enceladus fall within this belt, while Dione, Rhea, Titan, and Iapetus all have orbits that place them from just outside of Saturn’s radiation belts to well beyond it. Titan, for example, orbits Saturn at an average distance (semi-major axis) of 1,221,870 km, putting it safely beyond the reach of the gas giant’s energetic particles. And its thick atmosphere may be enough to shield residents from cosmic rays.
In addition, frozen volatiles and methane harvested from Saturn’s moons could be used for the sake of terraforming other locations in the Solar System. In the case of Mars, nitrogen, ammonia and methane have been suggested as a means of thickening the atmosphere and triggering a greenhouse effect to warm the planet. This would cause water ice and frozen CO₂ at the poles to sublimate – creating a self-sustaining process of ecological change.
Colonies on Saturn’s moons could also serve as bases for harvesting deuterium and helium-3 from Saturn’s atmosphere. The abundant sources of water ice on these moons could also be used to make rocket fuel, thus serving as stopover and refueling points. In this way, colonizing the Saturn system could fuel Earth’s economy and facilitate exploration deeper into the outer Solar System.
Naturally, there are numerous challenges to colonizing Saturn’s moons. These include the distance involved, the necessary resources and infrastructure, and the natural hazards colonies on these moons would have to deal with. For starters, while Saturn may be abundant in resources and closer to Earth than either Uranus or Neptune, it is still very far.
On average, Saturn is approximately 1.27 billion km away from Earth; or ~8.5 AU, the equivalent of eight and a half times the average distance between the Earth and the Sun. To put that in perspective, it took the Voyager 1 probe roughly thirty-eight months to reach the Saturn system from Earth. For crewed spacecraft, carrying colonists and all the equipment needed to colonize the surface, it would take considerably longer to get there.
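For a feel of the numbers, here is a crude straight-line, constant-speed trip-time estimate. It ignores orbital mechanics, gravity assists, and launch windows, and the cruise speeds are assumed values chosen purely for comparison:

```python
# Crude trip-time estimate: straight-line distance at a constant cruise
# speed. Purely illustrative -- real trajectories are longer and faster
# speeds assume propulsion that does not yet exist.

AU_KM = 1.496e8
earth_saturn_km = 8.5 * AU_KM  # the ~8.5 AU average figure quoted above

def trip_months(cruise_speed_km_s):
    seconds = earth_saturn_km / cruise_speed_km_s
    return seconds / (3600 * 24 * 30.44)  # 30.44 days ~ one average month

# ~17 km/s is roughly Voyager-class; the faster values are hypothetical
# targets for nuclear-thermal or more advanced drives.
for v in (17, 40, 80):
    print(f"{v:3d} km/s -> ~{trip_months(v):.0f} months")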
These ships, to avoid being overly large and expensive, would need to rely on cryogenics or hibernation-related technology to save room on storage and accommodations. While this sort of technology is being investigated for crewed missions to Mars, it is still very much in the research and development phase.
Any vessels involved in the colonization efforts, or used to ship resources to and from the Cronian system, would also need to have advanced propulsion systems to ensure that they could make the trips in a realistic amount of time. Given the distances involved, this would likely require rockets that used nuclear-thermal propulsion, or something even more advanced (like anti-matter rockets).
And while the former is technically feasible, no such propulsion systems have been built just yet. Anything more advanced would require many more years of research and development, and a major commitment in resources. All of this, in turn, raises the crucial issue of infrastructure.
Basically, any fleet operating between Earth and Saturn would require a network of bases between here and there to keep them supplied and fueled. So really, any plans to colonize Saturn’s moons would have to wait upon the creation of permanent bases on the Moon, Mars, the Asteroid Belt, and most likely the Jovian moons. This process would be prohibitively expensive by current standards and (again) would require a fleet of ships with advanced drive systems.
And while radiation is not a major threat in the Cronian system (unlike around Jupiter), the moons have been subject to a great deal of impacts over the course of their history. As a result, any settlements built on the surface would likely need additional protection in orbit, like a string of defensive satellites that could redirect comets and asteroids before they reached orbit.
Given its abundant resources, and the opportunities it would present for exploring deeper into the Solar System (and maybe even beyond), Saturn and its system of moons is nothing short of a major prize. On top of that, the prospect of colonizing there is a lot more appealing than other locations that come with greater hazards (i.e. Jupiter’s moons).
However, such an effort would be daunting and would require a massive multi-generational commitment. And any such effort would most likely have to wait upon the construction of colonies and/or bases in locations closer to Earth first – such as on the Moon, Mars, the Asteroid Belt, and around Jupiter. But we can certainly hold out hope for the long run, can’t we?
We have written many interesting articles on colonization here at Universe Today. Here’s Why Colonize the Moon First?, How Do We Colonize Mercury?, How Do We Colonize Venus?, Colonizing Venus with Floating Cities, Will We Ever Colonize Mars?, How Do We Colonize Jupiter’s Moons?, and The Definitive Guide to Terraforming.
Astronomy Cast also has many interesting episodes on the subject. Check out Episode 59: Saturn, Episode 61: Saturn’s Moons, Episode 95: Humans to Mars, Part 2 – Colonists, Episode 115: The Moon, Part 3 – Return to the Moon, and Episode 381: Hollowing Asteroids in Science Fiction. | 0.893854 | 3.183989 |
August 25, 2014 – Today marks the 25th anniversary of the historic encounter of NASA’s Voyager 2 spacecraft with Neptune and its planet-sized moon, Triton. The Voyager 2 expedition delivered the first close-up look at the unexplored planet.
Tom Spilker, who was a member of the Voyager 2 radio science team recalls, “I got this overwhelming feeling inside, as if I was standing in the bow of Captain Cook’s expedition into the Gulf of Alaska for the very first time. We were going to places where no one had ever gone before – we were explorers.”
Voyager’s visit to the Neptune system revealed previously unseen features of Neptune itself, such as the Great Dark Spot, a massive storm similar to, but not as long-lived as, Jupiter’s Great Red Spot. Voyager also captured clear images of the ice giant’s ring system, too faint to be clearly viewed from Earth. Voyager scientists were also amazed to see that Triton has active geysers.
Voyager 1 and 2 were launched 16 days apart in 1977, and at least one of the spacecraft visited Jupiter, Saturn, Uranus and Neptune. Voyager 1 now is the most distant human-made object, about 12 billion miles (19 billion kilometers) away from the sun. In 2012, it became the first human-made object to venture into interstellar space. Voyager 2, the longest continuously operated spacecraft, is about 9 billion miles (15 billion kilometers) away from our sun.
The Voyager spacecraft were built and continue to be operated by NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California. The Voyager missions are part of NASA’s Heliophysics System Observatory, sponsored by the Heliophysics Division of the Science Mission Directorate.
The Laboratory for Atmospheric and Space Physics in Boulder provided the Photopolarimeter System for the Voyager expeditions. By studying the data from this instrument, scientists gained invaluable information about the surface texture and composition of Jupiter, Saturn, Uranus and Neptune, along with information on the size distribution and composition of Saturn’s and Uranus’ rings. It also collected information on atmospheric scattering properties and density for all planets.
In a strange coincidence, the New Horizons spacecraft, which is the first mission sent to explore dwarf planet Pluto and the Kuiper Belt beyond, also crossed Neptune’s orbit today.
“It’s a cosmic coincidence that connects one of NASA’s iconic past outer solar system explorers, with our next outer solar system explorer,” said Jim Green, director of NASA’s Planetary Science Division, NASA Headquarters in Washington. “Exactly 25 years ago at Neptune, Voyager 2 delivered our ‘first’ look at an unexplored planet. Now it will be New Horizons’ turn to reveal the unexplored Pluto and its moons in stunning detail next summer on its way into the vast outer reaches of the solar system.”
Several senior members of the New Horizons science team were young members of Voyager’s science team in 1989. Many remember how Voyager 2’s approach images of Neptune and its planet-sized moon Triton fueled anticipation of the discoveries to come. They share a similar, growing excitement as New Horizons begins its approach to Pluto.
The Johns Hopkins University Applied Physics Laboratory (APL) manages the New Horizons mission for NASA’s Science Mission Directorate; Alan Stern, of the Southwest Research Institute (SwRI) in Boulder, is the principal investigator and leads the mission. SwRI leads the science team, payload operations and encounter science planning; APL designed, built and operates the New Horizons spacecraft. New Horizons is part of the New Frontiers Program managed by NASA’s Marshall Space Flight Center in Huntsville, Alabama.
New Horizons is the first mission in NASA’s New Frontiers program. The spacecraft is expected to pass Pluto at its closest approach next July. | 0.828218 | 3.63082 |
How do you peer into the dark heart of a vampire star? Try combining four telescopes! At ESO’s Paranal Observatory they created a virtual telescope 130 metres across with vision 50 times sharper than the NASA/ESA Hubble Space Telescope and observed a very unusual event… the transfer of mass from one star to another. While you might assume this to be a violent action, it turns out that it’s a gradual drain. Apparently SS Leporis stands for “super slow”.
“We can now combine light from four VLT telescopes and create super-sharp images much more quickly than before,” says Nicolas Blind (IPAG, Grenoble, France), who is the lead author on the paper presenting the results, “The images are so sharp that we can not only watch the stars orbiting around each other, but also measure the size of the larger of the two stars.”
This stellar duo, cataloged as SS Leporis, is separated by slightly more than one astronomical unit and has an orbital period of 260 days. Of the two, the more massive and cooler member has expanded to roughly the size of Mercury’s orbit. It is this very expansion that brings its outer layers close enough for the hot companion to feed on its host, consuming almost half of its mass. Weird? You bet.
“We knew that this double star was unusual, and that material was flowing from one star to the other,” says co-author Henri Boffin, from ESO. “What we found, however, is that the way in which the mass transfer most likely took place is completely different from previous models of the process. The ‘bite’ of the vampire star is very gentle but highly effective.”
The technique of combining telescopes gives us an incredibly candid image, one which shows that the larger star isn’t quite as large as surmised. Rather than clarifying the picture, this complicates it. Just how did the red giant lose matter to its companion? Researchers suspect that rather than streaming directly from one star to the other, material may have been released in a stellar wind, only to be collected by the vampire companion.
“These observations have demonstrated the new snapshot imaging capability of the Very Large Telescope Interferometer. They pave the way for many further fascinating studies of interacting double stars,” concludes co-author Jean-Philippe Berger.
Where’s van Helsing when you need him?
Original Story Source: ESO Press Release For Further Reading: An Incisive Look At The Symbiotic Star SS Leoporis. | 0.841247 | 3.772615 |
Earth Faces an Increased Risk of Being Hit by an Asteroid, Astronomers Warn
Large asteroids may be lurking undiscovered within a meteoroid stream whose particles are hitting Earth, and scientists are urging a concentrated search for them.
Earth may be threatened by a newly discovered branch of a stream of meteoroids, increasing the risk that the planet will be struck by a meteoroid or asteroid.
A team of astronomers from the Czech Academy of Sciences announced the findings on Tuesday after studying the Taurid meteoroid stream. The stream produces a meteor shower that usually has a long period of activity in October and November and produces a low number of meteors. The meteors — light phenomena that are seen when a meteoroid enters the planet’s atmosphere and vaporizes, also referred to as “shooting stars” — occur when Earth’s orbit plows into the stream of debris left behind by Comet Encke.
Most of these particles are quite tiny and pose no threat whatsoever, but the Czech astronomers have tracked a new branch of the stream from which particles are intersecting with the planet. The branch includes two asteroids with diameters of between 200 and 300 meters (roughly 650-1000 feet). These asteroids are not themselves on a collision course with Earth, but their identification suggests that there may be other asteroids of this size or larger lurking undiscovered within this stream.
As such, the astronomers are urging a concentrated search for more Taurid asteroids, to see if any potentially threatening ones exist.
“Since asteroids of sizes of tens to hundreds meters pose a threat to the ground even if they are intrinsically weak, impact hazard increases significantly when the Earth encounters the Taurid new branch every few years,” they write in the journal Astronomy & Astrophysics. “Further studies leading to [a] better description of this real source of potentially hazardous objects, which can be large enough to cause significant regional or even continental damage on the Earth, are therefore extremely important.”
It’s worth noting, however, that no threatening objects have yet been discovered. Though the prospect of continental damage and regional catastrophe coming from space is alarming, more observations will be needed before drawing conclusions based on the Czech team’s research.
NASA is regularly working to anticipate the possible collision of a massive cosmic particle with Earth and assess any potential impact risks. It operates a collision monitoring system called Sentry that routinely scans for asteroids and determines the likelihood of impact over the next 100 years. It also freely catalogs these rocky bodies at the JPL Small-Body Database Browser.
The debris stream from Comet Encke is influenced in part by the gravity of Jupiter, a massive gas giant planet that is known to influence the orbits of comets and asteroids in that region of the solar system. As such, from time to time Jupiter’s gravity can redirect the debris so that more particles hit the Earth.
During one of these “enhanced” Taurid meteor showers in 2015, caused by Jupiter’s redirection of the Encke debris stream, the astronomers from the Czech Academy of Sciences analyzed 144 Taurid fireballs — meteors that produce a large flash when they hit the atmosphere. Observations were taken from the European Fireball Network.
Of the analyzed fireballs, about 113 of them have common characteristics, including sharing approximately the same orbit. The astronomers concluded, based on previous showers, that these fireballs come from a new branch of the Taurid stream being redirected by Jupiter’s gravity.
Next, the astronomers analyzed the orbital paths of asteroids 2015 TX24 and 2005 UR. Due to their similarities to the newly discovered path of the Taurid fireballs, the astronomers in turn argued that these two asteroids — each about 200-300 meters in diameter — are also members of the new branch.
Analysis of the fireballs suggests that the large asteroids are weak in structure, although at larger sizes they would not break up easily in the Earth’s atmosphere.
A few years ago, NASA created a Planetary Defense Coordination Office to bring together the observations of all US networks looking for near-Earth asteroids, and to help come up with plans in the unlikely chance that the Earth is under threat. You can read more about the office’s work at its website. Other countries and organizations, such as the European Union, have similar asteroid observation networks in place.
Tools for Dark Matter in Particle and Astroparticle Physics
Despite several indirect confirmations of the existence of dark matter, the properties of a new dark matter particle are still largely unknown. Several experiments are currently searching for this particle underground in direct detection, in space and on earth in indirect detection and at the LHC. A confirmed signal could select a model for dark matter among the many extensions of the standard model. In this paper we present a short review of the public codes for computation of dark matter observables.
Joint Institute for Nuclear Research (JINR) 141980, Dubna, Russia
Nowadays there are two crucial problems in particle physics: the search for the Higgs particle, or more generally the unravelling of the mechanism of symmetry breaking, and the nature of Dark Matter (DM). The Higgs particle, responsible for symmetry breaking, is the cornerstone of the Standard Model (SM) and of some of its extensions. As long as the Higgs particle escapes detection there will be a missing link in our understanding of the nature of fundamental interactions and the SM will be incomplete. Concerning the nature of DM we are facing a different issue: there is robust experimental evidence for DM, yet we have no direct evidence for the existence of a stable massive particle which can play the role of DM. For this we have to consider extensions of the SM and assume an additional discrete unbroken symmetry, for example a Z2 parity. This symmetry not only allows the lightest particle of the new physics model to be stable but it also usually makes this model conform more naturally with current data.
Let us briefly review the experimental evidence for dark matter. First, the radial dependence of the rotation curves of galaxies gives strong evidence for DM. Typical rotation curves in spiral galaxies show a plateau at a distance of several kpc from the galactic center, see Fig. 1(a). The numerical value of the velocity at large distances is significantly larger than expected assuming only visible matter. Furthermore, such a plateau implies a gravitational mass that increases linearly with the galactic radius; this does not correspond to the distribution of visible matter. The rotation curve of the Milky Way allows one to estimate the density of DM at the Sun's orbit. The resulting local density enters the computation of the signals for DM direct and indirect detection that will be described below.
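The linear growth of enclosed mass implied by a flat rotation curve can be made concrete with a short calculation. The sketch below is illustrative only: the inputs (a circular speed of about 220 km/s at the Sun's orbit, at roughly 8 kpc) are standard textbook numbers, not values taken from this paper.

```python
# For a flat rotation curve, v(r) = const implies an enclosed
# gravitational mass M(r) = v^2 r / G that grows linearly with radius:
# the classic dark-matter signature discussed in the text.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
KPC = 3.086e19         # metres per kiloparsec
M_SUN = 1.989e30       # solar mass, kg

def enclosed_mass(v_kms, r_kpc):
    """Mass inside radius r implied by circular speed v: M = v^2 r / G."""
    v = v_kms * 1e3
    r = r_kpc * KPC
    return v**2 * r / G

# Flat curve: doubling the radius doubles the implied enclosed mass.
m1 = enclosed_mass(220.0, 8.0)
m2 = enclosed_mass(220.0, 16.0)
print(m1 / M_SUN)      # roughly 9e10 solar masses inside the Sun's orbit
```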
Very precise estimates of the amount of DM were obtained from the WMAP measurement of fluctuations in the microwave background temperature [1, 2]. Temperature fluctuations, Fig. 1(b), are connected to fluctuations in the gravitational potential at the time of last scattering. Because ordinary matter is in a plasma at high temperatures, it cannot generate such fluctuations. Precise numerical analyses of the WMAP results allow one to extract the total density of DM particles at the time of last scattering. Assuming that the number of DM particles has not changed since then, we can infer the present DM density, conveniently expressed as the ratio Ω_DM = ρ_DM/ρ_c, where ρ_c is the critical density. This amount of DM is in good agreement with simulations of large structure formation in the Universe. Indeed, baryonic matter by itself is not able to create galaxies because of the fast expansion rate of the Universe. Because DM particles have to be non-relativistic at the time of last scattering, the WMAP measurements also imply a lower bound on their mass, sufficient to rule out neutrinos as the main component of DM. It is therefore necessary to extend the SM to explain the nature of DM. The precise measurement of Ω_DM provides a powerful means to discriminate between the various extensions of the SM that propose a DM candidate.
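The density ratio Ω = ρ/ρ_c can be evaluated numerically from the definition ρ_c = 3H₀²/(8πG). The sketch below uses illustrative round numbers (H₀ = 70 km/s/Mpc and Ω_DM = 0.23) that are assumptions for the example, not values quoted in the text.

```python
import math

# Critical density of the Universe and the corresponding mean cosmic
# DM density, from rho_c = 3 H0^2 / (8 pi G).

G = 6.674e-11                 # m^3 kg^-1 s^-2
MPC = 3.086e22                # metres per megaparsec

def critical_density(h0_kms_mpc):
    h0 = h0_kms_mpc * 1e3 / MPC          # Hubble rate in s^-1
    return 3.0 * h0**2 / (8.0 * math.pi * G)

rho_c = critical_density(70.0)
rho_dm = 0.23 * rho_c                    # assumed Omega_DM = 0.23
print(rho_c)    # of order 1e-26 kg/m^3
```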
There are three kinds of astroparticle experiments which allow in principle to detect DM particles and measure some of their properties. First, experiments for indirect DM detection such as PAMELA [3, 6], HEAT, AMS01, Fermi [7, 15], ATIC, HESS [9, 13, 14], INTEGRAL, Veritas and EGRET try to observe the products of DM self-annihilation in the galactic halo. The SM particles that are produced in this annihilation will decay to stable particles, including photons, positrons, antiprotons and neutrinos. Indirect detection experiments search primarily for photons, positrons and antiprotons, since other channels suffer from a very large background and the neutrino signal is expected to be low. Interpreting the results of indirect detection experiments requires a good understanding of both the background caused by galactic sources and the structure of the galactic magnetic fields responsible for the propagation of charged particles. For instance, the excess of positrons recently observed by PAMELA can be caused either by some exotic DM or by a galactic source such as a supernova. Furthermore, large uncertainties in the signal can be caused by a clumpy structure in the DM distribution (this can increase the signal by a factor of 20).
Direct detection experiments such as Edelweiss, DAMA, CDMS [18, 20], Xenon [19, 21], Zeplin or Cogent measure the recoil energy of the nuclei that would result from an elastic DM-nucleus collision in a large detector. To reduce the cosmic-ray background such detectors are located deep underground. We should mention that DAMA has for several years reported a positive result; such a signal has not yet been confirmed by other experiments.
High energy neutrinos, produced as a result of the annihilation of DM particles captured in the center of the Sun and the Earth, are searched for by Super-Kamiokande, Antares and IceCube. The rate of DM annihilation inside the Sun/Earth should be equal to the rate of DM capture by the Sun/Earth. These experiments are therefore similar to direct detection experiments, where the Sun or the Earth plays the role of the large detector. These experiments have not yet observed DM events.
Dark matter can also be detected at accelerators such as the Tevatron or the LHC. Despite the fact that the direct production of DM particles has a small cross section and that the DM particle escapes the detector without leaving a track, a DM particle could be detected at the LHC. Indeed, such a particle appears in the decay chains of new particles that can be directly produced at a collider, and its signature is a large amount of missing energy. It is therefore possible that the LHC will soon shed light on the two fundamental problems in particle physics: the Higgs and dark matter.
2 Short review of theoretical models for DM
Many extensions of the SM that can provide a DM candidate have been proposed. The best studied and most popular among these are supersymmetric models: the minimal supersymmetric model, MSSM [32, 31], and its extensions such as the NMSSM and the CPVMSSM. In these models R-parity conservation guarantees the stability of the DM particle. In models with flat or warped extra dimensions [37, 38], a parity that depends on the behaviour of the fields in the extra dimensions is responsible for the stability of DM. Furthermore, there are models with extended gauge or Higgs sectors [42, 43] as well as little Higgs models [39, 40, 41]. In these models the DM candidate can be either a Majorana fermion, a Dirac fermion, a vector boson or a scalar.
3 Calculations needed for DM analyses.
Because of the large number of astroparticle experiments and the large number of theoretical models, we need software tools for the computation of DM properties and of DM detection rates in the different experiments. General theoretical formulas for DM calculations are available in the literature. Detailed relic density calculations in the MSSM can be found in [28, 29], while direct detection formulas including loop corrections and subleading terms have also been obtained. The different tasks which have to be solved are:
Calculation of DM relic density. The formalism to calculate the DM density using the freeze-out mechanism, based on the DM annihilation cross sections, was developed in [28, 29]. One has to solve a differential equation that gives the temperature dependence of the DM density. The dependence on the underlying model appears via the calculation of the thermally averaged cross section for DM annihilation. A rough estimate gives an annihilation cross section of order 10⁻²⁶ cm³/s, which corresponds to a typical weak interaction cross section. Nevertheless, agreement with the WMAP results strongly constrains the parameters of the particle physics model. In addition to DM annihilation processes, processes involving other particles that are odd under the discrete symmetry and whose masses are just above that of the DM also contribute to the effective cross section, since eventually all these particles will decay into the DM and some other particles. The large number of processes involved, and the fact that a priori the matrix elements needed are not known, mean that relic density calculations in a generic model of DM can be challenging.
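As a rough illustration of this step (a minimal sketch, not the algorithm of any of the codes discussed here), the freeze-out equation for the comoving yield Y = n/s as a function of x = m/T can be integrated numerically. The constants LAM (the effective annihilation strength) and A (the equilibrium-yield prefactor) below are illustrative round numbers.

```python
import math

# Integrate dY/dx = -(LAM/x^2) * (Y^2 - Y_eq^2), with
# Y_eq(x) ~ A * x^(3/2) * exp(-x). A backward-Euler step is used
# because the equation is stiff near equilibrium.

LAM = 1.0e9          # illustrative ~ s(m)<sigma v>/H(m), dimensionless
A = 0.145            # illustrative equilibrium-yield prefactor

def y_eq(x):
    return A * x**1.5 * math.exp(-x)

def relic_yield(x_start=1.0, x_end=400.0, dx=0.01):
    x, y = x_start, y_eq(x_start)
    while x < x_end:
        x += dx
        c = dx * LAM / x**2
        # implicit step: solve c*Y^2 + Y - (y + c*Y_eq^2) = 0 for new Y
        y = (-1.0 + math.sqrt(1.0 + 4.0*c*(y + c*y_eq(x)**2))) / (2.0*c)
    return y

y_inf = relic_yield()
print(y_inf)   # the frozen-out comoving abundance, of order 1e-8 here
```

The yield first tracks equilibrium, then freezes out once the annihilation rate drops below the expansion rate, ending up many orders of magnitude above the extrapolated equilibrium value.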
Calculation of DM-nucleon cross sections. These are required for the prediction of rates in direct detection and neutrino telescope experiments. In the standard case one needs to compute the DM-nucleus scattering amplitude in the limit of small momentum transfer. This is obtained from the DM-nucleon amplitude, which is in turn related to DM-quark amplitudes.
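The small-momentum-transfer regime can be made concrete with standard two-body kinematics: the maximum nuclear recoil energy in an elastic collision is E_max = 2μ²v²/m_N, with μ the DM-nucleus reduced mass. The inputs below (a 100 GeV WIMP, a xenon-like nucleus, v ≈ 220 km/s) are illustrative assumptions, not numbers from the paper.

```python
# Maximum elastic recoil energy E_max = 2 mu^2 v^2 / m_N,
# with masses in GeV and the speed expressed as a fraction of c.

C_KMS = 299792.458            # speed of light, km/s
KEV_PER_GEV = 1.0e6

def max_recoil_kev(m_chi, m_nucleus, v_kms):
    mu = m_chi * m_nucleus / (m_chi + m_nucleus)   # reduced mass, GeV
    beta = v_kms / C_KMS
    return 2.0 * mu**2 * beta**2 / m_nucleus * KEV_PER_GEV

e_xe = max_recoil_kev(100.0, 122.0, 220.0)
print(e_xe)   # tens of keV: the energy scale direct searches target
```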
Calculation of indirect detection signal. In addition to the calculation of the DM annihilation cross section, the computation of the spectra of photons, positrons and antiprotons is required. The initial spectra can easily be obtained even for a generic model. For this one has to calculate all annihilation cross sections and extract the photon, positron and antiproton spectra using Pythia. Radiative processes might also have to be taken into account. The propagation of positrons and antiprotons can be treated by solving the diffusion equation with the Green function method. A more precise treatment, as well as the computation of the background, requires a Monte Carlo simulation.
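A toy version of the Green-function idea (pure one-dimensional diffusion, not the full galactic propagation model with energy losses and boundary conditions) can be sketched as follows; propagated spectra are then built by convolving the source distribution with this kernel.

```python
import math

# For df/dt = D d2f/dx2, a point source spreads as the Gaussian kernel
# G(x, t) = exp(-x^2 / 4Dt) / sqrt(4 pi D t).

def green(x, t, D=1.0):
    return math.exp(-x*x / (4.0*D*t)) / math.sqrt(4.0*math.pi*D*t)

# The kernel stays normalised to unit total particle number:
xs = [i * 0.01 - 20.0 for i in range(4001)]
norm = sum(green(x, t=2.0) for x in xs) * 0.01
print(norm)   # close to 1.0
```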
Calculation of low energy constraints. Several experimental measurements restrict the parameter space of SM extensions even though they are not directly related to DM observables. These include precision measurements such as the muon anomalous magnetic moment or rare B-decays. In most cases the theoretical predictions require the computation of higher order processes involving Feynman diagrams at the loop level. These are not yet completely automated.
LEP and Tevatron constraints. High energy collider experiments are probing the SM and its extensions. Their results can be used to put constraints on the Higgs mass, on new channels in Z decay and on the mass of heavy exotic particles including the supersymmetric partners of SM particles.
Calculation of LHC and ILC signals. The computation of signals associated with new particles produced at colliders includes the matrix element calculation for the production and for the decays of the new particles, as well as Monte Carlo phase space integration and the implementation of cuts. Several tools have been developed to perform these tasks.
4 Review of software for DM calculation
There are several public codes used in DM calculations which were designed for the study of physics beyond the SM; for a review of the different tools see [53, 54]. Several codes perform the computation of the particle mass spectrum in supersymmetric models; indeed large loop corrections are generic and need to be taken into account. These codes also solve the renormalization group equations in supersymmetric scenarios with fundamental parameters defined at the GUT scale. Four codes were developed in the framework of the MSSM, among them SoftSUSY, Isajet [58, 57] and SPheno, while NMSSMTools and CPsuperH deal with extensions of the MSSM. These codes also compute various low energy and collider constraints. A special file interface, SLHA [51, 52], was designed for these programs. This interface facilitates their use in DM-related codes. The package HiggsBounds was designed for testing LEP and Tevatron accelerator constraints on the Higgs sector in generic models. Such constraints are available in NMSSMTools and Isajet, but only for the specific class of models they support.
A very important tool for the analysis of indirect detection experiments is the GALPROP program. It provides a numerical solution of the differential equation that describes the propagation of different kinds of particles in the galactic magnetic fields. Although this code is rather slow, it allows one to take into account both DM signals and background galactic sources at the same time.
There are four public codes for DM studies in supersymmetry: SuperIso, IsaTools [58, 57], DarkSUSY and micrOMEGAs [60, 61, 62]. All perform the computation of the DM relic density together with other observables that are not necessarily related to DM. SuperIso is a rather new code that is primarily dedicated to flavour physics in the MSSM; the SLHA is used for interfacing spectrum calculators. IsaTools and DarkSUSY were also both developed for the MSSM. They calculate the direct detection and indirect detection rates as well as low energy and accelerator constraints. IsaTools uses Isajet to compute the particle spectrum, while DarkSUSY also uses SuSpect or the SLHA. DarkSUSY also calculates the neutrino rates from DM annihilation in the Sun and the Earth; furthermore, DarkSUSY includes the propagation of cosmic rays. In particular it is interfaced with GALPROP, which allows one to study both signal and background in indirect detection measurements. For MSSM applications, DarkSUSY is now the most complete package. On the other hand, IsaTools is based on Isajet, a tool for the computation of signals for the SM and its supersymmetric extensions at colliders, and is therefore most suited for DM accelerator studies.
micrOMEGAs is the only package for DM studies in generic extensions of the SM. Details of the techniques used in micrOMEGAs are explained in the next section. micrOMEGAs computes the DM relic density as well as direct detection and indirect detection rates. For the propagation of positrons and antiprotons, micrOMEGAs uses a Green function method which describes well the signals from DM annihilation but does not allow one to calculate the background. The neutrino rate from capture in celestial bodies is not yet implemented in micrOMEGAs. Low energy and collider constraints are provided for some models, and the predictions of collider signals are obtained from CalcHEP, which is included in micrOMEGAs. The current version of micrOMEGAs contains the MSSM, NMSSM, CPVMSSM, the Little Higgs model, and a Dirac neutrino DM model.
Comparisons of IsaTools/DarkSUSY/micrOMEGAs showed good agreement between the codes. In fact such cross checks were used to remove several bugs in these packages.
5 Applications of automatic matrix element calculators for dark matter studies.
In principle, a single universal program is not required to study DM properties in any given model. On the other hand, once such a tool has been tested and debugged for one specific extension of the SM, it can rapidly and straightforwardly be applied to other models as well. An automatic approach therefore increases the reliability of the software and considerably reduces both the time needed for developing new software and the time required for the user to become familiar with a new package.
As mentioned above, the most important computational task needed for DM studies is the computation of the matrix elements of the various reactions which occur in a specific model of particle physics. In recent years several automatic calculators of matrix elements have been developed: CompHEP, CalcHEP, FeynArts/FormCalc [65, 66, 67], MadGraph [68, 69], Sherpa, and Omega. In principle any of these could be used for DM-related calculations in a generic model. Currently the idea of automatic matrix element generation for DM observables in a generic model is realized in full scope only in the micrOMEGAs package. This approach was first applied to the computation of the relic density. A numerical algorithm for the calculation of the spin-dependent and spin-independent DM-nucleon amplitudes relevant for direct detection was then proposed and implemented. This algorithm, which can be applied to a generic model, replaces the usual symbolic computation of amplitudes by means of Fierz identities. Recently, an automatic approach for calculating the spectra of DM self-annihilation in the galaxy was designed, which takes into account processes with additional photon radiation.
The key point in micrOMEGAs' approach to DM calculations is the generation of shared libraries with matrix element code. The calculation of all matrix elements that enter a relic density calculation is time-consuming and requires a lot of disk space. However, for any particular set of model parameters, in general only a small number of annihilation channels is needed. micrOMEGAs therefore generates the code only for the channels as they are needed, links them dynamically and stores them on disk for subsequent use.
Note that the idea of using automatic calculators in DM codes was also realised in IsaTools and SuperIso, albeit only in the context of the MSSM. In IsaTools, CompHEP was used for generating (co-)annihilation cross sections, while SuperIso relies on FeynCalc to evaluate the cross sections. In principle both of these codes could be generalized to other models.
Several tools for the calculation of DM properties and of DM signals for current and future experiments are now available. The most developed codes at present are DarkSUSY and micrOMEGAs. The existence of several independent codes is very important for cross-checking results and for understanding the uncertainties which result from different technical implementations of the same algorithms. There are several auxiliary tools designed for the computation of particle spectra and couplings as well as for the calculation of low energy and high energy constraints. The development of interface protocols for data exchange between such programs is needed.
This work was supported in part by the GDRI-ACPP of CNRS, by the ANR project ToolsDMColl, BLAN07-2-194882, by the Russian foundation for Basic Research, RFBR-08-02-92499-a, RPBR-10-02-01443-a and by a State contract No.02.740.11.0244. The visit of A.P. to Jaipur was funded by the organizing committee and by the grant RFBR-10-07-08004-z.
- C. L. Bennett et al. [WMAP Collaboration], First Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Preliminary Maps and Basic Results, Astrophys. J. Suppl. 148, 1 (2003).
- D. N. Spergel et al. [WMAP Collaboration], First Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Determination of Cosmological Parameters, Astrophys. J. Suppl. 148, 175 (2003) [arXiv:astro-ph/0302209].
- O. Adriani et al. [PAMELA Collaboration], An anomalous positron abundance in cosmic rays with energies 1.5-100 GeV, Nature 458, 607 (2009) [arXiv:0810.4995 [astro-ph]].
- J. J. Beatty et al., New measurement of the cosmic-ray positron fraction from 5 GeV to 15 GeV, Phys. Rev. Lett. 93, 241102 (2004) [arXiv:astro-ph/0412230].
- M. Aguilar et al. [AMS-01 Collaboration], Cosmic-ray positron fraction measurement from 1 GeV to 30 GeV with AMS-01, Phys. Lett. B 646, 145 (2007) [arXiv:astro-ph/0703154].
- O. Adriani et al., A new measurement of the antiproton-to-proton flux ratio up to 100 GeV in the cosmic radiation, Phys. Rev. Lett. 102, 051101 (2009) [arXiv:0810.4994 [astro-ph]].
- A. A. Abdo et al. [Fermi LAT Collaboration], Measurement of the Cosmic Ray e+ plus e- spectrum from 20 GeV to 1 TeV with the Fermi Large Area Telescope, Phys. Rev. Lett. 102, 181101 (2009).
- J. Chang et al., An Excess Of Cosmic Ray Electrons At Energies Of 300-800 Gev, Nature 456, 362 (2008).
- F. Aharonian et al. [H.E.S.S. Collaboration], Probing the ATIC peak in the cosmic-ray electron spectrum with H.E.S.S, Astron. Astrophys. 508, 561 (2009) [arXiv:0905.0105 [astro-ph.HE]].
- A. W. Strong et al., Gamma-ray continuum emission from the inner Galactic region as observed with INTEGRAL/SPI, Astron. Astrophys. 444, 495 (2005) [arXiv:astro-ph/0509290].
- G. Maier [VERITAS Collaboration], Observation of Galactic Gamma-ray Sources with VERITAS, AIP Conf. Proc. 1085, 187 (2009) [arXiv:0810.0515 [astro-ph]].
- D. J. Thompson, Gamma ray astrophysics: the EGRET results, Rept. Prog. Phys. 71, 116901 (2008).
- F. Aharonian et al. [H.E.S.S. Collaboration], Discovery of Very-High-Energy Gamma-Rays from the Galactic Centre Ridge, Nature 439, 695 (2006) [arXiv:astro-ph/0603021].
- F. Aharonian et al. [HESS collaboration], H.E.S.S. upper limit on the very high energy gamma-ray emission from the globular cluster 47 Tucanae, arXiv:0904.0361 [astro-ph.HE].
- C. Meurer [Fermi LAT Collaboration], Dark Matter Searches with the Fermi Large Area Telescope, AIP Conf. Proc. 719, 1085 (2009) [arXiv:0904.2348 [astro-ph.HE]].
- R. Lemrani [EDELWEISS Collaboration], Search for dark matter with EDELWEISS: Status and future, Phys. Atom. Nucl. 69, 1967 (2006).
- R. Bernabei et al., DAMA investigations on dark matter at Gran Sasso: Results and perspectives, AIP Conf. Proc. 878, 91 (2006).
- D. S. Akerib et al. [CDMS Collaboration], CDMS, supersymmetry and extra dimensions, Nucl. Phys. Proc. Suppl. 173, 95 (2007) [arXiv:astro-ph/0609189].
- J. Angle et al. [XENON Collaboration], First Results from the XENON10 Dark Matter Experiment at the Gran Sasso National Laboratory, Phys. Rev. Lett. 100, 021303 (2008).
- Z. Ahmed et al. [The CDMS-II Collaboration], Results from the Final Exposure of the CDMS II Experiment, arXiv:0912.3592 [astro-ph.CO].
- E. Aprile et al. [XENON100 Collaboration], First Dark Matter Results from the XENON100 Experiment, arXiv:1005.0380 [astro-ph.CO].
- T. Sumner [UKDMC Collaboration], Direct Dark Matter Searches: Drift And Zeplin, PoS HEP2005, 003 (2006).
- C. E. Aalseth et al., Results from a Search for Light-Mass Dark Matter with a P- type Point Contact Germanium Detector, arXiv:1002.4703 [astro-ph.CO].
- S. Desai et al. [Super-Kamiokande Collaboration], Search for dark matter WIMPs using upward through-going muons in Super-Kamiokande, Phys. Rev. D 70, 083523 (2004) [Erratum-ibid. D 70, 109901 (2004)] [arXiv:hep-ex/0404025].
- J.P. Ernenwein [ANTARES Collaboration], Indirect dark matter search with the ANTARES neutrino telescope, PoS IDM2008 (2008) 036.
- C. DeClercq et al. [IceCube Collaboration], Search for dark matter with the AMANDA and IceCube neutrino detectors, PoS IDM2008 (2008) 034.
- G. Jungman, M. Kamionkowski and K. Griest, Supersymmetric dark matter, Phys. Rept. 267 (1996) 195 [arXiv:hep-ph/9506380].
- G. B. Gelmini, P. Gondolo and E. Roulet, Neutralino dark matter searches, Nucl. Phys. B 351, 623 (1991).
- J. Edsjo and P. Gondolo, Neutralino Relic Density including Coannihilations, Phys. Rev. D 56, 1879 (1997) [arXiv:hep-ph/9704361].
- H. Baer, X. Tata, Dark matter and the LHC, arXiv:0805.1905 [hep-ph].
- H. Goldberg, Constraint on the photino mass from cosmology, Phys. Rev. Lett. 50, 1419 (1983) [Erratum-ibid. 103, 099905 (2009)].
- J. R. Ellis, J. S. Hagelin, D. V. Nanopoulos, K. A. Olive and M. Srednicki, Supersymmetric relics from the big bang, Nucl. Phys. B 238, 453 (1984).
- U. Ellwanger, C. Hugonie and A. M. Teixeira, The Next-to-Minimal Supersymmetric Standard Model, arXiv:0910.1785 [hep-ph].
- A. Pilaftsis, CP-odd tadpole renormalization of Higgs scalar-pseudoscalar mixing, Phys. Rev. D 58, 096010 (1998) [arXiv:hep-ph/9803297].
- T. Bringmann, L. Bergstrom and J. Edsjo, New Gamma-Ray Contributions to Supersymmetric Dark Matter Annihilation, JHEP 0801, 049 (2008) [arXiv:0710.3169 [hep-ph]].
- M. Drees and M. Nojiri, Neutralino-Nucleon Scattering Revisited, Phys. Rev. D 48, 3483 (1993).
- H. C. Cheng, K. T. Matchev and M. Schmaltz, Radiative corrections to Kaluza-Klein masses, Phys. Rev. D 66, 036005 (2002) [arXiv:hep-ph/0204342].
- K. Agashe and G. Servant, Warped unification, proton stability and dark matter, Phys. Rev. Lett. 93, 231805 (2004) [arXiv:hep-ph/0403143].
- J. Hubisz and P. Meade, Phenomenology of the little Higgs with T-parity, Phys. Rev. D 71, 035016 (2005) [arXiv:hep-ph/0411264].
- A. Martin, Dark matter in the simplest little Higgs model, arXiv:hep-ph/0602206.
- A. Belyaev, C. R. Chen, K. Tobe and C. P. Yuan, Phenomenology of littlest Higgs model with T-parity: Including effects of T-odd fermions, Phys. Rev. D 74 (2006) 115020 [arXiv:hep-ph/0609179].
- J. McDonald, Gauge Singlet Scalars as Cold Dark Matter, Phys. Rev. D 50, 3637 (1994).
- V. Barger et al., Recoil detection of the lightest neutralino in MSSM singlet extensions, Phys. Rev. D 75, 115002 (2007) [arXiv:hep-ph/0702036].
- G. Belanger, A. Pukhov and G. Servant, Dirac Neutrino Dark Matter, JCAP 0801 009 (2008).
- J. S. Lee et al., CPsuperH: A computational tool for Higgs phenomenology in the minimal supersymmetric standard model with explicit CP violation, Comput. Phys. Commun. 156 (2004) 283.
- W. Porod, SPheno, a program for calculating supersymmetric spectra, SUSY particle decays and SUSY particle production at e+ e- colliders, Comput. Phys. Commun. 153 (2003) 275.
- B. C. Allanach, SOFTSUSY: A C++ program for calculating supersymmetric spectra, Comput. Phys. Commun. 143, 305 (2002).
- A. Djouadi, J. L. Kneur and G. Moultaka, SuSpect: A Fortran code for the supersymmetric and Higgs particle spectrum in the MSSM, Comput. Phys. Commun. 176, 426 (2007) [arXiv:hep-ph/0211331].
- U. Ellwanger and C. Hugonie, NMSPEC: A Fortran code for the sparticle and Higgs masses in the NMSSM with GUT scale boundary conditions, Comput. Phys. Commun. 177 (2007) 399.
- P. Bechtle et al., HiggsBounds: Confronting Arbitrary Higgs Sectors with Exclusion Bounds from LEP and the Tevatron, Comput. Phys. Commun. 181 (2010) 138.
- P. Skands et al., SUSY Les Houches Accord: Interfacing SUSY Spectrum Calculators, Decay Packages, and Event Generators, JHEP 0407, 036 (2004) [arXiv:hep-ph/0311123].
- B. Allanach et al., SUSY Les Houches Accord 2, Comput. Phys. Commun. 180 (2009) 8.
- P. Z. Skands et al., A repository for beyond-the-standard-model tools, http://www.ippp.dur.ac.uk/BSM/
- F. Boudjema, J. Edsjo, P. Gondolo, in Particle Dark Matter: Observations, Models and Searches, ed. G. Bertone, Cambridge University Press (2010) arXiv:1003.4748.
- A. W. Strong et al., The GALPROP Cosmic-Ray Propagation Code, arXiv:0907.0559 [astro-ph.HE].
- P. Gondolo et al., DarkSUSY: Computing supersymmetric dark matter properties numerically, JCAP 0407 (2004) 008 [arXiv:astro-ph/0406204].
- H. Baer, C. Balazs and A. Belyaev, Neutralino relic density in minimal supergravity with co-annihilations, JHEP 0203 (2002) 042 [arXiv:hep-ph/0202076].
- H. Baer, A. Belyaev, T. Krupovnickas and J. O’Farrill, Indirect, direct and collider detection of neutralino dark matter, JCAP 0408, 005 (2004) [arXiv:hep-ph/0405210].
- A. Arbey and F. Mahmoudi, SuperIso Relic: A program for calculating relic density and flavor physics observables in Supersymmetry, Comput. Phys. Commun. 181 (2010) 1277.
- G. Belanger, F. Boudjema, A. Pukhov and A. Semenov, micrOMEGAs2.0: A program to calculate the relic density of dark matter in a generic model, Comput. Phys. Commun. 176 (2007) 367.
- G. Belanger, F. Boudjema, A. Pukhov and A. Semenov, Dark matter direct detection rate in a generic model with micrOMEGAs2.1, Comput. Phys. Commun. 180, 747 (2009) [arXiv:0803.2360 [hep-ph]].
- G. Belanger, F. Boudjema, P. Brun, A. Pukhov, S. Rosier-Lees, P. Salati and A. Semenov, Indirect search for dark matter with micrOMEGAs2.4, [arXiv:1004.1092 [hep-ph]].
- E. Boos et al. [CompHEP Collaboration], CompHEP 4.4: Automatic computations from Lagrangians to events, Nucl. Instrum. Meth. A 534, 250 (2004) [arXiv:hep-ph/0403113].
- A. Pukhov, Calchep 2.3: MSSM, structure functions, event generation,and generation of matrix elements for other packages, [arXiv:hep-ph/0412191].
- T. Hahn, Generating Feynman diagrams and amplitudes with FeynArts 3, Comput. Phys. Commun. 140, 418 (2001) [arXiv:hep-ph/0012260].
- T. Hahn and M. Perez-Victoria, Automatized one-loop calculations in four and D dimensions, Comput. Phys. Commun. 118, 153 (1999) [arXiv:hep-ph/9807565].
- T. Hahn and J. I. Illana, Excursions into FeynArts and FormCalc, Nucl. Phys. Proc. Suppl. 160, 101 (2006) [arXiv:hep-ph/0607049].
- J. Alwall et al., MadGraph/MadEvent v4: The New Web Generation, JHEP 0709, 028 (2007).
- F. Maltoni and T. Stelzer, MadEvent: Automatic event generation with MadGraph, JHEP 0302, 027 (2003) [arXiv:hep-ph/0208156].
- T. Gleisberg et al., SHERPA 1.alpha, a proof-of-concept version, JHEP 0402, 056 (2004).
- M. Moretti, T. Ohl and J. Reuter, O’Mega: An optimizing matrix element generator, arXiv:hep-ph/0102195. | 0.888391 | 4.085375 |
A new image of comet 67P/Churyumov-Gerasimenko was taken by the European Space Agency's (ESA) Rosetta spacecraft shortly before its controlled impact into the comet's surface on Sept. 30, 2016. Confirmation of the end of the mission arrived at ESA's European Space Operations Center in Darmstadt, Germany, at 4:19 a.m. PDT (7:19 a.m. EDT / 1:19 p.m. CEST) with the loss of signal upon impact.
The final descent gave Rosetta the opportunity to study the comet's gas, dust and plasma environment very close to its surface, as well as take very high-resolution images.
The final image was taken from an altitude of 66 feet (20 meters) above the comet's surface by the spacecraft's OSIRIS wide-angle camera on Sept. 30. The initial report of 167 feet, or 51 meters, was based on the predicted impact time. Now that the time has been confirmed, and following additional information and timeline reconstruction, the estimated distance has been updated. Analysis is ongoing.
The image scale is about two-tenths of an inch (5 millimeters) per pixel. The image measures about 9 feet (2.4 meters) across.
The decision to end the mission on the surface is a result of Rosetta and the comet heading out beyond the orbit of Jupiter again. Farther from the sun than Rosetta had ever journeyed before, there would be little power to operate the craft. Mission operators were also faced with an imminent month-long period when the sun is close to the line-of-sight between Earth and Rosetta, meaning communications with the craft would have become increasingly more difficult.
The European Space Agency's Rosetta mission was launched in 2004 and arrived at comet 67P/Churyumov-Gerasimenko on Aug. 6, 2014. It is the first mission in history to rendezvous with a comet and escort it as it orbits the sun. On Nov. 4, 2014, a smaller lander name Philae, which had been deployed from the Rosetta mothership, touched down on the comet and bounced several times before finally alighting on the surface. Philae obtained the first images taken from a comet's surface and sent back valuable scientific data for several days.
U.S. contributions aboard the Rosetta spacecraft are the Microwave Instrument for Rosetta Orbiter (MIRO); the Alice spectrograph; the Ion and Electron Sensor (IES), part of the Rosetta Plasma Consortium Suite; and the Double Focusing Mass Spectrometer (DFMS) electronics package for the Rosetta Orbiter Spectrometer for Ion Neutral Analysis (ROSINA). They are part of a suite of 11 total science instruments aboard Rosetta.
Comets are time capsules containing primitive material left over from the epoch when the sun and its planets formed. Rosetta is the first spacecraft to witness at close proximity how a comet changes as it is subjected to the increasing intensity of the sun's radiation. Observations will help scientists learn more about the origin and evolution of our solar system and the role comets may have played in the formation of planets.
Rosetta is an ESA mission with contributions from its member states and NASA. Rosetta's Philae lander is provided by a consortium led by the German Aerospace Center, Cologne; Max Planck Institute for Solar System Research, Gottingen; French National Space Agency, Paris; and the Italian Space Agency, Rome. NASA's Jet Propulsion Laboratory, Pasadena, California, a division of Caltech, manages the U.S. contribution of the Rosetta mission for NASA's Science Mission Directorate in Washington. JPL also built the MIRO and hosts its principal investigator, Mark Hofstadter. The Southwest Research Institute (San Antonio and Boulder, Colorado), developed the Rosetta orbiter's IES and Alice instruments and hosts their principal investigators, James Burch (IES) and Alan Stern (Alice).
For more information on the U.S. instruments aboard Rosetta, visit:
More information about Rosetta is available at:
UPDATED 9/30/16 AT 1:55 PM PDT WITH REVISED ALTITUDE ESTIMATES IN PARAGRAPH 3.
News Media ContactDC Agle
Jet Propulsion Laboratory, Pasadena, Calif.
Dwayne Brown / Laurie Cantillo
NASA Headquarters, Washington
202-358-1726 / 202-358-1077
[email protected] / [email protected] | 0.852082 | 3.356323 |
(Visited 71 times, 1 visits today)FacebookTwitterPinterestSave分享0 New study of craters shows that moon’s surface gets churned every 81,000 years, not every million years.“I like it when theories are proven wrong, or exciting new things come up,” remarked Kathleen Mandt of Southwest Research Institute, quoted by New Scientist. That’s how to put a cheerful spin on an orders-of-magnitude correction. “The Lunar Reconnaissance Orbiter is starting to show there’s a lot we don’t know about the moon.” Data from LRO are showing a much higher influx of meteorites to the moon’s surface, implying that future astronauts stand a bigger-than-trivial chance of being in danger from flying rocks and dust. The data raise questions about the age of the lunar surface.The revised number of craters suggests the moon is pummeled by space rocks much more frequently than predicted, says Kathleen Mandt of the Southwest Research Institute in San Antonio, Texas. It also suggests that the soil on the lunar surface is turning over so often that materials like water molecules could escape into space sooner than previously thought. That could have important implications for researchers trying to date rocks on the moon, or future initiatives hoping to mine resources out of the moon.Space.com says of the “Impact!” of the finding, “New Moon Craters Are Appearing Faster Than Thought.” Part of the new estimate comes from crater counts by LRO, including a whopping 222 new craters appearing just in the last 7 years, says Alexandra Witze in Nature. The other part comes from estimates of secondary craters formed from each new impact.The scientists also found broad zones around these new craters that they interpreted as the remains of jets of debris following impacts. 
They estimated this secondary cratering process is churning the top 0.8 inches (2 centimeters) of lunar dirt, or regolith, across the entire lunar surface more than 100 times faster than thought.Realization of widespread secondary cratering upset the crater-count dating method a decade ago (9/25/07), rendering the method essentially unreliable (5/22/12). Even if a future moon colonist avoids a direct hit, he or she could be at risk of debris from a distant impact if rocks and dust fly in all directions with no atmosphere to slow them down. Picture yourself working at a futuristic moon base stepping outside to watch the Earthrise:“For example, we found an 18-meter (59-foot) impact crater that formed on March 17, 2013, and it produced over 250 secondary impacts, some of which were at least 30 kilometers (18.6 miles) away,” Speyerer said. “Future lunar bases and surface assets will have to be designed to withstand up to 500 meter per second (1,120 mph) impacts of small particles.“PhysOrg says the meteoritic rain is so heavy, it gives the moon a facelift every 81,000 years, overturning the top two centimeters of lunar dust. Some impactors were big. The astronomers found 33% more craters than expected with diameters at least ten meters.None of the articles asks the obvious question: what does this mean over the assumed lifetime of the moon? If the moon really formed 4.5 billion years ago, as secular planetary scientists believe, that would be 5,555 facelifts. (It should be noted that the 81,000-year estimate uses models that assume the billions-of-years age of the moon. All they can really observe is the current impact rate. The new observations, however, imply a faster production rate than the favored model assumes.)Another consequence of the study affects all the planets and moons of the solar system. Does the impact rate need to be revised upward everywhere else? 
Are primary and secondary craters occurring much more frequently than expected at Mars and the moons of Saturn, or at Pluto? Meteor flux could vary at different radii from the sun. It’s also a factor of gravitational pull. But without a reconnaissance orbiter at each planet or moon, it’s hard to be sure. New craters have been observed at Mars – again, at a higher rate than expected (2/13/14).Earth, with its higher gravity, attracts meteors at an even higher rate, but our atmosphere causes most of them to burn up high in the sky. Meteors are commonly observed by skywatchers. The occasional meteor shower increases the rate when Earth passes through the dust stream of a comet. The rare meteorites (meteors that reach Earth’s surface) are prized by collectors. Some of them come from Mars and other planets, when glancing blows send rocks our way (3/25/08).The research paper in Nature by Speyerer et al. contains before-and-after photos of impact sites.The trend in crater-count dating has been up and down: up in the number of impact events and secondaries, down in the method’s credibility. It’s clear that these results were surprising. Despite all those thousands of craters, the moon doesn’t have to be billions of years old.This paper should stimulate creationists to revisit the moon dust problem. In the Apollo era, all the secular astronomers were astonished at the thinness of the lunar regolith. The Surveyor landers proved that the dust was not meters thick with fine dust as some had predicted. Apollo astronauts found the dust to be so shallow, they could scratch the bedrock with their boots. It seemed that fine dust had not been accumulating for billions of years. When subsequent estimates of dust influx were substantially reduced, many creationists abandoned the moon-dust argument for a young moon.Perhaps that was premature. This paper shows that impacts send up jets of dust that settle back under ballistic trajectories. 
If the top two centimeters can be completely “gardened” in just 81,000 years, it seems highly implausible to believe this has happened over 5,000 times. Some creation physicist ought to read the new paper and revisit the implications for age.
2 October 2009The 2010 Fine Art project, a visual celebration of the world’s most-watched sporting event, is assembling an international collection by some of the world’s leading contemporary artists to promote African visual arts and Africa as a powerful cultural destination.2010 Fine Art is a South African company that has acquired a global licence to produce and distribute fine art related to the 2010 Fifa World Cup – the first time in the 80-year history of the tournament that Fifa has granted such a licence.And according to general manager Rob Spaull, the project will be one of the largest international art collaborations in history.“We are assembling an international collection by some of the world’s leading contemporary artists that celebrates Africa and the Fifa World Cup,” Spaull said in a a statement last month.“With five artists from each nation that qualifies to play in South Africa, we will have 160 original works from every corner of the globe.“Add to that the exceptional pieces being assembled for the 2010 African Fine Art Collection, and the fact that we will be exhibiting not only here in South Africa but in all 32 countries during 2010, and you start to get a sense of how big an opportunity this is to promote African art and Africa as a destination of choice.”According to Spaull, 2010 Fine Art is busy adding artists to its international and African collections, and has begun to identify and appoint gallery partners in the 32 countries where it will be exhibiting.“The second phase of development will see the creation of a three-dimensional virtual art gallery in which all of the works from both collections will be able to be viewed online as part of a seamless virtual walkthrough,” Spaull said.The 2010 Fine Art website – www.2010fineart.com – allows visitors to see which countries have qualified for the World Cup and what art is available from each. 
As new teams qualify, their art will be loaded and updated.“Art is a language common to all,” says Spaull. “It opens windows of understanding between foreign cultures, and unites peoples who might otherwise share no common experiences. Sport, like art, creates bridges between cultures, and brings people together through shared excitement.“The eyes of the world are turning to South Africa as never before. We must make every use of these global opportunities to promote African visual arts and Africa.”SAinfo reporterWould you like to use this article in your publication or on your website? See: Using SAinfo material read more
Polo, usually a sport of the privileged, is now being taken to underprivileged communities in the Free State. (Image: Poloafrica) With 60 ponies on hand, the Poloafrica Development Trust in the Free State is giving more people the opportunity to take part in equestrian sporting activities, like polo.A Laureus Sport for Good Foundation project, Poloafrica aims to make the sport more inclusive and change the perception that it is only for the elite.Based in Uitgedacht Farm on the foothills of the Maluti Mountains, Poloafrica uses the love of riding, polo and ponies as a way to encourage boys and girls from disadvantaged backgrounds to work hard vocationally, at school, and at the life skills lessons provided on the farm.The ponies are used for young people aged between six and 21.Poloafrica founder, Catherine Cairns said the majority of the development polo players in the country belong to the programme and they’re doing well.“Poloafrica teams have numerous wins to their credit in tournaments in Gauteng, KwaZulu-Natal and the Free State.”Skills developmentAccording to Poloafrica’s website, it also provides opportunities for talented underprivileged adults to flourish as equestrian professionals – whether in playing the game, caring for the animals, schooling ponies or coaching others.“Recently the scope of equestrian activities offered by the programme has broadened, with the introduction of dressage and show jumping,” said Cairns.“Poloafrica serves eight villages in the surrounding farming community, with a few children visiting during the holidays from across the Lesotho border.”The children in the programme learn a variety of life skills such as art, singing, needlework, bee-keeping, carpentry, welding, acrobatics, self-defence, computer skills and spoken self-expression.They also receive extra tuition in maths and English, two subjects which present a challenge to rural children in South Africa today.The programme also places importance on having empathy for the 
animals, good attitude and teamwork. To be a Poloafrica scholar, children must be registered at school.Adults in the community also benefit from the employment opportunities offered by the programme.Breaking the barrierPoloafrica is in line with the government’s Transformation Charter for South African Sport.The charter looks to unleash the sporting potential of black youth by encouraging broader community involvement, the creation of development programmes at grassroots levels and the delivering of facilities to disadvantaged communities.“Poloafrica’s strategy delivers against these exact objectives,” said Cairns. “The programme provides beautiful, first class riding and polo facilities in an under-served area, with extensive community involvement. With little help it has already developed a robust pipeline of promising young riders and polo players from one of the most disadvantaged parts of the country.”Closing the gender dividePoloafrica also tries to break down the gender divide.According to the organisation, a cultural shift in the mindset is necessary for the girls on the programme to develop the same sense of purpose in life and confidence in sport as the boys.The trust uses riding and other sports and life skills to encourage girls to become more independent. “In recent holidays the FLY (First Love Yourself) project was designed especially for the older girls to foster self-worth,” said Cairns.“Girls on the programme are encouraged to learn practical skills that traditionally are only done by men, such as welding and carpentry. Equally, the boys on the programme are encouraged to learn skills such as needlework and cooking.” read more
Share Facebook Twitter Google + LinkedIn Pinterest Bearish.That one word wraps up today’s numbers for corn, soybeans, and wheat. Not only were the yield estimates for both corn and soybeans higher than July, they are much higher. The trade had expected production, yield, and ending stocks to be declining for corn and soybeans. They did not. USDA estimated the corn yield at 168.8 bushels per acre, up 2 bushels from July. The trade had expected ending stocks to drop about 175 million bushels. Instead, they rose 113 million bushels to 1.713 billion bushels. Corn production was estimated to be 13.686 billion bushels, up from last month’s 13.530 billion bushels. Soybean production was pegged at 3.916 billion bushels, compared to last months 3.885 billion bushels, and up 31 million bushels. The trade has been talking for weeks about soybean ending stocks moving lower. They came in at 470 million bushels, up from last months 425 million bushels. Traders had expected ending stocks to drop to 301 million bushels.Prior to the USDA report at noon corn and wheat were up 5 cents with soybeans down 7 cents. Shortly after the report corn was down 21 cents, soybeans down 53 cents, and wheat down 14 cents.Ahead of today’s supply and demand report the market has seen lots of price volatility. Monday brought double digit gains across the board for grains on supply concerns. Tuesday most of those gains were erased with demand concerns, largely for US corn and soybeans exports. Todays USDA report will be the first surveyed report as reporters comb thousands of fields across the country in efforts to best determine corn and soybean yields. For corn they will be looking at stalk counts. Numerous reports since planting took place comment that producers were pushing the envelope pretty hard in orders to get seed counts per acre moving higher. Later in September USDA will be looking at ear weights as part of the surveys they conduct. 
With soybeans the reporters are looking at pod counts.This year has been an especially difficult year to get an accurate number of acres for corn and soybeans with all of the flooding problems that took place. It was not just a large area for one state that rains played a role. Instead, it was many areas in many states. Missouri in particular was hardest hit with soybean plantings held up for weeks. Going back to late June there were nearly five million acres yet to be planted to soybeans. Again most of that was in Missouri. Yet indications suggest it will be just Indiana where soybean acres will again be surveyed. Shortly after the June 30 acres and stocks report many had thought that the soybean acres would be surveyed in three states, Missouri, Illinois, and Indiana. Closer to home in Ohio, northwest Ohio had many counties where rains played a role in corn and soybean acres not getting planted. It is no wonder that some have said we may not see a really good acres number until the final report for 2015 production comes out in January 2016.Prior to the report many had thought that the numbers would be bullish for soybeans and neutral for corn. Traders were looking for U.S. corn production to be 13.327 billion bushels, down 200 million bushels from the July report. In addition, they estimated the U.S. corn yield to be 164.5 bushels per acres, down from 166.8 bushels in July. Traders had estimate soybean production would decline with at lower yield of 44.7 bushels per acres compared to the July estimate of 46 bushels per acre.The take home from today, the report was bearish without a doubt. Traders and producers were expecting lower yields and lower ending stocks. That did not happen and the markets will close lower today. It may not be immediately seen but there is one more take home for the day, another short phrase to remember. Uncertainty and price volatility. 
With so many not seeing the high yields that USDA published, many will now be saying, “Show me, show me the yields are there.”It is going to be a most interesting time for the grain markets in the next two months. read more
I did an impromptu webinar with Jeb Blount on the fear of rejection. It morphed into a conversation about all of the different fears that we have as salespeople. Jeb asked me how you build an immunity to those fears. Here’s an action plan:Find Someone Who Has Overcome Your FearIt’s easy for other people to suggest that you should ignore your fear. But it is unlikely that those same individuals ignore their fears. If you are afraid, it is likely that there is a real danger attached to that fear.Maybe you fear making cold calls. The first thing you might do to overcome that fear is to sit with someone who has no fear of making cold calls while they make calls. By sitting with someone who no longer has a fear of making those calls, you will have a chance to witness for yourself that nothing bad happens. The same applies to the fear of asking for referrals, asking for commitments, or revealing that you have a higher price early in the sales process.By spending time with someone who no longer has the fear that you have, you will discover how they think about what they’re doing, how they approach it, and how they succeed in taking action without fear.Objectify Your FearOne way to begin to remove the fear from your body and your mind is to write down what you are afraid of. Give the fear a name.Write down what you believe to be the danger that gives rise to your fear. Write down all the things that may happen when you take the action of which you are afraid. Follow those outcomes to their logical conclusion and write down the worst possible things that can happen. Score them on how likely they are to occur.By writing down your fear, you get it out of your body, from your mind onto paper or your computer screen where you can objectify the fear. Now that it’s an idea written on paper, it loses some of its power over you. It is an object, not a part of you. 
You may also recognize that the danger isn’t as great as you believed.Fear The Greater DangerA lot of the fears we have are connected to the wrong danger.We fear that by making the call we will be rejected, and that will say something about our value. The real danger is in that not making the call you don’t produce the results you are capable of for you, for your company, or for your prospective client.We fear to ask a contact that’s engaged with us how serious they are about the initiative we’re discussing and whether or not they’re going to be able to get the support and financial backing necessary. There is a danger of offending your contact, but the greater risk is in not asking and later finding out that you’ve both spent months on an initiative that’s not going to happen.We fear discussing our higher price early in the sales process, believing it will frighten our prospective client away. But the greater danger is in not discussing it and going through the process without having justified the delta between our price and our competitors along the way.Fear is a powerful motivator. It can motivate you to avoid taking an action that may harm you. It can also motivate you to act, when not taking that action will harm you. You have to determine what is the greater danger. Essential Reading! Get my 3rd book: Eat Their Lunch “The first ever playbook for B2B salespeople on how to win clients and customers who are already being serviced by your competition.” Buy Now read more
The first Group 2 game in the Super Eights promises to be a fight between two strong bowling line-ups belonging to Pakistan and South Africa.Neither team has played at the R Premadasa Stadium during the World Twenty20, with Pakistan coming in from the batting-friendly Pallekele and the Proteas from Hambantota, which all-rounder Albie Morkel called ‘just like home’.But on the turning tracks here, every opponent will need to be wary of Pakistan’s trump card, Saeed Ajmal, who has become virtually unplayable in any format and conditions. South Africa skipper AB de Villiers said “no” when asked whether any of his batsmen were able to read Ajmal. “But we have a few areas where we’d like to attack him,” he said.”They are a very good team. They have been in the semi-finals of all tournaments and have also won once. But our focus is more on what we can do well,” de Villiers said. The Proteas have a terrific batting line-up led by the runmachine Hashim Amla and young gun Richard Levi at the top, followed by the legendary Jacques Kallis, de Villiers himself, JP Duminy and Francois du Plessis.Dale Steyn and Morne Morkel make for a scary new-ball pair, and can be ably backed up by allrounders Kallis and Albie, and two quality limited-overs spinners in Johan Botha and Robin Peterson.Pakistan, on the other hand, have a misfiring fast bowling line-up, but do possess a lot of in-form batsmen like skipper Mohammad Hafeez, Imran Nazir, Nasir Jamshed and the Akmal brothers – Kamran and Umar. In the end, the match will probably come down to how the South Africans negotiate the Pakistani spinners.advertisementPakistan vs South Africa, Live from Colombo, on STAR Cricket from 3:30 pm read more | 0.810536 | 3.737707 |
Now that I have your attention, I should probably make clear that this post is not about the Earth. I’m just back from a meeting where one of the speakers was Ian Boutle, lead author of a paper in which they Explor[ed] the climate of Proxima B with the Met Office Unified Model (pre-print available here).
Proxima Centauri B is a recently discovered Earth-sized planet in an 11-day orbit around Proxima Centauri, the closest star to the Sun. There are a couple of aspects of this system that may influence the planet’s climate sensitivity. One is that the star is much cooler than the Sun, and so emits most of its radiation at longer wavelengths. The other is that the planet is probably tidally locked – its rotation period will match its orbital period so that one side always faces its host star.
What Boutle et al.'s model indicates is that the above factors appear to result in a climate sensitivity that is quite a bit lower than that of the Earth (about two-thirds). One reason is that the albedo of ice decreases with increasing wavelength. Since the host star of Proxima Centauri B emits mainly at longer wavelengths (compared to the Sun), the ice-albedo feedback is significantly reduced. Also (and this is the bit I wasn't quite clear on) the changes in cloud cover appear to occur mainly on the night side, and so have little impact on climate sensitivity. There also appear to be global-scale circulations that suppress the temperature on the day side, due to the efficient cooling of the night side of the planet.
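To make the feedback argument concrete, here is a deliberately crude zero-dimensional energy-balance sketch. It is not the Met Office Unified Model used in the paper; the stellar flux, reference albedo, and the two d(albedo)/dT values below are made-up illustrative numbers, chosen only to show that a weaker ice-albedo feedback produces a lower climate sensitivity.

```python
# Zero-dimensional energy balance: (1 - albedo(T)) * S / 4 = sigma * T^4.
# The strength of the ice-albedo feedback is set by d(albedo)/dT.
# All numbers are illustrative assumptions, not values from the paper.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(S, dalbedo_dT, T_ref=255.0, albedo_ref=0.3):
    """Fixed-point iteration for the equilibrium temperature in kelvin."""
    T = T_ref
    for _ in range(200):
        albedo = min(max(albedo_ref - dalbedo_dT * (T - T_ref), 0.0), 0.8)
        T = ((1.0 - albedo) * S / 4.0 / SIGMA) ** 0.25
    return T

def sensitivity(S, dalbedo_dT, dS=1.0):
    """Warming per unit increase in stellar flux (K per W m^-2)."""
    return (equilibrium_temperature(S + dS, dalbedo_dT)
            - equilibrium_temperature(S, dalbedo_dT))

S = 1361.0                     # W m^-2, Earth-like flux for scale
strong = sensitivity(S, 2e-3)  # Sun-like star: ice reflects visible light well
weak = sensitivity(S, 5e-4)    # M dwarf: ice albedo is lower in the infrared
print(strong > weak > 0.0)     # weaker ice-albedo feedback -> lower sensitivity
```

The qualitative point survives the crude setup: reducing d(albedo)/dT, which is what a long-wavelength host star effectively does, shrinks the temperature response to a given change in stellar flux.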
The above has some potentially interesting implications for habitability. To be clear, we don't really know what is required for a planet to be habitable, or not, so – in this context – it simply refers to the possibility of there being liquid water on the surface. However, if Proxima Centauri B does have a smaller climate sensitivity than the Earth, then this implies that it is less sensitive to changes in stellar flux and, hence, that there is a greater range of parameter space over which it could support liquid water on its surface.
Of course, this is all based on models, so we don't even know whether Proxima Centauri B actually has an atmosphere and, if it does, whether it can actually support liquid water on its surface. However, future space missions (such as the James Webb Space Telescope) and future ground-based telescopes (such as the European Extremely Large Telescope) might be able to make observations that could tell us something about Proxima Centauri B's atmosphere, so we may have some idea about this in the not too distant future.
Green was all the rage a couple of billion years after the Big Bang.
Galaxies in the early universe blasted out a specific wavelength of green light, researchers reported January 7 at a meeting of the American Astronomical Society. It takes stars much hotter than most stars found in the modern universe to make that light. The finding offers a clue to what the earliest generation of stars might have been like.
Some nearby galaxies and nebulas produce a little bit of this hue today. But these early galaxies, seen as they were roughly 11 billion years ago, produce an overwhelming amount. “Everybody was doing it,” said Matthew Malkan, an astrophysicist at UCLA. “It seems like all galaxies started this way.”
Malkan and colleagues used the United Kingdom Infrared Telescope in Hawaii and the Spitzer Space Telescope to collect the light from over 5,000 galaxies. They found that, in all of these galaxies, one wavelength of green light — now stretched to infrared by the expansion of the universe — was twice as bright compared with light from the typical mix of stars and gas seen in galaxies today.
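The stretching mentioned above is just the cosmological (1 + z) redshift factor. A small sketch, assuming the line in question is the well-known doubly ionized oxygen ([O III]) transition at a rest wavelength of 500.7 nm, and taking z ≈ 2.2 as a round assumed value for light emitted roughly 11 billion years ago:

```python
# lambda_observed = lambda_rest * (1 + z): redshift moves green light
# emitted in the early universe into the near-infrared, where the
# infrared telescopes used in this survey can detect it.
LAMBDA_OIII = 500.7e-9  # metres; [O III] green line (assumed identification)

def observed_wavelength(lambda_rest, z):
    return lambda_rest * (1.0 + z)

lam = observed_wavelength(LAMBDA_OIII, 2.2)  # z ~ 2.2 is an assumed round value
print(lam)  # ~1.6e-6 m, i.e. near-infrared
```

This is why a line that left its galaxy as green light shows up in infrared detectors today.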
The green light comes from oxygen atoms that have lost two of their electrons. To knock off two electrons requires harsh ultraviolet radiation, possibly from lots of extremely hot stars — each roughly 50,000° Celsius. The sun, by comparison, is about a paltry 5,500° C at its surface.
More at Science News.
DeKalb, IL — New research adds to the growing body of evidence suggesting the Red Planet once had an ocean.
In a new study, scientists from Northern Illinois University and the Lunar and Planetary Institute in Houston used an innovative computer program to produce a new and more detailed global map of the valley networks on Mars. The findings indicate the networks are more than twice as extensive (2.3 times longer in total length) as had been previously depicted in the only other planet-wide map of the valleys.
Further, regions that are most densely dissected by the valley networks roughly form a belt around the planet between the equator and mid-southern latitudes, consistent with a past climate scenario that included precipitation and the presence of an ocean covering a large portion of Mars’ northern hemisphere.
Scientists have previously hypothesized that a single ocean existed on ancient Mars, but the issue has been hotly debated.
“All the evidence gathered by analyzing the valley network on the new map points to a particular climate scenario on early Mars,” NIU Geography Professor Wei Luo said. “It would have included rainfall and the existence of an ocean covering most of the northern hemisphere, or about one-third of the planet’s surface.”
Luo and Tomasz Stepinski, a staff scientist at the Lunar and Planetary Institute, publish their findings in the current issue of the Journal of Geophysical Research — Planets.
“The presence of more valleys indicates that it most likely rained on ancient Mars, while the global pattern showing this belt of valleys could be explained if there was a big northern ocean,” Stepinski said.
Valley networks on Mars exhibit some resemblance to river systems on Earth, suggesting the Red Planet was once warmer and wetter than present.
But, since the networks were discovered in 1971 by the Mariner 9 spacecraft, scientists have debated whether they were created by erosion from surface water, which would point to a climate with rainfall, or through a process of erosion known as groundwater sapping. Groundwater sapping can occur in cold, dry conditions.
The large disparity between river-network densities on Mars and Earth had provided a major argument against the idea that runoff erosion formed the valley networks. But the new mapping study reduces the disparity, indicating some regions of Mars had valley network densities more comparable to those found on Earth.
“It is now difficult to argue against runoff erosion as the major mechanism of Martian valley network formation,” Luo said.
“When you look at the entire planet, the density of valley dissection on Mars is significantly lower than on Earth,” he said. “However, the most densely dissected regions of Mars have densities comparable to terrestrial values.
“The relatively high values over extended regions indicate the valleys originated by means of precipitation-fed runoff erosion — the same process that is responsible for formation of the bulk of valleys on our planet,” he added.
The researchers created an updated planet-wide map of the valley networks by using a computer algorithm that parses topographic data from NASA satellites and recognizes valleys by their U-shaped topographic signature. The computer-generated map was visually inspected and edited with help from NIU graduate students Yi Qi and Bartosz Grudzinski to produce the final updated map.
“The only other global map of the valley networks was produced in the 1990s by looking at images and drawing on top of them, so it was fairly incomplete and it was not correctly registered with current datum,” Stepinski said. “Our map was created semi-automatically, with the computer algorithm working from topographical data to extract the valley networks. It is more complete, and shows many more valley networks.”
Stepinski developed the algorithms used in the mapping.
“The basic idea behind our method is to flag landforms having a U-shaped structure that is characteristic of the valleys,” Stepinski added. “The valleys are mapped only where they are seen by the algorithm.”
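To give a flavour of what "flagging landforms having a U-shaped structure" can mean computationally, here is a one-dimensional toy. It is emphatically not Stepinski's algorithm, which operates on full two-dimensional topographic data; the example profile, the curvature threshold, and the local-minimum test are all illustrative assumptions.

```python
# Flag points of an elevation profile that sit in a local minimum with
# sufficiently positive curvature -- a crude 1-D stand-in for the
# "U-shaped topographic signature" of a valley.
def flag_valleys(elevation, curvature_threshold=0.1):
    flags = [False] * len(elevation)
    for i in range(1, len(elevation) - 1):
        # discrete second derivative: positive inside a U-shaped depression
        curvature = elevation[i - 1] - 2.0 * elevation[i] + elevation[i + 1]
        is_local_min = (elevation[i] <= elevation[i - 1]
                        and elevation[i] <= elevation[i + 1])
        flags[i] = is_local_min and curvature >= curvature_threshold
    return flags

# A synthetic profile with one valley and one small summit bump:
profile = [5.0, 4.0, 1.0, 0.9, 1.0, 4.0, 5.0, 5.1, 5.0]
print(flag_valleys(profile))  # only the valley floor (index 3) is flagged
```

The real algorithm has to do far more (drainage connectivity, noise rejection, planet-wide tiling), but the core idea of a curvature-based shape signature is the same.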
The Martian surface is characterized by lowlands located mostly in the northern hemisphere and highlands located mostly in the southern hemisphere. Given this topography, water would accumulate in the northern hemisphere, where surface elevations are lower than the rest of the planet, thus forming an ocean, the researchers said.
“Such a single-ocean planet would have an arid continental-type climate over most of its land surfaces,” Luo said.
The northern-ocean scenario meshes with a number of other characteristics of the valley networks.
“A single ocean in the northern hemisphere would explain why there is a southern limit to the presence of valley networks,” Luo added. “The southernmost regions of Mars, located farthest from the water reservoir, would get little rainfall and would develop no valleys. This would also explain why the valleys become shallower as you go from north to south, which is the case.
“Rain would be mostly restricted to the area over the ocean and to the land surfaces in the immediate vicinity, which correlates with the belt-like pattern of valley dissection seen in our new map,” Luo said.
The research was funded by NASA.
Cosmic rays (CRs) are energetic charged particles originating outside the Earth. They could have strong effects on galaxy evolution, which we are going to study with galaxy simulations.
We have successfully implemented CRs and CR feedback in GIZMO:
(The first paper is submitted to MNRAS and available on arXiv now: http://arxiv.org/abs/1812.10496)
- Numerical implementation:
- CRs in a two-fluid model (gas + CRs);
- Injection: 10% of supernova energy;
- Loss: Coulomb and hadronic losses;
- Transport: CR diffusion and streaming with the efficient two-moment method (similar to Jiang+Oh 2018), which enables High Resolution galaxy simulations with Fast CR propagation!
- A Wide Range of Simulated Galaxies:
Idealized galaxy simulations: dwarf, sub L star starburst, and L star galaxies;
- A Variety of CR propagation models:
(a) Advection; (b) Isotropic Diffusion; (c) Anisotropic Diffusion; and/or (d) Streaming;
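As a toy illustration of model (b), isotropic diffusion, the sketch below advances the CR energy density with a simple explicit finite-difference scheme. This is only a sketch, not the two-moment implementation used in GIZMO; units, coefficients, and the grid are arbitrary.

```python
# Explicit 1-D solver for d(e_cr)/dt = kappa * d2(e_cr)/dx2.
def diffuse_cr(e_cr, kappa, dx, dt, steps):
    assert kappa * dt / dx**2 <= 0.5, "explicit-scheme stability limit"
    e = list(e_cr)
    for _ in range(steps):
        new = e[:]
        for i in range(1, len(e) - 1):
            new[i] = e[i] + kappa * dt / dx**2 * (e[i - 1] - 2.0 * e[i] + e[i + 1])
        e = new
    return e

# A spike of injected CR energy (e.g. from a supernova) spreads with time:
e0 = [0.0] * 20 + [1.0] + [0.0] * 20
e1 = diffuse_cr(e0, kappa=1.0, dx=1.0, dt=0.25, steps=50)
print(max(e1) < 1.0)  # True: the peak decays as energy diffuses outward
```

The explicit scheme's stability constraint (kappa * dt / dx² ≤ 1/2) is exactly why fast CR propagation is expensive in high-resolution simulations, and why more sophisticated two-moment methods are used instead.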
(1) CR distributions strongly depend on propagation models:
(2) CRs can suppress star formation in dwarf and L star galaxies:
(3) The “effective isotropized” CR diffusion coefficients ~ 3e28-29 cm^2/s are consistent with the observed γ ray emission (the dashed line shows the CR proton calorimetry, i.e. most of the CRs dissipate within the galaxies)
(4) Most of the CRs can escape from dwarf and L star galaxies when matching the observed γ ray luminosity, but not starburst galaxies.
(5) Ongoing: CR-driven winds differ dramatically from thermally driven winds; they are cooler and slower.
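Point (4) can be motivated with back-of-the-envelope timescales. The sketch below compares a diffusive escape time H²/κ with the standard hadronic (pion-production) loss time of roughly 5×10⁷ yr per (n/cm⁻³); the scale height and gas densities are assumed round values, not numbers from the papers.

```python
# CRs escape a galaxy when their diffusive escape time is shorter than
# the hadronic loss time; otherwise the galaxy is a proton calorimeter.
YR_S = 3.156e7     # seconds per year
KPC_CM = 3.086e21  # centimetres per kiloparsec

def escape_time_yr(H_kpc, kappa_cm2_s):
    """Diffusive escape time H^2 / kappa, in years."""
    return (H_kpc * KPC_CM) ** 2 / kappa_cm2_s / YR_S

def hadronic_loss_time_yr(n_cm3):
    """Standard pion-production loss time, ~5e7 yr / (n / cm^-3)."""
    return 5e7 / n_cm3

kappa = 3e28  # cm^2/s, within the quoted "effective isotropized" range
t_esc = escape_time_yr(1.0, kappa)           # ~1e7 yr for a 1 kpc scale height
print(t_esc < hadronic_loss_time_yr(0.1))    # True: CRs escape a diffuse disk
print(t_esc > hadronic_loss_time_yr(100.0))  # True: dense starburst gas is calorimetric
```

With these round numbers, diffuse dwarf and L-star disks let most CR protons leak out before they interact, while the dense gas of a starburst destroys them in place, consistent with points (3) and (4).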
Here are some cool movies featuring cosmic ray feedback on different galaxies!
Images are weighted by mass times temperature, so higher temperature gas (>10^3K) is more visible.
The movie below shows idealized dwarf galaxies with (left) and without cosmic rays (right) (with isotropic diffusion coefficient 3e28cm^2/s). Surprisingly, cosmic rays allow more cold gas (magenta) to stay within the disks.
MW-mass galaxies with (left) and without cosmic rays (right). There is significant fast outflowing hot gas on the left but warm slow outflowing gas on the right at late times:
Starburst galaxies with (right) and without (left) cosmic ray (diffusion coefficient=3e28cm^2/s):
I gave a talk at the 2018 Santa Cruz Galaxy Workshop:
The slide is available here: TsangKeungChan_CRdrivenwind
I have also worked on simulating pionic gamma-ray emission from cosmic rays, which can be seen in the poster (Poster_CR_A0) I presented at the 15th Potsdam Thinkshop.
Astronomers are now more and more certain that our Sun did have a twin, a star dubbed Nemesis, because it was supposed to have kicked an asteroid into Earth’s orbit that collided with our planet and exterminated the dinosaurs.
Recent studies show most stars have companions, including our nearest neighbor, Alpha Centauri, a triplet system.
Nemesis has never been found. What happened to our Sun’s companion? Astronomers have speculated about the origins of binary and multiple star systems for hundreds of years. Are binary and triplet star systems born that way? Did one star capture another? Do binary stars sometimes split up and become single stars?
Nemesis was not our Sun’s identical twin and it was most likely 17 times farther from the Sun than its most distant planet today, Neptune.
Several years ago, a computer simulation by Pavel Kroupa of the University of Bonn led him to conclude that all stars are born as binaries. Yet direct evidence from observations has been scarce. This can change now.
A new analysis by a theoretical physicist from UC Berkeley and a radio astronomer from the Smithsonian Astrophysical Observatory at Harvard University, suggests that our Sun’s twin most likely escaped and mixed with all the other stars in our region of the Milky Way galaxy, never to be seen again.
A radio image of a triple star system forming within a dusty disk in the Perseus molecular cloud obtained by the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile. (Image: Bill Saxton, ALMA (ESO/NAOJ/NRAO), NRAO/AUI/NSF)
The new assertion is based on a radio survey of a giant molecular cloud filled with recently formed stars in the constellation Perseus, and a mathematical model that can explain the Perseus observations only if all sunlike stars are born with a companion. The Perseus molecular cloud is a stellar nursery, about 600 light-years from Earth and about 50 light-years long.
“We are saying, yes, there probably was a Nemesis, a long time ago,” said co-author Steven Stahler, a UC Berkeley research astronomer.
“We ran a series of statistical models to see if we could account for the relative populations of young single stars and binaries of all separations in the Perseus molecular cloud, and the only model that could reproduce the data was one in which all stars form initially as wide binaries. These systems then either shrink or break apart within a million years.”
“The idea that many stars form with a companion has been suggested before, but the question is: how many?” said first author Sarah Sadavoy, a NASA Hubble fellow at the Smithsonian Astrophysical Observatory.
“Based on our simple model, we say that nearly all stars form with a companion. The Perseus cloud is generally considered a typical low-mass star-forming region, but our model needs to be checked in other clouds.”
Last year, a team of astronomers completed a survey that used the Very Large Array, a collection of radio dishes in New Mexico, to look at star formation inside the Perseus molecular cloud.
The idea that all stars are born in a litter has implications beyond star formation, including the very origins of galaxies, Stahler said.
This study shows that our Sun most likely had a twin, but we haven't been able to locate it, at least not yet.
Venus is the sixth largest planet in the Solar System and the second planet from the Sun. Named after the Roman goddess of love and beauty, Venus has the longest rotation period of any planet in the Solar System, at about 243 Earth days, while its orbit around the Sun takes 224.7 Earth days. Venus rotates in the opposite direction of the other planets, including Earth, and is the second-brightest object in the night sky, after the Moon, but does not have any natural satellites of its own. The planet has a dense atmosphere that is composed of 3.5% nitrogen and 96.5% carbon dioxide, and has an atmospheric pressure that is 92 times greater than Earth's. Also a terrestrial planet, and similar in size, mass, and proximity to the Sun, Venus is sometimes referred to as Earth's "sister planet."
Why Are Earth and Venus Known As Twin Sisters?
Venus and Earth are sometimes referred to as planetary sisters or twins because they are similar in bulk composition, proximity to the Sun, mass, and size. The mean diameter of Venus is 7,520.8 miles, while Earth's diameter is 7,926.3 miles. Earth is about 5% bigger than Venus, which, compared to other planets, is a very small difference. Venus weighs approximately 19% less than Earth, and the two planets both have metal cores covered by silicate rock mantles and a thin crust. However, despite the many similarities between these two planets, Venus and Earth are also different in many ways. For example, Venus is the hottest planet in the Solar System, with an average temperature of about 462 °C.
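As a quick sanity check of the roughly 5% size difference, using only the diameters quoted in the text:

```python
# Percentage by which Earth's mean diameter exceeds Venus's.
venus_diameter_mi = 7520.8
earth_diameter_mi = 7926.3
percent_bigger = (earth_diameter_mi / venus_diameter_mi - 1.0) * 100.0
print(percent_bigger)  # ~5.4, i.e. Earth is about 5% bigger than Venus
```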
Geography of Venus
Over 80% of the surface of Venus is covered by volcanic plains. More specifically, 10% of the planet's surface is covered in lobate or smooth plains, while 70% is covered in plains with wrinkle ridges. Venus has two highland "continents", one located just south of its equator and the other in the planet's northern hemisphere. The northern continent is named Ishtar Terra, is roughly the size of Australia, and contains the highest mountain on Venus, known as Maxwell Montes. The other continent is named Aphrodite Terra, and is the larger of the two highland continents. Venus has very few impact craters, meaning that its surface is relatively young, with an estimated age of 600 million years. The planet also has numerous unique surface features, including valleys, mountains, and craters. Venus has countless flat-topped volcanic features, called "farra," which resemble pancakes, and are between 330 ft and 3,280 ft in height and between 12 miles and 31 miles across. Other volcanic features observed on Venus are novae, which have both concentric and radial fractures that resemble spider webs. Venus has more volcanoes than Earth, with a total of 167 large volcanoes that measure more than 62 miles across. To the naked eye, Venus appears as a white light that is brighter than all other planets and stars, except for the Sun.
Rotation and Orbit
Venus orbits the Sun at a distance of approximately 67 million miles and completes an orbit every 224.7 Earth days. All planets in the Solar System rotate in an anti-clockwise direction, except Venus, which rotates clockwise. Earth's equator rotates at 1,037.6 mph, while Venus's rotates at 4.05 mph. Since Venus rotates slowly, the planet is nearly spherical. A Venusian year lasts 1.92 Venusian solar days, and to an observer on Venus, the Sun rises in the west and sets in the east. Venus does not have any natural satellites, but it does have at least one known Trojan asteroid.
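The 4.05 mph figure can be reproduced from the planet's circumference divided by its sidereal rotation period of about 243 Earth days:

```python
# Equatorial rotation speed = circumference / rotation period.
import math

venus_diameter_mi = 7520.8
period_hours = 243.0 * 24.0  # ~243-day sidereal rotation period, in hours
speed_mph = math.pi * venus_diameter_mi / period_hours
print(speed_mph)  # ~4.05 mph, matching the figure in the text
```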
Total annihilation for supermassive stars
A renegade star exploding in a distant galaxy has forced astronomers to set aside decades of research and focus on a new breed of supernova that can utterly annihilate its parent star—leaving no remnant behind. The signature event, something astronomers had never witnessed before, may represent the way in which the most massive stars in the Universe, including the first stars, die.
The European Space Agency's (ESA) Gaia satellite first noticed the supernova, known as SN 2016iet, on November 14, 2016. Three years of intensive follow-up observations with a variety of telescopes, including the Gemini North telescope and its Multi-Object Spectrograph on Maunakea in Hawaiʻi, provided crucial perspectives on the object's distance and composition.
"The Gemini data provided a deeper look at the supernova than any of our other observations," said Edo Berger of the Harvard-Smithsonian Center for Astrophysics and a member of the investigation's team. "This allowed us to study SN 2016iet more than 800 days after its discovery, when it had dimmed to one-hundredth of its peak brightness."
Chris Davis, program director at the National Science Foundation (NSF), one of Gemini's sponsoring agencies, added, "These remarkable Gemini observations demonstrate the importance of studying the ever-changing Universe. Searching the skies for sudden explosive events, quickly observing them and, just as importantly, being able to monitor them over days, weeks, months, and sometimes even years is critical to getting the whole picture. In just a few years, NSF's Large Synoptic Survey Telescope will uncover thousands of these events, and Gemini is well positioned to do the crucial follow-up work."
In this case, this deep look revealed only weak hydrogen emission at the location of the supernova, evidence that the progenitor star of SN 2016iet lived in an isolated region with very little star formation. This is an unusual environment for such a massive star. "Despite looking for decades at thousands of supernovae," Berger resumed, "this one looks different than anything we have ever seen before. We sometimes see supernovae that are unusual in one respect, but otherwise are normal; this one is unique in every possible way."
SN 2016iet has a multitude of oddities, including its incredibly long duration, large energy, unusual chemical fingerprints, and environment poor in heavier elements—for which no obvious analogues exist in the astronomical literature.
"When we first realized how thoroughly unusual SN 2016iet is, my reaction was 'Whoa—did something go horribly wrong with our data?'" said Sebastian Gomez, also of the Center for Astrophysics and lead author of the investigation. The research is published in the August 15th issue of The Astrophysical Journal.
The unusual nature of SN 2016iet, as revealed by Gemini and other data, suggest that it began its life as a star with about 200 times the mass of our Sun—making it one of the most massive and powerful single star explosions ever observed. Growing evidence suggests the first stars born in the Universe may have been just as massive. Astronomers predicted that if such behemoths retain their mass throughout their brief life (a few million years), they will die as pair-instability supernovae, which gets its name from matter-antimatter pairs formed in the explosion.
Most massive stars end their lives in an explosive event that spews matter rich in heavy metals into space, while their core collapses into a neutron star or black hole. But pair-instability supernovae are a different breed. The collapsing core produces copious gamma-ray radiation, leading to a runaway production of particle and antiparticle pairs that eventually trigger a catastrophic thermonuclear explosion that annihilates the entire star, including the core.
Models of pair-instability supernovae predict they will occur in environments poor in metals (astronomer's term for elements heavier than hydrogen and helium), such as dwarf galaxies and the early Universe—and the team's investigation found just that. The event occurred at a distance of one billion light years in a previously uncatalogued dwarf galaxy poor in metals. "This is the first supernova in which the mass and metal content of the exploding star are in the range predicted by theoretical models," Gomez said.
Another surprising feature is SN 2016iet's stark location. Most massive stars are born in dense clusters of stars, but SN 2016iet formed in isolation some 54,000 light years away from the center of its dwarf host galaxy.
"How such a massive star can form in complete isolation is still a mystery," said Gomez. "In our local cosmic neighborhood, we only know of a few stars that approach the mass of the star that exploded in SN 2016iet, but all of those live in massive clusters with thousands of other stars." To explain the event's long duration and slow brightness evolution, the team advances the idea that the progenitor star ejected matter into its surrounding environment at a rate of about three times the mass of the Sun per year for a decade before the star blew itself into oblivion. When the star ultimately exploded, the supernova debris collided with this material powering SN 2016iet's emission.
"Most supernovae fade away and become invisible against the glare of their host galaxies within a few months. But because SN 2016iet is so bright and so isolated we can study its evolution for years to come," said Gomez. "The idea of pair-instability supernovae has been around for decades," said Berger. "But finally having the first observational example that puts a dying star in the right regime of mass, with the right behavior, and in a metal-poor dwarf galaxy is an incredible step forward."
Not long ago, it was not known if such supermassive stars could actually exist. The discovery and follow-up observations of SN 2016iet have provided clear evidence for their existence and potential for affecting the development of the early Universe. "Gemini's role in this amazing discovery is significant," said Gomez, "as it helps us to better understand how the early Universe developed after its 'dark ages'—when no star formation occurred—to form the splendor of the Universe we see today."
Very Light Magnetized Jets on Large Scales - I. Evolution and Magnetic Fields
Magnetic fields, which are undoubtedly present in extragalactic jets and responsible for the observed synchrotron radiation, can affect the morphology and dynamics of the jets and their interaction with the ambient cluster medium. We examine the jet propagation, morphology and magnetic field structure for a wide range of density contrasts, using a globally consistent setup for both the jet interaction and the magnetic field. The MHD code NIRVANA is used to evolve the simulation, using the constrained-transport method. The density contrasts are varied between and with constant sonic Mach number 6. The jets are supermagnetosonic and simulated bipolarly due to the low jet densities and their strong backflows. The helical magnetic field is largely confined to the jet, leaving the ambient medium nonmagnetic. We find magnetic fields with plasma already stabilize and widen the jet head. Furthermore they are efficiently amplified by a shearing mechanism in the jet head and are strong enough to damp Kelvin–Helmholtz instabilities of the contact discontinuity. The cocoon magnetic fields are found to be stronger than expected from simple flux conservation and capable to produce smoother lobes, as found observationally. The bow shocks and jet lengths evolve self-similarly. The radio cocoon aspect ratios are generally higher for heavier jets and grow only slowly (roughly self-similar) while overpressured, but much faster when they approach pressure balance with the ambient medium. In this regime, self-similar models can no longer be applied. Bow shocks are found to be of low excentricity for very light jets and have low Mach numbers. Cocoon turbulence and a dissolving bow shock create and excite waves and ripples in the ambient gas. Thermalization is found to be very efficient for low jet densities.
keywords: galaxies: jets – magnetic fields – MHD – methods: numerical – radio continuum: galaxies – galaxies: clusters: general
Extragalactic jets are amongst the most powerful phenomena in the universe and observable up to high redshift (Miley & De Breuck, 2008). Jet activity is generally accompanied by synchrotron emission from relativistic electrons in the magnetized jet plasma, which is most prominently observable at radio frequencies. Thus radio observations provided us with much insight into jet morphology (beams, knots, hotspots, lobes/cocoons), classification into low-power FR I and high-power FR II sources (Fanaroff & Riley, 1974), magnetic field strengths estimated by minimum energy arguments in the range of a few to several hundreds of microgauss in hotspots (Bridle, 1982; Meisenheimer et al., 1989) as well as magnetic field topology from polarization measurements (Bridle & Perley, 1984) (high power jets generally show magnetic fields parallel to the jet axis) and age estimates from spectral ageing (Alexander & Leahy, 1987; Carilli et al., 1991) of a few years. Extragalactic jets show prominent cocoons, often only partly visible as lobes, with total length to total width aspect ratios mostly between and (Mullin et al., 2008). However, it must be cautioned that visible radio emission does not necessarily trace all the bulk plasma, but depends on magnetic field strength and electron acceleration.
The current X-ray observatories Chandra and XMM-Newton give rise to a complementary view on jets, observing bremsstrahlung radiation from the thermal ambient gas (Smith et al., 2002) and inverse-Compton emission from the jet cocoon (Hardcastle & Croston, 2005; Croston et al., 2005), the latter suggesting near-equipartition magnetic fields. These emission processes are less dependent on microphysics and thus easier to connect with theoretical results and furthermore contain information about the history of these sources.
An extreme example for this is MS0735.6+7421 (McNamara et al., 2005), where radio emission shows a weak source, while the spatially coincident X-ray cavities reveal the true average power of the AGN (), which is a factor of higher. The jet cocoon displaces the ambient thermal gas and drives a bow shock outward. Both work and the bow shock contain signatures of the jet properties and may be excellent diagnostic tools. Nearly three dozen clusters were found to have cavities (McNamara & Nulsen, 2007) and generally show weak bow shocks (Mach 1–2) with aspect ratios (length/width) not much above unity.
Radio observations of high-power FR II jets show wide cocoons (Mullin et al., 2008), which are partly (and more completely at low frequencies) visible as radio lobes. Both jet head and cocoon–ambient gas interface appear smoother in radio maps (e.g. Cygnus A (Lazio et al., 2006), Pictor A (Perley et al., 1997), but also Hercules A (Gizani & Leahy, 2003)) than they do in hydrodynamic simulations; and while the emission of synchrotron radiation obviously indicates the presence of magnetic fields, their importance for jet dynamics and the contact surface is still unclear.
On the theoretical side, early numerical simulations of supersonic jets (Norman et al., 1982) already exhibited basic structures seen in observations of extragalactic radio sources (working surface, cocoon, bow shock) and showed that pronounced cocoons are properties of jets with much lower density than the ambient medium (light jets), although the slow propagation of the jet makes simulations of these computationally very expensive. With the availability of more computing power and new codes, simulations of the long-term evolution (Reynolds et al., 2002), in three dimensions (Balsara & Norman, 1992; Clarke et al., 1997; Krause, 2005; Heinz et al., 2006), of very light (Krause, 2003; Saxton et al., 2002a; Saxton et al., 2002b; Carvalho & O’Dea, 2002a, b; Zanni et al., 2003; Sutherland & Bicknell, 2007) and relativistic jets (Aloy et al., 1999; Komissarov, 1999; Rosen et al., 1999; Hardee, 2000; Leismann et al., 2005) became possible. Furthermore, effects of magnetic fields were examined (Clarke et al., 1986; Lind et al., 1989; Kössl et al., 1990; Hardee & Clarke, 1995; Tregillis et al., 2001, 2004; O’Neill et al., 2005; Li et al., 2006; Mizuno et al., 2007; Keppens et al., 2008), although only for relatively dense jets (density contrast ). In this paper, we extend the studies of very light jets to include magnetic fields.
To explore the interaction of jets with the ambient intra-cluster medium and the impact of magnetic fields, we performed a series of magnetohydrodynamic (MHD) simulations of very light jets on the scale of up to 200 kpc (200 jet radii) with a globally consistent magnetic field configuration. A constant ambient density was used to avoid effects of a declining cluster gas density on the structural properties which could contaminate the effect of magnetic fields, while the effects of a density profile were previously described in Krause (2005) for the axisymmetric and three-dimensional case.
After a description of our simulation setup and the numerical method, our results are described: first about the morphology and dynamics, the evolution of the bow shock and the cocoon; then entrainment of ambient gas and the energy budget; and finally the magnetic fields and their evolution as well as their impact on morphology and propagation. Results are then discussed and put in context with observational findings.
To compensate for different propagation speeds of jets with differing density contrasts, plots will use the axial bow shock diameter or jet length, where appropriate. “Full length” refers to the whole simulated length (considering both jets), while “full width” refers to twice the measured (radial) distance from the axis.
2 Setup and Numerical Method
The idea behind the present study was to explore the behaviour of very light jets with non-dominant magnetic fields in a cluster environment, using a plausible global setup for the plasma and the magnetic fields, while still keeping the setup simple enough to see the working physical processes clearly, which is much harder for a complex setup. We performed 2.5D simulations (axisymmetric with 3D vector fields) of both purely hydrodynamic and MHD jets on the scale of up to kpc with a constant ambient gas density, where density contrasts were varied between and to see their effect on the simulations. Jet speed, beam radius, sonic Mach number and magnitude of the helical magnetic field (Gabuzda et al., 2008) were kept fixed, thus yielding a kinetic jet power and plasma varying with density contrast. A summary of the parameters is given in Tab. 1 and 2. The simulations are labelled by a letter and a numeral, indicating the inclusion of magnetic fields (M) vs. pure hydrodynamics (H) as well as their density contrast. Both simplifications – axisymmetry and constant ambient density – were relaxed in a previous hydrodynamic study (Krause, 2005), and their influence is addressed later in the discussion. The initial gas distribution was randomly perturbed on the resolution scale to break the symmetry between both jets.
The bipolar (back-to-back) jets were injected by a cylindrical nozzle (radius , length ) along the Z axis in cylindrical coordinates (), hence allowing for interaction of the backflows in the midplane. The jet radius is resolved by 20 cells. Fully ionized hydrogen () was assumed for both the jet and the external medium. A compressible tracer field was advected with the flow, using a value of for the ambient gas, and and for the jets, which allows the origin of the plasma to be traced back. Optically thin cooling is included in the code but was switched off, since the cooling times for our setup ( years) are significantly longer than the simulated time-scale, even for the shocked ambient gas. This choice also makes the simulations scalable, e.g. to other values of the jet radius, which was chosen arbitrarily as kpc.
In the jet nozzle, all hydrodynamic variables (density, pressure and velocity) are kept constant at all times and a toroidal field is prescribed there, being zero outside the nozzle. A dipolar field centred on the origin is used as the initial condition for the whole computational domain (magnetic moment aligned with the jet axis), although it is mostly confined to the jet due to the strong decrease in magnitude with distance. For global simulations, the constraint enforces closed field lines, which is satisfied by a dipolar field configuration, but not by the common setup of an infinite axial field, which fulfils the constraint locally but not globally. Thus, in our setup, the magnetized jet plasma propagates into the essentially unmagnetized ambient matter. For M3 and especially M4, the magnetic fields become dynamically important and influence the appearance, so an additional run with lowered magnetic fields (M4L) was performed, which is more in line with the other jets. These lightest jets are addressed more specifically in Sect. 3.7.
Table 1 lists the setup parameters: jet sound speed, ambient gas density, ambient gas temperature, and jet nozzle magnetic field (lowered for M4L).
The initial magnetic fields in Tab. 1 are nozzle-averaged initial values. As the poloidal field cannot be kept constant in the nozzle without violating , this field can evolve with time due to the interaction with the enclosing cocoon, quickly adjusting to ( for M4L) but then staying constant. For these nozzle averages, only 90 per cent of the jet radius was considered for M1, M2 and M3, and 80 per cent for M4 and M4L, to exclude cells at the shearing boundary of the nozzle, where high magnetic fields and opposite field directions can occur, while the core of the jet is unchanged. For the plasma , we use the volume-averaged harmonic mean, as very weakly magnetized regions would otherwise misleadingly dominate the average.
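The volume-weighted harmonic mean of the plasma beta can be sketched as follows (a minimal Python/NumPy illustration; the array names and toy values are ours, not taken from the simulation data):

```python
import numpy as np

def mean_plasma_beta(p_therm, p_mag, vol):
    # Plasma beta per cell: ratio of thermal to magnetic pressure.
    beta = p_therm / p_mag
    # Volume-weighted harmonic mean: sum(V) / sum(V / beta).
    # A plain arithmetic mean would be dominated by nearly
    # unmagnetized cells, where beta becomes huge.
    return vol.sum() / (vol / beta).sum()

# Two equal-volume cells: one at equipartition (beta = 1) and one
# almost unmagnetized (beta = 1e6).
p_t = np.array([1.0, 1.0])
p_m = np.array([1.0, 1e-6])
v = np.array([1.0, 1.0])
print(mean_plasma_beta(p_t, p_m, v))  # -> ~2, instead of ~5e5 for the plain mean
```

The harmonic mean thus reflects the dynamically relevant, well-magnetized regions rather than the nearly field-free bulk.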
Since the jets are very underdense with respect to the ambient gas and have a high internal sound speed, a reconfinement shock develops very near the jet nozzle. This shock establishes pressure balance between the jet beam and the cocoon, resulting in a pressure-confined beam. In contrast to freely expanding jets, any imposed opening angle becomes unimportant once the beam is reconfined near the jet inlet. Hence, in contrast to heavier jets, underdense jets quickly find pressure balance with their environment.
The simulations were run until they reach the boundary of the uniform grid which has or cells (for M4 and M4L) and the jet radius () is resolved with 20 cells.
In the following, we will focus only on the MHD jets, as their hydro counterparts are only for comparison (set up exactly as the MHD jets with vanishing magnetic field).
The simulations were performed using the NIRVANA code (Ziegler & Yorke, 1997), which numerically solves the nonrelativistic magnetohydrodynamic equations in three dimensions in Cartesian, cylindrical or spherical coordinates. It is based on a finite-differences discretization in an explicit formulation using operator splitting and uses van Leer’s interpolation scheme, which is second-order accurate. The advection part is solved in conservative form and the magnetic fields are evolved using the constrained transport method, which conserves up to machine roundoff errors. The code was vectorized and shared-memory parallelized (Gaibler et al., 2006) for the NEC SX-6 and SX-8.
3 Results
3.1 Morphology and dynamics
The density and temperature maps of Fig. 1 show snapshots of all runs at a full jet length of kpc, respectively. In the following, M3 will mostly be used for figures, as it has the strongest non-dominant magnetic fields, therefore showing the effects of the magnetic fields best and allowing features to be compared between different figures.
The jet backflow blows up a pronounced cocoon, surrounded by a thick shell of shocked ambient matter. Dense ambient gas is mixed into the cocoon in finger-like structures due to Kelvin–Helmholtz instabilities at the contact surface. Near the jet heads, this instability is suppressed by the magnetic field, which leads to a smoother appearance there. In purely hydrodynamic simulations, this stabilization is absent.
The cocoon is highly turbulent and vortices hitting the jet beam can easily destabilize, deflect or disrupt it if jet densities are low. The Mach number varies considerably along the beam (Krause, 2003; Saxton et al., 2002b) and there is no stable “Mach disc” as seen for heavier jets – the terminal shock moves back and forth and often is not clearly defined.
Because very light jets propagate only slowly, basically hitting the ambient gas as a “solid wall”, the backflow is strong, and the turbulence makes the interaction between both jets in the midplane important. Such jets have to be simulated bipolarly to describe the lateral expansion and hence the global appearance correctly. If only one jet were simulated, the result for very light jets would strongly depend on the boundary condition in the equatorial plane (Saxton et al., 2002b).
The surrounding ambient gas is pushed outwards by the cocoon pressure, driving a bow shock outwards. The bow shock of very light jets differs in shape and strength from that of heavier jets (see Sect. 3.3). It is additionally changed by a density profile in the external medium (Krause, 2005), which increases the aspect ratio with time because increases at the jet head, and thus produces more cylindrical cocoons.
3.2 Defining the cocoon
In the following, we not only measure properties of the bow shock, which is easy to pin down, but also of the cocoon. While generally we define the cocoon as the region filled by jet-originated matter (not including the beam itself), this definition has to be made more precise for the simulation analysis. The strong backflow and the fragile beams of very light jets make the distinction between cocoon and beam difficult, while mixing at the contact discontinuity complicates the assignment of cells to cocoon or ambient matter. We do not attempt to distinguish between beam and cocoon if not stated explicitly (it only seems necessary for energetic investigations), but the distinction between cocoon and ambient matter is necessary, especially for the entrainment measurements later, and thus is described in more detail in this subsection, along with the measurement of the cocoon properties.
3.2.1 Cell assignment
Two properties can be used for the distinction between cocoon and ambient matter: the (compressible) tracer field and the toroidal magnetic field. Tracer field values of and above indicate undisturbed and shocked ambient matter. This is available in all simulations, but mixing with jet matter at the contact discontinuity (due to finite resolution) lowers the tracer and thus requires a threshold value. The cocoon mass is especially sensitive to this threshold value, as the density of the ambient gas is much higher and thus causes large changes of the cocoon mass if the border is shifted.
Figure 2 shows the cocoon mass for a range of tracer thresholds. The injected mass at this time is only ; measured mass above this value is the entrained ambient gas mass.
In contrast, the toroidal field strength can be used for the separation, as the toroidal field is zero initially in the ambient medium and is conserved independently from the other field components. Figure 3 shows the cocoon mass depending on the toroidal magnetic field threshold. Using this method, even cells with only a small mass fraction of jet matter can be assigned to the cocoon. There is a clear break visible, but the cocoon mass continuously increases for lower threshold values until machine accuracy is reached. This has two major problems: first, it is naturally not available for the pure hydro simulations and thus cannot be used to compare HD with MHD simulations. Second, the high sensitivity to jet matter is not a real advantage, as the mass values do not converge nicely and we still have to choose a threshold.
In the following we will use a tracer threshold of which is available for HD and MHD models and gives cocoon masses that do not strongly depend on the tracer threshold. Furthermore, it selects the regions one would consider cocoon also by looking at the other physical variables.
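The tracer-based assignment can be illustrated with a short sketch (Python/NumPy; the toy grid, the threshold scan and the assumption that pure ambient gas carries a tracer value of 1 are ours):

```python
import numpy as np

def cocoon_mass(density, tracer, cell_vol, threshold=0.5):
    # Cells with tracer below the threshold are assigned to the
    # cocoon (tracer = 1: pure ambient gas, assumed here; lower
    # values: partly jet-originated matter from numerical mixing).
    mask = tracer < threshold
    return (density[mask] * cell_vol[mask]).sum()

# Toy 2D grid: a light "cocoon" patch inside dense ambient gas.
n = 64
tracer = np.ones((n, n))
tracer[20:44, 20:44] = 0.1
density = np.where(tracer < 0.5, 0.01, 1.0)
vol = np.ones((n, n))

# Scan the threshold, as in Fig. 2, to check the sensitivity.
for thr in (0.3, 0.5, 0.7, 0.9):
    print(thr, cocoon_mass(density, tracer, vol, thr))
```

In real simulation data, mixing produces a continuum of intermediate tracer values near the contact surface, which is why the measured mass depends on the chosen threshold.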
3.2.2 Shape measurement
To characterize the width of the cocoon, we checked four different measures, which will generally give different results due to the ragged shape of the contact surface. Widths are measured from the symmetry axis (, jet channel) and thus are only “half widths”. Figure 4 shows the temporal evolution of the cocoon width definitions:
maximum width: measured at the maximum position of a cocoon cell,
average width: -averaged over the full jet length,
QB width: measured at one quarter the full jet length backwards of the jet heads,
spheroid width: semi-minor axis of a spheroid with a volume equal to the cocoon volume and the semi-major axis equal to half the full jet length.
The QB width clearly depends strongly on vortices near the contact surface and is not very straight. Despite that, it grows similarly to the Z-averaged width, which mostly has the lowest value of all four definitions. The spheroid width lies between the maximum width and the Z-averaged width. All of these measures can be approximated by power laws, although with somewhat different parameters.
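As an illustration, the four width measures could be computed from an axisymmetric cocoon mask roughly as follows (a Python/NumPy sketch under our own conventions for the grid arrays; the paper does not specify its analysis code):

```python
import numpy as np

def cocoon_widths(mask, z, r):
    # mask[i, j] is True where cell (z[i], r[j]) belongs to the cocoon.
    per_slice = np.where(mask, r[None, :], 0.0).max(axis=1)  # width at each z
    max_width = per_slice.max()                  # outermost cocoon cell
    occupied = per_slice > 0
    avg_width = per_slice[occupied].mean()       # Z-averaged width
    z_occ = z[occupied]
    full_len = z_occ.max() - z_occ.min()
    # QB width: one quarter of the full jet length behind the jet head.
    i_qb = np.argmin(np.abs(z - (z_occ.max() - 0.25 * full_len)))
    qb_width = per_slice[i_qb]
    # Spheroid width: semi-minor axis b of a spheroid with the cocoon
    # volume and semi-major axis a = half the full jet length,
    # V = (4/3) pi a b^2.
    dz, dr = z[1] - z[0], r[1] - r[0]
    volume = (2.0 * np.pi * r[None, :] * dr * dz * mask).sum()
    a = 0.5 * full_len
    spheroid_width = np.sqrt(volume / (4.0 / 3.0 * np.pi * a))
    return max_width, avg_width, qb_width, spheroid_width
```

For an exactly spheroidal test mask, the maximum and spheroid widths agree to within the grid resolution, while the Z-averaged width is smaller, as expected.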
In contrast to the cocoon mass, the cocoon shape does not depend strongly on the tracer limit that is used for its determination (Fig. 5). For limits around 0.5, the differences between the width definitions are larger than the dependence on the tracer limit.
3.3 Evolution of bow shock and cocoon
3.3.1 Cocoon pressure evolution
The low jet density has two main consequences for the evolution of the cocoon pressure: one is the lower jet power (for a fixed jet bulk velocity), which results in a lower cocoon pressure and a generally weaker bow shock. The other is the slow jet head propagation, which makes the propagation time-scale long compared to the dynamical time-scale of travelling pressure waves within the cocoon. Pressure waves from the jet head, together with waves induced by turbulent motion and mixing in the cocoon, try to establish pressure balance within the cocoon and between cocoon and ambient gas, driving the lateral expansion of the cocoon.
Figure 6 shows pressure maps of jets with and at the same lengths. The cocoon of the heavier jet is overpressured by a factor of with respect to the ambient gas, while being a factor of only for the lighter jet (and for this jet at the time of the M1 image).
The strong evolution towards pressure balance is responsible for the much less pronounced high-pressure regions between Mach disc and the advancing bow shock. The bow shock has an elliptical shape with less directional dependence of its strength, more similar to an overpressured bubble, although it is still stronger in axial direction (see Sect. 3.3.2).
The quick pressure adjustment can also be seen in the pressure–density diagrams of Fig. 7. The ambient gas is described by the patch near , the jet nozzle by the cells around . Adiabatic compression and expansion leads to the oblique and longish features present at different positions. Top right of the jet nozzle position are the cocoon grid points, which spread over a large range of density to the right because of mixing with shocked ambient gas, which is the elongated feature top right of the ambient gas position. Comparing the two different simulation snapshots, we find that the pressure distribution quickly adjusts towards the external pressure, in agreement with the findings in Krause (2003), and the cocoon is not strongly overpressured anymore.
Another view on this is the average cocoon pressure, shown in Fig. 8, which has a power-law-like behaviour. For the three models M1, M2 and M3, it strikingly decreases as the reciprocal of the jet length (Tab. 3). While M1 is still very overpressured at the end of the simulation, the cocoon pressure of M3 is already near the ambient gas pressure.
M4 and M4L seem to deviate from this behaviour. At the beginning this is mainly a consequence of the longer-lasting relaxation from initial conditions, where strong shocks are thermalized efficiently, increasing the pressure in the early “cocoon bubble” and because the early phase is shown with higher time resolution. After a jet length of kpc has been reached, they fit into the behaviour of the other simulations, but, as they soon reach the ambient pressure, settle to its value.
It is clear that the cocoon pressure cannot drop much below the ambient pressure and thus approaches its value. At this point we expect the bow shock to softly turn into an ordinary sound wave. This is just about to happen in the last snapshots of M4 and M4L, where there is only a very weak density jump, corresponding to Mach 1.05. The exact value of the average cocoon pressure is insensitive to the exact definition of the cocoon (see Sect. 3.2), but can drop slightly below the ambient pressure due to pressure variation within the cocoon (which can still be as strong as a factor of 2).
The past bow shock is not the only sound wave testifying to the expanding cocoon. Already much before the shock decays, waves and ripples can be seen in the shocked ambient gas (Fig. 9).
3.3.2 Bow Shock
Table 3 lists, for each model, the bow shock full length, bow shock width, cocoon full length, cocoon average width, and average cocoon pressure; the corresponding reference scales are 30 kpc, 15 kpc, 30 kpc and 5 kpc.
The quick decrease in cocoon pressure naturally affects the strength of the bow shock as it is this pressure that drives the shock laterally. Figure 10 shows the temporal evolution of the bow shock strength, in terms of external Mach numbers, for the forward direction (at ) as well as the lateral direction (at ) for jets with different density contrasts.
The bow shocks in forward direction are always stronger than the sideways shocks due to the direct impact of the jet on to the ambient gas. The lighter jets have a much weaker bow shock in all directions and the differences between the axial and lateral direction are much less pronounced.
The axial diameter of the bow shock grows as a power law with exponents (Fig. 11 and Table 3). For the lateral propagation we find similar exponents. This behaviour agrees with self-similar jet models (Falle, 1991; Begelman, 1996; Kaiser & Alexander, 1997; Komissarov & Falle, 1998) and the spherical blastwave approximation (Krause, 2003), which predict an exponent of . At very early times and lasting longer for the lighter jets, we find lower exponents, as for a Sedov blast wave () from the initial conditions.
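Growth exponents of this kind are commonly obtained by a least-squares power-law fit in log–log space; a minimal sketch (Python/NumPy, with synthetic data standing in for the measured bow shock diameters, and an input exponent of 3/5 as the self-similar models predict for a constant-density atmosphere):

```python
import numpy as np

def powerlaw_exponent(t, d):
    # Fit d(t) = c * t**k by linear regression of log d against log t;
    # np.polyfit returns the slope (the exponent k) first.
    k, _ = np.polyfit(np.log(t), np.log(d), 1)
    return k

# Synthetic self-similar growth with exponent 3/5.
t = np.linspace(1.0, 10.0, 50)
d = 2.0 * t**0.6
print(powerlaw_exponent(t, d))  # -> 0.6 (up to floating point)
```

Fitting only the late-time part of the curve avoids contamination by the initial blast-wave phase mentioned above.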
The lighter jets have generally lower Mach numbers, as their kinetic power is lower, thus showing smaller bow shock velocities. The aforementioned analytical models yield an expansion speed
that directly translates into the bow shock Mach number and describes the scaling behaviour of the simulations reasonably well (: jet power). We find values for between and in axial direction and between and (M4L) in lateral direction, the latter being increasingly higher for lighter jets.
A clear deviation from this behaviour is M4 at Myr in axial direction. The bow shock propagates much faster due to the formation of a nose cone. The strong toroidal magnetic field collimates the jet, suppresses the pronounced backflow of M4L, and the Lorentz force of the radial current gives the jet additional thrust for the propagation (see Sect. 3.7). The other light jets (M3 and M4L) may also propagate somewhat faster due to their appreciable magnetic fields.
As the jet pushes the bow shock forward in axial direction, the cocoon length grows similar to the bow shock length (Fig. 12), showing a power law behaviour with similar exponents. Again, M4 shows a higher exponent () due to its additional thrust support in the nose cone. There might also be a slightly faster propagation for the M3 jet, where the magnetic field is not too much below equipartition at the jet inlet (see Fig. 2), although this might also be just a temporal effect due to the jet–cocoon vortex interaction.
The cocoon width, in contrast, shows different power law exponents depending on the density contrast, after the start-up phase is over. We find exponents of (M1), (M2), (M3) and (M4L) for the different models (Table 3). Thus there seems to be a clear trend of decreasing exponents for lower jet densities, which holds true for all our cocoon width measures (Fig. 13). Widths approaching an asymptotic value might mimic a similar behaviour, but so far, this is beyond our simulation data (except for M4). It seems reasonable that this is due to less overpressured cocoons for lighter jets, as it is the cocoon pressure that drives the lateral cocoon expansion (Kaiser & Alexander, 1997; Carvalho & O’Dea, 2002b). If the cocoon pressure equals the ambient pressure, the sideways expansion of the cocoon would come to an end.
Another consequence of this is that for light jets the lateral bow shock is much further away from the corresponding cocoon surface (Fig. 14), as also found by Zanni et al. (2003). Hence, except for the H1/M1 models, the thick layer of shocked ambient gas grows continuously.
The expansion of the cocoon for M4 is much different. After the initial phase the cocoon width settles down to a constant value and does not grow anymore. This is a consequence of the suppressed backflow in the nose cone, which then cannot inflate the cocoon anymore.
3.3.3 Cocoon turbulence
All simulations with non-dominant magnetic fields show pronounced turbulence in their cocoons. This is evident from Fig. 15, which shows the vector fields of velocity and poloidal magnetic field in LIC (line integral convolution) representation. The LIC technique (Cabral & Leedom, 1993) allows a fine-grained depiction of vector fields, especially suitable for turbulence, where structures are present even on the smallest scales due to the turbulent cascade. We extended this technique to additionally show the field magnitude, decomposing the information into brightness (showing the field direction as stream lines) and colour (field magnitude) in HLS colour space.
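A bare-bones version of the LIC algorithm can be sketched as follows (Python/NumPy; fixed streamline length, nearest-neighbour sampling and no normalization refinements — our simplification of Cabral & Leedom's method, not the code used for Fig. 15):

```python
import numpy as np

def lic(vx, vy, noise, length=8):
    # For each pixel, average a white-noise texture along the local
    # streamline (traced forward and backward), so the output is
    # smeared along field lines but stays random across them.
    ny, nx = noise.shape
    out = np.empty_like(noise)
    for j in range(ny):
        for i in range(nx):
            acc, cnt = noise[j, i], 1
            for sgn in (1.0, -1.0):
                x, y = float(i), float(j)
                for _ in range(length):
                    u = vx[int(round(y)), int(round(x))]
                    v = vy[int(round(y)), int(round(x))]
                    norm = np.hypot(u, v)
                    if norm == 0.0:
                        break
                    x += sgn * u / norm   # unit step along the field
                    y += sgn * v / norm
                    ii, jj = int(round(x)), int(round(y))
                    if not (0 <= ii < nx and 0 <= jj < ny):
                        break
                    acc += noise[jj, ii]
                    cnt += 1
            out[j, i] = acc / cnt
    return out
```

The magnitude-to-colour extension described above would then map the local field magnitude onto the hue of each LIC pixel.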
Cocoon turbulence is driven by quasi-periodic “vortex shedding” (Norman et al., 1982) in the jet head, which injects vortices into the cocoon. As these vortices move around and interact, vortex shedding affects the whole cocoon and drives its turbulence. While it occurs in our heavier jets, too, their narrow cocoons suppress vortex interaction and the establishment of turbulence. We note that there may be feedback on the driving mechanism, as cocoon vortices perturb the jet beam and thus influence the vortex shedding process itself.
3.3.4 Aspect Ratio
A characteristic property of the bow shock or cocoon is their aspect ratio (Fig. 16). Depending on the density contrast, after a short initial phase of spherical expansion (), the bow shock aspect ratios grow but converge for large bow shock diameters, approaching for lighter jets ( for M3 and for M4L). This means the bow shock approaches a spherical shape for very light jets. Once again, M4 is different, as the propagation in axial direction is faster, yielding significantly higher aspect ratios than M4L.
The aspect ratios for the cocoons generally increase with jet length and are at early times systematically lower for the lighter jets. However, the light jets soon increase their aspect ratio (earlier for lighter jets) and then at later times, show aspect ratios even higher than their heavy counterparts. As for the cocoon width evolution, we argue that this is due to cocoons, which come to pressure balance with the ambient gas earlier, so that lateral cocoon expansion stalls, but the axial propagation is still growing self-similarly. By comparing Fig. 8 and Fig. 16 one sees that once a source approaches pressure balance with its environment, it drops out of self-similarity and increases its cocoon aspect ratio. For M3 this happens already early, while it does not happen for M1 until the end of the simulation.
3.4 Entrainment
The jet backflow makes the contact surface between the cocoon and the ambient gas Kelvin–Helmholtz unstable, and thus creates fingers of dense ambient matter that reach into the cocoon and are entrained. In numerical simulations this entrained gas is additionally mixed with the jet plasma due to finite numerical resolution. The amount of entrainment can be measured in terms of the cocoon mass, since the mass of jet plasma is usually small compared to the measured cocoon mass. Although the exact numbers depend on the cocoon measurement definition (Sect. 3.2), this seems to be a reasonably robust method.
Figure 17 shows the time evolution of the cocoon mass. The entrained mass grows with a power law exponent only slightly below the exponent of the cocoon volume, showing a slowly decreasing but roughly constant fraction (5–10 per cent) of the initial mass in the occupied volume. However, there is no difference visible between purely hydrodynamic and MHD simulations in the entrained mass, as would have been expected. The reason for this is the missing stabilization of the contact surface, which is discussed later. However, it is evident from M3 in Fig. 18 that the entrainment in the jet head is significantly smaller: the mass in a cylindrical volume ( kpc, radius kpc) in the head region of M3 is without magnetic fields (H3), compared to in the magnetized case, which is more than a factor of lower. Hence entrainment is significantly suppressed in the jet head, but no change could be measured regarding the whole cocoon volume.
3.5 Energy budget
From the quick balancing of pressure within the cocoon, one might expect a strong conversion of (kinetic) jet power to thermal energy. This, in fact, is measured for our simulations.
Figure 19 shows the increase in thermal energy as a fraction of the total injected power. Already for the heaviest jet (M1), most of the injected (kinetic) power appears as thermal energy due to compression and irreversible entropy generation at shocks. The thermal fraction not only increases with time, but is also much higher for the lighter jets, where a thermalization of per cent is reached. Half of the thermal energy gain is found in the cocoon and half in the (shocked) ambient gas. O’Neill et al. (2005) find per cent of the jet power in the thermal ambient gas for their 3D jets with density contrast in a uniform atmosphere, while in our simulations we find per cent, which is in quite good agreement.
Magnetic energy makes only a very small contribution (below 1 per cent), except for M4, which is magnetically dominated and has a magnetic energy contribution rising up to 5 per cent. More than 90 per cent of the magnetic energy is located in the cocoon. For all runs except M4, the magnetic energy that is actually measured is significantly larger than the injected magnetic energy (this effect is stronger for the lighter jets), and it grows faster than just linearly in time – approximately with a power law exponent of (Fig. 20). Hence, other forms of energy seem to be converted into magnetic energy. For M4, the measured magnetic energy is lower than expected from the nozzle values, which may indicate that the additional thrust in the nose cone actually consumes magnetic energy.
The remaining fraction is kinetic energy, which decreases more and more with lower jet density. Although the jet beam is the only energy input to the system (at the nozzle), its contribution to the total kinetic energy is 10 per cent or less, and 50 to 30 per cent is in the cocoon. The remainder, 50 to 70 per cent, comes from the outward-moving shocked ambient gas.
3.6 Magnetic fields
Magnetic fields are not only passive properties of the jet plasma, but an active ingredient for the dynamics. One parameter describing this is the ratio between thermal and magnetic pressure of the plasma (). For the simulations described here, we used a fixed value for jet speed, Mach number and magnetic field. Thus, the plasma cannot be constant throughout the different runs (see Tab. 2). While M1 and M2 have passive magnetic fields, M3 and M4L have fields with significant impact, and for M4 they are even dominant.
The helical field configuration in the jet initiates an intriguing interplay between kinetic and magnetic energy. Although the jet matter is injected without any rotation, the Lorentz force from the helical field generates a toroidal velocity component, as also found by Kössl et al. (1990). This effect is stronger for the runs with stronger magnetic fields (lower plasma ). The rotation does not originate from persisting angular momentum from the jet formation, which should be very small due to the expansion of the jet. Also, it is not continuous throughout the beam and even changes sign at some internal shocks and at interactions with cocoon vortices.
When the plasma reaches the terminal shock, it flows away from the axis radially and turns back, forming the backflow that inflates the cocoon. Rough conservation of angular momentum then produces a radially declining angular velocity (differential rotation). Writing the induction equation in cylindrical coordinates,
it becomes evident that this shearing transforms poloidal field into toroidal field , also transferring kinetic energy into magnetic energy, which explains why the contribution of magnetic fields to the total energy is higher than its injected contribution.
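The shear term in question has the standard axisymmetric form below (our reconstruction, writing the toroidal component of the ideal-MHD induction equation with Ω = v_φ/R; the notation in the original equation may differ):

```latex
% Toroidal component of the ideal-MHD induction equation,
% axisymmetric cylindrical coordinates (R, \phi, Z):
\frac{\partial B_\phi}{\partial t}
  \;=\; R \left( \mathbf{B}_p \cdot \nabla \right) \Omega
  \;-\; \frac{\partial}{\partial R}\!\left( v_R\, B_\phi \right)
  \;-\; \frac{\partial}{\partial Z}\!\left( v_Z\, B_\phi \right),
  \qquad \Omega = \frac{v_\phi}{R}
```

The first term is the shear term: a poloidal field threading a differentially rotating flow (∇Ω ≠ 0) is wound up into toroidal field, while the remaining terms describe advection and compression of B_φ by the poloidal flow.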
We note that this creation of toroidal field in the jet head is not an artefact of axisymmetry, but simply a consequence of allowing three-dimensional vectors in the simulation ( and ). We do not expect this to be much different in full 3D, apart from a naturally more complex structure in the details. What, in contrast, most probably is an artefact of axisymmetry is the persistence of the toroidal field component in the cocoon. The cocoon plasma is highly turbulent (Sect. 3.3.3) with relatively little systematic motion; such turbulence is an intrinsically three-dimensional phenomenon. It can easily convert toroidal and poloidal field into one another, establishing some dynamical equilibrium between those components while maintaining the overall field strength.
Comparing purely hydrodynamical models with the MHD models (Fig. 18), we find that the global properties, such as bow shock and cocoon sizes, are generally robust if the magnetic fields are not dominant (as with M4). The details, though, are different. While the hydro models show a ragged contact surface between jet plasma and ambient gas due to Kelvin–Helmholtz (KH) instabilities excited by the backflow, the MHD runs show a pronounced jet head, which is clearly more stable, since the KH instability is damped by the magnetic fields (e.g. Miura & Pritchett, 1982). Magnetic tension acts as a restoring force on the growing instabilities, suppressing entrainment of ambient matter and “fingers” of dense gas reaching into the backflow, which is evident from the clearly lower average density in the jet head region. The stabilizing effect appears at in the jet head. For the simulations with weaker fields there is no noticeable difference between the magnetized and the pure hydrodynamics case.
Damping of the KH instability by magnetic fields, however, only works with the field component parallel to the instability wave vector, which in turn means that in axisymmetry only the poloidal magnetic field can damp the instabilities at the contact surface.
Although the earlier-mentioned shearing mechanism amplifies magnetic fields and should therefore provide even more damping of KH instabilities, we cannot see this effect further away from the jet head, because in axisymmetry the backward reaction (toroidal to poloidal) cannot work and thus the poloidal component becomes too weak (Fig. 15). As the magnitude of the magnetic field in the cocoon is as strong as in the jet head, it seems reasonable that with balanced magnetic field components in reality, the contact surface could be stabilized.
The toroidal field is directly related to the generating current , which is shown in Fig. 26 as field lines. Our toroidal field setup describes a situation where the poloidal currents leave the nozzle axially in the jet core, turning back in the sheath. As the backflow develops, the poloidal current flows along the contact surface with typical integrated currents of several amperes (Camenzind, 1990; Blandford, 2008). The toroidal field in the cocoon, built-up by the shearing in the jet head, seems to form its own current circuits. The gross radial behaviour (Fig. 22) can be attributed to the relatively uniform distribution of the axial current through the planes perpendicular to the jet beam.
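The integrated current follows from Ampère's law: in axisymmetry, the axial current enclosed within radius R is I = 2πR B_φ/μ₀. A small sketch (Python; the kiloparsec/microgauss example values are illustrative only, not taken from the simulations):

```python
import numpy as np

MU0 = 4e-7 * np.pi     # vacuum permeability [T m / A]
KPC = 3.0857e19        # kiloparsec [m]
MICROGAUSS = 1e-10     # microgauss [T]

def enclosed_current(R, B_phi):
    # Ampere's law around a circle of radius R about the jet axis:
    # the loop integral of B_phi equals mu0 times the enclosed current.
    return 2.0 * np.pi * R * B_phi / MU0

# Example: a 1 microgauss toroidal field at 1 kpc from the axis.
print(enclosed_current(1.0 * KPC, 1.0 * MICROGAUSS))  # -> ~1.5e16 A
```

Even modest toroidal fields on kiloparsec scales thus correspond to enormous integrated currents.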
If the toroidal field is strong in the jet head region, the Lorentz force produces additional thrust for the jet propagation due to the strong radial current component, which is evident for M4, showing a pronounced nose cone, and may also explain the slightly faster propagation of M3 with respect to H3 (Fig. 12).
Inside the beam, the magnetic field stays mostly poloidal, as injected, but near the terminal shock it is compressed axially, directed off the axis and sheared, producing strong toroidal field loops (Fig. 27).
Finally, we turn to volume-weighted 2D histograms of magnetic pressure and thermal gas pressure in Fig. 28, where the contributions from only the jet beam and all the jet plasma is shown separately.
The jet nozzle is located at as a vertical line (constant pressure, but radially varying magnetic field). As the matter flows through the beam, internal shocks (cf. Figs. 6 and 18) cause strong changes in pressure whereas the plasma β remains unchanged (the magnetic field is compressed with the plasma), leading to lines originating from the nozzle location parallel to the overplotted lines. The plasma β somewhat increases along the beam when it interacts with the cocoon vortices, and thus creates some down-shifted parallels. There is no clear separation between the beam and the enclosing cocoon in the beam-only diagram; hence both shear layers of the beam and cocoon gas are contained in the wide area below . Still, there are strong pressure changes indicated by the wide horizontal distribution.
The distribution of cocoon cells spreads widely both to higher and lower magnetic fields from this area. The pronounced trail downwards is the transition to the ambient gas through entrainment; since the ambient gas is essentially unmagnetized, it is located even below the lower border of the figure (Fig. 28). The radial increase of magnetic field in the cocoon yields the extension towards lower plasma β (see also Fig. 25). The spiky features around are single vortices in the outer parts of the cocoon, where the pressure drops towards the centre due to centrifugal forces together with a slight increase of toroidal field. Altogether, the spread of the cocoon cells is considerably larger in magnetic pressure than in thermal pressure.
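Constructing such diagrams is straightforward; the sketch below builds a volume-weighted 2D histogram in the spirit of Fig. 28 from mock data (all names and numbers are hypothetical stand-ins for simulation output):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000

# Mock cell data standing in for simulation output (hypothetical values):
p_th = 10.0 ** rng.uniform(-13, -10, n)    # thermal pressure per cell
p_mag = 10.0 ** rng.uniform(-14, -10, n)   # magnetic pressure B^2/(2 mu0)
vol = rng.uniform(0.5, 2.0, n)             # cell volumes (2.5D: 2 pi r dr dz)

# Volume-weighted 2D histogram in log-log space:
H, xedges, yedges = np.histogram2d(
    np.log10(p_th), np.log10(p_mag), bins=64, weights=vol)

# Lines of constant plasma beta = p_th/p_mag appear as unit-slope
# diagonals; the volume-weighted median beta summarizes the distribution:
order = np.argsort(p_th / p_mag)
cum = np.cumsum(vol[order])
beta_median = (p_th / p_mag)[order][np.searchsorted(cum, 0.5 * cum[-1])]
print(H.sum(), beta_median)
```

Weighting by cell volume rather than cell count is essential in 2.5D, where cells at large radius represent much more plasma than cells near the axis.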
The situation shown in Figs. 28 and 29 is typical for the time evolution of these diagrams. Clearly, some features are appearing, changing and disappearing continuously, such as individual internal shock lines or the cocoon vortex spikes. The general structures in the diagrams persist at all times. There are, however, two systematic changes with time: Firstly, the “cocoon bump” in Fig. 29 () grows due to cocoon expansion, eroding the “ambient bump” (), and moves to the left, faster at early times and then becoming continuously slower. Secondly, as the cocoon pressure drops, the cocoon distribution of Fig. 28 moves towards the left (and somewhat down due to the mostly constant distribution in at late times), and grows with cocoon volume, too.
3.7 The lightest jets
The lightest jet in the series, M4, shows a very different behaviour from the other runs due to its strong magnetic fields, thus a run with lower magnetic fields (M4L) was performed in addition. In this subsection, we focus on the specific properties of and differences between these two runs.
Both simulations show unstable beams, which are temporarily stopped, deflected or disrupted. This is quite natural for the very light jets, where the impact of cocoon vortices hitting the beam is stronger, when the beam shows lower inertia but the cocoon gas is dense due to entrainment and mixing with the dense ambient gas. This destabilization is particularly strong in axisymmetry, as the vortices cannot “miss” the beam as they could in 3D. For M4L, after a strong deflection of the right beam near the nozzle ( Myr), a small region with strong poloidal field piles up just next to the nozzle and creates a magnetic layer () at the beam boundary. This protects the beam from cocoon vortices and entrainment, and from there on inhibits disruptions of the right jet, which then propagates more quickly than the left jet. At the end of the simulation, the right jet is per cent longer than the left jet and shows an almost undisturbed beam up to the jet head. More detailed examination of this phenomenon may be interesting, but as it was only introduced by chance, the details are difficult to reproduce and beyond the scope of this paper. None the less, the overall propagation of the jet within the simulated time (Sect. 3.3.3) is not much affected by this.
Keeping the jet speed and the Mach number fixed, the ratio of the thermal pressures of ambient gas and jet nozzle changes with density contrast, yielding an underpressured jet for M4 and M4L. For M4, the magnetic field in the nozzle is already stronger than equipartition and the Alfvén speed is higher than the sound speed. This run is dynamically dominated by the magnetic field and shows a pronounced nose cone, which is known for jets with strong toroidal fields (Clarke et al., 1986). Magnetic tension pinches the jet matter into a narrow tube of to kpc radius, completely suppressing a backflow and thus preventing the formation of a wide cocoon. The simple case of a plasma column in radial magnetostatic equilibrium keeps constant. If approaches the thermal pressure, the magnetic pinch becomes important. In our case, the toroidal field in the plasma column is relatively homogeneous, showing a (volume-weighted) distribution mostly between and , while the thermal pressure lies (radially decreasing) in the range , thus matching and being just around equipartition. These values are not the ones set by the jet nozzle, although those obey , too. The twisting and shearing processes described in the previous subsection are very strong due to the equipartition-level magnetic fields, the rotation around the jet axis can make up a large fraction of the total velocity, and the toroidal field component grows to the measured values.
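The statement that M4 is magnetically dominated can be checked from the nozzle state alone: the plasma β falls below unity and the Alfvén speed exceeds the sound speed. A minimal sketch in SI units with illustrative numbers (not the actual M4 nozzle parameters):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability [SI]

def plasma_beta(p_th, B):
    """Ratio of thermal to magnetic pressure, beta = p / (B^2 / 2 mu0)."""
    return p_th / (B**2 / (2.0 * MU0))

def alfven_speed(B, rho):
    return B / math.sqrt(MU0 * rho)

def sound_speed(p_th, rho, gamma=5.0 / 3.0):
    return math.sqrt(gamma * p_th / rho)

# Hypothetical nozzle values (illustration only): a tenuous, strongly
# magnetized jet plasma.
rho, p, B = 1e-28, 1e-13, 3e-9
print(plasma_beta(p, B), alfven_speed(B, rho) > sound_speed(p, rho))
```

For these numbers β is well below unity and the Alfvén speed exceeds the sound speed, i.e. the magnetic field rather than the gas pressure controls the dynamics, as described for M4.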
Krause & Camenzind (2001) examined the convergence of a nose cone simulation and found that the Mach disc retreated towards the nozzle and thus did not converge. Also in M4, the Mach disc is very near to the jet nozzle, and the velocities after that shock are subsonic (although the nose cone itself propagates faster than the jet head in M4L). Thus, it is unclear how reliable this run is. We also note that the magnetic pinch is subject to MHD instabilities (Clarke, 1993), which might produce blobs and disrupt the plasma column in 3D. However, as this nose cone is produced by the magnetic tension of the strong toroidal field, this is not applicable to strong poloidal fields, which cannot provide the necessary hoop stress, although it seems difficult to maintain a strong poloidal field along an interacting beam without converting part of it into toroidal field, which then might again pinch the plasma.
4.1 Magnetic Fields
Effects of magnetic fields naturally depend on their strength. Trying to understand the smoothness of jet cocoons in galaxy clusters, we concentrated on magnetic fields in jets which are not dominant, but still have significant effects on the jet dynamics, the best example being the M3 run with average plasma β . It is well known (Miura & Pritchett, 1982) that magnetic tension can damp or suppress Kelvin–Helmholtz (KH) instabilities, and hence it may be the key to stabilizing the contact discontinuity between jet and ambient gas. However, how this applies to the complex case of jet–ambient interaction is not yet known.
We emphasize that much care was taken to use a globally consistent setup for our very light jets, in particular: keeping the bow shock inside the computational domain at all times; simulating bipolar (back-to-back) jets to remove an artificial boundary condition in the midplane and allow interaction of the backflows for a realistic lateral expansion; and using a configuration which confines the magnetic field to the jet and has closed field lines instead of a homogeneous magnetic field reaching to infinity, which is then effectively anchored in the ambient gas. The assumed simplifications, axisymmetry and a constant ambient density, make extraction of the underlying physics easier and effects of relaxing those for hydrodynamic jets were previously investigated by Krause (2005). Thus we expect to at least qualitatively model the situation realistically.
Two main effects arise from the inclusion of magnetic fields: Firstly, in the jet head, we see that the provided magnetic fields in the jet do indeed stabilize the contact surface, which produces a pronounced jet head and lobes, similar to the ones seen in Cygnus A (Carilli et al., 1991) and other classical double radio sources. Effects from an ambient density profile can be excluded due to the prescribed constant density atmosphere. Furthermore, the entrainment of ambient gas is significantly smaller there than without magnetic fields.
Secondly, jets prove to be efficient generators of magnetic energy, transferring part of their huge kinetic power to magnetic fields through shearing in the jet head. This relies on some rotation of the beam plasma, which will (as seen in the simulations) generally be present for a non-zero toroidal field component. Some toroidal field is expected if the mostly axial field in the beam (Bridle & Perley, 1984) is perturbed three-dimensionally and from jet formation models, where the toroidal field is necessary for jet collimation at least at small scales. The shearing mechanism provides a source of magnetic energy for the cocoon and furthermore affects the magnetic field structure at the hotspots and possibly some internal shocks. A radial and toroidal field component in the beam is known to be compressed by the terminal shock and is then visible as a strong magnetic field perpendicular to the jet axis. The jet head shearing provides another mechanism, independent from compression, to greatly enhance the toroidal field and thus produce a perpendicular field component stronger than that expected from compression. For jets pointing more towards the observer, the toroidal field around the hotspot region may become observable.
This may be relevant for several observational findings, one being the smoothness of radio cocoons. We have shown that even if the plasma β is only of order ten, the fields in the backflow and the cocoon respectively will be strong enough to damp KH instabilities at the cocoon–ambient gas interface and yield a morphology much smoother than seen in hydrodynamic simulations, reconciling simulations with observations of sources such as Cygnus A (Lazio et al., 2006), Pictor A (Perley et al., 1997) or Hercules A (Gizani & Leahy, 2003), where the latter seems to be a past high-power source. Due to the 2.5D nature of the simulations, the effect is restricted to the jet head region. In a full 3D simulation, we therefore expect the cocoon–ambient interface to be more stable even further back from the hotspots. The amplification of beam magnetic fields in the “jet head machine” is furthermore consistent with the observation of magnetic fields in the cocoon just somewhat below equipartition (Hardcastle & Croston, 2005; Migliori et al., 2007). Additionally, the magnetic field predominantly perpendicular to the jet axis in weak FR I sources might be related to the expansion of the jet, which by the shearing would create strong toroidal fields in the absence of strong turbulence. Even though the beam rotation can change much due to interaction with the cocoon and shocks and even change sign, the helicity of the toroidal field is not changed and can thus link the field at large scales with the field topology near the black hole (Gabuzda et al., 2008).
For the magnetic field topology in the rest of the cocoon, axisymmetry is a major limitation, contrary to the effects discussed before. Magnetic field in a toroidal configuration cannot damp KH instabilities in axisymmetry since no magnetic tension is available as restoring force, while poloidal field could do so. Fortunately, the jet head-generated toroidal field in the turbulent cocoon would partly be converted into poloidal field in three dimensions, establishing some dynamical equilibrium between the components while keeping the overall field magnitude or amplifying it even further; this makes the cocoon magnetic field a reasonable explanation for the smooth contact surfaces. As a future step we will examine this effect in three dimensions to be able to quantify the amount of damping and suppressed entrainment of ambient gas in the cocoon.
However, despite the inability to actually produce the expected smooth contact surfaces in axisymmetry away from the jet head region, there is no reason to assume that the amplification of magnetic fields should be any different in three dimensions from that shown in our simulations, as the plasma dynamics is not very different and the shearing mechanism in the jet head simply relies on the off-axis flow of plasma, which also happens in 3D. Furthermore, we are not aware of any reason that the field magnitude in the cocoon should be much different in 3D. It is unclear, though, what the spatial distribution of the magnetic field would look like: 3D turbulence might tend to distribute the field strength rather uniformly in the cocoon, whereas the formation of a large-scale poloidal current may establish a radially increasing toroidal field. Observations indicate that magnetic field strengths within the cocoon may vary considerably (Goodger et al., 2008).
It may be interesting to note that the amplification of magnetic field is closely related to dynamo action as in the Sun. The shearing (the Ω effect) is just the same, and solar convection is replaced by jet-driven cocoon turbulence, but the locations of these actions are different and they are externally powered (by the beam thrust) instead of self-sustained. The spatial separation of the two effects and the (at least roughly) isotropic turbulence, however, prevent the formation of an outstanding large-scale poloidal field.
The uncertainty in the magnetic field topology in the cocoon also applies to the distribution of plasma β in the system. We (expectedly) found that the plasma β is unchanged across shocks despite a gradual increase along the beam (which might also be due to limited resolution of the beam and entrainment). Thus the assumption of a fixed fraction of equipartition to generate synchrotron emission maps from hydro simulations seems to be quite justified. However, this was not found to be true for the cocoon, where a wide distribution of β was found, and synchrotron emission derived from hydro models thus may deviate considerably from MHD results. But as mentioned, this result is expected to change in 3D, apart from having relatively low β in the cocoon. Emission maps of our simulations and comparison to hydro models are beyond the scope of this work and will be presented in a subsequent paper.
The amplification of magnetic fields is also particularly interesting for the question of the origin of lobe magnetic fields. De Young (2002) pointed out that equipartition fields in the lobes cannot be passively advected with the plasma from the jet beam due to flux conservation arguments. The beam magnetic fields would have to be of order G or higher, certainly above equipartition, which would result in enormous synchrotron losses, luminosities incompatible with observational limits and probable disruption of the jet due to the magnetic pressure. Hence, the magnetic field must be amplified by some mechanism, and De Young argues for turbulent amplification in the hotspot flow, though it is not easy to meet the necessary requirements for this. The shearing in the jet head, which is seen in our simulations, in contrast, almost inevitably provides this amplification and can therefore explain the strong lobe magnetic fields or at least contribute to their field strength. In fact, the simulations exhibit field magnitudes in the cocoon that are comparable to field magnitudes in the beam and consequently have similar plasma β since the beam and cocoon pressures came to balance. We conclude that shearing due to off-axis flow of the plasma provides a natural explanation for the lobe magnetic fields and allows equipartition jets to inflate an equipartition cocoon.
4.2 Dynamical Evolution
X-ray observations of the ambient cluster gas contain valuable information about several jet and AGN properties and self-similar models can give easy access to underlying physical parameters. In the present paper, we are able to confirm agreement of our numerical simulations with self-similar models (Falle, 1991; Begelman, 1996; Kaiser & Alexander, 1997; Komissarov & Falle, 1998) for the bow shock propagation. Eccentricity of the bow shock and its Mach number seem to be an easy way to compare theoretical models with observations, without the need for uncertain assumptions on the emission of the radio plasma.
The weak and roundish bow shocks in observations indicate that models of very light jets (with density ratios ) are necessary for most cluster sources. Although we chose a simplified setup with a constant ambient gas and axisymmetry, the simulations are in the regime of observed values for various sources and self-similar models generalize this behaviour for declining cluster profiles, which was already examined for very light hydrodynamic jets by Krause (2005). As our runs, with the clear exception of the magnetically dominated M4, propagate as their hydro counterparts, only minor deviations from those results are expected, except where specific source properties are to be included.
Contrary to the bow shocks and the jet length, we find that the cocoon width in general does not evolve self-similarly but for lighter jets grows with lower power-law exponents, and the mean cocoon pressure drops more slowly than expected. Although this may seem unexpected, it was already stated by Kaiser & Alexander (1997) that, contrary to the bow shock, the self-similar evolution of the cocoon depends on the physical model for the post-hotspot flow, and thus deviations are to be expected if these assumptions do not hold in the simulations. Since very light jet cocoons are less overpressured and approach the ambient pressure sooner, the sideways expansion becomes slower and may even stall, letting their aspect ratio (length to width ratio) grow. This is in fact observed by Mullin et al. (2008), who find a wider range of aspect ratios, once the source size approaches kpc. Similar behaviour would be expected for the heavy jets, although at much later times. Thus, cocoon evolution depends sensitively on the question of overpressure, which can be addressed by the strength of the lateral bow shock. Self-similar models, in contrast, assume that the ambient pressure is negligible. Komissarov & Falle (1998) defined two scales, and , between which they found self-similar evolution. The lower bound , where the swept-up mass equals the jet mass, is much smaller than the jet radius in our simulations, and the upper bound , where the ambient pressure becomes important, is comparable to our computational domain size. Accordingly, while they observe the self-similarity being established, we observe its end, explaining why our less overpressured numerical solutions gradually deviate from a self-similar evolution.
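For reference, the self-similar bow shock scaling in a constant-density atmosphere follows from dimensional analysis of the jet power Q and the ambient density ρ_a (Falle, 1991), giving a length growing as t^{3/5}. The sketch below recovers that exponent numerically; the proportionality constant and input numbers are arbitrary placeholders:

```python
import numpy as np

def bow_shock_length(t, Q, rho_a, C=1.0):
    """Self-similar bow shock length in a constant-density atmosphere:
    L = C * (Q / rho_a)**(1/5) * t**(3/5), with C of order unity."""
    return C * (Q / rho_a) ** 0.2 * t ** 0.6

t = np.linspace(1.0, 10.0, 50)          # arbitrary time units
L = bow_shock_length(t, Q=1e39, rho_a=1e-23)  # illustrative magnitudes

# Recover the power-law exponent from a log-log fit, as one would do
# when measuring the expansion in simulation snapshots:
slope = np.polyfit(np.log(t), np.log(L), 1)[0]
print(slope)
```

The same log-log fit applied to measured cocoon widths is what reveals the lower, non-self-similar exponents discussed above.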
Furthermore, cluster density profiles make cylindrical cocoons rather than elliptical ones due to the weaker density contrast at larger distances (Krause, 2005). Altogether, this makes us confident that our simulations reasonably well describe observed cluster sources.
In contrast to bow shocks, measurements of the cocoon shape are complicated by cooling of the relativistic electrons, which limits observations to the outermost parts (lobes). While radio observations show the high-energy electrons in the cocoon as lobes, single-fluid MHD simulations only trace the low-frequency emitting matter and can only show the low-frequency radio morphologies (cf. high and low frequency images in Carilli et al., 1991), which generally suffer from low spatial resolution. Fortunately, this situation will much improve in the future with new telescopes such as LOFAR or the SKA, which will allow more detailed studies of cocoon dynamics and turbulence. Until then, X-ray images of cavities and (in some cases) the inverse-Compton emission off the cosmic microwave background may supplement available low-frequency radio maps.
Scheuer (1982) introduced the “dentist’s drill” to refer to a moving working surface, which widens the jet head and the lobes. Very light jets naturally show extensive cocoons, and varying deflection of the beam widens the jet head; hence, even in axisymmetry, they show something very similar to a “dentist’s drill”. While this does not exclude beam precession (Steenbrugge & Blundell, 2008), it does not require it, and no large precession amplitudes are needed a priori.
For multiple outbursts of different power in the same cluster, indicated by “ghost cavities” (e.g. Fabian et al., 2006; Wise et al., 2007), we expect that their evolution crucially depends on the history of the past outbursts, as these push the dense cluster gas aside, letting the new outburst propagate with a different density contrast. In this case, the new jet might quickly push forward to the old jet size, then resuming its work on the dense ambient gas. The morphology of the cavities may allow the determination of the respective density contrasts and thus could shed light on the outburst history.
The thermal interaction of jets with the intra-cluster medium is less accessible to direct comparison with observations. Slower jet head propagation is responsible for the strong impact of the beam at the working surface and a high thermalization; some conversion of kinetic to thermal energy will additionally occur near or in the beam due to beam destabilization, but may be less effective in 3D. Although the dominant power source is the kinetic jet power, this strong thermalization converts most of the input power to thermal energy – about half of this in the shocked ambient gas and half in a cocoon filled with high-entropy plasma, which eventually may transfer at least part of its energy to the entrained cluster gas. This is in line with findings of other authors (e.g. Reynolds et al., 2002; O’Neill et al., 2005; Zanni et al., 2005), where the latter authors conclude that up to 75 per cent of the energy can be dissipated irreversibly and thus is available for heating in the intra-cluster medium, as required by the X-ray luminosity–temperature relation (Magliocchetti & Brüggen, 2007) and to provide “radio-mode” feedback for models of galaxy evolution (Croton et al., 2006).
Since only the hot gas phase is simulated, effects on the cold or warm phases of the interstellar medium (ISM) of galaxies are difficult to estimate. Clearly, the thermalization efficiency cannot be simply applied to the cold gas. Simulations of multi-phase turbulence in the jet cocoon by Krause & Alexander (2007) with their higher spatial resolution can resolve the different phases and provide a complementary view (“microphysics”) onto the jet–cloud interaction. However, even if the thermal energy is mostly deposited in the hot gas phase (at larger distances), it is evident from our simulations that the jet cocoon is a rich reservoir of turbulent kinetic energy, which will act on the cold gas phase of the galaxy over a time scale corresponding to the decay time scale of the cocoon turbulence. For a jet of power erg s active for years, the turbulent energy stored in the cocoon is expected to be of the order of a few ergs, and it will interact with the cold ISM phases over a time possibly longer than the jet activity itself.
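The order-of-magnitude budget behind such estimates is simple arithmetic; the specific numbers below (jet power, active time, energy fractions) are assumptions for illustration, not the paper's values:

```python
# Back-of-envelope cocoon energy budget (illustrative numbers assumed):
P_jet = 1e46               # jet power [erg/s], hypothetical
t_active = 1e7 * 3.156e7   # 10 Myr activity, converted to seconds
f_thermal = 0.5            # roughly half thermalized in the cocoon (cf. text)
f_turb = 0.1               # assumed fraction ending up as cocoon turbulence

E_total = P_jet * t_active
E_thermal = f_thermal * E_total
E_turb = f_turb * E_total
print(E_total, E_thermal, E_turb)
```

For these inputs the total injected energy is a few times 10^60 erg, so even a modest turbulent fraction leaves a substantial kinetic reservoir to act on the cold ISM after the jet switches off.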
Another interesting result of the present simulations is the excitation of sound waves in the ambient gas by vortices in the turbulent cocoon, which is more effective for the very light jets with their extended cocoons. Vortex shedding (Norman et al., 1982) occurs quasi-periodically in the jet head, and the vortices then are advected with the backflow into the cocoon and provide an intermittent source for the turbulent cascade, producing pressure waves. Waves like these are visible in the Perseus cluster (Fabian et al., 2006; Shabala & Alexander, 2007) and, although being hard to observe, may be a ubiquitous feature in galaxy clusters with current or past jet activity. Their typical wavelength might yield a link to jet dynamics and cocoon turbulence. In the lightest of our jets (M4L), the bow shock is just about to turn into a sound wave and then simply would join the enclosed sound waves. Viscous damping may be a mechanism to reduce the amplitudes in addition to the growing wave area and is another candidate for preventing cooling flows (Fabian et al., 2005), but in our scenario would be related to the jets rather than to the AGN itself.
Axisymmetry naturally imposes some constraints on the dynamics, which have to be considered carefully. Jet beams in high-power sources are essentially axisymmetric objects and effects of the full third dimension are merely perturbations from axisymmetry. However, this obviously is not true when beam stability or non-axisymmetric effects are explored specifically. While 3D jets are generally subject to a greater number of instabilities, for very light jets the increased dimensionality also has an opposing effect: in 3D, cocoon vortices will often miss the beam or be only slightly deflected, which is not possible in axisymmetry, so the beam is destabilized, deflected or disrupted more easily, as is most evident in our lightest run (M4L). As seen in the very light jets of Krause (2005), the beam stability improves when going to full three dimensions. For most results, however, energetics and scaling behaviour are not expected to change significantly in 3D, notable exceptions to this being cocoon turbulence, magnetic field topology and stability of the contact discontinuity.
Cocoon turbulence further away from the jet head certainly will differ with increased dimensionality, as the increased number of degrees of freedom allows vortices to turn in all directions, and interactions between colliding vortices will be different. We expect, though, that the effects on cocoon morphology remain within reasonable limits, as the kinetic energy in the cocoon is lower than the thermal energy by factors of for , and hence effects of thermal pressure will dominate.
5 Conclusions and Summary
We performed a series of axisymmetric hydrodynamic and magnetohydrodynamic simulations of bipolar very underdense jets in a constant density atmosphere. The magnetic field is mostly confined to the jet with a helical topology.
(1) We find that the magnetic fields damp Kelvin–Helmholtz instabilities in the jet head and stabilize it. They produce smoother and more pronounced outer lobes already with a plasma . The entrainment of ambient gas into the cocoon is considerably suppressed there. This morphology is more consistent with observations of powerful double radio sources than are hydrodynamic simulations, which show a ragged cocoon boundary.
(2) Magnetic fields are efficiently amplified in the jet head by shearing as the plasma streams off the jet axis. This originates from a rotation of the beam which we find to be a general result of a toroidal field component being present (yet not necessarily dominant) in the jet. The shearing converts part of the huge kinetic energy into magnetic energy and provides the cocoon with a magnetic field much stronger than expected from flux conservation, in some regions even approaching equipartition. These findings are consistent with recent observations of near-equipartition magnetic fields in cocoons derived from radio/inverse-Compton emission observations. Already in our axisymmetric simulations the fields are in principle strong enough to stabilize the contact surface between the cocoon and the ambient gas all over the cocoon and not only in the jet head. The necessary change in field topology would be an expected consequence of fully three-dimensional turbulence in the cocoon.
(3) The amplified magnetic field is mostly toroidal, resulting in a stronger contribution of the field component perpendicular to the jet axis than expected from pure compression of magnetic fields at the hotspots. It is also expected at locations where a jet widens considerably (as in FR I sources). In the backflow and the cocoon, however, turbulence will probably establish some balance between the magnetic field components, which could not be established in axisymmetry.
(4) The very light jets show round bow shocks with low Mach numbers. We find that the bow shocks evolve self-similarly and hence give a simple link between observations and some underlying physical parameters. The cocoon width, however, evolves self-similarly only for jets in their highly overpressured phases, but grows slower as the cocoons approach pressure balance with the ambient gas and the bow shock Mach number drops. These sources thus are surrounded by thick layers of shocked ambient gas.
(5) The jet cocoon shows highly turbulent motion. It is driven by vortices shed in the jet head, which are advected with the backflow. Interaction of these vortices with the ambient gas excites waves and ripples in the shocked ambient gas, which are joined by the dissolving bow shock at later stages.
(6) The strong thermalization that occurs for very light jets transfers most of the jet power to the thermal energy of the cocoon and the shocked ambient gas, making it available for heating of the cluster gas and radio-mode feedback. In addition to this, the turbulent motion in the cocoon is associated with a considerable amount of kinetic energy ( per cent of the jet power) that may provide efficient feedback onto the cold phase of the galaxy’s interstellar medium.
We thank Paul Alexander and Martin Hardcastle for helpful discussions as well as the anonymous referee for suggestions that further improved this paper. This work was supported by the Deutsche Forschungsgemeinschaft (Sonderforschungsbereich 439). The simulations partly have been carried out on the NEC SX-6 of the HLRS Stuttgart (Germany).
- Alexander & Leahy (1987) Alexander P., Leahy J. P., 1987, MNRAS, 225, 1
- Aloy et al. (1999) Aloy M. A., Ibáñez J. M., Martí J. M., Gómez J.-L., Müller E., 1999, ApJ, 523, L125
- Balsara & Norman (1992) Balsara D. S., Norman M. L., 1992, ApJ, 393, 631
- Begelman (1996) Begelman M. C., 1996, in Carilli C. L., Harris D. E., eds, Cygnus A – Study of a Radio Galaxy. Cambridge University Press, p. 209
- Blandford (2008) Blandford R., 2008, in Rector T. A., De Young D. S., eds, ASP Conference Series Vol. 386, p. 3
- Bridle (1982) Bridle A. H., 1982, in Heeschen D. S., Wade C. M., eds, Extragalactic Radio Sources Vol. 97, p. 121
- Bridle & Perley (1984) Bridle A. H., Perley R. A., 1984, ARA&A, 22, 319
- Cabral & Leedom (1993) Cabral B., Leedom L. C., 1993, in SIGGRAPH ’93. ACM, New York, NY, USA, pp 263–270
- Camenzind (1990) Camenzind M., 1990, in Klare G., ed., Reviews in Modern Astronomy Vol. 3, pp 234–265
- Carilli et al. (1991) Carilli C. L., Perley R. A., Dreher J. W., Leahy J. P., 1991, ApJ, 383, 554
- Carvalho & O’Dea (2002a) Carvalho J. C., O’Dea C. P., 2002a, ApJS, 141, 337
- Carvalho & O’Dea (2002b) Carvalho J. C., O’Dea C. P., 2002b, ApJS, 141, 371
- Clarke (1993) Clarke D. A., 1993, in Röser H. J., Meisenheimer K., eds, Jets in Extragalactic Radio Sources. Vol. 421, p. 243
- Clarke et al. (1997) Clarke D. A., Harris D. E., Carilli C. L., 1997, MNRAS, 284, 981
- Clarke et al. (1986) Clarke D. A., Norman M. L., Burns J. O., 1986, ApJ, 311, L63
- Croston et al. (2005) Croston J. H., Hardcastle M. J., Harris D. E., Belsole E., Birkinshaw M., Worrall D. M., 2005, ApJ, 626, 733
- Croton et al. (2006) Croton D. J. et al., 2006, MNRAS, 365, 11
- De Young (2002) De Young D. S., 2002, New Astron. Rev., 46, 393
- Fabian et al. (2005) Fabian A. C., Reynolds C. S., Taylor G. B., Dunn R. J. H., 2005, MNRAS, 363, 891
- Fabian et al. (2006) Fabian A. C., Sanders J. S., Taylor G. B., Allen S. W., Crawford C. S., Johnstone R. M., Iwasawa K., 2006, MNRAS, 366, 417
- Falle (1991) Falle S. A. E. G., 1991, MNRAS, 250, 581
- Fanaroff & Riley (1974) Fanaroff B. L., Riley J. M., 1974, MNRAS, 167, 31
- Gabuzda et al. (2008) Gabuzda D. C., Vitrishchak V. M., Mahmud M., O’Sullivan S. P., 2008, MNRAS, 384, 1003
- Gaibler et al. (2006) Gaibler V., Vigelius M., Krause M., Camenzind M., 2006, in Nagel W. E., Jäger W., Resch M., eds, High Performance Computing in Science and Engineering ’06, p. 35
- Gizani & Leahy (2003) Gizani N. A. B., Leahy J. P., 2003, MNRAS, 342, 399
- Goodger et al. (2008) Goodger J. L., Hardcastle M. J., Croston J. H., Kassim N. E., Perley R. A., 2008, MNRAS, 386, 337
- Hardcastle & Croston (2005) Hardcastle M. J., Croston J. H., 2005, MNRAS, 363, 649
- Hardee (2000) Hardee P. E., 2000, ApJ, 533, 176
- Hardee & Clarke (1995) Hardee P. E., Clarke D. A., 1995, ApJ, 451, L25
- Heinz et al. (2006) Heinz S., Brüggen M., Young A., Levesque E., 2006, MNRAS, 373, L65
- Kaiser & Alexander (1997) Kaiser C. R., Alexander P., 1997, MNRAS, 286, 215
- Keppens et al. (2008) Keppens R., Meliani Z., van der Holst B., Casse F., 2008, A&A, 486, 663
- Komissarov (1999) Komissarov S. S., 1999, MNRAS, 308, 1069
- Komissarov & Falle (1998) Komissarov S. S., Falle S. A. E. G., 1998, MNRAS, 297, 1087
- Kössl et al. (1990) Kössl D., Müller E., Hillebrandt W., 1990, A&A, 229, 378
- Krause (2003) Krause M., 2003, A&A, 398, 113
- Krause (2005) Krause M., 2005, A&A, 431, 45
- Krause & Alexander (2007) Krause M., Alexander P., 2007, MNRAS, 376, 465
- Krause & Camenzind (2001) Krause M., Camenzind M., 2001, A&A, 380, 789
- Lazio et al. (2006) Lazio T. J. W., Cohen A. S., Kassim N. E., Perley R. A., Erickson W. C., Carilli C. L., Crane P. C., 2006, ApJ, 642, L33
- Leismann et al. (2005) Leismann T., Antón L., Aloy M. A., Müller E., Martí J. M., Miralles J. A., Ibáñez J. M., 2005, A&A, 436, 503
- Li et al. (2006) Li H., Lapenta G., Finn J. M., Li S., Colgate S. A., 2006, ApJ, 643, 92
- Lind et al. (1989) Lind K. R., Payne D. G., Meier D. L., Blandford R. D., 1989, ApJ, 344, 89
- McNamara & Nulsen (2007) McNamara B. R., Nulsen P. E. J., 2007, ARA&A, 45, 117
- McNamara et al. (2005) McNamara B. R., Nulsen P. E. J., Wise M. W., Rafferty D. A., Carilli C., Sarazin C. L., Blanton E. L., 2005, Nature, 433, 45
- Magliocchetti & Brüggen (2007) Magliocchetti M., Brüggen M., 2007, MNRAS, 379, 260
- Meisenheimer et al. (1989) Meisenheimer K., Röser H. J., Hiltner P. R., Yates M. G., Longair M. S., Chini R., Perley R. A., 1989, A&A, 219, 63
- Migliori et al. (2007) Migliori G., Grandi P., Palumbo G. G. C., Brunetti G., Stanghellini C., 2007, ApJ, 668, 203
- Miley & De Breuck (2008) Miley G., De Breuck C., 2008, A&A Rev., 15, 67
- Miura & Pritchett (1982) Miura A., Pritchett P. L., 1982, J. Geophys. Res., 87, 7431
- Mizuno et al. (2007) Mizuno Y., Hardee P., Nishikawa K. I., 2007, ApJ, 662, 835
- Mullin et al. (2008) Mullin L. M., Riley J. M., Hardcastle M. J., 2008, MNRAS, 390, 595
- Norman et al. (1982) Norman M. L., Winkler K. H. A., Smarr L., Smith M. D., 1982, A&A, 113, 285
- O’Neill et al. (2005) O’Neill S. M., Tregillis I. L., Jones T. W., Ryu D., 2005, ApJ, 633, 717
- Perley et al. (1997) Perley R. A., Röser H. J., Meisenheimer K., 1997, A&A, 328, 12
- Reynolds et al. (2002) Reynolds C. S., Heinz S., Begelman M. C., 2002, MNRAS, 332, 271
- Rosen et al. (1999) Rosen A., Hughes P. A., Duncan G. C., Hardee P. E., 1999, ApJ, 516, 729
- Saxton et al. (2002a) Saxton C. J., Bicknell G. V., Sutherland R. S., 2002a, ApJ, 579, 176
- Saxton et al. (2002b) Saxton C. J., Sutherland R. S., Bicknell G. V., Blanchet G. F., Wagner S. J., 2002b, A&A, 393, 765
- Scheuer (1982) Scheuer P. A. G., 1982, in Heeschen D. S., Wade C. M., eds, Extragalactic Radio Sources Vol. 97, pp 163–165
- Shabala & Alexander (2007) Shabala S., Alexander P., 2007, Ap&SS, 311, 311
- Smith et al. (2002) Smith D. A., Wilson A. S., Arnaud K. A., Terashima Y., Young A. J., 2002, ApJ, 565, 195
- Steenbrugge & Blundell (2008) Steenbrugge K. C., Blundell K. M., 2008, MNRAS, 388, 1457
- Sutherland & Bicknell (2007) Sutherland R. S., Bicknell G. V., 2007, ApJS, 173, 37
- Tregillis et al. (2001) Tregillis I. L., Jones T. W., Ryu D., 2001, ApJ, 557, 475
- Tregillis et al. (2004) Tregillis I. L., Jones T. W., Ryu D., 2004, ApJ, 601, 778
- Wise et al. (2007) Wise M. W., McNamara B. R., Nulsen P. E. J., Houck J. C., David L. P., 2007, ApJ, 659, 1153
- Zanni et al. (2003) Zanni C., Bodo G., Rossi P., Massaglia S., Durbala A., Ferrari A., 2003, A&A, 402, 949
- Zanni et al. (2005) Zanni C., Murante G., Bodo G., Massaglia S., Rossi P., Ferrari A., 2005, A&A, 429, 399
- Ziegler & Yorke (1997) Ziegler U., Yorke H. W., 1997, Computer Physics Communications, 101, 54 | 0.881049 | 4.060106 |
We already know the universe to be quite the artist, creating visual masterpieces from the swirling movements of clouds of dust, hot gas and stellar explosions. And now, one rare cosmic occurrence in the vast universe can be heard as well.
A team of researchers have detected, or rather heard, a gravitational wave that is much louder than usual, produced by the merger of two black holes that they dubbed GW190412.
“For the very first time we have ‘heard’ in GW190412 the unmistakable gravitational-wave hum of a higher harmonic, similar to overtones of musical instruments,” Frank Ohme, leader of the Independent Max Planck Research Group, and co-author of the study, said in a statement.
GW190412 was observed by the Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors and the Virgo detector on April 12, 2019, from a merger that took place 1.9 to 2.9 billion light-years away from Earth. The observation is detailed in a study published this week on arXiv.
A binary black hole system consists of two black holes orbiting close around one another, drawing closer and closer until they merge into a single black hole. When two cosmic objects of this size orbit each other, they shake up the fabric of spacetime, creating ripples that spread at the speed of light: gravitational waves.
The reason why the sound was so unique is because the mass of the two black holes was vastly different. One of the black holes was eight times the mass of the Sun, while its opponent had an unfair advantage with a mass 30 times the mass of the Sun. Therefore, the gravitational wave frequency from each of the black holes' orbit was different.
The larger black hole was likely producing a lower frequency, while its smaller companion was producing a higher frequency. Combining those two frequencies together resulted in that musical hum recently detected by the researchers.
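To make the "overtone" idea concrete, here is a toy sketch (illustrative numbers only, not GW190412 data or LIGO analysis code): a signal combining a fundamental tone at twice an assumed orbital frequency with a weaker third harmonic, the kind of extra tone that unequal masses make detectable, with a Fourier transform picking out both peaks.

```python
import numpy as np

# Toy parameters (illustrative only, not fit to GW190412)
f_orb = 10.0  # assumed orbital frequency, Hz
fs = 1000.0   # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)  # 1 second of samples

# Dominant quadrupole emission at 2*f_orb, plus a weaker
# third harmonic at 3*f_orb (prominent for unequal masses)
signal = (np.sin(2 * np.pi * 2 * f_orb * t)
          + 0.2 * np.sin(2 * np.pi * 3 * f_orb * t))

# Fourier transform; frequency resolution is 1 Hz here,
# so bin k corresponds to k Hz
spectrum = np.abs(np.fft.rfft(signal))
peak_bins = set(np.argsort(spectrum)[-2:])  # two strongest bins
print(peak_bins)  # expect the 20 Hz and 30 Hz bins: the hum and its overtone
```

In a real detection the amplitude ratio of the harmonics carries the physics; here it is simply assumed to be 5:1.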
Ever since scientists were able to detect gravitational waves in 2015, they have observed 10 of those black hole mergers. However, they have all been between black holes of similar masses.
"This is the first binary black-hole system we have observed for which the difference between the masses of the two black holes is so large!” Roberto Cotesta, a PhD student in the Astrophysical and Cosmological Relativity department at the Albert Einstein Institute in Potsdam, Germany, and co-author of the study, said in a statement.
The difference in mass between the two black holes allowed more precise measurements of the binary system's properties, such as how far the system is from Earth, the angle from which we view its orbital plane, and how fast the larger black hole spins around its axis, according to the researchers.
It also resulted in overtones in the gravitational wave signal that are much louder than usual observations of this kind, one that scientists had not been able to hear before.
Gravitational waves at higher harmonics, two to three times the fundamental frequency observed so far, were predicted by Einstein's theory of general relativity. However, they had not been observed until GW190412.
The new observation also provides new insight into the mysterious activity of black holes, and how these unusual pair-ups may be taking place more frequently than scientists previously believed.
"We have observed the tip of the iceberg of the binary population composed of stellar-mass black holes,” Alessandra Buonanno, director of the Astrophysical and Cosmological Relativity department at the Albert Einstein Institute, and co-author of the study, said in a statement. | 0.829479 | 3.944912 |
It turns out that Mercury has a dust ring too. Image Credit: Mary Pat Hrybyk-Keith / NASA
A dust ring like that found in the orbits of the Earth and Venus has been found in the orbit of Mercury as well.
The discovery was made by solar scientist Guillermo Stenborg and colleagues who analyzed images from NASA's twin Solar and Terrestrial Relations Observatory (STEREO) spacecraft.
"People thought that Mercury, unlike Earth or Venus, is too small and too close to the sun to capture a dust ring," said Stenborg. "They expected that the solar wind and magnetic forces from the sun would blow any excess dust at Mercury's orbit away."
As it turned out however, there was far more dust than anyone had expected.
"It wasn't an isolated thing," said study co-author Russell Howard. "All around the sun, regardless of the spacecraft's position, we could see the same 5 percent increase in dust brightness, or density."
"That said something was there, and it's something that extends all around the sun."
Based on the team's calculations, Mercury's dust ring is around 9.3 million miles wide.
Exactly how this dust came to be there, however, currently remains unclear.
Source: Space.com
Ceres was the first asteroid to be discovered, first noticed on January 1, 1801, by the astronomer Giuseppe Piazzi in Sicily.
The asteroid was found after Piazzi followed up a mathematical prediction (later shown to be unfounded) that there should be a planet between Mars and Jupiter.
Initially, Ceres was classified as a planet, but as more asteroid-belt objects were discovered, it was reclassified as an asteroid.
Its status changed again in 2006, when it was promoted to dwarf planet, a classification it shares with Pluto.
It was named in honor of the Roman goddess of agriculture
Piazzi named his discovery after Ceres, the Roman goddess of the harvest and of corn.
The element cerium, named after Ceres, is the most abundant of the rare-earth metals and is (among other things) a fission product of plutonium, thorium, and uranium.
It has mysterious bright spots
As NASA's Dawn spacecraft raced toward the dwarf planet at the end of 2014 and early 2015, astronomers found two unexpected bright spots at about 19 degrees north latitude on Ceres, inside a crater.
There seem to be no mounds or other features near these spots, suggesting that they are not of volcanic origin.
The bright spots indicate a highly reflective material, probably water ice or salts, the researchers say. Members of the Dawn team hope that the spacecraft will solve the mystery.
Ceres may have a water vapor plume
The Herschel space observatory recently detected water vapor emanating from Ceres. The vapor could be released when meteorite impacts expose subsurface ice, allowing it to sublimate into space.
Ceres may host a subsurface ocean
Geysers of water vapor would indicate the presence of a subsurface ocean on Ceres, which might be able to support life as we know it, some scientists say.
The icy moons of the outer solar system, such as Jupiter's satellite Europa and Saturn's moon Enceladus, are believed to have underground oceans, apparently kept liquid by the tidal forces generated by the gravity of neighboring moons and their large host planets.
Ceres would not have experienced such tidal forces, but it could retain radiogenic heat from elements in its interior.
Unlike most other members of the asteroid belt, Ceres is round, because it is large enough for its own gravity to pull its shape into a sphere.
Scientists also believe that such round bodies have differentiated interiors, meaning that there are distinct zones inside them. Ceres probably has a rocky core, an icy mantle, perhaps some subsurface liquid water, and a dusty top layer.
It has an atmosphere
Ceres is relatively far from the Sun, but scientists believe its surface temperature can rise to about minus 37 degrees Fahrenheit (minus 38 degrees Celsius).
If there is water ice on the surface, it will quickly sublimate, changing directly into gas, which can create a tenuous atmosphere around the dwarf planet. That said, only a few sublimation observations have been made so far. Dawn will be looking for more.
Chalk another one up for Citizen Science. Earlier this month, researchers announced the discovery of 24 new pulsars. To date, thousands of pulsars have been discovered, but what’s truly fascinating about this month’s discovery is that came from culling through old data using a new method.
A pulsar is a dense, highly magnetized, swiftly rotating remnant of a supernova explosion. Pulsars were first discovered by Jocelyn Bell Burnell and Antony Hewish in 1967. The discovery of a precisely timed radio beacon initially suggested to some that they were the product of an artificial intelligence. In fact, for a very brief time, pulsars were known as LGMs, for "Little Green Men." Today, we know that pulsars are the product of the natural death of massive stars.
The data set used for the discovery comes from the Parkes 64-metre radio observatory in New South Wales, Australia. The installation was the first to receive telemetry from the Apollo 11 astronauts on the Moon and was made famous in the movie The Dish. The Parkes Multi-Beam Pulsar Survey (PMPS) was conducted in the late 1990s, making thousands of 35-minute recordings across the plane of the Milky Way galaxy. This survey turned up over 800 pulsars and generated 4 terabytes of data. (Just think of how large 4 terabytes was in the 1990s!)
The nature of these discoveries presented theoretical astrophysicists with a dilemma. Namely, the number of short period and binary pulsars was lower than expected. Clearly, there were more pulsars in the data waiting to be found.
Enter Citizen Science. Using a program known as Einstein@Home, researchers were able to sift through the recordings using innovative modeling techniques to tease out 24 new pulsars from the data.
“The method… is only possible with the computing resources provided by Einstein@Home,” Benjamin Knispel of the Max Planck Institute for Gravitational Physics told the MIT Technology Review in a recent interview. The study utilized over 17,000 CPU core-years to complete.
Einstein@Home is a program uniquely adapted to accomplish this feat. Begun in 2005, Einstein@Home is a distributed computing project which utilizes computing power while machines are idling to search through downloaded data packets. Similar to the original distributed computing program SETI@home, which searches for extraterrestrial signals, Einstein@Home culls through data from LIGO (the Laser Interferometer Gravitational-Wave Observatory) looking for gravity waves. In 2009, the Einstein@Home survey was expanded to include radio astronomy data from the Arecibo radio telescope and later the Parkes observatory.
Among the discoveries were some rare finds. For example, PSR J1748-3009 has the highest known dispersion measure of any millisecond pulsar. (The dispersion measure quantifies the column density of free electrons along the line of sight to the pulsar, which delays the arrival of the pulse's lower frequencies.) Another find, J1750-2531, is thought to belong to a class of intermediate-mass binary pulsars. Six of the 24 pulsars discovered were part of binary systems.
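That frequency-dependent delay is exactly how dispersion measures are determined in practice. Below is a minimal sketch of the standard cold-plasma delay formula; the DM value used is an illustrative assumption, not the published figure for PSR J1748-3009.

```python
# Dispersion delay of a radio pulse relative to infinite frequency:
#   t_delay [s] ~= 4.149e3 * DM / f_MHz^2
# with DM in pc cm^-3 and f in MHz (standard cold-plasma formula).
K_DM = 4.149e3  # MHz^2 pc^-1 cm^3 s

def dispersion_delay(dm, f_mhz):
    """Arrival delay in seconds at frequency f_mhz for a given DM."""
    return K_DM * dm / f_mhz**2

# Illustrative numbers: a high-DM millisecond pulsar observed at 1400 MHz
dm = 400.0  # pc cm^-3 (assumed for illustration)
delay = dispersion_delay(dm, 1400.0)
print(f"{delay:.3f} s")  # ~0.847 s of extra travel time at 1.4 GHz
```

Surveys exploit this by trying many trial DM values and keeping the one that best sharpens the de-dispersed pulse.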
These discoveries also have implications for the ongoing hunt for gravity waves by such projects as LIGO. Specifically, a thorough census of binary pulsars in the galaxy will give scientists a model for the predicted rate of binary pulsar mergers. Unlike radio surveys, LIGO seeks to detect these events via the copious amount of gravity waves such mergers should generate. Begun in 2002, LIGO consists of two gravity wave observatories, one in Hanford, Washington and one in Livingston, Louisiana, just outside of Baton Rouge. Each LIGO detector consists of two 4-kilometre Fabry-Pérot arms in an “L” configuration which allow for ultra-precise measurements of a 200-watt laser beam shot through them. Two detectors are required to pin-point the direction of an incoming gravity wave on the celestial sphere. You can see the orientation of the “L”s on the display in the Einstein@Home screensaver. Two geographically separate detectors are also required to rule out local interference. A gravity wave from a galactic source would ripple straight through the Earth.
Such a movement would be tiny, on the order of 1/1,000th the diameter of a proton, unnoticed by all except the LIGO detectors. To date, LIGO has yet to detect gravity waves, although there have been some false alarms. Scientists regularly interject test signals into the data to see if system catches them. The lack of detection of gravity waves by LIGO has put some constraints on certain events. For example, LIGO reported a non-detection of gravity waves during the February 2007 short gamma-ray burst event GRB 070201. The event arrived from the direction of the Andromeda Galaxy, and thus was thought to have been relatively nearby in the universe. Such bursts are thought to be caused by neutron star and/or black holes mergers. The lack of detection by LIGO suggests a more distant event. LIGO should be able to detect a gravitational wave event out to 70 million light years, and Advanced LIGO (AdLIGO) is set to go online in 2014 and will increase its sensitivity tenfold.
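The scale of that displacement can be sanity-checked from the definition of strain, the fractional change in arm length: delta_L = h * L. The strain amplitude below is an assumed, illustrative value, not a measurement.

```python
# Strain h is a fractional length change: delta_L = h * L
L_arm = 4000.0             # LIGO arm length, metres
h = 4e-22                  # assumed strain amplitude (illustrative)
proton_diameter = 1.7e-15  # metres, approximate

delta_L = h * L_arm
ratio = delta_L / proton_diameter
print(f"arm length change: {delta_L:.1e} m, "
      f"about 1/{1 / ratio:.0f} of a proton diameter")
```

For this assumed strain the arm length changes by a few times 10^-18 metres, consistent with the "1/1,000th of a proton" scale quoted above.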
Knowledge of where these potential pulsar mergers are, from discoveries such as the Parkes radio survey, will also give LIGO researchers clues about targets to focus on. “The search for pulsars isn’t easy, especially for these “quiet” ones that aren’t doing the equivalent of “screaming” for our attention,” says LIGO Livingston Data Analysis and EPO Scientist Amber Stuver. The LIGO consortium developed the data analysis technique used by Einstein@Home. The direct detection of gravitational waves by LIGO or AdLIGO would be an announcement perhaps on par with CERN’s discovery of the Higgs boson last year. This would also open up a whole new field of gravitational wave astronomy, and perhaps give new stimulus to the European Space Agency’s proposed Laser Interferometer Space Antenna (LISA), a space-based gravity wave detector. Congrats to the team at Parkes on their discovery… perhaps we’ll have the first gravity wave detection announcement out of LIGO in years to come as well!
-Read the original paper on the discovery of 24 new pulsars.
-Parkes radio telescope image is copyrighted and used with the permission of CSIRO Operations Scientist John Sarkissian.
-For a fascinating read on the hunt for gravity waves, check out Gravity’s Ghost. | 0.899292 | 3.299367 |
GREENBELT, Md. (NASA PR) — NASA’s Solar Terrestrial Relations Observatory, or STEREO-A spacecraft, captured these images of comet ATLAS as it swooped by the Sun from May 25 – June 1. During the observations and outside STEREO’s field of view, ESA/NASA’s Solar Orbiter spacecraft crossed one of the comet’s two tails.
PARIS (ESA PR) — ESA’s Solar Orbiter will cross through the tails of Comet ATLAS during the next few days. Although the recently launched spacecraft was not due to be taking science data at this time, mission experts have worked to ensure that the four most relevant instruments will be switched on during the unique encounter.
by Jeanette Kazmierczak, NASA's Goddard Space Flight Center
GREENBELT, Md. (NASA PR) — For the first time, NASA’s Neil Gehrels Swift Observatory tracked water loss from an interstellar comet as it approached and rounded the Sun. The object, 2I/Borisov, traveled through the solar system in late 2019.
“Borisov doesn’t fit neatly into any class of solar system comets, but it also doesn’t stand out exceptionally from them,” said Zexi Xing, a graduate student at the University of Hong Kong and Auburn University in Alabama who led the research. “There are known comets that share at least one of its properties.”
MOUNTAIN VIEW, Calif. (SETI Institute PR) — Discovered in December, Comet ATLAS was expected to become the brightest comet of 2020, visible to the naked eye. Several days ago, however, astronomers began to suspect that the comet had split into multiple pieces when it began dimming rapidly. At Unistellar, this created a unique opportunity to summon our community of citizen astronomers together to collect a high-quality image of this beautiful, but dying cosmic phenomenon.
PARIS (ESA PR) — Since March 2017, ESA’s NELIOTA project has been regularly looking out for ‘lunar flashes’ on the Moon, to help us better understand the threat posed by small asteroid impacts. The project detects the flash of light produced when an asteroid collides energetically with the lunar surface, and recently recorded its 100th impact. But this time, it was not the only one watching.
TOKYO (NAOJ PR) — Astronomers at the National Astronomical Observatory of Japan (NAOJ) have analyzed the paths of two objects heading out of the Solar System forever and determined that they also most likely originated from outside of the Solar System. These results improve our understanding of the outer Solar System and beyond.
PARIS (ESA PR) — Scientists analysing the treasure trove of images taken by ESA’s Rosetta mission have turned up more evidence for curious bouncing boulders and dramatic cliff collapses.
Rosetta operated at Comet 67P/Churyumov-Gerasimenko between August 2014 and September 2016, collecting data on the comet’s dust, gas and plasma environment, its surface characteristics and its interior structure.
PASADENA, Calif. (NASA PR) — A newly discovered comet has excited the astronomical community this week because it appears to have originated from outside the solar system. The object — designated C/2019 Q4 (Borisov) — was discovered on Aug. 30, 2019, by Gennady Borisov at the MARGO observatory in Nauchnij, Crimea. The official confirmation that comet C/2019 Q4 is an interstellar comet has not yet been made, but if it is interstellar, it would be only the second such object detected. The first, ‘Oumuamua, was observed and confirmed in October 2017.
NASA selected two projects for funding focused on developing in-space welding technologies as part of its recent round of Small Business Innovation Research (SBIR) awards.
The space agency selected Busek Company of Natick, Mass., and Made in Space of Jacksonville, Fla., for phase 1 awards worth up to $125,000 apiece for six months.
“Busek proposes to initiate the development of a semi-autonomous, teleoperated welding robot for joining of external (or internal metallic uninhabited volume at zero pressure) surfaces in space,” according to the proposal summary. “This welding robot will be an adaptation of a versatile Busek-developed system called SOUL (Satellite On Umbilical Line) with a suitable weld head attached to it.”
PARIS, 19 June 2019 (ESA PR) — ‘Comet Interceptor’ has been selected as ESA’s new fast-class mission in its Cosmic Vision Programme. Comprising three spacecraft, it will be the first to visit a truly pristine comet or other interstellar object that is only just starting its journey into the inner Solar System.
WASHINGTON (NASA PR) — While headlines routinely report on “close shaves” and “near-misses” when near-Earth objects (NEOs) such as asteroids or comets pass relatively close to Earth, the real work of preparing for the possibility of a NEO impact with Earth goes on mostly out of the public eye. | 0.905309 | 3.521208 |
New data from the Atacama Large Millimeter/submillimeter Array (ALMA) and other telescopes have been used to create this stunning image showing a web of filaments in the Orion Nebula. These features appear red-hot and fiery in this dramatic picture, but in reality are so cold that astronomers must use telescopes like ALMA to observe them.
The ESOcast Light is a series of short videos bringing you the wonders of the Universe in bite-sized pieces. The ESOcast Light episodes will not be replacing the standard, longer ESOcasts, but complement them with current astronomy news and images in ESO press releases. Credit: ESO
This spectacular and unusual image shows part of the famous Orion Nebula, a star formation region lying about 1350 light-years from Earth. It combines a mosaic of millimeter-wavelength images from the Atacama Large Millimeter/submillimeter Array (ALMA) and the IRAM 30-meter telescope, shown in red, with a more familiar infrared view from the HAWK-I instrument on ESO’s Very Large Telescope, shown in blue. The group of bright blue-white stars at the upper-left is the Trapezium Cluster — made up of hot young stars that are only a few million years old.
The wispy, fibre-like structures seen in this large image are long filaments of cold gas, only visible to telescopes working in the millimeter wavelength range. They are invisible at both optical and infrared wavelengths, making ALMA one of the only instruments available for astronomers to study them. This gas gives rise to newborn stars — it gradually collapses under the force of its own gravity until it is sufficiently compressed to form a protostar — the precursor to a star.
This pan sequence shows part of the famous Orion Nebula star formation region. At the start we see the bright Trapezium Cluster of hot young stars and then see the strange pattern of narrow filaments of cold gas, which appear red in this view from ALMA. The background blue image, which shows the stars and other features, comes from the HAWK-I camera on ESO’s Very Large Telescope. Credit: ESO/H. Drass/A. Hacar/ALMA (ESO/NAOJ/NRAO). Music: Johan B. Monell
The scientists who gathered the data from which this image was created were studying these filaments to learn more about their structure and make-up. They used ALMA to look for signatures of diazenylium gas, which makes up part of these structures. Through doing this study, the team managed to identify a network of 55 filaments.
The Orion Nebula is the nearest region of massive star formation to Earth, and is therefore studied in great detail by astronomers seeking to better understand how stars form and evolve in their first few million years. ESO’s telescopes have observed this interesting region multiple times, and earlier ESO releases describe previous discoveries there.
This video starts with a broad view of the sky and zooms in on the familiar constellation of Orion (The Hunter). We then get a closeup view of the Orion Nebula star formation region. In the final sequence we see the strange red filaments of cool gas that ALMA has revealed. Credit: ESO, N. Risinger (skysurvey.org), H. Drass, A. Hacar, ALMA (ESO/NAOJ/NRAO). Music: Johan B. Monell
This image combines a total of 296 individual datasets from the ALMA and IRAM telescopes, making it one of the largest high-resolution mosaics of a star-formation region produced so far at millimeter wavelengths.
Publication: A. Hacar, et al., “An ALMA study of the Orion Integral Filament: I. Evidence for narrow fibers in a massive cloud,” A&A, 2018; doi:10.1051/0004-6361/201731894 | 0.83452 | 3.915405 |
It’s almost three months since a team of scientists announced it had detected polarised light from the afterglow of the Big Bang. But questions are still being asked about whether cosmic dust may have clouded their discovery.
The latest, and most damning, piece was in Nature News last week.
What made the original announcement from the Background Imaging of Cosmic Extragalactic Polarisation (BICEP2) team so exciting was that the twisting pattern on the sky could be caused by gravitational waves.
If true, as we wrote on The Conversation at the time, then these gravitational waves could come from the very earliest times in our universe, a trillionth of a trillionth of a trillionth of a second after it all began. The twisting pattern would be a unique window into these early times.
But as the US astronomer Carl Sagan pointed out: “Extraordinary claims require extraordinary evidence.” As much as there was excitement over the BICEP2 announcement, there were also many questions.
And then it all went crazy
It is normal that scientists debate their new findings and confront them against existing theories and data. This is how science works. Such peer-review is a key aspect of the successful, centuries old tradition of the scientific method.
Ordinarily such scientific debate attracts little attention. But with a discovery that could explain the earliest moments of our universe, the stakes were high, and things became very public.
What about the dust?
At the heart of this heated discussion is something quite cold – cosmic dust.
The light from the Cosmic Microwave Background (CMB) has to pass through a lot of intervening material as it travels for nearly 14 billion years to reach our telescopes.
Our galaxy, in particular the cold dust grains drifting within it, is a very important source of confusion when trying to understand just how much of the light hitting the telescope is from the CMB and how much from the stuff in the way.
Think of trying to take a picture of a beautiful sunset in a sandstorm and you’re getting close.
How you account for dust in our galaxy is crucial. Do it wrong and you can mistake it for the signal you want to find. The best way to remove this dust from the signal is to map the sky in many frequencies (or colours) of light.
The BICEP2 team only had one frequency available in a bid to maximise how sensitive a picture they could make. They then relied on other measurements of the dust to make up for this.
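The multi-frequency separation BICEP2 lacked can be sketched as a tiny linear-algebra problem: if the CMB contributes equally at all frequencies (in thermodynamic units) while dust brightness follows a known power law, maps at two frequencies suffice to solve for both components in each pixel. Every number below is an assumption chosen for illustration, not a Planck or BICEP2 value.

```python
import numpy as np

# Assumed mixing model per pixel (thermodynamic units):
#   m(nu) = cmb + dust * (nu / nu0)**beta
nu = np.array([150.0, 353.0])  # observing frequencies, GHz (assumed)
nu0, beta = 353.0, 3.5         # dust pivot frequency and spectral index (assumed)

A = np.column_stack([np.ones(2), (nu / nu0) ** beta])  # 2x2 mixing matrix

# Synthetic "sky": known CMB and dust amplitudes
true_cmb, true_dust = 1.0, 0.6
maps = A @ np.array([true_cmb, true_dust])

# With two frequencies and two components, invert exactly
cmb_est, dust_est = np.linalg.solve(A, maps)
print(cmb_est, dust_est)  # recovers the input amplitudes
```

With a single frequency the matrix has one row and the system is underdetermined, which is exactly why BICEP2 had to lean on external dust estimates.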
The European Space Agency’s Planck satellite is currently doing just this. Unfortunately, this data is not fully processed yet for general use.
A quest for a dusty image
So the BICEP2 team decided to estimate the amount of dust using theoretical models, as well as any data available at that time.
One such measurement – and this is where things get messy – is a digitised PowerPoint slide of a Planck work-in-progress map of the sky, shown at a conference talk.
This is certainly unusual and not to be encouraged (especially as it was preliminary analysis) but not necessarily cause to throw out the result.
However, meticulous reanalysis of both the Planck picture and the BICEP2 data by Princeton physicist Raphael Flauger shows that BICEP2 has likely been overly optimistic about the level of contamination from the dust.
A paper that presents this analysis suggests that for all but the lowest estimates of the confusing signal from the dust, the BICEP2 results show no strong evidence for gravitational waves.
Yet, as was the case two months ago, Planck (and other currently running experiments) will have the final say on all this.
What lessons we can learn?
Now the dust is settling on BICEP2 (or at least until Planck releases its results in October) we can ask two important questions, regardless of what has or has not happened within the team’s analysis:
what happens if Planck shows BICEP2 to have been right? Who will get to claim the detection, given that, seemingly, no one could actually be sure BICEP2 was correct without Planck?
how has the debate impacted the public’s view of science, from the high of the initial announcement to the low of the blog-based questioning and criticism?
As a community, we have to decide whether high profile announcements made before publications are peer-reviewed run the risk of the public becoming jaded of science, especially if claims are later retracted.
Or perhaps gaining the public’s attention is worthwhile, along the lines of Irish wit Brendan Behan’s remark: “There’s no such thing as bad publicity, except your own obituary.”
Discoveries and debates such as this one will always attract public and media attention. This can only be good for science so long as the public understands that real science is never a straightforward process. It is a slow, diligent process and for every big step forward there are a few back.
There will be discussions and disagreements among scientists along the way. That’s all just part of the process of trying to advance the boundaries of our knowledge. | 0.873061 | 3.893696 |
The parent asteroid of December's Geminid meteor shower, 3200 Phaethon, is about to make a historically close flyby. Get ready to watch it race across the sky.
We have a fantastic opportunity to see a unique asteroid brighter than it's ever been observed. 3200 Phaethon (FAY-eh-thon), the size of a rural hamlet, will pass within 10.3 million kilometers of Earth at 6 p.m. Eastern Time (23:00 UT) on December 16th. Just before closest approach, it will reach magnitude 10.7, bright enough to track in a 3-inch telescope. And I do mean track. Hold onto your eyepiece! This thing will be scooting along at up to 15° per day, or 38″ a minute (about the distance between Albireo and its companion star) — fast enough to cross the field of view like a slow-moving satellite.
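As a quick sanity check on that sky motion (a sketch in Python; the 15° per day figure comes from the paragraph above, everything else is unit conversion):

```python
# Convert Phaethon's quoted sky motion of 15 degrees per day
# into arcseconds per minute of time.
DEG_PER_DAY = 15.0
ARCSEC_PER_DEG = 3600.0
MINUTES_PER_DAY = 24 * 60

rate_arcsec_per_min = DEG_PER_DAY * ARCSEC_PER_DEG / MINUTES_PER_DAY
print(f"{rate_arcsec_per_min:.1f} arcsec per minute")  # 37.5, i.e. the ~38" quoted
```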
Whenever I see an asteroid or comet move in real time, I'm reminded of a particular 10-km-wide rock that slammed into the Yucatan 65 million years ago and led to the extinction of the dinosaurs. To a T. rex with a telescope, the coming nemesis would have looked like an innocent pinprick of light the night before, inching across the stars just like Phaethon will for us. Only in this instance, we needn't worry about an impact. While classified as a potentially hazardous asteroid, a look ahead shows that Phaethon will keep a safe distance for at least the next four hundred years.
Phaethon distinguishes itself in other ways. It comes closer to the Sun than any other named asteroid, with a perihelion distance of 0.14 AU, or less than half that of Mercury. If lead melts on the innermost planet, where the temperature reaches 800°F (427°C), Phaethon's got it beat by a mile. Its scorched surface tops out around 1200°F (627°C), hot enough for lead to flow like water and aluminum to turn to putty.
Getting roasted by the Sun every 524 days explains Phaethon's other claim to fame as the parent of the Geminid meteor shower. Shortly after it was discovered on October 11, 1983, with the orbiting Infrared Astronomical Satellite (IRAS), American astronomer and comet researcher Fred Whipple noticed that Phaethon's orbit matched that of the Geminids, clinching it as the shower's source.
Most meteor showers originate with comets. When a comet gets near the Sun, solar heating vaporizes dust-rich ice from its nucleus, and the push of sunlight (radiation pressure) blows it back to form a coma and tail. The ejected material forms a trail or stream along the comet's orbit; when the Earth slices through it, the dust slams into the atmosphere and vaporizes, particle by particle, in a meteor shower.
So how do you get dust from an asteroid? An early hypothesis, now discarded, posited that Phaethon might have ice beneath its surface that vaporizes around the time of perihelion. Sounds plausible, but measurements show the asteroid gets too hot for ice to survive either inside or out. Still, you might get blood from a turnip if that turnip is composed of carbonaceous (carbon- and water-rich) minerals like our friend 3200 Phaethon and heated to a temperature high enough to cause the material to break down, crack, and crumble into dust.
If Phaethon was once a traditional comet, it is no longer. These days, it's considered a rock comet, a rare type of object that outgasses rock particles instead of ice.
When you pour boiling water on cold glass, the glass shatters from thermal shock. Similarly, extreme solar heating at the time of perihelion causes rapid expansion of Phaethon's crustal rocks. They fracture and release dust that's carried away by the radiation pressure of sunlight. Experiments have shown that the dust particles are about a millimeter across, approximately the size of the Geminids.
For a brief time during the 2009 perihelion, astronomers recorded a surprise doubling in the asteroid's brightness, likely caused by one of these gritty outbursts. Yet questions remain. A meteor shower requires constant nourishment from its parent to put on a show year after year, decade after decade. Astronomers estimate that to maintain a steady supply of meteor dust, Phaethon would have to flare 10 times as often as observed. Does it shed its skin out of sight, when it's in the harsh glare of perihelic sunlight?
Whatever's going on, we're going to have a bang-up December. Not only will Phaethon be brighter than ever, but its appearance coincides with the December 13–14 peak of the annual Geminid meteor shower — a rare pairing! We'll have more on the Geminids next week. For now, our job will be to find and follow this exciting asteroid.
From November 29th through December 3rd, Phaethon brightens from magnitude 14 to 13, but does so handicapped by the glare of the nearly full Moon. Luckily, the Moon's out of the way starting on the 5th, when Phaethon will have reached magnitude 12.6 and be within range of a 6-inch or larger telescope. When brightest from December 12–15, it shines between magnitude 10.7 and 10.9. Because we're seeing the asteroid in full or nearly full phase, Phaethon stays relatively bright for many nights prior and up to closest approach. Soon after, it fades as its phase from our perspective thins to a crescent. By December 19th, it drops back to 13th magnitude and then to 15th magnitude on December 21st.
Phaethon spins once on its axis every 3.6 hours with a +0.4-magnitude variation in brightness that patient observers may be able to detect. Visual observers and astrophotographers should also be on the lookout for brightness flares or the potential appearance of a coma.
Our maps will take you through December 17th. I apologize for their number, but Phaethon covers a lot of ground through a star-rich Milky Way, making detailed maps a necessity. I encourage you to create your own personalized charts, easy to do with sky-mapping programs like Stellarium, MegaStar, and Starry Night. For step-by-step instructions, scroll to the end of this earlier blog.
NASA's 70-meter Goldstone dish will make good use of this apparition to obtain the clearest pictures yet showing Phaethon's shape and surface features by bouncing radio waves off the object and constructing images based on return echoes. Observations are scheduled for 10 days between December 11–21.
If you don't own a telescope and still want a peek at the asteroid, watch it live on Gianluca Masi's Virtual Telescope Project website at these times: December 15th from Arizona starting at 09:00 UT and December 16th from Italy starting at 20:00 UT.
Try to catch Phaethon now because it won't get this bright again until December 2093, when it will pass within 0.02 AU of Earth and brighten to magnitude 9.4.
Additional charts for tracking Phaethon through December 17th: | 0.887099 | 3.818892 |
Mercury is visible as an early morning object from southern and temperate locations during the first week of May. The innermost planet shines at about magnitude -0.5 and can be seen a few degrees above the eastern horizon with much brighter Venus just above. On May 3rd, the very thin crescent waning Moon will appear 3 degrees from Mercury. For the final three weeks of the month, the planet is no longer visible as it passes through superior conjunction on May 21st.
From northern temperate latitudes, Mercury is unsuitably placed for observation throughout May.
Venus is positioned too close to the Sun to be visible this month from northern latitudes. For observers located further south, the brightest planet remains a morning object, low down, in morning twilight. It shines at magnitude -3.8, and on May 2nd, the crescent Moon passes 4 degrees south of Venus.
Mars, mag. +1.7, is visible in the early evening skies throughout May. From northern latitudes, the red planet can be seen at the beginning of the month for about 3 hours after sunset, although the visibility period reduces by over an hour by month's end. From southern and temperate locations, Mars sets about 2 hours after the Sun.
On May 7th, the thin waxing crescent moon will pass 3 degrees south of the planet.
Jupiter continues to move retrograde in Ophiuchus and is well placed for observation in the evening sky. At the start of the month, the giant planet rises around midnight from northern latitudes, and even earlier for those located further south.
Jupiter brightens from magnitude -2.5 to -2.6, with its apparent size increasing from 43.5 to 45.8 arc seconds as the month progresses. With binoculars or a small telescope, up to four Galilean moons are visible. A small scope will also reveal features such as the northern and southern equatorial belts and the Great Red Spot, although the latter has been diminishing in size for some time now.
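Those apparent sizes follow from small-angle geometry: angular diameter ≈ 2R/d. A sketch that reproduces the quoted values, assuming Jupiter's equatorial radius is 71,492 km (a standard figure, not stated in the text) and taking the distances from the data table below:

```python
AU_KM = 1.495979e8    # kilometres per astronomical unit
R_JUP_KM = 71_492     # Jupiter's equatorial radius (assumed standard value)
ARCSEC_PER_RAD = 206_265

def angular_diameter_arcsec(radius_km, distance_au):
    """Small-angle apparent diameter in arcseconds."""
    return 2 * radius_km / (distance_au * AU_KM) * ARCSEC_PER_RAD

size_may01 = angular_diameter_arcsec(R_JUP_KM, 4.535)  # distance on May 1
size_may31 = angular_diameter_arcsec(R_JUP_KM, 4.306)  # distance on May 31
print(f'May 1: {size_may01:.1f}"  May 31: {size_may31:.1f}"')  # 43.5" and 45.8"
```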
The waxing gibbous Moon passes a couple of degrees from Jupiter on May 20th.
Saturn is visible in the evening skies moving retrograde in Sagittarius. The ringed planet is positioned east of Jupiter and rises a couple of hours after its brighter neighbor. During the month, Saturn's magnitude increases from +0.4 to +0.2 with its apparent diameter improving from 17.2 to 17.9 arc seconds. Because of their southerly declinations, both Jupiter and Saturn are much better placed from southern latitudes. The planet's ring system is currently wide open and can easily be seen with a small refractor telescope.
On May 22nd, an occultation of Saturn by the Moon is visible from South Africa.
Uranus, mag. +5.9, is in Aries. The planet passed through solar conjunction last month and reappears in the morning sky during May. For northern temperate observers, it's swamped by the bright early morning twilight for most of the month. However, at the tail end of May it should be visible with binoculars and small scopes above the eastern horizon, an hour or so before sunrise.
Observers located further south have it much better, with the planet rising 3 hours before the Sun by month's end. On May 18th and 19th, Uranus is positioned just over a degree north of Venus, which acts as a useful marker.
Neptune, mag. +7.9, is now visible as a morning object. From southern locations it rises up to four hours before the Sun at the start of the month, improving by a couple of hours by month's end. Although never bright enough to be seen with the naked eye, Neptune can be spotted with binoculars or small telescopes. From northern temperate latitudes, Neptune is not as well placed, but can be seen in the early morning sky above the eastern horizon before sunrise, especially during the second half of the month.
The planet is in Aquarius near to star phi Aquarii (φ Aqr - mag. +4.2). On May 27th, the waning crescent Moon passes 4 degrees south of Neptune.
Solar System Data Table - May 2019
| Object | Date | Right Ascension | Declination | Mag. | App. Size | Illum. (%) | Dist. (AU) | Constellation |
|---|---|---|---|---|---|---|---|---|
| Sun | May 01 | 02h 30m 43.1s | 14d 50m 26.1s | -26.7 | 31.8' | 100 | 1.007 | Aries |
| Sun | May 15 | 03h 25m 01.9s | 18d 40m 52.8s | -26.7 | 31.6' | 100 | 1.011 | Taurus |
| Sun | May 31 | 04h 29m 17.4s | 21d 48m 01.6s | -26.7 | 31.6' | 100 | 1.014 | Taurus |
| Mercury | May 01 | 01h 16m 19.0s | 05d 22m 29.4s | -0.4 | 5.9" | 75 | 1.149 | Pisces |
| Mercury | May 15 | 02h 54m 41.1s | 15d 49m 34.6s | -1.4 | 5.2" | 96 | 1.304 | Aries |
| Mercury | May 31 | 05h 17m 06.5s | 24d 46m 22.9s | -1.2 | 5.4" | 89 | 1.245 | Taurus |
| Venus | May 01 | 00h 47m 39.6s | 03d 19m 08.6s | -3.8 | 11.5" | 88 | 1.445 | Pisces |
| Venus | May 15 | 01h 51m 04.9s | 09d 41m 38.0s | -3.8 | 11.0" | 91 | 1.515 | Pisces |
| Venus | May 31 | 03h 06m 27.1s | 16d 07m 52.5s | -3.8 | 10.5" | 94 | 1.584 | Aries |
| Mars | May 01 | 05h 15m 47.7s | 24d 07m 28.7s | 1.6 | 4.2" | 96 | 2.239 | Taurus |
| Mars | May 15 | 05h 55m 38.2s | 24d 33m 09.5s | 1.7 | 4.0" | 97 | 2.330 | Taurus |
| Mars | May 31 | 06h 40m 53.8s | 24d 15m 42.1s | 1.8 | 3.9" | 97 | 2.422 | Gemini |
| Jupiter | May 01 | 17h 31m 35.3s | -22d 38m 37.2s | -2.5 | 43.5" | 100 | 4.535 | Ophiuchus |
| Jupiter | May 15 | 17h 26m 44.0s | -22d 35m 34.1s | -2.5 | 44.8" | 100 | 4.399 | Ophiuchus |
| Jupiter | May 31 | 17h 19m 02.5s | -22d 29m 55.9s | -2.6 | 45.8" | 100 | 4.306 | Ophiuchus |
| Saturn | May 01 | 19h 27m 22.4s | -21d 31m 23.7s | 0.4 | 17.2" | 100 | 9.667 | Sagittarius |
| Saturn | May 15 | 19h 26m 36.5s | -21d 33m 37.9s | 0.3 | 17.6" | 100 | 9.459 | Sagittarius |
| Saturn | May 31 | 19h 24m 11.6s | -21d 39m 25.8s | 0.2 | 17.9" | 100 | 9.259 | Sagittarius |
| Uranus | May 01 | 02h 02m 47.5s | 11d 57m 37.7s | 5.9 | 3.4" | 100 | 20.847 | Aries |
| Uranus | May 15 | 02h 05m 48.5s | 12d 13m 40.9s | 5.9 | 3.4" | 100 | 20.792 | Aries |
| Uranus | May 31 | 02h 09m 02.0s | 12d 30m 34.3s | 5.9 | 3.4" | 100 | 20.670 | Aries |
| Neptune | May 01 | 23h 16m 31.2s | -05d 44m 27.3s | 7.9 | 2.2" | 100 | 30.542 | Aquarius |
| Neptune | May 15 | 23h 17m 43.0s | -05d 37m 26.2s | 7.9 | 2.3" | 100 | 30.341 | Aquarius |
| Neptune | May 31 | 23h 18m 39.6s | -05d 32m 09.7s | 7.9 | 2.3" | 100 | 30.083 | Aquarius |
An ultraviolet spectrograph (UVS) designed and built by Southwest Research Institute (SwRI) is the first scientific instrument to be delivered for integration onto the European Space Agency’s Jupiter Icy Moon Explorer (JUICE) spacecraft. Scheduled to launch in 2022 and arrive at Jupiter in 2030, JUICE will spend at least three years making detailed observations in the Jovian system before going into orbit around the solar system’s largest moon, Ganymede.
Aboard JUICE, UVS will get close-up views of the Galilean moons Europa, Ganymede and Callisto, all thought to host liquid water beneath their icy surfaces. UVS will record ultraviolet light emitted, transmitted and reflected by these bodies, revealing the composition of their surfaces and tenuous atmospheres and how they interact with Jupiter and its giant magnetosphere.
“It has been a huge team effort to get this instrument — known as JUICE-UVS — built, tested and delivered,” said Steven Persyn, project manager for JUICE-UVS and an assistant director in SwRI’s Space Science and Engineering Division. “In 2013, UVS was selected to represent NASA on the first ESA-led mission to an outer planet. Meeting both NASA’s and ESA’s specifications was challenging, but we did it.”
UVS will be one of 10 science instruments and 11 investigations for the JUICE mission. The mission has the overarching goals of investigating potentially habitable worlds around the gas giant and studying the Jupiter system as an archetype for gas giants in our solar system and beyond.
SwRI has provided ultraviolet spectrographs for other spacecraft, including ESA’s Rosetta comet orbiter, NASA’s New Horizons spacecraft to Pluto and the Kuiper Belt, the Lunar Reconnaissance Orbiter, and the Juno spacecraft now orbiting Jupiter. Another UVS is under construction for NASA’s Europa Clipper mission, scheduled to launch not long after JUICE.
“JUICE-UVS is the fifth in this series of SwRI-built ultraviolet spectrographs, and it benefits greatly from the design experience gained by our team from the Juno-UVS instrument, which is currently operating in Jupiter’s harsh radiation environment,” Persyn said. “Each successive instrument we build is more capable than its predecessor.”
JUICE is the first large-class mission in ESA’s Cosmic Vision 2015–2025 program. The spacecraft and science instruments are being built by teams from 15 European countries, Japan and the United States. SwRI’s UVS instrument team includes additional scientists from the University of Colorado Boulder, the SETI institute, the University of Leicester (UK), Imperial College London (UK), the University of Liège (Belgium), and the Laboratoire Atmosphères, Milieux, Observations Spatiales (France). The Planetary Missions Program Office at NASA’s Marshall Space Flight Center oversees the UVS contribution to ESA through the agency’s Solar System Exploration Program. The JUICE spacecraft is being developed by Airbus Defence and Space. | 0.880515 | 3.598564 |
Recently NASA selected the next set of missions for the Explorer Program to be launched in 2017 from four proposed mission concepts. The two winning missions were the Transiting Exoplanet Survey Satellite (TESS) and Neutron Star Interior Composition Explorer (NICER).
In some ways you can think of TESS as Kepler's successor, but while it will monitor stars for the drop in light due to transiting exoplanets, just as Kepler does, its mission is slightly different. Unlike Kepler, which stares continuously at one field for its entire mission, TESS will stare at a patch of sky for a short period of time during its two-year primary mission and then move on to examine new stars. In total, TESS will survey ~45,000 square degrees of sky, ~400 times larger than the region Kepler monitors.
TESS will target the brightest stars (G-K and M stars) in the sky, much brighter (and therefore closer to the Earth) than the Kepler field stars. TESS will therefore probe the frequency and properties of planetary systems that are in the solar neighborhood, and provide a catalog of the closest planet-hosting stars to be followed-up for many years to come.
One of the exciting things about TESS is that the vast majority of the stars it monitors will be prime candidates for current ground-based radial velocity instruments like HIRES (which has been used to help study Planet Hunters planet candidates), HARPS-N, and HARPS-S. This means for the vast majority of TESS planet candidates, we should be able to get masses or constraints on their masses. This is really important because the transit technique gets you a measure of the radius of the planet (if you know the radius of the star). If you can get the mass from radial velocity measurements, then you’ve got yourself the bulk density of the planet. You can think of bulk density as a proxy for composition, and so there will be a large sample of planets to examine how their size and composition compare to that of the planets in our Solar System. In addition, TESS planet host stars will be prime target for studies with the in-construction James Webb Space Telescope (JWST, a space-based infrared telescope scheduled for launch in 2018), that will enable study of the chemical composition of the atmospheres of many of TESS-discovered planets.
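The bookkeeping described here boils down to ρ = M / ((4/3)πR³). A minimal sketch with Earth as a check case (the Earth mass and radius are standard reference values, not from this post):

```python
import math

def bulk_density(mass_kg, radius_m):
    """Bulk density in kg/m^3 from a transit radius and an RV mass."""
    volume = 4.0 / 3.0 * math.pi * radius_m ** 3
    return mass_kg / volume

# Check case: Earth (M = 5.972e24 kg, R = 6.371e6 m) -> ~5500 kg/m^3,
# the familiar mean density that marks a rocky composition.
rho_earth = bulk_density(5.972e24, 6.371e6)
print(f"{rho_earth:.0f} kg/m^3")
```

The same two-line calculation, fed with a TESS radius and a ground-based RV mass, is what lets astronomers sort planets into rocky, water-rich, and gas-dominated families.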
Just like Kepler, I think TESS will open a new era in the search and characterization of exoplanets. I think there is a place for Planet Hunters in the TESS age, and I hope that in the future we’ll be able to share TESS light curves on the Planet Hunters website. You can learn more about TESS here. | 0.857002 | 3.929897 |
For life to take root on a planet, organisms need little more than a rocky surface, liquid water, and a coddling atmosphere to hold warmth in and keep harmful rays out. But for life to thrive, astronomers from the Harvard-Smithsonian Center for Astrophysics write in a paper posted to the arXiv preprint server, they need something else: a magnetic field.
As it turns out, young planets are likely to orbit around equally young, savage suns with violent solar winds strong enough to blow away the atmospheres of their cosmic companions. The researchers came to this conclusion after studying the nearby star Kappa Ceti, which appears to behave like an immature version of our own Sun.
Kappa Ceti, at 400 million to 600 million years old, is considered a young star. Like human adolescents, it's pockmarked with spots and violently temperamental. The blotchy starspots on its surface, the researchers write, are a clear sign that it's magnetically active. On top of that, it's constantly spewing a stream of hot gas outward into space, a wind 50 times stronger than our own Sun's solar wind. That fierce outflow is strong enough to obliterate the atmosphere of a nearby planet if it doesn't have the right protection.
Just look at lifeless Mars, which could have been warm enough to support salty oceans, had it been fortunate enough to have a magnetic shield to keep the sun from stripping its warmth-containing atmosphere away.
Earth, on the other hand, got lucky. By taking their data on Kappa Ceti’s solar winds and applying it to a model of our own young planet, the researchers determined that Earth’s magnetic field was fairly weak. They estimate that young Earth’s protected region — its magnetosphere — was about one-third to one-half as large as it is today. But that atmosphere, as thin as it was, maintained the warmth needed to support terrestrial life. | 0.855024 | 3.519224 |
How does the gravity well change as space expands? If we assume that the Earth's gravitational field curves flat space to create a gravity well, then how does the gravity well change as space expands? Also, is the change in the gravity well measurable?
For a stationary massive object like Earth or the Sun, the gravity well does not change from the expansion of space. The gravity well at one time is the same as the gravity well at future times for the same mass. Any minute force imposed by the expansion of space does not constitute a change in the gravity well itself. The well is not stretched or compressed. It remains constant in proper distance scales and not comoving distance scales.
If we assume that general relativity is the correct theory for this case, and there are currently no indications that I am aware of that it isn't, then the expansion of the universe adds a small modifying term to gravity wells. I doubt that it is measurable at the scale of the solar system. The current best estimate for the Hubble constant is 67.80 ± 0.77 km/s/Mpc. The size of the solar system (using the heliopause as a border) is about 0.001 pc, so the total influence of the expansion would amount to approx. 1e-3 pc × 68 km/s/Mpc ≈ 68 µm/s at the border of the solar system. I seriously doubt that we can separate that from the gravitational noise of the solar system and the diverse effects of solar wind, light pressure, outgassing etc. that act on spacecraft that may be used to measure this effect.
Having said that, the effects at the edge of our Milky Way are much larger. Now we are talking about 30 kpc × 68 km/s/Mpc ≈ 2.04 km/s. While the error bars are much larger (probably mostly due to dark matter distributions), it may be possible to find populations of objects which show considerable effects on their orbits around the center of the Milky Way. But that's just the gut feeling of a physicist who is not an astronomer, so my intuition into the systematic errors that would undermine such a measurement could be way off.
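Both back-of-envelope numbers above are just the Hubble law v = H0 * d in consistent units; a short sketch that reproduces them (H0 rounded to 67.8 km/s/Mpc as in the answer):

```python
# Hubble-flow speed v = H0 * d for the two distances used above.
H0 = 67.8        # km/s per Mpc (value quoted in the answer)
PC_PER_MPC = 1e6

def hubble_speed_km_s(distance_pc):
    """Recession speed in km/s for a distance given in parsecs."""
    return H0 * distance_pc / PC_PER_MPC

v_heliopause = hubble_speed_km_s(1e-3)   # edge of the solar system, ~0.001 pc
v_galaxy = hubble_speed_km_s(30_000)     # edge of the Milky Way, ~30 kpc

print(f"{v_heliopause * 1e9:.0f} um/s")  # ~68 micrometres per second
print(f"{v_galaxy:.2f} km/s")            # ~2.03 km/s
```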
Update: For those who would like to know what all of this means for orbits of bodies, I believe this paper may contain the required analysis:
"Analytical solution of the geodesic equation in Kerr-(anti) de Sitter space-times", Eva Hackmann, Claus Lämmerzahl, Valeria Kagramanova and Jutta Kunz
When we think of where else life might exist in the universe, we tend to focus on planets. But on a grander cosmic scale, moons could prove the more common life-friendly abode.
A single gas giant planet in the not-too-warm, not-too-cold habitable zone around its star — where Earth and Mars correspondingly reside — could host several livable moons. At this early point in our hunt for exoplanets, most of the worlds we have found in the habitable zone are giants, not Earths. It's possible that the first inhabited place we discover outside our Solar System will be a moon.
It is this sort of consideration that inspires René Heller, a postdoctoral fellow in astronomy at McMaster University, in Ontario, Canada. He studies how "exomoons" could form, what they might be like and how we might detect them with current or future astronomical instruments. A major part of his work deals with gauging the habitability of exomoons, which is a bit trickier than planets because moons orbit another body besides their star.
A new paper by Heller and his colleague Rory Barnes of the University of Washington and the NASA Virtual Planetary Laboratory examines how heat emanating from a freshly formed exoplanet, coupled with irradiation from the solar system's star, can roast the planet's moons. Before the planet cools off sufficiently, its close-orbiting moons could lose all their water, leaving them bone-dry and barren.
"An exomoon's habitability is of course constrained by its location in the stellar habitable zone, but it also has a second heat source — its host planet — that has to be accounted for," said Heller, whose paper has been accepted for publication in The International Journal of Astrobiology. "With regard to this second source, our study shows that at close range, the illumination from young and hot giant planets can render their moons uninhabitable."
Researchers believe moons could serve as suitable abodes for life just as well as planets. Even moons far beyond the habitable zone, such as Jupiter's Europa and Saturn's Titan, offer tantalizing hints of potential habitability thanks to the subsurface ocean in the former and the intriguing organic chemistry of the latter. Still, a moon around an exoplanet in the habitable zone stands as a far better bet for life than these frigid candidates.
Heller's findings suggest that we ought to exercise caution, however, before declaring that an Earth-sized, habitable-zone exomoon is a real-life Pandora — the lush moon of science fiction fame in "Avatar." Before assuming an exomoon is habitable based on its host planet's locale, the moon's current and conjectured past orbital distances will need to be assessed.
"Earth-size exomoons that could soon be detected by our telescopes might have been desiccated shortly after formation and still be dry today," said Heller. "In evaluating a moon's habitability, it is crucial to consider its history together with that of its host planet."
Moons are generally thought to arise much like planets do; piecemeal, that is. In the disk of leftover material encircling a star after its birth, planets aggregate as chunks collide and merge together into larger and larger bodies. As their mass and gravity grow in tandem, developing planets similarly attract their own mini-disks of gas and dust. Debris within this secondary disk then coalesces into moons. (Notably, our Moon stands as an exception, likely created by a giant impact to an ur-Earth by another sizable proto-planetary chunk.)
All this crashing about generates a lot of heat. Newly born planetary and lunar bodies should therefore be quite toasty. Yet rocky worlds might be able to retain a water reservoir, or have it be replenished early (or later) on by impacts from icy comets.
Where a moon sets up shop around its planet influences the chances of hanging onto any initial water and allowing life a chance without relying upon the fortune of future cometary water. According to formation models, significantly sized satellites should form between about five and 30 planetary radii, or half-planet widths, from their host planet. Jupiter's four biggest moons, dubbed the Galilean moons, fit this profile: Io orbits at 6.1 Jupiter radii; Europa, 9.7; Ganymede, 15.5; and Callisto squeaks in at 27 Jupiter radii. The biggest moon of Saturn, Titan, makes its home at a distance at of 21.3 Saturn radii.
Finding the 'habitable edge'
In their new paper, as well as several prior works, Heller and Barnes have sought to figure out just how close is too close for an exomoon to maintain liquid water on its surface. This inner orbital boundary they call the "habitable edge." Moons within it receive an excess of heat energy from two key sources: firstly, the flexing of the moon, called tidal heating, caused by gravitational interactions with its planetary host, and secondly, from extra illumination from the planet.
Raising the temperature on a watery world can trigger what is known as a runaway greenhouse effect. Water evaporates because of heat. The resulting water vapor is particularly good at trapping heat. In a positive feedback loop, this trapped heat can evaporate water faster than cooling and condensation can restore it to liquid form. Over time, a world's entire water supply can end up as a hot gas. This gas is broken down by sunlight into its constituent oxygen and hydrogen. The latter, the lightest element, can escape off into space, and the world becomes desiccated.
Orbits, however, are not fixed things. Where a moon orbits today might not be where it initially formed and existed for many millions of years. The tidal forces just mentioned usually work to slowly push a moon out to a wider orbit over time. Thus, the observed location of moons today must be taken with a grain of salt—though appearing "safe" now, their pasts could have left them parched.
"Moons that are outside the habitable edge today, and thereby seemingly habitable, may have once been inside the habitable edge and become dry and uninhabitable," said Heller.
Building the model
With these considerations in mind, Heller and Barnes set about creating a model of a potentially habitable moon and gas giant duet. The model moons in their study are purposefully not like anything we have in the Solar System. In order to be broadly habitable, regardless of habitable edge considerations, a moon must possess a certain minimum mass, the same as a potentially habitable planet. A livable world must be massive enough to gravitationally retain an atmosphere and generate a protective magnetic field from a molten, rotating iron core.
This mass habitability cutoff point is thought to be at least that of Mars, or 10 percent of Earth's mass. For comparison, the biggest moon in our solar system, Ganymede, is a measly one-fortieth of Earth's mass. That said, various studies have indicated that gas giant planets much bigger than Jupiter should spawn comparatively super-sized satellites.
The researchers accordingly went with a "monster" Jupiter, a Jovian planet 13 times Jupiter's mass, as their model host planet. A 13-mass Jupiter is about as massive as a planet can get, scientists think, before entering into brown dwarf or "failed star" territory; in such a case, the planet would emit way too much heat for most exomoons to ever have a prayer of being habitable.
As for hypothetical test moons in the study, Heller and Barnes went with two: an Earth twin, with the same rockiness and mass, and a "super-Ganymede," an icy body with a quarter of Earth's mass.
Heller and Barnes then placed these planet-and-moon duos in their model at two different orbital distances from a sunlike star. The first location approximated Earth's, about 93 million miles away, considered toward the hotter end of a sunlike star's habitable zone. The second spot was 1.7 times farther away, somewhat past the orbit of Mars, taken here as the outer limit of the habitable zone.
The model also addressed the issue of tidal heating. Moons (and planets) can have oval-shaped orbits that periodically swing them closer to their host. The more "eccentric," or oval-shaped, the orbit that swings a moon in close to its planet, the greater the tidal heating. For this portion of the model, the researchers opted for four different orbital eccentricities to give a good range of results.
A final numerical consideration was the age of the planet-moon system. Younger giant planets emit more heat than older, cooled-off versions of themselves. So, three ages were picked: 100 million, 500 million and 1 billion years, with the last representing a fairly evolved system.
Now, with all these parameters in place, Heller and Barnes plugged in the critical variable of the hypothetical moons' orbital distance from host planets.
Life or death?
For both styles of moon, Earth-like and super-Ganymede-esque, an orbital distance of 10 Jupiter radii or less would be bad news for life. A runaway greenhouse effect would commence based on the host planet's illumination alone for around 200 million years—a fairly decent span of geological time, and certainly long enough to thoroughly dry out the moon. Add in the sun's rays, and the water-vaporizing interval on the Earth-like moon lasts for 500 million years. For the super-Ganymede, it's 600 million.
Bump the hypothetical moon distance from its host to a roomier 15 Jupiter radii and the picture still doesn't improve much; 200 million-plus years or so of moon-cooking still ensues. Out at 20 Jupiter radii, the Earthlike moon is spared a runaway greenhouse effect, but the super-Ganymede still suffers out-of-control heating for a similar couple hundred million year span.
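For a sense of scale, those orbital distances convert to familiar units with a line of arithmetic (a back-of-the-envelope sketch using standard reference values for Jupiter's radius and the Earth-Moon distance; the figures are not drawn from the study itself):

```python
# Back-of-the-envelope scale check (standard reference values, not
# numbers from the Heller & Barnes paper itself).
R_JUPITER_KM = 69_911     # Jupiter's mean radius in km
EARTH_MOON_KM = 384_400   # mean Earth-Moon distance in km

for n in (10, 15, 20):
    d_km = n * R_JUPITER_KM
    ratio = d_km / EARTH_MOON_KM
    print(f"{n:2d} Jupiter radii = {d_km:,} km (~{ratio:.1f}x the Earth-Moon distance)")
```

Even the "safe" 20-Jupiter-radii orbit works out to only about 3.6 times the Moon's distance from Earth.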
"The thermal irradiation from a super-Jupiter host planet can clearly have a major influence on the habitability of its moons," said Heller. "Depending on the planet's mass and the history of its luminosity, any exomoon discovered today would need to have had a sufficiently wide orbit to have avoided desiccation in the far past."
The findings are somewhat conservative because other sources of heat might factor in enough to tip the scales. Examples include the latent heat within a new moon emanating from forces of friction and pressure during its formation. Plus, life may find it very hard to get going even before the temperature rises enough to kick off a runaway greenhouse effect—the ground could simply be too hot.
For an arid moon, however, its chances of bringing forth life might not be lost forever. Due to gravitational perturbations, it could migrate beyond the habitable edge. Once there, out of the death zone, icy comets pummeling it could deliver vast stores of water after the runaway greenhouse effect has slackened. A cometary bombardment is similarly thought to have deluged Earth several million years after its molten exterior cooled to a hard crust, giving rise to our planet's life-permitting oceans.
So, the overall message of Heller's newest study is that Earth-like exomoons' pasts cannot be ignored. When these worlds are identified, it will be necessary to perform orbital simulations on them to try to glean their histories. The orbital evolution models will be complex, accounting for tidal effects between the planet and the moon, as well as the gravitational perturbations between the moon, other moons, the planet, and the star. Coupled with models of planetary formation and cooling, astrobiologists can hopefully better estimate an exomoon's current habitability.
Said Heller: "It's important that we do our best to look deep into an exomoon's past in order to better understand whether it can possibly support extraterrestrial life." | 0.930018 | 3.971553 |
04/02. High Altitude Observatory (HAO)
Although the Sun has been the most studied star in the heavens, little was known about the structure of its atmosphere or the role of its magnetic fields, or much at all about the fundamental motions of its surface or interior, until fairly recently. The past six decades have been productive ones for solar physics, and the High Altitude Observatory at NCAR has been a core contributor to that productivity. In 1940, Harvard graduate student Walter Orr Roberts and his doctoral adviser, astrophysicist Donald Menzel, founded a small solar observing station high on the Continental Divide in Climax, Colorado. Here, Walt Roberts installed the Western Hemisphere's first Lyot coronagraph, an instrument that uses a metal occulting disc to block off the face of the Sun, creating an artificial eclipse and rendering the corona visible. The solar corona, the very hot but extremely dim outer part of the Sun's atmosphere, is of particular interest to scientists because its structure reflects the Sun's global magnetic fields, providing a window for investigating fundamental solar processes. Roberts' assignment at the observatory was to last only one year, but, with the country's sudden entry into the war, he remained at Climax as sole observer, making daily observations of the solar chromosphere and corona. These coronal observations from Climax, with their implications for potential disturbance of terrestrial radio communications, became essential to the war effort. From these small beginnings, the station evolved into the High Altitude Observatory and grew substantially after the war. In the late forties, HAO's laboratory and administrative facilities were transferred to the University of Colorado and the gentler climate of Boulder. Walt Roberts also helped the Air Force establish the Sacramento Peak Observatory in Sunspot, New Mexico.
Throughout the 1950s, under Walt Roberts as director, HAO scientists modified coronagraphs, flying them in high-altitude balloons to better observe the corona without interference from earth's atmosphere. Starting in 1952, HAO inaugurated the first of numerous total eclipse field expeditions in remote locations around the world. Solar physics entered a new era of solar observations in 1960 -- this time from space. In that year as well, HAO formally became a division of a newly-established research institute in Boulder, the National Center for Atmospheric Research. Roberts was appointed director of NCAR as well as president of the University Corporation for Atmospheric Research (UCAR), which managed the center. The rationale for making an astronomical institution a part of an atmospheric research organization was sound. The radiative input from the Sun is the driving force for all atmospheric motions. Anything that alters that radiative input in any way is important for understanding climatic variations and other large-scale changes in the terrestrial atmospheric system. Today's Observatory program includes numerical simulation of convection, radiation transport, and large-scale dynamics in both the solar and terrestrial atmospheres, plus observational programs to measure the Sun's output of magnetized plasma and radiation over the Sun's 11-year sunspot cycle. This broad program draws its strength from a professional staff firmly grounded in basic physics, astrophysics, and atmospheric physics.
Found in 2 Collections and/or Records:
Scope and Contents The High Altitude Observatory (HAO) is a lab within the National Center for Atmospheric Research (NCAR) that conducts fundamental and applied research in solar-terrestrial physics using observational, theoretical, and numerical methods. Research at HAO extends from the solar core to the surface of the Earth. It is the mission of HAO to understand the behavior of the Sun and its impact on the Earth, to support, enhance, and extend the capabilities of the university community and the broader...
Collection — Other 1-5
Scope and Contents The John A. Eddy Collection consists of documents dated 1954 to 1984, with the bulk falling from 1970 to 1984. Material types in the collection include correspondence and memoranda, presentations, and other materials. The majority of materials include notes, outlines, promotional materials, and transparencies for presentations. Eddy traveled all over the United States giving presentations on his ground-breaking research. | 0.800094 | 3.604255 |
Red dots is a project to attempt detection of the nearest terrestrial planets to the Sun. Terrestrial planets in temperate orbits around nearby red dwarf stars can be more easily detected using Doppler spectroscopy, hence the name of the project.
For the 2017 campaign we will be focused on three of these red dwarfs. The observational strategy is the same for all three objects. We will obtain about 90 observations with HARPS (weather permitting) spread over 100 nights, while obtaining quasi-simultaneous photometry with different observatories all over the world.
Proxima Centauri – The Pale Red Dot campaign confirmed the existence of a persistent Doppler signal in different datasets. This was interpreted as the presence of Proxima b, a 1.3 Earth-mass planet orbiting in a temperate orbit around the star. In addition to this, the 2016 and historical data also show very strong evidence of at least one more signal in the period range of 40-400 days, and some possible hints of additional signals at periods shorter than 6 days. The goal of this campaign is to accumulate more measurements to cover a more substantial part of the 40+ day signal and explore whether it is connected with stellar activity (cool spots co-rotating with the star) or is better explained by the presence of a few-Earth-mass planet in a cold orbit. Similarly, the increased number of measurements and a more detailed modelling of stellar activity might reveal the presence of additional planets at short periods.
For more information on the star, visit its rather detailed Wikipedia entry here. For more information about the previous planet searches, visit Pale Red Dot. All technical information and scientific references can be found via Simbad.
Barnard’s star – There is no robust claim of a planet orbiting this star so far. Barnard’s star is an old halo red dwarf, meaning that it has very low activity levels. With the new set of measurements, we should be able to sample the temperate orbits down to sub-Earth-mass objects and confirm the hints of a possible super-Earth-mass planet in a cold orbit already hinted at in historical HARPS, UVES and Keck HIRES data. Barnard’s star is the second closest stellar system to the Sun, after Alpha Centauri, and also a prime target for future detailed characterization and, maybe one day, in situ exploration.
Barnard’s star is also known to astronomers as the star with the highest proper motion. That is, due to its proximity to the Sun and its relatively large motion across the galaxy, it has the largest apparent motion of any known object beyond the solar system. There are numerous other stars that move intrinsically faster, but they are too far away to make this motion so apparent to the eye. The motion is large enough that we should see it moving against the background stars during the Red Dots 2017 campaign (time-lapse to be presented as soon as we start collecting data). For more detailed information on the star and previous planet claims and searches, visit its wikipedia article here.
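That apparent motion is large enough to check on the back of an envelope (a sketch: the ~10.3 arcseconds-per-year proper motion is the commonly quoted value, and the 100-night span is taken from the campaign description above):

```python
# How far does Barnard's star drift against the background stars during
# a ~100-night campaign? Proper motion ~10.3 arcsec/year (commonly
# quoted value; treat it as approximate).
PM_ARCSEC_PER_YEAR = 10.3

nights = 100
drift_arcsec = PM_ARCSEC_PER_YEAR * nights / 365.25
print(f"Drift over {nights} nights: ~{drift_arcsec:.1f} arcseconds")
```

A drift of nearly three arcseconds is a few times the typical blur of a ground-based image, so the motion really should be visible in a time-lapse.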
Ross 154 – As opposed to the other two, Ross 154 (also known as Gliese 729) is a fast-rotating star. With a suspected rotation period of less than 3 days, its activity levels are rather high as well, including energetic flares. We suspect that the star is not radically different from other red dwarfs, but that its enhanced activity levels are due to its young age. Detecting planets with the Doppler technique around active stars is challenging, because the fast rotation will – almost for sure – induce spurious Doppler signals on the same time-scales as the rotation. Intensive multi-colour photometric follow-up should enable precise modelling of the stellar surface and allow filtering out all this activity. Similar experiments have been tried before with limited success. The novelty here is the very regular sampling with simultaneous multi-colour photometry. Will we be able to unambiguously identify bona fide planet signals around this stormy object?
For 30 years, a large near-Earth asteroid wandered its lone, intrepid path, passing before the scrutinizing eyes of scientists while keeping something to itself: 3552 Don Quixote, whose journey stretches to the orbit of Jupiter, now appears to be a comet.
The discovery resulted from an ongoing project led by researchers at Northern Arizona University using the Spitzer Space Telescope. Through a lot of focused attention and a little bit of luck, they found evidence of cometary activity that had evaded detection for three decades.
“Its orbit resembled that of a comet, so people assumed it was a comet that had gotten rid of all its ice deposits,” said Michael Mommert, a post-doctoral researcher at NAU who was a Ph.D. student of professor Alan Harris at the German Aerospace Center (DLR) in Berlin at the time the work was carried out.
What Mommert and an international team of researchers discovered, though, was that Don Quixote was not actually a dead comet—one that had shed the carbon dioxide and water that give comets their spectacular tails.
Instead, the third-biggest near-Earth asteroid out there, skirting Earth with an erratic, extended orbit, is “sopping wet,” said NAU associate professor David Trilling. The implications have less to do with potential impact, which is extremely unlikely in this case, and more with “the origins of water on Earth,” Trilling said. Comets may be the source of at least some of it, and the amount on Don Quixote represents about 100 billion tons of water—roughly the same amount found in Lake Tahoe.
Mommert said it’s surprising that Don Quixote hasn’t been depleted of all of its water, especially since researchers assumed that it had done so thousands of years ago. But finding evidence of CO2, and presumably water, wasn’t easy.
During an observation of the object using Spitzer in August 2009, Mommert and Trilling found that it was far brighter than they expected. “The images were not as clean as we would like, so we set them aside,” Trilling said.
Much later, though, Mommert prompted a closer look, and partners at the Harvard-Smithsonian Center for Astrophysics found something unusual when comparing infrared images of the object: something, that is, where an asteroid should have shown nothing. The “extended emission,” Mommert said, indicated that Don Quixote had a coma—a comet’s visible atmosphere—and a faint tail.
Mommert said this discovery implies that carbon dioxide and water ice also might be present on other near-Earth objects.
This study confirmed Don Quixote’s size and the low, comet-like reflectivity of its surface. Mommert is presenting the research team’s findings this week at the European Planetary Space Conference in London. | 0.878428 | 3.936491 |
In 1954, a high school physics teacher wrote a science fiction story set on a highly oblate exoplanet, one so squished the gravity varied dramatically from equator to pole. Now researchers are contemplating how to detect such bizarre, alien worlds in our ever-expanding search for exoplanets.
Artist's concept of an oblate spheroid of an alien world. Image credit: Shivam Sikroria.
Gravity pulls. As it pulls, it can distort. We know this concept from how the sun and moon drive tides on Earth, and are constantly drawn to how the ramifications of tidal stresses play out on icy moons around the gas giants of our solar system. But we aren't as practiced thinking about how gravity distorts the shape of entire planets.
In the afterword of the 1954 science fiction novel Mission of Gravity, author Hal Clement set out the calculations behind his alien world. He outlines the orbital dynamics of Mesklin, an exoplanet 16 times the mass of Jupiter. The gravitational extremes of this highly oblate spheroid, its chunky rings and the thermodynamics of its methane seas are all key to the plot of its native fauna heading to places us squishy humans can't to retrieve misplaced equipment.
Planetary science from a fiction book published in 1954 is suddenly relevant again.
The story itself is fantastic and an object lesson in how incorporating real science into storytelling is an excellent way to extend the plausibility of world-building, but the afterword is where all that science is explicitly defined. In it, Clement hypothesizes that an observed dimming in the star Cygni C is attributable to this close-set world, a world heavily distorted by the star's gravity into a distinctly squished spheroid. Alas, in the more than half-century since publication, we've learned that Cygni C is actually a binary system (explaining the periodic dimming) with no planets yet detected, pushing Mesklin firmly into the list of fictional worlds. Yet here we are, six decades later, coming back around to the idea of contemplating squished, not-so-spherical planets.
From the literally hundreds (nearly thousands!) of exoplanets we've detected so far, we know that planets tucked in tight to their stars experience tidal distortion. It's relatively easy to spot in hot Jupiters where the atmosphere is stretched and distended by the star's pull (and the star-snuggling planets wreak havoc on their stars), but it gets more challenging when contemplating small, rocky planets with more subtle distortion.
Astronomers Prabal Saxena, Peter Panka and Michael Summers recently published a paper on the theoretical possibility of detecting and identifying solid planets experiencing severe tidal distortion from close, synchronous orbits around large stars. In the ongoing game of figuring out that everything we thought we knew about planetary formation is flawed, the researchers tackle the assumption that any distortion would be too small to detect with current instrumental capacities and find it lacking.
Light curves of assorted planets transiting stars. The theoretical distorted iron planet is the purple dash-dot line. Credit: Saxena et al.
While so far it's just a theory, the research team suspects that for M-class stars, planets within a given orbital distance may experience observationally significant tidal distortions. Both in-transit bulge signatures and ellipsoidal variations might be detectable in the data now, and even more likely detectable once the next-generational planet-hunters roll online.
The orbit of Mesklin has significant impact on its methane seas. Image credit: Mika McKinnon
In the author's afterword, Clement set out science-inspired fiction as a game between author and reader, where it was left to us to ferret out any inconsistencies in the science of the story. While Cygni C is a binary system devoid of squished planets and bug-like alien lifeforms, the science behind Mesklin is looking even better now than it did a few years ago. I can't wait until we start finding these planets and calculating out their varying gravity and extreme weather. Once again, we're living in a universe where if you can imagine a planet — a world of pure desert, a world of pitch blackness, a world of continuous sunsets — it probably exists, somewhere.
Read "The observational effects and signatures of tidally distorted solid exoplanets" in the February 2015 volume of the Monthly Notices of the Royal Astronomical Society.
On Argument of Motion. That motion existed was obvious enough, but what caused that motion was not clear. The Bible, the Quran, and the Hindu scriptures (including the ancient Vedas) stated what was obvious to human senses – the Earth is static, firmly in place (rendered even more firm by the mountains, as per the Quran), and celestial objects like the Sun, the moon and the stars move around it to give it bright light in the day and passable light at night.
Aristotle (384-322 BC), in his voluminous work Physics (from which the principal and elementary branch of science got its name), implied that rest is the rule and motion is caused by impetus. The heaviest objects in the universe – earth and water, according to him – settled at the center, while others – the air, sun, moon and the stars – moved around it under a certain impetus. What about a projectile such as a stone let go with great impetus from a hand or a catapult, which continues its motion after the impetus is withdrawn? Aristotle presumably imagined a prime mover behind it.
Subsequent to the victory of Islam over vast areas of Central and South Asia, which extended to Spain and other parts of Europe, Arab and Persian scientists and philosophers applied their minds to theories of the Earth and sky that went beyond Allah's theory of a stationary earth and moving celestial bodies (while ensuring that there was no direct confrontation with the words of Allah). Thus the foremost among them, Avicenna (Ibn Sina), modified certain views of Aristotle on the motion of bodies, yet contradicted Aristotle with his idea of self-motion (mayl). Al-Biruni (973-1048) virtually stumbled upon Newton's second law of motion; Hibat Allah Abu'l-Barakat al-Baghdaadi (1080–1165) proposed that a constant force applied on an object caused acceleration, not a constant speed. None of these great findings impressed the Christian theologians till well into the 17th century.
We do not know much about Aristarchus, who lived around 270 BC and proposed a different concept. His works are lost, but Archimedes (c. 287 – c. 212 BC), one of the greatest known classical scientists and mathematicians (nonetheless a geocentrist), ridicules, tongue-in-cheek, the heliocentric (Sun at the centre) view of Aristarchus thus:
“His hypotheses are that the fixed stars and the sun remain unmoved, that the earth revolves about the sun on the circumference of a circle, the sun lying in the middle of the orbit, and that the sphere of the fixed stars, situated about the same center as the sun, is so great that the circle in which he supposes the earth to revolve bears such a proportion to the distance of the fixed stars as the center of the sphere bears to its surface.”
(Archimedes, “The Sand Reckoner”)
It’s an irony that we know more about Archimedes than about Aristarchus.
Indian mathematician-scientist Aryabhata (476–550 AD) showed that the earth was spinning around its axis; he almost precisely computed its circumference and the speed of rotation. Aryabhata explained relative motion thus:
“Just as a man moving forward in a boat sees the stationary objects on either side of the river moving backward, so are the stationary stars seen by people at Lanka (on the equator) as moving precisely towards the West. That is how it appears that the entire gamut of stars and planets appear to move as if by a wind, thus rising and setting.”
(Chapter Gola, verses 9 and 10, Aryabhatiya – University of Chicago Press, 1930.)
Though Aryabhata showed that the earth spun on its own axis, he still held that it was the sun that moved around the spinning earth. Aryabhata's findings made little impact on the West even after Al-Biruni translated his Aryabhatiya, meaning the work of Aryabhata.
Ironically, it was Nicolaus Copernicus (1473–1543), a Catholic priest, who first defined the celestial motions to be heliocentric – centred on the sun. Copernicus escaped burning at the stake by not publishing his book till death came calling and, in any case, did not get as much notice or create half as much sensation as did Galileo Galilei (1564–1642), born 21 years after Copernicus was buried, who made the earth-shattering and godless statement that it was really the earth that moved around the sun. He had a newly invented telescope to prove the point. Being a friend of the Pope, for whom he had done astronomical work before, Galileo too escaped a full-blooded inquisition, which could have ended with his limbs broken on a breaking wheel – then the latest invention – and his body burnt at the stake. Instead, he spent the rest of his life under house arrest.
Pope John Paul II (1920-2005; papacy 1978-2005) confessed in 1992 that punishing Galileo was wrong and pronounced an apology for the mistake. Two years later, in 1994, Cardinal Ratzinger (later Pope Benedict XVI – born 1927; papacy 2005-2013) publicly decried this apology, endorsing the statement of the philosopher of science and unpredictable maverick Paul Feyerabend (1924-1994) that
The church at the time of Galileo was much more faithful to reason than Galileo himself, and also took into consideration the ethical and social consequences of Galileo’s doctrine. Its verdict against Galileo was rational and just, and revisionism can be legitimized solely for motives of political opportunism.
Cardinal Ratzinger, who in 2013 abdicated as Pope, had thus called one of the noblest of known Popes, John Paul II, a political opportunist for admitting that the earth revolved around the sun, and not the other way round as suggested in the Bible. The church hates to confess.
Aquinas's argument of Motion that needed a Prime Mover (and hence the Biblical God) was no doubt based on the Aristotelian notion of the sun going around the earth, not the other way around. The argument presupposed that rest was the fundamental state and motion had to be forced – just as you would need to force your teenaged son out of bed after a late night of rock with his friends.
Science now tells us that motion is the law; the state of rest is one of perception. When you come back from a hard day's work and sit on your chair in the balcony swigging cold beer or hot coffee, you are moving a little faster than 1,600 kilometers an hour – about 440 meters a second – around the earth's axis, like a bacterium on a bowling ball spinning while running on a curved lane with no pins in sight. That speed applies to you if your house is anywhere not far from the equator. If you were in Helsinki, sitting near the fireplace waiting for the appearance of the sun three months hence, you'd still be spinning, though at a lesser (tangential) speed. Whether in Alaska or Ecuador, you are simultaneously moving at a velocity of 30 kilometres per second on a wide arc down a circular celestial bowling lane around the sun with no pins to knock down – unless an unexpected meteor comes in the way. If you could make 11.2 kilometres per second more, you could break away from earth's gravitational pull and shoot off like an arrow shot into the sky.
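These speeds are easy to verify with rounded textbook constants (a sketch, not a precision calculation; the equatorial spin speed comes out slightly above the round 1,600 km/h figure):

```python
import math

# Rounded textbook constants (approximate by design).
R_EARTH_M = 6_378_000        # equatorial radius of Earth, m
SIDEREAL_DAY_S = 86_164      # one rotation of Earth, s
AU_M = 1.496e11              # mean Earth-Sun distance, m
YEAR_S = 365.25 * 86_400     # one year in seconds
G = 6.674e-11                # gravitational constant
M_EARTH_KG = 5.972e24        # mass of Earth

spin = 2 * math.pi * R_EARTH_M / SIDEREAL_DAY_S      # ~465 m/s at the equator
orbit = 2 * math.pi * AU_M / YEAR_S                  # ~29.8 km/s round the Sun
escape = math.sqrt(2 * G * M_EARTH_KG / R_EARTH_M)   # ~11.2 km/s to break free

print(f"spin at equator : ~{spin:.0f} m/s (~{spin * 3.6:.0f} km/h)")
print(f"orbital speed   : ~{orbit / 1000:.1f} km/s")
print(f"escape velocity : ~{escape / 1000:.1f} km/s")
```

All three figures in the paragraph above check out to within the rounding.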
In the thirteenth century, there was no way Thomas Aquinas could perceive these motions. He based his arguments on the perceived but illusory motion of the sun and partly real motion of the moon and, if he was patient enough, the almost imperceptible movement of the stars. There was no way he knew that he, his monastery, his altar and his library were all in perpetual motion.
Supposing you are sitting in a train that moves at 60 kilometers per hour. That is 1 kilometer per minute, about 16 meters per second. Now let us say you toss a ball vertically up; the ball falls back from a height just short of touching the ceiling of the train – say a meter-and-a-half overhead. It comes down just as vertically into the palm of your hand. The bounce back probably takes a little more than a second. You have by then moved some 16 meters with the train from the spot where you tossed the ball up. So, without your seeing it, the ball has travelled 16 meters forward along with you, while also moving vertically three meters up and down. If the ball had not moved horizontally with you, it would fall on the head of the dowager sitting several rows – 16 metres – behind and get you into serious trouble. You do not, however, see the horizontal motion. A person standing on the kerb, assuming your train has a long enough glass window for her to see through, would see the ball moving in a parabolic arch. So don't trust your eyes. Seeing is not believing. This example, by the way, owes itself to Einstein.
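The train example can be put in numbers (a sketch assuming g = 9.8 m/s² and the figures from the paragraph above):

```python
# The tossed ball, in numbers: vertical motion inside the train,
# horizontal drift as seen from the kerb. Assumes g = 9.8 m/s^2.
g = 9.8                      # gravitational acceleration, m/s^2
v_train = 60 / 3.6           # 60 km/h in m/s (~16.7 m/s)
h_peak = 1.5                 # the toss peaks 1.5 m above the hand

v_up = (2 * g * h_peak) ** 0.5   # launch speed needed to reach that height
t_flight = 2 * v_up / g          # time up and back down

print(f"flight time: ~{t_flight:.2f} s")                  # "a little more than a second"
print(f"horizontal drift: ~{v_train * t_flight:.1f} m")   # the kerb-side view of the toss
```

The drift comes out near 18 metres, close to the text's round figure of 16 metres for one second of travel; inside the train, of course, the ball simply goes straight up and down.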
Supposing the devil or your God himself forced the earth to stop spinning. You would be jerked into a terrible eastward motion, much worse than if you were to apply a sudden brake while driving at high speed on a highway. Since the air around you would move too, you might face no friction from the air, only from the ground you're sitting on. And if the earth's orbital motion stopped as well, you'd find yourself falling towards the Sun with the earth at an ever-increasing speed, and you would incinerate – unless the earth got back into motion and found a new equilibrium in a lower orbit, giving you (if you are still alive) a hotter climate or something worse. The vengeful God of the Bible, who dictated to Moses the following lines, obviously did not know the basic laws of what is supposed to be his own creation:
Sun, stand still over Gibeon, and moon, you also, over the Vale of Aijalon’. And the sun stood still, and the moon halted, till the people had vengeance on their enemies.” (Joshua 10:12-13)
(A similar occurrence is made to happen in Mahabharata, the epic war of Hinduism. Here Krishna, being somewhat wiser than the Judaic-Christian God, does not try and stop the sun. Instead, he makes an artificial eclipse to make the gullible enemies think that the sun had set (after which battle must not continue), and Krishna’s favorite side wins the battle of the day).
Inside you, blood circulates, the heart palpitates, lungs expand and squeeze, cells move and change; inside all of them, molecules, electrons, protons and neutrons are all in motion, unless the temperature falls to an impossible nothing: -273.15 degrees Celsius, or 0 kelvin, when all molecular motion would come to a stop. Even a Bose-Einstein condensate, first proposed by Satyendra Nath Bose (1894–1974), can cool to a temperature very near absolute zero, but NEVER absolute zero.
Even before Penzias (Arno Allen Penzias, born 1933) and Wilson (Robert Woodrow Wilson, born 1936) established the (previously predicted) Cosmic Microwave Background Radiation (CMBR) in 1965, Arthur Eddington (1882-1944), presumably following the finding of Swiss scientist Charles Guillaume (1861-1938) 30 years before, calculated that the minimum temperature in interstellar space would be 3.18 kelvin. Unlike the ancient Greeks – of whom Aristotle was one – who believed that the higher you went in the sky the hotter it got (recall the tragedy of Icarus in Greek mythology), outer space is frigid cold, but not absolute zero. Since absolute zero temperature is not a possibility, molecules will remain in motion, however slow. If you could hear all the wave motions that are happening around you, there would be enough commotion to shatter your tympanic membranes. Motion is natural; it needs no prime mover. An absolute static state, like absolute zero, is not a possibility.
Isaac Newton believed that God did course-correction of the universe whenever needed. Though he wrote much on the Bible and on God, he did not believe in the Trinity – a conjoined-triplet phenomenon not mentioned in the Bible. He believed that the galaxies were at rest, and to make up for the gravitational calamity that this state would bring about at intermittent but definite points of time, when stars and planets would pile on each other, Newton decided that God made frequent adjustments – just as you would keep stirring food in the pan to stop it from sticking to the bottom and burning up. Now we know that such an intervention is not needed. Edwin Hubble (1889-1953), arguably an agnostic who said he believed in some sort of 'destiny' but not a God, discovered through his high-tech telescope of the time (1929) that the galaxies were moving apart at an incredible speed. When there is motion at a certain velocity, gravity is powerless.
To reassure ourselves that such a calamity can never occur and that gravity will not some day overcome the expansion, two teams that were investigating supernova brightness (as a reference point for measuring stellar distances) discovered in 1998 that the universe was not just expanding at a constant rate, but was continually accelerating. Prof Saul Perlmutter (b. 1959) of the University of California, Berkeley, was awarded half the Nobel prize, with Prof Brian Schmidt (b. 1967) of the Australian National University and Prof Adam Riess (b. 1969) of Johns Hopkins University's Space Telescope Science Institute sharing the other half. Schmidt calls himself a militant agnostic. Perlmutter and Riess are Jews by birth, but do not appear to have found God's hand in expanding the universe like 'raisins in baking bread'. The acceleration, they calculated, must have started some 5 billion years ago.
How do they know that the expansion is accelerating, when that can only be computed by integrating readings over millions of years? Simple. Light from distant galaxies is old, so distant supernovae report the expansion rate of an earlier epoch, while nearby ones report the recent rate. The closer galaxies turned out to be moving apart faster than the much older, distant observations implied – hence the expansion has been speeding up.
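The underlying distance-velocity relation is one line of arithmetic (a toy sketch with an assumed round Hubble constant of 70 km/s per megaparsec; the actual acceleration result rests on supernova brightness measurements, not on this):

```python
# Toy Hubble's-law arithmetic: recession velocity v = H0 * d.
# H0 = 70 km/s/Mpc is an assumed round value, not a fitted one.
H0 = 70.0  # km/s per megaparsec

for d_mpc in (10, 100, 1000):
    v = H0 * d_mpc
    print(f"galaxy at {d_mpc:5d} Mpc recedes at ~{v:,.0f} km/s")
```

Light from the most distant of these left its galaxy roughly three billion years ago, which is why far-off supernovae probe the expansion rate of an earlier epoch.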
The Big Bang.
Paradoxically, the Big Bang, which challenges the First-Cause theory, was first proposed by a Catholic priest – Abbe Georges Lemaitre. When he submitted his mathematical proof at a science meeting in 1927, Einstein, already an authority in theoretical physics, is reported to have told him: "Your calculations are correct, but your physics is atrocious." Einstein, who expanded Newton's theories on motion to a level not imagined before, still believed that the universe as a whole was static. He had replaced Newton's belief that God intervened to prevent a gravitational pile-up with a number he called the Cosmological Constant.
A couple of years later, when Edwin Hubble (1889-1953) experimentally established that the universe was expanding, Lemaître’s theory that the universe developed from something the size of an atom found favor with scientists. Lemaître wore his priestly attire even to science conferences but, unlike the religious pseudo-scientists of today, refused to link his religion with his scientific findings. When Pope Pius XII (1876-1958; papacy 1939-1958) proposed that Lemaître’s findings agreed with the Biblical account of creation, the priest politely declined to link the two. It was not, however, this priest of the scientific temper who received the most coveted recognition for his literally earth-shattering finding. George F. Smoot (b. 1945) and John C. Mather (b. 1946) received their Nobel Prize for measuring the feeble afterglow emitted some 380,000 years after the Big Bang – light that has travelled for nearly 13.8 billion years – thereby establishing that the theory had a sound footing.
Motion is a steady-state condition of things already in motion and hence needs no energy to push or pull it along. Acceleration is a change in that seemingly placid condition of uniform motion; hence it does need a push (the Aristotelian impetus) from behind. That push, say the cosmologists, comes from the dark energy that lurks everywhere. At the moment this is a conjecture, rather like saying: if that room is empty and something is moving inside it, it must be a ghost. Some faithful Christians jump at the idea, just as Pope John Paul II did, and Pope Pius XII before him, and suggest a hand behind the push – though by that logic the dark energy would be the Dark Lord, the Voldemort of the Bible: who but the devil himself?
The cosmologists who made the measurements and published their findings for peer scrutiny are – barring Abbé Lemaître, who is dead – still alive and still honing their theories. Religion, on the other hand, has its conclusions firmly in place. A Catholic might try to squeeze the Biblical statement of Creation into the new findings, but many Protestants hold hard to the claim that Creation is only 6,000 years old; the Bible itself is adequate proof for them. Some also quote the proofs advanced by Aquinas without using his name. Jews, who originated the creation story but have also earned well-deserved Nobel Prizes in remarkable numbers (it works out to roughly one Nobel laureate for every 100,000 Jews – presumably more than they have medical doctors), stopped defending the creation theory long ago, save for a few rabbis who must defend the Pentateuch to make a living.
Existence is the rule; motion is the law. Rest is not an option. Nothing, by its very definition, is non-existent; so motion is the natural state. Thus neither the universe nor anything in it needs a prime mover. To stop the universe from its incessant motion you would, rather, need a prime stopper. The universe never stops; hence there is no need for such a jealous and wrathful obstructionist called God.
One of the big puzzles in astrophysics is how stars like the sun manage to form from collapsing molecular clouds in star-forming regions of the universe. The puzzle is known technically as the angular momentum problem in stellar formation. The problem essentially is that the gas in the star-forming cloud has some rotation, which gives each element of the gas an amount of angular momentum. As it collapses inward, eventually it reaches a state where the gravitational pull of the nascent star is balanced by the centrifugal force, so that it will no longer collapse inward of a certain radius unless it can shed some of the angular momentum. This point is known as the centrifugal barrier.
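For ballistically infalling gas that conserves its specific angular momentum j, energy conservation puts the centrifugal barrier at r = j² / (2GM), where M is the central mass. A minimal sketch of that formula, using illustrative (assumed) numbers rather than measured values for any particular protostar:

```python
# Centrifugal barrier for ballistic infall with conserved specific angular
# momentum j: r_cb = j^2 / (2 G M). Numbers below are assumptions for
# illustration, not measurements of a real source.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def centrifugal_barrier(j, mass):
    """Radius (m) inside which gas with specific angular momentum j (m^2/s)
    can no longer fall toward a central mass (kg)."""
    return j**2 / (2.0 * G * mass)

# Assumed example: a 0.2 solar-mass protostar and j = 6e16 m^2/s
# give a barrier of a few hundred AU.
r = centrifugal_barrier(6e16, 0.2 * M_SUN)
print(r / AU)
```

Doubling j quadruples the barrier radius, which is why even modest rotation in the parent cloud must somehow be shed before gas can reach the stellar surface.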
Now, using measurements taken by radio antennas, a group led by Nami Sakai of the RIKEN Star and Planet Formation Laboratory has found clues as to how the gas in the cloud can find its way to the surface of the forming star. To gain a better understanding of the process, Sakai and her group turned to the ALMA observatory, a network of 66 radio dishes located high in the Atacama Desert of northern Chile. The dishes are connected together in a carefully choreographed configuration so that they can provide images of radio emissions from protostellar regions around the sky.
The group chose to observe a protostar designated as L1527, located in a nearby star-forming region known as the Taurus Molecular Cloud. The protostar, located about 450 light years away, has a spinning protoplanetary disk, almost edge-on to our view, embedded in a large envelope of molecules and dust.
Previously, Sakai had discovered, from observations of molecules around the same protostar, that unlike the commonly held hypothesis, the transition from envelope to the inner disk--which later forms into planets--was not smooth but very complex. "As we looked at the observational data," says Sakai, "we realized that the region near the centrifugal barrier--where particles can no longer infall--is quite complex, and we realized that analyzing the movements in this transition zone could be crucial for understanding how the envelope collapses. Our observations showed that there is a broadening of the envelope at that place, indicating something like a 'traffic jam' in the region just outside the centrifugal barrier, where the gas heats up as the result of a shock wave. It became clear from the observations that a significant part of the angular momentum is lost by gas being cast in the vertical direction from the flattened protoplanetary disk that formed around the protostar."
This behavior accorded well with calculations the group had done using a purely ballistic model, in which the particles behave like simple projectiles, uninfluenced by magnetic or other forces.
According to Sakai, "We plan to continue to use observations from the powerful ALMA array to further refine our understanding of the dynamics of stellar formation and fully explain how matter collapses onto the forming star. This work could also help us to better understand the evolution of our own solar system."
The research was published in Monthly Notices of the Royal Astronomical Society (Oxford University Press).
The Bortle Dark-Sky Scale
Excellent? Typical? Urban? Use this nine-step scale to rate the sky conditions at any observing site.
By John E. Bortle
How dark is your sky? A precise answer to this question is useful for comparing observing sites and, more important, for determining whether a site is dark enough to let you push your eyes, telescope, or camera to their theoretical limits. Likewise, you need accurate criteria for judging sky conditions when documenting unusual or borderline observations, such as an extremely long comet tail, a faint aurora, or subtle features in galaxies.
On Internet bulletin boards and newsgroups I see many postings from beginners (and sometimes more experienced observers) wondering how to evaluate the quality of their skies. Unfortunately, most of today's stargazers have never observed under a truly dark sky, so they lack a frame of reference for gauging local conditions. Many describe observations made at "very dark" sites, but from the descriptions it's clear that the sky must have been only moderately dark. Most amateurs today cannot get to a truly dark location within reasonable driving distance. Thus, upon finding a semirural observing site where stars of magnitude 6.0 to 6.3 are marginally apparent to the unaided eye, they believe they have located an observing Nirvana!
Thirty years ago one could find truly dark skies within an hour's drive of major population centres. Today you often need to travel 150 miles or more. In my own observing career I have watched the extent to which ever-growing light pollution has sullied the heavens. In years long past I witnessed nearly pristine skies from parts of the highly urbanized north-eastern United States. This is no longer possible.
Limiting Magnitude Isn't Enough
Amateur astronomers usually judge their skies by noting the magnitude of the faintest star visible to the naked eye. However, naked-eye limiting magnitude is a poor criterion. It depends too much on a person's visual acuity (sharpness of eyesight), as well as on the time and effort expended to see the faintest possible stars. One person's "5.5-magnitude sky" is another's "6.3-magnitude sky." Moreover, deep-sky observers need to assess the visibility of both stellar and nonstellar objects. A modest amount of light pollution degrades diffuse objects such as comets, nebulae, and galaxies far more than stars.
To help observers judge the true darkness of a site, I have created a nine-level scale. It is based on nearly 50 years of observing experience. I hope it will prove both enlightening and useful to observers — though it may stun or even horrify some! Should it come into wide use, it would provide a consistent standard for comparing observations. Researchers would also be better able to assess the plausibility of an unusual or marginal observation. All around, it could be a boon to those of us who regularly scan the heavens.
Class 1: Excellent dark-sky site. The zodiacal light, gegenschein, and zodiacal band (S&T: October 2000, page 116) are all visible — the zodiacal light to a striking degree, and the zodiacal band spanning the entire sky. Even with direct vision, the galaxy M33 is an obvious naked-eye object. The Scorpius and Sagittarius region of the Milky Way casts obvious diffuse shadows on the ground. To the unaided eye the limiting magnitude is 7.6 to 8.0 (with effort); the presence of Jupiter or Venus in the sky seems to degrade dark adaptation. Airglow (a very faint, naturally occurring glow most evident within about 15° of the horizon) is readily apparent. With a 32-centimeter (12½-inch) scope, stars to magnitude 17.5 can be detected with effort, while a 50-cm (20-inch) instrument used with moderate magnification will reach 19th magnitude. If you are observing on a grass-covered field bordered by trees, your telescope, companions, and vehicle are almost totally invisible. This is an observer's Nirvana!
Class 2: Typical truly dark site. Airglow may be weakly apparent along the horizon. M33 is rather easily seen with direct vision. The summer Milky Way is highly structured to the unaided eye, and its brightest parts look like veined marble when viewed with ordinary binoculars. The zodiacal light is still bright enough to cast weak shadows just before dawn and after dusk, and its colour can be seen as distinctly yellowish when compared with the blue-white of the Milky Way. Any clouds in the sky are visible only as dark holes or voids in the starry background. You can see your telescope and surroundings only vaguely, except where they project against the sky. Many of the Messier globular clusters are distinct naked-eye objects. The limiting naked-eye magnitude is as faint as 7.1 to 7.5, while a 32-cm telescope reaches to magnitude 16 or 17.
Class 3: Rural sky. Some indication of light pollution is evident along the horizon. Clouds may appear faintly illuminated in the brightest parts of the sky near the horizon but are dark overhead. The Milky Way still appears complex, and globular clusters such as M4, M5, M15, and M22 are all distinct naked-eye objects. M33 is easy to see with averted vision. The zodiacal light is striking in spring and autumn (when it extends 60° above the horizon after dusk and before dawn) and its colour is at least weakly indicated. Your telescope is vaguely apparent at a distance of 20 or 30 feet. The naked-eye limiting magnitude is 6.6 to 7.0, and a 32-cm reflector will reach to 16th magnitude.
Class 4: Rural/suburban transition. Fairly obvious light-pollution domes are apparent over population centres in several directions. The zodiacal light is clearly evident but doesn't even extend halfway to the zenith at the beginning or end of twilight. The Milky Way well above the horizon is still impressive but lacks all but the most obvious structure. M33 is a difficult averted-vision object and is detectable only when at an altitude higher than 50°. Clouds in the direction of light-pollution sources are illuminated but only slightly so, and are still dark overhead. You can make out your telescope rather clearly at a distance. The maximum naked-eye limiting magnitude is 6.1 to 6.5, and a 32-cm reflector used with moderate magnification will reveal stars of magnitude 15.5.
Class 5: Suburban sky. Only hints of the zodiacal light are seen on the best spring and autumn nights. The Milky Way is very weak or invisible near the horizon and looks rather washed out overhead. Light sources are evident in most if not all directions. Over most or all of the sky, clouds are quite noticeably brighter than the sky itself. The naked-eye limit is around 5.6 to 6.0, and a 32-cm reflector will reach about magnitude 14.5 to 15.
Class 6: Bright suburban sky. No trace of the zodiacal light can be seen, even on the best nights. Any indications of the Milky Way are apparent only toward the zenith. The sky within 35° of the horizon glows greyish white. Clouds anywhere in the sky appear fairly bright. You have no trouble seeing eyepieces and telescope accessories on an observing table. M33 is impossible to see without binoculars, and M31 is only modestly apparent to the unaided eye. The naked-eye limit is about 5.5, and a 32-cm telescope used at moderate powers will show stars at magnitude 14.0 to 14.5.
Class 7: Suburban/urban transition. The entire sky background has a vague, greyish white hue. Strong light sources are evident in all directions. The Milky Way is totally invisible or nearly so. M44 or M31 may be glimpsed with the unaided eye but are very indistinct. Clouds are brilliantly lit. Even in moderate-size telescopes, the brightest Messier objects are pale ghosts of their true selves. The naked-eye limiting magnitude is 5.0 if you really try, and a 32-cm reflector will barely reach 14th magnitude.
Class 8: City sky. The sky glows whitish gray or orangish, and you can read newspaper headlines without difficulty. M31 and M44 may be barely glimpsed by an experienced observer on good nights, and only the bright Messier objects are detectable with a modest-size telescope. Some of the stars making up the familiar constellation patterns are difficult to see or are absent entirely. The naked eye can pick out stars down to magnitude 4.5 at best, if you know just where to look, and the stellar limit for a 32-cm reflector is little better than magnitude 13.
Class 9: Inner-city sky. The entire sky is brightly lit, even at the zenith. Many stars making up familiar constellation figures are invisible, and dim constellations such as Cancer and Pisces are not seen at all. Aside from perhaps the Pleiades, no Messier objects are visible to the unaided eye. The only celestial objects that really provide pleasing telescopic views are the Moon, the planets, and a few of the brightest star clusters (if you can find them). The naked-eye limiting magnitude is 4.0 or less.
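As a rough companion to the scale above, the quoted naked-eye limiting magnitudes (NELM) can be turned into a simple lookup. Bortle's own caveat applies — limiting magnitude alone is a poor criterion — and the exact class boundaries below are my own assumptions where the article gives only approximate figures:

```python
# Rough NELM -> Bortle class lookup based on the ranges quoted in the article.
# Limiting magnitude alone is a poor criterion (as the author stresses), so
# treat this as a first guess. Boundaries between the "about 5.5/5.0/4.5"
# classes are assumed midpoints, not values from the text.

_BORTLE_THRESHOLDS = [
    (7.6, 1),  # Class 1: 7.6-8.0
    (7.1, 2),  # Class 2: 7.1-7.5
    (6.6, 3),  # Class 3: 6.6-7.0
    (6.1, 4),  # Class 4: 6.1-6.5
    (5.6, 5),  # Class 5: 5.6-6.0
    (5.3, 6),  # Class 6: about 5.5
    (4.8, 7),  # Class 7: about 5.0
    (4.3, 8),  # Class 8: about 4.5
]

def bortle_class(nelm):
    """Approximate Bortle class from naked-eye limiting magnitude."""
    for threshold, cls in _BORTLE_THRESHOLDS:
        if nelm >= threshold:
            return cls
    return 9  # Class 9: 4.0 or less

print(bortle_class(7.8), bortle_class(5.8), bortle_class(4.0))
```

A site where magnitude-6.3 stars are marginally visible, for example, comes out as Class 4 — the "rural/suburban transition" the author warns is often mistaken for an observing Nirvana.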
The Michelson–Morley experiment was published in 1887 by Albert A. Michelson and Edward W. Morley and performed at what is now Case Western Reserve University in Cleveland, Ohio. It compared the speed of light in perpendicular directions, in an attempt to detect the relative motion of matter through the stationary luminiferous aether ("aether wind"). The negative results are generally considered to be the first strong evidence against the then-prevalent aether theory, and initiated a line of research that eventually led to special relativity, in which the stationary aether concept has no role.[A 1] The experiment has been referred to as "the moving-off point for the theoretical aspects of the Second Scientific Revolution".[A 2]
Michelson–Morley type experiments have been repeated many times with steadily increasing sensitivity. These include experiments from 1902 to 1905, and a series of experiments in the 1920s. In addition, recent resonator experiments have confirmed the absence of any aether wind at the 10−17 level. Together with the Ives–Stilwell and Kennedy–Thorndike experiments, the Michelson–Morley experiment forms one of the fundamental tests of special relativity theory.[A 3]
Physics theories of the late 19th century assumed that just as surface water waves must have a supporting substance, i.e. a "medium", to move across (in this case water), and audible sound
requires a medium to transmit its wave motions (such as air or water), so light must also require a medium, the "luminiferous aether", to transmit its wave motions. Because light can travel
through a vacuum, it was assumed that even a vacuum must be filled with aether. Because the speed of light is so great, and because material bodies pass through the aether without obvious
friction or drag, it was assumed to have a highly unusual combination of properties. Designing experiments to test the properties of the aether was a high priority of 19th century physics.[A
4]:411ff Earth orbits around the Sun at a speed of around 30 km/s (18.75 mi/s) or over 108,000 km/h (67,500 mi/hr). The Earth is in motion, so two main possibilities were considered: (1)
The aether is stationary and only partially dragged by Earth (proposed by Augustin-Jean Fresnel in 1818), or (2) the aether is completely dragged by Earth and thus shares its motion at Earth's
surface (proposed by George Gabriel Stokes in 1844).[A 5] In addition, James Clerk Maxwell (1865) recognized the electromagnetic nature of light and developed what are now called Maxwell's
equations, but these equations were still interpreted as describing the motion of waves through an aether, whose state of motion was unknown. Eventually, Fresnel's idea of an (almost) stationary
aether was preferred because it appeared to be confirmed by the Fizeau experiment (1851) and the aberration of star light.[A 5]
According to this hypothesis, Earth and the aether are in relative motion, implying that a so-called "aether wind" (Fig. 2) should exist. Although it would be possible, in theory, for the Earth's motion to match that of the aether at one moment in time, it was not possible for the Earth to remain at rest with respect to the aether at all times, because of the variation in both the direction and the speed of the motion. At any given point on the Earth's surface, the magnitude and direction of the wind would vary with time of day and season. By analyzing the return speed of light in different directions at various different times, it was thought to be possible to measure the motion of the Earth relative to the aether. The expected relative difference in the measured speed of light was quite small, given that the velocity of the Earth in its orbit around the Sun was about one hundredth of one percent of the speed of light.[A 4]:417ff
During the mid-19th century, measurements of aether wind effects of first order i.e. effects proportional to v/c (v being Earth's velocity, c the speed of light) were thought to be possible, but
no direct measurement of the speed of light was possible with the accuracy required. For instance, the Fizeau–Foucault apparatus could measure the speed of light to perhaps 5% accuracy, which was
quite inadequate for measuring directly a first-order 0.01% change in the speed of light. A number of physicists therefore attempted to make measurements of indirect first-order effects not of
the speed of light itself, but of variations in the speed of light (see First order aether-drift experiments). The Hoek experiment, for example, was intended to detect interferometric fringe
shifts due to speed differences of oppositely propagating light waves through water at rest. The results of such experiments were all negative.[A 6] This could be explained by using Fresnel's
dragging coefficient, according to which the aether and thus light are partially dragged by moving matter. Partial aether-dragging would thwart attempts to measure any first order change in the
speed of light. As pointed out by Maxwell (1878), only experimental arrangements capable of measuring second order effects would have any hope of detecting aether drift, i.e. effects proportional
to v2/c2.[A 7][A 8] Existing experimental setups, however, were not sensitive enough to measure effects of that size.
Michelson had a solution to the problem of how to construct a device sufficiently accurate to detect aether flow. In 1877, while teaching at his alma mater, the United States Naval Academy in Annapolis, Michelson conducted his first known light speed experiments as a part of a classroom demonstration. In 1881, he left active U.S. Naval service while in Germany concluding his studies. In that year, Michelson used a prototype experimental device to make several more measurements.
The device he designed, later known as a Michelson interferometer, sent yellow light from a sodium flame (for alignment), or white light (for the actual observations), through a half-silvered mirror that was used to split it into two beams traveling at right angles to one another. After leaving the splitter, the beams traveled out to the ends of long arms where they were reflected back into the middle by small mirrors. They then recombined on the far side of the splitter in an eyepiece, producing a pattern of constructive and destructive interference whose transverse displacement would depend on the relative time it takes light to transit the longitudinal vs. the transverse arms. If the Earth is traveling through an aether medium, a beam reflecting back and forth parallel to the flow of aether would take longer than a beam reflecting perpendicular to the aether because the time gained from traveling downwind is less than that lost traveling upwind. Michelson expected that the Earth's motion would produce a fringe shift equal to 0.04 fringes—that is, 4% of the separation between areas of the same intensity. He did not observe the expected shift; the greatest average deviation that he measured (in the northwest direction) was only 0.018 fringes; most of his measurements were much less. His conclusion was that Fresnel's hypothesis of a stationary aether with partial aether dragging would have to be rejected, and thus he confirmed Stokes' hypothesis of complete aether dragging.
However, Alfred Potier (and later Hendrik Lorentz) pointed out to Michelson that he had made an error of calculation, and that the expected fringe shift should have been only 0.02 fringes.
Michelson's apparatus was subject to experimental errors far too large to say anything conclusive about the aether wind. Definitive measurement of the aether wind would require an experiment with
greater accuracy and better controls than the original. Nevertheless the prototype was successful in demonstrating that the basic method was feasible.[A 5][A 9]
Among other lessons was the need to control for vibration. Michelson (1881) wrote:
"owing to the extreme sensitiveness of the instrument to vibrations, the work could not be carried on during the day. Next, the experiment was tried at night. When the mirrors were placed
half-way on the arms the fringes were visible, but their position could not be measured till after twelve o'clock, and then only at intervals. When the mirrors were moved out to the ends of the
arms, the fringes were only occasionally visible. It thus appeared that the experiments could not be performed in Berlin, and the apparatus was accordingly removed to the Astrophysicalisches
Observatorium in Potsdam. Here, the fringes under ordinary circumstances were sufficiently quiet to measure, but so extraordinarily sensitive was the instrument that the stamping of the pavement,
about 100 meters from the observatory, made the fringes disappear entirely!"
In 1885, Michelson began a collaboration with Edward Morley, spending considerable time and money to confirm with higher accuracy Fizeau's 1851 experiment on Fresnel's drag coefficient, to improve on Michelson's 1881 experiment, and to establish the wavelength of light as a standard of length. At this time Michelson was professor of physics at the Case School of Applied Science, and Morley was professor of chemistry at Western Reserve University (WRU), which shared a campus with the Case School on the eastern edge of Cleveland. Michelson suffered a nervous breakdown in September 1885, from which he recovered by October 1885. Morley ascribed this breakdown to the intense work of Michelson during the preparation of the experiments. In 1886, Michelson and Morley successfully confirmed Fresnel's drag coefficient – this result was also considered as a confirmation of the stationary aether concept.[A 1]
This result strengthened their hope of finding the aether wind. Michelson and Morley created an improved version of the Michelson experiment with more than enough accuracy to detect this hypothetical effect. The experiment was performed in several periods of concentrated observations between April and July 1887, in the basement of Adelbert Dormitory of WRU (later renamed Pierce Hall, demolished in 1962).[A 10][A 11]
As shown in Fig. 5, the light was repeatedly reflected back and forth along the arms of the interferometer, increasing the path length to 11 m. At this length, the drift would be about 0.4 fringes. To make that easily detectable, the apparatus was assembled in a closed room in the basement of the heavy stone dormitory, eliminating most thermal and vibrational effects. Vibrations were further reduced by building the apparatus on top of a large block of sandstone (Fig. 1), about a foot thick and five feet square, which was then floated in an annular trough of mercury. They estimated that effects of about 1/100 of a fringe would be detectable.
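The 0.4-fringe expectation follows from the classical second-order formula: rotating the apparatus 90° should shift the pattern by ΔN ≈ 2Lv²/(λc²). A quick check, assuming a mid-visible wavelength of 550 nm (the text does not specify one):

```python
# Expected Michelson-Morley fringe shift on a 90-degree rotation, from the
# classical second-order formula dN = 2 L v^2 / (lambda c^2).
# The 550 nm wavelength is an assumed mid-visible value.

L = 11.0             # effective (folded) arm length, m
v = 30_000.0         # Earth's orbital speed, m/s
c = 3.0e8            # speed of light, m/s
wavelength = 550e-9  # m (assumed)

expected_shift = 2 * L * v**2 / (wavelength * c**2)
print(round(expected_shift, 2))  # about 0.4 fringes, as quoted above
```

Note that (v/c)² is only 10⁻⁸, which is why the 11 m folded path and the 1/100-fringe sensitivity were essential.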
Figure 5. This figure illustrates the folded light path used in the Michelson–Morley interferometer that enabled a path length of 11 m. a is the light source, an oil lamp. b is a beam splitter. c
is a compensating plate so that both the reflected and transmitted beams travel through the same amount of glass (important since experiments were run with white light which has an extremely
short coherence length requiring precise matching of optical path lengths for fringes to be visible; monochromatic sodium light was used only for initial alignment[note 2]). d, d' and e are
mirrors. e' is a fine adjustment mirror. f is a telescope.
Michelson and Morley and other early experimentalists using interferometric techniques in an attempt to measure the properties of the luminiferous aether, used (partially) monochromatic light only for initially setting up their equipment, always switching to white light for the actual measurements. The reason is that measurements were recorded visually. Purely monochromatic light would result in a uniform fringe pattern. Lacking modern means of environmental temperature control, experimentalists struggled with continual fringe drift even though the interferometer might be set up in a basement. Because the fringes would occasionally disappear due to vibrations caused by passing horse traffic, distant thunderstorms and the like, an observer could easily "get lost" when the fringes returned to visibility. The advantages of white light, which produced a distinctive colored fringe pattern, far outweighed the difficulties of aligning the apparatus due to its low coherence length. As Dayton Miller wrote, "White light fringes were chosen for the observations because they consist of a small group of fringes having a central, sharply defined black fringe which forms a permanent zero reference mark for all readings."[A 12][note 3] Use of partially monochromatic light (yellow sodium light) during initial alignment enabled the researchers to locate the position of equal path length, more or less easily, before switching to white light.[note 4]
The mercury trough allowed the device to turn with close to zero friction, so that once having given the sandstone block a single push it would slowly rotate through the entire range of possible angles to the "aether wind," while measurements were continuously observed by looking through the eyepiece. The hypothesis of aether drift implies that because one of the arms would inevitably turn into the direction of the wind at the same time that another arm was turning perpendicularly to the wind, an effect should be noticeable even over a period of minutes.
The expectation was that the effect would be graphable as a sine wave with two peaks and two troughs per rotation of the device. This result could have been expected because during each full rotation, each arm would be parallel to the wind twice (facing into and away from the wind giving identical readings) and perpendicular to the wind twice. Additionally, due to the Earth's rotation, the wind would be expected to show periodic changes in direction and magnitude during the course of a sidereal day.
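The "two peaks and two troughs per rotation" expectation is just the statement that the predicted displacement varies as cos 2θ with the orientation angle θ. A small numerical sketch (the 0.4-fringe amplitude and the sampling grid are illustrative assumptions) confirms two maxima per full turn:

```python
import math

# The predicted fringe displacement varies as cos(2*theta) with the apparatus
# orientation theta, giving two peaks and two troughs per full rotation.
# Amplitude of 0.4 fringes matches the expectation quoted in the article.

samples = 360
signal = [0.4 * math.cos(2 * math.radians(d)) for d in range(samples)]

# Count local maxima over one full rotation (wrapping around the ends).
peaks = sum(
    1
    for i in range(samples)
    if signal[i] > signal[i - 1] and signal[i] > signal[(i + 1) % samples]
)
print(peaks)  # 2
```

Each arm lies along the hypothetical wind twice and across it twice per turn, which is where the doubled angular frequency comes from.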
Because of the motion of the Earth around the Sun, the measured data were also expected to show annual variations.
After all this thought and preparation, the experiment became what has been called the most famous failed experiment in history.[A 13] Instead of providing insight into the properties of the aether, Michelson and Morley's article in the American Journal of Science reported the measurement to be as small as one-fortieth of the expected displacement (Fig. 7), but "since the displacement is proportional to the square of the velocity" they concluded that the measured velocity was "probably less than one-sixth" of the expected velocity of the Earth's motion in orbit and "certainly less than one-fourth." (Afterward, Michelson and Morley ceased their aether drift measurements and started to use their newly developed technique to establish the wavelength of light as a standard of length.) Although this small "velocity" was measured, it was considered far too small to be used as evidence of speed relative to the aether, and it was understood to be within the range of an experimental error that would allow the speed to actually be zero.[A 1] For instance, Michelson wrote about the "decidedly negative result" in a letter to Lord Rayleigh in August 1887:[A 14]
The Experiments on the relative motion of the earth and ether have been completed and the result decidedly negative. The expected deviation of the interference fringes from the zero should
have been 0.40 of a fringe – the maximum displacement was 0.02 and the average much less than 0.01 – and then not in the right place. As displacement is proportional to squares of the relative
velocities it follows that if the ether does slip past the relative velocity is less than one sixth of the earth’s velocity.
From the standpoint of the then current aether models, the experimental results were conflicting. The Fizeau experiment and its 1886 repetition by Michelson and Morley apparently confirmed the
stationary aether with partial aether dragging, and refuted complete aether dragging. On the other hand, the much more precise Michelson–Morley experiment (1887) apparently confirmed complete
aether dragging and refuted the stationary aether.[A 5] In addition, the Michelson–Morley null result was further substantiated by the null results of other second-order experiments of different
kind, namely the Trouton–Noble experiment (1903) and the Experiments of Rayleigh and Brace (1902–1904). These problems and their solution led to the development of the Lorentz transformation and special relativity.
Figure 7. Michelson and Morley's results. The upper solid line is the curve for their observations at noon, and the lower solid line is that for their evening observations. Note that the theoretical curves and the observed curves are not plotted at the same scale: the dotted curves, in fact, represent only one-eighth of the theoretical displacements.
The weather patterns on Mars are rather fascinating, owing to their particular similarities and differences with those of Earth. For one, the Red Planet experiences dust storms that are not dissimilar to storms that happen regularly here on Earth. Due to the lower atmospheric pressure, these storms are much less powerful than hurricanes on Earth, but can grow so large that they cover half the planet.
Recently, the ESA’s Mars Express orbiter captured images of the towering cloud front of a dust storm located close to Mars’ northern polar region. This storm, which began in April 2018, took place in the region known as Utopia Planitia, close to the ice cap at the Martian North Pole. It is one of several that have been observed on Mars in recent months, one of which is the most severe to take place in years.
The images (shown above and below) were created using data acquired by the Mars Express‘ High Resolution Stereo Camera (HRSC). The camera system is operated by the German Aerospace Center (DLR), and managed to capture images of this storm front – which would prove to be the harbinger of the Martian storm season – on April 3rd, 2018, during its 18,039th orbit of Mars.
This storm was one of several small-scale dust storms that have been observed in recent months on Mars. A much larger storm emerged further southwest in the Arabia Terra region, which began in May of 2018 and developed into a planet-wide dust storm within several weeks.
Dust storms occur on Mars when the southern hemisphere experiences summer, which coincides with the planet being closer to the Sun in its elliptical orbit. Due to increased temperatures, dust particles are lifted higher into the atmosphere, creating more wind. The resulting wind kicks up yet more dust, creating a feedback loop that NASA scientists are still trying to understand.
Since the southern polar region is pointed towards the Sun in the summer, carbon dioxide frozen in the polar cap evaporates. This has the effect of thickening the atmosphere and increases surface pressure, which enhances the storms by helping to suspend dust particles in the air. Though they are common and can begin suddenly, Martian dust storms typically stay localized and last only a few weeks.
While local and regional dust storms are frequent, only a few of them develop into global phenomena. These storms only occur every three to four Martian years (the equivalent of approximately 6 to 8 Earth years) and can persist for several months. Such storms have been viewed many times in the past by missions like Mariner 9 (1971), Viking 1 (1977) and the Mars Global Surveyor (2001).
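The Mars-year to Earth-year conversion quoted above is easy to verify, taking the Martian year as roughly 687 Earth days (a standard figure, not one stated in this article):

```python
# One Martian year is ~686.98 Earth days; express the 3-4 Mars-year
# interval between global storms in Earth years.
MARS_YEAR_IN_EARTH_YEARS = 686.98 / 365.25  # ~1.88

for mars_years in (3, 4):
    earth_years = mars_years * MARS_YEAR_IN_EARTH_YEARS
    print(f"{mars_years} Mars years ≈ {earth_years:.1f} Earth years")
# 3 Mars years ≈ 5.6 Earth years, 4 Mars years ≈ 7.5 Earth years,
# roughly matching the quoted 6-to-8-year range.
```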
In 2007, a large storm covered the planet and darkened the skies over where the Opportunity rover was stationed – which led to two weeks of minimal operations and no communications. The most recent storm, which began back in May, has been less intense, but managed to create a state of perpetual night over Opportunity’s location in Perseverance Valley.
As a result, the Opportunity team placed the rover into hibernation mode and shut down communications in June 2018. Meanwhile, NASA’s Curiosity rover continues to explore the surface of Mars, thanks to its radioisotope thermoelectric generator (RTG), which does not rely on solar panels. By autumn, scientists expect the dust storm will weaken significantly, and are confident Opportunity will survive.
According to NASA, the dust storm will also not affect the landing of the InSight Lander, which is scheduled to take place on November 26th, 2018. In the meantime, this storm is being monitored by all five active ESA and NASA spacecraft around Mars, which includes the 2001 Mars Odyssey, the Mars Reconnaissance Orbiter, the Mars Atmosphere and Volatile EvolutioN (MAVEN), the Mars Express, and the Exomars Trace Gas Orbiter.
Understanding how global storms form and evolve on Mars will be critical for future solar-powered missions. It will also come in handy when crewed missions are conducted to the planet, not to mention space tourism and colonization!
Further Reading: DLR
The European Space Agency’s Rosetta spacecraft has been witnessing growing activity from comet 67P/Churyumov-Gerasimenko as it approaches perihelion (its closest point to the sun during its orbit). On July 29, while the spacecraft orbited at a distance of 116 miles (186 kilometers) from the comet, it observed the most dramatic outburst to date. Early science results collected during the outburst came from several instruments aboard Rosetta, including the Double Focusing Mass Spectrometer (DFMS), which uses NASA-built electronics. The DFMS is part of the spacecraft’s Spectrometer for Ion and Neutral Analysis (ROSINA) instrument.
When the outburst occurred, the spectrometer recorded dramatic changes in the composition of outpouring gases from the comet when compared to measurements made two days earlier. As a result of the outburst, the amount of carbon dioxide increased by a factor of two, methane by four, and hydrogen sulphide by seven, while the amount of water stayed almost constant. There were also hints of heavy organic material that might have been dust.
Kathrin Altwegg, principal investigator for the ROSINA instrument at the University of Bern in Switzerland, noted that although the material may have been freed from beneath the comet’s surface, it is too early to say for sure that this is the case.
A sequence of images taken by Rosetta’s scientific camera OSIRIS shows the sudden onset of a well-defined, jet-like feature emerging from the side of the comet. The jet, the brightest seen to date, was first recorded in an image taken at 6:24 a.m. PDT (9:24 a.m. EDT, 13:24 GMT) on July 29, but not in an image taken 18 minutes earlier. The jet then faded significantly in an image captured 18 minutes later. The OSIRIS camera team estimates the material in the jet was traveling at 33 feet per second (10 meters per second), at least.
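As a rough illustration of what that lower-bound speed implies, material moving at 10 meters per second covers about 10.8 kilometers in the 18 minutes separating successive OSIRIS exposures:

```python
# Back-of-the-envelope distance covered by jet material between frames,
# using the camera team's lower-bound speed estimate.
speed = 10.0         # m/s, lower bound from the OSIRIS team
interval = 18 * 60   # seconds between successive exposures

distance_km = speed * interval / 1000.0
print(f"material travels at least {distance_km:.1f} km between frames")  # 10.8 km
```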
On Thursday, August 13, the comet and Rosetta will be 116 million miles (186 million kilometers) from the sun – the closest to the sun they will be in their 6.5-year orbit. In recent months, the increasing solar energy has been warming the comet’s frozen ices, turning them to gas, which pours out into space, dragging dust along with it. The period around perihelion is scientifically very important, as the intensity of the sunlight increases and parts of the comet previously cast in years of darkness are flooded with sunlight. The comet’s general activity is expected to peak in the weeks following perihelion.
Comets are time capsules containing primitive material left over from the epoch when the sun and its planets formed. Rosetta is the first spacecraft to witness at close proximity how a comet changes as it is subjected to the increasing intensity of the sun’s radiation. Observations are helping scientists learn more about the origin and evolution of our solar system and the role comets may have played in seeding Earth with water, and perhaps even life. | 0.861095 | 3.946275 |
Hubble Telescope Catches a 'Stellar Thief' Star Stealing From a Nearby Supernova
In this particular instance, the crime happened a long time ago, but astronomers only just cracked the cold case open using new evidence from the Hubble Space Telescope. While examining the remnant of a supernova (SN 2001ig) that exploded 17 years ago in the galaxy NGC 7424, the telescope picked up something unusual: another star conveniently close to the crime scene.
To break away from the silly crime metaphors, Hubble had picked up concrete evidence that a supernova had occurred in a double-star system or binary system - which scientists had predicted but never found proof of - and that the presence of a second star likely played a role in this supernova happening early. According to new research on the incident, this companion star was siphoning hydrogen from the doomed star and gradually destabilizing it.
Going back to the silly crime metaphors, this wasn't just theft - it was murder.
17 years ago, in a galaxy far, far away (40 million light-years to be exact), astronomers witnessed a massive star explosion. Now, in the fading afterglow of the blast, @NASAHubble space telescope captured the first ? of...a surviving lustrous larcenist? https://t.co/9KdDEOZJ4J pic.twitter.com/U20DzuEp7H— NASA (@NASA) April 28, 2018
The star that exploded had large amounts of hydrogen in its "stellar envelope", a region of the star which transports materials from its core to its atmosphere. The companion star, orbiting in close proximity, had been absorbing that hydrogen into its own gravitational pull for millions of years prior to the supernova, producing what is known as a "stripped-envelope supernova", which detonates without any hydrogen.
Even though light from the supernova only reached Earth 17 years ago, with the star being 40 million light-years away inside the Grus constellation (which is known as the Crane), the initial blast was so bright that it concealed any other stars hiding around it. It was only recently that the glow had faded enough for Hubble to spot this second star, pointing to the supernova being just one half of a very bright binary sunset.
According to Stuart Ryder, the lead author of the new research from the Australian Astronomical Observatory (AAO) in Sydney, most large stars tend to be in binary systems, meaning the famous Tatooine sunset from Star Wars is hardly a unique view throughout the universe (assuming there are solid planets to view it from).
Which is why it's so refreshing to find evidence of this behavior, as Ryder noted in a press release on Hubble's website.
The criminal star is likely to get away with it, sadly. But if you ever figure out how to fit a large, distant star inside of a courtroom, then you let NASA know. | 0.864936 | 3.421603 |
NASA’s New Horizons is en route to Ultima Thule, a journey that will see the NASA spacecraft whiz past this mysterious Kuiper Belt object on New Year’s Day. But as the probe nears, mission specialists are already having to deal with a rather strange observation—an anomaly in the way Ultima Thule is reflecting incoming light.
New Horizons will zoom past Ultima Thule at 12:33 a.m. ET on January 1, 2019, at speeds in excess of 31,500 miles per hour (50,700 kilometers per hour) and at a distance of around 2,200 miles (3,500 kilometers). We’ll be able to see the object in exquisite detail, but until then, project scientists are having to contend with an unexpected mystery. By analyzing the hundreds of photos taken of the object by New Horizons thus far, project scientists have been trying to measure its brightness—but they’ve failed to detect periodic changes in Ultima’s luminosity as it rotates.
Ultima Thule, as we already know, is not spherically shaped. Back in 2017, observations made from telescopes in Argentina suggested it was oval or cigar-shaped, or possibly even two objects that are in super close proximity to each other (a binary pair) or possibly even touching (a contact binary). That’s all cool, as we’ve seen objects such as these before. What’s weird in this case, however, is that Ultima is not exhibiting repeated variations in brightness—the kind of thing you’d expect from a rotating object as its surface reflects incoming light from the Sun. These periodic pulsations of light—or light curves, in the parlance of astronomers—are all but negligible in Ultima Thule.
“It’s really a puzzle,” said New Horizons Principal Investigator Alan Stern in a statement. “I call this Ultima’s first puzzle—why does it have such a tiny light curve that we can’t even detect it? I expect the detailed flyby images coming soon to give us many more mysteries, but I did not expect this, and so soon.”
So why is this distant Kuiper Belt object devoid of a detectable light curve?
Marc Buie, a mission scientist from the Southwest Research Institute, said it’s possible that Ultima Thule’s rotation pole is pointing directly toward New Horizons as it approaches. So from the spacecraft’s perspective, Ultima Thule is spinning, but the spacecraft is only able to see the same reflective side—hence the absence of a light curve. It would be like watching a merry-go-round from directly above. It’s a good, and possibly the most plausible, explanation, but it requires that New Horizons just happens to find itself locked in this peculiar orientation with Ultima Thule.
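Buie's pole-on idea can be illustrated with a toy model (an illustrative sketch, not mission code): approximate Ultima Thule as a rotating prolate ellipsoid and take reflected brightness as proportional to projected area. The 2:1 axis ratio below is an arbitrary choice for illustration. Viewed along the rotation axis the projected area never changes, so the light curve is flat; viewed from the equator it varies by the full axis ratio.

```python
import math

def projected_area(a, b, aspect, phase):
    """Shadow area of a prolate ellipsoid with semi-axes (a, b, b),
    rotating about a short axis, viewed `aspect` radians from the pole.
    Uses the standard result: the projected area of an ellipsoid along
    unit vector u is pi * sqrt(sum of (pairwise semi-axis products * u_i)^2)."""
    ux = math.sin(aspect) * math.cos(phase)
    uy = math.sin(aspect) * math.sin(phase)
    uz = math.cos(aspect)
    return math.pi * math.sqrt((b * b * ux) ** 2 + (a * b * uy) ** 2 + (a * b * uz) ** 2)

a, b = 2.0, 1.0  # hypothetical 2:1 elongation
for name, aspect in [("pole-on", 0.0), ("equator-on", math.pi / 2)]:
    areas = [projected_area(a, b, aspect, 2 * math.pi * k / 360) for k in range(360)]
    print(f"{name}: brightness varies by a factor of {max(areas) / min(areas):.2f}")
# pole-on: factor 1.00 (flat light curve); equator-on: factor 2.00
```

A flat light curve is thus exactly what a spinning elongated body looks like when its rotation pole happens to point at the observer.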
“Another explanation,” said Mark Showalter from the SETI Institute, “is that Ultima may be surrounded by a cloud of dust that obscures its light curve, much the way a comet’s coma often overwhelms the light reflected by its central [core].”
It’s another decent explanation, but as Showalter admitted, a heat source would be required to produce a coma of this magnitude. The Sun is 4 billion miles away from Ultima, and its rays are likely too weak to produce such an effect.
Anne Verbiscer, a researcher at the University of Virginia and a New Horizons assistant project scientist, said Ultima Thule may be surrounded by many tumbling moons. In this multiple-moonlet scenario, each moon would produce its own light curve, but collectively, these curves would appear, in the words of Verbiscer, as a “jumbled superposition of light curves.” From the perspective of New Horizons, it would look like a single, small light curve. The trouble with this theory is that we’ve never actually seen anything like this in the Solar System, so if it’s true, it would represent a new kind of astronomical phenomenon.
It’s a neat mystery, but the conundrum should be resolved in the coming days as New Horizons gets closer to its target. My hope is that it’s an extraterrestrial telecommunications array pointed directly at Earth, but sadly, it’s probably just a dark, dead rock with a rather peculiar spin. | 0.835214 | 3.868768 |