This app uses a simple radiative transfer model for a planet with two leaky* atmospheric layers. It begins by calculating Te, the planet's emission temperature, from the solar constant and the planetary albedo. Te is called the blackbody temperature because it is inferred by fitting a blackbody curve to the observed outbound LWIR radiation.
You can choose any of the 9 planets in our solar system, or create one of your own. The app then uses 5 parameters for the chosen “base” planet: the albedo (α), distance from the sun (r), the extinction coefficients of the two atmospheric layers (ε1, ε2), and the solar constant S0. From these it calculates temperatures and radiative flux densities. You can modify these 5 adjustable parameters from their base values and update the result. A flux diagram is generated showing the incoming short wavelength (SW) and outgoing long wavelength (LW) radiation. Two model runs can be saved and their differences displayed. Runs can also be saved to a .csv file for e-mail export and spreadsheet analysis.
Calculate the “natural” 33 K greenhouse effect (compare Earth with and without an atmosphere), or change the extinction coefficients to see the effect of adding or reducing absorbing gases. Predict what Mars might be like with an atmosphere, or see what would happen if the characteristics of our sun, the albedo, or the planetary orbit change. Our sun’s luminosity will increase ~10% in the next billion years.
This simple model can create hours of fun, but realize it cannot generate the surface temperature (Ts) of Venus. Its >96% CO2 atmosphere would require N = (Ts/Te)^4 − 1 = 110 fully absorbing atmospheric layers to predict its Ts of 737 K. However, the base parameter values for Earth nicely calculate Ts, Te, and T1 (upper troposphere), and the model correctly predicts Te for all the planets.
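The emission-temperature and layer-count arithmetic above can be sketched in a few lines of Python. This is a minimal sketch, not the app's own code: the formulas are the standard zero-dimensional energy-balance relations, and the planetary values used are approximate published figures (assumptions, not taken from the app).

```python
# Sketch of Te = [S0 * (1 - albedo) / (4 * sigma)]^(1/4) and N = (Ts/Te)^4 - 1.
# Planetary values below are approximate published figures (assumptions).
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def emission_temperature(S0, albedo):
    """Blackbody emission temperature Te from solar constant S0 and albedo."""
    return (S0 * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

def fully_absorbing_layers(Ts, Te):
    """Number of fully absorbing layers N such that Ts = Te * (N + 1)^(1/4)."""
    return (Ts / Te) ** 4 - 1.0

Te_earth = emission_temperature(1361.0, 0.30)      # about 255 K
Te_venus = emission_temperature(2601.0, 0.77)      # about 227 K
N_venus = fully_absorbing_layers(737.0, Te_venus)  # about 110 layers
```

Running the Venus numbers reproduces the ~110 fully absorbing layers quoted above, which is why a two-layer model cannot reach a Ts of 737 K.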
Load app, choose a “base” planet, segue with Update.
Click Update to calculate the temperatures and flux densities.
Segue to inspect the flux densities, and/or modify the base parameters to see changes.
Save and compare differences in 2 runs. (C/C0) values are in CO2 equivalents.
Save runs to .csv file for spreadsheet analysis.
Start with a base planet, but remember this 2 layer model cannot accurately predict the surface temperatures of the gas giants or Venus.
Pressing the Back button allows you to refresh your parameter or base planet choices.
Remember: a ~5 K change in Ts resulted in the Earth’s last Ice Age!
For convenience, the Test planet can be used to create your own set of parameters without entering a planet name.
Increasing the solar constant increases all temperatures. Increasing ε does nothing to Te, which depends only on S0, r, and α.
Pressing On/Off & Home takes a flux diagram screenshot.
It is easy to remove a saved data file run (row) after import to spreadsheet.
Atmospheric Model – Version 1.01 12/17/17 – What’s New in this version: Pinch, pan, and double Tap (restore) gestures added to Update and Flux views for better iPhone visibility. Minor bug fix.
RADIATIVE FORCING and CLIMATE SENSITIVITY:
Radiative forcing (ΔF) can be used to estimate the change in surface temperature (ΔTs) arising from that forcing using:
ΔTs = λ × ΔF, where λ is the Climate Sensitivity in K/(W/m2).
Forcing due to an atmospheric greenhouse gas such as CO2 can be expressed as:
ΔF (in W/m2) = 5.35 × ln (C/C0), where C is the CO2 concentration [CO2] and C0 is the initial concentration (in ppm).
For a doubling of [CO2], ΔF = 5.35 × ln(2) = 3.71 W/m2. An IPCC assessment reported that such a forcing (including feedbacks) would result in a ΔTs of 3 ± 1.5 K. Thus the Climate Sensitivity would be λ = (3 ± 1.5)/3.71 = 0.8 ± 0.4 K/(W/m2).
For a single-atmospheric-layer Earth, changing the base value of ε = 0.78 to ε = 0.83 (Δε = 0.05) gives a ~3 K rise in Ts; roughly the equivalent of doubling [CO2] (and a forcing of 3.71 W/m2).
[CO2] increased between the years 1750 (280 ppm) and 2000 (370 ppm). Thus ΔF = 5.35 × ln(370/280) = 1.5 W/m2, and ΔTs = λ × ΔF = 0.8 (K/(W/m2)) × 1.5 (W/m2) = 1.2 K over that timeframe (Δε ~ 0.02 used).
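The forcing and sensitivity arithmetic above fits in a short Python sketch (the numerical values are the ones given in the text; the function name is my own):

```python
import math

def co2_forcing(C, C0):
    """Radiative forcing dF in W/m^2 for [CO2] rising from C0 to C (ppm)."""
    return 5.35 * math.log(C / C0)

LAMBDA = 0.8  # Climate Sensitivity, K/(W/m^2), from the IPCC estimate above

dF_doubling = co2_forcing(2 * 280, 280)   # ~3.71 W/m^2
dF_1750_2000 = co2_forcing(370, 280)      # ~1.5 W/m^2
dTs_1750_2000 = LAMBDA * dF_1750_2000     # ~1.2 K
```

This reproduces both the 3.71 W/m2 doubling forcing and the ~1.2 K warming over 1750–2000 quoted above.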
* Leaky implies ε < 1.
The cigar-shaped interstellar visitor to our solar system known as ‘Oumuamua could be the remnants of a larger body that was torn apart by its host star, according to researchers.
The dark, reddish object that hurtled into our solar system in 2017 and was named after the Hawaiian word for messenger or scout has long puzzled scientists.
Among its peculiarities is the lack of an envelope of gas and dust that comets typically give off as they heat up. Further work by experts suggested the body was accelerated by the loss of water vapour and other gases – as seen with comets but not asteroids. The upshot was that ‘Oumuamua was labelled a “comet in disguise”.
Now scientists say they have shed light on the mystery and addressed the myriad pieces of the ‘Oumuamua puzzle.
They say ‘Oumuamua is an “active asteroid” formed from a body that was torn apart by its parent star and then ejected into interstellar space.
“Most planetary bodies … consist of numerous pieces of rock that have coalesced under the influence of gravity. You could imagine them as sandcastles floating in space,” said Dr Yun Zhang, a co-author of the new study from the Observatoire de la Côte d’Azur in France.
These bodies experience a number of forces as they pass their star.
“A tidal encounter between a planet or small body and a star is a tug-of-war game between the gravitational pull of the star and the self-gravity of the flyby body,” said Zhang, noting that when the body passes too close to the star and enters the tidal disruption region, it can stretch and be torn apart, giving rise to fragments.
Writing in the journal Nature Astronomy, Zhang and Prof Doug Lin of the University of California’s Lick Observatory, report how they used computer models to reveal that such a process could have produced ‘Oumuamua and explain features including its tumbling motion, colour and unusual shape.
Zhang said near and far parts of ‘Oumuamua’s parent body would have been pulled apart from each other in the tidal disruption region, resulting in the formation of elongated fragments, such as ‘Oumuamua, which would be glued together by surface material that melted near the star and froze as it flew on.
Zhang said most of the volatile substances on ‘Oumuamua’s surface would have been lost from heating by the star around which it formed, but some residual water ice could have been preserved below its surface and subsequently heated by our, hotter, sun – explaining its unusual acceleration. “We may call ‘Oumuamua an active asteroid,” said Zhang.
The team say ‘Oumuamua could have formed from either a comet or a planet several times the size of Earth, but the former better explains ‘Oumuamua’s apparent subsurface water ice. The star around which ‘Oumuamua formed, they add, would probably have been similar to our sun but smaller and denser – or possibly a white dwarf.
Zhang says the findings not only scotch again the much-publicised notion that ‘Oumuamua is an alien spacecraft, but offer an efficient way in which asteroidal interstellar objects, previously thought to be rare, can be formed.
What’s more, she says, with objects such as ‘Oumuamua passing through “habitable zones”, such as our own solar system, they may even carry seeds of life.
Dr Alan Jackson, of Arizona State University, who was not involved in the study but has previously carried out research into ‘Oumuamua, welcomed the work.
“The idea of ‘Oumuamua being a fragment of a larger body that was tidally disrupted by passing close to its parent star was suggested by Matija Ćuk in 2018,” he said. “But this is the first work that I have seen that really explores that idea in detail and shows that it might explain how ‘Oumuamua was produced and a lot of its unusual features.”
Beryllium is a chemical element with the symbol Be and atomic number 4. It is a relatively rare element in the universe, usually occurring as a product of the spallation of larger atomic nuclei that have collided with cosmic rays. Within the cores of stars, beryllium is depleted as it is fused into heavier elements. It is a divalent element which occurs naturally only in combination with other elements in minerals. Notable gemstones which contain beryllium include beryl (aquamarine, emerald) and chrysoberyl. As a free element it is a steel-gray, strong, lightweight and brittle alkaline earth metal.
In structural applications, the combination of high flexural rigidity, thermal stability, thermal conductivity and low density (1.85 times that of water) make beryllium metal a desirable aerospace material for aircraft components, missiles, spacecraft, and satellites. Because of its low density and atomic mass, beryllium is relatively transparent to X-rays and other forms of ionizing radiation; therefore, it is the most common window material for X-ray equipment and components of particle detectors. The high thermal conductivities of beryllium and beryllium oxide have led to their use in thermal management applications. When added as an alloying element to aluminium, copper (notably the alloy beryllium copper), iron or nickel beryllium improves many physical properties. Tools made of beryllium copper alloys are strong and hard and do not create sparks when they strike a steel surface. Beryllium does not form oxides until it reaches very high temperatures.
The commercial use of beryllium requires the use of appropriate dust control equipment and industrial controls at all times because of the toxicity of inhaled beryllium-containing dusts that can cause a chronic life-threatening allergic disease in some people called berylliosis.
The mineral beryl, which contains beryllium, has been used at least since the Ptolemaic dynasty of Egypt. In the first century CE, Roman naturalist Pliny the Elder mentioned in his encyclopedia Natural History that beryl and emerald (“smaragdus”) were similar. The Papyrus Graecus Holmiensis, written in the third or fourth century CE, contains notes on how to prepare artificial emerald and beryl.
Early analyses of emeralds and beryls by Martin Heinrich Klaproth, Torbern Olof Bergman, Franz Karl Achard, and Johann Jakob Bindheim always yielded similar elements, leading to the fallacious conclusion that both substances are aluminium silicates. Mineralogist René Just Haüy discovered that both crystals are geometrically identical, and he asked chemist Louis-Nicolas Vauquelin for a chemical analysis.
In a 1798 paper read before the Institut de France, Vauquelin reported that he found a new “earth” by dissolving aluminium hydroxide from emerald and beryl in an additional alkali. The editors of the journal Annales de Chimie et de Physique named the new earth “glucine” for the sweet taste of some of its compounds. Klaproth preferred the name “beryllina” due to the fact that yttria also formed sweet salts. The name “beryllium” was first used by Wöhler in 1828.
Friedrich Wöhler and Antoine Bussy independently isolated beryllium in 1828 by the chemical reaction of metallic potassium with beryllium chloride, as follows:
BeCl2 + 2 K → 2 KCl + Be
Using an alcohol lamp, Wöhler heated alternating layers of beryllium chloride and potassium in a wired-shut platinum crucible. The above reaction immediately took place and caused the crucible to become white hot. Upon cooling and washing the resulting gray-black powder he saw that it was made of fine particles with a dark metallic luster. The highly reactive potassium had been produced by the electrolysis of its compounds, a process discovered 21 years before. The chemical method using potassium yielded only small grains of beryllium from which no ingot of metal could be cast or hammered.
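As a quick check on the reaction above, its mass balance can be worked out with standard molar masses (the figures below are textbook values, not taken from this article):

```python
# BeCl2 + 2 K -> 2 KCl + Be
# Molar masses in g/mol; standard textbook values (assumed, not from the text).
M_Be, M_K, M_Cl = 9.012, 39.098, 35.453

M_BeCl2 = M_Be + 2 * M_Cl     # ~79.9 g/mol of beryllium chloride
M_KCl = M_K + M_Cl            # ~74.6 g/mol of potassium chloride

# Mass is conserved: reactants and products weigh the same per mole of Be.
reactants = M_BeCl2 + 2 * M_K  # ~158.1 g per mole of Be produced
products = 2 * M_KCl + M_Be    # ~158.1 g

# Roughly 8.7 g of potassium are consumed per gram of beryllium produced.
k_per_gram_be = 2 * M_K / M_Be
```

The lopsided potassium requirement helps explain why this route yielded only small grains of beryllium rather than metal that could be cast or hammered.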
The direct electrolysis of a molten mixture of beryllium fluoride and sodium fluoride by Paul Lebeau in 1898 resulted in the first pure (99.5 to 99.8%) samples of beryllium. However, industrial production started only after the First World War. The original industrial involvement included subsidiaries and scientists related to the Union Carbide and Carbon Corporation in Cleveland OH and Siemens & Halske AG in Berlin. In the US, the process was ruled by Hugh S. Cooper, director of The Kemet Laboratories Company. In Germany, the first commercially successful process for producing beryllium was developed in 1921 by Alfred Stock and Hans Goldschmidt.
A sample of beryllium was bombarded with alpha rays from the decay of radium in a 1932 experiment by James Chadwick that uncovered the existence of the neutron. This same method is used in one class of radioisotope-based laboratory neutron sources that produce 30 neutrons for every million α particles.
Beryllium production saw a rapid increase during World War II, due to the rising demand for hard beryllium-copper alloys and phosphors for fluorescent lights. Most early fluorescent lamps used zinc orthosilicate with varying content of beryllium to emit greenish light. Small additions of magnesium tungstate improved the blue part of the spectrum to yield an acceptable white light. Halophosphate-based phosphors replaced beryllium-based phosphors after beryllium was found to be toxic.
Electrolysis of a mixture of beryllium fluoride and sodium fluoride was used to isolate beryllium during the 19th century. The metal’s high melting point makes this process more energy-consuming than corresponding processes used for the alkali metals. Early in the 20th century, the production of beryllium by the thermal decomposition of beryllium iodide was investigated following the success of a similar process for the production of zirconium, but this process proved to be uneconomical for volume production.
Pure beryllium metal did not become readily available until 1957, even though it had been used as an alloying metal to harden and toughen copper much earlier. Beryllium could be produced by reducing beryllium compounds such as beryllium chloride with metallic potassium or sodium. Currently, most beryllium is produced by reducing beryllium fluoride with magnesium. The price on the American market for vacuum-cast beryllium ingots was about $338 per pound ($745 per kilogram) in 2001.
Between 1998 and 2008, the world’s production of beryllium had decreased from 343 to about 200 tonnes. It then increased to 230 tonnes by 2018, of which 170 tonnes came from the United States.
Early precursors of the word beryllium can be traced to many languages, including Latin beryllus; French béry; Ancient Greek βήρυλλος, bērullos, ‘beryl’; Prakrit वॆरुलिय (veruliya); Pāli वेलुरिय (veḷuriya), भेलिरु (veḷiru) or भिलर् (viḷar) – “to become pale”, in reference to the pale semiprecious gemstone beryl. The original source is probably the Sanskrit word वैडूर्य (vaidurya), which is of South Indian origin and could be related to the name of the modern city of Belur. Until c. 1900, beryllium was also known as glucinum or glucinium (with the accompanying chemical symbol “Gl”, or “G”), the name coming from the Ancient Greek word for sweet: γλυκύς, due to the sweet taste of beryllium salts.
The Sun has a concentration of 0.1 parts per billion (ppb) of beryllium. Beryllium has a concentration of 2 to 6 parts per million (ppm) in the Earth’s crust. It is most concentrated in soils, at 6 ppm. Trace amounts of 9Be are found in the Earth’s atmosphere. The concentration of beryllium in sea water is 0.2–0.6 parts per trillion. In stream water, however, beryllium is more abundant, with a concentration of 0.1 ppb.
Beryllium is found in over 100 minerals, but most are uncommon to rare. The more common beryllium containing minerals include: bertrandite (Be4Si2O7(OH)2), beryl (Al2Be3Si6O18), chrysoberyl (Al2BeO4) and phenakite (Be2SiO4). Precious forms of beryl are aquamarine, red beryl and emerald. The green color in gem-quality forms of beryl comes from varying amounts of chromium (about 2% for emerald).
The two main ores of beryllium, beryl and bertrandite, are found in Argentina, Brazil, India, Madagascar, Russia and the United States. Total world reserves of beryllium ore are greater than 400,000 tonnes.
The extraction of beryllium from its compounds is a difficult process due to its high affinity for oxygen at elevated temperatures, and its ability to reduce water when its oxide film is removed. Currently the United States, China and Kazakhstan are the only three countries involved in the industrial-scale extraction of beryllium. Kazakhstan produces Be from a concentrate stockpiled before the breakup of the Soviet Union around 1991. This resource was nearly depleted by the mid-2010s.
Production of beryllium in Russia was halted in 1997, and is planned to resume in the 2020s.
Beryllium is most commonly extracted from the mineral beryl, which is either sintered using an extraction agent or melted into a soluble mixture. The sintering process involves mixing beryl with sodium fluorosilicate and soda at 770 °C (1,420 °F) to form sodium fluoroberyllate, aluminium oxide and silicon dioxide. Beryllium hydroxide is precipitated from a solution of sodium fluoroberyllate and sodium hydroxide in water. Extraction of beryllium using the melt method involves grinding beryl into a powder and heating it to 1,650 °C (3,000 °F). The melt is quickly cooled with water and then reheated to 250 to 300 °C (482 to 572 °F) in concentrated sulfuric acid, mostly yielding beryllium sulfate and aluminium sulfate. Aqueous ammonia is then used to remove the aluminium and sulfur, leaving beryllium hydroxide.
Beryllium hydroxide created using either the sinter or melt method is then converted into beryllium fluoride or beryllium chloride. To form the fluoride, aqueous ammonium hydrogen fluoride is added to beryllium hydroxide to yield a precipitate of ammonium tetrafluoroberyllate, which is heated to 1,000 °C (1,830 °F) to form beryllium fluoride. Heating the fluoride to 900 °C (1,650 °F) with magnesium forms finely divided beryllium, and additional heating to 1,300 °C (2,370 °F) creates the compact metal. Heating beryllium hydroxide forms the oxide, which becomes beryllium chloride when combined with carbon and chlorine. Electrolysis of molten beryllium chloride is then used to obtain the metal.
Because of its low atomic number and very low absorption for X-rays, the oldest and still one of the most important applications of beryllium is in radiation windows for X-ray tubes. Extreme demands are placed on purity and cleanliness of beryllium to avoid artifacts in the X-ray images. Thin beryllium foils are used as radiation windows for X-ray detectors, and the extremely low absorption minimizes the heating effects caused by high intensity, low energy X-rays typical of synchrotron radiation. Vacuum-tight windows and beam-tubes for radiation experiments on synchrotrons are manufactured exclusively from beryllium. In scientific setups for various X-ray emission studies (e.g., energy-dispersive X-ray spectroscopy) the sample holder is usually made of beryllium because its emitted X-rays have much lower energies (≈100 eV) than X-rays from most studied materials.
Low atomic number also makes beryllium relatively transparent to energetic particles. Therefore, it is used to build the beam pipe around the collision region in particle physics setups, such as all four main detector experiments at the Large Hadron Collider (ALICE, ATLAS, CMS, LHCb), the Tevatron and the SLAC. The low density of beryllium allows collision products to reach the surrounding detectors without significant interaction, its stiffness allows a powerful vacuum to be produced within the pipe to minimize interaction with gases, its thermal stability allows it to function correctly at temperatures of only a few degrees above absolute zero, and its diamagnetic nature keeps it from interfering with the complex multipole magnet systems used to steer and focus the particle beams.
Because of its stiffness, light weight and dimensional stability over a wide temperature range, beryllium metal is used for lightweight structural components in the defense and aerospace industries in high-speed aircraft, guided missiles, spacecraft, and satellites, including the James Webb telescope. Several liquid-fuel rockets have used rocket nozzles made of pure beryllium. Beryllium powder was itself studied as a rocket fuel, but this use has never materialized. A small number of extreme high-end bicycle frames have been built with beryllium. From 1998 to 2000, the McLaren Formula One team used Mercedes-Benz engines with beryllium-aluminium-alloy pistons. The use of beryllium engine components was banned following a protest by Scuderia Ferrari.
Mixing about 2.0% beryllium into copper forms an alloy called beryllium copper that is six times stronger than copper alone. Beryllium alloys are used in many applications because of their combination of elasticity, high electrical conductivity and thermal conductivity, high strength and hardness, nonmagnetic properties, as well as good corrosion and fatigue resistance. These applications include non-sparking tools that are used near flammable gases (beryllium nickel), in springs and membranes (beryllium nickel and beryllium iron) used in surgical instruments and high temperature devices. As little as 50 parts per million of beryllium alloyed with liquid magnesium leads to a significant increase in oxidation resistance and decrease in flammability.
The high elastic stiffness of beryllium has led to its extensive use in precision instrumentation, e.g. in inertial guidance systems and in the support mechanisms for optical systems. Beryllium-copper alloys were also applied as a hardening agent in “Jason pistols”, which were used to strip the paint from the hulls of ships.
Beryllium was also used for cantilevers in high performance phonograph cartridge styli, where its extreme stiffness and low density allowed for tracking weights to be reduced to 1 gram, yet still track high frequency passages with minimal distortion.
An earlier major application of beryllium was in brakes for military airplanes because of its hardness, high melting point, and exceptional ability to dissipate heat. Environmental considerations have led to substitution by other materials.
To reduce costs, beryllium can be alloyed with significant amounts of aluminium, resulting in the AlBeMet alloy (a trade name). This blend is cheaper than pure beryllium, while still retaining many desirable properties.
Beryllium mirrors are of particular interest. Large-area mirrors, frequently with a honeycomb support structure, are used, for example, in meteorological satellites where low weight and long-term dimensional stability are critical. Smaller beryllium mirrors are used in optical guidance systems and in fire-control systems, e.g. in the German-made Leopard 1 and Leopard 2 main battle tanks. In these systems, very rapid movement of the mirror is required which again dictates low mass and high rigidity. Usually the beryllium mirror is coated with hard electroless nickel plating which can be more easily polished to a finer optical finish than beryllium. In some applications, though, the beryllium blank is polished without any coating. This is particularly applicable to cryogenic operation where thermal expansion mismatch can cause the coating to buckle.
The James Webb Space Telescope will have 18 hexagonal beryllium sections for its mirrors. Because JWST will face a temperature of 33 K, the mirror is made of gold-plated beryllium, capable of handling extreme cold better than glass. Beryllium contracts and deforms less than glass – and remains more uniform – in such temperatures. For the same reason, the optics of the Spitzer Space Telescope are entirely built of beryllium metal.
Beryllium is non-magnetic. Therefore, tools fabricated out of beryllium-based materials are used by naval or military explosive ordnance disposal teams for work on or near naval mines, since these mines commonly have magnetic fuzes. They are also found in maintenance and construction materials near magnetic resonance imaging (MRI) machines because of the high magnetic fields generated. In the fields of radio communications and powerful (usually military) radars, hand tools made of beryllium are used to tune the highly magnetic klystrons, magnetrons, traveling wave tubes, etc., that are used for generating high levels of microwave power in the transmitters.
Thin plates or foils of beryllium are sometimes used in nuclear weapon designs as the very outer layer of the plutonium pits in the primary stages of thermonuclear bombs, placed to surround the fissile material. These layers of beryllium are good “pushers” for the implosion of the plutonium-239, and they are good neutron reflectors, just as in beryllium-moderated nuclear reactors.
Beryllium is also commonly used in some neutron sources in laboratory devices in which relatively few neutrons are needed (rather than having to use a nuclear reactor, or a particle accelerator-powered neutron generator). For this purpose, a target of beryllium-9 is bombarded with energetic alpha particles from a radioisotope such as polonium-210, radium-226, plutonium-238, or americium-241. In the nuclear reaction that occurs, a beryllium nucleus is transmuted into carbon-12, and one free neutron is emitted, traveling in about the same direction as the alpha particle was heading. Such alpha decay driven beryllium neutron sources, named “urchin” neutron initiators, were used in some early atomic bombs. Neutron sources in which beryllium is bombarded with gamma rays from a gamma decay radioisotope, are also used to produce laboratory neutrons.
Beryllium is also used in fuel fabrication for CANDU reactors. The fuel elements have small appendages that are resistance brazed to the fuel cladding using an induction brazing process with Be as the braze filler material. Bearing pads are brazed in place to prevent fuel bundle to pressure tube contact, and inter-element spacer pads are brazed on to prevent element to element contact.
Beryllium is also used at the Joint European Torus nuclear-fusion research laboratory, and it will be used in the more advanced ITER to condition the components which face the plasma. Beryllium has also been proposed as a cladding material for nuclear fuel rods, because of its good combination of mechanical, chemical, and nuclear properties. Beryllium fluoride is one of the constituent salts of the eutectic salt mixture FLiBe, which is used as a solvent, moderator and coolant in many hypothetical molten salt reactor designs, including the liquid fluoride thorium reactor (LFTR).
The low weight and high rigidity of beryllium make it useful as a material for high-frequency speaker drivers. Because beryllium is expensive (many times more than titanium), hard to shape due to its brittleness, and toxic if mishandled, beryllium tweeters are limited to high-end home, pro audio, and public address applications. Some high-fidelity products have been fraudulently claimed to be made of the material.
Some high-end phonograph cartridges used beryllium cantilevers to improve tracking by reducing mass.
Beryllium is a p-type dopant in III-V compound semiconductors. It is widely used in materials such as GaAs, AlGaAs, InGaAs and InAlAs grown by molecular beam epitaxy (MBE). Cross-rolled beryllium sheet is an excellent structural support for printed circuit boards in surface-mount technology. In critical electronic applications, beryllium is both a structural support and heat sink. The application also requires a coefficient of thermal expansion that is well matched to the alumina and polyimide-glass substrates. The beryllium-beryllium oxide composite “E-Materials” have been specially designed for these electronic applications and have the additional advantage that the thermal expansion coefficient can be tailored to match diverse substrate materials.
Beryllium oxide is useful for many applications that require the combined properties of an electrical insulator and an excellent heat conductor, with high strength and hardness, and a very high melting point. Beryllium oxide is frequently used as an insulator base plate in high-power transistors in radio frequency transmitters for telecommunications. Beryllium oxide is also being studied for use in increasing the thermal conductivity of uranium dioxide nuclear fuel pellets. Beryllium compounds were used in fluorescent lighting tubes, but this use was discontinued because of the disease berylliosis which developed in the workers who were making the tubes.
Beryllium is a component of several dental alloys.
Beryllium is a health and safety issue for workers. Exposure to beryllium in the workplace can lead to a sensitization immune response and can over time develop chronic beryllium disease (CBD). The National Institute for Occupational Safety and Health (NIOSH) in the United States researches these effects in collaboration with a major manufacturer of beryllium products. The goal of this research is to prevent sensitization and CBD by developing a better understanding of the work processes and exposures that may present a potential risk for workers, and to develop effective interventions that will reduce the risk for adverse health effects. NIOSH also conducts genetic research on sensitization and CBD, independently of this collaboration. The NIOSH Manual of Analytical Methods contains methods for measuring occupational exposures to beryllium.
Approximately 35 micrograms of beryllium is found in the average human body, an amount not considered harmful. Beryllium is chemically similar to magnesium and therefore can displace it from enzymes, which causes them to malfunction. Because Be2+ is a highly charged and small ion, it can easily get into many tissues and cells, where it specifically targets cell nuclei, inhibiting many enzymes, including those used for synthesizing DNA. Its toxicity is exacerbated by the fact that the body has no means to control beryllium levels, and once inside the body the beryllium cannot be removed. Chronic berylliosis is a pulmonary and systemic granulomatous disease caused by inhalation of dust or fumes contaminated with beryllium; either large amounts over a short time or small amounts over a long time can lead to this ailment. Symptoms of the disease can take up to five years to develop; about a third of patients with it die and the survivors are left disabled. The International Agency for Research on Cancer (IARC) lists beryllium and beryllium compounds as Category 1 carcinogens. In the US, the Occupational Safety and Health Administration (OSHA) has designated a permissible exposure limit (PEL) in the workplace with a time-weighted average (TWA) of 2 µg/m3 and a constant exposure limit of 5 µg/m3 over 30 minutes, with a maximum peak limit of 25 µg/m3. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of constant 500 ng/m3. The IDLH (immediately dangerous to life and health) value is 4 mg/m3.
The toxicity of finely divided beryllium (dust or powder, mainly encountered in industrial settings where beryllium is produced or machined) is very well-documented. Solid beryllium metal does not carry the same hazards as airborne inhaled dust, but any hazard associated with physical contact is poorly documented. Workers handling finished beryllium pieces are routinely advised to handle them with gloves, both as a precaution and because many if not most applications of beryllium cannot tolerate residue of skin contact such as fingerprints.
Acute beryllium disease in the form of chemical pneumonitis was first reported in Europe in 1933 and in the United States in 1943. A survey found that about 5% of workers in plants manufacturing fluorescent lamps in 1949 in the United States had beryllium-related lung diseases. Chronic berylliosis resembles sarcoidosis in many respects, and the differential diagnosis is often difficult. It killed some early workers in nuclear weapons design, such as Herbert L. Anderson.
Beryllium may be found in coal slag. When the slag is formulated into an abrasive agent for blasting paint and rust from hard surfaces, the beryllium can become airborne and become a source of exposure.
Early researchers tasted beryllium and its various compounds for sweetness in order to verify its presence. Modern diagnostic equipment no longer necessitates this highly risky procedure and no attempt should be made to ingest this highly toxic substance. Beryllium and its compounds should be handled with great care and special precautions must be taken when carrying out any activity which could result in the release of beryllium dust (lung cancer is a possible result of prolonged exposure to beryllium-laden dust). Although the use of beryllium compounds in fluorescent lighting tubes was discontinued in 1949, potential for exposure to beryllium exists in the nuclear and aerospace industries and in the refining of beryllium metal and melting of beryllium-containing alloys, the manufacturing of electronic devices, and the handling of other beryllium-containing material.
A successful test for beryllium in air and on surfaces has recently been developed and published as an international voluntary consensus standard, ASTM D7202. The procedure uses dilute ammonium bifluoride for dissolution and fluorescence detection with beryllium bound to sulfonated hydroxybenzoquinoline, allowing detection at concentrations up to 100 times lower than the recommended limit for beryllium in the workplace. Fluorescence increases with increasing beryllium concentration. The new procedure has been successfully tested on a variety of surfaces and is effective for the dissolution and ultratrace detection of refractory beryllium oxide and siliceous beryllium (ASTM D7458).
The idea of linking two orbiting satellites via an interferometer to continuously measure their separation with ultraprecision may seem incredible, but it is being successfully done now.
It’s easy to get jaded about technology advances. All of us – and especially the non-technical public – are so inundated with truly impressive feats that soon many, if not all, become “so what, that’s no big deal” events. When you create technical “miracles” on a regular basis, it’s easy for the audience to have little regard for the work and persistence it takes to make them happen. There’s an old engineering maxim that the last 10% of a project takes 90% of the effort, and that’s largely true (those percentages do vary among projects, of course).
Think about basic land-based measurement and surveying. Back when George Washington was a surveyor, it required dragging and stretching chains across the terrain plus laborious calculations (and yet many of the results are amazingly accurate), but now it is done quickly and painlessly using GPS-guided, laser-based rangefinders plus complex algorithms executed nearly effortlessly. This simultaneous improvement of many orders of magnitude in both accuracy and speed has not come quickly or easily, of course, even if it seems that way.
Despite these truly incredible advances, scientists and engineers are striving for more and better, and improvements require almost unimaginably sophisticated and complex instrumentation. This is demonstrated, for example, by the Gravity Recovery and Climate Experiment (GRACE), which implements laser-based interferometry between orbiting satellites and operated continuously without interruption for over 55 days (about 850 orbits) during its first months of operation. I’ll be clear: this is not an Earth-based link to an orbiting spacecraft. Instead, its laser-ranging interferometer (LRI) directly links two satellites about 220 km apart to allow precise and real-time measurements. The LRI weighs just 25 kg and requires 35 W, an impressive pair of “big picture” specifications for these two common parameters.
Why do this project? Among other reasons, it allows for ultra-precise assessment of orbital changes which, in turn, are largely due to variations in the Earth’s gravitational field, which is far from uniform and in fact can change (earthquakes, for example, result in shifts of mass). Another application of this technology will be the Laser Interferometer Space Antenna (LISA), which will detect gravitational waves at much lower frequencies and higher sensitivity than the existing ground-based Laser Interferometer Gravitational-Wave Observatory (LIGO), which achieved such stunning success in recent years.
The five-degree-of-freedom, two-way laser link between the spacecraft succeeded in linking and synchronizing on the first attempt. Every element of this system represents truly cutting-edge technology, and beyond. The LRI’s laser output power is a mere 25 mW at 1064.5 nm, and both satellites carry identical optical cavities, with one of them stabilizing the frequency of the laser on the “master” satellite. The beam-steering mirror steers a beam with a 140 μrad half-cone angle and has a range of several milliradians in two axes and a speed of greater than 100 Hz. The LRI transmit beam must point to the other, distant spacecraft with better than 100 μrad accuracy to ensure that enough light – just a few nanowatts are needed – arrives at the distant receiver’s aperture.
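The numbers in this paragraph can be sanity-checked with a one-line link budget. The receive-aperture radius below is an assumption (the article doesn't give it); the transmit power, half-cone angle, and separation come from the figures above:

```python
import math

P_tx = 25e-3        # transmit power, W (25 mW, from the text)
theta = 140e-6      # beam half-cone angle, rad (from the text)
L = 220e3           # inter-satellite separation, m (from the text)
r_aperture = 8e-3   # receive aperture radius, m -- assumed for illustration

# Beam radius at the far spacecraft, treating the beam as a uniform cone.
r_spot = theta * L                       # ~31 m
spot_area = math.pi * r_spot**2

# Power intercepted by the distant aperture (aperture << spot, so flux ~ uniform).
flux = P_tx / spot_area                  # W/m²
P_rx = flux * math.pi * r_aperture**2    # W

print(f"spot radius at receiver: {r_spot:.1f} m")
print(f"received power: {P_rx * 1e9:.1f} nW")
```

Even with this crude uniform-cone model, the received power lands right in the "few nanowatts" range quoted above, which shows why sub-100-μrad pointing is essential.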
The high-level block diagram of the core of the LRI design (one in each satellite) is complicated (Figure 1); I can’t imagine what a detailed system block diagram or an electronic, optical, and mechanical schematic would show. The software must execute a large number of procedures ranging from basic beam management to high-level data corrections based on “distortion” as described by general relativity.
Figure 1 Functional overview of the LRI units on both spacecraft. The LRI units include the laser, cavity, laser ranging processor (LRP), optical bench electronics (OBE), triple mirror assembly (TMA), and optical bench assembly (OBA) with a fast steering mirror (FSM). (Image source: Physical Review Letters)
I won’t repeat details of the design, implementation, operation, and test results, including noise and error analysis or confidence level in the results. It is all discussed in the detailed paper “In-Orbit Performance of the GRACE Follow-on Laser Ranging Interferometer,” published in the prestigious Physical Review Letters of the American Physical Society, with over 50 authors from 13 universities and commercial organizations. Just managing this project must have been a challenge in addition to the technology and implementation.
Advances like this are extremely impressive but don’t get much attention due to their complexity, esoteric nature, and hard-to-describe impact. In contrast, the earth-based Laser Interferometer Gravitational-Wave Observatory (LIGO) project did get a lot of attention – plus a well-deserved Nobel Prize for the project’s lead physicists (see “https://www.laserfocusworld.com/test-measurement/research/article/16569615/ligo-scientists-receive-2017-nobel-prize-in-physics”); I think the widespread attention was partially due to the crisp headlines it spurred, such as “Experiment finally confirms Einstein’s prediction of gravity waves” – you can’t beat that for keyword hits!
Do you follow any of these extreme-precision, sophisticated physics and electro-optical projects? Are there some that have impressed you, or ones that you feel are overrated? Are they too far removed from your area of interest? | 0.882143 | 3.461335 |
When is there going to be another mission to look for life on Mars? It is a question I have been asked time and again since Christmas Day 2003, when my team lost contact with our Beagle 2 lander. It was due to call home at 0528 GMT that morning, after landing on the surface of Mars, but there was only silence.
Beagle 2 was carrying an instrument that I believe could have detected traces of living things on the Red Planet. None of the three landers that NASA has since successfully sent to Mars has had the ability to do anything similar. NASA initially agreed to work with the European Space Agency (ESA) on a mission to send two rovers to search for life, planned for 2018. But it announced this year that budget constraints would require a rethink that could mean major reductions in these vehicles’ payload and capabilities. ESA itself initially promised there would be a follow-up mission to search for life as soon as 2007, but that date has slipped many times.
Fortunately, none of this means we have to give up on looking for evidence of life on Mars. We have a remarkable resource in the form of fragments of Martian rock blasted from the planet’s surface by an asteroid impact, which have ended up landing on Earth many thousands of years later. We know of more than 90 examples of such Martian meteorites, although some come from the same object that disintegrated in the atmosphere. Many of them have been recovered from the deserts of north Africa, including one called NWA 2975 – a piece of which New Scientist is offering as a prize.
We know these fragments come from Mars because all meteorites contain clues about their origins. Buried inside some are small pockets of glass formed during the asteroid impact, which can contain traces of gas. Measurements of the composition of this gas match the analysis of the Martian atmosphere by NASA’s Viking landers in the 1970s – a discovery that provided the first confirmation that some meteorites found on Earth really do come from Mars.
Another indicator of a meteorite’s origin is the relative abundance of three isotopes of oxygen – oxygen-16, oxygen-17 and oxygen-18 – in the molecules of silicate they contain. Because the relative abundance of these isotopes varies throughout the solar system, it is possible to establish whether a meteorite comes from the moon, the asteroid belt or Mars. At the Open University, we pioneered a method in which we use a laser to melt the silicate minerals in the presence of chemicals that liberate oxygen, and then make very precise isotope measurements. Meteorites from Mars have a slight excess in the abundance of oxygen-17. This is how we authenticated the prize meteorite.
Martian meteorites can also tell us about the existence of water on Mars. That’s because they contain minerals such as carbonates, which are likely to have been precipitated from water. Orbiting spacecraft have never managed to locate these minerals in copious quantities, though NASA’s Phoenix lander did find various salts that might have originated in a similar way.
There is plenty of other evidence that water, the key ingredient needed by life, has been present on Mars for a long time. NASA’s orbiters and ESA’s Mars Express have found surface features that can only have been made by large quantities of water perhaps 3 billion years ago. This is consistent with the age of the carbonate deposits in Mars meteorites, which have been dated using radioisotopes to have originated as early as 3.9 billion years ago.
Studies of Martian meteorites have outpaced the findings made by NASA’s rovers, too. In 1978, Robert Hutchison at the Natural History Museum in London found evidence in a Martian meteorite of minerals deposited by water – more than 25 years before similar evidence was uncovered by Steve Squyres and his team at NASA, thanks to the Spirit and Opportunity rovers.
Meteorites have, of course, given rise to the most widely publicised suggestion that life may once have thrived on the Red Planet. The rock, known as ALH 84001, came down in Allan Hills, Antarctica, around 13,000 years ago. In August 1996, US president Bill Clinton announced what he called “stupendous” news: Everett Gibson and his colleagues at NASA had discovered what appeared to be a nanometre-sized fossil within ALH 84001.
But this was not the first time a Martian meteorite had yielded evidence of life. In 1989, at the Open University, we made a remarkable discovery in another meteorite also from Antarctica, called EETA 79001. Within the carbonate present in the meteorite, we found a measurable proportion of organic material, typical of that left by the remains of living things on Earth. We stopped short of saying we had discovered life on Mars, preferring, like good scientists, to remain sceptical. In our paper, we merely said that if we are correct “the implications are obvious” (Nature, vol 340, p 220). Later, in the furore surrounding ALH 84001, I found myself being described in the press as “the man who missed life on Mars”.
After the fuss had died down, geologists and biologists began to question the validity of ALH 84001’s supposed fossil and the organic material we had reported. Some preferred to believe that the fossil was an artefact and that the organic material was contamination picked up after the meteorite landed in Antarctica. Nevertheless I was convinced our findings were real. It was this that provided the impetus for the Beagle 2 mission.
Since 1996 we have analysed several more carbonate deposits in EETA 79001. The organic materials are confined to one part of the rock, which would seem to exclude the possibility of contamination because there is no obvious way extraneous carbon could find its way into just one bit of the rock and not others. In any case, Antarctic melt water contains such small amounts of carbon that vast amounts of water would need to have percolated through the meteorite to accumulate the organics we found.
It is eight years since Beagle 2 didn’t call home and it looks like being at least that long until another lander sends back any information about life on Mars. So we will just have to wait – unless, of course, some hitherto undiscovered secret is found hiding in another meteorite. Maybe there is one lurking in the piece of Mars that you could win.
Starting from his miraculous year of 1905, Einstein has dominated physics with his astonishing insights on space and time, and on mass and gravity. True, there have been other physicists who, with their own brilliance, have shaped and moved modern physics in directions that even Einstein couldn’t have foreseen; and I don’t mean to trivialize either their intellectual achievements or our giant leaps in physics and technology. But all of modern physics, even the bizarre reality of quantum mechanics, which Einstein himself couldn’t quite come to terms with, is built on his insights. It is on his shoulders that those who came after him have stood for over a century now.
One of the brighter ones among those who came after Einstein cautioned us to guard against our blind faith in the infallibility of old masters. Taking my cue from that insight, I, for one, think that Einstein’s century is behind us now. I know, coming from a non-practicing physicist, who sold his soul to the finance industry, this declaration sounds crazy. Delusional even. But I do have my reasons to see Einstein’s ideas go.
Let’s start with this picture of a dot flying along a straight line (on the ceiling, so to speak). You are standing at the centre of the line in the bottom (on the floor, that is). If the dot was moving faster than light, how would you see it? Well, you wouldn’t see anything at all until the first ray of light from the dot reaches you. As the animation shows, the first ray will reach you when the dot is somewhere almost directly above you. The next rays you would see actually come from two different points in the line of flight of the dot — one before the first point, and one after. Thus, the way you would see it is, incredible as it may seem to you at first, as one dot appearing out of nowhere and then splitting and moving rather symmetrically away from that point. (It is just that the dot is flying so fast that by the time you get to see it, it is already gone past you, and the rays from both behind and ahead reach you at the same instant in time. Hope that statement makes it clearer, rather than more confusing.)
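The arithmetic behind this illusion is easy to check numerically. In the sketch below (units where c = 1; the dot speed of 2c and the ceiling height are arbitrary choices for illustration), each emission time maps to an arrival time at the observer, and for a superluminal dot any arrival time after "first light" corresponds to two distinct emission points:

```python
import math

c = 1.0        # speed of light (natural units)
v = 2.0 * c    # dot speed -- superluminal, chosen for illustration
h = 1.0        # height of the ceiling above the observer

def arrival_time(t_emit):
    """Observer clock time at which light emitted at t_emit arrives.
    The dot is at x = v * t_emit, passing directly overhead at t_emit = 0."""
    x = v * t_emit
    return t_emit + math.sqrt(x * x + h * h) / c

# Scan emission times and find the first light to reach the observer.
ts = [i * 1e-4 - 2.0 for i in range(40001)]        # t_emit in [-2, 2]
arrivals = [(arrival_time(t), t) for t in ts]
t_first = min(a for a, _ in arrivals)              # analytically sqrt(3)/2 for v = 2c

# Pick an arrival time a bit after first light: two emission times map onto it,
# one before and one after the point where the dot first appears.
target = t_first + 0.3
matches = [t for a, t in arrivals if abs(a - target) < 5e-4]
early, late = min(matches), max(matches)
print(f"first light arrives at t = {t_first:.3f}")
print(f"arrival t = {target:.3f} sees the dot at x = {v * early:.2f} and x = {v * late:.2f}")
```

As the arrival time increases, the two apparent images drift apart from the point of first appearance, which is exactly the splitting described above.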
Why did I start with this animation of how the illusion of a symmetric object can happen? Well, we see a lot of active symmetric structures in the universe. For instance, look at this picture of Cygnus A. There is a “core” from which seem to emanate “features” that float away to the “lobes.” Doesn’t it look remarkably similar to what we would see based on the animation above? There are other examples in which some feature points or knots seem to move away from the core where they first appear. We could come up with a clever model based on superluminality and how it would create illusionary symmetric objects in the heavens. We could, but nobody would believe us — because of Einstein. I know this — I tried to get my old physicist friends to consider this model. The response is always some variant of this, “Interesting, but it cannot work. It violates Lorentz invariance, doesn’t it?” LV being physics talk for Einstein’s insistence that nothing should go faster than light. Now that neutrinos can violate LV, why not me?
Of course, if it was only a qualitative agreement between symmetric shapes and superluminal celestial objects, my physics friends are right in ignoring me. There is much more. The lobes in Cygnus A, for instance, emit radiation in the radio frequency range. In fact, the sky as seen from a radio telescope looks materially different from what we see from an optical telescope. I could show that the spectral evolution of the radiation from this superluminal object fitted nicely with AGNs and another class of astrophysical phenomena, hitherto considered unrelated, called gamma ray bursts. In fact, I managed to publish this model a while ago under the title, “Are Radio Sources and Gamma Ray Bursts Luminal Booms?”.
You see, I need superluminality. Einstein being wrong is a pre-requisite of my being right. So it is the most respected scientist ever vs. yours faithfully, a blogger of the unreal kind. You do the math. 🙂
Such long odds, however, have never discouraged me, and I always rush in where the wiser angels fear to tread. So let me point out a couple of inconsistencies in SR. The derivation of the theory starts off by pointing out the effects of light travel time in time measurements. And later on in the theory, the distortions due to light travel time effects become part of the properties of space and time. (In fact, light travel time effects will make it impossible to have a superluminal dot on a ceiling, as in my animation above — not even a virtual one, where you take a laser pointer and turn it fast enough that the laser dot on the ceiling would move faster than light. It won’t.) But, as the theory is understood and practiced now, the light travel time effects are to be applied on top of the space and time distortions (which were due to the light travel time effects to begin with)! Physicists turn a blind eye to this glaring inconsistency because SR “works” — as I made very clear in my previous post in this series.
Another philosophical problem with the theory is that it is not testable. I know, I alluded to a large body of proof in its favor, but fundamentally, the special theory of relativity makes predictions about a uniformly moving frame of reference in the absence of gravity. There is no such thing. Even if there was, in order to verify the predictions (that a moving clock runs slower as in the twin paradox, for instance), you have to have acceleration somewhere in the verification process. Two clocks will have to come back to the same point to compare time. The moment you do that, at least one of the clocks has accelerated, and the proponents of the theory would say, “Ah, there is no problem here, the symmetry between the clocks is broken because of the acceleration.” People have argued back and forth about such thought experiments for an entire century, so I don’t want to get into it. I just want to point out that the theory by itself is untestable, which should also mean that it is unprovable. Now that there is direct experimental evidence against the theory, maybe people will take a closer look at these inconsistencies and decide that it is time to say bye-bye to Einstein.
NGC 891 is an unbarred spiral galaxy about 30 million light-years away, that lies in the constellation Andromeda. The galaxy is of scientific interest due to a supernova that was observed in 1986; it also served as the first light of the Large Binocular Telescope. The galaxy appears edge-on in this image, taken November 26, 1916.
Double Cluster (NGC 869 & 884)
The Double Cluster is a group of two open clusters that lie close to each other in the sky. The clusters are thought to be very massive, each containing thousands of solar masses of stars, and very young - only about 12 million years old. Both clusters are about 7500 light-years away.
M46 is an open cluster first recorded by Charles Messier in 1771. It is an example of a smaller open cluster - thought to contain about 500 stars and lies about 5000 light-years away. The planetary nebula NGC 2438 (the blurry object slightly larger than the stars in the photo) appears to lie within the cluster, but it is probably a more distant object.
The Snake Nebula (Barnard 72) is a dark nebula in the constellation Ophiuchus. While it does not glow on its own, it is a collection of gas and dust dense enough to block out nearly all the light from stars that are behind it. Often dark nebulae are responsible for star formation and other astronomically interesting phenomena. Taken July 4, 1921, through the 36-inch refractor at the Lick Observatory.
Comet Morehouse was a bright comet discovered by Daniel Walter Morehouse and was first observed in 1908. Comet Morehouse is unique in that it first formed its tail while it was still far from the Sun, about 2 AU away, and sprouted as many as six tails during its pass by the Sun. Comet Morehouse is believed to be a non-periodic comet; if it is a periodic comet, then its orbit is very large and it will likely not return for millions of years.
Solar prominences are glowing hot gas that extend far away from the photosphere -- the sun's surface. Prominences are large, often several times the size of Earth, and can last anywhere from a day to several weeks. The center of this image is black since the light from the Sun's disk is blocked - which allows astronomers to see fainter features. Taken August 13, 1908 from the Yerkes Observatory.
Halley's comet is a relatively short period comet which takes about 75 years to orbit the Sun. Its orbit is very elliptical. At its closest approach, it passes closer to the Sun than Venus, whereas its farthest point lies beyond Neptune. The image was taken the last time it was visible, on May 7th, 1986 from Fuertes Observatory. Comet Halley is scheduled to next be visible in 2061.
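The caption's claims about the orbit can be checked with Kepler's third law. Assuming the 75-year period from the text and Halley's well-known orbital eccentricity of about 0.967 (the eccentricity is not stated above), the orbital extremes come out as:

```python
P = 75.0    # orbital period in years (from the text)
e = 0.967   # orbital eccentricity of Halley's comet (assumed, well-known value)

# Kepler's third law in solar units: a^3 = P^2, with a in AU and P in years.
a = P ** (2.0 / 3.0)
perihelion = a * (1.0 - e)   # closest approach to the Sun
aphelion = a * (1.0 + e)     # farthest point from the Sun

print(f"semi-major axis: {a:.1f} AU")
print(f"perihelion: {perihelion:.2f} AU (Venus orbits at ~0.72 AU)")
print(f"aphelion: {aphelion:.1f} AU (Neptune orbits at ~30 AU)")
```

Perihelion falls inside Venus's orbit and aphelion beyond Neptune's, consistent with the caption.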
Andromeda Galaxy (M31)
The Andromeda Galaxy is the largest galaxy in our galactic neighborhood, and is thought to contain about a trillion stars -- roughly twice as many as the Milky Way. It is also roughly twice the size of our galaxy, stretching about 220,000 light-years in diameter. The blurry object to the upper left of the galaxy is M32, and the one to the lower right is M110. Both are dwarf elliptical galaxies and are satellites of M31.
Whirlpool Galaxy (M51)
M51, the larger spiral galaxy in this image, is about 20 million light-years away and about 1/3 the size of the Milky Way. The smaller galaxy visible in this image is NGC 5195; it is a dwarf elliptical galaxy that is interacting with the Whirlpool Galaxy, and is currently connected to it by a tidal bridge.
M81 is a spiral galaxy that lies about 12 million light-years away from Earth. It has an active galactic nucleus and is suspected to harbor a black hole weighing about 70 million solar masses. It is one of the brighter galaxies in the night sky, visible with only binoculars in dark conditions.
Pinwheel Galaxy (M101)
The Pinwheel Galaxy is a large galaxy, about twice the size of the Milky Way. However, due to its distance, about 21 million light-years, it is difficult to see except on the darkest of nights. The galaxy is believed to be asymmetrical due to tidal interactions with other galaxies.
The Sun's Surface
This is an image of sunspots on the surface of the Sun. Sunspots are slightly cooler regions of the Sun that appear dark in comparison to their hotter surroundings. The vertical lines in the image are a spectrum of calcium vapor overlaid on the image. Taken August 10, 1917 at Yerkes Observatory.
The Great Sun-Spot of 1905
This image was taken of a particularly large sunspot in July 1905, during that year's solar maximum; the spot was measured to be over 95,000 miles (152,000 km) in length. It later fragmented over a period of months.
The Waning Moon
This image was taken of the waning moon when it was 20 days into its 28 day cycle. It was taken at Fuertes Observatory on April 7, 1923.
The Waxing Moon
This image was taken of the waxing moon when it was 9 days into its 28 day cycle. It was taken at Fuertes Observatory on March 26, 1923.
Sombrero Galaxy (M104)
The Sombrero Galaxy is a bright spiral galaxy in the constellation Virgo, which is about 31 million light-years from Earth. Here the galaxy appears like a flat disk because it is viewed edge-on from earth.
Orion Nebula (M42)
The Orion Nebula is one of the brightest nebulae in the sky, often visible to the naked eye even from relatively light polluted conditions. At only 1300 light-years away, it is one of the closest regions to earth experiencing massive star formation, and is estimated to be about 25 light years across. This image was a 3-hr exposure taken November 19, 1920.
The Veil Nebula is a supernova remnant from a star that exploded between 3000 and 6000 B.C. The nebula is one of the brightest features in the x-ray sky, and recent spectroscopic measurements indicate the presence of oxygen, sulfur, and hydrogen in the remnant.
Trifid Nebula (M20)
The Trifid Nebula is a combination of a nebula and an open cluster of stars, the nebula being responsible for the dark gas and the cluster being responsible for the bright internal regions. The nebula is about 5200 light-years away and has been used by astronomers to study the birth of stars.
M22 is one of the brightest globular clusters in the night sky; it lies in the constellation Sagittarius. The globular contains about 70,000 stars and is about 50 light-years across. It is also relatively close to earth, lying only 10,000 light-years away. The cluster is actually the brightest globular visible from most northern latitudes; however, it never rises very high in the sky and so appears less impressive than other dimmer clusters.
Jupiter and Saturn
These images show changes in the appearance of Jupiter and Saturn over time. On the left we see Saturn's rings disappear over a period of about a decade. The rightmost image shows Jupiter's clouds evolving over time in December 1917. Both images were taken at the Lowell Observatory.
Comet Halley from Fuertes
Comet Halley is the dim star-like object at the end of the arrow. Comet Giacobini-Zinner is on the lower right, while NGC 2174 is on the upper right. Taken at Fuertes Observatory, September 14, 1985. This image was taken when Comet Halley was very far from the Sun; consequently, it was much dimmer than it was at close approach and did not have a visible tail.
These images show changes in solar prominences over a period of only 35 minutes. The size of Earth is illustrated by the black dot. The prominences are the large bright features extending outwards from the dark disk -- the Sun's surface, which is blocked in order to see the prominences. Some prominences can be as large as the planet Jupiter.
A close-up of features believed in the early twentieth century to be on Mars, which were labeled "canals". Early astronomers believed they saw canals on Mars and hypothesized that they were used to bring water from Mars' polar regions to the drier equator. Today it is known that these features do not exist and their observation was a combination of optical illusions and poor optics.
Milky Way Star Field
Star-Cloud in Scutum. 6in Bruce telescope. Exposure 2h40m, taken on April 20 1904
Milky Way Star Field
Star Cloud and Black Holes in Sagittarius. Taken on July 31 1905, near 8h7m, -18 20', through the 10 inch Bruce Telescope. Exposure 4h30m
Cat's Eye Nebula
The Cat's Eye Nebula (NGC 6543) is a planetary nebula that appears here as a bull's eye pattern of many shells around the center star. While each shell looks 2-D, it is actually a spherical bubble seen projected onto the sky, which is why it appears bright along its outer edge.
The Helix Nebula is one of the closest planetary nebulae to Earth, and is only about 700 light-years away. It is estimated to be about 2 light-years across, and 10 thousand years old. Strictly speaking, the Helix Nebula should appear very large in the sky, since its diameter is about half that of the full moon. However, since it is an extremely dim object, it is often not visible and easily overlooked.
Ring Nebula (M57)
The Ring Nebula is a planetary nebula that is about one light-year across and about 20,000 light years away. The greenish interior of the nebula is caused by ionized oxygen, which produces this color only in conditions of very low density.
Uranus And Neptune
On the left is an image of the planet Uranus, taken by the Voyager 2 spacecraft. It orbits about 1.8 billion miles from the Sun and is roughly 14 times the mass of Earth. Uranus is often covered by thick haze and can appear featureless. On the right is an image of Neptune, also taken by Voyager 2. It orbits about 2.8 billion miles from the Sun and weighs about 17 Earths. The Great Dark spot can be seen near Neptune's left rim. Unlike the storms on Earth, the dark spot is a region of high pressure, and it lasted a few years.
Pinwheel Galaxy (M101)
The Pinwheel Galaxy is located near the Big Dipper in the sky, but it is extremely faint. It is about 21 million light-years from Earth. This image is composed of 51 individual Hubble exposures, in addition to elements from images from ground-based photos, and is one of the most detailed images of the galaxy.
Whirlpool Galaxy (M51)
The Whirlpool Galaxy was first recorded by Charles Messier in 1773; it is located 31 million light-years from Earth in the constellation Canes Venatici. It is fairly dim and is difficult to see except on very dark nights. The Whirlpool galaxy’s beautiful face-on view and closeness to Earth allow astronomers to study a classic spiral galaxy’s structure and its star-forming processes.
The Antennae Galaxies are two galaxies undergoing a starburst due to their collision. The original centers of the galaxies can be seen in the leftmost image as two yellow cores, surrounded by blue gas and dust. It is expected that in about 400 million years, the galaxies will finally fuse to form a giant elliptical galaxy.
Sombrero Galaxy (M104)
This image is a compilation of two different images of the Sombrero Galaxy. The above image was taken in visible light by the Hubble Space Telescope and shows the galaxy as it would appear to the naked eye. The lower image is in infrared light and is a compilation of images from the Hubble and Spitzer space telescopes, with starlight subtracted so that the dust in the galaxy is more visible.
This image of Pluto was taken by NASA’s New Horizons spacecraft, on July 13, 2015, when the spacecraft was 476,000 miles away from the planet. It was the last and most detailed image sent to Earth before the spacecraft’s closest approach to Pluto on July 14. Clearly visible in the image is the large, bright beige "heart" which measures approximately 1,000 miles (1,600 kilometers) across.
This is a composite image of Centaurus A, revealing the lobes and jets emanating from the active galaxy’s central black hole. This is a composite of images obtained with three instruments, operating at different wavelengths -- a submillimeter image in orange, x-ray imagery from the Chandra satellite, and a visible light image from the MPG/ESO telescope in La Silla, Chile.
Saturn's Polar Hexagon
The image was taken with the Cassini spacecraft's wide-angle camera on Nov. 27, 2012 using a spectral filter sensitive to wavelengths of near-infrared light centered at 750 nanometers. It shows a hexagonal storm on Saturn's north pole. The shape is believed to be the result of a perturbed jet in Saturn's upper atmosphere.
Comet Hale-Bopp was discovered on July 23, 1995 by Alan Hale and Thomas Bopp, and it was one of the brightest and most observed comets of the 20th century, as it was visible to observers on Earth for a record 18 months. The blue tail in this image streams opposite from the Sun and carries ions away from the comet's nucleus. The yellow tail is composed largely of dust and traces the curve of the comet's orbit.
The large disk of gas surrounding the star Fomalhaut is clearly visible in this image. A planet - Fomalhaut b - was later found in the disk. It is likely less than twice Jupiter's mass and is either enshrouded in a spherical cloud of dust from ongoing planetesimal collisions or surrounded by a large circumplanetary ring system. Its orbital period is estimated to be about 1,700 years.
This image of the Orion nebula is one of the most detailed astronomical images ever produced; it was created using the Hubble Space Telescope over 105 Hubble orbits. All imaging instruments aboard the telescope were used simultaneously to study Orion. It is one of the brightest nebulae in the night sky, and one of the only major star forming regions near Earth.
This image of Westerlund 2 was taken to celebrate the Hubble Space Telescope's anniversary of 25 years in space. The star cluster is located inside a large cloud of gas known as Gum 29 and is about 20,000 light-years away. The cluster measures between 6 and 13 light-years across.
This image of the Horsehead Nebula in infrared light was made by the Hubble Space Telescope to mark the 23rd anniversary of the famous observatory's launch aboard the space shuttle Discovery on April 24, 1990. The Horsehead is an example of a dark nebula - it is only visible because it is lit from behind.
Sun in UV
This image is an extreme ultraviolet snapshot of the Sun and was made on August 1, 2010 using the Solar Dynamics Observatory. It shows a large solar flare (white area on upper left), a solar tsunami (wave-like structure, upper right), multiple filaments of magnetism lifting off the stellar surface, large-scale shaking of the solar corona, radio bursts, a coronal mass ejection, and more.
Zeta Oph- Bow Shock
The star Zeta Ophiuchi produces the arcing interstellar bow wave, or bow shock, which can be seen in this infrared image. In the false-color view, bluish Zeta Oph, which is about 20 solar masses, is moving toward the left at 24 kilometers per second. Its strong stellar wind precedes it, compressing the interstellar medium and producing the curved shock front. It is likely that Zeta Oph was once a member of a binary star system whose companion exploded as a supernova, flinging Zeta Oph out of the system at high speed.
Jupiter and its moons
This image is a compilation of Jupiter and its four largest moons. From left to right, they are Io, Europa, Ganymede and Callisto. At the bottom of the image, the Great Red Spot can be seen. The individual images of Jupiter and its moons were taken by spacecraft; the bodies cannot actually be seen together in this orientation.
This natural color image of the planet Saturn was created from images collected shortly after Cassini began its extended Equinox Mission in July 2008. Several of Saturn's moons can be seen in this image, most prominently Titan, which appears as a brown ball just off Saturn's rings.
This is an image of M80, also known as NGC 6093, taken by the Hubble Space Telescope. It is one of about 250 globular clusters that orbit our Galaxy. Many of the stars in M80 are older and redder than our Sun, but some enigmatic stars appear to be bluer and younger. These blue stars are known as blue stragglers, and by analyzing pictures like this one, astronomers have been able to identify one of the largest populations of blue stragglers to date. As blue stragglers are thought to be the result of nearby stars colliding, it is believed that stars are much more closely spaced, and thus collide more frequently, in M80 than in our stellar neighborhood.
Much anticipated comet ISON may be in trouble
(CNN) — ISON, the most closely watched comet in recent years, may be falling apart as it nears its close encounter with the sun.
Comets are giant snowballs of frozen gases, rock and dust that can be several miles in diameter. When they get near the sun, they warm up and spew out some of the gas and dirt, creating a tail that can stretch for thousands of miles. Most comets are in the outer part of our solar system. When they get close enough for us to see them, scientists study them for clues about how our solar system formed.
When ISON was first discovered, hopes were high that it might become visible to the naked eye, meaning everyone might be able to see it, not just those with good telescopes who took the trouble to find it. There was talk it might even rival some of the Great Comets like Halley's or Hale-Bopp and spread a huge tail across the sky.
But some observers on Tuesday reported online that the comet is not nearly as bright as it has been in recent days and that it may be pouring out dust.
This could mean the comet’s core, or nucleus, has “completely disrupted, releasing an enormous volume of dust,” NASA’s Comet ISON Observing Campaign says in its November 25 online update.
But other observers say images taken by NASA’s STEREO spacecraft are “encouraging evidence that the comet still exists,” Padma Yanamandra-Fisher with the ISON campaign told reporters on the campaign’s Facebook page. She added that it’s too early to tell what kind of shape the comet is in, though.
“I believe the next couple of days will be crucial to determine the post-perihelion appearance of the comet,” Yanamandra-Fisher said. Perihelion is the point in an object’s path that is closest to the sun.
Whatever its final fate, she said, ISON has “provided a wonderful window into the world of comets. The full understanding of this comet and its place in the taxonomy of comets will only come in hindsight.”
ISON was discovered in September of 2012 by astronomers Vitali Nevski and Artyom Novichonok using a telescope near Kislovodsk, Russia, that is part of the International Scientific Optical Network (ISON). ISON — officially named C/2012 S1 — was 585 million miles away at the time. Its amazing journey through the solar system has been chronicled by amateur astronomers and by space telescopes. NASA has even created a toolkit for ISON fans.
Confusion about its fate isn’t new for ISON watchers.
“From the moment of discovery, ISON has been a confusing, frustrating, dynamic and unpredictable object. In other words, it has been a very typical comet!” said Karl Battams, an astrophysicist with the Naval Research Laboratory in Washington.
The glare of the sun has blocked most ground-based observations, but NASA has a fleet of spacecraft watching as ISON plunges toward the sun. If it hasn't already broken up, it will skim about 730,000 miles above the sun's surface on Thanksgiving Day and could put on a sky show in early December when it moves out of the glare of the sun.
The comet will make its closest approach to Earth on December 26, and, no, it won’t hit us. But for now, we wait to learn ISON’s fate.
“I am excited at marking the progress of this comet that has captivated the world from its discovery and the possibility of it being a Great Comet,” Yanamandra-Fisher told CNN.com. “I am glad that I was able to be part of its journey.” | 0.831128 | 3.252799 |
We have both good and bad news coming in from the astronomy community. The good news is that the Hubble Space Telescope managed to capture some once-in-a-decade images of a fragmenting comet. The bad news is that the very same comet was expected to put on an extraordinary, luminous show for skywatchers in May—something that almost certainly will not happen now.
Like exploding aerial fireworks shells, comet ATLAS is breaking apart into more than 30 pieces, each roughly the size of a house. Hubble captured detailed images of the breakup last week. — Hubble (@NASAHubble), April 28, 2020
Gizmodo picked up on the good-slash-bad news, which was recently announced by NASA (among others). According to NASA’s post, Hubble identified 30 fragments of the comet (dubbed C/2019 Y4 or Comet ATLAS) on April 20, then 25 fragments on April 23. Comet ATLAS was first spotted in December of last year by the Asteroid Terrestrial-impact Last Alert System (or ATLAS), a robotic astronomical survey and early warning system developed and operated by the University of Hawaii’s Institute for Astronomy.
“This [comet breakup] is really exciting—both because such events are super cool to watch and because they do not happen very often,” said Quanzhi Ye of the University of Maryland, College Park. Ye is the leader of one of the two Hubble teams that captured the breakup. Ye added that “Most comets that fragment are too dim to see,” and that “Events at such scale only happen once or twice a decade.”
Comet ATLAS’s fragments on April 20 and April 23. NASA, ESA, D. Jewitt
Ye and the other researchers involved in studying the breakup of Comet ATLAS say that this instance is evidence that comet fragmentation is likely common. The researchers note that fragmentation may even be the most prevalent method by which the solid nuclei of comets “die.” (Recall that a comet’s nucleus is its solid, central part made up of frozen gases, dust, and rock.)
Although the cause of Comet ATLAS’ breakup is not confirmed as of now, some astronomy outlets (e.g. Sky & Telescope) have said that accelerating spin is a likely catalyst for nuclear breakups in comets in general. This accelerating spin would—probably—be the result of jets of gas blasting from the surface of a given comet’s nucleus. That gas would be catalyzed by the Sun’s heating of the comet as it approaches perihelion (the point at which an orbiting body is closest to the Sun).
UCLA professor David Jewitt, the leader of the other Hubble team to capture the breakup of Comet ATLAS, said in a UCLA News report that "Suddenly, [Comet ATLAS has] been thrust into the hot zone near the sun and the stress of the new environment is causing it to disintegrate." Jewitt added that "It is quite special to get a look with Hubble at this dying comet."
Prior to the confirmed breakup of Comet ATLAS, skywatchers were hoping to be able to see it with the naked eye in May. NASA, unfortunately, now says that “If any of it survives, the comet will make its closest approach to Earth on May 23 at a distance of about 72 million miles….” Essentially, that means it may not be visible at all.
What do you think about these images of Comet ATLAS breaking up? Do you think we’ll still be able to see any fragments of it in May during its closest approach to Earth? Let us know your thoughts in the comments!
Feature image: NASA, ESA, D. Jewitt | 0.847104 | 3.290402 |
Image: NGC 4485 has been involved in a dramatic gravitational interplay with its larger galactic neighbour NGC 4490 — out of frame to the bottom right in this image. This ruined the original, ordered spiral structure of the galaxy and transformed it into an irregular one.
The interaction also created a stream of material about 25 000 light-years long, connecting the two galaxies. The stream, visible to the right of the galaxy, is made up of bright knots and huge pockets of gassy regions, as well as enormous regions of star formation in which young, massive, blue stars are born.
Below NGC 4485 one can see a bright, orange background galaxy: CXOU J123033.6+414057. This galaxy is the source of X-ray radiation studied by the Chandra X-ray Observatory. Its distance from Earth is about 850 million light-years. Credit: ESA/Hubble, NASA
The NASA/ESA Hubble Space Telescope has taken a new look at the spectacular irregular galaxy NGC 4485, which has been warped and wound by its larger galactic neighbour. The gravity of the second galaxy has disrupted the ordered collection of stars, gas and dust, giving rise to an erratic region of newborn, hot, blue stars and chaotic clumps and streams of dust and gas.
SulutPos.com, Garching, Germany – The irregular galaxy NGC 4485 has been involved in a dramatic gravitational interplay with its larger galactic neighbour NGC 4490 — out of frame to the bottom right in this image. Found about 30 million light-years away in the constellation of Canes Venatici (the Hunting Dogs), these strangely interacting galaxies have earned an entry in the Atlas of Peculiar Galaxies: Arp 269.
Having already made their closest approach, NGC 4485 and NGC 4490 are now moving away from each other, vastly altered from their original states. Still engaged in a destructive yet creative dance, the gravitational force between them continues to warp each of them out of all recognition, while at the same time creating the conditions for huge regions of intense star formation.
This galactic tug-of-war has created a stream of material about 25 000 light-years long which connects the two galaxies. The stream is made up of bright knots and huge pockets of gassy regions, as well as enormous regions of star formation in which young, massive, blue stars are born. Short-lived, however, these stars quickly run out of fuel and end their lives in dramatic explosions. While such an event seems to be purely destructive, it also enriches the cosmic environment with heavier elements and delivers new material to form a new generation of stars.
Two very different regions are now apparent in NGC 4485; on the left are hints of the galaxy’s previous spiral structure, which was at one time undergoing “normal” galactic evolution. The right of the image reveals a portion of the galaxy ripped towards its larger neighbour, bursting with hot, blue stars and streams of dust and gas.
This image, captured by the Wide Field Camera 3 (WFC3) on the Hubble Space Telescope, adds light through two new filters compared with an image released in 2014. The new data provide further insights into the complex and mysterious field of galaxy evolution.
Zoom-in on NGC 4485
The Hubble Space Telescope is a project of international cooperation between ESA and NASA.
Image credit: ESA, NASA
ESA/Hubble Photo Release | 0.88546 | 3.941913 |
The Gravity Assist Podcast is hosted by NASA's Chief Scientist, Jim Green, who each week talks to some of the greatest planetary scientists on the planet, giving a guided tour through the Solar System and beyond in the process. This week, he's joined by Dr Kelly Fast, who is the Near-Earth Object Observation Program Manager in NASA's Planetary Defense Coordination Office.
You can listen to the full podcast here, or read the abridged transcript below. [Deflecting Killer Asteroids Away From Earth: How We Could Do It]
Dr. Jim Green: To find near-Earth objects we have to scan the whole sky looking for those things that cross our orbit and that may hit our Earth one day. So, there's an array of tools that you use as part of your job. Tell us a little bit about them.
Dr. Kelly Fast: Well, in the Near-Earth Object Observations Program, the idea is to find near-Earth asteroids before they find us. Ground-based telescopes are used to survey the skies every night to look for near-Earth asteroids and try to discover the ones that haven't been seen before.
These telescopes include the Pan-STARRS telescopes that belong to the University of Hawaii, and the Catalina Sky Survey, which is part of the University of Arizona. So those are the new telescopes that provide us with new discoveries.
Dr. Jim Green: Once you have found them, you also need to know how big they are and maybe their spin rate and some attributes of them, such as whether they're iron or stony meteorites. How do we get that kind of information?
Dr. Kelly Fast: That kind of information comes from other telescopes that seek to characterize these objects. One in particular is actually a NASA telescope on the ground, the Infrared Telescope Facility on Mauna Kea on the Big Island of Hawaii. At that telescope observations are made and spectra taken in the infrared, the part of the spectrum that the eye can't see. From that information it's possible to tell something about what these asteroids are made of, which is important for understanding what sort of hazard they might pose to Earth.
In addition, there are other telescopes in the space, like NEOWISE, which is the re-purposed Wide Infrared Survey Explorer (WISE), and the same observations are made from space in order to learn more about the characteristics of objects.
Dr. Jim Green: There's another new tool that looks down at the Earth and sees in-falling material. What is that telescope?
Dr. Kelly Fast: It's actually an Earth science mission, the GOES-16 satellite, which is looking at the Earth. There's stuff hitting the Earth all the time, but thankfully it's small material. When you look at the night sky, you see shooting stars. Those are really dust and very small rocks hitting the atmosphere from outside, and they make that streak in the sky [as they burn up].
On the GOES-16 satellite is an instrument called the Geostationary Lightning Mapper, or GLM, and it's there to detect lightning—looking down at the Earth to detect lightning. But when meteors travel through the atmosphere they also create a flash of light, and the GLM is detecting those also. It turns out that there's valuable information that you can get from that. So, this is a case where you get this bonus science from a particular instrument.
Dr. Jim Green: What's really neat about that is, as I understand it, there's nearly 100 tons of meteoric material that falls into Earth's atmosphere every day.
Dr. Kelly Fast: That's true. It sounds like a lot of material, but the atmosphere is an incredible protector for us, and so much of that just never even reaches the ground. It burns up in the atmosphere, creating beautiful shooting stars and then, if there's something that's larger, it will create something brighter such as a fireball or a bolide.
Dr. Jim Green: So, as we see fireballs come in, we can then predict where the debris will make landfall. The debris then tells us what type of meteorite it was that came in.
Dr. Kelly Fast: We had an incredible experience in June 2018, when a very small asteroid, just a few meters in size, was discovered eight hours before it impacted the Earth. Its designation was 2018 LA. Once it hit the Earth's atmosphere, the meteor was seen by surveillance cameras and government sensors, and a meteorite was recovered afterwards. That's very important for science: connecting the meteorite back to an actual asteroid whose orbit was determined tells you more about where it came from in the first place.
Dr. Jim Green: In October 2017, an asteroid was found coming through our Solar System. What was that?
Dr. Kelly Fast: The Pan-STARRS telescope in Hawaii was doing its normal Near-Earth Object survey operation, scanning the sky at night looking for new asteroids, and it found one, but the motion of this one was different. It was moving quite fast. And when its orbit was calculated it was realized that it had come from outside our Solar System, from interstellar space, and it was moving so fast that it was going to leave the Solar System. We call the object 'Oumuamua, and to be able to conclude that this wasn't one of our own asteroids was a very important discovery.
Dr. Jim Green: The concept that here something that was created in another solar system and was just passing through really energized many of the ground-based astronomers and our scientists wanted to know more about it. And so, there were a lot of observations of 'Oumuamua.
Dr. Kelly Fast: And there was very little time because this asteroid was discovered after it had passed by Earth and was on its way out of the Solar System, so there was limited time to study it.
It's difficult with asteroids because, even when they are going by the Earth, they still look like a point of light, unless they are close enough to put radar on them. And 'Oumuamua wasn't. But there are other things that we could learn. First of all, by looking at the light that's coming from it, you can see how the light is changing, and the light was changing a lot: it was getting brighter and fainter, brighter and fainter. That's not unusual with asteroids because they tend not to be round. So depending on how the thing is oriented toward you, the light is going to be different. The changing light can tell you about the shape. But this one was a little more extreme, and it appeared to possibly be even more elongated than other asteroids that have been studied in our Solar System, so that was a neat discovery. [See the Dramatic Increase in Near-Earth Asteroids NASA Has Discovered (Video)]
Dr. Jim Green: Looking at that light over time, the best fit seems to be a cigar- or elongated-shaped object.
Dr. Kelly Fast: There have been some different numbers that have come out in terms of its aspect ratio, meaning like how wide is it compared to how tall it is or its length to its width. It may be as much as 10:1 or maybe more like 5:1 or 6:1, but still something that's longer than what we've generally seen in our Solar System.
There were other properties, too, that were useful for trying to understand more about this object. One thing that was seen was its color, although unfortunately it was a little too faint to take a spectrum of it, but the color still tells you something about it. And this seemed to have a reddened color, which kind of comes with space weathering, by being bombarded by radiation in space.
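The shape inference discussed above can be sketched with the standard ellipsoid light-curve relation. This is an illustrative calculation, not part of the interview, and it assumes an idealized equator-on view of a uniformly reflective body:

```python
import math

def lightcurve_amplitude(aspect_ratio):
    """Peak-to-trough brightness variation, in magnitudes, for an
    ellipsoid of axis ratio a/b viewed equator-on: dm = 2.5*log10(a/b)."""
    return 2.5 * math.log10(aspect_ratio)

# A ~10:1 body varies by 2.5 magnitudes; a 6:1 body by about 1.95.
print(lightcurve_amplitude(10))
print(round(lightcurve_amplitude(6), 2))
```

'Oumuamua's reported brightness variation of roughly 2.5 magnitudes is what pointed to the unusually large, 10:1-class aspect ratios quoted in the conversation.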
Dr. Jim Green: What's really exciting about it is, now that we've found one, there must be others?
Dr. Kelly Fast: Even before it was found, it was predicted that such objects should be passing through our solar system. So, this is probably not the only one, it's just the first one that we've seen.
Dr. Jim Green: How many do we expect, now that we've seen this one?
Dr. Kelly Fast: There are folks who do that kind of modeling. Some of them are saying that there should be one inside the Earth's orbit at all times. But telescopes — at least those on the ground — can only look at night so they can only cover so much of the sky. The nice thing is, since this was a pathfinder, it gave an idea of what to look for, and so it might be possible to recognize this sort of thing sooner or perhaps go into older data and look for some things that were missed.
Things weren't quite as they seemed at first, because the people who do this sort of modeling really would have expected this to be an icy body, like a cometary body, as opposed to an asteroid. But no coma or tail was seen when 'Oumuamua was discovered, no atmosphere of the kind that forms when the Sun heats up an icy body such as a comet. But later on, as it continued to be observed, its motion was a little odd. By very carefully measuring its motion and seeing that it was a little off, what it indicated was that this might actually be an icy body after all: when the ice on a comet is heated, it produces jets as the gases are released, and the jets can affect the object's motion, like little rocket motors. So it turns out that this asteroid 'Oumuamua may actually be a comet.
Dr. Jim Green: You know, this object is about 700 meters long, and it has a funny, unusual spin to it, like a cigar that is tumbling in a very unsystematic way. We call it nutating. But if it were a rubble pile made up of loose material, how could it even be held together? So, maybe what is, indeed, holding it together is ice, allowing it to hang together in that manner.
Dr. Kelly Fast: There was a lot of discussion about this. Everybody was puzzling over this because a rubble pile object probably wouldn't have been able to hold itself together and rotate like that. So, is it like a big slab of material? And then, how would that have formed? What was its history? The presence of ice adds another piece to the puzzle.
Dr. Jim Green: One of the things I always like to do is ask what your gravity assist was. What were the things that happened to you as you became the planetary scientist you are today?
Dr. Kelly Fast: I just have to credit the people around me. I think of one in particular, Ted Kostiuk of the Goddard Space Flight Center, who I was working for after I got my Masters. And I had had kids, and I was doing science and being a mom and just loving it. He said to me one day, "I want to talk to you about the future." And he encouraged me to go back to grad school and finish my Ph.D. It led to all kinds of things that I never would have expected or planned. So that was a major, major gravity assist for me.
This story was provided by Astrobiology Magazine, a web-based publication sponsored by the NASA astrobiology program. This version of the story published on Space.com. Follow us on Twitter @Spacedotcom or on Facebook. | 0.923619 | 3.578838 |
Mean Distance from Earth (Average): 383,990 km
Distance from Earth at Perigee: 362,570 km
Distance from Earth at Apogee: 405,410 km
Orbital speed: 3,679.2 km/h
Radius: 1,737.1 km (0.273 Earths)
Mass: 7.3477 x 10^22 kg (0.0123 Earths)
Age: 4.5 billion years
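As a quick consistency check on the figures above, the listed orbital speed follows from the mean distance if one assumes a circular orbit and the Moon's sidereal period of about 27.32 days (the period is an assumed value, not listed above):

```python
import math

MEAN_DISTANCE_KM = 383_990          # mean Earth-Moon distance from the list above
SIDEREAL_PERIOD_H = 27.321661 * 24  # sidereal month in hours (assumed value)

# Speed along an idealized circular orbit: circumference / period
speed_kmh = 2 * math.pi * MEAN_DISTANCE_KM / SIDEREAL_PERIOD_H
print(round(speed_kmh, 1))  # ~3679.4 km/h, close to the listed 3,679.2 km/h
```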
Why does the Moon change shape?
- The Moon does not actually change shape but rather the part of the moon that is lit by the sun changes and gives the appearance of the moon changing shape. Soon we will put up a diagram here that explains this more fully.
How does the Moon influence werewolves?
- What is well known is that the moon, and in particular the full moon, has a very strong influence on werewolves. Almost all new werewolves, and certainly young werewolves, experience transformation during the full moon and for 24 hours before and after.
Older and trained werewolves can control their transformations better but even they experience the pull of the full moon in terms of stronger powers and a higher likelihood of transforming on the full moon.
Why this happens is less well known, whether it’s due to the pull of gravitational forces or the tricks of the moonlight. This is a very complex issue and we will have more detail on this posted soon.
Does the Moon influence anything else on Earth?
- Yes it certainly does. Tides on the ocean are one such example. The gravitational pull of the Moon pulls the oceans so much that between high tide and low tide the water might rise as much as 16 meters (53 feet).
Why does the Moon change?
- The Moon appears to change shape because of the light from the Sun and the system of the Moon orbiting the earth. There will soon be a diagram here to help explain this better.
Glossary of Lunar Terms
Apogee – The point in the Moon’s orbit of the Earth where it is the farthest from the Earth.
Perigee - The point in the Moon’s orbit of the Earth where it is the closest to the Earth.
Supermoon – When the Full Moon and the Perigee (closest point to the Earth) occur at the same time it creates an effect known as a Supermoon which causes very high tides and very intense and powerful werewolf transformations.
Moon Age - The moon’s lunar cycle that lasts about 29 days. The moon age is the number of days since the beginning of that cycle which starts at 0 with the New Moon. | 0.861194 | 3.158701 |
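The Supermoon entry above can be quantified with the perigee and apogee distances listed earlier. A rough sketch, assuming apparent diameter scales inversely with distance:

```python
PERIGEE_KM = 362_570  # closest distance, from the list above
APOGEE_KM = 405_410   # farthest distance, from the list above

# Apparent angular diameter is inversely proportional to distance,
# so the size ratio is simply apogee distance / perigee distance.
size_ratio = APOGEE_KM / PERIGEE_KM
brightness_ratio = size_ratio ** 2  # disk area scales with diameter squared

print(f"{(size_ratio - 1) * 100:.0f}% larger across")   # ~12%
print(f"{(brightness_ratio - 1) * 100:.0f}% brighter")  # ~25%
```

So a perigee full moon looks about 12% wider and roughly 25% brighter than an apogee full moon.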
Astronomers have revealed details of mysterious signals emanating from a distant galaxy, picked up by a telescope in Canada.
The precise nature and origin of the blasts of radio waves is unknown.
Among the 13 fast radio bursts, known as FRBs, was a very unusual repeating signal, coming from the same source about 1.5 billion light years away.
Such an event has only been reported once before, by a different telescope.
“Knowing that there is another suggests that there could be more out there,” said Ingrid Stairs, an astrophysicist from the University of British Columbia (UBC).
“And with more repeaters and more sources available for study, we may be able to understand these cosmic puzzles – where they’re from and what causes them.”
The CHIME observatory, located in British Columbia’s Okanagan Valley, consists of four 100-metre-long, semi-cylindrical antennas, which scan the entire northern sky each day.
The telescope only got up and running last year, detecting 13 of the radio bursts almost immediately, including the repeater.
The research has now been published in the journal Nature.
“We have discovered a second repeater and its properties are very similar to the first repeater,” said Shriharsh Tendulkar of McGill University, Canada.
“This tells us more about the properties of repeaters as a population.”
FRBs are short, bright flashes of radio waves, which appear to be coming from almost halfway across the Universe.
So far, scientists have detected about 60 single fast radio bursts and two that repeat. They believe there could be as many as a thousand FRBs in the sky every day.
There are a number of theories about what could be causing them.
They include a neutron star with a very strong magnetic field that is spinning very rapidly, two neutron stars merging together, and, among a minority of observers, some form of alien spaceship. | 0.857641 | 3.596892 |
Astronomers from Cornell have developed a practical model that they call an environmental color decoder. The new practical model is designed to find climate clues to help in the search for potentially habitable exoplanets. The astronomers looked at how different planetary surfaces in the habitable zones of distant solar systems could impact the climate on the exoplanets.
The team says reflected light from the surface of the planet plays a role in the overall climate and in the detectable spectra of Earth-like planets. The scientists combined details of the planet's surface color and the light from its host star to calculate a climate. The team says that a rocky, black basalt planet absorbs light well and would be very hot.
However, if sand or clouds were added, the planet cools. Meanwhile, a planet that has vegetation and circles a reddish K-star would have cool temperatures because of how the surface reflects the sunlight. The color of the planet can mitigate some of the energy given off by its host star.
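The basic albedo effect described here can be illustrated with the textbook equilibrium-temperature formula. This is a generic radiative-balance sketch, not the Cornell team's decoder; the solar-constant value and the example albedos are assumptions for illustration:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S_EARTH = 1361.0   # solar constant at Earth's orbit, W m^-2 (assumed value)

def equilibrium_temp(albedo, flux=S_EARTH):
    """Blackbody equilibrium temperature (K) for a rapidly rotating planet
    with no greenhouse effect: T = (S * (1 - A) / (4 * sigma)) ** 0.25."""
    return (flux * (1 - albedo) / (4 * SIGMA)) ** 0.25

# A dark basalt surface (low albedo) runs hotter than bright sand or cloud:
print(round(equilibrium_temp(0.1)))  # dark surface, ~271 K
print(round(equilibrium_temp(0.3)))  # Earth-like albedo, ~255 K
print(round(equilibrium_temp(0.6)))  # bright, reflective surface, ~221 K
```

Changing only the albedo at a fixed orbital distance shifts the equilibrium temperature by tens of kelvin, which is why surface color matters for habitability estimates.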
What makes up the surface of the exoplanet, how many clouds surround it, and the color of its sun can change the planet's climate significantly. The scientists are looking forward to instruments that will come online, like the ground-based Extremely Large Telescope, that will allow them to gather data to test a catalog of climate predictions.
Astronomers say there is an important interaction between the color of an exoplanet’s surface and the light that hits it. These effects can help scientists in the search for extraterrestrial life. | 0.860012 | 3.352731 |
When asteroid 2012 DA14 zips past Earth on Friday on an arc that cuts beneath communications satellites, David Trilling won’t be peering through a telescope. But he will eagerly await the data that will almost certainly find its way to his desk at Northern Arizona University.
“I’m excited to see what they come up with,” said Trilling, an assistant professor of astronomy who works partly to synthesize existing information—drawing from extensive archives of data to characterize known asteroids—while also looking for new objects hurtling through space.
Through a variety of NASA-funded grants to support research on near-Earth objects, Trilling and his colleagues look for asteroids in orbits that might come close to Earth. “We want to know what they’re made of and understand their internal properties,” he said.
Part of that work is to add to knowledge about the universe. “But if we want to deflect one, then we want to know more about what it’s made of,” Trilling said.
Trilling describes the upcoming flyby of the 150-foot-wide asteroid as “a pretty big deal” even though there is no chance of an impact with Earth, unlike the asteroid of about the same size that created Arizona’s Meteor Crater. Scientists are sure that the asteroid will get no closer than 17,200 miles at its closest approach at 12:26 p.m. MST.
“We’re confident that there’s no risk because we’ve made so many measurements of the position of this object,” Trilling said. “We know its orbit quite well.”
Still, much of the data that Trilling analyzes comes from “accidental information” that arises when an asteroid shows up in the field of view of a powerful telescope that was looking for something else.
“We find them through some clever tricks, measure their brightness in the field of observation, then do some mathematical modeling that combines image data with other information we have to get a measurement of asteroid properties,” Trilling said.
The upcoming flyby will be no accidental find, though, so data collection by amateurs and professionals, filtered through a formal clearinghouse and posted online, will provide valuable insights.
“This is an asteroid that’s so close that it’s pretty easy for us to study,” Trilling said. “Then, as we understand one asteroid better, we can extrapolate that information to thousands of others that look like it.”
Next week’s event serves as a preview for the International Planetary Defense Conference to be held in Flagstaff in April. Trilling, the local organizer, said more than 200 scientists will gather to discuss all aspects of asteroid research, including how to “eradicate risk.” And if a collision cannot be prevented? “We’re planning a session that covers the environmental and social effects” of an impact, Trilling said. | 0.83541 | 3.655174 |
A Planet Suitable for Life
Many planets formed, but one was especially suitable for life. The Earth unites all the special conditions we discuss below.
A Host Galaxy Rich with Dust
Making the complex molecules of life requires most of the elements. The heavy elements are found in star dust. Some galaxies have very little dust. Life requires a dusty galaxy.
A Galactic Location among New Stars
Galactic centers usually have old, hot, bluish-white stars. We must look in the galactic rim, where there are new yellow stars burning at a lower temperature. These stars incorporate carbon and oxygen nuclei to catalyze their nuclear reactions.
A Solitary Parent Star
The parent star must be a bachelor star. The planet that hosts life must have nearly uniform lighting and heating. Double, triple, or multiple stars would make planet orbits too complicated. Complicated orbits, with the planet sometimes near one star, sometimes near another, sometimes far from any star, will not do. Only a single star can have planets with simple orbits.
A Star of the Right Size
A star’s luminosity depends on its size: the bigger, the brighter.
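In quantitative terms, the "bigger, the brighter" rule is the Stefan-Boltzmann law, L = 4πR²σT⁴: at a fixed surface temperature, output grows with the square of the radius. A minimal Python sketch (the constants are standard reference values, not figures from this chapter):

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
R_SUN = 6.957e8         # solar radius, m
T_SUN = 5772.0          # solar effective temperature, K

def luminosity(radius_m, temp_k):
    """Total power radiated by a spherical blackbody star, in watts."""
    return 4.0 * math.pi * radius_m ** 2 * SIGMA * temp_k ** 4

l_sun = luminosity(R_SUN, T_SUN)              # about 3.8e26 W
ratio = luminosity(2 * R_SUN, T_SUN) / l_sun  # doubling the radius quadruples the output
```

Because luminosity also scales as the fourth power of temperature, a slightly hotter star of the same size is much brighter still.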
A little parent star has very low light output, and a planet has to orbit very close to the star to get enough warmth. The star would fill up a great deal of the daytime sky. There would hardly be shadows. Sundials would not work. But that’s not the only problem. A planet that is too close to its parent star will become tide-locked, synchronizing its rotation with its orbital revolutions. The Moon rotates only once in its orbit around the Earth. Mercury, the planet closest to the Sun, rotates three times every two orbits. Venus, the planet between Mercury and the Earth, makes complete rotations very slowly. The cooling and heating of night and day on a tide-locked planet is so slow that there are big temperature swings. Every night most of the water freezes and every day most of the water boils. Nearly all life would have to migrate daily around the planet, keeping close to the sunrise or sunset terminator. On Earth relatively few species migrate and of them only a few migrate distances comparable to the diameter of the planet. None migrates daily.
A big parent star has tremendous luminosity. The light output from a big parent star is very great, and the planet must orbit very far from the star to obtain the right temperature. The star would look like a point of light, much smaller than our Sun’s disk. The star would still provide information. It would serve as a sign for timekeeping. However, the planet could have no intelligent inhabitants to read a sundial. Big stars burn up their fuel very rapidly. Plant life would not have sufficient time to oxygenate the atmosphere. We need not search any planets orbiting a big star for animals or more intelligent life.
Once stars burn all their hydrogen, they begin to collapse and rise in temperature. If they are big enough, the temperature rises until they can burn helium. Helium makes a much hotter nuclear fire. The outer layers of the star expand outward, perhaps engulfing any planet that was at the right distance during the hydrogen-burning phase. Whether or not some planets are engulfed, the helium also burns up rapidly, the fire goes low again, and the star starts to collapse again, raising the temperature still higher. If the star is only 40 percent bigger than the Sun, the temperature rises to the ignition temperature of all the remaining 90 elements at once. The star becomes a supernova. The resulting conflagration blows the star apart in a few days. That would be the end if there were any life on any planet in the vicinity.
A Star of the Right Color
There are stars of different colors and temperatures. Stars range from the hottest, the bluish-white stars, on down through the intermediate yellow stars to the relatively cool red giant stars. The constellation Orion contains the range of colors. Orion is a hunter who carries a sword or dagger that hangs from his belt. The star on his right shoulder, Alpha Orionis or Betelgeuse, is a red giant star with a surface temperature of about 3 000 kelvins (nearly the same figure in degrees Celsius), or 5 400º F. The star on his left foot, Beta Orionis or Rigel, is a bluish-white star with a surface temperature of about 25 000 kelvins, or 45 000º F. There are stars twice as hot as Rigel, but they are either too far away to be bright and easily recognized, or located too close to the southern celestial pole to be seen from the northern hemisphere. Our Sun, with a surface temperature of about 6 000 kelvins, or 10 000º F, is intermediate.
Some people call the Sun average or even mediocre. This is not so. A star can have a planet orbiting around it at practically any distance. If the planet orbits close to the star, the planet will be hot. If the planet is far from the star, the planet will be cold. But a planet may orbit any star at the appropriate distance to maintain an average temperature of about 280 kelvins, 7º C or 45º F.
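The claim that a planet can orbit any star at a distance giving about 280 kelvins follows from radiative balance. A hedged sketch of the standard blackbody equilibrium-temperature formula, T = T_star * sqrt(R_star / (2 d)) * (1 - A)^(1/4), with zero albedo assumed for simplicity (constants are standard reference values, not figures from this chapter):

```python
import math

T_SUN = 5772.0   # solar effective temperature, K
R_SUN = 6.957e8  # solar radius, m
AU = 1.496e11    # Earth-Sun distance, m

def equilibrium_temp(t_star, r_star, distance, albedo=0.0):
    """Blackbody equilibrium temperature (K) of a planet whose
    rotation spreads the absorbed heat over its whole surface."""
    return t_star * math.sqrt(r_star / (2.0 * distance)) * (1.0 - albedo) ** 0.25

t_earth = equilibrium_temp(T_SUN, R_SUN, AU)  # about 278 K, close to the 280 K above
```

Moving the planet closer or farther simply rescales this temperature, which is why a habitable distance exists around stars of very different brightness.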
The four gases needed for life are oxygen, carbon dioxide, water vapor, and nitrogen. The latter is needed for plant fertilizer. They are all transparent in a band of frequencies called the visible band. The visible band transmits the red and green colors that photosynthesis needs. A yellow star emits most of its light in the middle of the visible band. If the star were any other color, the atmosphere of the planet would block most of the star’s light. A planet could be at the right distance from a bluish-white star or a red star to get the right temperature, but daylight would be dim at the surface. That would hardly be useful to intelligent beings with vision. If the planet orbits a red star, there will not be enough blue and violet light for normal rates of photosynthesis. There is some photosynthesis with red light only, but the lack of blue and violet light is a limiting factor. If the planet orbits a bluish-white star there will be far too much ultraviolet radiation. Intense ultraviolet radiation destroys the complex biochemical compounds needed for life. Plant life has to thrive for millions of years to oxygenate the atmosphere of a planet. Free oxygen will dissociate into ozone in the upper atmosphere under the action of the more energetic and harmful ultraviolet rays from the parent star. The ozone layer must be present to protect life on the planet’s surface from these rays. If the planet’s atmosphere doesn’t have free oxygen, it can’t have an ozone layer in the upper atmosphere to protect its surface. Harmful ultraviolet rays would reach the ground and prevent complex compounds like chlorophyll from ever forming.
We need not search for life near either red stars or bluish-white stars. Yellow stars are special. Also, there are little parent stars and big parent stars. Our Sun is a parent star of intermediate size. Is that special? Yes! Who says our Sun is mediocre? It is very special. Are we just lucky, or did a benevolent, powerful intelligence choose the Sun for us?
A Bright Sun in a Dark Sky
Livable temperatures on Earth require a bright Sun in a dark sky. We have already seen that the expansion of the universe makes the night sky dark. Sunlight governs the day because it comes from one direction only. We can use the angle of sunlight to determine the time of day. When the sky is sufficiently unclouded to allow the Sun to cast a shadow, we can get this information from the sunlight with a sundial. Even when a heavy overcast blurs all shadows it is usually possible to determine the position of the Sun with some accuracy. The Sun therefore serves as a sign of the time of day.
The same physical arrangement, a bright source in a dark sky, makes sunlight useful for powering heat engines. One very important kind of heat engine is animals, including people. We will examine this way of interpreting ourselves in the chapter below about the thermodynamics of life.
A Nearly Circular Orbit
The host planet’s orbit must be nearly circular to avoid extremes of temperature throughout the year. If the star has more than one planet, the orbits must nest neatly to prevent collisions. All the orbits must be nearly circular. Many extrasolar systems have one large planet in a highly elliptical orbit. Imagine Jupiter in a highly elliptical orbit that overlaps the Earth’s orbit. It would run around in our solar system like a bull in a china shop. The Earth would sooner or later suffer a fatal collision that would wipe out all known life.
A Court of Planets
A court of planets will collect elements not needed in abundance on the host planet, like the extra hydrogen found in the atmospheres of Jupiter, Saturn, Uranus, and Neptune. Large planets should go on the outside, to use their gravity to disrupt the orbits of comets and eccentric asteroids and defend the inner planets.
A Nearly Spherical Planet
The planet must be nearly spherical. Otherwise the planet might become tide-locked, that is, its rotation could slow until its day was comparable to its year. The Moon is elliptical and one side always faces the Earth. This makes a day on the Moon a month long. Mercury is elliptical like a football and on its closest approach to the Sun one of its ends points toward the Sun. This makes Mercury’s day equal two thirds of its year. The Earth’s more rapid rotation provides more uniform heating.
Moderate Orbital Inclination
The planet must not be so inclined that its poles are close to the orbital plane, or the day on most of the planet will be the same as the planet’s year. The North and South Poles of the planet Uranus point alternately toward the Sun. Since it takes Uranus 84 years to revolve around the Sun, one cycle of light and darkness there is also 84 years long. On the other hand, some orbital inclination is needed to produce seasons. The Earth’s spin axis is inclined 23 degrees from vertical relative to the Earth’s orbital plane. This makes noontime sunlight shine from high in the sky on a summer day. It shines from low near the horizon in winter. The variation in temperature this produces during the year corresponds to important cycles of renovation among plants and animals.
A Large Satellite
Preferably the planet will have a satellite of appreciable relative size. The Moon helps to defend the Earth from impacts by disrupting the orbit of any stray asteroid that approaches the Earth.
The Moon also stabilizes the Earth’s orbital inclination against disturbances from the other planets. Without the Moon, the Earth would periodically tip its axis so far that its poles would point toward the Sun, and the day would be equal to the year. All animal life on Earth would have to make a semiannual migration of 10 000 miles or 16 000 kilometers.
A large satellite will cause tides to scrub the continental shelves and increase a healthy interaction between the hydrosphere and lithosphere.
Animals can use the satellite for some illumination at night. Intelligent life can use the satellite to gauge the passing of time in units longer than days and less than years.
The Right Temperature
The orbit must be at the right distance from the parent star. The distance and type of star determines the temperature range. Water should be liquid most of the time to permit a wide variety of chemical reactions. This is also the right temperature range for making long hydrocarbon chains.
The old science-fiction idea of high-temperature life based on hydro-silicon chains does not work. Silicon does not form nearly the variety of complex molecules that carbon does.
The Right Size for Just Enough Atmosphere
The inhabitable planet should be large enough to retain an atmosphere, but not too large, or the atmosphere will be too thick. The planet must be small enough to let excess hydrogen escape. The size for retaining an atmosphere is related to the average temperature.
A Molten Core
The planet should have a molten, electrically conducting core to permit currents that generate a magnetic field. This defends the planet from those cosmic rays that consist of charged particles moving very fast. The iron core of the Earth is very suitable. The core must contain radioactive materials to keep it hot and fluid.
Various Kinds of Rocks
There should be a good mix of rock materials, some of high density and some of low density. Then the low-density parts of the crust will be thicker than the high-density parts. Since the crust will float on the molten core, the thick parts of the crust will be continents. Their outer surface elevation will be higher than that of the thin parts of the crust, which will become the ocean basins. Such a mix will provide a variety of habitats. Otherwise, a planet with much water would be almost all ocean, and one with little water would be almost all continent.
Abundant Water, Not Other Liquids
The water molecule has an odd angle between the hydrogen atoms, not exactly 90 degrees. In water vapor the angle is 104 degrees 40 minutes. Water’s property of expanding when freezing is due to flexibility in the angle. When the molecules are warm enough to move past one another, dynamically they occupy less volume than they do when they are cold enough to hold fixed positions in an open lattice.
Let’s explain this point in more detail. The oxygen atom is strongly electronegative. This means that it holds all the electrons very close to itself. There are ten electrons in a water molecule but the nucleus of the oxygen atom has only eight protons. Therefore the oxygen atom in a water molecule has almost two unbalanced negative charges. The two hydrogen nuclei are left nearly bare, so each has almost one unbalanced positive charge.
The oxygen atom in one water molecule is attracted to the hydrogen atoms of other water molecules. At a low temperature this kind of “extra-molecular hydrogen bond” can attach one water molecule to four others. The latticework that builds up occupies a great deal of space. At higher temperatures, in liquid water, there is another structure in which one molecule is attached to three others. The latticework is unstable and occupies less space than the low-temperature latticework. Liquid water molecules do not fill up as much space as they do in ice. This is what makes ice float on liquid water. Here is a case where some disorder, or order less than perfect, is necessary for life.
Liquid water has a specific gravity of 1.000 by definition at 4º C. It expands when it freezes to a specific gravity of 0.92. This means that ice floats. If water had molecules that fit neatly together like almost all other molecules, lakes and oceans would be frozen solid from the bottom up. In summer there might be pools of chilly water on top of the ice. Aquatic life could not exist.
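Archimedes' principle turns these specific gravities into the familiar iceberg picture: a floating body displaces its own weight of water, so the submerged fraction equals the density ratio. A quick check using the figures quoted above:

```python
RHO_WATER = 1.000  # specific gravity of liquid water at 4 deg C (by definition)
RHO_ICE = 0.92     # specific gravity of ice, from the text

# A floating body displaces its own weight of liquid, so the fraction
# of an iceberg below the waterline is rho_ice / rho_water.
submerged_fraction = RHO_ICE / RHO_WATER  # 0.92, i.e. about 92% underwater
```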
On land, expansion on freezing enables water to break up rock masses. Water creeps into crevices and then freezes, exerting pressure that opens cracks. Then when a thaw comes the water fills the new crack and is ready to expand the crack some more. It took many cycles of freezing and thawing before water broke up the rocky surface of the land or the volcanic shield into fine fragments. When water had done its work, the Earth’s surface was loose enough to permit roots to penetrate, and plants could grow.
Neither methane nor ammonia has this property of expanding when freezing. These chemicals are abundant in the atmospheres of Jupiter, Saturn, Uranus, Neptune, and Titan, a moon of Saturn. One or more of the planets we just mentioned may be gas all the way through, but we know that Titan has a solid surface. Titan does not have soil like the Earth because any surface water present is too cold to alternate between freezing and thawing. The surface is not like soil and cannot support plants.
The Right Balance between Gases
The balance between carbon dioxide, water vapor, and temperature is critical. If the temperature is low enough but not too low, the water will be liquid. The oceans will dissolve a great amount of the carbon dioxide. Water containing carbon dioxide is a weak acid that wears down limestone and makes material available for seashells. The air will not contain too much carbon dioxide, and some of the heat will escape to outer space. Thus a temperature balance can be maintained on a planet like the Earth.
But if the temperature becomes too high, the water will release the carbon dioxide, as a warm soft drink releases bubbles. This will trap heat under the atmosphere and raise the temperature higher. Venus has a surface temperature of about 500 degrees Celsius and no liquid water on the surface because of this runaway greenhouse effect. For the same reason Venus is subject to violent storms. Winds there are constantly moving at double the speed of the most violent hurricane winds on Earth.
Surface Soil and Dissolved Gases
The surface soil must be loose but not too loose. If there is nothing to hold it down there will be constant dust storms. Volcanoes can make the soil loose and porous if the molten lava has dissolved gases in it. These are present on Earth because tectonic plates are always sliding over one another. This drags earlier ocean floors with their seashells and diatomaceous matter down into the mantle. The biological material, including dissolved carbon dioxide, later returns to the Earth’s surface in molten rock. If there were no dissolved gases the lava flows would produce a hard, impervious surface like that of Venus.
Physical, geological, and climatic conditions on the Earth are relatively tranquil. Everywhere else in the solar system we find violence. Scientists have puzzled over many “Goldilocks” coincidences. Looking at Venus, Earth, and Mars we see that the Earth is “not too hot, not too cold, but just right.” Earth is the only known place in the universe with liquid water.
The daily weather report for Venus features winds twice as fast as the most powerful hurricanes on Earth. The Great Red Spot of Jupiter, bigger than the Earth and observed since the seventeenth century, turned out to be a storm that has been raging more than 350 years and perhaps forever. In 1989 Voyager 2 discovered a similar storm, the Great Dark Spot of Neptune.
There are volcanoes and lava flows on Earth, but ours are dwarfs compared to those of Venus and Mars. One of the Voyager space probes photographed a volcano in eruption on Io, the innermost large satellite of Jupiter. Io is smaller than our own Moon, but it is covered with volcanoes and fresh lava flows. In the farthest, coldest reaches of the solar system there may be no hot magma, but the surface of Triton, Neptune’s largest moon, may have arisen from ice volcanism.
We can’t see the surface of Jupiter, Uranus, or Neptune, and we’re not sure that Saturn even has a solid surface. We haven’t yet seen the surface of Pluto or its satellite, Charon. But the surfaces of all the other bodies in the solar system are pockmarked with craters. Mercury has so many craters that more won’t fit. Any new crater obliterates parts of old craters. Venus has a protective atmosphere much thicker than Earth’s, and new lava flows resurface Venus every million years. Even so, radar found hundreds of impact craters there. Mars has what many astronomers thought was a smooth plain, until careful measurements with the Mars Orbiter Laser Altimeter showed it to be a crater big enough to hold Western Europe or the United States from the East Coast to the Rockies. The moons of Jupiter, Saturn, Uranus, and Neptune are all as heavily cratered as our own Moon. The Earth has a few impact craters, but nothing like the number we see everywhere else. How did we escape?
An asteroid impact supposedly destroyed the dinosaurs and opened up a niche for the mammals at the end of an age about 65 million years ago. The asteroid needed to be big enough, at least 10 km in diameter, to destroy most large life forms. If it had been too big, say 30 km or more in diameter, it would have destroyed all forms larger than bacteria. That would have put the Earth back to pre-Cambrian conditions, and the progressive population of the Earth with plants, animals, and people would have had to start over. A really big one has never yet hit us.
There are an estimated 700 asteroids larger than 1 km whose orbits come as close to the Earth’s as 40 million km (25 million miles, or about 27% of the distance between the Earth and the Sun). The Asteroid Belt has about 1000 objects bigger than 50 km in diameter. Happily for us the known big ones are nicely shepherded by Jupiter’s enormous gravitational field and kept in the asteroid belt beyond Mars.
NASA has undertaken a survey to see if the Earth is at risk any time soon from asteroids whose orbits come close to or cross the Earth’s orbit. How long must they continue to survey the solar system until they can be sure they’ve seen most of the big ones? As an example we will take Halley’s Comet. Its nucleus is an irregular “potato shaped” object about 15 km long and 10 km in diameter. It spends most of its time out near the orbit of Neptune in the dark and cold where it can’t reflect enough light to be seen. We get to see it only once every 76 years when it ventures close to the Sun within the orbit of Jupiter. Every time a comet comes close to the Sun some of its material boils off and streams away in a long tail. Halley’s Comet will not last more than another thousand round trips. That means it will not last more than 76 000 years. Its present age must likewise be of the order of thousands, not millions, of years. If the solar system is about 5 000 million years old, why are there still comets? New objects are constantly being bumped down into lower orbits by close encounters with the outer planets. How many other big chunks are out there? In the newly discovered Kuiper belt, beyond Neptune, there are perhaps 100 000 objects bigger than 50 km in diameter.
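The lifetime arithmetic in this paragraph is easy to verify; a trivial check using the figures quoted above:

```python
ORBITAL_PERIOD_YEARS = 76   # Halley's Comet returns roughly every 76 years
MAX_REMAINING_TRIPS = 1000  # upper bound on perihelion passages, from the text

# Upper bound on the comet's remaining lifetime, and by the same
# order-of-magnitude argument a rough bound on its age.
max_remaining_years = ORBITAL_PERIOD_YEARS * MAX_REMAINING_TRIPS  # 76 000 years
```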
From all of this we know that the Earth should have received as many hits as the Moon. Somehow we have escaped. Is the Earth just the luckiest planet in the universe? Or is there a better explanation?
In Isaiah 45:18 the prophet says, For this is what the LORD says—he who created the heavens, he is God; he who fashioned and made the earth, he founded it; he did not create it to be empty, but formed it to be inhabited—he says: “I am the LORD, and there is no other.” Can the reason the Earth is so well adapted for life be that God designed it that way? Does His invisible, protecting hand defend us from disastrous asteroid impacts?
One believes either in God or in the goddess Lady Luck. Our solar system and home planet seem to have “benevolent creative design” written all over them.
Schwarzschild, Bertram, “Survey Halves Estimated Population of Big Near-Earth Asteroids,” Physics Today, 53 (Number 3, March 2000), pp. 21–23. | 0.914738 | 3.233799 |
Moon phase on Tuesday, 19 June 2018, is Waxing Crescent; the 6-day-old Moon is in ♍ Virgo.
The previous main lunar phase was the New Moon, 5 days earlier, on 13 June 2018 at 19:43.
Moon rises in the morning and sets in the evening. It is visible toward the southwest in early evening.
The Moon is passing through about ∠16° of the ♍ Virgo tropical zodiac sector.
The lunar disc appears visually about 2.5% wider than the solar disc. The Moon’s and Sun’s apparent angular diameters are ∠1935" and ∠1888".
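The quoted angular diameters can be checked directly (rounding of the arcsecond values shifts the last digit):

```python
MOON_ARCSEC = 1935.0  # apparent angular diameter of the Moon, from the text
SUN_ARCSEC = 1888.0   # apparent angular diameter of the Sun, from the text

# Percentage by which the lunar disc exceeds the solar disc.
percent_wider = (MOON_ARCSEC / SUN_ARCSEC - 1.0) * 100.0  # about 2.5 with these rounded values
```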
The next Full Moon is the Strawberry Moon of June 2018, in 8 days, on 28 June 2018 at 04:53.
There is a low ocean tide on this date. The Sun’s and Moon’s gravitational forces are not aligned but meet at a large angle, so their combined tidal force is weak.
The Moon is 6 days young. Earth’s natural satellite is moving from the beginning to the first part of the current synodic month. This is lunation 228 of the Meeus index, or 1181 of the Brown series.
The length of the current lunation (228) is 29 days, 7 hours and 5 minutes. This is the year’s shortest synodic month of 2018, 5 minutes shorter than the next lunation (229).
The current synodic month is 5 hours and 39 minutes shorter than the mean synodic month, but still 30 minutes longer than the 21st century’s shortest.
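The comparison with the mean synodic month (29 days, 12 hours, 44 minutes to the nearest minute) is simple calendar arithmetic:

```python
from datetime import timedelta

MEAN_SYNODIC = timedelta(days=29, hours=12, minutes=44)  # mean synodic month, to the minute
THIS_LUNATION = timedelta(days=29, hours=7, minutes=5)   # lunation 228, from the text

shortfall = MEAN_SYNODIC - THIS_LUNATION  # 5 hours 39 minutes, as stated
```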
This lunation’s true anomaly is ∠340.3°. At the beginning of the next synodic month the true anomaly will be ∠356°. The lengths of upcoming synodic months will keep decreasing as the true anomaly approaches the value for a New Moon at the point of perigee (∠0° or ∠360°).
The Moon is 4 days past perigee, which occurred on 14 June 2018 at 23:55 in ♋ Cancer. The lunar orbit is getting wider as the Moon moves outward from the Earth. It will keep this direction for the next 10 days, until it reaches the point of next apogee on 30 June 2018 at 02:43 in ♑ Capricorn.
The Moon is 370 428 km (230 173 mi) from Earth on this date. It moves farther away over the next 10 days until apogee, when the Earth-Moon distance will reach 406 061 km (252 315 mi).
Two days after its ascending node, which occurred on 16 June 2018 at 17:50 in ♋ Cancer, the Moon is following the northern part of its orbit for the next 11 days, until it crosses the ecliptic from North to South at the descending node on 30 June 2018 at 16:44 in ♒ Aquarius.
Two days after the beginning of the current draconic month in ♋ Cancer, the Moon is moving through its first part.
It is 4 days after the previous North standstill of 15 June 2018 at 00:52 in ♋ Cancer, when the Moon reached a northern declination of ∠20.754°. Over the next 9 days the lunar orbit moves southward, reaching a South declination of ∠-20.775° at the next southern standstill on 28 June 2018 at 14:30 in ♑ Capricorn.
In 8 days, on 28 June 2018 at 04:53 in ♑ Capricorn, the Moon will stand in Full Moon geocentric opposition to the Sun; this alignment forms the next Sun-Earth-Moon syzygy.
About This Chapter
Below is a sample breakdown of the Solar System's Smaller Objects and Satellites chapter into a 5-day school week. Based on the pace of your course, you may need to adapt the lesson plan to fit your needs.
| Day | Topics | Key Terms and Concepts Covered |
| --- | --- | --- |
| Monday | Satellite formation; Moons of the Jovian planets; Tidal forces and heat transfer on Jovian satellites | Theories of how the Jovian satellites were formed; Similarities and differences of the moons of the Jovian planets; Role of tidal forces in heat transfer on the Jovian planet satellites and how this makes them more amenable to supporting life |
| Tuesday | Asteroids, meteorites, comets and meteoroids | Definitions and characteristics; Origins and orbits of meteoroids, meteor showers, sporadic meteors; Meteorite classifications and characteristics |
| Wednesday | | How a shooting star occurs; Iron, stone and stony-iron meteorites, subtypes of meteorites |
| Thursday | | Origins and properties of asteroids, NEOs, Apollo objects, the asteroid belt, Trojan asteroids, centaurs; Origins and properties of comets, dust tails, coma, gas tails, Oort cloud, the Kuiper belt |
| Friday | | TNOs, KBOs and Plutinos; Pluto, Eris, Haumea and Ceres |
1. Moons of the Jovian Planets
This lesson will introduce you to four well-known Jovian satellites called Titan, Triton, Miranda, and Ganymede. You'll be able to appreciate how vastly different just four of the hundreds of moons in our solar system are.
2. Satellite Formation in Our Solar System
This lesson will describe regular, irregular, collision fragments, captured asteroids, and other types of satellites orbiting around the planets of the solar system.
3. Tidal Forces & Heat Transfer on Jovian Satellites
In this lesson, you'll learn about one of the ways by which Jovian satellites may stay warm through tidal heating, orbital resonance, convection, and conduction.
4. Asteroids, Meteorites & Comets: Definitions and Characteristics
This lesson will cover the definitions and characteristics of asteroids, comets and meteorites. It will also explore what impact they have had on Earth and the impact they might have in the future.
5. Meteoroids: Origin & Orbits
This lesson will describe the origins of meteoroids, what comets have to do with their orbits, and the parent bodies of the meteorites we find on Earth.
6. The Formation of Shooting Stars
This lesson will explain to you the differences between a meteor, meteoroid, and meteorite as well as how falling stars form and what the different kinds of meteorites are.
7. Meteorite Classifications & Characteristics
This lesson will go over the major types of meteorites and their major subclasses. Important terms and concepts we'll cover include the iron meteorites, stony-iron meteorites, stony meteorites, and carbonaceous chondrites.
8. Asteroids: Origin & Properties
This lesson will go over the history, origin, and orbit of asteroids. You'll also learn which asteroids are the most common type and what NEOs, Apollo objects, and many other objects are.
9. Comets: Origin & Properties
This lesson will define and discuss comets, a coma, dust tail, gas tail, sublimation, the composition of comets and their origins in the Oort cloud and Kuiper belt.
10. What Are Trans-Neptunian Objects?
This lesson will discuss trans-Neptunian objects, the Kuiper belt, Kuiper belt objects, dwarf planets, and plutinos, as well as where they are found in our solar system.
11. Dwarf Planets of the Solar System: Pluto, Eris, Haumea & Ceres
Discover why Pluto had to leave the league of planets and was downgraded to a dwarf planet. Learn the definitions for planet and dwarf planet. Find out about the currently classified dwarf planets in our solar system: Pluto, Eris, Ceres, Haumea and Makemake.
A black hole flickering in the Milky Way galaxy has been filmed in unprecedented detail, with a new high frame-rate technique that is helping us understand the wild dynamics of these most enigmatic objects.
The black hole is named MAXI J1820+070, discovered in 2018, roughly 7 times the mass of the Sun and just 10,000 light-years from Earth.
As far as black holes go, it's way small - the lowest mass that we think a black hole can be is around 5 Suns - but there's something else really interesting about it. It's flickering, emitting a whole bunch of X-ray and visible light radiation, as it actively slurps down matter from a nearby star.
Normally black holes - especially small, quiescent black holes - are very hard to see. Sagittarius A*, the supermassive black hole at the centre of the Milky Way, is relatively quiet, but it's also relatively easy to study, because we can track the orbits of things moving around it.
But Sgr A* is 4 million times the mass of the Sun, and therefore acts as the centre of a massive system. A black hole only 7 times the mass of the Sun isn't likely to have as many orbiters. However, many stars (including dead stars like black holes) are in binary systems with other stars - and such black holes can eat up material stripped off their binary companions.
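Tracking those orbits is exactly how Sgr A*'s mass is weighed: Kepler's third law turns an orbit's size and period into a central mass. The sketch below is a minimal illustration, using round numbers close to the published orbit of the star S2 (period ~16 years, semi-major axis ~970 AU); those inputs are illustrative assumptions, not figures quoted in this article.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m
YEAR = 3.156e7         # seconds per year

def central_mass(a_au, period_yr):
    """Kepler's third law: M = 4*pi^2 * a^3 / (G * T^2)."""
    a = a_au * AU
    t = period_yr * YEAR
    return 4 * math.pi**2 * a**3 / (G * t**2)

# Illustrative values close to S2's orbit around Sgr A*.
m = central_mass(970, 16.0)
print(f"Central mass ~ {m / M_SUN:.2e} solar masses")  # a few million
```

With these rough inputs the estimate lands in the millions of solar masses, consistent with the ~4 million figure above; precise, well-sampled stellar orbits tighten it considerably.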
That's what astronomers believe is happening with MAXI J1820+070. As the black hole strips its BFF star, the material forms an accretion disc around the black hole, where frictional, magnetic and gravitational forces compress it and create incredible heat.
In turn, this produces flickering electromagnetic radiation, and this is what the researchers captured, at a rate of over 300 frames per second, in optical light using the HiPERCAM instrument on the Gran Telescopio Canarias and in X-rays by NASA's NICER observatory on the International Space Station.
"The movie was made using real data, but slowed down to 1/10th of actual speed to allow the most rapid flares to be discerned by the human eye," said astronomer John Paice of the University of Southampton and the Inter-University Centre for Astronomy & Astrophysics .
"We can see how the material around the black hole is so bright, it's outshining the star that it is consuming, and the fastest flickers last only a few milliseconds - that's the output of a hundred Suns and more being emitted in the blink of an eye!"
This multi-messenger approach meant that the team could simultaneously track both types of radiation: a rise in one was accompanied by a rise in the other.
But, interestingly, there was a time gap: each optical flash was preceded by an X-ray flash just a split-second earlier - a signal that, the researchers said, indicated the presence of plasma, a highly ionised and electrically conductive state of matter, extremely close to the black hole.
This is the third system in which such a lag has been seen, and the most detailed observation of the phenomenon yet. And, to mangle an Ian Fleming quote, once is chance; twice is a coincidence; but the third time is indicative of a pattern.
"The fact that we now see this in three systems strengthens the idea that it is a unifying characteristic of such growing black holes. If true, this must be telling us something fundamental about how plasma flows around black holes operate," said astronomer Poshak Gandhi of the University of Southampton.
"Our best ideas invoke a deep connection between inspiralling and outflowing bits of the plasma. But these are extreme physical conditions that we cannot replicate in Earth laboratories, and we don't understand how nature manages this. Such data will be crucial for homing in on the correct theory."
The research has been published in Monthly Notices of the Royal Astronomical Society. | 0.809994 | 3.951577 |
Humans have always been explorers. Even when our population was still numbered in the millions rather than billions, the great civilizations of old traveled to the edges of the known world to find out what lay beyond. Today mankind has conquered almost every corner of Earth. And we have discovered that a planet that once seemed incredibly vast has only limited space and resources. In his best-selling book Cosmos, the great astrophysicist Carl Sagan wrote:
“Exploration is in our nature. We began as wanderers, and we are wanderers still. We have lingered long enough on the shores of the cosmic ocean. We are ready at last to set sail for the stars.”
– Carl Sagan in Cosmos
It was his opinion that the exploration of space is a natural consequence of humanity’s inherent thirst for probing the unknown. Many of the great minds of the past and present share this opinion. Some even go further and claim that venturing into space is crucial for the survival of humanity. Among them was the late physicist Stephen Hawking, who wrote:
“I believe that life on Earth is at an ever-increasing risk of being wiped out by a disaster. I think the human race has no future if it doesn’t go to space. We need to inspire the next generation to become engaged in space and in science in general, to ask questions: What will we find when we go to space? Is there alien life, or are we alone? What will a sunset on Mars look like?”
– Stephen Hawking in How to Make a Starship by Julian Guthrie
To many, these grand visions seem far-fetched and remote. After all, they don’t really apply to our daily life but concern all of humanity and its future. So why do we send humans into space?
Indeed, the exploration of space and its resources is at the center of the answer to this question. The scientific interest in understanding our universe and its evolution, right up to the point where humans came to exist, is one aspect of it. But there’s also the more practical aspects that do apply to our daily life—today and in the future.
Much of our life is dominated by technologies enabled by spaceflight. GPS, weather forecasts, satellite telephones, Google Maps—all these services would be unthinkable without satellites orbiting our planet. But, you may say, these satellites are robotic; they don't need humans to function. That is true. But many other current and future activities cannot be performed by robots alone.
Astronauts on the International Space Station perform research that not only helps us to further explore outer space but also has applications back on Earth. In fact, many space agencies favor research that has applications on Earth when they select new experiments for funding. It’s a wide-ranging field, including biology, medicine, materials science, and many more. Oftentimes space is the only place where this research can be performed, for example because it requires prolonged periods of weightlessness.
In the future, activities requiring humans in space will only expand. Asteroid mining, for instance, can be a great source of materials that are rare or hardly accessible on Earth, such as the rare-earth metals powering the electronic device you're reading this post on. Every day, large swathes of our planet are destroyed to mine these metals. Asteroids could supply our demand for centuries to come without environmental disaster. And that's just one of many examples.
Over the years, space research has created many technologies that have entered our daily life. NASA lists almost 2,000 spinoffs that have found their way into commercial products in areas such as transportation, health and medicine, information technology, public safety, and others. Last, but not least, it is worth mentioning that while only a few astronauts go into space, tens of thousands of people worldwide have secure jobs because of the space industry.
Photo credit: NASA | 0.867885 | 3.052428 |
A project called the Event Horizon Telescope delivered a fuzzy view of the dark monster at the center of an elliptical galaxy known as M87. The edge of the black hole’s dark circle, known as the event horizon, was surrounded by the bright glare of superheated material falling into the black hole.
The light that makes up the image is not coming from the black hole – black holes do not emit any light, hence the name. Instead, the image shows the black hole’s silhouette against a background of hot, glowing matter that is being inexorably pulled in by its powerful gravity.
We have peered into the abyss for the very first time. The Event Horizon Telescope (EHT), which uses a network of telescopes around the globe to turn all of Earth into an enormous radio telescope, has taken the first direct image of a black hole.
Need a kick-start? You would be hard-pressed to find one more powerful than a supernova – a sudden explosion in which a dying star ejects most of its mass. That is what happened to the pulsar pictured here, sending it racing away from its home with a tail of particles and magnetic energy stretching behind it for 13 light years.
The $8.9 billion James Webb Space Telescope may be the last big-budget observatory that NASA launches for a while. The White House’s proposed 2020 budget cancels the Wide-Field Infrared Survey Telescope (WFIRST), a $3.2 billion space mission viewed as a linchpin of astrophysics research through the 2020s and beyond.
If someone asks you what planet is closest to Earth, you’ll probably blurt out Venus. That’s a perfectly normal thing to say, but it’s also wrong. Numerous websites and even NASA itself say Venus is our closest planetary neighbor. A new article in Physics Today lays out a more accurate way to determine which planets are closest together. It turns out the averages are highly counterintuitive. Mercury is the closest planet to Earth -- in fact, it’s the closest planet to every other planet.
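The Physics Today argument is easy to reproduce: average the instantaneous distance between two orbits over all relative phases. The sketch below uses circular, coplanar orbits with uniformly distributed relative angles - a simplification of the real elliptical, inclined orbits - with semi-major axes in AU:

```python
import math

def mean_separation(r1, r2, n=100_000):
    """Time-averaged distance between two planets on circular, coplanar
    orbits, assuming their relative orbital phase is uniformly distributed."""
    total = 0.0
    for i in range(n):
        theta = 2 * math.pi * i / n
        total += math.sqrt(r1 * r1 + r2 * r2 - 2 * r1 * r2 * math.cos(theta))
    return total / n

# Semi-major axes in AU (circular-orbit approximation).
for name, a in [("Mercury", 0.387), ("Venus", 0.723), ("Mars", 1.524)]:
    print(f"{name}-Earth average distance: {mean_separation(1.0, a):.3f} AU")
```

Run it and Mercury comes out closest to Earth on average (about 1.04 AU) despite Venus's closer approaches, because Venus spends much of its orbit on the far side of the Sun.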
Two new academic papers, one published in The Astronomical Journal and the other in Physics Reports, present new evidence that a large, as yet undiscovered planet is lurking in the outer solar system. Both papers coincide with the three-year anniversary of an announcement by astronomers Michael Brown and Konstantin Batygin, both of Caltech, of their theory that a large, distant planet is responsible for the unique clustering of several Kuiper Belt Objects (KBOs) far beyond Neptune and Pluto. Specifically, these KBOs are in orbits perpendicular to the plane of the solar system.
Our image of the outer solar system in decades past was much simpler than it is today. Pluto was the ninth planet, and that was the end of it except for some scattered asteroids and comets. Now, science doesn’t consider Pluto a planet, but some believe there’s a still-undiscovered ninth planet out there tweaking the orbit of small planetoids.
Despite recent issues with one of its instruments, the Hubble Space Telescope is expected to last at least another five years. A new report suggests that the iconic spacecraft has a strong chance of enduring through the mid-2020s. "Right now, all of the subsystems and the instruments have a reliability exceeding 80 percent through 2025..."
What is the "dark side" of the moon? The short answer? It's a misnomer. A cool-sounding misnomer! But a misnomer. Assuming they aren't talking about the Pink Floyd album or the French mockumentary, people who say "the dark side of the moon" are almost always referring to the moon's far side--which, despite pointing permanently away from those of us planetside, actually sees as much sunlight as the side facing Earth. | 0.896217 | 3.770889 |
ESA and NASA are testing defenses against an asteroid threat:
25 June 2018: Planning for humankind’s first mission to a binary asteroid system has entered its next engineering phase. ESA’s proposed Hera mission would also be Europe’s contribution to an ambitious planetary defence experiment.
Named for the Greek goddess of marriage, Hera would fly to the Didymos pair of Near-Earth asteroids: the 780 m-diameter mountain-sized main body is orbited by a 160 m moon, informally called ‘Didymoon’, about the same size as the Great Pyramid of Giza.
“Such a binary asteroid system is the perfect testbed for a planetary defence experiment but is also an entirely new environment for asteroid investigations. Although binaries make up 15% of all known asteroids, they have never been explored before, and we anticipate many surprises,”
explains Hera manager Ian Carnelli.
“The extremely low-gravity environment also presents new challenges to the guidance and navigation systems. Fortunately we can count on the unique experience of ESA’s Rosetta operations team which is an incredible asset for the Hera mission.”
The smaller Didymoon is Hera’s main focus: the spacecraft would perform high-resolution visual, laser and radio science mapping of the moon, which will be the smallest asteroid visited so far, to build detailed maps of its surface and interior structure.
By the time Hera reaches Didymos, in 2026, Didymoon will have achieved historic significance: the first object in the Solar System to have its orbit shifted by human effort in a measurable way.
A NASA mission called the Double Asteroid Redirection Test, or DART, is due to collide with it in October 2022. The impact will lead to a change in the duration of Didymoon’s orbit around the main body. Ground observatories all around the world will view the collision, but from a minimum distance of 11 million km away.
“Essential information will be missing following the DART impact – which is where Hera comes in,” adds Ian. “Hera’s close-up survey will give us the mass of Didymoon, the shape of the crater, as well as physical and dynamical properties of Didymoon.
“This key data gathered by Hera will turn a grand but one-off experiment into a well-understood planetary defence technique: one that could in principle be repeated if we ever need to stop an incoming asteroid.”
The traditional method of estimating the mass of a planetary body is to measure its gravitational pull on a spacecraft. That is not workable within the Didymos system: Didymoon’s gravitational field would be swamped by that of its larger partner.
Instead, Hera imagery will be used to track key landmarks on the surface on the bigger body, ‘Didymain’, such as boulders or craters. By measuring the ‘wobble’ Didymoon causes its parent, relative to the common centre of gravity of the overall two-body system, its mass could be determined with an accuracy over 90%.
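The geometry behind that wobble measurement is just the two-body barycenter: the primary circles the common centre of mass at a radius of separation × m_moon / (m_main + m_moon). A rough sketch using the diameters quoted above; the ~1.2 km separation and the equal-density assumption are illustrative guesses, not mission figures:

```python
# Sketch of the "wobble" mass measurement: the primary circles the
# two-body barycenter at a radius set by the mass ratio.
# Diameters (780 m, 160 m) come from the article; the separation and
# the equal-density assumption are illustrative, not mission values.

d_main, d_moon = 780.0, 160.0      # diameters, m
separation = 1200.0                # assumed orbital separation, m

mass_ratio = (d_moon / d_main) ** 3            # equal density: mass ~ size^3
wobble = separation * mass_ratio / (1 + mass_ratio)
print(f"Primary's barycentric wobble ~ {wobble:.1f} m")
```

Under those assumptions the primary's wobble is only about ten metres, which is why Hera must track landmarks such as boulders and craters at high resolution to see it.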
Hera will also measure the crater left by DART to a resolution of 10 cm, accomplished through a series of daring flybys, giving insight into the surface characteristics and internal composition of the asteroid.
“Hera benefits from more than five years of work put into ESA’s former Asteroid Impact Mission,” comments Ian. “Its main instrument is a replica of an asteroid imager already flying in space – the Framing Camera used by NASA’s Dawn mission as it surveys Ceres, which is provided by the German Aerospace Center, DLR.
“It would also carry a ‘laser radar’ lidar for surface ranging, as well as a hyperspectral imager to characterise surface properties. In addition, Hera will deploy Europe’s first deep space CubeSats to gather additional science as well as test advanced multi-spacecraft intersatellite links.”
NASA’s DART mission meanwhile has passed its preliminary design review and is about to enter its ‘Phase C’ detailed design stage. | 0.868179 | 3.650651 |
Syzygy V: Red Shift went live this week! Like every title in my Syzygy hexalogy, this one is an astronomy term that implies various levels of significance for the story. What is red shift in the scientific sense? When an object in space (like a star) moves away from us, its light wavelength stretches out, “shifting” the light toward the red end of the visible spectrum. (Objects moving closer shorten their wavelengths and “blueshift” in the opposite direction.) This phenomenon helps astronomers do things like find new stars and study the expansion of the universe.
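For the curious, the measurement itself is one line of arithmetic: compare an observed spectral line's wavelength with its laboratory rest wavelength. A minimal sketch with made-up illustrative numbers (a hydrogen-alpha line shifted from 656.3 nm to 662.9 nm):

```python
C_KM_S = 299_792.458  # speed of light, km/s

def redshift(lambda_obs, lambda_rest):
    """z = (observed - rest) / rest; positive z means a redshift (receding)."""
    return (lambda_obs - lambda_rest) / lambda_rest

# Illustrative numbers: hydrogen-alpha (rest 656.3 nm) observed at 662.9 nm.
z = redshift(662.9, 656.3)
v = C_KM_S * z  # non-relativistic approximation, valid only for z << 1
print(f"z = {z:.4f}, recession velocity ~ {v:.0f} km/s")
```

A negative z would be a blueshift - the object approaching us - and for speeds approaching c the full relativistic formula replaces the simple v ≈ cz.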
I expanded my own appreciation for astronomy a few weeks ago, when my Laddie and I traveled to Hawaii. We snorkeled with sea turtles, steered an ocean kayak through treacherous turquoise waves, and hiked the unpaved trails of two different islands. But one of the most stunning experiences was our visit to Mauna Kea. This dormant volcano stands more than 4,000 meters above sea level. If measured from its base on the ocean floor, it tops 10,000 meters, surpassing Mt Everest as the tallest mountain on Earth.
Hawaiian mythology describes Mauna Kea as a realm sacred to various deities. Today, it also houses some of the most sophisticated astronomy equipment in the world, including the Keck Observatories that examine exoplanets discovered by the Kepler mission. Mauna Kea’s arid conditions and removal from light pollution make it one of the best spots on earth for astronomical observation. Arriving at the visitor’s station in the wee hours, we shivered in the parking lot (think Hawaii is all tropical heat? Not at almost 3,000 meters!). Kilauea cast a ruby glow in the distance beyond Mauna Loa, the Big Island’s other major peak, where in September researchers concluded a simulated Mars mission. The chill felt distant, however, once I raised my eyes.
Orion–the familiar figure I admire from the driveway on clear autumn mornings, my moment of communion with the cosmos before schlepping off to a cubicle–greeted me. And he’d brought friends. In the extreme clarity I counted about a dozen stars in the Pleiades (“Seven Sisters”) with my eyes alone. Binoculars helped me spot deep space objects like the hazy corona of the M43 nebula. The faint lavender banner of the Milky Way unfurled across the black as we watched the brilliant flare of Venus, the “dawn star”, ascend over the eastern peaks.
Polynesian wayfinders (no, Disney did not invent them for Moana, they actually existed) relied on a staggeringly vast knowledge of celestial objects like these to navigate thousands of leagues across uncharted oceans. Unlike the localized groupings associated with ancient Greek astronomy, some of the traditional Hawaiian constellations cover vast swaths of the sky, such as Ke Ka o Makali‘i (“The Canoe-Bailer of Makali‘i”) and Hoku-‘iwa, the frigate-bird star. Fascinating, how many stories we humans have told ourselves about the same configurations of stars! It forges a connection not only between cultural spheres, but between points in space-time. Under Hawaii’s spangled night, the same view an ancient wayfinder might have used to guide a vessel steered me to a new perspective on my place in the universe and the continuum of human life, a humbling yet uplifting experience.
After marveling at ancient interpretations of the heavens, we journeyed even higher to take in the modern counterparts. A tortuous path up the mountain brought us near the summit. The summit proper is off-limits because of its Hawaiian cultural significance, but the surrounding area hosts 13 observation facilities supported by 11 different countries. Although they’re called telescopes, they’re not the sort you peer through with your eyes: these high-tech beauties study both visible and infrared light, the sub-millimeter spectrum, and even conduct radio astronomy. Visitor information recommends taking it slow at the high elevation. Heedless, I trotted around the red gravel in a heady state that had nothing to do with the altitude, admiring the observatories and thinking of all their incredible discoveries.
Dawn warmed the silver domes as the sun rose over Hilo Bay. The sky performed a red shift of its own, fading backward through the rainbow from deep violet and indigo gradients to gold, orange, and molten red. Mountains in silhouette pierced a fantastical cloudscape. No wonder indigenous people held this place sacred. But I find spiritual satisfaction in what astronomy provides us, too. Mauna Kea boosts us higher than its 4,000 meter summit to peek over the edge of our galaxy and gaze at the universe beyond. We’re still telling ourselves stories about the stars, adding chapters each time those observatories–or an adventurous fiction writer–turn their gazes skyward. | 0.825232 | 3.36679 |
Five years after astronomers discovered a pair of mysterious plumes of radiation at the heart of our galaxy, scientists may be one step closer to understanding exactly what these ginormous "bubbles" are made of and how they came to be.
The so-called "Fermi Bubbles" extend about 25,000 light-years above and below the Milky Way's galactic plane and were first spotted in 2010 by NASA's gamma ray-detecting Fermi telescope. Scientists believe the bubbles are evidence of an ancient cataclysm at the Milky Way's center.
Now, a new study suggests the violent event occurred some 2.5 million to 4 million years ago, and that it blasted gas outward at speeds of up to two million miles an hour.
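Those two numbers invite a sanity check: at a constant two million miles an hour, how long would gas take to climb 25,000 light-years? A constant-speed estimate is crude - the real outflow has presumably decelerated - so it only needs to land within a factor of a few of the 2.5-to-4-million-year age:

```python
LY_KM = 9.4607e12      # kilometres per light-year
YEAR_S = 3.156e7       # seconds per year

extent_km = 25_000 * LY_KM               # bubble height from the article
speed_km_s = 2e6 * 1.60934 / 3600.0      # "two million miles an hour" in km/s

crossing_time_yr = extent_km / speed_km_s / YEAR_S
print(f"Constant-speed crossing time ~ {crossing_time_yr / 1e6:.1f} Myr")
```

The answer comes out around 8 million years, the same order of magnitude as the quoted age, which is about as much agreement as such a back-of-envelope estimate can deliver.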
This graphic shows how NASA's Hubble Space Telescope probed light from a distant quasar to analyze the Fermi Bubbles.
For the research, scientists at the Space Telescope Science Institute in Baltimore used instrumentation on the Hubble Space Telescope to study ultraviolet light from a faraway quasar as it passed through one of the bubbles--the astronomers liken this light to "a needle piercing a balloon." The light carries information about the speed, composition, temperature, and mass of the gas that makes up the bubbles.
What did the scientists discover? In addition to gauging the speed of the expelled gas, they found it was made up of silicon, carbon, and aluminum--all of which are heavy elements produced inside stars.
The researchers hope the new Hubble data will help them determine what caused the massive outburst, NASA says.
One theory is that a group of stars fell into Sagittarius A*, the supermassive black hole at the heart of the Milky Way, which then "belched" out the gas. Another theory is that a spurt of star formation occurred near the Milky Way's galactic center, which produced gas-ejecting supernovas.
"At the moment we cannot distinguish between these theories," Dr. Andrew Fox, an astronomer at the institute and the leader of the scientists who did the research, told The Huffington Post in an email, "but that may change as we analyze more sightlines in our Hubble Space Telescope survey because then we can calculate the energetics of the outflow, and compare that with the theoretical predictions from the two models."
As a next step, Fox and his team plan to examine the full spectrum of light from the quasar and study light from other quasars.
"It looks like the outflows are a hiccup," Fox said in a written statement. "There may have been repeated ejections of material that have blown up, and we're catching the latest one. By studying the light from the other quasars in our program, we may be able to detect the ancient remnants of previous outflows."
The new research has been accepted for publication in The Astrophysical Journal Letters and was presented on Jan. 5 at the American Astronomical Society meeting in Seattle. | 0.862684 | 3.792675 |
Image credit: NASA
NASA has selected five proposals as part of its Small Explorer (SMEX) missions – these are low-cost, highly specialized missions to help advance science in a specific area. The candidates are: the Normal-incidence Extreme Ultraviolet Spectrometer, the Dark Universe Observatory, the Interstellar Boundary Explorer, the Nuclear Spectroscopic Telescope Array, and the Jupiter Magnetospheric Explorer. Two finalists will eventually be chosen for launch by 2007-2008.
NASA recently selected candidate mission proposals that would study the universe, from Jupiter and the sun to black holes and dark matter. The proposals are candidates for missions in NASA’s Explorer Program of lower cost, highly focused, rapid-development scientific spacecraft.
Following detailed mission concept studies, NASA intends to select two of the mission proposals by fall 2004 for full development as Small Explorer (SMEX) missions. The two missions developed for flight will be launched in 2007 and 2008.
NASA has also decided to fund as a “Mission of Opportunity” a balloon-borne experiment to detect high-energy neutrinos, ghostly particles that fill the universe.
“The Small Explorer mission proposals we received show that the scientific community has a lot of innovative ideas on ways to study some of the most vexing questions in science, and to do it on a relatively small budget,” said Dr. Ed Weiler, associate administrator for space science at NASA Headquarters, Washington. “It was difficult to select only a few from among the many great proposals we received, but I think the selected proposals have a great chance to really push back the frontiers of knowledge,” he said.
The selected proposals were judged to have the best science value among 36 submitted to NASA in February 2003. Each will receive $450,000 ($250,000 for the Mission of Opportunity) to conduct a five-month implementation feasibility study. The selected SMEX proposals are:
- The Normal-incidence Extreme Ultraviolet Spectrometer (NEXUS): a solar spectrometer with major advances in sensitivity and resolution to reveal the cause of coronal heating and solar wind acceleration. Joseph M. Davila of NASA’s Goddard Space Flight Center (GSFC), Greenbelt, Md., would lead NEXUS at a total mission cost to NASA of $131 million.
- The Dark Universe Observatory (DUO): seven X-ray telescopes to measure the dark matter and dark energy that dominate the content of the universe with 100 times the sensitivity of previous X-ray studies. Richard E. Griffiths of Carnegie Mellon University, Pittsburgh, would lead DUO at a total mission cost to NASA of $132 million.
- The Interstellar Boundary Explorer (IBEX): a pair of cameras to image the boundary between the solar system and interstellar space with 100 times the sensitivity of previous experiments. David J. McComas of the Southwest Research Institute, San Antonio, would lead IBEX at a total mission cost to NASA of $132 million.
- The Nuclear Spectroscopic Telescope Array (NuSTAR): a telescope to carry out a census of black holes with 1000 times more sensitivity than previous experiments. NuSTAR would be lead by Fiona Anne Harrison of the California Institute of Technology, Pasadena, at a total mission cost to NASA of $132 million.
- The Jupiter Magnetospheric Explorer (JMEX): a telescope to study Jupiter’s aurora and magnetosphere from Earth orbit. Nicholas M. Schneider of the University of Colorado at Boulder would lead JMEX, at a total mission cost to NASA of $133 million.
NASA selected a long-duration balloon payload as the mission of opportunity. The Antarctic Impulsive Transient Antenna (ANITA) would detect radio waves emitted when high-energy neutrinos interact in the Antarctic ice shelf. ANITA would be led by Peter W. Gorham of the University of Hawaii at Manoa in Honolulu, at a total mission cost to NASA of $35 million.
In addition, NASA selected a proposed mission for technology-development funding of the proposed instrument. Jean Swank of GSFC will develop a polarization sensitive X-ray detector. Swank will receive up to $300,000 over the next two years for her study.
The five selected SMEX proposals are vying to be the tenth and eleventh SMEX missions selected for full development. Recent selections include the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI), launched in February 2002; the Galaxy Evolution Explorer (GALEX), launched in April 2003; and the Aeronomy of Ice in the Mesosphere mission (AIM), to be launched in 2006. The Explorer Program, managed by GSFC for NASA’s Office of Space Science, is designed to provide frequent, low-cost access to space for physics and astronomy missions with small to mid-sized spacecraft.
Original Source: NASA News Release | 0.912882 | 3.272668 |
Constellation Map generated with Starry Night Pro 6.
This month's constellation is one of the best in the Night Sky for combining ancient tradition, mythology, modern astronomy, world history, stellar eye candy, and even modern engineering into one reasonably small bordered pen of celestial real estate. The early evening sight of the constellation Taurus the Bull in the November southeast sky at Darling Hill might appear to CNY viewers as a snow divining rod pointing to the western Great Lakes in anticipation of winter and the upcoming lake-effect snow. Taurus is a distinctive constellation and very easy to identify once you've spotted its central asterism. The brightest star in the constellation is almost equidistant from the easily identified Pleiades and the shoulder of the constellation Orion, the celestial hunter Taurus is running from as the sky appears to move (or, from the most commonly drawn orientation, right towards him!). While Taurus is mildly sparse in quantity when it comes to dark sky objects, it more than makes up for it in quality, hosting two of the most significant stellar sights in the Night Sky.
Like its neighbor Orion, Taurus the Bull is a very, very old constellation and has been recognized as a bull for the duration of its existence in Middle Eastern and European traditions. Earliest records of any kind place the birth of Taurus in the Copper (Chalcolithic) Age (4500 – 3500 B.C.E.), although some records support its existence even earlier. A bull and what appears to be a Pleiades-like star formation appear on a wall in the Lascaux Caves of France (see right). Although the interpretation of the constellation set is controversial, this arrangement may date back as far as 16,500 years. Personally, I find even the thought of that kind of continuity between what we might see in the winter skies and what our ancestors also saw at night both comforting and humbling. Many of the same stand-out patterns we know today no doubt leapt out at them too, as the brightest objects in the sky marked out regular places after the Sun set, and the great distance we've traveled in history might be barely perceptible to an ancient astronomer going simply by the positions of stars.
Lascaux Cave bull and star pattern. From the Institute for Interdisciplinary Studies and spacetoday.org.
We begin the tour by aiming our sights at the bright eye of the bull, the star Aldebaran. This orange giant is 44 times the diameter of our own Sun and has already used its hydrogen fuel, leaving this fusion engine to now graze on a steady diet of helium. Its name is derived from the Arabic for "the follower," often explained as a reference to its position below the Pleiades (so "following" this open cluster as we progress into winter). The other stars in Taurus are easy to see in darker skies but not otherwise noteworthy for their brightness at either naked-eye or binocular viewing magnification. Several of the bright stars closest to Aldebaran make up an asterism that a new observer might confuse with the complete constellation. The V-shaped Hyades (center of the image below and shown at right with white border) are composed of five stars, with Aldebaran the brightest tip. I'll admit that the first time I marked out the space for Taurus, I confused this asterism (and lambda-Tau to the west) with the entire object before double-checking the size. No bull. The Hyades star closest to Aldebaran, theta-Tau, is actually a pair of pairs, although they only appear as a single bright pair in binoculars and telescopes.
The Hyades (white) and Pleiades (red). From Lynn Laux, nightskyinfo.com.
Caught within the bull pen is the Pleiades (M45, shown labeled below from a Hubble image). This Tiny Dipper is visible year-round during the daytime in parking lots and slow-moving traffic everywhere (as the object embedded within the emblem on every Subaru, the Japanese name for this asterism) and is one of the treats of winter viewing in CNY (unless VERY early morning viewing is your game or you've been trying to see Mars in the late Summer skies, in which case you've been enjoying the pre-dawn sight of M45 since August). The amount of information available on the Pleiades online and as part of space research could easily (and very likely has) fill an entire book. While the seven bright stars are identified from Greek mythology as the Seven Sisters (Sterope, Merope, Electra, Maia, Taygete, Celaeno, and Alcyone), the counting aid that comes from a pair of binoculars easily reveals nine stars. The two stars that make up the handle of this tiny dipper are the proud parents Atlas and Pleione, placed to the east of the dipper to protect their daughters from either Taurus (for being a bull) or Orion (for being a male). Given the long history of this asterism, it is perhaps not surprising that the parents decided not to stop at seven. In fact, there are over 1,000 distinct stars in the Pleiades that have been revealed as part of multiple high-resolution studies. This density of stars makes the Pleiades a unique open cluster, as there is a wealth of stars and patterns visible at virtually any magnification, from small binoculars to the largest ground-based telescopes. For my first proper viewing session, I spent one full hour simply looking at this cluster through my Nikon 12×50's, amazed at just how little we really see of the Night Sky using the 1×7 binoculars built into our heads (and, perhaps, corrected by horn-rimmed glasses).
On the opposite side of Taurus and caught between the horns is the first of the categorized Messier objects, the Crab Nebula. M1 to its friends, this nebula is a supernova remnant with a remarkable history. As documented in both Arab and Chinese texts (Europe was just coming out of its, er, Dark Ages at the time), this supernova was so bright on July 4, 1054 that it was visible during daylight hours (and, as you can guess by the date, visible without any magnification). The supernova remnant we know today as the Crab Nebula was discovered (and correlated to the original supernova) first by John Bevis in 1731, then by Charles Messier in 1758 while, as it happens, observing a comet (that Messier is known best for his catalogue of objects that were NOT comets instead of the comets he worked so diligently to discover is one of the great fun ironies of astronomy). The NASA images of the Crab Nebula reveal a dense sponge-like structure full of filaments of all sizes. The image above shows a remarkable sight – the full cycle of the pulsar at the heart of the crab that continues to magnetically drive the expansion of the nebula (in the series of frames, the pulsar lies below and to the right of a constant-brightness star).
The Crab Nebula pulsar. Image from www.strw.leidenuniv.nl
Stepping forward several hundred years, Taurus also marks the present locations of Pioneer 10 and COSMOS 1844. Pioneer 10 is currently speeding in the direction of Aldebaran, having been successfully steered through the asteroid belt to make a series of images of Jupiter. At its current velocity, this trip to Aldebaran's current location would take 2 million years, about the same amount of time it might take most of the world to decipher the meaning of the emblematic plaque attached to its exterior (below). Perhaps someday we'll have to explain to the aliens how a civilization that could launch a complicated probe into space couldn't see the multitude of planets in their own Solar System, then perhaps have to explain what happened to Pluto that it no longer appears in our Solar System images. COSMOS 1844 is one of over 2440 satellites launched by the Soviet Union (and now Russia) since the first of the COSMOS series in 1962. At mag. 5, this satellite makes for a fun artificial viewing target (with a good map in hand).
The Pioneer 10 plaque. From wikipedia.org.
The final sights for telescope viewers include four NGC objects. NGC 1746, 1647, and 1807 are open clusters with magnitudes between 6 and 7. NGC 1514 (below) is a mag 10 planetary nebula just at the far edge of the Taurus border that should be increasingly good viewing as Taurus works its way towards our zenith (1514 will be the closest it will get to our zenith by midnight, a perfect last-good-look before Darling Hill completely freezes over).
NGC 1514. From Martin Germano, seds.org.
Phenomenal viewing at a reasonably safe distance. Just be mindful not to wave your red flashlights at Aldebaran! | 0.893769 | 3.577832 |
Tuesday, April 30, 2019
Scientists have long known that Earth and Mercury have metallic cores. Like Earth, Mercury’s outer core is composed of liquid metal, but there have only been hints that Mercury’s innermost core is solid. Now, in a new study, scientists report evidence that Mercury’s inner core is indeed solid and that it is very nearly the same size as Earth’s solid inner core.
Some scientists compare Mercury to a cannonball because its metal core fills nearly 85 percent of the volume of the planet. This large core — huge compared to the other rocky planets in our solar system — has long been one of the most intriguing mysteries about Mercury. Scientists had also wondered whether Mercury might have a solid inner core.
The findings of Mercury’s solid inner core, published in AGU’s journal Geophysical Research Letters, help scientists better understand Mercury but also offer clues about how the solar system formed and how rocky planets change over time.
“Mercury’s interior is still active, due to the molten core that powers the planet’s weak magnetic field, relative to Earth’s,” said Antonio Genova, an assistant professor at Sapienza University of Rome who led the research while at NASA Goddard Space Flight Center in Greenbelt, Maryland. “Mercury’s interior has cooled more rapidly than our planet’s. Mercury may help us predict how Earth’s magnetic field will change as the core cools.”
To figure out what Mercury’s core is made of, Genova and his colleagues had to get, figuratively, closer. The team used several observations from NASA’s MESSENGER mission to probe Mercury’s interior. The researchers looked, most importantly, at the planet’s spin and gravity.
The MESSENGER spacecraft entered orbit around Mercury in March 2011 and spent four years observing this nearest planet to our Sun until it was deliberately brought down to the planet’s surface in April 2015.
Scientists used radio observations from MESSENGER to determine Mercury’s gravitational anomalies (areas of local increases or decreases in mass) and the location of its rotational pole, which allowed them to understand the orientation of the planet.
Each planet spins on an axis, also known as the pole. Mercury spins much more slowly than Earth, with its day lasting about 58 Earth days. Scientists often use tiny variations in the way an object spins to reveal clues about its internal structure. In 2007, radar observations made from Earth revealed small shifts in Mercury's spin, called librations, that proved some of the planet's core must be liquid, molten metal. But observations of the spin rate alone were not sufficient to give a clear measurement of what the inner core was like. Could there be a solid core lurking underneath, scientists wondered?
Gravity can help answer that question. “Gravity is a powerful tool to look at the deep interior of a planet because it depends on the planet’s density structure,” said Sander Goossens, a researcher at NASA Goddard and co-author of the new study.
As MESSENGER orbited Mercury over the course of its mission and got closer and closer to the surface, scientists recorded how the spacecraft accelerated under the influence of the planet’s gravity. The density structure of a planet can create subtle changes in a spacecraft’s orbit. In the later parts of the mission, MESSENGER flew about 120 miles above the surface, and less than 65 miles during its last year. The final low-altitude orbits provided the best data yet and allowed for Genova and his team to make the most accurate measurements about the internal structure of Mercury yet taken.
Genova and his team put data from MESSENGER into a sophisticated computer program that allowed them to adjust parameters and figure out what the interior composition of Mercury must be like to match the way it spins and the way the spacecraft accelerated around it. The results showed that for the best match, Mercury must have a large, solid inner core. They estimated that the solid, iron core is about 1,260 miles (2,000 kilometers) wide and makes up about half of Mercury’s entire core (about 2,440 miles, or nearly 4,000 kilometers, wide). In contrast, Earth’s solid core is about 1,500 miles (2,400 kilometers) across, taking up a little more than a third of this planet’s entire core.
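The size comparisons above are easy to check with a few lines of arithmetic. The sketch below recomputes the diameter ratios; the Mercury figures come from the study as reported, while Earth's whole-core diameter (~4,350 miles) is an assumed round figure, not a value from the article.

```python
# Diameter ratios behind the "about half" and "a little more than a
# third" comparisons. Mercury figures are as reported in the study;
# Earth's entire-core diameter is an assumed approximate value.
mercury_inner_mi = 1260.0   # solid inner core diameter (miles)
mercury_core_mi = 2440.0    # entire (inner + outer) core diameter (miles)
earth_inner_mi = 1500.0     # Earth's solid inner core diameter (miles)
earth_core_mi = 4350.0      # Earth's entire core diameter (assumption)

print(mercury_inner_mi / mercury_core_mi)  # ~0.52: about half
print(earth_inner_mi / earth_core_mi)      # ~0.34: a little more than a third
```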
“We had to pull together information from many fields: geodesy, geochemistry, orbital mechanics and gravity to find out what Mercury’s internal structure must be,” said Erwan Mazarico, a planetary scientist at NASA Goddard and co-author of the new study.
The fact that scientists needed to get close to Mercury to find out more about its interior highlights the power of sending spacecraft to other worlds, according to the researchers. Such accurate measurements of Mercury’s spin and gravity were simply not possible to make from Earth. New discoveries about Mercury are practically guaranteed to be waiting in MESSENGER’s archives, with each discovery about our local planetary neighborhood giving us a better understanding of what lies beyond.
“Every new bit of information about our solar system helps us understand the larger universe,” Genova said.
A diet rich in animal protein and meat in particular is not good for the health, a new study from the University of Eastern Finland finds, providing further backing for earlier research evidence. Men who favored animal protein over plant-based protein in their diet had a greater risk of death in a 20-year follow-up than men whose diet was more balanced in terms of their sources of protein. The findings were published in the American Journal of Clinical Nutrition.
Men whose primary sources of protein were animal-based had a 23% higher risk of death during the follow-up than men who had the most balanced ratio of animal and plant-based protein in their diet. A high intake of meat in particular seemed to be associated with adverse effects: men eating a diet rich in meat, i.e. more than 200 grams per day, had a 23% greater risk of death during the follow-up than men whose intake of meat was less than 100 grams per day. The men participating in the study mainly ate red meat. Most nutrition recommendations nowadays limit the intake of red and processed meats. In Finland, for example, the recommended maximum intake is 500 grams per week.
The study also found that a high overall intake of dietary protein was associated with a greater risk of death in men who had been diagnosed with type 2 diabetes, cardiovascular disease or cancer at the onset of the study. A similar association was not found in men without these diseases. The findings highlight the need to investigate the health effects of protein intake especially in people who have a pre-existing chronic medical condition. The mean age of the men participating in the study was 53 years at the onset, and diets clearly lacking in protein were not typical among the study population.
“However, these findings should not be generalised to older people who are at a greater risk of malnutrition and whose intake of protein often remains below the recommended amount,” PhD Student Heli Virtanen from the University of Eastern Finland points out.
Earlier studies have suggested that a high intake of animal protein, and especially the consumption of processed meats such as sausages and cold cuts, is associated with an increased risk of death. However, the big picture relating to the health effects of protein and different protein sources remains unclear.
The study is based on the Kuopio Ischaemic Heart Disease Risk Factor Study (KIHD) that analysed the dietary habits of approximately 2,600 Finnish men aged between 42 and 60 at the onset of the study in 1984-1989. The researchers studied the mortality of this study population in an average follow-up of 20 years by analysing registers provided by Statistics Finland. The analyses focused on the associations of dietary protein and protein sources with mortality during the follow-up, and other lifestyle factors and dietary habits were extensively controlled for, including the fact that those eating plenty of plant-based protein followed a healthier diet.
Monday, April 29, 2019
Researchers at the University of Minnesota have developed a unique 3D-printed transparent skull implant for mice that provides an opportunity to watch activity of the entire brain surface in real time. The device allows fundamental brain research that could provide new insight for human brain conditions such as concussions, Alzheimer’s and Parkinson’s disease.
The research is published in Nature Communications. Researchers also plan to commercialize the device, which they call See-Shell.
“What we are trying to do is to see if we can visualize and interact with large parts of the mouse brain surface, called the cortex, over long periods of time. This will give us new information about how the human brain works,” said Suhasa Kodandaramaiah, Ph.D., a co-author of the study and University of Minnesota Benjamin Mayhugh Assistant Professor of Mechanical Engineering in the College of Science and Engineering. “This technology allows us to see most of the cortex in action with unprecedented control and precision while stimulating certain parts of the brain.”
In the past, most scientists have looked at small regions of the brain and tried to understand it in detail. However, researchers are now finding that what happens in one part of the brain likely affects other parts of the brain at the same time.
One of their first studies using the See-Shell device examines how mild concussions in one part of the brain affect other parts of the brain as it reorganizes structurally and functionally. Kodandaramaiah said that mouse brains are very similar in many respects to human brains, and this device opens the door for similar research on mice looking at degenerative brain diseases that affect humans such as Alzheimer’s or Parkinson’s disease.
The technology allows the researchers to see global changes for the first time at an unprecedented time resolution. In a video produced using the device, changes in brightness of the mouse’s brain correspond to waxing and waning of neural activity. Subtle flashes are periods when the whole brain suddenly becomes active. The researchers are still trying to understand the reason for such global coordinated activity and what it means for behavior.
See film: https://www.youtube.com/watch?v=pETFswXWx0E
To make the See-Shell, researchers digitally scanned the surface of the mouse skull and then used the digital scans to create an artificial transparent skull that has the same contours as the original skull. During a precise surgery, the top of the mouse skull is replaced with the 3D-printed transparent skull device. The device allows researchers to record brain activity simultaneously while imaging the entire brain in real time.
Another advantage to using this device is that the mouse’s body did not reject the implant, which means that the researchers were able to study the same mouse brain over several months. Studies in mice over several months allow researchers to study brain aging in a way that would take decades to study in humans.
“This new device allows us to look at the brain activity at the smallest level zooming in on specific neurons while getting a big picture view of a large part of the brain surface over time,” Kodandaramaiah said. “Developing the device and showing that it works is just the beginning of what we will be able to do to advance brain research.”
Saturday, April 27, 2019
Using CRISPR gene editing, a team from Children's Hospital of Philadelphia (CHOP) and Penn Medicine has thwarted a lethal lung disease in an animal model in which a harmful mutation causes death within hours after birth. This proof-of-concept study, published today in Science Translational Medicine, showed that in utero editing could be a promising new approach for treating lung diseases before birth.
“The developing fetus has many innate properties that make it an attractive recipient for therapeutic gene editing,” said study co-leader William H. Peranteau, MD, an investigator at CHOP’s Center for Fetal Research, and a pediatric and fetal surgeon in CHOP’s Center for Fetal Diagnosis and Treatment. “Furthermore, the ability to cure or mitigate a disease via gene editing in mid- to late gestation before birth and the onset of irreversible pathology is very exciting. This is particularly true for diseases that affect the lungs, whose function becomes dramatically more important at the time of birth.”
The lung conditions the team is hoping to solve – congenital diseases such as surfactant protein deficiency, cystic fibrosis, and alpha-1 antitrypsin deficiency – are characterized by respiratory failure at birth or chronic lung disease with few options for therapies. About 22 percent of all pediatric hospital admissions are because of respiratory disorders, and congenital causes of respiratory diseases are often lethal, despite advances in care and a deeper understanding of their molecular causes. Because the lung is a barrier organ in direct contact with the outside environment, targeted delivery to correct defective genes is an attractive therapy.
“We wanted to know if this could work at all,” said study co-leader Edward E. Morrisey, PhD, a professor of Cardiovascular Medicine in the Perelman School of Medicine at the University of Pennsylvania. “The trick was how to direct the gene-editing machinery to target cells that line the airways of the lungs.”
The researchers showed that precisely timed in utero delivery of CRISPR gene-editing reagents to the amniotic fluid during fetal development resulted in targeted changes in the lungs of mice. They introduced the gene editors into developing mice four days before birth, which is analogous to the third trimester in humans.
The cells that showed the highest percentage of editing were alveolar epithelial cells and airway secretory cells lining lung airways. In 2018, a team led by Morrisey identified the alveolar epithelial progenitor (AEP) lineage, which is embedded in a larger population of cells called alveolar type 2 cells. These cells generate pulmonary surfactant, which reduces surface tension in the lungs and keeps them from collapsing with every breath. AEPs are a stable cell type in the lung and turn over very slowly, but replicate rapidly after injury to regenerate the lining of the alveoli and restore gas exchange.
In a second experiment, the researchers used prenatal gene-editing to reduce the severity of an interstitial lung disease, surfactant protein C (SFTPC) deficiency, in a mouse model that has a common disease-causing mutation found in the human SFTPC gene. One hundred percent of untreated mice with this mutation die from respiratory failure within hours of birth. In contrast, prenatal gene editing to inactivate the mutant Sftpc gene resulted in improved lung morphology and survival of over 22 percent of the animals.
Future studies will be directed towards increasing the efficiency of the gene editing in the epithelial lining of lungs as well as evaluating different mechanisms to deliver gene editing technology to lungs. “Different gene editing techniques are also being explored that may one day be able to correct the exact mutations observed in genetic lung diseases in infants,” Morrisey said.
Morrisey collaborated on a recent study led by Peranteau and Kiran Musunuru, MD, PhD, an associate professor of Cardiovascular Medicine at Penn, demonstrating the feasibility of in utero gene editing to rescue a lethal metabolic liver disease in a mouse model – the first time in utero CRISPR-mediated gene editing prevented a lethal metabolic disorder in animals. Similar to that study, Peranteau says “the current research is a proof-of-concept study highlighting the exciting future prospects for prenatal treatments including gene editing and replacement gene therapy for the treatment of congenital diseases.” | 0.890736 | 3.838185 |
Around 500 astronomers and space scientists will gather at Venue Cymru in Llandudno, Wales, from 5-9 July, for the Royal Astronomical Society National Astronomy Meeting 2015 (NAM2015, Cyfarfod Seryddiaeth Cenedlaethol 2015). The conference is the largest regular professional astronomy event in the UK and will see leading researchers from around the world presenting the latest work in a variety of fields. In his first report from the event this week, science writer and editor Kulvinder Singh Chadha presents his pick of the day’s presentations:
Rings and Loops in the stars: Planck’s stunning new images
Amazing images were released today from a new map made with data from the European Space Agency's Planck satellite. Dr Mike Peel and Dr Paddy Leahy of the Jodrell Bank Centre for Astrophysics (JBCA) presented their research. Two separate results show a ring of dust 200 light-years across, and a loop covering a third of the sky.
The new maps show vast regions of the sky producing anomalous microwave emission (AME). This process was only discovered in 1997 and could account for a large amount of galactic microwave emission around 1 centimetre in wavelength. A 200 light-year-wide dust ring around the Lambda Orionis Nebula (the 'head' of the familiar Orion constellation) is one area where it is exceptionally bright. This is the first time the ring's been seen in this way.
A wide-field map also shows synchrotron loops and spurs (where charged particles spiral around magnetic fields at close to light speed), including the huge Loop 1, discovered over 50 years ago. Even now, its distance is still very uncertain: it could be anywhere between 400 and 25,000 light-years away. Therefore, although it covers around a third of the sky, it's impossible to say how big it actually is. These — and other — different physical processes for generating microwaves were differentiated using multi-wavelength measurements from Planck, NASA's WMAP satellite, and from ground-based radio telescopes.
School solar satellite success
A satellite experiment devised by school students to study cosmic rays and the solar wind is now successfully collecting data. The Langton Ultimate Cosmic ray Intensity Detector (LUCID), uses particle detectors from CERN to study the radiation environment in low Earth orbit. The satellite was developed by Surrey Satellite Technologies Ltd. and students from the Simon Langton Grammar School for Boys. Cal Hewitt, a 16-year old student from the ‘Langton Star Centre’ research department at the school, presented LUCID’s first results at NAM2015 on 6 July.
“When orbiting the sunlit side of Earth, the signal detected by LUCID is dominated by the solar wind, allowing it to map the number and energy of protons and electrons against geographical area and time. But when it’s shielded from the Sun on the night-time leg of its orbit, we can identify cosmic ray events,” says Hewitt, who has just finished his GCSEs. He has worked with a team of other students to prepare LUCID for anticipated major data runs. The sheer computing power required for the process has meant that Hewitt has become the youngest student certified to use GridPP, the UK’s segment of a worldwide-grid of computers processing data from the Large Hadron Collider (LHC). This grid numbers thousands of computers.
LUCID tracks the direction of incoming cosmic ray and solar wind particles in three dimensions and determines the type of particles (whether protons, electrons, etc.), the energy they deposit and the resulting radiation dose. Monitoring these energetic particles is important for understanding space weather and protecting astronauts from high levels of radiation. The mission has captured half a million events during its commissioning phase and a further 250,000 will be captured during the summer to build up a map of the low Earth orbit radiation environment.
The satellite was launched on 8 July 2014 on the Innovate UK-funded TechDemoSat-1, which carries payloads from a number of UK academic and governmental institutions, having been first conceived by students in 2008. It is one of a number of research projects developed at the Langton Star Centre. "LUCID has been developed from pixel detectors used at the LHC, which were originally designed for medical imaging," says Hewitt. "This type of detector has never been used before in open space and our first data was extremely noisy but we have optimised detector settings for day and night captures." The concept of involving school students in cutting edge research via the Langton Star Centre is the brainchild of teacher Professor Becky Parker. The programme is now being rolled out nationally through the establishment of the Institute for Research in Schools.
The Discovery That Forever Changed Our Universe…
…came from a deaf American woman born on the 4th of July in 1868. Shortly after her graduation from what we now call Radcliffe, an illness caused Henrietta Swan Leavitt to lose her hearing. The Harvard College Observatory eventually hired her as a human “computer.” Her job: review the hordes of glass photographic plates and calculate the brightness of the stars in them. While reviewing a study of variable stars in the Large and Small Magellanic Clouds (small satellite galaxies orbiting our own Milky Way) she developed a fondness for the many Cepheid Variable stars within those two galaxies. A Cepheid Variable star dims and brightens over a regular period, so named because, in 1784, John Goodricke identified the first such example with the star δ Cephei in the constellation Cepheus. Leavitt became an accomplished variable star hunter, cataloguing 2,400 such stars during the course of her work – more than half the total known at the time.
In analyzing the plates, Leavitt began to notice the brighter Cepheids exhibited a longer period of variability. Four years later, after further analysis, she surmised the brightness of Cepheid Variables had a direct relationship with their period of variability. She deduced this relationship because all the stars in the Magellanic Clouds have the same distance from Earth. Since their distance is known to be constant, their relative brightness can be directly compared. She published her results in 1912. Unknown to all at the time, her discovery would forever change our understanding of the universe.
Cepheid Variables (and their kin RR Lyrae) have since become "standard candles" used to measure intergalactic distances. This discovery allows us to more precisely measure the distances of globular clusters and galaxies. Ironically, at the time of Henrietta Leavitt's discovery of the period-luminosity relationship, astronomers did not know the galactic "nebulae" they saw lay outside the boundaries of the Milky Way. It wasn't until 1923 that Edwin Hubble conclusively proved for the first time one of these galactic "nebulae" was indeed another galaxy – the Andromeda Galaxy. He did this only by discovering a Cepheid Variable within that galaxy, 2.2 million light years distant. Unfortunately, Henrietta Leavitt never saw the cosmological implications of her stellar discovery. She died of cancer in 1921.
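Leavitt's period-luminosity relation is what makes the standard-candle arithmetic work: the period gives the intrinsic brightness, and comparing that with the apparent brightness gives the distance. The sketch below uses one published V-band calibration, M = -2.43(log10 P - 1) - 4.05, together with the distance modulus; the calibration constants and the neglect of interstellar extinction are simplifying assumptions, not part of Leavitt's original paper.

```python
import math

def cepheid_distance_pc(period_days, mean_apparent_mag):
    """Estimate the distance to a classical Cepheid from its period.

    Uses an illustrative V-band period-luminosity calibration,
        M = -2.43 * (log10(P) - 1) - 4.05,
    then inverts the distance modulus  m - M = 5*log10(d_pc) - 5.
    Interstellar extinction is ignored.
    """
    abs_mag = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    mu = mean_apparent_mag - abs_mag          # distance modulus m - M
    return 10.0 ** (mu / 5.0 + 1.0)

# Delta Cephei itself: P ~ 5.37 days, mean V ~ 3.95
d = cepheid_distance_pc(5.37, 3.95)
print(f"{d:.0f} pc")  # ~290 pc; the gap from the ~270 pc literature
                      # value comes largely from the neglected extinction
```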
Though once big enough to swallow three Earths with room to spare, Jupiter’s Great Red Spot has been shrinking for a century and a half. Nobody is sure how long the storm will continue to contract or whether it will disappear altogether.
A new study suggests that it hasn’t all been downhill, though. The storm seems to have increased in area at least once along the way, and it’s growing taller as it gets smaller.
“Storms are dynamic, and that’s what we see with the Great Red Spot. It’s constantly changing in size and shape, and its winds shift, as well,” said Amy Simon, an expert in planetary atmospheres at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, and lead author of the new paper, published in the Astronomical Journal.
Scientists have noticed that Jupiter’s Great Red Spot has been getting smaller over time. Now, there’s evidence the storm is actually growing taller as it shrinks. Credits: NASA’s Goddard Space Flight Center
Observations of Jupiter date back centuries, but the first confirmed sighting of the Great Red Spot was in 1831. (Researchers aren’t certain whether earlier observers who saw a red spot on Jupiter were looking at the same storm.)
Keen observers have long been able to measure the size and drift of the Great Red Spot by fitting their telescopes with an eyepiece scored with crosshairs. A continuous record of at least one observation of this kind per year dates back to 1878.
Simon and her colleagues drew on this rich archive of historical observations and combined them with data from NASA spacecraft, starting with the two Voyager missions in 1979. In particular, the group relied on a series of annual observations of Jupiter that team members have been conducting with NASA's Hubble Space Telescope as part of the Outer Planets Atmospheres Legacy, or OPAL, project. The OPAL team scientists are based at Goddard, the University of California at Berkeley, and NASA's Jet Propulsion Laboratory in Pasadena, California.
The team traced the evolution of the Great Red Spot, analyzing its size, shape, color and drift rate. They also looked at the storm’s internal wind speeds, when that information was available from spacecraft.
The new findings indicate that the Great Red Spot recently started to drift westward faster than before. The storm always stays at the same latitude, held there by jet streams to the north and south, but it circles the globe in the opposite direction relative to the planet’s eastward rotation. Historically, it’s been assumed that this drift is more or less constant, but in recent observations, the team found the spot is zooming along much faster.
The study confirms that the storm has been decreasing in length overall since 1878 and is big enough to accommodate just over one Earth at this point. But the historical record indicates the area of the spot grew temporarily in the 1920s.
“There is evidence in the archived observations that the Great Red Spot has grown and shrunk over time,” said co-author Reta Beebe, an emeritus professor at New Mexico State University in Las Cruces. “However, the storm is quite small now, and it’s been a long time since it last grew.”
Because the storm has been contracting, the researchers expected to find the already-powerful internal winds becoming even stronger, like an ice skater who spins faster as she pulls in her arms.
Instead of spinning faster, the storm appears to be forced to stretch up. It’s almost like clay being shaped on a potter’s wheel. As the wheel spins, an artist can transform a short, round lump into a tall, thin vase by pushing inward with his hands. The smaller he makes the base, the taller the vessel will grow.
In the case of the Great Red Spot, the change in height is small relative to the area that the storm covers, but it’s still noticeable.
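The potter's-wheel picture amounts to a simple conservation argument: if the storm column keeps roughly the same volume while its footprint shrinks, its height must rise in inverse proportion to its area. The toy model below is purely illustrative, with made-up numbers; it is not the analysis used in the paper.

```python
def stretched_height(h0, area0, area1):
    """Volume-conserving column: h0 * area0 == h1 * area1,
    so the new height is h1 = h0 * (area0 / area1)."""
    return h0 * (area0 / area1)

# A 10% reduction in footprint forces ~11% growth in height
# (illustrative numbers only).
print(stretched_height(1.0, 1.0, 0.9))
```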
The Great Red Spot’s color has been deepening, too, becoming intensely orange since 2014. Researchers aren’t sure why that’s happening, but it’s possible that the chemicals which color the storm are being carried higher into the atmosphere as the spot stretches up. At higher altitudes, the chemicals would be subjected to more UV radiation and would take on a deeper color.
In some ways, the mystery of the Great Red Spot only seems to deepen as the iconic storm contracts. Researchers don’t know whether the spot will shrink a bit more and then stabilize, or break apart completely.
“If the trends we see in the Great Red Spot continue, the next five to 10 years could be very interesting from a dynamical point of view,” said Goddard co-author Rick Cosentino. “We could see rapid changes in the storm’s physical appearance and behavior, and maybe the red spot will end up being not so great after all.”
Publication: Amy A. Simon, et al., “Historical and Contemporary Trends in the Size, Drift, and Color of Jupiter’s Great Red Spot,” AJ, 2018; doi:10.3847/1538-3881/aaae01
Neptune is not visible to the unaided eye and is the only planet in the Solar System found by mathematical prediction rather than by empirical observation. Unexpected changes in the orbit of Uranus led Alexis Bouvard to deduce that its orbit was subject to gravitational perturbation by an unknown planet. After Bouvard’s death, the position of Neptune was calculated from his observations, independently, by John Couch Adams and Urbain Le Verrier. Neptune was subsequently observed with a telescope on 23 September 1846 by Johann Galle within a degree of the position predicted by Le Verrier. Its largest moon, Triton, was discovered shortly thereafter, though none of the planet’s remaining 13 known moons were located telescopically until the 20th century. The planet’s distance from Earth gives it a very small apparent size, making it challenging to study with Earth-based telescopes. Neptune was visited by Voyager 2 when it flew by the planet on 25 August 1989. The advent of the Hubble Space Telescope and large ground-based telescopes with adaptive optics has recently allowed for additional detailed observations from afar.
The monsoon of South Asia is among several geographically distributed global monsoons. It affects the Indian subcontinent, where it is one of the oldest and most anticipated weather phenomena and an economically important pattern every year from June through September, but it is only partly understood and notoriously difficult to predict. Several theories have been proposed to explain the origin, process, strength, variability, distribution, and general vagaries of the monsoon, but understanding and predictability are still evolving.
The unique geographical features of the Indian subcontinent, along with associated atmospheric, oceanic, and geophysical factors, influence the behavior of the monsoon. Because of its effect on agriculture, on flora and fauna, and on the climates of nations such as Bangladesh, Bhutan, India, Nepal, Pakistan, and Sri Lanka — among other economic, social, and environmental effects — the monsoon is one of the most anticipated, tracked, and studied weather phenomena in the region. It has a significant effect on the overall well-being of residents and has even been dubbed the "real finance minister of India".
The word monsoon (derived from the Arabic "mawsim", meaning "season"), although generally defined as a system of winds characterized by a seasonal reversal of direction, lacks a consistent, detailed definition. Some examples are:
Observed initially by sailors in the Arabian Sea traveling between Africa, India, and Southeast Asia, the monsoon can be categorized into two branches based on their spread over the subcontinent:
Alternatively, it can be categorized into two segments based on the direction of rain-bearing winds:
Based on the time of year that these winds bring rain to India, the monsoon can also be categorized into two periods:
The complexity of the monsoon of South Asia is not completely understood, making it difficult to accurately predict the quantity, timing, and geographic distribution of the accompanying precipitation. These are the most monitored components of the monsoon, and they determine the water availability in India for any given year.
Monsoons typically occur in tropical areas. One region the monsoon affects greatly is India, where it creates an entire season in which the winds reverse completely.
The rainfall is a result of the convergence of wind flow from the Bay of Bengal and reverse winds from the South China Sea.
The onset of the monsoon occurs over the Bay of Bengal in May, arriving at the Indian Peninsula by June, and then the winds move towards the South China Sea.
Although the southwest and northeast monsoon winds are seasonally reversible, they do not cause precipitation on their own.
Two factors are essential for rain formation:
Additionally, one of the causes of rain must happen. In the case of the monsoon, the cause is primarily orographic, due to the presence of highlands in the path of the winds. Orographic barriers force wind to rise. Precipitation then occurs on the windward side of the highlands because of adiabatic cooling and condensation of the moist rising air.
The unique geographic relief features of the Indian subcontinent come into play in allowing all of the above factors to occur simultaneously. The relevant features in explaining the monsoon mechanism are as follows:
There are some unique features of the rains that the monsoon brings to the Indian subcontinent.
Bursting of the monsoon refers to the sudden change in weather conditions in India (typically from hot and dry weather to wet and humid weather during the southwest monsoon), characterized by an abrupt rise in the mean daily rainfall. Similarly, the burst of the northeast monsoon refers to an abrupt increase in the mean daily rainfall over the affected regions.
One of the most commonly used words to describe the erratic nature of the monsoon is "vagaries", used everywhere from newspapers, magazines, books, and web portals to insurance plans and India's budget discussions. In some years, it rains too much, causing floods in parts of India; in others, it rains too little or not at all, causing droughts. In some years, the rain quantity is sufficient but its timing arbitrary. Sometimes, despite average annual rainfall, the daily distribution or geographic distribution of the rain is substantially skewed. In the recent past, rainfall variability in short time periods (about a week) was attributed to desert dust over the Arabian Sea and Western Asia.
Normally, the southwest monsoon can be expected to "burst" onto the western coast of India (near Thiruvananthapuram) at the beginning of June and to cover the entire country by mid-July. Its withdrawal from India typically starts at the beginning of September and finishes by the beginning of October.
The northeast monsoon usually "bursts" around 20 October and lasts for about 50 days before withdrawing.
However, a rainy monsoon is not necessarily a normal monsoon — that is, one that performs close to statistical averages calculated over a long period. A normal monsoon is generally accepted to be one involving close to the average quantity of precipitation over all the geographical locations under its influence (mean spatial distribution) and over the entire expected time period (mean temporal distribution). Additionally, the arrival date and the departure date of both the southwest and northeast monsoon should be close to the mean dates. The exact criteria for a normal monsoon are defined by the Indian Meteorological Department with calculations for the mean and standard deviation of each of these variables.
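The mean/standard-deviation criterion described above can be sketched numerically. A minimal illustration, using made-up values for the long-period average and standard deviation of all-India summer rainfall (the actual figures and category cut-offs are defined by the Indian Meteorological Department, not by this sketch):

```python
# Hypothetical sketch of an IMD-style "normal monsoon" check.
# LPA_MM and SIGMA_MM are illustrative placeholders, not official values.

LPA_MM = 880.0      # assumed long-period average of seasonal rainfall (mm)
SIGMA_MM = 80.0     # assumed standard deviation (mm)

def classify_season(rainfall_mm: float) -> str:
    """Label a season by how many standard deviations it sits from the LPA."""
    z = (rainfall_mm - LPA_MM) / SIGMA_MM
    if z < -1.0:
        return "deficient"
    if z > 1.0:
        return "excess"
    return "normal"
```

A full classification would apply the same idea to spatial and temporal distribution and to onset/withdrawal dates, as the paragraph above notes.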
Theories of the mechanism of the monsoon primarily try to explain the reasons for the seasonal reversal of winds and the timing of their reversal.
Because of differences in the specific heat capacity of land and water, continents heat up faster than seas. Consequently, the air above coastal lands heats up faster than the air above seas. These create areas of low air pressure above coastal lands compared with pressure over the seas, causing winds to flow from the seas onto the neighboring lands. This is known as sea breeze.
Also known as the thermal theory or the differential heating of sea and land theory, the traditional theory portrays the monsoon as a large-scale sea breeze. It states that during the hot subtropical summers, the massive landmass of the Indian Peninsula heats up at a different rate than the surrounding seas, resulting in a pressure gradient from south to north. This causes the flow of moisture-laden winds from sea to land. On reaching land, these winds rise because of the geographical relief, cooling adiabatically and leading to orographic rains. This is the southwest monsoon. The reverse happens during the winter, when the land is colder than the sea, establishing a pressure gradient from land to sea. This causes the winds to blow over the Indian subcontinent toward the Indian Ocean in a northeasterly direction, causing the northeast monsoon. Because the southwest monsoon flows from sea to land, it carries more moisture, and therefore causes more rain, than the northeast monsoon. Only part of the northeast monsoon passing over the Bay of Bengal picks up moisture, causing rain in Andhra Pradesh and Tamil Nadu during the winter months.
However, many meteorologists argue that the monsoon is not a local phenomenon as explained by the traditional theory, but a general weather phenomenon along the entire tropical zone of Earth. This criticism does not deny the role of differential heating of sea and land in generating monsoon winds, but casts it as one of several factors rather than the only one.
The prevailing winds of the atmospheric circulation arise because of the difference in pressure at various latitudes and act as means for distribution of thermal energy on the planet. This pressure difference is because of the differences in solar insolation received at different latitudes and the resulting uneven heating of the planet. Alternating belts of high pressure and low pressure develop along the equator, the two tropics, the Arctic and Antarctic circles, and the two polar regions, giving rise to the trade winds, the westerlies, and the polar easterlies. However, geophysical factors like Earth's orbit, its rotation, and its axial tilt cause these belts to shift gradually north and south, following the Sun's seasonal shifts.
The dynamic theory explains the monsoon on the basis of the annual shifts in the position of global belts of pressure and winds. According to this theory, the monsoon is a result of the shift of the Inter Tropical Convergence Zone (ITCZ) under the influence of the vertical sun. Though the mean position of the ITCZ is taken as the equator, it shifts north and south with the migration of the vertical sun toward the Tropics of Cancer and Capricorn during the summer of the respective hemispheres (Northern and Southern Hemisphere). As such, during the northern summer (May and June), the ITCZ moves north, along with the vertical sun, toward the Tropic of Cancer. The ITCZ, as the zone of lowest pressure in the tropical region, is the target destination for the trade winds of both hemispheres. Consequently, with the ITCZ at the Tropic of Cancer, the southeast trade winds of the Southern Hemisphere have to cross the equator to reach it. However, because of the Coriolis effect (which causes winds in the Northern Hemisphere to turn right, whereas winds in the Southern Hemisphere turn left), these southeast trade winds are deflected east in the Northern Hemisphere, transforming into southwest trades. These pick up moisture while traveling from sea to land and cause orographic rain once they hit the highlands of the Indian Peninsula. This results in the southwest monsoon.
The dynamic theory explains the monsoon as a global weather phenomenon rather than just a local one. And when coupled with the traditional theory (based on the heating of sea and land), it enhances the explanation of the varying intensity of monsoon precipitation along the coastal regions with orographic barriers.
This theory tries to explain the establishment of the northeast and southwest monsoons, as well as unique features like "bursting" and variability.
The jet streams are systems of upper-air westerlies. They give rise to slowly moving upper-air waves, with 250-knot winds in some air streams. First observed by World War II pilots, they develop just below the tropopause over areas of steep pressure gradient on the surface. The main types are the polar jets, the subtropical westerly jets, and the less common tropical easterly jets. They follow the principle of geostrophic winds.
Over India, a subtropical westerly jet develops in the winter season and is replaced by the tropical easterly jet in the summer season. The high temperature during the summer over the Tibetan Plateau, as well as over Central Asia in general, is believed to be the critical factor leading to the formation of the tropical easterly jet over India.
The mechanism affecting the monsoon is that the westerly jet causes high pressure over northern parts of the subcontinent during the winter. This results in the north-to-south flow of the winds in the form of the northeast monsoon. With the northward shift of the vertical sun, this jet shifts north, too. The intense heat over the Tibetan Plateau, coupled with associated terrain features like the high altitude of the plateau, generates the tropical easterly jet over central India. This jet creates a low-pressure zone over the northern Indian plains, influencing the wind flow toward these plains and assisting the development of the southwest monsoon.
The "bursting" of the monsoon is primarily explained by the jet stream theory and the dynamic theory.
According to this theory, during the summer months in the Northern Hemisphere, the ITCZ shifts north, pulling the southwest monsoon winds onto the land from the sea. However, the huge landmass of the Himalayas restricts the low-pressure zone onto the Himalayas themselves. It is only when the Tibetan Plateau heats up significantly more than the Himalayas that the ITCZ abruptly and swiftly shifts north, leading to the bursting of monsoon rains over the Indian subcontinent. The reverse shift takes place for the northeast monsoon winds, leading to a second, minor burst of rainfall over the eastern Indian Peninsula during the Northern Hemisphere winter months.
According to this theory, the onset of the southwest monsoon is driven by the shift of the subtropical westerly jet north from over the plains of India toward the Tibetan Plateau. This shift is due to the intense heating of the plateau during the summer months. The northward shift is not a slow and gradual process, as expected for most changes in weather pattern. The primary cause is believed to be the height of the Himalayas. As the Tibetan Plateau heats up, the low pressure created over it pulls the westerly jet north. Because of the lofty Himalayas, the westerly jet's movement is inhibited. But with continuous dropping pressure, sufficient force is created for the movement of the westerly jet across the Himalayas after a significant period. As such, the shift of the jet is sudden and abrupt, causing the bursting of southwest monsoon rains onto the Indian plains. The reverse shift happens for the northeast monsoon.
The jet stream theory also explains the variability in timing and strength of the monsoon.
Timing: A timely northward shift of the subtropical westerly jet at the beginning of summer is critical to the onset of the southwest monsoon over India. If the shift is delayed, so is the southwest monsoon. An early shift results in an early monsoon.
Strength: The strength of the southwest monsoon is determined by the strength of the easterly tropical jet over central India. A strong easterly tropical jet results in a strong southwest monsoon over central India, and a weak jet results in a weak monsoon.
El Niño is a warm ocean current originating along the coast of Peru that replaces the usual cold Humboldt Current. The warm surface water moving toward the coast of Peru with El Niño is pushed west by the trade winds, thereby raising the temperature of the southern Pacific Ocean. The reverse condition is known as La Niña.
Southern oscillation, a phenomenon first observed by Sir Gilbert Thomas Walker, director general of observatories in India, refers to the seesaw relationship of atmospheric pressures between Tahiti and Darwin, Australia. Walker noticed that when pressure was high in Tahiti, it was low in Darwin, and vice versa. A Southern Oscillation Index (SOI), based on the pressure difference between Tahiti and Darwin, has been formulated by the Bureau of Meteorology (Australia) to measure the strength of the oscillation. Walker noticed that the quantity of rainfall in the Indian subcontinent was often negligible in years of high pressure over Darwin (and low pressure over Tahiti). Conversely, low pressure over Darwin bodes well for precipitation quantity in India. Thus, Walker established the relationship between southern oscillation and quantities of monsoon rains in India.
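Walker's seesaw can be expressed numerically. One common formulation is the Troup SOI used by the Bureau of Meteorology: ten times the standardized anomaly of the Tahiti-minus-Darwin mean sea-level pressure difference. A minimal sketch, with the historical pressure differences supplied by the caller (a real implementation would use the Bureau's published climatology for the calendar month in question):

```python
from statistics import mean, pstdev

def troup_soi(p_tahiti: float, p_darwin: float,
              historical_diffs: list[float]) -> float:
    """Troup SOI: standardized Tahiti-minus-Darwin pressure anomaly, x10.

    p_tahiti, p_darwin: mean sea-level pressures (hPa) for the month.
    historical_diffs: long-term record of monthly (Tahiti - Darwin) differences.
    """
    diff = p_tahiti - p_darwin
    return 10.0 * (diff - mean(historical_diffs)) / pstdev(historical_diffs)
```

Sustained strongly negative values of the index correspond to El Niño conditions, and sustained strongly positive values to La Niña.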
Ultimately, the southern oscillation was found to be simply an atmospheric component of the El Niño/La Niña effect, which happens in the ocean. Therefore, in the context of the monsoon, the two together came to be known as the El Niño-Southern Oscillation (ENSO) effect. The effect is known to have a pronounced influence on the strength of the southwest monsoon over India, with the monsoon being weak (causing droughts) during El Niño years, while La Niña years bring particularly strong monsoons.
Although the ENSO effect was statistically effective in explaining several past droughts in India, in recent decades, its relationship with the Indian monsoon seemed to weaken. For example, the strong ENSO of 1997 did not cause drought in India. However, it was later discovered that, just like ENSO in the Pacific Ocean, a similar seesaw ocean-atmosphere system in the Indian Ocean was also in play. This system was discovered in 1999 and named the Indian Ocean Dipole (IOD). An index to calculate it was also formulated. IOD develops in the equatorial region of the Indian Ocean from April to May and peaks in October. With a positive IOD, winds over the Indian Ocean blow from east to west. This makes the Arabian Sea (the western Indian Ocean near the African coast) much warmer and the eastern Indian Ocean around Indonesia colder and drier. In negative dipole years, the reverse happens, making Indonesia much warmer and rainier.
A positive IOD index often negates the effect of ENSO, resulting in increased monsoon rains in years such as 1983, 1994, and 1997. Further, the two poles of the IOD — the eastern pole (around Indonesia) and the western pole (off the African coast) — independently and cumulatively affect the quantity of monsoon rains.
As with ENSO, the atmospheric component of the IOD was later discovered, and the cumulative phenomenon was named the Equatorial Indian Ocean oscillation (EQUINOO). When EQUINOO effects are factored in, certain failed forecasts, like the acute drought of 2002, can be further accounted for. The relationship between extremes of Indian summer monsoon rainfall and ENSO and EQUINOO has been studied, and models to better predict the quantity of monsoon rains have been statistically derived.
Since the 1950s, the South Asian summer monsoon has been exhibiting large changes, especially in terms of droughts and floods. The observed monsoon rainfall indicates a gradual decline over central India, with a reduction of up to 10%. This is primarily due to a weakening monsoon circulation as a result of the rapid warming in the Indian Ocean and changes in land use and land cover, while the role of aerosols remains elusive. Since the strength of the monsoon is partially dependent on the temperature difference between the ocean and the land, higher ocean temperatures in the Indian Ocean have weakened the moisture-bearing winds from the ocean to the land. The reduction in summer monsoon rainfall has grave consequences over central India, because at least 60% of the agriculture in this region is still largely rain-fed.
A recent assessment of monsoonal changes indicates that land warming increased during 2002-2014, possibly reviving the strength of the monsoon circulation and rainfall. Future changes in the monsoon will depend on a competition between land and ocean, on which of the two warms faster.
Meanwhile, there has been a three-fold rise in widespread extreme rainfall events during the years 1950 to 2015 over the entire central belt of India, leading to a steady rise in the number of flash floods with significant socioeconomic losses. Widespread extreme rainfall events are those rainfall events larger than 150 mm/day and spread over a region large enough to cause floods.
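Given a daily gridded rainfall record, the 150 mm/day criterion can be applied directly. A sketch, assuming a placeholder spatial-extent threshold for "widespread" (the studies behind these figures define the region and extent more precisely):

```python
# Sketch: flag "widespread extreme rainfall" days in a daily gridded series.
# The 150 mm/day threshold comes from the text; MIN_CELLS is an assumed
# placeholder for the spatial-extent criterion.

EXTREME_MM = 150.0
MIN_CELLS = 10  # assumed number of grid cells that makes an event "widespread"

def widespread_extreme_days(daily_grid_rain: list[list[float]]) -> int:
    """Count days on which at least MIN_CELLS grid cells exceed EXTREME_MM.

    daily_grid_rain: one inner list of per-cell rainfall (mm) per day.
    """
    count = 0
    for day in daily_grid_rain:
        if sum(1 for cell in day if cell > EXTREME_MM) >= MIN_CELLS:
            count += 1
    return count
```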
Since the Great Famine of 1876–78 in India, various attempts have been made to predict monsoon rainfall. At least five prediction models exist.
The Centre for Development of Advanced Computing (CDAC) at Bengaluru facilitated the Seasonal Prediction of Indian Monsoon (SPIM) experiment on the PARAM Padma supercomputing system. This project involved simulated runs of historical data from 1985 to 2004 to try to establish the relationship of five atmospheric general circulation models with monsoon rainfall distribution.
The department has tried to forecast the monsoon for India since 1884, and is the only official agency entrusted with making public forecasts about the quantity, distribution, and timing of the monsoon rains. Its position as the sole authority on the monsoon was cemented in 2005 by the Department of Science and Technology (DST), New Delhi. In 2003, IMD substantially changed its forecast methodology, model, and administration. A sixteen-parameter monsoon forecasting model used since 1988 was replaced in 2003. However, following the 2009 drought in India (the worst since 1972), the department decided in 2010 that it needed to develop an "indigenous model" to further improve its prediction capabilities.
The monsoon is the primary delivery mechanism for fresh water in the Indian subcontinent. As such, it affects the environment (and associated flora, fauna, and ecosystems), agriculture, society, hydro-power production, and geography of the subcontinent (like the availability of fresh water in water bodies and the underground water table), with all of these factors cumulatively contributing to the health of the economy of affected countries.
The monsoon turns large parts of India from semi-deserts into green grasslands. See photos taken only three months apart in the Western Ghats.
Mawsynram and Cherrapunji, both in the Indian state of Meghalaya, alternate as the wettest places on Earth given the quantity of their rainfall, though there are other cities with similar claims. They receive more than 11,000 millimeters of rain each from the monsoon.
In India, which has historically had a primarily agrarian economy, the services sector recently overtook the farm sector in terms of GDP contribution. However, the agriculture sector still contributes 17-20% of GDP and is the largest employer in the country, with about 60% of Indians dependent on it for employment and livelihood. About 49% of India's land is agricultural; that number rises to 55% if associated wetlands, dryland farming areas, etc., are included. Because more than half of these farmlands are rain-fed, the monsoon is critical to food sufficiency and quality of life.
Despite progress in alternative forms of irrigation, agricultural dependence on the monsoon remains far from insignificant. Therefore, the agricultural calendar of India is governed by the monsoon. Any fluctuations in the time distribution, spatial distribution, or quantity of the monsoon rains may lead to floods or droughts, causing the agricultural sector to suffer. This has a cascading effect on the secondary economic sectors, the overall economy, food inflation, and therefore the general population's quality and cost of living.
The economic significance of the monsoon is aptly described by Pranab Mukherjee's remark that the monsoon is the "real finance minister of India". A good monsoon results in better agricultural yields, which brings down prices of essential food commodities and reduces imports, thus reducing food inflation overall. Better rains also result in increased hydroelectric production. All of these factors have positive ripple effects throughout the economy of India.
The downside, however, is that when monsoon rains are weak, crop production is low, leading to higher food prices and limited supply. As a result, the Indian government is actively working with farmers and the nation's meteorological department to produce more drought-resistant crops.
The onset of the rains brings a host of diseases and infections, including mosquito-borne, water-borne, and air-borne infections, as a result of changes in the ecosystem.
D. Subbarao, former governor of the Reserve Bank of India, emphasized during a quarterly review of India's monetary policy that the lives of Indians depend on the performance of the monsoon. His own career prospects, his emotional well-being, and the performance of his monetary policy are all "a hostage" to the monsoon, he said, as is the case for most Indians. Additionally, farmers rendered jobless by failed monsoon rains tend to migrate to cities. This crowds city slums and strains the infrastructure and sustainability of city life.
In the past, Indians usually refrained from traveling during monsoons for practical as well as religious reasons. But with the advent of globalization, such travel is gaining popularity. Places like Kerala and the Western Ghats get a large number of tourists, both local and foreigners, during the monsoon season. Kerala is one of the top destinations for tourists interested in Ayurvedic treatments and massage therapy. One major drawback of traveling during the monsoon is that most wildlife sanctuaries are closed. Also, some mountainous areas, especially in Himalayan regions, get cut off when roads are damaged by landslides and floods during heavy rains.
The monsoon is the primary bearer of fresh water to the area. The peninsular/Deccan rivers of India are mostly rain-fed and non-perennial in nature, depending primarily on the monsoon for water supply. Most of the coastal rivers of Western India are also rain-fed and monsoon-dependent. As such, the flora, fauna, and entire ecosystems of these areas rely heavily on the monsoon.
Monsoon is traditionally defined as a seasonal reversing wind accompanied by corresponding changes in precipitation, but is now used to describe seasonal changes in atmospheric circulation and precipitation associated with the asymmetric heating of land and sea. Usually, the term monsoon is used to refer to the rainy phase of a seasonally changing pattern, although technically there is also a dry phase. The term is sometimes incorrectly used for locally heavy but short-term rains.
El Niño–Southern Oscillation (ENSO) is an irregularly periodic variation in winds and sea surface temperatures over the tropical eastern Pacific Ocean, affecting the climate of much of the tropics and subtropics. The warming phase of the sea temperature is known as El Niño and the cooling phase as La Niña. The Southern Oscillation is the accompanying atmospheric component, coupled with the sea temperature change: El Niño is accompanied by high air surface pressure in the tropical western Pacific and La Niña with low air surface pressure there. The two periods last several months each and typically occur every few years with varying intensity per period.
Orographic lift occurs when an air mass is forced from a low elevation to a higher elevation as it moves over rising terrain. As the air mass gains altitude it quickly cools down adiabatically, which can raise the relative humidity to 100% and create clouds and, under the right conditions, precipitation.
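A rough feel for where orographically lifted air begins to condense: the lifting condensation level is commonly approximated as about 125 m of ascent per degree Celsius of dewpoint depression. A sketch of that rule of thumb (an approximation, not the full thermodynamic calculation):

```python
def cloud_base_m(temp_c: float, dewpoint_c: float) -> float:
    """Approximate cloud-base height above the surface (m) via the
    ~125 m per degree Celsius of dewpoint depression rule of thumb."""
    return 125.0 * (temp_c - dewpoint_c)
```

For moist monsoon air with a small dewpoint depression, the estimated cloud base is low, which is consistent with heavy precipitation forming on the windward slopes described above.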
The Intertropical Convergence Zone (ITCZ), known by sailors as the doldrums or the calms because of its monotonous, windless weather, is the area where the northeast and southeast trade winds converge. It encircles Earth near the thermal equator, though its specific position varies seasonally. When it lies near the geographic Equator, it is called the near-equatorial trough. Where the ITCZ is drawn into and merges with a monsoonal circulation, it is sometimes referred to as a monsoon trough, a usage more common in Australia and parts of Asia.
The prevailing wind in a region of the Earth's surface is a surface wind that blows predominantly from a particular direction. The dominant winds are the trends in direction of wind with the highest speed over a particular point on the Earth's surface. A region's prevailing and dominant winds are the result of global patterns of movement in the Earth's atmosphere. In general, winds are predominantly easterly at low latitudes globally. In the mid-latitudes, westerly winds are dominant, and their strength is largely determined by the polar cyclone. In areas where winds tend to be light, the sea breeze/land breeze cycle is the most important cause of the prevailing wind; in areas which have variable terrain, mountain and valley breezes dominate the wind pattern. Highly elevated surfaces can induce a thermal low, which then augments the environmental wind flow.
The climate of India comprises a wide range of weather conditions across a vast geographic scale and varied topography, making generalisations difficult. The climate in south India is generally hotter than in north India. Most of the nation does not experience temperatures below 10 °C (50 °F) even in winter, and temperatures exceed 40 °C (104 °F) during summer across the nation. Based on the Köppen system, India hosts six major climatic subtypes, ranging from arid deserts in the west and alpine tundra and glaciers in the north to humid tropical regions supporting rainforests in the southwest and the island territories. Many regions have starkly different microclimates, making it one of the most climatically diverse countries in the world. The country's meteorological department follows the international standard of four climatological seasons with some local adjustments: winter, summer, monsoon (rainy) season, and a post-monsoon period.
The South Pacific Convergence Zone (SPCZ), a reverse-oriented monsoon trough, is a band of low-level convergence, cloudiness and precipitation extending from the Western Pacific Warm Pool at the maritime continent south-eastwards towards French Polynesia and as far as the Cook Islands. The SPCZ is a portion of the Intertropical Convergence Zone (ITCZ) which lies in a band extending east-west near the Equator but can be more extratropical in nature, especially east of the International Date Line. It is considered the largest and most important piece of the ITCZ, and depends less upon heating from a nearby landmass during the summer than any other portion of the monsoon trough. The SPCZ can affect the precipitation on Polynesian islands in the southwest Pacific Ocean, so it is important to understand how the SPCZ behaves with large-scale, global climate phenomena, such as the ITCZ, El Niño–Southern Oscillation, and the Interdecadal Pacific oscillation (IPO), a portion of the Pacific decadal oscillation.
The Madden–Julian oscillation (MJO) is the largest element of the intraseasonal variability in the tropical atmosphere. It was discovered in 1971 by Roland Madden and Paul Julian of the American National Center for Atmospheric Research (NCAR). It is a large-scale coupling between atmospheric circulation and tropical deep atmospheric convection. Unlike a standing pattern like the El Niño–Southern Oscillation (ENSO), the Madden–Julian oscillation is a traveling pattern that propagates eastward, at approximately 4 to 8 m/s, through the atmosphere above the warm parts of the Indian and Pacific oceans. This overall circulation pattern manifests itself most clearly as anomalous rainfall.
Sohra is a subdivisional town in the East Khasi Hills district in the Indian state of Meghalaya. It is the traditional capital of ka hima Nongkhlaw.
In the Indian Ocean north of the equator, tropical cyclones can form throughout the year on either side of India. On the east side is the Bay of Bengal, and on the west side is the Arabian Sea.
The 2005 North Indian Ocean cyclone season was destructive and deadly to southern India, despite the weak storms. The basin covers the Indian Ocean north of the equator as well as inland areas, sub-divided by the Arabian Sea and the Bay of Bengal. Although the season began early with two systems in January, the bulk of activity was confined from September to December. The official India Meteorological Department tracked 12 depressions in the basin, and the unofficial Joint Typhoon Warning Center (JTWC) monitored two additional storms. Three systems intensified into cyclonic storms, which have sustained winds of at least 63 km/h (39 mph), at which point the IMD named them.
The geography of South America contains many diverse regions and climates. Geographically, South America is generally considered a continent forming the southern portion of the landmass of the Americas, south and east of the Panama–Colombia border by most authorities, or south and east of the Panama Canal by some. South and North America are sometimes considered a single continent or supercontinent, while constituent regions are infrequently considered subcontinents.
The monsoon trough is a portion of the Intertropical Convergence Zone in the Western Pacific, as depicted by a line on a weather map showing the locations of minimum sea level pressure, and as such, is a convergence zone between the wind patterns of the southern and northern hemispheres.
Drought in India has resulted in tens of millions of deaths over the 18th, 19th, and 20th centuries. Indian agriculture is heavily dependent on the country's climate: a favorable southwest summer monsoon is critical to securing water for irrigating India's crops. In parts of India, failure of the monsoons causes water shortages, resulting in below-average crop yields. This is particularly true of major drought-prone regions such as southern and eastern Maharashtra, northern Karnataka, Andhra Pradesh, Odisha, Gujarat, Telangana, and Rajasthan.
A Western Disturbance is an extratropical storm originating in the Mediterranean region that brings sudden winter rain to the northwestern parts of the Indian subcontinent. It is a non-monsoonal precipitation pattern driven by the westerlies. The moisture in these storms usually originates over the Mediterranean Sea and the Indian Ocean. Extratropical storms are a global phenomenon, with moisture usually carried in the upper atmosphere, unlike their tropical counterparts where the moisture is carried in the lower atmosphere. In the case of the Indian subcontinent, moisture is sometimes shed as rain when the storm system encounters the Himalayas. Western Disturbances are more frequent and stronger during the winter season.
Earth rainfall climatology is the study of rainfall, a sub-field of meteorology. Formally, a wider study includes water falling as ice crystals, i.e. hail, sleet, snow. The aim of rainfall climatology is to measure, understand and predict rain distribution across different regions of planet Earth, a factor of air pressure, humidity, topography, cloud type and raindrop size, via direct measurement and remote sensing data acquisition. Current technologies accurately predict rainfall 3–4 days in advance using numerical weather prediction. Geostationary orbiting satellites gather IR and visual wavelength data to measure realtime localised rainfall by estimating cloud albedo, water content, and the corresponding probability of rain. Geographic distribution of rain is largely governed by climate type, topography and habitat humidity. In mountainous areas, heavy precipitation is possible where upslope flow is maximized within windward sides of the terrain at elevation. On the leeward side of mountains, desert climates can exist due to the dry air caused by compressional heating. The movement of the monsoon trough, or intertropical convergence zone, brings rainy seasons to savannah climes. The urban heat island effect leads to increased rainfall, both in amounts and intensity, downwind of cities. Warming may also cause changes in the precipitation pattern globally, including wetter conditions at high latitudes and in some wet tropical areas. Precipitation is a major component of the water cycle, and is responsible for depositing most of the fresh water on the planet. Approximately 505,000 cubic kilometres (121,000 cu mi) of water falls as precipitation each year; 398,000 cubic kilometres (95,000 cu mi) of it over the oceans. Given the Earth's surface area, that means the globally averaged annual precipitation is 990 millimetres (39 in).
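The 990 mm figure follows directly from dividing the quoted annual precipitation volume by the Earth's surface area. A quick sketch, assuming a mean Earth radius of 6371 km:

```python
import math

R_EARTH_KM = 6371.0          # mean Earth radius, km (assumed)
annual_precip_km3 = 505_000  # total yearly precipitation volume, from the text

surface_area_km2 = 4 * math.pi * R_EARTH_KM**2           # ~5.1e8 km^2
mean_depth_mm = annual_precip_km3 / surface_area_km2 * 1e6  # convert km -> mm

print(round(mean_depth_mm))  # 990
```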
Climate classification systems such as the Köppen climate classification system use average annual rainfall to help differentiate between differing climate regimes.
The climate of Asia is wet across southeast sections, and dry across much of the interior. Some of the largest daily temperature ranges on Earth occur in western sections of Asia. The monsoon circulation dominates across southern and eastern sections, due to the presence of the Himalayas forcing the formation of a thermal low which draws in moisture during the summer. Southwestern sections of the continent experience low relief as a result of the subtropical high pressure belt; they are hot in the summer, warm to cool in winter, and may see snow at higher altitudes. Siberia is one of the coldest places in the Northern Hemisphere, and can act as a source of arctic air masses for North America. The most active place on Earth for tropical cyclone activity lies northeast of the Philippines and south of Japan, and the phase of the El Niño–Southern Oscillation modulates where in Asia landfall is more likely to occur.
The Subtropical Indian Ocean Dipole (SIOD) is characterized by an oscillation of sea surface temperatures (SST) in which the southwest Indian Ocean, i.e. south of Madagascar, is alternately warmer and colder than the eastern part, i.e. off Australia. It was first identified in studies of the relationship between the SST anomaly and the south-central Africa rainfall anomaly; the existence of such a dipole was identified from both observational studies and model simulations.
The 2014 North Indian Ocean cyclone season was an event in the annual cycle of tropical cyclone formation. The season included two very severe cyclonic storms, both in October, and one other named cyclonic storm, classified according to the tropical cyclone intensity scale of the India Meteorological Department. Cyclone Hudhud is estimated to have caused US$3.58 billion in damage across eastern India, and more than 120 deaths.
The 2015 North Indian Ocean cyclone season was an event in the annual cycle of tropical cyclone formation. The North Indian Ocean cyclone season has no official bounds, but cyclones tend to form between April and December, with the peak from May to November. These dates conventionally delimit the period of each year when most tropical cyclones form in the northern Indian Ocean.
The Heliospheric current sheet (HCS) is the surface within the Solar System where the polarity of the Sun’s magnetic field changes from north to south. This field extends from the Sun’s equatorial plane throughout the entire Solar System and is the largest structure in the heliosphere. The shape of the current sheet results from the influence of the Sun’s rotating magnetic field on the plasma in the interplanetary medium (solar wind) (see also Unipolar generator). A small electrical current flows within the sheet, about 10⁻¹⁰ amps/m². The thickness of the current sheet is about 10,000 km.
The underlying magnetic field is called the interplanetary magnetic field, which has an associated interplanetary electric field , and the resulting electric current forms part of the heliospheric current circuit. The Heliospheric current sheet is also sometimes called the Interplanetary Current Sheet and Heliospheric neutral sheet. See also “Current sheet“.
Ballerina’s skirt shape
As the Sun rotates, its magnetic field twists into a Parker spiral, a form of Archimedean spiral named after its discoverer, Eugene Parker. As the spiraling magnetic sheet changes polarity, it warps into a wavy spiral shape that has been likened to a ballerina’s skirt. Further dynamics have suggested that “The Sun with the heliosheet is like a bashful ballerina who is repeatedly trying to push her excessively high flaring skirt downward”.
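The pitch of the Parker spiral at a distance r from the Sun follows from tan ψ = Ω r / v_sw, where Ω is the Sun's rotation rate and v_sw the solar-wind speed. A minimal sketch, assuming a sidereal rotation period of ~25.4 days and the typical 450 km/s wind speed quoted later in this article:

```python
import math

OMEGA_SUN = 2 * math.pi / (25.4 * 86400)  # rad/s, sidereal rotation period (assumed)
V_SW = 450e3                              # m/s, typical solar-wind speed
AU = 1.496e11                             # m, Earth's orbital radius

# Angle between the spiral field line and the radial direction at 1 AU
psi_deg = math.degrees(math.atan(OMEGA_SUN * AU / V_SW))
print(f"{psi_deg:.0f} degrees")  # ~44 degrees, close to the classic ~45° value
```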
The heliospheric current sheet rotates along with the Sun once every 27 days, during which time the peaks and troughs of the skirt pass through the Earth’s magnetosphere, interacting with it. Near the surface of the Sun, the magnetic field produced by the radial electric current in the sheet is of the order of 5×10⁻⁶ T.
The magnetic field at the surface of the Sun is about 10⁻⁴ tesla. If the form of the field were a magnetic dipole, the strength would decrease with the cube of the distance, resulting in about 10⁻¹¹ tesla at the Earth’s orbit. The heliospheric current sheet results in higher order multipole components, so that the actual magnetic field at the Earth due to the Sun is 100 times greater.
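That cube-law estimate is easy to check. A sketch, assuming a solar radius of 6.96×10⁸ m:

```python
B_SURFACE = 1e-4   # tesla, field at the Sun's surface (from the text)
R_SUN = 6.96e8     # m, solar radius (assumed)
AU = 1.496e11      # m, Earth's orbital distance

# A pure dipole field falls off with the cube of distance
b_dipole_earth = B_SURFACE * (R_SUN / AU) ** 3
print(f"{b_dipole_earth:.1e} T")  # ~1.0e-11 T, matching the figure above
```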
The electric current in the heliospheric current sheet has a radial component, the circuit being closed by currents aligned with the Sun’s magnetic field in the solar polar regions. The total current in the circuit is on the order of 3×10⁹ amperes. As a comparison with other astrophysical electric currents, the Birkeland currents that supply the Earth’s aurora are about a thousand times weaker, at around a million amperes. The maximum current density in the sheet is on the order of 10⁻¹⁰ A/m² (10⁻⁴ amps/km²).
It has been noted that:
- “It is remarkable that the radial component of the spiral structure implies a current that continually flows towards the Sun. The charge accumulating from this process must be removed elsewhere. This occurs most simply via line currents that originate over the Sun’s poles”
Interplanetary electric field
The interplanetary electric field (IEF) extends throughout the interplanetary current sheet, and is generally orientated north-south. The separation of the field is relatively small, but its extent is the same as the heliospheric current sheet, which extends throughout the heliosphere.
The interplanetary electric field is caused by ions leaving the Sun, initially flowing along and parallel to the Sun’s magnetic field. But as the ions move further outwards, the azimuthal component of the Sun’s magnetic field becomes more influential, and protons are deflected to the south and electrons to the north, resulting in an electric field that compensates the magnetic forces.
- “The Solar Wind consists of a hot plasma — an electrically neutral mixture of electrons and ions (principally protons with some heavier atomic nuclei) at roughly 100,000 K. Its source is the Sun’s atmosphere, or corona, and it is continually present in interplanetary space. The gas flows radially outwards at a typical speed of 450 km per second to at least 70 AU and probably much further. The average speed of the flowing gas is remarkably independent of its distance from the Sun”.
Solar wind acceleration
- “The speed of the solar wind away from the Sun increases as the distance from the Sun increases. The wind accelerates rapidly in the first few tens of solar radii (R☉), and accelerates only slowly after this.”
The heliospheric current sheet was discovered by John M. Wilcox and Norman F. Ness, who published their finding in 1965.
The image above is a painting by NASA artist, Werner Heil. It was developed by Prof. John Wilcox as a tool for visualizing the surface that separates the two magnetic polarity regions produced by the Sun in the solar system. His concept was that a “baseball seam” shape located near the Sun separates the two magnetic hemispheres of the interplanetary medium; the shape was determined by the large-scale magnetic field at the Sun. That geometrical shape is carried radially outward by the solar wind. As the Sun, and the magnetic field configuration it generates, continue to rotate underneath the structure, the resulting surface becomes the one you see in the painting.
Hannes Alfvén and Per Carlqvist speculate on the existence of a galactic current sheet, a counterpart of the heliospheric current sheet, with an estimated galactic current of 10¹⁷–10¹⁹ amperes, that might flow in the plane of symmetry of the galaxy.
- Dr. Tony Phillips, A Star with two North Poles April 22, 2003, [email protected] via Archive.org
- Artist’s Conception of the Heliospheric Current Sheet, Wilcox Solar Observatory
- Duncan Alan Bryant “Electron Acceleration in the Aurora and Beyond”, Published 1999, CRC Press, 311 pages, ISBN 0750305339 (page 176)
- Israelevich, P. L., et al, “MHD simulation of the three-dimensional structure of the heliospheric current sheet” (2001) Astronomy and Astrophysics, v.376, p.288-291
- Parker, E. N., “Dynamics of the Interplanetary Gas and Magnetic Fields“, (1958) Astrophysical Journal, vol. 128, p.664
- Rosenberg, R. L. and P. J. Coleman, Jr., Heliographic latitude dependence of the dominant polarity of the interplanetary magnetic field, J. Geophys. Res., 74 (24), 5611-5622, 1969.
- Wilcox, J. M.; Scherrer, P. H.; Hoeksema, J. T., “The origin of the warped heliospheric current sheet” (1980)
- Mursula, K.; Hiltula, T., “Bashful ballerina: Southward shifted heliospheric current sheet]” (2003), Geophysical Research Letters, Volume 30, Issue 22, pp. SSC 2-1
- Gerd W. Prölss, “Physics of the Earth’s Space Environment: An Introduction” Translated by M. K. Bird, Published 2004, Springer, 514 pages ISBN 3540214267 (page 309)
- Gerd W. Prölss, Physics of the Earth’s Space Environment: An Introduction, (2004) Translated by M. K. Bird, Springer, 514 pages, ISBN 3540214267 (pages 312-313)
- J. Kelly Beatty, Carolyn Collins Petersen, Andrew Chaikin, The New Solar System, Edition: 4, illustrated, revised, Published by Cambridge University Press, 1999, ISBN 0521645875, ISBN 9780521645874, 421 pages (page 40)
- Simon F. Green, Mark H. Jones, S. Jocelyn Burnell, An Introduction to the Sun and Stars, Published by Cambridge University Press, 2004, ISBN 0521546222, ISBN 9780521546225, 373 pages (page 75)
- John M. Wilcox and Norman F. Ness, “Quasi-Stationary Corotating Structure in the Interplanetary Medium” (1965) Journal of Geophysical Research, 70, 5793.
- Personal correspondence, Todd Hoeksema, Wilcox Solar Observatory
- Hannes Alfvén and Per Carlqvist, “Interstellar clouds and the formation of stars” (1978) in Astrophysics and Space Science, vol. 55, no. 2, May 1978, p. 487-509.
Our simplistic nine-planet view of the solar system was shattered years ago when scientists learned Pluto was not unique in the outer solar system. We have since discovered more “dwarf planets,” and an international team of astronomers has just spotted the most distant such planetoid yet. The object known as “Farout” is 120 times farther from the sun than Earth, putting it far beyond the orbit of Pluto.
NASA's Parker Solar Probe has made the closest-ever approach to a star (the sun) and shared an image of the sun's atmosphere on Twitter on Wednesday. NASA's image, captured Nov. 8, shows the corona, which is the sun's outer atmosphere, when the spacecraft was just 16.9 million miles from the star.
A newly released composite photo of the galaxy cluster Abell 1033, which lies about 1.6 billion light-years from Earth, bears a striking resemblance to the Starship Enterprise from "Star Trek."
Hubble went into safe mode when one of its three working gyroscopes failed, leaving mission managers with a weighty challenge: They could try getting a glitchy gyro working again, bringing the telescope’s pointing system back to its normal three-gyro mode. Otherwise, they would have to go to a one-gyro procedure for pointing at observational targets, and keep the second gyro in reserve.
Called the Event Horizon Telescope, or EHT, the project is “the biggest telescope in the history of humanity,” EHT director Shep Doeleman of the Harvard-Smithsonian Center for Astrophysics says in the book. EHT unifies far-flung radio telescopes through a technique called very long baseline interferometry, which involves combining the light waves spotted by each telescope to determine how the light adds up, through a process called interference.
A new calculation shows that if space is an ocean, we’ve barely dipped in a toe. The volume of observable space combed so far for E.T. is comparable to searching the volume of a large hot tub for evidence of fish in Earth’s oceans, astronomer Jason Wright at Penn State and colleagues say in a paper posted online September 19 at arXiv.org.
Japan’s Hayabusa2 spacecraft, which arrived at the near-Earth asteroid on June 27 after a journey of more than three years, released the MINERVA-II1 container from a height of about 60 meters (SN Online: 6/27/18). The container then released two 18-centimeter-wide, cylindrical rovers. Because Ryugu’s gravity is so weak, the rovers can hop using rotating motors that generate a torque and send them airborne for about 15 minutes.
Astronomers have finally found the last of the missing universe. It’s been hiding since the mid-1990s, when researchers decided to inventory all the “ordinary” matter in the cosmos--stars and planets and gas, anything made out of atomic parts. (This isn’t “dark matter,” which remains a wholly separate enigma.)
A massive number of new signals have been discovered coming from the notorious repeating fast radio source FRB 121102 - and we can thank artificial intelligence for these findings. Researchers at the search for extraterrestrial intelligence (SETI) project Breakthrough Listen applied machine learning to comb through existing data, and found 72 fast radio bursts that had previously been missed.
Evidence for Planet Nine continues to mount, but there may be a good reason why scientists have yet to find it - it may be hiding. In October 2017, NASA released a statement saying that Planet Nine may be 20 times further from the Sun than Neptune is, going so far as to say "it is now harder to imagine our solar system without a Planet Nine than with one."
On a clear, moonless night, you can look up and see the Milky Way. Actually, we are in the Milky Way, a spiral galaxy of 200 billion stars, one of which is our Sun. We are located in a spiral arm of that galaxy 26,000 light-years from its center. Our location seems to indicate many galactic coincidences.
At the center of the Milky Way (and perhaps all galaxies), there’s a black hole sending out lethal radiation to a distance of 20,000 light-years. Farther out than 26,000 light-years from the center, heavy elements that are vital to our existence and survival are scarce. We are in what astronomers call the “galactic habitable zone.”
Spiral galaxies rotate, and we are near the co-rotation spot where our solar system moves at almost the same rate as the spiral arm we are in. If we were in precisely the co-rotation spot, we would experience gravitational “kicks” which could send us out of the habitable zone. If we were far away from the co-rotation spot, we would fall out of the arm and be subjected to deadly radiation.
In the vast majority of spiral galaxies, the habitable zone and co-rotation spot do not overlap. Most other spiral galaxies are not as stable as ours. Most galaxies are not spiral galaxies and would not have a stable location for advanced life.
Furthermore, galaxies exist in clusters, and our cluster called the “Local Group” has fewer, smaller, and more spread-out galaxies than nearly all other clusters. Most galaxies are in dense clusters with giant or supergiant galaxies which create deadly radiation and gravitational distortion making advanced life impossible.
These are only a few of the many factors that “just happen to be” true of the place where we live. Are these just galactic coincidences? Some say it’s all accidental. We say it’s a grand design by a Master Designer. The next time you look up at the Milky Way, thank God that we are precisely where we are.
–Roland Earnst © 2018
Like Goldilocks’s third bowl of porridge, Earth’s just-right environment sits between that of frigid Mars and boiling Venus, largely thanks to our well balanced not-too-weak not-too-strong greenhouse effect. Now, a Harvard team suggests that methane may have warmed early Mars in a similar way.
Little doubt remains that ancient Mars was warmer and wetter than the frozen red world we see today. Signs of water erosion, riverbeds, lake basins, and even hints of a huge ocean all but confirm that at some point Martian temperatures regularly exceeded 32 degrees Fahrenheit (0 °C), which puts it in an unusual cosmic position.
"Early Mars is unique in the sense that it’s the one planetary environment, outside Earth, where we can say with confidence that there were at least episodic periods where life could have flourished," said Robin Wordsworth, assistant professor of environmental science and engineering at Harvard, in a press release.
The mystery is how that could be possible. Earthbound scientists are all too familiar with the mechanisms that warm our home planet, but despite accounting for 95 percent of what passes on Mars for an atmosphere, famous greenhouse gas carbon dioxide isn’t up to the task.
According to the Harvard press release, even if scientists crank up the atmospheric pressure to hundreds of times its current levels, Martian models just won’t melt. One challenge to liquid water was the fact that the young sun was almost a third less bright than it is today.
Dr. Wordsworth's team's insight was to consider other greenhouse gases, specifically methane. While only present at trace levels of a handful of parts per billion on Mars today, places such as Saturn's moon Titan have an abundance of the organic molecule.
By modeling the ways methane, hydrogen, and carbon dioxide behave together when warmed by sunlight, the researchers found that such an atmosphere could have indeed warmed the planet enough to maintain liquid water. Their findings were published Tuesday in the journal Geophysical Research Letters.
"This research shows that the warming effects of both methane and hydrogen have been underestimated by a significant amount," said Wordsworth in the press release. "We discovered that methane and hydrogen, and their interaction with carbon dioxide, were much better at warming early Mars than had previously been believed."
As for direct evidence, further on-the-ground investigation is needed. Nevertheless, some recent discoveries provide promising hints. In 2013, NASA rover Curiosity caught a whiff of methane when the atmospheric concentration spiked to ten times its normal level, which scientists said could have been the result of either biological or geological activity.
More recently, researchers crushed bits of Martian meteorites on Earth, and found they released surprisingly large amounts of methane gas. The team concluded that underground methane could support extremophile bacteria, Space.com reports.
Even if it turns out that early Mars was methane-poor, the Harvard scientists’ findings could have consequences that extend outside our solar system. The team explained in the paper: "Our results also suggest that inhabited exoplanets could retain surface liquid water at significant distances from their host stars."
While exoplanets are constantly expanding our understanding of how and where planets can form, exobiologists are particularly interested in those that exist in what’s sometimes called the "Goldilocks Zone," where surface temperatures are just right for water. If methane, hydrogen, or other common gasses can warm planets more easily than carbon dioxide can alone, that habitable zone may be wider than we thought.
"If we understand how early Mars operated, it could tell us something about the potential for finding life on other planets outside the solar system," said Wordsworth.
eso1214 — Science Release
Many Billions of Rocky Planets in the Habitable Zones around Red Dwarfs in the Milky Way
28 March 2012
A new result from ESO’s HARPS planet finder shows that rocky planets not much bigger than Earth are very common in the habitable zones around faint red stars. The international team estimates that there are tens of billions of such planets in the Milky Way galaxy alone, and probably about one hundred in the Sun’s immediate neighbourhood. This is the first direct measurement of the frequency of super-Earths around red dwarfs, which account for 80% of the stars in the Milky Way.
This first direct estimate of the number of light planets around red dwarf stars has just been announced by an international team using observations with the HARPS spectrograph on the 3.6-metre telescope at ESO’s La Silla Observatory in Chile. A recent announcement (eso1204), showing that planets are ubiquitous in our galaxy, used a different method that was not sensitive to the important class of exoplanets that lie in the habitable zones around red dwarfs.
The HARPS team has been searching for exoplanets orbiting the most common kind of star in the Milky Way — red dwarf stars (also known as M dwarfs). These stars are faint and cool compared to the Sun, but very common and long-lived, and therefore account for 80% of all the stars in the Milky Way.
“Our new observations with HARPS mean that about 40% of all red dwarf stars have a super-Earth orbiting in the habitable zone where liquid water can exist on the surface of the planet,” says Xavier Bonfils (IPAG, Observatoire des Sciences de l'Univers de Grenoble, France), the leader of the team. “Because red dwarfs are so common — there are about 160 billion of them in the Milky Way — this leads us to the astonishing result that there are tens of billions of these planets in our galaxy alone.”
The HARPS team surveyed a carefully chosen sample of 102 red dwarf stars in the southern skies over a six-year period. A total of nine super-Earths (planets with masses between one and ten times that of Earth) were found, including two inside the habitable zones of Gliese 581 (eso0915) and Gliese 667 C respectively. The astronomers could estimate how heavy the planets were and how far from their stars they orbited.
By combining all the data, including observations of stars that did not have planets, and looking at the fraction of existing planets that could be discovered, the team has been able to work out how common different sorts of planets are around red dwarfs. They find that the frequency of occurrence of super-Earths in the habitable zone is 41% with a range from 28% to 95%.
On the other hand, more massive planets, similar to Jupiter and Saturn in our Solar System, are found to be rare around red dwarfs. Less than 12% of red dwarfs are expected to have giant planets (with masses between 100 and 1000 times that of the Earth).
As there are many red dwarf stars close to the Sun the new estimate means that there are probably about one hundred super-Earth planets in the habitable zones around stars in the neighbourhood of the Sun at distances less than about 30 light-years.
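The "about one hundred within 30 light-years" figure is consistent with the quoted 41% occurrence rate once a local red-dwarf number density is assumed; the value of ~0.075 per cubic parsec used below is a commonly cited solar-neighbourhood estimate, not a figure from the release:

```python
import math

LY_PER_PC = 3.262        # light-years per parsec
N_MDWARF_PC3 = 0.075     # assumed local red-dwarf number density, per pc^3
F_SUPER_EARTH_HZ = 0.41  # HARPS habitable-zone occurrence rate, from the text

r_pc = 30 / LY_PER_PC                      # 30 ly in parsecs
volume_pc3 = 4 / 3 * math.pi * r_pc**3     # spherical volume around the Sun
n_planets = volume_pc3 * N_MDWARF_PC3 * F_SUPER_EARTH_HZ

print(round(n_planets))  # ~100
```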
"The habitable zone around a red dwarf, where the temperature is suitable for liquid water to exist on the surface, is much closer to the star than the Earth is to the Sun," says Stéphane Udry (Geneva Observatory and member of the team). "But red dwarfs are known to be subject to stellar eruptions or flares, which may bathe the planet in X-rays or ultraviolet radiation, and which may make life there less likely."
One of the planets discovered in the HARPS survey of red dwarfs is Gliese 667 Cc . This is the second planet in this triple star system (see eso0939 for the first) and seems to be situated close to the centre of the habitable zone. Although this planet is more than four times heavier than the Earth it is the closest twin to Earth found so far and almost certainly has the right conditions for the existence of liquid water on its surface. This is the second super-Earth planet inside the habitable zone of a red dwarf discovered during this HARPS survey, after Gliese 581d was announced in 2007 and confirmed in 2009.
“Now that we know that there are many super-Earths around nearby red dwarfs we need to identify more of them using both HARPS and future instruments. Some of these planets are expected to pass in front of their parent star as they orbit — this will open up the exciting possibility of studying the planet’s atmosphere and searching for signs of life,” concludes Xavier Delfosse, another member of the team (eso1210).
Correction (added 30 March 2012):
Please note that the original version of this press release incorrectly implied that the microlensing method was not sensitive to all planets around red dwarfs. This has now been corrected to say that it is not sensitive to planets in the habitable zones around red dwarfs.
HARPS measures the radial velocity of a star with extraordinary precision. A planet in orbit around a star causes the star to regularly move towards and away from a distant observer on Earth. Due to the Doppler effect, this radial velocity change induces a shift of the star’s spectrum towards longer wavelengths as it moves away (called a redshift) and a blueshift (towards shorter wavelengths) as it approaches. This tiny shift of the star’s spectrum can be measured with a high-precision spectrograph such as HARPS and used to infer the presence of a planet.
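The size of the shift HARPS must resolve is tiny. A sketch of the non-relativistic Doppler relation Δλ/λ = v/c described above, for an illustrative 1 m/s stellar wobble (roughly HARPS-class precision) on a 550 nm line — both values are assumptions for the example:

```python
C = 2.998e8  # speed of light, m/s

def doppler_shift_nm(wavelength_nm: float, v_radial_ms: float) -> float:
    """Non-relativistic Doppler shift of a spectral line, in nm."""
    return wavelength_nm * v_radial_ms / C

# A 1 m/s radial velocity shifts a 550 nm line by only ~2 femtometres
shift = doppler_shift_nm(550.0, 1.0)
print(f"{shift:.2e} nm")  # 1.83e-06 nm
```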
These stars are called M dwarfs because they have the spectral class M. This is the coolest of the seven classes in the simplest scheme for classifying stars accordingly to decreasing temperature and the appearance of their spectra.
Planets with a mass between one and ten times that of the Earth are called super-Earths. There are no such planets in our Solar System, but they appear to be very common around other stars. Discoveries of such planets in the habitable zones around their stars are very exciting because — if the planet were rocky and had water, like Earth — they could potentially be an abode of life.
The name means that the planet is the second discovered (c) orbiting the third component (C) of the triple star system called Gliese 667. The bright stellar companions Gliese 667 A and B would be prominent in the skies of Gliese 667 Cc. The discovery of Gliese 667 Cc was independently announced by Guillem Anglada-Escude and colleagues in February 2012, roughly two months after the electronic preprint of the Bonfils et al. paper went online. This confirmation of the planets Gliese 667 Cb and Cc by Anglada-Escude and collaborators was largely based on HARPS observations and data processing of the European team that were made publicly available through the ESO archive.
More information
This research was presented in a paper “The HARPS search for southern extra-solar planets XXXI. The M-dwarf sample”, by Bonfils et al. to appear in the journal Astronomy & Astrophysics.
The team is composed of X. Bonfils (UJF-Grenoble 1 / CNRS-INSU, Institut de Planétologie et d’Astrophysique de Grenoble, France [IPAG]; Geneva Observatory, Switzerland), X. Delfosse (IPAG), S. Udry (Geneva Observatory), T. Forveille (IPAG), M. Mayor (Geneva Observatory), C. Perrier (IPAG), F. Bouchy (Institut d’Astrophysique de Paris, CNRS, France; Observatoire de Haute-Provence, France), M. Gillon (Université de Liège, Belgium; Geneva Observatory), C. Lovis (Geneva Observatory), F. Pepe (Geneva Observatory), D. Queloz (Geneva Observatory), N. C. Santos (Centro de Astrofísica da Universidade do Porto, Portugal), D. Ségransan (Geneva Observatory), J.-L. Bertaux (Service d’Aéronomie du CNRS, Verrières-le-Buisson, France), and Vasco Neves (Centro de Astrofísica da Universidade do Porto, Portugal and UJF-Grenoble 1 / CNRS-INSU, Institut de Planétologie et d’Astrophysique de Grenoble, France [IPAG]).
The year 2012 marks the 50th anniversary of the founding of the European Southern Observatory (ESO). ESO is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive astronomical observatory. It is supported by 15 countries: Austria, Belgium, Brazil, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory and two survey telescopes. VISTA works in the infrared and is the world’s largest survey telescope and the VLT Survey Telescope is the largest telescope designed to exclusively survey the skies in visible light. ESO is the European partner of a revolutionary astronomical telescope ALMA, the largest astronomical project in existence. ESO is currently planning a 40-metre-class European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.
- Research papers: Bonfils et al. and Delfosse et al.
- Photos of the ESO 3.6-metre Telescope at La Silla
Université Joseph Fourier - Grenoble 1/Institut de Planétologie et d’Astrophysique de Grenoble
Tel: +33 47 65 14 215
ESO, La Silla, Paranal, E-ELT and Survey Telescopes Public Information Officer
Garching bei München, Germany
Tel: +49 89 3200 6655
Cel: +49 151 1537 3591
Today is the day that the Dawn mission completes a 7.5-year journey that has taken it past the orbit of Mars and into the asteroid belt, studying the second-largest asteroid, Vesta, before heading toward the dwarf planet Ceres, where it has now injected itself into orbit, as of 7:39 am EST.
This marks the first time in history a spacecraft has seen a dwarf planet up close, and with New Horizons passing Pluto in July, Dawn won the race in an astronomical photo finish.
The Story So Far
Launching on September 27th, 2007, Dawn orbited the Sun and used its ion propulsion thrusters to accelerate slowly out to the orbit of Mars. Ion propulsion is a highly efficient form of propulsion in space, using minimal fuel but producing a slow, nearly continuous acceleration. Dawn used its thrusters for approximately 80% of the time it was orbiting the Sun, before using Mars as a gravity assist in 2009 to push it further out into the asteroid belt.
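The "slow and nearly continuous" character of ion propulsion is easy to put into numbers. Here is a hedged back-of-envelope sketch; the thrust and mass figures below are approximate published values for Dawn's NSTAR engine and launch mass, assumed for illustration rather than taken from this article:

```python
# Back-of-envelope feel for ion propulsion, using approximate
# published figures for Dawn (assumed values, not from the article).
THRUST = 0.09       # N, roughly NSTAR's maximum thrust
MASS = 1240.0       # kg, approximate launch mass of Dawn

accel = THRUST / MASS                    # m/s^2
days_per_km_s = 1000.0 / accel / 86400   # days of thrusting per 1 km/s gained

print(f"acceleration ~{accel:.1e} m/s^2")
print(f"~{days_per_km_s:.0f} days of continuous thrust per km/s of delta-v")
```

A tiny acceleration, on the order of a hundred-thousandth of a g, but sustained over years it adds up: Dawn's total accumulated delta-v is reportedly on the order of 10 km/s, more than any earlier spacecraft achieved with its own propulsion.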
Two years later, Dawn arrived at Vesta, slowing down in order to insert into orbit around the ~500 km asteroid. It spent 14 months snapping photos and mapping the surface of Vesta in detail before leaving on September 4th, 2012, to head for Ceres.
Now that Dawn has begun its orbit of Ceres today, it will map the entire surface in detail, and begin studying surface features. Over time it will determine if the surface features are changing, giving evidence of any current geological activity. Ceres is also about 25% water by mass, so astronomers on Earth hope to gather some insight into how water behaves on the dwarf planet, in contrast to the earlier observations of the much drier Vesta.
10:07 EST UPDATE: Mission Engineers received confirmation of the orbit injection at 8:39 am EST, with indication that the spacecraft had entered orbit as planned.
When NASA’s new drone Dragonfly arrives on Titan, Saturn’s largest moon, it won’t roll across the surface like Curiosity, Spirit, and Opportunity have on Mars. Instead, Dragonfly is a dual-quadcopter (an eight-rotor craft) that will fly from point to point, using a vertical takeoff and landing (VTOL) system. It leverages existing drone technology we have on Earth to make the system work.
Titan is, in many ways, an ideal spot to try this kind of deployment. That moon’s combination of low gravity and a thick nitrogen-dominated atmosphere make it easy to fly in — or, at least, easy as things go when you’re flying a remote drone from nearly 800 million miles away and can’t make any mistakes.
XKCD addressed this concept in a substantial “What if” that evaluated all of the planets and moons in the solar system according to how well they’d support the flight of a Cessna 172 Skyhawk. In most cases, the plane would crash; sustained flight on Mars, for example, requires a ground speed of over Mach 1 just to take off. For Venus, XKCD author Randall Munroe notes, “Your plane would fly pretty well, except it would be on fire the whole time, and then it would stop flying, and then stop being a plane.”
But Titan? Titan is a different story. Munroe writes:
When it comes to flying, Titan might be better than Earth. Its atmosphere is thick but its gravity is light, giving it a surface pressure only 50 percent higher than Earth’s with air four times as dense. Its gravity—lower than that of the Moon—means that flying is easy. Our Cessna could get into the air under pedal power.
In fact, humans on Titan could fly by muscle power. A human in a hang glider could comfortably take off and cruise around powered by oversized swim-flipper boots—or even take off by flapping artificial wings. The power requirements are minimal—it would probably take no more effort than walking.
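Munroe's numbers can be sanity-checked with a one-line lift estimate: lift scales as air density times speed squared, while the weight it must balance scales with surface gravity, so the required takeoff speed scales as sqrt((g/g_E)/(ρ/ρ_E)). A minimal sketch follows; the Titan gravity value is an assumed approximate figure, while the 4x density ratio comes from the quote above:

```python
import math

# Approximate surface conditions (gravity values assumed for illustration)
G_EARTH = 9.81      # m/s^2
G_TITAN = 1.35      # m/s^2, slightly less than the Moon's 1.62
RHO_RATIO = 4.0     # Titan's air is roughly 4x as dense as Earth's

# Required takeoff speed scales as sqrt((g / g_E) / (rho / rho_E)),
# since lift ~ rho * v^2 must balance weight ~ m * g.
speed_ratio = math.sqrt((G_TITAN / G_EARTH) / RHO_RATIO)
print(f"Takeoff speed on Titan: {speed_ratio:.0%} of the Earth value")
```

A plane that rotates at around 60 knots on Earth would need barely a fifth of that speed on Titan, which is why human-powered flight looks plausible there.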
Designing a drone to fly remotely on a world where humans could take off under their own muscle power isn’t as difficult as engineering the same feat on Earth. Dragonfly will be an octocopter capable of surviving the loss of at least one rotor or motor. The aircraft should have a speed of ~36 km/h (22 mph) and can fly at up to 4 km in altitude, in temperatures as low as 94 K (-179 C). It uses a combination of batteries and a radioisotope thermoelectric generator to provide power. At night, the generator will recharge the batteries, which can then be used for another day of flying.
Solar power wasn’t an option for this mission; Titan only receives about 1 percent as much sunlight on its surface as Earth does, once the combined impact of distance and its thick nitrogen atmosphere are taken into account.
“Almost everyone who gets exposed to Dragonfly has a similar thought process. The first time you see it, you think: ‘You gotta be kidding, that’s crazy,’ ” Doug Adams, the mission’s spacecraft systems engineer, told NPR Tuesday. But, he says, “eventually, you come to realize that this is a highly executable mission.”
You can see a video of how Dragonfly will land below.
Dragonfly will have to fly autonomously; the delay between Earth and Titan is too large to allow for direct remote control. The aircraft won’t fly during Titan’s night (night on Titan is ~8 Earth days long).
During these periods of time, Dragonfly will collect and analyze samples, study seismology, monitor Titan’s weather, and perform local microscopic analysis with LED lights. It will carry a mass spectrometer, a gamma-ray and neutron spectrometer, meteorological sensors and equipment, and both microscopic and panoramic cameras for imaging. The mission is intended to allow Dragonfly to sample the materials at many different sites, scattered over far more terrain than the Martian rovers have been able to cover even after years of work.
Each NASA probe has expanded our understanding of the universe and given us a bigger, better window into the worlds that make up our solar system. Each new generation of probe has improved and expanded on the scientific capability of the one that came before. The Cassini-Huygens probe already vastly expanded our understanding of Saturn and its moon, Titan. Now, Dragonfly may tell us whether the chemical soup on Titan — which resembles Earth in its earliest days, albeit at a much lower temperature — is capable of producing any analogs to life, or chemical processes we can identify as part of the expected series of events for how life arose on Earth.
Even if we don’t find anything biological, however, Titan is still the only other world with sustained liquid on its surface. There are hydrological systems on Earth that may only be mirrored on Titan (albeit via liquid methane, not water). In some ways, it’s the closest thing to a mirror of our own planet that we know of, and the only one we can reach with current rocket technology.
Dragonfly is set for a 2026 launch and will arrive at Titan in 2034.
- Astronomers Find Evidence of Seasonal Weather on Titan
- Titan’s Lakes May Have ‘Bathtub Rings’ of Bizarre Crystals
- NASA New Frontiers Finalists: Titan Quadcopter, Comet Sample Return Mission
The HARPS team, led by Michel Mayor (University of Geneva, Switzerland), announced the discovery of more than 50 new exoplanets orbiting nearby stars, including sixteen super-Earths (planets with a mass between one and ten times that of Earth), one of which orbits at the edge of the habitable zone — a narrow zone around a star in which water may be present in liquid form if conditions are right. This is the largest number of such planets ever announced at one time, bringing the total number of planets discovered outside our solar system to 645. More than 1,200 exoplanet candidates have been found by NASA’s Kepler mission using an alternate method. To date, HARPS has discovered two super-Earths that may lie within the habitable zone.
In the eight years since it started surveying stars like the Sun, HARPS has been used to discover more than 150 new planets. ”In the coming ten to twenty years we should have the first list of potentially habitable planets in the Sun’s neighbourhood. Making such a list is essential before future experiments can search for possible spectroscopic signatures of life in the exoplanet atmospheres,” concludes Dr. Mayor, who discovered the first-ever exoplanet around a normal star in 1995.
The results were presented yesterday, September 12, at the conference on Extreme Solar Systems held at the Grand Teton National Park, Wyoming, USA.
The first real search for extraterrestrial life was a reconnaissance of the moon. In the early 1600s, Johannes Kepler trained a low-grade telescope on that pockmarked rock, and figured that the roundish craters he saw were the domes of subterranean cities, teeming with lunar inhabitants.
Kepler was badly mistaken, of course, though that wasn't obvious for another century when better telescopes suggested that the dry, airless moon was likely to be deader than Latin. But today there's a renewed interest in satellite worlds — moons — as homes to life.
Phil Sutton, an astrophysicist at the University of Lincoln in England, recently pointed out that moons around gas giant planets — planets similar to Jupiter and Saturn — could be bristling with biology.
Why should anyone think that?
In the past few decades, we've learned that moons don't have to rely on starshine to stay warm. If they're in orbit around a king-size planet — even one that's at a great distance from their sun — they could still have reservoirs of liquid water and thus the potential for life as we know it. That's because the gravitational pull holding these moons in check isn't necessarily constant but varies slightly depending on their shape, their orbit and whether they have sibling moons nearby.
As a consequence, such moons are subject to periodic stretching and squeezing. The distortions are small — typically on the order of a few meters — but this never-ending moon massage produces internal warming, just like kneading bread dough causes it to heat up slightly.
The surprising bottom line is that even moons that are captive to planets in the frigid outer realms of a solar system might be sufficiently heated to have liquid-water oceans below their surface, rather than a layer of rock-hard ice. Picture deep, pitch-black seas, barely above freezing and laced with salts. While that's less than ideal for your personal lifestyle, microbes — with their lower standards — might consider these subsurface oceans perfectly satisfactory.
So when scientists think about where they might find biology in our own neighborhood, as opposed to other star systems, they're particularly enthused by three moons of Jupiter — Europa, Callisto and Ganymede — and two of Saturn — Enceladus and Titan. Liquids are thought to ebb and flow in, or on, all of these small orbs. And in the more than four billion years since they were whelped, it's hardly unthinkable that some might have spawned life. So, if you also count Mars, that makes a half-dozen nearby extraterrestrial worlds where life may be moving and shaking.
Sutton's point is this: Moons seem to be as numerous as gnats (in our own solar system there are nearly 40 times as many known moons as planets). Presumably, they're also plentiful in other solar systems. And any that are relatively big and in orbit around large planets will have a source of heat no matter how far they are from their host star. So we should consider these moons in our searches for life, and not just restrict our attention to star systems that might have an Earthlike planet.
That's easier said than done, of course. Just finding a so-called exomoon (a natural satellite orbiting a planet in a far-off solar system) is a crushing technical challenge. So far only one candidate has been reported, a Neptune-size satellite believed to orbit Kepler 1625b, a planet that's a daunting 8,000 light-years from Earth. Even if we're right about this exomoon, how might we ever learn if it has life — especially if that biology exists in an underground aquifer? That's a problem left for the student, and most likely a student a generation or two hence.
Extraterrestrial microbes may not fully float your boat, of course. Anyone who's seen the movie "Avatar" might find the idea of a Pandora-like moon more tantalizing. This fictional world was home to a species of blue-skinned aliens who traded in the exotic element unobtanium (whatever that is) to build up their foreign reserves. This sort of extraterrestrial would likely not arise in the buried oceans of a satellite like Europa, but Pandora was much bigger than the moons in our own solar system. Under the right circumstances, a large moon could have surface oceans and an atmosphere, although maybe no unobtanium.
There are roughly a trillion planets in our galaxy. The number of moons is probably close to 100 trillion. That's a lot of real estate, and it's probably not all lifeless. Indeed, as Sutton points out, it's entirely possible that the zip code of most cosmic life is on a moon, not a planet.
Want more stories about space?
- New Hubble Space Telescope photo is a 'living history book' of our universe
- The moon is shrinking, and a new study shows it's racked by moonquakes
- Scientists finally solve the mystery of weird aurora-like lights in the sky
Scientists led by Professor Kerry Sieh, from the Earth Observatory of Singapore, are poring over complex data. They’re hunting for the impact crater of a massive meteorite that slammed into the Earth’s surface around 800,000 years ago. Frustratingly, the crater’s location has eluded scientists for over a century. But Sieh and his researchers believe they finally have an answer to this enduring mystery.
Even without a known crater, researchers know that a large meteorite crashed into Earth some 790,000 years ago. And they can say this with confidence because traces left behind by the impact provide concrete evidence. These remnants left from the meteorite’s impact are called tektites.
And tektites are created by the massive blast when a meteorite hits our planet’s crust. For the impact generates temperatures so high that the earth’s own rock can be transformed into molten liquid. This melted rock is then shot upwards into the air before cooling, falling back to the ground as tektites and scattering widely.
Crucially, scientists can trace tektites and calculate their age. The distribution of the tektites can also indicate the likely location of an impact. So, for example, researchers know that a meteorite hit Earth around 790,000 years ago and that its size was more than one mile across. But, until very recently, they have been unable to pin down exactly where this massive rock from space landed.
Of course, meteorites and asteroids have been smashing into Earth from outer space for millions of years. In case you don’t know, an asteroid is a large lump of rock in space, which orbits the Sun. A meteoroid is a smaller rock, also spinning round the Sun. But a meteor is a meteoroid that has disintegrated in Earth’s atmosphere – also known as a shooting star. And a meteorite is what lands on Earth’s surface after a large meteoroid (or small asteroid) fails to fully vaporize.
Now, a meteor is largely harmless since it falls apart before hitting the ground. On the other hand, a meteorite can cause huge disruption to the planet if it’s big enough. And asteroids, or comets, can be something else altogether. For instance, the asteroid (or comet) that scientists believe landed 66 million years ago was so violent that it led to the extinction of the dinosaurs, as well as many other animals and plants.
That giant rock, estimated to have been anywhere between six and 50 miles across, left a crater more than 90 miles wide. And that crater is called Chicxulub and was discovered in the 1970s during an oil exploration operation in Mexico’s Yucatan Peninsula. As recently as 2016, analysis confirmed the identity of the crater to the satisfaction of most researchers.
But the meteorite that caused our mystery crater was far smaller than the rock that caused the Chicxulub crater, as well as more recent. As we’ve learnt, it landed only 790,000 years ago and is estimated to have been a bit more than a mile across. However, even this much smaller rock would have caused devastation in the region where it hit.
Indeed, when that meteorite crashed into the ground, it cast tektites across the planet’s Eastern hemisphere, over the lands of Australasia. In fact, the impact was such that the tektites from the blast were distributed over some 20 percent of the Eastern hemisphere. That’s equivalent to around ten percent of the Earth’s entire surface, a huge area.
Anyway, Dr. Sieh and his team described the problem they faced in a paper published in the academic journal Proceedings of the National Academy of Sciences in January 2020. They wrote, “A field of black glassy blobs, strewn across about 20 percent of Earth’s Eastern Hemisphere, resulted from the impact of a large meteorite about 790,000 years ago.”
And the authors continued, “The large crater from which these tektites originated has eluded discovery for over a century…” But the distribution of the tektites provided essential clues. As the researchers explained, “Evidence has long pointed to a location somewhere within Indochina, near the northern limit of the strewn field.”
So Sieh and his colleagues had limited the area of search for the crater. But that still left an enormous amount of territory to scan – a large chunk of Indochina. Those tektites, millions of them, had covered an even wider area, stretching from Antarctica to Southeast Asia and also landing across much of the Pacific and Indian Oceans. So narrowing it down to Indochina was already a worthwhile achievement.
Now, this spread of tektites across the planet’s surface is known as the Australasian strewn field. And strewn field is the scientific term for the area covered by tektites from a meteorite impact, with this one reportedly being the youngest on the planet. That makes our meteorite the biggest impact on Earth over the last million years or so.
So just how often do significant meteorites smash into our planet? In fact, material from outer space is constantly raining down on us, although we are blissfully unaware since no harm is caused. According to NASA, around 100 tons of particles the size of a sand grain and other dust comes into our atmosphere each day.
And once a year on average, a rock the size of a car zooms into Earth’s atmosphere. But even that is not dangerous, since a meteor that size will entirely burn up in the atmosphere. However, around every 2,000 years, a meteoroid 300 feet across will penetrate the atmosphere, hit the Earth and cause local damage.
But as we go up the scale, the impact of a rock – regardless of its classification – becomes increasingly dramatic. And NASA reckons that once every few million years one large enough to wipe out civilization is likely to hit us. Just think of the fate of the dinosaurs 66 million years ago. And recall that our meteorite from 790,000 years ago was probably a little over a mile across.
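The NASA figures quoted above trace out a rough power law: the typical interval between impacts climbs steeply with impactor size. A hedged sketch of the implied exponent follows; taking a "car" as ~4 m across and "every few million years" as ~2 million years are illustrative assumptions, not figures from the text:

```python
import math

# (diameter in meters, mean interval in years) -- rounded from the text;
# the ~4 m car and ~2-million-year interval are illustrative assumptions.
events = [(4.0, 1.0), (90.0, 2000.0), (1600.0, 2e6)]

# Fit the exponent b in interval ~ D**b between successive points.
exponents = []
for (d1, t1), (d2, t2) in zip(events, events[1:]):
    b = math.log(t2 / t1) / math.log(d2 / d1)
    exponents.append(b)

print([round(b, 2) for b in exponents])  # both segments come out close to 2.4
```

Both segments give nearly the same exponent, which is why a single power law is often used to summarize impact risk across many orders of magnitude in size.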
As we’ve seen, tektites help researchers to identify crater sites. Interestingly, the word comes from the Greek tektos, and it was Austrian geologist Franz Eduard Suess that first used the term. In fact, tektites can be confused with glass-like pebbles formed by volcanic action. But careful analysis identifies them by their unique chemical make-up and by their extremely low water content.
Although the term tektite was coined relatively recently, people have spotted these strange pebbles for thousands of years. You see, the first recorded instance of them comes from China around 900 B.C. One Liu Sun described what were almost certainly tektites calling them “inkstones of the Thundergods,” according to the University of Texas at Austin website.
Speaking to CNN in January 2020 about the tektites from our meteorite, Prof. Sieh said, “Their existence means that the impacting meteorite was so large and its velocity so fast that it was able to melt the rocks that it hit.” However, as we’ve seen, despite all of the evidence from the tektites the precise location of the crater has remained elusive.
So we know that the tektites from that meteorite impact around 790,000 years ago came pouring down from the sky over a vast area covering thousands of miles. But by analyzing the spread of the tektites it should be possible to come up with a likely epicenter for the impact. However, in this case the size of the field of tektite landings has made it very difficult to pinpoint.
You might have thought that a meteorite more than a mile across would have created a crater that was quite easy to spot with the naked eye, or at least with satellite scanning. And the crater’s size had been estimated at a minimum of several miles across with a depth of hundreds of feet.
Now, Aaron Cavosie, a scientist with Curtin University in Perth, Australia, hit the nail on the head in an interview with The New York Times in January 2020. As Cavosie put it, “That’s a very difficult size hole to make go away.” But there are reasons why this crater may be so difficult to find.
You see, when a new “impact” crater is first formed on the Earth’s surface it will be easier to spot. But Earth’s surface is not a static body. Over millennia, it can change drastically. For instance, the movement of tectonic plates over the earth’s mantle can drastically reshape entire continents given enough time.
Furthermore, volcanoes can substantially remodel the planet’s surface as can large earthquakes. Remember, it wasn’t until the 1970s that the Chicxulub crater was found. And when it was, it was quite impossible to see with the naked eye. But in the case of our meteorite’s crater, the timescale is in hundreds of thousands of years rather than tens of millions. However, that’s still enough time for significant change.
And to add to the mystery, Southeast Asia, the most likely location for the crater, is a region that generally has low rates of surface change caused by erosion. Yet it’s still been extremely difficult to find this crater. And researchers had looked far and wide for it without success – until very recently.
As Prof. Sieh explained to CNN, “There have been many, many attempts to find the impact site and many suggestions, ranging from northern Cambodia, to central Laos, and even southern China, and from eastern Thailand to offshore Vietnam.” So it seems that those telltale tektites were not enough on their own to uncover the whereabouts of the missing crater.
So Sieh and his colleagues had to look at other strands of evidence in their latest hunt for the crater. Now, Prof. Sieh himself had spent many years looking for it. But until recently his searches have ended up in a series of frustrating blanks. So a new approach using different techniques was clearly called for.
As Prof. Sieh went on to explain to CNN, “Our study is the first to put together so many lines of evidence, ranging from the chemical nature of the tektites to their physical characteristics, and from gravity measurements to measurements of the age of lavas that could bury the crater.” So applying different scientific methods was the key to Sieh’s new research.
Indeed, using these different methods allowed Sieh and his colleagues to make a major breakthrough. You see, they’ve actually identified a site that they now believe is the location of the impact crater. And it’s in the Southeast Asian country of Laos at a place called the Bolaven Plateau. Their favored spot is in the south of the country, not far from the Mekong River.
And the detective work that’s allowed Prof. Sieh and his team to settle on this spot is fascinating. First of all, the plateau they’ve identified was covered in a heavy layer of volcanic lava, up to 1,000 feet deep. So although much of Southeast Asia has not been subject to drastic change on its surface due to erosion forces, a coating of lava could nevertheless easily cover a large crater.
The next step was to compare tektites from the Laos site with others from different locations in the Australasian strewn field. Well, researchers found that there was a good match, indicating that the Laos tektites were the product of the same meteorite strike as those in the wider strewn field.
Then Prof. Sieh’s team went on to calculate the dates of the lava flows at the site of the crater. Now, some of those flows had happened before the meteorite had crashed into the Bolaven Plateau. But others were younger than the date of the meteorite strike and therefore could have helped bury it.
After this, the experts pursued another line of evidence. Yes, they analyzed the strength of the gravitational field around the site of the suspected meteorite strike. And they did that because research has established that the gravity field over a crater is measurably weaker than elsewhere. Indeed, they did end up detecting a lower level of gravity.
That diminished gravitational field comes about because, over time, a crater is filled with fractured material that’s not so closely packed as the geology around the impact site. In fact, the results were consistent with an oval-shaped crater some 11 miles in length and eight miles in width. And Prof. Sieh and his colleagues calculated that the crater was around 300 feet deep.
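The size of gravity low that loosely packed crater fill produces can be estimated with the standard Bouguer slab approximation, Δg ≈ 2πG Δρ h. The density deficit and fill thickness below are illustrative assumptions for a sketch, not the team's measured values:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
delta_rho = -200.0     # assumed density deficit of fractured fill, kg/m^3
thickness = 100.0      # assumed thickness of low-density fill, m

# Bouguer slab approximation for the anomaly directly over the fill
delta_g = 2.0 * math.pi * G * delta_rho * thickness   # m/s^2
print(f"{delta_g * 1e5:.2f} mGal")  # 1 mGal = 1e-5 m/s^2
```

A few tenths of a milligal is small, but comfortably within the sensitivity of modern field gravimeters, which is why a gravity survey can flag a crater buried under lava.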
So the research team was definitely on the track of the mystery crater. Their next move was to search around the location they’d identified in Laos for the remnants of the meteorite. After all, the impact of a rock of that magnitude would certainly have scattered a large amount of debris around its crater.
It turned out that the scientists were in luck. For construction workers had dug into the flank of a hill within a few miles of the suspected impact site. And in doing so, they had unwittingly exposed just what the geologists wanted to examine. The workers revealed a pile of sandstone boulders which Sieh told The New York Times fitted together “like a jigsaw puzzle.”
One of Prof. Sieh’s colleagues, Vanpheng Sihavong, a geologist with the Laos Ministry of Energy and Mines, told The New York Times, “We calculated that they [the rocks] were ejected from the crater and landed at about 1,500 feet per second, fast enough to shatter them upon impact.”
And as a final test, the scientists analyzed some of the grains of quartz in the sandstone rocks. You see, they were looking for evidence of fracturing in the particles. Prof. Sieh told the Times, “We think we’ve found that.” Fractures like these are generally seen as evidence of the tremendous forces that can be generated by a meteorite impact.
Therefore, Prof. Sieh and the other scientists on his team had now examined multiple strands of evidence. And it was increasingly clear that they had, at the very least, identified a highly plausible location for this crater that people had been searching for over a century. So was this the definitive answer to the mystery of the crater’s location?
In fact, it’s not quite that yet. And Prof. Sieh himself is not ready to claim absolute certainty at this point. He told CNN that researchers must next “drill down a few hundred meters to see if the rocks below the lavas are indeed the rocks you’d expect at an impact site – that is, lots of evidence for melting and shattering.”
Our view of the Solar System has changed utterly in the last fifty years. Mention that at a cocktail party and your listener will assume you’re talking about Pluto, the demotion of which has stirred more response than any other recent planetary news. But in addition to all we’ve learned through our spacecraft, our view of the Solar System has gone from a small number of orbiting planets to huge numbers of objects at vast distances. Fifty years ago, a Kuiper Belt many times more massive than the main asteroid belt was only theory. And the early Solar System models I grew up with never included any representation of a vast cloud of comets all the way out to 50,000 AU.
We’ve also begun to learn that liquid water, once thought confined to the Earth, may be plentiful throughout the system. Caleb Scharf goes to work on this in a recent post in Life Unbounded, noting what our models are telling us about internal oceans on a variety of objects:
Much can be done with purely theoretical models that seek to determine the appropriate hydrostatic balance between an object’s own gravity and its internal pressure forces – be they from gaseous, liquid, or solid states of matter. Thermal energy from formation, and critically from radiogenic heating (radioactive decay of natural isotopes), all play a role. Throw in a few actual datapoints, measurements of places like Europa or Titan, and these models get much better calibrated. The intriguing thing is that one can play around with compositions and the internal layering of material in a planet-like body to find the best looking fit. As a consequence the nature and extent of any subsurface zones of liquid water can be estimated.
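The "hydrostatic balance" these models solve amounts to integrating dP/dr = -G m(r) ρ / r² inward from the surface. A minimal sketch for a uniform-density icy body follows; the radius and density are generic illustration values, not a fit to any particular moon:

```python
import math

G = 6.674e-11           # m^3 kg^-1 s^-2
R = 1.5e6               # assumed body radius, m (medium-sized icy moon)
RHO = 1500.0            # assumed uniform density, kg/m^3 (ice-rock mix)
N = 100_000             # radial integration steps

# Integrate hydrostatic equilibrium dP/dr = -G m(r) rho / r^2
# from the surface (P = 0) inward, using the enclosed mass m(r).
dr = R / N
pressure = 0.0
for i in range(N, 0, -1):
    r = i * dr
    m_enclosed = (4.0 / 3.0) * math.pi * r**3 * RHO
    pressure += G * m_enclosed * RHO / r**2 * dr

# For uniform density the analytic central pressure is 2*pi*G*rho^2*R^2/3.
analytic = 2.0 * math.pi * G * RHO**2 * R**2 / 3.0
print(f"numeric {pressure:.3e} Pa vs analytic {analytic:.3e} Pa")
```

Real interior models layer ice, liquid water and rock with different densities and couple in radiogenic heating, but this integration is the backbone they share.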
Detecting Interior Oceans
The numbers get to be striking, as Hauke Hussmann and colleagues show in a 2006 paper in Icarus. Start with Galileo, the mission to Jupiter that brought home how much we needed to modify our view of the giant planet’s moons. Galileo discovered secondary induced magnetic fields in the vicinity of Europa, Callisto and Ganymede, offering strong observational evidence for subsurface oceans on all three. The fields are thought to be generated by ions contained in the liquid water layer underneath the icy outer shells. Europa has, of course, become a prime target for future study re astrobiology thanks to the prospect of water combined with a possibly thin ice layer.
The Hussmann paper goes on to calculate interior structure models for medium-sized icy bodies in the outer Solar System, assuming thermal equilibrium between radiogenic heat produced by the core and the loss of heat through the ice shell. Now we really start expanding the picture: The paper shows that subsurface oceans are feasible not just on the now obvious case of Europa, but also on Rhea, Titania, Oberon, Triton and Pluto. A case can also be made for the Trans-Neptunian Objects 2003 UB313, Sedna and 2004 DW. And note this:
For the bodies discussed here, the liquid layers are in direct contact with the rocky cores. This contrasts with subsurface oceans inside the large icy satellites like Ganymede, Callisto, or Titan, where they are enclosed between ice-I at the top and high-pressure ice layers at the bottom. The silicate–water contact would allow the highly efficient exchange of minerals and salts between the rocks and the ocean in the interiors of those medium-sized satellites.
Image: Triton as seen by Voyager 2. Credit: NASA.
Interestingly, given the continued examination of the moon by the Cassini spacecraft, Enceladus does not fit the Hussmann model, the paper noting that sources other than radiogenic heating would be required to sustain such an ocean, the obvious option being tidal heating. We have much to learn about Enceladus (and the paper goes into issues regarding the orbital history of the moon, and the comparison between it and Mimas, where tidal forces are much stronger). But the upshot is clear: We need more observations to confirm whether subsurface oceans are a common phenomenon in our system’s moons and icy bodies like Trans-Neptunian Objects.
Oceans in the Outer Dark
Hussmann and colleagues assume that subsurface reservoirs on these outer worlds are located beneath a thick ice shell of more than 100 kilometers in thickness — thick enough, in fact, that there is little link between internal oceans and surface features. But study of the interaction between these internal oceans and the surrounding magnetic fields and charged particles, and the response of the objects to tides exerted by the primary, may help us to confirm whether the oceans exist. There’s work here for generations of spacecraft, but get the model right early on and we can make reasonable extrapolations about water’s ubiquity.
The paper’s model, say its authors, is not applicable to Ganymede, Callisto and Titan, but I see that Scharf’s article cites Titan as having possibly ten times the volume of Earth’s oceans in water. This is lively stuff. Quoting Scharf: “…from these bodies alone there could readily be 10 to 16 times more liquid water slurping around off-Earth than on it.” Then factor in those Trans-Neptunian Objects, add the prospect of radiogenic heating, and you wind up with at least the possibility that TNOs could be the largest source of liquid water in the entire Solar System.
Did I say our view of the Solar System has changed? This revolution continues as we push into the Kuiper Belt. New Horizons, it’s hoped, will locate a small Trans-Neptunian Object for study at some point during its journey past Pluto/Charon, but eventually we can hope for the kind of instrumentation around outer planet satellites and other objects that will help us understand their internal composition. If the prospect of internal water bears out on the kind of scales mentioned above, then we have astrobiological potential, even if faint, all the way into the Kuiper Belt.
The paper is Hussmann et al., “Subsurface oceans and deep interiors of medium-sized outer planet satellites and large trans-neptunian objects,” Icarus Vol. 185, Issue 1 (2006), p. 258-273.
A Farewell to Plutoshine
Sometimes, it’s not the eye candy aspect of the image, but what it represents. A recent image of Pluto’s large moon Charon courtesy of New Horizons, depicting what could only be termed ‘Plutoshine’, caught our eye. Looking like something from the grainy era of the early Space Age, we see a crescent Charon, hanging against a starry background…
So what, you say? Sure, the historic July 14th, 2015 flyby of New Horizons past Pluto and friends delivered images with much more pop and aesthetic appeal. But look closely, and you’ll see something both alien and familiar, something that no human eye has ever witnessed, yet you can see next week.
We’re talking about the reflected ‘Plutoshine‘ on the dark limb of Charon. This over-exposed image was snapped from over 160,000 kilometers distant by New Horizons’ Ralph/Multispectral imager looking back at Charon, post flyby. For context, that’s just shy of half the distance between the Earth and the Moon. “Bigger than Texas” (Cue Armageddon), Charon is about 1200 kilometers in diameter and 1/8th the mass of Pluto. Together, both form the only true binary (dwarf) planetary pair in the solar system, with the 1/80th Earth-Moon pair coming in at a very distant second.
We see reflected sunlight coming off of a gibbous Pluto which is just out of frame, light that left the Sun 4 hours ago and took less than a second to make the final Pluto-Charon-New Horizons bounce. You can see a similar phenomenon next week, as Earthshine or Ashen Light illuminates the otherwise dark nighttime side of the Earth’s Moon, fresh off of passing New phase this weekend. Snow and cloud cover turned Moonward can have an effect on how bright Earthshine appears. One ongoing study based out of the Big Bear Solar observatory in California named Project Earthshine seeks to characterize long-term climate variations looking at this very phenomenon.
Standing on Pluto, you’d see a 3.5 degree wide Charon, 7 times larger than our own Full Moon. Of course, you’d need to be standing in the right hemisphere, as Pluto and Charon are tidally locked, and keep the same face turned towards each other. It would be a dim view, as the Sun shines at -20 magnitude at 30 AU distant, much brighter than a Full Moon, but still over 600 times fainter than sunny Earth. Dim Plutoshine on the nightside of Charon would, however, be easily visible to the naked eye.
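As a rough check, the 3.5-degree figure above follows from simple trigonometry. This sketch assumes a Pluto-Charon separation of about 19,600 km and the usual Earth-Moon figures, values not given in the article:

```python
import math

def angular_size_deg(diameter_km, distance_km):
    """Apparent angular diameter, in degrees, of a body seen from a distance."""
    return 2 * math.degrees(math.atan((diameter_km / 2) / distance_km))

# Charon's diameter (1200 km) is from the article; the Pluto-Charon
# separation (~19,600 km) and the Moon's figures are assumed values.
charon = angular_size_deg(1200, 19600)
full_moon = angular_size_deg(3474, 384400)
print(f"Charon from Pluto: {charon:.1f} deg, about {charon / full_moon:.0f}x the Full Moon")
```

Both numbers land close to the article's "3.5 degree" and "7 times larger" claims.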
A small 6 cm instrument, Ralph images in the visual to near-infrared range. Ralph complements New Horizons’ larger LORRI instrument, which has a diameter and optical configuration very similar to an amateur 8-inch Schmidt-Cassegrain telescope.
Don’t look for Pluto now; it just passed solar conjunction on the far side of the Sun on January 7th, 2017. Pluto reaches opposition and favorable viewing for 2017 on July 10th, one of the 101 Astronomical Events for 2017 that you’ll find in our free e-book, out from Universe Today.
And for an encore, New Horizons will visit the 45-kilometer-wide Kuiper Belt Object 2014 MU69 on New Year’s Day 2019. From there, New Horizons will most likely chronicle the environs of the distant solar system, as it joins Pioneer 10 and 11 and Voyagers 1 and 2 as human-built artifacts cast adrift along the galactic plane.
And to think, it has taken New Horizons about 18 months for all of its flyby data to trickle back to the Earth. Enjoy, as it’ll be a long time before we visit Pluto and friends again.
The problem of Polaris has been a stubborn one for those investigating the Flat Earth. The crux of this problem has to do with viewing angles, distances and elevation.
Using trigonometry, one should be able to measure the height of any object from any particular distance. Unfortunately, the math just doesn’t seem to add up. Using the supposed radius of the Earth – which is 3,959.16 miles – we should be able to figure out the height of Polaris based upon the viewing angle and distance from the North Pole. The two assumptions – the radius of the Earth and the distance from the North Pole – are generally agreed-upon values among both FE (Flat Earth) and GE (Globe Earth) people.
The viewing angle (“VA”) is where the problem starts – and ultimately will be resolved. GE theory states that the viewing angle of Polaris is equal to the particular latitude from which the observer views Polaris (i.e. the 49th parallel has a viewing angle of 49°). The distance from the 49th parallel to the North Pole is 2,597.55 miles, or the radius of the Earth at that parallel. In the GE theory, the viewing angle is dependent upon the curvature of the Earth.
In the traditional FE view, Polaris is approximately 3600 miles above the North Pole. However, the viewing angle from the Equator is supposed to be 1° but according to traditional trigonometry, the viewing angle should be around 42° – Hence the paradox (or in GE theory, proof of a globe).
In examining this problem, I began by using a classic trigonometry set and drew, in 10° increments, the viewing angles from an object at 3600 miles above the North Pole. Several interesting anomalies appeared that, in the end, helped me resolve this problem.
You can see from this image that the distances between the viewing angles are not equal. In GE theory, the distances between viewing angles are equal since the curvature is doing the work. I summed up these observations as follows:
- distances between degrees on a sphere/circle are equal (degrees of parallel)
- distances between degrees on a flat plane are not equal (this is important)
- As the height of an object above a flat plane decreases, the angle of view decreases, tending towards zero as the distance tends towards infinity (law of perspective using geometry). As the observer increases distance from the object, the angle of view decreases.
- The viewing angle is inversely proportional to the distance from the object. As the viewing angle doubles, the distance to the object is reduced by half.
- An object 3,959 miles above a flat plane would have a viewing angle of 10° at a distance of 22,962.2 miles.
Observations 1-4 are all perfectly logical and fit well with the FE model. However, the 5th observation does not fit with known distances, whether FE or GE. There is the possibility the FE model is incorrect, but direct observations have shown that there is no curvature. We are right back in the middle of the paradox.
In an effort to resolve this confounding riddle, I began to model distances, heights and viewing angles in Excel and look for patterns or answers of some kind. After a few weeks of tinkering I developed this model:
There are 2 assumptions in the model:
- The radius of the Earth (3,959.16 miles). All other numbers are generated using standard trigonometry and are without opinion or conjecture.
- There are 90° between the North Pole and the Equator
The model is defined by 1° increments (1-89) and uses TAN, COS and ATAN functions to obtain either an angle or a distance. There are two main sections separated by a blue line. The left-hand section takes each viewing angle (starting at 1°) and uses a TAN function (H/TAN(VA)) to derive the distance. For example, an object that is 69.101 miles above the observer would have a VA of 1° and a distance of 3958.79 miles. This equation is applied to each VA up to 89°.
I noticed that the VA and the distance are related to each other (see observations 3 & 4) up to and including 32°. After that, the relationship doubles and the distance an observer is required to travel to double the VA is 4x the distance. I added a column that calculates the distance whenever the distance doubles, starting at 1°. The distances correlate well but not perfectly. Plus, any differences increase as the distance decreases up to 32° and then return to normal after that. The distances are variable and change as the height of the object changes. By doing this, the actual VA is maintained and the distance alters the equation mentioned above. Another column (Apparent VA) was finally added, but I will return to that one later as it is directly related to the solving of the paradox.
The right-hand section uses the radius of the Earth as the fixed value (rather than the VA on the left side). To obtain the actual viewing angle based upon distance from the object, I used an ATAN function – ATAN(H/R). The common value between both sides is the object height. I derived the radius of the Earth at each degree by the following equation: [Radius of Earth*COS((Degree)/180*3.14159)]. This essentially flattens out the Earth into a series of concentric circles.
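The two spreadsheet sections described above can be sketched outside Excel as well. This is a minimal Python translation of the formulas exactly as the text states them; the function names are mine:

```python
import math

R_EARTH = 3959.16  # miles, the model's fixed assumption

def distance_for_va(height_mi, va_deg):
    """Left-hand section: distance H/TAN(VA) at which an object of a
    given height shows viewing angle VA above a flat plane."""
    return height_mi / math.tan(math.radians(va_deg))

def actual_va(height_mi, parallel_deg):
    """Right-hand section: ATAN(H/R), with R*COS(degree) as the ground
    distance, flattening the parallels into concentric circles."""
    ground_mi = R_EARTH * math.cos(math.radians(parallel_deg))
    return math.degrees(math.atan(height_mi / ground_mi))

# The worked example from the text: an object 69.101 miles up
print(distance_for_va(69.101, 1))  # ~3958.79 miles, matching the text
```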
Once all these relationships were in place, all I had to do was change one single value – the height – to see how the entire model behaved. The biggest pattern that I observed was the “compression” of VA as the distance and height increased. For example, at a relatively low height of 2 miles, all of the distances and VA matched the degree relative to the equator. However, as the height increased the VA began to “compress” at the lower VA values. As I increased the height, the VA differential (difference between the degrees from the equator and the actual VA) increased. You can observe the graph “Angle Differential” begin to form a SIN wave as the height increases. I haven’t taken the VA into decimal increments at the top and the bottom but my guess is that the pattern repeats.
So how important is this VA “compression”? As it turns out, it makes all the difference in the world. The model suggests that VA on a flat plane do not operate the same as VA on curved surfaces. As the distance from the object increases, the change in the actual viewing angle per degree increases at a slower rate. Take the above example of 69.101 miles – the rate of change from 1° to 16° is only 1° of VA (33° to 34°). As we can see, the VA does not increase at the same rate as the degrees from the equator. Therefore, on a flat plane we would expect a variable VA per degree from the equator whereas on a curved surface it would not be variable.
At a height of 69.101 miles, the VA from the equator is equal to 1°. However, as the height increases from this point, the VA becomes “compressed”. What do I mean by “compressed”? If you examine the “actual viewing angle” column on the right side of the model, you will notice that for the first 70° there is only a 3° change in VA. Within those first 3° the observer will notice very little change in the height of the object, even over a great distance. It is only in the last few miles (from 76° to 89°) that any real movement in the object would be noticeable.
The phenomenon becomes even more exaggerated as the object increases in height. As I continually increased the height, I found that the height of Polaris would be 2,513.5 miles above the North Pole. At this height, the VA from the equator would actually be between 32° and 33°; all the remaining degrees are hidden from view since they are “compressed” into a small area below that degree. Of course no actual compression is happening but it is a phenomenon of perspective on a flat plane.
How can this be possible?
I was contemplating the problem of “compressed” VA, but I couldn’t find anything that worked with the trigonometry – until I saw this awesome video by p-brane:
This video provides the mechanism with which the VA becomes “compressed” for objects near the horizon.
The human eye and perspective
An important piece to this puzzle is within the nature of the human eye. I have included two major references that the reader can take the time to read. The first is from Ian P. Howard (Perceiving in Depth, Volume 3: Other Mechanisms of Depth Perception, Chapter 26.4.1, Effect of Height in the Field of View) and the second is Zetetic Astronomy, by ‘Parallax’ (pseud. Samuel Birley Rowbotham), chapter 14.
When looking at the horizon with the naked eye, (as opposed to using a telescope or binoculars) there are various laws of perspective that need to be considered.
Let’s take another example with an object at a height of 2.571 miles above the observer. To achieve a VA of 1° the observer would need to be 147.29 miles from the object. The observer would then have to move half that distance – 73.65 miles – to achieve a VA of 2°. However, if the observer traveled half the distance again – 36.82 miles – the VA would become 4°. This continues at the same rate until the VA is 32° at which the observer is merely 4.60 miles from the object. To achieve a 64° VA the observer would have to travel 4 times the distance – 1.15 miles from the object.
As we can see, the non-linear changes would have a direct impact on the VA based upon distance. We can put this model into practice through the observation and measurement of distant objects in relation to their height. For example, from Vancouver, BC, the distance to Mount Baker is approximately 68.39 miles. This would mean the VA would be approximately 1.71°. As an aside, if we assume a curvature of the planet, 1/3rd of Mt. Baker should be below the horizon when observed from Vancouver. In fact, 3,184 feet of 10,781 feet of the mountain would be below the horizon. As anyone from Vancouver has seen with their own eyes, the entire height of Mt. Baker can be seen (from base to peak) from 68.39 miles.
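For comparison, the globe-model figure quoted above can be reproduced with the standard hidden-height calculation. This sketch ignores atmospheric refraction and, by default, observer height, both of which shift the result by a few hundred feet:

```python
import math

R_MILES = 3959.16  # the same Earth radius used throughout the text

def hidden_height_ft(distance_mi, observer_ft=0.0):
    """Feet of a distant object hidden below the geometric horizon of a sphere."""
    to_horizon_mi = math.sqrt(2 * R_MILES * observer_ft / 5280)
    beyond_mi = max(distance_mi - to_horizon_mi, 0.0)
    return (beyond_mi ** 2) / (2 * R_MILES) * 5280

print(hidden_height_ft(68.39))  # ~3,100 ft, in the range of the 3,184 ft quoted
```

Raising the observer reduces the hidden portion, which is one source of the spread in published figures.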
Take into consideration that 97% of the distance between Vancouver and Mt. Baker is traveled in the first 3rd of the VA (up to 32°). The remaining 68° of VA occur during the last 3% of the journey. This will have an impact on how the human eye perceives objects at a distance. The VA is not constant. Because of this, objects in the sky will appear higher or lower than they really are.
For example, an airplane flying overhead at 500 miles/hr will approach the observer slowly at first and then begin to accelerate as the distance decreases. The airplane will reach maximum velocity (from the point of view of the observer) when it is directly overhead. The plane will then begin to decrease velocity as it moves away. We all know that the speed of the plane has not changed but the VA is changing. The more distance the plane gets the slower it appears to go. If we take this example and apply it to static objects (like a mountain) the same rules apply. However, the change in VA is due to the observer. As the observer approaches the mountain, the VA changes but at an inconsistent rate. At a great distance the mountain will appear to “rise” up from the horizon at a very slow rate until the first 32° of VA are completed. After that the mountain will begin to “rise” at an accelerated rate.
If we now apply these observations to Polaris, we can see that 97% of the VA is far behind the Equator. According to the model, the star will rise at a much faster and more consistent rate after 32° (which is the actual degree of parallel of the Equator). You have to imagine an observer that is 147,298 miles from the North Pole. At that distance the actual VA is 1°. It would take 97% of the journey before the star would begin to “rise” up from the horizon. After that point, the star “rises” at a relatively consistent rate (albeit not at 1° per degree of parallel – but close).
Another important observation that needs to be taken into consideration is the human eye itself. It is documented that the total VA that the eye is able to perceive, if the observer is looking directly at the horizon, is 60°. In other words, the total field of view is only 60°. This field of view also contains the ground beneath our feet. So within that 60° we have 100% of the field of view.
It is important to note that the model being presented is scalable for any object at any height (assuming the radius of the Earth is valid). This means we can use this model to accurately map the surface of the Earth using an object at a constant height (i.e. Polaris).
‘Young’ stars that seem to have formed impossibly close to our galaxy’s supermassive black hole could in fact be ancient interlopers merely masquerading as youngsters, a new study claims.
Several clusters of what appear to be massive young stars have been found just a few dozen light years from the black hole at the centre of the galaxy. Watch a video that zooms in on one such cluster, called Arches.
But that is puzzling, since astronomers think the black hole’s intense gravity should rip apart gas clouds before they have a chance to condense and form stars (although some recent work has disputed this). At the same time, such massive stars are too short-lived to have survived a journey from much farther out.
Now, Douglas Lin of the University of California in Santa Cruz and Stephen Murray of the Lawrence Livermore National Laboratory in Livermore, California, both in the US, have proposed a third possibility.
They say the clusters are actually old and look young only because they are collecting a lot of fresh gas in the region.
They believe the clusters migrated to the galactic centre from elsewhere, gradually spiralling inwards as they lost energy due to the drag exerted by stars and gas they encountered in their orbits. Although most of the stars in each cluster would be ejected in the migration process, the cluster’s inner core could survive all the way to the black hole’s neighbourhood.
Once there, compact stellar corpses called white dwarfs and neutron stars, which are known to collect in the centres of massive star clusters, could absorb material from nearby gas clouds. The corpses would heat up and glow where the gas fell on them, making them appear as bright as young, massive stars.
“You don’t need to accrete very much before the individual stars start to become quite luminous,” Lin told New Scientist.
Lin says the scenario avoids a problem associated with the idea that the clusters formed where they are now. Unless such star formation began only recently, successive generations of stars over the galaxy’s lifetime would have left behind a lot of heavy elements around the galactic centre – something that is not seen there.
But Mark Morris of the University of California in Los Angeles, US, who studies the galactic centre, is sceptical of this scenario. Aside from white dwarfs and neutron stars, an old cluster should contain some bloated, dying stars called red giants.
But no red giants have been found there. “So far there’s been no evidence for that and believe me, we’ve looked,” he told New Scientist.
Warren Brown of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, US, is also unconvinced. He says migrating clusters would likely be torn apart before reaching the galactic centre. “I tend to buy into this idea that maybe these clusters formed where they are now,” he told New Scientist.
Lin says the red giants could escape detection because they would be relatively faint, a problem compounded by the fact that this whole area is obscured by dust. And although clusters would tend to get torn apart on their way to the galactic centre, only a small percentage would need to survive to explain the massive clusters seen there, Lin’s co-author Stephen Murray says.
In old science fiction stories (1950's), one of the space travel themes was the use of solar sails for propulsion. The idea was that the photon pressure from the sun would push the sail (like wind sails) and move the spacecraft. What once was science fiction is now reality as solar sails are being developed and tested for modern space travel.
Photoelectric Effect and the Particle Nature of Light
In 1905 Albert Einstein (1879 - 1955) proposed that light be described as quanta of energy that behave as particles. A photon is a particle of electromagnetic radiation that has zero mass and carries a quantum of energy. The energy of photons of light is quantized according to the \(E = h \nu\) equation. For many years light had been described using only wave concepts, and scientists trained in classical physics found this wave-particle duality of light to be a difficult idea to accept. A key concept that was explained by Einstein using light's particle nature was called the photoelectric effect.
The photoelectric effect is a phenomenon that occurs when light shined onto a metal surface causes the ejection of electrons from that metal. It was observed that only certain frequencies of light are able to cause the ejection of electrons. If the frequency of the incident light is too low (red light, for example), then no electrons were ejected even if the intensity of the light was very high or it was shone onto the surface for a long time. If the frequency of the light was higher (green light, for example), then electrons were able to be ejected from the metal surface even if the intensity was very low or it was shone for only a short time. This minimum frequency needed to cause electron ejection is referred to as the threshold frequency.
Classical physics was unable to explain the photoelectric effect. If classical physics applied to this situation, the electron in the metal could eventually collect enough energy to be ejected from the surface even if the incoming light was of low frequency. Einstein used the particle theory of light to explain the photoelectric effect as shown in the figure below.
Low frequency light (red) is unable to cause ejection of electrons from the metal surface. At or above the threshold frequency (green) electrons are ejected. Even higher frequency incoming light (blue) causes ejection of the same number of electrons but with greater speed.
Consider the \(E = h \nu\) equation. The \(E\) is the minimum energy that is required in order for the metal's electron to be ejected. If the incoming light's frequency, \(\nu\), is below the threshold frequency, there will never be enough energy to cause electrons to be ejected. If the frequency is equal to or higher than the threshold frequency, electrons will be ejected. As the frequency increases beyond the threshold, the ejected electrons simply move faster. An increase in the intensity of incoming light that is above the threshold frequency causes the number of electrons that are ejected to increase, but they do not travel any faster. The photoelectric effect is applied in devices called photoelectric cells, which are commonly found in everyday items such as a calculator which uses the energy of light to generate electricity.
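The threshold logic can be made concrete with Einstein's relation. The work function used below (3.6 × 10⁻¹⁹ J, about 2.2 eV) is an illustrative assumption, not a value from the text:

```python
PLANCK_H = 6.626e-34     # J*s
WORK_FUNCTION = 3.6e-19  # J, an assumed illustrative metal (~2.2 eV)

def photoelectron_ke(freq_hz):
    """Kinetic energy (J) of an ejected electron, or None below threshold.
    Intensity changes how many electrons come off, never whether or how fast."""
    excess = PLANCK_H * freq_hz - WORK_FUNCTION  # photon energy E = h*nu
    return excess if excess >= 0 else None

print(photoelectron_ke(4.3e14))  # red light: None, no ejection at any intensity
print(photoelectron_ke(5.6e14))  # green: small positive KE, electrons ejected
print(photoelectron_ke(6.5e14))  # blue: larger KE, same count but faster electrons
```

This mirrors the figure: below the threshold frequency nothing comes off, and above it the excess energy goes into electron speed.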
Photoelectric cells convert light energy into electrical energy which powers this calculator.
Light has properties of both a wave and a particle. The photoelectric effect is produced by light striking a metal and dislodging electrons from the surface of the metal.
CK-12 Foundation by Sharon Bewick, Richard Parsons, Therese Forsythe, Shonna Robinson, and Jean Dupon.
Satellite reflections over CTA Site
Hundreds of astronomical objects are visible in this ESO Picture of the Week, including star clusters, nebulae, dust clouds, and other galaxies — most notably the Large and Small Magellanic Clouds, visible to the upper right. However, something much closer to home is vying for our attention. To the far right of the image, a silver arc streaks across the sky. This arc is actually composed of two closely-spaced lines, caused by sunlight bouncing off the antennae of two Iridium communication satellites currently orbiting the Earth.
It may be empty now, but this dry, barren section of the Chilean Atacama Desert will soon be bustling with activity. The site has been selected to host the southern part of the Cherenkov Telescope Array (CTA), a remarkable array of 99 antennas that will gaze up at this incredible sky in search of high-energy gamma rays. Gamma rays are a type of electromagnetic radiation that is emitted by the hottest and most powerful objects in the Universe — supermassive black holes, supernovae, and possibly remnants of the Big Bang itself.
However, the Earth’s atmosphere prevents gamma rays from reaching its surface, so rather than hunting for these rays directly the CTA will observe something known as Cherenkov radiation — ghostly blue flashes of light produced when gamma rays interact with particles in our atmosphere. Pinpointing the source of this radiation allows each gamma ray to be traced back to its cosmic source. Just like its neighbour, ESO’s Very Large Telescope, the CTA requires a dry, isolated location to do its work successfully — and for this the Atacama is perfect.
About the Image
Release date: 14 January 2019, 06:00
Size: 9430 x 3960 px
About the Object
Name: Atacama Desert, Cherenkov Telescope Array
Type: Unspecified : Sky Phenomenon : Night Sky
Crescent ♐ Sagittarius
Moon phase on 10 September 2013, Tuesday, is Waxing Crescent; the 5-day-young Moon is in Scorpio.
Previous main lunar phase is the New Moon before 5 days on 5 September 2013 at 11:36.
Moon rises in the morning and sets in the evening. It is visible toward the southwest in early evening.
Moon is passing about ∠19° of ♏ Scorpio tropical zodiac sector.
Lunar disc appears visually 1.5% wider than solar disc. Moon and Sun apparent angular diameters are ∠1935" and ∠1906".
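The 1.5% figure follows directly from the two quoted apparent diameters:

```python
# Apparent angular diameters (arcseconds) from the text
moon_arcsec, sun_arcsec = 1935, 1906
wider_pct = (moon_arcsec / sun_arcsec - 1) * 100
print(f"Lunar disc appears {wider_pct:.1f}% wider than the solar disc")
```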
Next Full Moon is the Harvest Moon of September 2013 after 8 days on 19 September 2013 at 11:13.
There is low ocean tide on this date. Sun and Moon gravitational forces are not aligned, but meet at big angle, so their combined tidal force is weak.
The Moon is 5 days young. Earth's natural satellite is moving from the beginning to the first part of current synodic month. This is lunation 169 of Meeus index or 1122 from Brown series.
Length of current 169 lunation is 29 days, 12 hours and 58 minutes. It is 43 minutes longer than next lunation 170 length.
Length of current synodic month is 14 minutes longer than the mean length of synodic month, but it is still 6 hours and 49 minutes shorter, compared to 21st century longest.
This lunation true anomaly is ∠249.2°. At the beginning of next synodic month true anomaly will be ∠285.3°. The length of upcoming synodic months will keep decreasing since the true anomaly gets closer to the value of New Moon at point of perigee (∠0° or ∠360°).
10 days after point of apogee on 30 August 2013 at 23:46 in ♋ Cancer. The lunar orbit is getting closer, and the Moon is moving toward the Earth. It will keep this direction for the next 5 days, until it gets to the point of next perigee on 15 September 2013 at 16:34 in ♑ Capricorn.
Moon is 370 448 km (230 186 mi) away from Earth on this date. Moon moves closer next 5 days until perigee, when Earth-Moon distance will reach 367 388 km (228 284 mi).
1 day after its ascending node on 9 September 2013 at 17:29 in ♏ Scorpio, the Moon is following the northern part of its orbit for the next 12 days, until it will cross the ecliptic from North to South in descending node on 22 September 2013 at 13:48 in ♉ Taurus.
1 day after beginning of current draconic month in ♏ Scorpio, the Moon is moving from the beginning to the first part of it.
11 days after previous North standstill on 29 August 2013 at 17:03 in ♊ Gemini, when Moon has reached northern declination of ∠19.801°. Next 2 days the lunar orbit moves southward to face South declination of ∠-19.685° in the next southern standstill on 12 September 2013 at 18:30 in ♐ Sagittarius.
After 8 days on 19 September 2013 at 11:13 in ♓ Pisces, the Moon will be in Full Moon geocentric opposition with the Sun and this alignment forms next Sun-Earth-Moon syzygy.
Meteorite ALH84001 was found in 1984, lying like a chunk of black coal on the dazzling white ice wastes of the Allan Hills region of Antarctica. According to National Science Foundation geologist Roberta Score, who first picked up the meteorite and held it in her hand, it looked “kind of weird”.
Now this unassuming stone has proved to be weird indeed, but also wonderful, and certainly unlike any meteorite ever found on Earth, carrying with it a mystery that disturbed scientists.
Meteorite ALH84001 was cataloged, bagged, and shipped still frozen to the Johnson Space Center in Houston. There, along with a thousand other meteorite specimens, it was thawed out in a nitrogen atmosphere to maintain its uncontaminated state.
Technicians duly wrote up their descriptions of the specimen, which included the fact that the stone’s dimensions were about half those of a typical brick, that it weighed nearly two kilograms (four pounds), and that it appeared to be in pristine condition. The description of meteorite ALH84001 was then circulated to the community of planetary scientists, who are routinely invited to request Antarctic meteorite samples for research.
For nearly a decade, Meteorite ALH84001 was not recognized as a rock from Mars. The stone remained filed-away, not much better known than it was in the Antarctic wastelands.
But this particular meteorite would soon prove to be from Mars, and it also harbored a secret that would rattle the cages of scientists everywhere. I invite you to descovery the Meteorite ALH84001 mistery followed by a verry intresting story:
Resources for this investigation about Meteorite ALH84001:
The Planetary Materials Curation area at Johnson Space Center contains lots of information on Antarctic meteorite samples, lunar samples, cosmic dust and other interesting stuff. The Antarctic Meteorite Newsletter, a periodical published twice yearly that describes newly available Antarctic meteorite specimens, can also be found there.
If you have specific questions about lunar and martian meteorites you couldn’t find answers to at the Johnson Space Center, try Washington University’s lunar meteorite site, the Mars Meteorite Compendium at JSC, or the Lunar and Planetary Institute’s page dedicated to ALH84001.
AMLAMP, the Antarctic Meteorite Location and Mapping Program, keeps a database of the sites where meteorites have been found by US researchers. Cruise the table of contents to see images of meteorite stranding surfaces in Antarctica with meteorites superimposed.
NIPR Research Program for Antarctic Meteorites is the Japanese program that leads expeditions to Antarctica to recover meteorites. The Japanese were the first to systematically collect meteorites in Antarctica, and in fact they have collected more meteorites than the US program.
Meteorite and Impacts Advisory Committee is an advisory group to the Canadian Space Agency, dealing with issues concerning impact craters, meteorites, and related phenomena.
Logistical support for Antarctic Projects is provided by Raytheon Polar Services Company, the prime contractor with the National Science Foundation. | 0.834659 | 3.343865 |
Momentum is building among planetary scientists to send a major mission to Uranus or Neptune—the most distant and least explored planets in the Solar System. Huge gaps remain in scientists’ knowledge of the blueish planets, known as the ice giants, which have been visited only once by a spacecraft. But the pressure is on to organize a mission in the next decade, because scientists want to take advantage of an approaching planetary alignment that would significantly cut travel time.
Interest in the ice giants has grown exponentially, says Amy Simon, a planetary scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, who co-organized a meeting at the Royal Society in London in January, dedicated to exploring such a mission. NASA’s Voyager 2 is the only spacecraft to have visited Uranus and Neptune, in brief fly-bys in the 1980s. The ice giants therefore represent fresh territory for a wide range of researchers—for the study of planetary rings, atmospheres, moons and oceans, says Simon.
The rare celestial alignment, between Neptune, Uranus and Jupiter, occurs next in the early 2030s, and would allow a spacecraft to slingshot around Jupiter on its way to the planets. This would reduce the travel time, and allow the craft to arrive well within the lifetimes of its instruments and power systems—usually around 15 years. It would also cut fuel mass, enabling the craft to carry a full suite of scientific instruments (see ‘Journey to the ice giants’). To take advantage of the alignment, a mission to Neptune would need to launch by around 2031 and one to Uranus by the mid-2030s.
The window is “the right time to launch”, Mark Hofstadter, a planetary scientist at the Jet Propulsion Laboratory in Pasadena, California, said at the London meeting. “We don’t want to miss this one.” But the timing is tight. NASA is the most likely space agency to lead the kind of multibillion-dollar ‘flagship’ mission that scientists want. These typically take 7–10 years to prepare, and any green light from NASA would depend on the mission being prioritized in the agency’s Planetary Science Decadal Survey, which reports in 2022. A mission to Neptune or Uranus would also face competition from proposals to return a sample from Mars or explore Venus.
But whereas Mars and Venus scientists are building on decades of exploration, “Uranus and Neptune are genuinely out on their own, as we haven’t completed the very first phase of their exploration yet”, says Leigh Fletcher, a planetary scientist at the University of Leicester, UK, who co-organized the meeting.
Fletcher says that a mission to either planet should include going into orbit around it and sending at least one probe into its atmosphere or to one of its moons, as Cassini–Huygens, a joint mission by NASA and the European Space Agency (ESA), did for Saturn.
Scientists think of the two planets as twins because of their similar sizes and masses. But no one knows how similar they are, their composition or how they formed, Ravit Helled, a planetary scientist at the University of Zurich, Switzerland, told the meeting. Models struggle to explain the planets’ internal structures, or why more distant Neptune seems to be warmer than Uranus. Everybody assumes they are made of forms of water, or maybe ammonia ice, says Helled. “But actually we don’t really know that.”
A major mission to the ice giants would also benefit exoplanet studies, said Hannah Wakeford, an exoplanet scientist at the University of Bristol, UK. About 40% of known exoplanets are ice-giant-sized; understanding what these planets’ size and atmosphere reveal about their formation relies on understanding those in our own Solar System.
Delegates at the meeting agreed that they would be happy to visit either planet, because both would yield rich results. Studies show that it would be feasible to send probes from a mission to both planets, but this would be prohibitively expensive. Neptune is appealing because its moon Triton seems to be geologically active and might host a subsurface ocean, potentially of liquid water.
But Uranus—which has an unusual magnetic field that is tilted relative to the planet’s rotation axis—has more “odd” features than Neptune does, which challenge existing scientific models, said Hofstadter. The later launch window for Uranus also makes the planet a more realistic target, says Fletcher.
But some are concerned by the timescale. It is “the day after tomorrow” in space terms, Fabio Favata, head of strategy, planning and community coordination at ESA, told the meeting. The agency is already working on two major missions for the early 2030s, he said, so even if its forthcoming prioritization exercise, called Voyage 2050, recommends a visit to the ice giants, the agency could not make the launch window.
Alternatively, ESA could contribute to a NASA-led mission, but that would require a US decision, he added. Either agency could also send lighter, cheaper missions, for example to fly by one of the ice giants. These would produce valuable science, but not provide the comprehensive study that scientists hope for, said Hofstadter.
If planetary scientists miss the coming opportunity, then they will have to wait for the next alignment, in the mid-2040s, or rely on a more powerful launch system, such as NASA’s heavy-lift Space Launch System. But that technology is still in development.
Heidi Hammel, a planetary astronomer and executive vice-president of the Association of Universities for Research in Astronomy in Washington DC, flagged another issue scientists might face with a mission to Uranus: jokes about the planet’s name. “I’m sorry I’m saying this. But I really do think that’s a legitimate problem we would face,” she said.
This article is reproduced with permission and was first published on March 3 2020. | 0.912941 | 3.710205 |
It looks like the Death Star’s laser cannon, but this proposed lunar telescope—built inside a natural crater on the Moon’s far side—could be used to peer back into the earliest days of the cosmos.
Earlier this month, NASA awarded additional funding to a host of projects in its Innovative Advanced Concepts (NIAC) program, in which contributors are encouraged to pitch out-of-the-box ideas meant to “change the possible.”
Some of the more interesting proposals included a solution for exploring the subsurface ocean of Jupiter’s moon Europa, instant landing pads for the upcoming Artemis mission to the Moon, and a fascinating pitch to use antimatter as a way of slowing down interstellar spacecraft en route to exoplanet Proxima Centauri b (like I said: high-concept ideas).
One of the more intriguing proposals is from JPL roboticist Saptarshi Bandyopadhyay, who wants to build a telescope inside a natural crater on the far side of the Moon. He calls it the Lunar Crater Radio Telescope (LCRT). NASA has awarded this project Phase 1 status and given the team $120,000 to move the idea forward. Should Bandyopadhyay and his colleagues produce a convincing proposal, the idea would advance to the second of three phases, so this project is by no means a done deal.
“The objective of NIAC Phase 1 is to study the feasibility of the LCRT concept,” Bandyopadhyay told Gizmodo. “During Phase 1, we will mostly be focusing on the mechanical design of LCRT, searching for suitable craters on the Moon, and comparing the performance of LCRT against other ideas that have been proposed in the literature.”
In terms of when such a structure could be built, he said his team would have a better idea once Phase 1 is complete. But wow—what an amazing thing this would be if it is actually built.
LCRT would be an ultra-long-wavelength radio telescope capable of capturing some of the weakest signals traveling through space.
“It is not possible to observe the universe at wavelengths greater than 10 meters [33 feet], or frequencies below 30 MHz, from Earth-based stations, because these signals are reflected by the Earth’s ionosphere,” said Bandyopadhyay. “Moreover, Earth-orbiting satellites would pick up significant noise from Earth’s ionosphere,” which is why “such observations are very difficult.”
It’s for this reason that wavelengths greater than 10 meters have yet to be explored by scientists. Consequently, this telescope would be a tremendous boon to astronomers and cosmologists, who would use it to study the early universe as it existed some 13.8 billion years ago, including the formation of the earliest stars.
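The cutoff quoted above (wavelengths greater than 10 meters, frequencies below 30 MHz) is just the two sides of the relation c = λν. A quick sanity check (the function name is my own, not from the article):

```python
# Verify that a 10-meter wavelength corresponds to roughly 30 MHz (c = lambda * nu).
C = 299_792_458.0  # speed of light in vacuum, m/s

def frequency_mhz(wavelength_m: float) -> float:
    """Frequency (MHz) of an electromagnetic wave with the given wavelength (m)."""
    return C / wavelength_m / 1e6

print(frequency_mhz(10.0))  # ~29.98 MHz: the ionospheric cutoff mentioned above
```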
By placing the LCRT on the far side of the Moon, the observatory would be protected from radio interference and other annoyances coming from Earth, whether natural or artificial.
“The Moon acts as a physical shield that isolates the lunar-surface telescope from radio interferences/noises from Earth-based sources, ionosphere, Earth-orbiting satellites, and the Sun’s radio-noise during the lunar night,” explained Bandyopadhyay.
As described in the abstract, the telescope would be built in a crater measuring 3 to 5 kilometers (1.9 to 3.1 miles) in diameter. Several DuAxel robots would string up, suspend, and anchor a mesh measuring 1 kilometer (0.6 miles) in diameter inside the crater, making it the “largest filled-aperture radio telescope in the Solar System,” according to the abstract.
The DuAxel robots “are awesome and have already been field-tested in challenging scenarios,” Bandyopadhyay told Gizmodo. JPL roboticist Issa Nesnas has led the design of these robots over the past decade, and he, along with JPL roboticist Patrick Mcgarey, are also working on the LCRT project.
When asked how much technology still needs to be developed for this proposal to be possible, Bandyopadhyay said “quite a lot,” as most of the tech needed for LCRT is currently at a low technology readiness level, in NASA terms.
“I don’t want to go into specifics, but we have a long road ahead,” he said. “Hence we are very thankful for this NIAC Phase 1 funding!” | 0.881938 | 3.721588 |
Thanks to the New Horizons probe's Tuesday flyby, there are, for the first time in history, some really good images of Pluto, and its moon, Charon. And they're pretty fascinating. They're showing details that we didn't know were there, and even some things we've never seen before, anywhere. There are ice mountains on Pluto that are roughly as tall as the Rockies, and on Charon there are chasms that may be six times deeper than the Grand Canyon.
Scientists had long theorized that Pluto was essentially a dead, frozen ball of ice, but remarkably the new images show a total absence of impact craters, even in a zoom-in shot of an otherwise rugged section. This suggests that Pluto has active geology still shaping its surface. John Spencer, a scientist at the Southwest Research Institute, ground control for the New Horizons mission, calls these findings “just astonishing.” He’s pretty sure that they’re going to “send a lot of geophysicists back to the drawing boards.”
The zoom-in covered a swath of Pluto about 150 miles (241 kilometers) wide. It shows details of a mountain range that’s about 11,000 feet (3,353 meters) tall and tens of miles wide. Interestingly, it appears that these mountains aren’t made of rock; they’re made of ice. Of course, that far away from the Sun, many of the materials that are usually found in gaseous or liquid form on Earth are solid: rock solid. In Pluto’s extreme cold, methane and water may behave like rock does here on Earth. This appears to be the case with this mountain range, the peaks of which seem to have been pushed up from Pluto’s subterranean bed of ice. Fascinating, yes, but even more significant as evidence of Pluto’s active geology: these peaks appear to be only 100 million years old, though Pluto is 4.5 billion years old.
"Who would have supposed that there were ice mountains?" asks project scientist Hal Weaver. "It's just blowing my mind."
Alan Stern, principal scientist for the New Horizons mission said of the new images: "I think the whole system is amazing. ... The Pluto system IS something wonderful."
In case you were wondering about the heart-shaped area on Pluto, Stern and his team have named it Tombaugh Regio, in tribute to American astronomer Clyde Tombaugh, who in 1930 spied this cold, far-flung world at the edge of the solar system.
The new images have also allowed scientists to properly measure Pluto. With a diameter of 1,473 miles (2,370 kilometers), it turns out to be a bit bigger than was previously thought.
"We've tended to think of these midsize worlds ... as probably candy-coated lumps of ice," Spencer says. "This (the information shown by the new images) means they could be equally diverse and be equally amazing if we ever get a spacecraft out there to see them close up." | 0.817599 | 3.64642 |
A galaxy-mapping satellite could be used to find alien life—maybe.
At least that’s the idea behind a pre-print study recently posted on ArXiv.org to spark community dialogue. The paper states that ESA's Gaia spacecraft, which recently produced the largest map of the galaxy in history, could witness stars dimming due to the presence of alien megastructures.
Nothing Gaia has spotted so far (or that researchers have spotted in the data) suggests that aliens are out there building planet-sized structures. The paper simply suggests that it's worth taking a look, which is exactly what the researchers did when the first Gaia data was released.
Yesterday was the ESA’s second data release for the Gaia spacecraft. Gaia is meant to map the precise position of stars and other objects in our galaxy (as well as a little bit beyond the galactic borders). It has provided an unprecedented amount of data about the Milky Way. Along with new stars, the spacecraft has been used to map asteroids in our solar system and will, in subsequent data releases, likely find more than a few exoplanets. But using Gaia as a backdoor SETI search? That’s something a little bit different.
“We've previously searched for Dyson spheres in other galaxies, but when Gaia DR1 came along, this made it possible for us to carry out a new type of search for Dyson spheres in our own galaxy, the Milky Way,” lead author Erik Zackrisson of Uppsala University says. “Our department has many researchers involved in various types of Gaia-related research, so there was a lot of in-house knowledge that we could use when developing this project.”
The paper outlined a few methods for finding large-scale structures around stars applied to the first Gaia data release. The team is looking for objects with unusual dimming like the phenomenon seen at the most famous “megastructure” star, Tabby’s Star. But it's looking for them in how the apparent position changes, as Gaia primarily works by measuring the distance to a star by triangulating its position between another star and Earth. Small deviations in its position often indicate the presence of another object.
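To make the astrometric idea above concrete: a star's parallax (its apparent positional shift as Earth orbits the Sun) gives its distance via the standard relation d(pc) = 1/p(arcsec), and an unseen companion shows up as extra deviation on top of that shift. A minimal sketch of the distance relation, not Gaia's actual pipeline:

```python
# Distance from parallax: d [parsec] = 1 / p [arcsec].
# Gaia reports parallaxes in milliarcseconds (mas), hence the factor of 1000.
def distance_parsecs(parallax_mas: float) -> float:
    """Distance in parsecs for a parallax given in milliarcseconds."""
    return 1000.0 / parallax_mas

# A parallax of 10 mas corresponds to a star 100 parsecs away.
print(distance_parsecs(10.0))  # 100.0
```

A companion-induced wobble appears as a periodic residual once parallax and proper motion are subtracted from the star's measured path, which is how the white dwarf described below could be detected.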
Two possible candidates for alien structures were identified in the new paper: TYC 7169-1532-1 and TYC 6111-1162-1. “I wouldn't call either of the two objects particularly strong Dyson sphere candidates—they're simply weird outliers that happen to match the expected Dyson sphere signatures,” Zackrisson says.
The team investigated the latter object further and, indeed, found something massive disturbing the star. The object turned out to be a white dwarf: a star that has exhausted its fuel and collapsed into a planet-sized ball of electron-degenerate matter. It orbits another star slightly larger than the Sun, periodically shifting that star’s position and producing the variability in its apparent position. But given that this paper was more about feasibility and identifying candidates than actually finding aliens, it was fairly successful.
“Future data releases are going to produce very large numbers of candidates, for sure,” Zackrisson says. “However, doing catalog searches to find outliers is a very quick thing to do, whereas doing follow-up studies on individual candidates is very time consuming. We only want to do that for the very best candidates. The road forward, I think, is to learn as much as we can about our current outliers so that we can weed out the apparently-weird-but-actually-mundane objects more efficiently in the future.”
"This is a clever method that will identify anomalous stars, but there are other possible sources for such anomalies," says Avi Loeb, a Harvard astrophysicist who was not involved in the study. False-positives include star spots or flares, shrouds of dust causing a misreading of the star’s temperature, a debris field like that around Tabby's Star, or even, as this team found, a binary star system.
“Nevertheless, it is worth sorting out all candidates,” Loeb says. “Perhaps one of them would show evidence for a vast engineering project undertaken by an alien civilization, which will inspire us to be more ambitious in our future plans for space exploration.”
Like Tabby’s Star, the alien hypothesis is a sort of “last resort” that isn't considered until astrophysicists rule out all other possibilities. Still, with billions of stars at the disposal of the Gaia, it’s worth checking—just in case. | 0.861685 | 3.395403 |
Probing Exoplanet Atmospheric Properties from Phase Variations and Polarization
Laura Mayorga, NMSU
The study of exoplanets is evolving past simple transit and Doppler method discovery and characterization. One of the many goals of the upcoming mission WFIRST-AFTA is to directly image giant exoplanets with a coronagraph. We undertake a study to determine the types of exoplanets that missions such as WFIRST will encounter and what instruments these missions require to best characterize giant planet atmospheres. We will first complete a benchmark study of how Jupiter reflects and scatters light as a function of phase angle. We will use Cassini flyby data from late 2000 to measure Jupiter’s phase curve, spherical albedo, and degree of polarization. Using Jupiter as a comparison, we will then study a sample of exoplanet atmosphere models generated to explore the atmospheric parameter space of giant planets and estimate what WFIRST might observe. Our study will provide valuable refinements to Jupiter-like models of planet evolution and atmospheric composition. We will also help inform future missions of what instruments are needed to characterize similar planets and what science goals will further our knowledge of giant worlds in our universe.
On the Edge: Exoplanets with Orbital Periods Shorter Than a Peter Jackson Movie
Brian Jackson, Boise State University
From wispy gas giants to tiny rocky bodies, exoplanets with orbital periods of several days and less challenge theories of planet formation and evolution. Recent searches have found small rocky planets with orbits reaching almost down to their host stars’ surfaces, including an iron-rich Mars-sized body with an orbital period of only four hours. So close to their host stars that some of them are actively disintegrating, these objects’ origins remain unclear, and even formation models that allow significant migration have trouble accounting for their very short periods. Some are members of multi-planet system and may have been driven inward via secular excitation and tidal damping by their sibling planets. Others may be the fossil cores of former gas giants whose atmospheres were stripped by tides.
In this presentation, I’ll discuss the work of our Short-Period Planets Group (SuPerPiG), focused on finding and understanding this surprising new class of exoplanets. We are sifting data from the reincarnated Kepler Mission, K2, to search for additional short-period planets and have found several new candidates. We are also modeling the tidal decay and disruption of close-in gaseous planets to determine how we could identify their remnants, and preliminary results suggest the cores have a distinctive mass-period relationship that may be apparent in the observed population. Whatever their origins, short-period planets are particularly amenable to discovery and detailed follow-up by ongoing and future surveys, including the TESS mission.
Characterization of Biosignatures within Geologic Samples Analyzed using a Suite of in situ Techniques
Kyle Uckert, NMSU
I investigated the biosignature detection capabilities of several in situ techniques to evaluate their potential to detect the presence of extant or extinct life on other planetary surfaces. These instruments included: a laser desorption time-of-flight mass spectrometer (LD-TOF-MS), an acousto-optic tunable filter (AOTF) infrared (IR) point spectrometer, a laser-induced breakdown spectrometer (LIBS), X-ray diffraction (XRD)/X-ray fluorescence (XRF), and scanning electron microscopy (SEM)/energy dispersive X-ray spectroscopy (EDS). I measured the IR reflectance spectra of several speleothems in caves in situ to detect the presence of biomineralization. Microorganisms (such as those that may exist on other solar system bodies) mediate redox reactions to obtain energy for growth and reproduction, producing minerals such as carbonates, metal oxides, and sulfates as waste products. Microbes occasionally become entombed in their mineral excrement, essentially acting as a nucleation site for further crystal growth. This process produces minerals with a crystal lattice distinct from geologic precipitation, detectable with IR reflectance spectroscopy. Using a suite of samples collected from three subterranean environments, along with statistical analyses including principal component analysis, I measured subsurface biosignatures associated with these biomineralization effects, including the presence of trace elements, morphological characteristics, organic molecules, and amorphous crystal structures.

I also explored the optimization of a two-step LD-TOF-MS (L2MS) for the detection of organic molecules and other biosignatures. I focused my efforts on characterizing the dependence of organic detection sensitivity on the L2MS desorption IR laser wavelength, in an effort to optimize the detection of high mass (≤100 Da) organic peaks. I analyzed samples with an IR reflectance spectrometer and an L2MS with a tunable desorption IR laser whose wavelength range (2.7 – 3.45 microns) overlaps that of our IR spectrometer (1.6 – 3.6 microns), and discovered an IR resonance enhancement effect. A correlation between the maximum IR absorption of organic functional group and mineral vibrational transitions – inferred from the IR spectrum – and the optimal IR laser configuration for organic detection using L2MS indicates that IR spectroscopy may be used to inform the optimal L2MS IR laser wavelength for organic detection. This work suggests that a suite of instruments, particularly LD-TOF-MS and AOTF IR spectroscopy, has strong biosignature detection potential on a future robotic platform for investigations of other planetary surfaces or subsurfaces.
Utilizing Planetary Oscillations to Constrain the Interior Structure of the Jovian Planets
Seismology has been the premier tool of study for understanding the interior structure of the Earth, the Sun, and even other stars. Yet in this thesis proposal, we wish to utilize these tools to understand the interior structure of the Jovian planets, Saturn in particular. Recent observations of spiral density structures in Saturn’s rings caused by its oscillations have provided insight into which modes exist within Saturn and at what frequencies. Comparing these frequencies to probable mode candidates calculated from Saturn models will allow us to ascertain the interior profiles of state variables such as density, sound speed, rotation, etc. Using these profiles in a Saturn model, coupled with tweaking the interior structure of the model, i.e. the inclusion of stably stratified regions, should allow us to explain which modes are responsible for the density structures in the rings, as well as predict where to look to find more such structures. In doing so, we will not only have a much greater understanding of Saturn’s interior structure, but will have constructed a method that can also be applied to Jupiter once observations of its mode frequencies become available. In addition, we seek to explain whether moist convection on Jupiter is responsible for exciting its modes. We aim to do this by modeling Jupiter as a 2D harmonic oscillator. By creating a resonance between moist convective storms and Jovian modes, we hope to match the expected mode energies and surface displacements of Jupiter’s oscillations.
TEHRAN (Press Shia) – Only one spacecraft has flown near Uranus and Neptune, the mysterious ice giant planets on the edge of our solar system.
Yet the wealth of data captured by NASA’s Voyager 2 spacecraft some 34 years ago is still revealing tantalizing hints and reminding scientists of why we need to go back.
Voyager 2 flew by Uranus in 1986, and now, thanks to a little blip discovered in some data, NASA scientists know it also flew through a plasmoid. A plasmoid is a giant magnetic bubble that likely pinched off part of the planet’s atmosphere, sending it out into space.
They found it while looking through old data to find questions they wanted to answer on future potential missions to Uranus.
It’s not unusual for planets to lose their atmospheres. Venus, Jupiter, Saturn and even Earth leak their atmospheres into space, according to NASA.
Over time, this can cause a big impact to a planet. Take Mars, for example: The red planet lost its atmosphere almost entirely over four billion years.
“Mars used to be a wet planet with a thick atmosphere,” said Gina DiBraccio, space physicist at NASA’s Goddard Space Flight Center and project scientist for the Mars Atmosphere and Volatile Evolution, or MAVEN mission. “It evolved over time to become the dry planet we see today.”
DiBraccio and her colleague Dan Gershman study planetary magnetic fields and the way they interact with the sun.
Planetary magnetic fields act like gatekeepers, at times protecting their atmospheres from the stream of solar wind released by the sun. But they can also allow atmospheres to escape as lines in the magnetic field tangle together.
“The way in which the sun’s solar wind interacts with Uranus is unlike any planet we’ve ever explored,” DiBraccio said. “We are left with questions regarding to what degree the solar wind affects dynamics at Uranus such as transporting atmospheric particles, transferring energy and even changing the planet’s climate over time.”
Uranus is a strange planet, and it’s also unlike any exoplanet discovered so far. It has a barrel roll orbit, tilted 90 degrees and spinning on its side. It takes 17 hours to complete one spin.
“This tilt causes dramatic seasons on the planet, and we don’t fully understand the impact,” she said. “It is believed that the seasons change the way that the solar wind interacts with Uranus’ magnetic field and may also have an impact on atmospheric heating, for example.”
NASA wants to return to investigate the planet’s oddities, and DiBraccio and Gershman were part of a team looking at designing a future mission to revisit the ice giants.
A 60-second zigzag in Voyager 2’s magnetometer readings, which measured the strength and direction of the planet’s magnetic field, revealed what looked like a plasmoid to them. It’s the first time one has been detected escaping Uranus.
In this case, the giant plasma bubble full of energized hydrogen detached from the magnetotail. It truly is like the tail end of the magnetic field, which is pushed off of the planet by the sun.
The plasmoid, which resembled a cylinder, was 127,000 miles long and 250,000 miles across. Inside the plasmoid, they observed loops, shaped by the planet’s spin as the bubble was released into space.
Such a large plasmoid could effectively remove between 15% to 55% of atmospheric mass — and this could be the main way Uranus is losing its atmosphere, they said.
Only future observations will provide more information about what’s really happening with the planet and its runaway atmosphere.
“Scientists are currently exploring future opportunities to visit Uranus and Neptune,” DiBraccio said. “This includes mission concepts that would send spacecraft to orbit one of these planets and even send a probe into its atmosphere.”
A mission like this would launch around 2030, NASA estimates. Teams are working hard to develop mission concepts, based on a 2017 review.
It’s something that the Voyager teams — which are still actively monitoring the two spacecraft as they explore uncharted territory beyond our solar system — have hoped for.
When the Voyager spacecraft flew by the planets in our solar system, they helped answer some questions while creating more. But no missions have followed up on Uranus or Neptune since the 1986 and 1989 flybys by Voyager 2.
“We need to develop an orbiter for each of those planets,” said Suzanne Dodd, Voyager project manager. “At Uranus, the five major moons are very different. They have unique geological history, so we need to understand how they were formed or captured. Uranus has a rotational pole that is tipped on its side more than the Earth, so we need an understanding of why that happened.
“At Neptune, there are a great amount of features in [the] atmosphere similar to Jupiter and Saturn. And Neptune’s moon Triton is of interest because of the methane geysers on it.”
One more immediate way to follow up on the atmospheres of these ice giants is NASA’s James Webb Space Telescope, launching next year. The telescope can characterize their atmospheres by peering into them and even discern chemistry, weather and circulation patterns, the agency said.
Not only could future missions answer key questions about the ice giant planets, but help us understand planets outside of our solar system as well.
“As scientists, we are really eager to study Uranus and Neptune to not only learn more about our solar system planets, but also because these ice giants will teach us about exoplanets,” DiBraccio said.
“Exploring the diverse set of planetary systems in our own solar system gives us an opportunity to apply our findings to extraterrestrial worlds. These findings are definitely applicable to exoplanets because a majority of the exoplanets that have been discovered are sub-Neptune size.” | 0.890693 | 3.677585 |
- Title: Light echoes reveal an unexpectedly cool Eta Carinae during its 19th-century Great Explosion
- Authors: Armin Rest et al.
- First Author’s Institution: Space Telescope Science Institute, Baltimore, MD
What looks like a supernova but isn’t one? Not a rhetorical question! Giant eruptions of Luminous Blue Variable stars (LBVs) have luminosities just less than those of faint core-collapse supernovae (SNe), and so have occasionally been mistaken for them. In particular, in our galaxy there have been two in the last four hundred years: the giant eruption of P Cygni in the 17th century, and the Great Eruption of Eta Carinae in the 19th. The paper we look at today focuses on the second, using light echoes (like sound echoes, they can arrive long after the event that produced them) to make new observations of the Great Eruption.
An exemplary imposter?
Eta Carinae’s Great Eruption was observed for twenty years, from 1838 to 1858, and for ten of those it exceeded the Eddington luminosity limit, emitting 10% of the energy of a typical core-collapse supernova (that is a ton of energy!) and yet somehow managing not to destroy itself! During this period, Eta Carinae was the second-brightest star in the sky.
Traditionally, the Great Eruption is considered a prototypical example of the so-called “supernova imposters” (in addition to the two in our galaxy, there have also been about two dozen in other galaxies). The mechanism thought to underlie the imposters is an opaque stellar wind driven by a sudden increase in the star’s luminosity. This wind would in turn produce a minimum effective temperature of 7000 K and a spectrum similar to that of an A or F-type supergiant. However, this paper questions that status by arguing that the mechanism behind the Great Eruption must actually be different from that thought to underlie the “supernova imposters.”
The authors find that the Great Eruption is much more consistent with a supergiant spectrum between G2 and G5 (remember OBAFGKMLT?). This corresponds to a lower temperature of around 5000 K (for comparison, our Sun is a G2 star with effective temperature 5777 K). Indeed, their analysis rules out spectral types of F7 or earlier ("earlier" means higher temperature; the terminology is a relic of the days when the spectral sequence was read as an evolutionary track, with stars cooling as temperature decreases, somewhat perversely, from left to right). The authors also point out that the opaque stellar wind idea fails to explain the high kinetic energy and fast blast wave at large radii inferred from observations (mainly in the optical and infrared (IR), with the blast wave speed found using the blueshift of features in the IR spectrum). Further, the opaque stellar wind model would imply strong emission lines, a feature the authors do not find in the observations.
So in short, the authors conclude Eta Carinae’s Great Eruption is unlikely to have been driven by an opaque stellar wind. Instead, they suggest it might have been caused by a hydrodynamic explosion of unknown origin, a second model usually proposed for LBV explosions. The absorption lines and temperature they observe are consistent with the opaque, cooling photosphere likely to follow such an explosion. What could have caused such an explosion? That is a matter for radiative transfer simulations of the star, which now have new observational constraints to match in seeking to explain the Great Eruption.
CHAPTER 19 PRACTICE
Terms in this set (50)
True Or False? A stage 4 protostar may temporarily be thousands of times more luminous than the Sun.
True Or False? Collisions between galaxies tend to destroy, not create, stars.
True Or False? Stars evolve to the upper left along the main sequence, after forming in the middle.
True Or False? If a star is spinning rapidly, this may limit its final mass
True Or False? Many of the brightest stars we see are only a few million years old.
True Or False? Brown dwarfs most commonly form in the nebular disk around another object, much like a planet.
True Or False? Rotation and magnetism both play key roles in protostar formation.
True Or False? It takes about 10^32 hydrogen atoms to make a star.
True Or False? Shock waves from supernovae disrupt an interstellar cloud and prevent it from forming stars.
True Or False? A protostar of 20 solar masses should form a star that will stay on the main sequence twenty times longer than our Sun.
Higher mass protostars enter the main sequence:
faster and at a higher luminosity and temperature.
Objects which have contracted, but are of too little mass to establish thermonuclear
reactions in their cores, are ________, which slowly continue to cool.
In general, the greater the mass of the protostar, the ________ it contracts to the main sequence.
More massive stars are able to form in a(n) ________ time than less massive stars.
After stage 3 in star formation, the protostar develops a surface better known as a(n)
To deduce the interior conditions of the dense regions in which stars form, it is necessary
for astronomers to observe in the ________ part of the spectrum.
The Great Explosion of Eta Carinae expelled about ________ solar masses of material and
released about as much visible energy as a supernova.
2 or two
Star formation may be triggered by ________ which aid gravity through compression of
interstellar clouds to greater densities
pressure or shock waves (from supernovae)
During the Kelvin-Helmholtz contraction phase, a protostar has a much ________
brightness as compared to the Sun.
The Orion Nebula and Tarantula Nebula are examples of ________ Nurseries.
A protostar develops a bipolar flow of gas when it is still surrounded by an equatorial disk
From stage 4 to stage 7 of star formation, the object plotted on the H-R diagram moves so that:
its luminosity decreases, while its temperature increases.
The stars found in nebulae like the Orion Nebula probably formed:
a few million years ago
Atomic bomb tests demonstrated which aspect of star formation?
a shock wave surrounding and compressing a molecular cloud
Protostars can be observed in:
the Orion Nebula.
Most open clusters in our Milky Way are about how old?
less than a billion years old
As a star forms, the photosphere first appears:
when the protostar forms.
Which relationship concerning the mass of protostars is FALSE?
A) The more massive ones create a lot of ultraviolet as well as visible light.
B) The more massive ones will reach the main sequence first.
C) The more massive ones will be the hottest and most luminous.
D) The more massive ones are so luminous they ionize the gas, hence red H II regions.
E) The more massive ones will be made of the heaviest elements.
The more massive ones will be made of the heaviest elements
If the initial interstellar cloud in star formation has a mass sufficient to form hundreds of stars, how
does a single star form from it?
The cloud fragments into smaller clouds and forms many stars at one time.
Which of these is NOT a source of the shock waves that lead to protostars?
expanding Herbig-Haro objects
What is characteristic of a main sequence star?
The rate of nuclear energy generated in the hydrogen to helium fusing core equals the rate
radiated from the surface.
Our Sun, along with most of the stars in our neighborhood, probably formed:
billions of years ago.
Whether an object is a brown dwarf or a planet can be determined by:
the composition of the object.
How long does it take for a star like our Sun to form?
fifty million years
The most important fact about a cluster of stars that makes them useful for studying star formation is:
all the stars formed at about the same time
What are some complications that interfere with star formation?
A rapidly spinning nebula may prevent more matter from collapsing to the central protostar, and magnetic fields may
deflect the gas away from the star as well.
What are thought to be some possible causes of triggering the contraction of an interstellar cloud?
The shockwave from the formation of nearby type O and B stars, the shockwave from a nearby supernova, or parts of
the cloud become too cold to balance the inward force of gravity.
A typical protostar may be several thousand times more luminous than our Sun. Where does this energy come from?
From the release of gravitational energy as the protostar shrinks in stages 2 and 3.
Explain how the composition of the object can help distinguish a brown dwarf from a planet.
Planets are enriched in heavy elements compared to the host star; brown dwarfs should have starlike composition.
Explain how rotation and magnetic fields influence the formation of the protostar.
Rotation flattens the dusty cocoon into a disk around the protostar's equator, and the magnetic fields guide bipolar
flows of material outward to make Herbig-Haro objects.
Contrast the brightest stars of young open and old globular clusters.
Hot blue main sequence stars light up young open clusters, but the aging globular clusters have long since lost such
short-lived massive stars; they have red giants as their brightest, most evolved survivors.
Why does gravitational contraction halt in collapsing protostars?
Gravitational contraction heats up the protostar in stages 2-4, but once the star has heated up enough, its radiation
pressure stabilizes it as nuclear fusion ignites
What do we mean by the term "molecular clouds"? Why do they exist? How can observation of their properties yield
clues to the processes of stellar formation?
Molecular clouds are very cool interstellar gas clouds; cool enough that complex molecules can form. A shock wave,
from an emission nebula or an exploding star, can lead to star formation in the molecular cloud.
An interstellar gas cloud has the mass to form hundreds of stars. What generally happens to it?
The cloud fragments into smaller clouds and forms many stars.
What stage of star formation is ZAMS?
At stage 7, the zero age main sequence star is fusing hydrogen and on the main sequence, to stay there for as long as it
has sufficient hydrogen to remain stable.
Why does a star's luminosity drop so much during the Hayashi track?
The surface of the star maintains a fairly constant temperature but the size continues to shrink significantly. Since the
luminosity depends on the square of the radius, as the radius decreases by a factor of 100, the luminosity drops by a
factor of 10,000. The slight increase in surface temperature changes the luminosity only a little.
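The scaling in this answer follows from the Stefan–Boltzmann relation; as a sketch (standard textbook form, not part of the original card set):

```latex
L = 4\pi R^{2}\sigma T^{4}
\qquad\Rightarrow\qquad
\frac{L_2}{L_1}
  = \left(\frac{R_2}{R_1}\right)^{2}\left(\frac{T_2}{T_1}\right)^{4}
  \approx \left(\frac{1}{100}\right)^{2}
  = 10^{-4}
\quad\text{for nearly constant } T.
```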
Discuss the relationship between young stars and the interstellar medium.
Young stars give off ultraviolet radiation, heating the surrounding gas cloud and causing it to glow as an emission
nebula. Solar wind from the young stars pushes on the nebula, which can sweep away the gas, but can also cause
compression leading to more new star formation.
What are some sources of the shock waves that initiate star formation?
Emission nebulae produce shock waves, resulting from their hot ionized gases. Supernovae are an excellent source of
shock waves. Both of these are caused by O-B stars. Collisions between galaxies can also initiate star formation on a
grand scale in "starburst" galaxies.
Outline the process of star formation, including all relevant factors that influence the outcome.
An interstellar cloud, disturbed in some way, begins to gravitationally collapse. It heats up as it collapses, and spins
faster. (If the spinning is too great it may defeat the entire process, as the protostar tears itself apart.) Eventually the
core of the cloud is hot enough that it develops a definite glowing edge (photosphere), and begins to resemble a star.
When the core temperature reaches 10 million K and nuclear fusion of hydrogen into helium begins, it is officially a
star. When the outward pressure equalizes with gravity, the star enters the main sequence. Collisions between galaxies
can set up a shock wave to initiate collapse, as can the shock waves from supernovae and the density wave moving
through the spiral arms of the galaxy.
What factors can complicate the collapse of an interstellar cloud into a star?
The initial spin of the cloud and the conservation of angular momentum result in the cloud flattening into a rapidly
rotating disk. For the disk not to fly apart requires a larger mass to produce sufficiently strong gravity to hold the disk
together. Magnetic fields can hinder the collapse in a direction perpendicular to the field lines. The ionized gas is tied
to the magnetic field and the field is pulled in by the gas. The ionized gas responds more to the magnetic field than it
does to gravity.
Jan 9, 2018
Stars are not gravitational entities.
A recent press release relates that there is something wrong with how conventional theories describe star formation. Star clusters in the Large Magellanic Cloud (LMC) are not assembled correctly: some stars are too young. As Dr. Bi-Qing For wrote:
“Our models of stellar evolution are based on the assumption that stars within star clusters formed from the same material at roughly the same time.”
Stars are born and age according to theories that involve hydrogen fusion. Stars form a hydrogen-burning region outside their helium cores. As they age, stars use heavier atomic nuclei as fuel. Astronomers believe they should be cooler and brighter at that stage. More changes take place in temperature and brightness as they continue to evolve. Stars are supposed to follow that sequence until they burn out or explode based on the well-known Hertzsprung–Russell diagram.
Stars residing in clusters within the LMC do not conform to the standard model. The theory of star formation requires gravity to concentrate dust and gas into smaller and smaller dimensions until fusion ignites, so regions where the gas is most dense are where more stars should be born. They found that stars on the outside of many nebulae are older than those inside.
In an Electric Universe, gravity is overshadowed by plasma behavior. Stars are not compressed hydrogen gas; they are the loci of galactic z-pinches in Birkeland currents, so electricity forms double layers along their current axes. As has been written many times elsewhere, positive charge builds up on one side of a double layer and negative charge on the other. Strong electric fields initiate electric charge flow through nebular plasmas, causing charges to spiral down into filaments that can eventually form arc mode emissions.
Arc lamps emit light at specific frequencies, depending on the gases that they contain. Electricity causes the gaseous plasma to glow. Since 90% of the light in planetary nebulae comes from ionized oxygen, they should be thought of as gas discharge tubes and not as balls of gas, just as the Electric Star theory explains. Rather than kinetic energy supplied by gravity, nebulae are powered by electricity.
As mentioned, power density in dusty plasma is greatest along the axes of Birkeland current filaments. Electromagnetic fields draw matter toward them from surrounding space more strongly and from a greater volume than is possible with gravity. When sufficient matter accumulates in the filaments they luminesce and generate stellar parturition.
Conventional models cannot discover how old stars might be, so astronomers will always find new mysteries in their theoretical models.
MPEC 2012-Y30, issued 2012 December 26, reports our recovery of comet 26P/Grigg–Skjellerup. We found the comet on 2012 December 05.6 and December 14.5 at about magnitude 20. We imaged it remotely with the 2.0-m f/10 Faulkes Telescope South at Siding Spring.
This comet is named after the singing teacher and amateur astronomer John Grigg and after J. Frank Skjellerup, an Australian telegraphist working at the Cape of Good Hope in South Africa. On July 10, 1992, comet 26P was visited by the Giotto spacecraft after its successful close encounter with comet Halley. The Giotto camera had been damaged in the Halley flyby, so there are no pictures of the nucleus. In 1972 the comet was discovered to produce a meteor shower, the Pi Puppids (first predicted by Harold Ridley); its current orbit makes them peak around April 23 for observers in the southern hemisphere, and they are best seen when the comet is near perihelion.
Our recovery image:
Below, an animation showing the movement of the comet (7 frames x 30 seconds each). North is up, East is to the left.
Comet 26P/Grigg–Skjellerup was last observed (before our recovery) on August 09, 2008 by mpc code 204 (Schiaparelli Observatory). While there is also a single night observation by F51 – Pan-STARRS 1, Haleakala dated November 25, 2011, a comet recovery requires 2 nights of observations, "as it is not possible to unambiguously identify a comet by position and rate alone without a second night of data to verify the orbit." (Hainaut et al. A&A 1997).
by Nick Howes & Ernesto Guido
WASP-43b is the “hot Jupiter” exoplanet with the orbit closest-in to its star, producing an ultra-short orbital period of only 20 hours. The dayside face is thus strongly heated, making it a prime system for studying exoplanet atmospheres.
Kevin Stevenson et al have pointed NASA’s Spitzer Space Telescope at WASP-43, covering the full orbit of the planet on three different occasions. Spitzer observed the infrared light from the heated face in two bands around 3.6 microns and 4.5 microns.
The three resulting “phase curves” are shown in the figure:
The 4.5-micron data from one visit are shown in red in the lower panel; the 3.6-micron data from the two other visits are in the upper panel. The transit (when the planet passes in front of the star) is at phase 1.0, and drops below the plotted figure. The planet occultation (when it passes behind the star) is at phase 0.5. The sinusoidal variation results from the heated face of the planet facing towards us (near phase 0.5) or away (near phase 1.0).
Intriguingly, the depth of the variation in the 3.6-micron data is clearly different between the two visits. Why is this? Well, Stevenson et al are not sure. One possibility is that the data are not well calibrated and that the difference results from systematic errors in the observations. After all, such observations are pushing the instruments to their very limits, beyond what they had been designed to do (back when no exoplanets were known and such observations were not conceived of).
More intriguingly, the planet might genuinely have been different on the different occasions. The authors report that, in order to model the spectra of the planet as it appears to be during the “blue” Visit 2 in the figure, the night-time face needs to be predominantly cloudy. But, if the clouds cleared, more heat would be let out and the infrared emission would be stronger. That might explain the higher flux during the “yellow” Visit 1. Here on Earth the sky regularly turns from cloudy to clear; is the same happening on WASP-43b?
Interesting news about Jupiter this morning even as the Juno spacecraft crosses into the realm of Jupiter’s gravity. It was six days ago that Juno made the transition into Jupiter space, where the gravitational influence of Jupiter now dominates over all other celestial bodies. And it will be on July 4 of this year that Juno performs a 35-minute burn of its main engine, imparting a 542 meters per second mean change in velocity to the spacecraft for orbital insertion.
The spacecraft’s 37 flybys will close to within 5000 kilometers of the cloud tops. I only wish Poul Anderson could be alive to see some of the imagery. I always think of him in relation to Jupiter because of his stunning 1957 story “Call Me Joe,” describing the exploration of the planet by remote-controlled life forms (available in Anderson’s collection The Dark Between the Stars as well as various science fiction anthologies).
Image: Launched in 2011, the Juno spacecraft will arrive at Jupiter in 2016 to study the giant planet from an elliptical, polar orbit. Juno will repeatedly dive between the planet and its intense belts of charged particle radiation, traveling from pole to pole in about an hour, and coming within 5,000 kilometers of the cloud tops at closest approach. Credit: NASA/JPL-Caltech.
Our view of Jupiter has changed a lot since 1957, and Anderson’s low temperature, high pressure surface conditions have been ruled out, but the tale still carries quite a punch. As to Jupiter itself, today we get news that data from the Very Large Array (New Mexico) have been used to create the most detailed radio map ever made of its atmosphere. The work allows researchers to probe about 100 kilometers below the cloud tops using radio emissions at wavelengths where the clouds themselves are transparent.
Recent upgrades to the VLA have improved the array’s sensitivity by a factor of 10, a fact made apparent by the new Jupiter maps. Working the entire frequency range between 4 and 18 gigahertz, the team from UC-Berkeley supplements the Juno mission, anticipating its arrival to create a map that can put the spacecraft’s findings into context. Because the thermal radio emissions are partially absorbed by ammonia, it’s possible to track flows of the gas that define cloud-top features like bands and spots at various depths within the atmosphere.
We’re learning how the interactions between internal heat sources and the atmosphere produce the global circulation and cloud formation we see in Jupiter and other gas giant planets. The three-dimensional view shows ammonium hydrosulfide clouds rising into the upper cloud layers along with ammonia ice clouds in colder regions, while ammonia-poor air sinks into the planet amidst ‘hotspots’ (bright in radio and thermal infrared) that are low in ammonia and circle the planet just north of its equator.
“With radio, we can peer through the clouds and see that those hotspots are interleaved with plumes of ammonia rising from deep in the planet, tracing the vertical undulations of an equatorial wave system,” said UC Berkeley research astronomer Michael Wong.
Image: The VLA radio map of the region around the Great Red Spot in Jupiter’s atmosphere shows complex upwellings and downwellings of ammonia gas (upper map), that shape the colorful cloud layers seen in the approximately true-color Hubble map (lower map). Two radio wavelengths are shown in blue (2 cm) and gold (3 cm), probing depths of 30-90 kilometers below the clouds. Credit: Radio: Michael H. Wong, Imke de Pater (UC Berkeley), Robert J. Sault (Univ. Melbourne). Optical: NASA, ESA, A.A. Simon (GSFC), M.H. Wong (UC Berkeley), and G.S. Orton (JPL-Caltech).
Fine structure becomes visible in this work, especially in the areas near the Great Red Spot. The resolution is about 1300 kilometers, considered to be the best spatial resolution ever achieved in a radio map. “We now see high ammonia levels like those detected by Galileo from over 100 kilometers deep, where the pressure is about eight times Earth’s atmospheric pressure, all the way up to the cloud condensation levels,” says principal author Imke de Pater (UC-Berkeley). The work is reported in the June 3 issue of Science.
Image: In this animated gif, optical images of the surface clouds encircling Jupiter’s equator –including the famous Great Red Spot — alternate with new detailed radio images of the deep atmosphere (up to 30 kilometers below the clouds). The radio map shows ammonia-rich gases rising to the surface (dark) intermixed with descending, ammonia-poor gases (bright). In the cold temperatures of the upper atmosphere (160 to 200 Kelvin, or -170 to -100 degrees Fahrenheit), the rising ammonia condenses into clouds, which are invisible in the radio region. Credit: Radio: Robert J. Sault (Univ. Melbourne), Imke de Pater and Michael H. Wong (UC Berkeley). Optical: Marco Vedovato, Christopher Go, Manos Kardasis, Ian Sharp, Imke de Pater.
Earlier VLA measurements of ammonia levels in Jupiter’s atmosphere had shown much less ammonia than what the Galileo probe found when it plunged into the atmosphere in 1995. The new work resolves the issue by applying a technique to remove the blurring in radio maps that occurs because of Jupiter’s fast rotation. The UC-Berkeley team reports that it can clearly distinguish upwelling and downwelling ammonia flows using the new methods, preventing the confusion between the two that had led to the earlier mis-estimates of ammonia levels.
The paper is de Pater et al., “Peering through Jupiter’s Clouds with Radio Spectral Imaging,” Science 3 June 2016 (abstract).
Flare Sightings by the Japanese
In December of 1949 Tsuneo Saheki recorded a "brilliant glow" on Mars. The brilliant glow lasted for several minutes and Saheki interpreted it as an atmospheric nuclear explosion. What Saheki saw was not a man-made event, but a natural atmospheric explosion, which may explain why the explosion lacked the "double signature" that usually goes with nuclear warheads. Saheki also recorded a yellow-grey "luminescent cloud" some 700 miles in diameter that formed after the initial explosion. The yellow cloud may have been a cloud of Martian dust, as seems natural, but Saheki noted that it reached a height of 40 miles. Since the atmosphere of Mars is only 60 miles high, the dust cloud reached two-thirds of the way to outer space! Another explanation is that the cloud was a layer of atomic sodium residing in the atmosphere. This layer was ionized by the blast, and minutes later gave off a clearly visible yellow glow when it recombined after being ionized.
On November 6 1958, S. Tanabe recorded a Martian "flare". A few days later, on November 10, S. Fuikui recorded a second Martian flare. It's not entirely clear that these were natural phenomena. October of 1958 was an Earth-Mars opposition, when the distance between the planets is shortest. Studies have shown a relationship between such an opposition and decametric radiation from Jupiter and perhaps Saturn. Such oppositions also create magnetic disturbances in the Earth's atmosphere. So it's not out of the question for these oppositions to have an effect on Mars.
Solar flares generally occur in a 26-month pattern that matches the oppositions of Earth and Mars. The solar wind is ejected from the Sun at about 360 kilometers per second, and the distance between the Sun and Earth is about 149.5 million kilometers, so that a solar shockwave would take about 5 days to reach Earth, and about 8 days to reach Mars. Thus solar flares may be partly responsible for the flare sightings on Mars.
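The travel times quoted above are easy to verify with a back-of-the-envelope calculation. A minimal sketch in Python; the Sun–Mars distance used here (a mean of about 227.9 million km) is my assumption, since the text only gives the Sun–Earth figure:

```python
# Back-of-the-envelope solar-wind travel times for the figures quoted above.
# Assumed: mean Sun-Mars distance of ~227.9 million km (not given in the text).

SOLAR_WIND_KM_S = 360        # km/s, as quoted
SUN_EARTH_KM = 149.5e6       # km, as quoted
SUN_MARS_KM = 227.9e6        # km, assumed mean distance
SECONDS_PER_DAY = 86_400

def travel_days(distance_km, speed_km_s=SOLAR_WIND_KM_S):
    """Days for a disturbance moving at constant speed to cover distance_km."""
    return distance_km / speed_km_s / SECONDS_PER_DAY

print(f"Sun to Earth: {travel_days(SUN_EARTH_KM):.1f} days")  # ~4.8, "about 5 days"
print(f"Sun to Mars:  {travel_days(SUN_MARS_KM):.1f} days")   # ~7.3, "about 8 days"
```

At a constant 360 km/s, the shock reaches Earth in roughly 4.8 days and Mars in roughly 7.3, consistent with the rounded 5- and 8-day figures in the text.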
Solar flares excited by galactic cosmic radiation are "relativistic," which means they travel near the speed of light, something like radio waves. A similar relativistic effect might be created by a nuclear blast. It's possible that the British nuclear tests conducted in the Pacific in September of 1958 sent some relativistic radiation towards Mars, but a relativistic shockwave would reach Mars in minutes. The flares were seen on Mars 65 days later. Thus, nuclear tests in September can't explain the flares that were observed on Mars in November.
However, there's a more intriguing explanation, though it lacks any kind of evidence to support it. Did the British and Americans send two nuclear warheads to Mars? For example, nuclear testing in the Pacific by the British and Americans may have been a way to "open a hole" in the atmosphere for rockets to be fired into outer space. If a rocket was fired from the Pacific on September 2 on a trajectory to Mars, and it traveled 60,000 miles per hour, it would travel 93,600,000 miles in 65 days, which on a parabolic arc would certainly cover the distance between Earth and Mars during a good opposition. The opposition of August 1956 was "perfection" meaning it was very close, so October 1958 would have been "good." There's no evidence that the British and Americans did this, but it would explain the Martian flares and the delay between the Earth events and the Mars sightings.
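The distance arithmetic in this hypothetical scenario checks out; a one-line verification using the speed and duration from the paragraph above:

```python
# Distance covered by a rocket at a constant 60,000 mph over 65 days.
speed_mph = 60_000
hours = 24 * 65                      # 65 days of flight
distance_miles = speed_mph * hours
print(f"{distance_miles:,} miles")   # 93,600,000 -- matches the figure in the text
```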
The first volume of the Air Force's Project A119 has been released online, and offers a possible solution to the Mars flare mystery. Project A119 was funded by the Air Force and it investigated the scientific and political ramifications of detonating an atomic bomb on or near the surface of the Moon. Interestingly, Carl Sagan wrote Chapter VIII, "Organic Matter and the Moon," which is vastly entertaining in the classic Sagan style. He postulated that the Moon does indeed harbor complex organic molecules beneath its surface. But the more relevant portion of the study is Chapter III, "Optical Studies Related to the Lunar Research Flights," and specifically Section F, "Sodium Vapor."
The Project A119 report is cautious throughout about the reality of sending a nuclear warhead to the Moon -- the dangers are well delineated. The study was carried out with the participation of the Armour Research Foundation from May 1958 to January 1959. Volume I, available online, was published in June 1959.
Section F outlines a plan to use sodium vapor in the lunar atmosphere instead of an atomic bomb. The study mentions U.S. experiments from 1955 in which sodium was vaporized in the atmosphere at an altitude of 30 to 40 kilometers (18 to 25 miles) to study chemical reactions with sodium as well as wind directions at these altitudes. "The sodium cloud is discharged at a specific moment and it glows because of resonance fluorescence emitting the sodium lines." The study pointed out that the yellow light of sodium vapor "is preferred from the point of view of visual sensitivity." The authors further noted that narrow band interference filters can enhance the contrast from the observation standpoint.
The brilliance of a sodium cloud containing a kilogram of sodium, discharged 113,000 km (70,200 miles) from Earth, is equal to a sixth magnitude star. This brightness is at the threshold of naked eye visibility against an average sky background. The study concluded that 100 kilograms of sodium would be visible against the Moon's dark side with the naked eye. The authors added that "the sodium requirements are substantially reduced when appropriate telescope magnification is employed."
Section F ends with the following advice: "One might envision a dual purpose use of solid propellant rocket motors as (1) retrodirective devices and (2) marking flares in the present context. The propellant, containing sodium, would be released on firing the rockets near the moon. It is quite likely that a known formulation can be used, thus precluding the costly development of a new special propellant formulation. Most likely, formulations currently undergoing development at NOTS (Naval Ordnance Test Station) will be applicable. These formulations include metal additives such as aluminum, boron, and magnesium. Several factors would bear on the amount of light detectable, such as missile configuration, spinning or tumbling, and angle of viewing."
Here then, in a study conducted just before the Argus and Grapple nuclear tests, is a possible modus operandi for the military: the launch of rockets with retro-rocket propellant containing sodium, as opposed to rockets with nuclear payloads. This configuration eliminated the safety concern associated with live nuclear warheads, and lightened the rocket weight at the same time. What was the objective of launching such flare rockets to Mars? Simply this: to hit the target and receive visual confirmation of such. In 1958 nobody had sent a rocket to Mars. If the U.S. could beat the Soviets there, it would be a scientific first and a great victory for the Cold War.
On October 31 1958 President Eisenhower announced a unilateral testing moratorium, with the understanding that the United States and the Soviet Union would refrain from conducting nuclear tests. The Soviet Union broke the moratorium and resumed atmospheric nuclear tests in September 1961. The largest blast ever achieved by the Soviets took place on October 30 1961, with a yield of 58 megatons. There is also evidence that the United States conducted atmospheric tests in 1959 and 1960. However, one must be cautious, since meteor explosions have a very similar signature to atomic explosions in the atmosphere. An Army infrasound network in operation between 1950 and 1974 collected readings on about 100 bolide events in the atmosphere, as well as readings of nuclear explosions.
A Meteor Hit in 1958 on Mars?
One problem with the Project A119 theory is that there was no deep-space radar in 1958, which means there would have been no way to send the minor course corrections to the rockets on their way to Mars. There would have been no way to fire the retro-rockets when the rockets arrived at Mars, to release the sodium vapor cloud.
Two other flares were observed on Mars on November 21 1958 by Ichiro Tasaka. The first was seen at Edom Promontorium at 13:35 Universal Time, and the second at Northern Hellas at 13:50 UT. The sightings mentioned above, by Tanabe and Fuikui, were located northeast of Solis Lacus, and at Tithonius Lacus. Astronaut Clark McClelland observed a flare on Mars at the Allegheny Observatory on July 24 1954. He hypothesized that the flare was a volcanic eruption. Tsuneo Saheki, who observed a Martian flare in 1949, also observed flares in 1951 and on July 1 1954.
The flares observed by McClelland and Saheki in 1954, both at Edom Promontorium, have all the earmarks of a periodic phenomenon: specular reflections of sunlight off water-ice crystals in surface frosts or atmospheric clouds, occurring at times when the sub-Sun and sub-Earth points are nearly coincident and close to the planet's central meridian (the imaginary line running down the center of the visible disk from pole to pole). This explanation was proposed, and the predicted flares subsequently observed, by Thomas Dobbins in 2001. Edom Promontorium has also historically been one of the brightest observable spots on Mars.
The simultaneity of events on November 21 1958, as well as the great distance between the flare sites, argues against both volcanic activity and specular reflection, and in favor of two meteors that impacted Mars fifteen minutes apart. There is very little information available about meteor hits on Mars. On Earth, meteors are captured in the mesosphere, about 93 to 96 kilometers in altitude. There they burn out or are ablated, and leave trails of metallic ions, mostly sodium. The photograph below shows two meteors and their explosive disintegration in the center. The line extending from the top border is the laser radar (lidar) beam aimed at the meteor area from the ground, just seconds after the fireball first appeared.
CCD image of two Leonid meteor trails, taken in 1998 from Kirtland AFB, New Mexico.
Strangely enough, Mars also has a "meteor layer" located about 90 kilometers in altitude. This is strange because the Martian surface pressure is about 7 millibars, or less than one percent of Earth's. At 90 kilometers altitude in the Martian ionosphere, the pressure is a thousand times weaker still. Nonetheless, electron densities hold meteor trains at that altitude, both on Earth and on Mars. The graph below shows the three electron density peaks in the Mars ionosphere.
The X-ray and meteor electron-density peaks varied during a six-year observation period.
Solar flares may have partly caused this variation, which also affected the altitude of the meteor layer.
Meteors in the Martian atmosphere follow a seasonal trend that seems to peak every eleven years, with the sunspot cycle. With this background knowledge, the flares seen on November 21 1958 can be explained as either a single bolide that broke into pieces on its collision course with Mars, or as a meteor "stack" in the Mars meteor layer that, because of a solar flare or some other solar disturbance, was pushed out of its high-density trough and exploded due to atmospheric friction. The first flare was seen at Edom Promontorium, at 0 deg S, 15 deg E, at 13:35 UT; and the second flare was seen at Northern Hellas, at 42 deg S, 70 deg E, at 13:50 UT.
The Mars meteor theory finds an analogy in the famous "Great Fireball Procession" of February 1913, which was seen by hundreds of people along a path through Canada and the United States as shown in the map below.
The trajectory of the meteor path of 1913.
The sightings took place from 8:00 pm to 8:10 pm on February 9. Mebane gives a thorough account of the dozens of eyewitness reports, but the main thing to notice is the northwest-to-southeast trajectory. Earth and Mars have similar axial tilts (23.5 degrees for Earth, about 25 degrees for Mars), and the meteor path on Earth in 1913, as above, and the path from Edom to Hellas in 1958 are similarly inclined.
The Martian flare seen by Saheki in 1949 lasted for several minutes, much longer than flares attributed to specular reflections, which last about 5 seconds. The flares seen on November 21 1958 were probably similar to Saheki's 1949 sighting. Typically, on Earth anyway, a meteor drills a "tube" into the atmosphere, and this tube is heated to high temperatures around its perimeter, where the meteor is ablated due to friction. The central area of the tube is cooler and isn't as bright. A shockwave generated by a meteor would add more brightness to the impact, even if the meteor blew up in the atmosphere before reaching the ground. If the meteor hit the ground, dust would certainly be raised into the atmosphere: but there is no indication that the hit in 1958 was so drastic.
A meteor swarm was most likely orbiting Mars in the orbital plane. It could be imagined that a vertically-extended swarm or rather "stack" of large meteorites, whose lower end reached down to the 30- or 40-mile-level, was the actual satellite, and that the flares witnessed along its path were caused by the successive "peeling off" of its lower members by the air as the stack slowly lost altitude in its orbital flight. This is the best explanation for the Earth sightings in 1913, when different groups of witnesses saw bright trails and streaks belonging to different parts of the "procession."
The sightings were of slow-speed bolides in an incandescent state, which formed a chain. It was moreover a
"procession" in that the meteors didn't emanate from all directions of a meteor radiant, but were members of clusters of closely related fragments originating from the partial or complete disruption of larger bodies. The clusters broke into groups, and some groups produced a thundering sound (over New York), while others were silent (over New Jersey).
Sound can be attributed to air-racked meteors. No sounds were heard over Michigan, but explosive sounds began to be heard near Hamilton, Ontario. The sounds continued to be heard into Pennsylvania. Thus the Michigan meteors began at a higher level in the atmosphere, gradually dropped downwards, and burnt themselves out somewhere in Pennsylvania after roaring over western New York at a destructively (to the meteor itself) low altitude.
The typical path-length of the 1913 procession, assuming different continuities, is estimated to have ranged from 500 to 750 miles.
On Mars, the length of the path from Edom Promontorium to Northern Hellas can be estimated as 2343 miles. This is too long for the fifteen-minute difference in flare observations if one assumes that the bolides were vertically stacked. For comparison, the Mars Rover package took six minutes to make its descent to the Martian surface. One minute into its descent, friction from the atmosphere slowed it to a speed of 3.35 miles per second. The Rover module was about the weight of a light meteor, though a different shape. Therefore, a meteor of comparable size could be expected to hit the surface within six minutes of atmospheric entry.
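That path length can be sanity-checked with spherical trigonometry, using the flare coordinates quoted earlier (Edom at about 0 deg S, 15 deg E; Northern Hellas at about 42 deg S, 70 deg E) and a mean Martian radius of roughly 2,106 miles. Both values are assumptions of this sketch, not figures from the original report:

```python
import math

def great_circle_miles(lat1, lon1, lat2, lon2, radius):
    """Surface distance between two points; latitudes/longitudes in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    # Spherical law of cosines -- adequate for separations this large
    angle = math.acos(math.sin(p1) * math.sin(p2) +
                      math.cos(p1) * math.cos(p2) * math.cos(dl))
    return radius * angle

MARS_RADIUS_MILES = 2106  # mean radius of Mars, ~3,390 km

# Edom Promontorium (0 S, 15 E) to Northern Hellas (42 S, 70 E)
path = great_circle_miles(0, 15, -42, 70, MARS_RADIUS_MILES)
print(round(path))  # about 2,380 miles, close to the article's 2,343-mile estimate
```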
The scenario can be calculated by assuming: a) a free-space velocity for the bolides of 12 miles per second, the high end of the meteor speed scale of 3 to 12 miles per second; b) an atmospheric descent velocity identical to the Rover package, 3.35 miles per second; and c) that the top of the Martian atmosphere is 180 miles high. After doing all of the calculations, the second bolide will lag behind the first by about 4000 surface miles, and will be higher than the first by about 1000 miles. Thus two separate bolides in the orbital plane of Mars in a train may have come down separately but along the same general path.
The meteorite theory has much to recommend it, but one distracting weakness is that Edom Promontorium is one of the impact sites. As early as 1896 Edom Promontorium was seen to be very bright by several astronomers. It was seen as especially conspicuous when it was near the limb of the planet. Molesworth in his drawings of the time showed the whole of Edom whitish as far as Euphrates and Orontes, and noted that "on one occasion the Sinus Sabaeus was seen to be notched by a minute circular island jutting out from Edom into the Sinus, south of the estuary of Hiddekel, with apparently a narrow canal separating it from Edom." A contemporary British astronomer noted that "the appearance of a short curved canal limiting the whiteness of Edom Promontorium might be a phenomenon of contrast, carrying us back to Mr. Maunder's theory (1882) that some of the canals might be due to differences in shade in neighboring districts."
In 1905 astronomers at the Lowell Observatory made a note of "light spots" to the effect that "two light areas were visible in Aeria on January 4, one just southwest of Pseboas Lacus, the other on the eastern side of the Sabaeus Sinus gulf, embracing Edom Promontorium. A third lay just across the long filament of the Sabaeus Sinus or Mare Icarium, over against the second in Deucalionis Regio. Edom Promontorium has a way of being bright."
During the opposition of 1939 M. Geddes noted: "Sinus Sabaeus. There was little worthy of note here except that the feature was always very dark, probably the darkest region of Mars. During August Edom Promontorium was the brightest feature on the planet, standing out sharply and clearly." These remarks confirm the large albedo difference between the adjacent regions, which led early astronomers to mistake the contrasts as canals.
Thus it seems very suspicious for Edom Promontorium to also be a meteorite impact site.
Just as Edom Promontorium has been the focus of flare sightings attributed to specular reflection off of ice, it may be that the northern rim of Hellas is also ice-bound. Craters in the eastern region of Hellas basin are associated with ice and glacier formation. Hellas basin is also known as Hellas Planitia. It has a diameter of about 2300 kilometres (1400 miles) and is the largest unambiguous crater on the planet. It's surrounded by an elevated "debris ring." The center part of the crater lies 8 kilometers below the surrounding material. The transition region is where glaciers may have existed in the past, and may continue to exist underground today. Thus, specular reflection off of ice fields near the rim of Hellas may be responsible for flares sighted from Earth during an opposition, just as with Edom Promontorium.
It takes a bit of detective work to find out where Edom Promontorium is on today's maps: it turns out to be the northern rim of Schiaparelli crater.
An artist's conception of Schiaparelli crater after Mars terraforming has taken place.
The crater is filled with water. The artist has even added clouds. Note that the lower right-hand region is uplands, and that a rivercourse empties into the crater.
This is the real deal, taken by one of the Viking orbiters. The river channel is conspicuous as a dark crack along the southeastern rim of the crater. Notice the dark area south of the crater, known as Sinus Sabaeus in the albedo terminology. It's shown as green hills in the terraforming image above. Could it be green at certain times of the year in reality?
The Viking photograph above shows how the northern rim of Schiaparelli has a much brighter albedo than the area south of it, which shows up very dark. This is the albedo difference that astronomers were seeing through their telescopes in the nineteenth and twentieth centuries.
The easy answer is to attribute flares seen on Mars to solar reflections off of ice or ice clouds. But in all fairness, one must finally attribute different sightings to different causes, and in general leave the issue open.
Clouston and Gaydon. Excitation of Molecular Spectra by Shock Waves, Nature 180: 1342–1344, 1957
Wikipedia entry: Project A119, online
Chael, Eric and Whitaker, Rodney. Infrasound Signal Library, 25th Seismic Research Review, 1994?, online
Chu, Kelly, Drummond et al. Lidar Observations of Elevated Temperatures in Bright Chemiluminescent Meteor Trails..., Geophysical Research Letters, Vol. 27, No. 13, 2000, online
Withers et al. Space weather effects on the Mars ionosphere due to solar flares and meteors, EPSC 2006, online
Mebane, A.D. Observations of the Great Fireball Procession of 1913 February 9, Made in the United States, Journal of Meteoritics, Vol. 1, Number 4, p. 405, 1956, online
A group of Australian researchers has identified the source of a fast radio flash: an explosion in a distant galaxy 3.6 billion light-years away. The discovery could be of fundamental importance for understanding these mysterious bursts.
The identification was carried out with observations from the Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope of the Commonwealth Scientific and Industrial Research Organisation (CSIRO), located in Western Australia. The result, long awaited by the astronomical community, was published in a study in Science.
Fast radio flashes are energy emissions from a cosmic explosion, and they are very difficult to intercept because they are emitted at long wavelengths, at the radio end of the electromagnetic spectrum. They are also very powerful: a single millisecond-long flash can release as much energy as the Sun radiates in 10,000 years.
The first FRB was detected in 2007, and 85 have been identified since then, a number that has not proved sufficient for a full understanding of the phenomenon. This time the researchers used a new method, based on software capable of making a billion measurements per second, which made it possible to “capture” these very fast flashes.
The new fast radio flash has been called FRB 180924 and is the first whose position has been identified in a relatively precise manner. The burst came from the galaxy DES J214425.25−405400.81. This galaxy was then photographed with the European Southern Observatory's Very Large Telescope, and its distance was measured with the Keck telescope in Hawaii.
Various hypotheses have been made about the explosions that generate these fast flashes; one involves a magnetar, a neutron star with a very strong magnetic field that forms in the death of a very massive star. This discovery also reinforces the idea that there are two types of FRB, repeating and non-repeating, which may have completely different origins.
Non-repeating bursts are much more difficult to pinpoint, but in this case the researchers were able to identify the position of FRB 180924 with extreme precision, locating it 4000 parsecs (each parsec corresponds to about 3.26 light-years) from the center of its host galaxy, which lies 3.6 billion light-years from us.
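Note that two different distances appear here: the 4000 parsecs is the burst's offset from the center of its host galaxy, while 3.6 billion light-years is the distance from us to that galaxy. Converting the offset with the article's own parsec-to-light-year factor:

```python
LY_PER_PARSEC = 3.26  # approximate conversion, as given in the article

offset_pc = 4000
offset_ly = offset_pc * LY_PER_PARSEC
print(round(offset_ly))  # 13040 light-years from the host galaxy's center
```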
- Gives detailed descriptions of double star systems, both northern and southern hemisphere, whereas most books provide only lists and tables
- Includes extensive maps and finding charts, plotted by Mike Swan, so no additional star atlases are needed
- Incorporates stellar distance measurements from the recently published Gaia Data Release 2 catalogue, where available
Modern telescopes of even modest aperture can show thousands of double stars. Many are faint and unremarkable but hundreds are worth searching out. Veteran double-star observer Bob Argyle and his co-authors take a close-up look at their selection of 175 of the night sky's most interesting double and multiple stars.
The history of each system is laid out from the original discovery to what we know at the present time about the stars. Wide-field finder charts are presented for each system along with plots of the apparent orbits and predicted future positions for the orbital systems. Recent measurements of each system are included which will help you to decide whether they can be seen in your telescope, as well as giving advice on the aperture needed.
Double star observers of all levels of experience will treasure the level of detail in this guide to these jewels of the night sky.
Table of Contents:
- 1. Introduction
- 2. Observing double stars
- 3. Measurement techniques
- 4. Observational double star groups
- 5. Double star online resources
- 6. Biographies of visual double star observers
- 7. Myths, mysteries and one-offs
- 8. A selection of double stars
- 9. Double star catalogue in constellation order
- 10. Introduction to the Catalogue
- The Catalogue 1-175
About the Authors:
Bob Argyle has observed double stars since 1966. He writes monthly columns on double stars for Astronomy Now and the Webb Society. He is a Fellow of the Royal Astronomical Society, a member of the International Astronomical Union and Editor of The Observatory magazine.
Mike Swan worked for the Ordnance Survey in England. He has extensive experience in computer graphics and uranography and was solely responsible for the Webb Deep-Sky Society Star Atlas. In this volume he produced the finder charts, the all-sky charts and orbital plots.
Andrew James has been interested in double stars since the late 1970s. His interests include the historical backgrounds and works of various discoverers of southern double stars.
In 2011, NASA’s Dawn spacecraft arrived at its destination, enabling planetary scientists with the American space agency to study a particularly captivating asteroid in the asteroid belt called Vesta.
The Dawn mission revealed a lot about Vesta, such as its odd shape, a product of its size and impact history. Two particularly massive impacts carved huge craters into Vesta's surface, and their details were carefully analyzed in photographs captured by Dawn during its stay.
The mission painted a detailed picture of Vesta's past, and scientists now think that the asteroid contains a solid iron core that was once active, like those at the centers of terrestrial planets today. Moreover, had it not been impacted twice, it would likely be a round body, categorized as a dwarf planet under current standards due to its size and characteristics.
Some science suggests that Vesta was well on its way to becoming a full-blown planet in the early solar system, but that progress was stifled by Jupiter, which had a much stronger gravitational influence over all and any surrounding space rocks.
Had it not been for Dawn, we likely wouldn't know as much about Vesta as we do today. The knowledge acquired from the mission continues to entice scientists, and some of it even helps planetary scientists better understand other asteroids in the solar system.
Residents of high northern latitudes can take heart this frigid January: this coming weekend offers a chance to replicate a unique astronomical sighting.
Veteran sky watcher Bob King recently wrote a post for Universe Today describing what observers can expect from the planet Venus for the last few weeks of this current evening apparition leading up to Venus’s passage between the Earth and the Sun on January 11th. Like so many other readers, we’ve been holding a nightly vigil to see when the last date will be that we can spot the fleeing world… and some great pics have been pouring in.
But did you know that when the conditions are just right, that you can actually spy Venus at the moment of inferior conjunction?
No, we’re not talking about a rare transit of Venus as last occurred on June 6th, 2012, when Venus crossed the disk of the Sun as seen from our Earthly perspective… you’ll have to wait until 2117 to see that occur again. What we’re talking about is a passage of Venus high above or below the solar disk, when spying it while the Sun sits just below the horizon might just be possible.
Not all inferior conjunctions of Venus are created equal. The planet’s orbit is tilted 3 degrees with respect to our own and can thus pass a maximum of eight degrees north or south of the Sun. Venus last did this on inferior conjunction in 2009 and will once again pass a maximum distance north of the Sun in 2017. For the southern hemisphere, the red letter years are 2007, and next year in 2015.
You’ll note that the above periods mark out an 8-year cycle, a period after which a roughly similar apparition of the planet Venus repeats. This is because Venus takes just over 224 days to complete one orbit, and 13 orbits of Venus very nearly equals 8 Earth years.
And while said northern maximum is still three years away, this week’s inferior conjunction is close at five degrees from the solar limb. The best prospects to see Venus at or near inferior conjunction occur for observers “North of the 60”. We accomplished this feat two Venusian 8-year cycles ago during the inferior conjunction of January 16th, 1998 from latitude 65 degrees north just outside of Fairbanks, Alaska. We set up on the Chena Flood Channel, assuring as low and as flat a horizon as possible… and we kept the engine of our trusty Jeep Wrangler idling as a refuge from the -40 degrees Celsius temperatures!
It took us several frigid minutes of sweeping the horizon with binoculars before we could pick up the dusky dot of Venus through the low atmospheric murk and pervasive ice fog. We could just glimpse Venus unaided afterward, once we knew exactly where to look!
This works because the ecliptic is at a relatively shallow enough angle to the horizon as seen from the high Arctic that Venus gets its maximum ~five degree “boost” above the horizon.
A word of warning is also in order not to attempt this sighting while the dazzling (and potentially eye damaging) Sun is above the horizon. Start sweeping the horizon for Venus about 30 minutes before local sunrise, with the limb of the Sun safely below the horizon.
Venus presents a disk 1' 02" across as seen from Earth during inferior conjunction, the largest of any planet and the only one that can appear larger than an arc minute in size. Ironically, both Venus and Earth reach perihelion this month. Said disk is, however, only 0.4% illuminated and very near the theoretical edge of visibility known as the Danjon Limit. And although the technical visual magnitude of Venus at inferior conjunction is listed as -3.1, expect the light scattered across that razor-thin crescent to appear more like magnitude -0.6 once atmospheric extinction is factored in.
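The tiny illuminated fraction follows from the standard phase relation for an inner planet: the sunlit fraction of the disk is (1 + cos α)/2, where α is the Sun-Venus-Earth phase angle. The sketch below assumes a phase angle near 173 degrees, roughly what a conjunction five degrees from the Sun works out to for Venus at this distance from Earth:

```python
import math

def illuminated_fraction(phase_angle_deg):
    """Fraction of a planet's disk that is sunlit, for phase angle alpha."""
    return (1 + math.cos(math.radians(phase_angle_deg))) / 2

# Assumed phase angle near this inferior conjunction (Venus ~5 deg from the Sun)
frac = illuminated_fraction(173)
print(f"{frac:.2%}")  # 0.37% -- in line with the quoted 0.4% illumination
```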
Are you one of the +99% of the world’s citizens that doesn’t live in the high Arctic? You can still watch the passage of Venus from the relative warmth of your home online, via the Solar and Heliospheric Observatory’s (SOHO) vantage point in space. SOHO sits at the sunward L1 point between the Earth and the Sun and has been monitoring Sol with a battery of instruments ever since its launch in 1995. A great side benefit of this is that SOHO also catches sight of planets and the occasional comet that strays near the Sun in its LASCO C2 and C3 cameras. Venus will begin entering the 15-degree-wide field of view for SOHO’s LASCO C3 camera on January 7th, and you’ll be able to trace it all the way back out until January 14th.
From there on out, Venus will enter the early morning sky. When is the first date that you can catch it from your latitude with binoculars and/or the naked eye? Venus spends most of the remainder of 2014 in the dawn, reaching greatest elongation 46.6 degrees west of the Sun on March 22nd, 2014, and is headed back towards superior conjunction on the far side of the Sun on October 25th, 2014. But there’s lots more Venusian action in 2014 in store… more to come!
About 8,000 light-years from Earth lies a star system unlike any astronomers have ever seen. And within that star system lies a ticking bomb: a large star that could one day produce one of the most powerful explosions in the universe, known as a gamma-ray burst.
Gamma-ray bursts have been observed in other galaxies, but never in our own. These powerful explosions come in two types: long-duration and short-duration. They can give off more energy in a few seconds than our sun will in its entire lifetime. They are so powerful, that it's believed a gamma-ray burst could be behind an extinction event on Earth about 450 million years ago.
The objects responsible for this poorly understood phenomenon are just as interesting as the gamma-ray bursts themselves. Wolf-Rayet stars are massive, more than 20 times that of our sun. These titans live only a few million years — a blink of an eye when you consider stars like our own sun live for 10 billion years.
In a paper published Monday in the journal Nature Astronomy, an international team of researchers reveals their findings on this new object, dubbed Apep.
Apep had been seen in X-ray and radio observations more than 20 years ago, but had never been studied in-depth.
In 2012, astronomer Joe Callingham, then working on his Ph.D at the University of Sydney, came across the observations. It left him scratching his head. There was clearly something unusual going on.
Hoping to get more data, he booked time on the European Southern Observatory's Very Large Telescope in Chile. What he got back stunned him. There, as clear as day, was an image unlike anything ever seen before: a beautiful pinwheel.
"This is kind of a once-in-a-career image … that's just nature right there," Callingham said. "And it kind of captures something special, almost poetic or artistic, rather than just scientific."
It's believed that the curved tails of Apep form as the two stars orbiting at the centre throw dust into the expanding winds, almost like a rotating lawn sprinkler.
What the researchers suggest in Monday's paper is that at the heart of the pinwheel are two massive Wolf-Rayet stars (with a third much further away) with winds that collide in the centre and produce dust.
They calculate the winds are travelling at almost 12 million kilometres an hour, or one per cent of the speed of light. One of the stars is at the end of its life, and will undoubtedly die in a powerful explosion, called a supernova.
"There's no doubt it will explode. It will go supernova, probably in 100,000 years," Callingham said. "The question is, will it go in a gamma-ray burst? Well, at the moment, if it exploded today, yes, it would."
But whether or not the conditions will remain, astronomers can't say for certain.
The good news is that, even if it does, it looks like Earth isn't in the line of fire.
Curiouser and curiouser
The more the researchers studied these stars, the stranger it became.
It's believed that gamma-ray bursts can only occur in stars with low metallicity, as metals produced in a star's core would slow its rotation.
But this rapidly rotating Wolf-Rayet star is turning that theory on its head, as stars in our galaxy today have high metallicity.
"It's an oddball in every way," Callingham said.
Excited by these findings, Callingham and his team of researchers decided to conduct further observations, hoping to see the dust that would be released from the star moving over time.
Once again, this weird little star system left astronomers scratching their heads. The dust barely moved.
The researchers theorize that one star is rapidly rotating, producing fast winds at the poles but not at the equator, where the dust lies.
But another possibility is that the system is farther away than believed. In that case, though, it would have to be the brightest object in our galaxy.
"We say that the best model is that it's a long gamma-ray burst progenitor, but actually it's a really unique system that we don't understand, and that's our best model," said Benjamin Pope, a co-author and astronomer at New York University.
"But we need more data. Whatever it is is a highly unusual stellar system … and that's our best model for it."
The research team hopes to do follow-up observations with other telescopes, and believe that this is just the start of Apep's story.
"I think people are going to be writing about this star for years," Pope said. "This has baffled all the experts in the field that we've shown it to, which is essentially all of them. I think we're at the beginning of a really interesting story." | 0.883954 | 3.920848 |
High-pressure experiments solve meteorite mystery
With high-pressure experiments at DESY's X-ray light source PETRA III and other facilities, a research team led by Leonid Dubrovinsky of the University of Bayreuth has solved a long-standing riddle in the analysis of meteorites from the Moon and Mars. The study, published in the journal Nature Communications, can explain why different forms of silica coexist in meteorites even though they normally require vastly different conditions to form. The results also mean that previous assessments of the conditions under which meteorites formed have to be carefully reconsidered.
The scientists investigated a silicon dioxide (SiO2) mineral that is called cristobalite. "This mineral is of particular interest when studying planetary samples, such as meteorites, because this is the predominant silica mineral in extra-terrestrial materials," explains first author Ana Černok from Bayerisches Geoinstitut (BGI) at University Bayreuth, who is now based at the Open University in the UK. "Cristobalite has the same chemical composition as quartz, but the structure is significantly different," adds co-author Razvan Caracas from CNRS, ENS de Lyon.
Unlike ubiquitous quartz, cristobalite is relatively rare on Earth's surface, as it forms only at very high temperatures under special conditions. But it is quite common in meteorites from the Moon and Mars. Ejected by asteroid impacts from the surface of the Moon or Mars, these rocks eventually fell to Earth.
Surprisingly, researchers have also found the silica mineral seifertite together with cristobalite in Martian and lunar meteorites. Seifertite was first synthesised by Dubrovinsky and colleagues 20 years ago and needs extremely high pressures to form. "Finding cristobalite and seifertite in the same grain of meteorite material is enigmatic, as they form under vastly different pressures and temperatures," underlines Dubrovinsky. "Triggered by this curious observation, the behaviour of cristobalite at high-pressures has been examined by numerous experimental and theoretical studies for more than two decades, but the puzzle could not be solved."
Using the intense X-rays from PETRA III at DESY and the European Synchrotron Radiation Facility ESRF in Grenoble (France), the scientists could now get unprecedented views at the structure of cristobalite under high pressures of up to 83 giga-pascals (GPa), which corresponds to roughly 820,000 times the atmospheric pressure. "The experiments showed that when cristobalite is compressed uniformly or almost uniformly – or as we say, under hydrostatic or quasi-hydrostatic conditions – it assumes a high-pressure phase labelled cristobalite X-I," explains DESY co-author Elena Bykova who works at the Extreme Conditions Beamline P02.2 at PETRA III, where the experiments took place. "This high-pressure phase reverts back to normal cristobalite when the pressure is released."
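The comparison with atmospheric pressure is plain unit arithmetic, taking one standard atmosphere as 101,325 pascals:

```python
PA_PER_GPA = 1e9   # pascals per gigapascal
PA_PER_ATM = 101_325  # pascals per standard atmosphere

pressure_gpa = 83
ratio = pressure_gpa * PA_PER_GPA / PA_PER_ATM
print(round(ratio))  # just over 819,000 -- i.e. "roughly 820,000 times atmospheric pressure"
```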
But if cristobalite is compressed unevenly under what scientists call non-hydrostatic conditions, it unexpectedly converts into a seifertite-like structure, as the experiments have now shown. This structure forms under significantly less pressure than necessary to form seifertite from ordinary silica. "The ab initio calculations confirm the dynamical stability of the new phase up to high pressures," says Caracas. Moreover it also remains stable when the pressure is released. "This came as a surprise," says Černok. "Our study clarifies how squeezed cristobalite can transform into seifertite at much lower pressure than expected. Therefore, meteorites that contain seifertite associated with cristobalite have not necessarily experienced massive impacts." During an impact, the propagation of the shock wave through the rock can create very complex stress patterns even with intersecting areas of hydrostatically and non-hydrostatically compressed materials, so that different versions of silica can form in the same meteorite.
"These results have immediate implications for studying impact processes in the solar system," underlines Dubrovinsky. "They provide clear evidence that neither cristobalite nor seifertite should be considered as reliable tracers of the peak shock conditions experienced by meteorites." But the observations also show more generally that the same material can react very differently to hydrostatic and non-hydrostatic compression, as Dubrovinsky explains. "For materials sciences our results suggest an additional mechanism for the manipulation of the properties of materials: Apart from pressure and temperature, different forms of stress may lead to completely different behaviour of solid matter." | 0.880721 | 3.948855 |
Where does space begin? Let’s look up into our planet’s atmosphere, that shell of nitrogen (about 78%), oxygen (about 20%) and various other gases (about 2%) that makes life on Earth possible, to find out. The atmosphere gets thinner as you go further up; in fact, 90% of the Earth’s atmosphere by weight is in the bottom 10 miles (16 km).
The atmosphere is stratified, that is, divided into layers based on the bulk properties and behaviours of the air at that altitude. Most familiar of these to us, because we live in it, is the troposphere. The most massive (about 80% of the atmosphere), warmest and most turbulent of the atmosphere’s layers, the troposphere starts at ground level and extends about a dozen kilometres on average above our heads. The troposphere is warmed by heat from the Earth’s surface, and as any fule kno warm air rises. This is exactly what happens in the troposphere: great columns of warmer air rise from the surface to higher altitudes, cool off and sink down again. Effectively the troposphere is churning like the fluids in a lava lamp, hence its name, derived from the Greek word trope, meaning “turn” or “overturn”. Its turbulent nature means that the upper extent of the troposphere (called the tropopause) varies: over the chilly poles it reaches only about 9 km (30 000 ft), rising to about 17 km (56 000 ft) at the warmer equator. It is cold up there at the tropopause, perhaps −60 °C (−76 °F).
Just to put things into perspective, Mt. Everest’s summit is 8848 m (29 029 ft) above sea level and commercial airliners commonly cruise about 10 km (33 000 ft) overhead.
Rising through the tropopause we come to the second-best-known layer of the atmosphere, the stratosphere. The stratosphere encompasses a much greater volume than the troposphere, as it extends much further above the Earth, to an altitude of almost 60 km. In the central portion of the stratosphere we find a sub-layer which is without a doubt the best-known layer of the atmosphere: the ozone layer.
Ozone is a molecule made of three oxygen atoms (the oxygen molecules in the stuff we breathe are made of two oxygen atoms; an extra atom makes a huge difference, as ozone is horrendously toxic). Ozone in the stratosphere is made by sunlight. Ultraviolet light (UV) from the Sun rips apart oxygen molecules (made of two oxygen atoms) into individual oxygen atoms. Eager to have company again, these lonely atoms combine with unbroken oxygen molecules to create molecules of three oxygen atoms. These three-atom molecules are ozone molecules, and they are continually forming in the stratosphere (heat is released too during all this, so surprisingly the stratosphere is considerably warmer than the upper troposphere). If you think about this for a minute, it suggests that if this went on, all the oxygen molecules in our atmosphere would eventually be turned into ozone… The reason this does not happen is that as ultraviolet light shines on ozone it breaks it up again into a molecule of oxygen and an isolated atom of oxygen. This on-going process creates a layer about 10 to 50 km (33 000 to 160 000 ft) above Earth’s surface where there is a steady concentration of ozone (for a while we humans did our best to put a stop to this, but we have since mended our ways and the ozone layer seems to be recovering). This is all so important because all that UV absorbed in tearing up ozone molecules in the stratosphere never reaches the Earth’s surface. If it did, the green hills of Earth would be as barren as the rocky dunes of Mars.
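This production–destruction balance (the Chapman cycle) can be sketched numerically. The rate constants below are purely illustrative placeholders, not measured stratospheric values; the point is only that ozone settles to a steady concentration rather than consuming all the oxygen.

```python
# Toy model of the ozone production/destruction balance (Chapman cycle).
# Reactions: O2 + UV -> 2 O ; O + O2 -> O3 ; O3 + UV -> O2 + O ; O + O3 -> 2 O2
# Rate constants are illustrative placeholders, NOT real stratospheric values.

def chapman_toy(steps=100_000, dt=0.001):
    o2, o, o3 = 1.0, 0.0, 0.0              # arbitrary concentration units
    j1, k2, j3, k4 = 0.001, 10.0, 1.0, 10.0  # toy photolysis/reaction rates
    for _ in range(steps):
        r1 = j1 * o2          # O2 photolysis -> 2 O
        r2 = k2 * o * o2      # O + O2 -> O3
        r3 = j3 * o3          # O3 photolysis -> O2 + O
        r4 = k4 * o * o3      # O + O3 -> 2 O2 (odd-oxygen loss)
        o2 += dt * (-r1 - r2 + r3 + 2 * r4)
        o  += dt * (2 * r1 - r2 + r3 - r4)
        o3 += dt * (r2 - r3 - r4)
    return o2, o, o3

o2, o, o3 = chapman_toy()
print(f"O2 = {o2:.3f}, O = {o:.4f}, O3 = {o3:.4f}")
# O3 settles near a few percent of O2 with these toy numbers;
# most of the O2 survives, as in the real stratosphere.
```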
At the top of the stratosphere, the stratopause, we are in an alien environment: the atmospheric pressure here is only 1/1000 of that at sea level. Even the wispy near-nothingness that is the Martian atmosphere is thicker than this.
Above the stratopause we have the mesosphere, which extends upwards to 80–85 km (50–53 miles) above the surface. This is the most mysterious and poorly understood region of the atmosphere: too high for aircraft and too low for satellites, it is hard to investigate. We do know that the air (what there is of it) in the mesosphere gets steadily colder as we rise through it, and that there are strong east-to-west winds. This is also the home of red sprites and blue jets, vast and spectacular electrical discharges considered to be meteorologists’ tall tales until a sprite was photographed in 1989.
Despite its inaccessibility, you can see into the mesosphere any clear night when you watch the bright streak of a meteor. Most meteoroids burning up upon entering the atmosphere meet their ends in the mesosphere. It is also here that clouds of fine ice crystals form the noctilucent clouds observed on some summer nights.
Above the mesopause, the temperature begins to grow steadily warmer with altitude, so much so that the next layer is called the thermosphere. It is well named, for it is hot up there; in fact the temperature can be as high as 1500 °C (2730 °F). This seems fantastic: how can this be? How come rockets passing through it are not vaporised? Firstly, the rare atoms and molecules of air here are warmed by solar radiation. An energised atom is a fast-moving atom; a fast-moving atom is a hot atom. The second question is answered by the fact that there is virtually nothing there. The thermosphere is a fine approximation to a vacuum. The gases here are too dilute to transfer any heat to a passing traveller.
Remote and all but empty, the thermosphere may seem of little interest but a century ago it became of great practical use. Pioneers of radio communications were amazed to discover that their signals (which ought to travel in straight lines) could be received far away, right around the curvature of the Earth. It was as though hundreds of kilometres overhead there was a huge invisible mirror reflecting radio waves around the planet. Indeed there was. This is the ionosphere, a shell of free electrons and ionised gas molecules. This is another product of sunlight: up here the Sun’s rays are powerful enough to tear electrons away from atoms. As the ionosphere is created by the Sun shining on the atmosphere, the extent and density of the ionosphere over a particular location varies with the seasons. For a brief few decades the ionosphere played a vital role in intercontinental radio communications, but today it is little used, thanks to the ubiquity of communication satellites.
The thermosphere encountered its first visitor from the Earth’s surface some time in 1944 in the sleek shape of an A4 (aka V-2) rocket launched from Nazi Germany. Within a decade this first invader had been followed by literally thousands more rockets, and soon rockets (some with people on board) would be going higher still. In the mid-1950s it was decided that we needed a boundary between Earth and space and this would be in the thermosphere. The Fédération Aéronautique Internationale declared that this would be at an altitude of 100 kilometres (62 miles) above sea level. This is often called the Kármán Line (named for a Hungarian scientist who calculated that at this altitude a vehicle could not rely on aerodynamic lift to stay aloft).
The thermosphere ends, as you might expect, at the thermopause. Where exactly this lies varies, being higher in direct sunlight, as the Sun’s energy allows the scarce but fast-moving molecules to rise high above the planet. Depending on the time of day and season, the thermopause can be 500–1000 km above any given location. Most satellites and spacecraft orbit the Earth at these altitudes, and the faint traces of atmosphere do exert a slight drag on a satellite’s motion, hence the phenomenon of a satellite’s orbit decaying until it falls to Earth (unless it is periodically reboosted to compensate, as the ISS is).
Uppermost of the Earth’s atmospheric layers is the exosphere, where the atmosphere gradually turns into outer space. Here there are only the rarest molecules of hydrogen, helium and carbon dioxide, and isolated oxygen atoms. They are as much under the influence of the cosmic environment as they are Earth’s, and in fact some will escape into space, never to return. Some estimate the exosphere ends about halfway to the Moon.
Normally we are more concerned with what is beyond the Earth’s atmosphere, but I hope you have found it as fascinating as I have to look into, rather than through, our atmosphere!
Astronomers have snatched a peek at the innards of a neutron star, combining a series of observations to pin down the type of matter squeezed into the ultra-dense stellar ball.
The approach is expected to enable future astronomers to gain glimpses of the stuff inside other neutron stars and boost their understanding of matter, energy and the fundamental particles that make up the universe.
"Neutron stars are a sort of cosmic lab in a sense that the material at their centers is so dense it can't be reproduced on Earth," said study leader Tod Strohmayer in a telephone interview. "We can't get a piece of this material and examine it ourselves."
About the size of a city, a neutron star is the remnant of an exploded star whose matter is so compressed that the protons and electrons within its atoms fuse into neutrons. A teaspoon of the dense stuff would weigh about a billion tons on Earth.
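The "billion tons per teaspoon" figure can be sanity-checked from the quoted city-sized radius and a typical neutron-star mass. The values below (1.4 solar masses, a 10 km radius, a 5 mL teaspoon) are illustrative assumptions, so the result is an order-of-magnitude estimate only:

```python
import math

M_SUN = 1.989e30            # kg
mass = 1.4 * M_SUN          # a typical neutron-star mass (assumption)
radius = 10e3               # m, "about the size of a city" (assumption)
volume = 4.0 / 3.0 * math.pi * radius**3

density = mass / volume                      # ~7e17 kg per cubic metre
teaspoon = 5e-6                              # m^3, about 5 mL
teaspoon_tonnes = density * teaspoon / 1000.0

print(f"{teaspoon_tonnes:.1e} tonnes per teaspoon")  # a few billion tonnes
```

With these round inputs the answer comes out at a few billion tonnes, consistent with the article's "about a billion tons".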
Understanding the internal structure of a neutron star would allow scientists to determine the object's basic properties, explained Lars Hernquist, an astronomer with the Harvard-Smithsonian Center for Astrophysics unaffiliated with Strohmayer's study.
Strohmayer's star, part of a star system called EXO 0748-676, sits in the southern sky constellation Volans (the Flying Fish) about 30,000 light-years away from Earth. One light-year is the distance light travels in a year, or roughly 6 trillion miles (10 trillion kilometers).
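The round numbers quoted for a light-year are easy to verify from the speed of light and a standard Julian year of 365.25 days:

```python
c_km_s = 299_792.458            # speed of light, km/s
year_s = 365.25 * 24 * 3600     # Julian year in seconds

ly_km = c_km_s * year_s         # ~9.46e12 km, i.e. "roughly 10 trillion km"
ly_mi = ly_km / 1.609344        # ~5.88e12 mi, i.e. "roughly 6 trillion miles"

print(f"1 light-year ≈ {ly_km:.2e} km ≈ {ly_mi:.2e} mi")
```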
The neutron star has a radius of about 7 miles (11.5 kilometers) and a mass about 1.75 times of the Sun. It is also part of a binary system; it strips gas from a companion star and then blows the material outward in repetitive thermonuclear explosions.
Strohmayer, an astrophysicist at NASA's Goddard Space Flight Center in Greenbelt, Maryland, studied the neutron star with colleague and graduate student Adam Villarreal of the University of Arizona. The pair presented their research before the High Energy Astrophysics Division of the American Astronomical Society during a meeting last week in New Orleans.
The relationship between the size and heft of the EXO 0748-676 neutron star was a critical tool for researchers trying to determine its matter makeup.
"Knowing the mass and radius of these objects tells about the properties of matter inside the star," Strohmayer said, adding that the relationship between the two quantities can describe a star's internal pressure and density. "It tells you how the particles interact, the forces between fundamental particles and how much you can compress material."
But determining the dimensions of a neutron star is challenging.
"Measuring the radii of these stars is difficult because these things are very small," Hernquist told SPACE.com. "We can't image them directly."
Neutron stars in binary systems like EXO 0748-676 steal matter from their companions, then belch it out in explosions at frequencies related to their spin rates. The stellar burps can be detected by X-ray instruments.
Strohmayer and Villarreal used a relationship between their star's spin rate -- 45 times per second -- and the Doppler shift of its emissions to determine its radius, then plugged that number into a mass-radius ratio already known for the object to generate the mass.
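The paper's actual analysis is more involved, but the basic relation behind it can be sketched: a star of radius R spinning at frequency f has equatorial surface speed v = 2πRf, and that speed sets the fractional Doppler broadening v/c of spectral lines from the surface. Using the quoted spin rate and radius (both assumptions carried over from the article):

```python
import math

f_spin = 45.0        # Hz, quoted spin rate
radius = 11.5e3      # m, quoted radius (~7 miles)
c = 2.998e8          # m/s, speed of light

v_eq = 2.0 * math.pi * radius * f_spin   # equatorial surface speed, ~3250 km/s
beta = v_eq / c                          # fractional Doppler broadening, ~1%

# Inverting the relation: a measured broadening pins down the radius
radius_from_beta = beta * c / (2.0 * math.pi * f_spin)

print(f"v_eq ≈ {v_eq / 1e3:.0f} km/s, Δλ/λ ≈ {beta:.4f}")
```

So a roughly one-percent Doppler signature, combined with the known spin rate, fixes the radius; the mass then follows from the independently known mass-to-radius ratio.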
The result, they say, is a detailed description of the state of matter inside a neutron star, where material is packed so tightly the neutrons swirl about in a frictionless superfluid. But the star is apparently not yet compressed to the point that its neutrons are smashed and their quarks -- even tinier subentities -- liberated into a so-called quark star.
"At this point, it's too much to say a quark star is absolutely ruled out," Strohmayer said. "But we've squeezed out a lot of the parameters."
A wealth of data
One of the foundations for Strohmayer's approach was the availability of two orbiting X-ray facilities, each flush with observations of the EXO 0748-676 system.
Strohmayer and Villarreal used the space-based Rossi X-ray Timing Explorer to determine their neutron star's spin frequency, and archived data from the European Space Agency's XMM-Newton satellite for other measurements.
"It was partly used as a calibration source, where [researchers] stared at it for quite some time," Strohmayer said of the neutron star. "It takes a lot of data to make these measurements, and there have even been more recent observations on the star that researchers are still working with."
This neutron star spins quite slowly compared to similar objects -- which can range from 200 to 600 revolutions per second, researchers said. The leisurely 45-times-per-second spin rate made it easier to capture the neutron star's emissions and split them into a spectrum, much like visible light is separated by a prism. Analysis of a spectrum yields insight into the material that emitted the various wavelengths.
Strohmayer hopes to refine and extend the method.
"We hope to do that and perhaps expand the number of neutron stars per spin frequency," he said. "But I think we can do a little better with [the current star's radius], so we'll do a little fine tuning." | 0.888076 | 3.980096 |
Crescent ♑ Capricorn
Moon phase on Saturday 30 October 2060 is Waxing Crescent; the 5-day-young Moon is in Capricorn.
The previous main lunar phase was the New Moon, 6 days earlier, on 24 October 2060 at 09:25.
Moon rises in the morning and sets in the evening. It is visible toward the southwest in early evening.
The Moon is passing through about ∠15° of the ♑ Capricorn tropical zodiac sector.
The lunar disc appears visually 5.5% narrower than the solar disc. The Moon's and Sun's apparent angular diameters are ∠1829" and ∠1933" respectively.
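The quoted "narrower" percentage follows directly from the two apparent angular diameters (the small difference from the quoted 5.5% is rounding in the arcsecond values given above):

```python
moon_arcsec = 1829.0   # apparent lunar diameter, arcseconds
sun_arcsec = 1933.0    # apparent solar diameter, arcseconds

narrower = 1.0 - moon_arcsec / sun_arcsec
print(f"Lunar disc appears {narrower * 100:.1f}% narrower than the solar disc")
```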
The next Full Moon is the Beaver Moon of November 2060, 8 days from now, on 8 November 2060 at 04:17.
There is a low ocean tide on this date. The Sun's and Moon's gravitational forces are not aligned, but meet at a large angle, so their combined tidal force is weak.
The Moon is 5 days young. Earth's natural satellite is moving from the beginning to the first part of the current synodic month. This is lunation 752 of the Meeus index, or 1705 of the Brown series.
The length of the current lunation, 752, is 29 days, 18 hours and 50 minutes. This is the longest synodic month of 2060. It is 26 minutes longer than the next lunation, 753.
The length of the current synodic month is 6 hours and 6 minutes longer than the mean synodic month, but it is still 57 minutes shorter than the 21st century's longest.
This lunation's true anomaly is ∠174.6°. At the beginning of the next synodic month the true anomaly will be ∠199.1°. The length of upcoming synodic months will keep decreasing, since the true anomaly gets closer to the value of a New Moon at the point of perigee (∠0° or ∠360°).
It is 5 days after the point of apogee on 25 October 2060 at 00:25 in ♏ Scorpio. The lunar orbit is getting closer, as the Moon moves inward toward the Earth. It will keep this direction for the next 8 days, until it gets to the point of next perigee on 7 November 2060 at 22:11 in ♉ Taurus.
The Moon is 391 910 km (243 522 mi) away from Earth on this date. The Moon moves closer over the next 8 days until perigee, when the Earth-Moon distance will reach 356 812 km (221 713 mi).
It is 6 days after its descending node on 24 October 2060 at 04:14 in ♎ Libra. The Moon is following the southern part of its orbit for the next 7 days, until it crosses the ecliptic from South to North at its ascending node on 7 November 2060 at 00:56 in ♉ Taurus.
It is 19 days after the beginning of the current draconic month in ♈ Aries; the Moon is moving from the second to the final part of it.
It is 1 day after the previous South standstill on 29 October 2060 at 17:05 in ♐ Sagittarius, when the Moon reached a southern declination of ∠-28.126°. Over the next 11 days the lunar orbit moves northward to reach a North declination of ∠28.098° at the next northern standstill, on 11 November 2060 at 10:15 in ♋ Cancer.
After 8 days, on 8 November 2060 at 04:17 in ♉ Taurus, the Moon will be in Full Moon geocentric opposition with the Sun, and this alignment forms the next Sun-Earth-Moon syzygy.
Cornell astronomers have created five models representing key points from our planet’s evolution, like chemical snapshots through Earth’s own geologic epochs.
They will use them as spectral templates in the hunt for Earth-like planets in distant solar systems in the approaching new era of powerful telescopes.
“These new generation of space- and ground-based telescopes coupled with our models will allow us to identify planets like our Earth out to about 50 to 100 light-years away,” said Lisa Kaltenegger, associate professor of astronomy and director of the Carl Sagan Institute.
For the research and model development, Kaltenegger, doctoral student Jack Madden and Zifan Lin ’20 authored “High-Resolution Transmission Spectra of Earth through Geological Time,” published March 26 in Astrophysical Journal Letters.
“Using our own Earth as the key, we modeled five distinct Earth epochs to provide a template for how we can characterize a potential exo-Earth – from a young, prebiotic Earth to our modern world,” she said. “The models also allow us to explore at what point in Earth’s evolution a distant observer could identify life on the universe’s ‘pale blue dots’ and other worlds like them.”
Kaltenegger and her team created atmospheric models that match the Earth of 3.9 billion years ago, a prebiotic Earth, when carbon dioxide densely cloaked the young planet. A second throwback model chemically depicts a planet free of oxygen, an anoxic Earth, going back 3.5 billion years. Three other models reveal the rise of oxygen in the atmosphere from a 0.2% concentration to modern-day levels of 21%.
“Our Earth and the air we breathe have changed drastically since Earth formed 4.5 billion years ago,” Kaltenegger said, “and for the first time, this paper addresses how astronomers trying to find worlds like ours could spot young to modern Earth-like planets in transit, using our own Earth’s history as a template.”
In Earth’s history, the timeline of the rise of oxygen and its abundance is not clear, Kaltenegger said. But if astronomers can find exoplanets with nearly 1% of Earth’s current oxygen levels, those scientists will begin to find emerging biology, ozone and methane – and can match it to ages of the Earth templates.
“Our transmission spectra show atmospheric features, which would show a remote observer that Earth had a biosphere as early as about 2 billion years ago,” Kaltenegger said.
Using forthcoming telescopes like NASA’s James Webb Space Telescope, scheduled to launch in March 2021, or the Extremely Large Telescope in Antofagasta, Chile, scheduled for first light in 2025, astronomers could watch as an exoplanet transits in front of its host star, revealing the planet’s atmosphere.
“Once the exoplanet transits and blocks out part of its host star, we can decipher its atmospheric spectral signatures,” Kaltenegger said. “Using Earth’s geologic history as a key, we can more easily spot the chemical signs of life on the distant exoplanets.”
The research was funded by the Brinson Foundation and the Carl Sagan Institute. | 0.865191 | 3.74316 |
ESA Science & Technology - Publication Archive
Published online in Science Express, 7 April 2011.
Initial images of Venus's South Pole by the Venus Express mission showed the presence of a bright, highly variable vortex, similar to that at the planet's North Pole. Using high-resolution infrared measurements of polar winds from the Venus Express's Visible and Infrared Thermal Imaging Spectrometer (VIRTIS) instrument, we show the vortex to have a constantly varying internal structure, with a centre of rotation displaced from the geographic South Pole by ~3 degrees of latitude, and which drifts around the pole with a period of 5 to 10 Earth days. This is indicative of a nonsymmetric and varying precession of the polar atmospheric circulation with respect to the planetary axis.
The mapping IR channel of the Visual and Infrared Thermal Imaging Spectrometer (VIRTIS-M) on board the Venus Express spacecraft observes the CO2 band at 4.3 µm at a spectral resolution adequate to retrieve the atmospheric temperature profiles in the 65-96 km altitude range.
Observations acquired in the period June 2006 - July 2008 were used to derive average temperature fields as a function of latitude, subsolar longitude (i.e.: local time, LT) and pressure. Coverage presented here is limited to the nighttime because of the adverse effects of daytime non-LTE emission on the retrieval procedure, and to southernmost latitudes because of the orientation of the Venus-Express orbit. Maps of air temperature variability are also presented as the standard deviation of the population included in each averaging bin.
At the 100 mbar level (about 65 km above the reference surface) temperatures tend to decrease from the evening to the morning side, despite a local maximum observed around 20-21LT. The cold collar is evident around 65S, with a minimum temperature at 3LT. Moving to higher altitudes, local time trends become less evident at 12.6 mbar (about 75 km), where the temperature monotonically increases from middle latitudes to the southern pole. Nonetheless, at this pressure level, two weaker local time temperature minima are observed at 23LT and 2LT equatorward of 60S. Local time trends in temperature reverse at about 85 km, where the morning side is the warmer.
The variability at the 100 mbar level is maximum around 80S and stronger toward the morning side. Moving to higher altitudes, the morning side always shows the stronger variability. Southward of 60S, standard deviation presents minimum values around 12.6 mbar for all the local times.
More than 25 spacecraft from the United States and the Soviet Union visited Venus in the 20th century, but in spite of the many successful measurements they made, a great number of fundamental problems in the physics of the planet remained unsolved [Taylor, 2006; Titov et al., 2006]. In particular, a systematic and long-term survey of the atmosphere was missing, and most aspects of atmospheric behavior remained puzzling. After the Magellan radar mapping mission ended in 1994, there followed a hiatus of more than a decade in Venus research, until the European Space Agency took up the challenge and sent its own spacecraft to our planetary neighbor. The goal of this mission, Venus Express, is to carry out a global, long-term remote and in situ investigation of the atmosphere, the plasma environment, and some aspects of the surface of Venus from orbit [Titov et al., 2001; Svedhem et al., 2007].
Venus Express continues and extends the investigations of earlier missions by providing detailed monitoring of processes and phenomena in the atmosphere and near-space environment of Venus. Radio, solar, and stellar occultation, together with thermal emission spectroscopy, sound the atmospheric structure in the altitude range from 150 to 40 km with vertical resolution of few hundred meters, revealing strong temperature variations driven by radiation and dynamical processes.
- The remainder of the abstract is truncated -
Beyond their intrinsic interest, ground-based observations have proven their usefulness in supporting spacecraft observations of Solar System bodies. Probably the most spectacular illustration ever was provided during the descent of the Huygens Probe on Titan, when the radio astronomy segment detected the "channel A" carrier signal from Huygens and allowed the recovery of the Doppler Wind Experiment that had been compromised by the failure of the corresponding Cassini channel (Lebreton et al., 2005). Furthermore, ground-based science observations performed during or around the Huygens mission provided new, complementary information on Titan's atmosphere and surface, helping to put the Huygens observations into context (Witasse et al., 2006). Another example of a successful ground-based campaign is the Deep Impact event, when numerous Earth-based and Earth-orbiting observatories monitored comet 9P/Tempel 1 when it was hit by the impactor (Meech et al., 2005).
- The remainder of the abstract is truncated - | 0.916752 | 3.817437 |
Burning Flames in Microgravity
Have you ever considered how flames burn in a microgravity environment? How they appear, or how properties such as size, soot formation, and temperature change?
The subject of microgravity combustion has become a prominent branch of combustion science and research. Microgravity combustion research is essential for gaining an in-depth understanding of combustion processes on earth. Without gravitational effects, novel flame behaviors are revealed. Some of the advantages of microgravity combustion research are:
1. Understanding the physical phenomena necessary to improve spacecraft safety.
2. Understanding fundamental combustion processes on earth.
3. Understanding combustion at very small scales.
The number of combustion-related experiments in microgravity has increased significantly from the first conducted by Kumagai and Isoda in 1957 regarding the burning rates of spherical droplets, to the implementation of combustion experiments aboard past space shuttle missions such as the Laminar Soot Processes (LSP 1 and LSP 2) experiments. The entire space shuttle mission STS-107 was dedicated to microgravity research, with the microgravity glovebox remaining aboard the International Space Station.
Combustion studies in microgravity environments have included droplet combustion, flame spread over liquid and solid fuels, smoldering combustion, premixed flames, and diffusion flames. Additionally, both laminar and turbulent diffusion flame regimes have been studied, and understanding laminar flames is a prerequisite to understanding the more complex turbulent combustion processes.
Laminar diffusion flames involve the mixing of both fuel and oxidizer flow streams upon reaction. Diffusion flames are characterized by the initial separation of the fuel and oxidizer. A reaction only occurs once the reactants come into contact with each other during ignition.
Generally, the fuel molecules diffuse outwards in a co-flow laminar diffusion flame, while the oxidizer molecules diffuse toward the flame from the opposite direction.
Microgravity refers to the weightlessness of an object, a state that is achievable through free fall. The gravitational force acts as a single downwards non-contact force on an object. An object has weight when there is a reaction force opposing the gravitational force. However, when an object is in free fall, no force opposes gravity and the object appears to be weightless. These effects can also be translated into buoyant forces on an object.
The most significant difference between combustion processes in normal gravity (1-g) and in microgravity is due to the absence of buoyancy in a microgravity environment. Hydrostatic pressure differences exerted by the surrounding fluid (usually air) develop a buoyant force during combustion at 1-g. More specifically, the hottest location of the flame is inside the reaction zone. Therefore the temperature gradient is in the opposite direction to gravity on the oxidant side of the reaction zone. The net motion of the high temperature, low density combustion products is a result of the difference between the buoyant and gravitational forces. The overall motion results in the recognizable teardrop flame shape.
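The buoyant acceleration behind this motion can be estimated from the ideal-gas relation ρ ∝ 1/T at constant pressure. The temperatures below are illustrative assumptions, not values from the article; the point of the sketch is that the acceleration scales directly with g and so vanishes in microgravity:

```python
def buoyant_accel(T_hot, T_ambient, g=9.81):
    """Upward acceleration of hot combustion products in cooler air.

    Ideal gas at constant pressure: rho ~ 1/T, so
    a = g * (rho_ambient - rho_hot) / rho_hot = g * (T_hot / T_ambient - 1).
    """
    return g * (T_hot / T_ambient - 1.0)

# Illustrative temperatures: ~1900 K combustion products, ~300 K ambient air
a_1g = buoyant_accel(1900.0, 300.0)                 # ~52 m/s^2 at normal gravity
a_micro = buoyant_accel(1900.0, 300.0, g=9.81e-6)   # effectively zero at 1e-6 g

print(f"1-g: {a_1g:.1f} m/s^2, microgravity: {a_micro:.2e} m/s^2")
```

At 1-g the hot products accelerate upward several times faster than free-fall, producing the familiar teardrop plume; at a millionth of g the same density difference drives essentially no flow, leaving diffusion as the dominant transport mechanism.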
Under normal gravity conditions, the convective buoyant force adds increasing complexity to many combustion processes. Since convection is minimized in microgravity, the products of combustion reactions tend to stay trapped within the reaction zone and therefore prevent fresh oxidizer from reaching the reaction zone. Instead of convection, molecular diffusion operating on a much slower time frame is the dominant means of transporting the oxidizer to the reaction zone.
Furthermore, at 1-g, gas velocities increase with increasing flame size and with increasing distance from the jet exit. With higher velocities, the laminar flow regime transitions to the turbulent regime over a smaller range of flame lengths. There is a larger range of laminar flames sizes that can be studied in microgravity before buoyancy interferes with the results.
The formation and emission of soot in flames has been a long-standing research topic among scientists and engineers. The processes by which soot forms within flames are still not completely understood and various theories are described extensively in literature. Soot formation is a result of incomplete combustion within flames and exemplifies both inefficiency and a loss of usable energy during combustion. While soot formation is a health and environmental issue, soot is an unavoidable by-product in most practical applications of combustion.
Under normal gravity conditions, soot forms in aggregates of roughly spherical particles. The soot particles have nearly uniform diameters on the order of 20 nm, and soot formation begins with the pyrolysis of fuel molecules and the formation of Polycyclic Aromatic Hydrocarbons (PAHs).
Soot formation has been studied extensively at both atmospheric pressures and at high pressures. Experiments at high pressures are necessary in order to simulate most practical combustion technology that operates at high pressures. Results have shown that increases in pressure cause an increase in the concentration of soot formed in the flames, although this could change under reduced gravity conditions due to the lack of buoyancy.
In a 1-g environment, buoyancy increases the axial velocities of the combustion species with increasing distance from the jet exit. The particles at the flame tip have the highest velocities because buoyancy has accelerated the particles over the largest distance. Contrary to normal gravity, the absence of buoyancy in a microgravity environment tends to slow down the axial velocities of combustion species, which increases the combustion time scale or residence time (the length of time the particles spend between the burner rim and the flame tip).
In addition to affecting soot formation, an increase in the reactant residence time in microgravity significantly alters the thermochemical flame environment: soot has more time to form, and the sootier flame loses more of its heat by radiation. This indicates that radiation is an important mode of heat transfer and heat loss for non-buoyant diffusion flames.
Increased radiative heat losses decrease the flame temperature, which decreases luminous flame lengths, eventually leading to flame extinction. Most current analytical studies under normal gravity conditions neglect heat loss caused by radiation. However, it is evident from experimental results that this assumption is not valid for microgravity combustion, where the combustion rate is much slower resulting from the dominance of the diffusive transport mechanism.
More specifically, heat loss due to radiation causes a decrease in the fuel burning rate, eventually producing a low-power flame. This low-power flame now has a longer time period for heat loss to occur. The higher radiative heat losses experienced in microgravity indicate lower temperatures in the soot production regions and lower maximum flame temperatures for diffusion flames. Eventually, radiative heat transfer cools diffusion flames to the point of visible soot luminosity disappearance.
Overall, eliminating buoyancy simplifies combustion processes. The dominance of a diffusional transport mechanism in microgravity produces flames that are rounder, thicker, sootier, more stable, and cooler than normal gravity flames.
Naturally, studying the effects of combustion without the interference of gravity requires facilities free from gravitational influences that seek innovation in apparatus design to accommodate various user constraints. There are many methods for simulating a microgravity environment, including drop towers (tall shafts provide free-fall environments), sounding rockets (launching a payload), flying in aircraft with parabolic trajectories (‘vomit comet’), flying in spacecraft (flying experiments on-orbit), or simulating microgravity using sub-atmospheric pressures (vacuum).
In a microgravity environment, luminous flame height decreases with decreasing pressure to the point of visible luminosity disappearance, resulting in blue flames. Flame width increases with decreasing pressure until the flame is almost spherical. Soot formation decreases with decreasing pressure to negligible concentrations in a near vacuum. | 0.834454 | 3.673051 |
In response to some of the comments below, let me explain one of the concerns over the standard assurances given by particle physicists about the safety of the LHC.
Their argument is that the Earth has been bombarded by high energy cosmic rays for billions of years. These particles would have collided with particles in our atmosphere at much higher energies than are possible at the LHC. So if a catastrophe were possible, it would have happened by now. That means the continued existence of the Earth, and indeed many other astronomical bodies, is powerful evidence that the LHC is safe.
The problem is this: there is an important difference between the collisions that occur in the atmosphere and those that occur at the LHC.
Cosmic rays hit the atmosphere at a substantial fraction of the speed of light. That means the debris from these collisions also travels at a substantial fraction of the speed of light, giving it limited time to interact with the Earth.
The collisions at the LHC are different. These involve two beams, both travelling at almost the speed of light but colliding head on. So the centre of mass of the collision is at rest with respect to the Earth.
That’s a significant point. It means that the debris from the collision can hang around for longer and so have a greater chance of interacting with the Earth.
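The frame difference can be made concrete with a standard special-relativity estimate. The sketch below uses illustrative figures not taken from the post above (the LHC's nominal 14 TeV centre-of-mass energy and a proton target in the atmosphere) to compute the energy a single cosmic-ray proton would need in a fixed-target collision to match the LHC's collision energy, and how fast the centre of mass of the resulting debris then moves relative to Earth.

```python
import math

# Illustrative numbers (assumptions, not stated in the text above):
M_PROTON_GEV = 0.938       # proton rest energy, GeV
E_CM_LHC_GEV = 14_000.0    # LHC nominal centre-of-mass energy, GeV (14 TeV)

# Fixed-target kinematics: E_cm^2 ~ 2 * E_beam * m_target
# so a cosmic ray matching the LHC's E_cm needs roughly:
e_cosmic = E_CM_LHC_GEV**2 / (2 * M_PROTON_GEV)   # GeV
print(f"Cosmic-ray energy for 14 TeV E_cm: {e_cosmic:.2e} GeV")  # ~1e8 GeV = 1e17 eV

# Lorentz factor of the centre of mass in the Earth frame,
# gamma_cm = (E_beam + m) / E_cm, for a fixed-target collision:
gamma_cm = (e_cosmic + M_PROTON_GEV) / E_CM_LHC_GEV
beta_cm = math.sqrt(1 - 1 / gamma_cm**2)
print(f"Debris centre of mass: gamma ~ {gamma_cm:.0f}, beta ~ {beta_cm:.9f}")
# At the LHC the beams collide head on, so gamma_cm = 1: the debris
# centre of mass is at rest with respect to the Earth.
```

So while cosmic rays at these energies do strike the atmosphere, the debris centre of mass recedes at very nearly the speed of light, whereas head-on LHC collisions leave it roughly at rest, which is exactly the asymmetry the argument above turns on.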
When this effect is taken into account, it is not at all clear that similar events have taken place regularly in our atmosphere or indeed anywhere else.
That doesn’t prove the LHC is dangerous, far from it. But it does show that the standard safety assurance is not as watertight as particle physicists would have us believe. If there are any doubts over this assurance, they must be addressed.
CERN has not addressed this concern or any of the others that have emerged since it published its original safety report. That’s not really surprising: it has an obvious vested interest in the LHC.
But this situation cannot continue. That’s why the safety of the LHC needs to be reviewed by an independent group of scientists with a background in risk analysis but no professional or financial connections with CERN.
The proposed upgrade to the LHC provides the perfect opportunity.
It’s been 10 years now since physicists first raised the possibility that particle accelerators on Earth could produce microscopic black holes. This phenomenon initially seemed hugely exciting since it hinted at a way scientists could test their ideas about quantum gravity, the theory that reconciles quantum mechanics with general relativity.
Since then, much of the excitement has died down. It turns out that the energy required to create these objects vastly exceeds what is possible in the world’s most powerful accelerators and, indeed, is far more than that found in the most powerful cosmic ray ever recorded.
There are various loopholes that allow micro-black holes to form at lower energies, however. The most widely discussed is the possibility that the universe has extra dimensions on microscopic scales that significantly weaken gravity at this level. These dimensions would need to operate at a scale greater than 10^-19 metres to allow microscopic black holes to form more easily.
But here again, the evidence is constraining this idea. The world’s most powerful accelerator, the Large Hadron Collider, has been running for a year or so and has so far failed to produce black holes with masses up to 4.5 TeV. That means any extra dimensions must be smaller than 10^-12 metres in size.
Nevertheless, black holes could still be produced at the LHC at a rate of perhaps 100 per year. But how to spot them?
Today, Marcus Bleicher at the Frankfurt Institute for Advanced Studies in Germany and a few pals outline some of the open problems concerning black hole production and detection at the LHC, assuming it takes place at all.
These guys assume that after microscopic black holes form, they would go through four phases. First there is the balding phase in which the newly formed black hole evolves from a highly asymmetric object to a more symmetric one, shedding its asymmetry through gravitational radiation.
In the second phase, called the spin-down phase, the black hole loses mass and angular momentum by emitting Hawking radiation. In the third, the Schwarzschild phase, the black hole becomes spherical and the rate of mass loss slows down. And in the final Planck phase, the black hole winks out of existence.
Of these phases, only the Schwarzschild phase is understood in any detail, mainly because of the symmetry involved. The other phases are poorly understood, particularly the Planck phase, which can only be described in terms of quantum gravity, which is itself an untested idea.
One thing that could help clarify many of these questions is more data and the possibility of an upgrade to the LHC at some point in the future.
The 800 pound gorilla in all this is the safety of these kinds of experiments. There is a widespread belief in the particle physics community that black hole production is a zero risk procedure. Indeed, particle physicists brook no discussion on this topic and Bleicher and co do not mention it.
By contrast, they point out that the physics involved is highly speculative. Indeed what interests them is the possibility that these processes will reveal new physics beyond our existing understanding of the universe. That’s hard to reconcile with the categorical assurances that the public has been given over safety.
There’s little confidence to be gained from safety assessments that have been carried out in the past. Back in the late 90s, a reader’s letter in Scientific American raised the question of whether the Relativistic Heavy Ion Collider (RHIC), then being built at the Brookhaven National Laboratory, could produce black holes that might destroy the planet.
As a result, Brookhaven’s director commissioned a report from four physicists on the safety of the machine. This report concluded that the probability of catastrophe was 2 x 10^-4, describing this as “a comfortable margin of error”. Another report by a group of CERN physicists came to the “extremely conservative conclusion [that] it is safe to run RHIC for 500 million years.”
These papers were widely used at the time to provide reassurance to the public, and yet both later turned out to contain serious errors. The “comfortable margin of error” is actually a 1 in 5,000 chance, not one that most people would consider comfortable. When this was pointed out, the team revised its figures by adding another zero onto the number, making it a 1 in 50,000 chance, adding that “we do not attempt to decide what is an acceptable upper limit on [the probability of a disaster].”
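The conversion between the report's stated probability and the odds quoted here is simple arithmetic; this quick sketch reproduces both figures.

```python
def prob_to_odds(p):
    """Express a probability as the N in '1 in N' odds."""
    return round(1 / p)

original = 2e-4             # the report's stated probability of catastrophe
revised = original / 10     # "adding another zero" onto 5000: 2e-5

print(f"1 in {prob_to_odds(original)}")  # 1 in 5000
print(f"1 in {prob_to_odds(revised)}")   # 1 in 50000
```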
The CERN group had mangled its numbers too. It turned out that their calculations merely suggested that there was a low probability that Earth would be destroyed very early on in a run at RHIC. In fact, their calculations were consistent with a high probability of planetary destruction over a long run.
None of these errors were widely reported.
Just before the LHC was due to be switched on, CERN commissioned its own report on the safety of what is now the world’s most powerful accelerator. This report concluded that the machine was safe.
An important question is what confidence the public should place in this report. There are various reasons to be cautious, not least of which are the errors that appeared in earlier assessments.
Just as serious is the fact that the report was written by five employees at CERN who relied on the scientific work of one other CERN employee and a scientist with a pending visiting position at the organisation.
These are people whose entire careers and livelihoods depended on the LHC being switched on. With the best will in the world, it’s hard to see how this was a sensible choice.
Since then the debate has moved on, with a number of new concerns being raised over safety. We’ve covered this in this blog on several occasions. These concerns have yet to be addressed.
What’s needed, of course, is for the safety of the LHC to be investigated by an independent team of scientists with a strong background in risk analysis but with no professional or financial links to CERN. A competent team could surely be put together even though this condition would probably exclude most particle physicists.
The talk now is of an LHC upgrade to increase the machine’s luminosity and its energy to some 16.5 TeV. Safety should be a central part of these plans and yet it is not. The public should demand to know why. | 0.827303 | 3.66292 |
The last few years have seen an explosion of exoplanet discoveries. Some of those worlds are in what we deem the “habitable zone,” at least in preliminary observations. But how many of them will have life-supporting, oxygen-rich atmospheres in the same vein as Earth’s?
A new study suggests that breathable atmospheres might not be as rare as we thought on planets as old as Earth.
Earth took a long time to develop the oxygenated atmosphere that we enjoy now. Until about 2.4 billion years ago, our planet had much less oxygen in its atmosphere and oceans. That all changed when a major oxygenation event took place; the first of three that shaped the Earth.
The three-step model of Earth’s oxygenation is pretty widely understood and accepted, though it’s not without controversy. The model outlines three major shifts in Earth’s history, with each one substantially altering the Earth’s atmosphere by adding more oxygen.
The three events were:
- The Great Oxidation Event occurred about 2.4 billion years ago during the Paleoproterozoic Era. In this event, biologically produced oxygen accumulated in the oceans and atmosphere, likely leading to an initial mass extinction.
- The Neoproterozoic Oxygenation Event saw a dramatic rise in oxygen levels, and preceded the Cambrian Explosion about 540 million years ago.
- The Paleozoic Oxygenation Event happened about 400 million years ago and saw oxygen reach its current level of about 21%.
The history of Earth’s oxygenation is complicated; it wasn’t a linear progression. At first, oxygen was produced as a waste by-product by life forms, and much of it was absorbed by the Earth’s crust. Oxygen is highly reactive, so it formed all sorts of compounds with other elements and became locked in the crust. In particular, it reacted with iron to produce iron oxide, whose presence in the geological record is one of our best indicators of when oxygen entered the atmosphere.
There’s a lot of debate around this model though. According to one understanding of the model, photosynthetic bacteria in the ocean produced much of the early oxygen. Then land-based plants came along hundreds of millions of years later, raising the oxygen level again. There’s also evidence that plate tectonics and massive volcanic eruptions played a role.
An article by the authors of this new study says this model implies that a certain level of luck is required to create an oxygen-rich world. “If one volcanic eruption hadn’t happened, or a certain type of organism hadn’t evolved, then oxygen might have stalled at low levels,” it says.
But maybe that’s not the case.
Their new study is titled “Stepwise Earth oxygenation is an inherent property of global biogeochemical cycling” and the word “inherent” is key here. The authors say that once we had the right microbes and plate tectonics, which were both established 3 billion years ago, it was only a matter of time before we reached the oxygen level we have now. Regardless of volcanoes and land-based plants.
Rather than external forces, it was “a set of internal feedbacks involving the global phosphorus, carbon, and oxygen cycles” that led to the Earth’s oxygenation, as the study says. In fact, those cycles would have “produced the same three-step pattern observed in the geological record.”
It all comes down to this, from the paper: “We conclude that Earth’s oxygenation events are entirely consistent with gradual oxygenation of the planetary surface after the evolution of oxygenic photosynthesis.”
But how did they arrive at that conclusion?
The researchers are from Leeds University in the UK. The lead author is Lewis J. Alcott, a PhD student based in the Earth Surface Science Institute. Alcott and the other researchers worked with a well-established model of marine biogeochemistry and modified it. They ran that model across all of Earth’s history, and found that it produced the three main oxygenation events all by itself.
In a press release Alcott said, “This research really tests our understanding of how the Earth became oxygen-rich, and thus able to support intelligent life.”
The dominant thinking behind the Earth’s history of oxygenation relies on a couple of broad categories of events to explain it. One is major evolutionary developments in life-forms that produce oxygen: basically “biological revolutions,” where lifeforms became progressively more complex and engineered an oxygen-rich environment. The second category is tectonic revolutions: a dramatic increase in tectonic activity, including significant volcanic activity, that altered the crust and led to greater oxygen levels.
There’s been a lot of debate around the exact nature of both of those broad categories, but this new study is giving scientists something more to think about. Rather than relying on “step-wise” events that can be pinpointed in the geological record to explain oxygenation, the new study points to feedback cycles between phosphorus, carbon, and oxygen.
The study also suggests that oxygenation was inevitable.
Study co-author Professor Simon Poulton, also from the School of Earth and Environment at Leeds, said: “Our model suggests that oxygenation of the Earth to a level that can sustain complex life was inevitable, once the microbes that produce oxygen had evolved.”
At the heart of this new model is the marine phosphorus cycle. Their model produced the same three-step oxygenation pattern the Earth experienced “when driven solely by a gradual shift from reducing to oxidising surface conditions over time. The transitions are driven by the way the marine phosphorus cycle responds to changing oxygen levels, and how this impacts photosynthesis, which requires phosphorus.”
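As a purely illustrative sketch (not the authors' published model; the equations and all rate constants here are invented for illustration), a feedback of this general kind can be written as two coupled rates: phosphorus enters the ocean by weathering and is buried more efficiently when oxygen is high, while oxygen is produced by phosphorus-fuelled photosynthesis and consumed at a rate proportional to its abundance.

```python
# Toy phosphorus-oxygen feedback, integrated with forward Euler.
# Units and rate constants are arbitrary and purely illustrative.
W = 1.0         # phosphorus input from weathering
K_BURIAL = 0.5  # phosphorus burial rate coefficient
K_PROD = 1.0    # oxygen production per unit phosphorus (productivity)
K_CONS = 0.2    # oxygen consumption rate coefficient

def step(p, o, dt=0.01):
    # Burial efficiency rises with oxygen: f(O) = O / (O + 1)
    dp = W - K_BURIAL * p * (o / (o + 1.0))
    do = K_PROD * p - K_CONS * o
    return p + dp * dt, o + do * dt

p, o = 0.1, 0.01   # start in a low-oxygen state
for _ in range(200_000):
    p, o = step(p, o)

print(f"steady state: P = {p:.2f}, O = {o:.2f}")
```

The point of the sketch is only that the coupled system settles into an oxygen level determined by the internal feedback between the two cycles, with no external "event" required, which is the flavour of the argument the study makes.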
“Our work shows that the relationship between the global phosphorus, carbon and oxygen cycles is fundamental to understanding the oxygenation history of the Earth. This could help us to better understand how a planet other than our own may become habitable,” said senior author Dr. Benjamin Mills.
So there’s hope for some of those exoplanets yet.
This study won’t be the final word on the matter. But it’s an intriguing result, and if it stands up to further scientific scrutiny, it may well impact how we characterize the exoplanets we’ve found already, and the thousands more we’ll find with TESS and other future planet-finding telescopes.
- Press Release: Breathing new life into Earth’s oxygen debate
- Research Paper: Stepwise Earth oxygenation is an inherent property of global biogeochemical cycling
- Article: Breathable atmospheres may be more common in the universe than we first thought
- Research Paper (2014): The rise of oxygen in Earth’s early ocean and atmosphere | 0.823646 | 3.868018 |
"What caused this outburst of V838 Mon? For reasons unknown, star V838 Mon's outer surface suddenly greatly expanded with the result that it became the brightest star in the entire Milky Way Galaxy in January 2002. Then, just as suddenly, it faded. A stellar flash like this had never been seen before; supernovas and novas expel matter out into space.
Although the V838 Mon flash appears to expel material into space, what is seen in the above image from the Hubble Space Telescope is actually an outwardly moving light echo of the bright flash. In a light echo, light from the flash is reflected by successively more distant rings in the complex array of ambient interstellar dust that already surrounded the star. V838 Mon lies about 20,000 light years away toward the constellation of the unicorn (Monoceros), while the light echo above spans about six light years in diameter.” | 0.910067 | 3.152595 |
This is the Core of the Milky Way, Seen in Infrared, Revealing Features Normally Hidden by Gas and Dust
The world’s largest airborne telescope, SOFIA, has peered into the core of the Milky Way and captured a crisp image of the region. With its ability to see in the infrared, SOFIA (Stratospheric Observatory For Infrared Astronomy) is able to observe the center of the Milky Way, a region dominated by dense clouds of gas and dust that block visible light. Those dense clouds are the stuff that stars are born from, and this latest image is part of the effort to understand how massive stars form.
One of the mysteries in the core region of our galaxy involves the formation of stars, particularly massive ones. While the region contains much more gas and dust than other regions of the galaxy, fewer massive stars form there: 10 times fewer than expected. Untangling the reasons for that is difficult because of the intervening gas and dust between Earth and the core.
Astronomers working with SOFIA captured an image that may shed light on the birth of massive stars. Scientists combined SOFIA’s power with NASA’s Spitzer Space Telescope and the ESA’s Herschel Space Observatory to get this image. The image shows the Arches Cluster, which contains the densest concentration of stars in the Milky Way. It also highlights the Quintuplet Cluster, which is home to stars a million times more luminous than the Sun. Both clusters are about 100 light years from the Milky Way’s galactic center.
SOFIA is designed to bypass the Earth’s atmosphere and all the problems it poses for infrared astronomy, without the expense of a space telescope. SOFIA’s FORCAST instrument (Faint Object Infrared CAmera for the SOFIA Telescope) can see material in the core of the galaxy that’s warm and emits infrared light in a wavelength that other telescopes can’t. By combining FORCAST’s data with data from the Spitzer and Herschel space telescopes, astronomers created a composite image showing new details.
A paper highlighting early results from this work has been submitted to The Astrophysical Journal. The image was also presented for the first time at the 2020 annual meeting of the American Astronomical Society.
James Radomski is a Universities Space Research Association scientist at the SOFIA Science Center at NASA’s Ames Research Center in California’s Silicon Valley. In a press release, Radomski said “It’s incredible to see our galactic center in detail we’ve never seen before. Studying this area has been like trying to assemble a puzzle with missing pieces. The SOFIA data fills in some of the holes, putting us significantly closer to having a complete picture.”
The data is giving astronomers a new, detailed look at structures near the Quintuplet Cluster that may indicate star birth. It also shows some warm material near the Arches Cluster that could be the seeds for the formation of new stars. This new high-resolution look at these features could be a clue to how some of the most massive stars can form so close to each other in a small region, while the surrounding areas show a surprisingly low rate of star birth.
“Understanding how massive star birth happens at the center of our own galaxy gives us information that can help us learn about other, more distant galaxies,” said Matthew Hankins, a postdoctoral scholar at the California Institute of Technology in Pasadena, California and principal investigator of the project. “Using multiple telescopes gives us clues we need to understand these processes, and there’s still more to be uncovered.”
There’s a lot to untangle when it comes to understanding star birth at the Milky Way’s core. The galactic core may be the most extreme region when it comes to the formation of stars. Though the region contains about 80% of the galaxy’s star-forming material, something is slowing down the process. It’s a region of complex magnetic fields, a powerful gravity well, dense molecular clouds, turbulence, and high temperatures.
At the core of the galaxy, the rate of star formation is only 0.1 solar masses per year out of the 1.2 solar masses per year produced by the entire galaxy. That’s 10 times less than predictions by current theoretical models. Scientists hope that this new image data will help make sense of the region and its lack of star birth.
But the low frequency of star birth in the Milky Way’s core is only one of the mysteries of that region. Another involves Sagittarius A* (Sgr A*), the supermassive black hole at the center of the galaxy.
A ring of material about 10 light years in diameter surrounds Sgr A*. Though Sgr A* is quieter than its counterparts in other spiral galaxies, it still swallows material and emits high-energy radiation as a result. The ring plays an important role in feeding material into the black hole itself. But the origin of the ring is a puzzle, partly because it should get depleted over time. The new data from SOFIA, Spitzer, and Herschel, however, shows structures in the region that may be new material being incorporated into the ring.
The data for these images was captured in July 2019 when SOFIA was operating near Christchurch, New Zealand, to study the southern skies.
via Universe Today https://ift.tt/2nDj081
January 7, 2020 at 10:54AM | 0.877698 | 3.826465 |
If NASA wants to search for life on Mars it will have to rethink its series of missions to the Red Planet, and redesign its spacecraft to look for traces of life.

NASA may try to bring back samples of rock from Mars as early as 2001, four years sooner than planned. “I think we are going to have to accelerate some activities,” says Daniel Goldin, who heads the agency. Goldin, who not long ago was pooh-poohing the idea of dispatching humans to Mars any time soon, even raises the possibility of sending scientists to hunt for microfossils.

In the meantime, planetary scientists will have their patience sorely tested. NASA is launching four space probes to Mars in the next two years, but none is designed to look for life. In November, NASA will launch the Mars Global Surveyor, which will simply orbit the planet and map it. Following close behind in December is the Mars Pathfinder. This craft will land on Mars and release a free-ranging “rover”. However, the rover is not equipped to dig underground—the most likely place to find fossils in Mars’s inhospitable environment.

And in two years’ time, the US is to launch Mars Surveyor 1998, which consists of an orbiter and a lander. The lander will release two small probes that will dig some 2 metres into the Martian surface. Unfortunately, this is probably not deep enough to find any fossils.

The US has tentative plans to send further probes in 2001, 2003 and 2005. The mission in 2001 is probably the first which could be tailored to hunt for microfossils and bring them back to Earth, says Joseph Burns, an astronomer at Cornell University in Ithaca, New York, who led a team that analysed the Mars programme for the National Academy of Sciences. “We shouldn’t rush on this thing,” he says.

While engineers design a lander that can move around and drill deep into the surface to extract any microfossils, the earlier missions can scout around for the most promising spots to send the later craft, he says.

Politically, the announcement is good news for NASA. Much of its Mars research has been sold to politicians as part of the search for extraterrestrial life. NASA could well win an increase in its budget after the “space summit” that President Clinton has ordered to be held in November to plan the search for microfossils. “I am determined that the American space programme will put its full intellectual power and technological prowess behind the search for further evidence of life on Mars,” Clinton said last week.

The discovery could also head off the cuts that were threatening the National Science Foundation’s programme for collecting meteorites in Antarctica. The programme turns up an average of 400 meteorites a year from the continent’s ice cap. The threat of cuts may now vanish. “I think they’ll stop talking like that for a while,” says Bill Cassidy of the University of Pittsburgh, leader of the search team that found ALH84001.

Both Japan and Russia also have Mars missions in the pipeline. In 1998, Japan is scheduled to send a craft to orbit Mars and study its upper atmosphere. Russia’s Mars 1996 mission should set out in November, and although it will release some landers that can dig into the surface of the planet, it will not return any samples to Earth. In November, the European Space Agency will decide whether to send a mission to Mars. ESA’s spacecraft would have four landers, but it would not be launched until 2007 at the earliest.
Brewster as to the habitability of the planets. The new arguments are not yet generally accepted. Lowell believes he has, with the spectroscope, proved the existence of water on Mars.
One of the most unexpected and interesting of all telescopic discoveries took place in the opposition of 1877, when Mars was unusually near to the earth. The Washington Observatory had acquired the fine 26-inch refractor, and Asaph Hall searched for satellites, concealing the planet's disc to avoid the glare. On August 11th he had a suspicion of a satellite. This was confirmed on the 16th, and on the following night a second one was added. They are exceedingly faint, and can be seen only by the most powerful telescopes, and only at the times of opposition. Their diameters are estimated at six or seven miles. It was soon found that the first, Deimos, completes its orbit in 30h. 18m. But the other, Phobos, at first was a puzzle, owing to its incredible velocity being unsuspected. Later it was found that the period of revolution was only 7h. 39m. 22s. Since the Martian day is twenty-four and a half hours, this leads to remarkable results. Obviously the easterly motion of the satellite overwhelms the diurnal rotation of the planet, and Phobos must appear to the inhabitants, if they exist, to rise in the west and set in the east, showing two or even three full moons in a day, so that, sufficiently well for the ordinary purposes of life, the hour of the day can be told by its phases. The discovery of these two satellites is, perhaps, the most interesting telescopic visual discovery made with the large telescopes of the last half century; photography having been the means of discovering all the other new satellites except Jupiter's fifth (in order of discovery). Jupiter. — Galileo's discovery of Jupiter's satellites was followed by the discovery of his belts. Zucchi and Torricelli seem to have seen them. Fontana, in 1633, reported three belts. In 1648 Grimaldi saw but two, and noticed that they lay parallel to the ecliptic. Dusky spots were also noticed as transient. Hooke measured the motion of one in 1664.
In 1665 Cassini, with a fine telescope, 35 feet focal length, observed many spots moving from east to west, whence he concluded that Jupiter rotates on an axis like the earth. He watched an unusually permanent spot during twenty-nine rotations, and fixed the period at 9h. 56m. Later he inferred that spots near the equator rotate quicker than those in higher latitudes (the same as Carrington found for the sun); and W. Herschel confirmed this in 1778–9. Jupiter's rapid rotation ought, according to Newton's theory, to be accompanied by a great flattening at the poles. Cassini had noted an oval form in 1691. This was confirmed by La Hire, Römer, and Picard. Pound measured the ellipticity.
From a drawing by E. M. Antoniadi, showing transit of a satellite's shadow, the belts, and the “great red spot” (Monthly Notices, R. A. S., vol. lix., pl. x.).
W. Herschel supposed the spots to be masses of cloud in the atmosphere — an opinion still accepted. Many of them were very permanent. Cassini's great spot vanished and reappeared nine times between 1665 and 1713. It was close to the northern margin of the southern belt. Herschel supposed the belts to be the body of the planet, and the lighter parts to be clouds confined to certain latitudes. In 1665 Cassini observed transits of the four satellites, and also saw their shadows on the planet, and worked out a lunar theory for Jupiter. Mathematical astronomers have taken great interest in the perturbations of the satellites, because their relative periods introduce peculiar effects. Airy, in his delightful book, Gravitation, has reduced these investigations to simple geometrical explanations. In 1707 and 1713 Maraldi noticed that the fourth satellite varies much in brightness. W. Herschel found this variation to depend upon its position in its orbit, and concluded that in the positions of feebleness it is always presenting to us a portion of its surface, which does not well reflect the sun's light; proving that it always turns the same face to Jupiter, as is the case with our moon. This fact had also been established for Saturn's fifth satellite, and may be true for all satellites. In 1826 Struve measured the diameters of the four satellites, and found them to be 2,429, 2,180, 3,561, and 3,046 miles. In modern times much interest has been taken in watching a rival to Cassini's famous spot. The “great red spot” was first observed by Niesten, Pritchett, and Tempel, in 1878, as a rosy cloud attached to a whitish zone beneath the dark southern equatorial band, shaped like the new war balloons, 30,000 miles long and 7,000 miles across. The next year it was brick-red. A white spot beside it completed a rotation in less time by 5½ minutes than the red spot — a difference of 260 miles an hour.
Thus they came together again every six weeks, but the motions did not continue uniform. The spot was feeble in 1882–4, brightened in 1886, and, after many changes, is still visible.
Galileo's great discovery of Jupiter's four moons was the last word in this connection until September 9th, 1892, when Barnard, using the 36-inch refractor of the Lick Observatory, detected a tiny spot of light closely following the planet. This proved to be a new satellite (fifth), nearer to the planet than any other, and revolving round it in 11h. 57m. 23s. Between its rising and setting there must be an interval of 23 Jovian days, and two or three full moons. The sixth and seventh satellites were found by the examination of photographic plates at the Lick Observatory in 1905, since which time they have been continuously photographed, and their orbits traced, at Greenwich. On examining these plates in 1908 Mr. Melotte
Suppose you picked up a grain of sand and held it at arm’s length. If you held it up in the night sky, it would block a tiny fraction of the visible heavens. Now suppose instead of a sand grain it were a tiny window, through which you could see even the faintest light. Finally, suppose you were to take your tiny window and point it at the darkest patch of night you could find. What would you see?
Of course we have such a “window,” called the Hubble telescope, and we did just what I’ve described. We aimed it at one of the darkest patches of sky we could find, in the Fornax constellation. After gathering light for a total of about 55 hours, what we got was the image below.
Think on that for a bit. This image is what we got when we pointed the Hubble telescope at what looked like empty space. Instead of empty space, we found about 10,000 galaxies. These are young galaxies, from about 400 to 800 million years after the big bang. Ten thousand galaxies in a patch of sky the size of a grain of sand.
Of course there isn’t anything particularly special about the direction we looked other than the fact that there wasn’t anything in the way. If we looked in any other direction we would see basically the same thing. Imagine the sky covered with grains of sand, and in each sand grain thousands of galaxies.
It’s estimated that there are 100 billion galaxies in the visible universe. That’s more than 10 galaxies for every man, woman and child on Earth. Those galaxies might have an average of about 100 billion stars. Around most of those stars might be tens of planets. Countless cosmic grains of sand.
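The arithmetic behind that picture can be sketched in a few lines. The grain size and arm length below are rough assumptions, not figures from the text, so this is only an order-of-magnitude check:

```python
import math

# Order-of-magnitude check: if every sand-grain-sized patch of sky holds
# ~10,000 galaxies, how many does the whole sky hold? The grain size and
# arm length are rough assumptions, not figures from the text.
grain_area_m2 = (1e-3) ** 2                     # a ~1 mm x 1 mm grain
arm_length_m = 0.70                             # held at arm's length
patch_sr = grain_area_m2 / arm_length_m ** 2    # small-angle solid angle, sr
full_sky_sr = 4 * math.pi                       # whole celestial sphere

patches = full_sky_sr / patch_sr                # sand grains to tile the sky
galaxies = patches * 10_000
print(f"{patches:.1e} patches, ~{galaxies:.0e} galaxies")
```

The sketch lands within a factor of two of the ~100 billion figure, which is about as close as a sand-grain estimate can hope to get.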
And on one of those cosmic sand grains are humans, looking out at the night sky and realizing the universe is much bigger than they once imagined.
In this post, we will take a peek at 5 amazing futuristic telescopes that have been built in recent times. While you’re reading, ponder on the implications of mankind having the ability to see into the deepest reaches of space.
Ever since Galileo Galilei looked up into the sky during the 1600s, the vast and empty expanse of space has seemed a little closer to home. Today, mankind has created some of the most magnificent and powerful telescopes of all time, with the sole intent being to discover more of the dark abyss above us. The question is: how far will we go on our journey to solve the mysteries of the universe? And what will we find at the end of the journey? Whatever the answer is, the fact remains: the future is looking brighter and more exciting – thanks to the technological innovations which are shaping our world today. Let’s dive in and check out the top 5 most futuristic telescopes of the next era.
The telescopes below are all located on Earth. Many space telescopes exist too; these are located in space and orbit our planet like satellites, and in fact we use satellite communication techniques to communicate with them. We shall list the top space telescopes in a different post.
European Extremely Large Telescope (E-ELT) – Is This an Exaggeration or A Real Thing?
Deep in the dusty Atacama Desert in Chile, construction is underway on a telescope that’s slated to be the largest in the world when it’s completed. The European Extremely Large Telescope, or E-ELT for short, is, believe it or not, actually a real observatory. It began construction in May 2017 and is projected to complete by 2024. It’s located on the flat surface of a 10,000-foot mountain. The ELT is a brainchild of the European Southern Observatory (ESO) made up of 14 European countries and Brazil. The ESO also operates the Atacama Large Millimeter/submillimeter Array (ALMA) that discovered the birth of a solar system around the star HL Tauri.
A desert was chosen as a prime location for the 39-meter wide telescope, mostly due to the lack of vegetation and precipitation, both of which can cause skies to become cloudy and severely debilitate a telescope's view. Its mirror consists of 798 hexagons, each measuring 1.4 meters across. The E-ELT's sister telescope (which is smaller) is also present in the same desert and is known as the Very Large Telescope.
When the E-ELT is completed, it’s expected to discover a whole lot of new and interesting things about our universe which will surely only serve to create more unanswered questions in our minds as it scans the skies for new planets, dark matter, and other spatial phenomena. This telescope of the future is the next step in the evolution of the human race. Its successful invention makes us realize that anything is possible if we put our minds to it.
Thirty Meter Telescope (TMT) – Is This the Most Important Optical Telescope Ever?
Meanwhile, over in Hawaii, the Thirty Meter Telescope is also currently under construction. The project has been ongoing since the start of the ‘90s. However, its progress is constantly being halted due to protests from Native Hawaiians.
The protesters vehemently oppose the construction of the telescope due to its location on Mauna Kea – the tallest mountain on Earth when measured from peak to base and considered to be a sacred place by the Native Hawaiians. Mauna Kea is a famous location for astronomers around the world due to the clear skies viewable from the top of the peak.
The Thirty Meter Telescope, or TMT, will utilize a mirror with triple the diameter of any telescope in use today. This means that scientists will be able to see deeper and farther into space than ever before. Mysterious celestial objects which currently appear faint can become much clearer with the use of the TMT.
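The claim can be made concrete with the standard telescope scaling laws: light-gathering power grows with the square of the mirror diameter, while diffraction-limited resolution improves linearly with it. The 10-meter reference mirror below is an assumption for illustration (roughly the largest single telescopes operating today):

```python
# Standard telescope scaling laws, comparing the TMT's 30 m mirror to an
# assumed 10 m reference (roughly the largest single telescopes in use today).
d_tmt, d_ref = 30.0, 10.0
area_gain = (d_tmt / d_ref) ** 2       # light-gathering power scales as D^2
resolution_gain = d_tmt / d_ref        # diffraction limit improves as 1/D
print(area_gain, resolution_gain)      # 9.0 3.0
```

Tripling the diameter thus buys nine times the photons and three times the angular sharpness, which is what lets a bigger mirror see both fainter and finer.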
Even after the approval for a construction permit by Hawaii’s Board of Land and Natural Resources in September 2017, word on the street is that the ruling is currently under an appeal. What will we discover beyond our known universe? New planets? Alien life? We’ll have to wait and see.
Large Synoptic Survey Telescope (LSST) – Does the Size of the Telescope Matter?
The Large Synoptic Survey Telescope is another large scope under construction which is making an effort to change the astronomy game in a dramatic fashion. This huge apparatus measures 8.4 meters in diameter. Moreover, it utilizes a 3.2 billion-pixel camera that’s as big as a small car.
The LSST construction site is also located in Chile due to the favorable weather conditions. Although it may seem to be the same as every other telescope on this list, the LSST can do something unique.
Being a survey telescope, this large scope will be able to scan the entire night sky rather than focusing on single objects. The plan is for it to run every couple of nights, using the largest digital camera on Earth to take time-lapse videos of the night sky in all its glory.
Due to the sheer size of the camera, an extremely wide field of view is available for the futuristic telescope. This will allow it to take an array of crystal-clear photos.
The LSST Corporation, together with the National Science Foundation and the U.S. Energy Department, is overseeing the construction of the LSST. Their overall aim is to map out the known universe and create the first-ever three-dimensional map.
This will be useful in providing a complete and total census of our solar system as well as spotting any rogue asteroids that threaten to collide with planet Earth. Construction is estimated to complete in 2022.
Giant Magellan Telescope (GMT) – Will This Be One of the Tallest Futuristic Telescopes Ever?
Well, what a surprise – another huge telescope in Chile; and this one’s also in the Atacama Desert! The Giant Magellan Telescope is currently still in the design phase but the blueprints reveal some exciting and unique features.
For starters, it uses an unprecedented reflector system that features seven of the largest stiff monolith mirrors in the world. These large primary mirrors reflect light to seven smaller secondary mirrors, which in turn send the light back down through the central primary mirror, where it finally reaches advanced imaging cameras.
Sounds pretty cool, but that’s not all. Underneath each secondary mirror lies hundreds of actuators which work to counteract atmospheric turbulence. This turbulence, if unchecked, would cause celestial objects in the view to become blurry and obscured.
The actuators are controllable remotely by state-of-the-art computers. With the use of this technology, the Giant Magellan Telescope Organization (GMTO) promises to provide images that are 10 times sharper than the pictures produced by the Hubble Space Telescope!
The GMTO hopes to use its massive creation to find proof of extraterrestrial life and shed light on some of the mysteries of space. The Giant Magellan Telescope will achieve completion by 2023.
Large Binocular Telescope (LBT) – Is This the Largest Pair of Binoculars in The World?
Do you still remember the days when your father handed you a pair of binoculars and you were amazed by what it could do? Well, prepare to be amazed again, because the LBT could be considered the largest binoculars in the world.
The LBT visually resembles a giant pair of binoculars. It has two large identical telescopes placed side-by-side. These two telescopes connect together in order to produce a much clearer and higher-resolution picture. If you’ve read some telescope reviews, you might have heard of this one before.
This ingenious invention is funded by NASA. It is actually the only one on our list that has already completed construction. NASA’s primary goal with the LBT is to take high-quality infrared images of dust around stars that formed long ago in order to learn more about the planet-formation process in the hopes of finding habitable planets similar to Earth.
As of January 2015, the LBTI project has already finished its first study of habitable zones.
The search for planets beyond our solar system is about to gain some new recruits.
Today, a team that includes MIT and is led by the Carnegie Institution for Science has released the largest collection of observations made with a technique called radial velocity, to be used for hunting exoplanets. The huge dataset, taken over two decades by the W.M. Keck Observatory in Hawaii, is now available to the public, along with an open-source software package to process the data and an online tutorial.
By making the data public and user-friendly, the scientists hope to draw fresh eyes to the observations, which encompass almost 61,000 measurements of more than 1,600 nearby stars.
“This is an amazing catalog, and we realized there just aren’t enough of us on the team to be doing as much science as could come out of this dataset,” says Jennifer Burt, a Torres Postdoctoral Fellow in MIT’s Kavli Institute for Astrophysics and Space Research. “We’re trying to shift toward a more community-oriented idea of how we should do science, so that others can access the data and see something interesting.”
Burt and her colleagues have outlined some details of the newly available dataset in a paper to appear in The Astronomical Journal. After taking a look through the data themselves, the researchers have detected over 100 potential exoplanets, including one orbiting GJ 411, the fourth-closest star to our solar system.
“There seems to be no shortage of exoplanets,” Burt says. “There are a ton of them out there, and there is a ton of science to be done.”
The newly available observations were taken by the High Resolution Echelle Spectrometer (HIRES), an instrument mounted on the Keck Observatory’s 10-meter telescope at Mauna Kea in Hawaii. HIRES is designed to split a star’s incoming light into a rainbow of color components. Scientists can then measure the precise intensity of thousands of color channels, or wavelengths, to determine characteristics of the starlight.
Early on, scientists found they could use HIRES’ output to estimate a star’s radial velocity — the very tiny movements a star makes either as a result of its own internal processes or in response to some other, external force. In particular, scientists have found that when a star moves toward and away from Earth in a regular pattern, it can signal the presence of an exoplanet orbiting the star. The planet’s gravity tugs on the star, changing the star’s velocity as the planet moves through its orbit.
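A minimal sketch of why this works: for a circular, edge-on orbit, the star's velocity semi-amplitude K follows directly from Kepler's laws. The values below are illustrative assumptions (Jupiter orbiting the Sun), not parameters from the survey:

```python
import math

# Semi-amplitude of the stellar wobble for a circular, edge-on orbit:
#   K = (2*pi*G / P)**(1/3) * m_planet / (M_star + m_planet)**(2/3)
# Values below are illustrative (Jupiter around the Sun), not survey data.
G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
P = 11.86 * 3.156e7       # orbital period, seconds (~11.86 years)
m_planet = 1.898e27       # Jupiter's mass, kg
M_star = 1.989e30         # Sun's mass, kg

K = (2 * math.pi * G / P) ** (1 / 3) * m_planet / (M_star + m_planet) ** (2 / 3)
print(f"K ~ {K:.1f} m/s")  # roughly 12 m/s
```

A wobble of only about a dozen meters per second from a Jupiter-mass planet is why instruments like HIRES must resolve such tiny wavelength shifts.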
“[HIRES] wasn’t specifically optimized to look for exoplanets,” Burt says. “It was designed to look at faint galaxies and quasars. However, even before HIRES was installed, our team worked out a technique for making HIRES an effective exoplanet hunter.”
For two decades, these scientists have pointed HIRES at more than 1,600 “neighborhood” stars, all within a relatively close 100 parsecs, or 325 light years, from Earth. The instrument has recorded almost 61,000 observations, each lasting anywhere from 30 seconds to 20 minutes, depending on how precise the measurements needed to be. With all these data compiled, any given star in the dataset can have several days’, years’, or even more than a decade’s worth of observations.
“We recently discovered a six-planet system orbiting a star, which is a big number,” Burt says. “We don’t often detect systems with more than three to four planets, but we could successfully map out all six in this system because we had over 18 years of data on the host star.”
More eyes on the skies
Within the newly available dataset, the team has highlighted over 100 stars that are likely to host exoplanets but require closer inspection, either with additional measurements or further analysis of the existing data.
The researchers have, however, confirmed the presence of an exoplanet around GJ 411, which is the fourth-closest star to our solar system and has a mass that is roughly 40 percent that of our sun. The planet has an extremely tight orbit, circling the star in less than 10 days. Burt says that there is a good chance that others, looking through the dataset and combining it with their own observations, may find similarly intriguing candidates.
“We’ve gone from the early days of thinking maybe there are five or 10 other planets out there, to realizing almost every star next to us might have a planet,” Burt says.
HIRES will continue to record observations of nearby stars in the coming years, and the team plans to periodically update the public dataset with those observations.
“This dataset will slowly grow, and you’ll be able to go on and search for whatever star you’re interested in and download all the data we’ve ever taken on it. The dataset includes the date, the velocity we measured, the error on that velocity, and measurements of the star’s activity during that observation,” Burt says. “Nowadays, with access to public analysis software like Systemic, it’s easy to load the data in and start playing with it.”
Then, Burt says, the hunt for exoplanets can really take off.
“I think this opens up possibilities for anyone who wants to do this kind of work, whether you’re an academic or someone in the general public who’s excited about exoplanets,” Burt says. “Because really, who doesn’t want to discover a planet?”
This research was supported, in part, by the National Science Foundation.
Study Suggests Our Universe May Be 2 Billion Years Younger
The universe is looking younger every day, it seems.
New calculations suggest the universe could be a couple billion years younger than scientists now estimate, and even younger than suggested by two other calculations published this year that trimmed hundreds of millions of years from the age of the cosmos.
The huge swings in scientists’ estimates — even this new calculation could be off by billions of years — reflect different approaches to the tricky problem of figuring the universe’s real age. “We have large uncertainty for how the stars are moving in the galaxy,” said Inh Jee, of the Max Planck Institute in Germany, lead author of the study in Thursday’s journal Science.
Scientists estimate the age of the universe by using the movement of stars to measure how fast it is expanding. If the universe is expanding faster, that means it got to its current size more quickly, and therefore must be relatively younger.
The expansion rate, called the Hubble constant, is one of the most important numbers in cosmology. A larger Hubble constant makes for a faster moving — and younger — universe.
The generally accepted age of the universe is 13.7 billion years, based on a Hubble Constant of 70. Jee’s team came up with a Hubble Constant of 82.4, which would put the age of the universe at around 11.4 billion years.
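The link between the constant and the age can be sketched with the "Hubble time", the reciprocal of the Hubble constant. A real cosmological model adds matter and dark-energy terms, so these values only approximate the ages quoted in the article:

```python
# The reciprocal of the Hubble constant, the "Hubble time", sets the rough
# age scale; a full cosmological model adds matter and dark-energy terms,
# so these values only approximate the ages quoted in the article.
KM_PER_MPC = 3.0857e19    # kilometers in one megaparsec
SEC_PER_GYR = 3.156e16    # seconds in a billion years

def hubble_time_gyr(h0_km_s_mpc):
    """Hubble time 1/H0, in billions of years."""
    return KM_PER_MPC / h0_km_s_mpc / SEC_PER_GYR

print(hubble_time_gyr(70.0))   # ~14.0 Gyr
print(hubble_time_gyr(82.4))   # ~11.9 Gyr
```

The ordering matches the article: a larger constant gives a younger universe.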
Jee’s team used a concept called gravitational lensing — where gravity warps light and makes faraway objects look closer. They relied on a special type of that effect called time delay lensing, using the changing brightness of distant objects to gather information for their calculations.
But Jee’s approach is only one of a few new ones that have led to different numbers in recent years, reopening a simmering astronomical debate of the 1990s that had been seemingly settled.
In 2013, a team of European scientists looked at leftover radiation from the Big Bang and pronounced the expansion rate a slower 67, while earlier this year Nobel Prize winning astrophysicist Adam Riess of the Space Telescope Science Institute used NASA’s super telescope and came up with a number of 74. And another team earlier this year came up with 73.3.
Jee and outside experts had big caveats for her number. She used only two gravitational lenses, which were all that were available, and so her margin of error is so large that it’s possible the universe could be older than calculated, not dramatically younger.
Harvard astronomer Avi Loeb, who wasn’t part of the study, said it was an interesting and unique way to calculate the universe’s expansion rate, but the large error margins limit its effectiveness until more information can be gathered.
“It is difficult to be certain of your conclusions if you use a ruler that you don’t fully understand,” Loeb said in an email.
Headline Image: © NASA, ESA, R. Ellis (Caltech), HUDF 2012 Team via AP
Can’t wait for NASA’s Juno probe to arrive at Jupiter in 2016? Then check out the gas giant shining bright in the overnight sky in August.
To find the largest planet in the solar system just look for a super-bright creamy colored star rising above the eastern horizon just after local midnight. As the night progresses Jupiter will continue to move ever higher and highlight the southern sky by the predawn hours. The fifth planet from the Sun is sitting within the boundaries of the zodiac constellation Aries, just underneath the front hooves of the celestial ram.
You can’t miss Jupiter – even if you’re stuck within a light polluted city – right now it’s one of the most brilliant star-like objects in the entire sky. What makes it such a sparkler? First off, it’s a true monster in size – with a diameter measuring 142,000 km over 1300 Earth-sized worlds could easily fit inside it, making it a wide enough object in the sky to see as a disk even when using the smallest of optical aids. Jupiter’s also completely shrouded in highly reflective, light colored hydrogen and helium clouds, which just adds to its brilliance.
Your views get even better with a small telescope which can reveal the planet’s signature cloud belts and even hints of the famous Great Red Spot – a hurricane the size of 3 Earths, raging for at least three centuries. Check out my views of the planet and red spot.
So while we wait for Juno to make its 2.8 billion km, 5 year trek , it’s amazing to think that Jupiter’s so bright in our skies despite it being so far away and that the reflected sunlight off its cloud-tops takes over 40 minutes to reach your eye!
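Both figures in this post are easy to check; Earth's diameter and the orbital distances below are assumed round numbers, so the results are approximate:

```python
# Two quick checks of the figures above; Earth's diameter and the orbital
# distances are assumed round numbers, so the results are approximate.
AU_KM = 1.496e8          # kilometers per astronomical unit
C_KM_S = 299_792.458     # speed of light, km/s

d_jupiter, d_earth = 142_000, 12_742            # diameters, km
volume_ratio = (d_jupiter / d_earth) ** 3       # how many Earths fit inside

times_min = []
for dist_au in (5.2 - 1.0, 5.2 + 1.0):          # opposition vs. conjunction
    times_min.append(dist_au * AU_KM / C_KM_S / 60)

print(f"~{volume_ratio:.0f} Earth volumes")
print(f"light time: {times_min[0]:.0f} to {times_min[1]:.0f} minutes")
```

The "over 40 minutes" quoted above falls between the opposition and conjunction extremes, i.e. at an intermediate Earth–Jupiter distance.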
Sky Extras: While you’re out gazing at Jupiter, remember this Friday, August 5th, also marks when asteroid Vesta reaches opposition, which means for skywatchers it’s the brightest it can get in our skies. Check out my observer’s guide posted last week.
Also spaceweather.com is reporting that we should be on alert for possible auroras this weekend – especially for those in high and mid-latitude locations. A few days ago a group of sunspots flung solar flares in the direction of Earth – check out an awesome movie of the eruption. So if you have clear skies you may want to take a peek towards the northern sky around midnight the next few days.
Andrew Fazekas, aka The Night Sky Guy, is a science writer, broadcaster, and lecturer who loves to share his passion for the wonders of the universe through all media. He is a regular contributor to National Geographic News and is the national cosmic correspondent for Canada’s Weather Network TV channel, space columnist for CBC Radio network, and a consultant for the Canadian Space Agency. As a member of the Royal Astronomical Society of Canada, Andrew has been observing the heavens from Montreal for over a quarter century and has never met a clear night sky he didn’t like.
While Mars may be significantly behind its sunward neighbor in terms of the number of motor vehicles crawling over its surface, it seems like we’re doing our best to close that gap. Over the last 23 years, humans have sent four successful rovers to the surface of the Red Planet, from the tiny Sojourner to the Volkswagen-sized Curiosity. These vehicles have all carved their six-wheeled tracks into the Martian dust, probing the soil and the atmosphere and taking pictures galore, all of which contribute mightily to our understanding of our (sometimes) nearest planetary neighbor.
You’d think then that sending still more rovers to Mars would yield diminishing returns, but it turns out there’s still plenty of science to do, especially if the dream of sending humans there to explore and perhaps live is to come true. And so the fleet of Martian rovers will be joined by two new vehicles over the next year or so, led by the Mars 2020 program’s yet-to-be-named rover. Here’s a look at the next Martian buggy, and how it’s built for the job it’s intended to do.
If It Ain’t Broke…
The Mars 2020 mission is part of the broader Mars Exploration Program, or MEP. The MEP was born from the failure of the Mars Observer mission in 1992, NASA’s first attempted mission to Mars since the successful Viking program in the 1970s. The soil chemistry experiments performed by the static Viking landers suggested that life may have been possible on Mars, but the results were equivocal. NASA launched the MEP to answer the question of life on Mars definitively, as well as to characterize the geology and atmosphere of the planet to prepare for human exploration.
Unfortunately, a lot of the missions that were to make up MEP were lost to budget cutbacks in 2012, and the only money earmarked for planetary exploration was contingent on being spent on missions capable of returning samples to Earth. Curiosity had already made it to Mars by that point, though, and was returning exciting results and glorious photos of the Martian landscape. And while it was capable of sampling the Martian regolith, Curiosity was not able to collect samples that could one day be returned to Earth.
Curiosity did, however, prove that a large rover with a complex mission profile could land successfully and perform under challenging conditions. Not willing to mess with success, and operating under budget restrictions, NASA decided to essentially clone Curiosity for the Mars 2020 rover. The rovers would be mechanically very similar, with different science packages bolted on, as well as the addition of the hardware needed to package samples for eventual retrieval and return to Earth.
Super-Charged for Science
Outwardly, it’s hard to tell the difference between Curiosity and the Mars 2020 rover. Both use the proven six-wheel articulated bogie design, with each wheel powered by its own electric motor. The wheels have been redesigned for Mars 2020, though, thanks to lessons learned from seven years of abuse suffered by Curiosity‘s wheels.
The main hulls of the two rovers look almost identical, with the same angled “trunk” area at the rear of the vehicle supporting the same plutonium-powered Multi-Mission Radioisotope Thermal Generator (MMRTG) module to provide 110 Watts of electrical power and 2,000 Watts of heat for the rovers’ guts. The Mars 2020 MMRTG is literally a leftover from Curiosity, as are many other parts and instruments.
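A quick sketch of what those MMRTG numbers imply; the length of a Martian sol is an added assumption not stated in the text:

```python
# Rough power budget from the MMRTG figures above; the length of a Martian
# sol (~24 h 40 m) is an added assumption not stated in the text.
P_ELEC_W = 110            # electrical output, watts
P_THERMAL_W = 2000        # thermal output, watts
SOL_S = 88_775            # seconds in one Martian sol

efficiency = P_ELEC_W / P_THERMAL_W         # thermal-to-electric conversion
kwh_per_sol = P_ELEC_W * SOL_S / 3.6e6      # energy available per sol, kWh
print(f"{efficiency:.1%} efficient, {kwh_per_sol:.1f} kWh per sol")
```

A few percent conversion efficiency and a couple of kilowatt-hours per sol is modest by household standards, which is why the waste heat is put to work keeping the rover's electronics warm.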
Things start to differ when you start looking at the science the two rovers were designed to support. The most obvious difference is the main robotic arm of Mars 2020, which is stronger than the arm on Curiosity and sports different instruments, such as an X-ray fluorescence spectrometer dubbed PIXL and a pair of adorably named geological instruments: SHERLOC, an ultraviolet Raman spectrometer for fine-scale mineralogy and detection of organic molecules, and WATSON, a high-resolution camera to provide images of targets that SHERLOC might be interested in.
The Mars 2020 rover arm also supports a coring drill, designed to cut cylindrical core sections from rock rather than just pulverize them as Curiosity‘s drill does. A “bit carousel” allows the arm to select from a number of other tools, including grinding tools to abrade rocks.
Return to Sender
Should a particular sample prove promising based on the results of on-board experiments, a sample handling system in the belly of the rover will get to work. The bit carousel has slots to accept special sample containers, which are stored in a rack under the rover. A small robotic arm, looking somewhat like a SCARA arm from a semiconductor fab, places a sample tube in the bit carousel, which rotates it up so the arm can access it. The core sample is ejected into the sample tube, which is then returned to storage in the sample handling area before being hermetically sealed. The sample handling hardware is shown nicely in the video below:
The rover can collect and store up to 43 samples on-board. The mission plan calls for the team to designate a “caching depot” where the samples collected during the one Martian year-long primary mission will be dropped. Sample tubes, along with control tubes to assess unintended contamination, will be released from the sample handler into a pile on the regolith. Any samples collected during the subsequent extended mission will also be left at the cache, to await a future sample return mission.
The Mars 2020 rover’s landing site, Jezero Crater, was selected because it was once a 250 meter deep lake at about the time life was first appearing on Earth. On Earth, the sediments that are deposited into lake beds are rich in life, and it’s hoped that Martian sediments have preserved any signs of life that developed 3.5 billion years ago. Also, the ancient lake bed features a delta structure from a river that once fed into it, again holding potential for finding “biosignatures” from any life that got a toehold on Mars.
One of the most interesting pieces of hardware making the trip aboard the rover is the Mars Helicopter Scout. Primarily included to test the technology and explore the challenges of extraterrestrial aviation, the small drone will make several short flights sometime in the early part of the rover’s primary mission. Stored in the rover’s belly, the coaxial-rotor drone carries an array of technology that will seem familiar to most hackers: a Snapdragon SoC running Linux, MCUs for flight control, and a ZigBee link back to the rover. It even has a lithium-ion battery pack and camera for navigation and observation.
Each of the MHS flights will last only about 3 minutes and get no more than 10 meters above the surface. Navigation will use a solar tracker and inertial guidance. NASA hopes that the high-resolution camera will provide detailed images of the sample cache to inform the design of sample return mission hardware.
Between the first extraterrestrial aircraft, the slate of science experiments planned – including making oxygen from the thin Martian atmosphere – and the potential to actually return pieces of the Martian regolith, Mars 2020 has the potential to be a breakthrough mission. And with the rover safely bundled up and being prepared for integration with the launch vehicle, everything seems on-track for the mission’s July launch, and the rover’s date with destiny.
[Featured images: NASA/JPL]
Thermonuclear research was initiated by theoretical physicists in the early 1920s, based on pure speculation. Georgii Gamow, Robert Atkinson, and Fritz Houtermans proposed that the energy production in stars was derived from the collision of atomic nuclei.
Gamow was the first to suggest that in the interior of stars atomic nuclei can occasionally collide and fuse, releasing a huge amount of energy; this energy is what powers the stars. In 1929 Houtermans and Atkinson published a paper in line with Gamow's concept. Gamow, meanwhile, pursued his research on thermonuclear fusion at George Washington University, where he was joined by Edward Teller. They were soon joined by another mathematical physicist, Hans Bethe.
They proposed two types of thermonuclear reactions to address the question of stellar evolution: (H–H) and (D–D). In 1937 Rutherford proposed another reaction, (D–T), between deuterium and the tritium produced in the D–D reaction. Two years later Hans Bethe, then a professor at Cornell University, published his famous paper, "Energy Production in Stars" (H. A. Bethe, Phys. Rev. 55, 434, published March 1, 1939), in which he sought to identify the most likely thermonuclear reactions that generate energy in the Sun and other stars.
Based on his theoretical model, Bethe proposed that a process called proton-proton fusion takes place within the cores of stars, via two possible cycles: the P-P cycle and the CNO cycle. According to Bethe, in stars heavier than the Sun the CNO (carbon-nitrogen-oxygen) cycle of nuclear fusion is the dominant source of energy generation. See the figures below.
The two figures (P-P cycle on the left and CNO cycle on the right) are adapted from J. N. Bahcall, "Neutrinos from the Sun," Scientific American, Volume 221, Number 1, July 1969, pp. 28-37.
It is important to note that Eddington and other theoretical physicists thought that the pressure and temperature at the Sun's core could be determined from its mass, coupled with the standard gas laws, and they assumed that the mass of the Sun could be calculated from the orbital motions of the planets. The following are quotations from Eddington's paper 'The Internal Constitution of the Stars.'
"It is not enough to provide for the external radiation of the star, we must provide for the maintenance of the high internal temperature, without which the star would collapse."
"The problem of the source of a star’s energy will be considered, by a process of exhaustion we are driven to conclude that the only possible source of a star’s energy is subatomic yet it must be confessed that the only hypothesis shows little disposition to accommodate itself to the detailed requirements of observation, and a critic might count up a large number of fatal objections."
"In seeking a source of energy other than contraction the first question is whether the energy to be radiated in future is now hidden in the star or whether it is being picked up continuously from outside. Suggestions have been made that the impact of meteoric matter provides the heat, or that there is some subtle radiation traversing space which the star picks up. Strong objection may be urged against these hypotheses individually, but it is unnecessary to consider them in detail because they have arisen through a misunderstanding of the nature of the problem. No source of energy is of any avail unless it liberates energy in the deep interior of the star."
In 1920, at the meeting of the British Association for the Advancement of Science, Eddington, said: "A star is drawing on some vast reservoir of energy by means unknown to us. This reservoir can hardly be other than the subatomic energy which, it is known, exists independently in all matter. … The store is well nigh inexhaustible, if only it could be tapped … F. W. Aston’s experiments seem to leave no doubt that all the elements are constituted out of hydrogen atoms (protons) bound together with negative electrons. But, the mass of the helium atom is less than the sum of the masses of the four hydrogen atoms which enter into it. There is a loss of mass in the synthesis amounting to about one part in 120. … Now mass cannot be annihilated, and the deficit can only represent the electrical energy set free in the transmutation. We can therefore at once calculate the quantity of energy liberated when helium is made out of hydrogen. If five percent of a star’s mass consists initially of hydrogen atoms, which are gradually being combined to form more complex elements. We need to look no further for the source of a star’s energy. … If indeed the subatomic energy in the stars is being freely used to maintain their great furnaces, it seems to bring a little nearer to the fulfillment of our dream of controlling this latent power for the well-being of the human race or for its suicide."
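Eddington's back-of-the-envelope argument can be checked directly. The sketch below (Python, using approximate atomic masses of the kind available around 1920 — these particular values are assumptions for illustration, not figures taken from the quotation) computes the fractional mass lost when four hydrogen atoms combine into one helium atom, and the energy that loss releases via E = mc^2.

```python
# Atomic masses on the O = 16 scale, roughly as known circa 1920 (assumed values)
m_H = 1.008     # hydrogen atom
m_He = 4.002    # helium atom
c = 2.998e8     # speed of light, m/s

defect = 4 * m_H - m_He          # mass lost when 4 H -> He
fraction = defect / (4 * m_H)    # fractional loss, ~0.0074
energy_per_kg = fraction * c**2  # joules released per kg of hydrogen fused

print(f"mass lost: about 1 part in {1 / fraction:.0f}")
print(f"energy released: {energy_per_kg:.2e} J per kg of hydrogen")
```

With these masses the loss comes out near one part in 130-140; with the slightly different values Aston reported, Eddington quoted about one part in 120. Either way, the energy per kilogram is enormous, which is the point of his argument.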
Those theoretical physicists came to the conclusion that hydrogen is the right element for thermonuclear fusion. They did not realize that hydrogen has the highest potential energy (M/A) and the highest charge density (Z/A). Nevertheless, hydrogen, which is supposed to make up 74 percent of the Sun, is completely ionized, which means the electrons and protons are free to collide. Bethe and Eddington proposed that such collisions induce a chain of nuclear reactions. In the first stage, two protons fuse to form a deuteron, a positron and a neutrino. But for this reaction to take place, the protons must come extremely close to each other, approximately 0.1 trillionth of a centimeter, and at the same time one of them has to decay into a neutron and a positron. The second stage of the (p-p) chain would involve the formation of a nucleus of an isotope of helium, 3He, which consists of two protons and one neutron, plus a gamma-ray photon; this stage is supposed to result from the fusion of the deuteron with another proton. In the last stage, one 3He nucleus has to fuse with another 3He nucleus to form a helium nucleus, 4He, and two protons. One must keep in mind that before the last stage can take place, the first and second stages must each occur twice. Although in the early years (the first half of the 20th century) theoretical physicists argued that the p-p chain reaction can occur since there is a vast supply of protons available, later calculations based on the theoretical model showed that the reaction cannot possibly take place within the core of the Sun. It is extremely improbable: the probability is one reaction per particle in 14,000,000,000 years. In other words, this reaction cannot happen even with the hypothetical extreme conditions that are supposed to exist at the center of the Sun.
Theoretical physicists combined two hypotheses that do not exist in physical reality. In spite of that, this pseudo-astrophysics theory was enhanced further by the mathematical models of Hans Bethe and the theoretical physicist Subrahmanyan Chandrasekhar. The most basic problem facing the advocates of thermonuclear fusion was the Coulomb barrier. According to the standard gas laws, the temperature and pressure assumed at the core of the Sun are not sufficient for a thermonuclear reaction to take place. But theoreticians came to the rescue: they proposed so-called Quantum Tunneling (QT). QT is a real subatomic physics phenomenon, but it has nothing to do with quantum theory. The basic question one should ask concerns the reason for the tunneling effect: why does it take place even when the potential is higher than the kinetic energy? QT is a magic notion, and it is not the only one. Quantum mystics came up with plenty of them, like virtual particles, entanglement, renormalization, borrowing energy from the vacuum and so many others. These mystic notions are attempts to explain - and only superficially - observations on subatomic and cosmological scales that Newtonian physics cannot explain.
Nevertheless, so-called QT is a subatomic magnetic phenomenon similar in some ways to superconductivity. Superconductivity is characterized by the Meissner effect, the complete expulsion of magnetic flux fields from the interior of a superconductor as it transitions into the superconducting state. QT is likewise an expulsion of magnetic flux fields from conducting materials. When two conducting materials are very close and separated by a small insulating barrier of just a few nanometers, at very low or very high temperatures, the conducting materials on either side of the barrier form a layer on their surfaces - due to the expulsion of magnetic flux fields - and overcome the barrier width, rather than penetrating through it as currently believed. Nature does not perform magic.
However, it is extremely important to keep in mind that if QT were taking place within the core of the Sun, then neutrino production would depend sensitively on the Sun's central temperature, since the number of charged particles (p-p) that must collide to generate thermonuclear fusion is small compared with the energy of the potential barrier. Only a tiny fraction of the nuclear collisions in the Sun can overcome the potential barrier and cause fusion, and this fraction is extremely sensitive to the temperature: just a 1% error in the temperature corresponds to about a 30% error in the predicted number of neutrinos, and a 3% error in the temperature results in an error of a factor of two in the predicted number of neutrinos. Recent observations with sensitive imaging devices on board an advanced space telescope, the Solar Dynamics Observatory (SDO), have shown the motions of the plasma within the core of the Sun to be two orders of magnitude slower than theoretical models predict. This is a clear verdict on the real temperature range within the core of the Sun, and it means this subatomic magnetic phenomenon (QT) cannot possibly take place there.
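The quoted error figures follow from the steep temperature dependence of the neutrino-producing reactions. Assuming the commonly cited power law in which the 8B solar neutrino flux scales roughly as T^25 (the exponent is an assumption for illustration, not a figure from the text), the propagation of a temperature error works out as follows:

```python
n = 25  # assumed exponent: 8B neutrino flux ~ T**n near the solar core temperature

flux_error_1pct = 1.01**n  # effect of a 1% temperature error
flux_error_3pct = 1.03**n  # effect of a 3% temperature error

print(f"1% in T -> {100 * (flux_error_1pct - 1):.0f}% in predicted neutrinos")
print(f"3% in T -> factor of {flux_error_3pct:.1f} in predicted neutrinos")
```

A 1% temperature error inflates the predicted flux by roughly 30%, and a 3% error by about a factor of two, matching the sensitivity described above.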
The temperature and pressure within the interior are totally different from what theoreticians believe. Yet in spite of this observed fact, physicists still do not question the validity of the thermonuclear reaction, because this hypothetical reaction is considered by mainstream physicists to be one of the two most fundamental physics facts that dominate astrophysics and astronomy. The other is so-called gravitational collapse. In fact, hypothetical thermonuclear fusion is supposed to be a consequence of gravitational collapse: the fusion occurs as a countermeasure to gravity's attempts to compress the mass of the Sun. Gravitational compression or gravitational collapse cannot possibly take place in the interior of stars. This notion is a consequence of the misunderstanding of gravity.
All types of thermonuclear reactions are quasi-nuclear fusion reactions. These reactions can never be sustained for sufficient time to allow the fusion process to be completed. Without sustainability, the final phase of the fusion reaction cannot be reached and consequently the output energy, or rather the energy gain, cannot be obtained. In other words, thermonuclear reactions are laboratory-induced quasi-nuclear reactions that should not be characterized as natural nuclear fusion reactions taking place in stars or anywhere else in the Universe.
Decades of experimental research using state-of-the-art fusion devices have shown the quasi nature of thermonuclear reactions: plasma generated from thermonuclear reactions will remain forever unsustainable and inefficient.
But the failure of controlled thermonuclear reactions is not the only evidence of their quasi-nuclear-fusion nature. The so-called H-bomb is another piece of empirical evidence that reveals the unsustainability of these kinds of reactions. Contrary to what is currently believed, the H-bomb is not a demonstration of thermonuclear fusion; the weapon is based mostly, if not entirely, on fission reactions.
On the other hand, decades of observations from space using the latest advanced telescopes and other instruments have shown the notion of thermonuclear-powered stars to be worse than the worst science theory of the Middle Ages. If we restrict the discussion to the Standard Solar Model (SSM), we find that over the last four decades of the 20th century the model needed continuous mathematical interventions to save it, and in the last decade and a half those interventions have become increasingly difficult to implement.
This is due to the new generations of space telescopes and other space instruments that carry more sensitive devices for imaging and detection. The data collected from these advanced instruments have exposed endless mathematical fantasies that bear no relation to the physical reality of our star. In fact, all solar observations since the 1960s using space-based devices have shown the SSM to be an obsolete mathematical model. The claim that the model explains the basic features of the Sun is not even wrong: none of the basic features of the Sun - without exception - can be explained by the SSM. The so-called solar neutrino problem - which is in fact not a problem or a puzzle to start with - that was supposedly solved at the beginning of the last decade by the invented quantum-mechanical concept of neutrino oscillations, is just one of them. Moreover, Kirchhoff's law of blackbody radiation has provided another indirect, superficial piece of evidence for the current solar model and the stellar evolution model in general. Kirchhoff's law of blackbody radiation is definitely wrong, as has been pointed out by Dr. Pierre-Marie Robitaille.
Nothing could be further from the truth: the current mainstream model, which treats the Sun as a ball of neutral gas, is totally wrong. The crisis of solar pulsations and the recent solar abundance crisis have exposed the fundamental flaws within the SSM beyond any shadow of a doubt. They are very serious crises and should be considered the last nails in the coffin of the current solar dogma.
However, the misunderstanding of gravity and its real role in the distribution of matter in the Universe is the most fundamental reason preventing a true understanding of our star, the planets of the solar system and the cosmos at large. The misunderstanding of gravity is also the fundamental reason that led to the invention of Quantum Mechanics (QM) and the Relativity Theories (RT). QM and RT contaminated astrophysics and subatomic physics with very complicated pseudo-physics notions. Theoretical physics has gradually been transformed into a new field that is now a billion times closer to mysticism and metaphysics than to a field that deals with the principle of cause and effect (causality).
The force that rules, powers and absolutely unifies the Universe cannot be realized without comprehending the real building blocks of matter. This task is the most fundamental requirement in physics. Without it, gravity cannot be understood or revised, and physical facts about our solar system cannot be revealed, not to mention those of the Universe. On the other hand, if the real building blocks of matter and the force permanently present in them can be comprehended, then all physical phenomena observed on any scale can be explained logically and without the need to invent new pseudo-physics theories for superficial explanation. The trend of inventing arbitrary notions - imaginary particles, unobserved substances or energies - in order to force observations into the myth of the big bang and a gravity-dominated Universe is a basic feature of today's theoretical physics and astrophysics. One wonders why we should build space telescopes and other astronomy tools if the data obtained by them have to be forced into outdated theories that are mostly based on speculation and mathematical fallacies.
Jamal Shrair, Founder of the Helical Universe
Jamal S. Shrair has a B.Sc. in Electrical Engineering from Canada's Queen's University and an M.Sc. in experimental and particle physics from the Eötvös Loránd University of Science in Budapest, Hungary. His M.Sc. thesis project was an investigation of cosmic muons using a Cherenkov detector. In his post-graduate studies, he joined the faculty of electrical engineering and informatics at the Budapest University of Technology and Economics, where he studied surface physics and electron devices. The title of his thesis project is "The application of Nanoporous Silicon Layers for Efficient Gas Sensors".
Meteor showers are some of the most exciting spectacles to watch in all of astronomy. However, the best views require dark, clear eastern skies and a willingness to be awake when most people are sleeping. And this time of year, they require warm clothes! If skies are clear, the Leonid Meteor Shower should be visible this weekend during the morning hours of November 17-18 and 18-19.
Meteors are tiny bits of rock and dust that enter the earth's atmosphere and burn up. These bits of rock and dust float in long orbits in space, and the earth "runs into" these clouds of rock and dust. Because the earth is moving so fast, the bits of rock and dust it strikes heat up from friction with the earth's atmosphere. The result is brilliant streaks of light often called "shooting stars," but they aren't stars at all - just very small visitors that shine briefly and flicker out.
Most major meteor showers repeat on an annual basis. Why is that? The bits of dust and rock that cause meteor showers are typically the remnants of a comet or other object in the Solar System which moved across the sky tens, hundreds or even thousands of years ago. The debris trail marks the path that the comet took some time ago. If that path intersects the earth's orbit then we experience a meteor shower each time the earth passes that specific point in space. Many people are familiar with a summer meteor shower that takes place around August 11-12. This shower is called the Perseid Meteor Shower and is one of the best of the year.
The meteor shower coming up this weekend is called the Leonids and it is the result of a comet known as Tempel-Tuttle (comets are usually named after their discoverers). The shower is called the Leonids because the meteors appear to originate in the part of the sky where we find the constellation Leo the Lion. This constellation does not rise in the east until very late in the night, so we don't get a good view of the Leonid meteors until after midnight. You won't see all of them at once but rather one every few minutes if conditions are good. Dawn does not break until 6:00 am each day this weekend so I will be looking out at 5:30 or so instead of staying up until 2:00 or 3:00 in the morning. How about you?
The Orionid meteor shower peaks in the early morning of Tuesday, Oct. 22, but a bright moon will disrupt viewing until shortly before dawn. The meteors that streak across the sky are among the fastest of any meteor shower, because the Earth hits their stream of particles almost head-on.
“The saving grace for the Orionids, if you go out the last hour or two before dawn, the moon might have set in time for you to catch a few,” NASA meteor expert Bill Cooke told Space.com. “The rate’s going to be about 30, 40 per hour, but the moonlight will wash out most of those meteors.”
The particles come from Comet 1P/Halley, better known as Halley’s Comet. This famous comet swings by Earth every 75 to 76 years, and as the icy comet makes its way around the sun, it leaves behind a trail of comet crumbs. At certain times of the year, Earth’s orbit around the sun crosses paths with the debris.
“You can see pieces of Halley’s Comet during the Eta Aquarids [in May] and the Orionid meteor shower [in October and November],” Cooke told Space.com.
The Orionids are named after the direction from which they appear to radiate, which is near the constellation Orion (The Hunter). In October, Orion is best visible around 2 a.m. Cooke told Space.com that the best viewing will be around that time on Oct. 21 and Oct. 22. If you miss the peak, the show is also visible between Oct. 15 and 29, as long as the moon isn’t washing the meteors out.
Sometimes the shower peaks at 80 meteors an hour; at others it is closer to 20 or 30. Cooke predicted that in 2018, the peak would be at the smaller end of the scale, echoing the peaks of 2017 and years before.
How to view the show
Orionid meteors are visible from anywhere on Earth and can be seen anywhere across the sky. If you find the shape of Orion the Hunter, the meteor shower’s radiant (or point of origin) will be near Orion’s sword, slightly north of his left shoulder (the star Betelgeuse). But don’t stare straight at this spot, Cooke said, “because meteors close to the radiant have short trails and are harder to see — so you want to look away from Orion.”
As is the case with most nighttime skywatching events, light pollution can hinder your view of the Orionid meteor shower (although this year, the moon will do damage as well). If possible, get far away from city lights, which can hinder the show. Go out around 1:30 a.m. and let your eyes adjust to the dark for about 20 minutes. Bundle up against the cold if necessary. Lie back and use only your eyes to watch the sky. Binoculars and telescopes won’t improve the view, because they are designed to see more stationary objects in the sky.
Some Orionids will appear very fast and bright, since they can whiz by at up to 148,000 mph (238,000 km/h) in relative speed. That's just six kilometers per second slower than the Leonids, the speediest shower of the year, Cooke said.
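As a sanity check on the quoted speeds, the snippet below converts the Orionids' 148,000 mph to kilometers per second and compares it with an assumed Leonid entry speed of about 72 km/s (a typical published value, not a figure from this article):

```python
MPH_TO_MPS = 1609.344 / 3600  # miles per hour -> meters per second

orionids_kms = 148_000 * MPH_TO_MPS / 1000  # ~66 km/s
leonids_kms = 72                            # assumed Leonid speed, km/s

print(f"Orionids: {orionids_kms:.1f} km/s")
print(f"Leonids are ~{leonids_kms - orionids_kms:.0f} km/s faster")
```

The difference comes out near six kilometers per second, which is why the gap only makes sense in km/s rather than km/h.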
It’s tempting to think that the brighter meteors represent fragments that would reach the ground, but Cooke said that isn’t the case with the Orionids. These tiny comet fragments — some as small as a grain of sand — are called meteoroids. When they enter Earth’s atmosphere, they become meteors. Friction from air resistance causes meteors to heat up, creating a bright, fiery trail commonly referred to as a shooting star. Most meteors disintegrate before making it to the ground. The few that do strike the Earth’s surface are called meteorites. [How Often Do Meteorites Hit the Earth?]
Astronomers have recorded Halley's Comet as far back as 240 B.C., but no one realized that the same comet was making multiple appearances. In 1705, then-University of Oxford professor and astronomer Edmond Halley published "Synopsis Astronomia Cometicae" ("A Synopsis of the Astronomy of Comets"), which presented the first evidence that the comet recurs. By studying the historical records of a comet that appeared in 1456, 1531, 1607 and 1682, Halley calculated that it was in fact the same comet and predicted it would reappear in 1758. While Halley died before the comet's return, it did appear on schedule and was named after him.
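Halley's reasoning can be reproduced in a few lines. Given the recorded apparitions, the intervals are nearly constant, and naively extrapolating the mean interval lands close to his 1758 prediction (planetary perturbations, which Halley also accounted for, shift the exact date):

```python
apparitions = [1456, 1531, 1607, 1682]

# Gaps between successive recorded appearances
intervals = [b - a for a, b in zip(apparitions, apparitions[1:])]
mean_period = sum(intervals) / len(intervals)
naive_return = apparitions[-1] + mean_period

print(intervals)                              # [75, 76, 75]
print(f"mean period: {mean_period:.1f} years")
print(f"naive next return: ~{naive_return:.0f}")
```

The simple average predicts a return around 1757, within a year of Halley's figure.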
Reports of the Orionids, however, did not first appear until 1839 when an American in Connecticut spotted the shower, Cooke said. More observations of the shower were recorded during the Civil War between 1861 and 1865. Cooke told Space.com he wasn’t sure why the meteor shower was discovered so late, given that records of Halley’s Comet exist for millennia.
The next perihelion (closest approach of Halley's Comet to the sun) is expected around July 2061.
People living along the coast are no strangers to significant weather — they often deal with hurricanes, rising sea levels and tropical storms. But there's another natural phenomenon that is slowly taking its toll on many seaside towns — the king tide.
In fact, on Labor Day Weekend 2019 in Florida, the usual masses of tourists were nowhere to be seen. Instead, the beaches along the East Coast were empty and locals across the state were bracing for Hurricane Dorian. What made the already unprecedented Category 5 Atlantic hurricane worse was that it arrived during Florida's king tides.
What Are King Tides?
King tides are the highest astronomical tides of the year, the result of a perfect alignment between the Earth, sun and moon. Think of them as high tides on steroids. Every two weeks, at new and full moon, the moon, the sun and the Earth fall into alignment. Their combined gravitational pull tugs at Earth's oceans, resulting in tides that are about 20 percent higher than normal. Three or four times a year the moon, which has an elliptical orbit, snuggles up to Earth on its closest approach to the planet. When both of these astronomical events converge, a king tide rolls in.
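The size of these effects can be estimated from the fact that tidal acceleration scales as M/d^3. The sketch below (Python, with standard astronomical values plugged in as assumptions) compares the sun's tidal pull with the moon's, and shows how much the lunar tide strengthens when the moon is at perigee:

```python
# Tidal acceleration scales as mass / distance**3
M_MOON = 7.342e22        # kg
M_SUN = 1.989e30         # kg
D_MOON_MEAN = 3.844e8    # m, mean Earth-moon distance
D_MOON_PERIGEE = 3.57e8  # m, approximate perigee distance
D_SUN = 1.496e11         # m, mean Earth-sun distance

lunar_tide = M_MOON / D_MOON_MEAN**3
solar_tide = M_SUN / D_SUN**3
perigee_boost = (D_MOON_MEAN / D_MOON_PERIGEE)**3

print(f"solar tide / lunar tide: {solar_tide / lunar_tide:.2f}")  # ~0.46
print(f"lunar tide at perigee vs mean: {perigee_boost:.2f}x")     # ~1.25x
```

When the roughly half-strength solar tide adds to the lunar tide at new or full moon, and the moon is also near perigee, the combined forcing climbs well above an ordinary high tide — the king-tide geometry described above.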
"King tides can occur any time of the year, but tend to occur during the fall or spring near the full moon tides surrounding the equinox, when the moon is at perigee [closest to Earth in its orbit]," Derek Loftis, a research scientist at the College of William and Mary's Virginia Institute of Marine Science, says in an email. "The fall season's highest tides are slightly more pronounced due to the Earth being slightly closer to perihelion with the sun [on the solstice]."
King tides can worsen if they happen during a coastal storm, like what happened during Hurricane Dorian. The tide, along with a storm surge, pushed extra water inland. Meteorologists and other scientists are always concerned when storms line up with king tides. While scientists can calculate a king tide in advance, predicting storms before they blow through is much trickier.
Are King Tides Increasing?
For example, in October 2017 in Virginia and in other locations, Hurricane Jose, which never made landfall but churned hundreds of miles off the U.S. coast, contributed to king-tide flooding. Off Sewells Point near Norfolk, the water rose 0.73 feet (.22 meters) above what the National Oceanic and Atmospheric Administration considers minor flood stage there.
Such situations are becoming increasingly common and more troublesome. In Florida, the number of king tides soared 400 percent between 2006 and 2016. And during the summer of 2019, Miami set multiple high-tide records in late July and early August.
Why the increase? Although there are many reasons why king tides are driving water further inland, none is more worrisome than climate change. As Earth warms, ice melts and the ocean gets hotter. When that happens, the water expands and sea level rises.
"The king tides are garnering more attention now due to the derivative effects of sea level rise, and coastal land subsidence due to the draining of subsurface aquifers for drinking water in many coastal regions," Loftis says.
Brian McNoldy, a scientist at the University of Miami's Rosenstiel School of Marine and Atmospheric Science, says seasonal effects, such as the amount of precipitation, also play a role. In addition, when trees and plants begin to shut down for the winter and shed their leaves, they cannot absorb as much rain runoff as they do in the spring and summer. As a result, the increased runoff contributes to flooding during king tides.
Although wind doesn't have a direct impact on king tides, it does play a role. In South Florida, McNoldy says, a persistent onshore wind raises the water level up to a foot (0.3 meters) on top of the regular astronomical tides. The impact on communities can be devastating. Right after Hurricane Irma blew through in 2017, king tides flooded some neighborhoods of Anna Maria Island, Bradenton Beach and parts of South Florida for several days, creating unease in an already ravaged area.
Is This the "New Normal?"
Both McNoldy and Loftis say the king tide is the "new normal," which is one reason why some communities have been pouring money into shoring up infrastructure, housing and habitat restoration projects to minimize damage. Early in 2017, Miami Beach announced a new $100 million flood prevention project to keep neighborhoods from flooding. The city plans to raise roads, install pumps and water pipes, and make sure sewer connections hold up.
"King tides have always been around," Loftis says. "We have only just started calling them this in the past couple of years in the U.S. In Australia, where the name hails from, the king tide truly reigns supreme over the other tides, as they bring much higher water levels than the mean highest astronomical tides. Here in North America, the king tide is barely the king, and in many cases, is only a few inches higher than the high tides on the days before or after."
This story is part of Covering Climate Now, a global collaboration of more than 250 news outlets to strengthen coverage of the climate story.
ann17051 — Announcement
Hint of Relativity Effects in Stars Orbiting Supermassive Black Hole at Centre of Galaxy
9 August 2017
A new analysis of data from ESO’s Very Large Telescope and other telescopes suggests that the orbits of stars around the supermassive black hole at the centre of the Milky Way may show the subtle effects predicted by Einstein’s general theory of relativity. There are hints that the orbit of the star S2 is deviating slightly from the path calculated using classical physics. This tantalising result is a prelude to much more precise measurements and tests of relativity that will be made using the GRAVITY instrument as star S2 passes very close to the black hole in 2018.
At the centre of the Milky Way, 26 000 light-years from Earth, lies the closest supermassive black hole, which has a mass four million times that of the Sun. This monster is surrounded by a small group of stars orbiting at high speed in the black hole’s very strong gravitational field. It is a perfect environment in which to test gravitational physics, and particularly Einstein’s general theory of relativity.
A team of German and Czech astronomers have now applied new analysis techniques to existing observations of the stars orbiting the black hole, accumulated using ESO’s Very Large Telescope (VLT) in Chile and others over the last twenty years. They compare the measured star orbits to predictions made using classical Newtonian gravity as well as predictions from general relativity.
The team found suggestions of a small change in the motion of one of the stars, known as S2, that is consistent with the predictions of general relativity. The change due to relativistic effects amounts to only a few percent in the shape of the orbit, as well as only about one sixth of a degree in the orientation of the orbit. If confirmed, this would be the first time that a measurement of the strength of the general relativistic effects has been achieved for stars orbiting a supermassive black hole.
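The size of the claimed effect can be estimated with the standard Schwarzschild precession formula, which gives a pericentre advance of 6*pi*G*M / (c^2 * a * (1 - e^2)) per orbit. The sketch below uses approximate published orbital elements for S2 as assumptions (a of roughly 970 AU, e of roughly 0.88, a black hole mass of four million solar masses):

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # kg
AU = 1.496e11     # m

def schwarzschild_precession(mass_kg, a_m, e):
    """Per-orbit pericentre advance in radians: 6*pi*G*M / (c^2 * a * (1 - e^2))."""
    return 6 * math.pi * G * mass_kg / (c**2 * a_m * (1 - e**2))

# Assumed S2 orbital elements (approximate published values)
dphi = schwarzschild_precession(4.0e6 * M_SUN, 970 * AU, 0.88)
print(f"S2 pericentre advance: {math.degrees(dphi):.2f} deg per orbit")
```

This comes out near 0.2 degrees per 15.6-year orbit, the same order as the "one sixth of a degree" quoted above.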
Marzieh Parsa, PhD student at the University of Cologne, Germany and lead author of the paper, is delighted: "The Galactic Centre really is the best laboratory to study the motion of stars in a relativistic environment. I was amazed how well we could apply the methods we developed with simulated stars to the high-precision data for the innermost high-velocity stars close to the supermassive black hole."
The high accuracy of the positional measurements, made possible by the VLT’s near-infrared adaptive optics instruments, was essential for the study. These were vital not only during the star’s close approach to the black hole, but particularly during the time when S2 was further away from the black hole. The latter data allowed an accurate determination of the shape of the orbit.
"During the course of our analysis we realised that to determine relativistic effects for S2 one definitely needs to know the full orbit to very high precision," comments Andreas Eckart, team leader at the University of Cologne.
As well as more precise information about the orbit of the star S2, the new analysis also gives the mass of the black hole and its distance from Earth to a higher degree of accuracy.
Co-author Vladimir Karas from the Academy of Sciences in Prague, the Czech Republic, is excited about the future: "This opens up an avenue for more theory and experiments in this sector of science."
This analysis is a prelude to an exciting period for observations of the Galactic Centre by astronomers around the world. During 2018 the star S2 will make a very close approach to the supermassive black hole. This time the GRAVITY instrument, developed by a large international consortium led by the Max-Planck-Institut für extraterrestrische Physik in Garching, Germany, and installed on the VLT Interferometer, will be available to help measure the orbit much more precisely than is currently possible. Not only is GRAVITY, which is already making high-precision measurements of the Galactic Centre, expected to reveal the general relativistic effects very clearly, but also it will allow astronomers to look for deviations from general relativity that might reveal new physics.
Data from the near-infrared NACO camera now at VLT Unit Telescope 1 (Antu) and the near-infrared imaging spectrometer SINFONI at the Unit Telescope 4 (Yepun) were used for this study. Some additional published data obtained at the Keck Observatory were also used.
S2 is a 15-solar-mass star on an elliptical orbit around the supermassive black hole. It has a period of about 15.6 years and gets as close as 17 light-hours to the black hole — or just 120 times the distance between the Sun and the Earth.
A similar, but much smaller, effect is seen in the changing orbit of the planet Mercury in the Solar System. That measurement was one of the best early pieces of evidence in the late nineteenth century suggesting that Newton’s view of gravity was not the whole story and that a new approach and new insights were needed to understand gravity in the strong-field case. This ultimately led to Einstein publishing his general theory of relativity, based on curved spacetime, in 1915.
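The Mercury case can be checked with the same Schwarzschild formula. Using standard orbital elements as assumptions (semi-major axis 0.3871 AU, eccentricity 0.2056, period 87.97 days), it reproduces the famous anomalous precession of about 43 arcseconds per century:

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # kg
AU = 1.496e11     # m

a = 0.3871 * AU       # Mercury's semi-major axis
e = 0.2056            # eccentricity
period_days = 87.969  # orbital period

# Per-orbit perihelion advance, then accumulated over a Julian century
dphi_orbit = 6 * math.pi * G * M_SUN / (c**2 * a * (1 - e**2))
orbits_per_century = 36525 / period_days
arcsec = math.degrees(dphi_orbit * orbits_per_century) * 3600

print(f"GR precession of Mercury: {arcsec:.1f} arcsec per century")
```

The result is about 43 arcseconds per century, exactly the residual that Newtonian gravity could not account for and that general relativity explained.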
When the orbits of stars or planets are calculated using general relativity, rather than Newtonian gravity, they evolve differently. Predictions of the small changes to the shape and orientation of orbits with time are different in the two theories and can be compared to measurements to test the validity of general relativity.
An adaptive optics system compensates for the image distortions produced by the turbulent atmosphere in real time and allows the telescope to be used at much higher angular resolution (image sharpness), in principle limited only by the mirror diameter and the wavelength of light used for the observations.
The University of Cologne is part of the GRAVITY team (http://www.mpe.mpg.de/ir/gravity) and contributed the beam combiner spectrometers to the system.
This research was presented in a paper entitled “Investigating the Relativistic Motion of the Stars Near the Black Hole in the Galactic Center”, by M. Parsa et al., to be published in the Astrophysical Journal.
The team is composed of Marzieh Parsa, Andreas Eckart (I.Physikalisches Institut of the University of Cologne, Germany; Max Planck Institute for Radio Astronomy, Bonn, Germany), Banafsheh Shahzamanian (I.Physikalisches Institut of the University of Cologne, Germany), Christian Straubmeier (I.Physikalisches Institut of the University of Cologne, Germany), Vladimir Karas (Astronomical Institute, Academy of Science, Prague, Czech Republic), Michal Zajacek (Max Planck Institute for Radio Astronomy, Bonn, Germany; I.Physikalisches Institut of the University of Cologne, Germany) and J. Anton Zensus (Max Planck Institute for Radio Astronomy, Bonn, Germany).
ESO is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive ground-based astronomical observatory by far. It is supported by 16 countries: Austria, Belgium, Brazil, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Poland, Portugal, Spain, Sweden, Switzerland and the United Kingdom, along with the host state of Chile. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope and its world-leading Very Large Telescope Interferometer as well as two survey telescopes, VISTA working in the infrared and the visible-light VLT Survey Telescope. ESO is also a major partner in two facilities on Chajnantor, APEX and ALMA, the largest astronomical project in existence. And on Cerro Armazones, close to Paranal, ESO is building the 39-metre Extremely Large Telescope, the ELT, which will become “the world’s biggest eye on the sky”.
- Research paper in the Astrophysical Journal
- Final online version of paper
- Earlier VLT observations of the Galactic Centre (eso0846, eso1151, eso1332 and eso1512)
- MPE web page on the Galactic Centre
- Photos of the VLT
I. Physikalisches Institut, Universität zu Köln
I. Physikalisches Institut, Universität zu Köln
Astronomical Institute, Academy of Science
Prague, Czech Republic
Tel: +420-226 258 420
ESO Public Information Officer
Garching bei München, Germany
Tel: +49 89 3200 6655
Cell: +49 151 1537 3591
New study suggests that our star will become "one of the most beautiful objects in the night sky."
"Planetary nebulae are among the most beautiful objects in the night sky," said Albert Zijlstra, professor of astrophysics at the University of Manchester, England, and a member of the team, in an email to NBC News MACH. "It's good to know that the sun will one day also make one, even if we're not around to enjoy it!" Zijlstra said the nebula will form from an "envelope" of dust and gas ejected by the dying sun, which by then will have swollen into a red giant extending to the orbit of Venus and perhaps beyond. After the ejection, what's left of the sun will heat up as it shrinks into a white dwarf the size of the Earth, but much denser.
The nebula will be visible for 10,000 to 20,000 years, a blink of an eye on the cosmic time scale. Its gas and dust will disperse slowly, eventually providing the raw material for a new generation of stars and planets. The new discovery, published on May 7 in the journal Nature Astronomy, appears to settle a long-running debate about the distant future of the sun.
It has long been known that most stars end up producing a planetary nebula, but astronomers thought that the sun, a ball of superheated gas with a diameter 109 times that of the Earth, was too small to form a visible one. "The data says you can get bright nebulae from low-mass stars like the sun," Zijlstra said in a statement. "The models said that this was not possible: anything less than about twice the mass of the sun would give a planetary nebula too faint to see."
For their new research, the astronomers created a series of computer models that show how quickly dying stars heat up after ejecting their envelopes. The models indicate that the stars heat up three times faster than previous models suggested, showing that stars the size of our sun still produce enough heat to illuminate a nebula. "Our understanding of the fate of the sun has ping-ponged back and forth," said Karen Kwitter, professor of astronomy at Williams College in Williamstown, Massachusetts, in an email, adding that "now, these new models say yes, the sun will produce a planetary nebula."
She called the new discovery a "victory" for the little guy. The nebula that our sun produces will not be as bright as those produced by larger stars. But as Zijlstra told the Guardian, "If you lived in the Andromeda Galaxy at 2 million light-years away, you would still be able to see it."
Around sunrise on Feb. 15, 2013, an extremely bright and otherworldly object was seen streaking through the skies over Russia before it exploded about 97,000 feet above the Earth's surface. The resulting blast damaged thousands of buildings and injured almost 1,500 people in Chelyabinsk and the surrounding areas. While this sounds like the first scene of a science fiction movie, this invader wasn't an alien spaceship attacking humanity, but a 20-meter-wide asteroid that had collided with the Earth.
What is worrisome is that no one had any idea this 20-meter asteroid existed until it entered the Earth's atmosphere that morning.
As an astronomer, I study objects in the sky that change in brightness over short time scales – observations that I use to detect planets around other stars. A large part of my research is understanding how we can better design and run telescopes to monitor an ever-changing sky. That's important because the same telescopes I'm using to explore other star systems are also being designed to help my colleagues discover objects in our own solar system, like asteroids on a collision course with Earth.
A meteor is any chunk of matter that enters the Earth's atmosphere. Before the Chelyabinsk meteor met its demise on Earth, it was orbiting our sun as an asteroid. These rocky objects are normally thought to be restricted to the asteroid belt between Mars and Jupiter. However, there are many asteroids throughout the solar system. Some, like the Chelyabinsk meteor, are known as near-Earth objects (NEOs).
The Chelyabinsk meteor likely came from a group of NEOs called Apollo asteroids, named after the asteroid 1862 Apollo. There are more than 1,600 known Apollo asteroids logged in the JPL Small-Body Database that have orbits that may cross the Earth's path and are large enough (over 140 meters) that they're considered potentially hazardous asteroids (PHAs), because a collision with Earth would devastate the region hit.
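In practice, the 140-meter size threshold is usually applied through an asteroid's absolute magnitude H, using the standard conversion D(km) = 1329/√p · 10^(−H/5) for an assumed geometric albedo p, and the PHA definition also requires a minimum orbit intersection distance (MOID) of no more than 0.05 AU. Here is a minimal sketch of that classification; the albedo and the sample numbers below are illustrative assumptions, not data from this article:

```python
import math

def diameter_km(h_mag, albedo=0.14):
    """Estimate an asteroid's diameter from absolute magnitude H and albedo."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-h_mag / 5.0)

def is_pha(h_mag, moid_au, albedo=0.14):
    """Potentially hazardous: roughly >140 m across and MOID <= 0.05 AU."""
    return diameter_km(h_mag, albedo) >= 0.140 and moid_au <= 0.05

# H = 22 with a typical albedo corresponds almost exactly to 140 m,
# which is why the PHA criterion is often stated as H <= 22.
print(round(diameter_km(22.0), 3))  # ~0.141 km
print(is_pha(22.0, 0.03))           # large and close-approaching -> True
print(is_pha(26.0, 0.03))           # ~20 m, Chelyabinsk-class: too small
```

Note that a Chelyabinsk-sized body fails the size test by a wide margin, which is part of why objects like it can go undetected.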
The scars of these past collisions are prominent on the moon, but the Earth also bears the marks of such impacts. Chicxulub crater on Mexico's Yucatan Peninsula was created by the Chicxulub asteroid that drove the dinosaurs to extinction. The Barringer Crater in Arizona is just 50,000 years old. The question is not if a dangerously large asteroid will collide with the Earth, but when?
Searching for threats
The U.S. government is taking the threat of an asteroid collision seriously. In Section 321 of the NASA Authorization Act of 2005, Congress required NASA to develop a program to search for NEOs. NASA was assigned the task of identifying 90 percent of all NEOs greater than 140 meters in diameter. Currently, they estimate that three-quarters of the 25,000 PHAs have yet to be found.
To reach this goal, an international team of hundreds of scientists, including myself, is completing construction of the Large Synoptic Survey Telescope (LSST) in Chile, which will be an essential tool for alerting us of PHAs.
With significant funding from the U.S. National Science Foundation, LSST will search for PHAs during its 10-year mission by observing the same area of sky at hourly intervals searching for objects that have changed position. Anything that moves in just one hour has to be so close that it is within our solar system. Teams led by researchers at the University of Washington and JPL have both produced simulations showing that LSST on its own will be capable of finding around 65 percent of PHAs. If we combine LSST data with other astronomical surveys like Pan-STARRS and the Catalina Sky Survey, we think we can help reach that goal of discovering 90 percent of potentially hazardous asteroids.
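The core of that moving-object search can be illustrated with a toy example: cross-match two source catalogs of the same patch of sky taken an hour apart and flag any source whose position shifted by more than a small threshold. This is a simplified sketch with invented catalogs and a roughly one-arcsecond threshold, not the actual LSST pipeline:

```python
# Toy moving-object finder: each catalog maps source id -> (ra, dec) in degrees.
def find_movers(catalog_t0, catalog_t1, threshold_deg=1.0 / 3600):
    """Return ids of sources that moved more than ~1 arcsec between visits."""
    movers = []
    for src_id, (ra0, dec0) in catalog_t0.items():
        if src_id in catalog_t1:
            ra1, dec1 = catalog_t1[src_id]
            # small-angle, flat-sky separation is adequate for a toy example
            sep = ((ra1 - ra0) ** 2 + (dec1 - dec0) ** 2) ** 0.5
            if sep > threshold_deg:
                movers.append(src_id)
    return movers

stars_t0 = {"a": (150.0000, 2.0000), "b": (150.0100, 2.0050), "c": (150.0200, 2.0100)}
stars_t1 = {"a": (150.0000, 2.0000), "b": (150.0100, 2.0050), "c": (150.0235, 2.0100)}
print(find_movers(stars_t0, stars_t1))  # only 'c' moved between the two visits
```

Anything that shifts noticeably in an hour must be nearby, inside our solar system; distant stars and galaxies stay put at this precision.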
Preparing to avert disaster
Both the Earth and these asteroids are orbiting the sun, just on different paths. The more observations taken of a given asteroid, the more precisely its orbit can be mapped and predicted. The biggest priority, then, is finding asteroids that may collide with the Earth in the future.
If an asteroid is found to be on a collision course only hours or days before impact, the Earth won't have many options. It's like a car suddenly pulling out in front of you. There is little that you can do. If, however, we find these asteroids years or decades before a potential collision, then we may be able to use spacecraft to nudge the asteroid enough to change its path so that it and the Earth don't collide.
This is, however, easier said than done, and currently, no one really knows how well an asteroid can be redirected. There have been several proposals for missions by NASA and the European Space Agency to do this, but so far, they have not passed early stages of mission development.
The B612 Foundation, a private nonprofit group, is also trying to privately raise money for a mission to redirect an asteroid, and they may be the first to attempt this if the government space programs don't. Pushing an asteroid sounds like an odd thing to do, but when we one day find an asteroid on a collision course with Earth, it may well be that knowledge that will save humanity.
This article was originally published on The Conversation. The views expressed are those of the author and do not necessarily reflect the views of the publisher. This version of the article was originally published on Space.com.
Crescent ♐ Sagittarius
Moon phase on 10 February 2083 Wednesday is Waning Crescent, 23 days old Moon is in Sagittarius.Share this page: twitter facebook linkedin
Previous main lunar phase is the Last Quarter before 1 day on 9 February 2083 at 16:39.
Moon rises after midnight to early morning and sets in the afternoon. It is visible in the early morning low to the east.
Moon is passing first ∠2° of ♐ Sagittarius tropical zodiac sector.
Lunar disc appears visually 1.2% wider than solar disc. Moon and Sun apparent angular diameters are ∠1968" and ∠1944".
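Those apparent sizes follow directly from physical radius and distance: angular diameter = 2·arctan(R/d). A quick consistency check, using the Earth-Moon distance quoted further down this page (364 141 km) and a standard mean lunar radius of 1737.4 km (an assumed textbook value, not given here):

```python
import math

ARCSEC_PER_RAD = 206264.8

def angular_diameter_arcsec(radius_km, distance_km):
    """Apparent angular diameter of a sphere, in arcseconds."""
    return 2.0 * math.atan(radius_km / distance_km) * ARCSEC_PER_RAD

moon = angular_diameter_arcsec(1737.4, 364141.0)
print(round(moon))                          # ~1968 arcsec, matching the figure above
pct_wider = (moon / 1944.0 - 1.0) * 100.0   # vs the Sun's quoted 1944" disc
print(round(pct_wider, 1))                  # ~1.2% wider
```

The computed disc size reproduces the quoted ∠1968" almost exactly, and the ratio against the Sun's ∠1944" disc recovers the stated 1.2% difference.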
Next Full Moon is the Worm Moon of March 2083 after 21 days on 4 March 2083 at 07:34.
There is low ocean tide on this date. Sun and Moon gravitational forces are not aligned, but meet at big angle, so their combined tidal force is weak.
The Moon is 23 days old. Earth's natural satellite is moving from the second to the final part of current synodic month. This is lunation 1027 of Meeus index or 1980 from Brown series.
Length of current 1027 lunation is 29 days, 14 hours and 25 minutes. It is 1 hour and 16 minutes shorter than next lunation 1028 length.
Length of current synodic month is 1 hour and 41 minutes longer than the mean length of synodic month, but it is still 5 hours and 22 minutes shorter, compared to 21st century longest.
This New Moon true anomaly is ∠55.2°. At beginning of next synodic month true anomaly will be ∠89.1°. The length of upcoming synodic months will keep increasing since the true anomaly gets closer to the value of New Moon at point of apogee (∠180°).
2 days after point of perigee on 8 February 2083 at 11:16 in ♏ Scorpio. The lunar orbit is getting wider, while the Moon is moving outward the Earth. It will keep this direction for the next 12 days, until it gets to the point of next apogee on 23 February 2083 at 07:16 in ♉ Taurus.
Moon is 364 141 km (226 267 mi) away from Earth on this date. The Moon moves farther over the next 12 days until apogee, when the Earth-Moon distance will reach 404 622 km (251 420 mi).
7 days after its descending node on 3 February 2083 at 00:47 in ♌ Leo, the Moon is following the southern part of its orbit for the next 5 days, until it will cross the ecliptic from South to North in ascending node on 15 February 2083 at 23:06 in ♒ Aquarius.
21 days after beginning of current draconic month in ♒ Aquarius, the Moon is moving from the second to the final part of it.
11 days after previous North standstill on 29 January 2083 at 19:57 in ♊ Gemini, when Moon has reached northern declination of ∠27.490°. Next day the lunar orbit moves southward to face South declination of ∠-27.496° in the next southern standstill on 11 February 2083 at 23:44 in ♐ Sagittarius.
After 6 days on 16 February 2083 at 18:15 in ♒ Aquarius, the Moon will be in New Moon geocentric conjunction with the Sun and this alignment forms next Sun-Moon-Earth syzygy.
When large stars many times more massive than the sun exhaust their nuclear fuel, they eventually collapse and produce a supernova, an explosion that can be observed across the cosmos. In many cases, the explosion will leave behind a neutron star, a collapsed stellar core that will have a mass larger than the sun with a radius of less than 10 miles across. These are the densest objects in the universe not found inside black holes! They can spin incredibly rapidly, rotating in fractions of a second, and when they have large magnetic fields can be seen as pulsars, beaming radio waves into space like a lighthouse.
When a neutron star spins rapidly, any small bumps on its surface will generate periodic gravitational waves (GWs). They are very weak compared with the waves from inspiralling and merging neutron stars and/or black holes, but because the signal is continuous, we may be able to observe GWs from spinning neutron stars in our own galaxy. Possible sources include pulsars, central compact objects in supernova remnants, and neutron stars in low-mass X-ray binaries (LMXBs). In an LMXB, a compact object accretes matter from a less-massive companion. The accretion can "spin up" the neutron star to the point where GWs are emitted in the sensitive frequency band of detectors like LIGO and Virgo. On the other hand, the binary orbit Doppler-shifts the signal from the neutron star, complicating the detection problem.
In the CCRG, we develop and apply methods to search for continuous GWs, especially those from the brightest LMXB, Scorpius X-1. The long observation times mean the search is very sensitive to the spin frequency and other parameters, and different approaches are needed depending on how well known those parameters are.
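A non-axisymmetric neutron star emits gravitational waves at twice its spin frequency, and the binary orbit modulates the observed frequency by roughly a factor of (1 ± v/c), where v is the line-of-sight orbital velocity. The sketch below uses illustrative numbers for a Sco X-1-like system; the spin frequency and orbital velocity are assumptions for demonstration, not measured values from this page:

```python
C = 2.99792458e8  # speed of light, m/s

def gw_frequency(spin_hz):
    """A rotating neutron star with a surface 'bump' emits GWs at twice its spin rate."""
    return 2.0 * spin_hz

def doppler_band(f_gw, v_orbital):
    """Range of observed frequencies as the star orbits its companion."""
    return f_gw * (1.0 - v_orbital / C), f_gw * (1.0 + v_orbital / C)

f_gw = gw_frequency(300.0)         # 600 Hz, in the sensitive band of LIGO/Virgo
lo, hi = doppler_band(f_gw, 40e3)  # ~40 km/s projected orbital velocity
print(f_gw, round(hi - lo, 3))     # the signal wanders over a band of ~0.16 Hz
```

It is this slow wander across many narrow frequency bins during a long observation, combined with uncertain spin and orbital parameters, that makes searches for accreting binaries far more computationally expensive than searches for isolated pulsars.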
A minor chemical difference between Earth and Moon rocks could have big implications for theories about how the Moon was born. Moon rocks contain a tiny bit more of the rare isotope oxygen-17 than do the rocks on Earth, say geochemists who measured oxygen using very precise methods.
“It changes the nature of the debate,” says Robin Canup, a planetary scientist at the Southwest Research Institute in Boulder, Colorado, who was not involved in the study. “If the difference between Earth and the Moon is a small amount as opposed to zero, we need to know that.”
Most researchers think that the Moon formed in the very early days of the Solar System, 4.5 billion years ago, when a large protoplanet smashed into the embryonic Earth. Debris from the collision mingled together and then settled into orbit around Earth, where it coalesced into the Moon. If that were the case, however, scientists would expect to see more of the remains of the original impactor in the Moon. The chemistry of Moon rocks would be different from that of Earth rocks.
“The big question was always, why do we not see this difference, why are Earth and the Moon so similar?” says Daniel Herwartz, an isotope geochemist at the University of Cologne in Germany and a member of the study team. The giant impact “is a nice theory that explains a lot of things, but there was this problem”.
Finding a difference
Herwartz and his colleagues decided to examine oxygen isotopes because planets and moons have a distinct oxygen fingerprint that records the exact environmental conditions in which they were born. The paper is published today in Science. Earlier studies found that the proportions of the different oxygen isotopes in Earth and the Moon — as averaged over their entire bulk — were essentially identical.
For the new study, the researchers used an extremely precise laser-based method to measure oxygen isotopes in a range of Earth rocks, meteorites and three lunar samples gathered by the Apollo astronauts. They found 12 parts per million more oxygen-17 in the Moon rocks than in the Earth rocks. "It's a tiny difference, that's why it hasn't been seen before," says Herwartz.
He suggests that the body that triggered the Moon-forming impact, which some scientists call Theia, may have been chemically similar to a class of meteorites called enstatite chondrites. Those are similar enough to Earth, at least in terms of oxygen, that Theia wouldn’t have left a major imprint in the Moon’s chemistry, Herwartz says.
Some scientists are not impressed. According to Robert Clayton, a professor emeritus at the University of Chicago in Illinois who pioneered the use of oxygen isotopes in cosmochemistry, the authors may have done little more than find a more precise method of measurement. “I don’t see anything new in this paper — they’ve just repackaged the error bars,” says Clayton. The observed difference is simply not large enough to say anything significant about the Moon's formation, he says.
Lydia Hallis, an isotope researcher at Glasgow University, UK, notes that oxygen-17 can vary among Moon rocks, and so three Apollo samples may not necessarily represent the Moon as a whole. She adds that researchers might want to look more closely at the isotopes of other elements. If oxygen isotopes are more different than previously thought, perhaps elements such as titanium and silicon, which in past analyses seemed to be identical in Earth and the Moon, also could have minute but noteworthy differences.
Canup, though, says that the oxygen findings are likely to shake up the field. Planetary modellers have been trying to develop collision scenarios in which the Moon and Earth ended up chemically similar, but not identical, after the Moon-forming impact. "That's the kind of debate I'm very happy to see," she says.
If you want to hear a little bit of the Big Bang, you're going to have to turn down your stereo.
That's what neighbors of MIT's Haystack Observatory found out. They were asked to make a little accommodation for science, and now the results are in: Scientists at Haystack have made the first radio detection of deuterium, an atom that is key to understanding the beginning of the universe. The findings are being reported in an article in the Sept. 1 issue of Astrophysical Journal Letters.
The team of scientists and engineers, led by Alan E.E. Rogers, made the detection using a radio telescope array designed and built at the MIT research facility in Westford, Mass. Rogers is currently a senior research scientist and associate director of the Haystack Observatory.
After gathering data for almost one year, a solid detection was obtained on May 30.
The detection of deuterium is of interest because the amount of deuterium can be related to the amount of dark matter in the universe, but accurate measurements have been elusive. Because of the way deuterium was created in the Big Bang, an accurate measurement of deuterium would allow scientists to set constraints on models of the Big Bang.
Also, an accurate measurement of deuterium would be an indicator of the density of cosmic baryons, and that density of baryons would indicate whether ordinary matter is dark and found in regions such as black holes, gas clouds or brown dwarfs, or is luminous and can be found in stars. This information helps scientists who are trying to understand the very beginning of our universe.
Until now the deuterium atom has been extremely difficult to detect with instruments on Earth. Emission from the deuterium atom is weak since it is not very abundant in space: there is approximately one deuterium atom for every 100,000 hydrogen atoms, so the distribution of deuterium is diffuse. Also, at optical wavelengths the hydrogen line is very close to the deuterium line, which makes it subject to confusion with hydrogen; but at radio wavelengths, deuterium is well separated from hydrogen and measurements can provide more consistent results.
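That radio separation is easy to quantify. The hydrogen hyperfine line sits at 1420.4 MHz (the famous 21 cm line), while the corresponding deuterium hyperfine line is commonly quoted near 327.4 MHz; treating those rest frequencies as assumed textbook values, a quick conversion shows the two lines sit more than a factor of four apart in wavelength:

```python
C = 2.99792458e8  # speed of light, m/s

def wavelength_cm(freq_mhz):
    """Convert a rest frequency in MHz to wavelength in centimeters."""
    return C / (freq_mhz * 1e6) * 100.0

h_line = wavelength_cm(1420.406)  # hydrogen hyperfine line, ~21 cm
d_line = wavelength_cm(327.384)   # deuterium hyperfine line, ~92 cm
print(round(h_line, 1), round(d_line, 1))
```

Contrast that with the optical regime, where the hydrogen and deuterium Lyman lines differ by only a few hundredths of a percent in wavelength, which is why confusion with hydrogen plagues optical measurements but not radio ones.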
In addition, our modern lifestyle, filled with gadgets that use radio waves, presented quite a challenge to the team trying to detect the weak deuterium radio signal. Radio frequency interference bombarded the site from cell phones, power lines, pagers, fluorescent lights, TV, and in one case from a telephone equipment cabinet where the doors had been left off. To locate the interference, a circle of yagi antennas was used to indicate the direction of spurious signals, and a systematic search for the RFI sources began.
At times, Rogers asked for help from Haystack's neighbors, and in several instances replaced a certain brand of answering machine that was sending out a radio signal with one that did not interfere with the experiment. The interference caused by one person's stereo system was solved by having a part on the sound card replaced by the factory.
The other members of the team working with Rogers are Kevin Dudevoir, Joe Carter, Brian Fanous and Eric Kratzenberg (all of Haystack Observatory) and Tom Bania of Boston University.
The Deuterium Array at Haystack is a soccer-field size installation conceived and built at the Haystack facility with support from the National Science Foundation, MIT and TruePosition Inc.
NASA’s space snowman is revealing fresh secrets from its home far beyond Pluto. More than a year after its close encounter with the snowman-shaped object, the New Horizons spacecraft is still sending back data from more than 4 billion miles (6.4 billion kilometers) away.
“The data rate is painfully slow from so far away,” said Will Grundy of Lowell Observatory in Flagstaff, Arizona, one of the lead authors.
Astronomers reported Thursday that this pristine, primordial cosmic body now called Arrokoth — the most distant object ever explored — is relatively smooth with far fewer craters than expected. It's also entirely ultrared, or very reddish, which is commonplace in the faraway Twilight Zone of our solar system known as the Kuiper Belt.
Grundy said in an email that to the human eye, Arrokoth would look less red and more dark brown, sort of like molasses. The reddish color is indicative of organic molecules.
While frozen methane is present, no water has yet been found on the body, which is an estimated 22 miles (36 kilometers) long tip to tip. At a news conference Thursday in Seattle, New Horizons’ chief scientist Alan Stern of Southwest Research Institute said its size was roughly that of the city.
As for the snowman shape, it's not nearly as flat on the backside as previously thought. Neither the small sphere nor the big one is fully round, but both are far from the flat pancake shapes scientists reported a year ago. The research team likened the somewhat flattened spherical forms to the shape of M&Ms.
No rings or satellites have been found. The light cratering suggests Arrokoth dates back to the formation of the solar system 4.5 billion years ago. It likely was created by a slow, gentle merger between two separate objects that possibly were an orbiting pair. The resulting fused body is considered a contact binary.
This kind of slow-motion hookup likely arose from collapsing clouds in the solar nebula, as opposed to intense collisions theorized to form these planetesimals, or little orbiting bodies.
New Horizons flew past Arrokoth on January 1, 2019, more than three years after the spacecraft visited Pluto. Originally nicknamed Ultima Thule, the object received its official name in November; Arrokoth means sky in the language of the Native American Powhatan people.
Launched in 2006, the spacecraft is now 316 million miles (509 million kilometers) beyond Arrokoth. The research team is looking for other potential targets to investigate. Powerful ground telescopes still under construction will help survey this part of the sky.
Emerging technology will enable scientists to develop a mission that could put a spacecraft in orbit around Pluto, 3 billion miles (5 billion kilometers) away, according to Stern. After a few years, that same spacecraft could be sent even deeper into the Kuiper Belt to check out other dwarf planets and objects, he said.
The New Horizons scientists reported their latest findings at the annual meeting of the American Association for the Advancement of Science, as well as in three separate papers in the journal Science.
David Jewitt of the University of California, Los Angeles, who was not involved in the studies, said a flyby mission like New Horizons, where encounters last just a few days, is hardly ideal.
“For future missions, we need to be able to send spacecraft to the Kuiper Belt and keep them there” in orbit around objects, Jewitt wrote in a companion piece in Science. That would allow “these intriguing bodies to be studied in stunning geological and geophysical detail,” he noted.
The Great Red Spot is the dark patch in the middle of this infrared image. It is dark due to the thick clouds that block thermal radiation. The yellow strip denotes the portion of the Great Red Spot used in astrophysicist Gordon L. Bjoraker's analysis. Credit: NASA's Goddard Space Flight Center/Gordon Bjoraker

For centuries, scientists have worked to understand the makeup of Jupiter. It's no wonder: this mysterious planet is the biggest one in our solar system by far, and chemically, the closest relative to the Sun. Understanding Jupiter is a key to learning more about how our solar system formed, and even about how other solar systems develop.
Is there water deep in Jupiter’s atmosphere, and if so, how much?
Gordon L. Bjoraker, an astrophysicist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, reported in a recent paper in the Astronomical Journal that he and his team have brought the Jovian research community closer to the answer.
By looking from ground-based telescopes at wavelengths sensitive to thermal radiation leaking from the depths of Jupiter’s persistent storm, the Great Red Spot, they detected the chemical signatures of water above the planet’s deepest clouds. The pressure of the water, the researchers concluded, combined with their measurements of another oxygen-bearing gas, carbon monoxide, imply that Jupiter has 2 to 9 times more oxygen than the sun.
This finding supports theoretical and computer-simulation models that have predicted abundant water (H2O) on Jupiter made of oxygen (O) tied up with molecular hydrogen (H2).
The revelation was stirring given that the team’s experiment could have easily failed. The Great Red Spot is full of dense clouds, which makes it hard for electromagnetic energy to escape and teach astronomers anything about the chemistry within.
“It turns out they’re not so thick that they block our ability to see deeply,” said Bjoraker. “That’s been a pleasant surprise.”
New spectroscopic technology and sheer curiosity gave the team a boost in peering deep inside Jupiter, which has an atmosphere thousands of miles deep, Bjoraker said:
“We thought, well, let’s just see what’s out there.”
The data Bjoraker and his team collected will supplement the information NASA’s Juno spacecraft is gathering as it circles the planet from north to south once every 53 days.
Among other things, Juno is looking for water with its own infrared spectrometer and with a microwave radiometer that can probe deeper than anyone has seen—to 100 bars, or 100 times the atmospheric pressure at Earth’s surface. (Altitude on Jupiter is measured in bars, which represent atmospheric pressure, since the planet does not have a surface, like Earth, from which to measure elevation.)
If Juno returns similar water findings, thereby backing Bjoraker’s ground-based technique, it could open a new window into solving the water problem, said Goddard’s Amy Simon, a planetary atmospheres expert. “If it works, then maybe we can apply it elsewhere, like Saturn, Uranus or Neptune, where we don’t have a Juno,” she said. Juno is the latest spacecraft tasked with finding water, likely in gas form, on this giant gaseous planet.
This animation takes the viewer on a simulated flight into, and then out of, Jupiter's upper atmosphere at the location of the Great Red Spot. It was created by combining an image from the JunoCam imager on NASA's Juno spacecraft with a computer-generated animation. The perspective begins about 2,000 miles (3,000 kilometers) above the cloud tops of the planet's southern hemisphere. The bar at far left indicates altitude during the quick descent; a second gauge next to that depicts the dramatic increase in temperature that occurs as the perspective dives deeper down. The clouds turn crimson as the perspective passes through the Great Red Spot. Finally, the view ascends out of the spot. Credit: NASA/JPL